A:
With elections approaching this November and midterms next year, disinformation is no longer just background noise in our digital age. It has become a weapon, increasingly aimed at Black, Latino, Indigenous, and other marginalized communities. These groups are disproportionately targeted by campaigns designed to discourage civic participation and exploit racial and socioeconomic vulnerabilities.
Studies have shown that communities of color are asymmetrically targeted by disinformation, particularly efforts that aim to suppress voter engagement. Raising awareness of these tactics now is crucial to preparing communities to recognize and resist efforts that undermine participation and trust in the democratic process.

In the past, voter suppression took obvious forms: secretly relocated polling places, poll taxes, or literacy tests. Today, these strategies are far more insidious, hidden behind screens, embedded in algorithmic code, and customized to fit your social media feed. Rather than blocking votes outright, campaigns rely on confusion and exhaustion. By flooding users with conflicting or misleading content, they count on information overload to push people to disengage altogether.
Thanks to big data, digital voter suppression has become highly technical. Internet campaigns now employ microtargeting and psychological manipulation to shape behavior. Actors can reach individuals based on inferred traits such as race, ZIP code, or interests. In recent elections, disinformation specifically targeted communities of color on platforms like Facebook and WhatsApp with false voting instructions, according to the Brookings Institution. Narrative tailoring sharpens the divide: one audience might see “your vote doesn’t matter,” while another is told their group already holds too much power. Both are designed to sow apathy and resentment. Then emotional manipulation reinforces the message, with posts crafted to provoke fear, anger, or indifference, draining civic energy.

In Michigan, two right-wing operatives used robocalls to target thousands of Black voters with false information about voting by mail. The calls claimed that if recipients voted by mail, their personal information would be used by police to follow up on old arrest warrants or by credit card companies to collect outstanding debts. The two conservative activists behind the robocalls have been convicted of election law felonies and will be sentenced later this year.
The emergence of AI and deepfake technology has raised the stakes even higher. During the 2024 New Hampshire primary, robocalls impersonating President Joe Biden urged voters to skip the primary and save their vote for the general election.
That was the first known instance of AI voice-cloning being used to deter people from voting, but it certainly won’t be the last, because AI-generated content is incredibly inexpensive and easy to produce. The political operative behind the fake Biden robocall said it cost only $1 and took less than 20 minutes to put together, illustrating how easy it is to blur truth and fiction.

Tactics like these amplify existing vulnerabilities. Marginalized communities are particularly susceptible because several forces are converging to limit their access to reliable information. A legacy of exclusion from media power and representation has left many skeptical of official sources. At the same time, the decline of local news outlets has created “news deserts,” where algorithm-driven content from questionable sources, sometimes including “pink slime” sites that masquerade as local news, fills the void. Disinformation can also spread especially quickly within these communities through social media.
This kind of disinformation is not just a local problem. Global actors are increasingly injecting false content into the American information ecosystem through AI-generated videos, fake images, and fabricated threats.
In the 2024 election, hoax bomb threats targeted polling places in several battleground states, including Georgia, Michigan, Arizona, and Wisconsin, with many traced to Russian domains, according to the FBI. Fulton County, Georgia—a majority-Black area that includes Atlanta—received 32 threats, five of which temporarily closed polling stations. None were credible, but the disruptions highlight a broader pattern: foreign actors exploiting fear and confusion to suppress Black voter turnout.

Because disinformation thrives on passivity, active defense matters. Experts suggest practicing the SIFT method (Stop, Investigate, Find, and Trace), which provides a clear, quick way to protect yourself and your community from online manipulation:

- (S)top before you share: Pause and ask who created this content and why.
- (I)nvestigate the source: Do a quick search to see whether the author or organization is credible.
- (F)ind better coverage: Check if reputable outlets are reporting the same thing.
- (T)race it back: Look for the original photo, video, or quote to see if it’s being misused.
Additionally, communities should talk openly about digital threats. Media literacy should become a community conversation, because sharing accurate information helps restore trust.
There’s no time to waste. Digital disinformation doesn’t pause between elections; it continues to warp how citizens see their rights and power. And for disadvantaged groups, the harm is cumulative: not because votes are stolen, but because when trust in the system is lost, disengagement often follows.
Bottom line: When disinformation suppresses turnout, it doesn’t just distort news. It distorts democracy.