Valtik Studios
Social Media · critical · 2026-04-16 · 20 min read

Fake Americans, Real Influence: Inside State-Sponsored Propaganda

Russia's IRA reached 126 million Americans. China's GoLaxy leak revealed 3,692 AI personas targeting US officials. A threat intelligence investigation into foreign state propaganda operations and defensive opsec.

What this is actually about

Foreign influence operations aren't an abstraction. They're staffed organizations with budgets, org charts, KPIs, and documented case histories. The accounts have names. The campaigns have dates. The money trails have been subpoenaed. This post walks through the specific operations, the specific personas, and the specific numbers that show how state actors manufacture American public opinion.

If you want to understand why your feed looks the way it does, the answer starts here.

Russia: Internet Research Agency, $1.25M per month

The Internet Research Agency (IRA) operated out of 55 Savushkina Street in St. Petersburg from roughly 2013 to 2023. At its peak it employed over 1,000 people in 12-hour shifts. The February 2018 Mueller indictment detailed the operation's finances: an annual budget of approximately $12.2 million, a monthly budget that exceeded $1.25 million by September 2016, and total 2015-2018 spending of around $16 million.

Target: American voters. Method: fake social media personas built to be indistinguishable from real Americans.

@Jenna_Abrams was a fabricated persona of a young American woman. 70,000+ followers. Cited in Variety, BBC, USA Today, and The Washington Post. Michael Flynn, Donald Trump Jr., and Roseanne Barr all followed her. Posts ranged from segregation apologia to ironic takes on American politics. The entire identity was an IRA invention.

@TEN_GOP masqueraded as the Tennessee Republican Party. 150,000 followers. 10,794 tweets. 6 million+ retweets. The fake account had vastly more engagement than the real Tennessee GOP account. Retweeted by Donald Trump Jr., Kellyanne Conway, Sebastian Gorka, and members of Congress.

Blacktivist impersonated a Black Lives Matter account. 360,000 Facebook likes (more than the real BLM Facebook page). 11.2 million engagements. Pushed voter suppression messaging targeting Black voters in swing states. Organized real-world protests that actual Americans attended.

Heart of Texas impersonated Texas secessionists. More followers than the official Texas Democratic Party and Texas Republican Party combined. On May 21, 2016, the account organized a "Stop Islamification of Texas" rally at the Islamic Da'wah Center in Houston. The IRA simultaneously organized a counter-protest at the same location through a different fake account, "United Muslims of America," pulling real Americans into a staged confrontation with no Russian visible at either protest.

Reach: Facebook's own disclosure said 126 million Americans were exposed to IRA content on Facebook alone between 2015 and 2017.

After IRA: Storm-1516 and Doppelganger

Prigozhin's death in 2023 didn't end Russian influence operations. It restructured them. Two successor operations, the GRU-linked Storm-1516 and the Doppelganger network, took over large portions of the infrastructure.

Doppelganger's signature method: clone legitimate news sites (BBC, Fox News, Der Spiegel, Le Monde, The Washington Post) at typosquatted domains that look identical. Publish fabricated articles on the clones. Amplify the clones through paid ads and networks of fake accounts until a credulous real account picks up the fabricated story and, eventually, real media repeats it.

Recent campaigns: the 2024 Moldovan election, German elections, Romania's 2024 presidential election (whose first round was annulled after the scale of Russian interference became clear), and the US 2024 election cycle. Tactics include deepfake videos of candidates making fabricated statements and fake "leaked" documents attributed to Western officials.

Secondary Infektion: the quota-filling operation

From 2014 to 2020, a separate Russian operation now called Secondary Infektion produced 2,500+ pieces of content in 7 languages across 300+ platforms. Graphika's 2020 report documented the scope: at least 250 forged documents including fake letters attributed to Mike Pompeo and multiple US senators, fake OPCW letters about the Skripal poisoning, fake NATO planning documents.

The failure mode is what's interesting: almost none of it worked. Secondary Infektion operators looked like they were fulfilling quotas rather than executing a strategy. They posted once to obscure forums and left, rarely defended their content, and almost never got picked up by real media. The lesson: sheer volume of disinformation doesn't equal impact. Campaigns that went viral (IRA, Storm-1516) invested heavily in persona development and distribution infrastructure.

China: 448 million posts per year

Harvard's 2017 study — still the most rigorous measurement of Chinese state social media activity — estimated that the Chinese government fabricates approximately 448 million social media posts per year on domestic platforms. That figure covers only internal production (Weibo, WeChat, Douyin) and only what Gary King's team could statistically infer. The external-facing operations are in addition.

Spamouflage is Meta's name for the largest known cross-platform covert Chinese influence operation, active since at least 2017. Meta's takedowns:

  • 7,704 Facebook accounts
  • 954 Facebook pages
  • 15 Facebook groups
  • 15 Instagram accounts

Linked to Chinese law enforcement. Meta's assessment in its 2023 report: "Spamouflage has consistently struggled to reach beyond its own fake echo chamber." High volume, low conversion. Until suddenly it isn't. The 2024 US election cycle showed Spamouflage accounts gaining genuine engagement on TikTok for the first time.

GoLaxy: the leaked playbook

August 2024. A disgruntled employee of Chinese state contractor GoLaxy leaked 399 pages of internal documents describing what the company called the "Smart Propaganda System." The leak is the most detailed public picture of an industrial-scale state propaganda operation ever obtained.

Targets in the leak:

  • 170 Taiwan political figures (detailed dossiers)
  • 23 million household records (Taiwan)
  • All 117th United States Congress members
  • 2,000 American public figure dossiers
  • 5,000+ journalists profiled

Infrastructure:

  • 3,692 virtual personas across Facebook, X, LinkedIn, TikTok, Reddit, and Medium
  • AI-generated faces, AI-generated posting histories, AI-generated engagement patterns
  • Customized behavior per platform to avoid detection heuristics

Customers:

  • Cyberspace Administration of China (CAC)
  • People's Liberation Army Unit 61716
  • Central Military Commission

State of play at time of leak: many of the 3,692 personas were still active on the platforms, and some remained active weeks after the leak was published.

Iran: AI-generated personas at scale

March 2026. Meta and X removed approximately 300 Iranian-operated accounts with a combined ~41,000 followers. The twist: these weren't low-effort copypasta accounts. Each persona came with an AI-generated photograph (diffusion-model output that fools most human reviewers), a fabricated multi-year posting history, and a plausible professional biography.

Cover identities used:

  • Political scientists at fictional institutions
  • Women's rights activists
  • Satirical cartoonists
  • Journalists and foreign correspondents

June 2025 Iran-Israel war: Iranian accounts produced and disseminated 110+ unique deepfake videos during the 12-day conflict. AI-generated images of downed F-35s were broadcast on Iranian state television. Videos of Israeli casualties that never happened circulated widely before anyone debunked them.

AI-generated propaganda has gone mainstream

  • 500,000 deepfake videos were shared on social media in 2023. Projections put the 2025 figure at 8 million. Not all are propaganda, but the infrastructure for propaganda is the same as the infrastructure for entertainment.
  • October 2025, Ireland: a deepfake video of a presidential candidate announcing her withdrawal circulated on TikTok and X. 30,000 views in 12 hours before being removed. The candidate did not withdraw.
  • NewsGuard tracking: 2,089+ AI-generated fake news sites across 16 languages as of early 2026. Many are SEO-optimized for specific US local markets (fake "Tallahassee Tribune," fake "Phoenix Post"). Most make money through ad arbitrage. Some are operated by state actors for influence purposes.

The defense dismantling itself

The American institutions that used to track foreign influence operations spent the 2024-2025 period getting destroyed by domestic political pressure.

Stanford Internet Observatory was the single most authoritative academic source on platform manipulation. Produced major studies on IRA, Spamouflage, anti-vaccine networks, and election interference. Dismantled during 2024:

  • House Judiciary Committee (Jim Jordan) issued subpoenas and held hostile hearings
  • America First Legal (Stephen Miller) filed lawsuits
  • Stanford paid millions in legal fees defending against discovery demands
  • Alex Stamos left in November 2023
  • Renee DiResta's contract was not renewed in June 2024
  • SIO announced it would not research the 2024 election or future elections

Election Integrity Partnership (SIO, University of Washington, Graphika, and Atlantic Council DFRLab) dissolved under similar pressure.

CISA's mis/disinformation team was effectively neutralized after congressional hearings characterized any cooperation between government and platforms as censorship.

Net effect: the tools built to detect foreign influence operations were largely retired heading into the 2026 cycle. The foreign operations themselves scaled up with better AI.

X/Twitter removed the labels and the algorithm followed

April 20-21, 2023 — X removed "state-affiliated media" labels from accounts like @RT_com, @Xinhua, @PressTV, @RTArabic, and dozens of others. The labels had previously reduced algorithmic amplification.

Measurable effect in the 90 days following removal:

  • Engagement on Russian, Chinese, and Iranian state media rose 70% relative to the preceding 90 days
  • Aggregate likes and reposts across the labeled accounts jumped from 2.93M to 4.98M
  • @RTArabic (5.2M followers) began a sustained engagement rise after a year of declining metrics

X's Community Notes system, the replacement for pre-publication moderation, consistently underperforms at the scale of viral propaganda. A state account with a 10M-impression deepfake gets a Community Note three days later, after the video has been reposted by 50,000 real users.

The playbook, extracted

Across Russia, China, and Iran, the operational pattern is consistent:

  1. Persona development. Invest heavily in fake accounts that look indistinguishable from real users. Age them on the platform before activation. Post on non-political topics first.
  2. Infrastructure. Hundreds to thousands of accounts per operation. Redundancy so takedowns do not neutralize the campaign.
  3. Narrative targeting. Identify existing American divisions (race, policing, immigration, abortion, guns). Amplify both sides. The goal isn't to push one position. It's to delegitimize consensus.
  4. Cross-platform laundering. Seed content on small platforms (Reddit, Telegram, niche forums). The laundered content migrates to X and Facebook with the original source obscured.
  5. Real-world events. Organize protests and counter-protests. Rent venues. Print signs. Use real Americans as unwitting participants.
  6. AI augmentation. Since 2023, use LLMs and diffusion models to scale content production by roughly 100x at flat cost.

Why this matters for your threat model

If you run a business with public visibility, you aren't the primary target. But your brand can still be swept into reputation attacks amplified by state-aligned accounts. Executives, political candidates, journalists, researchers, and dissidents are direct targets.

Detection signals that still work:

  • Cross-account posting pattern correlation (shared timestamps, similar phrasing, geographic clustering)
  • Reverse image search on profile photos (many AI generators leave statistical fingerprints)
  • Account age vs. follower count anomalies
  • Sudden topic pivots coinciding with geopolitical events
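The first signal on that list — cross-account posting pattern correlation — can be sketched in a few lines. Accounts run from a single scheduler tend to share exact posting minutes far more than organic accounts do. A minimal sketch, assuming you already have per-account posting timestamps; the one-minute bucket and the 0.5 threshold are illustrative assumptions, not tuned values:

```python
from itertools import combinations

def minute_buckets(timestamps: list[int]) -> set[int]:
    """Collapse epoch-second timestamps into one-minute buckets."""
    return {t // 60 for t in timestamps}

def posting_overlap(a: list[int], b: list[int]) -> float:
    """Jaccard similarity of two accounts' posting minutes.

    Organic accounts rarely share many exact posting minutes;
    scripted networks driven by one scheduler often do.
    """
    sa, sb = minute_buckets(a), minute_buckets(b)
    if not sa or not sb:
        return 0.0
    return len(sa & sb) / len(sa | sb)

def flag_coordinated(accounts: dict[str, list[int]],
                     threshold: float = 0.5) -> list[tuple[str, str]]:
    """Return account pairs whose posting schedules overlap suspiciously."""
    return [(x, y) for x, y in combinations(accounts, 2)
            if posting_overlap(accounts[x], accounts[y]) >= threshold]
```

Two troll accounts posting within seconds of each other, hour after hour, score near 1.0; an unrelated account scores near zero. Production systems layer in phrasing similarity and account-age anomalies on top of the timing signal.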

What individuals can do:

  • Assume any politically inflammatory viral content is potentially manufactured until verified
  • Check account creation dates on viral posts
  • Do not retweet / share before reading
  • Cross-reference with wire services before believing a breaking political story

The bigger takeaway: the infrastructure for manufacturing American political opinion now runs on American platforms, with American users amplifying content paid for by foreign governments. The disclosure and moderation systems built after 2016 have been partially dismantled. The operations themselves have scaled up.

Sources

  1. [Mueller Indictment of Internet Research Agency (Feb 2018)](https://www.justice.gov/file/1035477/download)
  2. [The IRA, Social Media, and Political Polarization in the United States — Senate Select Committee on Intelligence](https://www.intelligence.senate.gov/sites/default/files/documents/Report_Volume2.pdf)
  3. [Secondary Infektion Report — Graphika (2020)](https://secondaryinfektion.org/)
  4. [Spamouflage Adversarial Threat Report — Meta Q3 2023](https://about.fb.com/news/2023/11/coordinated-inauthentic-behavior-report-q3-2023/)
  5. [How the Chinese Government Fabricates Social Media Posts — King, Pan, Roberts (Harvard)](https://gking.harvard.edu/50c)
  6. [GoLaxy Leaks — Investigative Coverage (2024)](https://www.nytimes.com/2024/08/07/world/asia/china-golaxy-propaganda.html)
  7. [Meta Adversarial Threat Reports — Coordinated Inauthentic Behavior archive](https://about.fb.com/news/tag/coordinated-inauthentic-behavior/)
  8. [Stanford Internet Observatory — Election Integrity Partnership Reports](https://www.eipartnership.net/)
  9. [State-Affiliated Media Label Removal Engagement Analysis — NBC / ISD (2023)](https://www.isdglobal.org/isd-publications/)
  10. [NewsGuard AI Tracking Center](https://www.newsguardtech.com/special-reports/ai-tracking-center/)
Tags: propaganda, disinformation, threat intelligence, IRA, China, Russia, opsec, research

Want us to check your Social Media setup?

Our scanner detects issues like these, plus dozens more across 38 platforms. Free website check available, no commitment required.