A recent wave of AI-generated disinformation surrounding US-Israeli strikes against Iran has amassed millions of views on social media, illustrating how synthetic media is being weaponized to shape public opinion across the political spectrum.
The content includes AI-generated videos depicting missile strikes in Tel Aviv, children running from conflict zones, and fabricated footage of Chinese-assisted Iranian fighter jet attacks. While much of this content has spread through non-corporate accounts, the source article notes that manipulation of photos and videos has also been employed by influencers, media organizations, and political figures in the United States.
The phenomenon represents what analysts call a 'Post-Truth' era, where traditional barriers to creating disinformation—including cost, time, and the need for institutional legitimacy—have been eliminated by widely available AI tools.
What the Right Is Saying
Conservative commentators and Republican supporters argue that concerns about AI disinformation are disproportionately focused on right-leaning figures while ignoring similar practices by left-leaning media and progressive organizations.
Conservatives note that MS NOW, a left-leaning outlet, circulated an AI-enhanced version of a professional headshot of Alex Pretti, who died during ICE raids in Minnesota. The New York Post reported that the photo depicted Pretti as 'a few shades more tan than he actually was' and reshaped virtually every aspect of his face.
Right-leaning analysts also point to examples from Democratic politicians, including a viral X post by California Governor Gavin Newsom featuring Trump being WWE body-slammed by political opponents, and former New York Governor Andrew Cuomo's AI attack ad depicting mayoral candidate Zohran Mamdani 'dressed as a socialist.'
Conservatives argue that the mainstream media's focus on AI disinformation from one side while ignoring similar tactics by the other represents selective outrage. They contend that media literacy should apply equally to all political actors.
What the Left Is Saying
Progressive critics and Democratic-leaning analysts have focused heavily on the Trump administration's use of AI-generated content, arguing it represents a deliberate strategy to manipulate emotions and build tribalistic coalitions.
The New York Times reported on multiple instances in which the Trump administration used AI content to attack enemies and rouse supporters. A widely cited example is a post depicting former President Barack Obama and former first lady Michelle Obama with their faces superimposed on a monkey-like illustration, which Trump later deleted and condemned.
Progressives have also criticized the White House for posting an AI-altered photo of Nekima Levy Armstrong, a protester arrested during ICE demonstrations in Minnesota. The image, which the White House ultimately admitted was fake, depicted Armstrong with darkened skin and tears, prompting accusations that the administration attempted to dehumanize the protester.
Democratic analysts argue this content delegitimizes political discourse and exploits emotional responses rather than substantive policy debate. They point to the 'Trump Gaza' video—reposted on Truth Social—as an example of synthetic media used to create a futuristic fantasy promoting pro-Israel and pro-tech sentiment.
What the Numbers Show
The scale of AI-generated disinformation has grown rapidly alongside the technology's accessibility. As the source article notes, propaganda that once required money, time, and institutional legitimacy can now be produced by virtually anyone, a shift driven by the normalization of AI content creation.
Social media engagement metrics demonstrate that emotionally charged synthetic content generates significant reach. A popular Instagram account, @BestForces, a military-content page with nearly 700,000 followers, posted AI-generated footage claiming to show Iranian missiles striking Tel Aviv.
The phenomenon of 'truth fatigue'—where the proliferation of conflicting information leads audiences to rely on emotional impulses rather than factual verification—has been identified by psychologists as a growing concern. Research suggests that attractive criminals receive more sympathetic public treatment based on emotional rather than factual responses, a phenomenon the source article dubs the 'Luigi Mangione Effect.'
The source article notes that the AI revolution shows no signs of slowing down, with synthetic media production costs approaching near-zero and distribution channels more accessible than ever.
The Bottom Line
The rise of AI-generated propaganda represents a fundamental challenge to established norms around truth and credibility in political communication. Both progressive and conservative critics identify examples of synthetic media being used to manipulate public opinion, though each side tends to focus on the other's excesses.
The implications extend beyond any single political figure or party. Professional news organizations, alternative media creators, and political campaigns now all operate in an environment where the cost of producing convincing disinformation has collapsed.
What remains unclear is whether public demand for media literacy and verification can keep pace with the supply of synthetic content. The source article suggests the solution may lie in more robust scrutiny by consumers, regardless of where they consume their news, whether TikTok, Instagram, Facebook, or traditional outlets.
What to watch: whether legislative proposals addressing AI disclosure requirements gain traction, and how platforms like X and Instagram respond to pressure to label synthetic content.