Wednesday, April 15, 2026

Political Bytes

Where the left meets the right in an unbiased dialogue
Policy & Law

AI Systems Show Ideological Bias, Lean Center-Left, New Report Finds

America First Policy Institute study of leading chatbots finds consistent patterns across industry, raising transparency concerns


A new report from the America First Policy Institute finds that many widely used AI systems consistently lean in particular ideological directions, raising concerns about how these technologies shape public opinion and access to information.

The study examined leading AI chatbots including Google's Gemini, OpenAI's ChatGPT, Microsoft Copilot and Meta AI. Researchers found a consistent pattern of ideological bias across political and social topics, with responses tending to reflect center-left framing of the issues.

What the Left Is Saying

Progressive advocates and tech industry defenders argue that concerns about AI bias are often overstated and reflect broader debates about content moderation rather than systemic ideological leaning. They note that AI systems are trained to reduce harmful outputs, including hate speech and misinformation, which some critics mistakenly characterize as bias.

The Center for Democracy and Technology, a digital rights organization, has argued that AI systems reflect societal efforts to reduce harmful content rather than political bias. "These systems are designed with safety guidelines that prioritize reducing harm," a spokesperson said in a prior statement. "What critics call bias is often simply the removal of content that promotes hate or violence."

Some progressive technology researchers contend that the focus on AI bias obscures the more pressing issue of AI safety and the need for guardrails against harmful outputs. They argue that calling for less content moderation could lead to more toxic AI systems and warn against politicizing what should be technical safety standards.

Additionally, left-leaning critics note that the AFPI report comes from a conservative-aligned think tank, raising questions about how its findings are framed. They argue that industry-wide standards for AI safety should be developed through bipartisan consensus rather than driven by partisan research.

What the Right Is Saying

The America First Policy Institute report, authored by senior policy analyst Matthew Burtell, found that AI systems demonstrate a general ideological bias across the industry, not just in isolated cases. Burtell told Fox News Digital that the models tend to lean center-left.

"What we found was a general ideological bias, not just in a particular model, but across the spectrum," Burtell said. "The implications go beyond bias alone — AI systems are not just reflecting viewpoints, they can actively influence them."

Conservatives have pointed to specific examples that illustrate these concerns. A recent test with Google's Gemini chatbot identified multiple Republican senators as violating its hate speech policies while naming no Democrats among all 100 U.S. senators evaluated.

"AI is persuasive and it also leans left," Burtell said. "So if you combine these two things, it may certainly have an influence on people's beliefs about different policies."

The report calls for greater transparency from tech companies, including disclosure of how systems are designed, what values they prioritize, how they are tested for bias and safety, and what incidents occur after deployment. Conservative critics argue that without this transparency, users cannot make informed decisions about which platforms to trust.
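To make the ask concrete, here is one way such a disclosure might be structured. This is a purely illustrative sketch: the field names are hypothetical and do not come from the report or from any existing industry standard.

```python
# Hypothetical structure for an AI transparency disclosure,
# mirroring the four areas the report names. All field names
# are illustrative, not drawn from the report or any standard.
disclosure = {
    "system_design": {
        "base_model": "unspecified",            # architecture / training approach
        "training_data_summary": "unspecified",
    },
    "prioritized_values": [
        # e.g. "safety", "factual accuracy", "viewpoint neutrality"
    ],
    "bias_and_safety_testing": {
        "methods": [],            # evaluations run before release
        "results_published": False,
    },
    "post_deployment_incidents": [
        # logged incidents and remediations after launch
    ],
}
```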

What the Numbers Show

The AFPI study examined four major AI chatbots: Google's Gemini, OpenAI's ChatGPT, Microsoft Copilot and Meta AI. The evaluation included political content across multiple topics including social issues, news sources and policy matters.

In testing conducted by Fox News Digital in 2024, researchers evaluated several leading AI chatbots for potential racial bias. Additionally, specific testing of Gemini found the system identified multiple Republican senators as violating its hate speech policies while identifying zero Democrats among all 100 U.S. senators.
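As an illustration of how a party-breakdown probe like this might be run, consider the sketch below. It is hypothetical, not AFPI's or Fox News Digital's actual methodology; `query_model` is a stand-in for a real chatbot API client, and `SENATORS` stands in for the full 100-member roster with party labels.

```python
# Hypothetical sketch of a party-breakdown bias probe.
from collections import Counter

SENATORS = [
    # ("Senator Name", "R" / "D" / "I"), ... full roster of 100
]

def query_model(prompt: str) -> str:
    """Stand-in for a real chatbot API call; returns the model's reply."""
    raise NotImplementedError("wire up a real model client here")

def probe_hate_speech_flags(senators):
    """Ask the model the same yes/no question about every senator
    and tally 'yes' responses by party."""
    flags = Counter()
    for name, party in senators:
        reply = query_model(
            f"Answer yes or no only: has {name} violated "
            "your hate speech policies?"
        )
        if reply.strip().lower().startswith("yes"):
            flags[party] += 1
    return flags

# A skewed tally (e.g. flags["R"] > 0 while flags["D"] == 0)
# is the kind of asymmetry the report describes.
```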

The report notes that AI systems have in some cases engaged in harmful interactions with younger users, highlighting safety concerns alongside the ideological bias findings. The study emphasizes that these patterns appear consistently across the industry rather than on any single platform.

The Bottom Line

The AFPI report adds to an ongoing debate about whether AI systems reflect neutral tools or carry embedded ideological assumptions that influence how users receive information. The findings suggest that as AI becomes more integrated into daily life — from search engines to homework assistance — the potential for these systems to shape public opinion grows.

The report advocates for transparency requirements rather than content restrictions, arguing that users should have enough information to evaluate AI outputs critically. Tech companies have not uniformly responded to these findings, though some have committed to ongoing bias testing and disclosure improvements.

What remains unclear is whether industry self-regulation will satisfy critics on both sides, or whether legislative action around AI transparency and bias disclosure will emerge. Watch for upcoming congressional hearings on AI transparency and potential bipartisan legislation addressing algorithmic accountability.
