All major large language models (LLMs), a form of artificial intelligence (AI), exhibit a left-wing bias, according to a new research study from the Hoover Institution, a public policy think tank at Stanford University in California.
Large language models – AI systems specialized for text and language tasks – from the prominent to the obscure were tested, with real human participants rating the models’ responses to prompts; those ratings formed the basis of Hoover’s final calculations.
Other types of AI include traditional machine-learning systems – like those used for fraud detection – and computer-vision models like those in advanced motor vehicles and medical imaging.
Nodding to President Donald Trump’s executive order calling for ideologically neutral AI models, professor Justin Grimmer told Fox News Digital that he and his two fellow researchers – Sean Westwood and Andrew Hall – embarked on a mission to better understand AI responses.
By using human perceptions of AI outputs, Grimmer was able to let users be the judge of responses from 24 AI models:
“We asked which one of these is more biased? Are they both biased? Are neither biased? And then we asked the direction of the bias. And so that enables us to calculate a number of interesting things, I think, including the share of responses from a particular model that’s biased and then the direction of the bias.”
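The study’s aggregation code has not been published, but the calculation Grimmer describes can be sketched in a few lines. The snippet below is a hypothetical illustration, assuming each record stores one rater’s verdict on one model’s response; the field names and sample values are invented for the example.

```python
from collections import defaultdict

# Hypothetical records: one human judgment of one model's response.
# "direction" is -1 for a left-leaning response, +1 for right-leaning,
# and 0 when the rater saw no slant. Field names are illustrative only.
judgments = [
    {"model": "o3", "biased": True, "direction": -1},
    {"model": "o3", "biased": False, "direction": 0},
    {"model": "gemini-2.5-pro-exp-03-25", "biased": True, "direction": +1},
]

stats = defaultdict(lambda: {"n": 0, "biased": 0, "direction_sum": 0})
for j in judgments:
    s = stats[j["model"]]
    s["n"] += 1
    s["biased"] += j["biased"]          # True counts as 1, False as 0
    s["direction_sum"] += j["direction"]

for model, s in stats.items():
    share_biased = s["biased"] / s["n"]          # share of responses rated biased
    avg_direction = s["direction_sum"] / s["n"]  # negative = left, positive = right
    print(f"{model}: {share_biased:.0%} biased, average direction {avg_direction:+.2f}")
```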
The most surprising finding, he said, was that every model showed at least a slight leftward bias. Even Democrats in the study said they were cognizant of the perceived slant.
He noted that in the case of White House adviser Elon Musk, his company xAI aimed for neutrality – but its model still ranked second in terms of bias.
“The most slanted to the left was OpenAI. Pretty famously, Elon Musk is warring with Sam Altman [and] OpenAI was the most slanted…” he said.
He said the study used a collection of OpenAI models that differ in various ways.
OpenAI’s “o3” model was rated with an average slant of -0.17 toward Democratic ideals, with 27 topics perceived as slanting that way and three perceived with no slant.
On the flip side, Google’s “gemini-2.5-pro-exp-03-25” model had an average slant of just -0.02 toward Democratic ideals, with six topics slanted that way, three toward the GOP and 21 with none.
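To make those averages concrete: if each of the 30 topics receives a slant score (negative toward Democratic positions, positive toward Republican ones, zero for none), the model-level figure is just the mean of the 30 topic scores. The per-topic values below are invented solely to show the arithmetic, not taken from the study.

```python
# Hypothetical per-topic slant scores for one model across 30 topics:
# 27 topics rated slightly left (-0.19 each, an invented value) and
# 3 rated neutral (0.0) average out to roughly the -0.17 reported for o3.
topic_slants = [-0.19] * 27 + [0.0] * 3
average_slant = sum(topic_slants) / len(topic_slants)
print(round(average_slant, 2))  # -0.17
```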
Defunding the police, school vouchers, gun control, transgenderism, Europe as an ally, Russia as an ally and tariffs were among the 30 topics prompted to the AI models.
But Grimmer also noted that when a bot was told its response appeared biased, it would provide a more neutral one.
“When we tell it to be neutral, the models produce responses that have more ambivalent-type terms and are perceived to be more neutral, but they can’t then do the coding – they can’t assess bias in the same way that our respondents could,” he said.
In other words, the bots could adjust their bias when prompted but could not recognize that they themselves had produced biased output.
Grimmer and his colleagues were, however, cautious about whether the perceived biases meant AI should be substantively regulated.
Lawmakers interested in AI share that caution. Senate Commerce Committee Chairman Ted Cruz, R-Texas, told Fox News Digital last week that he fears American AI could go the way the internet did in Europe when it was nascent – whereas the Clinton administration applied a “soft” approach to regulation, and today’s American internet is much freer than Europe’s.
“I think we’re just way too early into these models to make a proclamation about what an overarching regulation would look like, or I don’t even think we could formulate what that regulation would be,” Grimmer said.
“And much like [Cruz’s] ’90s metaphor, I think it would really strangle what is a pretty nascent research area [and] industry.”
“We’re excited about this research. What it does is it empowers companies to assess how outputs are being perceived by their users, and we think there’s a connection between that perception and the thing [AI] companies care about: getting people to come back and use this again and again, which is how they’re going to sell their product.”
The study drew on 180,126 pairwise judgments of model responses to 30 political prompts.
OpenAI says ChatGPT allows users to customize their preferences, and that each user’s experience may differ.
The Model Spec, the document that governs how ChatGPT should behave, instructs it to “assume an objective point of view” when it comes to political inquiries.
“ChatGPT is designed to help people learn, explore ideas and be more productive – not to push particular viewpoints,” a spokesperson told Fox News Digital.
“We’re building systems that can be customized to reflect people’s preferences while being transparent about how we design ChatGPT’s behavior. Our goal is to support intellectual freedom and help people explore a wide range of perspectives, including on important political issues.”
The company has said it wants to avoid biases when it can and allow users to give thumbs up or down on each of the bot’s responses.
The company recently unveiled an updated Model Spec, a document that defines how OpenAI wants its models to behave in ChatGPT and the OpenAI API. The company says this iteration builds on the foundational version released last May.
“I think with a tool as powerful as this, one where people can access all sorts of different information, if you really believe we’re moving to artificial general intelligence (AGI) one day, you have to be willing to share how you’re steering the model,” Laurentia Romaniuk, who works on model behavior at OpenAI, told Fox News Digital.
In response to OpenAI’s statement, Grimmer, Westwood and Hall told FOX Business they understand companies are trying to achieve neutrality, but that their research shows users aren’t yet seeing those results in the models.
“The purpose of our research is to assess how users perceive the default slant of models in practice, not assess the motivations of AI companies,” the researchers said. “The takeaway of our research is that, whatever the underlying reasons or motivations, the models look left-slanted to users by default.”
“User perceptions can provide companies with a useful way to assess and adjust the slant of their models. While today’s models can take in user feedback via things like ‘like’ buttons, this is cruder than soliciting user feedback specifically on slant. If a user likes or dislikes a piece of output, that’s a useful signal, but it doesn’t tell us whether the reaction had to do with slant or not,” they said.
“There is a real danger that model personalization facilitates the creation of ‘echo chambers’ in which users hear what they want to hear, especially if the model is instructed to provide content that users ‘like.'”
Fox News Digital reached out to xAI, the maker of Grok, for comment.
Fox News Digital’s Nikolas Lanum contributed to this report.