How AI’s left-leaning biases could reshape society

Search engines are fast becoming as outdated as rotary phones.

Published: August 31, 2024 11:23pm

The new artificial intelligence (AI) tools that are quickly replacing traditional search engines are raising concerns about potential political biases in query responses.

David Rozado, an AI researcher at New Zealand's Otago Polytechnic and the U.S.-based Heterodox Academy, recently analyzed 24 leading language models, including OpenAI’s GPT-3.5, GPT-4 and Google’s Gemini. 

Using 11 different political tests, he found the AI models consistently lean to the left. In the words of Rozado, the “homogeneity of test results across LLMs developed by a wide variety of organizations is noteworthy.” 

LLMs, or large language models, are artificial intelligence programs that use machine learning to understand and generate language.

The transition from traditional search engines to AI systems is not merely a minor adjustment; it represents a major shift in how we access and process information, Rozado also argues.

“Traditionally, people have relied on search engines or platforms like Wikipedia for quick and reliable access to a mix of factual and biased information,” he says. “However, as LLMs become more advanced and accessible, they are starting to partially displace these conventional sources.”

He also argues the shift in the sourcing of information has “profound societal implications, as LLMs can shape public opinion, influence voting behaviors, and impact the overall discourse in society.” The concern is especially timely with the U.S. presidential election between Republican Donald Trump and Democrat Kamala Harris now just over two months away and expected to be close. 

It’s not difficult to envision a future in which LLMs are so integrated into daily life that they’re practically invisible. After all, LLMs are already writing college essays, generating recommendations, and answering important questions. 

Unlike the search engines of today, which are more like digital libraries with endless rows of books, LLMs are more like personalized guides, subtly curating our information diet. 

The shift has the potential to make today’s search engines look quaint in comparison. 

As the aforementioned study emphasizes, it’s vital to consider the nature of potential bias in LLMs. 

Traditional media has its biases, to be sure. But those biases can at least be debated openly. 

LLMs operate behind the scenes. They don't just pull information from the web – they generate it. Their outputs are shaped by the data they've been trained on, and whatever biases that data contains are passed along to us, the unsuspecting users. When an LLM provides an answer or generates content, it appears neutral and objective. However, this apparent neutrality can mask underlying biases, making them harder to detect.

If the AI guiding a search leans left or right, an impressionable user researching topics such as abortion or gender dysphoria might unknowingly prioritize sources and adopt perspectives that align with that viewpoint. 

Over time, this could shape users’ understanding of the issue, not through explicit censorship but through a quiet, algorithmic curation of information. 

This could also result in a gradual homogenization of thought in which certain viewpoints are amplified while others are marginalized. It isn't necessarily a scenario in which someone consciously decides what to suppress; it's more a matter of what gets promoted by the algorithm's underlying biases.

Again, as the paper highlighted, the risk of creating monolithic discourse is real. 

While traditional media might be critiqued and challenged from various quarters, LLM-generated content often lacks this level of scrutiny. 

Looking ahead, online users should likely expect the biases of LLMs to grow and become more pronounced. 

Those creating these models often emerge from universities with a predominantly left-leaning perspective. The scarcity of conservative professors leading AI programs makes it more likely that biases from these academic environments will seep into the models themselves. 

Such concerns suggest the need for greater transparency about how LLMs are trained and how their biases are addressed. 

Rozado told Just the News: “if AI systems become deeply integrated into various societal processes such as work, education and leisure to the extent that they shape human perceptions and opinions, and if these AIs share a common set of political preferences, it could lead to the spread of viewpoint homogeneity and societal blind spots.”

He also said the trend "could divide the population into two groups: those who trust AI and those who do not. Conversely, the proliferation of AIs with diverse political preferences could lead people to gravitate towards AIs that reinforce their pre-existing beliefs, exacerbating polarization and hindering communication between groups inhabiting different 'AI bubbles.'"
