Is ChatGPT a Labourite or a Nationalist?
While it may seem absurd to attribute political leanings to an AI like ChatGPT, the underlying biases in these technologies have real and significant implications
At first glance, questioning the political leanings of ChatGPT might seem as absurd as asking about the voting intentions of one's toaster. Indeed, it's a silly notion to attribute political preference to a household appliance. However, the question gains a surprising degree of relevance and complexity when we delve deeper into the implications of large language models like ChatGPT.
First and foremost, it's paramount to remember that ChatGPT is just a tool without personal beliefs, aspirations, or political affiliations. It doesn't harbour sympathies towards political candidates or parties, nor does it aspire to become a delegate for any political movement. In essence, ChatGPT is similar to a highly sophisticated calculator: you input a question, and it generates a response based on its programming and training.
Yet here lies a crucial difference: unlike a traditional calculator, which will unwaveringly output '2' in response to '1 + 1', ChatGPT's responses can differ from one run to the next. This is because large language models are non-deterministic: rather than computing a single fixed answer, they sample each word from a probability distribution, so the same prompt can yield different outputs, and their validity cannot be guaranteed in every instance. This, in turn, raises an intriguing question: can such a model exhibit political viewpoints?
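To make the point concrete, here is a minimal Python sketch of why the same prompt can produce different answers: at each step the model assigns probabilities to possible next words, and the decoder samples from them rather than always picking the single most likely one. The vocabulary and probabilities below are invented purely for illustration; they are not taken from any real model.

```python
import random

# Toy next-word distribution for the prompt "1 + 1 =".
# The tokens and probabilities are made up for illustration.
next_token_probs = {"2": 0.90, "two": 0.07, "11": 0.03}

def sample_token(probs: dict) -> str:
    """Draw one token at random, weighted by its probability."""
    tokens, weights = zip(*probs.items())
    return random.choices(tokens, weights=weights, k=1)[0]

# The same 'prompt' can produce different answers on different runs:
# usually "2", but occasionally "two" or even "11".
for run in range(5):
    print(f"Run {run + 1}: 1 + 1 = {sample_token(next_token_probs)}")
```

A calculator has no such sampling step, which is exactly why its output never varies.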
Whilst they certainly do not hold personal views as we do, their replies typically exhibit biases. Remember that ChatGPT, like all AI models, is shaped by the data it was trained on: if its training data skews towards left-leaning texts, the model will tend to exhibit a leftist bias, and vice versa. Indeed, a study by the Massachusetts Institute of Technology (MIT) suggests that ChatGPT tends to lean towards Labourite ideologies, indicating a left-leaning bias in its training data.
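A toy example makes the mechanism visible. The sketch below builds a crude word-counting classifier from an invented, deliberately skewed corpus; a neutral sentence then inherits the corpus's slant. Everything here, from the training sentences to the labels, is fabricated for illustration and is not how ChatGPT was actually trained.

```python
from collections import Counter

# An invented 'training corpus' that over-represents positive
# statements about one party, as a skewed web crawl might.
training_data = [
    ("labour policies help families", "positive"),
    ("labour investment creates jobs", "positive"),
    ("labour reforms improve schools", "positive"),
    ("nationalist budget cuts services", "negative"),
]

# Count how often each word co-occurs with each label.
word_label_counts = Counter()
for text, label in training_data:
    for word in text.split():
        word_label_counts[(word, label)] += 1

def predicted_sentiment(text: str) -> str:
    """Score a text by the labels its words were seen with in training."""
    pos = sum(word_label_counts[(w, "positive")] for w in text.split())
    neg = sum(word_label_counts[(w, "negative")] for w in text.split())
    return "positive" if pos >= neg else "negative"

# Two equally neutral sentences; only the party name differs.
print(predicted_sentiment("labour announces new policy"))       # positive
print(predicted_sentiment("nationalist announces new policy"))  # negative
```

The model has no opinions of its own; it simply mirrors the imbalance it was fed.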
So, when asking ChatGPT for information, users should be mindful that its replies carry some degree of bias.
But bias isn't unique to this model; it afflicts all large language models. This is inevitable, since they learn from vast amounts of data that inherently reflect societal biases. And AI biases can become truly problematic, because they can influence real-world decisions and exacerbate societal inequalities.
Let me give you an example. Picture yourself applying for your dream job, equipped with the right qualifications and enthusiasm. However, an unseen barrier stands in your way: an AI recruitment system. A comprehensive Reuters report has shed light on disturbing instances where such systems, driven by biased historical hiring data, have unfairly discriminated against candidates based on gender, age, or ethnicity.
These digital gatekeepers, supposedly neutral, instead enforce outdated prejudices. They make critical decisions about who gets a foot in the door, often overlooking genuinely qualified individuals simply because they don't align with the system's skewed idea of an 'ideal candidate'. This is not just a faceless statistic; it's a reality for many. It could be you, a family member, or a close friend unjustly side-lined in their professional journey, not by a human but by an algorithm.
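For readers who want to see how such a system goes wrong, here is a deliberately simplified Python sketch. The historical records, the groups, and the screening rule are all invented; real recruitment models are far more complex, but the failure mode is the same: a model trained on prejudiced decisions reproduces them.

```python
# Invented historical hiring records: past decisions favoured group A,
# so qualified candidates from group B were often rejected.
historical_hires = [
    {"qualified": True,  "group": "A", "hired": True},
    {"qualified": True,  "group": "A", "hired": True},
    {"qualified": True,  "group": "B", "hired": False},  # past prejudice
    {"qualified": False, "group": "A", "hired": False},
]

def hire_rate(group: str) -> float:
    """Fraction of qualified candidates from this group hired historically."""
    pool = [r for r in historical_hires
            if r["qualified"] and r["group"] == group]
    return sum(r["hired"] for r in pool) / len(pool)

def screen(candidate: dict) -> bool:
    """Naive screen: approve only if the candidate's group was usually hired."""
    return candidate["qualified"] and hire_rate(candidate["group"]) > 0.5

# Two equally qualified candidates; only group membership differs.
print(screen({"qualified": True, "group": "A"}))  # True
print(screen({"qualified": True, "group": "B"}))  # False: bias replicated
```

No one programmed the prejudice explicitly; the algorithm simply learned it from the record of past decisions.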
While it may seem absurd to attribute political leanings to an AI like ChatGPT, the underlying biases in these technologies have real and significant implications. As we continue integrating AI into various aspects of our lives, addressing and mitigating these biases becomes increasingly crucial. Failing to do so risks entrenching existing societal inequalities while undermining the principles of fairness and equality we strive to uphold in a democratic society.
We must actively seek to identify these biases within AI systems and strive relentlessly to mitigate them. It is only through such conscientious effort that we can edge closer to creating a world that is more equitable and just.
This opinion article first appeared in Business Today on 22 February 2024