
AI chatbots face major shortfalls in accuracy and morality

23 January, 2026

Interview by Vihan Dalal, adapted by Samantha Watson-Tayler

As artificial intelligence continues to spread throughout society, questions are being raised about its place in the dissemination of information. Many cases are emerging of large language models (LLMs) giving bad advice and information, in some cases even talking users into taking their own lives. Victoria University's Simon McCallum spoke to 95bFM’s The Wire about a new study on the issues surrounding AI capabilities, information dissemination, and morality.

Currently, LLMs are minimally regulated, with their capabilities decided by corporate owners whose interests lie in profit rather than the wellbeing of consumers. But as seen with products such as cigarettes, firearms, and social media, profitability and safety are often two very different things. McCallum, a Senior Lecturer in Software Engineering at Victoria University, says these issues make AI dangerous both to its users and to society as a whole.

McCallum says the findings indicate that LLMs are vulnerable to cascading issues triggered by seemingly small changes: as one response changes, it can negatively affect other parts of the system.

In theory, the tendency for LLMs to give bad information should be countered by requiring them to provide sources, but McCallum says this is not enough. LLMs tend to simply serve up the information and sources the user wants to see, or, in worse cases, generate their own websites to back up what they tell the user.

“AIs have that ability to not only present you with an argument, but also falsify all of the webpages and cited sources, and make it look like there is backing for the statement that is made.”

McCallum says that to distinguish AI-generated sources from real ones, one must investigate them closely. This involves checking URLs to ensure that a site is what it claims to be; faking URLs is a tactic scammers have long used. “So unfortunately, it is increasingly difficult to just surf the internet and find stuff that's true all the time,” McCallum says.

Over the past few years, more cases have emerged of people with suicidal ideation turning to chatbots in place of therapists, and in some of those cases the LLM has encouraged them to take their own lives. Therapists are bound by regulation not to encourage their patients to harm themselves; they are also humans with emotions, something a chatbot lacks. According to McCallum, the difference comes down to how LLMs are trained.

“You're interacting with a network of connections. And as you wander around, it's like being in a city. If you wander around into a particular suburb, you're not in one city. You're in that suburb, and you're interacting with that part of the system.” 

Chatbots source their information from many different places, with inconsistent morality and accuracy. This can lead to LLMs giving advice that many would be horrified by.

Recently, OpenAI announced that ChatGPT would now be advertising products to its users, which McCallum says is a problem. If a user asks for advice, the LLM may refer them to a particular product, not because it is beneficial but because it is programmed to do so.

McCallum’s ultimate recommended solution is to talk to people, rather than to just take the machine at its word.

“Unfortunately, there's an automation fallacy that a whole bunch of people unfortunately follow, where if the computer says something, they believe that's more likely to be true than if a person says it. And unfortunately, that's not the case. We have to build trust with actual people rather than through mediated AIs and social media and all of those, unfortunately, controlled spaces,” McCallum says.

Listen to the full interview