
Fact or Fiction? The imperative of validating AI-generated information 

Artificial Intelligence (AI) hallucinations are unavoidable, but what can be done to verify the information that gen AI gives us?

Call them pathological liars, hallucinations, or disinformation: there is a glitch in the super-powerful ChatGPTs and other generative AI (gen AI) tools. These AI-powered systems, with their ability to produce human-like responses to any prompt, have gained our trust and attention, so much so that we have started using them for research, health inquiries, and workplace queries. But their biggest pitfall is that sometimes these models make things up.

Take, for example, an incident from earlier this year: a ChatGPT model was asked how a historical figure used Google LLC’s G Suite to organise resistance against the British, and the model promptly responded that a Gmail account was created and used to send emails and organise meetings.

It stated that Google Docs were used to share documents and collaborate on projects. That is a little hard to believe, considering the resistance took place in the 1940s, and Google itself did not exist until 1998. The GPT model had hallucinated.

AI hallucinations are an odd byproduct of large language models (LLMs) and other forms of AI. Currently, they are impossible to prevent. “Calling them hallucinations is a bit of a stretch, as these are not human, they cannot hallucinate. What I prefer calling them is disinformation,” explained Triveni Gandhi, Responsible AI Lead, Dataiku.

Input determines the output  

Reports suggest that, while infrequent, AI hallucinations occur consistently, making up between 3 and 10 per cent of the responses to prompts that users submit to gen AI models. IBM defines AI hallucinations as a phenomenon where an LLM perceives patterns or objects that are nonexistent or imperceptible to human observers, creating outputs that are nonsensical or altogether inaccurate.

The inherent risk lies in the quality and authenticity of information available online. The internet is rife with misinformation, biased sources, and inaccuracies. During critical events such as the COVID-19 pandemic, the rapid dissemination of information online led many individuals to trust information without verifying its credibility. Consequently, there is a significant challenge in discerning accurate information from false, a problem compounded by widespread trust in technology.

“While AI, particularly models like Gen AI, exhibit remarkable capabilities in content generation and transformation, it’s essential to recognise their limitations. Gen AI operates based on patterns and historical data, lacking true understanding, feeling, or sense. These limitations can lead to inconsistencies or “hallucinations” in its outputs. Additionally, repeatability, a critical aspect of reliability in computer programs, is often lacking in gen AI. Such limitations pose challenges, especially in critical applications where reliability is paramount,” explained Suresh Sambandam, CEO, Kissflow. 

The machine says, the human does  

The problem arises when there is blind belief in, and acceptance of, everything that a machine says. As Gandhi explains, many assume that the mathematics and AI behind the responses and data are objectively true. “We as a society have a significant reverence towards technology, and also see how the markets and world around us move towards data-based decisions and insights,” added Gandhi.

There is also the need to move towards faster and more efficient modes of work, and, as Gandhi put it, a belief that machines and computers aren’t wrong.

“The perception that information obtained from a computer is inherently factual stems from the widespread reliance on technology for accessing and processing data in modern society. From an early age, children are exposed to digital devices like laptops and tablets, where information is readily available at their fingertips. This ease of access can create a sense of trust in the accuracy of the information provided by technology,” explains Gayathri Murukan, an engineer turned mental health and wellness expert.  

Carolyn Yaffe, a psychotherapist at Medcare-Camali’s Jumeirah branch in Dubai, added that people today are becoming reliant on gen AI tools for everything. “Any form of modern technology has an impact on mental health. It can cause feelings of isolation, depression, and anxiety, and it also creates an information overload that reduces our attention span, which adds to the anxiety factor,” explained Yaffe.

How do we fix this?  

The biggest factor in ensuring that these hallucinations do not make decisions for you is to verify and cross-check the responses that a system produces. “Checks and balances always help,” added Gandhi.

Organisations increasingly leverage large language models such as GPT-3 to streamline operations and enhance productivity. However, reliance on AI systems introduces the risk of misinformation, particularly if the underlying data feeding these systems is flawed.

“AI hallucinations, where the system generates false outputs based on erroneous data, pose significant challenges within organisations. These inaccuracies can propagate through decision-making processes, leading to unintended consequences and misinformed strategies. To address this, organisations must implement robust validation mechanisms and encourage critical thinking among users,” added Murukan.

One approach is to advocate for a culture of scepticism, where users are encouraged to question the information provided by AI systems and cross-reference it with reliable sources. Additionally, investing in data quality assurance processes and ongoing training programs can equip employees with the skills to identify and mitigate instances of false information effectively.  
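
As a rough illustration of what such cross-referencing can look like in practice, the minimal Python sketch below flags AI-generated sentences that have little word overlap with a set of trusted reference passages. The function name, the overlap threshold, and the example texts are hypothetical, and a real validation pipeline would rely on far more robust fact-checking than simple word overlap.

```python
# A minimal, hypothetical sketch: flag AI-generated sentences whose word
# overlap with trusted reference passages is low, so a human can review
# them before they feed into any decision.
import re

def flag_unsupported_claims(ai_answer, trusted_passages, min_overlap=0.5):
    """Return sentences from ai_answer that are poorly supported by the
    trusted passages and should therefore be checked by a person."""
    def words(text):
        return set(re.findall(r"[a-z']+", text.lower()))

    flagged = []
    for sentence in re.split(r"(?<=[.!?])\s+", ai_answer.strip()):
        sentence_words = words(sentence)
        if not sentence_words:
            continue
        # Support score: share of the sentence's words found in the
        # best-matching trusted passage.
        support = max(
            (len(sentence_words & words(p)) / len(sentence_words)
             for p in trusted_passages),
            default=0.0,
        )
        if support < min_overlap:
            flagged.append(sentence)
    return flagged

# Toy example: the anachronistic Google Docs claim is flagged because the
# reference text gives it no support.
reference = ["Google was founded in 1998; the resistance took place in the 1940s."]
answer = ("Google Docs were used to share documents. "
          "The resistance took place in the 1940s.")
print(flag_unsupported_claims(answer, reference))
```

Word overlap is only a stand-in here; in practice the same pattern is applied through retrieval against vetted databases or human review queues.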

Gandhi explained that building unbiased and equitable AI models needs a concerted effort from organisations across different fronts. First, diverse and inclusive teams should be involved in the development and testing of AI algorithms to mitigate the risk of bias and ensure that different perspectives are considered.

Sambandam explained that the focus needs to be on controlling hallucinations by setting constraints on AI-generated content.  

“By restricting the AI’s output to specific guidelines or datasets, we mitigate the risk of generating unreliable or inconsistent content. These constraints ensure the reliability and accuracy of AI-generated outputs, particularly in applications where consistency is crucial,” added Sambandam. 
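
One very simplified way to picture that kind of constraint is an assistant that answers only from a curated, approved dataset and declines everything else rather than free-generating. The lookup table and function below are hypothetical and purely illustrative; production systems typically achieve this with retrieval over vetted documents and guardrail rules rather than a hard-coded dictionary.

```python
# Hypothetical illustration of constraining output to an approved dataset:
# answers are drawn only from vetted statements; anything outside them is
# declined instead of guessed.
APPROVED_FACTS = {
    "founding year of google": "Google was founded in 1998.",
    "gmail launch year": "Gmail launched in 2004.",
}

def constrained_answer(question):
    """Answer only from the approved dataset; decline rather than hallucinate."""
    key = question.strip().lower().rstrip("?")
    if key in APPROVED_FACTS:
        return APPROVED_FACTS[key]
    # Declining is the constraint: no answer is better than a made-up one.
    return "I cannot answer this from the approved dataset."

print(constrained_answer("Founding year of Google?"))
print(constrained_answer("How did a 1940s resistance movement use Gmail?"))
```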

Yaffe added that it is important to use AI as a tool and not as a substitute. “This is the way the world is going, these are the technologies of the world, so it is important to adopt them judiciously and efficiently.”

Additionally, organisations need to prioritise data quality and integrity, as biased or incomplete data can lead to skewed results. Regular audits and evaluations of AI systems should be conducted to identify and rectify any biases or disparities. Moreover, stakeholders must be transparent about the limitations of AI technologies and the potential risks involved, fostering a culture of accountability and continuous improvement.