
Beyond imitation: AI may pass the ‘Turing Test’ but trust issues remain

As we celebrate ‘Alan Turing Day’ and our 200th anniversary, we honour Turing’s legacy as the father of modern computing. His Turing Test remains a key AI benchmark.

The most famous benchmark test of artificial intelligence – whether a machine can think – is known as the 'Turing Test'. It was originally called the 'imitation game' and was developed by Alan Turing in 1950 at The University of Manchester. Alan Mathison Turing (1912–1954) was a mathematician and computer scientist who was also responsible for breaking the Nazi Enigma code during the Second World War.

He is often described as 'the father of modern computing', having developed the first theoretical model of a general-purpose computer, known as a 'Turing Machine'. We remember his lasting influence as we celebrate 'Alan Turing Day'. He is rightfully considered a 'heritage hero' of the University, as we also celebrate our 200th anniversary this year.

Turing proposed that if an AI system can pass as a human being in a long enough conversation, it is deemed to have passed the 'Turing Test' and can therefore be considered generally intelligent – this is the core of our definition of AI.

Dr. Nikolay Mehandjiev, Professor of Enterprise Information Systems at Alliance Manchester Business School, The University of Manchester.

The recent success of ChatGPT and other large language models (LLMs) raises questions about this test, because they can appear highly intelligent to the uninitiated observer, and some LLMs may indeed be able to pass the Turing Test.

Today, the use of AI has flourished due to the wide availability of data. Yet the concept of artificial intelligence has been with us since the Dartmouth Workshop in 1956, gradually gaining popularity in medicine and then business, but with applications limited by the effort required to encode knowledge explicitly into the AI system.

The difference today is that the AI systems extract this knowledge themselves from patterns in the data – a process known as machine learning. The wide availability of data thus caused a rapid growth of AI in all areas of our lives, from voice assistants ordering our shopping to self-driving vehicles. 
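
To make this shift concrete, here is a minimal, hypothetical sketch in Python – the house-price rule and every number in it are invented purely for illustration. The first function encodes an expert's knowledge by hand; the model below it extracts essentially the same rule from example data instead.

    import numpy as np
    from sklearn.linear_model import LinearRegression

    # Old approach: knowledge explicitly encoded by a human expert.
    def hand_coded_estimate(house_size_m2: float) -> float:
        return 50_000 + 2_000 * house_size_m2  # a rule written down by the expert

    # Machine-learning approach: the same relationship is extracted from data.
    sizes = np.array([[60.0], [80.0], [100.0], [120.0]])     # example house sizes
    prices = np.array([170_000, 210_000, 250_000, 290_000])  # observed prices
    model = LinearRegression().fit(sizes, prices)

    print("Learned rule: price ≈ "
          f"{model.intercept_:.0f} + {model.coef_[0]:.0f} × size")

The learned coefficients match the hand-coded rule, but nobody had to write the rule down – which is what the wide availability of data has made possible at scale.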

An interesting trend is the use of AI to augment human decision-making rather than replace it – the IA (Intelligent Assistant) mode of applying AI. The best chess players today are neither humans nor computers, but a human and an AI player working together to win. I am a big fan of this mode of working, and I see it as the best model for using AI, especially in more tactical rather than operational decision-making situations. It combines the best features of human intelligence and machine intelligence, and it brings with it a different view of the world.

The purpose of artificial intelligence is presumably to replace humans in large areas of human activity, whilst the purpose of an intelligent assistant is to augment human decision-making and support us. The approach to building intelligent assistive systems is very human-centric.

The use of AI to support complex decision-making is one of the collaboration themes between Alliance Manchester Business School and computer science academics from Manchester and Cambridge, who have just won a prestigious grant to develop the next generation of AI researchers through a PhD training programme (funded by the UK government) in complex decision support – supporting the intelligent assistance model.

However, AI still comes with security risks and issues surrounding trust. These risks matter because AI is increasingly found everywhere, impacting all aspects of our lives and businesses.

Regarding the potential risks linked to AI, the first is that AI is often used to replace human beings, and when it makes decisions it does so extremely fast and in a frictionless manner. In the stock market crashes of the 1980s and 1990s, AI-based trading robots (trading bots) were triggered to sell stock because other bots were selling stocks on the financial markets, decimating the market within seconds or minutes.

So, the financial markets had to implement controls – so-called 'circuit breakers' – whereby, if a market monitoring program detects selling activity beyond a certain threshold, trading is halted.
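
As a rough illustration only – the class name, the index levels and the 7 per cent threshold below are assumptions made for this sketch, not the rules of any real exchange – such a threshold-based halt can be expressed in a few lines of Python:

    from dataclasses import dataclass

    @dataclass
    class CircuitBreaker:
        reference_price: float        # e.g. the previous session's closing index level
        halt_threshold: float = 0.07  # halt after a 7% decline (illustrative figure)
        halted: bool = False

        def on_price_update(self, price: float) -> None:
            """Halt trading if the decline from the reference level exceeds the threshold."""
            decline = (self.reference_price - price) / self.reference_price
            if decline >= self.halt_threshold and not self.halted:
                self.halted = True
                print(f"Trading halted: index down {decline:.1%} from reference level")

    breaker = CircuitBreaker(reference_price=5000.0)
    for price in [4990.0, 4800.0, 4640.0]:  # simulated incoming index levels
        breaker.on_price_update(price)

Real exchanges use more elaborate, tiered rules, but the principle is the same: an automatic check that interrupts runaway automated selling before it empties the market.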

Other key risks specific to AI relate to its complex nature, which often results in a lack of transparency about how certain decisions are made. We need to be certain that AI is making the decisions we expect it to, and that these decisions are not skewed by biased training data. In a simple example, an AI model was "taught" to distinguish a husky from a wolf, seemingly achieving this with a respectable accuracy of 80 per cent.

The truth was that most of the husky photos had a green background (grass), whilst the wolf photos had a white background (snow), and the background was the main feature the model relied on when making the distinction. Explainable AI is a rapidly growing branch of AI research that aims to plug this gap by providing users with explanations of how and why a decision was made.
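
To see how easily this can happen, here is a hypothetical toy sketch in Python, using synthetic data rather than the images from the original study: a classifier is trained on examples where "background whiteness" lines up almost perfectly with the label, it learns to rely on that background cue, and its accuracy collapses as soon as the shortcut stops working.

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    n = 1000

    # Training data: label 1 = wolf, 0 = husky.
    y_train = rng.integers(0, 2, n)
    background = y_train + rng.normal(0, 0.1, n)  # wolves almost always on snow
    animal = y_train + rng.normal(0, 0.8, n)      # genuinely informative, but noisier
    X_train = np.column_stack([background, animal])

    model = LogisticRegression().fit(X_train, y_train)
    print("Learned weights (background, animal):", model.coef_[0])

    # Test data where the backgrounds are swapped: huskies on snow, wolves on grass.
    y_test = rng.integers(0, 2, n)
    background_test = (1 - y_test) + rng.normal(0, 0.1, n)
    animal_test = y_test + rng.normal(0, 0.8, n)
    X_test = np.column_stack([background_test, animal_test])

    print("Accuracy once the background cue flips:", model.score(X_test, y_test))

Inspecting which features carry the weight – the kind of check that explainable-AI tools automate – exposes this shortcut before the system is ever deployed.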

We are currently running research initiatives on AI trust at The University of Manchester, where our Centre for Digital Trust and Society will now host a special thread of research on AI trust and security. We are asking research questions such as: Is AI competent? Does it do what it is designed to do, and how do we ensure that? Is it responsible? Is it focused on social good? Can it be verified? And is it robust? What happens when the technology is exploited?

The projects include one I lead, which looks at the factors shaping users' trust when choosing driverless cars. This work follows directly from Alan Turing's original, innovative thinking and is a fitting tribute to his remarkable legacy, which is more relevant today than ever.