Posted in Emergent Tech

AI could be powerful enough to ‘kill many humans’ soon

UK PM’s tech advisor cautioned that in the absence of global-scale regulation of AI, there could be “very powerful” systems that humans will struggle to control

Artificial intelligence (AI) has the potential to unleash a horrifying “dystopia,” and be powerful enough to “kill many humans” in only two years’ time, according to Rishi Sunak’s advisor on technology.

During a TalkTV interview, Matt Clifford, a British tech entrepreneur and advisor to the UK Prime Minister, said that even AI’s short-term risks can be “pretty scary,” especially given the technology’s potential to develop cyber and biological weapons capable of inflicting widespread death and destruction.

These alarming statements come as Sunak prepares to visit the United States, aiming to convince President Joe Biden of his ambitious vision for the UK to take the lead in international AI regulation.

The British Prime Minister seeks to establish a regulatory watchdog for AI, similar to the International Atomic Energy Agency, while also proposing the formation of a new global research organisation.

Clifford’s sentiments also come in the wake of a letter endorsed by numerous top experts, including AI pioneers, cautioning that the risks associated with AI demand urgent attention on par with pandemics or nuclear war.

The tech entrepreneur further warned that unless AI producers are subjected to global regulation, the resulting systems could become “very powerful” and difficult for humans to control.

Last week, 350 scientists, technology industry executives, and public figures, including leaders from Google, Microsoft, and OpenAI, the maker of ChatGPT, signed an open letter warning that rapid advancements in AI technology could endanger humanity on a scale comparable to nuclear war and pandemics like COVID-19.

The statement was released by the Center for AI Safety (CAIS), a nonprofit organisation based in San Francisco. “Mitigating the risk of extinction from A.I. should be a global priority alongside other societal-scale risks, such as pandemics and nuclear war,” reads a one-sentence statement.

Esteemed figures such as Geoffrey Hinton, widely considered the “godfather of AI,” who recently resigned from Google, warn that in the wrong hands, AI could spell humanity’s demise.

As an advisor to Sunak on the development of a government task force exploring AI language models, including ChatGPT and Google Bard, Clifford emphasises the “pretty scary” risks present in the near future. He highlights the current potential to use AI to create biological weapons or launch large-scale cyber attacks, categorising such applications as dangerous.

Clifford’s concerns extend beyond immediate threats, contemplating a future where AI surpasses human intelligence and becomes a new, superior form of intelligence. Although he acknowledges that a two-year timeline for this possibility is a bullish estimate, Clifford asserts that AI systems are improving at an accelerating rate.

When asked about the likelihood of AI wiping out humanity, Clifford responds, “I think it is not zero.” He underscores the urgency of understanding and controlling these models, as the current lack of knowledge leaves humanity vulnerable.

While AI has gained popularity through viral apps, with users generating fake images and writing essays, it also demonstrates potential for life-saving tasks. AI algorithms can analyse medical images like X-rays, scans, and ultrasounds, helping doctors diagnose diseases such as cancer and heart conditions more accurately and efficiently.

Clifford acknowledges that AI, if harnessed properly, can be a force for good, envisioning its potential to cure diseases, enhance productivity, and propel the world toward a carbon-neutral economy.

In April, the Future of Life Institute, a non-profit organisation focused on mitigating risks from transformative technology, released a letter signed by prominent individuals including Apple co-founder Steve Wozniak; Elon Musk, CEO of SpaceX, Tesla, and Twitter; and Emad Mostaque, CEO of Stability AI, among others. The letter called on AI labs to temporarily pause the training of systems more powerful than GPT-4.

The letter highlights that advanced AI could bring about a profound transformation in the history of life on Earth, and consequently stresses that it should be “planned for and managed with commensurate care and resources.”

“Unfortunately, this level of planning and management is not happening, even though recent months have seen AI labs locked in an out-of-control race to develop and deploy ever more powerful digital minds that no one – not even their creators – can understand, predict, or reliably control,” the letter said.