
AI race is going ‘too fast’ and getting ‘out-of-control’, warn global tech leaders

An open letter recently published by the Future of Life Institute, signed by prominent tech leaders including Elon Musk and Steve Wozniak, called for a six-month pause on AI experiments

Over 1,000 petitioners, including AI experts, technologists, and business leaders, are urging AI labs to temporarily halt the training of systems that exceed the capabilities of GPT-4.

The open letter, recently published by the Future of Life Institute – a non-profit organisation dedicated to reducing the risks associated with transformative technology – was signed by several prominent individuals, including Steve Wozniak, the co-founder of Apple; Elon Musk, CEO of SpaceX, Tesla, and Twitter; Emad Mostaque, CEO of Stability AI; Tristan Harris, Executive Director of the Center for Humane Technology; and Yoshua Bengio, the founder of the AI research institute Mila.

“Advanced AI could represent a profound change in the history of life on Earth, and should be planned for and managed with commensurate care and resources,” said the letter.

“Unfortunately, this level of planning and management is not happening, even though recent months have seen AI labs locked in an out-of-control race to develop and deploy ever more powerful digital minds that no one – not even their creators – can understand, predict, or reliably control.”

The letter also described how OpenAI, Microsoft, and Google are rapidly advancing their generative AI models, driven by their desire to dominate the AI market. They frequently announce new advancements and product releases, but according to the letter, this is happening “too fast”, failing to take ethical, regulatory, and safety concerns into account.

“Powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable,” the letter said.

The letter then called on all AI labs “to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4.”

It noted that the pause must be transparent and involve all relevant parties, and that if it cannot be put into effect promptly, governments should step in and institute a moratorium.

“This does not mean a pause on AI development in general, merely a stepping back from the dangerous race to ever-larger unpredictable black-box models with emergent capabilities,” said the letter.

It then urged AI innovators to refocus their research and development on making today’s powerful, state-of-the-art systems more accurate, safe, interpretable, transparent, robust, aligned, trustworthy, and loyal.

The petition further called on policymakers to accelerate the development of robust AI governance systems. It said these should include, at a minimum: regulatory authorities dedicated to AI; oversight and tracking of highly capable systems; tools for distinguishing real from synthetic AI content; a robust auditing and certification ecosystem; liability for AI-caused harm; public funding for technical AI safety research; and well-resourced institutions for coping with the dramatic economic and political disruptions (especially to democracy) that AI will cause.

“Humanity can enjoy a flourishing future with AI. Society has hit pause on other technologies with potentially catastrophic effects on society. We can do so here. Let’s enjoy a long AI summer, not rush unprepared into a fall,” it concluded.