Tech chiefs including Elon Musk and Steve Wozniak call on scientists to pause development of AI systems

Tech chiefs sign open letter demanding all labs stop training AI systems for at least six months
Jacob Phillips, 30 March 2023

Technology experts including Elon Musk have urged scientists to pause the development of artificial intelligence (AI) to ensure it does not pose a risk to humanity.

Tech chiefs including Apple co-founder Steve Wozniak and Skype co-founder Jaan Tallinn have signed an open letter demanding all labs stop training AI systems for at least six months.

The prevalence of AI has increased massively in recent years, with systems such as chatbot ChatGPT quickly becoming part of everyday life.

The letter said: “Recent months have seen AI labs locked in an out-of-control race to develop and deploy ever more powerful digital minds that no-one – not even their creators – can understand, predict or reliably control.

“Contemporary AI systems are now becoming human-competitive at general tasks and we must ask ourselves: Should we let machines flood our information channels with propaganda and untruth?”


It added: “Powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable.”

The technology chiefs want a halt to the training of any AI systems more powerful than the new GPT-4 chatbot, and called on researchers to focus on making the technology accurate, safe and transparent.

US tech firm OpenAI released GPT-4, the latest version of the technology behind its AI chatbot ChatGPT, earlier this month.

ChatGPT was launched late last year and has become an online sensation thanks to its ability to hold natural conversations as well as generate speeches, songs and essays.

The bot can respond to questions in a human-like manner and understand the context of follow-up queries, much like in human conversations. It can even admit its own mistakes or reject inappropriate requests.

According to OpenAI, GPT-4 has “more advanced reasoning skills” than ChatGPT but, like its predecessors, GPT-4 is still not fully reliable and may “hallucinate” – a phenomenon where AI invents facts or makes reasoning errors.

The letter said humanity can enjoy an “AI summer” in which it reaps the rewards of these systems, but only once safety protocols have been put in place.

The letter added: “Humanity can enjoy a flourishing future with AI. Having succeeded in creating powerful AI systems, we can now enjoy an ‘AI summer’ in which we reap the rewards, engineer these systems for the clear benefit of all and give society a chance to adapt.

“Society has hit pause on other technologies with potentially catastrophic effects on society.

“We can do so here. Let’s enjoy a long AI summer, not rush unprepared into a fall.”
