Julie Hambleton
May 7, 2024 ·  4 min read

Elon Musk, other major tech leaders call to halt ‘giant AI experiments’

The world’s top tech leaders are calling for a halt to the development of artificial intelligence (AI) systems more powerful than GPT-4. Elon Musk, CEO of Tesla and SpaceX, along with other leaders in the tech industry, has signed an open letter asking for this pause. They worry that we don’t yet understand the technology’s full capabilities or how it might affect society, and argue that we need to study those questions before we develop AI to a point where we can no longer manage it.

Elon Musk and Tech Leaders Call For A Halt On AI Experiments

Tech leaders from across the industry wrote an open letter asking AI labs to halt, or at least slow down, the further development of AI technology. The letter, which was signed by Musk and more than 1,000 others, including Apple co-founder Steve Wozniak, calls for a pause on the development of AI systems more powerful than GPT-4. The letter states that the reason for this is that we don’t know how such systems will affect humanity, and argues that we shouldn’t develop them until we do. (1)

“Powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable,” the letter said. “Therefore, we call on all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4. This pause should be public and verifiable, and include all key actors. If such a pause cannot be enacted quickly, governments should step in and institute a moratorium.” (2)

Big Names Signed This Letter

As already mentioned, many prominent leaders in the tech industry signed this letter. Signatories include author Yuval Noah Harari, Apple co-founder Steve Wozniak, Skype co-founder Jaan Tallinn, politician Andrew Yang, and a number of well-known AI researchers and CEOs, including Stuart Russell, Yoshua Bengio, Gary Marcus, and Emad Mostaque. This is not the first time that a group of prominent individuals has called for limits on AI development. In 2017, Elon Musk and other industry leaders signed an open letter to the UN calling for laws to control how AI systems are developed, including a ban on lethal autonomous weapons. (3)

Musk later clarified that he did not support a ban on AI research, arguing that it would hurt humanity’s chances of surviving the coming technological singularity. In their letter, the signatories argue that there is “no need to panic,” because AI presents many benefits, but that it also poses “unprecedented challenges” for humanity, including job losses and the dangers of autonomous weapons. The authors call for an international initiative to tackle these issues together rather than leaving them for individual countries or companies to solve on their own.

The Benefits of AI

The signatories all agree that AI technology offers benefits to society. These include improved healthcare, increased safety and security, the creation of new jobs, and economic growth. The letter also points out that AI could help to eliminate poverty, reduce environmental damage, and improve access to education. However, it stresses that these benefits must be shared fairly across society. Again, these leaders are not calling for a permanent stop, just a pause for research and law-making before development continues.

“AI labs and independent experts should use this pause to jointly develop and implement a set of shared safety protocols for advanced AI design and development that are rigorously audited and overseen by independent outside experts,” they write. “These protocols should ensure that systems adhering to them are safe beyond a reasonable doubt.”

The Potential Risks of AI

Of course, there are many potential risks that AI poses to society. These include systems that bad actors can use to cause harm, or systems that make mistakes leading to accidents. Some of the most obvious risks involve autonomous vehicles and other technologies where AI is used for decision-making: the fear is that an AI system operating in a high-stakes situation could make a mistake that causes injury or death. There is also concern about bias creeping into the data sets these systems are trained on and how this might skew their decisions, as when Amazon’s facial recognition software was found to misidentify people with darker skin at higher rates.

The Bottom Line

AI technology is not going anywhere. Sadly, this letter might not make any difference to the pace at which AI developers are racing to produce the latest artificially intelligent products and systems. Hopefully, however, it will make some stop and think before programming. Perhaps it will also push policymakers to make AI laws and regulations a higher priority. Society, after all, could depend on it.

  1. “Elon Musk and other tech experts call for ‘pause’ on advanced AI systems.” Financial Times.
  2. “Pause Giant AI Experiments: An Open Letter.” Future of Life Institute.
  3. “Elon Musk and top AI researchers call for pause on ‘giant AI experiments’.” The Verge. James Vincent. March 29, 2023.