A Coalition of Tech Innovators Calls for a Temporary Halt in Development of AI Systems Beyond GPT-4

In Brief

More than 1,100 people have signed the open letter demanding a pause on large-scale AI experiments.

High-profile signatories include Elon Musk, Steve Wozniak, and Emad Mostaque, among many others in the tech community.

The letter raises serious concerns about bias in AI-generated information, the automation of jobs, and the potential danger that AI poses to human civilization as a whole.

More than 1,100 people, including leading figures in technology, have signed an open letter calling for a six-month pause on the development of AI systems more powerful than GPT-4. The letter was initiated by the Future of Life Institute, a nonprofit dedicated to mitigating existential threats to humanity, particularly those posed by advanced AI.

Citing the widely accepted Asilomar AI Principles, which assert that ‘Advanced AI could lead to a significant transformation in the trajectory of life on Earth, and thus must be approached with appropriate planning and caution,’ the letter argues that this level of planning and governance is not happening.

The letter further warns that AI labs are locked in an ‘uncontrolled’ race to build ever more powerful systems that no one can fully understand or reliably predict. Because current systems such as GPT-4 are now competitive with humans on a broad range of tasks, the letter argues, they raise troubling questions about bias in AI-generated information, the automation of jobs, and the risk of losing control of our civilization.

“We urge all AI development groups to pause the training of AI systems more powerful than GPT-4 for at least six months. The pause should be transparent and verifiable and involve all key stakeholders. If a rapid pause cannot be achieved, governments should step in and enforce a moratorium,” the letter states.

The letter calls on AI laboratories and independent experts to use the pause to develop and implement robust safety protocols for the design and development of advanced AI, subject to thorough audits by outside specialists.

American journalist Jeff Jarvis, however, has criticized the letter, arguing that it is a case of moral alarmism.

Notable tech and machine learning figures who have signed the letter include researchers from DeepMind and academics from universities around the world, among them:

  • Yoshua Bengio, professor at the University of Montréal, Turing Award laureate recognized for his contributions to deep learning, and head of the Montreal Institute for Learning Algorithms.
  • Stuart Russell, professor of computer science at Berkeley, director of the Center for Intelligent Systems, and co-author of the standard textbook ‘Artificial Intelligence: A Modern Approach.’
  • Jaan Tallinn, co-founder of Skype, associated with the Centre for the Study of Existential Risk and the Future of Life Institute.
  • Gary Marcus, AI researcher and Professor Emeritus at New York University.
  • Elon Musk, CEO of SpaceX, Tesla, and Twitter.
  • Emad Mostaque, CEO of Stability AI.
  • Marc Rotenberg, President of the Center for AI and Digital Policy (CAIDP).

Rotenberg noted that the CAIDP intends to file a formal complaint with the Federal Trade Commission seeking a thorough investigation into OpenAI and ChatGPT and a prohibition on any further commercial releases until proper safeguards are in place. He added that ‘we also require sufficient time for our institutions to determine appropriate actions,’ cautioning that society is approaching a point where ‘alarmingly potent’ generative AI tools could emerge.

OpenAI CEO Sam Altman has himself admitted that institutions urgently need time to devise strategies: regulatory frameworks will be essential and will take time to refine, and while current AI applications may not seem particularly alarming, truly concerning ones may not be far off.

“We are petitioning the FTC to ‘pause’ AI development so our institutions, laws, and societal structures can synchronize. We must regain control over the technology we implement before it’s too late,” the CAIDP argues in its recent letter.

