

Experts Sound the Alarm Over AI Extinction Threats
In March, a collective of tech industry experts raised concerns about the development of advanced AI models, emphasizing the potential risks these technologies pose to society and humanity at large. Now, another statement signed by notable figures, including OpenAI CEO Sam Altman and AI pioneer Geoffrey Hinton, seeks to address the risks of AI, highlighting the need for open discussions about its most severe threats. The signatories liken the severity of these risks to global pandemics and nuclear war.
The Severity of AI's Risk:
"Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war."
The statement underscores the gravity of AI's risk to humanity, comparing it to global pandemics and nuclear war. The signatories, including researchers from Google DeepMind, Microsoft's chief technology officer Kevin Scott, and internet security pioneer Bruce Schneier, urge industry leaders to openly discuss these profound threats. While current large language models (LLMs), such as the one behind OpenAI's ChatGPT, are not capable of achieving artificial general intelligence (AGI), the concern is that LLMs could eventually advance to that point. AGI refers to an AI system that can match or surpass human intelligence.
The Quest for AGI and Acknowledging Consequences:
OpenAI, Google DeepMind, and Anthropic are among the organizations aspiring to achieve AGI, but they also recognize the potential consequences associated with such advancements. Altman, in his testimony before Congress, expressed his greatest fear: AI causing significant harm to the world. This apprehension stems from the understanding that AI's impact can manifest in various detrimental ways.
Hinton's Perspective:
Geoffrey Hinton, a prominent figure in the AI field, recently made headlines by abruptly resigning from his position at Google. In an interview with CNN, he candidly admitted, "I'm just a scientist who suddenly realized that these things are getting smarter than us." Hinton's resignation reflects the growing realization among experts of the rapid progress and potential implications of AI technologies.
The joint statements from tech industry leaders and experts highlight the increasing concerns surrounding advanced AI and its potential risks to humanity. With comparisons drawn to global crises, the need for proactive discussions about the severe threats posed by AI becomes apparent.
While AGI remains a sought-after milestone, it is crucial for organizations to grapple with the potential consequences and commit to responsible development. As the field progresses, the benefits of AI must be weighed against the work of mitigating its risks to humanity and society at large.