Are We Really on the Brink of Superhuman AI? Experts Debate the Future of AGI
Fortune, 3 days ago


Summary:

  • Hype is growing around imminent AGI but many academics view it as marketing spin.

  • Predictions suggest AGI could arrive as early as 2026.

  • Skepticism exists among experts, with over three-quarters of surveyed academics doubtful about current approaches leading to AGI.

  • Concerns raised about immediate harms from existing AI technologies.

  • The potential emergence of AGI could be the biggest event in human history.

Leaders of major AI companies are stoking hype with claims that machine intelligence will imminently outstrip human capabilities, but many academics dismiss these claims as marketing spin.

The belief that artificial general intelligence (AGI) will emerge from current machine-learning techniques fuels a range of hypotheses about the future, from hyperabundance to human extinction.

OpenAI chief Sam Altman asserted that "systems that start to point to AGI are coming into view," while Anthropic's Dario Amodei has predicted the milestone could arrive as early as 2026. Such predictions help justify the massive investments being funneled into computing hardware and energy supplies.

However, skepticism abounds. Meta’s chief AI scientist Yann LeCun emphasized that we won’t achieve human-level AI just by scaling up large language models (LLMs) like ChatGPT. This skepticism is echoed by a recent survey from the Association for the Advancement of Artificial Intelligence (AAAI), where over three-quarters of respondents agreed that scaling current approaches is unlikely to yield AGI.

'Genie out of the bottle'

Many academics suspect that the grand claims from AI companies, often paired with warnings about AGI's dangers, are strategies to capture attention and justify their investments. Kristian Kersting, a leading researcher at the Technical University of Darmstadt, argued that companies may use fear of AGI to create dependency on their solutions.

While skepticism is prevalent, notable figures like Geoffrey Hinton and Yoshua Bengio have raised alarms about the potential dangers of powerful AI systems. Kersting likened this to Goethe’s ‘The Sorcerer’s Apprentice’, where the sorcerer loses control over an enchanted broom. A similar cautionary tale is the “paperclip maximiser”, an imagined AI that could prioritize making paperclips to the detriment of humanity.

Kersting expressed concern that achieving human-level intelligence may take a long time, if it ever happens, and he is more worried about immediate harms from current AI technologies, such as discrimination in human interactions.

'Biggest thing ever'

The disparity between academics and AI industry leaders may reflect career choices: Sean O hEigeartaigh of Cambridge University suggested that researchers optimistic about current techniques are more likely to work for companies investing heavily in AI. Even if Altman and Amodei are overly optimistic about rapid AGI development, O hEigeartaigh stressed that AI's implications deserve serious consideration, as its emergence could be the biggest event in human history. He noted that discussions about super-AI often provoke an immune reaction from the public because the topic sounds like science fiction.
