Editor's choice

2024-03-12

Leading AI experts predict artificial general intelligence could arrive by 2030

The advent of artificial general intelligence (AGI) - a system capable of performing any intellectual task a human can - may be closer than many realize. Prominent AI pioneers are predicting that AGI will be achieved within the next several years, a milestone with profound implications for a potential "technological singularity."

At the recent Beneficial AGI Summit 2024 in Panama City, Ben Goertzel, CEO of the AI research company SingularityNET, shared his assessment that AGI is poised to emerge between 2027 and 2030. Goertzel, known as the "father of AGI," believes this milestone could catalyze an intelligence explosion that produces superintelligent AI systems surpassing the combined cognitive capabilities of humanity.

"Once AGI becomes a reality and can self-improve by rewriting its own code, we may rapidly see an intelligence explosion leading to superintelligence," Goertzel stated. "I think when AGI can explore its own architecture, it will be able to achieve rapid self-guided development far beyond current human-level abilities."

Goertzel's timeline aligns with forecasts from other prominent AI figures. Shane Legg, co-founder of DeepMind, has predicted AGI by 2028, based on accelerating trends in AI research and development. Futurist Ray Kurzweil has similarly pointed to around 2029, citing the exponential growth of computing power and AI capabilities.

The rapidly advancing performance of large language models (LLMs) like GPT-4 and PaLM has lent credibility to the prospect of general intelligence on the horizon. However, Goertzel cautioned that while LLMs represent an important component, an AGI system capable of general reasoning will require integrating multiple AI training approaches and models.

To that end, Goertzel has been working on the OpenCog Hyperon project - an open-source infrastructure aimed at developing human-level AGI by combining different paradigms and architectures in a unified, modular system.

"Language models alone are not enough," Goertzel explained. "Hyperon integrates neural networks, logical reasoning, evolutionary learning and other AI methods to make progress toward general intelligence."

If the experts' predictions prove accurate, the emergence of AGI this decade could profoundly impact fields from scientific research and technological development to decision-making and creative pursuits. It may represent either a stunning evolutionary leap for intelligence or an existential risk if not developed safely and aligned with human ethics and goals.

Many AI safety advocates have urged proactive governance to ensure positive outcomes from transformative AGI breakthroughs. But opinions diverge on whether superintelligence would be an "intelligence explosion" liberating humanity's potential, or a threat to human agency and autonomy.

"We must be thoughtful about what we develop and how," Goertzel cautioned. "AGI done right could open up astounding opportunities. If mishandled, it could become an existential pitfall."

With the AI community racing toward general intelligence, crucial questions about the rights, ethics and existential safety surrounding AGI loom larger than ever.
