Ben Goertzel forecasts human-level AI by 2030

One of the leading minds working towards artificial general intelligence (AGI) has doubled down on his prediction that human-level AI will arrive by the end of this decade.



Dr. Ben Goertzel, the "father of AGI" and CEO of AI company SingularityNET, gave the closing speech at the Beneficial AGI Summit. He laid out an ambitious but plausible timeline for achieving AGI, as well as the subsequent "technological singularity" that could follow.


"It seems to me quite plausible that we will be able to achieve a general artificial intelligence at the human level within, say, the next three to eight years," Goertzel told the rapt audience of AI researchers and philosophers. "Most likely, I believe the first examples of human-level AGI will arrive around 2029 or 2030."


AGI refers to AI systems with the general intelligence and reasoning abilities of the human mind, rather than the specialized narrow AI that powers current technologies like chatbots and self-driving cars. It's seen by many as a revolutionary milestone that could kick off an intelligence explosion leading to superintelligence that vastly outpaces human cognition.


Goertzel's timeline aligns with prominent futurist Ray Kurzweil, who has long predicted that the singularity will happen by 2029 or 2030 based on the accelerating pace of technological change. However, many AI experts remain deeply skeptical that human-level AI is possible anytime soon given the immense challenges involved.


The 56-year-old AI pioneer acknowledged that his forecast could be "over-optimistic." Achieving AGI may require paradigm-shifting computing breakthroughs like million-qubit quantum computers, he allowed. But Goertzel thinks we're hot on the trail thanks to recent rapid advances, particularly in large language models (LLMs) like GPT-4.


"These large language models, while not remotely being AGI themselves, are absolutely an important component towards getting to AGI," he said. LLMs demonstrate remarkable language understanding and generation abilities that could be combined with other AI systems for reasoning, learning, and general intelligence.


Goertzel's OpenCog Hyperon project aims to integrate various AI components and models, including LLMs, into a cohesive AGI architecture. This modular approach, he argues, can achieve human-level AI without having to solve the mind-bogglingly complex challenge in one fell swoop.


Once AGI is achieved, Goertzel believes the systems will quickly modify their own code in an accelerating feedback loop of recursive self-improvement. This could lead surprisingly quickly to superintelligent AI that vastly outstrips the abilities of the human mind.


"I think that once AGI is able to introspect into its own mindset, it will be able to engage in engineering and science at a human or superhuman level," he said. "It will be able to create smarter and smarter versions of itself in an intelligence explosion toward the Singularity."


The technological singularity is a hypothetical point at which superintelligent AI triggers radical, runaway change across human civilization, for better or worse. Some envision a utopia of abundance with poverty, disease, and aging solved. Others fear an existential catastrophe if a superintelligence's goals aren't perfectly aligned with humanity's wellbeing.


Goertzel acknowledged the existential risk of a "precipitous" singularity, but argued it's likely unavoidable and our best path is to work on beneficial AGI that internalizes human ethics and values.


While experts remain divided on the singularity's plausibility and timing, Goertzel's prediction will certainly fuel more urgency around AI ethics and safety efforts. Global technical and policy leadership will be critical to steering an AGI transition that improves the human condition.
