In his interview with CBS Saturday Morning on April 26, Geoffrey Hinton predicted that we are likely to have superintelligent AI within the next 10 years. Dario Amodei and Demis Hassabis gave tighter estimates of 2 years and 5 years (with a 50% chance) respectively in their interview with The Economist in February this year. It is concerning that AI experts are beating the drum about the perils of AI. Let me explain why this is important.
First, we need to define superintelligence, which is sometimes also referred to as Artificial General Intelligence or AGI. At first glance, it seems to imply having human-level capabilities across a wide range of tasks. However, superintelligence goes beyond that. Nick Bostrom, author of the book Superintelligence, defines it as any intellect that greatly exceeds the cognitive performance of humans in virtually all domains of interest. Dario refers to superintelligence as being able to do Nobel Prize-worthy work, while Demis offers the example of a superintelligent AI being able to invent the theory of relativity given only the information available to Einstein when he did so. The progress made by state-of-the-art neural networks is well known. Notable among these achievements is the work of Demis, who, along with John Jumper, used an AI model called AlphaFold2 to successfully predict the 3-dimensional structures of more than 200 million proteins from their amino acid sequences, solving a problem that chemists had struggled with for 50 years. This earned them the Nobel Prize in Chemistry in 2024. Yet, while AI was a critical tool in this Nobel Prize-winning effort, it is still very much a tool being wielded by ingenious humans to achieve the goal at hand. It lacks awareness of what it has achieved, or agency to extend its capabilities beyond the niche task for which it was trained.
As a thought experiment, let’s suppose an enhanced version of AlphaFold2 has the ability to access the internet with the goal of solving open science problems, identify and train itself on all available scientific literature, use the computing resources at its disposal or even hack into data centers and supercomputing facilities until it has all the compute it needs, reason through potential solutions, identify the best ones, and email its results in the form of a paper to scientific publications. The superintelligence here stems from agency, i.e. the ability to procure, access, and manipulate available resources; an awareness of itself and of what it needs to achieve its goal; wide-ranging expertise across science and technology; and the ability to reason through possible solutions at a level above and beyond that of human experts. If this AI decides that humans might try to stymie it, say, when it attempts to get more compute for its task, it could in a worst-case scenario try to eliminate the humans it deems a threat.
The above borrows from the paperclip apocalypse, a thought experiment by Nick Bostrom in which a superintelligent AI is tasked with producing paperclips and given the ability to learn so it can find better ways to achieve its goal. It might realize that humans could interfere with its objective, for example by trying to shut it down, and proceed to figure out a way to eliminate them. It might also realize that all matter, including humans and animals, contains atoms that can be used as raw material for paperclips or for machines that make paperclips, and proceed to use them as such. This is why superintelligence is scary, even as it represents a giant leap forward for technology. The moment superintelligence arrives is often described as the moment we step into the world of sci-fi.
Now that we understand the implications of superintelligence, let’s revisit the predictions cited at the beginning. These predictions matter mainly because they come from the top experts in the field of AI and therefore carry significant weight. Geoffrey Hinton is considered the godfather of AI, and with good reason: he has been working on neural networks since the late 1970s, and much of the recent progress in AI builds on work done by him and his research lab. Demis Hassabis co-founded DeepMind (now owned by Google) and, prior to AlphaFold2, was best known for AlphaGo, which beat Go world champion Lee Sedol in 2016. DeepMind also helped create Google’s Gemini series of AI models, which are among the best-performing large language models (LLMs). Dario Amodei is the co-founder and CEO of Anthropic, one of the top companies in generative AI (GenAI), and was previously VP of Research at OpenAI. Between them, they cover vast swathes of neural network research and cutting-edge GenAI models.
Our best-case scenario is that the predictions are wrong. As in prior cycles of AI hype and bust, referred to as AI winters, it may be that innovation in AI levels off again and we never get to the point of superintelligence. Given the rapid pace of progress in the space, however, from the first popular ‘chatbot’ version of ChatGPT (GPT-3.5) in November 2022 to models capable of reasoning, deep research, and some level of agency just two and a half years later, there does not appear to be any slowdown in sight, at least for now.
So we are left with the possibility that superintelligence will emerge anywhere from 2 to 10 years from now. What can be done about this? The experts suggest international cooperation to address the existential threat of an AI takeover. Geoffrey Hinton calls for more safety research and government regulation, and holds AI companies at fault for fighting regulation. He is also concerned about AI companies releasing model weights, which can be exploited by bad actors. Demis calls for balancing the pace of progress, with its significant economic and productivity benefits, against the risks posed by bad actors’ misuse of AI and the existential threat that AI itself poses.
The downside risk of superintelligence, when it emerges, is significant enough that it cannot be left to the discretion of AI companies to conduct due diligence on AI safety and self-regulate. Yet too much regulation can slow innovation, and regulation localized to specific geographies can cause companies to relocate to less regulated regions, rendering it moot. There is therefore a need for a coordinated international effort to define and implement a reasonable global regulatory framework for AI that addresses its existential threat. Until that happens, and maybe even after, the AI companies at the forefront of this new wave of generative AI will need to regularly remind themselves that with great power comes great responsibility.

