
We Aren't Sure If (Or When) Artificial Intelligence Will Surpass the Human Mind

Experts say the future of AI is uncertain, but it wouldn't hurt to prepare for the possibility of an AI singularity.

By Avery Hurt
Mar 3, 2022 7:50 PM
(Credit: Omelchenko/Shutterstock)

It may sound like nothing more than a thrilling science fiction trope, but scientists who study artificial intelligence warn that an AI singularity — a point at which the technology irreversibly surpasses the capabilities of the human mind — is a real possibility, and some say it will happen within a few decades.

Surveys of AI experts, including one published in the Journal of Artificial Intelligence Research in 2018, tend to find that a significant chunk of researchers think there’s at least a 50 percent chance that some people alive today will live to see an AI singularity. Some expect it within the next decade.

From Deep Blue to Siri 

The moment AI reaches human-level intelligence will mark a profound change in the world. Such sophisticated AI could create ever more advanced AI. At that point, it might become difficult — if not impossible — to control.

For some background, AI caught the public’s attention in 1997 when a computer program called Deep Blue beat Garry Kasparov (then the world chess champion) at his own game. More recently, the technology has been taught to drive cars, diagnose cancer and assist with surgery, among other applications. It can even translate languages and troll you on Twitter. And, of course, it also helps many of us search the web and map our way home.

But these are all examples of narrow AI, which is programmed for a specific, yet often incredibly complex, task. A program that can beat a Go master can’t drive a car; AI that can spot a tumor can’t translate Arabic into French. While narrow AI is often far better than humans at the one thing it’s trained to do, it isn’t up to speed on everything people can do. Unlike us, narrow AI can’t apply its intelligence to whatever problem or goal comes up.   

Meanwhile, artificial general intelligence (AGI) could apply a general set of knowledge and skills to a variety of tasks. While it doesn’t currently exist, AGI would no longer depend on human-designed algorithms to make decisions or accomplish tasks. In the future, AGI could hypothetically build even smarter AGI, over and over again. And because computers can evolve much faster than humans, this might quickly result in what is sometimes called “superintelligence” — an AI that is far superior to human smarts. It could adapt to specific situations and learn as it goes. That’s what experts mean when they talk about AI singularity. But at this point, we likely aren’t even close.

When Can We Expect Singularity?

In a recent blog post, roboticist and entrepreneur Rodney Brooks said he thinks the field of AI is probably “a few hundred years” less advanced than most people think. “We’re still back in phlogiston land, not having yet figured out the elements,” he wrote.   

It’s also important to note that we still haven’t figured out precisely how the human mind works, says Shane Saunderson, a robotics engineer and research fellow at the Human Futures Institute in Toronto. Saunderson describes himself as “a bit bearish” on the idea of an impending AI singularity. “We understand so little about human psychology and neuroscience to begin with that it’s a bit of hubris to say we’re only 10 years away from building a human-like intelligence,” he says. “I don’t think we’re 10 years away from understanding our own intelligence, let alone replicating it.”

Still, others insist that AGI may be difficult to avoid, even if the timeline is uncertain. “It's pretty inevitable that it’s going to happen unless we humans wipe ourselves out first by other means,” says Max Tegmark, a physicist who researches machine learning at MIT. “Just as it was easier to build airplanes than figure out how birds fly, it’s probably easier to build AGI than figure out how brains work.” 

Despite a lack of consensus on the subject, many scientists, the late Stephen Hawking included, have warned of AGI’s potential dangers. If and when AI reaches the point where it can continually improve itself, the fate of our species could depend on the actions of this superintelligent machine, warns Nick Bostrom, a University of Oxford philosopher, in his book Superintelligence: Paths, Dangers, Strategies.

Yet that fate might not necessarily be a dismal one. Experts also point out that superintelligent AI could offer a solution to many of our problems. If we can’t figure out how to tackle climate change, eradicate poverty and ensure world peace, perhaps AI can.

“This remarkable technology has the potential to help everybody live healthy, wealthy lives so humanity can flourish like never before,” says Tegmark, who is also the founder of the Future of Life Institute, an organization that aims to ensure these positive outcomes. Yet, he adds, it “might wipe out humanity if its goals aren’t aligned with ours.” Or as Bostrom put it in Superintelligence, when it comes to confronting an intelligence explosion, “We humans are like small children playing with a bomb.” 

Preparing for AGI 

Whether it is ultimately a panacea or a doomsday device, we likely don’t want to be taken by surprise. If there is a reasonable chance an AI singularity is on the way, Tegmark thinks we should prepare accordingly. “If someone told us that an alien invasion fleet is going to arrive on Earth in 30 years, we would be preparing for it — not blowing it off as being 30 years from now,” he says. Tegmark points out that it could take at least three decades to figure out how to control this technology and make sure its goals align with ours. We need to be ready not only to control it, Tegmark argues, but also to use it in the best interests of humanity.

Of course, that assumes we can all agree on our goals and interests. However, Tegmark is optimistic that we could concur on the basics and work together to protect ourselves from an existential threat posed by a superintelligent AI. If the threat of a climate disaster isn’t enough to bring humanity together, perhaps both the promise and peril of superintelligent AI will be.
