What is at stake?

Human intelligence has led to the greatest triumphs of humanity, but it is also behind some of history’s greatest catastrophes. So what happens if we create artificial intelligence (AI) that’s significantly smarter than any person? Will it help us reach even greater heights or will it trigger, as some experts worry, the greatest catastrophe of all: human extinction?

Today’s artificial intelligence systems already outperform humans in the tasks they were trained for, especially when it comes to the speed with which they act. In just a matter of seconds, an AI system can play the winning move in chess or Go, translate an article, or plot a route to a given destination while taking into account current traffic patterns.

Though a human needs more time for any one of these tasks, a key aspect of human intelligence is that we can perform all of them. We have what’s known as general intelligence. While AI systems can only perform the tasks they were trained on, a human can learn from context and experience and develop new skills or solve novel problems.

Many experts worry that if an AI system achieves human-level general intelligence, it will quickly surpass us, just as AI systems have done with their narrow tasks. At that point, we don’t know what the AI will do.

Why is this a risk?

First, it’s important to note that experts are not worried that an AI will suddenly become psychopathic and begin randomly hurting or killing people. Instead, experts worry that an AI program will either be intentionally misused to cause harm, or will prove all too competent at completing a task that turns out to have been poorly defined.

Just looking at some of the problems caused by narrow AI programs today can give us at least some sense of the problems an even more intelligent system could cause. We’ve already seen that recommendation algorithms on social media can be used to help spread fake news and upend democracy. Yet even as AI researchers race to find ways to prevent the spread of fake news, they worry the problem will soon worsen with the rise of DeepFakes – in which AI programs modify what’s seen or heard in a video without the viewer recognizing it’s been doctored.

At the same time, AI systems that were deployed with the best of intentions to identify images, parse through job applications, or minimize mindless tasks have instead inadvertently reinforced institutional racism, put jobs at risk, and exacerbated inequality.

It’s not hard to imagine how much worse these problems could get with advanced AI systems functioning across many platforms or falling into the hands of terrorists or despots.

What do we know?

Though science fiction often portrays artificial intelligence systems as humanoid robots, the AI systems we interact with in our daily lives are typically algorithms running in the background of some program we’re using. They work so seamlessly that people outside of the AI world often don’t even realize they’ve just interacted with artificial intelligence.

For now, these programs can only perform such narrow tasks. But it is widely expected that, at some point, we will be able to create AI systems capable of performing most tasks as well as a human. According to the median surveyed expert, there is a roughly 50% chance of such AI by 2050 – with at least a 5% chance of superintelligent AI arriving within two years of human-level AI, and a 50% chance within thirty years. The long-term social impact of human-level AI and beyond, however, remains unclear, with extreme uncertainty surrounding experts’ estimates.

No Off Switch

It’s tempting to say that if an artificial general intelligence (AGI) system starts behaving badly, we’ll just turn it off. But again, we need only look at some of today’s problems to see that this is unrealistic.

Consider the efforts to stop fake news. Fake news is often spread across social media by recommendation algorithms that are powered by artificial intelligence. Yet the platforms use these same algorithms to share accurate news and personal stories that users are likely to appreciate.

“Turning off” fake news without negatively impacting the spread of real information (or even legitimate advertising, as annoying as many of us might find it) is challenging. It’s unlikely that social media platforms would choose to switch themselves off entirely to fix the problem, and it’s likely that fake news would simply pop up on another platform anyway.

As artificial intelligence becomes more advanced and ubiquitous, similar challenges will arise. Turning off an AGI in the future could become akin to turning off the Internet today.

AI in the Fight Against COVID-19: Lessons Learned

If AI poses such a threat to humanity, why develop it? Most AI researchers go into the field precisely because the technology promises to do so much good. As COVID-19 swept across the world in the spring of 2020, AI researchers immediately began developing algorithms, models, and programs to help medical professionals in a variety of ways.

Thousands of papers were published on COVID applications of AI, covering prediction and tracking, contact tracing, monitoring outbreaks and individual cases, early diagnosis, assisting healthcare professionals, and more. Some AI applications were so helpful that the US Food and Drug Administration issued a handful of Emergency Use Authorizations to implement the programs as quickly as possible. Meanwhile, in the UK, the ISARIC4C study is using big data to predict which patients are at greater risk of becoming seriously ill.

However, most AI work failed to meet medical standards. For example, one review found that “almost all published prediction models are poorly reported, and at high risk of bias such that their reported predictive performance is probably optimistic.” More recently, a study in Nature Machine Intelligence found that over 300 papers that tried applying AI to identify COVID-19 from chest x-rays were ultimately insufficient for clinical use.

The World Health Organization partnered with the medical journal The BMJ to identify best practices and lessons learned from using AI for research, diagnosis, and treatment of COVID-19. They found issues with discrimination, bias, and ethics, with two of the five review papers specifically addressing concerns that the AI applications exacerbated inequality.

In September 2020, The BMJ, Nature Medicine, and The Lancet co-published AI-specific updates to their reporting guidelines for clinical trial protocols and reports. New tools like these will help AI researchers improve their reporting, develop better and more reproducible studies, and ensure that AI continues to advance medicine.

"Most AI work failed to meet medical standards...the AI applications exacerbated inequality"

What is artificial intelligence?

The current quest for AGI builds on the capacity for a system to automate predictive analysis – a process generally described as machine learning. One important element of machine learning is the use of neural networks: systems that involve a large number of processors operating in parallel and arranged in tiers. The first tier receives a raw input, and each successive tier receives the output from the tier preceding it. Neural networks adapt and modify themselves autonomously, according to initial training and input of data, in ways that are typically not transparent to the engineers developing them.
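To make the idea of “tiers” concrete, here is a minimal sketch of a feedforward network in Python using NumPy. The layer sizes, the tanh activation, and the random weights are illustrative assumptions rather than a description of any particular system; a real network would learn its weights from training data instead of keeping the random values shown here.

import numpy as np

# Each "tier" (layer) is a weight matrix plus a bias vector.
# The first tier receives the raw input; every later tier
# receives the output of the tier before it.
rng = np.random.default_rng(0)

def make_tier(n_inputs, n_outputs):
    """Randomly initialised weights and biases for one tier (illustrative only)."""
    weights = rng.normal(scale=0.1, size=(n_inputs, n_outputs))
    bias = np.zeros(n_outputs)
    return weights, bias

def forward(x, tiers):
    """Pass the raw input through each tier in turn."""
    for weights, bias in tiers:
        x = np.tanh(x @ weights + bias)  # linear step followed by a non-linearity
    return x

# Three tiers: a 4-value raw input -> 8 units -> 8 units -> 2 outputs.
tiers = [make_tier(4, 8), make_tier(8, 8), make_tier(8, 2)]
raw_input = np.array([0.5, -1.2, 3.3, 0.0])
print(forward(raw_input, tiers))

In practice, the “adapt and modify themselves” step is training: an algorithm such as gradient descent repeatedly nudges these weights so that the network’s outputs better match example data, which is one reason the resulting internal values are rarely transparent to the engineers who built the system.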

If researchers one day succeed in building a human-level AGI, it will probably include expert systems, natural language processing and machine vision as well as mimicking cognitive functions that we today associate with a human mind, e.g., learning, reasoning, problem solving, and self-correction. However, the underlying mechanisms may differ considerably from those happening in the human brain just as the workings of today’s airplanes differ from those of birds.

What are key factors impacting risk levels?

AI risk is still emerging today, but it could escalate rapidly if sudden technological breakthroughs leave inadequate time for social and political institutions to adjust their risk-management mechanisms. In particular, if AI development itself becomes automated, new capabilities might evolve extremely quickly.

Risks can be exacerbated by geopolitical tensions leading to an AI weapons race, AI development races that cut corners on safety, or ineffective governance of powerful AI. 

The level of AI risk will partly depend on whether the goals of advanced AI can be aligned with human values – which will require more precise specification of human values and/or novel methods by which AIs can effectively learn and retain those values.

"Risks can be exacerbated by geopolitical tensions leading to an AI weapons race, AI development races that cut corners on safety, or ineffective governance of powerful AI"

Governance of Artificial Intelligence risk

In recent years, the risks of artificial intelligence have become much more tangible, with real-world threats appearing regularly in news articles. The most well-known problems surround Facebook,...
