Artificial intelligence

What is at stake?

Human intelligence has led to the greatest triumphs of humanity, but it is also behind some of history’s greatest catastrophes. So what happens if we create artificial intelligence (AI) that’s significantly smarter than any person? Will it help us reach even greater heights, or will it trigger, as some experts worry, the greatest catastrophe of all: human extinction?

Today’s artificial intelligence systems already outperform humans at the tasks they were trained for, especially in the speed with which they act. In just a matter of seconds, an AI system can play the winning move in chess or Go, translate an article, or plot a route to a given destination while taking current traffic patterns into account.

Though a human requires more time to do any of these tasks, a key aspect of human intelligence is that we can perform all of them. We have what’s known as general intelligence. While AI systems can only perform the tasks they were trained to do, a human can learn from context and experience, developing new skills and solving novel problems.

Many experts worry that if an AI system achieves human-level general intelligence, it will quickly surpass us, just as AI systems have done with their narrow tasks. At that point, we don’t know what the AI will do.

"…it is widely accepted that we will be able to create AI systems capable of performing most tasks as well as a human at some point."

Why is this a risk?

First, it’s important to note that experts are not worried that an AI will suddenly become psychopathic and begin randomly hurting or killing people. Instead, they worry that an AI programme will either be intentionally misused to cause harm or prove far too competent at completing a task that turns out to be poorly defined.

The problems caused by narrow AI programmes today give us at least some sense of the harm an even more intelligent system could cause. We’ve already seen that recommendation algorithms on social media can be used to help spread fake news and upend democracy. Yet even as AI researchers race to find ways to prevent the spread of fake news, they worry the problem will soon worsen with the rise of deepfakes – videos in which AI programmes modify what’s seen or heard without the viewer recognising that the footage has been doctored.

At the same time, AI systems that were deployed with the best of intentions to identify images, screen job applications, or automate mindless tasks have instead inadvertently reinforced institutional racism, put jobs at risk, and exacerbated inequality.

It’s not hard to imagine how much worse these problems could get with advanced AI systems functioning across many platforms or falling into the hands of terrorists or despots.

What do we know?

Though science fiction often portrays artificial intelligence systems as humanoid robots, the AI systems we interact with in our daily lives are typically algorithms running in the background of some programme we’re using. They work so seamlessly that people outside of the AI world often don’t even realise they’ve just interacted with artificial intelligence.

For now, these programmes can only perform such narrow tasks. But it is widely accepted that we will be able to create AI systems capable of performing most tasks as well as a human at some point. According to the median surveyed expert, there is a roughly 50 per cent chance of such AI by 2050 – with at least a five per cent chance of super-intelligent AI within two years of human-level AI, and a 50 per cent chance within thirty years. The long-term social impact of human-level AI and beyond, however, remains unclear, with extreme uncertainty surrounding experts’ estimates.

What are key factors impacting risk levels?

AI risk is still emerging today, but it could rapidly accelerate if sudden technological breakthroughs leave social and political institutions inadequate time to adjust their risk-management mechanisms. If AI development itself becomes automated, in particular, new capabilities might emerge extremely quickly.

Risks can be exacerbated by geopolitical tensions that lead to an AI arms race, by development races that cut corners on safety, or by ineffective governance of powerful AI.

The level of AI risk will partly depend on whether the goals of advanced AI can be aligned with human values – which will require more precise specification of those values and/or novel methods by which AIs can effectively learn and retain them.
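
To see in miniature what specifying human values means, consider the sketch below in Python. It is invented purely for this article: the routes, the risk numbers and the penalty weight are all made-up assumptions, not data or any established method. The point it illustrates is that an optimiser given only part of what we care about will competently optimise exactly that, and nothing else.

    # A toy illustration, invented for this article, of a "poorly defined task":
    # we ask an optimiser for the fastest route and forget to encode a value we
    # care about (safety). The optimiser then competently minimises exactly what
    # we specified - and nothing else. All numbers below are made up.

    routes = [
        {"name": "motorway",    "minutes": 30, "accident_risk": 0.01},
        {"name": "school zone", "minutes": 25, "accident_risk": 0.30},
        {"name": "back roads",  "minutes": 40, "accident_risk": 0.02},
    ]

    # Mis-specified objective: minimise travel time, full stop.
    naive = min(routes, key=lambda r: r["minutes"])

    # Better-specified objective: also penalise risk. The weight of 100 is an
    # arbitrary stand-in for how much we value safety relative to speed.
    aligned = min(routes, key=lambda r: r["minutes"] + 100 * r["accident_risk"])

    print(naive["name"])    # school zone - fast and unsafe, exactly as asked
    print(aligned["name"])  # motorway

Scaling this idea up – capturing everything humans care about in an objective that an advanced AI can learn and retain – is the open research problem described above.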

The current quest for Artificial General Intelligence (AGI) builds on systems that learn to make predictions from data – a process generally described as machine learning. One important element of machine learning is the use of neural networks: systems made up of a large number of simple processing units operating in parallel and arranged in tiers.

The first tier receives the raw input; each successive tier receives the output of the tier preceding it. Through training on data, neural networks adapt and modify themselves autonomously, in ways that are typically not transparent to the engineers developing them.
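
To make the tier-by-tier picture concrete, here is a minimal sketch of such a network in Python using the NumPy library. It is an illustration only: the layer sizes, the tanh non-linearity and the names (make_tier, forward) are assumptions chosen for readability, and the weights are left at their random initial values rather than trained.

    import numpy as np

    rng = np.random.default_rng(seed=0)

    def make_tier(n_in, n_out):
        # One tier: a randomly initialised weight matrix and bias vector.
        return {"weights": rng.normal(0.0, 0.1, size=(n_in, n_out)),
                "biases": np.zeros(n_out)}

    def forward(tiers, raw_input):
        # The first tier receives the raw input; each successive tier
        # receives the output of the tier preceding it.
        signal = np.asarray(raw_input, dtype=float)
        for tier in tiers:
            # Weighted sum plus bias, then a non-linearity, passed onward.
            signal = np.tanh(signal @ tier["weights"] + tier["biases"])
        return signal

    # A tiny network of three tiers: 4 raw inputs -> 8 -> 8 -> 2 outputs.
    network = [make_tier(4, 8), make_tier(8, 8), make_tier(8, 2)]
    print(forward(network, rng.normal(size=4)))

In practice, training would repeatedly adjust the weights and biases (for instance by gradient descent) until the network’s outputs become useful, and it is this automatic self-modification that leaves the resulting behaviour opaque to its developers.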

If researchers one day succeed in building a human-level AGI, it will probably include expert systems, natural language processing and machine vision, as well as mimic cognitive functions that we today associate with the human mind, e.g. learning, reasoning, problem solving and self-correction. However, the underlying mechanisms may differ considerably from those of the human brain, just as the workings of today’s airplanes differ from those of birds.

AI for Good: beating pandemics

If AI poses such a threat to humanity, why develop it? Most AI researchers go into the field precisely because the technology promises to do so much good. The COVID-19 pandemic highlights some of the ways in which AI can help improve the world.

  • Sift through data: Perhaps AI’s greatest skill to date is parsing and analysing huge quantities of data. This was put to use in a partnership between the White House Office of Science and Technology Policy and a number of AI companies and non-profits, who joined forces to create a database that tracks medical journal articles related to the COVID-19 pandemic. It is helping doctors and scientists search tens of thousands of articles for better ways to treat and prevent the coronavirus.
  • Identify illness: AI systems are increasingly proficient at recognising anomalies in medical images, so it’s no surprise they’re being used to spot signs of the coronavirus in chest x-rays.
  • Develop drugs: AI is already used to develop novel drugs to treat disease, and a handful of companies have turned to AI to model which existing drugs might help fight the virus, as well as which new drugs could be developed to save more lives.
  • Track the spread of a pandemic: This work is still in its early stages, but if another pandemic strikes, we may be able to use AI systems to identify the threat early and stop the spread of disease before anyone realises the threat exists.
  • Ensure social distancing: Robots could be deployed in some cases to help minimise exposure to disease, for example by disinfecting a space, and apps could help track who has travelled where and who is standing too close to whom. AI and robotics systems could also track hospital activities to maximise treatment for patients while minimising the exposure of nurses and doctors.

Reviewed by

Ariel Conn

Founder and President, Magnitude 10 Consulting
