The views expressed in this report are those of the authors. Their statements are not necessarily endorsed by the affiliated organisations or the Global Challenges Foundation.

What is artificial intelligence?

AI is non-biological intelligence – more specifically, technology that enables machines to accomplish complex goals. One typically distinguishes between weak or narrow AI, designed and trained for a particular task – such as filtering spam, driving a car or curating Facebook’s newsfeed – and general AI, or Artificial General Intelligence (AGI), which can find solutions to unfamiliar tasks with human-level ability or beyond.

The current quest for AGI builds on systems that learn to make predictions from data – a process generally described as machine learning. One important element of machine learning is the use of neural networks: systems consisting of a large number of simple processing units operating in parallel and arranged in tiers. The first tier receives the raw input, and each successive tier receives the output of the tier preceding it. Neural networks adapt and modify themselves autonomously as they are trained on data, in ways that are typically not transparent to the engineers developing them.
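
As a rough illustration of this tiered structure, the sketch below (written in Python with NumPy; the layer sizes, activation function and input values are arbitrary assumptions, and the training step that would adjust the weights from data is omitted) shows a minimal feedforward network in which each tier processes the output of the tier before it:

    import numpy as np

    rng = np.random.default_rng(0)

    # Three tiers: a 4-value raw input, one hidden tier, and a single output.
    # The sizes are arbitrary and chosen only for illustration.
    layer_sizes = [4, 8, 1]

    # Randomly initialised weights and biases connecting successive tiers;
    # training would adjust these values based on data.
    weights = [rng.standard_normal((m, n))
               for m, n in zip(layer_sizes[:-1], layer_sizes[1:])]
    biases = [np.zeros(n) for n in layer_sizes[1:]]

    def forward(x):
        """Pass the raw input through each tier in turn; every tier
        receives the output of the tier preceding it."""
        activation = x
        for w, b in zip(weights, biases):
            activation = np.tanh(activation @ w + b)
        return activation

    # A single made-up input with four features.
    print(forward(np.array([0.5, -1.0, 0.3, 2.0])))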

If researchers one day succeed in building a human-level AGI, it will probably combine capabilities such as expert systems, natural language processing and machine vision, and mimic cognitive functions that we today associate with a human mind, e.g. learning, reasoning, problem solving and self-correction. However, the underlying mechanisms may differ considerably from those of the human brain, just as the workings of today’s airplanes differ from those of birds.

What is at stake?

In narrow domains, artificial intelligence (AI) systems have reached superhuman performance relatively quickly – for instance, in identifying the location of a photograph or playing complex games like Jeopardy or Go. In the coming decades, there is a high probability that these systems will surpass humans in broader domains. The danger posed by entities more intelligent than us can be understood by considering the power we humans have drawn from being the smartest creatures on the planet. Even if the values of artificial intelligence systems can be aligned with those of their creators, they are likely to have a profound impact on socio-economic structures and the geopolitical balance. But if the goals of powerful AI systems are misaligned with ours, or their architecture is even mildly flawed, they might harness extreme intelligence towards purposes that turn out to be catastrophic for humanity. This is particularly concerning as most organizations developing artificial intelligence systems today focus far more on functionality than on ethics.

Possible scenarios

Most experts agree that a superintelligent AI is likely to be designed as benevolent or neutral and is unlikely to become malevolent of its own accord. Instead, concern centers on the following two scenarios:

  • The AI is programmed to do something devastating: autonomous weapons are AI systems programmed to kill. In the wrong hands, these weapons could easily cause mass casualties. Moreover, an AI arms race could inadvertently lead to an AI war that also results in mass casualties. To avoid being thwarted by the enemy, such weapons would be designed to be extremely difficult to simply “turn off”, so humans could plausibly lose control of such a situation. This risk is present even with narrow AI, but it grows as AI intelligence and autonomy increase.
  • The AI is programmed to do something beneficial, but it develops a destructive method for achieving its goal: this can happen whenever we fail to fully align the AI’s goals with ours, which is strikingly difficult. If you ask an obedient intelligent car to take you to the airport as fast as possible, it might get you there chased by helicopters and covered in vomit – doing not what you wanted but literally what you asked for (a toy sketch of this gap follows the list). If a superintelligent system is tasked with an ambitious societal project, it might wreak havoc as a side effect, and view human attempts to stop it as a threat to be met.
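
The sketch below (Python; the routes, numbers and penalty weights are entirely made up for illustration) shows the gap described in the second scenario: an optimiser that minimises only the objective it was literally given ignores everything the designer implicitly cared about:

    # Toy illustration of goal misalignment: the routes, numbers and
    # weightings below are invented purely for this example.
    routes = [
        # (name,               minutes, traffic violations, comfort 0-1)
        ("sensible highway",   35,      0,                  0.9),
        ("reckless shortcut",  22,      7,                  0.1),
    ]

    def stated_objective(route):
        """What we literally asked for: minimise time to the airport."""
        _, minutes, _, _ = route
        return minutes

    def intended_objective(route):
        """What we actually wanted: fast, but also legal and comfortable
        (the penalty weights are arbitrary assumptions)."""
        _, minutes, violations, comfort = route
        return minutes + 30 * violations + 20 * (1 - comfort)

    print("Literal optimiser picks: ", min(routes, key=stated_objective)[0])
    print("Intended optimiser picks:", min(routes, key=intended_objective)[0])

With these made-up numbers, the literal optimiser picks the reckless shortcut while the intended objective picks the sensible highway – the formal goal is satisfied, the actual intent is not.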

As these examples illustrate, the concern about advanced AI isn’t malevolence but competence. A superintelligent AI will be extremely good at accomplishing its goals, and if those goals are not aligned with ours, we have a problem. You are probably not an evil ant-hater who stomps on ants out of malice, but if you are in charge of a hydroelectric green energy project and there is an anthill in the region to be flooded, too bad for the ants. A key goal of AI safety research is to never place humanity in the position of those ants.

How much do we know?

It is now widely accepted that AI systems capable of performing most tasks as well as a human will eventually be created. According to the median surveyed expert, there is a roughly 50% chance of such AI by 2050 – with at least a 5% chance of superintelligent AI within two years after human-level AI, and a 50% chance within thirty years. The long-term social impact of human-level AI and beyond, however, remains unclear, with extreme uncertainty surrounding experts’ estimates.

The ability to align AI with human values is widely considered to be an important factor in determining the level of risk. However, aside from the open question of which values to select, there are important unsolved technical problems regarding how to make an AI understand human goals, how to make it adopt these goals, and how to ensure that it retains them if it recursively self-improves.

What are key factors affecting risk levels?

  • AI risk is still emerging today, but could escalate rapidly if sudden technological breakthroughs leave inadequate time for social and political institutions to adapt their risk management mechanisms. In particular, if AI development itself becomes automated, new capabilities might emerge extremely quickly.
  • Risks can be exacerbated by geopolitical tensions leading to an AI arms race, development races that cut corners on safety, or ineffective governance of powerful AI.
  • The level of AI risk will partly depend on whether the goals of advanced AI can be aligned with human values – which will require more precise specification of human values and/or novel methods by which AIs can effectively learn and retain those values.

Anthony Aguirre

Co-founder, Future of Life Institute.


Max Tegmark

President and Co-founder, Future of Life Institute.


Ariel Conn

Director of Media and Outreach, Future of Life Institute.


Richard Mallah

Director of AI Projects, Future of Life Institute.


Victoria Krakovna

Co-founder, Future of Life Institute.