The views expressed in this report are those of the authors. Their statements are not necessarily endorsed by the affiliated organisations or the Global Challenges Foundation.

Until recently, advanced artificial intelligence was still thought of as science fiction, and researchers in industry, academia, and government were concerned primarily with simply making it work. Only in the last few years, as AI has become more advanced and commonplace, have more people considered the possible risks of advanced AI.

Since the general perception is that human-level AI is at least decades away, there has been relatively little concrete planning for it. However, the timelines are uncertain. Meanwhile, the problem of controlling very advanced AI, or aligning it with human goals, is extremely difficult and may require decades to solve, motivating current research on the problem. In the shorter term, current or near-future AI also poses less extreme threats, for example in warfare, finance, cybersecurity, and political institutions, threatening privacy, employment, and income equality; these threats need to be managed now and will only grow in magnitude.

Such concerns are currently managed by the many existing laws and institutions that apply to the particular fields where AI plays a role. However, the governance of AI will present a unique challenge requiring special consideration, some of it on a short timescale. A particular and timely issue concerns AI systems deliberately designed to kill or destroy, known as “Lethal Autonomous Weapons Systems” (LAWS). LAWS are more likely to be used offensively than defensively, and an arms race could be highly destabilizing or have strong undesired side-effects, such as empowering terrorists and other non-state actors. There is ongoing debate and formal United Nations discussion regarding the use of international agreements to curtail LAWS development and deployment, supported by thousands of AI researchers.

In fact, various actions by AI researchers in academia and industry – from signing open letters opposing autonomous weapons to boycotting universities that pursue AI weapons research – have helped motivate governments at local and federal levels to take a stance on autonomous weapons, with 26 countries supporting an outright ban on LAWS at the time of writing. These efforts were boosted in late 2017 when the Future of Life Institute (FLI) released its popular video Slaughterbots, which introduced the public to some of the greatest threats posed by LAWS.

Another major issue coming onto the radar is automation and its potential large-scale economic impacts, including massive job losses and increased income inequality.

There has been significant debate about the extent to which AI will ultimately affect jobs and economic inequality, with some arguing that AI will be a boon to the job market and others predicting unemployment on a scale never seen before. Some governments are starting to take the risk seriously, as shown by the AI Jobs Act, introduced in the US in early 2018. Efforts are also being made at more local levels to address potential job loss. For example, the Jobs of the Future Fund, proposed by Jane Kim of the San Francisco Board of Supervisors, is essentially a “robot tax” that would require companies to pay into a fund for every human whose job is displaced by automation.

For the longer-term concerns surrounding highly advanced AI, there are essentially no special-purpose formal structures in place at the government level to manage risk, though recent legislation in the European Union attempts to set a roadmap for developing AI-related policies. It is highly unclear what formal structures at the governmental level would currently be appropriate concerning advanced AI, and for now, investigation of and planning for advanced AI risk occurs mainly in the academic, corporate, and non-profit communities.


In the past few years, many non-profits (MIRI, FHI, CSER, FLI, CFI, CHAI, OpenAI) have taken it upon themselves to develop early solutions to help push AI development in safer directions. Groups such as the Partnership on AI, the Institute of Electrical and Electronics Engineers (IEEE), and some groups within governments have also begun trying to understand those risks. These initiatives and structures operate essentially on a voluntary basis. The IEEE “Ethically Aligned Design” initiative and the Asilomar AI Principles are seen as best practices and general aspirational principles, but they have no specific legal authority or binding force. The nascent Partnership on AI has tenets that are formally binding for members of the partnership, though the enforcement mechanism is unclear and the tenets provide only weak constraints on AI development. Generally, the most effective enforcement mechanism within the AI community today is social stigma, which can harm recruitment and participation for groups and individuals.

Initiatives by these and other risk-oriented groups have led to a dramatic increase in AI safety sessions at professional AI conferences and meetings, as well as significantly more research on the technical side. At this point, the most effective short-term strategy for ensuring that AI remains beneficial as it advances may be continued and enhanced support for such AI safety organizations, along with government grant funding for AI safety research, to nurture a robust and growing AI safety research community permeating both academia and industry. This could result both in technical solutions being available by the time they are needed, and in a pool of technically skilled AI safety experts from which governments can recruit expertise when needed.

Projects to know about

Over the past decade, various initiatives have been set up to explore potential safety issues associated with the development of artificial intelligence. Seven of those deserve special mention.


  • OpenAI, a non-profit research organization founded under the leadership of Elon Musk, aims to discover and enact a path to safe artificial general intelligence, making high-powered AI systems more widely available and independent of any corporate profit motive or government structure.

  • DeepMind, part of the Alphabet Group, has developed several breakthrough AI systems including AlphaGo. It also has a strong safety focus, with an internal ethics board and safety research group.

  • The Machine Intelligence Research Institute (MIRI) is a non-profit organization originally founded in 2000 to research safety issues related to the development of strong AI. The British non-profits the Future of Humanity Institute (FHI), the Centre for the Study of Existential Risk (CSER), and the Leverhulme Centre for the Future of Intelligence (CFI) have joined this research effort.

  • The Future of Life Institute, established in 2014 with a mission to support the beneficial use of technology, granted 7 million dollars in 2015 to 37 research teams dedicated to “keeping AI robust and beneficial”. 

  • Partnership on AI, created in 2016, is a consortium of industry and non-profit members with an aim to establish best practices to maximize AI’s widespread benefit.

  • The Strategic Artificial Intelligence Research Centre (SAIRC) is a joint Oxford-Cambridge initiative housed at the Future of Humanity Institute. It aims to solve the technical challenge of building AI systems that remain safe even when highly capable, and to better understand and shape the strategic landscape of long-term AI development.

  • AI Now is “an interdisciplinary research center dedicated to understanding the social implications of artificial intelligence.” It looks at issues related to civil liberties, bias, jobs, and safety, especially those that are already being affected by AI or are expected to be affected soon.

It’s worth noting that many government groups have been established in the last year or so to observe and join these efforts, including, but not limited to: the European AI Alliance, the European Commission Working Group on the Ethics of AI, the UK Parliament Select Committee on AI, the UK Government’s Centre for Data Ethics and Innovation, and the New York Algorithm Monitoring Task Force.

The AI research and development community has taken an unusually proactive stance toward self-governance, with businesses organizing their own ethics committees and developing incentive systems for research and development, independently of national governments or the UN. While this ensures that the development of norms and guidelines is conducted by the people with the most expertise in the field, it has also raised concerns about potential conflicts of interest and balanced representation.

Anthony Aguirre

Co-founder, Future of Life Institute.


Max Tegmark

President and Co-founder, Future of Life Institute.


Ariel Conn

Director of Media and Outreach, Future of Life Institute.


Richard Mallah

Director of AI Projects, Future of Life Institute.


Victoria Krakovna

Co-founder, Future of Life Institute.