In recent years, the risks of artificial intelligence have become much more tangible, with real-world threats appearing regularly in news articles. The best-known problems surround Facebook, with the Cambridge Analytica scandal and the use of AI and fake news to interfere with elections. But countless AI issues and concerns have made headlines at prominent news outlets, leading the public and government officials alike to scrutinise the development of AI more closely.

The Organisation for Economic Co-operation and Development (OECD) AI Policy Observatory has identified “over 300 AI policy initiatives from 60 countries,” including 36 policy initiatives in the United States and 22 in the European Union. Though AI policy in most countries has focused more on research and development, such as China’s plan to become the world leader in AI by 2030 and the American Artificial Intelligence Initiative, many efforts do mention safe and beneficial AI.

Additionally, many organisations have taken it upon themselves to create their own principles and guidelines for developing AI for good.

In late 2019, researchers published the Global Landscape of AI Ethics, in which they “identified 84 documents containing ethical principles or guidelines for AI,” 88 per cent of which were released after 2016. These documents were written by some of the world’s most prominent companies and organisations, including Google, SAP, the European Commission’s High-Level Expert Group on Artificial Intelligence, the OECD, the IEEE (through its Ethically Aligned Design initiative), the UK House of Lords, the US Department of Defense (the latter adopted AI principles after the Landscape paper was published), and many more.

The Landscape paper found that “eleven overarching ethical values and principles have emerged”: “transparency, justice and fairness, non-maleficence, responsibility, privacy, beneficence, freedom and autonomy, trust, dignity, sustainability, and solidarity.”

To address these issues, some non-governmental groups, like AI Now, have been tracking problems that are already cropping up with AI, including bias, racism, discrimination, violations of human rights, job loss and more. Meanwhile, other groups have focused on emphasising and supporting AI developed for good, including the United Nations AI for Good Global Summit and the nascent US$1 million AAAI Squirrel AI Award for Artificial Intelligence for the Benefit of Humanity.

Legislation is still in its early stages, and experts anticipate that governments will become increasingly interested in AI development and use. For now, though, companies and countries face minimal oversight as they develop AI.

"Though fully- autonomous weapons don’t exist yet, the idea of such weaponry has triggered intense ethical and legal debates around the world, as people try to determine the extent to which an algorithm can decide who lives and who dies and how."

Autonomous weapons

Autonomous weapons systems are weapons that can select and attack targets without a human overseeing the decision-making process.

Though fully autonomous weapons don’t exist yet, the idea of such weaponry has triggered intense ethical and legal debates around the world, as people try to determine the extent to which an algorithm can decide who lives and who dies, and how. Member states of the United Nations Convention on Certain Conventional Weapons have considered this question for many years but have yet to reach consensus on legal definitions or on regulations governing the development and use of such weapons.

Meanwhile, weapons systems are becoming increasingly autonomous; without clear definitions of what is and isn’t acceptable, many experts expect we’ll see autonomous weapons systems within a matter of years.

Autonomous weapons pose another threat, too: if countries race to develop more powerful autonomous weapons, they could inadvertently find themselves in a race for more advanced AI in general. In such a race, developers may cut corners or get sloppy in their rush to be first, and the resulting artificial intelligence systems are more likely to behave unpredictably or cause harm.