In recent years, the risks of artificial intelligence have become much more tangible, with real-world harms appearing regularly in the news. The best-known problems surround Facebook, including the Cambridge Analytica scandal and the use of AI and fake news to interfere with elections. But countless other AI issues and concerns have made headlines at prominent news outlets, leading the public and government officials alike to view the development of AI with more scrutiny.

In the fall of 2019, researchers published “The Global Landscape of AI Ethics Guidelines,” in which they “identified 84 documents containing ethical principles or guidelines for AI,” 88% of which were released after 2016. These documents were written by some of the world’s most prominent companies and organizations, including Google, SAP, the European Commission’s High-Level Expert Group on Artificial Intelligence, the Organisation for Economic Co-operation and Development (OECD), the IEEE (in its Ethically Aligned Design), the UK House of Lords, and the US Department of Defense (which adopted its AI principles after the Landscape paper was published), among many others.

The Landscape found that “eleven overarching ethical values and principles have emerged”: “transparency, justice and fairness, non-maleficence, responsibility, privacy, beneficence, freedom and autonomy, trust, dignity, sustainability, and solidarity.”

To address these issues, some non-governmental groups, like AI Now, have been tracking problems already cropping up with AI, including bias, racism, discrimination, violations of human rights, job loss, and more. Meanwhile, other groups have focused on supporting AI developed for good, including the United Nations AI for Good Global Summit and the nascent $1,000,000 AAAI Squirrel AI Award for Artificial Intelligence for the Benefit of Humanity.

However, in all cases, these efforts have been little more than advisory, offering guidelines and suggestions rather than concrete laws and regulations. This has proven woefully insufficient in recent months, as companies like Google have garnered negative public attention for their struggles to address ethics and discrimination, even within their own organizations. Yet for now, companies and countries are still expected to develop AI for good with little real oversight or direction.

Autonomous weapons

Autonomous weapons systems are generally understood to be weapons that can select and engage targets without a person overseeing the decision-making process.

The idea of such weaponry has triggered intense ethical and legal debates around the world, as people try to determine the extent to which an algorithm can (or should) decide who lives, who dies, and how. Though fully autonomous weapons don’t exist yet, weapons with increasingly autonomous and intelligent functions made headlines in 2020 and 2021, and many experts are concerned these systems will be used without sufficient ethical and legal guidelines and norms.

Recently, the International Committee of the Red Cross recommended “that States adopt new legally binding rules,” offering three specific suggestions for aspects of autonomy that should be ruled out or regulated. Leadership at the Brookings Institution has likewise suggested that negotiating global treaties will be easier now, “before AI capabilities are fully fielded and embedded in military planning.”

However, member states of the United Nations Convention on Certain Conventional Weapons have considered this question for nearly a decade, and they have yet to reach consensus on legal definitions or on regulations governing the development and use of such weapons.

Autonomous weapons pose another threat too: if countries race to develop more powerful autonomous weapons, they could inadvertently find themselves in a race for advanced AI more generally. In such a race, developers may cut corners in their rush to be first, and the resulting AI systems are more likely to behave unpredictably or cause harm.
