Researchers assess global agreement on the ethical use of AI

Measuring worldwide consensus on the ethical use of artificial intelligence (AI) is a complicated and continuing process involving surveys, research projects, and international initiatives.

A team of Brazilian researchers conducted a systematic review and meta-analysis of AI guidelines from around the world to examine the global state of AI ethics. They found that, while the majority of the guidelines addressed privacy, transparency, and accountability, relatively few emphasized truthfulness, intellectual property, or children’s rights. Furthermore, most of the guidelines described ethical principles and values without proposing practical ways to implement them or advocating for legally binding regulation.

“Establishing clear ethical guidelines and governance structures for the deployment of AI around the world is the first step toward promoting trust and confidence, mitigating risks, and ensuring that its benefits are fairly distributed,” says social scientist and co-author James William Santos of the Pontifical Catholic University of Rio Grande do Sul.

“Previous work primarily centered on North American and European documents, prompting us to actively seek and include perspectives from regions such as Asia, Latin America, Africa, and beyond,” says lead author Nicholas Kluge Corrêa of the Pontifical Catholic University of Rio Grande do Sul and the University of Bonn.

The researchers undertook a systematic review of policy documents and ethical guidelines published between 2014 and 2022 to establish whether a worldwide consensus exists on the ethical development and use of AI, and to help guide such a consensus. They gathered 200 documents on AI ethics and governance from 37 countries and six continents, written in or translated into five languages (English, Portuguese, French, German, and Spanish). The documents included recommendations, practical guidelines, policy frameworks, legal milestones, and codes of conduct.

The researchers then performed a meta-analysis of these documents to identify the most frequent ethical principles, investigate their global distribution, and assess biases based on the types of organizations or persons authoring these documents.

Transparency, security, justice, privacy, and accountability were the most frequently cited principles, appearing in 82.5%, 78%, 75.5%, 68.5%, and 67% of the documents, respectively. Labor rights, truthfulness, intellectual property, and children’s and adolescents’ rights were the least prevalent, appearing in 19.5%, 8.5%, 7%, and 6% of the documents, respectively; the authors suggest that these values deserve more attention.

For example, with the emergence of generative AI technologies such as ChatGPT, the principle of truthfulness (the idea that AI should provide accurate information) is becoming increasingly relevant. And because AI has the potential to displace workers and transform the way we work, concrete steps are needed to avoid widespread unemployment or monopolies.

The majority of the guidelines (96%) described normative principles, ethical values that should be respected during AI development and use, whereas just 2% recommended practical methods for implementing AI ethics and only 4.5% proposed legally binding forms of AI regulation.

“It’s mostly voluntary commitments that say, ‘these are some principles that we hold important,’ but they lack practical implementations and legal requirements,” Santos said. “If you’re trying to build AI systems or if you’re using AI systems in your enterprise, you have to respect things like privacy and user rights, but how you do that is the gray area that does not appear in these guidelines.”

The researchers also identified biases in where these guidelines were produced and who produced them, including a gender disparity in authorship. Although 66% of the documents carried no authorship information, the named authors of the remaining documents were more often men: 549 (66%) had male names, while 281 (34%) had female names.

Geographically, the majority of the guidelines came from Western Europe (31.5%), North America (34.5%), and Asia (11.5%), with fewer than 4.5% originating in South America, Africa, and Oceania combined. Some of these distributional discrepancies may be attributable to language and public-access constraints, but the team argues that the findings indicate that many regions of the Global South are underrepresented in the global conversation on AI ethics. This includes countries heavily invested in AI research and development, such as China, whose output of AI-related research grew by more than 120% between 2016 and 2019.

“Our research demonstrates and reinforces our call for the Global South to wake up, as well as a plea for the Global North to be ready to listen and welcome us,” says co-author Camila Galvão of the Pontifical Catholic University of Rio Grande do Sul. “We must not lose sight of the fact that we live in a plural, unequal, and diverse world. We must remember the voices that haven’t had the chance to express their choices, explain their surroundings, and perhaps tell us something we don’t know.”

As well as incorporating more voices, the researchers say that future efforts should focus on how to practically implement principles of AI ethics. “The next step is to build a bridge between abstract principles of ethics and the practical development of AI systems and applications,” says Santos.