Should We Trust a Black Box to Safeguard Human Rights? A Comparative Analysis of AI Governance

Abstract

The race to seize the numerous economic, security, and social opportunities made possible by artificial intelligence (AI) is on, with states, intergovernmental organizations, cities, and firms publishing an array of AI strategies. Simultaneously, various efforts are underway to identify and distill common AI norms. Thus far, however, there has been limited effort to mine existing AI strategies to determine whether widely cited AI norms such as transparency, human-centered design, accountability, awareness, and public benefit are in fact entering into these strategies. Such data is vital to identify areas of convergence and divergence that could highlight opportunities for further norm development in this space by crystallizing State practice.

This Article analyzes more than forty existing national AI strategies, paying particular attention to the US context, comparing those strategies with private-sector efforts, and addressing common criticisms of this process within a polycentric framework. Our findings support the contention that State practices are converging around certain AI principles, focusing primarily upon public benefit. AI is a critical component of international peace, security, and sustainable development in the twenty-first century, and reaching consensus on its governance will therefore be vital to building bridges and trust.
