November 21, 2024
U.S. Department of Commerce Expands AI Safety Consortium

Last week, the U.K. government touted a testing platform released by its AI Safety Institute, one of the first government-backed bodies attempting to establish guardrails around AI development, a technology that is outpacing every effort to regulate it. Several months ago, the U.S. launched its own effort, under the auspices of the Department of Commerce, aimed at the same goal: ensuring safe, trustworthy AI.

The National Institute of Standards and Technology (NIST), housed within the Commerce Department, announced in February the U.S. AI Safety Institute Consortium, a group of more than 200 AI stakeholders that will develop science-based, empirically backed guidelines for AI measurement and policy that assess risks to people and businesses. Members of the consortium include businesses, academic institutions, law firms and other stakeholders.

Just a few days ago, global law firm Clifford Chance was admitted to the consortium, joining other members including software company Adobe, global consultancy Booz Allen Hamilton, the California Department of Technology’s Office of Information Security, Carnegie Mellon University, the Center for AI Safety, networking giant Cisco, The New York Public Library, the University of Notre Dame, enterprise software company Salesforce, Stanford University and hundreds of others.

“AI is moving the world into very new territory. And like every new technology, or every new application of technology, we need to know how to measure its capabilities, its limitations, its impacts,” said Under Secretary of Commerce for Standards and Technology and NIST Director Laurie E. Locascio. “That is why NIST brings together these incredible collaborations of representatives from industry, academia, civil society and the government, all coming together to tackle challenges that are of national importance.”