• May 29, 2024
UK Launches First Government-Backed Platform for AI Testing

As businesses rush to integrate AI and generative AI technology into their platforms and products, a growing chorus of voices is concerned about the lack of attention being paid to safety and ethics. Governments have generally been slow to keep pace with innovation, but the U.K. recently announced a way to evaluate AI models and tools for safety.

In November 2023, U.K. Prime Minister Rishi Sunak announced the formation of the AI Safety Institute, one of the first government-backed efforts to limit the risks new AI technologies pose to citizens. This week, that body launched a testing platform that will enable startups, academics, enterprises, international governments, and AI developers to evaluate the risks associated with the AI technology they are building.

The Inspect platform is a software library that assesses specific capabilities of new models, evaluating them on multiple parameters including core knowledge, ability to reason, and autonomous capabilities. As new models continue to emerge at an accelerating pace, Inspect is the first government-led effort to provide a shared, accessible approach to safety evaluations, according to officials.
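For readers unfamiliar with what an evaluation library of this kind does in practice, here is a minimal, hypothetical sketch of the general pattern: run a model over a dataset of prompts and score its outputs against expected answers. The names `Sample`, `evaluate`, and `toy_model` are purely illustrative and do not reflect Inspect's actual API.

```python
# Illustrative sketch only -- NOT Inspect's real API. It shows the generic
# shape of an evaluation harness: prompts in, model answers out, scored
# against known targets.
from dataclasses import dataclass
from typing import Callable


@dataclass
class Sample:
    prompt: str   # question put to the model
    target: str   # expected answer used for scoring


def evaluate(model: Callable[[str], str], dataset: list[Sample]) -> float:
    """Return the fraction of samples the model answers correctly."""
    correct = sum(1 for s in dataset if model(s.prompt).strip() == s.target)
    return correct / len(dataset)


# Stub standing in for a real AI system under test.
def toy_model(prompt: str) -> str:
    return "4" if "2 + 2" in prompt else "unknown"


dataset = [
    Sample(prompt="What is 2 + 2?", target="4"),
    Sample(prompt="What is the capital of France?", target="Paris"),
]

print(evaluate(toy_model, dataset))  # 0.5
```

A real platform like Inspect layers far more on top of this core loop, such as multi-step reasoning tasks, tool use, and standardized scoring, but the basic prompt-run-score cycle is the same idea.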

“We have been inspired by some of the leading open source AI developers—most notably projects like GPT-NeoX, OLMo or Pythia which all have publicly available training data and OSI-licensed training and evaluation code, model weights, and partially trained checkpoints. This is our effort to contribute back,” said AI Safety Institute Chair Ian Hogarth. “We hope to see the global AI community using Inspect to not only carry out their own model safety tests, but to help adapt and build upon the open source platform so we can produce high-quality evaluations across the board.”