Rishi Sunak, the UK Prime Minister, has unveiled an agreement among like-minded countries and AI companies to conduct safety testing on AI models before their release. The initiative builds on the G7's Hiroshima Process and the Global Partnership on AI.
Under the plan, newly developed AI models will be evaluated by the AI Safety Institute, the successor to the Frontier AI Taskforce led by Ian Hogarth. Whether the agreement is voluntary or legally binding remains unclear.
Sunak stressed that the public sector will play a role in upholding safety by assessing cutting-edge AI models.
Notably, AI companies attending the summit, including OpenAI (creator of ChatGPT) and Elon Musk’s xAI, have granted the UK special access to their technology.
“Our Safety Institute is committed to establishing an evaluation process to assess the next generation of models before their deployment next year,” Sunak explained.
The Labour Party also said that, if it comes to power, it would require AI companies to submit new models to independent safety tests before release.
The announcement came at the AI Safety Summit, held on November 1 and 2, the first in a planned series of international meetings on AI's potential risks.
The summit's invitation list included China, a decision that drew debate amid geopolitical tensions and espionage allegations. Sunak acknowledged the complexities of inviting China and expressed uncertainty about its commitment to the summit's agreements. In the end, however, China's attendance and its endorsement of the Bletchley Declaration were deemed a success.
In parallel, the United States recently launched its own AI safety institute, following President Joe Biden's executive order requiring AI developers to share safety test results with the US government. The move underscores the growing global recognition of AI safety as a paramount concern.