US and UK team up to test safety of AI models

OpenAI, Google, Anthropic and other companies developing generative AI continue to improve their technologies and release better and better language models. To create a common approach to independent safety assessment of these models as they are released, the UK and US governments have signed a memorandum of understanding. Together, the UK’s AI Safety Institute and its US counterpart, which was announced by Vice President Kamala Harris but has not yet begun operations, will develop test suites to assess the risks and ensure the safety of “the most advanced AI models.”

They plan to share technical knowledge, information and even personnel as part of the partnership, and one of their initial goals appears to be a joint testing exercise on a publicly available model. British Science Minister Michelle Donelan, who signed the agreement, told the Financial Times that they “really need to act quickly” because they expect a new generation of AI models to come out within the next year. They think these models could be “completely game-changing,” and they still don’t know what they might be capable of.

This partnership is reportedly the first bilateral agreement on AI safety in the world, although the United States and the United Kingdom intend to partner with other countries in the future. “AI is the defining technology of our generation. This partnership will accelerate the work of our two institutes across the full spectrum of risks, whether to our national security or to our society as a whole,” declared US Secretary of Commerce Gina Raimondo. “Our partnership makes it clear that we are not running away from these concerns – we are confronting them. Through our collaboration, our institutes will gain a better understanding of AI systems, conduct more robust assessments, and issue more rigorous guidance.”

While this particular partnership focuses on testing and evaluation, governments around the world are also developing regulations to control AI tools. Last March, the White House issued an executive order aimed at ensuring that federal agencies only use AI tools that “do not endanger the rights and safety of the American people.” Just weeks before, the European Parliament approved sweeping legislation to regulate artificial intelligence. It will ban “AI that manipulates human behavior or exploits people’s vulnerabilities” and “biometric categorization systems based on sensitive characteristics,” as well as the “non-targeted scraping” of faces from CCTV footage and the web to create facial recognition databases. Additionally, deepfakes and other AI-generated images, videos and audio will need to be clearly labeled as such under its rules.
