2024-04-10
U.S. and U.K. join forces to bolster safety of AI language models
In a significant move to address growing concerns about the rapid advancement of artificial intelligence (AI), the United States and the United Kingdom have signed a landmark agreement on testing the safety of the large language models (LLMs) that underpin AI systems. The agreement, formalized through a Memorandum of Understanding (MoU) signed in Washington on Monday, aims to harmonize the two nations' scientific approaches and foster close cooperation in developing evaluation frameworks for AI models, systems, and agents.
The MoU was signed by U.S. Secretary of Commerce Gina Raimondo and U.K. Science and Technology Secretary Michelle Donelan, underscoring the global urgency of ensuring the responsible development and deployment of AI technologies. According to Raimondo, work on mechanisms for verifying the safety of AI models, such as those developed by industry giants like OpenAI and Google, will begin immediately at the newly established U.K. AI Safety Institute (AISI) and its American counterpart.
This agreement comes just a few months after the U.K. government hosted the global AI Safety Summit at Bletchley Park last November, where countries including China, the United States, India, Germany, and France, along with the European Union, agreed to work together on AI safety. The participating nations signed the "Bletchley Declaration," a commitment to forming a common approach to monitoring the development of AI and ensuring the safe advancement of this transformative technology.
The urgency behind these collaborative efforts stems from growing concern within the scientific and technological communities. Last year, hundreds of industry leaders, scientists, and public figures signed an open letter warning that unchecked development of AI could lead to the extinction of humanity.
The United States has also taken proactive steps to regulate AI systems and the LLMs behind them. In October 2023, the Biden administration issued a long-awaited executive order outlining rules and oversight measures designed to mitigate the risks associated with AI while creating the conditions for its responsible development.
Earlier this year, the U.S. government established an advisory group on AI safety, the U.S. AI Safety Institute Consortium (AISIC), which brings together AI developers, users, and researchers. The consortium, operating under the auspices of the National Institute of Standards and Technology (NIST), has been tasked with developing recommendations on red-teaming AI systems, evaluating AI capabilities, risk management, safety and security, and watermarking AI-generated content.
Several major technology companies, including OpenAI, Google, Microsoft, Amazon, Intel, and Nvidia, have joined AISIC, underscoring the industry's commitment to the safe and responsible development of AI.
As AI technologies continue to advance at an unprecedented pace, the collaboration between the United States and the United Kingdom represents a significant step towards establishing global standards and protocols for AI security. By combining their scientific expertise and resources, these nations aim to proactively address the challenges posed by AI, fostering innovation while safeguarding against potential risks to humanity.