The agreement will see the UK’s new AI Safety Institute and its US counterpart collaborate to formulate a framework to test the safety of large language models.

The US and the UK have signed an agreement to test the safety of the large language models (LLMs) that underpin AI systems. The agreement, a memorandum of understanding (MoU) signed in Washington on Monday by US Commerce Secretary Gina Raimondo and UK Technology Secretary Michelle Donelan, will see both countries align their scientific approaches and work closely together to develop suites of evaluations for AI models, systems, and agents.

The work of developing frameworks to test the safety of LLMs, such as those developed by OpenAI and Google, will be taken up immediately by the UK’s new AI Safety Institute (AISI) and its US counterpart, Raimondo said in a statement.

The agreement comes just months after the UK government hosted the global AI Safety Summit in November last year, at which several countries, including China, the US, the EU, India, Germany, and France, agreed to work together on AI safety. The countries signed an agreement, dubbed the Bletchley Declaration, to form a common line of thinking on overseeing the evolution of AI and ensuring the technology advances safely. That declaration followed an open letter, signed in May last year by hundreds of tech industry leaders, academics, and other public figures, warning that the evolution of AI could lead to an extinction event.

OpenAI, Meta, Nvidia to cooperate with AI Safety Institutes

The US has also taken steps to regulate AI systems and the LLMs behind them. In November last year, the Biden administration issued a long-awaited executive order that set out clear rules and oversight measures to ensure that AI is kept in check while also providing paths for it to grow.
Earlier this year, the US government created an AI safety advisory group that includes AI creators, users, and academics, with the goal of putting guardrails on AI use and development. The advisory group, named the US AI Safety Institute Consortium (AISIC) and housed within the National Institute of Standards and Technology, was tasked with developing guidelines for red-teaming AI systems, evaluating AI capabilities, managing risk, ensuring safety and security, and watermarking AI-generated content. Several major technology firms, including OpenAI, Meta, Google, Microsoft, Amazon, Intel, and Nvidia, joined the consortium to support the safe development of AI.

Similarly, in the UK, firms such as OpenAI, Meta, and Microsoft have signed voluntary agreements to open up their latest generative AI models for review by the country’s AISI, which was set up at the UK AI Safety Summit.

The EU has also made strides in regulating AI systems. Last month, the European Parliament approved the world’s first comprehensive law governing AI. According to the final text, the regulation aims to promote the “uptake of human-centric and trustworthy AI, while ensuring a high level of protection for health, safety, fundamental rights, and environmental protection against harmful effects of artificial intelligence systems.”