UK and US sign landmark agreement to regulate AI risks
The UK and US have signed a landmark artificial intelligence (AI) agreement to collaborate on testing and mitigating the risks of AI models.
The deal is the first global bilateral agreement on AI safety, bringing together the UK and US governments to share technical knowledge, information and talent on AI safety. It was signed by UK Technology Secretary Michelle Donelan and US Commerce Secretary Gina Raimondo in Washington on Monday.
Donelan said: “The next year is when we’ve really got to act quickly because the next generation of models are coming out, which could be complete game-changers, and we don’t know the full capabilities that they will offer yet. The fact that the United States, a great AI powerhouse, is signing this agreement with us, the United Kingdom, speaks volumes for how we are leading the way on AI safety.”
Under the deal, the UK’s new AI Safety Institute and its US counterpart, which has yet to begin work, will exchange research expertise with the aim of mitigating the risks of AI, including how to independently evaluate private AI models from organisations such as OpenAI. The partnership is modelled on the security collaboration between GCHQ and the National Security Agency.
Ramprakash Ramamoorthy, Head of AI Research at Zoho Corporation, commented: “As the UK strives to be recognised as a leader in AI, collaboration with our US counterparts to share knowledge and resources is essential. AI is already playing a significant role for businesses in areas such as data analysis, forecasting and customer experience to enhance the efficiency of day-to-day operations, but to maximise the benefits AI offers, it is also important to promote trust and safety in its development and adoption. Ensuring AI’s development and use are governed by trust and safety is paramount. Taking safeguards to protect training data, for instance, mitigates risks and bolsters confidence among those deploying AI solutions, leading to superior outcomes for customers.”
John Kirk, Deputy CEO at Inspired Thinking Group commented: “AI-powered tools and platforms are helping to transform the future of marketing, empowering creatives and relieving the burden of the ever-increasing content demand, all while delivering brand-compliant localisation across all content.”
“It’s promising to see collaboration between the UK and US in such a rapidly evolving area. Ensuring a well-managed governance model to support the development of AI and content operations in the creative industries can help mitigate risks and any hesitancy towards the adoption of AI in day-to-day applications.”
Despite the focus on risk, Donelan insisted that the UK has no plans to regulate AI more broadly in the short term. Meanwhile, President Joe Biden has taken a stricter position on AI models that threaten national security. Elsewhere, the EU AI Act, passed in March, takes a tougher stance on AI regulation, seeking to slow rapid advancement and put up guardrails.
Dr Henry Balani, Global Head of Industry & Regulatory Affairs at Encompass Corporation, said: “Generative AI, in particular, has a huge role to play across the financial services industry, improving the accuracy and speed of detection of financial crime by analysing large data sets, for example. Mitigating the risks of AI, through this collaboration agreement with the US, is a key step towards mitigating risks of financial crime, fostering collaboration and supporting innovation in a crucial, advancing area of technology.
“GenAI is here to augment the work of staff across the financial services sector, and particularly Know Your Customer (KYC) analysts, by streamlining processes and combing through vast data sets quickly and accurately. But for this to be truly effective, banks and financial institutions need to first put in place robust digital and automated processes to optimise data quality and deliver deeper customer insights, which can help to fuel the use of GenAI.”