“There will be some agreements that we broker,” Britain’s secretary of state for science, innovation and technology, Michelle Donelan, said in an interview ahead of a key summit in Seoul. “We’ll be going to ask companies how they can go even further in showing they’ve built safety into the release of their models.”
Yet diverging approaches have already emerged among major nations: the UK has not wanted to “rush to regulate”, while the EU passed a sweeping law placing guardrails on the technology earlier this year, and some US cities and states have already passed laws limiting the use of AI in specific areas.
Donelan defended Britain’s approach thus far, saying the government had prioritised getting to grips with the risks posed by AI and encouraging an international focus on the issue, such as through the Bletchley summit. She also said any legislation passed in the UK would have been out of date by the time it came into force.
“We want to lean in to and support innovation,” Donelan said, as the British government also announced a new overseas office in San Francisco focused on AI safety. “There will always be slightly different approaches; what we want is commonality on taking this seriously.”