
Can Sunak secure AI ‘leadership’ status for the UK?

Throughout almost 80 years of the ‘special relationship’ between the UK and the US, the diplomatic allies have had their ups and downs, from the famously close Thatcher-Reagan and Blair-Bush relationships of the ’80s and ’90s to President Joe Biden’s recent engagement with Northern Ireland and the Irish peace process. Now, with the world racing to finalise artificial intelligence (AI) regulations, British Prime Minister Rishi Sunak’s first official visit to the US saw him meet with Biden to discuss everything from Northern Ireland and Ukraine to energy security, and even attend a baseball game. High on Sunak’s agenda for the trip was AI; more specifically, securing the UK a ‘leadership’ role in AI development.

The state of play

Calls for caution and regulation are mounting, with a recent letter published by AI scientists and industry experts warning that AI could pose an ‘extinction risk’ on par with pandemics and nuclear war. This follows a letter from big tech earlier in the year urging a pause on major projects so that the capabilities and dangers of advanced generative AI systems could be properly studied and the risks mitigated.

Governments around the world have been responding, laying out their intentions for regulation. The UK’s Department for Science, Innovation and Technology (DSIT) published a whitepaper that laid the groundwork for Sunak’s proposal, with the government seeking to balance the benefits of the technology against the risks posed by uncontrolled development. The whitepaper focused primarily on the benefits, with the more risk-focused element highlighted at the recent G7 summit in Japan. In the face of potential threats to society, the economy, and national security, Sunak called for global ‘guardrails’ to mitigate the risks.

He followed this by meeting with the CEOs of major AI companies to discuss what those regulatory guardrails might look like. Then, hot on the heels of his US visit, he took to the London Tech Week stage to cement his calls for British leadership, pledging to “make this the best country in the world to start, grow, and invest in tech businesses”. He announced a £100 million expert taskforce for AI safety and the first-ever global summit on AI safety, to be hosted by the UK later this year.

Assessing the AI options

Sunak has a few potential strategies for a London-based global AI authority that he likely explored with Biden.

According to reports, the most popular idea is one modelled on the International Atomic Energy Agency (IAEA). This Vienna-based body focuses on nuclear safety, promoting essential standards and checks and monitoring the use of nuclear energy. In particular, Sam Altman, chief executive of OpenAI – the creator of ChatGPT – backs this approach, arguing that such a new body should have the power to inspect systems, compel audits, and test new products for compliance with certain safety standards.

Another option is an international body based on CERN. This Geneva-based international particle physics laboratory conducts its research in a tightly controlled ethical and physical environment, and a similar model could provide a safe, shared setting for AI research.

Meanwhile, the opposition Labour party has voiced its preference: that AI should be licensed like medicines or nuclear power, meaning tech developers would be barred from working on advanced AI tools unless they hold a pre-approved licence. These approaches all focus on regulating the way AI is developed and managed rather than banning certain technologies, a route the EU has taken to mitigate the risks associated with facial recognition tech.

So, why the UK?

The British government’s strategy so far has been to focus on business growth and creating an environment in which AI development can flourish, much like the reputation London has cultivated over many years as a leading fintech hub. Meanwhile, the EU is refining its Artificial Intelligence Act, which is expected to take a much harder line on oversight of the sector. While this regulation is yet to be finalised, it may prove too strict for an EU country to become a world leader in AI development.

Across the pond, the US is still considering how it will regulate the industry, but its approach is currently expected to be less severe than Europe’s. However, given that it’s home to many of the big AI firms, it’s unlikely to be trusted as the centre of global regulation. This means the UK is well placed to be an honest broker between the US and EU regulatory approaches.

Sunak’s post-Brexit British government could become a halfway house between the powerful US and EU trading blocs, and the UK also has a working relationship with China, broadening its international appeal. Whether the US will want a bigger say in the UK’s bid for AI innovation and regulation leadership, however, remains to be seen.

While there’s a lot to discuss, it is clear from the sentiments of leading AI scientists and big tech that action must be taken soon, or innovation risks outpacing any regulatory efforts.

Ultimately, we know the UK PM is keen to secure international alignment on his leadership approach. Number 10 has been clear that it sees this as a way to ensure we can “benefit from the opportunities but manage the risks.” While no particular model has been endorsed yet, we can expect AI announcements from the Government to continue in the lead-up to the UK’s inaugural global AI safety summit.
