How are the UK’s pro-innovation stance, the EU’s comprehensive AI Act, and the US’s shift towards AI de-regulation shaping the future of artificial intelligence? With AI legislation and regulation evolving at pace, what are the key differences you need to be aware of?
In this blog, Head of Service Architecture, Tristan Watkins, explains the current AI regulatory landscape across the UK, EU and US. Get up to speed on the latest on compliance, security and international co-operation, so you understand the implications of regulation for your own AI project.
In May 2023, before any major AI regulation existed, Sam Altman testified before the US Congress as part of a three-hour session on AI risk and regulation. This was how he opened his statement on working with governments.
“OpenAI believes that regulation of AI is essential, and we’re eager to help policymakers as they determine how to facilitate regulation that balances incentivizing safety while ensuring that people are able to access the technology’s benefits. It is also essential that a technology as powerful as AI is developed with democratic values in mind. OpenAI is committed to working with US policymakers to maintain US leadership in key areas of AI and to ensuring that the benefits of AI are available to as many Americans as possible.”
In that session, some senators were astounded that an innovative company sat before them, requesting to be regulated. Not long after, Altman would embark on a global tour of governments, including EU policymakers. And we now see some of the tenets of regulation requested by OpenAI in EU law.
In recent months, we’ve seen huge changes in the US and UK governments, and this has manifested in changes to regulation and investment. US regulation had been introduced and has since been torn up. We are also now on the EU AI Act’s enforcement schedule. What does all this mean for an organisation in the UK, and how does it vary when your organisation interacts across borders?
AI legislation and compliance: current state in the UK
For the most part, the UK doesn’t really have any AI regulation of its own, although this may now change with some haste. What we do have is the March 2023 whitepaper titled A pro-innovation approach to AI regulation from our former government. That whitepaper set out ‘the need to act quickly’ but explicitly chose not to ‘put these principles on a statutory footing initially’. In other words, this initial whitepaper set out some ideas, but it didn’t create new law. However, existing laws such as the Equality Act and the UK GDPR can apply to AI solutions, alongside the principles set out in that whitepaper.
Under the current government, we have the recently announced AI Opportunities Action Plan. This new plan sets out ambitions, objectives, and concrete actions, but as yet no regulatory changes. In other words, UK law itself has not adapted in any appreciable way over the last two or more years of massive generative AI change, but few organisations are only concerned with UK law.
We should expect UK regulations to change as the government delivers on its new action plan, but we should also expect that any new regulation will have to provide time for organisations to adapt, whenever it may be announced.
AI legislation and compliance: current state in the EU
Meanwhile, the EU AI Act was published in July 2024, and came into force in August 2024. Although the UK retains its own variant of the GDPR post-Brexit, the EU AI Act only indirectly applies to the UK.
For instance, the November 2024 deadline for member states to identify their ‘authorities/bodies responsible for fundamental rights protection’ doesn’t apply to the UK because it isn’t a member state anymore. However, the scope of the regulation applies to: “anyone who makes, uses, imports, or distributes AI systems in the EU, regardless of where they are based. It also applies to AI systems used in the EU, even if they are made elsewhere.”
Where things get slightly messier is its application to ‘affected persons [that] are located in the Union’. That becomes meaningful if your AI use or solution affects someone in the EU and it either engages in one of the prohibited AI practices (obviously dangerous stuff) or is a High-Risk AI System (working with sensitive stuff).
There is also a raft of AI regulation specific to large-compute AI systems (such as those required to create a large language model (LLM)), known as ‘General-Purpose AI Models’ in the regulation, and transparency requirements pertaining to:
- Intellectual property used in production of the content
- Deep fakes
- Biometric categorisation
- Emotion recognition
Users of a system must be made aware that AI is being used in their interactions; content generated by AI must be clearly detectable as such.
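To make that transparency obligation a little more concrete, here is a minimal illustrative sketch in Python of labelling AI-generated output with both a human-readable disclosure and machine-readable provenance metadata. The function and field names are assumptions for the sake of the example – they aren’t drawn from the Act’s text or from any particular SDK.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Hypothetical wrapper illustrating the transparency idea: people should know
# they are interacting with AI, and AI-generated content should be detectable
# as such. Names and fields here are illustrative assumptions only.

@dataclass
class LabelledAiOutput:
    text: str                                        # the generated content shown to the user
    disclosure: str                                  # human-readable notice displayed alongside it
    provenance: dict = field(default_factory=dict)   # machine-readable marker for downstream systems

def label_ai_output(generated_text: str, model_name: str) -> LabelledAiOutput:
    """Attach a disclosure notice and provenance metadata to AI output."""
    return LabelledAiOutput(
        text=generated_text,
        disclosure="This response was generated by an AI system.",
        provenance={
            "ai_generated": True,
            "model": model_name,
            "generated_at": datetime.now(timezone.utc).isoformat(),
        },
    )

# Example: labelling a model response before rendering it in a chat UI.
labelled = label_ai_output("Here is a summary of your meeting notes...", "example-model")
print(labelled.disclosure)
print(labelled.provenance)
```

The point of the sketch is simply that disclosure can be handled as a routine part of the response pipeline, rather than bolted on afterwards.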
These distinct scopes come into force at different stages on the Implementation Timeline. At the highest level, it’s important to note that although the act has come into force, enforcement of the prohibited AI practices only begins in February 2025, and rules pertaining to General-Purpose AI Models apply from August 2025. The rest of the EU AI Act does not start to apply until August 2026 (with one exception pertaining to safety components applying in August 2027).
So, if your AI use/solution is prohibited, high-risk, or must be transparent to an EU person, you should concern yourself with the implementation timeline. You almost certainly needn’t worry about obligations pertaining to a General-Purpose AI Model, as the few organisations that build such models are clearly already very focused on those needs.
OpenAI has already detailed its commitments in an article providing a primer on the EU AI Act. Others, including Microsoft, have also joined a voluntary alliance known as the ‘AI Pact’. We should expect a deeper statement from Microsoft about its commitments before August 2025, when the General-Purpose AI Model rules will presumably start to apply to them.
UK and EU co-operation
It’s worth noting that, despite Brexit, there are still UK/EU co-operative endeavours such as the EuroHPC Joint Undertaking, and the new UK government action plan cites international co-operation and agreements as one of its foundations, along with UK sovereign capability.
In other words, we need to remember that these dividing lines aren’t always so stark, and use of AI models in the EU will need to adhere to the EU AI Act in ways that use of models in the UK will not.
Given that we still see differences in model availability and consumption plans across countries, we know that organisations in the UK may choose to deploy in the EU even if their users are all in the UK. It’s a good thing that the new action plan accounts for this.
We should keep in mind that the nuances of the EU AI Act may have an impact on these models served from the EU, although in practice we will probably see any changes applicable to models in the EU applied equally to all regions.
Applicability to organisations in the UK
With no UK AI Act in place today, and most EU AI Act obligations either applying to frontier models or not arriving until August 2026, today’s AI compliance needs are found in existing regulations like the UK GDPR and other UK law. In the ICO’s guidance, we see requirements for:
- Transparency in decision making (data subjects must know if AI is being used to make decisions about them)
- Making inferences about people or treating them differently on the basis of those inferences
- Statistical accuracy as it relates to fairness
- Bias and discrimination as they relate to fairness
All told, if the AI solution involves processing data about UK persons, the UK GDPR needs to be considered – likewise with EU persons and the EU GDPR, and so on. Just like other applications, if the solution works with Personally Identifiable Information (PII), Protected Health Information (PHI), or Payment Card Information (PCI), it probably has unique compliance needs. Given the huge interest in agentic AI at the moment, it’s worth reflecting on whether democratised AI processing and unexpected encounters with personal data might create compliance obligations you haven’t planned for.
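As a purely illustrative example of what that kind of care can look like in code, here is a deliberately naive Python sketch that screens a prompt for obvious personal data before it reaches a model. The patterns and function name are assumptions for the example only – a real solution would rely on a dedicated PII detection capability and a proper data protection impact assessment, not a handful of regular expressions.

```python
import re

# Naive, illustrative PII screen applied before sending text to an AI model.
# The patterns below are deliberately simple and will miss plenty; they exist
# only to show where such a check might sit in a processing pipeline.

PII_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "uk_phone": re.compile(r"\b(?:\+44\s?\d{4}|\(?0\d{4}\)?)\s?\d{3}\s?\d{3}\b"),
    "card_number": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def redact_personal_data(text: str) -> str:
    """Replace matches for the illustrative patterns with placeholders."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[REDACTED {label.upper()}]", text)
    return text

# Example: screening a prompt before it is passed to a model.
prompt = "Summarise this email from jane.doe@example.com, phone 01632 960123."
print(redact_personal_data(prompt))
```

Even a sketch like this makes the underlying point: the moment personal data can flow into an AI workload, the processing needs to be deliberate rather than incidental.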
AI legislation and compliance: current state in the US
Meanwhile, the US rapidly introduced its own AI regulation in October 2023 with Executive Order 14110 on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence.
This was then revoked when President Trump took office, via the Initial Rescissions Of Harmful Executive Orders And Actions, and replaced a few days later with an announcement of enormous funding, along with new policy, plans, and an implementation timeline for Removing Barriers to American Leadership in Artificial Intelligence. In other words, the Safe, Secure and Trustworthy assurances have been replaced by policy ‘to sustain and enhance America’s global AI dominance’.
Further review of anything that had been created on the basis of the earlier Executive Order is now underway, presumably with the intention of removing any other perceived barriers. The new orders clearly set out an American path towards complete AI de-regulation. But presumably this won’t exempt anyone from their obligations under other regulations such as the CCPA.
The regulatory balancing act
As we’ve seen so far in 2025, AI regulation is deeply politicised, with some people looking at safety and innovation as tightly coupled concerns, and others hoping to tear up the rule book.
Ever since generative AI capability leapt forward in late 2022, governments have sought to balance safety with a desire to nurture their own advancements and attract global AI innovators. Indeed, competition with China was a theme of the congressional hearing on AI that I mentioned at the top of this post. Now, with Chinese LLM capability evolving rapidly despite US export controls on the most powerful GPUs, the calls to remove AI safety measures will only get louder in some corners.
We already see fluctuating priorities in line with dogma, diligence, or pragmatism. No government wants its diligence to be construed as an innovation obstacle. But some see safety and innovation as counterparts. Given this polarisation, it’s not inconceivable that we will wind up with competing regulatory needs, as we see with organisations that avoid taking services in the US because of the Patriot Act.
It’s also important to remember that compliance and security are not the same. So even if your own regulatory needs are becoming more relaxed, that shouldn’t be your only lens on safety. In many ways, AI risk is a more meaningful question than compliance today, which we’ll unpack in more detail in a further post on AI risk in 2025.