
EU vs. UK – A tale of two approaches

As is often the case, the United Kingdom (UK) and the European Union (EU) have taken very different approaches to Artificial Intelligence (AI). In the EU, the landmark AI Act heavily regulates the industry. In contrast, the UK government has adopted a ‘wait and see’ approach.

What is the EU’s approach?

The AI Act is a hands-on, risk-based system that regulates the use of artificial intelligence across all industries and sectors. Under the Act, every AI system is graded into one of four risk levels: unacceptable, high, limited, and minimal.

Any AI application classified as unacceptable, such as social scoring that breaches human rights, is banned outright. For the other categories, the higher the grading, the tougher the regulations. AI systems used in critical infrastructure, law enforcement or healthcare would likely be classified as ‘high risk’.

Importantly, the AI Act applies to any business trading within the EU – even if it is based outside the bloc. This means that a company based in the USA that trades with Germany could be prosecuted for breaching its obligations under the AI Act. And the financial penalties for breaching the Act are stiff – up to 7% of global annual turnover.

What is the UK’s approach?

In contrast, the UK has so far decided not to regulate the AI industry. The government hopes that this light touch will encourage greater innovation, establishing Britain as an AI leader.

Instead of implementing new laws, the government is encouraging AI companies to sign up to a voluntary framework.

This framework addresses five key principles:

  1. Safety, security, and robustness.
  2. Appropriate transparency and explainability.
  3. Fairness.
  4. Accountability and governance.
  5. Contestability and redress.

According to the British government, this framework provides much-needed safeguards and encourages new AI development and innovation.

Are things about to change?

Artificial Intelligence tools are developing faster than governments can adapt, which is why the EU has adopted such a stringent legal framework. Some UK decision makers are now actively questioning the effectiveness of the voluntary “opt-in” strategy, citing concerns about how quickly existing systems are advancing.

Government sources claim that the UK is now drafting legislation regarding how Large Language Models (the type of technology that underpins ChatGPT) are trained. They are also considering rules that would force advanced AI developers to share their algorithms with the government.

Apparently, the changes are being considered to address concerns about AI misuse and market manipulation. Sarah Cardell, CEO of the UK’s Competition and Markets Authority, has been quoted as saying:

“The essential challenge we face is how to harness this immensely exciting technology for the benefit of all, while safeguarding against potential exploitation of market power and unintended consequences.”

So while the UK government is currently “hands off” regarding AI, this situation could – and probably will – change in the near future.

Read also: UK government seeks to strengthen national cyber resilience
