
From patchwork to principle: Three considerations for the Government’s approach to regulating AI

Published 15 Mar 2023

The recent rise of OpenAI's ChatGPT - a generative AI that can simulate conversation, author original stories, and even design new Wordle-like games - has propelled a technological arms race into the public consciousness. Tech giants across the world are competing to leverage the almost inconceivable power of AI technologies on an industrial scale, with Microsoft already incorporating ChatGPT into its Bing search engine.

The increased prominence of AI has turned attention not only to its vast number of exciting applications, but also to the threats it poses in areas like cyber security and misinformation. This has increased scrutiny of the guardrails (or lack thereof) around the development of AI and brought us to an inflection point in our regulatory approach to this powerful technology.

Understanding the Government’s direction on AI and its role in shaping the UK technology sector will be crucial for businesses and UK plc. In McKinsey’s 2022 global digital trust survey examining responsible AI, fewer than 20% of respondents said that their organisations were actively mitigating risks in areas like data privacy and AI transparency. To avoid being caught on the back foot, businesses will need to pair their technological advancements with focused engagement with policymakers, helping to ensure that they can continue to operate in a competitive, pro-innovation regulatory ecosystem for AI. Demonstrating positive credentials in AI assurance will be key, and will align with ambitions held on both sides of the political aisle to harness data for the public good.

The UK AI Council - an independent expert committee that advises the Government - has estimated in its AI Roadmap that, if approached correctly, AI could deliver a 10% increase in UK GDP by 2030. With that in mind, here are three key considerations for the Government as it establishes a pro-innovation approach to responsible AI:

Provide clarity over the strategic direction for regulating AI 

The growing adoption of AI has exposed gaps in the patchwork of existing legal and regulatory requirements being applied to it, most of which were originally designed for other purposes. For instance, data protection laws govern automated decision-making and personal data, while the Online Safety Bill contains provisions relating to the design and use of algorithms. This patchwork lacks clarity, contains regulatory overlap, and was not developed with AI specifically in mind.

While the Government acknowledged both the huge benefits and the risks associated with widespread use of AI in its 2022 policy paper, ‘Establishing a pro-innovation approach to regulating AI’, we have yet to see a new regulatory framework that sets out the strategic direction for AI governance. The delayed AI White Paper, due to be published in “early 2023”, will provide much-needed clarity over the Government’s vision for a principles-based regulatory framework built on transparency, explainability, safety, and fairness.

Regulate the uses of AI, not the tech 

If the Government wishes to avoid a repeat of the drawn-out legislative passage of the Online Safety Bill, it should look to regulate the uses of AI, not the technology itself. This pro-innovation approach will encourage businesses to take risks in the development of pioneering AI systems and help contribute to economic growth. Crucially, regulating AI in this way will ensure that regulation is future-proof and can adapt to the evolving applications and concomitant threats of AI systems.  

A future-proof regulatory framework is paramount at a time of geopolitical uncertainty, when sovereign technological capabilities are regarded as foundational to UK national security. While the Government will be hard pressed to prevent the infiltration of foreign AI technologies into UK infrastructure, upholding domestic guardrails on the use and application of such systems will maintain UK sovereignty over AI systems and infrastructure.

Balance sovereign standards with regulatory coherence 

While the Government is keen to exploit post-Brexit regulatory opportunities by setting sovereign standards frameworks in fast-emerging areas like AI, it should be cautious about diverging too far from the regulatory frameworks of other western regimes. With the EU and US adopting more rigid, statutory approaches to AI regulation, coherence with these regimes will be key to ensuring that compliance does not become overly burdensome for businesses.

A looser, principles-based UK regulatory framework for AI would also mean that businesses need to be aware of heightened reputational risk when operating in the UK. Beyond the benefits to brand reputation, demonstrating strong AI assurance within such a framework will provide a crucial pathway for engagement with policymakers.

2023 is an important year for the regulation of AI for both the Government and businesses. As we move from patchwork legislation to principles-based policy, the Government must balance the enormous growth potential of AI with the safety and ethical risks that such developments present. The decisions that the Government makes today will determine the development and influence of the technologies of tomorrow.  
