The UK is laying the groundwork for its future artificial intelligence (AI) regulatory model. Like the EU, it plans to adopt a risk-based approach, but it will differ from the bloc by entrusting enforcement to a panel of regulators.
The British government presented its “pro-innovation approach to AI regulation” on Monday (18 July) alongside its new Data Protection and Digital Information Bill.
This follows the presentation of the National AI Strategy last September, a ten-year plan to make the UK a global AI superpower. The country has invested more than £2.3 billion (€2.7 billion) in AI since 2014.
The UK approach to regulation focuses on high-risk applications, setting aside the low risks associated with AI so that innovation is not stifled and the industry is not slowed down by red tape.
The framework is based on a set of cross-sectoral principles, inspired by the OECD (Organisation for Economic Co-operation and Development) principles on AI: safety and security by design, transparency and explainability, fairness and contestability.
Unlike the EU’s approach, under which enforcement of the AI law would be entrusted to a single national regulator in each member state, the UK plans to divide responsibility among several of them.
Ofcom, the UK communications regulator, the Competition and Markets Authority (CMA), the Information Commissioner’s Office (ICO), the Financial Conduct Authority (FCA) and the Medicines and Healthcare products Regulatory Agency (MHRA) are on the list. Some of them could eventually see their powers and remits evolve.
The principles set out in the British approach “provide clear guidance to regulators, but will not necessarily translate into required obligations”, the policy statement clarifies, instead encouraging regulators to “consider lighter options first”.
As for the definition of the technology itself, which is often problematic, the UK government has declined to establish a universally applicable definition, choosing instead to focus on the core characteristics and capabilities of AI from which regulators can draw inspiration.
Regulators will also be expected to take the initiative in identifying, assessing and prioritising the risks covered by the principles, while working within a framework of enhanced cooperation.
The need for increased cooperation between regulators to meet the challenges of AI was confirmed by an independent report from the Alan Turing Institute, also published on Monday. It points out that the technology has now extended, in scale and complexity, to all areas of social and economic life, and that the common nature of the obstacles regulators are already facing calls for a common strategy.
A global issue
A “thriving AI ecosystem” can be a source of international competitive advantage, Digital Secretary Nadine Dorries wrote in the statement, promising that the UK “will continue to advocate at the international level” for this innovation-friendly approach and emphasising that the challenges and opportunities of AI are global issues.
London recognises “the inherently cross-border nature of the digital ecosystem” and emphasises the need to work “in close collaboration with partners” in order to avoid fragmentation of the global market, “to ensure interoperability and promote the responsible development of AI at the international level”, according to the government.
The UK is committed to continuing to fight authoritarianism, repression and discrimination and to remaining “active” in global organisations such as the Global Partnership on Artificial Intelligence (GPAI), the OECD and standardisation bodies, the government continued, before concluding that the country will strive to remain a “pragmatic voice in favour of innovation in the ongoing negotiations at the Council of Europe”.
Stakeholders in the AI ecosystem are invited to share their views on this regulatory approach by the end of September, so as to feed into a future white paper on the implementation of such a strategy.