On 29 March 2023, the UK Department for Science, Innovation and Technology (DSIT) released a whitepaper titled “A pro-innovation approach to AI regulation”. The whitepaper outlines a pro-innovation approach to Artificial Intelligence (AI) regulation, aiming to strike a balance between fostering innovation and addressing potential risks.

Recognizing the complexity and diverse nature of AI risks, the government aims to build on existing regulatory frameworks rather than creating entirely new legislation to regulate AI. By leveraging and maximizing the benefits of current regimes, the government intends to intervene in a proportionate manner to address regulatory uncertainty and fill any existing gaps.

The proposed regulatory framework is designed to be pro-innovation, proportionate, trustworthy, adaptable, clear, and collaborative.

Key Elements of the Framework

The proposed framework revolves around four key elements:

Defining AI: The framework defines AI by reference to two characteristics, adaptivity and autonomy, to support regulator coordination. This ensures that the framework remains future-proof and can adapt to emerging AI technologies.

Context-Specific Approach: Instead of imposing rules or risk levels on entire sectors or technologies, the framework takes a context-specific approach. Regulators assess, in specific contexts, the risks of utilizing AI against the potential costs of not doing so, which helps avoid undue burdens and the stifling of innovation.

Cross-Sectoral Principles: The framework is supported by cross-sectoral principles that direct how regulators should address the risks and opportunities associated with AI. These principles — safety, security and robustness; appropriate transparency and explainability; fairness; accountability and governance; and contestability and redress — ensure responsible AI governance throughout the AI life cycle.

Initially, the principles will be issued by the government on a non-statutory basis, and the need to implement statutory measures will depend on the effectiveness of the initial framework. The government will collaborate with regulators to evaluate any required legislative changes.


Central Functions: The framework introduces new central functions to provide an overarching view of the framework, ensure coherence and improved clarity, support the implementation of the framework, and identify opportunities for further coordination. These functions are:

  • Monitoring, assessment and feedback;
  • Coherent implementation of the principles;
  • Cross-sectoral risk assessment;
  • Support for innovators;
  • Education and awareness;
  • Horizon scanning; and
  • Interoperability with international regulatory frameworks.

AI Sandboxes and Testbeds

The whitepaper states that the UK government is dedicated to fostering innovation through the creation of AI sandboxes and testbeds. These sandboxes are expected to play a crucial role in supporting innovators, testing the regulatory framework, identifying unnecessary barriers to innovation, and identifying and, if necessary, adapting to emerging technology and market trends. The DSIT is considering different options for the design of the sandbox: single sector/single regulator, multiple sectors/single regulator, single sector/multiple regulators, and multiple sectors/multiple regulators. The initial focus will be on a single sector/multiple regulator sandbox, with the intention to expand coverage to multiple industry sectors over time.

Tools for Trustworthy AI

According to the DSIT, tools for trustworthy AI support the implementation of the regulatory framework and the responsible adoption of AI. The whitepaper categorizes these tools into two groups: AI assurance techniques and AI technical standards.

AI assurance techniques, such as impact assessments, audits, and performance testing, will measure and communicate the trustworthiness of AI systems. The UK government plans to publish a Portfolio of AI Assurance Techniques in Spring 2023.

Technical standards, both international and regional, are being developed to address various aspects of AI, such as risk management, transparency, bias, and safety. These standards can complement sector-specific approaches to regulation and help organizations demonstrate compliance with the regulatory principles.

Writer: Gamze Büşra Kaya