The world has long been seeking answers to the questions of how artificial intelligence (AI) technologies should be developed, used, and regulated. Initially, concerns that regulating AI could stifle innovation dominated debates about whether regulation is necessary, when it should be applied, and whether early-stage rules might hinder innovation. Today, rapid advances in AI and the widespread use of generative AI have brought new risks and harms, producing a consensus, at least among a significant majority, that AI needs to be regulated. The concern about innovation nonetheless remains: how can we regulate AI systems without impeding innovation? At a time when AI regulation efforts are accelerating worldwide, this short article examines regulatory sandboxes, which can help address several problems in regulating AI, including concerns about innovation.


The OECD AI Principles, published in 2019, recommend that governments use “experimentation to provide a controlled environment in which AI systems can be tested, and scaled-up, as appropriate” [1]. This principle encourages governments to adopt experimental policy approaches that support both trustworthy AI and innovation. In this context, the "anticipatory regulation" approach is crucial: it aims to develop regulation iteratively as new technologies emerge, gain a deeper insight into how technology affects society, enable regulators to promote innovation, and respond more quickly to avert harm to consumers [2]. Anticipatory regulation encompasses adaptability to the future, iterative learning, outcomes-based regulation, and the use of experimental methods [3].


Regulatory experimentation can be conducted in two ways: through derogation, where temporary waivers of or changes to legal provisions are made to encourage innovation, or through the devolution of authority, where supranational organizations or national governments grant lower-level governments the power to conduct experiments through new regulations [3]. Regulatory sandboxes, which temporarily waive or modify national rules to promote innovation [3], are a form of regulatory experimentation conducted through derogation. It is important to note that regulatory sandboxes can provide not only derogation but also bespoke guidance, regulatory comfort, and confirmations [4].


Regulatory sandboxes are "generally regulatory tools allowing businesses to test and experiment with new and innovative products, services or businesses under supervision of a regulator for a limited period of time" [5]. They aim to help companies test new business models under reduced requirements while still pursuing overarching objectives such as consumer protection [6]. Regulatory sandboxes take different forms but generally share some common features: they are temporary, typically allowing a testing period of up to six months; they facilitate collaboration between regulatory bodies and firms; they waive legal provisions; and they offer customized legal assistance for a particular project, often on a trial-and-error basis [6]. Furthermore, they contribute to the collection of technical and market information, which regulatory authorities use to evaluate whether specific legal frameworks are appropriate or need adjustment [6]. This approach promotes innovation while supporting the creation of effective and balanced regulations.

The Benefits of Regulatory Sandboxes

Regulatory sandboxes offer a range of benefits to regulators, businesses, and consumers.

For Regulators:

  • support long-term policy development through experience and learning,
  • demonstrate commitment to innovation and learning,
  • encourage interaction with market participants,
  • update regulations that may hinder useful innovations [6].

For Businesses:

  • simplify the authorization process, reducing time to market,
  • decrease uncertainty related to regulations,
  • collect feedback on legal requirements and risks,
  • enhance access to capital,
  • democratize information about legal frameworks for particular innovative products, especially benefiting small and medium-sized enterprises [6].

For Consumers:

  • encourage the launch of new and safer products,
  • improve access to financial products and services [6].

Regulatory sandboxes in the field of AI can help address some significant challenges in regulating AI. By increasing interaction between regulators and market stakeholders, AI regulatory sandboxes let regulators see where regulations might harm beneficial innovations. This, in turn, makes it possible to regulate AI systems without stifling innovation and to develop policies that balance innovation with trustworthy AI.

Another important challenge in regulating AI is the speed at which the technology develops. Although this rapid development brings uncertainty, insights gained from regulatory sandboxes can provide a more robust foundation for shaping AI policies.

Another important concern is that regulations may place burdens on small and medium-sized enterprises (SMEs). While regulatory sandboxes may not eliminate all of these burdens, they can ease some of the pressure on SMEs by democratizing information about regulations and providing a broader understanding of regulatory matters.

Challenges and Considerations

While regulatory sandboxes offer benefits, they also raise difficulties and concerns when not appropriately designed. Poorly designed sandboxes may grant unfair privileges to new companies that are considered innovative [4], and regulators may lower measures and requirements in order to attract innovators [5]. In other words, carelessly designed regulatory sandboxes can lead to injustices in the market and jeopardize consumer safety. Furthermore, the lack of standards for assessing innovation in regulatory sandboxes [6] strengthens concerns that regulators’ choices might slow down genuine innovation [5].

Regulatory sandboxes may solve some of the problems of AI regulation, but they also pose certain risks and harms. Policymakers should therefore keep several considerations in mind before implementing them.

An OECD report on AI regulatory sandboxes lists these considerations as follows:

  • ensuring multi-disciplinary and multi-stakeholder collaborations with national institutions and market stakeholders,
  • providing AI technical, competition, and innovation assessment expertise within regulatory authorities,
  • considering international regulatory interoperability and a possible role for trade policy,
  • working towards harmonized sandbox eligibility and testing criteria at the international level,
  • evaluating the design of regulatory sandboxes, taking into account their impact on innovation and competition, before their implementation,
  • considering AI regulatory sandboxes together with other pro-innovation mechanisms [6].


Some Examples of Regulatory Sandboxes in the Field of AI

Regulatory sandboxes do not have a long history. The first regulatory sandbox was launched in the FinTech sector in the United Kingdom in 2016 [7]. The approach then spread quickly worldwide and is now frequently used in the financial technology sector. Interest in regulatory sandboxes within the AI field is also increasing, with notable initiatives such as the following:

  • In the United Kingdom, the Information Commissioner’s Office launched a regulatory sandbox in 2019 to support organizations developing services that use personal data in innovative and safe ways [8]. Three projects are currently being developed under this sandbox. Notably, the sandbox does not permit departures from privacy obligations; rather, it aims to help reinterpret privacy principles in the context of emerging technologies [6].
     
  • In 2020, the Norwegian Data Protection Authority launched a regulatory sandbox aligned with the responsible AI principles proposed by the EU High-Level Expert Group on AI, with the aim of promoting the innovative development of ethical and responsible AI from a data protection perspective [9]. The sandbox does not grant an overall exemption from the Personal Data Act, but it exempts participating firms from enforcement measures during the development phase [6].
     
  • In 2022, Spain established a regulatory sandbox to test the EU AI Act, which has not yet entered into force [10]. This sandbox will help evaluate how the EU AI Act interacts with real AI applications and will inform proposed changes and explanatory guidelines [10].
     
  • On October 3, 2023, the Brazilian Data Protection Authority issued a call for contributions to design a regulatory sandbox for AI and data protection. The sandbox is planned to test generative AI use cases in order to assess risks and to develop appropriate safeguards and regulations to mitigate potential harms [11]. Its priorities include algorithmic transparency, responsible AI innovation, multi-stakeholder participation, and the development of parameters for human intervention [11].

Developments regarding the use of regulatory sandboxes in the field of AI are ongoing in different countries. Although these initiatives are relatively new and we do not yet have definitive findings on AI regulatory sandboxes, they are promising tools for AI regulation. However, it is crucial to design them carefully and diligently, as poorly designed sandboxes can cause significant harm.

References

[1] OECD. OECD AI Principles overview. https://oecd.ai/en/ai-principles

[2] Nesta. A working model for anticipatory regulation. https://media.nesta.org.uk/documents/working_model_for_anticipatory_regulation_0.pdf

[3] Experimental regulations for AI: Sandboxes for morals and mores. https://iris.luiss.it/bitstream/11385/210516/1/2747-5174-2021-1-86.pdf

[4] Ranchordas, S. (2021). Experimental lawmaking in the EU: Regulatory sandboxes. SSRN. https://deliverypdf.ssrn.com/delivery.php?ID=097121094120022097110027083093069023000008008093019054002069103099086029124106005108098036006047039023098029002106103093072097012044036045000024085097069098077118010090052002094024088011093064064092025093109000099098103107100094013031106118074012100072&EXT=pdf&INDEX=TRUE

[5] European Parliament. Artificial Intelligence Act and regulatory sandboxes. https://www.europarl.europa.eu/RegData/etudes/BRIE/2022/733544/EPRS_BRI(2022)733544_EN.pdf

[6] OECD. Regulatory sandboxes in artificial intelligence. https://www.oecd.org/sti/regulatory-sandboxes-in-artificial-intelligence-8f80a0e6-en.htm

[7] The George Washington Law Review. Regulatory sandboxes. https://www.gwlr.org/wp-content/uploads/2019/06/87-Geo.-Wash.-L.-Rev.-579.pdf

[8] Information Commissioner’s Office (ICO). Regulatory Sandbox. https://ico.org.uk/for-organisations/advice-and-services/regulatory-sandbox/

[9] Datatilsynet. Framework for the regulatory sandbox. https://www.datatilsynet.no/en/regulations-and-tools/sandbox-for-artificial-intelligence/framework-for-the-regulatory-sandbox/

[10] OECD. The state of implementation of the OECD AI Principles four years on. https://search.oecd.org/publications/the-state-of-implementation-of-the-oecd-ai-principles-four-years-on-835641c9-en.htm

[11] Autoridade Nacional de Proteção de Dados (ANPD). ANPD’s call for contributions to the regulatory sandbox for artificial intelligence and data protection in Brazil is now open. https://www.gov.br/anpd/pt-br/assuntos/noticias/anpds-call-for-contributions-to-the-regulatory-sandbox-for-artificial-intelligence-and-data-protection-in-brazil-is-now-open

Writer: Gamze Büşra Kaya