August 06, 2021

BRUSSELS – Today, global tech trade association ITI submitted comments on the European Commission's proposed Artificial Intelligence (AI) Act. ITI offered suggestions for EU policymakers to achieve a future-proof, innovation-oriented regulatory framework that addresses potential concerns raised by certain uses of AI technologies.

“As the world’s first proposal for a horizontal regulatory framework on AI, it is paramount that the AI Act balances the need to address potential risks associated with some AI applications with preserving innovation of AI technologies and encouraging their uptake,” said Guido Lobrano, ITI’s Vice President and Director General for Europe. “These outcomes can be achieved by targeting regulation precisely to certain high-risk AI applications, and ensuring that requirements are clear, proportionate, non-prescriptive and goal-oriented. In parallel, it is important that the AI Act takes into account the global nature of the technology. This initiative should emphasize the importance of global cooperation, relying upon innovative mechanisms to facilitate regulatory compatibility and open trade.”

ITI’s key recommendations include:

  • Specifying the definition of Artificial Intelligence to avoid inadvertently including traditional software in the scope of the Regulation;
  • Excluding general-purpose software providers from the scope of the Regulation when they do not directly develop or deploy the system as a high-risk AI application;
  • Narrowing down and specifying the list of high-risk AI applications to avoid including non-problematic uses of AI in scope;
  • Building in strict and meaningful safeguards on responsible deployment of real-time remote biometric identification for national security or law enforcement purposes;
  • Ensuring requirements on data governance, recordkeeping, transparency and human oversight reflect the diversity of applications in scope, following a goal-oriented rather than prescriptive approach;
  • Promoting reliance on voluntary, industry-driven, consensus-based international standards to avoid fragmented regulatory approaches between the EU and the rest of the world;
  • Ensuring there are flexible mechanisms in place to accept international testing outcomes;
  • Ensuring requirements for market surveillance investigations are proportionate, and do not require companies to disclose sensitive information such as source code;
  • Equipping national supervisory authorities with broad expertise, while also designating a lead market surveillance authority to achieve a holistic approach to the enforcement of the Regulation; and
  • Ensuring structured stakeholder involvement in the work of the AI Board.

Read ITI’s full comments here.

Public Policy Tags: Artificial Intelligence