September 13, 2021

WASHINGTON – Global tech trade association ITI welcomes the U.S. government's commitment to developing trust and mitigating risks, including bias, in Artificial Intelligence (AI) technologies. In comments submitted today in response to the National Institute of Standards and Technology's (NIST) requests for information on its Artificial Intelligence Risk Management Framework and its Proposal for a Framework for Identifying and Managing Bias in Artificial Intelligence, ITI offers recommendations to help NIST better support stakeholders in identifying risks, including mitigating bias, in different contexts.

“ITI and its member companies believe that effective government approaches to AI clear barriers to innovation, provide predictable and sustainable environments for business, protect public safety, and build public trust in the technology,” said John Miller, ITI’s Senior Vice President of Policy and General Counsel. “We share the firm belief that building trust by addressing risks, including that of bias, in the era of digital transformation is essential and agree there are important questions that must be addressed to best facilitate the responsible development and use of AI technology. As this technology evolves, we take seriously our responsibility as AI’s enablers, including by helping drive solutions to address potential negative externalities. We appreciate the opportunity to provide input on what should be included in both a risk management framework and a framework to identify and mitigate bias so that these frameworks are most useful to stakeholders.”

In its response to the RFI for an Artificial Intelligence Risk Management Framework, ITI recommends:

  • Considering what “risk” means in the context of AI as different risks will require different mitigations;

  • Conducting a more granular mapping exercise of the standards landscape, including identifying where specific standards exist and where they might be needed;

  • Taking an outcomes-based approach to protect against the risks of AI while facilitating innovation;

  • Developing a methodology that can help stakeholders determine the risk level of a specific AI use case, and then taking steps based on that determination to mitigate the risk;

  • Helping stakeholders determine how to navigate tensions that may arise in developing and using AI; and

  • Ensuring that the framework accounts for the deployment context of an AI system, the training data and optimization function of an AI system, and the goal of the product.

In its comments on NIST's Proposal for a Framework for Identifying and Managing Bias in Artificial Intelligence, ITI recommends:

  • Indicating the preliminary nature of the document, so policymakers do not view it as a definitive guide to approaching and managing bias;

  • Clarifying references to unintentional or other types of bias, and further clarifying the definition of bias;

  • Considering more specific technical guidance for how to address bias in specific instances, including for certain classes of AI technologies where large test sets do not exist, and how to test for disparate impact in machine learning contexts;

  • Including information as to how this proposal will interact with the NIST AI risk management framework;

  • Including education and awareness as important actions to take in addressing bias during the pre-design stage;

  • Referencing and integrating ongoing standards efforts; and

  • Articulating a clear plan for how work on measuring and mitigating AI bias will translate into adoption across federal agencies.

ITI issued recommendations earlier this year aimed at helping governments facilitate an environment that supports AI while simultaneously recognizing there are challenges that need to be addressed as the uptake of AI grows around the world.

Public Policy Tags: Artificial Intelligence, Cybersecurity