The White House’s “Blueprint for an AI Bill of Rights”: The Biden Administration’s Vision for AI

The White House has issued a white paper titled Blueprint for an AI Bill of Rights: Making Automated Systems Work for the American People (the “Blueprint”), which sets out principles intended to make AI-driven systems work for the benefit and protection of the American public. Despite the phrase “Bill of Rights” in the title, the white paper, prepared by the White House Office of Science and Technology Policy (“OSTP”), does not create legally binding rights or obligations. Instead, it provides guidance and signals the Biden administration’s public policy objectives with respect to AI.

The Blueprint applies to (1) automated systems that (2) have the potential to meaningfully impact the American public’s rights, opportunities, or access to critical resources or services (“automated systems”). The Blueprint consists of a set of five principles, notes on “Applying the Blueprint for an AI Bill of Rights,” and a technical guide called “From Principles to Practice: A Technical Companion to the Blueprint for an AI Bill of Rights,” all of which are available in a single consolidated document.

The Blueprint proposes the following five principles:

  1. Safe and Effective Systems

The first of these principles focuses on protecting individuals from unsafe or ineffective systems. This principle suggests that automated systems should:

  • Be developed with consultation from diverse communities, stakeholders, and domain experts to identify concerns, risks, and potential impacts of the system.
  • Undergo pre-deployment testing, risk identification and mitigation, and ongoing monitoring that demonstrate they are safe and effective.
  • Not be designed with an intent or reasonably foreseeable possibility of endangering an individual’s safety or the safety of their community.
  • Be designed to proactively protect individuals from harms stemming from unintended, yet foreseeable, uses or impacts of automated systems.
  • Protect individuals from inappropriate or irrelevant data use in the design, development, and deployment of automated systems, and from the compounded harm of its reuse.
  • Be subject to independent and publicly available evaluation and reporting that confirms that the system is safe and effective.

  2. Algorithmic Discrimination Protections

The second principle seeks to prevent unlawful discrimination against individuals by algorithms and to ensure that systems are used and designed in an equitable way.

Through this principle, the OSTP hopes to prevent unjustified differential treatment or unfavorable impacts on individuals based on their race, color, ethnicity, sex (including pregnancy, childbirth, and related medical conditions, gender identity, intersex status, and sexual orientation), religion, age, national origin, disability, veteran status, genetic information, or any other classification protected by law.

To avoid such algorithmic discrimination, the OSTP advises that designers, developers, and deployers of automated systems take “proactive and continuous measures” to protect individuals and communities and to use and design systems in an equitable way.

  3. Data Privacy

The third principle aims to protect individuals from abusive data practices via built-in protections as well as by ensuring individuals have agency over how data about them is used.

This principle suggests that designers, developers, and deployers of automated systems should seek permission and respect individuals’ decisions regarding the collection, use, access, transfer, and deletion of their data where possible and, where not possible, provide alternative privacy safeguards.

Importantly, the OSTP specifies that there should be “enhanced protections and restrictions for data and inferences related to sensitive domains, including health, work, education, criminal justice, and finance, and for data pertaining to youth.” In such domains, data and related inferences should be used only for necessary functions, and additional protections should be put in place.

Finally, the OSTP notes that individuals and their communities should be free from unchecked surveillance. It adds that continuous surveillance and monitoring should not be used in education, work, housing, or in other contexts where the use of such surveillance technologies is likely to limit rights, opportunities, or access.

  4. Notice and Explanation

The fourth principle suggests that individuals be made aware that an automated system is being used and be given the opportunity to understand how and why it contributes to outcomes that may impact them.

This principle recommends that designers, developers, and deployers of automated systems provide generally accessible, plain-language documentation, including clear descriptions of overall system functioning and the role automation plays; up-to-date notice that such systems are in use; the identity of the individual or organization responsible for the system; and explanations of outcomes that are clear, timely, and accessible.

  5. Human Alternatives, Consideration, and Fallback

The fifth and final principle provides that individuals should be able to opt out of automated systems in favor of a human alternative, where appropriate, and have access to a person who can quickly consider and remedy problems they encounter.

The OSTP indicates that such human consideration and remedy should serve as a fallback and escalation process if an automated system fails or produces an error, or if the individual involved wishes to appeal or contest its impacts on them. This option should be accessible, equitable, effective, and maintained, should be accompanied by appropriate operator training, and should not impose an unreasonable burden on the public.

The OSTP also makes special provisions for automated systems with an intended use within sensitive domains, including, but not limited to, criminal justice, employment, education, and health, noting that these should be “tailored to the purpose, provide meaningful access for oversight, include training for any people interacting with the system, and incorporate human consideration for adverse or high-risk decisions.”

Conclusion

Although the Blueprint is not legally binding, it highlights valuable considerations for companies engaged in the development or use of AI. It also emphasizes concepts found in other guidance, such as the FTC’s blog post Aiming for Truth, Fairness, and Equity in Your Company's Use of AI, and emerging requirements under state consumer data protection laws (e.g., the California Privacy Rights Act, Virginia Consumer Data Protection Act, Colorado Privacy Act, and Connecticut Data Privacy Act) related to profiling and automated decision-making. In this respect, the Blueprint may also help to preview potential directions in future legislation.

Tags

cybersecurity, data protection, ai, data privacy, data