
White House Artificial Intelligence Efforts Reflect Growing Global Interest

As part of this issue’s focus on a human rights approach to controlling AI-based technologies, we asked attorneys in the Seattle and Washington, DC offices of Davis Wright Tremaine (DWT) to describe the significance of a new call from President Biden’s technology advisors for a “Bill of Rights” for human interaction with artificial intelligence, especially biometric technologies like facial and voice recognition. This article sets the White House call within the context of the many other regulatory schemes under discussion in the US and European Union, some of which are not grounded in a “human rights” approach. Thanks to DWT’s technology lawyers for this excellent overview of the plethora of ideas currently in the works for setting government boundaries around AI.

In October 2021, the White House Office of Science and Technology Policy (OSTP) joined a growing effort—in the United States and internationally—to reckon with the growth of artificial intelligence (AI) technology. OSTP’s plan to develop a “bill of rights,” premised on the potential harmful consequences of AI and with particular attention to biometric technologies, includes a Request for Information (RFI) to understand how AI-enabled biometric technologies are used.

OSTP’s Activity

OSTP’s AI initiative is premised on the potential risks of AI-enabled biometric technologies, such as the consequences of using incomplete datasets that reflect past biases and enable present-day discrimination. In an opinion piece released concurrently with the RFI, OSTP’s Eric Lander and Alondra Nelson argue that “powerful technologies should be required to respect our democratic values and abide by the central tenet that everyone should be treated fairly.”

Lander and Nelson highlighted perceived risks that these technologies pose, such as questions about privacy and transparency, datasets that may not adequately represent American society, and, in some instances, abuse of biometric technologies. As with many efforts to stem potential AI risks, OSTP’s narrative highlights concerns about both inputs to (e.g., bias in underlying datasets) and outputs from (e.g., decisions with discriminatory effects) AI systems.

The RFI seeks to understand how AI technology currently uses biometrics—such as facial and voice recognition, gait recognition, and keystroke analysis—particularly in the employment, education, and advertising contexts. The RFI specifically asks about how biometric information is used for recognition or inferences, how the technologies are researched and validated, security considerations, actual and potential harms and benefits of the technology, and existing best practices and governance programs. Public comments in response to the RFI are due January 15, 2022, and are expected to provide a record basis for OSTP to develop its proposed AI bill of rights.

The Administration appears to be putting its money where its mouth is on AI as well: the recently released Networking and Information Technology R&D Program/National Artificial Intelligence Initiative Office Supplement to the President’s FY 2022 Budget includes a funding request for nondefense AI research and development representing an 8.8% increase in spending compared to the FY 2021 budget (and a total of 11.2% over the FY 2021 request).


A “Soft Law” Approach to AI

The biometrics-focused RFI is expected to support OSTP’s overall efforts to develop an AI bill of rights. OSTP intends the bill of rights to “clarify the rights and freedoms” of individuals using or subject to AI technology. Although the exact “rights” remain to be articulated, the characterization suggests a constitutional aspect and is reminiscent of language used in the European Union’s Charter of Fundamental Rights. Lander and Nelson suggest the following:

  • Right to know when and how AI is influencing a decision that affects an individual’s civil rights and civil liberties;
  • Freedom from being subjected to AI that has not been “carefully” audited to ensure that it is accurate and unbiased;
  • Right to be secure in systems being trained on “sufficiently representative” datasets;
  • Freedom from pervasive or discriminatory surveillance and monitoring in the home, community, and workplace; and
  • Right to “meaningful recourse” should the use of an algorithm result in harm.

OSTP’s invocation of a bill of rights also brings to mind the “soft law” standard-setting common in international law, particularly international human rights law. While the soft law of the international community is often derided as unenforceable and “lacking teeth,” OSTP’s efforts, though not of constitutional magnitude, could inform the “hard law” found in legislation, executive orders, or administrative regulation, given the significant attention at all levels to AI regulation.


Growing US and International Attention to AI

OSTP’s AI focus comes amid a flurry of policy attention to AI. According to the National Conference of State Legislatures, 17 states and the District of Columbia considered AI legislation in 2021, four more than the previous year. At the federal level, both the Obama and Trump administrations studied AI, going so far as to issue guidance and a strategic plan related to AI development. Congress has its eye on AI, too, with the inclusion of AI-related provisions in the 2021 National Defense Authorization Act and proposed AI-specific legislation. In addition, OSTP and the National Science Foundation (NSF) issued an earlier RFI to develop an Implementation Plan for a National Artificial Intelligence Research Resource, and the National Institute of Standards and Technology (NIST) called for public input into the development of AI risk management guidance. NIST has separately advanced efforts on the trustworthiness of AI, and the Equal Employment Opportunity Commission (EEOC) recently launched an initiative on AI and algorithmic fairness in employment. The Federal Reserve, Consumer Financial Protection Bureau, and several other finance-related agencies and bureaus issued an RFI regarding the use of AI and machine learning by financial institutions.

Attention to AI continues to grow outside the US as well, and similarly reflects concern about the potential risks of AI alongside a desire to capture its societal benefits. The European Commission’s AI strategy informed its “AI package,” published in April 2021, which included newly proposed rules as well as a proposal for harmonized European Union (EU) rules for AI systems. The Office of the Privacy Commissioner of Canada completed a consultation in 2020 on reforming Canada’s omnibus privacy law to address AI. Within the international system, the Organisation for Economic Co-operation and Development (OECD) issued Principles on AI, and recently sought stakeholder input on its framework for classifying AI systems. The United Nations (UN) High Commissioner for Human Rights, Michelle Bachelet, highlighted “negative, even catastrophic” risks of AI systems and called for UN Member States to prohibit the sale and use of such technology until the risks can be mitigated. However, the UN has also recognized the potential of AI to advance the global community and achieve the UN Sustainable Development Goals.


Opportunities for Input

The OSTP’s RFI presents an opportunity for interested stakeholders to express their perspectives on the development of AI policy and legal standards. But it is unlikely to be the last request for public input, or the final effort to articulate how the US—and the world—should assess the impact of, and consider approaches to, the development and use of AI technology.


Kate Berry, Katherine Sheriff, K.C. Halm, John Seiver

Kate Berry is a member of Seattle-headquartered law firm Davis Wright Tremaine’s technology, communications, privacy, and security group, where she helps clients navigate and comply with state, federal, and international privacy laws.

Katherine Sheriff leads DWT's Mobility and Transportation Group. She devotes her legal practice to identifying areas of opportunity, and potential challenges, in emerging technology sectors, particularly in the dynamic fields of autonomous vehicles and artificial intelligence.

K.C. Halm is a partner in the Washington, DC office of DWT focusing on broadband policy and competition matters involving wireless licensing and spectrum acquisition. He also advises the firm’s emerging technology clients on issues arising in the use of AI and machine learning applications under existing and emerging legal frameworks.

John Seiver has either directly handled or been substantially involved with most major state and national communications litigation since 1984, with a particular focus more recently on privacy and pole attachments.
