Taking a Byte From the Regulatory Apple: States Introduce AI Regulations, Creating Conflict Risks With Future Federal Law

*This article was originally published by Legaltech News.*

As the federal government grapples with the complexities of comprehensive AI regulation and competing agendas, U.S. states are computing their own solutions to the challenges posed by the rapid advancement of AI in services, products and industries. There have been some efforts to prompt federal oversight of AI, including the Executive Order on Safe, Secure and Trustworthy Development and Use of Artificial Intelligence released in October 2023 and the bipartisan AI Roadmap unveiled in May 2024. Yet, presently, there is no overarching federal law or regulatory scheme specific to the unique challenges of AI. This places AI regulation on track to follow the same path as privacy/data collection, with the states, the courts, the industry itself, and other jurisdictions trying to fill the void.

Several states are trying to tackle this challenge. At least 12 states, including California, Colorado, Illinois, New York and Utah, have enacted or proposed laws to regulate AI, taking different approaches. Some states are acting boldly with broad, sweeping legislation, while others are taking “baby steps” by regulating small, discrete areas affected by AI.

Colorado is an example of the comprehensive approach. It recently enacted the Colorado Artificial Intelligence Act (SB 205) to regulate high-risk AI systems across a range of industries. A system is “high risk” if it makes decisions, referred to as “consequential decisions,” that have significant legal or similar effects in sectors such as education, employment, finance, government services, health care, housing, insurance, and legal services. Other far-reaching provisions of the law include a mandate that developers and deployers exercise reasonable care to protect consumers from algorithmic discrimination.

Colorado’s SB 205 also creates broad disclosure requirements. Developers and deployers alike must post statements on their websites about their use of AI systems in making consequential decisions and the risk management measures associated with those systems. If they discover that their AI tools have caused, or are reasonably likely to cause, algorithmic discrimination, they must disclose it to the Colorado attorney general within 90 days. Deployers also are required to implement risk management policies and conduct annual impact assessments.

Connecticut considered its own sweeping AI bill, SB 2. The bill proposed to regulate private-sector deployment and use of artificial intelligence systems, and included regulations for high-risk AI systems across various sectors. SB 2 died after the state’s governor, Ned Lamont, announced that he would veto the bill if the House passed it. He instead expressed interest in Connecticut working with other states to tackle AI.

Among the states taking the alternative approach of more targeted AI regulation is Oregon. The state passed the Oregon Consumer Privacy Act (Senate Bill 619), modeled on Virginia’s Consumer Data Protection Act. SB 619 creates strong, comprehensive consumer data privacy rights. It does not squarely target the intersection of AI and privacy, but it includes certain restrictions on automated profiling of personal data.

The federal government also recognizes the importance of AI and is grappling with its regulation. The initial big foray on the federal front was the executive order on AI. The Senate followed with its Bipartisan Senate AI Working Group recently introducing the AI Roadmap. The AI Roadmap covers a broad spectrum of AI-related issues, including intellectual property reforms, funding for AI research, transparent model testing, and national security. It also pushes for the development of sector-specific rules for AI use in areas such as housing, health care, education, financial services, news and journalism, and content creation.

Addressing certain intellectual property rights is a key focal point of the AI Roadmap. This includes potential legislation to protect against the unauthorized use of one’s name, image, likeness and voice in AI-generated content. As the states traditionally govern this area, such federal legislation could conflict with existing state laws. The AI Roadmap also considers the use of copyright-protected materials to train AI systems, an area already rife with litigation. And it highlights the risk that AI-generated or AI-augmented content poses to elections and democracy, while balancing First Amendment rights.

Similar to the disclosure requirements mandated by Colorado and proposed by Connecticut, the AI Roadmap encourages transparency requirements for high-risk uses of AI. It suggests that providers of systems implicating constitutional rights, public safety, or anti-discrimination laws could be required to disclose information about training data and the factors that influence automated or algorithmic decision-making. This provision targets the “black box” nature of some AI systems, which can hinder assessment of compliance with, for example, consumer protection and civil rights laws.

The AI Roadmap also delves into the sticky issue of a potential federal data privacy law. To protect personal information, it notes that legislation should address data minimization, data security, consumer data rights, and consent and disclosure. Such legislation has the potential to upend existing state privacy and AI laws.

The current landscape of AI regulation in the U.S. is characterized by a growing patchwork of state-level initiatives. State action potentially offers a quicker path to increased individual protection and transparency into the use of AI in consumer-facing products and services. But it also creates a complex and potentially inconsistent regulatory environment. Businesses and consumers must navigate a landscape of existing federal law with potential relevance, potentially competing and untested state regulations, and even international regimes such as the EU’s far-reaching Artificial Intelligence Act.

The prospect of future federal AI legislation looms as a likely major disruptor with global impact. The federal government seems motivated to act on AI. Its reasons may include retaining U.S. leadership in technology, laying the foundation for the next “industrial” revolution, keeping and attracting AI companies in the U.S., and avoiding past criticisms of federal regulation (or its absence) in the technology sector. As the federal government ponders how to proceed, it should monitor the solutions emerging at the state level. They may provide valuable insights and potential models for future legislation.

Reprinted with permission from the August 1, 2024, edition of Legaltech News © 2024 ALM Global, LLC. All rights reserved. Further duplication without permission is prohibited, contact 877-256-2472 or asset-and-logo-licensing@alm.com.
