
Rocky Mountain High on AI: Colorado Emerges as the First Mover on State AI Law

Colorado stepped boldly into the difficult arena of regulating artificial intelligence (AI) on May 17, 2024, when Governor Jared Polis signed into law a groundbreaking measure known as Senate Bill 205 or the "Colorado AI Act" (CAIA). The new law bears significant similarities to the EU AI Act [see here and here], including a risk-based approach and rules around AI impact assessments. Effective February 1, 2026, Colorado will require developers and deployers of "high-risk AI systems" to adopt compliance measures that protect consumers from the perils of AI bias. Noncompliance with the Colorado AI Act could lead to hefty civil penalties for engaging in deceptive trade practices.

Background

The enactment of the CAIA in the Centennial State is the culmination of a nationwide push in 2024 to regulate the use of AI, with the 3-Cs leading the charge: California, Connecticut, and Colorado. While California is still making slow progress on its proposed regulations of automated decision-making technology (ADMT), Connecticut's ambitious AI bill (SB 2) was derailed by Governor Lamont's veto threat. In the end, Colorado's SB 205 became the lone horse to cross the finish line. Two other states, Utah and Tennessee, also passed AI-related laws this year, focusing specifically on generative AI and deepfakes, respectively. The Colorado AI Act therefore becomes the first comprehensive U.S. state law with rules and guardrails for AI development, use, and bias mitigation. Here are the top takeaways from the legislation:

What Type of AI System Does the CAIA Regulate?

The CAIA adopts the broad definition of "Artificial Intelligence System" nearly verbatim from the EU AI Act, adopted on March 13, 2024 (see this alert on the EU AI Act). As illustrated below, the CAIA takes a technology-neutral stance and purposefully sets a broad definition so that it does not become obsolete as AI rapidly advances:

EU AI Act, Article 3(1): "AI System" means a machine-based system that is designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment, and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments.

Colorado AI Act, Section 6-1-1701(2): "AI System" means any machine-based system that, for any explicit or implicit objective, infers from the inputs the system receives how to generate outputs, including content, decisions, predictions, or recommendations, that can influence physical or virtual environments.


What are High-Risk AI Systems Under the CAIA?

The CAIA follows the EU AI Act's risk-based approach but has a narrower focus on the use of "high-risk AI systems" in the private sector. For Colorado residents, an AI system is "high-risk" if it "makes, or is a substantial factor in making, a consequential decision" affecting their access to, or the conditions for receiving, any of the following (a simple sketch of this test appears after the list):

a. education enrollment or an education opportunity;
b. employment or an employment opportunity;
c. a financial or lending service;
d. an essential government service;
e. health care services;
f. housing;
g. insurance; or
h. a legal service.
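
To make the test concrete for compliance teams, here is a minimal illustrative sketch in Python. It is a rough paraphrase of the statutory test described above, not the statute itself; all names in it (is_high_risk, CONSEQUENTIAL_DOMAINS, and the example scenarios) are invented for illustration.

    # Illustrative sketch only; not legal advice and not statutory text.
    # Models the CAIA test described above: a system is "high-risk" if it
    # makes, or is a substantial factor in making, a consequential decision
    # in one of the enumerated domains. All identifiers are invented.

    CONSEQUENTIAL_DOMAINS = {
        "education", "employment", "financial_or_lending",
        "essential_government_service", "health_care",
        "housing", "insurance", "legal_service",
    }

    def is_high_risk(decision_domain: str, substantial_factor: bool) -> bool:
        """Rough first-pass screen for the CAIA's high-risk definition."""
        return substantial_factor and decision_domain in CONSEQUENTIAL_DOMAINS

    # Example: an AI resume screener that substantially drives hiring decisions
    print(is_high_risk("employment", substantial_factor=True))      # True
    print(is_high_risk("spam_filtering", substantial_factor=True))  # False

Any real-world determination will, of course, turn on the statute's definitions and the facts of the deployment, not on a checklist like this.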

Unlike the Colorado Privacy Act (effective July 1, 2023), which exempts employee data and financial institutions subject to the Gramm-Leach-Bliley Act (GLBA), the Colorado AI Act expressly prohibits algorithmic discrimination affecting Colorado residents' employment opportunities or access to financial or lending services. Further, Colorado's definition of "high-risk AI systems" excludes a list of low-risk AI tools, such as anti-malware, cybersecurity tools, calculators, spam filtering, web caching, and spell-checking, among other low-risk technologies.

Who are the Key Players Under the CAIA?

While the EU AI Act sets a comprehensive framework to regulate activities across six key players that develop, use, and distribute AI systems (see this alert on the EU AI Act's Key Actors), the Colorado AI Act narrows the field to only two players:

  • AI Developer, i.e., a legal entity doing business in Colorado that develops, or intentionally and substantially modifies, an AI system; and
  • AI Deployer, i.e., a legal entity doing business in Colorado that uses a high-risk AI system.

Under the CAIA, both Developers and Deployers of high-risk AI systems must use reasonable care to protect consumers from known or reasonably foreseeable risks of algorithmic discrimination. If the Colorado Attorney General brings an enforcement action, a company is afforded a rebuttable presumption that it used reasonable care if it complied with its respective obligations as a Developer or Deployer, as outlined below.

What are AI Developers' Obligations Under the CAIA?

Upon creating a new high-risk AI system, or intentionally and substantially modifying one, an AI Developer must comply with the following requirements:

1. AI Instructions: provide disclosures and documentation to downstream users regarding the intended use and specifics of its high-risk AI systems;
2. Impact Assessment Facilitation: make available additional documentation or information to facilitate impact assessment by downstream users (aka the AI Deployer);
3. Public Disclosure: maintain and post a current public statement on the Developer's website summarizing: (i) what types of high-risk AI it has developed for use and license; and (ii) how it manages risks of algorithmic discrimination; and
4. Incident Reporting: report to the Colorado Attorney General upon discovery of any algorithmic discrimination.

What are CAIA's Requirements for Deployers That Use AI?

For AI Deployers who are downstream users of high-risk AI, the CAIA imposes similar obligations around public disclosure and incident reporting:

1. Public Disclosure: maintain and post a current public statement on the Deployer's website summarizing its use of high-risk AI;
2. Incident Reporting: report to the Colorado Attorney General upon discovery of algorithmic discrimination;

In addition, an AI Deployer must comply with the following requirements:

3. Risk Management Program: Implement a risk-management policy and program that governs high-risk AI uses;
4. Impact Assessment: Conduct an impact assessment of its current use of high-risk AI systems annually, and within 90 days after any intentional and substantial modification of a high-risk AI system;
5. Pre-use Notice to Consumers: Notify consumers with a statement disclosing information about the high-risk AI system in use; and
6. Consumer Rights Disclosure: Inform Colorado consumers of their rights under the CAIA, including the right to pre-use notice, the right to exercise data privacy rights, and the right to an explanation if an adverse decision is made from the use of high-risk AI, among others.

Does the CAIA Exempt Small to Medium-Sized Enterprises (SMEs)?

Yes. The CAIA exempts Deployers of high-risk AI that are small to medium-sized enterprises (SMEs) employing fewer than 50 full-time equivalent employees, provided they meet certain other conditions: such Deployers need not maintain a risk management program, conduct an impact assessment, or post a public statement. These SMEs remain subject to the duty of reasonable care and must still provide the relevant consumer notices.

How Will the Colorado AG Enforce the CAIA?

The CAIA vests the Colorado AG with exclusive enforcement authority. Any violation of the CAIA constitutes a deceptive trade practice subject to hefty civil penalties under the Colorado Consumer Protection Act. Section 6-1-112 of that act currently imposes a civil penalty of up to $20,000 per violation, and up to $50,000 per violation if a deceptive trade practice is committed against an elderly person aged 60 or older.
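
Because the penalty applies per violation, exposure scales quickly. The short, purely hypothetical Python sketch below illustrates the arithmetic; the violation counts are invented, and how violations are actually counted will be a fact-specific question in any enforcement action.

    # Hypothetical penalty-exposure arithmetic under the caps cited above.
    # Assumes, for illustration only, that each affected consumer is a
    # separate violation; the counts below are invented.

    PENALTY_CAP = 20_000          # up to $20,000 per violation
    PENALTY_CAP_ELDERLY = 50_000  # up to $50,000 per violation (age 60+)

    def max_exposure(violations: int, elderly_violations: int = 0) -> int:
        """Upper-bound civil penalty for the given violation counts."""
        return violations * PENALTY_CAP + elderly_violations * PENALTY_CAP_ELDERLY

    # Example: 90 ordinary violations plus 10 involving consumers aged 60+
    print(f"${max_exposure(90, 10):,}")  # $2,300,000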

Key Takeaways

With the Colorado AI Act set to take effect on February 1, 2026, and potentially serving as a blueprint for other states, companies must start planning their AI compliance roadmap, including policy development, AI audits and assessments, and AI vendor contract management. The time to get ready is now, to ensure compliance and mitigate potential regulatory and operational risks.

For more information or assistance on this topic, please contact Vivien Peaden, AIGP, CIPP/US, CIPP/E, CIPM, PLS, or a member of Baker Donelson's AI Team.
