Colorado’s New AI Discrimination Protections Are a “First Draft” for the Nation


On May 17, Colorado Gov. Jared Polis signed into law a new bill meant to provide protections against discrimination where AI is used in “high-risk” areas. The act’s broad reach has earned it the “first-in-the-nation” label, as seems to be the case with many of the AI bills hitting legislature floors these days; it is, however, the first to take broad steps toward regulating the private AI industry. The act is similar to a bill that passed the Connecticut senate in late April but died in early May, when Connecticut Gov. Ned Lamont said he would veto it if it reached his desk.

Colorado’s new law goes into effect on February 1, 2026.

What’s in the new law?

Colorado’s SB-24-205 details a laundry list of requirements for both developers and deployers of AI systems. The law specifically targets discrimination that results from what it calls “high-risk” AI being used in decision-making processes, conduct it labels “algorithmic discrimination.” Whenever AI is used to make a decision that results in differential treatment based on a protected class, the act considers this algorithmic discrimination. AI is considered high-risk if its decisions relate to consequential areas for consumers, such as education, employment, finance, healthcare, and housing. The definition also expands the typical list of protected classes (race, sex, religion, etc.) to include reproductive health.

The act attempts to protect against algorithmic discrimination by imposing a lengthy list of reporting requirements on both developers and deployers of AI. Deployers are simply those who use the “high-risk” AI systems the act targets. Both developers and deployers have to comply with the act’s requirements by February 1, 2026.

Developer Duties

Under the act, developers have a duty to avoid algorithmic discrimination when developing AI. The duty is not laid out in detail; instead, developers must use reasonable care when developing high-risk AI to protect consumers from known risks of discrimination, as well as any discrimination risks that are reasonably foreseeable.

To make sure developers are following this duty, the law requires that developers provide deployers with documentation covering extensive details about the AI, including:

  • a list of foreseeable uses & known harmful uses
  • high-level summaries of the data used to train the AI model
  • known or reasonably foreseeable limitations of the AI, including those that pose a risk of algorithmic discrimination
  • intended benefits of the system
  • how the AI was evaluated for discrimination risks before deployment
  • measures used to evaluate the “suitability” of the training data
  • steps the developer has taken to mitigate the risk of algorithmic discrimination
  • how the AI should/should not be used and monitored

This list is in addition to more basic information, such as the intended uses and outputs of the system, as well as any additional documentation deemed “reasonably necessary” for the deployer. Developers have to keep this information up to date as well. If any new potential risks emerge, the developer has 90 days to notify the Colorado Attorney General of the risk.
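For teams that already maintain model documentation, these disclosures map fairly naturally onto a structured record. The sketch below is purely illustrative; the statute does not prescribe any format, and every field name here is my own assumption rather than statutory language. It simply shows one way a developer might track the items the act asks for.

    from dataclasses import dataclass

    @dataclass
    class HighRiskAIDisclosure:
        """Hypothetical record of the documentation a developer hands to a deployer.
        Field names are illustrative assumptions, not statutory language."""
        intended_uses: list[str]                # reasonably foreseeable uses of the system
        known_harmful_uses: list[str]           # uses the developer knows to be harmful
        training_data_summary: str              # high-level description of the training data
        known_limitations: list[str]            # including those posing discrimination risk
        intended_benefits: list[str]            # what the system is meant to accomplish
        discrimination_evaluations: list[str]   # how the model was tested before deployment
        data_suitability_measures: list[str]    # how training-data "suitability" was assessed
        mitigation_steps: list[str]             # steps taken to reduce algorithmic discrimination
        usage_and_monitoring_guidance: str      # how the system should (and should not) be used

    # Example: a developer assembling the record for a hypothetical hiring-related system
    disclosure = HighRiskAIDisclosure(
        intended_uses=["resume screening assistance"],
        known_harmful_uses=["fully automated hiring decisions with no human review"],
        training_data_summary="Public job-posting corpus plus anonymized resume text",
        known_limitations=["lower accuracy on resumes with non-US education history"],
        intended_benefits=["faster initial screening"],
        discrimination_evaluations=["disparate-impact testing across protected classes"],
        data_suitability_measures=["manual audit of a sampled subset of training records"],
        mitigation_steps=["rebalanced training sample", "post-hoc score calibration"],
        usage_and_monitoring_guidance="Use only as a decision aid; audit outcomes quarterly.",
    )

Keeping the record in a single structure like this also makes the “keep it up to date” obligation easier to manage, since any change to the system forces a visible change to the disclosure.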

Developers also need to publish a less detailed statement for the public that covers the types of AI they’ve developed and how they manage the risks of algorithmic discrimination. This statement also needs to be updated within 90 days of any changes.

If developers manage to adhere to these requirements, they get a rebuttable presumption that they are following their duty to avoid discrimination (which means that if they’re sued over violating their duty, the court operates with the assumption that the developer has upheld the duty until the suing party can prove otherwise).

Deployer Duties

Deployers also have a duty to take “reasonable care” to prevent algorithmic discrimination. For them, that involves implementing a risk management policy for algorithmic discrimination in any high-risk AI they use. For the policy to comply with the new law, the act points to ISO/IEC 42001 from the International Organization for Standardization (ISO) or any substantially equivalent standard that is at least nationally recognized.

Deployers also have to conduct impact assessments on the high-risk AI they use at least annually, as well as within 90 days of deploying a new system or substantially modifying an existing one. Deployers also need to provide their own extensive documentation to the consumer, including:

  • a disclosure, before the decision is made, that a high-risk AI system is being used to make it
  • a statement of the intended use of the AI
  • a list of the categories of data that the AI system processes
  • any available right to opt out of the AI decision-making
  • the degree to which the AI contributed to the decision-making
  • the type of data the AI processed and the source of that data

This information needs to be given directly to the consumer in plain language and, if the deployer typically works in multiple languages, in all of those languages.
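To make the consumer-facing side concrete, here is a small, hypothetical sketch of how a deployer might assemble the pre-decision notice described above. The statute does not prescribe any particular format or wording; the function name, parameters, and phrasing here are assumptions for illustration only.

    def build_consumer_notice(purpose: str, data_categories: list[str],
                              data_sources: list[str], ai_role: str,
                              opt_out_available: bool) -> str:
        """Assemble a plain-language, pre-decision notice of high-risk AI use.
        Purely illustrative; the statute does not prescribe this format."""
        lines = [
            "An automated system will be used to help make this decision.",
            f"Purpose: {purpose}",
            f"Categories of data processed: {', '.join(data_categories)}",
            f"Sources of that data: {', '.join(data_sources)}",
            f"Role of the system in the decision: {ai_role}",
        ]
        if opt_out_available:
            lines.append("You may opt out of automated processing; contact us to do so.")
        return "\n".join(lines)

    # Example: notice shown to an applicant before a lending decision
    print(build_consumer_notice(
        purpose="initial creditworthiness screening",
        data_categories=["income", "credit history"],
        data_sources=["application form", "credit bureau report"],
        ai_role="advisory score reviewed by a human underwriter",
        opt_out_available=True,
    ))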

What does this mean for businesses?

As always, these laws are written with the best of intentions. It’s evident from the text of the law that the legislators’ goal was to ensure AI wasn’t unintentionally discriminating against consumers. These concerns are not unfounded, as exemplified by Meta’s automated ad system on Facebook, which unintentionally delivered housing ads based on protected classes. Colorado’s new law was likely written with incidents like that in mind.

However, the act’s execution of this goal has the potential to cause more issues for the AI industry while not clearly accomplishing what it set out to do. Even the Colorado governor expressed hesitancy when signing the bill into law. In his signing statement, he opened by saying that he signed the bill “with reservations” and urged the legislature to rethink it before it goes into effect. His hesitancy seems to stem specifically from the act’s approach of regulating AI based on the results of its use rather than the more typical approach of regulating based on discriminatory intent.

The governor’s reservations are not without merit. The reporting requirements on developers are lengthy and significant. Some of the more basic requirements are easier to comply with, like listing the intended purposes or providing high-level summaries of training data. Many AI models are trained on open datasets like LAION or Common Crawl, which developers can identify without much issue, so a basic, high-level description of the training data should not be an onerous requirement for most developers. However, the vaguer requirements could pose significant hurdles.

Identifying reasonably foreseeable uses and risks of the AI could be a far greater challenge for developers. AI models are complex and adaptable, much more so than most other software on the market today, and identifying all of the foreseeable uses of a system could be a massive undertaking. Under threat of an enforcement action from the state AG, developers are likely to pad out an exhaustive list of possible uses and risks simply to avoid noncompliance, making the list effectively useless. Since a “reasonably foreseeable risk” is far from a definite category, these warnings have the potential to end up overly broad and defeating their purpose, much like California’s Prop 65 warnings. The reporting requirements are well-intentioned, but they ask developers and deployers to report on things that aren’t readily discernible given the fundamental nature of how AI works.

Deployers also bear a significant burden under the act. In addition to the reporting requirements, deployers are required to implement a risk management policy for algorithmic discrimination. The law recommends following a specific ISO standard or any “substantially equivalent” standard, and because ISO standards are a global benchmark, it’s hard to say what would count as equivalent. ISO certifications take significant time, money, and effort to obtain: they involve months of preparation and participation from employees in nearly every department of a company, and they are well regarded precisely because they are so extensive. Many small companies simply don’t have the resources to complete these audits and obtain certification. The act does provide an exception for companies with fewer than 50 employees that meet certain requirements, but those requirements are narrowly tailored to circumstances most small companies won’t fit. And with the act going into effect in less than two years, companies face a tight timeline to obtain this certification.

The substantial burdens the law places on both developers and deployers have the capacity to stifle AI development in Colorado significantly. The law is a good idea with poor execution. The Colorado legislature should pay close attention to their governor’s call to re-evaluate the law and improve the implementation. Once development is stifled, it’s a very tricky task to encourage it again. It’s imperative to get AI regulation right the first time so we don’t lose momentum in AI development.

