Colo. Rev. Stat. § 6-1-1702

Current through 11/5/2024 election
Section 6-1-1702 - Developer duty to avoid algorithmic discrimination - required documentation
(1) On and after February 1, 2026, a developer of a high-risk artificial intelligence system shall use reasonable care to protect consumers from any known or reasonably foreseeable risks of algorithmic discrimination arising from the intended and contracted uses of the high-risk artificial intelligence system. In any enforcement action brought on or after February 1, 2026, by the attorney general pursuant to section 6-1-1706, there is a rebuttable presumption that a developer used reasonable care as required under this section if the developer complied with this section and any additional requirements or obligations as set forth in rules promulgated by the attorney general pursuant to section 6-1-1707.
(2) On and after February 1, 2026, and except as provided in subsection (6) of this section, a developer of a high-risk artificial intelligence system shall make available to the deployer or other developer of the high-risk artificial intelligence system:
(a) A general statement describing the reasonably foreseeable uses and known harmful or inappropriate uses of the high-risk artificial intelligence system;
(b) Documentation disclosing:
(I) High-level summaries of the type of data used to train the high-risk artificial intelligence system;
(II) Known or reasonably foreseeable limitations of the high-risk artificial intelligence system, including known or reasonably foreseeable risks of algorithmic discrimination arising from the intended uses of the high-risk artificial intelligence system;
(III) The purpose of the high-risk artificial intelligence system;
(IV) The intended benefits and uses of the high-risk artificial intelligence system; and
(V) All other information necessary to allow the deployer to comply with the requirements of section 6-1-1703;
(c) Documentation describing:
(I) How the high-risk artificial intelligence system was evaluated for performance and mitigation of algorithmic discrimination before the high-risk artificial intelligence system was offered, sold, leased, licensed, given, or otherwise made available to the deployer;
(II) The data governance measures used to cover the training datasets and the measures used to examine the suitability of data sources, possible biases, and appropriate mitigation;
(III) The intended outputs of the high-risk artificial intelligence system;
(IV) The measures the developer has taken to mitigate known or reasonably foreseeable risks of algorithmic discrimination that may arise from the reasonably foreseeable deployment of the high-risk artificial intelligence system; and
(V) How the high-risk artificial intelligence system should be used, not be used, and be monitored by an individual when the high-risk artificial intelligence system is used to make, or is a substantial factor in making, a consequential decision; and
(d) Any additional documentation that is reasonably necessary to assist the deployer in understanding the outputs and monitoring the performance of the high-risk artificial intelligence system for risks of algorithmic discrimination.
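
The disclosure items in subsection (2) amount to a structured checklist, and a developer might track them programmatically. What follows is a minimal sketch in Python; the class and field names are illustrative assumptions, not statutory terms or a prescribed format.

    from dataclasses import dataclass, field

    @dataclass
    class DeveloperDisclosure:
        """Illustrative internal record of the subsection (2) disclosure items."""
        general_statement: str              # (2)(a): foreseeable uses and known harmful or inappropriate uses
        training_data_summary: str          # (2)(b)(I): high-level summary of training data types
        known_limitations: list[str]        # (2)(b)(II): limitations and discrimination risks
        purpose: str                        # (2)(b)(III)
        intended_benefits_and_uses: str     # (2)(b)(IV)
        deployer_compliance_info: str       # (2)(b)(V): information needed for section 6-1-1703
        evaluation_summary: str             # (2)(c)(I): performance and discrimination evaluation
        data_governance_measures: str       # (2)(c)(II)
        intended_outputs: str               # (2)(c)(III)
        mitigation_measures: list[str]      # (2)(c)(IV)
        usage_and_monitoring_guidance: str  # (2)(c)(V)
        additional_documentation: list[str] = field(default_factory=list)  # (2)(d)
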
(3)
(a) Except as provided in subsection (6) of this section, a developer that offers, sells, leases, licenses, gives, or otherwise makes available to a deployer or other developer a high-risk artificial intelligence system on or after February 1, 2026, shall make available to the deployer or other developer, to the extent feasible, the documentation and information, through artifacts such as model cards, dataset cards, or other impact assessments, necessary for a deployer, or for a third party contracted by a deployer, to complete an impact assessment pursuant to section 6-1-1703 (3).
(b) A developer that also serves as a deployer for a high-risk artificial intelligence system is not required to generate the documentation required by this section unless the high-risk artificial intelligence system is provided to an unaffiliated entity acting as a deployer.
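
Subsection (3)(a) points to artifacts such as model cards and dataset cards as vehicles for this documentation. A minimal model-card-style sketch in Python follows; the schema, keys, and values are assumptions for illustration, as the statute prescribes no particular format.

    import json

    # Hypothetical model card; every field value here is an invented example.
    model_card = {
        "model_name": "example-screening-model",
        "intended_use": "support human review of applications",
        "out_of_scope_uses": ["fully automated consequential decisions"],
        "training_data_summary": "high-level description of training data types",
        "evaluation": {
            "performance": "held-out accuracy and error rates",
            "discrimination_testing": "disparity metrics across relevant groups",
        },
        "known_limitations": ["reduced accuracy on sparse input histories"],
        "mitigations": ["threshold calibration", "periodic re-audit"],
    }

    # Serialized form a developer could hand to a deployer completing an
    # impact assessment under section 6-1-1703 (3).
    print(json.dumps(model_card, indent=2))
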
(4)
(a) On and after February 1, 2026, a developer shall make available, in a manner that is clear and readily available on the developer's website or in a public use case inventory, a statement summarizing:
(I) The types of high-risk artificial intelligence systems that the developer has developed or intentionally and substantially modified and currently makes available to a deployer or other developer; and
(II) How the developer manages known or reasonably foreseeable risks of algorithmic discrimination that may arise from the development or intentional and substantial modification of the types of high-risk artificial intelligence systems described in subsection (4)(a)(I) of this section.
(b) A developer shall update the statement described in subsection (4)(a) of this section:
(I) As necessary to ensure that the statement remains accurate; and
(II) No later than ninety days after the developer intentionally and substantially modifies any high-risk artificial intelligence system described in subsection (4)(a)(I) of this section.
(5) On and after February 1, 2026, a developer of a high-risk artificial intelligence system shall disclose to the attorney general, in a form and manner prescribed by the attorney general, and to all known deployers or other developers of the high-risk artificial intelligence system, any known or reasonably foreseeable risks of algorithmic discrimination arising from the intended uses of the high-risk artificial intelligence system without unreasonable delay but no later than ninety days after the date on which:
(a) The developer discovers through the developer's ongoing testing and analysis that the developer's high-risk artificial intelligence system has been deployed and has caused or is reasonably likely to have caused algorithmic discrimination; or
(b) The developer receives from a deployer a credible report that the high-risk artificial intelligence system has been deployed and has caused algorithmic discrimination.
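
The timing rule in subsection (5) has two parts: disclosure without unreasonable delay, bounded by an outer ninety-day deadline measured from the trigger in (5)(a) or (5)(b). A sketch of that date arithmetic in Python, using a hypothetical trigger date:

    from datetime import date, timedelta

    # Hypothetical trigger date: discovery under (5)(a) or receipt of a
    # credible deployer report under (5)(b).
    trigger_date = date(2026, 3, 1)

    # Outer statutory bound: no later than ninety days after the trigger.
    disclosure_deadline = trigger_date + timedelta(days=90)
    print(f"Disclose without unreasonable delay; outer deadline {disclosure_deadline}")  # 2026-05-30
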
(6) Nothing in subsections (2) to (5) of this section requires a developer to disclose a trade secret, information protected from disclosure by state or federal law, or information that would create a security risk to the developer.
(7) On and after February 1, 2026, the attorney general may require that a developer disclose to the attorney general, no later than ninety days after the request and in a form and manner prescribed by the attorney general, the statement or documentation described in subsection (2) of this section. The attorney general may evaluate such statement or documentation to ensure compliance with this part 17, and the statement or documentation is not subject to disclosure under the "Colorado Open Records Act", part 2 of article 72 of title 24. In a disclosure pursuant to this subsection (7), a developer may designate the statement or documentation as including proprietary information or a trade secret. To the extent that any information contained in the statement or documentation includes information subject to attorney-client privilege or work-product protection, the disclosure does not constitute a waiver of the privilege or protection.

C.R.S. § 6-1-1702

Added by 2024 Ch. 198, § 1, eff. 5/17/2024.