Colorado AI Systems Regulation: What Healthcare Deployers and Developers Need to Know | Mintz – Health Care Insights

As the first state law broadly regulating the use of an artificial intelligence system (AI System), Colorado’s SB24-205, “Concerning Consumer Protections in Interactions with Artificial Intelligence Systems” (the Act), has generated considerable interest among industry, for good reason. In some ways similar to the risk-based approach taken by the European Union (EU) in the EU AI Act, the Act regulates developers and deployers of AI Systems, which the Act defines as “any machine-based system that, for any explicit or implicit objective, infers from the inputs the system receives how to generate outputs, including content, decisions, predictions, or recommendations, that can influence physical or virtual environments.”

The Act is scheduled to take effect on February 1, 2026, and its scope is limited to activities in Colorado, entities doing business in Colorado, or entities whose activities affect Colorado residents. It generally focuses on regulating “high-risk” AI Systems, defined as any AI System that, when deployed, makes, or is a substantial factor in making, a consequential decision. A “consequential decision” is a decision that has a material legal or similarly significant effect on the provision or denial to any consumer of, or the cost or terms of, among other services, health care services.
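
To make that threshold definition concrete, the following is a minimal sketch in Python of how an organization might record the two-part “high-risk” test. The field and function names are our own illustrative inventions, not terms from the Act, and each boolean stands in for an analysis that would need to be performed and documented:

from dataclasses import dataclass

# Illustrative model of the Act's threshold test: an AI System is
# "high-risk" if, when deployed, it makes, or is a substantial factor
# in making, a consequential decision (e.g., one affecting the
# provision or denial, or the cost or terms, of health care services).

@dataclass
class AISystemProfile:
    makes_decision: bool             # the system itself makes the decision
    substantial_factor: bool         # the system is a substantial factor in it
    decision_is_consequential: bool  # material legal or similarly
                                     # significant effect on a consumer

def is_high_risk(p: AISystemProfile) -> bool:
    """Rough triage of the 'high-risk' definition described above."""
    return (p.makes_decision or p.substantial_factor) and p.decision_is_consequential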

Developer and Deployer Requirements

Both developers and deployers of high-risk AI Systems must use reasonable care to protect consumers from any known or reasonably foreseeable risk of “algorithmic discrimination.” The Act also imposes several obligations on developers of high-risk AI Systems, including disclosing specified information to deployers; publishing summaries of the types of high-risk AI Systems they develop and how they manage any foreseeable risks; and disclosing to the Colorado Attorney General (AG) “any known or reasonably foreseeable risk” of algorithmic discrimination arising from the intended uses of a high-risk AI System within 90 days of discovery. Deployers will need to implement risk management policies and programs to govern their deployment of high-risk AI Systems; complete impact assessments for high-risk AI Systems; notify consumers after deploying a high-risk AI System that makes, or is a substantial factor in making, a consequential decision about them; and notify the AG within 90 days of discovering that a high-risk AI System has caused algorithmic discrimination.
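
As a sketch only, a compliance team might track these role-specific obligations and the notification window along the following lines. The duty lists below paraphrase the Act’s requirements and are not exhaustive, and the 90-day window is assumed to run in plain calendar days:

from datetime import date, timedelta

# Paraphrased, non-exhaustive duty lists for each role under the Act.
DEVELOPER_DUTIES = [
    "disclose required information to deployers",
    "publish summaries of high-risk AI Systems and risk management",
    "notify the Colorado AG of algorithmic discrimination risks "
    "within 90 days of discovery",
]
DEPLOYER_DUTIES = [
    "implement a risk management policy and program",
    "complete impact assessments for high-risk AI Systems",
    "notify consumers subject to consequential decisions",
    "notify the Colorado AG within 90 days of discovering "
    "algorithmic discrimination",
]

def ag_notice_deadline(discovered_on: date) -> date:
    """Assumed 90-calendar-day window for the AG notification."""
    return discovered_on + timedelta(days=90)

# Example: discovery on the Act's effective date.
print(ag_notice_deadline(date(2026, 2, 1)))  # 2026-05-02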

Health Care Services Scope and Exemptions

The Act defines “health care services” by reference to the definition in the Public Health Service Act. Although this broad definition can encompass a wide range of services, the drafters also accounted for systems that are not high-risk and for work that has already been done or is in progress at the federal level, as the Act includes exemptions applicable to some health care entities.

Entities Covered by HIPAA

The Act will not apply to deployers, developers, or others who are Covered Entities under HIPAA and are providing health care recommendations that: (i) are generated by an AI System; (ii) require a health care provider to take action to implement the recommendations; and (iii) are not considered high risk (as defined by the Act). This exemption appears to be directed at health care providers, as it requires a provider to actually implement the recommendations made by the AI System rather than having the recommendations implemented automatically. However, its scope is not limited to providers, as Covered Entities can be health care providers, health plans, or health care clearinghouses. There are a number of potential uses of AI Systems by HIPAA Covered Entities, including but not limited to disease diagnosis, treatment planning, clinical outcome prediction, coverage determinations, diagnostics and imaging, clinical research, and population health management. Examples of AI System uses that are not “high risk” in relation to health care services, and that could potentially meet this exemption, include administrative tasks such as clinical documentation and record keeping, billing, or appointment scheduling.
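
The three-pronged structure of this exemption lends itself to a simple checklist. The sketch below is illustrative only; the predicate names are ours, and whether a given use actually satisfies each prong is a legal determination:

def hipaa_recommendation_exempt(
    is_covered_entity: bool,        # HIPAA Covered Entity (provider, plan,
                                    # or clearinghouse)
    ai_generated: bool,             # (i) recommendation generated by an AI System
    provider_must_implement: bool,  # (ii) a provider must act to implement it
    not_high_risk: bool,            # (iii) the use is not "high risk" under the Act
) -> bool:
    """All three prongs, plus Covered Entity status, must hold."""
    return (is_covered_entity and ai_generated
            and provider_must_implement and not_high_risk)

# Example: an AI scheduling assistant whose suggestions a provider reviews.
assert hipaa_recommendation_exempt(True, True, True, True)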

FDA-Approved Systems

Deployers, developers, and others who deploy, develop, put into service, or substantially modify high-risk AI Systems that have been approved, authorized, certified, cleared, developed, or granted by a federal agency such as the Food and Drug Administration (FDA) are not required to comply with the Act. Since the FDA has deep experience with AI and machine learning (ML) and, as of May 13, 2024, has authorized 882 AI/ML-enabled medical devices, this is an expected and welcome clarification for those entities that have already developed, or are working with, AI/ML-enabled, FDA-cleared medical devices. Additionally, deployers, developers, or others conducting research to support an application for approval or certification by a federal agency such as the FDA, or research to support an application otherwise subject to review by the agency, are not required to comply with the Act. The use of AI Systems is widespread in drug development, and to the extent these activities are approved by the FDA, the development and deployment of AI Systems under those approvals is not subject to the Act.

Compliance with ONC Standards

Also exempt from the Act’s requirements are deployers, developers, or others who deploy, develop, put into service, or intentionally and substantially modify a high-risk AI System that is in compliance with standards established by a federal agency such as the Office of the National Coordinator for Health Information Technology (ONC). This exemption helps avoid potential regulatory uncertainty for certified health IT developers, and for health care providers using certified health IT, under ONC’s HTI-1 Final Rule, which imposes certain information disclosure and risk management obligations on certified health IT developers. Not all developers of high-risk AI Systems in health care are certified health IT developers, but this is an important exemption for those developers that are already compliant with, or working toward compliance with, the HTI-1 Final Rule.

Key Takeaways

Using a risk-based approach to review the use of AI Systems may be a new practice for developers and deployers directly or indirectly involved in the provision of health care services. Deployers in particular will want to have processes in place to determine whether they are required to comply with the Act and to document the results of any applicable analysis. Those analyses will include determining whether an AI System makes, or is a substantial factor in making, a consequential decision regarding the provision of health care services (and the system is thus “high-risk”). If an organization determines that it is using high-risk AI Systems and none of the exemptions discussed above apply, it will need to begin activities such as developing the required risk management policies and procedures, conducting impact assessments for those systems, and standing up consumer and AG notification mechanisms. It will take time for some organizations to integrate these new obligations into their existing policies, procedures, and risk management systems, and they will want to make sure they include the right individuals in those conversations and decisions.
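
Putting the pieces together, the triage described above can be thought of as an ordered decision sequence. The sketch below is again illustrative only; each boolean is a placeholder for an analysis that would be performed and documented in practice:

def act_compliance_triage(in_scope: bool, high_risk: bool,
                          hipaa_exempt: bool, fda_exempt: bool,
                          onc_exempt: bool) -> str:
    """Ordered triage mirroring the analysis described above."""
    if not in_scope:
        return "outside the Act's scope"
    if not high_risk:
        return "not a high-risk AI System under the Act"
    if hipaa_exempt or fda_exempt or onc_exempt:
        return "high-risk but exempt; document the basis"
    return ("high-risk and covered: risk management policies, impact "
            "assessments, and consumer/AG notices required")

print(act_compliance_triage(True, True, False, False, False))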
