On May 6, 2024, OCR published the final rule interpreting and implementing Section 1557 at 45 C.F.R. Part 92 (the Final Rule). The Final Rule regulates the use of patient care decision support tools, including AI algorithms for screening, risk prediction, diagnosis, prognosis, clinical decision-making, treatment planning, health care operations, and allocation of resources.
On January 10, 2025, OCR released a "Dear Colleague" letter focused on how covered entities can comply with the Final Rule's requirements in their use of patient care decision support tools.
Patient care decision support tools are defined as "any automated or non-automated tool, mechanism, method, technology, or combination thereof used by a covered entity to support clinical decision-making in its health programs or activities." 45 C.F.R. Part 92. Included in this definition is any artificial intelligence (AI) algorithm used in diagnosis, treatment, patient monitoring, or any other aspect of health care operations. Covered entities subject to the Final Rule are (1) recipients of Federal financial assistance; (2) the U.S. Department of Health and Human Services; and (3) entities established under Title I of the Affordable Care Act.
So, what exactly must covered entities that use AI in their health care operations do?
In addition to the general prohibition against discrimination on the basis of race, color, national origin, sex,1 age, or disability in covered entities' health programs or activities through the use of patient care decision support tools, the January 10 letter expands upon two specific requirements in the Final Rule.
Identification of risk – First, covered entities under the Final Rule have an "ongoing duty to make reasonable efforts to identify uses of patient care decision support tools in [their] health programs or activities that employ input variables or factors that measure race, color, national origin, sex, age, or disability." 45 C.F.R. § 92.210(b). The Final Rule does not specifically describe how covered entities must make these reasonable efforts, but the January 10 letter provides the following examples:
- Review OCR's discussion of risks in the use of such tools in the Section 1557 final rule, including categories of tools used to assess heart failure risk, cancer risk, lung function, and blood oxygen levels;
- Review published research studies in peer-reviewed medical journals and publications from health care professional and hospital associations, including those issued by HHS;
- Utilize, implement, or create AI safety registries developed by non-profit AI organizations or others, including internal registries maintained by the covered entity to track use cases within the organization (an illustrative sketch of such an internal registry follows this list); and
- Obtain information from vendors about the input variables or factors included in existing patient care decision support tools.
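As one illustration of the internal registry approach referenced above, a covered entity's IT or compliance staff might maintain a simple inventory of deployed decision support tools and flag any whose documented input variables include a characteristic protected under Section 1557. The sketch below is hypothetical: the tool names, vendors, and input variables are invented for illustration, and neither the Final Rule nor the January 10 letter prescribes this (or any particular) format.

```python
# Hypothetical sketch of an internal patient care decision support tool registry.
# All tool names, vendors, and variables are assumed examples, not real products.
from dataclasses import dataclass, field

# Characteristics protected under Section 1557.
PROTECTED_CHARACTERISTICS = {"race", "color", "national origin", "sex", "age", "disability"}

@dataclass
class DecisionSupportTool:
    name: str
    vendor: str
    use_case: str
    input_variables: set[str] = field(default_factory=set)

def flag_tools_for_review(registry: list[DecisionSupportTool]) -> list[tuple[DecisionSupportTool, set[str]]]:
    """Return tools whose documented input variables include a protected characteristic."""
    flagged = []
    for tool in registry:
        overlap = {v for v in tool.input_variables if v.lower() in PROTECTED_CHARACTERISTICS}
        if overlap:
            flagged.append((tool, overlap))
    return flagged

if __name__ == "__main__":
    registry = [
        DecisionSupportTool("CardiacRiskScore", "ExampleVendor A", "heart failure risk",
                            {"age", "blood pressure", "ejection fraction"}),
        DecisionSupportTool("PulmScreen", "ExampleVendor B", "lung function estimation",
                            {"race", "height", "spirometry"}),
    ]
    for tool, overlap in flag_tools_for_review(registry):
        print(f"Review: {tool.name} ({tool.use_case}) uses protected input variables: {sorted(overlap)}")
```

A registry of this kind can also feed the vendor-information step that follows: knowing which tools are in use makes it easier to request input-variable documentation from each developer.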
The January 10 letter also makes clear that any OCR review of whether a covered entity made "reasonable efforts" to identify this risk will be conducted through a case-by-case analysis that may consider a multitude of factors, including:
- The covered entity's size and resources (e.g., a large hospital with an IT department and a health equity officer would likely be expected to make greater efforts to identify tools than a smaller provider without such resources);
- The information available at the time of use, which bears on whether the covered entity had notice of potential discrimination where a product used input variables measuring race, color, national origin, sex, age, or disability;
- Whether the covered entity used the tool in the manner intended by the developer and approved by regulators, if applicable, or whether the covered entity has adapted or customized the tool;
- Whether the covered entity received product information from the developer of the tool regarding the potential for discrimination or identified that the tool's input variables include race, color, national origin, sex, age, or disability; and
- Whether the covered entity has a methodology or process in place for evaluating the patient care decision support tools it adopts or uses.
Mitigation of risk – Second, for each patient care decision support tool for which risk of discrimination is identified, covered entities must "make reasonable efforts to mitigate the risk of discrimination resulting from the tool's use in its health programs or activities." 45 C.F.R. § 92.210(c). Once again, the January 10 letter provides specific examples of how this mitigation might be accomplished:
- Establish written policies and procedures governing how patient care decision support tools are used in decision-making, as well as governance measures;
- Monitor potential impacts and develop ways to address complaints of alleged discrimination;
- Maintain an internal AI registry, or reference AI registries developed by non-profit AI organizations or others, to provide the covered entity with information regarding what is being used internally and to facilitate regulatory compliance;
- Utilize staff to override and report potentially discriminatory decisions made by a patient care decision support tool, including a mechanism for ensuring a "human in the loop" review of a tool's decision by a qualified human professional;
- Train staff members on how to report results and how to interpret decisions made by the tool, including any factors required by other federal rules;
- Establish a registry of tools identified as posing a risk of discrimination and review previous decisions made by these tools;
- Audit the performance of tools in "real world" scenarios and monitor the tools for discrimination (a simple illustration of such a disparity check appears after this list); and
- Disclose to patients a covered entity's use of patient care decision support tools that the entity has identified as posing the risk of discrimination.
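By way of example, the "real world" auditing and monitoring measure above could include periodically comparing a tool's decision rates across patient groups. The snippet below is a purely illustrative sketch using assumed data; the group labels, decision categories, and gap threshold are hypothetical, and an actual audit program would need to be designed with clinical, statistical, and legal input.

```python
# Illustrative sketch of a periodic disparity check on a decision support tool's outputs.
# Group labels, the "deny" decision category, and the gap threshold are assumed for illustration.
from collections import defaultdict

def adverse_decision_rates(records: list[dict]) -> dict[str, float]:
    """Compute the share of adverse tool decisions per patient group."""
    totals, adverse = defaultdict(int), defaultdict(int)
    for r in records:
        totals[r["group"]] += 1
        adverse[r["group"]] += 1 if r["tool_decision"] == "deny" else 0
    return {g: adverse[g] / totals[g] for g in totals}

def flag_disparities(rates: dict[str, float], max_gap: float = 0.10) -> list[tuple[str, str, float]]:
    """Flag group pairs whose adverse-decision rates differ by more than max_gap."""
    flags = []
    groups = sorted(rates)
    for i, a in enumerate(groups):
        for b in groups[i + 1:]:
            gap = abs(rates[a] - rates[b])
            if gap > max_gap:
                flags.append((a, b, round(gap, 3)))
    return flags

if __name__ == "__main__":
    # Hypothetical audit sample: each record is one tool decision.
    records = [
        {"group": "group_a", "tool_decision": "approve"},
        {"group": "group_a", "tool_decision": "deny"},
        {"group": "group_b", "tool_decision": "deny"},
        {"group": "group_b", "tool_decision": "deny"},
    ]
    rates = adverse_decision_rates(records)
    print("Adverse decision rates:", rates)
    print("Pairs exceeding the gap threshold:", flag_disparities(rates))
```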
Whether a covered entity made reasonable efforts to mitigate discrimination risks will depend on a variety of factors, such as the entity's size, the context in which the AI tool was used, and the policies used to address complaints. Notably, OCR has opined that an AI tool that uses an input variable such as race may trigger greater scrutiny than one that uses an input variable such as age, which is more likely to have a clinical, evidence-based purpose. Thus, additional mitigation policies and efforts may be needed when a tool uses race as an input variable, as opposed to age.
While the Final Rule's general prohibition on discrimination in the use of patient care decision support tools took effect on July 5, 2024, these requirements to make reasonable efforts to identify and mitigate risks of discrimination in the use of those tools will take effect on May 1, 2025.
We will continue to monitor developments related to Section 1557 and the Final Rule and any actions taken by the new administration.
If you have any questions or concerns regarding this alert, please reach out to Alexandra P. Moylan, Alisa L. Chestler, Samuel Cottle, Michael J. Halaiko, or any member of Baker Donelson's Health Law team.
1 There is a nationwide injunction staying enforcement of portions of the Final Rule that prohibit sex discrimination based on gender identity. See Tennessee v. Becerra, Case No. 1:24cv161-LG-BWR (S.D. Miss. 2024).