Regulating AI in healthcare: A look at Biden’s executive order

This blog unpacks Biden’s E.O. 14110, a crucial step toward the ethical use of AI in the healthcare space, in the wake of the HTI-1 regulation.

The December 2023 finalization of HTI-1 (Health Data, Technology, and Interoperability: Certification Program Updates, Algorithm Transparency, and Information Sharing), the regulation for certified health IT, marked a significant step toward regulating the use of artificial intelligence (AI) in the healthcare sector. The regulation sets new standards and expectations for AI in health technology, drawing attention to its ethical use.

In the midst of these developments, an Executive Order (E.O.) signed by President Biden on October 30, 2023, often overlooked amid the flurry of regulatory updates, deserves closer examination. E.O. 14110, Safe, Secure, and Trustworthy Artificial Intelligence, represents a unified agenda to advance the ethical use of AI throughout the federal government. The Order is a powerful statement of the administration’s commitment to address potential misuse of AI and protect Americans from discrimination, disinformation, loss of privacy, and risks to the security of their data. In this context, we delve into the specifics of E.O. 14110 to understand its implications for the Department of Health and Human Services (HHS) and the healthcare industry as a whole.

Priority areas of E.O. 14110 for AI implementation

E.O. 14110 establishes a foundation for the safe and responsible use of AI with broad directives for over 50 federal entities in eight priority areas that include actions to:

  • Ensure the safety and security of AI systems through adherence to standardized guidelines and best practices for development, evaluation, and deployment with a focus on implementation in biotechnology, cybersecurity, critical infrastructure, and national security.
  • Establish the United States as a leader in the development and implementation of AI through responsible innovation, fair competition, collaboration, and investment in infrastructure to support training, R&D, and capacity.
  • Demonstrate a commitment to workforce development for AI jobs and industries through job training and education, access to employment opportunities, and fair treatment in the workplace.
  • Ensure that the use of AI is consistent with the Administration’s priorities to advance equity and civil rights through robust standards for evaluation, oversight, and rigorous regulation.
  • Enforce consumer protections for Americans in the use of, purchase of, or interaction with AI and AI-enabled products, enacting safeguards against fraud, unintended bias, discrimination, and infringements on privacy in critical fields like healthcare, financial services, education, housing, law, and transportation.
  • Protect the privacy of Americans by ensuring that personal data containing sensitive information about people’s identities, locations, habits, and desires is not exposed or exploited through the use of AI.
  • Manage risks from the government’s own use of AI and ensure that federal agencies have the essential information technology infrastructure and capacity to regulate, govern, and support the responsible use of AI.
  • Achieve international leadership through engagement with allies and partners in developing a framework to manage AI’s risks, unlock AI’s potential for good, and promote common approaches to shared challenges.

E.O. 14110 represents a unified agenda for all agencies within the federal government as to how AI is developed and deployed through agency leadership, regulation of industry, and engagement with international partners. All agencies are expected to incorporate the National Institute of Standards and Technology (NIST) Artificial Intelligence Risk Management Framework (AI RMF 1.0) and the Office of Science and Technology Policy’s (OSTP) Blueprint for an AI Bill of Rights in development and deployment of safe, secure, and trustworthy AI systems.

AI in healthcare and human services

While E.O. 14110 is groundbreaking in that it represents a single agenda for the development and deployment of safe, secure, and trustworthy AI systems throughout the federal government, the focus of HHS is on the use of AI to protect human health, deliver safe healthcare, and ensure fair access in human services programs.

The role of HHS in AI research, regulation, and consumer protection

Directives for HHS call for participation in multi-agency initiatives to address national security and support AI in research. Of particular concern are biosecurity risks in the use of AI in the development and production of synthetic genetic material, and the need for AI model evaluation tools and testbeds that evaluate and protect data and models against security threats and hazards. HHS is also named as a key player in the development of the National AI Research Resource (NAIRR), a pilot program to advance the use of AI in research by providing access to computational resources, high-quality data, educational tools, and user support for AI researchers and students.

E.O. 14110 grants HHS sole authority for initiatives that protect consumers in the safe deployment of AI in healthcare and ensure compliance with federal nondiscrimination laws as they relate to the use of AI by health and human service providers. The E.O. also affirms HHS oversight of the use of AI and AI-enabled tools in drug development administered through the Food and Drug Administration (FDA), in supporting responsible AI development through National Institutes of Health (NIH) research, and in collaboration with the private sector to ensure AI is deployed safely and responsibly in healthcare.

Advancing responsible AI: HHS initiatives and collaborations

Yet even prior to the announcement of E.O. 14110, HHS had established a leadership role in advancing the responsible use of AI within federal programs through HTI-1. This regulation sets standards for the use of predictive AI in electronic health record (EHR) systems that inform medical decision-making by incorporating the FAVES AI principles for the Fair, Appropriate, Valid, Effective, and Safe use of AI. Recently, the Centers for Medicare and Medicaid Services (CMS) initiated projects to evaluate whether the algorithms used by health plans and providers to determine medical necessity introduce inappropriate bias into the care of patients. FDA consumer-protection activities in 2022 included guidelines and processes for the review and approval of AI-enabled medical devices used in diagnosis and treatment. NIH research initiatives focus on incorporating AI in priority areas such as cancer, Alzheimer’s disease, and mental illness.

In addition to establishing a foundation for the responsible use of AI in federal programs, HHS advanced a major initiative of E.O. 14110 in December 2023 for collaboration with private industry by securing voluntary commitments from provider and payer organizations engaged in the procurement, development, and use of AI in healthcare. These organizations have committed to honoring the FAVES AI principles and to incorporating the NIST AI Risk Management Framework and the OSTP Blueprint for an AI Bill of Rights in the development and use of AI in healthcare.

FAVES AI principles

Fair: Outcomes of the model do not exhibit prejudice or favoritism toward an individual or group based on their inherent or acquired characteristics.

Appropriate: Model and process outputs are well matched to produce results appropriate for the specific contexts and populations to which they are applied.

Valid: Model and process outputs have been shown to estimate targeted values accurately and as expected in both internal and external data.

Effective: Outcomes of the model have demonstrated benefit in real-world conditions.

Safe: Outcomes of the model are free from any known unacceptable risks, and the probable benefits outweigh any probable risk.
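As a rough illustration only, the sketch below shows one way an organization might operationalize two FAVES-style checks on a deployed model: comparing accuracy across patient subgroups (Fair) and comparing performance on internal versus external data (Valid). The function names, tolerance threshold, and synthetic data are hypothetical assumptions for this example and are not drawn from HHS guidance or the HTI-1 rule.

import numpy as np

def subgroup_accuracy(y_true, y_pred, groups):
    # Accuracy within each subgroup; an illustrative proxy for the "Fair" principle.
    return {
        str(g): float(np.mean(y_true[groups == g] == y_pred[groups == g]))
        for g in np.unique(groups)
    }

def faves_style_report(y_true, y_pred, groups,
                       external_y_true=None, external_y_pred=None,
                       fairness_gap_threshold=0.05):
    # Summarize a few FAVES-inspired checks; the threshold is illustrative only.
    report = {}

    # Fair: outcomes should not systematically favor one group over another.
    per_group = subgroup_accuracy(y_true, y_pred, groups)
    gap = max(per_group.values()) - min(per_group.values())
    report["fair"] = {
        "per_group_accuracy": per_group,
        "gap": gap,
        "within_tolerance": gap <= fairness_gap_threshold,
    }

    # Valid: performance should hold up on external data, not just internal data.
    report["valid"] = {"internal_accuracy": float(np.mean(y_true == y_pred))}
    if external_y_true is not None and external_y_pred is not None:
        report["valid"]["external_accuracy"] = float(
            np.mean(external_y_true == external_y_pred)
        )

    return report

if __name__ == "__main__":
    # Synthetic data purely for demonstration.
    rng = np.random.default_rng(0)
    y_true = rng.integers(0, 2, size=200)
    y_pred = rng.integers(0, 2, size=200)
    groups = rng.choice(["A", "B"], size=200)
    print(faves_style_report(y_true, y_pred, groups))

In practice, the Effective and Safe principles call for evidence from real-world use and risk-benefit review, which cannot be reduced to a metric script like this; the sketch only covers checks that lend themselves to automated monitoring.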

The future of AI policy in healthcare

As Congress continues to explore the policy implications of the use of AI, E.O. 14110 represents decisive action on the part of the Biden administration to establish a foundation for the responsible deployment of AI that will protect and benefit all Americans.

Curious about what the future of healthcare with AI may look like? Listen to our on-demand webinar, Forecasting healthcare: AI’s role in the years ahead.
