Artificial Intelligence in Healthcare - The Regulatory Environment
(Part Two of a Four-Part Series)

Artificial intelligence (AI) is not the first instance in which technological innovators have deluged the healthcare delivery system with devices, processes, and other innovative ideas in an effort to decrease costs and improve value-based care. While the development of AI appears to be progressing faster now than when it was originally introduced in the 1950s,1 recent efforts to implement new technology are being hindered by the current regulatory environment. As of the date of publication, no federal statutes exist that specifically govern AI – only general laws, regulations, and guidance related to scientific research, most of which, if not all, lack the term “artificial intelligence.”2 The U.S. Copyright Office, for example, has stated in its guidance only that it will not register works produced by a machine or “mere mechanical process that operates randomly or automatically without any creative input or intervention from a human author.”3 The term is also noticeably absent from the 21st Century Cures Act, which was passed in 2016 to “usher in a new, more industry-friendly era of drug and device regulation.”4

One of the overarching concerns associated with AI, especially in healthcare, is that the use of AI may violate the Health Insurance Portability and Accountability Act of 1996 (HIPAA). Although HIPAA does not specifically address the use of AI, “a major goal” of the HIPAA Privacy Rule is “to protect individuals’ medical records and other personal health information,” and the rule “applies to health plans, health care clearinghouses, and those health care providers that conduct certain health care transactions electronically.”5 AI relies heavily on patient data, often requiring hundreds of thousands of data elements to pattern and understand concepts; consequently, collection of the requisite healthcare data elements to make AI useful may run counter to regulatory protections related to confidential patient data.6

In July 2016, the U.S. Food and Drug Administration (FDA), interested in the potential utility of AI, issued three guidelines meant to encourage medical entrepreneurs to develop devices that rely on advances in AI and machine learning.7 The first guideline addresses general wellness products, which the FDA defined as a product that has either: (1) “an intended use that relates to maintaining or encouraging a general state of health or a healthy activity”; or, (2) “an intended use that relates the role of healthy lifestyle with helping to reduce the risk or impact of certain chronic diseases or conditions and where it is well understood and accepted that healthy lifestyle choices may play an important role in health outcomes for the disease or condition.”8 These products will not be regulated as medical devices, as long as they are intended for general wellness only and are not “high risk.”9 The second guideline permits the use of real-world evidence, i.e., “evidence derived from aggregation and analysis of real-world data elements”10 outside of the traditional clinical trial setting, advocating that devices already approved in one area be allowed to receive FDA approval for use in other areas.11 The third guideline addresses the adaptive design of clinical trials that support the FDA’s approval of new medical devices.12 As device developers follow these guidelines, new devices may reach patients more quickly.13 In response to the growing digitization of healthcare, including through the use of AI, the agency is focused on creating a digital health unit within its Center for Devices and Radiological Health (CDRH), with time and resources to invest in AI.14 The CDRH facilitates “medical device innovation by advancing regulatory science, providing industry with predictable, consistent, transparent, and efficient regulatory pathways, and assuring consumer confidence in devices marketed in the U.S.”15

Though efforts to regulate AI in healthcare have been slow to develop, the U.S. government has already integrated AI into different areas of its own oversight of the healthcare industry. For example, Recovery Audit Contractor (RAC) teams working on behalf of the Centers for Medicare & Medicaid Services (CMS) are utilizing AI-powered algorithms to identify billing irregularities in diagnosis-related groups (DRGs), a coding system that denotes a particular diagnosis for a hospital inpatient stay for purposes of payment.16 The algorithm in this system is designed to identify codes that may be downgraded; in other words, “it works by running a simulation that switches out billed codes with cheaper codes, and then measures if the resulting code configuration is within the statistical range averaged from other claims.”17 If and when codes are out of range, the RAC downgrades the code, thereby reducing the hospital’s reimbursement under the DRG system.18 This process is designed to significantly decrease the number of audits that hospitals receive regarding the collection of improper Medicare payments under fee-for-service Medicare plans.19 Although hospitals have the ability to appeal a downgrade (which is often based on statistical analyses rather than the specific patient care at issue), the cost of appealing a RAC downgrade is often significantly higher than accepting it.20
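The simulation described above can be sketched roughly as follows. This is a hypothetical illustration only – the actual RAC algorithm is proprietary, and the function name, the z-score threshold, and the sample figures are all assumptions made for the sake of the example.

```python
import statistics

def flag_downcodes(claim_codes, cheaper_alternatives, peer_claim_costs, z_threshold=2.0):
    """Hypothetical sketch of the DRG downcoding simulation described above.

    For each billed code, substitute a cheaper alternative and test whether
    the resulting total claim cost still falls within the statistical range
    (mean +/- z_threshold standard deviations) of comparable peer claims.
    """
    mean = statistics.mean(peer_claim_costs)
    stdev = statistics.stdev(peer_claim_costs)
    total = sum(cost for _, cost in claim_codes)

    flagged = []
    for code, cost in claim_codes:
        alt_code, alt_cost = cheaper_alternatives.get(code, (code, cost))
        simulated_total = total - cost + alt_cost
        # Flag the code if the cheaper configuration still looks "normal"
        # relative to the peer-claim distribution.
        if alt_code != code and abs(simulated_total - mean) <= z_threshold * stdev:
            flagged.append((code, alt_code))
    return flagged
```

Under this sketch, a claim whose cost sits above the peer average would be flagged for downgrading whenever a cheaper code substitution brings it back inside the statistical range – which mirrors the article's point that the downgrade turns on statistics, not on the specifics of the patient's care.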

Public health regulation and enforcement are also increasingly utilizing AI. For example, the Las Vegas Health Department is using AI in its investigation of health violations, resulting in more citations during inspections.21 Instead of selecting restaurants at random, the agency uses an AI software program to examine tens of thousands of tweets via Twitter to pinpoint possible food poisonings.22 Once the program associates tweets with specific restaurants, inspectors are dispatched to those locations.23 This approach has saved the agency time and resources because “unlike big data and analytics, which work off structured data that takes time to collect and analyze, this program was able to quickly calculate possible health problems by reading unstructured data – words and phrases.”24
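The core idea – reading unstructured text for illness-related words and phrases, then ranking the restaurants mentioned – can be sketched as follows. The keyword list, function name, and input format are hypothetical; the agency's actual program and its lexicon are not public.

```python
# Hypothetical illness-related keywords; the actual program's lexicon is not public.
SYMPTOM_KEYWORDS = {"food poisoning", "stomach ache", "vomiting", "nausea", "sick after eating"}

def rank_restaurants(tweets):
    """Count symptom-bearing tweets per mentioned restaurant.

    `tweets` is a list of (text, restaurant) pairs, assuming an upstream
    step has already matched each tweet to a specific venue.
    """
    counts = {}
    for text, restaurant in tweets:
        lowered = text.lower()
        if any(keyword in lowered for keyword in SYMPTOM_KEYWORDS):
            counts[restaurant] = counts.get(restaurant, 0) + 1
    # Dispatch inspectors to the venues with the most symptom reports first.
    return sorted(counts, key=counts.get, reverse=True)
```

Because simple substring matching over raw text requires no structured data collection, a sketch like this illustrates why the approach is faster than traditional analytics pipelines, as the quoted passage notes.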

As AI systems continue to influence the regulatory environment of the U.S. healthcare delivery system, concerns have been raised regarding the allocation of responsibility for the actions of an autonomous, manufactured system. Unlike industrial products, for which manufacturers may be held liable for injuries resulting from their use, AI systems are designed to operate autonomously using algorithms.25 As AI takes on a more influential role in the medical process, it will become harder “to argue that a physician would be negligent in following the algorithm, even if it turns out to be wrong and the doctor ends up harming a patient.”26 Future regulations will likely be forced to include language addressing liability for AI systems.

The delay in regulation for AI may generally be attributed to the following: (1) the federal government has not promulgated a clear definition of AI, and without it, the construction of an effective regulatory system is unlikely; (2) the autonomy of AI poses significant liability concerns; and, (3) federal and state governments may struggle to design a regulatory framework around a technology that will likely continue to rapidly evolve, in both design and utilization.27 AI is advancing on a daily basis, and government lawmakers and regulators will necessarily have to respond with innovative laws to regulate this technology, especially as it relates to healthcare.

“1950s: The Beginnings of Artificial Intelligence (AI) Research,” World-Information.Org, (Accessed 5/23/17).

“Artificial intelligence and the law,” By Jeremy Elman & Abel Castilla, TechCrunch, January 28, 2017, (Accessed 5/18/17).

“Compendium of U.S. Copyright Office Practices, 3rd ed., § 306,” U.S. Copyright Office, 2014, (Accessed 5/18/17).

“Senate Clears Bill to Ease FDA Drug and Device Approvals,” By Thomas M. Burton, The Wall Street Journal, December 7, 2016, (Accessed 5/23/17).

“The HIPAA Privacy Rule” U.S. Department of Health & Human Services, (Accessed 5/23/17).

“Artificial Intelligence and Healthcare in 2030” By Michael Marquis, iReviews, December 3, 2016, (Accessed 5/18/17).

“Artificial Intelligence, Machine Learning, and the FDA” By John Graham, Forbes, August 19, 2016, (Accessed 4/21/17).

“General Wellness: Policy for Low Risk Devices” U.S. Food and Drug Administration, July 29, 2016, (Accessed 5/24/17) p. 2-3.

Ibid., p. 2.

“Use of Real-World Evidence to Support Regulatory Decision-Making for Medical Devices,” U.S. Food and Drug Administration, July 27, 2016, (Accessed 5/24/17) p. 4.

Ibid., p. 6.

“Adaptive Designs for Medical Device Clinical Studies” U.S. Food and Drug Administration, July 27, 2016, (Accessed 5/24/17) p. 1.

Graham, August 19, 2016.

“FDA to create digital health unit” By Zachary Brennan, RAPS, May 4, 2017, (Accessed 5/17/17).

“About the Center for Devices and Radiological Health” U.S. Food & Drug Administration, March 14, 2017, (Accessed 5/11/17).

“Big Data and the Future of Medicare Audits: Part I – DRG Downcoding in Hospitals,” By Edward M. Roche, Ph.D., J.D., Barraclough, October 5, 2016, (Accessed 5/18/17).





“Artificial Intelligence: The Next Big Thing in Government” By Tod Newcombe, Governing, October 2016, (Accessed 5/17/17).




“Is Regulation of Artificial Intelligence Possible?” By John Danaher, H+ Magazine, July 15, 2015, (Accessed 5/18/17).

“Artificial Intelligence, Medical Malpractice, and the end of Defensive Medicine” By Shailin Thomas, Harvard Law, January 26, 2017, (Accessed 5/17/17).

Danaher, July 15, 2015.
