Generative AI’s Disruption of the Healthcare Industry

Introduction

Generative artificial intelligence (AI) refers to the use of algorithms to create content such as text, code, imagery, videos, and even simulations in mere seconds.1 AI generally aims to mimic human intelligence to perform tasks, while generative AI (a subset of AI) learns from data without human assistance in order to produce new content.2 While today’s generative AI bots are not yet ready for widespread use in patient care settings, AI is garnering significant interest in the healthcare industry as providers begin to test its capabilities in clinics and offices.3 This Health Capital Topics article reviews the role that generative AI is beginning to play in the U.S. healthcare system, the technology’s potential, and the concerns it raises.

Advantages & Disadvantages

In the coming years, AI will likely be critical to the success of quality improvement, risk adjustment, and population health management, all key tenets of value-based care.4 With the rapid growth in the amount and accessibility of clinical data, AI will likely be used to analyze that data to reduce inefficiencies and costs while contributing to better patient outcomes.5 Providers are often time-constrained by manual data entry into electronic health records (EHRs), increasing the risk of burnout.6 Leveraging AI can streamline workflows, close gaps in care, support risk adjustment, and reduce delays in reimbursement.7 Additionally, with a projected shortage of nurses (the gap between nurse supply and demand is expected to surpass 100,000 by 2030), AI can serve as an additional “set of hands” by interpreting patient medical records and codifying documents, improving clinician efficiency and patient outcomes, and driving higher reimbursement.8 AI also has the potential to flag physician decisions that may unknowingly exacerbate the ongoing problem of bias in medicine, potentially pushing the healthcare system toward greater equity.9

AI is a tool that is likely to transform the healthcare industry and revolutionize the way patients are treated; however, there are concerns to keep in mind regarding potential bias, security risks, and privacy.10 Biases have been identified within information technology (IT) applications, and these may exacerbate existing healthcare inequities related to ethnicity, income, gender, or race.11 While generative AI can offer solutions to bias in healthcare, other challenges will need to be accounted for.12 The accuracy of generative AI’s outputs depends on the data used to train the models, which could include lab results, imaging studies, and medical records.13 Errors in those outputs could put patients’ health at risk, which is why addressing these challenges, and how they affect patient care, will be imperative.14

Generative AI poses a number of risks to providers and patients. There are significant privacy concerns, especially considering the types of information that healthcare providers handle, including sensitive, patient-identifying information.15 For example, patient information could be sold to companies for use in targeted advertising; however, these risks are similar to those associated with social media generally.16 Security is another major risk: AI will not solve the susceptibility of medical data to being hacked or stolen unless EHR companies allow their application programming interfaces (APIs) to be utilized.17 Organizations that maintain EHRs are known for upholding a level of security that keeps data at minimal to no risk, and it will be in the best interest of generative AI developers to employ similar safeguards.18

While generative AI can make the healthcare system more efficient by reducing bias, detecting errors, and reducing paperwork, it is very unlikely to replace physicians.19 Generative AI is notorious for failing to provide appropriate (or any) context, which is necessary in real-world settings, particularly in healthcare.20 Physicians can also provide compassion and integrated care in a way that no AI software or program can.21 Generative AI will certainly be able to complement and augment physicians’ work by reducing inefficiencies within the healthcare system, but it will likely never replace the physician workforce.22 Recent reports indicate that 40% of working hours in healthcare settings could be supported by generative, language-based AI.23 The application of AI in healthcare will continue to depend on human experience, perception, and expertise.24

Regulatory

The sprint toward AI across all industries has raised concerns about risk and a lack of scrutiny, and regulators have been scrambling to modify existing rules to cover issues such as data privacy and copyright.25 While regulatory agencies are in uncharted territory, few have stepped forward with a strategy to address the potential negative impacts of AI. The Food and Drug Administration (FDA), however, has developed an action plan intended to provide reassurance as to the safety and effectiveness of AI used in the healthcare industry.26 The plan outlines five focus areas: (1) further develop the proposed regulatory framework, including guidance on software that learns over time; (2) encourage good machine learning practices to improve algorithms; (3) ensure a patient-centered approach that incorporates transparency to users; (4) advance real-world performance monitoring pilots; and (5) develop methods to evaluate machine learning algorithms.27

Regulatory agencies aside, the rapid implementation of AI will also require healthcare organizations themselves to monitor risks (e.g., reputational, legal, and ethical) emanating from AI use and determine how to address them, particularly given the current lack of a regulatory framework and oversight.28 In June 2023, the American Medical Association (AMA) voted to adopt a proposal to protect patients against misleading or false medical information from AI tools.29 The AMA aims to work with agencies such as the Federal Trade Commission (FTC) and the FDA to mitigate misinformation, and it anticipates the establishment of federal and state regulations in the near future.30

Despite the fluidity of regulation, AI companies are already facing government scrutiny. In July 2023, the FTC opened an investigation into OpenAI, the company behind ChatGPT, and sent the company a records request.31 In investigating whether OpenAI has engaged in practices that resulted in consumer harm, the FTC requested information on how OpenAI obtained the data used to train its models, as well as descriptions of ChatGPT’s capabilities.32 The agency also requested descriptions of OpenAI’s testing, algorithms, responses, and policies regarding false information.33

The pace of clinical AI development and implementation may be directly influenced by the liability faced by practitioners, designers, and health systems, as greater liability exposure could discourage the use of AI in healthcare.34 As the technology develops, new legal pathways will need to be established to allocate that liability; otherwise, practitioners, designers, and health systems may be deterred from developing and implementing clinical AI models.35

Advancements & Entrants

ChatGPT, the free-to-use generative AI bot developed by OpenAI, has become the preeminent bot in the field and has piqued interest across multiple industries with its ability to generate relevant, coherent, and human-like responses when prompted by users.36 These capabilities have made it well suited to applications in healthcare.37 The bot is pre-trained on vast amounts of data and generates content based on the data on which it has been trained.38 Other big tech companies, including Microsoft and Google, have also created publicly accessible generative AI bots, such as Bing AI, Copilot, and Bard.39
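To illustrate the prompt-and-response pattern described above, the following is a minimal sketch that prompts a pre-trained chat model through OpenAI’s Python library (the pre-1.0 interface); the model name, prompt, and parameters are illustrative assumptions rather than details drawn from this article or from any clinical deployment.

import os
import openai

# Minimal sketch: prompting a pre-trained generative model.
# Assumes the pre-1.0 OpenAI Python library and a valid API key.
openai.api_key = os.environ["OPENAI_API_KEY"]

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",  # illustrative model choice
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Summarize, in plain language, how generative AI is being used in healthcare."},
    ],
    temperature=0.2,  # lower values yield more conservative, repeatable output
)

print(response["choices"][0]["message"]["content"])

Because the model simply predicts text from the data on which it was trained, any output used in a clinical context would still require human review.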

The rapid evolution of generative AI at large has spurred advancements in AI specifically designed to assist providers in healthcare settings.40 Carbon Health, a primary care company, recently launched a proprietary AI-enabled EHR assistant for hands-free charting within its clinics.41 The company aims to reduce provider workload, give each provider more time to see patients, and enhance the doctor-patient connection by allowing providers to focus on patient care rather than typing.42 Additionally, Tempus, a precision medicine and AI company, recently launched an AI-enabled clinical assistant that helps clinicians seamlessly access patient data.43 Using Tempus, clinicians can access reports from clinical tests, filter patient incidence by diagnosis, access summarized patient information, and query clinical guidelines for updated standard-of-care insights.44

In April 2023, Epic, a healthcare software company, announced a collaboration with Microsoft to combine Microsoft’s Azure OpenAI Service with Epic’s EHR software to respond to patient messages, alleviating provider workload.45 The initial rollout will begin at UNC Health with five to ten clinicians and will eventually expand to other health systems.46 The first iteration of this technology will draft suggested responses to the most common patient questions and messages, which physicians will review before sending.47
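As a rough illustration of that draft-and-review pattern (and not of Epic’s or Microsoft’s actual integration), the hypothetical sketch below queues a model-generated draft for physician review and only marks it approved after explicit sign-off; the draft_reply helper is a stand-in for whatever generative model an organization might deploy.

from dataclasses import dataclass

# Hypothetical sketch of a draft-and-review workflow for patient messages.
# The generative model only produces a draft; a physician must edit and
# approve it before anything is sent to the patient.

@dataclass
class DraftedReply:
    patient_message: str
    draft: str
    approved: bool = False
    final_text: str = ""

def draft_reply(patient_message: str) -> str:
    # Stand-in for a call to a generative model (e.g., an Azure OpenAI deployment).
    return ("Thank you for your message. Here is some general guidance on your question: "
            + patient_message)

def queue_for_review(patient_message: str) -> DraftedReply:
    # Drafts are never sent automatically; they wait for clinician sign-off.
    return DraftedReply(patient_message=patient_message,
                        draft=draft_reply(patient_message))

def physician_approve(item: DraftedReply, edited_text: str) -> DraftedReply:
    # The physician edits (or rewrites) the draft and explicitly approves it.
    item.final_text = edited_text
    item.approved = True
    return item

if __name__ == "__main__":
    item = queue_for_review("Can I take ibuprofen with my blood pressure medication?")
    print("DRAFT FOR PHYSICIAN REVIEW:\n" + item.draft)
    # Only an approved, physician-edited reply would ever reach the patient.

The design point this sketch captures is the one described in the rollout: the model accelerates drafting, but the physician remains the final decision-maker on what is sent.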

Conclusion

While generative AI will continue to disrupt the healthcare industry, its ultimate promise is a more effective, efficient healthcare system. By streamlining clerical work, performing literature searches, and even reducing error and bias within medicine, generative AI has the potential to revolutionize the way healthcare is delivered.48 That potential, however, comes with risks, particularly in healthcare: patient data could introduce bias into a bot’s outputs and remain susceptible to hacking or theft. To realize the technology’s promise, industry stakeholders will need to stay current on the risks and ongoing regulatory changes that affect the use of generative AI.


1. “What is Generative AI” McKinsey & Company, January 19, 2023, https://www.mckinsey.com/featured-insights/mckinsey-explainers/what-is-generative-ai (Accessed 7/21/23); “What is generative AI? Everything you need to know” By George Lawton, TechTarget, https://www.techtarget.com/searchenterpriseai/definition/generative-AI?Offer=abt_pubpro_AI-Insider (Accessed 7/26/23).
2. McKinsey & Company, January 19, 2023.
3. “ChatGPT’s Use In Medicine Raises Questions Of Security, Privacy, Bias” By Robert Pearl, Forbes, April 24, 2023, https://www.forbes.com/sites/robertpearl/2023/04/24/chatgpts-use-in-medicine-raises-questions-of-security-privacy-bias/?sh=267cb97d5373 (Accessed 5/23/23).
4. “Top Three Reasons Why AI is Critical for Value-Based Care” By Jay Ackerman, Managed Healthcare Executive, June 27, 2023, https://www.managedhealthcareexecutive.com/view/top-three-reasons-why-ai-is-critical-for-value-based-care (Accessed 7/7/23).
5. Ibid.
6. Ibid.
7. Ibid.
8. Ibid; “The State of the Nation’s Nursing Shortage” By Julia Haines, U.S. News & World Report, November 1, 2022, https://www.usnews.com/news/health-news/articles/2022-11-01/the-state-of-the-nations-nursing-shortage (Accessed 7/26/23).
9. Pearl, Forbes, April 24, 2023.
10. Ibid.
11. Ibid.
12. “How will generative AI impact healthcare?” World Economic Forum, May 12, 2023, https://www.weforum.org/agenda/2023/05/how-will-generative-ai-impact-healthcare/ (Accessed 6/14/23).
13. Ibid.
14. Ibid.
15. Pearl, Forbes, April 24, 2023.
16. Ibid.
17. Ibid.
18. Ibid.
19. “Opportunities and risks of ChatGPT in medicine, science, and academic publishing: a modern Promethean dilemma” By Jan Homolak, Croatian Medical Journal, National Institute of Health, February 2023, https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10028563/ (Accessed 5/24/23).
20. Homolak, Croatian Medical Journal, National Institute of Health, February 2023.
21. Ibid.
22. Ibid.
23. “Generative AI could augment 40% of healthcare working hours” By Bill Siwicki, Healthcare IT News, May 11, 2023, https://www.healthcareitnews.com/news/generative-ai-could-augment-40-healthcare-working-hours (Accessed 6/14/23); “A new era of generative AI for everyone” Accenture, March 22, 2023, https://www.accenture.com/us-en/insights/technology/generative-ai?c=acn_glb_largelanguagemomediarelations_13427684&n=mrl_0323 (Accessed 6/14/23).
24. Siwicki, Healthcare IT News, May 11, 2023.
25. “US FTC opens investigation into OpenAI over misleading statements – document” Reuters, July 13, 2023, https://www.reuters.com/technology/us-ftc-opens-investigation-into-openai-washington-post-2023-07-13/ (Accessed 7/13/23).
26. “Artificial Intelligence/Machine Learning (AI/ML)-Based Software As a Medical Device (SaMD) Action Plan” U.S. Food and Drug Administration, January 2021, https://www.fda.gov/media/145022/download (Accessed 7/7/23).
27. Ibid.
28. Siwicki, Healthcare IT News, May 11, 2023, https://www.healthcareitnews.com/news/generative-ai-could-augment-40-healthcare-working-hours (Accessed 6/14/23).
29. “AMA adopts proposal to protect patients from false and misleading AI-generated medical advice” By Kristine White, Healthcare Brew, https://www.healthcare-brew.com/stories/2023/06/14/ama-adopts-proposal-to-protect-patients-from-false-and-misleading-ai-generated-medical-advice (Accessed 7/6/23).
30. White, Healthcare Brew (Accessed 7/6/23).
31. “US FTC opens investigation into OpenAI over misleading statements – document” Reuters, July 13, 2023, https://www.reuters.com/technology/us-ftc-opens-investigation-into-openai-washington-post-2023-07-13/ (Accessed 7/13/23).
32. “FTC is investigating ChatGPT-maker OpenAI for potential harm to consumers” By Brian Fung, CNN, July 13, 2023, https://www.cnn.com/2023/07/13/tech/ftc-openai-investigation/index.html (Accessed 7/13/23).
33. Ibid.
34. “Dr. Watson, A.I.: The Current Approach to Artificial Intelligence Training in the Medical Field and Legal Considerations for AI Diagnosis Dependence” By Allison Newsome and Jasmeet Singh, American Health Law Association, July 6, 2023, https://www.americanhealthlaw.org/content-library/publications/briefings/d88632df-8f1e-48ce-b5f4-066d19b12aa6/Dr-Watson-A-I-The-Current-Approach-to-Artificial-I?Token=60909932-7f8c-4a43-97b9-8bcb7e610f4e (Accessed 7/18/23).
35. Ibid.
36. “What Is Chat GPT? – Everything You Need to Know” Enterprise DNA, https://blog.enterprisedna.co/what-is-chat-gpt-everything-you-need-to-know/#:~:text=The%20history%20of%20ChatGPT%20starts,promising%20but%20limited%20language%20model (Accessed 5/23/23).
37. “Revolutionizing Healthcare: The Top 14 Uses Of ChatGPT In Medicine And Wellness” By Bernard Marr, Forbes, March 2, 2023, https://www.forbes.com/sites/bernardmarr/2023/03/02/revolutionizing-healthcare-the-top-14-uses-of-chatgpt-in-medicine-and-wellness/?sh=ba393de6e547 (Accessed 7/26/23).
38. Enterprise DNA, https://blog.enterprisedna.co/what-is-chat-gpt-everything-you-need-to-know/#:~:text=The%20history%20of%20ChatGPT%20starts,promising%20but%20limited%20language%20model (Accessed 5/23/23).
39. “Bing, Bard, and ChatGPT: AI chatbots are rewriting the internet” By Umar Shakir, The Verge, June 14, 2023, https://www.theverge.com/23610427/chatbots-chatgpt-new-bing-google-bard-conversational-ai (Accessed 6/14/23).
40. “The latest generative AI efforts in healthcare: Carbon Health, Tempus launch tools for docs” By Heather Landi, Fierce Healthcare, June 6, 2023, https://www.fiercehealthcare.com/health-tech/latest-generative-ai-efforts-healthcare-carbon-health-tempus-launch-tools-docs (Accessed 6/14/23).
41. Ibid.
42. Ibid.
43. Ibid.
44. Ibid.
45. “Epic is going all in on generative AI in healthcare. Here's why a handful of health systems is eager to test-drive it” By Heather Landi, Fierce Healthcare, May 25, 2023, https://www.fiercehealthcare.com/health-tech/epic-moves-forward-bring-generative-ai-healthcare-heres-why-handful-health-systems-are (Accessed 6/14/23).
46. Ibid.
47. Ibid.
48. Homolak, Croatian Medical Journal, National Institute of Health, February 2023, https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10028563/ (Accessed 5/24/23).
