
Artificial Intelligence Is Here: What You Need to Know

August 2023, Vol 13, No 8
Dawn Holcombe, MBA, FACMPE, ACHE
Editor-in-Chief
President, DGH Consulting, South Windsor, CT

Where do we really stand when it comes to artificial intelligence (AI)? What is the value? Who is using it? What are the risks and drawbacks? This technology has been discussed as a solution to complicated medical management as long ago as the 1970s, in the early days of developing clinical decision support systems.1 How much closer are we in 2023 to computer-aided medical decision-making? The answer is: somewhat, in some areas. However, the use of AI is also raising concerns about patient access to care, and it may become a competitive advantage or disadvantage, depending on your level of adoption relative to others.

What Is AI?

In general, AI is a broad term for the field of computer science in which machines or algorithms are programmed to simulate human intelligence. There are 2 subfields of AI that have become more specialized (and even more ambitious): machine learning (ML) and deep learning (DL). ML applies statistical methods to perform defined tasks, detect hidden patterns in data, and improve model performance over time. DL goes further: the computer does not have to be told what to do or what to look for; multilayered neural networks enable it to learn and compute on its own, in ways unknown, unanticipated, and untaught by humans.
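To make the ML half of that distinction concrete, here is a minimal sketch (not from the article) of "statistical pattern detection": fitting a simple linear model uncovers a relationship hidden in raw numbers. The dose-response data and variable names are invented for illustration; real ML models in oncology are vastly more complex.

```python
# Illustrative sketch only: ML in the statistical sense described above.
# A least-squares fit "learns" the hidden pattern in hypothetical data.

def fit_line(xs, ys):
    """Ordinary least-squares fit; returns (slope, intercept)."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    # Covariance of x and y, and variance of x, drive the fitted slope.
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var = sum((x - mean_x) ** 2 for x in xs)
    slope = cov / var
    return slope, mean_y - slope * mean_x

# Hypothetical observations: response roughly doubles with dose.
doses = [1.0, 2.0, 3.0, 4.0, 5.0]
responses = [2.1, 3.9, 6.2, 8.0, 9.9]
slope, intercept = fit_line(doses, responses)
```

The "learning" here is entirely prescribed by a human-chosen formula; in DL, by contrast, the model's internal features are not specified in advance, which is the source of the "black box" concerns discussed later in this article.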

Natural language processing (NLP) is another specialty within AI that combines computer interpretation with human language. NLP is used to transform unstructured data, such as medical notes or diagnostic reports, into discrete data elements.1
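A toy example (not from the article) may help show what "transforming unstructured data into discrete data elements" means in practice. The note text, field names, and regular expressions below are invented for illustration; production clinical NLP systems are far more sophisticated than simple pattern matching.

```python
# Illustrative sketch only: turning a free-text note into discrete fields.
import re

note = "Pt is a 62 yo female. Dx: stage II breast cancer. ECOG 1."

fields = {}
# Age and sex, e.g. "62 yo female"
m = re.search(r"(\d+)\s*yo\s*(\w+)", note)
if m:
    fields["age"] = int(m.group(1))
    fields["sex"] = m.group(2)
# Cancer stage in Roman numerals, e.g. "stage II"
m = re.search(r"stage\s+(IV|I{1,3})", note)
if m:
    fields["stage"] = m.group(1)
# ECOG performance status, e.g. "ECOG 1"
m = re.search(r"ECOG\s+(\d)", note)
if m:
    fields["ecog"] = int(m.group(1))
```

Each extracted field (age, sex, stage, performance status) becomes a structured data element that downstream models can compute on, which is the step the unstructured narrative note cannot support directly.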

In 2017, Google introduced a transformer model that could figure out relationships between billions of text examples and predict the next text in the sequence. In 2020, OpenAI, then an AI startup, released GPT-3, a generative pretrained transformer model, which absorbed public domain data in an unequalled manner. In December 2022, OpenAI released ChatGPT, a chatbot that exploded on the scene, reaching 100 million active users within 2 months. Microsoft invested $10 billion in OpenAI and has already integrated its technology into Windows and the Bing search engine. In March 2023, OpenAI released GPT-4, reportedly trained on 500 times more data than the earlier model. You have probably already used ChatGPT, perhaps without realizing it. It is available for free at https://openai.com/blog/chatgpt. You ask it a question and it gives you an answer.2

AI Offers Promise in Cancer Care

Cancer is a leading driver of morbidity and mortality across the world, and it is estimated that 30.2 million new cancer cases will be diagnosed in 2040.3 Despite significant improvements in diagnosis and management, we need to do better, which will require working smarter, faster, and more effectively with shrinking financial and labor resources.

The large volume of data related to diagnosis, the development of therapies, patient outcomes, imaging, laboratory results, and clinical research is rapidly outstripping the capacity of the human brain to process this much information. It is possible that AI, ML, and DL may evolve into important tools to distill needed information into relevant patterns for the benefit of individual patients and providers.

AI Is Not Yet Ready for Prime Time

Despite all the buzz about AI, many are concerned. Just as new therapies undergo rigorous testing, physicians should expect the same for AI models. F. Perry Wilson, MD, MSCE, Director, Clinical and Translational Research Accelerator, Yale School of Medicine, New Haven, CT, questioned the accuracy of AI models and the gap between data science and actual patient care. “They may give information doctors agree with or find helpful, but until you actually show that the use as opposed to the nonuse of these models actually improves patient outcomes, I would argue that you haven’t completed the science necessary here,” he said.4

Connecticut has formed the Connecticut Health AI Collaborative (www.cthealthai.org) to prioritize transparency, inclusivity, and coordination of ethical and regulatory compliance. The Collaborative is aligned with the National Coalition for Health AI and the FDA’s Community Collaborative and Software as a Medical Device frameworks.

In 2022, the American Society of Clinical Oncology (ASCO) published a paper on the capabilities, opportunities, and ethical considerations for AI in cancer care, referencing the rapid growth and promise for clinical application.1 The authors noted that substantial unresolved issues affect the broad adoption of AI models, including a lack of data transparency and standardization, concerns that the AI "black box" mechanism can conceal poor analytics or predictions, and intrinsic bias against underrepresented persons that limits the reproducibility of AI models and perpetuates healthcare disparities. AI models should not be applied universally; recognizing the diversity of patient populations is a necessity. Because the training data sets and clinical end points in AI models often underrepresent patients by comorbidity, health status, age, and ethnicity, as well as other underserved populations, AI models can develop built-in accuracy and scalability challenges. Those challenges limit the usefulness of applying AI to every individual patient and can even lead to harm for a patient who does not match the base training set parameters.1

Corporate Disrupters Are Using AI to Enter the Cancer Care Space

In the April 2023 issue of Oncology Practice Management, I wrote about corporate disrupters moving rapidly into healthcare and the need to be aware of their activities.5 AI initiatives are being adopted by health plans, Google, industry, medical and pharmacy benefit managers, and specialty pharmacy, as well as by new private vendors that are creating apps built on data sets comprising various collections of claims and electronic medical records from hospitals and physician offices. The proliferation of AI initiatives will leave patients and physicians buffeted by "real-world evidence" treatment choices with little transparency as to process or source data.

AI Is Near You Now

Google Health has announced partnerships with significant clinical, public health, and academic entities to deploy solutions and transformative healthcare tools and services. Partnerships in the United States with Ascension, Northwestern Medicine, Hartford Healthcare, Mayo Clinic, Stanford Medicine, and others offer Google and other external entities an entry point into the heart of our healthcare delivery system.6 Emergency rooms and physician networks are exploring the use of ML, chatbots, and translation apps. At the 2023 ASCO Annual Meeting, US Oncology Network presented its ML study predicting the behavior of patients with cancer.7


Regulation of AI Is Currently Limited

It is important to understand that AI technology is currently not well regulated. Several companies using AI for purposes other than healthcare are already publishing risk factor disclosures to shareholders addressing potential exposure for regulatory noncompliance; the unreliability of generative AI; possible corporate reputational harm; purported or real impact on human rights, privacy, employment, and other social issues; and unintentional bias or discrimination.8

The FDA evaluates AI models used by physicians to detect diseases, such as cancer, or to suggest the most effective treatments. The AI models being created and used by health plans, pharmacy benefit managers, and private vendors to help decide coverage, length of stay, patient needs, and benefit policy limitations and payments are not evaluated or regulated, even though our most vulnerable patients—including those covered under Medicare Advantage plans—are affected.9

Medicare Advantage Plans Apply Universal AI Algorithms

The use of unregulated AI predictive algorithms to recommend treatment and benefit coverage for Medicare Advantage patients is already proving to ignore individual patient circumstances and to conflict with basic rules on what Medicare plans must cover. The Center for Medicare Advocacy is a nonprofit group that has reviewed Medicare Advantage denials for more than 2 years. Its associate director, David Lipschutz, noted that although firms describe such algorithms as suggestions, they end up being hard-and-fast rules. "There is no deviation from it, no accounting for changes in condition, no accounting for situations in which a person could use more care," he said.10

Health plan AI tools developed in the past decade can supposedly predict how many hours of therapy patients will need, which types of physicians they may see, and exactly when they will be able to leave a hospital or nursing home. Medicare Advantage insurers, including Elevance, Cigna, and CVS Health, which owns Aetna, have come to rely on these predictions so much that they have purchased several of the most widely used tools. Medical providers and hospitals are reporting more frequent Medicare Advantage payment denials for care that should be covered, because it is covered by traditional Medicare.

The Growth and Influence of an Aggressive AI Model

One of the largest and most challenging AI models is NaviHealth, now owned by UnitedHealth Group. NaviHealth was bought and named by Tom Scully, the former head of the Centers for Medicare & Medicaid Services, when he saw its potential as a billion-dollar business serving the health plans running the new Medicare Advantage programs that he had helped to create. NaviHealth uses AI to predict and track patient care (and costs) in the first 60 to 90 days after hospital discharge, covering nursing homes, rehabilitation, and the cessation of those services. Its algorithms predict patient medical needs, lengths of stay, and target discharge dates. NaviHealth was then sold, first to Cardinal Health, then to another equity firm, and finally to UnitedHealth, the largest Medicare Advantage insurer in the United States. With the increasing use of NaviHealth in more settings, providers began noticing an increase in care denials, the blocking of needed rehabilitation stays, variations in medical determinations, and the misapplication of Medicare coverage criteria.

Between 2020 and 2022, appeals filed by providers to contest Medicare Advantage denials rose 58%, and nearly 150,000 requests were filed to review a denial in 2022. Providers have complained about rigid criteria applied by NaviHealth, which now manages prior authorization for nursing home care on behalf of many of the largest Medicare Advantage insurers, including UnitedHealthcare, Humana, and several Blue Cross Blue Shield plans.

Traditional Medicare allows patients discharged after a 3-day hospital stay to receive up to 100 days of covered nursing home care. NaviHealth's AI algorithms, however, are leading to Medicare Advantage patients having their approvals and payments cut off after 14 days. These algorithms for nursing home stays are not supported by any scientific studies, and when providers challenge the denials, the insurers claim proprietary protection for their "black box algorithms."

How Should You Approach AI?

Physicians and practices should examine how AI is being applied to their patients. Are there opportunities for practice integration or study? Do you suspect AI algorithms are affecting your patients through health plan or care coverage policies? Once you understand the ways in which AI is being applied to your patients, you can appropriately support or challenge those models. AI will continue to offer value in second opinions or reviews as part of physician medical decision-making but is not sufficient to replace the role of the physician in determining appropriate care.

Beth Israel Deaconess Medical Center is exploring how chatbots can be used—and misused—in training future physicians. Its researchers are finding that using chatbots like internet search engines provides limited and variable utility, whereas describing a patient and their symptoms as one would in a professional consult yields more useful information. AI may become a thought partner, but it is not likely to replace physician expertise and experience.10

Is there a collaborative looking at AI in your area? By joining, you can inform the conversation about the utility of AI for patients with cancer, as well as the risks and unintended consequences.

Reach Out

Share comments with me by e-mail. I would be interested in hearing your stories of how AI is affecting your practice and patients. Control and regulation of AI is just starting at the national and state levels. We should be a part of that conversation, as well as of the review and testing of AI to enhance cancer care.

References

  1. Shreve JT, Khanani SA, Haddad TC. Artificial intelligence in oncology: current capabilities, future opportunities, and ethical considerations. Am Soc Clin Oncol Educ Book. 2022;42:842-851.
  2. Turner J. AskX: ChatGPT integration with Amplity’s Insights database. Amplity Health. https://amplity.com/thought-leadership/askx-chatgpt-integration-with-amplitys-insights-database. Accessed August 9, 2023.
  3. Farina E, Nabhen JJ, Dacoregio MI, et al. An overview of artificial intelligence in oncology. Future Sci OA. 2022;8:FSO787.
  4. Dixon K. Connecticut is no Silicon Valley, but it could become a center for artificial intelligence. CT Insider. June 29, 2023. Updated July 18, 2023. www.ctinsider.com/politics/article/artificial-intelligence-center-connecticut-yale-18171586.php. Accessed August 9, 2023.
  5. Holcombe D. Corporate disrupters are moving rapidly into healthcare. Are we watching? Oncology Practice Management. 2023;13:1,10,12.
  6. Partnering for a healthier future. Google Health. https://health.google/partners/. Accessed August 9, 2023.
  7. Banks MA. Machine learning predicts cancer patient behavior: ASCO 2023. Specialty Pharmacy Continuum. June 29, 2023. www.specialtypharmacycontinuum.com/Pharmacy-Technology-Report/Article/06-23/Machine-Learning-Predicts-Cancer-Patient-Behavior-ASCO-2023/70709. Accessed August 9, 2023.
  8. Connors E. Companies identify risk factors of artificial intelligence. Intelligize. July 2023. https://go.intelligize.com/2023-Downloads-AI-risk-factors-LinkedIn. Accessed August 9, 2023.
  9. Ross C, Herman B. Denied by AI: how Medicare Advantage plans use algorithms to cut off care for seniors in need. Stat News. March 13, 2023. www.statnews.com/2023/03/13/medicare-advantage-plans-denial-artificial-intelligence/. Accessed August 9, 2023.
  10. Kolata G. A mystery in the E.R.? Ask Dr. Chatbot for a diagnosis. New York Times. July 22, 2023. www.nytimes.com/2023/07/22/health/chatbot-medical-mystery-diagnosis.html. Accessed August 9, 2023.
