The AI Death Spiral

Dispatches from the artificial intelligence hype machine whirlpool

By Alicia Puglionesi

Stories meant to showcase the inspiring potential of AI in healthcare can leave the reader in a strange, dark place. In 2022 the New York Times covered a suicide prevention app developed by Harvard psychologists to monitor high-risk patients after discharge from the hospital. We follow a woman who enrolls in the study and is saved from a suicide attempt when the app cues the researchers to intervene. The psychologists explain how a continuous stream of Fitbit and smartphone data, encompassing movement, sleep, social interaction, and questionnaires, can unlock clues to mental health crises—for example, sleep deprivation could be a decisive factor. However, we also learn that the featured study participant is homeless, discharged from the hospital with a Fitbit but nowhere to go. She attempted suicide when she could no longer afford a hotel room, and it was not AI, but self-report, that alerted the research team. The story concludes at the end of the six-month study, when the participant is living in a tent, charging her monitoring devices at McDonald’s, and unable to find a therapist who takes Medicaid.

Like much reporting on health AI, the Times piece slots in a critical perspective—in this case, a mental health advocate who argues that patients need a human support network.  Ms. Cruz, the study participant, “does not have a network like that,” the author flatly asserts. Rather than leading toward generous resources, the app-based solution seems to affirm the inviolable law of austerity. It’s easier to imagine keeping people on 24/7 AI suicide watch than to imagine a world where we have a right to healthcare and housing.

There are hundreds of articles on the internet with headlines like “The AI doctor will see you now” and I’ve read dozens of them in an unscientific survey of the future of medicine as envisioned by tech writers, healthcare industry outlets, and news media. While revolutionary artificial intelligence products flash across the surface, the deeper current of AI hype in healthcare forms a logical loop: first, it calls out real, systemic problems in American medicine and declares a coming technological salvation. Quickly, however, the promise of humane, affordable, convenient care folds back in on itself as story after story unwittingly reveals the harsh austerity at the core of these innovations. I’ve come to think of this dizzying journey as the health AI whirlpool.

Health AI refers to a wide range of digital products that use machine learning, “deep learning,” or artificial intelligence for tasks that range from reading x-rays to filling out patient charts to predicting time to discharge. All of these uses can automate and scale racial bias in ways that are well documented. Some of these products are focused on narrow clinical tasks, such as identifying a skin disease from a photograph, while others are incorporated into health systems to manage patient care, operations, and financing. The former are regulated, however imperfectly, by the FDA, while the latter are classed as internal “decision support” or “quality control” tools and are not subject to external oversight. Amid calls for stronger AI safeguards, the FDA issued guidance in 2022 that broadened its definition of “software as a medical device” to include some decision support, but these guidelines are nonbinding and disputed by industry groups.

The media tendency to frame AI as “superhuman” enshrines a set of widely varied technologies as an arbiter of medical truth. Only through lawsuits and complaints have investigators uncovered decision support tools that optimize for the goals of health systems or insurers rather than patients, for instance by predicting time to discharge and pressuring clinicians to meet the algorithm’s deadline. As critical AI scholars have long argued, many AI fixes are a band-aid for systemic problems that require political solutions.

Although some health apps boast of AI’s anonymous detachment, tech industry visions of futuristic healthcare are not devoid of the human touch. Indeed, the sentimental ideal of humanity forms another circle in the logical whirlpool. A major premise of health AI, as championed in agenda-setting works such as Eric Topol’s 2019 Deep Medicine, is that AI can “make medicine human again,” reducing time spent on mundane tasks and freeing physicians for the true work of their hearts: having compassionate conversations with patients. However, we learn from the New York Times that doctors are terrible at having compassionate conversations, and that doctors themselves believe ChatGPT writes better scripts for patient interactions. (While there’s plenty of research on physician empathy, this story cites surveys conducted by a health AI startup and a communication training company making the case for their products.)

A study that seemed to confirm the superior compassion of AI was heralded with headlines like “ChatGPT outshines physicians in quality and empathy.” Tellingly, the sample of human responses was culled from a Reddit forum where volunteer practitioners answer medical questions from the public. It’s possible that the fee-for-service patient portal is not much better than an anonymous, free online forum for people who have no other source of medical advice, but neither of these settings showcases the ideal of human empathy. As hematologist Jennifer Lycette writes of frantic messages dispatched during lunch and after-hours, “I know my responses to patients in the EMR are more thorough and empathetic when I’m not pressed for time and exhausted.”

The promise that AI will free us to be human and express our natural empathy quickly turns into a demand for a seamless performance of care that has to be managed by AI. For decades now, medical schools have tried to cultivate humanity in their trainees in the name of improved patient relationships, which can contribute to better outcomes. But in the absence of real agency—doctors’ ability to give patients what they need medically, patients’ ability to obtain adequate resources for good health—“humanity” becomes more like customer service, and automating it could produce a more pleasing simulacrum of humanness. After all, chatbots are engineered through multiple rounds of human annotation to optimize emotional satisfaction.

Industry outlets run countless stories echoing the idea that AI will “combat burnout” and create more time for patient care by automating documentation and charting. A hospital executive triumphantly told STAT’s Brittany Trang about a doctor who took his first lunch break in 14 years thanks to this software, but Trang cautions that quality research on AI note-taking products “amounted to brochures about return on investment.” Neither tech companies like Microsoft nor their health system customers are in a position to point out that much documentation in the US is for billing purposes, or to suggest removing the financial pressures at the root of provider burnout. Meanwhile, industry leaders are not hiding the disingenuousness of their preferred solution to the burnout they created: “While AI may take some time to replace medical professionals,” one executive concludes, “the current focus on alleviating administrative burdens is certainly a step in the right direction.”

Another logical circle: algorithms are carefully marketed as “decision support,” with the assurance that a responsible expert will parse the evidence and make a fully informed decision in the patient’s best interest. Human oversight, health AI proponents widely acknowledge, is necessary to win patient trust and to catch algorithmic errors. Often within the same article, however, we learn that decision support technology will dramatically speed up diagnosis and treatment, insurance claims, and billing, as well as scale up patient loads. A Financial Times “AI doctor” story from 2020 quotes Richard Zane, UCHealth’s chief innovation officer, boasting that “instead of one nurse monitoring eight people on a ward, she can monitor 8,000 people at home.”

Researchers have long studied automation bias, a phenomenon in which users become predisposed to accept automated recommendations. Some blame this tendency on popular faith in super-intelligent computers, but accepting an automated recommendation is also the path of least resistance. Extended human review slows down the process, and there are disincentives to seeking an “override.” National Nurses United, the largest nurses’ union in the US, surveyed its membership and found that 24% of respondents had worked with a decision tool whose choices “were not in the best interest of patients.” Only 17% had the ability to override the decision on their own, and 34% could do so if they got permission from a doctor.

When AI systems boast of human oversight but discourage deviation from the most efficient workflow, practitioners fall into the “moral crumple zone” of responsibility – a reference to the part of a car that crumples to absorb an impact. In 2022, a Kaiser Permanente telephone advice nurse was held negligent for a patient death that resulted from following a symptom-based algorithm, which channeled her into scheduling a virtual visit rather than in-person care. Another nurse interviewed by the Wall Street Journal explained that overriding automated protocols in her hospital could result in disciplinary action. Her employer stated that nurses have “an ethical and professional obligation to escalate those concerns immediately,” just as Kaiser maintained that its advice nurse should have used professional judgment. These disavowals illustrate how the industry emphasis on human oversight is also a deflection of corporate responsibility. Legal liability and public blame fall on an individual user, even when the system is doing what it was expressly designed to do.

The whirlpool cycles discussed above—let’s call them the circle of humanness and the circle of responsibility—are just two examples of an obfuscating logic common to industry and media accounts of healthcare automation. The force driving this circular current is, of course, money. The promise of greater humanity through automation will not be fulfilled because that isn’t automation’s purpose under capitalism. The textbook purpose of automation is to cut labor costs while increasing production. While health AI proponents promise that their tools will reduce ever-accelerating US healthcare spending, tech companies would not be crowding around the trough unless they saw continued growth on the horizon. Abolishing private insurance would free physicians from the hell of insurance documentation, but automating it will only allow services to be billed, rejected, and appealed at a higher volume. Compassionate conversations would have to be machine-generated simply to keep up.

As critical AI scholars at the DAIR Institute have argued in their critiques of Silicon Valley “longtermism,” futuristic visions of cheap, superhuman automated healthcare and the threatened “end of doctors” are a distraction from the actually existing political economy of health. Employers and institutions are adopting AI because they recognize the benefits to them of automating and surveilling to the greatest extent possible. They don’t truly expect to replace human providers with omnipotent AI. Just as in the broader tech economy, this anxiety is a lever for job displacement, fragmentation, and speed-ups. Researchers who study algorithmic bias call for building equity and racial justice into AI development, and point out that few companies put adequate resources towards preventing racialized harm in the rush to roll out new products. Healthcare is a business where the short-term pursuit of profit is unlikely to lead to an eventual consumers’ utopia.

Even dedicated AI optimist Eric Topol sees how automation can serve to “squeeze clinicians more,” consolidating power and cutting costs at the expense of patients and practitioners. He writes that the AI-enabled renaissance of empathetic care “will require human activism, especially among clinicians, to stand up for the best interest of patients.” Indeed, it will take a fundamental transformation of the healthcare system to create conditions of fairness, justice, and effective oversight for health AI. There are larger reasons, political rather than technical, to fight for this transformation. 

The case for AI as the cure for healthcare’s woes starts from the assumption that nothing else about this extractive, consolidated, and financialized industry can be changed. It’s a magic-bullet response to overwhelming public dissatisfaction, as well as to a new generation of doctors, nurse practitioners, physician assistants, and technicians increasingly politicized by their working conditions. AI fixes promise to free practitioners from paperwork so they can be more compassionate, to free them from compassion so they can be more efficient, and so on. The “AI doctor” might free patients from a dysfunctional health system, but might also free that system of the moral obligation to meet patients’ needs. Freedom is another ideal that beckons us into the AI whirlpool and then vanishes beneath the current of technological inevitability. It’s clear that health AI won’t free humanity from labor or illness, but a better world can be claimed through our labor and with our precarious bodies.

