Would You Trust AI With Your Health?
In his new book, Eric Topol argues that AI can be used in hospitals for everything from improving patient safety to streamlining workflow.
The following is an excerpt from Deep Medicine: How Artificial Intelligence Can Make Healthcare Human Again by Eric Topol.
Better prediction of an important diagnosis in real time is another direction of AI efforts, as we’ve seen, and it is of huge importance in hospitals, where one of the major challenges is treating infections that patients catch while hospitalized. Sepsis, a deadly condition triggered by infection and common in hospitals, is responsible for 10 percent of intensive care unit admissions in the United States. Treating it costs more than $10 billion per year, and treatment often fails: sepsis accounts for 20 to 30 percent of all deaths among hospitalized patients in the United States.
Timely diagnosis is essential since patients can deteriorate very quickly, often before appropriate antibiotics can be selected, let alone administered and take effect. One retrospective study by Suchi Saria at Johns Hopkins Medicine used data from 53,000 hospitalized patients with documented sepsis, along with their vital signs, electronic medical records, labs, and demographics, to see whether the condition could be detected sooner than it had been. Unfortunately, the algorithm’s accuracy (an area under the ROC curve of about 0.70) was not particularly encouraging. A second deadly hospital-acquired infection, Clostridium difficile, or C. diff, is also a target of AI, and here the data to date look a bit more positive. C. diff kills about 30,000 people each year in the United States, out of more than 450,000 patients diagnosed. Erica Shenoy and Jenna Wiens developed an algorithm to predict C. diff risk using data from 374,000 hospitalized patients at two large hospitals, with more than 4,000 structured EHR variables for each patient. The areas under the curve were 0.82 and 0.75 at the two hospitals, with many predictive features specific to each institution. With automated alerts to clinicians when a patient’s C. diff risk is high, it is hoped that the incidence of this life-threatening infection can be reduced in the future.
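To make that approach concrete, here is a minimal sketch of the kind of model behind these numbers: a classifier trained on structured EHR variables and scored by area under the ROC curve. The data, features, and model choice below are synthetic stand-ins, not the actual Saria or Shenoy/Wiens systems.

```python
# Minimal sketch of EHR-based risk prediction scored by ROC AUC.
# All data and features here are synthetic placeholders.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_patients = 5000
# Hypothetical structured EHR variables: vitals, labs, demographics.
X = rng.normal(size=(n_patients, 20))
# Synthetic outcome loosely driven by the first three variables.
y = (X[:, :3].sum(axis=1) + rng.normal(scale=2.0, size=n_patients) > 1.5).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
model = GradientBoostingClassifier().fit(X_tr, y_tr)
risk = model.predict_proba(X_te)[:, 1]           # per-patient risk score
print(f"AUC = {roc_auc_score(y_te, risk):.2f}")  # cf. the 0.70-0.82 figures above

# An automated alert would flag patients whose score crosses a threshold.
flagged = risk > 0.8
```

In practice the hard parts are the ones this sketch elides: assembling thousands of time-stamped variables per patient and choosing an alert threshold that balances missed cases against alarm fatigue.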
Preventing nosocomial infections, which one in every twenty-five patients will acquire from a caregiver or the environment, is also an important challenge for hospitals. For example, we know that absent or suboptimal handwashing is a significant determinant of hospital-acquired infections. In a paper titled “Towards Vision-Based Smart Hospitals,” Albert Haque and colleagues at Stanford University used deep learning and machine vision to unobtrusively track the hand hygiene of clinicians and surgeons at Stanford University Hospital with video footage and depth sensors. The technology was able to determine whether clinicians performed proper hand hygiene with accuracy exceeding 95 percent. Such sensors, which use infrared light to develop silhouette images based on the distance between the sensors and their targets, could be installed in hospital hallways, operating rooms, and at patient bedsides in the future to exploit computer vision’s vigilance.
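Very loosely, the recognition step in such a pipeline might look like a small network classifying one-channel depth frames; the architecture, input size, and labels below are illustrative assumptions, not the Stanford group’s actual system.

```python
# Toy depth-frame classifier: did this clinician perform hand hygiene?
# Architecture and shapes are invented for illustration.
import torch
import torch.nn as nn

class HandHygieneNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),  # depth = 1 channel
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(32, 2)  # performed / did not perform hygiene

    def forward(self, depth_frame):
        x = self.features(depth_frame)
        return self.classifier(x.flatten(1))

# One 64x64 silhouette-style frame; pixel values encode distance to sensor.
frame = torch.rand(1, 1, 64, 64)
logits = HandHygieneNet()(frame)
```

Depth frames are attractive here precisely because they preserve silhouettes while discarding identifying detail, which eases the privacy concerns of always-on cameras.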
Indeed, machine vision holds particular promise for deep learning of patterns in the dynamic, visual world of hospitals. The intensive care unit is another prime target for machine vision support. Reinforcement learning has been used as a data-driven means to automate the weaning of patients from mechanical ventilation, which has previously been a laborious and erratically managed clinical process.
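As a rough illustration of how reinforcement learning frames such a decision, here is a toy Q-learning loop over coarse, invented patient states with two actions, continue ventilation or wean; published approaches use far richer state, action, and reward definitions.

```python
# Schematic Q-learning for a wean/continue decision.
# States, rewards, and dynamics are invented for illustration only.
import random

STATES = range(5)   # 0 = unstable ... 4 = ready to wean
ACTIONS = [0, 1]    # 0 = continue ventilation, 1 = wean
Q = {(s, a): 0.0 for s in STATES for a in ACTIONS}
alpha, gamma, eps = 0.1, 0.9, 0.1

def step(s, a):
    """Toy dynamics: weaning a ready patient succeeds; weaning early sets the patient back."""
    if a == 1:
        return (4, 1.0) if s == 4 else (max(s - 1, 0), -1.0)
    # Continuing ventilation lets the patient slowly recover, at a small cost.
    return (min(s + 1, 4) if random.random() < 0.5 else s, -0.05)

for _ in range(20000):
    s = random.choice(list(STATES))
    a = random.choice(ACTIONS) if random.random() < eps else max(ACTIONS, key=lambda b: Q[(s, b)])
    s_next, reward = step(s, a)
    Q[(s, a)] += alpha * (reward + gamma * max(Q[(s_next, b)] for b in ACTIONS) - Q[(s, a)])

# After training, the greedy policy waits until state 4 before weaning.
policy = {s: max(ACTIONS, key=lambda b: Q[(s, b)]) for s in STATES}
```

The point of the framing is that the policy is learned from outcomes rather than hand-coded, which is what makes it attractive for a process clinicians have managed by judgment and trial.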
Surveillance video of patients could help detect the risk of a patient pulling out an endotracheal (breathing) tube, along with other warning signs not captured by vital signs, easing the nurse’s burden of detection. The ICU Intervene DNN, from MIT’s CSAIL, helps doctors predict when a patient will need mechanical ventilation, or vasopressors and fluid boluses to support blood pressure, along with other interventions. Another CSAIL algorithm helps determine the optimal time of transfer out of the ICU, with the objectives of shortening hospital stays and preventing mortality. Other ICU-centered efforts likewise relieve nursing staff through automated camera surveillance or algorithmic processing of vital signs.
We’re still in the early days of machine vision with ambient sensors, but there is promise that this form of AI can improve patient safety and efficiency. Another common hospital task that machine vision will likely have a role in changing is placing a central venous catheter, commonly known as a central line, into a patient. Because these lines are so invasive, they carry a significant risk of infection and of complications such as a collapsed lung or injury to a major artery. Machine vision that monitors proper technique, with respect to both sterile conditions and line placement, may improve safety. Operating rooms could change as machine vision systems continuously track personnel, instruments, and workflow. Preventing falls in the hospital by keying on risky patient movements or unsteadiness is also being pursued with AI vision.
A similar story of automated alerts speeding diagnosis and treatment is now unfolding for stroke. The FDA has approved algorithms, developed by Viz.ai, that analyze CT brain images for signs of stroke, enabling neurologists and healthcare teams to learn rapidly whether, and what type of, stroke has occurred in a patient undergoing scanning. Treatments for reducing the toll of brain damage, including dissolution or removal of clots (thrombectomy), have been validated, so this AI tool helps hasten the time to treatment for strokes suitable for intervention. That’s a critical goal: we lose about 2 million brain cells for every minute a clot obstructs the blood supply. Even earlier in the diagnosis of stroke, paramedics can apply the Lucid Robotic System, FDA approved in 2018: a device placed on the patient’s head that transmits ultrasound waves (via the ear) to the brain and, through AI pattern recognition, helps diagnose stroke, alerting the receiving hospital to prepare for potential clot removal.
Another major change that will come to the medical workflow, both within and outside hospitals, is how AI will empower nonphysicians to take on more work. There are about 700,000 practicing physicians in the United States, complemented by about 100,000 physician assistants and 240,000 nurse practitioners, a combined total equal to almost 50 percent of the physician workforce. With so many AI algorithms being developed to support clinicians, it is natural to assume that the playing field will become more level for these three groups and that PAs and NPs will take on a larger role in the years ahead. Critical assessment of AI deployment in health systems deserves mention; it will require user research, well-designed systems, and thoughtful delivery of decisions based on models that weigh both risk and benefit. This is unlike the rollout of EHRs into clinical medicine, when many of these vital steps were skipped, with serious adverse impact on the day-to-day care of patients.
We get even bolder with the planned “extinction” of the hospital, at least as we know it today. Although we clearly need ICUs, operating rooms, and emergency rooms, the regular hospital room, which makes up the bulk of hospitals today, is highly vulnerable to replacement. Mercy Hospital’s Virtual Care Center in St. Louis gives a glimpse of the future. There are nurses and doctors; they’re talking to patients, looking at monitors graphing all the data from each patient, and responding to alarms. But there are no beds. This is the first virtual hospital in the United States, opened in 2015 and built at a cost of $300 million. The patients may be in intensive care units or in their own bedrooms, under simple, careful observation or intense scrutiny, but they’re all monitored remotely. Even if a patient isn’t having any symptoms, the AI surveillance algorithms can pick up a warning and alert the clinician. The center’s use of high-tech algorithms to remotely detect possible sepsis or heart decompensation in real time, before such conditions are diagnosed, is alluring. Although being observed from a distance may sound cold, in practice it hasn’t been; a concept of engendering “touchless warmth” has taken hold. Nurses at the Virtual Care Center have regular, individualized interactions with many patients over extended periods, and they say they feel as if they “have fifty grandparents now.”
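The alerting idea can be caricatured in a few lines: score each stream of vital signs and page a clinician when the score crosses a threshold. The variables and cutoffs below are made up for illustration; Mercy’s actual surveillance models are far more sophisticated.

```python
# Simplistic early-warning score over remotely streamed vitals.
# Thresholds are illustrative, not clinical guidance.
def warning_score(vitals: dict) -> int:
    score = 0
    if vitals["heart_rate"] > 110 or vitals["heart_rate"] < 50:
        score += 2
    if vitals["resp_rate"] > 24:
        score += 2
    if vitals["spo2"] < 92:       # oxygen saturation, percent
        score += 3
    if vitals["temp_c"] > 38.5:   # fever
        score += 1
    return score

reading = {"heart_rate": 118, "resp_rate": 26, "spo2": 90, "temp_c": 38.7}
if warning_score(reading) >= 5:
    print("ALERT: possible deterioration -- notify virtual care team")
```

The appeal of the AI systems described above is that they can learn subtler, earlier signatures of deterioration than fixed cutoffs like these.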
Apart from elderly patients with an acute illness, there is a concentrated effort to use AI to support seniors’ ability to live and thrive in their own homes, rather than having to move into assisted-living facilities or needing frequent visits from caregivers. An extraordinary array of start-ups is developing sensors and algorithms that monitor gait, pulse, temperature, mood, cognition, physical activity, and more. Moreover, AI tools that improve vision and hearing can augment seniors’ sensory perception, promoting their safety and improving their quality of life. For example, with the Aipoly app, a senior with significant visual impairment can simply point a smartphone at an object and AI will quickly identify it aloud; it does the same for colors. Sensors that can detect whether someone has fallen can be embedded in the floor. And robot assistants in the form of pets, as well as specially designed Alexa-like voice assistants such as ElliQ (by Intuition Robotics), are examples of hardware AI to promote independent living.
Remote monitoring has the potential for very broad use in the future. With each night in the hospital accruing an average charge of $4,700, the economics of instead providing patients with equipment and data plans are not hard to justify. Add to that the comfort of one’s own home, without the risk of acquiring a nosocomial infection or enduring sleepless nights of constantly beeping alarms. Nevertheless, the St. Louis center is pretty much one of a kind right now, and there is little movement to make this the preferred path for patients who don’t require an ICU bed. Several issues are holding us back; some are technological and regulatory. Although systems that monitor all vital signs automatically, such as the Visi device from Sotera Wireless, are approved and in use by many health systems, there is no FDA-approved device for home use yet. Until we have FDA-approved home devices that are automatic, accurate, inexpensive, and integrated with remote monitoring facilities, we’ve got an obstacle. Perhaps more important in the short term is the lack of reimbursement models for such monitoring and the protracted delays in getting new codes set up and approved by Medicare and private insurers.
Excerpted from Deep Medicine: How Artificial Intelligence Can Make Healthcare Human Again. Copyright © 2019 by Eric Topol. Available from Basic Books, an imprint of Perseus Books, Hachette Book Group, Inc.
Eric Topol is the author of several books, including Deep Medicine: How Artificial Intelligence Can Make Healthcare Human Again (Basic Books, 2019) and The Patient Will See You Now: The Future of Medicine Is in Your Hands (Basic Books, 2015). He is a practicing cardiologist at the Scripps Clinic and a genomics professor at the Scripps Research Institute in La Jolla, California.