
Sunday, June 22, 2025

«In healthcare, "artificial intelligence" can lead to doing many things well that shouldn't be done», by Juan Gérvas & Mercedes Pérez-Fernández

Doing well what shouldn't be done
Economists say there's nothing worse than doing well something that shouldn't be done. In the healthcare sector, the classic example is "cutting off the ears of the entire population." It's something that shouldn't be done, but it could be done, and done well.

That is, cutting off the ears of literally everyone, from birth, and doing it without complications, with excellent anesthesia and surgical technique, without infections or disfiguring scars, and reaching the entire population, even in the most remote corners of the country.

We can imagine the pride of politicians, managers, and healthcare professionals at carrying out such a complex task so efficiently. If we are not careful, this example could soon find its equivalent in the healthcare sector itself through the use of "artificial intelligence."



"Scribes"
In Catalonia (and other places around the world), the use of "scribes" is being implemented.

These "artificial intelligence" programs are capable of recording the consultation conversation between a healthcare professional and a patient, saving time and automatically recording it in the electronic medical record. But what's the point of recording this conversation in the electronic medical record? Barely 5% of the record of the entire doctor-patient encounter is useful. The rest is garbage that floods electronic medical records, hindering the best healthcare.

"Scribes" can increase the volume of garbage and noise in clinical records and lead to worse health outcomes.



Diagnoses
Doctors face various difficulties in diagnosing, and in many cases, "artificial intelligence" can help improve the process and achieve more and better diagnoses. But physicians should use only accurate and timely diagnoses and know how to "not diagnose" when diagnosing does not improve care or prognosis.

For example, in hospital emergency rooms, almost 40% of abdominal pain cases are "resolved" without a final diagnosis. And pursuing a diagnosis, for example in nonspecific abdominal pain in adolescence, can lead to tests that border on cruelty, without adding anything to the patient's progress.

"Improving the diagnostic process" with "artificial intelligence" can be harmful to patients and populations. We have a serious problem of "overdiagnosis" in clinical care, and the use of "artificial intelligence" can lead us to exacerbate the problem through the resulting unnecessary "therapeutic cascades." This is what we call "the tyranny of diagnosis."



More Isn't Always Better
Technological fascination is leading to an uncritical acceptance of the use of "artificial intelligence" in medicine, with a disturbing assumption that more is better. When the first automobiles began to circulate, they were seen as "modernity" and occupied public spaces with the approval of authorities and citizens. Over the years, they became a public health problem due to environmental pollution and the reduction of living spaces.

Today, restrictions on automobiles are common in cities, and it is astonishing how easily "more is better" was once accepted in this sector of transportation. The rejection of that idea has led, for example, to Paris promoting car-free streets with trees and gardens, with up to 60% of the total area free of cars and 40% of the population having given up car ownership.



Will We Learn?
Artificial intelligence has its applications, but it's important to learn from other "modern technologies" that have preceded it and be rational in its use. The benefits promised by artificial intelligence can be harmful illusions. Technological dazzle must not fool us into believing that more is better when it comes to artificial intelligence applications in the healthcare sector.



NOTE
"Artificial intelligence," like all resources in medicine, has a rational use. Considering, of course, advantages and disadvantages, for example, regarding "scribes" in medical records

And it means not attributing to it a real "intelligence" that it lacks.

In the rush toward, and blindness about, the medical applications of "artificial intelligence," including scribes, follow the money. USA: "The global market for AI in healthcare, including medical scribes, is projected to reach $45.2 billion by 2026." https://www.deepcura.com/post/navigating-the-future-insights-into-the-ai-medical-scribe-market


Addendum
7 August 2025
UK. GPs should report all suspected inaccuracies caused by AI to the regulator's yellow card scheme, which is used for the pharmacovigilance of adverse incidents and safety concerns over medicines and medical devices (here).



NOTE 
On December 6, 2025, we received an email which we are reproducing with the author's permission, changing the necessary elements to protect her identity. It reads: 
"I'm Rosa Muñoz, you may not remember me. I'm a Family Physician. We've exchanged greetings before the pandemic, and I've participated in several Primary Care Innovation Seminars, but I don't think we've crossed paths since then. Anyway, since I read your post on the blog 'Health, Money, and Primary Care' from June 7, 2025, about 'Artificial Intelligence,' I felt like saying hello. And while I'm at it, I wanted to let you know that I'm currently working in a hospital where a scribe is mandatory. It's having problems, as you can imagine." From not coding primary care diagnoses to "hallucinating" and transcribing things that weren't said, while omitting important data—because it records bio data fairly well, but ignores psychosocial aspects—I also don't like that all the case histories now seem the same, and I can't identify my clinical "style." I found your point about only 5% of the record of the entire doctor-patient encounter being useful, and the rest being "garbage or noise," quite interesting. I also found the idea of using a yellow card to report adverse effects of medical devices, in addition to medications, very interesting. 

My doctor at Allina [a nonprofit health care system based in Minneapolis, Minnesota, United States] showed me a new AI tool that they're using to summarize medical histories. It can review your whole chart and provide a human-readable summary. He copy-pasted the summary into my notes for me to look at later.
The errors were astounding.
The AI bot falsely claimed I had a history of sleep apnea (nothing even resembling that).
It claimed I was "diagnosed with heart disease" on a specific date, because that was a date a test to *rule out heart disease* was ordered (and it was ruled out).
It mischaracterized a real spinal issue with the wrong diagnosis.
Because I once had antibiotics for a bullseye tick bite, it said I had a clinically significant "history of rash".
I requested he remove or correct this in my note, but I am really concerned how errors like this could compound over time.
This visit's misinformed summary goes into my record that will then be further misunderstood the next time the bot reviews my info.
To clarify, this was not an issue with the voice-based note-taking app.
This was specifically a tool that reviews chart history and summarizes.
There's no record in the note of what's AI-generated versus human-written, so I don't have faith it's going to be identified as particularly untrustworthy.



Authors

Juan Gérvas, retired rural doctor, Equipo CESCA, Madrid, Spain.
jjgervas@gmail.com www.equipocesca.org @JuanGrvas @juangrvas.bsky.social

Mercedes Pérez-Fernández, retired rural doctor, Equipo CESCA, Madrid, Spain. 
