I’ve been really excited about the implications of the new Apple Watch (I wrote about that here). But shortly after I shared my excitement, someone in a WhatsApp group for my medical school alumni shared news of some new artificial intelligence (AI) breakthrough and raised the old question:

Won’t AI soon phase doctors out?
It’s a good question, now more than ever. And how you answer it depends very much on what you think it is that makes doctors valuable.
If the value of a doctor is in what he or she knows, then we’d better start preparing to submit to our AI overlords. Because we can’t even compete. Not only will they know far more than any single one of us ever could: they’ll organise it better and retrieve it faster, to boot.
But I don’t believe our primary value as doctors is in what we know; I think we have sold ourselves short by believing that it is.
Our primary value is in what we are able to do with what we know. And that value is fourfold.
- We interpret. We consider the symptoms people complain of, the findings we make on examining them, the results from lab tests, and knowledge from research and experience. We bring all of that together and try to make sense of it in light of the unique patient in front of us who isn’t interested in being just another data point. Having interpreted all this data…
- We communicate. We explain what we find and what we know, and present it all to the patient in simple terms, human to human, ensuring they understand, filling in gaps where they don’t, and staying available to help them make the right choices, answer questions and address concerns.
- We decide. We’re making decisions throughout the whole process of interpreting and communicating, and both are always going on: there’s always new data to interpret, new things to communicate, new concerns to address. We have to decide which data matters and which doesn’t, what to communicate, when to do it and what to leave out, and what to say to address which concerns. Decisions, decisions, decisions. And because we take all these decisions…
- We’re accountable. We’re the ones invested enough to be on the line if things go wrong. Our skin is very much in the game: losing the licence to practise is right up there among any doctor’s worst nightmares. With it would go reputation, career and livelihood, and even the welfare of our families. And even when we’ve done everything right that we know how to do, when things still go badly for our patients, our emotional well-being is on the line.
We interpret, we communicate, we decide and we’re accountable.
In this light, the value of computers becomes clearer, but so do their limitations. Computers are far better than us at data manipulation: that’s almost the literal meaning of “compute.” So they’re a great help with giving us more data as well as with interpreting it. I think this is the greatest value of computing to healthcare, and it’s absolutely welcome. It’s why I’m excited about the new Apple Watch’s ability to identify irregular heart rhythms, provide ECG readings and even recognise falls.
And yet all of that data (for now, anyway) still requires the interpretation of a human doctor to be most valuable.
Computers and AI are at their best when they complement doctors.
The same dynamic is apparent in the other three aspects of doctors’ value: communicating, deciding and being accountable. AI is an amazing complement in each, but a pretty poor replacement. Consider…
With the Google Duplex demo, we’ve seen AI-enabled computers able to communicate: making realistic-sounding phone calls to book appointments. But would people seeking healthcare really take that over human-to-human communication? Do you really want some impersonal computer to deliver information to you at your most vulnerable and raw?
Sure, computers can take decisions, but how do we train them to take decisions on issues we ourselves haven’t figured out yet? And I’m referring not just to decisions about healthcare, but also to decisions about communication. There’s a good deal of research around this that AI can benefit from, but there’s still a whole lot of decision-making that comes down to the sense and intuition that only come from experience and empathy.
Last, and perhaps most important: no AI, however good, can take accountability for anything. Being accountable is something only humans can do. If AI were to replace doctors, the responsibility of accountability would fall to whatever corporation owns the AI.
This last point is so rarely mentioned in conversations about AI in medicine that I’m not even sure it’s considered. But it’s quite important. When your health is at stake, it’s important to know who’s personally responsible for your care. The last thing you need at a time like that is some impersonal computer backed by an indifferent corporation driven by profit, working for shareholders and represented by some faceless administrator in some faraway office. But then, that’s happening already, isn’t it?
You’ll notice the real issue in all of this, in the end, isn’t AI itself, but that we are human, and connecting with other humans matters to us. And it matters most of all when matters of life and death call that humanity into question. As long as that doesn’t change, not even the ultimate AI may suffice.
The persistence of several dubious forms of alternative medicine should convince anyone who doubts this. By focusing on human-to-human interaction, they earn people’s trust despite a gross lack of any kind of actual evidence. And trust, it turns out, is a far more essential form of value in healthcare than the cost-cutting that corporations typically focus on.
Are we more interested in earning trust or cutting costs?
That said, in each of these three areas it’s not hard to imagine how AI and computers can make doctors even more valuable: complementing our communication and decision-making, and helping us be better at being accountable.
The problem is, AI doesn’t make itself. People design it, and people design in line with how they see things and what they’re trying to achieve. The question is whether what they’re trying to achieve is good for the rest of us.
One thing’s for sure: the face of medicine is changing, and so is what it means to be a doctor. The issue is whether we—all of us, not just doctors—are prepared to do the work to understand how the changes will affect us, decide which changes we actually want and act to make them happen.