January 25, 2012
TechCrunch recently published a guest post from Vinod Khosla with the headline “Do We Need Doctors or Algorithms?”. Khosla is an investor and engineer, but he is a little out of his depth on some of his conclusions about health IT.

Let me concede and endorse his main point that doctors will become bionic clinicians by teaming with smart algorithms. He is also right that eventually the best doctors will be artificial intelligence (AI) systems — software minds rather than human minds.

That said, I disagree with Khosla on almost all of the details. He has accidentally embraced a perspective that too many engineers and software people bring to health IT.

Bear with me — I am the guy trying to write the “House M.D.” AI algorithms that Khosla wants. It's harder than he thinks because of two main problems that he's not considering: the search space problem and the good data problem.
Any person even reasonably informed about AI knows about Go, an ancient game with simple rules. Those simple rules hide the fact that Go is a very complex game indeed. For a computer, it is much harder to play than chess.

Almost since the dawn of computing, chess was regarded as something that required intelligence and was therefore a good test of AI. In 1997, the world chess champion was beaten by a computer. The year after that, a professional Go player beat the best Go software in the world with a 25-stone handicap. Artificial intelligence experts study Go carefully precisely because it is so hard for computers.
The approach that computers take toward being smart — thinking of lots of options really fast — stops working when the number of options skyrockets and the number of potentially right answers also becomes enormous. Most significantly, Go can always be made more computationally difficult simply by expanding the board.

Make no mistake: the diagnosis and treatment of human illness is like Go. It is not like chess. Khosla is making a classic AI mistake, presuming that because he can discern the rules easily, the game must be simple. Chess has far more complex rules than Go, but it ends up being a simpler game for computers to play.

To be great at Go, software must learn to ignore possibilities rather than search through them. In short, it must develop “Go instincts.”
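To make the “options skyrocket” point concrete, here is a minimal back-of-the-envelope sketch (mine, not Khosla's), using commonly cited approximations for branching factor and game length; the numbers are illustrative, not exact.

```python
# A rough sketch of why brute-force search breaks down: the game tree grows
# as roughly branching_factor ** game_length. Figures below are commonly
# cited approximations; the "bigger board" row is a hypothetical.

def tree_size(branching_factor, game_length):
    """Approximate number of move sequences a naive search would face."""
    return branching_factor ** game_length

chess = tree_size(35, 80)     # ~35 legal moves per position, ~80 plies per game
go_19 = tree_size(250, 150)   # ~250 legal moves per position on a 19x19 board
go_big = tree_size(400, 230)  # hypothetical larger board: more moves, longer games

print(f"chess:        ~10^{len(str(chess)) - 1}")
print(f"go (19x19):   ~10^{len(str(go_19)) - 1}")
print(f"go (bigger):  ~10^{len(str(go_big)) - 1}")
```

Even these crude estimates show why a bigger board — like a longer list of candidate diagnoses — pushes exhaustive search past any hardware you can buy.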
The same is true for any software that could claim to be a diagnostician.

How can you tell when software diagnosticians are having search problems? When they cannot tell the difference between all of the “right” answers to a particular problem. The average doctor does not need to be told “could it be Zebra Fever?” by a computer that should have ignored every zebra-related possibility in the first place, because the patient is not physically located in Africa. (No zebras were harmed in the writing of this article, and I do not believe there is a real disease called Zebra Fever.)
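As an illustration of the kind of pruning I mean — a hypothetical sketch with an invented disease list, not any real diagnostic engine — the point is to throw implausible candidates away before scoring them, based on context:

```python
# Hypothetical sketch of context-based pruning in a differential-diagnosis
# search. The diseases, regions, and rules are invented for illustration.

CANDIDATES = {
    "influenza":   {"regions": {"worldwide"}},
    "malaria":     {"regions": {"sub-saharan africa", "south asia"}},
    "zebra fever": {"regions": {"sub-saharan africa"}},  # fictional, per the article
}

def prune_by_context(candidates, patient_region, travel_history):
    """Drop diagnoses that are implausible given where the patient is and
    where they have been -- the 'ignore possibilities' step, done before
    any expensive scoring of the survivors."""
    plausible = {}
    for name, profile in candidates.items():
        regions = profile["regions"]
        if "worldwide" in regions or patient_region in regions or travel_history & regions:
            plausible[name] = profile
    return plausible

print(prune_by_context(CANDIDATES, "north america", travel_history=set()))
# -> only 'influenza' survives; zebra-related possibilities are never scored
```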
The second problem is the good data problem, which is what I spend most of my time working on.

Almost every time I get over-excited about the Direct Project or other health data exchange progress, my co-author David Uhlman brings me back to earth:
What good is it to have your lab results transferred from hospital A to hospital B using secure SMTP and XML? They are going to re-do the labs anyway because they don’t trust the other lab.
While I still have hope for health information exchange in the long term, David is right in the short term. Healthcare data is not remotely solid or trustworthy. A good majority of the time, it is total crap. The reason that doctors insist on having labs done locally is not because they don't trust the competitor's lab; it's more of a “devil that you know” effect. They do not trust their own labs either, but they have a better understanding of how and when their own labs screw up. That is not a good environment for medical AI to blossom.

The simple reality is that doctors have good reason to be dubious about the contents of an EHR record.
There are lots of reasons for that, not the least of which is that the codes being entered there are often not diagnostically helpful or valid.

Non-healthcare geeks presume that the dictionaries and ontologies used to encode healthcare data are automatically valid. In fact, the safer assumption is that ontologies consistently lead to dangerous diagnostic practices, because they shepherd clinicians into choosing a label for a condition rather than making a true diagnosis. Once a patient's chart carries a given label, whether for diagnosis or for treatment, it can be very difficult to reassess that patient effectively. There is even a name for this problem: clinical inertia. Clinical inertia is an issue with or without computer software involved, but it is very easy for an ontology of diseases and treatments to make clinical inertia worse.
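To make the “label versus diagnosis” point concrete, here is a hypothetical chart fragment (invented for illustration; the code text is approximate). Once the coded label is written down, most downstream software treats it as ground truth, and the clinician's actual uncertainty disappears:

```python
# Hypothetical EHR problem-list entry, invented for illustration.
# The ontology forces a single coded label; the clinician's real
# uncertainty lives only in free text, which most software ignores.

problem_entry = {
    "code": "R50.9",               # ICD-10-style code, roughly "Fever, unspecified"
    "label": "Fever, unspecified",
    "status": "active",
}

clinical_note = (
    "Fever of unknown origin; infection vs. drug reaction vs. something rarer -- "
    "needs reassessment if it does not resolve in 48 hours."
)

# Downstream decision support sees only the coded label, so the differential
# and the plan to reassess are invisible to it -- a recipe for clinical inertia.
def downstream_view(entry):
    return entry["code"], entry["label"]

print(downstream_view(problem_entry))
```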
The fact is, medical ontologies must be constantly policed to ensure that they do not make things worse rather than better.

It simply does not matter how good the AI algorithm is if your healthcare data is both incorrect and described with a faulty healthcare ontology. My personal experience with health data on a wide scale? It's like having a conversation with a habitual liar who has a speech impediment.

So Khosla is not “wrong” per se; he is just focused on solving the wrong parts of the problem. As a result, his estimates of when certain things will happen are pretty far off. I believe that we will not have really good diagnostic software until after the singularity and until after we can ensure that healthcare data is reliable. I actually spend most of my time on the second problem, which is really a sociological problem rather than a technology problem.

Imagine if we had a “House AI” before we were able to feed it reliable data. Ironically, it would be very much like the character on TV: constantly annoyed that everyone around him keeps screwing up and getting in his way. Anyone who has seen the show knows that the House character is constantly trying to convince the other characters that the patients are lying. The reality is that the best diagnosticians typically assume that the chart is lying before they assume that the patient is lying. With notable exceptions, the typical patient is highly motivated to get a good diagnosis and is, therefore, honest.
The chart, on the other hand, be it paper or digital, has no motivation whatsoever, and it will happily mix in false lab reports and record inane diagnoses from previous visits. The average doctor doubts the patient chart but trusts the patient story. For the foreseeable future, that is going to work much better than an algorithmically focused approach.

Eventually, Khosla's version of the future (which is typical of forward-thinking geeks in health IT) will certainly happen, but I think it is still 30 years away. The technology will be ready far earlier. Our screwed-up incentive systems and backward corporate politics will be holding us back. I hardly have to make this argument, however, since Hugo Campos recently made it so well. Eventually, people will get better care from AI.
For now, we should keep the algorithms focused on the data that we know is good and keep the doctors focused on the patients. We should be worried about making patient data accurate and reliable. I promise you we will have the AI problem finished long before we have healthcare data that is reliable enough to train it.

Until that happens, imagine how Watson would have performed on “Jeopardy” if it had been trained on “Lord of the Rings” and “The Cat in the Hat” instead of encyclopedias. Until we have healthcare data that is more reliable than “The Cat in the Hat,” I will keep my doctor, and you can keep your algorithms, thank you very much.