Artificial intelligence has the potential to reinvent the world, from how businesses operate to the types of jobs people hold to the way wars are fought. In health care, AI promises to help doctors diagnose and treat diseases as well as help people track their own wellness and monitor chronic conditions. But Watson’s recently publicized struggles teach us a few important things about the success criteria for AI-enabled solutions.
In 2012 IBM famously launched an initiative to apply its Watson Artificial Intelligence platform to improve cancer patient outcomes. Six years later, the results are decidedly mixed. According to a recent Wall Street Journal article, several IBM partners and clients have stopped or reduced Watson’s oncology-related projects. According to company filings, IBM has spent close to $15 billion on Watson and related efforts. So, what went wrong?
Watson can teach us a few crucial lessons about what it takes to develop successful AI solutions:
1. AI Solutions Must Create Demonstrable, Needle-Moving Value: Watson’s most fundamental problem has been that, in many cases, it simply wasn’t perceived to add much value. In some cases, its recommendations weren’t even accurate.
Where Watson does demonstrate value is in helping physicians stay up to date with medical research and developments. In that role, it meaningfully augmented a physician’s ability to stay current on relevant medical knowledge.
2. Availability of (Good) Data Is Crucial: Watson was often tripped up by a lack of data on rare or recurring cancers, and by treatments that evolved faster than human trainers could input them into Watson’s underlying systems.
Building intelligent software to recommend personal medical treatment is a difficult problem. Not only does the AI have to be trained with data from historical success cases, but it must also be able to correlate outcomes with detailed patient health records. Currently, all that data sits in a variety of disparate formats across fragmented information silos. Training AI solutions with incomplete, incompatible, or inconsistent data doesn’t lead to good outcomes.
3. Augmentation Is Superior to Automation: It turns out that the science of diseases such as cancer is far from fully settled, and much of medical practice remains a matter of judgment. Ever heard of a patient seeking a “second opinion”?
Despite recent advances in medicine, physician diagnosis rests on common sense, judgment, context recognition, intuition, and instinct (tacit knowledge). As we’ve discussed before, these innately human qualities call for Intelligence Augmentation – where the AI must work alongside the human, not replace them.