Going by first principles, any medical diagnosis is the interpretation of data based on known correlations. What disease or condition does that data correlate to?
The data could include stated symptoms; observations by sight, touch, and sound; ECG readings; ultrasound imaging; MRIs; X-rays; CT scans; blood reports; radiology reports; biopsy observations; reactions and cultures; and more.
The correlations may not be simple or direct. Often the interpretation simply reveals the need for more data: a different test, perhaps.
Quite a few of these diagnostics involve the interpretation of images. This could be an MRI, an ultrasound scan, an X-ray, a photograph…
There are reports that, despite spending billions of dollars, IBM Watson has not lived up to its promise in cancer detection.
However, I will argue that irrespective of whether any single AI experiment succeeds or fails, machines are already better than human experts at interpreting images for applications such as tumor detection. The only caveat is that this holds only where there is a large enough existing dataset to learn from.
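The idea of learning diagnostic correlations from a dataset can be made concrete with a toy sketch. The nearest-centroid classifier below "learns" which feature patterns correlate with which label from a handful of labeled examples. Everything here is invented for illustration: the feature vectors are synthetic stand-ins for image data, and real tumor-detection systems use deep neural networks trained on thousands of annotated scans, not three-number vectors.

```python
# Toy illustration of learning a diagnostic correlation from labeled data.
# All feature vectors and labels are synthetic, invented for this sketch.

def centroid(vectors):
    """Element-wise mean of a list of equal-length feature vectors."""
    n = len(vectors)
    return [sum(v[i] for v in vectors) / n for i in range(len(vectors[0]))]

def train(examples):
    """examples: list of (feature_vector, label) pairs.
    Returns a model mapping each label to the centroid of its examples."""
    by_label = {}
    for features, label in examples:
        by_label.setdefault(label, []).append(features)
    return {label: centroid(vecs) for label, vecs in by_label.items()}

def predict(model, features):
    """Assign the label whose centroid is nearest in squared Euclidean distance."""
    def dist(c):
        return sum((a - b) ** 2 for a, b in zip(features, c))
    return min(model, key=lambda label: dist(model[label]))

# Synthetic "image patches": bright, dense regions are labeled "tumor".
training_data = [
    ([0.90, 0.80, 0.85], "tumor"),
    ([0.95, 0.90, 0.80], "tumor"),
    ([0.10, 0.20, 0.15], "healthy"),
    ([0.05, 0.10, 0.20], "healthy"),
]
model = train(training_data)
print(predict(model, [0.88, 0.82, 0.90]))  # a bright patch -> tumor
print(predict(model, [0.12, 0.08, 0.10]))  # a dark patch -> healthy
```

The point of the sketch is the caveat from the paragraph above: the classifier is only as good as its training set. With four examples it can only mimic the crudest correlation; with thousands of annotated scans, the same principle scales to clinically useful accuracy.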
I am not saying that doctors will be replaced. On the contrary, doctors will gain tools with diagnostic capabilities that have not existed before.
People will want a human doctor to exercise independent judgement and accept or reject what a machine finds. People will want a doctor to explain the diagnosis to them, to recommend a course of action and to tell them why such an action is being recommended.
But doctors must be prepared for a world that is coming very soon, where diagnostic tests and their interpretation will be done by machines. They must adopt and embrace these technologies sooner rather than later. These are the tools that will enable them to greatly improve outcomes for their patients.
Here is an article on a technology that detects lung cancer tumors with 95 percent accuracy, when human experts are at 65 percent: