paging dr. google: how ai is learning to diagnose cancer
Despite the rough start artificial intelligence had in oncology thanks to IBM tying both hands behind Watson’s back, the technology has plenty to offer doctors, as demonstrated by a recent experiment: a deep neural network built by Google and trained on over 42,800 chest CT scans to flag potential signs of lung cancer. The trained system not only found tumors 5% more often than oncologists, it produced 11% fewer false positives, meaning it could diagnose patients more accurately while being less likely to recommend unnecessary and invasive testing, which can be really rough both physically and mentally. Even better, it was as much as 9.5% better at predicting cancer risk two years after screenings.
First and foremost, it’s important to note that AI is not replacing doctors; it’s merely providing a second set of well-trained eyes on test results. The reason the neural network noticed lung cancer slightly better than humans is that while our minds work on inference and habit, quickly glossing over things we’re fairly sure aren’t relevant, computers don’t simply ignore pixels or let them blend into the background as easily as our eyes do. Features we’re not entirely sure about stand out slightly sharper and register as more relevant to machines. Technicians could take a patient’s scan and upload it to a health records database where it’s examined by oncological AIs, so the doctor can review the image while considering the flags raised by the machine.
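To make the "second set of eyes" workflow concrete, here is a minimal sketch of how a scan could carry machine-raised flags alongside the image for the doctor to review. All the names here (`ScanRecord`, `flag_suspicious_regions`, the toy model) are hypothetical illustrations, not any real health records API.

```python
from dataclasses import dataclass, field

@dataclass
class ScanRecord:
    """A hypothetical record pairing a scan with any flags an AI attached."""
    patient_id: str
    image: list                       # pixel intensities, radically simplified
    ai_flags: list = field(default_factory=list)

def flag_suspicious_regions(record, model):
    """Run the model and attach its flags to the record; the doctor
    still reviews the original image and makes the final call."""
    record.ai_flags = model(record.image)
    return record

# Stand-in "model" that simply flags any pixel brighter than a threshold;
# a real system would run a trained neural network here.
def toy_model(image):
    return [i for i, pixel in enumerate(image) if pixel > 0.8]

record = flag_suspicious_regions(ScanRecord("p-001", [0.1, 0.95, 0.3]), toy_model)
print(record.ai_flags)  # positions the doctor should double-check first
```

The key design point is that the AI only annotates the record; nothing in the pipeline removes the human from the loop.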
But how does the machine know a patient might have cancer, and what to look for? Well, when a scan is uploaded, the input processing logic divides it into similarly sized sections. Then, each section is translated into numbers indicating the colors and depths of whatever is pictured. For images as large and high resolution as medical scans, progressive layers with tens, if not hundreds of thousands, of artificial neurons then go to work, identifying features they guess are important to making the right decision. Over tens of thousands of training iterations, statistical formulas adjust the computations and outputs of each artificial neuron to produce the right result as often as possible. This setup is known as a convolutional neural network, and it’s what Google used to outperform six trained radiologists.
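The core operation described above, sliding small learned filters over an image so that telling features produce strong responses, can be sketched in a few lines of NumPy. This is a toy forward pass only: the 8×8 "scan," the hand-written filter, and the function names are illustrative assumptions; in a real convolutional network the filter weights are learned during training, and there are many stacked layers instead of one.

```python
import numpy as np

def conv2d(image, kernel):
    """Slide a small filter over the image, producing a feature map
    whose values are high wherever the filter's pattern appears."""
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

def relu(x):
    """Keep only positive responses; weak or negative matches are zeroed out."""
    return np.maximum(x, 0)

def max_pool(x, size=2):
    """Shrink the feature map, keeping the strongest response in each patch."""
    h, w = x.shape
    h, w = h - h % size, w - w % size
    return x[:h, :w].reshape(h // size, size, w // size, size).max(axis=(1, 3))

# A toy 8x8 "scan": dark background with one small bright blob.
scan = np.zeros((8, 8))
scan[3:5, 3:5] = 1.0

# A hand-written 2x2 "bright blob" detector; a trained network would
# learn these weights from thousands of labeled scans instead.
kernel = np.ones((2, 2)) / 4.0

feature_map = max_pool(relu(conv2d(scan, kernel)))
print(feature_map.shape)  # (3, 3): each layer shrinks the map
print(feature_map.max())  # 1.0: the strongest response marks the blob
```

Stacking many such layers, each feeding the next ever more abstract features, is what lets the full network pick out subtle patterns a human eye might gloss over.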
Of course, this preliminary experiment doesn’t mean that we’re about to have all diseases diagnosed by a computer, merely that it’s possible, and we can continue testing and training these systems to help doctors make better, more informed decisions. This human-AI tag team approach would be particularly useful when patients come in with vague symptoms and fail to improve as doctors scratch their heads. This is, unfortunately, relatively common, as a whole lot of illnesses have similar symptoms, and some life-threatening infections and viruses can hide behind odd, faint rashes, random fevers, or gastrointestinal unease that can befuddle medical experts. Having machines do the equivalent of tapping a doctor on the shoulder and saying “hey, have you considered it’s blankety blank based on the following data?” can mean faster, more accurate diagnoses, and earlier treatment.
This is also not Google’s first foray into medicine. Another AI it created is being used in India to help detect diabetic eye disease while trying to make up for a drastic shortfall of doctors in rural areas. A similar approach, with multiple AIs trained to recognize common ailments and forward examination results to remote doctors, or even prescribe a recommended set of treatments themselves after demonstrating impressive real-world accuracy, could in theory help chronically underserved areas across the world. Doctors could treat more patients while providing more individualized attention to complicated cases as the AI handles basic diagnostic tests and the relevant paperwork. In short, the robots are getting ready to step into the doctor’s office, and your healthcare will be all the better for it.