Ever been worried about a potentially cancerous mole? You might soon be able to use your smartphone to make an evaluation, thanks to a group of computer scientists from Stanford University.
Conscious of the obstacles that prevent people from getting equal access to cancer screenings, the Stanford researchers built an artificially intelligent (AI) diagnosis algorithm to identify images of potentially cancerous moles and lesions as accurately as dermatologists. They say that, in time, a cell phone app might help patients diagnose skin cancer by themselves. That kind of technology could be a game-changer, particularly in less-developed countries where the nearest dermatologist could be miles away.
The skin-cancer detection algorithm is part of a wider trend in healthcare, which has seen a surge of new technologies in the fields of AI, machine learning, and big data. According to one analysis of companies pursuing healthcare-focused AI applications, deals increased sevenfold from fewer than 10 in 2011 to 60 in 2015.
IBM has also been pouring investment into research on the use of AI and machine learning in healthcare. Its technology includes a “lab on a chip” that can analyze blood and other bodily fluids for the presence of disease, a camera that can examine a pill’s molecular structure to determine whether it is real or counterfeit, and a system that can analyze a patient’s words to ascertain the likelihood of mental illness. These tools could both help patients self-diagnose and assist doctors in streamlining normally painstaking processes in pathology and other fields.
On the face of it, tools like the ones under development are incredibly exciting. If effective, they have the capacity to dramatically decrease health costs, streamline procedures, and redefine longstanding roles for healthcare practitioners. It’s going to take considerable time and rigorous testing, however, to determine whether this new technology is truly useful in the hands of people without medical training. Would you trust your iPhone if it diagnosed you with cancer? That question is going to be critical for the startups making forays into a healthcare tech sector currently dominated by big academic and corporate players (like Stanford and IBM).
There’s good reason to approach new tools with caution. Online symptom checkers like WebMD have promised for years to let patients take healthcare decisions into their own hands, but often do little more than confirm users’ worst suspicions and create new anxieties in their own right. This phenomenon, known as “cyberchondria,” is the byproduct of a world where “answers” to all health-related questions are just a Google search away. Worse, symptom checkers such as Symcat and iTriage are often wrong. Even when these sites do provide valuable information, they can’t help someone without medical training know what (or what not) to do with it.
Applied too quickly or improperly, new technologies designed to help diagnose diseases could instead feed into our culture’s problematic obsession with finding health dangers at every turn. Especially when it comes to cancer, both the public and the media already have a bad habit of overreacting to risks real and imagined. Not that it’s entirely our fault: even the global health bodies whose job it is to determine the effects of substances on our bodies have an incredibly hard time telling us where the dangers actually lie. One of the chief culprits is the International Agency for Research on Cancer (IARC), an arm of the World Health Organization. At regular intervals, IARC works its way into the news cycle by classifying everything from burnt toast to hot beverages as “possibly carcinogenic” to people.
Despite its small size and budget, a group like IARC can have an outsized impact on how people live, work, eat, and play. Its October 2015 announcement that processed meat products increase the risk of colorectal cancer, which placed ham and sausages in the same carcinogen category as cigarettes, sent the global public into a panic, to take just one example. Another major controversy that kicked off last year revolved around coffee, a substance IARC had declared “possibly carcinogenic” back in 1991, before reclassifying it in June 2016 because the original finding proved inconsistent with the scientific research published over the intervening two decades.
The controversy surrounding IARC’s classifications has even dragged the U.S. government into the mix: since last September, Utah congressman Jason Chaffetz (who heads the House Oversight Committee) has been leading an investigation into the organization’s federal funding. When even the experts have such a hard time agreeing with each other, is it any wonder the average person is tempted to see a cancer threat around every turn?
All of this serves as a stark reminder that, even as our digital diagnostic tools advance, non-medical professionals run the risk of misinterpreting medical information, especially since the information presented on these websites and apps does not allow for differentiated analysis. This is why effective communication between (human) doctor and patient remains vital to ensure that treatments line up with ailments. More information and technological tools that allow patients to make preliminary decisions are useful, but we shouldn’t let ourselves forget that we can’t (yet) reliably trust AI to answer our most intimate health questions.