How Artificial Intelligence is Transforming Healthcare


With headlines such as “Your future doctor may not be human” and “The robot doctor will see you now”, the enthusiasm around artificial intelligence (AI) in recent years has been monumental. Such statements are no doubt hyperbolic, but there has nonetheless been significant and exciting progress, particularly within the medical field. As such, it is becoming increasingly important for the medically inclined to have some understanding of the advantages and pitfalls of AI as the technology continues to improve and move toward implementation.

Much of this progress relies on a machine learning technique called deep learning, which gives computers the ability to make sense of text and images in a more meaningful way. While these may seem like simple skills to us, they are absolutely integral to medical practice. Deep learning works by taking a large set of labelled example images and running them through a neural network: a series of interconnected nodes, loosely analogous to the neurons of a brain. With each image the network updates itself slightly, gradually improving its accuracy until it can reliably distinguish images of different categories. In the past few years this technique has produced performance comparable to human expertise in specific domains such as the diagnosis of diabetic retinopathy, skin cancer and hip fractures. Although we are unlikely to see AI taking over our hospitals any time soon, it may begin to transform our working environments in the years to come, providing tools that aid medical practice.
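For the curious, that update loop can be sketched in a few lines of code. This is a toy illustration only, not the models used in the medical studies: a single artificial “neuron” learns to separate two categories of tiny two-pixel “images”, nudging its weights slightly after each example. Real deep networks stack millions of such units, but the principle is the same.

```python
import math

def sigmoid(z):
    # Squashes any number into the range (0, 1), read as a probability
    return 1.0 / (1.0 + math.exp(-z))

def train(examples, labels, epochs=200, lr=0.5):
    w = [0.0, 0.0]  # one weight per "pixel"
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(examples, labels):
            pred = sigmoid(w[0] * x[0] + w[1] * x[1] + b)
            err = pred - y  # how wrong the neuron was on this example
            # Nudge each weight a little in the direction that reduces the error
            w[0] -= lr * err * x[0]
            w[1] -= lr * err * x[1]
            b -= lr * err
    return w, b

# Hypothetical data: "bright" two-pixel images labelled 1, "dark" ones labelled 0
examples = [(0.9, 0.8), (0.8, 0.9), (0.1, 0.2), (0.2, 0.1)]
labels = [1, 1, 0, 0]
w, b = train(examples, labels)

def predict(x):
    return sigmoid(w[0] * x[0] + w[1] * x[1] + b)
```

After training, `predict` gives a high probability for a new bright image and a low one for a dark image; the fundoscopy and skin cancer networks below do the same, just with far more pixels, layers and examples.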

The first of these examples is a study on diabetic retinopathy, an eye condition seen in diabetics that can result in permanent vision loss. Early detection is vital for a good prognosis, yet it is a task expensive in time, expertise and money. The Google study trained its network on fundoscopic images (photographs of the back of the eye), achieving specialist-level accuracy in detecting diabetic retinopathy; the system was subsequently FDA approved. It is now being implemented as a detection tool in India and Thailand, where access to specialist diagnosis may be limited, and could provide more timely diagnosis.

Figure: Heat map on a representative diabetic retinopathy image. Green: regions that do not change the probability of an abnormal classification (neutral areas). Orange: regions that increase the probability of an abnormal classification (suspicious areas). Clear or light blue: regions that decrease the probability of an abnormal classification (normal areas).

Another study, out of Stanford, classified images of skin cancer as malignant or non-malignant, demonstrating dermatologist-level accuracy on the task. Patients could ultimately use such a tool to photograph skin lesions on their smartphone and receive advice on whether they need expert follow-up, potentially resulting in earlier diagnosis. Melanoma has a 95% survival rate at stage 1 (earliest detection) compared to 15% at stage 4 (melanoma that has spread to other organs), so prompt diagnosis could have a dramatic impact on mortality rates. This is exactly what the Stanford team plans to do: they are currently developing an app that could put specialist-level diagnosis in your pocket. (Apps for this purpose do already exist, but studies evaluating them have found a lack of evidence regarding their safety and efficacy.)

Finally, a study out of Adelaide University achieved radiologist-level accuracy in detecting hip fractures, a common radiological task in most emergency departments. With an area under the ROC curve (AUC) of 0.994, a metric that combines sensitivity and specificity, the study demonstrated an exceptionally reliable tool for hip fracture diagnosis. Pelvic X-rays account for 6% of emergency referrals, and handing this task over to AI could reduce the burden on our overworked radiologists and, furthermore, save the financially struggling Royal Adelaide Hospital a few pennies.
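For intuition, an AUC like the one the Adelaide study reports can be read as the probability that a randomly chosen fracture image receives a higher model score than a randomly chosen normal image. A short sketch, using made-up scores purely for illustration:

```python
def auc(scores_pos, scores_neg):
    # Pairwise comparison: what fraction of (positive, negative) pairs does
    # the model rank correctly? Ties count as half a win.
    wins = 0.0
    for p in scores_pos:
        for n in scores_neg:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(scores_pos) * len(scores_neg))

# Hypothetical model outputs, NOT the study's data
fracture_scores = [0.95, 0.9, 0.85, 0.7]
normal_scores = [0.1, 0.2, 0.3, 0.8]
print(auc(fracture_scores, normal_scores))  # prints 0.9375
```

An AUC of 0.5 means the model ranks cases no better than coin-flipping, 1.0 means every fracture outscores every normal film, so 0.994 sits very close to perfect ranking.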

Figure: ROC curve showing the performance of the hip fracture model (AUC 0.994), with a point reflecting the optimistic upper bound of human performance.

While any new technology carries some risk, there are many advantages to using AI in our hospitals, clinics and amongst the general public. Once developed, these systems are exceedingly cheap to run, can be accessed anywhere with an internet connection and, unlike their human counterparts, never fatigue. However, we need not worry about losing our jobs to AI just yet. The tasks AI currently learns are inherently narrow and specific, unable to tackle the complexity and scope of the work carried out by doctors. Instead, we can expect AI to begin changing some of the ways we practice medicine, providing useful aids to make healthcare more universally accessible, affordable and reliable.


Automatic Detection of Diabetic Retinopathy using Deep Learning

Dermatologist-level classification of skin cancer with deep neural networks

Detecting hip fractures with radiologist-level performance using deep neural networks
