Deep-learning algorithm matches dermatologists' ability to identify skin cancer
Stanford School of Medicine News Mar 03, 2017
In the hope of creating better access to medical care, Stanford researchers have trained an algorithm to diagnose skin cancer.
Universal access to health care was on the minds of computer scientists at Stanford when they set out to create an artificially intelligent diagnosis algorithm for skin cancer. They created a database of nearly 130,000 skin disease images and trained their algorithm to visually diagnose potential cancer. From the very first test, it performed with inspiring accuracy.
"We realized it was feasible, not just to do something well, but as well as a human dermatologist," said Sebastian Thrun, PhD, an adjunct professor of computer science at Stanford. "That's when our thinking changed. That's when we said, 'Look, this is not just a class project for students, this is an opportunity to do something great for humanity.'"
The final product, the subject of a paper in the Jan. 25 issue of the journal Nature, was tested against 21 board-certified dermatologists. In its diagnoses of skin lesions, which represented the most common and deadliest skin cancers, the algorithm matched the performance of dermatologists.
Thrun is senior author of the paper.
Bringing this algorithm into the examination process follows a trend in computing that combines visual processing with deep learning, a type of artificial intelligence modeled after neural networks in the brain. Deep learning has a decades-long history in computer science, but only recently has it been applied to visual processing tasks, with great success. The essence of machine learning, including deep learning, is that a computer is trained to figure out a problem rather than having the answers programmed into it.
"We made a very powerful machine-learning algorithm that learns from data," said Andre Esteva, a lead author of the paper and graduate student in the Thrun lab. "Instead of writing into computer code exactly what to look for, you let the algorithm figure it out."
The algorithm was fed each image as raw pixels with an associated disease label. Rather than building an algorithm from scratch, the researchers began with an algorithm developed by Google that was already trained to identify 1.28 million images from 1,000 object categories. While it was primed to be able to differentiate cats from dogs, the researchers needed it to know a malignant carcinoma from a benign seborrheic keratosis.
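The article does not include code, but the approach it describes, starting from a network already trained on ImageNet's 1.28 million photos and retraining it to recognize disease categories instead, is standard transfer learning. The sketch below illustrates the idea in PyTorch using torchvision's pretrained Inception v3; the dataset path, class count and training settings are illustrative assumptions, not details taken from the study.

```python
# Minimal transfer-learning sketch (not the authors' code): reuse an
# ImageNet-pretrained network and retrain its classification head on
# skin-lesion labels. Paths and hyperparameters are illustrative only.
import torch
import torch.nn as nn
from torchvision import datasets, models, transforms

NUM_DISEASE_CLASSES = 2000  # assumption: the article says "more than 2,000 diseases"

# Inception v3 expects 299x299 inputs; normalize with ImageNet statistics.
preprocess = transforms.Compose([
    transforms.Resize(342),
    transforms.CenterCrop(299),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

# Hypothetical folder of labeled lesion images, one subfolder per disease.
train_data = datasets.ImageFolder("skin_lesions/train", transform=preprocess)
loader = torch.utils.data.DataLoader(train_data, batch_size=32, shuffle=True)

# Load weights learned on ImageNet, then swap the 1,000-way classifier
# head for one sized to the skin-disease taxonomy.
model = models.inception_v3(weights=models.Inception_V3_Weights.IMAGENET1K_V1)
model.fc = nn.Linear(model.fc.in_features, NUM_DISEASE_CLASSES)
model.AuxLogits.fc = nn.Linear(model.AuxLogits.fc.in_features,
                               NUM_DISEASE_CLASSES)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

model.train()
for images, labels in loader:
    optimizer.zero_grad()
    # In training mode Inception v3 returns a main and an auxiliary head.
    outputs, aux_outputs = model(images)
    loss = criterion(outputs, labels) + 0.4 * criterion(aux_outputs, labels)
    loss.backward()
    optimizer.step()
```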
"We gathered images from the internet and worked with the medical school to create a nice taxonomy out of data that was very messy: the labels alone were in several languages, including German, Arabic and Latin," said Brett Kuprel, the paper's other lead author and a graduate student in the Thrun lab.
After going through the necessary translations, the researchers collaborated with dermatologists at Stanford Medicine, as well as with Helen M. Blau, professor of microbiology and immunology at Stanford and a co-author of the paper. Together, this interdisciplinary team worked to classify the hodgepodge of internet images. Many of these, unlike those taken by medical professionals, were varied in terms of angle, zoom and lighting. In the end, they amassed about 130,000 images of skin lesions, representing more than 2,000 different diseases.
During testing, the researchers used only high-quality, biopsy-confirmed images provided by the University of Edinburgh and the International Skin Imaging Collaboration Project that represented the most common and deadliest skin cancers: malignant carcinomas and malignant melanomas. The 21 dermatologists were asked whether, based on each image, they would proceed with a biopsy or treatment, or reassure the patient that the lesion was benign. The researchers evaluated success by how well the dermatologists were able to correctly diagnose both cancerous and noncancerous lesions in more than 370 images.
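The article does not spell out the scoring, but the binary decision it describes (biopsy or treat versus reassure) is typically summarized by sensitivity and specificity, the measures the Nature paper uses to compare the algorithm with the dermatologists. The sketch below shows that calculation on made-up labels; the data and function name are illustrative only.

```python
# Illustrative scoring sketch (not the study's evaluation code): treat each
# image-level decision as binary -- "malignant" (biopsy/treat) versus
# "benign" (reassure) -- and report sensitivity (malignant lesions correctly
# flagged) and specificity (benign lesions correctly reassured).

def sensitivity_specificity(truth, decisions):
    """truth/decisions: lists of 'malignant' or 'benign', one entry per image."""
    tp = sum(t == "malignant" and d == "malignant" for t, d in zip(truth, decisions))
    fn = sum(t == "malignant" and d == "benign" for t, d in zip(truth, decisions))
    tn = sum(t == "benign" and d == "benign" for t, d in zip(truth, decisions))
    fp = sum(t == "benign" and d == "malignant" for t, d in zip(truth, decisions))
    return tp / (tp + fn), tn / (tn + fp)

# Toy example: six biopsy-confirmed images and one reader's calls.
truth     = ["malignant", "benign", "malignant", "benign", "benign", "malignant"]
decisions = ["malignant", "benign", "benign",    "benign", "malignant", "malignant"]

sens, spec = sensitivity_specificity(truth, decisions)
print(f"sensitivity={sens:.2f}, specificity={spec:.2f}")  # sensitivity=0.67, specificity=0.67
```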