
Man vs machine

Hutan Ashrafian outlines a project that showed a computer algorithm can be at least as effective as human radiologists at spotting breast cancer in x-ray images.

As a preventative measure, the mammogram has been a big success. It has caught countless cases of breast cancer early, giving treatment the best possible chance. It isn't perfect, though, and now a multidisciplinary team of clinicians, neuroscientists and software engineers has developed an algorithm that can read mammogram x-rays at least as accurately as expert human radiologists.

Reading a mammogram requires a high degree of knowledge, skill and experience. It can be a challenge and that means it is also subject to error, resulting in false positives and negatives. Given the sheer number of images that radiologists have to interpret – here in the UK alone, the NHS screened 2.2 million women in 2016-17 – the potential for misinterpretation is considerable. In some cases, mistakes create unnecessary anxiety for women. In other cases, it means the opportunity to catch the disease early has been missed.

Translating data

Overcoming this weakness was the aim for the research team drawn from Google Health, DeepMind, US hospitals, NHS hospitals and Imperial College London. Their raw material for the project was a large dataset of almost 26,000 images from a Cancer Research project in UK hospitals and a smaller set of 3,000 images from a handful of US hospitals.

One of the key authors of the report was Hutan Ashrafian, Chief Scientific Advisor and Clinical Lecturer in surgery at Imperial College London, who has been involved in the project from its inception.

“The aim of this work from day one was to take a dataset that could be translated into a digital format and then used in a way that would give the maximum benefit to women called in for routine mammogram screening,” he says. “The mammogram is the best modality we have today for reducing the massive burden of breast cancer. Screening is the strongest way to manage the disease via early diagnosis and treatment. The point of this research, then, was to augment the screening programme, and to support the people working in that programme. There are not enough radiologists, and with an ageing population the numbers are only likely to get larger, so our objective was to look at the potential for a digital solution.”

Analytical platform

With developments in the world of machine learning moving at a pace, the hunt was on for an industry partner that could spearhead the development of a suitable analytical platform. It came in the shape of DeepMind, the innovative UK company of AI experts, which during the project became an integral part of Google Health. Along with the clinical experts and an expansive stock of images, the pieces for the project were now assembled. “All these stakeholders came together,” says Hutan. “The job then was to see if we could come up with a way to support the health system to manage these millions of mammograms.”

The success of the project would depend on the quality of the AI that DeepMind could devise, which in turn depended on the quality of information that underpinned it. The two image datasets were one key strand of this, the other was the clinical expertise. “We gave the AI experts the protocol of selection, the parameters for false positives and negatives, the baseline of what is cancer and what isn’t, and so on,” says Hutan. “It was an intricate process that involved the whole group, including people who take the mammograms.”

What was the margin of error that the AI would seek to improve? “Depending on which dataset you look at, of 100 women screened, four might have a queried image. But only one of those might actually have cancer. So there needs to be an increase in accuracy for the whole process.”
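As a back-of-the-envelope check, the illustrative figures Hutan quotes above can be turned into a recall rate and the predictive value of a recall (these are indicative numbers from the interview, not study results):

```python
# Illustrative arithmetic from the example above: of 100 women screened,
# 4 have a queried image ("recalled"), but only 1 of those 4 has cancer.
screened = 100
recalled = 4
true_cancers_among_recalled = 1

recall_rate = recalled / screened                     # fraction called back
ppv = true_cancers_among_recalled / recalled          # positive predictive value of a recall
false_positives = recalled - true_cancers_among_recalled

print(f"Recall rate: {recall_rate:.0%}")                       # 4%
print(f"PPV of a recall: {ppv:.0%}")                           # 25%
print(f"False positives per 100 screened: {false_positives}")  # 3
```

In other words, on these example figures three in every four recalls are false alarms, which is the accuracy gap the AI sets out to narrow.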

Promising results

Better accuracy is precisely what the AI system delivered. For false positives it gave an absolute reduction of 5.7% on the US dataset and 1.2% on the UK dataset. For false negatives the reduction was 9.4% and 2.7%, respectively. The AI was also tested as part of the double-reading process used in the UK (where one radiologist reads the image and gives a verdict, and a second confirms the result). Here, the AI was found to be non-inferior and could potentially cut the workload of the second reader by up to 88%.

The results look promising. “It might even be better than that,” says Hutan, “because the algorithm had access to the images and nothing else, whereas the radiologists had access to patient details and history. So they were working from more knowledge than the AI.”

The big question now is not whether AI will play a major role in medicine, but how long before it is widely used in clinical settings. Hutan says: “Something like 25% of radiology research now involves an AI element of one sort or another. I think it is an inevitable innovation. Its real potential is to augment processes in terms of sensitivity and to reduce mortality and unnecessary interventions. In this case, we are looking at a more accurate and quicker turnaround of the scans, and that would also free up the capacity of the radiologists who read these mammograms to concentrate their skills on more pressing matters.”

To Hutan’s mind, the timeframe for this is not the next 25 years, but far sooner. “The pressing question is how do we get this approved through the national regulatory bodies? We have the AI, now we want to take it to clinical trials and then show efficacy at a national level. We have already started the discussions with Public Health England, NHS Improvement and so on. Ideally, we would like to see it operating in one form or another in the next two to three years.”


Hutan Ashrafian     

1990s: Bachelor of Science, Bachelor of Surgery, Imperial College London     

2001: House Officer, surgery and urology, Royal Free London     

2002: Doctor in A&E, Barts   

2006: Registrar Surgeon, Great Ormond St Hospital  

2007: Honorary Registrar, Imperial College London     

2015: PhD, Imperial College London

2015: Senior Registrar in bariatric and metabolic surgery, Chelsea and Westminster Hospital

2017: Honorary Senior Clinical Fellow in surgery, Chief Scientific Advisor, Institute of Global Health Innovation, Imperial College London  


Image credit | Shutterstock
