
FindingPheno


Diagnosing cancer with AI

Updated: Oct 26, 2022

Cancer cells within a tissue look different from normal cells, both in the shape, size, and organisation of the cells and in the expression patterns of various proteins. We can use these differences to diagnose cancer, confirming not only whether it really is cancer, but also what type it is and how advanced. This is done by taking a small amount of tissue from the relevant part of the patient's body (called a biopsy), slicing and staining the tissue to visualise what is in there, then analysing it under a microscope.

Traditionally, this analysis is conducted by a pathologist, a doctor trained to understand and interpret the patterns and features within tissue images. Even though these images are visually complicated and can be highly variable, even within a single biopsy sample (see the image above for examples of colon sections), the process works because the human brain is very good at distinguishing even very small changes in appearance and placing them in a known context to find meaning.


While biopsy diagnosis is very effective, it is also laborious, with each sample taking a significant amount of the pathologist's time. It can also be subjective, with each pathologist relying on their own individual experience to make diagnostic decisions. And while the number and complexity of tests to be analysed is increasing, the number of trained pathologists is currently shrinking, creating shortages in diagnostic capacity. Taken together, these factors delay the patient receiving their diagnosis and beginning treatment if required.


To address these problems, doctors are increasingly turning to machine learning (ML)-based image analysis to support the diagnostic pipeline. The microscope images are digitised, turning them into numerical data representing the pixel colour intensities, and this data is then analysed by computer to identify and classify any cancerous cells. This is done using a subset of ML called deep learning (DL), in which the "machine", i.e. the part of the system which learns from data, is made up of different algorithms arranged in a multi-layered structure called a neural network. DL neural networks are far more complex than standard ML algorithms, with tiered layers and nodes inspired by the neural connections within the human brain. This layered approach allows a problem to be broken down into logical steps as it passes between layers, making it particularly suitable for the stepwise image processing that medical image analysis requires.
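To make the "multi-layered" idea concrete, here is a minimal sketch of a neural network forward pass, written with NumPy. The weights here are random and the network is tiny; real digital pathology models are convolutional networks with millions of learned parameters, so this illustrates only the stacked-layer structure, not a working diagnostic system.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    # Non-linear activation applied between layers
    return np.maximum(0, x)

def forward(pixels, weights):
    """Pass flattened pixel intensities through successive layers."""
    activation = pixels
    for w in weights[:-1]:
        # Each hidden layer re-represents the input at a higher level
        activation = relu(activation @ w)
    logits = activation @ weights[-1]  # final layer scores the classes
    # Softmax turns scores into probabilities: cancer vs. not cancer
    exp = np.exp(logits - logits.max())
    return exp / exp.sum()

# Toy input: a flattened 8x8 grayscale image patch
patch = rng.random(64)

# Three layers of (untrained) weights: 64 -> 32 -> 16 -> 2 classes
weights = [rng.standard_normal((64, 32)) * 0.1,
           rng.standard_normal((32, 16)) * 0.1,
           rng.standard_normal((16, 2)) * 0.1]

probs = forward(patch, weights)
print(probs)  # two class probabilities, summing to 1
```

Training would adjust the weight matrices so that the output probabilities match known diagnoses; the layered structure itself stays the same.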


The analysis starts by finding the hard edges within the pixel data to make lines or simple shapes, then adds details, patterns, and other features as the data passes through the layers until the system is able to predict what the image is showing. The images are then classified (cancer: yes or no?), followed by staging and other analysis to give a final diagnosis. Because the DL algorithms do not come with pre-existing biases or context, the features within the image that are chosen as the basis of the prediction and classification may not be the ones a human would pick. This can make the system much more efficient, focusing only on those areas truly relevant to making the decision. However, the lack of context can also allow the system to be side-tracked by things which are not biologically relevant, leading to incorrect classifications. For example, if all of the non-cancer samples in the training data set also happen to have more blue staining, through chance or sampling bias, the system could learn to identify blue samples regardless of disease status.
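The "hard edges" step above can be sketched with a fixed edge-detecting filter. The snippet below slides a Sobel kernel over a synthetic image split into a dark half and a bright half; the output lights up only where the brightness changes. In a real deep network these filters are learned from data rather than hand-written, so this is purely an illustration of what the earliest layers compute.

```python
import numpy as np

def convolve2d(img, kernel):
    """Valid-mode 2D filtering (no padding), as in the first layer of a CNN."""
    kh, kw = kernel.shape
    h, w = img.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            # Weighted sum of the pixel neighbourhood under the kernel
            out[i, j] = np.sum(img[i:i+kh, j:j+kw] * kernel)
    return out

# Synthetic 6x6 image: dark left half, bright right half -> one vertical edge
img = np.zeros((6, 6))
img[:, 3:] = 1.0

# Sobel kernel that responds to left-to-right brightness changes
sobel_x = np.array([[-1, 0, 1],
                    [-2, 0, 2],
                    [-1, 0, 1]], dtype=float)

edges = convolve2d(img, sobel_x)
print(edges)  # non-zero only in the columns straddling the edge
```

Later layers combine many such filter responses into progressively richer features (textures, shapes, whole structures), which is what lets the final layers classify the image.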


Because of the complexity of the algorithms and the risk of irrelevant classifications, it is important to train and validate these DL image analysis systems with very large, high-quality clinical data sets before they are ready to be used - a non-trivial process. However, once they are trained, DL digital pathology systems can be both faster (e.g. 15 seconds instead of 30 minutes) and more accurate (e.g. 93% computer vs 73% human) than analysis by a pathologist. While the resulting digital diagnosis still needs to be checked and put into medical context by a pathologist, this represents a significant reduction in their basic workload, allowing them to allocate their time and attention more effectively. When coupled with robotic workstations and automated microscopy and image acquisition, these powerful AI-based methods have the potential to give a fast, efficient, high-throughput workflow with improved detection rates and quicker time to diagnosis for patients.


For more info and some other case studies on using different types of machine learning in biology, take a look at our video.

Written by Shelley Edmunds
