The use of artificial intelligence and, in particular, machine learning is becoming increasingly popular in research. These systems excel at high-speed data analysis, interpretation, and laborious research tasks, such as image assessment.

One of the areas in which machine learning has been enjoying success is image recognition. Now, researchers have begun to use machine learning to analyze brain tumors.

Training a machine to recognize tumors

Primary brain tumors include a broad range that depends on cell type, aggressiveness, and development stage. Being able to rapidly identify and characterize the tumor is vital for creating a treatment plan. Normally, this is a job for radiologists who work with the surgical team; however, in the near future, machine learning will play an increasing role.

George Biros, professor of mechanical engineering and leader of the ICES Parallel Algorithms for Data Analysis and Simulation Group at The University of Texas at Austin, has spent almost a decade developing accurate computer algorithms that can characterize gliomas. Gliomas are the most common and aggressive type of primary brain tumor.

At the 20th International Conference on Medical Image Computing and Computer-Assisted Intervention (MICCAI 2017), Professor Biros and collaborators presented the results of a new, automated method of characterizing gliomas. The system combines biophysical models of tumor growth with machine learning algorithms to analyze the magnetic resonance (MR) imaging data of glioma patients. The system is powered by the supercomputers at the Texas Advanced Computing Center (TACC).

The research team put their new system to the test at the Multimodal Brain Tumor Segmentation Challenge 2017 (BRaTS’17), which is a yearly competition at which research groups present new approaches and results for computer-assisted identification and classification of brain tumors using data from pre-operative MR scans. The new system impressively managed to score in the top 25% in the challenge and was near the top rankings for whole-tumor segmentation.

The goal of the contest is to take an image of the brain and have the computer analyze it and automatically identify different kinds of abnormal tissue, including edema, necrotic tissue, and areas with aggressive tumors. It is a little like taking pictures of your family and using facial recognition to identify each person, except that here the images are brain scans, and it is tissue recognition that the computer must perform automatically.

The team were given 300 sets of brain images to calibrate their systems with; this is known as “training” in machine learning terms and is how the machine is taught to identify features.

During the last part of the contest, the researchers were given 140 new brain images from patients and had to identify the location of tumors and divide them into different tissue types. They were given just two days to do this; for humans, doing the job would be a monumental amount of work.  

The image processing, analysis, and prediction pipeline they used has two main stages: a human-assisted machine learning stage, in which the computer creates a probability map for the target classes it needs to identify, such as whole tumor and edema, and a second stage, in which these probabilities are combined with a biophysical model of how tumors grow. The growth model imposes limits on the analysis and aids correlation.
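As a rough illustration of the two-stage idea, a learned probability map can be blended with a model-based prior and then thresholded to produce a tumor mask. This is only a toy sketch: the `combine_stages` function, the uniform 50/50 weighting, and the tiny arrays are assumptions for illustration, not the team's actual method.

```python
import numpy as np

def combine_stages(prob_map, growth_prior, weight=0.5):
    """Blend stage-1 classifier probabilities with a stage-2
    model-based prior, then threshold to get a binary tumor mask."""
    fused = weight * prob_map + (1 - weight) * growth_prior
    return fused > 0.5

# Toy 2x2 "images": stage 1 is the machine learning probability map,
# stage 2 is a (hypothetical) biophysical growth-model prior.
prob = np.array([[0.9, 0.2], [0.6, 0.1]])
prior = np.array([[0.8, 0.1], [0.7, 0.0]])

mask = combine_stages(prob, prior)
print(mask.astype(int))  # voxels flagged as tumor
```

The point of the second stage is that a voxel the classifier is unsure about can still be accepted or rejected based on whether a tumor could plausibly have grown there.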

Using supercomputers to characterize brain tumors

The system ran on the supercomputers at TACC, allowing the team to employ large-scale nearest neighbor classifiers, a machine learning method. For every voxel, or 3D pixel, in an MR image of the brain, the system tries to locate all similar voxels in the 300 brains it had previously seen during training in order to determine whether an area of an image is a tumor or not.

This translates to 1.5 million voxels per brain image, and with 300 brain images to assess, the computer system had to look at half a billion voxels for every new voxel of the 140 unknown brains it had been given in order to determine if a voxel was a tumor or healthy tissue. This was possible thanks to the use of the TACC supercomputers and represents a huge amount of computing power.
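The voxel-by-voxel nearest neighbor idea can be sketched in a few lines of Python. Everything here is an illustrative assumption: the `classify_voxels` function, the single intensity feature, and the toy data are stand-ins; the real system compares far richer features across hundreds of millions of training voxels, which is why supercomputers were needed.

```python
import numpy as np

def classify_voxels(train_features, train_labels, query_features, k=5):
    """Label each query voxel by majority vote among its k nearest
    training voxels (Euclidean distance in feature space)."""
    labels = np.empty(len(query_features), dtype=train_labels.dtype)
    for i, q in enumerate(query_features):
        # Distance from this query voxel to every training voxel.
        d = np.linalg.norm(train_features - q, axis=1)
        nearest = np.argsort(d)[:k]
        # Majority vote: 1 = tumor, 0 = healthy tissue.
        labels[i] = np.bincount(train_labels[nearest]).argmax()
    return labels

# Toy example with a 1-D intensity feature: tumor voxels are brighter.
train_x = np.array([[0.10], [0.20], [0.15], [0.90], [0.85], [0.95]])
train_y = np.array([0, 0, 0, 1, 1, 1])
query_x = np.array([[0.12], [0.88]])

print(classify_voxels(train_x, train_y, query_x, k=3))  # [0 1]
```

The brute-force loop here is exactly what makes the full-scale problem expensive: each new voxel is compared against every training voxel, so the cost grows with both the number of query voxels and the size of the training set.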

Each individual stage in the analysis pipeline utilized different TACC computing systems; the nearest neighbor machine learning classification component used 60 nodes at once (each consisting of 68 processors) on TACC's latest supercomputer, Stampede2. The Stampede2 supercomputer is one of the most powerful computer systems in the world, and Professor Biros and his team were able to test and refine their algorithm on the new system in the spring of this year. They were among the first researchers to gain access to the machine, and they needed its sheer power to perform these highly complex operations.

The end result of having access to this power was that Professor Biros and his team were able to run their analysis on the 140 brains in less than four hours. They correctly characterized the data with an accuracy of almost 90%, which is comparable to human radiologists doing the job and in a fraction of the time. The process is also completely automatic once the system algorithms are trained, and it can then assess image data and classify tumors without any further need for human intervention.

The system will be installed at the University of Pennsylvania by the end of this year in partnership with project collaborator Christos Davatzikos, director of the Center for Biomedical Image Computing and Analytics and a professor of radiology at the university. While the system will not replace radiologists and surgical staff, it will help to improve the reproducibility of assessments and could potentially lead to faster diagnoses.


This is yet another example of how machine learning is being employed in research and medicine, and the methods the team has developed here have potential beyond brain tumor analysis. The system could be applied to other medical imaging problems of a similar nature through transfer learning, so the possibilities are nearly endless.

If you are excited about how AI and machine learning can change research forever, you may be interested in a related project: the MouseAge project is seeking support to develop a visual recognition and assessment system that will allow researchers to determine the age of mice without the need for invasive testing. If you are interested in helping us create a system that could speed up aging research and reduce animal suffering, check it out.


About the author

Steve Hill

As a scientific writer and a devoted advocate of healthy longevity technologies, Steve has provided the community with multiple educational articles, interviews, and podcasts, helping the general public to better understand aging and the means to modify its dynamics. His materials can be found at H+ Magazine, Longevity Reporter, Psychology Today, and Singularity Weblog. He is a co-author of the book “Aging Prevention for All” – a guide for the general public exploring evidence-based means to extend healthy life (in press).