Using GPUs to Discover Human Brain Connectivity – Neuroscience News

Summary: Researchers developed a new GPU-based machine learning algorithm to help predict the connectivity of networks within the brain.

Source: IISc

A new GPU-based machine learning algorithm developed by researchers at the Indian Institute of Science (IISc) can help scientists better understand and predict connectivity between different regions of the brain.

The algorithm, called Regularized, Accelerated, Linear Fascicle Evaluation, or ReAl-LiFE, can rapidly analyse the enormous amounts of data generated from diffusion Magnetic Resonance Imaging (dMRI) scans of the human brain.

Using ReAl-LiFE, the team was able to evaluate dMRI data over 150 times faster than existing state-of-the-art algorithms.

"Tasks that previously took hours to days can be completed within seconds to minutes," says Devarajan Sridharan, Associate Professor at the Centre for Neuroscience (CNS), IISc, and corresponding author of the study published in the journal Nature Computational Science.

Millions of neurons fire in the brain every second, generating electrical pulses that travel across neuronal networks from one point in the brain to another through connecting cables or axons. These connections are essential for computations that the brain performs.

"Understanding brain connectivity is critical for uncovering brain-behaviour relationships at scale," says Varsha Sreenivasan, PhD student at CNS and first author of the study.

However, conventional approaches to studying brain connectivity typically use animal models and are invasive. dMRI scans, on the other hand, provide a non-invasive method to study brain connectivity in humans.

The cables (axons) that connect different areas of the brain are its information highways. Because bundles of axons are shaped like tubes, water molecules move through them, along their length, in a directed manner. dMRI allows scientists to track this movement, in order to create a comprehensive map of the network of fibres across the brain, called a connectome.

Unfortunately, it is not straightforward to pinpoint these connectomes. The data obtained from the scans only provide the net flow of water molecules at each point in the brain.

"Imagine that the water molecules are cars. The obtained information is the direction and speed of the vehicles at each point in space and time, with no information about the roads. Our task is similar to inferring the networks of roads by observing these traffic patterns," explains Sridharan.

To identify these networks accurately, conventional algorithms closely match the predicted dMRI signal from the inferred connectome with the observed dMRI signal. Scientists had previously developed an algorithm called LiFE (Linear Fascicle Evaluation) to carry out this optimisation, but one of its challenges was that it worked on traditional Central Processing Units (CPUs), which made the computation time-consuming.
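At its core, this matching step is a non-negative least-squares (NNLS) problem, as the paper's abstract notes: find non-negative weights for the candidate fibres so that their combined predicted signal best explains the measured one, discarding fibres whose weight falls to zero. A minimal sketch using SciPy's off-the-shelf solver (the signal dictionary and observed signal below are randomly generated stand-ins, not real dMRI data):

```python
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(0)

# Columns of M: predicted dMRI signal contribution of each candidate fibre.
# (Illustrative random data; LiFE builds this matrix from tractography.)
n_measurements, n_fibres = 50, 20
M = rng.random((n_measurements, n_fibres))

# Simulate an observed signal generated by a sparse subset of fibres.
true_w = np.zeros(n_fibres)
true_w[[2, 7, 11]] = [1.0, 0.5, 2.0]
y = M @ true_w

# Solve: minimize ||M w - y||_2 subject to w >= 0.
w, residual = nnls(M, y)

print(residual)                 # near zero for this noiseless example
print(np.count_nonzero(w > 1e-6))  # only a few fibres survive pruning
```

The non-negativity constraint is what performs the pruning: fibres that do not help explain the signal receive zero weight and drop out of the connectome.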

In the new study, Sridharan's team tweaked their algorithm to cut down the computational effort involved in several ways, including removing redundant connections, thereby improving upon LiFE's performance significantly.

To speed up the algorithm further, the team also redesigned it to work on specialised electronic chips called Graphics Processing Units (GPUs), the kind found in high-end gaming computers, which helped them analyse data at speeds 100-150 times faster than previous approaches.
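What makes such solvers GPU-friendly is their iterative structure: each step reduces to matrix-vector products and elementwise operations, exactly the workloads GPUs parallelise well. The generic projected-gradient sketch below (in plain NumPy; an illustration of the general technique, not the paper's actual ReAl-LiFE kernels) shows that structure:

```python
import numpy as np

def nnls_projected_gradient(M, y, n_iter=5000):
    """Generic projected-gradient solver for: minimize ||Mw - y||^2, w >= 0.

    Every iteration is a matrix-vector product plus an elementwise
    maximum, both of which map naturally onto GPU hardware.
    (Illustrative sketch only, not the paper's implementation.)
    """
    # Step size from the spectral norm of M (Lipschitz constant of the gradient).
    lr = 1.0 / np.linalg.norm(M, 2) ** 2
    w = np.zeros(M.shape[1])
    for _ in range(n_iter):
        grad = M.T @ (M @ w - y)            # gradient of the squared error
        w = np.maximum(w - lr * grad, 0.0)  # project back onto w >= 0
    return w

# Illustrative random problem (not real dMRI data).
rng = np.random.default_rng(1)
M = rng.random((50, 20))
true_w = np.abs(rng.normal(size=20)) * (rng.random(20) < 0.3)
y = M @ true_w
w = nnls_projected_gradient(M, y)
print(np.linalg.norm(M @ w - y))  # residual shrinks toward zero
```

Swapping the NumPy arrays for a GPU array library with the same interface would run the identical loop on a GPU, which is the sense in which such an implementation generalises to other optimization problems.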

This improved algorithm, ReAl-LiFE, was also able to predict how a human test subject would behave or carry out a specific task. In other words, using the connection strengths estimated by the algorithm for each individual, the team was able to explain variations in behavioural and cognitive test scores across a group of 200 participants.

Such analysis can have medical applications too. "Data processing on large scales is becoming increasingly necessary for big-data neuroscience applications, especially for understanding healthy brain function and brain pathology," says Sreenivasan.

For example, using the obtained connectomes, the team hopes to be able to identify early signs of aging or deterioration of brain function before they manifest behaviourally in Alzheimer's patients.

"In another study, we found that a previous version of ReAl-LiFE could do better than other competing algorithms for distinguishing patients with Alzheimer's disease from healthy controls," says Sridharan.

He adds that their GPU-based implementation is very general and can be used to tackle optimization problems in many other fields as well.

Author: Office of Communications
Source: IISc
Contact: Office of Communications, IISc
Image: The image is credited to Varsha Sreenivasan and Devarajan Sridharan

Original Research: Open access. "GPU-accelerated connectome discovery at scale" by Devarajan Sridharan et al. Nature Computational Science

Abstract

GPU-accelerated connectome discovery at scale

Diffusion magnetic resonance imaging and tractography enable the estimation of anatomical connectivity in the human brain, in vivo. Yet, without ground-truth validation, different tractography algorithms can yield widely varying connectivity estimates. Although streamline pruning techniques mitigate this challenge, slow compute times preclude their use in big-data applications.

We present Regularized, Accelerated, Linear Fascicle Evaluation (ReAl-LiFE), a GPU-based implementation of a state-of-the-art streamline pruning algorithm (LiFE), which achieves >100× speedups over previous CPU-based implementations.

Leveraging these speedups, we overcome key limitations with LiFE's algorithm to generate sparser and more accurate connectomes. We showcase ReAl-LiFE's ability to estimate connections with superlative test-retest reliability, while outperforming competing approaches.

Moreover, we predicted inter-individual variations in multiple cognitive scores with ReAl-LiFE connectome features. We propose ReAl-LiFE as a timely tool, surpassing the state of the art, for accurate discovery of individualized brain connectomes at scale.

Finally, our GPU-accelerated implementation of a popular non-negative least-squares optimization algorithm is widely applicable to many real-world problems.

