The technique improved the efficiency of algorithms used to build models of biological systems more than seven-fold, creating more realistic models that can account for uncertainty and biological variation. This could impact research areas ranging from drug development to the engineering of biofuels.
Computer models of biological systems have many uses, from predicting potential side-effects of new drugs to understanding the ability of plants to adjust to climate change. But developing models for living things is challenging because, unlike machines, biological systems can have a significant amount of uncertainty and variation.
“When developing a model of a biological system, you have to use techniques that account for that uncertainty, and those techniques require a lot of computational power,” says Dr. Cranos Williams, an assistant professor of electrical engineering at NC State and co-author of a paper describing the research. “That means using powerful computers. Those computers are expensive, and access to them can be limited.
“Our goal was to develop software that enables scientists to run biological models on conventional computers by utilizing their multi-core chips more efficiently.”
A computer chip does its processing in units called “cores.” Most personal computers now use chips that have between four and eight cores. However, most programs run on only one core at a time. For a program to use all of a chip’s cores, it has to be broken down into separate “threads” – so that each core can execute a different part of the program simultaneously. The process of breaking a program down into threads is called parallelization, and it allows programs to run much faster.
In order to “parallelize” algorithms for building models of biological systems, Williams’ research team created a way for information to pass back and forth between the cores on a single chip. Specifically, Williams explains, “we used threads to create ‘locks’ that control access to shared data. This allows all of the cores on the chip to work together to solve a unified problem.”
The researchers tested the approach by running three models on a single core, and then on two, four and eight cores using the new technique. In all three models, the eight-core runs were at least 7.5 times faster than the single-core runs.
“This approach allows us to build complex models that better reflect the true characteristics of the biological process, and do it in a more computationally efficient way,” says Williams. “This is important. In order to understand biological systems, we will need to use increasingly complex models to address the uncertainty and variation inherent in those systems.”
Ultimately, Williams and his team hope to determine whether this approach can be scaled up for use on supercomputers, and whether it can be modified to take advantage of the many cores available on the graphics processing units found in many machines.
The paper, “Parameter Estimation In Biological Systems Using Interval Methods With Parallel Processing,” was co-authored by NC State master’s student Skylar Marvel and NC State Ph.D. student Maria de Luis Balaguer. The paper was presented at the Workshop on Computational Systems Biology in Zurich, Switzerland, June 6-8.
NC State’s Department of Electrical and Computer Engineering is part of the university’s College of Engineering.
Note to Editors: The study abstract follows.
“Parameter Estimation In Biological Systems Using Interval Methods With Parallel Processing”
Authors: Skylar W. Marvel, Maria A. de Luis Balaguer, Cranos M. Williams, North Carolina State University
Presented: June 6-8 at the Workshop on Computational Systems Biology in Zurich, Switzerland
Abstract: The modeling of biological systems often involves the estimation of model parameters. Estimation methods have been developed to model these systems in a bounded-error context due to the uncertainty involved in biological processes. The application of bounded-error methods to nonlinear and higher-dimensional systems is computationally expensive, resulting in excessive simulation times. One possible solution to this problem is parallelizing the computations of bounded-error estimation approaches using multiple processor cores. In this paper, we developed a method for use on a single multi-core workstation using POSIX threads to process subsets of the parameter space while access to shared information was controlled by mutex-locked linked lists. This approach allows the parallelized algorithm to run on easily accessible multicore workstations and does not require utilization of large supercomputers or distributed computing. Initial results of this method using 8 threads on an 8-core machine show speedups of 7.59 and 7.86 when applied to bounded parameter estimation problems involving the nonlinear Lotka-Volterra predator-prey model and SEIR infectious disease model, respectively.
Matt Shipman | EurekAlert!