The technique improved the efficiency of algorithms used to build models of biological systems more than seven-fold, creating more realistic models that can account for uncertainty and biological variation. This could impact research areas ranging from drug development to the engineering of biofuels.
Computer models of biological systems have many uses, from predicting potential side-effects of new drugs to understanding the ability of plants to adjust to climate change. But developing models for living things is challenging because, unlike machines, biological systems can have a significant amount of uncertainty and variation.
“When developing a model of a biological system, you have to use techniques that account for that uncertainty, and those techniques require a lot of computational power,” says Dr. Cranos Williams, an assistant professor of electrical engineering at NC State and co-author of a paper describing the research. “That means using powerful computers. Those computers are expensive, and access to them can be limited.
“Our goal was to develop software that enables scientists to run biological models on conventional computers by utilizing their multi-core chips more efficiently.”
The processing units on a computer chip are called “cores.” Most personal computers now use chips that have between four and eight cores. However, most programs run on only one core at a time. For a program to utilize all of these cores, it has to be broken down into separate “threads” – so that each core can execute a different part of the program simultaneously. Breaking a program down into threads is called parallelization, and it allows computers to run programs much more quickly.
In order to “parallelize” algorithms for building models of biological systems, Williams’ research team created a way for information to pass back and forth between the cores on a single chip. Specifically, Williams explains, “we used threads to create ‘locks’ that control access to shared data. This allows all of the cores on the chip to work together to solve a unified problem.”
The researchers tested the approach by running three models on a single core, and then again using the new technique on two, four and eight cores. For all three models, the eight-core runs were at least 7.5 times faster than the single-core runs.
“This approach allows us to build complex models that better reflect the true characteristics of the biological process, and do it in a more computationally efficient way,” says Williams. “This is important. In order to understand biological systems, we will need to use increasingly complex models to address the uncertainty and variation inherent in those systems.”
Ultimately, Williams and his team hope to see if this approach can be scaled up for use on supercomputers, and whether it can be modified to take advantage of the many cores that are available on graphics processing units used in many machines.
The paper, “Parameter Estimation In Biological Systems Using Interval Methods With Parallel Processing,” was co-authored by NC State master’s student Skylar Marvel and NC State Ph.D. student Maria de Luis Balaguer. The paper was presented at the Workshop on Computational Systems Biology in Zurich, Switzerland, June 6-8.
NC State’s Department of Electrical and Computer Engineering is part of the university’s College of Engineering.
Note to Editors: The study abstract follows.
“Parameter Estimation In Biological Systems Using Interval Methods With Parallel Processing”
Authors: Skylar W. Marvel, Maria A. de Luis Balaguer, Cranos M. Williams, North Carolina State University
Presented: June 6-8 at the Workshop on Computational Systems Biology in Zurich, Switzerland
Abstract: The modeling of biological systems often involves the estimation of model parameters. Estimation methods have been developed to model these systems in a bounded-error context due to the uncertainty involved in biological processes. The application of bounded methods to nonlinear and higher-dimensional systems is computationally expensive, resulting in excessive simulation times. One possible solution to this problem is parallelizing the computations of bounded-error estimation approaches using multiple processor cores. In this paper, we developed a method for use on a single multi-core workstation using POSIX threads to process subsets of the parameter space, while access to shared information was controlled by mutex-locked linked lists. This approach allows the parallelized algorithm to run on easily accessible multicore workstations and does not require utilization of large supercomputers or distributed computing. Initial results of this method using 8 threads on an 8-core machine show speedups of 7.59 and 7.86 when applied to bounded parameter estimation problems involving the nonlinear Lotka-Volterra predator-prey model and SEIR infectious disease model, respectively.
Matt Shipman | EurekAlert!