The technique improved the efficiency of algorithms used to build models of biological systems more than seven-fold, creating more realistic models that can account for uncertainty and biological variation. This could impact research areas ranging from drug development to the engineering of biofuels.
Computer models of biological systems have many uses, from predicting potential side-effects of new drugs to understanding the ability of plants to adjust to climate change. But developing models for living things is challenging because, unlike machines, biological systems can have a significant amount of uncertainty and variation.
“When developing a model of a biological system, you have to use techniques that account for that uncertainty, and those techniques require a lot of computational power,” says Dr. Cranos Williams, an assistant professor of electrical engineering at NC State and co-author of a paper describing the research. “That means using powerful computers. Those computers are expensive, and access to them can be limited.
“Our goal was to develop software that enables scientists to run biological models on conventional computers by utilizing their multi-core chips more efficiently.”
A computer chip does its work in one or more central processing units, or “cores.” Most personal computers now use chips that have between four and eight cores, but most programs run on only one core at a time. For a program to use all of the cores, it has to be broken down into separate “threads” – so that each core can execute a different part of the program simultaneously. Breaking a program down into threads is called parallelization, and it allows computers to run programs much more quickly.
In order to “parallelize” algorithms for building models of biological systems, Williams’ research team created a way for information to pass back and forth between the cores on a single chip. Specifically, Williams explains, “we used threads to create ‘locks’ that control access to shared data. This allows all of the cores on the chip to work together to solve a unified problem.”
The researchers tested the approach by running two bounded parameter estimation problems, one based on a Lotka-Volterra predator-prey model and one on an SEIR infectious disease model, first on a single core and then, using the new technique, on two, four and eight cores. For both models, the eight-core runs were at least 7.5 times faster than the single-core runs.
“This approach allows us to build complex models that better reflect the true characteristics of the biological process, and do it in a more computationally efficient way,” says Williams. “This is important. In order to understand biological systems, we will need to use increasingly complex models to address the uncertainty and variation inherent in those systems.”
Ultimately, Williams and his team hope to see if this approach can be scaled up for use on supercomputers, and whether it can be modified to take advantage of the many cores that are available on graphics processing units used in many machines.
The paper, “Parameter Estimation In Biological Systems Using Interval Methods With Parallel Processing,” was co-authored by NC State master’s student Skylar Marvel and NC State Ph.D. student Maria de Luis Balaguer. The paper was presented at the Workshop on Computational Systems Biology in Zurich, Switzerland, June 6-8.
NC State’s Department of Electrical and Computer Engineering is part of the university’s College of Engineering.
Note to Editors: The study abstract follows.
“Parameter Estimation In Biological Systems Using Interval Methods With Parallel Processing”
Authors: Skylar W. Marvel, Maria A. de Luis Balaguer, Cranos M. Williams, North Carolina State University
Presented: June 6-8 at the Workshop on Computational Systems Biology in Zurich, Switzerland
Abstract: The modeling of biological systems often involves the estimation of model parameters. Estimation methods have been developed to model these systems in a bounded-error context due to the uncertainty involved in biological processes. The application of bounded methods to nonlinear and higher-dimensional systems is computationally expensive, resulting in excessive simulation times. One possible solution to this problem is parallelizing the computations of bounded-error estimation approaches using multiple processor cores. In this paper, we developed a method for use on a single multi-core workstation using POSIX threads to process subsets of the parameter space while access to shared information was controlled by mutex-locked linked lists. This approach allows the parallelized algorithm to run on easily accessible multicore workstations and does not require utilization of large supercomputers or distributed computing. Initial results of this method using 8 threads on an 8-core machine show speedups of 7.59 and 7.86 when applied to bounded parameter estimation problems involving the nonlinear Lotka-Volterra predator-prey model and SEIR infectious disease model, respectively.
Matt Shipman | EurekAlert!