But there’s a problem: The latest DNA sequencing instruments are burying researchers in trillions of bytes of data and overwhelming existing tools in biological computing. It doesn’t help that there’s a variety of sequencing instruments feeding a diverse set of applications.
Iowa State University’s Srinivas Aluru is leading a research team that’s developing a set of solutions using high performance computing. The researchers want to develop core techniques, parallel algorithms and software libraries to help researchers adapt parallel computing techniques to high-throughput DNA sequencing, the next generation of sequencing technologies.
Those technologies are now ubiquitous, “enabling single investigators with limited budgets to carry out what could only be accomplished by an international network of major sequencing centers just a decade ago,” said Aluru, the Ross Martin Mehl and Marylyne Munas Mehl Professor of Computer Engineering at Iowa State.
“Seven years ago we were able to sequence DNA one fragment at a time,” he said. “Now researchers can read up to 6 billion DNA sequences in one experiment.
“How do we address these big data issues?”
A three-year, $2 million grant from the BIGDATA program of the National Science Foundation and the National Institutes of Health will support the search for a solution by Aluru and researchers from Iowa State, Stanford University, Virginia Tech and the University of Michigan. In addition to Aluru, the project’s leaders at Iowa State are Patrick Schnable, Iowa State’s Baker Professor of Agronomy and director of the centers for Plant Genomics and Carbon Capturing Crops, and Jaroslaw Zola, a former research assistant professor in electrical and computer engineering who recently moved to Rutgers University.
The majority of the grant – $1.3 million – will support research at Iowa State. And Aluru is quick to say that none of the grant will support hardware development.
Researchers will start by identifying a large set of building blocks frequently used in genomic studies. They’ll develop the parallel algorithms and high performance implementations needed to do the necessary data analysis. And they’ll wrap all of those technologies in software libraries researchers can access for help. On top of all that, they’ll design a domain specific language that automatically generates computing codes for researchers.
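The article does not include any code, but one such frequently used building block is k-mer counting, a primitive that underlies genome assembly and error correction. As a purely illustrative sketch (the function names and the use of Python's standard multiprocessing pool are assumptions, not part of the project's actual libraries), a parallel version might look like this:

```python
from collections import Counter
from multiprocessing import Pool

def count_kmers(read, k=4):
    """Count all overlapping k-mers of length k in one DNA read."""
    return Counter(read[i:i + k] for i in range(len(read) - k + 1))

def merge_counts(counters):
    """Reduce per-read k-mer counts into one global table."""
    total = Counter()
    for c in counters:
        total.update(c)
    return total

if __name__ == "__main__":
    # Toy reads standing in for the billions a sequencer produces.
    reads = ["ACGTACGT", "TTACGTAA", "ACGTTTTT"]
    # Map step runs in parallel across worker processes;
    # the merge is a simple sequential reduction.
    with Pool(processes=2) as pool:
        per_read = pool.map(count_kmers, reads)
    table = merge_counts(per_read)
    print(table["ACGT"])  # 4: twice in the first read, once in each other
```

The map-then-reduce shape shown here is the generic pattern a library of such building blocks would hide behind a simple interface, so that a biologist calling it never has to manage worker processes directly.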
Aluru said that should be much more effective than asking high performance computing specialists to develop parallel approaches to each and every application.
“The goal is to empower the broader community to benefit from clever parallel algorithms, highly tuned implementations and specialized high performance computing hardware, without requiring expertise in any of these,” says a summary of the research project.
Aluru said the resulting software libraries will be fully open source. Researchers will be free to use, edit and modify the libraries as needed.
“We’re hoping this approach can be the most cost-effective and fastest way to gain adoption in the research community,” Aluru said. “We want to get everybody up to speed using high performance computing.”
Mike Krapfl | Newswise Science News