The award provides for the creation of two heterogeneous HPC systems that will expand the range of research projects that scientists and engineers can tackle, including computational biology, combustion, materials science, and massive visual analytics.
The project brings together leading expertise and technology resources from Georgia Tech’s College of Computing, Oak Ridge National Laboratory (ORNL), University of Tennessee, National Institute for Computational Sciences, HP and NVIDIA.
NSF’s Track 2 program is an activity designed to fund the deployment and operation of several leading-edge computing systems operating at or near the petascale. An underlying goal is to advance U.S. computing capability in order to support computational scientists and engineers in the pursuit of scientific discovery. The award announced today is part of the fourth round of awards in the Track 2 program.
“Our goal is to develop and deploy a novel, next-generation system for the computational science community that demonstrates unprecedented performance on computational science and data-intensive applications, while also addressing the new challenges of energy-efficiency,” said Jeffrey Vetter, joint professor of computational science and engineering at Georgia Tech and Oak Ridge National Laboratory.
“The user community is very excited about this strategy,” Vetter continued. For example, James Phillips, senior research programmer at the University of Illinois who leads development of the widely used NAMD application, said, “Our experiences with graphics processors over the past two years have been very positive and we can’t wait to explore the new Fermi architecture; this new NSF resource will provide an ideal platform for our large biomolecular simulations.”
Georgia Tech’s Vetter will lead the five-year project as principal investigator. The project team is comprised of luminaries in the HPC field, including a Gordon Bell Prize winner and previous recipients of the NSF Track 2B award. Co-principal investigators on the project are Prof. Jack Dongarra (University of Tennessee and ORNL), Prof. Karsten Schwan (Georgia Tech), Prof. Richard Fujimoto (Georgia Tech), and Prof. Thomas Schulthess (Swiss National Supercomputing Centre and ORNL).
The platforms will be developed and deployed in two phases, with the initial system planned for deployment in early 2010. This system’s innovations in performance and power will be achieved through heterogeneous processing based on widely available NVIDIA graphics processing units (GPUs). As industry partners, HP and NVIDIA will provide the computational systems, platforms and processors needed to develop the system.
“Research institutions are looking for energy-efficient, high-performance computing architectures that can speed time to solution,” said Ed Turkel, manager of business development in the Scalable Computing and Infrastructure business unit at HP. “The combination of HP’s industry-standard HPC server technology with NVIDIA processors delivers increased performance and faster application development, accelerating higher education research projects.”
The initial system will pair hundreds of HP high-performance Intel processors with NVIDIA’s next-generation CUDA architecture, codenamed Fermi, designed specifically for high-performance computing. This project will be the first of the Track 2 awards to realize the vast potential of GPUs for HPC.
“Computational science is a key area driving the worldwide application of GPUs for high-performance computing,” said Bill Dally, chief scientist at NVIDIA. “GPUs working in concert with CPUs is the architecture of choice for future demanding applications.”
A critical component of the program is a focus on education, outreach and training to expand the knowledge and understanding of HPC among a broader audience. The Georgia Tech team will conduct workshops to attract and train new users for the system, engage historically underrepresented groups such as women and minorities, and educate future generations on the vast potential of high-performance computing as a career field.
More information on the project and its resources is available at http://keeneland.gatech.edu.
Stefany Wilson | Newswise Science News