Today supercomputers are an indispensable tool in almost all fields of research. However, present concepts cannot be extended indefinitely without causing an unreasonable increase in effort and costs. For this reason, scientists plan to develop a new platform for next-generation supercomputers as part of the EU DEEP project (Dynamical ExaScale Entry Platform), with applications for brain research, climatology and seismology, to name but a few.
Even today, scientists need enormous computing capacity to model biological organs and to develop ever more multifaceted models of the climate, the universe and the complex building blocks of matter.
To ensure that European research continues to have access to the necessary resources for high-performance computing (HPC) in future, Forschungszentrum Jülich is planning to enter the exaflop/s age by 2020 with the DEEP project, together with Intel, ParTec and 12 other European partners from 8 countries. An exaflop/s computer of this type, performing a quintillion (10^18) calculations per second, would be a thousand times faster than today’s supercomputers. The scientists expect a first prototype as early as 2014/2015 with a capacity of 100 petaflop/s, around one hundred times faster than today’s petaflop/s computers, such as Jülich’s petaflop computer JUGENE.
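The performance ratios quoted above follow directly from the round figures in the text; a minimal arithmetic sketch (the flop/s values are the press release’s round numbers, not measured benchmarks):

```python
# Rough scale comparison of the performance classes named in the text.
# Figures are the round numbers from the press release, not benchmarks.
petaflops = 1e15              # 1 petaflop/s: the class of Jülich's JUGENE
prototype = 100 * petaflops   # planned 2014/2015 prototype: 100 petaflop/s
exaflops = 1e18               # 1 exaflop/s: a quintillion calculations per second

print(exaflops / petaflops)   # exascale vs. a petaflop/s machine -> 1000.0
print(prototype / petaflops)  # prototype vs. a petaflop/s machine -> 100.0
```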
With the exaflop/s class, scientists will be able to tackle challenges which still seem unrealistic today, such as detailed simulation of the human brain. However, increases in performance on this scale can only be achieved by parallel computing employing millions of processors. Using today’s technology, this would mean that energy costs would become prohibitive. In order to pave the way for a viable exascale computer, researchers in the DEEP project, funded with € 8 million by the European Commission, will be optimizing the networking of different hardware components and integrating new energy-saving cooling systems.
Scientists at Jülich have designed a new type of “cluster-booster architecture” for DEEP. One important element is a processor that is still under development and is specially designed for parallel computing: the Intel® Many Integrated Core (MIC) architecture, with more than 50 cores on a single chip. Each of these 512 MIC processors will be linked into the booster that accelerates the entire system, via a high-speed network called Extoll developed at the University of Heidelberg. “Working closely with Intel helps us to accelerate the development of cluster architectures for the exascale and to address the hardware and software challenges of building, programming and operating such systems”, explains Prof. Thomas Lippert, head of the Jülich Supercomputing Centre.
The new approach takes into account the fact that large-scale, future simulations will consist of multiple and very diverse tasks with complicated communication patterns between the processors. The underlying idea: the complex components of a program are executed on the “core” of the parallel computer, a cluster with Intel Xeon server processors. In contrast, simple, highly parallel program components that do not rely on such CPUs will be offloaded to the booster modules which, thanks to their large number of more simply structured computer cores, are able to perform the calculations for tasks of this kind with far greater energy efficiency.
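The division of labour described above can be sketched in a few lines. This is a hypothetical illustration only: the task names and the `run_on_cluster`/`run_on_booster` helpers are invented for this sketch and are not part of the DEEP software.

```python
# Conceptual sketch of the cluster-booster idea: complex, communication-
# heavy program parts stay on the Xeon-based cluster, while simple,
# highly parallel kernels are offloaded to the many-core booster.
# All names here are invented for illustration; they are not DEEP APIs.

def run_on_cluster(task):
    # Stands in for execution on the cluster's server processors.
    return f"cluster:{task}"

def run_on_booster(task):
    # Stands in for offloading over the high-speed network to the
    # booster's many simply structured cores.
    return f"booster:{task}"

def schedule(tasks):
    # Route each task by a simple flag; the real system makes this
    # decision at the level of whole program components.
    return [
        run_on_booster(name) if highly_parallel else run_on_cluster(name)
        for name, highly_parallel in tasks
    ]

tasks = [("adaptive_mesh_refinement", False),  # irregular -> cluster
         ("dense_matrix_kernel", True)]        # regular   -> booster

print(schedule(tasks))
# -> ['cluster:adaptive_mesh_refinement', 'booster:dense_matrix_kernel']
```

The energy argument from the text maps onto this split: the booster cores are individually weaker but far more numerous, so the regular, highly parallel work costs much less energy per calculation there than on the cluster CPUs.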
“The close collaboration between Intel, Europe's largest scientific computer centre in Jülich and the leading cluster software vendor ParTec presents a unique opportunity to accelerate the evolution of cluster HPC platforms. Work on the novel DEEP architecture will be a key component in the understanding and development of future exascale systems, middleware and applications”, explains Stephen Pawlowski, Intel Senior Fellow and General Manager, Datacenter and Connected Systems Pathfinding.
Hugo R. Falter, Chief Operating Officer at ParTec, reports: “I am glad that the ParaStation Cluster Operating System can contribute to the success of this visionary project.” Based on an expanded version of this cluster operating system, an entire software environment for the new hardware architecture will be created with DEEP. As part of the project, in addition to tools for application developers, application software for brain research, climatology, seismology, high-temperature superconductivity and computational fluid engineering will also be transferred to the platform.
Forschungszentrum Jülich, Intel and ParTec have collaborated closely since 2010 in the ExaCluster Laboratory at Jülich on developing novel system architectures and software tools for cluster computers. The main focus is on the scalability of hardware and software up to the exascale class and on ensuring the reliability of these systems. The DEEP project was initiated under the auspices of the ExaCluster Laboratory.

Further information:
http://www.sc11.supercomputing.org/
Annette Stettien | Forschungszentrum Jülich