SDSC researchers submitted a Top500 entry for Gordon that achieved 218 teraflops (Tflop/s) using 12,608 cores – about 75 percent of the system. Built by SDSC researchers and Appro, a leading developer of supercomputing solutions, Gordon, the next-generation Appro Xtreme-X™ Supercomputer, is currently undergoing acceptance testing; when made available to the research community on January 1, 2012, it will have 16,384 cores and achieve over 280 Tflop/s.
The latest Top500 rankings were announced during the SC11 (Supercomputing 2011) conference in Seattle this week. In related acceptance runs, Gordon was tested for its ability to perform I/O (input/output) operations, and achieved an unprecedented 35 million IOPS (input/output operations per second), making it the most powerful supercomputer ever commissioned by the National Science Foundation (NSF) for I/O.
IOPS is an important measure for data intensive computing since it indicates the ability of a storage system to perform I/O operations on small transfer sizes of randomly organized data – something prevalent in database and data mining applications.
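The kind of workload IOPS captures can be sketched with a toy microbenchmark: time many small reads at random offsets in a file and divide operations by elapsed time. This is only an illustration of the metric's definition (a real benchmark such as those used in Gordon's acceptance runs would bypass the operating system's page cache and use much larger files); the function name and parameters here are invented for the example.

```python
import os
import random
import tempfile
import time

def estimate_iops(path, block_size=4096, num_ops=2000, file_size=8 * 1024 * 1024):
    """Estimate read IOPS by timing small reads at random offsets."""
    # Create a scratch file to read from.
    with open(path, "wb") as f:
        f.write(os.urandom(file_size))
    # Precompute random offsets so the timed loop does only I/O.
    offsets = [random.randrange(0, file_size - block_size) for _ in range(num_ops)]
    with open(path, "rb") as f:
        start = time.perf_counter()
        for off in offsets:
            f.seek(off)            # each small random-offset read
            f.read(block_size)     # counts as one I/O operation
        elapsed = time.perf_counter() - start
    return num_ops / elapsed       # operations per second

with tempfile.TemporaryDirectory() as d:
    iops = estimate_iops(os.path.join(d, "scratch.bin"))
    print(f"~{iops:,.0f} IOPS (cached reads; real benchmarks defeat the cache)")
```

Because small transfers at random offsets defeat sequential prefetching, spinning disks fare poorly on this metric while flash storage excels, which is why Gordon's 35 million IOPS figure matters for database and data-mining workloads.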
Gordon’s Top500 result is notable in that the Tflop/s ranking was achieved using about half the number of cores of most other systems. That’s because Gordon is among the first systems – and the first one commissioned by the NSF – to use the Intel® Xeon® processor E5 family, which performs twice as many operations per clock cycle (eight versus four) as any processor currently in use.
“These are truly remarkable results not only for us, but for countless researchers who are moving along with us into an era of data-intensive computing, where supercomputers using innovative new architectures will become a vital tool for them in accelerating new discoveries across a wide range of disciplines,” said Michael Norman, director of SDSC and a co-principal investigator of the Gordon project, the result of a five-year, $20 million NSF award.
“Taken together, these two results mean that Gordon will be the defacto leader in NSF systems in terms of ‘big data’ analysis computing,” said Allan Snavely, SDSC’s associate director and a co-PI for the Gordon system. “Gordon was designed from the outset to be a balanced system between compute and I/O to take on some of the most data-intensive challenges in the real world, not just to score well in terms of FLOPS.”
Gordon is the first high-performance supercomputer to use large amounts of flash-based SSD (solid state drive) memory – think of it as the world’s largest thumb drive. Flash memory is more common in smaller devices such as mobile phones and laptop computers, but unique for supercomputers, which generally use slower spinning disk technology. Gordon uses 1,024 of Intel’s latest high endurance SSD’s, providing the significantly higher performance and endurance required for data intensive applications.
“Gordon, the Appro Xtreme-X Supercomputer, rises to the challenge of the San Diego Supercomputer Center’s computational demands with delivery of a data-intensive computing solution that blends cutting-edge technologies for compute, network, and storage,” said Steve Lyness, vice president of Appro HPC Solutions Group. “The Appro Xtreme-X system, based on the future Intel Xeon processor E5 family and the new Intel SSD 710 Series, will maximize I/O capabilities and flash memory support to analyze large data sets used by SDSC to answer modern science’s critical problems.”
Gordon is capable of handling massive databases while providing up to 100 times faster speeds than hard disk drive systems for some queries. With almost 5 TB (terabytes) of high-performance flash memory in each I/O node, Gordon will provide a new level of performance and capacity for solving the most demanding data-intensive challenges, such as large graph problems, data mining operations, de novo genome assembly, database applications, and quantum chemistry. Gordon’s unique architecture includes:
‧ 1,024 dual-socket compute nodes, each with two 8-core Intel Xeon E5 family processors and 64 GB (gigabytes) of DDR3-1333 memory
‧ A high-performance parallel file system with over 4 PB (petabytes) of capacity and sustained rates of 100 GB/s (gigabytes per second)
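The headline figures above are mutually consistent, which a back-of-envelope check makes visible: 1,024 dual-socket nodes with 8-core processors give exactly the 16,384 cores cited earlier, and eight operations per clock at a plausible clock rate lands above the quoted 280 Tflop/s peak. The 2.6 GHz clock below is an assumption for illustration; the article does not state Gordon's clock rate.

```python
# Core count from the architecture list: nodes x sockets x cores per socket.
nodes = 1024
sockets_per_node = 2
cores_per_socket = 8
total_cores = nodes * sockets_per_node * cores_per_socket
print(total_cores)  # 16384, matching the core count cited for the full system

# Peak throughput: eight operations per clock (from the article),
# at an assumed 2.6 GHz clock rate (not stated in the article).
flops_per_clock = 8
clock_ghz = 2.6
peak_tflops = total_cores * flops_per_clock * clock_ghz / 1000
print(f"~{peak_tflops:.0f} Tflop/s")  # ~341, consistent with "over 280 Tflop/s"
```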
The Top500 list of the world’s most powerful supercomputers, now in its 19th year, has been joined by another supercomputer ranking called the Graph500. Industry experts say the new ranking complements the Top500: the Graph500 seeks to quantify how much work a supercomputer can do analyzing very large graph-based datasets with millions of disparate points, while the Top500 uses LINPACK software to determine sheer speed – how fast a supercomputer can perform linear algebra calculations.
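The Graph500 benchmark is built around breadth-first search (BFS), scored in traversed edges per second (TEPS) rather than floating-point operations. A minimal sketch of that kernel on a toy graph (the function and graph here are illustrative, not the benchmark's reference implementation):

```python
import time
from collections import defaultdict, deque

def bfs_teps(edges, source):
    """Breadth-first search over an undirected graph, reporting
    vertices reached and traversed edges per second (TEPS)."""
    # Build an adjacency list; each undirected edge appears in both directions.
    adj = defaultdict(list)
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)
    visited = {source}
    queue = deque([source])
    traversed = 0
    start = time.perf_counter()
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            traversed += 1              # every scanned edge counts toward TEPS
            if v not in visited:
                visited.add(v)
                queue.append(v)
    elapsed = time.perf_counter() - start
    return len(visited), traversed / max(elapsed, 1e-9)

# Toy graph: a path 0-1-2-3 plus a shortcut edge 0-3.
reached, teps = bfs_teps([(0, 1), (1, 2), (2, 3), (0, 3)], source=0)
print(reached)  # 4 vertices reached
```

BFS is dominated by small, irregular, pointer-chasing memory accesses rather than dense arithmetic, which is why a machine built for IOPS and memory performance, like Gordon, can rank well on Graph500 even with a modest LINPACK score.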
Trestles, another new but smaller supercomputer using flash-based memory and launched earlier this year by SDSC, ranked XX on the latest Graph500 list, also announced this week at SC11. The ranking was based on runs using less than half of Trestles’ overall compute capability.
With 10,368 processor cores, 324 nodes, and a peak speed of 100 teraflops (Tflop/s), Trestles has already been used for more than 200 separate research projects since its launch last January, with research areas ranging from astrophysics to molecular dynamics. Gordon and Trestles are available to users of the NSF’s new XSEDE (Extreme Science and Engineering Discovery Environment) program, which integrates 16 supercomputers and high-end visualization and data analysis resources across the country to provide the most comprehensive collection of advanced digital services in the world. XSEDE replaced the NSF’s TeraGrid program after about 10 years.
Jan Zverina | Newswise Science News