SDSC’s ‘Gordon’ Among World’s 50 Fastest Supercomputers

SDSC researchers submitted an entry for Gordon using 218 teraflops (Tflop/s) and 12,608 cores – about 75 percent of the system. Built by SDSC researchers and Appro, a leading developer of supercomputing solutions, Gordon, the next-generation Appro Xtreme-X™ supercomputer, is currently undergoing acceptance testing. When made available to the research community on January 1, 2012, it will have 16,384 cores and achieve over 280 Tflop/s.

The latest Top500 rankings were announced during the SC11 (Supercomputing 2011) conference in Seattle this week. In related acceptance runs, Gordon was tested on its ability to perform I/O (input/output) operations and achieved an unprecedented 35 million IOPS (input/output operations per second), making it the most powerful supercomputer ever commissioned by the National Science Foundation (NSF) for I/O.

IOPS is an important measure for data-intensive computing because it indicates the ability of a storage system to perform I/O operations on small transfer sizes of randomly organized data – a pattern prevalent in database and data mining applications.
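To see why IOPS, rather than raw bandwidth, is the relevant measure for small random transfers, a back-of-envelope sketch can relate an IOPS figure to effective throughput. The 4 KiB transfer size and the ~200-IOPS figure for a single spinning disk are illustrative assumptions, not published benchmark parameters:

```python
# Hedged sketch: relate an IOPS rate to effective throughput at a
# given transfer size. Transfer size (4 KiB) and the single-disk
# figure (~200 random IOPS) are assumptions for illustration.

def throughput_gb_per_s(iops: float, transfer_bytes: int) -> float:
    """Effective bandwidth implied by an IOPS rate at one transfer size."""
    return iops * transfer_bytes / 1e9

# Gordon's reported 35 million IOPS at an assumed 4 KiB random transfer:
gordon = throughput_gb_per_s(35e6, 4096)

# A single mechanical disk, limited by seek time, manages ~200 random IOPS:
single_disk = throughput_gb_per_s(200, 4096)

print(f"Gordon:      {gordon:.2f} GB/s of 4 KiB random I/O")
print(f"Single disk: {single_disk * 1e6:.1f} KB/s of 4 KiB random I/O")
```

The gap of several orders of magnitude is why flash-based storage, not spinning disk, is the enabling technology for random-access workloads.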

Gordon’s Top500 result is notable in that the Tflop/s ranking was achieved using about half the number of cores of most comparable systems. That’s because Gordon is among the first systems – and the first one commissioned by the NSF – to use processors from the Intel® Xeon® E5 family, which perform twice as many floating-point operations per clock cycle (eight versus four) as most processors currently in use.
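The arithmetic behind that claim is straightforward: theoretical peak is cores × operations per clock × clock rate, so doubling the operations per clock doubles peak for the same core count. The 2.6 GHz clock below is an assumed figure for illustration, not a published Gordon specification:

```python
# Hedged sketch: peak = cores x flops/clock x clock rate.
# The 2.6 GHz clock is an assumption for illustration only.

def peak_tflops(cores: int, flops_per_clock: int, clock_ghz: float) -> float:
    """Theoretical peak performance in Tflop/s."""
    return cores * flops_per_clock * clock_ghz * 1e9 / 1e12

old_gen = peak_tflops(12608, 4, 2.6)   # 4 flops/clock per core (prior Xeons)
new_gen = peak_tflops(12608, 8, 2.6)   # 8 flops/clock per core (Xeon E5)

print(f"4 flops/clock: {old_gen:.0f} Tflop/s")
print(f"8 flops/clock: {new_gen:.0f} Tflop/s")  # double, same core count
```

This is why Gordon could reach its ranking with roughly half the cores of systems built on the previous processor generation.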

“These are truly remarkable results not only for us, but for countless researchers who are moving along with us into an era of data-intensive computing, where supercomputers using innovative new architectures will become a vital tool for them in accelerating new discoveries across a wide range of disciplines,” said Michael Norman, director of SDSC and a co-principal investigator of the Gordon project, the result of a five-year, $20 million NSF award.

“Taken together, these two results mean that Gordon will be the de facto leader in NSF systems in terms of ‘big data’ analysis computing,” said Allan Snavely, SDSC’s associate director and a co-PI for the Gordon system. “Gordon was designed from the outset to be a balanced system between compute and I/O to take on some of the most data-intensive challenges in the real world, not just to score well in terms of FLOPS.”

Gordon is the first high-performance supercomputer to use large amounts of flash-based SSD (solid state drive) memory – think of it as the world’s largest thumb drive. Flash memory is common in smaller devices such as mobile phones and laptop computers, but rare in supercomputers, which generally use slower spinning-disk technology. Gordon uses 1,024 of Intel’s latest high-endurance SSDs, providing the significantly higher performance and endurance required for data-intensive applications.

“Gordon, the Appro Xtreme-X Supercomputer, rises to the challenge of the San Diego Supercomputer Center’s computational demands with delivery of a data-intensive computing solution that blends cutting-edge technologies for compute, network, and storage,” said Steve Lyness, vice president of Appro HPC Solutions Group. “The Appro Xtreme-X system, based on the future Intel Xeon processor E5 family and the new Intel SSD 710 Series, will maximize I/O capabilities and flash memory support to analyze the large data sets SDSC uses to answer modern science’s critical problems.”

Gordon is capable of handling massive databases while providing up to 100 times faster speeds than hard disk drive systems for some queries. With almost 5 TB (terabytes) of high-performance flash memory in each I/O node, Gordon will provide a new level of performance and capacity for solving the most demanding data-intensive problems, such as large graph problems, data mining operations, de novo genome assembly, database applications, and quantum chemistry. Gordon’s unique architecture includes:

‧ 1,024 dual-socket compute nodes, each with two 8-core Intel Xeon E5 family processors and 64 GB (gigabytes) of DDR3-1333 memory
‧ Over 300 trillion bytes of high-performance flash memory in Intel SSD 710 Series solid state drives, served via 64 dual-socket Intel Xeon processor 5600 Series I/O nodes
‧ Large memory supernodes capable of presenting more than 2 TB of cache coherent memory using ScaleMP’s vSMP Foundation software
‧ A 3D torus interconnect coupled with a dual-rail QDR network, providing a cost-effective, power-efficient, and fault-tolerant fabric
‧ A high-performance parallel file system with over 4 PB (petabytes) of capacity and sustained rates of 100 GB/s (gigabytes per second)
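The headline figures quoted earlier can be sanity-checked against the bullet specs above. The per-I/O-node flash size below is inferred from the totals, not quoted directly in the article:

```python
# Sanity-check sketch deriving aggregate figures from the listed specs.
# Per-I/O-node flash capacity is inferred, not an official number.

compute_nodes = 1024
cores_per_node = 2 * 8                 # dual-socket, 8-core Xeon E5
total_cores = compute_nodes * cores_per_node
print(total_cores)                     # matches the article's 16,384 cores

total_ram_tb = compute_nodes * 64 / 1024   # 64 GB DDR3 per node
print(total_ram_tb)                    # 64 TB of aggregate DRAM

io_nodes = 64
flash_total_tb = 300                   # "over 300 trillion bytes" of SSD
flash_per_io_node_tb = flash_total_tb / io_nodes
print(round(flash_per_io_node_tb, 2))  # ~4.69 TB, matching "almost 5 TB" per I/O node
```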

The Top500 list of the world’s most powerful supercomputers, now in its 19th year, has been joined by another ranking called the Graph500. The new ranking complements the Top500: while the Top500 uses LINPACK software to determine sheer speed – how fast a supercomputer can perform linear algebraic calculations – the Graph500 seeks to quantify how much work a supercomputer can do based on its ability to analyze very large graph-based datasets containing millions of disparate points.
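The Graph500’s core kernel is breadth-first search (BFS) over an enormous synthetic graph. The toy BFS below (with a made-up five-vertex graph) illustrates why such workloads stress memory and I/O rather than floating-point units: every edge probe is a small, effectively random lookup:

```python
# Illustrative sketch only: a minimal BFS of the kind the Graph500
# benchmarks at massive scale. The graph here is a toy example.

from collections import deque

def bfs_levels(adj: dict[int, list[int]], root: int) -> dict[int, int]:
    """Return the BFS level (hop distance from root) of each reachable vertex."""
    level = {root: 0}
    queue = deque([root])
    while queue:
        v = queue.popleft()
        for w in adj.get(v, []):
            if w not in level:     # each edge probe is a small random access
                level[w] = level[v] + 1
                queue.append(w)
    return level

graph = {0: [1, 2], 1: [3], 2: [3], 3: [4]}
print(bfs_levels(graph, 0))        # {0: 0, 1: 1, 2: 1, 3: 2, 4: 3}
```

Note that the loop performs no floating-point arithmetic at all, which is exactly why LINPACK rank and Graph500 rank can diverge for the same machine.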

Trestles, another new but smaller supercomputer using flash-based memory, launched earlier this year by SDSC, ranked XX on the latest Graph500 list, also announced this week at SC11. The ranking was based on runs using less than half of Trestles’ overall compute capability.

With 10,368 processor cores, 324 nodes, and a peak speed of 100 teraflops (Tflop/s), Trestles has already been used for more than 200 separate research projects since its launch last January, with research areas ranging from astrophysics to molecular dynamics. Gordon and Trestles are available to users of the NSF’s new XSEDE (Extreme Science and Engineering Discovery Environment) program, which integrates 16 supercomputers and high-end visualization and data analysis resources across the country to provide the most comprehensive collection of advanced digital services in the world. The new program replaced the NSF’s TeraGrid program after about 10 years.

Media Contact

Jan Zverina, Newswise Science News

More Information:

http://www.sdsc.edu
