Trestles is available to users of the TeraGrid, the nation’s largest open-access scientific discovery infrastructure. The system is among the five largest in the TeraGrid repertoire, with 10,368 processor cores, a peak speed of 100 teraflop/s, 20 terabytes of memory, and 38 terabytes of flash memory. One teraflop (TF) equals a trillion calculations per second, while one terabyte (TB) equals one trillion bytes of information.
“Trestles is appropriately named because it will serve as a bridge that makes SDSC’s unique, data-intensive resources available to a wide community of users, both now and into the future,” said Michael Norman, SDSC’s director.
Configured by SDSC and Appro, Trestles is based on quad-socket, 8-core AMD Magny-Cours compute nodes connected via a QDR InfiniBand fabric. Each of its 324 nodes has 32 cores, 64 gigabytes (GB) of memory, and 120 GB of flash memory. Debuting at #111 on the TOP500 list of supercomputers in the latest ranking, Trestles will work with and span the deployments of SDSC’s recently introduced Dash system and a larger data-intensive system named Gordon, to become operational in late 2011.
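The per-node figures quoted here are consistent with the system-wide totals given above; a quick arithmetic check (using only numbers from the article):

```python
# Sanity-check Trestles' aggregate specs from its per-node configuration
# (all figures taken from the article: 324 nodes, 32 cores per node,
# 64 GB of memory and 120 GB of flash per node).

NODES = 324
CORES_PER_NODE = 32        # quad-socket, 8-core AMD Magny-Cours
RAM_GB_PER_NODE = 64
FLASH_GB_PER_NODE = 120

total_cores = NODES * CORES_PER_NODE               # 10,368 cores
total_ram_tb = NODES * RAM_GB_PER_NODE / 1024      # ~20 TB of memory
total_flash_tb = NODES * FLASH_GB_PER_NODE / 1024  # ~38 TB of flash

print(total_cores, round(total_ram_tb), round(total_flash_tb))
# → 10368 20 38
```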
“UCSD and SDSC are pioneering the use of flash in high-performance computing,” said Allan Snavely, associate director of SDSC and a co-PI for the new system. “Flash disks read data as much as 100 times faster than spinning disk, write data faster, and are more energy-efficient and reliable.”
“Trestles, as well as Dash and Gordon, were designed with one goal in mind, and that is to enable as much productive science as possible as we enter a data-intensive era of computing,” said Richard Moore, SDSC’s deputy director and co-PI. “Today’s researchers are faced with sifting through tremendous amounts of digitally based data, and such data-intensive resources will give them the tools they need to do so.”
Moore added that Trestles offers modest-scale and gateway users rapid job turnaround to increase researcher productivity, while also being able to host long-running jobs. Speaking of speed, SDSC and Appro brought Trestles into production in less than 10 weeks from initial hardware delivery. “We committed to getting the system in the hands of our users and meeting NSF’s production deadline,” noted Moore.
Early User Successes
Early users of SDSC’s Trestles include Bridget Carragher and Clint Potter, directors at the National Resource for Automated Molecular Microscopy at The Scripps Research Institute in La Jolla, Calif. Their project focuses on establishing a portal on the TeraGrid for structural biology researchers to facilitate electron microscopy (EM) image processing using the Appion pipeline, an integrated, database-driven system.
"We are very excited about this early opportunity to use the Trestles infrastructure for high performance structural biology projects,” said Carragher. “Based on our initial experience, we are optimistic that this system will have a dramatic impact on the scale of projects we can undertake, and on the resolution that can be achieved for macromolecular structure.”
Another early user is Ross Walker, an adjunct assistant professor of chemistry at UC San Diego and an assistant research professor with SDSC specializing in computational chemistry. “Typically, computational chemists need only a moderate number of cores, between 128 and 512, for longer periods of time,” he said. “This is exactly what Trestles was designed to offer.”
Walker’s group recently ran simulations of the Adenovirus Protease, a key enzyme in Adenovirus replication and a promising drug target for severe upper respiratory and stomach infections, for which there is currently no remedy other than aspirin or another anti-inflammatory.
Those calculations ran on 512 cores each, and the group was able to leave them running on Trestles almost unattended for two weeks. “Such 'hands-off' supercomputing greatly increases the productivity of my research team,” noted Walker.
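A run like the one described above would typically be submitted through the system’s batch scheduler. The sketch below is a hypothetical PBS-style job script; the queue name, walltime limit, and executable name are illustrative assumptions, not Trestles’ actual configuration.

```shell
#!/bin/bash
# Hypothetical batch script for a 512-core molecular dynamics run
# (16 nodes x 32 cores per node = 512 cores), left to run unattended.
# Queue name, walltime, and binary name are assumptions for illustration.
#PBS -N adenovirus-md
#PBS -l nodes=16:ppn=32        # 16 nodes x 32 cores = 512 cores
#PBS -l walltime=336:00:00     # two weeks, if the queue permits it
#PBS -q normal

cd "$PBS_O_WORKDIR"
mpirun -np 512 ./md_simulation input.in > output.log
```

Long walltime limits of this kind are what make the “hands-off” mode of operation Walker describes possible: the job keeps running without the researcher resubmitting it every few hours.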
To ensure that productivity on Trestles remains high, SDSC will adjust allocation policies, queuing structures, user documentation, and training based on a quarterly review of usage metrics and user satisfaction data. Trestles, along with SDSC’s Dash and Triton Resource clusters, uses a matrixed pool of expertise in system administration and user support, as well as the SDSC-developed Rocks cluster management software. SDSC’s Advanced User Support has already established key benchmarks to accelerate user applications, and subsequently will assist users in tuning and optimizing applications for Trestles. Full details of the new system can be found at http://www.sdsc.edu/us/resources/trestles/.
Walker’s team also recently ran a significant number of quantum geometry optimizations in support of a new force field it is developing for molecular dynamics, taking advantage of Trestles’ generous amount of memory and symmetric multiprocessing (SMP) cores, along with its streamlined scheduler policy. “We were able to get these runs completed in only a few days on Trestles.”
Trestles’ size, allocation range, and scheduling practices are also expected to benefit the emerging Science Gateway paradigm for high-performance computing system access. Science gateways are a relatively recent phenomenon in supercomputing. Currently led by Nancy Wilkins-Diehr of SDSC, the TeraGrid Gateway program began in 2004 as web portals designed and used by scientists. The program extends the analysis capabilities of these community-designed interfaces through the use of supercomputers, yet insulates users from supercomputing complexities.
During the final quarter of 2010, gateway users represented 42% of all researchers who ran jobs on the TeraGrid during that period, reflecting a steady growth in the number of users accessing high-end resources. Trestles’ policies are designed to meet the needs of that increasing user base.
NSF’s award to build and deploy Trestles was announced last August by SDSC, and Trestles will be available to TeraGrid users through 2013. In November 2009, SDSC announced a five-year, $20 million grant from the NSF to build and operate Gordon, the first high-performance supercomputer to employ a vast amount of flash memory. Dash, a smaller prototype of Gordon, was deployed in April 2010. All these systems are being integrated by Appro and use a similar design philosophy of combining commodity parts in innovative ways to achieve high-performance architectures.
Jan Zverina | Newswise Science News