Trestles is available to users of the TeraGrid, the nation’s largest open-access scientific discovery infrastructure. The system is among the five largest in the TeraGrid repertoire, with 10,368 processor cores, a peak speed of 100 teraflop/s, 20 terabytes of memory, and 38 terabytes of flash memory. One teraflop (TF) equals a trillion calculations per second, while one terabyte (TB) equals one trillion bytes of information.
“Trestles is appropriately named because it will serve as a bridge between SDSC’s unique, data-intensive resources and a wide community of users, both now and into the future,” said Michael Norman, SDSC’s director.
Configured by SDSC and Appro, Trestles is based on quad-socket, 8-core AMD Magny-Cours compute nodes connected via a QDR InfiniBand fabric. Each of its 324 nodes has 32 cores, 64 gigabytes (GB) of memory, and 120 GB of flash memory. Debuting at #111 on the latest Top500 list of supercomputers, Trestles will work with and span the deployments of SDSC’s recently introduced Dash system and a larger data-intensive system named Gordon, scheduled to become operational in late 2011.
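The per-node specifications are consistent with the system-wide totals quoted above; the article's 20 TB and 38 TB figures are simply rounded down. A quick sanity check (an illustrative Python sketch, using decimal terabytes):

```python
# Cross-check Trestles' published system totals from its per-node specs.
nodes = 324
cores_per_node = 32       # quad-socket, 8-core AMD Magny-Cours
mem_gb_per_node = 64
flash_gb_per_node = 120

total_cores = nodes * cores_per_node                  # 10,368 cores
total_mem_tb = nodes * mem_gb_per_node / 1000         # ~20.7 TB
total_flash_tb = nodes * flash_gb_per_node / 1000     # ~38.9 TB

print(total_cores, round(total_mem_tb, 1), round(total_flash_tb, 1))
```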
“UCSD and SDSC are pioneering the use of flash in high-performance computing,” said Allan Snavely, associate director of SDSC and a co-PI for the new system. “Flash disks read data as much as 100 times faster than spinning disk, write data faster, and are more energy-efficient and reliable.”
“Trestles, as well as Dash and Gordon, were designed with one goal in mind, and that is to enable as much productive science as possible as we enter a data-intensive era of computing,” said Richard Moore, SDSC’s deputy director and co-PI. “Today’s researchers are faced with sifting through tremendous amounts of digitally based data, and such data-intensive resources will give them the tools they need to do so.”
Moore added that Trestles offers modest-scale and gateway users rapid job turnaround to increase researcher productivity, while also being able to host long-running jobs. Speaking of speed, SDSC and Appro brought Trestles into production less than 10 weeks after initial hardware delivery. “We committed to getting the system in the hands of our users and meeting NSF’s production deadline,” noted Moore.
Early User Successes
Early users of SDSC’s Trestles include Bridget Carragher and Clint Potter, directors at the National Resource for Automated Molecular Microscopy at The Scripps Research Institute in La Jolla, Calif. Their project focuses on establishing a portal on the TeraGrid for structural biology researchers to facilitate electron microscopy (EM) image processing using the Appion pipeline, an integrated, database-driven system.
"We are very excited about this early opportunity to use the Trestles infrastructure for high performance structural biology projects,” said Carragher. “Based on our initial experience, we are optimistic that this system will have a dramatic impact on the scale of projects we can undertake, and on the resolution that can be achieved for macromolecular structure.”
Another early user is Ross Walker, an adjunct assistant professor of chemistry at UC San Diego and an assistant research professor with SDSC specializing in computational chemistry. “Typically, computational chemists need only a moderate number of cores, between 128 and 512, for longer periods of time,” he said. “This is exactly what Trestles was designed to offer.”
Walker’s group recently ran simulations of the adenovirus protease, a key enzyme in adenovirus replication and an interesting drug target for severe upper respiratory and stomach infections, which currently have no remedy other than aspirin or other anti-inflammatories.
Those calculations ran on 512 cores each, and the group was able to leave them running on Trestles almost unattended for two weeks. “Such 'hands-off' supercomputing greatly increases the productivity of my research team,” noted Walker.
To ensure that productivity on Trestles remains high, SDSC will adjust allocation policies, queuing structures, user documentation, and training based on a quarterly review of usage metrics and user satisfaction data. Trestles, along with SDSC’s Dash and Triton Resource clusters, uses a matrixed pool of expertise in system administration and user support, as well as the SDSC-developed Rocks cluster management software. SDSC’s Advanced User Support has already established key benchmarks to accelerate user applications, and will subsequently assist users in tuning and optimizing applications for Trestles. Full details of the new system can be found at http://www.sdsc.edu/us/resources/trestles/.
Walker’s team also recently ran a significant number of quantum geometry optimizations in support of a new force field it is developing for molecular dynamics, taking advantage of Trestles’ generous amount of memory and symmetric multiprocessing (SMP) cores, along with its streamlined scheduler policy. “We were able to get these runs completed in only a few days on Trestles.”
Trestles’ size, allocation range, and scheduling practices are also expected to benefit the emerging Science Gateway paradigm for high-performance computing system access. Science gateways are a relatively recent phenomenon in supercomputing. Currently led by Nancy Wilkins-Diehr of SDSC, the TeraGrid Gateway program began in 2004 as web portals designed and used by scientists. The program extends the analysis capabilities of these community-designed interfaces through the use of supercomputers, yet insulates users from supercomputing complexities.
During the final quarter of 2010, gateway users represented 42% of all researchers who ran jobs on the TeraGrid during that period, reflecting a steady growth in the number of users accessing high-end resources. Trestles’ policies are designed to meet the needs of that increasing user base.
NSF’s award to build and deploy Trestles was announced by SDSC last August, and Trestles will be available to TeraGrid users through 2013. In November 2009, SDSC announced a five-year, $20 million grant from the NSF to build and operate Gordon, the first high-performance supercomputer to employ a vast amount of flash memory. Dash, a smaller prototype of Gordon, was deployed in April 2010. All three systems are being integrated by Appro and share a design philosophy of combining commodity parts in innovative ways to achieve high-performance architectures.
Jan Zverina | Newswise Science News