Forum for Science, Industry and Business



More Chip Cores Can Mean Slower Supercomputing, Simulation Shows

THE MULTICORE DILEMMA: more cores on a single chip don't necessarily mean faster computing, a Sandia simulation has determined.

The worldwide attempt to increase the speed of supercomputers merely by increasing the number of processor cores on individual chips unexpectedly worsens performance for many complex applications, Sandia simulations have found.

A Sandia team simulated key algorithms for deriving knowledge from large data sets. The simulations show a significant speedup going from two to four cores, but an insignificant gain from four to eight. Beyond eight cores, speed actually decreases: sixteen cores perform barely better than two, and performance declines steeply as more cores are added.
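The shape of that curve can be reproduced with a toy model (not Sandia's actual simulator): per-core compute time shrinks as 1/n, while time lost to memory-bus contention grows with the number of cores. The contention coefficient c below is an assumed illustrative value, chosen only so the curve peaks near eight cores.

```python
def speedup(n_cores, c=0.03):
    """Speedup over one core when bus contention costs c * n per unit of work.

    c is an assumed illustrative constant, not a measured value.
    """
    t1 = 1.0 + c                       # one core: the work plus its own bus traffic
    tn = 1.0 / n_cores + c * n_cores   # n cores: divided work plus growing contention
    return t1 / tn

for n in (2, 4, 8, 16, 32):
    print(f"{n:2d} cores -> speedup {speedup(n):.2f}")
```

With this value of c, four cores beat two clearly, eight barely beat four, and sixteen land back near two, matching the qualitative pattern the article describes.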

The problem is a lack of memory bandwidth, together with contention among processors over the shared memory bus. (The memory bus is the set of wires that carries memory addresses and data to and from the system RAM.)
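The bandwidth limit itself is easy to illustrate: a fixed bus bandwidth shared by all cores caps the data each core can actually receive. The numbers below are assumptions for illustration, not Sandia's measurements.

```python
BUS_BANDWIDTH_GBS = 25.0   # total bandwidth of the shared memory bus (assumed)
DEMAND_PER_CORE_GBS = 6.0  # bandwidth one core wants when running flat out (assumed)

def per_core_bandwidth(n_cores):
    """Bandwidth each core actually receives once the shared bus saturates."""
    total_demand = n_cores * DEMAND_PER_CORE_GBS
    delivered = min(total_demand, BUS_BANDWIDTH_GBS)
    return delivered / n_cores

for n in (1, 2, 4, 8, 16):
    print(f"{n:2d} cores: {per_core_bandwidth(n):4.2f} GB/s each")
```

Up to the saturation point every core gets all the bandwidth it asks for; past it, doubling the core count simply halves each core's share.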

A supermarket analogy

To use a supermarket analogy: if two clerks at the same checkout counter process your groceries instead of one, checkout should go faster. So should four clerks, or eight, or sixteen, and so on.

The problem is that if each clerk doesn't have ready access to the groceries, adding clerks doesn't necessarily speed things up. Worse, the clerks may get in each other's way.

Similarly, it seems a no-brainer that if one core is fast, two would be faster, four still faster, and so on.

But the lack of immediate access to individualized memory caches — the “food” of each processor — slows the process down instead of speeding it up once the number of cores exceeds eight, according to a simulation of high-performance computers by Sandia’s Richard Murphy, Arun Rodrigues and former student Megan Vance.

“To some extent, it is pointing out the obvious — many of our applications have been memory-bandwidth-limited even on a single core,” says Rodrigues. “However, it is not an issue to which industry has a known solution, and the problem is often ignored.”

“The difficulty is contention among modules,” says James Peery, director of Sandia’s Computations, Computers, Information and Mathematics Center. “The cores are all asking for memory through the same pipe. It’s like having one, two, four, or eight people all talking to you at the same time, saying, ‘I want this information.’ Then they have to wait until the answer to their request comes back. This causes delays.”

“The original AMD processors in Red Storm were chosen because they had better memory performance than other processors, including other Opteron processors,” says Ron Brightwell. “One of the main reasons that AMD processors are popular in high-performance computing is that they have an integrated memory controller that, until very recently, Intel processors didn’t have.”

Multicore technologies are considered a possible savior of Moore’s Law, the prediction that the number of transistors that can be placed inexpensively on an integrated circuit will double approximately every two years.
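The prediction quoted here is simple exponential growth: transistor counts double roughly every two years. A minimal sketch, with an arbitrary illustrative starting count and year:

```python
def projected_transistors(year, base_year=2008, base_count=800_000_000):
    """Project a transistor count by doubling every two years.

    base_year and base_count are illustrative assumptions, not a real chip.
    """
    doublings = (year - base_year) / 2
    return base_count * 2 ** doublings

# One doubling period later, the projected count is exactly twice the base.
assert projected_transistors(2010) == 2 * projected_transistors(2008)
```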

“Multicore gives chip manufacturers something to do with the extra transistors successfully predicted by Moore’s Law,” Rodrigues says. “The bottleneck now is getting the data off the chip to or from memory or the network.”

A more natural goal for researchers would be to increase the clock speed of single cores, since the vast majority of applications, such as word processors and music and video software, are written for single-core performance. But power consumption, increased heat, and basic laws of physics involving parasitic currents meant that designers had reached their limit in improving chip speed for common silicon processes.

“The [chip design] community didn’t go with multicores because they were without flaw,” says Mike Heroux. “The community couldn’t see a better approach. It was desperate. Presently we are seeing memory system designs that provide a dramatic improvement over what was available 12 months ago, but the fundamental problem still exists.”

In the early days of supercomputing, Seymour Cray produced a superchip that processed information faster than any other chip. Then a movement — led in part by Sandia — proved that ordinary chips, programmed to work different parts of a problem at the same time, could solve complex problems faster than the most powerful superchip. Sandia’s Paragon supercomputer, in fact, was the world’s first parallel processing supercomputer.

Today, Sandia has a large investment in message-passing programs. Its Institute for Advanced Architectures, operated jointly with Oak Ridge National Laboratory (ORNL) and intended to prepare the way for exaflop computing, may help solve the multicore dilemma.

ORNL’s Jaguar supercomputer, currently the world’s fastest for scientific computing, is a Cray XT model based on technology developed by Sandia and Cray for Sandia’s Red Storm supercomputer. Red Storm’s original and unique design is the most copied of all supercomputer architectures.

The current work was funded by Sandia’s Laboratory-Directed Research and Development office.

Sandia is a multiprogram laboratory operated by Sandia Corporation, a Lockheed Martin company, for the U.S. Department of Energy’s National Nuclear Security Administration. With main facilities in Albuquerque, N.M., and Livermore, Calif., Sandia has major R&D responsibilities in national security, energy and environmental technologies, and economic competitiveness.

Neal Singer | Newswise Science News
