More Chip Cores Can Mean Slower Supercomputing, Simulation Shows

16.01.2009
THE MULTICORE DILEMMA: more cores on a single chip don't necessarily mean faster clock speeds, a Sandia simulation has determined.

The worldwide attempt to increase the speed of supercomputers merely by increasing the number of processor cores on individual chips unexpectedly worsens performance for many complex applications, Sandia simulations have found.

A Sandia team simulated key algorithms for deriving knowledge from large data sets. The simulations show a significant increase in speed going from two to four cores, but only an insignificant gain from four to eight. Beyond eight cores the speed actually decreases: sixteen cores perform barely as well as two, and performance declines steeply as still more cores are added.

The problem is a lack of memory bandwidth, combined with contention among the processors for the memory bus available to each. (The memory bus is the set of wires that carries memory addresses and data between the cores and the system RAM.)
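To make the bottleneck concrete, here is a minimal sketch, not from the Sandia study, of the kind of memory-bound loop such applications run: a STREAM-style triad in C with OpenMP (the array size and kernel are illustrative). Run it with increasing OMP_NUM_THREADS, and on typical hardware the measured bandwidth stops improving once the memory bus saturates, no matter how many cores are added.

    /* triad.c -- compile with: cc -O2 -fopenmp triad.c
       Each element costs two reads and one write, so the loop is paced
       by the memory bus, not by arithmetic. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <omp.h>

    #define N (1 << 24)  /* ~16M doubles per array, far larger than cache */

    int main(void) {
        double *a = malloc(N * sizeof *a);
        double *b = malloc(N * sizeof *b);
        double *c = malloc(N * sizeof *c);
        for (long i = 0; i < N; i++) { b[i] = 1.0; c[i] = 2.0; }

        double t0 = omp_get_wtime();
        #pragma omp parallel for
        for (long i = 0; i < N; i++)
            a[i] = b[i] + 3.0 * c[i];
        double t = omp_get_wtime() - t0;

        /* three arrays of N doubles cross the memory bus */
        printf("%d threads: %.2f GB/s\n", omp_get_max_threads(),
               3.0 * N * sizeof(double) / 1e9 / t);
        free(a); free(b); free(c);
        return 0;
    }

The GB/s figure typically climbs for the first few threads and then flattens: the saturation the researchers describe, before contention makes matters worse.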

A supermarket analogy

To use a supermarket analogy, if two clerks at the same checkout counter are ringing up your groceries instead of one, the checkout should go faster. Or, you could be served by four clerks.

Or eight clerks. Or sixteen. And so on.

The problem is that if each clerk doesn't have ready access to the groceries, he or she doesn't necessarily help the process. Worse, the clerks may get in each other's way.

Similarly, it seems a no-brainer that if one core is fast, two would be faster, four still faster, and so on.

But the lack of immediate access to individualized memory caches — the “food” of each processor — slows the process down instead of speeding it up once the number of cores exceeds eight, according to a simulation of high-performance computers by Sandia’s Richard Murphy, Arun Rodrigues and former student Megan Vance.

“To some extent, it is pointing out the obvious — many of our applications have been memory-bandwidth-limited even on a single core,” says Rodrigues. “However, it is not an issue to which industry has a known solution, and the problem is often ignored.”

“The difficulty is contention among modules,” says James Peery, director of Sandia’s Computations, Computers, Information and Mathematics Center. “The cores are all asking for memory through the same pipe. It’s like having one, two, four, or eight people all talking to you at the same time, saying, ‘I want this information.’ Then they have to wait until the answer to their request comes back. This causes delays.”
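Peery's "same pipe" description can be captured in a toy model; the parameters below are invented for illustration, not Sandia's data. Divide the work among the cores, but charge each additional core a contention delay on the shared bus:

    /* contention.c -- a deliberately simple model, not a simulation:
       runtime = (work divided among cores) + (bus contention that grows
       linearly with the number of cores sharing the memory bus). */
    #include <stdio.h>

    int main(void) {
        const double work = 100.0;  /* serial compute time, arbitrary units */
        const double bus  = 3.125;  /* assumed contention cost per core     */
        for (int cores = 1; cores <= 64; cores *= 2) {
            double t = work / cores + bus * cores;
            printf("%2d cores: speedup %.2f\n", cores, (work + bus) / t);
        }
        return 0;
    }

With these made-up numbers the modeled curve matches the shape reported above: a clear gain from two to four cores, a plateau near eight, sixteen cores performing no better than two, and a steep decline after that, because past the peak the contention term dominates.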

“The original AMD processors in Red Storm were chosen because they had better memory performance than other processors, including other Opteron processors,” says Ron Brightwell. “One of the main reasons that AMD processors are popular in high-performance computing is that they have an integrated memory controller that, until very recently, Intel processors didn’t have.”

Multicore technologies are considered a possible savior of Moore’s Law, the prediction that the number of transistors that can be placed inexpensively on an integrated circuit will double approximately every two years.

“Multicore gives chip manufacturers something to do with the extra transistors successfully predicted by Moore’s Law,” Rodrigues says. “The bottleneck now is getting the data off the chip to or from memory or the network.”

A more natural goal for researchers would be to increase the clock speed of single cores, since the vast majority of software, from word processors to music and video applications, is written for single-core performance. But power consumption, heat, and basic physics problems such as parasitic currents mean that designers have reached the practical limits of clock speed for common silicon processes.

“The [chip design] community didn’t go with multicores because they were without flaw,” says Mike Heroux. “The community couldn’t see a better approach. It was desperate. Presently we are seeing memory system designs that provide a dramatic improvement over what was available 12 months ago, but the fundamental problem still exists.”

In the early days of supercomputing, Seymour Cray produced a superchip that processed information faster than any other chip. Then a movement — led in part by Sandia — proved that ordinary chips, programmed to work different parts of a problem at the same time, could solve complex problems faster than the most powerful superchip. Sandia’s Paragon supercomputer, in fact, was the world’s first parallel processing supercomputer.

Today, Sandia has a large investment in message-passing programs. Its Institute for Advanced Architectures, operated jointly with Oak Ridge National Laboratory (ORNL) and intended to prepare the way for exaflop computing, may help solve the multicore dilemma.
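For readers unfamiliar with the message-passing style, here is a minimal generic MPI sketch; it uses only standard MPI calls and is not Sandia code. Each process owns its data and its own memory, so there is no shared bus to fight over; partial results move between processes as explicit messages.

    /* sum.c -- compile with mpicc, run with: mpirun -np 4 ./a.out */
    #include <stdio.h>
    #include <mpi.h>

    int main(int argc, char **argv) {
        MPI_Init(&argc, &argv);
        int rank, size;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        /* each rank computes a partial sum over data it owns locally */
        double partial = 0.0;
        for (int i = rank; i < 1000000; i += size)
            partial += 1.0 / (1.0 + i);

        /* partial results travel as explicit messages to rank 0 */
        double total = 0.0;
        MPI_Reduce(&partial, &total, 1, MPI_DOUBLE, MPI_SUM, 0,
                   MPI_COMM_WORLD);
        if (rank == 0) printf("sum = %f\n", total);

        MPI_Finalize();
        return 0;
    }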

ORNL’s Jaguar supercomputer, currently the world’s fastest for scientific computing, is a Cray XT model based on technology developed by Sandia and Cray for Sandia’s Red Storm supercomputer. Red Storm’s original and unique design is the most copied of all supercomputer architectures.

The current work was funded by Sandia’s Laboratory-Directed Research and Development office.

Sandia is a multiprogram laboratory operated by Sandia Corporation, a Lockheed Martin company, for the U.S. Department of Energy’s National Nuclear Security Administration. With main facilities in Albuquerque, N.M., and Livermore, Calif., Sandia has major R&D responsibilities in national security, energy and environmental technologies, and economic competitiveness.

Neal Singer | Newswise Science News
Further information:
http://www.sandia.gov
