Parallel programming may not be so daunting

25.03.2014

'Lock-free' parallel algorithms may match performance of more complex 'wait-free' algorithms

Computer chips have stopped getting faster: The regular performance improvements we've come to expect are now the result of chipmakers' adding more cores, or processing units, to their chips, rather than increasing their clock speed.

In theory, doubling the number of cores doubles the chip's throughput, but splitting up computations so that they run efficiently in parallel isn't easy. On the other hand, say a trio of computer scientists from MIT, Israel's Technion, and Microsoft Research, neither is it as hard as had been feared.

Commercial software developers writing programs for multicore chips frequently use so-called "lock-free" parallel algorithms, which are relatively easy to generate from standard sequential code. In fact, in many cases the conversion can be done automatically.
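To make the pattern concrete, here is a minimal, hypothetical sketch (not code from the paper) of how a sequential counter update is typically turned into a lock-free one in C++, using a compare-and-swap retry loop:

    #include <atomic>

    // Illustrative shared counter (a hypothetical example, not code from the paper).
    std::atomic<int> counter{0};

    // Sequential version:  counter = counter + 1;
    // Lock-free version: read the current value, then try to install the
    // incremented value with a compare-and-swap (CAS). If another thread got
    // there first, the CAS fails, 'observed' is refreshed with the new value,
    // and we retry. Only one of the competing threads succeeds per round,
    // which is exactly the "at least one core makes progress" guarantee.
    void lock_free_increment() {
        int observed = counter.load();
        while (!counter.compare_exchange_weak(observed, observed + 1)) {
            // 'observed' now holds the value written by the thread that beat us.
        }
    }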

Yet lock-free algorithms don't come with very satisfying theoretical guarantees: All they promise is that at least one core will make progress on its computational task in a fixed span of time. An algorithm that only ever meets that bare minimum squanders the additional computational power that multiple cores provide.

In recent years, theoretical computer scientists have demonstrated ingenious alternatives called "wait-free" algorithms, which guarantee that all cores will make progress in a fixed span of time. But deriving them from sequential code is extremely complicated, and commercial developers have largely neglected them.
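By contrast, a wait-free operation must complete in a bounded number of steps no matter what the other threads do. For the toy counter above that is easy, because the hardware offers an unconditional fetch-and-add; for richer data structures no such shortcut exists, which is part of why general wait-free constructions are so hard to derive. Again, this is only an illustrative sketch:

    #include <atomic>

    std::atomic<int> counter{0};

    // Wait-free increment for the simple counter case: fetch_add always
    // completes in a bounded number of steps, with no retry loop that other
    // threads could prolong. General wait-free algorithms cannot usually be
    // reduced to a single hardware instruction like this.
    void wait_free_increment() {
        counter.fetch_add(1);
    }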

In a paper to be presented at the Association for Computing Machinery's Symposium on Theory of Computing (STOC) in May, Nir Shavit, a professor in MIT's Department of Electrical Engineering and Computer Science; his former student Dan Alistarh, who's now at Microsoft Research; and Keren Censor-Hillel of the Technion demonstrate a new analytic technique suggesting that, in a wide range of real-world cases, lock-free algorithms actually give wait-free performance.

"In practice, programmers program as if everything is wait-free," Shavit says. "This is a kind of mystery. What we are exposing in the paper is this little-talked-about intuition that programmers have about how [chip] schedulers work, that they are actually benevolent."

The researchers' key insight was that the chip's performance as a whole could be characterized more simply than the performance of the individual cores. That's because the allocation of different "threads," or chunks of code executed in parallel, is symmetric. "It doesn't matter whether thread 1 is in state A and thread 2 is in state B or if you just swap the states around," says Alistarh, who contributed to the work while at MIT. "What we noticed is that by coalescing symmetric states, you can simplify this a lot."
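A rough way to picture that coalescing (a hypothetical illustration, not the paper's notation): rather than recording which particular thread is in which state, the analysis only needs to record how many threads are in each state, which collapses all symmetric configurations into a single one:

    #include <map>
    #include <string>
    #include <vector>

    // Hypothetical illustration of coalescing symmetric states.
    // A "full" state records which thread is in which state, e.g.
    //   thread 1 -> "A", thread 2 -> "B".
    // Swapping the two threads gives a different full state but the same
    // coalesced state, { "A": 1, "B": 1 }, so the analysis has far fewer
    // states to track.
    std::map<std::string, int> coalesce(const std::vector<std::string>& per_thread_state) {
        std::map<std::string, int> counts;
        for (const auto& state : per_thread_state) {
            ++counts[state];
        }
        return counts;
    }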

In a real chip, the allocation of threads to cores is "a complex interplay of latencies and scheduling policies," Alistarh says. In practice, however, the decisions arrived at through that complex interplay end up looking a lot like randomness. So the researchers modeled the scheduling of threads as a process that has at least a little randomness in it: At any time, there's some probability that a new thread will be initiated on any given core.

The researchers found that even with a random scheduler, a wide range of lock-free algorithms offered performance guarantees that were as good as those offered by wait-free algorithms.

That analysis held, no matter how the probability of thread assignment varied from core to core. But the researchers also performed a more specific analysis, asking what would happen when multiple cores were trying to write data to the same location in memory and one of them kept getting there ahead of the others. That's the situation that results in a lock-free algorithm's worst performance, when only one core is making progress.

For that case, they considered a particular set of probabilities, in which every core had the same chance of being assigned a thread at any given time. "This is kind of a worst-case random scheduler," Alistarh says. Even then, however, the number of cores that made progress never dipped below the square root of the number of cores assigned threads, which is still far better than lock-free algorithms' bare-minimum guarantee that a single core makes progress.
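A minimal simulation sketch of that setup, under simplifying assumptions made only for illustration (a uniform random scheduler and one CAS winner per step); it shows the kind of experiment the analysis formalizes rather than the authors' actual model:

    #include <cstdio>
    #include <random>
    #include <set>

    int main() {
        const int num_threads = 64;  // threads contending for one memory location
        const int num_steps = 64;    // scheduling steps in the contention window

        std::mt19937 rng(42);
        std::uniform_int_distribution<int> pick(0, num_threads - 1);

        // Uniform random scheduler: at each step, one thread is chosen at
        // random and wins the CAS for that step. Count how many distinct
        // threads manage to make progress during the window.
        std::set<int> made_progress;
        for (int step = 0; step < num_steps; ++step) {
            made_progress.insert(pick(rng));
        }

        std::printf("%zu of %d threads made progress in %d steps\n",
                    made_progress.size(), num_threads, num_steps);
        return 0;
    }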

###

Additional background

Archive: "Multicore may not be so scary": http://web.mit.edu/newsoffice/2010/multicore-0930.html

Abby Abazorius | EurekAlert!


