
Parallel programming may not be so daunting


'Lock-free' parallel algorithms may match performance of more complex 'wait-free' algorithms

Computer chips have stopped getting faster: The regular performance improvements we've come to expect are now the result of chipmakers' adding more cores, or processing units, to their chips, rather than increasing their clock speed.

In theory, doubling the number of cores doubles the chip's computational capacity, but splitting up computations so that they run efficiently in parallel isn't easy. On the other hand, say a trio of computer scientists from MIT, Israel's Technion, and Microsoft Research, neither is it as hard as had been feared.

Commercial software developers writing programs for multicore chips frequently use so-called "lock-free" parallel algorithms, which are relatively easy to generate from standard sequential code. In fact, in many cases the conversion can be done automatically.

Yet lock-free algorithms don't come with very satisfying theoretical guarantees: All they promise is that at least one core will make progress on its computational task in a fixed span of time. But if they don't exceed that standard, they squander all the additional computational power that multiple cores provide.

In recent years, theoretical computer scientists have demonstrated ingenious alternatives called "wait-free" algorithms, which guarantee that all cores will make progress in a fixed span of time. But deriving them from sequential code is extremely complicated, and commercial developers have largely neglected them.
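To make the distinction concrete, here is a minimal Python sketch of the retry loop at the heart of a typical lock-free algorithm. The `SimulatedCAS` class and `lock_free_increment` function are illustrative inventions, not code from the paper; real lock-free code would use a hardware compare-and-swap instruction (for example through C++'s `std::atomic` or Java's `AtomicInteger`) rather than this sequential stand-in.

```python
class SimulatedCAS:
    """Toy memory cell with a compare-and-swap operation.

    In real lock-free code this would be a single atomic hardware
    instruction; here it runs sequentially purely for illustration.
    """

    def __init__(self, value=0):
        self.value = value

    def compare_and_swap(self, expected, new):
        # Succeeds only if no other thread changed the cell since we read it.
        if self.value == expected:
            self.value = new
            return True
        return False


def lock_free_increment(cell):
    """Increment the cell with the classic lock-free retry loop.

    If another thread updates the cell between our read and our CAS,
    the CAS fails and we retry. That retry is exactly why lock-freedom
    guarantees only that *some* thread makes progress: one thread's
    success can force the others to loop again.
    """
    while True:
        old = cell.value                      # read the current value
        if cell.compare_and_swap(old, old + 1):
            return old + 1                    # our update took effect
        # Otherwise another thread won the race; retry.
```

A wait-free version would additionally bound how many times any single thread can be forced around that loop, which is what makes such algorithms so much harder to construct.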

In a paper to be presented at the Association for Computing Machinery's Annual Symposium on Theory of Computing (STOC) in May, Nir Shavit, a professor in MIT's Department of Electrical Engineering and Computer Science; his former student Dan Alistarh, who's now at Microsoft Research; and Keren Censor-Hillel of the Technion demonstrate a new analytic technique suggesting that, in a wide range of real-world cases, lock-free algorithms actually give wait-free performance.

"In practice, programmers program as if everything is wait-free," Shavit says. "This is a kind of mystery. What we are exposing in the paper is this little-talked-about intuition that programmers have about how [chip] schedulers work, that they are actually benevolent."

The researchers' key insight was that the chip's performance as a whole could be characterized more simply than the performance of the individual cores. That's because the allocation of different "threads," or chunks of code executed in parallel, is symmetric. "It doesn't matter whether thread 1 is in state A and thread 2 is in state B or if you just swap the states around," says Alistarh, who contributed to the work while at MIT. "What we noticed is that by coalescing symmetric states, you can simplify this a lot."

In a real chip, the allocation of threads to cores is "a complex interplay of latencies and scheduling policies," Alistarh says. In practice, however, the decisions arrived at through that complex interplay end up looking a lot like randomness. So the researchers modeled the scheduling of threads as a process that has at least a little randomness in it: At any time, there's some probability that a new thread will be initiated on any given core.

The researchers found that even with a random scheduler, a wide range of lock-free algorithms offered performance guarantees that were as good as those offered by wait-free algorithms.

That analysis held, no matter how the probability of thread assignment varied from core to core. But the researchers also performed a more specific analysis, asking what would happen when multiple cores were trying to write data to the same location in memory and one of them kept getting there ahead of the others. That's the situation that results in a lock-free algorithm's worst performance, when only one core is making progress.

For that case, they considered a particular set of probabilities, in which every core had the same chance of being assigned a thread at any given time. "This is kind of a worst-case random scheduler," Alistarh says. Even then, however, the number of cores that made progress never dipped below the square root of the number of cores assigned threads, which is still better than the minimum performance guarantee of lock-free algorithms.



Abby Abazorius | EurekAlert!

