SPLASH 2014
Mon 20 - Fri 24 October 2014 Portland, Oregon, United States
Fri 24 Oct 2014 13:30 - 13:52 at Salon F - Distributed Computing Chair(s): Madan Musuvathi

Many vertex-centric graph algorithms can be expressed via asynchronous parallelism by relaxing certain read-after-write data dependences, allowing threads to compute vertex values using stale (i.e., not the most recent) values of their neighboring vertices. We observe that on distributed shared memory systems, converting synchronous algorithms into their asynchronous counterparts makes them tolerant to high inter-node communication latency. However, high inter-node communication latency can lead to excessive use of stale values, increasing the number of iterations the algorithm requires to converge. In this paper we design a relaxed memory consistency model and consistency protocol that simultaneously tolerate communication latency and minimize the use of stale values. We demonstrate that for a range of asynchronous graph algorithms, on average, our approach outperforms algorithms based upon prior relaxed memory models that allow stale values by at least 2.27x, and the Bulk Synchronous Parallel (BSP) model by 4.2x. We also show that our approach performs well in comparison to GraphLab, a popular distributed graph processing framework.
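The asynchronous style the abstract describes can be illustrated with a toy sketch (not the paper's ASPIRE protocol or its consistency model): worker threads repeatedly update their vertices in place with no per-iteration barrier, so a neighbor rank read by one thread may be stale with respect to another thread's latest write. The PageRank-style update, graph, and parameter names below are all hypothetical illustrations.

```python
import threading

def async_pagerank(adj, iters=50, d=0.85, nthreads=2):
    """Toy asynchronous vertex-centric PageRank sketch.

    `adj[u]` lists the out-neighbors of vertex u; every vertex is assumed
    to have at least one out-edge. Workers read the shared `rank` array
    without synchronization, so they may observe stale neighbor values --
    the relaxed-consistency behavior the abstract refers to.
    """
    n = len(adj)
    rank = [1.0 / n] * n  # shared state, updated in place without locks

    # Build reverse edges so each vertex can pull from its in-neighbors.
    in_edges = [[] for _ in range(n)]
    for u, outs in enumerate(adj):
        for v in outs:
            in_edges[v].append(u)

    def worker(vertices):
        for _ in range(iters):
            for v in vertices:
                # Neighbor ranks read here may be stale: another thread
                # may have newer values not yet visible to this one.
                s = sum(rank[u] / len(adj[u]) for u in in_edges[v])
                rank[v] = (1 - d) / n + d * s

    chunks = [list(range(t, n, nthreads)) for t in range(nthreads)]
    threads = [threading.Thread(target=worker, args=(c,)) for c in chunks]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return rank

# Example: a 3-vertex cycle 0 -> 1 -> 2 -> 0; the uniform distribution is
# the fixed point, so ranks converge near 1/3 each despite stale reads.
ranks = async_pagerank([[1], [2], [0]])
```

For many iterative graph algorithms, such stale reads only delay convergence rather than break correctness, which is why trading freshness for latency tolerance (the trade-off ASPIRE manages) is attractive on distributed shared memory.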

ASPIRE: Exploiting Asynchronous Parallelism in Iterative Algorithms using a Relaxed Consistency based DSM (oopsla2014-vora.pdf, 1.84 MiB)

Fri 24 Oct

Displayed time zone: Tijuana, Baja California

13:30 - 15:00
Distributed Computing (OOPSLA) at Salon F
Chair(s): Madan Musuvathi Microsoft Research
13:30
22m
Talk
ASPIRE: Exploiting Asynchronous Parallelism in Iterative Algorithms using a Relaxed Consistency based DSM
OOPSLA
Keval Vora University of California, Riverside, Sai Charan Koduru University of California, Riverside, Rajiv Gupta University of California, Riverside
13:52
22m
Talk
Alembic: Automatic Locality Extraction via Migration
OOPSLA
Brandon Holt University of Washington, Preston Briggs University of Washington, Luis Ceze University of Washington, Mark Oskin University of Washington
14:15
22m
Talk
Cybertron: Pushing the Limit on I/O Reduction in Data-Parallel Programs
OOPSLA
Tian Xiao Tsinghua University / Microsoft Research, Zhenyu Guo Microsoft Research, Hucheng Zhou Microsoft Research, Jiaxing Zhang Microsoft Research, Xu Zhao University of Toronto, Chencheng Ye Huazhong University of Science and Technology, Xi Wang MIT CSAIL, Wei Lin Microsoft Bing, Wenguang Chen Tsinghua University, Lidong Zhou Microsoft Research
14:37
22m
Talk
Translating Imperative Code to MapReduce
OOPSLA
Cosmin Radoi University of Illinois, Stephen J Fink IBM, Rodric Rabbah IBM Research, Manu Sridharan Samsung Research America