SPLASH 2014
Mon 20 - Fri 24 October 2014 Portland, Oregon, United States
Fri 24 Oct 2014 13:52 - 14:15 at Salon F - Distributed Computing Chair(s): Madan Musuvathi

Partitioned Global Address Space (PGAS) environments simplify writing parallel code for clusters because they make data movement implicit: dereferencing a global pointer automatically moves data between nodes. Implicit movement does not, however, free the programmer from reasoning about locality, since poor data placement can lead to excessive and even unnecessary communication. For this reason, modern PGAS languages such as X10, Chapel, and UPC allow programmers to express data layout constraints and explicitly move computation. This places an extra burden on the programmer and is less effective for applications with limited or data-dependent locality (e.g., graph analytics).
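A minimal, single-process sketch of the pattern the abstract describes, using hypothetical stand-in names (global_ptr, pgas_read, on_home_node) rather than any actual PGAS or Alembic API:

```cpp
#include <cstdio>

struct Vertex { long degree; long parent; };

// A "global pointer" pairs an owning cluster node with an address
// that is valid on that node.
template <typename T>
struct global_ptr {
    int node;   // node that owns the object
    T*  addr;   // address valid on the owning node
};

// Dereference: in a real PGAS runtime this issues a network read when
// p.node is remote; the sketch only simulates the local case.
template <typename T>
T pgas_read(global_ptr<T> p) { return *p.addr; }

// Explicit migration: run f on the node that owns p, so every access
// inside f is local. This is the hand-tuning the abstract refers to.
template <typename T, typename F>
auto on_home_node(global_ptr<T> p, F f) { return f(*p.addr); }

int main() {
    Vertex v{3, 7};
    global_ptr<Vertex> gv{0, &v};

    // Naive: two separate (potentially remote) reads of the same object.
    long d   = pgas_read(gv).degree;
    long par = pgas_read(gv).parent;

    // Hand-tuned: one migration, both fields read locally at the owner.
    long sum = on_home_node(gv, [](Vertex& u) { return u.degree + u.parent; });

    std::printf("%ld %ld %ld\n", d, par, sum);
}
```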

This paper proposes Alembic, a new static analysis that frees programmers from manually moving computation to exploit locality in PGAS programs. It determines regions of code that access the same cluster node, then transforms the code to migrate parts of the execution automatically by passing continuations between nodes, increasing the proportion of accesses that are local. We implement the analysis and transformation for C++ in LLVM and show that on irregular application kernels, Alembic achieves 82% of hand-tuned performance (for comparison, naïve compiler-generated communication achieves only 13%).
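An illustrative before/after of the continuation-passing migration the abstract describes, again with hypothetical helper names (global_ptr, remote_read, remote_write, migrate) rather than Alembic's actual LLVM pass or runtime API:

```cpp
#include <utility>

template <typename T> struct global_ptr { int node; T* addr; };

// Stand-ins for runtime calls that would each cost a network round trip
// when the target node is remote; here they just touch local memory.
template <typename T> T    remote_read (global_ptr<T> p)        { return *p.addr; }
template <typename T> void remote_write(global_ptr<T> p, T val) { *p.addr = val; }

// migrate(): ship a continuation to `node` and run it there; the sketch
// simply runs it locally so the example stays self-contained.
template <typename K>
void migrate(int node, K&& continuation) { (void)node; std::forward<K>(continuation)(); }

// BEFORE: each access to x (owned by x.node) is a separate message.
void increment_naive(global_ptr<long> x) {
    long v = remote_read(x);   // message 1: read
    remote_write(x, v + 1);    // message 2: write
}

// AFTER: the region that touches only x's node is packaged as a
// continuation and executed at that node, so both accesses become local.
void increment_migrated(global_ptr<long> x) {
    migrate(x.node, [x] {
        long v = *x.addr;      // local read at the owner
        *x.addr = v + 1;       // local write at the owner
    });
}

int main() {
    long cell = 41;
    global_ptr<long> x{0, &cell};
    increment_naive(x);
    increment_migrated(x);
    return cell == 43 ? 0 : 1;
}
```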

Alembic talk slides (alembic-oopsla.pdf), 1.22 MiB

Fri 24 Oct

Displayed time zone: Tijuana, Baja California

13:30 - 15:00
Distributed Computing (OOPSLA) at Salon F
Chair(s): Madan Musuvathi (Microsoft Research)
13:30
22m
Talk
ASPIRE: Exploiting Asynchronous Parallelism in Iterative Algorithms using a Relaxed Consistency based DSM
OOPSLA
Keval Vora (University of California, Riverside), Sai Charan Koduru (University of California, Riverside), Rajiv Gupta (University of California, Riverside)
13:52
22m
Talk
Alembic: Automatic Locality Extraction via Migration
OOPSLA
Brandon Holt (University of Washington), Preston Briggs (University of Washington), Luis Ceze (University of Washington), Mark Oskin (University of Washington)
14:15
22m
Talk
Cybertron: Pushing the Limit on I/O Reduction in Data-Parallel Programs
OOPSLA
Tian Xiao (Tsinghua University / Microsoft Research), Zhenyu Guo (Microsoft Research), Hucheng Zhou (Microsoft Research), Jiaxing Zhang (Microsoft Research), Xu Zhao (University of Toronto), Chencheng Ye (Huazhong University of Science and Technology), Xi Wang (MIT CSAIL), Wei Lin (Microsoft Bing), Wenguang Chen (Tsinghua University), Lidong Zhou (Microsoft Research)
14:37
22m
Talk
Translating Imperative Code to MapReduce
OOPSLA
Cosmin Radoi (University of Illinois), Stephen J. Fink (IBM), Rodric Rabbah (IBM Research), Manu Sridharan (Samsung Research America)