|Institution:||University of Rochester|
|Keywords:||Parallel programming; scripting language; deterministic parallelism|
|Full text PDF:||http://hdl.handle.net/1802/29300|
The rapid development of parallel computer systems poses a challenge to programming language design and implementation. On the one hand, simple semantics are desirable; on the other hand, language implementations are expected to help programmers fully utilize complex parallel systems. Scripting languages are becoming popular because of their simplicity, but parallel performance has rarely been central to their design and implementation. In this thesis, we propose the design and implementation of a deterministic parallel scripting language. Determinism is an appealing property for parallel programs, as it simplifies understanding, reasoning, and debugging. It is particularly appealing in scripting languages, where ease of programming is a dominant design goal.

We start by proposing a formal semantic framework for deterministic parallel programming. We suggest several history-based definitions of determinism, discuss their comparative advantages, prove containment relationships among them, and identify programming idioms that ensure them.

On top of the proposed formal semantic framework, we discuss the design of Deterministic Parallel Ruby (DPR), a parallel dialect of the Ruby programming language. DPR extends Ruby with simple deterministic parallel constructs while giving up the full generality of conventional parallel programming models. We introduce the constructs of DPR, together with a dynamic determinism checking mechanism (Tardis) that verifies the properties required for determinism. With our DPR implementation, RB-DPR (RB stands for Rochester-Beijing Institute of Technology), experimental results confirm that DPR can provide scalable performance on multicore machines and that the overhead of Tardis is low enough for practical testing. In particular, Tardis significantly outperforms alternative data-race detectors with comparable functionality. We conclude with a discussion of future directions in parallel scripting languages that are simple to use.
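To give a flavor of the kind of restricted idiom that makes parallel scripting deterministic, the sketch below shows a parallel map in plain Ruby threads. This is an illustrative assumption, not the actual DPR API: the names `det_parallel_map` and `workers` are hypothetical. Each thread writes only its own disjoint slots of the result array, so the output is independent of thread scheduling.

```ruby
# Hypothetical sketch of a deterministic parallel map (NOT the real DPR
# constructs): workers operate on disjoint index ranges, so the result
# does not depend on the order in which threads are scheduled.
def det_parallel_map(items, workers: 4)
  return [] if items.empty?
  results = Array.new(items.length)
  chunk = (items.length.to_f / workers).ceil
  threads = items.each_with_index.each_slice(chunk).map do |slice|
    Thread.new do
      # Each thread writes only the slots for its own slice of indices.
      slice.each { |item, i| results[i] = yield(item) }
    end
  end
  threads.each(&:join)
  results
end

squares = det_parallel_map((1..8).to_a) { |x| x * x }
# squares == [1, 4, 9, 16, 25, 36, 49, 64] on every run
```

Because no two threads ever touch the same array slot, the program has no data races and yields the same result as the sequential `map`, which is the property a checker like Tardis would verify dynamically.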