TITLE

Compilation Techniques for High Level Parallel Code

AUTHOR(S)
Gaster, Benedict R.; Bainbridge, Tim; Lacey, David; Gardner, David
PUB. DATE
February 2010
SOURCE
International Journal of Parallel Programming;Feb2010, Vol. 38 Issue 1, p4
SOURCE TYPE
Academic Journal
DOC. TYPE
Article
ABSTRACT
This paper describes methods for adapting existing optimizing compilers for sequential languages to produce code for parallel processors. In particular, it looks at targeting data-parallel processors using SIMD (single instruction, multiple data) or vector processors, where users need features similar to high-level control flow across the data parallelism. The premise of the paper is that we do not want to write an optimizing compiler from scratch. Rather, a method is described that allows a developer to take an existing compiler for a sequential language and modify it to handle SIMD extensions. In addition to modifying the front end, the intermediate representation, and the code generation to handle the parallelism, specific optimizations are described to target the architecture efficiently.
ACCESSION #
47626248
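The challenge the abstract highlights, supporting high-level control flow across SIMD data parallelism, is commonly handled by predication: a branch is converted into per-lane masks, both arms are evaluated, and the mask selects the result. A minimal Python sketch of that general technique (an illustration of the idea, not the authors' implementation):

```python
# Scalar source with control flow, applied element-wise:
#   y = x * 2 if x > 0 else x - 1
# A SIMD back end typically replaces the branch with masked selection:
def simd_select(xs):
    """Predicated (masked) evaluation: both arms are computed for
    every lane, then blended with a mask, so no lane ever branches."""
    mask = [x > 0 for x in xs]             # compare produces a lane mask
    then_arm = [x * 2 for x in xs]         # both arms execute...
    else_arm = [x - 1 for x in xs]
    return [t if m else e                  # ...and the mask selects
            for m, t, e in zip(mask, then_arm, else_arm)]
```

On real SIMD hardware the mask, both arms, and the blend are each single vector instructions over all lanes at once.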


Related Articles

  • Maximizing Application Performance through Inter-Procedural Optimization. Chow, Fred // Scientific Computing & Instrumentation;Oct2004, Vol. 21 Issue 11, p24 

    The article focuses on new-generation compilers, which deliver whole-program optimization. In separate compilation, incomplete program information is available during compilation, forcing compilers to make worst-case assumptions about programs that access external data or call external...

  • Inline Expansion.  // Network Dictionary;2007, p246 

    A definition of the term "Inline Expansion" is presented. It refers to a compiler optimization that expands a function call site into the actual implementation of the function being called, rather than having each call transfer control to a common piece of code. This helps call-site-specific...

  • Dead Code Elimination.  // Network Dictionary;2007, p138 

    A definition of the term "dead code elimination" is presented. It refers to a compiler optimization technique used to reduce program size by the removal of the code that can never be executed.

  • A Vectorizing Compiler for Multimedia Extensions. Sreraman, N.; Govindarajan, R. // International Journal of Parallel Programming;Aug2000, Vol. 28 Issue 4, p363 

    In this paper, we present an implementation of a vectorizing C compiler for Intel's MMX (Multimedia Extension). This compiler would identify data parallel sections of the code using scalar and array dependence analysis. To enhance the scope for application of the subword semantics, our compiler...

  • OpenMP Implementation of SPICE3 Circuit Simulator. Tien-Hsiung Weng; Ruey-Kuen Perng; Chapman, Barbara // International Journal of Parallel Programming;Oct2007, Vol. 35 Issue 5, p493 

    In this paper, we describe our experience of creating an OpenMP implementation of the SPICE3 circuit simulator program. Given the irregular patterns of access to dynamic data structures in the SPICE code, a parallelization using current standard OpenMP directives is impossible without major...

  • Handling Global Constraints in Compiler Strategy. Rohou, Erven; Bodin, François; Eisenbeis, Christine; Seznec, André // International Journal of Parallel Programming;Aug2000, Vol. 28 Issue 4, p325 

    To achieve high performance on processors featuring ILP, most compilers apply a set of heuristics locally. This leads to potentially high performance on separate code fragments. Unfortunately, most optimizations also increase code size, which may lead to a global net performance loss. In this...

  • Experiences with Sweep3D implementations in Co-array Fortran. Coarfa, Cristian; Dotsenko, Yuri; Mellor-Crummey, John // Journal of Supercomputing;May2006, Vol. 36 Issue 2, p101 

    As part of the recent focus on increasing the productivity of parallel application developers, Co-array Fortran (CAF) has emerged as an appealing alternative to the Message Passing Interface (MPI). CAF belongs to the family of global address space parallel programming languages; such languages...

  • Constant Folding.  // Network Dictionary;2007, p120 

    A definition of the term "Constant Folding" is presented. It refers to one of the compiler optimization techniques used by many modern compilers. It is also the process of simplifying constant expressions at compile time. Terms used in constant expressions are usually simple literals but can...

  • Copy Propagation.  // Network Dictionary;2007, p123 

    A definition of the term "Copy Propagation" is presented. It refers to an optimization technique in the program compiler. It is the process of replacing the occurrences of targets of direct assignments with their values. It is a useful clean up optimization most often used after other...
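The "Inline Expansion" entry above can be illustrated with a hand-inlined Python function (a sketch of what the optimization does to a call site; the function names are hypothetical):

```python
def square(x):
    return x * x

def sum_of_squares(v):
    return square(v) + square(v + 1)       # two function call sites

def sum_of_squares_inlined(v):
    # After inline expansion: each call site is replaced by the body
    # of square, removing call overhead and exposing the arithmetic
    # to further call-site-specific optimization.
    return v * v + (v + 1) * (v + 1)
```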
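The "Dead Code Elimination" definition can be sketched as a tiny pass over a straight-line block that drops statements which can never execute (a simplified illustration; real compilers perform this analysis on a control-flow graph):

```python
def eliminate_dead_code(stmts):
    """Keep only statements that can execute: in a straight-line block,
    everything after an unconditional 'return' is unreachable."""
    live = []
    for s in stmts:
        live.append(s)
        if s.startswith("return"):
            break                           # the rest can never run
    return live
```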
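The "Constant Folding" entry can be demonstrated on Python's own syntax trees. This small pass (an illustrative sketch covering only binary arithmetic) replaces all-constant subexpressions with their computed values at "compile time":

```python
import ast
import operator

# Binary operators the folder understands.
OPS = {ast.Add: operator.add, ast.Sub: operator.sub,
       ast.Mult: operator.mul, ast.Div: operator.truediv}

def fold(node):
    """Recursively fold constant subexpressions of a parsed expression."""
    if isinstance(node, ast.BinOp) and type(node.op) in OPS:
        left, right = fold(node.left), fold(node.right)
        if isinstance(left, ast.Constant) and isinstance(right, ast.Constant):
            # Both operands are literals: compute the result now.
            return ast.Constant(OPS[type(node.op)](left.value, right.value))
        node.left, node.right = left, right     # fold what we can
    return node

def fold_source(expr):
    """Parse, fold, and unparse a source expression (Python 3.9+)."""
    tree = ast.parse(expr, mode="eval")
    tree.body = fold(tree.body)
    return ast.unparse(tree)
```

For example, `fold_source("2 * 3 + 4")` folds the whole expression to a single literal, while `fold_source("x + 2 * 3")` folds only the constant subterm.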
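The "Copy Propagation" definition can likewise be sketched as a forward pass over a straight-line block. This is a deliberately simplified sketch: each statement is a (target, source) pair, every source is a single variable name, and no source is reassigned later in the block:

```python
def propagate_copies(stmts):
    """Replace uses of variables that are direct copies (x = y) with
    the variable they were copied from, a typical clean-up pass run
    after other optimizations."""
    copies = {}                      # target -> ultimate source name
    out = []
    for lhs, rhs in stmts:
        rhs = copies.get(rhs, rhs)   # rewrite a use of a known copy
        copies[lhs] = rhs            # record the (possibly chained) copy
        out.append((lhs, rhs))
    return out
```

After propagation, intermediate copies like the chain below become direct uses of the original variable, which often leaves the copies themselves dead and removable.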
