TITLE

A Compositional Framework for Developing Parallel Programs on Two-Dimensional Arrays

AUTHOR(S)
Emoto, Kento; Hu, Zhenjiang; Kakehi, Kazuhiko; Takeichi, Masato
PUB. DATE
December 2007
SOURCE
International Journal of Parallel Programming;Dec2007, Vol. 35 Issue 6, p615
SOURCE TYPE
Academic Journal
DOC. TYPE
Article
ABSTRACT
Computations on two-dimensional arrays such as matrices and images are among the most fundamental and ubiquitous tasks in computational science and its vast application areas, but developing efficient parallel programs on two-dimensional arrays is known to be hard. In this paper, we propose a compositional framework that supports users, even those with little knowledge of parallel machines, in systematically developing parallel programs on dense two-dimensional arrays that are both correct and efficient. The key feature of our framework is a novel use of the abide-tree representation of two-dimensional arrays. The representation not only inherits the advantages of tree representations of matrices, where recursive blocked algorithms can be defined to achieve better performance, but also supports transformational development of parallel programs and architecture-independent implementation owing to its solid theoretical foundation, the theory of constructive algorithmics.
ACCESSION #
26845992
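
The abstract above centers on the abide-tree representation of dense two-dimensional arrays. The following Haskell sketch is an editorial illustration of that idea, not the paper's actual definitions; the names (AbideTree, One, Above, Beside, mapAT, reduceAT) and the exact formulation of the abide law are assumptions made for exposition.

```haskell
-- A minimal sketch of the abide-tree idea: a dense 2D array is built from
-- single cells by stacking blocks vertically (Above) or side by side (Beside).
data AbideTree a
  = One a                              -- a single element
  | Above  (AbideTree a) (AbideTree a) -- upper block on top of lower block
  | Beside (AbideTree a) (AbideTree a) -- left block next to right block

-- Structure-preserving map: apply a function to every element.
mapAT :: (a -> b) -> AbideTree a -> AbideTree b
mapAT f (One a)      = One (f a)
mapAT f (Above x y)  = Above  (mapAT f x) (mapAT f y)
mapAT f (Beside x y) = Beside (mapAT f x) (mapAT f y)

-- Homomorphic reduction: collapse the tree with one operator per dimension.
-- For the result to be independent of how the array was split into blocks,
-- the two operators should satisfy the abide law:
--   (a `bes` b) `abv` (c `bes` d)  ==  (a `abv` c) `bes` (b `abv` d)
reduceAT :: (a -> a -> a) -> (a -> a -> a) -> AbideTree a -> a
reduceAT abv bes (One a)      = a
reduceAT abv bes (Above x y)  = reduceAT abv bes x `abv` reduceAT abv bes y
reduceAT abv bes (Beside x y) = reduceAT abv bes x `bes` reduceAT abv bes y

-- Example: summing a 2x2 block; (+) abides with itself, so any block
-- decomposition of the array yields the same result (here, 10).
example :: Int
example = reduceAT (+) (+) (Above (Beside (One 1) (One 2))
                                  (Beside (One 3) (One 4)))
```

Because the reduction is defined over the tree structure rather than a fixed traversal order, the blocks can be evaluated independently and combined afterwards, which is what makes the representation amenable to parallel, architecture-independent implementation.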


Related Articles

  • Parallel processing of multicomponent seismic data. Falfushinsky, V. V. // Cybernetics & Systems Analysis;Mar2011, Vol. 47 Issue 2, p330 

    An algorithm for processing multicomponent seismic data is proposed. It is implemented in, and its performance is measured on, the Inparcom cluster. Several improvements are applied to speed up the program and to reduce the filesystem load; in particular, local folders are used to store temporary...

  • Parallel programming in Grid: Using MPI. Heli Xu; Guixin Wu // Proceedings of the International Symposium on Electronic Commerc;Jun2010, p136 

    Compared with traditional parallel technologies, parallel computing realized in the Grid has a great advantage. Based on the features of the Message Passing Interface (MPI), the paper puts forward methods for applying MPI in the grid, including MPI application models based on the grid, theories of grid...

  • Parallel Computing.  // Network Dictionary;2007, p365 

    A definition of the term "parallel computing" is presented. In the context of computer software, this refers to the process of simultaneously executing the same task on multiple processors or multiple computers. The idea behind parallel computing is based on the fact that the process of solving...

  • A Review of Parallelization Tools and Introduction to Easypar. Sah, Sudhakar; Vaidya, Vinay G. // International Journal of Computer Applications;10/15/2012, Vol. 56, p17 

    Multicore processors have paved the way to increase the performance of any application by virtue of the benefits of parallelization. However, exploiting parallelism from a program is not easy, as it requires parallel programming expertise. In addition, manual parallelization is a cumbersome,...

  • Introduction. Evripidou, Paraskevas // International Journal of Parallel Programming;Feb2001, Vol. 29 Issue 1, p1 

    Presents an introduction to articles on parallel programming.

  • Progressing with parallel processing. Coffee, Peter // eWeek;9/18/2006, Vol. 23 Issue 37, pD5 

    The article offers information regarding the optimization of multithreaded code writing in the U.S. Because of the increasing development of parallel processing hardware, the need for multithreading skills is also growing. It is important that a developer knows how to write safe and concurrent...

  • Performance Comparison of Parallel Programming Environments for Implementing AIAC Algorithms. Bahi, Jacques; Contassot-Vivier, Sylvain; Couturier, Raphaël // Journal of Supercomputing;Mar2006, Vol. 35 Issue 3, p227 

    AIAC algorithms (Asynchronous Iterations Asynchronous Communications) are a particular class of parallel iterative algorithms. Their asynchronous nature makes them more efficient than their synchronous counterparts in numerous cases as has already been shown in previous works. The first goal of...

  • Optimizing I/O server placement for parallel I/O on switch-based irregular networks. Lin, Yih-Fang; Wang, Chien-Min; Wu, Jan-Jan // Journal of Supercomputing;Jun2006, Vol. 36 Issue 3, p201 

    In this paper, we study I/O server placement for optimizing parallel I/O performance on switch-based clusters, which typically adopt irregular network topologies to allow construction of scalable systems with incremental expansion capability. Finding optimal solution to this problem is...

  • Non-Strict Execution in Parallel and Distributed Computing. Cristobal-Salas, Alfredo; Tchernykh, Andrei; Gaudiot, Jean-Luc; Lin, Wen-Yen // International Journal of Parallel Programming;Apr2003, Vol. 31 Issue 2, p77 

    This paper surveys and demonstrates the power of non-strict evaluation in applications executed on distributed architectures. We present the design, implementation, and experimental evaluation of single assignment, incomplete data structures in a distributed memory architecture and Abstract...
