Computing Reviews
Orca: A Language for Parallel Programming of Distributed Systems
Bal H., Kaashoek M., Tanenbaum A. IEEE Transactions on Software Engineering 18(3): 190-205, 1992. Type: Article
Date Reviewed: Jan 1, 1993

The Orca language is an attempt to assist in the execution of coarse-grained parallel applications on distributed computing systems. The overhead of the runtime system and of network message passing both significantly influence the kinds of applications that make sense on such a system. Additionally, transparency is important for a distributed system.

The language semantics of Orca use guards on shared data structures as the synchronization mechanism, and the syntax of this mechanism is presented clearly. Of concern is the runtime impact of the local-copy mechanism, which was chosen to avoid problems with indivisibility. The authors note the possible performance penalty incurred by this mechanism and allude to a compiler solution; the example applications, however, do not contain sufficiently sophisticated data structures to test that solution.
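To illustrate the semantics under discussion (this is not Orca's actual syntax): a guard blocks an operation until its condition holds over the shared object, and the operation body then executes indivisibly with the guard test. A minimal Java sketch of those semantics, modeled with a monitor; all names are hypothetical, and a single lock plus wait/notifyAll stands in for Orca's runtime machinery:

    // Hypothetical sketch of Orca-style guarded operations on a shared
    // object, modeled as a Java monitor. Orca's runtime evaluates guards
    // over (possibly replicated) object copies; here one lock suffices.
    class SharedInteger {
        private int value = 0;

        // Unguarded operation: always enabled, runs indivisibly.
        synchronized void inc() {
            value++;
            notifyAll();          // let blocked operations re-test their guards
        }

        // Guarded operation: blocks until "value > 0" holds, then the
        // body (the decrement) executes atomically with the guard test.
        synchronized void dec() throws InterruptedException {
            while (value <= 0) {  // the guard condition
                wait();
            }
            value--;
        }
    }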

A large part of the paper is devoted to a graph-based data structure that preserves type security. This data structure is necessary for a distributed implementation, since addressing by pointers across disjoint address spaces is impossible; its applicability to sequential programming should not be overlooked, however. In terms of transparency, the authors mention the need for process migration in their abstract, but have forgone that ability in their implementation. The distributed fork operation, for instance, uses explicit CPU binding for the created process. Additionally, while the language semantics are designed around shared structures, all three applications explicitly mention the notion of application-level messages.
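The idea behind such a structure can be illustrated outside Orca as well: nodes are addressed by names local to the graph object rather than by machine addresses, so the whole structure remains meaningful when copied to another address space. A minimal Java sketch under that assumption (names and layout hypothetical; Orca's actual graph type is a built-in, type-secure language construct):

    import java.util.ArrayList;
    import java.util.List;

    // Hypothetical sketch of a pointer-free graph: nodes are referred to
    // by integer names local to the graph, so the structure can be
    // shipped between disjoint address spaces intact.
    class Graph {
        static class Node {
            List<Integer> successors = new ArrayList<>(); // node names, not pointers
        }

        private final List<Node> nodes = new ArrayList<>();

        int addNode() {                    // returns the new node's name
            nodes.add(new Node());
            return nodes.size() - 1;
        }

        void addEdge(int from, int to) {   // edges connect names
            nodes.get(from).successors.add(to);
        }

        List<Integer> successorsOf(int name) {
            return nodes.get(name).successors;
        }
    }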

From a performance viewpoint, Orca relies heavily on reliable broadcast. The efficient implementation of this broadcast, however, depends on a physical network layer that supports broadcast. The extensions to point-to-point interconnection networks that the authors allude to would seem to create major problems with runtime efficiency. Performance is measured on a small (16-processor), somewhat slow (Sun 3) system for classic parallel programming applications. Without grain-size information from the experiments, how well the method extends to more modern machines is difficult to judge.
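The reliance works roughly as follows: each processor holds a copy of a shared object, a write operation is broadcast reliably and in a consistent order to all copies, and reads are served locally. A hedged Java sketch of that update rule, with the broadcast protocol (Amoeba's, in the paper) abstracted to a synchronous loop; all names are hypothetical:

    import java.util.List;
    import java.util.function.Consumer;

    // Hypothetical sketch of the update rule behind replicated shared
    // objects: a write is delivered to every replica through a reliable,
    // totally ordered broadcast, so all copies apply the same operations
    // in the same order.
    class ReplicatedObject<T> {
        private final List<T> replicas;   // one copy per processor

        ReplicatedObject(List<T> replicas) {
            this.replicas = replicas;
        }

        // A read needs no communication: any local copy is up to date.
        T readFrom(int processor) {
            return replicas.get(processor);
        }

        // A write is "broadcast": applied to every copy in one order.
        void broadcastWrite(Consumer<T> operation) {
            for (T copy : replicas) {
                operation.accept(copy);
            }
        }
    }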

Reviewer: Bruce McMillin
Review #: CR116514
Orca (D.3.2 ...)
Amoeba (D.4.0 ...)
Concurrent, Distributed, And Parallel Languages (D.3.2 ...)
Distributed Systems (D.4.7 ...)
Parallel Programming (D.1.3 ...)
Concurrent Programming (D.1.3)