Computing Reviews
Spending your free time
Gelernter D. (ed), Philbin J. BYTE 15(5): 213-ff, 1990. Type: Article
Date Reviewed: Apr 1 1992

I wanted a bigger picture of Linda than this paper about the parallel processing software developed at Yale University presents, so I solicited literature from Scientific Computing Associates in New Haven, Connecticut, and Torque Computer in New York City. I was familiar with IBM PC and Macintosh computers and wanted to know whether Linda was available for those machines.

Scientific Computing Associates has developed a version of Linda called Network Linda (NL) for workstations from Hewlett-Packard, Sun, DEC, and Silicon Graphics. Scientific designed NL on the assumption that Linda jobs run in the background during prime time, consuming otherwise idle machine cycles. NL is termed a hypercomputer: a parallel computer that emerges from the idle portion of a workstation network. The hypercomputer grows or shrinks as nodes in the network become available or busy. How much of each workstation’s resources NL may use is determined by a local user configuration program and file.

Torque’s implementation of Linda for desktop computer systems is called Tuplex. Torque offers Tuplex for networked or standalone Macintoshes, Suns, or IBM 386 PCs; networked configurations use Torque’s ComputeServer hardware and software, with Ethernet connecting the PCs. The Torque design is intended to bring Linda to commonly available desktop computers.

Linda is a development environment for parallel programs. Its design criteria were parallel program portability, efficiency of execution, and ease of use. Linda works with the operating system to develop and run parallel programs written in C or FORTRAN. Program control and input/output data are stored in a distributed, associative store called tuple space. Linda provides six basic parallel operations in C or FORTRAN: eval() initiates (forks) a parallel process to be assigned a processor; out() adds a tuple to tuple space; in() reads a matching tuple and removes it from tuple space; rd() reads a matching tuple without removing it; and inp() and rdp() are the non-blocking counterparts of in() and rd(). Where in() and rd() block until a matching tuple appears, inp() and rdp() return immediately, reporting whether a match was found. A preprocessor translates these Linda statements into runtime library calls.
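The tuple-matching semantics of these operations can be illustrated with a toy, single-machine sketch in Python. The TupleSpace class and its method names below are illustrative inventions, not the API of any real Linda implementation; None stands in for a Linda wildcard (formal) field, and eval() would correspond to spawning a thread or process against the shared space.

```python
# Hypothetical single-machine sketch of Linda tuple-space semantics.
# Not a real Linda API: names and matching rules are illustrative only.
import threading

class TupleSpace:
    def __init__(self):
        self._tuples = []
        self._cond = threading.Condition()

    def out(self, *tup):
        """Add a tuple to the space (Linda's out())."""
        with self._cond:
            self._tuples.append(tuple(tup))
            self._cond.notify_all()

    def _match(self, pattern, tup):
        # A field matches if the pattern holds None (wildcard) or an equal value.
        return len(pattern) == len(tup) and all(
            p is None or p == t for p, t in zip(pattern, tup))

    def in_(self, *pattern):
        """Block until a matching tuple exists; remove and return it (in())."""
        with self._cond:
            while True:
                for tup in self._tuples:
                    if self._match(pattern, tup):
                        self._tuples.remove(tup)
                        return tup
                self._cond.wait()

    def rd(self, *pattern):
        """Block until a matching tuple exists; return it without removing (rd())."""
        with self._cond:
            while True:
                for tup in self._tuples:
                    if self._match(pattern, tup):
                        return tup
                self._cond.wait()

    def inp(self, *pattern):
        """Non-blocking in(): return a match immediately, or None (inp())."""
        with self._cond:
            for tup in self._tuples:
                if self._match(pattern, tup):
                    self._tuples.remove(tup)
                    return tup
            return None

ts = TupleSpace()
ts.out("count", 1)
print(ts.rd("count", None))   # ("count", 1) -- tuple stays in the space
print(ts.inp("count", None))  # ("count", 1) -- tuple removed
print(ts.inp("count", None))  # None -- space is now empty
```

The blocking loop inside in_() and rd() is what makes tuple space usable for synchronization: a consumer simply waits on a pattern until some producer out()s a matching tuple.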

One could conclude that Linda is emerging as a parallel processing standard. It is being made widely available on many standard workstation, Apple, and IBM PC networks, and programming is possible in the parallel versions of C or FORTRAN supplied with the Linda software. This approach is a significant departure from the common situation in which each university computer science department runs its own parallel processing project, developing special hardware and software.

College computer science departments could dedicate a network of Apple or IBM PCs to parallel processing. The college could offer a course called “Introduction to Parallel Programming,” followed by a more advanced “Parallel Programming Algorithms.” More specialized courses could cover parallel algorithms specific to electrical and mechanical engineering, chemistry, physics, and finance. One production program the college itself needs, a parallel class scheduling system, could round out the series. Students in these courses would gain experience with parallel programming systems. They would learn the commands of the parallel development system (Linda) for job control and job status, the design of the distributed tuple-space database, the C and FORTRAN parallel interfacing verbs, debugging tools, utilities for loading and unloading the distributed database, and algorithm and program design. Most of these topics are common to the Linda implementations from Scientific Computing Associates and Torque Computer.

Reviewer: Neil Karl
Review #: CR115347
Parallel Processors (C.1.2 ...)
Linda (D.3.2 ...)
Workstations (C.5.3 ...)
Network Architecture And Design (C.2.1)
Other reviews under "Parallel Processors":
Higher speed transputer communication using shared memory
Boianov L., Knowles A. Microprocessors & Microsystems 15(2): 67-72, 1991. Type: Article
Jun 1 1992
On stability and performance of parallel processing systems
Bambos N., Walrand J. (ed) Journal of the ACM 38(2): 429-452, 1991. Type: Article
Sep 1 1992
Introduction to parallel algorithms and architectures
Leighton F. (ed), Morgan Kaufmann Publishers Inc., San Francisco, CA, 1992. Type: Book (9781558601178)
Apr 1 1993

Reproduction in whole or in part without permission is prohibited.   Copyright 1999-2024 ThinkLoud®