Computing Reviews
Internet agents
Cheong F., New Riders Publishing, Indianapolis, IN, 1996. Type: Book (9781562054632)
Date Reviewed: Feb 1 1997

The huge amount of information provided by the Internet is at once a blessing and a curse. The net can deliver information much more rapidly and at lower expense than older technologies, but the time needed to track it down in a jungle of possible sources can outweigh the benefits for a user who does not already know where to go. One solution to this problem is a computer program with the ability to search the Internet automatically on behalf of a human user. Such programs are commonly called “agents.” This use of the term, derived from expressions such as “travel agent” or “real-estate agent,” emphasizes the role of the program (to represent a human) and should not be confused with an alternative use of the term, derived from the etymological root meaning “to act,” which denotes a software object with its own thread of control and its own initiative, but no necessary connection to the Internet or to a specific human user.

Cheong has distilled his extensive experience with Internet agents, both as a researcher and in the commercial world, to provide an accurate summary of this technology at a popular level. The book is organized into five parts.

Part 1, an introduction, describes the foundation on which Internet agents are built. It defines agents as “personal software assistants with authority delegated from their users” and summarizes a number of pioneering and current research projects that are developing the underlying tools and demonstrating their potential benefits. This part also gives a brief history of the Internet and an overview of its operation, and outlines the structure of the World Wide Web, including a summary of its lingua franca, the Hypertext Transfer Protocol (HTTP) and the Hypertext Markup Language (HTML).
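To make the Web’s lingua franca concrete (this illustration is mine, not an example from the book), an HTTP/1.0 exchange is simply plain text: a request line plus headers, and a response with a status line, headers, and a body. A minimal sketch in Python, using invented message contents:

```python
# Minimal sketch of the HTTP/1.0 message format the book summarizes.
# The host, path, and sample response below are illustrative only.

def build_request(host, path):
    """Build a plain-text HTTP/1.0 GET request."""
    return (f"GET {path} HTTP/1.0\r\n"
            f"Host: {host}\r\n"
            f"User-Agent: example-agent/0.1\r\n"
            f"\r\n")

def parse_response(raw):
    """Split a raw HTTP response into (status code, headers dict, body)."""
    head, _, body = raw.partition("\r\n\r\n")
    lines = head.split("\r\n")
    status = int(lines[0].split()[1])      # "HTTP/1.0 200 OK" -> 200
    headers = {}
    for line in lines[1:]:
        name, _, value = line.partition(":")
        headers[name.strip().lower()] = value.strip()
    return status, headers, body

sample = ("HTTP/1.0 200 OK\r\n"
          "Content-Type: text/html\r\n"
          "\r\n"
          "<html><body>Hello</body></html>")
status, headers, body = parse_response(sample)
print(status, headers["content-type"])   # 200 text/html
```

Everything an agent needs, from requesting a page to reading its type and body, is visible in these two text messages, which is why a Web robot can be written in a few pages of script.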

Part 2, “Web Robot Construction,” shows how a computer program can travel through the Web to gather information for its human master, describing several existing programs (or “spiders”), including Lycos, Harvest, and WebAnts, that crawl through the Web in order to index it. Automated programs can impose undesirable loads on Web servers that hinder access by their primary human audience, so this section lays down operational guidelines for Web robots, including four laws of Web robotics and six laws for robot operators. It provides a detailed summary of HTTP, showing how the protocol supports the recommended operational guidelines, and then outlines the architecture and operation of the WebWalker, an Internet agent that searches the Web for dangling pointers and reports them back to its master.
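The core mechanics of such a robot, extracting links from a page and honoring the robots-exclusion convention (robots.txt) that the book’s operational guidelines build on, can be sketched with Python’s standard library. This is my own sketch with invented URLs and rules, not the book’s Perl code:

```python
from html.parser import HTMLParser
from urllib.robotparser import RobotFileParser

# Parse a canned robots.txt; a real robot would fetch it from the server.
rp = RobotFileParser()
rp.parse([
    "User-agent: *",
    "Disallow: /private/",
])

class LinkExtractor(HTMLParser):
    """Collect href targets: the raw material a spider crawls, or a
    dangling-link checker (like the book's WebWalker) verifies."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href":
                    self.links.append(value)

page = ('<a href="/docs/index.html">Docs</a> '
        '<a href="/private/notes.html">Notes</a>')
extractor = LinkExtractor()
extractor.feed(page)

# A polite robot visits only the links robots.txt permits.
allowed = [url for url in extractor.links if rp.can_fetch("*", url)]
print(allowed)   # ['/docs/index.html']
```

Filtering candidate links through the exclusion rules before fetching is exactly the kind of server-friendly behavior the book’s “laws” for robots and their operators prescribe.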

While much of the information available on the Web today is provided without charge to the user, the environment can only become self-sustaining if ways are found for people to pay for what they find valuable. Part 3, “Agents and Money on the Net,” discusses two important technologies to support commercial transactions on the Internet: security (which enables people to entrust proprietary information to the net), and electronic cash and payment services (which allow people to pay for a net-based service in the same environment in which they receive it).

Part 4, “Bots in Cyberspace,” describes several software denizens of the Internet that do not fall directly under the category of assistants to humans, but demonstrate some of the techniques and capabilities on which the earlier chapters draw. These include malicious applications that can travel over the network, such as worms and viruses, and programs such as Julia and Colin, which impersonate humans in online virtual worlds, also known as MUDs.

Part 5, “Appendices,” is the largest of the book’s five parts. It includes the specifications for HTTP 1.0; Perl listings for two Web robots (the WebWalker from Part 2, and a WebShopper that compares prices for CDs and books offered by online stores); lists of online bookstores, CD shops, and MUD sites; and a list of Web spiders and robots. These appendices offer a wealth of detail, and for many readers will be the most frequently referenced section of the book. Ironically, the technology described in the book makes it less and less important to have lists such as these printed on paper and occupying space on a shelf, but users new to the Internet will find these lists an invaluable source of starting points. Unfortunately, but not surprisingly, some of the most important links printed in the book (such as that for Martijn Koster’s list of active robots) are already out of date.

The volume includes chapter-by-chapter bibliographies through 1995 and a comprehensive index.

Reviewer: H. Van Dyke Parunak. Review #: CR124571 (9702-0093)
Search Process (H.3.3)
Data Sharing (H.3.5)
Distributed Applications (C.2.4)
Human Factors (H.1.2)
Hypertext Navigation And Maps (H.5.1)
Security and Protection (C.2.0)
Other reviews under "Search Process": Date
Search improvement via automatic query reformulation
Gauch S., Smith J. ACM Transactions on Information Systems 9(3): 249-280, 1991. Type: Article
Jul 1 1993
Criteria for the selection of search strategies in best-match document-retrieval systems
McCall F., Willett P. International Journal of Man-Machine Studies 25(3): 317-326, 1986. Type: Article
Oct 1 1987
The use of adaptive mechanisms for selection of search strategies in document retrieval systems
Croft W. (ed), Thompson R. Research and development in information retrieval, King’s College, Cambridge, 1984. Type: Proceedings
Aug 1 1985
