Human beings consider some environments hostile, such as deep space, nuclear reactors, or contaminated areas, and are understandably not keen on working in them. In such cases, robots can step in to accomplish tasks in place of humans. The idea is not new, as the broad reference section of this paper demonstrates; nor is it easy, as the whole paper leading up to the references explains.
Teams of robots whose behavior is inspired by social insect colonies, such as ants or termites, have been in use for some time now. Yet questions remain. Are they efficient? Effective? Quick? Reliable? This paper uses computational analysis to answer them. Computational analysis tackles problems that are special cases of more general problems. If a solution exists for the general problem, then it also solves the special case; unfortunately, the converse does not hold: solutions to specific problems do not always solve the more general ones. In any case, problems are solved by algorithms; algorithms can be mapped to problems, and their efficiency then depends, among other things, on the environment, the initial conditions, and whether the problem is a one-off or easily reproduced. If algorithms can be mapped to problems, then teams of robots can be assigned to problems; teams are built by choosing individual robots from libraries of robots. Although this may seem odd at first, it is after all what humankind has been doing for ages: selecting the individuals best suited for particular tasks.
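The asymmetry between general problems and their special cases can be made concrete with a toy example of our own (not taken from the paper): sorting a list is a general problem, and finding its maximum is a special case of it.

```python
# Illustrative example (ours, not the paper's): a solution to the general
# problem (sorting) automatically solves the special case (finding the
# maximum), but a special-case solution does not work in the other direction.

def solve_general(xs):
    """General problem: return the list in ascending order."""
    return sorted(xs)

def solve_special_via_general(xs):
    """The general solution also solves the special case:
    the maximum is the last element of the sorted list."""
    return solve_general(xs)[-1]

def solve_special_only(xs):
    """A direct special-case solution: scan for the maximum.
    It tells us nothing about how to sort the list."""
    best = xs[0]
    for x in xs[1:]:
        if x > best:
            best = x
    return best

data = [3, 1, 4, 1, 5]
assert solve_special_via_general(data) == solve_special_only(data) == 5
```

Both routes agree on the special case, but only the general solver can answer the general question.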
The paper first models the environment, the robots, their controllers, and the algorithms they must use to accomplish the assigned tasks. It then demonstrates that accomplishing these tasks is mathematically intractable in the general case, and shows how imposing certain constraints makes the same problems tractable.
The environment is modeled as a finite grid of squares that robots can occupy, travel through, or work in. Each square has properties that can affect robot behavior; some squares are identified as structures, and in some of these, structural damage is indicated by certain properties. Individual robots are modeled as finite state automata that can move, recognize a square's type, and change the state of the square according to sets of rules. Each robot occupies exactly one square at any given time, cannot avoid collisions, and can always perform changes on squares. Robots in a team communicate with each other via the changes they make to squares, and both communication and movement can be performed synchronously or asynchronously. Finally, computational goals are set for the whole robot team: any activity on any square must be performed correctly, as quickly as possible, and in any case in at most polynomial time; problems solvable within this bound are said to be polynomial-time tractable.
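The model just described can be sketched in a few lines of code. The grid contents, state names, and transition rules below are illustrative assumptions of this review, not the paper's actual formalism: a finite grid of typed squares, and a robot as a finite state automaton that reads the square it occupies, rewrites it (thereby "communicating" through the environment), and moves.

```python
# A minimal sketch of the grid-and-automaton model; all names and rules
# here are illustrative assumptions, not taken from the paper.

from dataclasses import dataclass

GRID = [
    ["empty", "structure", "damaged"],
    ["empty", "empty",     "structure"],
]

# Transition rules: (state, square_type) -> (new_state, new_square_type, move)
# 'move' is a (drow, dcol) step; (0, 0) means stay put.
RULES = {
    ("search", "empty"):     ("search", "empty",     (0, 1)),
    ("search", "structure"): ("search", "structure", (0, 1)),
    ("search", "damaged"):   ("repair", "damaged",   (0, 0)),
    ("repair", "damaged"):   ("search", "structure", (0, 1)),  # fix, move on
}

@dataclass
class Robot:
    state: str
    row: int
    col: int

    def step(self, grid):
        square = grid[self.row][self.col]
        new_state, new_square, (dr, dc) = RULES[(self.state, square)]
        grid[self.row][self.col] = new_square   # communicate via the square
        self.state = new_state
        # one square at a time; clamp movement to the grid's edges
        self.row = min(max(self.row + dr, 0), len(grid) - 1)
        self.col = min(max(self.col + dc, 0), len(grid[0]) - 1)

robot = Robot(state="search", row=0, col=0)
for _ in range(4):
    robot.step(GRID)
print(GRID[0])  # the damaged square has been rewritten as "structure"
```

A teammate following the same rules would later read the rewritten square and see the repair, which is exactly the indirect, environment-mediated communication the model relies on.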
These theoretical computational goals are then applied to real-world construction, maintenance, and repair activities, first for heterogeneous robot teams, composed of robots selected from different libraries, and then for homogeneous robot teams, composed of robots selected from the same library. In general, homogeneous teams perform better than heterogeneous ones.
These results, however, do not transfer well to real-world problems: the mathematical modeling is too abstract. On the other hand, adding constraints to describe reality more accurately makes the algorithms run too slowly.
All in all, though the reader may occasionally get the eerie sense of a world where robots roam freely with little human interaction, the sobering reality remains: this is not science fiction but an academic paper, describing problems and the algorithms that can help humans solve them. Humans still rule!