magnitude. This is probably one of the reasons that most of the exact algorithms were never implemented. One exception is Bañon’s implementation [3] of the O(n^5) algorithm of Schwartz and Sharir [14] for a ladder moving in a two-dimensional workspace, which performs surprisingly well, and much better than the worst-case theoretical analysis predicts. The reason is that the running time of the algorithm is sensitive to the actual complexity of the free space, and this is in practice far less than the Θ(n^f) worst-case bound.
These observations inspired research [1,2,4,7–11,13,15,16,19–21] in which geometric problems are studied under certain (hopefully realistic) assumptions on the input; in the case of motion planning, these are assumptions on the environment in which the robot is moving. The goal of this line of research is to better predict the practical performance of algorithms. For instance, van der Stappen et al. [16] studied the free-space complexity for a bounded-reach robot moving in environments consisting of fat obstacles. (A robot has bounded reach if it is not too large compared to the obstacles in its workspace; an obstacle is fat if it has no long and skinny parts.) They showed that in this restricted type of environment the worst-case free-space complexity is only Θ(n). Van der Stappen [17,18] also proved that in such environments naive and slightly adapted versions of Schwartz and Sharir’s ladder algorithm run in O(n^2) and O(n log n) time, respectively, which is more in line with the experimental results of Bañon. Van der Stappen and Overmars [19] used the linear free-space complexity result to obtain an efficient general approach to robot motion planning amidst fat obstacles. These results were extended to the more general setting of low-density environments by van der Stappen et al. [20].
De Berg et al. [5] brought together several of the realistic input models that had been proposed in the literature, namely fatness, low density, unclutteredness, and small simple-cover complexity; see Section 2 for formal definitions of these models. They showed that these models form a strict hierarchy, in the sense that fatness implies low density, which in turn implies unclutteredness, which in turn implies small simple-cover complexity, and that no other implications exist between the models. A natural question that arises is whether the results of van der Stappen et al. [20] remain valid when, instead of a low-density scene, we assume a more general setting, such as an uncluttered scene or a scene with small simple-cover complexity. In other words, does the complexity of the free space of a bounded-reach robot with f degrees of freedom moving in an uncluttered scene (or in a scene with small simple-cover complexity) remain O(n)?
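Schematically, the hierarchy of [5] is the chain of implications

    fatness ⇒ low density ⇒ unclutteredness ⇒ small simple-cover complexity,

with none of the reverse implications holding; the question above thus concerns the two weakest models in this chain.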
The main result of this paper is a negative answer to this question. We prove that the maximum complexity of the free space of a bounded-reach robot moving in either an uncluttered scene or a scene with small simple-cover complexity is Θ(n^{f/2} + n) when the workspace is two-dimensional. These bounds fit nicely between the Θ(n) bound for low-density scenes and the Θ(n^f) bound for general scenes. For three-dimensional uncluttered scenes the bound becomes Θ(n^{2f/3} + n). Contrary to the planar case, small simple-cover complexity does not result in a reduced maximum free-space complexity for three-dimensional workspaces: the maximum complexity is Θ(n^f).
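These bounds can be summarized as follows, for a bounded-reach robot with f degrees of freedom (the Θ(n) bound for low-density scenes, in both dimensions, is due to van der Stappen et al. [20]):

    scene type                        2D workspace        3D workspace
    low density [20]                  Θ(n)                Θ(n)
    uncluttered                       Θ(n^{f/2} + n)      Θ(n^{2f/3} + n)
    small simple-cover complexity     Θ(n^{f/2} + n)      Θ(n^f)
    general                           Θ(n^f)              Θ(n^f)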
Our upper-bound proofs use the concept of guarding sets [6]. A guarding set for a collection of
objects—in our case the obstacles in the robot’s workspace—is, informally speaking, a set of points
(sometimes referred to as guards) that approximates the spatial distribution of these objects. Guarding
sets allow us to define a simplifying generalization [6] of unclutteredness that implies small simple-cover
complexity in 3D and is even equivalent to it in the plane.
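To make this concrete (a sketch only; the precise, parameterized definition is recalled in Section 2, following [6]): a point set G is a κ-guarding set for a collection O of objects if every axis-parallel hypercube avoiding all guards intersects only few objects, that is, for every axis-parallel hypercube σ,

    σ ∩ G = ∅  ⇒  |{ o ∈ O : o ∩ σ ≠ ∅ }| ≤ κ.

Under this view, unclutteredness corresponds roughly to the bounding-box vertices of the objects themselves forming such a guarding set.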
Section 2 recalls the input models that play a role in this paper and briefly reviews the relations between
these models and the concept of guarding sets. Section 3 establishes an upper bound on the number of
large objects intersecting a hypercube given the number of guards in its vicinity. In Sections 4 and 5 we