Papers and Talks by Kenneth L. Clarkson
I left Lucent in January 2007, so this page is no longer maintained.
with John D. Hobby.
2006.
We give a model for the performance
impact on wireless systems of the limitations of certain resources, namely,
the base-station power amplifier and the available OVSF codes. These
limitations are readily modeled in the loss model formulation as a
stochastic knapsack. A simple and well-known recurrence of Kaufman
and Roberts allows the predictions of the model to be efficiently calculated.
We discuss the assumptions and approximations we have made that allow the use
of the model. We have included the model in Ocelot, a Lucent tool for
modeling and optimizing cellular phone systems. The model is fast to compute,
differentiable with respect to the relevant parameters, and able to model
broad ranges of capacity and resource use. These conditions are critical to
our application of optimization.
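The Kaufman-Roberts recurrence itself is short enough to state directly. Below is a minimal Python sketch; the capacity, class sizes, and offered loads in the example are illustrative assumptions, not values from the paper.

```python
def kaufman_roberts(C, b, rho):
    """Occupancy distribution and per-class blocking for a stochastic
    knapsack: capacity C, class-k resource requirement b[k], offered
    load rho[k].  Recurrence: n*q(n) = sum_k b[k]*rho[k]*q(n - b[k])."""
    q = [0.0] * (C + 1)
    q[0] = 1.0
    for n in range(1, C + 1):
        q[n] = sum(b[k] * rho[k] * q[n - b[k]]
                   for k in range(len(b)) if n >= b[k]) / n
    total = sum(q)
    q = [v / total for v in q]                # normalize to a distribution
    # Class k is blocked when fewer than b[k] units remain free.
    blocking = [sum(q[C - b[k] + 1:]) for k in range(len(b))]
    return q, blocking

# Illustrative example: capacity 20 units, classes of size 1 and 4.
occupancy, B = kaufman_roberts(20, b=[1, 4], rho=[5.0, 2.0])
```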
Survey talk related to two papers below, 2005.
Metric spaces are very simple geometric
structures: they comprise a set and a distance measure that obeys the
triangle inequality. `epsilon`-nets, also called "low-dispersion sets" or
"farthest-point samples", are subsets of metric spaces that are
"well-distributed" in a certain sense. They can be found efficiently using a
greedy algorithm. This talk will review that algorithm, and briefly survey
applications of `epsilon`-nets in motion planning, nearest neighbor
searching, building meshes to approximate curves and surfaces, and building
triangulations to approximate functions.
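The greedy algorithm the talk reviews is only a few lines. Here is a sketch, assuming nothing but a black-box distance function `dist`:

```python
def farthest_point_sample(points, k, dist):
    """Greedy epsilon-net construction: repeatedly add the point farthest
    from those chosen so far.  The final farthest distance is the epsilon
    for which the result is an epsilon-net of the input."""
    centers = [points[0]]                      # arbitrary starting point
    d = [dist(p, centers[0]) for p in points]  # distance to nearest center
    for _ in range(k - 1):
        i = max(range(len(points)), key=d.__getitem__)
        centers.append(points[i])
        d = [min(dj, dist(pj, points[i])) for dj, pj in zip(d, points)]
    return centers
```

Each round costs one pass over the input, so `k` centers take `O(nk)` distance evaluations.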
In
STOC 2006: Proceedings of the Thirty-eighth Annual SIGACT
Symposium, 2006.
This work addresses the problem of
approximating a manifold by a simplicial mesh, and the related problem of
building triangulations for the purpose of piecewise-linear approximation of
functions. It has long been understood that the vertices of such meshes or
triangulations should be "well-distributed," or satisfy certain "sampling
conditions." This work clarifies and extends some algorithms for finding such
well-distributed vertices, by showing that they can be regarded as finding
`\epsilon`-nets or
Delone sets in appropriate metric
spaces. In some cases where such Delone properties were already understood,
such as for meshes to approximate smooth manifolds that bound convex bodies,
the upper and lower bound results are extended to more general manifolds; in
particular, under some natural conditions, the minimum Hausdorff distance for
a mesh with `n` simplices to a `d`-manifold `M` is
`\Theta((\int_M\sqrt{|\kappa(x)|}dx/n)^{2/d})`
as
`n\rightarrow\infty`, where `\kappa(x)` is the Gaussian curvature at point
`x\in M`. We also relate these constructions to Dudley's approximation scheme
for convex bodies, which can be interpreted as involving an `\epsilon`-net in
a metric space whose distance function depends on surface normals. Finally, a
novel scheme is given, based on the Steinhaus transform, for scaling a metric
space by a Lipschitz function to obtain a new metric. This scheme is applied
to show that some algorithms for building finite element meshes and for
surface reconstruction can also be interpreted in the framework of metric
space `\epsilon`-nets.
Notes:
- Revised from 20/2/06: patches to the upper bound and lower bound
proofs, many typos, etc.
- Revised from version of 11/19/05: better
upper bound proof, fixed typos in the Dudley example, cites Peyré/Cohen.
(Survey).
In G. Shakhnarovich, T. Darrell, and P. Indyk, editors,
Nearest-Neighbor Methods for Learning and Vision: Theory and
Practice, pages 15--59.
MIT Press, 2006.
Given a set `S` of points in a metric
space with distance function `D`, the nearest-neighbor searching
problem is to build a data structure for `S` so that for an input query point
`q`, the point `s\in S` that minimizes `D(s,q)` can be found quickly. We
survey approaches to this problem, and its relation to concepts of metric
space dimension. Several measures of dimension can be estimated using
nearest-neighbor searching, while others can be used to estimate the cost of
that searching. In recent years, several data structures have been proposed
that are provably good for low-dimensional spaces, for some particular
measures of dimension. These and other data structures for nearest-neighbor
searching are surveyed.
Notes:
Some dimensions discussed: box, packing, Hausdorff, Assouad, pointwise,
information, correlation, quantization, energy, Rényi.
with Kasturi Varadarajan.
Discrete & Computational Geometry, to appear.
Preliminary version in SOCG '05: Proceedings of the Twenty-first
Symposium on Computational Geometry, 2005.
Given a collection `S` of subsets of
some set `U`, and `M \subset U`, the set cover problem is to find
the smallest subcollection `C\subset S` such that `M` is a subset of the
union of the sets in `C`. While the general problem is NP-hard to solve, even
approximately, here we consider some geometric special cases, where usually
`U = \RR^d`. Combining previously known techniques [BG,CF], we show that
polynomial time approximation algorithms with provable performance exist,
under a certain general condition: that for a random subset `R\subset S` and
function `f()`, there is a decomposition of the complement
`U\setminus\cup_{Y\in R} Y` into an expected `f(|R|)` regions, each region of
a particular simple form. Under this condition, a cover of size `O(f(|C|))`
can be found in polynomial time. Using this result, and combinatorial
geometry results implying bounding functions `f(c)` that are nearly linear,
we obtain `o(\log c)` approximation algorithms for covering by fat triangles,
by pseudodisks, by a family of fat objects, and others. Similarly,
constant-factor approximations follow for similar-sized fat triangles and fat
objects, and for fat wedges. With more work, we obtain constant-factor
approximation algorithms for covering by unit cubes in `\RR^3`, and for
guarding an `x`-monotone polygonal chain.
In
SODA '05: Proceedings of the Sixteenth Annual ACM-SIAM
Symposium on Discrete Algorithms, 2005.
Given an `n\times d` matrix `A` and an
`n`-vector `b`, the `L_1` regression problem is to find the vector
`x` minimizing the objective function `||Ax-b||_1`, where `||y||_1 \equiv
\sum_i |y_i|` for vector `y`. This paper gives an algorithm needing `O(n\log
n)d^{O(1)}` time in the worst case to obtain an approximate solution, with
objective function value within a fixed ratio of optimum. Given `\epsilon>0`,
a solution whose value is within `1+\epsilon` of optimum can be obtained
either by a deterministic algorithm using an additional
`O(n)(d/\epsilon)^{O(1)}` time, or by a Monte Carlo algorithm using an
additional `O((d/\epsilon)^{O(1)})` time. The analysis of the randomized
algorithm shows that weighted coresets exist for `L_1` regression. The
algorithms use the ellipsoid method, gradient descent, and random
sampling.
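For orientation, the objective itself can be minimized exactly as a linear program; the sketch below does so with `scipy.optimize.linprog`. This is a baseline formulation, not the paper's algorithm (which combines the ellipsoid method, gradient descent, and random sampling to get the stated bounds).

```python
import numpy as np
from scipy.optimize import linprog

def l1_regression(A, b):
    """Minimize ||Ax - b||_1 as the linear program
    min sum(t) subject to -t <= Ax - b <= t."""
    n, d = A.shape
    I = np.eye(n)
    A_ub = np.block([[A, -I], [-A, -I]])
    b_ub = np.concatenate([b, -b])
    cost = np.concatenate([np.zeros(d), np.ones(n)])
    bounds = [(None, None)] * d + [(0, None)] * n  # x free, t >= 0
    res = linprog(cost, A_ub=A_ub, b_ub=b_ub, bounds=bounds)
    return res.x[:d]
```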
Let `S` be a set of `n` points in `d`
dimensions. A `k`-set of `S` is a subset of size `k` that is the intersection
of `S` with some open halfspace. This note shows that if the points of `S`
are random, with a coordinate-wise independent distribution, then the
expected number of `k`-sets of `S` is `O((k\log(en/k))^{d-1})2^d/(d-1)!`, as
`k\log n->oo`, with a constant independent of the dimension.
with John D. Hobby.
In
VTC-2004-Spring: IEEE Vehicular Technology Conference,
volume 3, pages 1534--1538, 2004.
We introduce a simple model of the effect
of temporal variation in signal strength on active-set membership, for
cellular phone systems that use the soft-handoff algorithm of IS-95a. This
model is based on a steady-state calculation, and its applicability is
confirmed by Monte Carlo studies.
Given a set `S` of `n` sites (points),
and a distance measure `d`, the nearest neighbor searching problem
is to build a data structure so that given a query point `q`, the site
nearest to `q` can be found quickly. This paper gives a data structure for
this problem; the data structure is built using the distance function as a
"black box". The structure is able to speed up nearest neighbor searching in
a variety of settings, for example: points in low-dimensional or structured
Euclidean space, strings under Hamming and edit distance, and bit vector data
from an OCR application. The data structures are observed to need linear
space, with a modest constant factor. The preprocessing time needed per site
is observed to match the query time. The data structure can be viewed as an
application of a "kd-tree" approach in the metric space setting, using
Voronoi regions of a subset in place of axis-aligned boxes.
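A simplified sketch of this kind of structure, with random pivots as the subset and covering radii for triangle-inequality pruning; the fanout and leaf size here are illustrative choices, not the paper's exact construction.

```python
import random

class MetricTree:
    """Split S by the Voronoi regions of a few random pivots, store each
    region's covering radius, and prune candidate regions with the
    triangle inequality."""
    LEAF, FANOUT = 16, 4

    def __init__(self, points, dist):
        self.dist, self.points, self.children = dist, points, None
        if len(points) > self.LEAF:
            self.pivots = random.sample(points, self.FANOUT)
            buckets = [[] for _ in self.pivots]
            for p in points:
                i = min(range(len(self.pivots)),
                        key=lambda j: dist(p, self.pivots[j]))
                buckets[i].append(p)
            if max(len(b) for b in buckets) < len(points):  # avoid degenerate split
                self.radii = [max((dist(v, p) for p in b), default=0.0)
                              for v, b in zip(self.pivots, buckets)]
                self.children = [MetricTree(b, dist) for b in buckets]

    def nearest(self, q, best=None):
        best = best or (float("inf"), None)
        if self.children is None:              # leaf: scan directly
            for p in self.points:
                d = self.dist(q, p)
                if d < best[0]:
                    best = (d, p)
            return best
        dq = [self.dist(q, v) for v in self.pivots]
        for i in sorted(range(len(dq)), key=dq.__getitem__):
            # every point of region i lies within radii[i] of its pivot,
            # so no point there can be closer than dq[i] - radii[i]
            if dq[i] - self.radii[i] < best[0]:
                best = self.children[i].nearest(q, best)
        return best
```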
This paper gives an algorithm for solving
linear systems, using a randomized version of incomplete `LU` factorization
together with iterative improvement. The factorization uses Gaussian
elimination with partial pivoting, and preserves sparsity during
factorization by randomized rounding of the entries. The resulting
approximate factorization is then applied to estimate the solution. This
simple technique, combined with iterative improvement, is demonstrated to be
effective for a range of linear systems. When applied to medium-sized sample
matrices for PDEs, the algorithm is qualitatively like multigrid: the work
per iteration is typically linear in the order of the matrix, and the number
of iterations to achieve a small residual is typically on the order of
fifteen to twenty. The technique is also tested for a sample of asymmetric
matrices from the Matrix Market, and is found to have similar
behavior for many of them.
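The abstract does not spell out the rounding rule, so the sketch below uses a standard unbiased randomized rounding (entries smaller than a threshold `tau` become `0` or `±tau`, preserving the expectation) to illustrate the overall loop of approximate factorization plus iterative improvement; dense arrays are used for simplicity, though the point of the rounding is sparsity.

```python
import numpy as np
from scipy.linalg import solve_triangular

def randomized_ilu(A, tau, rng):
    """LU by Gaussian elimination with partial pivoting, randomly rounding
    trailing entries smaller than tau to 0 or +/-tau (unbiased; the zeros
    are what keep a sparse implementation sparse).  Assumes A nonsingular."""
    n = len(A)
    U = np.array(A, dtype=float)
    L = np.eye(n)
    piv = np.arange(n)
    for k in range(n - 1):
        i = k + int(np.argmax(np.abs(U[k:, k])))       # partial pivoting
        U[[k, i]] = U[[i, k]]
        L[[k, i], :k] = L[[i, k], :k]
        piv[[k, i]] = piv[[i, k]]
        L[k + 1:, k] = U[k + 1:, k] / U[k, k]
        U[k + 1:, k:] -= np.outer(L[k + 1:, k], U[k, k:])
        T = U[k + 1:, k + 1:]
        small = np.abs(T) < tau
        keep = rng.random(T.shape) < np.abs(T) / tau   # P(keep) = |t|/tau
        U[k + 1:, k + 1:] = np.where(small,
                                     np.where(keep, tau * np.sign(T), 0.0), T)
    return L, U, piv

def solve_refined(A, b, tau=1e-3, iters=20, seed=0):
    """Iterative improvement, with the approximate factors as the solver."""
    rng = np.random.default_rng(seed)
    L, U, piv = randomized_ilu(A, tau, rng)
    x = np.zeros(len(b))
    for _ in range(iters):
        r = b - A @ x                                  # current residual
        y = solve_triangular(L, r[piv], lower=True)
        x += solve_triangular(U, y)
    return x
```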
with Simon C. Borst, John Graybeal, Harish
Viswanathan, and Phillip Whiting.
Bell Labs Technical Journal, 8(2):33--47, 2003.
3G wireless systems such as 3G-1X, 1xEV-DO
and 1xEV-DV provide support for a variety of high-speed data applications.
The success of these services critically relies on the capability to ensure
an adequate QoS experience to users at an affordable price. With wireless
bandwidth at a premium, traffic engineering and network planning play a vital
role in addressing these challenges. We present models and techniques that we
have developed for quantifying the QoS perception of 1xEV-DO users generating
FTP or Web browsing sessions. We show how user-level QoS measures may be
evaluated by means of a Processor-Sharing model which explicitly accounts for
the throughput gains from multi-user scheduling. The model provides simple
analytical formulas for key performance metrics such as response times,
blocking probabilities and throughput. Analytical models are especially
useful for network deployment and in-service tuning purposes due to the
intrinsic difficulties associated with simulation-based optimization
approaches. We discuss the application of our results in the context of
Ocelot, which is a Lucent tool for wireless network planning and
optimization.
with K. Georg Hampel, John D. Hobby, and Paul A.
Polakos.
In
VTC-2003-Fall: IEEE Vehicular Technology Conference,
pages 927--932, 2003.
For 3G cellular networks, capacity is an
important objective, along with coverage, when characterizing the performance
of high-data-rate services. In live networks, the effective network capacity
heavily depends on the degree to which the traffic load is balanced over all
cells, so changing traffic patterns demand dynamic network reconfiguration to
maintain good performance. Using a four-cell sample network, and antenna
tilt, cell power level and pilot fraction as adjustment variables, we study
the competitive character of network coverage and capacity in such a network
optimization process, and how it compares to the CDMA-intrinsic
coverage-capacity tradeoff driven by interference. We find that each set of
variables provides its distinct coverage-capacity tradeoff behavior with
widely varying and application-dependent performance gains. The study shows
that the impact of dynamic load balancing highly depends on the choice of the
tuning variable as well as the particular tradeoff range of operation.
with John D. Hobby.
2003.
We give a simple analytic model of coverage
probability for CDMA cellular phone systems under lognormally distributed
shadow fading. Prior analyses have generally considered the coverage
probability of a single antenna; here we consider the probability of coverage
by an ensemble of antennas, using some independence assumptions, but also
modeling a limited form of dependency among the antenna fades. We use the
Fenton-Wilkinson approach of approximating the external interference `I_0` as
lognormally distributed. We show that our model gives a coverage probability
that is generally within a few percent of Monte Carlo estimates, over a wide
regime of antenna strengths and other relevant parameters.
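The Fenton-Wilkinson step is plain moment matching: approximate a sum of lognormals by a single lognormal with the same mean and variance. A sketch, in natural-log parameters and assuming independence (which the paper partly relaxes):

```python
import math

def fenton_wilkinson(mus, sigmas):
    """Parameters (mu, sigma) of the lognormal matching the mean and
    variance of a sum of independent lognormals exp(N(mu_i, sigma_i^2))."""
    mean = sum(math.exp(m + s * s / 2) for m, s in zip(mus, sigmas))
    var = sum(math.exp(2 * m + s * s) * (math.exp(s * s) - 1)
              for m, s in zip(mus, sigmas))
    sigma2 = math.log(1 + var / mean ** 2)   # matches the second moment
    mu = math.log(mean) - sigma2 / 2         # matches the mean
    return mu, math.sqrt(sigma2)
```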
with Mihai Bădoiu.
In
SODA '03: Proceedings of the Fourteenth Annual ACM-SIAM
Symposium on Discrete Algorithms, 2003.
Given a set of points `P\subset R^d` and
value `\epsilon>0`, an `\epsilon`-core-set `S \subset P` has the
property that the smallest ball containing `S` is within `\epsilon` of the
smallest ball containing `P`. This paper shows that any point set has an
`\epsilon`-core-set of size `|~1/\epsilon~|`, and this bound is tight in the
worst case. A faster algorithm given here finds an `\epsilon`-core-set of
size at most `2/\epsilon`. These results imply the existence of small
core-sets for solving approximate `k`-center clustering and related problems.
The sizes of these core-sets are considerably smaller than the previously
known bounds, and imply faster algorithms; one such algorithm needs `O(d
n/\epsilon+(1/\epsilon)^{5})` time to compute an `\epsilon`-approximate
minimum enclosing ball (1-center) of `n` points in `d` dimensions. A simple
gradient-descent algorithm is also given, for computing the minimum enclosing
ball in `O(d n / \epsilon^{2})` time. This algorithm also implies slightly
faster algorithms for computing approximately the smallest radius `k`-flat
fitting a set of points.
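The gradient-descent algorithm mentioned at the end has a very short statement: repeatedly step from the current center toward the farthest point, with step size `1/(i+1)`. A sketch:

```python
import numpy as np

def approx_meb(points, eps):
    """(1+eps)-approximate minimum enclosing ball: O(1/eps^2) rounds of
    stepping toward the current farthest point."""
    P = np.asarray(points, dtype=float)
    c = P[0].copy()
    for i in range(1, int(np.ceil(1.0 / eps ** 2)) + 1):
        far = P[np.argmax(np.linalg.norm(P - c, axis=1))]  # farthest point
        c += (far - c) / (i + 1)
    radius = float(np.linalg.norm(P - c, axis=1).max())
    return c, radius
```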
with Mihai Bădoiu.
Manuscript, 2002.
Given a set of points `P\subset \RR^d`
and value `\epsilon>0`, an `\epsilon`-core-set `S \subset P` has the
property that the smallest ball containing `S` is within `\epsilon` of the
smallest ball containing `P`. This paper shows that any point set has an
`\epsilon`-core-set of size `|~1/\epsilon~|`, and this bound is tight in the
worst case. Some experimental results are also given, comparing this
algorithm with a previous one, and with a more powerful, but slower
one.
Notes:
- Revised May 2006: removed dependence on the size of the smallest ball,
considered a variant, etc.
with Wim Sweldens and Alice Zheng.
IEEE Trans. Commun., 49(2):253--261, 2001.
We present an algorithm based on lattice
reduction for the fast decoding of diagonal differential modulation across
multiple antennas. While the complexity of the maximum likelihood algorithm is
exponential both in the number of antennas and the rate, the complexity of our
approximate lattice algorithm is polynomial in the number of antennas and the
rate. We show that the error performance of our lattice algorithm is very
close to that of the maximum likelihood algorithm.
Discrete & Computational Geometry, 22:63--93, 1999.
Preliminary version in STOC '97: Proceedings of the Twenty-ninth
SIGACT Symposium, 1997.
Given a set `S` of `n` sites (points), and a
distance measure `d`, the nearest neighbor searching problem is to
build a data structure so that given a query point `q`, the site nearest to
`q` can be found quickly. This paper gives data structures for this problem
when the sites and queries are in a metric space. One data structure, `D(S)`,
uses a divide-and-conquer recursion. The other data structure, `M(S,Q)`, is
somewhat like a skiplist. Both are simple and implementable. The data
structures are analyzed when the metric space obeys a certain sphere-packing
bound, and when the sites and query points are random and have distributions
with an exchangeability property. This property implies, for example, that
query point `q` is a random element of `S\cup\{q\}`. Under these conditions,
the preprocessing and space bounds for the algorithms are close to linear in
`n`. They depend also on the sphere-packing bound, and on the logarithm of
the distance ratio `\upsilon(S)` of `S`, the ratio of the distance
between the farthest pair of points in `S` to the distance between the
closest pair. The data structure `M(S,Q)` requires as input data an
additional set `Q`, taken to be representative of the query points. The
resource bounds of `M(S,Q)` have a dependence on the distance ratio of `S\cup
Q`. While `M(S,Q)` can return wrong answers, its failure probability can be
bounded, and is decreasing in a parameter `K`. Here `K\leq |Q|/n` is chosen
when building `M(S,Q)`. The expected query time for `M(S,Q)` is `O(K\log
n)\log\upsilon(S\cup Q)`, and the resource bounds increase linearly in `K`.
The data structure `D(S)` has expected `O(\log n)^{O(1)}` query time, for
fixed distance ratio. The preprocessing algorithm for `M(S,Q)` can be used to
solve the all-nearest-neighbor problem for `S` in `O(n(\log
n)^2(\log\upsilon(S))^2)` expected time.
The technique of randomized incremental
construction allows a variety of geometric structures to be built quickly.
This note shows that once such a structure is built, it is possible to store
the geometric input data for it so that the structure can be built again by a
randomized algorithm even more quickly. Except for the randomization, this
generalizes the technique of Snoeyink and van Kreveld that applies to planar
problems.
Given points moving with constant, but
possibly different, velocities, the minimum moving diameter problem is to
find the minimum, over all time, of the maximum distance between a pair of
points at each moment. This note gives a randomized algorithm requiring `O(n
\log n )` expected time for this problem, in two and three dimensions. Also
briefly noted is a randomized `O(n\log n\log\log n)` expected-time algorithm
for the discrete 1-center problem in three dimensions; in this problem, a
member `p` of a set `S` of points is desired, whose maximum distance to `S`
is minimum over all points of `S`.
Notes: Slight foreshadowing of sublinear geometric
algorithms.
In
FOCS '94: Proceedings of the Thirty-fifth Annual IEEE Symposium
on Foundations of Computer Science, pages 695--702, 1994.
A simple idea for speeding up the computation
of extrema of a partially ordered set turns out to have a number of
interesting applications in geometric algorithms; the resulting algorithms
generally replace an appearance of the input size `n` in the running time by
an output size `A\leq n`. In particular, the `A` coordinate-wise minima of a
set of `n` points in `R^d` can be found by an algorithm needing `O(nA)` time.
Given `n` points uniformly distributed in the unit square, the algorithm
needs `n+O(n^{5/8})` point comparisons on average. Given a set of `n` points
in `R^d`, another algorithm can find its `A` extreme points in `O(nA)` time.
Thinning for nearest-neighbor classification can be done in time `O(n\log
n)\sum_i A_i n_i`, finding the `A_i` irredundant points among `n_i` points
for each class `i`, where `n=\sum_i n_i` is the total number of input points.
This sharpens a more obvious `O(n^3)` algorithm, which is also given here.
Another algorithm is given that needs `O(n)` space to compute the convex hull
of `n` points in `O(nA)` time. Finally, a new randomized algorithm finds the
convex hull of `n` points in `O(n\log A)` expected time, under the condition
that a random subset of the points of size `r` has expected hull complexity
`O(r)`. All but the last of these algorithms have polynomial dependence on the
dimension `d`, except possibly for linear programming.
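A sketch of the simple idea for coordinate-wise minima, including the move-to-front step that makes random inputs cheap (simplified from the paper's presentation):

```python
def coordinatewise_minima(points):
    """The A minima found so far are kept as a list; each input point is
    tested against that list, so the total cost is O(nA) comparisons.
    Moving the dominating witness to the front is the heuristic behind
    the cheap average case on random inputs."""
    def dominates(q, p):                          # q <= p in every coordinate
        return all(a <= b for a, b in zip(q, p))
    minima = []
    for p in points:
        for j, q in enumerate(minima):
            if dominates(q, p):
                minima.insert(0, minima.pop(j))   # move witness to front
                break
        else:
            minima = [q for q in minima if not dominates(p, q)]
            minima.append(p)
    return minima
```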
Notes: There is some overlap with the work of
Chan and
Ottmann et
al., in particular for finding extreme points.
This paper gives an algorithm for
polytope covering: let `L` and `U` be sets of points in `R^d`,
comprising `n` points altogether. A
cover for `L` from `U` is a set
`C\subset U` with `L` a subset of the convex hull of `C`. Suppose `c` is the
size of a smallest such cover, if it exists. The randomized algorithm given
here finds a cover of size `O(c\log c)`, for `c` large enough.
The algorithm requires `O(c^2n^{1+\delta})` expected time. (In this paper,
`\delta` will denote any fixed value greater than zero.) More exactly, the
time bound is
`O(cn^{1+\delta}+c(nc)^{1/(1+\gamma/(1+\delta))})`,
where `\gamma\equiv 1/|__d/2__|`.
The previous best bounds were `c O(\log n)` cover size in `O(n^d)` time
[MiS]. A variant algorithm is applied to the problem of approximating the
boundary of a polytope with the boundary of a simpler polytope. For an
appropriate measure, an approximation with error `\epsilon` requires
`c=O(1/\epsilon)^{(d-1)/2}` vertices, and the algorithm gives an
approximation with `O(c\log c)` vertices. The algorithms apply ideas
previously used for small-dimensional linear programming. The final result
here applies polytope approximation to the
post office problem:
given `n` points (called sites) in `d` dimensions, build a data structure so
that given a query point `q`, the closest site to `q` can be found quickly.
The algorithm given here is given also a relative error bound `\epsilon`, and
depends on a ratio `\rho`, which is no more than the ratio of the distance
between the farthest pair of sites to the distance between the closest pair
of sites. The algorithm builds a data structure of size
`O(n(\log\rho)/\epsilon^{d/2})` in time `O(n^2(\log\rho)/\epsilon^d)`. With
this data structure, closest-point queries can be answered in `O(\log
n)/\epsilon^{d/2}` time.
Notes:
Brönnimann and Goodrich show that the
same iterative randomized algorithm applies in the general setting of range
spaces of bounded VC-dimension, and that a linear-sized `epsilon`-net implies
a constant-factor approximation algorithm. Their results are extended by this
later paper.
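The iterative reweighting loop at the heart of these algorithms is short; here is a sketch stated for hitting set (covering is symmetric). The weighted random sample below is a crude stand-in for a careful `epsilon`-net construction, and the sample-size constant is an illustrative assumption.

```python
import math
import random

def hitting_set(points, ranges, c, rounds=10000, seed=0):
    """Multiplicative-weights loop in the Bronnimann--Goodrich style:
    sample a small weighted subset; if some range is missed, double the
    weights of that range's points and retry.  `c` is a guess for the
    optimum size (in practice, try c = 1, 2, 4, ...).  `ranges` is a list
    of sets of points."""
    rng = random.Random(seed)
    w = {p: 1.0 for p in points}
    k = max(1, int(4 * c * math.log(c + 2)))     # illustrative net size
    for _ in range(rounds):
        sample = set(rng.choices(points,
                                 weights=[w[p] for p in points], k=k))
        missed = next((R for R in ranges if not (R & sample)), None)
        if missed is None:
            return sample                        # hits every range
        for p in missed:
            w[p] *= 2.0                          # reweight the missed range
    return None
```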
In
SOCG '94: Proceedings of the Tenth Symposium on Computational
Geometry, pages 160--164, 1994.
In
WADS '93: Proceedings of the Third Workshop on Algorithms and
Data Structures, pages 246--252, 1993.
with David Eppstein, Gary L. Miller, Carl
Sturtivant, and Shang-Hua Teng.
International Journal of Computational Geometry and
Applications, 6(3):357--377, 1996.
Preliminary version in SOCG '93: Proceedings of the Ninth Symposium
on Computational Geometry, 1993.
We give a practical and provably good Monte
Carlo algorithm for approximating center points. Let `P` be a set of `n`
points in `\R^d`. A point `c \in \R^d` is a `\beta`-center point of `P` if
every closed halfspace containing `c` contains at least `\beta n` points of
`P`. Every point set has a `1/(d+1)`-center point; our algorithm finds an
`\Omega(1/d^2)`-center point with high probability. Our algorithm has a small
constant factor and is the first approximate center point algorithm whose
complexity is subexponential in `d`. Moreover, it can be optimally
parallelized to require `O(\log^2d\log\log n)` time. Our algorithm has been
used in mesh partitioning methods and can be used in constructing high
breakdown estimators for multivariate datasets in statistics. It has the
potential to improve results in practice for constructing weak
`\epsilon`-nets. We derive a variant of our algorithm whose time bound is
fully polynomial in `d` and linear in `n`, and show how to combine our
approach with previous techniques to compute high quality center points more
quickly.
Presentation at Fifth MSI-Stony Brook Workshop on Computational Geometry,
1996.
This talk sketches: a convenient method
for computing the volumes of Voronoi regions; a proof that "area-stealing"
natural neighbor interpolation works; a scheme for smoother natural neighbor
interpolation alternative to Sibson's method; the interpolation scheme used
in the "finite volume element" method; and the observation that the minimax
piecewise-linear interpolant of a convex function is the (lower) convex
hull.
In
FOCS '92: Proceedings of the Thirty-third Annual IEEE Symposium on
Foundations of Computer Science, pages 387--395, Pittsburgh, PA,
October 1992.
The problem of evaluating the sign of the
determinant of a small matrix arises in many geometric algorithms. Given an
`n\times n` matrix `A` with integer entries, whose columns are all smaller
than `M` in Euclidean norm, the algorithm given here evaluates the sign of
the determinant `\det A` exactly. The algorithm requires an arithmetic
precision of less than `1.5n+2\lg M` bits. The number of arithmetic
operations needed is `O(n^3)+O(n^2)\log OD(A)/\beta`, where `OD(A)|\det A|`
is the product of the lengths of the columns of `A`, and `\beta` is the
number of "extra" bits of precision,
`\min\{\lg(1/bb u)-1.1n-2\lg n-2, \lg N - \lg M - 1.5n - 1\}`,
where `bb u` is the roundoff error in approximate arithmetic, and `N` is the
largest representable integer. Since `OD(A)\leq M^n`, the algorithm requires
`O(n^3\lg M)` time, and `O(n^3)` time when `\beta=\Omega(\log M)`.
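For contrast with the paper's limited-precision approach: with arbitrary-precision integers, the sign can be computed exactly by fraction-free (Bareiss) elimination. A sketch, not the paper's algorithm:

```python
def det_sign_exact(A):
    """Sign of det(A) for a square integer matrix, by fraction-free
    (Bareiss) elimination: every division below is exact, and Python
    integers are arbitrary precision, so there is no roundoff at all."""
    a = [list(map(int, row)) for row in A]
    n, sign, prev = len(a), 1, 1
    for k in range(n - 1):
        if a[k][k] == 0:
            r = next((i for i in range(k + 1, n) if a[i][k] != 0), None)
            if r is None:
                return 0                     # column is zero: singular
            a[k], a[r] = a[r], a[k]          # row swap flips the sign
            sign = -sign
        for i in range(k + 1, n):
            for j in range(k + 1, n):
                a[i][j] = (a[i][j] * a[k][k] - a[i][k] * a[k][j]) // prev
            a[i][k] = 0
        prev = a[k][k]
    d = a[n - 1][n - 1]
    return 0 if d == 0 else (sign if d > 0 else -sign)
```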
Discrete & Computational Geometry, 10:227--233, 1993.
This paper shows that the `i`-level of an
arrangement of hyperplanes in `E^d` has at most `((i+d-1), (d-1))` local
minima. This bound follows from ideas previously used to prove bounds on
`(\le k)`-sets. Using linear programming duality, the Upper Bound Theorem is
obtained as a corollary, giving yet another proof of this celebrated bound on
the number of vertices of a simple polytope in `E^d` with `n` facets.
Notes: This
paper by Mulmuley seems closely related,
and probably should have been cited.
(Survey).
In F. K. Hwang and D.-Z. Du, editors,
Computing in Euclidean
Geometry.
World Scientific Publishing, 1992.
This paper surveys some of the applications of
randomization to computational and combinatorial geometry. Randomization
provides a general way to divide-and-conquer geometric problems, and gives a
simple incremental approach to building geometric structures. The paper
discusses closest-point problems, convex hulls, Voronoi diagrams, trapezoidal
diagrams of line segments, linear programming in small dimension, range
queries, and bounds for point-line incidences and for `(\le k)`-sets.
Relations to the Vapnik-Chervonenkis dimension, PAC-learnability of geometric
concepts, and the Hough transform are briefly noted.
with Kurt Mehlhorn and Raimund Seidel.
Comp. Geom.: Theory and Applications, pages 185--212, 1993.
Preliminary version in Proc. Symp. Theor. Aspects of Comp.
Sci., 1992.
We prove four results on randomized
incremental constructions (RICs):
- an analysis of the expected
behavior under insertions and deletions,
- a fully dynamic data
structure for convex hull maintenance in arbitrary dimensions,
- a
tail estimate for the space complexity of RICs,
- a lower bound on the
complexity of a game related to RICs.
Notes: Among other things, this paper extends Seidel's
"backwards analysis" approach (not far from the "leave one out" technique of
learning theory) to a general version of RIC; this involves a kind of
"searching in history" and exploitation of the exchangeability of
members of a random sample.
with Richard Cole and Robert E. Tarjan.
Int. J. Comp. Geom. and Applications, pages 117--133, 1992.
Preliminary version in Proceedings of the Seventh Annual Symposium
on Computational Geometry, 1991.
We describe randomized parallel algorithms for
building trapezoidal diagrams of line segments in the plane. The algorithms
are designed for a CRCW PRAM. For general segments, we give an algorithm
requiring optimal `O(A+n\log n)` expected work and optimal `O(\log n)` time,
where `A` is the number of intersecting pairs of segments. If the segments
form a simple chain, we give an algorithm requiring optimal `O(n)` expected
work and `O(\log n\log\log n\log^**n)` expected time, and a simpler algorithm
requiring `O(n\log^**n)` expected work. The serial algorithm corresponding to
the latter is among the simplest known algorithms requiring `O(n\log^**n)`
expected operations. For a set of segments forming `K` chains, we give an
algorithm requiring `O(A+n\log^**n+K\log n)` expected work and `O(\log
n\log\log n\log^** n)` expected time. The parallel time bounds require the
assumption that enough processors are available, with processor allocations
every `\log n` steps.
Notes: The
serial version of our basic algorithm, as applied to non-intersecting
segments, is a bit simpler than the divide-and-conquer scheme of the earlier
paper, and not very far from Seidel's
algorithm, independently discovered at the
same time as this one. (His algorithm uses planar point location for a key
task, while we use a sweepline algorithm.)
In
SODA '91: Proceedings of the Second Annual ACM-SIAM Symposium on
Discrete Algorithms, January 1991.
This paper gives a partitioning scheme for the
geometric, planar traveling salesman problem, under the Euclidean metric:
given a set `S` of `n` points in the plane, find a shortest closed tour
(path) visiting all the points. The scheme employs randomization, and gives a
tour that can be expected to be short, if `S` satisfies the condition that a
random subset `R\subset S` has on average a tour much shorter than an optimal
tour of `S`. This condition holds for points independently, identically
distributed in the plane, for example, for which a tour within `1+\epsilon`
of shortest can be found in expected time `nk^2 2^k`, where `k=O(\log\log
n)^3/\epsilon^2`. One algorithm employed in the scheme is of interest in its
own right: when given a simple polygon `P`, it finds a Steiner triangulation
of the interior of `P`. If `P` has `n` sides and perimeter `L_P`, the edges
of the triangulation have total length `L_P O(\log n)`. If this algorithm is
applied to a simple polygon induced by a minimum spanning tree of a point
set, the result is a Steiner triangulation of the set with total length
within a factor of `O(\log n)` of the minimum possible.
Notes: A better partitioning scheme was given by
Eppstein, using quadtrees, and of course Arora's algorithm makes the whole
approach moot. There is a certain foreshadowing here of
"pseudo-triangulations", however.
with Herbert Edelsbrunner, Leonidas J. Guibas, Micha
Sharir, and Emo Welzl.
Discrete & Computational Geometry, 5(2):99--160, 1990.
Preliminary version in FOCS '88: Proceedings of the Twenty-ninth
Annual IEEE Symposium on Foundations of Computer Science, 1988.
with Jon L. Bentley and David B. Levine.
Algorithmica, pages 168--183, 1993.
Preliminary version in SODA '90: Proceedings of the First Annual
ACM-SIAM Symposium on Discrete Algorithms, 1990.
Notes: Related to
this
paper.
Journal of the ACM, 42(2):488--499, 1995.
Preliminary version in FOCS '88: Proceedings of the Twenty-ninth
Annual IEEE Symposium on Foundations of Computer Science, 1988.
This paper gives an algorithm for solving
linear programming problems. For a problem with `n` constraints and `d`
variables, the algorithm requires an expected
`O(d^2n)+(\log n)O(d)^{d/2+O(1)}+O(d^4\sqrt{n}\log n)`
arithmetic operations,
as `n\rightarrow\infty`. The constant factors do not depend on `d`. Also, an
algorithm is given for integer linear programming. Let `\varphi` bound the
number of bits required to specify the rational numbers defining an input
constraint or the objective function vector. Let `n` and `d` be as before.
Then the algorithm requires expected
`O(2^ddn+8^dd\sqrt{n\ln n}\ln n)+d^{O(d)}\varphi\ln n`
operations on numbers with `d^{O(1)}\varphi` bits, as `n\rightarrow\infty`,
where the constant factors do not depend
on `d` or `\varphi`. The expectations are with respect to the random choices
made by the algorithms, and the bounds hold for any given input. The
technique can be extended to other convex programming problems. For example,
an algorithm for finding the smallest sphere enclosing a set of `n` points in
`E^d` has the same time bound.
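Of the related algorithms discussed in the notes below, Seidel's randomized incremental method is the simplest to state. Here is a sketch for `d=2` (maximize `c\cdot x` subject to `a_i\cdot x\leq b_i`), with box constraints added as an assumption so the program stays bounded:

```python
import random

def lp2(halfplanes, c, box=1e9, seed=0):
    """Randomized incremental LP in the plane, in the style of Seidel:
    maximize c.x subject to a.x <= b for each (a, b) in halfplanes."""
    rng = random.Random(seed)
    H = [((1.0, 0.0), box), ((-1.0, 0.0), box),
         ((0.0, 1.0), box), ((0.0, -1.0), box)]       # bounding box
    x = (box if c[0] > 0 else -box, box if c[1] > 0 else -box)
    rest = list(halfplanes)
    rng.shuffle(rest)
    for a, b in rest:
        if a[0] * x[0] + a[1] * x[1] <= b + 1e-9:
            H.append((a, b))
            continue
        # New constraint violated: the optimum moves onto its boundary
        # line a.x = b; solve the 1-d LP along that line over H.
        s = a[0] * a[0] + a[1] * a[1]
        p = (a[0] * b / s, a[1] * b / s)              # a point on the line
        d = (-a[1], a[0])                             # direction of the line
        lo, hi = -float("inf"), float("inf")
        for a2, b2 in H:
            den = a2[0] * d[0] + a2[1] * d[1]
            num = b2 - (a2[0] * p[0] + a2[1] * p[1])
            if abs(den) < 1e-12:
                if num < 0:
                    return None                       # infeasible
            elif den > 0:
                hi = min(hi, num / den)
            else:
                lo = max(lo, num / den)
        if lo > hi:
            return None                               # infeasible
        t = hi if c[0] * d[0] + c[1] * d[1] > 0 else lo
        x = (p[0] + t * d[0], p[1] + t * d[1])
        H.append((a, b))
    return x
```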
Notes:
The bound given for integer programming is not quite right, as corrected
by F. Eisenbrand,
here. (The
accompanying talk is the one given at FOCS in 1988; note that it gives the
results of the paper in terms of the Smallest Enclosing Sphere (or Minimum
Enclosing Ball) problem.)
The delay between conference and journal
publication is not the fault of the journal.
Developments between 1988 and
1995, roughly as discussed at the conclusion of the journal version of the
paper:
Several developments have occurred since the conference
version of this paper appeared. Adler and
Shamir have shown that these ideas can be applied to general convex
programming. Chazelle and Matoušek have
derandomized the recursive algorithm, obtaining a deterministic algorithm
requiring `d^{O(d)}n` time. Alon and Megiddo
have applied and extended the ideas of this paper to a parallel setting. Seidel gave a different randomized
algorithm, requiring `O(d!n)` expected time, with a somewhat simpler
analysis; Matoušek, Sharir, and Welzl found a variant of Seidel's algorithm
requiring time subexponential in `d`. Their
algorithm is a randomized instance of the simplex algorithm. Kalai was the first to find a subexponential
simplex algorithm. Problem instances have long been known for which versions
of the simplex algorithm require at least `2^d` operations [KM]. These results cast new light on the
complexity of the simplex algorithm, and on the possibility that linear
programming problems can be solved in "strongly polynomial" time; such an
algorithm would need `(nd)^{O(1)}` operations, with the number of operations
independent of the size of the numbers specifying a problem
instance.
Some more recent related results:
Vapnik's leave-one-out error estimate for support
vector machines is a version of Lemma 3.2, generalized to quadratic
programming. (Such
deleted error estimates were found in the
sixties.)
Chazelle
et al. observe that one of these algorithms can be the basis for
sublinear geometric algorithms; another paper observes that the approach
works well from the standpoint of multi-pass algorithms. Lemma 3.2
(or its generalization to convex programming)
was rediscovered recently: Calafiore and Campi,
Theorem 1. Their proof rediscovers "backwards analysis".
with Robert E. Tarjan and C. J. Van Wyk.
Discrete & Computational Geometry, 4(1):423--432, 1989.
Preliminary version in Proceedings of the Fourth Annual Symposium
on Computational Geometry, 1988.
We present an algorithm that
triangulates a simple polygon on `n` vertices in `O(n log ^** n)` expected
time. The algorithm uses random sampling on the input, and its running time
does not depend on any assumptions about a probability distribution from
which the polygon is drawn.
Notes: The first algorithm to apply randomization to
the problem, and obtain essentially linear time. See
here
also.
Chazelle got rid of the
"essentially" and the randomization, at the cost of some complexity;
Amato et al. took out some of that complexity, at
the cost of putting the randomization back in.
with P. W. Shor.
Discrete & Computational Geometry, 4(1):387--421, 1989.
Merges two papers below.
We use random sampling for several new
geometric algorithms. The algorithms are "Las Vegas," and their expected
bounds are with respect to the random behavior of the algorithms. These
algorithms follow from new general results giving sharp bounds for the use of
random subsets in geometric algorithms. These bounds show that random subsets
can be used optimally for divide-and-conquer, and also give bounds for a
simple, general technique for building geometric structures incrementally.
One new algorithm reports all the intersecting pairs of a set of line
segments in the plane, and requires `O(A+n\log n)` expected time, where `A`
is the number of intersecting pairs reported. The algorithm requires `O(n)`
space in the worst case. Another algorithm computes the convex hull of `n`
points in `E^d` in `O(n\log n)` expected time for `d=3`, and
`O(n^{|__d/2__|})` expected time for `d>3`. The algorithm also gives fast
expected times for random input points. Another algorithm computes the
diameter of a set of `n` points in `E^3` in `O(n\log n)` expected time, and
on the way computes the intersection of `n` unit balls in `E^3`. We show that
`O(n\log A)` expected time suffices to compute the convex hull of `n` points
in `E^3`, where `A` is the number of input points on the surface of the hull.
Algorithms for halfspace range reporting are also given. In addition, we give
asymptotically tight bounds for `(\le k)`-sets, which are certain halfspace
partitions of point sets, and give a simple proof of Lee's bounds for high
order Voronoi diagrams.
with P. W. Shor.
In
SOCG '88: Proceedings of the Fourth Annual Symposium on
Computational Geometry, Urbana, Illinois, June 1988.
In
SOCG '88: Proceedings of the Fourth Annual Symposium on
Computational Geometry, Urbana, Illinois, June 1988.
Algorithmica, 4:461--469, 1989.
Included in PhD Thesis.
We describe an algorithm for finding a
minimum spanning tree of the weighted complete graph induced by a set of `n`
points in Euclidean `d`-space. The algorithm requires nearly linear expected
time for points that are independently uniformly distributed in the unit
`d`-cube. The first step of the algorithm is the spiral search procedure
described by Bentley, Weide, and Yao [BWY] for finding a supergraph of the
MST that has `O(n)` edges. (The constant factor in the bound depends on `d`.)
The next step is that of sorting the edges of the supergraph by weight using
a radix distribution, or "bucket," sort. These steps require linear expected
time. Finally, Kruskal's algorithm is used with the sorted edges, requiring
`O(n\alpha(cn,n))` time in the worst case, with `c>6`. Since the function
`\alpha(cn,n)` grows very slowly, this step requires linear time for all
practical purposes. This result improves the previous best
`O(n\log\log^**n)`, and employs a much simpler algorithm. Also, this result
demonstrates the robustness of bucket sorting, which requires `O(n)` expected
time in this case despite the probability dependency between the edge
weights.
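A sketch of the final steps (Kruskal with union-find over pre-sorted edges); the quadratic all-pairs candidate set below is a placeholder for the spiral-search supergraph and bucket sort that make the real algorithm fast.

```python
import numpy as np

def kruskal(n, sorted_edges):
    """Kruskal's algorithm with union-find; sorted_edges holds
    (weight, i, j) in nondecreasing order of weight (the paper obtains
    the sorted order cheaply with a radix/bucket sort)."""
    parent = list(range(n))
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]    # path halving
            i = parent[i]
        return i
    tree = []
    for w, i, j in sorted_edges:
        ri, rj = find(i), find(j)
        if ri != rj:
            parent[ri] = rj
            tree.append((i, j))
            if len(tree) == n - 1:
                break
    return tree

def emst_brute(points):
    """Baseline EMST: Kruskal over all pairs.  The paper's spiral-search
    step replaces this quadratic candidate set with an O(n)-edge
    supergraph of the MST."""
    P = np.asarray(points, dtype=float)
    n = len(P)
    edges = sorted((float(np.linalg.norm(P[i] - P[j])), i, j)
                   for i in range(n) for j in range(i + 1, n))
    return kruskal(n, edges)
```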
with S. Kapoor and P. Vaidya.
In
SOCG '87: Proceedings of the Third Annual Symposium on
Computational Geometry, Waterloo, Ontario, June 1987.
The problem of finding a rectilinear shortest
path amongst obstacles may be stated as follows: Given a set of obstacles in
the plane find a shortest rectilinear (`L_1`) path from a point `s` to a
point `t` which avoids all obstacles. The path may touch an obstacle but may
not cross an obstacle. We study the rectilinear shortest path problem for the
case where the obstacles are non-intersecting simple polygons, and present an
`O(n log^2 n)` algorithm for finding such a path, where `n` is the number
of vertices of the obstacles. This algorithm requires `O(n log n)` space.
Another algorithm is given that requires `O(n (log n)^{3/2})` time and
space. We also study the case of rectilinear obstacles in three dimensions,
and show that `L_1` shortest paths can be found in `O(n^2 log^3 n)`
time.
In
STOC '87: Proceedings of the Nineteenth Annual SIGACT
Symposium, New York, New York, May 1987.
This paper gives approximation algorithms for
solving the following motion planning problem: Given a set of polyhedral
obstacles and points `s` and `t`, find a shortest path from `s` to `t` that
avoids the obstacles. The paths found by the algorithms are piecewise linear,
and the length of a path is the sum of the lengths of the line segments
making up the path. Approximation algorithms will be given for versions of
this problem in the plane and in three-dimensional space. The algorithms
return an `\epsilon`-short path, that is, a path with length within
`(1+\epsilon)` of shortest. Let `n` be the total number of faces of the
polyhedral obstacles, and `\epsilon` a given value satisfying
`0<\epsilon\leq\pi`. The algorithm for the planar case requires `O(n\log
n)/\epsilon` time to build a data structure of size `O(n/\epsilon)`. Given
points `s` and `t`, an `\epsilon`-short path from `s` to `t` can be found
with the use of the data structure in time `O(n/\epsilon+n\log n)`. The data
structure is associated with a new variety of Voronoi diagram. Given
obstacles `S\subset E^3` and points `s,t\in E^3`, an `\epsilon`-short path
between `s` and `t` can be found in
`O(n^2\lambda(n)\log(n/\epsilon)/\epsilon^4+n^2\log n\rho\log(n\log\rho))`
time, where `\rho` is the ratio of the
length of the longest obstacle edge to the distance between `s` and `t`. The
function `\lambda(n)=\alpha(n)^{O(\alpha(n)^{O(1)})}`, where `\alpha(n)`
is a form of inverse of Ackermann's function. For `\log(1/\epsilon)` and
`\log\rho` that are `O(\log n)`, this bound is
`O(n^2(\log^2n)\lambda(n)/\epsilon^4)`.
Notes: This paper introduces the observation that
Yao's fan of cones can be used to build a spanner. This observation was
independently made by
Keil and Gutwin, and by
Ruppert and Seidel. The resulting spanners have
seen recent (circa 2005) application in the wireless literature, where they
are called
Yao graphs.
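For reference, a planar Yao-graph construction: `k` cones of angle `2\pi/k` around each point, connecting to the nearest point in each cone. The cone count here is an assumption, tuned in practice for the desired stretch factor.

```python
import math

def yao_graph(points, k=8):
    """For each point, split the directions around it into k cones and
    connect it to the nearest other point inside each cone."""
    edges = set()
    width = 2 * math.pi / k
    for i, (px, py) in enumerate(points):
        nearest = [None] * k                 # best (dist, index) per cone
        for j, (qx, qy) in enumerate(points):
            if i == j:
                continue
            d = math.hypot(qx - px, qy - py)
            angle = math.atan2(qy - py, qx - px) % (2 * math.pi)
            cone = min(k - 1, int(angle / width))
            if nearest[cone] is None or d < nearest[cone][0]:
                nearest[cone] = (d, j)
        for entry in nearest:
            if entry is not None:
                edges.add((min(i, entry[1]), max(i, entry[1])))
    return edges
```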
with L. J. Guibas and Jorge Stolfi.
Theoretical Computer Science, 49:81--84, 1987.
Discrete & Computational Geometry, 2:195--222, 1987.
Preliminary version: Further applications of random sampling to
computational geometry, STOC '86: Proceedings of the Eighteenth Annual
SIGACT Symposium, 1986.
This paper gives several new demonstrations of
the usefulness of random sampling techniques in computational geometry. One
new algorithm creates a search structure for arrangements of hyperplanes by
sampling the hyperplanes and using information from the resulting arrangement
to divide and conquer. This algorithm requires `O(s ^ {d + \epsilon})`
expected preprocessing time to build a search structure for an arrangement of
`s` hyperplanes in `d` dimensions. The expectation, as with all expected
times reported here, is with respect to the random behavior of the algorithm,
and holds for any input. Given the data structure, and a query point `p`, the
cell of the arrangement containing `p` can be found in `O( \log s)`
worst-case time. (The bound holds for any fixed `\epsilon >0`, with the
constant factors dependent on `d` and `\epsilon`.) Using point-plane duality,
the algorithm may be used for answering halfspace range queries. Another
algorithm finds random samples of simplices to determine the separation
distance of two polytopes. The algorithm uses expected `O(n ^{|__d/2__|} )`
time, where `n` is the total number of vertices of the two polytopes. This
matches previous results [DoK] for the case `d=3` and extends them. Another
algorithm samples points in the plane to determine their order `k` Voronoi
diagram, and requires expected `O(s^{1+\epsilon}k)` time for `s` points. (It
is assumed that no four of the points are cocircular.) This sharpens the
bound `O(sk ^ 2 \log s)` for Lee's algorithm [Lee], and `O(s ^ 2 \log s +
k(s-k) \log ^ 2 s)` for Chazelle and Edelsbrunner's algorithm [ChE]. Finally,
random sampling is used to show that any set of `s` points in `E^3` has
`O(sk^2\log ^ 8 s / ( \log \log s) ^ 6 )` distinct `j`-sets with `j\leq k`.
(For `S \subset E^d`, a set `S'\subset S` with `|S'|=j` is a `j`-set of `S`
if there is a halfspace `h^+` with `S'=S \cap h^+`.) This sharpens with
respect to `k` the previous bound `O(s k ^ 5 )` [ChP]. The proof of the bound
given here is an instance of a "probabilistic method" [ErS].
Notes: This paper extends the ideas of an
earlier
one to a much more general setting. This framework
is slightly less general than that of range spaces of finite VC-dimension,
which were introduced to geometric algorithms at the same time by
Haussler and Welzl (in the same issue of the
journal). The scheme here is very close to the
sample compression framework in the
learning theory literature. It is not clear that there are any natural
applications of the VC-dimension for which this framework is not also
applicable. (Examples of such a thing would be of interest, at least to me.)
The framework is much the same as this
later one, which
gives bounds that hold on average; here the bounds hold with high
probability, but with an extra factor of `log r`. The arrangement search
structure uses a construction that sharpens an idea of Megiddo, and in turn
was later sharpened (removing the `epsilon`) and called
cuttings.
The polytope separation distance algorithm was made obsolete by Gärtner, who
gave an `O(n)`-time algorithm by application and extension of
these ideas, and later improvements. The `k`-set bounds are
sharpened
here, with a much cleaner argument.
Information Processing Letters, 22:21--24, January
1986.
SIAM Journal on Computing, pages 830--847, 1988.
Preliminary version: A probabilistic algorithm for the post office
problem, STOC '85: Proceedings of the Seventeenth Annual SIGACT
Symposium, 1985.
An algorithm for closest-point queries is
given. The problem is this: given a set `S` of `n` points in `d`-dimensional
space, build a data structure so that given an arbitrary query point `p`, a
closest point in `S` to `p` can be found quickly. The measure of distance is
the Euclidean norm. This is sometimes called the post-office problem
[Kn]. The new data structure will be termed an RPO tree, from
Randomized Post Office. The expected time required to build an RPO tree is
`O(n^{|~d/2~| (1+ \epsilon )} )`, for any fixed `\epsilon > 0`, and a query
can be answered in `O(\log n )` worst-case time. An RPO tree requires
`O(n^{|~d/2~| (1+ \epsilon )} )` space in the worst case. The constant
factors in these bounds depend on `d` and `\epsilon`. The bounds are
average-case due to the randomization employed by the algorithm, and hold for
any set of input points. This result approaches the `\Omega(n^{|~d/2~|} )`
worst-case time required for any algorithm that constructs the Voronoi
diagram of the input points, and is a considerable improvement over previous
bounds for `d>3`. The main step of the construction algorithm is the
determination of the Voronoi diagram of a random sample of the sites, and the
triangulation of that diagram.
Notes:
The first appearance of the kind of analysis done in more generality
here: the analysis essentially reduces to the observation that
there are `r^{O(1)}` balls associated with a sample of size `r` that might
be Delaunay, that a given such ball with many points in it has an
exponentially small probability of being Delaunay in the sample, and
finally, the union bound.
This dissertation reports a variety of
new algorithms for solving closest-point problems. The input to these
algorithms is a set or sets of points in `d`-dimensional space, with an
associated `L_p` metric. The problems considered are:
- The all nearest neighbors problem.
For point set `A`, find the nearest neighbors in `A` of each point in
`A`.
- The nearest foreign neighbor
problem. For point sets `A` and `B`, find the closest point in
`B` to each point in `A`.
- The
geometric minimum spanning tree problem. For point set `A`, find
the minimum spanning tree for the complete weighted undirected graph
associated with `A`, where the vertices of the graph correspond to the points
of `A`, and the weight of an edge is the distance between the points defining
the edge.
These problems arise in routing, statistical
classification, data compression, and other areas. Obvious algorithms
for them require a running time quadratic in `n`, the number of points in the
input. In many cases they can be solved with algorithms requiring `O(n
log ^{O(1)} n)` time.
In this work, approximation algorithms for
some cases of these problems have been found. For example, for the
minimum spanning tree problem with the `L_1` metric, an algorithm has been
devised that requires `O(n log^d (1/rho))` time to find a spanning tree with
weight within `1+rho` of the minimum. Several other algorithms have
been found with time bounds dependent on `log(1/rho)` for attaining error
`rho`.
Algorithms have also been found that require linear expected
time, for independent identically distributed random input points with a
probability density function satisfying weak conditions. One such
algorithm depends on the fact that under certain conditions, values that are
identically distributed, but dependent, can be bucket sorted in linear
expected time.
An algorithm has been found for the all nearest
neighbors problem that requires `O(n log n)` expected time for any input set
of points, where the expectation is on the random sampling performed by the
algorithm. This algorithm involves the construction of a new data
structure, a compressed form of digital trie.
In
STOC '84: Proceedings of the Sixteenth Annual SIGACT
Symposium, Washington, DC, April 1984.
Included in PhD Thesis.
In
FOCS '83: Proceedings of the Twenty-fourth Symposium on
Foundations of Computer Science, Tucson, AZ, November 1983.
Included in PhD Thesis.
We present new algorithms for the
all
nearest neighbors problem:
Given a set `S` of `n` sites
(points) in `d`-dimensional space, find the nearest neighbors in set `S` of
each site in `S`.
Our results:
- An algorithm for
solving the all nearest neighbors problem in `O(n\log\sigma)` time, where
`\sigma` is the ratio of the distance between the farthest pair of sites to
the distance between the closest pair of sites. A similar algorithm is
described for finding all `k`'th nearest neighbors.
- An algorithm
for building a celltree, a compressed form of digital trie, in
`O(n\log n)` probabilistic time. The logarithm, floor, and bitwise
exclusive-or functions are assumed available at unit cost.
- An
algorithm for solving the all nearest neighbors problem in `O(n)` worst-case
time, given a celltree for the sites.
- An algorithm for building a
celltree in linear expected time, assuming the sites are independently
identically distributed random variables, with an unknown probability density
function obeying some very weak conditions.
Notes: A "celltree" is now more commonly called a
"compressed quadtree" or "compressed hyperoctree", and this paper is
apparently the first appearance of such a construction.
Vaidya refined the algorithm here to avoid
randomness and bit-twiddling, at some cost in dependence on `d`; his
algorithm, and this one, and that of Gabow, Bentley, and Tarjan, all use the
same basic geometrical observation, which implies that the total number of
nearest neighbors is `O(n)`, even up to approximate neighbors.
Callahan and Kosaraju used similar ideas for
"well-separated pairs decomposition". The `sigma` ratio is now more commonly
called the
spread.
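For a flavor of how the bitwise exclusive-or earns its keep in such constructions: in later work on compressed quadtrees, XOR lets one compare points in Morton (cell) order without explicitly interleaving bits. A sketch of that standard trick (the thesis-era details may differ):

```python
def less_msb(x, y):
    """True iff the highest set bit of x is strictly below that of y."""
    return x < y and x < (x ^ y)

def morton_compare(p, q):
    """Compare nonnegative integer points in Morton (Z-curve) order: the
    coordinate whose XOR has the highest set bit decides the comparison."""
    best, result = 0, 0
    for a, b in zip(p, q):
        if less_msb(best, a ^ b):
            best, result = a ^ b, (a > b) - (a < b)
    return result                            # -1, 0, or +1
```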
Information Processing Letters, 16:23--25, January
1983.
(Survey).
In P. Cohen and E. Feigenbaum, editors,
The Handbook of Artificial
Intelligence.
William Kaufman, Inc., Los Altos, CA, 1982.
In
Proceedings of the DARPA Image Understanding Workshop,
McLean, VA: Science Applications, Inc., 1981.
Technical report, Claremont Graduate School, Claremont, CA, 1977.
* The slides were done using
S5 and asciimathml,
and are only viewable with Mozilla Firefox. Navigate as in PowerPoint, or use
the navigation arrows.
** An SVG-enabled
Firefox is needed for a few of the slides. (Versions of Firefox after 1.5 are
SVG-enabled by default.) These SVG-based slides have simple interactive
animations: move the green dot(s), if any, or press the green square to go,
yellow to single-step, red to stop. Navigation may only be by clicking
the arrows in the corners (in recent slides, the arrows appear when the mouse
cursor is over them, in the lower right). Some of the xbl-related code is
adapted from examples. The
slides may be slow to load, and require JavaScript to be enabled (and in one
case, Java).
This file was made from a bibtex bibliography file using bibtex and latex, and
a bibtex style file. The file bib.html, with bibtex references, was also made
from the same bibtex bibliography file, using a small awk script. Here
are the sources, in case someone else might find them useful.