
History

The Fundamentals of Mathematics and Algorithms Research Department was formed in October 2005 by merging the Fundamental Mathematics Research Department and the Algorithms Research Department. More details follow below.

History of Fundamental Mathematics Research at Bell Laboratories



The Early Years

Our department derives its heritage from the pioneering research done in the Math Center in the fields of fundamental mathematics and the theory of computing. Over the course of the last seven decades, research in the two areas has often been closely intertwined. The earliest work can be traced back to the 1930's, when the mathematical foundations of the theory of computing were being laid. In 1937, Claude Shannon showed how Boolean algebra can be used for the design and analysis of logic circuits; later, in the 50's, Edward Moore and George Mealy made fundamental contributions to automata theory. Meanwhile, another mathematician, George Stibitz, had designed the first binary relay computer in 1937, and it was around then that researchers began looking at error correction as a way to build fault-tolerant computing hardware. Motivated by this, in 1948, Richard Hamming laid the foundations of error-correcting codes. Around the same time, Edgar Gilbert came up with a lower bound on the size of an error-correcting code with a given minimum distance (the famous ``Gilbert Bound''), and David Slepian established the underpinnings of algebraic coding theory. In 1968, Hamming was awarded the prestigious Turing Award for his pioneering work. The early combinatorialists at the Math Center included John Riordan, who wrote a celebrated text on combinatorial analysis in 1958, and Ron Graham, who won a George Polya Prize for pathbreaking work in Ramsey theory.
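For reference, the content of the Gilbert bound (stated here in its standard textbook form, not quoted from the original page) is that for block length n and minimum distance d there always exists a binary code with at least

    2^n / \sum_{j=0}^{d-1} \binom{n}{j}

codewords; an analogous bound holds over any alphabet of size q, with the binomial terms weighted by (q-1)^j.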

Theory of Computing

In the following years, the need for efficient algorithms became evident as people attempted to solve large instances of complex problems on computers. This spawned extensive research in several fields of operations research and computer science; the problems in these fields attracted worldwide attention not only because they have an enormous number of applications in computer, communications, and industrial engineering, but also because they have supplied an ideal proving ground for new algorithmic techniques. In 1956, Joseph Kruskal, and in 1957, Robert Prim, discovered two different efficient algorithms for computing a minimum spanning tree in a weighted graph, a fundamental problem in network design. In the mid 60's, scientists at the Math Center were among the pioneers of the modern theory of scheduling, particularly in the analysis of approximation algorithms and in multiprocessor scheduling (Ed Coffman, Ron Graham, David Johnson and Mike Garey). Over the next three decades, many outstanding contributions were added to the general theory, which continues to prosper. Yet another example is provided by bin packing theory (Garey, Graham, Johnson and Ullman), whose mathematical foundations originated in the Math Center during the early 70's. One of its earliest contributions remains, some 25 years and hundreds of articles later, the deepest and most definitive combinatorial analysis of bin packing approximation algorithms.
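As a concrete illustration of the minimum spanning tree problem mentioned above, here is a minimal Python sketch of Kruskal's greedy approach (the function, the union-find helper, and the example graph are illustrative, not taken from the original page): sort the edges by weight and keep each edge that joins two previously unconnected components.

def kruskal_mst(num_nodes, edges):
    """Return the edges of a minimum spanning tree.

    edges is a list of (weight, u, v) tuples on nodes 0..num_nodes-1.
    Edges are scanned in order of increasing weight, and an edge is kept
    only if it connects two different components (tracked with union-find).
    """
    parent = list(range(num_nodes))

    def find(x):
        # Path-compressing find: locate the representative of x's component.
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    mst = []
    for weight, u, v in sorted(edges):
        ru, rv = find(u), find(v)
        if ru != rv:          # the edge joins two different components
            parent[ru] = rv   # merge the components
            mst.append((weight, u, v))
    return mst


if __name__ == "__main__":
    # A small example graph: 4 nodes, weighted edges (weight, u, v).
    example = [(1, 0, 1), (4, 0, 2), (3, 1, 2), (2, 1, 3), (5, 2, 3)]
    print(kruskal_mst(4, example))  # -> [(1, 0, 1), (2, 1, 3), (3, 1, 2)]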

Emerging from these problem-specific studies, often as primary goals, were far-reaching contributions to the methodology of algorithm analysis. A prime example is the competitive analysis of approximation algorithms (Graham, and later Danny Sleator and Robert Tarjan), which originated in the Math Center during the mid 60's and soon became an essential tool in algorithmic studies cutting across virtually all of the engineering disciplines. In the 80's, the Math Center produced amortized analysis (Sleator and Tarjan) as another important addition to this methodology; the new approach introduced objective functions that allow more useful comparisons of alternative algorithms. Math Center scientists went on to collect firsts in the average-case analysis of algorithms; they pioneered the application of these techniques to scheduling, bin packing, two-dimensional packing, and dynamic storage allocation problems (Coffman, Flatto, Gilbert, Johnson and Shepp). Stochastic planar matching theory (Coffman and Shor) was also developed as a technique for the analysis of algorithms in diverse problems of computer science and operations research. The development of the mathematical underpinnings of average-case analysis owes much to the Math Center's successes in analyzing many of the widely known problems in applied probability that are beguilingly simple to state but notoriously difficult to solve. Prominent examples include the longest monotone subsequence problem and the analysis of optimal and/or on-line greedy algorithms for covering, parking, set partitioning, interval packing, and sequential selection problems. Math Center researchers also made lasting contributions by writing numerous celebrated textbooks on the theory of computing.
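As a reminder of what this kind of worst-case ratio (competitive) analysis measures (the formulation below is the standard textbook one, not quoted from the original page), an algorithm A is said to achieve performance ratio c if, for every input \sigma,

    A(\sigma) \le c \cdot OPT(\sigma) + b,

where OPT(\sigma) is the cost of an optimal solution and b is a constant independent of the input; amortized analysis, by contrast, bounds the total cost of an entire sequence of operations so that occasional expensive operations are paid for by the cheap ones around them.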

Despite this burst of activity in the theory of algorithms, there remains a large collection of important optimization problems for which no efficient algorithms are known. In 1971-72, Stephen Cook (University of Toronto) and Richard Karp (University of California, Berkeley) showed that these intractable problems were essentially all equivalent and were among the hardest problems in the complexity class NP. This seminal work established the foundations of NP-completeness theory. Efforts were soon directed toward approximately solving these difficult problems, commonly referred to as NP-hard problems. In a seminal paper (1974), Johnson studied which NP-hard problems could be efficiently well-approximated and which resisted even good approximations. Shortly afterwards, a landmark treatise on the subject of NP-completeness and the computation of approximate solutions was written by Garey and Johnson. Two decades later, it continues to be the most often cited reference on the subject. In 1979, Garey and Johnson were awarded the prestigious Lanchester Prize by ORSA for their book. Later, for several years, Johnson wrote a periodic column in the Journal of Algorithms, popularly referred to as ``Johnson's NP-Completeness Column''.

The search for efficient algorithms to approximately or optimally solve hard optimization problems has been an ongoing one at the Math Center. In 1984, Narendra Karmarkar discovered the interior point method for solving very large scale linear programming problems in polynomial time. This approach turned out to be much faster than the ellipsoid method, the only other polynomial-time algorithm known at the time. Later, Pravin Vaidya designed more efficient variants of the interior-point method and, ultimately, the fastest known convex programming algorithm (1989).

Meanwhile, network optimization emerged as one of the focal points of algorithmic research at the Math Center. In 1987, Tarjan (now at Princeton University) and Andrew Goldberg (an MIT student at the time) discovered a striking new minimum-cost network flow algorithm. Tarjan also did extensive work on data structures during his stay at the Math Center, including the discovery of persistent data structures (1985) and of perfect hashing that supports updates in constant expected time (1988). It was also during this period that he received the Turing Award (1986). David Johnson, and later David Applegate and others, did extensive work on heuristics for the notorious Traveling Salesman Problem, a prototypical hard problem that is often used as a testbed for new optimization techniques. The 80's also saw more widespread use of randomization as a powerful paradigm for designing simple yet effective algorithms. In the late 80's, Peter Shor and Ken Clarkson demonstrated the power of randomization in solving computational geometry problems. In 1989, P. G. Doyle, along with Don Coppersmith, Prabhakar Raghavan and Marc Snir from IBM, discovered a beautiful connection between the design of randomized algorithms against adaptive online adversaries and electrical networks.

In the early 90's, a surprising connection was discovered between so-called ``interactive proofs'' and the inherent difficulty of even approximately solving hard optimization problems. Building on the discovery of this connection and a sequence of technical breakthroughs, Carsten Lund (Computing Sciences Research Center) and Mario Szegedy, along with Sanjeev Arora and Madhu Sudan from the University of California at Berkeley, and Rajeev Motwani from Stanford University, proved the following landmark result in 1992: the class NP has ``efficiently verifiable'' proof systems. This result, widely referred to as the ``PCP theorem'', has played an instrumental role in the resolution of many longstanding questions in combinatorial optimization, and researchers are continuing to discover powerful implications of the theorem. Shortly afterwards, in 1994, Shor made another breakthrough discovery, namely that factoring is in quantum polynomial time; the result gave the first-ever evidence of the power of quantum computing. It also cast some doubt on the intractability of the factoring problem, an assumption that is used widely in the design of cryptographic systems.
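In the language in which the result is usually stated today (this formulation is the standard one, not quoted from the original page), the PCP theorem says that

    NP = PCP[O(\log n), O(1)],

that is, every NP statement has a proof that a randomized verifier can check by reading only a constant number of proof bits while using O(\log n) random bits; this robust proof format is what makes many hardness-of-approximation results possible.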


Mathematical Coding Theory


The period from the 70's through the 90's also saw many exciting developments of theoretical as well as practical importance in the field of coding theory. In 1977, F.J. MacWilliams and N.J.A. Sloane wrote a pioneering text titled ``The Theory of Error-Correcting Codes'', which to date is the most frequently cited general source on coding theory. Some of the many remarkable results obtained during this period include a new upper bound on self-dual codes (Conway and Sloane, 1988) and long-term studies of coset codes, trellis codes, and nonequiprobable signaling. One of the more influential theoretical results of the 90's came from R. Calderbank, Sloane, P. Sole, R. Hammons and V. Kumar; it showed that a well-known class of nonlinear binary codes becomes linear over the ring of integers modulo 4 and forms families of dual codes over this ring. In 1995, Shor, following his research on fast probabilistic quantum algorithms, discovered a scheme for reducing decoherence in quantum memory, thereby initiating the study of quantum error-correcting codes.

Discrete Mathematics and Number Theory

In 1985, Andrew Odlyzko and H. J. J. te Riele (Center for Mathematics and Computer Science, Amsterdam) disproved a 100-year-old conjecture in number theory known as the Mertens conjecture; the conjecture had generated a great deal of interest since a positive resolution would have implied the truth of the Riemann hypothesis. Tarjan, Sleator and W.P. Thurston (Princeton) solved the problem of determining the maximum rotation distance between any two n-node binary trees (1985). Shor, P. Agarwal (NYU) and M. Sharir (NYU) obtained almost tight bounds on the length of the Davenport-Schinzel sequences that arise in differential equations and computational geometry (1987). In 1988, P.C. Fishburn settled three open problems for finite partially ordered sets with geometric representations. J.C. Lagarias developed a higher-dimensional generalization of the continued fraction method for finding rational approximations of a real number (1989). In the same year, Sloane and J.H. Conway (Princeton) solved a long-standing problem of determining the dimensions for which there exist integral lattice packings of determinant 1 with no roots. In 1990, F.K. Hwang and D.Z. Du (Academia Sinica, Beijing) settled the Steiner ratio conjecture, which says that the maximum ratio between the lengths of a minimum spanning tree and a Steiner minimal tree is 2/\sqrt{3}. Jeong Han Kim and Jeff Kahn (Rutgers) solved an old and celebrated problem, namely to find an optimal sequence of binary comparisons to sort a partially ordered set (1993). In the following year, Kim settled another famous problem, a question in extremal graph theory posed by Paul Erdos in 1961: what is the minimum number of vertices in a triangle-free graph that guarantees the existence of an independent set of size t? In 1996, Peter Winkler and Graham Brightwell (London School of Economics) devised a combinatorial model of systems with ``hard constraints'' and proved a surprising collection of results characterizing systems that exhibit phase transitions.

Cryptography

Cryptography and mathematics have a long history together. In the first half of the 20th century, though, work on cryptography was largely confined to defense and military applications. In 1949, Claude Shannon wrote a pioneering paper titled ``Communication Theory of Secrecy Systems''. Not much further work appeared in the public literature for the two subsequent decades. In 1975, Whitfield Diffie and Martin Hellman, both at Stanford University, discovered public-key cryptography, which sparked widespread interest among mathematicians in every domain. The researchers at Bell Labs were no exception. Lagarias and Odlyzko developed an attack on a general class of knapsack systems known as low-density knapsacks (1985), significantly adding to the mounting doubts about the security of such systems. Since the mid-80's, Odlyzko has made major contributions toward understanding the complexity of the discrete logarithm, a problem assumed to be intractable in the design of many public-key cryptosystems. In 1991, Jim Reeds and P. Treventi (BU) developed an authentication and privacy system for digital cellular phones.
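To illustrate why the hardness of the discrete logarithm matters to public-key systems, here is a toy Python sketch of the Diffie-Hellman key exchange (the tiny prime, generator, and variable names are illustrative only; real deployments use primes of thousands of bits):

import secrets

# Toy public parameters (the classic textbook example): prime p = 23, generator g = 5.
# Recovering a private exponent x from g^x mod p is exactly the discrete
# logarithm problem mentioned above.
p, g = 23, 5

# Each party picks a private exponent at random in 1 .. p-2.
alice_secret = secrets.randbelow(p - 2) + 1
bob_secret = secrets.randbelow(p - 2) + 1

# Each party publishes g raised to its private exponent, modulo p.
alice_public = pow(g, alice_secret, p)
bob_public = pow(g, bob_secret, p)

# Each side combines its own secret with the other's public value;
# both arrive at the same shared key g^(a*b) mod p.
alice_key = pow(bob_public, alice_secret, p)
bob_key = pow(alice_public, bob_secret, p)

assert alice_key == bob_key
print("shared key:", alice_key)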


The Present Department

The present-day department continues to span major fundamental areas of mathematics, as they relate to Alcatel-Lucent's interests, including combinatorics, probability, algorithms, game theory and combinatorial optimization.