My work has mainly been on geometric algorithms, in particular algorithms that have provable properties but are relatively simple. Randomization is quite useful for this, whether via the Vapnik-Chervonenkis dimension or via the general framework introduced here. See, for example, a survey on randomized geometric algorithms.
Ocelot has been a major project for the last several years.
I've coded up a data structure for nearest neighbor searching in metric spaces, which supports nearest neighbor queries, fixed radius queries, and k'th-nearest-neighbor queries; a sketch of that query interface is given below. I've tested it on Euclidean data, strings under Hamming and edit distances, and some bitvector datasets. It gives a good speedup on low-dimensional data, and sometimes on high-dimensional data, but it won't be as fast as more specialized methods.
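To make the three query types concrete, here is a minimal sketch of such an interface, using plain brute-force search over an arbitrary metric; the class name MetricIndex and its methods are hypothetical illustrations, not the actual data structure, which is organized to answer these queries much faster than brute force.

    import heapq

    class MetricIndex:
        """Hypothetical brute-force stand-in for a metric nearest-neighbor structure."""
        def __init__(self, points, dist):
            self.points = list(points)   # the stored dataset
            self.dist = dist             # the metric: dist(a, b) -> number

        def nearest(self, q):
            # Nearest-neighbor query: a stored point minimizing dist(q, p).
            return min(self.points, key=lambda p: self.dist(q, p))

        def fixed_radius(self, q, r):
            # Fixed-radius query: all stored points within distance r of q.
            return [p for p in self.points if self.dist(q, p) <= r]

        def k_nearest(self, q, k):
            # k'th-nearest-neighbor query: the k closest points; the last is the k'th nearest.
            return heapq.nsmallest(k, self.points, key=lambda p: self.dist(q, p))

    # Example with Hamming distance on bit strings.
    hamming = lambda a, b: sum(x != y for x, y in zip(a, b))
    idx = MetricIndex(["0000", "0011", "1111"], hamming)
    print(idx.nearest("0001"))          # "0000"
    print(idx.fixed_radius("0001", 2))  # ["0000", "0011"]
    print(idx.k_nearest("0001", 2))     # ["0000", "0011"]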
I've coded up a program for computing convex hulls, Delaunay triangulations, alpha shapes, and related information; it uses a simple randomized incremental algorithm, together with an algorithm of mine that allows certain matrix operations in exact arithmetic to be done quickly.
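As one small illustration of where exact arithmetic enters such computations, here are the determinant sign tests (orientation and in-circle) that incremental hull and Delaunay algorithms rely on, written here with Python's exact integer arithmetic and assuming integer coordinates; this is only a generic sketch, not the faster exact-arithmetic scheme mentioned above.

    def orientation(a, b, c):
        # Sign of the 2x2 determinant det[b-a, c-a]:
        # +1 if a, b, c make a counterclockwise turn, -1 if clockwise, 0 if collinear.
        d = (b[0] - a[0]) * (c[1] - a[1]) - (b[1] - a[1]) * (c[0] - a[0])
        return (d > 0) - (d < 0)

    def in_circle(a, b, c, d):
        # Sign of the 3x3 in-circle determinant, for a, b, c in counterclockwise order:
        # +1 if d lies inside the circle through a, b, c, -1 if outside, 0 if on it.
        rows = []
        for p in (a, b, c):
            px, py = p[0] - d[0], p[1] - d[1]
            rows.append((px, py, px * px + py * py))
        (a0, a1, a2), (b0, b1, b2), (c0, c1, c2) = rows
        det = (a0 * (b1 * c2 - b2 * c1)
             - a1 * (b0 * c2 - b2 * c0)
             + a2 * (b0 * c1 - b1 * c0))
        return (det > 0) - (det < 0)

    print(orientation((0, 0), (1, 0), (0, 1)))        # +1: counterclockwise
    print(in_circle((0, 0), (1, 0), (0, 1), (1, 1)))  # 0: all four points are concyclic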
Some other software of mine, not necessarily the most polished:
My bibliography
Sadly, I haven't managed to: