Online limited-memory quasi-Newton (L-BFGS) implementation based on the algorithms in
Nocedal, Jorge, and Stephen J. Wright. 2000. Numerical Optimization. Springer. pp. 224--
and modified to the online version presented in
Schraudolph, Nicol N., Jin Yu, and Simon Günter. 2007.
A Stochastic Quasi-Newton Method for Online Convex Optimization.
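For orientation, the limited-memory direction computation described in Nocedal and Wright is the standard two-loop recursion over the M most recent curvature pairs. The sketch below is illustrative only, not this class's implementation; the class and method names (LbfgsTwoLoop, twoLoop) are made up for the example:

```java
import java.util.ArrayList;
import java.util.List;

public class LbfgsTwoLoop {

    static double dot(double[] a, double[] b) {
        double d = 0.0;
        for (int i = 0; i < a.length; i++) d += a[i] * b[i];
        return d;
    }

    /**
     * L-BFGS two-loop recursion (Nocedal and Wright, Algorithm 7.4).
     * sList and yList hold the M most recent pairs, oldest first:
     * s_i = x_{i+1} - x_i and y_i = grad_{i+1} - grad_i.
     * Returns H_k * grad, i.e. the (un-negated) quasi-Newton direction.
     */
    static double[] twoLoop(double[] grad, List<double[]> sList, List<double[]> yList) {
        int m = sList.size();
        double[] q = grad.clone();
        double[] alpha = new double[m];
        double[] rho = new double[m];
        for (int i = m - 1; i >= 0; i--) {           // newest to oldest
            rho[i] = 1.0 / dot(yList.get(i), sList.get(i));
            alpha[i] = rho[i] * dot(sList.get(i), q);
            for (int j = 0; j < q.length; j++) q[j] -= alpha[i] * yList.get(i)[j];
        }
        if (m > 0) {                                 // initial scaling H_0 = gamma * I
            double[] s = sList.get(m - 1), y = yList.get(m - 1);
            double gamma = dot(s, y) / dot(y, y);
            for (int j = 0; j < q.length; j++) q[j] *= gamma;
        }
        for (int i = 0; i < m; i++) {                // oldest to newest
            double beta = rho[i] * dot(yList.get(i), q);
            for (int j = 0; j < q.length; j++) q[j] += (alpha[i] - beta) * sList.get(i)[j];
        }
        return q;
    }
}
```

The stochastic variant of Schraudolph et al. reuses this recursion but builds the (s, y) pairs from gradients of mini-batch objectives rather than the full objective.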
As of now, it requires a stochastic differentiable function
(AbstractStochasticCachingDiffFunction) as input.
The basic way to use the minimizer is with the null constructor, followed by
the simple minimize method:
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
THIS EXAMPLE HAS NOT BEEN UPDATED FOR THE STOCHASTIC VERSION YET.
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
Minimizer qnm = new QNMinimizer();
DiffFunction df = new SomeDiffFunction();
double tol = 1e-4;
double[] initial = getInitialGuess();
double[] minimum = qnm.minimize(df, tol, initial);
If you do not choose a value of M, it will use the maximum amount of memory
available, up to M = 20. This will slow things down a bit at first due
to forced garbage collection, but is probably faster overall because you are
guaranteed the largest possible M.
The stochastic version was written by Alex Kleeman, but about 95% of the code
was taken directly from the previous QNMinimizer, written mostly by Jenny.