edu.stanford.nlp.optimization
Class StochasticInPlaceMinimizer<T extends Function>

java.lang.Object
  extended by edu.stanford.nlp.optimization.StochasticInPlaceMinimizer<T>
All Implemented Interfaces:
HasEvaluators, Minimizer<Function>

public class StochasticInPlaceMinimizer<T extends Function>
extends Object
implements Minimizer<Function>, HasEvaluators

In-place Stochastic Gradient Descent Minimizer. Follows the weight decay and learning-rate tuning of Leon Bottou's crfsgd: http://leon.bottou.org/projects/sgd - Supports only L2 regularization (QUADRATIC) - Requires that the objective function be an AbstractStochasticCachingDiffUpdateFunction. NOTE: unlike other minimizers, regularization is done in the minimizer, not in the objective function.
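The "in place" trick this class borrows from Bottou's crfsgd can be sketched in isolation. The class, method, and variable names below (other than xscale, which mirrors the field documented later on this page) are illustrative, not the actual Stanford implementation; the objective is a toy least-squares problem standing in for an AbstractStochasticCachingDiffUpdateFunction:

```java
// Minimal sketch of Bottou-style in-place SGD with L2 weight decay.
// Instead of shrinking every weight by (1 - eta*lambda) at each step,
// the shrinkage is folded into a single scalar xscale, so a sparse
// gradient step only touches the weights it needs while the decay
// still applies to all of them. The toy objective fits y = 2x.
public class InPlaceSgdSketch {
    static double[] minimize(double[][] xs, double[] ys,
                             double lambda, int t0, int passes) {
        int dim = xs[0].length;
        double[] x = new double[dim];  // stored weights; true weight j is x[j] * xscale
        double xscale = 1.0;
        int t = 0;
        for (int pass = 0; pass < passes; pass++) {
            for (int i = 0; i < xs.length; i++, t++) {
                double eta = 1.0 / (lambda * (t0 + t));  // decaying learning rate
                // L2 weight decay for ALL weights in O(1): shrink the scale.
                xscale *= (1.0 - eta * lambda);
                // SGD step on 0.5*(w.x - y)^2: read weights through xscale,
                // write through 1/xscale so the true weights move by -eta*grad.
                double pred = 0.0;
                for (int j = 0; j < dim; j++) pred += x[j] * xscale * xs[i][j];
                double err = pred - ys[i];
                for (int j = 0; j < dim; j++) x[j] -= eta * err * xs[i][j] / xscale;
            }
        }
        double[] w = new double[dim];  // fold the scale back in before returning
        for (int j = 0; j < dim; j++) w[j] = x[j] * xscale;
        return w;
    }

    public static void main(String[] args) {
        double[][] xs = {{1.0}, {2.0}, {3.0}, {4.0}};
        double[] ys = {2.0, 4.0, 6.0, 8.0};
        double[] w = minimize(xs, ys, 0.1, 100, 200);
        System.out.println(w[0]);  // close to 2, slightly below due to regularization
    }
}
```

This also illustrates the NOTE above: the regularizer never appears in the gradient computation; the minimizer applies it itself via the scale shrinkage.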

Author:
Angel Chang

Nested Class Summary
static class StochasticInPlaceMinimizer.InvalidElementException
           
 
Field Summary
protected  int bSize
           
protected  Random gen
           
protected  double lambda
           
protected  long maxTime
           
protected  int numPasses
           
protected  boolean quiet
           
protected  double sigma
           
protected  int t0
           
protected  int tuningSamples
           
protected  double[] x
           
protected  double xnorm
           
protected  double xscale
           
 
Constructor Summary
StochasticInPlaceMinimizer()
           
StochasticInPlaceMinimizer(double sigma, int numPasses)
           
StochasticInPlaceMinimizer(double sigma, int numPasses, int tuningSamples)
           
StochasticInPlaceMinimizer(LogPrior prior, int numPasses, int batchSize, int tuningSamples)
           
 
Method Summary
protected  String getName()
           
 double getObjective(AbstractStochasticCachingDiffUpdateFunction function, double[] w, double wscale, int[] sample)
           
protected  void init(AbstractStochasticCachingDiffUpdateFunction func)
           
 double[] minimize(Function function, double functionTolerance, double[] initial)
          Attempts to find an unconstrained minimum of the objective function starting at initial, within functionTolerance.
 double[] minimize(Function f, double functionTolerance, double[] initial, int maxIterations)
           
protected  void say(String s)
           
protected  void sayln(String s)
           
 void setEvaluators(int iters, Evaluator[] evaluators)
           
 void shutUp()
           
 double tryEta(AbstractStochasticCachingDiffUpdateFunction function, double[] initial, int[] sample, double eta)
           
 double tune(AbstractStochasticCachingDiffUpdateFunction function, double[] initial, int sampleSize, double seta)
          Finds a good starting learning rate; since eta = 1/(lambda*(t0+t)), this amounts to finding a good t0.
 
Methods inherited from class java.lang.Object
clone, equals, finalize, getClass, hashCode, notify, notifyAll, toString, wait, wait, wait
 

Field Detail

xscale

protected double xscale

xnorm

protected double xnorm

x

protected double[] x

t0

protected int t0

sigma

protected double sigma

lambda

protected double lambda

quiet

protected boolean quiet

numPasses

protected int numPasses

bSize

protected int bSize

tuningSamples

protected int tuningSamples

gen

protected Random gen

maxTime

protected long maxTime
Constructor Detail

StochasticInPlaceMinimizer

public StochasticInPlaceMinimizer()

StochasticInPlaceMinimizer

public StochasticInPlaceMinimizer(double sigma,
                                  int numPasses)

StochasticInPlaceMinimizer

public StochasticInPlaceMinimizer(double sigma,
                                  int numPasses,
                                  int tuningSamples)

StochasticInPlaceMinimizer

public StochasticInPlaceMinimizer(LogPrior prior,
                                  int numPasses,
                                  int batchSize,
                                  int tuningSamples)
Method Detail

shutUp

public void shutUp()

getName

protected String getName()

setEvaluators

public void setEvaluators(int iters,
                          Evaluator[] evaluators)
Specified by:
setEvaluators in interface HasEvaluators

init

protected void init(AbstractStochasticCachingDiffUpdateFunction func)

getObjective

public double getObjective(AbstractStochasticCachingDiffUpdateFunction function,
                           double[] w,
                           double wscale,
                           int[] sample)

tryEta

public double tryEta(AbstractStochasticCachingDiffUpdateFunction function,
                     double[] initial,
                     int[] sample,
                     double eta)

tune

public double tune(AbstractStochasticCachingDiffUpdateFunction function,
                   double[] initial,
                   int sampleSize,
                   double seta)
Finds a good starting learning rate; since eta = 1/(lambda*(t0+t)), this amounts to finding a good t0.

Parameters:
function - the objective function
initial - the initial parameter vector
sampleSize - number of samples to use when evaluating candidate rates
seta - the starting learning rate to try
Returns:

minimize

public double[] minimize(Function function,
                         double functionTolerance,
                         double[] initial)
Description copied from interface: Minimizer
Attempts to find an unconstrained minimum of the objective function starting at initial, within functionTolerance.

Specified by:
minimize in interface Minimizer<Function>
Parameters:
function - the objective function
functionTolerance - a double value
initial - an initial feasible point
Returns:
Unconstrained minimum of function

minimize

public double[] minimize(Function f,
                         double functionTolerance,
                         double[] initial,
                         int maxIterations)
Specified by:
minimize in interface Minimizer<Function>

sayln

protected void sayln(String s)

say

protected void say(String s)


Stanford NLP Group