edu.stanford.nlp.optimization
Class StochasticInPlaceMinimizer<T extends Function>
java.lang.Object
edu.stanford.nlp.optimization.StochasticInPlaceMinimizer<T>
- All Implemented Interfaces:
- HasEvaluators, Minimizer<Function>
public class StochasticInPlaceMinimizer<T extends Function>
- extends Object
- implements Minimizer<Function>, HasEvaluators
In-place Stochastic Gradient Descent minimizer.
- Follows the weight decay and learning rate tuning of Leon Bottou's crfsgd: http://leon.bottou.org/projects/sgd
- Supports only L2 regularization (QUADRATIC)
- Requires that the objective function be an AbstractStochasticCachingDiffUpdateFunction
NOTE: Unlike other minimizers, regularization is applied in the minimizer itself, not in the objective function. A usage sketch is given below.
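A minimal usage sketch, assuming some concrete AbstractStochasticCachingDiffUpdateFunction is available (the helper makeObjective() and all numeric arguments below are illustrative, not part of this API):

    import edu.stanford.nlp.optimization.AbstractStochasticCachingDiffUpdateFunction;
    import edu.stanford.nlp.optimization.StochasticInPlaceMinimizer;

    // 'obj' stands for any concrete AbstractStochasticCachingDiffUpdateFunction,
    // e.g. a stochastic CRF or logistic regression objective.
    AbstractStochasticCachingDiffUpdateFunction obj = makeObjective();  // hypothetical helper

    // sigma configures the strength of the quadratic (L2) prior applied by the
    // minimizer; numPasses is the number of passes over the training data.
    StochasticInPlaceMinimizer<AbstractStochasticCachingDiffUpdateFunction> sgd =
        new StochasticInPlaceMinimizer<AbstractStochasticCachingDiffUpdateFunction>(1.0, 50);

    double[] initial = new double[obj.domainDimension()];
    double[] weights = sgd.minimize(obj, 1e-4, initial);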
- Author:
- Angel Chang
Method Summary

protected String getName()

double getObjective(AbstractStochasticCachingDiffUpdateFunction function, double[] w, double wscale, int[] sample)

protected void init(AbstractStochasticCachingDiffUpdateFunction func)

double[] minimize(Function function, double functionTolerance, double[] initial)
          Attempts to find an unconstrained minimum of the objective function starting at initial, within functionTolerance.

double[] minimize(Function f, double functionTolerance, double[] initial, int maxIterations)

protected void say(String s)

protected void sayln(String s)

void setEvaluators(int iters, Evaluator[] evaluators)

void shutUp()

double tryEta(AbstractStochasticCachingDiffUpdateFunction function, double[] initial, int[] sample, double eta)

double tune(AbstractStochasticCachingDiffUpdateFunction function, double[] initial, int sampleSize, double seta)
          Finds a good learning rate to start with: eta = 1/(lambda*(t0+t)); we find a good t0.

Methods inherited from class java.lang.Object
clone, equals, finalize, getClass, hashCode, notify, notifyAll, toString, wait, wait, wait
Field Detail

xscale
protected double xscale
xnorm
protected double xnorm
x
protected double[] x
t0
protected int t0
sigma
protected double sigma
lambda
protected double lambda
quiet
protected boolean quiet
numPasses
protected int numPasses
bSize
protected int bSize
tuningSamples
protected int tuningSamples
gen
protected Random gen
maxTime
protected long maxTime
Constructor Detail

StochasticInPlaceMinimizer
public StochasticInPlaceMinimizer()
StochasticInPlaceMinimizer
public StochasticInPlaceMinimizer(double sigma,
int numPasses)
StochasticInPlaceMinimizer
public StochasticInPlaceMinimizer(double sigma,
int numPasses,
int tuningSamples)
StochasticInPlaceMinimizer
public StochasticInPlaceMinimizer(LogPrior prior,
int numPasses,
int batchSize,
int tuningSamples)
Method Detail

shutUp
public void shutUp()
getName
protected String getName()
setEvaluators
public void setEvaluators(int iters,
Evaluator[] evaluators)
- Specified by:
  setEvaluators in interface HasEvaluators
init
protected void init(AbstractStochasticCachingDiffUpdateFunction func)
getObjective
public double getObjective(AbstractStochasticCachingDiffUpdateFunction function,
double[] w,
double wscale,
int[] sample)
tryEta
public double tryEta(AbstractStochasticCachingDiffUpdateFunction function,
double[] initial,
int[] sample,
double eta)
tune
public double tune(AbstractStochasticCachingDiffUpdateFunction function,
double[] initial,
int sampleSize,
double seta)
- Finds a good learning rate to start with: eta = 1/(lambda*(t0+t)); we find a good t0 (see the sketch after the parameter list).
- Parameters:
  function
  initial
  sampleSize
  seta
- Returns:
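A small sketch of the learning rate schedule this formula describes (illustrative values only; this is not the class's actual tuning loop, and the variable names simply mirror the formula above):

    // eta = 1/(lambda*(t0+t)): the step size decays as more updates are made.
    // lambda is the L2 regularization strength; t0 is the offset that tune(...)
    // searches for; t counts updates performed so far.
    double lambda = 0.1;   // illustrative value
    double t0 = 1000.0;    // illustrative value; a larger t0 means smaller initial steps
    for (int t = 0; t < 5; t++) {
      double eta = 1.0 / (lambda * (t0 + t));
      System.out.printf("t=%d eta=%.6f%n", t, eta);
    }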
minimize
public double[] minimize(Function function,
double functionTolerance,
double[] initial)
- Description copied from interface: Minimizer
- Attempts to find an unconstrained minimum of the objective function starting at initial, within functionTolerance.
- Specified by:
  minimize in interface Minimizer<Function>
- Parameters:
  function - the objective function
  functionTolerance - a double value
  initial - an initial feasible point
- Returns:
  Unconstrained minimum of function
minimize
public double[] minimize(Function f,
double functionTolerance,
double[] initial,
int maxIterations)
- Specified by:
  minimize in interface Minimizer<Function>
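A hedged variant of the usage sketch from the class description, using this overload's documented signature (the assumption here is that maxIterations caps the number of passes over the data; the exact semantics are not documented on this page):

    // Reusing 'obj' and 'initial' from the earlier sketch; 20 is illustrative.
    double[] weights = sgd.minimize(obj, 1e-4, initial, 20);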
sayln
protected void sayln(String s)
say
protected void say(String s)