edu.stanford.nlp.optimization
Class AbstractStochasticCachingDiffUpdateFunction
java.lang.Object
edu.stanford.nlp.optimization.AbstractCachingDiffFunction
edu.stanford.nlp.optimization.AbstractStochasticCachingDiffFunction
edu.stanford.nlp.optimization.AbstractStochasticCachingDiffUpdateFunction
- All Implemented Interfaces:
- DiffFunction, Function, HasInitial
- Direct Known Subclasses:
- LogConditionalObjectiveFunction
public abstract class AbstractStochasticCachingDiffUpdateFunction
- extends AbstractStochasticCachingDiffFunction
Function for stochastic calculations that performs its update in place
(instead of maintaining and returning the derivative).
Weights are represented by an array of doubles together with a scalar
that indicates how much to scale all weights by;
this allows every weight to be rescaled by modifying just the scalar.
- Author:
- Angel Chang
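The scaled-weight representation described above can be sketched as follows. This is an illustrative standalone class, not part of the actual API; the names `ScaledWeights`, `weight`, `scaleAll`, and `addToWeight` are assumptions for the example:

```java
// Illustrative sketch of the scaled-weight trick described above:
// the effective weight i is x[i] * xScale, so multiplying every
// weight by c is an O(1) update to xScale rather than an O(n) pass.
class ScaledWeights {
  private final double[] x;    // unscaled weights
  private double xScale = 1.0; // global scale factor

  ScaledWeights(double[] x) { this.x = x; }

  double weight(int i) { return x[i] * xScale; }

  // Scale all weights by c in constant time.
  void scaleAll(double c) { xScale *= c; }

  // An in-place adjustment to one effective weight must divide by
  // xScale, since the stored value is the unscaled one.
  void addToWeight(int i, double delta) { x[i] += delta / xScale; }
}
```

The constant-time `scaleAll` is what makes cheap L2-style shrinking of all weights possible during stochastic updates.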
Fields inherited from class edu.stanford.nlp.optimization.AbstractStochasticCachingDiffFunction |
allIndices, curElement, finiteDifferenceStepSize, gradPerturbed, hasNewVals, HdotV, lastBatch, lastBatchSize, lastElement, lastVBatch, lastXBatch, method, randGenerator, recalculatePrevBatch, returnPreviousValues, sampleMethod, scaleUp, thisBatch, xPerturbed |
Method Summary |
void |
calculateStochasticGradient(double[] x,
int batchSize)
Performs stochastic gradient updates based
on the next batch of batchSize samples; does not apply regularization. |
abstract void |
calculateStochasticGradient(double[] x,
int[] batch)
Performs stochastic gradient updates based
on samples indexed by batch; does not apply regularization. |
abstract double |
calculateStochasticUpdate(double[] x,
double xScale,
int[] batch,
double gain)
Performs stochastic update of weights x (scaled by xScale) based
on samples indexed by batch. |
double |
calculateStochasticUpdate(double[] x,
double xScale,
int batchSize,
double gain)
Performs stochastic update of weights x (scaled by xScale) based
on the next batch of batchSize samples. |
int[] |
getSample(int sampleSize)
Gets a random sample (this is sampling with replacement). |
abstract double |
valueAt(double[] x,
double xScale,
int[] batch)
Computes value of function for specified value of x (scaled by xScale)
only over samples indexed by batch. |
Methods inherited from class edu.stanford.nlp.optimization.AbstractStochasticCachingDiffFunction |
calculateStochastic, clearCache, dataDimension, decrementBatch, derivativeAt, derivativeAt, getBatch, HdotVAt, HdotVAt, HdotVAt, incrementBatch, incrementRandom, initial, lastDerivative, lastValue, scaleUp, valueAt, valueAt |
Methods inherited from class java.lang.Object |
clone, equals, finalize, getClass, hashCode, notify, notifyAll, toString, wait, wait, wait |
skipValCalc
protected boolean skipValCalc
AbstractStochasticCachingDiffUpdateFunction
public AbstractStochasticCachingDiffUpdateFunction()
getSample
public int[] getSample(int sampleSize)
- Gets a random sample (this is sampling with replacement).
- Parameters:
- sampleSize - number of samples to generate
- Returns:
- array of indices for a random sample of size sampleSize
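Sampling with replacement, as getSample does, can be sketched like this. The helper below is an assumption for illustration (using java.util.Random); the real method draws indices up to the function's data dimension:

```java
import java.util.Random;

class SampleWithReplacement {
  // Draw sampleSize indices uniformly from [0, dataSize), with
  // replacement: the same index may appear more than once.
  static int[] getSample(Random rng, int dataSize, int sampleSize) {
    int[] sample = new int[sampleSize];
    for (int i = 0; i < sampleSize; i++) {
      sample[i] = rng.nextInt(dataSize);
    }
    return sample;
  }
}
```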
valueAt
public abstract double valueAt(double[] x,
double xScale,
int[] batch)
- Computes value of function for specified value of x (scaled by xScale)
only over samples indexed by batch.
- Parameters:
- x - unscaled weights
- xScale - how much to scale x by when performing calculations
- batch - indices of which samples to compute the function over
- Returns:
- value of the function at the specified x (scaled by xScale) for the given samples
calculateStochasticUpdate
public abstract double calculateStochasticUpdate(double[] x,
double xScale,
int[] batch,
double gain)
- Performs stochastic update of weights x (scaled by xScale) based
on samples indexed by batch.
- Parameters:
- x - unscaled weights
- xScale - how much to scale x by when performing calculations
- batch - indices of which samples to compute the function over
- gain - how much to scale adjustments to x
- Returns:
- value of the function at the specified x (scaled by xScale) for the given samples
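The "update in place" contract of calculateStochasticUpdate can be sketched with a stand-in objective. Everything below is illustrative, not the Stanford NLP implementation: a one-weight least-squares model y = w * a plays the role of the real objective, and the step is applied directly to x rather than returned as a gradient:

```java
// Minimal sketch of the update-in-place pattern: compute the batch
// objective value and apply the gradient step directly to x.
// All names here are illustrative assumptions.
class InPlaceSgd {
  // x holds the single unscaled weight; the effective weight is
  // x[0] * xScale. Returns the batch objective value.
  static double calculateStochasticUpdate(double[] x, double xScale,
                                          double[] a, double[] b,
                                          int[] batch, double gain) {
    double value = 0.0;
    for (int i : batch) {
      double pred = x[0] * xScale * a[i];
      double resid = pred - b[i];
      value += 0.5 * resid * resid;
      // Gradient of 0.5*resid^2 w.r.t. the effective weight is
      // resid * a[i]; divide the step by xScale because x stores
      // the unscaled weight (see the class description).
      x[0] -= gain * resid * a[i] / xScale;
    }
    return value;
  }
}
```

Repeated calls on the same batch drive the weight toward the least-squares solution, with the returned value usable for convergence monitoring.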
calculateStochasticUpdate
public double calculateStochasticUpdate(double[] x,
double xScale,
int batchSize,
double gain)
- Performs stochastic update of weights x (scaled by xScale) based
on the next batch of batchSize samples.
- Parameters:
- x - unscaled weights
- xScale - how much to scale x by when performing calculations
- batchSize - number of samples to pick next
- gain - how much to scale adjustments to x
- Returns:
- value of the function at the specified x (scaled by xScale) for the given samples
calculateStochasticGradient
public abstract void calculateStochasticGradient(double[] x,
int[] batch)
- Performs stochastic gradient updates based
on samples indexed by batch; does not apply regularization.
- Parameters:
- x - unscaled weights
- batch - indices of which samples to compute the function over
calculateStochasticGradient
public void calculateStochasticGradient(double[] x,
int batchSize)
- Performs stochastic gradient updates based
on the next batch of batchSize samples; does not apply regularization.
- Parameters:
- x - unscaled weights
- batchSize - number of samples to pick next
Stanford NLP Group