| Package | Description |
|---|---|
| de.jungblut.classification.nn | |
| de.jungblut.math.activation | |
| Modifier and Type | Field and Description |
|---|---|
| ActivationFunction[] | MultilayerPerceptronCostFunction.NetworkConfiguration.activations |
| Modifier and Type | Method and Description |
|---|---|
| ActivationFunction[] | MultilayerPerceptron.getActivations() |
| Modifier and Type | Method and Description |
|---|---|
| static RBM.RBMBuilder | RBM.RBMBuilder.create(ActivationFunction activation, int... layer) Creates a new RBM.RBMBuilder from an activation function and layer sizes. |
| static MultilayerPerceptron.MultilayerPerceptronBuilder | MultilayerPerceptron.MultilayerPerceptronBuilder.create(int[] layer, ActivationFunction[] activations, LossFunction errorFunction, Minimizer minimizer, int maxIteration) Creates a new TrainingConfiguration with the mandatory configuration: the activation functions, the minimizer to use, and the maximum number of iterations. |
| static RBM | RBM.single(int numHiddenNodes, ActivationFunction func) |
| static RBM | RBM.singleGPU(int numHiddenNodes, ActivationFunction func) |
| static RBM | RBM.stacked(ActivationFunction func, int... numHiddenNodes) Creates a new stacked RBM with the given activation function and the given number of hidden nodes in each stacked layer. |
| static RBM | RBM.stackedGPU(ActivationFunction func, int... numHiddenNodes) GPU-backed variant: creates a new stacked RBM with the given activation function and the given number of hidden nodes in each stacked layer. |
| Constructor and Description |
|---|
| RBMCostFunction(de.jungblut.math.DoubleVector[] currentTrainingSet, int batchSize, int numThreads, int numHiddenUnits, ActivationFunction activationFunction, TrainingType type, double lambda, long seed, boolean stochastic) |
| Modifier and Type | Class and Description |
|---|---|
| class | AbstractActivationFunction Implements the boilerplate code for applying functions on container classes such as vectors and matrices, by applying the function to every element. |
| class | ElliotActivationFunction Implementation of the Elliot activation function. |
| class | LinearActivationFunction Linear activation function. |
| class | LogActivationFunction Log activation function, guarded against NaN and infinity edge cases. |
| class | ReluActivationFunction Rectified linear unit (ReLU) implementation. |
| class | SigmoidActivationFunction Implementation of the sigmoid function. |
| class | SoftMaxActivationFunction Softmax activation that only works on vectors, because it needs to sum and divide the probabilities. |
| class | SoftplusReluActivationFunction Smoothed approximation of a ReluActivationFunction. |
| class | StepActivationFunction Classic perceptron-like step function. |
| class | TanhActivationFunction Implementation of the tanh activation based on FastMath. |
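The table above catalogues the package's element-wise activations. The core math of a few of them can be sketched in plain Java (standalone helper methods for illustration, not the library's classes, which additionally inherit vector/matrix application from AbstractActivationFunction):

```java
import java.util.Arrays;

// Self-contained sketches of the math behind three activations listed above.
public final class ActivationSketches {

    // SigmoidActivationFunction: squashes any real input into (0, 1).
    static double sigmoid(double x) {
        return 1.0 / (1.0 + Math.exp(-x));
    }

    // ReluActivationFunction: rectified linear unit, max(0, x).
    static double relu(double x) {
        return Math.max(0.0, x);
    }

    // SoftMaxActivationFunction: vector-only, because it normalizes by the
    // sum over all components; subtracting the max guards against overflow.
    static double[] softmax(double[] v) {
        double max = Arrays.stream(v).max().orElse(0.0);
        double[] exp = Arrays.stream(v).map(x -> Math.exp(x - max)).toArray();
        double sum = Arrays.stream(exp).sum();
        return Arrays.stream(exp).map(e -> e / sum).toArray();
    }

    public static void main(String[] args) {
        System.out.println(sigmoid(0.0));  // prints 0.5
        System.out.println(relu(-3.0));    // prints 0.0
        System.out.println(Arrays.toString(softmax(new double[] {1.0, 1.0})));
    }
}
```

The vector-only restriction on softmax is visible here: unlike the scalar activations, it cannot be computed one element at a time.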
| Modifier and Type | Method and Description |
|---|---|
| ActivationFunction | ActivationFunctionSelector.get() |
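`ActivationFunctionSelector.get()` suggests an enum of named activations, each handing back its function instance. A minimal sketch of that selector pattern (hypothetical constants and a `DoubleUnaryOperator` payload, not the library's actual enum):

```java
import java.util.function.DoubleUnaryOperator;

// Hypothetical sketch of an enum-backed selector: each constant carries an
// activation, and get() hands it back, in the spirit of
// ActivationFunctionSelector.get().
public enum ActivationSelectorSketch {
    SIGMOID(x -> 1.0 / (1.0 + Math.exp(-x))),
    RELU(x -> Math.max(0.0, x)),
    LINEAR(x -> x);

    private final DoubleUnaryOperator fn;

    ActivationSelectorSketch(DoubleUnaryOperator fn) {
        this.fn = fn;
    }

    public DoubleUnaryOperator get() {
        return fn;
    }

    public static void main(String[] args) {
        System.out.println(RELU.get().applyAsDouble(-2.0));   // prints 0.0
        System.out.println(SIGMOID.get().applyAsDouble(0.0)); // prints 0.5
    }
}
```

An enum selector gives callers a fixed, typo-safe vocabulary of activations while keeping construction of the function objects in one place.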
Copyright © 2016. All rights reserved.