BATCHgdwm.MLPnet {AMORE}    R Documentation

Batch gradient descent with momentum training

Description

Updates the neural network weights and biases according to the training set, using batch gradient descent with momentum. These functions are mostly intended to be called by the train function rather than directly by the user.

Usage

BATCHgdwm.MLPnet(net,P,T)
BATCHgdwm.MLPnet.R(net,P,T)

Arguments

net The neural network to be trained.
P Input values of the training patterns.
T Target values to be reached at the network outputs.

Value

These functions return the neural network object with its weights and biases updated according to the training data.

Author(s)

Manuel Castejón Limas. manuel.castejon@unileon.es
Joaquin Ordieres Meré. joaquin.ordieres@dim.unirioja.es
Ana González Marcos. ana.gonzalez@unileon.es
Alpha V. Pernía Espinoza. alpha.pernia@alum.unirioja.es
Eliseo P. Vergara Gonzalez. eliseo.vergara@dim.unirioja.es
Francisco Javier Martinez de Pisón. francisco.martinez@dim.unirioja.es
Fernando Alba Elías. fernando.alba@unavarra.es

References

Simon Haykin. Neural Networks: A Comprehensive Foundation. 2nd Edition. Prentice Hall, 1999.

See Also

train
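
Examples

The update rule these functions apply can be sketched in plain R on a toy single-neuron problem. This is an illustrative sketch of batch gradient descent with momentum, using assumed names (lr, mom, dw, db); it is not AMORE's internal implementation:

```r
# Batch gradient descent with momentum on a single linear neuron.
# Each epoch accumulates the gradient over the WHOLE training set
# (the "batch" part), then applies the momentum-smoothed update:
#   dw <- mom * dw_previous - lr * gradient
set.seed(1)
P <- matrix(runif(40), ncol = 2)          # 20 training patterns, 2 inputs
T <- P %*% c(2, -3) + 1                   # targets from a known linear map
w <- c(0, 0); b <- 0                      # weights and bias to be learned
dw <- c(0, 0); db <- 0                    # previous updates (momentum terms)
lr <- 0.1; mom <- 0.9                     # learning rate and momentum

for (epoch in 1:1000) {
  y  <- as.vector(P %*% w) + b            # forward pass over the full batch
  e  <- y - as.vector(T)                  # batch error
  gw <- as.vector(t(P) %*% e) / nrow(P)   # mean gradient w.r.t. weights
  gb <- mean(e)                           # mean gradient w.r.t. bias
  dw <- mom * dw - lr * gw                # momentum-smoothed updates
  db <- mom * db - lr * gb
  w  <- w + dw
  b  <- b + db
}
round(c(w, b), 2)                         # approaches c(2, -3, 1)
```

In AMORE itself the batch method is selected when the network is created (via the method argument of newff), after which train dispatches to BATCHgdwm.MLPnet; the sketch above only isolates the weight-update arithmetic.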


[Package AMORE version 0.2-0 Index]