newff {AMORE}                                                R Documentation
Description

Creates a feedforward artificial neural network according to the structure established by the AMORE package standard.

Usage

newff(n.inputs, n.hidden, n.outputs, learning.rate.global,
      momentum.global, error.criterium, Stao, hidden.layer, output.layer)
Arguments

n.inputs: Number of input neurons or predictors.

n.hidden: Number of hidden layer neurons.

n.outputs: Number of output layer neurons.

learning.rate.global: Learning rate.

momentum.global: Momentum (set to 0 if you do not want to use it).

error.criterium: Criterion used to measure the proximity of the neural network prediction to its target. Available criteria include "MSE" (mean squared error, as in the example below) and "TAO" (a robust criterion controlled by the Stao parameter).

Stao: Stao parameter for the "TAO" error criterion. Unused by the other criteria.

hidden.layer: Activation function of the hidden layer neurons. Available functions include "tansig" and "purelin" (both appear in the example below).

output.layer: Activation function of the output layer neurons, chosen from the same set of functions as hidden.layer.
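As an illustration of what the error.criterium argument measures, the "MSE" criterion used in the example below can be written in a few lines of base R. This sketch is for intuition only and is not AMORE's internal implementation:

```r
# Mean squared error between a prediction and its target -- the quantity
# an "MSE"-style error criterium measures (illustration only, not the
# AMORE internals).
mse <- function(prediction, target) mean((prediction - target)^2)

prediction <- c(0.1, 0.4, 0.9)
target     <- c(0.0, 0.5, 1.0)
mse(prediction, target)  # 0.01: each residual is 0.1, squared and averaged
```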
Value

newff returns a feedforward neural network object.
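For the tansig/purelin configuration used in the example below, the returned network computes a forward pass like the following minimal base-R sketch. The weights here are hypothetical values chosen for illustration; this is not AMORE's internal object representation:

```r
tansig  <- function(x) tanh(x)  # hyperbolic tangent sigmoid activation
purelin <- function(x) x        # linear (identity) activation

# Hypothetical weights and biases for a 1-input, 2-hidden, 1-output net.
W1 <- matrix(c(1.0, -1.0), nrow = 2)  # hidden-layer weights (2 x 1)
b1 <- c(0.0, 0.0)                     # hidden-layer biases
W2 <- matrix(c(0.5, -0.5), nrow = 1)  # output-layer weights (1 x 2)
b2 <- 0.0                             # output-layer bias

forward <- function(p) {
  h <- tansig(W1 %*% p + b1)          # hidden activations
  as.numeric(purelin(W2 %*% h + b2))  # network prediction
}

forward(0.5)  # equals tanh(0.5) with these particular weights
```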
Author(s)

Manuel Castejón Limas. manuel.castejon@unileon.es
Joaquin Ordieres Meré. joaquin.ordieres@dim.unirioja.es
Ana González Marcos. ana.gonzalez@unileon.es
Alpha V. Pernía Espinoza. alpha.pernia@alum.unirioja.es
Eliseo P. Vergara Gonzalez. eliseo.vergara@dim.unirioja.es
Francisco Javier Martinez de Pisón. francisco.martinez@dim.unirioja.es
Fernando Alba Elías. fernando.alba@unavarra.es
References

Pernía Espinoza, A.V. TAO-robust backpropagation learning algorithm. Neural Networks. In press.

Haykin, S. Neural Networks: A Comprehensive Foundation. 2nd edition.
See Also

init.neuron, random.init.NeuralNet, random.init.neuron, select.activation.function
Examples

# Example 1
library(AMORE)
# P is the input vector
P <- matrix(sample(seq(-1, 1, length = 1000), 1000, replace = FALSE), ncol = 1)
# The network will try to approximate the target P^2
target <- P^2
# We create a feedforward network with 2 neurons in the hidden layer,
# with tansig and purelin activation functions.
net <- newff(n.inputs = 1, n.hidden = 2, n.outputs = 1,
             learning.rate.global = 1e-1, momentum.global = 0.5,
             error.criterium = "MSE", hidden.layer = "tansig",
             output.layer = "purelin")
net <- train(net, P, target, n.epochs = 100, g = adapt.NeuralNet,
             error.criterium = "MSE", Stao = NA, report = TRUE,
             show.step = 10)
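To make the training step above concrete, here is a self-contained base-R sketch of the same task (a 1-2-1 tansig/purelin network approximating P^2) with the backpropagation update written out by hand: stochastic gradient descent with a global learning rate and momentum. It illustrates the idea only and is not AMORE's adapt.NeuralNet:

```r
# Hand-rolled SGD-with-momentum training of a 1-2-1 tansig/purelin net
# on the target P^2 (illustrative sketch, not AMORE internals).
set.seed(42)
P <- seq(-1, 1, length = 100)
target <- P^2

w1 <- rnorm(2, sd = 0.5); b1 <- c(0, 0)  # hidden-layer weights and biases
w2 <- rnorm(2, sd = 0.5); b2 <- 0        # output-layer weights and bias
lr <- 0.05; mom <- 0.5                   # learning rate and momentum
dw1 <- db1 <- dw2 <- c(0, 0); db2 <- 0   # previous updates (momentum terms)

mse <- function() {
  H <- tanh(outer(w1, P) + b1)           # 2 x 100 hidden activations
  mean((as.vector(w2 %*% H) + b2 - target)^2)
}
initial <- mse()

for (epoch in 1:200) {
  for (i in sample(length(P))) {
    h <- tanh(w1 * P[i] + b1)            # forward pass: hidden layer
    e <- sum(w2 * h) + b2 - target[i]    # output error (purelin output)
    dh <- e * w2 * (1 - h^2)             # backpropagate through tanh
    dw2 <- -lr * e * h      + mom * dw2; w2 <- w2 + dw2
    db2 <- -lr * e          + mom * db2; b2 <- b2 + db2
    dw1 <- -lr * dh * P[i]  + mom * dw1; w1 <- w1 + dw1
    db1 <- -lr * dh         + mom * db1; b1 <- b1 + db1
  }
}
final <- mse()  # the fitted error should be well below the initial error
```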