Hi
I'm working on implementing a neural network, but I'm having trouble calculating the error gradients for both the output and hidden layers. I'm using the identity function as my activation function: f(x) = x. I'm pretty clueless when it comes to calculus, so I'm really struggling with this part.
I found this web page with a good explanation:
http://www.willamette.edu/~gorr/clas...9/linear2.html
I just can't seem to figure out how to implement it.
I have an example of the gradient calculation for a network that uses a sigmoid activation function:
Code:
inline double trainer::getOutputErrorGradient( double desiredValue, double outputValue )
{
    // sigmoid derivative f'(x) = f(x) * (1 - f(x)), multiplied by the output error
    return outputValue * ( 1 - outputValue ) * ( desiredValue - outputValue );
}
but this isn't all that helpful, since the outputValue * ( 1 - outputValue ) term is the sigmoid's derivative and shouldn't apply to the identity function.
Any help would be greatly appreciated.