Hello,

I'm attempting to create my first neural network, but I only have high-school maths, so most of the formulas out there are pretty hard for me to understand. I was hoping someone could tell me if I'm doing this properly.

I'm creating a spam classifier. I have 7 inputs, and I assume I only need one output: if the network outputs something close to 1 the message would be considered "spam", and something close to 0 would be considered "not spam".

Basically this is all I have:


Do I need more than this, other than calculating the error and adjusting the weights, for it to work?
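
To make it concrete, here is a rough sketch in code of the network I described. The hidden layer of 4 neurons, the sigmoid squashing function, and names like forwardPass are just my own guesses, so they may not be the right way to do it:

Code:
#include <cmath>
#include <cstddef>
#include <vector>

// squashing function so every neuron's output lands between 0 and 1
double sigmoid( double x )
{
	return 1.0 / ( 1.0 + std::exp( -x ) );
}

// inputs: my 7 input values
// hiddenWeights: one row of 7 weights per hidden neuron (I picked 4 hidden neurons)
// outputWeights: one weight per hidden neuron, feeding the single output
double forwardPass( const std::vector<double>& inputs,
                    const std::vector<std::vector<double>>& hiddenWeights,
                    const std::vector<double>& outputWeights )
{
	// hidden layer: each hidden neuron takes a weighted sum of the inputs, then squashes it
	std::vector<double> hidden( hiddenWeights.size() );
	for ( std::size_t h = 0; h < hiddenWeights.size(); ++h )
	{
		double sum = 0.0;
		for ( std::size_t i = 0; i < inputs.size(); ++i )
			sum += hiddenWeights[h][i] * inputs[i];
		hidden[h] = sigmoid( sum );
	}

	// single output neuron: weighted sum of the hidden outputs, squashed to 0..1
	double out = 0.0;
	for ( std::size_t h = 0; h < hidden.size(); ++h )
		out += outputWeights[h] * hidden[h];
	return sigmoid( out );   // close to 1 = "spam", close to 0 = "not spam"
}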

As for calculating the error for the final weight, I would take the expected result of the network (say 1) and subtract the actual result (say 0.7) from it, which would give me 0.3.

What confuses me next is:

1) How would I use this number to adjust the weights in the previous layer?

and 2) How do I work out the desired values for weights other than the final one, for example the weights in the middle of the network?

And I found this while looking around:

Code:
inline double trainer::getOutputErrorGradient( double desiredValue, double outputValue)
{
	//return error gradient: the sigmoid derivative, outputValue * (1 - outputValue),
	//multiplied by the error, (desiredValue - outputValue)
	return outputValue * ( 1 - outputValue ) * ( desiredValue - outputValue );
}
What this would do is take my 0.7 output and my desired value of 1 and return 0.063. What would I do with that number?
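
My only guess is something like the snippet below, where the gradient nudges each weight feeding the output neuron by a small step. The learning rate and the name updateOutputWeights are just mine, so I'm not sure this is right:

Code:
#include <cstddef>
#include <vector>

// my guess: each weight into the output neuron moves by
// learningRate * errorGradient * (that hidden neuron's output)
void updateOutputWeights( std::vector<double>& outputWeights,
                          const std::vector<double>& hiddenOutputs,
                          double errorGradient, double learningRate )
{
	for ( std::size_t h = 0; h < outputWeights.size(); ++h )
		outputWeights[h] += learningRate * errorGradient * hiddenOutputs[h];
}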

Any help is appreciated.