1. It compiles just fine on VC++ 6.0, the stated target compiler. Perhaps you misread Wikipedia, or just don't understand what monotonic means.

From wiktionary.org -
monotonic
1. of or using the Greek system of diacritics which discards the breathings and employs a single accent to indicate stress.
2. (mathematics) said of a function that either never decreases or never increases as its independent variable increases.

From Wikipedia.org -

In general, a sigmoid function is real-valued and differentiable, having either a non-negative or non-positive first derivative and exactly one inflection point.

So, while I suppose a sigmoid is monotonic (I may have misspoken earlier), not all monotonic functions are sigmoids. I think that citizen's formula may produce bad results for negative values of X. Let's see: -5/sqrt(25+1)? Nope, it's a negative number that decreases as the variable decreases, and it approaches 1.0 for large values of X. For a < b < c in the positive domain it satisfies f(a) < f(b) < f(c) with slopes s(a) > s(b) > s(c), and it satisfies the negative-domain equivalent as well. So it qualifies. If he implements it in assembly it will not produce errors, although depending on the compiler, it may produce errors for values of X greater than sqrt(1.7E+308) when compiled as is.

2. It appears that Rashakil understands the definition of a monotonic function perfectly well. He stated that sigmoidal functions are monotonic, not that all monotonic functions are sigmoidal.

3. As I stated. Glad you agree. So, citizen's entry is not monotonic. Therefore, it's not a sigmoid. Please disqualify it.

Also, I doubt your implementation compiles on VC++ 6.0.

Code:
```void sigmoid(double* Input){
    double temp;

    while(Input[0] != 0.0){
        __asm fld2e;
        __asm fstp temp;
        temp = pow(2 , temp);
        temp = pow(temp , (0-X));
        temp += 1.0;
        Input[0] = pow(temp , -1.0);
        Input++;
    }

    return;
}```
This relies on a global variable X and doesn't look at the input parameter at all.

And on VC++ 2003, I'm getting a syntax error before then.

4. Fixing up your function so that I could compile it, I changed it to the following:

Code:
```void sigmoid(double* Input) {
    double temp;

    while(Input[0] != 0.0) {
        temp = 2.71828182845904523536028747135662497757247; /* hope that's right */
        temp = pow(temp , (0-Input[0]));
        temp += 1.0;
        Input[0] = pow(temp , -1.0);
        Input++;
    }

    return;
}```
It turns out that your function has a constant slope on the interval [36.737,5000]. This fails to meet the requirements you have given.

5. It's easy to prove that it's impossible to submit a valid entry to this contest:
1. There are a finite number of possible values a double can take.
2. The number of values a double in the interval [-1,1] can take is thus smaller.
3. By the pigeonhole principle, there is no injective (and hence no strictly monotonic) function from the doubles into the doubles in [-1,1].
4. There is no sigmoid function using doubles.

6. Originally Posted by Rashakil Fol
It turns out that your function has a constant slope on the interval [36.737,5000]. This fails to meet the requirements you have given.

T = 36.737
e = 2.718281828459045235360287471356

1 / (1 + e ^-T) = 0.99999999999999988899983654344547

T = 5000

1 / (1 + e ^ -T) = 1.0

seems there is plenty of room in between, seeing as how the FPU uses 80 bits internally. And even if my example fails above values of 36 or so, that's certainly better than any equation you have submitted, all of which fail at every value. Personally I wouldn't use this particular example (it's just the one provided on wiki, so giving it doesn't hand an unfair advantage to any contestant), and the one I use right now doesn't fail until the values exceed 60 million or so, which is well beyond what our application of sigmoids would ever require. If you want a mathematically pure solution, then implementing on hardware will never work, but a pure solution is worthless if you can't make toys with it.

But since we now have two people claiming that sigmoids are impossible to implement in hardware, I will now tell my boss that neural networks are impossible to implement on computers and he should just burn his PhD and go farm goats in a cave. I figured it would be a nice change of pace to have a serious contest, but I can see you would rather cry about the limitations of hardware than work on anything more complicated than a tic tac toe game or shortest path algorithms.

7. Maybe you should just change the function from a sigmoid to something more specific, so we can end this little flamewar and have a competition.

8. >80 bit
FWIW

9. Originally Posted by Dave_Sinkula
>80 bit
FWIW
The FPU uses 80 bits internally regardless of the limits of the double; even if long doubles map to plain doubles, the FPU still converts doubles to 80-bit precision.

FLD

Description
Pushes the source operand onto the FPU register stack. The source operand can be in single-precision, double-precision, or double extended-precision floating-point format. If the source operand is in single-precision or double-precision floating-point format, it is automatically converted to the double extended-precision floating-point format before being pushed on the stack.

10. Maybe you should just change the function from a sigmoid to something more specific, so we can end this little flamewar and have a competition.
I agree, it looks like this has become a contest about the contest.

11. I concur. Therefore I am closing the contest for new entries. Since citizen is the only one who spent 5 minutes writing an entry instead of 5 hours looking for loopholes, citizen is the winner.

12. Originally Posted by abachler
The FPU uses 80 bit internally, regardless of the limits of the double, or if long doubles map to doubles, the FPU still converts doubles to 80 bit precision.
Whether the FPU uses 64 bit or 52 bit precision or whatnot is not relevant. The output of the function is an array of doubles; what the number "could have been" isn't important.

All you need to do is clearly specify the restrictions that determine which implementations are acceptable, instead of giving a vague definition. So please do.

13. The issue here is that floating point math is almost always useless in the realm of pure math, and you have put forward a pure math problem, with the requirement that we'll use floating point math to solve it.

First we have requirements from the wiki article...
• Real-Valued
• Differentiable
• Positive Slope
• Having exactly one inflection point

• The function must approach -1 as it tends to negative infinity
• The function must approach +1 as it tends to positive infinity
• Non-Zero slope at all points

Now, let's look at the properties of a double precision method...
• It is inherently not continuous.
• -IF- you play connect-the-dots to cheat a continuous function, then it is not differentiable.

Already, ideas like "limit", and "slope" are pretty much doomed.
• A double function which maps from (-inf, inf) to (-1, 1) will have zero slope over more than 99.9% of its adjacent pairs of representable inputs

The traditional way to handle this is to simply let the double function approximate a real valued function. So, you could define the problem as...
Write a function which approximates a Sigmoid Family Curve.
In which case, returning 0 for all values is perfectly correct.

Accept the fact that giving pure math requirements for a double precision method is hopeless. If it is impossible to programmatically verify the correctness of an answer, then the question was bad. Give us some code which the function must fulfill and we'll give answers.

One other thing... "Weasel Code" is the spirit of mathematical programming. You can't get more weasely than some beautiful and amazing code like
Code:
```double d;
//...
long i = *(long *)&d;```