Just a question about entering fractions...
I have a function set up to do a simple calculation. If I multiply one of the variables by a whole-number literal, it gives the right answer, but if I use a fraction - e.g. (1/2)*x - it gives the wrong answer: the result is as if the fraction were 0. Fractions greater than 1 that don't reduce to a whole number are also wrong, except there the result is as if x were multiplied by the fraction truncated down to a whole number.
Everything works fine if I write the decimal instead, but I was wondering whether there's a special way to enter fractions?