1. Simple problem

How do you make an integer accept a decimal value?

I know I knew this, but now I've forgotten, and it's not in the FAQ.

2. Use a float, not an int?

3. >how do you make an integer accept a decimal value?
Use a double, e.g.:
double n = 6.0;

You're right, pretty simple, my friend.
cj
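To see the difference concretely, here is a minimal sketch (the helper names are just for illustration): a double keeps the fractional part, while initializing an int from the same literal silently truncates it.

```cpp
// A double holds the fractional part of a literal like 6.5.
double as_double() { return 6.5; }

// Initializing an int from the same literal truncates toward zero,
// so the value stored is 6 (compilers typically warn about this).
int as_int() { return 6.5; }
```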

4. You could cast it.

int variable;
int final;

final = double(variable);

5. Originally posted by Ride -or- Die
u could cast it.

int variable;
int final;

final = double(variable);
errr... no. final is an int and will only ever hold an int value. The use of the implied cast is also incorrect; you meant this:
>final = (double)variable;
... but it wouldn't store the digits to the right of the decimal point, if there were any. Of course, there can't be any anyway, as variable is an int as well.

6. wow....

Thanks for all the alternatives, but I think I'll stick with "float".

Sorry to trouble you with such a simple question.

7. Originally posted by Hammer

>>final = double(variable);
The use of the implied cast is also incorrect, you meant this:
final = (double)variable;
I don't see anything wrong with double(variable).
It's a C++-style (functional-style) cast.

Nevertheless, the cast is useless.

8. I don't think double(variable) really casts anything here.
It's just like:
Code:
```
int x = 5;
double y = 9;

x = y;
```
isn't it?
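That comparison is fair: when the target is an int, the implicit conversion, the C-style cast, and the C++ functional-style cast all produce the same truncated value. A quick sketch comparing the three spellings (function names are illustrative only):

```cpp
// Implicit conversion on assignment, as in the snippet above.
int implicit_conv(double y) { int x = y; return x; }

// C-style cast.
int c_cast(double y) { return (int)y; }

// C++ functional-style cast, the form the thread is debating.
int functional_cast(double y) { return int(y); }
```

All three truncate toward zero, which is why Hammer and Sang-drax agree the cast is useless in this context.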

9. >>It is C++-style casts.
Oops, you're right, I didn't spot that.

>>Nevertheless, the cast is useless.
That was my main point too.