# simple problem

• 10-28-2002
none
simple problem
how do you make an integer accept a decimal value?

I know I knew this, but now... And it's not in the FAQ.

• 10-28-2002
Azuth
Use a float, not an int?
• 10-28-2002
correlcj
Quote:

how do you make an integer accept a decimal value?
Use a double.
e.g.:
double n = 6.0;

You're right, pretty simple, my friend.
cj
• 10-28-2002
RoD
u could cast it.

int variable;
int final;

final = double(variable);
• 10-28-2002
Hammer
Quote:

Originally posted by Ride -or- Die
u could cast it.

int variable;
int final;

final = double(variable);

errr... no. final is an int and will only ever hold an int value. The use of the implied cast is also incorrect, you meant this:
>final = (double)variable;
... but it wouldn't store the digits to the right of the decimal point, if there were any. Of course, there can't be any though, as variable is an int as well.
• 10-28-2002
none
wow....

thanks for all the alternatives, but i think i'll stick with "float".

:D Sorry to trouble you with such a simple question.
• 10-28-2002
Sang-drax
Quote:

Originally posted by Hammer

>>final = double(variable);
The use of the implied cast is also incorrect, you meant this:
final = (double)variable;

I don't see anything wrong with double(variable).
It's a C++-style cast.

Nevertheless, the cast is useless.
• 10-28-2002
ammar
I don't think we can cast with: double(variable).
It's just like:
Code:

```
int x = 5;
double y = 9;
x = y;
```

isn't it?
• 10-30-2002
Hammer
>>It's a C++-style cast.
Oops, you're right, I didn't spot that :rolleyes:

>>Nevertheless, the cast is useless.
That was my main point too.