I have a problem with the following code sample, which does not produce the expected result:
Code:
int main( int argc, char **argv )
{
    int   in_mth_alloc = 0;
    float db_result    = 0.0;
    float db_bill_amt  = 0.0;
    int   return_code  = 0;

    /* Check start-up parameters and log on to the DB */
    if ( ( return_code = program_init( argc, argv ) ) == 0 )
    {
        EXEC SQL SELECT integer1, decimal1
                 INTO   :in_mth_alloc,
                        :db_bill_amt
                 FROM   lookup_table;

        printf( "in_mth_alloc = %i, db_bill_amt = %.2f, sqlcode = %i\n",
                in_mth_alloc, db_bill_amt, sqlca.sqlcode );

        db_result = ( db_bill_amt * ( ( 102 - in_mth_alloc ) / 102 ) * 12 );

        printf( "%.6f = ( %.2f * ( ( 102 - %i ) / 102 ) * 12 )\n",
                db_result, db_bill_amt, in_mth_alloc );
    }
    exit( 0 ); /* Completed program */
}
The Oracle table that the code reads from has two columns, and a single row of data as shown:
integer1, type=number(2), value=30
decimal1, type=number(12,2), value=76484.29
The program displays the following output:
in_mth_alloc = 30, db_bill_amt = 76484.29, sqlcode = 0
0.000000 = ( 76484.29 * ( ( 102 - 30 ) / 102 ) * 12 )
Obviously the result of the calculation is incorrect; the answer should be 647866.927059 (rounded to 6 dp). The only explanation I can think of is that for some reason the division part of the calculation has evaluated to zero, but I can't imagine why that should have happened.
Does anyone have any idea what's going wrong?
Thanks
Paul