Thread: Pointer help

  1. #16
    Registered User
    Join Date
    Sep 2020
    Posts
    31
Code:
for (double abw = 0.0; abw <= 1.0; abw += 0.1) {
    printf("%lf, %lf, %lf, %lf\n", abw, calc_abv(abw), calc_abv2(abw), calc_abv3(abw));
}

    Nice lesson in display. Thank you for that.

    I hadn't noticed that the "coefficients" were being added in reverse order, or that the implementation uses the method described here:
    Astute.

    I don't know which of the implementations below I prefer, but I do know that comments should be added (had there been comments in the initial code, I wouldn't have initially thought it was implemented incorrectly, for example).
    I'm also getting a lesson in the importance of comments, particularly when the code is reviewed by others.

    The original Horner form. It may have been more obvious had I stuck with it.
    Code:
    abw * ( -0.000039705486746795932 +
    abw * ( 1.2709666849144778 +
    abw * ( -0.40926819348115739 +
    abw * ( 2.0463351302912738 +
    abw * ( -7.8964816507513707 +
    abw * ( 15.009692673927390 +
    abw * ( -15.765836469736477 +
    ( 8.8142267038252680 -
    2.0695760421183493 * abw) * abw))))))

    Though I was partial to my original construct, I'm intrigued by the for loop proposed by laserlight and yourself. calc_abv(abw) has a complement, calc_abw(abv): same algorithm, different coefficients. It's not in the code yet but will be implemented. I see now, I think, that instead of two functions I can pass an array of either set of coefficients to a single convert() function.
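    Something along these lines, maybe (just a sketch, nothing is written yet; convert() and the names are placeholders, and the loop simply mirrors calc_abv3()):
    Code:
    #include <stddef.h>

    /* Sketch only: one conversion routine, fed whichever coefficient set
     * applies (ABW -> ABV or ABV -> ABW).  Coefficients are ordered from
     * the constant term up, as in calc_abv3(). */
    double convert(double x, const double coeff[], size_t ncoeff)
    {
        double result = coeff[0];   /* constant term */
        double xpow = x;            /* running power of x: x^1, x^2, ... */

        for (size_t i = 1; i < ncoeff; ++i)
        {
            result += coeff[i] * xpow;
            xpow *= x;
        }
        return result;
    }
    Then calc_abv(abw) would just become convert(abw, abv_coefficients, sizeof abv_coefficients / sizeof abv_coefficients[0]), and the other direction would pass the other array.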

    I'm going to revise my code to follow all of laserlight's and your suggestions. This is my first C program and I muddled through ad hoc, focusing on the trees rather than the forest. I truly appreciate all the advice and insight I've gotten.

    Much thanks to you both.

    Don

  2. #17
    Registered User
    Join Date
    Feb 2019
    Posts
    1,078
    I must point out that, conceptually, using a loop instead of a bunch of inline calls to `pow()` is correct, but you are dealing with floating point here, and floating-point values aren't exact representations of fractional values. This simple test shows it:
    Code:
    #include <stdio.h>
    #include <math.h>
    
    double calc_abv2(double abw)
    {
        const double a = -0.000039705486746795932;
        const double b =  1.2709666849144778;
        const double c = -0.40926819348115739;
        const double d =  2.0463351302912738;
        const double f = -7.8964816507513707;
        const double g =  15.009692673927390;
        const double h = -15.765836469736477;
        const double i = 8.8142267038252680;
        const double j = -2.0695760421183493;
     
        return  a +
                b * abw +
                c * pow(abw, 2) +
                d * pow(abw, 3) +
                f * pow(abw, 4) +
                g * pow(abw, 5) +
                h * pow(abw, 6) +
                i * pow(abw, 7) +
                j * pow(abw, 8)
        ;
    }
    
    double calc_abv3(double abw)
    {
        static const double coefficients[] = {
            -0.000039705486746795932,
            1.2709666849144778,
            -0.40926819348115739,
            2.0463351302912738,
            -7.8964816507513707,
            15.009692673927390,
            -15.765836469736477,
            8.8142267038252680,
            -2.0695760421183493
        };
     
        double abv = coefficients[0];
        double abw_mult = abw;
        for (int i = 1; i < sizeof(coefficients) / sizeof(coefficients[0]); ++i)
        {
            abv = abv + coefficients[i] * abw_mult;
            abw_mult *= abw;
        }
    
        return abv;
    }
    
    int main( void )
    {
      double r1, r2;
    
      // 10.2 as a test! Both functions should give you the same value, shouldn't they?
      r1 = calc_abv2( 10.2 );
      r2 = calc_abv3( 10.2 );
    
      printf( "r1 = %.30f\nr2 = %.30f\nr1 %s r2\n",
        r1, r2, r1 == r2 ? "==" : "!=" );
    }
    If you compile and run you'll see:
    Code:
    $ cc -o test test.c -lm
    $ ./test
    r1 = -157417092.855879038572311401367187500000
    r2 = -157417092.855879098176956176757812500000
    r1 != r2
    Another known fact is that the double-precision floating-point format only guarantees about 16 significant decimal digits; everything beyond that is rounded or truncated.
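    You can ask the implementation itself what it promises, by the way; <float.h> has the relevant constants (a small sketch, nothing specific to your program):
    Code:
    #include <stdio.h>
    #include <float.h>

    int main( void )
    {
      /* DBL_DIG: decimal digits that can be stored in a double and read
         back unchanged (typically 15).
         DBL_DECIMAL_DIG (C11): decimal digits needed to print a double and
         recover exactly the same double (typically 17).
         DBL_EPSILON: gap between 1.0 and the next representable double. */
      printf( "DBL_DIG         = %d\n", DBL_DIG );
      printf( "DBL_DECIMAL_DIG = %d\n", DBL_DECIMAL_DIG );
      printf( "DBL_EPSILON     = %.17g\n", DBL_EPSILON );
    }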

    The correct value is -157417092.855879180651745689527008

    I've got this by using a multiprecision calculator (bc) with this script:
    Code:
    /* calc.bc */
    scale=50
    
    a=-0.000039705486746795932
    b=1.2709666849144778
    c=-0.40926819348115739
    d=2.0463351302912738
    f=-7.8964816507513707
    g=15.009692673927390
    h=-15.765836469736477
    i=8.8142267038252680
    j=-2.0695760421183493
    
    x=10.2
    
    y=a+b*x+c*x^2+d*x^3+f*x^4+g*x^5+h*x^6+i*x^7+j*x^8
    
    y
    quit
    r1 is off by 1.420794342881598205*10⁻⁷
    r2 is off by 8.24747895127691955*10⁻⁸

    Tiny errors, huh? And the loop function, in this case, is more "precise" because it isn't using "pow()". But, the point is: floating point is always an approximation and calculations must be done with care.
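    That's also why comparing two computed doubles with == is asking for trouble. If you need to decide whether results like r1 and r2 "agree", compare them within a tolerance that suits your problem; a rough sketch:
    Code:
    #include <stdio.h>
    #include <math.h>

    /* Sketch: a and b are "equal enough" if they differ by less than a
       small fraction of their magnitude.  1e-9 is an arbitrary choice;
       pick the tolerance your application actually needs. */
    static int nearly_equal( double a, double b, double rel_tol )
    {
      double diff  = fabs( a - b );
      double scale = fmax( fabs( a ), fabs( b ) );
      return diff <= rel_tol * scale;
    }

    int main( void )
    {
      double r1 = -157417092.855879038572311401367187500000;
      double r2 = -157417092.855879098176956176757812500000;

      printf( "r1 %s r2 (within a 1e-9 relative tolerance)\n",
        nearly_equal( r1, r2, 1e-9 ) ? "~=" : "!=" );
    }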

    Take a look at those 9 constants above. Let's view their TRUE values, shall we?
    Code:
    #include <stdio.h>
    
    void showDouble( double v )
    {
      struct dblstruc {
        unsigned long f:52;
        unsigned long e:11;
        unsigned long s:1;
      };
    
      struct dblstruc *p = (struct dblstruc *)&v;
    
      printf( "%70.65f -> S=%d  E=%-4d  F=%lu\n", v, (int)p->s, (int)p->e - 1023, (unsigned long)p->f );
    }
    
    int main( void )
    {
      int i;
      static const double values[] = {
        -0.000039705486746795932,
         1.2709666849144778,
        -0.40926819348115739,
         2.0463351302912738,
        -7.8964816507513707,
         15.009692673927390,
        -15.765836469736477,
        8.8142267038252680,
        -2.0695760421183493
      };
    
      for ( i = 0; i < sizeof values / sizeof values[0]; i++ )
        showDouble( values[i] );
    }
    Compiling and running:
    Code:
    $ cc -o test test.c
    $ ./test
      -0.00003970548674679593185651849118755762901855632662773132324218750 -> S=1  E=-15   F=1355895991351192
       1.27096668491447783999603871052386239171028137207031250000000000000 -> S=0  E=0     F=1220325461210661
      -0.40926819348115739405358226576936431229114532470703125000000000000 -> S=1  E=-2    F=2869120707254850
       2.04633513029127378501925704767927527427673339843750000000000000000 -> S=0  E=1     F=104337437756972
      -7.89648165075137065116450685309246182441711425781250000000000000000 -> S=1  E=2     F=4387048327594962
      15.00969267392738970556820277124643325805664062500000000000000000000 -> S=0  E=3     F=3946106164285136
     -15.76583646973647745426205801777541637420654296875000000000000000000 -> S=1  E=3     F=4371777278915676
       8.81422670382526796117872436298057436943054199218750000000000000000 -> S=0  E=3     F=458368884992823
      -2.06957604211834933494174038060009479522705078125000000000000000000 -> S=1  E=1     F=156671318679056
    S, E and F are the integer fields that satisfy the IEEE-754 equation for a normalized double, v = (-1)^S * (1 + F/2^52) * 2^E, where the E printed above is already unbiased (the stored 11-bit exponent minus 1023).
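    By the way, the pointer cast in showDouble() relies on the compiler's bit-field layout and technically breaks strict aliasing; copying the bits out with memcpy() into a 64-bit integer does the same decomposition portably on any IEEE-754 machine (a sketch):
    Code:
    #include <stdio.h>
    #include <string.h>
    #include <stdint.h>

    /* Same S/E/F decomposition as showDouble(), but extracting the bits
       with memcpy() instead of a pointer cast.  Assumes IEEE-754 binary64. */
    static void showDouble2( double v )
    {
      uint64_t bits;
      memcpy( &bits, &v, sizeof bits );

      unsigned s  = (unsigned)( bits >> 63 );                /* sign bit          */
      int      e  = (int)( ( bits >> 52 ) & 0x7FF ) - 1023;  /* unbiased exponent */
      uint64_t f  = bits & 0xFFFFFFFFFFFFFULL;               /* 52 fraction bits  */

      printf( "%70.65f -> S=%u  E=%-4d  F=%llu\n", v, s, e, (unsigned long long)f );
    }

    int main( void )
    {
      showDouble2( -0.000039705486746795932 );
      showDouble2( 1.2709666849144778 );
    }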

    []s
    Fred
    Last edited by flp1969; 09-13-2020 at 03:29 PM.

  3. #18
    Registered User
    Join Date
    Feb 2019
    Posts
    1,078
    Another example of why floating point is a different beast entirely:
    Code:
    #include <stdio.h>
    
    #define SHOW_RESULT(c) \
      printf("[%s]\n", ((c))?"yes":"no")
    
    void testfp1 ( void )
    {
      double x = 1.2;
    
      printf ( "x = 1.2 - 0.4 - 0.4 - 0.4; x = 0.0? " );
    
      x -= 0.4;
      x -= 0.4;
      x -= 0.4;
    
      /* x should be 0.0, right!? Wrong! */
      SHOW_RESULT ( x == 0.0 );
    }
    
    void testfp2 ( void )
    {
      double x;
      double y;
    
      printf ( "x = (0.1 + 0.2) + 0.3; y = 0.1 + (0.2 + 0.3); x == y ? " );
    
      x = ( 0.1 + 0.2 ) + 0.3;
      y = 0.1 + ( 0.2 + 0.3 );
    
      /* x == y, right? Wrong! */
      SHOW_RESULT ( x == y );
    }
    
    void testfp3 ( void )
    {
      double x;
      double y;
    
      printf ( "x = (0.1 * 0.2) * 0.3; y = 0.1 * (0.2 * 0.3); x == y? " );
    
      x = ( 0.1 * 0.2 ) * 0.3;
      y = 0.1 * ( 0.2 * 0.3 );
    
      /* x == y, right? Wrong! */
      SHOW_RESULT ( x == y );
    }
    
    void testfp4 ( void )
    {
      double x;
      double y;
    
      printf ( "x = (0.1 + 0.2) * 0.3; y = (0.1 * 0.3) + (0.2 * 0.3); x == y? " );
    
      x = ( 0.1 + 0.2 ) * 0.3;
      y = ( 0.1 * 0.3 ) + ( 0.2 * 0.3 );
    
      /* x == y, right? Wrong! */
      SHOW_RESULT ( x == y );
    }
    
    int main ( void )
    {
      testfp1();
      testfp2();
      testfp3();
      testfp4();
    
      return 0;
    }
    If you compile (without optimizations to be fair) and run, you'll get:
    Code:
    x = 1.2 - 0.4 - 0.4 - 0.4; x = 0.0? [no]
    x = (0.1 + 0.2) + 0.3; y = 0.1 + (0.2 + 0.3); x == y ? [no]
    x = (0.1 * 0.2) * 0.3; y = 0.1 * (0.2 * 0.3); x == y? [no]
    x = (0.1 + 0.2) * 0.3; y = (0.1 * 0.3) + (0.2 * 0.3); x == y? [no]
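    Print the intermediate sums with %.17g (enough digits to identify any double exactly) and you can see why the groupings disagree (a sketch):
    Code:
    #include <stdio.h>

    int main( void )
    {
      /* 0.1, 0.2 and 0.3 are not exactly representable, so the two
         groupings can land on different doubles. */
      printf( "0.1 + 0.2         = %.17g\n", 0.1 + 0.2 );
      printf( "0.2 + 0.3         = %.17g\n", 0.2 + 0.3 );
      printf( "(0.1 + 0.2) + 0.3 = %.17g\n", ( 0.1 + 0.2 ) + 0.3 );
      printf( "0.1 + (0.2 + 0.3) = %.17g\n", 0.1 + ( 0.2 + 0.3 ) );
    }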
    []s
    Fred

  4. #19
    Registered User Sir Galahad's Avatar
    Join Date
    Nov 2016
    Location
    The Round Table
    Posts
    277
    Quote Originally Posted by flp1969 View Post
    I must point out that, conceptually, using a loop instead of a bunch of inline calls to `pow()` is correct, but you are dealing with floating point here, and floating-point values aren't exact representations of fractional values. [...]
    Where is the damn "like" button?!

    Anyway, nice breakdown mate.

  5. #20
    misoturbutc Hodor's Avatar
    Join Date
    Nov 2013
    Posts
    1,791
    Quote Originally Posted by flp1969 View Post
    I must point out that, conceptually, using a loop instead of a bunch of inline calls to `pow()` is correct, but you are dealing with floating point here, and floating-point values aren't exact representations of fractional values.
    I understand your point, but the difference in your example is not a result of the loop; it's because calc_abv2() uses pow(). The loop could be modified to use pow() and it would give the same result. Alternatively, if you manually unroll the loop, the result is the same, as expected:

    Code:
    #include <stdio.h>
    #include <math.h>
      
    double calc_abv3(double abw)
    {
        static const double coefficients[] = {
            -0.000039705486746795932,
            1.2709666849144778,
            -0.40926819348115739,
            2.0463351302912738,
            -7.8964816507513707,
            15.009692673927390,
            -15.765836469736477,
            8.8142267038252680,
            -2.0695760421183493
        };
      
        double abv = coefficients[0];
        double abw_mult = abw;
        for (int i = 1; i < sizeof(coefficients) / sizeof(coefficients[0]); ++i)
        {
            abv = abv + coefficients[i] * abw_mult;
            abw_mult *= abw;
        }
     
        return abv;
    }
    
    double calc_abv3_ur(double abw)
    {
        const double a = -0.000039705486746795932;
        const double b =  1.2709666849144778;
        const double c = -0.40926819348115739;
        const double d =  2.0463351302912738;
        const double f = -7.8964816507513707;
        const double g =  15.009692673927390;
        const double h = -15.765836469736477;
        const double i = 8.8142267038252680;
        const double j = -2.0695760421183493;
      
        double abv = a;
        double abw_mult = abw;
        
        abv += b * abw_mult;
        abw_mult *= abw;
        abv += c * abw_mult;
        abw_mult *= abw;
        abv += d * abw_mult;
        abw_mult *= abw;
        abv += f * abw_mult;
        abw_mult *= abw;
        abv += g * abw_mult;
        abw_mult *= abw;
        abv += h * abw_mult;
        abw_mult *= abw;
        abv += i * abw_mult;
        abw_mult *= abw;
        abv += j * abw_mult;
        
        return abv;
    }
    
     
    int main( void )
    {
      double r1, r2;
     
      // 10.2 as a test! Both functions should give you the same value, shouldn't they?
      r1 = calc_abv3( 10.2 );
      r2 = calc_abv3_ur( 10.2 );
     
      printf( "r1 = %.30f\nr2 = %.30f\nr1 %s r2\n",
        r1, r2, r1 == r2 ? "==" : "!=" );
    }
    Yields
    Code:
    $ ./a.out 
    r1 = -157417092.855879098176956176757812500000
    r2 = -157417092.855879098176956176757812500000
    r1 == r2
    The order of evaluation of floating-point calculations is of course important, and evaluating things in a different order will likely produce different results (i.e. associativity doesn't hold) ... which is why a well-behaved C compiler will not re-order the evaluation of FP calculations even with optimisations enabled (unless you force it to, using say -funsafe-math-optimizations). The compiler could unroll the loop in calc_abv3() though... so long as the order of calculations remains the same.

    Anyway, I'm not disagreeing with you per se; I'm just saying that the difference is because of pow() rather than the use of a loop. That said, it's good that the OP has been made aware of these things now; no time like the present, I guess.

    That said, and without downplaying the importance of thinking carefully about FP calculations, I wonder how much it really matters in this case. The function is converting ABW% to ABV% in a way that none of my chemistry textbooks mention... they provide tables for conversion; i.e. the polynomial seems to be an approximation, and if it were a good approximation surely the chemistry literature would mention it rather than providing tables (?)

  6. #21
    Registered User
    Join Date
    Sep 2020
    Posts
    31
    That said, and without downplaying the importance of thinking carefully about FP calculations, I wonder how much it really matters in this case. The function is converting ABW% to ABV% in a way that none of my chemistry textbooks mention... they provide tables for conversion; i.e. the polynomial seems to be an approximation, and if it were a good approximation surely the chemistry literature would mention it rather than providing tables (?)
    For the record: the source data are from Perry's Chemical Engineers' Handbook, 7th ed., Table 2-111, "Densities of Mixtures of (ethanol and water) at 20 deg C". Edwin Croissant fit the data at zunzun.com to obtain the polynomial I am using. Much of Edwin's work can be found on GitHub. It is now referenced by a comment in the conversion function of my program.

    International reference: https://www.oiml.org/en/files/pdf_r/r022-e75.pdf


    In the US: refer to the TTB gauging tables.

    https://www.ttb.gov/foia/distilled-s...7:1.0.1.1.25.4


    There is also an excellent commercial program, "AlcoDens" from Katmar software.
    Ethanol (alcohol) blending, dilution and density-strength conversions | AlcoDens


    I'm impressed by the level of attention given to the conversion function. My simple goal was to create a utility for the small community of craft/hobby distillers. Also, as I approach my seventh decade, I'm reminded of the importance of maintaining neuroplasticity. Such as learning to play an instrument, or a new language, etc. I'm not musical and I don't travel. I chose C programming.

    A bit unfortunate about the thread title. Not really about pointer help after all.



    Peace.

  7. #22
    Registered User
    Join Date
    Feb 2019
    Posts
    1,078
    Quote Originally Posted by Buckeye Bing View Post
    Also, as I approach my seventh decade, I'm reminded of the importance of maintaining neuroplasticity. Such as learning to play an instrument, or a new language, etc. I'm not musical and I don't travel. I chose C programming.
    I'm getting there, past my 5th decade... Here's a good read about floating point, if you are interested: Handbook of Floating-Point Arithmetic.

    []s
    Fred

  8. #23
    misoturbutc Hodor's Avatar
    Join Date
    Nov 2013
    Posts
    1,791
    Quote Originally Posted by Buckeye Bing View Post
    For the record: The source data are from Perry's Chemical Engineering Handbook - 7th ed. Table 2-111 "Densities of Mixtures of (ethanol and water) at 20 deg C. Edwin Croissant fit the data at zunzun.com to obtain the polynomial I am using. Much of Edwins work can be found on github. Now referenced by comment in the conversion function of my program.
    Well, yes. My chemistry textbooks are kind of old so I expect some progress has been made since they were printed LOL. I still think the polynomial is an approximation to fit empirical data though :P

    Edit: Ah yeah, ok. So, yes, Croissant used zunzun to create a polynomial to fit (approximate) the tables of published data. Ethanol + water mixtures are kind of tricky to convert to/from %abw and %abv because the volume changes depending on how many ethanol molecules are floating about (because of the molecular charges pushing each other apart; i.e. hydrophobia/philia). In fact the volume (and therefore the %/vol) changes constantly, albeit by a very small amount. Which is why, I guess, my textbooks have tables and not a polynomial. From what I gather the tables are/were compiled using empirical data. We occasionally use ethanol + H2O mixtures at work and we just make the mixtures as if the volume doesn't change (e.g. 70 + 30 EtOH + H2O to make 70% volume concentration, even though we know that's not _strictly_ correct). I guess that relates back to being aware of how accurate you need your numbers to be... if you need to be super super accurate then the order you add floating-point numbers in C does matter. If you only need 1 decimal place it's not going to matter at all.
    Last edited by Hodor; 09-14-2020 at 05:49 AM.

  9. #24
    misoturbutc Hodor's Avatar
    Join Date
    Nov 2013
    Posts
    1,791
    Quote Originally Posted by flp1969 View Post
    I'm getting there. Past my 5th decade... Here's a good read about floating point if you are interested: Handbook of Floating Point arithmetic.

    []s
    Fred
    And of course: What Every Computer Scientist Should Know About Floating-Point Arithmetic

  10. #25
    Registered User
    Join Date
    Feb 2019
    Posts
    1,078

  11. #26
    Registered User
    Join Date
    Sep 2020
    Posts
    31
    From what I gather the tables are/were compiled using empirical data.
    Until we have a quantum theory of non-ideal mixtures...

    I think there's some pretty good data out there, but mostly behind paywalls. The Dortmund Data Bank is a good source, but they are only accommodating up to a point. Investigating the contraction phenomenon was initially intriguing, but you've got to be selective about which rabbit hole you want to go down.

    Regards all.

  12. #27
    Registered User
    Join Date
    Jul 2009
    Posts
    6
    It's interesting (and rather common) for someone to ask how to do X (help with pointers), but that question is a distraction from the real problem.

    In this case, you never needed that new pointer at all. It didn't help in any way with your problem. It just introduced more confusion. Pointers are important, but ignore them for this specific question.

    For the question you asked, you simply needed to move the declaration/definition of seed_temp (and initialization) prior to your loop. I also inserted the missing update of index.

    Most of the rest of the detail was unnecessary.

    Secondly, you have an assumption that (eventually) psat1 + psat2 will grow to 760 or beyond. If you don't like the result of any of the calc_psat calls, you can ask that question separately.

    Focus on just the issues of interest. Remember, just mention the given, actual and expected behavior. Reduce the test case to just what is necessary. Make sure the test case is complete, but minimal.

    It should usually compile, run and produce the behavior you don't understand.

    For your direct question, this is all you needed:

    Code:
    float seed_temp = 20;
    while (index < 101)
    {
      /* step seed_temp up until psat1 + psat2 reaches 760 (atmospheric
         pressure in mmHg): that temperature is the boiling point */
      do {
        psat1 = calc_psat(eA, eB, eC, seed_temp, gamma_e, x1);
        psat2 = calc_psat(wA, wB, wC, seed_temp, gamma_w, x2);
        seed_temp = seed_temp + 0.0001;
      } while (psat1 + psat2 < 760);
      boil_pt[index] = seed_temp;
    
      index++;
    }
    If calc_psat is the next issue, reduce that to some minimal test case and ask a new question.

    There are lots of other possible suggestions. Let's start here. One step at a time.

  13. #28
    Registered User
    Join Date
    Sep 2020
    Posts
    31
    Thank you for your review and advice.

    In this case, you never needed that new pointer at all.
    Yes, I found that eventually. In fact, the pointer created the problem. The first pass returned a correct value of 100 (the boiling point of water). When that seed_temp of 100 was passed back to calc_psat, the resulting sum was already over 760, exiting the inner loop immediately. The way it's written, the temperature needs to increment upward. Took me a bit to notice it. I did briefly acknowledge that in post #5.

    My first post did not show enough code; subsequent ones, too much. I think I tested laserlight's patience by not showing that index does increment. When I posted the entire code she suggested incrementing in the form of a for loop, which I'm now doing.

    Focus on just the issues of interest. Remember, just mention the given, actual and expected behavior.
    My inexperience in all this certainly obscured a problem that never really existed, but I got a lot of good advice notwithstanding. I'm very pleased with how my program is shaping up.

    In hindsight, it would have been so much clearer if I had put the goal of the inner loop in terms of Excel's 'goal seek': have psat1 + psat2 = 760 by changing seed_temp. It's an algorithm I hope to include in a later revision. Newton-Raphson method?
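    Maybe something along these lines? Just a sketch of Newton-Raphson with a numerically estimated derivative; the step size, tolerance and iteration cap are guesses on my part, and the residual function it solves would wrap my two calc_psat() calls and subtract 760:
    Code:
    #include <math.h>

    /* Sketch: generic 1-D Newton-Raphson.  Finds t such that f(t) == 0,
       starting from 'guess'.  The derivative is estimated with a small
       finite difference.  All the numeric constants are placeholders. */
    double newton_solve(double (*f)(double), double guess)
    {
        for (int iter = 0; iter < 50; ++iter)
        {
            double ft = f(guess);
            if (fabs(ft) < 1e-6)              /* close enough to zero */
                return guess;

            double h   = 1e-4;                /* finite-difference step */
            double dft = (f(guess + h) - ft) / h;
            if (dft == 0.0)                   /* avoid dividing by zero */
                break;

            guess -= ft / dft;                /* Newton-Raphson step */
        }
        return guess;
    }
    The residual would be something like psat1 + psat2 - 760 evaluated at a trial temperature, and boil_pt[index] = newton_solve(residual, 20); would replace the inner do-while.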

    Reduce the test case to just what is necessary. Make sure the test case is complete, but minimal.
    I have done this for what may be my next question. Still trying to find the error on my own.

    Thanks again.
