Thread: algebra solver

  1. #16
    Carnivore ('-'v) Hunter2's Avatar
    Join Date
    May 2002
    Posts
    2,879
    >>Isn't 100 a little overkill?
    It is, for most practical uses. In some cases, though, e.g. (x-9999999999)*(x+9999999999)=0, 10 iterations was VERY inadequate. Since the performance hit of the unnecessary iterations seems unnoticeable to the user (it takes several ms either way in most cases; output to the screen is what holds the program up instead), I figured I'd leave it at 100 so that large/small roots can all be accurately computed.

    I also did some experimenting with different methods of estimating the number of iterations needed, but came up with nothing consistent so I left it hardcoded.

    Come to think of it, though, it could be changed to this:
    Code:
    double findRootNewton(const Polynomial& p, const Polynomial& dp, double x1, double x2)
    {
    	double x = (x1 + x2) / 2;	// start at the midpoint of the bracket
    	double xNext;
    
    	for(int i = 0; i < 500; ++i)
    	{
    		double slope = dp.evaluate(x);
    		if(slope == 0)	// horizontal tangent - the Newton step is undefined here
    			return x;
    
    		xNext = x - (p.evaluate(x) / slope);
    		if(equals(x, xNext))	// x stopped changing: converged
    			return x;
    
    		x = xNext;
    	}
    
    	return x;	// iteration cap reached; return the best estimate so far
    }
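    The equals() helper isn't shown anywhere in the thread; a minimal sketch of a tolerance-based comparison it might use (the signature matches the calls above, but the epsilon scaling and default factor are my assumptions, not Hunter2's actual code):

```cpp
#include <algorithm>
#include <cmath>
#include <limits>

// Hypothetical sketch of the equals() helper used by findRootNewton():
// treats two doubles as equal when they differ by only a few
// magnitude-scaled machine epsilons. The default factor is an assumption.
bool equals(double a, double b, double factor = 4.0)
{
    double scale = std::max(std::fabs(a), std::fabs(b));
    return std::fabs(a - b) <= factor * std::numeric_limits<double>::epsilon() * scale;
}
```

    Scaling the tolerance by the larger magnitude matters here: an absolute epsilon would never trip for roots like 9999999999, which is exactly the case discussed above.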
    **EDIT**
    The original reason I used a fixed-iteration approach was a problem I ran into in the original version: x would never be *quite* accurate enough for the loop to quit, and would remain the same forever, leaving the user stuck in the loop. At the time, it never occurred to me that I could test for an unchanging x instead.

    **EDIT2**
    >>Newton's method works by taking the tangent of the function curve at x(n). x(n+1) is where the tangent intersects the x-axis.
    I see, thanks for the info.

    **EDIT3**
    Hmm, well, I tweaked the code some more so that it'll quit after 500 iterations rather than continuing indefinitely, in case for some reason x keeps overflowing and never settles to a constant value.

    Also, I just realized the implications of how Newton's Method works - it seems that, yes, the possibility does exist that if a poor starting point is chosen, overlapping roots may be found. So.. whatever, I guess. Whichever implementation of findRoot you choose, there will always be the chance of missing some roots.
    Last edited by Hunter2; 03-16-2005 at 07:31 PM. Reason: Tweaking, more stuff
    Just Google It. √

    (\ /)
    ( . .)
    c(")(") This is bunny. Copy and paste bunny into your signature to help him gain world domination.

  2. #17
    Registered User
    Join Date
    Aug 2003
    Posts
    470
    Hunter2, one technique for improving your Newton's iteration is to expand the sequence out, sort of. Your Newton's approximation generates values for x such as x_1, x_2, x_3, x_4, x_5 ... x_n, but you can use a more complicated algebraic equation to generate only x_1, x_3, x_5, ... x_n. It will converge faster per iteration and perhaps reduce rounding errors. Adding a random factor somewhere would make Newton's method always converge, I think. Maybe there's an algorithm that does this somewhere.
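    The "generate every other term" idea above can be sketched by algebraically composing two Newton steps into a single update. This is just an illustration of the concept; the function-pointer parameters stand in for the thread's p.evaluate / dp.evaluate, and the names are mine:

```cpp
#include <cmath>

// Sketch: fold two consecutive Newton steps into one update, so each
// call jumps from x_n directly to x_(n+2). f and df are stand-ins for
// the polynomial and its derivative from the thread's Polynomial class.
double newtonDoubleStep(double (*f)(double), double (*df)(double), double x)
{
    double mid = x - f(x) / df(x);    // first Newton step: x_(n+1)
    return mid - f(mid) / df(mid);    // second step folded in: x_(n+2)
}
```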

  3. #18
    Carnivore ('-'v) Hunter2's Avatar
    Join Date
    May 2002
    Posts
    2,879
    >>But you can use a more complicated algebraic equation to generate x_1, x_3, x_5, .... x_n.
    That's an idea, I suppose - perhaps it would also reduce the chance of overshooting into the 'zone' of a different root.

    >>Adding a random factor somewhere would make Newton's method always converge, I think.
    I was under the impression that it would always converge anyway unless you picked a spot on a vertical asymptote or extremum or something? Anyway, I don't want to spend much more time on this - it was more of a 'prove I can do it' thing, really

    **EDIT**
    OK, final version of findRootNewton() - I'm going to leave it at this; if anyone wants to improve on it, go ahead, but I'm done with it for now (until I can actually figure out what I'm doing).
    Code:
    double findRootNewton(const Polynomial& p, const Polynomial& dp, double x1, double x2)
    {
    	Polynomial ddp = dp.derivative();	// second derivative of p
    
    	double x = (x1 + x2) / 2;
    	double xNext;
    	double evalF, evalDF, evalDDF;
    
    	for(unsigned long its = 0; its < 700; ++its)
    	{
    		evalF = p.evaluate(x);
    		evalDF = dp.evaluate(x);
    		evalDDF = ddp.evaluate(x);
    
    		if(evalDF == 0)	// horizontal tangent - the step below would divide by zero
    			return x;
    
    		// Newton step scaled by a second-order correction term built from f''
    		xNext = x - (evalF/evalDF * (1 + ((evalF * evalDDF) / (2 * evalDF * evalDF))));
    
    		if(equals(x, xNext, 2))
    			return xNext;
    		
    		x = xNext;
    	}
    
    	return x;	// iteration cap reached
    }
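    The update inside that loop is a plain Newton step multiplied by a second-order correction that also uses f''. As a standalone sketch (function pointers standing in for the three Polynomial evaluations; names are mine):

```cpp
#include <cmath>

// Standalone sketch of the corrected update in the final findRootNewton():
// x - (f/f') * (1 + f*f'' / (2*f'^2)). f, df, ddf stand in for
// p.evaluate, dp.evaluate, and ddp.evaluate from the thread.
double correctedStep(double (*f)(double), double (*df)(double),
                     double (*ddf)(double), double x)
{
    double F = f(x), DF = df(x), DDF = ddf(x);
    return x - (F / DF) * (1 + (F * DDF) / (2 * DF * DF));
}
```

    On a simple test like x^2 - 2 this converges in noticeably fewer iterations than the plain Newton step, which is the point of carrying the extra derivative.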
    Last edited by Hunter2; 03-17-2005 at 05:23 PM.

  4. #19
    Registered User
    Join Date
    Aug 2003
    Posts
    470
    >>I was under the impression that it would always converge anyway unless you picked a spot on a vertical asymptote or extremum or something?
    Yeah, I think so. Just something I remembered from the calculus book.

  5. #20
    Super Moderater.
    Join Date
    Jan 2005
    Posts
    374
    Nice work Hunter,

    Would the 'bisection' method of root finding be more reliable than 'Newton's'? Just a thought. Although, I imagine it all depends on what starting point/points you choose, regardless of which method of root finding you use - which I think is what you may have concluded.

    OK, I want to say a couple of things. Although you haven't necessarily achieved a breakthrough in root finding (these methods existed back in Newton's time), you have made massive inroads in manipulating algebraic equations (using binary trees and parsing) - excellent.

    And any program developed which mimics the way humans solve things can only be a good thing.

    A quick question. Your program solves equations so long as they only contain 'x' terms - polynomial or linear. I guess, in time, you could improve it so that it handles other terms. For example:

    Rearrange the following equation to make 'g' the subject.

    Code:
    f = 10+g
        ----
          3
    
    
    3f = 10+g
    
    
    g = 3f-10
    Or to simplify equations. For example:

    Simplify the following:
    Code:
    p-2q   2p+q
    ---- - ----
      4      2
    
    
    2(p-2q) - 4(2p+q)
    -----------------
            8
    
    
    2p-4q-8p-4q
    -----------
         8
    
    
    -6p-8q
    ------
       8
    
    
    -3p-4q
    ------
       4


    Finally, if you enjoyed my 'algebra project', I have a similar proposal for a general 'vector' diagram solver, which I will post soon. However, I think that would be better solved using a language such as Java, since it is more visual than C++, and vector diagrams are by nature more visual than equations.

    Much thanks treenef

  6. #21
    Carnivore ('-'v) Hunter2's Avatar
    Join Date
    May 2002
    Posts
    2,879
    >>Would the 'bisection' method of root finding be more reliable than 'Newtons'?
    It may be, but I think it's less efficient (and a lot longer and uglier to look at). With that method you're restricted to some hardcoded boundaries for finding roots, and although that won't be an issue in most cases as long as the roots are within those boundaries, I always dislike solutions of the form "use a number big enough that you'll never run into problems" - especially when using a number too big will also cause problems (that's why I hardcoded a value instead of using "std::numeric_limits<double>::min()").

    >>Your program solves equations so long as they only contain 'x' terms - polynomial or linear. I guess, in time, you could improve it so that it handles other terms. For example: rearrange the following equation to make 'g' the subject.
    That *might* be achieved by replacing the vector<double> in Polynomial with vector<Polynomial> and adding a char member variable called 'name' or something. It would also require modifying the parsing algorithm (so that it doesn't discard input containing more than 1 unique alphabetical character). If this were set up properly and equations were restricted to powers of 1, then it should probably be fairly straightforward to solve in terms of the 'base' variable, x. Unfortunately, it would likely be much more complex to implement a more general solving algorithm that would allow for higher-order polynomials (the site I was looking at for Newton's Method did mention something about "Using Newton's Method with Multiple Variables", but it was quite over my head).
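    The nested representation suggested above (vector<Polynomial> coefficients plus a variable name) might look something like this. This is purely a sketch of the data layout, with names I made up - not working solver code:

```cpp
#include <vector>

// Sketch of the nested representation suggested above: each coefficient
// of a polynomial in 'name' is itself a polynomial in some other
// variable. E.g. (3y + 1)x^2 + 2x + y would be a MultiPoly in 'x'
// whose coefficients are MultiPolys in 'y'. Leaves are plain numbers.
struct MultiPoly
{
    char name;                      // the variable at this level, e.g. 'x'
    std::vector<MultiPoly> coeffs;  // coeffs[i] multiplies name^i; empty for a leaf
    double constant;                // base case: a plain number when coeffs is empty
};
```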

    >>Or to simplify equations. For example:
    That would be more complicated - although the division operator of my Polynomial class can divide a polynomial by a factor of the polynomial, if the divisor isn't a factor then there will be a remainder to turn things ugly real quick. This would likely require a different class, Expression (or Fraction, something like that), that has a dynamically allocated Expression for the numerator and another for the denominator. Stuff like that - and I really don't want to spend any more time on this for now, as I'm already starting to forget what I was doing on my other project that I put on the shelf for this one.
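    The Expression/Fraction idea floated above could be sketched as a node whose numerator and denominator are themselves expressions. Again, this is only a hypothetical data layout; a string stands in for a real Polynomial member just to keep the sketch self-contained:

```cpp
#include <memory>
#include <string>

// Hypothetical sketch of the Expression class discussed above: a
// fraction whose numerator and denominator are sub-expressions, so a
// division with a remainder never has to collapse back to a Polynomial.
struct Expression
{
    std::string text;                        // leaf: a polynomial like "p-2q"
    std::unique_ptr<Expression> numerator;   // set only on fraction nodes
    std::unique_ptr<Expression> denominator;

    bool isFraction() const { return numerator && denominator; }
};
```

    With nodes like this, the worked simplification earlier in the thread becomes tree rewriting: put both terms over a common denominator, combine the numerators, then divide out common factors.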
