# Thread: Step-solving with Gauss-Jordan

1. In trying to program a step-by-step Gauss-Jordan solver, I'm currently just building a tree of possible operations recursively at each step and then choosing the shortest path. That path becomes the presented solution.

But I wonder: is there any math I can apply to a system of linear equations to predict the more efficient operations in advance?
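For concreteness, the tree-of-operations search described above can be sketched as a breadth-first search over elementary row operations on an exact (Fraction) augmented matrix. All names here are mine, and restricting the operation set to swaps, pivot-normalizing scales, and single-entry eliminations is an assumption; a sketch, not the poster's actual code:

```python
# Sketch: BFS over elementary row operations, returning a shortest
# sequence that reaches reduced row echelon form (RREF).
# Assumes a consistent system; uses Fractions for exact arithmetic.
from fractions import Fraction
from collections import deque

def is_rref(m):
    """Check reduced row echelon form (zero rows are simply skipped)."""
    lead = -1
    for i, row in enumerate(m):
        nz = [j for j, v in enumerate(row[:-1]) if v != 0]
        if not nz:
            continue
        j = nz[0]
        if j <= lead or row[j] != 1:
            return False
        if any(m[k][j] != 0 for k in range(len(m)) if k != i):
            return False
        lead = j
    return True

def candidate_ops(m):
    """Enumerate a restricted set of elementary row operations."""
    n, cols = len(m), len(m[0])
    ops = [("swap", i, k) for i in range(n) for k in range(i + 1, n)]
    for i in range(n):
        for j in range(cols - 1):
            if m[i][j] not in (0, 1):
                ops.append(("scale", i, j))       # make m[i][j] equal 1
        for k in range(n):
            if k != i:
                for j in range(cols - 1):
                    if m[i][j] != 0 and m[k][j] != 0:
                        ops.append(("elim", k, i, j))  # clear m[k][j] using row i
    return ops

def apply_op(m, op):
    m = [row[:] for row in m]
    if op[0] == "swap":
        _, i, k = op
        m[i], m[k] = m[k], m[i]
    elif op[0] == "scale":
        _, i, j = op
        f = m[i][j]
        m[i] = [v / f for v in m[i]]
    else:
        _, k, i, j = op
        f = m[k][j] / m[i][j]
        m[k] = [a - f * b for a, b in zip(m[k], m[i])]
    return m

def shortest_solution(aug):
    """BFS guarantees the first RREF state found has a shortest op path."""
    start = tuple(tuple(Fraction(v) for v in row) for row in aug)
    queue, seen = deque([(start, [])]), {start}
    while queue:
        state, path = queue.popleft()
        m = [list(r) for r in state]
        if is_rref(m):
            return path
        for op in candidate_ops(m):
            nxt = tuple(tuple(r) for r in apply_op(m, op))
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, path + [op]))
```

The search space grows quickly with matrix size, which is exactly why the heuristics discussed below matter.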

2. By "Gauss-Jordan solver" do you mean the Gauss-Jordan Elimination for solving a set of linear equations of the form Ax = b, where A is a square matrix and b is a vector?

Or something else?

3. Yup. The very same.

Essentially I'm looking for any heuristics that help me reduce the size of my tree, or eliminate it altogether. When manually solving systems of linear equations, I often adopt strategies for faster resolution. It's often easy to glance at the coefficients and spot patterns or arrangements that allow for simpler math or fewer steps.

Example 1: Exchanging rows to reduce the number of steps needed to reach reduced row echelon form. This is easy to compute.
Example 2: Choosing the better of two possible row exchange operations. Moderately easy to compute.
Example 3: Realizing that certain coefficient arrangements allow for simpler math, which may trump any attempt at row exchange. Right now I find this difficult to compute.
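Examples 1 and 2 suggest a simple pivot-scoring heuristic: rate each candidate pivot row by the scaling it needs and by whether the elimination multipliers it produces are integers. A sketch; the weights are my own guesses, not an established rule:

```python
# Hypothetical pivot-scoring heuristic: lower score = fewer/simpler steps.
from fractions import Fraction

def pivot_scores(m, col):
    """Score each row as a pivot candidate for `col` in augmented matrix m.
    Weights are illustrative: free pivot-1, cheap pivot minus-1, costly scale;
    integer elimination multipliers are cheaper than fractional ones."""
    scores = {}
    for r in range(len(m)):
        p = Fraction(m[r][col])
        if p == 0:
            continue                                   # can't pivot on zero
        cost = 0 if p == 1 else (1 if p == -1 else 2)  # scaling cost
        for k in range(len(m)):
            if k != r and m[k][col] != 0:
                mult = Fraction(m[k][col]) / p         # elimination multiplier
                cost += 1 if mult.denominator == 1 else 2
        scores[r] = cost
    return scores
```

On the augmented matrix [[2, 1, 5], [1, 3, 5]] this prefers pivoting column 0 on row 1 (pivot already 1, integer multiplier), which is the row exchange a student would pick by eye.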

The program is to be used as an educational tool in the high school where I teach. For this reason it's important that it not only solve a system of linear equations, but also present a good strategy.

4. Yeah, okay. I'm not convinced your scheme will achieve much. You might get some local variations among paths (as you call them), but the overall cost of each path will be similar.

With Gaussian elimination, all paths to solving a linear system are essentially equivalent. Gauss-Jordan is just a variation of Gaussian elimination: the only difference is that Gaussian elimination reduces to echelon form and then does back substitution, while Gauss-Jordan reduces all the way to reduced row echelon form and needs no back substitution.

In terms of operation counts, the algorithms are broadly equivalent: the numbers of divisions, multiplications, and subtractions are essentially the same, so I doubt the difference between the best and worst path will justify the cost of your search. There are variations of the algorithms, but they work by doing more comparisons (e.g. pivoting on a row/column maximum rather than on the first non-zero value found), and their impact on the number of [non-comparison] arithmetic operations is mostly insignificant. The real reason for these variations (which amount to different strategies for selecting pivots) is numerical accuracy: although the algorithms are all stable, some pivoting strategies are more precise than others.
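The row-maximum variation mentioned above is partial pivoting; a minimal floating-point sketch of Gauss-Jordan with that pivot rule (my naming, not a library routine):

```python
# Sketch: Gauss-Jordan elimination with partial pivoting.
# Picking the largest-magnitude entry in each column keeps the multipliers
# for rows below the pivot at most 1, which improves floating-point accuracy.
def gauss_jordan_partial_pivot(A, b):
    """Solve Ax = b for square A; assumes A is nonsingular."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]  # augmented matrix
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))  # partial pivoting
        M[c], M[p] = M[p], M[c]
        piv = M[c][c]
        M[c] = [v / piv for v in M[c]]                    # normalize pivot row
        for r in range(n):
            if r != c:
                f = M[r][c]
                M[r] = [x - f * y for x, y in zip(M[r], M[c])]
    return [M[i][n] for i in range(n)]
```

With exact rational arithmetic the pivot choice doesn't change the answer, only the intermediate numbers; in floating point it can change the accuracy noticeably.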

5. It is precisely pivot selection heuristics I'm after. I'm not concerned with performance; there's no perceptible cost in solving an augmented matrix. The math is straightforward, and a typical processor can handle, in under a second, a matrix far larger than any my students will ever see.

Assume a system of linear equations in 3 variables, one of whose equations is:

x + (1/2)y = 2

This is a perfect candidate for the second row. I can see it, you can see it. The computer can't unless I tell it how.
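The computer can spot the same pattern once told what to look for, e.g. by flagging rows whose first nonzero coefficient is already a 1, making them ready-made pivot rows. An illustrative helper (my naming, not the poster's code):

```python
# Sketch: detect rows a human would spot as "nice" pivot candidates.
from fractions import Fraction

def leading_one_rows(aug):
    """For each column j, list the rows of the augmented matrix whose
    first nonzero coefficient is a 1 sitting in column j: using such a
    row as the pivot row for column j needs no scaling step."""
    hits = {}
    for i, row in enumerate(aug):
        coeffs = row[:-1]                      # drop the constant column
        nz = [j for j, v in enumerate(coeffs) if v != 0]
        if nz and coeffs[nz[0]] == 1:
            hits.setdefault(nz[0], []).append(i)
    return hits
```

A step recommender could consult this map before falling back on any tree search, placing each flagged row at its pivot position first.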

Note: I'm also concerned with numerical stability, but I have a paper somewhere on the matter which, as I recall, presents some methods for numerical analysis.
