Originally Posted by
ElNoob
Code:
#define I2D(i, j, nc) ((i)*(nc)+(j))
Why would anyone torture themselves like that?
I mean, why not define
Code:
#define ELEMENT(matrix, row, column, stride) ((matrix)[(column) + (row)*(stride)])
instead?
Compared to the I2D macro, the above has row==i, column==j, and stride==nc. Here, matrix is an array of (or pointer to) matrix elements, say doubles, and the others are some integer type: the desired row and column in the matrix, and the stride between matrix rows. (Since C uses row-major order, consecutive columns on the same row are consecutive in memory, and thus have unit stride, 1. The row stride is at least the number of columns.)
Instead of dragging around the data pointer, the number of rows, the number of columns, and the row stride separately, you can make the code much more readable by using a trivial structure:
Code:
typedef struct {
    int     rows;
    int     cols;
    int     stride;
    double *data;
} matrix_t;

#define ELEMENT(matrix, row, col) ((matrix).data[(row)*(matrix).stride + (col)])
which simplifies your matrix multiplication function to
Code:
int pMatC(const matrix_t *const left, const matrix_t *const right, matrix_t *const result);
which could return 0 if successful, and a nonzero errno code otherwise (like incompatible matrix sizes, for example).
You often need transposes and views to other matrices. Consider a further jump into deeper waters:
Code:
struct data {
    struct data *next;
    long         refcount;
    size_t       size;
    double       data[];
};

typedef struct {
    long rows;
    long cols;
    long rowstep;
    long colstep;
    double *origin;
    struct data *owner;
} matrix_t;

typedef struct {
    long size;
    long step;
    double *origin;
    struct data *owner;
} vector_t;
#define MATRIX_INIT { 0L, 0L, 0L, 0L, NULL, NULL }
#define VECTOR_INIT { 0L, 0L, NULL, NULL }
#define MATRIX(m, row, col) ((m).origin[(m).rowstep * (row) + (m).colstep * (col)])
#define VECTOR(v, i) ((v).origin[(v).step * (i)])
These are truly powerful. They give you the ability to create "views" into other matrices, and e.g. transpose a matrix without copying a single element. You can even create a vector that is really the diagonal elements of a matrix. When you modify the matrix, the changes are also reflected in the vector. For a practically insignificant added cost to element access, you now have vastly more powerful abstract matrix and vector types.
As long as you destroy each matrix after you're done with it, the reference counting in the actual data structures will release the matrix and vector data when it is no longer needed (referred to). You will never need to worry about whether a matrix was unique or shared elements with another matrix or vector, just destroy each matrix you use after you no longer need it.
Memory management can be made even easier using regions. Nginx and Apache sources call these pools, and I like that name better, so that's what I'll call them. Basically, you'd also have
Code:
void *pool_new(void);
void  pool_destroy(void *const pool);
and pass void *const pool to each matrix or vector function that may need to dynamically allocate memory, unless they only need it for internal (temporary) data. Each matrix and vector will effectively belong to exactly one pool. Unless you are creating or destroying matrices or vectors, the pools are completely invisible; they are not a boundary for use, only for allocation/deallocation.
Instead of worrying about managing each vector and matrix, you start by creating a new pool for your temporary needs. When you're done, you use a helper function to move the final result matrix or vector out of the temporary pool into the caller's pool, and destroy the entire temporary pool at once. (No problems with circular references or forgotten temporary matrices or vectors... this is an extremely robust approach, and completely frees you from worrying about memory management details. It is super efficient, too.)
It's pretty funny, isn't it? First you have to learn a lot of stuff (three distinct "learning steps" in the above, by my count), and then you can implement ways to do complicated stuff very easily.
I don't know the level of your course or learning, so I cannot say which point of the above would be optimal for you (considering the learning curve et cetera). It just hurts my brain to see learners led away from this path.
If you want to see how I would implement any of the stages above, just let me know.