You can try extracting a vector<double *> from a vector<vector<double> >. For example, assuming the vector<vector<double> > is 2n x 2n, a view of the upper-left n x n submatrix can be obtained like this:
Code:
std::vector<double *> upperleft(std::vector<std::vector<double> > &matrix)
{
    std::size_t i, newsize = matrix.size() / 2;
    std::vector<double *> retval(newsize);
    // Each entry points at the start of a row, so retval[i][j]
    // aliases matrix[i][j] for j < newsize.
    for (i = 0; i < newsize; ++i)
        retval[i] = &(matrix[i][0]);
    return retval;
}
and the bottom right can be found by:
Code:
std::vector<double *> bottomright(std::vector<std::vector<double> > &matrix)
{
    std::size_t i, newsize = matrix.size() / 2;
    std::vector<double *> retval(newsize);
    // Offset into the lower rows and past the first newsize columns, so
    // retval[i][j] aliases matrix[i + newsize][j + newsize] for j < newsize.
    for (i = 0; i < newsize; ++i)
        retval[i] = &(matrix[i + newsize][newsize]);
    return retval;
}
Of course, that still leaves the problem of logically multiplying the views together, which means you need to allocate additional memory to hold the result.
Whether this will actually gain you anything, I don't know.
If your algorithm implementation is not particularly sophisticated, the dimension of matrices will need to be pretty large before your algorithm becomes more efficient than conventional matrix multiplication. I remember reading a report somewhere about a very rudimentary implementation of Strassen's algorithm, and it was less efficient than conventional matrix multiplication for dimensions up to 40000 or so.
Incidentally, are you also aware that Strassen's algorithm is less stable numerically than conventional matrix multiplication? Errors due to finite numerical precision build up more rapidly, and the effects will be more noticeable for larger matrices. Fast algorithms are rarely a free lunch.