Hi!

I am dealing with a problem in which I have to allocate a huge distance matrix (~600 GB) on a cluster node. To save some memory, since the matrix is symmetric, I defined an upper-triangular matrix class as follows:

Code:

#include <cstdint>
#include <stdexcept>
#include <utility>
#include <vector>

class utriag_matrix {
    uint32_t dim;
    std::vector<double> buffer;
public:
    utriag_matrix(uint32_t d) : dim(d), buffer((d * (d - 1)) / 2, 0) {}
    uint32_t size() const { return dim; }
    double& at(uint32_t i, uint32_t j)
    {
        if (i == j) { throw std::invalid_argument("at(i,j) with i == j is not a valid element of utriag_matrix."); }
        if (i > j) { std::swap(i, j); }
        return buffer[dim*i - (i*(i+1))/2 - i + j - 1];
    }
};

Note that the internal buffer is an std::vector.

The problem, I think, is that the system cannot allocate the std::vector even though there is more than enough RAM: the node has 1.5 TB, so I would not even be using half of it.

Is there a way to catch a std::bad_alloc thrown by a vector constructed inside a member initializer list?

Is it a problem to allocate an std::vector of hundreds of GB if the RAM is big enough? (This is the first time I have worked with this amount of memory.)

Thanks in advance!