Originally posted by gustavosserra
For critical apps it can be slower. Besides, isn't it better to use the old way if you know the exact size?
Actually, with a good optimizing compiler, it's not at all slower. The access function (operator[]) will be inlined, and the access time is exactly the same.
And I don't think there is EVER a case when it is better to use a dynamic array in place of a vector (the one exception being arrays of bool, since std::vector<bool> is a packed bit-level specialization that doesn't store real bools).
This code:
Code:
#include <vector>

int nRow = 4, nCol = 6;
typedef std::vector<int> iRow;
typedef std::vector<iRow> iArray;
iArray myArray(nRow,iRow(nCol));
looks a lot cleaner than:
Code:
int nRow = 4, nCol = 6;
int ** myArray = new int*[nRow];
for (int i = 0; i < nRow; i++) myArray[i] = new int[nCol];
/* ... */
for (int i = 0; i < nRow; i++) delete[] myArray[i];
delete[] myArray;
The only reason I would use the built-in arrays for dynamic memory would be if I needed an array that is contiguous in memory. For example, one way of creating and accessing a display buffer might be:
Code:
int * videoBuffer = new int[640*480];
int ** video = new int *[640];
for (int i = 0; i < 640; i++) video[i] = videoBuffer + 480*i;
// Now you could use video[45][64] to access the correct position in videoBuffer
delete[] video;
delete[] videoBuffer;
However, unless I really needed to do something where I had to have an array contiguous in memory, I wouldn't bother.