Good luck with your quest!
I don't understand one thing about the whole discussion, though that might be due to my lack of knowledge. But doesn't vector technically call new to allocate memory on the heap? That's how I see it, and that's why I said the discussion is kind of pointless. It's like comparing a door to a house: the one includes the other. I don't really see how dynamically allocating a chunk of memory and assigning a pointer can be compared with an object that does that and many, many other things.
And I don't see why new cannot be better than a vector. Much better, maybe not, but generally better, of course it can be. Problem: store X ints entered by the user, then do calculations with those values, which won't be changed further.
Answer 1: dynamically allocate an array of X to store the values.
Answer 2: use a vector of size X to store the values.
Note that both answers will work. But if I had said:
Answer 3: dynamically allocate an array of X+2 to store the values
then a lot of people would say that Answer 1 is obviously better, since you don't need the extra 8 bytes. But the vector will use more than 8 extra bytes.
So there are cases where new is better: slightly, but still better. And why not use the slightly better solution? It's free.
>> That is true. On the other hand, these checks are unavoidable, even if you implement a dynamic array by hand. If they are avoidable, then push_back() was avoidable to begin with, so you have simply used the wrong function. <<
I meant what if you don't need to use a function at all?
It really doesn't matter unless you're a zealot.
>> But, doesn't technically vector call new to allocate memory on the heap? That's the way I see it, that's why I said the discussion is kind of pointless. <<
The discussion is not pointless. Just because both techniques use the same type of memory allocation doesn't mean one isn't better than the other.
>> It is like comparing a door to a house. The one includes the other. <<
A house is better for living in than a door. Sure, you could start with a door and build a house around it, but why put in the extra work if you already have a house that an expert built for you?
As to your problem and answer, I fail to see where the extra 8 bytes come from. Could you go into more detail? It seems to me both would be virtually the same. Also note that even though new and vector are not exactly the same, the minor differences that may seem to favor new are overshadowed by the simple benefit of automatic memory management.
I meant that new int[X+2] will allocate 2 more ints than new int[X], thus 8 bytes if you assume an int is 4 bytes. That makes new int[X] better in a way, since it lacks nothing but gains some space. So new int[X] will be better than a vector, since it lacks nothing (for the given example) and gains some space.
Once you have settled on the facts, "better" is objective. However, the choice of facts can be subjective (e.g., I may value a particular criterion more than another based simply on "experience" or "intuition").
Quote: Originally Posted by Daved
No, that there exists at least one case where manual memory management is fine is self-evident (it must be used, at least indirectly, to implement any container that performs memory management). The question of whether such manual memory management is better than RAII is a question in general, not in special cases.
Quote: Originally Posted by FillYourBrain
That sounds like a straw man argument. Why allocate X+2 when you only need X? How is new[X] better than vector<T>(X)? In terms of space, you may only end up saving on the capacity variable, since the array size is equal to its capacity. Besides, this is not "free", as you claim. The price is manual memory management.
Quote: Originally Posted by C_ntua
In performance-critical code you might want memory bounds checking when debugging, but if you use bounds checking in your performance-critical code, perhaps it slows it down too much to debug. vector<> uses bounds checking; new doesn't.
Traditionally 'std::vector<?>::operator [int_type]' doesn't do bounds checking, whereas 'std::vector<?>::at(int_type)' does. I don't know of a single provider that does checking for 'std::vector<?>::operator [int_type]' in release mode. I'd be interested in knowing of this bizarre case you use.
Quote: vector<> uses bounds checking, new doesn't.
Edit: And as far as it goes: I've always interpreted the standard as implying that 'std::vector<?>::operator [int_type]' must not do bounds checking even though some implementations do when some compiler flag or macro is or isn't set or defined.
Hmm. Umm... lemme do some research...
Aha... yeah I wasn't talking about release mode. I was talking about debugging.
Did... did you just go "aha"?! O_o
Quote: Aha... yeah I wasn't talking about release mode.
Edit: Aha! I actually know of eight library distributions that never do bounds checking for 'std::vector<?>::operator [int_type]', debug mode or not.
Edit: Aha! ^_^
>> So new[X] will be better than a vector since it lacks nothing (for the given example) <<
Here is where you're missing the important point. new[X] does lack something, and that something, automatic memory management, is not trivial at all. The extra eight bytes that I assume you are talking about, coming from the overhead of the vector (?), are trivial. Can you think of a use of new or vector where the size will be small enough to not dwarf those eight bytes?
>> I don't know of a single provider that does checking for 'std::vector<?>::operator [int_type]' in release mode. <<
I think the latest versions of VC++ do. Although you can turn that behavior off.
>> I've always interpreted the standard as implying that 'std::vector<?>::operator [int_type]' must not do bounds checking <<
I've never heard that. How would it make such a requirement? The bounds check shouldn't be more than a comparison against the size of the vector, so it would be difficult to require it via a complexity guarantee.
>> Once you have settled on the facts, "better" is objective. <<
Perhaps a better way to put it is that once you've defined what better means (in our case it almost always means a program that works better, is easier to maintain, less likely to fail, easier to extend, etc.), then the determination of what is better can be based on facts and reasoning. The point is that if someone claimed that square wheels were better for vehicles than round ones because "better" is subjective, you'd just laugh. Is it possible that a situation exists where a square wheel works better for that situation than a round one? Sure. But under the common definition, it's pretty obvious that the facts and reasoning point to the round wheel being better.
The point is that vector versus new is not subjective. My original claim stands. I have yet to hear or read of a situation where it made more sense to use new/delete rather than vector (except perhaps in the implementation of a vector or similar container).