As a beginner to C++, I would just like to ask what makes c# better than C++, and if I should wait to learn C++ before jumping right into C#.
If you actually mean "in what way is C# better than C++", then just search the Web for phrases like "C# versus C++". I easily found C++ vs. C# - a Checklist from a C++ Programmers Point of View.
If you want to learn C#, then learn C#. You can always come back to learn the other one some day.
Quote:
Originally Posted by ashinms
It would not surprise me if this is a common issue for people going C#->C++, instead of the other way around.
Even though C++ is (or at least can be) somewhat higher level than C, it still has things such as pointers, arrays and so on, which work at a much lower level than anything in C#. This means that if you start with C++ and no programming experience beforehand, you are forced to learn the low-level ways first, and therefore going a level higher afterwards will be cake.
Going the other way around, i.e. programming without ever having to deal with these things, then moving to another language a level lower, where you do have to deal with them, is much harder.
And this is not unique to C#->C++; it also applies to moving from any other language that "does memory management for you", such as Java, Python, Perl, Basic and many others.
C++ (and C) have very basic handling of memory - it requires programmer effort to maintain pointers to valid memory, and to ensure there are no memory leaks or other problems. (This is in the standard language - obviously, with sufficient code added on top of the standard language, we can CREATE the environment for "automatic memory management", e.g. Python and C# are written in C or C++, so obviously the memory management done by those languages can be written in C or C++).
Almost all the other basic principles of C++ are identical between C#, Java and Python - there are some subtle differences, but for most programmers, those are a simple case of "learning how to do it in this language".
The principle of managing your own memory is a bigger task to learn.
--
Mats
the feedback I've heard is that it's not just memory management. knowing where to use -> and :: when you've been using the dot for everything can be confusing for some, and the fact that C++ doesn't keep track of how big an array is when you dynamically create it was frustrating for one of my friends a while back.
It can be frustrating, yes, but it will in turn make you a better programmer to know how these things work. Deep down under C# and other similarly managed languages, unmanaged memory management is doing the work. Pointers are almost nonexistent in C#, but understanding them is pretty important in my opinion.
It does, if you use std::vector or std::tr1::array.
However, by default, it will not do bounds checking.
For dynamic arrays with vector, however, using the .at member function will guarantee that no out-of-bounds can be made.
boost::array also raises an assert if an out-of-bounds access is made.
How about the fact that, in theory, the code you write in C# should be language independent with respect to the other .NET languages? I.e. a class you write in C# can be inherited from and extended by a VB.NET developer, and you can then take that class and use it in your app.
Yes. You helped. :)
They're taking it a step further with C# 4.0, allowing you (with proper binding libraries of course) to call functions and instantiate objects dynamically across any language (javascript, python, ruby, etc).
That stuff is more advanced and may find little use but it's an interesting addition.
Personally I haven't done much stuff in C++ lately, would like to get back into it though. I prefer C# because of the frameworks it offers for web development (ASP.NET MVC) and desktop/web applications (WPF, silverlight).
C# might have a lot of connectivity features, but the truth is it's really just a big ole kludge. Besides that, any language lacking deterministic destructors just seems inherently flawed to me. I'd say stick with C++ unless you have no other choice.
You can still have the deterministic deallocation of resources via Dispose(). Plus, it helps improve performance -- if a large number of objects go out of scope at once, they don't need to all be destroyed immediately while your thread sits on its ass waiting. Rather, they can be garbage collected whenever there's time available to do so.
hmm... I was surprised to hear that C# has deterministic destruction, but a quick check shows that Dispose() is not automatically called when the object goes out of scope, so that does not adequately address Sebastiani's point.
Quote:
Originally Posted by Cat
A recent garbage collection discussion on the Boost mailing list gives me the impression that a well designed allocator would provide that benefit as well, though in this case it would be about delaying the freeing of memory since the objects would nonetheless be destroyed.
Quote:
Originally Posted by Cat
Finalize() is called when an object is being deleted. It is in effect a destructor for C#; usually it's only used to free unmanaged resources. Its syntax is also similar to C++'s, as in ~ClassName();.
for me personally i am way more productive with C# than i am with C++, and using the library is much more convenient/consistent than a mix/match of Win32, stdlib, or boost libraries.
back on topic, there is no X is better than Y. there is only X is better suited for Z than Y. in general i think learning C++ first will make you a better C# programmer....
the one thing you need to watch out for is not to think in the old language. learn to think in the new language.
people who have seen "VB6 style" code in .NET know what i'm talking about.... :'(
>> Plus, it helps improve performance -- if a large number of objects go out of scope at once, they don't need to all be destroyed immediately while your thread sits on its ass waiting.
The role that destructors play in memory management is just a side-effect of their real purpose: to do *something* precisely when the object goes out of scope. For example, you may have an object that writes an XML start tag in its constructor, and a closing tag in its destructor, e.g.:
Obviously, for things to work properly, the objects have to be destroyed in a specific order.
Code:
xml_writer a( "outermost" ), b( "middle" ), c( "innermost" );
The importance and usefulness of the destructor really can't be overstated. Their absence from languages such as C# and Java is a real shame, indeed.
yes, you *could* do that with a constructor/destructor, but i'm still unconvinced from your XML example why C# lacking true deterministic deallocation is "inherently flawed."
edit: btw, i say true deterministic deallocation because even though in C# you can manually call Dispose() on an object, technically the instance is still available in memory, unlike C++'s delete.
>> but i'm still unconvinced from your XML example why C# lacking true deterministic deallocation is "inherently flawed."
That was just a very simple demonstrative example. There are literally innumerable mechanisms that can be described using the constructor/destructor idiom, as they essentially model a stack (and thus recursion), arguably one of the most important concepts of computer science. The question isn't "what are they good for?" but "what can't they be used for?".
well, if your argument is simply to do something upon construction and destruction, rather than allocating/deallocating memory, then there really isn't much difference between C#'s IDisposable pattern and C++'s destructor....
actually, in C++/CLI, if you declare a C++ destructor it actually compiles into a Dispose() method, and via stack semantics Dispose() will automatically be called when the object goes out of scope.
i will say that *properly* implementing the IDisposable pattern in .NET is a complete pain in the butt with numerous places where you can screw up. and since it is a pattern, there is no guarantee that it will be done exactly the same across different libraries and programmers....which leads to more pain in the butt.
here's one of the most comprehensive guides to implementing the pattern:
Joe Duffy's Weblog
yes...25 pages printed.
>> i will say that *properly* implementing the IDisposable pattern in .NET is a complete pain in the butt
I don't even bother with it; I always invoke cleanup code manually. It sure would be nice if I didn't have to, though. ;)
Okay, this is confusing :(
Quote:
Originally Posted by bling
Are you saying that although the programmer can call Dispose earlier (i.e., deterministic but manual deallocation), Dispose will be invoked automatically when the object goes out of scope? My impression (from Clemens Szyperski's annotation in the article you linked to) is that if the programmer does not call Dispose, Dispose will be called sometime during finalization, in which case it is not equivalent to the deterministic destruction provided by (standard) C++ destructors.
...and in a tangentially related point
...as opposed to stopping execution at ANOTHER point to do it, when you have no control over the performance hit.
Quote:
Plus, it helps improve performance -- if a large number of objects go out of scope at once, they don't need to all be destroyed immediately while your thread sits on its ass waiting.
My point here is that this doesn't save time overall - the same work has to be done at some point, and if you're not in control of that then it will increase the memory footprint while those objects that are out of scope hang around.
It's a double-edged issue.
If a collection of 20,000 objects falls out of scope, how long does that take? It depends on what happens as they do. Tens of millions can be processed per second on typical hardware when the destructor is trivial, and depending on what's really going on, the net result (sometimes due to compiler optimization) can be that zero work has to be done. That's almost never true on .NET; at a minimum, some under-the-hood management of the implied base object has to happen for a collection of such objects.
I always thought that it was deterministic, but apparently not. According to MSDN:
Note the misuse of the word "automatically".
Quote:
The primary use of this interface is to release unmanaged resources. The garbage collector automatically releases the memory allocated to a managed object when that object is no longer used. However, it is not possible to predict when garbage collection will occur.
[edit]
I see. It's deterministic iff the object is manipulated within the "using" syntax.
[/edit]
How is it misuse? It is true that the garbage collector automatically releases the memory for managed objects, since (at least ideally) the programmer does not need to manually do so.
Quote:
Originally Posted by Sebastiani
I should have amended that in my edit. But yes, it is automatic (as long as the "using" syntax is employed, of course).
yes, very confusing indeed :mad:!! i hope Sebastiani's link from MSDN made it a little more clear.
basically:
a) the programmer cannot deterministically clean up managed resources
b) IDisposable provides a pattern for cleaning up unmanaged resources (and optionally do so deterministically by calling Dispose() directly)
anyways, IDisposable is such a complicated mess probably because .NET has to support multiple languages. for most languages, Dispose() is optional, so if you do not call it, unmanaged resources will not be freed until the GC kicks in. however, for languages like C++, which has RAII, Dispose() will automatically be called when an object goes out of scope. i believe the C++/CLI compiler will automatically generate that code even if you didn't define it.
You can mark an object for collection and call GC.Collect();
Besides, what's the big deal about it possibly not being collected as soon as all references to a heap object are gone? If the program needed more memory for an allocation, it would immediately collect everything that is safe to collect. You probably can't manage it better yourself.
well, for one, just because you mark an object for collection doesn't mean it will be collected, even with GC.Collect. objects get elevated to different generations depending on how long they live. GC.Collect is an all-or-nothing kinda deal, so if you want to free up memory for object a, which is in generation 1, you need to free up *all* objects in generation 1, not to mention you have no idea what order objects are collected in.
also, true deterministic allocation/deallocation is important for applications where predictability is a requirement. many high-performance applications fall into this category.
I hear this a lot, and it simply isn't true.
Quote:
You probably can't manage it better yourself. (Memory management )
If it were even close to true, the wide range of C++ applications that manage rather large volumes of information (like SQL engines, 3D Studio Max, AutoCAD, FormZ, Pro-E, Catia, Photoshop) wouldn't work as well as they do.
In fact, it is precisely because you CAN manage it better yourself that these ambitious applications aren't written in Java, C# or similar languages.
I'll give you this, however - you can't manage it better with the same simplicity - that much is true.
Also, the subject relative to deterministic destruction isn't as much about memory management specifically as it is about the RAII paradigm (or design pattern, if you think of it that way).