Thread: Insulted for using C++?

  1. #46
    Registered User · Join Date: Sep 2004 · Location: California · Posts: 3,268
    The most glaring, and in my view, unforgivable, omission from these languages is the deterministic destructor. It really irritates me, honestly.
    Destructors make less sense when you consider that there is no guarantee of when the object will be deleted. It makes it tough to use RAII principles when you cannot control the lifetime of the object itself (it is completely up to the garbage collector).

    The above paragraph only relates to Java as I've not done much with C#. I remember hearing that C# does have destructors though, so maybe it doesn't suffer from the same lack of RAII.
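
    For anyone who hasn't run into the idiom, here is a minimal C++ sketch of what RAII gives you; the FileLogger class is made up purely for illustration. The point is that the destructor runs at a precise, known spot (the closing brace), so the cleanup can neither be forgotten nor postponed to some unspecified collection time.

    Code:
    #include <cstdio>
    #include <iostream>
    
    // Hypothetical RAII wrapper: the constructor acquires the resource,
    // the destructor releases it.
    class FileLogger
    {
    public:
        explicit FileLogger( char const* path )
        : fp( std::fopen( path, "w" ) ) { }
    
        ~FileLogger( )
        {
            if( fp )
                std::fclose( fp );    // runs exactly at the closing brace below
        }
    
        void log( char const* msg )
        {
            if( fp )
                std::fprintf( fp, "%s\n", msg );
        }
    
    private:
        std::FILE* fp;
    };
    
    int main( void )
    {
        {
            FileLogger logger( "run.log" );
            logger.log( "work in progress" );
        }   // file is closed here, deterministically
        std::cout << "file already closed" << std::endl;
        return 0;
    }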
    bit∙hub [bit-huhb] n. A source and destination for information.

  2. #47
    Sebastiani (Guest) · Join Date: Aug 2001 · Location: Waterloo, Texas · Posts: 5,708
    Quote Originally Posted by bithub View Post
    Destructors make less sense when you consider that there is no guarantee of when the object will be deleted. It makes it tough to use RAII principles when you cannot control the lifetime of the object itself (it is completely up to the garbage collector).

    The above paragraph only relates to Java as I've not done much with C#. I remember hearing that C# does have destructors though, so maybe it doesn't suffer from the same lack of RAII.
    Well, that's just the thing. It would have made much better sense to use a true reference counting system (eg: at the precise moment the last reference goes out of scope, the object's destructor is invoked, and the memory freed) instead of the clumsy (and inefficient) garbage-collection scheme that they both use. For example, which is better - to close a file "some time in the future" (and Java doesn't even guarantee that much with its 'finalize' provision), or *precisely* when you're done with it? The fact is, there is no compelling reason not to provide true destructors, given their incredible usefulness (I challenge anyone to name just one!).

    And no, C# does not have truly deterministic destructors, although they are guaranteed to be called, eventually (as such, I never use them anyway).
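
    Incidentally, C++ already offers the reference-counting behaviour described above through shared_ptr (boost/TR1 at the time of this thread, std::shared_ptr since C++11): the managed object's destructor fires at the exact moment the last reference is dropped. A minimal sketch:

    Code:
    #include <iostream>
    #include <memory>
    
    struct Resource
    {
        ~Resource( ) { std::cout << "destructor runs now" << std::endl; }
    };
    
    int main( void )
    {
        std::shared_ptr<Resource> outer;
        {
            std::shared_ptr<Resource> inner( new Resource );
            outer = inner;                  // reference count is now 2
        }                                   // inner destroyed: count drops to 1
        std::cout << "about to drop the last reference" << std::endl;
        outer.reset( );                     // count hits 0: ~Resource() runs immediately
        std::cout << "last reference gone" << std::endl;
        return 0;
    }
    The two print statements around reset() make the timing visible: the destructor message appears between them, not at some later collection point.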

  3. #48
    CornedBee (Cat without Hat) · Join Date: Apr 2003 · Posts: 8,895
    a true reference counting system (eg: at the precise moment the last reference goes out of scope, the object's destructor is invoked, and the memory freed) instead of the clumsy (and inefficient) garbage-collection scheme that they both use
    A very risky statement, that. Inevitably, we've moved from C++ vs Java/C# to garbage collection vs no garbage collection.

    The fact is, there is no compelling reason not to provide true destructors, given their incredible usefulness (I challenge anyone to name just one!).
    Garbage collection.

    And no, C# does not have truly deterministic destructors
    But it does have deterministic resource freeing via the using block.
    All the buzzt!
    CornedBee

    "There is not now, nor has there ever been, nor will there ever be, any programming language in which it is the least bit difficult to write bad code."
    - Flon's Law

  4. #49
    Registered User · Join Date: Sep 2004 · Location: California · Posts: 3,268
    Well, that's just the thing. It would have made much better sense to use a true reference counting system (eg: at the precise moment the last reference goes out of scope, the object's destructor is invoked, and the memory freed) instead of the clumsy (and inefficient) garbage-collection scheme that they both use.
    Actually, efficiency is one of the main reasons GC works this way. Let's say you have a class named Foo. In one function you create a Foo instance, and it goes out of scope at the end of the function. With destructors, the object would be destroyed at the end of the function. In Java, the object can be reused the next time a Foo object is needed.

    For example, which is better - to close a file "some time in the future" (and Java doesn't even guarantee that much with its 'finalize' provision), or *precisely* when you're done with it?
    The answer to this question is obvious. That's why you explicitly call the close() function instead of relying on Java's finalize (which no programmer should _ever_ rely on being called).

    The fact is, there is no compelling reason not to provide true destructors, given their incredible usefulness (I challenge anyone to name just one!).
    I think I already named one. Mainly, that the JVM can reuse objects without reallocating memory.

    All this being said, I would really love for some sort of mechanism where a function is immediately called when the reference count reaches zero. It doesn't have to be a destructor (in that it doesn't actually free the object's memory), but it just allows the programmer to run some code when an object goes out of scope. Considering neither C# nor Java provides this functionality, there is probably some implementation issue that I can't think of which prevents this.
    bit∙hub [bit-huhb] n. A source and destination for information.

  5. #50
    Sebastiani (Guest) · Join Date: Aug 2001 · Location: Waterloo, Texas · Posts: 5,708
    Garbage collection.
    Sorry, circular logic doesn't count.

    But it does have deterministic resource freeing via the using block.
    True enough, but that's still a kludge, IMO.

    I think I already named one. Mainly, that the JVM can reuse objects without reallocating memory.
    You seriously think that allocating a chunk of memory is *that* expensive? Nonsense.

    The answer to this question is obvious. That's why you explicitly call the close() function instead of relying on Java's finalize (which no programmer should _ever_ rely on being called).
    Well, I know what has to be done; what I was asking, though, was "Which is better?".

    All this being said, I would really love for some sort mechanism where a function is immediately called when the reference count reaches zero. It doesn't have to be a destructor (in that it doesn't actually free the object's memory), but it just allows the programmer to run some code when an object goes out of scope. Considering neither C# or C++ provide this functionality, there is probably some implementation issue that I can't think of which prevents this.
    For C++, that would be the destructor (it's not as if it's only for freeing memory, you know).
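
    A minimal sketch of that point, using an invented ScopeTimer class: the destructor frees no memory at all, it just runs code at the exact moment the scope ends.

    Code:
    #include <ctime>
    #include <iostream>
    
    // A destructor that frees no memory at all: it just reports how long
    // the enclosing scope took, at the exact moment the scope ends.
    struct ScopeTimer
    {
        ScopeTimer( ) : start( std::clock( ) ) { }
    
        ~ScopeTimer( )
        {
            double secs = double( std::clock( ) - start ) / CLOCKS_PER_SEC;
            std::cout << "scope took " << secs << "s" << std::endl;
        }
    
        std::clock_t start;
    };
    
    int main( void )
    {
        ScopeTimer timer;               // report prints when main() exits
        for( volatile long i = 0; i < 10000000; ++i )
            ;                           // some busywork to time
        return 0;
    }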

  6. #51
    Registered User · Join Date: Sep 2004 · Location: California · Posts: 3,268
    For C++, that would be the destructor (it's not as if it's only for freeing memory, you know).
    Bah, I meant to write "C# or Java", but C++ came out instead. I edited my post right after I hit "reply", but you must have gotten to it first

    You seriously think that allocating a chunk of memory is *that* expensive? Nonsense.
    Actually, it is. Since you can't allocate objects on the stack (like you can with C++), everything is allocated on the heap. If you are continuously allocating and deallocating heap memory, it will actually add quite a bit of overhead (as well as possibly fragmenting the heap). We've done quite a bit of GC tuning at work, and it's amazing the performance difference you can get just by adjusting how much memory the GC has to work with (more memory means fewer allocations/deallocations).

    Well, I know what has to be done; what I was asking, though, was "Which is better?".
    Obviously it would be better if we could rely on RAII. This is my #1 Java wishlist item.
    bit∙hub [bit-huhb] n. A source and destination for information.

  7. #52
    Mario F. ((?<!re)tired) · Join Date: May 2006 · Location: Ireland · Posts: 8,446
    I think the issue is not really problematic, Sebastiani. The language idioms adjust accordingly, and you are given the tools needed to code in an environment with non-deterministic destructors. That is, the lack of a deterministic destructor won't stop good and efficient code from being written in Java. Nor does it hinder in any way the language's ability to produce usable software.

    You seriously think that allocating a chunk of memory is *that* expensive? Nonsense.
    When you consider all the extra work besides just setting aside memory for an object, it can become expensive. Member initialization can be an expensive proposition, and I wouldn't be surprised (I don't know Java that well) if the JVM takes advantage of exactly this to enable member initialization shortcuts.

    Well, I know what has to be done; what I was asking, though, was "Which is better?".
    I'm grabbing this quote to introduce another consideration into your reasoning.

    Can it be that we sometimes confuse our personal preferences with the notion of right and wrong? Or to put it another way, should this be about doing things in a different way, instead of the best way? Because in many things in life (programming included) there is no better way, only a different way.
    Last edited by Mario F.; 09-17-2009 at 12:21 PM.
    Originally Posted by brewbuck:
    Reimplementing a large system in another language to get a 25% performance boost is nonsense. It would be cheaper to just get a computer which is 25% faster.

  8. #53
    Registered User · Join Date: Sep 2004 · Location: California · Posts: 3,268
    I found a great post written by someone who developed Microsoft's CLR. It explains the difficulties in implementing deterministic resource management in languages that use garbage collection.
    bit∙hub [bit-huhb] n. A source and destination for information.

  9. #54
    MK27 (spurious conceit) · Join Date: Jul 2008 · Location: segmentation fault · Posts: 8,300
    Quote Originally Posted by Mario F. View Post
    Can it be that we sometimes confuse our personal preferences with the notion of right and wrong? Or to put it another way, should this be about doing things in a different way, instead of the best way? Because in many things in life (programming included) there is no better way, only a different way.
    I can't believe you are saying this

    Garbage collection is great, altho the "lack of control" always makes me think that doing it yerself will be more optimal. But probably that is not even so true, and people have been tuning this tish for years. Same for dynamic typing.

    IMO the problem with (explicit) reference counting is it becomes potentially more ambiguous (and therefore prone to error) than regular allocate & free stuff.
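
    To make that concrete, below is a rough sketch of hand-rolled (intrusive) reference counting; the RefCounted class is invented for illustration, and the easy-to-make mistakes are noted in the comments.

    Code:
    #include <iostream>
    
    // Hand-rolled intrusive reference counting: every owner has to call
    // release() exactly once. Miss one and the object leaks; call it
    // twice and you get a double delete.
    class RefCounted
    {
    public:
        RefCounted( ) : count( 1 ) { }
    
        void addRef( ) { ++count; }
    
        void release( )
        {
            if( --count == 0 )
            {
                std::cout << "object destroyed" << std::endl;
                delete this;
            }
        }
    
    private:
        ~RefCounted( ) { }             // private: only release() may destroy
        int count;
    };
    
    void use( RefCounted* obj )
    {
        obj->addRef( );
        /* ... work with obj ... */
        obj->release( );               // forget this line and the object leaks forever
    }
    
    int main( void )
    {
        RefCounted* obj = new RefCounted;  // the initial owner holds the first reference
        use( obj );
        obj->release( );                   // the original owner's release
        return 0;
    }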
    C programming resources:
    GNU C Function and Macro Index -- glibc reference manual
    The C Book -- nice online learner guide
    Current ISO draft standard
    CCAN -- new CPAN like open source library repository
    3 (different) GNU debugger tutorials: #1 -- #2 -- #3
    cpwiki -- our wiki on sourceforge

  10. #55
    Sebastiani (Guest) · Join Date: Aug 2001 · Location: Waterloo, Texas · Posts: 5,708
    Actually, it is. Since you can't allocate objects on the stack (like you can with C++), everything is allocated on the heap. If you are continuously allocating and deallocating heap memory, it will actually add quite a bit of overhead (as well as possibly fragmenting the heap). We've done quite a bit of GC tuning at work, and it's amazing the performance difference you can get just by adjusting how much memory the GC has to work with (more memory means fewer allocations/deallocations).
    Maybe back in like 1969 that would have been an issue (but then again, maybe not). These days, though, I would hardly consider it a major bottleneck for most applications. Maybe the GC is implemented poorly - I don't know - but what I can tell you is that you can write a C program that calls malloc/free in a tight loop and see quite satisfactory performance (for most implementations, anyway). That is enough to tell me that it most certainly can be efficient if designed properly.
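
    For anyone who wants to try it, a rough sketch of that kind of tight-loop test is below; the iteration count and block size are arbitrary, and timings will of course vary with the allocator and platform.

    Code:
    #include <cstdlib>
    #include <ctime>
    #include <iostream>
    
    // Rough malloc/free micro-benchmark: allocate and free a small block
    // many times and report the elapsed CPU time.
    int main( void )
    {
        const long iterations = 10000000;
        const std::size_t block_size = 64;
    
        std::clock_t start = std::clock( );
        for( long i = 0; i < iterations; ++i )
        {
            char* p = static_cast<char*>( std::malloc( block_size ) );
            if( !p )
                return 1;
            *p = 0;                    // touch the block so the pair isn't trivially optimized away
            std::free( p );
        }
        double secs = double( std::clock( ) - start ) / CLOCKS_PER_SEC;
    
        std::cout << iterations << " malloc/free pairs in " << secs << "s" << std::endl;
        return 0;
    }
    Whatever the numbers come out to on a given machine, the structure of the test is the same as the C program described above.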

    I think the issue is not really problematic, Sebastiani. The language idioms adjust accordingly, and you are given the tools needed to code in an environment with non-deterministic destructors. That is, the lack of a deterministic destructor won't stop good and efficient code from being written in Java. Nor does it hinder in any way the language's ability to produce usable software.
    Well sure, good programmers learn how to deal with these things, naturally. I'm just saying that I'm not enjoying it one bit. It's a pain in the arse, pure and simple.

    When you consider all the extra work besides just setting aside memory for an object, it can become expensive. Member initialization can be an expensive proposition, and I wouldn't be surprised (I don't know Java that well) if the JVM takes advantage of exactly this to enable member initialization shortcuts.
    I don't know what shortcuts you're alluding to, but I'm betting the performance difference is effectively negligible.

    Can it be that we sometimes confuse our personal preferences with the notion of right and wrong?
    Nope. Let me use an example that I think I've used before. Consider this very simple class:

    Code:
    #include <iostream>
    #include <string>
    
    using namespace std;
    
    // Prints the opening tag on construction and the matching closing
    // tag on destruction, i.e. exactly when the object leaves scope.
    struct tag_writer
    {
    	tag_writer( string const& name )
    	: name( name )
    	{	
    		cout << "<" << name << ">" << endl;
    	}
    	
    	virtual ~tag_writer( void )
    	{
    		cout << "</" << name << ">" << endl;	
    	}	
    	
    	string
    		name;
    };
    
    int main( void )
    {
    	tag_writer
    		a( "a" ), 
    		b( "b" ), 
    		c( "c" ), 
    		d( "d" ), 
    		e( "e" ), 
    		f( "f" ), 
    		g( "g" ),
    		h( "h" ),
    		i( "i" ), 
    		j( "j" ), 
    		k( "k" ), 
    		l( "l" ), 
    		m( "m" ), 
    		n( "n" ), 
    		o( "o" ), 
    		p( "p" ), 
    		q( "q" ), 
    		r( "r" ), 
    		s( "s" ), 
    		t( "t" ), 
    		u( "u" ), 
    		v( "v" ), 
    		w( "w" ), 
    		x( "x" ), 
    		y( "y" ),
    		z( "z" ); 
    	return 0;
    }
    Now, just write a program in Java that produces the same output! See my point? And this is a trivial example - the reality is that deterministic destructors create all sorts of opportunities that just aren't possible (without a lot of effort, anyway) otherwise.
    Last edited by Sebastiani; 09-17-2009 at 12:50 PM.

  11. #56
    Mario F. ((?<!re)tired) · Join Date: May 2006 · Location: Ireland · Posts: 8,446
    Quote Originally Posted by MK27 View Post
    I can't believe you are saying this
    hmm... why?
    Originally Posted by brewbuck:
    Reimplementing a large system in another language to get a 25% performance boost is nonsense. It would be cheaper to just get a computer which is 25% faster.

  12. #57
    abachler (Malum in se) · Join Date: Apr 2007 · Posts: 3,195
    Quote Originally Posted by CornedBee View Post
    Garbage collection.
    This is actually the first I've heard that Java/C# do not execute the destructor immediately, but if that's true it just kills what little respect I had for those languages to begin with. Pray someone tell me that I'm misunderstanding what non-deterministic destructor means. OMFG I just read an article on the issue and sadly I'm not mistaken.

    Personally I can think of several scenarios where a non-deterministic destructor is a very bad idea; thankfully only half of them involve loss of life and/or limb, while the rest just cause the machine to spontaneously restore itself to a quasi-preassembled state while simultaneously performing a terminal failure analysis of several components.

    Perhaps it never occurred to web-heads that the destructor may execute time-critical code and that the resources it frees may not be just managed memory.

  13. #58
    Mario F. ((?<!re)tired) · Join Date: May 2006 · Location: Ireland · Posts: 8,446
    Quote Originally Posted by Sebastiani View Post
    And this is a trivial example - the reality is that deterministic destructors create all sorts of opportunities that just aren't possible (without a lot of effort, anyway) otherwise.
    Well, you realize that your example simply introduces a coding shortcut? I do not see that as particularly advantageous.

    But even in the case of objects that initialize and control the lifetime of other objects, Java and C# provide mechanisms to control this. Under the CLR you have the Dispose method, and in C# the using keyword. I'm unaware of any outstanding issues current programmers in Java or C# have while using non-deterministic destructors.
    Originally Posted by brewbuck:
    Reimplementing a large system in another language to get a 25% performance boost is nonsense. It would be cheaper to just get a computer which is 25% faster.

  14. #59
    abachler (Malum in se) · Join Date: Apr 2007 · Posts: 3,195
    Quote Originally Posted by Mario F. View Post
    I'm unaware of any outstanding issues current programmers in Java or C# have while using non-deterministic destructors.
    CodeProject: Non-Deterministic Destructors - What Are They?

    Dispose creates its own problems.
    Last edited by abachler; 09-17-2009 at 01:22 PM.

  15. #60
    Sebastiani (Guest) · Join Date: Aug 2001 · Location: Waterloo, Texas · Posts: 5,708
    Quote Originally Posted by Mario F.
    Well, you realize that your example simply introduces a coding shortcut? I do not see that as particularly advantageous.
    Yes - of course! In fact, that's all a destructor really is - a shortcut. But it goes deeper than that - consider a function where you have a dozen or so objects that all have some critical cleanup code associated with them. How do you structure such a function so that this code gets executed properly? Well, in C++ you could structure it like so:

    Code:
    // A, B, C, and D are assumed to be RAII types: each acquires a resource
    // in its constructor, releases it in its destructor, and converts to
    // bool to report whether the acquisition succeeded.
    bool foo( void )
    {
        A 
            a;
        if( !a )
            return false;
        B 
            b( a );
        if( !b )
            return false;
        C 
            c( b );
        if( !c )
            return false;
        D 
            d( c );
        if( !d )
            return false;
    /*
        ...etc...
    */
        return true;
    }
    Now pretend C++ didn't have destructors. You'd basically be forced to resort to something like:

    Code:
    bool foo( void )
    {
        bool 
            success = false;        
        A 
            a;
        if( !a )
            return false;
        B 
            b( a );
        if( b )
        {
            C 
                c( b );
            if( c )
            {            
                D 
                    d( c );
                if( d )
                {
                /*
                    ...etc...
                */
                    success = true;
                    d.cleanup( );
                }
                c.cleanup( );
            }
            b.cleanup( );
        }
        a.cleanup( );
        return success;
    }
    Obviously, this is much less flexible altogether (and frankly, quite error prone).

    Quote Originally Posted by Mario F.
    But even in the case of objects that initialize and control the lifetime of other objects, Java and C# provide mechanisms to control this. Under the CLR you have the Dispose method, and in C# the using keyword. I'm unaware of any outstanding issues current programmers in Java or C# have while using non-deterministic destructors.
    Well, this is basically a non-issue with most Java programmers since the language doesn't have them to begin with (after all, you can't miss what you've never had!).

    EDIT: Case in point: notice that I accidentally placed the cleanup code in the wrong scope! As I was saying, error prone.
    Last edited by Sebastiani; 09-17-2009 at 01:38 PM.
