Thread: Insulted for using C++?

  1. #76
    Cat without Hat CornedBee's Avatar
    Join Date
    Apr 2003
    Posts
    8,895
    Quote Originally Posted by MK27 View Post
    Actually there would be none; since js is often embedded in HTML, it would be impossible to use it "compiled".
    If it was necessary to compile it ahead of time (AOT), there simply wouldn't be any inline JS.

    Even if you relegated it exclusively to .js files (the "unobtrusive" approach), using code that was compiled into asm for the browser would be an assbackward approach.
    Very much so. But you still could compile it to bytecode, the way Java and ActionScript work.

    Also, javascript would be much easier to debug if it were compiled first,
    No, it wouldn't be. All interpreters I know parse the entire file ahead of time anyway, so they pick up all syntax errors except those in late-interpreted strings (e.g. eval()). And just because you compile it first doesn't mean you have a static type system, which is the other place where you can catch bugs. I don't see what other advantage AOT compilation could have.

    so this idea that the interpreter makes "the edit-try-repeat cycle" easier is also assbackward, at least in the case of js.
    Edit. Reload. Try again. The fact that there is no separate compile step makes it a lot easier. The fact that I can enter stuff in the error console, or even the URL bar, makes it a lot easier. And if necessary I can even go down to the command line JS interpreter, as long as my script isn't about DOM interaction. There's Firebug and similar tools. They are not in the least hindered by JS's usually interpreted nature.

    Altho to be fair, I think error handling is better in most interpreted languages (as opposed to compiled ones), so your point is valid.
    Another place where I don't see interpreted vs compiled making any difference.

    "Flash" code is compiled into bytecode, but witness flash lacks javascript's potential for page interaction (except to the extent that it uses external js).
    That's got nothing to do with being compiled. That's about the nature of the Flash plug-in. It doesn't expose the DOM of the browser, only of its own content. Therefore, ActionScript can mostly only interact with the Flash content.

    The key point is that interpretation vs compilation is a matter of the implementation, not of the language. There's a C interpreter. (More than one, actually.) There's even a C++ interpreter used at CERN, but I don't know how good it is.
    On the flip side, every now and then somebody writes an AOT compiler for various previously interpreted languages.
    As a matter of fact, there are hardly any straight interpreters left. Show me one significant interpreter that doesn't first parse the source code into some kind of bytecode (and possibly cache it, as CPython does) and then interpret, or even JIT-compile, that. Significant JITters of previously interpreted languages are, for example, TraceMonkey (JavaScript, Mozilla), V8 (JavaScript, Google), SquirrelFish Extreme (JavaScript, Apple), and Unladen Swallow (Python, still in alpha). Flash 10, I think, was the first Flash version that JITted ActionScript. (The JIT engine of Flash formed the base for TraceMonkey.)
    All the buzzt!
    CornedBee

    "There is not now, nor has there ever been, nor will there ever be, any programming language in which it is the least bit difficult to write bad code."
    - Flon's Law

  2. #77
    spurious conceit MK27's Avatar
    Join Date
    Jul 2008
    Location
    segmentation fault
    Posts
    8,300
    Quote Originally Posted by CornedBee View Post
    All interpreters I know parse the entire file ahead of time anyway, so they pick up all syntax errors except those in late-interpreted strings (e.g. eval()). And just because you compile it first doesn't mean you have a static type system, which is the other place where you can catch bugs. I don't see what other advantage AOT compilation could have.
    Primarily that you could do the compile time checking outside the browser. And yeah, I need the DOM.

    Edit. Reload. Try again. The fact that there is no separate compile step makes it a lot easier. The fact that I can enter stuff in the error console, or even the URL bar, makes it a lot easier.
    Only if you are talking about a very trivial amount of it! With even a medium small AJAX app, there is usually a lot more you will have to do beyond just "reloading the page".

    Of course, there is "Js Unit" which allows you to write automated tests to somewhat circumvent this, and you can try and circumvent it yourself by temporarily preseeding the code with data and placing little shims here and there -- but it would be WAY WAY easier to just catch the errors that can be caught at compile time, at compile time, without having to run the app at all.

    Vis-à-vis Flash arbitrarily not including the DOM model, I will go out on a limb and say I think that is because it cannot. Other than the browser itself, and without seriously changing the way the browser works, I do not think a compiled application could do this. But I know next to nothing about the js engine, so I could be wrong.


    There's Firebug and similar tools. They are not in the least hindered by JS's usually interpreted nature.
    Sure they are. They can only work at runtime. Errors that could be caught at compile time are usually near-irrelevant (typos), but it is pretty irritating to start something from scratch, perform all the user interaction to get to the point where you can check a new function's "functionality", and then crap out due to a typo that would have been caught by the compiler.

    However, this is kind of a silly argument I started, since js always involves HTML and in serious web apps there is a server side component, so it would be impossible to get out of using the browser with all these components.

    ps. bytecode still requires an interpreter or a virtual machine or whatever, e.g. IMO Java is an interpreted language, and I would think that interpreters have always used intermediate forms, not just recently. So I understand what you are saying about "implementation" -- really I am opposing executables that are precompiled into asm to everything else, "everything else" being interpreted in some manner.
    Last edited by MK27; 09-18-2009 at 09:54 AM.
    C programming resources:
    GNU C Function and Macro Index -- glibc reference manual
    The C Book -- nice online learner guide
    Current ISO draft standard
    CCAN -- new CPAN like open source library repository
    3 (different) GNU debugger tutorials: #1 -- #2 -- #3
    cpwiki -- our wiki on sourceforge

  3. #78
    Malum in se abachler's Avatar
    Join Date
    Apr 2007
    Posts
    3,195
    Well ultimately the big winner for C/C++ boils down to two words - inline assembly

    AFAIK neither Java nor C# supports this feature, and although you could argue that you could write the inline assembly in C++ and then access the DLL from Java/C#, that doesn't make it a feature of the language, and many times the whole point of inline assembly is to do goofy things that a function call might mess up, e.g. cache control operations inside your inner loop. By the same token, the big loser for Java/C# is the fact that they are interpreted languages. An interpreted language will NEVER be faster than native machine code. The best it could ever theoretically hope for is parity.
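    For illustration, here is a minimal sketch of what inline assembly looks like in C++ (GCC/Clang extended asm syntax, x86-64 assumed; MSVC and other compilers use different syntax). It reads the CPU's time-stamp counter directly, with no function-call overhead:

    Code:
        #include <cstdint>
        #include <iostream>

        // rdtsc leaves the 64-bit counter in EDX:EAX, hence the "=a"/"=d" constraints.
        static inline std::uint64_t read_tsc()
        {
            std::uint32_t lo, hi;
            asm volatile ("rdtsc" : "=a"(lo), "=d"(hi));
            return (static_cast<std::uint64_t>(hi) << 32) | lo;
        }

        int main()
        {
            std::uint64_t start = read_tsc();
            volatile int sink = 0;
            for (int i = 0; i < 1000; ++i) sink += i;   // some work to time
            std::cout << "approx cycles: " << (read_tsc() - start) << '\n';
        }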
    Last edited by abachler; 09-18-2009 at 12:43 PM.

  4. #79
    Registered User
    Join Date
    Sep 2004
    Location
    California
    Posts
    3,268
    Quote Originally Posted by abachler View Post
    Well ultimately the big winner for C/C++ boils down to two words - inline assembly
    If inline assembly was the big winner for C/C++, I would have switched to a different language years ago. In the last 10 years, I think I've used inline assembly once (and I found out later that it wasn't even needed).

    Quote Originally Posted by abachler View Post
    By the same token, the big loser for Java/C# is the fact that they are interpreted languages. An interpreted language will NEVER be faster than native machine code. The best it could ever theoretically hope for is parity.
    That's not true. Using adaptive optimization, a Java program could (in theory) perform faster than an equivalent C++ program. In practice this doesn't happen often today, but it could become a more common occurrence in the future. The JIT compiler has been getting better and better over the years, so it's possible that, through adaptive optimization (which the HotSpot JIT compiler already uses), Java will actually be faster than C++ in the future.
    bit∙hub [bit-huhb] n. A source and destination for information.

  5. #80
    spurious conceit MK27's Avatar
    Join Date
    Jul 2008
    Location
    segmentation fault
    Posts
    8,300
    Quote Originally Posted by abachler View Post
    An interpreted language will NEVER be faster than native machine code. The best it could ever theoretically hope for is parity.
    Okay, but neither will a compiled language. And you could just as easily "inline" assembly via an interpreter as anywhere else, for what that is worth.

    I would have thought the best hope for an interpreted language is parity with the compiled language via which it is implemented, but here is an apparently feasible claim:

    Quote Originally Posted by POGL website
    When used for online services, Perl (via Apache/mod_perl or IIS/ActivePerl) performance approaches that of C, as the interpretter [sic] and modules are already loaded in memory and executed in-process.

    This is further enhanced by the fact that many performance-critical Perl modules are written in C.

    In the case of OpenGL, much of the work is actually performed by the GPU, making Perl overhead statistically insignificant.

    Due to Perl's strength in string manipulation, there are cases where Perl can even outperform C when loading/manipulating shader programs.
    [...]
    This benchmark demonstrates that POGL is 35% faster than SDL::OpenGL using normal Vertex Arrays, and 46% faster when using POGL's OpenGL::Array (OGA) objects.
    Last edited by MK27; 09-18-2009 at 01:13 PM.
    C programming resources:
    GNU C Function and Macro Index -- glibc reference manual
    The C Book -- nice online learner guide
    Current ISO draft standard
    CCAN -- new CPAN like open source library repository
    3 (different) GNU debugger tutorials: #1 -- #2 -- #3
    cpwiki -- our wiki on sourceforge

  6. #81
    Disrupting the universe Mad_guy's Avatar
    Join Date
    Jun 2005
    Posts
    258
    2nd, your assertion is completely out of touch with real-world programming, where performance can make or break an application. By abstracting away that 'unnecessary' work you lose sight of the fact that how you do it is often almost as important as what you do. This is specifically what the poster you quoted was stating. If you can come up with an elegant solution that is completely abstracted away from the hardware and 100% portable, but it takes 100 years, or in some cases 100 ms, to run, it is a fail.
    Your argument makes little sense - the implicit assumption behind it seems to be that all 'real world programming' involves writing high-performance software where said 'performance' can 'make or break the application', and this is simply not reality. A lot of software doesn't need to worry about incredibly high performance and raw clock speeds, and thus the overall gain from abstracting the problem using higher-level programming languages and tools increases dramatically, which makes them a better choice from almost every logical/economic/business standpoint.

    I'm not sure what point you're ever really getting at, to be honest - your argument almost always seems to boil down to your own authority, that is, "I write software where high performance is key, and therefore the tools that I use for it are just obviously better than all the other ones for every possible domain." The world is simply not that small.

    FYI, I work on high performance data backup software and we have a lot of code written in C++ (and a lot of low level code ranging from network protocols to kernel drivers.) I am aware speed is of very high importance in a lot of cases. But in the larger scheme, there are more important things than just your raw clock speed when writing software.

    Except that those are things not inherent with languages such as C# and Java, IMO. Rather, they have a tendency to weigh down the programmer with clumsy and unnecessary constructs.
    What 'things'? Notation? Garbage collection? using blocks? How are they 'clumsy' and 'unnecessary'? You will have to elaborate on exactly what you think makes them 'clumsy' and I ask that when you do that, you ask what makes things like malloc/free etc not clumsy, error-prone, etc etc.
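    To make the argument concrete, here is a minimal sketch of the kind of bug in question (Widget and the bail_out flag are made up for illustration; std::unique_ptr is C++11, and boost::scoped_ptr plays the same role in older code). With manual new/delete an early return leaks; with a scoped owner the release happens on every path out of the function:

    Code:
        #include <memory>

        struct Widget { int value; };

        bool process_raw(bool bail_out)
        {
            Widget* w = new Widget();
            if (bail_out) return false;          // oops: 'w' leaks on this path
            delete w;
            return true;
        }

        bool process_scoped(bool bail_out)
        {
            std::unique_ptr<Widget> w(new Widget());
            if (bail_out) return false;          // fine: the destructor frees the Widget here
            return true;
        }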

    No. We have GC because history has shown that the average shmoe tends to forget to release resources properly.
    That reasoning seems a little short sighted - a motivation for GC was so that the programmer didn't have to manage memory, because programmers tended to get it wrong a lot.

    Your argument seems to be that GC was invented as a 'crutch' to help along the foolish people for some reason. It is not: I would gladly like more of my work to be done by the machine, because it allows me to focus on my actual problem more intensely.

    In any case, if so many people did get this wrong that it led to GC, how are you sure they're all actually fools who just can't code, and that it has nothing to do with the fact that, I dunno, maybe managing memory manually can get very difficult on larger projects and really has little to do with your problem? I'm not so convinced it was for 'fools' as much as it was for the practical reason of alleviating workload, but you're free to disagree.

    And, well, a proliferation of programming languages quite simply because everyone has their own druthers.
    We use programming languages to solve problems. New problems may require new programming languages, because the model an existing language embodies may not be a very good fit for the problem domain at hand.

    I promise you, it is not just "because" (at least not all the time, anybody can hack up a simple interpreter for some non-interesting programming language,) they're not just doing it for giggles because 'the greatest programming languages ever have already been invented' (which essentially is the implicit argument behind your statement if you think about it for a moment.) New languages are created because they have a goal in mind and a problem to solve. You cannot tell me that the creators of Erlang invented it 'just because' - they invented it because they needed distributed, fault-tolerant software for their telephony systems back in the early 90s. No other programming language came close to offering what they wanted, so they built a language specifically for that paradigm. They didn't do it just because they preferred building their own stuff as opposed to using stuff that was already there, they did it because there was no stuff there to begin with.

    If you persist in this thought that people 'invent' these things only in the interest of tomfoolery, killing time or "because they can," I can only conclude that you are a fool who has never truly stepped out of the bounds in which you currently reside as a software developer. And I'm sorry for you about that, because there's a wonderful amount of stuff out there worth exploring, all with their own merits.

    So my point being that when you know that it all depends on you to get things working, you're much more likely to gain some real insight (and expertise).
    And nobody is arguing against that. I am all for education about the machines we as developers use, because that information is very important and relevant to our jobs. What I'm arguing against is the premise that Java/C# make you a 'bad developer' because you're "away from the machine", or that they "inhibit you from being a good programmer", or something. It's important to know the foundations upon which your technology is built. But that does not mean you should have to needlessly express these things all the time, when they are irrelevant to your actual problem and the goal you have in mind.
    operating systems: mac os 10.6, debian 5.0, windows 7
    editor: back to emacs because it's more awesomer!!
    version control: git

    website: http://0xff.ath.cx/~as/

  7. #82
    spurious conceit MK27's Avatar
    Join Date
    Jul 2008
    Location
    segmentation fault
    Posts
    8,300
    Sheesh, first he tries to claim CS is about more than just computers*, now he wants people to believe programming is about more than just performance. What a Mad_guy

    * still think that is "out to lunch" kookoo
    C programming resources:
    GNU C Function and Macro Index -- glibc reference manual
    The C Book -- nice online learner guide
    Current ISO draft standard
    CCAN -- new CPAN like open source library repository
    3 (different) GNU debugger tutorials: #1 -- #2 -- #3
    cpwiki -- our wiki on sourceforge

  8. #83
    (?<!re)tired Mario F.'s Avatar
    Join Date
    May 2006
    Location
    Ireland
    Posts
    8,446
    Quote Originally Posted by Mad_guy View Post
    What 'things'? Notation? Garbage collection? using blocks? How are they 'clumsy' and 'unnecessary'? You will have to elaborate on exactly what you think makes them 'clumsy' and I ask that when you do that, you ask what makes things like malloc/free etc not clumsy, error-prone, etc etc.
    I know you didn't ask me. But here's some of my main beefs. One per language.

    C++
    - Excessive dogmatism. The language is plagued with an excessive concern for Good Practices that often scares newcomers away and impedes imagination. This is furthered by a strong fundamentalist approach to the language by a good number of the people most actively seeking to spread C++ among the masses. The language ends up looking rigid, hard to learn, and impossible to master, much more because of them than because of the syntax and semantics, which many of these fundamentalists insist are the main obstacle to an easy apprenticeship. It's not the syntax. They are!

    Java
    - The absolute ridiculousness of not supporting the operating system's native widgets by default and without any extra work by the developer.

    C#
    - A masses programming language whose future is not directly controlled by the development community should never be taken seriously. Any investment in it should be to boycott it and force its control to be handed over to those who can in fact preserve it.
    Last edited by Mario F.; 09-18-2009 at 03:42 PM. Reason: added "masses" to the last point to make it more clear.
    Originally Posted by brewbuck:
    Reimplementing a large system in another language to get a 25% performance boost is nonsense. It would be cheaper to just get a computer which is 25% faster.

  9. #84
    spurious conceit MK27's Avatar
    Join Date
    Jul 2008
    Location
    segmentation fault
    Posts
    8,300
    Quote Originally Posted by Mario F. View Post
    C++
    - Excessive dogmatism.
    You forgot clunky and ridiculous xPPP

    Just kidding, I don't have any serious thoughts about C++. Although from the outside it appears to try to be everything to everybody -- which is a dumb idea -- I respect what has been done with it, and perhaps one day I will be as enamoured as some.

    The thing I always think after flipping thru one of those "Best Practices" books is that they are 50% honest and informative and 50% total BS. Beyond a certain point, w/r/t "maintainability, readability" etc., my response to people who have gone overboard in the "anal retentive" department would be that it doesn't matter because:

    1) I would rather pump gas than work for or with you.
    2) You will never have to maintain or seriously read my code, I promise.
    3) I also promise never to maintain or seriously read your code.
    4) End of story.
    Last edited by MK27; 09-18-2009 at 04:18 PM.
    C programming resources:
    GNU C Function and Macro Index -- glibc reference manual
    The C Book -- nice online learner guide
    Current ISO draft standard
    CCAN -- new CPAN like open source library repository
    3 (different) GNU debugger tutorials: #1 -- #2 -- #3
    cpwiki -- our wiki on sourceforge

  10. #85
    Cat without Hat CornedBee's Avatar
    Join Date
    Apr 2003
    Posts
    8,895
    Vis-à-vis Flash arbitrarily not including the DOM model, I will go out on a limb and say I think that is because it cannot. Other than the browser itself, and without seriously changing the way the browser works, I do not think a compiled application could do this. But I know next to nothing about the js engine, so I could be wrong.
    You are. Nothing about the DOM requires the accessing language to be interpreted. But it's a minor point.
    All the buzzt!
    CornedBee

    "There is not now, nor has there ever been, nor will there ever be, any programming language in which it is the least bit difficult to write bad code."
    - Flon's Law

  11. #86
    Guest Sebastiani's Avatar
    Join Date
    Aug 2001
    Location
    Waterloo, Texas
    Posts
    5,708
    Quote Originally Posted by Mad_guy
    What 'things'? Notation? Garbage collection? using blocks? How are they 'clumsy' and 'unnecessary'? You will have to elaborate on exactly what you think makes them 'clumsy'...
    - Lack of real templates (please don't mention Generics, either)
    - Lack of real function objects (and no, delegates don't count)
    - Having to explicitly 'new' every object
    - Goofy iterator interfaces
    - Oh yeah, lack of deterministic destructors (see the sketch after this list)
    - ...ad nauseam...
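    To illustrate the deterministic-destructor point, a minimal C++ sketch (std::mutex and std::lock_guard are C++11; boost::mutex works the same way): the lock is released at the closing brace on every path out of the scope, with no finalizer, GC run, or using block involved.

    Code:
        #include <mutex>
        #include <vector>

        std::mutex m;
        std::vector<int> shared_data;

        void append(int value)
        {
            std::lock_guard<std::mutex> guard(m);   // mutex acquired here
            shared_data.push_back(value);
        }   // guard's destructor runs here, releasing the mutex, even if push_back throws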

    Quote Originally Posted by Mad_guy
    ...and I ask that when you do that, you ask what makes things like malloc/free etc not clumsy, error-prone, etc etc.
    As far as C++ goes, this is really a non-issue.

    Quote Originally Posted by Mad_guy
    That reasoning seems a little short sighted - a motivation for GC was so that the programmer didn't have to manage memory, because programmers tended to get it wrong a lot.
    Isn't that what I said?

    Quote Originally Posted by Mad_guy
    Your argument seems to be that GC was invented as a 'crutch' to help along the foolish people for some reason. It is not: I would gladly like more of my work to be done by the machine, because it allows me to focus on my actual problem more intensely.

    In any case, if so many people did get this wrong it lead to GC, how are you sure they're all actually fools who just can't code, and it has nothing to do with the fact that, I dunno, maybe managing memory manually can get very difficult on larger projects, and really has little to do with your problem? I'm not so convinced it was for 'fools' as much as it was for the practical reason of alleviating workload, but you're free to disagree.
    Incidentally, memory management is never an issue when I'm working on a C++ project. If I need something to live on the heap I can use an std::vector, a smart pointer, or similar. The only case I can think of where GC might be required would be when you have some sort of shared smart pointer that could involve cyclic references. Even then, I've never encountered a situation where a simple alternative wasn't available, so I would argue that GC isn't at all necessary, actually.

    Quote Originally Posted by Mad_guy
    We use programming languages to solve problems. New problems may require new programming languages, because the model an existing language embodies may not be a very good fit for the problem domain at hand.
    I can't think of a single situation where C++ wouldn't be appropriate for some particular problem (except where a compiler/interpreter for that particular system may be necessary, but isn't available). That's not to say that some other language wouldn't suffice, mind you, just that C++ would be sufficient as well.

    Quote Originally Posted by Mad_guy
    If you persist in this thought that that people 'invent' these things only in the interest of tomfoolery, killing time or "because they can," I can only conclude that you are a fool who has never truly stepped out of the bounds in which you currently reside, as a software developer. And I'm sorry for you about that, because there's a wonderful amount of stuff out there worth exploring all with their own merits.
    I don't have any reservations about using any particular programming language. I'm proficient at a good number of them and certainly not afraid to learn new ones, either. Hell, I'll write in assembly language, if that's what the job calls for.

    Quote Originally Posted by Mad_guy
    And nobody is arguing against that. I am all for the education of the machines we as developers use, because that information is very important and relevant to our jobs. What I'm arguing against is the premise that java/c# make you a 'bad developer' because you're "away from the machine" or they "inhibit you from being a good programmer" or something. It's important to know the foundations upon which your technology is built. But that does not mean you should have to needlessly express these things all the time, when they are irrelevant to your actual problem and the goal you have in mind.
    Believe what you like - I'm just calling a spade a spade. Are there advanced/competent Java/C# programmers out there? Absolutely. I'm really just saying that I don't think that these languages *in general* are structured in such a way that teaches new programmers to approach the problem in depth. At any rate, it's just my opinion, and as most opinions go, should be taken with a grain of salt.

  12. #87
    (?<!re)tired Mario F.'s Avatar
    Join Date
    May 2006
    Location
    Ireland
    Posts
    8,446
    Quote Originally Posted by Sebastiani View Post
    The only case I can think of where GC might be required would be when you have some sort of shared smart pointer that could involve cyclic references. But as such, I've never encountered a situation where a simple alternative wasn't available, so I would argue that GC isn't at all necessary, actually.
    Not even then is a GC needed. When cyclic references are necessary one can go with the boost::weak_ptr idiom or, with less overhead, the LeadTwin and AideTwin interface idiom in Austria.
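    For the record, a minimal sketch of breaking a cycle with a weak pointer (shown with std::shared_ptr/std::weak_ptr; boost::shared_ptr and boost::weak_ptr behave the same way). The back-pointer is weak, so the pair does not keep itself alive and no collector is needed:

    Code:
        #include <memory>

        struct Parent;

        struct Child {
            std::weak_ptr<Parent> parent;     // weak: does not contribute to the reference count
        };

        struct Parent {
            std::shared_ptr<Child> child;     // strong: the parent owns the child
        };

        int main()
        {
            auto p = std::make_shared<Parent>();
            p->child = std::make_shared<Child>();
            p->child->parent = p;             // back-pointer, but weak
            // when 'p' goes out of scope Parent's count drops to zero, Parent is
            // destroyed, and that in turn destroys the Child; no leak, no GC
        }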
    Last edited by Mario F.; 09-18-2009 at 09:16 PM. Reason: typos as usual
    Originally Posted by brewbuck:
    Reimplementing a large system in another language to get a 25% performance boost is nonsense. It would be cheaper to just get a computer which is 25% faster.

  13. #88
    Registered User VirtualAce's Avatar
    Join Date
    Aug 2001
    Posts
    9,607
    Most of the people I have come across who abhor C++, adore C#, and make the common mistake of trying to force everyone to use C# were not, in my estimation, extremely strong C++ programmers in the first place. That is not to say there aren't those out there who are strong C++ devs but prefer C#.

    But comparing these is pretty ridiculous. C# has its uses and C++ has its uses. I've also started working with C++/CLI classes so I can interop C++ with C# without using P/Invoke, which, even though it works, is quite clunky.

    Use the right language for the job. But please do not attempt to force all languages to be like one another. I do not use C# for the garbage collection. That does not even concern me since I can very well collect my own garbage in C++. What C# and C++/CLI do offer is simple access to the .NET framework, and C# makes GUI programming so simple it should be banned.

    I love C# and C++, and for interop I'm beginning to like C++/CLI. In my daily duties it is not uncommon for me to work with Lua, ActionScript, C#, C++/CLI, C++ and/or assembly when it isn't possible to view the source of a DLL. Out of all of them the one I dislike the most is ActionScript, but it does have its uses and it is very good at what it was designed for, so long as the developer uses it correctly.

    I've met the 'managed' crowd and my biggest beef with them is they keep trying to give me answers to questions that I'm not asking. I'm not asking how to manage memory, and that is not going to get me to use C# over C++. Managing memory is so basic and fundamental in C++ that I can't believe it is the source of so many debates. There is nothing wrong with managing your own memory. And if you really don't like pointers, or you are one of those 'everything must be a reference' people, then by all means use Boost shared pointers. There are alternatives to manual memory management in C++ via third-party libraries, so I'm not sure why it is such a hot topic. To me it's just not an issue at all.

    All of these new technologies give the developer quite a bit of power and flexibility when developing. Why do we want to use one to the exclusion of another when combined they are extremely powerful?

    C#
    - A masses programming language whose future is not directly controlled by the development community should never be taken seriously. Any investment in it should be to boycott it and force its control to be handed over to those who can in fact preserve it.
    I disagree. C# has been certified by ECMA, and Microsoft neither owns nor maintains the standard. It is no different than, say, ActionScript, which is also ECMA certified, which means Adobe had to give up the rights to it and follow whatever standards the ECMA committee set forth. Because C# is ECMA certified, MS cannot just change things at the drop of a hat. They must conform to the standard if they want ECMA's approval of it. There is no inherent danger in developing C# applications. They will work and be supported long into the future. C# is certainly here to stay.
    This is where Java utterly failed. Sun was not willing to give up the rights to the language and/or standard. Perhaps that has changed now but I do not know since I do not use Java.

    In short, I like any language that makes or has made me money. Since Java is not one of them I do not like Java. VB also falls into this category since I've never made one red cent off of it.
    Last edited by VirtualAce; 09-18-2009 at 10:13 PM.

  14. #89
    Guest Sebastiani's Avatar
    Join Date
    Aug 2001
    Location
    Waterloo, Texas
    Posts
    5,708
    Quote Originally Posted by Bubba View Post
    Most of the people I have come across who abhor C++, adore C#, and make the common mistake of trying to force everyone to use C# were not, in my estimation, extremely strong C++ programmers in the first place. That is not to say there aren't those out there who are strong C++ devs but prefer C#.

    But comparing these is pretty ridiculous. C# has its uses and C++ has its uses. I've also started working with C++/CLI classes so I can interop C++ with C# without using P/Invoke, which, even though it works, is quite clunky.

    Use the right language for the job. But please do not attempt to force all languages to be like one another. I do not use C# for the garbage collection. That does not even concern me since I can very well collect my own garbage in C++. What C# and C++/CLI do offer is simple access to the .NET framework, and C# makes GUI programming so simple it should be banned.

    I love C# and C++, and for interop I'm beginning to like C++/CLI. In my daily duties it is not uncommon for me to work with Lua, ActionScript, C#, C++/CLI, C++ and/or assembly when it isn't possible to view the source of a DLL. Out of all of them the one I dislike the most is ActionScript, but it does have its uses and it is very good at what it was designed for, so long as the developer uses it correctly.

    I've met the 'managed' crowd and my biggest beef with them is they keep trying to give me answers to questions that I'm not asking. I'm not asking how to manage memory, and that is not going to get me to use C# over C++. Managing memory is so basic and fundamental in C++ that I can't believe it is the source of so many debates. There is nothing wrong with managing your own memory. And if you really don't like pointers, or you are one of those 'everything must be a reference' people, then by all means use Boost shared pointers. There are alternatives to manual memory management in C++ via third-party libraries, so I'm not sure why it is such a hot topic. To me it's just not an issue at all.

    Why are we arguing over which is better? All of them together give the developer quite a bit of power and flexibility when developing.
    And just to be clear, I'm not arguing whether or not these languages are useful - certainly they are (GUI programming being an excellent example). It's just that the choices that were made in their implementations kind of puzzle me, honestly. I mean, after you've gotten past all of the marketing hype, you find out pretty quickly that the load hasn't really become much lighter - in fact, in many cases it turns out that the hill just got a lot steeper.

  15. #90
    Registered User
    Join Date
    Dec 2006
    Location
    Canada
    Posts
    3,229
    That's not true. Using adaptive optimization, a Java program could (in theory) perform faster than an equivalent C++ program. In practice this doesn't happen often today, but it could become a more common occurrence in the future. The JIT compiler has been getting better and better over the years, so it's possible that, through adaptive optimization (which the HotSpot JIT compiler already uses), Java will actually be faster than C++ in the future.
    But C++ compilers can still do profile guided optimization, with a profile generated by a representative run. Of course, theoretically that's not as good as JIT re-compilation (optimization), since a "representative run" is not as good as "this particular run", but the gap is a lot smaller. And then C++ has the intrinsic advantage of being native. I doubt Java will be faster than C++ anytime soon.
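    For anyone curious, here is a rough sketch of that profile-guided workflow with GCC (the -fprofile-generate/-fprofile-use flags are GCC's; other compilers have their own equivalents, and the demo program is made up):

    Code:
        // Build, run, and rebuild:
        //   g++ -O2 -fprofile-generate pgo_demo.cpp -o pgo_demo   # instrumented build
        //   ./pgo_demo                                            # the "representative run"
        //   g++ -O2 -fprofile-use pgo_demo.cpp -o pgo_demo        # rebuild using the recorded profile
        // The compiler now knows which branches were hot in that run and can lay out,
        // inline, and unroll accordingly, statically rather than at run time like a JIT.
        #include <cstdio>

        int classify(int x)
        {
            if (x % 100 == 0)       // rare branch during the representative run
                return -1;
            return x * 2;           // hot branch: the profile tells the compiler to favour this path
        }

        int main()
        {
            long long sum = 0;
            for (int i = 0; i < 1000000; ++i)
                sum += classify(i);
            std::printf("%lld\n", sum);
        }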
