Your argument makes little sense - the implicit assumption behind it seems to be that all 'real world programming' involves writing high-performance software where said 'performance' can 'make or break the application', and that is simply not reality. A lot of software doesn't need to worry about squeezing out raw clock cycles, and for that software the gains from abstracting the problem with higher-level programming languages and tools increase dramatically, which makes them the better choice from almost every logical, economic, and business standpoint.
Second, your assertion is completely out of touch with real-world programming, where performance can make or break an application. By abstracting away that 'unnecessary' work, you lose sight of the fact that how you do something is often almost as important as what you do. This is specifically what the poster you quoted was saying. Even if you can come up with an elegant solution that is completely abstracted from the hardware and 100% portable, if it takes 100 years - or in some cases 100ms - to run, it is a failure.
I'm not sure what point you're really getting at, to be honest - your argument almost always seems to boil down to your own authority, that is, "I write software where high performance is key, and therefore the tools I use for it are just obviously better than all the others for every possible domain." The world is simply not that small.
FYI, I work on high-performance data backup software, and we have a lot of code written in C++ (and a lot of low-level code, ranging from network protocols to kernel drivers). I am aware that speed is of very high importance in a lot of cases. But in the larger scheme of things, there are more important considerations than raw clock speed when writing software.
What 'things'? Notation? Garbage collection? using blocks? How are they 'clumsy' and 'unnecessary'? You will have to elaborate on exactly what you think makes them 'clumsy', and when you do, I ask that you also consider what makes things like malloc/free any less clumsy or error-prone.
Except that those things are not inherent to languages such as C# and Java, IMO. Rather, those languages have a tendency to weigh down the programmer with clumsy and unnecessary constructs.
That reasoning seems a little short-sighted - a motivation for GC was to relieve the programmer of managing memory by hand, because programmers tended to get it wrong a lot.
No. We have GC because history has shown that the average shmoe tends to forget to release resources properly.
Your argument seems to be that GC was invented as a 'crutch' to help along foolish people. It was not: I would gladly have more of my work done by the machine, because it allows me to focus on my actual problem more intensely.
In any case, if so many people got this wrong that it led to GC, how can you be sure they were all fools who simply can't code? Maybe it has something to do with the fact that managing memory manually can get very difficult on larger projects, and usually has little to do with the problem you're actually solving. I'm not convinced GC was made for 'fools' so much as for the practical reason of alleviating workload, but you're free to disagree.
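To make that concrete, here is a minimal sketch (hypothetical function names, not taken from anyone's actual code) of the kind of error-path leak that manual memory management invites, and which RAII in C++ - or a garbage collector in C# and Java - turns into a non-issue:

    #include <cstddef>
    #include <cstdio>
    #include <cstdlib>
    #include <memory>

    // Manual management: every exit path has to remember to free the buffer.
    bool process_record_manual(const char *path) {
        char *buf = static_cast<char *>(std::malloc(4096));
        if (buf == nullptr) return false;

        std::FILE *f = std::fopen(path, "rb");
        if (f == nullptr) {
            std::free(buf);   // easy to forget on an error path -- a silent leak
            return false;
        }

        std::size_t n = std::fread(buf, 1, 4096, f);
        std::fclose(f);
        std::free(buf);
        return n > 0;
    }

    // RAII (and, analogously, a garbage collector) removes that whole class of
    // mistake: the buffer is released on every path without extra bookkeeping.
    bool process_record_raii(const char *path) {
        auto buf = std::make_unique<char[]>(4096);
        std::FILE *f = std::fopen(path, "rb");
        if (f == nullptr) return false;
        std::size_t n = std::fread(buf.get(), 1, 4096, f);
        std::fclose(f);
        return n > 0;
    }

None of this says the manual version is impossible to get right, only that the bookkeeping grows with the number of exit paths, which is exactly the workload GC was meant to take off the programmer's plate.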
We use programming languages to solve problems. New problems may require new programming languages, because the model an existing language embodies may not be a very good fit for the problem domain at hand.
And, well, a proliferation of programming languages quite simply because everyone has their own druthers.
I promise you, it is not just "because" (at least not all of the time; anybody can hack up a simple interpreter for some uninteresting language). They're not doing it for giggles because 'the greatest programming languages ever have already been invented' (which is essentially the implicit argument behind your statement, if you think about it for a moment). New languages are created because their designers have a goal in mind and a problem to solve. You cannot tell me that the creators of Erlang invented it 'just because' - they invented it because they needed distributed, fault-tolerant software for their telephony systems back in the early 90s. No other programming language came close to offering what they wanted, so they built a language specifically for that paradigm. They didn't do it because they preferred building their own stuff over using what was already there; they did it because there was no such stuff to begin with.
If you persist in the thought that people 'invent' these things only in the interest of tomfoolery, killing time, or "because they can," I can only conclude that you are a fool who has never truly stepped outside the bounds in which you currently reside as a software developer. And I'm sorry for you about that, because there's a wonderful amount of stuff out there worth exploring, each with its own merits.
And nobody is arguing against that. I am all for education about the machines we as developers use, because that information is very important and relevant to our jobs. What I'm arguing against is the premise that Java/C# make you a 'bad developer' because you're "away from the machine", or that they "inhibit you from being a good programmer", or something along those lines. It's important to know the foundations upon which your technology is built. But that does not mean you should have to needlessly express those foundations all the time, when they are irrelevant to your actual problem and the goal you have in mind.
So my point is that when you know it all depends on you to get things working, you're much more likely to gain some real insight (and expertise).