You do realize this is a circular definition, right? Compilers go from source -> target, end of story. Check Wikipedia if you wish; check anywhere. The idea is all that matters, and what you're talking about are useless details. Whether the target is actual assembly code, or the compiler has a backend that compiles to another language such as C (many high-level languages, like Chicken Scheme, compile to C), it is still, strictly speaking, a translation. Therefore, all compilers are inherently translators, and all 'programming language translators' are inherently compilers.
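To make the source -> target point concrete, here is a toy "compiler" whose target happens to be C. This is a hypothetical sketch for illustration only; the tiny expression language and the emitted C are made up here and are not what Chicken Scheme or any real compiler produces:

```python
# Toy "compiler" from a tiny expression language (nested tuples) to C.
# Illustrative only: the grammar and emitted code are invented for this example.

def compile_expr(expr):
    """Translate a nested tuple like ('+', 1, ('*', 2, 3)) into a C expression."""
    if isinstance(expr, int):
        return str(expr)
    op, lhs, rhs = expr
    return f"({compile_expr(lhs)} {op} {compile_expr(rhs)})"

def compile_program(expr):
    """Wrap the translated expression in a complete C program."""
    body = compile_expr(expr)
    return ('#include <stdio.h>\n'
            f'int main(void) {{ printf("%d\\n", {body}); return 0; }}\n')

print(compile_program(('+', 1, ('*', 2, 3))))
```

The output is C source, not machine code, yet by the source -> target definition it is every bit a compiler; nothing about the translation changes if the backend emitted assembly instead.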
No, a compiler transforms it from a programming language to executable code. What you described is a translator, not a compiler.
You're doing nothing but nitpicking over words, and you haven't in any way shown that the definition I gave of a compiler is false. How do the semantics and definition of this 'black box program' change based on what the target output is? It always goes from source -> target, and nothing strictly specifies that the target must be machine code.
Do you have something to back this up (honestly)? I've never heard of this approach, and I'm pretty skeptical, unless you're just generalizing it a bit, in which case I think I might get what you mean.
It used to be that all C++ code was translated into C code and then compiled (the original Cfront compiler worked this way), until compilers could compile C++ code directly.
Bringing up the speed of the approach completely misses the point I was trying to make: I was only pointing out that the JVM takes that Java bytecode and, on the fly, compiles it into suitable machine code. I never mentioned speed, although I might have said the word 'optimized.' It's irrelevant either way.
Interpreted languages are inherently slower, even if there is a one-to-one relationship between the bytecode and the machine language.
Unfounded, useless statement.
This means that, at most, an interpreted application will be half as fast as native code, even with a perfectly optimized interpreter.
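The overhead this "half as fast" argument rests on is easiest to see in a dispatch loop. Below is a minimal stack-based bytecode interpreter (a sketch, not any real VM): every useful operation carries an extra fetch-and-decode step on top of the work itself, which is the per-instruction cost an interpreter pays and native code does not:

```python
# Minimal stack-based bytecode interpreter (illustrative sketch, not a real VM).
# Each loop iteration spends time on fetch + decode *in addition to* the actual
# operation, which is the dispatch overhead being discussed above.
PUSH, ADD, MUL, HALT = range(4)

def run(bytecode):
    stack, pc = [], 0
    while True:
        op = bytecode[pc]                  # fetch
        pc += 1
        if op == PUSH:                     # decode + execute
            stack.append(bytecode[pc])
            pc += 1
        elif op == ADD:
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
        elif op == MUL:
            b, a = stack.pop(), stack.pop()
            stack.append(a * b)
        elif op == HALT:
            return stack.pop()

# Evaluate (1 + 2) * 3
print(run([PUSH, 1, PUSH, 2, ADD, PUSH, 3, MUL, HALT]))  # prints 9
```

Even if each opcode mapped one-to-one onto a machine instruction, the fetch and the branch on `op` still run for every instruction, so the loop does at least twice the work of straight-line native code.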
This - again - is completely orthogonal and utterly irrelevant to the point I was making.
I/O will generally be almost as fast, for technical reasons I won't go into here. Because computers are so fast, and because most .NET applications are heavily I/O bound (internet bandwidth limitations), the perceived speed difference will be negligible.