Thread: "basic (possibly insane) question about compilers" — Tech Board forums, Community Boards.
This forum is good, but there's too much distraction and bad language for a technical forum. I quit. If possible, somebody delete my account.
OK, that's all, folks.
When learning C, read the page, type up the examples, and play with them; change them, break them, improve them until you understand what is going on... then move on to the next page...
For information about hardware standards such as USB, x86, IA-64, SATA... there are hundreds of them... locate and download the standards documents themselves and read them.
We all get stuff wrong... I do it all the time... but that is part of how we learn.
Certainly making the occasional mistake does not make one stupid...
Stupid happens when you don't learn from your mistakes.
C source code, if written to cross-platform standards (e.g. C99), can be recompiled on different platforms (Windows, Linux, Mac, etc.), but you aren't going to run a Windows program on Linux or a Linux program on Windows without adding an OS-specific compatibility layer to do so. Even on 64-bit Windows, 32-bit Windows code is run through a compatibility layer called WOW64.
This is what the Java runtime is... it's an OS-specific interpreter for compiled Java programs. It takes standard Java bytecode (look it up) and translates it for the underlying OS and CPU...
Hmm ... I don't think I agree with quite a bit of the stuff posted here.
Let's see. A compiler compiles code to a specific target. A target is typically a combination of instruction set (+ extensions) and executable format, with the ABI of whatever libraries you use mixed in for good measure. A compiled executable can thus run wherever there is an environment compatible with its target.
Let's explore in greater detail what that means.
Executable format is basically: Linux (and others) use ELF (it used to be a.out), Windows (and OS/2) use PE, and Mac uses Mach-O. I will not go into these any further.
Instruction set means what the CPU (or interpreting software) understands. There are some fundamentally different ISAs, like the x86 family, ARM (iPhone and most other smart phones), IA64, and MIPS (some game consoles, e.g. PSX I think). Most ISAs also have optional parts, e.g. Thumb for ARM, or the various SSE sets for x64.
Any CPU may understand more than one ISA, actually. For example, early Itanium CPUs had hardware support for executing 32-bit x86 code (later Itanium 2 systems replaced this with a software translation layer). x64 CPUs still have legacy modes so that they can execute 16-bit, 32-bit or 64-bit code. More specifically, an x64 CPU can be in legacy mode, where it can execute 16-bit or 32-bit code, or in long mode, where it can execute 32-bit or 64-bit code. Switching between the two big modes is so involved that in practice the OS picks one at boot and stays there, but switching between the two sub-modes is as easy as a context switch, which allows 32-bit and 64-bit apps to run side by side in a 64-bit OS.
In addition, you can emulate CPUs in software, if the host CPU is fast enough. That's what all the console emulators do (e.g. a GameBoy emulator emulates the GameBoy's Z80-like CPU), but also some other applications: Apple's Rosetta translated PowerPC code so that legacy Mac apps could run on Intel Macs, and PearPC emulates a PowerPC machine well enough to boot Mac OS X on ordinary PCs. DOSBox emulates a 16-bit x86 PC. Even virtualization hosts, such as VMware, do a bit of emulation.
A compiler must target a specific instruction set. Nowadays, a typical target would be x86-64 code. But then you might want to use those fancy SSE3 and SSE4 instructions too. You can tell the compiler to use these as well, but then your code won't run on older 64-bit CPUs anymore. You have various options. You can forget about these instructions, sticking to the base instruction set. You can require that a new enough CPU is used. Or you can compile multiple versions, one with these instructions and one without, and then write code to detect the CPU at runtime (x64 has the handy CPUID instruction for this) and choose the correct version of the code. Some compilers can generate the switch automatically.
That leaves the ABI. The ABI is basically: "what libraries/directly accessible hardware does the target environment have, and how do I access them?" Back in DOS times, many things were done by directly accessing hardware, reading and writing I/O registers. If those registers change, that breaks the ABI. Even though the OS can understand the executable, and the CPU can run the code, it won't do the right thing.
Nowadays hardware access is limited to the OS, and you just access libraries provided by the OS. Of course, the library you want must actually exist, contain the functions you want to call, and use the calling convention you expect. That last part is important: for example, Windows and Linux typically have different rules for how arguments are passed to functions. Get this wrong, and your code will misbehave. An x64 CPU has no problem executing 32-bit code, but you also need the 32-bit libraries. (That, by the way, is 99% of what WoW64 (Windows on Windows64) does - it doesn't actually emulate anything. It is just a bunch of 32-bit libraries that call the 64-bit OS routines while translating arguments.)
Now, the WINE project does two things: First, it implements a PE loader for *nix, which enables Windows executables to be loaded, thus solving the executable format interoperability. Second, it provides libraries ABI-compatible with the Windows libraries, which enables Windows executables to run, thus solving the ABI interoperability. Note that WINE doesn't emulate a CPU: you have to run it on an x86 CPU. This is why WINE is not an emulator: it doesn't provide a complete environment, only parts.
Finally, bytecodes like JVM bytecode (Java compilation target) and MSIL (C# compilation target) are basically hypothetical CPU ISAs. The specifications also specify library calling conventions and executable formats (Java .class files and JAR archives, .Net assemblies), thus completing the abstract execution environment. Virtual machines, then, are basically emulators for these environments. Since emulators provide an abstraction between the host and the emulated platform, Java and C# programs can in principle run anywhere after being compiled. Because the abstract execution environment is defined in a way that allows high-performance VMs, this is even reasonably fast, unlike emulators for real CPUs, which tend to eat up a lot of performance. (Of course, such emulators also typically have to emulate a bunch of other hardware.)
All the buzzt!
"There is not now, nor has there ever been, nor will there ever be, any programming language in which it is the least bit difficult to write bad code."
- Flon's Law