That must have been a COM file. Most executable format headers are larger than that.
An effort to create the smallest possible binary that Linux will actually load:
http://www.muppetlabs.com/~breadbox/...ny/teensy.html
Sure, but if you are calling a system library, or one that's very common, then for practical purposes it's going to take up no extra space.
It affects how much of the program fits in the L1 and L2 caches. There is more to system performance than processor speed. A program that runs entirely in the L1 cache will be twice as fast as one that runs entirely in the L2 cache, which will in turn be 4-8 times faster than one that has to continuously pull data from main memory.
Simply being large does not imply that code will not fit in cache. Most programs adhere to the 80/20 rule -- 80% of time is spent executing 20% of the code. If the code which is currently executing is small enough to fit in cache, there is no problem.
Most code in large programs is rarely executed.
Assembly language is the closest you will normally get to the hardware. Although I have done small amounts of assembly language coding before, it has been several years. I couldn't find any of my old x86 assembly code, so here is a small snippet of LC-3 assembly code.

This might sound funny, but what exactly is the Assembler?
Looks pretty ugly, doesn't it? (Although some readers here may enjoy reading it.)

Code:
.ORIG x3000             ; start at address 0x3000
        AND R1, R1, #0    ; clear R1
dtp     TRAP x23          ; input a character
        AND R2, R2, #0    ; clear R2
        ADD R2, R0, #-15  ; check to see if it is a ! input
        ADD R2, R2, #-15  ; do this by subtraction
        ADD R2, R2, #-3   ; and so on
        BRz end           ; if the result is 0, a ! was inputted, end the program
        ADD R3, R0, #-16  ; otherwise, convert from ASCII to int
        ADD R3, R3, #-16  ; etc.
        ADD R3, R3, #-16  ; etc.
        ADD R1, R1, R3    ; add the int to R1 to give us a sum
        BRnzp dtp         ; loop back to get another input
end     HALT              ; end the program
        .END
If you want an in-depth discussion of assembly language, and of how it translates to binary machine code, you should buy some books or read Wikipedia.
These are two nice small COM executables, first one is 30 bytes, second one is 208 bytes, both do amazing things for their size.
Found them from some assembly-related site. Just rename them to .com .
"The Internet treats censorship as damage and routes around it." - John Gilmore
Yeah, but the code that actually runs often is scattered around the process. A function at 0x401000 may call functions at 0x610000, 0x57F000, 0x425000, 0x4B4545, and so on; there are no limits, because so many different functions get run. The many small class-related CBlaBla::Get*** and CBlaBla::Set*** functions are the main thing that makes the code run slower in this case.
Edit: Damn, double post, sorry.
Hello,
Here's my take on it. Back when computers were still predominantly time-shared, the most expensive part was the hardware. Even a few cycles of CPU time, or a few kilobytes of hard disk (if you had one), floppy disk, or RAM, could be very expensive, sometimes prohibitively so. So companies were very willing to pay programmers to take the time to make their programs efficient, both in executable size and in speed.
But, hardware soon became cheaper and cheaper, until now when a gigabyte of RAM or hard-disk space is approaching pennies - or less. How much does a second of CPU time cost? Who measures that anymore? So businesses are much less willing to pay to have programmers save a couple hundred kilobytes of space or a few seconds of speed.
This leads to the modern situation of not even caring how big a program is or how long it takes to run (though the ever-shrinking patience of a user can still dictate run times). Companies like M$ take the attitude of throwing more hardware at the problem instead of fixing the code (hello Vista), since hardware is now much cheaper than a software engineer. Their bottom line is profits.
I was just at a panel discussion the other day where one of the guests talked about Moore's law and how by about 2020, we'll be creating chips at the atomic level, and thus can't go any smaller and Moore's law will be broken. Perhaps then, when newer and more hardware can't be thrown at the problem, companies will once again begin to devote resources to software and more efficiently utilizing the hardware resources that the laws of physics have put a cap on.
Hello,
This question may be a little off topic, but - I have two older assembly books, one from 1998 and the other from 1999, both on assembly language for the IBM PC/Intel-based computer. Would these still be relevant today? Can they teach me how to program in assembly for the current crop of x86 and x86_64 processors? How about for the next generation of CPUs, or for other chip architectures (e.g., Sun T1)?
The consumer may indeed get the rotten end of the deal sometimes. That said, it is still general practice to pursue the best possible performance when coding, whereas disk space is indeed not as important as it used to be; the only real constraint there is the media on which the software is to be distributed.
However, there are exceptions. Not all software is intended for the masses, and some software is mission-critical. There, performance issues are constantly being researched and improved; the cost of not doing so would be monstrous. An interesting example is the Google cluster architecture (PDF document). Although it is not directly stated, you can expect that the software running on those rather unimpressive servers squeezes out every bit of performance and disk space it can. And you can expect that any other software with mission- and performance-critical requirements will be developed with similarly high standards in mind.
There's a whole other world out there.
Originally Posted by brewbuck:
Reimplementing a large system in another language to get a 25% performance boost is nonsense. It would be cheaper to just get a computer which is 25% faster.