Thread: 64kb

  1. #46
    Officially An Architect brewbuck's Avatar
    Join Date
    Mar 2007
    Location
    Portland, OR
    Posts
    7,396
    Quote Originally Posted by abachler View Post
    The smallest program I ever wrote was 12 bytes.
    That must have been a COM file. Most executable format headers are larger than that.

    An effort to create the smallest possible binary that Linux will actually load:

    http://www.muppetlabs.com/~breadbox/software/tiny/teensy.html
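    For a rough idea of where that article starts, here is a minimal sketch, assuming 32-bit Linux and gcc (add -m32 on a 64-bit machine): skip the C runtime entirely and make the exit system call yourself. The file name and the exit status 42 are just for illustration.

    Code:
    /* tiny.c - no C runtime at all, just a raw exit system call */
    void _start(void)
    {
        __asm__ volatile ("movl $1, %eax\n\t"   /* syscall 1 = exit     */
                          "movl $42, %ebx\n\t"  /* exit status 42       */
                          "int $0x80");         /* trap into the kernel */
    }
    Built with gcc -nostdlib -s -o tiny tiny.c, this is already far smaller than a normal hello-world; the article then gets the rest of the way down (to 45 bytes) by writing the ELF header itself by hand.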

  2. #47
    Malum in se abachler's Avatar
    Join Date
    Apr 2007
    Posts
    3,195
    Quote Originally Posted by mike_g View Post
    You don't have to use assembler to create programs < 10k if you dynamically link your libraries like brewbuck said. There's a load of 4k graphics progs here. Quite a few are done in C. ASM can go really small; some of the 256-byte stuff on there is really impressive. Stuff that drops below that generally can't do much, really.
    I sorta count any loaded libraries in the total; otherwise you could just make the exe do nothing but load a DLL that has all the code in it.

    And yes, the 12-byte program was a COM file.
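    Worth spelling out why that's even possible: a COM file has no header at all. DOS copies the raw bytes to offset 0x100 and jumps to them, so a do-nothing program is exactly two bytes. A hypothetical sketch that writes one out from C (the file name is arbitrary):

    Code:
    /* make_tiny.c - emit the complete machine code of a do-nothing
       DOS .COM program: a single "int 0x20" (terminate) instruction */
    #include <stdio.h>

    int main(void)
    {
        unsigned char code[] = { 0xCD, 0x20 };  /* int 20h = return to DOS */
        FILE *f = fopen("tiny.com", "wb");
        if (!f) return 1;
        fwrite(code, 1, sizeof code, f);
        fclose(f);
        return 0;
    }
    That's a 2-byte .com; a 12-byte one leaves ten bytes for the program to actually do something, like printing a character through int 21h.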

  3. #48
    Dr Dipshi++ mike_g's Avatar
    Join Date
    Oct 2006
    Location
    On me hyperplane
    Posts
    1,218
    Sure, but if you are calling a system library, or one that's very common, then for practical purposes it's going to take up no extra space.

  4. #49
    Registered User AcerN30's Avatar
    Join Date
    Apr 2008
    Location
    England
    Posts
    32
    Does the size affect processor speed?

  5. #50
    Malum in se abachler's Avatar
    Join Date
    Apr 2007
    Posts
    3,195
    It affects how much of the program fits in the L1 and L2 cache. There is more to system performance than processor speed. A program that runs entirely in the L1 cache will be twice as fast as one that runs entirely in the L2 cache, which will be 4-8 times faster than one that has to continuously pull data from main memory.

  6. #51
    Officially An Architect brewbuck's Avatar
    Join Date
    Mar 2007
    Location
    Portland, OR
    Posts
    7,396
    Quote Originally Posted by abachler View Post
    It affects how much of the program fits in the L1 and L2 cache. There is more to system performance than processor speed. A program that runs entirely in the L1 cache will be twice as fast as one that runs entirely in the L2 cache, which will be 4-8 times faster than one that has to continuously pull data from main memory.
    Simply being large does not imply that code will not go in cache. Most programs adhere to the 80/20 rule -- 80% of time is spent executing 20% of the code. If the code which is currently executing is small enough to fit in cache, there is no problem.

    Most code in large programs is rarely executed.
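    The effect is easy to demonstrate with data accesses; here is a rough sketch (the array size and stride are arbitrary, and it assumes 64-byte cache lines). Both loops read exactly the same 64 MB, but the second uses one int per cache line fetched instead of sixteen, so it runs several times slower:

    Code:
    /* cache_demo.c - a rough sketch of cache locality */
    #include <stdio.h>
    #include <time.h>

    #define N (1 << 24)     /* 16M ints = 64 MB, far bigger than any cache */
    static int data[N];

    int main(void)
    {
        long sum = 0;
        clock_t t0 = clock();
        for (int i = 0; i < N; i++)          /* sequential: ~16 ints per line */
            sum += data[i];
        clock_t t1 = clock();
        for (int i = 0; i < 64; i++)         /* same reads, but 256 bytes    */
            for (int j = i; j < N; j += 64)  /* apart, so nearly every       */
                sum += data[j];              /* access misses the cache      */
        clock_t t2 = clock();
        printf("sequential: %.2fs  strided: %.2fs  (sum=%ld)\n",
               (double)(t1 - t0) / CLOCKS_PER_SEC,
               (double)(t2 - t1) / CLOCKS_PER_SEC, sum);
        return 0;
    }
    The same argument applies to instruction fetch: as long as the hot 20% of the code is compact, total image size hardly matters.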

  7. #52
    l'Anziano DavidP's Avatar
    Join Date
    Aug 2001
    Location
    Plano, Texas, United States
    Posts
    2,743
    This might sound funny, but what exactly is the Assembler?
    Assembly language is the closest you will normally get to the hardware. Although I have done small amounts of assembly language coding before, it has been several years. I couldn't find any of my old x86 Assembly code, so here is a small snippet of LC-3 Assembly code:

    Code:
    .ORIG x3000		; start at address 0x3000
    AND R1, R1, #0		; clear R1
    dtp TRAP x23		; input a character
    AND R2, R2, #0		; clear R2
    ADD R2, R0, #-15	; check to see if it is a ! input
    ADD R2, R2, #-15	; do this by subtraction
    ADD R2, R2, #-3		; and so on
    BRz end			; if the result is 0, a '!' was entered; end the program
    ADD R3, R0, #-16	; otherwise, convert from ASCII to int
    ADD R3, R3, #-16	; etc.
    ADD R3, R3, #-16	; etc.
    ADD R1, R1, R3		; Add the int to R1 to give us a sum
    BRnzp dtp		; loop back to get another input
    end HALT		; end the program
    .end
    Looks pretty ugly, doesn't it? (Although some readers here may enjoy reading it.)

    If you want an in-depth discussion of assembly language, and of how it translates to binary machine code, you should buy some books or read Wikipedia.
    My Website

    "Circular logic is good because it is."

  8. #53
    Reverse Engineer maxorator's Avatar
    Join Date
    Aug 2005
    Location
    Estonia
    Posts
    2,318
    These are two nice small COM executables: the first is 30 bytes, the second 208 bytes, and both do amazing things for their size.

    Found them on some assembly-related site. Just rename them to .com.
    "The Internet treats censorship as damage and routes around it." - John Gilmore

  9. #54
    Reverse Engineer maxorator's Avatar
    Join Date
    Aug 2005
    Location
    Estonia
    Posts
    2,318
    Quote Originally Posted by brewbuck View Post
    Simply being large does not imply that code will not go in cache. Most programs adhere to the 80/20 rule -- 80% of time is spent executing 20% of the code. If the code which is currently executing is small enough to fit in cache, there is no problem.

    Most code in large programs is rarely executed.
    Yeah, but the code that actually runs often is scattered around the process. A function at 0x401000 may call functions at 0x610000, 0x57F000, 0x425000, 0x4B4545, etc.; there is no limit, and there are just too many functions being run. The many small class-related CBlaBla::Get*** and CBlaBla::Set*** functions are the main thing that makes code slower in this case.
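    To make that concrete with a hypothetical accessor (written C-style rather than C++, names invented): the usual fix for those tiny Get/Set functions is to define them inline in the header, so the call to a distant address disappears entirely.

    Code:
    /* a sketch: an inline accessor compiles to a plain load at the
       call site, instead of a call to a function far away in the image */
    struct blabla { int value; };

    static inline int blabla_get_value(const struct blabla *b)
    {
        return b->value;
    }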

    Edit: Damn, double post, sorry.
    "The Internet treats censorship as damage and routes around it." - John Gilmore

  10. #55
    Registered User
    Join Date
    Apr 2008
    Posts
    7
    Hello,

    Here's my take on it. Back when computers were still predominantly time-shared, the most expensive part was the hardware. Even a few cycles of CPU time or a few kilobytes of hard disk (if you had one), floppy disk, or even RAM were very expensive, sometimes prohibitively so. So companies were very willing to pay programmers to take the time to make their programs efficient, in both executable size and speed.

    But hardware soon became cheaper and cheaper, until now, when a gigabyte of RAM or hard-disk space costs pennies - or less. How much does a second of CPU time cost? Who measures that anymore? So businesses are much less willing to pay to have programmers save a couple hundred kilobytes of space or a few seconds of speed.

    This ends in the modern situation of not caring how big a program is or how long it takes to run (though users' ever-shrinking patience can still dictate run times). Companies like M$ take the attitude of throwing more hardware at the problem instead of fixing the code (hello, Vista), since hardware is now much cheaper than a software engineer's time. Their bottom line is profit.

    I was just at a panel discussion the other day where one of the guests talked about Moore's law: by about 2020 we'll be creating chips at the atomic level, at which point they can't get any smaller and Moore's law will break down. Perhaps then, when newer and more hardware can't be thrown at the problem, companies will once again devote resources to software, and to squeezing more out of the hardware that the laws of physics have capped.

  11. #56
    Registered User
    Join Date
    Apr 2008
    Posts
    7
    Hello,

    Quote Originally Posted by DavidP View Post
    Assembly language is the closest you will normally get to the hardware. Although I have done small amounts of assembly language coding before, it has been several years. I couldn't find any of my old x86 Assembly code, so here is a small snippet of LC-3 Assembly code:
    This question may be a little off topic, but I have two older assembly books, one from 1998 and the other from 1999, both on assembly language for IBM PC/Intel-based computers. Would these still be relevant today? Can they teach me how to program in assembly for the current crop of x86 and x86_64 processors? How about for the next generation of CPUs, or for other chip architectures (e.g., the Sun T1)?

  12. #57
    (?<!re)tired Mario F.'s Avatar
    Join Date
    May 2006
    Location
    Ireland
    Posts
    8,446
    Quote Originally Posted by JMJ_coder View Post
    So businesses are much less willing to pay to have programmers save a couple hundred kilobytes of space or a few seconds of speed.
    The consumer may indeed get the rotten end of the deal sometimes, but it has to be said that it is still general practice to go after the best possible performance when coding. Disk space, on the other hand, is not as important as it used to be; the one real constraint left is the media on which the software is planned to be distributed.

    However, there are exceptions. Not all software is intended for the masses, and some software is mission critical. There, performance issues are constantly being researched and improved; the cost of not doing so would be monstrous. An interesting example is the Google cluster architecture (PDF document). Although it isn't directly mentioned, you can expect that the software running on those rather unimpressive servers squeezes out every bit of performance and disk space it can. And you can expect any other software with mission-critical or performance-critical requirements to be developed to similarly high standards.

    There's a whole other world out there.
    Originally Posted by brewbuck:
    Reimplementing a large system in another language to get a 25% performance boost is nonsense. It would be cheaper to just get a computer which is 25% faster.
