Can someone tell me why, on very old computers, a programmer could write up some code and it would be so small compared to today. 500KB?
Printable View
:) Those were the days. Getting PCPro or PCPlus magazines with a 3 1/2-inch floppy disk packed with software! Even prior to that, you could fill a 5.25-inch floppy with heaps of software.
So many factors... programs were simpler, programming languages had fewer features, libraries were simpler and smaller, and operating systems were less demanding. The hardware and the media that carried software at the time were also very limited in storage and processing capacity, which forced programmers to squeeze their code to the best of their abilities.
Because they were very small people with small brains that could only fit small code inside their heads. Seriously tho, your programs can be as bloated as you want to make them. Modern software often takes up more space for many reasons. Ultimately, storage space is not really an issue on computers nowadays, so less time is spent worrying about it. Computers and programs are also way more complex now. There's nothing stopping you from producing tiny programs in C tho.
Oh, I was actually looking at VB. It isn't like some kind of small text file, with your code and stuff.
Gates once said, 300+KB would be enough for anyone...:rolleyes:
There's a resurgence of sorts if you look at mobile phone programming. Very small apps, some bursting with functionality.
Yes, and at that time, it would have taken a senior software engineer's yearly salary for several years to buy 1GB of RAM. The original IBM PC with 256KB (or was it 64KB?) would set you back the best part of $5000. Of course, there was no hard disk in that type of machine; that only came with the XT machines. My first PC had 8MB of RAM, and that was plenty to run OS/2 with the gcc compiler, emacs and a few other apps with no problem.
--
Mats
Code was less abstract and more "to the metal." It was also necessarily less feature-rich, because features take space. Think about what was available: a disk with a simple file system on it, a text-based screen, a keyboard, maybe a printer, maybe a modem.
A lot of the code size of today's applications is from layer upon layer of abstraction, which is designed to make the programmer's life easier in exchange for being bigger and slower. If you want to use the features (maybe only a small subset) of some library, you just link with it without giving it a second thought. In other times the design goals were opposite. With so little RAM available the emphasis was on small, tight code.
It's not that we couldn't write small programs today, we just don't bother anymore.
Sure. The reasons given above explain what makes modern apps several hundred kilobytes (or even megabytes) when the corresponding application in the old days would have been a few kilobytes.
Many different flavours of hardware create a need for a layer of software-to-hardware translation. Modern systems have more memory, so there is no need to strip out code and link statically to make sure the application is small. Writing the code to display "Hello, World" inside a window in MS Windows requires something like (at least) 15-20 lines of rather cryptic code, or 50 lines of less cryptic code. Of course, turning that Hello World program into a bigger application that does some simple word processing will add about the same amount of code as it would to the 4-line console version.
Dealing with graphical UIs [in a nice and modern way] requires graphics resources to be compiled into the application: menu handling, dialog boxes, etc, etc. Where a console application could do that in a few lines of C or Pascal, it now requires a few more lines and a "resource" from a resource file.
C++ also tends to add to the burden by making many layers of things.
But if need be, almost all applications can be stripped into something smaller, simpler, and less flabby. At one time, working on a driver, I saved some 30KB [although not noticeable in the overall size, it's a large amount in itself] simply by putting "static" in front of a bunch of const-initialized data in functions, because the compiler will then only generate the data once, rather than storing the data and generating instructions to copy it into the local variable. Just as one example.
As a summary: applications are bigger now because they can be, on PCs. If you are targeting an embedded system with 32MB of RAM, then a single application that takes up 400MB won't work!
--
Mats
Well, could you make a small 2D graphics program, like a bunch of squares or triangles? One moves around, and another is animated or an AI sort of thing and interacts with the shape that is controlled by you? A small program.
Oh, I meant to say me. I am interested in trying to do something along those lines. A possible shooting game of shapes?
I started another thread elsewhere. In Gaming.
Right, thank you.
So...Um, could someone please direct me to a C compiler thing? Sorry people, I'm not familiar with the terms.
Um, then I can try writing out the traditional er... code thingy.
MinGW is a good (an excellent, actually) compiler / toolchain and I recommend using it with the Code::Blocks IDE (MinGW comes included).
http://www.codeblocks.org/
I still write extremely compact code. The secret: don't use MFC or the STL or any 3rd party libraries if you can help it. It also helped that my first computer only had 2.5K of RAM, and I have spent years programming microcontrollers and PLCs, which don't have a lot of memory to throw at a problem. An additional benefit, of course, is that since my programs are compact they almost always run entirely in the cache, so they run much faster than the competition.
He means this:
Code:
#include <iostream>

int main() {
    std::cout << "STL adds bloat. Lots of it.";
    return 0;
}
gcc version 3.4.2 (mingw-special)
When you strip it, the size is reduced to 260KB.
Right, iostreams are part of the standard library, not the STL. Strictly speaking, the STL is not a library but a set of templates stored in header files.
And secondly, look at a linker map of the resulting executable. I bet you'll find that a lot of that size is taken up by standard runtime support.
Try linking the equivalent C program statically and see how big that is. Then you'll have something approaching a valid comparison (even though it's still not comparing what I'm talking about, which is the effect of the STL on code size).
Code:
#include <algorithm>  // for std::reverse
#include <vector>
#include <string>

int main() {
    std::vector<int> v(10);
    for (int i = 0; i < 10; i++) {
        v[i] = i * (i + i ^ 3);  // note: ^ binds looser than +, so this is i * ((i+i)^3)
    }
    std::reverse(v.begin(), v.end());
    std::string q = "omgomgom";
    return 0;
}
It's less, but still: 58KB stripped.
Code:
#include <stdio.h>

int main() {
    puts("C doesn't");
    return 0;
}
6KB stripped. I don't know how to link the C library statically on Windows, sorry.
I don't see how anyone would think STL would result in larger source code size, if that's what you're talking about.
uh? Try instead adding vector and string functionality to that C code snippet before you try to compare oranges to apples again.
I don't see why that should make much of a difference. stdio is small. It's going to be around the same size compiled with g++.
Quote:
uh? Try instead adding vector and string functionality to that C code snippet before you try to compare oranges to apples again.
The issue is not between C and C++ code. Let's not get into that endless discussion. It's about using the STL versus not using the STL. C doesn't have the STL, so don't use it as a means for comparison.
Well then doodle77 could always compile this:
as C++, and it would come out around the same size. Which would be a valid comparison. And if compiling with C or C++ gives around the same file size, then that means he was originally making a valid comparison too :P
Code:
#include <stdio.h>

int main() {
    puts("C doesn't");
    return 0;
}
Ok then. You agree it's not an issue of stladdsbloat.exe against cdoesnt.exe. That's a start.
Now, translate that to real-life code and examples.
The comparison between C and C++ equivalents of the same task is very relevant, because there is a lot of shared runtime between C and C++. In other words:
code_size(C) = implementation(C) + runtime;
code_size(C++) = implementation(C++) + runtime;
In order to figure out how big "runtime" is, you have to try it both ways. Once you have this figure, you have to subtract it out before making any statements about the code size induced by the STL.
Try linking this statically (it's C, not C++):
On my system, the resulting binary is over 500KB in size. So just because the C++ version is large doesn't necessarily mean anything at all. C can be bloated, too. The problem is not the language itself, but the toolchain.
Code:
#include <stdio.h>

int main()
{
    printf("Hello, world!\n");
    return 0;
}
Excellent brewbuck. But only as a means to an end.
The original argument was about not using the STL to produce smaller code. Which basically means the option to use STL is there.
As such, C is irrelevant to this discussion in the way it was being presented. Particularly the whole STL-Bloats-Code versus C-Doesn't.
I'm using C as a means of discovering the value of a particular variable in the equation. I am not at all trying to compare C vs. C++.
My point is that you cannot use the size of a statically linked binary as proof that the STL produces code bloat, when a simple C program that prints "Hello world" also links statically to 500KB. By that logic, basic C is bloated, and I hope everybody would agree that that's untrue.
It indicates something about static linking and the complexity of modern runtime libraries; it says nothing about the code size of the STL.
do not divert the original discussion!
Earlier software was smaller simply because it had fewer features and was less powerful.
The original discussion is still being... discussed. Abachler's argument and the discussion that followed were pertinent. Your own answer adds nothing new, as what you said is only a repetition of what has been said before.
I understood that, brewbuck. And I appreciated your thoughts on that. It was what was being said before that made my ears perk up; that is, the not-so-subtle innuendo behind stladdsbloat.exe and cdoesnt.exe
Quote:
I'm using C as a means of discovering the value of a particular variable in the equation. I am not at all trying to compare C vs. C++.
Because old-time software had fewer features and was less powerful!
Quote:
Originally Posted by OP
That was the correct answer, so whether I repeated it or not matters less.
But later discussions try to prove whether new software is pure bloat or not! Whether C is better, or C++? Who is producing more bloat?
By the way, nobody is forcing you to use the boost library just to get a to-uppercase function; you can build your own. In the old days it was done like 'a' ^= 32.
But now, with all the Unicode and other stuff, it's bigger, yet better and more powerful.
That's the basic difference between old and new!
Actually, that will switch the case, not convert to upper case. It will also only work on ASCII, or on an encoding that uses the bit with value 32 to signify the case.
Quote:
By the way, nobody is forcing you to use the boost library just to get a to-uppercase function; you can build your own. In the old days it was done like 'a' ^= 32
No, that's not what was being discussed. The comparison with C was for the purposes of factoring out "common bloat" inherent to ALL compiled programs, not just C++.
If you went to a country where every citizen weighed over 500 pounds, it would be weird to point at a particular person and say "That guy's fat!" They're ALL fat.
A better gauge of code size is the size of the object files, not the final linked binary. You can't blame the runtime for introducing a ton of code when you choose to link statically; that's why people don't link statically anymore.
I mean, when I write 3000 lines of code it compiles to 150KB. I know a lot of my preferences and suggestions cause a lot of eye rolling, but I get the results and the performance. So what if it takes me 3 weeks to implement something another programmer can do in 2 days; my code runs 10 times faster than hers and makes some projects feasible that otherwise wouldn't be. You use cout, bam, code bloat, because the lib just implements cout using printf. You use string instead of BYTE*, bam, code bloat, because the compiler adds all the functions that you don't use. You use MFC, bam, code bloat, because etc etc etc. Each little thing might only add 20KB here and there, but it adds up.
Any chance of a 10KB program?:D
Write something in assembler?
It's also worth mentioning that 64Kb (kilobits) is smaller than 10KB (kilobytes). Watch your cases.
This has been mentioned before, but to give you an idea of how small modern programs can get... here is a 98KB 3D FPS that is completely procedural.
http://www.vgpro.com/file/18896_kkrieger-beta.zip.html
I mention this also because I love this 2006 quote. :)
Quote:
Needs high end user PC to play. Minimum specs are: 128MB graphics Card with pixel shader 1.3, 512MB ram, and Athlon 1.5Ghz
This might sound funny, but what exactly is the Assembler?
I was thinking a very basic game with no levels or something like that.
The smallest program I ever wrote was 12 bytes. It changed the file attributes on a file using a DOS INT 21h call. It even had keyboard input to select the attribute mask.
An assembler is a tool used to write directly in assembly and/or compile assembly. VS includes an inline assembler. It lets you write directly in processor-dependent mnemonics. Requires low-level knowledge of the processor itself.
You don't have to use assembler to create programs < 10K if you dynamically link your libraries, like brewbuck said. There's a load of 4K graphics progs here. Quite a few are done in C. ASM can go really small; some of the 256-byte stuff on there is really impressive. Stuff that drops below that, well, generally can't do much really.
That must have been a COM file. Most executable format headers are larger than that.
An effort to create the smallest possible binary that Linux will actually load:
http://www.muppetlabs.com/~breadbox/...ny/teensy.html
Sure, but if you are calling a system library or one that's very common, then for practical purposes it takes up no extra space.
Does the size affect processor speed?
It affects how much of the program is in the L1 and L2 cache. There is more to system performance than processor speed. A program that runs entirely in the L1 cache will be twice as fast as one that runs entirely in the L2 cache, which will be 4-8 times faster than one that has to continuously pull data from main memory.
Simply being large does not imply that code will not go in cache. Most programs adhere to the 80/20 rule -- 80% of time is spent executing 20% of the code. If the code which is currently executing is small enough to fit in cache, there is no problem.
Most code in large programs is rarely executed.
Assembly language is the closest you will normally get to the hardware. Although I have done small amounts of assembly language coding before, it has been several years. I couldn't find any of my old x86 assembly code, so here is a small snippet of LC-3 assembly code:
Quote:
This might sound funny, but what exactly is the Assembler?
Looks pretty ugly, doesn't it? (Although some readers here may enjoy reading it :D )
Code:
.ORIG x3000 ; start at address 0x3000
AND R1, R1, #0 ; clear R1
dtp TRAP x23 ; input a character
AND R2, R2, #0 ; clear R2
ADD R2, R0, #-15 ; check to see if it is a ! input
ADD R2, R2, #-15 ; do this by subtraction
ADD R2, R2, #-3 ; and so on
BRz end ; if the result is 0, a ! was inputted, end the program
ADD R3, R0, #-16 ; otherwise, convert from ASCII to int
ADD R3, R3, #-16 ; etc.
ADD R3, R3, #-16 ; etc.
ADD R1, R1, R3 ; Add the int to R1 to give us a sum
BRnzp dtp ; loop back to get another input
end HALT ; end the program
.end
If you want an in-depth discussion of assembly language and how it translates to binary machine code, you should buy some books or read Wikipedia.
These are two nice small COM executables; the first one is 30 bytes, the second one is 208 bytes. Both do amazing things for their size.
Found them on some assembly-related site. Just rename them to .com . :)
Yeah, but the code that actually runs often is scattered around the process. A function at 0x401000 may often call functions at 0x610000, 0x57F000, 0x425000, 0x4B4545, etc.; there are no limits, there are just too many functions that run. The many small class-related CBlaBla::Get*** and CBlaBla::Set*** functions are the basic things that make the code run slower in this case.
Edit: Damn, double post, sorry. :(
Hello,
Here's my take on it. Back when computers were still predominantly time-shared, the most expensive part was the hardware. Even a few cycles of CPU time or a few kilobytes of hard disk (if you had one) or floppy disk or even RAM was very expensive, sometimes prohibitively so. So companies were very willing to pay programmers to take the time to make their programs very efficient, both in compiled size and execution speed.
But, hardware soon became cheaper and cheaper, until now when a gigabyte of RAM or hard-disk space is approaching pennies - or less. How much does a second of CPU time cost? Who measures that anymore? So businesses are much less willing to pay to have programmers save a couple hundred kilobytes of space or a few seconds of speed.
This ends in the modern situation of not even caring how big a program is or how long it takes to run (though the ever shrinking patience of a user to wait for a program can dictate run times). Companies like M$ take the attitude of throwing more hardware at it instead of fixing the code (hello Vista), since hardware is much cheaper now than a software engineer. Their bottom line is profits.
I was just at a panel discussion the other day where one of the guests talked about Moore's law and how by about 2020, we'll be creating chips at the atomic level, and thus can't go any smaller and Moore's law will be broken. Perhaps then, when newer and more hardware can't be thrown at the problem, companies will once again begin to devote resources to software and more efficiently utilizing the hardware resources that the laws of physics have put a cap on.
Hello,
This question may be a little off topic, but I have two older assembly books, one from 1998 and the other from 1999, both on assembly language for the IBM PC/Intel-based computer. Would these still be relevant today? Can they teach me how to program in assembly for the current crop of x86 and x86_64 processors? How about for the next generation of CPUs, or for other chip architectures (e.g., Sun T1)?
The consumer may indeed get the rotten end of the deal sometimes, although it has to be said that it is still general practice to procure the best possible performance when coding, whereas disk space is indeed not as important as it used to be, with the only controlling factor being the media on which the software is planned to be distributed.
However, there are exceptions. Not all software is intended for the masses, and some software is mission-critical. Here, performance issues are constantly being researched and improved; the costs of not doing so would be monstrous. An interesting example is the Google cluster architecture (PDF document). Although it is not directly stated, you can expect that the software running on those rather unimpressive servers is squeezing out every bit of performance and disk space it can. And you can expect that any other software meeting mission- and performance-critical requirements is going to be developed with high standards in mind.
There's a whole other world out there.