Remember, this is just ONE compiler; your mileage may vary:
Code:
$ g++ -static foo.cpp
$ ls -l a.out
-rwxr-xr-x 1 forum forum 1260949 2010-03-05 17:53 a.out
$ strip a.out
$ ls -l a.out
-rwxr-xr-x 1 forum forum 1029308 2010-03-05 17:54 a.out
$
$ gcc -static foo.c
$ ls -l a.out
-rwxr-xr-x 1 forum forum 584420 2010-03-05 17:55 a.out
$ strip a.out
$ ls -l a.out
-rwxr-xr-x 1 forum forum 521428 2010-03-05 17:55 a.out
$
$ g++ foo.cpp
$ ls -l a.out
-rwxr-xr-x 1 forum forum 9787 2010-03-05 17:55 a.out
$ gcc foo.c
$ ls -l a.out
-rwxr-xr-x 1 forum forum 9143 2010-03-05 17:55 a.out
There's a difference between static and dynamic linking, and another difference between keeping and stripping the symbol tables.
> Well, I more meant that if I so-much as #include <iostream>, my few-kilobytes executable suddenly booms to several-hundred-kilobytes in size.
The benefit of C++ shows in larger programs. Yes, there is a big up-front hit, but you'll find there are a lot of goodies to play with, and the incremental cost of adding more functionality (built on the existing libraries) is a much shallower gradient.

C starts with a lot less. Every bit of functionality you add, you have to build yourself, so each delta costs more: more code to write, debug and test.
With RAM and disks measured in GB, why stress over +/- 1MB?
Customers don't care that it's 100K bigger.
They WILL care if it has more bugs and is 3 months late!
If YOU still care at the end, then you can tinker with micro-optimisations with the foundation of a known working program to test against. If you add bugs at that stage, all you lost was the time to add the bugs.