Am I the only one who still uses small bit-width variables when I know for a fact that the application will only run on 32-bit systems? I know that most of this stuff is getting the hell padded out of it, yet it just seems horrible to occupy 32 bits of memory when I only need maybe 3 of them.
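(For what it's worth, here's a rough sketch in C of what I mean about the padding. The names are made up, but the idea is that the bit-field version declares only 8 bits' worth of data and the compiler will usually round the struct up to a full 32-bit word anyway, while the plain version eats three whole words.)

#include <stdio.h>

/* Made-up example: three small fields packed into bit-fields vs. plain ints.
   On most 32-bit compilers the packed struct still rounds up to one full
   machine word, so you get 4 bytes instead of 12, not 1 byte instead of 12. */
struct packed_flags {
    unsigned int mode  : 3;   /* only ever holds 0-7  */
    unsigned int dirty : 1;   /* boolean flag         */
    unsigned int level : 4;   /* small range, 0-15    */
};

struct plain_flags {
    unsigned int mode;
    unsigned int dirty;
    unsigned int level;
};

int main(void)
{
    printf("packed: %lu bytes\n", (unsigned long)sizeof(struct packed_flags));
    printf("plain:  %lu bytes\n", (unsigned long)sizeof(struct plain_flags));
    return 0;
}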
Likewise, I conserve memory everywhere else I come across the opportunity. I have on occasion pored through my classes searching for memory-saving possibilities. And yet, the average computer in the store today has 128 MB or so of RAM. It's not like shaving a few K off is speeding allocation up to any noticeable extent. But it's just *drum roll*... good coding practice.
Which brings me to many other points. So many of the things that we (well, _some_ of us) think of as standard practice are often irrelevant on the systems they will actually run on. For instance: I avoid many functions due to their slowness or lack of optimization, yet I continue to avoid them, using my own more complicated measures, even when writing up some useless little application whose need for speed and optimization is nil. I optimize and spend countless hours paring down functions which aren't even close to being bottlenecks in the application.
Why? Are we just holding on to the days when these little things were so important?
(Note: I'm not talking about ignoring good memory management, or sloppy code resulting in slow performance. I just wonder how fiddly I (and I'm sure others) can get with code that doesn't need that kind of attention. Just because we can, does that mean we should waste the time to do it?)
I'm hoping to hear from more of the people who have coded for a while, rather than people who don't think there's anything wrong with using a 10x10 2D array when every other row uses only 7 elements.
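(To be concrete about that last example, this is roughly what I mean, sketched in C with made-up names: the fixed declaration reserves all 100 ints whether they're used or not, while allocating each row separately only grabs what that row actually needs.)

#include <stdio.h>
#include <stdlib.h>

int fixed[10][10];    /* always 100 ints, used or not        */
int *jagged[10];      /* one pointer per row, sized as needed */

int main(void)
{
    /* even rows use all 10 elements, odd rows only 7 */
    for (int i = 0; i < 10; i++) {
        int len = (i % 2 == 0) ? 10 : 7;
        jagged[i] = malloc(len * sizeof(int));
    }

    printf("fixed array: %lu bytes\n", (unsigned long)sizeof fixed);

    for (int i = 0; i < 10; i++)
        free(jagged[i]);
    return 0;
}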