You'll be a great manager one day.
Think so?
I've been wondering about a new career.
Depends on what standards we are talking about. Development standards aren't.
Whiteflags already addressed that point. But let me try to make it clearer to you, since you don't seem to have caught the gist of his post.
First, notice your use of the word "faster". It implies a comparison between your lookup table solution and something else. Only by comparison can one use words like "faster", "cheaper" or "quicker". And if you compare your lookup table solution to something else, you are always bound to find differences. Those differences may translate into faster code, simpler code, or code that is quicker to write: some, none, or all of them.
But should you care? The reality of day-to-day programming is that you find solutions to your problems that fit your requirements. Once that solution is found, it becomes an ideal solution. You didn't compromise any attribute of your code. That is:
Your code will,
- always be fast if it meets the performance requirements
- always be simple if it meets the maintenance requirements
- always be quick to write if it meets the deadline requirements
To say that under no circumstances can you have all three is the premise of the saying in the OP. And as you can easily guess, it's false. It's irrelevant whether there are other, better solutions. You have found one. And if that's the one you decided to use, comparing it to other solutions is a waste of resources. In fact, it can compromise your deadline right there.
Originally Posted by brewbuck:
Reimplementing a large system in another language to get a 25% performance boost is nonsense. It would be cheaper to just get a computer which is 25% faster.
Ok, I have a question.
Can a compiler optimize for both size and speed at the same time?
They generally do, yes, unless you specify a bias.
A bias?
So you have to give up some speed to get small size or vice versa?
Depends.
Several optimisations that reduce size also increase performance, usually because they make use of dedicated machine instructions. In other words, these optimisations do not require a trade-off between speed and executable size. If the compiler is tuned to use those particular optimisations, it will maximise speed while simultaneously minimising size.
However, there are also quite a few optimisations that increase performance by increasing executable file size. If the compiler is tuned to use those particular optimisations, then it is necessary to reduce performance in order to reduce executable size (or vice versa).
Generally, if a compiler supports "optimise for size" settings, it will use all features that minimise size. Depending on what your code does, its performance may increase or decrease.
Similarly, if a compiler supports "optimise for performance" setting, it will use all features that maximise performance. Depending on what your code does, the executable size may increase or decrease.
Bias between the two tends to mean a mix of both, with some weighting scheme.
Most compilers at "higher" optimisation settings (for example, gcc -O3 and above) tend to optimise for performance (i.e. be biased toward performance) rather than executable size. However, depending on what the code does and what instructions are supported by the target machine, that can still give a reduction of executable size.
So when optimizing for one thing completely preferentially, any improvement in the other is incidental?
No. "Incidental" is the wrong word, as it suggests confidence that an improvement against one measure also causes an improvement against another measure. It is possible to gain an improvement against two measures, simultaneously, but there is no general guarantee of that.
The one I see from project managers is: "Budget, schedule, capability. Pick two."
Given that two of those (budget and schedule) are measurable and quantifiable, they tend to be defined in advance. It therefore tends to be capability (i.e. what the system does and how well it does it) that is sacrificed.
Sometime in the past 30 years, I heard the phrase "space versus time", as in memory used versus run-time.
If you use more memory in the program (both for code and data) you can reduce run-time.
Tim S.
It sort of holds today, but only to some extent. For non-scalable programs, there's only so much memory you can throw at a program before it no longer needs more, and additional memory won't solve any remaining performance issues (nor will it resolve performance issues rooted in the code itself). In a time when memory was counted in KBs, or even MBs, this was more pertinent, because it was a lot easier to write programs that hit system memory ceilings.
There are so many factors to a program. Features, time to market, compile time, runtime speed, disk usage, memory usage, code maintainability, reliability, and probably more that I forgot. When developing any particular bit of software, be it the architecture or some small function, you make trade-offs between some of these. Depending on what you do, these trade-offs may be different, and combine different aspects. Some aspects are commonly improved together (e.g. memory usage and runtime speed, or code maintainability and reliability), some have a tendency to conflict (e.g. runtime speed and code maintainability, or compile time and runtime speed). So it's naive to pick three aspects and claim that they never can go together.
The memory/speed tradeoff isn't clear-cut either. We develop a HPC app here, and regularly optimize for memory usage. This results in speed improvements, because any memory that we save somewhere can then be used to cache data. Also, cutting away uninteresting parts of the input data results in both memory reduction (less data to save) and speed improvements (less data to process).
Regarding compiler optimizations: there are several kinds.
There are those that make the code smaller and faster. Constant folding and common subexpression elimination are very important here. Good register allocation and instruction selection algorithms are also very important. Inlining of functions that are smaller than a call also helps both traits.
There are those that make the code smaller without affecting performance. Dead code elimination, for example. (Note that because the instruction cache of the CPU is small, making code smaller can actually lead to an overall increase in performance.)
There are those that make the code faster without affecting size. Instruction scheduling does this.
Then there are optimizations that actively sacrifice one trait for another. Instruction selection can choose compact but slow instructions or bigger, faster instructions in some special cases. Loop unrolling always makes the code larger, but might also speed it up. Inlining can also lead to code size increase.
The first three kinds of optimization will always be applied by compilers when you enable optimization. The last kind will usually depend on additional compiler switches giving the user a choice.
All the buzzt!
CornedBee
"There is not now, nor has there ever been, nor will there ever be, any programming language in which it is the least bit difficult to write bad code."
- Flon's Law
I was just learning some programming myself when I heard this, and this was back in the 70's and 80's.
I don't think there was much compiler optimizing back then. Programmers would usually disassemble the object code (or something like it) and do their own optimizing.
But someone had to write the algorithms for modern optimizers. That took time. And if they didn't exist now, you would have to spend time doing it yourself.
They worked under real size and speed constraints. If it didn't fit, or wasn't fast enough, there simply wasn't any product. These were dedicated computers for data acquisition and analysis, not PC applications. But early PC's certainly had limitations.
I imagine that the "triangle philosophy" referred to a programmer's ability, not the amount of resources available. So maybe in some sense it was true. If it was true in some sense, then my refined question becomes:
"Can a modern programmer meet all three, unassisted?"
Technology has advanced. Have programmers advanced also? Or have they just taken advantage of the technology?