Long ago I used to hear often that you can accomplish three things in programming:
small code size
fast code
short development time
And that you could always accomplish any two of those but not all three.
Is that still true today?
Nope. Probably never was...
A decent programmer acquires a set of skills that contribute to both development speed and well written code, good compilers provide automatic production of highly optimized code...
These are just soundbites meant to impress and give some sort of romantic notion to programming. They are hardly verifiable, do not under any circumstance constitute a universal truth to such a complex and heterogenous task as programming, and can in fact, under most circumstances, be easily proven wrong. Because they are usually given in the shape of some sort of unquestionable truth, they tend to restrain critical thinking.
Ignore those and many other sayings.
You can always accomplish all three simultaneously by changing your notion of "small", "fast" and "short". However, if you ignore these relative notions and understand the saying as just being about trade-off, then it becomes obvious that some trade-off between time and improvements of different kinds will always be present, and not only in programming.
Interesting answers.
At the time I was referring to, there was very little hardware to perform things like video or even
audio.
Speed was needed in the code itself. So that has definitely changed and may not be
a factor, at least for certain applications.
But assuming there was no hardware to handle the speed problem, would it still be true?
Being defeatist in many areas of my life, I refuse to accept further notions of failure in programming.
I don't understand what you mean by "no hardware". But does it matter if I don't? What can possibly exist in every software development project ever written that makes it true that no code can be simultaneously simple, fast and quick to write? If we are ready to accept this, we need to identify the causes. Otherwise we are dealing with an unproven statement.
Laserlight makes an interesting point about trade-offs. It's a well known problem in programming. But these trade-offs don't exist everywhere, nor are they always relevant when they do (in which case the terms "fast", "simple" and "quick" assume different meanings). So the simple matter of fact is that the way the saying in your OP is constructed makes it very easy to write it off as being false.
Oh boy, I couldn't empathize more!
It does seem difficult to achieve all three at once, though. I remember writing a graphics library for the
Commodore 64, which had none. The computer was so slow that assembler was the only way to do it.
I came up with a very small code size of very fast graphics functions, callable from BASIC. But I really
can't say if it could have been done in less time. I couldn't. And I don't know if someone else could have
either. That graphics library was not an easy project.
Well, at least this is in the GD forum.....:rolleyes:
Oh, yes, I am talking about the 80's.
But now, does speed come from efficient code or from hardware?
Indeed.
How much incredibly sluggish code might be out there, and no one notices?
Fast machines do free up a programmer to concentrate on things other than speed.
Sometimes.
At one time almost everything was done in software. There were no graphics accelerators or sound chips.
Machines were so slow and small that some things could only be done in assembly language.
Also, it wouldn't apply to every program, of course.
I imagine it applied to projects where all three considerations were either critical or very important.
You're not thinking of this are you?
Project triangle - Wikipedia, the free encyclopedia
No, but that does look like the same type of thing. One was likely derived from the other.
Just something the software guys at one place I worked used to talk about.
Their projects were fairly large, and the "triangle" seemed to be a constant consideration.
Then, quite frankly, they were worried about all the wrong stuff.
It's small (4 lines of code plus two lines with brackets).
Code:
#include <stdio.h>

int main(void)
{
    puts("Hello world!");
    return 0;
}
It's fast. You won't find anything much quicker at outputting text than the stdio functions.
It was quick to develop. It took well under a minute to type.
Quite literally ... tons of it.
What passes for efficient code today --especially with 3rd party abstractions and object oriented programming-- isn't efficient at all. Software overhead (also called "bloat") has become an accepted fact of life, especially in "managed" languages like C# and Java... There was a time when people were refusing to install Visual Basic code because of all the DLLs that came with it, essentially duplicating functions already on their systems. These days we install monstrosities like .NET and Java runtimes that are 10 times as big without a second thought. Every CPU cycle uses time, but that's not even a consideration these days...
Yes they do... but they also lead to "stuff we couldn't do before", which can be a good thing. Consider things like high definition video, multi-channel audio and enormously complex scientific code. But just as often they lead to crap like Borland's Delphi, which sucked up a full megabyte of memory just to play a single midi file in MCI... Bloatware is no joke, but we seem increasingly unlikely to even care how big our code gets. What the heck... memory is cheap and CPUs are fast, just pound it out and get it working in the shortest time possible.
Quote:
Fast machines do free up a programmer to concentrate on things other than speed.
Sometimes.
This triangle of yours is a balancing act...
The goal is to get working code that is reasonably stable into the hands of users as quickly as possible...
Yes, there are new challenges with the new possibilities.
Now if the programmers had to work within size and speed constraints,
how quickly could they have delivered that stuff?
What does it matter?
The entire programming industry is a whole different animal than it was in the 1980s... there have been large changes, even in the past 5 years we've seen new operating systems, 64 bit code becoming the norm, and even some talk of 128 bit processors on the drawing boards... 5 years from now it will be different again... The challenge is to keep up.
I understand what everyone is saying, I just don't think that's what the "triangle" was originally
intended to mean.
Of course a tiny program can meet all three requirements. That was always true. Just as a large
complex program can meet all three on a huge superfast computer.
A question I should have asked first is, "what does it actually mean"? My interpretation is apparently
different from many.
If you use a large lookup table to reach an execution speed goal, I would say that you have not met
the small size requirement, even if it fits on a typical machine. Now that itself is arguable, but I also
think that it was part of the original thinking concerning the "triangle".
The memory in an operating system like windows can shrink pretty fast if you are using other programs
simultaneously. As can the effective processor speed.
It is of great use if the goal is to write a hello world application, which is a very common goal that almost every C programmer has had at least once in their life. Besides, usefulness is not in the OP's 3 criteria, and it's just as relative as small code size, fast code and short development time. Oh, and I was joking :)
I'd be willing to say it is an empty concept. Because:
Quote:
A question I should have asked first is, "what does it actually mean"?
You're arbitrarily deciding what is an optimal amount of memory a program can use, and what programs are too slow. If your goal is to solve a complex problem, are you going to count the machine instructions and say that somehow the solution is inadequate even if it solves the problem on a desktop computer in five minutes? Not necessarily. You would have to know what the problem is and how urgently you need the information, as well as what device will give you the answer. If I wanted to run my planner on my wristwatch, that is a very different program than the one on my desktop. And the one on my wristwatch is not better than the one on my desktop, unless I'm using my wristwatch more.
Quote:
If you use a large lookup table to reach an execution speed goal, I would say that you have not met
the small size requirement, even if it fits on a typical machine. Now that itself is arguable, but I also
think that it was part of the original thinking concerning the "triangle"
Not to mention I could postulate the existence of a programming team who do nothing but program wristwatches and could write my planner in 90 days, or whatever you personally would find acceptable.
I would agree that any standards are arbitrary.
But I think the important point with the large lookup table example, is that small code size
had to be sacrificed for speed.
It does assume that faster code could have been written, given enough time. And that may or may not
be true, depending on the program. But it is also usually unknown. So it is assumed that it could have
been written smaller and faster.
Hope that makes sense.
You'll be a great manager one day.
Think so?
I've been wondering about a new career.
Depends on what standards we are talking about. Development standards aren't.
Whiteflags already addressed that point. But let me try and make it clearer to you, since you don't seem to have caught the gist of his post.
First notice your use of the comparative "faster". It implies there's a need to compare your lookup table solution to something else. Only by comparison can one use words like "faster", "cheaper" or "quicker". If you compare your lookup table solution to something else, you are always bound to find differences. And these differences may reflect faster code, simpler code, or quicker-to-write code; some, none, or all of them.
But should you care? The reality of day-to-day programming is that you find solutions to your problems that fit your requirements. Once that solution is found, it becomes an ideal solution. You didn't compromise any attribute of your code. That is:
Your code will,
- always be fast if it meets the performance requirements
- always be simple if it meets the maintenance requirements
- always be quick to write if it meets the deadline requirements
To say that under no circumstances can you have all three is the premise of that saying in the OP. And as you can easily guess, it's false. It's irrelevant whether there are other, better solutions. You have found one. And if that's the one you decided to use, comparing it to other solutions will be a waste of resources. In fact, it can compromise your deadline right there.
Ok, I have a question.
Can a compiler optimize for both size and speed at the same time?
They generally do, yes, unless you specify a bias.
A bias?
So you have to give up some speed to get small size or vice versa?
Depends.
Several optimisations that reduce size also increase performance, usually because they make use of dedicated machine instructions. In other words, these optimisations do not require a trade-off of speed and executable size. If the compiler is tweaked to use those particular optimisations, it will maximise speed while simultaneously minimise size.
However, there are also quite a few optimisations that give increased performance by increasing executable file size. If the compiler is tweaked to use those particular optimisations, then it is necessary to reduce performance in order to reduce executable size (or vice versa).
Generally, if a compiler supports "optimise for size" settings, it will use all features that minimise size. Depending on what your code does, its performance may increase or decrease.
Similarly, if a compiler supports "optimise for performance" setting, it will use all features that maximise performance. Depending on what your code does, the executable size may increase or decrease.
Bias between the two tends to mean a mix of both, with some weighting scheme.
Most compilers at "higher" optimisation settings (for example, gcc -O3 and above) tend to optimise for performance (i.e. be biased toward performance) rather than executable size. However, depending on what the code does and what instructions are supported by the target machine, that can still give a reduction of executable size.
So when optimizing for one thing completely preferentially, any improvement in the other is incidental?
No. "Incidental" is the wrong word, as it suggests confidence that an improvement against one measure also causes an improvement against another measure. It is possible to gain an improvement against two measures, simultaneously, but there is no general guarantee of that.
The one I see from project managers is: "Budget, schedule, capability. Pick two."
Given that two of those (budget and schedule) are measurable and quantifiable, they tend to be defined in advance. It therefore tends to be capability (i.e. what the system does and how well it does it) that is sacrificed.
Sometime in the past 30 years I heard the phrase "space versus time", as in memory used versus run time.
If you use more memory in the program (both for code and data) you can reduce run time.
Tim S.
It sort of holds today, but only to some extent. For non-scalable programs, there's only so much memory you can throw at a program before it no longer requires it, and more memory won't solve any performance issues still in existence (nor will it resolve performance issues related to the code itself). In a time when memory was counted in KBs, and even in MBs, this was more pertinent, because it was a lot easier to write programs that hit system memory ceilings.
There are so many factors to a program. Features, time to market, compile time, runtime speed, disk usage, memory usage, code maintainability, reliability, and probably more that I forgot. When developing any particular bit of software, be it the architecture or some small function, you make trade-offs between some of these. Depending on what you do, these trade-offs may be different, and combine different aspects. Some aspects are commonly improved together (e.g. memory usage and runtime speed, or code maintainability and reliability), some have a tendency to conflict (e.g. runtime speed and code maintainability, or compile time and runtime speed). So it's naive to pick three aspects and claim that they never can go together.
The memory/speed tradeoff isn't clear-cut either. We develop a HPC app here, and regularly optimize for memory usage. This results in speed improvements, because any memory that we save somewhere can then be used to cache data. Also, cutting away uninteresting parts of the input data results in both memory reduction (less data to save) and speed improvements (less data to process).
Regarding compiler optimizations: there are several kinds.
There are those that make the code smaller and faster. Constant folding and common subexpression elimination are very important here. Good register allocation and instruction selection algorithms are also very important. Inlining of functions that are smaller than a call also helps both traits.
There are those that make the code smaller without affecting performance. Dead code elimination, for example. (Note that because the instruction cache of the CPU is small, making code smaller can actually lead to an overall increase in performance.)
There are those that make the code faster without affecting size. Instruction scheduling does this.
Then there are optimizations that actively sacrifice one trait for another. Instruction selection can choose compact but slow instructions or bigger, faster instructions in some special cases. Loop unrolling always makes the code larger, but might also speed it up. Inlining can also lead to code size increase.
The first three kinds of optimization will always be applied by compilers when you enable optimization. The last kind will usually depend on additional compiler switches giving the user a choice.
I was just learning some programming myself when I heard this, and this was back
in the 70's and 80's.
I don't think there was much compiler optimizing back then. They would usually
disassemble the object code (or something) and do their own optimizing.
But someone had to write the algorithms for modern optimizers. That took time. And if
they didn't exist now, you would have to spend time doing it yourself.
They worked under real size and speed constraints. If it didn't fit, or wasn't fast enough,
there simply wasn't any product. These were dedicated computers, data acquisition and
analysis, not PC applications. But early PCs certainly had limitations.
I imagine that the "triangle philosophy" referred to a programmer's ability, not the amount
of resources available. So maybe in some sense it was true. If it was true in some sense,
then my refined question becomes:
"Can a modern programmer meet all three, unassisted"?
Technology has advanced. Have programmers advanced also? Or have they just taken
advantage of the technology?
Yes, we can. (I know, I know... but it isn't intended)
Quote:
I imagine that the "triangle philosophy" referred to a programmers ability, not the amount
of resources available. So maybe in some sense it was true. If it was true in some sense,
then my refined question becomes:
"Can a modern programmer meet all three, unassisted"?
If your finished program meets all your performance, maintenance and usability requirements, you effectively did it. Thousands of programs have shipped, and will continue to ship, that do just that. This does not necessarily mean that in your code you didn't compromise, here and there, something for something else. But your final variable is the program itself and how it stands up to your objectives. It's unthinkable that no program was ever developed that met all its programming requirements; otherwise we would be in a lot of trouble to justify the world's dependency on computer technology.
I'm not sure why you keep reaching out for a "yes" answer as you keep changing the terms of your question. Maybe you want this triangle thing to be true? I can understand why. It gives some assurance in our lives. It's the type of saying that answers bad programming practices with "it's just the way things are". But it's not how things are.
Technologies (hardware, compilers, programming languages, etc) have advanced. Programmers have also advanced in their ability and willingness to use the technologies, but have not advanced much in their ability to use the technologies effectively.
The net effect is that, by default, programmers have better tools so can do more, but they use the tools as a crutch.
What does it mean to "meet all three, unassisted"? What's your real question that causes you to ask this again and again?
It looks like you have already received the answers on page 1. Adding "modern" and "unassisted" does not change anything. The answer is "yes" if you are talking about these with respect to a requirements specification and "no" if you are talking about these as inherent trade-offs.
It is kind of like asking: I am eating a meal and have these two goals:
1. To finish my food.
2. To quickly be done with the meal so as to do my next activity.
Can these two goals be achieved simultaneously?
Now, eating your food takes time. No matter how fast you finish your food (goal #1), you are still going to take some time, and this conflicts with goal #2, so you cannot achieve both 100% at the same time. But if you finish your food in time for your next activity, who cares?
That's a good point. Maybe the programmers I'm thinking of weren't that good. Or maybe the management
was unrealistic. I tend not to think so, though. As I mentioned, they had serious constraints, even to the point
of building their own processor and microcode. For them at least, the philosophy was true.
It was my own assumption that this could be extended to the early PC's. If that assumption was wrong, then
I have no argument with that. It was just based on the limitations of the early machines.
As to why I keep asking, my first question was way too general. And I had thought that there was some general
truth to it.
When I say "modern" and "unassisted", I am asking, "If a competent programmer was given an old 286 PC with 1 Meg of memory,
and asked to produce some CAD program, could he/she do it quicker than the earlier time period programmer?"
(and with an old Borland compiler)
Again, my original question was too general. It should have been in the form of specific questions.
I realize that there is no practical difference between "fast as possible" and "fast enough". I use many programs
that have no apparent speed problems. I would hate to look "behind the scenes" in some of them, but they do
work well enough nonetheless. It seems that programmers way back then, used to care about speed and size,
even when it wasn't important to the final product. There was at least an attempt to avoid large or slow programs
if possible. I think that would be a programming skill that could be lost if not exercised. So part of what I am wondering
is if the same philosophy is true today.
I am not a professional programmer, so I have the luxury of taking my time on a project. I also have the luxury of simply
plugging in more ram or a faster processor. Still, I try to avoid large or slow algorithms. Just seems like good programming
practice.
It depends on the programmers ;)
I note that your scenario effectively gives "the earlier time period programmer" a kind of "assistance".
It sounds like your real question is embedded right there: do programmers these days attempt to avoid unnecessarily large or slow programs when programming? Again, I would say that it depends on the programmers.
How do you mean?
Yes, that's a big part of what I was wondering.
But I'm also wondering about differences between now and then. We can easily quantify speed
and memory size, so we know how those have changed. Is there no way to speak of programmers'
abilities and philosophies in a general sense, for now vs then?
Even for cboard->General Discussion, this thread plainly drips with excessive and flippant absurdism. :(
These cats are playing w/ you, megafiddle, because your initial premise is just a logical truism, and there's not much to say about it (except to play semantic games).
There are more languages now than there used to be for a good reason -- in general, those newer languages prioritize development time, because that is the major co$t factor. Notice that the vast majority of them produce slower executables that use more memory resources than programs written in C, which was developed ~40 years ago. This is even true of C++, which still produces fast binaries but encourages practices that use considerably more memory. And, nb, C++ is not that young either.*
The reason people use C++ instead of C is because it reduces development time and provides more mechanisms for standardizing/organizing the development process. It is a great language in that sense. End of story.
I like to prototype things in perl, then when I'm happy with the form convert the code to C or C++ (recently, using Inline::C to further facilitate this). If the project is sufficiently large, for me this actually reduces the development time, because I can screw around and change stuff in perl much easier than I would in C. That also encourages me to experiment more which leads to a better product. And IMO, a perl program makes a great schematic or plan -- much better than just doodling on paper -- meaning the quality of the final C code is probably better than it would have been. So perl saves me time, and C delivers streamlined performance.
* implying to me that C really does set a standard in relation to assembly that perhaps cannot be beaten.
I prefer to be a dog.
Of course, what did you think I meant by calling it a "logical truism"? Which it is, and anyone who tries to claim differently is an absurdist.
My point about "the cats" dogging you is that some people really will try to argue that up is down.
- development effort
- memory use
- execution speed
The ideal is to minimize the first two and maximize the last one. But if product A sets the bar on all 3 counts, someone will come along and best it on *two* of them by sacrificing one of the three, usually:
Product B is faster and uses fewer resources, but cost more to develop.
OR
Product B cost the same or less to develop and is faster, but uses more memory.
This is because:
1) There is almost always a way to improve performance, but it takes more effort.
2) It is easier to improve execution speed by splurging on mem usage than it is to reduce mem usage by sacrificing speed.
3) It is very easy to reduce development time by choosing a model which sacrifices speed and/or mem usage.
So if you hear someone claiming "there is no such relationship", "there never was such a relationship", "this relationship is not significant anymore", check your BS meter. Beep, beep, beep.
I'd actually add another couple corners to this "triangle":
PORTABILITY
SECURITY
RELIABILITY
Ok.
I misunderstood which "cats" you were referring to.
There's also catnip. It has the interesting property of being so evident that it distracts "cats" from their real purpose of hunting mice. Catnip; cats very own logical truism.
But, it isn't a triangle anymore... oh my! We need to redo this logical truism.