
the three mutually exclusive goals of programming

This is a discussion on the three mutually exclusive goals of programming within the General Discussions forums, part of the Community Boards category.

  1. #31
    Registered User whiteflags's Avatar
    Join Date
    Apr 2006
    Location
    United States
    Posts
    7,637
    You'll be a great manager one day.

  2. #32
    Registered User
    Join Date
    Mar 2011
    Posts
    400
    Think so?
    I've been wondering about a new career.

  3. #33
    (?<!re)tired Mario F.'s Avatar
    Join Date
    May 2006
    Location
    Portugal
    Posts
    7,412
    Quote Originally Posted by megafiddle View Post
    I would agree that any standards are arbitrary.
    Depends on what standards we are talking about. Development standards aren't.

    Quote Originally Posted by megafiddle View Post
    But I think the important point with the large lookup table example, is that small code size
    had to be sacrificed for speed.

    It does assume that faster code could have been written, given enough time. And that may or may not
    be true, depending on the program. But it is also usually unknown. So it is assumed that it could have
    been written smaller and faster.

    Hope that makes sense.
    Whiteflags already addressed that point. But let me try and make it clearer to you, since you don't seem to have caught the gist of his post.

    First notice your use of the comparative "faster". It implies there's a need to compare your lookup table solution to something else. Only by comparison can one use words such as "faster", "cheaper" or "quicker". If you compare your lookup table solution to something else, you are always bound to find differences. And those differences may show up as faster code, simpler code, or code that is quicker to write: some, none, or all of them.

    But should you care? The reality of day-to-day programming is that you find solutions to your problems that fit your requirements. Once that solution is found, it becomes an ideal solution. You didn't compromise any attribute of your code. That is:

    Your code will,
    - always be fast if it meets the performance requirements
    - always be simple if it meets the maintenance requirements
    - always be quick to write if it meets the deadline requirements

    To say that under no circumstances can you have all three is the premise of that saying in the OP. And as you can easily guess, it's false. It's irrelevant whether there are other, better solutions. You have found one. And if that's the one you decided to use, comparing it to other solutions is a waste of resources. In fact, it can compromise your deadline right there.
    The programmer’s wife tells him: “Run to the store and pick up a loaf of bread. If they have eggs, get a dozen.”
    The programmer comes home with 12 loaves of bread.


    Originally Posted by brewbuck:
    Reimplementing a large system in another language to get a 25% performance boost is nonsense. It would be cheaper to just get a computer which is 25% faster.

  4. #34
    Registered User
    Join Date
    Mar 2011
    Posts
    400
    Ok, I have a question.

    Can a compiler optimize for both size and speed at the same time?

  5. #35
    (?<!re)tired Mario F.'s Avatar
    Join Date
    May 2006
    Location
    Portugal
    Posts
    7,412
    They generally do, yes, unless you specify a bias.

  6. #36
    Registered User
    Join Date
    Mar 2011
    Posts
    400
    A bias?

    So you have to give up some speed to get small size or vice versa?

  7. #37
    Registered User
    Join Date
    Jun 2005
    Posts
    6,207
    Depends.

    Several optimisations that reduce size also increase performance, usually because they make use of dedicated machine instructions. In other words, these optimisations do not require a trade-off between speed and executable size. If the compiler is tweaked to use those particular optimisations, it will maximise speed while simultaneously minimising size.

    However, there are also quite a few optimisations that increase performance by increasing executable file size. If the compiler is tweaked to use those particular optimisations, then it is necessary to reduce performance in order to reduce executable size (or vice versa).

    Generally, if a compiler supports "optimise for size" settings, it will use all features that minimise size. Depending on what your code does, its performance may increase or decrease.

    Similarly, if a compiler supports "optimise for performance" settings, it will use all features that maximise performance. Depending on what your code does, the executable size may increase or decrease.

    Bias between the two tends to mean a mix of both, with some weighting scheme.

    Most compilers at "higher" optimisation settings (for example, gcc -O3 and above) tend to optimise for performance (i.e. be biased toward performance) rather than executable size. However, depending on what the code does and what instructions are supported by the target machine, that can still give a reduction in executable size.
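    A minimal sketch of the first case, an optimisation that can shrink code and speed it up at once: strength reduction, where the compiler substitutes a cheaper instruction for an expensive one. The function names here are made up for illustration; a real compiler does this rewriting internally, not in your source.

    ```c
    /* What the source says: a multiply by a power of two. */
    static unsigned times_eight_naive(unsigned x)
    {
        return x * 8u;
    }

    /* What an optimising compiler may emit instead: a shift, which on
     * many machines is encoded in fewer bytes and executes faster.
     * For unsigned operands the two forms are exactly equivalent. */
    static unsigned times_eight_reduced(unsigned x)
    {
        return x << 3;
    }
    ```

    Because no speed/size trade-off is involved, a compiler applies this kind of rewrite at any optimisation level, regardless of bias.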
    Last edited by grumpy; 08-23-2011 at 10:32 PM. Reason: Fixed typos
    Right 98% of the time, and don't care about the other 3%.

  8. #38
    Registered User
    Join Date
    Mar 2011
    Posts
    400
    So when optimizing entirely for one measure, any improvement in the other is incidental?

  9. #39
    Registered User
    Join Date
    Jun 2005
    Posts
    6,207
    No. "Incidental" is the wrong word, as it suggests confidence that an improvement against one measure also causes an improvement against another measure. It is possible to gain an improvement against two measures, simultaneously, but there is no general guarantee of that.

  10. #40
    ATH0 quzah's Avatar
    Join Date
    Oct 2001
    Posts
    14,826
    Quote Originally Posted by megafiddle View Post
    Long ago I used to hear often that you can accomplish three things in programming:

    small code size
    fast code
    short development time

    And that you could always accomplish any two of those but not all three.

    Is that still true today?
    I don't know where you read that, but it's typically something like:

    "Faster, cheaper, better. Pick two."

    I don't know that I've ever seen "small code size" in the list of choices.


    Quzah.
    Hope is the first step on the road to disappointment.

  11. #41
    Registered User
    Join Date
    Jun 2005
    Posts
    6,207
    Quote Originally Posted by quzah View Post
    I don't know where you read that, but it's typically something like:

    "Faster, cheaper, better. Pick two."
    The one I see from project managers is: "Budget, schedule, capability. Pick two."

    Given that two of those (budget and schedule) are measurable and quantifiable, they tend to be defined in advance. It therefore tends to be capability (i.e. what the system does and how well it does it) that is sacrificed.

  12. #42
    Registered User
    Join Date
    May 2009
    Posts
    2,514
    Sometime in the past 30 years I heard the phrase "space versus time", as in memory used versus run-time.
    If you use more memory in the program (both for code and data), you can reduce run-time.
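    The classic example of that trade is a lookup table. This is an illustrative sketch (the function names are mine, not from any library): spending 256 bytes on a precomputed table turns a bit-counting loop into a single array access.

    ```c
    #include <stdint.h>

    static uint8_t popcount_table[256];

    /* Time-heavy, space-free: examine each bit in a loop. */
    static int popcount_loop(uint8_t x)
    {
        int n = 0;
        while (x) {
            n += x & 1u;
            x >>= 1;
        }
        return n;
    }

    /* Pay 256 bytes of memory once... */
    static void build_table(void)
    {
        for (int i = 0; i < 256; i++)
            popcount_table[i] = (uint8_t)popcount_loop((uint8_t)i);
    }

    /* ...and every later query is a single load instead of a loop. */
    static int popcount_fast(uint8_t x)
    {
        return popcount_table[x];
    }
    ```

    The table version is faster per call but larger; scale the table up (say, indexing 16-bit values) and the memory cost grows while per-call time stays flat, which is the space-versus-time dial in miniature.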

    Tim S.

  13. #43
    (?<!re)tired Mario F.'s Avatar
    Join Date
    May 2006
    Location
    Portugal
    Posts
    7,412
    It sort of holds today, but only to some extent. For non-scalable programs, there's only so much memory you can throw at a program before it no longer requires it, and more memory won't solve any performance issues still in existence (nor will it resolve performance issues related to the code itself). In a time when memory was counted in KBs and even MBs, this was more pertinent, because it was a lot easier to write programs that hit system memory ceilings.

  14. #44
    Cat without Hat CornedBee's Avatar
    Join Date
    Apr 2003
    Posts
    8,893
    There are so many factors to a program. Features, time to market, compile time, runtime speed, disk usage, memory usage, code maintainability, reliability, and probably more that I forgot. When developing any particular bit of software, be it the architecture or some small function, you make trade-offs between some of these. Depending on what you do, these trade-offs may be different, and combine different aspects. Some aspects are commonly improved together (e.g. memory usage and runtime speed, or code maintainability and reliability), some have a tendency to conflict (e.g. runtime speed and code maintainability, or compile time and runtime speed). So it's naive to pick three aspects and claim that they never can go together.

    The memory/speed tradeoff isn't clear-cut either. We develop an HPC app here, and regularly optimize for memory usage. This results in speed improvements, because any memory that we save somewhere can then be used to cache data. Also, cutting away uninteresting parts of the input data results in both memory reduction (less data to save) and speed improvements (less data to process).

    Regarding compiler optimizations: there are several kinds.
    - Those that make the code smaller and faster. Constant folding and common subexpression elimination are very important here. Good register allocation and instruction selection algorithms are also very important. Inlining of functions that are smaller than a call also helps both traits.
    - Those that make the code smaller without affecting performance. Dead code elimination, for example. (Note that because the instruction cache of the CPU is small, making code smaller can actually lead to an overall increase in performance.)
    - Those that make the code faster without affecting size. Instruction scheduling does this.
    - Those that actively sacrifice one trait for another. Instruction selection can choose compact but slow instructions or bigger, faster instructions in some special cases. Loop unrolling always makes the code larger, but might also speed it up. Inlining can also lead to code size increase.
    The first three kinds of optimization will always be applied by compilers when you enable optimization. The last kind will usually depend on additional compiler switches giving the user a choice.
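    The last kind can be sketched by hand. This is a hypothetical imitation of what a compiler's loop unroller produces (written out manually only for illustration): four additions per iteration cut loop-control overhead, at the cost of more instructions, i.e. larger code.

    ```c
    #include <stddef.h>

    /* Hand-unrolled sum: the main loop handles four elements per
     * iteration, so the increment/compare/branch overhead is paid
     * once per four additions instead of once per addition. */
    static long sum_unrolled(const int *a, size_t n)
    {
        long s = 0;
        size_t i = 0;
        for (; i + 4 <= n; i += 4)      /* unrolled body */
            s += a[i] + a[i + 1] + a[i + 2] + a[i + 3];
        for (; i < n; i++)              /* remainder loop for n % 4 */
            s += a[i];
        return s;
    }
    ```

    Note the doubled loop structure: that duplication is exactly the size cost CornedBee describes, which is why unrolling is typically gated behind switches like higher optimisation levels rather than applied unconditionally.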
    All the buzzt!
    CornedBee

    "There is not now, nor has there ever been, nor will there ever be, any programming language in which it is the least bit difficult to write bad code."
    - Flon's Law

  15. #45
    Registered User
    Join Date
    Mar 2011
    Posts
    400
    I was just learning some programming myself when I heard this, and this was back
    in the '70s and '80s.

    I don't think there was much compiler optimizing back then. Programmers would usually
    disassemble the object code (or something similar) and do their own optimizing.
    But someone had to write the algorithms for modern optimizers, and that took time. If
    they didn't exist now, you would have to spend the time doing it yourself.

    They worked under real size and speed constraints. If it didn't fit, or wasn't fast enough,
    there simply wasn't any product. These were dedicated computers for data acquisition and
    analysis, not PC applications. But early PCs certainly had limitations.

    I imagine that the "triangle philosophy" referred to a programmer's ability, not the amount
    of resources available. So maybe in some sense it was true. If so, then my refined
    question becomes:
    "Can a modern programmer meet all three, unassisted?"
    Technology has advanced. Have programmers advanced also? Or have they just taken
    advantage of the technology?
