Thread: is the STL widely used in professional C++ programming?

  1. #31
    and the hat of sweating
    Join Date
    Aug 2007
    Location
    Toronto, ON
    Posts
    3,545
    Quote Originally Posted by cyberfish View Post
    Realistically speaking, you won't get that kind of performance difference compiling the same code using different (modern) compilers.

    Using the STL is different. You will be compiling different code (from different implementations of the STL).
    There are quite a few STL implementations that you can buy or download separately from the compiler and then use the same STL code on all platforms. One of my previous companies did that.
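    To be clear about what "the same STL code" means: perfectly ordinary standard-library source like the sketch below compiles unchanged against any conforming implementation, whether it shipped with the compiler or was obtained separately (STLport was a well-known standalone one). This is just an illustration, not our actual code.
    Code:
    #include <algorithm>
    #include <iostream>
    #include <numeric>
    #include <vector>

    int main()
    {
        // Identical source against any conforming STL implementation,
        // compiler-bundled or standalone.
        std::vector<int> v;
        for (int i = 10; i > 0; --i)
            v.push_back(i);

        std::sort(v.begin(), v.end());                      // <algorithm>
        int sum = std::accumulate(v.begin(), v.end(), 0);   // <numeric>

        std::cout << "min=" << v.front() << " sum=" << sum << std::endl;
        return 0;
    }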
    "I am probably the laziest programmer on the planet, a fact with which anyone who has ever seen my code will agree." - esbo, 11/15/2008

    "the internet is a scary place to be thats why i dont use it much." - billet, 03/17/2010

  2. #32
    Registered User VirtualAce's Avatar
    Join Date
    Aug 2001
    Posts
    9,607
    I agree. I'm fairly certain they did evaluate the option of using the STL before deciding to go custom, though I don't know the specifics of what went into that decision.
    Or, more likely, it came down to when that decision was made and what the state of the STL as a fully featured library was at the time.

    I totally agree with the statement I re-quoted: optimizing for a 25% performance gain (which might take a long time) is probably not worth it when you can just target computers that are 25% faster. To give you an idea of where 25% sits... the Intel i7 is normally 15% to 20% faster than an AMD Phenom II. I highly doubt 25% is going to stop anyone from shipping, yet trying to eke out that 25% might. Now, of course, if the FPS is in the toilet to start with then you can't afford the 25%, but if your FPS is in the toilet, my guess is that the 25% we're speaking of is not the root cause.

    Interactive frame rates are right around 25 to 35 FPS. As a developer I shoot for the refresh rate of the display mode I'm using, but in the end, if the system stays interactive, it's not worth the time or effort to optimize because no one is going to notice. But people will notice when you fail to ship. I'm sure we can all agree it is a balancing act that is not always as cut and dried as we make it out to be and, for most of us, is a decision that is made far above our little cubicles.
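    To make "interactive" concrete, here's a rough frame-timing sketch. It uses C++11 <chrono> purely for portability, simulate_and_render is a hypothetical stand-in for real per-frame work, and the 25 FPS threshold is just the lower bound I mentioned; treat the whole thing as illustrative, not engine code.
    Code:
    #include <chrono>
    #include <cstdio>

    // Hypothetical stand-in for one frame of update + render work.
    void simulate_and_render() {}

    int main()
    {
        typedef std::chrono::steady_clock Clock;
        const double kInteractiveFps = 25.0;   // lower bound of "interactive"

        for (int frame = 0; frame < 100; ++frame) {
            Clock::time_point start = Clock::now();
            simulate_and_render();
            std::chrono::duration<double> dt = Clock::now() - start;

            // Guard against a zero-length frame in this toy example.
            double fps = dt.count() > 0.0 ? 1.0 / dt.count() : kInteractiveFps;
            if (fps < kInteractiveFps)
                std::printf("frame %d dropped below interactive: %.1f FPS\n",
                            frame, fps);
        }
        return 0;
    }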

    But your statements about the STL do not surprise me, as I've heard quite a bit of debate between studios that wouldn't use anything else and studios that would never touch it. After all, the studio is protecting its investment, and if containers have already been built, why re-invent the wheel when it was already re-invented some time ago? Those containers may not support everything the STL containers do (<algorithm>, for instance), but the requirements may be such that they don't have to. So in the end I can see several legit reasons for not using the STL, but I can also see as many legit reasons for using it. However, all of us have to follow what has been handed down to us, so we are definitely not accusing you of being this or that. In fact, I find the discussion and back and forth on it quite interesting.
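    As a sketch of what such a cut-down studio container might look like (purely illustrative, not any studio's actual code): a fixed-capacity vector that covers push/index/size and deliberately nothing else, with raw pointers as its only "iterators".
    Code:
    #include <cassert>
    #include <cstddef>

    // Sketch of a studio-style container: fixed capacity, no heap
    // allocation, pointer "iterators" only. Covers far less than
    // std::vector, by design.
    template <typename T, std::size_t N>
    class FixedVector
    {
    public:
        FixedVector() : size_(0) {}

        void push_back(const T& value)
        {
            assert(size_ < N);          // no growth: capacity is a hard limit
            data_[size_++] = value;
        }

        T&       operator[](std::size_t i)       { return data_[i]; }
        const T& operator[](std::size_t i) const { return data_[i]; }

        std::size_t size() const { return size_; }

        // Raw pointers still work with <algorithm> if it's ever needed.
        T* begin() { return data_; }
        T* end()   { return data_ + size_; }

    private:
        T           data_[N];
        std::size_t size_;
    };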
    Last edited by VirtualAce; 07-26-2010 at 09:18 PM.

  3. #33
    Master Apprentice phantomotap's Avatar
    Join Date
    Jan 2008
    Posts
    5,108
    I'm sure we can all agree it is a balancing act that is not always as cut and dried as we make it out to be.
    I don't think it is a balancing act. The average consumer doesn't have the latest and greatest hardware to kick around, and a "triple A" rendering farm targeting customers who outsource their rendering needs will not likely have the cash to buy a 25% increase in the form of upgrading ten thousand cores.

    I think it is a case of "best fit". That's why I say the "get 25% by upgrading machines" thing is stupid. When is "throw money at it until it goes away" the best fit?

    [Edit]
    I'm really arguing the case for better algorithms and implementations. I have one "ultimate" reason for my mindset: at least part of the "25%" improvement by machine upgrade will be wasted by programmers not taking the time to "get it right" in the first place. I see a lot of this crap out in the "real world". Programmers don't mind that their code leaks memory because "ram is cheap" or eats cycles because "ghz are cheap". I HATE THIS CRAP!
    [/Edit]
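    To be concrete, the pattern I'm complaining about, and the near-zero-effort fix, look roughly like this. A sketch only: 'process' is a hypothetical stand-in for real work, and the tidy version just lets a standard container own the buffer.
    Code:
    #include <cstddef>
    #include <vector>

    // Hypothetical stand-in for real work on a scratch buffer;
    // returns false to signal failure.
    static bool process(int* buf, std::size_t n)
    {
        buf[0] = 1;
        return n > 1;
    }

    // The "ram is cheap" version: leaks the buffer on the early return.
    bool leaky()
    {
        int* scratch = new int[1 << 20];
        if (!process(scratch, 1 << 20))
            return false;                  // scratch is never deleted
        delete[] scratch;
        return true;
    }

    // Same logic with RAII: the vector frees itself on every path.
    bool tidy()
    {
        std::vector<int> scratch(1 << 20);
        return process(&scratch[0], scratch.size());
    }

    int main() { return leaky() && tidy() ? 0 : 1; }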

    Soma
    Last edited by phantomotap; 07-26-2010 at 09:38 PM. Reason: none of your business

  4. #34
    Registered User
    Join Date
    Dec 2006
    Location
    Canada
    Posts
    3,229
    There are quite a few STL implementations that you can buy or download separately from the compiler and then use the same STL code on all platforms. One of my previous companies did that.
    That makes sense, if a suitable one is available and costs less than what they'd have to pay developers to write one.

    Or, more likely, it came down to when that decision was made and what the state of the STL as a fully featured library was at the time.
    The engine was written from scratch and started 4-5 years ago, so I don't think that's it for this particular case.

  5. #35
    Officially An Architect brewbuck's Avatar
    Join Date
    Mar 2007
    Location
    Portland, OR
    Posts
    7,396
    Quote Originally Posted by phantomotap View Post
    Someone has in their sig a statement that says if you make code changes to improve performance by a mere 25%, you would probably be better off just getting a computer that is 25% faster.
    Which is a really stupid thing to say. A new or better computer may buy you a few percent performance over what you have, but a new or better algorithm or implementation may buy you a few percent forever.

    (A better implementation granting the same performance increase as a better machine will likely provide a similar advantage to any future machines.)
    The above is a quote of somebody quoting someone's sig who was originally quoting me (get that?), and the quote is a bit out of context here.

    To quote a further response from you,

    When is "throw money at it until it goes away" the best fit?
    It's a good fit when a new platform is $10,000, and a new implementation is $10,000,000. That's what I mean by "out of context." I wasn't talking about computer games. But I offer no opinion as to the applicability of my statement to this topic.
    Code:
    //try
    //{
    	if (a) do { f( b); } while(1);
    	else   do { f(!b); } while(1);
    //}

  6. #36
    Registered User VirtualAce's Avatar
    Join Date
    Aug 2001
    Posts
    9,607
    not taking the time to "get it right" in the first place.
    What is 'right' is meeting the requirements set forth for the software. If the requirements do not call for super-fast code, then writing it is a waste of time. If they do, then the time is worth it and justified.

    For me it all comes down to requirements. If your software meets the requirements then you did your job. If you go far above the requirements you are wasting time, and if you fall far below them you haven't done the job right.

  7. #37
    Registered User
    Join Date
    Jan 2007
    Posts
    330
    Quote Originally Posted by cpjust View Post
    Obviously, unless they like wasting lots of time reinventing the wheel...
    I've also seen quite a few people who claim to be C++ programmers, but they're really C programmers that also use classes. I call them C+ programmers.
    Wow, so true. These are, in my opinion, older programmers who learnt C++ long ago (before 1998) and haven't updated their knowledge since.
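    The caricature is easy to draw. Both fragments below sort a list of names; the first is the "C+" style (classes and structs exist, but everything else is still C), the second is idiomatic C++. A sketch, obviously simplified.
    Code:
    #include <algorithm>
    #include <cstddef>
    #include <cstdlib>
    #include <cstring>
    #include <string>
    #include <vector>

    // "C+" style: a struct with functions around it, but memory and
    // sorting are still handled the C way.
    struct NameListC
    {
        char**      names;
        std::size_t count;
    };

    static int compare_names(const void* a, const void* b)
    {
        return std::strcmp(*(char* const*)a, *(char* const*)b);
    }

    void sort_names_c(NameListC& list)
    {
        std::qsort(list.names, list.count, sizeof(char*), compare_names);
    }

    // Idiomatic C++: the container owns its memory and <algorithm>
    // does the sorting.
    void sort_names_cpp(std::vector<std::string>& names)
    {
        std::sort(names.begin(), names.end());
    }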

  8. #38
    Algorithm Dissector iMalc's Avatar
    Join Date
    Dec 2005
    Location
    New Zealand
    Posts
    6,318
    Quote Originally Posted by phantomotap View Post
    [Edit]
    I'm really arguing the case for better algorithms and implementations. I have one "ultimate" reason for my mindset: at least part of the "25%" improvement by machine upgrade will be wasted by programmers not taking the time to "get it right" in the first place. I see a lot of this crap out in the "real world". Programmers don't mind that their code leaks memory because "ram is cheap" or eats cycles because "ghz are cheap". I HATE THIS CRAP!
    [/Edit]
    Being a regular reader of TheDailyWTF, I certainly share much of that viewpoint.
    Sometimes some lazy sod writes something so unscalable and RAM-hungry that the poor company that runs the app is forced to throw ever more powerful hardware at it year after year, when, had the code been written properly to begin with, it could still run great on the original hardware ten years later.

    When an app starts out leaky, unless extra effort is made (significantly more than would be needed if there were no leaks), the app just gets leakier and leakier as more buggy code is reused or copy-and-pasted. "Broken windows syndrome", basically. An app that initially gets by hogging a little more RAM than it needs turns into something that cannot run anywhere near as long as the clients need it to before it has to be restarted. I've seen this first hand more than once.
    There is absolutely no shortage of sloppy programmers out there.
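    The classic shape of that failure is a cache or log that only ever grows. Bounding it costs a few lines; the sketch below uses an arbitrary cap and crude eviction just to show the shape, not a real policy.
    Code:
    #include <cstddef>
    #include <map>
    #include <string>

    std::map<int, std::string> g_cache;

    // Unbounded: memory grows for as long as the process runs,
    // until someone schedules the nightly restart.
    void remember_leaky(int key, const std::string& value)
    {
        g_cache[key] = value;   // never evicts anything
    }

    // Bounded: same interface, but growth has a hard cap.
    void remember_bounded(int key, const std::string& value)
    {
        const std::size_t kMaxEntries = 10000;   // illustrative cap
        if (g_cache.size() >= kMaxEntries)
            g_cache.erase(g_cache.begin());      // crude eviction, not LRU
        g_cache[key] = value;
    }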

    However, I also agree with Bubba in that there's a time and budget for software, and exceeding the performance requirements by spending more time than necessary on something is not what most of us are typically paid for. At the same time, though, I'm all too aware of how requirements change, and an app that today doesn't need to handle more than XXX for some metric will need to woefully exceed it six months later.
    With some strategic guesswork you can put the effort not just where it matters now, but where it is likely to matter in the not too distant future.
    My homepage
    Advice: Take only as directed - If symptoms persist, please see your debugger

    Linus Torvalds: "But it clearly is the only right way. The fact that everybody else does it some other way only means that they are wrong"

  9. #39
    Registered User VirtualAce's Avatar
    Join Date
    Aug 2001
    Posts
    9,607
    At the same time, though, I'm all too aware of how requirements change, and an app that today doesn't need to handle more than XXX for some metric will need to woefully exceed it six months later.
    With some strategic guesswork you can put the effort not just where it matters now, but where it is likely to matter in the not too distant future.
    I couldn't agree more. This is what happens all the time in projects, and I'm not saying that I never optimize or plan for the future. The extremely bold statement that was made sounded like optimization was the most important part, which I disagree with. Optimization and planning for the future are steps in the process, but they are not the sum total of the process. Time and budget most certainly dictate what we can and cannot do. In my career thus far I've had to come to terms with the idea that 'good enough' is still 'good', even though in my eyes it could be altered a bit or changed to support this or that or what have you. So I guess in the end software is 'abandoned' as opposed to finished.

    will need to woefully exceed it six months later.
    I believe one could argue as well that when that 'official' project comes down, that is the time to address such concerns. I would argue that it may not be within the scope of the current project and thus needs to be addressed later. Of course, memory leaks are not something I'm willing to call 'job done' on, because that, in my opinion, is a major program fault. But when it comes to eking out performance (where it may not be needed) or adding support for features that have not yet been scoped, I believe at that point one must come back to the main task of meeting the requirements.
    Most of us are probably not given clear-cut requirements, or they change on a whim, but I'm sure all of us are in the habit of constructing our own based on what the project has set out to do.

    So, back to the topic: it is perhaps for this reason that the OP was instructed not to use the STL. Perhaps the company found that people were prone to misusing it, or to not thinking twice about how and which containers they use, and decided that in order to maintain framerate they simply wouldn't use it. If that meets their requirements then I really cannot say whether it is right or wrong, just that it gets the job done for them. Who can argue with that? Whether one can use the STL or not is really not the question here; arguments can be made for and against. In the end all that matters is what the studio wants people to do. I'm not in the habit of arguing with leads or chief architects.
    Last edited by VirtualAce; 07-27-2010 at 09:32 AM.

  10. #40
    Master Apprentice phantomotap's Avatar
    Join Date
    Jan 2008
    Posts
    5,108
    The above is a quote of somebody quoting someone's sig who was originally quoting me (get that?), and the quote is a bit out of context here.
    I knew it was you who said it, but I did not know the context.

    Fair enough, it was out of context; saying "an upgrade may be the best bet" instead wouldn't be an issue, because an upgrade may be the best bet, but that really doesn't change my opinions.



    It's a good fit when a new platform is $10,000, and a new implementation is $10,000,000.
    It may be. It may not be.

    If the implementation is communal, and perhaps scalable, the cost of the investment can be spread across multiple areas or even recovered through technology resale.

    The new platform is only going to affect those that directly interact with it or interact with it indirectly in specific ways.

    If the choice is based solely on immediate costs and applicability, a lot of people have failed to do their jobs.

    Managing financial and technical resources shouldn't be the job of someone whose insight is limited to the following morning.



    What is 'right' is meeting the requirements set forth for the software.
    What is right is meeting requirements while showing due diligence and intellectual rigour. It isn't just your machine you are playing with.



    I also agree with Bubba in that there's a time and budget for software.
    Absolutely.

    With the tools we programmers have available, resource leaks and rampant spinning are almost always trivially conquered, and yet they are still as common as muck. Tracking these down should be a small consideration, but it must be a consideration.

    If you don't have "time" and "budget" for running "Valgrind", "PC-Lint", and "GPerf", how are you going to have the resources it takes to meet any of the "bullet points" in your woefully inadequate five-page requirements document?
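    For example, a deliberate leak like the sketch below is flagged immediately when you build with -g and run under valgrind --leak-check=full; the report points at the allocating line, so the fix costs minutes, not budget. The program itself is made up for illustration.
    Code:
    #include <cstdio>

    int main()
    {
        int* forgotten = new int[256];   // allocated, never deleted
        forgotten[0] = 42;
        std::printf("%d\n", forgotten[0]);
        return 0;
        // Build with -g, then: valgrind --leak-check=full ./a.out
        // Valgrind reports the block as "definitely lost" and points
        // straight at the 'new' above.
    }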



    The extremely bold statement that was made sounded like optimization was the most important part, which I disagree with.
    I'd love to know who made that statement, for I disagree with it too. It is almost as stupid as the "out of context" offering of "don't optimize, just buy a new machine".

    Soma

  11. #41
    Registered User VirtualAce's Avatar
    Join Date
    Aug 2001
    Posts
    9,607
    It is almost as stupid as the "out of context" offering of "don't optimize, just buy a new machine".
    Say what you will, but in many contexts this is true. Of course, one must decide what counts as an 'old' machine and what doesn't. Supporting older hardware, from a purely graphics perspective, is a major PITA and takes more development time than just targeting newer hardware.
