Basic Programming model....

This is a discussion on Basic Programming model.... within the A Brief History of Cprogramming.com forums, part of the Community Boards category; Do you think it is worth the effort to write code that is efficient in cpu cycles and small in ...

  1. #1
    Registered User Jaqui's Avatar
    Join Date
    Feb 2005
    Posts
    416

    Basic Programming model....

    Do you think it is worth the effort to write code that is efficient in CPU cycles and small in RAM footprint?

    Or do you think that relying on the hardware to make up for bad code is the better idea?

    An excellent example of relying on the hardware is Windows itself: every new version requires a hardware upgrade to run properly.

    Personally, I say that the software company that insists on forcing a hardware upgrade to use the new version of their software should have to purchase the new hardware for their clients.
    Quote Originally Posted by Jeff Henager
    If the average user can put a CD in and boot the system and follow the prompts, he can install and use Linux. If he can't do that simple task, he doesn't need to be around technology.

  2. #2
    Banned SniperSAS's Avatar
    Join Date
    Aug 2005
    Posts
    175
    Really, because I think computers are evil and anyone who uses them should be burned at the stake like a witch.

  3. #3
    Registered User Aran's Avatar
    Join Date
    Aug 2001
    Posts
    1,301
    Quote Originally Posted by SniperSAS
    Really, because I think computers are evil and anyone who uses them should be burned at the stake like a witch.
    That wasn't very polite...


  4. #4
    Banned SniperSAS's Avatar
    Join Date
    Aug 2005
    Posts
    175
    Quote Originally Posted by Aran
    That wasn't very polite...

    We must go to extreme measures to rid the world of all heretics, politeness be damned.

  5. #5
    Registered User whiteflags's Avatar
    Join Date
    Apr 2006
    Location
    United States
    Posts
    7,666
    Quote Originally Posted by Jaqui
    Personally, I say that the software company that insists on forcing a hardware upgrade to use the new version of their software should have to purchase the new hardware for their clients.
    Except that hardware does have limits. Let's do an exercise:

    I am Squaresoft, it's 1997 and Final Fantasy VII is scheduled for release. I have a huge Nintendo fanbase, but Nintendo (at the time) refused to use anything other than cartridges, because gamers are hardcore.

    I'm not going to even try putting polygon graphics in a game on some cartridge: it's just not going to work. A CD would be so much better just because it can handle so much more data.

    That's the kind of stuff it comes down to. Companies are polite: they do write system requirements in the manual, and if you don't want to follow their suggestions, then you're SOL. Companies that are ethical routinely push hardware to its limits to reduce upgrade costs for their customers, but asking companies to sell hardware along with a game is a bit overkill--you might as well buy a new PC in a bundle of games or something. That's one way to ensure that customers will have what they need when they play your game.

    The point is ink and paper is cheaper than selling hardware.

  6. #6
    and the Hat of Clumsiness GanglyLamb's Avatar
    Join Date
    Oct 2002
    Location
    between photons and phonons
    Posts
    1,109
    I just finished studying for my OS ( Windows - Linux ) exam tomorrow.

    And I've seen examples where good coding would improve efficiency particularly when it comes to memory management.

    There was an example where an array of ints was created. They said you should take into account how the OS allocates pages in memory when doing this.
    In the example they used pages of 4 kB. The program needed a matrix of 4 MB. In this case it would be very efficient to create 1024 arrays of 4 kB, so that each array fills up a page exactly.
    In the end all of this would decrease the working set (active pages).

    Then again, apart from restructuring your code to gain efficiency, how will your compiler handle/optimize all these things...

    Anyway, I think it's better to write efficient code than to rely on hardware improvements.
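    The page-granularity arithmetic in that example can be sketched as a toy calculation (assuming 4 kB pages and worst-case placement where every array starts on its own page; the function and names here are invented for the illustration, not from the course material):

    ```python
    PAGE = 4 * 1024          # 4 kB pages, as in the example

    def pages_needed(chunk_size: int, total: int) -> int:
        """Pages consumed when `total` bytes are split into `chunk_size`-byte
        arrays and each array starts on its own page (worst-case placement)."""
        n_chunks = total // chunk_size
        pages_per_chunk = -(-chunk_size // PAGE)   # ceiling division
        return n_chunks * pages_per_chunk

    TOTAL = 4 * 1024 * 1024  # the 4 MB matrix

    # 1024 arrays of 4 kB each fill their pages exactly: no wasted space.
    print(pages_needed(4 * 1024, TOTAL))   # 1024 pages
    # 2048 arrays of 2 kB each can waste half of every page they land on.
    print(pages_needed(2 * 1024, TOTAL))   # 2048 pages in the worst case
    ```

    In the best case the 2 kB arrays could share pages pairwise and also fit in 1024 pages; the point of page-sized, page-aligned chunks is that the best case is guaranteed rather than left to the allocator.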

  7. #7
    Peace
    Join Date
    Aug 2001
    Posts
    1,510
    In general I agree with GanglyLamb, however the other side of the coin is worth a look. I could not count the number of days I have wasted obsessing over tiny fragments of code which I eventually threw out or rewrote anyway. You have to know where to draw the line. I hate to admit that there's ever a time when something shouldn't be done "perfectly", but there is, and I admit it. It all comes down to the matter of diminishing returns.
    "There's always another way"
    -lightatdawn (lightatdawn.cprogramming.com)

  8. #8
    Registered User mrafcho001's Avatar
    Join Date
    Jan 2005
    Posts
    483
    When you write an application I think you should consider both sides.

    You want efficient code that will run fast, but you don't want to spend decades writing it. You have to decide where your application falls; it'll have to be balanced. Write the most efficient code for the time frame you have.

    Sure, if the application is small you can get by with slow code you slapped together in a few hours. But for large applications you need to optimize the slowest parts (mainly long and intensive algorithms and memory management).
    My Website
    010000110010101100101011
    Add Color To Your Code!

  9. #9
    Yes, my avatar is stolen anonytmouse's Avatar
    Join Date
    Dec 2002
    Posts
    2,544
    Quote Originally Posted by GanglyLamb
    There was this example where an array of ints would be created. They said that you should take into account how pages are allocated in memory by the OS when doing this.
    In the example they used pages of 4kB. The program needed to have a matrix of 4MB. In this case it would be very efficient to create 1024 arrays of 4kB. So that each array would fill up a page.
    In the end all of this would decrease the working set ( active pages ).
    Why would there be a difference between 1024 4 kB arrays and one 4 MB array? My understanding is that the working set is determined by the memory pages that are touched, rather than by the details of allocation.
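    A toy model of that point (assuming the 1024 small arrays happen to be laid out page-aligned and back to back, which is the best case; the names and the base address are invented for the sketch):

    ```python
    PAGE = 4 * 1024

    def pages_touched(addresses):
        """Working set approximated as the set of distinct pages referenced."""
        return {addr // PAGE for addr in addresses}

    # One contiguous, page-aligned 4 MB array: one representative touch per page.
    big = range(0, 4 * 1024 * 1024, PAGE)

    # 1024 page-aligned 4 kB arrays placed back to back somewhere else in the
    # address space: different addresses, but the same number of pages.
    BASE = 0x40000000
    small = [BASE + i * PAGE for i in range(1024)]

    print(len(pages_touched(big)), len(pages_touched(small)))  # 1024 1024
    ```

    Under those assumptions the two layouts touch the same number of pages; the difference GanglyLamb describes only appears when the small arrays are not packed tightly into pages.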

  10. #10
    Registered User Jaqui's Avatar
    Join Date
    Feb 2005
    Posts
    416
    citizen,

    Okay, the game needs better hardware; you go buy a better video card or more RAM and it runs.

    What about Windows Vista? You need to buy a new motherboard, CPU, RAM, hard drive, and video card to use it.

    Vista's features require a 3 GHz 64-bit CPU and 1 GB of DDR RAM to function properly. If you don't have that, then your version of Vista will just be XP with a couple more tools in it.

    And before anyone makes a comment about Longhorn: Longhorn was the development code name for Vista.
    Quote Originally Posted by Jeff Henager
    If the average user can put a CD in and boot the system and follow the prompts, he can install and use Linux. If he can't do that simple task, he doesn't need to be around technology.

  11. #11
    and the Hat of Clumsiness GanglyLamb's Avatar
    Join Date
    Oct 2002
    Location
    between photons and phonons
    Posts
    1,109
    Well, you could make the 4 MB up of 2048 arrays of 2 kB... if no optimization is done by your compiler or whatever, then there's a good chance all these arrays would end up in different pages, creating more internal fragmentation. Depending on which algorithm is used to determine which pages should be removed, there could even be a lot more page faults. Since you will probably be working on that whole 4 MB at approximately the same time, most of the pages that make it up will either have just been used or be yet to be used.

    Now suppose you have a counter that keeps track of how many times a page is used, a timer that decrements the counter as time goes by, and an LFU algorithm.
    With this example you are likely to get more page faults, since there are more pages. As time goes by the counters decrease, making a page more likely to be removed... while it actually shouldn't be removed, since it's still a very active page.

    It's not the algorithm that is bad in this case, it's the design of the program. Since you have more pages, you need more lookups, which means more time passes before all the pages we use in the 4 MB are handled... which could eventually lead to a kind of thrashing effect (although that depends on how many processes are in memory at the time and how much of the memory is being used).

    Of course an OS has implementations in place to deal with this thrashing...

    Anyhow, this is all very theoretical; in practice it is almost impossible to do, since for every other OS / architecture you would need to restructure your entire code (which you don't really expect to do when working with very high-level languages -- you expect the compiler to do it for you...).

    Like I said, it's how your compiler handles these things.

    And like lightatdawn said, there's a limit for everything.
    Last edited by GanglyLamb; 06-02-2006 at 02:03 AM.
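    The counter-with-decay scheme described above can be sketched as a toy simulation (the frame count, decay rate, and sweep workload are arbitrary choices for the illustration, not anything from a real OS):

    ```python
    class AgingLFU:
        """Toy LFU page replacement with periodically decayed use counters."""

        def __init__(self, frames: int):
            self.frames = frames
            self.counts = {}        # resident page -> use counter
            self.faults = 0

        def decay(self):
            """The periodic timer tick: halve every counter."""
            for page in self.counts:
                self.counts[page] //= 2

        def touch(self, page: int):
            if page not in self.counts:
                self.faults += 1
                if len(self.counts) >= self.frames:
                    # Evict the least frequently used resident page.
                    victim = min(self.counts, key=self.counts.get)
                    del self.counts[victim]
                self.counts[page] = 0
            self.counts[page] += 1

    # Sweeping over many more pages than there are frames: the resident set
    # always trails the sweep, so every single touch faults.
    lfu = AgingLFU(frames=4)
    for _ in range(2):
        for page in range(16):
            lfu.touch(page)
        lfu.decay()
    print(lfu.faults)   # 32: all 32 touches missed
    ```

    This is the worst case for the sweep access pattern; as the post says, the algorithm isn't bad in itself, the workload's layout just gives it nothing useful to count.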

  12. #12
    Disrupting the universe Mad_guy's Avatar
    Join Date
    Jun 2005
    Posts
    258
    Quote Originally Posted by Jaqui
    Personally, I say that the software company that insists on forcing a hardware upgrade to use the new version of their software should have to purchase the new hardware for their clients.
    If there were a way to program in which you got a perfect speed-to-size ratio and you didn't have to sacrifice anything to get the best of everything, everybody would program that way. Considering that that approach hasn't come around yet in practical, modern software development, it kind of tells you something, huh?
    operating systems: mac os 10.6, debian 5.0, windows 7
    editor: back to emacs because it's more awesomer!!
    version control: git

    website: http://0xff.ath.cx/~as/

  13. #13
    Crazy Fool Perspective's Avatar
    Join Date
    Jan 2003
    Location
    Canada
    Posts
    2,640
    >I'm not going to even try putting polygon graphics in a game on some cartridge

    Ahem: Mario 64, Zelda (both N64 versions), GoldenEye, most other N64 games...


    Jaqui,

    You forgot about the 128 MB of video RAM Vista needs just to achieve some simple effects OS X has had for years. Every screenshot of Vista I see just looks more and more like OS X gone wrong. If they're gonna copy Apple they shouldn't ruin all the features... more here:
    Visual Tour: 20 Things You Won't Like About Windows Vista

  14. #14
    Registered User whiteflags's Avatar
    Join Date
    Apr 2006
    Location
    United States
    Posts
    7,666
    @Perspective:
    Thanks for taking part of a sentence and destroying its context. Final Fantasy VII, as we know it, will never fit on a cartridge. End.

  15. #15
    Crazy Fool Perspective's Avatar
    Join Date
    Jan 2003
    Location
    Canada
    Posts
    2,640
    @citizen
    >Thanks for taking part of a sentence and destroying it

    You're welcome.
