Thread: software-based rendering

  1. #1
    Registered User rogster001's Avatar
    Join Date
    Aug 2006
    Location
    Liverpool UK
    Posts
    1,472

    software-based rendering

    I was reading recently that there may be a return to pure software-based rendering for graphics, rather than the use of specialised APIs and graphics hardware. Is there any evidence for this? The article noted that it is more flexible this way, but surely that comes at a cost in performance?

  2. #2
    and the hat of int overfl Salem's Avatar
    Join Date
    Aug 2001
    Location
    The edge of the known universe
    Posts
    39,659
    It could be true, in view of the increasing prevalence of multi-core CPUs.

    Your average FPS, for example, doesn't need an awful lot of processing power outside of drawing all the very nice graphics. It's not like, say, chess exploring a deep game tree.

    Then again, you've got groups like these
    GPGPU.org :: General-Purpose computation on Graphics Processing Units
    CUDA - Wikipedia, the free encyclopedia
    busy trying to offload CPU-intensive algorithms onto the GPUs.
    If you dance barefoot on the broken glass of undefined behaviour, you've got to expect the occasional cut.
    If at first you don't succeed, try writing your phone number on the exam paper.

  3. #3
    spurious conceit MK27's Avatar
    Join Date
    Jul 2008
    Location
    segmentation fault
    Posts
    8,300
    Quote Originally Posted by rogster001 View Post
    surely that comes at a cost in performance?
    The cost in performance will be totally atrocious, which is why that hardware blossomed in popularity so much. OpenGL, for example, will run with software emulation (i.e., so that it uses the CPU instead of the GPU).
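
    To see which one you actually ended up with, here's a minimal sketch -- GLUT is assumed purely for window/context creation, and the exact renderer strings vary by driver:
    Code:
    /* Query which OpenGL renderer the context gives you.  A string like
       "GDI Generic" or a Mesa software rasterizer means the software
       fallback is in use instead of the GPU driver. */
    #include <stdio.h>
    #include <GL/glut.h>

    int main(int argc, char **argv)
    {
        glutInit(&argc, argv);
        glutCreateWindow("renderer check");  /* makes a GL context current */
        printf("GL_VENDOR:   %s\n", (const char *)glGetString(GL_VENDOR));
        printf("GL_RENDERER: %s\n", (const char *)glGetString(GL_RENDERER));
        return 0;
    }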

    Here's how it stacks up: code that runs at 300+ FPS using, say, 25% of one processor in hardware mode will run at 30 FPS in software mode while absolutely maxing out one processor constantly.

    So you're looking at an improvement (hardware vs. software) that is at least a factor of 10 -- the raw FPS ratio is 10, and once you account for software mode saturating a core that hardware mode barely touched, the effective gap is more like 40. Remember, the GPU does not have to do anything else: multitask, tend to the OS, etc. It just does vector math really, really well.

    I absolutely, for a fact, promise that you will never, ever see a software-based graphics API that even comes close to the performance of the hardware ones. Which means they are either a waste of time or for use only in peculiar circumstances. And no, there is ZERO chance of a "return to software rendering".

    Quote Originally Posted by Salem
    outside of drawing all the very nice graphics,
    If you consider what goes into every single pixel, say, 50 times per second, that is going to make your "deep chess tree" look like "hello world". (At 1024x768 that's roughly 786,000 pixels, so nearly 40 million per-pixel computations a second.) Maybe not, but neither of them can be considered trivial; in fact, here's an interesting link demonstrating that the chess tree will be faster on a GPU too. Modern GPUs are faster than modern CPUs. Make use of them!

    Coding Horror: CPU vs. GPU
    Last edited by MK27; 01-28-2010 at 10:28 AM.
    C programming resources:
    GNU C Function and Macro Index -- glibc reference manual
    The C Book -- nice online learner guide
    Current ISO draft standard
    CCAN -- new CPAN like open source library repository
    3 (different) GNU debugger tutorials: #1 -- #2 -- #3
    cpwiki -- our wiki on sourceforge

  4. #4
    Officially An Architect brewbuck's Avatar
    Join Date
    Mar 2007
    Location
    Portland, OR
    Posts
    7,396
    I don't see why you think that programming a GPU doesn't qualify as "software." This is exactly how graphics programming is returning from hardware land to software again. GPUs are becoming fast enough, numerous enough, and common enough that it's becoming reasonable again to perform most graphical operations in software (software written for GPUs, not CPUs, but still software), because it provides more control than fixed-function hardware ever did.

    Luckily, a lot of people became lazy and decided that nobody needs to know how graphics actually WORKS anymore, so they all forgot how to do it. Now the few of us who understand it are starting to make a crapload of money.
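
    By way of illustration (a sketch, not anybody's production code): per-pixel logic that fixed-function hardware used to hard-wire is now ordinary source you hand to the driver at run time. Here's a trivial GLSL fragment shader held in a C string; the 800x600 divisors are made-up values for the example:
    Code:
    /* Shading as software that happens to run on the GPU: tint each
       pixel by its screen position. */
    static const char *frag_src =
        "#version 120\n"
        "void main(void)\n"
        "{\n"
        "    gl_FragColor = vec4(gl_FragCoord.x / 800.0,\n"
        "                        gl_FragCoord.y / 600.0, 0.5, 1.0);\n"
        "}\n";
    /* Handed to the driver via glCreateShader(GL_FRAGMENT_SHADER),
       glShaderSource(), glCompileShader(), then linked into a program
       with glAttachShader()/glLinkProgram(). */
    Change the string and the "hardware" behaves differently -- that's the sense in which rendering has moved back into software.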
    Code:
    //try
    //{
    	if (a) do { f( b); } while(1);
    	else   do { f(!b); } while(1);
    //}

  5. #5
    spurious conceit MK27's Avatar
    Join Date
    Jul 2008
    Location
    segmentation fault
    Posts
    8,300
    Quote Originally Posted by brewbuck View Post
    I don't see why you think that programming a GPU doesn't qualify as "software."
    Your semantic point is totally valid, brewbuck, but in the 3D community this is referred to as the difference between hardware and software rendering, though of course you write software to do both. As in, you can use a software emulator in place of the hardware.

    I am not familiar with any style of graphics rendering that does not involve software at all; I would have assumed that kind of thing nonsensical -- and if it is, a distinction drawn along those lines would be pointless.

    And lo, the question was:
    pure software-based rendering for graphics rather than the use of specialised APIs and graphics hardware

  6. #6
    Registered User
    Join Date
    Jan 2010
    Posts
    412
    Quote Originally Posted by brewbuck View Post
    I don't see why you think that programming a GPU doesn't qualify as "software." This is exactly how graphics programming is returning from hardware land to software again. GPUs are becoming fast enough, numerous enough, and common enough, that it's becoming reasonable again to perform most graphical operations in software (software written for GPUs, not CPUs, but it's still software) because it provides more control than the capacities of fixed hardware.
    I guess it comes down to how you define the phrase "software rendering".
    To me, rendering is the rasterization of polygons to pixels, and that is done by hardware.
    But yes, you can control how that rendering happens with software running on the GPU. And I agree that more of the rendering pipeline is done in software these days than a few years back, but I don't think we'll see pure software rendering any time soon.
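
    For reference, the "pure software" version of that rasterization step might look something like this sketch (the names and the 640x480 buffer are illustrative, it assumes one consistent vertex winding, and it brute-forces the whole screen for clarity):
    Code:
    #include <stdint.h>

    #define W 640
    #define H 480
    static uint32_t framebuffer[W * H];   /* 0xAARRGGBB pixels */

    /* Signed area term: which side of edge (ax,ay)->(bx,by) is (cx,cy)? */
    static int edge(int ax, int ay, int bx, int by, int cx, int cy)
    {
        return (bx - ax) * (cy - ay) - (by - ay) * (cx - ax);
    }

    /* e.g. fill_triangle(100, 50, 500, 120, 300, 400, 0xFFFF0000); */
    static void fill_triangle(int x0, int y0, int x1, int y1,
                              int x2, int y2, uint32_t color)
    {
        int x, y;
        for (y = 0; y < H; y++)
            for (x = 0; x < W; x++)
                if (edge(x0, y0, x1, y1, x, y) >= 0 &&
                    edge(x1, y1, x2, y2, x, y) >= 0 &&
                    edge(x2, y2, x0, y0, x, y) >= 0)
                    framebuffer[y * W + x] = color;
    }
    A real renderer would clip to the triangle's bounding box rather than scanning the whole screen, and would interpolate depth and colour along the way -- but that inner loop is essentially the work a GPU's rasterizer does in silicon, for every triangle, every frame.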

  7. #7
    Officially An Architect brewbuck's Avatar
    Join Date
    Mar 2007
    Location
    Portland, OR
    Posts
    7,396
    Quote Originally Posted by MK27 View Post
    And lo, the question was:

    Quote:
    pure software-based rendering for graphics rather than the use of specialised APIs and graphics hardware
    That doesn't really specify 3D in particular. Quite a bit of rendering (as in, scan conversion of abstract graphics entities to a raster form) is still done completely in software in 2D-land: the rendering of page images for printing, Acrobat Reader displaying a PDF on your screen, and so on.

    I used to develop a product which viewed TIFF, JPEG, HPGL/2, PCL5, PCL-XL, CALS, PCX, RTL, and PDF documents. All in software, all from the ground up. (Please don't provoke me by trying to suggest that that isn't "graphics".)

  8. #8
    spurious conceit MK27's Avatar
    Join Date
    Jul 2008
    Location
    segmentation fault
    Posts
    8,300
    Quote Originally Posted by brewbuck View Post
    (Please, don't provoke me by trying to suggest that that isn't "graphics")
    For sure not. Just using libpng makes me grind my teeth slightly; decompressing and interpreting those formats yourself at a low level must be a big job. And without the framebuffer, which is part of that realm, there'd be no 3D.

    But you are also talking about static images, not realtime motion, and there is little purpose to hardware acceleration there. rogster brought up the term "performance", so I presumed that's what he's after.

  9. #9
    Just a pushpin. bernt's Avatar
    Join Date
    May 2009
    Posts
    426
    Well, when you take away some processing power (in this case, the GPU), there is obviously going to be a performance hit. Using the extra cores of a multi-core CPU is always an option, though. The problem that I see is that you buy more than a processor when you get a video card.

    You also get a considerable amount of memory, operating independently of the computer's main RAM, as well as outputs for TV and analog/digital monitors -- not to mention a processor specifically designed for graphics work. The benefits of using video memory are numerous, and as was previously mentioned, vector math is a forte of the GPU. Even though you're ultimately using software to tell the GPU what to do, the performance gain lies in the hardware.

    So, will software rendering come back? In short, no. But an alternate question I'd like to ask is: will "physics cards" ever become popular? I have yet to see anyone who actually owns an NVIDIA PhysX card.

  10. #10
    (?<!re)tired Mario F.'s Avatar
    Join Date
    May 2006
    Location
    Ireland
    Posts
    8,446
    Quote Originally Posted by bernt View Post
    So, will software rendering come back? In short, no.
    But when did software rendering ever go away?

    Let me see... mobile devices, casual games, offline rendering all over the movie industry, fallback paths (I believe some use the DX reference rasterizer?), 2D and 3D rendering in pretty much all of the vector graphics platforms, UI libraries, ...

    Quote Originally Posted by bernt View Post
    Will the 'physics cards' ever become popular? I have yet to see anyone who actually owns a NVidia PHYSX Card.
    (I bought Borderlands the other day and was surprised to see it install NVIDIA PhysX on my system despite my having an AMD card. Talk about shady marketing strategies.)

    Anyway, I don't think there will be a need for physics cards. As far as I understand it, current graphics cards are more than capable of taking on the necessary functionality instead. And they have: NVIDIA's CUDA and ATI Stream provide the hardware performance increase needed to give room for a physics engine. The number of NVIDIA cards that support PhysX is already sizable. As for AMD, I think it currently only supports the Havok FX engine through CrossFire, but I would expect ATI Stream cards to support it soon enough.
    Last edited by Mario F.; 02-05-2010 at 09:18 PM.
    Originally Posted by brewbuck:
    Reimplementing a large system in another language to get a 25% performance boost is nonsense. It would be cheaper to just get a computer which is 25% faster.
