Thread: SLI? Gimmick or performance?

  1. #1
    Registered User VirtualAce's Avatar
    Join Date
    Aug 2001
    Posts
    9,607

    SLI? Gimmick or performance?

    I'd like to hear your opinions on whether you think SLI is a gimmick or a real performance gain.

    I think it's a gimmick. Here's why.

Ok so one video card does half the pixels and the other does the other half. Big deal. How does the other video card know which pixels to render? We are talking about 3D here. You CANNOT know where the pixels will be until you transform the vertices into screen space. Thus you must still send your vertices to one card for processing. That card will transform them and then send the ones that are not in its range of pixels to the next card. Big hairy deal. We already know that one video card can blit more pixels and render more tris in one frame than there are available in any one screen mode. So fill rate is no longer the problem.

    The real bottleneck is getting the vertices to the card, not with the card drawing the pixels to the screen.

    So how does card A know what to send card B w/o first transforming all vertices into screen space?? Now if they found a way to have the cards know which vertices will fall where in screen space, then of course there will be a gain if both cards are transforming vertices.

    But how would you do this? You cannot know which vertices will resolve to which pixels until you do some type of transformation on them.

    I say gimmick.
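A toy C++ sketch of the point above (the struct, function names, and the simplified perspective divide are illustrative only, not from any real driver): a vertex's screen position doesn't exist until after projection, so "which card owns this pixel" can't be decided before the transform stage.

```cpp
#include <cassert>

struct Vec3 { float x, y, z; };

// Toy perspective divide: view space -> normalized device coordinates.
// (A real pipeline applies a full projection matrix first.)
Vec3 project(const Vec3& v) {
    return { v.x / v.z, v.y / v.z, v.z };
}

// In split-frame SLI the owning card depends on screen-space y,
// which only exists after project() has run.
int cardForVertex(const Vec3& viewSpace) {
    Vec3 ndc = project(viewSpace);
    return (ndc.y >= 0.0f) ? 0 : 1;   // card 0: top half, card 1: bottom half
}
```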

  2. #2
    S Sang-drax's Avatar
    Join Date
    May 2002
    Location
    Göteborg, Sweden
    Posts
    2,072
    What you say sounds reasonable...
    Last edited by Sang-drax : Tomorrow at 02:21 AM. Reason: Time travelling

  3. #3
    5|-|1+|-|34|) ober's Avatar
    Join Date
    Aug 2001
    Posts
    4,429
    I hear what you're saying, Bubba, and it makes sense, but I still don't think it is a gimmick. I don't personally know how it all works or claim to understand why it is better, but I really don't think all these companies would put research and time into building hardware to support it if it wasn't worth the effort.

    However, maybe what you're saying is true, and the reason it failed to reach production status several years ago is because they thought it wasn't that much of an advantage. Or maybe they didn't think the current hardware at that time could support it.

    Either way, I'm stuck thinking there has to be something to the talk, otherwise manufacturers wouldn't be "walking the walk".

    Damn I'm cheesy.

  4. #4
    5|-|1+|-|34|) ober's Avatar
    Join Date
    Aug 2001
    Posts
    4,429
    http://www.anandtech.com/video/showdoc.aspx?i=2284

    Maybe this will shed some light?

  5. #5
    Registered User
    Join Date
    Mar 2003
    Posts
    580
    The real bottleneck is getting the vertices to the card, not with the card drawing the pixels to the screen.
    Well, I thought both could be problematic if either are substantially slow, but yeah having to transfer stuff through busses, and accessing memory tends to be teh suk


    So how does card A know what to send card B w/o first transforming all vertices into screen space?? Now if they found a way to have the cards know which vertices will fall where in screen space, then of course there will be a gain if both card's are transforming vertices.
    I don't think it has to. Granted, that could only even theoretically be a problem in one of the two SLI modes, because in the other mode the cards just take turns rendering entire frames (sort of like normal double buffering for a single card, except with two cards). But basically you've got a polygon soup (all your vertices and related data); the SLI driver keeps track of which GPU did the most work in the past few frames and predicts how to best split up the work between the two GPUs. Where the pixel ends up in the frame buffer doesn't ultimately matter that I can see.
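A toy C++ sketch of that load-balancing idea (the step size, clamps, and names are made up for illustration, not taken from NVIDIA's driver): move the horizontal split line toward the GPU that finished its half faster last frame.

```cpp
#include <cassert>
#include <cmath>

// splitY is the fraction of the screen (from the top) given to GPU 0.
// gpu0Ms / gpu1Ms are how long each GPU took on the previous frame.
float rebalance(float splitY, float gpu0Ms, float gpu1Ms) {
    const float step = 0.02f;             // fraction of screen height per frame
    if (gpu0Ms > gpu1Ms) splitY -= step;  // GPU 0 was slower: shrink its share
    else if (gpu1Ms > gpu0Ms) splitY += step;
    // Clamp so each GPU always keeps some of the screen.
    if (splitY < 0.1f) splitY = 0.1f;
    if (splitY > 0.9f) splitY = 0.9f;
    return splitY;
}
```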

    EDIT:
    really good article by the way Ober. Very informative.
    Last edited by Darkness; 03-23-2005 at 10:44 AM.
    See you in 13

  6. #6
    Registered User VirtualAce's Avatar
    Join Date
    Aug 2001
    Posts
    9,607
    Very good article. Now that makes sense. One card renders one frame and then the other one renders the next. While one renders, one processes. Sort of like a circular sound buffer. While one section is playing, the previous/next is loading. Now that would be a performance gain.
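A minimal C++ sketch of that alternate-frame idea (the function name is mine, purely illustrative): whole frames alternate between cards, like the circular sound buffer analogy, so no screen-space split is needed at all.

```cpp
#include <cassert>

// Frames alternate: GPU 0, 1, 0, 1, ... While GPU (n % 2) draws
// frame n, the driver can already queue frame n+1 on the other card,
// overlapping transform work with display work.
int gpuForFrame(int frameIndex, int gpuCount) {
    return frameIndex % gpuCount;
}
```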

    As for the actual SLI mode, again I'm very suspicious of all this branch prediction stuff. So the only way a game can take advantage of SLI is simply to throw all the vertices at the card during loading and then they can choose from that pool. What that article fails to answer is the fact that we are still sending vertices via the bus, be it PCI-E or AGP. So if you need to send vertices to the card during the game, it's possible that SLI will actually be slower than non-SLI. Since video card memory is nowhere near enough to stick all your vertices into, plus textures, etc., I think the other mode where they take turns rendering frames would be the only mode where a performance gain could be attained.

    Well, at least I wasn't too far off. I knew that both cards had to have access to all the vertices because prior to transformation there is no way to know where in screen space that vertex will be.

  7. #7
    5|-|1+|-|34|) ober's Avatar
    Join Date
    Aug 2001
    Posts
    4,429
    Just to be picky, there are no AGP SLI systems. They are all PCI-E.

  8. #8
    Redundantly Redundant RoD's Avatar
    Join Date
    Sep 2002
    Location
    Missouri
    Posts
    6,331
    I, like ober, don't claim to know a lot about it. What I do know is the two PCs I set up using it have seen performance gains, both in FPS and in the smoothness and quality of the gameplay.

  9. #9
    Registered User
    Join Date
    Mar 2004
    Posts
    494
    SLI as it is now is being used/developed only for gaming. There is an increase in performance, as shown in the different benchmarks. http://www.futuremark.com/community/halloffame/
    When no one helps you out. Call google();

  10. #10
    Registered User VirtualAce's Avatar
    Join Date
    Aug 2001
    Posts
    9,607
    Just to be picky, there are no AGP SLI systems. They are all PCI-E.
    I never stated there was AGP SLI. SLI is expressly PCI-E and older PCI. I said that it doesn't matter, because the bottleneck, whether we are using PCI-E or AGP or PCI or ISA or any bus, is ...the bus. That's the bottleneck. Not fill rate, not texture rate, not primitive rendering, not primitive counts, etc. It is the bus. No matter what bus you are using, you've got to get the vertices to the card before anything can happen. So even in an SLI system, the vertices must cross the bus at one time or another. So you would want to pre-load all vertices into the card during loading and never send vertices on the fly. Sending them on the fly would totally screw up the prediction/cache scheme, because you would be constantly adding new data to the mix, which would falsify the results of previous cache estimations of hits/misses on data. In fact, I could see that if the programmer failed to use SLI correctly, it might even be slower than non-SLI.
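A back-of-the-envelope C++ sketch of the bus-bottleneck argument (all the numbers are assumptions for illustration, not measured figures): if vertex data has to cross the bus every frame, bus bandwidth alone caps the frame rate, no matter how fast either GPU is.

```cpp
#include <cassert>
#include <cmath>

// If every frame must stream this many vertex bytes over a bus with
// this much bandwidth, the frame rate can't exceed this, regardless
// of fill rate or triangle throughput.
double busLimitedFps(double vertexBytesPerFrame, double busBytesPerSec) {
    return busBytesPerSec / vertexBytesPerFrame;
}

// Example (assumed numbers): 1M vertices x 32 bytes = 32 MB per frame.
// Over a bus doing ~2 GB/s, that caps out at 62.5 frames/s before
// either GPU has drawn a single pixel.
```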

  11. #11
    5|-|1+|-|34|) ober's Avatar
    Join Date
    Aug 2001
    Posts
    4,429
    Right... and I got your point, but I was trying to clarify for anyone else that might have gotten confused by the statement (I even had to read it a second time to make sure you weren't implying that).

  12. #12
    Registered /usr
    Join Date
    Aug 2001
    Location
    Newport, South Wales, UK
    Posts
    1,273
    SLI's great if you're one of those "uber-l33t" people who overclock their top of the line CPU to 4GHz, using a cooling system reminiscent of a refrigerator (Trust me, I've seen one of these ). They just seem to have money to throw at hardware manufacturers. Hell, half of them probably even call ATI's or nVidia's offices screaming "D00D, LIKE WHERE'S YOUR NEW CARD MAN? WHY AIN'T IT SITTIN IN MY RIG DAWG? ROFL!!!111!!11!!" (Yes, they say the 1's and exclamation marks).

    To summarise, if you have more money than sense/patience, you will buy two very expensive cards to run in SLI mode. These will do you fine for about 3 months, until the next model comes out that is able to match the performance in one card. But lo, it also has an SLI mode! Repeat ad nauseam.

    Game manufacturers will always be aiming for the processing power of a consumer PC around at the time of their game's release. How likely do you think it is that Dell will make SLI standard in their systems?

  13. #13
    Registered User
    Join Date
    Mar 2003
    Posts
    580
    The article admits SLI isn't even always faster than a single card. My single ATI does justice.

    Game manufacturers will always be aiming for the processing power of a consumer PC around at the time of their game's release. How likely is it do you think that Dell will make SLI a standard in their systems?
    nada mucho senor
    See you in 13

  14. #14
    Registered User
    Join Date
    Mar 2004
    Posts
    494
    Quote Originally Posted by SMurf
    To summarise, if you have more money than sense/patience, you will buy two very expensive cards to run in SLI mode. These will do you fine for about 3 months, until the next model comes out that is able to match the performance in one card. But lo, it also has an SLI mode! Repeat ad nauseum.
    Where did you come up with all this info? Have you even searched or read anything about SLI? Not all PCI-Express cards are more expensive than AGP cards. You can buy 2 6600 PCI-E cards for the price of 1 6800GT AGP card.

    Besides, gamers are the ones who push this technology. Do you think that if there was no interest in this field, graphics cards would be where they are now? I don't think so.
    When no one helps you out. Call google();

  15. #15
    Registered User
    Join Date
    Mar 2003
    Posts
    580
    I kind of doubt that SLI will become that popular. Seems like too much of a hassle to have two cards. Also, having two of anything doubles the probability of something going wrong, cuz if one hits the high road the whole system doesn't work.
    See you in 13
