That's pretty cool, Elysia. I think we tend to accept vendor claims pretty much at face value. I wonder how far from the truth many of those claims would turn out to be if people actually put them to the test?
The GPU can be used for AI, as a raw vector-math processor, for physics, and for just about every math-intensive operation out there. The main advantage of the GPU is that it accesses its local on-board video memory over its own bus, which is independent of the speed of the system bus. So if you upload all the data to the GPU and let it operate on it there, it will be faster than the equivalent operation on the CPU simply due to memory access speeds. Plus, the CPU can twiddle its thumbs or do whatever it wants while the GPU works on something else.
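A minimal CUDA sketch of that upload-compute-download pattern (the kernel, sizes, and launch geometry here are made up for illustration, not taken from the test above):

    #include <cuda_runtime.h>
    #include <stdio.h>
    #include <stdlib.h>

    /* Hypothetical kernel: squares each element in place, reading and
       writing only the card's fast on-board video memory. */
    __global__ void square(float *d, int n) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n) d[i] *= d[i];
    }

    int main(void) {
        const int n = 1 << 20;               /* arbitrary size: 1M floats */
        size_t bytes = n * sizeof(float);

        float *host = (float *)malloc(bytes);
        for (int i = 0; i < n; i++) host[i] = (float)i;

        float *dev;
        cudaMalloc((void **)&dev, bytes);

        /* One upload across the shared system bus... */
        cudaMemcpy(dev, host, bytes, cudaMemcpyHostToDevice);

        /* ...then the GPU crunches out of its own memory. The launch is
           asynchronous, so the CPU is free to do other work meanwhile. */
        square<<<(n + 255) / 256, 256>>>(dev, n);

        /* One download to fetch the results (this call synchronizes). */
        cudaMemcpy(host, dev, bytes, cudaMemcpyDeviceToHost);

        printf("host[3] = %.1f\n", host[3]);  /* expect 9.0 */

        cudaFree(dev);
        free(host);
        return 0;
    }

The point is that the two memcpy calls are the only traffic over the shared bus; everything in between runs at on-board memory speed, while the CPU stays free.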
Your test is highly dependent on how that software uses the card to do the work, the model and manufacturer of your video card, the width of the internal memory interface (128, 256, or 512 bit?), the amount of video memory onboard, and the speed of the video card's memory bus.
Well, Vista eats ~6W for the whole system (including LCD with LED backlight and wifi, calculated from the battery reading), so I'm not sure how much lower it can get. Linux eats ~8W.

And you get 10 hours on Vista? I wonder what you would get on Win7, since it's actually even more efficient on battery than XP.
It's just a regular 13" 60Wh battery. The long battery life comes from low consumption, not big battery.
Of course. It was never meant to be an extensive test. Take it with a grain of salt and know that GPU acceleration is not always better. For games, obviously it is, though.
I believe you'll see those figures scale down when the system is idle on Win7.
I haven't used Vista, so I can't use it as a comparison, though. But I know that Win7 beats XP in battery performance.
It can be. The GPU can turn a compute-bound task into an I/O-bound task. They have awesome floating-point horsepower (single precision only, in most cases), but the bandwidth for moving data on and off the card is poor. So if you are doing things like cryptanalysis, weather modelling, FFTs, etc., where you perform a lot of operations on a (relatively) small amount of data, they are great. In my experience, though, unless you are doing at least 12 floating-point operations per byte of data you will not see a performance increase.
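To make that rule of thumb concrete, here is a back-of-envelope sketch (plain host-side C; every rate in it is an assumed, illustrative figure, not a measurement of any real card) that computes the break-even arithmetic intensity at which shipping the data to the card starts to pay off:

    #include <stdio.h>

    int main(void) {
        /* Assumed, illustrative figures -- not measurements. */
        double bus_gbs    = 4.0;    /* host<->card transfer rate, GB/s     */
        double gpu_gflops = 400.0;  /* sustained GPU single-precision rate */
        double cpu_gflops = 20.0;   /* sustained CPU rate                  */

        /* For a task doing r FLOPs per byte of data, the per-byte costs are:
             GPU: 1/bus_gbs (transfer) + r/gpu_gflops (compute)
             CPU: r/cpu_gflops
           Setting them equal gives the break-even intensity: */
        double r = 1.0 / (bus_gbs * (1.0 / cpu_gflops - 1.0 / gpu_gflops));
        printf("GPU pays off above ~%.1f FLOPs per byte\n", r);  /* ~5.3 here */
        return 0;
    }

With a slower bus, sustained rates below peak, or launch overheads folded in, the break-even point climbs, which is consistent with a figure in the ballpark of the ~12 FLOPs per byte quoted above.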