That was always the case until the Intel Core 2 series came along. It's based on a mobile processor design, so those chips run a lot cooler and a lot faster while drawing a lot less power, with insane overclocking headroom.
I run my $50 CPU at the speed of a $200 CPU. The $200 CPU would have used about the same amount of power, since at a fixed voltage, power is roughly proportional to frequency at full load. At idle (which is most of the time), they consume about the same power, because CMOS gates only draw significant power when they switch, so at less than 100% load the power consumption will be similar.

Quote:
And also burn a hole in your electricity bill and your PSU.
I can easily test this out actually. I have a clamp-on multimeter. Just need to strip a power cable (from the wall).
My PSU doesn't mind. It's rated for 380W and I'm using a lot less than that.
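The frequency-scaling claim above can be sanity-checked with the usual CMOS dynamic-power relation, P ≈ a·C·V²·f. A rough sketch, where the 22 W baseline load power and the voltage figures are made-up illustrative numbers, not measurements:

```python
# Hypothetical sketch: CMOS dynamic power scales as P ~ a*C*V^2*f
# (activity factor a, switched capacitance C, core voltage V, frequency f),
# so at the same voltage, full-load power scales linearly with frequency.
# The 22 W baseline and the voltages below are made-up illustrative numbers.

def scaled_power(p_base, f_base, f_new, v_base, v_new):
    """Scale dynamic power by frequency and by the square of voltage."""
    return p_base * (f_new / f_base) * (v_new / v_base) ** 2

# Overclocking 1.8 GHz -> 2.8 GHz at an unchanged core voltage:
print(scaled_power(22.0, 1.8, 2.8, 1.325, 1.325))  # ~34 W, a linear bump
# Raising the voltage too would cost much more, since V enters squared:
print(scaled_power(22.0, 1.8, 2.8, 1.325, 1.45))
```

This is why overclocking without touching the voltage is relatively cheap power-wise, while a voltage bump hits you quadratically.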
I did it.
CPU overclocked to 2.8 GHz, memory at 800 MHz:
idle: 1.13 A (135.6 W)
CPU load: 1.59 A (190.8 W)
GPU load: 2.32 A (278.4 W) (just for fun)

CPU at stock 1.8 GHz, memory at 533 MHz:
idle: 1.07 A (128.4 W)
CPU load: 1.35 A (162 W)
The CPU is an Intel Core 2 Duo E6300. Load is Orthos (essentially two instances of the Prime95 stress test, to load both cores). Idle is sitting at the Windows desktop.
Current was measured on the power cable (on either the hot or the neutral wire... the color coding is weird, hopefully not the ground wire) using a clamp-on multimeter. Power supply efficiency is ~80% at 50% load, according to reviews.
Power consumption is calculated assuming the current and voltage (120 V RMS, measured with a multimeter*) are in phase, which should be pretty close. I don't have an oscilloscope, so I can't verify that.
Now, assuming I keep my computer on 24/7 with the CPU at 100% load 10% of the time (a huge over-estimate), the average power consumption is 141.1 W overclocked (135.6*0.9 + 190.8*0.1) and 131.8 W at stock. The difference is 9.4 W, which works out to 82 kWh over a year.
At an electricity price of 6 cents/kWh, running my computer overclocked costs an additional ~$5/year. A steal, IMHO.
*probably assuming a perfect sinusoidal wave instead of actually doing the integration
As for my PSU, it's rated for 380W (at 80% efficiency, that means 475W on the input side), so it won't mind.
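The duty-cycle averaging and cost arithmetic above can be written out directly, using the measured wall-side wattages and the stated assumptions (10% load duty cycle, 6 cents/kWh):

```python
# The duty-cycle averaging and cost arithmetic from the post above,
# using the measured wall-side wattages.
IDLE_OC, LOAD_OC = 135.6, 190.8   # overclocked, watts
IDLE_ST, LOAD_ST = 128.4, 162.0   # stock, watts
DUTY = 0.10                       # fraction of time at 100% CPU load
PRICE = 0.06                      # dollars per kWh

avg_oc = IDLE_OC * (1 - DUTY) + LOAD_OC * DUTY   # 141.1 W
avg_st = IDLE_ST * (1 - DUTY) + LOAD_ST * DUTY   # 131.8 W
extra_kwh = (avg_oc - avg_st) * 24 * 365 / 1000  # ~82 kWh per year
print(round(extra_kwh), round(extra_kwh * PRICE, 2))  # 82 4.92
```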
Upon thinking about it a bit more: of course the load is not linear. The current shouldn't even be sinusoidal, since there's a rectifier in there!
Therefore, all my calculated power figures are greater than the power actually consumed (and paid for). By how much, I don't know.
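To see the direction of that error, real power can be numerically integrated as the average of v(t)·i(t) over a cycle and compared with the Vrms·Irms product used earlier. A sketch with a made-up 30° phase shift; a real rectifier load is distorted rather than just shifted, but the effect on the sign of the error is the same:

```python
import math

# Numerically integrate p(t) = v(t) * i(t) over one AC cycle to get real
# power, and compare with the Vrms * Irms product (apparent power). Any
# phase shift or waveform distortion makes real power come out smaller.
# The 30-degree phase shift is a made-up example for illustration.

N = 100_000
def real_power(v_wave, i_wave):
    """Average of v(t)*i(t) sampled over one period (t runs over [0, 1))."""
    return sum(v_wave(t / N) * i_wave(t / N) for t in range(N)) / N

vrms, irms, phase = 120.0, 1.13, math.radians(30)
v = lambda t: vrms * math.sqrt(2) * math.sin(2 * math.pi * t)
i = lambda t: irms * math.sqrt(2) * math.sin(2 * math.pi * t - phase)

print(vrms * irms)       # apparent power: ~135.6 VA
print(real_power(v, i))  # real power: ~117.4 W (= 135.6 * cos 30 deg)
```

So the billed (real) power is apparent power times the power factor, which is why the Vrms·Irms figures above are an upper bound.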
Considering you want the system to last, the new HD 5xxx DX11 ATI cards are what you want; but if Cyberfish is right about the Linux drivers, perhaps you should wait for the NVIDIA GTX 3xx series? My guess is that they'll be out before February; just a thought to consider.
PSUs - go for Corsair or Seasonic; both are high quality.
Motherboards - Gigabyte has some very reliable boards.
Memory - lots of options. Stay away from Elixir; I'd go GeIL or Kingston. Their Value series might not overclock well, but it has reliability and price on its side.
HDDs - WD Black series; don't worry about SSDs.
CPU - depends on budget, really. P55/Core i5 is nice, and so is Phenom II; get the Phenom II 920 or the i5 860. i7 is not a sane choice at the moment, IMO.
I have never used an ATI card myself, so I can't speak from experience, but a fairly experienced Linux friend of mine just bought an ATI card, spent a week trying to get it to work (searching frantically online and everything), returned it (paying the restocking fee), and got an NVIDIA card. I hear there are still stability issues (crashing X once in a while), and the driver doesn't even fully support some of their later cards. Rumour also has it that 2D performance is poor and 3D is pretty much unusable.
The NVIDIA driver is pretty much perfect.
I would really go with NVIDIA. It will at least save you a lot of frustration.
I know someone who actually has an ATI AGP (it was back then) card for gaming, and a cheap NVIDIA PCI card just for Linux.
I will never understand ATI's decision to keep their drivers closed source; they sell graphics cards, not drivers. You would figure they would be thrilled to have the open-source community figure out (for free) how to write drivers that actually work. It's not like ATI has cutting-edge technology that NVIDIA is looking to steal by reverse-engineering their hardware merely from the driver interface.
ATI cards suck bad enough on their own, keeping the drivers closed source just makes it worse.
ATI has licensed third-party technology that would be exposed if they open-sourced their drivers. They can't do that.
The third one is an ATI 4350 and it works fine, I had no problems with drivers, etc. I'm developing GL stuff now and the thing works great; rather than getting frame rates in the teens and maxing out one core, I'm getting rates in the hundreds (simple stuff: over 1000) and the CPU runs at like 25-30%, which is great.
The reason I tried NVIDIA first is precisely because of stories like Cyberfish's, although when I dug deeper, it seemed to me that NVIDIA in fact puts up more obstacles with respect to Linux (although this hopefully does not apply to their own drivers).
That is probably the trickiest part, though. You might want to order everything else online cheap, then pick up the video card at a store where you can easily return or exchange it.
Which perhaps doesn't matter much from an end user perspective: they just build a kernel module and install it, hooray.
Tangent: apparently NVIDIA is putting up a fuss about the deprecation of immediate mode and the fixed-function pipeline in OpenGL 3.2, saying they won't be dropped on their cards. That is kind of a developer-friendly stance, since some software is intentionally written to run on old pre-GL-1.5 cards that only support a fixed pipeline; I imagine in institutions like schools, etc.
I wouldn't mind undervolting or overclocking my own cpu without raising the voltage.
I would have thought it would be a higher wattage.
Also, concerning the PSU: don't pay mind to the headline wattage "rating." It's marketing. What really matters is the number of amps it can put out on its critical rails. My PSU, for example, is rated at 400 W, yet it cannot handle my GPU at full load, which pushes the system to around 250-300 W.
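One way to sanity-check a PSU against a GPU-heavy build is to add up what its +12 V rails can actually deliver, since the CPU and GPU draw almost entirely from +12 V. A sketch with made-up label figures, not a specific PSU's specs:

```python
# Sanity-checking a PSU against a GPU-heavy load by summing what the +12 V
# rails can deliver. The rail currents and the load estimate below are
# made-up example figures, not taken from a real PSU's label.
RAIL_12V_AMPS = [14.0, 15.0]                # per-rail +12 V limits from the label
usable_12v_watts = sum(RAIL_12V_AMPS) * 12  # 348 W, best case
# (Real labels often cap the combined +12 V output below this sum.)
load_12v_watts = 280.0                      # estimated CPU + GPU draw under load
print(usable_12v_watts, usable_12v_watts >= load_12v_watts)  # 348.0 True
```

A PSU can carry a big headline number while its +12 V capacity is far smaller, which is exactly the trap described above.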
Something you might find helpful:
Best computer you can get for $X.
Those lists are usually worthless outside the US, since they assume US components and US resellers. Not everyone lives in the US, as you know.
But now we complain that 8 GB of memory that is 100x faster than in the old days is pricey.
(BTW, including 45 GB of storage in the form of five 9 GB Seagate Barracuda SCSI drives, I came in under budget at $25K. The client was thrilled.)