Thread: BSOD after OC

  1. #31
    (?<!re)tired Mario F.'s Avatar
    Join Date
    May 2006
    Location
    Ireland
    Posts
    8,446
    Quote Originally Posted by CommonTater View Post
    When all goes well --and it doesn't always-- that is what should happen. However, they're not going to simply turn off the processor or the computer... too much risk of data loss. Consider that 10 million record customer file that is about 70% complete when it overheats... The last thing you want is a shutdown. So they engage cooling strategies as you pointed out *trusting* they will work as advertised.
    You really need to research this area. They will shut down. Just like that. It's called a therm-trip. However, even TJMax is not the maximum heat under which a modern processor can operate. It's instead the temperature past which the processor will adopt its first failsafe mechanism: frequency throttling. Only if this mechanism fails or proves inadequate will the CPU therm-trip. And therm-trip it will, whether you like your data or not.
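The escalation just described (throttle first, therm-trip only as the last resort) can be sketched roughly as follows. The threshold values and names here are illustrative, not Intel's actual figures; the real logic lives in hardware:

```python
# Rough sketch of the thermal failsafe escalation: throttle at the TCC
# activation point, hard-shutdown (therm-trip) only well past it.
# Both thresholds below are illustrative, not datasheet values.
TCC_ACTIVATION = 100   # typical TJMax/TCC activation figure for these parts
THERM_TRIP = 125       # hard shutdown point, somewhere above TJMax

def thermal_response(core_temp_c):
    """Return the action a CPU would take at a given core temperature."""
    if core_temp_c >= THERM_TRIP:
        return "therm-trip"   # immediate hardware shutdown, data or not
    if core_temp_c >= TCC_ACTIVATION:
        return "throttle"     # first failsafe: frequency/clock modulation
    return "normal"

print(thermal_response(70))    # normal operation, no failsafe engaged
```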

    Meanwhile, TJMax is associated with the processor's announced TDP (thermal design power). Essentially, TDP determines how much heat the processor can dissipate without exceeding TJMax using the stock active cooling mechanism. Modern processors, in the case of Intel especially those around the Nehalem architecture, have comparably high TDPs while coming (particularly those with the 45nm manufacturing process and better) with very low power requirements. This is exactly what gives these processors their overclocking headroom. So safe, in fact, that the days of water cooling are now reserved for extreme OCing.

    The Nehalem architecture and the new manufacturing processes (45nm and better) have really reshaped OCing as you used to know it. Meanwhile, failsafe mechanisms like throttling and therm-trip have made it actually more difficult for you to melt a CPU.

    Ok... go back to my first message on this and re-read the part about temperature gradients. You can have a TJMax - 30 reading at the sensor, but at some point away from the sensor you might actually be getting very close to the limit unless your cooling solution is adequate to keep the whole chip evenly warm. (As I pointed out before, this is an artifact of single-sided cooling...)
    I'm not sure I understand. But for my CPU I have in fact 5 independent and differently placed temp readouts. One for each core, and the fifth one... I'm not sure where exactly. In my case you could even sort of consider a 6th one, since the motherboard sensor is placed under the CPU.

    I think this layout is pretty standard these days.
    Originally Posted by brewbuck:
    Reimplementing a large system in another language to get a 25% performance boost is nonsense. It would be cheaper to just get a computer which is 25% faster.

  2. #32
    Banned
    Join Date
    Aug 2010
    Location
    Ontario Canada
    Posts
    9,547
    I don't think you quite realize that you are agreeing with me...

    The only thing I'm saying that you aren't is that you should not blindly trust safety mechanisms to get your butt out of the fire. It is a lot safer to not step into the fire in the first place... you should be designing and adjusting your system so it *never gets there*.

    What has changed lately is not so much technical as it is an increasing lack of precaution... and not just around computers. People, in general, seem to think there's no reason not to take things right to the absolute limit... and then they're all amazed when it turns into a foul-up.

    Call me old fashioned, but I still believe in "margins of safety" and "common sense precaution".

  3. #33
    Registered User
    Join Date
    Dec 2006
    Location
    Canada
    Posts
    3,229
    The CPU's fail-safe mechanisms are for accidental protection. E.g. if the cooler is not secured well and falls off while the computer is running.

    I'm pretty sure extended exposure to temperatures close to the fail-safe threshold will damage the processor.

    In electronics specifications there are almost always 2 ratings - absolute maximum ratings and operating ratings.

    Normal operation should be within operating ratings. Anything between operating ratings and absolute maximum ratings will be fine in the short term, but will damage the chip in the long term. Proper operation is usually not guaranteed above operating ratings (i.e. non-functional but no permanent damage). Anything above absolute maximum ratings can and often does cause instantaneous permanent failure or severe weakening of the chip.

    I believe the temperature at which it starts throttling is the operating maximum.

  4. #34
    Internet Superhero
    Join Date
    Sep 2006
    Location
    Denmark
    Posts
    964
    Quote Originally Posted by cyberfish View Post
    Normal operation should be within operating ratings. Anything between operating ratings and absolute maximum ratings will be fine for short term, but will damage the chip in long term. Proper operation is usually not guaranteed above operating ratings (ie. non-functional but no permanent damage). Anything above absolute maximum ratings can and do often cause instantaneous permanent failure or severe weakening of the chip.
    Regarding the long-term damage to an overheated chip: it's called electromigration, and it occurs in all chips at all temperatures, but gets worse the more current and heat the chip is exposed to. This will not happen from day to day.

    Regarding the short-term damage to an overheated chip: isn't that what the aforementioned fail-safes are for? The days of exploding/smoking CPUs are over; even if someone were to purposefully attempt smelting a modern CPU, it would be next to impossible to pull off without opening the case and soaking the innards in nitroglycerine.
    Having done some fairly nasty things to modern CPUs and motherboards, I believe I can safely say that the modern microprocessor is next-to idiot proof.
    How I need a drink, alcoholic in nature, after the heavy lectures involving quantum mechanics.

  5. #35
    (?<!re)tired Mario F.'s Avatar
    Join Date
    May 2006
    Location
    Ireland
    Posts
    8,446
    Quote Originally Posted by cyberfish View Post
    The CPU's fail-safe mechanisms are for accidental protection. Eg. if the cooler is not secured well and falls off while the computer is running.

    I'm pretty sure extended exposure to temperatures close to the fail-safe threshold will damage the processor.
    This is a point of contention, I'll give you that. But it's not how these numbers are to be interpreted. These are safeguard mechanisms meant exactly to avoid damage. More below.

    Quote Originally Posted by cyberfish View Post
    In electronics specifications there are almost always 2 ratings - absolute maximum ratings and operating ratings.

    Normal operation should be within operating ratings. Anything between operating ratings and absolute maximum ratings will be fine for short term, but will damage the chip in long term. Proper operation is usually not guaranteed above operating ratings (ie. non-functional but no permanent damage). Anything above absolute maximum ratings can and do often cause instantaneous permanent failure or severe weakening of the chip.

    I believe the temperature it starts throttling is the operating maximum.
    Yes. There are two numbers indeed. Intel publishes one and not the other. TJMax is actually not the most important one. It only serves us as the base value for temperature calculations, because the sensors report distance-to-TJMax. The Case Thermal Specification (Tcase) is the most important. And it is the one Intel publishes. This is usually set at around 20 C below TJMax and is very dependent on TDP. Tcase is the base for the usual 20-25 C range to TJMax you hear OCers always talking about. They want their CPUs under load to stay here or below it. Not because there's any risk of damage or degradation, but because these are fully safe operational values that will guarantee system stability. But if you exceed Tcase, you still have ~20 degrees to go before you hit TJMax. The thing is, regardless of the effects this can have on system stability (including the processor's own functions), this will not damage or degrade your CPU.

    TJMax is not the point of damage. The point of damage is set somewhere above this, within a secure range (maybe 10 C or 20 C above that). It would not make sense to set the point at which the CPU starts throttling or asserting at a point where it is already receiving damage. It must be done before that. TJMax is no different than any other junction temperature that establishes a safe absolute maximum value.

    TJMax is as such still within Intel's operational limits, but considered as the absolute maximum. Now, I may have to search for this if you ask me to produce the citation, but Intel is very clear in their datasheet when they say that any operation below absolute maximum values guarantees your CPU a long life and incurs no degradation.

    So no, you will not damage or degrade the life of your CPU by coming between Tcase and TJMax.
    Last edited by Mario F.; 02-22-2011 at 05:01 PM.

  6. #36
    Registered User
    Join Date
    Dec 2006
    Location
    Canada
    Posts
    3,229
    But if you exceed Tcase, you have still ~20 degrees to go before you hit TJMax.
    You don't have 20 degrees to go. Tcase IS ~20 degrees cooler than TJMax already.

    Actually, there is a linear relationship between (TJMax - Tcase) and power dissipation of the chip. The proportionality constant is called junction to case thermal resistance, measured in degrees per watt, and is usually published in datasheets.

    If you have a thermal resistance of 2C/W, that means, if the core is dissipating 20W, it will have to be 40 degrees hotter than the case. There is nothing you can do about this.

    There is another thermal resistance between case and ambient. That would be the primary parameter in determining the effectiveness of a cooling solution. This is the one you can change, by getting a better cooler.
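The two thermal resistances add in series, so the junction temperature follows directly from ambient. A quick sketch with made-up numbers (the 2 C/W and 20 W figures are the example from the post above; the ambient and case-to-ambient values are arbitrary):

```python
# Junction temperature from ambient via two thermal resistances in series.
# theta_jc (junction-to-case) is fixed by the silicon and package;
# theta_ca (case-to-ambient) is the part a better cooler improves.
def junction_temp(t_ambient, power_w, theta_jc, theta_ca):
    t_case = t_ambient + power_w * theta_ca    # cooler determines case temp
    t_junction = t_case + power_w * theta_jc   # package determines the rest
    return t_case, t_junction

# 20 W through 2 C/W junction-to-case: the core sits 40 degrees above
# the case no matter how good the cooler is, exactly as described above.
t_case, t_j = junction_temp(t_ambient=25, power_w=20, theta_jc=2.0, theta_ca=0.5)
print(t_case, t_j)   # 35.0 75.0
```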

    I haven't read the Intel datasheets in a long time, so you could be right about TJMax being safe, but I have never seen any chip manufacturer guarantee this, and I have read the datasheet for a few hundred chips (no exaggeration) from about a dozen manufacturers.

    Fail-safe mechanisms are meant to safeguard against immediate damage from exceptional circumstances. They are not designed to be triggered repetitively in normal operation. They are a form of damage control.

    Just because a chip has fail-safe mechanism at X does not mean it is safe to operate it up to X. It's just an added protection.

    It's important to note, however, that Tcase and TJMax don't define a range. They are measurements done at different places.

  7. #37
    Banned
    Join Date
    Aug 2010
    Location
    Ontario Canada
    Posts
    9,547
    Quote Originally Posted by Mario F. View Post
    Yes. There are two numbers indeed. Intel publishes one and not the other. TJMax is not actually not the most important one. It only serves us because it's the base value for temperature calculations because the diodes publish distance-to-TJMax.
    No they don't... they produce a current that is dependent upon their temperature. When you heat silicon you change its characteristics... Vj, the diode junction voltage, is very temperature dependent but not linearly accurate. When they reach a certain Tj (usually in the 100 C range) the junction cascades and they begin to conduct heavily, and this sudden change in current is what triggers the thermal protections. This is brought out to a pin so we can look at it, but in fact the tolerances are very bad; the temperature, which is calculated from the voltage across a resistor, is likely to be plus or minus 5 C of actual.

    The Case Thermal Specification (Tcase) is the most important. And it is the one Intel publishes. This is usually set at around 20 c below TJMax and is very dependent on TDP. Tcase is the base for the usual 20-25c range to TJMax you hear OCers always talking about. They want their CPUs under load to say here or below it. Not because there's any risk of damage or degradation, but because these are fully safe operational values that will guarantee system stability. But if you exceed Tcase, you have still ~20 degrees to go before you hit TJMax. The thing is, regardless of the effects this can have on system stability (including the processor own functions), this will not damage or degrade your CPU.
    No, you don't have a 20 C safety margin at Tcase... In fact, bringing Tcase up to max with an inadequate cooling solution might already have the inner parts of the cores over TJMax.

    Once again, on these CPU chips you are only applying a cooling solution to *one side* of the chip. From the side bonded to the heat spreader to the opposite side you can have a linear increase in temperature of several degrees... A heat gradient.

    You get the idea of a color gradient, yeah? Black to bright red as a constant linear fade? That happens with heat in these chips.

    TJMax is not the point of damage. The point of damage is set somewhere above this within a secure range (maybe 10 c or 20 c above that). It would not make sense to set the point at which the CPU starts throttling or asserting where it is already receiving damage. It must be done before that. TJMax is no different than any other junction temperature that establishes a safe absolute maximum value.
    I suppose that's why Intel publishes it under "Absolute Maximum Ratings"...

    Yes, it might be safe to hit TJMax for a few seconds; beyond that the risk of damage increases exponentially. No joke... that TJMax = 100 C means just that... 101 C and you're likely to cook the chip. The thermal safeties are as close to that as they can get, to prevent false trips. They do not exist as a feature... they exist as a last-ditch effort to save the chip *because* your cooling solution is inadequate.

    TJMax is as such still within Intel's operational limits, but considered as the absolute maximum. Now, I may have to search for this if you ask me to show up the citation, but Intel is very clear in their datasheet when they say that any operation below absolute maximum values guarantees your CPU a long-life and incurs in no degradation.

    So no, you will not damage or degrade the life of your CPU by coming between Tcase and TJMax.
    You need to rethink that... as I've been trying to explain --against some interesting resistance-- the measured temperatures on the surface of the chip or its cores do not fully reflect the actual junction temperatures within the chip... Because you have a large heat sink drawing off heat from one side of the chip, the side opposite it can easily be considerably hotter than where case meets fins.

    TcaseMax is actually telling you when the innards are about to hit TjMax, with their recommended cooling solutions... It's not some safety margin that you can safely enter and re-enter with no concern for damage... it is the "red line" at which things start to go wrong.

  8. #38
    Banned
    Join Date
    Aug 2010
    Location
    Ontario Canada
    Posts
    9,547
    Quote Originally Posted by cyberfish View Post
    I haven't read the Intel datasheets in a long time, so you could be right about TJMax being safe, but I have never seen any chip manufacturer guarantee this, and I have read the datasheet for a few hundred chips (no exaggeration) from about a dozen manufacturers.
    Nope... it's still published under "Absolute Maximum Ratings". Still the red line just before meltdown...

    Fail-safe mechanisms are meant to safeguard against immediate damage from exceptional circumstances. They are not designed to be triggered repetitively in normal operation. They are a form of damage control.
    Exactly, they're like a window washer's safety rope... it's there but we hope it never gets used.

    It's important to note that, however, Tcase and TJMax don't define a range. They are measurements done at different places.
    Actually, Cyberfish, they are *equivalent* measurements done at different places.

    I've been watching the OC gang for some time. Now, to be honest, I'm not a big fan of pushing thermal and electrical limits in the interests of barely noticeable performance gains, but for them it's something of a hobby. What I see them doing is disregarding safety limits because they don't actually care if the chip fails after their experiments. "Longevity" is simply not in their dictionaries. In many cases specifications are deliberately ignored or misinterpreted for the purposes of the experiment.

    Then along comes someone with a good working knowledge of computers at the module level, but no real electronics insight; they read the OC pages and fail to understand they are reading about things operating beyond tolerance, think they can do this for years at a time... and they end up frying their motherboards. Then, to make it worse, they can't understand how it happened... "That OC guy on the web ran his faster than mine"... but missing the trailing "for about 8 minutes before it failed".

    You are very correct in pointing out that unless you can afford to be without the computer, you do not play fast and loose with thermal, electrical or physical safety limits.

  9. #39
    (?<!re)tired Mario F.'s Avatar
    Join Date
    May 2006
    Location
    Ireland
    Posts
    8,446
    Quote Originally Posted by cyberfish View Post
    I haven't read the Intel datasheets in a long time, so you could be right about TJMax being safe, but I have never seen any chip manufacturer guarantee this, and I have read the datasheet for a few hundred chips (no exaggeration) from about a dozen manufacturers.
    Well, Intel gives you that guarantee. This is their thermal specification (pdf) for the i5 700 series. Meaningful quotes are:

    Quote Originally Posted by 6.1
    To allow the optimal operation and long-term reliability of Intel processor-based systems, the processor must remain within the minimum and maximum case temperature (TCASE) specifications as defined by the applicable thermal profile.
    Quote Originally Posted by 6.2.1
    A new feature in the processors is a software readable field in the IA32_TEMPERATURE_TARGET register that contains the minimum temperature at which the TCC will be activated and PROCHOT# will be asserted (in the i5 and i7 series this is known to be 100 c under most configurations -- Mario)".
    Quote Originally Posted by 6.3.2
    The DTS (digital thermal sensor -- Mario) temperature data is delivered over PECI, in response to a GetTemp() command, and reported as a relative value to TCC activation target. The temperature data reported over PECI is always a negative value and represents a delta below the onset of thermal control circuit (TCC) activation, as indicated by PROCHOT#.
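Since the DTS value in 6.3.2 is a delta below TCC activation, turning a reading into an absolute temperature is a single addition. A sketch; the 100 C activation target below is the figure quoted in this thread for the i5/i7 series, not a universal constant:

```python
# The DTS reports a negative delta below the TCC activation target
# (TJMax in practice), so absolute core temperature = target + delta.
def dts_to_celsius(tcc_activation_target, dts_delta):
    assert dts_delta <= 0, "PECI temperature data is always a delta <= 0"
    return tcc_activation_target + dts_delta

print(dts_to_celsius(100, -35))   # 65: a core reading 35 below TJMax
```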
    The actual failsafe mechanisms (let's call them TCC) are described in 6.2.2. It is also made clear that TCC will in fact be fired close to or around TJMax:

    When the TCC activation temperature is reached, the processor will initiate TM2 in attempt to reduce its temperature. If TM2 is unable to reduce the processor temperature then TM1 will be also be activated. TM1 and TM2 will work together (clocks will be modulated at the lowest frequency ratio) to reduce power dissipation and temperature. With a properly designed and characterized thermal solution, it is anticipated that the TCC would only be activated for very short periods of time when running the most power intensive applications. The processor performance impact due to these brief periods of TCC activation is expected to be so minor that it would be immeasurable. An under-designed thermal solution that is not able to prevent excessive activation of the TCC in the anticipated ambient environment may cause a noticeable performance loss, and in some cases may result in a TCASE that exceeds the specified maximum temperature and may affect the long-term reliability of the processor.
    I hope this clarifies things a bit. Failsafe mechanisms are meant to take place within safe (but excessive) temperature values. Contrary to what you say, they are not the sign of impending danger. They are methods to avoid reaching the point of impending danger.

    TM1 and TM2 above are throttling mechanisms, as you can read starting at 6.2.2.1. With proper thermal conditions (a cooler that follows Intel guidelines and is appropriate to the current CPU configuration) TCC will be entered and exited. This is expected and does not damage or reduce processor life. The last failsafe mechanism is therm-trip, and this one indeed seems to take place after possible damage. Intel's specifications are however very clear about the duration of each failsafe mechanism and the conditions under which they are fired. Including when the computer should shut down before therm-trip, exactly to avoid damage. The BIOS should know when to cut power before damage... which it will.

    If TM1 and TM2 have both been active for greater than 20 ms and the processor temperature has not dropped below the TCC activation point, then the Critical Temperature Flag in the IA32_THERM_STATUS MSR will be set. This flag is an indicator of a catastrophic thermal solution failure and that the processor cannot reduce its temperature. Unless immediate action is taken to resolve the failure, the processor will probably reach the Thermtrip temperature (see Section 6.2.3 Thermtrip Signal) within a short time. In order to prevent possible permanent silicon damage, Intel recommends removing power from the processor within ½ second of the Critical Temperature Flag being set.
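For reference, the Critical Temperature Flag and DTS readout that quote mentions live in the IA32_THERM_STATUS MSR. A sketch of decoding a raw value; the bit positions are from my reading of Intel's documentation and should be verified against the SDM for your part:

```python
# Decode a raw IA32_THERM_STATUS value (bit positions per the Intel SDM;
# verify against your processor's datasheet before relying on them).
def decode_therm_status(msr):
    return {
        "thermal_status": bool(msr & (1 << 0)),   # TCC currently active
        "critical_temp": bool(msr & (1 << 4)),    # catastrophic-failure flag
        "digital_readout": (msr >> 16) & 0x7F,    # degrees below TCC target
        "reading_valid": bool(msr & (1 << 31)),
    }

# A valid reading 28 C below TCC activation, with no flags set:
raw = (28 << 16) | (1 << 31)
print(decode_therm_status(raw))
```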
    Nowhere did I say it is OK to enter TJMax. But what I do want to make clear (and Intel does too) is that TJMax is not the point of damage. So let's not get overexcited. Fortunately, processors have come a long way since the time you could melt them, and are rapidly leaving behind the time you could damage them.

    These processors are meant to be overclocked. Intel publishes all the required data (I was wrong in fact that it didn't publish TJMax. It does through IA32_TEMPERATURE_TARGET). As long as an OCer respects the ranges for temperatures, voltages, etc, there's no way they can damage their CPU.

    There would be absolutely no value in these ranges otherwise, if they only meant you could damage it when you approached the max limit. We would be back to 90s technology. Much has happened since.
    Last edited by Mario F.; 02-22-2011 at 08:41 PM.

  10. #40
    Banned
    Join Date
    Aug 2010
    Location
    Ontario Canada
    Posts
    9,547
    As long as an OCer respects the ranges for temperatures, voltages, etc, there's no way they can damage their CPU.
    Famous last words, my friend.... Famous last words.

    3 days ago you were begging help for a failed OC on your system...
    today you are lecturing engineers and technicians about it.

    "Yesterday I couldn't spell egg-spurt, today I are one".
    Last edited by CommonTater; 02-22-2011 at 09:27 PM.

  11. #41
    (?<!re)tired Mario F.'s Avatar
    Join Date
    May 2006
    Location
    Ireland
    Posts
    8,446
    Quote Originally Posted by CommonTater View Post
    Famous last words, my friend.... Famous last words.

    3 days ago you were begging help for a failed OC on your system...
    today you are lecturing engineers and technicians about it.

    "Yesterday I couldn't spell egg-spurt, today I are one".
    Oh that's rich!

    I feel like the student that has to lecture the teacher, in fact. I'm a fast learner alright (not that any of this is particularly difficult, though). But I didn't expect that on the other side of this conversation I was having someone who speaks from their ass.

    For a technician, I'm surprised at your lack of knowledge of modern processor features and procedures. Or that you confuse extreme OC, done by kids who enjoy pushing their processors to the limit (and who never fit in this discussion), with normal overclocking well within Intel's published specifications. That I have been the only one so far to produce official documentation to back up my words is already no surprise; I'm used to you "engineers and technicians" who got stuck in a time when they still understood how things worked. What's slightly discouraging, though, is that after all the work I went through (researching, reading, cutting, pasting and building an argument), the best I get from an "engineer and technician" is an admission of their ignorance done with an insult.

    Have a nice day.

  12. #42
    Banned
    Join Date
    Aug 2010
    Location
    Ontario Canada
    Posts
    9,547
    Mario... you know this from paper... lots of nice numbers and pretty graphs... which I have studied for each new processor I work with, and lord only knows how many audio and discrete devices.

    But the nice papers only tell you so much. There is an entire world of theoretical and practical knowledge not contained on those pages...

    One can safely assume that after 30+ years in electronics, 11 of which were at design level, I might have picked a few things up along the way...

  13. #43
    Registered User
    Join Date
    Dec 2006
    Location
    Canada
    Posts
    3,229
    I don't think experience really has anything to do with this.

    I tend to trust the datasheet because who knows more about a chip than the manufacturer?

    Looking at the Core i7 datasheet (ftp://download.intel.com/design/proc...hts/320834.pdf), it looks like they define the thermal specifications mostly based on expected cooler performance (in terms of thermal resistance) under different conditions, as opposed to temperature the processor can handle.

    The only temperature rating is Tcase, which curiously depends on power dissipation (even more curiously, higher dissipation allows higher temperature! it's usually the opposite).

    For 130W TDP CPUs, the number given is 67.9C. Unfortunately, that is very difficult to measure (you need to stick a temperature probe between the heatspreader and the heatsink, at the geometric center).

    The processor reports TJunction, but there's no specification for that.

    And -
    There is no specified correlation between DTS temperatures and processor case temperatures; therefore it is not possible to use this feature to ensure the processor case temperature meets the Thermal Profile specifications.
    So it seems like Intel intends the 2 maximums to be taken independently.

    1) Tcase has to be lower than 67.9C. But there is no easy way to measure this, and has to be guaranteed by system design.
    2) The junction maximum is different for different CPUs (specified in the IA32_TEMPERATURE_TARGET register). The difference between the current temperature and IA32_TEMPERATURE_TARGET is a field readable by software. It should not exceed 0.

    Intel actually didn't say anything about longevity of the CPU vs temperature, though I think it can be assumed that if their guidelines are followed, the CPU will at least last the warranty period.

    It's simple physics that high temperature causes degradation over time, due to electromigration (current flowing through tiny traces on the chip can physically deform the traces and cause damage), which is more severe at higher temperatures. My specialty is not nano tech so I don't know much about the details, but this is well accepted knowledge.

    That said, it's possible that the difference won't matter at all.

    If running it at 80C will shorten the lifetime of the processor to 10 years from 20 years if you ran it at 20C, who cares when we only use CPUs for 2-3 years before replacing them?

    Since Intel doesn't publish these numbers, it's impossible to say.

  14. #44
    Banned
    Join Date
    Aug 2010
    Location
    Ontario Canada
    Posts
    9,547
    Quote Originally Posted by cyberfish View Post
    Since Intel doesn't publish these numbers, it's impossible to say.
    This is where common sense and experience come to play...

    You have pinpointed one cause of heat damage, electromigration, but heat also causes junction failures and disconnects... it may not destroy the chip per se, but it may leave you with a CPU with bad op-codes or bad memory in the caches...

    Not all failures are a total meltdown... sometimes they are far more subtle than that.

    The general rule for Intel is to keep them under 60c whenever possible, for AMD it's 70. Personally I like to see both running in the mid-40s maximum... and given the power dissipation of some of these chips, quite often that's only achievable by liquid cooling.

    Just in case you've not done the math... 60 watts at 1.3 volts is... 46 AMPS of current flowing through those microscopic cores. You can jump start a car with less...
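That arithmetic follows straight from P = V × I; rearranged, I = P / V:

```python
# Core current at a given package power and core voltage: I = P / V.
def core_current_amps(power_w, vcore):
    return power_w / vcore

# 60 W at 1.3 V works out to roughly 46 A through the cores.
print(round(core_current_amps(60, 1.3), 1))   # 46.2
```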

  15. #45
    (?<!re)tired Mario F.'s Avatar
    Join Date
    May 2006
    Location
    Ireland
    Posts
    8,446
    Quote Originally Posted by CommonTater View Post
    The general rule for Intel is to keep them under 60c whenever possible, for AMD it's 70. Personally I like to see both running in the mid-40s maximum... and given the power dissipation of some of these chips, quite often that's only achievable by liquid cooling.
    Modern Intel processors work at very small voltages and have much higher TDPs. Liquid cooling has been relegated to extreme overclocking in most cases, no longer being a requirement to overclock an Intel processor within recommended max values and still guarantee 60 C under stress.

    This is an i5 760 overclocked from 2.8 to 4 GHz (~43%), running under an air cooler and never reaching 60 degrees. All values (voltage, multiplier, FSB) are within the safe ranges published by Intel. Thanks to improvements in technology and manufacturing, along with improvements in air cooling techniques (including the awesome heatpipes), you don't need liquid cooling for OCing these days. In fact, for the past years the whole interest in liquid cooling has mostly shifted towards the noise factor. And its value as a cooling mechanism has been all but entirely lost, except as an entry level for extreme OCing (overclocking above published specifications).

    Will that processor be more damaged than a non-OCed one? Yes.
    By how much? Days perhaps. Maybe weeks. One or two months at most. It will however live out its warranty and, judging from my 25-year experience (yeah, I have that too) with Intel processors, outlast it by a factor of at least 2.
