(Link: EnerJ, the Language of Good-Enough Computing - IEEE Spectrum)
I came across this article in a trade publication, and found the idea interesting.
Basically, software's insistence on exact computation can demand a lot of power from the hardware in many applications. A proposed solution for reducing power consumption is to allow controlled imprecision in certain calculations. If certain values were "allowed" to be less precise, the (not yet existing) hardware could take steps to lower power, such as running certain parts of the chip at lower voltages; lower voltages mean less power consumption at the cost of accuracy.
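To get a feel for the tradeoff, here's a small Python sketch (my own illustration, not anything from the article) that mimics "cheaper" storage by simply zeroing out low-order mantissa bits of a 64-bit float. Real approximate hardware would lose precision through voltage scaling rather than bit masking, but the effect on the data is similar in spirit: fewer reliable bits, small relative error.

```python
import struct

def truncate_mantissa(x: float, kept_bits: int) -> float:
    """Zero out all but the top `kept_bits` of a binary64 float's 52-bit mantissa."""
    (bits,) = struct.unpack("<Q", struct.pack("<d", x))
    mask = ~((1 << (52 - kept_bits)) - 1) & 0xFFFFFFFFFFFFFFFF
    (y,) = struct.unpack("<d", struct.pack("<Q", bits & mask))
    return y

x = 3.141592653589793
approx = truncate_mantissa(x, 16)   # keep only 16 of the 52 mantissa bits
error = abs(x - approx) / x         # relative error stays tiny (~2**-16)
print(approx, error)
```

For media data like audio samples or pixel values, a relative error that small is well below what lossy compression has already thrown away, which is the article's point about where imprecision is tolerable.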
This imprecision would not be ideal for, say, satellite control systems, but would be very useful for battery powered portable devices running unimportant media files already marred by lossy compression.
The language being developed would have a special data type for variables that are permitted to be approximate. There are also strict checks to prevent accidental assignment of an approximate value to a precise variable, as well as limitations on using approximate data in control-flow statements.
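EnerJ itself is an extension of Java with type qualifiers, but the flavor of those checks can be sketched in a few lines of Python (a runtime analogue of my own, not EnerJ's actual static type system; the `endorse` name is borrowed from EnerJ's mechanism for deliberately converting approximate data to precise):

```python
class Approx:
    """Wrapper marking a value as approximate (stand-in for an @Approx qualifier)."""
    def __init__(self, value):
        self.value = value

def endorse(a):
    """Deliberately convert approximate data to precise; the programmer takes the blame."""
    return a.value if isinstance(a, Approx) else a

class PreciseSlot:
    """A 'precise' variable that rejects accidental approximate assignment."""
    def __init__(self):
        self._value = None
    def set(self, v):
        if isinstance(v, Approx):
            raise TypeError("cannot assign an approximate value to a precise variable")
        self._value = v
    def get(self):
        return self._value

pixel = Approx(200)       # approximate data: fine for media output
total = PreciseSlot()
try:
    total.set(pixel)      # blocked: approximate flowing into precise
except TypeError as e:
    print("blocked:", e)
total.set(endorse(pixel)) # allowed only with an explicit endorsement
```

The point of the one-way rule is that approximate values can never silently contaminate precise computation, while precise values can always be used where approximation is acceptable.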
It should be noted that there is not yet any hard data showing how much energy can be saved with this process, so the value of this approach has not yet been determined.
Still, it's interesting to ponder how programs for certain applications might one day allow this level of hardware control to meet tighter power budgets, and what effects that might have on the applications themselves. I'm sure that if the technology develops, there will be some statistical threshold of imprecision giving developers a rough idea of how flawed the results will be. But the very nature of imprecision makes it impossible to predict how badly the data will be mangled at any given moment, perhaps leading to sparse yet annoying glitches in the running of such applications.
Anyway, just thought this was worth sharing.