This is a discussion on Math Coprocessors making a comeback? within the Tech Board forums, part of the Community Boards category; http://dailytech.com/article.aspx?newsid=1276...
I think that in the long run we will indeed go back to the old days.
Physical limits are being reached right now with processors.
And although there are projects underway at this moment to push clock speeds higher, I think the way to go is how it was way back.
It's not even a surprise that dual-core processors are being used in modern machines; a single core would not deliver the performance one would expect from a new processor (since they are reaching physical limits, and then I think of heat vs. speed, etc.).
So why not use a math coprocessor that takes care of the math (that was the logical split back then).
There are all sorts of trends when it comes down to the architecture of a PC.
Then I think of the time when they wanted to cram a lot of micro-instructions onto the processor, even when half of them would rarely be used, as opposed to having a limited set of micro-instructions and leaving the rest to sequences of simple instructions.
"Keep it simple!" is something that could be applied to a lot of things.
Also, with whatever you create or design, you should always keep things balanced.
So in this case, I see a future for this kind of thing, but I don't think it will be used on a regular basis in the regular home-garden-kitchen PC that people use to browse the internet (yes, through tubes, not a dump truck, you silly) or do some office work (or OpenOffice, whatever you prefer)...
I like the idea. Instead of going the route of 'one card for every task', go back to the old days and give it to us on the mobo and in the chipset. I'm sick of seeing this GPU or that GPU for 500 bucks and this PhysX or that PhysX for 200 bucks.
Just improve the math co, make it blazing fast on floats and matrices, keep the integer core (cuz we need it too), and let's get our game on.
There was a day when they wanted to cram everything including video and sound on one die. Haha. Now at today's clock speeds that would end up being 1 thing:
A melted pile of goo.
How different is this from the PhysX card? Or better yet, how much potential could this have when used in conjunction with a PhysX card?
I'm sure the PhysX card is far more narrowly aimed in its abilities and more heavily optimized for floating-point operations than an integer coprocessor (did the article say integer?). I'm looking at the Ageia website now to see if they have any decent documentation on the instruction set or hardware design, but I don't see much.
PhysX has onboard memory (lots of it), and the API is designed with parallel processing in mind. I don't think this kind of change to processor design would even make a dent in the performance for a gamer who spends the money to own an extra card like PhysX (it's not coupled with the GPU, right?).
Plus Ageia is supposed to release cheaper PhysX cards (albeit at the cost of reduced features and speed). Who knows when that will happen?
I seriously doubt faster math will be that much of a concern, especially when only commercial users would actually need it. And chances are they would just have specialised hardware anyway, like those GeForce Quadro cards. Just offload to those. :P