Thread ID: 93416 | 2008-09-15 21:53:00 | Twilight of the GPU | pctek (84) | PC World Chat
Post ID | Timestamp | Content | User
705388 2008-09-15 21:53:00 From arstechnica.com:

"Sweeney was off by at least two years, but otherwise it appears more and more likely that he'll turn out to be correct about the eventual return of software rendering and the death of the GPU as a fixed-function coprocessor. Intel's forthcoming Larrabee product will be sold as a discrete GPU, but it is essentially a many-core processor, and there's little doubt that forthcoming Larrabee competitors from NVIDIA and ATI will be similarly programmable, even if their individual cores are simpler and more specialized.
I think DirectX 9 was the last graphics API that really mattered. DirectX 9 was a revolution: completely programmable shaders of unlimited length with full floating-point precision support. Compared with the fixed-function, 8-bit pipeline it replaced, it was revolutionary. Everything since has been incremental, and kind of backward-looking.

So, DirectX 10 takes DirectX 9 and adds some weird fixed-function components on top of it, which fit in a very particular place in the pipeline, and are hard to use. I'm not saying that it's entirely unwarranted, but I think that DirectX 9 was the last game-changing step in the graphics APIs, and the next non-incremental step will be the move back to programming languages. "
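(To make "the move back to programming languages" concrete, here is a minimal CUDA-style sketch - the kernel name and the gradient math are invented for illustration, not from the article; the point is per-pixel code that is just ordinary floating-point C, with no fixed-function stages:)

    // A "shader" written as plain code on a many-core chip: every pixel
    // runs the same C function, and the math is arbitrary floating-point
    // code rather than a fixed-function pipeline stage.
    __global__ void shade(float4 *framebuffer, int width, int height, float time)
    {
        int x = blockIdx.x * blockDim.x + threadIdx.x;
        int y = blockIdx.y * blockDim.y + threadIdx.y;
        if (x >= width || y >= height) return;

        float u = (float)x / width;
        float v = (float)y / height;

        // Any float expression works here - unlimited length, full precision.
        float r = 0.5f + 0.5f * sinf(10.0f * u + time);
        float g = 0.5f + 0.5f * cosf(10.0f * v + time);
        float b = u * v;

        framebuffer[y * width + x] = make_float4(r, g, b, 1.0f);
    }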
pctek (84)
705389 2008-09-15 22:23:00 Good, I figured multi-core CPUs would mean the death of numerous add-on cards. Metla (12)
705390 2008-09-15 23:58:00 I'm just waiting until one of those cores is an FPGA... A5/1 Rainbow table, here I come! ubergeek85 (131)
705391 2008-09-16 09:05:00 Those powerful processors (whether GPU or CPU) are very comforting on a cold night - although all the necessary fans are a bit noisy. R2x1 (4628)
On the other hand:

"Today's microprocessors were designed to do sequential processing, designed for very complex code. The GPU type of code it runs is simpler, yet the amount of data it operates on is massive. That's why they call it a data parallel processor. If you were to compare the amount of horsepower (car analogy), a CPU could be a 100 horsepower car; a GPU would be a 20,000 horsepower car." - Jen-Hsun Huang, CEO, NVIDIA.

"Is a GPU going to become a CPU ? - yes there will be a form of that coming out in 2010. Will that dominate the market where you'll have one bit of silicon that will do two processes - absolutely not, cause you're not going to be able to take the processing power that we're currently shipping on a GPU and put it on a CPU and conduct both functions. There will however be a single chip coming out at some stage in 2010 particularly focused at the mobile space." - Darren Crasby - AMD/ATI Europe.
pctek (84)
705393 2008-09-17 01:01:00 "If you were to compare the amount of horsepower (car analogy), a CPU could be a 100 horsepower car; a GPU would be a 20,000 horsepower car." - Jen-Hsun Huang, CEO, NVIDIA.

That's a horrible metaphor. I get his point, but he's kind of just wrong. Apart from the fact that for day-to-day stuff a 20,000 HP car is freaking useless. Or for pretty much anything.

A CPU is 2-4 cars with 400 HP each.

A GPU is ~200 cars with 100 HP each.

If you want to deliver one thing to one place, a CPU does it best.

If you want to deliver lots of things to lots of places, a GPU does it.

GPUs are massively threaded. Not every program works well with threads.
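In code terms, the same contrast looks roughly like this (a CUDA-flavoured sketch with made-up function names, not anything from a real driver):

    // CPU: a few fast cars take the whole route themselves.
    void add_cpu(const float *a, const float *b, float *out, int n)
    {
        for (int i = 0; i < n; ++i)   // one driver does all n deliveries, in order
            out[i] = a[i] + b[i];
    }

    // GPU: thousands of slower cars, one delivery each, all launched at once.
    __global__ void add_gpu(const float *a, const float *b, float *out, int n)
    {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n)
            out[i] = a[i] + b[i];     // this thread handles exactly one element
    }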
Thebananamonkey (7741)
705394 2008-09-17 01:37:00 If you want to deliver lots of things to lots of places, a GPU does it........


.............all at the same time.

Or

"A CPU reads a book page by page, line by line quickly.

A GPU rips the book into 500 pieces, then reads them all at once."
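(As a rough CUDA-style sketch of the book-ripping - the names and the chunking are made up, but it shows each thread reading its own piece and the partial results being merged back together:)

    // Each thread reads one "piece" of the book, then the partial
    // counts are merged into a single total.
    __global__ void count_letter(const char *book, int book_len, int piece_len,
                                 char letter, int *total)
    {
        int piece = blockIdx.x * blockDim.x + threadIdx.x;
        int start = piece * piece_len;
        int local = 0;
        for (int i = start; i < start + piece_len && i < book_len; ++i)
            if (book[i] == letter)
                local++;
        atomicAdd(total, local);   // tape the book back together
    }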
pctek (84)
705395 2008-09-17 02:33:00 .............all at the same time.

Or

"A CPU reads a book page by page, line by line quickly.

A GPU rips the book into 500 pieces, then reads them all at once."

Exactly.

Except that doesn't leave room for explaining tasks that a GPU can't do...

Scrolls? i.e. they're one continuous page, so they can't be ripped into pieces?

Doesn't matter.
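(Whatever the analogy, the serial case in code looks roughly like this - an invented example, but any recurrence of this shape is stuck being sequential, because step i+1 needs step i's result before it can start:)

    // Nothing to hand out to 500 readers: each step consumes the
    // previous step's result, so the work cannot be split up.
    float iterate(float x, int steps)
    {
        for (int i = 0; i < steps; ++i)
            x = 3.9f * x * (1.0f - x);   // step i+1 depends on step i
        return x;
    }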

They're thinking of integrating GPU tech and the CPU, so that you have traditional cores like a CPU, but also a section that works like a Cell processor. So it's not just one chip; it's kind of like two chips on one die.

At least that's how I read it. DDR3 and beyond start to look important at this point, as the graphics side referencing main memory too will make bandwidth absolutely vital (dual-channel DDR3-1333 manages only around 21 GB/s, while even a mid-range discrete card has 60+ GB/s of dedicated memory bandwidth).
Thebananamonkey (7741)