Thread ID: 50816 2004-11-02 09:26:00 nasa ferrite (4221) Press F1
Post ID Timestamp Content User
287207 2004-11-03 01:58:00 More relevant - what graphics card do they have ;) POTUS (5276)
287208 2004-11-03 02:10:00 Whatever video chipset is installed on the chosen motherboards; if there is none, there is probably no video card. The processor nodes don't have screens or keyboards, so video hardware is just a waste of power. Graham L (2)
287209 2004-11-03 02:41:00 Today's modern games (e.g. Far Cry) are far more dependent on the video card; the CPU makes very little difference. Because playing DOOM 3 isn't exactly a top priority for NASA scientists, their supercomputers lack video processing power; you wouldn't even be able to call them GPUs.

You could probably write code so that tasks traditionally performed by the GPU were performed by the CPU. This would be very complex, and it's likely that even with their megaflops of power they would still be outperformed by a GeForce 6800 or Radeon X800. This is because the Radeon and GeForce are designed specifically for playing games.
Pete O'Neil (250)
287210 2004-11-03 07:17:00 > This would be very complex, and it's likely that even with their megaflops of power they would still be outperformed by a GeForce 6800 or Radeon X800.

I highly, highly disagree. Think of the 3dMark03 and 05 CPU tests and how they try to render what the video card just did in the previous tests. On average the CPU tests perform a lot worse than the video card did. You might get under 5fps in the tricky parts of 3dMark05. Today's fastest desktop CPUs do well under 10,000 MFlops. Now these supercomputers have far surpassed megaflops and gigaflops, and the NASA one is running at 42.7 teraflops (42.7 trillion calculations per second). It's running on 10,240 Intel Itanium 2 processors.

OK, so if they managed to get the game to utilise all of these processors (one hell of a task), then using 3dMark05 as an example you would probably get 5fps per processor x 10,240 of them = 51,200fps. Now what's the best a GeForce 6800 or X800 video card can do on one of the 3dMark05 tests? 30-40fps? The NASA supercomputer would beat it roughly 1,500 times over.
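The arithmetic above can be checked with a quick script. Note that the 5fps-per-processor figure and the 30-40fps GPU figure are the poster's rough estimates, not measurements, and the per-processor rates would not really add up linearly:

```python
# Back-of-the-envelope check of the figures in this post.
processors = 10_240      # Itanium 2 CPUs in the NASA machine
fps_per_cpu = 5          # assumed 3dMark05 software-rendering rate (estimate)
gpu_fps = 35             # mid-point of the quoted 30-40fps GPU range

# Naive assumption: frame rate scales linearly across all processors.
total_fps = fps_per_cpu * processors
print(total_fps)                      # 51200
print(round(total_fps / gpu_fps))     # 1463, i.e. roughly 1,500x the GPU
```

In reality, splitting one frame's rendering across 10,240 nodes would be dominated by communication overhead, so the linear-scaling assumption is the weakest part of the estimate.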

Of course it would probably look pretty terrible, as the CPUs wouldn't do a good job quality-wise, but if they could make it work on Far Cry with software rendering then you would get some pretty good fps.

Of course this is all pointless if your monitor is running at only 85Hz.
alphazulusixeightniner (185)
287211 2004-11-03 07:40:00 I'm sure the screens they use at NASA are measured in GHz :D (refresh rate that is). george12 (7)
287212 2004-11-03 19:42:00 > Ok so if they managed to get the game to utilise all
> of these processors (one hell of a task) and using
> 3dMark05 as an example then you would probably get
> 5fps per processor x 10,240 of them = 51200fps.
>
> Of course it would probably look pretty terrible as
> the CPUs won't make a good job quality wise, but if
> they could make it work on Far Cry with software
> rendering then you would get some pretty good fps.
My point was that if you were using software rendering, yes, you would get some funky frame rates, but software rendering looks like ****.

What I was suggesting is that you could write a piece of software that would do everything a video card does: instead of performing OpenGL or DirectX tasks at the hardware level, perform them at the software level, e.g. a software OpenGL rendering engine. This wouldn't be terribly efficient, as CPUs are designed to do a wide range of tasks, whereas a GPU is designed specifically to render graphics. You'd also have to rewrite Far Cry (or whatever game you want to play) to run on the Itanium 2, and you'd have to use OpenGL, as I doubt Microsoft would play nice. All up, it would be a complete mission to get a modern game running on a supercomputer.
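To give a sense of what "performing GPU tasks at the software level" means, here is a toy sketch of the inner loop of a software rasteriser: testing every pixel against a triangle using edge functions, which is the same coverage test GPU hardware performs massively in parallel. All names here are illustrative, not from any real renderer:

```python
# Toy software rasterisation: fill one 2D triangle on the CPU.
# A real software OpenGL engine (e.g. Mesa's) also handles transforms,
# depth testing, texturing and shading; this shows only pixel coverage.

def edge(ax, ay, bx, by, px, py):
    # Signed-area test: >= 0 when (px, py) is on the left of edge a->b.
    return (bx - ax) * (py - ay) - (by - ay) * (px - ax)

def rasterize_triangle(v0, v1, v2, width, height):
    # Return the set of pixel coordinates covered by a counter-clockwise
    # triangle, by brute-force testing every pixel centre in the frame.
    covered = set()
    for y in range(height):
        for x in range(width):
            px, py = x + 0.5, y + 0.5          # sample at the pixel centre
            w0 = edge(*v1, *v2, px, py)
            w1 = edge(*v2, *v0, px, py)
            w2 = edge(*v0, *v1, px, py)
            if w0 >= 0 and w1 >= 0 and w2 >= 0:  # inside all three edges
                covered.add((x, y))
    return covered

pixels = rasterize_triangle((1, 1), (14, 2), (7, 13), 16, 16)
```

A GPU of that era did this test (plus shading) for millions of pixels per frame in dedicated silicon, which is exactly why a general-purpose CPU doing it in a loop like this is so much slower per chip.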
Pete O'Neil (250)