  1. #1

    RV770 & GT200 specs chart


  2. #2
    With power bills ever increasing, I really hope they will also consider low-power designs rather than chasing higher speeds at super high power consumption.

  3. #3
    If 157 W is the figure for the HD4870, then I wonder what the HD4870 X2's number will be. I hope they have found a way to decrease the X2 configuration's power consumption, since the HD3870 X2 consumed more than two separate HD3870 boards.

    [The 8800 GT consumed more power than a single HD3870, but the 9800 GX2 consumed less than the HD3870 X2, and the GX2 is more than two 8800 GT cards.]
    ------

    They sure are confident in their performance if those prices are correct. For the price of an HD4850 you can get two 9600 GT cards, and for the HD4870's price you can get two 8800 GTS 512 cards.

  4. #4
    The ROP and TMU numbers are in the wrong slots for all the Nvidia cards. How did no one else notice this? Also, if that gets corrected, the G80 would be listed at 64 TMUs, which I believe is wrong.
    Last edited by trek554; May 26th, 08 at 11:45 AM.

  5. #5
    trek554:
    Well, the G80 has 64 texture filtering units but only 32 texture addressing units.

  6. #6
    Even at the same clock speeds, the Radeon 3870 X2 will consume more power than two separate Radeon 3870s due to the presence of a bridge chip. The difference will be small, but it is always there because of the need for that extra chip.

    On a bit of a side note, I'm surprised AMD hasn't incorporated HyperTransport into the Radeon design. It'd eliminate the need for the bridge chip and finally allow truly shared texture cache between GPUs (i.e. two 512 MB cards in CrossFire would function as one 1 GB card). It'd also allow Radeon GPUs to be placed on motherboards without using PCI-E lanes. Such a scenario would help improve CPU-GPU latencies, which is a factor in some GPGPU work.

  7. #7
    :O Look at the HD4870's memory clock speed, lol. GDDR5 FTW

  8. #8
    Will ATI make it this time?

  9. #9
    lol, the 4870 is able to pump out more GFLOPS than the GTX 280... looks like Nvidia really screwed up the clock speeds this time round...
    Last edited by annhilator47; May 26th, 08 at 01:47 PM.
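
    A quick sanity check on that GFLOPS claim, as a minimal Python sketch. The unit counts and clocks below are the rumoured chart figures (800 SPs at 750 MHz for the HD4870, 240 SPs at a 1300 MHz shader clock for the GTX 280), not confirmed specs:

    # Theoretical peak = shader units * shader clock * flops issued per unit per clock.
    def peak_gflops(shaders, clock_mhz, flops_per_clock):
        return shaders * clock_mhz * flops_per_clock / 1000.0

    # HD4870: each SP issues one MADD per clock (2 flops); rumoured 800 SPs at 750 MHz.
    print(peak_gflops(800, 750, 2))     # -> 1200.0 GFLOPS
    # GTX 280: MADD plus the disputed extra MUL (3 flops); rumoured 240 SPs at 1300 MHz.
    print(peak_gflops(240, 1300, 3))    # -> 936.0 GFLOPS

    Even granting the extra MUL, the chart maths puts the 4870 ahead on paper.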

  10. #10
    Lower shader clocks will be a problem for GT200's design. I wonder how much it will affect performance...

    G92 was a shader-speed beast at 2 GHz.

  11. #11
    The original table has the TMU and ROP numbers wrong. Here is the corrected one:


  12. #12
    Quote Originally Posted by annhilator47
    lol, the 4870 is able to pump out more GFLOPS than the GTX 280... looks like Nvidia really screwed up the clock speeds this time round...
    On the other hand, Nvidia has stated that GT200 will use 2nd-gen shaders, as opposed to the 1st-gen ones found on G80/G92. They said these 2nd-gen shaders would be around 50% more effective.

    If that's true then, to give some sort of picture, the 1240 MHz of the GTX 260 would equal 1860 MHz on G92, and the 1300 MHz of the GTX 280 would equal 1950 MHz. Wouldn't look so bad.
    --

    ..but it's funny that everyone ignores that statement from Nvidia, while these rumoured specs are held as most certain truth.
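
    To make the arithmetic behind those numbers explicit, a minimal Python sketch; the 1.5x factor is only Nvidia's unverified claim:

    # Map a rumoured GT200 shader clock onto a G92-equivalent clock,
    # assuming Nvidia's "~50% more effective" figure holds.
    efficiency_gain = 1.5  # unverified PR claim
    for name, clock_mhz in [("GTX 260", 1240), ("GTX 280", 1300)]:
        print(f"{name}: {clock_mhz} MHz ~ {clock_mhz * efficiency_gain:.0f} MHz G92-equivalent")
    # GTX 260: 1240 MHz ~ 1860 MHz G92-equivalent
    # GTX 280: 1300 MHz ~ 1950 MHz G92-equivalent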

  13. #13
    From there we can see that the GTX 260 will be more expensive than the 4870, so it will be interesting to see the differences between them. Even more interesting will be CrossFire with two 4850s, since that's close in price to the GTX 260. We'll see.

  14. #14
    In a way it's strange that, for a long time now, ATI's most expensive model in a series has been competing with the second or third model in nVidia's high-end line.

  15. #15
    Quote Originally Posted by Niceone
    On the other hand, Nvidia has stated that GT200 will use 2nd-gen shaders, as opposed to the 1st-gen ones found on G80/G92. They said these 2nd-gen shaders would be around 50% more effective.

    If that's true then, to give some sort of picture, the 1240 MHz of the GTX 260 would equal 1860 MHz on G92, and the 1300 MHz of the GTX 280 would equal 1950 MHz. Wouldn't look so bad.
    --

    ..but it's funny that everyone ignores that statement from Nvidia, while these rumoured specs are held as most certain truth.

    Are you nVidia's PR representative? Yawn, quoting PR...

    The charts here already take into account the 3rd possible MUL instruction, even though most of the REAL academic discussion concluded it was usable only about 50% of the time. We're not even sure it really works now.
    Last edited by Tchock; May 26th, 08 at 06:33 PM.
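
    To illustrate that point, a minimal Python sketch of what discounting the third MUL does to the paper numbers; the ~50% usability figure is the one cited above, and the 240 SPs at 1300 MHz are this thread's rumoured GTX 280 specs:

    # Effective throughput when the extra MUL is only usable part of the time.
    def effective_gflops(shaders, clock_mhz, mul_utilization):
        madd = shaders * clock_mhz * 2 / 1000.0  # dual-issue MADD, always available
        mul = shaders * clock_mhz * 1 / 1000.0   # the disputed third MUL
        return madd + mul * mul_utilization

    print(effective_gflops(240, 1300, 0.5))  # -> 780.0 GFLOPS, vs 936.0 on paper
    print(effective_gflops(240, 1300, 0.0))  # -> 624.0 GFLOPS if the MUL never works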
