XFX R7970 Black Edition Radeon Video Card
Reviews - Featured Reviews: Video Cards
Written by David Ramsey
Monday, 09 January 2012
XFX R7970 Black Edition Double Dissipation
Manufacturer: XFX Creation, Inc.

Full Disclosure: The product sample used in this article has been provided by XFX.

AMD might have trouble keeping up with the competition in the CPU arena, but their acquisition of ATI allowed them to become a major player in the graphics world. AMD and NVIDIA regularly trade places in the "fastest video card" rankings, and while NVIDIA's been good at holding down the title with its current high end, the Fermi-based GTX 580, it looks as if AMD's new Tahiti-based 7970 will snatch the crown away...especially if it's a hot-rod, factory-overclocked card like today's test subject: the XFX R7970 Black Edition Double Dissipation.

AMD's new Tahiti cards aren't even available at retail as of this writing, but XFX already has their amped-up version ready. That's impressive, since it generally takes a few months after the introduction of a new video card for third-party vendors to start individualizing their products. XFX offers the Radeon HD 7970 video card in four different versions: the Core Edition, the Black Edition, the Double Dissipation Edition, and the subject of our review, the top-end Black Edition Double Dissipation.
This card comes from the factory with a substantial overclock and an elaborate dual-fan cooling system. I'll compare its performance against NVIDIA's and AMD's current top-end cards to see how well it performs.

AMD "Southern Islands" GPU

It's always exciting to see AMD or NVIDIA come out with a completely new GPU architecture. The "Southern Islands" GPUs are AMD's implementation of its "Graphics Core Next" architecture, and comprise three different families of GPUs: the enthusiast-class "Tahiti" at the top of the range, with the lower-end "Pitcairn" and "Cape Verde" below it.
Right now, the only Southern Islands GPU available is the Tahiti-based Radeon HD 7970, but cards based on the Pitcairn and Cape Verde variants will appear in the coming months.

Graphics Core Next

AMD had several goals in mind for Graphics Core Next, and one of the main things they wanted to do was catch up with NVIDIA in the "GPU compute" arena. Right now, NVIDIA's CUDA (Compute Unified Device Architecture) dominates GPU computing, with a robust set of developer tools and years of track record behind it. AMD's "DirectCompute" alternative has been around almost as long but has failed to catch on with developers to the degree that CUDA has. AMD is making a real push for DirectCompute with these new GPUs, and claims that over 200 applications already benefit from DirectCompute technology.

For Southern Islands, AMD has grouped simple ALUs (arithmetic logic units) into a single SIMD (Single Instruction Multiple Data) unit. A number of SIMD units, along with instruction decoders and schedulers, branch units, vector processors, and other items, comprise a compute unit, and a number of these compute units (along with memory controllers and whatnot) comprise a Southern Islands GPU chip. The Radeon 7970 has 32 of these compute units.

AMD has moved away from the VLIW (very long instruction word) architecture of its previous GPUs to provide more consistent performance. Previous generations of AMD GPUs often left many compute units/stream processors idle, because dependencies in the data being worked on meant that not all the compute units could be used at once. The Southern Islands architecture provides a greater degree of parallelism (it's that SIMD stuff, really, being used effectively) and can keep most compute units working all the time, leading to more consistent (and higher) performance. This has obvious advantages in both graphics processing and general GPU-compute operations. Other enhancements include:

- Partially Resident Textures: As games increasingly use very large textures, loading and manipulating the texture data takes more time. A Southern Islands GPU can load only the part of a texture that will actually be visible in a frame, reducing memory bandwidth and workload.
- Error-correcting code support: There's not much detail on this feature yet, but it looks as if AMD will be able to offer optional ECC support (important for industrial applications) without having to use ECC memory. This will detect and correct memory errors, although AMD's tech white paper doesn't go into specifics such as how many bits can be detected/corrected.
- PowerTune and ZeroCore Power: These features dynamically clock the card's GPU and memory down (PowerTune) when high performance isn't needed, and can shut off entire sections of the GPU (ZeroCore Power) when the card is idle. For example, the second card in a CrossFireX system can idle down to less than 5 watts if you're just browsing the Windows desktop; a single-card system will power down if your display goes to sleep. Combined with the inherent efficiency of the 28nm fabrication process, this results in significant power savings. Side benefits you'll notice include less heat and noise emanating from your rig, especially when you're not gaming.
- Eyefinity 2.0: New support for 5x1 monitor layouts, improved bezel correction, and support for custom resolutions enhance AMD's existing Eyefinity feature. I saw a 5x1 system demonstrated at an AMD press event a few months ago and it was quite impressive.
- 28nm fabrication process: If you make 'em smaller, you can fit more of 'em in.
The 7970 GPU has a staggering 4.3 billion transistors. (The original Intel 4004 microprocessor had about 2,300.)

- PCI Express 3.0 support: This has twice the per-lane bandwidth of PCI-E 2.0 (roughly 1GB/s versus 500MB/s per lane), but I'm not sure what real-world effect this will have, especially on x16 slots. Even the beefiest current video cards aren't hobbled by x8 PCI-E 2.0 bandwidth.

Let's take a look at what these features actually mean on a real, live video card.

Closer Look: Radeon HD 7970

XFX's package includes the card and a number of accessories. Along with the various bits of documentation is a driver CD, a CrossFireX cable, and a passive HDMI-to-DVI cable. There's also a cool metal-and-plastic stick-on "XFX Black Edition" badge you can put somewhere on your case to show off.

If you've owned or read reviews of AMD video cards before, you're familiar with the "black slab with red accents" styling that AMD reference cards have used for the past few years. Indeed, even the new Tahiti-based cards use it, and most of the cards you'll see will consist of the reference design with a vendor-specific sticker (often featuring guns, women, and monsters) applied. XFX, however, has decided that their new high-end card deserves better:
The first thing you'll notice is the dual-fan cooler, which is quite elaborate: XFX replaces the stock AMD vapor-chamber cooler with a "HydroCell" vapor chamber of their own design. They then add two Duratec IP-5X dust-free fans, whose design prevents dust from ever reaching the fan bearings, and surround the whole assembly with an aluminum "Ghost Thermal Technology" shroud that is said to improve cooling by directing air out the top and bottom of the card.
That said, there doesn't actually appear to be a lot of space for air to exit from the top of the card. A red trim panel and the cooler base above it block most of the airflow from the top of the card, although there is some space below the trim panel. As we'll see later, this doesn't seem to affect the GPU temperatures, and besides, it looks pretty cool in a tower case with a window.
The back of the card doesn't have any sort of cover plate, but you can see 15 or 16 of the 20+ screws that secure the cooling apparatus to the card. Two of the screws are covered by white circular "Warranty void if removed" stickers. I don't know if it makes any difference in the real world, but I prefer backplates to protect the minuscule components of the card from damage and static. Let's take a closer look at this card in the next section.

FX-797A-TDBC Detailed Features

The connectors available include one DVI, one HDMI, and two mini-DisplayPort. Below the ports is the card's exhaust vent, which XFX says provides double the airflow of other such vents. This seems unlikely.
The XFX R7970 Black Edition Double Dissipation video card has two PCI-E power connectors, an 8-pin and a 6-pin. However, solder pads on the back of the card reveal that a version with two 8-pin connectors may be forthcoming...
At the top of the card we can see the two CrossFireX connectors, indicating that the 7970 supports both triple and quad-card CrossFireX configurations. Just to the right is the BIOS switch, a handy feature that's become common on high-end AMD video cards. With the switch in one position, the card uses its default factory BIOS; in the other position, it uses a BIOS with user-flashed settings. This allows overclockers to experiment with high settings without the worry of "bricking" the card, since the factory settings are available with the flip of a switch. Wouldn't it be nice if NVIDIA cards had this feature?
There's not much going on at the bottom of the card, although you can see the power cords for the dual fans.
We'll cover the features and specifications of this card in the next section.

FX-797A-TDBC Features
R7970 Black Edition Double Dissipation Specifications
There's a lot going on "under the hood" of this card. In addition to the elaborate cooler, XFX enhances the reference 7970 design with a 2-ounce copper PCB, ferrite-core chokes, and solid capacitors rated for operation up to 105 degrees Celsius. Their Black Edition 7970 cards use hand-selected GPUs that XFX says are in the "top 1%" and can reach higher clock speeds without having to increase the wattage drawn by the card. XFX delivers the card pre-overclocked with a GPU speed of a solid 1GHz (compared to a stock speed of 925MHz) and a memory speed of 1425MHz (stock speed 1375MHz). Let's see if the performance of the card lives up to expectations...

VGA Testing Methodology

The Microsoft DirectX-11 graphics API is native to the Microsoft Windows 7 operating system, which will be the primary O/S for our test platform. DX11 is also available as a Microsoft Update for the Windows Vista O/S, so our test results apply to both versions of the operating system. All of the tests in this review were run with DX11 graphics.

According to the Steam hardware survey, as of December 2011 the most popular desktop resolution (for Steam users) is 1680x1050 pixels, with a 17.59% share, with 1920x1080 pixels coming in second with only 7.7%. However, when testing video cards at this level, the higher the resolution you can test with, the better. I ran most tests at both 1680x1050 and 1920x1200 (I continue to prefer the 16:10 ratio of 1920x1200 to the mysteriously more popular 1920x1080 resolution). Frankly, top-end video cards these days are so powerful that even 1920x1200 isn't much of a challenge...

A combination of synthetic and video game benchmark tests has been used in this article to illustrate relative performance among graphics solutions. Our benchmark frame rate results are not intended to represent real-world graphics performance, as this experience would change based on supporting hardware and the perception of the individuals playing the video game.

DX11 Cost to Performance Ratio

For this article Benchmark Reviews has included cost per FPS for graphics performance results. Only the least expensive product price is used in the calculation, which does not consider tax, freight, promotional offers, or rebates. All prices reflect product series components, and do not represent any specific manufacturer, model, or brand. These retail prices for each product were obtained from NewEgg.com on 8-January-2012.
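Since the cost-per-FPS metric recurs throughout this review, here is a minimal sketch of the arithmetic behind it. The $549 price and 62.0 FPS figures below are hypothetical placeholders for illustration, not results from any of the charts.

```python
# Cost-per-FPS as used in this review: the lowest retail price for a
# card divided by its average frame rate in a given benchmark.

def cost_per_fps(price_usd: float, avg_fps: float) -> float:
    """Return dollars per frame-per-second (lower is better)."""
    return price_usd / avg_fps

# Hypothetical example: a $549 card averaging 62.0 FPS in some test.
print(f"${cost_per_fps(549.0, 62.0):.2f} per FPS")   # -> $8.85 per FPS
```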
Intel X79-Express Test System
DirectX-11 Benchmark Applications
Video Card Test Products
| Graphics Card | GeForce GTX 560 Ti | GeForce GTX 570 | GeForce GTX 580 | GeForce GTX 590 | Radeon HD 6970 | Radeon HD 6990 | Radeon HD 7970 |
|---|---|---|---|---|---|---|---|
| GPU Cores | 384 | 480 | 512 | 1024 | 1536 | 3072 | 2048 |
| Core Clock (MHz) | 822 | 732 | 772 | 607 | 880 | 800 | 1000 |
| Shader Clock (MHz) | 1645 | 1464 | 1544 | 1215 | N/A | N/A | N/A |
| Memory Clock (MHz) | 1050 | 1900 | 2004 | 1707 | 900 | 1250 | 1425 |
| Memory Amount | 1024MB GDDR5 | 1280MB GDDR5 | 1536MB GDDR5 | 3072MB GDDR5 | 2048MB GDDR5 | 4096MB GDDR5 | 3072MB GDDR5 |
| Memory Interface | 256-bit | 320-bit | 384-bit | 384-bit | 256-bit | 256-bit | 384-bit |
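For a rough, back-of-the-envelope way to read this table: peak single-precision throughput for these unified-shader GPUs can be estimated as shader count × shader clock × 2 FLOPs (one fused multiply-add per cycle). The sketch below applies that rule, with the assumption that NVIDIA's Fermi shaders run at the listed shader clock while AMD's run at the core clock; the result for the reference 7970 roughly matches AMD's advertised 3.79 TFLOPS. Paper numbers only; they say nothing about real game performance.

```python
# Theoretical peak single-precision throughput, computed as
# shaders x clock x 2 FLOPs (one fused multiply-add per cycle).

cards = {
    # name: (shader count, effective shader clock in MHz)
    "GeForce GTX 580": (512, 1544),
    "GeForce GTX 590": (1024, 1215),   # 512 shaders per GPU, two GPUs
    "Radeon HD 6970":  (1536, 880),
    "Radeon HD 7970":  (2048, 925),    # AMD reference clock
    "XFX R7970 BE DD": (2048, 1000),   # factory overclock
}

for name, (shaders, mhz) in cards.items():
    tflops = shaders * mhz * 2 / 1_000_000
    print(f"{name:16s} {tflops:5.2f} TFLOPS peak")
```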
DX11: Crysis 2
The latest entry in Crytek's "Crysis" series launched in early spring of 2011. The initial release lacked DX11 support and was widely criticized for low-quality textures. A few months later Crytek released patches that provided DX11 features and more than a gigabyte of new high-resolution textures. Thus fortified, Crysis 2 is one of the most visually impressive games available.
I used the Adrenaline Crysis 2 benchmark tool to benchmark a scripted run-through of Times Square. I set the quality level to "Extreme" and turned on DirectX 11 and high-resolution textures. AA was enabled in the benchmark (although it's not a game option normally) to increase the load on these graphics cards.
- Crysis 2 with Adrenaline Benchmark
- Extreme quality, DX11 features, high-resolution textures, 4xAA, Edge AA, Times Square sequence
Cost Analysis: Crysis 2 (1920x1200)
Test Summary: The 6990's surprisingly low score is something I can't explain; I double-checked my settings and re-ran the test several times. Look how close the overclocked 7970 comes to the mighty GTX 590! It's only about 8% slower.
DX11: Batman: Arkham City
If there was ever a game that showcased the growing gap between game consoles and high-end gaming PCs, Batman: Arkham City is it. In this dystopian near-future, part of Gotham City has been walled off as an enclave for criminals (rather like Escape from New York). It's a third-person action game that adheres to the story line set forth in Batman: Arkham Asylum, and is based on an updated Unreal Engine 3 game engine. Batman: Arkham City is a DirectX 11 title that uses multi-threaded rendering to produce life-like tessellation effects.
One annoyance with the game is that all game settings must be made through a hidden application called "BMLauncher". Once you've made your settings, though, an in-game benchmark provides the feedback you'll need to tune your system's performance.
- Batman: Arkham City
- FXAA (high), DirectX 11 features, High tessellation, Extreme Detail, Dynamic Shadows, Motion Blur, Distortion, Lens Flares, Light Shafts, Reflections, Ambient Occlusion, hardware-accelerated PhysX
The AMD Radeon graphics cards are at a disadvantage here, since, like Arkham Asylum, Arkham City is a showcase of PhysX effects, which can be directly accelerated by NVIDIA cards but not by AMD cards. Some might argue that turning PhysX off would provide a fairer comparison, but the extensive PhysX effects are such an integral part of the game that I decided to leave them on.
Note that the dual-GPU Radeon HD 6990 turns in the same frame rates as the 6970. This is almost certainly because AMD doesn't have a CrossFireX profile for this new game, thus the card runs in single-GPU mode.
Cost Analysis: Arkham City (1920x1200)
Test Summary: Although the HD 7970 is at a disadvantage here with its lack of PhysX support, it still turns in very playable frame rates. Oddly, the overclocked score is virtually identical to the stock clocked score.
DX11: Aliens vs Predator
Aliens vs. Predator is a science fiction first-person shooter video game, developed by Rebellion, and published by Sega for Microsoft Windows, Sony PlayStation 3, and Microsoft Xbox 360. Aliens vs. Predator utilizes Rebellion's proprietary Asura game engine, which had previously found its way into Call of Duty: World at War and Rogue Warrior. The self-contained benchmark tool is used for our DirectX-11 tests, which push the Asura game engine to its limit.
I configured Aliens vs. Predator to use the highest quality settings with 4x AA and 16x AF, as well as turning on DirectX-11 features such as Screen Space Ambient Occlusion (SSAO) and tessellation, along with advanced shadows.
- Aliens vs Predator
- Texture quality High, Shadow quality High, 4xAA, 16xAF, SSAO on, Hardware tessellation, Advanced Shadow Sampling
Cost Analysis: Aliens vs Predator (1920x1200)
Test Summary: The Radeon HD 6990 comes roaring back here, easily beating the GTX 590. The 7970 continues to post scores superior to the GTX 580, and beats it here in $ per FPS as well.
DX11: Lost Planet 2
Capcom provides a stand-alone benchmark tool for Lost Planet 2. Reviewers love stand-alone benchmarks, and users should, too, since they allow the evaluation of a system without the trouble and expense of purchasing and configuring the actual game. Lost Planet 2 takes place on E.D.N. III, the same planet as in the original Lost Planet, but ten years later. The snow has melted and somehow giant tropical jungles have grown to fill the landscape.
Lost Planet 2 takes advantage of DX11 features including tessellation and displacement mapping on water, level bosses, and player characters. In addition, soft body compute shaders are used on 'Boss' characters, and wave simulation is performed using DirectCompute. These cutting edge features make for an excellent benchmark for top-of-the-line consumer GPUs. There are two parts to the benchmark: Test A, which is a semi-random script that's a good example of normal game play, and Test B, which is a deterministic script that places a significantly heavier load on the card being tested.
- Lost Planet 2
- 1920x1200, CSAA32X, Motion Blur, High Shadow Detail, High Texture Detail, High Rendering, High DX11 features
Cost Analysis: Lost Planet 2 (1920x1200, Test B)
Test Summary: Radeon cards do very well in this test, although the dual-GPU GTX 590 still posts the highest scores. It's interesting to note that while the Radeon HD 6990 beats the new 7970 in Test A, the scores are virtually identical in Test B.
DX11: Metro 2033
Metro 2033 is an action-oriented video game with a combination of survival horror, and first-person shooter elements. The game is based on the novel Metro 2033 by Russian author Dmitry Glukhovsky. It was developed by 4A Games in Ukraine and released in March 2010 for Microsoft Windows. Metro 2033 uses the 4A game engine, developed by 4A Games. The 4A Engine supports DirectX-9, 10, and 11, along with NVIDIA PhysX and GeForce 3D Vision.
The 4A engine is multi-threaded in such a way that only PhysX has a dedicated thread; it uses a task model without any pre-conditioning or pre/post-synchronizing, allowing tasks to be done in parallel. The 4A game engine can utilize a deferred shading pipeline, uses tessellation for greater performance, and offers HDR (complete with blue shift), real-time reflections, color correction, film grain and noise; the engine also supports multi-core rendering.

Metro 2033 features superior volumetric fog, double PhysX precision, object blur, sub-surface scattering for skin shaders, parallax mapping on all surfaces, and greater geometric detail with less aggressive LODs. Using PhysX, the engine supports features such as destructible environments, cloth and water simulation, and particles that can be fully affected by environmental factors.
- Metro 2033
- DirectX 11, Very High quality, 4xAA, 16xAF, tessellation, DOF, "Frontline" scene
NVIDIA has been diligently working to promote Metro 2033, and for good reason: it is the most demanding PC video game we've ever tested. When their flagship GeForce GTX 480 struggles to produce 27 FPS with DirectX-11 anti-aliasing turned down to its lowest setting, you know that only the strongest graphics processors will generate playable frame rates. All our tests disable advanced PhysX options.
Cost Analysis: Metro 2033 (1920x1200)
Test Summary: There's no doubt that this is a very demanding game. What Crysis was to top-end video cards a few years ago, Metro 2033 is now, with even the strongest single-GPU cards struggling to break 30 frames per second at 1920x1200. And note that this is with PhysX features turned off! You'll need a dual-GPU card or an SLI/CrossFireX system to make the most of this game.
DX11: Unigine Heaven 2.5
The Unigine "Heaven 2.1" benchmark is a free publicly available tool that grants the power to unleash the graphics capabilities in DirectX-11 for Windows 7 or updated Vista Operating Systems. It reveals the enchanting magic of floating islands with a tiny village hidden in the cloudy skies. With the interactive mode, emerging experience of exploring the intricate world is within reach. Through its advanced renderer, Unigine is one of the first to set precedence in showcasing the art assets with tessellation, bringing compelling visual finesse, utilizing the technology to the full extend and exhibiting the possibilities of enriching 3D gaming.
The distinguishing feature in the Unigine Heaven benchmark is a hardware tessellation that is a scalable technology aimed for automatic subdivision of polygons into smaller and finer pieces, so that developers can gain a more detailed look of their games almost free of charge in terms of performance. Thanks to this procedure, the elaboration of the rendered image finally approaches the boundary of veridical visual perception: the virtual reality transcends conjured by your hand. The "Heaven" benchmark excels at providing the following key features:
- Native support of OpenGL, DirectX 9, DirectX-10 and DirectX-11
- Comprehensive use of tessellation technology
- Advanced SSAO (screen-space ambient occlusion)
- Volumetric cumulonimbus clouds generated by a physically accurate algorithm
- Dynamic simulation of changing environment with high physical fidelity
- Interactive experience with fly/walk-through modes
- ATI Eyefinity support
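As promised above, here's an illustrative (and deliberately simplified) look at why scalable tessellation matters. Assuming each subdivision level splits every triangle into four, geometry grows geometrically with the tessellation level, which is why it must be generated on the GPU on the fly rather than stored in the game's assets. The mesh size is a made-up number.

```python
# Illustration only: geometry growth under uniform subdivision, where
# each level splits every triangle into four smaller triangles.

base_triangles = 10_000          # hypothetical source mesh
for level in range(5):
    tris = base_triangles * 4 ** level
    print(f"tessellation level {level}: {tris:>12,} triangles")
```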
- Unigine Heaven 2.5
- High Shaders, Normal tessellation, 8xAA, 16xAF
Cost Analysis: Unigine Heaven (1920x1200)
Test Summary: AMD cards dominate in this test, albeit only at the 1920x1200 resolution. At both resolutions, though, the overclocked XFX R7970 Black Edition Double Dissipation is nipping at the heels of the $150 more expensive GTX 590.
XFX 7970 DD Temperatures
Benchmark tests are always nice, so long as you care about comparing one product to another. But when you're an overclocker, gamer, or merely a PC hardware enthusiast who likes to tweak things on occasion, there's no substitute for good information. Benchmark Reviews has a very popular guide on Overclocking Video Cards, which gives detailed instruction on how to tweak a graphics card for better performance. Of course, not every video card has overclocking headroom. Some products run so hot that they can't suffer any higher temperatures than they already do. This is why we measure the operating temperature of the video card products we test.
At the start of the test, I measure the idle temperature of the card with the card sitting at the Windows desktop, using the GPU-Z utility. Next, I start FurMark's stress test and let it run until the temperature curve flattens and the temperature has not varied more than 1 degree in the last five minutes.
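That stopping rule (temperature flat to within 1 degree over five minutes) is simple to express in code. The sketch below is only an illustration of the logic, not the tooling actually used here; read_gpu_temp() is a hypothetical stand-in for GPU-Z's sensor log or a vendor monitoring API.

```python
# Detect the FurMark temperature plateau: stop once a full five-minute
# window of samples spans no more than 1 degree C.
import time
from collections import deque

POLL_SECONDS = 10
WINDOW = 5 * 60 // POLL_SECONDS          # 30 samples = five minutes

def read_gpu_temp() -> float:
    """Hypothetical sensor read; wire up GPU-Z logging or a vendor API."""
    raise NotImplementedError

samples: deque[float] = deque(maxlen=WINDOW)
while True:
    samples.append(read_gpu_temp())
    if len(samples) == WINDOW and max(samples) - min(samples) <= 1.0:
        break                            # curve has flattened
    time.sleep(POLL_SECONDS)

print(f"Load temperature plateau: {samples[-1]:.0f} C")
```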
FurMark does two things extremely well: it drives the thermal output of any graphics processor higher than any real-world application or video game realistically could, and it does so with consistency every time. FurMark works great for testing the stability of a GPU as the temperature rises to the highest possible level. The temperatures discussed below are absolute maximum values, and not representative of real-world performance.
Equipped with a very energy-efficient 28nm GPU and an enhanced vapor chamber cooler, the R7970 Black Edition Double Dissipation returns temperatures a good 20 degrees lower under stress than I've seen from an NVIDIA GTX 580 card...and that's when the card is overclocked beyond its already overclocked specs!
At stock specifications, the fan RPM maxed out at 39%; when overclocked, the fans reached 49% of their maximum. Even at 49% fan speed, the noise from the card was not objectionable. At 39% it was audible but quiet. If you're familiar with the Dustbuster-level of sound previous generations of high end graphics cards can generate under load, this will come as a welcome relief.
VGA Power Consumption
The Radeon HD7970 is the first graphics card whose GPU is fabricated on a 28nm process. There are two advantages to making transistors smaller: you can make 'em faster, and they use less power. Both are true here, but AMD didn't stop there, since their Tahiti architecture has a number of clever power-saving features.
Like a modern CPU, a Tahiti GPU will aggressively clock itself down when its full capabilities aren't needed, reducing current draw with what AMD calls "PowerTune". But they go even further, with "ZeroCore Power" technology turning off entire sections of the chip when they aren't in use. These features work amazingly well.
To measure isolated video card power consumption, Benchmark Reviews uses the Kill-A-Watt EZ (model P4460) power meter made by P3 International. A baseline test is taken without a video card installed inside our test computer system, which is allowed to boot into Windows 7 and rest idle at the login screen before power consumption is recorded. Once the baseline reading has been taken, the graphics card is installed and the system is again booted into Windows and left idle at the login screen. Our final loaded power consumption reading is taken with the video card running a stress test using FurMark. Below is a chart with the isolated video card power consumption (not system total) displayed in Watts for each specified test product:
| Situation | Power | Card delta |
|---|---|---|
| Windows login, no video card | 98 watts | -- |
| Windows login, video card | 163 watts | 65 watts |
| Windows desktop | 115 watts | 17 watts |
| Windows desktop, display sleep | 103 watts | 5 watts |
| FurMark load | 315 watts | 217 watts |
| Overclocked FurMark load | 387 watts | 289 watts |
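The "Card delta" column is nothing more than each wall reading minus the 98-watt no-card baseline; a quick sketch reproducing the table's arithmetic:

```python
# Isolated card power: wall-meter reading minus the no-card baseline.
BASELINE_W = 98   # system idling at the login screen, no video card

readings = {
    "Windows login, video card":      163,
    "Windows desktop":                115,
    "Windows desktop, display sleep": 103,
    "FurMark load":                   315,
    "Overclocked FurMark load":       387,
}

for situation, watts in readings.items():
    print(f"{situation:32s} card draws {watts - BASELINE_W:>3d} W")
```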
| VGA Product Description (sorted by combined total power) | Idle Power | Loaded Power |
|---|---|---|
| NVIDIA GeForce GTX 480 SLI Set | 82 W | 655 W |
| NVIDIA GeForce GTX 590 Reference Design | 53 W | 396 W |
| ATI Radeon HD 4870 X2 Reference Design | 100 W | 320 W |
| AMD Radeon HD 6990 Reference Design | 46 W | 350 W |
| NVIDIA GeForce GTX 295 Reference Design | 74 W | 302 W |
| ASUS GeForce GTX 480 Reference Design | 39 W | 315 W |
| ATI Radeon HD 5970 Reference Design | 48 W | 299 W |
| NVIDIA GeForce GTX 690 Reference Design | 25 W | 321 W |
| ATI Radeon HD 4850 CrossFireX Set | 123 W | 210 W |
| ATI Radeon HD 4890 Reference Design | 65 W | 268 W |
| AMD Radeon HD 7970 Reference Design | 21 W | 311 W |
| NVIDIA GeForce GTX 470 Reference Design | 42 W | 278 W |
| NVIDIA GeForce GTX 580 Reference Design | 31 W | 246 W |
| NVIDIA GeForce GTX 570 Reference Design | 31 W | 241 W |
| ATI Radeon HD 5870 Reference Design | 25 W | 240 W |
| ATI Radeon HD 6970 Reference Design | 24 W | 233 W |
| NVIDIA GeForce GTX 465 Reference Design | 36 W | 219 W |
| NVIDIA GeForce GTX 680 Reference Design | 14 W | 243 W |
| Sapphire Radeon HD 4850 X2 11139-00-40R | 73 W | 180 W |
| NVIDIA GeForce 9800 GX2 Reference Design | 85 W | 186 W |
| NVIDIA GeForce GTX 780 Reference Design | 10 W | 275 W |
| NVIDIA GeForce GTX 770 Reference Design | 9 W | 256 W |
| NVIDIA GeForce GTX 280 Reference Design | 35 W | 225 W |
| NVIDIA GeForce GTX 260 (216) Reference Design | 42 W | 203 W |
| ATI Radeon HD 4870 Reference Design | 58 W | 166 W |
| NVIDIA GeForce GTX 560 Ti Reference Design | 17 W | 199 W |
| NVIDIA GeForce GTX 460 Reference Design | 18 W | 167 W |
| AMD Radeon HD 6870 Reference Design | 20 W | 162 W |
| NVIDIA GeForce GTX 670 Reference Design | 14 W | 167 W |
| ATI Radeon HD 5850 Reference Design | 24 W | 157 W |
| NVIDIA GeForce GTX 650 Ti BOOST Reference Design | 8 W | 164 W |
| AMD Radeon HD 6850 Reference Design | 20 W | 139 W |
| NVIDIA GeForce 8800 GT Reference Design | 31 W | 133 W |
| ATI Radeon HD 4770 RV740 GDDR5 Reference Design | 37 W | 120 W |
| ATI Radeon HD 5770 Reference Design | 16 W | 122 W |
| NVIDIA GeForce GTS 450 Reference Design | 22 W | 115 W |
| NVIDIA GeForce GTX 650 Ti Reference Design | 12 W | 112 W |
| ATI Radeon HD 4670 Reference Design | 9 W | 70 W |
It's obvious that AMD's power-saving technologies work incredibly well. Although the card uses a lot of power sitting at the login screen, it drops dramatically once the Windows desktop is finished loading. And if you set your display to sleep, the card draws almost no power at all.
XFX R7970 DD Overclocking
The XFX R7970 Black Edition Double Dissipation video card comes pre-overclocked. While the stock Radeon 7970 runs its GPU at 925MHz and its memory at 1375MHz, out of the box the XFX card is running at 1GHz and 1425MHz, respectively. But maybe there's more to be had. Third party overclocking tools haven't been updated for the Southern Islands GPUs yet, so I was limited to AMD's own Overdrive utility, built into the Catalyst Control Center.
My overclocking consisted of simply slamming the sliders as far to the right as they would go. This gave me a GPU clock of 1125MHz and a memory clock of 1575MHz. I set the card to be able to use the full 20% extra power allowed. I verified the settings using GPU-Z:
The card was completely stable at these settings, and the fans didn't seem any louder except under FurMark testing. Here's how the overclock helped performance:
| Benchmark | Stock FPS | Overclocked FPS | % improvement |
|---|---|---|---|
| Heaven 2.5 | 43.3 | 50.0 | 15.47 |
| 3DMark11 GT1 | 11.98 | 13.43 | 12.10 |
| 3DMark11 GT2 | 13.65 | 15.33 | 12.31 |
| 3DMark11 GT3 | 11.46 | 12.84 | 12.04 |
| 3DMark11 GT4 | 6.51 | 7.32 | 12.44 |
| Aliens vs. Predator | 57.1 | 64.0 | 12.08 |
| Lost Planet 2 Test A | 65.6 | 73.7 | 12.35 |
| Lost Planet 2 Test B | 56.0 | 62.2 | 11.07 |
| Metro 2033 | 27.0 | 30.3 | 12.22 |
| Arkham City | 48 | 49 | 2.08 |
| Crysis 2 | 62.0 | 68.5 | 10.48 |
| Average Improvement | | | 11.33 |
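For the curious, the "% improvement" column and its average are plain arithmetic over the stock and overclocked frame rates; the sketch below reproduces the table's figures.

```python
# Per-test overclocking gain: (overclocked - stock) / stock, expressed
# as a percentage; the summary figure is the mean across all tests.

results = {
    # test: (stock FPS, overclocked FPS)
    "Heaven 2.5":           (43.3, 50.0),
    "3DMark11 GT1":         (11.98, 13.43),
    "3DMark11 GT2":         (13.65, 15.33),
    "3DMark11 GT3":         (11.46, 12.84),
    "3DMark11 GT4":         (6.51, 7.32),
    "Aliens vs. Predator":  (57.1, 64.0),
    "Lost Planet 2 Test A": (65.6, 73.7),
    "Lost Planet 2 Test B": (56.0, 62.2),
    "Metro 2033":           (27.0, 30.3),
    "Arkham City":          (48.0, 49.0),
    "Crysis 2":             (62.0, 68.5),
}

gains = {t: (oc - st) / st * 100 for t, (st, oc) in results.items()}
for test, pct in gains.items():
    print(f"{test:22s} +{pct:5.2f}%")
print(f"Average improvement: {sum(gains.values()) / len(gains):.2f}%")  # 11.33%
```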
Given the card's stability and relatively low temperatures with this overclock, I'm sure further performance increases will be possible when the various third party overclocking tools are updated to work with these cards.
XFX Radeon 7970 DD Final Thoughts
I've tested and owned a number of video cards in the last few years; my main system currently runs a pair of NVIDIA GTX 580 cards. The performance they provide is amazing, but so is the power draw and the noise under load (although they're certainly quieter than their GTX 480 forebears). Even when my system's idling, the GTX 580s are pulling enough power that their heat sinks are quite hot to the touch.
The Radeon 7970 is the first member of AMD's new "Southern Islands" GPU family, and it's the top of the enthusiast-oriented "Tahiti" line (the lower-end "Pitcairn" and "Cape Verde" GPUs will make their appearance in the months ahead). These GPUs represent a number of "firsts" for the industry:
- The first 28nm GPU
- The first DirectX 11.1 GPU
- The first PCI-E 3.0 GPU
...so AMD has a lot to be proud of. There's even more stuff, like support for 4k ultra high resolution displays via the DisplayPort connectors, but it will probably be a while before you can buy one of those.
Starting with this excellent base, XFX has produced a line of four Radeon 7970 video cards. The card I tested, the R7970 Black Edition Double Dissipation, is their top of the line, and it shows in everything from the presentation and appearance of the card to its price (about $50 more than reference model Radeon 7970 video cards).
While the performance of the card was amazing, reports are that most reference cards can hit the same overclocked speeds I reached with this card, so it's reasonable to wonder what you're paying that extra money for. Frankly, there's no way to tell until the third party overclocking tools catch up with these new GPUs and we see if XFX's hand-binned "1%" GPUs and fancy power and cooling solutions have any real-world utility. Basically, you're betting $50 that they will...or you can tell yourself that the card simply looks $50 more cool and interesting than the plain black-box reference design, which is an argument I'd certainly be sympathetic to.
XFX R7970 Black Edition Conclusion
IMPORTANT: Although the rating and final score mentioned in this conclusion are made to be as objective as possible, please be advised that every author perceives these factors differently at various points in time. While we each do our best to ensure that all aspects of the product are considered, there are often times unforeseen market conditions and manufacturer changes which occur after publication that could render our rating obsolete. Please do not base any purchase solely on our conclusion, as it represents our product rating specifically for the product tested, which may differ from future versions. Benchmark Reviews begins our conclusion with a short summary for each of the areas that we rate.
The performance of the card was excellent. Out of 16 separate tests, it beat the reigning single-GPU champ, the NVIDIA GTX 580, in 13 of them, losing out only in the PhysX-heavy Batman: Arkham City and the 1680x1050 Unigine Heaven 2.5 benchmark. The GTX 580 in reference form has relatively limited overclockability; the extra headroom afforded the Radeon 7970, especially with XFX's cooler and power circuitry, holds the promise of even greater performance in the months to come. Cards this powerful are almost wasted on my mere 1920x1200 monitor; they cry out to run a 3D system or triple-monitor setup.
The appearance of the card was striking and original. XFX even made the effort of using a red, lettered top plate so everyone will know what card you have should you display it in a windowed case. I do think the card would look a little "cleaner" with a back plate to cover the exposed circuitry on the rear of the card.
The construction of the card seems very solid. I didn't remove the cooler as I often do in these tests due to time constraints (and, frankly, I was a little intimidated by the sheer number of screws holding the cooler on), so I can't speak to the quality of the PCB, but the fact that the card was cool and stable under the maximum overclock I could achieve with existing tools indicates that it's built pretty well.
AMD's Tahiti GPUs bring new functionality to the graphics card world: extremely efficient power usage, 4K monitor support, PCI-E 3.0 support, and DirectX 11.1. Admittedly some of these features have no immediate use (it's not as if even dual GPU cards saturate a PCI-E 2.0 x16 slot), but it's nice to know they're there. Topping things off is a 3 gigabyte slug of video RAM, which means the card will easily support multi-monitor systems at high resolutions. The only thing lacking is PhysX support. NVIDIA continues to insist PhysX is an open standard, although nobody else has implemented it yet, and there must be a reason why. Nonetheless, PhysX-enabled games such as Batman: Arkham City bring new levels of realism to the consumer space, and not being able to support this feature well is definitely a drawback.
The only possible criticism I have of this card is the value: it's an expensive card, no two ways about it. It costs $120 more than an NVIDIA GTX 580, and in most of my tests the $/FPS were very close...it's as if AMD's bean counters spent weeks testing the performance and using this to determine the price. Compared to AMD's previous top-end GPU, the Radeon 6970, it's much more expensive, both at the retail level ($280 more) as well as a $-per-FPS level. Very few enthusiasts will spend $600 on a video card, and it's certainly less bang-for-the-buck than a Radeon 6970. Granted, it's only $50 more than a reference card, whereas previous factory-overclocked cards often have a much higher "incremental cost" over a reference card. But it's still a lot of money.
But having the bestest and fastest always costs more, and if that's what you want, this is the card to get. At least now the "bestest and fastest" doesn't mean putting up with a jet-engine imitating space heater in your rig.
Pros:
+ Incredible performance
+ Incredible power efficiency
+ Incredibly low noise
+ Striking appearance
+ 3GB video RAM
+ 4K display support
+ Switchable BIOS
+ Probably can be overclocked even higher
Cons:
- Incredible (high) price
- No PhysX support
Ratings:
- Performance: 9.9
- Appearance: 9.5
- Construction: 9.5
- Functionality: 8.5
- Value: 8.0
Final Score: 9.08 out of 10.
Excellence Achievement: Benchmark Reviews Golden Tachometer Award.
Benchmark Reviews invites you to leave constructive feedback below, or ask questions in our Discussion Forum.
Related Articles:
- Radeon HD 5770 CrossFireX Performance Scaling
- OCZ Agility-2 SandForce Solid State Drive
- Intel BOXD945GCLF2D Atom 330 mITX Motherboard
- Arctic MC001-BD HTPC Media Center
- Patriot Inferno SSD Kit PI100GS25SSDR
- Cooler Master GeminII S524 CPU Heatsink
- Samsung 22" Widescreen LCD Monitor 2220WM-HAS
- HP Touchpad Performance Hacking Guide
- XFX GeForce 8800 GTS 320MB XXX Edition Video Card
- NZXT Sentry 2 Touchscreen Fan Controller
Comments
The 570 is like 30% faster than the 6950 - the 6950 is not similar "or better".
##techpowerup.com/reviews/AMD/HD_6950_1_GB/22.html
--- it can't PhysX - it has AA and driver issues, it can't DX11 as well
--- it is not EQUAL it is INFERIOR
the 6950 is a nice card if it doesn't crash and plenty respectable for gaming, but it's NOT GTX570.
----
On the other side of wrong, I've been really surprised by the lackluster reaction of amd fans to these two new SI cards. I mean they really do have some awesome performance, but it appears everyone is down on the price more than ever...
If AMD fans don't start paying for their gaming cards instead of constantly whining about price and pinching every 5 dollars there probably won't be any radeon cards in anyone's future soon enough.
--
Further it appears there might be trouble in speedsville - as I've noticed an awful lot of early adopters (wanted/trade/ are of sites) wanting to ditch their 7000 series card for Nvidia 580's... I suspect the drivers just are not doing it for them...
---
Of course, this only would have applied up until about 2-3 months ago. The stock of 6950's has quickly dwindled as retailers are getting ready to stock the replacement; the 7850.
As for you claim of "driver issues", this is simply stale old FUD. Since AMD took over, 'driver issues' are basically equivalent to Nvidia.
As for PhysX, very few games use it anyway, but if you want it simply spend a few bucks on an old 8800GT or a new low-end card. I'd use a separate card for it anyway, even with a 570, as the PhysX in games like Mafia 2 can be very taxing on the card anyway.
You assertion about DX11 is just flat out wrong. AMD cards have had it since the 5XXX series. Which is ironic, because AMD actually had a DX11 card out 6 months before Nvidia did!
As for the claim that an awful lot of early adopters are wanting to ditch their 7000 series cards, I'm just straight up going to call bull# on that. I'm a member of multiple gaming/overclocking forums, and everyone seems to just absolutely love their 7XXX series chips. They're selling so well retailers are having a hard time keeping them in stock!
--
There ya go hows the feel ?
---
PS - You don't have to be a member of any forums to see what posters are saying - anyone can surf and read - so you shot yourself in the head there for being so stupid.
Also, DX11 performance of radeon is under par - that's why all the amd fans whine Crysis 2 is over tesselated to give Nvidia cards a huge advantage...
--
I mean dude, you denying common knowledge is just about as ridiculous as it gets, but let's face it, that's what amd fans do - and they "make mistakes" like the guy I replied to who had the prices oh so wrong, much in favor of amd, lower than available, OF COURSE.
--
Stop the lying and support or defend fairly. That's all I ask.
47% of 5770 card owners report ongoing unsolved driver issues.
The number is only slightly better for the 6000 series.
Seriously? Come on now, you and I both know that the 7970 is miles ahead of the GTX580 performance wise. Hell, it's even beat by the 7950. What you're doing is no better than what you claim the others here are doing.
Keep in mind that first post you quoted wasn't even mine.
"PS - You don't have to be a member of any forums to see what posters are saying - anyone can surf and read - so you shot yourself in the head there for being so stupid."
PS I never said such a thing. And you've got the gall to call *me* stupid!
"Also, DX11 performance of radeon is under par - that's why all the amd fans whine Crysis 2 is over tesselated to give Nvidia cards a huge advantage"
Equivocating 'DX11 performance' with 'tessellation performance' is again, just dishonest. It's common knowledge that the Evergreen and Northern Islands architectures can't do tesselation as efficiently as Fermi.
"but let's face it, that's what amd fans do"
Seems to me that you, the Nvidia fan, are the only one denying and obfuscating common knowledge here. And by the way, I'm not a fan one way or the other. I buy what gives me the best performance for the price I'm looking for. Sometimes it's Nvidia, sometimes it's AMD.
"and they "make mistakes" like the guy I r4eplied to who had the prices oh so wrong, much in favor of amd, lower than available, OF COURSE."
The prices he quoted were, at the time, completely true. With the 6950 and 6970 being replaced by the 7850/7870, production on the 6950 especially slowed almost to a stop. Product shortages means higher prices. Thus, going back 3-4 months you can see 1GB 6950's hovering around $220, and the 2GB models around $250. Heck, I picked up my 2GB model for $229. Look at Newegg now. Only 2 XFX 6950s listed, and not a single one in stock.
"Stop the lying and support or defend fairly. That's all I ask."
I'm kind of curious why you think it's ok for you to ask this but not follow it yourself?
"47% of 5770 card owners report ongoing unsolved driver issues."
Based on what? Where did that number come from?
Whatever didn't want to waste my time anyway with Steve - another one in my book.
So they do it, I respond in like kind, and I'm the troll.
OK.
Got it.
Fine i'll shut it.
" Just sold my 7970's for $350 less than i bought them for.
They both were monsters 1250/1750mhz 24/7 @ 1225mv.
Sensational hardware, extremely impressive cards but in all honesty, their driver support is horrid and spoils the awesomeness.
Just settled for a gtx 260 sli for $100 for now."
reply : " ... I would really like AMD to devote more resources towards their driver development. "
##overclock.net/t/1196856/official-amd-radeon-hd-7950-7970-owners-club
I did NOT LIE - it's all over the place..... almost everyone is having problems.
" I want to love my 7970 . but the driver issues are killing me. 12.2 whql , random crashes in windows, cant survive a 3dmark loop. 12.3 beta feb 28th , windows dont crash , however bad graphical errors in bf3 and swtor which is the only games i play. I used ATIMAN in between each driver install so i know its not that. The card isnt stable at benchmark loads for more than 15 mins or so. I may have a bad card , but i still have the option to return it for refund."
next: " I've started playing multiplayer BF3 B2K lately, and even at 1100/1500 I am struggling to keep 60fps in intense situations with lots of smoke. I have everything maxed and not using MSAA at all. At stock 925 core I am getting slightly worse performance than the OC 570 (920 core) I had at same settings. I have tried 11.12 WHQL, 12.2/3 WHQL and now using .8960 driverset from Feb 28, all net roughly the same performance.
Is this happening to everyone or just me? "
--
The 570 beats it... LOL
Like I said, I've noticed what's going on.
Nvidia launches are WORLD'S ABOVE the mess amd's are. That goes onward over the life of the cards.
I can't speak for the people who bought the cards, but on the forums I hang out on people have not been whining that I've seen. Personally, I have to wonder why anyone would purchase 7970 cards to run BF3 when a pair of 6950 or 6970 cards will max out that game. I wouldn't purchase them unless I was making a new build from scratch, and then I would still look for the best bang for the buck.
Personally, I have a pair of 6970 cards and they're doing an awesome job. Couldn't be happier. I'm going to skip this next generation anyway since I highly doubt there will be any games that will slow my cards down enough to worry about for at least a year or two.
I bought a 6950 because it was simple to make it into a 6970, and with 2 x 6950 in crossfire you get slightly better performance than 2 x 570. (Unfortunately, at the time, I could only find one reference 6950, so I ended up with a 6950 flashed to a 6970 and a 6970 in crossfire).
tomshardware.com/reviews/crossfire-sli-3-way-scaling,2865.html
And that article is not even 2 x 6970 which would preform slightly better.
I agree that fanboys can be pretty lax with their "facts", but I see it on both sides of the issue. Over the last 20 years I have owned both AMD and Nvidia cards, and both performed as well as I needed them to. This entire discussion is pointless, it's like arguing over a high end Lamborghini vs a Porsche.
If they're really so bad shouldn't they like sue the company for a defective product? Like they did to nvidia after they shipped out millions of defective laptop video cards.
--
" now it's just about equal." The 570 scales like a monster it's NEVER just about equal... and it's NEVER $220.
"I own one.... Only 2 XFX 6950s listed " AT $255 MINIMUM YOU FAILED TO MENTION - SO MYSTERIOUSLY.
--------
As our reviewers in the know have told us straight from the mouth of the devil - AMD purposefully dried up 6000 series so they would not compete against themselves.... "production on the 6950 especially slowed almost to a stop." - THANKS AMD FOR BEING AN EVIL COMPANY SCALPING END USERS - I'M SURE YOU AGREE SINCE YOU CLAIM TO BE NON BIASED ! THANKS FOR AGREEING !
---
THE GTX570 WAS $200 PLUS 7 SHIP ON NEWEGG LAST WEEK.
---
" As for you claim of "driver issues" - YOU'RE A LIAR
" You assertion about DX11 is .."
Jeezus cripes, you're that silly ?
AMD SUCKS WITH DX11, including your 6950 - a 460 SMOKES IT.
---
" Which is ironic, because AMD actually had a DX11 card out 6 months before Nvidia did!"
Ironic is AMD whining nvidia has no dx11 card for 6 months like you just did, then GETTING SMOKED in every Dx11 bench there is and STILL INCAPABLE on many DX11 tests with the 79xx series though finally improved by COPYING NVIDIA ARCHITECTURE.
--
Ironic is 6 months of bragging, then 2 years of FAILURE in every dx11 competition. That's ironic. Now DX11 is "overused" according to you amd fans... and nvidia is CHEATING TURNING IT UP...
LMAO - THAT'S PURE IRONY 10/10 100%
---
" Seriously?" < you copied my words I'll exaggerate too then you ask me if I was serious - you auto copied them over in the reply title - so when you whine I called you stupid know why... I still say so... you are, sorry, stupid.
" "47% of 5770 card owners report ongoing unsolved driver issues.""
you claimed you're a forum member yet you haven't seen it....WHATEVER.NOT THAT MANY LARGE FORUMS DUDE.
--
You're another amd fan period. You got a 6950 so took it personal - you may be the same guy under another name who cares... YOU'RE AMD INVESTED.
---
GTX570 is FASTER OVERALL ABOVE AND AWAY FROM 6950 THAN 7970 IS FROM 580.
HAHAHHAHAHHAHAHHAAAA AAAAAAAAAAAA !!
---
I told the truth - PERIOD
Perhaps you need to scroll down a little more, or look a little closer? Here, I took a screenshot of the relevant graph to help you out. imgur.com/gb37R You'll clearly see my card, the 6950 2GB, at 101%, and the GTX 570 at 119%. Now, my basic arithmetic skills are a bit rusty, but it seems to me that the difference is, in fact, 18%. Actually, since my card is unlocked and overclocked, the difference is about 6%.
"" now it's just about equal." The 570 scales like a monster it's NEVER just about equal... and it's NEVER $220.
"I own one.... Only 2 XFX 6950s listed " AT $255 MINIMUM YOU FAILED TO MENTION - SO MYSTERIOUSLY."
Scales like a monster? Funny how the 6970 beats it by 1% at 2560x1600. I play at 5760x1080 though, so that moving ahead at very high resolutions is extremely relevant.
>THANKS AMD FOR BEING AN EVIL COMPANY SCALPING END USERS - I'M SURE YOU AGREE SINCE YOU CLAIM TO BE NON BIASED ! THANKS FOR AGREEING !
Huh, speaking of 'spinning things'... You of course know it's the OEMs who make the actual boards end users buy, right? So AMD has no control over whether they decide to ramp down production to start production on the new boards.
"THE GTX570 WAS $200 PLUS 7 SHIP ON NEWEGG LAST WEEK."
Yes, a limited number of refurbished EVGA GTX 570 units were available on Newegg last week. That's very different from what you said. The keys here are 'refurbished' and 'limited'.
"YOU'RE A LIAR"
You asserting it just makes me look like the bigger man and you like a fool.
"AMD SUCKS WITH DX11, including your 6950 - a 460 SMOKES IT."
On what games at what settings and resolutions? Where's the data to back this up? Because from what I can see you're looking like a fool again: ##anandtech.com/bench/Product/313?vs=293
How is pointing out the irony in you claiming AMD has "no DX11 cards" whining?
"then GETTING SMOKED in every Dx11 bench there is and STILL INCAPABLE on many DX11 tests"
Ooh boy, look out, all those games with crazy tessellation are going to kill my card! Thank goodness that by the time big tessellation games are out I'll have a new card!
" the 79xx series though finally improved by COPYING NVIDIA ARCHITECTURE."
Ah, so which parts of Fermi does Southern Islands copy?
"Now DX11 is "overused" according to you amd fans"
You keep calling me an AMD fan for some reason. I'm not an AMD fan, I'm a value for money fan. When I built my roommate's PC before the Fermi refresh, I picked a GTX 460 because it was the best performing chip for his budget. Isn't that how you build your computers? Or are you one of those people who's so caught up in vapid brand loyalty e-peen contests to look at it objectively?
>you claimed you're a forum member yet you haven't seen it....WHATEVER.NOT THAT MANY LARGE FORUMS DUDE.
You made the (oddly specific) claim that I knew you couldn't back up with any evidence, so I called you out on it. There are two right choices here. You can either find the evidence to back up your claimed statistic, or you can apologize for making a baseless claim.
"You're another amd fan period."
When the time comes to upgrade my card, I will weigh my options carefully, and pick something that gives me the best performance for my dollar. Last time I wanted the best performing card for about $250. My options were the 560 Ti and the 6950. I found the 6950 2GB on sale for $230, so I bought it, knowing it would outperform the 560 Ti stock in most of the games I play. I knew that I needed the 2GB frame buffer to handle modern high resolution textures on my eyefinity setup. I also knew that with an AMD card I could run all three monitors from a single card, something a 560 Ti wouldn't be able to do. Thus, it was the best option for me at that price point. If I had more money maybe I would have gone for SLI 560 Ti. Who knows. But I got the best card for my situation.
Unless you're me, playing on an eyefinity system where the low frame buffer on the GTX570 brings it down to the level of my unlocked and overclocked 6950.
>HAHAHHAHAHHAHAHHAAAA AAAAAAAAAAAA !!
...Are you twelve?
The difference between 18% and 22% is negligible, and can be accounted for by which performance you take to represent 100%.
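To make that concrete, here's a quick sketch with made-up round numbers (122 FPS vs. 100 FPS, purely hypothetical) showing how the very same gap reads as 22% or 18% depending on which card you take as the 100% baseline:

    # Made-up round numbers, purely to illustrate the baseline effect.
    faster = 122.0   # say, the faster card's frame rate
    slower = 100.0   # say, the slower card's frame rate

    gap_vs_slower = (faster - slower) / slower * 100   # 22.0: "22% faster"
    gap_vs_faster = (faster - slower) / faster * 100   # ~18.0: "18% slower"

    print(f"Taking the slower card as 100%: {gap_vs_slower:.0f}% faster")
    print(f"Taking the faster card as 100%: {gap_vs_faster:.0f}% slower")

Same two cards, same benchmark run; only the choice of baseline changes the headline percentage.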
I usually hate to use labels on people; I really find stereotyping to be rather vulgar. But the label "troll" honestly seems to fit you perfectly.
All your messages are full of accusations and labels such as "stupid" and "liar". I have to wonder if you're a 16-year-old who thinks he knows everything (not that I think all 16-year-olds are like that, but people with attitudes like yours seem to be about that age).
If you can't have a simple discussion without calling people liars and stupid, then why bother at all?
What do you think is going to happen here? Do you think the people reading these comments are going to say to themselves, "Boy, that guy who got his facts wrong is a (place name here)," or do you think they're going to look at the immaturity of the person calling everyone names and say, "Wow, that kid really is a (place name here)"?
Personally I'm surprised it's gone on as long as it has. I would have thought the site would draw a line at name calling. I know I would. If you really want to call people names, why not head to a political discussion site?
Also, learn some math. From the link you supplied, the 570 is 18% faster, not 30%. Personally, I would say 18% is similar.
By the way, I don't know where else you get your ideas, but the 6950 in my system is running a 6970 flash and has not "crashed" once. I also have a 6970 in CrossFire with it, and it performs exactly the same as the 6970. The unlockable 6950 cards are one of the best values available in a video card.
Why are you comparing a 1.5GB 580 to a 3GB 7970?
For one thing, the cost would be almost equal, not to mention the performance comparison would be more intuitive.
Thanx
Because you have to compare flagship with flagship.
Good review.
/index.php?option=com_content&task=view&id=801&Itemid=72
However, (A) another tester has it, not me, and (B) none of these tests would have benefited from the extra memory anyway-- you need a multi-monitor system to make effective use of that much memory.
FYI, from a GTX 580 Classified owner @ 900MHz: I can tell you the difference from 1.5GB to 3GB is minimal for the premium they charge!
Why do people still troll the same conversation every time something new comes out?
The article even makes a reference to the "limited overclockability" of the reference 580 while apparently ignoring the existence of the 580 Classified. I understand that reviewers can't have access to unlimited versions of cards, but that card was reviewed here.
Finally, I might have missed the explanation, but in the $-per-FPS statements, when they refer to the 7970 and the 7970 overclocked, which two of these three "cards" are we talking about?
-The R7970 Reference
-The factory overclocked R7970 from XFX
-The factory overclocked R7970 from XFX further OC'd by sliding all the bars to the right
Since we only had the one XFX 7970 to test, all 7970 results, stock and overclocked, were from this card.
I saw a picture of a GTX 780 performance slide from NVIDIA not long ago, and it promises to be much faster than this card. I believe this card is fast enough for me, but it's too new and way too expensive at this point (unrealistically so).
NVIDIA's new card will tame the prices on these once it's the new shark in the pond.
Yeah, yeah... talk is cheap. Current rumors have Kepler coming out in late spring/early summer, but recall the delays associated with Fermi.
If you're going to be running two high end cards, will you be using a multi-monitor setup? I'm considering setting one up if only to be able to test these insanely powerful video cards available now.
I have two of them so that when my sons and grandsons are here, we can play the same games together. We have a lot of fun.
Each one has a GTX 570 in it now, so I can wait as long as it takes for prices to settle and for plenty of reviews to surface.
That will probably give you the best value overall, and allow you to keep those 570s useful!
One thing that sort of bothers me about expensive products like this is all the marginal tech they install. For example, the ability to support 4K displays. At the moment it's pretty hard to find a monitor at a reasonable price that supports anything over 1080p or 1920x1200. I wonder how much of the card's high price is caused by built-in tech that will never be used? I have wondered the same thing over the years about various products, including high-end video cards.

It's almost like a little kid at the ice cream store "biting off more than he can chew". They always build in effects that can bring their new cards to their knees, and add features most people don't need. If we thought this card was taxed by these tests, can you imagine the huge failure trying to run them at 4K? What would be the point? Of course, this is a flagship product, I guess, and they will release budget versions. I just wish the budget versions would cut the excess and not the performance.
Still, it's a good card and a well done review.
I did notice one thing though. In a couple of those pictures, I couldn't help but notice someone got a fingerprint on the card. I know no one likes to use a sweet video card like that when it's been all messed up with greasy fingerprints.
So, I thought I would offer my services to....dispose of...the greasy culprit. Just say the word and I'll email you my address. You can send me the card and I will dispose of it. I'll start by heating it up...real good....
For now, I'd recommend waiting until more non-reference designs come out, particularly the ASUS DirectCU II and MSI Twin Frozr III versions.
##hardware.fr/focus/59/xfx-radeon-hd-7970-overclocking.html
That chip runs hot.
I agree Furmark is a very brutal test. One thing I love about my GPU right now is that it stays relatively cool even under Furmark.
I have built a lotta boxes with factory-OC'd 560 Tis, and the number of phases in the VRM design seems to have a direct correlation with how high ya can OC the buggers. I also thought it interesting to read in the 7850 reviews that not a single manufacturer will be selling a "reference" design. My guess is that the series' OC ability is significantly impacted by the reference VRM, and everybody sees the extra OC headroom as far more advantageous than saving the expense of beefing up the VRM.
I don't see the 79xx as something I'd be interested in... similar to the way I wouldn't consider a 580 (616 fps) for $500 when I can get two of those 560s above for $410 and get 40% more fps.
The 78xx pricing and performance has my interest piqued... but then again, we haven't seen NVIDIA's entry in this category. I don't see myself buying top-end cards when two mid-range cards offer better performance for less money.
Now, if you were to CrossFire/SLI these cards, the story could be different, as the 69xx series really scales well: you can expect up to a 77% performance increase in some games and tests using 4x AA. I have not tested SLI, but an article I read on Tom's Hardware did a scalability test between NVIDIA and Radeon cards and showed this difference when using two cards. In that case, the 18% difference is pretty much eliminated. Of course, the number of people who use multiple video cards is fairly slim (I use two myself), but I admit that if we're talking stock cards with no overclocking, the 570 does come out approximately 20% better.
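As a rough sketch of why scaling can wash out a single-card lead (the scores and the SLI scaling factor below are illustrative assumptions, not measurements from the Tom's article):

    # Illustrative numbers only: a hypothetical ~18% single-card gap and
    # assumed multi-GPU gains, to show how scaling can erase that gap.
    single_6950 = 100.0   # baseline single-card score
    single_570  = 118.0   # ~18% faster single-card
    cf_gain  = 0.77       # 77% CrossFire gain cited above for the 69xx series
    sli_gain = 0.50       # assumed SLI gain, for the sake of the example

    dual_6950 = single_6950 * (1 + cf_gain)    # 177.0
    dual_570  = single_570  * (1 + sli_gain)   # 177.0

    print(f"6950 CrossFire: {dual_6950:.0f} vs. 570 SLI: {dual_570:.0f}")

With those assumed gains, the two dual-card setups land on exactly the same score; the real-world numbers will vary by game, but that's the shape of the argument.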
You gave the clocks of the 560 Ti's, but not the 6950's or 6970's?
It's common knowledge the newer AMD cards scale better in CrossFire than Nvidia cards do in SLI.
I don't trust your numbers at all. Why? Because they simply contradict every other comparison I've ever seen.
And funnily enough, a simple Google search shows that you've pulled out this exact same line more than once.
Where's the source on these numbers?
Because the numbers I'm looking at here: ##tomshardware.com/charts/2011-gaming-graphics-charts/Enthusiast-Index,2674.html say that yours are obviously biased towards Nvidia (imagine that, an Nvidia defender having biased numbers!)
What's that? In relative performance using enthusiast settings (1920x1080, high anti-aliasing) crossfire'd 6950 2GBs have a relative performance score of 201.58, while the crossfire'd 560 Ti's have a relative performance score of 177.21? Who would have thought the slower card would score worse! It looks even more grim against those who've got their 6950's unlocked and overclocked, or 6970s, which score 215.81.
But hey, let's go over to the 'gamer' index; perhaps the 560 Ti will fare better with lower resolutions and less AA (1680x1050, less or no AA)! ##tomshardware.com/charts/2011-gaming-graphics-charts/Gamer-Index,2673.html
Alas, no such luck. In fact, it looks like the gap got bigger! The 6950 2GB crossfire now sits at 244.53, while the 560 Ti SLI sits at 213.55.
Funny how that works, eh?
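For what it's worth, a quick check of the index scores quoted above bears that out; the only inputs are the four Tom's numbers, the rest is simple arithmetic:

    # Index scores quoted from the Tom's Hardware charts linked above.
    scores = {
        "Enthusiast (1920x1080, high AA)": {"6950 2GB CF": 201.58, "560 Ti SLI": 177.21},
        "Gamer (1680x1050, less/no AA)":   {"6950 2GB CF": 244.53, "560 Ti SLI": 213.55},
    }

    for index, s in scores.items():
        lead = (s["6950 2GB CF"] / s["560 Ti SLI"] - 1) * 100
        print(f"{index}: 6950 CF leads by {lead:.1f}%")
    # Prints ~13.7% for Enthusiast and ~14.5% for Gamer: the gap widens.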
tomshardware.com/reviews/crossfire-sli-3-way-scaling,2865.html
Tried these settings as per the OC.
I get about 30 seconds of gameplay before Windows says "display driver stopped responding and has been restored".
Would be awesome if someone could help me out.