AMD Radeon HD 6970 Video Card
Written by Olin Coles   
Wednesday, 15 December 2010

AMD Radeon HD 6970 Video Card Review

Featuring a 1536-core Cayman GPU, AMD's Radeon HD 6970 competes against NVIDIA's GeForce GTX 570.

With the introduction of AMD's Cayman GPU, the Radeon HD 6970 video card becomes their flagship single-GPU DirectX-11 desktop graphics product. Aside from the dual-GPU Radeon HD 5970, gamers can expect the new Radeon HD 6970 to quench their thirst for demanding graphics power. The Cayman GPU features dual graphics engines with asynchronous dispatch and off-chip geometry buffering, built on a new VLIW4 shader core architecture. Equipped with a 2GB GDDR5 video buffer on a 256-bit interface, the Cayman GPU offers up to 24 SIMD engines and 96 texture units. Additionally, the AMD Radeon HD 6970 introduces several new anti-aliasing modes, including Enhanced Quality Anti-Aliasing (EQAA).

The AMD Radeon HD 6970 takes advantage of improved anti-aliasing features to enhance the DirectX 11 gaming experience. PC gamers want the best value for their money, along with top-end frame rates to help them build a killstreak. AMD didn't set out to build the fastest graphics card imaginable, which would likely result in a product so expensive that only the most affluent enthusiasts could afford it. Instead, the AMD Radeon HD 6970 was designed for the large majority of consumers who want top-shelf performance at a fair price. While accomplishing this, AMD also managed to add accelerated multimedia playback and transcoding, AMD HD3D stereoscopic technology, and support for the 3D Blu-ray Multi-View Codec (MVC).

Benchmark Reviews tests the Radeon HD 6970's graphical frame rate performance using the most demanding PC video game titles and benchmark software available. DirectX-10 favorites such as Crysis Warhead and 3DMark Vantage are included, in addition to DX11 titles such as Aliens vs Predator, Battlefield: Bad Company 2, BattleForge, Lost Planet 2, Mafia II, Metro 2033, and the Unigine Heaven 2.1 benchmark. Built for the value-hungry mainstream gaming market, the AMD Radeon HD 6970 video card delivers top-end performance at a value-added price point.

AMD-Radeon-HD-6970-Video-Card-Front.jpg

According to information presented at the AMD Editor's Day event back on 14 October 2010, approximately 33% of all AMD graphics solutions are sold for the desktop platform with over 25-million Radeon DirectX-11 compatible products shipped to date. In many ways this data reinforces my position in the recent Desktop Platform article series, but it could also mean that manufacturers are listening ever more intently to the changing needs of their remaining consumer base. This doesn't always leave room for innovation, but AMD manages to introduce emerging technologies nevertheless.

For those who have been patiently waiting for news on ATI Stream technology, it's been re-tasked as AMD Accelerated Parallel Processing, or APP technology. AMD Eye-Definition represents their commitment to PC gamers, PC game developers, and the PC gaming industry. Through Eye-Definition AMD delivers their "Gamers Manifesto", which they assert will enable the best experience possible regardless of hardware manufacturer.

Manufacturer: Advanced Micro Devices
Product Name: Radeon HD 6970
Suggested Retail: $369.99 (MSRP)

Full Disclosure: The product sample used in this article was obtained from an outside source.

AMD Radeon HD 6970 Closer Look

Aside from a few decals, the AMD Radeon HD 6970 video card we tested is identical to AMD's reference design. While some consumers may want more flash for their cash, the conservative appearance helps maintain an affordable sales price.

AMD-Radeon-HD-6970-Video-Card-Corner.jpg

AMD's Radeon HD 6900-series video cards already look very similar to the previous generation of 6800 and 5800-series products. In fact, the few discernible differences are the connection header panel, which can add an additional DisplayPort monitor output (if the vendor implements this feature), and the closed rear section. AMD implements dual mini-DisplayPort 1.2 outputs on their 6970, unlike the Sapphire version we recently tested, which used a single DP connection.

AMD-Radeon-HD-6970-Video-Card-Front-Corner.jpg

While there are still two digital DVI ports available on the AMD Radeon HD 6970, only one of them is dual-link (required to support AMD HD3D), while the other is reduced to single-link. AMD's HD3D technology currently supports only one 3D display, with multi-monitor 3D planned for the future.

AMD-Radeon-HD-6970-Video-Card-Top.jpg

Identical to AMD's reference design, the AMD Radeon HD 6970 measures 10.5" long, 1.25" tall, and 3.75" wide. This makes it slightly shorter than the 11"-long Radeon HD 5870, but longer than the 9.75" Radeon HD 6870. Coincidentally, it shares exactly the same dimensions as its closest competitor: NVIDIA's GeForce GTX 570.

PowerColor-Radeon-HD-6970-Video-Card-Bottom.jpg

One particular item I've been hoping for is a focused blower fan orientation. This design angles the blower fan slightly downward to improve the forward force of air, and creates a small separation between adjacent video cards. CrossFire configurations could benefit from such a design, which the competition has used for several generations now to tame their much warmer products.

PowerColor-Radeon-HD-6970-Video-Card-Side.jpg

The AMD Radeon HD 6970 requires one 8-pin and one 6-pin PCI-Express power connection for normal operation. AMD rates the Cayman GPU's power demands at 190 watts for typical gaming use, with a 250W PowerTune maximum, although we confirm this with our own power testing near the end of this article.

AMD-Radeon-HD-6970-Video-Card-Rear-Corner.jpg

With the Radeon HD 6970, cool air is drawn from directly above the blower fan, while exhaust is expelled through the bracket vent and a small side outlet (shown above, far right) that allows a portion of the heated air back inside the computer case. The Radeon HD 6970 lacks any cool air intake vents at the tail end of the video card, behind the blower fan. As a result, gamers with CrossFireX sets must ensure proper cooling inside their computer case for these video cards to receive fresh air.

Radeon Features

  • Microsoft DirectX 11 Support
  • AMD Eyefinity Technology
  • AMD Accelerated Parallel Processing (APP) Technology Designed for DirectCompute 5.0 and OpenCL
  • Accelerated Video Transcoding
  • 40 nm Process Technology
  • Advanced GDDR5 Memory Technology
  • 3rd Generation TeraScale Engine
  • Microsoft Windows 7 Support
  • AMD CrossFireX Technology
  • Enhanced Anisotropic Filtering
  • Display Flexibility, Supports DL-DVI, DP, HDMI and D-Sub
  • HDMI 1.4a support with Deep Color and 7.1 High Bitrate Audio
  • On chip HDCP Support
  • AMD CrossFireX multi-GPU support for highly scalable performance. (Use up to four discrete cards with an AMD 790FX based motherboard)
  • AMD Avivo HD Support
    • ATI Unified Video Decoder 3 (UVD3) for 3D Blu-ray and HD Video.
    • Accelerated Video Transcoding (AVT)
    • DVD Upscaling
    • Dynamic Contrast
    • Built-in HDMI with 7.1 surround sound support
    • Dynamic power management with ATI PowerPlay technology including memory clocks
    • Dolby TrueHD and DTS-HD Master Audio Support

AMD Cayman GPU Details

Specification Radeon HD 6970 Radeon HD 6950
Process 40 nm 40 nm
Die Size 389 mm2 389 mm2
Transistors 2.64 Billion 2.64 Billion
Engine Clock 880 MHz 800 MHz
Stream Processors 1536 1408
Compute Performance 2.7 TFLOPs 2.25 TFLOPs
Texture Units 96 88
Texture Fillrate 84.5 Gtex/s 70.4 Gtex/s
Color ROPs 32 32
Pixel Fillrate 28.2 Gpix/s 25.6 Gpix/s
Z/Stencil ROPs 128 128
Memory Type GDDR5 GDDR5
Memory Clock 1375 MHz 1250 MHz
Memory Data Rate 5.5 Gbps 5.0 Gbps
Memory Bandwidth 176 GB/s 160 GB/s
PowerTune Maximum Power 250 W 200 W
Typical Gaming Power 190 W 140 W
Typical Idle Power 20 W 20 W
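
The fillrate, compute, and bandwidth figures above follow directly from the clocks and unit counts. As a quick sanity check, they can be recomputed (a sketch, assuming one texel per texture unit per clock, one pixel per color ROP per clock, two FLOPs per stream processor per clock, and GDDR5's quad-pumped data rate):

```python
# Recompute Radeon HD 6970 theoretical rates from the spec table above.
engine_clock_ghz = 0.880     # 880 MHz engine clock
texture_units = 96
color_rops = 32
stream_processors = 1536
memory_clock_ghz = 1.375     # 1375 MHz memory clock
bus_width_bits = 256

# Texture fillrate: one texel per texture unit per clock.
tex_fill = engine_clock_ghz * texture_units           # Gtex/s
# Pixel fillrate: one pixel per color ROP per clock.
pix_fill = engine_clock_ghz * color_rops              # Gpix/s
# Compute: 2 FLOPs (one multiply-add) per stream processor per clock.
tflops = engine_clock_ghz * stream_processors * 2 / 1000
# GDDR5 is quad-pumped: data rate per pin = 4x memory clock.
data_rate_gbps = memory_clock_ghz * 4
bandwidth_gbs = data_rate_gbps * bus_width_bits / 8   # GB/s

print(f"{tex_fill:.1f} Gtex/s, {pix_fill:.1f} Gpix/s, "
      f"{tflops:.2f} TFLOPs, {bandwidth_gbs:.0f} GB/s")
```

Each result matches AMD's published specification within rounding.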

VGA Testing Methodology

The Microsoft DirectX-11 graphics API is native to the Microsoft Windows 7 Operating System, which serves as the primary O/S for our test platform. DX11 is also available as a Microsoft Update for the Windows Vista O/S, so our test results apply to both versions of the Operating System. The majority of benchmark tests used in this article measure DX11 performance; however, some high-demand DX10 tests have also been included.

According to the Steam Hardware Survey published for the month ending September 2010, the most popular gaming resolution is 1280x1024 (17-19" standard LCD monitors). However, because this 1.31MP resolution is considered 'low' by most standards, our benchmark performance tests concentrate on higher-demand resolutions: 1.76MP 1680x1050 (22-24" widescreen LCD) and 2.30MP 1920x1200 (24-28" widescreen LCD monitors). These resolutions are more likely to be used by high-end graphics solutions, such as those tested in this article.
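
The megapixel figures quoted above are simply width times height; a trivial check:

```python
# Pixel counts for the resolutions discussed above, in megapixels.
resolutions = [(1280, 1024), (1680, 1050), (1920, 1200)]
for width, height in resolutions:
    print(f"{width}x{height} = {width * height / 1e6:.2f} MP")
```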

In each benchmark test there is one 'cache run' that is conducted, followed by five recorded test runs. Results are collected at each setting with the highest and lowest results discarded. The remaining three results are averaged, and displayed in the performance charts on the following pages.
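
The averaging scheme described above is a trimmed mean; sketched in Python (the frame-rate values are hypothetical, for illustration only):

```python
def benchmark_average(recorded_runs):
    """Discard the highest and lowest of five recorded runs,
    then average the remaining three, as described above."""
    assert len(recorded_runs) == 5
    trimmed = sorted(recorded_runs)[1:-1]  # drop the min and the max
    return sum(trimmed) / len(trimmed)

# Hypothetical frame-rate results from five recorded runs:
runs = [58.2, 61.0, 59.4, 57.9, 60.1]
print(f"{benchmark_average(runs):.1f} FPS")
```

Trimming the extremes this way reduces the influence of a single outlier run on the charted result.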

A combination of synthetic and video game benchmark tests have been used in this article to illustrate relative performance among graphics solutions. Our benchmark frame rate results are not intended to represent real-world graphics performance, as this experience would change based on supporting hardware and the perception of individuals playing the video game.

Intel X58-Express Test System

Radeon-HD-6970-GPUZ.gif

DirectX-10 Benchmark Applications

  • 3DMark Vantage v1.02
    • Extreme Settings: (Extreme Quality, 8x Multisample Anti-Aliasing, 16x Anisotropic Filtering, 1:2 Scale)
  • Crysis Warhead v1.1 with HOC Benchmark
    • Moderate Settings: (Very High Quality, 4x AA, 16x AF, Airfield Demo)

DirectX-11 Benchmark Applications

  • Aliens vs Predator Benchmark 1.0
    • Extreme Settings: (Very High Quality, 4x AA, 16x AF, SSAO, Tessellation, Advanced Shadows)
  • Battlefield: Bad Company 2
    • Extreme Settings: (Highest Quality, HBAO, 8x AA, 16x AF, 180s Fraps Single-Player Intro Scene)
  • BattleForge v1.2
    • Extreme Settings: (Very High Quality, 8x Anti-Aliasing, Auto Multi-Thread)
  • Lost Planet 2 Benchmark 1.0
    • Moderate Settings: (2x AA, Low Shadow Detail, High Texture, High Render, High DirectX 11 Features)
  • Mafia II
    • Extreme Settings: (Antialiasing, 16x AF, High Shadow Quality, High Detail, High Geometry, Ambient Occlusion)
  • Metro 2033
    • Moderate Settings: (Very-High Quality, AAA, 16x AF, Advanced DoF, Tessellation, 180s Fraps Chase Scene)
  • Tom Clancy's HAWX 2 Benchmark 1.0.4
    • Extreme Settings: (Maximum Quality, 8x AA, 16x AF, DX11 Terrain Tessellation)
  • Unigine Heaven Benchmark 2.1
    • Moderate Settings: (High Quality, Normal Tessellation, 16x AF, 4x AA)

Video Card Test Products

Graphics Card GeForce GTX460 Radeon HD5850 Radeon HD6870 GeForce GTX470 Radeon HD5870 Radeon HD6970 GeForce GTX570 GeForce GTX580
GPU Cores 336 1440 1120 448 1600 1536 480 512
Core Clock (MHz) 675 725 900 608 850 880 732 772
Shader Clock (MHz) 1350 N/A N/A 1215 N/A N/A 1464 1544
Memory Clock (MHz) 900 1000 1050 837 1200 1375 950 1002
Memory Amount 1024MB GDDR5 1024MB GDDR5 1024MB GDDR5 1280MB GDDR5 1024MB GDDR5 2048MB GDDR5 1280MB GDDR5 1536MB GDDR5
Memory Interface 256-bit 256-bit 256-bit 320-bit 256-bit 256-bit 320-bit 384-bit

DX10: 3DMark Vantage

3DMark Vantage is a PC benchmark suite designed to test DirectX-10 graphics card performance, and is the latest addition to the 3DMark benchmark series built by Futuremark Corporation. Although 3DMark Vantage requires NVIDIA PhysX to be installed for program operation, only the CPU/Physics test relies on this technology.

3DMark Vantage offers benchmark tests focusing on GPU, CPU, and Physics performance. Benchmark Reviews uses the two GPU-specific tests for grading video card performance: Jane Nash and New Calico. These tests isolate graphical performance, and remove processor dependence from the benchmark results.

  • 3DMark Vantage v1.02
    • Extreme Settings: (Extreme Quality, 8x Multisample Anti-Aliasing, 16x Anisotropic Filtering, 1:2 Scale)

3DMark Vantage GPU Test: Jane Nash

Of the two GPU tests 3DMark Vantage offers, the Jane Nash performance benchmark is slightly less demanding. In a short video scene the special agent escapes a secret lair by water, nearly losing her shirt in the process. Benchmark Reviews tests this DirectX-10 scene at 1680x1050 and 1920x1200 resolutions, and uses Extreme quality settings with 8x anti-aliasing and 16x anisotropic filtering. The 1:2 scale is utilized, and is the highest this test allows. By maximizing the processing levels of this test, the scene creates the highest level of graphical demand possible and sorts the strong from the weak.

3dMark_Vantage_Jane_Nash_Benchmark.jpg

Jane Nash Extreme Quality Settings

3DMark Vantage GPU Test: New Calico

New Calico is the second GPU test in the 3DMark Vantage test suite. Of the two GPU tests, New Calico is the most demanding. In a short video scene featuring a galactic battleground, there is a massive display of busy objects across the screen. Benchmark Reviews tests this DirectX-10 scene at 1680x1050 and 1920x1200 resolutions, and uses Extreme quality settings with 8x anti-aliasing and 16x anisotropic filtering. The 1:2 scale is utilized, and is the highest this test allows. Using the highest graphics processing level available allows our test products to separate themselves and stand out (if possible).

3dMark_Vantage_New_Calico_Benchmark.jpg

New Calico Extreme Quality Settings


DX10: Crysis Warhead

Crysis Warhead is an expansion pack based on the original Crysis video game. Crysis Warhead is based in the future, where an ancient alien spacecraft has been discovered beneath the Earth on an island east of the Philippines. Crysis Warhead uses a refined version of the CryENGINE2 graphics engine. Like Crysis, Warhead uses the Microsoft Direct3D 10 (DirectX-10) API for graphics rendering.

Benchmark Reviews uses the HOC Crysis Warhead benchmark tool to test and measure graphic performance using the Airfield 1 demo scene. This short test places a high amount of stress on a graphics card because of detailed terrain and textures, but also for the test settings used. Using the DirectX-10 test with Very High Quality settings, the Airfield 1 demo scene receives 4x anti-aliasing and 16x anisotropic filtering to create maximum graphic load and separate the products according to their performance.

Using the highest quality DirectX-10 settings with 4x AA and 16x AF, only the most powerful graphics cards are expected to perform well in our Crysis Warhead benchmark tests. DirectX-11 extensions are not supported in Crysis: Warhead, and SSAO is not an available option.

  • Crysis Warhead v1.1 with HOC Benchmark
    • Moderate Settings: (Very High Quality, 4x AA, 16x AF, Airfield Demo)

Crysis_Warhead_Benchmark.jpg

Crysis Warhead Moderate Quality Settings


DX11: Aliens vs Predator

Aliens vs. Predator is a science fiction first-person shooter video game, developed by Rebellion and published by Sega for Microsoft Windows, Sony PlayStation 3, and Microsoft Xbox 360. Aliens vs. Predator utilizes Rebellion's proprietary Asura game engine, which had previously been used in the studio's Rogue Warrior. The self-contained benchmark tool is used for our DirectX-11 tests, which push the Asura game engine to its limit.

In our benchmark tests, Aliens vs. Predator was configured to use the highest quality settings with 4x AA and 16x AF. DirectX-11 features such as Screen Space Ambient Occlusion (SSAO) and tessellation have also been included, along with advanced shadows.

  • Aliens vs Predator
    • Extreme Settings: (Very High Quality, 4x AA, 16x AF, SSAO, Tessellation, Advanced Shadows)

Aliens-vs-Predator_DX11_Benchmark.jpg

Aliens vs Predator Extreme Quality Settings


DX11: Battlefield Bad Company 2

The Battlefield franchise has been known to demand a lot from PC graphics hardware. DICE (Digital Illusions CE) incorporated their Frostbite-1.5 game engine and Destruction-2.0 feature set into Battlefield: Bad Company 2. The game features destructible environments using Frostbite Destruction-2.0, and adds gravitational bullet-drop effects for projectiles fired over long distances. The Frostbite-1.5 engine uses DirectX-10 as its primary graphics path, with improved performance and softened dynamic shadows added for DirectX-11 users.

At the time Battlefield: Bad Company 2 was published, DICE was also working on the Frostbite-2.0 game engine. This upcoming engine will include native support for DirectX-10.1 and DirectX-11, as well as parallelized processing support for two to eight threads, which will improve performance for users with an Intel Core-i7 processor. Unfortunately, the Extreme Edition Intel Core i7-980X six-core CPU, with twelve threads, will still not see full utilization.

In our benchmark tests of Battlefield: Bad Company 2, the first three minutes of action in the single-player raft night scene are captured with FRAPS. Relative to the online multiplayer action, these frame rate results are nearly identical to daytime maps with the same video settings. The Frostbite-1.5 game engine in Battlefield: Bad Company 2 appears to equalize our test set of video cards, and despite AMD's sponsorship of the game it still plays well using any brand of graphics card.

  • Battlefield: Bad Company 2
    • Extreme Settings: (Highest Quality, HBAO, 8x AA, 16x AF, 180s Fraps Single-Player Intro Scene)

Battlefield-Bad-Company-2_Benchmark.jpg

Battlefield Bad Company 2 Extreme Quality Settings


DX11: BattleForge

BattleForge is a free Massive Multiplayer Online Role Playing Game (MMORPG) developed by EA Phenomic with DirectX-11 graphics capability. Combining strategic cooperative battles, the community of MMO games, and trading-card gameplay, BattleForge players are free to put their creatures, spells, and buildings into any combinations they see fit. These units are represented as digital cards from which you build your own unique army. With minimal resources and a custom tech tree to manage, the gameplay is remarkably accessible and action-packed.

Benchmark Reviews uses the built-in graphics benchmark to measure performance in BattleForge, using Very High quality settings (detail) and 8x anti-aliasing with auto multi-threading enabled. BattleForge is one of the first titles to take advantage of DirectX-11 in Windows 7, and offers a very robust color range throughout the busy battleground landscape. The charted results illustrate how performance measures up between video cards when Screen Space Ambient Occlusion (SSAO) is enabled.

  • BattleForge v1.2
    • Extreme Settings: (Very High Quality, 8x Anti-Aliasing, Auto Multi-Thread)

BattleForge_DX11_Benchmark.jpg

BattleForge Extreme Quality Settings

EDITOR'S NOTE: Several days prior to launch Benchmark Reviews alerted AMD to performance concerns with BattleForge. After launch AMD responded:

"We are aware that there are some abnormal performance results in BattleForge with our new AMD Radeon HD 6900 Series graphics card. Keep in mind this is a new VLIW4 shader architecture and we are still fine tuning the shader compilation. We will be able to post a hotfix for Battleforge shortly that will provide a noticeable increase in performance."


DX11: Lost Planet 2

Lost Planet 2 is the second installment in the saga of the planet E.D.N. III, ten years after the story of Lost Planet: Extreme Condition. The snow has melted and the lush jungle life of the planet has emerged with angry and luscious flora and fauna. With the new environment comes the addition of DirectX-11 technology to the game.

Lost Planet 2 takes advantage of DX11 features including tessellation and displacement mapping on water, level bosses, and player characters. In addition, soft body compute shaders are used on 'Boss' characters, and wave simulation is performed using DirectCompute. These cutting edge features make for an excellent benchmark for top-of-the-line consumer GPUs.

The Lost Planet 2 benchmark offers two different tests, which serve different purposes. This article uses tests conducted on benchmark B, which is designed to be a deterministic and effective benchmark tool featuring DirectX 11 elements.

  • Lost Planet 2 Benchmark 1.0
    • Moderate Settings: (2x AA, Low Shadow Detail, High Texture, High Render, High DirectX 11 Features)

Lost-Planet-2_DX11_Benchmark.jpg

Lost Planet 2 Moderate Quality Settings


DX9+SSAO: Mafia II

Mafia II is a single-player third-person action shooter developed by 2K Czech for 2K Games, and is the sequel to Mafia: The City of Lost Heaven, released in 2002. Players assume the life of World War II veteran Vito Scaletta, the son of a small Sicilian family that immigrated to Empire Bay. Growing up in the slums of Empire Bay teaches Vito about crime, and he's forced to join the Army in lieu of jail time. After sustaining wounds in the war, Vito returns home and quickly finds trouble as he again partners with his childhood friend and accomplice Joe Barbaro. Vito and Joe combine their passion for fame and riches to take on the city, and work their way to the top in Mafia II.

Mafia II is an SSAO-enabled PC video game built on 2K Czech's proprietary Illusion game engine, which succeeds the LS3D game engine used in Mafia: The City of Lost Heaven. In our Mafia-II Video Game Performance article, Benchmark Reviews explored characters and gameplay while illustrating how well this game delivers APEX PhysX features on both AMD and NVIDIA products. Thanks to APEX PhysX extensions that can be processed by the system's CPU, Mafia II offers gamers equal access to high-detail physics regardless of video card manufacturer.

  • Mafia II
    • Extreme Settings: (Antialiasing, 16x AF, High Shadow Quality, High Detail, High Geometry, Ambient Occlusion)

Mafia2_DX11_Benchmark.jpg

Mafia II Extreme Quality Settings


DX11: Metro 2033

Metro 2033 is an action-oriented video game with a combination of survival horror, and first-person shooter elements. The game is based on the novel Metro 2033 by Russian author Dmitry Glukhovsky. It was developed by 4A Games in Ukraine and released in March 2010 for Microsoft Windows. Metro 2033 uses the 4A game engine, developed by 4A Games. The 4A Engine supports DirectX-9, 10, and 11, along with NVIDIA PhysX and GeForce 3D Vision.

The 4A engine is multi-threaded such that only PhysX has a dedicated thread; it uses a task model without any pre-conditioning or pre/post-synchronizing, allowing tasks to be executed in parallel. The engine can utilize a deferred shading pipeline and uses tessellation for greater geometric detail, and it also offers HDR (complete with blue shift), real-time reflections, color correction, film grain and noise, and multi-core rendering.

Metro 2033 features superior volumetric fog, double PhysX precision, object blur, sub-surface scattering for skin shaders, parallax mapping on all surfaces, and greater geometric detail with less aggressive LODs. Using PhysX, the engine delivers destructible environments, cloth and water simulations, and particles that can be fully affected by environmental factors.

NVIDIA has been diligently working to promote Metro 2033, and for good reason: it's one of the most demanding PC video games we've ever tested. When their flagship GeForce GTX 480 struggles to produce 27 FPS with DirectX-11 anti-aliasing turned down to its lowest setting, you know that only the strongest graphics processors will generate playable frame rates. All of our tests enable Advanced Depth of Field and Tessellation effects, but disable advanced PhysX options.

  • Metro 2033
    • Moderate Settings: (Very-High Quality, AAA, 16x AF, Advanced DoF, Tessellation, 180s Fraps Chase Scene)

Metro-2033_DX11_Benchmark.jpg

Metro 2033 Moderate Quality Settings


DX11: Unigine Heaven 2.1

The Unigine Heaven 2.1 benchmark is a free, publicly available tool that exercises DirectX-11 graphics capabilities under Windows 7 or an updated Windows Vista Operating System. It renders an enchanting scene of floating islands with a tiny village hidden in the cloudy skies, and an interactive mode lets you explore the intricate world yourself. Through its advanced renderer, Unigine was one of the first to showcase art assets with tessellation, bringing compelling visual finesse and demonstrating what the technology can offer for enriching 3D gaming.

The distinguishing feature of the Unigine Heaven benchmark is hardware tessellation: a scalable technology for automatic subdivision of polygons into smaller and finer pieces, so that developers can give their games a more detailed look almost free of charge in terms of performance. Thanks to this procedure, the rendered image approaches the boundary of realistic visual perception.

Although Heaven-2.1 was recently released and used for our DirectX-11 tests, the benchmark results were extremely close to those obtained with Heaven-1.0 testing. Since only DX11-compliant video cards will properly test on the Heaven benchmark, only those products that meet the requirements have been included.

  • Unigine Heaven Benchmark 2.1
    • Moderate Settings: (High Quality, Normal Tessellation, 16x AF, 4x AA)

Unigine_Heaven_DX11_Benchmark.jpg

Heaven 2.1 Moderate Quality Settings


Radeon HD 6970 Temperatures

Benchmark tests are always nice, so long as you care about comparing one product to another. But when you're an overclocker, gamer, or merely a PC hardware enthusiast who likes to tweak things on occasion, there's no substitute for good information. Benchmark Reviews has a very popular guide on Overclocking Video Cards, which gives detailed instruction on how to tweak a graphics card for better performance. Of course, not every video card has overclocking headroom. Some products run so hot that they can't suffer any higher temperatures than they already do. This is why we measure the operating temperature of the video card products we test.

To begin my testing, I use GPU-Z to measure the temperature at idle as reported by the GPU. Next I use FurMark's "Torture Test" to generate maximum thermal load, and record GPU temperatures at high-power 3D mode. FurMark does two things extremely well: it drives the thermal output of any graphics processor much higher than any video game realistically could, and it does so with consistency every time. FurMark works great for testing the stability of a GPU as the temperature rises to the highest possible output. During all tests, the ambient room temperature remained at a stable 20°C. The temperatures discussed below are absolute maximum values, and may not be representative of real-world temperatures while gaming:

Video Card Idle Temp Loaded Temp Ambient
ATI Radeon HD 5850 39°C 73°C 20°C
AMD Radeon HD 6850 42°C 77°C 20°C
AMD Radeon HD 6870 39°C 74°C 20°C
ATI Radeon HD 5870 33°C 78°C 20°C
NVIDIA GeForce GTX 480 36°C 82°C 20°C
NVIDIA GeForce GTX 570 32°C 82°C 20°C
AMD Radeon HD 6970 35°C 81°C 20°C
NVIDIA GeForce GTX 580 32°C 70°C 20°C

The original plans for AMD's Cayman GPU called for a 32nm die process, which was later cancelled and reworked into the familiar 40nm process we've seen for the past several product generations. As a direct result, the 40nm AMD Cayman GPU is larger, uses more power, and operates at higher temperatures than the initial design would have delivered. The Cayman GPU measures 389 mm², which is only slightly larger than the 336 mm² Cypress GPU (5870), but far larger than the 255 mm² Barts GPU (6870). The transistor count differs as well: 2.15 billion on Cypress, 1.7 billion on Barts, and 2.64 billion on Cayman.
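Since all three chips share TSMC's 40nm process, their transistor densities should come out roughly equal. A small sketch using only the figures quoted above:

```python
# Transistor density from the die sizes and counts quoted in the review.
gpus = {
    "Cypress (HD 5870)": (336, 2.15e9),  # (die area mm^2, transistors)
    "Barts (HD 6870)":   (255, 1.70e9),
    "Cayman (HD 6970)":  (389, 2.64e9),
}

for name, (area_mm2, transistors) in gpus.items():
    density = transistors / area_mm2 / 1e6  # million transistors per mm^2
    print(f"{name}: {density:.2f} MTr/mm^2")
```

All three land in the 6.4-6.8 million transistors per mm² range, confirming that Cayman's extra transistors, rather than any process change, account for its larger die.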

AMD-Radeon-HD-6970-Video-Card-Exposed.jpg

At idle, the Radeon HD 6970 measured 35°C in a 20°C ambient room. This is roughly the same temperature that late-production GeForce GTX 480s rested at, but still a few degrees warmer at idle than the latest GeForce GTX 570 and 580. What used to sound like an NVIDIA-specific trait has quickly reversed, leaving AMD with the hotter-running GPUs. The new AMD Radeon HD 6970 improves on the recently released 6870 by a few degrees, but the older (and now end-of-life) Radeon HD 5870 measured a few degrees cooler at idle.

Under 100% GPU load, the heat output rises to levels not seen from AMD since the Radeon HD 4800-series. Measured at 20°C ambient room temperature, the Radeon HD 6970 reached 81°C after ten minutes stressed under full load. This places the Radeon HD 6970 right on par with its closest competitor, the GeForce GTX 570 (82°C). Unfortunately, the Radeon HD 6870 and 5870 both run a few degrees cooler under full load. Overall, the AMD Radeon HD 6970 has the 40nm Cayman GPU to blame for its higher temperatures, which would not have been the case had the original 32nm die process been possible. Let's see how this impacts power consumption...

VGA Power Consumption

For power consumption tests, Benchmark Reviews utilizes an 80-PLUS GOLD certified OCZ Z-Series Gold 850W PSU, model OCZZ850. This power supply unit has been tested to provide over 90% typical efficiency by Chroma System Solutions. To measure isolated video card power consumption, Benchmark Reviews uses the Kill-A-Watt EZ (model P4460) power meter made by P3 International.

A baseline measurement is taken without any video card installed in our test computer system, which is allowed to boot into Windows 7 and rest idle at the login screen before power consumption is recorded. Once the baseline reading has been taken, the graphics card is installed and the system is again booted into Windows and left idle at the login screen before taking the idle reading. Our final loaded power consumption reading is taken with the video card running a stress test using FurMark. Below is a chart with the isolated video card power consumption (measured system total minus the baseline without a video card) displayed in watts for each test product:
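The subtraction described above can be sketched in a few lines. The meter readings here are hypothetical values chosen only to illustrate the method (they happen to reproduce the Radeon HD 6970's 24W/233W results):

```python
# Hypothetical Kill-A-Watt readings (watts) illustrating the
# isolated-draw method: total system power is measured with and
# without the card, and the difference is attributed to the card.
baseline_idle = 92      # system idle at login screen, no discrete card (assumed)
with_card_idle = 116    # same state with the video card installed (assumed)
with_card_load = 325    # FurMark stress test running (assumed)

idle_draw = with_card_idle - baseline_idle
loaded_draw = with_card_load - baseline_idle
print(f"idle: {idle_draw} W, loaded: {loaded_draw} W")
```

Note that the loaded figure still subtracts the idle baseline, so any extra CPU power drawn while feeding the GPU is attributed to the card; this is one reason the chart carries a +/- 5W tolerance.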

Video Card Power Consumption by Benchmark Reviews

VGA Product Description (sorted by combined total power): Idle Power / Loaded Power
NVIDIA GeForce GTX 480 SLI Set: 82 W / 655 W
NVIDIA GeForce GTX 590 Reference Design: 53 W / 396 W
ATI Radeon HD 4870 X2 Reference Design: 100 W / 320 W
AMD Radeon HD 6990 Reference Design: 46 W / 350 W
NVIDIA GeForce GTX 295 Reference Design: 74 W / 302 W
ASUS GeForce GTX 480 Reference Design: 39 W / 315 W
ATI Radeon HD 5970 Reference Design: 48 W / 299 W
NVIDIA GeForce GTX 690 Reference Design: 25 W / 321 W
ATI Radeon HD 4850 CrossFireX Set: 123 W / 210 W
ATI Radeon HD 4890 Reference Design: 65 W / 268 W
AMD Radeon HD 7970 Reference Design: 21 W / 311 W
NVIDIA GeForce GTX 470 Reference Design: 42 W / 278 W
NVIDIA GeForce GTX 580 Reference Design: 31 W / 246 W
NVIDIA GeForce GTX 570 Reference Design: 31 W / 241 W
ATI Radeon HD 5870 Reference Design: 25 W / 240 W
ATI Radeon HD 6970 Reference Design: 24 W / 233 W
NVIDIA GeForce GTX 465 Reference Design: 36 W / 219 W
NVIDIA GeForce GTX 680 Reference Design: 14 W / 243 W
Sapphire Radeon HD 4850 X2 11139-00-40R: 73 W / 180 W
NVIDIA GeForce 9800 GX2 Reference Design: 85 W / 186 W
NVIDIA GeForce GTX 780 Reference Design: 10 W / 275 W
NVIDIA GeForce GTX 770 Reference Design: 9 W / 256 W
NVIDIA GeForce GTX 280 Reference Design: 35 W / 225 W
NVIDIA GeForce GTX 260 (216) Reference Design: 42 W / 203 W
ATI Radeon HD 4870 Reference Design: 58 W / 166 W
NVIDIA GeForce GTX 560 Ti Reference Design: 17 W / 199 W
NVIDIA GeForce GTX 460 Reference Design: 18 W / 167 W
AMD Radeon HD 6870 Reference Design: 20 W / 162 W
NVIDIA GeForce GTX 670 Reference Design: 14 W / 167 W
ATI Radeon HD 5850 Reference Design: 24 W / 157 W
NVIDIA GeForce GTX 650 Ti BOOST Reference Design: 8 W / 164 W
AMD Radeon HD 6850 Reference Design: 20 W / 139 W
NVIDIA GeForce 8800 GT Reference Design: 31 W / 133 W
ATI Radeon HD 4770 RV740 GDDR5 Reference Design: 37 W / 120 W
ATI Radeon HD 5770 Reference Design: 16 W / 122 W
NVIDIA GeForce GTS 450 Reference Design: 22 W / 115 W
NVIDIA GeForce GTX 650 Ti Reference Design: 12 W / 112 W
ATI Radeon HD 4670 Reference Design: 9 W / 70 W
* Results are accurate to within +/- 5W.

As we previously mentioned in the Radeon HD 6970 Temperatures section, the Cayman GPU was originally designed for 32nm but was ultimately constructed at 40nm. This increased the die size, and raised the operating temperature to levels that AMD isn't generally known for. Judging from the chart of results above, it appears that the 40nm Cayman GPU may not have created the power monster we anticipated. The AMD Radeon HD 6970 requires one eight-pin and one six-pin PCI-E power connection for proper operation. Resting at idle with no GPU load, the Radeon HD 6970 consumed only 24W of electricity. Compensating for a small margin of error, this falls roughly in-line with idle power draw from the ATI Radeon HD 5870. The noteworthy idle results were actually 7W less than the competing GeForce GTX 570 video card, but not quite as efficient as the 20W Radeon HD 6870. But what about under full 3D load?

Once 3D applications begin to demand power from the Cayman GPU, electrical power consumption climbs to 233 watts. Measured at full throttle with FurMark's 3D torture load, these results were 8W lower than the GeForce GTX 570 (241W maximum power draw), and 7W less than the ATI Radeon HD 5870. Overall it seems that the 40nm Cayman GPU is fairly efficient, especially considering the 2.64 billion transistors it feeds. The graphical performance more or less matched the GeForce GTX 570, so it's nice to see the Radeon HD 6970 dropping a few watts from the power consumption.

AMD Radeon HD 6970 Conclusion

IMPORTANT: Although the rating and final score mentioned in this conclusion are made to be as objective as possible, please be advised that every author perceives these factors differently at various points in time. While we each do our best to ensure that all aspects of the product are considered, there are oftentimes unforeseen market conditions and manufacturer changes which occur after publication that could render our rating obsolete. Please do not base any purchase solely on our conclusion, as it represents our product rating specifically for the product tested, which may differ from future versions. Benchmark Reviews begins our conclusion with a short summary for each of the areas that we rate.

Overall, graphics frame rate performance has the AMD Radeon HD 6970 matched nicely against the GeForce GTX 570. Measured at stock (reference) speeds across ten different tests, the Radeon HD 6970 was slightly ahead in half of them and trailed deeply in the other half. We've excluded HAWX 2 from this review until AMD's drivers can compensate for the performance skew. The DirectX 10 tests put the GeForce GTX 570 well ahead, while many of the DirectX 11 tests pulled the Radeon HD 6970 ahead by a few FPS:

3DMark Vantage has the 6970 ahead by 2.9% (1680x1050) or 5.8% (1920x1200) in the Jane Nash test, but it then sinks 16.9/9.4% in New Calico. Crysis Warhead pushed the GTX 570 16.2/9.4% ahead, but DX11 Aliens vs Predator pushed back 10.4/12.3% in favor of the Radeon HD 6970. Shader-intensive games such as Battlefield: Bad Company 2 really strained the Radeon HD 6970, giving the GTX 570 a 31.9/25.3% lead. BattleForge did the same, handing the GTX 570 a 53.5/57.1% lead over the 6970. Lost Planet 2 also piled the results in favor of the GTX 570, resulting in a 45.1/41.9% lead over the 6970. Then, thankfully, the Radeon HD 6970 fought back in NVIDIA-strong games like Mafia II, producing a 0.9/6.0% lead over the GTX 570. Metro 2033 gave the 6970 a 3.1/5.2% edge, and the Heaven 2.1 benchmark offered a 2.0/7.5% difference in favor of the 6970.
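For reference, the percentage leads quoted above follow the standard relative-difference formula, sketched here with hypothetical frame rates rather than figures from this review:

```python
def pct_lead(fps_a, fps_b):
    """Percentage by which result A leads result B."""
    return (fps_a - fps_b) / fps_b * 100.0

# Hypothetical example: 52.5 FPS versus 50.0 FPS is a 5.0% lead.
print(f"{pct_lead(52.5, 50.0):.1f}%")
```

The two numbers quoted per test in the paragraph above are simply this calculation repeated at the 1680x1050 and 1920x1200 resolutions.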

Overclocking the Radeon HD 6970 doesn't work like it has in the past, and we'll be publishing a separate article with those details. The AMD Radeon HD 6970 hit the limit of AMD's Catalyst Control Center (CCC) software with a noteworthy overclock of 950MHz (+70MHz). We attempted overclocking with an unpublished beta version of MSI Afterburner, but CCC is the only software to include the new PowerTune functionality, which allows the video card to be overclocked beyond its TDP. This directly enables users to increase Cayman GPU clock speeds when overclocking. Since our CCC software was also a non-public media release, we're waiting on a public version to confirm that this is standard functionality.

We didn't test AMD HD3D technology, or the impact it has on video game frame rates, primarily because the middleware was not made available and only two monitors currently exist to support it: the Zalman Trimon 3D and the iZ3D H220z1. At the time of launch, Viewsonic had announced their 120Hz Fuhzion 3D monitor, but the product had not yet shipped. AMD HD3D technology presently supports one display, using either DL-DVI or DP monitors, or an HDMI 1.4 3D HDTV, so 3D movie playback on one of the few compatible 3D TVs is a more likely application of this feature.

AMD-Radeon-HD-6970-Video-Card-Front-Angle.jpg

Appearance is a more subjective matter since the rating doesn't have benchmark scores to fall back on. Partners traditionally offer their own unique twist on the design, with improved cooling solutions and colorful fan shroud designs. The reference design allows nearly all of the heated air to externally exhaust outside of the computer case, which could be critically important to overclockers wanting the best possible environment for their computer hardware. This also preserves the Cayman GPU, since the transition to 32nm wasn't achieved and the heat output with standard clock speeds is still considered moderately high.

I personally consider the constant march toward a smaller die process rather insignificant in the grand scheme of things, as NVIDIA once proved when their GeForce GTX 280 successfully launched at 65nm instead of 55nm. Taiwan Semiconductor Manufacturing Company (TSMC) is already building 32nm processors for other clientele, and AMD has noted that Moore's Law still applies - just not in regard to their Cayman GPU. They claim that as a die process becomes smaller, it also becomes much more costly to develop and produce. And sometimes the manufacturer simply can't complete the project as planned, as was the case with TSMC.

There are six PLX display channel bridges present on the Radeon HD 6970 video card, which open up the card's display functionality. Two channels are dedicated to the only dual-link DVI port available on this video card, while the other DVI port remains single-link and consumes only one channel. HDMI 1.4a uses one channel, and the two mini-DisplayPort outputs use one channel each. The real innovation comes with DisplayPort 1.2, which can use a Multi-Stream Transport hub to drive multiple displays at different resolutions, refresh rates, and color depths in Eyefinity.
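The channel accounting in the paragraph above sums exactly to the six available channels; trivially:

```python
# Display-channel budget described in the review: six channels total.
channels = {
    "dual-link DVI": 2,      # the sole dual-link port consumes two channels
    "single-link DVI": 1,
    "HDMI 1.4a": 1,
    "mini-DisplayPort 1": 1,
    "mini-DisplayPort 2": 1,
}
total = sum(channels.values())
print(f"channels used: {total} of 6")
```

This budget is why the card cannot drive two dual-link DVI displays at once, and why MST hubs on the DisplayPort outputs are the path to more than a handful of monitors.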

Value is a fast moving target, and please believe me when I say that it changes by the minute in this industry. Delivering better performance and additional features at a lower cost to consumers has been the cornerstone of AMD's business philosophy for more than a decade, and they've repeatedly demonstrated this resolve in each of their many battles with Intel CPUs and NVIDIA GPUs. I'm not entirely convinced that the AMD Radeon HD 6970 continues this tradition of giving more for less, since the $369.99 MSRP we were provided is about $20 higher than the NVIDIA GeForce GTX 570. Making matters worse, most recent AMD video card launch prices have actually gone up a few weeks later. In my opinion, $340-$350 is a better price point for this product, allowing it to meet the competition head-on.

In summary, the Radeon HD 6970 matches performance, temperatures, and power consumption very closely with the GeForce GTX 570. Based on the $370 MSRP, it would be great to see the price come down $20-30 to compete more closely against the GeForce GTX 570, especially considering that HD3D and Fusion technology have yet to tip the scales in AMD's favor. Still, products like the AMD Radeon HD 6970 introduce more flexibility for display devices, especially where multi-monitor Eyefinity is used. Stereoscopic 3D gaming is possible with the right equipment, as are 3D Blu-ray and 3D DVD playback. The Cayman GPU may not have been built on the 32nm die it was originally designed for, but the Radeon HD 6970 still offers stellar gaming performance that rivals the older Radeon HD 5870, as well as the recently introduced Radeon HD 6870. Overall I consider the AMD Radeon HD 6970 to be a good video card intended for top-end gamers, but I'm not convinced that improved Eyefinity support or added stereoscopic 3D functionality will impress consumers until these technologies mature. Thankfully the Radeon HD 6970 shines as a solid gaming product, and gives the NVIDIA GeForce GTX 570 a fierce run for its money.

What do you think of the Radeon HD 6970 video card? Leave comments below, or ask questions in our Forum.

Pros:

+ Excellent top-end DX11 graphics performance
+ Cayman GPU includes stereoscopic 3D functionality
+ Nearly silent cooling fan at idle, very quiet under load
+ Fan exhausts most heated air outside of case
+ Multi-view CODEC enables 3D Blu-ray playback
+ Improves DisplayPort to 1.2 with display chaining
+ Supports CrossFireX functionality

Cons:

- Expensive enthusiast product
- Limited AMD HD3D product support



Comments 

 
# RE: AMD Radeon HD 6970 Video Card - Harry 2010-12-14 20:16
I think 6850 CF is a cheaper and a better choice since it beats GTX 580 price and performance wise.
 
 
# RE: RE: AMD Radeon HD 6970 Video Card - Japipapi 2011-06-14 10:51
Just run 2 of these for the same price as 1 gtx 580 lol besides this is not too far behind the 580
 
 
# disappointing - Trajan Long 2010-12-14 20:33
Wow! After all the hype the 6970 is pretty disappointing. That's why competition is great. Without AMD we wouldn't have the GTX 580. NOw AMD better get themselves in gear and come up with another champ or Nvidia will get lazy.
 
 
# RE: AMD Radeon HD 6970 Video Card - Baz 2010-12-14 20:42
Ohhhhhh...... The disappointment. Can't wait to hear the AMD camps "It's all about price/performance" AND "But our cards aren't as hot as a Fermi Nuclear Reactor in breach mode". Except that according to the review, the 6970 is about as hot under load as the old GTX480.... Looks like AMD camps are going to have the temperatures they were joking about on the original Fermi come back and bite them on the ass so they will have to revise their comments this time around.
Nevertheless, competition in this industry is great for everyone as it always results in aggressive pricing so a big welcome thanks goes out to both AMD and Nvidia in this round for giving all of us such powerful cards at more affordable prices...... In the end, isn't that what we all want?
 
 
# Crysis! - John 2010-12-14 20:44
"Moderate quality settings"

Put it on enthusiast and watch nvidia burn.
Cayman is faster with actual decent settings in games.
 
 
# Wrong Catalyst - David 2010-12-14 20:45
They used Catalyst 10.11 for the HD 6970. They should have benched with 10.12 at least. Notice how HD 5870 stomps the HD 6970 in Battleforge? Certainly an obvious sign of an unoptimized driver for the HD 6970. A correct optimized driver can make all the difference.
 
 
# RE: Wrong Catalyst - Olin Coles 2010-12-14 20:47
Thank you for your expert advice, David. The 10.11 Beta driver was actually the only one released by AMD to press that supports the 6970.
 
 
# My bad - David 2010-12-14 21:10
I did some research, and it appears that you are right. Sorry about that, and thank you for the correction.

Catalyst 10.11 seems to be an unoptimized driver for HD 6970. Would you agree, or do you believe that it is working at "full capacity"? If it is working at "full capacity", then why does the HD 6970 lose so badly to HD 5870 in Battleforge? It doesn't seem to make sense. Perhaps when AMD releases Catalyst 10.13 (fully supporting HD 6900 series) we'll see an appreciable improvement? Let's hope!
 
 
# RE: My bad - Olin Coles 2010-12-14 21:14
Here is AMD's response, received only an hour before launch, which I will post to the article in a moment:

"We are aware that there are some abnormal performance results in BattleForge with our new AMD Radeon HD 6900 Series graphics card. Keep in mind this is a new VLIW4 shader architecture and we are still fine tuning the shader compilation. We will be able to post a hotfix for Battleforge shortly that will provide a noticeable increase in performance."
 
 
# RE: RE: My bad - Olin Coles 2010-12-14 21:15
I forgot to add that I pointed out BattleForge and Battlefield: Bad Company 2 performance issues two days ago to the entire AMD team, but didn't receive a response until an hour ago.
 
 
# RE: RE: RE: My bad - David 2010-12-14 21:22
I appreciate the info. Thanks again!
 
 
# Disappointing? Are you nuts? - Greg 2010-12-14 20:46
Disappointing? It's marginally faster than the GTX570! The games where it looked 20-30% behind are obviously driver issues. Also, after all what hype? Which hype was that? Virtually everybody I read was saying they were waiting to see what kind of performance this would deliver, and it produces better performance than even the unexpected NVidia card released just days before, while consuming less power. How's that disappointing? NVidiot, are we?
 
 
# RE: Disappointing? Are you nuts? - Olin Coles 2010-12-14 21:10
Messages with personal attacks and insults will be moderated, Greg.

Besides, I think you're a little glossy-eyed to think the 6970 is "marginally faster" than the GTX 570 simply because of "driver issues". Even more so when you consider how Battlefield: Bad Company 2 and BattleForge are both AMD-sponsored game titles. Even 3dMark Vantage was co-developed with AMD/ATI.

Once drivers are more mature, you can expect to get some performance back. But will it be 20-30%? That might be asking a bit much.
 
 
# RE: AMD Radeon HD 6970 Video Card - Kokin 2010-12-14 20:50
For the prices, the new cards are decent. Obviously the 580's temps will be with the throttled speed, so I don't think they will be as low as the reported temp during normal use. But I gotta agree, had I needed to upgrade, now is really a great time to upgrade or build a new computer, since prices are very competitive.
 
 
# Be Realistic! - Baz 2010-12-14 21:02
I for one will not accept any form of argument around "it's a driver issue". If that were the case, AMD would never discount their card at launch at the levels we are seeing. They knew how it would perform with the new shader architecture and they priced it accordingly. The change to the new architecture doesn't automatically unilaterally guarantee that it will be better than the 5870 all the time.... The 5870 was and still is a great card.
 
 
# It's fine. - Juste 2010-12-14 21:20
The way I see this card
-Performs slightly better than the GTX570 (save for obvious beta driver-related performance problems, such as in Battleforge)
-Draws less power than its competitor, the GTX570
-Is as quiet as the GTX580 (according to TPU)
-Is priced slightly (~$20) higher than the GTX570 (Going by egg prices)

I'm not sure what there is to be disappointed about. The card does what it's supposed to.
 
 
# It is still very early to argue. - clive 2010-12-14 21:50
Guys I agree with both arguments and being an avid AMD fan because of price/performance Amd has to offer, but it may be still to early to tell what the true performance of this card can do. I agree a 20% to 30% increase based on drivers is a bit much however is possible. Some nvidia drivers in the past stated clearly so and so % performance in said game and so on. And maybe a better driver like the 10.12 will increase battleforge etc maybe 15% or slightly more matching the cards price/ performance over the 580/570. Dont get me wrong I am trying to be unpartial as possible, hell I had a GTX 280 and loved it. ever since the 5xxx series I have had only NV cards even back when NV and AMD were friends. Point in both the GTX 5xx and HD 6xxx series are sick cards and personal preference will be the determining factor.
 
 
# power draw isn't right for the 570 and 580 - davekozy 2010-12-14 22:28
The 570 and 580 throttle in Furmark so the power draw isn't accurate. They would be close to the 470 and 480 without the regulation.
 
 
# RE: power draw isn't right for the 570 and 580 - Olin Coles 2010-12-15 08:19
You should go back and re-read how we tested then, because it's accurate. It throttles when you use Furmark.exe, but if you rename the file power consumption rises.
 
 
# Other reviews... - clive 2010-12-14 22:37
Some other review sites also gave synthetic test result numbers which surprised and confused me a bit. The 6970 pumped out around 18k with the 570/580 shelling around 25-28k. Synthetic as it may be other sites indicate the 6970 tops out close to the 570 and comes close to the 580 in dx11 syths and games. It would seem that older dx10 apps appear to lag with this card and just maybe a newer driver may just provide a 15% or maybe more increase. IDC tho fps are still awesome with this card as is xfire scaling.
 
 
# This may interest some - clive 2010-12-14 22:48
and the first test in my above post was in reference to vantage mark.. however read this

#hothardware.com/Reviews/AMD-Radeon-HD-6970--6950-GPU-Reviews-Enter-Cayman/?page=7

3dmk 11 results are close and for once look at that dam xfire scaling awesome
 
 
# EPIC AMD FAIL! - Mike 2010-12-15 01:15
Lmao @ AMD fanboys crying into their cornflakes!

I thought 6970 was gonna be only 5% behind GTX 580 - turns out to be 20% behind, with even GTX 570 beating it. No wonder AMD has been FORCED (by their own FAILURE) to slash prices at launch.

Add to that the fact 6970 is hotter than 5870, much hotter than GTX 580 and only slightly cooler than GTX 480; then add much higher power consumption and higher noise level than 5870 -

All of this adds up to one thing - if you own AMD shares, sell now as Nvidia is set stomp on AMD at least until we see 28nm. AMD's fortunes are headed south for the rest of 40nm, especially given that Nvidia has dual card GTX 595 waiting in the wings to hand 6990 its hat in January.
 
 
# Obvious choice for me - RagingShadow07 2010-12-15 04:14
Looks like there's gonna be a GTX 570 in my foreseeable future. I'm pretty disappointed in ATI after all the hype about this card, and the million and two people telling everyone to save their money and wait for it. I wonder whose prices will drop first, the 570 or the 6970.

Great review as always, but I think there's a typo on the Closer Look page, fourth paragraph: "This video card measures slightly shorter than the 11" long Radeon HD 5870, but longer than the 9.75" Radeon HD 6970."

Slightly shorter than itself, eh? Haha. I'm glad I'm not the only one typo'ing AMD's new model numbers every now and then.
 
 
# Couldn't be better. - Zen 2010-12-15 13:08
Given the results, the Radeon 6970 is targeting the same market space as the GTX 570. This will push a price drop of the latter, which is very good news for everyone interested in upgrading to a CUDA-supporting card (which is all 3D designers willing to speed up rendering time by use of iRay technology in 3DS Max or other 3D software).
 
 
# LOL EPIC - Kyle 2010-12-18 13:57
The 6970 was getting 30-50 FPS on ULTRA ALL HIGH with ALL AA etc etc with the cata 10.10 and/or 10.11 on my system. The minute I slapped 10.12a hotfix driver I am now getting 60-170 FPS!!!!! Nvidia has lost in my opinion!

1080p also
 
 
# Radeons on the edge (of their mem bandwidth)? - Laszlo 2011-01-06 11:28
I'm looking for a replacement card, so I'm doing some 'layman research'. One thing captured my mind: Ati's best still uses 256 bit wide memory bus, like it used it on the good ol' 9700 (if my memory serves well). This 256 bit wide bus limits the theoretical mem. bandwidth of any Ati card, because on this 6970 they already using the fastest(?) gddr5 chips (6GHz effective clock). Nvidia, on the other hand uses only 4 and 5 GHz chips, but their memory bus width is variable (like the GPU chip capabilities/performance), they use(d) anything between (64?) 128-512, and their 580 has more memory bandwidth with slower ram chips than Ati/AMD. Till now the faster speed of the new GPU's always depended on the evolution of ram chips (NV 6800 -> 8/9800 / 280/285 bandwidth doubled, perf. too). Future Ati chips will need more bus width or they will be bottlenecked even with the fastest ram, NV on the other hand if they do an '580' with the 285's 512 bit wide bus + Hynix's best (by the 400/500 specs an NV with 2 GB ram would use an 512 bus), the 6GHz eff. ram chip then they will have a whopping 375 GByte/s memory bw.
 
 
# RE: Radeons on the edge (of their mem bandwidth)? - BruceBruce 2011-01-06 11:59
I've noticed on several graphics cards lately, that it is not possible to crank up the memory speed to the stock rating on the GDDR5. The board crashes way before the rated speed for the memory chips. That tells me that the memory controller in the GPU is the weak link.

So, yes. A wider data path is sometimes a cheaper and more reliable way to get bandwidth, particularly when you are on the hairy edge...
 
 
# Lowest FPS benchmark - mmstick 2011-01-07 09:52
I think the lowest, average, and highest FPS for each of the graphics cards should be benched as I have read that the 6000 series, although not exceeding in most FPS, does exceed in the least FPS, which is often times the most important in gaming.
 
 
# I'd like to, but..... - BruceBruce 2011-01-07 10:06
There is no consistency in the benchmark applications for reporting MIN-AVG-MAX FPS. What I really like is the graph that is provided by benchmarks like METRO 2033 and the old Far Cry 2 benchmarks. Sometimes there is a tiny stutter in the game that takes the MIN number down, and it's really not the fault of the video card, as the game does it on every card. It doesn't really matter if one card dips down to 8 FPS and another dips to 10 FPS, the user experience will be exactly the same.
 

