AMD Radeon HD 6970 Video Card
Reviews - Featured Reviews: Video Cards
Written by Olin Coles
Wednesday, 15 December 2010
AMD Radeon HD 6970 Video Card Review

Featuring a 1536-core Cayman GPU, AMD's Radeon HD 6970 competes against NVIDIA's GeForce GTX 570.

With the introduction of AMD's Cayman GPU, the Radeon HD 6970 video card becomes their flagship DirectX-11 desktop graphics product. Aside from the dual-GPU Radeon HD 5970, gamers can expect the new Radeon HD 6970 to quench their thirst for demanding graphics power. The Cayman GPU features dual graphics engines with asynchronous dispatch and off-chip geometry buffering for its tessellation units, built on a new VLIW4 shader core architecture. Equipped with a 2GB GDDR5 video buffer on a 256-bit interface, the Cayman GPU offers up to 24 SIMD engines and 96 texture units. Additionally, the AMD Radeon HD 6970 introduces several new MSAA modes, including Enhanced Quality Anti-Aliasing (EQAA), and takes advantage of these improved anti-aliasing features to enhance the DirectX 11 gaming experience.

PC gamers want the best value for their money, along with top-end frame rates to help them build a killstreak. AMD didn't set out to build the fastest graphics card imaginable, which would likely have produced a product so expensive that only the most affluent enthusiasts could afford it. Instead, the AMD Radeon HD 6970 was designed for the large majority of consumers who want top-shelf performance at a fair price. While accomplishing this, AMD also managed to add accelerated multimedia playback and transcoding, AMD HD3D stereoscopic technology, and the 3D Blu-ray multi-view codec (MVC).

Benchmark Reviews tests the Radeon HD 6970's graphical frame rate performance using the most demanding PC video game titles and benchmark software available. DirectX-10 favorites such as Crysis Warhead and 3DMark Vantage are included, in addition to DX11 titles such as Aliens vs Predator, Battlefield: Bad Company 2, BattleForge, Lost Planet 2, Mafia II, Metro 2033, and the Unigine Heaven 2.1 benchmark. Built to deliver improved performance to the value-hungry mainstream gaming market, the AMD Radeon HD 6970 delivers top-end performance at a value-oriented price point.
According to information presented at the AMD Editor's Day event back on 14 October 2010, approximately 33% of all AMD graphics solutions are sold for the desktop platform with over 25-million Radeon DirectX-11 compatible products shipped to date. In many ways this data reinforces my position in the recent Desktop Platform article series, but it could also mean that manufacturers are listening ever more intently to the changing needs of their remaining consumer base. This doesn't always leave room for innovation, but AMD manages to introduce emerging technologies nevertheless. For those who have been patiently waiting for news on ATI Stream technology, it's been re-tasked as AMD Accelerated Parallel Processing, or APP technology. AMD Eye-Definition represents their commitment to PC gamers, PC game developers, and the PC gaming industry. Through Eye-Definition AMD delivers their "Gamers Manifesto", which they assert will enable the best experience possible regardless of hardware manufacturer.
Manufacturer: Advanced Micro Devices
Full Disclosure: The product sample used in this article was obtained from an outside source.

AMD Radeon HD 6970 Closer Look

Aside from a few decals, the AMD Radeon HD 6970 video card is identical to the reference design by AMD. While some consumers may want more flash for their cash, the conservative appearance helps maintain an affordable sales price.
AMD's Radeon HD 6900-series video cards already look very similar to the previous-generation 6800 and 5800-series products. In fact, the few discernible differences are the connection header panel, which can add an additional DisplayPort monitor output (if the vendor implements this feature), and the closed rear section. AMD implements dual mini-DisplayPort 1.2 outputs on the 6970, unlike the Sapphire version we recently tested, which used a single DP connection.
While there are still two digital DVI ports available on the AMD Radeon HD 6970, only one of them is dual-link to support AMD HD3D while the other is reduced to single-link. AMD's HD3D technology currently supports only one 3D display, with plans for multi-monitor 3D available in the future.
Identical to AMD's reference design, the AMD Radeon HD 6970 measures 10.5" long, 1.25" tall, and 3.75" wide. This video card is slightly shorter than the 11" long Radeon HD 5870, but longer than the 9.75" Radeon HD 6870. Coincidentally, it shares exactly the same dimensions as its closest competitor: NVIDIA's GeForce GTX 570.
One particular item I've been hoping for is a focused blower fan orientation. This design angles the blower fan slightly downward to improve the forward force of air, and creates a small separation between adjacent video cards. CrossFire configurations could benefit from such a design, which the competition has used for several generations now to tame their much warmer products.
The AMD Radeon HD 6970 requires one 8-pin and one 6-pin PCI-Express power connection for normal operation. AMD rates the Cayman GPU's power demands at 190 watts for typical use, with a 250W PowerTune limit, although we confirm this with our own power testing near the end of this article.
With the Radeon HD 6970, cool air is drawn in directly above the blower fan, while exhaust is expelled through the bracket vent and a small side outlet (shown above, far right) that allows a portion of the heated air back inside the computer case. The Radeon HD 6970 lacks any cool-air intake vents at the tail end of the card, behind the blower fan. As a result, gamers with CrossFireX sets must ensure proper airflow inside their computer case for these video cards to receive fresh air.

Radeon Features
AMD Cayman GPU Details
VGA Testing Methodology

The Microsoft DirectX-11 graphics API is native to the Microsoft Windows 7 operating system, which serves as the primary O/S for our test platform. DX11 is also available as a Microsoft Update for Windows Vista, so our test results apply to both versions of the operating system. The majority of benchmark tests used in this article measure DX11 performance; however, some high-demand DX10 tests have also been included.

According to the Steam Hardware Survey published for the month ending September 2010, the most popular gaming resolution is 1280x1024 (17-19" standard LCD monitors). However, because this 1.31MP resolution is considered 'low' by most standards, our benchmark performance tests concentrate on higher-demand resolutions: 1.76MP 1680x1050 (22-24" widescreen LCD) and 2.30MP 1920x1200 (24-28" widescreen LCD monitors). These resolutions are more likely to be used with high-end graphics solutions, such as those tested in this article.

In each benchmark test there is one 'cache run' conducted, followed by five recorded test runs. Results are collected at each setting, with the highest and lowest results discarded. The remaining three results are averaged and displayed in the performance charts on the following pages. A code sketch of this scoring procedure follows the test-product table below.

A combination of synthetic and video game benchmark tests has been used in this article to illustrate relative performance among graphics solutions. Our benchmark frame rate results are not intended to represent real-world graphics performance, as this experience would change based on supporting hardware and the perception of the individual playing the video game.

Intel X58-Express Test System
DirectX-10 Benchmark Applications
DirectX-11 Benchmark Applications
Video Card Test Products
| Graphics Card | GeForce GTX460 | Radeon HD5850 | Radeon HD6870 | GeForce GTX470 | Radeon HD5870 | Radeon HD6970 | GeForce GTX570 | GeForce GTX580 |
|---|---|---|---|---|---|---|---|---|
| GPU Cores | 336 | 1440 | 1120 | 448 | 1600 | 1536 | 480 | 512 |
| Core Clock (MHz) | 675 | 725 | 900 | 608 | 850 | 880 | 732 | 772 |
| Shader Clock (MHz) | 1350 | N/A | N/A | 1215 | N/A | N/A | 1464 | 1544 |
| Memory Clock (MHz) | 900 | 1000 | 1050 | 837 | 1200 | 1375 | 950 | 1002 |
| Memory Amount | 1024MB GDDR5 | 1024MB GDDR5 | 1024MB GDDR5 | 1280MB GDDR5 | 1024MB GDDR5 | 2048MB GDDR5 | 1280MB GDDR5 | 1536MB GDDR5 |
| Memory Interface | 256-bit | 256-bit | 256-bit | 320-bit | 256-bit | 256-bit | 320-bit | 384-bit |
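As promised in the testing methodology above, here is a minimal Python sketch of our scoring procedure: one unrecorded cache run, five recorded runs, and a trimmed average of the middle three results. The `run_benchmark` callable is a hypothetical stand-in for whatever harness launches a single test pass and returns its average FPS.

```python
def trimmed_average(run_benchmark, recorded_runs=5):
    """One unrecorded cache run, five recorded runs, then average the
    three middle results after discarding the highest and lowest."""
    run_benchmark()  # cache run; result intentionally discarded
    results = sorted(run_benchmark() for _ in range(recorded_runs))
    middle = results[1:-1]  # drop the single highest and lowest runs
    return sum(middle) / len(middle)
```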
DX10: Crysis Warhead
Crysis Warhead is an expansion pack based on the original Crysis video game. Crysis Warhead is based in the future, where an ancient alien spacecraft has been discovered beneath the Earth on an island east of the Philippines. Crysis Warhead uses a refined version of the CryENGINE2 graphics engine. Like Crysis, Warhead uses the Microsoft Direct3D 10 (DirectX-10) API for graphics rendering.
Benchmark Reviews uses the HOC Crysis Warhead benchmark tool to test and measure graphics performance using the Airfield 1 demo scene. This short test places a high amount of stress on a graphics card, not only because of its detailed terrain and textures but also because of the test settings used. Using the DirectX-10 test with Very High quality settings, the Airfield 1 demo scene receives 4x anti-aliasing and 16x anisotropic filtering to create maximum graphics load and separate the products according to their performance.
Using the highest quality DirectX-10 settings with 4x AA and 16x AF, only the most powerful graphics cards are expected to perform well in our Crysis Warhead benchmark tests. DirectX-11 extensions are not supported in Crysis: Warhead, and SSAO is not an available option.
- Crysis Warhead v1.1 with HOC Benchmark
- Moderate Settings: (Very High Quality, 4x AA, 16x AF, Airfield Demo)
Crysis Warhead Moderate Quality Settings
DX11: Aliens vs Predator
Aliens vs. Predator is a science fiction first-person shooter video game, developed by Rebellion, and published by Sega for Microsoft Windows, Sony PlayStation 3, and Microsoft Xbox 360. Aliens vs. Predator utilizes Rebellion's proprietary Asura game engine, which had previously found its way into Rebellion titles such as Rogue Warrior. The self-contained benchmark tool is used for our DirectX-11 tests, which push the Asura game engine to its limit.
In our benchmark tests, Aliens vs. Predator was configured to use the highest quality settings with 4x AA and 16x AF. DirectX-11 features such as Screen Space Ambient Occlusion (SSAO) and tessellation have also been included, along with advanced shadows.
- Aliens vs Predator
- Extreme Settings: (Very High Quality, 4x AA, 16x AF, SSAO, Tessellation, Advanced Shadows)
Aliens vs Predator Extreme Quality Settings
DX11: Battlefield Bad Company 2
The Battlefield franchise has been known to demand a lot from PC graphics hardware. DICE (Digital Illusions CE) built Battlefield: Bad Company 2 on their Frostbite-1.5 game engine with the Destruction-2.0 feature set. Battlefield: Bad Company 2 features destructible environments using Frostbite Destruction-2.0, and adds gravitational bullet-drop effects for projectiles fired from weapons at long distance. The Frostbite-1.5 engine in Battlefield: Bad Company 2 renders primarily in DirectX-10, with improved performance and softened dynamic shadows added for DirectX-11 users.
At the time Battlefield: Bad Company 2 was published, DICE was also working on the Frostbite-2.0 game engine. This upcoming engine will include native support for DirectX-10.1 and DirectX-11, as well as parallelized processing support for two to eight threads. This will improve performance for users with an Intel Core i7 processor, although the Extreme Edition Intel Core i7-980X six-core CPU with twelve threads will not see full utilization.
In our benchmark tests of Battlefield: Bad Company 2, the first three minutes of action in the single-player raft night scene are captured with FRAPS. Relative to the online multiplayer action, these frame rate results are nearly identical to daytime maps with the same video settings. The Frostbite-1.5 game engine in Battlefield: Bad Company 2 appears to equalize our test set of video cards, and despite AMD's sponsorship of the game it still plays well using any brand of graphics card.
- Battlefield: Bad Company 2
- Extreme Settings: (Highest Quality, HBAO, 8x AA, 16x AF, 180s Fraps Single-Player Intro Scene)
Battlefield Bad Company 2 Extreme Quality Settings
DX11: BattleForge
BattleForge is a free Massive Multiplayer Online Role Playing Game (MMORPG) developed by EA Phenomic with DirectX-11 graphics capability. Combining strategic cooperative battles, the community of MMO games, and trading card gameplay, BattleForge players are free to put their creatures, spells, and buildings into whatever combinations they see fit. These units are represented in the form of digital cards from which you build your own unique army. With minimal resources and a custom tech tree to manage, the gameplay is remarkably accessible and action-packed.
Benchmark Reviews uses the built-in graphics benchmark to measure performance in BattleForge, using Very High quality settings (detail) and 8x anti-aliasing with auto multi-threading enabled. BattleForge is one of the first titles to take advantage of DirectX-11 in Windows 7, and offers a very robust color range throughout the busy battleground landscape. The charted results illustrate how performance measures up between video cards when Screen Space Ambient Occlusion (SSAO) is enabled.
- BattleForge v1.2
- Extreme Settings: (Very High Quality, 8x Anti-Aliasing, Auto Multi-Thread)
BattleForge Extreme Quality Settings
EDITOR'S NOTE: Several days prior to launch Benchmark Reviews alerted AMD to performance concerns with BattleForge. After launch AMD responded:
"We are aware that there are some abnormal performance results in BattleForge with our new AMD Radeon HD 6900 Series graphics card. Keep in mind this is a new VLIW4 shader architecture and we are still fine tuning the shader compilation. We will be able to post a hotfix for Battleforge shortly that will provide a noticeable increase in performance."
DX11: Lost Planet 2
Lost Planet 2 is the second installment in the saga of the planet E.D.N. III, ten years after the story of Lost Planet: Extreme Condition. The snow has melted and the lush jungle life of the planet has emerged with angry and luscious flora and fauna. With the new environment comes the addition of DirectX-11 technology to the game.
Lost Planet 2 takes advantage of DX11 features including tessellation and displacement mapping on water, level bosses, and player characters. In addition, soft body compute shaders are used on 'Boss' characters, and wave simulation is performed using DirectCompute. These cutting edge features make for an excellent benchmark for top-of-the-line consumer GPUs.
The Lost Planet 2 benchmark offers two different tests, which serve different purposes. This article uses tests conducted on benchmark B, which is designed to be a deterministic and effective benchmark tool featuring DirectX 11 elements.
- Lost Planet 2 Benchmark 1.0
- Moderate Settings: (2x AA, Low Shadow Detail, High Texture, High Render, High DirectX 11 Features)
Lost Planet 2 Moderate Quality Settings
DX9+SSAO: Mafia II
Mafia II is a single-player third-person action shooter developed by 2K Czech for 2K Games, and is the sequel to Mafia: The City of Lost Heaven, released in 2002. Players assume the life of World War II veteran Vito Scaletta, the son of a small Sicilian family that immigrated to Empire Bay. Growing up in the slums of Empire Bay teaches Vito about crime, and he's forced to join the Army in lieu of jail time. After sustaining wounds in the war, Vito returns home and quickly finds trouble as he again partners with his childhood friend and accomplice Joe Barbaro. Vito and Joe combine their passion for fame and riches to take on the city, and work their way to the top in Mafia II.
Mafia II is an SSAO-enabled PC video game built on 2K Czech's proprietary Illusion game engine, which succeeds the LS3D game engine used in Mafia: The City of Lost Heaven. In our Mafia-II Video Game Performance article, Benchmark Reviews explored characters and gameplay while illustrating how well this game delivers APEX PhysX features on both AMD and NVIDIA products. Thanks to APEX PhysX extensions that can be processed by the system's CPU, Mafia II offers gamers equal access to high-detail physics regardless of video card manufacturer.
- Mafia II
- Extreme Settings: (Antialiasing, 16x AF, High Shadow Quality, High Detail, High Geometry, Ambient Occlusion)
Mafia II Extreme Quality Settings
DX11: Metro 2033
Metro 2033 is an action-oriented video game with a combination of survival horror, and first-person shooter elements. The game is based on the novel Metro 2033 by Russian author Dmitry Glukhovsky. It was developed by 4A Games in Ukraine and released in March 2010 for Microsoft Windows. Metro 2033 uses the 4A game engine, developed by 4A Games. The 4A Engine supports DirectX-9, 10, and 11, along with NVIDIA PhysX and GeForce 3D Vision.
The 4A engine is multi-threaded such that only PhysX has a dedicated thread; it uses a task model without any pre-conditioning or pre/post-synchronizing, allowing tasks to be completed in parallel. The 4A game engine utilizes a deferred shading pipeline and uses tessellation for greater performance, and also offers HDR (complete with blue shift), real-time reflections, color correction, film grain and noise, along with support for multi-core rendering.
Metro 2033 features superior volumetric fog, double PhysX precision, object blur, sub-surface scattering for skin shaders, parallax mapping on all surfaces, and greater geometric detail with less aggressive LODs. Using PhysX, the engine supports features such as destructible environments, cloth and water simulation, and particles that can be fully affected by environmental factors.
NVIDIA has been diligently working to promote Metro 2033, and for good reason: it's one of the most demanding PC video games we've ever tested. When their flagship GeForce GTX 480 struggles to produce 27 FPS with DirectX-11 anti-aliasing turned down to its lowest setting, you know that only the strongest graphics processors will generate playable frame rates. All of our tests enable Advanced Depth of Field and Tessellation effects, but disable advanced PhysX options.
- Metro 2033
- Moderate Settings: (Very-High Quality, AAA, 16x AF, Advanced DoF, Tessellation, 180s Fraps Chase Scene)
Metro 2033 Moderate Quality Settings
DX11: Unigine Heaven 2.1
The Unigine Heaven 2.1 benchmark is a free, publicly available tool that unleashes DirectX-11 graphics capabilities on Windows 7 or an updated Windows Vista operating system. It reveals the enchanting magic of floating islands with a tiny village hidden in the cloudy skies, and its interactive mode puts the experience of exploring this intricate world within reach. Through its advanced renderer, Unigine was one of the first to showcase art assets with tessellation, bringing compelling visual finesse and utilizing the technology to its full extent.
The distinguishing feature of the Unigine Heaven benchmark is hardware tessellation: a scalable technology aimed at automatic subdivision of polygons into smaller and finer pieces, so that developers can give their games a more detailed look almost free of charge in terms of performance. Thanks to this procedure, the rendered image approaches the boundary of lifelike visual perception.
Although Heaven-2.1 was recently released and used for our DirectX-11 tests, the benchmark results were extremely close to those obtained with Heaven-1.0 testing. Since only DX11-compliant video cards will properly test on the Heaven benchmark, only those products that meet the requirements have been included.
- Unigine Heaven Benchmark 2.1
- Extreme Settings: (High Quality, Normal Tessellation, 16x AF, 4x AA)
Heaven 2.1 Extreme Quality Settings
Radeon HD 6970 Temperatures
Benchmark tests are always nice, so long as you care about comparing one product to another. But when you're an overclocker, gamer, or merely a PC hardware enthusiast who likes to tweak things on occasion, there's no substitute for good information. Benchmark Reviews has a very popular guide written on Overclocking Video Cards, which gives detailed instruction on how to tweak a graphics card for better performance. Of course, not every video card has overclocking headroom. Some products run so hot that they can't suffer any higher temperatures than they already do. This is why we measure the operating temperature of the video card products we test.
To begin my testing, I use GPU-Z to measure the temperature at idle as reported by the GPU. Next I use FurMark's "Torture Test" to generate maximum thermal load and record GPU temperatures in high-power 3D mode. FurMark does two things extremely well: it drives the thermal output of any graphics processor much higher than any video game realistically could, and it does so with consistency every time. FurMark works great for testing the stability of a GPU as the temperature rises to the highest possible output. During all tests, the ambient room temperature remained at a stable 20°C. The temperatures discussed below are absolute maximum values, and may not be representative of real-world temperatures while gaming:
| Video Card | Idle Temp | Loaded Temp | Ambient |
|---|---|---|---|
| ATI Radeon HD 5850 | 39°C | 73°C | 20°C |
| AMD Radeon HD 6850 | 42°C | 77°C | 20°C |
| AMD Radeon HD 6870 | 39°C | 74°C | 20°C |
| ATI Radeon HD 5870 | 33°C | 78°C | 20°C |
| NVIDIA GeForce GTX 480 | 36°C | 82°C | 20°C |
| NVIDIA GeForce GTX 570 | 32°C | 82°C | 20°C |
| AMD Radeon HD 6970 | 35°C | 81°C | 20°C |
| NVIDIA GeForce GTX 580 | 32°C | 70°C | 20°C |
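For readers who want to reproduce this measurement, here is a minimal sketch of the procedure just described, assuming a hypothetical `read_gpu_temp()` helper that returns the GPU core temperature in °C (in practice GPU-Z or a vendor monitoring API supplies this value). It simply records the peak temperature while FurMark's torture test runs for ten minutes.

```python
import time

def record_peak_load_temp(read_gpu_temp, duration_s=600, interval_s=5):
    """Poll GPU temperature during a FurMark torture run and return
    the highest value observed over the stress period."""
    peak = read_gpu_temp()
    deadline = time.time() + duration_s
    while time.time() < deadline:
        time.sleep(interval_s)
        peak = max(peak, read_gpu_temp())
    return peak
```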
The original plans for AMD's Cayman GPU included a 32nm die process, which was later cancelled and reworked into the familiar 40nm process we've seen for the past several product generations. As a direct result, the 40nm AMD Cayman GPU is larger, uses more power, and operates at higher temperatures than the initial design would have delivered. The Cayman GPU measures 389 mm², which is only slightly larger than the 336 mm² Cypress GPU (5870), but far larger than the 255 mm² Barts GPU (6870). The transistor count changes accordingly, with 2.15 billion on Cypress, 1.7 billion on Barts, and 2.64 billion on Cayman.
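Those die sizes and transistor counts imply nearly identical transistor density across all three chips, which is the arithmetic behind the point above: without the 32nm shrink, Cayman's extra transistors translate directly into extra die area. A quick back-of-the-envelope check using only the figures quoted above:

```python
# Transistor density computed from the die figures quoted above
dies = {
    "Cypress (HD 5870)": (2150, 336),  # millions of transistors, mm^2
    "Barts (HD 6870)":   (1700, 255),
    "Cayman (HD 6970)":  (2640, 389),
}
for name, (mtrans, area) in dies.items():
    print(f"{name}: {mtrans / area:.1f}M transistors per mm^2")
# All three land between ~6.4M and ~6.8M per mm^2 -- the same 40nm node.
```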
At idle, the Radeon HD 6970 measured 35°C at 20°C ambient room temperature. This is roughly the same temperature that late-generation GeForce GTX 480s rested at, but still a few degrees warmer at idle than the latest GeForce GTX 570 and 580. What used to sound like an NVIDIA-specific trait has quickly changed direction, making AMD GPUs out to be the hot-headed products. The new AMD Radeon HD 6970 improves on the recently released 6870 by a few degrees, but the old (and now end-of-life) Radeon HD 5870 measured a few degrees lower at idle.
Under 100% GPU load, the heat output rises to levels not seen from AMD since the Radeon HD 4800-series. Measured at 20°C ambient room temperature, the Radeon HD 6970 reached 81°C after ten minutes stressed under full load. This places the Radeon HD 6970 right on par with its closest competitor, the GeForce GTX 570 (82°C). Unfortunately, the Radeon HD 6870 and 5870 both run a few degrees cooler under full load. Overall the AMD Radeon HD 6970 has a 40nm Cayman GPU to blame for higher temperatures, which would not have been the case if the original 32nm die process had been possible. Let's see how this impacts power consumption...
VGA Power Consumption
For power consumption tests, Benchmark Reviews utilizes an 80-PLUS GOLD certified OCZ Z-Series Gold 850W PSU, model OCZZ850. This power supply unit has been tested to provide over 90% typical efficiency by Chroma System Solutions. To measure isolated video card power consumption, Benchmark Reviews uses the Kill-A-Watt EZ (model P4460) power meter made by P3 International.
A baseline measurement is taken without any video card installed on our test computer system, which is allowed to boot into Windows 7 and rest idle at the login screen before power consumption is recorded. Once the baseline reading has been taken, the graphics card is installed and the system is again booted into Windows and left idle at the login screen before taking the idle reading. Our final loaded power consumption reading is taken with the video card running a stress test using FurMark. Below is a chart with the isolated video card power consumption (measured system total minus the baseline without a video card) displayed in watts for each specified test product:
| VGA Product Description (sorted by combined total power) | Idle Power | Loaded Power |
|---|---|---|
| NVIDIA GeForce GTX 480 SLI Set | 82 W | 655 W |
| NVIDIA GeForce GTX 590 Reference Design | 53 W | 396 W |
| ATI Radeon HD 4870 X2 Reference Design | 100 W | 320 W |
| AMD Radeon HD 6990 Reference Design | 46 W | 350 W |
| NVIDIA GeForce GTX 295 Reference Design | 74 W | 302 W |
| ASUS GeForce GTX 480 Reference Design | 39 W | 315 W |
| ATI Radeon HD 5970 Reference Design | 48 W | 299 W |
| NVIDIA GeForce GTX 690 Reference Design | 25 W | 321 W |
| ATI Radeon HD 4850 CrossFireX Set | 123 W | 210 W |
| ATI Radeon HD 4890 Reference Design | 65 W | 268 W |
| AMD Radeon HD 7970 Reference Design | 21 W | 311 W |
| NVIDIA GeForce GTX 470 Reference Design | 42 W | 278 W |
| NVIDIA GeForce GTX 580 Reference Design | 31 W | 246 W |
| NVIDIA GeForce GTX 570 Reference Design | 31 W | 241 W |
| ATI Radeon HD 5870 Reference Design | 25 W | 240 W |
| ATI Radeon HD 6970 Reference Design | 24 W | 233 W |
| NVIDIA GeForce GTX 465 Reference Design | 36 W | 219 W |
| NVIDIA GeForce GTX 680 Reference Design | 14 W | 243 W |
| Sapphire Radeon HD 4850 X2 11139-00-40R | 73 W | 180 W |
| NVIDIA GeForce 9800 GX2 Reference Design | 85 W | 186 W |
| NVIDIA GeForce GTX 780 Reference Design | 10 W | 275 W |
| NVIDIA GeForce GTX 770 Reference Design | 9 W | 256 W |
| NVIDIA GeForce GTX 280 Reference Design | 35 W | 225 W |
| NVIDIA GeForce GTX 260 (216) Reference Design | 42 W | 203 W |
| ATI Radeon HD 4870 Reference Design | 58 W | 166 W |
| NVIDIA GeForce GTX 560 Ti Reference Design | 17 W | 199 W |
| NVIDIA GeForce GTX 460 Reference Design | 18 W | 167 W |
| AMD Radeon HD 6870 Reference Design | 20 W | 162 W |
| NVIDIA GeForce GTX 670 Reference Design | 14 W | 167 W |
| ATI Radeon HD 5850 Reference Design | 24 W | 157 W |
| NVIDIA GeForce GTX 650 Ti BOOST Reference Design | 8 W | 164 W |
| AMD Radeon HD 6850 Reference Design | 20 W | 139 W |
| NVIDIA GeForce 8800 GT Reference Design | 31 W | 133 W |
| ATI Radeon HD 4770 RV740 GDDR5 Reference Design | 37 W | 120 W |
| ATI Radeon HD 5770 Reference Design | 16 W | 122 W |
| NVIDIA GeForce GTS 450 Reference Design | 22 W | 115 W |
| NVIDIA GeForce GTX 650 Ti Reference Design | 12 W | 112 W |
| ATI Radeon HD 4670 Reference Design | 9 W | 70 W |
As we previously mentioned in the Radeon HD 6970 Temperatures section, the Cayman GPU was originally designed for 32nm but was ultimately constructed at 40nm. This increased the die size, and raised the operating temperature to levels that AMD isn't generally known for. Judging from the chart of results above, it appears that the 40nm Cayman GPU may not have created the power monster we anticipated. The AMD Radeon HD 6970 requires one eight-pin and one six-pin PCI-E power connection for proper operation. Resting at idle with no GPU load, the Radeon HD 6970 consumed only 24W of electricity. Compensating for a small margin of error, this falls roughly in-line with idle power draw from the ATI Radeon HD 5870. The noteworthy idle results were actually 7W less than the competing GeForce GTX 570 video card, but not quite as efficient as the 20W Radeon HD 6870. But what about under full 3D load?
Once 3D-applications begin to demand power from the Cayman GPU, electrical power consumption climbs to 233 watts. Measured at full throttle with FurMark's 3D torture load, these results were 8W lower than the GeForce GTX 570 (241W maximum power draw), and 7W less than the ATI Radeon HD 5870. Overall it seems that the 40nm Cayman GPU is fairly efficient, especially considering the 2.64-billion transistors it feeds. The graphical performance more or less matched the GeForce GTX 570, so it's nice to see the Radeon HD 6970 dropping a few watts from the power consumption.
AMD Radeon HD 6970 Conclusion
IMPORTANT: Although the rating and final score mentioned in this conclusion are made to be as objective as possible, please be advised that every author perceives these factors differently at various points in time. While we each do our best to ensure that all aspects of the product are considered, there are often times unforeseen market conditions and manufacturer changes which occur after publication that could render our rating obsolete. Please do not base any purchase solely on our conclusion, as it represents our product rating specifically for the product tested which may differ from future versions. Benchmark Reviews begins our conclusion with a short summary for each of the areas that we rate.
Overall, graphics frame rate performance has the AMD Radeon HD 6970 matched nicely to the GeForce GTX 570. Measured at stock (reference) speeds in ten different tests, the Radeon HD 6970 was slightly ahead in half of them and trailed deeply in the other half. We've excluded HAWX 2 from this review until AMD drivers can compensate for the performance skew. The DirectX-10 tests put the GeForce GTX 570 well ahead, while many of the DirectX-11 tests pulled the Radeon HD 6970 ahead by a few FPS:
3DMark Vantage has the 6970 ahead by 2.9% (1680x1050) or 5.8% (1920x1200) in the Jane Nash test, but it then sinks 16.9/9.4% in New Calico. Crysis Warhead pushed the GTX 570 16.2/9.4% ahead, but then DX11 Aliens vs Predator pushed back 10.4/12.3% in favor of the Radeon HD 6970. Shader-intensive games such as Battlefield: Bad Company 2 really strained the Radeon HD 6970, giving the GTX 570 a 31.9/25.3% lead. BattleForge did the same, giving the GTX 570 a 53.5/57.1% lead over the 6970. Lost Planet 2 also piled on in favor of the GTX 570, resulting in a 45.1/41.9% lead over the 6970. Then, thankfully, the Radeon HD 6970 fought back in NVIDIA-strong games like Mafia II, producing a 0.9/6.0% lead over the GTX 570. Metro 2033 gave the 6970 a 3.1/5.2% edge, and the Heaven 2.1 benchmark showed a 2.0/7.5% difference in favor of the 6970.
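For clarity, the percentage figures above are presumably derived from the raw frame rates in the usual way; a small sketch with hypothetical FPS numbers:

```python
def lead_percent(fps_a, fps_b):
    """Percentage lead of card A over card B: (A / B - 1) * 100."""
    return (fps_a / fps_b - 1.0) * 100.0

# Hypothetical example: 44.2 FPS versus 40.0 FPS at the same settings
# works out to a ~10.5% lead for the faster card.
print(f"{lead_percent(44.2, 40.0):.1f}%")  # -> 10.5%
```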
Overclocking the Radeon HD 6970 doesn't work like it has in the past, and we'll be publishing a separate article with those details. The AMD Radeon HD 6970 hit the limit of AMD's Catalyst Control Center (CCC) software with a noteworthy overclock to 950MHz (+70MHz). We attempted overclocking with an unpublished beta version of MSI Afterburner, but CCC is the only software to include the new PowerTune functionality, which allows the video card to be overclocked beyond its TDP. This directly enables users to increase Cayman GPU clock speeds when overclocking. Since our CCC software was also a non-public media release, we're waiting on a public version to confirm that this is standard functionality.
We didn't test AMD HD3D technology, or the impact it has on video game frame rates, primarily because the middleware was not made available and only two monitors currently exist to support it: the Zalman Trimon 3D and iZ3D H220z1. At the time of launch Viewsonic had announced their 120Hz Fuhzion 3D monitor, but the product had not yet shipped. AMD HD3D technology presently supports one display, using either a DL-DVI or DP monitor or an HDMI 1.4 3D HDTV, so 3D movie playback on one of the few compatible 3D TVs is a more likely application of this feature.
Appearance is a more subjective matter since the rating doesn't have benchmark scores to fall back on. Partners traditionally offer their own unique twist on the design, with improved cooling solutions and colorful fan shroud designs. The reference design allows nearly all of the heated air to externally exhaust outside of the computer case, which could be critically important to overclockers wanting the best possible environment for their computer hardware. This also preserves the Cayman GPU, since the transition to 32nm wasn't achieved and the heat output with standard clock speeds is still considered moderately high.
I personally consider the constant move towards a smaller die process rather insignificant in the grand scheme of things, as NVIDIA once proved when their GeForce GTX 280 successfully launched at 65nm instead of 55nm. Taiwan Semiconductor Manufacturing Company (TSMC) is already building 32nm processors for other clientele, and AMD has noted that Moore's Law still applies - just not in regard to their Cayman GPU. They claim that as a die process becomes smaller, it also becomes much more costly to develop and produce. And then sometimes the manufacturer just can't complete the project as planned, such as the case with TSMC.
There are six PLX display channel bridges present on the Radeon HD 6970 video card, which determine how its outputs can be combined. Two channels are dedicated to the only dual-link DVI port available on this video card, while the other DVI port remains single-link and consumes only one channel. HDMI 1.4a uses one channel, and the two mini-DisplayPort outputs use one channel each. The real innovation comes with DP 1.2, which can use a Multi-Stream Transport hub to drive multiple displays at different resolutions, refresh rates, and color depths in Eyefinity.
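To make the channel budget concrete, here is a toy sketch that checks a proposed output configuration against the six display channels, using the per-connector costs described above. It models only channel count, not DP 1.2 multi-streaming, and the helper names are our own:

```python
# Channel cost per output, following the bridge allocation described above
CHANNEL_COST = {"dl-dvi": 2, "sl-dvi": 1, "hdmi": 1, "mini-dp": 1}
TOTAL_CHANNELS = 6

def config_fits(outputs):
    """True if the proposed set of active outputs fits the channel budget."""
    return sum(CHANNEL_COST[o] for o in outputs) <= TOTAL_CHANNELS

# The full reference output panel consumes exactly six channels:
print(config_fits(["dl-dvi", "sl-dvi", "hdmi", "mini-dp", "mini-dp"]))  # True
```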
Value is a fast-moving target, and believe me when I say that it changes by the minute in this industry. Delivering better performance and additional features at a lower cost to consumers has been the cornerstone of AMD's business philosophy for more than a decade, and they've repeatedly demonstrated this resolve in each of their many battles with Intel CPUs and NVIDIA GPUs. I'm not entirely convinced that the AMD Radeon HD 6970 continues this tradition of giving more for less, since the $369.99 MSRP we were provided is about $20 higher than the NVIDIA GeForce GTX 570. Making matters worse, most recent AMD video card launch prices have actually gone up a few weeks later. In my opinion, $340-$350 is a better price point for this product, allowing it to compete head-on.
In summary, the Radeon HD 6970 matches performance, temperatures, and power consumption very closely with the GeForce GTX 570. Based on the $370 MSRP, it would be great to see the price come down $20-30 to compete more closely against the GeForce GTX 570, especially considering that HD3D and Fusion technology have yet to tip the scales in AMD's favor. Still, products like the AMD Radeon HD 6970 introduce more flexibility for display devices, especially where multi-monitor Eyefinity is used. Stereoscopic 3D gaming is possible with the right equipment, as are 3D Blu-ray and 3D DVD playback. The 40nm Cayman GPU may not have been built on the 32nm die it was originally designed for, but the Radeon HD 6970 still offers stellar gaming performance that rivals the older Radeon HD 5870, as well as the recently introduced Radeon HD 6870. Overall I consider the AMD Radeon HD 6970 to be a good video card intended for top-end gamers, but I'm not convinced that improved Eyefinity support or added stereoscopic 3D functionality is going to impress consumers until these technologies become more mature. Thankfully the Radeon HD 6970 shines as a solid gaming product, and gives the NVIDIA GeForce GTX 570 a fierce run for its money.
What do you think of the Radeon HD 6970 video card? Leave comments below, or ask questions in our Forum.
Pros:
+ Excellent top-end DX11 graphics performance
+ Cayman GPU includes stereoscopic 3D functionality
+ Nearly silent cooling fan at idle, very quiet under load
+ Fan exhausts most heated air outside of case
+ Multi-view CODEC enables 3D Blu-ray playback
+ Improves DisplayPort to 1.2 with display chaining
+ Supports CrossFireX functionality
Cons:
- Expensive enthusiast product
- Limited AMD HD3D product support
Comments
Nevertheless, competition in this industry is great for everyone as it always results in aggressive pricing so a big welcome thanks goes out to both AMD and Nvidia in this round for giving all of us such powerful cards at more affordable prices...... In the end, isn't that what we all want?
Put it on enthusiast and watch nvidia burn.
Cayman is faster with actual decent settings in games.
Catalyst 10.11 seems to be an unoptimized driver for HD 6970. Would you agree, or do you believe that it is working at "full capacity"? If it is working at "full capacity", then why does the HD 6970 lose so badly to HD 5870 in Battleforge? It doesn't seem to make sense. Perhaps when AMD releases Catalyst 10.13 (fully supporting HD 6900 series) we'll see an appreciable improvement? Let's hope!
"We are aware that there are some abnormal performance results in BattleForge with our new AMD Radeon HD 6900 Series graphics card. Keep in mind this is a new VLIW4 shader architecture and we are still fine tuning the shader compilation. We will be able to post a hotfix for Battleforge shortly that will provide a noticeable increase in performance."
Besides, I think you're a little glossy-eyed to think the 6970 is "marginally faster" than the GTX 570 simply because of "driver issues". Even more so when you consider how Battlefield: Bad Company 2 and BattleForge are both AMD-sponsored game titles. Even 3dMark Vantage was co-developed with AMD/ATI.
Once drivers are more mature, you can expect to get some performance back. But will it be 20-30%? That might be asking a bit much.
-Performs slightly better than the GTX570 (save for obvious beta driver-related performance problems, such as in Battleforge)
-Draws less power than its competitor, the GTX570
-Is as quiet as the GTX580 (according to TPU)
-Is priced slightly (~$20) higher than the GTX570 (Going by egg prices)
I'm not sure what there is to be disappointed about. The card does what it's supposed to.
#hothardware.com/Reviews/AMD-Radeon-HD-6970--6950-GPU-Reviews-Enter-Cayman/?page=7
3dmk 11 results are close and for once look at that dam xfire scaling awesome
I thought 6970 was gonna be only 5% behind GTX 580 - turns out to be 20% behind, with even GTX 570 beating it. No wonder AMD has been FORCED (by their own FAILURE) to slash prices at launch.
Add to that the fact 6970 is hotter than 5870, much hotter than GTX 580 and only slightly cooler than GTX 480; then add much higher power consumption and higher noise level than 5870 -
All of this adds up to one thing - if you own AMD shares, sell now as Nvidia is set stomp on AMD at least until we see 28nm. AMD's fortunes are headed south for the rest of 40nm, especially given that Nvidia has dual card GTX 595 waiting in the wings to hand 6990 its hat in January.
Great review as always, but I think there's a typo on the Closer Look page, fourth paragraph: "This video card measures slightly shorter than the 11" long Radeon HD 5870, but longer than the 9.75" Radeon HD 6970."
Slightly shorter than itself, eh? Haha. I'm glad I'm not the only one typo'ing AMD's new model numbers every now and then.
1080p also
So, yes. A wider data path is sometimes a cheaper and more reliable way to get bandwidth, particularly when you are on the hairy edge...