AMD Radeon HD 6990 Antilles Video Card
Written by Olin Coles
Tuesday, 08 March 2011
AMD Radeon HD 6990 Antilles Review
Manufacturer: Advanced Micro Devices (AMD)
Full Disclosure: The product sample used in this article has been provided by AMD.

Featuring dual 1536-core Cayman GPUs, AMD's Radeon HD 6990 Antilles graphics card redefines the high-end. When it comes to computer hardware there's something for everyone, and this rings especially true for graphics cards. If you're on a tight budget, but still like to point and shoot your way through levels, there are plenty of affordable entry-level products that can satisfy your needs. But if you're an enthusiast gamer who demands a level of performance that far surpasses mainstream standards, then it's your lucky day, because manufacturers are making great leaps with every new generation of computer components. Since both ends of the user spectrum clearly exist, there must also be products to support them, regardless of how extreme they appear. Enter the AMD Radeon HD 6990, code-named Antilles.

Hardware enthusiasts may recall AMD's Cayman GPU, introduced with the Radeon HD 6970 video card, which features two graphics engines with asynchronous dispatch and off-chip geometry buffering, built on a new VLIW4 shader core architecture. Now multiply all of that by two, and you've got the Radeon HD 6990. Equipped with a 4GB GDDR5 frame buffer on a 256-bit memory interface per GPU, each of the Cayman GPUs offers 24 SIMD engines and 96 texture units, for 192 texture units and a combined stream processor count of 3072. Additionally, the AMD Radeon HD 6990 utilizes several new MSAA modes, including Enhanced Quality Anti-Aliasing (EQAA).

When only the very best will do, only the AMD Radeon HD 6990 video card will do it for you. Engineered as a 450W-capable dual-GPU graphics card with Volterra regulators, Antilles delivers 169.0 Gtex/s and 5.40 TFLOPs by using two Cayman GPUs clocked at up to 880MHz.
For those keeping score, that's effectively two AMD Radeon HD 6970 video cards in a lossless CrossFireX set, all while maintaining a profile identical to the Radeon HD 5970. Despite this level of output, AMD PowerTune technology manages consumption down to 350W on this flagship model.
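As a sanity check, the headline throughput figures above can be derived from the quoted unit counts and the 880 MHz clock. This is a minimal sketch, not AMD's published arithmetic; it assumes the standard convention of one texel per texture unit per clock and one fused multiply-add (2 FLOPs) per stream processor per clock:

```python
# Derive the Radeon HD 6990 throughput figures quoted in the text,
# assuming the 880 MHz clock and the combined unit counts above.

clock_ghz = 0.880           # per-GPU engine clock, in GHz
texture_units = 192         # 96 per Cayman GPU x 2
stream_processors = 3072    # 1536 per Cayman GPU x 2

# Texel fill rate: one texel per texture unit per clock.
texel_rate = texture_units * clock_ghz              # Gtexels/s

# Single-precision throughput: one FMA (2 FLOPs) per SP per clock.
tflops = stream_processors * 2 * clock_ghz / 1000   # TFLOPs

# ~168.96 Gtex/s and ~5.407 TFLOPs; the article's 169.0 and 5.40
# figures are the rounded/truncated forms of these values.
print(f"{texel_rate:.1f} Gtex/s, {tflops:.2f} TFLOPs")
```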
The AMD Radeon HD 6990 takes advantage of improved anti-aliasing features to enhance DirectX 11 effects and deliver the most realistic gaming experience possible. When it comes down to producing top-end frame rates, the Radeon HD 6990 is unrivaled at helping gamers build a killstreak. AMD set out to design the fastest graphics card possible, and they accomplished this with the Radeon HD 6990, building a premium product only the most affluent enthusiast gamers will enjoy. While the Radeon HD 6990 accomplishes the extraordinary, AMD managed to also add accelerated multimedia playback and transcoding, AMD HD3D stereoscopic technology, and support for the 3D Blu-ray multi-view CODEC (MVC).
For those who have been patiently waiting for news on ATI Stream technology, it's been re-tasked as AMD Accelerated Parallel Processing, or APP technology. AMD Eye-Definition represents their commitment to PC gamers, PC game developers, and the PC gaming industry. Through Eye-Definition AMD delivers their "Gamers Manifesto", which they assert will enable the best experience possible regardless of hardware manufacturer. In this article Benchmark Reviews tests graphical frame rate performance with the AMD Radeon HD 6990 by using the most demanding PC video game titles and benchmark software available. Some favorites using older DirectX technology, such as Crysis Warhead and 3DMark Vantage, are included, in addition to DX11 titles such as 3DMark11, Aliens vs Predator, Battlefield: Bad Company 2, BattleForge, Lost Planet 2, Metro 2033, and the Unigine Heaven 2.1 benchmark.

Radeon HD 6990 Appearance

The AMD Radeon HD 6990 video card is an exclusive product, and launches concurrently with the Dragon Age II DirectX-11 video game. To capitalize on this theme, the Radeon HD 6990 we received for testing came inside a nice aluminum carry case with a removable Dragon Age II poster on the front. This is a nice touch, but an expected one, especially considering the $699 price tag that accompanies the Radeon HD 6990 video card. Inside the package, you can expect: the Radeon HD 6990 graphics card, a CrossFireX bridge, one passive mini-DisplayPort to single-link DVI adapter, one active mini-DisplayPort to single-link DVI adapter, and one passive mini-DisplayPort to HDMI adapter.
High-performance video cards often emphasize function over fashion, and usually offer more attractive looks on the display screen than on the hardware itself. Still, that doesn't mean the AMD Radeon HD 6990 isn't an interesting specimen. Completely redesigned to cool a pair of Cayman GPUs, the 6990 takes on a look that is unique to the Radeon family. The first obvious difference is the relocated blower fan, which is now positioned directly at the center of the board so that it can best cool the opposing Cayman GPUs. Later into this article, we'll see how well this design works.
The AMD Radeon HD 6990 may occupy the same profile as its predecessor the Radeon HD 5970, but that simply confirms how big this card is. Measuring a full 12" long by 1.25" tall and 3.75" wide, the Radeon HD 6990 stretches past the 11" long Radeon HD 5870, 10.5" long GeForce GTX 580 or Radeon HD 6970, and 9.75" long Radeon HD 6870.
Current AMD Radeon HD 6900-series video cards already look similar to the previous generation of 6800- and 5800-series products, and the Antilles design adheres to the newly minted tradition of boxy, black fan-shrouded video cards. A few add-in card partners will dress up their Antilles parts with the simple application of an adhesive graphic applied over the top of the fan shroud, with potential color changes to certain plastic components. Otherwise, expect all Radeon HD 6990 cards to look the same.
At its core, Antilles is a multi-monitor video card. One single dual-link DVI (DL-DVI) port joins four mini-DisplayPort 1.2 outputs on the 6990. The included adapters will enable 3x1 gaming right out of the box with DVI monitors, but with additional display adapters or DisplayPort monitors, you will be able to drive up to five displays in portrait Eyefinity (5x1 Portrait mode). AMD's HD3D technology currently supports only one 3D display, with plans for multi-monitor 3D available in the future, so the Radeon HD 6990 could be the perfect fit for gamers looking to plan ahead for multi-display 3D setups.
Because there is a 40nm Cayman GPU embedded at each end, the blower fan is positioned dead-center between them. This also means that heated exhaust air comes out of both ends, with half of it recirculating back into the computer case. CrossFire configurations are possible, but AMD recommends at least one card slot of space between Radeon HD 6990s. Gamers with CrossFireX sets must ensure proper cooling inside their computer case so these video cards receive fresh air.
The AMD Radeon HD 6990 requires two 8-pin PCI-Express power connections for normal operation. AMD suggests that maximum TDP power demands are 375 watts using PowerTune with the BIOS set to 830MHz, or 450 watts when set to 880MHz. We confirm this with our own power testing results, discussed near the end of this article.

AMD will announce other new developments on the same day they launch the Radeon HD 6990 video card. The first is their Catalyst Control Center 11.4 'Preview' driver, but AMD is also unveiling new branding based on the system configuration. On an AMD-based platform the driver will be called AMD VISION Engine Control Center, and when a discrete AMD GPU is used with any Intel CPU it will be called AMD Catalyst Control Center.

Antilles Detailed Features

In the image below, we've included an exposed PCB illustration supplied by AMD. According to their media presentation, the Radeon HD 6990 uses a special phase-change thermal interface material that loses its effectiveness once the heatsink has been removed. Used in conjunction with an improved cooling solution, the Radeon HD 6990 yields 8% better thermal performance, according to AMD, when compared to previous thermal management solutions they've used. Click on the image below for a larger high-resolution version.

The redesigned cooling solution on Antilles uses two independent heatsinks, one on either side of the central blower fan, to cool each Cayman GPU. Will this be enough? We reveal our temperature testing results later in this article.
Antilles uses two 40nm Cayman GPUs on the Radeon HD 6990. Each GPU is clocked to 830 MHz by default, but can be switched to 880 MHz using the dual-BIOS. Additionally, AMD's PowerTune software allows the same options through the Catalyst Control Center.
A pair of Cooper Bussmann coupled inductors labeled with CLA1108-4-50TR-R markings are surrounded by what AMD is referring to as digital programmable regulators made by Volterra Semiconductor.
The AMD Radeon HD 6990 features 4GB of GDDR5 video frame buffer memory, supplied by several Hynix H5GQ2H24MFR-T2C components. Hynix defines the speed of these parts using the T2C indicator code, which represents 2.5 GHz. According to Hynix, this GDDR5 graphics memory can reach 5.0 Gbps per pin at 1.5V, or 3.6 Gbps at 1.35V.
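As a back-of-the-envelope check (a sketch, not a figure from AMD's spec sheet), the 5.0 Gbps per-pin rate and the 256-bit interface per GPU mentioned earlier imply the card's peak memory bandwidth:

```python
# Estimate peak memory bandwidth from the figures in the text:
# 5.0 Gbps per pin (Hynix T2C parts at 1.5V) across a 256-bit bus
# per Cayman GPU, with two GPUs on the card. Since GDDR5 moves
# 4 bits per pin per memory clock, 5.0 Gbps corresponds to a
# 1250 MHz memory clock.

data_rate_gbps = 5.0      # per pin
bus_width_bits = 256      # per GPU
gpus = 2

per_gpu_gbs = data_rate_gbps * bus_width_bits / 8   # GB/s per GPU
total_gbs = per_gpu_gbs * gpus                      # GB/s for the card

print(f"{per_gpu_gbs:.0f} GB/s per GPU, {total_gbs:.0f} GB/s total")
```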
AMD clocks the GDDR5 to 1250 MHz, regardless of dual-BIOS switch position. By creating a custom BIOS, or by using software tools, hardware enthusiasts can further tweak the performance of their Radeon HD 6990.

Radeon HD 6990 Features
AMD 6900-Series GPU Details
VGA Testing Methodology

The Microsoft DirectX-11 graphics API is native to the Microsoft Windows 7 Operating System, and will be the primary O/S for our test platform. DX11 is also available as a Microsoft Update for the Windows Vista O/S, so our test results apply to both versions of the Operating System. The majority of benchmark tests used in this article are comparative to DX11 performance, however some high-demand DX10 tests have also been included.

According to the Steam Hardware Survey published for the month ending September 2010, the most popular gaming resolution is 1280x1024 (17-19" standard LCD monitors). However, because this 1.31MP resolution is considered 'low' by most standards, our benchmark performance tests concentrate on higher-demand resolutions: 1.76MP 1680x1050 (22-24" widescreen LCD) and 2.30MP 1920x1200 (24-28" widescreen LCD monitors). These resolutions are more likely to be used by high-end graphics solutions, such as those tested in this article.

In each benchmark test there is one 'cache run' conducted, followed by five recorded test runs. At each setting, the highest and lowest results are discarded; the remaining three results are averaged and displayed in the performance charts on the following pages. A combination of synthetic and video game benchmark tests have been used in this article to illustrate relative performance among graphics solutions. Our benchmark frame rate results are not intended to represent real-world graphics performance, as this experience would change based on supporting hardware and the perception of individuals playing the video game.

Intel X58-Express Test System
DirectX-9 Benchmark Applications
DirectX-10 Benchmark Applications
DirectX-11 Benchmark Applications
PCI-E 2.0 Graphics Cards
Graphics Card | Radeon HD6870 | Radeon HD6970 | GeForce GTX570 | Radeon HD5970 | GeForce GTX580 | Radeon HD6990 |
GPU Cores | 1120 | 1536 | 480 | 3200 Total | 512 | 3072 Total |
Core Clock (MHz) | 900 | 880 | 732 | 725 | 772 | 830/880 |
Shader Clock (MHz) | N/A | N/A | 1464 | N/A | 1544 | N/A |
Memory Clock (MHz) | 1050 | 1375 | 950 | 1000 | 1002 | 1250 |
Memory Amount | 1024MB GDDR5 | 2048MB GDDR5 | 1280MB GDDR5 | 2048MB GDDR5 | 1536MB GDDR5 | 4096MB GDDR5 |
Memory Interface | 256-bit | 256-bit | 320-bit | 512-bit | 384-bit | 256-bit |
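The run-averaging procedure from the methodology above is a simple trimmed mean. Here is a minimal sketch; the frame-rate values are made-up placeholders, not results from this article:

```python
# Trimmed-mean averaging as described in the testing methodology:
# one cache run is ignored, five runs are recorded, the highest and
# lowest results are discarded, and the remaining three are averaged.

def average_runs(fps_runs):
    """Drop the best and worst of five recorded runs, average the rest."""
    assert len(fps_runs) == 5, "methodology records exactly five runs"
    trimmed = sorted(fps_runs)[1:-1]   # discard lowest and highest
    return sum(trimmed) / len(trimmed)

sample = [61.2, 59.8, 63.5, 60.4, 60.9]   # hypothetical FPS results
print(round(average_runs(sample), 2))      # -> 60.83
```

Discarding the outliers keeps a single driver hiccup or background task from skewing the charted result.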
DX10: 3DMark Vantage
3DMark Vantage is a PC benchmark suite designed to test DirectX-10 graphics card performance, and is Futuremark's DirectX-10 addition to the 3DMark benchmark series. Although 3DMark Vantage requires NVIDIA PhysX to be installed for program operation, only the CPU/Physics test relies on this technology.
3DMark Vantage offers benchmark tests focusing on GPU, CPU, and Physics performance. Benchmark Reviews uses the two GPU-specific tests for grading video card performance: Jane Nash and New Calico. These tests isolate graphical performance, and remove processor dependence from the benchmark results.
- 3DMark Vantage v1.02
- Extreme Settings: (Extreme Quality, 8x Multisample Anti-Aliasing, 16x Anisotropic Filtering, 1:2 Scale)
3DMark Vantage GPU Test: Jane Nash
Of the two GPU tests 3DMark Vantage offers, the Jane Nash performance benchmark is slightly less demanding. In a short video scene the special agent escapes a secret lair by water, nearly losing her shirt in the process. Benchmark Reviews tests this DirectX-10 scene at 1680x1050 and 1920x1200 resolutions, and uses Extreme quality settings with 8x anti-aliasing and 16x anisotropic filtering. The 1:2 scale is utilized, and is the highest this test allows. By maximizing the processing levels of this test, the scene creates the highest level of graphical demand possible and sorts the strong from the weak.
Jane Nash Extreme Quality Settings
3DMark Vantage GPU Test: New Calico
New Calico is the second GPU test in the 3DMark Vantage test suite. Of the two GPU tests, New Calico is the most demanding. In a short video scene featuring a galactic battleground, there is a massive display of busy objects across the screen. Benchmark Reviews tests this DirectX-10 scene at 1680x1050 and 1920x1200 resolutions, and uses Extreme quality settings with 8x anti-aliasing and 16x anisotropic filtering. The 1:2 scale is utilized, and is the highest this test allows. Using the highest graphics processing level available allows our test products to separate themselves and stand out (if possible).
New Calico Extreme Quality Settings
DX10: Crysis Warhead
Crysis Warhead is an expansion pack based on the original Crysis video game. It is set in the future, where an ancient alien spacecraft has been discovered beneath the Earth on an island east of the Philippines. Crysis Warhead uses a refined version of the CryENGINE2 graphics engine and, like Crysis, renders graphics through the Microsoft Direct3D 10 (DirectX-10) API.
Benchmark Reviews uses the HOC Crysis Warhead benchmark tool to test and measure graphic performance using the Airfield 1 demo scene. This short test places a high amount of stress on a graphics card because of detailed terrain and textures, but also for the test settings used. Using the DirectX-10 test with Very High Quality settings, the Airfield 1 demo scene receives 4x anti-aliasing and 16x anisotropic filtering to create maximum graphic load and separate the products according to their performance.
Using the highest quality DirectX-10 settings with 4x AA and 16x AF, only the most powerful graphics cards are expected to perform well in our Crysis Warhead benchmark tests. DirectX-11 extensions are not supported in Crysis: Warhead, and SSAO is not an available option.
- Crysis Warhead v1.1 with HOC Benchmark
- Moderate Settings: (Very High Quality, 4x AA, 16x AF, Airfield Demo)
Crysis Warhead Moderate Quality Settings
DX11: 3DMark11
Futuremark 3DMark11 is the latest addition to the 3DMark benchmark series built by Futuremark Corporation. 3DMark11 is a PC benchmark suite designed to test DirectX-11 graphics card performance without vendor preference. Although 3DMark11 includes the unbiased Bullet Open Source Physics Library instead of NVIDIA PhysX for the CPU/Physics tests, Benchmark Reviews concentrates on the four graphics-only tests in 3DMark11 and uses them with the medium-level 'Performance' preset.
The 'Performance' level setting applies 1x multi-sample anti-aliasing and trilinear texture filtering to a 1280x720p resolution. The tessellation detail, when called upon by a test, is preset to level 5, with a maximum tessellation factor of 10. The shadow map size is limited to 5 and the shadow cascade count is set to 4, while the surface shadow sample count is at the maximum value of 16. Ambient occlusion is enabled, and preset to a quality level of 5.
- Futuremark 3DMark11 Professional Edition
- Performance Level Settings: (1280x720, 1x AA, Trilinear Filtering, Tessellation level 5)
3DMark11 'Performance' Level Quality Settings
DX11: Aliens vs Predator
Aliens vs. Predator is a science fiction first-person shooter video game, developed by Rebellion, and published by Sega for Microsoft Windows, Sony PlayStation 3, and Microsoft Xbox 360. Aliens vs. Predator utilizes Rebellion's proprietary Asura game engine, which had previously found its way into Call of Duty: World at War and Rogue Warrior. The self-contained benchmark tool is used for our DirectX-11 tests, which push the Asura game engine to its limit.
In our benchmark tests, Aliens vs. Predator was configured to use the highest quality settings with 4x AA and 16x AF. DirectX-11 features such as Screen Space Ambient Occlusion (SSAO) and tessellation have also been included, along with advanced shadows.
- Aliens vs Predator
- Extreme Settings: (Very High Quality, 4x AA, 16x AF, SSAO, Tessellation, Advanced Shadows)
Aliens vs Predator Extreme Quality Settings
DX11: Battlefield Bad Company 2
The Battlefield franchise has been known to demand a lot from PC graphics hardware. DICE (Digital Illusions CE) has incorporated their Frostbite-1.5 game engine with the Destruction-2.0 feature set into Battlefield: Bad Company 2. The game features destructible environments using Frostbite Destruction-2.0, and adds gravitational bullet-drop effects for projectiles shot from weapons at long distance. The Frostbite-1.5 engine in Battlefield: Bad Company 2 consists of DirectX-10 primary graphics, with improved performance and softened dynamic shadows added for DirectX-11 users.
At the time Battlefield: Bad Company 2 was published, DICE was also working on the Frostbite-2.0 game engine. This upcoming engine will include native support for DirectX-10.1 and DirectX-11, as well as parallelized processing support for 2-8 parallel threads. This will improve performance for users with an Intel Core-i7 processor. Unfortunately, the Extreme Edition Intel Core i7-980X six-core CPU with twelve threads will not see full utilization.
In our benchmark tests of Battlefield: Bad Company 2, the first three minutes of action in the single-player raft night scene are captured with FRAPS. Relative to the online multiplayer action, these frame rate results are nearly identical to daytime maps with the same video settings. The Frostbite-1.5 game engine in Battlefield: Bad Company 2 appears to equalize our test set of video cards, and despite AMD's sponsorship of the game it still plays well using any brand of graphics card.
- BattleField: Bad Company 2
- Extreme Settings: (Highest Quality, HBAO, 8x AA, 16x AF, 180s Fraps Single-Player Intro Scene)
Battlefield Bad Company 2 Extreme Quality Settings
DX11: BattleForge
BattleForge is a free Massive Multiplayer Online Role Playing Game (MMORPG) developed by EA Phenomic with DirectX-11 graphics capability. Combining strategic cooperative battles, the community of MMO games, and trading card gameplay, BattleForge players are free to put their creatures, spells and buildings into whatever combinations they see fit. These units are represented in the form of digital cards from which you build your own unique army. With minimal resources and a custom tech tree to manage, the gameplay is remarkably accessible and action-packed.

Benchmark Reviews uses the built-in graphics benchmark to measure performance in BattleForge, using Very High quality settings (detail) and 8x anti-aliasing with auto multi-threading enabled. BattleForge is one of the first titles to take advantage of DirectX-11 in Windows 7, and offers a very robust color range throughout the busy battleground landscape. The charted results illustrate how performance measures up between video cards when Screen Space Ambient Occlusion (SSAO) is enabled.
- BattleForge v1.2
- Extreme Settings: (Very High Quality, 8x Anti-Aliasing, Auto Multi-Thread)
EDITOR'S NOTE: AMD is aware of performance concerns with BattleForge, and offered us an official response:
"We are aware that there are some abnormal performance results in BattleForge with our new AMD Radeon HD 6900 Series graphics card. Keep in mind this is a new VLIW4 shader architecture and we are still fine tuning the shader compilation. We will be able to post a hotfix for Battleforge shortly that will provide a noticeable increase in performance."
BattleForge Extreme Quality Settings
DX11: Lost Planet 2
Lost Planet 2 is the second installment in the saga of the planet E.D.N. III, ten years after the story of Lost Planet: Extreme Condition. The snow has melted and the lush jungle life of the planet has emerged with angry and luscious flora and fauna. With the new environment comes the addition of DirectX-11 technology to the game.
Lost Planet 2 takes advantage of DX11 features including tessellation and displacement mapping on water, level bosses, and player characters. In addition, soft body compute shaders are used on 'Boss' characters, and wave simulation is performed using DirectCompute. These cutting edge features make for an excellent benchmark for top-of-the-line consumer GPUs.
The Lost Planet 2 benchmark offers two different tests, which serve different purposes. This article uses tests conducted on benchmark B, which is designed to be a deterministic and effective benchmark tool featuring DirectX 11 elements.
- Lost Planet 2 Benchmark 1.0
- Moderate Settings: (2x AA, Low Shadow Detail, High Texture, High Render, High DirectX 11 Features)
Lost Planet 2 Moderate Quality Settings
DX11: Metro 2033
Metro 2033 is an action-oriented video game with a combination of survival horror, and first-person shooter elements. The game is based on the novel Metro 2033 by Russian author Dmitry Glukhovsky. It was developed by 4A Games in Ukraine and released in March 2010 for Microsoft Windows. Metro 2033 uses the 4A game engine, developed by 4A Games. The 4A Engine supports DirectX-9, 10, and 11, along with NVIDIA PhysX and GeForce 3D Vision.
The 4A engine is multi-threaded such that only PhysX has a dedicated thread; it uses a task model without any pre-conditioning or pre/post-synchronizing, allowing tasks to be done in parallel. The 4A game engine can utilize a deferred shading pipeline, uses tessellation for greater performance, and offers HDR (complete with blue shift), real-time reflections, color correction, film grain and noise; the engine also supports multi-core rendering.

Metro 2033 features superior volumetric fog, double PhysX precision, object blur, sub-surface scattering for skin shaders, parallax mapping on all surfaces, and greater geometric detail with less aggressive LODs. Using PhysX, the engine supports destructible environments, cloth and water simulations, and particles that can be fully affected by environmental factors.

NVIDIA has been diligently working to promote Metro 2033, and for good reason: it's one of the most demanding PC video games we've ever tested. When their flagship GeForce GTX 480 struggles to produce 27 FPS with DirectX-11 anti-aliasing turned down to its lowest setting, you know that only the strongest graphics processors will generate playable frame rates. All of our tests enable Advanced Depth of Field and Tessellation effects, but disable advanced PhysX options.
- Metro 2033
- Moderate Settings: (Very-High Quality, AAA, 16x AF, Advanced DoF, Tessellation, 180s Fraps Chase Scene)
Metro 2033 Moderate Quality Settings
DX11: Unigine Heaven 2.1
The Unigine Heaven 2.1 benchmark is a freely available tool that exercises DirectX-11 graphics capabilities on Windows 7 or an updated Windows Vista Operating System. It renders a scene of floating islands with a tiny village hidden in the cloudy skies, and an interactive mode lets users explore the intricate world at their own pace. Through its advanced renderer, Unigine was one of the first to showcase art assets with tessellation, utilizing the technology to its full extent.

The distinguishing feature in the Unigine Heaven benchmark is hardware tessellation: a scalable technique for automatic subdivision of polygons into smaller and finer pieces, which lets developers add fine geometric detail almost free of charge in terms of performance. Thanks to this procedure, the rendered image approaches a far more realistic level of visual detail.
Although Heaven-2.1 was recently released and used for our DirectX-11 tests, the benchmark results were extremely close to those obtained with Heaven-1.0 testing. Since only DX11-compliant video cards will properly test on the Heaven benchmark, only those products that meet the requirements have been included.
- Unigine Heaven Benchmark 2.1
- Extreme Settings: (High Quality, Normal Tessellation, 16x AF, 4x AA)
Heaven 2.1 Extreme Quality Settings
| Graphics Card | Radeon HD6870 | Radeon HD6970 | GeForce GTX570 | Radeon HD5970 | GeForce GTX580 | Radeon HD6990 |
|---|---|---|---|---|---|---|
| GPU Cores | 1120 | 1536 | 480 | 3200 Total | 512 | 3072 Total |
| Core Clock (MHz) | 900 | 880 | 732 | 725 | 772 | 830/880 |
| Shader Clock (MHz) | N/A | N/A | 1464 | N/A | 1544 | N/A |
| Memory Clock (MHz) | 1050 | 1375 | 950 | 1000 | 1002 | 1250 |
| Memory Amount | 1024MB GDDR5 | 2048MB GDDR5 | 1280MB GDDR5 | 2048MB GDDR5 | 1536MB GDDR5 | 4096MB GDDR5 |
| Memory Interface | 256-bit | 256-bit | 320-bit | 512-bit | 384-bit | 256-bit |
Antilles Dual-BIOS Overclocking
The Radeon HD 6990 video card offers a special dual-BIOS feature that enables users to boot their computer with either a standard or factory-overclocked configuration. As the most powerful graphics card on the market there's more than enough performance available without the added GPU overclock, but some gamers and hardware enthusiasts may want to take a chance at setting a benchmark record or give their frame rate an extra boost. BIOS position '2' is the default shipping position, and yields 830 MHz GPU clocks at 1.12 volts each. BIOS position '1' is a hardware overdrive option, and increases the clocks to 880 MHz while adjusting voltage to 1.175 volts.
WARNING: AMD's product warranty does not cover damages caused by overclocking, even when overclocking is enabled via AMD software and/or the Dual-BIOS Function on the AMD Radeon HD 6990.
AMD and NVIDIA already stretch their GPUs pretty thin in terms of overclocking head room, but there's a difference between thin and non-existent. In this section, Benchmark Reviews compares stock versus overclocked video card performance on the Radeon HD 6990. Here are the test results:
GPU Overclocking Results
| Test Item | Standard GPU | Overclocked GPU | Improvement |
|---|---|---|---|
| Radeon HD 6990 | 830 MHz | 880 MHz | 50 MHz GPU |
| DX9+SSAO: Mafia II | 73.0 | 74.8 | 1.8 FPS (2.5%) |
| DX10: 3dMark Jane Nash | 56.2 | 58.2 | 2.0 FPS (3.6%) |
| DX10: 3dMark Calico | 45.4 | 47.3 | 1.9 FPS (4.2%) |
| DX11: 3dMark11 GT1 | 40.6 | 42.5 | 1.9 FPS (4.7%) |
| DX11: 3dMark11 GT2 | 50.0 | 52.1 | 2.1 FPS (4.2%) |
| DX11: 3dMark11 GT3 | 59.3 | 61.6 | 2.3 FPS (3.9%) |
| DX11: 3dMark11 GT4 | 28.9 | 30.3 | 1.4 FPS (4.8%) |
| DX11: Aliens vs Predator | 76.0 | 78.7 | 2.7 FPS (3.6%) |
| DX11: Battlefield BC2 | 123.5 | 127.4 | 3.9 FPS (3.2%) |
| DX11: Metro 2033 | 54.1 | 55.2 | 1.1 FPS (2.0%) |
| DX11: Heaven 2.1 | 75.5 | 78.0 | 2.5 FPS (3.3%) |
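To make the scaling explicit, the "Improvement" column is simply the overclocked frame rate minus the stock frame rate, expressed as a percentage of the stock result. A minimal sketch using three rows of values from the table above (note the 50 MHz bump is a 6% clock increase, yet frame rates gain only 2-5%):

```python
# Sketch: reproduce the "Improvement" column from stock vs. overclocked FPS.
# The (stock, overclocked) pairs below are taken from the results table.
results = {
    "DX9+SSAO: Mafia II": (73.0, 74.8),
    "DX11: Metro 2033":   (54.1, 55.2),
    "DX11: Heaven 2.1":   (75.5, 78.0),
}

for test, (stock, oc) in results.items():
    gain = oc - stock                 # absolute FPS improvement
    pct = gain / stock * 100          # relative improvement vs. stock
    print(f"{test}: +{gain:.1f} FPS ({pct:.1f}%)")
```

The sub-linear scaling suggests these games are not purely GPU-clock-bound at 1920x1200.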
Overclocking Summary: Our baseline results show a 2.0-4.8% increase in performance (at 1920x1200 resolution), which usually amounts to an added 2+ FPS in most games. That's not a whole lot of performance boost in relation to the increased power consumption, but every extra frame translates into an advantage over your enemy. In our overclocked testing with the Catalyst 11.4 'Preview' drivers, there were occasions when the driver would crash during a benchmark test, so it's unclear just how far enthusiasts can stretch the Radeon HD 6990. There were other issues to contend with, such as...
Radeon HD 6990 Temperatures
Benchmark tests are always nice, so long as you care about comparing one product to another. But when you're an overclocker, gamer, or merely a PC hardware enthusiast who likes to tweak things on occasion, there's no substitute for good information. Benchmark Reviews has a very popular guide written on Overclocking Video Cards, which gives detailed instruction on how to tweak a graphics card for better performance. Of course, not every video card has overclocking head room. Some products run so hot that they can't suffer any higher temperatures than they already do. This is why we measure the operating temperature of the video card products we test.
To begin my testing, I use GPU-Z to measure the temperature at idle as reported by the GPU. Next, I use a modified version of FurMark's "Torture Test" to defeat the GPU's power-limiting features and generate the maximum thermal load. This allows us to record absolute maximum GPU temperatures at high-power 3D mode. The ambient room temperature remained at a stable 20°C throughout testing. FurMark does two things extremely well: it drives the thermal output of any graphics processor much higher than any video game realistically could, and it does so consistently every time. FurMark works great for testing the stability of a GPU as the temperature rises to the highest possible output. The temperatures discussed below are absolute maximum values, and not representative of real-world temperatures while gaming:
| Video Card | Idle Temp | Loaded Temp | Loaded Noise | Ambient |
|---|---|---|---|---|
| ATI Radeon HD 5850 | 39°C | 73°C | 7/10 | 20°C |
| NVIDIA GeForce GTX 460 | 26°C | 65°C | 4/10 | 20°C |
| AMD Radeon HD 6850 | 42°C | 77°C | 7/10 | 20°C |
| AMD Radeon HD 6870 | 39°C | 74°C | 6/10 | 20°C |
| ATI Radeon HD 5870 | 33°C | 78°C | 7/10 | 20°C |
| NVIDIA GeForce GTX 560 Ti | 27°C | 78°C | 5/10 | 20°C |
| NVIDIA GeForce GTX 570 | 32°C | 82°C | 7/10 | 20°C |
| ATI Radeon HD 6970 | 35°C | 81°C | 6/10 | 20°C |
| NVIDIA GeForce GTX 580 | 32°C | 70°C | 6/10 | 20°C |
| NVIDIA GeForce GTX 590 | 33°C | 77°C | 6/10 | 20°C |
| AMD Radeon HD 6990 | 40°C | 84°C | 8/10 | 20°C |
Although the Radeon HD 6990 uses an enhanced cooling solution with AMD's latest power efficiency technology, the temperatures did force the fan to run at audible levels most of the time. Resting at idle the Radeon HD 6990 measured 40°C in a 20°C room, which is actually on-par with some of the mid-range graphics cards. Once the GPUs were stressed to 100% using multi-GPU FurMark, the differences began to surface. The Radeon HD 6990 produced 84°C under full load (measured at 20°C ambient after ten minutes), which is similar to other video cards, but enough to force the cooling fan into a noisy high-power mode.
VGA Power Consumption
For power consumption tests, Benchmark Reviews utilizes an 80-PLUS GOLD certified OCZ Z-Series Gold 850W PSU, model OCZZ850. This power supply unit has been tested to provide over 90% typical efficiency by Chroma System Solutions. To measure isolated video card power consumption, Benchmark Reviews uses the Kill-A-Watt EZ (model P4460) power meter made by P3 International.
A baseline measurement is taken without any video card installed on our test computer system, which is allowed to boot into Windows 7 and rest idle at the login screen before power consumption is recorded. Once the baseline reading has been taken, the graphics card is installed and the system is again booted into Windows and left idle at the login screen before taking the idle reading. Our final loaded power consumption reading is taken with the video card running a stress test using FurMark. Below is a chart with the isolated video card power consumption (system without video card minus measured total) displayed in Watts for each specified test product:
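The isolation arithmetic described above reduces to a single subtraction per reading. A minimal sketch follows; the baseline and wall-meter wattages here are hypothetical stand-ins (the article does not publish its baseline reading), chosen so the differences match the HD 6990 figures reported below:

```python
# Sketch of the isolation method: the card's draw is the Kill-A-Watt reading
# with the card installed minus the no-card baseline reading.
def isolated_watts(total_with_card, baseline_without_card):
    """Isolated video card power consumption in watts."""
    return total_with_card - baseline_without_card

baseline = 90      # hypothetical: system draw at login screen, no video card
idle_total = 136   # hypothetical: wall reading with card installed, idle
load_total = 440   # hypothetical: wall reading under FurMark load

print(isolated_watts(idle_total, baseline))  # 46 W idle, per the HD 6990 result
print(isolated_watts(load_total, baseline))  # 350 W loaded, per the HD 6990 result
```

Because the baseline is measured at the idle login screen, CPU load differences during the stress test leak into the "isolated" loaded figure slightly, which is one reason these numbers are treated as maximums rather than precise draws.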
| VGA Product Description (sorted by combined total power) | Idle Power | Loaded Power |
|---|---|---|
| NVIDIA GeForce GTX 480 SLI Set | 82 W | 655 W |
| NVIDIA GeForce GTX 590 Reference Design | 53 W | 396 W |
| ATI Radeon HD 4870 X2 Reference Design | 100 W | 320 W |
| AMD Radeon HD 6990 Reference Design | 46 W | 350 W |
| NVIDIA GeForce GTX 295 Reference Design | 74 W | 302 W |
| ASUS GeForce GTX 480 Reference Design | 39 W | 315 W |
| ATI Radeon HD 5970 Reference Design | 48 W | 299 W |
| NVIDIA GeForce GTX 690 Reference Design | 25 W | 321 W |
| ATI Radeon HD 4850 CrossFireX Set | 123 W | 210 W |
| ATI Radeon HD 4890 Reference Design | 65 W | 268 W |
| AMD Radeon HD 7970 Reference Design | 21 W | 311 W |
| NVIDIA GeForce GTX 470 Reference Design | 42 W | 278 W |
| NVIDIA GeForce GTX 580 Reference Design | 31 W | 246 W |
| NVIDIA GeForce GTX 570 Reference Design | 31 W | 241 W |
| ATI Radeon HD 5870 Reference Design | 25 W | 240 W |
| ATI Radeon HD 6970 Reference Design | 24 W | 233 W |
| NVIDIA GeForce GTX 465 Reference Design | 36 W | 219 W |
| NVIDIA GeForce GTX 680 Reference Design | 14 W | 243 W |
| Sapphire Radeon HD 4850 X2 11139-00-40R | 73 W | 180 W |
| NVIDIA GeForce 9800 GX2 Reference Design | 85 W | 186 W |
| NVIDIA GeForce GTX 780 Reference Design | 10 W | 275 W |
| NVIDIA GeForce GTX 770 Reference Design | 9 W | 256 W |
| NVIDIA GeForce GTX 280 Reference Design | 35 W | 225 W |
| NVIDIA GeForce GTX 260 (216) Reference Design | 42 W | 203 W |
| ATI Radeon HD 4870 Reference Design | 58 W | 166 W |
| NVIDIA GeForce GTX 560 Ti Reference Design | 17 W | 199 W |
| NVIDIA GeForce GTX 460 Reference Design | 18 W | 167 W |
| AMD Radeon HD 6870 Reference Design | 20 W | 162 W |
| NVIDIA GeForce GTX 670 Reference Design | 14 W | 167 W |
| ATI Radeon HD 5850 Reference Design | 24 W | 157 W |
| NVIDIA GeForce GTX 650 Ti BOOST Reference Design | 8 W | 164 W |
| AMD Radeon HD 6850 Reference Design | 20 W | 139 W |
| NVIDIA GeForce 8800 GT Reference Design | 31 W | 133 W |
| ATI Radeon HD 4770 RV740 GDDR5 Reference Design | 37 W | 120 W |
| ATI Radeon HD 5770 Reference Design | 16 W | 122 W |
| NVIDIA GeForce GTS 450 Reference Design | 22 W | 115 W |
| NVIDIA GeForce GTX 650 Ti Reference Design | 12 W | 112 W |
| ATI Radeon HD 4670 Reference Design | 9 W | 70 W |
In the previous section we discovered how well the new and improved AMD cooling solution managed temperatures for a pair of Cayman GPUs on the Radeon HD 6990 video card. In terms of power consumption, the results were generally similar. Keeping in mind that the Radeon HD 6990 houses two independent high-performance graphics processors, it's expected that the graphics card will require significant power even with the use of new Volterra regulators. The Radeon HD 6990 accepts two 8-pin PCI-E power connections for proper operation, and will not display a picture on the screen unless proper power has been supplied.
The power consumption statistics discussed below are absolute maximum values, and not representative of real-world power draw while gaming:
Resting at idle with no GPU load, the Radeon HD 6990 consumed 46W - or roughly 23W per GPU by our measure. This level of power consumption was identical between standard and OC positions on the dual-BIOS. Compensating for a small margin of error, this also roughly matches idle power draw from the ATI Radeon HD 5970 video card. Once 3D applications begin to demand power from the Cayman GPUs, electrical power consumption climbs. Measured at full throttle with FurMark's multi-GPU 3D torture load, the Radeon HD 6990 topped out at 350W maximum power draw, matching AMD's stated max TDP. Under full load using the OC setting on the dual-BIOS, measured maximum power draw climbed to 368W. Our measurements are absolute maximums, since FurMark is not representative of real-world gaming power draw.
Radeon HD 6990 Conclusion
IMPORTANT: Although the rating and final score mentioned in this conclusion are made to be as objective as possible, please be advised that every author perceives these factors differently at various points in time. While we each do our best to ensure that all aspects of the product are considered, there are oftentimes unforeseen market conditions and manufacturer changes which occur after publication that could render our rating obsolete. Please do not base any purchase solely on our conclusion, as it represents our product rating specifically for the product tested, which may differ from future versions. Benchmark Reviews begins our conclusion with a short summary for each of the areas that we rate.
AMD designed the Radeon HD 6990 to be the best graphics card available on the market, and based on our results they've succeeded. The closest competition is the NVIDIA GeForce GTX 580 in terms of single-card graphics, or two GeForce GTX 570's paired together into a SLI set. We've included a pair of AMD Radeon HD 6870's joined in a CrossFireX set, just to illustrate other options. Obviously two AMD Radeon HD 6970's could be combined into a CrossFireX set, but the performance would approximately match what we've received out of the Radeon HD 6990. AMD HD3D technology was not tested for this review, or the impact it has on video game frame rates.
Beginning with graphics performance, the Radeon HD 6990 video card confirmed AMD's promise to gamers that they would deliver the best possible solutions. At the 830 MHz default speed setting, the Radeon HD 6990 performed extremely well against any perceivable competition. Switching the dual-BIOS to the OC setting unlocked another 50 MHz from both Cayman GPUs, good for a 2-3 FPS boost to video frame rate performance. DirectX-9 games performed extremely well with all of the settings turned up high and played at 1920x1200 resolution. Mafia II with SSAO enabled and PhysX turned off easily pushed past 80 FPS but couldn't match GeForce GTX 570 SLI performance. Call of Duty: Black Ops was tweaked to use the absolute highest settings possible, and yet still had extremely fluid performance during action-packed multiplayer maps.
In the more modern DirectX 10 game tests, Crysis Warhead kept an average 55 FPS and edged out the GTX 570 SLI set while matching up against two GTX 580's in SLI. Battlefield: Bad Company 2 used 8x anti-aliasing and 16x anisotropic filtering to produce 124 FPS, but trailed the pair of GTX 570's in SLI. Futuremark's 3DMark11 benchmark suite strained our high-end graphics cards with only mid-level DirectX-11 settings displayed at 720p, yet the HD6990 generally matched up well to the GTX 570 SLI pair in these tests. DX11 Aliens vs Predator pushed the Radeon HD 6990 to produce 76 FPS on average while easily surpassing the GeForce GTX 570 SLI set. Lost Planet 2 played at 57 FPS with 2x AA, but performance fell behind the competition. Metro 2033 is a demanding game even when played with high-end graphics, but the Radeon HD 6990 delivered 54 FPS and edged past the GTX 570 SLI set. Unigine Heaven positioned the Radeon HD 6990 ahead of the GTX 570 SLI pair, and nearly matched up against the GTX 580 SLI set.
There are six PLX display channel bridges present on the Radeon HD 6990 video card, which opens up visual functionality. Two channels are dedicated to the only dual-link DVI port available on this video card, while four mini-DisplayPort 1.2 outputs each use a channel. The real innovation comes with DP 1.2, which can use a Multi-Stream Transport Hub to drive multiple displays at different resolutions, refresh rates, and color depth in Eyefinity. Included with the Radeon HD 6990 is an extended-length CrossFireX bridge, one mini-DisplayPort to passive single-link DVI adapter, mini-DisplayPort to active single-link DVI adapter, and mini-DisplayPort to passive HDMI adapter. The included adapters will enable 3x1 gaming right out of the box with DVI monitors, but with additional display adapters or DisplayPort monitors, you will be able to drive up to five displays in portrait Eyefinity (5x1 Portrait mode). AMD's HD3D technology currently supports only one 3D display, with plans for multi-monitor 3D available in the future, so the Radeon HD 6990 could be the perfect fit for gamers looking to plan ahead for multi-display 3D setups.
Antilles uses 40nm Cayman GPUs, and with the added thermal management system they worked perfectly in a dual-GPU package. The constant move towards building with a smaller die process is rather insignificant in the grand scheme of things, as was proven when the NVIDIA GeForce GTX 280 successfully launched at 65nm instead of the expected 55nm process. Taiwan Semiconductor Manufacturing Company (TSMC) is already building 32nm processors for other clientele, and AMD has noted that Moore's Law still applies - just not in regard to their Cayman GPU. They claim that as a die process becomes smaller, it also becomes much more costly to develop and produce. But then there are times when the manufacturer just can't complete the project as planned, as was the case with TSMC.
Appearance is a much more subjective matter, especially since this particular rating doesn't have any quantitative benchmark scores to fall back on. By now, most people have had a few years to grow familiar to AMD's 'black box' product appearance, so this style has nearly established itself as tradition. With the reference design, half of the heated exhaust air is recirculated back into the computer case while the other half is expelled out of the rear vents. Some add-in card partners may periodically offer their own unique twist on a reference design by incorporating an improved cooling solution with colorful fan shroud graphics, but I don't expect this to happen with the Radeon HD 6990. AMD's redesigned cooler is their most efficient, and these video cards are expected to sell in limited quantities for each partner.
Value is a fast moving target, and please believe me when I say that it changes by the minute in this industry. Delivering better performance and additional features at a lower cost to consumers has been the cornerstone of AMD's business philosophy for more than a decade, and they've repeatedly demonstrated this resolve in each of their many battles with Intel CPUs and NVIDIA GPUs. The premium-priced Radeon HD 6990 graphics card hedges a bet on AMD's traditional values, and demonstrates their capability to innovate the graphics segment while leading their market. As of launch day 08 March 2011, the Radeon HD 6990 has been assigned a $699 MSRP. In terms of cost value the Radeon HD 6990 costs roughly the same as two Radeon HD 6970's or GeForce GTX 570's, which is fitting considering it usually delivers matching performance. Newegg offers the following models online:
- $709.99 - Sapphire 100310SR
- $734.99 - HIS H699F4G4M
- $734.99 - MSI R6990-4PD4GD5
- $739.99 - XFX HD-699A-ENF9
- $739.99 - Gigabyte GV-R699D5-4GD-B
- $749.99 - PowerColor AX6990 4GBD5-M4D
In summary, the Radeon HD 6990 is the ultimate enthusiast graphics card. It dominates the landscape with unrivaled single-card performance, and matches very well against dual-card Radeon HD 6970 CrossFireX or GeForce GTX 570 SLI sets that consume more power and dissipate additional heat. If you're looking to match performance on the cheap, value-seeking gamers could purchase one Radeon HD 6970 now while saving to upgrade with a second unit later. You'll take up more room inside the computer case and a multi-card setup could require a new power supply unit, but it's possible so long as you're willing to make concessions. For elite-level gamers and hardware enthusiasts the AMD Radeon HD 6990 represents the best you can buy, and delivers on its price point.
While AMD HD3D and Fusion technology work their way into the mainstream, products like the Radeon HD 6990 introduce more flexibility for display devices; especially where multi-monitor Eyefinity is used. Stereoscopic 3D gaming is possible with the right equipment, as are 3D Blu-ray and 3D DVD playback. The 40nm Cayman GPU may not have been built on the 32nm die process it was originally designed for, but the Radeon HD 6990 still offers stellar gaming performance that rivals the older Radeon HD 5970 as well as two recently introduced Radeon HD 6970's. Overall I consider the Radeon HD 6990 to be an excellent video card intended for affluent top-end gamers, but I suspect that the frame rate performance will sell more Antilles cards than multi-display Eyefinity support or added stereoscopic 3D functionality. If you can afford the asking price, the Radeon HD 6990 delivers the best graphics performance money can buy.
So what do you think of the AMD Antilles video card? Leave comments below, or ask questions in our Forum.
Pros:
+ Unmatched top-end DX11 graphics performance
+ Dual-BIOS switch enables performance boost
+ Drives five-display portrait mode Eyefinity
+ Cayman GPUs enable stereoscopic 3D functionality
+ Fan exhausts most heated air outside of case
+ Multi-view CODEC enables 3D Blu-ray playback
+ Improves DisplayPort to 1.2 with display chaining
+ Supports dual-mode AMD CrossFireX functionality
Cons:
- Extremely expensive enthusiast product
- Heated exhaust is vented back into case
- Audible cooling fan at idle, noisy under load
- AMD HD3D products are hard to find
Related Articles:
- NZXT Phantom Full-Tower Case PHAN-001BK
- G.Skill ECO 4GB DDR3 Memory Kit F310666CL7D
- Mad Catz Cyborg F.R.E.Q. 5 Gaming Headset
- Corsair CA-HS1NA USB Gaming Headset
- RatPadz XT and GS Gaming Surface Mouse Pads
- Rosewill RK-9000BR Mechanical Keyboard
- Hiper HPU-4K530-MS Type R Modular 530W PSU
- Intel DZ77GA-70K Motherboard Components
- AZiO KB577U Levetron MECH5 Gaming Keyboard
- Razer Orochi USB/Bluetooth Mobile Gaming Mouse
Comments
The dual-bios switch is a pretty awesome idea even if the actual overclocking potential of the 6990 was a huge 'meh'.
I suppose these cards are never intended to be particularly sensible though, and are aimed at that certain type of gamer (more money than sense, possibly).
I know it seems ironic to talk about budget when we're discussing a $700 video card, but if I take the money I would have spent on an X58/LGA1366 motherboard and a top-of-the-heap CPU and put it towards the video card, I think it's a better bang for the $$$$.
Another problem is, whilst high end cards are great fun and all that, they're currently redundant. Current PC games are becoming so heavily gimped by the console market that we're not seeing anything worth buying these parts for.
Having said that I've not played Crysis 2 yet, but it's a console port. Bleh.
RagingShadow, I suggest you wait until the game is released.
In this case I don't count the "What the lawyers make us say" setting at 830MHz 1.12V as the reference setting, but just a safe adaptation to get the card within the PCIe standards.
880MHz and 1.175V is the reference speed, and from there the card can be overclocked a bit. How much past 900MHz is still anybody's guess though...
And almost all my friends and I use controllers when we feel like it.