NVIDIA GeForce GTX 590 Gemini Video Card
Reviews - Featured Reviews: Video Cards
Written by Olin Coles
Thursday, 24 March 2011
NVIDIA GeForce GTX 590 Gemini Review
Manufacturer: NVIDIA

Full Disclosure: The product sample used in this article has been provided by NVIDIA.

Poised to compete against AMD's Radeon HD 6990, NVIDIA launches their own dual-Fermi GF110-based GeForce GTX 590 video card.

March madness indeed. Just last week AMD made news with their dual-Cayman GPU Radeon HD 6990 video card, and days later NVIDIA is returning with their own competitor. Designed around their flagship GeForce GTX 580 video card, they've combined two titanic graphics processors into a package roughly the same size, and still manage to produce the quietest dual-GPU video card ever made. With 512 CUDA cores each, two independent NVIDIA GF110 GPUs join to deliver 1024 total cores of graphics processing power. Each GPU has six 64-bit memory controllers for a 384-bit memory interface, which together feed 3GB of combined GDDR5 video frame buffer. All of this is said to deliver performance comparable to the Radeon HD 6990, which is good for gamers, but it does so while running cooler and emitting less noise.

When it comes to computer hardware there's something for everyone, and this rings especially true for graphics cards. If you're on a tight budget but still like to point and shoot your way through levels, there are plenty of affordable entry-level products that can satisfy your needs. But if you're an enthusiast gamer who demands the highest level of performance, far surpassing mainstream standards, the graphics industry continues to cherish your business. Fierce competition between NVIDIA and AMD has allowed PC gamers to enjoy the best graphics hardware ever developed for desktop computers. NVIDIA has worked hard to earn their reputation as the industry leader in desktop graphics, and the codename 'Gemini' graphics card is proof of that dedication. In this article, Benchmark Reviews tests the GeForce GTX 590 against the AMD Radeon HD 6990 and an entire market of top-end desktop graphics solutions.

In what could be hailed as Fermi's final chapter, NVIDIA continues to update their product family by adding the GeForce GTX 590 to the ranks. The GeForce GTX 590 is intended to achieve the best performance possible while remaining power-efficient and quiet during heavy operation. Only 11.0 inches long, the GeForce GTX 590 is capable of installing into standard ATX computer cases where fitting video cards such as the AMD Radeon HD 6990 would be impossible. Additionally, NVIDIA has invested more research into vapor chamber technology, and developed a cooling solution that tames temperatures for two GF110 GPUs using only one fan. This enables hardcore gamers to configure two GTX 590's into quad-SLI, presuming the motherboard and power supply support the requirements. All of this adds up to more potential performance for gamers, and some serious enthusiast credibility for overclockers.
Something happened to the Fermi architecture between the time it premiered as GF100 and when it began to really turn heads as GF104: the ratio of shaders to texture units was perfected. The original GF100 GPU placed too much emphasis on tessellation, and not enough on overall graphical performance. As a result of finding the right balance, the GF104 graphics processor on the GeForce GTX 460 became an overnight sensation for gamers. Now evolved into the GF110 GPU, all 512 cores understand their purpose and propel the GeForce GTX 580 to a level only rivaled by the competition's best and most expensive dual-GPU Radeon HD 5970 graphics card. Trimmed down to a modest 244 watts of power consumption under load, the GTX 580 outperforms its predecessor in both power efficiency and graphical performance.

NVIDIA targets the GeForce GTX 590 at the premium upper-end segment willing to spend $700 on discrete graphics, which admittedly includes only the most affluent gamers. To best illustrate the GTX 590's performance, we use the most demanding PC video game titles and benchmark software available. Graphical frame rate performance is tested against a large collection of competing desktop products, such as the Radeon HD 6990 and various SLI/CrossFire configurations. Using the DirectX-9 API that is native to Windows XP, we've compared graphics performance using Mafia II. Some older DirectX-10 favorites such as Crysis Warhead and 3DMark Vantage are included, as well as newer DirectX-11 titles such as Aliens vs Predator, Battlefield: Bad Company 2, BattleForge, Lost Planet 2, Metro 2033, and Tom Clancy's HAWX 2, along with the Unigine Heaven 2.1 and 3DMark11 benchmarks.

GeForce GTX 590 Basic Details

From a distance, the NVIDIA GeForce GTX 590 looks a lot like the GTX 580 or 570. Unless you get close enough to notice the details, they appear to be about the same size overall. The outer dimensions for the GeForce GTX 590 give this 1.5" tall double-bay, 3.9" wide, 11.0" long graphics card a similar profile, but it's actually slightly longer than the GTX 580 (10.5") and much shorter than a Radeon HD 6990 (12.0" long). NVIDIA's add-in card partners may incorporate their own cooling solution on the GTX 590, but most brands have adopted the reference design dressed with decals.
A center-mounted 80mm fan uses a deep-chamfered depression to draw cool air into the angled fan shroud, best illustrated in the image below. The GeForce GTX 590 keeps fresh air moving into the unit, which passes through heatsinks located at opposite ends. This design, paired with a fan that extends slightly beyond the surface of the shroud, allows more air to reach the intake whenever two or more video cards are combined in close-proximity SLI configurations. In terms of SLI configurations, the GeForce GTX 590 supports a dual-card quad-SLI set; a triple-card hexa-SLI configuration is not possible.
If you consider that the GeForce GTX 580 requires a 6-pin and an 8-pin power connection to maintain its 244W TDP, it seems incredible that NVIDIA could fit two of these processors onto one PCB and make them functional with two 8-pin PCI-E power connections for a 365W TDP. Similar to the GTX 480 shroud design, the GeForce GTX 590 has identical vents near the header panel beside the SLI tab. Despite the lower operating temperatures, special consideration for heat must be given to overclocked computer systems, since multiple GPUs exhausting inside the computer case will further raise the temperatures the CPU must endure.
The reference design offers three simultaneously functional dual-link DVI (DL-DVI) connections and a mini-DisplayPort output on the GeForce GTX 590. Add-in partners may elect to remove or possibly further extend any of these video interfaces, but most will likely retain the original engineering. Since three dual-link DVI digital outputs are included on the GTX 590, only one of these video cards is necessary to drive triple-display NVIDIA 3D-Vision Surround functionality. All of these video interfaces consume exhaust-vent real estate, so most of the heated air will be expelled back into the computer case.
NVIDIA designed the GeForce GTX 590 for a 365-watt Thermal Design Power (TDP), and suggests at least a 700W power supply unit. It would be ideal for system builders to use a PSU with two native 8-pin PCI-E power connections, rather than potentially overloading another rail by using an adapter. Powering the twin GF110 GPUs is a 10-phase advanced digital power controller with over-volting capability, while two dual-phase controllers provide power for the circuit board's GDDR5 memory. We examine power consumption later on in this article, using 3DMark11 to represent real-world loads.
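As a quick sanity check of that power design, the sketch below tallies the power available to the card under the PCI-Express specification (75W from the slot plus 150W per 8-pin connector) against the rated TDP; the figures are spec limits, not measurements from this review.

```python
# Power budget implied by the GTX 590's connectors versus its rated TDP.
# Connector limits are standard PCI-Express specification values.

PCIE_SLOT_W = 75        # power available through the x16 slot
EIGHT_PIN_W = 150       # power available per 8-pin PCI-E connector

available = PCIE_SLOT_W + 2 * EIGHT_PIN_W
tdp = 365

print(f"Available by spec: {available} W")        # 375 W
print(f"Rated TDP:         {tdp} W")
print(f"Headroom:          {available - tdp} W")  # 10 W
```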
On the backside of the GeForce GTX 590 video card are two aluminum plates fastened securely to the PCB. These backplates act as heatsinks to aid in cooling the GDDR5 memory mounted on both sides of the PCB, and also help reduce temperatures for the GF110 GPUs located at each end of the video card. In our next section, we disassemble the GeForce GTX 590 and inspect the component technology that NVIDIA used to build Gemini...

NVIDIA Gemini Internal Details

On the outside, NVIDIA's GeForce GTX 590 is merely a brick-shaped video card. On the inside, NVIDIA's codename "Gemini" design grabs your attention. The first thing you'll notice are two self-contained GPU coolers, each utilizing a copper vapor chamber and dual-slot heatsink, with an 80mm fan positioned equidistant between them. Beneath these heatsinks is a 12-layer printed circuit board (PCB) to ensure the highest signal integrity, and to help disperse heat more effectively across the PCB. NVIDIA uses two ounces of copper for each of the GTX 590's power and ground PCB layers, enhancing the circuit board's longevity.
Vapor chamber technology is popular in the world of CPU coolers, and NVIDIA has used hollow vapor chambers to provide a robust thermal management system on the GeForce GTX 590. This will bring into question the need for add-in card partners to design their own cooling solution, and challenge them to produce better results. Pictured below is one of the two heatsink assemblies, with the vapor chamber residing inside the copper portion. The polished copper contact surface mates with the GF110 GPU, as the fan drives air past the aluminum fins and out each end of the fan shroud.
With the heatsinks removed, two GF110 processors are exposed. Packed with 512 CUDA cores each, two independent NVIDIA GF110 GPUs are combined by an NVIDIA NF200-P-SLI-A3 chip to deliver 1024 total cores of graphical processing power at 607 MHz. There are six 64-bit memory controllers that offer a 384-bit memory interface per GPU, and feed 3GB of combined GDDR5 video frame buffer. The reference design uses 1Gb (128MB) GDDR5 modules rated for 1250 MHz but clocked to 853.5 MHz, and could potentially include higher-density DRAM modules in the future. Replacing the Samsung 128MB 1250MHz K4G10325FE-HC04 GDDR5 ICs with 256MB parts such as the Samsung 1250MHz K4G20325FC-HC04 or 1500MHz K4G20325FC-HC03 might be possible.

NVIDIA GeForce GTX 590 Exposed PCB

NVIDIA now dedicates hardware circuitry to the task of monitoring power consumption as well as temperature, adjusting performance to protect the graphics card from damage. Circled below are the electronic components responsible for power management on GeForce GTX 5xx series video cards (GTX 580 pictured). NVIDIA has indicated that this circuitry is optional, and that not all AIC partners will include it on their products. Benchmark Reviews uses 3DMark11 to stress both GPUs and produce maximum power usage measurements, although admittedly not many video games exist that might create comparable average power demands.
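To put the memory configuration described above into perspective, here is a rough per-GPU bandwidth calculation. It assumes the 853.5 MHz figure is the GDDR5 command clock (four data transfers per clock); treat the result as a back-of-the-envelope estimate rather than an official specification.

```python
# Rough per-GPU memory bandwidth estimate for the GeForce GTX 590.

mem_clock_mhz = 853.5      # memory clock as described above
transfers_per_clock = 4    # GDDR5 moves four data transfers per command clock
bus_width_bits = 384       # six 64-bit memory controllers per GPU

effective_mts = mem_clock_mhz * transfers_per_clock             # ~3414 MT/s
bandwidth_gbs = effective_mts * 1e6 * bus_width_bits / 8 / 1e9  # bytes/s -> GB/s

print(f"Effective data rate: {effective_mts:.0f} MT/s")
print(f"Per-GPU bandwidth  : {bandwidth_gbs:.1f} GB/s")   # ~163.9 GB/s
```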
To help provide cooling for the PCB and its components, an aluminum baseplate is secured to the top of the board. In addition, two backplates are mounted on the bottom of the board to cool the graphics memory. Combined with a 10-phase digital power controller, the GeForce GTX 590 runs exceptionally cool and quiet. NVIDIA adds one final touch to the GTX 590: a glowing NVIDIA GeForce logo near the power connections.
In the next section, we detail the various Fermi products and lay out their features and specifications before putting them to the test. If you're looking for specific transistor counts or texture unit details, this is the section...

NVIDIA Fermi Features

In today's complex graphics, tessellation offers the means to store massive amounts of coarse geometry with expand-on-demand functionality. In the NVIDIA GF100-series GPU, tessellation also enables more complex animations. In terms of model scalability, dynamic Level of Detail (LOD) allows quality and performance trade-offs, delivering better picture quality whenever it can do so without a performance penalty. Comprised of three layers (original geometry, tessellation geometry, and displacement map), the final product is far more detailed than if it were constructed with bump-map technology. In plain terms, tessellation gives the peaks and valleys with shadow detail in between, while previous-generation technology (bump-mapping) would only give the illusion of detail.
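The sketch below is a minimal, purely illustrative rendition of that idea: a single coarse patch is subdivided on demand, and each generated vertex is offset along the surface normal by a height sampled from a stand-in displacement function. The function and vertex counts are toy examples, not engine code or NVIDIA's hardware implementation.

```python
# Toy illustration of tessellation + displacement mapping: one coarse quad is
# expanded into an N x N grid (tessellation), then each vertex is pushed along
# the surface normal by a sampled height (displacement).

import math

def displacement(u, v):
    """Stand-in for sampling a displacement-map texture at (u, v)."""
    return 0.1 * math.sin(2 * math.pi * u) * math.cos(2 * math.pi * v)

def tessellate_quad(level):
    """Expand a unit quad in the XY plane into (level+1)^2 displaced vertices."""
    normal = (0.0, 0.0, 1.0)            # flat patch, so the normal is constant
    verts = []
    for i in range(level + 1):
        for j in range(level + 1):
            u, v = i / level, j / level  # parametric position on the patch
            h = displacement(u, v)       # sampled height for this vertex
            verts.append((u + h * normal[0],
                          v + h * normal[1],
                          h * normal[2]))
    return verts

coarse = tessellate_quad(1)   # 4 vertices: essentially the original quad
fine = tessellate_quad(16)    # 289 vertices generated from the same compact patch
print(len(coarse), "->", len(fine), "vertices expanded on demand")
```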
Using GPU-based tessellation, a game developer can send a compact geometric representation of an object or character, and the tessellation unit can produce the correct geometric complexity for the specific scene. Consider the "Imp" character illustrated above. On the far left we see the initial quad mesh used to model the general outline of the figure; this representation is quite compact even when compared to typical game assets. The two middle images of the character are created by finely tessellating the description at the left. The result is a very smooth appearance, free of any of the faceting that resulted from limited geometry. Unfortunately this character, while smooth, is no more detailed than the coarse mesh. The final image on the right was created by applying a displacement map to the smoothly tessellated character to its left. Benchmark Reviews also offers more detail in our full-length NVIDIA GF100 GPU Fermi Graphics Architecture guide.

GeForce GTX-Series Products
VGA Testing Methodology

The Microsoft DirectX-11 graphics API is native to the Microsoft Windows 7 Operating System, and will be the primary O/S for our test platform. DX11 is also available as a Microsoft Update for the Windows Vista O/S, so our test results apply to both versions of the Operating System. The majority of benchmark tests used in this article are comparative to DX11 performance; however, some high-demand DX10 tests have also been included.

According to the Steam Hardware Survey published for the month ending September 2010, the most popular gaming resolution is 1280x1024 (17-19" standard LCD monitors). However, because this 1.31MP resolution is considered 'low' by most standards, our benchmark performance tests concentrate on higher-demand resolutions: 1.76MP 1680x1050 (22-24" widescreen LCD) and 2.30MP 1920x1200 (24-28" widescreen LCD monitors). These resolutions are more likely to be used by high-end graphics solutions, such as those tested in this article.

In each benchmark test there is one 'cache run' that is conducted, followed by five recorded test runs. Results are collected at each setting, with the highest and lowest results discarded. The remaining three results are averaged and displayed in the performance charts on the following pages.

A combination of synthetic and video game benchmark tests has been used in this article to illustrate relative performance among graphics solutions. Our benchmark frame rate results are not intended to represent real-world graphics performance, as this experience would change based on supporting hardware and the perception of the individuals playing the video game.
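As a quick illustration of that scoring procedure, the hedged sketch below applies it to a set of made-up frame-rate results; the run values are illustrative only.

```python
# Sketch of the scoring procedure described above: after one warm-up 'cache
# run', five runs are recorded, the highest and lowest are discarded, and the
# remaining three are averaged.

def benchmark_score(recorded_runs):
    """Drop the best and worst of five recorded runs, average the middle three."""
    assert len(recorded_runs) == 5, "methodology calls for five recorded runs"
    trimmed = sorted(recorded_runs)[1:-1]
    return sum(trimmed) / len(trimmed)

runs = [61.8, 63.2, 62.9, 64.0, 62.5]   # example FPS results (illustrative)
print(f"Reported average: {benchmark_score(runs):.1f} FPS")
```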
Intel X58-Express Test System

DirectX-9 Benchmark Applications
DirectX-10 Benchmark Applications
DirectX-11 Benchmark Applications
PCI-E 2.0 Graphics Cards
| Graphics Card | Radeon HD6870 | Radeon HD6970 | GeForce GTX570 | Radeon HD5970 | GeForce GTX580 | GeForce GTX590 | Radeon HD6990 |
|---|---|---|---|---|---|---|---|
| GPU Cores | 1120 | 1536 | 480 | 3200 Total | 512 | 1024 | 3072 Total |
| Core Clock (MHz) | 900 | 880 | 732 | 725 | 772 | 608 | 830/880 |
| Shader Clock (MHz) | N/A | N/A | 1464 | N/A | 1544 | 1215 | N/A |
| Memory Clock (MHz) | 1050 | 1375 | 950 | 1000 | 1002 | 854 | 1250 |
| Memory Amount | 1024MB GDDR5 | 2048MB GDDR5 | 1280MB GDDR5 | 2048MB GDDR5 | 1536MB GDDR5 | 3072MB GDDR5 | 4096MB GDDR5 |
| Memory Interface | 256-bit | 256-bit | 320-bit | 512-bit | 384-bit | 384-bit | 256-bit |
DX10: 3DMark Vantage
3DMark Vantage is a PC benchmark suite designed to test the DirectX-10 graphics card performance. FutureMark 3DMark Vantage is the 2009 addition to the 3DMark benchmark series built by FutureMark corporation. Although 3DMark Vantage requires NVIDIA PhysX to be installed for program operation, only the CPU/Physics test relies on this technology.
3DMark Vantage offers benchmark tests focusing on GPU, CPU, and Physics performance. Benchmark Reviews uses the two GPU-specific tests for grading video card performance: Jane Nash and New Calico. These tests isolate graphical performance, and remove processor dependence from the benchmark results.
- 3DMark Vantage v1.02
- Extreme Settings: (Extreme Quality, 8x Multisample Anti-Aliasing, 16x Anisotropic Filtering, 1:2 Scale)
3DMark Vantage GPU Test: Jane Nash
Of the two GPU tests 3DMark Vantage offers, the Jane Nash performance benchmark is slightly less demanding. In a short video scene the special agent escapes a secret lair by water, nearly losing her shirt in the process. Benchmark Reviews tests this DirectX-10 scene at 1680x1050 and 1920x1200 resolutions, and uses Extreme quality settings with 8x anti-aliasing and 16x anisotropic filtering. The 1:2 scale is utilized, and is the highest this test allows. By maximizing the processing levels of this test, the scene creates the highest level of graphical demand possible and sorts the strong from the weak.
Jane Nash Extreme Quality Settings
3DMark Vantage GPU Test: New Calico
New Calico is the second GPU test in the 3DMark Vantage test suite. Of the two GPU tests, New Calico is the most demanding. In a short video scene featuring a galactic battleground, there is a massive display of busy objects across the screen. Benchmark Reviews tests this DirectX-10 scene at 1680x1050 and 1920x1200 resolutions, and uses Extreme quality settings with 8x anti-aliasing and 16x anisotropic filtering. The 1:2 scale is utilized, and is the highest this test allows. Using the highest graphics processing level available allows our test products to separate themselves and stand out (if possible).
New Calico Extreme Quality Settings
DX10: Crysis Warhead
Crysis Warhead is an expansion pack based on the original Crysis video game. Crysis Warhead is based in the future, where an ancient alien spacecraft has been discovered beneath the Earth on an island east of the Philippines. Crysis Warhead uses a refined version of the CryENGINE2 graphics engine. Like Crysis, Warhead uses the Microsoft Direct3D 10 (DirectX-10) API for graphics rendering.
Benchmark Reviews uses the HOC Crysis Warhead benchmark tool to test and measure graphic performance using the Airfield 1 demo scene. This short test places a high amount of stress on a graphics card because of detailed terrain and textures, but also for the test settings used. Using the DirectX-10 test with Very High Quality settings, the Airfield 1 demo scene receives 4x anti-aliasing and 16x anisotropic filtering to create maximum graphic load and separate the products according to their performance.
Using the highest quality DirectX-10 settings with 4x AA and 16x AF, only the most powerful graphics cards are expected to perform well in our Crysis Warhead benchmark tests. DirectX-11 extensions are not supported in Crysis: Warhead, and SSAO is not an available option.
- Crysis Warhead v1.1 with HOC Benchmark
- Moderate Settings: (Very High Quality, 4x AA, 16x AF, Airfield Demo)
Crysis Warhead Moderate Quality Settings
DX11: 3DMark11
FutureMark 3DMark11 is the latest addition to the 3DMark benchmark series built by Futuremark Corporation. 3DMark11 is a PC benchmark suite designed to test DirectX-11 graphics card performance without vendor preference. Although 3DMark11 includes the unbiased Bullet Open Source Physics Library instead of NVIDIA PhysX for the CPU/Physics tests, Benchmark Reviews concentrates on the four graphics-only tests in 3DMark11 and uses them with the medium-level 'Performance' preset.
The 'Performance' level setting applies 1x multi-sample anti-aliasing and trilinear texture filtering to a 1280x720p resolution. The tessellation detail, when called upon by a test, is preset to level 5, with a maximum tessellation factor of 10. The shadow map size is limited to 5 and the shadow cascade count is set to 4, while the surface shadow sample count is at the maximum value of 16. Ambient occlusion is enabled, and preset to a quality level of 5.
- Futuremark 3DMark11 Professional Edition
- Performance Level Settings: (1280x720, 1x AA, Trilinear Filtering, Tessellation level 5)
3DMark11 'Performance' Level Quality Settings
DX11: Aliens vs Predator
Aliens vs. Predator is a science fiction first-person shooter video game, developed by Rebellion, and published by Sega for Microsoft Windows, Sony PlayStation 3, and Microsoft Xbox 360. Aliens vs. Predator utilizes Rebellion's proprietary Asura game engine, which had previously found its way into Call of Duty: World at War and Rogue Warrior. The self-contained benchmark tool is used for our DirectX-11 tests, which push the Asura game engine to its limit.
In our benchmark tests, Aliens vs. Predator was configured to use the highest quality settings with 4x AA and 16x AF. DirectX-11 features such as Screen Space Ambient Occlusion (SSAO) and tessellation have also been included, along with advanced shadows.
- Aliens vs Predator
- Extreme Settings: (Very High Quality, 4x AA, 16x AF, SSAO, Tessellation, Advanced Shadows)
Aliens vs Predator Extreme Quality Settings
DX11: Battlefield Bad Company 2
The Battlefield franchise has been known to demand a lot from PC graphics hardware. DICE (Digital Illusions CE) has incorporated their Frostbite-1.5 game engine with the Destruction-2.0 feature set in Battlefield: Bad Company 2. Battlefield: Bad Company 2 features destructible environments using Frostbite Destruction-2.0, and adds gravitational bullet drop effects for projectiles shot from weapons at long distance. The Frostbite-1.5 game engine used in Battlefield: Bad Company 2 consists of DirectX-10 primary graphics, with improved performance and softened dynamic shadows added for DirectX-11 users.
At the time Battlefield: Bad Company 2 was published, DICE was also working on the Frostbite-2.0 game engine. This upcoming engine will include native support for DirectX-10.1 and DirectX-11, as well as parallelized processing support for 2-8 parallel threads. This will improve performance for users with an Intel Core-i7 processor. Unfortunately, the Extreme Edition Intel Core i7-980X six-core CPU with twelve threads will not see full utilization.
In our benchmark tests of Battlefield: Bad Company 2, the first three minutes of action in the single-player raft night scene are captured with FRAPS. Relative to the online multiplayer action, these frame rate results are nearly identical to daytime maps with the same video settings. The Frostbite-1.5 game engine in Battlefield: Bad Company 2 appears to equalize our test set of video cards, and despite AMD's sponsorship of the game it still plays well using any brand of graphics card.
- BattleField: Bad Company 2
- Extreme Settings: (Highest Quality, HBAO, 8x AA, 16x AF, 180s Fraps Single-Player Intro Scene)
Battlefield Bad Company 2 Extreme Quality Settings
DX11: BattleForge
BattleForge is a free Massive Multiplayer Online Role Playing Game (MMORPG) developed by EA Phenomic with DirectX-11 graphics capability. Combining strategic cooperative battles, the community of MMO games, and trading card gameplay, BattleForge players are free to put their creatures, spells and buildings into combinations they see fit. These units are represented in the form of digital cards from which you build your own unique army. With minimal resources and a custom tech tree to manage, the gameplay is unbelievably accessible and action-packed.
Benchmark Reviews uses the built-in graphics benchmark to measure performance in BattleForge, using Very High quality settings (detail) and 8x anti-aliasing with auto multi-threading enabled. BattleForge is one of the first titles to take advantage of DirectX-11 in Windows 7, and offers a very robust color range throughout the busy battleground landscape. The charted results illustrate how performance measures-up between video cards when Screen Space Ambient Occlusion (SSAO) is enabled.
- BattleForge v1.2
- Extreme Settings: (Very High Quality, 8x Anti-Aliasing, Auto Multi-Thread)
EDITOR'S NOTE: AMD is aware of performance concerns with BattleForge, and offered us an official response:
"We are aware that there are some abnormal performance results in BattleForge with our new AMD Radeon HD 6900 Series graphics card. Keep in mind this is a new VLIW4 shader architecture and we are still fine tuning the shader compilation. We will be able to post a hotfix for Battleforge shortly that will provide a noticeable increase in performance."
BattleForge Extreme Quality Settings
DX11: Lost Planet 2
Lost Planet 2 is the second installment in the saga of the planet E.D.N. III, ten years after the story of Lost Planet: Extreme Condition. The snow has melted and the lush jungle life of the planet has emerged with angry and luscious flora and fauna. With the new environment comes the addition of DirectX-11 technology to the game.
Lost Planet 2 takes advantage of DX11 features including tessellation and displacement mapping on water, level bosses, and player characters. In addition, soft body compute shaders are used on 'Boss' characters, and wave simulation is performed using DirectCompute. These cutting edge features make for an excellent benchmark for top-of-the-line consumer GPUs.
The Lost Planet 2 benchmark offers two different tests, which serve different purposes. This article uses tests conducted on benchmark B, which is designed to be a deterministic and effective benchmark tool featuring DirectX 11 elements.
- Lost Planet 2 Benchmark 1.0
- Moderate Settings: (2x AA, Low Shadow Detail, High Texture, High Render, High DirectX 11 Features)
Lost Planet 2 Moderate Quality Settings
DX11: Metro 2033
Metro 2033 is an action-oriented video game with a combination of survival horror, and first-person shooter elements. The game is based on the novel Metro 2033 by Russian author Dmitry Glukhovsky. It was developed by 4A Games in Ukraine and released in March 2010 for Microsoft Windows. Metro 2033 uses the 4A game engine, developed by 4A Games. The 4A Engine supports DirectX-9, 10, and 11, along with NVIDIA PhysX and GeForce 3D Vision.
The 4A engine is multi-threaded in such a way that only PhysX has a dedicated thread, and it uses a task model without any pre-conditioning or pre/post-synchronizing, allowing tasks to be done in parallel. The 4A game engine can utilize a deferred shading pipeline, uses tessellation for greater performance, and also has HDR (complete with blue shift), real-time reflections, color correction, film grain and noise, and support for multi-core rendering.
Metro 2033 features superior volumetric fog, double PhysX precision, object blur, sub-surface scattering for skin shaders, parallax mapping on all surfaces, and greater geometric detail with less aggressive LODs. Using PhysX, the engine supports features such as destructible environments, cloth and water simulations, and particles that can be fully affected by environmental factors.
NVIDIA has been diligently working to promote Metro 2033, and for good reason: it's one of the most demanding PC video games we've ever tested. When their flagship GeForce GTX 480 struggles to produce 27 FPS with DirectX-11 anti-aliasing turned down to its lowest setting, you know that only the strongest graphics processors will generate playable frame rates. All of our tests enable Advanced Depth of Field and Tessellation effects, but disable advanced PhysX options.
- Metro 2033
- Moderate Settings: (Very-High Quality, AAA, 16x AF, Advanced DoF, Tessellation, 180s Fraps Chase Scene)
Metro 2033 Moderate Quality Settings
DX11: Unigine Heaven 2.1
The Unigine Heaven 2.1 benchmark is a free, publicly available tool that unleashes DirectX-11 graphics capabilities on Windows 7 or updated Vista Operating Systems. It reveals the enchanting magic of floating islands with a tiny village hidden in the cloudy skies. With the interactive mode, an immersive experience of exploring the intricate world is within reach. Through its advanced renderer, Unigine is one of the first to set a precedent in showcasing art assets with tessellation, bringing compelling visual finesse, utilizing the technology to its full extent, and exhibiting the possibilities of enriching 3D gaming.
The distinguishing feature of the Unigine Heaven benchmark is hardware tessellation: a scalable technology aimed at automatic subdivision of polygons into smaller and finer pieces, so that developers can give their games a more detailed look almost free of charge in terms of performance. Thanks to this procedure, the rendered image approaches the realism of actual visual perception.
Although Heaven-2.1 was recently released and used for our DirectX-11 tests, the benchmark results were extremely close to those obtained with Heaven-1.0 testing. Since only DX11-compliant video cards will properly test on the Heaven benchmark, only those products that meet the requirements have been included.
- Unigine Heaven Benchmark 2.1
- Extreme Settings: (High Quality, Normal Tessellation, 16x AF, 4x AA)
Heaven 2.1 Moderate Quality Settings
NVIDIA Gemini Overclocking
As we've discussed in this article, NVIDIA's codename 'Gemini' project may use two GF110 GPUs from the GeForce GTX 580 series, but it wasn't designed to produce twice the performance. NVIDIA states that the GeForce GTX 590 should produce approximately 1.5x the performance of a single GTX 580 in most applications. That makes sense, considering the GTX 580's GF110 operates at 772 MHz and each of the GPUs in the GTX 590 runs at only 608 MHz. That means two GTX 580's in SLI combine for 1544 MHz against the GTX 590's 1216 MHz - a difference of 328 MHz, or 164 MHz on each GPU. That's a lot of ground to cover with a single overclock.
Gemini uses a 12-layer PCB for maximum signal integrity, so the overclocked GPUs could be more stable. To help disperse heat more effectively across the printed circuit board, two ounces of copper are used for each of the board's power and ground layers, which also helps to enhance the circuit board's longevity. To help provide cooling for the PCB and its components, an aluminum baseplate is secured to the top of the board. In addition, two backplates are mounted on the bottom of the board to cool the graphics memory. NVIDIA also uses a 10-phase digital power controller with over-volting capability on Gemini. So let's see how far we were able to stretch the GPUs on Gemini...
AMD and NVIDIA already stretch their GPUs pretty thin in terms of overclocking head room, but there's a difference between thin and non-existent. In this section, Benchmark Reviews compares stock versus overclocked video card performance on the GeForce GTX 590 with default voltage supplied to the GPUs. Here are the test results:
GPU Overclocking Results
| Test Item | Standard GPU | Overclocked GPU | Improvement |
|---|---|---|---|
| GeForce GTX 590 | 608 MHz | 680 MHz | 72 MHz (11.8%) |
| DX9+SSAO: Mafia II | 88.2 | 91.7 | 3.5 FPS (4.0%) |
| DX10: 3DMark Jane Nash | 54.0 | 57.9 | 3.9 FPS (7.2%) |
| DX10: 3DMark New Calico | 51.2 | 57.0 | 5.8 FPS (11.3%) |
| DX11: 3DMark11 GT1 | 39.0 | 42.7 | 3.7 FPS (9.5%) |
| DX11: 3DMark11 GT2 | 41.0 | 44.9 | 3.9 FPS (9.5%) |
| DX11: 3DMark11 GT3 | 57.3 | 61.9 | 4.6 FPS (8.0%) |
| DX11: 3DMark11 GT4 | 28.3 | 31.5 | 3.2 FPS (11.3%) |
| DX11: Aliens vs Predator | 64.7 | 69.4 | 4.7 FPS (7.3%) |
| DX11: BattleForge | 103.3 | 112.5 | 9.2 FPS (8.9%) |
| DX11: Lost Planet 2 | 64.3 | 66.4 | 2.1 FPS (3.4%) |
| DX11: Metro 2033 | 50.6 | 55.0 | 4.4 FPS (8.7%) |
Overclocking Summary: With a 72 MHz overclock that represents an 11.8% increase in GPU speed, our baseline results indicate an average increase of about 5% in actual frame rate performance at 1920x1200 resolution. This usually amounted to an added 4+ FPS in most games. This isn't a huge performance boost, especially compared to single-GPU overclocks, but every extra frame translates into an advantage over your enemy. Of course, all of our results were gathered using the standard core voltage. Once updated tools are available to better overclock these GPUs with added voltage, there will be additional headroom available for better performance.
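For readers who want to reproduce the percentages in the table above, the short sketch below recomputes each game-test gain from the stock and overclocked frame rates, alongside the 11.8% clock increase; the values are copied straight from the table.

```python
# Recompute the per-test gains from the overclocking table above.

results = {                       # test: (stock FPS, overclocked FPS)
    "Mafia II (DX9+SSAO)":       (88.2, 91.7),
    "3DMark Jane Nash (DX10)":   (54.0, 57.9),
    "3DMark New Calico (DX10)":  (51.2, 57.0),
    "Aliens vs Predator (DX11)": (64.7, 69.4),
    "BattleForge (DX11)":        (103.3, 112.5),
    "Lost Planet 2 (DX11)":      (64.3, 66.4),
    "Metro 2033 (DX11)":         (50.6, 55.0),
}

clock_gain = (680 - 608) / 608 * 100
print(f"GPU clock increase: {clock_gain:.1f}%")          # 11.8%
for name, (stock, oc) in results.items():
    gain = (oc - stock) / stock * 100
    print(f"{name:27s} +{oc - stock:.1f} FPS ({gain:.1f}%)")
```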
GeForce GTX 590 Temperatures
Benchmark tests are always nice, so long as you care about comparing one product to another. But when you're an overclocker, gamer, or merely a PC hardware enthusiast who likes to tweak things on occasion, there's no substitute for good information. Benchmark Reviews has a very popular guide written on Overclocking Video Cards, which gives detailed instructions on how to tweak a graphics card for better performance. Of course, not every video card has overclocking head room. Some products run so hot that they can't suffer any higher temperatures than they already do. This is why we measure the operating temperature of the video card products we test.
To begin testing, I use GPU-Z to measure the temperature at idle as reported by the GPU. Next, I use a modified version of FurMark's "Torture Test" to generate the maximum thermal load, which allows us to record the absolute maximum GPU temperature in high-power 3D mode. The ambient room temperature remained at a stable 20°C throughout testing. FurMark does two things extremely well: it drives the thermal output of any graphics processor much higher than any video game realistically could, and it does so with consistency every time. FurMark works great for testing the stability of a GPU as the temperature rises to the highest possible output. The temperatures discussed below are absolute maximum values, and may not represent real-world temperatures created by the average video game:
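For readers who prefer a scriptable approach, the hedged sketch below polls NVIDIA's nvidia-smi utility to capture the peak temperature while a stress test runs in another window. This is an assumption-level stand-in for the GPU-Z monitoring used in this article, not the tool we used.

```python
# Poll GPU temperature once per second while a stress test (e.g. FurMark)
# runs, and report the peak value. Requires an NVIDIA driver with nvidia-smi.

import subprocess
import time

def gpu_temperatures():
    """Return the current temperature of each NVIDIA GPU, in degrees Celsius."""
    out = subprocess.check_output(
        ["nvidia-smi", "--query-gpu=temperature.gpu",
         "--format=csv,noheader,nounits"],
        text=True)
    return [int(line) for line in out.splitlines() if line.strip()]

peak = 0
for _ in range(600):                      # sample for roughly ten minutes
    peak = max([peak] + gpu_temperatures())
    time.sleep(1)
print(f"Peak GPU temperature during the run: {peak} C")
```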
| Video Card | Idle Temp | Loaded Temp | Loaded Noise | Ambient |
|---|---|---|---|---|
| ATI Radeon HD 5850 | 39°C | 73°C | 7/10 | 20°C |
| NVIDIA GeForce GTX 460 | 26°C | 65°C | 4/10 | 20°C |
| AMD Radeon HD 6850 | 42°C | 77°C | 7/10 | 20°C |
| AMD Radeon HD 6870 | 39°C | 74°C | 6/10 | 20°C |
| ATI Radeon HD 5870 | 33°C | 78°C | 7/10 | 20°C |
| NVIDIA GeForce GTX 560 Ti | 27°C | 78°C | 5/10 | 20°C |
| NVIDIA GeForce GTX 570 | 32°C | 82°C | 7/10 | 20°C |
| ATI Radeon HD 6970 | 35°C | 81°C | 6/10 | 20°C |
| NVIDIA GeForce GTX 580 | 32°C | 70°C | 6/10 | 20°C |
| NVIDIA GeForce GTX 590 | 33°C | 77°C | 6/10 | 20°C |
| AMD Radeon HD 6990 | 40°C | 84°C | 8/10 | 20°C |
Furmark Temperature Chart
NVIDIA surprised us with how cool and quiet their GeForce GTX 580 operated under full load, especially considering that this is their flagship model. AMD's Radeon HD 6970 ran a bit warmer, but still kept fan noise to a moderate level. When Benchmark Reviews tested the Radeon HD 6990 last week, it was disappointing to have their premium-level graphics product produce the loudest noise levels we've experienced in quite a while. So then, it's refreshing to see how well Gemini handles temperatures on the GeForce GTX 590. Obviously the speed reduction from 772 MHz down to 608 MHz on each GPU made a significant difference to the thermal output, thus reducing dependence on a high-RPM fan.
Since each GF110 GPU receives the same vapor chamber heatsink as it gets on the GeForce GTX 580, it makes perfect sense to see manageable temperatures. There is an aluminum baseplate secured to the top of the printed circuit board that helps cool the PCB and its components, and two backplates are mounted on the bottom of the board to passively cool the graphics memory. As an added result of NVIDIA's optional hardware power monitoring circuitry, temperatures on this dual-GPU video card are actually closer to those we've seen from single-GPU units. Measured with a constant 20°C ambient room temperature, idle temperatures were a cool 33°C. After ten minutes of torture testing with a modified version of FurMark, loaded temperatures were a modestly warm 77°C with only slightly audible cooling fan noise - certainly a level the competition should take notice of.
VGA Power Consumption
For power consumption tests, Benchmark Reviews utilizes an 80-PLUS GOLD certified OCZ Z-Series Gold 850W PSU, model OCZZ850. This power supply unit has been tested to provide over 90% typical efficiency by Chroma System Solutions. To measure isolated video card power consumption, Benchmark Reviews uses the Kill-A-Watt EZ (model P4460) power meter made by P3 International. In this particular test, all power consumption results were verified with a second power meter for accuracy.
A baseline measurement is taken without any video card installed on our test computer system, which is allowed to boot into Windows 7 and rest idle at the login screen before power consumption is recorded. Once the baseline reading has been taken, the graphics card is installed and the system is again booted into Windows and left idle at the login screen before taking the idle reading. Our final loaded power consumption reading is taken with the video card running a stress test using graphics test #1 on 3DMark11. Below is a chart with the isolated video card power consumption (system without video card minus measured total) displayed in Watts for each specified test product:
| VGA Product Description (sorted by combined total power) | Idle Power | Loaded Power |
|---|---|---|
| NVIDIA GeForce GTX 480 SLI Set | 82 W | 655 W |
| NVIDIA GeForce GTX 590 Reference Design | 53 W | 396 W |
| ATI Radeon HD 4870 X2 Reference Design | 100 W | 320 W |
| AMD Radeon HD 6990 Reference Design | 46 W | 350 W |
| NVIDIA GeForce GTX 295 Reference Design | 74 W | 302 W |
| ASUS GeForce GTX 480 Reference Design | 39 W | 315 W |
| ATI Radeon HD 5970 Reference Design | 48 W | 299 W |
| NVIDIA GeForce GTX 690 Reference Design | 25 W | 321 W |
| ATI Radeon HD 4850 CrossFireX Set | 123 W | 210 W |
| ATI Radeon HD 4890 Reference Design | 65 W | 268 W |
| AMD Radeon HD 7970 Reference Design | 21 W | 311 W |
| NVIDIA GeForce GTX 470 Reference Design | 42 W | 278 W |
| NVIDIA GeForce GTX 580 Reference Design | 31 W | 246 W |
| NVIDIA GeForce GTX 570 Reference Design | 31 W | 241 W |
| ATI Radeon HD 5870 Reference Design | 25 W | 240 W |
| ATI Radeon HD 6970 Reference Design | 24 W | 233 W |
| NVIDIA GeForce GTX 465 Reference Design | 36 W | 219 W |
| NVIDIA GeForce GTX 680 Reference Design | 14 W | 243 W |
| Sapphire Radeon HD 4850 X2 11139-00-40R | 73 W | 180 W |
| NVIDIA GeForce 9800 GX2 Reference Design | 85 W | 186 W |
| NVIDIA GeForce GTX 780 Reference Design | 10 W | 275 W |
| NVIDIA GeForce GTX 770 Reference Design | 9 W | 256 W |
| NVIDIA GeForce GTX 280 Reference Design | 35 W | 225 W |
| NVIDIA GeForce GTX 260 (216) Reference Design | 42 W | 203 W |
| ATI Radeon HD 4870 Reference Design | 58 W | 166 W |
| NVIDIA GeForce GTX 560 Ti Reference Design | 17 W | 199 W |
| NVIDIA GeForce GTX 460 Reference Design | 18 W | 167 W |
| AMD Radeon HD 6870 Reference Design | 20 W | 162 W |
| NVIDIA GeForce GTX 670 Reference Design | 14 W | 167 W |
| ATI Radeon HD 5850 Reference Design | 24 W | 157 W |
| NVIDIA GeForce GTX 650 Ti BOOST Reference Design | 8 W | 164 W |
| AMD Radeon HD 6850 Reference Design | 20 W | 139 W |
| NVIDIA GeForce 8800 GT Reference Design | 31 W | 133 W |
| ATI Radeon HD 4770 RV740 GDDR5 Reference Design | 37 W | 120 W |
| ATI Radeon HD 5770 Reference Design | 16 W | 122 W |
| NVIDIA GeForce GTS 450 Reference Design | 22 W | 115 W |
| NVIDIA GeForce GTX 650 Ti Reference Design | 12 W | 112 W |
| ATI Radeon HD 4670 Reference Design | 9 W | 70 W |
In the previous section we discovered how well the new and improved NVIDIA cooling solution managed temperatures for a pair of Fermi GF110 GPUs on the GeForce GTX 590 video card. In terms of power consumption, the results were generally similar in scale. Keeping in mind that Gemini houses two independent high-performance graphics processors, it's expected that the graphics card will require significant power even with the use of a 10-phase advanced digital power controller. The GeForce GTX 590 requires two 8-pin PCI-E power connections for proper operation, and will not display a picture on the screen unless proper power has been supplied. NVIDIA recommends a 700W power supply unit for stable operation, which should include the two required 8-pin PCI-E connections without using any adapters. The power consumption statistics discussed below are absolute maximum values, and may not represent real-world power consumption created by the average video game:
Resting at idle with no GPU load, the GeForce GTX 590 consumed 53W - or roughly 26W per GPU by our measure. Compensating for a small margin of error, this level of power consumption is only 7W more than the opposing Radeon HD 6990 requires at idle. This also roughly matches idle power draw from the older ATI Radeon HD 5970 video card, while being lower than many of the older-generation single-GPU solutions. Once 3D applications begin to demand power from the twin Fermi GPUs, electrical power consumption really climbs. Measured at full throttle using the 3DMark11 benchmark suite (GT1), the GeForce GTX 590 topped out at 396W maximum power draw, which is 31W higher than NVIDIA's stated maximum TDP of 365W. Our measurements represent absolute maximum limits, since most real-world applications and video games do not demand 100% GPU load.
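The arithmetic behind those figures is simple to reproduce. The sketch below applies the baseline-subtraction method from our methodology (isolated card draw equals metered system draw minus the no-card baseline) and splits the GTX 590's idle draw across its two GPUs; the baseline value used here is a hypothetical example, not a measurement from this article.

```python
# Reproduce the simple arithmetic behind the power figures above.

def isolated_draw(system_with_card_w, baseline_w):
    """Isolated video card consumption = metered system draw - no-card baseline."""
    return system_with_card_w - baseline_w

idle_card_w = 53      # GTX 590 isolated idle draw reported above
load_card_w = 396     # GTX 590 isolated peak draw under 3DMark11 GT1
rated_tdp_w = 365     # NVIDIA's stated TDP

example_baseline_w = 120                            # hypothetical no-card system draw
example_total_w = example_baseline_w + idle_card_w  # what the meter would read
print(f"Isolated idle draw       : {isolated_draw(example_total_w, example_baseline_w)} W")
print(f"Approx. idle draw per GPU: {idle_card_w / 2:.1f} W")        # ~26.5 W
print(f"Peak draw over rated TDP : {load_card_w - rated_tdp_w} W")  # 31 W
```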
GeForce GTX 590 Conclusion
IMPORTANT: Although the rating and final score mentioned in this conclusion are made to be as objective as possible, please be advised that every author perceives these factors differently at various points in time. While we each do our best to ensure that all aspects of the product are considered, there are often times unforeseen market conditions and manufacturer changes which occur after publication that could render our rating obsolete. Please do not base any purchase solely on our conclusion, as it represents our product rating specifically for the product tested which may differ from future versions. Benchmark Reviews begins our conclusion with a short summary for each of the areas that we rate.
NVIDIA designed the GeForce GTX 590 to be the best graphics card available on the market, and they contend that it's a better total solution than the AMD Radeon HD 6990. I will discuss graphics performance in a moment, but first let's look at the other factors that come into play. NVIDIA's Gemini graphics card consumes a few more watts of power at idle than the Radeon HD 6990, and under load the GTX 590 consumes 46W more than its competitor. Yet, as a direct result of superior cooling efficiency, less heat byproduct is produced by the GTX 590 video card when matched against the Radeon HD 6990. Fan noise from the cooling unit offers the largest contrast we've found between these two products: the GeForce GTX 590 operated quietly under full load, while the Radeon HD 6990 was significantly louder. Let's not forget that the GTX 590 is a full inch shorter, and can fit in more computer cases. Comparing these two products on overall size, heat output, and operational noise, the evidence all points back in favor of NVIDIA's GeForce GTX 590 being the better product.
The closest competition the GeForce GTX 590 has in terms of single-card graphics performance is the AMD Radeon HD 6990, or two GeForce GTX 570's paired together into an SLI set. We've included a pair of AMD Radeon HD 6870's joined in a CrossFireX set, just to illustrate other options. Although NVIDIA has previously informed us that the GTX 590 performs at roughly 1.5x the level of a single GTX 580 rather than matching two GTX 580's in SLI, we've added them to the results of our tests as well. Now on to the graphics performance results... which take some attention to fully appreciate.
After running benchmarks on each video card through fourteen different tests, the results occasionally placed one product ahead of the other, and then vice versa. Beginning with DirectX-9 graphics performance in Mafia II, with all of the settings turned up high and played with SSAO enabled and PhysX turned off, the GeForce GTX 590 produced an impressive lead over the Radeon HD 6990 but couldn't quite match GeForce GTX 570 SLI performance levels. Call of Duty: Black Ops was tweaked to use the absolute highest quality settings possible, and yet still had extremely fluid video performance during action-packed multiplayer maps for both products.
In the more modern DirectX-10 game tests, Crysis Warhead kept the GTX 590 even with the Radeon HD 6990 and a few frames ahead of the GTX 570 SLI set. 3DMark Vantage used high-end DirectX-10 settings to place all three contenders approximately equal in the Jane Nash test, but both GeForce products pulled past the Radeon HD 6990 in the New Calico test. Battlefield: Bad Company 2 used 8x anti-aliasing and 16x anisotropic filtering to produce far superior frame rates on the GTX 590 compared to the Radeon HD 6990, though it slightly trailed the pair of GTX 570's in SLI.
In the DirectX 11 tests, Futuremark's 3DMark11 benchmark suite strained our high-end graphics cards with only mid-level settings displayed at 720p, yet the HD6990 generally matched up well to the GeForce GTX 590 as well as both GTX 570's in SLI. Aliens vs Predator pushed the Radeon HD 6990 to produce considerably higher average framerates than the GTX 590, while also surpassing the GeForce GTX 570 SLI set. Lost Planet 2 played well at 2x AA, allowing the GeForce GTX 590 to pass the 570 SLI set and leap beyond Radeon HD 6990 performance capabilities. Metro 2033 is a demanding game even when played with high-end graphics, but the Radeon HD 6990 edged past both the GTX 590 and GTX 570 SLI set. Unigine Heaven positioned the Radeon HD 6990 well ahead of the GeForce GTX 590, and only slightly ahead of the GTX 570 SLI pair.
My tally of these results has the GTX 590 ahead in five tests, equal in five, and trailing in five. Based on how the GeForce GTX 590 and Radeon HD 6990 swap paint in most tests or go tit-for-tat in others, graphics performance is roughly equal between these two cards in my book. Compared against the GTX 570 SLI set, the benchmark scores give the SLI set a lead in seven tests, even in two, and trailing in three. If it's a battle between the GeForce GTX 590 and Radeon HD 6990 with all things considered (performance, heat, and noise), most would agree that the GTX 590 is the better choice. For those looking to match graphics frame rate performance at the expense of all the previously mentioned items, plus installation space, an SLI set of GeForce GTX 570's will also work well.
Each GPU on the GeForce GTX 590 offers two graphics adapters, which are doubled to four with the NF200-P-SLI-A3 chip. Three dual-link DVI ports and a mini-DisplayPort output really open up visual functionality, allowing the GTX 590 to power four displays at once. DL-DVI #1 and DL-DVI #2 are routed from the first GF110 GPU, while DL-DVI #3 and the mini-DP are routed from the second GPU. It's great to see NVIDIA finally include a DisplayPort option, which enables display expansion as the technology catches up with consumers. Gamers will likely take advantage of triple-display surround, or even 3D Vision Surround for those of us who want the most out of our NVIDIA 3D Vision kit.
GeForce GTX 590 uses 40nm NVIDIA GF110 GPUs identical to those in the flagship GTX 580 model, and with the added thermal management system they've worked perfectly in Gemini's dual-GPU package. The constant move towards building with a smaller die process is rather insignificant in the grand scheme of things, as was proved when the NVIDIA GeForce GTX 280 successfully launched at 65nm instead of the expected 55nm process. Taiwan Semiconductor Manufacturing Company (TSMC) is already building 32nm processors for other clientele, but just not to the level needed to create GPUs.
Appearance is a much more subjective matter, especially since this particular rating doesn't have any quantitative benchmark scores to fall back on. NVIDIA's GeForce GTX series has used a fairly recognizable design for the past year, and with the exception of angular corners the GTX 590 looks very similar to the recently launched GTX 580 and 570 models. Gemini's relatively compact size helps this dual-GPU video card to do what the Radeon HD 6990 could not: fit two processors into a card the size of products designed with only one GPU. Some add-in card partners may offer their own unique designs by incorporating an improved cooling solution, but most will simply dress up the original design with colorful fan shroud graphics.
Value is a fast-moving target, and please believe me when I say that it changes by the minute in this industry. The premium-priced GeForce GTX 590 "Gemini" graphics card demonstrates NVIDIA's ability to innovate the graphics segment while leading their market. As of launch day, 23 March 2011, the GeForce GTX 590 has been assigned a $699 MSRP. In terms of value, the GeForce GTX 590 costs roughly the same as AMD's Radeon HD 6990. If you compare one card's value to another based solely on video frame rates, identical pricing fools you into thinking these cards offer approximately the same value. Just remember that only one of these video cards can offer multi-display 3D gaming, standard form-factor installation, and PhysX technology.
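If you do want to reduce value to a single figure, a quick cost-per-frame calculation makes the point about identical pricing; in this hedged sketch the $699 MSRP comes from the paragraph above, while the average framerates are placeholder assumptions rather than measured results.

# Cost-per-frame comparison; fps values are assumptions for illustration.
cards = {
    "GeForce GTX 590": {"price_usd": 699, "avg_fps": 75.0},  # fps is an assumption
    "Radeon HD 6990":  {"price_usd": 699, "avg_fps": 75.0},  # fps is an assumption
}

for name, spec in cards.items():
    dollars_per_fps = spec["price_usd"] / spec["avg_fps"]
    print(f"{name}: ${dollars_per_fps:.2f} per average frame per second")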
When I reviewed the AMD Radeon HD 6990 for the launch event two weeks ago, I genuinely liked the card's ability to produce unmatched performance using the sheer strength of two top-end GPUs. NVIDIA answered back with a product just as powerful, but refined so many of Gemini's smaller details that the scales now tip in their favor. Depending on your collection of games and settings, graphics performance is fairly even between the GeForce GTX 590 and Radeon HD 6990. But unfortunately for the Radeon HD 6990, modern graphics cards are capable of a lot more than simply producing frame rates. Consumers are looking at supplemental features, such as stereoscopic 3D functionality, graphical enhancements, affordable multi-display possibilities, broad software support, and stable drivers. NVIDIA 3D Vision, APEX PhysX, The Way It's Meant to be Played developer support, surround support with inexpensive DVI monitors, and Forceware drivers all deliver these things. AMD's solutions are either not widely supported (DisplayPort), unpopular (AMD HD3D), or lack affordable integration (Eyefinity).
GeForce GTX 590 is the ultimate enthusiast graphics card intended for affluent top-end gamers. It may only match the competition's solution in terms of frame rate performance, but it also operates at lower temperatures, and does so very quietly. For elite-level gamers and hardware enthusiasts the GeForce GTX 590 represents the best you can buy, and delivers on its price point. Of course, putting together a GeForce GTX 570 SLI set is still an option, but it will consume more power and dissipate additional heat. Value-seeking gamers looking to match this performance on the cheap could purchase one GeForce GTX 570 now and save to add a second unit later. You'll take up more room inside the computer case, and a multi-card setup could require a new power supply unit, but it's possible so long as you're willing to make concessions. If you can afford the asking price, the GeForce GTX 590 'Gemini' graphics card delivers the best total package that money can buy.
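For anyone weighing the GTX 570 SLI route against a single GTX 590, a rough power-budget sketch like the one below helps frame the power supply question; the board-power figures, base system draw, and headroom margin are nominal assumptions used for illustration, not measurements from this review.

# Rough power-budget estimate; all wattage figures below are assumptions.
SYSTEM_BASE_WATTS = 250      # assumed draw for CPU, motherboard, drives
PSU_HEADROOM = 0.80          # assume loading the PSU to no more than 80%

configs = {
    "GeForce GTX 590":     365,       # nominal board power (assumption)
    "GeForce GTX 570 SLI": 219 * 2,   # nominal board power per card (assumption)
}

for name, gpu_watts in configs.items():
    total = SYSTEM_BASE_WATTS + gpu_watts
    recommended_psu = total / PSU_HEADROOM
    print(f"{name}: ~{total} W estimated load, "
          f"~{recommended_psu:.0f} W PSU recommended")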
Do you agree with my assessment of the GeForce GTX 590 video card? Leave comments below, or ask questions in our Forum.
Pros:
+ Best total package for DX11 video games
+ Short profile fits into standard size computer cases
+ One card drives four displays or 3D Vision Surround
+ Fermi GPUs enable 3D Vision and PhysX functionality
+ Cooling fan operates at very quiet acoustic levels
+ Includes DisplayPort connectivity for future display tech
+ Supports quad-SLI for unmatched potential
Cons:
- Extremely expensive enthusiast product
- Heated exhaust vents out to computer case
- Does not include HDMI output for HDTVs
Comments
But I will wait until aftermarket coolers for the 6990 appear.
Anyway, I appreciate great competition like this.
If the results show low power use in FurMark for the 590, you know the card is activating its limiter, and therefore the temp/noise readings are not realistic.
Simply put, the power/temp/fan noise measurements should be done on the games as you benched them; it's all too easy to program a driver to fool one application into making the card look and sound good.
Thanks
Actually, the hardware sits in a big box for retests and other projects. After a year or so, they become gifts or sold for what little they're still worth.
Hey- I never did see any of your work from PAX! Hope it was fun, and worth the visit.
They really need to put 2GB or even 3GB of memory on those 580 cards.
I'm pretty sure that's not right.
So what's up next week?
When you look at the things that are not biased towards either product, i.e. the Heaven benchmark, that's where you see the real difference in the cards. All this end talk about features is complete garbage. You can say the same for ATI: their card and 3D Blu-ray is far superior to the NVIDIA solution. Eyefinity may be a novel feature, but it does work. Something NVIDIA has yet to even come close to matching. Heck, this is the first card with 3-monitor support. Also, the DisplayPort is a moot point because ATI is now including a DisplayPort adapter that eliminates the super high cost of this solution; among other reasons, you could state that this is not a fad, it will eventually be the standard. If you're going to skew your review because you like NVIDIA, just do it at the beginning. It's getting extremely old seeing the bait and switch from both sides.
Some of your remarks are a relief to me, because they show how uninformed you truly are. According to you, features like PhysX, 3D Vision, and CUDA support don't matter. I suggest you start asking around, because like it or not those 'garbage' features help sell these products.
By the way, those 'biased' NVIDIA games that always score the highest... games like BattleForge and Battlefield Bad Company 2... those are AMD co-developed games, not NVIDIA.
Then Toros jumped on this forum and blogged his ATI fanboy mumbo jumbo. But you know what, that's OK; everyone needs a place to lay their head at night. If he had read the whole review he would have seen the facts. Fact 1: the 6990 has its pros and cons. Fact 2: the NVIDIA and ATI cards are pretty much neck and neck. Fact 3: the 590 won 5, tied 5, and (barely) lost 5. I don't know how anyone could have taken this review as biased towards NVIDIA. I am an NVIDIA fan, but I own a 6990 and two 580 GTXs; they are both powerful................ nevermind, I know if Toros is reading this it won't make sense, so I will save my time.
Interesting read, though. I am anxious to see how the cards compare at their overclocked, overvolted limits on the reference cooler.
For all I know next year could have big stuff for processors.
The rest of you fanboys should really stop, honestly. In the case of the 6990 and 590, what is there really to say that one card is superior to the other? I don't find 50W less power, 5°C lower temperatures, or a card that runs ever so slightly quieter than the other to be good ways of saying my card beats yours. There is no real difference in pure power. So I must agree with Colin above and say they are both virtually the same exact cards.
I just have to stick with purchasing Nvidia however for my 3D setup, no biggie.
I was wondering if the 6990 still has the aggressive AF optimizations which result in texture shimmering. I am referring to the AF optimization discovered earlier and brought up in an NVIDIA blog.
I really hate texture shimmering; that is a deciding factor for me. If I have to switch settings around in Catalyst in order to get rid of shimmering, the benchmark results would not indicate the actual performance I'll get from the 6990.
Thanks in advance.
/index.php?option=com_content&task=view&id=12845&Itemid=47
Usually shimmering (if it's present) can easily be seen in Half-Life 2 in road or railroad parts of the map which extend long into the horizon.
I remember having horrible shimmering in HL2 back when I had 7800GTX.
In the Unigine night scene you mention, can you get rid of shimmering by setting Catalyst AI to high or turning it off altogether?
I suppose in the same Unigine scene, the GTX 590 is not showing any shimmering?
Thanks for the reply btw.
##tweakpc.de/hardware/tests/grafikkarten/amd_radeon_hd_6870_hd_6850/s09.php
Can you get rid of shimmering by fiddling with Cat AI ? Does the GTX 590 show the same amount of shimmering as well? In HL2 particularly, since it can easily be detected when moving the camera.
Neither AMD nor NVidia are charities. You want a bleeding edge product, prepare to bleed the cash.
Yes, NVIDIA does include HDMI with their cards; I can only assume it is up to AIB partners whether it will be mini-HDMI or a full-size port.