NVIDIA GeForce GTX 590 Gemini Video Card
Reviews - Featured Reviews: Video Cards
Written by Olin Coles   
Thursday, 24 March 2011

NVIDIA GeForce GTX 590 Gemini Review

Manufacturer: NVIDIA
Product Name: GeForce GTX 590
Suggested Retail Price: $699.99 MSRP

Full Disclosure: The product sample used in this article has been provided by NVIDIA.

Poised to compete against AMD's Radeon HD 6990, NVIDIA launches their own dual-Fermi GF110-based GeForce GTX 590 video card.

March madness indeed. Just last week AMD made news with their dual-Cayman GPU Radeon HD 6990 video card, and days later NVIDIA is returning with their own competitor. Designed around their flagship GeForce GTX 580 video card, they've combined two titanic graphics processors into a package roughly the same size, and still manage to produce the quietest dual-GPU video card ever made. With 512 CUDA cores each, two independent NVIDIA GF110 GPUs join to deliver 1024 total cores of graphical processing power. There are six 64-bit memory controllers that offer 384-bit combined bandwidth per GPU, and feed 3GB of combined GDDR5 video frame buffer. All of this is said to deliver comparable performance to the Radeon HD 6990, which is good for gamers, but it does so while running cooler and emitting less noise.

When it comes to computer hardware there's something for everyone, and this rings especially true for graphics cards. If you're on a tight budget but still like to point and shoot your way through levels, there are plenty of affordable entry-level products that can satisfy your needs. But if you're an enthusiast gamer who demands only the highest level of performance that far surpasses mainstream standards, the graphics industry continues to cherish your business. Fierce competition between NVIDIA and AMD has allowed PC gamers to enjoy the best graphics hardware ever developed for desktop computers. NVIDIA has worked hard to earn their reputation as the industry leader in desktop graphics, and the codename 'Gemini' graphics card is proof of that dedication. In this article, Benchmark Reviews tests the GeForce GTX 590 against the AMD Radeon HD 6990 and an entire market of top-end desktop graphics solutions.

In what could be hailed as Fermi's final chapter, NVIDIA continues to update their product family by adding the GeForce GTX 590 to the ranks. The GeForce GTX 590 is intended to achieve the best performance possible while remaining power-efficient and quiet during heavy operation. Only 11.0-inches long, the GeForce GTX 590 is capable of installing into standard ATX computer cases where fitting video cards such as the AMD Radeon HD 6990 would be impossible. Additionally, NVIDIA has invested more research into vapor chamber technology, and developed a cooling solution that tames temperatures for two GF110 GPUs using only one fan. This enables hardcore gamers to configure two GTX 590's into quad-SLI, presuming the motherboard and power supply support the requirements. All of this adds up to more potential performance for gamers, and some serious enthusiast credibility for overclockers.

NVIDIA-GeForce-GTX-590-Ammo-Can.jpg

Something happened to the Fermi architecture between the time it premiered as GF100 and when it began to really turn heads as GF104: the ratio of shaders and texture units was perfected. The original GF100 GPU placed too much emphasis on tessellation, and not enough on overall graphical performance. As a result of finding the right balance, the GF104 graphics processor on the GeForce GTX 460 became an overnight sensation for gamers. Now evolved into the GF110 GPU, all 512 cores understand their purpose and propel the GeForce GTX 580 to a level only rivaled by the competition's best and most-expensive dual-GPU Radeon HD 5970 graphics card. Trimmed down to a modest 244 watts of power consumption under load, the GTX 580 outperforms its predecessor in both power efficiency and graphical performance.

NVIDIA targets the GeForce GTX 590 at the premium upper-end segment willing to spend $700 on discrete graphics, which admittedly includes only the most affluent gamers. To best illustrate the GTX 590's performance, we use the most demanding PC video game titles and benchmark software available. Graphical frame rate performance is tested against a large collection of competing desktop products, such as the Radeon HD 6990 and various SLI/CrossFire configurations. Using the DirectX-9 API that is native to Windows XP, we've compared graphics performance using Mafia II. Some older DirectX-10 favorites such as Crysis Warhead and 3DMark Vantage are included, as well as newer DirectX-11 titles such as Aliens vs Predator, Battlefield: Bad Company 2, BattleForge, Lost Planet 2, Metro 2033, and Tom Clancy's HAWX 2, along with the Unigine Heaven 2.1 and 3DMark11 benchmarks.

GeForce GTX 590 Basic Details

From a distance, the NVIDIA GeForce GTX 590 looks a lot like the GTX 580 or 570. Unless you get close enough to notice the details, they appear to be about the same size overall. The outer dimensions give this 1.5" tall double-bay, 3.9" wide, 11.0" long graphics card a similar profile, but the GTX 590 is actually slightly longer than the GTX 580 (10.5") and much shorter than a Radeon HD 6990 (12.0" long). NVIDIA's add-in card partners may incorporate their own cooling solution on the GTX 590, but most brands have adopted the reference design dressed with decals.

NVIDIA-GeForce-GTX-590-Video-Card-Top.jpg

A center-mounted 80mm fan uses a deep-chamfered depression to draw cool air into the angled fan shroud, best illustrated in the image below. The GeForce GTX 590 keeps fresh air moving into the unit, which passes through heatsinks located at opposite ends. This design, paired with a fan that extends out slightly beyond the surface of the shroud, allows more air to reach the intake whenever two or more video cards are combined in close-proximity SLI configurations. In terms of SLI configurations, the GeForce GTX 590 supports a dual-card quad-SLI set; a triple-card hexa-SLI configuration is not possible.

NVIDIA-GeForce-GTX-590-Angle.jpg

If you consider that the GeForce GTX 580 requires a 6-pin and an 8-pin power connection to maintain its 244W TDP, it seems incredible that NVIDIA could fit two of these processors onto one PCB and make them functional with two 8-pin PCI-E power connections for a 365W TDP. Similar to the GTX 480 shroud design, the GeForce GTX 590 uses vents near the header panel and SLI tab. Despite the lower operating temperatures, special consideration for heat must be given in overclocked computer systems, since multiple GPUs inside the computer case will raise the temperatures the CPU has to contend with.

NVIDIA-GeForce-GTX-590-Side.jpg

The reference design offers three simultaneously functional dual-link DVI (DL-DVI) connections and a mini-DisplayPort output on the GeForce GTX 590. Add-in partners may elect to remove or possibly further extend any of these video interfaces, but most will likely retain the original engineering. Since three dual-link DVI digital outputs are included on the GTX 590, only one of these video cards is necessary to drive triple-display NVIDIA 3D-Vision Surround functionality. All of these video interfaces consume exhaust-vent real estate, so most of the heated air will be expelled back into the computer case.

NVIDIA-GeForce-GTX-590-IO-Panel.jpg

NVIDIA designed the GeForce GTX 590 for 365 watts Thermal Design Power (TDP), and suggests at least a 700W power supply unit. It would be ideal for system builders to use a PSU with two 8-pin PCI-E power connections, rather than potentially overloading another rail by using an adapter. Powering the twin GF110 GPUs is a 10-phase advanced digital power controller with over-volting capability, while two dual-phase controllers provide power for the circuit board's GDDR5 memories. We examine power consumption later on in this article, using 3DMark11 to represent real-world loads.

NVIDIA-GeForce-GTX-590-Exposed-PCB-Back.jpg

On the backside of the GeForce GTX 590 video card are two aluminum plates fastened securely to the PCB. These backplates act as heatsinks to aid in cooling the GDDR5 memory mounted on both sides of the PCB, and also help reduce temperatures for the GF110 GPUs located at each end of the video card.

In our next section, we disassemble the GeForce GTX 590 and inspect the component technology that NVIDIA used to build Gemini...

NVIDIA Gemini Internal Details

On the outside, NVIDIA's GeForce GTX 590 is merely a brick-shaped video card. On the inside, NVIDIA's codename "Gemini" design grabs your attention. The first things you'll notice are the two self-contained GPU coolers, each utilizing a copper vapor chamber and dual-slot heatsink, with an 80mm fan positioned equidistant between them. Beneath these heatsinks is a 12-layer printed circuit board (PCB) to ensure the highest signal integrity, and to help disperse heat more effectively across the PCB. NVIDIA uses two ounces of copper for each of the GTX 590's power and ground PCB layers, enhancing the circuit board's longevity.

NVIDIA-GeForce-GTX-590-Exposed-PCB-Top.jpg

Vapor chambers are popular in the world of CPU coolers, and NVIDIA has used this hollow-chamber technology to provide a robust thermal management system on the GeForce GTX 590. This will bring into question the need for add-in card partners to design their own cooling solutions, and challenge them to produce better results. Pictured below is one of the two heatsink assemblies, with the vapor chamber residing inside the copper portion. The polished copper contact surface mates with the GF110 GPU, as the fan drives air past the aluminum fins and out each end of the fan shroud.

NVIDIA-GeForce-GTX-590-Heatsinks.jpg

With the heatsinks removed, two GF110 processors are exposed. Packed with 512 CUDA cores each, the two independent NVIDIA GF110 GPUs are joined by an NVIDIA NF200-P-SLI-A3 bridge chip to deliver 1024 total cores of graphical processing power at 607 MHz. There are six 64-bit memory controllers that offer 384-bit combined bandwidth per GPU, and feed 3GB of combined GDDR5 video frame buffer. The reference design clocks its GDDR5 memory at 854 MHz (3414 MHz data rate), but higher-density DRAM modules could potentially be used in the future: replacing the Samsung 128MB 1250MHz K4G10325FE-HC04 GDDR5 ICs with 256MB parts such as the Samsung 1250MHz K4G20325FC-HC04 or 1500MHz K4G20325FC-HC03 might be possible.

NVIDIA-GeForce-GTX-590-Exposed-PCB.jpg

NVIDIA GeForce GTX 590 Exposed PCB

NVIDIA now dedicates hardware circuitry to the task of monitoring power consumption as well as temperature, adjusting performance to protect the graphics card from damage. Circled below are the electronic components responsible for power management on GeForce GTX 5xx series video cards (GTX 580 pictured). NVIDIA has indicated that this circuitry is optional, and that not all AIC partners will include it on their products. Benchmark Reviews uses 3DMark11 to stress both GPUs and produce maximum power usage measurements, although admittedly not many video games exist that might create comparable average power demands.

NVIDIA-GeForce-GTX580-Power-Monitoring-Hardware.jpg

To help provide cooling for the PCB and its components, an aluminum baseplate is secured to the top of the board. In addition, two backplates are mounted on the bottom of the board to cool the graphics memory. Combined with a 10-phase digital power controller, the GeForce GTX 590 runs exceptionally cool and quiet. NVIDIA adds one final touch to the GTX 590: a glowing NVIDIA GeForce logo near the power connections.

NVIDIA-GeForce-GTX-590-Installedl.jpg

In the next section, we detail the various Fermi products and lay out their features and specifications before putting them to test. If you're looking for specific transistor counts or texture unit detail, this is the section...

NVIDIA Fermi Features

In today's complex graphics, tessellation offers the means to store massive amounts of coarse geometry, with expand-on-demand functionality. In the NVIDIA GF100-series GPU, tessellation also enables more complex animations. In terms of model scalability, dynamic Level of Detail (LOD) allows for quality and performance trade-offs whenever it can deliver better picture quality over performance without penalty. Comprised of three layers (original geometry, tessellation geometry, and displacement map), the final product is far more detailed in shade and data-expansion than if it were constructed with bump-map technology. In plain terms, tessellation gives the peaks and valleys with shadow detail in-between, while previous-generation technology (bump-mapping) would give the illusion of detail.

id-imp-tessellated-character.jpg

Using GPU-based tessellation, a game developer can send a compact geometric representation of an object or character and the tessellation unit can produce the correct geometric complexity for the specific scene. Consider the "Imp" character illustrated above. On the far left we see the initial quad mesh used to model the general outline of the figure; this representation is quite compact even when compared to typical game assets. The two middle images of the character are created by finely tessellating the description at the left. The result is a very smooth appearance, free of any of the faceting that resulted from limited geometry. Unfortunately this character, while smooth, is no more detailed than the coarse mesh. The final image on the right was created by applying a displacement map to the smoothly tessellated character shown third from the left.

Benchmark Reviews also offers more detail in our full-length NVIDIA GF100 GPU Fermi Graphics Architecture guide.

GeForce GTX-Series Products

Graphics Card | GeForce GTX 550 Ti | GeForce GTX 460 | GeForce GTX 560 Ti | GeForce GTX 570 | GeForce GTX 580 | GeForce GTX 590
GPU Transistors | 1.17 Billion | 1.95 Billion | 1.95 Billion | 3.0 Billion | 3.0 Billion | 6.0 Billion Total
Graphics Processing Clusters | 1 | 2 | 2 | 4 | 4 | 8 Total
Streaming Multiprocessors | 4 | 7 | 8 | 15 | 16 | 32 Total
CUDA Cores | 192 | 336 | 384 | 480 | 512 | 1024 Total
Texture Units | 32 | 56 | 64 | 60 | 64 | 128 Total
ROP Units | 24 | 768MB=24 / 1GB=32 | 32 | 40 | 48 | 96 Total
Graphics Clock (Fixed Function Units) | 900 MHz | 675 MHz | 822 MHz | 732 MHz | 772 MHz | 607 MHz
Processor Clock (CUDA Cores) | 1800 MHz | 1350 MHz | 1644 MHz | 1464 MHz | 1544 MHz | 1215 MHz
Memory Clock (Clock Rate/Data Rate) | 1025/4100 MHz | 900/3600 MHz | 1001/4008 MHz | 950/3800 MHz | 1002/4008 MHz | 854/3414 MHz
Total Video Memory | 1024MB GDDR5 | 768MB / 1024MB GDDR5 | 1024MB GDDR5 | 1280MB GDDR5 | 1536MB GDDR5 | 3072MB GDDR5
Memory Interface | 192-Bit | 768MB=192-Bit / 1GB=256-Bit | 256-Bit | 320-Bit | 384-Bit | 384-Bit
Total Memory Bandwidth | 98.4 GB/s | 86.4 / 115.2 GB/s | 128.3 GB/s | 152.0 GB/s | 192.4 GB/s | 327.7 GB/s
Texture Filtering Rate (Bilinear) | 28.8 GigaTexels/s | 37.8 GigaTexels/s | 52.6 GigaTexels/s | 43.9 GigaTexels/s | 49.4 GigaTexels/s | 77.7 GigaTexels/s
GPU Fabrication Process | 40 nm | 40 nm | 40 nm | 40 nm | 40 nm | 40 nm
Output Connections | 2x Dual-Link DVI-I, 1x Mini HDMI | 2x Dual-Link DVI-I, 1x Mini HDMI | 2x Dual-Link DVI-I, 1x Mini HDMI | 2x Dual-Link DVI-I, 1x Mini HDMI | 2x Dual-Link DVI-I, 1x Mini HDMI | 3x Dual-Link DVI-I, 1x Mini DisplayPort
Form Factor | Dual-Slot | Dual-Slot | Dual-Slot | Dual-Slot | Dual-Slot | Dual-Slot
Power Input | 6-Pin | 2x 6-Pin | 2x 6-Pin | 2x 6-Pin | 6-Pin + 8-Pin | 2x 8-Pin
Thermal Design Power (TDP) | 116 Watts | 768MB=150W / 1GB=160W | 170 Watts | 219 Watts | 244 Watts | 365 Watts
Recommended PSU | 400 Watts | 450 Watts | 500 Watts | 550 Watts | 600 Watts | 700 Watts
GPU Thermal Threshold | 100°C | 104°C | 100°C | 97°C | 97°C | 97°C

GeForce Fermi Chart Courtesy of Benchmark Reviews
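The bandwidth and fill-rate rows in the chart above are derived values rather than independent measurements. As a quick sanity check, here is a minimal Python sketch (our own illustration, not an NVIDIA tool) that reproduces the GeForce GTX 590 figures from its per-GPU bus width, memory data rate, texture unit count, and graphics clock:

    # Reproduce the derived rows of the chart above for the GeForce GTX 590.
    def memory_bandwidth_gbs(bus_width_bits, data_rate_mhz):
        """Peak memory bandwidth in GB/s: bus width in bytes x effective data rate."""
        return (bus_width_bits / 8) * data_rate_mhz / 1000

    def texture_fillrate_gtexels(texture_units, graphics_clock_mhz):
        """Bilinear texture fill rate in GigaTexels/s."""
        return texture_units * graphics_clock_mhz / 1000

    # GTX 590: two GPUs, each with a 384-bit bus at 3414 MHz data rate,
    # and 64 texture units running at the 607 MHz graphics clock.
    per_gpu_bw = memory_bandwidth_gbs(384, 3414)        # ~163.9 GB/s per GPU
    total_bw   = 2 * per_gpu_bw                         # ~327.7 GB/s combined
    fill_rate  = texture_fillrate_gtexels(2 * 64, 607)  # ~77.7 GigaTexels/s combined

    print(f"Memory bandwidth: {total_bw:.1f} GB/s, fill rate: {fill_rate:.1f} GTexels/s")

The same two formulas reproduce the single-GPU entries in the chart as well.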

VGA Testing Methodology

The Microsoft DirectX-11 graphics API is native to the Microsoft Windows 7 Operating System, and will be the primary O/S for our test platform. DX11 is also available as a Microsoft Update for the Windows Vista O/S, so our test results apply to both versions of the Operating System. The majority of benchmark tests used in this article are comparative to DX11 performance, however some high-demand DX10 tests have also been included.

According to the Steam Hardware Survey published for the month ending September 2010, the most popular gaming resolution is 1280x1024 (17-19" standard LCD monitors). However, because this 1.31MP resolution is considered 'low' by most standards, our benchmark performance tests concentrate on higher-demand resolutions: 1.76MP 1680x1050 (22-24" widescreen LCD) and 2.30MP 1920x1200 (24-28" widescreen LCD monitors). These resolutions are more likely to be used by high-end graphics solutions, such as those tested in this article.

In each benchmark test there is one 'cache run' that is conducted, followed by five recorded test runs. Results are collected at each setting with the highest and lowest results discarded. The remaining three results are averaged, and displayed in the performance charts on the following pages.
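Expressed as a quick sketch (our own illustration of the arithmetic, not the lab's actual test scripts), the averaging method works like this:

    # One cache run is discarded, five runs are recorded, the highest and lowest
    # recorded results are dropped, and the remaining three are averaged.
    def trimmed_average(recorded_runs):
        assert len(recorded_runs) == 5, "methodology calls for five recorded runs"
        kept = sorted(recorded_runs)[1:-1]  # discard the lowest and highest result
        return sum(kept) / len(kept)

    # Hypothetical frame-rate results for one card at one setting:
    print(round(trimmed_average([61.8, 63.2, 62.9, 60.4, 63.0]), 1))  # 62.6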

A combination of synthetic and video game benchmark tests have been used in this article to illustrate relative performance among graphics solutions. Our benchmark frame rate results are not intended to represent real-world graphics performance, as this experience would change based on supporting hardware and the perception of individuals playing the video game.

Intel X58-Express Test System

NVIDIA-GeForce-GTX-590-GPUZ.gif

DirectX-9 Benchmark Applications

  • Mafia II
    • Extreme Settings: (Antialiasing, 16x AF, High Shadow Quality, High Detail, High Geometry, Ambient Occlusion)

DirectX-10 Benchmark Applications

  • 3DMark Vantage v1.02
    • Extreme Settings: (Extreme Quality, 8x Multisample Anti-Aliasing, 16x Anisotropic Filtering, 1:2 Scale)
  • Crysis Warhead v1.1 with HOC Benchmark
    • Moderate Settings: (Very High Quality, 4x AA, 16x AF, Airfield Demo)

DirectX-11 Benchmark Applications

  • Futuremark 3DMark11 Professional Edition
    • Performance Level Settings: (1280x720, 1x AA, Trilinear Filtering, Tessellation level 5)
  • Aliens vs Predator Benchmark 1.0
    • Extreme Settings: (Very High Quality, 4x AA, 16x AF, SSAO, Tessellation, Advanced Shadows)
  • BattleField: Bad Company 2
    • Extreme Settings: (Highest Quality, HBAO, 8x AA, 16x AF, 180s Fraps Single-Player Intro Scene)
  • BattleForge v1.2
    • Extreme Settings: (Very High Quality, 8x Anti-Aliasing, Auto Multi-Thread)
  • Lost Planet 2 Benchmark 1.0
    • Moderate Settings: (2x AA, Low Shadow Detail, High Texture, High Render, High DirectX 11 Features)
  • Metro 2033
    • Moderate Settings: (Very-High Quality, AAA, 16x AF, Advanced DoF, Tessellation, 180s Fraps Chase Scene)
  • Tom Clancy's HAWX 2 Benchmark 1.0.4
    • Extreme Settings: (Maximum Quality, 8x AA, 16x AF, DX11 Terrain Tessellation)
  • Unigine Heaven Benchmark 2.1
    • Moderate Settings: (High Quality, Normal Tessellation, 16x AF, 4x AA)

PCI-E 2.0 Graphics Cards

Graphics Card Radeon HD6870 Radeon HD6970 GeForce GTX570 Radeon HD5970 GeForce GTX580 GeForce GTX590 Radeon HD6990
GPU Cores 1120 1536 480 3200 Total 512 1024 3072 Total
Core Clock (MHz) 900 880 732 725 772 608 830/880
Shader Clock (MHz) N/A N/A 1464 N/A 1544 1215 N/A
Memory Clock (MHz) 1050 1375 950 1000 1002 854 1250
Memory Amount 1024MB GDDR5 2048MB GDDR5 1280MB GDDR5 2048MB GDDR5 1536MB GDDR5 3072MB GDDR5 4096MB GDDR5
Memory Interface 256-bit 256-bit 320-bit 512-bit 384-bit 384-bit 256-bit

DX9+SSAO: Mafia II

Mafia II is a single-player third-person action shooter developed by 2K Czech for 2K Games, and is the sequel to Mafia: The City of Lost Heaven released in 2002. Players assume the life of World War II veteran Vito Scaletta, the son of a small Sicilian family that immigrates to Empire Bay. Growing up in the slums of Empire Bay teaches Vito about crime, and he's forced to join the Army in lieu of jail time. After sustaining wounds in the war, Vito returns home and quickly finds trouble as he again partners with his childhood friend and accomplice Joe Barbaro. Vito and Joe combine their passion for fame and riches to take on the city, and work their way to the top in Mafia II.

Mafia II is an SSAO-enabled PC video game built on 2K Czech's proprietary Illusion game engine, which succeeds the LS3D game engine used in Mafia: The City of Lost Heaven. In our Mafia-II Video Game Performance article, Benchmark Reviews explored characters and gameplay while illustrating how well this game delivers APEX PhysX features on both AMD and NVIDIA products. Thanks to APEX PhysX extensions that can be processed by the system's CPU, Mafia II offers gamers equal access to high-detail physics regardless of video card manufacturer.

  • Mafia II
    • Extreme Settings: (Antialiasing, 16x AF, High Shadow Quality, High Detail, High Geometry, Ambient Occlusion)

Mafia2_DX11_Benchmark.jpg

Mafia II Extreme Quality Settings

Graphics Card Radeon HD6870 Radeon HD6970 GeForce GTX570 Radeon HD5970 GeForce GTX580 GeForce GTX590 Radeon HD6990
GPU Cores 1120 1536 480 3200 Total 512 1024 3072 Total
Core Clock (MHz) 900 880 732 725 772 608 830/880
Shader Clock (MHz) N/A N/A 1464 N/A 1544 1215 N/A
Memory Clock (MHz) 1050 1375 950 1000 1002 854 1250
Memory Amount 1024MB GDDR5 2048MB GDDR5 1280MB GDDR5 2048MB GDDR5 1536MB GDDR5 3072MB GDDR5 4096MB GDDR5
Memory Interface 256-bit 256-bit 320-bit 512-bit 384-bit 384-bit 256-bit

DX10: 3DMark Vantage

3DMark Vantage is a PC benchmark suite designed to test the DirectX-10 graphics card performance. FutureMark 3DMark Vantage is the 2009 addition to the 3DMark benchmark series built by FutureMark corporation. Although 3DMark Vantage requires NVIDIA PhysX to be installed for program operation, only the CPU/Physics test relies on this technology.

3DMark Vantage offers benchmark tests focusing on GPU, CPU, and Physics performance. Benchmark Reviews uses the two GPU-specific tests for grading video card performance: Jane Nash and New Calico. These tests isolate graphical performance, and remove processor dependence from the benchmark results.

  • 3DMark Vantage v1.02
    • Extreme Settings: (Extreme Quality, 8x Multisample Anti-Aliasing, 16x Anisotropic Filtering, 1:2 Scale)

3DMark Vantage GPU Test: Jane Nash

Of the two GPU tests 3DMark Vantage offers, the Jane Nash performance benchmark is slightly less demanding. In a short video scene the special agent escapes a secret lair by water, nearly losing her shirt in the process. Benchmark Reviews tests this DirectX-10 scene at 1680x1050 and 1920x1200 resolutions, and uses Extreme quality settings with 8x anti-aliasing and 16x anisotropic filtering. The 1:2 scale is utilized, and is the highest this test allows. By maximizing the processing levels of this test, the scene creates the highest level of graphical demand possible and sorts the strong from the weak.

3dMark_Vantage_Jane_Nash_Benchmark.jpg

Jane Nash Extreme Quality Settings

3DMark Vantage GPU Test: New Calico

New Calico is the second GPU test in the 3DMark Vantage test suite. Of the two GPU tests, New Calico is the most demanding. In a short video scene featuring a galactic battleground, there is a massive display of busy objects across the screen. Benchmark Reviews tests this DirectX-10 scene at 1680x1050 and 1920x1200 resolutions, and uses Extreme quality settings with 8x anti-aliasing and 16x anisotropic filtering. The 1:2 scale is utilized, and is the highest this test allows. Using the highest graphics processing level available allows our test products to separate themselves and stand out (if possible).

3dMark_Vantage_New_Calico_Benchmark.jpg

New Calico Extreme Quality Settings

Graphics Card Radeon HD6870 Radeon HD6970 GeForce GTX570 Radeon HD5970 GeForce GTX580 GeForce GTX590 Radeon HD6990
GPU Cores 1120 1536 480 3200 Total 512 1024 3072 Total
Core Clock (MHz) 900 880 732 725 772 608 830/880
Shader Clock (MHz) N/A N/A 1464 N/A 1544 1215 N/A
Memory Clock (MHz) 1050 1375 950 1000 1002 854 1250
Memory Amount 1024MB GDDR5 2048MB GDDR5 1280MB GDDR5 2048MB GDDR5 1536MB GDDR5 3072MB GDDR5 4096MB GDDR5
Memory Interface 256-bit 256-bit 320-bit 512-bit 384-bit 384-bit 256-bit

DX10: Crysis Warhead

Crysis Warhead is an expansion pack based on the original Crysis video game. Crysis Warhead is based in the future, where an ancient alien spacecraft has been discovered beneath the Earth on an island east of the Philippines. Crysis Warhead uses a refined version of the CryENGINE2 graphics engine. Like Crysis, Warhead uses the Microsoft Direct3D 10 (DirectX-10) API for graphics rendering.

Benchmark Reviews uses the HOC Crysis Warhead benchmark tool to test and measure graphic performance using the Airfield 1 demo scene. This short test places a high amount of stress on a graphics card because of detailed terrain and textures, but also for the test settings used. Using the DirectX-10 test with Very High Quality settings, the Airfield 1 demo scene receives 4x anti-aliasing and 16x anisotropic filtering to create maximum graphic load and separate the products according to their performance.

Using the highest quality DirectX-10 settings with 4x AA and 16x AF, only the most powerful graphics cards are expected to perform well in our Crysis Warhead benchmark tests. DirectX-11 extensions are not supported in Crysis: Warhead, and SSAO is not an available option.

  • Crysis Warhead v1.1 with HOC Benchmark
    • Moderate Settings: (Very High Quality, 4x AA, 16x AF, Airfield Demo)

Crysis_Warhead_Benchmark.jpg

Crysis Warhead Moderate Quality Settings

Graphics Card Radeon HD6870 Radeon HD6970 GeForce GTX570 Radeon HD5970 GeForce GTX580 GeForce GTX590 Radeon HD6990
GPU Cores 1120 1536 480 3200 Total 512 1024 3072 Total
Core Clock (MHz) 900 880 732 725 772 608 830/880
Shader Clock (MHz) N/A N/A 1464 N/A 1544 1215 N/A
Memory Clock (MHz) 1050 1375 950 1000 1002 854 1250
Memory Amount 1024MB GDDR5 2048MB GDDR5 1280MB GDDR5 2048MB GDDR5 1536MB GDDR5 3072MB GDDR5 4096MB GDDR5
Memory Interface 256-bit 256-bit 320-bit 512-bit 384-bit 384-bit 256-bit

DX11: 3DMark11

FutureMark 3DMark11 is the latest addition to the 3DMark benchmark series built by Futuremark Corporation. 3DMark11 is a PC benchmark suite designed to test DirectX-11 graphics card performance without vendor preference. Although 3DMark11 includes the unbiased Bullet Open Source Physics Library instead of NVIDIA PhysX for the CPU/Physics tests, Benchmark Reviews concentrates on the four graphics-only tests in 3DMark11 and uses them with the medium-level 'Performance' preset.

The 'Performance' level setting applies 1x multi-sample anti-aliasing and trilinear texture filtering to a 1280x720p resolution. The tessellation detail, when called upon by a test, is preset to level 5, with a maximum tessellation factor of 10. The shadow map size is limited to 5 and the shadow cascade count is set to 4, while the surface shadow sample count is at the maximum value of 16. Ambient occlusion is enabled, and preset to a quality level of 5.

3DMark11-Performance-Test-Settings.png

  • Futuremark 3DMark11 Professional Edition
    • Performance Level Settings: (1280x720, 1x AA, Trilinear Filtering, Tessellation level 5)

3dMark2011_Performance_GT1-2_Benchmark.jpg
3dMark2011_Performance_GT3-4_Benchmark.jpg

3DMark11 'Performance' Level Quality Settings

Graphics Card Radeon HD6870 Radeon HD6970 GeForce GTX570 Radeon HD5970 GeForce GTX580 GeForce GTX590 Radeon HD6990
GPU Cores 1120 1536 480 3200 Total 512 1024 3072 Total
Core Clock (MHz) 900 880 732 725 772 608 830/880
Shader Clock (MHz) N/A N/A 1464 N/A 1544 1215 N/A
Memory Clock (MHz) 1050 1375 950 1000 1002 854 1250
Memory Amount 1024MB GDDR5 2048MB GDDR5 1280MB GDDR5 2048MB GDDR5 1536MB GDDR5 3072MB GDDR5 4096MB GDDR5
Memory Interface 256-bit 256-bit 320-bit 512-bit 384-bit 384-bit 256-bit

DX11: Aliens vs Predator

Aliens vs. Predator is a science fiction first-person shooter video game, developed by Rebellion, and published by Sega for Microsoft Windows, Sony PlayStation 3, and Microsoft Xbox 360. Aliens vs. Predator utilizes Rebellion's proprietary Asura game engine, which had previously found its way into Call of Duty: World at War and Rogue Warrior. The self-contained benchmark tool is used for our DirectX-11 tests, which push the Asura game engine to its limit.

In our benchmark tests, Aliens vs. Predator was configured to use the highest quality settings with 4x AA and 16x AF. DirectX-11 features such as Screen Space Ambient Occlusion (SSAO) and tessellation have also been included, along with advanced shadows.

  • Aliens vs Predator
    • Extreme Settings: (Very High Quality, 4x AA, 16x AF, SSAO, Tessellation, Advanced Shadows)

Aliens-vs-Predator_DX11_Benchmark.jpg

Aliens vs Predator Extreme Quality Settings

Graphics Card Radeon HD6870 Radeon HD6970 GeForce GTX570 Radeon HD5970 GeForce GTX580 GeForce GTX590 Radeon HD6990
GPU Cores 1120 1536 480 3200 Total 512 1024 3072 Total
Core Clock (MHz) 900 880 732 725 772 608 830/880
Shader Clock (MHz) N/A N/A 1464 N/A 1544 1215 N/A
Memory Clock (MHz) 1050 1375 950 1000 1002 854 1250
Memory Amount 1024MB GDDR5 2048MB GDDR5 1280MB GDDR5 2048MB GDDR5 1536MB GDDR5 3072MB GDDR5 4096MB GDDR5
Memory Interface 256-bit 256-bit 320-bit 512-bit 384-bit 384-bit 256-bit

DX11: Battlefield Bad Company 2

The Battlefield franchise has been known to demand a lot from PC graphics hardware. DICE (Digital Illusions CE) has incorporated their Frostbite-1.5 game engine with the Destruction-2.0 feature set into Battlefield: Bad Company 2. Battlefield: Bad Company 2 features destructible environments using Frostbite Destruction-2.0, and adds gravitational bullet drop effects for projectiles shot from weapons at a long distance. The Frostbite-1.5 game engine used in Battlefield: Bad Company 2 consists of DirectX-10 primary graphics, with improved performance and softened dynamic shadows added for DirectX-11 users.

At the time Battlefield: Bad Company 2 was published, DICE was also working on the Frostbite-2.0 game engine. This upcoming engine will include native support for DirectX-10.1 and DirectX-11, as well as parallelized processing support for 2-8 parallel threads. This will improve performance for users with an Intel Core-i7 processor. Unfortunately, the Extreme Edition Intel Core i7-980X six-core CPU with twelve threads will not see full utilization.

In our benchmark tests of Battlefield: Bad Company 2, the first three minutes of action in the single-player raft night scene are captured with FRAPS. Relative to the online multiplayer action, these frame rate results are nearly identical to daytime maps with the same video settings. The Frostbite-1.5 game engine in Battlefield: Bad Company 2 appears to equalize our test set of video cards, and despite AMD's sponsorship of the game it still plays well using any brand of graphics card.

  • BattleField: Bad Company 2
    • Extreme Settings: (Highest Quality, HBAO, 8x AA, 16x AF, 180s Fraps Single-Player Intro Scene)

Battlefield-Bad-Company-2_Benchmark.jpg

Battlefield Bad Company 2 Extreme Quality Settings

Graphics Card Radeon HD6870 Radeon HD6970 GeForce GTX570 Radeon HD5970 GeForce GTX580 GeForce GTX590 Radeon HD6990
GPU Cores 1120 1536 480 3200 Total 512 1024 3072 Total
Core Clock (MHz) 900 880 732 725 772 608 830/880
Shader Clock (MHz) N/A N/A 1464 N/A 1544 1215 N/A
Memory Clock (MHz) 1050 1375 950 1000 1002 854 1250
Memory Amount 1024MB GDDR5 2048MB GDDR5 1280MB GDDR5 2048MB GDDR5 1536MB GDDR5 3072MB GDDR5 4096MB GDDR5
Memory Interface 256-bit 256-bit 320-bit 512-bit 384-bit 384-bit 256-bit

DX11: BattleForge

BattleForge is a free Massively Multiplayer Online Role Playing Game (MMORPG) developed by EA Phenomic with DirectX-11 graphics capability. Combining strategic cooperative battles, the community of MMO games, and trading card gameplay, BattleForge players are free to put their creatures, spells and buildings into whatever combinations they see fit. These units are represented in the form of digital cards from which you build your own unique army. With minimal resources and a custom tech tree to manage, the gameplay is unbelievably accessible and action-packed.

Benchmark Reviews uses the built-in graphics benchmark to measure performance in BattleForge, using Very High quality settings (detail) and 8x anti-aliasing with auto multi-threading enabled. BattleForge is one of the first titles to take advantage of DirectX-11 in Windows 7, and offers a very robust color range throughout the busy battleground landscape. The charted results illustrate how performance measures-up between video cards when Screen Space Ambient Occlusion (SSAO) is enabled.

  • BattleForge v1.2
    • Extreme Settings: (Very High Quality, 8x Anti-Aliasing, Auto Multi-Thread)

EDITOR'S NOTE: AMD is aware of performance concerns with BattleForge, and offered us an official response:

"We are aware that there are some abnormal performance results in BattleForge with our new AMD Radeon HD 6900 Series graphics card. Keep in mind this is a new VLIW4 shader architecture and we are still fine tuning the shader compilation. We will be able to post a hotfix for Battleforge shortly that will provide a noticeable increase in performance."

BattleForge_DX11_Benchmark.jpg

BattleForge Extreme Quality Settings

Graphics Card Radeon HD6870 Radeon HD6970 GeForce GTX570 Radeon HD5970 GeForce GTX580 GeForce GTX590 Radeon HD6990
GPU Cores 1120 1536 480 3200 Total 512 1024 3072 Total
Core Clock (MHz) 900 880 732 725 772 608 830/880
Shader Clock (MHz) N/A N/A 1464 N/A 1544 1215 N/A
Memory Clock (MHz) 1050 1375 950 1000 1002 854 1250
Memory Amount 1024MB GDDR5 2048MB GDDR5 1280MB GDDR5 2048MB GDDR5 1536MB GDDR5 3072MB GDDR5 4096MB GDDR5
Memory Interface 256-bit 256-bit 320-bit 512-bit 384-bit 384-bit 256-bit

DX11: Lost Planet 2

Lost Planet 2 is the second installment in the saga of the planet E.D.N. III, ten years after the story of Lost Planet: Extreme Condition. The snow has melted and the lush jungle life of the planet has emerged with angry and luscious flora and fauna. With the new environment comes the addition of DirectX-11 technology to the game.

Lost Planet 2 takes advantage of DX11 features including tessellation and displacement mapping on water, level bosses, and player characters. In addition, soft body compute shaders are used on 'Boss' characters, and wave simulation is performed using DirectCompute. These cutting edge features make for an excellent benchmark for top-of-the-line consumer GPUs.

The Lost Planet 2 benchmark offers two different tests, which serve different purposes. This article uses tests conducted on benchmark B, which is designed to be a deterministic and effective benchmark tool featuring DirectX 11 elements.

  • Lost Planet 2 Benchmark 1.0
    • Moderate Settings: (2x AA, Low Shadow Detail, High Texture, High Render, High DirectX 11 Features)

Lost-Planet-2_DX11_Benchmark.jpg

Lost Planet 2 Moderate Quality Settings

Graphics Card Radeon HD6870 Radeon HD6970 GeForce GTX570 Radeon HD5970 GeForce GTX580 GeForce GTX590 Radeon HD6990
GPU Cores 1120 1536 480 3200 Total 512 1024 3072 Total
Core Clock (MHz) 900 880 732 725 772 608 830/880
Shader Clock (MHz) N/A N/A 1464 N/A 1544 1215 N/A
Memory Clock (MHz) 1050 1375 950 1000 1002 854 1250
Memory Amount 1024MB GDDR5 2048MB GDDR5 1280MB GDDR5 2048MB GDDR5 1536MB GDDR5 3072MB GDDR5 4096MB GDDR5
Memory Interface 256-bit 256-bit 320-bit 512-bit 384-bit 384-bit 256-bit

DX11: Metro 2033

Metro 2033 is an action-oriented video game with a combination of survival horror, and first-person shooter elements. The game is based on the novel Metro 2033 by Russian author Dmitry Glukhovsky. It was developed by 4A Games in Ukraine and released in March 2010 for Microsoft Windows. Metro 2033 uses the 4A game engine, developed by 4A Games. The 4A Engine supports DirectX-9, 10, and 11, along with NVIDIA PhysX and GeForce 3D Vision.

The 4A engine is multi-threaded such that only PhysX has a dedicated thread, and uses a task model without any pre-conditioning or pre/post-synchronizing, allowing tasks to be done in parallel. The 4A game engine can utilize a deferred shading pipeline, uses tessellation for greater performance, and also has HDR (complete with blue shift), real-time reflections, color correction, film grain and noise; the engine also supports multi-core rendering.

Metro 2033 features superior volumetric fog, double PhysX precision, object blur, sub-surface scattering for skin shaders, parallax mapping on all surfaces, and greater geometric detail with less aggressive LODs. Using PhysX, the engine supports features such as destructible environments, cloth and water simulation, and particles that can be fully affected by environmental factors.

NVIDIA has been diligently working to promote Metro 2033, and for good reason: it's one of the most demanding PC video games we've ever tested. When their flagship GeForce GTX 480 struggles to produce 27 FPS with DirectX-11 anti-aliasing turned down to its lowest setting, you know that only the strongest graphics processors will generate playable frame rates. All of our tests enable Advanced Depth of Field and Tessellation effects, but disable advanced PhysX options.

  • Metro 2033
    • Moderate Settings: (Very-High Quality, AAA, 16x AF, Advanced DoF, Tessellation, 180s Fraps Chase Scene)

Metro-2033_DX11_Benchmark.jpg

Metro 2033 Moderate Quality Settings

Graphics Card Radeon HD6870 Radeon HD6970 GeForce GTX570 Radeon HD5970 GeForce GTX580 GeForce GTX590 Radeon HD6990
GPU Cores 1120 1536 480 3200 Total 512 1024 3072 Total
Core Clock (MHz) 900 880 732 725 772 608 830/880
Shader Clock (MHz) N/A N/A 1464 N/A 1544 1215 N/A
Memory Clock (MHz) 1050 1375 950 1000 1002 854 1250
Memory Amount 1024MB GDDR5 2048MB GDDR5 1280MB GDDR5 2048MB GDDR5 1536MB GDDR5 3072MB GDDR5 4096MB GDDR5
Memory Interface 256-bit 256-bit 320-bit 512-bit 384-bit 384-bit 256-bit

DX11: Unigine Heaven 2.1

The Unigine Heaven 2.1 benchmark is a free publicly available tool that grants the power to unleash the graphics capabilities in DirectX-11 for Windows 7 or updated Vista Operating Systems. It reveals the enchanting magic of floating islands with a tiny village hidden in the cloudy skies. With the interactive mode, the emerging experience of exploring this intricate world is within reach. Through its advanced renderer, Unigine is one of the first to set a precedent in showcasing art assets with tessellation, bringing compelling visual finesse, utilizing the technology to its full extent and exhibiting the possibilities of enriching 3D gaming.

The distinguishing feature of the Unigine Heaven benchmark is hardware tessellation, a scalable technology aimed at automatic subdivision of polygons into smaller and finer pieces, so that developers can give their games a more detailed look almost free of charge in terms of performance. Thanks to this procedure, the elaboration of the rendered image finally approaches the boundary of veridical visual perception: a virtual reality conjured by your hand.

Although Heaven-2.1 was recently released and used for our DirectX-11 tests, the benchmark results were extremely close to those obtained with Heaven-1.0 testing. Since only DX11-compliant video cards will properly test on the Heaven benchmark, only those products that meet the requirements have been included.

  • Unigine Heaven Benchmark 2.1
    • Moderate Settings: (High Quality, Normal Tessellation, 16x AF, 4x AA)

Unigine_Heaven_DX11_Benchmark.jpg

Heaven 2.1 Moderate Quality Settings

Graphics Card Radeon HD6870 Radeon HD6970 GeForce GTX570 Radeon HD5970 GeForce GTX580 GeForce GTX590 Radeon HD6990
GPU Cores 1120 1536 480 3200 Total 512 1024 3072 Total
Core Clock (MHz) 900 880 732 725 772 608 830/880
Shader Clock (MHz) N/A N/A 1464 N/A 1544 1215 N/A
Memory Clock (MHz) 1050 1375 950 1000 1002 854 1250
Memory Amount 1024MB GDDR5 2048MB GDDR5 1280MB GDDR5 2048MB GDDR5 1536MB GDDR5 3072MB GDDR5 4096MB GDDR5
Memory Interface 256-bit 256-bit 320-bit 512-bit 384-bit 384-bit 256-bit

NVIDIA Gemini Overclocking

As we've discussed in this article, NVIDIA's codename 'Gemini' project may use two GF110 GPUs from the GeForce GTX 580 series, but it wasn't designed to produce twice the performance. NVIDIA states that the GeForce GTX 590 should produce approximately 1.5x the performance of a single GTX 580 in most applications. That makes sense, considering the GTX 580's GF110 operates at 772 MHz while each of the GPUs in the GTX 590 runs at only 608 MHz. That means two GTX 580's in SLI combine for 1544 MHz against the GTX 590's 1216 MHz - a difference of 328 MHz, or 164 MHz on each GPU. That's a lot of ground to cover with a single overclock.
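That clock arithmetic is simple enough to verify in a few lines (our own back-of-the-envelope check, using the clocks quoted above):

    # Combined core-clock comparison: GTX 580 SLI versus the dual-GPU GTX 590.
    gtx580_clock, gtx590_clock, gpu_count = 772, 608, 2

    sli_combined    = gpu_count * gtx580_clock         # 1544 MHz
    gemini_combined = gpu_count * gtx590_clock         # 1216 MHz
    deficit_total   = sli_combined - gemini_combined   # 328 MHz
    deficit_per_gpu = deficit_total // gpu_count       # 164 MHz per GPU

    print(deficit_total, deficit_per_gpu)  # 328 164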

Gemini uses a 12-layer PCB for maximum signal integrity, so the overclocked GPUs could be more stable. To help disperse heat more effectively across the printed circuit board, two ounces of copper are used for each of the board's power and ground layers. This also helps to enhance the circuit board's longevity. To help provide cooling for the PCB and its components, an aluminum baseplate is secured to the top of the board. In addition, two backplates are mounted on the bottom of the board to cool the graphics memory. NVIDIA also uses a 10-phase digital power controller with over-volting capability on Gemini. So let's see how far we were able to stretch the GPUs on Gemini...

NVIDIA-GeForce-GTX-590-Angle-Dark.jpg

AMD and NVIDIA already stretch their GPUs pretty thin in terms of overclocking headroom, but there's a difference between thin and non-existent. In this section, Benchmark Reviews compares stock versus overclocked video card performance on the GeForce GTX 590 with default voltage supplied to the GPUs. Here are the test results:

GPU Overclocking Results

Test Item Standard GPU Overclocked GPU Improvement
GeForce GTX 590 608 MHz 680 MHz 72 MHz (11.8%)
DX9+SSAO: Mafia II 88.2 91.7 3.5 FPS (4.0%)
DX10: 3dMark Jane Nash 54.0 57.9 3.9 FPS (7.2%)
DX10: 3dMark Calico 51.2 57.0 5.8 FPS (11.3%)
DX11: 3dMark11 GT1 39.0 42.7 3.7 FPS (9.5%)
DX11: 3dMark11 GT2 41.0 44.9 3.9 FPS (9.5%)
DX11: 3dMark11 GT3 57.3 61.9 4.6 FPS (8.0%)
DX11: 3dMark11 GT4 28.3 31.5 3.2 FPS (11.3%)
DX11: Aliens vs Predator 64.7 69.4 4.7 FPS (7.3%)
DX11: BattleForge 103.3 112.5 9.2 FPS (8.9%)
DX11: Lost Planet 2 64.3 66.4 2.1 FPS (3.4%)
DX11: Metro 2033 50.6 55.0 4.4 FPS (8.7%)

Overclocking Summary: With a 72 MHz overclock that represents an 11.8% increase in GPU speed, our baseline results indicate an average increase of about 5% in actual frame rate performance at 1920x1200 resolution. This usually amounted to an added 4+ FPS in most games. This isn't a huge performance boost, especially compared to single-GPU overclocks, but every extra frame translates into an advantage over your enemy. Of course, all of our results were gathered using the standard core voltage. Once updated tools are available to overclock these GPUs with added voltage, there will be additional headroom available for better performance.
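For readers who want to reproduce the improvement column in the table above, the calculation is simply the overclocked result minus the stock result, expressed as a fraction of the stock result. A minimal sketch using a few rows copied from the table:

    # Recompute improvement figures from the overclocking table above.
    results = {
        "GPU clock (MHz)":          (608, 680),
        "DX9+SSAO: Mafia II":       (88.2, 91.7),
        "DX10: 3dMark Jane Nash":   (54.0, 57.9),
        "DX11: Aliens vs Predator": (64.7, 69.4),
        "DX11: Metro 2033":         (50.6, 55.0),
    }
    for test, (stock, overclocked) in results.items():
        gain = overclocked - stock
        print(f"{test}: +{gain:.1f} ({gain / stock:.1%})")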

GeForce GTX 590 Temperatures

Benchmark tests are always nice, so long as you care about comparing one product to another. But when you're an overclocker, gamer, or merely a PC hardware enthusiast who likes to tweak things on occasion, there's no substitute for good information. Benchmark Reviews has a very popular guide written on Overclocking Video Cards, which gives detailed instruction on how to tweak a graphics card for better performance. Of course, not every video card has overclocking headroom. Some products run so hot that they can't suffer any higher temperatures than they already do. This is why we measure the operating temperature of the video card products we test.

To begin my testing, I use GPU-Z to measure the temperature at idle as reported by the GPU. Next, I used a modified version of FurMark's "Torture Test" to generate the maximum thermal load. This allows us to record absolute maximum GPU temperatures at high-power 3D mode. The ambient room temperature remained at a stable 20°C throughout testing. FurMark does two things extremely well: it drives the thermal output of any graphics processor much higher than any video game realistically could, and it does so with consistency every time. FurMark works great for testing the stability of a GPU as the temperature rises to the highest possible output. The temperatures discussed below are absolute maximum values, and may not represent real-world temperatures created by the average video game:

Video Card Idle Temp Loaded Temp Loaded Noise Ambient
ATI Radeon HD 5850 39°C 73°C 7/10 20°C
NVIDIA GeForce GTX 460 26°C 65°C 4/10 20°C
AMD Radeon HD 6850 42°C 77°C 7/10 20°C
AMD Radeon HD 6870 39°C 74°C 6/10 20°C
ATI Radeon HD 5870 33°C 78°C 7/10 20°C
NVIDIA GeForce GTX 560 Ti 27°C 78°C 5/10 20°C
NVIDIA GeForce GTX 570 32°C 82°C 7/10 20°C
ATI Radeon HD 6970 35°C 81°C 6/10 20°C
NVIDIA GeForce GTX 580 32°C 70°C 6/10 20°C
NVIDIA GeForce GTX 590 33°C 77°C 6/10 20°C
AMD Radeon HD 6990 40°C 84°C 8/10 20°C

NVIDIA-GeForce-GTX-590-Furmark-Fixed-Sm.jpg

Furmark Temperature Chart

NVIDIA surprised us with how cool and quiet their GeForce GTX 580 operated under full load, especially considering that this is their flagship model. AMD's Radeon HD 6970 ran a bit warmer, but still kept fan noise to a moderate level. When Benchmark Reviews tested the Radeon HD 6990 last week, it was disappointing to have their premium-level graphics product produce the loudest noise levels we've experienced in quite a while. So then, it's refreshing to see how well Gemini handles temperatures on the GeForce GTX 590. Obviously the speed reduction from 772 MHz down to 608 MHz on each GPU made a significant difference to the thermal output, thus reducing dependency on a high-RPM fan.

Since each GF110 GPU received the same vapor chamber heatsink as it gets on the GeForce GTX 580, it makes perfect sense to see manageable temperatures. There is an aluminum baseplate secured to the top of the printed circuit board that helps cool the PCB and its components, and two backplates are mounted at the bottom of the board to passively cool the graphics memory. As an added result of NVIDIA's optional hardware power monitoring circuitry, temperatures on this dual-GPU video card are actually closer to those we've seen from single-GPU units. Measured with a constant 20°C ambient room temperature, idle temperatures were a cool 33°C. After ten minutes of torture testing with a modified version of Furmark, loaded temperatures were a modestly warm 77°C with only slightly audible cooling fan noise - certainly a level the competition should take notice of.
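On Windows, GPU-Z handles the temperature logging itself; purely as an illustration of the same procedure, peak temperature during a stress run could also be captured with a short script that polls NVIDIA's nvidia-smi utility (assumed to be installed) once per second while the torture test runs:

    # Illustration only: sample GPU temperature once per second via nvidia-smi
    # and report the peak observed, e.g. during a ten-minute FurMark-style load.
    import subprocess, time

    peak = 0
    for _ in range(600):  # roughly ten minutes of samples
        out = subprocess.check_output(
            ["nvidia-smi", "--query-gpu=temperature.gpu", "--format=csv,noheader"]
        )
        temps = [int(t) for t in out.decode().split()]  # one value per GPU
        peak = max(peak, *temps)
        time.sleep(1)

    print(f"Peak GPU temperature: {peak} C")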

VGA Power Consumption

For power consumption tests, Benchmark Reviews utilizes an 80-PLUS GOLD certified OCZ Z-Series Gold 850W PSU, model OCZZ850. This power supply unit has been tested to provide over 90% typical efficiency by Chroma System Solutions. To measure isolated video card power consumption, Benchmark Reviews uses the Kill-A-Watt EZ (model P4460) power meter made by P3 International. In this particular test, all power consumption results were verified with a second power meter for accuracy.

A baseline measurement is taken without any video card installed on our test computer system, which is allowed to boot into Windows 7 and rest idle at the login screen before power consumption is recorded. Once the baseline reading has been taken, the graphics card is installed and the system is again booted into Windows and left idle at the login screen before taking the idle reading. Our final loaded power consumption reading is taken with the video card running a stress test using graphics test #1 in 3DMark11. Below is a chart with the isolated video card power consumption (measured system total minus the baseline measured without a video card) displayed in Watts for each specified test product:

Video Card Power Consumption by Benchmark Reviews

VGA Product Description (sorted by combined total power) | Idle Power | Loaded Power
NVIDIA GeForce GTX 480 SLI Set | 82 W | 655 W
NVIDIA GeForce GTX 590 Reference Design | 53 W | 396 W
ATI Radeon HD 4870 X2 Reference Design | 100 W | 320 W
AMD Radeon HD 6990 Reference Design | 46 W | 350 W
NVIDIA GeForce GTX 295 Reference Design | 74 W | 302 W
ASUS GeForce GTX 480 Reference Design | 39 W | 315 W
ATI Radeon HD 5970 Reference Design | 48 W | 299 W
NVIDIA GeForce GTX 690 Reference Design | 25 W | 321 W
ATI Radeon HD 4850 CrossFireX Set | 123 W | 210 W
ATI Radeon HD 4890 Reference Design | 65 W | 268 W
AMD Radeon HD 7970 Reference Design | 21 W | 311 W
NVIDIA GeForce GTX 470 Reference Design | 42 W | 278 W
NVIDIA GeForce GTX 580 Reference Design | 31 W | 246 W
NVIDIA GeForce GTX 570 Reference Design | 31 W | 241 W
ATI Radeon HD 5870 Reference Design | 25 W | 240 W
ATI Radeon HD 6970 Reference Design | 24 W | 233 W
NVIDIA GeForce GTX 465 Reference Design | 36 W | 219 W
NVIDIA GeForce GTX 680 Reference Design | 14 W | 243 W
Sapphire Radeon HD 4850 X2 11139-00-40R | 73 W | 180 W
NVIDIA GeForce 9800 GX2 Reference Design | 85 W | 186 W
NVIDIA GeForce GTX 780 Reference Design | 10 W | 275 W
NVIDIA GeForce GTX 770 Reference Design | 9 W | 256 W
NVIDIA GeForce GTX 280 Reference Design | 35 W | 225 W
NVIDIA GeForce GTX 260 (216) Reference Design | 42 W | 203 W
ATI Radeon HD 4870 Reference Design | 58 W | 166 W
NVIDIA GeForce GTX 560 Ti Reference Design | 17 W | 199 W
NVIDIA GeForce GTX 460 Reference Design | 18 W | 167 W
AMD Radeon HD 6870 Reference Design | 20 W | 162 W
NVIDIA GeForce GTX 670 Reference Design | 14 W | 167 W
ATI Radeon HD 5850 Reference Design | 24 W | 157 W
NVIDIA GeForce GTX 650 Ti BOOST Reference Design | 8 W | 164 W
AMD Radeon HD 6850 Reference Design | 20 W | 139 W
NVIDIA GeForce 8800 GT Reference Design | 31 W | 133 W
ATI Radeon HD 4770 RV740 GDDR5 Reference Design | 37 W | 120 W
ATI Radeon HD 5770 Reference Design | 16 W | 122 W
NVIDIA GeForce GTS 450 Reference Design | 22 W | 115 W
NVIDIA GeForce GTX 650 Ti Reference Design | 12 W | 112 W
ATI Radeon HD 4670 Reference Design | 9 W | 70 W
* Results are accurate to within +/- 5W.

In the previous section we discovered how well the new and improved NVIDIA cooling solution managed temperatures for a pair of Fermi GF110 GPUs on the GeForce GTX 590 video card. In terms of power consumption, the results were generally similar in scale. Keeping in mind that Gemini houses two independent high-performance graphics processors, it's expected that the graphics card will require significant power despite the use of a 10-phase advanced digital power controller. The GeForce GTX 590 requires two 8-pin PCI-E power connections for proper operation, and will not display a picture on the screen unless proper power has been supplied. NVIDIA recommends a 700W power supply unit for stable operation, which should include the two required 8-pin PCI-E connections without using any adapters. The power consumption statistics discussed below are absolute maximum values, and may not represent real-world power consumption created by the average video game:

Resting at idle with no GPU load, the GeForce GTX 590 consumed 53W - or roughly 26W per GPU by our measure. Compensating for a small margin of error, this level of power consumption is only 7W more than the opposing Radeon HD 6990 requires at idle. This also roughly matches idle power draw from the older ATI Radeon HD 5970 video card, while being lower than many of the older-generation single-GPU solutions. Once 3D applications begin to demand power from the twin Fermi GPUs, electrical power consumption really climbs. Measured at full throttle using the 3DMark11 benchmark suite (GT1), the GeForce GTX 590 topped out at 396W maximum power draw, which is 31W higher than NVIDIA's stated maximum TDP of 365W. Our measurements represent absolute maximum limits, since most real-world applications and video games do not demand 100% GPU load.
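Put as plain arithmetic, the isolation method described at the start of this section is a simple subtraction. Below is a sketch with hypothetical meter readings chosen so the deltas match the GTX 590 figures above; the absolute wall-socket numbers are made up for illustration:

    # Isolate video-card power draw from wall-socket readings, per the methodology above.
    def isolated_card_power(system_total_watts, baseline_watts):
        """Card draw = measured system total minus the baseline taken with no card installed."""
        return system_total_watts - baseline_watts

    baseline_no_card = 98    # hypothetical: system idle at login screen, no video card
    system_idle      = 151   # hypothetical: same state with the GTX 590 installed
    system_loaded    = 494   # hypothetical: 3DMark11 graphics test #1 running

    print(isolated_card_power(system_idle, baseline_no_card))    # 53 W at idle
    print(isolated_card_power(system_loaded, baseline_no_card))  # 396 W under load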

GeForce GTX 590 Conclusion

IMPORTANT: Although the rating and final score mentioned in this conclusion are made to be as objective as possible, please be advised that every author perceives these factors differently at various points in time. While we each do our best to ensure that all aspects of the product are considered, there are often times unforeseen market conditions and manufacturer changes which occur after publication that could render our rating obsolete. Please do not base any purchase solely on our conclusion, as it represents our product rating specifically for the product tested which may differ from future versions. Benchmark Reviews begins our conclusion with a short summary for each of the areas that we rate.

NVIDIA designed the GeForce GTX 590 to be the best graphics card available on the market, and they contend that it's a better total solution than the AMD Radeon HD 6990. I will discuss graphics performance in a moment, but first let's look at the other factors that come into play. NVIDIA's Gemini graphics card consumes a few more watts of power at idle than the Radeon HD 6990, and under load the GTX 590 consumes 46W more than its competitor. Yet, as a direct result of superior cooling efficiency, less heat byproduct is produced by the GTX 590 video card when matched against the Radeon HD 6990. Fan noise from the cooling unit offers the largest contrast we've found between these two products: the GeForce GTX 590 operated quietly under full load, while the Radeon HD 6990 was significantly louder. Let's not forget that the GTX 590 is a full inch shorter, and can fit in more computer cases. Comparing these two products on overall size, heat output, and operational noise, the evidence all points back in favor of NVIDIA's GeForce GTX 590 being the better product.

The closest competition the GeForce GTX 590 has is the AMD Radeon HD 6990 in terms of single-card graphics performance, or two GeForce GTX 570's paired together into an SLI set. We've included a pair of AMD Radeon HD 6870's joined in a CrossFireX set, just to illustrate other options. Although NVIDIA has previously informed us the GTX 590 performs at roughly 1.5x the level of a single GTX 580 rather than matching two GTX 580's in SLI, we've added them to the results of our tests as well. Now on to the graphics performance results... which take some attention to fully appreciate.

After running benchmarks on each video card through fourteen different tests, the results occasionally placed one product better than the other, and then vice versa. Beginning with DirectX 9 graphics performance in Mafia II with all of the settings turned up high, played with SSAO enabled and PhysX turned off, the GeForce GTX 590 produced an impressive lead over the Radeon HD 6990 but couldn't quite match GeForce GTX 570 SLI performance levels. Call of Duty: Black Ops was tweaked to use the absolute highest quality settings possible, and yet still had extremely fluid video performance during action-packed multiplayer maps for both products.

In the more modern DirectX 10 game tests, Crysis Warhead kept the GTX 590 even with the Radeon HD 6990 and a few frames ahead of the GTX 570 SLI set. 3DMark Vantage used high-end DirectX 10 settings to place all three contenders approximately equal in the Jane Nash test, but both GeForce products excelled past the Radeon HD 6990 in the New Calico test. Battlefield: Bad Company 2 used 8x anti-aliasing and 16x anisotropic filtering to produce far superior frame rates on the GTX 590 compared to the Radeon HD 6990, though it slightly trailed the pair of GTX 570's in SLI.

In the DirectX 11 tests, Futuremark's 3DMark11 benchmark suite strained our high-end graphics cards with only mid-level settings displayed at 720p, yet the HD6990 generally matched up well to the GeForce GTX 590 as well as both GTX 570's in SLI. Aliens vs Predator pushed the Radeon HD 6990 to produce considerably higher average framerates than the GTX 590, while also surpassing the GeForce GTX 570 SLI set. Lost Planet 2 played well at 2x AA, allowing the GeForce GTX 590 to pass the 570 SLI set and leap beyond Radeon HD 6990 performance capabilities. Metro 2033 is a demanding game even when played with high-end graphics, but the Radeon HD 6990 edged past both the GTX 590 and GTX 570 SLI set. Unigine Heaven positioned the Radeon HD 6990 well ahead of the GeForce GTX 590, and only slightly ahead of the GTX 570 SLI pair.

My tally of these results has the GTX 590 ahead in five tests, equal in five, and trailing in five. Based on how the GeForce GTX 590 and Radeon HD 6990 swap paint in most tests or go tit-for-tat in others, graphics performance is roughly equal between these two cards in my book. Compared against the GTX 570 SLI set, the benchmark scores give the SLI set a lead in seven tests, even in two, and trailing in three. If it's a battle between the GeForce GTX 590 and Radeon HD 6990 with all things considered (performance, heat, and noise), most would agree that the GTX 590 is the better choice. For those looking to match graphics frame rate performance at the expense of all the previously mentioned items, plus installation space, an SLI set of GeForce GTX 570's will also work well.

NVIDIA-GeForce-GTX-590-Ammo-Can.jpg

Each GPU on the GeForce GTX 590 offers two graphics outputs, which are doubled to four by way of the NF200-P-SLI-A3 bridge chip. Three dual-link DVI ports and a mini-DisplayPort 1.1 output really open up visual functionality, allowing the GTX 590 to power four displays at once. DL-DVI #1 and DL-DVI #2 are routed from the first GF110 GPU, while DL-DVI #3 and the mini-DisplayPort are routed from the second GPU. It's great to see NVIDIA finally include a DisplayPort option, which enables display expansion as the technology catches up with consumers. Gamers will likely take advantage of triple-display surround, or even 3D Vision Surround for those of us who want the most out of our NVIDIA 3D Vision kit.

The GeForce GTX 590 uses the same 40nm NVIDIA GF110 GPUs found in the flagship GTX 580 model, and with the added thermal management system they've worked perfectly in Gemini's dual-GPU package. The constant push towards a smaller die process is rather insignificant in the grand scheme of things, as was proven when the NVIDIA GeForce GTX 280 successfully launched at 65nm instead of the expected 55nm process. Taiwan Semiconductor Manufacturing Company (TSMC) is already building 32nm processors for other clientele, just not at the level needed to create GPUs.

Appearance is a much more subjective matter, especially since this particular rating doesn't have any quantitative benchmark scores to fall back on. NVIDIA's GeForce GTX series has used a fairly recognizable design for the past year, and with the exception of angular corners the GTX 590 looks very similar to the recently launched GTX 580 and 570 models. Gemini's relatively compact size helps this dual-GPU video card to do what the Radeon HD 6990 could not: fit two processors into a card the size of products designed with only one GPU. Some add-in card partners may offer their own unique designs by incorporating an improved cooling solution, but most will simply dress up the original design with colorful fan shroud graphics.

Value is a fast-moving target, and please believe me when I say that it changes by the minute in this industry. The premium-priced GeForce GTX 590 "Gemini" graphics card demonstrates NVIDIA's ability to innovate in the graphics segment while leading their market. As of launch day, 24 March 2011, the GeForce GTX 590 has been assigned a $699 MSRP. In terms of value, the GeForce GTX 590 costs roughly the same as AMD's Radeon HD 6990. If you compare one card's value to another based solely on video frame rates, identical pricing fools you into thinking these cards offer approximately the same value. Just remember that only one of these video cards can offer multi-display 3D gaming, standard form-factor installation, and PhysX technology.
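
To make that caution concrete, here is a minimal frames-per-dollar sketch. The average frame rates are hypothetical placeholders; only the roughly $699 pricing comes from the paragraph above, and the point is that a price-per-frame metric alone hides exactly the feature and form-factor differences just mentioned.

```python
# Minimal sketch of a naive frames-per-dollar value comparison.
# The average frame rates below are hypothetical placeholders; only the
# ~$699 pricing reflects the MSRPs discussed above. Features such as
# PhysX, 3D Vision Surround, and form factor are what this metric misses.

def frames_per_dollar(avg_fps: float, price_usd: float) -> float:
    """Return average benchmark frame rate divided by card price."""
    return avg_fps / price_usd

cards = {
    "GeForce GTX 590": {"avg_fps": 80.0, "price_usd": 699.0},  # placeholder fps
    "Radeon HD 6990": {"avg_fps": 80.0, "price_usd": 699.0},   # placeholder fps
}

for name, specs in cards.items():
    print(f"{name}: {frames_per_dollar(specs['avg_fps'], specs['price_usd']):.3f} fps per dollar")
```

With equal placeholder inputs the two cards score identically, which is precisely why frame rate per dollar cannot be the whole value story.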

When I reviewed the AMD Radeon HD 6990 for its launch event two weeks ago, I genuinely liked the card's ability to produce unmatched performance through the sheer strength of two top-end GPUs. NVIDIA answered back with a product just as powerful, but refined so many of Gemini's smaller details that the scales now tip in their favor. Depending on your collection of games and settings, graphics performance is fairly even between the GeForce GTX 590 and Radeon HD 6990. But unfortunately for the Radeon HD 6990, modern graphics cards are judged on a lot more than simply producing frame rates. Consumers are looking at supplemental features, such as stereoscopic 3D functionality, graphical enhancements, affordable multi-display possibilities, broad software support, and stable drivers. NVIDIA 3D Vision, APEX PhysX, The Way It's Meant to be Played developer support, surround gaming with inexpensive DVI monitors, and Forceware drivers all deliver these things. AMD's solutions are either not widely supported (DisplayPort), unpopular (AMD HD3D), or lack affordable integration (Eyefinity).

The GeForce GTX 590 is the ultimate enthusiast graphics card, intended for affluent top-end gamers. It may only match the competition's solution in terms of frame rate performance, but it also operates at lower temperatures, and does so very quietly. For elite-level gamers and hardware enthusiasts the GeForce GTX 590 represents the best you can buy, and delivers on its price point. Of course, putting together a GeForce GTX 570 SLI set is still an option, but it will consume more power and dissipate additional heat. Value-seeking gamers looking to match this performance on the cheap could purchase one GeForce GTX 570 now and save to add a second unit later. You'll take up more room inside the computer case, and a multi-card setup could require a new power supply unit, but it's possible so long as you're willing to make concessions. If you can afford the asking price, the GeForce GTX 590 'Gemini' graphics card delivers the best total package that money can buy.

Do you agree with my assessment of the GeForce GTX 590 video card? Leave comments below, or ask questions in our Forum.

Pros:

+ Best total package for DX11 video games
+ Short profile fits into standard size computer cases
+ One card drives four displays or 3D Vision Surround
+ Fermi GPUs enable 3D Vision and PhysX functionality
+ Cooling fan operates at very quiet acoustic levels
+ Includes DisplayPort connectivity for future display tech
+ Supports quad-SLI for unmatched potential

Cons:

- Extremely expensive enthusiast product
- Heated exhaust vents into the computer case
- Does not include HDMI output for HDTVs



Comments 

 
# RE: NVIDIA GeForce GTX 590 Gemini Video Card - mihai 2011-03-24 06:37
good review, great battle, 590gtx ugly competitor, for me is no choice
 
 
# RE: RE: NVIDIA GeForce GTX 590 Gemini Video Card - Olin Coles 2011-03-24 06:40
Thank you Mihai. What features made you want one card more than the other?
 
 
# RE: RE: RE: NVIDIA GeForce GTX 590 Gemini Video Card - mihai 2011-03-24 08:27
good question ! there are not big things (power consumption, eyefinity, 4gb of vram)
but i will wait until aftermarket coolers for 6990 appears
anyway i appreciate great competitions like this one
 
 
# Poor review - GT40 2011-03-25 06:01
For a balance please redo the temp/fan noise on the same test you mesured the power draw with. Then also do the power test on the Furmark test.

If the results show low power use in Furmark for the 590, you know the card is activating its limiter, therefor the temp/noise reading are not realistic.

Simply put the power/temp/fan noise should be done on the games as you benched them, its all to easy to program a driver to fool one application to make it look and sound good.

Thanks
 
 
# RE: Poor Review - Com-Tek Chris 2011-03-25 08:41
I think these guys spend more time than you think ATI FANBOY. The results these guys give don't come out of a cracker jack box. Read the whole review. They give you power readings at idle and under the heaviest load. The sound results were simply put, ATI is louder. I own the 6990, its louder and somewhat annoying, but I deal with it because I have noise canceling headphones. Do you trolls even read the whole articles....? Or just bits and pieces?
 
 
# RE: Poor review - Olin Coles 2011-03-25 10:09
Poor review? If that's the case, go hold your breath for that special request!
 
 
# VS 6990 - Summary 2011-03-24 06:42
after all gtx 590 is better why? preformance maybe is not as great as i expected and found lack of it in gpu/mem clocks...3Gb vs 4GB for higher resolutions is lack too..but i had previously ati 5870(paid so much for buggy card with buggy memory) and what i suffered from? first card was an issue because of memory chips of samsung not form hynx and random GSOD happened...niether bios change fix that problem...rma'd card second was good but again samsung chips did not allow me to clock card higher than 1mhz even it had great afetmarket cooler....second the driver issue cf scaling in games tess' performance and on and on...i considered on going on 6970 but bought gtx 580 and was amazed with performance and driver release adn support...drivers was released in 18-th january and new release was not needed...but when i had 5870 driver fixes or many issues and games was released every 1-2 months ???? that is mine expierence with ati that i don't want to live again ,thanks but no way...about gtx 590 this card offer cuda(i needed so much for my study at university),3d vision,now multimonitor setup,physx and many other stable technologioes by nvidia that are certified....even if it is not faster than amd 6990 which is not faster that much than gtx 590 for me quality is number one and nvidia offer that....i am not fanboy of nvidia niether ati but i had many products from nvidia staring with agp card geforce 2 MX440 that served me great for long period of time that 8600gt,8600 gts,9500gt than ati 4870 that had also driver issues and 4870x2 with heat issues...than came 5870 and finally gtx 580 which i plan soon to sli with another one card...
 
 
# RE: NVIDIA GeForce GTX 590 Gemini Video Card - David Ramsey 2011-03-24 11:32
Like the ammo-case packaging, although I assume that's only for the review cards. Did it come with the magazines, too?
 
 
# RE: RE: NVIDIA GeForce GTX 590 Gemini Video Card - Olin Coles 2011-03-24 11:39
Nope, those are for my AR-15's. As you already know, I shoot all review samples after testing.
 
 
# RE: RE: RE: NVIDIA GeForce GTX 590 Gemini Video Card - Steven Iglesias-Hearst 2011-03-24 13:15
Good job you couldn't over-volt this card or you really would have to shoot this sample... Damn shame about these cards blowing up.
 
 
# Shoot? What? - Eric Garay 2011-03-24 15:29
LOL! Do you literally shoot them? Or, are you just messing around? Because...damn. :p
 
 
# RE: Shoot? What? - Olin Coles 2011-03-24 15:35
I shoot the big ones, but throw back the youngin's. :)

Actually, the hardware sits in a big box for retests and other projects. After a year or so, they become gifts or sold for what little they're still worth.

Hey- I never did see any of your work from PAX! Hope it was fun, and worth the visit.
 
 
# RE: NVIDIA GeForce GTX 590 Gemini Video Card - FxAsus 2011-03-24 12:19
I think with two 6870 you are done this year
 
 
# What hold - Succellus 2011-03-24 16:43
What hold Nvidia from topping is the lack of memory in game really pushing like A10C full, Nvidia Short of memory especially on triple + monitor settings.
They really need to put some 2g or even 3g of mem on those 580 cards.
 
 
# Quick heads up - Charles McGraw 2011-03-24 17:34
Great review; as a quick heads up though, on the Features & Spec's page you have the GTX 590 listed as 7.7 Gigapixels per second.

I'm pretty sure that's not right.
 
 
# RE: NVIDIA GeForce GTX 590 Gemini Video Card - Robert17 2011-03-24 20:57
Nice review. Interesting how one week, one product, can up-end the previous high-water mark.

So what's up next week?
 
 
# Amazing Article - Com-Tek Chris 2011-03-24 21:18
I always appreciate the time that is taken into all the tests. I also appreciate the non bias assessments made. I currently own the 6990 (Great Card but it is noisy but I have the Logitech G930 Headset and it offers some noise cancellation) & I own 2x 580 GTX's in SLI, also beastly cards. If I had an extra $700 to bang out I would prob buy this new 590 but the budget says................oh wait..........I just sold 40 computers today to a client............lol, gimme a week, I got this baby!
 
 
# I was surprised - BruceBruce 2011-03-24 21:35
The thing that surprised me the most was the price. But when I look at the clocks, and I see that it is basically the same performance as two GTX 570 cards, well it makes sense since thaey are hanging at $350 right now. It's kinda funny how ATI got slammed for making the HD 5970 from two 5850 chips, and then NVIDIA decides to follow the same path. I know the GTX 590 isn't gimped on shaders, but it's so hamstrung by it's low clocks that it may as well be. Well, they pinned a target on the HD 6990 and they hit it, center of mass! Or was that a headshot?
 
 
# RE: I was Surprised - Com-Tek Chris 2011-03-24 22:18
HEAD SHOT!!!!! headshotcomputers.com coming soon to give reviewers a chance to test the real gaming pc's. Sorry had to do it, please don't kill me, lol.
 
 
# does not seem to be as unbias a review as should be - toros 2011-03-24 21:54
First I am not a diehard fan of either brand I go with who has the best overall product at the time. But its clear whenever a Nvidia product is launched it is always tested with a bias on games that have the nvidia tag on them. This is obviously going to make there card perform better.

When you look at the things that are not bias towards either product IE Heaven benchmark. Thats where you see the real difference in the cards. All this end talk about features is complete garbage. You can say the same for ATI there card and 3d bluray is far superior then the Nvidia solution. Eyefinity maybe a novel feature but it does work. Something Nvidia has yet to even come close to matching. Heck this is first card with 3 monitors support. Also the displayport is a mute point because ATI is now including a displayport adaptor that illiminates the super high cost of this solution among other reasons could state that this is not a fd it will eventually be the standard. If your going to skew your review because you like nvidia just do it at the begining. Its getting extremely old seeing the bait and switch from both sides.
 
 
# Fan-boys cry the loudest - Olin Coles 2011-03-24 22:27
You know what else gets old? Fan boy trolling. It's obvious you favor ATI, and that's your right as a consumer. I'm a reviewer without a horse in this race, so I couldn't care less for who wins the graphics war every week. If you researched my position a little more before you made rude comments on the work I've just delivered, you might see that. Just two weeks ago I was highlighting the AMD Radeon HD 6990. Anyone with their head pulled from the clouds would realize that the GTX 590 is virtually the same card, only with more polish.

Some of your remarks are a relief to me, because they show how uninformed you truly are. According to you, features like PhysX, 3D Vision, and CUDA support don't matter. I suggest you start asking around, because like it or not those 'garbage' features help sell these products.

By the way, those 'biased' NVIDIA games that always score the highest... games like BattleForge and Battlefield Bad Company 2... those are AMD co-developed games, not NVIDIA.
 
 
# RE: Fan-boys cry the loudest - Com-Tek Chris 2011-03-24 22:56
In all honesty with "toros" comment I really don't think he read all of your review or any of it for that matter. I would say his comment is based on 1 of 2 things. 1. He got this email and 2. He only read Page 19 Paragraph 2 line 1 where "NVIDIA designed the GeForce GTX 590 to be the best graphics card available on the market".

Then Toro's jumped on this forum and blogged his ATI Fanboy Mumbo Jumbo. But you know what, that's ok, Handicap need a place to lay their heads at night. If he had read the whole review he would have seen that facts. 1. The 6990 has its pro's and con's. Fact 2, The Nvidia and ATI cards are pretty neck and neck. Fact 3, the 590 won 5, tied 5 and (barley) lost 5. I don't know how anyone could have taken this review as "BIAS" towards Nvidia. I am an Nvidia fan, but I own a 6990 and 2 580 GTX's, they are both powerful................nevermind, I know if Toros is reading this it wont make sense so i will save my time.
 
 
# RE: Fan-boys cry the loudest - Vik 2011-03-26 17:05
I think the points about NVidia's features are valid, but I disagree with what was said about Eyefinity support. In the past the adapters or dp capable monitors may indeed have been an expensive proposition, but given that the 6990 ships with an assortment of adapters, I don't think that is a fair point to try and make. I am running Eyefinity on the older 5970 now with 3 DVI monitors and one adapter. I wish those cheap adapters and free adapter were available back when I set this up, but early adopters pay that price.
Interesting read though. I am anxious to see how the cards compare at their overclocked overvolted limits on reference cooler.
 
 
# seems unbiased enough - Sempifi99 2011-04-01 10:26
I would think the review is plenty unbiased. But I would hardly discount features, since that is where the real difference between ati cards and nvidia cards are. Both brands can provide more than enough horsepower for most games at anything but extreme resolutions and their price points for performance are similar. But I think it is funny that someone who complains about bias in a review notes such a strong bias in his reply.
 
 
# You know... - Jake 2011-03-25 08:26
I am the kind of consumer that actually check this cards when they lower the price (you know, as long its using the same DirectX and not a lot different from other features), does the card industry will get better or will be a very stable year after this last movements?
For all I know next year could have big stuff for processors.
 
 
# Finally! - Rob 2011-03-25 09:57
It's about time to see some neck-and-neck battles between ATi and Nviida again. I was especially surprised to see how well the 6990 made improvement in Dx11 titled games as before it was for the most part Nvidia based products rather outperforming in Dx11 titled games. Now I just have to wait a few years for the next DirectX (12 perhaps) and then I will replace this GTX480 with a revised Dx12 card rather than just jumping the gun. Good to finally see some strong competition for a change between both manufacturers.

The rest of you fanboys should really stop honestly. In the case of the 6990 and 590, what is there really to say one card is superior to the other? I don't find 50w less, 5c degrees less or a card that runs EVER SO quieter to the other good ways of saying my card beats yours. There is no real pure power difference. So I must agree with Colin above and say they are both virtually the same exact cards.

I just have to stick with purchasing Nvidia however for my 3D setup, no biggie.
 
 
# Image Quality - Ramin 2011-03-25 13:42
I have a question from "benchmark reviews"

I was wondering if the 6990 still had the aggressive AF optimizations which results in texture shimmering. I am referring to the AF optimization discovered earlier and brought up in an nViDIA blog.

I really hate texture shimmering, that is a deciding factor for me. If I have to switch setting around in Catalyst in order to get rid of shimmering the benchmark results would not indicate the actual performance I'll get from 6990.



Thanks in advance.
 
 
# What I see... - BruceBruce 2011-03-25 14:28
In the night scenes of the Unigine Heaven Benchmark, I see severe (bat-like) shimmering on the stones below some of the street lights. Both scenes are towards the end of the benchmark. Is this what you are referring to?
 
 
# Videos in this page - Ramin 2011-03-25 21:48
Yep what I mean by shimmering can be seen in videos posted in this page:

/index.php?option=com_content&task=view&id=12845&Itemid=47

Usually shimmering (if it's present) can easily be seen in Half-Life 2 in road or railroad parts of the map which extend long into the horizon.

I remember having horrible shimmering in HL2 back when I had 7800GTX.

In the Unigine night scene you mention, can you get rid of shimmering by setting Catalyst AI to high or turning it off altogether?

I suppose in the same Unigine scene , GTX 590 is not showing any shimmering?

Thanks for the reply btw.
 
 
# Correction - Ramin 2011-03-25 21:50
Ooops! My mistake, scratch that URL. This page has contains the videos

##tweakpc.de/hardware/tests/grafikkarten/amd_radeon_hd_6870_hd_6850/s09.php
 
 
# RE: Image Quality - Olin Coles 2011-03-25 16:09
Yes, because they are Cayman-based, they still show a collision of ranges in AF-specific testing. Most people will never see the end-effects, but you can see them on tests made for this purpose.
 
 
# RE: RE: Image Quality - Ramin 2011-03-25 22:03
Thanks for the reply Olin I just noticed that you are the reviewer.

Can you get rid of shimmering by fiddling with Cat AI ? Does the GTX 590 show the same amount of shimmering as well? In HL2 particularly, since it can easily be detected when moving the camera.
 
 
# Struggles of Small Business - Prasad 2011-03-26 06:27
This Graphics card is very expensive. If the company reduces its price then so many people, students and other game lovers can enjoy the games.
 
 
# RE: Struggles of Small Business - Vik 2011-03-26 18:02
^lol

Neither AMD nor NVidia are charities. You want a bleeding edge product, prepare to bleed the cash.
 
 
# RE: RE: Struggles of Small Business - Avro Arrow 2011-03-27 17:10
Amen to that Vik! :D
 
 
# Quad-SLI / Triple SLI? - Sempifi99 2011-04-01 10:19
I enjoyed reading the article though the only thing I wish was here was some results of a quad-SLI of 590s and a triple-SLI of 580s. I know that it would not help everyone but I am sure at least everyone would find it an interesting read. Especially if Nvidia claims that the 590 has 1.5x the performance of the 580. Also it would be especially useful to me since I currently have 2x 580 in SLI for surround-3D gaming but have been finding them a little underwhelming, my resolution is 6160 x 1080, or essentially 6x 1080p monitors at 60Hz each. I have a little time left with the step up program and would love to see an article comparing the quad vs. triple SLI setups. Also triple-SLI 570s would be an interesting inclusion too.
 
 
# I was wondering - jec 2011-04-03 23:31
I was just wondering when I was reading this review...does nvidia ever put HDMI ports in any of their cards at all. I've owned both types in my systems and lately don't recall many nvidias at all having hdmi ports. maybe its not a big deal for many people. what do you guys think? all in all great and fair review. first time coming to this site, great work.
 
 
# RE: I was wondering - Steven Iglesias-Hearst 2011-04-04 00:31
Welcome to BmR jec.

Yes NVIDIA does include HDMI with their cards, I can only assume it is up to AIB partners whether it will be mini-HDMI or a full size port.
 
 
# Re: I was wondering - Sempifi99 2011-04-04 13:52
Nvidia puts a HDMI on many of their cards, though with dual link DVI supporting more bandwidth I don't see any need for a direct HDMI cable plug, since with an adapter you get the same thing. I would rather have more dual link DVI plugs and just use adapters. The only acception I can think of is for cards that would be meat to be used for HTPC since there are no TVs I can think of off the top of my head that would requier a dual link DVI connection, though with some of the 3d tvs today I could be wrong. But since dual link DVI supports the most bandwidth I would think it be more adventagious to have more dual link DVI and just have a DVI to HDMI cable.
 
