ASUS ENGTX580 GeForce GTX 580 Video Card
Written by Olin Coles   
Tuesday, 23 November 2010

ASUS GeForce GTX 580 Video Card Review

Voltage Tweak drives the NVIDIA Fermi GF110 GPU to 925 MHz and produces the most powerful DirectX-11 video card available.

ASUS is a well-known innovator of technology, but there are times when they recognize a good idea and strive to simply make it better. Cue the GeForce GTX 580 video card. Armed with the maximum number of CUDA cores and PolyMorph engines NVIDIA can deliver with the Fermi architecture, the GeForce GTX 580 represents their trophy effort to seize the performance market. While similar to the GeForce GTX 480, the GeForce GTX 580 updates the Fermi formula: it trims the power appetite, reduces heat output, and increases graphical frame rate performance. ASUS raises the GF110 fixed-function clock speed from 772 MHz to 782 MHz, while the graphics cores now operate at 1564 MHz. The 1536MB GDDR5 frame buffer rides a familiar 384-bit memory bus, clocked at 1002 MHz for a 4008 MHz data rate. All of this comes before ASUS Voltage Tweak gives us access to unlocked potential...

Using ASUS Voltage Tweak technology and their SmartDoctor software utility, Benchmark Reviews raises GPU voltage from a pedestrian 962 mV at stock speed to 1145 mV, overclocking the GF110 GPU by nearly 20% over reference speeds. The ASUS GeForce GTX 580 competes on two levels: price point and GPU segment. At its $520 price point, the ASUS ENGTX580 competes directly against ATI's dual-GPU Radeon HD 5970 and a pair of AMD Radeon HD 6870 video cards combined into CrossFireX. In regard to single-GPU competition, the closest video cards would be ATI's Radeon HD 5870 or NVIDIA's own GeForce GTX 480.

Using the most demanding PC video game titles and benchmark software available at the time of launch, Benchmark Reviews tests graphical frame rate performance of the GeForce GTX 580. Older DirectX-10 favorites such as Crysis Warhead and 3DMark Vantage are included, as well as newer DirectX-11 titles such as Aliens vs Predator, Battlefield: Bad Company 2, BattleForge, Lost Planet 2, Mafia II, Metro 2033, Tom Clancy's HAWX2, and the Unigine Heaven 2.1 benchmark. Built to deliver the best possible graphical experience to a performance-oriented gaming market, NVIDIA's GeForce GTX 580 video card delivers top-end frame rates with unmatched efficiency.

ASUS_ENGTX580_2DI_1536MD5_Video_Card_Splash.jpg

Everyone who waited for NVIDIA to launch their Fermi graphics architecture felt like NVIDIA had held back once it was finally released. The GF100 graphics processor packaged into the GTX 480 used less than its full potential, and it didn't create the wide performance margin everyone expected between competitors. Seven months later, NVIDIA has returned with their updated Fermi GF110 GPU, delivering all 512 CUDA cores in the GeForce GTX 580. Featuring a tweaked graphics processor that runs much cooler and uses less power than the GTX 480, the ASUS GeForce GTX 580 is tested by Benchmark Reviews against the Radeon HD 5970 and CrossFireX Radeon HD 6870 video cards, along with two overclocked GeForce GTX 460s in SLI, using the latest DirectX-11 video games.

Something happened to the Fermi architecture between the time it premiered as GF100 and when it began to really turn heads as GF104: the ratio of shaders and texture units was perfected. The original GF100 GPU placed too much emphasis on tessellation, and not enough on overall graphical performance. As a result of finding the right balance, the GF104 graphics processor on the GeForce GTX 460 became an overnight sensation for gamers. Now evolved into the GF110 GPU, all 512 cores understand their purpose and propel the GeForce GTX 580 to a level only rivaled by the competition's best and most expensive dual-GPU Radeon HD 5970 graphics card. Trimmed down to a modest 244 watts of power consumption under load, the GTX 580 outperforms its predecessor in both power efficiency and graphical performance.

Manufacturer: ASUSTeK Computer Inc.
Model Number: ENGTX580/2DI/1536MD5
Product Name: GeForce GTX 580
Price as Tested: $520 (Amazon and NewEgg)

Full Disclosure: The product sample used in this article has been provided by ASUS.

GeForce GTX 580 Closer Look

Compared to the bulging heat-pipes on the GeForce GTX 480, NVIDIA's GeForce GTX 580 actually looks like it belongs to the GTX 470 family. Both the GTX 480 and the new GeForce GTX 580 have profiles that measure 10.5" long and occupy two card slots, but the GTX 580 drops the 1/2-inch protruding nerf bars and wears a svelte 3.875" waistline. Not surprisingly, it also weighs considerably less. In terms of appearance, the ASUS ENGTX580 is identical to NVIDIA's reference design with the exception of branding labels.

ASUS_ENGTX580_2DI_1536MD5_Top.jpg

Although the power demands have been reduced, the NVIDIA GeForce GTX 580 still requires both an eight-pin and a six-pin PCI-E power connection. The GeForce GTX 580 shroud also retains a vent near the header panel identical to the GTX 480 design.

NVIDIA-GeForce-GTX580-Side.jpg

NVIDIA has designed the shroud with a deeper chamfer depression for the blower motor fan, allowing more air to reach the intake whenever two or more video cards are combined in close-proximity SLI configurations.

ASUS_ENGTX580_2DI_1536MD5_Angle.jpg

Borrowing a technique popular in the world of CPU coolers, NVIDIA uses a hollow vapor chamber to provide a robust thermal management system on the GeForce GTX 580. This brings into question the need for add-in card partners to design their own cooling solutions, and challenges them to produce better results.

NVIDIA-GeForce-GTX580-Blower-Fan.jpg

Pictured below is the heatsink component, with the vapor chamber residing inside the copper portion. The polished copper contact surface mates together with the GF110 GPU, as the blower motor fan drives air past the aluminum fins and outside the computer case.

NVIDIA-GeForce-GTX580-Cooling-Chamber.jpg

NVIDIA offers a mini-HDMI 1.4a digital A/V output on the GeForce GTX 580, which add-in partners may further extend with a full-size adapter. Dual dual-link DVI digital outputs are retained on the GTX 580, which still means that at least two GeForce video cards will be needed for NVIDIA 3D-Vision Surround functionality.

NVIDIA-GeForce-GTX580-Bracket.jpg

In the next section, we'll peel off the plastic shroud and take a detailed look at the ASUS GeForce GTX 580...

GeForce GTX 580 Detailed

In many ways the new GF110 graphics processor is no different than the original GF100. NVIDIA claims that the Fermi block diagram is identical between the two, and that only the ratio of tessellation, shader, and texture units has been refined. This means that the GF110 shares the same 530 mm² die profile, shared memory, dedicated L1 cache, and 768KB L2 cache as the GF100. The thermal threshold is reduced to 97°C, and the transistor count moves down from 3.2 billion to 3.0 billion. Benchmark Reviews offers our NVIDIA Fermi Graphics Architecture guide for a more comprehensive look into the Fermi GPU.

NVIDIA-Fermi-GF110-GPU-Graphics-Processor.jpg

NVIDIA's GF110 GPU inside the GeForce GTX 580 maintains the same six 64-bit GDDR5 memory controllers (384-bit total) to facilitate high bandwidth access to the frame buffer. This also means that the same printed circuit board design could have been used for the GTX 580, and it was, but with several new twists. The first obvious change is the lack of blower fan openings in the PCB. The other changes are much more subtle, but make a big impact on functionality.
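
For readers who like to check the math, peak memory bandwidth follows directly from the bus width and data rate. Here's a minimal Python sketch using NVIDIA's published GTX 580 figures:

    # Peak memory bandwidth from bus width and data rate.
    # GDDR5 transfers four bits per pin per clock, so the 1002 MHz
    # memory clock yields a 4008 MT/s data rate.
    memory_clock_mhz = 1002
    data_rate_mts = memory_clock_mhz * 4        # 4008 MT/s
    bus_width_bits = 6 * 64                     # six 64-bit controllers = 384-bit
    bandwidth_gbs = data_rate_mts * (bus_width_bits / 8) / 1000
    print(f"{bandwidth_gbs:.1f} GB/s")          # 192.4 GB/s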

NVIDIA-GeForce-GTX580-PCB-Top.jpg

Circled below are the electronic components responsible for power management on NVIDIA's GeForce GTX 580 video card. Benchmark Reviews uses GPU stress tools such as FurMark to measure maximum power usage, although admittedly no video game exists that would ever create comparable power demands. NVIDIA now dedicates hardware circuitry to the task of monitoring power consumption as well as temperature, adjusting performance to protect the graphics card from damage.

NVIDIA-GeForce-GTX580-Power-Monitoring-Hardware.jpg

With so much circuit board real estate still undeveloped, add-in card partners may choose to redesign the GeForce GTX 580 PCB, so long as NVIDIA offers the engineering information. Other changes could potentially include the use of higher-density DRAM modules, replacing the Samsung 128MB 1250MHz K4G10325FE-HC04 GDDR5 ICs with 256MB parts such as Samsung's 1250MHz K4G20325FC-HC04 or 1500MHz K4G20325FC-HC03.

NVIDIA-GeForce-GTX580-Exposted-PCB.jpg

NVIDIA designed the GTX 580 for 244 watts Thermal Design Power (TDP); however, the PCI-E slot plus the 6-pin and 8-pin power connections are good for up to 300W on demand. Keep this in mind when shopping for a power supply unit, because NVIDIA recommends a 600W PSU for the GTX 580, although I personally feel that an 800W PSU would be a safer suggestion in terms of upgrade longevity. Benchmark Reviews tests power consumption with the ASUS ENGTX580 later in this article.

NVIDIA-GeForce-GTX580-PCB.jpg

NVIDIA has designed triple-SLI capability into their upper-level 'GTX' family of Fermi video cards: the GTX 465, 470, 480, and now the GeForce GTX 580. Even though SLI and triple-SLI are possible, careful consideration must be given to heat in overclocked computer systems, since multiple GPUs inside the computer case add to the overall thermal load.

In the next section, we detail the various Fermi products and lay out their features and specifications before putting them to test...

NVIDIA Fermi Features

In today's complex graphics, tessellation offers the means to store massive amounts of coarse geometry with expand-on-demand functionality. In the NVIDIA GF100-series GPU, tessellation also enables more complex animations. In terms of model scalability, dynamic Level of Detail (LOD) allows quality and performance trade-offs, delivering better picture quality wherever it comes without a performance penalty. Comprised of three layers (original geometry, tessellation geometry, and displacement map), the final product carries far more detail in shading and geometric data than if it were constructed with bump-map technology. In plain terms, tessellation gives real peaks and valleys with shadow detail in between, while previous-generation bump-mapping only gives the illusion of detail.

id-imp-tessellated-character.jpg

Using GPU-based tessellation, a game developer can send a compact geometric representation of an object or character, and the tessellation unit can produce the correct geometric complexity for the specific scene. Consider the "Imp" character illustrated above. On the far left we see the initial quad mesh used to model the general outline of the figure; this representation is quite compact even when compared to typical game assets. The two middle images of the character are created by finely tessellating the description at the left. The result is a very smooth appearance, free of any of the faceting that resulted from limited geometry. Unfortunately this character, while smooth, is no more detailed than the coarse mesh. The final image on the right was created by applying a displacement map to the smoothly tessellated character shown third from the left.

Benchmark Reviews offers more detail in our full-length NVIDIA GF100 GPU Fermi Graphics Architecture guide.

Tessellation in DirectX-11

Control hull shaders run DX11 pre-expansion routines, and operate explicitly in parallel across all points. Domain shaders run post-expansion operations on maps (u/v or x/y/z/w), and are also implicitly parallel. Fixed-function tessellation is configured by Level of Detail (LOD) based on output from the control hull shader, and can also produce triangles and lines if requested. Tessellation is new to NVIDIA GPUs; it was not part of GT200 because of geometry bandwidth bottlenecks created by sequential rendering/execution semantics.

In the GF110 graphics processor, NVIDIA has added new PolyMorph and Raster engines to handle world-space processing (PolyMorph) and screen-space processing (Raster). There are sixteen PolyMorph engines and four Raster engines on the GF110, which depend on an improved L2 cache to keep buffered geometric data produced by the pipeline on-die.

GF100 Compute for Gaming

As developers continue to search for novel ways to improve their graphics engines, the GPU will need to excel at a diverse and growing set of graphics algorithms. Since these algorithms are executed via general compute APIs, a robust compute architecture is fundamental to a GPU's graphical capabilities. In essence, one can think of compute as the new programmable shader. GF110's compute architecture is designed to address a wider range of algorithms and to facilitate more pervasive use of the GPU for solving parallel problems. Many algorithms, such as ray tracing, physics, and AI, cannot exploit shared memory, because program memory locality is only revealed at runtime. GF110's cache architecture was designed with these problems in mind. With up to 48 KB of L1 cache per Streaming Multiprocessor (SM) and a global L2 cache, threads that access the same memory locations at runtime automatically run faster, irrespective of the choice of algorithm.

NVIDIA Codename NEXUS brings CPU and GPU code development together in Microsoft Visual Studio 2008 for a shared process timeline, and also introduces the first hardware-based shader debugger. NVIDIA's GF100-series is the first GPU ever to offer full C++ support, the programming language of choice among game developers. To ease the transition to GPU programming, NVIDIA developed Nexus, a Microsoft Visual Studio programming environment for the GPU. Together with new hardware features that provide better debugging support, developers will be able to enjoy CPU-class application development on the GPU. The end result is C++ and Visual Studio integration that brings HPC users onto the same development platform. NVIDIA offers several paths to deliver compute functionality on the GF110 GPU, such as CUDA C++ for video games.

Image processing, simulation, and hybrid rendering are three primary functions of GPU compute for gaming. Using NVIDIA's GF100-series GPU, interactive ray tracing becomes possible for the first time on a standard PC. Ray tracing performance on the NVIDIA GF100 is roughly 4x faster than it was on the GT200 GPU, according to NVIDIA tests. AI/path finding is a compute-intensive process well suited for GPUs. The NVIDIA GF110 can handle AI obstacle avoidance approximately 3x faster than the GT200. Benefits from this improvement are faster collision avoidance and shortest-path searches for higher-performance path finding.

GF110 Specifications

  • 512 CUDA Cores
  • 16 Geometry Units
  • 4 Raster Units
  • 64 Texture Units
  • 48 ROP Units
  • 384-bit GDDR5
  • DirectX-11 API Support

GeForce Fermi Products

| Graphics Card | GeForce GTS 450 | GeForce GTX 460 | GeForce GTX 465 | GeForce GTX 470 | GeForce GTX 480 | GeForce GTX 580 |
| GPU Transistors | 1.17 Billion | 1.95 Billion | 3.2 Billion | 3.2 Billion | 3.2 Billion | 3.0 Billion |
| Graphics Processing Clusters | 1 | 2 | 4 | 4 | 4 | 4 |
| Streaming Multiprocessors | 4 | 7 | 11 | 14 | 15 | 16 |
| CUDA Cores | 192 | 336 | 352 | 448 | 480 | 512 |
| Texture Units | 32 | 56 | 44 | 56 | 60 | 64 |
| ROP Units | 16 | 768MB=24 / 1GB=32 | 32 | 40 | 48 | 48 |
| Graphics Clock (Fixed Function Units) | 783 MHz | 675 MHz | 607 MHz | 607 MHz | 700 MHz | 772 MHz |
| Processor Clock (CUDA Cores) | 1566 MHz | 1350 MHz | 1215 MHz | 1215 MHz | 1401 MHz | 1544 MHz |
| Memory Clock (Clock Rate/Data Rate) | 902/3608 MHz | 900/3600 MHz | 802/3206 MHz | 837/3348 MHz | 924/3696 MHz | 1002/4008 MHz |
| Total Video Memory | 1024MB GDDR5 | 768MB / 1024MB GDDR5 | 1024MB GDDR5 | 1280MB GDDR5 | 1536MB GDDR5 | 1536MB GDDR5 |
| Memory Interface | 128-bit | 768MB=192-bit / 1GB=256-bit | 256-bit | 320-bit | 384-bit | 384-bit |
| Total Memory Bandwidth | 57.7 GB/s | 86.4 / 115.2 GB/s | 102.6 GB/s | 133.9 GB/s | 177.4 GB/s | 192.4 GB/s |
| Texture Filtering Rate (Bilinear) | 25.1 GigaTexels/s | 37.8 GigaTexels/s | 26.7 GigaTexels/s | 34.0 GigaTexels/s | 42.0 GigaTexels/s | 49.4 GigaTexels/s |
| GPU Fabrication Process | 40 nm | 40 nm | 40 nm | 40 nm | 40 nm | 40 nm |
| Output Connections | 2x Dual-Link DVI-I, 1x Mini HDMI | 2x Dual-Link DVI-I, 1x Mini HDMI | 2x Dual-Link DVI-I, 1x Mini HDMI | 2x Dual-Link DVI-I, 1x Mini HDMI | 2x Dual-Link DVI-I, 1x Mini HDMI | 2x Dual-Link DVI-I, 1x Mini HDMI |
| Form Factor | Dual-Slot | Dual-Slot | Dual-Slot | Dual-Slot | Dual-Slot | Dual-Slot |
| Power Input | 6-Pin | 2x 6-Pin | 2x 6-Pin | 2x 6-Pin | 6-Pin + 8-Pin | 6-Pin + 8-Pin |
| Thermal Design Power (TDP) | 106 Watts | 768MB=150W / 1GB=160W | 200 Watts | 215 Watts | 250 Watts | 244 Watts |
| Recommended PSU | 400 Watts | 450 Watts | 550 Watts | 550 Watts | 600 Watts | 600 Watts |
| GPU Thermal Threshold | 95°C | 104°C | 105°C | 105°C | 105°C | 97°C |

GeForce Fermi Chart Courtesy of Benchmark Reviews
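
As a cross-check on the chart above, the bilinear texture fill rate is simply texture units multiplied by the graphics clock. A short Python sketch using the chart's own values:

    # Bilinear texture fill rate = texture units x graphics clock.
    cards = {
        "GTS 450": (32, 783),   # (texture units, graphics clock in MHz)
        "GTX 460": (56, 675),
        "GTX 480": (60, 700),
        "GTX 580": (64, 772),
    }
    for name, (tmus, clock_mhz) in cards.items():
        print(f"{name}: {tmus * clock_mhz / 1000:.1f} GigaTexels/s")
    # GTX 580 -> 49.4 GigaTexels/s, matching the chart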

VGA Testing Methodology

The Microsoft DirectX-11 graphics API is native to the Microsoft Windows 7 operating system, which serves as the primary O/S for our test platform. DX11 is also available as a Microsoft Update for the Windows Vista O/S, so our test results apply to both versions of the operating system. The majority of benchmark tests used in this article compare DX11 performance; however, some high-demand DX10 tests have also been included.

According to the Steam Hardware Survey published for the month ending September 2010, the most popular gaming resolution is 1280x1024 (17-19" standard LCD monitors). However, because this 1.31MP resolution is considered 'low' by most standards, our benchmark performance tests concentrate on higher-demand resolutions: 1.76MP 1680x1050 (22-24" widescreen LCD) and 2.30MP 1920x1200 (24-28" widescreen LCD monitors). These resolutions are more likely to be used by high-end graphics solutions, such as those tested in this article.
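
The megapixel figures above are simply width times height; a quick Python check:

    # Resolution in megapixels: width x height / 1,000,000.
    for width, height in [(1280, 1024), (1680, 1050), (1920, 1200)]:
        print(f"{width}x{height}: {width * height / 1e6:.2f} MP")
    # 1.31 MP, 1.76 MP, and 2.30 MP respectively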

In each benchmark test there is one 'cache run' that is conducted, followed by five recorded test runs. Results are collected at each setting with the highest and lowest results discarded. The remaining three results are averaged, and displayed in the performance charts on the following pages.
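
In code form, this scoring method is a trimmed mean. A minimal Python sketch (the run values below are illustrative, not measured results):

    # Discard the highest and lowest of five recorded runs,
    # then average the remaining three results.
    def benchmark_score(runs):
        assert len(runs) == 5, "five recorded runs expected"
        trimmed = sorted(runs)[1:-1]   # drop lowest and highest
        return sum(trimmed) / len(trimmed)

    print(benchmark_score([43.1, 44.0, 43.4, 42.8, 43.6]))  # averages to ~43.37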

A combination of synthetic and video game benchmark tests have been used in this article to illustrate relative performance among graphics solutions. Our benchmark frame rate results are not intended to represent real-world graphics performance, as this experience would change based on supporting hardware and the perception of individuals playing the video game.

Cost to Performance Ratio

For this article Benchmark Reviews has included cost per FPS for graphics performance results. An average of the five least expensive product prices is calculated, which does not consider tax, freight, promotional offers, or rebates. All prices reflect product series components, and do not represent any specific manufacturer, model, or brand. The median retail prices for each product were obtained from NewEgg.com and are current as of 19-November-2010 (a minimal sketch of the calculation follows the price list below):

  • $230 EVGA GTX 460 FTW Edition
  • $242 Radeon HD 6870 1GB
  • $255 GeForce GTX 470 1GB
  • $324 Radeon HD 5870 1GB
  • $437 GeForce GTX 480 1536MB
  • $520 ASUS ENGTX 580 1536MB
  • $570 Radeon HD 5970 2GB
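
As noted above, the cost-per-FPS figure is simply product price divided by the average frame rate at a given resolution; a minimal Python sketch (the FPS value here is illustrative):

    # Cost per frame: product price divided by average FPS.
    def cost_per_fps(price_usd, avg_fps):
        return price_usd / avg_fps

    print(f"${cost_per_fps(520, 38.4):.2f} per FPS")  # $13.54 per FPS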

Intel X58-Express Test System

DirectX-10 Benchmark Applications

  • 3DMark Vantage v1.02
    • Extreme Settings: (Extreme Quality, 8x Multisample Anti-Aliasing, 16x Anisotropic Filtering, 1:2 Scale)
  • Crysis Warhead v1.1 with HOC Benchmark
    • Moderate Settings: (Very High Quality, 4x AA, 16x AF, Airfield Demo)

DirectX-11 Benchmark Applications

  • Aliens vs Predator Benchmark 1.0
    • Extreme Settings: (Very High Quality, 4x AA, 16x AF, SSAO, Tessellation, Advanced Shadows)
  • Battlefield: Bad Company 2
    • Extreme Settings: (Highest Quality, HBAO, 8x AA, 16x AF, 180s Fraps Single-Player Intro Scene)
  • BattleForge v1.2
    • Extreme Settings: (Very High Quality, 8x Anti-Aliasing, Auto Multi-Thread)
  • Lost Planet 2 Benchmark 1.0
    • Moderate Settings: (2x AA, Low Shadow Detail, High Texture, High Render, High DirectX 11 Features)
  • Mafia II
    • Extreme Settings: (Antialiasing, 16x AF, High Shadow Quality, High Detail, High Geometry, Ambient Occlusion)
  • Metro 2033
    • Moderate Settings: (Very-High Quality, AAA, 16x AF, Advanced DoF, Tessellation, 180s Fraps Chase Scene)
  • Tom Clancy's HAWX 2 Benchmark 1.0.4
    • Extreme Settings: (Maximum Quality, 8x AA, 16x AF, DX11 Terrain Tessellation)
  • Unigine Heaven Benchmark 2.1
    • Moderate Settings: (High Quality, Normal Tessellation, 16x AF, 4x AA)

Video Card Test Products

| Graphics Card | EVGA GTX 460 FTW | Radeon HD6870 | GeForce GTX470 | Radeon HD5870 | GeForce GTX480 | Radeon HD5970 | ASUS ENGTX580 |
| GPU Cores | 336 | 1120 | 448 | 1600 | 480 | 3200 (1600 per GPU) | 512 |
| Core Clock (MHz) | 850 | 900 | 608 | 850 | 700 | 725 | 782 |
| Shader Clock (MHz) | 1700 | N/A | 1215 | N/A | 1401 | N/A | 1564 |
| Memory Clock (MHz) | 1000 | 1050 | 837 | 1200 | 924 | 1000 | 1002 |
| Memory Amount | 1024MB GDDR5 | 1024MB GDDR5 | 1280MB GDDR5 | 1024MB GDDR5 | 1536MB GDDR5 | 2048MB GDDR5 | 1536MB GDDR5 |
| Memory Interface | 256-bit | 256-bit | 320-bit | 256-bit | 384-bit | 512-bit (256-bit per GPU) | 384-bit |

DX10: 3DMark Vantage

3DMark Vantage is a PC benchmark suite designed to test DirectX-10 graphics card performance. 3DMark Vantage is the latest addition to the 3DMark benchmark series built by Futuremark Corporation. Although 3DMark Vantage requires NVIDIA PhysX to be installed for program operation, only the CPU/Physics test relies on this technology.

3DMark Vantage offers benchmark tests focusing on GPU, CPU, and Physics performance. Benchmark Reviews uses the two GPU-specific tests for grading video card performance: Jane Nash and New Calico. These tests isolate graphical performance, and remove processor dependence from the benchmark results.

  • 3DMark Vantage v1.02
    • Extreme Settings: (Extreme Quality, 8x Multisample Anti-Aliasing, 16x Anisotropic Filtering, 1:2 Scale)

3DMark Vantage GPU Test: Jane Nash

Of the two GPU tests 3DMark Vantage offers, the Jane Nash performance benchmark is slightly less demanding. In a short video scene the special agent escapes a secret lair by water, nearly losing her shirt in the process. Benchmark Reviews tests this DirectX-10 scene at 1680x1050 and 1920x1200 resolutions, and uses Extreme quality settings with 8x anti-aliasing and 16x anisotropic filtering. The 1:2 scale is utilized, and is the highest this test allows. By maximizing the processing levels of this test, the scene creates the highest level of graphical demand possible and sorts the strong from the weak.

3dMark_Vantage_Jane_Nash_Benchmark.jpg

Jane Nash Extreme Quality Settings

Cost Analysis: Jane Nash (1920x1200)

  • $242 Radeon HD 6870 1GB costs $10.17 per FPS
  • $255 GeForce GTX 470 1GB costs $10.99 per FPS
  • $324 Radeon HD 5870 1GB costs $11.25 per FPS
  • $437 GeForce GTX 480 1536MB costs $14.97 per FPS
  • $460 EVGA GTX 460 FTW SLI costs $9.81 per FPS
  • $484 Radeon HD 6870 CrossFireX costs $10.85 per FPS
  • $520 ASUS ENGTX580 1536MB costs $13.54 per FPS
  • $520 ASUS GTX 580 Overclocked costs $11.98 per FPS
  • $570 Radeon HD 5970 2GB costs $13.26 per FPS

3DMark Vantage GPU Test: New Calico

New Calico is the second GPU test in the 3DMark Vantage test suite. Of the two GPU tests, New Calico is the most demanding. In a short video scene featuring a galactic battleground, there is a massive display of busy objects across the screen. Benchmark Reviews tests this DirectX-10 scene at 1680x1050 and 1920x1200 resolutions, and uses Extreme quality settings with 8x anti-aliasing and 16x anisotropic filtering. The 1:2 scale is utilized, and is the highest this test allows. Using the highest graphics processing level available allows our test products to separate themselves and stand out (if possible).

3dMark_Vantage_New_Calico_Benchmark.jpg

New Calico Extreme Quality Settings

Cost Analysis: New Calico (1920x1200)

  • $242 Radeon HD 6870 1GB costs $13.37 per FPS
  • $255 GeForce GTX 470 1GB costs $12.94 per FPS
  • $324 Radeon HD 5870 1GB costs $14.86 per FPS
  • $437 GeForce GTX 480 1536MB costs $17.27 per FPS
  • $460 EVGA GTX 460 FTW SLI costs $11.06 per FPS
  • $484 Radeon HD 6870 CrossFireX costs $13.79 per FPS
  • $520 ASUS ENGTX580 1536MB costs $15.25 per FPS
  • $520 ASUS GTX 580 Overclocked costs $13.58 per FPS
  • $570 Radeon HD 5970 2GB costs $16.72 per FPS

Test Summary: The ASUS GeForce GTX 580 competes on two levels: price point, and GPU performance segment. At the $520 price point, it competes directly against ATI's dual-GPU Radeon HD 5970 and a pair of AMD Radeon HD 6870 video cards combined into CrossFireX or two overclocked GeForce GTX 460's in SLI. In regard to single-GPU competition, the closest video cards would be ATI's Radeon HD 5870 or NVIDIA's own GeForce GTX 480.

3DMark Vantage usually sets the pace for coming events, and through the analysis of these two tests spread over two different resolutions we gain a clear insight into who ranks where on the battlefield. In the Jane Nash tests, the dual-GPU Radeon HD 5970 and CrossFireX Radeon HD 6870s outscore the ASUS GeForce GTX 580, but then in the New Calico tests the ENGTX580 comes back to outperform or match these same video cards. Comparing the GeForce GTX 580 to the older GTX 480 creates a tremendous performance margin, and even more so against the ATI Radeon HD 5870. Adding a hefty overclock using Voltage Tweak draws out the hidden frames, and gives the ENGTX580 a significant advantage. Our cost analysis indicates a trend among the premium top-end graphics products: better frame rate, less value.

DX10: Crysis Warhead

Crysis Warhead is an expansion pack based on the original Crysis video game. Crysis Warhead is based in the future, where an ancient alien spacecraft has been discovered beneath the Earth on an island east of the Philippines. Crysis Warhead uses a refined version of the CryENGINE2 graphics engine. Like Crysis, Warhead uses the Microsoft Direct3D 10 (DirectX-10) API for graphics rendering.

Benchmark Reviews uses the HOC Crysis Warhead benchmark tool to test and measure graphic performance using the Airfield 1 demo scene. This short test places a high amount of stress on a graphics card because of detailed terrain and textures, but also for the test settings used. Using the DirectX-10 test with Very High Quality settings, the Airfield 1 demo scene receives 4x anti-aliasing and 16x anisotropic filtering to create maximum graphic load and separate the products according to their performance.

Using the highest quality DirectX-10 settings with 4x AA and 16x AF, only the most powerful graphics cards are expected to perform well in our Crysis Warhead benchmark tests. DirectX-11 extensions are not supported in Crysis: Warhead, and SSAO is not an available option.

  • Crysis Warhead v1.1 with HOC Benchmark
    • Moderate Settings: (Very High Quality, 4x AA, 16x AF, Airfield Demo)

Crysis_Warhead_Benchmark.jpg

Crysis Warhead Moderate Quality Settings

Cost Analysis: Crysis Warhead (1920x1200)

  • $242 Radeon HD 6870 1GB costs $9.68 per FPS
  • $255 GeForce GTX 470 1GB costs $9.44 per FPS
  • $324 Radeon HD 5870 1GB costs $12.00 per FPS
  • $437 GeForce GTX 480 1536MB costs $12.85 per FPS
  • $460 EVGA GTX 460 FTW SLI costs $9.58 per FPS
  • $484 Radeon HD 6870 CrossFireX costs $9.68 per FPS
  • $520 ASUS ENGTX580 1536MB costs $12.38 per FPS
  • $520 ASUS GTX 580 Overclocked costs $9.13 per FPS
  • $570 Radeon HD 5970 2GB costs $13.57 per FPS

Test Summary: The CryENGINE2 graphics engine used in Crysis Warhead doesn't allow gamers to use the maximum quality settings, but it allows the ASUS GeForce GTX 580 to match performance with the dual GPUs inside an ATI Radeon HD 5970. Unless overclocked to 925 MHz, the ENGTX580 is no match for CrossFireX AMD Radeon HD 6870's or EVGA GTX 460 FTW's in SLI. Comparing the Fermi GF100 in the GeForce GTX 480 against the optimized GF110 GPU inside the GeForce GTX 580 amounts to a huge performance difference, and a massive increase over the Radeon HD 5870. In terms of price-to-performance value, the ASUS ENGTX580 doesn't hit paydirt until you make use of the overclocking headroom this video card has to offer.

DX11: Aliens vs Predator

Aliens vs. Predator is a science fiction first-person shooter video game, developed by Rebellion, and published by Sega for Microsoft Windows, Sony PlayStation 3, and Microsoft Xbox 360. Aliens vs. Predator utilizes Rebellion's proprietary Asura game engine, which had previously found its way into Call of Duty: World at War and Rogue Warrior. The self-contained benchmark tool is used for our DirectX-11 tests, which push the Asura game engine to its limit.

In our benchmark tests, Aliens vs. Predator was configured to use the highest quality settings with 4x AA and 16x AF. DirectX-11 features such as Screen Space Ambient Occlusion (SSAO) and tessellation have also been included, along with advanced shadows.

  • Aliens vs Predator
    • Extreme Settings: (Very High Quality, 4x AA, 16x AF, SSAO, Tessellation, Advanced Shadows)

Aliens-vs-Predator_DX11_Benchmark.jpg

Aliens vs Predator Extreme Quality Settings

Cost Analysis: Aliens vs Predator (1920x1200)

  • $242 Radeon HD 6870 1GB costs $8.96 per FPS
  • $255 GeForce GTX 470 1GB costs $8.73 per FPS
  • $324 Radeon HD 5870 1GB costs $10.80 per FPS
  • $437 GeForce GTX 480 1536MB costs $11.78 per FPS
  • $460 EVGA GTX 460 FTW SLI costs $8.65 per FPS
  • $484 Radeon HD 6870 CrossFireX costs $9.06 per FPS
  • $520 ASUS ENGTX580 1536MB costs $11.40 per FPS
  • $520 ASUS GTX 580 Overclocked costs $8.86 per FPS
  • $570 Radeon HD 5970 2GB costs $11.22 per FPS

Test Summary: Aliens vs Predator may use the well-known Asura game engine, but DirectX-11 extensions push the graphical demand on this game to levels eclipsed by very few others. With an unbiased appetite for raw DirectX-11 graphics performance, Aliens vs Predator accepts AMD and NVIDIA products as equal contenders. When maximum quality settings and high-strain SSAO are called into action, the ASUS GeForce GTX 580 surpasses the GeForce GTX 480 out of the box, but doesn't quite match up to the dual-GPU ATI Radeon HD 5970 or CrossFireX AMD Radeon HD 6870's like it has done in other tests.

DX11: Battlefield Bad Company 2

The Battlefield franchise has been known to demand a lot from PC graphics hardware. DICE (Digital Illusions CE) pairs its Frostbite-1.5 game engine with the Destruction-2.0 feature set in Battlefield: Bad Company 2. Battlefield: Bad Company 2 features destructible environments using Frostbite Destruction-2.0, and adds gravitational bullet-drop effects for projectiles shot from weapons at long distance. The Frostbite-1.5 game engine used in Battlefield: Bad Company 2 consists of DirectX-10 primary graphics, with improved performance and softened dynamic shadows added for DirectX-11 users.

At the time Battlefield: Bad Company 2 was published, DICE was also working on the Frostbite-2.0 game engine. This upcoming engine will include native support for DirectX-10.1 and DirectX-11, as well as parallelized processing support for 2-8 parallel threads. This will improve performance for users with an Intel Core-i7 processor. Unfortunately, the Extreme Edition Intel Core i7-980X six-core CPU with twelve threads will not see full utilization.

In our benchmark tests of Battlefield: Bad Company 2, the first three minutes of action in the single-player raft night scene are captured with FRAPS. Relative to the online multiplayer action, these frame rate results are nearly identical to daytime maps with the same video settings. The Frostbite-1.5 game engine in Battlefield: Bad Company 2 appears to equalize our test set of video cards, and despite AMD's sponsorship of the game it still plays well using any brand of graphics card.

  • Battlefield: Bad Company 2
    • Extreme Settings: (Highest Quality, HBAO, 8x AA, 16x AF, 180s Fraps Single-Player Intro Scene)

Battlefield-Bad-Company-2_Benchmark.jpg

Battlefield Bad Company 2 Extreme Quality Settings

Cost Analysis: Battlefield: Bad Company 2 (1920x1200)

  • $242 Radeon HD 6870 1GB costs $4.43 per FPS
  • $255 GeForce GTX 470 1GB costs $4.12 per FPS
  • $324 Radeon HD 5870 1GB costs $5.19 per FPS
  • $437 GeForce GTX 480 1536MB costs $6.59 per FPS
  • $460 EVGA GTX 460 FTW SLI costs $4.12 per FPS
  • $484 Radeon HD 6870 CrossFireX costs $4.52 per FPS
  • $520 ASUS ENGTX580 1536MB costs $5.59 per FPS
  • $520 ASUS GTX 580 Overclocked costs $5.15 per FPS
  • $570 Radeon HD 5970 2GB costs $6.33 per FPS

Test Summary: Our extreme-quality tests use maximum settings for Battlefield: Bad Company 2, still allowing the ASUS GeForce GTX 580 to outperform a dual-GPU ATI Radeon HD 5970 along with every other single-GPU video card on the market. Two AMD Radeon HD 6870's in CrossFire catch the GTX 580 on price and performance, as do the overclocked GTX 460's in SLI, but these both create more heat and consume much more power. Battlefield Bad Company 2 spells bad news for the older GeForce GTX 480, producing the highest cost per frame while being thoroughly outperformed by the ENGTX 580.

DX11: BattleForge

BattleForge is a free Massive Multiplayer Online Role Playing Game (MMORPG) developed by EA Phenomic with DirectX-11 graphics capability. Combining strategic cooperative battles, the community of MMO games, and trading-card gameplay, BattleForge players are free to put their creatures, spells, and buildings into combinations they see fit. These units are represented in the form of digital cards from which you build your own unique army. With minimal resources and a custom tech tree to manage, the gameplay is unbelievably accessible and action-packed.

Benchmark Reviews uses the built-in graphics benchmark to measure performance in BattleForge, using Very High quality settings (detail) and 8x anti-aliasing with auto multi-threading enabled. BattleForge is one of the first titles to take advantage of DirectX-11 in Windows 7, and offers a very robust color range throughout the busy battleground landscape. The charted results illustrate how performance measures up between video cards when Screen Space Ambient Occlusion (SSAO) is enabled.

  • BattleForge v1.2
    • Extreme Settings: (Very High Quality, 8x Anti-Aliasing, Auto Multi-Thread)

BattleForge_DX11_Benchmark.jpg

BattleForge Extreme Quality Settings

Cost Analysis: BattleForge (1920x1200)

  • $242 Radeon HD 6870 1GB costs $7.06 per FPS
  • $255 GeForce GTX 470 1GB costs $5.62 per FPS
  • $324 Radeon HD 5870 1GB costs $6.79 per FPS
  • $437 GeForce GTX 480 1536MB costs $7.50 per FPS
  • $460 EVGA GTX 460 FTW SLI costs $5.93 per FPS
  • $484 Radeon HD 6870 CrossFireX costs $7.11 per FPS
  • $520 ASUS ENGTX580 1536MB costs $7.69 per FPS
  • $520 ASUS GTX 580 Overclocked costs $6.79 per FPS
  • $570 Radeon HD 5970 2GB costs $9.44 per FPS

Test Summary: With BattleForge graphics settings turned to their highest quality and SSAO enabled, the GeForce GTX 580 truly shines as it easily surpasses the dual-GPU Radeon HD 5970 and matches up to a pair of AMD Radeon HD 6870's in CrossFireX. The irony here is that BattleForge isn't even part of NVIDIA's The Way It's Meant To Be Played initiative; it's actually an AMD co-developed game. Nevertheless, all 512 CUDA cores dance past the others to offer one of the more respectable cost to performance ratios. Compared to the pair of overclocked EVGA GeForce GTX 460's in SLI, it takes an overclocked ENGTX580 to match performance.

DX11: Lost Planet 2

Lost Planet 2 is the second installment in the saga of the planet E.D.N. III, ten years after the story of Lost Planet: Extreme Condition. The snow has melted and the lush jungle life of the planet has emerged with angry and luscious flora and fauna. With the new environment comes the addition of DirectX-11 technology to the game.

Lost Planet 2 takes advantage of DX11 features including tessellation and displacement mapping on water, level bosses, and player characters. In addition, soft body compute shaders are used on 'Boss' characters, and wave simulation is performed using DirectCompute. These cutting edge features make for an excellent benchmark for top-of-the-line consumer GPUs.

The Lost Planet 2 benchmark offers two different tests, which serve different purposes. This article uses tests conducted on benchmark B, which is designed to be a deterministic and effective benchmark tool featuring DirectX 11 elements.

  • Lost Planet 2 Benchmark 1.0
    • Moderate Settings: (2x AA, Low Shadow Detail, High Texture, High Render, High DirectX 11 Features)

Lost-Planet-2_DX11_Benchmark.jpg

Lost Planet 2 Moderate Quality Settings

Cost Analysis: Lost Planet 2 (1920x1200)

  • $242 Radeon HD 6870 1GB costs $7.81 per FPS
  • $255 GeForce GTX 470 1GB costs $6.71 per FPS
  • $324 Radeon HD 5870 1GB costs $10.42 per FPS
  • $437 GeForce GTX 480 1536MB costs $9.30 per FPS
  • $460 EVGA GTX 460 FTW SLI costs $8.13 per FPS
  • $484 Radeon HD 6870 CrossFireX costs $8.46 per FPS
  • $520 ASUS ENGTX580 1536MB costs $9.29 per FPS
  • $520 ASUS GTX 580 Overclocked costs $8.51 per FPS
  • $570 Radeon HD 5970 2GB costs $12.08 per FPS

Test Summary: Lost Planet 2 is one of the few PC video games that is impossible to play smoothly with maximum quality settings. Even when moderate quality levels are configured, top-end video cards like AMD's Radeon HD 6870 and the Radeon HD 5870 barely produce playable frame rates. At 1680x1050 the ASUS GeForce GTX 580 outperforms every other graphics product available, including a pair of Radeon HD 6870's in CrossFire or overclocked GTX 460's in SLI. Increasing the display resolution to 1920x1200 results in the GTX 580 falling about one frame behind the CrossFireX and SLI sets, but still 9 FPS ahead of the Radeon HD 5970's dual Cypress GPUs, which somehow compete with the older GeForce GTX 480. With a maximum overclock, the ASUS ENGTX580 clears all of the competition, combined or otherwise.

DX9+SSAO: Mafia II

Mafia II is a single-player third-person action shooter developed by 2K Czech for 2K Games, and is the sequel to Mafia: The City of Lost Heaven released in 2002. Players assume the life of World War II veteran Vito Scaletta, the son of a small Sicilian family that immigrates to Empire Bay. Growing up in the slums of Empire Bay teaches Vito about crime, and he's forced to join the Army in lieu of jail time. After sustaining wounds in the war, Vito returns home and quickly finds trouble as he again partners with his childhood friend and accomplice Joe Barbaro. Vito and Joe combine their passion for fame and riches to take on the city, and work their way to the top in Mafia II.

Mafia II is a SSAO-enabled PC video game built on 2K Czech's proprietary Illusion game engine, which succeeds the LS3D game engine used in Mafia: The City of Lost Heaven. In our Mafia-II Video Game Performance article, Benchmark Reviews explored characters and gameplay while illustrating how well this game delivers APEX PhysX features on both AMD and NVIDIA products. Thanks to APEX PhysX extensions that can be processed by the system's CPU, Mafia II offers gamers equal access to high-detail physics regardless of video card manufacturer.

  • Mafia II
    • Extreme Settings: (Antialiasing, 16x AF, High Shadow Quality, High Detail, High Geometry, Ambient Occlusion)

Mafia2_DX11_Benchmark.jpg

Mafia II Extreme Quality Settings

Cost Analysis: Mafia II (1920x1200)

  • $242 Radeon HD 6870 1GB costs $5.10 per FPS
  • $255 GeForce GTX 470 1GB costs $5.57 per FPS
  • $324 Radeon HD 5870 1GB costs $6.08 per FPS
  • $437 GeForce GTX 480 1536MB costs $7.56 per FPS
  • $460 EVGA GTX 460 FTW SLI costs $6.05 per FPS
  • $484 Radeon HD 6870 CrossFireX costs $7.15 per FPS
  • $520 ASUS ENGTX580 1536MB costs $7.84 per FPS
  • $520 ASUS GTX 580 Overclocked costs $7.00 per FPS
  • $570 Radeon HD 5970 2GB costs $9.39 per FPS

Test Summary: Of all the video games presently available for DirectX-11 platforms, Mafia II is by far one of the most unique. Playable at the highest quality settings when APEX PhysX is disabled (for the benefit of non-GeForce products), all of the video cards tested delivered good frame rate performance. As we've seen in the other tests, the ASUS GeForce GTX 580 outperforms the ATI Radeon HD 5970, and by the same margin that it surpasses the GeForce GTX 480. Although only 1-2 FPS ahead, the pair of CrossFireX AMD Radeon HD 6870's still earns a performance edge at the expense of higher total power consumption. When APEX PhysX is enabled on Mafia II, all Radeon graphics products are forced to use the CPU and performance quickly plummets. Using ASUS Voltage Tweak to help overclock up to 925MHz, the ENGTX580 matches the pair of EVGA GTX 460 FTW video cards in SLI.

DX11: Metro 2033

Metro 2033 is an action-oriented video game with a combination of survival horror, and first-person shooter elements. The game is based on the novel Metro 2033 by Russian author Dmitry Glukhovsky. It was developed by 4A Games in Ukraine and released in March 2010 for Microsoft Windows. Metro 2033 uses the 4A game engine, developed by 4A Games. The 4A Engine supports DirectX-9, 10, and 11, along with NVIDIA PhysX and GeForce 3D Vision.

The 4A engine is multi-threaded in such a way that only PhysX has a dedicated thread; it uses a task model without any pre-conditioning or pre/post-synchronizing, allowing tasks to be done in parallel. The 4A game engine can utilize a deferred shading pipeline and uses tessellation for greater performance. It also offers HDR (complete with blue shift), real-time reflections, color correction, film grain and noise, and supports multi-core rendering.

Metro 2033 features superior volumetric fog, double PhysX precision, object blur, sub-surface scattering for skin shaders, parallax mapping on all surfaces, and greater geometric detail with less aggressive LODs. Using PhysX, the engine delivers features such as destructible environments, cloth and water simulation, and particles that can be fully affected by environmental factors.

NVIDIA has been diligently working to promote Metro 2033, and for good reason: it's one of the most demanding PC video games we've ever tested. When their flagship GeForce GTX 480 struggles to produce 27 FPS with DirectX-11 anti-aliasing turned down to its lowest setting, you know that only the strongest graphics processors will generate playable frame rates. All of our tests enable Advanced Depth of Field and Tessellation effects, but disable advanced PhysX options.

  • Metro 2033
    • Moderate Settings: (Very-High Quality, AAA, 16x AF, Advanced DoF, Tessellation, 180s Fraps Chase Scene)

Metro-2033_DX11_Benchmark.jpg

Metro 2033 Moderate Quality Settings

Cost Analysis: Metro 2033 (1920x1200)

  • $242 Radeon HD 6870 1GB costs $10.39 per FPS
  • $255 GeForce GTX 470 1GB costs $11.18 per FPS
  • $324 Radeon HD 5870 1GB costs $14.53 per FPS
  • $437 GeForce GTX 480 1536MB costs $14.86 per FPS
  • $460 EVGA GTX 460 FTW SLI costs $11.41 per FPS
  • $484 Radeon HD 6870 CrossFireX costs $10.48 per FPS
  • $520 ASUS ENGTX580 1536MB costs $15.07 per FPS
  • $520 ASUS GTX 580 Overclocked costs $13.90 per FPS
  • $570 Radeon HD 5970 2GB costs $14.92 per FPS

Test Summary: There's no way to ignore the graphical demands of Metro 2033, and only the most powerful GPUs will deliver a decent visual experience using moderate quality settings. Even when these settings are turned down, as they were in our tests, Metro 2033 is a power-hungry video game that crushes frame rate performance. Although Metro 2033 offers advanced PhysX options, these settings are available only to NVIDIA GeForce video cards and disabled for all of our tests.

Metro 2033 proves to be too much for most video cards even when anti-aliasing is turned to its lowest setting, and causes the GeForce GTX 480 to perform at a barely playable 29.4 FPS. The inability to enjoy PhysX effects doesn't look like a huge drawback considering how low the other settings must be turned down, but that's a decision for gamers. The ASUS GeForce GTX 580 may be the most powerful single-GPU video card available, but it's outperformed by the two closest dual-GPU options within its price point when Metro 2033 is involved.

DX11: Tom Clancy's HAWX2

Tom Clancy's H.A.W.X.2 has been optimized for DX11-enabled GPUs, with a number of enhancements that not only improve performance but also greatly improve the visual experience while taking to the skies. The game uses a hardware terrain tessellation method that allows a high number of detailed triangles to be rendered entirely on the GPU when near the terrain in question. This allows for a very low memory footprint, and relies on GPU power alone to expand the low-resolution data into highly realistic detail.

The Tom Clancy's HAWX2 benchmark uses normal game content in the same conditions a player will find in the game, and allows users to evaluate the enhanced visuals that DirectX-11 tessellation adds into the game. The Tom Clancy's HAWX2 benchmark is built from exactly the same source code that's included with the retail version of the game. HAWX2's tessellation scheme uses a metric based on the length in pixels of the triangle edges. This value is currently set to 6 pixels per triangle edge, which provides an average triangle size of 18 pixels.
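
One way to arrive at the 18-pixel figure (our reading, not something the benchmark documentation spells out) is to treat each tessellated screen-space quad as two right triangles with 6-pixel legs:

    # Assumption: each quad with 6-pixel edges splits into two right
    # triangles, so the average triangle covers edge^2 / 2 pixels.
    edge_pixels = 6
    triangle_pixels = edge_pixels ** 2 / 2   # 6 * 6 / 2 = 18.0
    print(f"{triangle_pixels:.0f} pixels per triangle")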

The end result is perhaps the best tessellation implementation seen in a game yet, providing a dramatic improvement in image quality over the non-tessellated case, and running at playable frame rates across a wide range of graphics hardware.

  • Tom Clancy's HAWX 2 Benchmark 1.0.4
    • Extreme Settings: (Maximum Quality, 8x AA, 16x AF, DX11 Terrain Tessellation)

HAWX-2_DX11_Benchmark.jpg

Tom Clancy's HAWX2 Extreme Quality Settings

Cost Analysis: Tom Clancy's HAWX2 (1920x1200)

  • $242 Radeon HD 6870 1GB costs $3.56 per FPS
  • $255 GeForce GTX 470 1GB costs $2.63 per FPS
  • $324 Radeon HD 5870 1GB costs $5.40 per FPS
  • $437 GeForce GTX 480 1536MB costs $3.55 per FPS
  • $460 EVGA GTX 460 FTW SLI costs $3.15 per FPS
  • $484 Radeon HD 6870 CrossFireX costs $4.48 per FPS
  • $520 ASUS ENGTX580 1536MB costs $3.77 per FPS
  • $520 ASUS GTX 580 Overclocked costs $3.54 per FPS
  • $570 Radeon HD 5970 2GB costs $6.55 per FPS

Test Summary: This is a controversial game, at least in the sense that AMD has asked us not to use it for testing (which is why it wasn't included in the AMD Radeon 6850 and 6870 reviews) but NVIDIA endorses it. Our thinking is that HAWX 2 is a full retail product that will be purchased and played by consumers, regardless of how well Radeon products handle its tessellation. Tom Clancy's HAWX2 works incredibly well with the tessellation-optimized Fermi architecture, finally making NVIDIA's effort more transparent. Despite this benefit to GF100-series products, the game is primarily comprised of clear blue skies that require very little processing power. As a result, even a single AMD Radeon HD 6870 produces 68 FPS at 1920x1200.

When HAWX2 is turned up to use the highest possible quality settings with terrain tessellation enabled, the ASUS GeForce GTX 580 video card easily surpasses every product tested against it. Oddly enough, the newer but less powerful AMD Radeon HD 6870 performs better than the older and more expensive ATI Radeon HD 5870: by 10 FPS at 1680x1050 and 8 FPS at 1920x1200. This suggests that AMD gave tessellation a little more emphasis on the Barts GPU (a Cypress refresh), but didn't give it enough power for two Radeon HD 6870's in CrossFire to do more than match performance with the GeForce GTX 470. Another twist is demonstrated by the over-emphasized tessellation power of the GF100 in NVIDIA's GeForce GTX 480, which gives it the strength to compete with the new GTX 580. Using Voltage Tweak, the overclocked ENGTX580 matches performance with a pair of overclocked GTX 460's.

DX11: Unigine Heaven 2.1

The Unigine Heaven 2.1 benchmark is a freely available tool that unleashes DirectX-11 graphics capabilities on Windows 7 or an updated Vista operating system. It reveals the enchanting magic of floating islands with a tiny village hidden in the cloudy skies, and an interactive mode puts the experience of exploring this intricate world within reach. Through its advanced renderer, Unigine was one of the first to showcase art assets with tessellation, bringing compelling visual finesse and exhibiting the possibilities of enriching 3D gaming.

The distinguishing feature in the Unigine Heaven benchmark is hardware tessellation: a scalable technology for automatic subdivision of polygons into smaller and finer pieces, allowing developers to give their games a more detailed look almost free of charge in terms of performance. Thanks to this procedure, the rendered image approaches the boundary of lifelike visual perception: virtual reality conjured by your hand.

Although Heaven-2.1 was recently released and used for our DirectX-11 tests, the benchmark results were extremely close to those obtained with Heaven-1.0 testing. Since only DX11-compliant video cards will properly test on the Heaven benchmark, only those products that meet the requirements have been included.

  • Unigine Heaven Benchmark 2.1
    • Moderate Settings: (High Quality, Normal Tessellation, 16x AF, 4x AA)

Unigine_Heaven_DX11_Benchmark.jpg

Heaven 2.1 Moderate Quality Settings

Cost Analysis: Unigine Heaven (1920x1200)

  • $242 Radeon HD 6870 1GB costs $9.13 per FPS
  • $255 GeForce GTX 470 1GB costs $8.67 per FPS
  • $324 Radeon HD 5870 1GB costs $12.51 per FPS
  • $437 GeForce GTX 480 1536MB costs $11.94 per FPS
  • $460 EVGA GTX 460 FTW SLI costs $8.10 per FPS
  • $484 Radeon HD 6870 CrossFireX costs $9.18 per FPS
  • $520 ASUS ENGTX580 1536MB costs $11.53 per FPS
  • $520 ASUS GTX 580 Overclocked costs $10.42 per FPS
  • $570 Radeon HD 5970 2GB costs $12.98 per FPS

Test Summary: Reviewers like to say "Nobody plays a benchmark", but it seems evident that we can expect to see great things come from a graphics tool this detailed. For now though, those details only come by way of DirectX-11 video cards. Our test results with the Unigine Heaven benchmark tool appear to deliver fair comparisons of DirectX-11 graphics cards when set to higher quality levels. Heaven 2.1 is a very demanding benchmark tool, which is why tessellation is set to normal levels and anti-aliasing is reduced to 4x.

In Unigine's Heaven benchmark, both the AMD Radeon HD 6870 and ATI Radeon HD 5870 video cards trail behind the GeForce GTX 470, making the 5870's value rating the worst of the entire bunch, while the opposite is true for the GTX 470. The dual-GPU ATI Radeon HD 5970 is positioned between the older GeForce GTX 480 and new GeForce GTX 580, with all three in a similar cost-per-FPS range. Combining two Radeon HD 6870 video cards into CrossFire will get you the best frame rate performance and, surprisingly, nearly the best overall value. The overclocked ENGTX580 manages to drop into the mid-$10 range in our cost analysis, but a pair of overclocked GeForce GTX 460 video cards in SLI capture the best price point.

ASUS SmartDoctor Utility

AMD and NVIDIA already stretch their GPUs pretty thin in terms of overclocking headroom, but there's a difference between thin and non-existent. In this section, Benchmark Reviews overclocks the ASUS GeForce GTX 580 video card using the ASUS SmartDoctor overclocking utility. This application offers several adjustable variables to reach the desired overclock, and allows for voltage changes (increase/decrease). The aim of this project is to push the Fermi GF110 GPU inside the ENGTX580 as far as it can go by taking advantage of the ASUS Voltage Tweak functionality.

In the past, software-based overclocking on ASUS video cards has been accomplished through the GamerOSD program. Now these tasks are handled by ASUS SmartDoctor (illustrated below), which is described as "intelligent hardware protection and a powerful overclocking tool". The ASUS SmartDoctor tool allows users to overclock their ASUS video card's GPU and RAM, and at the same time monitor thermal output. ASUS simplifies the on-screen feedback with notes such as "Your VGA Card is OK.", but more advanced users will appreciate the in-depth data that is displayed at the upper-right corner of the module.

The ASUS SmartDoctor utility reports memory speed in quad data rate terms, which means that a stock speed of 1000 MHz GDDR5 appears as 4000 MHz in the utility. SmartDoctor worked well for overclocking the GPU, while Vcore voltage offered plenty of available range thanks to the added Voltage Tweak functionality. As a best practice, it's good to find the maximum stable GPU clock speed and then drop back 10 MHz or more. After several trial-and-error gaming sessions to confirm stability, the ASUS ENGTX580 video card proved to offer an overclock with some very impressive gains.
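That back-off best practice reads like a loop, and a hypothetical Python sketch makes it concrete. SmartDoctor itself is a GUI utility with no scripting interface, so set_gpu_clock() and run_stability_session() below are invented stand-ins for its clock slider and the trial-and-error gaming sessions, not a real API.

    import random

    def set_gpu_clock(mhz: int) -> None:
        print(f"core clock -> {mhz} MHz")  # stand-in for the SmartDoctor slider

    def run_stability_session() -> bool:
        # Stand-in for a gaming/benchmark session; returns False once screen
        # tearing, artifacts, or a driver crash appear.
        return random.random() > 0.2  # placeholder outcome

    def find_stable_clock(start_mhz: int = 782, step_mhz: int = 10,
                          backoff_mhz: int = 10) -> int:
        clock = start_mhz
        while True:
            set_gpu_clock(clock + step_mhz)
            if not run_stability_session():
                break  # instability found: the previous speed was the maximum
            clock += step_mhz
        stable = clock - backoff_mhz  # best practice: retreat 10 MHz or more
        set_gpu_clock(stable)
        return stable

    print(find_stable_clock())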

ASUS-Smart-Doctor-ENGTX580-Max-OC.png

ASUS SmartDoctor Overclocking Utility (GeForce GTX 580)

ASUS ENGTX580 Overclocking

The factory-overclocked ASUS ENGTX580 uses 962 mV at 782 MHz, but with the help of added Voltage Tweak stability and the SmartDoctor software we safely applied 1145 mV to reach 925 MHz. This amounts to a 153 MHz increase over the standard speed, and a 143 MHz increase over the factory overclock. Starting from the factory clock speed, I slowly increased the settings until I began to see screen tearing or the Forceware driver crashed, then backed off to the highest stable speeds for both GPU and GDDR5. With those settled, I put the video card back into action with high-demand video games for additional benchmark tests. Here are the results:

GeForce GTX 580 Overclocking Results

Test Item | Standard GPU/RAM | Factory GPU/RAM | Overclocked GPU/RAM | Improvement
GeForce GTX 580 | 772/1002 MHz | 782/1002 MHz | 925/1002 MHz | 153 MHz (20%)
DX10: 3dMark Jane Nash | 38.1 | 38.4 | 43.4 | 5.3 FPS (14%)
DX10: 3dMark Calico | 33.7 | 34.1 | 38.3 | 4.6 FPS (14%)
DX10: Crysis Warhead | 41 | 42 | 46 | 5 FPS (12%)
DX11: Aliens vs Predator | 43.0 | 45.6 | 47.4 | 4.4 FPS (10%)
DX11: Battlefield BC2 | 92.0 | 93.1 | 101.0 | 9.0 FPS (10%)
DX11: BattleForge | 66.9 | 67.6 | 76.6 | 9.7 FPS (15%)
DX11: HAWX 2 | 136 | 138 | 147 | 11 FPS (8%)
DX11: Heaven 2.1 | 44.8 | 45.1 | 49.9 | 5.1 FPS (11%)
DX11: Lost Planet 2 | 55.0 | 56.0 | 61.1 | 6.1 FPS (11%)
DX9+SSAO: Mafia II | 65.9 | 66.3 | 74.3 | 8.4 FPS (13%)

Overclocking Summary: After re-testing the overclocked GeForce GTX 580 on ten different benchmarks, the 20% clock increase produced an 8-15% improvement in video frame rates. This is an impressive overclock considering that the GeForce GTX 580 already rests at the very top of NVIDIA's food chain. The recent ASUS ENGTX480 overclocking project used a refined GF100 GPU that yielded 12-17% improvements, while the AMD Radeon HD 6870 was limited to 6.5-9.3%. Even though ASUS uses NVIDIA's best processor, there's still enough overclocking headroom to squeeze every extra frame into an advantage over your enemy.

ASUS-ENGTX580-OC-GPUZ.gif

Back in the day, software overclocking tools were few and far between. Benchmark Reviews was literally put on the map by my first article: Overclocking the NVIDIA GeForce Video Card. Although slightly dated, that article is still relevant for enthusiasts wanting to permanently flash their overclock onto the video card's BIOS. Unfortunately, most users are not willing to commit their investment to such risky changes, and feel safer with temporary settings that can be easily undone with a reboot. That's the impetus behind the sudden popularity of software-based GPU overclocking tools.

ASUS GamerOSD

Want to boost graphics card performance without exiting the game? Activate ASUS GamerOSD anytime during gameplay and adjust the GPU clock for instant overclocking. Real-time FPS (frames per second) status is also available to keep track of performance improvements. ASUS GamerOSD (On-Screen Display) combines ASUS SmartDoctor with video capture tools so you never have to leave the action. What I found especially helpful was the high-resolution, DVD-quality video capture feature, which comes free with the ASUS ENGTX580 kit. Programmable hot keys assign actions, and recorded video (saved in XviD MPEG-4 format) or screen shots can be captured on command. These are features you would otherwise have to pay for with software such as FRAPS.

ASUS-GamerOSD-Movie-Capture.png

ASUS GamerOSD On-Screen Display Utility

In the next section, we explore NVIDIA APEX PhysX enhancements with the ASUS GeForce GTX 580...

GeForce GTX 580 Temperatures

Benchmark tests are always nice, so long as you care about comparing one product to another. But when you're an overclocker, gamer, or merely a PC hardware enthusiast who likes to tweak things on occasion, there's no substitute for good information. Benchmark Reviews has a very popular guide on Overclocking Video Cards, which gives detailed instruction on how to tweak a graphics card for better performance. Of course, not every video card has overclocking headroom. Some products run so hot that they can't suffer any higher temperatures than they already do. This is why we measure the operating temperature of the video card products we test.

NVIDIA-GeForce-GTX580-Blower-Fan.jpg

To begin my testing, I use GPU-Z to measure the temperature at idle as reported by the GPU. Next, I use FurMark's "Torture Test" to generate maximum thermal load and record GPU temperatures in high-power 3D mode. The ambient room temperature remained a stable 20°C throughout testing. FurMark does two things extremely well: it drives the thermal output of any graphics processor much higher than any video game realistically could, and it does so consistently on every run. FurMark works great for testing the stability of a GPU as the temperature rises to the highest possible output. The temperatures discussed below are absolute maximum values, not representative of real-world temperatures while gaming.
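As an aside for readers who want to log the same readings themselves: GPU-Z is a Windows GUI tool, but on a reasonably modern NVIDIA driver the polling can be scripted. The Python sketch below assumes nvidia-smi is on the PATH and that the driver supports its --query-gpu flag; it is not part of this review's methodology.

    import subprocess
    import time

    def gpu_temp_c() -> int:
        # Read the current GPU core temperature via nvidia-smi.
        out = subprocess.check_output(
            ["nvidia-smi", "--query-gpu=temperature.gpu",
             "--format=csv,noheader"])
        return int(out.decode().strip().splitlines()[0])  # first GPU only

    # Poll once per second while the FurMark torture test runs; the highest
    # reading observed corresponds to the "loaded" figure in the chart below.
    peak = 0
    for _ in range(600):  # ten-minute window
        peak = max(peak, gpu_temp_c())
        time.sleep(1)
    print(f"Peak loaded temperature: {peak}°C")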

Video Card | Idle Temp | Loaded Temp | Ambient
ATI Radeon HD 5850 | 39°C | 73°C | 20°C
AMD Radeon HD 6850 | 42°C | 77°C | 20°C
AMD Radeon HD 6870 | 39°C | 74°C | 20°C
ATI Radeon HD 5870 | 33°C | 78°C | 20°C
NVIDIA GeForce GTX 480 | 36°C | 82°C | 20°C
NVIDIA GeForce GTX 580 | 24°C | 64°C | 20°C
ASUS ENGTX580 | 32°C | 70°C | 20°C
ASUS ENGTX580 1145 mV | 33°C | 80°C | 20°C

As a result of NVIDIA's new hardware power monitoring circuitry, temperatures are kept to their lowest level in many years. At first I suspected GPU load or power throttling, but there was no evidence of this on the GPU-Z histogram when we re-tested (at a 26°C ambient room temperature). Regardless, the nearly-ambient 32°C idle temperature and modestly warm 70°C loaded temp are something NVIDIA should be proud of... and the competition should take notice of. On the factory-overclocked ASUS ENGTX580, average temperatures matched the reference tests once the warmer test room is taken into account; not that adding 10 MHz to the clock should amount to much. Once overclocked to 925 MHz with 1145 mV, the idle temperature climbed only a degree while loaded temps climbed to 80°C. Even with the added voltage and overclock, the ENGTX580 stayed quiet under load and still ran cooler than a GeForce GTX 480.

VGA Power Consumption

For power consumption tests, Benchmark Reviews utilizes an 80-PLUS GOLD certified OCZ Z-Series Gold 850W PSU, model OCZZ850. This power supply unit has been tested to provide over 90% typical efficiency by Chroma System Solutions. To measure isolated video card power consumption, Benchmark Reviews uses the Kill-A-Watt EZ (model P4460) power meter made by P3 International.

A baseline measurement is taken without any video card installed in our test computer system, which is allowed to boot into Windows 7 and rest idle at the login screen before power consumption is recorded. Once the baseline reading has been taken, the graphics card is installed and the system is again booted into Windows and left idle at the login screen before the idle reading is taken. Our final loaded power consumption reading is taken while the video card runs a FurMark stress test. The chart below lists isolated video card power consumption (the measured total minus the system's draw without a video card) in Watts for each test product.
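The isolation step is just a subtraction. Here is a minimal Python sketch of it; since the article publishes only the isolated results, the wall readings used below are hypothetical placeholders.

    def isolated_draw_w(total_wall_w: int, baseline_wall_w: int) -> int:
        # Video-card-only draw: wall reading with the card installed minus
        # the same system's reading with no card (accurate to +/- 5 W).
        return total_wall_w - baseline_wall_w

    print(isolated_draw_w(126, 95))   # hypothetical idle readings   -> 31
    print(isolated_draw_w(356, 110))  # hypothetical loaded readings -> 246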

Video Card Power Consumption by Benchmark Reviews

VGA Product Description (sorted by combined total power) | Idle Power | Loaded Power
NVIDIA GeForce GTX 480 SLI Set | 82 W | 655 W
NVIDIA GeForce GTX 590 Reference Design | 53 W | 396 W
ATI Radeon HD 4870 X2 Reference Design | 100 W | 320 W
AMD Radeon HD 6990 Reference Design | 46 W | 350 W
NVIDIA GeForce GTX 295 Reference Design | 74 W | 302 W
ASUS GeForce GTX 480 Reference Design | 39 W | 315 W
ATI Radeon HD 5970 Reference Design | 48 W | 299 W
NVIDIA GeForce GTX 690 Reference Design | 25 W | 321 W
ATI Radeon HD 4850 CrossFireX Set | 123 W | 210 W
ATI Radeon HD 4890 Reference Design | 65 W | 268 W
AMD Radeon HD 7970 Reference Design | 21 W | 311 W
NVIDIA GeForce GTX 470 Reference Design | 42 W | 278 W
NVIDIA GeForce GTX 580 Reference Design | 31 W | 246 W
NVIDIA GeForce GTX 570 Reference Design | 31 W | 241 W
ATI Radeon HD 5870 Reference Design | 25 W | 240 W
ATI Radeon HD 6970 Reference Design | 24 W | 233 W
NVIDIA GeForce GTX 465 Reference Design | 36 W | 219 W
NVIDIA GeForce GTX 680 Reference Design | 14 W | 243 W
Sapphire Radeon HD 4850 X2 11139-00-40R | 73 W | 180 W
NVIDIA GeForce 9800 GX2 Reference Design | 85 W | 186 W
NVIDIA GeForce GTX 780 Reference Design | 10 W | 275 W
NVIDIA GeForce GTX 770 Reference Design | 9 W | 256 W
NVIDIA GeForce GTX 280 Reference Design | 35 W | 225 W
NVIDIA GeForce GTX 260 (216) Reference Design | 42 W | 203 W
ATI Radeon HD 4870 Reference Design | 58 W | 166 W
NVIDIA GeForce GTX 560 Ti Reference Design | 17 W | 199 W
NVIDIA GeForce GTX 460 Reference Design | 18 W | 167 W
AMD Radeon HD 6870 Reference Design | 20 W | 162 W
NVIDIA GeForce GTX 670 Reference Design | 14 W | 167 W
ATI Radeon HD 5850 Reference Design | 24 W | 157 W
NVIDIA GeForce GTX 650 Ti BOOST Reference Design | 8 W | 164 W
AMD Radeon HD 6850 Reference Design | 20 W | 139 W
NVIDIA GeForce 8800 GT Reference Design | 31 W | 133 W
ATI Radeon HD 4770 RV740 GDDR5 Reference Design | 37 W | 120 W
ATI Radeon HD 5770 Reference Design | 16 W | 122 W
NVIDIA GeForce GTS 450 Reference Design | 22 W | 115 W
NVIDIA GeForce GTX 650 Ti Reference Design | 12 W | 112 W
ATI Radeon HD 4670 Reference Design | 9 W | 70 W
* Results are accurate to within +/- 5W.

The factory-overclocked ASUS GeForce GTX 580 video card surprised us with chilly idle and lukewarm loaded temperatures, and the surprises keep coming as we measure power consumption. The GeForce GTX 580 requires one eight-pin and one six-pin PCI-E power connection for proper operation. Resting at idle with no GPU load, the ASUS GeForce GTX 580 consumed a modest 31 W by our measure. Compensating for a small margin of error, this falls roughly in line with the idle power draw of the GeForce 8800 GT, and slightly below the GeForce GTX 280 or GTX 465. Most notably, it's nearly 26% lower than the GF100 inside the GeForce GTX 480.

Once 3D applications begin to demand power from the GPU, electrical consumption climbs. Because the GeForce GTX 580 uses a new hardware power monitoring technology not included on previous GeForce models, we've had to work around the power-throttling it applies to FurMark and OCCT. To do so, we merely renamed FurMark.exe to Crysis.exe and changed some file names and locations. Measured at full throttle with FurMark's 3D 'torture' load, the GeForce GTX 580 topped out at 245 W maximum power draw. NVIDIA's stated maximum TDP is 244 W, which falls within our measurements once PSU efficiency and margin of error are factored in.
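Since the throttling keys off recognized stress-test executables rather than the load itself, the workaround is nothing more exotic than running a copy of the binary under a different name. A hypothetical one-liner, with illustrative paths:

    import shutil

    # Copy FurMark under a name the power monitor doesn't recognize;
    # point the paths at your own FurMark install.
    shutil.copy(r"C:\Tools\FurMark\FurMark.exe",
                r"C:\Tools\FurMark\Crysis.exe")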

Oddly enough, using ASUS Voltage Tweak to over-volt the ENGTX580 up to 925 MHz didn't exactly translate into higher power consumption. In fact, our average maximum readings were only 1-2 watts above the standard TDP. I realize that increasing from 962 mV to 1145 mV isn't a huge change, but I actually expected the power consumption to rise accordingly. While the idle power draw is identical to older models like the GeForce 8800 GT, it's better than the GeForce GTX 280 and more recent GTX 465 models. At full power load the GeForce GTX 580 matches up to the ATI Radeon HD 5870, but still operates well below the levels of a Radeon HD 5970.

ASUS GeForce GTX 580 Conclusion

IMPORTANT: Although the rating and final score mentioned in this conclusion are made to be as objective as possible, please be advised that every author perceives these factors differently at various points in time. While we each do our best to ensure that all aspects of the product are considered, there are oftentimes unforeseen market conditions and manufacturer changes that occur after publication and could render our rating obsolete. Please do not base any purchase solely on our conclusion, as it represents our product rating specifically for the product tested, which may differ from future versions. Benchmark Reviews begins our conclusion with a short summary for each of the areas that we rate.

Beginning with frame rate performance, the ASUS GeForce GTX 580 video card competes at a level comparable to the dual-GPU ATI Radeon HD 5970, and occasionally matched up well against a pair of AMD Radeon HD 6870's in CrossFireX or GeForce GTX 460's in SLI. All of these options share a similar premium price point, and offer similar DirectX 11 functionality. In comparison to the Fermi GF100-powered GeForce GTX 480, the GF110 proved that the GeForce GTX 580 is far more than an added streaming multiprocessor and a clock speed increase; it is the perfect blend of tessellation, shaders, and texture units we first witnessed with the GF104 inside the GeForce GTX 460.

In our DirectX 10 tests, 3DMark Vantage had the 782 MHz GeForce GTX 580 trailing slightly behind the Radeon HD 5970 and CrossFire 6870's in Jane Nash, but it comes back to match or exceed them in New Calico. Overclocked to 925 MHz, the ENGTX580 struggles to keep up with the two overclocked GTX 460's in SLI, but it matches or beats the others. Crysis Warhead has the GeForce GTX 580 and Radeon HD 5970 evenly matched at stock speeds, yet both trail the Radeon 6870's in CrossFire. In our DirectX 11 tests, Aliens vs Predator puts the GeForce GTX 580 behind the Radeon 5970 and CrossFire 6870's, and then Battlefield: Bad Company 2 positions it between them. BattleForge, Lost Planet 2, and Mafia II all report the GeForce GTX 580 even with, or outperforming, the Radeon HD 5970 and CrossFire 6870's. Testing with Metro 2033, the GeForce GTX 580 takes a turn south, and both Radeon contenders surpass it. Tom Clancy's HAWX2 clearly works better with GeForce tessellation, putting the GTX 580 and all other NVIDIA products well ahead of AMD Radeon video cards. Finally, the Unigine Heaven benchmark confirms the trends seen in all the tests leading up to it, positioning the ASUS GeForce GTX 580 slightly ahead of the dual-GPU ATI Radeon HD 5970 and trailing slightly behind a pair of AMD Radeon HD 6870's in CrossFireX. Further overclocking to 925 MHz sends the ENGTX580 past the competition.

ASUS_ENGTX580_2DI_1536MD5_Package.jpg

Appearance is a more subjective matter, since this rating doesn't have benchmark scores to fall back on. Partners traditionally offer their own unique twist on a design, with improved cooling solutions and colorful fan shrouds. NVIDIA doesn't allow this with the GeForce GTX 580, primarily because the video card operates so efficiently that improving an already effective cooling solution would be superfluous. The reference design exhausts nearly all of the heated air outside of the computer case, which could be critically important to overclockers wanting the best possible environment for their computer hardware. The deep chamfer in the shroud also preserves airflow to the Fermi GF110 GPU in SLI sets. Aside from some branding decals, the ENGTX580 is identical in physical appearance to the reference design. The GTX 580 might not look as tough as the GTX 480 on the outside, but it doesn't need to overcompensate by exposing a few heat-pipes.

Defining value in the premium-priced high-end segment isn't easy, because hardware enthusiasts know they're going to pay top dollar to own the top product. Even so, value is a fast-moving target, because prices on these products literally change by the minute in this industry. Even with a factory overclock and plenty of extra headroom, the ASUS ENGTX580/2DI/1536MD5 sells for $520 (Amazon and NewEgg), matching the most affordable GeForce GTX 580 models.

In summary, the GF110 GPU inside the ASUS ENGTX580 makes clear how far off the mark GF100 was when it launched with the GTX 480, possibly lending credibility to the leap in series number (GeForce 400 vs 500). The GeForce GTX 580 redeems the Fermi architecture with performance results the enthusiast community can finally appreciate, along with reduced power consumption and thermal output everyone can be grateful for. These are all impressive enhancements to the series, and some will argue that it's what NVIDIA should have done with Fermi from the start. ASUS further extends performance with their Voltage Tweak feature, which allowed us to overclock the ENGTX580 to a completely stable 925 MHz.

Taking all aspects of the ASUS GeForce GTX 580 into consideration, there's a lot more to like about the Fermi architecture now. The GF110 graphics processor finally has its priorities straight, and its dramatically more efficient power consumption has brought heat output down to the lowest level in many years. The expensive price tag on the GTX 580 is something consumers of premium top-end graphics products may already be used to, but it's hard to like a $500+ price point. I find myself reminded that this video card matches performance with a dual-GPU Radeon HD 5970 that was priced at $700 for the longest time, and the few remaining models still cost nearly $600. And while the ASUS GeForce GTX 580 proves itself a solid contender against the Radeon HD 5970, there's still the small matter of two GeForce GTX 460's in SLI or two AMD Radeon HD 6870's in CrossFire to consider. This will ultimately be a decision for the consumer, who either wants the best overall performance a single card can offer, along with several proprietary feature benefits available only from NVIDIA, or will settle for two lesser products that produce comparable frame rates.

If the ASUS GeForce GTX 580 video card was yours, would you overclock it using Voltage Tweak? Leave comments below, or ask questions in our Forum.

Pros:

+ Fastest single-unit DX11 graphics accelerator available
+ Matches performance with dual-GPU Radeon HD 5970
+ Outstanding performance for ultra high-end games
+ Much lower power consumption vs GTX 480
+ Reduced heat output and cooling fan noise
+ Fan exhausts all heated air outside of case
+ Includes native HDMI audio/video output
+ Adds 32x CSAA post-processing detail
+ Supports triple-SLI functionality
+ Adds Error Correcting Code (ECC)

Cons:

- Very expensive premium-level product
- Outperformed by overclocked GTX 460's in SLI
- Outperformed by CrossFire Radeon HD 6870's

Ratings:

  • Performance: 10.0
  • Appearance: 9.25
  • Construction: 9.75
  • Functionality: 9.75
  • Value: 6.75

Final Score: 9.1 out of 10.

Excellence Achievement: Benchmark Reviews Golden Tachometer Award.


Comments

# RE: ASUS ENGTX580 GeForce GTX 580 Video CardHarry 2010-11-22 20:42
What a waste of money to be honest to spend $550 bucks on a card which will be beaten within a few months- this thing is nearly as fast as 6850CF
 
# RE: RE: ASUS ENGTX580 GeForce GTX 580 Video CardOlin Coles 2010-11-22 22:05
Harry, not everyone wants to buy two 6850's and have no upgrade path a year or two later. So you might reconsider who's wasting money.
 
# RE: RE: RE: ASUS ENGTX580 GeForce GTX 580 Video CardHarry 2010-11-23 03:14
Olin, in two years time you'll be wanting to change your motherboard as well because the new PCIE interface is already in works. Also there aren't many heavy games around to justify an expensive upgrade. I mean a 5870 can easily handle games for up to two more years. The real upgrade mania comes when there is a game or a multimedia application to justify an upgrade, all these chip manufacturers are going through serious financial issues that's why they bring out newer models even when there isn't any demand.
 
# RE: RE: RE: RE: ASUS ENGTX580 GeForce GTX 580 Video CardOlin Coles 2010-11-23 14:26
See my reply to Gordon, below.

Using your own example, if someone is to purchase a new motherboard simply for the benefit of a new PCI-E 3.0 specification, which entails a new processor and possibly memory, why wouldn't they also buy the best graphics card now? You can't have it both ways, Harry.
 
# RE: RE: RE: RE: ASUS ENGTX580 GeForce GTX 580 Video CardTom 2010-11-24 04:56
Harry, even if the GTX580 is a bit slower than 6850CF and costs a little bit more, there's still the issue of Multi-card configurations CF/SLI being substantially more complicated than single card.
Just take a look inside any ATI/nvidia release notes and you'll see that the vast majority of bugs are in multi-card configurations.
I'm willing to sacrifices 10% performance and pay 10% more for the more simple/compatible option.
 
# justifyclaydough 2011-01-31 22:21
As an artist dreaming for a next gen 2.0 that can someday match what an "army" of creative types can achieve...
( not only in resolution but in pure/massive imaginary scope )
This attitude is very de-moralizing.
Why even read a graphic card benchmark review if you live for the "status quo"? Just git yerself a console and button mash to that.
The graphic card arms race assures me a future with tesselation, vsm soft shadows, and
global illumination:
##youtube.com/watch?v=Dd8yMPZzWfE&feature=related
none of which is or will be ready for prime time if we don't fan the flames.
The product is coming.
 
# RE: ASUS ENGTX580 GeForce GTX 580 Video CardDavid Ramsey 2010-11-22 20:59
Um, be beaten in a few months by what, exactly? And how do you know that whatever upcoming wonder card might beat it will be priced anywhere near it? As a hardware geek who must have the latest and greatest, I welcome whatever ATI can come up with in response. Right now, the GTX580's combination of power, noise, and low temps is unbeatable.
 
# superTrajan Long 2010-11-22 21:15
Crushes AMD's single GPU like roadkill. SLI is unbeatable. Great image quality and CUDA. IN many games 580 minimums are better than 5970. Smoother gameplay all around.
 
# SuccellusFranck 2010-11-23 04:12
Great article cover the single card contenders and the SLI that could and can match it.
Its clear its not worth yet buying it unless you dont thrust SLI and you have power consuption/heat issues.
BUT, since SLI is only relevant in few games its a factor to be considered.
Many already rushed to the market to buy the vanilla 580, i suppose many more will go to the premium that are coming.
 
# Worth it to doRealNeil 2010-11-23 09:06
It's worth the money if you have it, and I probably wouldn't over clock it either. It run well enough at stock, and if I want more performance then I'll buy another one of them. I had decided earlier on that I was going to but two GTX460's and put them into SLI for my new PC, but this card is smarter for me, so it's what I'll get.
 
# RE: ASUS ENGTX580 GeForce GTX 580 Video CardGordon 2010-11-23 14:00
Harry is making a very simple point that I think you are looking past in defense of being a techy like we all most likely are. The card may be great but the cost is too high and the performance over earlier cards isnt enough to justify spending another 3-5 hundred dollars a couple times a year to stay up to date. Technology moves forward at a quick rate but the improvements are very incremental, not large leaps. For each incremental improvement they pump out a new card and charge top dollar for it and add a higher number to the end of the cards type IE GTX - 260/270/280/290/blah blah blah 460/470/480 blah blah 580. I still own dual 260's and have no problems playing all the new games at an appealing framerate. When DX 11 becomes the absolute standard for games and games demand it then I will upgrade. But why stay ahead of the technology train when the tracks that guide it havent been built yet?
 
# RE: RE: ASUS ENGTX580 GeForce GTX 580 Video CardOlin Coles 2010-11-23 14:23
This isn't my first high-end video card review. I've been testing top level graphics cards for over four years, and every single time it's the same argument and complaint. It's a stale topic that I've written about at length, which I doubt you or Harry has ever read, so I'll summarize for you: They don't continue to make Ferraris and Lamborghinis because they're fuel efficient or economic, they make them because people will buy them. To this end, high-end video cards will always have a market. It may not be the majority market which is filled with integrated graphics and casual computer users, and it may not be the mainstream graphics market that spawns from these high-end successes, but it's a big enough market to continue pushing the envelope.

Additionally, you claim that software doesn't exist to warrant the need for products like the GTX 580, and I believe this to be untrue. Have you ever tried playing Crysis with all the graphical glory it was designed with? How about Metro 2033, Lost Planet 2, or Mafia II? Sure, you can play them with PhysX disabled and AA/AF turned down low, but that's like watching movies in standard definition. The logic of this argument also means that DVD and Blu-ray are unnecessary technologies, and VHS was 'good enough'. Not everybody wants to stand still... some of us enjoy the slow march forward.
 
# *THOROUGH*Zack 2010-11-24 07:06
The last two reviews (the general 580 and this one) were incredibly thorough and useful. The tools used, the descriptions, the overclocked charts, price per FPS per game, etc. were all GREATLY appreciated by this humbled little nerd. Even the comments were knowledgable and delivered in a useful way.

THANK YOU VERY MUCH for this review. I hope you get a raise.
 
# Tricky Spot...BeerGutBarry 2010-11-25 01:54
I love the new GTX 580, but can get the new Gigabyte 480 SOC for a bit cheaper...torn.

Curious to see how far partners can push this card though. The improvements made to the 480 from release to now have been pretty impressive, so wondering if similar gains can be made with the 580, or if this is already pushing the limit (without causing a massive increase in power consumption and removing throttling).
 
# RE: Tricky Spot...Olin Coles 2010-11-25 04:57
If it's a choice between the GTX 580 and GTX 480 SOC, I'd jump on the GTX 580 in an instant.
 
# how can i spend 770$ ??LASER 2010-11-25 03:26
Right Olin!! yea i dont wanna feel that my card is struggling to play the best game i liked..looking at tesselation video that was just to explain the idea.... coming game might share something which wont feel like u playing a game...so tesselating hardware i decided...

GTX 580 in India is for 770$ !!!
GTX 570 will be at 520$ for me....
i saw ferrari ships for 20$-84$ if u order some merchandise...
So is there any chances i can get that GTX 580 in 550$ here???
 
# Great ReviewWarren 2010-11-25 12:54
Great review. I bought two of these, thus proving that there are people out there who will buy the very best regardless of cost.

As you rightly mention, Ferrari do not make cars that are the most fuel efficient at 60Mph, they know that there are buyers out there who want the best at any price.
 
# To Ferrari or Not FerrariRickard 2010-12-06 02:14
Weird review, mixing crossfire/sli scores with normal setups. Would have been more easily understood if you had had a separate table for cf/sli setups.

Also it would be interesting to see the scores mapped to a dollar/performance index to see where the best gain is to be had.

Further its interesting that the cf/sli scores are slightly different with other reviews where the 580 consistently (well almost) had scored lower than the 5970.

I know many think hey its a dual gpu card but hey it costs less and you can still cf/sli it so I dont really see the point they are trying to make.

Imo Nvidia is on the right track and the 460 and 580 is really good cards but atm they are basically catching up and I am waiting to see what the response from AMD is before I commit to an upgrade.
 
# fast new cards?Laszlo 2010-12-10 08:21
For me, a top card is one which beat the older top dog. But none of the new cards doing this (eg. none of them beat the GTX295's raw power: most of them slower except in 1-2 titles),even if they equipped much faster ram, clocked higher,... I would like to see a GTX295 manufactured with 40nm process, GDDR5 mems, 512bit wide mem bus, clocked as high as the new cards. What would touch it in DX10? So we get new feats (which maybe could be SW rendered too..) but not speed. Like 470 -> 570: most of the 25-27% speed advantage of the 570 gained from higher clock speed and just fragments from any optimization and the one less disabled CUDA or other unit. From another point of view: it's not a coincident that first came the slower 480 (vs. 580), then 470 (crippled) then 580 with more MHz, then 570.. maybe we will see a 680 too if the 40nm process allows more than 1 GHz - as a new, very advanced chip of course (which is internally the same). We getting the same, only the price/crippling ratio changes. We are very far away from ray traced rendering, real physics, real materials, ballistics, etc.
 
# RE: fast new cards?Olin Coles 2010-12-10 08:29
I might be misunderstanding your message, so allow me to seek clarification. Are you saying a GeForce GTX 295 outperforms the GTX 580? Just so that you know, it really doesn't even come close. Here's exhibit A: /index.php?option=com_content&task=view&id=403

If the Radeon HD 5970 can clear it by 35-40% in Crysis and Heaven, then look at where the GTX 580 sits against the Radeon 5970 on those tests in this article.
 
# RE: ASUS ENGTX580 GeForce GTX 580 Video CardLaszlo 2010-12-10 10:38
Yes I was misunderstable. Clock to clock (same GPU/Mem) the 580 would be slower. Or: the 295 would be much faster, manufactured @ 40nm process and set to same clock speeds with same memory modules. = We don't get what nv can make, we get crippled cards - but with new feats. If the 580 would get the GTX285's 512 bit wide memory bus (the 295 also crippled to 448 bit, and from 32 to 28 ROPs per GPU), and 2048 MB memory, 64 ROPs, etc (as it's double number of transistors dictate), then it would be a real high end (technology wise) card. Now it's a very fast card, that milk the consumers like a cow, then nv needs to 'invent' some of their inventions again and we never get a new generation of cards that 100 or more percent faster than their precedessors. When they made their first(?), the Riva128(?) then there was no other just their best chip, and it was blown away by their next, faster chip. Not by one more feat, but by raw power - in ALL games.
Sadly they make only high end chips these days too, then crippling them down to a price category.
So these new cards are fast but far less than they should be - and they should be 50-100% faster in everything, especially in 'future' games that really use their new feats.
 
# Nice reviewThomas 2011-01-15 14:42
Very nice review, well done. Thanks very much! That'll be about it...greetings
 
# To Olin ColesGordon 2011-01-31 22:59
Olin,

Hey, so to clarify I was only trying to make what Harry was saying a little more clear as I assumed there was some misunderstanding. I agree with you in every way as to what you replied, and I should say that your comparison to Ferrari and Lamroghini was spot on. I am definately in the casual group but not by choice, lol. I would, if i could afford, like to stay up with all the newest hardware as I am a firm believer that your rig is only as strong as its weakest component. My gtx 260's: Do they play even Crysis on appealing settings at a decent enough frame rate? Yes I can push them to relatively high settings and play at about 30 or more FPS. BUUUUUT I want a GTX 580 so I can crank that up just as much as the next guy HAHA. Either way I liked your review and found it very informative.