ASUS ENGTX480/2DI/1536MD5 GeForce GTX 480
Reviews - Featured Reviews: Video Cards
Written by Olin Coles   
Friday, 01 October 2010

ASUS ENGTX480 GeForce GTX 480

Benchmark Reviews tests the GF100 Fermi GPU six months after its launch, and is reminded that a refined fabrication process and mature firmware can make a real difference.

Back in late March (2010), when Benchmark Reviews revealed NVIDIA's GeForce GTX 480 video card, Fermi fever was hot and expectations were unreasonably high. After months of wild rumor and inaccurate speculation, the enthusiast community was finally able to purchase the world's most powerful single-GPU graphics card. As this article is written, it's been half a year since that launch and the landscape has hardly changed. NVIDIA still sells the most powerful graphics processor made, and the competition has resorted to a low- and middle-market focus on value over pure performance. Fortunately for NVIDIA, the launch of the GeForce GTX 460 and GTS 450 product lines has made the value play extremely difficult for AMD. Yet, for a select few with money set aside for elite-level graphics, the top-end GeForce GTX 480 video card still offers good reason for purchase.

In this article, Benchmark Reviews tests the ASUS ENGTX480/2DI/1536MD5 GeForce GTX 480 video card. This GF100 Fermi-based graphics card is the key to many things: high-performance 3D-Vision and 3D-Vision Surround, unmatched PhysX potential, and one part of an unbeatable SLI set. NVIDIA Forceware driver optimizations and firmware refinements have turned the GTX 480 into the video card it should have been from day one, reducing operating temperatures relative to the original production samples and improving power consumption and fan noise. Priced at $486, the ASUS ENGTX480/2DI/1536MD5 empowers DirectX-11 video games to deliver unmatched geometric realism. Our benchmark tests compare 3D frame rate performance of the ASUS GeForce GTX 480 in single-card and SLI modes against some of the most powerful graphics products on the market.

ASUS_ENGTX480_Video_Card_Splash.jpg

ASUS ENGTX480/2DI/1536MD5 Video Card

At the center of every new technology is purpose, and NVIDIA has designed their Fermi GF100 GPU with the end-goal of redefining the video game experience through significant graphics processor innovations. Disruptive technology often changes the way users interact with computers, and the GeForce GTX 480 graphics card is a complex tool built to arrive at one simple destination: immersive entertainment, especially when paired with NVIDIA GeForce 3D Vision. Benchmark Reviews tests graphics frame rate performance of the ASUS GeForce GTX 480 using several of the most demanding PC video game titles and benchmark software available. Old favorites such as Crysis Warhead and 3DMark Vantage are included. New to the scene are Aliens vs Predator, Battlefield: Bad Company 2, Mafia II, Metro 2033, BattleForge, and the recently released Unigine Heaven 2.1 benchmark.

TSMC, the largest semiconductor foundry on the planet, has had a great deal of difficulty with its 40nm process shrink. Although the GF100 was originally intended to feature 512 CUDA cores, NVIDIA was faced with limited yields from TSMC, and decided to end the consumers' wait by offering gamers and PC hardware enthusiasts a 480-core solution. Powered by 48 ROPs and 480 unified CUDA (shader) cores, the GF100 Fermi GPU has 3.2-billion transistors to help process DirectX-11 commands and render some of the most detailed graphics ever seen on the PC platform. Tessellation is the word for 2010, and DX11 brings movie-quality graphics to life in consumer-level video games.

Manufacturer: ASUSTek Computer, Inc.
Product Name: GeForce GTX 480
Model Number: ENGTX480/2DI/1536MD5
Price As Tested: $463 at Amazon or $500 at NewEgg

Full Disclosure: The product sample used in this article has been provided by ASUS.

NVIDIA Fermi Features

In today's complex graphics, tessellation offers the means to store massive amounts of geometry in compact, coarse form, with expand-on-demand functionality. In the NVIDIA GF100 GPU, tessellation also enables more complex animations. In terms of model scalability, dynamic Level of Detail (LOD) allows quality and performance trade-offs whenever better picture quality can be delivered without a performance penalty. Comprised of three layers (original geometry, tessellation geometry, and displacement map), the final product is far more detailed in shading and expanded data than if it were constructed with bump-map technology. In plain terms, tessellation gives real peaks and valleys with shadow detail in-between, while previous-generation technology (bump-mapping) would only give the illusion of detail.

id-imp-tessellated-character.jpg

Using GPU-based tessellation, a game developer can send a compact geometric representation of an object or character, and the tessellation unit can produce the correct geometric complexity for the specific scene. Consider the "Imp" character illustrated above. On the far left we see the initial quad mesh used to model the general outline of the figure; this representation is quite compact, even when compared to typical game assets. The two middle images of the character are created by finely tessellating the description at the left. The result is a very smooth appearance, free of any of the faceting that resulted from limited geometry. Unfortunately this character, while smooth, is no more detailed than the coarse mesh. The final image on the right was created by applying a displacement map to the smoothly tessellated character to its left.

Benchmark Reviews offers more detail in our full-length NVIDIA GF100 GPU Fermi Graphics Architecture guide.

Tessellation in DirectX-11

Control hull shaders run DX11 pre-expansion routines, and operate explicitly in parallel across all points. Domain shaders run post-expansion operations on maps (u/v or x/y/z/w) and are also implicitly parallel. Fixed-function tessellation is configured by Level of Detail (LOD) based on output from the control hull shader, and can also produce triangles and lines if requested. Tessellation is new to NVIDIA GPUs; it was not part of GT200 because of geometry bandwidth bottlenecks caused by sequential rendering/execution semantics.

In regard to the GF100 graphics processor, NVIDIA has added new PolyMorph and Raster engines to handle world-space processing (PolyMorph) and screen-space processing (Raster). There are sixteen PolyMorph engines and four Raster engines on the GF100, which depend on an improved L2 cache to keep buffered geometric data produced by the pipeline on-die.

GF100 Compute for Gaming

As developers continue to search for novel ways to improve their graphics engines, the GPU will need to excel at a diverse and growing set of graphics algorithms. Since these algorithms are executed via general compute APIs, a robust compute architecture is fundamental to a GPU's graphical capabilities. In essence, one can think of compute as the new programmable shader. GF100's compute architecture is designed to address a wider range of algorithms and to facilitate more pervasive use of the GPU for solving parallel problems. Many algorithms, such as ray tracing, physics, and AI, cannot exploit shared memory, because program memory locality is only revealed at runtime. GF100's cache architecture was designed with these problems in mind. With up to 48 KB of L1 cache per Streaming Multiprocessor (SM) and a global L2 cache, threads that access the same memory locations at runtime automatically run faster, irrespective of the choice of algorithm.

NVIDIA's codename NEXUS brings CPU and GPU code development together in Microsoft Visual Studio 2008 for a shared process timeline, and also introduces the first hardware-based shader debugger. NVIDIA's GF100 is the first GPU to ever offer full C++ support, the programming language of choice among game developers. To ease the transition to GPU programming, NVIDIA developed Nexus, a Microsoft Visual Studio programming environment for the GPU. Together with new hardware features that provide better debugging support, developers will be able to enjoy CPU-class application development on the GPU. The end result is C++ and Visual Studio integration that brings HPC users onto the same development platform. NVIDIA offers several paths to deliver compute functionality on the GF100 GPU, such as CUDA C++ for video games.

Image processing, simulation, and hybrid rendering are three primary functions of GPU compute for gaming. Using NVIDIA's GF100 GPU, interactive ray tracing becomes possible for the first time on a standard PC. Ray tracing performance on the NVIDIA GF100 is roughly 4x faster than it was on the GT200 GPU, according to NVIDIA tests. AI path finding is a compute-intensive process well suited for GPUs, and the NVIDIA GF100 can handle AI obstacles approximately 3x better than the GT200. Benefits from this improvement are faster collision avoidance and shortest-path searches for higher-performance path finding.

GF100 Specifications

  • 512 CUDA Cores
  • 16 Geometry Units
  • 4 Raster Units
  • 64 Texture Units
  • 48 ROP Units
  • 384-bit GDDR5
  • DirectX-11 API Support

GeForce 400-Series Specifications

Graphics Card: GeForce GTS 450 / GeForce GTX 460 / GeForce GTX 465 / GeForce GTX 470 / GeForce GTX 480
GPU Transistors: 1.17 Billion / 1.95 Billion / 3.2 Billion / 3.2 Billion / 3.2 Billion
Graphics Processing Clusters: 1 / 2 / 4 / 4 / 4
Streaming Multiprocessors: 4 / 7 / 11 / 14 / 15
CUDA Cores: 192 / 336 / 352 / 448 / 480
Texture Units: 32 / 56 / 44 / 56 / 60
ROP Units: 16 / 24 (768MB) or 32 (1GB) / 32 / 40 / 48
Graphics Clock (Fixed Function Units): 783 MHz / 675 MHz / 607 MHz / 607 MHz / 700 MHz
Processor Clock (CUDA Cores): 1566 MHz / 1350 MHz / 1215 MHz / 1215 MHz / 1401 MHz
Memory Clock (Clock Rate/Data Rate): 902/3608 MHz / 900/3600 MHz / 837/3348 MHz / 837/3348 MHz / 924/3696 MHz
Total Video Memory: 1024MB GDDR5 / 768MB or 1024MB GDDR5 / 1024MB GDDR5 / 1280MB GDDR5 / 1536MB GDDR5
Memory Interface: 128-Bit / 192-Bit (768MB) or 256-Bit (1GB) / 256-Bit / 320-Bit / 384-Bit
Total Memory Bandwidth: 57.7 GB/s / 86.4 or 115.2 GB/s / 102.6 GB/s / 133.9 GB/s / 177.4 GB/s
Texture Filtering Rate (Bilinear): 25.1 / 37.8 / 26.7 / 34.0 / 42.0 GigaTexels/s
GPU Fabrication Process: 40 nm (all models)
Output Connections: 2x Dual-Link DVI-I + 1x Mini HDMI (all models)
Form Factor: Dual-Slot (all models)
Power Input: 6-Pin / 2x 6-Pin / 2x 6-Pin / 2x 6-Pin / 6-Pin + 8-Pin
Thermal Design Power (TDP): 106 Watts / 150 Watts (768MB) or 160 Watts (1GB) / 200 Watts / 215 Watts / 250 Watts
Recommended PSU: 400 Watts / 450 Watts / 550 Watts / 550 Watts / 600 Watts
GPU Thermal Threshold: 95°C / 104°C / 105°C / 105°C / 105°C

GeForce Fermi Chart Courtesy of Benchmark Reviews
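
The bandwidth and fill-rate rows in the chart follow directly from the clock and width rows. The Python sketch below (our own illustration, not part of the original chart) reproduces the GTX 480 column, assuming the usual formulas: bandwidth = data rate x (interface width / 8), and bilinear texture fill rate = texture units x graphics clock.

    # Reproduce the GTX 480 column's derived specifications.
    data_rate_mhz = 3696       # GDDR5 effective data rate (924 MHz clock x 4)
    bus_width_bits = 384       # memory interface width
    texture_units = 60
    graphics_clock_mhz = 700

    bandwidth_gb_s = data_rate_mhz * (bus_width_bits / 8) / 1000
    texel_rate_gt_s = texture_units * graphics_clock_mhz / 1000

    print(f"Memory bandwidth: {bandwidth_gb_s:.1f} GB/s")    # 177.4 GB/s
    print(f"Texture fill rate: {texel_rate_gt_s:.1f} GT/s")  # 42.0 GigaTexels/s

The same arithmetic reproduces the other columns; for example, the GTS 450 works out to 3608 x 16 / 1000 = 57.7 GB/s.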

Closer Look: ASUS ENGTX480

Up to now, 2010 has been an exciting year for game developers. Microsoft Windows 7 (and updated Windows Vista) introduced gamers to DirectX-11, allowing video games released for the PC platform to look better than ever. DirectX-11 is the leap in video game software development we've been waiting for. Screen Space Ambient Occlusion (SSAO) is given emphasis in DX11, allowing some of the most detailed computer textures gamers have ever seen. Cracks in mud with definable depth and splintered tree bark make games more realistic, but they also make new demands on the graphics hardware. This new level of graphical detail requires a new level of computer hardware: DX11-compliant hardware. Tessellation places a tremendous strain on the GPU, making previous graphics hardware virtually obsolete with new DX11 game titles.

ASUS_ENGTX480_Video_Card_Package.jpg

The ASUS ENGTX480/2DI/1536MD5 is very similar to the reference design, and retires NVIDIA's GeForce GTX 285 series as the most powerful single-GPU desktop graphics card. Keeping the same dual-slot height and 10.5" length, the ASUS GeForce GTX 480 adds a more robust thermal management system with five heatpipes (four of them exposed) that transfer heat away from the GF100 GPU to an exposed heatsink surface. Video frame buffer memory specifications change with the GTX 480: the 512-bit memory interface of the GTX 285 is replaced by a 384-bit version, which features 1536MB of GDDR5 vRAM operating at 924/3696 MHz clock and data rates.

ASUS_ENGTX480_Video_Card_Top.jpg

Other changes occur in more subtle ways, such as with the device header panel. While keeping to the traditional design used for GeForce 200-series products, the GTX 480 simply replaces the S-Video connection with a more relevant (mini) HDMI 1.3a A/V output. ASUS has retained dual DVI output on their GTX 480, which means that at least two GeForce video cards will be needed for NVIDIA 3D-Vision Surround functionality. As with past GeForce video cards, the GF100 GPU offers two output 'lanes', so only two of the three output devices can operate at once.

ASUS_ENGTX480_Video_Card_Header.jpg

The 40nm GPU fabrication process makes room for more transistors on the die, now increased from 1.4-billion in the GT200 GPU present on the GeForce GTX 285 to an astounding 3.2-billion built into the Fermi GF100 GPU used on the ASUS GeForce GTX 480. While the exposed heatsink surface does well enough to remove a portion of heat from the 700MHz Fermi GPU, the wide exhaust vents begin to expel hot air from the moment the system starts up. Additionally, the GF100 Fermi GPU is located closer to the exhaust panel than GT200 was, and although the heatsink and heat-pipes are improved over previous designs, the loaded operating temperature runs very warm at full output. We explore operating temperatures later in this article.

ASUS_ENGTX480_Video_Card_Angle.jpg

NVIDIA designed the GTX 480 for a 250-watt Thermal Design Power (TDP); however, the 6+8-pin power connections are good for up to 400W on demand. Keep this in mind when shopping for a power supply unit, because NVIDIA recommends a 600W PSU for the GTX 480, although I feel that an 800W PSU would be a safer suggestion for upgrade longevity.

NVIDIA_GeForce_GTX-480_Video_Card_Side.jpg

NVIDIA has designed triple-SLI capability for all of their GF100 Fermi video cards: GTX 465, 470, and GTX 480. Technically SLI and triple-SLI are possible, but careful consideration for heat must be given to the GTX 480. Under maximum load using FurMark, the ASUS GTX 480 video card reached 84°C in a 20°C room with no additional cooling assistance from the computer case. Upon close inspection of the printed circuit board (PCB), there's an opening behind the blower fan that allows intake air to be drawn from either side of the unit.

The PCB is a busy place for the GeForce GTX 480. Many of the electronic components have been relocated to the 'top' side of the PCB to better accommodate the fully-grown 530 mm2 GF100 GPU and its 3.2-billion transistors. Its 480 CUDA cores operate at 1401 MHz, which keeps a firm lead over AMD's 850 MHz Cypress-XT GPU that measures 334 mm2 and fits 2.154-billion transistors.
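
For perspective on those die figures, a simple division (our own back-of-the-envelope arithmetic, not a vendor specification) shows the two chips pack transistors at a similar density:

    # Transistor density from the die sizes and counts quoted above.
    gf100_density = 3.2e9 / 530        # ~6.0 million transistors per mm^2
    cypress_density = 2.154e9 / 334    # ~6.4 million transistors per mm^2
    print(f"GF100: {gf100_density / 1e6:.1f}M per mm^2")
    print(f"Cypress-XT: {cypress_density / 1e6:.1f}M per mm^2")

In other words, the Fermi GPU's size disadvantage comes almost entirely from its transistor count, not from looser packing on TSMC's 40nm node.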

Zotac_GeForce_GTX-480_PCB.jpg

Over the next several sections, Benchmark Reviews explains our video card test methodology followed by a performance comparison of the ASUS GeForce GTX 480 against several of the most popular graphics accelerators available. The GeForce GTX 480 replaces the DX10 GeForce GTX 285, and directly competes against the DX11 AMD Radeon HD 5870; so we'll be keeping a close eye on comparative performance.

VGA Testing Methodology

The Microsoft DirectX-11 graphics API is native to the Microsoft Windows 7 Operating System, and Windows 7 will be the primary O/S for our test platform. DX11 is also available as a Microsoft Update for the Windows Vista O/S, so our test results apply to both versions of the Operating System. The majority of benchmark tests used in this article measure DX11 performance; however, some high-demand DX10 tests have also been included.

According to the Steam Hardware Survey published for the month ending August 2010, the most popular gaming resolution is 1280x1024 (17-19" standard LCD monitors). However, because this 1.31MP resolution is considered 'low' by most standards, our benchmark performance tests concentrate on higher-demand resolutions: 1.76MP 1680x1050 (22-24" widescreen LCD) and 2.30MP 1920x1200 (24-28" widescreen LCD monitors). These resolutions are more likely to be used by high-end graphics solutions, such as those tested in this article.

In each benchmark test, one 'cache run' is conducted, followed by five recorded test runs. Results are collected at each setting, with the highest and lowest results discarded. The remaining three results are averaged and displayed in the performance charts on the following pages, as the sketch below illustrates.
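
A minimal sketch of that averaging method, assuming five recorded frame-rate samples per setting:

    # Drop the highest and lowest of five runs, then average the middle three.
    def benchmark_average(runs):
        assert len(runs) == 5, "methodology calls for five recorded runs"
        middle_three = sorted(runs)[1:-1]
        return sum(middle_three) / len(middle_three)

    # Hypothetical samples for illustration only:
    print(round(benchmark_average([28.9, 29.4, 29.2, 29.6, 28.5]), 1))  # 29.2

Discarding the outliers keeps a single anomalous run (a background task, a missed vsync) from skewing the charted result.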

A combination of synthetic and video game benchmark tests have been used in this article to illustrate relative performance among graphics solutions. Our benchmark frame rate results are not intended to represent real-world graphics performance, as this experience would change based on supporting hardware and the perception of individuals playing the video game.

Cost to Performance Ratio

For this article, Benchmark Reviews has included cost per FPS for graphics performance results. An average of the five least expensive product prices is calculated, which does not factor tax, freight, promotional offers, or rebates into the cost. All prices reflect product series components, and do not represent any specific manufacturer, model, or brand; the calculation itself is sketched in the example after this list. The median retail prices for each product were obtained from NewEgg.com and were current as of 01-October-2010:

  • $140 Radeon HD 5770 1GB
  • $220 GeForce GTX 460 1GB
  • $260 Radeon HD 5850 1GB
  • $295 GeForce GTX 470 1280MB
  • $370 Radeon HD 5870 1GB
  • $470 GeForce GTX 480 1536MB
  • $650 Radeon HD 5970 2GB
  • $940 GeForce GTX 480 SLI
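
As referenced above, the cost-per-FPS figure is simple division. The sketch below (the helper function is our own illustration) uses the Jane Nash results reported later in this article, 29.2 FPS for a single GTX 480 and 53.2 FPS in SLI at 1920x1200, to reproduce two of the published figures:

    # Cost per FPS = product price / average frame rate.
    def cost_per_fps(price_usd, avg_fps):
        return price_usd / avg_fps

    print(f"GTX 480:     ${cost_per_fps(470, 29.2):.2f} per FPS")  # $16.10
    print(f"GTX 480 SLI: ${cost_per_fps(940, 53.2):.2f} per FPS")  # $17.67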

Intel X58-Express Test System

DirectX-10 Benchmark Applications

  • 3DMark Vantage v1.02
    • Extreme Settings: (Extreme Quality, 8x Multisample Anti-Aliasing, 16x Anisotropic Filtering, 1:2 Scale)
  • Crysis Warhead v1.1 with HOC Benchmark
    • Extreme Settings: (Very High Quality, 4x AA, 16x AF, Airfield Demo)

DirectX-11 Benchmark Applications

  • Aliens vs Predator
    • Extreme Settings: (Very High Quality, 4x AA, 16x AF, SSAO, Tessellation, Advanced Shadows)
  • BattleField: Bad Company 2
    • Extreme Settings: (Highest Quality, HBAO, 8x AA, 16x AF, 180s Fraps Single-Player Intro Scene)
  • BattleForge v1.2
    • Extreme Settings: (Very High Quality, 8x Anti-Aliasing, Auto Multi-Thread)
  • Mafia II
    • Extreme Settings: (Antialiasing, 16x AF, High Shadow Quality, High Detail, High Geometry, Ambient Occlusion)
  • Metro 2033
    • Extreme Settings: (Very-High Quality, AAA, 16x AF, Advanced DoF, Tessellation, 180s Fraps Chase Scene)
  • Unigine Heaven Benchmark 2.1
    • Extreme Settings: (High Quality, Normal Tessellation, 16x AF, 4x AA)

Video Card Test Products

Graphics Card: Radeon HD4890 / GeForce GTX285 / Radeon HD5850 / GeForce GTX470 / Radeon HD5870 / GeForce GTX480 / Radeon HD5970
GPU Cores: 800 / 240 / 1440 / 448 / 1600 / 480 / 3200 (1600 per GPU)
Core Clock (MHz): 850 / 670 / 725 / 608 / 850 / 700 / 725
Shader Clock (MHz): N/A / 1550 / N/A / 1215 / N/A / 1401 / N/A
Memory Clock (MHz): 975 / 1300 / 1000 / 837 / 1200 / 924 / 1000
Memory Amount: 1024MB GDDR5 / 1024MB GDDR3 / 1024MB GDDR5 / 1280MB GDDR5 / 1024MB GDDR5 / 1536MB GDDR5 / 2048MB GDDR5
Memory Interface: 256-bit / 512-bit / 256-bit / 320-bit / 256-bit / 384-bit / 512-bit (256-bit per GPU)

DX10: 3DMark Vantage

3DMark Vantage is a PC benchmark suite designed to test DirectX-10 graphics card performance. FutureMark 3DMark Vantage is the latest addition to the 3DMark benchmark series built by Futuremark Corporation. Although 3DMark Vantage requires NVIDIA PhysX to be installed for program operation, only the CPU/Physics test relies on this technology.

3DMark Vantage offers benchmark tests focusing on GPU, CPU, and Physics performance. Benchmark Reviews uses the two GPU-specific tests for grading video card performance: Jane Nash and New Calico. These tests isolate graphical performance, and remove processor dependence from the benchmark results.

  • 3DMark Vantage v1.02
    • Extreme Settings: (Extreme Quality, 8x Multisample Anti-Aliasing, 16x Anisotropic Filtering, 1:2 Scale)

3DMark Vantage GPU Test: Jane Nash

Of the two GPU tests 3DMark Vantage offers, the Jane Nash performance benchmark is slightly less demanding. In a short video scene the special agent escapes a secret lair by water, nearly losing her shirt in the process. Benchmark Reviews tests this DirectX-10 scene at 1680x1050 and 1920x1200 resolutions, and uses Extreme quality settings with 8x anti-aliasing and 16x anisotropic filtering. The 1:2 scale is utilized, and is the highest this test allows. By maximizing the processing levels of this test, the scene creates the highest level of graphical demand possible and sorts the strong from the weak.

3dMark_Vantage_Jane_Nash_Benchmark.jpg

Jane Nash Extreme Quality Settings

Cost Analysis: Jane Nash (1920x1200)

  • $260 Radeon HD 5850 1GB costs $11.30 per FPS
  • $295 GeForce GTX 470 1280MB costs $13.05 per FPS
  • $370 Radeon HD 5870 1GB costs $12.85 per FPS
  • $470 GeForce GTX 480 1536MB costs $16.10 per FPS
  • $650 Radeon HD 5970 2GB costs $16.13 per FPS
  • $940 GeForce GTX 480 SLI costs $17.67 per FPS

3DMark Vantage GPU Test: New Calico

New Calico is the second GPU test in the 3DMark Vantage test suite. Of the two GPU tests, New Calico is the most demanding. In a short video scene featuring a galactic battleground, there is a massive display of busy objects across the screen. Benchmark Reviews tests this DirectX-10 scene at 1680x1050 and 1920x1200 resolutions, and uses Extreme quality settings with 8x anti-aliasing and 16x anisotropic filtering. The 1:2 scale is utilized, and is the highest this test allows. Using the highest graphics processing level available allows our test products to separate themselves and stand out (if possible).

3dMark_Vantage_New_Calico_Benchmark.jpg

New Calico Extreme Quality Settings

Cost Analysis: New Calico (1920x1200)

  • $260 Radeon HD 5850 1GB costs $15.12 per FPS
  • $295 GeForce GTX 470 1280MB costs $15.37 per FPS
  • $370 Radeon HD 5870 1GB costs $16.97 per FPS
  • $470 GeForce GTX 480 1536MB costs $18.58 per FPS
  • $650 Radeon HD 5970 2GB costs $19.12 per FPS
  • $940 GeForce GTX 480 SLI costs $19.15 per FPS

Test Summary: In terms of value, the ASUS GeForce GTX 480 falls between the AMD Radeon HD 5870 and Radeon HD 5970 video cards. This series of high-end graphics is already relegated to affluent gamers and hardware elites, so the price comparison isn't as important as the performance characteristics. At 1920x1200 resolution in the New Calico benchmark scene, the ASUS ENGTX480 produced almost 14% better performance than the Radeon HD 5870. In other tests the GTX 480 and HD 5870 offered similar performance.

In regard to SLI scaling, a single GTX 480 produced 29.2 FPS in the 1920x1200 Jane Nash test, compared to 53.2 FPS in SLI. The New Calico test rendered 25.3 FPS on a single GTX 480, while a pair of them produced 49.1 FPS. This is clear evidence that NVIDIA's SLI technology has become extremely efficient, and can nearly double the performance of a single card without much (if any) degradation, as the sketch below illustrates.
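
The SLI efficiency figures quoted throughout this article follow from a simple ratio; the sketch below assumes efficiency is the SLI frame rate divided by twice the single-card frame rate:

    # SLI scaling efficiency: how close a pair comes to doubling one card.
    def sli_efficiency(single_fps, sli_fps):
        return sli_fps / (2 * single_fps)

    print(f"Jane Nash:  {sli_efficiency(29.2, 53.2):.0%}")  # ~91%
    print(f"New Calico: {sli_efficiency(25.3, 49.1):.0%}")  # ~97%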


DX10: Crysis Warhead

Crysis Warhead is an expansion pack based on the original Crysis video game. Crysis Warhead is set in the future, where an ancient alien spacecraft has been discovered beneath the Earth's surface on an island east of the Philippines. Crysis Warhead uses a refined version of the CryENGINE2 graphics engine. Like Crysis, Warhead uses the Microsoft Direct3D 10 (DirectX-10) API for graphics rendering.

Benchmark Reviews uses the HOC Crysis Warhead benchmark tool to test and measure graphics performance using the Airfield 1 demo scene. This short test places a high amount of stress on a graphics card, not only because of its detailed terrain and textures, but also because of the test settings used. Using the DirectX-10 test with Very High Quality settings, the Airfield 1 demo scene receives 4x anti-aliasing and 16x anisotropic filtering to create maximum graphics load and separate the products according to their performance.

Using the highest quality DirectX-10 settings with 4x AA and 16x AF, only the most powerful graphics cards are expected to perform well in our Crysis Warhead benchmark tests. DirectX-11 extensions are not supported in Crysis: Warhead, and SSAO is not an available option.

  • Crysis Warhead v1.1 with HOC Benchmark
    • Extreme Settings: (Very High Quality, 4x AA, 16x AF, Airfield Demo)

Crysis_Warhead_Benchmark.jpg

Crysis Warhead Extreme Quality Settings

Cost Analysis: Crysis Warhead (1920x1200)

  • $260 Radeon HD 5850 1GB costs $11.30 per FPS
  • $295 GeForce GTX 470 1280MB costs $10.93 per FPS
  • $370 Radeon HD 5870 1GB costs $13.70 per FPS
  • $470 GeForce GTX 480 1536MB costs $13.82 per FPS
  • $650 Radeon HD 5970 2GB costs $16.67 per FPS
  • $940 GeForce GTX 480 SLI costs $17.09 per FPS

Test Summary: The CryENGINE2 graphics engine used in Crysis Warhead allows the GeForce GTX 480 to outperform the AMD Radeon HD 5870 by 26% and nearly match the performance of AMD's Radeon HD 5970. Two ASUS GeForce GTX 480's in an SLI set produced 55 FPS, demonstrating 81% SLI efficiency in Crysis Warhead.


DX11: Aliens vs Predator

Aliens vs. Predator is a science fiction first-person shooter video game, developed by Rebellion and published by Sega for Microsoft Windows, Sony PlayStation 3, and Microsoft Xbox 360. Aliens vs. Predator utilizes Rebellion's proprietary Asura game engine, which had previously found its way into Call of Duty: World at War and Rogue Warrior. The self-contained benchmark tool is used for our DirectX-11 tests, which push the Asura game engine to its limit.

In our benchmark tests, Aliens vs. Predator was configured to use the highest quality settings with 4x AA and 16x AF. DirectX-11 features such as Screen Space Ambient Occlusion (SSAO) and tessellation have also been included, along with advanced shadows.

  • Aliens vs Predator
    • Extreme Settings: (Very High Quality, 4x AA, 16x AF, SSAO, Tessellation, Advanced Shadows)

Aliens-vs-Predator_DX11_Benchmark.jpg

Aliens vs Predator Extreme Quality Settings

Cost Analysis: Aliens vs Predator (1920x1200)

  • $140 Radeon HD 5770 1GB costs $8.75 per FPS
  • $220 GeForce GTX 460 1GB costs $9.91 per FPS
  • $260 Radeon HD 5850 1GB costs $10.74 per FPS
  • $295 GeForce GTX 470 1280MB costs $10.69 per FPS
  • $370 Radeon HD 5870 1GB costs $12.33 per FPS
  • $470 GeForce GTX 480 1536MB costs $12.67 per FPS
  • $650 Radeon HD 5970 2GB costs $13.89 per FPS
  • $940 GeForce GTX 480 SLI costs $13.35 per FPS

Test Summary: Aliens vs Predator may use the well-known Asura game engine, but DirectX-11 extensions push the graphical demand of this game to levels eclipsed only by Mafia II or Metro 2033 (and possibly equivalent to DX10 Crysis). With an unbiased appetite for raw DirectX-11 graphics performance, Aliens vs Predator accepts AMD and NVIDIA products as equal contenders. When high-strain SSAO is called into action, the ASUS ENGTX480 rendered 37.1 FPS for a 24% improvement over the AMD Radeon HD 5870. A pair of GeForce GTX 480's in SLI demonstrates how well Fermi is suited for DX11, producing 70.4 FPS with 95% SLI efficiency in Aliens vs Predator.


DX11: Battlefield Bad Company 2

The Battlefield franchise has been known to demand a lot from PC graphics hardware. DICE (Digital Illusions CE) has incorporated their Frostbite-1.5 game engine with the Destruction-2.0 feature set in Battlefield: Bad Company 2. The game features destructible environments using Frostbite Destruction-2.0, and adds gravitational bullet-drop effects for projectiles shot from weapons at long distance. The Frostbite-1.5 game engine used in Battlefield: Bad Company 2 consists of DirectX-10 primary graphics, with improved performance and softened dynamic shadows added for DirectX-11 users.

At the time Battlefield: Bad Company 2 was published, DICE was also working on the Frostbite-2.0 game engine. This upcoming engine will include native support for DirectX-10.1 and DirectX-11, as well as parallelized processing support for 2-8 parallel threads. This will improve performance for users with an Intel Core-i7 processor, although the Extreme Edition Intel Core i7-980X six-core CPU with twelve threads will not see full utilization.

In our benchmark tests of Battlefield: Bad Company 2, the first three minutes of action in the single-player raft night scene are captured with FRAPS. Relative to the online multiplayer action, these frame rate results are nearly identical to daytime maps with the same video settings. The Frostbite-1.5 game engine in Battlefield: Bad Company 2 appears to equalize our test set of video cards, and despite AMD's sponsorship of the game it still plays well using any brand of graphics card.

  • BattleField: Bad Company 2
    • Extreme Settings: (Highest Quality, HBAO, 8x AA, 16x AF, 180s Fraps Single-Player Intro Scene)

Battlefield-Bad-Company-2_Benchmark.jpg

Battlefield Bad Company 2 Extreme Quality Settings

Cost Analysis: Battlefield: Bad Company 2 (1920x1200)

  • $260 Radeon HD 5850 1GB costs $5.04 per FPS
  • $295 GeForce GTX 470 1280MB costs $5.49 per FPS
  • $370 Radeon HD 5870 1GB costs $5.93 per FPS
  • $470 GeForce GTX 480 1536MB costs $6.88 per FPS
  • $650 Radeon HD 5970 2GB costs $8.54 per FPS
  • $940 GeForce GTX 480 SLI costs $8.81 per FPS

Test Summary: Our extreme-quality tests use maximum settings for Battlefield: Bad Company 2, so users who dial down the anti-aliasing or use a lower resolution will see much better frame rate performance. At 1920x1200, the ASUS ENGTX480 outperforms the AMD Radeon HD 5870 by nearly 10%. A single GeForce GTX 480 tested with Battlefield: Bad Company 2 rendered 65.3 FPS, but performance jumps 64% when a second video card is added in SLI, for 78% total SLI efficiency.


DX11: BattleForge

BattleForge is a free Massive Multiplayer Online Role Playing Game (MMORPG) developed by EA Phenomic with DirectX-11 graphics capability. Combining strategic cooperative battles, the community of MMO games, and trading card gameplay, BattleForge players are free to put their creatures, spells, and buildings into any combinations they see fit. These units are represented in the form of digital cards from which you build your own unique army. With minimal resources and a custom tech tree to manage, the gameplay is unbelievably accessible and action-packed.

Benchmark Reviews uses the built-in graphics benchmark to measure performance in BattleForge, using Very High quality settings (detail) and 8x anti-aliasing with auto multi-threading enabled. BattleForge is one of the first titles to take advantage of DirectX-11 in Windows 7, and offers a very robust color range throughout the busy battleground landscape. The charted results illustrate how performance measures up between video cards when Screen Space Ambient Occlusion (SSAO) is enabled.

  • BattleForge v1.2
    • Extreme Settings: (Very High Quality, 8x Anti-Aliasing, Auto Multi-Thread)

BattleForge_DX11_Benchmark.jpg

BattleForge Extreme Quality Settings

Cost Analysis: BattleForge (1920x1200)

  • $140 Radeon HD 5770 1GB costs $6.48 per FPS
  • $220 GeForce GTX 460 1GB costs $6.83 per FPS
  • $260 Radeon HD 5850 1GB costs $7.93 per FPS
  • $295 GeForce GTX 470 1280MB costs $6.61 per FPS
  • $370 Radeon HD 5870 1GB costs $7.76 per FPS
  • $470 GeForce GTX 480 1536MB costs $5.70 per FPS
  • $650 Radeon HD 5970 2GB costs $9.29 per FPS
  • $940 GeForce GTX 480 SLI costs $8.54 per FPS

Test Summary: With settings turned to their highest quality, the NVIDIA GeForce GTX 480 dominates this AMD-sponsored game. Producing 82.5 FPS, the GTX 480 outperforms the AMD Radeon HD 5870 by a whopping 73%, and also extends an 18% lead over the dual-GPU Radeon HD 5970. Two GTX 480's in an SLI set produce 110.1 FPS, which equals a mere 67% SLI scaling efficiency.


DX9+SSAO: Mafia II

Mafia II is a single-player third-person action shooter developed by 2K Czech for 2K Games, and is the sequel to Mafia: The City of Lost Heaven released in 2002. Players assume the life of World War II veteran Vito Scaletta, the son of a small Sicilian family that immigrates to Empire Bay. Growing up in the slums of Empire Bay teaches Vito about crime, and he's forced to join the Army in lieu of jail time. After sustaining wounds in the war, Vito returns home and quickly finds trouble as he again partners with his childhood friend and accomplice Joe Barbaro. Vito and Joe combine their passion for fame and riches to take on the city, and work their way to the top in Mafia II.

Mafia II is a DirectX-9/10/11 compatible PC video game built on 2K Czech's proprietary Illusion game engine, which succeeds the LS3D game engine used in Mafia: The City of Lost Heaven. In our Mafia-II Video Game Performance article, Benchmark Reviews explored characters and gameplay while illustrating how well this game delivers APEX PhysX features on both AMD and NVIDIA products. Thanks to DirectX-11 APEX PhysX extensions that can be processed by the system's CPU, Mafia II offers gamers equal access to high-detail physics regardless of video card manufacturer.

  • Mafia II
    • Extreme Settings: (Antialiasing, 16x AF, High Shadow Quality, High Detail, High Geometry, Ambient Occlusion)

Mafia2_DX11_Benchmark.jpg

Mafia II Extreme Quality Settings

Cost Analysis: Mafia II (1920x1200)

  • $140 Radeon HD 5770 1GB costs $9.21 per FPS
  • $220 GeForce GTX 460 1GB costs $6.03 per FPS
  • $260 Radeon HD 5850 1GB costs $5.71 per FPS
  • $295 GeForce GTX 470 1280MB costs $6.40 per FPS
  • $370 Radeon HD 5870 1GB costs $6.94 per FPS
  • $470 GeForce GTX 480 1536MB costs $8.13 per FPS
  • $650 Radeon HD 5970 2GB costs $8.59 per FPS
  • $940 GeForce GTX 480 SLI costs $10.48 per FPS

Test Summary: Of all the video games presently available for DirectX-11 platforms, Mafia II is by far one of the most detailed and feature-rich. The ASUS GeForce GTX 480 allows the Fermi GF100 GPU to produce great high-quality APEX PhysX effects while displaying frame rates superior to anything AMD's lineup could offer with CPU support. Although the game looks and feels its best with APEX PhysX enabled, this special effect was disabled in our tests to ensure a fair performance comparison. With all settings being equal in terms of graphics quality, the ASUS GTX 480 delivers a minor 8% improvement with PhysX disabled, but expands to 89.7 FPS in SLI for 78% scaling efficiency.

On a side note, Mafia 2 is absolutely phenomenal with 3D-Vision... and with its built-in multi-monitor profiles and bezel correction already factored in, this game is also well suited for 3D-Vision Surround. Combining two GeForce GTX 480's in SLI allowed this game to play at 5760x1080 resolution across three monitors using the highest settings with APEX PhysX enabled, delivering a thoroughly impressive experience. If you already own a 3D Vision kit and 120Hz monitor, Mafia II was developed with 3D Vision in mind. If purchasing the equipment is within your reach (I suggest the ASUS VG236H model that comes with an NVIDIA 3D-Vision kit enclosed), you owe it to yourself to experience this game the way it was intended: in 3D.


DX11: Metro 2033

Metro 2033 is an action-oriented video game that combines survival horror and first-person shooter elements. The game is based on the novel Metro 2033 by Russian author Dmitry Glukhovsky. It was developed by 4A Games in Ukraine and released in March 2010 for Microsoft Windows. Metro 2033 uses the 4A game engine, developed by 4A Games. The 4A Engine supports DirectX-9, 10, and 11, along with NVIDIA PhysX and GeForce 3D Vision.

The 4A engine is multi-threaded in such a way that only PhysX has a dedicated thread; it uses a task model without any pre-conditioning or pre/post-synchronizing, allowing tasks to be done in parallel. The 4A game engine can utilize a deferred shading pipeline and uses tessellation for greater performance. It also offers HDR (complete with blue shift), real-time reflections, color correction, film grain and noise, and supports multi-core rendering.

Metro 2033 features superior volumetric fog, double PhysX precision, object blur, sub-surface scattering for skin shaders, parallax mapping on all surfaces, and greater geometric detail with less aggressive LODs. Using PhysX, the engine provides features such as destructible environments, cloth and water simulations, and particles that can be fully affected by environmental factors.

NVIDIA has been diligently working to promote Metro 2033, and for good reason: it is the most demanding PC video game we've ever tested. When their flagship GeForce GTX 480 struggles to produce 27 FPS with DirectX-11 anti-aliasing turned down to its lowest setting, you know that only the strongest graphics processors will generate playable frame rates. All of our tests enable Advanced Depth of Field and Tessellation effects, but disable advanced PhysX options.

  • Metro 2033
    • Extreme Settings: (Very-High Quality, AAA, 16x AF, Advanced DoF, Tessellation, 180s Fraps Chase Scene)

Metro-2033_DX11_Benchmark.jpg

Metro 2033 Extreme Quality Settings

Cost Analysis: Metro 2033 (1920x1200)

  • $140 Radeon HD 5770 1GB costs $11.38 per FPS
  • $220 GeForce GTX 460 1GB costs $13.33 per FPS
  • $260 Radeon HD 5850 1GB costs $13.76 per FPS
  • $295 GeForce GTX 470 1280MB costs $13.92 per FPS
  • $370 Radeon HD 5870 1GB costs $16.67 per FPS
  • $470 GeForce GTX 480 1536MB costs $15.99 per FPS
  • $650 Radeon HD 5970 2GB costs $18.31 per FPS
  • $940 GeForce GTX 480 SLI costs $17.12 per FPS

Test Summary: There's no way to ignore the graphical demands of Metro 2033, and only the most powerful GPUs will deliver a decent visual experience unless you're willing to seriously tone down the settings. Even when these settings are turned down, Metro 2033 is a power-hungry video game that crushes frame rates. Although Metro 2033 offers advanced PhysX options, these settings are available only to NVIDIA GeForce video cards and were disabled for our tests.

Tested with a single ASUS ENGTX480 video card, the GeForce GTX 480 produced 29.4 FPS with moderate quality settings to deliver a 32% lead over AMD's Radeon HD 5870. A pair of GeForce GTX 480's in SLI produced 54.9 FPS in our stress test, despite using Advanced Depth of Field and Tessellation effects, which equals 93% SLI scaling efficiency.


DX11: Unigine Heaven 2.1

The Unigine "Heaven 2.1" benchmark is a free, publicly available tool that exercises the graphics capabilities of DirectX-11 on Windows 7 or updated Vista Operating Systems. It reveals the enchanting magic of floating islands with a tiny village hidden in the cloudy skies, and an interactive mode lets users explore the intricate world at their own pace. Through its advanced renderer, Unigine is one of the first to showcase art assets with tessellation, bringing compelling visual finesse and exhibiting the possibilities this technology opens for enriching 3D gaming.

The distinguishing feature of the Unigine Heaven benchmark is hardware tessellation: a scalable technology for automatic subdivision of polygons into smaller and finer pieces, so that developers can give their games a more detailed look almost free of charge in terms of performance. Thanks to this procedure, the detail of the rendered image finally approaches the boundary of lifelike visual perception.

Although Heaven 2.1 was recently released and used for our DirectX-11 tests, the benchmark results were extremely close to those obtained with Heaven 1.0 testing. Since only DX11-compliant video cards will properly run the Heaven benchmark, only those products that meet the requirements have been included.

  • Unigine Heaven Benchmark 2.1
    • Extreme Settings: (High Quality, Normal Tessellation, 16x AF, 4x AA)

Unigine_Heaven_DX11_Benchmark.jpg

Heaven 2.1 Extreme Quality Settings

Cost Analysis: Unigine Heaven (1920x1200)

  • $140 Radeon HD 5770 1GB costs $9.09 per FPS
  • $220 GeForce GTX 460 1GB costs $9.02 per FPS
  • $260 Radeon HD 5850 1GB costs $11.98 per FPS
  • $295 GeForce GTX 470 1280MB costs $11.71 per FPS
  • $370 Radeon HD 5870 1GB costs $14.29 per FPS
  • $470 GeForce GTX 480 1536MB costs $12.84 per FPS
  • $650 Radeon HD 5970 2GB costs $17.78 per FPS
  • $940 GeForce GTX 480 SLI costs $13.78 per FPS

Test Summary: Reviewers like to say "Nobody plays a benchmark", but it seems evident that we can expect great things from a graphics tool this detailed. For now, though, those details only come by way of DirectX-11 video cards. Our 'extreme' test results with the Unigine Heaven benchmark tool appear to deliver fair comparisons of DirectX-11 graphics cards when set to higher quality levels. Heaven 2.1 is a very demanding benchmark tool, which is why tessellation is set to normal levels and anti-aliasing is reduced to 4x.

The ASUS GeForce GTX 480 produced 36.6 FPS, which nearly matched the AMD Radeon HD 5970 dual-GPU video card, and surpassed the Radeon HD 5870 by more than 41%. Combining two GTX 480's into an SLI set rendered 68.2 FPS in the Unigine Heaven benchmark, amounting to an impressive 93% SLI scaling efficiency.


ASUS ENGTX480 Overclocking

Since the GeForce GTX 480 is already the most powerful single-GPU video card available, not everyone will feel the need to overclock. When loaded during high-demand gaming sessions, the surface heatsink on the ENGTX480 gets very hot to the touch, and overclocking the GF100 GPU and adding voltage would further exacerbate heat output. But if it's an overclocked GeForce GTX 480 you want, it's an overclocked GTX 480 you'll get. ASUS offers two free software tools with the ENGTX480 kit: the ASUS SmartDoctor Overclocking Utility and the ASUS GamerOSD On-Screen Display Utility. The ASUS SmartDoctor software is intended to complement the card's Voltage Tweak feature with vCore adjustments.

Back in the day, software overclocking tools were few and far between. Benchmark Reviews was literally put on the map with my first article: Overclocking the NVIDIA GeForce Video Card. Although slightly dated, that article is still relevant for enthusiasts wanting to permanently flash their overclock onto the video card's BIOS. Unfortunately, most users are not so willing to commit their investment to such risky changes, and feel safer with temporary changes that can be easily undone with a reboot. That's the impetus behind the sudden popularity of software-based GPU overclocking tools.

NVIDIA already offers one such utility within their System Tools suite, formerly named NVIDIA nTune. While the NVIDIA Control Panel interface is very easy to understand and navigate, its downfall lies in the tool's simplicity: it doesn't offer the overclocking potential that ASUS provides in its branded SmartDoctor software tool.

NVIDIA-nTune-GeForce-GTX-460-1GB.png

NVIDIA System Tools Overclocking Utility

ASUS SmartDoctor

My mission was simple: locate the highest possible overclock without adding any additional voltage. In the past, software-based overclocking on ASUS video cards has been accomplished through the GamerOSD program. Now these tasks are handled by ASUS SmartDoctor (illustrated below), which is described as "intelligent hardware protection and a powerful overclocking tool". The ASUS SmartDoctor tool allows users to overclock their ASUS video card's GPU and RAM, and at the same time monitor thermal output. ASUS simplifies the on-screen feedback with notes such as "Your VGA Card is OK.", but more advanced users will appreciate the in-depth data displayed along the upper-right corner.

ASUS-SmartDoctor-GeForce-GTX460.jpg

ASUS SmartDoctor Overclocking Utility (GTX 460)

Unlike the NVIDIA System Tools utility, which reports memory clock speeds at the double data rate, the ASUS SmartDoctor utility reports memory speed at the quad data rate. This means that a stock speed of 1000MHz GDDR5 appears as 4000MHz in the utility. The ASUS SmartDoctor utility worked well to overclock the GPU clock speed, while the Vcore voltage (left unchanged) still had plenty of available range thanks to the added Voltage Tweak functionality. As a best practice, it's good to find the maximum stable GPU clock speed and then drop back 10 MHz or more. After several trial-and-error gaming sessions to confirm stability, the ASUS ENGTX480 produced a final overclock that reached 840/1680MHz on the GPU and a 1940MHz GDDR5 memory overclock, resulting in some very impressive gains:

Video Game: Standard (700/1401 MHz) / Overclocked (840/1680 MHz) / Improvement
Crysis Warhead: 34 FPS / 39 FPS / 15%
Aliens vs Predator: 37.1 FPS / 41.6 FPS / 12%
Heaven 2.1: 36.6 FPS / 42.9 FPS / 17%
Mafia II: 57.8 FPS / 65.0 FPS / 13%
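
The improvement column follows from the frame rates, as the sketch below shows; it also illustrates the quad-data-rate bookkeeping noted above. Results match the table to within a point of rounding:

    # Percentage gain from the 840/1680 MHz overclock, plus the GDDR5
    # clock-rate conversion that SmartDoctor displays (quad data rate).
    results = {"Crysis Warhead": (34.0, 39.0), "Aliens vs Predator": (37.1, 41.6),
               "Heaven 2.1": (36.6, 42.9), "Mafia II": (57.8, 65.0)}
    for game, (stock, oc) in results.items():
        print(f"{game}: +{(oc - stock) / stock:.1%}")

    print(f"1000 MHz GDDR5 clock = {1000 * 4} MHz quad data rate")  # 4000 MHz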

ASUS GamerOSD

Want to boost graphics card performance without exiting the game? Activate ASUS GamerOSD anytime during gameplay and adjust the GPU clock for instant overclocking. Real-time FPS (frames per second) status is also available to keep track of performance improvements. The ASUS GamerOSD (On-Screen Display) is a tool for combining ASUS SmartDoctor and video capture tools together without leaving the action. What I found especially helpful was the high-resolution DVD-quality video capture feature, which comes FREE with the ASUS ENGTX480 kit. Programmable hot keys assign actions, and recorded video (saved in XviD MPEG-4 format) or screen shots can be taken on command. These are features you would have to pay for using other software, such as FRAPS.

ASUS-GamerOSD-Movie-Capture.png

ASUS GamerOSD On-Screen Display Utility

In the next section, we explore NVIDIA APEX PhysX enhancements with the ASUS GeForce GTX 480...

NVIDIA APEX PhysX Enhancements

Many of the latest video games are being developed with new graphical enhancement technologies, such as APEX PhysX and 3D-Vision Surround. Each of these NVIDIA technologies is designed to work best on GeForce desktop graphics solutions, but only the most powerful GPUs can make the special effects stand out in full glory. While a single GeForce GTX 480 has enough power to enable all of the quality settings at their highest levels with APEX PhysX enabled, adding a second video card for SLI opens up the possibilities for higher frame rate performance and excellent 3D-Vision performance.

Mafia II is the first PC video game title to include the new NVIDIA APEX PhysX framework, a powerful feature set that only GeForce video cards are built to deliver. While console versions will make use of PhysX, only the PC version supports NVIDIA's APEX PhysX physics modeling engine, which adds the following features: APEX Destruction, APEX Clothing, APEX Vegetation, and APEX Turbulence. PhysX helps make object movement more fluid and lifelike, such as cloth and debris. In this section, Benchmark Reviews details the differences with and without APEX PhysX enabled.

We begin with a scene from the Mafia II benchmark test, which has the player pinned down behind a brick column as the enemy shoots at him. Examine the image below, which was taken with a Radeon HD 5850 configured with all settings turned to their highest and APEX PhysX support disabled:

Mafia2_Cloth_High_No-PhysX.jpg

No PhysX = Cloth Blending and Missing Debris

Notice from the image above that when PhysX is disabled there is no broken stone debris on the ground. Cloth from the foreground character's trench coat blends into his leg and remains in a static position relative to his body, as does the clothing on other (AI) characters. Now inspect the image below, which uses the GeForce GTX 480 with APEX PhysX enabled:

Mafia2_Cloth_High_PhysX.jpg

Realistic Cloth and Debris - High Quality Settings With PhysX

With APEX PhysX enabled, the cloth neatly sways with the contour of a character's body, and doesn't bleed into solid objects such as body parts. Additionally, APEX Clothing features improve realism by adding gravity and wind effects onto clothing, allowing characters to look like they would in similar real-world environments.

Mafia2_PhysX_Fire.jpg

Burning Destruction Smoke and Vapor Realism

Flames aren't exactly new to video games, but smoke plumes and heat vapor that mimic realistic movement have never looked as real as they do with APEX Turbulence. Fire and explosions added into a destructible environment are a potent combination for virtual-world mayhem, showcasing the new PhysX APEX Destruction feature.

Mafia2_PhysX_Glass.jpg

Exploding Glass Shards and Bursting Flames

NVIDIA PhysX has changed video game explosions into something worthy of cinema-level special effects. Bursting windows explode into several unique shards of glass, and destroyed crates burst into splintered kindling. Smoke swirls and moves as if there's an actual air current, and flames move out towards open space all on their own. Surprisingly, there is very little impact on FPS performance with APEX PhysX enabled on GeForce video cards, and very little penalty for changing from medium (normal) to high settings.

NVIDIA 3D-Vision Effects

Readers familiar with Benchmark Reviews have undoubtedly heard of NVIDIA GeForce 3D Vision technology; if not from our review of the product, then from the Editor's Choice Award it's earned or the many times I've personally mentioned it in our articles. Put simply: it changes the game. 2010 has been a break-out year for 3D technology, and PC video games are leading the way. Mafia II expands on the three-dimensional effects, and improves the 3D-Vision experience with out-of-screen effects. For readers unfamiliar with the technology, 3D-Vision is a feature only available on NVIDIA GeForce video cards.

Mafia 2 is absolutely phenomenal with 3D-Vision... and with its built-in multi-monitor profiles and bezel correction already factored in, this game is well suited for 3D-Vision Surround. Combining two GeForce GTX 480's in SLI allowed this game to play at 5760x1080 resolution across three monitors using upper-level settings with APEX PhysX enabled to deliver a thoroughly impressive experience. If you already own a 3D Vision kit and 120Hz monitor, Mafia II was built with 3D Vision in mind.

Mafia2_3d-Vision_Characters.jpg

The first thing gamers should be aware of is the performance penalty for using 3D-Vision with a high-demand game like Mafia II. Using a GeForce GTX 480 video card for reference, currently the most powerful single-GPU graphics solution available, we experienced frame rates up to 33 FPS with all settings configured to their highest and APEX PhysX set to high. When 3D Vision is enabled, the video frame rate usually decreases by about 50%. This is no longer a hard-and-fast rule, thanks to '3D Vision Ready' game titles that offer performance optimizations. Mafia II proved that the 3D Vision performance penalty can be as little as 30% with a single GeForce GTX 480 video card, or a mere 11% in SLI configuration. NVIDIA Forceware drivers will guide players to make custom-recommended adjustments specifically for each game they play, but PhysX and anti-aliasing will still reduce frame rate performance. The arithmetic is sketched below.
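
A rough sketch of those penalties, assuming the effective frame rate is simply the base rate scaled by the quoted reduction:

    # Effective FPS under a given 3D Vision penalty (base: 33 FPS, single card).
    base_fps = 33.0
    print(f"Typical 50% penalty:   {base_fps * (1 - 0.50):.1f} FPS")  # 16.5
    print(f"Mafia II, single card: {base_fps * (1 - 0.30):.1f} FPS")  # 23.1

Numbers like these are why '3D Vision Ready' optimizations matter: the difference between a 50% and a 30% penalty is the difference between a slideshow and playable frame rates.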

    Mafia2_3d-Vision_Tire_Door.jpg

    Of course, the out-of-screen effects are worth every dollar you spend on graphics hardware. In the image above, an explosion sends the car's wheel and door flying into the players face, followed by metal debris and sparks. When you're playing, this certainly helps to catch your attention... and when the objects become bullets passing by you, the added depth of field helps assist in player awareness.

    Mafia2_3d-Vision_Debris.jpg

    Combined with APEX PhysX technology, NVIDIA's 3D-Vision brings destructible walls to life. As enemies shoot at the brick column, dirt and dust fly past the player and stones tumble out towards you. Again, the added depth can help players pinpoint the origin of an enemy threat and improve response time without sustaining 'confusion damage'.

    Mafia2_3d-Vision_Smoke_Plumes.jpg

    NVIDIA APEX Turbulence, a new PhysX feature, already adds an impressive level of realism to games (such as Mafia II, pictured in this section). Watching plumes of smoke and flames spill out towards your camera angle helps put you right into the thick of the action.

    Mafia2_3d-Vision_Out-of-Screen.jpg

    NVIDIA 3D-Vision/3D-Vision Surround is the perfect addition to APEX PhysX technology, and capable video games prove that these features reproduce lifelike scenery and destruction when used together. Glowing embers and fiery shards shooting past you seem very real when 3D-Vision pairs itself with APEX PhysX technology, and there's finally a good reason to overpower the PC's graphics system.

    GeForce GTX480 Temperatures

    Benchmark tests are always nice, so long as you care about comparing one product to another. But when you're an overclocker, gamer, or merely a PC hardware enthusiast who likes to tweak things on occasion, there's no substitute for good information. Benchmark Reviews has a very popular guide written on Overclocking Video Cards, which gives detailed instruction on how to tweak a graphics card for better performance. Of course, not every video card has overclocking headroom. Some products run so hot that they can't suffer any higher temperatures than they already do. This is why we measure the operating temperature of the video card products we test.

    To begin my testing, I use GPU-Z to measure the temperature at idle as reported by the GPU. Next I use FurMark's "Torture Test" to generate maximum thermal load and record GPU temperatures at high-power 3D mode. The ambient room temperature remained at a stable 20°C throughout testing, while the inner-case temperature hovered around 37°C.

    FurMark does two things extremely well: it drives the thermal output of any graphics processor higher than applications or video games realistically could, and it does so consistently every time. FurMark works great for testing the stability of a GPU as the temperature rises to the highest possible output. The temperatures discussed below are absolute maximum values, and not representative of real-world performance.
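    Our thermal logging is done with GPU-Z on Windows, but the same idle-versus-load capture can be scripted. The following is a hedged sketch only, assuming a system where NVIDIA's nvidia-smi command-line utility is available and FurMark is started separately:

        import subprocess
        import time

        def gpu_temp_c():
            """Read the current GPU core temperature, in Celsius."""
            out = subprocess.check_output(
                ["nvidia-smi", "--query-gpu=temperature.gpu",
                 "--format=csv,noheader,nounits"])
            return int(out.decode().strip())

        # Sample once per second for ten minutes while the FurMark
        # torture test runs, then report the peak (loaded) temperature.
        samples = []
        for _ in range(600):
            samples.append(gpu_temp_c())
            time.sleep(1)
        print(f"Peak GPU temperature: {max(samples)}°C")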

    A Tale of Two GTX 480's

    As the introduction to this article suggested, the early engineering samples NVIDIA sent to the media for evaluation testing earned the GeForce GTX 480 product line a bad reputation. Those samples used immature BIOS firmware that differs significantly from the full retail version sold on store shelves. Physically, the video card is essentially identical between versions, and graphics performance is unchanged. The real difference comes from reduced power demand, which in turn reduces GF100 GPU heat output and results in a lower blower fan RPM that emits less noise.

    NVIDIA-supplied product specifications state that the GeForce GTX 480 has a maximum GPU thermal threshold of 105°C. This is identical to the previous-generation GeForce GTX 285, as well as the GeForce GTX 470 that shares the same GF100 graphics processor. Housed in a mid-tower computer case for testing with no additional cooling, the original GeForce GTX 480 engineering sample produced a very warm 53°C in idle 2D mode and increased to 93°C in sustained full 3D mode.

    NVIDIA_GeForce_GTX-480_Video_Card_Temperatures.jpg

    NVIDIA GeForce GTX480 Engineering Sample Temperatures

    After six months of additional refinements, the ASUS ENGTX480 retail kit produced much lower temperatures... and did so with a lot less fan noise. The image below illustrates how the (many) BIOS revisions have benefited this product series:

    ASUS_ENGTX480_Video_Card_Temperatures.jpg

    ASUS ENGTX480 GeForce GTX480 Video Card Temperatures

    As you can see, 36°C at idle on the ENGTX480 is a far cry from the 53°C produced by NVIDIA's engineering sample. 82°C under synthetic 100% load is also notable, a full 11°C below the early media samples. Obviously, these are the temperatures NVIDIA should have launched the product with from day one, but six months later consumers can reap the rewards of improved development.

    Most new graphics cards from NVIDIA and AMD expel heated air through exhaust vents, which keeps the internal case temperature from rising. Our test system is an open-air chassis that forces the video card to depend on its own cooling solution for proper thermal management. Most gamers and PC hardware enthusiasts who use an aftermarket computer case with intake and exhaust fans will create a directional airflow current and lower internal temperatures a few degrees below the measurements we've recorded. To demonstrate this, we've built a system to illustrate the...

    Best-Case Scenario

    Traditional tower-style computer cases position internal hardware so that heat is expelled out through the back of the unit. This is better than nothing, but there's a fundamental problem: heat rises. Using the transverse mount design on the SilverStone Raven-2 chassis, Benchmark Reviews re-tests the ASUS GeForce GTX 480 video card to determine the 'best-case' scenario.

    Sitting idle at the Windows 7 desktop with a 20°C ambient room temperature, the ASUS ENGTX480 rested at a respectable 35°C - 1°C better than previously tested. Pushed to abnormally high levels using the FurMark torture test, the ASUS GeForce GTX 480 operated at 79°C with a very quiet blower fan. While mature ASUS firmware has obviously made a significant difference in terms of power consumption and heat output, the well-designed Raven-2 computer case, with its additional cooling around the GTX 480, helped make an even bigger difference.

    Despite the 40nm process and improved BIOS programming, GF100 still runs warm unless you add internal case cooling to draw in cool air and exhaust heated air. This could impact overclocking projects inside poorly ventilated computer cases. So if you've already got a warm-blooded CPU in your computer system, take a look at our Best CPU Cooler Performance series and find a heatsink that will lower your internal temperatures and prolong component lifetime. Lowering the internal case temperature could give you an added edge for GPU overclocking projects, and it will also help overall system stability.

    VGA Power Consumption

    Life is not as affordable as it used to be, and items such as gasoline, natural gas, and electricity top the list of resources that have exploded in price over the past few years. Add to this the limits of non-renewable resources compared to current demands, and you can see that prices are only going to get worse. Planet Earth needs our help, and needs it badly. With forests becoming barren of vegetation and snow-capped poles quickly turning brown, the technology industry has a new attitude towards turning "green". I'll spare you the powerful marketing hype that gets sent from various manufacturers every day, and get right to the point: your computer hasn't been doing much to help save energy... at least up until now.

    For power consumption tests, Benchmark Reviews utilizes the 80-PLUS GOLD certified OCZ Z-Series Gold 850W PSU, model OCZZ850. This power supply unit has been tested to provide over 90% typical efficiency by Chroma System Solutions. To measure isolated video card power consumption, Benchmark Reviews uses the Kill-A-Watt EZ (model P4460) power meter made by P3 International.

    A baseline test is taken without a video card installed inside our test computer system, which is allowed to boot into Windows 7 and rest idle at the login screen before power consumption is recorded. Once the baseline reading has been taken, the graphics card is installed and the system is again booted into Windows and left idle at the login screen. Our final loaded power consumption reading is taken with the video card running a stress test using FurMark. Below is a chart with the isolated video card power consumption (not system total) displayed in watts for each specified test product:
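    The isolated wattage in the chart below is just the difference between the two Kill-A-Watt readings. A minimal sketch of that arithmetic follows; the baseline and system readings here are hypothetical values chosen only to reproduce the ENGTX480's measured 39W/315W results:

        # Hypothetical Kill-A-Watt readings, in watts:
        baseline_idle = 98     # no video card installed, idle at login screen
        system_idle   = 137    # card installed, idle at login screen
        system_loaded = 413    # card installed, FurMark stress test running

        card_idle   = system_idle - baseline_idle     # -> 39 W isolated idle draw
        card_loaded = system_loaded - baseline_idle   # -> 315 W isolated loaded draw
        print(card_idle, card_loaded)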

    Video Card Power Consumption by Benchmark Reviews

    VGA Product Description (sorted by combined total power) | Idle Power | Loaded Power
    ----------------------------------------------------------+------------+-------------
    NVIDIA GeForce GTX 480 SLI Set | 82 W | 655 W
    NVIDIA GeForce GTX 590 Reference Design | 53 W | 396 W
    ATI Radeon HD 4870 X2 Reference Design | 100 W | 320 W
    AMD Radeon HD 6990 Reference Design | 46 W | 350 W
    NVIDIA GeForce GTX 295 Reference Design | 74 W | 302 W
    ASUS GeForce GTX 480 Reference Design | 39 W | 315 W
    ATI Radeon HD 5970 Reference Design | 48 W | 299 W
    NVIDIA GeForce GTX 690 Reference Design | 25 W | 321 W
    ATI Radeon HD 4850 CrossFireX Set | 123 W | 210 W
    ATI Radeon HD 4890 Reference Design | 65 W | 268 W
    AMD Radeon HD 7970 Reference Design | 21 W | 311 W
    NVIDIA GeForce GTX 470 Reference Design | 42 W | 278 W
    NVIDIA GeForce GTX 580 Reference Design | 31 W | 246 W
    NVIDIA GeForce GTX 570 Reference Design | 31 W | 241 W
    ATI Radeon HD 5870 Reference Design | 25 W | 240 W
    ATI Radeon HD 6970 Reference Design | 24 W | 233 W
    NVIDIA GeForce GTX 465 Reference Design | 36 W | 219 W
    NVIDIA GeForce GTX 680 Reference Design | 14 W | 243 W
    Sapphire Radeon HD 4850 X2 11139-00-40R | 73 W | 180 W
    NVIDIA GeForce 9800 GX2 Reference Design | 85 W | 186 W
    NVIDIA GeForce GTX 780 Reference Design | 10 W | 275 W
    NVIDIA GeForce GTX 770 Reference Design | 9 W | 256 W
    NVIDIA GeForce GTX 280 Reference Design | 35 W | 225 W
    NVIDIA GeForce GTX 260 (216) Reference Design | 42 W | 203 W
    ATI Radeon HD 4870 Reference Design | 58 W | 166 W
    NVIDIA GeForce GTX 560 Ti Reference Design | 17 W | 199 W
    NVIDIA GeForce GTX 460 Reference Design | 18 W | 167 W
    AMD Radeon HD 6870 Reference Design | 20 W | 162 W
    NVIDIA GeForce GTX 670 Reference Design | 14 W | 167 W
    ATI Radeon HD 5850 Reference Design | 24 W | 157 W
    NVIDIA GeForce GTX 650 Ti BOOST Reference Design | 8 W | 164 W
    AMD Radeon HD 6850 Reference Design | 20 W | 139 W
    NVIDIA GeForce 8800 GT Reference Design | 31 W | 133 W
    ATI Radeon HD 4770 RV740 GDDR5 Reference Design | 37 W | 120 W
    ATI Radeon HD 5770 Reference Design | 16 W | 122 W
    NVIDIA GeForce GTS 450 Reference Design | 22 W | 115 W
    NVIDIA GeForce GTX 650 Ti Reference Design | 12 W | 112 W
    ATI Radeon HD 4670 Reference Design | 9 W | 70 W

    * Results are accurate to within +/- 5W.

    Oh, how the mighty have fallen...

    ...to more reasonable power consumption levels. When Benchmark Reviews first tested the NVIDIA GeForce GTX 480 engineering sample, the idle power draw was a thirsty 52 watts of electricity, perhaps among the highest idle power draw we've measured for any single-GPU DX11-generation graphics card. That level of consumption is slightly higher than the 48W we measured for the dual-GPU AMD Radeon HD 5970, and more than twice the demand of AMD's Radeon HD 5870 and HD 5850.

    Similar to the dramatic decrease in temperature due to mature firmware, the Fermi GPU has been tamed to use less idle power on the ASUS GeForce GTX 480 than before... down to 39 watts at idle. The GF100 GPU surely has a big power appetite, but it's nice to see ASUS has trained it to snack on only a few watts.

    Once 3D applications begin to demand power from the GPU, electrical power consumption really begins to climb. Measured at full 3D load, the NVIDIA GeForce GTX 480 engineering sample set a new maximum power record and consumed 370 watts. With a more refined BIOS controlling the ENGTX480 unit, the ASUS GeForce GTX 480 slimmed down to 315W... a decrease of 55W.

    For comparison, here are the GPU voltages for the original engineering sample and the ASUS ENGTX480:

    NVIDIA GeForce GTX 480 Engineering Sample
    MSI Afterburner reported 0.962V GPU at idle, and 1.025V under load.
    GPU-Z reported 0.953V 15.0A 14.3W idle VDDC, and 0.980V 75.0A 74.0W at load.

    ASUS ENGTX480
    MSI Afterburner reported 0.962V GPU at idle, and 1.075V under load.
    GPU-Z reported 0.955V 11.0A 10.5W idle VDDC, and 1.033V 70.0A 72.3W at load.
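
    As a sanity check, each GPU-Z wattage figure should be roughly the product of its reported voltage and current (P = V x I); the small mismatches come from rounding in the sensor readout. A quick verification in Python using the readings listed above:

        readings = {
            "Engineering sample, idle": (0.953, 15.0, 14.3),
            "Engineering sample, load": (0.980, 75.0, 74.0),
            "ASUS ENGTX480, idle":      (0.955, 11.0, 10.5),
            "ASUS ENGTX480, load":      (1.033, 70.0, 72.3),
        }
        for label, (volts, amps, reported) in readings.items():
            computed = volts * amps   # P = V * I
            print(f"{label}: {computed:.1f} W computed vs {reported} W reported")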

    At first it seemed there was nothing environmentally 'green' about NVIDIA's new GF100 Fermi-based GPU, despite the 40nm fabrication process. Idle power draw has since come way down, and loaded maximum consumption is also more favorable, but top-end performance still comes at a cost in energy consumed. Putting things into perspective though, enthusiast PC gamers who can afford the GeForce GTX 480 probably aren't very worried about a few extra dollars on their power bill each month.

    Editor's Opinion: NVIDIA Fermi

    My opinion of NVIDIA's Fermi architecture has changed over the past several months, as they've developed their graphics processor to fully embody a long-term plan that was originally unclear. Testing with NVIDIA's GF100 GPU held its own set of challenges, and many times the video cards based on this graphics processor seemed condemned by an inherited legacy of issues. From the flagship GeForce GTX 480 down to the GTX 465, Fermi impressed gamers with strong FPS performance... and that was about it. Thermal output and power consumption were unfashionably high, and AMD consistently focused their marketing attacks on both. Then along came GF104 on the GeForce GTX 460, a video card that completely changed the game.

    NVIDIA's GeForce GTX 460 not only changed the collective opinion regarding their Fermi architecture, it also changed the GPU landscape. AMD held the upper hand by releasing a DirectX-11 video card first, but they've painted themselves into a corner with their Evergreen GPU. Unlike NVIDIA's Fermi architecture, which can shape-shift as desired, AMD's Cedar, Redwood, and Juniper GPUs are all simply slices of the same processor: Cypress. This is where intelligent consumers will spot the flaw: AMD came to the (video) card game and showed their entire hand from the first deal, while NVIDIA had a few spare aces up their sleeves. NVIDIA's GeForce GTX 480 is only 15/16 of the complete GF100 package, and we're just beginning to see what's possible with the 7/8-enabled GF104 GPU in the GTX 460. It's unknown what NVIDIA has planned for GF102 and GF108... but now we know that GF104 made good on Fermi, and GF106 made it supremely affordable.

    I don't like to guess at the future, especially since so many other websites exist for that purpose, but something tells me NVIDIA is stockpiling full-blown 512-core GF100 Fermi processors for an upcoming launch. Whether we will see GF102 or GF108 GPUs launched this year (2010) is still unclear, and nobody knows for sure if AMD has any significant threats to the crown for the upcoming holiday season. The true potential of NVIDIA's Fermi architecture, meaning all 512 CUDA cores, has still yet to be seen. Sure, we've already poked around at the inner workings for our NVIDIA GF100 GPU Fermi Graphics Architecture article, but there's so much more that goes untested. Current Folding@Home clients take advantage of Fermi's architecture and make CPU-based folding obsolete. Work-unit performance on the GeForce GTX 400-series surpasses AMD's Radeon HD 5000-series equivalents without much struggle, and CPUs start to look like snail-mail compared to e-mail.

    NVIDIA-GeForce-Fermi-Product-Family.jpg

    NVIDIA GeForce Fermi Graphics Card Family

    So now AMD and NVIDIA are even-Steven in the race for DirectX-11, and all they need are video games to increase demand for their products. This becomes a real problem (for them both) because very few existing games demand any more graphical processing power than games demanded back in 2006. Video cards have certainly gotten bigger and faster, but video games have lacked fresh development. DirectX-10 helped the industry, but every step forward came with two steps back because of the dislike for Microsoft's Windows Vista O/S. Introduced with Windows 7 (and also available for Windows Vista with an update), DirectX-11 now gives enthusiasts added detail and special effects in their video games.

    Even if you're only after raw gaming performance and have no real-world interest in CUDA, there's reason to appreciate the GF100 GPU series. New enhancement products, such as the NVIDIA GeForce 3D Vision Gaming Kit, double the demands on frame rate output and hence require more powerful graphics processing. This is where Fermi-based products deliver the performance necessary to enjoy the extended gaming experience. I'm a huge fan of GeForce 3D-Vision, which is why it's earned our Editor's Choice Award, and Fermi delivers the power necessary to drive up to three monitors. The newly dubbed NVIDIA 3D-Vision Surround (stereo) feature requires three 3D-Vision-capable LCD, projector, or DLP devices and offers bezel correction support. Alternatively, NVIDIA Surround (non-stereo) supports mixed displays with common resolution/timing.

    Some older game titles will also benefit from the Fermi architecture, beyond a simple increase in video frame rates. For example, Far Cry 2 (among others) will receive 32x CSAA functionality native to the game, and future NVIDIA Forceware driver updates could further add new features to existing co-developed video games. NVIDIA's R260 Forceware release will be among the first to introduce features enthusiasts have wanted for quite some time; my favorite is the removal of previous driver files and extensions. Additionally, NVIDIA NEXUS technology brings CPU and GPU code development together in Microsoft Visual Studio 2008 for a shared process timeline, and introduces the first hardware-based shader debugger. NVIDIA's GF100-series are the first GPUs ever to offer full C++ support, the programming language of choice among game developers.

    Fermi is also the first GPU to support Error Correcting Code (ECC) based protection of data in memory. ECC was requested by GPU computing users to enhance data integrity in high performance computing environments. ECC is a highly desired feature in areas such as medical imaging and large-scale cluster computing. Naturally occurring radiation can cause a bit stored in memory to be altered, resulting in a soft error. ECC technology detects and corrects single-bit soft errors before they affect the system. Fermi's register files, shared memories, L1 caches, L2 cache, and DRAM memory are ECC protected, making it not only the most powerful GPU for HPC applications, but also the most reliable. In addition, Fermi supports industry standards for checking data during transmission from chip to chip. All NVIDIA GPUs include support for the PCI Express standard for CRC check with retry at the data link layer. Fermi also supports the similar GDDR5 standard for CRC check with retry (aka "EDC") during transmission of data across the memory bus.
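    NVIDIA hasn't published the exact code Fermi's memory ECC uses, but the principle of single-bit detection and correction is easy to demonstrate with a classic Hamming(7,4) code. The Python sketch below is illustrative only, not NVIDIA's implementation:

        def encode(d):
            """Encode 4 data bits into a 7-bit Hamming codeword."""
            p1 = d[0] ^ d[1] ^ d[3]
            p2 = d[0] ^ d[2] ^ d[3]
            p4 = d[1] ^ d[2] ^ d[3]
            return [p1, p2, d[0], p4, d[1], d[2], d[3]]

        def correct(c):
            """Locate and fix a single flipped bit, then return the data bits."""
            s1 = c[0] ^ c[2] ^ c[4] ^ c[6]   # parity over positions 1,3,5,7
            s2 = c[1] ^ c[2] ^ c[5] ^ c[6]   # parity over positions 2,3,6,7
            s4 = c[3] ^ c[4] ^ c[5] ^ c[6]   # parity over positions 4,5,6,7
            pos = s1 + 2 * s2 + 4 * s4       # syndrome: 0 means no error
            if pos:
                c[pos - 1] ^= 1              # flip the corrupted bit back
            return [c[2], c[4], c[5], c[6]]

        word = encode([1, 0, 1, 1])
        word[4] ^= 1                          # simulate a radiation-induced soft error
        assert correct(word) == [1, 0, 1, 1]  # the single-bit soft error is corrected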

    ENGTX480/2DI/1536MD5 Conclusion

    IMPORTANT: Although the rating and final score mentioned in this conclusion are made to be as objective as possible, please be advised that every author perceives these factors differently at various points in time. While we each do our best to ensure that all aspects of the product are considered, there are often times unforeseen market conditions and manufacturer changes which occur after publication that could render our rating obsolete. Please do not base any purchase solely on our conclusion, as it represents our product rating specifically for the product tested which may differ from future versions. Benchmark Reviews begins our conclusion with a short summary for each of the areas that we rate.

    Beginning with performance, we can simply state the obvious: NVIDIA's GF100 GPU inside the GeForce GTX 480 is unmatched. For the past six months and up to now (October 2010), NVIDIA has kept their reputation for making the most powerful graphics cards. In stock form the ASUS GeForce GTX 480 doesn't always create a giant advantage over the Radeon HD 5870 when you're comparing frame-to-frame performance, but when PhysX is enabled there isn't a Radeon product sold that can keep up with the ENGTX480. That might change when NVIDIA publishes their PhysX 3.0 release, but until that time comes there's still reason for gamers to enjoy PhysX via GeForce. Overclocking the GTX 480 squeezes even more performance out of the GF100 GPU, just as it does for other video cards, but it also opens up the possibility of maximum quality settings for high-demand games and 3D-Vision.

    At 1920x1200 resolution in the 3DMark Vantage New Calico test scene, the ASUS ENGTX480 produced a 14% performance lead over the AMD Radeon HD 5870. Battlefield: Bad Company 2 resulted in a 10% margin over the 5870, while Crysis Warhead delivered a 26% improvement. DirectX-11 games such as Aliens vs Predator rendered the ENGTX480 with a 24% lead over the 5870, and boasted 95% SLI efficiency. AMD-sponsored titles such as BattleForge allowed the ASUS GTX 480 to produce an impressive 73% jump in performance beyond the Radeon HD 5870, and also extended an 18% lead over the dual-GPU Radeon HD 5970. In Mafia II there was an 8% improvement with PhysX disabled, but when this feature was enabled the performance difference was tenfold. Metro 2033 produced a 32% improvement over the 5870, and in Unigine's Heaven 2.1 benchmark the ENGTX480 surpassed the Radeon HD 5870 by more than 41%.

    ASUS_ENGTX480_Video_Card_Splash.jpg

    ASUS GeForce GTX 480 ENGTX480/2DI/1536MD5

    Appearance is a more subjective matter, since the rating doesn't have benchmark scores to fall back on. The ASUS GeForce GTX 480, model ENGTX480/2DI/1536MD5, is very similar to the engineering samples sent from NVIDIA to the media. The faux carbon-fiber decals that adorn the plastic shroud don't really stand out, but there's no doubting that the GeForce GTX 480 appears to be one of the 'toughest' video cards ever produced, thanks to an exposed surface heatsink and flared heat-pipes.

    In terms of video card pecking order, NVIDIA has three divisions: GTX for high-end, GTS for mid-market, and GT for lower-end. The GeForce GTX 480 is currently the most powerful Fermi video card available to gamers. The ASUS GeForce GTX 480 has been designed with the same solid construction as its predecessors, and while the back of the PCB exposes some sensitive electronic components, there's really no need for a metal back-plate for protection or heat dissipation. The top side of the graphics card features a plastic fan shroud and large surface heatsink, while the blower fan receives a recessed concave opening that allows for airflow in SLI configurations.

    While most PC gamers and hardware enthusiasts buy a discrete graphics card for the sole purpose of playing video games, there's a small niche who depend on extra features beyond fast video frame rates. NVIDIA is the market leader in GPGPU functionality, and it's no surprise to see CPU-level technology available in their GPU products. NVIDIA's Fermi architecture is the first GPU to support Error Correcting Code (ECC), a feature that benefits both personal and professional users. Proprietary technologies such as the NVIDIA Parallel DataCache and NVIDIA GigaThread Engine further add value to GPGPU functionality. Additionally, applications such as Adobe Photoshop and Premiere can take advantage of GPGPU processing power. In case the point hasn't already been driven home, don't forget that 3D-Vision and APEX PhysX are technologies only compatible with NVIDIA GeForce products.

    Reading through the comments of the original launch article, value means something different to everyone. Some readers take heat and power into consideration, while others are concerned with FPS performance. As of 06 December 2010, the ASUS ENGTX480/2DI/1536MD5 sells for $463 at Amazon or $500 at NewEgg. That's a lot of money for a video card, no doubt, but the GeForce GTX 480 delivers on its promise to be the best. If you're not ready for the world's fastest GPU, then the GTX 470, and especially two GTX 460's in SLI, are very good alternatives. ASUS offers a three-year warranty on the ENGTX480, which could come in handy if the Voltage Tweak feature is taken to excess.

    In conclusion, the early GeForce GTX 480 engineering samples NVIDIA sent to media outlets for testing were obviously not as refined as the retail ENGTX480 product we've tested in this article. After a few weeks of testing this ASUS GeForce GTX 480 video card, several significant differences were clearly evident to me as an end-user and gamer, but unfortunately the initial bad press on this series has already done its damage. For those who might still be shopping for the very best a single GPU can offer, there's still good reason to consider the GTX 480. The ASUS ENGTX480/2DI/1536MD5 delivers much lower power consumption at both idle and load, which leads to lower heat output and a much quieter cooling fan. For me, heat output and fan noise were my biggest concerns, and now they're finally on par with competing graphics solutions. If you're looking for the best a single GPU can offer, the GTX 480 still reigns supreme, especially for 3D Vision Surround setups.

    Pros:

    + Fastest single-unit DX11 graphics accelerator available
    + Outstanding performance for ultra high-end games
    + GF100 Introduces Error Correcting Code (ECC)
    + Reduced heat output and cooling fan noise
    + Fan exhausts all heated air outside of case
    + Includes native HDMI audio/video output
    + Adds 32x CSAA post-processing detail
    + Supports triple-SLI functionality
    + Three-year product warranty

    Cons:

    - Consumes 315W under full load
    - Very expensive premium-level product
    - Lacks free DX11 PC video game bundle

    Ratings:

    • Performance: 10.0
    • Appearance: 9.00
    • Construction: 9.75
    • Functionality: 9.75
    • Value: 6.50

    Final Score: 9.0 out of 10.

    Excellence Achievement: Benchmark Reviews Golden Tachometer Award.

    Benchmark Reviews encourages you to leave comments (below), or ask questions and join the discussion in our Forum.



    Comments 

     
    # RE: ASUS ENGTX480/2DI/1536MD5 GeForce GTX 480 - RealNeil, 2010-10-01 04:40
    It's good to see that they're getting a handle on the power usage and heat-producing issues that many have written about concerning GTX480 cards.
    I'll probably go for a pair of GTX460's in SLI configuration myself. So far, two of them cost less than a single 480 does, and their performance together is knocking on its door. Also, two of the 460's draw far less power than one 480 does. I'll take a chance and assume that they'll also produce less heat because of their low power usage.

    Heat and power are important to many of us, as you said above. I feel that it's a lot of money to buy one, and it will also cost a lot to use over the lifetime of the card. It is a truly impressive video card though. Thanks for another detailed and informative review.
     
     
    # RE: ASUS ENGTX480/2DI/1536MD5 GeForce GTX 480 - Adam, 2010-10-01 08:20
    Well, it's a helluva lot better than the first bunch; still hungry and hot, but far more tolerably so.

    Surprised ASUS didn't do anything with the cooler though.
     
     
    # 15 pro nVidia articles sins the last ATi cart got testted. - Michael, 2010-10-01 09:45
    I am just wondering do got stock ore get paid by nVidia, as your last 15 articles have all bin very pro nVidia.

    And point out all the strong points of nVidia, not that i have anything aginst those strong points, I have my self a 3x SLI GTX480 + dedicated GTX280 for PhysX + 3D Vision, on a 3 screen setup.

    But not because the 480 is the best card, but because it scale's mouths better in SLI 3x then CF-X 3x.
    And even tho they are really fast, they ware also very irritating loud and hot before i installed water cooling on them.

    That said, if i would buy a single card i would for shore go for ATi as they run cooler and uses less power.

    Looking at all the articles it looks like they ware put together whit the help of a nVidia PR guy.

    And if BMR want to be taken seriously, more balanced reviews would be welcome.
     
     
    # Did you notice...? - BruceBruce, 2010-10-01 10:21
    Have you noticed that the only new cards to be released in the last few months have been from NVIDIA? A year ago, ATI released the HD5xxx series and everyone complained that we were working for AMD, because we were praising the design and performance of the new Radeon cards. Well, 6 months later, NVIDIA releases new cards that are now the best performers; some at price/performance ratios that beat the ATI competition by a country mile (the GTX460). Those are facts - not PR, not marketing spin, not bias, just the facts.

    That's the way the computer industry works: when new designs get released, they are generally a LOT better than last year's model. Ever hear of Moore's law? When ATI brings out their next generation of video cards, I expect them to be better than what is available today. And when we test them, if that is indeed the case, that's what we will report. And then someone will complain again that we're getting paid off by ATI. As if...!!!!
     
     
    # Michael = paid by AMD? - Olin Coles, 2010-10-01 15:09
    Michael, since your IP address resolved to Bergen, Norway, I'll forgive the horrible grammar and spelling in your comment. What I won't forgive is the fact that you've come to this website and insulted me with a claim that I'm paid by NVIDIA to write these reviews, without so much as an example.

    How can you seriously expect me to review an AMD Radeon product when the last video card they offered was the Radeon HD 5550, almost five months ago? You might also go back and look over all of MY articles, and count up who has received more awards.

    If you want your remark taken seriously, perhaps you should be more constructive. Otherwise, you just come off as another fanboy troll with poor spelling.
     
     
    # Snerk! - ChrisW, 2010-10-07 07:55
    Forgiving him for his grammar because his IP is in Norway... WTF!

    Of course he's a fanboy or a troll, but you don't need to be a Grammar Nazi about it!
     
     
    # It's hard to take you seriously... - Hank, 2010-10-01 12:00
    Michael,

    It's very hard to take you seriously when your post is filled with simple spelling errors and grammatical mistakes. If you disagree with the results of the tests, then it is up to you to test for yourself. Many of the benchmarks used are free or have free versions. Anyone who can afford a setup with 3 GTX480s in SLI can certainly afford to do their own testing.

    Hank
     
     
    # RE: It's hard to take you seriously... - Servando Silva, 2010-10-01 12:32
    Additionally, that'd mean every other site publishing GTS450, GTX460, GTX470 etc. reviews is being paid by Nvidia. That's just wrong. So, that means AMD also paid us recently with their new CPUs, and Intel paid us back when they released their LGA1156 processors... As if!
     
     
    # RE: RE: It's hard to take you seriously... - Adam, 2010-10-01 12:56
    Nvidia is controlling the market by bribing all of the reviewers! It's a conspiracy, people!

    Tinfoil hat time.
     
     
    # RE: RE: RE: It's hard to take you seriously... - dlb, 2010-10-01 19:58
    Tinfoil hat time? That implies that we haven't been wearing 'em before now... I haven't taken mine off since Reagan's "Star Wars" era. And I won't take it off either - not until BMR starts getting paid by AMD/ATI.

    LOL
     
     
    # What was the voltage difference of the GPUs? - RS, 2010-10-03 19:53
    Hi,

    Great review. I wanted to know if you guys measured the GPU voltage (in MSI Afterburner or Asus Voltage Tweak software) to check if the more mature 40nm process resulted in lower GPU voltage at load? If so what was the difference?

    Also, since this is only 1 representative sample, how can you be certain that the more mature 40nm manufacturing process is the reason for the reduced demand? Could it be an outlier videocard?
     
     
    # RE: What was the voltage difference of the GPUs? - Olin Coles, 2010-10-03 19:56
    Hello RS:

    Yes, it could always be one lucky sample or it could be this way for every sample. It's very difficult to verify, but the mere fact that one card could ever reach temperatures or power consumption this low is surprising.

    I have not conducted the GPU voltage tests you mention.
     
     
    # Voltages - RS, 2010-10-03 20:11
    Could you please check the voltages of this GTX480 if you still have it? I think that would give us a better indication of whether a more mature manufacturing process is indeed in play. What about testing a 6-month-older GTX470 as well? Wouldn't the more mature process apply to the entire GF100 line?
     
     
    # 40nm GF100 GPU Voltage - Olin Coles, 2010-10-03 20:14
    Sure, but it could be a few days since I'm on a deadline for another project. I will measure idle and loaded GPU voltage on this new GTX 480 and the original engineering sample.
     
     
    # Voltages - RS, 2010-10-03 20:18
    Thank you very much Olin! No rush.
     
     
    # 40nm GF100 GPU Voltage - Olin Coles, 2010-10-03 20:42
    Added into the article:

    NVIDIA GeForce GTX 480 Engineering Sample
    MSI Afterburner reported 0.962V GPU at idle, and 1.025V under load.
    GPU-Z reported 0.953V 15.0A 14.3W idle VDDC, and 0.980V 75.0A 74.0W at load.

    ASUS ENGTX480
    MSI Afterburner reported 0.962V GPU at idle, and 1.075V under load.
    GPU-Z reported 0.955V 11.0A 10.5W idle VDDC, and 1.033V 70.0A 72.3W at load.
     
     
    # Thanks - RS, 2010-10-04 08:01
    Thanks for the voltage update Olin. I would have imagined that the more mature manufacturing process would have allowed the GPU to operate at the same frequency with lower voltages. In this case, not only does the Asus card operate cooler and quieter, but it does so at higher voltages. A great mystery indeed.

    Still what you measured is an observable result. I look forward to future GTX480/470 reviews, where you can provide more data points :)
     
     
    # Poor review, pull it. - Strafage, 2010-10-03 19:59
    This is a very poor review. You cannot draw these sorts of conclusions and claim Fermi runs cooler now based on comparing just two cards.

    No mention of voltages either.

    In no way have your tests proven that these cards are running cooler now compared to before.

    I hope no one makes the mistake of buying one of these cards now, if they wouldn't have before, thinking the heat and noise issues are resolved, because this 'review' proves nothing of the sort.
     
     
    # RE: Poor review, pull it. - Olin Coles, 2010-10-03 20:06
    Your comment makes it seem like you didn't bother to read the article. I have tested three GTX 480's, including this one, and they keep getting cooler with each new release. Also, why would mentioning voltages validate our video card power consumption findings?

    I will pull the article offline and start taking orders from you when this becomes your website. Until then, your opinion matters as much as the next anonymous post.
     
     
    # RE: RE: Poor review, pull it. - hurleybird, 2010-10-03 20:29
    Three data points is still way too small a sample to draw any kind of conclusion from.

    Voltage is absolutely something you need to give in this article, because voltage is directly related to heat output. When the GTX 480 launched there was quite a bit of voltage binning (same thing with the HD5870, even), where some cards were binned higher or lower. Specs remained the same, except for voltage, heat, and power consumption. Assuming absolutely no change, it's entirely possible to randomly get three cards successively binned with lower voltage.

    Now, that's not saying that TSMC isn't making advances on 40nm, I'm sure they are, and obviously any kind of decrease in defect density will be amplified by larger dies. But when you already had such variability in volts, heat, and power at launch, you need a lot more than three data points to draw a solid conclusion. Someone could have easily gotten the same data from three random cards at launch.
     
     
    # RE: RE: RE: Poor review, pull it. - Olin Coles, 2010-10-03 20:40
    So then what is the magic number of video cards that would prove that it's a trend and not an anomaly? How many different manufacturers do you need to have samples from?
     
     
    # RE: ASUS ENGTX480/2DI/1536MD5 GeForce GTX 480 - hurleybird, 2010-10-03 21:50
    Probably more than would be reasonable to buy ;), although three cards from launch vs. three cards from today would be the minimum I'd personally be comfortable drawing any sort of conclusion with (assuming they all show the marked difference).

    Obviously, the more cards you use, the stronger the probability is (again, assuming those added cards support your original data); however, as I said, you could have randomly taken three cards at launch and gotten the same results thanks to the large amount of voltage binning.
     
     
    # RE: ASUS ENGTX480/2DI/1536MD5 GeForce GTX 480 - hurleybird, 2010-10-03 21:52
    Also, like I said, I have no doubt that TSMC has been making improvements, and a decrease in defect density would have an increased positive effect on larger dies. It's not really a matter of *if* yields (and by extension voltage binning) at TSMC are improving, it's a matter of *how much*. Another possible result of increased yields could involve changing the GTX 470 / GTX 480 mix, where more chips are able to qualify as a GTX 480, but only at higher voltages. Thus, depending on what Nvidia does with them, increased yields could conceivably lead to more high voltage/heat/temp cards being produced.

    About the only way that I could see to easily prove that GTX 480's are getting cooler is if new chips start using lower voltage than the lowest voltage that was available at launch. Otherwise, with this type of problem you're dealing with statistics. You don't "prove" anything so much as provide probability. With a high enough probability you can begin to draw conclusions.
     
     
    # GF100 always was powerful - Corpse, 2010-10-04 00:56
    Not denying anything about how good GF100 is (but it is still less efficient). My one retort is this:

    "but it doesn't appear that AMD has any surprises for the upcoming holiday season"

    Umm, Cayman, Barts etc. These are coming up in the next couple of months, and as we don't have benchies, regardless of performance, they'll be a surprise (bad or good). And if Cayman improves like it's 'rumoured' to, it will eclipse the 480.

    Also, there was a 512-core GTX 480 out in the wild ##brightsideofnews.com/news/2010/8/13/512-core-geforce-gtx-480-reviewed-any-surprises.aspx and it wasn't very impressive.

    GF100 good - yes, no doubt; comment on no surprises from AMD - quite ignorant.
     
     
    # RE: ASUS ENGTX480/2DI/1536MD5 GeForce GTX 480 - Trajan Long, 2010-10-10 18:52
    480 is awesome and paves the way for great advances in the future. The next generation will solve whatever heat issues remain with a huge performance boost and Nvidia will rule on all fronts, not just tech.
     

    Comments have been disabled by the administrator.
