Zotac GeForce GTX-470 Fermi Video Card
Reviews - Featured Reviews: Video Cards
Written by Olin Coles   
Friday, 14 May 2010

Zotac GeForce GTX-470 ZT-40201-10P

It used to be that PC video games such as Crysis and Far Cry 2 were as demanding as you could get, but that was before DirectX-11 brought tessellation to the forefront of graphics. DX11 also adds heavy particle and turbulence effects to video games, and titles such as Metro 2033 demand the most powerful graphics processing available. NVIDIA's GF100 GPU is their first graphics processor to support DirectX-11 features such as tessellation and DirectCompute, and the GeForce GTX-470 offers an excellent combination of performance and value for games like Battlefield: Bad Company 2 or BattleForge.

Priced at $349, the NVIDIA GeForce GTX-470 empowers DirectX-11 video games to deliver unmatched geometric realism from 14 Streaming Multiprocessors with a total of 448 CUDA Cores and 56 Texture Units. In this article Benchmark Reviews tests 3D frame rate performance on the retail Zotac GeForce GTX-470 video card (model ZT-40201-10P), and compares the results against the most powerful graphics products on the market.

Based on the same GF-100 Fermi architecture that was used in the GeForce GTX 480, the NVIDIA GTX-470 delivers similar performance results, yet the $500 GTX 480 costs roughly 43% more. Additionally, the GTX-470's $350 price tag fits in nicely between the $310 Radeon HD5850 and $390 Radeon HD5870. In the following pages, Benchmark Reviews will demonstrate how well the ZOTAC GeForce GTX-470 performs against these two formidable opponents from ATI.

Zotac_GeForce_GTX470_Video_Card_Kit.jpg

ZOTAC GeForce GTX 470 Video Card Kit ZT-40201-10P

At the center of every new technology is purpose, and NVIDIA has designed their Fermi GF100 GPU with an end-goal of redefining the video game experience through significant graphics processor innovations. Disruptive technology often changes the way users interact with computers, and the GeForce GTX-480 and GTX-470 video cards are complex tools built to arrive at one simple destination: immersive entertainment, especially when paired with NVIDIA GeForce 3D Vision.

Benchmark Reviews tests graphics frame rate performance of the ZOTAC GeForce GTX 470 using several of the most demanding PC video game titles and benchmark software available. Old favorites such as Crysis Warhead, Far Cry 2, and Resident Evil 5, as well as 3DMark Vantage, are all included. New to the scene are Battlefield: Bad Company 2, Metro 2033, BattleForge, and the recently released Unigine Heaven 2.0 benchmark.

About the Company: ZOTAC International (MCO) Limited

ZOTAC International (MCO) Limited was established in 2006 with a mission to deliver superb-quality NVIDIA graphics solutions to the industry. The company has strong backing from its parent group, PC Partner Ltd., and is headquartered in Hong Kong with a factory in mainland China and regional sales offices in Europe, Asia Pacific and North America. The support network ZOTAC provides is currently the largest of its kind in the world.

With 40 SMT lines, 6,000 workers and a 100,000-square-meter facility, ZOTAC features a full array of state-of-the-art facilities and machinery. In addition, ZOTAC has over 130 R&D professionals in Hong Kong and China, plus warranty and service centers in strategic countries, enabling effective and efficient worldwide as well as localized sales and marketing support.

ZOTAC with NVIDIA not only means superb quality, it also means high performance, absolute reliability and great value. In the past year, ZOTAC products were compared and tested by several influential members of the media and have proven to be well-built, worthwhile graphics cards. With overclocked performance, excellent cooling properties and unique packaging, ZOTAC products aim to exceed users' expectations.

ZOTAC's commitment to its users is to bring the latest products to market quickly and at the best value. For those who require high-quality graphics solutions, ZOTAC is a sound choice. For additional information please visit the ZOTAC website.

NVIDIA Fermi Features

In today's complex graphics, tessellation offers the means to store massive amounts of coarse geometry with expand-on-demand functionality. In the NVIDIA GF100 GPU, tessellation also enables more complex animations. In terms of model scalability, dynamic Level of Detail (LOD) allows quality and performance trade-offs, delivering better picture quality wherever it can be had without a performance penalty. Comprised of three layers (original geometry, tessellation geometry, and displacement map), the final product carries far more shading detail and geometric data than if it were constructed with bump-map technology. In plain terms, tessellation gives the peaks and valleys with shadow detail in-between, while previous-generation technology (bump-mapping) only gives the illusion of detail.

id-imp-tessellated-character.jpg

Using GPU-based tessellation, a game developer can send a compact geometric representation of an object or character, and the tessellation unit can produce the correct geometric complexity for the specific scene. Consider the "Imp" character illustrated above. On the far left we see the initial quad mesh used to model the general outline of the figure; this representation is quite compact even when compared to typical game assets. The two middle images of the character are created by finely tessellating the description at the left. The result is a very smooth appearance, free of any of the faceting that resulted from limited geometry. Unfortunately this character, while smooth, is no more detailed than the coarse mesh. The final image on the right was created by applying a displacement map to the smoothly tessellated character shown third from the left.

Tessellation in DirectX-11

Hull shaders run the DX11 pre-expansion routines, and operate explicitly in parallel across all control points. Domain shaders run post-expansion operations on the generated domain points (u/v or x/y/z/w) and are implicitly parallel as well. Fixed-function tessellation is configured by the Level of Detail (LOD) factors output from the hull shader, and can also produce triangles and lines if requested. Tessellation is new to NVIDIA GPUs; it was not part of GT200 because of geometry bandwidth bottlenecks caused by sequential rendering/execution semantics.
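
On the application side, enabling this pipeline amounts to binding the two new programmable stages and submitting geometry as control-point patches. The fragment below is only an illustrative sketch, not code from this article; it assumes the hull and domain shaders have already been compiled into the hypothetical hullBlob and domainBlob bytecode objects.

    #include <d3d11.h>

    // Minimal sketch: turn on the DirectX-11 tessellation stages.
    void EnableTessellation(ID3D11Device* device, ID3D11DeviceContext* context,
                            ID3DBlob* hullBlob, ID3DBlob* domainBlob)
    {
        ID3D11HullShader*   hullShader   = nullptr;
        ID3D11DomainShader* domainShader = nullptr;

        // Hull shader: per-patch pre-expansion work, including the LOD factors
        // that drive the fixed-function tessellator.
        device->CreateHullShader(hullBlob->GetBufferPointer(),
                                 hullBlob->GetBufferSize(), nullptr, &hullShader);

        // Domain shader: evaluates the surface at each generated (u,v) point.
        device->CreateDomainShader(domainBlob->GetBufferPointer(),
                                   domainBlob->GetBufferSize(), nullptr, &domainShader);

        context->HSSetShader(hullShader, nullptr, 0);
        context->DSSetShader(domainShader, nullptr, 0);

        // Geometry is now drawn as control-point patches rather than plain triangles.
        context->IASetPrimitiveTopology(D3D11_PRIMITIVE_TOPOLOGY_3_CONTROL_POINT_PATCHLIST);
    }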

In regard to the GF100 graphics processor, NVIDIA has added new PolyMorph and Raster engines to handle world-space processing (PolyMorph) and screen-space processing (Raster). There are sixteen PolyMorph engines and four Raster engines on the GF100, which depend on an improved L2 cache to keep geometric data produced by the pipeline buffered on-die.

Four-Offset Gather4

The texture unit on previous processor architectures operated at the core clock of the GPU. On GF100, the texture units run at a higher clock, leading to improved texturing performance for the same number of units. GF100's texture units now add support for DirectX-11's BC6H and BC7 texture compression formats, reducing the memory footprint of HDR textures and render targets.

The texture units also support jittered sampling through DirectX-11's four-offset Gather4 feature, allowing four texels to be fetched from a 128×128 pixel grid with a single texture instruction. NVIDIA's GF100 implements DirectX-11 four-offset Gather4 in hardware, greatly accelerating shadow mapping, ambient occlusion, and post-processing algorithms. With jittered sampling, games can implement smoother soft shadows or custom texture filters efficiently. On the anti-aliasing front, GF100 also raises the ceiling on coverage sampling, delivering up to 32x CSAA versus 16x on the previous GT200 GPU.

GF100 Compute for Gaming

As developers continue to search for novel ways to improve their graphics engines, the GPU will need to excel at a diverse and growing set of graphics algorithms. Since these algorithms are executed via general compute APIs, a robust compute architecture is fundamental to a GPU's graphical capabilities. In essence, one can think of compute as the new programmable shader. GF100's compute architecture is designed to address a wider range of algorithms and to facilitate more pervasive use of the GPU for solving parallel problems. Many algorithms, such as ray tracing, physics, and AI, cannot exploit shared memory because their memory locality is only revealed at runtime. GF100's cache architecture was designed with these problems in mind. With up to 48 KB of L1 cache per Streaming Multiprocessor (SM) and a global L2 cache, threads that access the same memory locations at runtime automatically run faster, irrespective of the choice of algorithm.
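
The difference is easiest to see in a small kernel. The CUDA sketch below is illustrative only (it is not from the article): the gather indices are unknown until runtime, so the working set cannot be staged into shared memory in advance, and it is the L1/L2 cache hierarchy that captures any reuse.

    // Runtime-dependent gather: which elements of 'src' are touched depends on
    // 'indices', so shared memory cannot be pre-loaded. Repeated hits on the
    // same elements are served by GF100's L1/L2 caches instead.
    __global__ void gather(const float* src, const int* indices, float* dst, int n)
    {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n)
            dst[i] = src[indices[i]];   // data-dependent load
    }

    // Host side (illustrative): launch one thread per output element, and
    // optionally request the larger 48 KB L1 split on Fermi-class GPUs.
    //   cudaFuncSetCacheConfig(gather, cudaFuncCachePreferL1);
    //   gather<<<(n + 255) / 256, 256>>>(d_src, d_indices, d_dst, n);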

NVIDIA's codename NEXUS brings CPU and GPU code development together in Microsoft Visual Studio 2008 with a shared process timeline, and it introduces the first hardware-based shader debugger. NVIDIA's GF100 is the first GPU to offer full C++ support, the programming language of choice among game developers. To ease the transition to GPU programming, NVIDIA developed Nexus as a Microsoft Visual Studio programming environment for the GPU. Together with new hardware features that provide better debugging support, developers will be able to enjoy CPU-class application development on the GPU. The end result is C++ and Visual Studio integration that brings HPC users onto the same development platform. NVIDIA offers several paths to deliver compute functionality on the GF100 GPU, such as CUDA C++ for video games.

Image processing, simulation, and hybrid rendering are three primary functions of GPU compute for gaming. Using NVIDIA's GF100 GPU, interactive ray tracing becomes possible for the first time on a standard PC. Ray tracing performance on the NVIDIA GF100 is roughly 4x faster than it was on the GT200 GPU, according to NVIDIA tests. AI/path finding is a compute intensive process well suited for GPUs. The NVIDIA GF100 can handle AI obstacles approximately 3x better than on the GT200. Benefits from this improvement are faster collision avoidance and shortest path searches for higher-performance path finding.

GF100 Specifications

  • 512 CUDA Cores
  • 16 Geometry Units
  • 4 Raster Units
  • 64 Texture Units
  • 48 ROP Units
  • 384-bit GDDR5
  • DirectX-11 API Support

GeForce Specifications

Graphics Card  GeForce GTX 285  GeForce GTX 470  GeForce GTX 480

GPU Transistors  1.4 Billion  3.2 Billion  3.2 Billion
Graphics Processing Clusters  10  4  4
Streaming Multiprocessors  30  14  15
CUDA Cores  240  448  480
Texture Units  80  56  60
ROP Units  32  40  48
Graphics Clock (Fixed Function Units)  648 MHz  607 MHz  700 MHz
Processor Clock (CUDA Cores)  1476 MHz  1215 MHz  1401 MHz
Memory Clock (Clock Rate/Data Rate)  1242/2484 MHz  837/3348 MHz  924/3696 MHz
Total Video Memory  1024 MB  1280 MB  1536 MB
Memory Interface  512-Bit  320-Bit  384-Bit
Total Memory Bandwidth  159.0 GB/s  133.9 GB/s  177.4 GB/s
Texture Filtering Rate (Bilinear)  51.8 GigaTexels/s  34.0 GigaTexels/s  42.0 GigaTexels/s
GPU Fabrication Process  55 nm  40 nm  40 nm
Output Connections  2x Dual-Link DVI-I + 1x S-Video  2x Dual-Link DVI-I + 1x Mini HDMI  2x Dual-Link DVI-I + 1x Mini HDMI
Form Factor  Dual-Slot  Dual-Slot  Dual-Slot
Power Input  2x 6-Pin  2x 6-Pin  6-Pin + 8-Pin
Max Board Power (TDP)  204 Watts  215 Watts  250 Watts
Recommended PSU  550 Watts  550 Watts  600 Watts
GPU Thermal Threshold  105°C  105°C  105°C

Chart Courtesy of Benchmark Reviews

NVIDIA GF100 GPU Fermi Architecture

NVIDIA's latest GPU is codenamed GF100, and is the first graphics processor based on the Fermi architecture. In this article, Benchmark Reviews explains the technical architecture behind NVIDIA's GF100 graphics processor and offers an insight into Fermi-based GeForce video cards. For those who are not familiar, NVIDIA's GF100 GPU is their first graphics processor to support DirectX-11 hardware features such as tessellation and DirectCompute, while also adding heavy particle and turbulence effects. The GF100 GPU is also the successor to the GT200 graphics processor, which launched in the GeForce GTX 280 video card back in June 2008. NVIDIA has since redefined their focus, and GF100 demonstrates a dedication to next-generation gaming effects such as ray tracing, order-independent transparency, and fluid simulations. Rest assured, the new GF100 GPU is more powerful than the GT200 could ever be, and early results indicate a Fermi-based video card delivers far more than twice the gaming performance of a GeForce GTX-280.

GF100 is not another incremental GPU step-up like we had going from G80 to GT200. Processor cores have grown from 128 (G80) to 240 (GT200) and now reach 512, earning the title of NVIDIA CUDA (Compute Unified Device Architecture) cores. The key here is not only the name, but that the name now implies an emphasis on something more than just graphics. Each Fermi CUDA processor core has a fully pipelined integer arithmetic logic unit (ALU) and floating point unit (FPU). GF100 implements the new IEEE 754-2008 floating-point standard, providing the fused multiply-add (FMA) instruction for both single and double precision arithmetic. FMA improves over a multiply-add (MAD) instruction by doing the multiplication and addition with a single final rounding step, with no loss of precision in the addition. FMA minimizes rendering errors in closely overlapping triangles.
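
The rounding difference is small but measurable. Below is a tiny host-side illustration (not from the article, and independent of any particular GPU) of why a fused multiply-add can preserve bits that a separate multiply and add throw away.

    #include <cmath>
    #include <cstdio>

    int main()
    {
        float a = 1.0000001f, b = 1.0000001f, c = -1.0000002f;

        float mad = a * b + c;            // two roundings: product first, then sum
        float fma = std::fmaf(a, b, c);   // one rounding, per IEEE 754-2008

        // The separately rounded product loses the tiny tail of a*b, so 'mad'
        // collapses to 0 while 'fma' keeps the residual.
        std::printf("MAD: %.10e\nFMA: %.10e\n", mad, fma);
        return 0;
    }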

nvidia-fermi-gf100-gpu-block-diagram-benchmarkreviews-sm.png

NVIDIA Fermi GF100 Block Diagram (click for high-resolution)

Based on Fermi's third-generation Streaming Multiprocessor (SM) architecture, GF100 doubles the number of CUDA cores over the previous architecture. NVIDIA GeForce GF100 Fermi GPUs are based on a scalable array of Graphics Processing Clusters (GPCs), Streaming Multiprocessors (SMs), and memory controllers. The NVIDIA GF100 implements four GPCs, sixteen SMs, and six memory controllers. Expect NVIDIA to launch GF100 products with different configurations of GPCs, SMs, and memory controllers to address different price points.

CPU commands are read by the GPU via the Host Interface. The GigaThread Engine fetches the specified data from system memory and copies it to the frame buffer. GF100 implements six 64-bit GDDR5 memory controllers (384-bit total) to facilitate high-bandwidth access to the frame buffer. The GigaThread Engine then creates and dispatches thread blocks to the various SMs. Individual SMs in turn schedule warps (groups of 32 threads) onto their CUDA cores and other execution units. The GigaThread Engine also redistributes work to the SMs when work expansion occurs in the graphics pipeline, such as after the tessellation and rasterization stages.
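
As a rough CUDA C++ sketch of that hierarchy (illustrative only, not from the article): an application launches a grid of thread blocks, the GigaThread engine hands those blocks to whichever SMs have room, and each SM issues its resident threads in warps of 32.

    #include <cuda_runtime.h>

    // Trivial kernel: each thread scales one element of the array.
    __global__ void scale(float* data, int n)
    {
        int i = blockIdx.x * blockDim.x + threadIdx.x;   // global thread index
        if (i < n) data[i] *= 2.0f;
    }

    int main()
    {
        const int n = 1 << 20;                 // one million elements
        float* d_data = nullptr;
        cudaMalloc(&d_data, n * sizeof(float));

        // 256 threads per block = 8 warps of 32; the 4096 blocks are distributed
        // across the SMs by the GigaThread engine in whatever order slots free up.
        const int threads = 256;
        const int blocks  = (n + threads - 1) / threads;
        scale<<<blocks, threads>>>(d_data, n);

        cudaDeviceSynchronize();
        cudaFree(d_data);
        return 0;
    }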

GF100 implements 512 CUDA cores, organized as 16 SMs of 32 cores each. Each SM is a highly parallel multiprocessor supporting up to 48 warps at any given time. Each CUDA core is a unified processor core that executes vertex, pixel, geometry, and compute kernels. A unified L2 cache architecture services load, store, and texture operations. GF100 has 48 ROP units for pixel blending, antialiasing, and atomic memory operations. The ROP units are organized in six groups of eight, and each group is serviced by a 64-bit memory controller. The memory controller, L2 cache, and ROP group are closely coupled; scaling one unit automatically scales the others.

NVIDIA GigaThread Thread Scheduler

One of the most important technologies of the Fermi architecture is its two-level, distributed thread scheduler. At the chip level, a global work distribution engine schedules thread blocks to various SMs, while at the SM level, each warp scheduler distributes warps of 32 threads to its execution units. The first generation GigaThread engine introduced in G80 managed up to 12,288 threads in real-time. The Fermi architecture improves on this foundation by providing not only greater thread throughput, but dramatically faster context switching, concurrent kernel execution, and improved thread block scheduling.

What's new in Fermi?

With any new technology, consumers want to know what's new in the product. The goal of this article is to share in-depth information surrounding the Fermi architecture, as well as the new functionality unlocked in GF100. For clarity, the 'GF' letters used in the GF100 GPU name are not an abbreviation for 'GeForce'; they actually denote that this GPU is a Graphics solution based on the Fermi architecture. The next generation of NVIDIA GeForce-series desktop video cards will use the GF100 to promote the following new features:

  • Third Generation Streaming Multiprocessor (SM)
    o 32 CUDA cores per SM, 4x over GT200
    o 8x the peak double precision floating point performance over GT200
    o Dual Warp Scheduler simultaneously schedules and dispatches instructions from two independent warps
    o 64 KB of RAM with a configurable partitioning of shared memory and L1 cache
  • Second Generation Parallel Thread Execution ISA
    o Unified Address Space with Full C++ Support
    o Optimized for OpenCL and DirectCompute
    o Full IEEE 754-2008 32-bit and 64-bit precision
    o Full 32-bit integer path with 64-bit extensions
    o Memory access instructions to support transition to 64-bit addressing
    o Improved Performance through Predication
  • Improved Memory Subsystem
    o NVIDIA Parallel DataCache hierarchy with Configurable L1 and Unified L2 Caches
    o First GPU with ECC memory support
    o Greatly improved atomic memory operation performance
  • NVIDIA GigaThread Engine
    o 10x faster application context switching
    o Concurrent kernel execution
    o Out of Order thread block execution
    o Dual overlapped memory transfer engines

Benchmark Reviews offers more detail in our full-length NVIDIA GF100 GPU Fermi Graphics Architecture guide.

Closer Look: Zotac GeForce GTX 470

So far, 2010 has been an exciting year for game developers. Microsoft Windows 7 (and updated Windows Vista) Operating Systems introduced gamers to DirectX-11, allowing video games released for the PC platform to look better than ever. DirectX-11 is the leap in video game software development we've been waiting for. Screen Space Ambient Occlusion (SSAO) is given emphasis in DX11, allowing some of the most detailed computer textures gamers have ever seen. Realistic cracks in mud with definable depth and splintered tree bark make the game more realistic, but they also make new demands on the graphics hardware. This new level of graphical detail requires a new level of computer hardware: DX11-compliant hardware. Tessellation adds a tremendous level of strain on the GPU, making previous graphics hardware virtually obsolete with new DX11 game titles.

The ZOTAC GeForce GTX-470 video card, model ZT-40201-10P, offers gamers a healthy dose of graphics processing power for their money. But the GeForce GTX-470 is more than just a tool for video games; it's also a tool for professional environments that make use of GPGPU-accelerated software, such as Adobe Premiere Pro and Photoshop.

Zotac_GeForce_GTX470_Retail.jpg

Zotac's GeForce GTX 470 retail package is fairly basic: graphics card, support manual, driver software, two four-pin Molex-to-six-pin PCI-E power adapters, a mini-HDMI-to-HDMI adapter, and a DVI-to-VGA adapter. The Zotac ZT-40201-10P kit does not include any bundled PC video games, but it does offer several CUDA-enabled applications on a supplemental software CD.

The GeForce GTX 470 is a 2.67-inch tall, double-bay, 4.376-inch (111.15 mm) wide graphics card with a 9.5-inch (241.3 mm) long profile. ZOTAC's GeForce GTX 470 retains the reference cooler design, which is more than adequate for the reduced-output Fermi GPU.

Zotac_GeForce_GTX470_Video_Card_Top.jpg

As with past GeForce video cards, the GPU offers two output 'lanes', so only two of the three output connections can operate at once. NVIDIA has retained two DVI outputs on the GeForce GTX 470 so dual-monitor configurations can be utilized, and by adding a second video card users can enjoy GeForce 3D-Vision Surround functionality.

Other changes occur in more subtle ways, such as replacing the S-Video connection with a more relevant (mini) HDMI 1.3a A/V output. In past GeForce products, the HDMI port was limited to video-only output and required a separate audio output. Native HDMI 1.3 support is available to the GeForce GTX 470, which allows direct output to HDTVs and compatible monitors.

NVIDIA_GeForce_GTX-480_Video_Card_Header.jpg

The new 40nm fabrication process makes room for more transistors on the die, increasing the count from 1.4-billion in the GT200 GPU found on the GeForce GTX 285 to an astounding 3.2-billion in the Fermi GF100 GPU used on the ZOTAC GeForce GTX 470 (the same amount that resides on the NVIDIA GeForce GTX 480).

Zotac_GeForce_GTX470_Angle.jpg

Similar to past-generation NVIDIA products, the GeForce 400-series also supports triple-SLI capability. While the GTX 470 disables one more Streaming Multiprocessor (SM) than the GeForce GTX 480 does, a heavy-duty thermal management system is still necessary. Under load the ZOTAC GTX 470 video card reached 88°C in a 20°C room with no additional cooling assistance from the computer case, which is identical to the retail GTX 480 under 100% load.

The plastic fan shroud unsnaps from the rest of the unit without any tools, revealing a large finsink with four heat-pipes spanning out from the GPU.

Zotac_GeForce_GTX470_Heatsink.jpg

Upon close inspection of the printed circuit board (PCB), there's an opening beneath the blower fan that allows intake air to be drawn from either side of the unit. This pays dividends when the GeForce GTX 470 is added into an SLI or triple-SLI set.

The PCB is a busy place on the GeForce GTX 470. Many of the electronic components have been relocated to the 'top' side of the PCB to better accommodate the fully-grown 530 mm² GF100 GPU and its 3.2-billion transistors. The 448 CUDA cores operate at 1215 MHz, which keeps a firm lead over ATI's 850 MHz Cypress-XT GPU that measures 334 mm² and fits 2.154-billion transistors.

Zotac_GeForce_GTX470_PCB.jpg

In the next several sections, Benchmark Reviews explains our video card test methodology, followed by a performance comparison of the ZOTAC GeForce GTX 470 against several of the most popular graphics accelerators available. The GeForce GTX 470 directly competes against the ATI Radeon HD 5850; so we'll be keeping a close eye on comparative performance and value.

VGA Testing Methodology

The Microsoft DirectX-11 graphics API is native to the Microsoft Windows 7 Operating System, and will be the primary O/S for our test platform. DX11 is also available as a Microsoft Update for the Windows Vista O/S, so our test results apply to both versions of the Operating System. The majority of benchmark tests used in this article are comparative to DX11 performance, however some high-demand DX10 tests have also been included.

According to the Steam Hardware Survey published for the month ending April 2010, the most popular gaming resolution is 1280x1024 (17-19" standard LCD monitors). However, because this 1.31MP resolution is considered 'low' by most standards, our benchmark performance tests concentrate on higher-demand resolutions: 1.76MP 1680x1050 (22-24" widescreen LCD) and 2.30MP 1920x1200 (24-28" widescreen LCD monitors). These resolutions are more likely to be used by high-end graphics solutions, such as those tested in this article.

Zotac_GeForce-GTX470_GPUZ.gif

In each benchmark test there is one 'cache run' that is conducted, followed by five recorded test runs. Results are collected at each setting with the highest and lowest results discarded. The remaining three results are averaged, and displayed in the performance charts on the following pages.
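
Expressed as code, that scoring step looks roughly like the hypothetical helper below; it is an illustration of the method described above, not Benchmark Reviews' actual test tooling.

    #include <algorithm>
    #include <cstdio>

    // Five recorded runs: discard the best and worst, average the middle three.
    float scoreRuns(float runs[5])
    {
        std::sort(runs, runs + 5);                      // ascending order
        return (runs[1] + runs[2] + runs[3]) / 3.0f;    // drop runs[0] and runs[4]
    }

    int main()
    {
        float fps[5] = {41.2f, 40.8f, 43.0f, 41.5f, 39.9f};  // hypothetical results
        std::printf("Reported FPS: %.1f\n", scoreRuns(fps)); // averages 40.8, 41.2, 41.5
        return 0;
    }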

A combination of synthetic and video game benchmark tests have been used in this article to illustrate relative performance among graphics solutions. Our benchmark frame rate results are not intended to represent real-world graphics performance, as this experience would change based on supporting hardware and the perception of individuals playing the video game.

Cost to Performance Ratio

For this article Benchmark Reviews has included cost-per-FPS graphics performance results (a worked example follows the price list below). Only the least expensive price is used, and the calculation does not consider tax, freight, promotional offers, or rebates. All prices reflect the product series, and do not represent any specific manufacturer, model, or brand. These retail prices for each product were obtained from NewEgg.com on 10-May-2010:

  • $200 Radeon HD 4890
  • $350 GeForce GTX 285
  • $310 Radeon HD 5850
  • $350 GeForce GTX 470
  • $390 Radeon HD 5870
  • $500 GeForce GTX 480
  • $700 Radeon HD 5970
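
The arithmetic behind the cost-analysis bullets on the following pages is simply the street price above divided by the card's average frame rate in a given test. A quick sketch with a hypothetical frame rate:

    #include <cstdio>

    // Cost per frame = street price / average FPS in a given benchmark.
    int main()
    {
        const float price  = 350.0f;   // GeForce GTX 470 street price (USD)
        const float avgFps = 45.0f;    // hypothetical benchmark result
        std::printf("$%.2f per FPS\n", price / avgFps);   // prints $7.78 per FPS
        return 0;
    }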

Intel X58-Express Test System

Benchmark Applications

  • 3DMark Vantage v1.02 (Extreme Quality, 8x Multisample Anti-Aliasing, 16x Anisotropic Filtering, 1:2 Scale)
  • Battlefield: Bad Company 2 (High Quality, HBAO, 8x Anti-Aliasing, 16x Anisotropic Filtering, Single-Player Intro Scene)
  • BattleForge v1.2 (Very High Quality, 8x Anti-Aliasing, Auto Multi-Thread)
  • Crysis Warhead v1.1 with HOC Benchmark (DX10, Very High Quality, 4x Anti-Aliasing, 16x Anisotropic Filtering, Airfield Demo)
  • Far Cry 2 v1.02 (DX10, Very High Performance, Ultra-High Quality, 8x Anti-Aliasing, HDR + Bloom)
  • Resident Evil 5 Benchmark (DX10, Super-High Quality, 8x MSAA)
  • Metro 2033 (Very-High Quality, DirectX-11, AAA, 16x AF, Scene 1 Intro)
  • Unigine Heaven Benchmark 2.0 (DX11, Normal Tessellation, 16x AF, 4x AA)

Video Card Test Products

Graphics Card Radeon HD4890 GeForce GTX285 Radeon HD5850 GeForce GTX470 Radeon HD5870 GeForce GTX480 Radeon HD5970
GPU Cores 800 240 1440 448 1600 480 3200 (1600 per GPU)
Core Clock (MHz) 850 670 725 608 850 700 725
Shader Clock (MHz) N/A 1550 N/A 1215 N/A 1401 N/A
Memory Clock (MHz) 975 1300 1000 837 1200 924 1000
Memory Amount 1024 MB GDDR5 1024MB GDDR3 1024MB GDDR5 1280MB GDDR5 1024MB GDDR5 1536MB GDDR5 2048MB GDDR5
Memory Interface 256-bit 512-bit 256-bit 320-bit 256-bit 384-bit 512-bit (256-bit per GPU)

3DMark Vantage GPU Tests

3DMark Vantage is a PC benchmark suite designed to test DirectX-10 graphics card performance. 3DMark Vantage is the latest addition to the 3DMark benchmark series built by Futuremark Corporation. Although 3DMark Vantage requires NVIDIA PhysX to be installed for program operation, only the CPU/Physics test relies on this technology.

3DMark Vantage offers benchmark tests focusing on GPU, CPU, and Physics performance. Benchmark Reviews uses the two GPU-specific tests for grading video card performance: Jane Nash and New Calico. These tests isolate graphical performance, and remove processor dependence from the benchmark results.

3DMark Vantage GPU Test: Jane Nash

Of the two GPU tests 3DMark Vantage offers, the Jane Nash performance benchmark is slightly less demanding. In a short video scene the special agent escapes a secret lair by water, nearly losing her shirt in the process. Benchmark Reviews tests this DirectX-10 scene at 1680x1050 and 1920x1200 resolutions, and uses Extreme quality settings with 8x anti-aliasing and 16x anisotropic filtering. The 1:2 scale is utilized, and is the highest this test allows. By maximizing the processing levels of this test, the scene creates the highest level of graphical demand possible and sorts the strong from the weak.

3dMark_Vantage_Jane_Nash_Benchmark.jpg

Cost Analysis: Jane Nash (1920x1200)

  • Radeon HD 4890: $13.42 per FPS
  • GeForce GTX 285: $18.33 per FPS
  • Radeon HD 5850: $13.48 per FPS
  • GeForce GTX 470: $15.77 per FPS
  • Radeon HD 5870: $13.54 per FPS
  • GeForce GTX 480: $17.79 per FPS
  • Radeon HD 5970: $17.37 per FPS
  • GeForce GTX480 SLI: $20.12 per FPS

3DMark Vantage GPU Test: New Calico

New Calico is the second GPU test in the 3DMark Vantage test suite. Of the two GPU tests, New Calico is the more demanding. In a short video scene featuring a galactic battleground, there is a massive display of busy objects across the screen. Benchmark Reviews tests this DirectX-10 scene at 1680x1050 and 1920x1200 resolutions, and uses Extreme quality settings with 8x anti-aliasing and 16x anisotropic filtering. The 1:2 scale is utilized, and is the highest this test allows. Using the highest graphics processing level available allows our test products to separate themselves and stand out (if possible).

3dMark_Vantage_New_Calico_Benchmark.jpg

Cost Analysis: New Calico (1920x1200)

  • Radeon HD 4890: $16.95 per FPS
  • GeForce GTX 285: $25.18 per FPS
  • Radeon HD 5850: $18.02 per FPS
  • GeForce GTX 470: $20.00 per FPS
  • Radeon HD 5870: $17.89 per FPS
  • GeForce GTX 480: $22.32 per FPS
  • Radeon HD 5970: $20.59 per FPS
  • GeForce GTX480 SLI: $23.92 per FPS

Test Summary: 3DMark Vantage shows the ZOTAC GeForce GTX-470 delivering nearly identical graphical performance to the ATI Radeon HD 5850, but it loses the fight when we analyze price per FPS. In this test, the GeForce GTX 470 does not bridge the gap between the Radeon HD5850 and HD5870.

Battlefield: Bad Company 2

The Battlefield franchise has been known to demand a lot from PC graphics hardware. DICE (Digital Illusions CE) has incorporated their Frostbite-1.5 game engine with the Destruction-2.0 feature set into Battlefield: Bad Company 2. The game features destructible environments using Frostbite Destruction-2.0, and adds gravitational bullet-drop effects for projectiles shot from weapons at long distance. The Frostbite-1.5 game engine used in Battlefield: Bad Company 2 consists of DirectX-10 primary graphics, with improved performance and softened dynamic shadows added for DirectX-11 users.

At the time Battlefield: Bad Company 2 was published, DICE was also working on the Frostbite-2.0 game engine. This upcoming engine will include native support for DirectX-10.1 and DirectX-11, as well as parallelized processing support for 2-8 parallel threads. This will improve performance for users with an Intel Core-i7 processor, although the Extreme Edition Intel Core i7-980X six-core CPU with twelve threads will not see full utilization.

In our benchmark tests of Battlefield: Bad Company 2, the first three minutes of action in the single-player raft night scene are captured with FRAPS. Relative to the online multiplayer action, these frame rate results are nearly identical to daytime maps with the same video settings.

Battlefield-Bad-Company-2_Benchmark.jpg

The Frostbite-1.5 game engine in Battlefield: Bad Company 2 appears to equalize our test set of video cards, and despite AMD's sponsorship of the game it still plays well on any brand of graphics card.

Cost Analysis: Battlefield: Bad Company 2

  • Radeon HD 4890: $6.25 per FPS
  • GeForce GTX 285: $7.29 per FPS
  • Radeon HD 5850: $5.96 per FPS
  • GeForce GTX 470: $7.78 per FPS
  • Radeon HD 5870: $6.29 per FPS
  • GeForce GTX 480: $6.58 per FPS
  • Radeon HD 5970: $8.05 per FPS
  • GeForce GTX480 SLI: $9.09 per FPS

Test Summary: Battlefield: Bad Company 2's Frostbite-1.5 game engine forces the Zotac GeForce GTX 470 to trail behind the Radeon HD 5850 and operate at the same level as the outgoing GeForce GTX 285. I was so surprised by the result that, after all of the other video cards were tested, I re-tested just to confirm. Sure enough, the GTX 470 doesn't spit out frames like the GTX 480, or even the GTX 285. Because of this drop in DX10 performance compared to the others, the relative price per frame costs more than the competition.

BattleForge Performance

BattleForge is a free Massive Multiplayer Online Role Playing Game (MMORPG) developed by EA Phenomic with DirectX-11 graphics capability. Combining strategic cooperative battles, the community of MMO games, and trading card gameplay, BattleForge players are free to put their creatures, spells and buildings into any combinations they see fit. These units are represented in the form of digital cards from which you build your own unique army. With minimal resources and a custom tech tree to manage, the gameplay is unbelievably accessible and action-packed.

Benchmark Reviews uses the built-in graphics benchmark to measure performance in BattleForge, using Very High quality settings (detail) and 8x anti-aliasing with auto multi-threading enabled. BattleForge is one of the first titles to take advantage of DirectX-11 in Windows 7, and offers a very robust color range throughout the busy battleground landscape. The charted results illustrate how performance measures up between video cards when Screen Space Ambient Occlusion (SSAO) is enabled.

BattleForge_DX11_Benchmark.jpg

Cost Analysis: BattleForge (1920x1200)

  • Radeon HD 4890: $8.70 per FPS
  • GeForce GTX 285: $9.72 per FPS
  • Radeon HD 5850: $9.39 per FPS
  • GeForce GTX 470: $8.14 per FPS
  • Radeon HD 5870: $10.00 per FPS
  • GeForce GTX 480: $8.07 per FPS
  • Radeon HD 5970: $12.50 per FPS
  • GeForce GTX480 SLI: $9.71 per FPS

Test Summary: With an unbiased appetite for raw DirectX-11 graphics performance, BattleForge appears to be impartial towards ATI and NVIDIA products, despite sponsorship from AMD. When high-strain SSAO is called into action, Zotac's GTX 470 demonstrates how well Fermi is suited for DX11... trumping ATI's best single-GPU Radeon HD5870 by 10% and the HD5850 by 30%. Additionally, the ZOTAC GeForce GTX 470 video card offers one of the best price-to-performance ratios in BattleForge.

Crysis Warhead Tests

Crysis Warhead is an expansion pack based on the original Crysis video game. Crysis Warhead is set in the future, where an ancient alien spacecraft has been discovered beneath the Earth on an island east of the Philippines. Crysis Warhead uses a refined version of the CryENGINE2 graphics engine. Like Crysis, Warhead uses the Microsoft Direct3D 10 (DirectX-10) API for graphics rendering.

Benchmark Reviews uses the HOC Crysis Warhead benchmark tool to test and measure graphics performance using the Airfield 1 demo scene. This short test places a high amount of stress on a graphics card because of its detailed terrain and textures, but also because of the test settings used. Using the DirectX-10 test with Very High Quality settings, the Airfield 1 demo scene receives 4x anti-aliasing and 16x anisotropic filtering to create maximum graphics load and separate the products according to their performance.

Using the highest-quality DirectX-10 settings with 4x AA and 16x AF, only the most powerful graphics cards are expected to perform well in our Crysis Warhead benchmark tests. DirectX-11 extensions are not supported in Crysis: Warhead, and SSAO is not an available option.

Crysis_Warhead_Benchmark.jpg

Cost Analysis: Crysis Warhead (1920x1200)

  • Radeon HD 4890: $11.11 per FPS
  • GeForce GTX 285: $16.67 per FPS
  • Radeon HD 5850: $15.50 per FPS
  • GeForce GTX 470: $13.46 per FPS
  • Radeon HD 5870: $14.44 per FPS
  • GeForce GTX 480: $15.15 per FPS
  • Radeon HD 5970: $17.95 per FPS
  • GeForce GTX480 SLI: $20 per FPS*

* Possible maximum frame rate limit imposed by either the PC video game or the HOC benchmark tool.

Test Summary: The CryENGINE2 graphics engine used in Crysis Warhead responds well to both ATI and NVIDIA products. The Zotac GeForce GTX 470 beats the ATI Radeon HD 5850 by 30% and matches the performance of the Radeon HD 5870. For Crysis fans, the Zotac GeForce GTX 470 offers the best price-to-performance ratio.

Far Cry 2 Benchmark

Ubisoft has developed Far Cry 2 as a sequel to the original, but with a very different approach to gameplay and story line. Far Cry 2 features a vast world built on Ubisoft's new game engine called Dunia, meaning "world", "earth" or "living" in Farsi. The setting in Far Cry 2 is a fictional Central African landscape, set in a modern-day timeline.

The Dunia engine was built specifically for Far Cry 2 by the Ubisoft Montreal development team. It delivers realistic semi-destructible environments, special effects such as dynamic fire propagation and storms, real-time night-and-day sunlight and moonlight cycles, a dynamic music system, and non-scripted enemy A.I. actions.

The Dunia game engine takes advantage of multi-core processors as well as multiple processors, and supports DirectX-9 as well as DirectX-10. Only 2 or 3 percent of the original CryEngine code is re-used, according to Michiel Verheijdt, Senior Product Manager for Ubisoft Netherlands. Additionally, the engine is less hardware-demanding than CryEngine 2, the engine used in Crysis.

However, it should be noted that Crysis delivers greater character and object texture detail, as well as more destructible elements within the environment. For example, trees break into many smaller pieces and buildings break down to their component panels. Far Cry 2 also supports the amBX technology from Philips. With the proper hardware, this adds effects like vibrations, ambient colored lights, and fans that generate wind effects.

There is a benchmark tool in the PC version of Far Cry 2, which offers an excellent array of settings for performance testing. Benchmark Reviews used the maximum settings allowed for DirectX-10 tests, with the resolution set to 1920x1200. Performance settings were all set to 'Very High', Render Quality was set to 'Ultra High' overall quality, 8x anti-aliasing was applied, and HDR and Bloom were enabled.

Far_Cry_2_Benchmark_Performance.jpg

Although the Dunia engine in Far Cry 2 is slightly less demanding than the CryEngine 2 engine in Crysis, the strain appears to be extremely close. In Crysis we didn't dare test AA above 4x, whereas we used 8x AA and 'Ultra High' settings in Far Cry 2. The end effect was a separation between what is capable of maximum settings and what is not. Using the short 'Ranch Small' time demo (which yields the lowest FPS of the three tests available), we noticed that there are very few products capable of producing playable frame rates with the settings all turned up.

Cost Analysis: Far Cry 2 (1920x1200)

  • Radeon HD 4890: $6.45 per FPS
  • GeForce GTX 285: $7.78 per FPS
  • Radeon HD 5850: $7.05 per FPS
  • GeForce GTX 470: $5.65 per FPS
  • Radeon HD 5870: $7.50 per FPS
  • GeForce GTX 480: $6.41 per FPS
  • Radeon HD 5970: $9.21 per FPS
  • GeForce GTX480 SLI: $8.07 per FPS

Test Summary: The Dunia game engine appears preferential to NVIDIA products over ATI, and if you're one of the many gamers who like to play Far Cry 2, perhaps you should prefer their products too. The new NVIDIA GeForce GTX 470 absolutely dominates the Radeon HD 5850, and beats the Radeon HD5870 by nearly 20%. In Far Cry 2, the Zotac GeForce GTX 470 offers the best cost per FPS of any DX11 video card, as well as past DX10 champions.

Resident Evil 5 Tests

Built upon an advanced version of Capcom's proprietary MT Framework game engine to deliver DirectX-10 graphics detail, Resident Evil 5 offers gamers non-stop action similar to Devil May Cry 4, Lost Planet, and Dead Rising. The MT Framework is an exclusive seventh-generation game engine built to be used with games developed for the PlayStation 3, Xbox 360, and PC ports. MT stands for "Multi-Thread", "Meta Tools" and "Multi-Target". Games using the MT Framework are originally developed on the PC and then ported to the other two console platforms.

On the PC version of Resident Evil 5, both DirectX-9 and DirectX-10 modes are available for the Microsoft Windows XP and Vista Operating Systems, while Microsoft Windows 7 plays Resident Evil 5 through its backwards-compatible Direct3D APIs. Resident Evil 5 is branded with the NVIDIA The Way It's Meant to be Played (TWIMTBP) logo, and receives NVIDIA GeForce 3D Vision functionality enhancements.

NVIDIA and Capcom offer the Resident Evil 5 benchmark demo as a free download from their websites, and Benchmark Reviews encourages visitors to compare their own results to ours. Because the Capcom MT Framework game engine is very well optimized and produces high frame rates, Benchmark Reviews uses the DirectX-10 version of the test at 1920x1200 resolution. Super-High quality settings are configured, with 8x MSAA post-processing effects for maximum demand on the GPU. Test scenes from Area #3 and Area #4 require the most graphics processing power, and the results are collected for the chart illustrated below.

Resident_Evil_5_Benchmark.jpg

Resident Evil 5 really proves how good the proprietary Capcom MT Framework game engine can look with DirectX-10 effects. The Area #3 and #4 tests are the most graphically demanding in this free downloadable benchmark demo, but the results make it appear that the Area #3 test scene performs better with NVIDIA GeForce products, while the Area #4 scene favors ATI Radeon GPUs.

Cost Analysis: Resident Evil 5 (Area 4)

  • Radeon HD 4890: $3.50 per FPS
  • GeForce GTX 285: $4.32 per FPS
  • Radeon HD 5850: $3.97 per FPS
  • GeForce GTX 470: $3.61 per FPS
  • Radeon HD 5870: $3.98 per FPS
  • GeForce GTX 480: $4.24 per FPS
  • Radeon HD 5970: $5.69 per FPS
  • GeForce GTX480 SLI: $5.85 per FPS

Test Summary: It's unclear whether Resident Evil 5 graphics performance fancies ATI or NVIDIA, especially with two test scenes that alternate favoritism. Although this benchmark tool is distributed directly by NVIDIA, and Forceware drivers likely have optimizations written for the Resident Evil 5 game, there doesn't appear to be any decisive tilt towards GeForce products over their ATI Radeon counterparts from within the game itself. Test scene #3 certainly favors the GeForce GTX 470, and leads it ahead of every other product tested. In test scene #4 the Radeon video cards appear more competitive, although the GTX 470 still beats the Radeon HD 5850 by 25% and matches performance with the HD5870. Once again, the Zotac GTX 470 earns respect as the most value-packed video card.

Metro 2033 DX11 Performance

Metro 2033 is an action-oriented video game combining survival horror and first-person shooter elements. The game is based on the novel Metro 2033 by Russian author Dmitry Glukhovsky. It was developed by 4A Games in Ukraine and released in March 2010 for Microsoft Windows. Metro 2033 uses the 4A game engine, developed by 4A Games. The 4A Engine supports DirectX-9, 10, and 11, along with NVIDIA PhysX and GeForce 3D Vision.

The 4A engine is multi-threaded in such a way that only PhysX has a dedicated thread; it uses a task model without any pre-conditioning or pre/post-synchronizing, allowing tasks to be done in parallel. The 4A game engine can utilize a deferred shading pipeline, uses tessellation for greater performance, and also has HDR (complete with blue shift), real-time reflections, color correction, film grain and noise, and support for multi-core rendering.

Metro 2033 features superior volumetric fog, double PhysX precision, object blur, sub-surface scattering for skin shaders, parallax mapping on all surfaces, and greater geometric detail with less aggressive LODs. Using PhysX, the engine delivers features such as destructible environments, cloth and water simulations, and particles that can be fully affected by environmental factors.

Metro-2033_DX11_Benchmark.jpg

NVIDIA has been diligently working to promote Metro 2033, and for good reason: it is the most demanding PC video game we've ever tested. When their flagship GeForce GTX-480 struggles to produce 27 FPS with DirectX-11 anti-aliasing turned down to its lowest setting, you know that only the strongest graphics processors will generate playable frame rates.

Cost Analysis: Metro 2033

  • Radeon HD 5850: $16.32 per FPS
  • GeForce GTX 470: $16.67 per FPS
  • Radeon HD 5870: $17.73 per FPS
  • GeForce GTX 480: $18.82 per FPS
  • Radeon HD 5970: $20.00 per FPS
  • GeForce GTX480 SLI: $21.74 per FPS

Test Summary: There's no way to ignore the graphical demands of Metro 2033, and only the most powerful GPUs will deliver a decent visual experience. Those demands make our selection of DX11 video cards appear flat, especially when two GeForce GTX-480 Fermi video cards combined into an SLI set only produce 46 FPS. For what it's worth, the Zotac GeForce GTX 470 led the Radeon HD 5850 by 1 FPS, and also matched its value.

Unigine Heaven Benchmark

The Unigine "Heaven 2.0" benchmark is a freely available tool that unleashes the graphics capabilities of DirectX-11 on Windows 7 or updated Windows Vista Operating Systems. It reveals the enchanting magic of floating islands with a tiny village hidden in the cloudy skies, and with the interactive mode the experience of exploring this intricate world is within reach. Through its advanced renderer, Unigine is one of the first to showcase art assets with tessellation, bringing compelling visual finesse, utilizing the technology to its full extent and exhibiting the possibilities of enriching 3D gaming.

The distinguishing feature of the Unigine Heaven benchmark is hardware tessellation: a scalable technology for automatic subdivision of polygons into smaller and finer pieces, so that developers can gain a more detailed look for their games almost free of charge in terms of performance. Thanks to this procedure, the detail of the rendered image finally approaches the boundary of lifelike visual perception: a virtual reality conjured by your hand. The "Heaven" benchmark excels at providing the following key features:

  • Native support of OpenGL, DirectX 9, DirectX-10 and DirectX-11
  • Comprehensive use of tessellation technology
  • Advanced SSAO (screen-space ambient occlusion)
  • Volumetric cumulonimbus clouds generated by a physically accurate algorithm
  • Dynamic simulation of changing environment with high physical fidelity
  • Interactive experience with fly/walk-through modes
  • ATI Eyefinity support

Unigine_Heaven_DX11_Benchmark.jpg

Although Heaven-2.0 was recently released and used for our DirectX-11 tests, the benchmark results were extremely close to those obtained with Heaven-1.0 testing. Since only DX11-compliant video cards will properly run the Heaven benchmark, only those products that meet the requirements have been included.

Cost Analysis: Unigine Heaven

  • Radeon HD 5850: $12.92 per FPS
  • GeForce GTX 470: $11.67 per FPS
  • Radeon HD 5870: $13.45 per FPS
  • GeForce GTX 480: $13.51 per FPS
  • Radeon HD 5970: $14.89 per FPS
  • GeForce GTX480 SLI: $14.71 per FPS

Test Summary: Our test results with the Unigine Heaven benchmark tool appear to deliver fair comparisons of DirectX-11 graphics cards. The NVIDIA GeForce GTX 470 clears past the Radeon HD 5850 by 25%, and beats the HD5870 by 1 FPS. Reviewers like to say "Nobody plays a benchmark", but it seems evident that we can expect to see great things come from a tool this detailed. For now though, those details only come by way of DirectX-11 video cards. It's worth noting that, if a purchase price were attached to Unigine Heaven results, the Zotac GeForce GTX 470 would cost less per frame than every other DirectX-11 video card.

NVIDIA APEX PhysX Enhancements

Mafia II is the first PC video game title to include the new NVIDIA APEX PhysX framework, a powerful feature set that only GeForce video cards are built to deliver. While console versions will make use of PhysX, only the PC version supports NVIDIA's APEX PhysX physics modeling engine, which adds the following features: APEX Destruction, APEX Clothing, APEX Vegetation, and APEX Turbulence. PhysX helps make object movement more fluid and lifelike, such as cloth and debris. In this section, Benchmark Reviews details the differences with and without APEX PhysX enabled.

We begin with a scene from the Mafia II benchmark test, which has the player pinned down behind a brick column as the enemy shoots at him. Examine the image below, which was taken with a Radeon HD 5850 configured with all settings turned to their highest and APEX PhysX support disabled:

Mafia2_Cloth_High_No-PhysX.jpg

No PhysX = Cloth Blending and Missing Debris

Notice from the image above that when PhysX is disabled there is no broken stone debris on the ground. Cloth from the foreground character's trench coat blends into his leg and remains in a static position relative to his body, as does the clothing on other (AI) characters. Now inspect the image below, which uses the GeForce GTX 460 with APEX PhysX enabled:

Mafia2_Cloth_High_PhysX.jpg

Realistic Cloth and Debris - High Quality Settings With PhysX

With APEX PhysX enabled, the cloth neatly sways with the contour of a character's body, and doesn't bleed into solid objects such as body parts. Additionally, APEX Clothing features improve realism by adding gravity and wind effects to clothing, allowing characters to look like they would in similar real-world environments.

Mafia2_PhysX_Fire.jpg

Burning Destruction Smoke and Vapor Realism

Flames aren't exactly new to video games, but smoke plumes and heat vapor that mimic realistic movement have never looked as real as they do with APEX Turbulence. Fire and explosions added into a destructible environment are a potent combination for virtual-world mayhem, showcasing the new PhysX APEX Destruction feature.

Mafia2_PhysX_Glass.jpg

Exploding Glass Shards and Bursting Flames

NVIDIA PhysX has changed video game explosions into something worthy of cinema-level special effects. Bursting windows explode into several unique shards of glass, and destroyed crates bust into splintered kindling. Smoke swirls and moves as if there's an actual air current, and flames move out towards open space all on their own. Surprisingly, there is very little impact on FPS performance with APEX PhysX enabled on GeForce video cards, and very little penalty for changing from medium (normal) to high settings.

    NVIDIA 3D-Vision Effects

    Readers familiar with Benchmark Reviews have undoubtedly heard of NVIDIA GeForce 3D Vision technology; if not from our review of the product, then for the Editor's Choice Award it's earned or the many times I've personally mentioned it in out articles. Put simply: it changes the game. 2010 has been a break-out year for 3D technology, and PC video games are leading the way. Mafia II is expands on the three-dimensional effects, and improves the 3D-Vision experience with out-of-screen effects. For readers unfamiliar with the technology, 3D-Vision is a feature only available to NVIDIA GeForce video cards.

    Mafia2_3d-Vision_Characters.jpg

    The first thing gamers should be aware of is the performance penalty for using 3D-Vision with a high-demand game like Mafia II. Using a GeForce GTX 480 video card for reference, currently the most powerful single-GPU graphics solution available, we experienced frame rate speeds up to 33 FPS with all settings configured to their highest and APEX PhysX set to high. However, when 3D Vision is enabled the video frame rate usually decrease by about 50%. This is no longer the hardfast rule, thanks to '3D Vision Ready' game titles that offer performance optimizations. Mafia II proved that the 3D Vision performance penalty can be as little as 30% with a single GeForce GTX 480 video card, or a mere 11% in SLI configuration. NVIDIA Forceware drivers will guide players to make custom-recommended adjustments specifically for each game they play, but PhysX and anti-aliasing will still reduce frame rate performance.

    Mafia2_3d-Vision_Tire_Door.jpg

    Of course, the out-of-screen effects are worth every dollar you spend on graphics hardware. In the image above, an explosion sends the car's wheel and door flying into the players face, followed by metal debris and sparks. When you're playing, this certainly helps to catch your attention... and when the objects become bullets passing by you, the added depth of field helps assist in player awareness.

    Mafia2_3d-Vision_Debris.jpg

    Combined with APEX PhysX technology, NVIDIA's 3D-Vision brings destructible walls to life. As enemies shoot at the brick column, dirt and dust fly past the player forcing stones to tumble out towards you. Again, the added depth of field can help players pinpoint the origin of enemy threat, and improve response time without sustaining 'confusion damage'.

    Mafia2_3d-Vision_Smoke_Plumes.jpg

    NVIDIA APEX Turbulence, a new PhysX feature, already adds an impressive level of realism to games (such as with Mafia II pictured in this section). Watching plumes of smoke and flames spill out towards your camera angle helps put you right into the thick of action.

    Mafia2_3d-Vision_Out-of-Screen.jpg

    NVIDIA 3D-Vision/3D-Vision Surround is the perfect addition to APEX PhysX technology, and capable video games prove that these features reproduce lifelike scenery and destruction when they're used together. Glowing embers and fiery shards shooting past you seem very real when 3D-Vision is paired with APEX PhysX technology, and there's finally a good reason to overpower a PC's graphics system.

    GeForce GTX470 Temperatures

    Benchmark tests are always nice, so long as you care about comparing one product to another. But when you're an overclocker, gamer, or merely a PC hardware enthusiast who likes to tweak things on occasion, there's no substitute for good information. Benchmark Reviews has a very popular guide on Overclocking Video Cards, which gives detailed instructions on how to tweak a graphics card for better performance. Of course, not every video card has overclocking headroom. Some products run so hot that they can't suffer any higher temperatures than they already do. This is why we measure the operating temperature of the video card products we test.

    To begin my testing, I use GPU-Z to measure the temperature at idle as reported by the GPU. Next I use FurMark's "Torture Test" to generate maximum thermal load and record GPU temperatures at high-power 3D mode. The ambient room temperature remained at a stable 20°C throughout testing, while the inner-case temperature hovered around 37°C.

    FurMark does two things extremely well: it drives the thermal output of any graphics processor higher than applications or video games realistically could, and it does so consistently every time. FurMark works well for testing the stability of a GPU as temperatures climb toward their highest possible level. The temperatures discussed below are absolute maximum values, and not representative of real-world gaming performance.
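    For readers who want to log the same data themselves, here is a minimal sketch, not the harness used for this review, that samples the GPU core temperature once per second while FurMark (or any other load) runs in a separate window. It assumes a system with NVIDIA's nvidia-smi command-line utility installed and on the PATH; GPU-Z reports the same sensor interactively.

    # Minimal temperature-logging sketch; assumes NVIDIA's nvidia-smi utility
    # is installed and on the PATH. Run FurMark (or any load) separately, then
    # stop this script with Ctrl+C to see the peak reading.
    import subprocess
    import time

    def read_gpu_temp() -> int:
        out = subprocess.check_output(
            ["nvidia-smi", "--query-gpu=temperature.gpu",
             "--format=csv,noheader,nounits"])
        return int(out.decode().strip().splitlines()[0])

    if __name__ == "__main__":
        peak = 0
        try:
            while True:
                temp = read_gpu_temp()
                peak = max(peak, temp)
                print(f"GPU core: {temp} C (peak so far: {peak} C)")
                time.sleep(1)
        except KeyboardInterrupt:
            print(f"Highest temperature observed: {peak} C")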

    GTX-470 vs GTX-480

    NVIDIA-supplied product specifications state that the GeForce GTX 470 has a maximum GPU thermal threshold of 105°C. This is identical to the previous-generation GeForce GTX 285, as well as to the GeForce GTX 480, which shares the same GF100 graphics processor. Housed in a mid-tower computer case with no additional cooling, the retail Zotac GeForce GTX-470 video card produced a lukewarm 40°C in idle 2D mode and climbed to 88°C under sustained 100% load using FurMark's torture test.

    Zotac_GeForce_GTX470_Video_Card_Temps.jpg

    Zotac GeForce GTX 470 Video Card Temperatures

    Compared to the Zotac GeForce GTX-480 retail kit, temperatures are extremely close. Don't assume the GTX 470 should run cooler simply because it's the lesser model; the two cards take different approaches to thermal management, yet the end results are nearly identical. The image below illustrates temperatures on the GTX 480 using FurMark to produce load:

    Zotac_GeForce_GTX-480_ZT-40101-10P_Temps.jpg

    ZOTAC GeForce GTX480 Video Card Temperatures

    Most new graphics cards from NVIDIA and ATI expel heated air through exhaust vents, so the card does not raise the internal case temperature. Our test system is an open-air chassis that leaves the video card to depend on its own cooling solution for proper thermal management. Most gamers and PC hardware enthusiasts who use an aftermarket computer case with intake and exhaust fans will create a directional airflow current that lowers internal temperatures a few degrees below the measurements we've recorded. To demonstrate this, we've built a system to illustrate the...

    Best-Case Scenario

    Traditional tower-style computer cases position internal hardware so that heat is expelled out through the back of the unit. This is better than nothing, but there's a fundamental problem: heat rises. Using the transverse mount design on the SilverStone Raven-2 chassis, Benchmark Reviews re-tests the Zotac GeForce GTX-470 video card to determine the 'best-case' scenario.

    Sitting idle at the Windows 7 desktop with a 20°C ambient room temperature, the Zotac GeForce GTX-470 rested at 35°C (nearly matching the inner-case temperature). Pushed to abnormally high levels using the FurMark torture test, the GeForce GTX-470 operated at 81°C with a very quiet cooling fan. While the many BIOS revisions leading up to retail availability have obviously made a noticeable difference, the well-designed Raven-2 computer case with its additional cooling features has helped make an even bigger one.

    Despite the 40nm process and improved BIOS programming, GF100 still runs warm unless you add internal case cooling to draw in cool air and exhaust heated air. This could impact overclocking projects inside poorly ventilated computer cases. So if you've already got a warm-blooded CPU in your computer system, take a look at our Best CPU Cooler Performance series and find a heatsink that will lower your internal temperatures and prolong component lifetime. Lowering the internal case temperature could give you an added edge for GPU overclocking projects, and it will also help overall system stability.

    VGA Power Consumption

    Life is not as affordable as it used to be, and items such as gasoline, natural gas, and electricity all top the list of resources whose prices have exploded over the past few years. Add to this the limited supply of non-renewable resources relative to current demand, and you can see that prices are only going to get worse. Planet Earth needs our help, and needs it badly. With forests becoming barren of vegetation and snow-capped poles quickly turning brown, the technology industry has a new attitude toward turning "green". I'll spare you the powerful marketing hype that gets sent from various manufacturers every day and get right to the point: your computer hasn't been doing much to help save energy... at least up until now.

    For power consumption tests, Benchmark Reviews utilizes the 80-PLUS GOLD certified OCZ Z-Series Gold 850W PSU, model OCZZ850. This power supply unit has been tested to provide over 90% typical efficiency by Chroma System Solutions. To measure isolated video card power consumption, Benchmark Reviews uses the Kill-A-Watt EZ (model P4460) power meter made by P3 International.

    A baseline test is taken without a video card installed inside our test computer system, which is allowed to boot into Windows-7 and rest idle at the login screen before power consumption is recorded. Once the baseline reading has been taken, the graphics card is installed and the system is again booted into Windows and left idle at the login screen. Our final loaded power consumption reading is taken with the video card running a stress test using FurMark. Below is a chart with the isolated video card power consumption (not system total) displayed in Watts for each specified test product:
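    Because the Kill-A-Watt reads total draw at the wall, the isolated figures below come from simple subtraction: the reading with the card installed minus the baseline reading without it. Here is a minimal sketch of that arithmetic; the wall readings are placeholder values chosen so the subtraction reproduces the GTX 470 figures reported below, not the actual meter readings from this review.

    # Baseline-subtraction arithmetic described above. The wall-socket readings
    # are placeholders that reproduce the GTX 470 figures in the table; they
    # are not the actual meter readings from this review.
    baseline_idle_w = 100        # system idle at the login screen, no video card
    system_idle_w = 142          # same system with the card installed, idle
    system_loaded_w = 378        # card running the FurMark stress test

    card_idle_w = system_idle_w - baseline_idle_w        # isolated idle draw
    card_loaded_w = system_loaded_w - baseline_idle_w    # isolated loaded draw
    print(f"Isolated idle: {card_idle_w} W, loaded: {card_loaded_w} W")  # 42 W / 278 W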

    Video Card Power Consumption by Benchmark Reviews
    (sorted by combined total power; isolated card draw in watts, idle / loaded)

    NVIDIA GeForce GTX 480 SLI Set: 82 W idle / 655 W loaded
    NVIDIA GeForce GTX 590 Reference Design: 53 W idle / 396 W loaded
    ATI Radeon HD 4870 X2 Reference Design: 100 W idle / 320 W loaded
    AMD Radeon HD 6990 Reference Design: 46 W idle / 350 W loaded
    NVIDIA GeForce GTX 295 Reference Design: 74 W idle / 302 W loaded
    ASUS GeForce GTX 480 Reference Design: 39 W idle / 315 W loaded
    ATI Radeon HD 5970 Reference Design: 48 W idle / 299 W loaded
    NVIDIA GeForce GTX 690 Reference Design: 25 W idle / 321 W loaded
    ATI Radeon HD 4850 CrossFireX Set: 123 W idle / 210 W loaded
    ATI Radeon HD 4890 Reference Design: 65 W idle / 268 W loaded
    AMD Radeon HD 7970 Reference Design: 21 W idle / 311 W loaded
    NVIDIA GeForce GTX 470 Reference Design: 42 W idle / 278 W loaded
    NVIDIA GeForce GTX 580 Reference Design: 31 W idle / 246 W loaded
    NVIDIA GeForce GTX 570 Reference Design: 31 W idle / 241 W loaded
    ATI Radeon HD 5870 Reference Design: 25 W idle / 240 W loaded
    ATI Radeon HD 6970 Reference Design: 24 W idle / 233 W loaded
    NVIDIA GeForce GTX 465 Reference Design: 36 W idle / 219 W loaded
    NVIDIA GeForce GTX 680 Reference Design: 14 W idle / 243 W loaded
    Sapphire Radeon HD 4850 X2 11139-00-40R: 73 W idle / 180 W loaded
    NVIDIA GeForce 9800 GX2 Reference Design: 85 W idle / 186 W loaded
    NVIDIA GeForce GTX 780 Reference Design: 10 W idle / 275 W loaded
    NVIDIA GeForce GTX 770 Reference Design: 9 W idle / 256 W loaded
    NVIDIA GeForce GTX 280 Reference Design: 35 W idle / 225 W loaded
    NVIDIA GeForce GTX 260 (216) Reference Design: 42 W idle / 203 W loaded
    ATI Radeon HD 4870 Reference Design: 58 W idle / 166 W loaded
    NVIDIA GeForce GTX 560 Ti Reference Design: 17 W idle / 199 W loaded
    NVIDIA GeForce GTX 460 Reference Design: 18 W idle / 167 W loaded
    AMD Radeon HD 6870 Reference Design: 20 W idle / 162 W loaded
    NVIDIA GeForce GTX 670 Reference Design: 14 W idle / 167 W loaded
    ATI Radeon HD 5850 Reference Design: 24 W idle / 157 W loaded
    NVIDIA GeForce GTX 650 Ti BOOST Reference Design: 8 W idle / 164 W loaded
    AMD Radeon HD 6850 Reference Design: 20 W idle / 139 W loaded
    NVIDIA GeForce 8800 GT Reference Design: 31 W idle / 133 W loaded
    ATI Radeon HD 4770 RV740 GDDR5 Reference Design: 37 W idle / 120 W loaded
    ATI Radeon HD 5770 Reference Design: 16 W idle / 122 W loaded
    NVIDIA GeForce GTS 450 Reference Design: 22 W idle / 115 W loaded
    NVIDIA GeForce GTX 650 Ti Reference Design: 12 W idle / 112 W loaded
    ATI Radeon HD 4670 Reference Design: 9 W idle / 70 W loaded

    * Results are accurate to within +/- 5W.

    The GeForce GTX 470 requires two six-pin PCI-E power connections. Resting at idle, the card consumed 42 watts of electricity. Putting this into perspective: the NVIDIA GeForce 9800 GTX and ATI Radeon HD 4850 both consumed more power at idle, as do their successors. Unfortunately, that is almost twice the idle power draw of the ATI Radeon HD 5850 or HD 5870. As our GeForce GTX 480 review demonstrated, Fermi proves to have a big power appetite when it should be snacking on only a few watts.

    Once 3D applications begin to demand power from the GPU, electrical power consumption really begins to climb. Measured at full 3D 'torture' load, the GeForce GTX 470 consumed 278 watts. This level of loaded power consumption matches the ATI Radeon HD 4890, but most current DirectX-11 video cards require much less power. Although Fermi features a 40nm fabrication process, there's nothing 'green' about its power demand under load.

    Editor's Opinion: Fermi GF100

    NVIDIA heard the dinner bell ring many months ago when Microsoft introduced DirectX-11 alongside Windows 7, and they've been crawling to the table ever since. Details of the new NVIDIA Fermi GPU architecture were first leaked to the Web as early as September 2009, which makes exactly half a year between myth and reality. ATI helped set the table with their Radeon HD 5000 series, and even enjoyed some appetizers while a few DX11 games were released, but NVIDIA managed to take a seat just in time for supper. Unfortunately for NVIDIA, ATI showed up wearing its Sunday best, while Fermi's suit was still at the cleaners. None of this really matters, though, because now NVIDIA can eat as much as it wants.

    My analogy plays out well when you consider the facts behind GF100 and the launch of NVIDIA's GeForce GTX470/480. AMD may not have launched with more than a few hundred full-fledged 40nm ATI Cypress-XT GPUs with all eighty texture units, but they made it to market first and created strong consumer demand for a limited supply of parts. NVIDIA decided on an alternate route and binned their GPU yields based on streaming multiprocessors. The GF100 GPU is designed with 16 streaming multiprocessors and 512 discrete cores, and while the Fermi architecture remains intact, one SMP is disabled on the GeForce GTX 480 and two SMPs are disabled on the GTX 470. The world has yet to see what the full 512 cores can accomplish, although NVIDIA is already revolutionizing the military with CUDA technology.

    So now ATI and NVIDIA are even-steven in the race for DirectX-11, and all they need are video games to increase demand for their products. This becomes a real problem (for them both) because very few existing games demand any more graphics processing power than games demanded back in 2006. Video cards have certainly gotten bigger and faster, but video games have lacked fresh development. DirectX-10 helped the industry, but every step forward was met with two steps back because of the widespread dislike for Microsoft's Windows Vista O/S. With DirectX-11 introduced in Windows 7 (and also available for Windows Vista via an update), enthusiasts finally have DX11 detail and special effects in their video games.

    NVIDIA_GeForce_GTX-480_Video_Card_Angle.jpg

    NVIDIA GeForce GTX 480 Graphics Card

    Even if you're only after raw gaming performance and have no real-world interest in CUDA, there's reason to appreciate the GF100 GPU. New enhancement products, such as the NVIDIA GeForce 3D Vision Gaming Kit, double the demands on frame rate output and hence require more powerful graphics processing. This is where products like the GeForce GTX470 and GTX480 deliver the performance necessary to enjoy the extended gaming experience. I'm a huge fan of GeForce 3D-Vision, which is why it's earned our Editor's Choice Award, and Fermi delivers the power necessary to drive up to three monitors. The newly dubbed NVIDIA 3D-Vision Surround (stereo) requires three 3D-Vision capable LCD, projector, or DLP devices and offers bezel correction support. Alternatively, NVIDIA Surround (non-stereo) supports mixed displays with common resolution/timing.

    Even some older game titles benefit from the Fermi GF100 GPU, beyond just an increase in frame rates. For example, Far Cry 2 will receive 32x CSAA functionality native to the game, and future NVIDIA Forceware driver updates could further add new features into existing co-developed video games. Additionally, NVIDIA NEXUS technology brings CPU and GPU code development together in Microsoft Visual Studio 2008 for a shared process timeline. NEXUS also introduces the first hardware-based shader debugger. NVIDIA's GF100 is the first GPU ever to offer full C++ support, the programming language of choice among game developers.

    Fermi isn't for everyone. Many of NVIDIA's add-in card partners (what they call AICs) have already built inventory of the GeForce GTX 480. On 12 April 2010 ASUS revealed the ENGTX480/2DI/1536MD5 GeForce GTX 480 graphics card kit, which online retailers are expected to price at around $500 for the 90-C3CH90-W0UAY0KZ SKU. The ASUS ENGTX470/2DI/1280MD5 kit (GeForce GTX 470) loses only one (more) SMP, but the price for their 90-C3CHA0-X0UAY0KZ kit drops to $350. While not based on anything other than these two prices, it seems that a full 16-SMP 512-core version could receive the GeForce "GTX-490" name and a price tag of around $650. Certain to be an expensive enthusiast product, if and when it ever gets made, the GeForce "GTX-490" could keep company with the recently announced $1,130 Intel Core i7-980X 6-Core CPU BX80613I7980X.

    Fermi is also the first GPU to support Error Correcting Code (ECC) based protection of data in memory. ECC was requested by GPU computing users to enhance data integrity in high performance computing environments. ECC is a highly desired feature in areas such as medical imaging and large-scale cluster computing. Naturally occurring radiation can cause a bit stored in memory to be altered, resulting in a soft error. ECC technology detects and corrects single-bit soft errors before they affect the system. Fermi's register files, shared memories, L1 caches, L2 cache, and DRAM memory are ECC protected, making it not only the most powerful GPU for HPC applications, but also the most reliable. In addition, Fermi supports industry standards for checking of data during transmission from chip to chip. All NVIDIA GPUs include support for the PCI Express standard for CRC check with retry at the data link layer. Fermi also supports the similar GDDR5 standard for CRC check with retry (aka "EDC") during transmission of data across the memory bus.
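    To make the single-bit soft-error concept concrete, the sketch below implements a textbook Hamming(7,4) single-error-correcting code in Python. It is purely illustrative of the principle behind ECC; it says nothing about how Fermi's ECC hardware is actually built, which protects much wider words.

    # Textbook Hamming(7,4) code: shows how ECC detects and corrects a single
    # flipped bit. This illustrates the principle only, not Fermi's hardware.

    def encode(d1, d2, d3, d4):
        """Four data bits -> 7-bit codeword (positions 1..7, parity at 1, 2, 4)."""
        p1 = d1 ^ d2 ^ d4        # covers positions 3, 5, 7
        p2 = d1 ^ d3 ^ d4        # covers positions 3, 6, 7
        p4 = d2 ^ d3 ^ d4        # covers positions 5, 6, 7
        return [p1, p2, d1, p4, d2, d3, d4]

    def decode(codeword):
        """Correct at most one flipped bit, then return the four data bits."""
        c = list(codeword)
        s1 = c[0] ^ c[2] ^ c[4] ^ c[6]    # re-check positions 1, 3, 5, 7
        s2 = c[1] ^ c[2] ^ c[5] ^ c[6]    # re-check positions 2, 3, 6, 7
        s4 = c[3] ^ c[4] ^ c[5] ^ c[6]    # re-check positions 4, 5, 6, 7
        error_pos = s1 + 2 * s2 + 4 * s4  # 0 means no single-bit error detected
        if error_pos:
            c[error_pos - 1] ^= 1         # flip the offending bit back
        return [c[2], c[4], c[5], c[6]]

    codeword = encode(1, 0, 1, 1)
    codeword[5] ^= 1                         # simulate a radiation-induced soft error
    assert decode(codeword) == [1, 0, 1, 1]  # the flipped bit is corrected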

    The true potential of NVIDIA's Fermi architecture has yet to be seen. Sure, we've already poked around its inner workings for our NVIDIA GF100 GPU Fermi Graphics Architecture article, but there's so much more that goes untested. Heading into April 2010, only a private alpha version of the Folding@Home client is available. Work unit performance on the GeForce GTX 480 is going to surpass ATI's Radeon HD 5870 without much struggle, but it's uncertain how much better the performance will be compared to the previous-generation GeForce GTX 285. Until the GeForce GTX470/480 appears on retail shelves, and until a mature GeForce 400-series WHQL Forceware driver is publicly available, many of the new technologies introduced here will remain untapped.

    ZOTAC GTX-470 Conclusion

    Although the rating and final score mentioned in this conclusion are made to be as objective as possible, please be advised that every author perceives these factors differently at various points in time. While we each do our best to ensure that all aspects of the product are considered, there are often unforeseen market conditions and manufacturer changes which occur after publication and could render our rating obsolete. Please do not base any purchase solely on our conclusion, as it represents our product rating for the sample received, which may differ from retail versions. Benchmark Reviews begins our conclusion with a short summary for each of the areas that we rate.

    Our performance rating considers how effectively the Zotac GeForce GTX-470 DirectX-11 video card performs against competing products from both ATI and NVIDIA. Overall, the Zotac GeForce GTX-470 consistently outperformed the ATI Radeon HD 5850, and in several cases it also beat the Radeon HD 5870. Although the GTX 470 stumbles in Battlefield: Bad Company 2, it captures the best price-to-performance ratio in BattleForge, Crysis, Far Cry 2, and Resident Evil 5. In fact, when BattleForge includes high-strain SSAO, Zotac's GTX 470 demonstrates how well Fermi is suited for DX11... beating ATI's best single-GPU Radeon HD5870 by 10% and the HD5850 by 30%. At the end of our test gauntlet, the general feeling was that the GeForce GTX 470 is positioned directly between the Radeon HD5850 and HD5870 in both performance and price.

    Zotac_GeForce_GTX470_Splash.jpg

    Aside from a company decal, the Zotac GeForce GTX-470 is identical in appearance to the reference NVIDIA design. Zotac hasn't done much to differentiate its product from those of the many other Add-In Card (AIC) partners, which means that function takes precedence over fashion. While the graphics card doesn't need to look exciting in order to get the job done, there's a lot of competition that will win over buyers with added product presentation.

    As the only subordinate to the GeForce GTX 480, NVIDIA has designed the GeForce GTX 470 with the same solid construction. There are exposed electronics on the back of the PCB, so it surprises me that they didn't utilize a metal back-plate to work as a heatsink and protect these sensitive components. The top-side of the graphics card features a protective plastic fan shroud, which receives the same depressed chamfer that allowed previous generations to sustain airflow in SLI configurations. The Fermi GF100 GPU has been moved forward toward the exhaust vents, which allows memory and power components to receive optimal cooling first.

    While most enthusiast consumers buy a discrete graphics card for the sole purpose of playing PC video games, there's a very small niche who depend on extra features beyond fast video frame rates. NVIDIA is the market leader in GPGPU functionality, and it's no surprise to see CPU-level technology available in their GPU products. Fermi GF100 is also the first GPU ever to support Error Correcting Code (ECC), a feature that benefits both personal and professional users. Proprietary technologies such as NVIDIA Parallel DataCache and NVIDIA GigaThread Engine further add value to GPGPU functionality. Additionally, applications such as Adobe Photoshop and Premiere can take advantage of GPGPU processing power.

    Defining product value means something different to everyone. Some readers take heat and power consumption into consideration, while others are only concerned with FPS performance. With regard to value, there are several ways to look at the $350 GeForce GTX-470 and compare it to its closest rivals: either the $310 ATI Radeon HD 5850 or the $390 ATI Radeon HD 5870. In terms of product price to FPS performance, the GeForce GTX-470 is one of the most affordable DirectX-11 video cards available. Here's a breakdown of the average USD cost per FPS recorded for nine of the tests conducted in this article, with a short sketch of the arithmetic after the list:

  • Radeon HD 5850: $11.40 per FPS
  • GeForce GTX 470: $11.42 per FPS
  • Radeon HD 5870: $11.65 per FPS
  • GeForce GTX 480: $12.54 per FPS
  • Radeon HD 5970: $14.03 per FPS
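    The metric itself is simple: street price divided by the card's average frame rate across the nine tests. The sketch below derives the implied average frame rates from the prices and the published dollars-per-FPS figures; the FPS values it prints are implied averages, not separately reported benchmark results.

    # Cost-per-frame arithmetic: $/FPS = street price / average FPS across tests.
    # Average FPS isn't listed directly in this article, so derive the implied
    # value from the published figures instead.
    published = {
        "Radeon HD 5850":  (310, 11.40),   # (street price USD, published $/FPS)
        "GeForce GTX 470": (350, 11.42),
        "Radeon HD 5870":  (390, 11.65),
    }

    for name, (price_usd, usd_per_fps) in published.items():
        implied_avg_fps = price_usd / usd_per_fps
        print(f"{name}: ~{implied_avg_fps:.1f} FPS average, ${usd_per_fps:.2f} per FPS")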
    As of mid-May 2010, the Zotac GeForce GTX-470 model ZT-40201-10P sells at NewEgg for $349.99. If you're not ready for the world's fastest GPU, the $510 Zotac GeForce GTX-480, the GTX-470 is an excellent value. Although both ZOTAC models include a Limited Lifetime Warranty (if registered within 30 days of purchase), neither product includes a bundled full-version PC video game or offers competitive buyer-upgrade incentives like the competition does.

    NVIDIA's GF100 Fermi GPU delivers more than just a giant boost to video frame rates over the previous generation; it also delivers GPGPU functionality that is usable both inside and outside of video games. Comparing video game frame rate performance between the GTX 470 and the ATI Radeon HD 5850 still puts the 448-core GF100 GPU ahead of the competition by a significant margin in DX10 tests, and gives it a decisive lead in DX11 applications. In a few cases, the Zotac GTX 470 also matched or exceeded the performance of the ATI Radeon HD 5870. As inventory begins to match demand, manufacturers will become more competitive with pricing and bundled extras, so be sure to shop for your best bargain.

    Pros:

    + Outstanding price-to-performance cost ratio
    + GF100 Introduces Error Correcting Code (ECC)
    + Great performance for high-end games
    + Fan exhausts all heated air outside of case
    + Quiet cooling fan under loaded operation
    + Includes native HDMI audio/video output
    + Adds 32x CSAA post-processing detail
    + Supports triple-SLI functionality

    Cons:

    - Consumes 42W at idle and 278W under full load
    - Expensive premium-level product
    - Lacks free bundled DX11 video game

    Ratings:

    • Performance: 9.50
    • Appearance: 8.75
    • Construction: 9.50
    • Functionality: 9.75
    • Value: 8.50

    Final Score: 9.2 out of 10.

    Excellence Achievement: Benchmark Reviews Golden Tachometer Award.

    Benchmark Reviews encourages you to leave comments (below), or ask questions and join the discussion in our Forum.


    Comments

    # Other tests (Federico La Morgia, 2010-05-13 22:56)
    2 x 470? 3 x 470? An overclocked 470, then repeat 2 x 470 and 3 x 470 with the overclock? :)

    # Unreasonable... (Olin Coles, 2010-05-14 07:01)
    @Federico: As much as you might be interested, it's not realistic for anyone to actually own more than one Fermi video card. That's only the first issue... the second is actually getting hold of more test samples. EVGA, for example, has allocated all samples to sales and none to marketing.

    # Very reasonable (Federico La Morgia, 2010-05-23 21:35)
    I would personally like to see this technology scale from one to three cards with the 470, and from one to four cards with the 480, since the latest beta NVIDIA drivers enable 4-way mode on the 480. It would also be interesting to see whether a bit of driver modding could enable 4-way mode on the GTX 470 as well. 3/4-way Fermi solutions for people for whom 2 x 5970 isn't enough? :)

    # Durp. (Adam, 2010-05-14 03:50)
    Yeah, because getting hold of three Fermis is REALLY easy at the moment, isn't it? Not to mention how affordable they are.

    # RE: Zotac GeForce GTX-470 Fermi Video Card (Jack, 2010-05-23 18:45)
    Yes, agreed on the twin 470s... let's compare the two $700 solutions. Can getting twin 470s be that much harder than twin 480s?

    # Very Happy (Happy, 2010-07-11 16:48)
    I bought a ZT-20109-10P (GTS 250) for my OLD computer the other day; its measly 300W power supply could not take a bigger card! I have you beat for $/uP: I got mine for $100 off, so I only paid $50! Thank you, Big Box Retailer. Happy

    # RE: Zotac GeForce GTX-470 Fermi Video Card (Surajit Halder, 2011-03-27 22:27)
    Hello. Can I use a Cooler Master eXtreme Power Plus 600 watt for the Zotac GTX 470?

    # RE: RE: Zotac GeForce GTX-470 Fermi Video Card (Olin Coles, 2011-03-28 07:26)
    It's probably safe to use a 600W PSU, but you should tell us what the rest of your power-consuming hardware will be.
