NVIDIA GeForce GTX 480 Fermi Video Card
Reviews - Featured Reviews: Video Cards
Written by Olin Coles   
Friday, 26 March 2010

NVIDIA GeForce GTX 480 Video Card Review

PC video game enthusiasts have depended on two companies to deliver graphics power for their computer systems: NVIDIA and ATI. While NVIDIA has enjoyed fan favoritism for much of the past decade, ATI has posted strong sales and built a decisive head-start in the growing DirectX-11 consumer market over the past six months (as evidenced by our unbelievably long list of recent video card reviews). The ATI Radeon HD 5000 series has earned AMD new respect, but many inside the industry have impatiently waited for NVIDIA to respond with their fabled GF100 Fermi DX11 architecture. At long last, NVIDIA's Fermi is a reality.

At the center of every new technology is purpose, and NVIDIA has designed their Fermi GF100 GPU with an end-goal of redefining the video game experience through significant graphics processor innovations. Disruptive technology often changes the way users interact with computers, and the GeForce GTX-480 graphics card is a complex tool built to arrive at one simple destination: immersive entertainment. Priced at $499, the NVIDIA GeForce GTX 480 empowers DirectX-11 video games to deliver unmatched geometric realism. In this article Benchmark Reviews tests 3D frame rate performance of NVIDIA's GeForce GTX 480, and demonstrates how well Fermi architecture fits in with GeForce 3D Vision.

NVIDIA_GeForce_GTX-480_Fermi_Video_Card.jpg

NVIDIA GeForce GTX 480 Video Card

TSMC, the largest semiconductor foundry on the planet, has had a great deal of difficulty with its 40nm process shrink. Although the GF100 was originally intended to feature 512 CUDA cores, NVIDIA was faced with limited yields from TSMC and decided to end consumers' wait by offering gamers and PC hardware enthusiasts a 480-core solution. Some readers may recall that AMD reacted to its own yield crisis differently, and decided it would look better to announce a product available in extremely limited quantities. While this tactic works well for review samples and a pretty press release, NVIDIA knows that gamers want to actually own the video card... not just read about it.

NVIDIA presents to us the GeForce GTX-480 graphics card. Powered by 48 ROPs and 480 unified CUDA (shader) cores, the GF100 Fermi GPU has 3.2-billion transistors to help process DirectX-11 commands and render some of the most detailed graphics ever seen on the PC platform. Tessellation is the word for 2010, and DX11 brings movie-quality graphics to life on consumer-level video games.

Benchmark Reviews tests graphics frame rate performance of the NVIDIA GeForce GTX 480 using several of the most demanding PC video game titles and benchmark software available. Old favorites such as Crysis Warhead, Far Cry 2, Resident Evil 5, and PCMark Vantage are all included. New to the scene are Battlefield: Bad Company 2, Stalker: Call of Pripyat, BattleForge, and the recently announced Unigine Heaven 2.0 benchmark.

EDITOR'S NOTE: Since testing NVIDIA's engineering sample for this article, we've received retail GeForce GTX-480 products that perform the same but require less power and produce less heat and noise. Read more in our Zotac GeForce GTX-480 Fermi Video Card review, which also includes SLI performance results.

About NVIDIA Corporation:

NVIDIA (Nasdaq: NVDA) is the world leader in visual computing technologies and the inventor of the GPU, a high-performance processor which generates breathtaking, interactive graphics on workstations, personal computers, game consoles, and mobile devices. NVIDIA serves the entertainment and consumer market with its GeForce products, the professional design and visualization market with its Quadro products, and the high-performance computing market with its Tesla products. These products are transforming visually-rich and computationally-intensive applications such as video games, film production, broadcasting, industrial design, financial modeling, space exploration, and medical imaging.

NVIDIA_Black_Square_3D_Logo_250px.jpg

NVIDIA Product Lines

GeForce - GPUs dedicated to graphics and video.
Desktop and notebook PCs equipped with GeForce GPUs deliver unparalleled performance, crisp photos, high-definition video playback, and ultra-realistic games. GeForce notebook GPUs also include advanced power management technology to deliver high performance without sacrificing battery life.

Quadro - A complete range of professional solutions engineered to deliver breakthrough performance and quality.
Certified for all leading professional graphics applications. #1 in professional graphics segment share. NVIDIA Quadro Plex is the industry's first dedicated visual computing system (VCS).

Tesla - A massively-parallel multi-threaded architecture for high-performance computing problems.
A dedicated, high-performance GPU computing solution that brings supercomputing power to any workstation or server and to standard, CPU-based server clusters. Tesla delivers a 128-processor computing core per GPU, C-language development environment for the GPU, and a suite of developer tools - allowing users to develop applications faster and to deploy them across multiple generations of processors. It also can be used in tandem with multi-core CPU systems to create a scalable computing solution that fits seamlessly into existing workstation or IT infrastructures.

Tessellation

In today's complex graphics, tessellation offers the means to store massive amounts of coarse geometry with expand-on-demand functionality. In the NVIDIA GF100 GPU, tessellation also enables more complex animations. In terms of model scalability, dynamic Level of Detail (LOD) allows for quality and performance trade-offs whenever better picture quality can be delivered without a performance penalty. Comprised of three layers (original geometry, tessellation geometry, and displacement map), the final product is far more detailed in shading and expanded surface data than if it were constructed with bump-map technology. In plain terms, tessellation delivers real peaks and valleys with shadow detail in-between, where previous-generation bump-mapping could only give the illusion of detail.

id-imp-tessellated-character.jpg

Using GPU-based tessellation, a game developer can send a compact geometric representation of an object or character, and the tessellation unit can produce the correct geometric complexity for the specific scene. Consider the "Imp" character illustrated above. On the far left we see the initial quad mesh used to model the general outline of the figure; this representation is quite compact even when compared to typical game assets. The two middle images of the character are created by finely tessellating the description at the left. The result is a very smooth appearance, free of any of the faceting that resulted from limited geometry. Unfortunately this character, while smooth, is no more detailed than the coarse mesh. The final image on the right was created by applying a displacement map to the smoothly tessellated character shown third from the left.
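
Conceptually, the displacement step shown at the far right amounts to "push each tessellated vertex along its surface normal by a height read from the displacement map." Below is a minimal CUDA sketch of just that step; the kernel name, the flat vertex arrays, and the nearest-texel lookup are illustrative simplifications, not NVIDIA's actual tessellation path (which runs in fixed-function hardware and domain shaders).

```cuda
// build: nvcc -o displace displace.cu
#include <cstdio>
#include <cuda_runtime.h>

// One thread per tessellated vertex: sample the height map at the vertex's
// UV coordinate and push the vertex along its normal by that height.
__global__ void displaceVertices(const float3* pos, const float3* nrm,
                                 const float2* uv, const float* height,
                                 int texDim, float scale, float3* out, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= n) return;
    int x = min(int(uv[i].x * (texDim - 1)), texDim - 1);   // nearest texel
    int y = min(int(uv[i].y * (texDim - 1)), texDim - 1);
    float h = height[y * texDim + x] * scale;
    out[i] = make_float3(pos[i].x + nrm[i].x * h,    // real peaks and valleys,
                         pos[i].y + nrm[i].y * h,    // not a bump-map illusion
                         pos[i].z + nrm[i].z * h);
}

int main()
{
    const int n = 2, texDim = 2;
    float3 pos[n] = {{0,0,0},{1,0,0}}, nrm[n] = {{0,1,0},{0,1,0}};
    float2 uv[n]  = {{0,0},{1,1}};
    float  tex[texDim * texDim] = {0.0f, 0.25f, 0.5f, 1.0f};  // height map

    float3 *dPos, *dNrm, *dOut; float2 *dUv; float *dTex;
    cudaMalloc(&dPos, sizeof pos);  cudaMalloc(&dNrm, sizeof nrm);
    cudaMalloc(&dUv, sizeof uv);    cudaMalloc(&dTex, sizeof tex);
    cudaMalloc(&dOut, sizeof pos);
    cudaMemcpy(dPos, pos, sizeof pos, cudaMemcpyHostToDevice);
    cudaMemcpy(dNrm, nrm, sizeof nrm, cudaMemcpyHostToDevice);
    cudaMemcpy(dUv, uv, sizeof uv, cudaMemcpyHostToDevice);
    cudaMemcpy(dTex, tex, sizeof tex, cudaMemcpyHostToDevice);

    displaceVertices<<<1, 32>>>(dPos, dNrm, dUv, dTex, texDim, 0.1f, dOut, n);
    float3 res[n];
    cudaMemcpy(res, dOut, sizeof res, cudaMemcpyDeviceToHost);
    printf("v0.y=%.3f  v1.y=%.3f\n", res[0].y, res[1].y);  // 0.000 and 0.100
    return 0;
}
```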

Tessellation in DirectX-11

Control hull shaders run DX11 pre-expansion routines, and operate explicitly in parallel across all points. Domain shaders run post-expansion operations on maps (u/v or x/y/z/w) and are also implicitly parallel. Fixed-function tessellation is configured by Level of Detail (LOD) based on output from the control hull shader, and can also produce triangles and lines if requested. Tessellation is new to NVIDIA GPUs; it was not part of GT200 because of geometry bandwidth bottlenecks caused by sequential rendering/execution semantics.

In the GF100 graphics processor, NVIDIA has added new PolyMorph and Raster engines to handle world-space processing (PolyMorph) and screen-space processing (Raster). There are sixteen PolyMorph engines and four Raster engines on the GF100, which depend on an improved L2 cache to keep buffered geometric data produced by the pipeline on-die.

Four-Offset Gather4

The texture unit on previous processor architectures operated at the core clock of the GPU. On GF100, the texture units run at a higher clock, leading to improved texturing performance for the same number of units. GF100's texture units now add support for DirectX-11's BC6H and BC7 texture compression formats, reducing the memory footprint of HDR textures and render targets.

The texture units also support jittered sampling through DirectX-11's four-offset Gather4 feature, allowing four texels to be fetched from a 128×128 pixel grid with a single texture instruction. NVIDIA's GF100 implements DirectX-11 four-offset Gather4 in hardware, greatly accelerating shadow mapping, ambient occlusion, and post-processing algorithms. With jittered sampling, games can implement smoother soft shadows or custom texture filters efficiently. The previous GT200 GPU offered up to 16x CSAA, while the GF100 can deliver 32x CSAA.
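
GF100 performs that four-texel fetch as a single hardware texture instruction; the sketch below only emulates the idea in plain CUDA arithmetic so the data flow is visible. Four jittered taps around a sample point feed a percentage-closer shadow test, which is exactly the access pattern Gather4 accelerates. The names and the tiny shadow map are illustrative, not a real renderer.

```cuda
// build: nvcc -o gather4 gather4.cu
#include <cstdio>
#include <cuda_runtime.h>

// Emulates what a four-offset Gather4 returns in one instruction on GF100:
// four (jittered) texels around a sample point, here feeding a
// percentage-closer shadow test.
__global__ void softShadow(const float* shadowMap, int dim,
                           const int2* offsets, float2 samplePos,
                           float receiverDepth, float* litFraction)
{
    float lit = 0.0f;
    for (int t = 0; t < 4; ++t) {   // one Gather4 would fetch all four texels
        int x = min(max(int(samplePos.x) + offsets[t].x, 0), dim - 1);
        int y = min(max(int(samplePos.y) + offsets[t].y, 0), dim - 1);
        lit += (shadowMap[y * dim + x] >= receiverDepth) ? 1.0f : 0.0f;
    }
    *litFraction = lit / 4.0f;      // 0 = fully shadowed, 1 = fully lit
}

int main()
{
    const int dim = 4;
    float map[dim * dim];           // toy shadow map: near depths, then far
    for (int i = 0; i < dim * dim; ++i) map[i] = (i < 8) ? 0.2f : 0.9f;
    int2 offs[4] = {{0,0},{1,0},{0,1},{1,1}};   // jitter pattern

    float *dMap, *dOut; int2 *dOffs;
    cudaMalloc(&dMap, sizeof map); cudaMalloc(&dOffs, sizeof offs);
    cudaMalloc(&dOut, sizeof(float));
    cudaMemcpy(dMap, map, sizeof map, cudaMemcpyHostToDevice);
    cudaMemcpy(dOffs, offs, sizeof offs, cudaMemcpyHostToDevice);

    softShadow<<<1, 1>>>(dMap, dim, dOffs, make_float2(1, 1), 0.5f, dOut);
    float lit; cudaMemcpy(&lit, dOut, sizeof lit, cudaMemcpyDeviceToHost);
    printf("lit fraction = %.2f\n", lit);  // taps straddle the shadow edge
    return 0;
}
```

Averaging the four taps is what produces a smooth penumbra; jittering the offsets per pixel is what hides banding.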

GF100 Compute for Gaming

As developers continue to search for novel ways to improve their graphics engines, the GPU will need to excel at a diverse and growing set of graphics algorithms. Since these algorithms are executed via general compute APIs, a robust compute architecture is fundamental to a GPU's graphical capabilities. In essence, one can think of compute as the new programmable shader. GF100's compute architecture is designed to address a wider range of algorithms and to facilitate more pervasive use of the GPU for solving parallel problems. Many algorithms, such as ray tracing, physics, and AI, cannot exploit shared memory, because their memory locality is only revealed at runtime. GF100's cache architecture was designed with these problems in mind. With up to 48 KB of L1 cache per Streaming Multiprocessor (SM) and a global L2 cache, threads that access the same memory locations at runtime automatically run faster, irrespective of the choice of algorithm.
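
For developers, that 48 KB/16 KB split is exposed as a per-kernel cache preference in CUDA. The sketch below assumes a pointer-chasing workload whose locality only appears at runtime, precisely the case described above; cudaFuncSetCacheConfig and the cudaFuncCachePreferL1 hint are real Fermi-era runtime calls, while the kernel and data are invented for illustration.

```cuda
#include <cstdio>
#include <cuda_runtime.h>

// Pointer chasing: each load depends on the previous one, so locality is
// only revealed at runtime. Such kernels benefit from a larger L1 slice.
__global__ void chase(const int* next, int start, int steps, int* out)
{
    int node = start;
    for (int i = 0; i < steps; ++i) node = next[node];  // data-dependent loads
    *out = node;
}

int main()
{
    // Fermi splits 64 KB of per-SM RAM between shared memory and L1 cache.
    // PreferL1 selects the 48 KB L1 / 16 KB shared configuration for this
    // kernel; kernels that stage data in shared memory would use
    // cudaFuncCachePreferShared instead.
    cudaFuncSetCacheConfig(chase, cudaFuncCachePreferL1);

    const int n = 8;
    int next[n] = {3, 0, 5, 6, 1, 4, 2, 7};
    int *dNext, *dOut;
    cudaMalloc(&dNext, sizeof next); cudaMalloc(&dOut, sizeof(int));
    cudaMemcpy(dNext, next, sizeof next, cudaMemcpyHostToDevice);

    chase<<<1, 1>>>(dNext, 0, 5, dOut);
    int last; cudaMemcpy(&last, dOut, sizeof last, cudaMemcpyDeviceToHost);
    printf("final node: %d\n", last);   // 0 -> 3 -> 6 -> 2 -> 5 -> 4
    return 0;
}
```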

NVIDIA's codename NEXUS brings CPU and GPU code development together in Microsoft Visual Studio 2008 with a shared process timeline, and introduces the first hardware-based shader debugger. NVIDIA's GF100 is the first GPU to offer full C++ support, the programming language of choice among game developers. To ease the transition to GPU programming, NVIDIA developed NEXUS as a Microsoft Visual Studio programming environment for the GPU. Together with new hardware features that provide better debugging support, developers will be able to enjoy CPU-class application development on the GPU. The end result is C++ and Visual Studio integration that brings HPC users onto the same development platform. NVIDIA offers several paths to deliver compute functionality on the GF100 GPU, such as CUDA C++ for video games.
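
As a taste of what "full C++ support" means in practice, here is a small hedged sketch: a value type with a constructor and operator overloading, plus a templated kernel, all compiled as device code. The Vec2 type and integrate kernel are invented for illustration, not taken from NVIDIA's samples.

```cuda
#include <cstdio>
#include <cuda_runtime.h>

// A C++ value type usable on both host and device: constructor, member
// functions, and operator overloading all compile as device code on GF100.
struct Vec2 {
    float x, y;
    __host__ __device__ Vec2(float x_ = 0, float y_ = 0) : x(x_), y(y_) {}
    __host__ __device__ Vec2 operator+(const Vec2& o) const {
        return Vec2(x + o.x, y + o.y);
    }
};

// Templates work in device code too; T could be Vec2, float, double...
template <typename T>
__global__ void integrate(T* pos, const T* vel, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) pos[i] = pos[i] + vel[i];   // overloaded operator+ on the GPU
}

int main()
{
    const int n = 2;
    Vec2 pos[n] = {Vec2(0, 0), Vec2(1, 1)};
    Vec2 vel[n] = {Vec2(1, 2), Vec2(3, 4)};
    Vec2 *dPos, *dVel;
    cudaMalloc(&dPos, sizeof pos); cudaMalloc(&dVel, sizeof vel);
    cudaMemcpy(dPos, pos, sizeof pos, cudaMemcpyHostToDevice);
    cudaMemcpy(dVel, vel, sizeof vel, cudaMemcpyHostToDevice);

    integrate<<<1, 32>>>(dPos, dVel, n);
    cudaMemcpy(pos, dPos, sizeof pos, cudaMemcpyDeviceToHost);
    printf("p0=(%.0f,%.0f)  p1=(%.0f,%.0f)\n",
           pos[0].x, pos[0].y, pos[1].x, pos[1].y);  // (1,2) and (4,5)
    return 0;
}
```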

Image processing, simulation, and hybrid rendering are three primary functions of GPU compute for gaming. Using NVIDIA's GF100 GPU, interactive ray tracing becomes possible for the first time on a standard PC. Ray tracing performance on the NVIDIA GF100 is roughly 4x faster than it was on the GT200 GPU, according to NVIDIA tests. AI/path finding is a compute intensive process well suited for GPUs. The NVIDIA GF100 can handle AI obstacles approximately 3x better than on the GT200. Benefits from this improvement are faster collision avoidance and shortest path searches for higher-performance path finding.

GF100 Specifications

  • 512 CUDA Cores
  • 16 Geometry Units
  • 4 Raster Units
  • 64 Texture Units
  • 48 ROP Units
  • 384-bit GDDR5
  • DirectX-11 API Support

GeForce Specifications

Graphics Card | GeForce GTX 285 | GeForce GTX 470 | GeForce GTX 480
GPU Transistors | 1.4 Billion | 3.2 Billion | 3.2 Billion
Graphics Processing Clusters | 10 | 4 | 4
Streaming Multiprocessors | 30 | 14 | 15
CUDA Cores | 240 | 448 | 480
Texture Units | 80 | 56 | 60
ROP Units | 32 | 40 | 48
Graphics Clock (Fixed Function Units) | 648 MHz | 607 MHz | 700 MHz
Processor Clock (CUDA Cores) | 1476 MHz | 1215 MHz | 1401 MHz
Memory Clock (Clock Rate/Data Rate) | 1242/2484 MHz | 837/3348 MHz | 924/3696 MHz
Total Video Memory | 1024 MB | 1280 MB | 1536 MB
Memory Interface | 512-Bit | 320-Bit | 384-Bit
Total Memory Bandwidth | 159.0 GB/s | 133.9 GB/s | 177.4 GB/s
Texture Filtering Rate (Bilinear) | 51.8 GigaTexels/s | 34.0 GigaTexels/s | 42.0 GigaTexels/s
GPU Fabrication Process | 55 nm | 40 nm | 40 nm
Output Connections | 2x Dual-Link DVI-I, 1x S-Video | 2x Dual-Link DVI-I, 1x Mini HDMI | 2x Dual-Link DVI-I, 1x Mini HDMI
Form Factor | Dual-Slot | Dual-Slot | Dual-Slot
Power Input | 2x 6-Pin | 2x 6-Pin | 6-Pin + 8-Pin
Max Board Power (TDP) | 204 Watts | 215 Watts | 250 Watts
Recommended PSU | 550 Watts | 550 Watts | 600 Watts
GPU Thermal Threshold | 105°C | 105°C | 105°C

Chart Courtesy of Benchmark Reviews

NVIDIA GF100 GPU Fermi Architecture

NVIDIA's latest GPU is codenamed GF100, and is the first graphics processor based on the Fermi architecture. In this article, Benchmark Reviews explains the technical architecture behind NVIDIA's GF100 graphics processor and offers an insight into upcoming Fermi-based GeForce video cards. For those who are not familiar, NVIDIA's GF100 GPU is their first graphics processor to support DirectX-11 hardware features such as tessellation and DirectCompute, while also adding heavy particle and turbulence effects. The GF100 GPU is also the successor to the GT200 graphics processor, which launched in the GeForce GTX 280 video card back in June 2008. NVIDIA has since redefined their focus, and GF100 demonstrates a dedication to next-generation gaming effects such as ray tracing, order-independent transparency, and fluid simulations. Rest assured, the new GF100 GPU is more powerful than the GT200 could ever be, and early results indicate a Fermi-based video card can deliver more than twice the gaming performance of a GeForce GTX 280.

GF100 is not another incremental GPU step-up like we had going from G80 to GT200. Processor cores have grown from 128 (G80) to 240 (GT200), and they now reach 512 and earn the title of NVIDIA CUDA (Compute Unified Device Architecture) cores. The key here is not only the name, but that the name now implies an emphasis on something more than just graphics. Each Fermi CUDA processor core has a fully pipelined integer arithmetic logic unit (ALU) and floating point unit (FPU). GF100 implements the new IEEE 754-2008 floating-point standard, providing the fused multiply-add (FMA) instruction for both single and double precision arithmetic. FMA improves over a multiply-add (MAD) instruction by doing the multiplication and addition with a single final rounding step, with no loss of precision in the addition. FMA minimizes rendering errors in closely overlapping triangles.
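
The single-rounding property is easy to demonstrate. Round the product a·b once, as a MAD's multiply would, then ask FMA for a·b minus that rounded product: because the FMA computes the product exactly before its one rounding step, it recovers the residual the standalone multiply threw away. A short sketch using CUDA's __fmul_rn and __fmaf_rn intrinsics, with test values chosen so the exact square needs more than a float's 24 mantissa bits:

```cuda
#include <cstdio>
#include <cuda_runtime.h>

// FMA's single rounding step, demonstrated: p = round(a*b) is what a MAD's
// multiply produces; fma(a, b, -p) computes a*b - p exactly and then rounds
// once, exposing a rounding error that a separate MUL+ADD can never see.
__global__ void fmaResidual(float a, float b, float* residual)
{
    float p = __fmul_rn(a, b);        // rounded product (one rounding)
    *residual = __fmaf_rn(a, b, -p);  // exact a*b - p, then one rounding
}

int main()
{
    float a = 1.0f + 1.0f / 4096.0f;  // a*a = 1 + 2^-11 + 2^-24: inexact
    float* dRes; cudaMalloc(&dRes, sizeof(float));
    fmaResidual<<<1, 1>>>(a, a, dRes);
    float r; cudaMemcpy(&r, dRes, sizeof r, cudaMemcpyDeviceToHost);
    printf("rounding error of a*a: %g\n", r);  // nonzero: 2^-24 ~ 5.96e-08
    return 0;
}
```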

nvidia-fermi-gf100-gpu-block-diagram-benchmarkreviews-sm.png

NVIDIA Fermi GF100 Block Diagram (click for high-resolution)

Based on Fermi's third-generation Streaming Multiprocessor (SM) architecture, GF100 doubles the number of CUDA cores over the previous architecture. NVIDIA GeForce GF100 Fermi GPUs are based on a scalable array of Graphics Processing Clusters (GPCs), Streaming Multiprocessors (SMs), and memory controllers. The NVIDIA GF100 implements four GPCs, sixteen SMs, and six memory controllers. Expect NVIDIA to launch GF100 products with different configurations of GPCs, SMs, and memory controllers to address different price points.

CPU commands are read by the GPU via the Host Interface. The GigaThread Engine fetches the specified data from system memory and copies it to the frame buffer. GF100 implements six 64-bit GDDR5 memory controllers (384-bit total) to facilitate high-bandwidth access to the frame buffer. The GigaThread Engine then creates and dispatches thread blocks to various SMs. Individual SMs in turn schedule warps (groups of 32 threads) to CUDA cores and other execution units. The GigaThread Engine also redistributes work to the SMs when work expansion occurs in the graphics pipeline, such as after the tessellation and rasterization stages.
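
The block-and-warp decomposition is visible from ordinary CUDA code. A trivial sketch follows; device-side printf itself requires a Fermi-class GPU.

```cuda
#include <cstdio>
#include <cuda_runtime.h>

// Each thread block dispatched to an SM executes as warps of 32 threads.
// This kernel simply reports how one thread sees that decomposition.
__global__ void whoAmI()
{
    int warpInBlock = threadIdx.x / warpSize;   // warpSize is 32
    int laneInWarp  = threadIdx.x % warpSize;
    if (laneInWarp == 0)                        // print one line per warp
        printf("block %d, warp %d (threads %d-%d)\n",
               blockIdx.x, warpInBlock, warpInBlock * warpSize,
               warpInBlock * warpSize + warpSize - 1);
}

int main()
{
    // Two thread blocks of 64 threads: the GigaThread engine distributes
    // the blocks to SMs, and each SM issues them as two warps apiece.
    whoAmI<<<2, 64>>>();
    cudaDeviceSynchronize();   // flush device-side printf output
    return 0;
}
```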

GF100 implements 512 CUDA cores, organized as 16 SMs of 32 cores each. Each SM is a highly parallel multiprocessor supporting up to 48 warps at any given time. Each CUDA core is a unified processor core that executes vertex, pixel, geometry, and compute kernels. A unified L2 cache architecture services load, store, and texture operations. GF100 has 48 ROP units for pixel blending, antialiasing, and atomic memory operations. The ROP units are organized in six groups of eight, and each group is serviced by a 64-bit memory controller. The memory controller, L2 cache, and ROP group are closely coupled; scaling one unit automatically scales the others.

NVIDIA GigaThread Thread Scheduler

One of the most important technologies of the Fermi architecture is its two-level, distributed thread scheduler. At the chip level, a global work distribution engine schedules thread blocks to various SMs, while at the SM level, each warp scheduler distributes warps of 32 threads to its execution units. The first generation GigaThread engine introduced in G80 managed up to 12,288 threads in real-time. The Fermi architecture improves on this foundation by providing not only greater thread throughput, but dramatically faster context switching, concurrent kernel execution, and improved thread block scheduling.
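
Concurrent kernel execution is driven from the host through CUDA streams: two independent kernels launched into separate streams may overlap on Fermi, where earlier GPUs would serialize them. A minimal sketch under that assumption; busyAdd is a placeholder workload, and neither launch fills the GPU, which is what leaves room for overlap.

```cuda
#include <cstdio>
#include <cuda_runtime.h>

// Placeholder workload: enough iterations to keep an SM busy for a while.
__global__ void busyAdd(float* data, int n, int iters)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= n) return;
    float v = data[i];
    for (int k = 0; k < iters; ++k) v = v * 1.000001f + 0.5f;
    data[i] = v;
}

int main()
{
    const int n = 16 * 256;   // 16 blocks of 256 threads
    float *dA, *dB;
    cudaMalloc(&dA, n * sizeof(float));
    cudaMalloc(&dB, n * sizeof(float));
    cudaMemset(dA, 0, n * sizeof(float));
    cudaMemset(dB, 0, n * sizeof(float));

    cudaStream_t s1, s2;
    cudaStreamCreate(&s1);
    cudaStreamCreate(&s2);

    // Independent work in independent streams: on Fermi the GigaThread
    // engine may run these kernels concurrently instead of back-to-back.
    busyAdd<<<16, 256, 0, s1>>>(dA, n, 10000);
    busyAdd<<<16, 256, 0, s2>>>(dB, n, 10000);

    cudaDeviceSynchronize();
    printf("both kernels completed\n");
    cudaStreamDestroy(s1);
    cudaStreamDestroy(s2);
    return 0;
}
```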

What's new in Fermi?

With any new technology, consumers want to know what's new in the product. The goal of this article is to share in-depth information surrounding the Fermi architecture, as well as the new functionality unlocked in GF100. For clarity, the 'GF' letters used in the GF100 GPU name are not an abbreviation for 'GeForce'; they actually denote that this GPU is a Graphics solution based on the Fermi architecture. The next generation of NVIDIA GeForce-series desktop video cards will use the GF100 to promote the following new features:

  • Third Generation Streaming Multiprocessor (SM)
    o 32 CUDA cores per SM, 4x over GT200
    o 8x the peak double precision floating point performance over GT200
    o Dual Warp Scheduler simultaneously schedules and dispatches instructions from two independent warps
    o 64 KB of RAM with a configurable partitioning of shared memory and L1 cache
  • Second Generation Parallel Thread Execution ISA
    o Unified Address Space with Full C++ Support
    o Optimized for OpenCL and DirectCompute
    o Full IEEE 754-2008 32-bit and 64-bit precision
    o Full 32-bit integer path with 64-bit extensions
    o Memory access instructions to support transition to 64-bit addressing
    o Improved Performance through Predication
  • Improved Memory Subsystem
    o NVIDIA Parallel DataCache hierarchy with Configurable L1 and Unified L2 Caches
    o First GPU with ECC memory support
    o Greatly improved atomic memory operation performance
  • NVIDIA GigaThread Engine
    o 10x faster application context switching
    o Concurrent kernel execution
    o Out of Order thread block execution
    o Dual overlapped memory transfer engines

Benchmark Reviews offers more detail in our full-length NVIDIA GF100 GPU Fermi Graphics Architecture guide.

Closer Look: GeForce GTX480

So far, 2010 has been an exciting year for game developers. Microsoft Windows 7 introduced gamers to DirectX-11, and video games released for the PC platform have looked better than ever. DirectX-11 is the leap in video game software development we've been waiting for. Screen Space Ambient Occlusion (SSAO) is given emphasis in DX11, allowing some of the most detailed computer textures gamers have ever seen. Realistic cracks in mud with definable depth and splintered tree bark make the game more realistic, but they also make new demands on the graphics hardware.

This new level of graphical detail requires a new level of computer hardware: DX11-compliant hardware. Tessellation adds a tremendous level of strain on the GPU, making previous graphics hardware virtually obsolete with new DX11 game titles.

NVIDIA_GeForce_GTX-480_Video_Card_Top.jpg

The NVIDIA GeForce GTX480 replaces the GeForce GTX285 as their most powerful single-GPU desktop graphics card. Retaining the same dual-slot height and 10.5" length, the GeForce GTX480 adds a more robust thermal management system with five heatpipes (four exposed) transferring heat from the GF100 GPU to an exposed heatsink.

Video frame buffer memory specifications change with the GTX480. The 512-bit memory interface of the GTX285 is replaced by a 384-bit version, which features 1536MB of GDDR5 vRAM operating at 924/3696 MHz clock and data rates.
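
Those memory numbers multiply out directly: GDDR5 moves four data words per memory clock, so 924 MHz × 4 × (384 bits ÷ 8 bits per byte) yields the 177.4 GB/s quoted in the specifications chart earlier. A quick sketch of the arithmetic (host-only code, figures taken from this article):

```cuda
#include <cstdio>

// Theoretical GDDR5 bandwidth: memory clock x 4 (quad data rate) x bus
// width in bytes. Reproduces the figures quoted in this article.
static double gddr5BandwidthGBs(double memClockMHz, int busBits)
{
    double dataRateMTps = memClockMHz * 4.0;           // transfers per second
    return dataRateMTps * 1e6 * (busBits / 8.0) / 1e9; // bytes/s -> GB/s
}

int main()
{
    printf("GTX 480: %.1f GB/s\n", gddr5BandwidthGBs(924.0, 384)); // 177.4
    printf("GTX 470: %.1f GB/s\n", gddr5BandwidthGBs(837.0, 320)); // 133.9
    return 0;
}
```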

NVIDIA_GeForce_GTX-480_Video_Card_Angle.jpg

Other changes occur in more subtle ways, such as with the device header panel. While keeping to the traditional design used for GeForce 200-series products, the GTX 480 simply replaces the S-Video connection with a more relevant HDMI 1.3a A/V output. NVIDIA has retained dual DVI output, which means that at least two GeForce video cards will be needed for GeForce 3D Vision Surround functionality. As with past GeForce video cards, the GPU drives only two output 'lanes', so all three output devices cannot operate at once.

NVIDIA_GeForce_GTX-480_Video_Card_Header.jpg

The new 40nm fabrication process allows far more transistors on the die, now increased from 1.4-billion in the GT200 GPU present on the GeForce GTX 285 to an astounding 3.2-billion built into the Fermi GF100 GPU used on the GeForce GTX480 (and also the GTX470). The increased transistor count also amplifies the thermal output, which NVIDIA estimates at 250 watts TDP.

NVIDIA_GeForce_GTX-480_Video_Card_Corner.jpg

One particular difference between the GTX285 and the GTX480 is heat. While the exposed heatsink does well enough to remove a portion of heat from the 700MHz GPU, the exhaust vents begin to expel hot air from initial start-up. The GF100 GPU is located closer to the exhaust panel than GT200 was, and although the heatsink and heat-pipes are improved, the operating temperature runs very warm even at idle. We explore operating temperatures later in this article.

Although designed for a 250W TDP, the 6-pin and 8-pin power connections plus the PCI Express slot can deliver up to 300W on demand... which is close to what we recorded in our full-load power consumption results. Keep this in mind when shopping for a power supply unit; NVIDIA recommends a 600W PSU for the GTX480, but 800W would be a safer suggestion.

NVIDIA_GeForce_GTX-480_Video_Card_Side.jpg

Similar to the GeForce GTX285, both the GTX470 and GTX480 offer triple-SLI capability. SLI and triple-SLI are technically possible, but careful consideration for heat must be given to the GTX480. Under load this video card nearly reached 100°C in a 25°C room, and that's with an extremely well-ventilated computer case helping to keep it cool. Also note the cut-out section of PCB, which allows the blower fan to intake air from either side of the unit.

The printed circuit board (PCB) is a busy place for the GeForce GTX480. Many of the electronic components have been relocated to the 'top' side of the PCB to better accommodate the fully-grown 530 mm² GF100 GPU and its 3.2-billion transistors. Its 480 CUDA cores operate at 1401 MHz, which keeps a firm lead over ATI's 850 MHz Cypress-XT GPU, a chip that measures 334 mm² and fits 2.154-billion transistors.

NVIDIA_GeForce_GTX-480_Video_Card_PCB.jpg

In the next several sections, Benchmark Reviews explains our video card test methodology, followed by a performance comparison of the NVIDIA GeForce GTX480 against several of the most popular graphics accelerators available. The GeForce GTX480 replaces the GeForce GTX 285, and directly competes against the ATI Radeon HD 5870, so we'll be keeping a close eye on comparative performance.

VGA Testing Methodology

The Microsoft DirectX-11 graphics API is native to the Microsoft Windows 7 Operating System, and will be the primary O/S for our test platform. DX11 is also available as a Microsoft Update to the Vista O/S, so our test results apply to both versions of the Operating System. Because not all graphics solutions were DX11 compatible at the time this article was published, DirectX-10 test settings have been included beside DirectX-11 results.

According to the Steam Hardware Survey published at the time of Windows 7 launch, the most popular gaming resolution is 1280x1024 (17-19" standard LCD monitors) closely followed by 1024x768 (15-17" standard LCD). However, because these resolutions are considered 'low' by most standards, our benchmark performance tests concentrate on the up-and-coming higher-demand resolutions: 1680x1050 (22-24" widescreen LCD) and 1920x1200 (24-28" widescreen LCD monitors). These resolutions are more likely to be used by high-end graphics solutions, such as those tested in this article.

GPU-Z_NVIDIA_GeForce_GTX-480_Video_Card.png

In each benchmark test there is one 'cache run' that is conducted, followed by five recorded test runs. Results are collected at each setting with the highest and lowest results discarded. The remaining three results are averaged, and displayed in the performance charts on the following pages.
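
In code form, the reported number is simply a trimmed mean of the five recorded runs. A small sketch with hypothetical FPS values:

```cuda
#include <algorithm>
#include <cstdio>

// The methodology described above: five recorded runs per setting, the
// highest and lowest results discarded, and the remaining three averaged.
static double reportedFps(double runs[5])
{
    std::sort(runs, runs + 5);                   // ascending order
    return (runs[1] + runs[2] + runs[3]) / 3.0;  // drop runs[0] and runs[4]
}

int main()
{
    double runs[5] = {58.2, 61.0, 60.4, 59.7, 74.9};    // hypothetical runs
    printf("reported: %.1f FPS\n", reportedFps(runs));  // outlier ignored
    return 0;
}
```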

A combination of synthetic and video game benchmark tests have been used in this article to illustrate relative performance among graphics solutions. Our benchmark frame rate results are not intended to represent real-world graphics performance, as this experience would change based on supporting hardware and the perception of individuals playing the video game.

Intel X58-Express Test System

Benchmark Applications

  • 3DMark Vantage v1.02 (Extreme Quality, 8x Multisample Anti-Aliasing, 16x Anisotropic Filtering, 1:2 Scale)
  • Battlefield: Bad Company 2 (High Quality, HBAO, 8x Anti-Aliasing, 16x Anisotropic Filtering, Single-Player Intro Scene)
  • BattleForge v1.2 (Very High Quality, 8x Anti-Aliasing, Auto Multi-Thread)
  • Crysis Warhead v1.1 with HOC Benchmark (DX10, Very High Quality, 4x Anti-Aliasing, 16x Anisotropic Filtering, Airfield Demo)
  • Far Cry 2 v1.02 (DX10, Very High Performance, Ultra-High Quality, 8x Anti-Aliasing, HDR + Bloom)
  • Resident Evil 5 Benchmark (DX10, Super-High Quality, 8x MSAA)
  • S.T.A.L.K.E.R. Call of Pripyat Benchmark Demo (Ultra-Quality, Enhanced DX10 light, 4x MSAA, SSAO on and off)
  • Unigine Heaven Benchmark 2.0 (DX11 and DX10, High-Quality Shaders, Tessellation, 16x AF, 4x AA)

Video Card Test Products

Graphics Card | Radeon HD4890 | GeForce GTX275 | Radeon HD5850 | GeForce GTX285 | Radeon HD5870 | GeForce GTX480 | GeForce GTX295 | Radeon HD5970
GPU Cores | 800 | 240 | 1440 | 240 | 1600 | 480 | 480 (240 per GPU) | 3200 (1600 per GPU)
Core Clock (MHz) | 850 | 633 | 725 | 670 | 850 | 700 | 576 | 725
Shader Clock (MHz) | N/A | 1404 | N/A | 1550 | N/A | 1401 | 1242 | N/A
Memory Clock (MHz) | 975 | 1134 | 1000 | 1300 | 1200 | 924 | 999 | 1000
Memory Amount | 1024 MB GDDR5 | 896 MB GDDR3 | 1024 MB GDDR5 | 1024 MB GDDR3 | 1024 MB GDDR5 | 1536 MB GDDR5 | 1792 MB GDDR3 | 2048 MB GDDR5
Memory Interface | 256-bit | 448-bit | 256-bit | 512-bit | 256-bit | 384-bit | 896-bit (448-bit per GPU) | 512-bit (256-bit per GPU)
  • ATI Radeon HD 4890 Reference Design (850 MHz GPU/975 MHz vRAM - ATI Catalyst Driver 10.3)
  • NVIDIA GeForce GTX 275 Reference Design (633 MHz GPU/1404 MHz Shader/1134 MHz vRAM - Forceware 197.13)
  • ATI Radeon HD 5850 Reference Design (725 MHz GPU/1000MHz vRAM - ATI Catalyst Driver 10.3)
  • ASUS GeForce GTX 285 ENGTX285 TOP (670 MHz GPU/1550 MHz Shader/1330 MHz vRAM - Forceware 197.13)
  • ATI Radeon HD 5870 Reference Design (850 MHz GPU/1200MHz vRAM - ATI Catalyst Driver 10.3)
  • NVIDIA GeForce GTX 480 Reference Design (700 MHz GPU/1401 MHz Shader/924 MHz vRAM - Forceware 197.17 Beta)
  • NVIDIA GeForce GTX 295 Reference Design (576 MHz GPU x2/1242 MHz Shader/999 MHz vRAM - Forceware 197.13)
  • ATI Radeon HD 5970 Reference Design (725 MHz GPU x2/1000MHz vRAM - ATI Catalyst Driver 10.3)

3DMark Vantage GPU Tests

3DMark Vantage is a PC benchmark suite designed to test DirectX-10 graphics card performance. 3DMark Vantage is the latest addition to the 3DMark benchmark series built by Futuremark Corporation. Although 3DMark Vantage requires NVIDIA PhysX to be installed for program operation, only the CPU/Physics test relies on this technology.

3DMark Vantage offers benchmark tests focusing on GPU, CPU, and Physics performance. Benchmark Reviews uses the two GPU-specific tests for grading video card performance: Jane Nash and New Calico. These tests isolate graphical performance, and remove processor dependence from the benchmark results.

3DMark Vantage GPU Test: Jane Nash

Of the two GPU tests 3DMark Vantage offers, the Jane Nash performance benchmark is slightly less demanding. In a short video scene the special agent escapes a secret lair by water, nearly losing her shirt in the process. Benchmark Reviews tests this DirectX-10 scene at 1680x1050 and 1920x1200 resolutions, and uses Extreme quality settings with 8x anti-aliasing and 16x anisotropic filtering. The 1:2 scale is utilized, and is the highest this test allows. By maximizing the processing levels of this test, the scene creates the highest level of graphical demand possible and sorts the strong from the weak.

3dMark_Vantage_Jane_Nash_Benchmark.jpg

Looking at 3DMark Vantage performance, some trends are forming. The Radeon HD4890 and GeForce GTX275 are similarly matched, and generally perform at the low end of our chart. The factory-overclocked GeForce GTX285 is overshadowed by the Radeon HD5850, while the Radeon HD5870 tops the charts for single-GPU performance. NVIDIA's GeForce GTX480 trails the HD5870 by less than one FPS in the Jane Nash tests, and leads the GTX285 by 47%. NVIDIA's dual-GPU GeForce GTX295 is no match for the ATI Radeon HD5970, but one GF100 GPU easily beats two GT200s.

3DMark Vantage GPU Test: New Calico

New Calico is the second GPU test in the 3DMark Vantage test suite. Of the two GPU tests, New Calico is the most demanding. In a short video scene featuring a galactic battleground, there is a massive display of busy objects across the screen. Benchmark Reviews tests this DirectX-10 scene at 1680x1050 and 1920x1200 resolutions, and uses Extreme quality settings with 8x anti-aliasing and 16x anisotropic filtering. The 1:2 scale is utilized, and is the highest this test allows. Using the highest graphics processing level available allows our test products to separate themselves and stand out (if possible).

3dMark_Vantage_New_Calico_Benchmark.jpg

Similar to the Jane Nash tests, the New Calico tests in 3dMark Vantage have the Radeon HD4890 and GeForce GTX275 evenly matched. The Radeon HD5850 still trumps the GeForce GTX285, but in this test the GeForce GTX480 outperforms the Radeon HD5870 and all other single-GPU video cards. For reference, the GeForce GTX480 trailed behind the dual-GPU Radeon HD5970 by 32-51%.

Test Summary: The GeForce GTX480 improves upon the GTX285 by nearly 61% at 1920x1200, and outperforms the Radeon HD5870 by 10%. The recent 1.02 version update and fresh Forceware/Catalyst drivers do not appear to affect 3DMark Vantage, and results indicate that this is still a balanced tool for measuring DX10 graphics performance.


Battlefield: Bad Company 2

The Battlefield franchise has been known to demand a lot from PC graphics hardware. DICE (Digital Illusions CE) has paired its Frostbite-1.5 game engine with the Destruction-2.0 feature set in Battlefield: Bad Company 2, which features destructible environments and adds gravitational bullet-drop effects for projectiles fired from weapons at long distances. The Frostbite-1.5 game engine consists of DirectX-10 primary graphics, with improved performance and softened dynamic shadows added for DirectX-11 users.

At the time Battlefield: Bad Company 2 was published, DICE was also working on the Frostbite-2.0 game engine. This upcoming engine will include native support for DirectX-10.1 and DirectX-11, as well as parallelized processing support for 2-8 parallel threads. This will improve performance for users with an Intel Core-i7 processor. Unfortunately, the Extreme Edition Intel Core i7-980X six-core CPU with twelve threads will not see full utilization.

In our benchmark tests of Battlefield: Bad Company 2, the first three minutes of action in the single-player raft night scene are captured with FRAPS. Relative to the online multiplayer action, these frame rate results are nearly identical to daytime maps with the same video settings.

Battlefield-Bad-Company-2_Benchmark.jpg

The Frostbite-1.5 game engine in Battlefield: Bad Company 2 appears to equalize our test set of video cards, and despite AMD's sponsorship of the game it still plays well using any brand of graphics card.

There's a noticeable lead for the GeForce GTX275 over the Radeon HD4890, and the GeForce GTX285 is much closer to the Radeon HD5850 and HD5870. The NVIDIA GeForce GTX480 leads the Radeon HD5870 by 23%, and improves upon the GTX285 by 58%. Surprisingly, the dual-GPU Radeon HD5970 leads the GTX480 by only 15%.

Test Summary: In Battlefield: Bad Company 2, the GeForce GTX480 improves upon the GTX285 by nearly 59% while beating the ATI Radeon HD5870 by 23%. Additionally, only 9 FPS separate the GTX480 and the dual-GPU Radeon HD5970.


BattleForge Performance

BattleForge is a free Massive Multiplayer Online Role Playing Game (MMORPG) developed by EA Phenomic with DirectX-11 graphics capability. Combining strategic cooperative battles, the community of MMO games, and trading-card gameplay, BattleForge players are free to put their creatures, spells, and buildings into whatever combinations they see fit. These units are represented in the form of digital cards from which you build your own unique army. With minimal resources and a custom tech tree to manage, the gameplay is unbelievably accessible and action-packed.

Benchmark Reviews uses the built-in graphics benchmark to measure performance in BattleForge, using Very High quality settings (detail) and 8x anti-aliasing with auto multi-threading enabled. BattleForge is one of the first titles to take advantage of DirectX-11 in Windows 7, and offers a very robust color range throughout the busy battleground landscape. The first chart illustrates how performance measures up between video cards when Screen Space Ambient Occlusion (SSAO) is disabled, which runs the test at DirectX-10 levels.

BattleForge_No-SSAO_Benchmark.jpg

When Screen Space Ambient Occlusion (SSAO) is disabled, past-generation NVIDIA GeForce and ATI Radeon products are compared on a more even playing field (so long as you overlook the fact that we have a few DirectX-10 cards in the mix, and that BattleForge is a DirectX-11 game). These tests illustrate how well new DX11-compliant video cards improve upon previously-popular DX10 graphics solutions.

Looking at performance at 1920x1200 resolution, the ATI Radeon HD4890 is slightly ahead of the GeForce GTX275, and the Radeon HD5850 is ahead of the overclocked ASUS GeForce GTX 285 TOP. The Radeon HD 5870 is a few FPS ahead of the GeForce GTX295 dual-GT200 video card. NVIDIA's Fermi-based GeForce GTX480 delivers better graphics performance than every other video card on the planet, and its 82.5 FPS outperforms the dual-GPU ATI Radeon HD5970 by 18% at 1920x1200 and 20% at 1680x1050.

The next chart (below) illustrates how BattleForge reacts when SSAO is enabled, which forces multi-core optimizations that DirectX-11-compatible video cards are best suited to handle:

BattleForge_DX11_Benchmark.jpg

As should be expected, the DirectX-11-compatible ATI Radeon HD 5000 series reveals an immediate advantage over all previous-generation NVIDIA GeForce products. SSAO isn't a technology that DX10 GeForce products can handle very well, yet the older ATI Radeon products seem to work well enough with the new strain of DX11. If gaming is the primary purpose for a discrete graphics card, then you'll want to consider that nearly all new video games coming to market will be developed with SSAO and other DirectX-11 extensions. These features make it difficult (and sometimes impossible) to enjoy the game on non-compliant graphics hardware.

In respect to EA's BattleForge, a reference-clocked ATI Radeon HD4890 is able to outperform the GeForce GTX275 and the overclocked ASUS GeForce GTX285 TOP, and nearly match the GeForce GTX295. This indicates that Windows 7 will re-center the definition of 'mainstream' graphics products. What was top shelf in Windows XP will soon become the low end with DirectX-11 in Windows 7 or Vista. For gamers who plan to use Windows 7, and especially those who play BattleForge, the Radeon HD5850 offered excellent performance, as did the HD5870 and the dual-GPU Radeon HD5970, but it was the NVIDIA GeForce GTX480 that deserves total respect.

Test Summary: With an unbiased appetite for raw DirectX-11 graphics performance, BattleForge appears impartial towards ATI and NVIDIA products. When high-strain SSAO is called into action, NVIDIA's GTX480 demonstrates how well Fermi is suited for DX11... improving upon the GeForce GTX285 by nearly 249%. While trumping ATI's best single-GPU Radeon HD5870 by 61% is an impressive feat all by itself, outperforming the dual-GPU Radeon HD5970 by 11% is incredible by definition.


Crysis Warhead Tests

Crysis Warhead is an expansion pack based on the original Crysis video game. Crysis Warhead is set in the future, where an ancient alien spacecraft has been discovered beneath the Earth on an island east of the Philippines. Crysis Warhead uses a refined version of the CryENGINE2 graphics engine. Like Crysis, Warhead uses the Microsoft Direct3D 10 (DirectX-10) API for graphics rendering.

Benchmark Reviews uses the HOC Crysis Warhead benchmark tool to test and measure graphic performance using the Airfield 1 demo scene. This short test places a high amount of stress on a graphics card because of detailed terrain and textures, but also for the test settings used. Using the DirectX-10 test with Very High Quality settings, the Airfield 1 demo scene receives 4x anti-aliasing and 16x anisotropic filtering to create maximum graphic load and separate the products according to their performance.

Using the highest quality DirectX-10 settings with 4x AA and 16x AF, only the most powerful graphics cards are expected to perform well in our Crysis Warhead benchmark tests. DirectX-11 extensions are not supported in Crysis: Warhead, and SSAO is not an available option.

Crysis_Warhead_Benchmark.jpg

Without SSAO support, Crysis becomes a battle of sheer graphics power. The Radeon HD4890 and GeForce GTX275 dance around 18 FPS at 1920x1200, and the Radeon HD5850 chases the 21 FPS marker with an overclocked ASUS GeForce GTX 285 TOP. But when it comes to top-tier graphics, the ATI Radeon HD5870 measures up to the GeForce GTX295 of yesteryear and not much more. At 33 FPS the NVIDIA GeForce GTX480 is ahead of the HD5870 by 22%, and trails the dual-GPU Radeon HD5970 by a mere 6 FPS (only four at 1680x1050).

Test Summary: The CryENGINE2 graphics engine used in Crysis Warhead responds well to both ATI and NVIDIA products, with test results closely mirroring 3DMark Vantage. The NVIDIA GeForce GTX480 improves upon the GTX285 by 57%, and also beats the ATI Radeon HD5870 by 22%. With 33 FPS at 1920x1200, the GTX480 is a mere 6 FPS away from the dual-GPU ATI Radeon HD5970.


Far Cry 2 Benchmark

Ubisoft has developed Far Cry 2 as a sequel to the original, but with a very different approach to gameplay and story line. Far Cry 2 features a vast world built on Ubisoft's new game engine called Dunia, meaning "world", "earth", or "living" in Farsi. The setting in Far Cry 2 takes place on a fictional Central African landscape, set to a modern-day timeline.

The Dunia engine was built specifically for Far Cry 2 by Ubisoft's Montreal development team. It delivers realistic semi-destructible environments, special effects such as dynamic fire propagation and storms, real-time night-and-day sun and moon light cycles, a dynamic music system, and non-scripted enemy A.I. actions.

The Dunia game engine takes advantage of multi-core processors as well as multiple processors and supports DirectX 9 as well as DirectX-10. Only 2 or 3 percent of the original CryEngine code is re-used, according to Michiel Verheijdt, Senior Product Manager for Ubisoft Netherlands. Additionally, the engine is less hardware-demanding than CryEngine 2, the engine used in Crysis.

However, it should be noted that Crysis delivers greater character and object texture detail, as well as more destructible elements within the environment: for example, trees breaking into many smaller pieces and buildings breaking down to their component panels. Far Cry 2 also supports the amBX technology from Philips. With the proper hardware, this adds effects like vibrations, ambient colored lights, and fans that generate wind effects.

There is a benchmark tool in the PC version of Far Cry 2, which offers an excellent array of settings for performance testing. Benchmark Reviews used the maximum settings allowed for DirectX-10 tests, with the resolution set to 1920x1200. Performance settings were all set to 'Very High', Render Quality was set to 'Ultra High' overall quality, 8x anti-aliasing was applied, and HDR and Bloom were enabled.

Far_Cry_2_Benchmark_Performance.jpg

Although the Dunia engine in Far Cry 2 is slightly less demanding than CryEngine 2 engine in Crysis, the strain appears to be extremely close. In Crysis we didn't dare to test AA above 4x, whereas we used 8x AA and 'Ultra High' settings in Far Cry 2. The end effect was a separation between what is capable of maximum settings, and what is not. Using the short 'Ranch Small' time demo (which yields the lowest FPS of the three tests available), we noticed that there are very few products capable of producing playable frame rates with the settings all turned up.

Inspecting the performance at 1920x1200 resolution, it appears that every graphics card we tested can handle higher quality settings and post-processing effects in Far Cry 2. Similar to Battlefield: Bad Company 2 and Crysis Warhead, Far Cry 2 delivers optimal performance on NVIDIA GeForce video cards over the ATI Radeon alternative... The Way It's Meant To Be Played.

The Radeon HD4890 lags behind the GeForce GTX275, and the Radeon HD5850 only matches the overclocked ASUS GeForce GTX 285 TOP for a change. Obviously the results lean towards NVIDIA here, so be prepared for anything. The GeForce GTX480 outperforms every other graphics card sold in terms of Far Cry 2 performance, with a healthy lead over the HD5870 and a few FPS ahead of the dual-GPU Radeon HD5970.

Test Summary: The Dunia game engine appears preferential to NVIDIA products over ATI, and if you're one of the many gamers who like to play Far Cry 2, perhaps you should prefer them too. The new NVIDIA GeForce GTX480 improves over the past-generation GeForce GTX285 by nearly 76%, and beats the Radeon HD5870 by over 50%. The surprise here is a measured benefit of 2 FPS for the single-GPU GF100 graphics card over the dual-GPU Radeon HD5970.


Resident Evil 5 Tests

Built upon an advanced version of Capcom's proprietary MT Framework game engine to deliver DirectX-10 graphic detail, Resident Evil 5 offers gamers non-stop action similar to Devil May Cry 4, Lost Planet, and Dead Rising. The MT Framework is an exclusive seventh generation game engine built to be used with games developed for the PlayStation 3 and Xbox 360, and PC ports. MT stands for "Multi-Thread", "Meta Tools" and "Multi-Target". Games using the MT Framework are originally developed on the PC and then ported to the other two console platforms.

On the PC version of Resident Evil 5, both DirectX 9 and DirectX-10 modes are available for the Microsoft Windows XP and Vista Operating Systems. Microsoft Windows 7 plays Resident Evil 5 through backwards-compatible Direct3D APIs. Resident Evil 5 is branded with the NVIDIA The Way It's Meant to be Played (TWIMTBP) logo, and receives NVIDIA GeForce 3D Vision functionality enhancements.

NVIDIA and Capcom offer the Resident Evil 5 benchmark demo for free download from their website, and Benchmark Reviews encourages visitors to compare their own results to ours. Because the Capcom MT Framework game engine is very well optimized and produces high frame rates, Benchmark Reviews uses the DirectX-10 version of the test at 1920x1200 resolution. Super-High quality settings are configured, with 8x MSAA post processing effects for maximum demand on the GPU. Test scenes from Area #3 and Area #4 require the most graphics processing power, and the results are collected for the chart illustrated below.

Resident_Evil_5_Benchmark.jpg

Resident Evil 5 has really proved how good the proprietary Capcom MT Framework game engine can look with DirectX-10 effects. The Area #3 and #4 tests are the most graphically demanding in this free downloadable benchmark demo, but the results suggest that the Area #3 test scene performs better with NVIDIA GeForce products, while the Area #4 scene favors ATI Radeon GPUs.

Even so, the past-generation ATI Radeon HD4890 renders 44 FPS in test scene #3, and jumps to 58 FPS in test scene #4. This loosely indicates that lower-end graphics cards can still play Resident Evil 5 at 1920x1200 and produce good 30+ FPS frame rates with maximum settings. It seems that driver optimizations between manufacturers could account for the disparity among test scenes, although the Resident Evil 5 game itself 'normalizes' in the two other (less demanding) scenes.

Many of the video card rankings changed between the two test scenes in Resident Evil 5. The Radeon HD5850 is 5 FPS ahead of NVIDIA's GeForce GTX275 in area #3, and then falls 12 FPS behind in area #4. The inverse is true for the next pair of video cards, where the GeForce GTX285 is ahead of the Radeon HD5870 in area #3 and then drops 16 FPS behind in area #4. The NVIDIA GeForce GTX480 dominates the entire field in area #3 results, and trails just behind the Radeon HD5970 in area #4.

Test Summary: It's unclear if Resident Evil 5 graphics performance fancies ATI or NVIDIA, especially with two test scenes that alternate favoritism. Although this benchmark tool is distributed directly from NVIDIA, and Forceware drivers likely have optimizations written for the Resident Evil 5 game, there doesn't appear to be any decisive tilt towards GeForce products over Radeon counterparts from within the game itself. Test scene #3 certainly favors the GeForce GTX480, and leads it ahead of every other product tested. In test scene #4, the GTX480 scores 118 FPS compared to only 81 with the past-generation GTX285, or 97 FPS from the Radeon HD5870.


STALKER Call of Pripyat Benchmark

The events of S.T.A.L.K.E.R.: Call of Pripyat unfold shortly after the end of S.T.A.L.K.E.R.: Shadow of Chernobyl. Having discovered the open path to the Zone center, the government decides to hold a large-scale military operation, codenamed "Fairway", aimed at taking the CNPP under control. According to the operation's plan, the first military group is to conduct air scouting of the territory to map out the detailed layout of anomalous fields. Thereafter, making use of the maps, the main military forces are to be dispatched. Despite thorough preparations, the operation fails, and most of the avant-garde helicopters crash. In order to collect information on the reasons behind the operation's failure, Ukraine's Security Service sends an agent into the Zone center.

S.T.A.L.K.E.R.: CoP is developed on the X-Ray 1.6 game engine, and implements several ambient occlusion (AO) techniques, including one that AMD has developed. AMD's AO technique is optimized to run efficiently on Direct3D 11 hardware, and has been chosen by a number of games (e.g. BattleForge, HAWX, and the new Aliens vs. Predator) for the distinct effect it adds to the final rendered images. This AO technique is called HDAO, which stands for 'High Definition Ambient Occlusion', because it picks up occlusions from fine details in normal maps.

Put in simple terms, ambient occlusion describes the parts of a scene that light finds hard to reach. In the real world, light has to bounce off many surfaces in order to reach such places. Classically this problem is solved with a radiosity technique, but radiosity is usually too expensive for real-time applications. For this reason, various screen-space techniques have been invented to approximate the effect of ambient occlusion.
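
A deliberately minimal sketch of that idea in CUDA: for each pixel, compare its depth against a few neighbors and darken it when nearby geometry sits closer to the camera. Real HDAO/HBAO implementations use many more taps and weight them by angle and distance; every name, threshold, and the tiny depth buffer here is illustrative.

```cuda
#include <cstdio>
#include <cuda_runtime.h>

// Toy screen-space occlusion: a neighbor clearly nearer the camera than the
// center pixel is counted as blocking some of its ambient light.
__global__ void ssao(const float* depth, int w, int h, float* ao)
{
    int x = blockIdx.x * blockDim.x + threadIdx.x;
    int y = blockIdx.y * blockDim.y + threadIdx.y;
    if (x >= w || y >= h) return;

    const int2 taps[4] = {{-1,0},{1,0},{0,-1},{0,1}};
    float d = depth[y * w + x], occluded = 0.0f;
    for (int t = 0; t < 4; ++t) {
        int sx = min(max(x + taps[t].x, 0), w - 1);
        int sy = min(max(y + taps[t].y, 0), h - 1);
        if (d - depth[sy * w + sx] > 0.02f) occluded += 1.0f;
    }
    ao[y * w + x] = 1.0f - 0.25f * occluded;   // 1 = fully lit
}

int main()
{
    const int w = 4, h = 1;
    float depth[w * h] = {0.5f, 0.5f, 0.1f, 0.5f};  // one raised ridge pixel
    float *dDepth, *dAo;
    cudaMalloc(&dDepth, sizeof depth); cudaMalloc(&dAo, sizeof depth);
    cudaMemcpy(dDepth, depth, sizeof depth, cudaMemcpyHostToDevice);

    ssao<<<dim3(1, 1), dim3(16, 16)>>>(dDepth, w, h, dAo);
    float ao[w * h];
    cudaMemcpy(ao, dAo, sizeof ao, cudaMemcpyDeviceToHost);
    for (int i = 0; i < w; ++i) printf("%.2f ", ao[i]);  // dips beside ridge
    printf("\n");
    return 0;
}
```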

STALKER-CoP_DX10_Benchmark.jpg

For the sake of including past-generation products in our test results, we've restrained the STALKER: CoP benchmark to DirectX-10 level settings with SSAO. There are three SSAO modes: Default, HBAO, and HDAO; this test uses Default. Each mode offers three SSAO levels of detail (Low, Medium, High), and HDAO adds an Ultra level. Our tests use the Default-High settings, which rank #3 out of 10 SSAO levels.

Using DirectX-10 lighting and Default-High SSAO settings, the HD4890 actually comes amazingly close to the Radeon HD5850 and NVIDIA GeForce GTX480 level of performance. While it's understandable that NVIDIA may not have optimized DX10 performance in their GeForce GTX480 beta driver, the opposite is true for Forceware 197.13 and the other GeForce products. It's suspicious that a past-generation ATI Radeon HD4890 DX10 video card can dramatically outperform all of NVIDIA's DX10 products, even the dual-GPU GeForce GTX295. My commentary of these test results ends here, and you can draw your own conclusion. Next are the DirectX-11 benchmark tests...

STALKER-CoP_DX11_Benchmark.jpg

S.T.A.L.K.E.R. Call of Pripyat is a video game based on the DirectX-11 architecture, and designed to use high-definition SSAO. Our DX11 tests utilize the highest settings possible, with HDAO mode set to use Ultra SSAO quality. Although this benchmark runs through all four test scenes (Day, Night, Rain, and Sun Shafts), only Day and Night are reported in our chart above.

Test Summary: Something's not right with S.T.A.L.K.E.R. Call of Pripyat. Either the X-Ray game engine or the video game itself is heavily optimized for ATI Radeon video cards, and the results are not representative of the relative performance we've seen in other game titles. If you play S.T.A.L.K.E.R. Call of Pripyat with a DX10 video card, the ATI Radeon series clearly carries more weight. Nevertheless, NVIDIA's GeForce GTX480 Fermi video card holds its own in DX11 tests with HDAO set to Ultra, and improves upon the non-compliant GTX285 by 120% in night scenes and 108% during daytime scenes.


Unigine Heaven Benchmark

The Unigine "Heaven 2.0" benchmark is a free publicly available tool that grants the power to unleash the graphics capabilities in DirectX-11 for Windows 7 or updated Vista Operating Systems. It reveals the enchanting magic of floating islands with a tiny village hidden in the cloudy skies. With the interactive mode, emerging experience of exploring the intricate world is within reach. Through its advanced renderer, Unigine is one of the first to set precedence in showcasing the art assets with tessellation, bringing compelling visual finesse, utilizing the technology to the full extend and exhibiting the possibilities of enriching 3D gaming.

The distinguishing feature of the Unigine Heaven benchmark is hardware tessellation: a scalable technology for automatic subdivision of polygons into smaller and finer pieces, so that developers can give their games a more detailed look almost free of charge in terms of performance. Thanks to this procedure, the rendered image approaches the boundary of truly lifelike visual perception. The "Heaven" benchmark excels at providing the following key features:

  • Native support of OpenGL, DirectX 9, DirectX-10 and DirectX-11
  • Comprehensive use of tessellation technology
  • Advanced SSAO (screen-space ambient occlusion)
  • Volumetric cumulonimbus clouds generated by a physically accurate algorithm
  • Dynamic simulation of changing environment with high physical fidelity
  • Interactive experience with fly/walk-through modes
  • ATI Eyefinity support

Unigine_Heaven_DX10_Benchmark.jpg

Just as we did with BattleForge and S.T.A.L.K.E.R. Call of Pripyat, we reduced the Unigine "Heaven" benchmark to DirectX-10 levels to make sure everyone had a fair chance. The Heaven benchmark is a free demo that makes use of the Unigine game engine, and is designed to show off the most detailed cobblestone and smoke you've ever seen a graphics card generate... in DirectX-11. Downgraded to DirectX-10, our test results (above) indicate trending similar to 3DMark Vantage, and keep the NVIDIA GeForce GTX480 only two frames from the Radeon HD5870.

Unigine_Heaven_DX11_Benchmark.jpg

Although Heaven 2.0 was recently released and used for our DirectX-11 tests, the benchmark results were extremely close to those obtained in Heaven 1.0 testing. Since only DX11-compliant video cards will run the Heaven benchmark properly, only those products that meet the requirement have been included.

The ATI Radeon HD5850 establishes a 24.1 FPS baseline, which the Radeon HD5870 increases to 29.9 FPS. Placed between ATI's best single-GPU video card and their dual-GPU Radeon HD5970, the NVIDIA GeForce GTX480 delivers 37.4 FPS. While the Unigine Heaven benchmark is a sample of what a game developer could do with this engine, as of now it's merely a synthetic benchmark that carries the same weight as 3DMark Vantage.

Test Summary: NVIDIA has been keen to promote the Unigine Heaven benchmark tool, as it appears to deliver a fair comparison of DirectX-11 test results. The NVIDIA GeForce GTX480 outpaces the Radeon HD5870 by 25%, and trails the dual-GPU Radeon HD5970 by a mere 10 FPS. Reviewers like to say "nobody plays a benchmark", but it seems evident that we can expect great things from a tool this detailed. For now, though, those details only come by way of DirectX-11 video cards.


NVIDIA APEX PhysX Enhancements

Mafia II is the first PC video game title to include the new NVIDIA APEX PhysX framework, a powerful feature set that only GeForce video cards are built to deliver. While console versions make use of PhysX, only the PC version supports NVIDIA's APEX PhysX physics modeling engine, which adds the following features: APEX Destruction, APEX Clothing, APEX Vegetation, and APEX Turbulence. PhysX helps make object movement, such as cloth and debris, more fluid and lifelike. In this section, Benchmark Reviews details the differences with and without APEX PhysX enabled.

We begin with a scene from the Mafia II benchmark test, which has the player pinned down behind a brick column as the enemy shoots at him. Examine the image below, which was taken with a Radeon HD 5850 configured with all settings turned to their highest and APEX PhysX support disabled:

Mafia2_Cloth_High_No-PhysX.jpg

No PhysX = Cloth Blending and Missing Debris

Notice in the image above that when PhysX is disabled there is no broken stone debris on the ground. Cloth from the foreground character's trench coat blends into his leg and remains in a static position relative to his body, as does the clothing on other (AI) characters. Now inspect the image below, which uses the GeForce GTX 460 with APEX PhysX enabled:

Mafia2_Cloth_High_PhysX.jpg

Realistic Cloth and Debris - High Quality Settings With PhysX

With APEX PhysX enabled, the cloth neatly sways with the contour of a character's body, and doesn't bleed into solid objects such as body parts. Additionally, APEX Clothing features improve realism by adding gravity and wind effects to clothing, allowing characters to look as they would in similar real-world environments.

Mafia2_PhysX_Fire.jpg

Burning Destruction Smoke and Vapor Realism

Flames aren't exactly new to video games, but smoke plumes and heat vapor that mimic realistic movement have never looked as real as they do with APEX Turbulence. Fire and explosions added to a destructible environment are a potent combination for virtual-world mayhem, showcasing the new APEX Destruction feature.

Mafia2_PhysX_Glass.jpg

Exploding Glass Shards and Bursting Flames

NVIDIA PhysX has changed video game explosions into something worthy of cinema-level special effects. Bursting windows explode into several unique shards of glass, and destroyed crates burst into splintered kindling. Smoke swirls and moves as if there's an actual air current, and flames reach out towards open space all on their own. Surprisingly, there is very little impact on FPS performance with APEX PhysX enabled on GeForce video cards, and very little penalty for changing from medium (normal) to high settings.

NVIDIA 3D-Vision Effects

Readers familiar with Benchmark Reviews have undoubtedly heard of NVIDIA GeForce 3D Vision technology; if not from our review of the product, then from the Editor's Choice Award it's earned, or the many times I've personally mentioned it in our articles. Put simply: it changes the game. 2010 has been a break-out year for 3D technology, and PC video games are leading the way. Mafia II expands on the three-dimensional effects, and improves the 3D-Vision experience with out-of-screen effects. For readers unfamiliar with the technology, 3D-Vision is a feature only available on NVIDIA GeForce video cards.

Mafia2_3d-Vision_Characters.jpg

The first thing gamers should be aware of is the performance penalty for using 3D-Vision with a high-demand game like Mafia II. Using a GeForce GTX 480 video card for reference, currently the most powerful single-GPU graphics solution available, we experienced frame rates up to 33 FPS with all settings configured to their highest and APEX PhysX set to high. When 3D Vision is enabled, the video frame rate usually decreases by about 50%. That's no longer a hard-and-fast rule, however, thanks to '3D Vision Ready' game titles that offer performance optimizations. Mafia II proved that the 3D Vision performance penalty can be as little as 30% with a single GeForce GTX 480 video card, or a mere 11% in SLI configuration. NVIDIA Forceware drivers will guide players to make custom-recommended adjustments specifically for each game they play, but PhysX and anti-aliasing will still reduce frame rate performance.
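
Translating those penalties into playable frame rates is simple arithmetic. The short Python sketch below applies the percentages quoted above to our 33 FPS reference result; it's a back-of-the-envelope illustration, not additional test data:

    # Effective frame rate after a given 3D Vision performance penalty.
    def effective_fps(base_fps, penalty):
        return base_fps * (1.0 - penalty)

    base = 33.0  # GTX 480: highest settings, APEX PhysX on high
    print(effective_fps(base, 0.50))  # the old ~50% rule      -> 16.5 FPS
    print(effective_fps(base, 0.30))  # Mafia II, single card  -> ~23.1 FPS
    print(effective_fps(base, 0.11))  # Mafia II, SLI          -> ~29.4 FPS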

Mafia2_3d-Vision_Tire_Door.jpg

Of course, the out-of-screen effects are worth every dollar you spend on graphics hardware. In the image above, an explosion sends the car's wheel and door flying into the player's face, followed by metal debris and sparks. When you're playing, this certainly helps catch your attention... and when the objects become bullets passing by you, the added depth of field assists player awareness.

Mafia2_3d-Vision_Debris.jpg

Combined with APEX PhysX technology, NVIDIA's 3D-Vision brings destructible walls to life. As enemies shoot at the brick column, dirt and dust fly past the player and stones tumble out towards you. Again, the added depth of field can help players pinpoint the origin of an enemy threat and improve response time without sustaining 'confusion damage'.

Mafia2_3d-Vision_Smoke_Plumes.jpg

NVIDIA APEX Turbulence, a new PhysX feature, already adds an impressive level of realism to games (such as Mafia II, pictured in this section). Watching plumes of smoke and flame spill out towards your camera angle helps put you right in the thick of the action.

Mafia2_3d-Vision_Out-of-Screen.jpg

NVIDIA 3D-Vision/3D-Vision Surround is the perfect addition to APEX PhysX technology, and capable video games will prove that these features reproduce lifelike scenery and destruction when they're used together. Glowing embers and fiery shards shooting past you seem very real when 3D-Vision pairs itself with APEX PhysX technology, and there's finally a good reason to overpower the PC's graphics system.

GeForce GTX480 Temperatures

Benchmark tests are always nice, so long as you care about comparing one product to another. But when you're an overclocker, or merely a hardware enthusiast who likes to tweak things on occasion, there's no substitute for good information. Benchmark Reviews has a very popular guide on Overclocking Video Cards, which gives detailed instruction on how to tweak a graphics card for better performance. Of course, not every video card has overclocking headroom; some products run so hot that they can't suffer any higher temperatures than they already endure. This is why we measure the operating temperature of the video card products we test.

FurMark does two things extremely well: it drives the thermal output of any graphics processor higher than any other application or video game, and it does so with consistency every time. While I have shown that FurMark is not a true benchmark tool for comparing one video card to another, it works very well for comparing a product against itself at different stages: the same GPU tested with different drivers or clock speeds, or stability-tested as FurMark pushes temperatures higher than any other program can. But in the end, it's a rather limited tool.

NVIDIA_GeForce_GTX-480_Video_Card_Temperatures.jpg

NVIDIA GeForce GTX 480 Temperature Results

NVIDIA-supplied product specifications state that the GeForce GTX 480 has a maximum GPU thermal threshold of 105°C. This is identical to the previous-generation GeForce GTX 285, as well as to the GeForce GTX 470 that shares the same GF100 graphics processor.

To begin my testing, I use GPU-Z to measure the temperature at idle as reported by the GPU. Next I use FurMark to generate maximum thermal load and record GPU temperatures in high-power 3D mode. The ambient room temperature remained at a stable 20.0°C throughout testing, while the inner-case temperature hovered around 37°C. The NVIDIA GeForce GTX 480 Fermi video card recorded a very warm 53°C in idle 2D mode, and increased to 93°C in sustained full 3D mode.

In comparison to the departing-generation GeForce GTX 285 and the competing ATI Radeon HD 5870, these temperatures are much higher at both idle and load. The GeForce GTX 285 hovers at 38°C, while the Radeon HD 5870 reports only 33°C at idle. Under full load, the GeForce GTX 285 climbs to 86°C, while the Radeon HD 5870 works at 80°C.
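
Because ambient temperature varies between test sessions and labs, a fair way to compare cooler performance is the rise over ambient rather than the raw reading. A quick Python sketch using the load temperatures above and our 20.0°C room:

    # Temperature rise over ambient at full FurMark load.
    ambient = 20.0  # degrees Celsius, measured during our testing
    load_temps = {
        "GeForce GTX 480": 93.0,
        "GeForce GTX 285": 86.0,
        "Radeon HD 5870": 80.0,
    }
    for card, temp in load_temps.items():
        print(f"{card}: +{temp - ambient:.0f} C over ambient")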

Despite the 40nm process, GF100 runs hot. This could certainly affect overclocking projects, and may also increase the likelihood of graphics instability inside poorly ventilated computer cases. If you've already got a warm-blooded CPU in your computer system, take a look at our Best CPU Cooler Performance series and find a heatsink that will lower your internal temperatures and prolong component lifetime. Lowering the internal case temperature could give you an added edge for GPU overclocking projects, and it will also help overall system stability.

VGA Power Consumption

Life is not as affordable as it used to be, and items such as gasoline, natural gas, and electricity top the list of resources that have exploded in price over the past few years. Add to this the limits of non-renewable resources compared to current demands, and you can see that prices are only going to get worse. Planet Earth needs our help, and needs it badly. With forests becoming barren of vegetation and snow-capped poles quickly turning brown, the technology industry has a new attitude towards turning "green". I'll spare you the powerful marketing hype that gets sent from various manufacturers every day, and get right to the point: your computer hasn't been doing much to help save energy... at least up until now.

To measure isolated video card power consumption, Benchmark Reviews uses the Kill-A-Watt EZ (model P4460) power meter made by P3 International. A baseline test is taken without a video card installed inside our computer system, which is allowed to boot into Windows and rest idle at the login screen before power consumption is recorded. Once the baseline reading has been taken, the graphics card is installed and the system is again booted into Windows and left idle at the login screen. Our final loaded power consumption reading is taken with the video card running a stress test using FurMark. Below is a chart with the isolated video card power consumption (not system total) displayed in Watts for each specified test product:
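
In other words, each chart entry is a simple subtraction: the Kill-A-Watt reading with the card installed minus the baseline reading without it. Here is a minimal Python sketch of that bookkeeping; the baseline and raw readings are hypothetical, chosen only so the differences match the GTX 480 results discussed below:

    # Isolated card power = wall reading with card - baseline reading without card.
    def isolated_power(reading_with_card_w, baseline_w):
        return reading_with_card_w - baseline_w

    baseline = 104       # hypothetical: system idling with no video card
    idle_reading = 156   # hypothetical: card installed, idle at login screen
    load_reading = 474   # hypothetical: card installed, FurMark stress test
    print(isolated_power(idle_reading, baseline))  # -> 52 W idle
    print(isolated_power(load_reading, baseline))  # -> 370 W loaded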

Video Card Power Consumption by Benchmark Reviews (sorted by combined total power)

VGA Product Description | Idle Power | Loaded Power
NVIDIA GeForce GTX 480 SLI Set | 82 W | 655 W
NVIDIA GeForce GTX 590 Reference Design | 53 W | 396 W
ATI Radeon HD 4870 X2 Reference Design | 100 W | 320 W
AMD Radeon HD 6990 Reference Design | 46 W | 350 W
NVIDIA GeForce GTX 295 Reference Design | 74 W | 302 W
ASUS GeForce GTX 480 Reference Design | 39 W | 315 W
ATI Radeon HD 5970 Reference Design | 48 W | 299 W
NVIDIA GeForce GTX 690 Reference Design | 25 W | 321 W
ATI Radeon HD 4850 CrossFireX Set | 123 W | 210 W
ATI Radeon HD 4890 Reference Design | 65 W | 268 W
AMD Radeon HD 7970 Reference Design | 21 W | 311 W
NVIDIA GeForce GTX 470 Reference Design | 42 W | 278 W
NVIDIA GeForce GTX 580 Reference Design | 31 W | 246 W
NVIDIA GeForce GTX 570 Reference Design | 31 W | 241 W
ATI Radeon HD 5870 Reference Design | 25 W | 240 W
ATI Radeon HD 6970 Reference Design | 24 W | 233 W
NVIDIA GeForce GTX 465 Reference Design | 36 W | 219 W
NVIDIA GeForce GTX 680 Reference Design | 14 W | 243 W
Sapphire Radeon HD 4850 X2 11139-00-40R | 73 W | 180 W
NVIDIA GeForce 9800 GX2 Reference Design | 85 W | 186 W
NVIDIA GeForce GTX 780 Reference Design | 10 W | 275 W
NVIDIA GeForce GTX 770 Reference Design | 9 W | 256 W
NVIDIA GeForce GTX 280 Reference Design | 35 W | 225 W
NVIDIA GeForce GTX 260 (216) Reference Design | 42 W | 203 W
ATI Radeon HD 4870 Reference Design | 58 W | 166 W
NVIDIA GeForce GTX 560 Ti Reference Design | 17 W | 199 W
NVIDIA GeForce GTX 460 Reference Design | 18 W | 167 W
AMD Radeon HD 6870 Reference Design | 20 W | 162 W
NVIDIA GeForce GTX 670 Reference Design | 14 W | 167 W
ATI Radeon HD 5850 Reference Design | 24 W | 157 W
NVIDIA GeForce GTX 650 Ti BOOST Reference Design | 8 W | 164 W
AMD Radeon HD 6850 Reference Design | 20 W | 139 W
NVIDIA GeForce 8800 GT Reference Design | 31 W | 133 W
ATI Radeon HD 4770 RV740 GDDR5 Reference Design | 37 W | 120 W
ATI Radeon HD 5770 Reference Design | 16 W | 122 W
NVIDIA GeForce GTS 450 Reference Design | 22 W | 115 W
NVIDIA GeForce GTX 650 Ti Reference Design | 12 W | 112 W
ATI Radeon HD 4670 Reference Design | 9 W | 70 W

* Results are accurate to within +/- 5W.

NVIDIA-supplied product specifications state a 250W Max Board Power (TDP), and suggest a 600W power supply unit. For power consumption tests, Benchmark Reviews utilizes the 80-PLUS GOLD certified OCZ Z-Series Gold 850W PSU, model OCZZ850. This power supply unit has been tested to provide over 90% typical efficiency by Chroma System Solutions.

At idle, the NVIDIA GeForce GTX 480 Fermi video card used 52 watts of electricity, perhaps the highest idle power draw we've measured among DirectX-11 generation graphics cards. This level of consumption is slightly higher than the 48W we measured for the dual-GPU ATI Radeon HD 5970, and more than twice the demand of ATI's Radeon HD5870 and HD5850. Compared against the departing GeForce GTX 285, the new GTX 480 adds about 20 additional watts at idle. Fermi certainly has a big power appetite when it should be snacking on only a few watts.

Once 3D applications begin to demand power from the GPU, electrical power consumption really begins to climb. Measured at full 3D load, the GeForce GTX 480 sets a new maximum power record for a single video card and consumes 370 watts. Although Fermi is built on a 40nm fabrication process, there's nothing 'green' about its power demand under load. Sure, the performance-per-watt ratio is higher on the GTX480 than on the other cards, but it comes at a price. Putting things into perspective, though, the enthusiast PC gamer who can afford the NVIDIA GeForce GTX 480 probably isn't very worried about a few extra dollars on his power bill each month.
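
That performance-per-watt claim is easy to sanity-check against the loaded figures above. In the Python sketch below, the average FPS values are illustrative stand-ins rather than a new benchmark run; only the wattages come from our measurements:

    # FPS per watt at full load; the average FPS values are illustrative only.
    cards = {
        "GeForce GTX 480": (67.6, 370.0),  # (average FPS, loaded watts)
        "Radeon HD 5870":  (43.0, 240.0),
    }
    for name, (fps, watts) in cards.items():
        print(f"{name}: {fps / watts:.3f} FPS per watt")
    # Roughly 0.183 versus 0.179: a slim efficiency edge, bought with far more total power.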

To be fair, our GeForce GTX 480 was an engineering sample, and although several other reviewers I've discussed this with have experienced the same high power consumption, it's unclear if retail parts will be built from the same yield.

Editor's Opinion: Fermi GF100

NVIDIA heard the dinner bell ring many months ago when Microsoft introduced DirectX-11 alongside Windows 7, and they've been crawling to the table ever since. Details of the new NVIDIA Fermi GPU architecture were first leaked to the Web as early as September 2009, which makes exactly half a year between myth and reality. ATI helped set the table with their Radeon HD 5000 series, and even enjoyed some appetizers as a few DX11 games were released, but NVIDIA managed to take a seat just in time for supper. Unfortunately for NVIDIA, ATI showed up wearing its best Sunday blues, while Fermi's suit is still at the cleaners. None of this really matters though, because now NVIDIA can eat as much as they want.

My analogy plays out well when you consider the facts behind GF100 and the launch of NVIDIA's GeForce GTX470/480. AMD may not have launched with more than a few hundred full-fledged 40nm ATI Cypress-XT GPUs having all eighty texture units, but they made it to market first and created strong consumer demand for a limited supply of parts. NVIDIA decided on an alternate route, and binned their GPU yields based on streaming multiprocessors. The GF100 GPU is designed to have 16 streaming multiprocessors and 512 discrete cores, and while the Fermi architecture is still intact, there's one SMP disabled on the GeForce GTX 480, and two SMPs disabled on the GTX 470. The world has yet to see what the full 512 cores can accomplish, although NVIDIA is already revolutionizing the military with CUDA technology.

So now ATI and NVIDIA are even-steven in the race for DirectX-11, and all they need are video games to increase demand for their products. This becomes a real problem (for them both) because very few existing games demand any more graphical processing power than games demanded back in 2006. Video cards have certainly gotten bigger and faster, but video games have lacked fresh development. DirectX-10 helped the industry, but every step forward came with two steps back because of the dislike for Microsoft's Windows Vista O/S. Introduced with Windows 7 (and also available for Windows Vista with an update), DirectX-11 now gives enthusiasts added detail and special effects in their video games.

ASUS-ENGTX480_GeForce_GTX-480_Video_Card_Kit.jpg

ASUS ENGTX480/2DI/1536MD5 GeForce GTX 480 Graphics Kit

Even if you're only after raw gaming performance and have no real-world interest in CUDA, there's reason to appreciate the GF100 GPU. New enhancement products, such as the NVIDIA GeForce 3D Vision Gaming Kit, double the demands on frame rate output and hence require more powerful graphics processing. This is where products like the GeForce GTX470 and GTX480 deliver the performance necessary to enjoy the extended gaming experience. I'm a huge fan of GeForce 3D-Vision, which is why it's earned our Editor's Choice Award, and Fermi delivers the power necessary to drive up to three monitors. The newly dubbed NVIDIA 3D-Vision Surround (stereo) requires three 3D-Vision-capable LCD, projector, or DLP devices and offers bezel correction support. Alternatively, NVIDIA Surround (non-stereo) supports mixed displays with common resolution/timing.

Even some older game titles benefit from the Fermi GF100 GPU, beyond just an increase in frame rates. For example, Far Cry 2 will receive 32x CSAA functionality native to the game, and future NVIDIA Forceware driver updates could further add new features to existing co-developed video games. Additionally, NVIDIA NEXUS technology brings CPU and GPU code development together in Microsoft Visual Studio 2008 for a shared process timeline, and introduces the first hardware-based shader debugger. NVIDIA's GF100 is the first GPU to offer full C++ support, the programming language of choice among game developers.

Fermi isn't for everyone. Many of NVIDIA's add-in card partners (what they call AICs) have already built inventory of the GeForce GTX 480. On 12 April 2010, ASUS will reveal the ENGTX480/2DI/1536MD5 GeForce GTX 480 graphics card kit, which online retailers are expected to price at around $500 for the 90-C3CH90-W0UAY0KZ SKU. The ASUS ENGTX470/2DI/1280MD5 kit (GeForce GTX 470) loses only one (more) SMP, but the price for that 90-C3CHA0-X0UAY0KZ kit drops to $350. While based on nothing other than these two prices, it seems that a full 16-SMP 512-core version could receive the GeForce "GTX-490" name and a price tag around $650. Sure to be an expensive enthusiast product if and when it ever gets made, the GeForce "GTX-490" could keep company with the recently announced $1,130 Intel Core i7-980X 6-core CPU (BX80613I7980X).

Fermi is also the first GPU to support Error Correcting Code (ECC) based protection of data in memory. ECC was requested by GPU computing users to enhance data integrity in high-performance computing environments, and is a highly desired feature in areas such as medical imaging and large-scale cluster computing. Naturally occurring radiation can cause a bit stored in memory to be altered, resulting in a soft error. ECC technology detects and corrects single-bit soft errors before they affect the system. Fermi's register files, shared memories, L1 caches, L2 cache, and DRAM memory are ECC protected, making it not only the most powerful GPU for HPC applications, but also the most reliable. In addition, Fermi supports industry standards for checking data during transmission from chip to chip. All NVIDIA GPUs include support for the PCI Express standard for CRC check with retry at the data link layer. Fermi also supports the similar GDDR5 standard for CRC check with retry (aka "EDC") during transmission of data across the memory bus.
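
For readers unfamiliar with how single-bit correction works in principle, the classic Hamming code illustrates the idea: a few parity bits pinpoint the position of a flipped bit so it can be repaired on the fly. The toy Python sketch below demonstrates only the concept; it implies nothing about the actual ECC circuitry inside Fermi:

    # Toy Hamming(7,4) code: detects and corrects any single flipped bit.
    # Conceptual illustration only; NOT a model of Fermi's ECC hardware.
    def encode(d):  # d: four data bits
        p1 = d[0] ^ d[1] ^ d[3]
        p2 = d[0] ^ d[2] ^ d[3]
        p3 = d[1] ^ d[2] ^ d[3]
        return [p1, p2, d[0], p3, d[1], d[2], d[3]]  # codeword positions 1..7

    def correct(c):  # c: seven-bit codeword, possibly corrupted
        s1 = c[0] ^ c[2] ^ c[4] ^ c[6]
        s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
        s3 = c[3] ^ c[4] ^ c[5] ^ c[6]
        syndrome = s1 + (s2 << 1) + (s3 << 2)  # 1-based position of the error
        if syndrome:
            c[syndrome - 1] ^= 1  # flip the bad bit back
        return c

    word = encode([1, 0, 1, 1])
    word[4] ^= 1                 # simulate a radiation-induced soft error
    assert correct(word) == encode([1, 0, 1, 1])  # the error is repaired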

The true potential of NVIDIA's Fermi architecture has yet to be seen. Sure, we've already poked around at the inner workings for our NVIDIA GF100 GPU Fermi Graphics Architecture article, but there's so much more that goes untested. Heading into April 2010, only a private alpha version of the Folding@Home client is available. Work-unit performance on the GeForce GTX 480 is going to surpass ATI's Radeon HD 5870 without much struggle, but it's uncertain how much better the performance will be compared to the previous-generation GeForce GTX 285. Until the GeForce GTX470/480 appears on retail shelves, and until a mature GeForce 400-series WHQL Forceware driver is publicly available, many of the new technologies introduced here will remain untapped.

GeForce GTX-480 Conclusion

Although the rating and final score mentioned in this conclusion are made to be as objective as possible, please be advised that every author perceives these factors differently at various points in time. While we each do our best to ensure that all aspects of the product are considered, there are oftentimes unforeseen market conditions and manufacturer changes that occur after publication and could render our rating obsolete. Please do not base any purchase solely on our conclusion, as it represents our product rating for the sample received, which may differ from retail versions. Benchmark Reviews begins our conclusion with a short summary for each of the areas that we rate.

Our performance rating considers how effectively the GeForce GTX480 DirectX-11 video card performs against competing products from both ATI and NVIDIA. While it's not easy to nail down exact ratios because of driver and game optimizations, the GeForce GTX480 consistently outperformed the ATI Radeon HD5870 and establishes itself as the most powerful single-unit graphics card available. Tested with the unbiased 3DMark Vantage DX10 benchmark, the GeForce GTX480 improves upon the GTX285 by nearly 61% at 1920x1200, and outperforms the Radeon HD5870 by 10%. When BattleForge calls high-strain SSAO into action, NVIDIA's GTX480 demonstrates how well Fermi is suited for DX11... improving upon the GeForce GTX285 by nearly 249% while trumping ATI's best single-GPU Radeon HD5870 by 61%. The GeForce GTX480 also proved itself a worthy adversary to the dual-GPU ATI Radeon HD5970, beating it in our Resident Evil 5, Far Cry 2, and BattleForge tests.

ASUS-ENGTX480_GeForce_GTX-480_Video_Card.jpg

In regard to product appearance, there's no doubt that NVIDIA's GeForce GTX 480 will be referred to as the 'toughest' video card they've produced. The exposed heatsink and heat-pipes stand out like exposed exhaust tips on a hot rod, and AICs (such as ASUS) have already decorated the plastic shroud with faux carbon-fiber finishes. It remains to be seen how creative the add-in card partners will be, but variety is guaranteed.

Riding the bleeding edge of technology, NVIDIA has built their GeForce GTX 480 (and likely the GTX 470, too) with solid construction. I'm always concerned about exposed electronics, so it surprises me that they didn't integrate a metal back-plate to act as a heatsink and guard for the PCB components. The top side of the graphics card features an inlaid heatsink and four exposed heat-pipe rods, with one more tucked inside the shroud; all of which get extremely hot. The Fermi GF100 GPU has been moved forward (towards the exhaust vents) by one inch compared to the GTX 285, which allows the memory and power components to receive optimal cooling first.

While most consumers buy a discrete graphics card for the sole purpose of playing PC video games, there's a small niche who expect extra features beyond fast video frame rates. NVIDIA is the market leader in GPGPU functionality, and it's no surprise to see CPU-level technology available in their GPU products. Fermi GF100 is the first GPU to ever support Error Correcting Code (ECC), a feature that benefits both personal and professional users. Proprietary technologies such as NVIDIA Parallel DataCache and NVIDIA GigaThread Engine further add value to GPGPU functionality.

The NVIDIA GeForce GTX 480 video card will be officially available from retailers on 12 April 2010 or sooner. NVIDIA suggests a retail price of $500 for the GeForce GTX 480, and $350 for the GTX 470. Priced at $499.99 each, NewEgg currently lists: the Zotac ZT-40101-10P, ASUS ENGTX480, EVGA 015-P3-1480-AR, MSI N480GTX-M2D15, Gigabyte GV-N480D5-15I-B, and even the PNY VCGGTX480XPB. XFX and BFG are not planning to co-brand the GeForce GTX 480, and are also the only manufacturers to offer a lifetime product warranty.

In terms of value, there are several ways to look at the GeForce GTX 480 and compare it to its closest rival: the $420 ATI Radeon HD 5870. Some analysts take heat and power into consideration, but for this illustration we'll use only the FPS performance from our tests. Based on the in-game DirectX-11 BattleForge benchmark, gamers who play at 1680x1050 will pay $6.79 per frame of performance with the GeForce GTX 480, as opposed to $9.29 per frame with the Radeon HD 5870. Playing single-player Battlefield: Bad Company 2 at 1920x1200 with maximum settings, you can expect to pay $6.59 per frame with the GeForce GTX 480, or $6.77 with the Radeon HD 5870. Far Cry 2 costs $6.38 per frame on the GeForce GTX 480, while the ATI Radeon HD 5970 costs $8.05 per frame. The GeForce GTX 480 may cost more up front, but it delivers more for the money with regard to video game performance.
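
The cost-per-frame figures above are nothing more than the card's street price divided by its benchmark frame rate. A short Python sketch of the arithmetic, with the BattleForge frame rates back-computed from the prices and dollars-per-frame quoted above:

    # Cost per frame = card price / benchmark FPS.
    def dollars_per_frame(price, fps):
        return price / fps

    # BattleForge DX11 at 1680x1050 (FPS derived from the quoted figures):
    print(round(dollars_per_frame(499.99, 73.6), 2))  # GeForce GTX 480 -> 6.79
    print(round(dollars_per_frame(420.00, 45.2), 2))  # Radeon HD 5870  -> 9.29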

In summary, NVIDIA's GF100 Fermi GPU delivers more than just a giant boost to video frame rates over the previous generation; it also delivers GPGPU functionality that is usable inside and outside of video games. Performance increases over the GeForce GTX 285 were dramatic, often approaching 300% improvements. Comparing only PC video game frame rate performance between the GTX480 and Radeon HD5870 still puts the 480-core GF100 GPU decisively ahead of the competition by 22-98% in DX10 tests, and 12-63% in DX11. As a whole, NVIDIA video cards generally performed better on the most popular video game titles, and there were at least two games where the GeForce GTX480 performed better than ATI's dual-GPU Radeon HD5970. There are also added PhysX and DirectX-11 enhancements that help tip the cup well into NVIDIA's favor in terms of user experience. Conclusion: NVIDIA is back on top again, right where most gamers like to see them.

EDITOR'S NOTE: Since testing NVIDIA's engineering sample for this article, we've received retail GeForce GTX-480 products that perform the same but require less power and produce less heat and noise. Read more in our Zotac GeForce GTX-480 Fermi Video Card review, which also includes SLI performance results.

Pros:

+ Fastest single-unit DX11 graphics accelerator available
+ Best price-to-performance cost ratio
+ GF100 Introduces Error Correcting Code (ECC)
+ Outstanding performance for ultra high-end games
+ Fan exhausts all heated air outside of case
+ Includes native HDMI audio/video output
+ Adds 32x CSAA post-processing detail
+ Supports triple-SLI functionality

Cons:

- Consumes 52W at idle and 370W under full load
- Extremely hot under normal operation
- Very expensive premium-level product
- Cooling fan is loud under load

Benchmark Reviews encourages you to leave comments (below), or ask questions and join the discussion in our Forum.



Comments 

 
# Bummer! - Robert 2010-03-26 15:41
It offers some great frame speeds but at a huge cost in power and temperatures. That's a total bummer, both for casual and non-casual gamers alike! =(
 
# RE: Bummer! - SiliconDoc 2010-06-21 20:45
So you expect casual gamers to buy a $499 video card? You expect non-casual gamers with their massive systems to worry about:
1. Power usage, when they inevitably use a lot of power-sucking add-ons - printers, joysticks, and multiple monitors?
HOW STUPID does a noob have to be to whine about power from one component? PRETTY STUPID.
2. Fan noise... as they crank their speakers to maximum, enjoy the 3D sound environment, or have their headphones on while they scream into the mic about who they just fragged to their teammates.
3. Heat... after years of reviews and building systems and selling, not once have I seen some idiot post that any card they were using expired from H E A T.
It's always some notional crazed ATI fanboy geek preening about saving the earth, or wailing about less electricity cost than they jam down their throats in Cheetos and soda in a single frag session.
AVOID high-end cards, and get yer $299 netbook tweaked for Pogo online.
 
# HEAT - Dr.Unk 2010-06-22 05:01
I don't know how many computers you work with, but I have worked with several computers that have fried graphics cards... because of heat. To be fair, there have been 2 NV cards and 2 ATI cards. From NV I had an 8800 GT die, which I blame Dell for because they had no ventilation for the poor thing. The other NV card was a 6600 that had the fan wire get cut somehow so the fan didn't spin. The ATI cards had no excuse. They just straight up fried. They both had good ventilation and no other problems to speak of. So, heat can fry a card, but NV cards seem to handle it a lot better in my experience.
 
# You're a bit naff - Aaron 2010-06-26 07:13
I've had two ATI 4870x2 cards fry in three months, because of heat issues... all whilst running at default clocks in a well ventilated case. The bloody things just can't handle very high resolutions at very high settings and, rather than degrade gracefully, they go tits up. These cards, from both vendors, are NOT made well and you bloody well should care about heat.
 
# opinion - john 2010-03-26 16:18
just read the whole review, just wanted to say great work
 
# Go back 3 spaces, try again - David Noneya 2010-03-26 16:39
Wow, it looks like the Enquirer, as biased as it could be, is right: power hungry, hot, and a fan that can take 1.8 amps (high-end fans are .8 amps), but it does take a good fan to cool a power-hungry toaster. I own a GTX 295 and I won't be swapping it for Fermi; the gains are too poor, and heat and power usage are bad. NVIDIA, I'm sure, is designing its replacement now, and it won't be Fermi-related. On a positive note, nice try.
 
# Blazing fast - TomS 2010-03-26 17:16
Holy smokes this card is blazing fast. This thing with just 1 GPU smokes most of ATI's flagship 5970 dual-GPU scores! It's also supposed to be up to 2.5X faster in tessellation than ATI's latest GPU. Can't wait to see in-game characters like the Warcraft promo movies. Woo Hoo!!!!
 
# Blazing Hot - William 2010-03-27 21:23
TomS,

I don't see "blazing fast" anywhere, but I see "blazing hot" as a better descriptor. In games the 5970 is still killing it, and using about 70 watts less power. I would have to choose a 5970 over this new nVidia card at the moment. We waited, and waited, and all we got was a campfire that needs a lot of wood to burn.
 
# Objectively - J.M 2010-05-12 15:11
If you look at it objectively, it is pretty hot. However, it is doing pretty well as a single GPU against a dual GPU like the 5970. It is also priced between the 5870 and the 5970, which is about right for the amount of performance you see here. Also, you can see that it overtakes the 5970 in at least one of the games. This is something you don't see the 5870 doing at all.

The heat is pretty bad, however. Nvidia says that they tested the GPU at that load heat extensively.

The other problem the 5970 has is microstuttering. They've done an analysis that shows spikes even when vsync is turned on. Aside from heat and power consumption, the 480 is a pretty decent choice in gaming hardware. People who've left reviews of it on Newegg have said that it isn't as loud as people make it out to be.
 
# ES vs Retail - Olin Coles 2010-05-12 15:20
The media received engineering samples, which run hotter than final retail versions of the GTX 480. Retail products consume a little less power, operate slightly cooler, and make less noise. See here: /index.php?option=com_content&task=view&id=518&Itemid=72
 
# PowerUser! - nt300 2010-03-28 08:18
What the, R U joking? Why not try reading the whole review before making false claims like that. The GTX 480 is nowhere near the performance of the HD 5970, according to 15 internet review sites.
 
# RE: PowerUser! - nt300 2010-03-28 08:19
Whoops, I posted the wrong reply to the wrong person. Sorry...
 
# RE: RE: PowerUser! - nt300 2010-03-28 08:20
No, I got it right; I was referring to TomS's crappy post lol.
 
# RE: NVIDIA GeForce GTX 480 Fermi Video Card - Justin 2010-03-26 22:06
An excellent review Olin. To be honest, I hoped for a little more. The performance increase is welcome, but admittedly expected. What isn't, however, are the temps and power consumption. Granted 1000W+ PSUs are cheap enough today, but still 700W+ for a single card system is ridiculous. I'm looking forward to seeing how GTX470 stacks up against HD5850. Keep up the good work!
 
# 470 vs 5850 - Sean 2010-03-27 11:43
Saw a review for those two and the 5850 came out on top for everything.
 
# Sure it did. - SiliconDoc 2010-04-02 22:35
In your red rooster dreams.
With those two the ati card gets stomped on every level, and the 470 beats the reds bigger brother often enough.
All the bleating red roosters won't stop the awesome NV sales numbers.
For now, it's red rooster cry and lie time.
Stomp the three toed chicken feet, and bleat like a mad modder, and all for nought, the SUPERSTOMP has already occurred, and no amount of red rooster bs can change that. (you probably believe yourself to keep from weeping)
 
# Fanboy time :D - Dr.Unk 2010-04-06 14:58
I think what NV did is great besides the heat. What I don't think is great is fanboy time and calling ATI fans names. That's below any person with any form of respect.
 
# You misunderstood me... - Dr.Unk 2010-06-21 12:36
SiliconDoc, you misunderstood. I did not say you have no form of respect; I was just saying that the name calling thing is so below you. You seem smart... very passionate about this stuff, but smart. I understand the angry reaction to someone not telling the whole story, but surely you can see the error in reacting just as biased, opposite to what the other person believes. It then just becomes an arms race of BS, and since you can talk above many people, you think you've won when really, the numbers speak for themselves.
 
# PassMark software??? - UNknown 2013-06-04 08:55
Has anyone heard of PassMark? The Radeon HD 5970 has a score of 2633, while the GeForce GTX 480 has a score of 4333??? Outperforms???? LOL, you cannot just go off processor speed, noob.
 
# awesome - dk 2010-03-27 06:11
would love to see a benchmark comparing CUDA encoding with MediaCoder
 
# Power consumption - rarchimedes 2010-03-27 06:36
I believe there is a possible error in computation of the power usage for the card. If the CPU is at max TDP when the GPU is at max TDP, that would add quite a bit of power usage to the CPU, which has a TDP of someplace between 120-130 watts.
 
# Power is correct - Olin Coles 2010-03-27 13:49
rarchimedes: You are wrong. Why would CPU usage be at maximum TDP when FurMark is a GPU-dependent test? For reference, our tests showed that only one out of eight CPU threads was even utilized by this tool. Furthermore, the same tool and system have been used to test power consumption on all video cards.

You should go test for yourself before trying to discredit somebody else's hard work.
 
# Out of whack - Merty_boy 2010-03-28 05:00
This review is at massive odds with other sites. Best performance-to-price ratio? Hmmm, methinks not. Most people are gonna have to factor the cost of a PSU into the upgrade too (and it'll have to be a good one).
 
# POWER USER! - nt300 2010-03-28 08:43
Here's the problem with the GTX 480, just to name a few (the 470 is no better):
- Always hits 97C regardless of case cooling when gaming.
- Super LOUD fan (65dB +/- 2dB).
- Never buy without a lifetime warranty (source: AnandTech).
- 10% to 15% faster in some cases vs. the HD 5870 (source: 15+ review sites).
- Price/performance stinks beyond belief (source: 15+ review sites).
- 250W is complete BS, it's 300W+, NVIDIA lied (source: Tom's Hardware).
- NVIDIA's Fermi GTX480 is broken, unfixable, hot, slow, late and un-manufacturable.
 
# RE: NVIDIA GeForce GTX 480 Fermi Video Card - noob use 2010-03-29 09:10
u say nvidia didnt do a thing... r u a noob???????

this card has a single gpu and the difference between it and the 5970 (2 gpu)
is only 4 to 8 frames, and the nvidia is a single, u noob :S
 
# Dot - Meh 2010-03-29 14:02
Sure it's a single GPU, but it comes at a double-GPU price. Nvidia loves charging premiums and ripping off its fan base.
 
# So disappointed - jons 2010-03-29 12:52
what a joke.
nvidia can shove this up their ass. what's the point of this failure card?
wow, it's a little faster than the hd5970.
nowhere near fast enough. it's sad
 
# RE: So disappointed - Mike M 2010-03-30 09:54
You're correct, but the HD 5970 is the fastest graphics card out right now, so I don't know why NVIDIA fanboys are making #s up. The GTX 480 is nowhere near the speed of the HD 5970, and NVIDIA will not be coming out with a dual-Fermi card because it's too bloody f**ing hot.
 
# Tier 3 tech - UNknown 2013-06-04 08:51
Yes, the GTX 480 runs hot, but not as hot as the GTX 260/280, and as of 4 June 2013 the GTX 480 falls in the top 15 fastest cards on the planet and even outperforms the GTX 560 Ti. You little noobs out there know SH!%. You ATI punks need to know that Intel and NVIDIA will always outperform AMD/ATI (known now as AMD); even to this day there are only four AMD cards that are faster than the 480, and ten NVIDIA cards.
 
# nothing - nick 2010-03-31 01:09
yeah, the 5970 is a dual-GPU card; wait for nvidia to release a dual-GPU card and then compare it.
 
# RE: nothing - Mike M 2010-03-31 05:50
Not correct, it all has to do with price/performance regardless of whether it's dual-GPU or not. I don't see a dual-GPU Fermi anytime soon; can you imagine the amount of power that sucker would need, let alone the heat output? NVIDIA won't do such a thing, and the 470 is no better; that too runs way too hot and sucks back power like there's no tomorrow.
 
# *ARTICLE UPDATE* - Olin Coles 2010-03-29 14:05
Priced at $499.99 each, NewEgg currently lists: the ASUS ENGTX480, EVGA 015-P3-1480-AR, MSI N480GTX-M2D15, Gigabyte GV-N480D5-15I-B, Zotac ZT-40101-10P and even the PNY VCGGTX480XPB. (Links are on conclusion page)

Perhaps by coincidence XFX and BFG are not listing the GeForce GTX 480 at NewEgg; and are the only manufacturers to offer a lifetime product warranty.
 
# RE: *ARTICLE UPDATE* - Mike M 2010-03-30 09:52
No, the reason why XFX and BFG are not listing them is because they can't get any of them. Fermi is in super tight supply. It's bad enough NVIDIA is losing money on every one they sell. OEMs don't even want them; they are too hot, 95C+. WTF.
 
# Fanboys = dillweed - James.Q 2010-05-12 15:24
Why don't you try verifying your comments before you post them instead of looking like a dillweed online?

Check here:
#digitalstormonline.com/comploadsaved.asp?id=395218
 
# REPLY - Dino 2010-03-29 20:15
Hey! The GTX480 seems to be slower than the ATI 5970; even though it is a single GPU, you must imagine the SPEED of the card. Why should I buy this card when it is not the fastest?

YOU MUST BUY A V-CARD by its speed, not by its MONO-GPU property, as NVIDIA said before with their "Two is Better than One" SLI motto.
 
# REPLY - Dino 2010-03-29 20:26
NVIDIA must make a dual-GPU GTX480 to compete with the ATI 5970.

The merger of AMD and ATI gives ATI a larger R&D division and independent fabs, while NVIDIA uses fabs from TSMC to produce GPUs.

ATI before was a buggy video card manufacturer, making color shifts on my monitor, and is now becoming a hard competitor of NVIDIA.
 
# REPLY - Dino 2010-03-29 20:30
ATI will soon produce 22nm GPUs once the new 22nm process is done.

Another comment:
Intel must buy NVIDIA to beat the one hard competitor, AMD.
 
# GTX 480 sucks Big Time!!! - Mike M 2010-03-30 09:49
This piece of junk can't render my #ing balls let alone a video game. Fermi is broken; why in hell would you want a card that runs at more than 70C in idle mode, sounds like a jet engine, and sucks back more power than a #ing nuclear power plant? No thank you, I just can't wait for ATI's refresh to further slaughter this pile of NVIDIA junk.

Anybody buying this piece of garbage is a pure NVIDIA fanboy LOL
 
# ATI's spooge in your eyes - R7 2010-05-12 15:28
Seriously, did you even try to look at the actual information? It's 50C idle, not 70. It beat the 5970 on one game and is close to it on other games. It practically beat the 5870 on all games.

People who buy it want the fastest single GPU but will have to factor in heat and power consumption. They wouldn't be fanboys, as only fanboys buy from one company and ignore facts so they can diss the competition. Like what you did.
 
# 480 - trajan 2010-03-30 14:40
Great review! I've read several that show it smoking the 5870, and it's only going to get better and faster as the year goes on. Great new tech, great image quality, great extras, great driver team. By the end of the year Fermi 2 will crush ATI like a bug under a tire.
 
# RE: NVIDIA GeForce GTX 480 Fermi Video Card - sülo 2010-03-30 16:36
The GTX 480 delivers nearly the same performance, if not worse, as the HD 5870, while it's nearly 2x hotter/louder and consumes 170W more power.
 
# What review did you read? - Olin Coles 2010-03-30 17:15
"GTX 480 delivers nearly the same performance if not worse as HD 5870 while it's nearly 2x hotter/louder and consumes 170W more power"

Based on what? It wasn't this article, because the GTX 480 beat the Radeon HD5870 by an average of 25%.
 
# 480gtx vs HD5970 - Alex 2010-03-30 17:20
If you pay $750+ for an HD 5970, don't you think it should be better than a $500 480gtx? You should also consider that the 480gtx is a more recent product than the 5970, so with new drivers it should get better, but that won't help the power consumption or the heat coming out of it. Remember that the 480gtx has a single memory interface, not dual like the 5970, and anything that is dual should be better when comparing recent products. So basically anyone buying a 5970 would be paying $200+ over a 480gtx, so I don't know why people complain, when obviously if you pay more for something it should be better in most cases.

480gtx: single 1.5GB memory; Core Clock 725MHz; Memory Interface 384-bit
HD5970: basically dual 1GB (2GB) memory; Core Clock 725MHz; Memory Interface (256 x 2)-bit
 
# RE: 480gtx vs HD5970 - nt300 2010-03-31 18:22
It is; the HD 5970 runs circles around the GTX 480. Don't know what you've been smoking, but ATI is still king right now. It all has to do with price/performance, and currently both the HD 5970 and GTX 480 are selling for RIP-OFF prices! But at least the HD 5970 runs cooler, sucks back a lot less power, and is much quieter than that power-plant-sucking GTX 480...
 
# RE: RE: 480gtx vs HD5970 - Alex 2010-04-01 10:55
That's exactly why you are paying $250 more for the 5970: price/performance.
$250 for more memory, less heat, less noise, less power consumption.
 
# RE: RE: RE: 480gtx vs HD5970 - Alex 2010-04-01 10:58
Just to add to my last post: do you think it's worth it to pay $250 just for that? I'd rather stay with my 295gtx than upgrade at the moment.
 
# RE: NVIDIA GeForce GTX 480 Fermi Video Card - nick 2010-03-31 01:19
To me, 93°C under full load is too hot - it was mentioned in the review that it reached 100°C; if the threshold is 105°C, wtf is going on? It won't last long under those temperatures. Another thing: what do you need all those frame rates for? Future-proofing? # that, I'd save my $1000au and get something cheaper, because in a year or two this card won't be as good as it seems and it will no longer be "future proof". The frame rates a gtx285 gives are more than enough for now. My advice, depending on your current card, is to save your money: either get something cheaper, or wait a while before you upgrade - if you're in no rush to do so.

my gtx 285 will do the job for a while to come; to me, when I'm running at 50fps+, why on earth would I need to run 90% faster? It's a waste of money.
 
# RE: NVIDIA GeForce GTX 480 Fermi Video Card - nick 2010-03-31 01:20
just to add to my last post: it reached 100°C in a well-ventilated case...
 
# RE: RE: NVIDIA GeForce GTX 480 Fermi Video Card - nt300 2010-03-31 18:19
Hope you got a lifetime warranty on that BBQ 480 card lol; that is nuts, 100C WOW...
 
# RE: NVIDIA GeForce GTX 480 Fermi Video Card - nick 2010-03-31 01:28
what do you mean it improves over the gtx285 often by over 300%? wtf are they on about? it's like 90% in most cases...
 
# RE: NVIDIA GeForce GTX 480 Fermi Video Card - ftw 2010-03-31 20:44
they did it again NVIDIA FTW
 
# RE: RE: NVIDIA GeForce GTX 480 Fermi Video Card - Alex Jones 2010-04-01 12:43
Yes, this time Nvidia lost the round. Maybe in 2011 they will come back, but right now they've lost.
 
# ATI Radeon HD 6870 out for Q1 2011 - Nick 2010-04-09 22:08
I've heard AMD plans on releasing its new GPU design, built from the ground up, sometime in Q1 2011: code name Northern Islands, based on a 28nm process. There may be a few early releases for Q4 2010. This is all according to an AMD engineer. So NVIDIA better get busy with Fermi and get it fixed, because whispers state that ATI's next-gen card is going to be between 75% and 150% faster than its current HD 5800 cards.
 
# ATI Hybrid Code Name: Southern Islands - ATI Boy!!! 2010-04-05 18:39
Fermi 1 & 2 Killer!!!
Despite the 40nm process, three key features that will be based on Evergreen have been put on steroids. Please observe below:

- 40nm Process
- 2400 Stream Processing Units (1)
- 256 Texture Address Units (2)
- 128 ROPs (Rasterization Operator Units) (3)

AMD plans to differentiate this refresh line from the past HD 4890 naming scheme due to its architectural changes, so don't expect the name HD 5890.

Though we will see an improvement in tessellation, it's not highly emphasised in Southern Islands (SI). You can look for this feature being greatly improved in code name Northern Islands.

The ATI code name 'Hecatoncheires' is something that will feature a series of technological advancements never heard of in the GPU industry. You will never find any information about this special architecture due to it being extremely top secret.
 
# U have any proof of this info? - Nick 2010-04-09 22:09
Do you have a link or something? It's hard to believe somebody from ATI would post this much detailed info, let alone any info on a forum about upcoming graphics cards lol.
 
# Jim 2010-04-11 10:16
Can't help it; somehow this card has the same stink as the 5800 Ultra:
overpriced, too hot, too loud, too late.
NVIDIA has definitely been spanked this time!

In Australia the GTX 480 is currently retailing for around $900,
yet the 5970 is the same price. I know which card I'll be buying.
 
# RE: NVIDIA GeForce GTX 480 Fermi Video Card - sholich 2010-04-14 07:34
first: nvidia never puts dual gtx480s on one board.
second: needs 370 watts! boom, power.
third: price.

I think the AMD HD 5870 is the winner in single GPU. But if CUDA implementing C++ is the +++, what do u think?
 
# Seems great for CUDA - JJA 2010-04-19 19:52
This card is too hot and takes too much power, but I guess it gets its calculations RIGHT and FAST. So people who are not gamers but need this will love it. For us gamers, who actually use huge-processing graphics cards only for games... too hot, too hungry, too expensive.

But I am actually interested in what the GTX 470 is capable of. Maybe it will win the single-GPU battle against the Radeon. And if it doesn't get that hot, it may be good candy for gamers.
 
# It's the best - Ryan 2010-04-29 01:17
Forget the power and heat "issues"; it will be fine even with normal case cooling, just don't OC (EVGA makes an OC edition and a water-cooled one; get these if you want to OC). It is the fastest GPU in the world, plus you get PhysX and Surround 3D that's just as good as anything ATI has. Using more than 3 screens for gaming is stupid; I've seen 6 screens for WoW and the character model was ******. NOT ALL games support SLI (WoW for instance), so 1 GPU is better than 2 in my opinion, and this will destroy the 5970 when it gets new drivers and games are made with its architecture in mind :)
 
# RE: NVIDIA GeForce GTX 480 Fermi Video Card - woop 2010-04-30 02:25
ah, good card when they bring its proposed 22nm chips; shame it'll be nearer 2010/2011 when they are brought out, but I'm sure I can live with a 5850 for now. Might try Nvidia again if the card becomes more efficient.
 
# Awesomeness - Crysist 2010-05-07 09:40
After reading this article, I went straight to the shop and grabbed a GTX 480 OC from EVGA. Compared to my old GTX 260, it does look wayyyy better on graphics, smoother... but paying 600 bucks and a bigger power bill are what I still worry about. Asking myself the question, "Do I want it, or do I need it?"
 
# Meh - Tantrumus 2010-05-12 06:57
All I saw in that article is that my GTX295 is staying in my system for another year at least.

Fermi can lick my bawlz... I'm leaning more and more towards ATI, and I have an instinct that another release is coming on their front.

I've noticed that most nVidia fanboys are ridiculously uneducated with their rants... Fermi is not a good release. Limited supply, way too hot, too much power consumption, and barely faster than the 5870, which is the king as far as I am concerned from a price/power/performance perspective.

I used to be an nVidia fanboy, but I think this war is goin in the other direction.
 
# ES vs Retail - Olin Coles 2010-05-12 07:08
The media received engineering samples, which have been running hotter than retail versions of the GTX 480. They are also louder, and consume a little more power. See here: /index.php?option=com_content&task=view&id=518&Itemid=72
 
# You are a fanboy - R7 2010-05-12 16:00
Point out the nvidia fanboy rants here that aren't factual.

From what I've seen, the first thing you'll notice at review sites that allow comments is the ATI regular anti-fermi comments regardless of what the review shows.

If the 480 shows a 4 fps difference from the 5970 -- ATI fanboy comment is "The 5970 totally beat the 480. Wasn't even close!"

If nothing else "The 480 is hot enough to boil water!"

Scroll up and you'll see a guy say that the 480's idle temp is 70c.

As for price/performance.. Check this out.

(FPS from all benches)
GTX 480

95.2, 82.5, 73.6, 62, 41, 33, 92.7, 78.4, 108, 118, 35.5, 32.5, 48.8, 45

AVG: 67.6
Price per performance: $7.40

HD 5870

53.6, 45.1, 29.3, 22.9, 33, 29, 68, 57.6, 90, 83, 29.3, 26.9, 29.3, 27.4

AVG: 43
Price per performance: $9.30

How's that for education?
 
# Always been with ATI - jlei8a 2010-05-23 12:44
haha, I've never been an nvidia fan; even when my friends were using the latest nvidia cards 5-7 years ago, I only used ati cards: way cheaper, and gave me enough juice to run games.
 
# RE: NVIDIA GeForce GTX 480 Fermi Video Card - Ryan 2010-06-30 22:59
After some consideration, I'd say this card is actually a very good performer. Until recently, I thought ATI finally took the top again with the 5970, but then I discovered that it was a dual-GPU... I hate those. As a previous owner of a GeForce 9800GX2 and having some experience with SLI setups, I will never again purchase anything with SLI/Crossfire/Dual-GPU/Etc. There is no consistency, one game runs super fast, and other 3D applications are slower than a low-end single GPU. Micro-stuttering, driver issues, memory sharing, just way too many issues to even deal with. Just go for the most raw power in a true, single-GPU. The only way it would work is if you have enough cash to buy two or three of these and make it work. That way you still have a powerful single card when the SLI craps out on you.
 
# Wow, Cracked - Setsunayaki 2010-07-09 18:08
The card itself does NOT consume 370W under full load. The 8-pin PEG amounts to 150W, the 6-pin PEG amounts to 75W, and the PCI-Express slot can supply 75W, for a total of 300W. The idle consumption of 52W is actually amazing, because it is lower than some ATI cards. What you forgot to test is what it takes for this card to enter LOAD STATUS. I almost don't see this card hit 1% for anything, but ATI cards enter load status very frequently, so they don't use IDLE consumption as much.

One thing you forgot to mention is that if you are a casual gamer and want to break free of Windows... a GTX 480 actually delivers at least 60 FPS in practically every game run under Linux + Wine. ATI cards won't even boot games on Linux due to how bad their drivers are...

I own more ATI cards than Nvidia cards; thanks to Nvidia, I was able to move to Linux as my main OS and retain Windows for only the heaviest of games.
 
# I got it - Rob 2010-08-03 18:42
So much I want to complain about... lol

Both the 5970 and 480 are amazing cards. My 480, heavily overclocked, never hits 91C+ on load and sits around 70C idle. So yeah, it's hot; just get an air conditioner in the summer and you're fine. It doesn't consume much power for what it is, I think. My X6 1055T consumes just as much power on load.

And technically, I'm neutral with both vendors, but the 480 is stronger TECHNICALLY. It's 1 GPU vs the 5970, which is 2 GPUs, and the 480 is trailing just behind it.

So far what I have seen, though, is the 5970 plays DX10 games and under better, while the 480 is known to play DX11 games better. So both good and bad.
 
# rtrd fanboys - mRemembaStalin 2010-09-16 14:09
I've seen maybe 70% of the comments stating the greens or the reds must be crushed/killed/disappear.
What you don't understand, my dear retard troll idiots from both sides: THE WORST THING THAT COULD HAPPEN IS A MONOPOLY, no matter who wins. Geez, what a bunch of lobotomised stupid mofos!

Nice review, good and hard work. Biased in the end, but somehow understandable.