ASUS GeForce GTX 460 SLI Performance
Reviews - Featured Reviews: Video Cards
Written by Olin Coles   
Monday, 02 August 2010

GeForce GTX 460 SLI Performance

ASUS ENGTX460 Makes Argument for Single and SLI Upgrades

When Benchmark Reviews first tested the GeForce GTX 460 video card, we knew that NVIDIA had a winner on their hands. For the mid-range price of $200, gamers can enjoy high-speed frame rates and PhysX enhancements. After some additional investigation, it became clear that the card offered more than a decisive win over the rival ATI Radeon HD 5830; it also surpassed the more expensive GeForce GTX 465. Several enthusiast video cards cost more than the GTX 460, which makes the case for combining two of them into a $400 SLI set. At that price the GTX 460 SLI set competes directly against the ATI Radeon HD 5870, yet could potentially deliver more performance than ATI's Radeon HD 5970. In this article, Benchmark Reviews compares single and SLI performance for the 768MB GTX 460 against the entire enthusiast graphics card segment.

The ASUS ENGTX460/2DI/768MD5 GeForce GTX 460 768MB-GDDR5 graphics card empowers DirectX-11 video games to deliver unmatched geometric realism at the $200 price point. Based on the same Fermi architecture that powers their high-end GeForce GTX 480 model, the ASUS GeForce GTX 460 delivers mid-range performance for gamers on a budget. The GeForce GTX 460 comes armed with NVIDIA's GF104 Fermi graphics processor, and packs seven Streaming Multiprocessors for a total of 336 CUDA Cores and 56 Texture Units. This article will demonstrate 3D video game frame rate performance on a single 768MB ASUS GeForce GTX 460 video card as well as two cards in SLI, and compare the results against the most competitive graphics products in the segment.

PC video games are still the best way to experience realistic effects and immerse yourself in the battle. Consoles do their part, but only high-precision video cards offer the sharp clarity and definition needed to enjoy detailed graphics. Thanks to the new GF104 GPU, the GeForce GTX 460 has plenty of headroom for overclockers to drive out additional FPS performance, while keeping temperatures cool. In this article, Benchmark Reviews tests the GeForce GTX 460 against some of the best video cards within the price segment by using several of the most demanding PC video game titles and benchmark software available: Aliens vs Predator, Battlefield: Bad Company 2, BattleForge, Crysis Warhead, Far Cry 2, Resident Evil 5, and Metro 2033.

ASUS-ENGTX460-Video-Card-SLI.jpg

ASUS GeForce GTX460 Video Cards in SLI

At the center of every new technology is purpose, and NVIDIA has designed their Fermi GF104 GPU with an end-goal of redefining the video game experience through significant graphics processor innovations. Disruptive technology often changes the way users interact with computers, and the GeForce GTX 460 family of video cards is a set of complex tools built to arrive at one simple destination: immersive entertainment, especially when paired with NVIDIA GeForce 3D Vision. The experience is further improved with NVIDIA System Tools software, which includes NVIDIA Performance Group for GPU overclocking and NVIDIA System Monitor for displaying real-time temperatures. These tools help gamers and overclockers get the most out of their investment.

It used to be that PC video games such as Crysis and Far Cry 2 were as demanding as you could get, but that was all before DirectX-11 brought tessellation to the forefront of graphics. DX11 adds heavy particle and turbulence effects to video games, and titles such as Metro 2033 demand the most powerful graphics processing available. NVIDIA's GF100 GPU was their first graphics processor to support DirectX-11 features such as tessellation and DirectCompute, and the GeForce GTX 400-series offers an excellent combination of performance and value for games like Battlefield: Bad Company 2 or BattleForge.

Manufacturer: ASUSTek Computer, Inc.
Product Name: ASUS GeForce GTX-460
Model Number: ENGTX460/2DI/768MD5
Price As Tested:$199.99

Full Disclosure: The product sample used in this article has been provided by ASUS.

NVIDIA Fermi Features

In today's complex graphics, tessellation offers the means to store massive amounts of coarse geometry with expand-on-demand functionality. In the NVIDIA GF104 GPU (GF100 series), tessellation also enables more complex animations. In terms of model scalability, dynamic Level of Detail (LOD) allows quality and performance to be traded off, favoring picture quality whenever it can be delivered without a performance penalty. Built from three layers (original geometry, tessellation geometry, and displacement map), the final product carries far more real shape and shading detail than if it were constructed with bump-map technology. In plain terms, tessellation produces actual peaks and valleys with shadow detail in between, while the previous-generation technique (bump-mapping) only gives the illusion of detail.
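To make the expand-on-demand idea concrete, here is a minimal Python sketch of the concept (our own illustration, not NVIDIA's pipeline): a coarse set of control points is repeatedly subdivided, and the new vertices are then offset by a displacement map. A bump map would leave the geometry flat and only fake the lighting.

```python
# Illustrative sketch only: real hardware tessellation operates on 3-D patches,
# but the data-expansion idea is the same in one dimension.

def subdivide(points, levels=3):
    """Expand coarse geometry on demand by repeatedly inserting midpoints."""
    for _ in range(levels):
        refined = []
        for a, b in zip(points, points[1:]):
            refined += [a, (a + b) / 2.0]
        refined.append(points[-1])
        points = refined
    return points

def displace(points, height_map):
    """Push each refined vertex out by a sampled height (displacement mapping)."""
    n = max(len(points) - 1, 1)
    return [p + height_map(i / n) for i, p in enumerate(points)]

coarse = [0.0, 0.0, 0.0, 0.0]                             # compact mesh sent by the game
fine = subdivide(coarse)                                  # tessellator expands it
detailed = displace(fine, lambda u: 0.1 * u * (1.0 - u))  # real peaks and valleys
print(f"{len(coarse)} control points -> {len(detailed)} displaced vertices")
```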

id-imp-tessellated-character.jpg

Stages of Tessellation with NVIDIA Fermi Graphics

Using GPU-based tessellation, a game developer can send a compact geometric representation of an object or character, and the tessellation unit can produce the correct geometric complexity for the specific scene. Consider the "Imp" character illustrated above. On the far left we see the initial quad mesh used to model the general outline of the figure; this representation is quite compact even when compared to typical game assets. The two middle images of the character are created by finely tessellating the description at the left. The result is a very smooth appearance, free of any of the faceting that resulted from limited geometry. Unfortunately this character, while smooth, is no more detailed than the coarse mesh. The final image on the right was created by applying a displacement map to the smoothly tessellated mesh of the third character.

What's new in Fermi?

With any new technology, consumers want to know what's new in the product. The goal of this article is to share in-depth information surrounding the Fermi architecture, as well as the new functionality unlocked in GF100. For clarity, the 'GF' letters used in the GF100 GPU name are not an abbreviation for 'GeForce'; they actually denote that this GPU is a Graphics solution based on the Fermi architecture. The next generation of NVIDIA GeForce-series desktop video cards will use the GF100 to promote the following new features:

  • Third Generation Streaming Multiprocessor (SM)
    o 32 CUDA cores per SM, 4x over GT200
    o 8x the peak double precision floating point performance over GT200
    o Dual Warp Scheduler simultaneously schedules and dispatches instructions from two independent warps
    o 64 KB of RAM with a configurable partitioning of shared memory and L1 cache
  • Second Generation Parallel Thread Execution ISA
    o Unified Address Space with Full C++ Support
    o Optimized for OpenCL and DirectCompute
    o Full IEEE 754-2008 32-bit and 64-bit precision
    o Full 32-bit integer path with 64-bit extensions
    o Memory access instructions to support transition to 64-bit addressing
    o Improved Performance through Predication
  • Improved Memory Subsystem
    o NVIDIA Parallel DataCache hierarchy with Configurable L1 and Unified L2 Caches
    o First GPU with ECC memory support
    o Greatly improved atomic memory operation performance
  • NVIDIA GigaThread Engine
    o 10x faster application context switching
    o Concurrent kernel execution
    o Out of Order thread block execution
    o Dual overlapped memory transfer engines

Tessellation in DirectX-11

Control hull shaders run DX11 pre-expansion routines, and operate explicitly in parallel across all points. Domain shaders run post-expansion operations on maps (u/v or x/y/z/w) and are also implicitly parallel. Fixed function tessellation is configured by Level of Detail (LOD) based on output from the control hull shader, and can also produce triangles and lines if requested. Tessellation is new to NVIDIA GPUs; it was not part of GT200 because of geometry bandwidth bottlenecks caused by sequential rendering/execution semantics.

In the GF100-series graphics processor, NVIDIA has added new PolyMorph and Raster engines to handle world-space processing (PolyMorph) and screen-space processing (Raster). There are eight PolyMorph engines and two Raster engines on the GF104, which depend on an improved L2 cache to keep geometric data produced by the pipeline buffered on-die.

Four-Offset Gather4

The texture unit on previous processor architectures operated at the core clock of the GPU. On GF104, the texture units run at a higher clock, leading to improved texturing performance for the same number of units. GF104's texture units now add support for DirectX-11's BC6H and BC7 texture compression formats, reducing the memory footprint of HDR textures and render targets.

The texture units also support jittered sampling through DirectX-11's four-offset Gather4 feature, allowing four texels to be fetched from a 128×128 pixel grid with a single texture instruction. NVIDIA GF100-series GPUs implement DirectX-11 four-offset Gather4 in hardware, greatly accelerating shadow mapping, ambient occlusion, and post-processing algorithms. With jittered sampling, games can implement smoother soft shadows or custom texture filters efficiently. The previous GT200 GPU did not offer this many coverage samples, while the GF100-series can deliver 32x CSAA.

GF104 Compute for Gaming

As developers continue to search for novel ways to improve their graphics engines, the GPU will need to excel at a diverse and growing set of graphics algorithms. Since these algorithms are executed via general compute APIs, a robust compute architecture is fundamental to a GPU's graphical capabilities. In essence, one can think of compute as the new programmable shader. GF100's compute architecture is designed to address a wider range of algorithms and to facilitate more pervasive use of the GPU for solving parallel problems. Many algorithms, such as ray tracing, physics, and AI, cannot exploit shared memory because program memory locality is only revealed at runtime. GF104's cache architecture was designed with these problems in mind. With up to 48 KB of L1 cache per Streaming Multiprocessor (SM) and a global L2 cache, threads that access the same memory locations at runtime automatically run faster, irrespective of the choice of algorithm.

NVIDIA Codename NEXUS brings CPU and GPU code development together in Microsoft Visual Studio 2008 for a shared process timeline. NEXUS also introduces the first hardware-based shader debugger. NVIDIA GF100-series GPUs are the first to offer full C++ support, the programming language of choice among game developers. To ease the transition to GPU programming, NVIDIA developed Nexus, a Microsoft Visual Studio programming environment for the GPU. Together with new hardware features that provide better debugging support, developers will be able to enjoy CPU-class application development on the GPU. The end result is C++ and Visual Studio integration that brings HPC users onto the same development platform. NVIDIA offers several paths to deliver compute functionality on the GF104 GPU, such as CUDA C++ for video games.

Image processing, simulation, and hybrid rendering are three primary functions of GPU compute for gaming. Using NVIDIA GF100-series GPUs, interactive ray tracing becomes possible for the first time on a standard PC. Ray tracing performance on the NVIDIA GF100 is roughly 4x faster than it was on the GT200 GPU, according to NVIDIA tests. AI/path finding is a compute intensive process well suited for GPUs. The NVIDIA GF100 can handle AI obstacles approximately 3x better than on the GT200. Benefits from this improvement are faster collision avoidance and shortest path searches for higher-performance path finding.

NVIDIA GigaThread Thread Scheduler

One of the most important technologies of the Fermi architecture is its two-level, distributed thread scheduler. At the chip level, a global work distribution engine schedules thread blocks to various SMs, while at the SM level, each warp scheduler distributes warps of 32 threads to its execution units. The first generation GigaThread engine introduced in G80 managed up to 12,288 threads in real-time. The Fermi architecture improves on this foundation by providing not only greater thread throughput, but dramatically faster context switching, concurrent kernel execution, and improved thread block scheduling.

NVIDIA GF104 GPU Fermi Architecture

Based on the Fermi architecture, NVIDIA's latest GPU is codenamed GF104 and powers the GeForce GTX 460. In this article, Benchmark Reviews explains the technical architecture behind NVIDIA's GF104 graphics processor and offers insight into upcoming Fermi-based GeForce video cards. For those who are not familiar, NVIDIA's GF100 GPU was their first graphics processor to support DirectX-11 hardware features such as tessellation and DirectCompute, while also adding heavy particle and turbulence effects. The GF100 GPU is also the successor to the GT200 graphics processor, which launched in the GeForce GTX 280 video card back in June 2008. NVIDIA has since redefined their focus, and GF100/GF104 demonstrate a dedication to next-generation gaming effects such as ray tracing, order-independent transparency, and fluid simulations. The new GF104 GPU is still more powerful than GT200, and delivers DirectX-11 performance for NVIDIA's mid-range Fermi-based video card family.

GF100 was not another incremental GPU step-up like the move from G80 to GT200. Processor core counts grew from 128 in G80 to 240 in GT200, and reach 512 in GF100, where they earn the title of NVIDIA CUDA (Compute Unified Device Architecture) cores; GF104 features up to 336 CUDA cores. The key here is not only the name, but that the name now implies an emphasis on something more than just graphics. Each Fermi CUDA processor core has a fully pipelined integer arithmetic logic unit (ALU) and floating point unit (FPU). GF104 implements the IEEE 754-2008 floating-point standard, providing the fused multiply-add (FMA) instruction for both single and double precision arithmetic. FMA improves over a multiply-add (MAD) instruction by performing the multiplication and addition with a single final rounding step, with no loss of precision in the addition. FMA minimizes rendering errors in closely overlapping triangles.
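The value of that single rounding step can be illustrated off the GPU as well. The small Python sketch below is only an emulation (NumPy float32 standing in for the GPU's single-precision units, float64 standing in for the exact intermediate product); it shows how rounding twice, MAD-style, loses information that a fused multiply-add preserves.

```python
import numpy as np

a = np.float32(1.0000001)
b = np.float32(1.0000001)
c = np.float32(-1.0000002)

# MAD-style: the product is rounded to float32, then the sum is rounded again.
mad = np.float32(a * b) + c

# FMA-style (emulated): keep the product at higher precision and round only once
# after the addition, which is what the fused multiply-add instruction does.
fma = np.float32(np.float64(a) * np.float64(b) + np.float64(c))

print("two roundings:", mad)   # 0.0 -- the tiny residual is lost
print("one rounding: ", fma)   # ~1.4e-14 -- the residual survives
```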

NVIDIA-GF104-Fermi-Graphics-Processor.png

NVIDIA Fermi GF104 Block Diagram (click for high-resolution)

Based on Fermi's third-generation Streaming Multiprocessor (SM) architecture, GF104 could be mistaken for a divided GF100. NVIDIA GeForce GF100-series Fermi GPUs are based on a scalable array of Graphics Processing Clusters (GPCs), Streaming Multiprocessors (SMs), and memory controllers. NVIDIA's GF100 GPU implemented four GPCs, sixteen SMs, and six memory controllers. By comparison, GF104 implements two GPCs, eight SMs, and four memory controllers. Where each SM contained 32 CUDA cores in the GF100, NVIDIA now configures the GF104 to deliver 48 cores per SM. As expected, NVIDIA GF100-series products are launching with different configurations of GPCs, SMs, and memory controllers to address different price points.

CPU commands are read by the GPU via the Host Interface. The GigaThread Engine fetches the specified data from system memory and copies it to the frame buffer. GF104 implements four 64-bit GDDR5 memory controllers (256-bit total) to facilitate high-bandwidth access to the frame buffer. The GigaThread Engine then creates and dispatches thread blocks to various SMs. Individual SMs in turn schedule warps (groups of 32 threads) to CUDA cores and other execution units. The GigaThread Engine also redistributes work to the SMs when work expansion occurs in the graphics pipeline, such as after the tessellation and rasterization stages.

The full GF104 die implements 384 CUDA cores, organized as 8 SMs of 48 cores each; on the GeForce GTX 460, one SM is disabled, leaving 336 active cores. Each SM is a highly parallel multiprocessor supporting up to 32 warps at any given time, and its two warp schedulers feed four dispatch units, so up to four instructions can be dispatched per clock per SM. Each CUDA core is a unified processor core that executes vertex, pixel, geometry, and compute kernels. A unified L2 cache architecture (384KB on the 768MB version, 512KB on 1GB cards) services load, store, and texture operations. GF104 is designed to offer a total of 32 ROP units (24 active on the 768MB card, 32 on the 1GB card) for pixel blending, antialiasing, and atomic memory operations. The ROP units are organized in four groups of eight, and each group is serviced by a 64-bit memory controller. The memory controller, L2 cache, and ROP group are closely coupled, so scaling one unit automatically scales the others.
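Because those three units scale together, most of the headline figures in the GeForce GTX 400 specifications table below can be recomputed from the unit counts alone. The following Python sketch is our own arithmetic (not an NVIDIA formula), using the per-controller sizes described above.

```python
def gtx460_derived(active_controllers, texture_units=56, core_mhz=675, mem_mhz=900):
    """Derive headline GTX 460 specs from unit counts (illustrative arithmetic only)."""
    bus_bits = active_controllers * 64        # each GDDR5 controller is 64 bits wide
    rops = active_controllers * 8             # one ROP group of eight per controller
    l2_kb = active_controllers * 128          # one 128KB L2 slice per controller
    # GDDR5 moves 4 bits per pin per clock: 900 MHz becomes a 3600 MHz data rate.
    bandwidth_gbs = mem_mhz * 4 * bus_bits / 8 / 1000
    texel_rate = texture_units * core_mhz / 1000   # bilinear GigaTexels/s
    return bus_bits, rops, l2_kb, bandwidth_gbs, texel_rate

print(gtx460_derived(3))   # 768MB card: (192, 24, 384, 86.4, 37.8)
print(gtx460_derived(4))   # 1GB card:   (256, 32, 512, 115.2, 37.8)
```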

GF104 Specifications

  • Up to 336 CUDA Cores
  • 8 Geometry Units
  • 2 Raster Units
  • 64 Texture Units
  • 32 ROP Units
  • 256-bit GDDR5
  • DirectX-11 API Support

GeForce GTX 400 Specifications

Graphics Card | GeForce GTX 460 | GeForce GTX 465 | GeForce GTX 470 | GeForce GTX 480
GPU Transistors | 1.95 Billion | 3.2 Billion | 3.2 Billion | 3.2 Billion
Graphics Processing Clusters | 2 | 4 | 4 | 4
Streaming Multiprocessors | 7 | 11 | 14 | 15
CUDA Cores | 336 | 352 | 448 | 480
Texture Units | 56 | 44 | 56 | 60
ROP Units | 768MB=24 / 1GB=32 | 32 | 40 | 48
Graphics Clock (Fixed Function Units) | 675 MHz | 607 MHz | 607 MHz | 700 MHz
Processor Clock (CUDA Cores) | 1350 MHz | 1215 MHz | 1215 MHz | 1401 MHz
Memory Clock (Clock Rate/Data Rate) | 900/3600 MHz | 837/3348 MHz | 837/3348 MHz | 924/3696 MHz
Total Video Memory | 768MB / 1GB | 1024 MB | 1280 MB | 1536 MB
Memory Interface | 768MB=192-Bit / 1GB=256-Bit | 256-Bit | 320-Bit | 384-Bit
Total Memory Bandwidth | 86.4 / 115.2 GB/s | 102.6 GB/s | 133.9 GB/s | 177.4 GB/s
Texture Filtering Rate (Bilinear) | 37.8 GigaTexels/s | 26.7 GigaTexels/s | 34.0 GigaTexels/s | 42.0 GigaTexels/s
GPU Fabrication Process | 40 nm | 40 nm | 40 nm | 40 nm
Output Connections | 2x Dual-Link DVI-I, 1x Mini HDMI | 2x Dual-Link DVI-I, 1x Mini HDMI | 2x Dual-Link DVI-I, 1x Mini HDMI | 2x Dual-Link DVI-I, 1x Mini HDMI
Form Factor | Dual-Slot | Dual-Slot | Dual-Slot | Dual-Slot
Power Input | 2x 6-Pin | 2x 6-Pin | 2x 6-Pin | 6-Pin + 8-Pin
Thermal Design Power (TDP) | 768MB=150W / 1GB=160W | 200 Watts | 215 Watts | 250 Watts
Recommended PSU | 450 Watts | 550 Watts | 550 Watts | 600 Watts
GPU Thermal Threshold | 104°C | 105°C | 105°C | 105°C

GeForce Fermi Chart Courtesy of Benchmark Reviews

ASUS ENGTX460/2DI/768MD5 Video Card

So far, 2010 has been an exciting year for game developers. Microsoft Windows 7 (and updated Windows Vista) Operating Systems introduced gamers to DirectX-11, allowing video games released for the PC platform to look better than ever. DirectX-11 is the leap in video game software development we've been waiting for. Screen Space Ambient Occlusion (SSAO) is given emphasis in DX11, allowing some of the most detailed computer textures gamers have ever seen. Realistic cracks in mud with definable depth and splintered tree bark make the game more realistic, but they also make new demands on the graphics hardware. This new level of graphical detail requires a new level of computer hardware: DX11-compliant hardware. Tessellation adds a tremendous level of strain on the GPU, making previous graphics hardware virtually obsolete with new DX11 game titles.

ASUS-ENGTX460-2DI-768MD5-Retail-Package.jpg

The 768MB ASUS GeForce GTX 460 video card offers gamers a healthy dose of graphics processing power for their money. But the GeForce GTX 460 is more than just a tool for video games; it's also a tool for professional environments that make use of GPU-accelerated software, such as Adobe Premiere Pro and Photoshop.

ASUS-ENGTX460-2DI-768MD5-Video-Card.jpg

The ASUS GeForce GTX 460 is a 2.67" tall double-bay, 4.376-inches (111.15mm) wide graphics card with a 8.25-inch (209.55mm) long profile. There are two varieties of ASUS GTX 460: one with 768MB (model ENGTX460/2DI/768MD5), and another with 1GB of GDDR5 memory (model ENGTX460 DirectCU TOP/2DI/1GD5). NVIDIA's reference cooler design uses a center-mounted 75mm finsink, which is more than adequate for this midrange-output Fermi GF104 GPU.

ASUS-ENGTX460-2DI-768MD5-Video-Card-Top.jpg

As with most past GeForce video cards, the Fermi GPU offers two output 'lanes', so not all three output connections can operate at once. NVIDIA has retained two DVI outputs on the GeForce GTX 460, so dual-monitor configurations can be utilized. By adding a second video card, users can enjoy GeForce 3D Vision Surround functionality.

NVIDIA-GeForce-GTX-460-Header-Panel.jpg

Other changes occur in more subtle ways, such as replacing the S-Video connection with a more relevant (mini) HDMI 1.3a A/V output. In past GeForce products, the HDMI port was limited to video-only output and required a separate audio output. Native HDMI 1.3 support is available to the ASUS GeForce GTX 460, which allows direct output to HDTVs and compatible monitors. Selling at an affordable $200 price point, NVIDIA was wise to support dual-card SLI sets on the GTX 460. This allows mainstream gamers to buy their primary video card at a fair price, and later upgrade their system to run two GeForce GTX 460's in SLI. Triple-SLI capability is not supported, since the $600 cost of three video cards would be better used to purchase either two GTX 470's or one GTX 480.

NVIDIA-GeForce-GTX-460-PCB-Bottom.jpg

The 40nm fabrication process opens the die for more transistors: there are 1.4-billion in the GT200 GPU (GeForce GTX 285), compared to 1.95-billion in the GF104 that powers the GTX 460. While the GTX 460's GF104 disables one of its eight Streaming Multiprocessors (SMs), a heavy-duty thermal management system is still utilized for optimal temperature control. On the reference sample, the entire heatsink and shroud assembly is attached to the circuit board by four screws surrounding the GPU.

NVIDIA-GeForce-GTX-460-GF104-GPU.jpg

Even with its mid-range intentions, the PCB is a busy place on the GeForce GTX 460. There are eight positions for DRAM ICs on the circuit board, which combine for 1GB of GDDR5 video frame buffer memory, or 768MB when six are populated. Many of the electronic components have been relocated to the 'top' side of the PCB to better accommodate the large-scale GF104 GPU and its 1.95-billion transistors. The 336 CUDA cores operate at 1350 MHz, which keeps a firm clock-speed lead over ATI's 850 MHz Cypress GPU, a die that measures 334 mm².

ASUS employs a dual-slot cooling system on their reference-based GTX 460 video card. Two copper heat-pipe rods span away from the copper base into two opposite sets of aluminum fins. The entire unit is cooled with a 75mm fan, which kept our test samples extremely cool at idle and maintained very good cooling once the card received unnaturally high stress loads with FurMark (covered later in this article).

NVIDIA-GeForce-GTX-460-Heatsink-Cooler.jpg

In the next several sections Benchmark Reviews will explain our video card test methodology, followed by a performance comparison of the ASUS GeForce GTX 460 against several of the most popular mid-range graphics accelerators available. As a single card the GeForce GTX 460 will compete against the ATI Radeon HD 5830 and several other middle-market video cards, and against the Radeon HD 5870 when combined into SLI, so we'll be keeping a close eye on comparative performance and value.

VGA Testing Methodology

The Microsoft DirectX-11 graphics API is native to the Microsoft Windows 7 Operating System, and will be the primary O/S for our test platform. DX11 is also available as a Microsoft Update for the Windows Vista O/S, so our test results apply to both versions of the Operating System. The majority of benchmark tests used in this article are comparative to DX11 performance, however some high-demand DX10 tests have also been included.

According to the Steam Hardware Survey published for the month ending May 2010, the most popular gaming resolution is 1280x1024 (17-19" standard LCD monitors). However, because this 1.31MP resolution is considered 'low' by most standards, our benchmark performance tests concentrate on higher-demand resolutions: 1.76MP 1680x1050 (22-24" widescreen LCD) and 2.30MP 1920x1200 (24-28" widescreen LCD monitors). These resolutions are more likely to be used by high-end graphics solutions, such as those tested in this article.
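The megapixel figures above follow directly from the pixel counts; a quick check (our own arithmetic) confirms them:

```python
for w, h in [(1280, 1024), (1680, 1050), (1920, 1200)]:
    print(f"{w}x{h} = {w * h / 1e6:.2f} MP")
# 1280x1024 = 1.31 MP, 1680x1050 = 1.76 MP, 1920x1200 = 2.30 MP
```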

In each benchmark test there is one 'cache run' that is conducted, followed by five recorded test runs. Results are collected at each setting with the highest and lowest results discarded. The remaining three results are averaged, and displayed in the performance charts on the following pages.
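In other words, each reported figure is a trimmed mean of five recorded runs. A minimal Python sketch of that reduction (our own illustration of the procedure described above, with made-up numbers) looks like this:

```python
def report_fps(recorded_runs):
    """Discard the highest and lowest of five runs, then average the remaining three."""
    assert len(recorded_runs) == 5      # the initial cache run is never recorded
    kept = sorted(recorded_runs)[1:-1]  # drop lowest and highest
    return sum(kept) / len(kept)

print(report_fps([22.1, 22.7, 22.6, 23.4, 22.8]))   # ~22.7 FPS
```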

A combination of synthetic and video game benchmark tests have been used in this article to illustrate relative performance among graphics solutions. Our benchmark frame rate results are not intended to represent real-world graphics performance, as this experience would change based on supporting hardware and the perception of individuals playing the video game.

DX11 Cost to Performance Ratio

For this article Benchmark Reviews has included cost per FPS alongside the graphics performance results. Only the least expensive listed price for each product is used, and the figures do not account for tax, freight, promotional offers, or rebates. Prices reflect the product series rather than any specific manufacturer, model, or brand. These retail prices were obtained from NewEgg.com on 1-August-2010 (a short sketch of the calculation follows the list):

  • $200 Radeon HD 5830
  • $200 GeForce GTX 460
  • $290 Radeon HD 5850
  • $320 GeForce GTX 470
  • $400 Radeon HD 5870
  • $480 GeForce GTX 480
  • $680 Radeon HD 5970
  • $400 GeForce GTX 460 SLI
  • $960 GeForce GTX 480 SLI
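The cost-per-frame figures in the benchmark sections are simply these street prices divided by each card's averaged frame rate. The Python sketch below illustrates the calculation; the frame rate shown is a made-up example, not a measured result.

```python
street_price = {"GeForce GTX 460": 200, "GeForce GTX 460 SLI": 400}

def cost_per_fps(card, average_fps):
    """Street price divided by averaged FPS; tax, shipping, and rebates are ignored."""
    return street_price[card] / average_fps

# Hypothetical frame rate used purely for illustration:
print(f"${cost_per_fps('GeForce GTX 460', 22.7):.2f} per FPS")   # $8.81 per FPS
```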

Intel X58-Express Test System

DirectX-10 Benchmark Applications

  • 3DMark Vantage v1.02 (Extreme Quality, 8x Multisample Anti-Aliasing, 16x Anisotropic Filtering, 1:2 Scale)
  • Crysis Warhead v1.1 with HOC Benchmark (DX10, Very High Quality, 4x Anti-Aliasing, 16x Anisotropic Filtering, Airfield Demo)
  • Far Cry 2 v1.02 (DX10, Very High Performance, Ultra-High Quality, 8x Anti-Aliasing, HDR + Bloom)
  • Resident Evil 5 Benchmark (DX10, Super-High Quality, 8x MSAA)

DirectX-11 Benchmark Applications

  • Aliens vs Predator (Very High Quality, 4x AA, 16x AF, SSAO, Tessellation, Advanced Shadows)
  • BattleField: Bad Company 2 (High Quality, HBAO, 8x Anti-Aliasing, 16x Anisotropic Filtering, Single-Player Intro Scene)
  • BattleForge v1.2 (Very High Quality, 8x Anti-Aliasing, Auto Multi-Thread)
  • Metro 2033 (Very-High Quality, DirectX-11, AAA, 16x AF, Advanced DoF, Tessellation, 180s Scene 1 Fraps)
  • Unigine Heaven Benchmark 2.1 (DX11, Normal Tessellation, 16x AF, 4x AA)

Video Card Test Products

Graphics Card Radeon HD5830 GeForce GTX460 Radeon HD5850 GeForce GTX470 Radeon HD5870 GeForce GTX480 Radeon HD5970
GPU Cores 1120 336 1440 448 1600 480 3200 (1600 per GPU)
Core Clock (MHz) 800 675 725 608 850 700 725
Shader Clock (MHz) N/A 1350 N/A 1215 N/A 1401 N/A
Memory Clock (MHz) 1000 900 1000 837 1200 924 1000
Memory Amount 1024MB GDDR5 768MB GDDR5 1024MB GDDR5 1280MB GDDR5 1024MB GDDR5 1536MB GDDR5 2048MB GDDR5
Memory Interface 256-bit 192-bit 256-bit 320-bit 256-bit 384-bit 512-bit (256-bit per GPU)
  • ATI Radeon HD 5830 Reference Design (800 MHz GPU/1000 MHz vRAM - ATI Catalyst Driver 10.7)
  • ASUS GeForce GTX 460 768MB (675 MHz GPU/1350 MHz Shader/900 MHz vRAM - Forceware 258.96)
  • ATI Radeon HD 5850 Reference Design (725 MHz GPU/1000MHz vRAM - ATI Catalyst Driver 10.7)
  • ZOTAC GeForce GTX 470 Reference Design (608 MHz GPU/1215 MHz Shader/837 MHz vRAM - Forceware 258.96)
  • ATI Radeon HD 5870 Reference Design (850 MHz GPU/1200MHz vRAM - ATI Catalyst Driver 10.7)
  • ZOTAC GeForce GTX 480 ZT-40101-10P (701 MHz GPU/1401 MHz Shader/924 MHz vRAM - Forceware 258.96)
  • ATI Radeon HD 5970 Reference Design (725 MHz GPU x2/1000MHz vRAM - ATI Catalyst Driver 10.7)
  • NVIDIA GeForce GTX 460 combined SLI pair (675 MHz GPU/1350 MHz Shader/900 MHz vRAM - Forceware 258.96)
  • NVIDIA GeForce GTX 480 combined SLI pair (701 MHz GPU/1401 MHz Shader/924 MHz vRAM - Forceware 258.96)

DX10: 3DMark Vantage

3DMark Vantage is a PC benchmark suite designed to test DirectX-10 graphics card performance. FutureMark 3DMark Vantage is the latest addition to the 3DMark benchmark series built by Futuremark Corporation. Although 3DMark Vantage requires NVIDIA PhysX to be installed for program operation, only the CPU/Physics test relies on this technology.

3DMark Vantage offers benchmark tests focusing on GPU, CPU, and Physics performance. Benchmark Reviews uses the two GPU-specific tests for grading video card performance: Jane Nash and New Calico. These tests isolate graphical performance, and remove processor dependence from the benchmark results.

3DMark Vantage GPU Test: Jane Nash

Of the two GPU tests 3DMark Vantage offers, the Jane Nash performance benchmark is slightly less demanding. In a short video scene the special agent escapes a secret lair by water, nearly losing her shirt in the process. Benchmark Reviews tests this DirectX-10 scene at 1680x1050 and 1920x1200 resolutions, and uses Extreme quality settings with 8x anti-aliasing and 16x anisotropic filtering. The 1:2 scale is utilized, and is the highest this test allows. By maximizing the processing levels of this test, the scene creates the highest level of graphical demand possible and sorts the strong from the weak.

3dMark_Vantage_Jane_Nash_Benchmark.jpg

Cost Analysis: Jane Nash (1680x1050)

  • $200 Radeon HD 5830 costs $8.78 per FPS
  • $200 GeForce GTX 460 costs $8.81 per FPS
  • $290 Radeon HD 5850 costs $10.18 per FPS
  • $320 GeForce GTX 470 costs $11.15 per FPS
  • $400 Radeon HD 5870 costs $11.17 per FPS
  • $480 GeForce GTX 480 costs $13.60 per FPS
  • $680 Radeon HD 5970 costs $14.56 per FPS
  • $400 GeForce GTX 460 SLI costs $9.41 per FPS
  • $960 GeForce GTX 480 SLI costs $16.47 per FPS

    3DMark Vantage GPU Test: New Calico

    New Calico is the second GPU test in the 3DMark Vantage test suite, and the more demanding of the two. In a short video scene featuring a galactic battleground, there is a massive display of busy objects across the screen. Benchmark Reviews tests this DirectX-10 scene at 1680x1050 and 1920x1200 resolutions, and uses Extreme quality settings with 8x anti-aliasing and 16x anisotropic filtering. The 1:2 scale is utilized, and is the highest this test allows. Using the highest graphics processing level available allows our test products to separate themselves and stand out (if possible).

    3dMark_Vantage_New_Calico_Benchmark.jpg

    Cost Analysis: New Calico (1680x1050)

  • $200 Radeon HD 5830 costs $11.43 per FPS
  • $200 GeForce GTX 460 costs $10.42 per FPS
  • $290 Radeon HD 5850 costs $13.24 per FPS
  • $320 GeForce GTX 470 costs $13.17 per FPS
  • $400 Radeon HD 5870 costs $15.27 per FPS
  • $480 GeForce GTX 480 costs $16.90 per FPS
  • $680 Radeon HD 5970 costs $16.96 per FPS
  • $400 GeForce GTX 460 SLI costs $11.08 per FPS
  • $960 GeForce GTX 480 SLI costs $18.36 per FPS

    Test Summary: According to 3DMark Vantage, a single ASUS GeForce GTX 460 768MB model trails slightly behind the ATI Radeon HD 5830 in the Jane Nash GPU test, and then leads it by just as much in the New Calico test. The two cards share the same price point, and this benchmark suggests they share roughly the same frame rate performance as well. After joining two GeForce GTX 460's into SLI for $400, performance leaps past the equally-priced Radeon HD 5870. Take 3D Vision, PhysX, CUDA, and 32x CSAA into consideration, and the favor leans towards NVIDIA when choosing between GeForce and Radeon products.

    Graphics Card Radeon HD5830 GeForce GTX460 Radeon HD5850 GeForce GTX470 Radeon HD5870 GeForce GTX480 Radeon HD5970
    GPU Cores 1120 336 1440 448 1600 480 3200 (1600 per GPU)
    Core Clock (MHz) 800 675 725 608 850 700 725
    Shader Clock (MHz) N/A 1350 N/A 1215 N/A 1401 N/A
    Memory Clock (MHz) 1000 900 1000 837 1200 924 1000
    Memory Amount 1024MB GDDR5 768MB GDDR5 1024MB GDDR5 1280MB GDDR5 1024MB GDDR5 1536MB GDDR5 2048MB GDDR5
    Memory Interface 256-bit 192-bit 256-bit 320-bit 256-bit 384-bit 512-bit (256-bit per GPU)

    DX10: Crysis Warhead

    Crysis Warhead is an expansion pack based on the original Crysis video game. Crysis Warhead is based in the future, where an ancient alien spacecraft has been discovered beneath the Earth on an island east of the Philippines. Crysis Warhead uses a refined version of the CryENGINE2 graphics engine. Like Crysis, Warhead uses the Microsoft Direct3D 10 (DirectX-10) API for graphics rendering.

    Benchmark Reviews uses the HOC Crysis Warhead benchmark tool to test and measure graphics performance using the Airfield 1 demo scene. This short test places a high amount of stress on a graphics card, both because of its detailed terrain and textures and because of the test settings used. Using the DirectX-10 test with Very High Quality settings, the Airfield 1 demo scene receives 4x anti-aliasing and 16x anisotropic filtering to create maximum graphics load and separate the products according to their performance.

    Using the highest quality DirectX-10 settings with 4x AA and 16x AF, only the most powerful graphics cards are expected to perform well in our Crysis Warhead benchmark tests. DirectX-11 extensions are not supported in Crysis: Warhead, and SSAO is not an available option.

    Crysis_Warhead_Benchmark.jpg

    Cost Analysis: Crysis Warhead (1680x1050)

  • $200 Radeon HD 5830 costs $9.09 per FPS
  • $200 GeForce GTX 460 costs $8.70 per FPS
  • $290 Radeon HD 5850 costs $10.74 per FPS
  • $320 GeForce GTX 470 costs $9.70 per FPS
  • $400 Radeon HD 5870 costs $12.90 per FPS
  • $480 GeForce GTX 480 costs $11.43 per FPS
  • $680 Radeon HD 5970 costs $15.11 per FPS
  • $400 GeForce GTX 460 SLI costs $9.76 per FPS
  • $960 GeForce GTX 480 SLI costs $19.20 per FPS

    Test Summary: The CryENGINE2 graphics engine used in Crysis Warhead responds well to both ATI and NVIDIA products, which allows the 768MB ASUS GeForce GTX 460 to match the Radeon HD 5830 at 1680x1050 and again at 1920x1200. For die-hard fans of Crysis, the GeForce GTX 460 offers a better price-to-performance ratio than the Radeon HD 5830. Adding a second GeForce GTX 460 video card, the SLI set matches ATI's Radeon HD 5870 at 1680x1050 and trails only slightly at 1920x1200.

    Graphics Card Radeon HD5830 GeForce GTX460 Radeon HD5850 GeForce GTX470 Radeon HD5870 GeForce GTX480 Radeon HD5970
    GPU Cores 1120 336 1440 448 1600 480 3200 (1600 per GPU)
    Core Clock (MHz) 800 675 725 608 850 700 725
    Shader Clock (MHz) N/A 1350 N/A 1215 N/A 1401 N/A
    Memory Clock (MHz) 1000 900 1000 837 1200 924 1000
    Memory Amount 1024MB GDDR5 768MB GDDR5 1024MB GDDR5 1280MB GDDR5 1024MB GDDR5 1536MB GDDR5 2048MB GDDR5
    Memory Interface 256-bit 192-bit 256-bit 320-bit 256-bit 384-bit 512-bit (256-bit per GPU)

    DX10: Far Cry 2

    Ubisoft developed Far Cry 2 as a sequel to the original, but with a very different approach to gameplay and story line. Far Cry 2 features a vast world built on Ubisoft's new game engine called Dunia, meaning "world", "earth" or "living" in Farsi. The setting of Far Cry 2 is a fictional Central African landscape, set in a modern-day timeline.

    The Dunia engine was built specifically for Far Cry 2 by the Ubisoft Montreal development team. It delivers realistic semi-destructible environments, special effects such as dynamic fire propagation and storms, real-time night-and-day sunlight and moonlight cycles, a dynamic music system, and non-scripted enemy A.I. actions.

    The Dunia game engine takes advantage of multi-core processors as well as multiple processors and supports DirectX 9 as well as DirectX-10. Only 2 or 3 percent of the original CryEngine code is re-used, according to Michiel Verheijdt, Senior Product Manager for Ubisoft Netherlands. Additionally, the engine is less hardware-demanding than CryEngine 2, the engine used in Crysis.

    However, it should be noted that Crysis delivers greater character and object texture detail, as well as more destructible elements within the environment: for example, trees breaking into many smaller pieces and buildings breaking down to their component panels. Far Cry 2 also supports the amBX technology from Philips. With the proper hardware, this adds effects like vibrations, ambient colored lights, and fans that generate wind effects.

    There is a benchmark tool in the PC version of Far Cry 2, which offers an excellent array of settings for performance testing. Benchmark Reviews used the maximum settings allowed for DirectX-10 tests, with the resolution set to 1920x1200. Performance settings were all set to 'Very High', Render Quality was set to 'Ultra High' overall quality, 8x anti-aliasing was applied, and HDR and Bloom were enabled.

    Far_Cry_2_Benchmark_Performance.jpg

    Although the Dunia engine in Far Cry 2 is slightly less demanding than the CryEngine 2 engine in Crysis, the strain appears to be extremely close. In Crysis we didn't dare test AA above 4x, whereas we used 8x AA and 'Ultra High' settings in Far Cry 2. The end effect was a separation between the cards capable of handling maximum settings and those that are not. Using the short 'Ranch Small' time demo (which yields the lowest FPS of the three tests available), we noticed that very few products are capable of producing playable frame rates with the settings all turned up.

    Cost Analysis: Far Cry 2 (1680x1050)

  • $200 Radeon HD 5830 costs $5.76 per FPS
  • $200 GeForce GTX 460 costs $3.32 per FPS
  • $290 Radeon HD 5850 costs $5.61 per FPS
  • $320 GeForce GTX 470 costs $4.16 per FPS
  • $400 Radeon HD 5870 costs $6.60 per FPS
  • $480 GeForce GTX 480 costs $5.18 per FPS
  • $680 Radeon HD 5970 costs $7.62 per FPS
  • $400 GeForce GTX 460 SLI costs $3.72 per FPS
  • $960 GeForce GTX 480 SLI costs $6.19 per FPS

    Test Summary: The Dunia game engine in Far Cry 2 demonstrates a preference towards NVIDIA products over ATI, which allows a single GeForce GTX 460 video card to clearly outperform the Radeon HD 5830, and even beat the Radeon HD 5850. Because of the major differences in performance with Far Cry 2, the GeForce GTX 460 (and all other GeForce cards) offered the best performance and cost per FPS of any DX11 video card. Of course, this also translated into impressive SLI performance. Combined into an SLI set, the GeForce GTX 460 surpasses both the Radeon HD 5870 and ATI's flagship Radeon HD 5970.

    Graphics Card Radeon HD5830 GeForce GTX460 Radeon HD5850 GeForce GTX470 Radeon HD5870 GeForce GTX480 Radeon HD5970
    GPU Cores 1120 336 1440 448 1600 480 3200 (1600 per GPU)
    Core Clock (MHz) 800 675 725 608 850 700 725
    Shader Clock (MHz) N/A 1350 N/A 1215 N/A 1401 N/A
    Memory Clock (MHz) 1000 900 1000 837 1200 924 1000
    Memory Amount 1024MB GDDR5 768MB GDDR5 1024MB GDDR5 1280MB GDDR5 1024MB GDDR5 1536MB GDDR5 2048MB GDDR5
    Memory Interface 256-bit 192-bit 256-bit 320-bit 256-bit 384-bit 512-bit (256-bit per GPU)

    DX10: Resident Evil 5

    Built upon an advanced version of Capcom's proprietary MT Framework game engine to deliver DirectX-10 graphic detail, Resident Evil 5 offers gamers non-stop action similar to Devil May Cry 4, Lost Planet, and Dead Rising. The MT Framework is an exclusive seventh generation game engine built to be used with games developed for the PlayStation 3 and Xbox 360, and PC ports. MT stands for "Multi-Thread", "Meta Tools" and "Multi-Target". Games using the MT Framework are originally developed on the PC and then ported to the other two console platforms.

    On the PC version of Resident Evil 5, both DirectX 9 and DirectX-10 modes are available for the Microsoft Windows XP and Vista Operating Systems. Microsoft Windows 7 will play Resident Evil 5 through backwards-compatible Direct3D APIs. Resident Evil 5 is branded with the NVIDIA The Way It's Meant to be Played (TWIMTBP) logo, and receives NVIDIA GeForce 3D Vision functionality enhancements.

    NVIDIA and Capcom offer the Resident Evil 5 benchmark demo for free download from their website, and Benchmark Reviews encourages visitors to compare their own results to ours. Because the Capcom MT Framework game engine is very well optimized and produces high frame rates, Benchmark Reviews uses the DirectX-10 version of the test at 1920x1200 resolution. Super-High quality settings are configured, with 8x MSAA post processing effects for maximum demand on the GPU. Test scenes from Area #3 and Area #4 require the most graphics processing power, and the results are collected for the chart illustrated below.

    Resident_Evil_5_Benchmark.jpg

    Resident Evil 5 has really proved how good the proprietary Capcom MT Framework game engine can look with DirectX-10 effects. The Area 3 and 4 tests are the most graphically demanding in this free downloadable benchmark demo, and the results suggest that the Area #3 test scene performs better on NVIDIA GeForce products while the Area #4 scene favors ATI Radeon GPUs.

    Cost Analysis: Resident Evil 5 (Area 4)

  • $200 Radeon HD 5830 costs $3.39 per FPS
  • $200 GeForce GTX460 costs $3.18 per FPS
  • $290 Radeon HD 5850 costs $3.54 per FPS
  • $320 GeForce GTX 470 costs $3.81 per FPS
  • $400 Radeon HD 5870 costs $4.12 per FPS
  • $480 GeForce GTX 480 costs $4.06 per FPS
  • $680 Radeon HD 5970 costs $5.55 per FPS
  • $400 GeForce GTX 460 SLI costs $3.03 per FPS
  • $960 GeForce GTX 480 SLI costs $5.60 per FPS

    Test Summary: It's unclear whether Resident Evil 5 favors ATI or NVIDIA, especially since the two test scenes alternate favoritism. Although this benchmark tool is distributed directly by NVIDIA, and Forceware drivers likely include optimizations for Resident Evil 5, there doesn't appear to be any decisive tilt towards GeForce products over their ATI Radeon counterparts within the game itself. Test scene #3 certainly favors Fermi GPUs, which lead every other product tested. In test scene #4 the Radeon series appears more competitive, yet the 768MB ASUS GeForce GTX 460 still slightly outperforms the Radeon HD 5830, and the GTX 460 earns respect for its competitive cost per frame. Two GeForce GTX 460's in SLI dominate the landscape, outperforming the Radeon HD 5870 and dual-GPU Radeon HD 5970, while also offering the best price per frame value.

    Graphics Card Radeon HD5830 GeForce GTX460 Radeon HD5850 GeForce GTX470 Radeon HD5870 GeForce GTX480 Radeon HD5970
    GPU Cores 1120 336 1440 448 1600 480 3200 (1600 per GPU)
    Core Clock (MHz) 800 675 725 608 850 700 725
    Shader Clock (MHz) N/A 1350 N/A 1215 N/A 1401 N/A
    Memory Clock (MHz) 1000 900 1000 837 1200 924 1000
    Memory Amount 1024MB GDDR5 768MB GDDR5 1024MB GDDR5 1280MB GDDR5 1024MB GDDR5 1536MB GDDR5 2048MB GDDR5
    Memory Interface 256-bit 192-bit 256-bit 320-bit 256-bit 384-bit 512-bit (256-bit per GPU)

    DX11: Aliens vs Predator

    Aliens vs. Predator is a science fiction first-person shooter video game, developed by Rebellion and published by Sega for Microsoft Windows, Sony PlayStation 3, and Microsoft Xbox 360. Aliens vs. Predator utilizes Rebellion's proprietary Asura game engine, which had previously appeared in the studio's earlier titles such as Rogue Warrior. The self-contained benchmark tool is used for our DirectX-11 tests, which push the Asura game engine to its limit.

    In our benchmark tests, Aliens vs. Predator was configured to use the highest quality settings with 4x AA and 16x AF. DirectX-11 features such as Screen Space Ambient Occlusion (SSAO) and tessellation have also been included, along with advanced shadows.

    Aliens-vs-Predator_DX11_Benchmark.jpg

    Cost Analysis: Aliens vs Predator (1680x1050)

  • $200 Radeon HD 5830 costs $8.66 per FPS
  • $200 GeForce GTX460 costs $8.03 per FPS
  • $290 Radeon HD 5850 costs $9.80 per FPS
  • $320 GeForce GTX 470 costs $9.44 per FPS
  • $400 Radeon HD 5870 costs $11.17 per FPS
  • $480 GeForce GTX 480 costs $13.26 per FPS
  • $680 Radeon HD 5970 costs $14.53 per FPS
  • $400 GeForce GTX 460 SLI costs $8.51 per FPS
  • $960 GeForce GTX 480 SLI costs $13.73 per FPS

    Test Summary: Aliens vs Predator may use a well-known game engine, but with DirectX-11 its graphical demands are second only to Metro 2033 (and possibly equivalent to DX10 Crysis). With an unbiased appetite for raw DirectX-11 graphics performance, Aliens vs Predator treats ATI and NVIDIA products as equal contenders, despite sponsorship from NVIDIA. When high-strain SSAO is called into action, the 768MB GeForce GTX 460 demonstrates how well Fermi is suited for DX11, besting ATI's Radeon HD 5830 and matching the GTX 465. Even though the high-stress graphics pull down frame rates on the ASUS GeForce GTX 460, its price-to-performance ratio in Aliens vs Predator remains quite favorable and wins out over the Radeon HD 5830 for value. While the GeForce GTX 460 SLI set performs well enough at 1680x1050, performance begins to wane at higher resolutions like 1920x1200.

    Graphics Card Radeon HD5830 GeForce GTX460 Radeon HD5850 GeForce GTX470 Radeon HD5870 GeForce GTX480 Radeon HD5970
    GPU Cores 1120 336 1440 448 1600 480 3200 (1600 per GPU)
    Core Clock (MHz) 800 675 725 608 850 700 725
    Shader Clock (MHz) N/A 1350 N/A 1215 N/A 1401 N/A
    Memory Clock (MHz) 1000 900 1000 837 1200 924 1000
    Memory Amount 1024MB GDDR5 768MB GDDR5 1024MB GDDR5 1280MB GDDR5 1024MB GDDR5 1536MB GDDR5 2048MB GDDR5
    Memory Interface 256-bit 192-bit 256-bit 320-bit 256-bit 384-bit 512-bit (256-bit per GPU)

    DX11: Battlefield Bad Company 2

    The Battlefield franchise has been known to demand a lot from PC graphics hardware. DICE (Digital Illusions CE) has incorporated their Frostbite-1.5 game engine, with its Destruction-2.0 feature set, into Battlefield: Bad Company 2. Battlefield: Bad Company 2 features destructible environments using Frostbite Destruction-2.0, and adds gravitational bullet-drop effects for projectiles shot from weapons at long distance. The Frostbite-1.5 game engine used in Battlefield: Bad Company 2 consists of DirectX-10 primary graphics, with improved performance and softened dynamic shadows added for DirectX-11 users.

    At the time Battlefield: Bad Company 2 was published, DICE was also working on the Frostbite-2.0 game engine. This upcoming engine will include native support for DirectX-10.1 and DirectX-11, as well as parallelized processing support for 2-8 parallel threads. This will improve performance for users with an Intel Core-i7 processor. Unfortunately, the Extreme Edition Intel Core i7-980X six-core CPU with twelve threads will not see full utilization.

    In our benchmark tests of Battlefield: Bad Company 2, the first three minutes of action in the single-player raft night scene are captured with FRAPS. Relative to the online multiplayer action, these frame rate results are nearly identical to daytime maps with the same video settings.
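Average frame rates for a capture like this fall out of the frame count and elapsed time. The Python sketch below is a generic illustration, assuming a simple list of cumulative frame timestamps in milliseconds rather than any specific FRAPS log format:

```python
def average_fps(frame_times_ms):
    """Average FPS for a capture, given cumulative frame timestamps in milliseconds."""
    frames = len(frame_times_ms) - 1                              # frame intervals
    elapsed_s = (frame_times_ms[-1] - frame_times_ms[0]) / 1000.0
    return frames / elapsed_s

# Four 20 ms frame intervals -> 50 FPS
print(average_fps([0.0, 20.0, 40.0, 60.0, 80.0]))
```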

    Battlefield-Bad-Company-2_Benchmark.jpg

    The Frostbite-1.5 game engine in Battlefield: Bad Company 2 appears to equalize our test set of video cards, and despite AMD's sponsorship of the game it still plays well using any brand of graphics card.

    Cost Analysis: Battlefield: Bad Company 2 (1680x1050)

  • $200 Radeon HD 5830 costs $4.39 per FPS
  • $200 GeForce GTX 460 costs $4.14 per FPS
  • $290 Radeon HD 5850 costs $4.77 per FPS
  • $320 GeForce GTX 470 costs $4.97 per FPS
  • $400 Radeon HD 5870 costs $5.63 per FPS
  • $480 GeForce GTX 480 costs $5.84 per FPS
  • $680 Radeon HD 5970 costs $8.95 per FPS
  • $400 GeForce GTX 460 SLI costs $4.31 per FPS
  • $960 GeForce GTX 480 SLI costs $8.69 per FPS

    Test Summary: Battlefield: Bad Company 2's Frostbite-1.5 game engine allows a single ASUS GeForce GTX 460 to post small gains over the ATI Radeon HD 5830. The surprise here is how close the GeForce GTX 460 comes to the GTX 465, which adds barely 3 FPS. Even where its frame rates trail the other cards, the GTX 460's price per frame remains better than that of ATI's Radeon HD 5830. Adding a second GeForce GTX 460 in SLI yields frame rates that surpass even the dual-GPU ATI Radeon HD 5970 and very nearly compete with the GTX 480 in SLI.

    Graphics Card Radeon HD5830 GeForce GTX460 Radeon HD5850 GeForce GTX470 Radeon HD5870 GeForce GTX480 Radeon HD5970
    GPU Cores 1120 336 1440 448 1600 480 3200 (1600 per GPU)
    Core Clock (MHz) 800 675 725 608 850 700 725
    Shader Clock (MHz) N/A 1350 N/A 1215 N/A 1401 N/A
    Memory Clock (MHz) 1000 900 1000 837 1200 924 1000
    Memory Amount 1024MB GDDR5 768MB GDDR5 1024MB GDDR5 1280MB GDDR5 1024MB GDDR5 1536MB GDDR5 2048MB GDDR5
    Memory Interface 256-bit 192-bit 256-bit 320-bit 256-bit 384-bit 512-bit (256-bit per GPU)

    DX11: BattleForge

    BattleForge is a free Massively Multiplayer Online Role-Playing Game (MMORPG) developed by EA Phenomic with DirectX-11 graphics capability. Combining strategic cooperative battles, the community of MMO games, and trading card gameplay, BattleForge players are free to put their creatures, spells and buildings into any combinations they see fit. These units are represented in the form of digital cards from which you build your own unique army. With minimal resources and a custom tech tree to manage, the gameplay is remarkably accessible and action-packed.

    Benchmark Reviews uses the built-in graphics benchmark to measure performance in BattleForge, using Very High quality settings (detail) and 8x anti-aliasing with auto multi-threading enabled. BattleForge is one of the first titles to take advantage of DirectX-11 in Windows 7, and offers a very robust color range throughout the busy battleground landscape. The charted results illustrate how performance measures-up between video cards when Screen Space Ambient Occlusion (SSAO) is enabled.

    BattleForge_DX11_Benchmark.jpg

    Cost Analysis: BattleForge (1680x1050)

  • $200 Radeon HD 5830 costs $7.41 per FPS
  • $200 GeForce GTX 460 costs $5.70 per FPS
  • $290 Radeon HD 5850 costs $7.51 per FPS
  • $320 GeForce GTX 470 costs $6.07 per FPS
  • $400 Radeon HD 5870 costs $8.85 per FPS
  • $480 GeForce GTX 480 costs $6.52 per FPS
  • $680 Radeon HD 5970 costs $10.32 per FPS
  • $400 GeForce GTX 460 SLI costs $5.88 per FPS
  • $960 GeForce GTX 480 SLI costs $7.97 per FPS

    Test Summary: With an unbiased appetite for raw DirectX-11 graphics performance, BattleForge appears even-handed towards ATI and NVIDIA products, despite advertised sponsorship from AMD. With high-strain SSAO and 8x anti-aliasing applied, the ASUS GeForce GTX 460 trumps ATI's Radeon HD 5830 at both test resolutions by up to 7 FPS. The improved FPS performance gives the 768MB ASUS GeForce GTX 460 the best value of the bunch, and makes it the only card to offer sub-$6 frames. Not surprisingly, the ASUS GeForce GTX 460 SLI set performs along the same lines, and stays below $6/FPS while competing with the Radeon HD 5970.

    Graphics Card Radeon HD5830 GeForce GTX460 Radeon HD5850 GeForce GTX470 Radeon HD5870 GeForce GTX480 Radeon HD5970
    GPU Cores 1120 336 1440 448 1600 480 3200 (1600 per GPU)
    Core Clock (MHz) 800 675 725 608 850 700 725
    Shader Clock (MHz) N/A 1350 N/A 1215 N/A 1401 N/A
    Memory Clock (MHz) 1000 900 1000 837 1200 924 1000
    Memory Amount 1024MB GDDR5 768MB GDDR5 1024MB GDDR5 1280MB GDDR5 1024MB GDDR5 1536MB GDDR5 2048MB GDDR5
    Memory Interface 256-bit 192-bit 256-bit 320-bit 256-bit 384-bit 512-bit (256-bit per GPU)

    DX11: Metro 2033

    Metro 2033 is an action-oriented video game with a combination of survival horror, and first-person shooter elements. The game is based on the novel Metro 2033 by Russian author Dmitry Glukhovsky. It was developed by 4A Games in Ukraine and released in March 2010 for Microsoft Windows. Metro 2033 uses the 4A game engine, developed by 4A Games. The 4A Engine supports DirectX-9, 10, and 11, along with NVIDIA PhysX and GeForce 3D Vision.

    The 4A engine is multi-threaded in such a way that only PhysX has a dedicated thread; it uses a task model without any pre-conditioning or pre/post-synchronizing, allowing tasks to be done in parallel. The 4A game engine can utilize a deferred shading pipeline, uses tessellation for greater performance, and also has HDR (complete with blue shift), real-time reflections, color correction, film grain and noise; the engine also supports multi-core rendering.

    Metro 2033 features superior volumetric fog, double PhysX precision, object blur, sub-surface scattering for skin shaders, parallax mapping on all surfaces, and greater geometric detail with less aggressive LODs. Using PhysX, the engine supports features such as destructible environments, cloth and water simulation, and particles that can be fully affected by environmental factors.

    Metro-2033_DX11_Benchmark.jpg

    NVIDIA has been diligently working to promote Metro 2033, and for good reason: it is the most demanding PC video game we've ever tested. When their flagship GeForce GTX 480 struggles to produce 27 FPS with DirectX-11 anti-aliasing turned down to its lowest setting, you know that only the strongest graphics processors will generate playable frame rates. All of our tests disable the advanced PhysX options.

    Cost Analysis: Metro 2033 (1680x1050)

  • $200 Radeon HD 5830 costs $10.58 per FPS
  • $200 GeForce GTX 460 costs $10.70 per FPS
  • $290 Radeon HD 5850 costs $12.50 per FPS
  • $320 GeForce GTX 470 costs $11.85 per FPS
  • $400 Radeon HD 5870 costs $14.65 per FPS
  • $480 GeForce GTX 480 costs $14.08 per FPS
  • $680 Radeon HD 5970 costs $15.60 per FPS
  • $400 GeForce GTX 460 SLI costs $11.20 per FPS
  • $960 GeForce GTX 480 SLI costs $14.52 per FPS

    Test Summary: There's no way to ignore the graphical demands of Metro 2033, and only the most powerful GPUs will deliver a decent visual experience unless you're willing to seriously tone down the settings. These demands make our selection of DX11 video cards appear flat, especially since two GeForce GTX 480's combined into an SLI set produced only 46 FPS. While the quality settings will likely need to be reduced to medium levels for adequate gameplay, with advanced depth of field disabled, the ASUS GeForce GTX 460 matched the Radeon HD 5830 for performance. Of course, the Radeon series cannot utilize PhysX, and Metro 2033 only offers advanced PhysX options for NVIDIA GeForce video cards. Comparing GeForce GTX 460 SLI performance against the Radeon HD 5870 shows how two GF104 GPUs can pull frame rates past the competition while offering better value.

    Graphics Card Radeon HD5830 GeForce GTX460 Radeon HD5850 GeForce GTX470 Radeon HD5870 GeForce GTX480 Radeon HD5970
    GPU Cores 1120 336 1440 448 1600 480 3200 (1600 per GPU)
    Core Clock (MHz) 800 675 725 608 850 700 725
    Shader Clock (MHz) N/A 1350 N/A 1215 N/A 1401 N/A
    Memory Clock (MHz) 1000 900 1000 837 1200 924 1000
    Memory Amount 1024MB GDDR5 768MB GDDR5 1024MB GDDR5 1280MB GDDR5 1024MB GDDR5 1536MB GDDR5 2048MB GDDR5
    Memory Interface 256-bit 192-bit 256-bit 320-bit 256-bit 384-bit 512-bit (256-bit per GPU)

    DX11: Unigine Heaven 2.1

    The Unigine "Heaven 2.1" benchmark is a free publicly available tool that grants the power to unleash the graphics capabilities in DirectX-11 for Windows 7 or updated Vista Operating Systems. It reveals the enchanting magic of floating islands with a tiny village hidden in the cloudy skies. With the interactive mode, emerging experience of exploring the intricate world is within reach. Through its advanced renderer, Unigine is one of the first to set precedence in showcasing the art assets with tessellation, bringing compelling visual finesse, utilizing the technology to the full extend and exhibiting the possibilities of enriching 3D gaming.

    The distinguishing feature in the Unigine Heaven benchmark is hardware tessellation, a scalable technology aimed at automatic subdivision of polygons into smaller and finer pieces, so that developers can gain a more detailed look for their games almost free of charge in terms of performance. Thanks to this procedure, the elaboration of the rendered image finally approaches the boundary of veridical visual perception: a virtual reality conjured by your hand. The "Heaven" benchmark excels at providing the following key features:

    • Native support of OpenGL, DirectX 9, DirectX-10 and DirectX-11
    • Comprehensive use of tessellation technology
    • Advanced SSAO (screen-space ambient occlusion)
    • Volumetric cumulonimbus clouds generated by a physically accurate algorithm
    • Dynamic simulation of changing environment with high physical fidelity
    • Interactive experience with fly/walk-through modes
    • ATI Eyefinity support

    Unigine_Heaven_DX11_Benchmark.jpg

    Although Heaven-2.1 was recently released and used for our DirectX-11 tests, the benchmark results were extremely close to those obtained with Heaven-1.0 testing. Since only DX11-compliant video cards will properly test on the Heaven benchmark, only those products that meet the requirements have been included.

    Cost Analysis: Unigine Heaven (1680x1050)

  • $200 Radeon HD 5830 costs $9.48 per FPS
  • $200 GeForce GTX 460 costs $7.33 per FPS
  • $290 Radeon HD 5850 costs $11.79 per FPS
  • $320 GeForce GTX 470 costs $10.53 per FPS
  • $400 Radeon HD 5870 costs $13.65 per FPS
  • $480 GeForce GTX 480 costs $12.63 per FPS
  • $680 Radeon HD 5970 costs $14.59 per FPS
  • $400 GeForce GTX 460 SLI costs $8.10 per FPS
  • $960 GeForce GTX 480 SLI costs $13.22 per FPS
  • Test Summary: Our test results with the Unigine Heaven benchmark tool appear to deliver fair comparisons of DirectX-11 graphics cards. The 768MB NVIDIA GeForce GTX 460 clears past the Radeon HD 5830 by 6 FPS, and also surpasses the GeForce GTX 465 in this test. Reviewers like to say "Nobody plays a benchmark", but it seems evident that we can expect to see great things come from a graphics tool this detailed. For now though, those details only come by way of DirectX-11 video cards. It's worth noting that if there were a cost involved with Unigine's Heaven benchmark, the ASUS GeForce GTX 460 would cost $2.15 less per frame than the Radeon HD 5830, and also cost less than all other DirectX-11 video cards. Two GeForce GTX 460's in SLI cost $5.55 less per frame than the Radeon HD 5870, and also outperform the Radeon HD 5970.

    Graphics Card      | Radeon HD5830 | GeForce GTX460 | Radeon HD5850 | GeForce GTX470 | Radeon HD5870 | GeForce GTX480 | Radeon HD5970
    GPU Cores          | 1120          | 336            | 1440          | 448            | 1600          | 480            | 3200 (1600 per GPU)
    Core Clock (MHz)   | 800           | 675            | 725           | 608            | 850           | 700            | 725
    Shader Clock (MHz) | N/A           | 1350           | N/A           | 1215           | N/A           | 1401           | N/A
    Memory Clock (MHz) | 1000          | 900            | 1000          | 837            | 1200          | 924            | 1000
    Memory Amount      | 1024MB GDDR5  | 768MB GDDR5    | 1024MB GDDR5  | 1280MB GDDR5   | 1024MB GDDR5  | 1536MB GDDR5   | 2048MB GDDR5
    Memory Interface   | 256-bit       | 192-bit        | 256-bit       | 320-bit        | 256-bit       | 384-bit        | 512-bit (256-bit per GPU)

    NVIDIA APEX PhysX Enhancements

    Mafia II is the first PC video game title to include the new NVIDIA APEX PhysX framework, a powerful feature set that only GeForce video cards are built to deliver. While console versions will make use of PhysX, only the PC version supports NVIDIA's APEX PhysX physics modeling engine, which adds the following features: APEX Destruction, APEX Clothing, APEX Vegetation, and APEX Turbulence. PhysX helps make object movement more fluid and lifelike, such as cloth and debris. In this section, Benchmark Reviews details the differences with and without APEX PhysX enabled.

    We begin with a scene from the Mafia II benchmark test, which has the player pinned down behind a brick column as the enemy shoots at him. Examine the image below, which was taken with a Radeon HD 5850 configured with all settings turned to their highest and APEX PhysX support disabled:

    Mafia2_Cloth_High_No-PhysX.jpg

    No PhysX = Cloth Blending and Missing Debris

    Notice in the image above that when PhysX is disabled there is no broken stone debris on the ground. Cloth from the foreground character's trench coat blends into his leg and remains in a static position relative to his body, as does the clothing on other (AI) characters. Now inspect the image below, which uses the GeForce GTX 460 with APEX PhysX enabled:

    Mafia2_Cloth_High_PhysX.jpg

    Realistic Cloth and Debris - High Quality Settings With PhysX

    With APEX PhysX enabled, the cloth sways neatly with the contour of a character's body, and doesn't bleed into solid objects such as body parts. Additionally, APEX Clothing features improve realism by adding gravity and wind effects to clothing, allowing characters to look as they would in similar real-world environments.

    Mafia2_PhysX_Fire.jpg

    Burning Destruction Smoke and Vapor Realism

    Flames aren't exactly new to video games, but smoke plumes and heat vapor that mimic realistic movement have never looked as real as they do with APEX Turbulence. Fire and explosions added into a destructible environment is a potent combination for virtual-world mayhem, showcasing the new PhysX APEX Destruction feature.

    Mafia2_PhysX_Glass.jpg

    Exploding Glass Shards and Bursting Flames

    NVIDIA PhysX has changed video game explosions into something worthy of cinema-level special effects. Bursting windows explode into several unique shards of glass, and destroyed crates bust into splintered kindling. Smoke swirls and moves as if there's an actual air current, and flames move out towards open space all on their own. Surprisingly, there is very little impact on FPS performance with APEX PhysX enabled on GeForce video cards, and very little penalty for changing from medium (normal) to high settings.

    NVIDIA 3D-Vision Effects

    Readers familiar with Benchmark Reviews have undoubtedly heard of NVIDIA GeForce 3D Vision technology; if not from our review of the product, then from the Editor's Choice Award it's earned or the many times I've personally mentioned it in our articles. Put simply: it changes the game. 2010 has been a break-out year for 3D technology, and PC video games are leading the way. Mafia II expands on the three-dimensional effects, and improves the 3D-Vision experience with out-of-screen effects. For readers unfamiliar with the technology, 3D-Vision is a feature only available on NVIDIA GeForce video cards.

    Mafia2_3d-Vision_Characters.jpg

    The first thing gamers should be aware of is the performance penalty for using 3D-Vision with a high-demand game like Mafia II. Using a GeForce GTX 480 video card for reference, currently the most powerful single-GPU graphics solution available, we experienced frame rates up to 33 FPS with all settings configured to their highest and APEX PhysX set to high. When 3D Vision is enabled, the video frame rate usually decreases by about 50%. That is no longer a hard-and-fast rule, however, thanks to '3D Vision Ready' game titles that offer performance optimizations. Mafia II proved that the 3D Vision performance penalty can be as little as 30% with a single GeForce GTX 480 video card, or a mere 11% in SLI configuration. NVIDIA Forceware drivers will guide players to make custom-recommended adjustments specifically for each game they play, but PhysX and anti-aliasing will still reduce frame rate performance.
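
    For a rough sense of what those penalty percentages mean in practice, the sketch below applies each one to a baseline frame rate. The 33 FPS figure is the single GTX 480 result quoted above; the 60 FPS SLI baseline is a hypothetical value used purely for illustration.

    # Frame rate remaining after a given 3D Vision performance penalty.
    def effective_fps(base_fps, penalty_percent):
        return base_fps * (1 - penalty_percent / 100)

    print(effective_fps(33, 50))  # old 50% rule of thumb: ~16.5 FPS
    print(effective_fps(33, 30))  # Mafia II, single GTX 480: ~23.1 FPS
    print(effective_fps(60, 11))  # hypothetical SLI baseline with an 11% penalty: ~53.4 FPS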

    Mafia2_3d-Vision_Tire_Door.jpg

    Of course, the out-of-screen effects are worth every dollar you spend on graphics hardware. In the image above, an explosion sends the car's wheel and door flying into the player's face, followed by metal debris and sparks. When you're playing, this certainly helps to catch your attention... and when the objects become bullets passing by you, the added depth of field helps assist in player awareness.

    Mafia2_3d-Vision_Debris.jpg

    Combined with APEX PhysX technology, NVIDIA's 3D-Vision brings destructible walls to life. As enemies shoot at the brick column, dirt and dust fly past the player forcing stones to tumble out towards you. Again, the added depth of field can help players pinpoint the origin of enemy threat, and improve response time without sustaining 'confusion damage'.

    Mafia2_3d-Vision_Smoke_Plumes.jpg

    NVIDIA APEX Turbulence, a new PhysX feature, already adds an impressive level of realism to games (such as with Mafia II pictured in this section). Watching plumes of smoke and flames spill out towards your camera angle helps put you right into the thick of action.

    Mafia2_3d-Vision_Out-of-Screen.jpg

    NVIDIA 3D-Vision/3D-Vision Surround is the perfect addition to APEX PhysX technology, and capable video games will prove that these features reproduce lifelike scenery and destruction when they're used together. Glowing embers and fiery shards shooting past you seem very real when 3D-Vision pairs itself with APEX PhysX technology, and there's finally a good reason to overpower the PC's graphics system.

    GeForce GTX460 Temperatures

    Benchmark performance tests are always interesting to analyze, so long as you care about comparing one product to another. But when you're an overclocker, gamer, or merely a PC hardware enthusiast who likes to tweak things on occasion, there's no substitute for good information. Benchmark Reviews has a very popular guide written on Overclocking Video Cards, which gives detailed instruction on how to tweak a graphics card for better performance. Of course, not every video card has overclocking headroom. Some products run so hot that they can't suffer any higher temperatures than they already do. This is why we measure the operating temperature of the video card products we test.

    To begin my testing, I use GPU-Z to measure the temperature at idle as reported by the GPU. Next I use FurMark's "Torture Test" to generate maximum thermal load and record GPU temperatures at high-power 3D mode. The ambient room temperature remained at a stable 20°C throughout testing, while the inner-case temperature hovered around 36°C.

    FurMark does two things extremely well: it drives the thermal output of any graphics processor higher than applications or video games realistically could, and it does so with consistency every time. FurMark works great for testing the stability of a GPU as the temperature rises to the highest possible output. The temperatures discussed below are absolute maximum values, and not representative of real-world performance.

    ASUS-ENGTX460-Video-Card-SLI-Termperatures.jpg

    NVIDIA GeForce GTX 460 768MB Video Card Temperatures

    NVIDIA-supplied product specifications state that the GeForce GTX 460 has a maximum GPU thermal threshold of 104°C. We've noted that this new GF104 threshold is one degree less than GF100 Fermi products. In a room with a 20°C ambient temperature, the GeForce GTX 460 SLI set produced a mere 27/27°C at idle and only 65/71°C under load. After a series of warm-blooded GeForce products, it appears that the GTX 460 operates stone cold in comparison.

    Most new graphics cards from NVIDIA and ATI will expel heated air out through exhaust vents, which does not increase the internal case temperature. Our test system is an open-air chassis that allows the video card to depend on its own cooling solution for proper thermal management. Most gamers and PC hardware enthusiasts who use an aftermarket computer case with intake and exhaust fans will usually create a directional airflow current and lower internal temperatures a few degrees below the measurements we've recorded. To demonstrate this, we've built a system to illustrate the...

    Best-Case Scenario

    Traditional tower-style computer cases position internal hardware so that heat is expelled out through the back of the unit. This is better than nothing, but there's a fundamental problem: heat rises. Using the transverse mount design on the SilverStone Raven-2 chassis, Benchmark Reviews re-tested the NVIDIA GeForce GTX 460 video card to determine the 'best-case' scenario.

    Sitting idle at the Windows 7 desktop with a 20°C ambient room temperature, the GeForce GTX 460 rested at 27°C, which was actually one degree higher than measured in a regular computer case. Pushed to abnormally high levels using the FurMark torture test, the GeForce GTX 460 operated at 67°C with a very quiet cooling fan. After some investigation, it seems that the reference thermal cooling solution is better suited to a horizontal orientation. Although the well-designed Raven-2 computer case offers additional cooling features and has helped to make a difference in other video cards, this wasn't the case with the GTX 460... not that it matters at this low of a temperature.

    NVIDIA-GeForce-GTX-460-Heatsink-Cooler.jpg

    NVIDIA GeForce GTX 460 Thermal Cooling Solution Heatsink

    In the traditional (horizontal) position, the slightly angled heat-pipe rods use gravity and a sintered wick to draw cooled liquid back down to the base. When positioned in a transverse-mount case such as the SilverStone Raven-2, the NVIDIA GeForce GTX 460 heatsink loses some of its effectiveness in the lowest heat-pipe rod, because gravity keeps the cool liquid in the lowest portion of the rod within the finsink.

    VGA Power Consumption

    Life is not as affordable as it used to be, and items such as gasoline, natural gas, and electricity all top the list of resources which have exploded in price over the past few years. Add to this the limit of non-renewable resources compared to current demands, and you can see that the prices are only going to get worse. Planet Earth needs our help, and needs it badly. With forests becoming barren of vegetation and snow-capped poles quickly turning brown, the technology industry has a new attitude towards turning "green". I'll spare you the powerful marketing hype that gets sent from various manufacturers every day, and get right to the point: your computer hasn't been doing much to help save energy... at least up until now.

    For power consumption tests, Benchmark Reviews utilizes the 80-PLUS GOLD certified OCZ Z-Series Gold 850W PSU, model OCZZ850. This power supply unit has been tested to provide over 90% typical efficiency by Chroma System Solutions, however our results are not adjusted for consistency. To measure isolated video card power consumption, Benchmark Reviews uses the Kill-A-Watt EZ (model P4460) power meter made by P3 International.

    A baseline test is taken without a video card installed inside our test computer system, which is allowed to boot into Windows-7 and rest idle at the login screen before power consumption is recorded. Once the baseline reading has been taken, the graphics card is installed and the system is again booted into Windows and left idle at the login screen. Our final loaded power consumption reading is taken with the video card running a stress test using FurMark. Below is a chart with the isolated video card power consumption (not system total) displayed in Watts for each specified test product:

    Video Card Power Consumption by Benchmark Reviews

    VGA Product Description (sorted by combined total power)    Idle Power    Loaded Power
    NVIDIA GeForce GTX 480 SLI Set                                   82 W         655 W
    NVIDIA GeForce GTX 590 Reference Design                          53 W         396 W
    ATI Radeon HD 4870 X2 Reference Design                          100 W         320 W
    AMD Radeon HD 6990 Reference Design                              46 W         350 W
    NVIDIA GeForce GTX 295 Reference Design                          74 W         302 W
    ASUS GeForce GTX 480 Reference Design                            39 W         315 W
    ATI Radeon HD 5970 Reference Design                              48 W         299 W
    NVIDIA GeForce GTX 690 Reference Design                          25 W         321 W
    ATI Radeon HD 4850 CrossFireX Set                               123 W         210 W
    ATI Radeon HD 4890 Reference Design                              65 W         268 W
    AMD Radeon HD 7970 Reference Design                              21 W         311 W
    NVIDIA GeForce GTX 470 Reference Design                          42 W         278 W
    NVIDIA GeForce GTX 580 Reference Design                          31 W         246 W
    NVIDIA GeForce GTX 570 Reference Design                          31 W         241 W
    ATI Radeon HD 5870 Reference Design                              25 W         240 W
    ATI Radeon HD 6970 Reference Design                              24 W         233 W
    NVIDIA GeForce GTX 465 Reference Design                          36 W         219 W
    NVIDIA GeForce GTX 680 Reference Design                          14 W         243 W
    Sapphire Radeon HD 4850 X2 11139-00-40R                          73 W         180 W
    NVIDIA GeForce 9800 GX2 Reference Design                         85 W         186 W
    NVIDIA GeForce GTX 780 Reference Design                          10 W         275 W
    NVIDIA GeForce GTX 770 Reference Design                           9 W         256 W
    NVIDIA GeForce GTX 280 Reference Design                          35 W         225 W
    NVIDIA GeForce GTX 260 (216) Reference Design                    42 W         203 W
    ATI Radeon HD 4870 Reference Design                              58 W         166 W
    NVIDIA GeForce GTX 560 Ti Reference Design                       17 W         199 W
    NVIDIA GeForce GTX 460 Reference Design                          18 W         167 W
    AMD Radeon HD 6870 Reference Design                              20 W         162 W
    NVIDIA GeForce GTX 670 Reference Design                          14 W         167 W
    ATI Radeon HD 5850 Reference Design                              24 W         157 W
    NVIDIA GeForce GTX 650 Ti BOOST Reference Design                  8 W         164 W
    AMD Radeon HD 6850 Reference Design                              20 W         139 W
    NVIDIA GeForce 8800 GT Reference Design                          31 W         133 W
    ATI Radeon HD 4770 RV740 GDDR5 Reference Design                  37 W         120 W
    ATI Radeon HD 5770 Reference Design                              16 W         122 W
    NVIDIA GeForce GTS 450 Reference Design                          22 W         115 W
    NVIDIA GeForce GTX 650 Ti Reference Design                       12 W         112 W
    ATI Radeon HD 4670 Reference Design                               9 W          70 W
    * Results are accurate to within +/- 5W.
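
    To be clear about how the chart above is built, each isolated figure is the wall-meter reading with the card installed minus the card-less baseline reading, as described in the methodology earlier. The sketch below shows that subtraction; the raw meter readings are hypothetical examples, and only the resulting 18 W and 167 W deltas correspond to our GTX 460 measurements.

    # Isolated card power = wall reading with the card installed minus the
    # baseline reading taken with no discrete graphics card installed.
    # The raw meter readings below are hypothetical examples.
    baseline_idle = 110        # Watts: system idle at the login screen, no video card
    idle_with_card = 128       # Watts: same idle state with the GTX 460 installed
    furmark_with_card = 277    # Watts: FurMark 'torture' test running

    print("Idle:", idle_with_card - baseline_idle, "W")        # 18 W
    print("Loaded:", furmark_with_card - baseline_idle, "W")   # 167 W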

    The ASUS GeForce GTX 460 requires two six-pin PCI-E power connections. Resting at idle, the card consumed only 18 watts of electricity... 8W less than the ATI Radeon HD 5830 and exactly half the amount required for the GeForce GTX 465. Once 3D applications begin to demand power from the GPU, electrical power consumption climbs to full throttle. Measured under 3D 'torture' load using FurMark, the GeForce GTX 460 consumed 167 watts, which is slightly higher than the Radeon 5830 but well below the 219W consumed by the GTX 465. Although the GF104 Fermi GPU features the same 40nm fabrication process as the GF100, it's clear that NVIDIA's GTX 460 is better suited for 'Green' enthusiasts.

    Adding a second GeForce GTX 460 into SLI may create a pair of video cards capable of surpassing ATI's Radeon HD 5870, but it also increases power consumption beyond the 5870 as well. At idle, the GeForce GTX 460 SLI set consumes roughly 44 watts of power. Pressed well beyond their normal energy needs with Furmark, the SLI pair uses 285 watts. A single Radeon HD 5870 video card used 240W in the same test, which measures 45W less than the GeForce GTX 460 SLI set.
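
    Pulling those numbers together, here is a quick sketch using only the wattages quoted in this section:

    # Figures quoted in this section (isolated card power, in Watts).
    gtx460_single = {"idle": 18, "load": 167}
    gtx460_sli    = {"idle": 44, "load": 285}
    radeon_5870   = {"idle": 25, "load": 240}

    # The SLI pair draws less than double a single card under FurMark load...
    print(gtx460_sli["load"], "W vs", 2 * gtx460_single["load"], "W for two isolated cards")
    # ...but still exceeds the single-GPU Radeon HD 5870 by 45 W.
    print(gtx460_sli["load"] - radeon_5870["load"], "W above the Radeon HD 5870")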

    GeForce GTX 460 Overclocking

    If there's one particular message this article should impress upon the reader, it would be that the NVIDIA GeForce GTX 460 is an overclocker's dream. The GF104-equipped GTX 460 already comes with an impressive stock clock of 675/1350 MHz, with the GDDR5 running at 900 MHz (1800 DDR). Putting this into perspective, these speeds fall between the GeForce GTX 470 and GTX 480 (closer to the latter). Now comes the fun part: overclocking the GeForce GTX 460 is as easy as it's ever been.

    Back in the day, software overclocking tools were few and far between. Benchmark Reviews was literally put on the map with my first article: Overclocking the NVIDIA GeForce Video Card. Although slightly dated, that article is still relevant for enthusiasts wanting to permanently flash their overclock onto the video card's BIOS. Unfortunately, most users are not so willing to commit their investment to such risky changes, and feel safer with temporary changes that can be easily undone with a reboot. That's the impetus behind the sudden popularity of software-based GPU overclocking tools.

    NVIDIA offers one such tool with their System Tools suite, formerly available as NVIDIA nTune. While the NVIDIA Control Panel interface is very easy to understand and navigate, its downfall lies in the tool's simplicity: it doesn't offer the overclocking range that AIC partners provide in their own branded software tools. For example, using the NVIDIA System Tools utility to overclock the GeForce GTX 460 (1GB version), I was able to set the graphics clock up to 1350MHz (which is way out of realistic range) but the memory clock was limited to 2160. As it turns out, overclocking the GTX 460's GDDR5 from 1800 to well past 2160 MHz was possible.

    NVIDIA-nTune-GeForce-GTX-460-1GB.png

    After hitting a brick wall with the NVIDIA System Tools utility (nTune), I turned to the MSI Afterburner utility (based on Riva Tuner). Knowing that the GeForce GTX 460 would need a wider range of clock speeds, I began overclocking with MSI Afterburner 1.6.1.

    My mission was simple: locate the highest possible overclock without adding any additional voltage. Starting with memory, I slowly raised the GDDR5 clock speed... until I once again reached the limit of this tool: 1170 MHz (2340 MHz data rate). While I would have liked to go further, and take the ASUS GeForce GTX 460's 1GB GDDR5 memory as far as it could possibly go, the jump from 900 to 1170 MHz yielded very little frame rate improvement. Using only a memory overclock, Aliens vs Predator performance rose from 22.2 to 23.5 FPS at 1920x1200. Far Cry 2 performance was bumped from 57.36 to 58.85. Now it was time to work some magic on the GF104 GPU.

    MSI-Afterburner-GeForce-GTX-460-1GB.png

    As a best practice, it's good to find the maximum stable GPU clock speed and then drop back 10 MHz or more. While the GeForce GTX 460 was stable in many tests up to 880 MHz, there was an occasional graphics defect. In the end, I decided that 855 MHz with full-time stability is a far better proposition than crashing out midway through battle. Adding the 855/1710 MHz GPU overclock onto the 1170 MHz GDDR5 overclock resulted in some very impressive gains!

    Far Cry 2 video frame rate performance increased from 57.36 FPS to 71.67, equaling a 25% bump in performance for this DirectX-10 video game. That's a nice start, but DirectX-11 is really where the market's at these days... so I turned to AvP for another series of test runs.

    Aliens vs Predator jumped from 22.2 FPS at stock/default settings to 28.3 FPS, resulting in a nearly 28% performance gain. What does that 28% mean to you? For a mere $200~$220, the ASUS GeForce GTX 460 just matched the $400 ATI Radeon HD 5870... and that's just a single video card. We've already demonstrated how two GeForce GTX 460's perform in SLI. In the next section, I offer my opinion on Fermi's updated architecture and add a little salt into ATI's fresh wound...
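
    The percentage gains quoted above come straight from the before/after frame rates, and the memory figures follow the same doubled data-rate convention used throughout this article (900 MHz GDDR5 quoted as 1800 DDR). A minimal sketch:

    # Percent gain from the 855/1710 MHz GPU + 1170 MHz GDDR5 overclock,
    # using the frame rates reported above.
    def percent_gain(stock_fps, overclocked_fps):
        return (overclocked_fps - stock_fps) / stock_fps * 100

    print(f"Far Cry 2:          {percent_gain(57.36, 71.67):.1f}%")  # ~25%
    print(f"Aliens vs Predator: {percent_gain(22.2, 28.3):.1f}%")    # ~27.5%

    # GDDR5 clocks are quoted alongside their doubled data rate in this article.
    for memory_clock in (900, 1170):
        print(f"{memory_clock} MHz GDDR5 -> {memory_clock * 2} MHz effective data rate")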

    Editor's Opinion: NVIDIA Fermi

    My opinion of NVIDIA's Fermi architecture has changed over the past several months, as they've developed their graphics processor to fully embody the originally unclear long-term plan. Testing with NVIDIA's GF100 GPU held its own set of challenges, and many times the video cards based on this graphics processor seemed condemned by the inherited legacy of problems. From the flagship GeForce GTX 480 down to the GTX 465, Fermi impressed gamers with strong FPS performance... and that was about it. Thermal output and power consumption were unfashionably high, to which ATI constantly and consistently focused their marketing attacks. Then along comes GF104 on the GeForce GTX 460.

    NVIDIA's GeForce GTX 460 not only changes the collective opinion of their Fermi architecture, it also changes the GPU landscape. ATI held the upper hand by releasing a DirectX-11 video card first, but they've painted themselves into a corner with their Evergreen GPU. Unlike NVIDIA's Fermi architecture, which can shape-shift as desired, ATI's Cedar, Redwood, and Juniper GPUs are all simply slices of the same processor: Cypress. This is where intelligent consumers will spot the flaw: ATI came to the (video) card game and showed their entire hand from the first deal, while NVIDIA had a few spare aces up their sleeves. NVIDIA's GeForce GTX 480 is only 15/16 of the complete GF100 package, and we're just beginning to see what's possible with a 7/8-whole GF104 GPU. It's unknown what NVIDIA has planned for the GF102, GF106, and GF108... although the speculation is rampant.

    So now ATI and NVIDIA are even-steven in the race for DirectX-11, and all they need are video games to increase demand for their products. This becomes a real problem (for them both) because very few existing games demand any more graphical processing power than games demanded back in 2006. Video cards have certainly gotten bigger and faster, but video games have lacked fresh development. DirectX-10 helped the industry, but every step forward came with two steps back because of the dislike for Microsoft's Windows Vista O/S. Introduced with Windows 7 (and also available for Windows Vista with an update), DirectX-11 now gives enthusiasts added detail and special effects in their video games.

    NVIDIA-GeForce-Fermi-Product-Family.jpg

    NVIDIA GeForce Fermi Graphics Card Family

    Even if you're only after raw gaming performance and have no real-world interest in CUDA, there's reason to appreciate the GF100 GPU. New enhancement products, such as the NVIDIA GeForce 3D Vision Gaming Kit, double the demands on frame rate output and hence require more powerful graphics processing. This is where products like the GeForce GTX470 and GTX480 deliver the performance necessary to enjoy the extended gaming experience. I'm a huge fan of GeForce 3D-Vision, which is why it's earned our Editor's Choice Award, and Fermi delivers the power necessary to drive up to three monitors. The newly dubbed NVIDIA 3D-Vision Surround (stereo) requires three 3D-Vision capable LCD, projector, or DLP devices and offers bezel correction support. Alternatively, NVIDIA Surround (non-stereo) supports mixed displays with common resolution/timing.

    Even some older game titles benefit from the Fermi architecture, beyond just an increase in frame rates. For example, Far Cry 2 will receive 32x CSAA functionality native to the game, and future NVIDIA Forceware driver updates could further add new features into existing co-developed video games. Additionally, NVIDIA NEXUS technology brings CPU and GPU code development together in Microsoft Visual Studio 2008 for a shared process timeline. NEXUS also introduces the first hardware-based shader debugger. NVIDIA's GF100 is the first GPU to ever offer full C++ support, the programming language of choice among game developers.

    Fermi is also the first GPU to support Error Correcting Code (ECC) based protection of data in memory. ECC was requested by GPU computing users to enhance data integrity in high performance computing environments. ECC is a highly desired feature in areas such as medical imaging and large-scale cluster computing. Naturally occurring radiation can cause a bit stored in memory to be altered, resulting in a soft error. ECC technology detects and corrects single-bit soft errors before they affect the system. Fermi's register files, shared memories, L1 caches, L2 cache, and DRAM memory are ECC protected, making it not only the most powerful GPU for HPC applications, but also the most reliable. In addition, Fermi supports industry standards for checking of data during transmission from chip to chip. All NVIDIA GPUs include support for the PCI Express standard for CRC check with retry at the data link layer. Fermi also supports the similar GDDR5 standard for CRC check with retry (aka "EDC") during transmission of data across the memory bus.

    The true potential of NVIDIA's Fermi architecture has yet to be seen. Sure, we've already poked around at the inner workings in our NVIDIA GF100 GPU Fermi Graphics Architecture article, but there's so much more that goes untested. Well into 2010, only a beta version of the Folding@Home client is available. Work unit performance on the GeForce GTX 400-series is going to surpass ATI's Radeon HD 5000 series equivalents without much struggle, but it's uncertain how much better the performance will be compared to previous generations.

    GeForce GTX 460 Conclusion

    IMPORTANT: Although the rating and final score mentioned in this conclusion are made to be as objective as possible, please be advised that every author perceives these factors differently at various points in time. While we each do our best to ensure that all aspects of the product are considered, there are oftentimes unforeseen market conditions and manufacturer changes which occur after publication that could render our rating obsolete. Please do not base any purchase solely on our conclusion, as it represents our product rating specifically for the product tested, which may differ from future versions. Benchmark Reviews begins our conclusion with a short summary for each of the areas that we rate.

    Tested as a single graphics card, frame rate performance delivers a big win for the ASUS GeForce GTX 460 over ATI's Radeon HD 5830 video card, which shares the same $200 retail price point. Our performance rating considers how effectively the 768MB ASUS GeForce GTX 460 DirectX-11 video card performs against competing products from both ATI and NVIDIA. Taking all of the collected results into consideration, the 768MB ASUS ENGTX460/2DI/768MD5 GeForce GTX 460 outperforms ATI's Radeon HD 5830 in nearly every benchmark test conducted. Further dominating the Radeon 5830, NVIDIA's GeForce GTX 460 offers PhysX and 3D-Vision functionality, produces less heat, and requires less power at idle. It easily captures the best price-to-performance ratio for most games tested, especially in BattleForge. At the end of our test gauntlet, the results prove that NVIDIA's GeForce GTX 460 is the best choice at $200... and also a better choice than their own GeForce GTX 465.

    Combining two 768MB GeForce GTX 460 video cards into SLI gives NVIDIA a serious threat at the $400 price point. For this same price, ATI offers their Radeon HD 5870 single-GPU flagship. As our benchmark tests have demonstrated, two 768MB GeForce GTX 460's are better than one ATI Radeon HD 5870... to a certain extent. Sure, our GTX 460 SLI set surpassed the Radeon HD 5870 in most tests, but at 1680x1050 the graphics strain is rather modest. There were a few occasions where the twin GF104 GPUs didn't fare so well when the resolution increased to 1920x1200. Since most users will likely start with one GTX 460 on a 1680x1050 monitor and add a second card into SLI once they have the money, this could be a non-issue. Still, the potential exists.

    ASUS-ENGTX460-Video-Card-SLI.jpg

    ASUS GeForce GTX460 Video Cards in SLI

    Appearance is going to be a more subjective matter, since so many of NVIDIA's AIC partners have custom cooling solutions, each with their own unique look. The ASUS GeForce GTX 460 ENGTX460/2DI/768MD5 we've tested in this article takes on the reference design, while the soon-to-be tested ASUS ENGTX460-TOP DirectCU goes well beyond what's necessary. If you're planning to combine two of these units into SLI, the reference cooling solution already cools the GeForce GTX 460 down to near-ambient levels at idle and keeps both cards extremely cool under load. Personally, I prefer externally exhausting video cards to avoid heat build-up inside the computer case.

    In terms of video card pecking order, the NVIDIA GeForce GTX 460 occupies the #4 spot just barely behind the GTX 465 in the NVIDIA product lineup, and sits between the ATI Radeon HD 5830 and 5850. With so much power in a mid-level graphics board, this often creates an interest in paired SLI sets. As this article has proven, SLI is not only possible - it's ideal. Unfortunately, triple-SLI is not an option... but NVIDIA 3D-Vision Surround certainly is. As the first GF104 product (and fourth Fermi iteration), the GeForce GTX 460 has been designed with the same solid construction as its predecessors. There are exposed electronics on the back of the PCB, but nothing that protrudes enough to require a metal back-plate for protection. The top side of the graphics card features a protective plastic fan shroud with a recessed concave opening for the 75mm fan, which allows for airflow in SLI configurations.

    While most PC gamers and hardware enthusiasts buy a discrete graphics card for the sole purpose of playing video games, there's a very small niche who depend on extra features beyond fast video frame rates. NVIDIA is the market leader in GPGPU functionality, and it's no surprise to see CPU-level technology available in their GPU products. NVIDIA's Fermi architecture is the first GPU to ever support Error Correcting Code (ECC), a feature that benefits both personal and professional users. Proprietary technologies such as NVIDIA Parallel DataCache and NVIDIA GigaThread Engine further add value to GPGPU functionality. Additionally, applications such as Adobe Photoshop or Premiere can take advantage of GPGPU processing power. In case the point hasn't already been driven home, don't forget that 3D Vision and PhysX are technologies only available through NVIDIA. With an SLI set, you can assign one card to PhysX tasks and use the other uninhibited.

    Defining product value means something different to everyone. Some readers take heat and power consumption into consideration, while others are only concerned with FPS performance. With regard to value, there are several ways to look at the $200 GeForce GTX 460 768MB GDDR5 version and compare it to the closest rivals: such as the $200 ATI Radeon HD 5830. In terms of product price to FPS performance, the GeForce GTX 460 is one of the most affordable DirectX-11 video card products available. The ASUS ENGTX460/2DI/768MD5 GeForce GTX 460 768MB video card is available at NewEgg for $200. There are also several other options to consider:

    GeForce GTX 460 - 768MB Versions
  • $200 ASUS ENGTX460/2DI/768MD5 GeForce GTX 460 768MB
  • $200 MSI N460GTX-M2D768D5 GeForce GTX 460 768MB
  • $200 GIGABYTE GV-N460D5-768I-B GeForce GTX 460 768MB
  • $200 EVGA 768-P3-1360-TR GeForce GTX 460 768MB
  • $200 Palit NE5TX460FHD79 GeForce GTX 460 768MB
  • $210 EVGA 768-P3-1362-TR GeForce GTX 460 Superclocked
  • $210 GIGABYTE GV-N460OC-768I GeForce GTX 460 OC 768MB
  • $210 MSI N460GTX Cyclone 768D5/OC GeForce GTX 460 768MB
  • $220 EVGA 768-P3-1362-AR GeForce GTX 460 Superclocked

    GeForce GTX 460 - 1GB Versions
  • $220 Palit NE5X460SF1102 GeForce GTX 460 Sonic 1GB
  • $230 GIGABYTE GV-N460OC-1GI GeForce GTX 460 1GB
  • $245 Palit NE5X460HF1102 GeForce GTX 460 SONIC PLATINUM

    In conclusion, NVIDIA's GF104 Fermi GPU is exactly what the mid-range discrete graphics market needed. The 768MB ASUS GeForce GTX 460 beats the ATI Radeon HD 5830 at the $200 price point, and two GTX 460's combined into SLI usually trump the $400 Radeon HD 5870. It's great to see NVIDIA offer budget-minded gamers so much with the GTX 460, primarily because this video card wins over the segment, but also because it overclocks well into the next level of graphics products. To the delight of many, temperatures are way down, suggesting this could be the coolest-running mid-level NVIDIA video card in a very long time. Idle power draw was a mere 18 watts by our measure, demonstrating that the GeForce GTX 460 is more efficient than ATI's Radeon HD 5830 - and demands half the power of the GTX 465. Even in SLI, the GeForce GTX 460 stands strong and surpasses the Radeon HD 5870 in most tests... except power consumption. What could be the best pitch for the GeForce GTX 460 is that one affordable video card can work wonders, and saving to add a second card into SLI is still within reach for many gamers.

    EDITOR'S NOTE: Benchmark Reviews has also separately published our review of the 1GB NVIDIA GeForce GTX 460.

    Benchmark Reviews Golden Tachometer Award

    Pros:

    + Impressively cool operating temperatures!
    + Consumes only 18 watts at idle
    + Best performer at $200 - beats Radeon 5830
    + SLI set surpasses Radeon HD 5870 performance
    + Fermi Introduces Error Correcting Code (ECC)
    + Excellent price-to-performance cost ratio
    + Fan exhausts all heated air outside of case
    + Quiet cooling fan under loaded operation
    + Includes native HDMI audio/video output
    + Adds 32x CSAA post-processing detail
    + Supports SLI functionality

    Cons:

    - Triple-SLI not supported
    - SLI set doesn't impress at extreme resolutions

    Ratings:

    • Performance: 8.75
    • Appearance: 9.00
    • Construction: 9.75
    • Functionality: 9.50
    • Value: 8.75

    Final Score: 9.15 out of 10.

    Excellence Achievement: Benchmark Reviews Golden Tachometer Award.

    Benchmark Reviews encourages you to leave comments (below), or ask questions and join the discussion in our Forum.



    Comments 

     
    # RE: ASUS GeForce GTX 460 SLI PerformanceServando Silva 2010-08-02 20:38
    Very informative article Olin. I'm very tempted to change my HD 5850 and get a pair of these puppies. According to your experience, which would represent a better buy? GTX460 768MB SLI? or GTX460 1GB SLI? I can save quite a few bucks by going for the 768MB editions, but would 1GB be worth it for an SLI setup?
    Great article!
     
     
    # 768MB vs 1GBOlin Coles 2010-08-02 21:45
    If you've already got a 5850, I don't see any reason to upgrade. If anything, look for another and put them into CrossFire. As for 768MB vs 1GB variants, I think my suggestion depends on your resolution. 1680x1050 is fine with a set of 768MB GeForce GTX 460's, but if you're running 1920x1200 or higher you'll want the 1GB versions.
     
     
    # RE: 768MB vs 1GBServando Silva 2010-08-02 22:04
    CrossfireX scaling is worse than SLI scaling, right? I think HD5850 CF performs slower than GTX460 SLI (that's why I want to change), but I might be wrong. And I want it for 3840x1080 resolutions as that's what I use most of the time (if not possible, I use 1920x1080).
     
     
    # Any hints of a 3xSLI?Mark N 2010-08-03 07:18
    Thanks for the great article! My biggest concern is the 2xSLI limitation. A single 480, while slower than a 2x460 setup affords the option to upgrade with a 2nd or 3rd 480 later as needed without discarding the previous card. Are there any rumors floating about anything on the horizon with greater expandability?
     
     
    # Different price segmentsOlin Coles 2010-08-03 07:22
    The GTX 480 is a $480 product that could buy two 1GB GTX 460's and still have money left over... suffice it to say that it commands an entirely different price segment. In SLI, we're talking about $400 worth of video cards vs $960, and that's a huge difference.
     
     
    # Any hints of a 3xSLI?Mark N 2010-08-03 07:48
    Absolutely. My thoughts were do I buy one 480 for now instead of two 460s. It seems the worse purchase for now, but in the future I would have the option to purchase another 480 if the 2x460 wasn't still cutting it.

    Were I to go with a 2x460 purchase I'm at the limit of my expansion and if the cards don't hold up, I'd have to discard/sell them and start over.

    I'm probably worrying about nothing though. A 2x460 setup, even before overclocking is such a huge jump from my GTX275 (OC 666MHz). I imagine it will more than satisfy my needs for this generation of cards. I'm just a worrier.
     
     
    # RE: Any hints of a 3xSLI?Olin Coles 2010-08-03 07:51
    That would certainly take you to the top. What kind of resolution are you using? If it's 1920x1200 or larger, the GTX 480 makes sense. Also, if you'd like to see how the GTX 460 (or anything else) holds up against the GTX 285 (which is slightly faster than your GTX 275), read the single-card GTX 460 reviews.

    There's no indication that the GF104 will support triple-SLI. It appears to be reserved for their GF100 GPU.
     
     
    # Any hints of a 3xSLI?Mark N 2010-08-03 08:02
    Two monitors. Main screen is 1920x1080 and the 2nd is 1680x1050. Usually the 2nd screen just has web/vent/messenger running on it while I play. Occasionally I'll play a DVD or video on the 2nd screen while gaming.
     
     
    # RE: Any hints of a 3xSLI?Olin Coles 2010-08-03 08:05
    I understand the logic behind your decision, but I wouldn't count on ever buying a second GTX 480. By the time it's necessary, they'll be long gone. Besides, just one GTX 480 will push any game you've got (except Metro 2033) with settings at their highest.
     
     
    # RE: ASUS GeForce GTX 460 SLI Performancedlb 2010-08-04 08:35
    Great review! Very informative.... however, I think I found a typo on page 5.... in the graph that shows the specs of the various individual NVidia and ATI cards, in the last row, it says the ATI HD5870 has a memory interface of only 192bit.... shouldn't that read 256bit? Other than that, good stuff here Olin. Keep up the great work!
     
     
    # RE: RE: ASUS GeForce GTX 460 SLI PerformanceOlin Coles 2010-08-04 08:37
    You're right... it's 256-bit. I've got it fixed now, thanks!
     
     
    # one questionFederico La Morgia 2010-08-07 05:43
    what is written on the RAM chips?
     
     
    # ^_^Setsunayaki 2010-08-11 04:46
    I still take a single 480 GTX over an SLI 460 GTX. 768MB of Memory is too small today for games. 1024MB is ok, but there are reviews all over the place now on how 1024MB for some setups is too little as well...

    That 460 SLI is great if you plan on playing one game at a certain resolution and tech setup for many years. The two games from this year REVIEWED...AvP and Metro2033 are games requiring more than 1GB at max settings...So it means SLI is playable at lower resolution but can't max most games at high resolution from 2010 which means 460 SLI will utterly fail in games from 2011 which will be even more demanding and will come with full physics engines.

    1536 - 2048MB VRAM is what you should plan on having prior to going full SLI or Crossfire, specially if you are going triple monitor you need all the VRAM you can get. 1024MB is the new "baby-face" minimum.
     
     
    # RE: ^_^Olin Coles 2010-08-11 06:54
    You're missing an important point: not everyone can afford $500 up front for the GTX 480. Think of this as high-performance layaway, because you can get a good bit of performance for $200 now and then add another card when you have more money. Two GTX 460's in SLI easily beat a single GTX 480, and offer more memory buffer too. 1536 - 2048MB of vRAM is exactly what you get with two GTX 460's... you don't get that with a single GTX 480. Finally, users with a 460 are not likely to use a resolution higher than 1920x1050, so it's a perfect fit.
     
     
    # single 4870X2 to gtx460 SLI or single 480 ?mgph 2010-08-12 00:02
    thank you so much for your very detailed articles as well as answering every question. I am also thinking to upgrade my rigs so and if you don't mind giving your advice, here is my rigs:
    i7 930 - OC to 4.0ghz with V6 fan cooler
    RAM - Xtreem 2x3 6gbs
    GPU - single HIS 4870X2
    Mobo - Asus P6X58D Premium
    PSU - Antec True Power 800w
    Resolution - 1900x1020
    Driver - Catalyst 10.7a
    Since the graphics card give me so much problems, like minor stuttering and inconsistent fps in some games(yes, Crysis is one of them). My biggest disappointment is in Starcraft 2, though I can max out all, during some cutscenes and missions, the fps drops significantly (18-22). So, I am thinking to upgrade my graphics card and can you kindly suggest me which one I should choose: 460 SLI, single 480 or 5970 ? I think I will need to replace my PSU with higher 1k or 1.2k though if I do SLI.
     
     
    # RE: single 4870X2 to gtx460 SLI or single 480 ?Olin Coles 2010-08-12 07:30
    You're going to get stuttering issues whenever two GPUs are involved: SLI, CrossFire, and dual-GPU cards like the 5970. If you tell me what your budget is, I can help you find the best build. Also, some of that budget should be used on a SSD.
     
     
    # RE: single 4870X2 to gtx460 SLI or single 480 ?mgph 2010-08-12 17:25
    Thanks for your help and no, stuttering issues are gone away when I tested both Crysis and Crysis:Warhead on my bro's pc who changed from the same graphics card(yes he also had stuttering issues and inconsistent fps) to gtx 480 SLI with same CPU (i7 920 OC 3.4).
    Budget-wise, I am open to anything but I just want to get best of both world, value + performance since I believe why should I waste my money if I just need to take some time to search for the best instead of buying the most expensive(either gtx480 or 5970) which seems to be most powerful without knowing anything.
    Since articles from you and tomshardware showing how new fermi is promising ##tomshardware.com/reviews/geforce-gtx-460-sli-geforce-gtx-480,2694.html, I am thinking to give it a try but I am also worried if later when I want to upgrade, I will be in dead end. For SSD, any enlightenment for why I should use it ? I just know that it would be great for OS booting up part and data consistency.
     
     
    # RE: RE: single 4870X2 to gtx460 SLI or single 480 ?Olin Coles 2010-08-12 20:32
    I think that the GTX 480 is a far better purchase than the Radeon 5970... but I think that two GTX 460's are much better than one 480.

    SSDs will make everything open faster, and levels will load much quicker.
     
     
    # RE: RE: RE: single 4870X2 to gtx460 SLI or single 480 ?mgph 2010-08-12 21:24
    Oh I see. Now what I should be thinking is do I want to make mine as future (next at lest 3-5years) proof? If yes, grab a GTX 480 first and add sli later when got money. Or to get the best performance+value atm, gtx 460sli is the right choice. Thanks for the info !

    What I understand is like SSDs is reading from flash memory instead of magnetic tape so it will reduce the time for cycling and finding data from tape. Okay but how about games ? Since if I do partition, games will take huge amount (at least 200-300gb) and for that whole to be faster loading time, I need to use SSD and install all the game on that SSD ? That seems like it will be quite expensive though. Okay, thanks for the enlightenment and appreciate your help !
     
     
    # RE: RE: RE: RE: single 4870X2 to gtx460 SLI or single 480 ?Olin Coles 2010-08-13 06:09
    Since you need so much storage space, I suggest you look at the Seagate Momentus-XT Solid State Hybrid Drive: /index.php?option=com_content&task=view&id=525&Itemid=60
     
     
    # RE: RE: RE: RE: RE: single 4870X2 to gtx460 SLI or single 480 ?Adam 2010-08-13 06:19
    Would that really offer any significant benefits for gaming? Due to the very small amount of flash memory on it, surely it's only really going to have an effect on boot times.

    I was interested when it was released, but pondered that for most purposes it wouldn't give any advantage whatsoever.
     
     
    # RE: RE: RE: RE: RE: RE: single 4870X2 to gtx460 SLI or single 480 ?Olin Coles 2010-08-13 06:31
    Games and the other programs would all open faster, and game levels/maps would change more quickly. Video frame rates would not be changed.
     
     
    # RE: RE: RE: RE: RE: RE: RE: single 4870X2 to gtx460 SLI or single 480 ?Adam 2010-08-13 07:57
    Yeah, for a regular SSD. But the Momentus XT only has 4GB of flash memory combined with its adaptive tech, so it 'might' load one game (or perhaps only certain maps) at SSD speeds but everything else will be stuck back in the 7200rpm HDD zone.

    I'm unsure of how well it works with gaming, just a bit sceptical.
     
     
    # Picky observationGorham4 2010-08-19 11:44
    Olin, just found your site and I'm enjoying your in-depth reviews. On a personal note I would have found it very interesting to see some of the ATI product in CrossFire as a comparison.

    While I believe that NVIDIA has a real winner in the 460, the SLI scaling is really impressive, so comparing a 5830 CrossFire setup would have been nice. Keep up the good work.
     
     
    # RE: Picky observationOlin Coles 2010-08-19 11:52
    If I had a second 5830 we would include CrossFire results, but alas, I do not. :(

    Still, since one GTX 460 easily beats a single 5830, and since two 460's in SLI are nearly twice the performance of one, it could be surmised that a pair of 5830's isn't going to beat an SLI set of 460's anyway.
     
     
    # more of a scaling testGorham4 2010-08-19 12:15
    I agree, per your single-card 460 review, that the 460 edges out the 5830 in most benchmarks at or above 1920. I'm interested in the scaling efficiency.

    I'm currently running 5870's in CF. It works great at 2560X1600 and the picture quality is fantastic (BC2 in DX11 is unreal). I held off until after the disappointing 480 launch where heat and power consumption was unacceptable to me.

    Nvidia's SLI has always seemed more mature\robust to me so any comparisons between same level cards in single and dual modes is much appreciated.

    Again thanks for the great reviews I've been sending links to this site out to friends I don't know how you've slipped under our radar :)
     
     
    # RE: ASUS GeForce GTX 460 SLI Performancequickquestion 2011-01-31 19:02
    Could 2 of these cards handle Crysis on Medium with No AA, on 3 1680X1050 Screens? I mean like 35-50 FPS is fine for me. Thanks, i know I have been asking alot of questions lately, just looking for advice.
     
     
    # RE: RE: ASUS GeForce GTX 460 SLI PerformanceOlin Coles 2011-01-31 19:43
    That should do it just fine.
     
     
    # RE: ASUS GeForce GTX 460 SLI Performancequickquestion 2011-01-31 19:54
    Thanks a bunch man. I'll probably start saving to get 2 of these and a 3rd 20" screen. Have a good day.
     
     
    # PSU650W x GTX460 SLI cfg?Panko 2011-08-28 01:16
    Is an ANTEC TruePower 650W PSU powerful enough for an ENGTX460 DirectCU 1064 SLI config?
    Please, it's essential for me!
     
     
    # RE: PSU650W x GTX460 SLI cfg?JPEGONE 2011-08-28 17:25
    yeah, 650w will be enough to power the cards. Just make sure that it has 4X 6 pin or 6+2 pin PCI-E power connectors.
     
     
    # 460SLI NVIDIA SurroundGh0stfr33kza 2011-10-18 08:22
    Will a ASUS GTX460 DirectCu 1GB SLI config be able to run a NVIDIA Surround setup @ high settings for games like Crysis 2, BF3 and so on. Will be using 3x 23'' LED's.
     
     
    # 460SLI NVIDIA SurroundGh0stfr33kza 2011-10-18 08:44
    Oh and I almost forgot, will I be able to run the 3x 23'' LED as 1 display and then an additional 27'' LCD monitor as a second display
     
     
    # RE: 460SLI NVIDIA SurroundOlin Coles 2011-10-18 09:01
    Two GeForce GTX 460's in SLI can operate a 3-display setup, but I doubt they'll be capable of high-quality video settings in BF3. I believe you can add a fourth monitor, and can make it standalone or part of the set.
     
     
    # 460SLI NVIDIA SurroundGh0stfr33kza 2011-10-18 10:04
    How sure are you about the fourth monitor? This is an essential piece of information, as it will determine whether I'll run an SLI config or a single, powerful card Eyefinity setup.
     
     
    # RE: 460SLI NVIDIA SurroundOlin Coles 2011-10-18 10:08
    Since we've never reviewed a stereo plus single setup, I can't say with certainty. I would suggest sending NVIDIA a message or talking to their support team.
     
     
    # 460SLI NVIDIA SurroundGh0stfr33kza 2011-10-18 10:20
    okay thanks, 1 last question, which ati card would you recommend in the price range of 2 460's if i were to run the single card eyefinity setup?
     
     
    # RE: 460SLI NVIDIA SurroundOlin Coles 2011-10-18 10:26
    I believe the Radeon HD 6970 would be the best single-card from AMD that matches the price of two GTX 460s. See here for performance details:
    /index.php?option=com_content&task=view&id=680
     

    Comments have been disabled by the administrator.
