NVIDIA GeForce GTX-460 1GB Video Card
Written by Olin Coles   
Monday, 12 July 2010

NVIDIA GeForce GTX 460 1GB Video Card Review

NVIDIA's GeForce GTX 460 1GB GDDR5 graphics card empowers DirectX-11 video games to deliver unmatched geometric realism at the $200 price point. Based on the same Fermi architecture that powers the high-end GeForce GTX 480, the NVIDIA GeForce GTX 460 delivers mid-range performance for gamers on a budget. The GeForce GTX 460 comes armed with NVIDIA's GF104 Fermi graphics processor, and packs seven Streaming Multiprocessors for a total of 336 CUDA cores and 56 texture units. In this article Benchmark Reviews tests 3D video game frame rate performance on the 1GB NVIDIA GeForce GTX 460 video card, and compares our results against the most competitive graphics products in the segment. NVIDIA's 1GB GTX 460 price tag fits in nicely between the $200 Radeon HD 5830 and $250 GeForce GTX 465, yet promises to deliver more performance for the money. In the following pages, Benchmark Reviews demonstrates how well the GeForce GTX 460 performs against these other DirectX-11 video card products.

PC video games are still the best way to experience realistic effects and immerse yourself in the battle. Consoles do their part, but only high-precision video cards offer the sharp clarity and definition needed to enjoy detailed graphics. Thanks to the new GF104 GPU, the GeForce GTX 460 has plenty of headroom for overclockers to squeeze out additional FPS performance while keeping temperatures cool. In this article, Benchmark Reviews tests the GeForce GTX 460 against some of the best video cards within the price segment, using several of the most demanding PC video game titles and benchmark software available: Aliens vs Predator, Battlefield: Bad Company 2, BattleForge, Crysis Warhead, Far Cry 2, Resident Evil 5, and Metro 2033.

It used to be that PC video games such as Crysis and Far Cry 2 were as demanding as you could get, but that was all before DirectX-11 brought tessellation to the forefront of graphics. DX11 now adds heavy particle and turbulence effects to video games, and titles such as Metro 2033 demand the most powerful graphics processing available. NVIDIA's GF100 GPU was their first graphics processor to support DirectX-11 features such as tessellation and DirectCompute, and the GeForce GTX 400-series offers an excellent combination of performance and value for games like Battlefield: Bad Company 2 or BattleForge.

NVIDIA GeForce GTX 460 Video Card

At the center of every new technology is purpose, and NVIDIA has designed their Fermi GF104 GPU with the end-goal of redefining the video game experience through significant graphics processor innovations. Disruptive technology often changes the way users interact with computers, and the GeForce GTX 460 family of video cards is a set of complex tools built to arrive at one simple destination: immersive entertainment, especially when paired with NVIDIA GeForce 3D Vision. The experience is further improved with NVIDIA System Tools software, which includes NVIDIA Performance Group for GPU overclocking and NVIDIA System Monitor, which displays real-time temperatures. These tools help gamers and overclockers get the most out of their investment.

EDITOR'S NOTE: Benchmark Reviews has separately published our review of the 768MB NVIDIA GeForce GTX 460.

About NVIDIA Corporation:

NVIDIA (Nasdaq: NVDA) is the world leader in visual computing technologies and the inventor of the GPU, a high-performance processor which generates breathtaking, interactive graphics on workstations, personal computers, game consoles, and mobile devices. NVIDIA serves the entertainment and consumer market with its GeForce products, the professional design and visualization market with its Quadro products, and the high-performance computing market with its Tesla products. These products are transforming visually-rich and computationally-intensive applications such as video games, film production, broadcasting, industrial design, financial modeling, space exploration, and medical imaging.

NVIDIA Product Lines

GeForce - GPUs dedicated to graphics and video.
Desktop and notebook PCs equipped with GeForce GPUs deliver unparalleled performance, crisp photos, high-definition video playback, and ultra-realistic games. GeForce notebook GPUs also include advanced power management technology to deliver high performance without sacrificing battery life.

Quadro - A complete range of professional solutions engineered to deliver breakthrough performance and quality.
Certified for all leading professional graphics applications. #1 in professional graphics segment share. NVIDIA Quadro Plex is the industry's first dedicated visual computing system (VCS).

Tesla - A massively-parallel multi-threaded architecture for high-performance computing problems.
A dedicated, high-performance GPU computing solution that brings supercomputing power to any workstation or server and to standard, CPU-based server clusters. Tesla delivers a 128-processor computing core per GPU, C-language development environment for the GPU, and a suite of developer tools - allowing users to develop applications faster and to deploy them across multiple generations of processors. It also can be used in tandem with multi-core CPU systems to create a scalable computing solution that fits seamlessly into existing workstation or IT infrastructures.

NVIDIA Fermi Features

In today's complex graphics, tessellation offers the means to store massive amounts of coarse geometry, with expand-on-demand functionality. In the NVIDIA GF104 GPU (GF100 series), tessellation also enables more complex animations. In terms of model scalability, dynamic Level of Detail (LOD) allows quality and performance trade-offs, adding geometry only where it delivers better picture quality without penalty. Comprised of three layers (original geometry, tessellation geometry, and displacement map), the final product is far more detailed in shape and shading than anything constructed with bump-map technology. In plain terms, tessellation gives the peaks and valleys with shadow detail in-between, while previous-generation bump-mapping only gives the illusion of detail.

Stages of Tessellation with NVIDIA Fermi Graphics

Using GPU-based tessellation, a game developer can send a compact geometric representation of an object or character, and the tessellation unit can produce the correct geometric complexity for the specific scene. Consider the "Imp" character illustrated above. On the far left we see the initial quad mesh used to model the general outline of the figure; this representation is quite compact even when compared to typical game assets. The two middle images of the character are created by finely tessellating the description at the left. The result is a very smooth appearance, free of any of the faceting that resulted from limited geometry. Unfortunately this character, while smooth, is no more detailed than the coarse mesh. The final image on the right was created by applying a displacement map to the smoothly tessellated mesh shown third from the left.
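
As a rough illustration of that three-layer recipe (coarse patch, tessellated grid, displacement map), here is a minimal C++ sketch of the math a tessellator performs; the procedural height() function merely stands in for a real displacement map, and nothing here is NVIDIA or Direct3D API code:

```cpp
// Illustrative sketch only: coarse patch -> tessellated grid -> displacement.
#include <cmath>
#include <cstdio>

struct Vec3 { float x, y, z; };

// Stand-in for sampling a displacement (height) map at (u, v).
static float height(float u, float v) {
    return 0.05f * std::sin(12.0f * u) * std::cos(12.0f * v);
}

// Evaluate one tessellated vertex of a flat quad patch facing +Z:
// bilinear interpolation of the four corners, then displacement in Z.
static Vec3 evaluate(const Vec3 c[4], float u, float v) {
    Vec3 p;
    p.x = (1-u)*(1-v)*c[0].x + u*(1-v)*c[1].x + (1-u)*v*c[2].x + u*v*c[3].x;
    p.y = (1-u)*(1-v)*c[0].y + u*(1-v)*c[1].y + (1-u)*v*c[2].y + u*v*c[3].y;
    p.z = height(u, v);   // real peaks and valleys, not a bump-map illusion
    return p;
}

int main() {
    const Vec3 corners[4] = {{0,0,0}, {1,0,0}, {0,1,0}, {1,1,0}};
    const int lod = 8;    // tessellation factor: (lod+1)^2 vertices per patch
    for (int j = 0; j <= lod; ++j)
        for (int i = 0; i <= lod; ++i) {
            Vec3 p = evaluate(corners, i / (float)lod, j / (float)lod);
            std::printf("v %.3f %.3f %.3f\n", p.x, p.y, p.z);
        }
    return 0;   // 81 displaced vertices expanded from a 4-vertex patch
}
```

Raising lod grows the vertex count quadratically, which is exactly the data expansion the hardware tessellator performs on-die instead of sending it across the bus.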

What's new in Fermi?

With any new technology, consumers want to know what's new in the product. The goal of this article is to share in-depth information surrounding the Fermi architecture, as well as the new functionality unlocked in GF100. For clarity, the 'GF' letters used in the GF100 GPU name are not an abbreviation for 'GeForce'; they actually denote that this GPU is a Graphics solution based on the Fermi architecture. The next generation of NVIDIA GeForce-series desktop video cards will use the GF100 to promote the following new features:

  • Third Generation Streaming Multiprocessor (SM)
    o 32 CUDA cores per SM, 4x over GT200
    o 8x the peak double precision floating point performance over GT200
    o Dual Warp Scheduler simultaneously schedules and dispatches instructions from two independent warps
    o 64 KB of RAM with a configurable partitioning of shared memory and L1 cache (see the sketch after this list)
  • Second Generation Parallel Thread Execution ISA
    o Unified Address Space with Full C++ Support
    o Optimized for OpenCL and DirectCompute
    o Full IEEE 754-2008 32-bit and 64-bit precision
    o Full 32-bit integer path with 64-bit extensions
    o Memory access instructions to support transition to 64-bit addressing
    o Improved Performance through Predication
  • Improved Memory Subsystem
    o NVIDIA Parallel DataCache hierarchy with Configurable L1 and Unified L2 Caches
    o First GPU with ECC memory support
    o Greatly improved atomic memory operation performance
  • NVIDIA GigaThread Engine
    o 10x faster application context switching
    o Concurrent kernel execution
    o Out of Order thread block execution
    o Dual overlapped memory transfer engines
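
The configurable 64 KB shared-memory/L1 split called out above can be exercised through the CUDA runtime's cache-preference hint. A minimal sketch, assuming any CUDA-capable GPU and toolkit; the kernel and its names are hypothetical:

```cpp
// Minimal sketch of Fermi's configurable shared memory / L1 partitioning.
#include <cstdio>
#include <cuda_runtime.h>

__global__ void reverseTile(float* data) {
    __shared__ float tile[256];                 // staged in shared memory
    int t = threadIdx.x;
    tile[t] = data[t];
    __syncthreads();
    data[t] = tile[255 - t];                    // block-wide reversal
}

int main() {
    // Prefer the 48 KB shared / 16 KB L1 split for this kernel; a
    // pointer-chasing kernel might instead ask for cudaFuncCachePreferL1.
    cudaFuncSetCacheConfig(reverseTile, cudaFuncCachePreferShared);

    float h[256], *d;
    for (int i = 0; i < 256; ++i) h[i] = (float)i;
    cudaMalloc(&d, sizeof(h));
    cudaMemcpy(d, h, sizeof(h), cudaMemcpyHostToDevice);
    reverseTile<<<1, 256>>>(d);
    cudaMemcpy(h, d, sizeof(h), cudaMemcpyDeviceToHost);
    printf("h[0] = %.0f (expect 255)\n", h[0]);
    cudaFree(d);
    return 0;
}
```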

Tessellation in DirectX-11

Control hull shaders run DX11 pre-expansion routines, and operate explicitly in parallel across all points. Domain shaders run post-expansion operations on maps (u/v or x/y/z/w), and are also implicitly parallel. Fixed-function tessellation is configured by Level of Detail (LOD) based on output from the control hull shader, and can also produce triangles and lines if requested. Tessellation hardware is new to NVIDIA GPUs, and was not part of GT200 because of geometry bandwidth bottlenecks from sequential rendering/execution semantics.

In regard to the GF100-series graphics processor, NVIDIA has added new PolyMorph and Raster engines to handle world-space processing (PolyMorph) and screen-space processing (Raster). There are eight PolyMorph engines and two Raster engines on the GF104, which depend on an improved L2 cache to keep buffered geometric data produced by the pipeline on-die.

Four-Offset Gather4

The texture unit on previous processor architectures operated at the core clock of the GPU. On GF104, the texture units run at a higher clock, leading to improved texturing performance for the same number of units. GF104's texture units now add support for DirectX-11's BC6H and BC7 texture compression formats, reducing the memory footprint of HDR textures and render targets.

The texture units also support jittered sampling through DirectX-11's four-offset Gather4 feature, allowing four texels to be fetched from a 128×128 pixel grid with a single texture instruction. NVIDIA GF100-series GPUs implement four-offset Gather4 in hardware, greatly accelerating shadow mapping, ambient occlusion, and post-processing algorithms. With jittered sampling, games can implement smoother soft shadows or custom texture filters efficiently. Where the previous GT200 GPU topped out at 16x coverage-sample anti-aliasing, the GF100 series can deliver 32x CSAA.
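
To make the four-texel fetch concrete, the following CUDA sketch performs percentage-closer shadow filtering by gathering the 2×2 depth neighborhood manually; DirectX-11's Gather4 collapses these four fetches into a single texture instruction. Everything here (names, data, launch) is illustrative, not from the review:

```cpp
// Hedged sketch of the gather pattern behind percentage-closer filtering.
#include <cstdio>
#include <cuda_runtime.h>

__global__ void pcfShadow(const float* depthMap, int w, int h,
                          float u, float v, float lightDepth, float* out) {
    int x = min((int)(u * w), w - 2);           // top-left texel of 2x2 quad
    int y = min((int)(v * h), h - 2);
    float d00 = depthMap[ y      * w + x    ];  // the four gathered texels
    float d10 = depthMap[ y      * w + x + 1];
    float d01 = depthMap[(y + 1) * w + x    ];
    float d11 = depthMap[(y + 1) * w + x + 1];
    // Fraction of the four samples the light reaches: 0 = shadow, 1 = lit.
    *out = 0.25f * ((lightDepth <= d00) + (lightDepth <= d10) +
                    (lightDepth <= d01) + (lightDepth <= d11));
}

int main() {
    const int w = 4, h = 4;
    float hostDepth[w * h];
    for (int i = 0; i < w * h; ++i) hostDepth[i] = (i < 8) ? 0.3f : 0.9f;
    float *dDepth, *dOut, result;
    cudaMalloc(&dDepth, sizeof(hostDepth));
    cudaMalloc(&dOut, sizeof(float));
    cudaMemcpy(dDepth, hostDepth, sizeof(hostDepth), cudaMemcpyHostToDevice);
    pcfShadow<<<1, 1>>>(dDepth, w, h, 0.5f, 0.5f, 0.5f, dOut);
    cudaMemcpy(&result, dOut, sizeof(float), cudaMemcpyDeviceToHost);
    printf("soft shadow factor = %.2f (expect 1.00, fully lit)\n", result);
    cudaFree(dDepth); cudaFree(dOut);
    return 0;
}
```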

GF104 Compute for Gaming

As developers continue to search for novel ways to improve their graphics engines, the GPU will need to excel at a diverse and growing set of graphics algorithms. Since these algorithms are executed via general compute APIs, a robust compute architecture is fundamental to a GPU's graphical capabilities. In essence, one can think of compute as the new programmable shader. GF100's compute architecture is designed to address a wider range of algorithms and to facilitate more pervasive use of the GPU for solving parallel problems. Many algorithms, such as ray tracing, physics, and AI, cannot exploit shared memory, because program memory locality is only revealed at runtime. GF104's cache architecture was designed with these problems in mind. With up to 48 KB of L1 cache per Streaming Multiprocessor (SM) and a global L2 cache, threads that access the same memory locations at runtime automatically run faster, irrespective of the choice of algorithm.

NVIDIA's codename NEXUS brings CPU and GPU code development together in Microsoft Visual Studio 2008 for a shared process timeline, and also introduces the first hardware-based shader debugger. NVIDIA GF100-series GPUs are the first to offer full C++ support, the programming language of choice among game developers. To ease the transition to GPU programming, NVIDIA developed Nexus, a Microsoft Visual Studio programming environment for the GPU. Together with new hardware features that provide better debugging support, developers will be able to enjoy CPU-class application development on the GPU. The end result is C++ and Visual Studio integration that brings HPC users onto the same development platform. NVIDIA offers several paths to deliver compute functionality on the GF104 GPU, such as CUDA C++ for video games.

Image processing, simulation, and hybrid rendering are three primary functions of GPU compute for gaming. Using NVIDIA GF100-series GPUs, interactive ray tracing becomes possible for the first time on a standard PC. Ray tracing performance on the NVIDIA GF100 is roughly 4x faster than it was on the GT200 GPU, according to NVIDIA tests. AI path finding is a compute-intensive process well suited for GPUs, and the NVIDIA GF100 handles AI obstacles approximately 3x faster than the GT200. The benefits of this improvement are faster collision avoidance and shortest-path searches for higher-performance path finding.

NVIDIA GigaThread Thread Scheduler

One of the most important technologies of the Fermi architecture is its two-level, distributed thread scheduler. At the chip level, a global work distribution engine schedules thread blocks to various SMs, while at the SM level, each warp scheduler distributes warps of 32 threads to its execution units. The first generation GigaThread engine introduced in G80 managed up to 12,288 threads in real-time. The Fermi architecture improves on this foundation by providing not only greater thread throughput, but dramatically faster context switching, concurrent kernel execution, and improved thread block scheduling.
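
Concurrent kernel execution, one of the GigaThread improvements described above, is exposed through CUDA streams: kernels issued into different streams may overlap on Fermi-class hardware, where earlier GPUs would serialize them. A minimal sketch (kernel names and sizes are hypothetical):

```cpp
// Two independent kernels in separate streams, eligible to run concurrently.
#include <cstdio>
#include <cuda_runtime.h>

__global__ void scale(float* v, int n, float s) {
    // Grid-stride loop so any launch size covers the whole array.
    for (int i = blockIdx.x * blockDim.x + threadIdx.x; i < n;
         i += gridDim.x * blockDim.x)
        v[i] *= s;
}

int main() {
    const int n = 1 << 20;
    float *a, *b;
    cudaMalloc(&a, n * sizeof(float));
    cudaMalloc(&b, n * sizeof(float));
    cudaMemset(a, 0, n * sizeof(float));
    cudaMemset(b, 0, n * sizeof(float));

    cudaStream_t s1, s2;
    cudaStreamCreate(&s1);
    cudaStreamCreate(&s2);

    // Small grids leave SMs idle, giving the scheduler room to overlap them.
    scale<<<16, 256, 0, s1>>>(a, n, 2.0f);
    scale<<<16, 256, 0, s2>>>(b, n, 3.0f);

    cudaDeviceSynchronize();                 // wait for both streams
    printf("done: %s\n", cudaGetErrorString(cudaGetLastError()));
    cudaStreamDestroy(s1); cudaStreamDestroy(s2);
    cudaFree(a); cudaFree(b);
    return 0;
}
```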

NVIDIA GF104 GPU Fermi Architecture

Based on the Fermi architecture, NVIDIA's latest GPU is codenamed GF104 and powers the GeForce GTX 460. In this article, Benchmark Reviews explains the technical architecture behind NVIDIA's GF104 graphics processor and offers insight into upcoming Fermi-based GeForce video cards. For those who are not familiar, NVIDIA's GF100 GPU was their first graphics processor to support DirectX-11 hardware features such as tessellation and DirectCompute, while also adding heavy particle and turbulence effects. The GF100 GPU is the successor to the GT200 graphics processor, which launched in the GeForce GTX 280 video card back in June 2008. NVIDIA has since refined their focus, and GF100/GF104 demonstrate a dedication to next-generation gaming effects such as ray tracing, order-independent transparency, and fluid simulations. The new GF104 GPU is still more powerful than GT200, and delivers DirectX-11 performance for NVIDIA's mid-range Fermi-based video card family.

GF100 was not another incremental GPU step-up like the move from G80 to GT200. Processor cores grew from 128 (G80) to 240 (GT200), and reach 512 in GF100, where they earn the title of NVIDIA CUDA (Compute Unified Device Architecture) cores; GF104 features up to 336 CUDA cores. The key here is not only the name, but that the name now implies an emphasis on something more than just graphics. Each Fermi CUDA processor core has a fully pipelined integer arithmetic logic unit (ALU) and floating point unit (FPU). GF104 implements the IEEE 754-2008 floating-point standard, providing the fused multiply-add (FMA) instruction for both single and double precision arithmetic. FMA improves over a multiply-add (MAD) instruction by doing the multiplication and addition with a single final rounding step, with no loss of precision in the addition. FMA minimizes rendering errors in closely overlapping triangles.
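
The single-rounding behavior of FMA can be demonstrated in a few lines of host code: computing fma(x, x, -(x*x)) recovers the exact rounding error of the plain product, something separate multiply-then-add steps would lose. A small sketch, not taken from the article (compile without fast-math contractions):

```cpp
// fma rounds once, so it can expose the rounding error of a plain multiply.
#include <cmath>
#include <cstdio>

int main() {
    float x = 1.0f + std::ldexp(1.0f, -12);   // 1 + 2^-12: x*x needs 25 bits
    float prod = x * x;                       // rounded to nearest float
    float residual = std::fma(x, x, -prod);   // exact error of that rounding
    std::printf("x*x      = %.9g\n", prod);
    std::printf("residual = %.9g\n", residual);  // nonzero (2^-24 here):
    return 0;                                    // proof of single rounding
}
```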

NVIDIA Fermi GF104 Block Diagram (click for high-resolution)

Based on Fermi's third-generation Streaming Multiprocessor (SM) architecture, GF104 could be mistaken for a divided GF100. NVIDIA GeForce GF100-series Fermi GPUs are based on a scalable array of Graphics Processing Clusters (GPCs), Streaming Multiprocessors (SMs), and memory controllers. NVIDIA's GF100 GPU implemented four GPCs, sixteen SMs, and six memory controllers. Conversely, GF104 implements two GPCs, eight SMs, and four memory controllers. Where each SM contained 32 CUDA cores in the GF100, NVIDIA now configures the GF104 to deliver 48 cores per SM. As expected, NVIDIA GF100-series products are launching with different configurations of GPCs, SMs, and memory controllers to address different price points.

CPU commands are read by the GPU via the Host Interface. The GigaThread Engine fetches the specified data from system memory and copies it to the frame buffer. GF104 implements four 64-bit GDDR5 memory controllers (256-bit total) to facilitate high-bandwidth access to the frame buffer. The GigaThread Engine then creates and dispatches thread blocks to various SMs. Individual SMs in turn schedule warps (groups of 32 threads) to CUDA cores and other execution units. The GigaThread Engine also redistributes work to the SMs when work expansion occurs in the graphics pipeline, such as after the tessellation and rasterization stages.

As shipped on the GeForce GTX 460, GF104 implements 336 CUDA cores, organized as seven active SMs of 48 cores each (the full die contains eight SMs). Each SM is a highly parallel multiprocessor supporting up to 32 warps at any given time; four Dispatch Units per SM can issue two instructions from each of two independent warps, for up to four instructions per clock per SM. Each CUDA core is a unified processor core that executes vertex, pixel, geometry, and compute kernels. A unified L2 cache architecture (384KB on 768MB cards, or 512KB on 1GB cards) services load, store, and texture operations. GF104 is designed to offer up to 32 ROP units (768MB=24 / 1GB=32) for pixel blending, anti-aliasing, and atomic memory operations. The ROP units are organized in four groups of eight, and each group is serviced by a 64-bit memory controller. The memory controller, L2 cache, and ROP group are closely coupled; scaling one unit automatically scales the others.

GF104 Specifications

  • Up to 336 CUDA Cores
  • 8 Geometry Units
  • 2 Raster Units
  • 64 Texture Units
  • 32 ROP Units
  • 256-bit GDDR5
  • DirectX-11 API Support

GeForce GTX 400 Specifications

| Graphics Card | GeForce GTX 460 | GeForce GTX 465 | GeForce GTX 470 | GeForce GTX 480 |
|---|---|---|---|---|
| GPU Transistors | 1.95 Billion | 3.2 Billion | 3.2 Billion | 3.2 Billion |
| Graphics Processing Clusters | 2 | 4 | 4 | 4 |
| Streaming Multiprocessors | 7 | 11 | 14 | 15 |
| CUDA Cores | 336 | 352 | 448 | 480 |
| Texture Units | 56 | 44 | 56 | 60 |
| ROP Units | 768MB=24 / 1GB=32 | 32 | 40 | 48 |
| Graphics Clock (Fixed Function Units) | 675 MHz | 607 MHz | 607 MHz | 700 MHz |
| Processor Clock (CUDA Cores) | 1350 MHz | 1215 MHz | 1215 MHz | 1401 MHz |
| Memory Clock (Clock Rate/Data Rate) | 900/3600 MHz | 837/3348 MHz | 837/3348 MHz | 924/3696 MHz |
| Total Video Memory | 768MB / 1GB | 1024 MB | 1280 MB | 1536 MB |
| Memory Interface | 768MB=192-bit / 1GB=256-bit | 256-bit | 320-bit | 384-bit |
| Total Memory Bandwidth | 86.4 / 115.2 GB/s | 102.6 GB/s | 133.9 GB/s | 177.4 GB/s |
| Texture Filtering Rate (Bilinear) | 37.8 GigaTexels/s | 26.7 GigaTexels/s | 34.0 GigaTexels/s | 42.0 GigaTexels/s |
| GPU Fabrication Process | 40 nm | 40 nm | 40 nm | 40 nm |
| Output Connections | 2x Dual-Link DVI-I, 1x Mini HDMI | 2x Dual-Link DVI-I, 1x Mini HDMI | 2x Dual-Link DVI-I, 1x Mini HDMI | 2x Dual-Link DVI-I, 1x Mini HDMI |
| Form Factor | Dual-Slot | Dual-Slot | Dual-Slot | Dual-Slot |
| Power Input | 2x 6-Pin | 2x 6-Pin | 2x 6-Pin | 6-Pin + 8-Pin |
| Thermal Design Power (TDP) | 768MB=150W / 1GB=160W | 200 Watts | 215 Watts | 250 Watts |
| Recommended PSU | 450 Watts | 550 Watts | 550 Watts | 600 Watts |
| GPU Thermal Threshold | 104°C | 105°C | 105°C | 105°C |

GeForce Fermi Chart Courtesy of Benchmark Reviews
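
As a sanity check, several of the chart's GTX 460 1GB figures follow directly from its clocks and unit counts; this short host-side sketch re-derives the memory bandwidth and bilinear fill rate, plus a peak single-precision estimate (roughly 907 GFLOPS) that the chart itself does not list:

```cpp
// Back-of-envelope derivation of the GTX 460 1GB chart figures.
#include <cstdio>

int main() {
    double dataRateMHz = 3600.0;     // GDDR5 effective data rate (900 MHz x4)
    double busBytes    = 256.0 / 8;  // 256-bit interface = 32 bytes per beat
    double bandwidth   = dataRateMHz * 1e6 * busBytes / 1e9;   // GB/s

    double texUnits = 56, coreClk = 675e6;                     // graphics clock
    double fillRate = texUnits * coreClk / 1e9;                // GTexel/s

    double cudaCores = 336, shaderClk = 1350e6;                // processor clock
    double gflops = cudaCores * shaderClk * 2 / 1e9;           // FMA = 2 FLOPs

    printf("bandwidth = %.1f GB/s  (chart: 115.2)\n", bandwidth);
    printf("fill rate = %.1f GT/s  (chart: 37.8)\n", fillRate);
    printf("fp32 rate = %.0f GFLOPS (derived, not charted)\n", gflops);
    return 0;
}
```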

NVIDIA GeForce GTX 460 Video Card

So far, 2010 has been an exciting year for game developers. The Microsoft Windows 7 (and updated Windows Vista) Operating Systems introduced gamers to DirectX-11, allowing video games released for the PC platform to look better than ever. DirectX-11 is the leap in video game software development we've been waiting for. Screen Space Ambient Occlusion (SSAO) is given emphasis in DX11, allowing some of the most detailed computer textures gamers have ever seen. Cracks in mud with definable depth and splintered tree bark make games more realistic, but they also place new demands on the graphics hardware. This new level of graphical detail requires a new level of computer hardware: DX11-compliant hardware. Tessellation adds a tremendous level of strain on the GPU, making previous graphics hardware virtually obsolete with new DX11 game titles.

The NVIDIA GeForce GTX 460 video card series offers gamers a healthy dose of graphics processing power for their money. But the GeForce GTX 460 is more than just a tool for video games; it's also a tool for professional environments that make use of GPGPU-accelerated software, such as Adobe Premiere Pro and Photoshop.

The NVIDIA GeForce GTX 460 is a 2.67-inch tall, double-bay graphics card, 4.376 inches (111.15 mm) wide, with an 8.25-inch (209.55 mm) long profile. There are two varieties of GTX 460: one with 768MB, and another with 1GB of GDDR5 memory. NVIDIA's reference cooler design uses a center-mounted 75mm finsink, which is more than adequate for this mid-range Fermi GF104 GPU.

As with most past GeForce video cards, the Fermi GPU offers two output 'lanes', so only two of the three output connections can operate at once. NVIDIA has retained two DVI outputs on the GeForce GTX 460, so dual-monitor configurations can be utilized, and by adding a second video card users can enjoy GeForce 3D Vision Surround functionality.

Other changes occur in more subtle ways, such as replacing the S-Video connection with a more relevant (mini) HDMI 1.3a A/V output. In past GeForce products, the HDMI port was limited to video-only output and required a separate audio connection. Native HDMI 1.3a support is available on the GeForce GTX 460, which allows direct output to HDTVs and compatible monitors. Selling at an affordable $200 price point, NVIDIA was wise to support dual-card SLI sets on the GTX 460. Triple-SLI capability is not supported, since the $600 cost of three video cards would be better spent on either two GTX 470s or one GTX 480.

The new 40nm fabrication process opens the die for more transistors: there are 1.4 billion in the GT200 GPU (GeForce GTX 285), compared to 1.95 billion in the GF104 for the GTX 460. While the GTX 460's GF104 disables one of its eight Streaming Multiprocessors (SMs), a heavy-duty thermal management system is still utilized for optimal temperature control. On the reference sample, the entire heatsink and shroud portion is attached to the circuit board by four screws surrounding the GPU.

Even with its mid-range intentions, the PCB is a busy place on the GeForce GTX 460. There are eight positions for DRAM ICs on the circuit board, which combine for 1GB of GDDR5 video frame buffer memory, or 768MB when six are utilized. Many of the electronic components have been relocated to the 'top' side of the PCB to better accommodate the large-scale GF104 GPU and its 1.95 billion transistors. Its 336 CUDA cores operate at 1350 MHz, which keeps a firm clock-speed lead over ATI's 850 MHz Cypress GPU, a die measuring 334 mm².

NVIDIA employs a dual-slot cooling system on the reference GTX 460 video card. Two copper heat-pipe rods span away from the copper base into two opposite sets of aluminum fins. The entire unit is cooled with a 75mm fan, which kept our test samples extremely cool at idle and maintained very good cooling once the card received unnaturally high stress loads with FurMark (covered later in this article).

In the next several sections Benchmark Reviews will explain our video card test methodology, followed by a performance comparison of the NVIDIA GeForce GTX 460 against several of the most popular mid-range graphics accelerators available. The GeForce GTX 460 will compete against the ATI Radeon HD 5830 and several other middle-market video cards; so we'll be keeping a close eye on comparative performance and value.

GeForce GTX-460 Partner Products

NVIDIA sells their graphics technology to a large host of add-in card partners (AICs), and many are expected to take advantage of the GTX 460's competitively focused price point. Companies will offer the GTX 460 in both 768MB (192-bit) and 1GB (256-bit) varieties. This section offers a preview of the design implementations used by some of the most popular finished-goods companies:

ASUS ENGTX460 GeForce GTX 460 Video Card

ASUS will immediately launch several versions of the GeForce GTX 460, using their ENGTX460 part number. The ENGTX460-DirectCU-TOP-2DI-768MD5 (above) will use a custom cooling solution loosely based on the original, while the ENGTX460-2DI-768MD5 will maintain the reference cooler design.

EVGA GeForce GTX 460 Video Card

NVIDIA's closest AIC partner is EVGA. Expect two versions of the GTX 460 to be available at launch: the 01G-P3-1371-TR shown above uses the same cooling solution seen on the larger GTX 465, while the 768-P3-1360-TR maintains the reference design.

Gigabyte GV-N460OC GeForce GTX 460 Video Card

Gigabyte Technology is quickly gaining popularity as an NVIDIA AIC, and they will have their custom-cooled GTX 460 available at launch. Gigabyte GV-N460OC-1GI is their 1GB model (above), while GV-N460OC-768I comes with 768MB GDDR5.

MSI N460GTX GeForce GTX 460 Video Card

MSI, the name known by master overclockers world-wide, will have a 1GB GeForce GTX 460 named N460GTX-CYCLONE-1GD5 (G57-V100699) and a 768MB card named N460GTX-CYCLONE-768D5-OC (G57-V100698) that comes factory overclocked. Both cards feature MSI's 'Cyclone' GPU cooler, which loosely resembles the reference design without the plastic shroud.

Zotac Synergy Edition GeForce GTX 460 Video Card

ZOTAC International goes beyond a custom cooling design, and also offers a unique video output selection on their version of the GeForce GTX 460. The ZOTAC GeForce GTX 460 series are the first graphics cards to ship with the new ZOTAC Boost Premium software bundle that features five applications, including vReveal 2.0, Nero Vision Xtra, Cooliris, Kylo and XBMC media center. Look for the ZOTAC Synergy Edition to add DisplayPort to the dual-DVI and HDMI output selection.

VGA Testing Methodology

The Microsoft DirectX-11 graphics API is native to the Microsoft Windows 7 Operating System, which serves as the primary O/S for our test platform. DX11 is also available as a Microsoft Update for Windows Vista, so our test results apply to both versions of the Operating System. The majority of benchmark tests used in this article measure DX11 performance; however, some high-demand DX10 tests have also been included.

According to the Steam Hardware Survey published for the month ending May 2010, the most popular gaming resolution is 1280x1024 (17-19" standard LCD monitors). However, because this 1.31MP resolution is considered 'low' by most standards, our benchmark performance tests concentrate on higher-demand resolutions: 1.76MP 1680x1050 (22-24" widescreen LCD) and 2.30MP 1920x1200 (24-28" widescreen LCD monitors). These resolutions are more likely to be used by high-end graphics solutions, such as those tested in this article.

In each benchmark test there is one 'cache run' that is conducted, followed by five recorded test runs. Results are collected at each setting, with the highest and lowest results discarded. The remaining three results are averaged, and displayed in the performance charts on the following pages.
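
In code form, that scoring rule is a trimmed mean of the five recorded runs; here is a minimal sketch with hypothetical FPS samples:

```cpp
// One warm-up (cache) run, five recorded runs, drop min and max, average 3.
#include <algorithm>
#include <cstdio>

static double score(double runs[5]) {
    std::sort(runs, runs + 5);                    // ascending FPS
    return (runs[1] + runs[2] + runs[3]) / 3.0;   // discard lowest and highest
}

int main() {
    double fps[5] = {41.8, 43.2, 42.5, 39.9, 42.9};   // hypothetical samples
    std::printf("charted result: %.1f FPS\n", score(fps));
    return 0;
}
```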

A combination of synthetic and video game benchmark tests have been used in this article to illustrate relative performance among graphics solutions. Our benchmark frame rate results are not intended to represent real-world graphics performance, as this experience would change based on supporting hardware and the perception of individuals playing the video game.

DX11 Cost to Performance Ratio

For this article Benchmark Reviews has included cost per FPS for graphics performance results. Only the least expensive price for each product is used, and the calculation does not consider tax, freight, promotional offers, or rebates. All prices reflect product series components, and do not represent any specific manufacturer, model, or brand. These retail prices for each product were obtained from NewEgg.com on 10-July-2010 (the arithmetic is sketched after the price list below):

  • $150 Radeon HD 5770
  • $220 GeForce GTX 460
  • $200 Radeon HD 5830
  • $250 GeForce GTX 465
  • $290 Radeon HD 5850
  • $320 GeForce GTX 470
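
The cost-per-FPS figures quoted on the following pages are simply street price divided by averaged frame rate; a quick sketch using the GTX 460's $220 price and a hypothetical frame rate:

```cpp
// Cost-per-frame: dollars divided by average FPS for a given test.
#include <cstdio>

int main() {
    double priceUSD = 220.0;    // GeForce GTX 460 1GB, NewEgg 10-July-2010
    double avgFps   = 27.5;     // hypothetical benchmark average
    std::printf("$%.2f per FPS\n", priceUSD / avgFps);   // -> $8.00 per FPS
    return 0;
}
```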

Intel X58-Express Test System

DirectX-10 Benchmark Applications

  • 3DMark Vantage v1.02 (Extreme Quality, 8x Multisample Anti-Aliasing, 16x Anisotropic Filtering, 1:2 Scale)
  • Crysis Warhead v1.1 with HOC Benchmark (DX10, Very High Quality, 4x Anti-Aliasing, 16x Anisotropic Filtering, Airfield Demo)
  • Far Cry 2 v1.02 (DX10, Very High Performance, Ultra-High Quality, 8x Anti-Aliasing, HDR + Bloom)
  • Resident Evil 5 Benchmark (DX10, Super-High Quality, 8x MSAA)

DirectX-11 Benchmark Applications

  • Aliens vs Predator (Very High Quality, 4x AA, 16x AF, SSAO, Tessellation, Advanced Shadows)
  • BattleField: Bad Company 2 (High Quality, HBAO, 8x Anti-Aliasing, 16x Anisotropic Filtering, Single-Player Intro Scene)
  • BattleForge v1.2 (Very High Quality, 8x Anti-Aliasing, Auto Multi-Thread)
  • Metro 2033 (Very-High Quality, DirectX-11, AAA, 16x AF, Advanced DoF, Tessellation, 180s Scene 1 Fraps)
  • Unigine Heaven Benchmark 2.1 (DX11, Normal Tessellation, 16x AF, 4x AA)

Video Card Test Products

| Graphics Card | GeForce 9800 GTX+ | Radeon HD4890 | GeForce GTX285 | Radeon HD5770 | GeForce GTX460 | Radeon HD5830 | GeForce GTX465 | Radeon HD5850 | GeForce GTX470 |
|---|---|---|---|---|---|---|---|---|---|
| GPU Cores | 128 | 800 | 240 | 800 | 336 | 1120 | 352 | 1440 | 448 |
| Core Clock (MHz) | 740 | 850 | 670 | 850 | 675 | 800 | 608 | 725 | 608 |
| Shader Clock (MHz) | 1836 | N/A | 1550 | N/A | 1350 | N/A | 1215 | N/A | 1215 |
| Memory Clock (MHz) | 1100 | 975 | 1300 | 1200 | 900 | 1000 | 802 | 1000 | 837 |
| Memory Amount | 512MB GDDR3 | 1024MB GDDR5 | 1024MB GDDR3 | 1024MB GDDR5 | 1024MB GDDR5 | 1024MB GDDR5 | 1024MB GDDR5 | 1024MB GDDR5 | 1280MB GDDR5 |
| Memory Interface | 256-bit | 256-bit | 512-bit | 128-bit | 256-bit | 256-bit | 256-bit | 256-bit | 320-bit |
    • NVIDIA GeForce 9800 GTX+ / GTS 250 (740 MHz GPU/1836 MHz Shader/1100 MHz vRAM - Forceware 257.21)
    • ATI Radeon HD 4890 Reference Design (850 MHz GPU/975 MHz vRAM - ATI Catalyst Driver 10.6)
    • ASUS GeForce GTX 285 ENGTX285 TOP (670 MHz GPU/1550 MHz Shader/1330 MHz vRAM - Forceware 257.21)
    • ATI Radeon HD 5770 Reference Design (850 MHz GPU/1200 MHz vRAM - ATI Catalyst Driver 10.6)
    • NVIDIA GeForce GTX 460 1GB (675 MHz GPU/1350 MHz Shader/900 MHz vRAM - Forceware 258.80)
    • ATI Radeon HD 5830 Reference Design (800 MHz GPU/1000 MHz vRAM - ATI Catalyst Driver 10.6)
    • ASUS GeForce GTX 465 (608 MHz GPU/1215 MHz Shader/802 MHz vRAM - Forceware 258.80)
    • ATI Radeon HD 5850 Reference Design (725 MHz GPU/1000MHz vRAM - ATI Catalyst Driver 10.6)
    • ZOTAC GeForce GTX 470 ZT-40201-10P (608 MHz GPU/1215 MHz Shader/837 MHz vRAM - Forceware 258.80)

    DX10: 3DMark Vantage

    3DMark Vantage is a PC benchmark suite designed to test DirectX-10 graphics card performance. FutureMark 3DMark Vantage is the latest addition to the 3DMark benchmark series built by Futuremark Corporation. Although 3DMark Vantage requires NVIDIA PhysX to be installed for program operation, only the CPU/Physics test relies on this technology.

    3DMark Vantage offers benchmark tests focusing on GPU, CPU, and Physics performance. Benchmark Reviews uses the two GPU-specific tests for grading video card performance: Jane Nash and New Calico. These tests isolate graphical performance, and remove processor dependence from the benchmark results.

    3DMark Vantage GPU Test: Jane Nash

    Of the two GPU tests 3DMark Vantage offers, the Jane Nash performance benchmark is slightly less demanding. In a short video scene the special agent escapes a secret lair by water, nearly losing her shirt in the process. Benchmark Reviews tests this DirectX-10 scene at 1680x1050 and 1920x1200 resolutions, and uses Extreme quality settings with 8x anti-aliasing and 16x anisotropic filtering. The 1:2 scale is utilized, and is the highest this test allows. By maximizing the processing levels of this test, the scene creates the highest level of graphical demand possible and sorts the strong from the weak.

    Cost Analysis: Jane Nash (1680x1050)

  • $150 Radeon HD 5770 costs $8.33 per FPS
  • $220 GeForce GTX 460 costs $9.09 per FPS
  • $200 Radeon HD 5830 costs $8.78 per FPS
  • $250 GeForce GTX 465 costs $11.11 per FPS
  • $290 Radeon HD 5850 costs $10.18 per FPS
  • $320 GeForce GTX 470 costs $11.15 per FPS

    3DMark Vantage GPU Test: New Calico

    New Calico is the second GPU test in the 3DMark Vantage test suite. Of the two GPU tests, New Calico is the most demanding. In a short video scene featuring a galactic battleground, there is a massive display of busy objects across the screen. Benchmark Reviews tests this DirectX-10 scene at 1680x1050 and 1920x1200 resolutions, and uses Extreme quality settings with 8x anti-aliasing and 16x anisotropic filtering. The 1:2 scale is utilized, and is the highest this test allows. Using the highest graphics processing level available allows our test products to separate themselves and stand out (if possible).

    Cost Analysis: New Calico (1680x1050)

  • $150 Radeon HD 5770 costs $11.11 per FPS
  • $220 GeForce GTX 460 costs $10.63 per FPS
  • $200 Radeon HD 5830 costs $11.43 per FPS
  • $250 GeForce GTX 465 costs $13.23 per FPS
  • $290 Radeon HD 5850 costs $13.24 per FPS
  • $320 GeForce GTX 470 costs $13.17 per FPS

    Test Summary: According to 3DMark Vantage, the 1GB NVIDIA GeForce GTX 460 finishes slightly ahead of the ATI Radeon HD 5830 in the Jane Nash GPU test, and then overtakes it by 3 FPS in the New Calico test. Since the two share a similar price point, the 3DMark Vantage benchmark suggests they share similar frame rate performance as well. Take 3D Vision, PhysX, CUDA, and 32x CSAA into consideration, and the favor leans even more towards NVIDIA.

    DX10: Crysis Warhead

    Crysis Warhead is an expansion pack based on the original Crysis video game. Crysis Warhead is based in the future, where an ancient alien spacecraft has been discovered beneath the Earth on an island east of the Philippines. Crysis Warhead uses a refined version of the CryENGINE2 graphics engine. Like Crysis, Warhead uses the Microsoft Direct3D 10 (DirectX-10) API for graphics rendering.

    Benchmark Reviews uses the HOC Crysis Warhead benchmark tool to test and measure graphics performance using the Airfield 1 demo scene. This short test places a high amount of stress on a graphics card because of its detailed terrain and textures, and also because of the test settings used. Using the DirectX-10 test with Very High quality settings, the Airfield 1 demo scene receives 4x anti-aliasing and 16x anisotropic filtering to create maximum graphics load and separate the products according to their performance.

    Using the highest quality DirectX-10 settings with 4x AA and 16x AF, only the most powerful graphics cards are expected to perform well in our Crysis Warhead benchmark tests. DirectX-11 extensions are not supported in Crysis: Warhead, and SSAO is not an available option.

    Cost Analysis: Crysis Warhead (1680x1050)

  • $150 Radeon HD 5770 costs $8.82 per FPS
  • $220 GeForce GTX 460 costs $8.80 per FPS
  • $200 Radeon HD 5830 costs $9.09 per FPS
  • $250 GeForce GTX 465 costs $9.62 per FPS
  • $290 Radeon HD 5850 costs $10.74 per FPS
  • $320 GeForce GTX 470 costs $9.70 per FPS

    Test Summary: The CryENGINE2 graphics engine used in Crysis Warhead responds well to both ATI and NVIDIA products, which allows the 1GB NVIDIA GeForce GTX 460 to exceed the performance of ATI's Radeon HD 5830 at 1680x1050 and again at 1920x1200. For die-hard fans of Crysis, the GeForce GTX 460 offers a better price-to-performance ratio than the Radeon HD 5830, and actually performs as well as the GeForce GTX 465.

    DX10: Far Cry 2

    Ubisoft has developed Far Cry 2 as a sequel to the original, but with a very different approach to game play and story line. Far Cry 2 features a vast world built on Ubisoft's new game engine called Dunia, meaning "world", "earth" or "living" in Farsi. The setting in Far Cry 2 takes place on a fictional Central African landscape, set to a modern-day timeline.

    The Dunia engine was built specifically for Far Cry 2 by the Ubisoft Montreal development team. It delivers realistic semi-destructible environments, special effects such as dynamic fire propagation and storms, real-time night-and-day sunlight and moonlight cycles, a dynamic music system, and non-scripted enemy A.I. actions.

    The Dunia game engine takes advantage of multi-core processors as well as multiple processors and supports DirectX 9 as well as DirectX-10. Only 2 or 3 percent of the original CryEngine code is re-used, according to Michiel Verheijdt, Senior Product Manager for Ubisoft Netherlands. Additionally, the engine is less hardware-demanding than CryEngine 2, the engine used in Crysis.

    However, it should be noted that Crysis delivers greater character and object texture detail, as well as more destructible elements within the environment; for example, trees break into many smaller pieces and buildings break down to their component panels. Far Cry 2 also supports the amBX technology from Philips. With the proper hardware, this adds effects like vibrations, ambient colored lights, and fans that generate wind effects.

    There is a benchmark tool in the PC version of Far Cry 2, which offers an excellent array of settings for performance testing. Benchmark Reviews used the maximum settings allowed for DirectX-10 tests, with the resolution set to 1920x1200. Performance settings were all set to 'Very High', Render Quality was set to 'Ultra High' overall quality, 8x anti-aliasing was applied, and HDR and Bloom were enabled.

    Although the Dunia engine in Far Cry 2 is slightly less demanding than the CryEngine 2 engine in Crysis, the strain the two place on hardware appears extremely close. In Crysis we didn't dare to test AA above 4x, whereas we used 8x AA and 'Ultra High' settings in Far Cry 2. The end effect was a separation between what is capable of maximum settings and what is not. Using the short 'Ranch Small' time demo (which yields the lowest FPS of the three tests available), we noticed that there are very few products capable of producing playable frame rates with the settings all turned up.

    Cost Analysis: Far Cry 2 (1680x1050)

  • $150 Radeon HD 5770 costs $4.45 per FPS
  • $220 GeForce GTX 460 costs $3.26 per FPS
  • $200 Radeon HD 5830 costs $5.76 per FPS
  • $250 GeForce GTX 465 costs $4.03 per FPS
  • $290 Radeon HD 5850 costs $5.61 per FPS
  • $320 GeForce GTX 470 costs $4.16 per FPS

    Test Summary: The Dunia game engine in Far Cry 2 demonstrates a preference for NVIDIA products over ATI, which allows the GeForce GTX 460 to dominate the Radeon HD 5830 and even beat the Radeon HD 5850 by a significant margin. Because of the major differences in performance with Far Cry 2, the GeForce GTX 460 (and every other GeForce card) offered the best performance and cost per FPS of any DX11 video card.

    DX10: Resident Evil 5

    Built upon an advanced version of Capcom's proprietary MT Framework game engine to deliver DirectX-10 graphic detail, Resident Evil 5 offers gamers non-stop action similar to Devil May Cry 4, Lost Planet, and Dead Rising. The MT Framework is an exclusive seventh generation game engine built to be used with games developed for the PlayStation 3 and Xbox 360, and PC ports. MT stands for "Multi-Thread", "Meta Tools" and "Multi-Target". Games using the MT Framework are originally developed on the PC and then ported to the other two console platforms.

    On the PC version of Resident Evil 5, both DirectX 9 and DirectX-10 modes are available for the Microsoft Windows XP and Vista Operating Systems, while Microsoft Windows 7 plays Resident Evil 5 through backwards-compatible Direct3D APIs. Resident Evil 5 is branded with the NVIDIA The Way It's Meant to be Played (TWIMTBP) logo, and receives NVIDIA GeForce 3D Vision functionality enhancements.

    NVIDIA and Capcom offer the Resident Evil 5 benchmark demo for free download from their website, and Benchmark Reviews encourages visitors to compare their own results to ours. Because the Capcom MT Framework game engine is very well optimized and produces high frame rates, Benchmark Reviews uses the DirectX-10 version of the test at 1920x1200 resolution. Super-High quality settings are configured, with 8x MSAA post processing effects for maximum demand on the GPU. Test scenes from Area #3 and Area #4 require the most graphics processing power, and the results are collected for the chart illustrated below.

    Resident Evil 5 has really proved how good the proprietary Capcom MT Framework game engine can look with DirectX-10 effects. The Area #3 and #4 tests are the most graphically demanding in this free downloadable benchmark, and the results suggest that the Area #3 test scene performs better with NVIDIA GeForce products, while the Area #4 scene favors ATI Radeon GPUs.

    Cost Analysis: Resident Evil 5 (Area 4)

  • $150 Radeon HD 5770 costs $2.73 per FPS
  • $220 GeForce GTX 460 costs $3.28 per FPS
  • $200 Radeon HD 5830 costs $3.39 per FPS
  • $250 GeForce GTX 465 costs $3.21 per FPS
  • $290 Radeon HD 5850 costs $3.54 per FPS
  • $320 GeForce GTX 470 costs $3.81 per FPS

    Test Summary: It's unclear whether Resident Evil 5 graphics performance favors ATI or NVIDIA, especially since the two test scenes alternate favoritism. Although this benchmark tool is distributed directly by NVIDIA, and Forceware drivers likely include optimizations for the Resident Evil 5 game, there doesn't appear to be any decisive tilt towards GeForce products over their ATI Radeon counterparts from within the game itself. Test scene #3 certainly favors Fermi GPUs, which lead every other product tested. In test scene #4 the Radeon video card series appears more competitive, although the 1GB GeForce GTX 460 still outperformed the Radeon HD 5830 by a significant 8-FPS margin. Additionally, the GTX 460 earns respect for its competitive cost per frame.

    DX11: Aliens vs Predator

    Aliens vs. Predator is a science fiction first-person shooter video game, developed by Rebellion, and published by Sega for Microsoft Windows, Sony PlayStation 3, and Microsoft Xbox 360. Aliens vs. Predator utilizes Rebellion's proprietary Asura game engine, which had previously found its way into Call of Duty: World at War and Rogue Warrior. The self-contained benchmark tool is used for our DirectX-11 tests, which push the Asura game engine to its limit.

    In our benchmark tests, Aliens vs. Predator was configured to use the highest quality settings with 4x AA and 16x AF. DirectX-11 features such as Screen Space Ambient Occlusion (SSAO) and tessellation have also been included, along with advanced shadows.

    Cost Analysis: Aliens vs Predator (1680x1050)

  • $150 Radeon HD 5770 costs $8.02 per FPS
  • $220 GeForce GTX 460 costs $8.00 per FPS
  • $200 Radeon HD 5830 costs $8.66 per FPS
  • $250 GeForce GTX 465 costs $10.00 per FPS
  • $290 Radeon HD 5850 costs $9.80 per FPS
  • $320 GeForce GTX 470 costs $9.44 per FPS

    Test Summary: Aliens vs Predator may use a well-known game engine, but under DirectX-11 its graphical demand is second only to Metro 2033 (and possibly equivalent to DX10 Crysis). With an unbiased appetite for raw DirectX-11 graphics performance, Aliens vs Predator treats ATI and NVIDIA products as equal contenders, even despite sponsorship from NVIDIA. When high-strain SSAO is called into action, the 1GB GeForce GTX 460 demonstrates how well Fermi is suited for DX11, besting ATI's Radeon HD 5830 by several frames per second and even surpassing the GTX 465. Even though the high-stress graphics depress frame rates across the board, the GeForce GTX 460's price-to-performance ratio holds up quite well in Aliens vs Predator and wins out over the Radeon HD 5830 for value.

    DX11: Battlefield Bad Company 2

    The Battlefield franchise has been known to demand a lot from PC graphics hardware. With Battlefield: Bad Company 2, DICE (Digital Illusions CE) has incorporated their Frostbite-1.5 game engine with the Destruction-2.0 feature set. Battlefield: Bad Company 2 features destructible environments using Frostbite Destruction-2.0, and adds gravitational bullet-drop effects for projectiles shot from weapons at long distance. The Frostbite-1.5 game engine used in Battlefield: Bad Company 2 consists of DirectX-10 primary graphics, with improved performance and softened dynamic shadows added for DirectX-11 users.

    At the time Battlefield: Bad Company 2 was published, DICE was also working on the Frostbite-2.0 game engine. This upcoming engine will include native support for DirectX-10.1 and DirectX-11, as well as parallelized processing support for 2-8 parallel threads. This will improve performance for users with an Intel Core-i7 processor. Unfortunately, the Extreme Edition Intel Core i7-980X six-core CPU with twelve threads will not see full utilization.

    In our benchmark tests of Battlefield: Bad Company 2, the first three minutes of action in the single-player raft night scene are captured with FRAPS. Relative to the online multiplayer action, these frame rate results are nearly identical to daytime maps with the same video settings.

    The Frostbite-1.5 game engine in Battlefield: Bad Company 2 appears to equalize our test set of video cards, and despite AMD's sponsorship of the game it still plays well using any brand of graphics card.

    Cost Analysis: Battlefield: Bad Company 2 (1680x1050)

  • $150 Radeon HD 5770 costs $3.75 per FPS
  • $220 GeForce GTX 460 costs $4.04 per FPS
  • $200 Radeon HD 5830 costs $4.39 per FPS
  • $250 GeForce GTX 465 costs $4.82 per FPS
  • $290 Radeon HD 5850 costs $4.77 per FPS
  • $320 GeForce GTX 470 costs $4.97 per FPS

    Test Summary: Battlefield: Bad Company 2's Frostbite-1.5 game engine makes use of the 1GB frame buffer on the GeForce GTX 460 to post an 8-FPS gain over the ATI Radeon HD 5830. The surprise here is that the 1GB GeForce GTX 460 even surpassed the GTX 465, by 2 FPS. Despite trailing some of the others in this primarily DX10 engine, its relative price per frame is still better than ATI's Radeon HD 5830.

    DX11: BattleForge

    BattleForge is a free Massively Multiplayer Online Role-Playing Game (MMORPG) developed by EA Phenomic with DirectX-11 graphics capability. Combining strategic cooperative battles, the community of MMO games, and trading card gameplay, BattleForge players are free to put their creatures, spells and buildings into combinations they see fit. These units are represented in the form of digital cards from which you build your own unique army. With minimal resources and a custom tech tree to manage, the gameplay is unbelievably accessible and action-packed.

    Benchmark Reviews uses the built-in graphics benchmark to measure performance in BattleForge, using Very High quality settings (detail) and 8x anti-aliasing with auto multi-threading enabled. BattleForge is one of the first titles to take advantage of DirectX-11 in Windows 7, and offers a very robust color range throughout the busy battleground landscape. The charted results illustrate how performance measures-up between video cards when Screen Space Ambient Occlusion (SSAO) is enabled.

    Cost Analysis: BattleForge (1680x1050)

  • $150 Radeon HD 5770 costs $6.47 per FPS
  • $220 GeForce GTX 460 costs $5.71 per FPS
  • $200 Radeon HD 5830 costs $7.41 per FPS
  • $250 GeForce GTX 465 costs $6.00 per FPS
  • $290 Radeon HD 5850 costs $7.51 per FPS
  • $320 GeForce GTX 470 costs $6.07 per FPS

    Test Summary: With an unbiased appetite for raw DirectX-11 graphics performance, BattleForge appears impartial between ATI and NVIDIA products, even despite advertised sponsorship from AMD. When high-strain SSAO and 8x anti-aliasing are applied, the NVIDIA GeForce GTX 460 trumps ATI's Radeon HD 5830 at both test resolutions by nearly 12 FPS. Improved FPS performance gives the 1GB GeForce GTX 460 the best value of the bunch, and makes it the only card to deliver a sub-$6 cost per frame.

    DX11: Metro 2033

    Metro 2033 is an action-oriented video game with a combination of survival horror, and first-person shooter elements. The game is based on the novel Metro 2033 by Russian author Dmitry Glukhovsky. It was developed by 4A Games in Ukraine and released in March 2010 for Microsoft Windows. Metro 2033 uses the 4A game engine, developed by 4A Games. The 4A Engine supports DirectX-9, 10, and 11, along with NVIDIA PhysX and GeForce 3D Vision.

    The 4A engine is multi-threaded in such a way that only PhysX has a dedicated thread; it uses a task model without any pre-conditioning or pre/post-synchronizing, allowing tasks to be done in parallel. The 4A game engine can utilize a deferred shading pipeline, uses tessellation for greater performance, and also offers HDR (complete with blue shift), real-time reflections, color correction, film grain and noise; the engine also supports multi-core rendering.

    Metro 2033 features superior volumetric fog, double PhysX precision, object blur, sub-surface scattering for skin shaders, parallax mapping on all surfaces, and greater geometric detail with less aggressive LODs. Using PhysX, the engine adds features such as destructible environments, cloth and water simulation, and particles that can be fully affected by environmental factors.

    NVIDIA has been diligently working to promote Metro 2033, and for good reason: it is the most demanding PC video game we've ever tested. When their flagship GeForce GTX 480 struggles to produce 27 FPS with DirectX-11 anti-aliasing turned down to its lowest setting, you know that only the strongest graphics processors will generate playable frame rates. All of our tests disable the advanced PhysX options.

    Cost Analysis: Metro 2033 (1680x1050)

  • $150 Radeon HD 5770 costs $9.43 per FPS
  • $220 GeForce GTX 460 costs $10.68 per FPS
  • $200 Radeon HD 5830 costs $10.58 per FPS
  • $250 GeForce GTX 465 costs $11.96 per FPS
  • $290 Radeon HD 5850 costs $12.50 per FPS
  • $320 GeForce GTX 470 costs $11.85 per FPS

    Test Summary: There's no way to ignore the graphical demands of Metro 2033, and only the most powerful GPUs will deliver a decent visual experience unless you're willing to seriously tone down the settings. These demands make our selection of DX11 video cards appear flat, especially since even two GeForce GTX 480s combined into an SLI set produced only 46 FPS. While the quality settings will likely need to be reduced to medium levels (and advanced depth of field disabled) for adequate game play, the NVIDIA GeForce GTX 460 still pulled ahead of the Radeon HD 5830. And even with only a slight lead over the 5830, remember that the Radeon series cannot utilize PhysX, and Metro 2033 offers its advanced PhysX options only on NVIDIA GeForce video cards.

    Graphics Card       GPU Cores   Core Clock   Shader Clock   Memory Clock   Memory Amount   Memory Interface
    GeForce 9800 GTX+   128         740 MHz      1836 MHz       1100 MHz       512MB GDDR3     256-bit
    Radeon HD 4890      800         850 MHz      N/A            975 MHz        1024MB GDDR5    256-bit
    GeForce GTX 285     240         670 MHz      1550 MHz       1300 MHz       1024MB GDDR3    512-bit
    Radeon HD 5770      800         850 MHz      N/A            1200 MHz       1024MB GDDR5    128-bit
    GeForce GTX 460     336         675 MHz      1350 MHz       900 MHz        1024MB GDDR5    256-bit
    Radeon HD 5830      1120        800 MHz      N/A            1000 MHz       1024MB GDDR5    256-bit
    GeForce GTX 465     352         608 MHz      1215 MHz       802 MHz        1024MB GDDR5    256-bit
    Radeon HD 5850      1440        725 MHz      N/A            1000 MHz       1024MB GDDR5    256-bit
    GeForce GTX 470     448         608 MHz      1215 MHz       837 MHz        1280MB GDDR5    320-bit

    DX11: Unigine Heaven 2.1

    The Unigine "Heaven 2.1" benchmark is a freely available tool that exercises DirectX-11 graphics capabilities on Windows 7, or on Windows Vista with the platform update. It reveals the enchanting magic of floating islands with a tiny village hidden in the cloudy skies, and an interactive mode puts the experience of exploring this intricate world within reach. Through its advanced renderer, Unigine was one of the first to showcase art assets with tessellation, bringing compelling visual finesse, utilizing the technology to its full extent, and exhibiting the possibilities of enriched 3D gaming.

    The distinguishing feature of the Unigine Heaven benchmark is hardware tessellation: a scalable technology aimed at the automatic subdivision of polygons into smaller and finer pieces, so that developers can give their games a more detailed look almost free of charge in terms of performance. Thanks to this procedure, the rendered image approaches the boundary of truly lifelike visual perception (a simple sketch of that geometric growth follows the feature list below). The "Heaven" benchmark excels at providing the following key features:

    • Native support of OpenGL, DirectX 9, DirectX-10 and DirectX-11
    • Comprehensive use of tessellation technology
    • Advanced SSAO (screen-space ambient occlusion)
    • Volumetric cumulonimbus clouds generated by a physically accurate algorithm
    • Dynamic simulation of changing environment with high physical fidelity
    • Interactive experience with fly/walk-through modes
    • ATI Eyefinity support

    Unigine_Heaven_DX11_Benchmark.jpg

    Although Heaven 2.1 had only recently been released when we ran our DirectX-11 tests, its benchmark results were extremely close to those obtained with Heaven 1.0. Since only DX11-compliant video cards will properly run the Heaven benchmark, only those products that meet the requirement have been included.

    Cost Analysis: Unigine Heaven (1680x1050)

  • $150 Radeon HD 5770 costs $8.24 per FPS
  • $220 GeForce GTX 460 costs $7.67 per FPS
  • $200 Radeon HD 5830 costs $9.48 per FPS
  • $250 GeForce GTX 465 costs $9.47 per FPS
  • $290 Radeon HD 5850 costs $11.79 per FPS
  • $320 GeForce GTX 470 costs $10.53 per FPS
  • Test Summary: Our test results with the Unigine Heaven benchmark tool appear to deliver fair comparisons of DirectX-11 graphics cards. The 1GB NVIDIA GeForce GTX 460 clears past the Radeon HD 5830 by nearly 8 FPS, and again surpasses the GeForce GTX 465 in this test. Reviewers like to say "nobody plays a benchmark", but it seems evident that we can expect great things from a graphics tool this detailed. For now, those details only come by way of DirectX-11 video cards. Judged strictly on price per frame, the GeForce GTX 460 costs $1.81 less per frame than the Radeon HD 5830, and less per frame than every other DirectX-11 video card in this test.

    Graphics Card       GPU Cores   Core Clock   Shader Clock   Memory Clock   Memory Amount   Memory Interface
    GeForce 9800 GTX+   128         740 MHz      1836 MHz       1100 MHz       512MB GDDR3     256-bit
    Radeon HD 4890      800         850 MHz      N/A            975 MHz        1024MB GDDR5    256-bit
    GeForce GTX 285     240         670 MHz      1550 MHz       1300 MHz       1024MB GDDR3    512-bit
    Radeon HD 5770      800         850 MHz      N/A            1200 MHz       1024MB GDDR5    128-bit
    GeForce GTX 460     336         675 MHz      1350 MHz       900 MHz        1024MB GDDR5    256-bit
    Radeon HD 5830      1120        800 MHz      N/A            1000 MHz       1024MB GDDR5    256-bit
    GeForce GTX 465     352         608 MHz      1215 MHz       802 MHz        1024MB GDDR5    256-bit
    Radeon HD 5850      1440        725 MHz      N/A            1000 MHz       1024MB GDDR5    256-bit
    GeForce GTX 470     448         608 MHz      1215 MHz       837 MHz        1280MB GDDR5    320-bit

    NVIDIA APEX PhysX Enhancements

    Mafia II is the first PC video game to include the new NVIDIA APEX PhysX framework, a powerful feature set that only GeForce video cards are built to deliver. While console versions will make use of PhysX, only the PC version supports NVIDIA's APEX physics modeling engine, which adds the following features: APEX Destruction, APEX Clothing, APEX Vegetation, and APEX Turbulence. PhysX helps make object movement, such as cloth and debris, more fluid and lifelike. In this section, Benchmark Reviews details the differences with and without APEX PhysX enabled.

    We begin with a scene from the Mafia II benchmark test, which has the player pinned down behind a brick column as the enemy shoots at him. Examine the image below, which was taken with a Radeon HD 5850 configured with all settings turned to their highest and APEX PhysX support disabled:

    Mafia2_Cloth_High_No-PhysX.jpg

    No PhysX = Cloth Blending and Missing Debris

    Notice in the image above that when PhysX is disabled there is no broken stone debris on the ground. Cloth from the foreground character's trench coat blends into his leg and remains in a static position relative to his body, as does the clothing on other (AI) characters. Now inspect the image below, which uses the GeForce GTX 460 with APEX PhysX enabled:

    Mafia2_Cloth_High_PhysX.jpg

    Realistic Cloth and Debris - High Quality Settings With PhysX

    With APEX PhysX enabled, the cloth neatly sways with the contour of a character's body and doesn't bleed into solid objects such as body parts. Additionally, APEX Clothing improves realism by adding gravity and wind effects to clothing, allowing characters to look as they would in comparable real-world environments.

    Mafia2_PhysX_Fire.jpg

    Burning Destruction Smoke and Vapor Realism

    Flames aren't exactly new to video games, but smoke plumes and heat vapor that mimic realistic movement have never looked as real as they do with APEX Turbulence. Fire and explosions added to a destructible environment make a potent combination for virtual-world mayhem, showcasing the new APEX Destruction feature.

    Mafia2_PhysX_Glass.jpg

    Exploding Glass Shards and Bursting Flames

    NVIDIA PhysX has turned video game explosions into something worthy of cinema-level special effects. Bursting windows explode into several unique shards of glass, and destroyed crates burst into splintered kindling. Smoke swirls and moves as if there's an actual air current, and flames move out towards open space all on their own. Surprisingly, there is very little impact on FPS performance with APEX PhysX enabled on GeForce video cards, and very little penalty for changing from medium (normal) to high settings.

    NVIDIA 3D-Vision Effects

    Readers familiar with Benchmark Reviews have undoubtedly heard of NVIDIA GeForce 3D Vision technology; if not from our review of the product, then from the Editor's Choice Award it's earned, or the many times I've personally mentioned it in our articles. Put simply: it changes the game. 2010 has been a break-out year for 3D technology, and PC video games are leading the way. Mafia II expands on three-dimensional effects, and improves the 3D Vision experience with out-of-screen effects. For readers unfamiliar with the technology, 3D Vision is a feature only available on NVIDIA GeForce video cards.

    Mafia2_3d-Vision_Characters.jpg

    The first thing gamers should be aware of is the performance penalty for using 3D Vision with a high-demand game like Mafia II. Using a GeForce GTX 480 video card for reference, currently the most powerful single-GPU graphics solution available, we experienced frame rates up to 33 FPS with all settings configured to their highest and APEX PhysX set to high. Historically, enabling 3D Vision decreased the frame rate by about 50%. That is no longer a hard-and-fast rule, thanks to '3D Vision Ready' game titles that offer performance optimizations. Mafia II proved that the 3D Vision performance penalty can be as little as 30% with a single GeForce GTX 480 video card, or a mere 11% in an SLI configuration. NVIDIA Forceware drivers will guide players toward recommended adjustments tailored to each game they play, but PhysX and anti-aliasing will still reduce frame rate performance.

    Mafia2_3d-Vision_Tire_Door.jpg

    Of course, the out-of-screen effects are worth every dollar you spend on graphics hardware. In the image above, an explosion sends the car's wheel and door flying into the player's face, followed by metal debris and sparks. When you're playing, this certainly helps to catch your attention... and when the objects become bullets passing by you, the added sense of depth aids player awareness.

    Mafia2_3d-Vision_Debris.jpg

    Combined with APEX PhysX technology, NVIDIA's 3D Vision brings destructible walls to life. As enemies shoot at the brick column, dirt and dust fly past the player, and stones tumble out towards you. Again, the added sense of depth can help players pinpoint the origin of an enemy threat and improve response time without sustaining 'confusion damage'.

    Mafia2_3d-Vision_Smoke_Plumes.jpg

    NVIDIA APEX Turbulence, a new PhysX feature, already adds an impressive level of realism to games (such as Mafia II, pictured in this section). Watching plumes of smoke and flames spill out towards your camera angle helps put you right in the thick of the action.

    Mafia2_3d-Vision_Out-of-Screen.jpg

    NVIDIA 3D-Vision and 3D-Vision Surround are the perfect addition to APEX PhysX technology, and capable video games prove that these features reproduce lifelike scenery and destruction when used together. Glowing embers and fiery shards shooting past you seem very real when 3D Vision is paired with APEX PhysX, and there's finally a good reason to overpower the PC's graphics system.

    GeForce GTX460 Temperatures

    Benchmark tests are always nice, so long as you care about comparing one product to another. But when you're an overclocker, gamer, or merely a PC hardware enthusiast who likes to tweak things on occasion, there's no substitute for good information. Benchmark Reviews has a very popular guide written on Overclocking Video Cards, which gives detailed instructions on how to tweak a graphics card for better performance. Of course, not every video card has overclocking headroom. Some products run so hot that they can't suffer any higher temperatures than they already reach. This is why we measure the operating temperature of the video card products we test.

    To begin my testing, I use GPU-Z to measure the temperature at idle as reported by the GPU. Next I use FurMark's "Torture Test" to generate maximum thermal load and record GPU temperatures at high-power 3D mode. The ambient room temperature remained at a stable 20°C throughout testing, while the inner-case temperature hovered around 36°C.

    FurMark does two things extremely well: it drives the thermal output of any graphics processor higher than applications or video games realistically could, and it does so consistently every time. FurMark works great for testing the stability of a GPU as temperatures rise toward their highest possible output. The temperatures discussed below are absolute maximum values, not representative of real-world performance.

    NVIDIA-GeForce-GTX-460-Temperature.jpg

    NVIDIA GeForce GTX 460 1GB Video Card Temperatures

    NVIDIA-supplied product specifications state that the GeForce GTX 460 has a maximum GPU thermal threshold of 104°C; we've noted that this new GF104 threshold is one degree lower than GF100 Fermi products. In a room with a 20°C ambient temperature, the NVIDIA GeForce GTX 460 idled at a mere 26°C while remaining perfectly silent. After FurMark's torture test ran the GPU up to a rather lukewarm 65°C under full load, the fan remained inaudible. After a series of warm-blooded GeForce products, the GTX 460 operates stone cold in comparison.

    Most new graphics cards from NVIDIA and ATI expel heated air through exhaust vents, so they do not increase the internal case temperature. Our test system is an open-air chassis, which leaves the video card to depend on its own cooling solution for proper thermal management. Most gamers and PC hardware enthusiasts use an aftermarket computer case with intake and exhaust fans, which creates a directional airflow current and will usually lower internal temperatures a few degrees below the measurements we've recorded. To demonstrate this, we've built a system to illustrate the...

    Best-Case Scenario

    Traditional tower-style computer cases position internal hardware so that heat is expelled out through the back of the unit. This is better than nothing, but there's a fundamental problem: heat rises. Using the transverse mount design on the SilverStone Raven-2 chassis, Benchmark Reviews re-tested the NVIDIA GeForce GTX 460 video card to determine the 'best-case' scenario.

    Sitting idle at the Windows 7 desktop with a 20°C ambient room temperature, the GeForce GTX 460 rested at 27°C, actually one degree higher than measured in a regular computer case. Pushed to abnormally high levels using the FurMark torture test, the GeForce GTX 460 operated at 67°C with a very quiet cooling fan. After some investigation, it seems that the reference thermal solution is better suited to a horizontal orientation. Although the well-designed Raven-2 computer case offers additional cooling features and has made a difference with other video cards, that wasn't the case with the GTX 460... not that it matters at temperatures this low.

    NVIDIA-GeForce-GTX-460-Heatsink-Cooler.jpg

    NVIDIA GeForce GTX 460 Thermal Cooling Solution Heatsink

    In the traditional (horizontal) position, the slightly angled heat-pipe rods use gravity and their sintered wicks to draw cooled liquid back down to the base. When positioned in a transverse-mount case such as the SilverStone Raven-2, the NVIDIA GeForce GTX 460 heatsink loses effectiveness in the lowest heat-pipe rod, because gravity keeps the cooled liquid pooled in the lowest portion of the rod within the finsink.

    VGA Power Consumption

    Life is not as affordable as it used to be: gasoline, natural gas, and electricity all top the list of resources that have exploded in price over the past few years. Add to this the limits of non-renewable resources against current demand, and you can see that prices are only going to get worse. Planet Earth needs our help, and needs it badly. With forests becoming barren of vegetation and snow-capped poles quickly turning brown, the technology industry has a new attitude towards turning "green". I'll spare you the powerful marketing hype that various manufacturers send out every day, and get right to the point: your computer hasn't been doing much to help save energy... at least up until now.

    For power consumption tests, Benchmark Reviews utilizes the 80-PLUS GOLD certified OCZ Z-Series Gold 850W PSU, model OCZZ850. This power supply unit has been tested by Chroma System Solutions to provide over 90% typical efficiency; however, our results are not adjusted for that efficiency. To measure isolated video card power consumption, Benchmark Reviews uses the Kill-A-Watt EZ (model P4460) power meter made by P3 International.

    A baseline reading is taken without a video card installed in our test computer system, which is allowed to boot into Windows 7 and rest idle at the login screen before power consumption is recorded. Once the baseline reading has been taken, the graphics card is installed and the system is again booted into Windows and left idle at the login screen. Our final loaded power consumption reading is taken while the video card runs a FurMark stress test. Below is a chart with the isolated video card power consumption (not the system total) displayed in watts for each test product:

    Video Card Power Consumption by Benchmark Reviews
    (sorted by combined total power)

    VGA Product Description                             Idle Power   Loaded Power
    NVIDIA GeForce GTX 480 SLI Set                      82 W         655 W
    NVIDIA GeForce GTX 590 Reference Design             53 W         396 W
    ATI Radeon HD 4870 X2 Reference Design              100 W        320 W
    AMD Radeon HD 6990 Reference Design                 46 W         350 W
    NVIDIA GeForce GTX 295 Reference Design             74 W         302 W
    ASUS GeForce GTX 480 Reference Design               39 W         315 W
    ATI Radeon HD 5970 Reference Design                 48 W         299 W
    NVIDIA GeForce GTX 690 Reference Design             25 W         321 W
    ATI Radeon HD 4850 CrossFireX Set                   123 W        210 W
    ATI Radeon HD 4890 Reference Design                 65 W         268 W
    AMD Radeon HD 7970 Reference Design                 21 W         311 W
    NVIDIA GeForce GTX 470 Reference Design             42 W         278 W
    NVIDIA GeForce GTX 580 Reference Design             31 W         246 W
    NVIDIA GeForce GTX 570 Reference Design             31 W         241 W
    ATI Radeon HD 5870 Reference Design                 25 W         240 W
    ATI Radeon HD 6970 Reference Design                 24 W         233 W
    NVIDIA GeForce GTX 465 Reference Design             36 W         219 W
    NVIDIA GeForce GTX 680 Reference Design             14 W         243 W
    Sapphire Radeon HD 4850 X2 11139-00-40R             73 W         180 W
    NVIDIA GeForce 9800 GX2 Reference Design            85 W         186 W
    NVIDIA GeForce GTX 780 Reference Design             10 W         275 W
    NVIDIA GeForce GTX 770 Reference Design             9 W          256 W
    NVIDIA GeForce GTX 280 Reference Design             35 W         225 W
    NVIDIA GeForce GTX 260 (216) Reference Design       42 W         203 W
    ATI Radeon HD 4870 Reference Design                 58 W         166 W
    NVIDIA GeForce GTX 560 Ti Reference Design          17 W         199 W
    NVIDIA GeForce GTX 460 Reference Design             18 W         167 W
    AMD Radeon HD 6870 Reference Design                 20 W         162 W
    NVIDIA GeForce GTX 670 Reference Design             14 W         167 W
    ATI Radeon HD 5850 Reference Design                 24 W         157 W
    NVIDIA GeForce GTX 650 Ti BOOST Reference Design    8 W          164 W
    AMD Radeon HD 6850 Reference Design                 20 W         139 W
    NVIDIA GeForce 8800 GT Reference Design             31 W         133 W
    ATI Radeon HD 4770 RV740 GDDR5 Reference Design     37 W         120 W
    ATI Radeon HD 5770 Reference Design                 16 W         122 W
    NVIDIA GeForce GTS 450 Reference Design             22 W         115 W
    NVIDIA GeForce GTX 650 Ti Reference Design          12 W         112 W
    ATI Radeon HD 4670 Reference Design                 9 W          70 W

    * Results are accurate to within +/- 5W.

    The NVIDIA GeForce GTX 460 requires two six-pin PCI-E power connections. Resting at idle, the card consumed only 18 watts of electricity... 8W less than the ATI Radeon HD 5830 and exactly half the amount required by the GeForce GTX 465. Once 3D applications began to demand power from the GPU, electrical consumption climbed to full throttle: measured under a FurMark 'torture' load, the GeForce GTX 460 consumed 167 watts, slightly higher than the Radeon 5830 but well below the 219W consumed by the GTX 465. Although the GF104 Fermi GPU shares the same 40nm fabrication process as the GF100, it's clear that NVIDIA's GTX 460 is better suited for 'green' enthusiasts.

    GeForce GTX 460 Overclocking

    If there's one particular message this article should impress upon the reader, it's that the NVIDIA GeForce GTX 460 is an overclocker's dream. The GF104-equipped GTX 460 already comes with an impressive stock clock of 675/1350 MHz, with the GDDR5 running at 900 MHz (1800 MHz effective). Putting this into perspective, these speeds fall between the GeForce GTX 470 and GTX 480 (closer to the latter). Now comes the fun part: overclocking the GeForce GTX 460 is as easy as it's ever been.

    Back in the day, software overclocking tools were few and far between. Benchmark Reviews was literally put on the map with my first article: Overclocking the NVIDIA GeForce Video Card. Although slightly dated, that article is still relevant for enthusiasts wanting to permanently flash their overclock onto the video card's BIOS. Unfortunately, most users are not willing to commit their investment to such risky changes, and feel safer with temporary adjustments that can be easily undone with a reboot. That's the impetus behind the sudden popularity of software-based GPU overclocking tools.

    NVIDIA offers one such tool with their System Tools suite, formerly available as NVIDIA nTune. While the NVIDIA Control Panel interface is very easy to understand and navigate, its downfall lies in the tool's simplicity: it doesn't offer the overclocking range that AIC partners provide in their own branded software tools. For example, using the NVIDIA System Tools utility to overclock the 1GB GeForce GTX 460, I was able to set the graphics clock up to 1350 MHz (far beyond any realistic range), but the memory clock was limited to 2160 MHz. As it turns out, overclocking the GTX 460's GDDR5 from 1800 MHz to well past 2160 MHz was possible.

    NVIDIA-nTune-GeForce-GTX-460-1GB.png

    After hitting a brick wall with the NVIDIA System Tools utility (nTune), I turned to the most popular brand name on the overclocking scene: MSI. While not the biggest company selling motherboards and video cards, they're definitely the most aggressive brand, with some of the highest-quality products sold. MSI offers their Afterburner utility (based on RivaTuner), and also a Kombustor tool (based on FurMark). Knowing that the GeForce GTX 460 would need a wider range of clock speeds, I began overclocking with MSI Afterburner 1.6.1.

    My mission was simple: locate the highest possible overclock without adding any additional voltage. Starting with memory, I slowly raised the GDDR5 clock speed... until I once again reached the limit of the tool: 1170 MHz (2340 MHz data rate). While I would have liked to push the NVIDIA GeForce GTX 460's 1GB of GDDR5 as far as it could possibly go, the jump from 900 to 1170 MHz yielded very little frame rate improvement. Using only the memory overclock, Aliens vs Predator performance rose from 22.2 to 23.5 FPS at 1920x1200, and Far Cry 2 performance was bumped from 57.36 to 58.85 FPS. Now it was time to work some magic on the GF104 GPU.

    MSI-Afterburner-GeForce-GTX-460-1GB.png

    As a best practice, it's good to find the maximum stable GPU clock speed and then drop back 10 MHz or more. While the GeForce GTX 460 was stable in many tests up to 880 MHz, there was the occasional graphics defect. In the end, I decided that 855 MHz with full-time stability is a far better proposition than crashing out midway through battle. Adding the 855/1710 MHz GPU overclock on top of the 1170 MHz GDDR5 overclock resulted in some very impressive gains!

    Far Cry 2 video frame rate performance increased from 57.36 FPS to 71.67, equaling a 25% bump in performance for this DirectX-10 video game. That's a nice start, but DirectX-11 is really where the market's at these days... so I turned to AvP for another series of test runs.

    Aliens vs Predator jumped from 22.2 FPS at stock settings to 28.3 FPS, a nearly 28% performance gain. What does that 28% mean to you? For a mere $200~$220, the NVIDIA GeForce GTX 460 just matched the $400 ATI Radeon HD 5870. That's right, ATI, you read that correctly: NVIDIA's $200 product can perform as well as your $400 flagship video card. Ouch! That's gotta hurt.

    In the next section, I offer my opinion on Fermi's updated architecture and add a little salt into ATI's fresh wound...

    Editor's Opinion: NVIDIA Fermi

    My opinion of NVIDIA's Fermi architecture has changed over the past several months, as they've developed the graphics processor to fully embody a long-term plan that was originally unclear. Testing NVIDIA's GF100 GPU held its own set of challenges, and at times the video cards based on this graphics processor seemed condemned by an inherited legacy of problems. From the flagship GeForce GTX 480 down to the GTX 465, Fermi impressed gamers with strong FPS performance... and that was about it. Thermal output and power consumption were unfashionably high, a point on which ATI constantly and consistently focused their marketing attacks. Then along comes the GF104 on the GeForce GTX 460.

    NVIDIA's GeForce GTX 460 not only changes the collective opinion of the Fermi architecture, it also changes the GPU landscape. ATI held the upper hand by releasing a DirectX-11 video card first, but they've painted themselves into a corner with their Evergreen GPUs. Unlike NVIDIA's Fermi architecture, which can shape-shift as desired, ATI's Cedar, Redwood, and Juniper GPUs are all simply slices of the same processor: Cypress. This is where intelligent consumers will spot the flaw: ATI came to the (video) card game and showed their entire hand on the first deal, while NVIDIA kept a few spare aces up their sleeves. NVIDIA's GeForce GTX 480 is only 15/16 of the complete GF100 package, and we're just beginning to see what's possible with a 7/8-enabled GF104 GPU. It's unknown what NVIDIA has planned for the GF102, GF106, and GF108... although speculation is rampant.

    So now ATI and NVIDIA are even-steven in the race for DirectX-11, and all they need are video games that increase demand for their products. This becomes a real problem (for them both) because very few existing games demand more graphical processing power than games did back in 2006. Video cards have certainly gotten bigger and faster, but video games have lacked fresh development. DirectX-10 helped the industry, but every step forward came with two steps back because of the dislike for Microsoft's Windows Vista O/S. With DirectX-11 introduced in Windows 7 (and also available for Windows Vista through an update), enthusiasts now have DirectX-11 detail and special effects in their video games.

    NVIDIA-GeForce-Fermi-Product-Family.jpg

    NVIDIA GeForce Fermi Graphics Card Family

    Even if you're only after raw gaming performance and have no real-world interest in CUDA, there's reason to appreciate the GF100 GPU. New enhancement products, such as the NVIDIA GeForce 3D Vision Gaming Kit, double the demands on frame rate output and hence require more powerful graphics processing. This is where products like the GeForce GTX470 and GTX480 deliver the performance necessary to enjoy the extended gaming experience. I'm a huge fan of GeForce 3D-Vision, which is why it's earned our Editor's Choice Award, and Fermi delivers the power necessary to drive up to three monitors. The newly dubbed NVIDIA 3D-Vision Surround (stereo) requires three 3D-Vision capable LCD, projector, or DLP devices and offers bezel correction support. Alternatively, NVIDIA Surround (non-stereo) supports mixed displays with common resolution/timing.

    Even some older game titles benefit from the Fermi architecture, beyond just an increase in frame rates. For example, Far Cry 2 receives 32x CSAA functionality native to the game, and future NVIDIA Forceware driver updates could add further new features to existing co-developed video games. Additionally, NVIDIA NEXUS technology brings CPU and GPU code development together in Microsoft Visual Studio 2008 with a shared process timeline, and introduces the first hardware-based shader debugger. NVIDIA's GF100 is the first GPU to offer full C++ support, the programming language of choice among game developers.

    Fermi is also the first GPU to support Error Correcting Code (ECC) based protection of data in memory. ECC was requested by GPU computing users to enhance data integrity in high performance computing environments. ECC is a highly desired feature in areas such as medical imaging and large-scale cluster computing. Naturally occurring radiation can cause a bit stored in memory to be altered, resulting in a soft error. ECC technology detects and corrects single-bit soft errors before they affect the system. Fermi's register files, shared memories, L1 caches, L2 cache, and DRAM memory are ECC protected, making it not only the most powerful GPU for HPC applications, but also the most reliable. In addition, Fermi supports industry standards for checking of data during transmission from chip to chip. All NVIDIA GPUs include support for the PCI Express standard for CRC check with retry at the data link layer. Fermi also supports the similar GDDR5 standard for CRC check with retry (aka "EDC") during transmission of data across the memory bus.

    The true potential of NVIDIA's Fermi architecture has yet to be seen. Sure, we've already poked around at its inner workings for our NVIDIA GF100 GPU Fermi Graphics Architecture article, but so much more goes untested. Well into 2010, only a beta version of the Folding@Home client is available. Work-unit performance on the GeForce GTX 400 series is going to surpass ATI's Radeon HD 5000 series equivalents without much struggle, but it's uncertain how much better that performance will be compared to previous generations.

    GeForce GTX 460 Conclusion

    Although the rating and final score mentioned in this conclusion are made to be as objective as possible, please be advised that every author perceives these factors differently at various points in time. While we each do our best to ensure that all aspects of the product are considered, there are often unforeseen market conditions and manufacturer changes which occur after publication that could render our rating obsolete. Please do not base any purchase solely on our conclusion, as it represents our rating of the sample received, which may differ from retail versions. Benchmark Reviews begins our conclusion with a short summary for each of the areas that we rate.

    Performance delivers a big win for the GeForce GTX 460 over ATI's Radeon HD 5830 video card, which shares a similar $200~$220 retail price point. Our performance rating considers how effectively the 1GB NVIDIA GeForce GTX 460 DirectX-11 video card performs against competing products from both ATI and NVIDIA. Taking all of the collected results into consideration, the reference 1GB GeForce GTX 460 outperformed ATI's Radeon HD 5830 in each and every benchmark test conducted. Further widening the divide, NVIDIA's GeForce GTX 460 offers PhysX and 3D-Vision functionality, produces less heat, and requires less power at idle than the Radeon 5830. It easily captures the best price-to-performance ratio in most games tested, especially BattleForge. At the end of our test gauntlet, the results prove that NVIDIA's GeForce GTX 460 is the best choice at $220... and also a better choice than their own GeForce GTX 465.

    Zotac-GeForce-GTX-460-Video-Card.jpg

    Zotac Synergy Edition GeForce GTX 460 with Display Port

    Appearance is going to be a more subjective matter, since so many of NVIDIA's AIC partners have custom cooling solutions each with their own unique look. Some GeForce GTX 460 products are going to keep the reference look we've seen in this article, while others are going to borrow from the upper-end GeForce GTX 465 design (as illustrated above by the Synergy Edition GeForce GTX 460). To my own surprise, the designs all differentiate themselves from other Add-In Card partners who typically clone the reference appearance. This also means that function takes precedence over fashion, and in at least one case, we see the inclusion of DisplayPort beside the traditional dual-DVI and HDMI outputs.

    In terms of video card pecking order, the NVIDIA GeForce GTX 460 occupies the #4 spot in the NVIDIA product lineup, just barely behind the GTX 465, and settles between the ATI Radeon HD 5830 and 5850. With so much power in a mid-level graphics board, this often creates interest in paired SLI sets. Unfortunately, triple-SLI is not an option... but NVIDIA 3D-Vision Surround certainly is. As the first GF104 product (and fourth Fermi iteration), the GeForce GTX 460 has been designed with the same solid construction as its predecessors. There are exposed electronics on the back of the PCB, but nothing that protrudes enough to require a metal back-plate for protection. The top side of the graphics card features a protective plastic fan shroud with a recessed concave opening for the 75mm fan, which allows for airflow in SLI configurations.

    While most PC gamers and hardware enthusiasts buy a discrete graphics card for the sole purpose of playing video games, there's a small niche who depend on extra features beyond fast frame rates. NVIDIA is the market leader in GPGPU functionality, and it's no surprise to see CPU-level technology available in their GPU products. NVIDIA's Fermi architecture is the first GPU architecture to support Error Correcting Code (ECC), a feature that benefits both personal and professional users. Proprietary technologies such as the NVIDIA Parallel DataCache and NVIDIA GigaThread Engine further add value to GPGPU functionality. Additionally, applications such as Adobe Photoshop or Premiere can take advantage of GPGPU processing power. In case the point hasn't already been driven home, don't forget that 3D Vision and PhysX are technologies only available with NVIDIA.

    As of launch day (12 July 2010), the NVIDIA GeForce GTX 460 video card is available at NewEgg in several different configurations. Here's a short list of their initial offerings:

    GeForce GTX 460 - 1GB Versions:
    • $220 Palit NE5X460SF1102 GeForce GTX 460 Sonic 1GB
    • $230 GIGABYTE GV-N460OC-1GI GeForce GTX 460 1GB
    • $245 Palit NE5X460HF1102 GeForce GTX 460 SONIC PLATINUM

    GeForce GTX 460 - 768MB Versions:
    • $200 ASUS ENGTX460/2DI/768MD5 GeForce GTX 460 768MB
    • $200 MSI N460GTX-M2D768D5 GeForce GTX 460 768MB
    • $200 GIGABYTE GV-N460D5-768I-B GeForce GTX 460 768MB
    • $200 EVGA 768-P3-1360-TR GeForce GTX 460 768MB
    • $200 Palit NE5TX460FHD79 GeForce GTX 460 768MB
    • $210 EVGA 768-P3-1362-TR GeForce GTX 460 Superclocked
    • $210 GIGABYTE GV-N460OC-768I GeForce GTX 460 OC 768MB
    • $210 MSI N460GTX Cyclone 768D5/OC GeForce GTX 460 768MB
    • $220 EVGA 768-P3-1362-AR GeForce GTX 460 Superclocked

    Defining product value means something different to everyone. Some readers take heat and power consumption into consideration, while others are only concerned with FPS performance. With regard to value, there are several ways to look at the $220 GeForce GTX 460 1GB GDDR5 version and compare it to its closest rivals, such as the $200 ATI Radeon HD 5830... or even the $250 GTX 465. In terms of product price to FPS performance, the GeForce GTX 460 is one of the most affordable DirectX-11 video cards available. Here's a breakdown of the average cost (USD) per FPS recorded across the ten tests conducted for this article:

    • $150 Radeon HD 5770 costs $7.14 per FPS
    • $200 GeForce GTX 460 costs $7.03 per FPS (768MB version)
    • $220 GeForce GTX 460 costs $7.12 per FPS (1GB version)
    • $200 Radeon HD 5830 costs $7.90 per FPS
    • $250 GeForce GTX 465 costs $8.35 per FPS
    • $290 Radeon HD 5850 costs $8.97 per FPS
    • $320 GeForce GTX 470 costs $8.49 per FPS

    In conclusion, NVIDIA's GF104 Fermi GPU is exactly what the mid-range discrete graphics market needed. The 768MB version of the GeForce GTX 460 already beats the ATI Radeon HD 5830 at the $200 price point, and the 1GB version further deepens the divide. It's great to see NVIDIA offer budget-minded gamers so much with the GTX 460, primarily because this video card wins the segment, but also because it overclocks well into the next level of graphics products. To the delight of many, temperatures are way down, suggesting this could be the coolest-running mid-level NVIDIA video card in a very long time. Idle power draw was a mere 18 watts by our measure, demonstrating that the GeForce GTX 460 is more efficient than ATI's Radeon HD 5830 - and that it demands half the idle power of the GTX 465. With a myriad of custom cooling solutions and video output options coming from NVIDIA's AIC partners, expect the GeForce GTX 460 to improve public opinion of the Fermi architecture.

    EDITOR'S NOTE: Benchmark Reviews has separately published our review of the 768MB NVIDIA GeForce GTX 460.

    Pros:

    + Impressively cool operating temperatures!
    + Consumes only 18 watts at idle
    + Best performer at $200~$220 - easily beats Radeon 5830
    + Excellent price-to-performance cost ratio
    + Fermi Introduces Error Correcting Code (ECC)
    + 30% Overclock performance matches Radeon HD 5870
    + Fan exhausts all heated air outside of case
    + Quiet cooling fan under loaded operation
    + Includes native HDMI audio/video output
    + Adds 32x CSAA post-processing detail
    + Surpasses GeForce GTX 465 performance in many tests
    + Supports SLI functionality

    Cons:

    - Triple-SLI not supported

    Ratings:

    • Performance: 8.75
    • Appearance: 9.00
    • Construction: 9.75
    • Functionality: 9.50
    • Value: 8.75

    Final Score: 9.15 out of 10.

    Excellence Achievement: Benchmark Reviews Golden Tachometer Award.

    Benchmark Reviews encourages you to leave comments (below), or ask questions and join the discussion in our Forum.



    Comments 

     
    # sli - Federico La Morgia, 2010-07-11 20:44
    benchmark with sli ?
     
     
    # RE: sli - Olin Coles, 2010-07-11 20:47
    Sorry, but there were no SLI tests completed at the time of launch. NVIDIA sent us the 1GB and 768MB versions, and SLI requires identical memory configurations.

    We expect to have SLI results within the next few days.
     
     
    # no ATI 5870? - amdfreak, 2010-07-16 06:39
    why wouldn't the ATI 5870 be used in the testing? Nvidia wouldn't allow it? both top products would make better comparison?
     
     
    # RE: no ATI 5870? - Olin Coles, 2010-07-16 07:15
    The 5870 is a $420 video card, and the GTX 460 is half its price... they don't compete in the same market segment at all. If this were an SLI review it would make sense, but it was a single card. The Radeon 5850 was compared, which is a $290 product.
     
     
    # RE: RE: no ATI 5870? - John Darcy, 2010-07-16 09:30
    the gtx460 looked attractive to me, an owner of 2 asus 5870s v1 and v2, each in a different build - for now. I was really miffed at the enormous cost of the cards and how they are deliberately held back to run at such a low speed, and it is so difficult to undo that artificially low 'idle' speed. I paid $400 for these cards that idle at 157/300, while my third card is the 4870, which idles at 600/900, can be reduced easily or clocked up, and cost less than half when new. Runs at similar temps. (Of course, there are other differences.)
     
     
    # RE: NVIDIA GeForce GTX-460 1GB Video Card - Leon Lionheart, 2010-07-11 22:41
    Why is it I smell nVidia bias in this review? The other results I've seen don't lean nearly as much to the 460...
     
     
    # RE: RE: NVIDIA GeForce GTX-460 1GB Video Card - Olin Coles, 2010-07-11 22:48
    If you make a claim that discredits someone's work, you should back it up with credible examples. I'd like to know what review HAS NOT said the GTX 460 is better than the Radeon 5830? When a product has proven itself to be better than the competition, it's not called bias... it's called succeeding.
     
     
    # RE: RE: RE: NVIDIA GeForce GTX-460 1GB Video Card - Leon Lionheart, 2010-07-11 22:53
    Ok then, let's think about how nVidia PR likes to skew things (look at the Dirt 2 demo) and assume that they went through and cherry-picked the best for reviews.

    As for the bias statement, there are blatant bashes at ATi about how an OVERCLOCKED card manages to beat a more expensive one at stock. That seems like the old "apples to oranges" test methodology there.
     
     
    # RE: RE: RE: RE: NVIDIA GeForce GTX-460 1GB Video Card - Olin Coles, 2010-07-11 22:57
    So what you're saying is that all of the reviews across the web received cherry-picked samples? Possible, but highly unlikely. I'm still waiting for you to point at a review that says the 5830 beats the GTX 460. As for my statement about the overclocked $200 GTX 460 matching a stock $400 Radeon 5870, I think you need to ask yourself which one a cash-poor gamer would rather spend money on. You're also missing the point: a $200 video card just attained the level of a $400 video card... for FREE.
     
     
    # RE: RE: RE: RE: RE: NVIDIA GeForce GTX-460 1GB Video Card - Leon Lionheart, 2010-07-11 23:02
    And where does that same $400 card compare when you overclock it? Slap that 5830 back in there, OC it to the point where it will go no higher and run the tests again.

    And yes, nVidia has that habit. How many of their other retail cards performed the EXACT same as the review cards? I also never claimed to know a review where somebody said the 5830 was better, but that could also be noted as there being so few reviews out ATM. Only time will tell, so quit using the time advantage. I've always found the OverclockersClub reviews to be high quality, and after seeing this hodge-podge of results I'm believing it more.
     
     
    # RE: RE: RE: RE: RE: RE: NVIDIA GeForce GTX-460 1GB Video Card - Olin Coles, 2010-07-11 23:09
    You're crazy to think that gamers wouldn't rather spend $200 and overclock their GTX 460, than to spend twice that amount on one low-headroom Radeon 5870. The GTX 460 overclocks by 30%, and the Radeon 5870 is 10% at best (we've already tested overclocked versions in other articles). Even still: get two GTX 460's for the same amount as one 5870, overclock and add them together in SLI... which do you think wins by a landslide?
     
     
    # RE: RE: RE: RE: RE: RE: RE: NVIDIA GeForce GTX-460 1GB Video Card - Leon Lionheart, 2010-07-11 23:28
    The people that decided not to get energy hogs? The people that plan on getting a 5830 because of its Eyefinity feature? Features help sell cards, and anybody wanting a multi-monitor solution is going to go ATi.

    Love how you quote the 5830's power consumption but don't bother putting it on the chart. According to what IS on there, the 460 pulls more energy than the 5850, so how is that a win?

    I give up though, because there are always those people that are bull-headed and see no other way besides their own.
     
     
    # RE: RE: RE: RE: RE: RE: RE: RE: NVIDIA GeForce GTX-460 1GB Video Card - metalisticpain, 2010-07-11 23:54
    I am with Olin on this one. They have a pretty decent disclaimer in this conclusion, for one.

    I myself am not strapped for cash and I don't know/can't be bothered to learn how to overclock properly, so I am still going to purchase a 5870.

    But Leon, you never did say which other reviews showed different results? What other results have you seen that say otherwise??
     
     
    # RE: RE: RE: RE: RE: RE: RE: RE: RE: NVIDIA GeForce GTX-460 1GB Video Card - Leon Lionheart, 2010-07-12 00:07
    Currently Tomshardware shows that the 5830 wins on some things but loses on others (not by a huge margin, mind you). Still waiting on OverclockersClub to get a review published...
     
     
    # RE: RE: RE: RE: RE: RE: RE: RE: RE: RE: NVIDIA GeForce GTX-460 1GB Video Card - Olin Coles, 2010-07-12 09:05
    These results really speak for themselves, and I'm confident that other websites will come to the same conclusion in their reviews of the GeForce GTX 460. My test results may not agree with your opinion, but that shouldn't provoke you to claim bias.
     
     
    # RE: RE: RE: RE: RE: RE: RE: RE: RE: RE: RE: NVIDIA GeForce GTX-460 1GB Video Card - Yodamanjaro, 2010-07-14 11:33
    I believe the results here are good enough to point out that anyone in their right mind who is tight on cash would buy a 460 instead of a 5870 (or a 5830 in this case).

    Also, I think if you are tech-savvy enough to install a video card (which doesn't take much brain power up there, people), you should be able to download the driver, or pop in the CD (that it comes with, mind you) and move a lever over to a bit under what Olin found, and you should be okay (as long as the case has sufficient air-flow so it doesn't overheat).

    To justify the O.C. card vs a stock card, I would say this: if they were even CLOSE in price, it would matter, and I would cry foul. But the FACT that the 460 is so much CHEAPER would definitely matter to someone who is low on cash, but wants a good upgrade.

    Feelings aside, I just bought a 480, and I'm curious how two 460s would do in SLI vs my MSI 480 (even if the 460s would be O.C.'d).
     
     
    # RE: RE: RE: RE: RE: RE: RE: RE: NVIDIA GeForce GTX-460 1GB Video Card - David Ramsey, 2010-07-12 02:58
    "Features help sell cards, and anybody wanting a multi-monitor solution is going to go ATi."

    Fine. What if the features I want are CUDA, PhysX, and 3D Vision? And you can get triple-monitor support (with bezel correction) on a two-460 SLI system for less than a single 5870.

    Opinions are great, but at the end of the day, empiricism rules. Olin ran the tests. You haven't. If you think his methodology flawed, please explain why; if you think his results incorrect, point to supporting data.
     
     
    # RE: RE: RE: RE: RE: RE: RE: RE: RE: NVIDIA GeForce GTX-460 1GB Video Card - Justin Park, 2010-07-22 21:06
    I'm actually debating whether to buy a new graphics card...
    My current one is an HD4850, and I wonder if I would see any noticeable difference in performance when playing high-end games such as Crysis and BF:BC2.
    Would it be worth the money?
    Also, I'm impressed by the GTX460; I think it's making up for its brothers, the GTX470 and GTX480, which didn't quite live up to people's expectations.
    So what would be the best bang-for-buck card out there right now?
    And I don't know why you say that the HD5870 has little headroom for OC.
    I helped a friend OC it and it went pretty far and seemed to do well.
     
     
    # RE: RE: RE: RE: RE: RE: RE: RE: RE: RE: NVIDIA GeForce GTX-460 1GB Video Card - Olin Coles, 2010-07-23 07:16
    @Justin: I actually tested a Radeon HD 4870, which is faster than the HD 4850, in both Crysis and BF:BC2. The results are right there in the article, so have a look. The GeForce GTX 460 clearly offers the best bang for buck, as illustrated by the price-per-frame comparison. You did read this article, right?
     
     
    # whatever.. - Klyster, 2010-07-12 01:50
    I've been a dedicated reader of this site for a couple of years now; I like the impartial reviews and also the links to off-site reviews.
    As for cherry picking, I have been looking for reviews to the contrary, but in all honesty, pretty much everyone is saying the same things, Anand and Toms included. Testing methodology may differ a little but the conclusions are roughly the same.
    Leon is just #-stirring and Michael must be a little delicate.
    This may be my next card. I can afford better, but I won't spend ridiculous amounts of money on a graphics card anymore, and it's nearly time to update my trusty 260.

    Good review Olin, thanks.
     
     
    # RE: whatever.. - Olin Coles, 2010-07-13 08:08
    Thank you for the feedback Klyster. If you're running a GTX 260, then the GTX 460 is an ideal upgrade. The 1GB version does slightly better than the 768MB edition, so it's a matter of how easily you can save another $20.
     
     
    # RE: whatever.. - Adam, 2010-07-14 09:16
    I'm in the same boat as you: I've got a GTX260 that I've had for nearly 2 years now, and I've been starting to look into an upgrade.
    I was previously interested in the 5850, but considering its relatively high price and the rate at which it'll be outdated, the 460 is looking a much better bet.
    Respectable results, low temps, quiet cooling and a very reasonable price seem to make this one a winner. Shame that it still suffers from high energy consumption, but considering the other bonuses I'll personally overlook that.
     
     
    # Human - John Darcy, 2010-07-12 02:59
    I built myself 3 AMD/ATI machines (two 965BEs OC'd to 3.8, and a 1090T @ 3.9). I have a 4870 in one 965 and 5870s in the other two. I was sweating to get the money for the 5870s and am not able to clock them much at all.
    I enjoyed the article on the $200 460gtx, mostly because of the possibilities the Fermi now offers. I would like to expand my hardware use beyond only one focus. I was originally put off by nVidia's super high price point for its stuff, but this new high-functioning, low-price-point, clockable unit gives me room for renewed interest in this solid product. It was an excellent article for me; I was only a bit confused by the 8.75 performance rating after all the raves about its great performance - even if some of that was potential.
     
     
    # RE: Human - Olin Coles, 2010-07-12 08:53
    Our staff recently discussed ratings, and I'm doing my best to keep the Performance rating in perspective of the entire market and not just the price point. As I warn, the ratings are subjective.
     
     
    # John Darcy, 2010-07-12 09:29
    It is a noble thing to attempt impartiality or objectivity in ratings; however, personal bias will always creep in. Keeping totally objective might be boring in this type of review anyway. Double-blind evaluation systems were developed to eliminate personal bias.
    I liked the way the review came out, and how this particular product did, because my bias had been toward ATI products only because they were the only ones I had ever used. I kept buying them because I knew how they worked, like it or not. I never liked the software; until recently, it was always messing up on me for unknown reasons. I still don't like some things about it, which is why I had interest in these Fermi reviews.
     
     
    # It's a biased market - Olin Coles, 2010-07-12 11:15
    If only the average reader understood just how many factors there are creating bias in the computer hardware industry. Video cards, for example, depend entirely on driver support for their performance. Have you ever wondered why the NVIDIA ForceWare download is 125MB? It's because of the embedded profiles that optimize the product for each and every video game (along with a PhysX installer). Both ATI and NVIDIA do it, although one more than the other, and without these tailored profiles VGA performance falls flat.

    Since it's unreasonable (and not realistically possible) to grade these products solely on their hardware performance without driver optimizations, we are constantly chasing the latest driver to keep up-to-date. The real shame here is that some drivers see a 25% improvement in a game a month after launch, and that renders our rating obsolete.
     
     
    # RE: It's a biased market - John Darcy, 2010-07-12 16:25
    I am a neophyte in computer terms. I am stuck with some of the built-in profiles mandated by ATI so that the new 5870-series GPU stays 'energy efficient', but that software forces my super-fast GPU to run at a constant 157/300 speed even though I bought it because it was fast. It is kept slow so it will stay cool. It caused all kinds of two-monitor issues with screen tearing and jumping, since it wasn't running anywhere near its potential and wouldn't throttle back up fast enough, and a dumb newcomer like me has no idea how to fix it. I paid $400 for a GPU which runs slower than my 4670 because software makes it like that, and it is far too complicated for me to undo except by keeping a Photoshop app always on to draw resources.
     
     
    # Try ATI Overdrive - BruceBruce, 2010-07-12 19:17
    When I use ATI Overdrive (in Catalyst Control Center) to manually set the GPU and memory clocks, it bumps up the minimum GPU clock settings to ~500 MHz, and keeps it there. Most people complain about the loss of energy savings caused by this behavior, but it probably suits your situation.

    You don't need to overclock the GPU to use CCC; I use it because I NEED TO KNOW what the clocks are set for. Sometimes I have to underclock a card for testing purposes.
     
     
    # RE: Try ATI Overdrive - John Darcy, 2010-07-13 06:41
    I stopped using Overdrive after I got the 5870 and replaced it with RivaTuner and MSI Afterburner. It is possible it has been fixed and works better now. I can't make Afterburner's profiles have any 2D/3D effect; I can control fan speed. I tried to rewrite the XML for profiles and that did no good after 10.3b. I will try this though.
    Thanks
     
     
    # RE: NVIDIA GeForce GTX-460 1GB Video Card - Servando Silva, 2010-07-12 12:37
    The results are OK and they all confirm the same situation as other tech sites. Also, if somehow the GPU is cherry-picked, that could mean that it overclocks a little bit better than a retail sample, but it doesn't mean it performs better than any other retail sample at the same clocks. So, the results are not biased in this way.
     
     
    # Mr. - Mete Can Karahasan, 2010-08-06 23:31
    First of all, I will frankly say that I am an avid follower of Benchmark Reviews. One reason I can point out: back last year, the Asus HD5770 VT edition review had a quote from Asus engineers that they could overclock the card up to 1063MHz, so this website is full of useful information in my terms (thanks, editors!).
    Back to the topic: I feel the cherry-picked demonstration cards issue is real, considering Newegg user feedback for a "specific" GTX460 distributor complaining that the card doesn't have the VRM heatsinks that some reviews stated (OCC maybe?), that its overclocking potential is limited, and that fan noise is notably higher as well.
    One other issue, in my opinion, is power consumption under overclocking. Donanimhaber did just that, and the results were interesting: an overclocked GTX460 consumed just 10 watts less than an HD5970. For a mainstream card in a possibly mid-tower case, that is a concern.
     
     
    # Mr. - Mete Can Karahasan, 2010-08-07 00:02
    Let me correct myself: not OCC, but legionhardware, tweaktown, hardwareheaven, guru3d and techspot AFAIK are all victims of sample cards whose retail counterparts lack the VRM heatsinks and GPU heatpipes.
     
     
    # Mr. - Mete Can Karahasan, 2010-08-07 00:41
    I also would like to ask if the price-performance ratios are correct. I double-checked: the average FPS of the HD5770 in your tests is 25.32, and that results in $5.92 per FPS for the HD5770.
     
     
    # RE: Mr. - Olin Coles, 2010-08-07 05:08
    I take the average cost per frame in each test, and then average the sum of their total; I don't take the average frame rate of all tests.

    Also, most readers don't understand that NVIDIA designs the card with several different cooling options. It's up to the AIC partners to choose which solution they prefer based on price and purpose.

    I'm only a day away from publishing my review of the ASUS ENGTX460 DirectCU TOP-2DI-1GD5, which is identical to the retail version at NewEgg. The temperatures and power consumption validate my previous 'reference design' tests, and the performance is, well, you'll see.
     
     
    # Awards - Eric, 2010-07-13 01:21
    This card has already won at least 9 awards.

    You can find most via OverclockersClub; the rest Google will help you with.
    I'd link them, but I've written a lot in the links and it's in Swedish, so it's sort of pointless here.
     
     
    # Here's an SLI review - Don, 2010-07-14 17:12
    Completely unbiased SLI review:
    ##guru3d.com/article/geforce-gtx-460-sli-review/

    Results speak for themselves
     
     
    # RE: Here's an SLI reviewDean Hanson 2010-07-14 17:39
    Good find! Two GTX460's in SLI beat one Radeon 5870 for the same amount of money. It could be nothing, but they did get eight different GTX 460's from NVIDIA and its partners according to another website, which could indicate some degree of favoritism.
     
     
    # Favoritism...??? - BruceBruce 2010-07-14 18:22
    What, you think they pinch the cards they don't like, and make 'em cry right before the talent section of the contest?

    Results speak for themselves...they are inanimate machines that get measured in a controlled environment.
     
     
    # PLUS... - Don 2010-07-14 19:53
    Don't forget, this is an SLI test with the 768MB version; imagine the 1GB version in SLI!

    The 1GB version will have even better results - can't wait till those start trickling in.

    Found one:
    ##techpowerup.com/reviews/NVIDIA/GeForce_GTX_460_SLI
     
     
    # One question - Federico La Morgia 2010-08-07 06:05
    What is written on the RAM chips?
     
     
    # Samsung GDDR5 - BruceBruce 2010-08-07 08:39
    Samsung K4G10325FE-HC05
     
     
    # ASUS ENGTX460 1GB [DirectCU] TOP/2DI/1GD5 - nVidia gamer 2010-08-19 14:49
    When will the ASUS ENGTX460 1GB [DirectCU] TOP/2DI/1GD5 become available in Europe?
     
     
    # 460 GTX - Suhaib 2010-09-04 11:24
    I've got a PC built around Gigabyte's GTX 460. Overclocked to a 715MHz core clock, the PC runs most games smoothly.
    Battlefield: Bad Company 2, however, does not.
    I have a 2.80GHz Core 2 Duo (ancient; I want to know if it limits the card's performance) and 4GB of RAM, but maximum settings at 1280x1024 with 4x anti-aliasing give me poor FPS, a humble 20-40.
    I need your replies as soon as possible.
     
     
    # RE: 460 GTX - Olin Coles 2010-09-04 11:45
    I just played a round online using the 1GB GeForce GTX 460 with settings at 4x AA and shadows off. There were no problems with performance at all, and this was at 1920x1200. Your processor is more than enough for the game, especially at that low resolution. I suggest that you look at some of the other possible issues: the driver (each new version updates game profiles) and the game cache.
     
     
    # RE: RE: 460 GTX - Suhaib 2010-09-04 21:36
    I see. I'm glad it's not my setup, then. Thanks.
     
     
    # RE: RE: RE: 460 GTX - Suhaib 2010-09-05 20:29
    I can't find any reason why it's doing this. Heat levels are normal, and my PC's hard disk is very fast at reading and writing files. I turned off everything in the control panel and set it all to application-controlled.
    The problem still persists :(
     
     
    # Which is better? - BlackIce 2010-11-30 13:46
    I want to buy one of these two cards; which one do you believe is better?

    PCI-Express 16x MSI R5770 Hawk 1G D5 VGA/DVI/HDMI/DP (ATI)

    PCI-Express 16x MSI N460GTX Twin Frozr II 768M V/2xD/mH (NVidia)

    Please, I need help: I don't know much about video cards, and I would like to buy the better of the two.
     
     
    # 2nd Vote - BruceBruce 2010-11-30 17:05
    I agree with David. Buy the GTX 460.
     
     
    # RE: NVIDIA GeForce GTX-460 1GB Video Card - David Ramsey 2010-11-30 14:30
    Buy the MSI N460GTX Twin Frozr II 768M.
     
     
    # 224 shaders? - raymond 2010-12-24 14:53
    It has 336, doesn't it?
     
     
    # RE: 224 shaders? - Olin Coles 2010-12-24 16:20
    That's correct: the GTX 460 has 336 CUDA cores, and the GTX 460 SE has 288. Until you mentioned it, I hadn't noticed that GPU-Z incorrectly listed the shader core count... probably because this series wasn't supported at the time.

    All of the data and charts are correct, but the GPU-Z image displays inaccurate specifications.
     
     
    # @olin - raymond 2010-12-25 01:37
    No, I'm afraid that's correct.
    The GF104 only activates 224 shader units, and only activates the rest "when needed".
    I have both the ASUS ENGTX465 and the ENGTX460 DirectCU, and one thing I've noticed is that games look much better on the 465. In the new Need for Speed: Hot Pursuit, the road looks much sharper on the 465, while on the 460 it looks kind of bland.
    I think the 460 gets its good frame rates by downgrading the "far away" graphics, and both cards are 1GB with a 256-bit bus.
     
     
    # RE: NVIDIA GeForce GTX-460 1GB Video Card - raymond 2010-12-26 18:57
    I've also played Mafia II on both cards (the 465 and the 460, both 1GB) at very high settings, including PhysX set to high and 4x AA at 1920x1080, and the 460 just couldn't handle it, while the 465 ran smoothly.
    Specs: i3 530 at 2.93GHz, 4GB RAM, 500GB Seagate Barracuda hard drive (if I recall correctly).
    Except for the low temperatures (23°C idle, 48°C at load), I wasn't impressed at all with the 460. The 465 performs much better, and all those reviews that favor the 460 over the 465 are just misleading readers into thinking the 460 is better when it's the opposite!
     
     
    # Good price for performance - Danny 2011-01-30 04:16
    I don't care about brands; I have switched between Intel and AMD a few times because of price, and done the same with graphics cards. I have a 32-inch HD-ready TV that I watch from approximately 2 meters away, so really gigantic resolutions are pretty useless in my HTPC. The Palit GTX 460, for instance, is the fastest card I have ever seen at just 19cm long, and in my HTPC I have only 22.5cm of space to work with, which leaves the HD 5770 and the GTX 460 as the only viable options. I considered the HD 5830, but it is just way too big. I currently have an Athlon II 630 processor, so I am the last person you would call biased.
     

    Comments have been disabled by the administrator.
