ASUS GeForce GTX-465 Video Card
Reviews - Featured Reviews: Video Cards
Written by Olin Coles
Tuesday, 22 June 2010
ASUS ENGTX465 Video Card Review

PC video games are still the best way to experience realistic effects and immerse yourself in the battle. Consoles do their part, but only high-precision video cards offer the sharp clarity and definition needed to enjoy detailed graphics. Armed with Voltage Tweak functionality, the ASUS GeForce GTX 465 delivers a healthy helping of graphical power at an affordable price, and the ENGTX465/2DI/1GD5 model has plenty of headroom for overclockers to drive out additional FPS performance while keeping temperatures cool. In this article, Benchmark Reviews tests the ASUS ENGTX465 against some of the best video cards within the price segment, using several of the most demanding PC video game titles and benchmark applications available: Aliens vs Predator, Battlefield: Bad Company 2, BattleForge, Crysis Warhead, Far Cry 2, Resident Evil 5, and Metro 2033.

It used to be that PC video games such as Crysis and Far Cry 2 were as demanding as you could get, but that was before DirectX-11 brought tessellation to the forefront of graphics. DX11 adds heavy particle and turbulence effects to video games, and titles such as Metro 2033 demand the most powerful graphics processing available. NVIDIA's GF100 GPU is their first graphics processor to support DirectX-11 features such as tessellation and DirectCompute, and the GeForce GTX-465 offers an excellent combination of performance and value for games like Battlefield: Bad Company 2 or BattleForge. With prices starting at $250 ($280 for this ASUS model), the NVIDIA GeForce GTX-465 empowers DirectX-11 video games to deliver unmatched geometric realism from 11 Streaming Multiprocessors with a total of 352 CUDA Cores and 44 Texture Units. In this article, Benchmark Reviews tests 3D frame rate performance on the ASUS GeForce GTX-465 video card (model ENGTX465/2DI/1GD5) and compares the results against the most powerful graphics products on the market.
Based on the same GF100 Fermi architecture first used in the GeForce GTX 480, the NVIDIA GTX-465 delivers mid-range performance for gamers on a budget. Additionally, the GTX-465's price tag fits in nicely between the $220 Radeon HD 5830 and the $305 Radeon HD 5850. In the following pages, Benchmark Reviews will demonstrate how well the ASUS GeForce GTX-465 performs against other DirectX-11 video card products.
ASUS GeForce GTX 465 Video Card Kit ENGTX465/2DI/1GD5

At the center of every new technology is purpose, and NVIDIA has designed their Fermi GF100 GPU with the end-goal of redefining the video game experience through significant graphics processor innovations. Disruptive technology often changes the way users interact with computers, and the GeForce GTX-4xx family of video cards is a set of complex tools built to arrive at one simple destination: immersive entertainment, especially when paired with NVIDIA GeForce 3D Vision. The experience is further improved by the ASUS SmartDoctor and ASUS GamerOSD software tools, which help gamers and overclockers get the most out of their investment.

About ASUSTeK Computer Inc.

ASUS comes from the last four letters of Pegasus, the winged horse in Greek mythology that represents the inspiration of art and learning. ASUS embodies the strength, creative spirit and purity symbolized by this regal and agile mythical creature, soaring to new heights of quality and innovation with each product it introduces to the market.
ASUSTeK Computer Inc., also known as ASUS, is a technology-oriented company blessed with one of the world's top R&D teams and well known for high-quality and innovative technology. As a leading provider of 3C (computers, communications and consumer electronics) total solutions, ASUS offers a complete product portfolio to compete in the new millennium. To succeed in this ultra-competitive industry, great products need to be complemented by speed-to-market, cost and service. That's why all of ASUS's more than 100,000 employees strive for the "ASUS Way of Total Quality Management": to offer the best quality without compromising cost and time-to-market, while providing maximum value to all customers through world-class services.

NVIDIA Fermi Features

In today's complex graphics, tessellation offers the means to store massive amounts of coarse geometry with expand-on-demand functionality. In the NVIDIA GF100 GPU, tessellation also enables more complex animations. In terms of model scalability, dynamic Level of Detail (LOD) allows quality and performance trade-offs, delivering better picture quality whenever it can do so without a performance penalty. Comprised of three layers (original geometry, tessellation geometry, and displacement map), the final product is far more detailed in shading and geometric data than if it were constructed with bump-map technology. In plain terms, tessellation gives real peaks and valleys with shadow detail in-between, where previous-generation bump-mapping would only give the illusion of detail.
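The "store coarse, expand on demand" idea can be sketched in a few lines of code. This is a toy Python model with hypothetical helper names (not the DirectX-11 API): each tessellation level splits every segment in two, and a displacement map then turns the smooth result into real peaks and valleys.

```python
# Toy model of expand-on-demand tessellation with displacement.
# Hypothetical helpers for illustration; not the DirectX-11 API.

def tessellate(segments, lod_levels):
    """Each tessellation level splits every segment in two,
    so the stored geometry stays compact while detail scales."""
    return segments * 2 ** lod_levels

def displace(positions, height):
    """Apply a displacement map: real peaks and valleys,
    where bump mapping would only fake the shading."""
    return [(x, height(x)) for x in positions]

coarse = 4                      # segments stored with the game asset
fine = tessellate(coarse, 3)    # 32 segments generated at render time
step = 1.0 / fine
verts = displace([i * step for i in range(fine + 1)],
                 height=lambda x: 0.1 * abs(0.5 - x))  # a simple ridge
```

The point of the sketch is the ratio: three LOD levels turn 4 stored segments into 32 rendered ones, which is why geometry bandwidth inside the GPU matters so much more than asset size on disk.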
Stages of Tessellation with NVIDIA Fermi Graphics

Using GPU-based tessellation, a game developer can send a compact geometric representation of an object or character, and the tessellation unit can produce the correct geometric complexity for the specific scene. Consider the "Imp" character illustrated above. On the far left we see the initial quad mesh used to model the general outline of the figure; this representation is quite compact even when compared to typical game assets. The two middle images of the character are created by finely tessellating the description at the left. The result is a very smooth appearance, free of any of the faceting that resulted from limited geometry. Unfortunately this character, while smooth, is no more detailed than the coarse mesh. The final image on the right was created by applying a displacement map to the smoothly tessellated character beside it.

Tessellation in DirectX-11

Control hull shaders run DX11 pre-expansion routines, and operate explicitly in parallel across all points. Domain shaders run post-expansion operations on maps (u/v or x/y/z/w) and are also implicitly parallel. Fixed-function tessellation is configured by Level of Detail (LOD) based on output from the control hull shader, and can also produce triangles and lines if requested. Tessellation is new to NVIDIA GPUs; it was not part of GT200 because of geometry bandwidth bottlenecks from sequential rendering/execution semantics. For the GF100 graphics processor, NVIDIA added new PolyMorph and Raster engines to handle world-space processing (PolyMorph) and screen-space processing (Raster). There are sixteen PolyMorph engines and four Raster engines on the GF100, which depend on an improved L2 cache to keep buffered geometric data produced by the pipeline on-die.

Four-Offset Gather4

The texture unit on previous processor architectures operated at the core clock of the GPU.
On GF100, the texture units run at a higher clock, leading to improved texturing performance for the same number of units. GF100's texture units add support for DirectX-11's BC6H and BC7 texture compression formats, reducing the memory footprint of HDR textures and render targets. The texture units also support jittered sampling through DirectX-11's four-offset Gather4 feature, allowing four texels to be fetched from a 128×128 pixel grid with a single texture instruction. NVIDIA's GF100 implements DirectX-11 four-offset Gather4 in hardware, greatly accelerating shadow mapping, ambient occlusion, and post-processing algorithms. With jittered sampling, games can implement smoother soft shadows or custom texture filters efficiently. The previous GT200 GPU did not offer coverage samples, while the GF100 can deliver 32x CSAA.

GF100 Compute for Gaming

As developers continue to search for novel ways to improve their graphics engines, the GPU will need to excel at a diverse and growing set of graphics algorithms. Since these algorithms are executed via general compute APIs, a robust compute architecture is fundamental to a GPU's graphical capabilities. In essence, one can think of compute as the new programmable shader. GF100's compute architecture is designed to address a wider range of algorithms and to facilitate more pervasive use of the GPU for solving parallel problems. Many algorithms, such as ray tracing, physics, and AI, cannot exploit shared memory, because program memory locality is only revealed at runtime. GF100's cache architecture was designed with these problems in mind. With up to 48 KB of L1 cache per Streaming Multiprocessor (SM) and a global L2 cache, threads that access the same memory locations at runtime automatically run faster, irrespective of the choice of algorithm. NVIDIA's codename NEXUS brings CPU and GPU code development together in Microsoft Visual Studio 2008 for a shared process timeline.
NEXUS also introduces the first hardware-based shader debugger. NVIDIA's GF100 is the first GPU to offer full C++ support, the programming language of choice among game developers. To ease the transition to GPU programming, NVIDIA developed Nexus, a Microsoft Visual Studio programming environment for the GPU. Together with new hardware features that provide better debugging support, developers will be able to enjoy CPU-class application development on the GPU. The end result is C++ and Visual Studio integration that brings HPC users onto the same development platform. NVIDIA offers several paths to deliver compute functionality on the GF100 GPU, such as CUDA C++ for video games. Image processing, simulation, and hybrid rendering are three primary functions of GPU compute for gaming. Using NVIDIA's GF100 GPU, interactive ray tracing becomes possible for the first time on a standard PC. Ray tracing performance on the NVIDIA GF100 is roughly 4x faster than it was on the GT200 GPU, according to NVIDIA tests. AI/path finding is a compute-intensive process well suited for GPUs. The NVIDIA GF100 can handle AI obstacles approximately 3x better than the GT200. The benefits of this improvement are faster collision avoidance and shortest-path searches for higher-performance path finding.

GF100 Specifications
GeForce Specifications
NVIDIA GF100 GPU Fermi Architecture

NVIDIA's latest GPU is codenamed GF100, and it is the first graphics processor based on the Fermi architecture. In this article, Benchmark Reviews explains the technical architecture behind NVIDIA's GF100 graphics processor and offers insight into Fermi-based GeForce video cards. For those who are not familiar, NVIDIA's GF100 GPU is their first graphics processor to support DirectX-11 hardware features such as tessellation and DirectCompute, while also adding heavy particle and turbulence effects. The GF100 GPU is the successor to the GT200 graphics processor, which launched in the GeForce GTX 280 video card back in June 2008. NVIDIA has since redefined their focus, and GF100 demonstrates a dedication to next-generation gaming effects such as ray tracing, order-independent transparency, and fluid simulations. Rest assured, the new GF100 GPU is more powerful than the GT200 could ever be, and early results indicate a Fermi-based video card delivers far more than twice the gaming performance of a GeForce GTX-280. GF100 is not another incremental GPU step-up like we had going from G80 to GT200. Processor cores have grown from 128 (G80) to 240 (GT200) and now reach 512, earning the title of NVIDIA CUDA (Compute Unified Device Architecture) cores. The key here is not only the name, but that the name now implies an emphasis on something more than just graphics. Each Fermi CUDA processor core has a fully pipelined integer arithmetic logic unit (ALU) and floating point unit (FPU). GF100 implements the new IEEE 754-2008 floating-point standard, providing the fused multiply-add (FMA) instruction for both single and double precision arithmetic. FMA improves over a multiply-add (MAD) instruction by doing the multiplication and addition with a single final rounding step, with no loss of precision in the addition. FMA minimizes rendering errors in closely overlapping triangles.
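The single-rounding benefit of FMA over MAD can be shown numerically. This sketch emulates an exact fused product with Python's `fractions` module: MAD rounds the product to a double before adding and loses the tiny residual, while FMA keeps the product exact and rounds only once at the end.

```python
from fractions import Fraction

a = 1.0 + 2.0 ** -29
b = 1.0 - 2.0 ** -29
c = -1.0

# MAD: a*b rounds to double first (the 2^-58 residual vanishes),
# then the addition rounds again.
mad = (a * b) + c

# FMA emulation: the product is kept exact, and only the final
# sum is rounded back to a double.
fma = float(Fraction(a) * Fraction(b) + Fraction(c))
```

Here `mad` collapses to `0.0`, while the emulated FMA preserves the true answer of `-2**-58`, which is exactly the kind of cancellation that causes rendering errors in closely overlapping triangles.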
Based on Fermi's third-generation Streaming Multiprocessor (SM) architecture, GF100 doubles the number of CUDA cores over the previous architecture. NVIDIA GeForce GF100 Fermi GPUs are based on a scalable array of Graphics Processing Clusters (GPCs), Streaming Multiprocessors (SMs), and memory controllers. The NVIDIA GF100 implements four GPCs, sixteen SMs, and six memory controllers; expect NVIDIA to launch GF100 products with different configurations of GPCs, SMs, and memory controllers to address different price points. CPU commands are read by the GPU via the Host Interface. The GigaThread Engine fetches the specified data from system memory and copies it to the frame buffer. GF100 implements six 64-bit GDDR5 memory controllers (384-bit total) to facilitate high-bandwidth access to the frame buffer. The GigaThread Engine then creates and dispatches thread blocks to the various SMs, and individual SMs in turn schedule warps (groups of 32 threads) to CUDA cores and other execution units. The GigaThread Engine also redistributes work to the SMs when work expansion occurs in the graphics pipeline, such as after the tessellation and rasterization stages. GF100 implements 512 CUDA cores, organized as 16 SMs of 32 cores each. Each SM is a highly parallel multiprocessor supporting up to 48 warps at any given time. Each CUDA core is a unified processor core that executes vertex, pixel, geometry, and compute kernels. A unified L2 cache architecture services load, store, and texture operations. GF100 has 48 ROP units for pixel blending, antialiasing, and atomic memory operations. The ROP units are organized in six groups of eight, with each group serviced by a 64-bit memory controller. The memory controller, L2 cache, and ROP group are closely coupled: scaling one unit automatically scales the others.

NVIDIA GigaThread Thread Scheduler

One of the most important technologies of the Fermi architecture is its two-level, distributed thread scheduler.
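The unit counts quoted above multiply out cleanly; a quick Python sketch of the organization, using only the figures from the text (the constant names are our own):

```python
# GF100 organization as described above (full Fermi die).
GPCS = 4
SMS_PER_GPC = 4          # 16 SMs total on a full GF100
CORES_PER_SM = 32
WARP_SIZE = 32           # threads per warp
WARPS_PER_SM = 48        # warps in flight per SM
MEM_CONTROLLERS = 6      # each controller is 64 bits wide

cuda_cores = GPCS * SMS_PER_GPC * CORES_PER_SM            # 512 cores
threads_in_flight = 16 * WARPS_PER_SM * WARP_SIZE         # 24,576 threads
bus_width = MEM_CONTROLLERS * 64                          # 384-bit total

# The GTX 465 reviewed here ships with 5 of the 16 SMs disabled:
gtx465_cores = (16 - 5) * CORES_PER_SM                    # 11 SMs -> 352 cores
```

The last line is exactly where the GTX-465's 352 CUDA cores come from: eleven enabled SMs of 32 cores each.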
At the chip level, a global work distribution engine schedules thread blocks to the various SMs, while at the SM level, each warp scheduler distributes warps of 32 threads to its execution units. The first-generation GigaThread engine introduced in G80 managed up to 12,288 threads in real time. The Fermi architecture improves on this foundation by providing not only greater thread throughput, but also dramatically faster context switching, concurrent kernel execution, and improved thread block scheduling.

What's new in Fermi?

With any new technology, consumers want to know what's new in the product. The goal of this article is to share in-depth information surrounding the Fermi architecture, as well as the new functionality unlocked in GF100. For clarity, the 'GF' letters used in the GF100 GPU name are not an abbreviation for 'GeForce'; they actually denote that this GPU is a Graphics solution based on the Fermi architecture. The next generation of NVIDIA GeForce-series desktop video cards will use the GF100 to promote a range of new features.
Benchmark Reviews offers more detail in our full-length NVIDIA GF100 GPU Fermi Graphics Architecture guide.

Closer Look: ASUS GeForce GTX-465

So far, 2010 has been an exciting year for game developers. The Microsoft Windows 7 (and updated Windows Vista) Operating Systems introduced gamers to DirectX-11, allowing video games released for the PC platform to look better than ever. DirectX-11 is the leap in video game software development we've been waiting for. Screen Space Ambient Occlusion (SSAO) is given emphasis in DX11, allowing some of the most detailed computer textures gamers have ever seen. Realistic cracks in mud with definable depth and splintered tree bark make the game more realistic, but they also make new demands on the graphics hardware. This new level of graphical detail requires a new level of computer hardware: DX11-compliant hardware. Tessellation adds a tremendous level of strain on the GPU, making previous graphics hardware virtually obsolete with new DX11 game titles. The ASUS GeForce GTX-465 video card, model ENGTX465/2DI/1GD5, offers gamers a healthy dose of graphics processing power for their money. But the GeForce GTX-465 is more than just a tool for video games; it's also a tool for professional environments that make use of GPGPU-accelerated, compute-friendly software such as Adobe Premiere Pro and Photoshop.
The ASUS ENGTX465/2DI/1GD5 retail package is fairly basic: graphics card, support manual, driver software, a dual four-pin Molex PSU to six-pin PCI-E power adapter, a DVI-to-HDMI adapter, and a DVI-to-VGA (D-Sub) adapter. The ASUS GeForce GTX-465 kit does not include any bundled free PC video games, but it does offer the CUDA-enabled applications NVIDIA Design Garage and Supersonic Sled on a supplemental software CD. The NVIDIA GeForce GTX-465 is a 2.67-inch tall, double-bay, 4.376-inch (111.15 mm) wide graphics card with a 9.5-inch (241.3 mm) long profile. The ASUS GeForce GTX-465 retains the reference cooler design, which is more than adequate for the reduced-output Fermi GPU, but adds an interesting faux carbon-fiber finish to the top of the product.
As with past GeForce video cards, the Fermi GPU offers two output 'lanes', so not all three output devices can operate at once. NVIDIA has retained two DVI outputs on the GeForce GTX 465, so dual-monitor configurations can be utilized, and by adding a second video card users can enjoy GeForce 3D-Vision Surround functionality. Other changes occur in more subtle ways, such as replacing the S-Video connection with a more relevant (mini) HDMI 1.3a A/V output. In past GeForce products, the HDMI port was limited to video-only output and required a separate audio connection. Native HDMI 1.3a support is available on the GeForce GTX 465, which allows direct output to HDTVs and compatible monitors.
The new 40nm fabrication process opens the die for more transistors, increasing the count from 1.4-billion in the GT200 GPU of the GeForce GTX 285 to an astounding 3.2-billion in the Fermi GF100 GPU used on the ASUS GeForce GTX-465 (the same amount that resides on the NVIDIA GeForce GTX 480).
Similar to past-generation NVIDIA products, the GeForce 400-series also supports triple-SLI capability. While the GTX 465 disables five Streaming Multiprocessors (SMs) from the full Fermi architecture, a heavy-duty thermal management system is still necessary. The plastic fan shroud unsnaps from the rest of the unit without any tools, revealing a large finsink with four heat-pipe rods spanning from the GPU. Under load, the ASUS GeForce GTX-465 video card reached 82°C in a 20°C room with no additional cooling assistance from the computer case.
Upon close inspection of the printed circuit board (PCB), there's an opening beneath the blower fan that allows intake air to be drawn from either side of the unit. This pays dividends when the GeForce GTX 465 is added into an SLI or triple-SLI set. Even with its mid-range intentions, the PCB is a busy place for the GeForce GTX 465. Many of the electronic components have been relocated to the 'top' side of the PCB to better accommodate the fully-grown 530 mm² GF100 GPU and its 3.2-billion transistors. Its 352 CUDA cores operate at 1215 MHz, which keeps a firm lead over ATI's 850 MHz Cypress GPU, which measures 334 mm² and fits 2.154-billion transistors.
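The die-size figures above work out to broadly similar transistor densities for the two GPUs; a quick check using only the numbers quoted in the text:

```python
# Transistor density comparison from the figures quoted above.
gf100_transistors = 3.2e9
gf100_area_mm2 = 530            # GF100 die size

cypress_transistors = 2.154e9
cypress_area_mm2 = 334          # ATI Cypress die size

gf100_density = gf100_transistors / gf100_area_mm2        # ~6.0M / mm^2
cypress_density = cypress_transistors / cypress_area_mm2  # ~6.4M / mm^2
```

Both chips pack roughly six million transistors per square millimeter on the 40nm node; GF100's challenge is not density but the sheer size of the die it has to cool and yield.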
In the next several sections, Benchmark Reviews will explain our video card test methodology, followed by a performance comparison of the ASUS ENGTX465/2DI/1GD5 against several of the most popular mid-range graphics accelerators available. The GeForce GTX 465 will compete against the ATI Radeon HD 5850 and several other middle-market video cards, so we'll be keeping a close eye on comparative performance and value.

VGA Testing Methodology

The Microsoft DirectX-11 graphics API is native to the Microsoft Windows 7 Operating System, which will be the primary O/S for our test platform. DX11 is also available as a Microsoft Update for the Windows Vista O/S, so our test results apply to both versions of the Operating System. The majority of benchmark tests used in this article measure DX11 performance; however, some high-demand DX10 tests have also been included. According to the Steam Hardware Survey published for the month ending May 2010, the most popular gaming resolution is 1280x1024 (17-19" standard LCD monitors). However, because this 1.31MP resolution is considered 'low' by most standards, our benchmark performance tests concentrate on higher-demand resolutions: 1.76MP 1680x1050 (22-24" widescreen LCD) and 2.30MP 1920x1200 (24-28" widescreen LCD monitors). These resolutions are more likely to be used by high-end graphics solutions, such as those tested in this article.
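The megapixel figures quoted above are simply width × height; a quick check:

```python
# Resolutions used in this article, with their megapixel counts.
resolutions = {
    "1280x1024": (1280, 1024),   # 'low' standard LCD baseline
    "1680x1050": (1680, 1050),   # 22-24" widescreen LCD
    "1920x1200": (1920, 1200),   # 24-28" widescreen LCD
}

mp = {name: w * h / 1e6 for name, (w, h) in resolutions.items()}
# 1280x1024 -> ~1.31 MP, 1680x1050 -> ~1.76 MP, 1920x1200 -> ~2.30 MP
```

Moving from 1280x1024 to 1920x1200 raises the pixel count by roughly 76%, which is why the higher resolutions separate mid-range cards so much more effectively.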
A combination of synthetic and video game benchmark tests has been used in this article to illustrate relative performance among graphics solutions. Our benchmark frame rate results are not intended to represent real-world graphics performance, as that experience would change based on supporting hardware and the perception of the individuals playing the video game.

DX11 Cost to Performance Ratio

For this article, Benchmark Reviews has included cost per FPS for graphics performance results. Only the least expensive product price is used in the calculation, which does not factor in tax, freight, promotional offers, or rebates. All prices reflect product series components, and do not represent any specific manufacturer, model, or brand. Retail prices for each product were obtained from NewEgg.com on 22 June 2010.
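The cost-per-FPS metric reduces to street price divided by average frame rate. A minimal sketch with hypothetical prices and frame rates (illustrative numbers only, not the article's measured data):

```python
def cost_per_fps(price_usd, avg_fps):
    """Dollars paid per frame-per-second of benchmark performance."""
    return price_usd / avg_fps

# Hypothetical example figures, for illustration only:
cards = {
    "GeForce GTX 465": (279.99, 35.0),   # (street price, average FPS)
    "Radeon HD 5850":  (305.00, 38.0),
}

# Rank cards from best value (lowest $/FPS) to worst.
ranked = sorted(cards, key=lambda name: cost_per_fps(*cards[name]))
```

A lower figure means more performance per dollar; note that a slower card can still win the value ranking if its price drops far enough.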
Intel X58-Express Test System
DirectX-10 Benchmark Applications
DirectX-11 Benchmark Applications
Video Card Test Products
DX10: 3DMark Vantage

3DMark Vantage is a PC benchmark suite designed to test DirectX-10 graphics card performance. 3DMark Vantage is the latest addition to the 3DMark benchmark series built by the Futuremark Corporation. Although 3DMark Vantage requires NVIDIA PhysX to be installed for program operation, only the CPU/Physics test relies on this technology. 3DMark Vantage offers benchmark tests focusing on GPU, CPU, and Physics performance. Benchmark Reviews uses the two GPU-specific tests for grading video card performance: Jane Nash and New Calico. These tests isolate graphical performance, and remove processor dependence from the benchmark results.

3DMark Vantage GPU Test: Jane Nash

Of the two GPU tests 3DMark Vantage offers, the Jane Nash performance benchmark is slightly less demanding. In a short video scene, the special agent escapes a secret lair by water, nearly losing her shirt in the process. Benchmark Reviews tests this DirectX-10 scene at 1680x1050 and 1920x1200 resolutions, and uses Extreme quality settings with 8x anti-aliasing and 16x anisotropic filtering. The 1:2 scale is utilized, the highest this test allows. By maximizing the processing levels of this test, the scene creates the highest level of graphical demand possible and sorts the strong from the weak.
Cost Analysis: Jane Nash (1920x1200)

3DMark Vantage GPU Test: New Calico

New Calico is the second GPU test in the 3DMark Vantage test suite. Of the two GPU tests, New Calico is the most demanding. In a short video scene featuring a galactic battleground, there is a massive display of busy objects across the screen. Benchmark Reviews tests this DirectX-10 scene at 1680x1050 and 1920x1200 resolutions, and uses Extreme quality settings with 8x anti-aliasing and 16x anisotropic filtering. The 1:2 scale is utilized, the highest this test allows. Using the highest graphics processing level available allows our test products to separate themselves and stand out (if possible).
Cost Analysis: New Calico (1920x1200)

Test Summary: 3DMark Vantage keeps ASUS GeForce GTX-465 graphical performance levels slightly ahead of an overclocked GeForce GTX 285, but trailing behind the ATI Radeon HD 5850. When we analyze price per FPS, the ASUS ENGTX465/2DI/1GD5 costs about $1 more per frame than the Radeon HD 5850, and roughly the same as the GeForce GTX 470.
DX10: Crysis Warhead

Crysis Warhead is an expansion pack based on the original Crysis video game. Crysis Warhead is set in the future, where an ancient alien spacecraft has been discovered beneath the Earth on an island east of the Philippines. Crysis Warhead uses a refined version of the CryENGINE2 graphics engine and, like Crysis, uses the Microsoft Direct3D 10 (DirectX-10) API for graphics rendering. Benchmark Reviews uses the HOC Crysis Warhead benchmark tool to test and measure graphics performance using the Airfield 1 demo scene. This short test places a high amount of stress on a graphics card because of its detailed terrain and textures, but also because of the test settings used. Using the DirectX-10 test with Very High Quality settings, the Airfield 1 demo scene receives 4x anti-aliasing and 16x anisotropic filtering to create maximum graphics load and separate the products according to their performance. With the highest-quality DirectX-10 settings at 4x AA and 16x AF, only the most powerful graphics cards are expected to perform well in our Crysis Warhead benchmark tests. DirectX-11 extensions are not supported in Crysis: Warhead, and SSAO is not an available option.
Cost Analysis: Crysis Warhead (1920x1200)

* Possible maximum frame rate limit imposed by either the PC video game or the HOC benchmark tool.

Test Summary: The CryENGINE2 graphics engine used in Crysis Warhead responds well to both ATI and NVIDIA products, which allows the ASUS ENGTX465/2DI/1GD5 to nearly match the performance of the Radeon HD 5850 at 1680x1050 and finish only 2 FPS behind at 1920x1200. Oddly enough, the GeForce GTX-465 appears to yield identical performance to the older GeForce GTX-285... again. For fans of Crysis, the GeForce GTX 465 offers the best price-to-performance ratio, with the GTX-470 and Radeon HD 5850 right behind it.
| Graphics Card | GeForce 9800 GTX+ | Radeon HD 4890 | GeForce GTX 285 | Radeon HD 5770 | GeForce GTX 465 | Radeon HD 5850 | GeForce GTX 470 | Radeon HD 5870 |
|---|---|---|---|---|---|---|---|---|
| GPU Cores | 128 | 800 | 240 | 800 | 352 | 1440 | 448 | 1600 |
| Core Clock (MHz) | 740 | 850 | 670 | 850 | 608 | 725 | 608 | 850 |
| Shader Clock (MHz) | 1836 | N/A | 1550 | N/A | 1215 | N/A | 1215 | N/A |
| Memory Clock (MHz) | 1100 | 975 | 1300 | 1200 | 802 | 1000 | 837 | 1200 |
| Memory Amount | 512 MB GDDR3 | 1024 MB GDDR5 | 1024 MB GDDR3 | 1024 MB GDDR5 | 1024 MB GDDR5 | 1024 MB GDDR5 | 1280 MB GDDR5 | 1024 MB GDDR5 |
| Memory Interface | 256-bit | 256-bit | 512-bit | 128-bit | 256-bit | 256-bit | 320-bit | 256-bit |
DX10: Resident Evil 5
Built upon an advanced version of Capcom's proprietary MT Framework game engine to deliver DirectX-10 graphic detail, Resident Evil 5 offers gamers non-stop action similar to Devil May Cry 4, Lost Planet, and Dead Rising. The MT Framework is an exclusive seventh generation game engine built to be used with games developed for the PlayStation 3 and Xbox 360, and PC ports. MT stands for "Multi-Thread", "Meta Tools" and "Multi-Target". Games using the MT Framework are originally developed on the PC and then ported to the other two console platforms.
On the PC version of Resident Evil 5, both DirectX-9 and DirectX-10 modes are available for the Microsoft Windows XP and Vista Operating Systems. Microsoft Windows 7 will play Resident Evil 5 through backwards-compatible Direct3D APIs. Resident Evil 5 is branded with the NVIDIA The Way It's Meant to be Played (TWIMTBP) logo, and receives NVIDIA GeForce 3D Vision functionality enhancements.
NVIDIA and Capcom offer the Resident Evil 5 benchmark demo for free download from their website, and Benchmark Reviews encourages visitors to compare their own results to ours. Because the Capcom MT Framework game engine is very well optimized and produces high frame rates, Benchmark Reviews uses the DirectX-10 version of the test at 1920x1200 resolution. Super-High quality settings are configured, with 8x MSAA post processing effects for maximum demand on the GPU. Test scenes from Area #3 and Area #4 require the most graphics processing power, and the results are collected for the chart illustrated below.
Resident Evil 5 has really proved how good the proprietary Capcom MT Framework game engine can look with DirectX-10 effects. The Area #3 and Area #4 tests are the most graphically demanding in this free downloadable benchmark demo, and the results suggest that the Area #3 test scene performs better with NVIDIA GeForce products, while the Area #4 scene favors ATI Radeon GPUs.
Cost Analysis: Resident Evil 5 (Area 4)
Test Summary: It's unclear whether Resident Evil 5 graphics performance favors ATI or NVIDIA, especially with two test scenes that alternate favoritism. Although this benchmark tool is distributed directly by NVIDIA, and Forceware drivers likely include optimizations for the Resident Evil 5 game, there doesn't appear to be any decisive tilt towards GeForce products over their ATI Radeon counterparts from within the game itself. Test scene #3 certainly favors Fermi GF100 GPUs, which lead every other product tested. In test scene #4, the Radeon video card series appears more competitive, which allows the ASUS ENGTX465/2DI/1GD5 to nearly match performance with the Radeon HD 5850 after leading it by 8 FPS in the area #3 tests. Once again, the ASUS GeForce GTX-465 and GTX-470 earn respect as the most value-packed video cards.
Graphics Card | GeForce 9800 GTX+ | Radeon HD4890 | GeForce GTX285 | Radeon HD5770 | GeForce GTX465 | Radeon HD5850 | GeForce GTX470 | Radeon HD5870 |
GPU Cores | 128 | 800 | 240 | 800 | 352 | 1440 | 448 | 1600 |
Core Clock (MHz) | 740 | 850 | 670 | 850 | 608 | 725 | 608 | 850 |
Shader Clock (MHz) | 1836 | N/A | 1550 | N/A | 1215 | N/A | 1215 | N/A |
Memory Clock (MHz) | 1100 | 975 | 1300 | 1200 | 802 | 1000 | 837 | 1200 |
Memory Amount | 512 MB GDDR3 | 1024 MB GDDR5 | 1024MB GDDR3 | 1024MB GDDR5 | 1024MB GDDR5 | 1024MB GDDR5 | 1280MB GDDR5 | 1024MB GDDR5 |
Memory Interface | 256-bit | 256-bit | 512-bit | 128-bit | 256-bit | 256-bit | 320-bit | 256-bit |
DX11: Aliens vs Predator
Aliens vs. Predator is a science fiction first-person shooter video game, developed by Rebellion, and published by Sega for Microsoft Windows, Sony PlayStation 3, and Microsoft Xbox 360. Aliens vs. Predator utilizes Rebellion's proprietary Asura game engine, which had previously found its way into Call of Duty: World at War and Rogue Warrior. The self-contained benchmark tool is used for our DirectX-11 tests, which push the Asura game engine to its limit.
In our benchmark tests, Aliens vs. Predator was configured to use the highest quality settings with 4x AA and 16x AF. DirectX-11 features such as Screen Space Ambient Occlusion (SSAO) and tessellation have also been included, along with advanced shadows.
Cost Analysis: Aliens vs Predator (1920x1200)
Test Summary: Aliens vs Predator may use a well-known game engine, but with DirectX-11 its graphical demands are second only to Metro 2033 (and possibly equivalent to Crysis). With an unbiased appetite for raw DirectX-11 graphics performance, Aliens vs Predator treats ATI and NVIDIA products as equal contenders, despite sponsorship from NVIDIA. When high-strain SSAO is called into action, the ASUS GeForce GTX-465 demonstrates how well Fermi is suited for DX11... though not quite besting ATI's Radeon HD 5850. Even though the high-stress graphics push frame rates lower on the ASUS ENGTX465/2DI/1GD5, its price-to-performance ratio in Aliens vs Predator is actually quite high, and edges out the Radeon HD 5850 for value.
DX11: Battlefield Bad Company 2
The Battlefield franchise has always demanded a lot from PC graphics hardware. DICE (Digital Illusions CE) paired its Frostbite-1.5 game engine with the Destruction-2.0 feature set for Battlefield: Bad Company 2. The game features destructible environments using Frostbite Destruction-2.0, and adds gravitational bullet-drop effects for projectiles fired over long distances. The Frostbite-1.5 engine used in Battlefield: Bad Company 2 renders primarily in DirectX-10, with improved performance and softened dynamic shadows added for DirectX-11 users.
At the time Battlefield: Bad Company 2 was published, DICE was also working on the Frostbite-2.0 game engine. This upcoming engine will include native support for DirectX-10.1 and DirectX-11, as well as parallelized processing support for 2-8 parallel threads. This will improve performance for users with an Intel Core-i7 processor. Unfortunately, the Extreme Edition Intel Core i7-980X six-core CPU with twelve threads will not see full utilization.
In our benchmark tests of Battlefield: Bad Company 2, the first three minutes of action in the single-player raft night scene are captured with FRAPS. Relative to the online multiplayer action, these frame rate results are nearly identical to daytime maps with the same video settings.
The Frostbite-1.5 game engine in Battlefield: Bad Company 2 appears to equalize our test set of video cards, and despite AMD's sponsorship of the game it still plays well using any brand of graphics card.
Cost Analysis: Battlefield: Bad Company 2
Test Summary: Battlefield: Bad Company 2's Frostbite-1.5 game engine forces the ASUS GeForce GTX-465 to trail the Radeon HD 5850, and to operate at nearly the same level as an overclocked GeForce GTX 285. The surprise here is how close the ASUS GeForce GTX-465 came to the GTX-470, which was barely more than 2 FPS faster. Despite the drop in DX10 performance relative to the others, the price per frame works out the same as ATI's Radeon HD 5850.
DX11: BattleForge
BattleForge is a free Massively Multiplayer Online Role-Playing Game (MMORPG) developed by EA Phenomic with DirectX-11 graphics capability. Combining strategic cooperative battles, the community of MMO games, and trading-card gameplay, BattleForge players are free to put their creatures, spells and buildings into whatever combinations they see fit. These units are represented as digital cards from which you build your own unique army. With minimal resources and a custom tech tree to manage, the gameplay is remarkably accessible and action-packed.
Benchmark Reviews uses the built-in graphics benchmark to measure performance in BattleForge, using Very High quality settings (detail) and 8x anti-aliasing with auto multi-threading enabled. BattleForge is one of the first titles to take advantage of DirectX-11 in Windows 7, and offers a very robust color range throughout the busy battleground landscape. The charted results illustrate how performance measures-up between video cards when Screen Space Ambient Occlusion (SSAO) is enabled.
Cost Analysis: BattleForge (1920x1200)
Test Summary: With an unbiased appetite for raw DirectX-11 graphics performance, BattleForge appears ambivalent towards ATI and NVIDIA products, despite sponsorship from AMD. When high-strain SSAO and 8x anti-aliasing are applied, the ASUS GeForce GTX-465 trumps ATI's Radeon HD 5850 at both test resolutions. The improved FPS performance gives the ASUS ENGTX465/2DI/1GD5 the best value of the bunch.
DX11: Metro 2033
Metro 2033 is an action-oriented video game with a combination of survival horror, and first-person shooter elements. The game is based on the novel Metro 2033 by Russian author Dmitry Glukhovsky. It was developed by 4A Games in Ukraine and released in March 2010 for Microsoft Windows. Metro 2033 uses the 4A game engine, developed by 4A Games. The 4A Engine supports DirectX-9, 10, and 11, along with NVIDIA PhysX and GeForce 3D Vision.
The 4A engine is multi-threaded such that only PhysX has a dedicated thread; it uses a task model without any pre-conditioning or pre/post-synchronizing, allowing tasks to be executed in parallel. The 4A game engine can utilize a deferred shading pipeline, uses tessellation for greater performance, and features HDR (complete with blue shift), real-time reflections, color correction, film grain and noise; the engine also supports multi-core rendering.
Metro 2033 features superior volumetric fog, double PhysX precision, object blur, sub-surface scattering for skin shaders, parallax mapping on all surfaces, and greater geometric detail with less aggressive LODs. Using PhysX, the engine delivers features such as destructible environments, cloth and water simulations, and particles that are fully affected by environmental factors.
NVIDIA has been diligently working to promote Metro 2033, and for good reason: it is the most demanding PC video game we've ever tested. When their flagship GeForce GTX-480 struggles to produce 27 FPS with DirectX-11 anti-aliasing turned down to its lowest setting, you know that only the strongest graphics processors will generate playable frame rates.
Cost Analysis: Metro 2033
Test Summary: There's no way to ignore the graphical demands of Metro 2033, and only the most powerful GPUs will deliver a decent visual experience. These demands make our selection of DX11 video cards appear flat, especially when two GeForce GTX-480 Fermi video cards combined into an SLI set produce only 46 FPS. While quality settings will need to be reduced to medium levels for adequate gameplay performance, the ASUS GeForce GTX-465 trailed the Radeon HD 5850 by only 2 FPS, and improved value by more than $1 saved per FPS.
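The cost analyses quoted throughout this review reduce to simple price-per-frame arithmetic. As a minimal sketch of that calculation (the prices and frame rates below are illustrative placeholders, not the exact figures from our charts):

```python
# Rough sketch of the "cost analysis" arithmetic used in this review:
# dollars paid per frame rendered. Lower is better value.
def cost_per_frame(price_usd: float, avg_fps: float) -> float:
    """Return the price-to-performance ratio in dollars per FPS."""
    return price_usd / avg_fps

# Hypothetical example: a $279 card averaging 25 FPS versus a $299 card at 27 FPS.
card_a = cost_per_frame(279.0, 25.0)   # 11.16 $/FPS
card_b = cost_per_frame(299.0, 27.0)   # ~11.07 $/FPS
savings_per_fps = card_a - card_b      # positive means card B is the better value
```

A card can trail on raw FPS yet still win this comparison if its street price is proportionally lower, which is exactly how the GTX-465 earns its value claims here.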
DX11: Unigine Heaven 2.1
The Unigine "Heaven 2.0" benchmark is a free, publicly available tool that unleashes the graphics capabilities of DirectX-11 on Windows 7 or updated Vista operating systems. It reveals the enchanting magic of floating islands with a tiny village hidden in the cloudy skies. With the interactive mode, the experience of exploring this intricate world is within reach. Through its advanced renderer, Unigine is one of the first to set a precedent in showcasing art assets with tessellation, bringing compelling visual finesse, utilizing the technology to its full extent, and exhibiting the possibilities of enriching 3D gaming.
The distinguishing feature of the Unigine Heaven benchmark is hardware tessellation: a scalable technology aimed at automatic subdivision of polygons into smaller and finer pieces, so that developers can give their games a more detailed look almost free of charge in terms of performance. Thanks to this procedure, the elaboration of the rendered image finally approaches the boundary of veridical visual perception: a virtual reality conjured by your hand. The "Heaven" benchmark excels at providing the following key features:
- Native support of OpenGL, DirectX 9, DirectX-10 and DirectX-11
- Comprehensive use of tessellation technology
- Advanced SSAO (screen-space ambient occlusion)
- Volumetric cumulonimbus clouds generated by a physically accurate algorithm
- Dynamic simulation of changing environment with high physical fidelity
- Interactive experience with fly/walk-through modes
- ATI Eyefinity support
Although Heaven-2.0 was recently released and used for our DirectX-11 tests, the benchmark results were extremely close to those obtained with Heaven-1.0 testing. Since only DX11-compliant video cards will properly test on the Heaven benchmark, only those products that meet the requirements have been included.
Cost Analysis: Unigine Heaven
Test Summary: Our test results with the Unigine Heaven benchmark tool appear to deliver fair comparisons of DirectX-11 graphics cards. The ASUS GeForce GTX-465 just clears the Radeon HD 5850 by 0.6 FPS, which makes them even by most standards. Reviewers like to say "Nobody plays a benchmark", but it seems evident that we can expect great things from a tool this detailed. For now though, those details only come by way of DirectX-11 video cards. It's worth noting that if there were a cost attached to Unigine's Heaven benchmark, the ASUS ENGTX465/2DI/1GD5 would cost less per frame than the Radeon HD 5850 as well as many other DirectX-11 video cards.
NVIDIA APEX PhysX Enhancements
Mafia II is the first PC video game title to include the new NVIDIA APEX PhysX framework, a powerful feature set that only GeForce video cards are built to deliver. While console versions will make use of PhysX, only the PC version supports NVIDIA's APEX PhysX physics modeling engine, which adds the following features: APEX Destruction, APEX Clothing, APEX Vegetation, and APEX Turbulence. PhysX helps make object movement more fluid and lifelike, such as cloth and debris. In this section, Benchmark Reviews details the differences with and without APEX PhysX enabled.
We begin with a scene from the Mafia II benchmark test, which has the player pinned down behind a brick column as the enemy shoots at him. Examine the image below, which was taken with a Radeon HD 5850 configured with all settings turned to their highest and APEX PhysX support disabled:
No PhysX = Cloth Blending and Missing Debris
Notice from the image above that when PhysX is disabled there is no broken stone debris on the ground. Cloth from the foreground character's trench coat blends into his leg and remains static relative to his body, as does the clothing on other (AI) characters. Now inspect the image below, taken on the GeForce GTX 460 with APEX PhysX enabled:
Realistic Cloth and Debris - High Quality Settings With PhysX
With APEX PhysX enabled, the cloth neatly sways with the contour of a character's body, and doesn't bleed into solid objects such as body parts. Additionally, APEX Clothing features improve realism by adding gravity and wind effects to clothing, allowing characters to look as they would in similar real-world environments.
Burning Destruction Smoke and Vapor Realism
Flames aren't exactly new to video games, but smoke plumes and heat vapor that mimic realistic movement have never looked as real as they do with APEX Turbulence. Fire and explosions added into a destructible environment is a potent combination for virtual-world mayhem, showcasing the new PhysX APEX Destruction feature.
Exploding Glass Shards and Bursting Flames
NVIDIA PhysX has changed video game explosions into something worthy of cinema-level special effects. Bursting windows explode into several unique shards of glass, and destroyed crates bust into splintered kindling. Smoke swirls and moves as if there's an actual air current, and flames move out towards open space all on their own. Surprisingly, there is very little impact on FPS performance with APEX PhysX enabled on GeForce video cards, and very little penalty for changing from medium (normal) to high settings.
NVIDIA 3D-Vision Effects
Readers familiar with Benchmark Reviews have undoubtedly heard of NVIDIA GeForce 3D Vision technology; if not from our review of the product, then from the Editor's Choice Award it's earned or the many times I've personally mentioned it in our articles. Put simply: it changes the game. 2010 has been a break-out year for 3D technology, and PC video games are leading the way. Mafia II expands on the three-dimensional effects, and improves the 3D-Vision experience with out-of-screen effects. For readers unfamiliar with the technology, 3D-Vision is a feature only available on NVIDIA GeForce video cards.
The first thing gamers should be aware of is the performance penalty for using 3D-Vision with a high-demand game like Mafia II. Using a GeForce GTX 480 video card for reference, currently the most powerful single-GPU graphics solution available, we experienced frame rates up to 33 FPS with all settings configured to their highest and APEX PhysX set to high. Historically, enabling 3D Vision cut the frame rate by about 50%. That is no longer a hard-and-fast rule, thanks to '3D Vision Ready' game titles that offer performance optimizations. Mafia II proved that the 3D Vision performance penalty can be as little as 30% with a single GeForce GTX 480 video card, or a mere 11% in SLI configuration. NVIDIA Forceware drivers will guide players to make custom-recommended adjustments for each game they play, but PhysX and anti-aliasing will still reduce frame rate performance.
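The penalty percentages above translate directly into expected stereo frame rates. As a quick sketch using the figures quoted in this section (33 FPS base on a GTX 480; the 50%, 30%, and 11% penalties), assuming a simple multiplicative penalty model:

```python
# Sketch of the 3D Vision frame-rate penalty arithmetic. The base FPS and
# penalty percentages come from this section; the linear model is a
# simplification for illustration.
def fps_with_3d_vision(base_fps: float, penalty: float) -> float:
    """Apply a fractional 3D Vision performance penalty to a 2D frame rate."""
    return base_fps * (1.0 - penalty)

old_rule   = fps_with_3d_vision(33.0, 0.50)  # 16.5 FPS under the old ~50% rule
single_gpu = fps_with_3d_vision(33.0, 0.30)  # ~23.1 FPS, single GTX 480
sli_pair   = fps_with_3d_vision(33.0, 0.11)  # ~29.4 FPS in SLI
```

The difference between the old rule of thumb and a '3D Vision Ready' title is the difference between a slideshow and playable frame rates.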
Of course, the out-of-screen effects are worth every dollar you spend on graphics hardware. In the image above, an explosion sends the car's wheel and door flying into the player's face, followed by metal debris and sparks. When you're playing, this certainly helps to catch your attention... and when the objects become bullets passing by you, the added depth of field helps assist in player awareness.
Combined with APEX PhysX technology, NVIDIA's 3D-Vision brings destructible walls to life. As enemies shoot at the brick column, dirt and dust fly past the player forcing stones to tumble out towards you. Again, the added depth of field can help players pinpoint the origin of enemy threat, and improve response time without sustaining 'confusion damage'.
NVIDIA APEX Turbulence, a new PhysX feature, already adds an impressive level of realism to games (such as with Mafia II pictured in this section). Watching plumes of smoke and flames spill out towards your camera angle helps put you right into the thick of action.
NVIDIA 3D-Vision/3D-Vision Surround is the perfect addition to APEX PhysX technology, and capable video games will prove that these features reproduce lifelike scenery and destruction when used together. Glowing embers and fiery shards shooting past you seem very real when 3D-Vision pairs itself with APEX PhysX technology, and there's finally a good reason to overpower the PC's graphics system.
GeForce GTX465 Temperatures
Benchmark tests are always nice, so long as you care about comparing one product to another. But when you're an overclocker, gamer, or merely a PC hardware enthusiast who likes to tweak things on occasion, there's no substitute for good information. Benchmark Reviews has a very popular guide written on Overclocking Video Cards, which gives detailed instruction on how to tweak a graphics card for better performance. Of course, not every video card has overclocking headroom. Some products run so hot that they can't suffer any higher temperatures than they already do. This is why we measure the operating temperature of the video card products we test.
To begin my testing, I use GPU-Z to measure the temperature at idle as reported by the GPU. Next I use FurMark's "Torture Test" to generate maximum thermal load and record GPU temperatures at high-power 3D mode. The ambient room temperature remained at a stable 20°C throughout testing, while the inner-case temperature hovered around 37°C.
FurMark does two things extremely well: it drives the thermal output of any graphics processor higher than applications or video games realistically could, and it does so with consistency every time. FurMark works great for testing the stability of a GPU as the temperature rises to the highest possible output. The temperatures discussed below are absolute maximum values, and not representative of real-world performance.
ASUS GeForce GTX-465 Video Card Temperatures
NVIDIA-supplied product specifications state that the GeForce GTX-465 has a maximum GPU thermal threshold of 105°C. This is identical to the previous-generation GeForce GTX 285, as well as the GeForce GTX-470 and -480 that share the same GF100 graphics processor. In a room with a 20°C ambient temperature, the ASUS GeForce GTX-465 produced 45°C at idle and 82°C under load. It wasn't until FurMark had raised the temperatures to an unnaturally high level that the fan became audible. Otherwise, operation remained silent during game play.
Comparison: GTX-470 & GTX-480
Housed in a mid-tower computer case with no additional cooling, the retail NVIDIA GeForce GTX-470 video card produced a luke-warm 40°C in idle 2D mode and increased to 88°C in sustained 100% mode using FurMark's torture test.
Zotac GeForce GTX 470 Video Card Temperatures
Compared to the NVIDIA GeForce GTX-480 retail kit, temperatures are extremely close. Let's not kid ourselves into thinking that the GTX-470 should run cooler simply because it's a lower model; the difference in thermal management is clearly obvious from the beginning. The image below illustrates temperatures on the GTX-480 using FurMark to produce load:
ZOTAC GeForce GTX480 Video Card Temperatures
Most new graphics cards from NVIDIA and ATI expel heated air through exhaust vents, which keeps it from raising the internal case temperature. Our test system is an open-air chassis that lets the video card depend on its own cooling solution for proper thermal management. Most gamers and PC hardware enthusiasts who use an aftermarket computer case with intake and exhaust fans will create a directional airflow current and lower internal temperatures a few degrees below the measurements we've recorded. To demonstrate this, we've built a system to illustrate the...
Best-Case Scenario
Traditional tower-style computer cases position internal hardware so that heat is expelled out through the back of the unit. This is better than nothing, but there's a fundamental problem: heat rises. Using the transverse mount design on the SilverStone Raven-2 chassis, Benchmark Reviews re-tests the ASUS GeForce GTX-465 video card to determine the 'best-case' scenario.
Sitting idle at the Windows 7 desktop with a 20°C ambient room temperature, the GeForce GTX-465 rested at 42°C, which wasn't much lower than in a regular computer case. Pushed to abnormally high levels using the FurMark torture test, the GeForce GTX-465 operated at 78°C with a very quiet cooling fan. While the many BIOS revisions leading up to retail availability have obviously made a noticeable difference, the well-designed Raven-2 computer case with additional cooling features has helped to make an even bigger one.
Despite the 40nm process and improved BIOS programming, GF100 still runs warm unless you add internal case cooling to draw in cool air and exhaust heated air. This could impact overclocking projects inside poorly ventilated computer cases. So if you've already got a warm-blooded CPU in your computer system, take a look at our Best CPU Cooler Performance series and find a heatsink that will lower your internal temperatures and prolong component lifetime. Lowering the internal case temperature could give you an added edge for GPU overclocking projects, and it will also help overall system stability.
VGA Power Consumption
Life is not as affordable as it used to be, and items such as gasoline, natural gas, and electricity all top the list of resources that have exploded in price over the past few years. Add to this the limit of non-renewable resources compared to current demands, and you can see that prices are only going to get worse. Planet Earth needs our help, and needs it badly. With forests becoming barren of vegetation and snow-capped poles quickly turning brown, the technology industry has a new attitude towards turning "green". I'll spare you the powerful marketing hype that gets sent from various manufacturers every day, and get right to the point: your computer hasn't been doing much to help save energy... at least up until now.
For power consumption tests, Benchmark Reviews utilizes the 80-PLUS GOLD certified OCZ Z-Series Gold 850W PSU, model OCZZ850. This power supply unit has been tested to provide over 90% typical efficiency by Chroma System Solutions. To measure isolated video card power consumption, Benchmark Reviews uses the Kill-A-Watt EZ (model P4460) power meter made by P3 International.
A baseline test is taken without a video card installed inside our test computer system, which is allowed to boot into Windows-7 and rest idle at the login screen before power consumption is recorded. Once the baseline reading has been taken, the graphics card is installed and the system is again booted into Windows and left idle at the login screen. Our final loaded power consumption reading is taken with the video card running a stress test using FurMark. Below is a chart with the isolated video card power consumption (not system total) displayed in Watts for each specified test product:
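The isolation method described above is straightforward subtraction: the Kill-A-Watt reads whole-system wattage, so the card's own draw is the reading with the card installed minus the no-card baseline. A minimal sketch (the wall-socket wattages below are hypothetical examples, not our recorded baselines):

```python
# Sketch of the isolated power-draw method: whole-system meter reading with
# the video card installed, minus the baseline reading without it.
def isolated_card_power(system_watts: float, baseline_watts: float) -> float:
    """Subtract the no-card baseline to isolate the video card's draw."""
    return system_watts - baseline_watts

# Hypothetical meter readings chosen so the results match the GTX-465 rows below.
idle_draw   = isolated_card_power(128.0, 92.0)   # 36 W at idle
loaded_draw = isolated_card_power(311.0, 92.0)   # 219 W under FurMark load
```

The same baseline is reused for both the idle and loaded readings, so any error in the baseline cancels out of comparisons between cards measured on the same test system.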
VGA Product Description (sorted by combined total power) | Idle Power | Loaded Power
---|---|---
NVIDIA GeForce GTX 480 SLI Set | 82 W | 655 W
NVIDIA GeForce GTX 590 Reference Design | 53 W | 396 W
ATI Radeon HD 4870 X2 Reference Design | 100 W | 320 W
AMD Radeon HD 6990 Reference Design | 46 W | 350 W
NVIDIA GeForce GTX 295 Reference Design | 74 W | 302 W
ASUS GeForce GTX 480 Reference Design | 39 W | 315 W
ATI Radeon HD 5970 Reference Design | 48 W | 299 W
NVIDIA GeForce GTX 690 Reference Design | 25 W | 321 W
ATI Radeon HD 4850 CrossFireX Set | 123 W | 210 W
ATI Radeon HD 4890 Reference Design | 65 W | 268 W
AMD Radeon HD 7970 Reference Design | 21 W | 311 W
NVIDIA GeForce GTX 470 Reference Design | 42 W | 278 W
NVIDIA GeForce GTX 580 Reference Design | 31 W | 246 W
NVIDIA GeForce GTX 570 Reference Design | 31 W | 241 W
ATI Radeon HD 5870 Reference Design | 25 W | 240 W
ATI Radeon HD 6970 Reference Design | 24 W | 233 W
NVIDIA GeForce GTX 465 Reference Design | 36 W | 219 W
NVIDIA GeForce GTX 680 Reference Design | 14 W | 243 W
Sapphire Radeon HD 4850 X2 11139-00-40R | 73 W | 180 W
NVIDIA GeForce 9800 GX2 Reference Design | 85 W | 186 W
NVIDIA GeForce GTX 780 Reference Design | 10 W | 275 W
NVIDIA GeForce GTX 770 Reference Design | 9 W | 256 W
NVIDIA GeForce GTX 280 Reference Design | 35 W | 225 W
NVIDIA GeForce GTX 260 (216) Reference Design | 42 W | 203 W
ATI Radeon HD 4870 Reference Design | 58 W | 166 W
NVIDIA GeForce GTX 560 Ti Reference Design | 17 W | 199 W
NVIDIA GeForce GTX 460 Reference Design | 18 W | 167 W
AMD Radeon HD 6870 Reference Design | 20 W | 162 W
NVIDIA GeForce GTX 670 Reference Design | 14 W | 167 W
ATI Radeon HD 5850 Reference Design | 24 W | 157 W
NVIDIA GeForce GTX 650 Ti BOOST Reference Design | 8 W | 164 W
AMD Radeon HD 6850 Reference Design | 20 W | 139 W
NVIDIA GeForce 8800 GT Reference Design | 31 W | 133 W
ATI Radeon HD 4770 RV740 GDDR5 Reference Design | 37 W | 120 W
ATI Radeon HD 5770 Reference Design | 16 W | 122 W
NVIDIA GeForce GTS 450 Reference Design | 22 W | 115 W
NVIDIA GeForce GTX 650 Ti Reference Design | 12 W | 112 W
ATI Radeon HD 4670 Reference Design | 9 W | 70 W
The GeForce GTX-465 requires two six-pin PCI-E power connections. Resting at idle, the card consumed 36 watts of electricity... 6W less than the GTX-470. Putting this into perspective: the ATI Radeon HD 4770 and NVIDIA GeForce GTX-280 use about the same power, while the NVIDIA GeForce 9800 GTX and ATI Radeon HD 4850 both consume more power at idle. Unfortunately, the GTX-465 draws more than twice the idle power of the ATI Radeon HD 5770... and half again as much as the HD 5850. As our GeForce GTX 480 review demonstrated, Fermi proves to have a big power appetite when it should be snacking on only a few watts.
Once 3D applications begin to demand power from the GPU, electrical power consumption really begins to climb. Measured at full 3D 'torture' load, the GeForce GTX 465 consumed 219 watts, compared to 278W for the GTX-470. While the GTX-465 matches the power consumption of the old GTX-280 (again), it's much higher than the 157W consumed by the Radeon HD 5850. Although Fermi uses a 40nm fabrication process, there's nothing 'green' about its power demand under load, and most DirectX-11 video cards from ATI require less power.
ASUS SmartDoctor
In the past, software-based overclocking on ASUS video cards has been accomplished through the GamerOSD program. Now these tasks are handled by ASUS SmartDoctor, which is described as "intelligent hardware protection and a powerful overclocking tool". The ASUS SmartDoctor tool allows users to overclock their video card's GPU and RAM, and at the same time monitor thermal output. ASUS simplifies the on-screen feedback with notes such as "Your VGA Card is OK.", but more advanced users will appreciate the in-depth data that displays along the upper-right corner. Shown below are the readings from the ASUS GeForce GTX-465 ENGTX465/2DI/1GD5 sample we received for testing:
ASUS GamerOSD
Want to boost graphics card performance without exiting the game? Activate ASUS GamerOSD anytime during gameplay and adjust the GPU clock for instant overclocking. Real-time FPS (frames per second) status is also available to keep track of performance improvements.
The ASUS GamerOSD (On-Screen Display) is a tool for combining ASUS SmartDoctor and video capture tools together without leaving the action. What I found especially helpful was the high-resolution DVD-quality video capture feature, which comes FREE with the ASUS ENGTX465/2DI/1GD5 kit. Programmable hot keys assign actions, and recorded video (saved in XviD MPEG-4 format) or screen shots can be taken on command. These are features you would have to pay for using other software, such as FRAPS.
Editor's Opinion: NVIDIA Fermi
My opinion of NVIDIA's Fermi architecture has changed over the past several months, as they've developed their graphics processor to fully embody the originally unclear long-term plan. Testing with NVIDIA's GF100 GPU held its own set of challenges, and many times the video cards based on this graphics processor seemed condemned by the inherited legacy of problems. From the flagship GeForce GTX 480 down to the GTX 465, Fermi impressed gamers with strong FPS performance... and that was about it. Thermal output and power consumption were unfashionably high, to which ATI constantly and consistently focused their marketing attacks. Then along comes GF104 on the GeForce GTX 460.
NVIDIA's GeForce GTX 460 not only changes the collective opinion of their Fermi architecture, it also changes the GPU landscape. ATI held the upper hand by releasing a DirectX-11 video card first, but they've painted themselves into a corner with their Evergreen GPU. Unlike NVIDIA's Fermi architecture, which can shape-shift as desired, ATI's Cedar, Redwood, and Juniper GPUs are all simply slices of the same processor: Cypress. This is where intelligent consumers will spot the flaw: ATI came to the (video) card game and showed their entire hand from the first deal, while NVIDIA had a few spare aces up their sleeves. NVIDIA's GeForce GTX 480 is only 15/16 of the complete GF100 package, and we're just beginning to see what's possible with a 7/8-whole GF104 GPU. It's unknown what NVIDIA has planned for the GF102, GF106, and GF108... although the speculation is rampant.
So now ATI and NVIDIA are even-steven in the running for DirectX-11, and all that they need are video games to increase demand for their product. This becomes a real problem (for them both) because very few existing games demand any more graphical processing power than games demanded back in 2006. Video cards have certainly gotten bigger and faster, but video games have lacked fresh development. DirectX-10 helped the industry, but every step forward was followed by two steps back because of the widespread dislike for Microsoft's Windows Vista O/S. Introduced with Windows 7 (and also available for Windows Vista with an update), enthusiasts now have DirectX-11 detail and special effects in their video games.
NVIDIA GeForce Fermi Graphics Card Family
Even if you're only after raw gaming performance and have no real-world interest in CUDA, there's reason to appreciate the GF100 GPU. New enhancement products, such as the NVIDIA GeForce 3D Vision Gaming Kit, double the demands on frame rate output and hence require more powerful graphics processing. This is where products like the GeForce GTX 470 and GTX 480 deliver the performance necessary to enjoy the extended gaming experience. I'm a huge fan of GeForce 3D-Vision, which is why it's earned our Editor's Choice Award, and Fermi delivers the power necessary to drive up to three monitors. The newly dubbed NVIDIA 3D-Vision Surround (stereo) requires three 3D-Vision capable LCD, projector, or DLP devices and offers bezel correction support. Alternatively, NVIDIA Surround (non-stereo) supports mixed displays with common resolution/timing.
Even some older game titles benefit from the Fermi architecture, beyond just an increase in frame rates. For example, Far Cry 2 will receive 32x CSAA functionality native to the game, and future NVIDIA Forceware driver updates could add further features to existing co-developed video games. Additionally, NVIDIA NEXUS technology brings CPU and GPU code development together in Microsoft Visual Studio 2008 for a shared process timeline. NEXUS also introduces the first hardware-based shader debugger. NVIDIA's GF100 is the first GPU to ever offer full C++ support, the programming language of choice among game developers.
Fermi is also the first GPU to support Error Correcting Code (ECC) based protection of data in memory. ECC was requested by GPU computing users to enhance data integrity in high performance computing environments. ECC is a highly desired feature in areas such as medical imaging and large-scale cluster computing. Naturally occurring radiation can cause a bit stored in memory to be altered, resulting in a soft error. ECC technology detects and corrects single-bit soft errors before they affect the system. Fermi's register files, shared memories, L1 caches, L2 cache, and DRAM memory are ECC protected, making it not only the most powerful GPU for HPC applications, but also the most reliable. In addition, Fermi supports industry standards for checking of data during transmission from chip to chip. All NVIDIA GPUs include support for the PCI Express standard for CRC check with retry at the data link layer. Fermi also supports the similar GDDR5 standard for CRC check with retry (aka "EDC") during transmission of data across the memory bus.
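The single-bit correction described above is the classic ECC mechanism: extra parity bits let the memory controller compute a "syndrome" that pinpoints a flipped bit. As a hedged illustration only (a textbook Hamming(7,4) code, not NVIDIA's actual ECC implementation, which protects wider words), the principle can be sketched like this:

```python
# Minimal Hamming(7,4) sketch: 4 data bits protected by 3 parity bits.
# Illustrative only -- real GPU ECC protects much wider memory words,
# but the detect-and-correct principle is the same.

def encode(d):
    """d: list of 4 data bits -> 7-bit codeword [p1, p2, d1, p3, d2, d3, d4]."""
    d1, d2, d3, d4 = d
    p1 = d1 ^ d2 ^ d4
    p2 = d1 ^ d3 ^ d4
    p3 = d2 ^ d3 ^ d4
    return [p1, p2, d1, p3, d2, d3, d4]

def correct(c):
    """Return (corrected codeword, 1-based error position; 0 means no error)."""
    c = list(c)
    # Recompute the parity checks; together they form the error syndrome.
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]   # covers positions 1, 3, 5, 7
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]   # covers positions 2, 3, 6, 7
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]   # covers positions 4, 5, 6, 7
    pos = s1 + 2 * s2 + 4 * s3       # syndrome = position of the bad bit
    if pos:
        c[pos - 1] ^= 1              # flip the erroneous bit back
    return c, pos

word = encode([1, 0, 1, 1])
word[4] ^= 1                         # simulate a single-bit soft error
fixed, pos = correct(word)
assert fixed == encode([1, 0, 1, 1]) and pos == 5
```

A naturally occurring soft error flips one stored bit; the recomputed parity checks disagree in a pattern that identifies exactly which bit, so it can be silently repaired before the data is used.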
The true potential of NVIDIA's Fermi architecture has yet to be fully seen. Sure, we've already poked around at the inner workings for our NVIDIA GF100 GPU Fermi Graphics Architecture article, but there's so much more that goes untested. Well into 2010, only a beta version of the Folding@Home client is available. Work unit performance on the GeForce GTX 400-series is going to surpass ATI's Radeon HD 5000 series equivalents without much struggle, but it's uncertain how much better the performance will be compared to previous generations.
ASUS ENGTX465 Conclusion
Although the rating and final score mentioned in this conclusion are made to be as objective as possible, please be advised that every author perceives these factors differently at various points in time. While we each do our best to ensure that all aspects of the product are considered, there are oftentimes unforeseen market conditions and manufacturer changes which occur after publication that could render our rating obsolete. Please do not base any purchase solely on our conclusion, as it represents our product rating for the sample received, which may differ from retail versions. Benchmark Reviews begins our conclusion with a short summary for each of the areas that we rate.
Our performance rating considers how effectively the ASUS GeForce GTX-465 DirectX-11 video card performs against competing products from both ATI and NVIDIA. Overall, the ASUS ENGTX465/2DI/1GD5 either matched or outperformed the ATI Radeon HD 5850 in more than half the tests, and trailed only slightly behind in the rest. Although the GTX 465 stumbles in Battlefield: Bad Company 2 and Aliens vs. Predator, it captures the price-to-performance ratio in BattleForge, Far Cry 2, and Resident Evil 5. In fact, when BattleForge includes high-strain SSAO, the ASUS GTX-465 demonstrates how well Fermi is suited for DX11... beating ATI's more expensive Radeon HD 5850 by nearly 10% at 1680x1050. At the end of our test gauntlet, the general feeling was that the GeForce GTX-465 is positioned directly between the Radeon HD 5830 and HD 5850 in price, but closer to the 5850 in performance.
Aside from a carbon fiber decal, the ASUS ENGTX465/2DI/1GD5 video card is identical in appearance and construction to the reference NVIDIA design. ASUS has kept NVIDIA's design and added its own accents, but otherwise this product doesn't differentiate itself the way some other Add-In Card (AIC) partners do with custom cooling or other design improvements; function takes precedence over fashion here. While a graphics card doesn't need to look exciting in order to get the job done, ASUS knows there's a lot of competition that will win over buyers with added product presentation, and does what it can with the limited palette.
In terms of video card pecking order, the ASUS GeForce GTX-465 occupies the #3 spot in both the NVIDIA and ATI video card lineups. This means that four other series of single-GPU graphics cards are priced/performing above the GTX-465, which often creates the opportunity for, or interest in, paired SLI sets. As the third GF100 Fermi iteration, the GeForce GTX 465 has been designed with the same solid construction. There are exposed electronics on the back of the PCB, but nothing that protrudes enough to require a metal back-plate for protection. The top side of the graphics card features a protective plastic fan shroud, which receives the same depressed chamfer that allowed previous generations to sustain airflow in SLI configurations. The Fermi GF100 GPU has been moved forward toward the exhaust vents, which allows memory and power components to receive optimal cooling first.
While most PC gamers and hardware enthusiasts buy a discrete graphics card for the sole purpose of playing video games, there's a small niche that depends on extra features beyond fast frame rates. NVIDIA is the market leader in GPGPU functionality, and it's no surprise to see CPU-level technology available in their GPU products. Fermi GF100 is also the first GPU to ever support Error Correcting Code (ECC), a feature that benefits both personal and professional users. Proprietary technologies such as NVIDIA Parallel DataCache and NVIDIA GigaThread Engine further add value to GPGPU functionality. Additionally, applications such as Adobe Photoshop or Premiere can take advantage of GPGPU processing power.
Defining product value means something different to everyone. Some readers take heat and power consumption into consideration, while others are only concerned with FPS performance. With regard to value, there are several ways to look at the $280 GeForce GTX-465 and compare it to its closest rivals, such as the $305 ATI Radeon HD 5850. In terms of product price to FPS performance, the GeForce GTX-465 is one of the most affordable DirectX-11 video card products available. Here's a breakdown of the average USD cost per FPS recorded for the ten tests conducted for this article:
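The cost-per-FPS metric works out as the card's street price divided by its average frame rate in a given test. As a sketch of the arithmetic only, using the prices quoted in this article but placeholder frame rates (not our measured benchmark results):

```python
# Cost-per-FPS arithmetic sketch. Prices match the article's quoted
# street prices; the avg_fps values are hypothetical placeholders,
# NOT the benchmark figures measured for this review.

cards = {
    "GeForce GTX-465": {"price": 280.00, "avg_fps": 55.0},
    "Radeon HD 5850":  {"price": 305.00, "avg_fps": 60.0},
}

for name, c in cards.items():
    cost_per_fps = c["price"] / c["avg_fps"]
    print(f"{name}: ${cost_per_fps:.2f} per FPS")
```

A lower dollars-per-FPS figure means more performance for the money, which is how a cheaper card can "win" a test even while posting a lower raw frame rate.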
As of late June 2010, the ASUS GeForce GTX-465 video card (model ENGTX465/2DI/1GD5) sells at NewEgg for $279.99, while nearly all other GTX-465 models are listed between $250-$260. At the same time, NewEgg lists nearly all Radeon HD 5850 models for more than $305. If you've got money to spend and demand better performance, the ASUS GTX-470 (model ENGTX470/2DI/1280MD5) delivers on its $350 price point. Additionally, the ASUS GeForce GTX-465 comes with a three-year limited warranty, and for a limited time ASUS has bundled the 'Just Cause 2' full-version PC video game, which is more incentive than the competition offers.
While NVIDIA's GF100 Fermi GPU delivers more than just a giant boost to video frame rates over the previous generation, such as GPGPU functionality that's usable in and outside of video games, the GeForce GTX-465 is reduced from 16 to 11 Streaming Multiprocessors. This in turn reduces CUDA cores from 512 to 352, and takes texture units down from 64 to only 44. You're getting what you pay for, but keep in mind that you're not paying very much. Comparing video game frame rate performance between the GTX-465 and ATI Radeon HD 5850, the ENGTX465/2DI/1GD5 kit pulls ahead at least half the time and either matches or slightly trails the HD 5850 the rest of the time. That should be enough reason to save $55, or to push your system with two units in SLI. As inventory begins to match demand, manufacturers will become more competitive with pricing and bundled extras, so be sure to shop for your best bargain.
Pros:
+ Respectable price-to-performance ratio
+ GF100 Introduces Error Correcting Code (ECC)
+ Good performance for high-end games
+ Fan exhausts all heated air outside of case
+ Quiet cooling fan under loaded operation
+ Includes native HDMI audio/video output
+ Adds 32x CSAA post-processing detail
+ Supports triple-SLI functionality
+ Competes well with Radeon HD 5850
Cons:
- Consumes 36W at idle and 219W under full load
- Fermi architecture, but less than ideal GPU
Ratings:
- Performance: 8.25
- Appearance: 8.75
- Construction: 9.50
- Functionality: 9.00
- Value: 8.75
Final Score: 8.85 out of 10.
Recommended: Benchmark Reviews Seal of Approval.
Benchmark Reviews encourages you to leave comments (below), or ask questions and join the discussion in our Forum.
Comments
Then, the HD5770 is not 256-bit but 128-bit!
I count 5 clear wins for the 5850, 3 for the 465 and one wash (Resident Evil 5). From the 465's point of view, that's a staggering 67% more wins for the 5850!!
And by the way, I don't count the two parts of 3D vantage as separate tests.
So not only is the 5850 the faster card with over half the tests won, more importantly, it also offers the most bang for your buck! All according to your own figures!
At least to me, this would count as a clear win for the 5850, but that is hardly what I see in the summary.
Also worth mentioning, I think: Having been on Newegg on a few occasions, $305 seemed a bit steep for a 5850, and for aspiring graphics card customers, I can tell a 5850 can be found for $285. Only $5 more expensive than the price for the 465 you are quoting, and with that small difference I think the value numbers throughout the test would look a bit different.
You should also check your math on the cost per FPS, because the GTX-465 beats the Radeon 5850 in nearly all of them.
I thank you and congratulate you for your response to the commenter.
I sit here absolutely STUNNED. I can't believe that somebody didn't just "take it" and nearly agree with the ati fan fraud.
THANK YOU SO MUCH.
My faith in humanity has been renewed.
Believe me, I really, really appreciate it.
Sincerely sick of the rampant red bias,
SiliconDoc
Did you just report the AC power difference between no graphics card in the system and with the graphics card installed? Did you try to take into account the PSU efficiency?
Power supply efficiency is not taken into consideration for any of our reported results. Only the motherboard, processor, memory, SSD, and video card are drawing power. The math is simply idle/load result minus baseline.
And I don't get your pricing figures. I found a 5850 for $285 and another for $305 in Newegg... the ones you mention beyond $325 include special cooling designs.