MSI N460GTX Cyclone 1GD5/OC Video Card
Reviews - Featured Reviews: Video Cards
Written by Bruce Normann   
Wednesday, 11 August 2010

MSI N460GTX Cyclone 1GD5/OC Review

There is no doubt that the NVIDIA GTX 460 has created quite a splash since its introduction. The reconfigured Fermi architecture of the new GF104 chip gives the card a welcome boost in gaming performance, compared to the first GPUs in the series. With 1/3 fewer transistors to feed, the board uses much less power, runs cooler, and it overclocks well with just air cooling. With that kind of capability baked into the basic design, it didn't take long for NVIDIA's partners to start releasing overclocked editions with advanced cooling hardware. MSI has taken on the challenge, and released one of their custom designs under the Cyclone banner. Benchmark Reviews can't resist a good HSF assembly; that much should be obvious to long term readers. So here we are with a detailed review of the MSI N460GTX Cyclone 1GD5/OC video card.

MSI_N460GTX_Cyclone_1GD5_OC_Bracket_34_01.jpg

Software control of a video card's clocks and core voltage is the fastest and easiest way to improve its performance. MSI Afterburner is one of the best monitoring and control software products available, and it brings voltage control to the GTX 460 reference design. With so much apparent thermal headroom available on the GTX 460, the ability to bump up the core voltage on the GPU is quite useful. Add in an oversized heat sink and an oversized fan, and you have a recipe for generous overclocks.

Driver updates are a touchy subject for the enthusiast and gaming communities. They are a necessary evil, IMHO. I like the performance and stability improvements they (sometimes...!) bring, but I wish I didn't have to constantly mess with them. Not every update is needed by every user, but for a new product like the GTX 4xx series, it seems like every second or third release is a necessary update. Either the performance increases are too great to ignore, or the vendor has fixed a major bug that affects a large number of users. NVIDIA released a major performance upgrade to their Fermi drivers right around the time the GTX 460 hit the market, and the combination produced some great synergy.

"What we have here is the perfect storm." When you combine the newly improved Fermi architecture of the GF104, its overclocking ability, the latest performance-enhancing device drivers, enhanced cooling performance, and software voltage control, that's what you get: the perfect storm. Ride along with Benchmark Reviews as we see how high we can crest this wave before crashing into a watery trough.

Manufacturer: Micro-Star Int'l Co., Ltd
Product Name: MSI N460GTX Cyclone
Model Number: N460GTX Cyclone 1GD5/OC
Price As Tested: $234.99

Full Disclosure: The product sample used in this article has been provided by MSI.

NVIDIA GeForce GTX 460 GPU Features

The features of the GF104 GPU contained in the N460GTX are fully comparable with the latest offerings from both major GPU camps. We've been using most of these, or similar technologies, on Radeon 5xxx cards since last September; now we have rough parity in GPU features. Here are the features and specifications directly related to the GPU, as provided by the manufacturer, NVIDIA:

NVIDIA_Black_Square_3D_Logo_250px.jpg

Microsoft DirectX 11 Support

DirectX 11 GPU with Shader Model 5.0 support designed for ultra high performance in the new API's key graphics feature, GPU-accelerated tessellation.

NVIDIA PhysX Technology

Full support for NVIDIA PhysX technology, enabling a totally new class of physical gaming interaction for a more dynamic and realistic experience with GeForce.

NVIDIA 3D Vision Ready*

GeForce GPU support for NVIDIA 3D Vision, bringing a fully immersive stereoscopic 3D experience to the PC. A combination of high-tech wireless glasses and advanced software, 3D Vision transforms hundreds of PC games into full stereoscopic 3D. In addition, you can watch 3D movies and 3D digital photographs in eye popping, crystal-clear quality.

NVIDIA 3D Vision Surround Ready**

Expand your games across three displays in full stereoscopic 3D for the ultimate "inside the game" experience with the power of NVIDIA 3D Vision and SLI technologies. NVIDIA Surround also supports triple screen gaming with non-stereo displays.

NVIDIA CUDA Technology

CUDA technology unlocks the power of the GPU's processor cores to accelerate the most demanding tasks such as video transcoding, physics simulation, ray tracing, and more, delivering incredible performance improvements over traditional CPUs.

NVIDIA SLI Technology***

Industry leading NVIDIA SLI technology offers amazing performance scaling for the world's premier gaming solution.

32x Anti-aliasing Technology

Lightning fast, high-quality anti-aliasing at up to 32x sample rates obliterates jagged edges.

NVIDIA PureVideo HD Technology****

The combination of high-definition video decode acceleration and post-processing that delivers unprecedented picture clarity, smooth video, accurate color, and precise image scaling for movies and video.

PCI Express 2.0 Support

Designed for the new PCI Express 2.0 bus architecture offering the highest data transfer speeds for the most bandwidth-hungry games and 3D applications, while maintaining backwards compatibility with existing PCI Express motherboards for the broadest support.

Dual-link DVI Support

Able to drive industry's largest and highest resolution flat-panel displays up to 2560x1600 and with support for High-bandwidth Digital Content Protection (HDCP).

HDMI 1.4a Support*****

Fully integrated support for HDMI 1.4a including GPU accelerated Blu-ray 3D support, xvYCC, deep color, and 7.1 digital surround sound including Dolby TrueHD and DTS-HD. Upgrade your GPU to full 3D capability with NVIDIA 3DTV Play software, enabling 3D gaming, picture viewing and 3D web video streaming. See www.nvidia.com/3dtv for more details.

* NVIDIA 3D Vision requires 3D Vision glasses and a 3D Vision-Ready monitor. See www.nvidia.com/3dvision for more information.

** NVIDIA 3D Vision Surround requires two or more graphics cards in NVIDIA SLI configuration, 3D Vision glasses and three matching 3D Vision-Ready displays. See www.nvidia.com/surround for more information.

*** A GeForce GTX 460 GPU must be paired with another GeForce GTX 460 GPU (graphics card manufacturer can be different) with the same frame buffer size. SLI requires sufficient system cooling and a compatible power supply. Visit www.slizone.com for more information and a listing of SLI-Certified components.

**** Supported video software is required to experience certain features.

***** Blu-ray 3D playback requires the purchase of a compatible software player

NVIDIA GeForce GTX 460 GPU Specifications

GPU Engine Specs (1GB model at 725MHz):

MSI_N460GTX_Cyclone_1GD5_OC_NVIDIA_Logo_Etch_01.jpg

Fabrication Process: TSMC 40nm Bulk CMOS
Die Size: 366mm2 (estimated)
Number of Transistors: 1.95 billion
Graphics Processing Clusters: 2
Streaming Multiprocessors: 7
CUDA Cores: 336
Texture Units: 56
ROP Units: 32
Engine clock speed: 725 MHz
Texel fill rate (bilinear filtered): 40.7 Gigatexels/sec
Pixel fill rate: 23.2 Gigapixels/sec
Maximum board power: 160 Watts
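The fill-rate figures above follow directly from the clock and unit counts. A quick back-of-the-envelope check in Python (texel rate is core clock times texture units; pixel rate is core clock times ROPs; the result rounds to 40.6 GT/s, essentially the 40.7 figure NVIDIA quotes):

```python
# Sanity-check the published GTX 460 fill rates from the unit counts above.
core_clock_mhz = 725   # engine clock
texture_units = 56
rop_units = 32

# Texel fill rate: one bilinear-filtered texel per texture unit per clock.
texel_rate = core_clock_mhz * texture_units / 1000  # Gigatexels/sec
# Pixel fill rate: one pixel per ROP per clock (a theoretical ceiling).
pixel_rate = core_clock_mhz * rop_units / 1000      # Gigapixels/sec

print(f"{texel_rate:.1f} GT/s, {pixel_rate:.1f} GP/s")  # 40.6 GT/s, 23.2 GP/s
```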

Memory Specs:

Memory Clock: 1800 MHz - DDR
Memory Configurations: 1 GB or 768MB GDDR5
Memory Interface Width: 256-bit or 192-bit
Memory data rate: 3.6 Gbps
Memory Bandwidth: 115.2 or 86.4 GB/sec
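Those bandwidth numbers can be derived from the per-pin data rate and the bus width. GDDR5 transfers four bits per pin per command-clock cycle, so the 900 MHz clock yields the 3.6 Gbps data rate in the table; a short Python sketch ties the figures together:

```python
# Derive the memory bandwidth figures from the specs above.
mem_clock_mhz = 900                        # GDDR5 command clock
data_rate_gbps = mem_clock_mhz * 4 / 1000  # quad data rate -> 3.6 Gbps per pin

for bus_width_bits in (256, 192):          # 1GB and 768MB board variants
    bandwidth_gb_s = data_rate_gbps * bus_width_bits / 8
    print(f"{bus_width_bits}-bit bus: {bandwidth_gb_s:.1f} GB/sec")
```

This reproduces the 115.2 and 86.4 GB/sec figures for the two memory configurations.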

Display Support:

Maximum Digital Resolution: 2560x1600
Maximum VGA Resolution: 2048x1536
Standard Display Connectors: Two Dual Link DVI, Mini HDMI
Multi Monitor Capable
HDCP
HDMI 1.4a
Internal Audio Input for HDMI

Standard Graphics Card Dimensions:

Height: 4.376 inches (111 mm)
Length: 8.25 inches (210 mm)
Width: Dual-slot

Thermal and Power Specs:

Maximum GPU Temperature: 104 C
Maximum Graphics Card Power: 160 W
Minimum Recommended System Power: 450 W
Power Connectors: Two 6-pin PCI-E

Source: NVIDIA.com

MSI N460GTX Cyclone 1GD5/OC Features

Material in this section is based on data from MSI.

Above and beyond the features that come with every graphics card based on an NVIDIA GTX 460 GPU, there are several hardware and software features that MSI brings to the table with the N460GTX Cyclone. The feature set of the N460GTX 1GD5/OC is built primarily on three areas: thermal performance, hardware reliability, and overclocking capability. MSI's Cyclone Thermal Design was first introduced on the ATI HD 4870 and 4890 video cards; both chips are known for their high thermal loads. I won't say that they were the absolute hottest-running GPUs at the time, but they certainly raised the average. MSI claims to be using Military Class Components in this design. I take issue with this marketing term: true MIL-SPEC components are actually very hard to source, tend to be based on older manufacturing technologies, and I don't see any on this board. I'm all for using higher-quality components, but let's call them by their real names, please. The third area is overclocking performance. Here is where MSI's Afterburner software helps take the performance of this product to another level. Let's start by examining the special hardware features that MSI provides with the latest Cyclone series.

MSI_N460GTX_Cyclone_1GD5_OC_HSF_Closeup_01.jpg

The Cyclone heat-sink-fan assembly dominates the visual design of the N460GTX, and has the following features:

  • 90 mm PWM Fan, which is 15.7% quieter than the reference design
  • Large nickel-plated copper base for better dissipation
  • Two large 6mm heatpipes
  • Rounded corners on heatsink to prevent dust accumulation

The MSI Cyclone series features Military Class Components for its power supply components, including:

  • Hi-c Cap provides extremely stable GPU power supply.
  • Solid State Choke has No Buzz noise and higher current for better overclocking ability.
  • All Solid CAP for longer lifespan

What they don't specifically mention is the use of tantalum capacitors for localized decoupling at the GPU. Because a high capacitance can be achieved in a small volume, these tantalum caps can be mounted closer to the GPU, where they are more effective at reducing high-frequency ripple currents. They are also one of the most reliable types of capacitors, especially in hot environments. The back side of a video card PCB, right below the GPU, certainly qualifies as a hot environment. By combining solid aluminum electrolytic capacitors for bulk filtering at the power inputs with tantalum caps at the "point of use", the designers have made a smart move in optimizing the performance of the power supply subsystem. The Solid State Chokes are a common feature in the current PC component market, and anything that can be done to eliminate the awful noise that can emanate from these little hunks of iron and wire is a worthwhile effort.

The MSI Afterburner overclocking utility supplied with the N460GTX is version 1.6.1, with support for the latest GF104 GPU from NVIDIA. There are a number of distinct features available within the utility:

  • Benchmarking
  • Overclocking
  • Monitor
  • Profile
  • Advanced Fan Speed Controls
  • Independent or Synchronous Clocks for Multiple GPUs
  • Information Display: BIOS, Driver, GPU ID, etc.

The benchmarking tool is called "Kombustor", and is based on the very popular Furmark application. The overclocking capabilities of Afterburner are based on RivaTuner, another solid application that many enthusiasts and overclockers have used for several years now.

NVIDIA-GTX-460/MSI-Afterburner v1.6.1

The monitoring tool supplied as part of MSI Afterburner is one of my favorites. I especially like the fact that you can "detach" the monitor window, and then minimize the controller window separately. Each of the graphs has a scalable vertical axis, and hovering over the line graph with the mouse produces a top-to-bottom listing of values at that time slice. The overall interface of this product is one of the best available, at least for my purposes. I like to keep things simple when I can, so a lot of utility software doesn't do much for me; those tools just end up creating screen clutter once you get them all up and running. MSI Afterburner combines the features I need into one interface, plus offers capabilities that others can't match. Many different types of video cards have had their GPU core voltage maxed out with Afterburner, not just certain MSI models.

Other than the fan speed control, which is a MUST for benchmarking, I don't use the other features as much. I don't have any applications that require a specialized profile. The basic 2D and 3D settings work well for me. I am waiting for one of the vendors to come out with a form of "Turbo" controller, similar to what Intel, AMD and ASUS have introduced recently for CPUs. A quick burst of speed is sometimes needed while you are in 2D mode, and why not crank it to the max?

We've had a few opportunities to examine different video cards based on NVIDIA's new GF104 chip, but there's always more left to learn. So let's take a closer look at the MSI N460GTX Cyclone 1GD5/OC, and see how this interpretation of the GTX 460 design is different from the others.

Closer Look: MSI N460GTX Cyclone 1GD5/OC

The MSI N460GTX Cyclone 1GD5/OC is a prime example of how to make a good thing better. As I mentioned in the Features section, the N460GTX Cyclone distinguishes itself from the reference cards based on three major areas: improved cooling, Military Class components, and the included MSI Afterburner software. We're going to take a closer look at what MSI has accomplished here, and then dive into more detail in the next section.

MSI_N460GTX_Cyclone_1GD5_OC_Bottom_Tilt_01.jpg

The first thing you notice with this video card is the open construction of the heatsink and the oversized fan. It should be obvious to any computer enthusiast alive today that bigger fans push more air with less noise, all else being equal. The 90mm fan in the center of the radial heatsink assembly is 10mm larger than the fan on the reference design. That may not sound like a whole lot, but it has 30% more swept area, and the added area at the end of the fan blades is spinning faster than any part on the smaller fan. It's also a given that increasing clock speed and voltage for any GPU will increase heat generation within the chip, which has to be removed. So, if you're going to release a graphics card that is just begging to be overvolted and overclocked, it's a wise idea to bump up the cooling capacity, as well. The fan upgrade is an obvious enhancement, and it's an unusual design, so it's worth taking a good look around.
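The swept-area claim is easy to verify. Here's a minimal sketch; the 15 mm hub radius is my own estimate, not an MSI figure, and with that assumption the annulus swept by the blades grows by roughly the claimed 30%:

```python
import math

def swept_area_mm2(fan_diameter_mm, hub_radius_mm=15.0):
    """Annular area swept by the blades, between the hub and blade tip."""
    tip_radius = fan_diameter_mm / 2
    return math.pi * (tip_radius ** 2 - hub_radius_mm ** 2)

# 90 mm Cyclone fan vs. the 80 mm fan on the reference cooler
gain = swept_area_mm2(90) / swept_area_mm2(80) - 1
print(f"Swept-area increase: {gain:.0%}")  # ~31% with these assumptions
```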

MSI_N460GTX_Cyclone_1GD5_OC_HSF_Closeup_02.jpg

The basic concept behind the MSI Cyclone heatsink-fan assembly is not new; in fact it has a lot in common with the reference design produced by NVIDIA. The difference is almost exclusively in the size of things. The fins attached to the twin heatpipes are larger than the reference design, and there are more of them (94 total vs. 56), extending 18mm above the PCB of the card itself. They won't be a bother in most gaming cases, but check for clearance if you have side fans. Don't try this in a typical HTPC case. The published card dimensions only include the card, not the additional height of the cooler. The heatpipes are 6mm diameter, nickel plated, and there are only two of them, but we'll see later that it's enough to get the job done. The bottom plate is thicker and larger than the reference design, it's nickel plated, and the mounting standoffs are attached directly to this plate, providing a much more direct load path for the tension screws. We'll take a look at the underside later.

MSI_N460GTX_Cyclone_1GD5_OC_Power_Section_02.jpg

In addition to the cooling changes, the MSI N460GTX Cyclone incorporates what MSI calls Military Class Components in the power supply. If you have seen any hi-res photos of the NVIDIA reference design, you would be forgiven for thinking that this image of the MSI board shows the same layout. It does. That's not a bad thing, and all the items specifically called out in the marketing material: Hi-c Cap, Solid State Choke, and All Solid CAP, are all there. The Hi-c (tantalum) caps are mounted on the back side, where their small size and low profile are particularly useful.

MSI_N460GTX_Cyclone_1GD5_OC_Power_Choke_Closeup_01.jpg

It's also fair to note that the first two power supply chokes, mounted right at the PCI-E power connectors, are open frame units, not SSCs. To be honest, I never heard them squeal, growl or even chirp, so it's a non-issue as far as I am concerned. I've also not heard any reports that the reference cards sent out to dozens of review sites had a problem with these chokes singing along with the theme music to Crysis; so again, I think it's a non-issue.

The board is fed from two 6-pin PCI-E power connectors exiting the rear of the fairly short card. There should be no problems fitting this card, and its connectors, in any standard ATX style chassis. The 6-pin PCI-E connections are highly underrated at 75W each. Since the 8-pin connection is rated for 150W, I don't understand how 33% more pins give 100% more power. And BTW, the extra two pins are both for ground; there are still only three +12V pins, so it's really like 0% more pins providing 100% more power. The real capacity of a 6-pin connector is at least 100W, so there is at least 275 W available from the standard connector arrangement (including the X16 PCI Express connector on the motherboard), well above the card's rated 160W maximum requirement.
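To put numbers on that budget, here is the arithmetic as a small Python sketch. The 100 W "realistic" figure per 6-pin connector is the estimate argued above, not an official PCI-SIG rating:

```python
# Power available to the card from its connector arrangement.
PCIE_SLOT_W = 75           # x16 slot, official PCI-SIG limit
SIX_PIN_OFFICIAL_W = 75    # official 6-pin rating
SIX_PIN_REALISTIC_W = 100  # the more realistic capacity argued above
BOARD_MAX_W = 160          # rated maximum board power

official = PCIE_SLOT_W + 2 * SIX_PIN_OFFICIAL_W
realistic = PCIE_SLOT_W + 2 * SIX_PIN_REALISTIC_W

print(f"Official budget:  {official} W")   # 225 W
print(f"Realistic budget: {realistic} W")  # 275 W
print(f"Headroom over the {BOARD_MAX_W} W board maximum: {realistic - BOARD_MAX_W} W")
```

Even by the conservative official numbers, the 225 W budget clears the card's 160 W maximum with room to spare.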

MSI_N460GTX_Cyclone_1GD5_OC_Solder_Quality_Closeup_01.jpg

The PC board has excellent solder quality and precise component placement, as can be seen here. This is the area on the back side of the board, directly below the GPU, and it is one of the most crowded sections. On my LCD screen, this image is magnified 20X compared to what the naked eye sees. The small SMD resistors located side-by-side in this view are placed on 1mm centers. This is also one of the most critical sections of the PCB for build quality, as variations in stray capacitance here could impact the performance of the GPU, and certainly its overclocking ability.

This board was also much cleaner than several samples I've looked at recently. There was still some residue in a few places, but the comparison was like night and day. Once you start looking at macro photographs like this, there's no place for any manufacturing shortcuts to hide. All manufacturers are under intense pressure to minimize the environmental impact of their operations, and cleaning processes have historically produced some of the most toxic industrial waste streams. The combination of eco-friendly solvents, lead-free solder, and smaller SMD components has made cleaning of electronic assemblies much more difficult than it used to be.

MSI_N460GTX_Cyclone_1GD5_OC_Front_Bare_01.jpg

The layout on the front and back of the printed circuit board is identical to the NVIDIA reference card. It's a fairly simple design, and there are fewer components mounted on the back side than on a full-bore high end card. The only interesting things mounted on the rear of the board are several Hi-c Tantalum capacitors near the GPU, and the main PWM controller IC. The GPU cooler is mounted with four spring-loaded screws, without the aid of any type of back plate. There are no additional cooling considerations for any of the power supply components or the GDDR5 RAM chips. However, all of them benefit somewhat from the downdraft airflow of the 90mm cooling fan.

What I like about this card is how it does so much with so little. It's a simple card, without a lot of excess, whiz-bang components, yet it dares to compete with some pretty sophisticated Cypress and Fermi-based designs. It's relatively compact, runs cool and doesn't use as much power as its competitors. It's all down to the design of the GF104 GPU really, which is actually a relief. After the nuclear powered GF100-based cards came out, I was wondering if NVIDIA had completely lost the bubble. Now I know they haven't.

Let's take a more detailed look at some of the components on the board. I did a full tear-down, so we could see everything there is to see...

MSI N460GTX Cyclone Detailed Features

With high-end video cards, the cooling system is an integral part of the performance envelope for the card. "Make it run cooler, and you can make it run faster" has been the byword for achieving gaming-class performance with all the latest and greatest GPUs. The MSI N460GTX Cyclone uses a fairly standard GPU cooler concept that is similar to the reference design, but it contains some enhancements, some more visible than others.

MSI_N460GTX_Cyclone_1GD5_OC_HSF_Off_The_Cuff.jpg

Two flattened, 6mm diameter heatpipes are clamped between the fairly thick, nickel-plated copper mounting plate and a small aluminum heatsink, passing directly over the GPU die. Once they exit from there, they spread to the outer reaches of two semi-circular aluminum fin assemblies. Considering the power density of modern GPU devices, it makes sense to contact every square millimeter of the top surface with the heatsink if you can. The GF104 chip, like most NVIDIA GPU packages, has a very large heat spreader mounted to it, and the copper mounting plate covers it with room to spare.

The air all flows out in a radial fashion from the centrally mounted fan, and very little makes it out the vents on the I/O bracket at the rear of the case. Make sure your chassis has plenty of airflow, in the right direction, in order to move the heat generated by this card up and out of the case. This cooler design does not seem all that well suited to multi-card SLI applications. Of course, that doesn't prevent it from doing just that, in a very convincing manner. Sometimes you just have to engineer your way around unusual design choices; the most famous case in point being the Porsche 911, another air-cooled design which succeeds brilliantly.

MSI_N460GTX_Cyclone_1GD5_OC_TIM_Closeup.jpg

The GPU makes direct contact with a copper plate that is soldered to the heatpipes passing directly over the top of the GPU. The thermal interface material (TIM) was very evenly distributed by the factory, but was applied slightly thicker than necessary. One day, anxious manufacturing engineers are going to figure out that too little TIM is better than too much. For the rest of us who pay attention to these things, a thorough discussion of best practices for applying TIM is available here.

MSI_N460GTX_Cyclone_1GD5_OC_Tantalum_Cap_01.jpg

Here is a close-up of one of the Hi-c tantalum capacitors on the back side of the card. They are incredibly small for the amount of charge they hold, which allows them to be placed much closer to the active components they support. This greatly improves the filtering performance at high frequencies. If you remember when ATI upgraded the HD 4870 GPU to HD 4890 status, it was the addition of small filter caps right on the GPU package substrate that allowed the 4890 to reach such high clock rates. Tantalum caps were what made that design change possible.

MSI_N460GTX_Cyclone_1GD5_OC_ON_Semi_PWM_Control.jpg

The main power supply controller chip used on the MSI N460GTX Cyclone is an NCP5388 from ON Semiconductor. It is a 2/3/4-phase PWM control IC that does not support I2C software voltage control; however, the NVIDIA BIOS provides its own software control that interfaces with the controller at the hardware level. The VRM section uses a relatively simple and straightforward 3-phase design for powering the GPU. I've seen some custom GTX 460 designs recently that bump this number up to at least four phases, but the three provided by the reference design seem to work well. A couple of small ANPEC APW71xx integrated controllers provide 1.5 volts to the GDDR5 RAM, and 12V to the board's control circuits.

MSI_N460GTX_Cyclone_1GD5_OC_Power_MOSFETs_01.jpg

The MSI N460GTX Cyclone uses standard Power-SO8 packaging for the single N-channel MOSFET power transistors and drivers in the VRM section. This discrete implementation gives up the opportunity to save a little space, but it gives the designer a broader choice in component selection, compared to a DrMOS design. The 4935N devices installed here can source a whopping 93A at an ambient temperature of 25C, and are derated to 59A at 85C. We all know how hot video cards get, so it's good to have plenty of reserve current capacity in these power devices.
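For a feel of what that derating means at realistic operating temperatures, here is a sketch that interpolates between the two quoted datasheet points. The straight-line model between 25C and 85C is my simplification for illustration, not something taken from the datasheet:

```python
def mosfet_rating_amps(temp_c):
    """Linearly interpolate the 4935N current rating between the two
    quoted datasheet points: 93 A @ 25 C and 59 A @ 85 C (assumed linear)."""
    t1, i1 = 25, 93
    t2, i2 = 85, 59
    return i1 + (i2 - i1) * (temp_c - t1) / (t2 - t1)

# At a plausible 60 C VRM temperature under load:
print(f"{mosfet_rating_amps(60):.0f} A")  # ~73 A, still ample margin
```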

MSI_N460GTX_Cyclone_1GD5_OC_Samsung_GDDR5.jpg

The memory choice for the MSI N460GTX Cyclone 1GD5/OC is consistent with the NVIDIA reference designs. The basic GTX 460 specs only require 900 MHz chips for the memory, but most cards have been using these Samsung K4G10325FE-HC05 GDDR5 parts, which are designed for up to 1000 MHz. The MSI Afterburner software supplied with this Cyclone edition doesn't have the capability to increase memory voltage, so don't presume that you will get much more than the rated memory speed. The 1250 MHz versions of this chip have been mediocre overclockers on the Radeon platform; we'll have to see if the lower specified parts are a little more willing to exceed their ratings.

MSI_N460GTX_Cyclone_1GD5_OC_GDDR5_Specs.png

Now that we've had a good tour of the MSI N460GTX Cyclone, inside and out, it's time to put it to the test. Well, Benchmark is our first name, so don't worry. There are a wide variety of tests waiting for you in the next several sections. Let's start off with a complete description of the Video Card Testing Methodology.

Video Card Testing Methodology

With the widespread adoption of Windows 7 in the marketplace, and given the prolonged and extensive pre-release testing that occurred on a global scale, there are compelling reasons to switch all testing to this highly anticipated operating system. Overall performance levels of Windows 7 are favorable compared to Windows XP, and there is solid support for the 64-bit version, something enthusiasts have anxiously awaited for years. After several months of product testing with Win7-64, I can vouch for its stability and performance; I can't think of any reasons why I would want to switch back to XP.

Our site polls and statistics indicate that over 90% of our visitors use their PC for playing video games, and practically every one of you is using one of the screen resolutions mentioned below. Since all of the benchmarks we use for testing represent different game engine technologies and graphics rendering processes, this battery of tests will provide a diverse range of results for you to gauge performance on your own computer system. All of the benchmark applications are capable of utilizing DirectX 10, and that is how they were tested. Some of these benchmarks have been used widely for DirectX 9 testing in the XP environment, and it is critically important to differentiate between results obtained with different versions. Each game behaves differently in DX9 and DX10 formats. Crysis is an extreme example, with frame rates in DirectX 10 only about half of what was available in DirectX 9.

At the start of all tests, the previous display adapter driver is uninstalled and trace components are removed using Driver Cleaner Pro. We then restart the computer system to establish our display settings and define the monitor. Once the hardware is prepared, we begin our testing. According to the Steam Hardware Survey published at the time of Windows 7 launch, the most popular gaming resolution is 1280x1024 (17-19" standard LCD monitors) closely followed by 1024x768 (15-17" standard LCD). However, because these resolutions are considered 'low' by most standards, our benchmark performance tests concentrate on the up-and-coming higher-demand resolutions: 1680x1050 (22-24" widescreen LCD) and 1920x1200 (24-28" widescreen LCD monitors).

Each benchmark test program begins after a system restart, and the very first result for every test will be ignored since it often only caches the test. This process proved extremely important in several benchmarks, as the first run served to cache maps allowing subsequent tests to perform much better than the first. Each test is completed five times, the high and low results are discarded, and the average of the three remaining results is displayed in our article.
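The run-selection procedure above can be sketched as a small Python helper; the frame-rate samples in the example are made up for illustration:

```python
def reported_score(runs):
    """Apply the test methodology: drop the first (cache-warming) run,
    then discard the high and low of the five scored runs and average
    the remaining three."""
    scored = runs[1:]  # first run only serves to cache maps
    if len(scored) != 5:
        raise ValueError("methodology calls for five scored runs")
    kept = sorted(scored)[1:-1]  # discard the high and low results
    return sum(kept) / len(kept)

# Hypothetical frame rates: the first run is slow while maps are cached.
print(f"{reported_score([48.1, 60.2, 59.8, 61.0, 58.9, 60.5]):.1f} FPS")  # 60.2 FPS
```

Trimming the extremes this way keeps one anomalous run from skewing the reported average.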

A combination of synthetic and video game benchmark tests have been used in this article to illustrate relative performance among graphics solutions. Our benchmark frame rate results are not intended to represent real-world graphics performance, as this experience would change based on supporting hardware and the perception of individuals playing the video game.

Intel P55 Express Test System

  • Motherboard: ASUS P7P55D-E Pro (1002 BIOS)
  • System Memory: 2x 2GB GSKILL Ripjaws DDR3 1600MHz (7-8-7-24)
  • Processor: Intel Core i5-750 (OC @ 4.0 GHz)
  • CPU Cooler: Prolimatech Megahalems (Delta AFB1212SHE)
  • Video: MSI GeForce GTX 460 (N460GTX Cyclone 1GD5/OC-Forceware v258.96)
  • Drive 1: OCZ Vertex SSD, 32GB
  • Drive 2: Western Digital VelociRaptor, 150GB
  • Optical Drive: Sony NEC Optiarc AD-7190A-OB 20X IDE DVD Burner
  • Enclosure: CM STORM Sniper Gaming Case
  • PSU: Corsair CMPSU-750TX ATX12V V2.2 750Watt
  • Monitor: SOYO 24" Widescreen LCD Monitor (DYLM24E6) 1920x1200
  • Operating System: Windows 7 Ultimate Version 6.1 (Build 7600)

DirectX 10 Benchmark Applications

  • 3DMark Vantage v1.02 (Extreme Quality, 8x MSAA, 16x Anisotropic Filtering, 1:2 Scale)
  • Crysis v1.21 Benchmark (DX10, Very High Settings, 0x and 4x MSAA, Island Demo)
  • Devil May Cry 4 Benchmark Demo (DX10, Ultra Quality, 8x MSAA)
  • Far Cry 2 v1.02 (DX10, Very High Performance, Ultra-High Quality, 8x MSAA, Small Ranch Demo)
  • Resident Evil 5 Benchmark (DX10, 8x MSAA, Motion Blur ON, Quality Levels-High)

DirectX 11 Benchmark Applications

  • BattleField: Bad Company 2 (High Quality, HBAO, 8x MSAA, 16x AF, Single-Player Intro Scene)
  • Unigine Heaven Benchmark 2.0 (DX11, Normal Tessellation, 16x AF, 4x and 8x MSAA)
  • S.T.A.L.K.E.R. Call of Pripyat Benchmark (Ultra-Quality, Enhanced DX11, 4x MSAA, SSAO-HDAO, Ultra)
  • Aliens vs Predator (Very High Quality, 4x MSAA, 16x AF, SSAO, Tessellation, Advanced Shadows)

I decided to test this video card in two configurations: first with its modest 50MHz factory overclock (725MHz core), and then with my own 175MHz overclock (850MHz core). I had to raise the core voltage from its default setting of 0.987 V to 1.000 V in order to achieve stability at this speed, but that is a minute amount compared to what this chip is capable of. My goal was to show what performance levels could be reached without extreme measures. Anyone who buys this card should be able to achieve this result, not just the mad scientists that you read about on overclocking forums. The fact that MSI supplies the software to make it not just possible but easy, is icing on the cake. While I was at it, I bumped up the memory clock to 50 MHz above their rated speed, to 1050 MHz (4.2 Gbps data rate).
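For perspective, here is what those two configurations work out to in percentage terms, taking the 675 MHz reference core clock implied by the "50MHz factory overclock" above:

```python
# Clock uplift of the two tested configurations vs. the reference design.
REF_CORE_MHZ, REF_MEM_MHZ = 675, 900   # GTX 460 reference clocks

configs = {
    "factory OC": (725, 900),   # as shipped
    "manual OC":  (850, 1050),  # stable with the core bumped to 1.000 V
}

for name, (core_mhz, mem_mhz) in configs.items():
    core_gain = core_mhz / REF_CORE_MHZ - 1
    mem_gain = mem_mhz / REF_MEM_MHZ - 1
    print(f"{name}: core +{core_gain:.1%}, memory +{mem_gain:.1%}")
```

That is roughly a 26% core uplift over the reference clock with only a 13 mV voltage bump, which says a lot about the headroom in the GF104 silicon.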

I've listed the same card twice below, in two different configurations, so don't be confused. It's the same card, just two different clock rates.

Video Card Test Products

Graphics Card                                      Processor Cores  Core Clock (MHz)  Shader Clock (MHz)  Memory Clock (MHz)  Memory Amount  Memory Interface
XFX Radeon HD5750 (HD-575X-ZNFC)                               720               700                 N/A                1150  1.0GB GDDR5             128-bit
ATI Radeon HD5770 (Engineering Sample)                         800               850                 N/A                1200  1.0GB GDDR5             128-bit
XFX Radeon HD5830 (HD-583X-ZNFV)                              1120               800                 N/A                1000  1.0GB GDDR5             256-bit
ASUS GeForce GTX 260 (ENGTX260 MATRIX)                         216               576                1242                 999  896MB GDDR3             448-bit
MSI GeForce GTX460 (N460GTX Cyclone 1GD5/OC)                   336               725                1450                 900  1.0GB GDDR5             256-bit
XFX Radeon HD5850 (21162-00-50R)                              1440               725                 N/A                1000  1.0GB GDDR5             256-bit
MSI GeForce GTX460-OC (N460GTX Cyclone 1GD5/OC)                336               850                1700                1050  1.0GB GDDR5             256-bit
ASUS GeForce GTX 285 (MATRIX GTX285)                           240               662                1476                1242  1.0GB GDDR3             512-bit
XFX Radeon HD5870 (HD-587X-ZNFC)                              1600               850                 N/A                1200  1.0GB GDDR5             256-bit
ASUS Radeon HD5870-OC (EAH5870/2DIS/1GD5/V2)                  1600              1000                 N/A                1250  1.0GB GDDR5             256-bit

  • XFX Radeon HD5750 (HD-575X-ZNFC - Catalyst 8.732.0.0)
  • ATI Radeon HD5770 (Engineering Sample - Catalyst 8.732.0.0)
  • XFX Radeon HD5830 (HD-583X-ZNFV - Catalyst 8.732.0.0)
  • MSI GeForce GTX 460 (N460GTX Cyclone 1GD5/OC - Forceware v258.96)
  • XFX Radeon HD5850 (21162-00-50R - ATI Catalyst 8.732.0.0)
  • ASUS GeForce GTX 260 (ENGTX260 MATRIX - Forceware v197.45)
  • ASUS GeForce GTX 285 (GTX285 MATRIX - Forceware v197.45)
  • XFX Radeon HD5870 (HD-587X-ZNFC - Catalyst 8.732.0.0)
  • ASUS Radeon HD5870 (EAH5870/2DIS/1GD5/V2 - Catalyst 8.732.0.0)

3DMark Vantage Performance Tests

3DMark Vantage is a computer benchmark by Futuremark (formerly named MadOnion) that measures the DirectX 10 gaming performance of graphics cards. A 3DMark score is an overall measure of your system's 3D gaming capabilities, based on comprehensive real-time 3D graphics and processor tests. By comparing your score with those submitted by millions of other gamers, you can see how your gaming rig performs, making it easier to choose the most effective upgrades or find other ways to optimize your system.

There are two graphics tests in 3DMark Vantage: Jane Nash (Graphics Test 1) and New Calico (Graphics Test 2). The Jane Nash test scene represents a large indoor game scene with complex character rigs, physical GPU simulations, multiple dynamic lights, and complex surface lighting models. It uses several hierarchical rendering steps, including for water reflection and refraction, and physics simulation collision map rendering. The New Calico test scene represents a vast space scene with lots of moving but rigid objects and special content like a huge planet and a dense asteroid belt.

At Benchmark Reviews, we believe that synthetic benchmark tools are just as valuable as video games, but only so long as you're comparing apples to apples. Since the same test is applied in the same controlled method with each test run, 3DMark is a reliable tool for comparing graphics cards against one another.

1680x1050 is rapidly becoming the new 1280x1024. More and more widescreen monitors are being sold with new systems or as upgrades to existing ones. Even in tough economic times, the tide cannot be turned back; screen resolution and size will continue to creep up. Using this resolution as a starting point, the maximum settings were applied to 3DMark Vantage: 8x Anti-Aliasing, 16x Anisotropic Filtering, all quality levels at Extreme, and Post Processing Scale at 1:2.

3DMark Vantage GPU Test: Jane Nash

MSI_N460GTX_Cyclone_1GD5_OC_3DMark_Vantage_Jane_Nash_1680.jpg

Our first test shows the GTX460 placed right where NVIDIA wants it, just ahead of the ATI HD 5830. The MSI N460GTX Cyclone is overclocked from the factory, by about 7.5% (+50MHz), and I am also showing the results from a more substantial overclock of 25%, to 850 MHz on the core. When overclocked by 25%, the N460GTX outperforms the HD 5850 by over 8%. All the results are very even and linear, just the way synthetic benchmarks are supposed to be. The only hitch in the graph is caused by the older GT200-based cards, which I am including for reference in case you want to see whether it's worth upgrading. The synthetic results overwhelmingly say, Yes.

MSI_N460GTX_Cyclone_1GD5_OC_3DMark_Vantage_Jane_Nash_1920.jpg

At 1920x1200 native resolution, things look much the same as they did at the lower resolution; the absolute values are lower, but the ranking stays the same. One thing you may have noticed is how well the HD 5830 does on this test, compared to the HD 5770. That issue has been beaten to death, but I mention it to demonstrate that the N460GTX beats the HD 5830 even when it has everything going for it. The 5870 is the only card that can break 30 FPS at this resolution, and it's pretty obvious as the test plays out on the screen. All the lower choices seem choppy by comparison. Let's take a look at test #2, which has a lot more surfaces to render, with all those asteroids flying around the doomed planet New Calico.

3DMark Vantage GPU Test: New Calico

MSI_N460GTX_Cyclone_1GD5_OC_3DMark_Vantage_New_Calico_1680.jpg

In the medium resolution New Calico test, the slightly overclocked MSI N460GTX Cyclone does so well that it edges out an ATI HD 5850 with base clocks. That's an impressive feat for a card in this price range. The higher overclock results show that synthetic performance scales linearly with higher clock rates, just as you would expect. Even though the 850 MHz GTX 460 gets within 4 FPS of a highly overclocked HD 5870, it still takes a 1.0 GHz Cypress core to get over 30 FPS in this benchmark, which shows how tough this test really is.

MSI_N460GTX_Cyclone_1GD5_OC_3DMark_Vantage_New_Calico_1920.jpg

At a higher screen resolution of 1920x1200, the N460GTX Cyclone with its mild factory OC keeps a slim lead over the HD 5850, less than 1 FPS. Again, an increase in Core and Shader clocks for the GTX 460 provides a roughly 1:1 increase in frame rates. Even the fastest single GPU cards have trouble rendering this scene, with an average frame rate in the mid 20s. Soon this benchmark suite may be replaced with DX11-based tests, but in the fading days of DX10 it has been a very reliable benchmark for high-end video cards.
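
That roughly 1:1 scaling claim is easy to sanity-check. The sketch below uses a hypothetical helper with illustrative FPS numbers (not our measured values) to compute a scaling efficiency, where 1.0 means the frame rate grew exactly in proportion to the clock speed:

```python
def scaling_efficiency(fps_base: float, fps_oc: float,
                       clk_base_mhz: float, clk_oc_mhz: float) -> float:
    """Ratio of the frame-rate gain to the clock gain.
    1.0 means frame rate scaled exactly 1:1 with clock speed."""
    return (fps_oc / fps_base) / (clk_oc_mhz / clk_base_mhz)

# Illustrative numbers only: 725 -> 850 MHz core with a proportional FPS gain.
print(round(scaling_efficiency(24.0, 28.1, 725, 850), 2))  # 1.0
```

Values noticeably below 1.0 would indicate another bottleneck (CPU, memory bandwidth) capping the gains.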

We need to look at some actual gaming performance to verify these results, so let's take a look in the next section, at how these cards stack up in the standard bearer for gaming benchmarks, Crysis.

Crysis Performance Tests

Crysis uses a new graphics engine: the CryENGINE2, the successor to Far Cry's CryENGINE. CryENGINE2 is among the first engines to use the Direct3D 10 (DirectX 10) framework, but it can also run in DirectX 9 on Windows XP, Vista, and the new Windows 7. As we'll see, there are significant frame rate reductions when running Crysis in DX10. It's not an operating-system issue; DX9 works fine in Windows 7, but DX10 cuts the frame rates in half.

Roy Taylor, Vice President of Content Relations at NVIDIA, has spoken on the subject of the engine's complexity, stating that Crysis has over a million lines of code, 1GB of texture data, and 85,000 shaders. To get the most out of modern multicore processor architectures, CPU intensive subsystems of CryENGINE2 such as physics, networking and sound, have been re-written to support multi-threading.

Crysis offers an in-game benchmark tool, similar to World in Conflict. This short test places a high amount of stress on a graphics card, since there are so many landscape features rendered. For benchmarking purposes, Crysis can mean trouble, as it places a high demand on both GPU and CPU resources. Benchmark Reviews uses the Crysis Benchmark Tool by Mad Boris to test frame rates in batches, which allows the results of many tests to be averaged.
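
The exact averaging method used by the Crysis Benchmark Tool isn't documented here, but the general idea of batch testing can be sketched as follows (a minimal example, assuming the first pass is a warm-up run that gets discarded):

```python
from statistics import mean

def average_runs(fps_results):
    """Average per-run FPS from repeated benchmark passes.
    Assumes the first pass is a warm-up (caches filling, shaders
    compiling) and discards it before averaging the rest."""
    if len(fps_results) < 2:
        raise ValueError("need a warm-up run plus at least one timed run")
    return mean(fps_results[1:])

print(average_runs([30.0, 32.0, 34.0]))  # 33.0
```

Averaging several passes smooths out run-to-run variance, which is why batch tools give more repeatable numbers than a single pass.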

Low-resolution testing allows the graphics processor to plateau at its maximum output performance, and shifts demand onto the other system components. At the lower resolutions Crysis will reflect the GPU's top-end speed in the composite score, indicating full-throttle performance with little load. This makes for a less GPU-dependent test environment, but it is sometimes helpful in creating a baseline for measuring maximum output performance. At the 1280x1024 resolution used by 17" and 19" monitors, the CPU and memory have too much influence on the results to be used in a video card test. At the widescreen resolutions of 1680x1050 and 1920x1200, the performance differences between the video cards under test are mostly down to the cards themselves.

MSI_N460GTX_Cyclone_1GD5_OC_Crysis_NoAA_1680.jpg

With medium screen resolution and no MSAA dialed in, the MSI N460GTX Cyclone is on par with the HD 5830 and the same card with a 175 MHz overclock is about one FPS behind a stock HD 5850. Unlike many so-called TWIMTBP titles, Crysis has always run quite well on the ATI architecture. The GTX 460 is still competitive here at current pricing, so don't look at the performance in this title as anything like a failure. It's just not a slam dunk victory for NVIDIA this time.

Crysis is one of those few games that stress the CPU almost as much as the GPU. As we increase the load on the graphics card, with higher resolution and AA processing, the situation may change. Remember all the test results in this article are with maximum allowable image quality settings, plus all the performance numbers in Crysis took a major hit when Benchmark Reviews switched over to the DirectX 10 API for all our testing.

MSI_N460GTX_Cyclone_1GD5_OC_Crysis_NoAA_1920.jpg

At 1920x1200 resolution, the relative rankings stay the same; the raw numbers just go down. With the increased load on the GPU, the GTX 460 can't quite get above the 30 FPS mark, even with a 25% overclock. It takes more than any mid-range GPU can muster to play Crysis at high resolution, but that's no surprise.

MSI_N460GTX_Cyclone_1GD5_OC_Crysis_4xAA_1680.jpg

Now let's turn up the heat a bit, and add some Multi-Sample Anti-Aliasing. With 4x MSAA cranked in, the MSI N460GTX Cyclone 1GD5/OC loses about 5 FPS at 1680x1050 screen resolution. This time however, the GTX 460 with a major overclock manages to stay just above the 30 FPS line. Compared to the ATI offerings, the N460GTX with out-of-the-box settings hangs tight with the HD 5830, and when pushed to 850 MHz core, sticks with the HD 5850. Very competitive results.... None of the GT200 cards are a serious threat to the newer cards with their 40nm GPU technology.

MSI_N460GTX_Cyclone_1GD5_OC_Crysis_4xAA_1920.jpg

This is one of our toughest tests, at 1920x1200, maximum quality levels, and 4x AA. Only one GPU gets above 30 FPS in this test, and until recently it was the fastest single-GPU card on the planet, the Radeon HD 5870. In the middle ranges, the HD 5850 holds on to its spot as performance leader, but the GTX 460 is starting to look like it might be the value leader. We'll have to get a lot more results tabulated before we can make that judgment.

In our next section, Benchmark Reviews tests with Devil May Cry 4 Benchmark. Read on to see how a blended high-demand GPU test with low video frame buffer demand will impact our test products.

Devil May Cry 4 Test Results

Devil May Cry 4 was released for the PC platform in mid-2008 as the fourth installment in the Devil May Cry video game series. The PC version is a direct port of the console versions, which operate at the native 720P game resolution with no other platform restrictions. Devil May Cry 4 uses the refined MT Framework game engine, which has been used for many popular Capcom game titles over the past several years.

MT Framework is an exclusive seventh-generation game engine built for games developed for the PlayStation 3, Xbox 360, and PC. MT stands for "Multi-Thread", "Meta Tools" and "Multi-Target". Capcom originally intended to use an outside engine, but none matched their specific requirements for performance and flexibility. Games using the MT Framework are originally developed on the PC and then ported to the two console platforms. The PC version features a special bonus called Turbo Mode, giving the game a slightly faster speed, and implements a new difficulty called Legendary Dark Knight Mode. The PC version also has both DirectX 9 and DirectX 10 modes for the Windows XP, Vista, and Windows 7 operating systems.

It's always nice to be able to compare the results we receive here at Benchmark Reviews with the results you measure on your own computer system. Usually this isn't possible, since differences in settings and configurations make it nearly impossible to match one system to the next; plus, you have to own the game or benchmark tool we used. Devil May Cry 4 fixes this by offering a free benchmark tool for download. Because the DMC4 MT Framework game engine is rather low-demand for today's cutting-edge video cards, Benchmark Reviews uses the 1920x1200 resolution to test with 8x AA (the highest AA setting available to Radeon HD video cards) and 16x AF.

Devil May Cry 4 is not as demanding a benchmark as it used to be. Only scenes #2 and #4 are worth looking at from the standpoint of trying to separate the fastest video cards from the slower ones. Still, it represents a typical environment for many games that our readers still play on a regular basis, so it's good to see what works with it and what doesn't. Any of the tested cards will do a credible job in this application, and performance scales in a pretty linear fashion; you get what you pay for when running this game, at least in benchmarks. This is one time where you can generally use the maximum available anti-aliasing settings, so NVIDIA users should feel free to crank it up to 16x. The DX10 "penalty" is of no consequence here.

MSI_N460GTX_Cyclone_1GD5_OC_DMC4_DX10_Scene2.jpg

The N460GTX Cyclone puts one more nail in the coffin of the HD 5830 in this test, beating it by 19% with out-of-the-box settings. Overclock it like you mean it....and the GTX 460 plays tag with the HD 5850. This is definitely one of the tests where the HD 5830 stumbles a bit, providing only a small increase in performance over the HD 5770, while the HD 5850 runs off ahead of the group.

The GT200 cards from NVIDIA stage a small comeback in Devil May Cry 4, but are still showing their age. The ASUS EAH5870V2 takes full advantage of an 18% overclock, putting up 18% higher frame rates than the 5870 with stock clocks. I love the fact that this benchmark doesn't seem to get bottlenecked by the CPU, even at these crazy high frame rates.

MSI_N460GTX_Cyclone_1GD5_OC_DMC4_DX10_Scene4.jpg

In Scene #4, the N460GTX does the unthinkable; it runs clock-for-clock with the HD 5850. 725 MHz core clock on the GTX 460, and 725 MHz on the 5850 - same FPS. Who would have thought it..?!? Oh, and the GTX 285 is within 1 FPS as well. Well, I know which one is cheaper, anyway. Score another one for the GTX 460.

Our next benchmark of the series is for a very popular FPS game that rivals Crysis for world-class DirectX 10 graphics in a far away land.

Graphics Card

Processor
Cores

Core Clock
(MHz)

Shader Clock
(MHz)

Memory Clock
(MHz)

Memory
Amount

Memory
Interface

XFX Radeon HD5750 (HD-575X-ZNFC)

720

700

N/A

1150

1.0GB GDDR5

128-bit

ATI Radeon HD5770 (Engineering Sample)

800

850

N/A

1200

1.0GB GDDR5

128-bit

XFX Radeon HD5830 (HD-583X-ZNFV)

1120

800

N/A

1000

1.0GB GDDR5

256-bit

ASUS GeForce GTX 260 (ENGTX260 MATRIX)

216

576

1242

999

896MB GDDR3

448-bit

MSI GeForce GTX460 (N460GTX Cyclone 1GD5/OC)

336

725

1450

900

1.0GB GDDR5

256-bit

XFX Radeon HD5850 (21162-00-50R)

1440

725

N/A

1000

1.0GB GDDR5

256-bit

MSI GeForce GTX460-OC (N460GTX Cyclone 1GD5/OC)

336

850

1700

1050

1.0GB GDDR5

256-bit

ASUS GeForce GTX 285 (MATRIX GTX285)

240

662

1476

1242

1.0GB GDDR3

512-bit

XFX Radeon HD5870 (HD-587X-ZNFC)

1600

850

N/A

1200

1.0GB GDDR5

256-bit

ASUS Radeon HD5870-OC (EAH5870/2DIS/1GD5/V2)

1600

1000

N/A

1250

1.0GB GDDR5

256-bit

Far Cry 2 Benchmark Results

Ubisoft developed Far Cry 2 as a sequel to the original, but with a very different approach to game play and story line. Far Cry 2 features a vast world built on Ubisoft's new game engine called Dunia, meaning "world", "earth" or "living" in Farsi. The setting in Far Cry 2 takes place in a fictional Central African landscape, set to a modern-day timeline.

The Dunia engine was built specifically for Far Cry 2 by the Ubisoft Montreal development team. It delivers realistic semi-destructible environments, special effects such as dynamic fire propagation and storms, real-time night-and-day sunlight and moonlight cycles, a dynamic music system, and non-scripted enemy A.I. actions.

The Dunia game engine takes advantage of multi-core processors as well as multiple processors and supports DirectX 9 as well as DirectX-10. Only 2 or 3 percent of the original CryEngine code is re-used, according to Michiel Verheijdt, Senior Product Manager for Ubisoft Netherlands. Additionally, the engine is less hardware-demanding than CryEngine 2, the engine used in Crysis.

However, it should be noted that Crysis delivers greater character and object texture detail, as well as more destructible elements within the environment: for example, trees breaking into many smaller pieces and buildings breaking down to their component panels. Far Cry 2 also supports the amBX technology from Philips. With the proper hardware, this adds effects like vibrations, ambient colored lights, and fans that generate wind effects.

There is a benchmark tool in the PC version of Far Cry 2, which offers an excellent array of settings for performance testing. Benchmark Reviews used the maximum settings allowed for DirectX-10 tests, with the resolution set to 1920x1200. Performance settings were all set to 'Very High', Render Quality was set to 'Ultra High' overall quality, and 8x anti-aliasing was applied. HDR and Bloom are automatically enabled in DX10 mode.

MSI_N460GTX_Cyclone_1GD5_OC_Far_Cry_2_DX10_1680.jpg

Even in a game that typically favors the Green Machine, the performance of the latest NVIDIA GPU in this test is nothing short of amazing. The advantage is so overwhelming that it's not even worth running the numbers. Using the short 'Ranch Small' time demo (which yields the lowest FPS of the three tests available), many of the midrange products we've tested are capable of producing playable frame rates with the settings all turned up. Now it seems we have a midrange video card that absolutely dominates this game. We also saw a different effect when switching our testing from DX9 to DX10. Far Cry 2 seems to have been optimized, or at least written with a clear understanding of DirectX 10 requirements. This test also generally produces one of the lighter GPU loads (thermal plus power) among our benchmarks; the coding appears to be highly optimized.

MSI_N460GTX_Cyclone_1GD5_OC_Far_Cry_2_DX10_1920.jpg

The higher resolution testing doesn't change the rankings at all, and the N460GTX still produces stellar results. With these kinds of average frame rates, there is less chance of any stutter making it into game play. I was curious to see how well the GTX 460 did on minimum frame rates, given the outstanding performance on average, so here is what I learned:

MSI_N460GTX_Cyclone_1GD5_OC_Far_Cry_2_Framerate_Chart.png

The minimum frame rate never dropped below 50 FPS, and there was only one sharp dip in the chart, at the 13-second mark. It was probably caused by one of the many explosions; the first one takes place at close range and has more detail associated with it. I've been glancing at these charts every time I run this benchmark, even though we generally don't report the results, and this is definitely one of the smoother and flatter curves I've seen.
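
For readers who log their own benchmark runs, spotting dips like the one at the 13-second mark is easy to automate. A minimal sketch, assuming one FPS sample per second as in the benchmark's frame-rate chart:

```python
def find_dips(fps_samples, threshold):
    """Return (second, fps) pairs where the frame rate falls below
    the threshold; assumes one FPS sample per second."""
    return [(t, fps) for t, fps in enumerate(fps_samples) if fps < threshold]

# Toy data with a single dip, loosely mimicking the chart's shape.
print(find_dips([60, 58, 55, 41, 57], 50))  # [(3, 41)]
```

A flat curve with no entries below your chosen threshold is exactly the smooth game play you want.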

Our next benchmark of the series puts our collection of video cards against some fresh graphics in the recently released Resident Evil 5 benchmark.

Resident Evil 5 Test Results

PC gamers get the ultimate Resident Evil package in this new PC version with exclusive features including NVIDIA's new GeForce3D Vision technology (wireless 3D Vision glasses sold separately), new costumes and a new mercenary mode with more enemies on screen. Delivering an infinite level of detail, realism and control, Resident Evil 5 is certain to bring new fans to the series. Incredible changes to game play and the world of Resident Evil make it a must-have game for gamers across the globe.

Years after surviving the events in Raccoon City, Chris Redfield has been fighting the scourge of bio-organic weapons all over the world. Now a member of the Bio-terrorism Security Assessment Alliance (BSAA), Chris is sent to Africa to investigate a biological agent that is transforming the populace into aggressive and disturbing creatures. New cooperatively-focused game play revolutionizes the way that Resident Evil is played. Chris and Sheva must work together to survive new challenges and fight dangerous hordes of enemies.

From a gaming performance perspective, Resident Evil 5 delivers the "Next Generation of Fear": ground-breaking graphics that utilize an advanced version of Capcom's proprietary game engine, MT Framework, which powered the hit titles Devil May Cry 4, Lost Planet and Dead Rising. The game uses a wider variety of lighting to enhance the challenge; players must fear light as much as shadow, as lighting effects provide a new level of suspense in both harsh sunlight and extreme darkness. As usual, we maxed out the graphics settings on the benchmark version of this popular game, to put the hardware through its paces. Much like Devil May Cry 4, it's relatively easy to get good frame rates in this game, so take the opportunity to turn up all the knobs and maximize the visual experience. The Resident Evil 5 benchmark tool provides a graph of continuous frame rates and an average for each of four distinct scenes, which take place in different areas of the compound. In addition, it calculates an overall average for the four scenes. The averages for scenes #3 and #4 are what we report here, as they are the most challenging.

MSI_N460GTX_Cyclone_1GD5_OC_Resident_Evil_5_DX10_Scene3.jpg

Looking at the results for area #3, it's blatantly obvious that the NVIDIA cards do exceptionally well in this scene. The MSI N460GTX Cyclone achieves the same average frame rate as an HD 5870 at stock settings. Coincidentally, that's also the same performance level the GTX 285 puts up, so it's clear this scene is not a fair comparison of the two architectures. If you like this game, the GTX cards offer the best value in this instance. Plus, all that performance is available at a substantial discount with the new GTX 460. There is quite a bit of variation in the game play between the four areas, so let's see what happens in the next scene, area #4.

MSI_N460GTX_Cyclone_1GD5_OC_Resident_Evil_5_DX10_Scene4.jpg

In area #4, the 5870 convincingly reclaims its title, and the 5850 comes back to compete with the GTX 460; this looks more like we've seen on the other titles so far. I'm not sure what it is in area #3 that gives the GTX cards such an advantage, but it doesn't last throughout the entire benchmark. In both scenes, the 17% overclock on the GTX 460 returns a comparable gain in performance, consistent with the improvements we've seen in the other benchmarks. Let's keep looking, especially at some new titles that were developed specifically to showcase DX11, and see if there are any more surprises in store for the N460GTX Cyclone.

In our next section, Benchmark Reviews looks at one of the newest and most popular games, Battlefield: Bad Company 2. The game lacks a dedicated benchmarking tool, so we'll be using FRAPS to measure frame rates within portions of the game itself.

Battlefield: Bad Company 2 Test Results

The Battlefield franchise has been known to demand a lot from PC graphics hardware. DICE (Digital Illusions CE) paired its Frostbite-1.5 game engine with the Destruction-2.0 feature set for Battlefield: Bad Company 2. The game features destructible environments via Frostbite Destruction-2.0, and adds gravitational bullet-drop effects for projectiles fired over long distances. The Frostbite-1.5 engine delivers DirectX-10 primary graphics, with improved performance and softened dynamic shadows added for DirectX-11 users. At the time Battlefield: Bad Company 2 was published, DICE was also working on the Frostbite-2.0 game engine. This upcoming engine will include native support for DirectX-10.1 and DirectX-11, as well as parallelized processing support for 2-8 parallel threads, which will improve performance for users with an Intel Core-i7 processor.

In our benchmark tests of Battlefield: Bad Company 2, the first three minutes of action in the single-player raft night scene are captured with FRAPS. Relative to the online multiplayer action, these frame rate results are nearly identical to daytime maps with the same video settings.
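
FRAPS records raw per-frame render times, from which average and minimum FPS are derived. A minimal sketch of that arithmetic (the frametime values shown are illustrative, not our captured data):

```python
def fps_stats(frametimes_ms):
    """Average and minimum FPS from a list of per-frame render times
    in milliseconds (the raw data a frame-capture tool records).
    Minimum FPS corresponds to the single slowest frame."""
    avg_fps = 1000.0 * len(frametimes_ms) / sum(frametimes_ms)
    min_fps = 1000.0 / max(frametimes_ms)
    return avg_fps, min_fps

# Three illustrative frames: two at 20 ms, one slow frame at 40 ms.
print(fps_stats([20, 20, 40]))  # (37.5, 25.0)
```

Note that the average comes from total time, not from averaging instantaneous FPS values, so one slow frame drags the average down proportionally.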

MSI_N460GTX_Cyclone_1GD5_OC_Battlefield_Bad_Company2_1920.jpg

BF:BC2 shows that DirectX 10 need not be the death card for NVIDIA GeForce products; the Frostbite-1.5 game engine is partial to NVIDIA products over ATI, despite AMD's sponsorship of the game. In Battlefield: Bad Company 2, the MSI N460GTX Cyclone bests the ATI Radeon HD 5830 by 19%. Once overclocked to a readily achievable 850 MHz, the N460GTX improves its lead over the HD 5830 to 39%. BTW, I think it's a fair fight comparing the Cypress to the GF104; they both have roughly 2 billion transistors, use the exact same fabrication technology sourced from the same supplier, and many are running at 800-850 MHz core frequencies here. As always, in the fight between NVIDIA and ATI, it comes down to how each company has chosen to arrange those transistors; they have radically different computing architectures.

I know general purpose computing uses a very small fraction of the power contained in today's average PC, but it does seem that gaming applications are at least trying to push the envelope. Playing this game with the previous generation of graphics cards is a complete waste of time and effort. Some of that is attributable to advances in 3D Graphics APIs (application programming interfaces) like DirectX11, but at some level the game developers have to make decisions about how much detail to include in the scenes, and how realistically to render soft surfaces like skin and water. I know some of the improvements may look minimal or insignificant when perusing the promotional screenshots, but they all add up, in the final result. Bring it on, I say. I'll find some other use for that old HD 4850 graphics card.

In our next section, we are going to switch over to DirectX 11 testing and look at one of the newest DX11 benchmarks, straight from Russia and the studios of Unigine. Their latest benchmark is called "Heaven", and it has some very interesting and non-typical graphics. So, let's take a peek at what Heaven v2.0 looks like.

Unigine Heaven Benchmark

The Unigine "Heaven 2.0" benchmark is a free, publicly available tool that unleashes the DirectX-11 graphics capabilities of Windows 7 or updated Vista operating systems. It reveals the enchanting magic of floating islands with a tiny village hidden in the cloudy skies. An interactive mode puts the experience of exploring this intricate world within reach. Through its advanced renderer, Unigine is one of the first to showcase art assets with tessellation, bringing compelling visual finesse, utilizing the technology to the full extent, and exhibiting the possibilities of enriching 3D gaming.

The distinguishing feature of the Unigine Heaven benchmark is hardware tessellation, a scalable technology that automatically subdivides polygons into smaller and finer pieces, so that developers can give their games a more detailed look almost free of charge in terms of performance. Thanks to this procedure, the detail of the rendered image finally approaches the boundary of lifelike visual perception. The "Heaven" benchmark excels at providing the following key features:

  • Native support of OpenGL, DirectX 9, DirectX 10, and DirectX 11
  • Comprehensive use of tessellation technology
  • Advanced SSAO (screen-space ambient occlusion)
  • Volumetric cumulonimbus clouds generated by a physically accurate algorithm
  • Dynamic simulation of changing environment with high physical fidelity
  • Interactive experience with fly/walk-through modes
  • ATI Eyefinity support

MSI_N460GTX_Cyclone_1GD5_OC_Unigine_Heaven_DX11_4xAA.jpg

Starting off with a lighter load of 4x MSAA, we see a steady progression of performance as you move up the ATI 5xxx ladder. Stuck in the middle of the chart are two results that show a clear distinction between the two competing architectures. Even in the "normal" tessellation mode, this is a graphics test that really shows off the full effect of the new technology. The Fermi architecture has so much more computing power designated and available for tessellation that it's no surprise to see the card doing so well here. There is still some jerkiness to the display with all of the cards; now that I've seen the landscape go by a couple hundred times, I can spot the small stutters more easily. This test was run with 4x anti-aliasing; let's see how the cards stack up when we increase MSAA to the maximum level of 8x.

MSI_N460GTX_Cyclone_1GD5_OC_Unigine_Heaven_DX11_8xAA.jpg

Increasing the anti-aliasing just improved the already convincing performance of the GTX 460, relative to the Radeon HD 5xxx series. It's interesting to note that the HD 5850 doesn't stand out as much in this benchmark; everywhere else, it seems to jump a little higher than its Radeon neighbors. There's no denying that the Fermi chip, in its best interpretation yet, the GF104, is a killer when called upon for tessellation duty.

Let's take a look at one more DX11 benchmark, a decidedly less cheerful scenario in a post-apocalyptic "Zone", which is traversed by mercenary guides called Stalkers.


S.T.A.L.K.E.R.: Call of Pripyat Test Results

The events of S.T.A.L.K.E.R.: Call of Pripyat unfold shortly after the end of S.T.A.L.K.E.R.: Shadow of Chernobyl. Having discovered the open path to the Zone center, the government decides to mount a large-scale military operation, code-named "Fairway," aimed at taking the CNPP under control. According to the operation's plan, the first military group is to conduct aerial scouting of the territory to map out the detailed layout of the anomalous fields. Thereafter, making use of the maps, the main military forces are to be dispatched. Despite thorough preparations, the operation fails. Most of the advance helicopters crash. In order to collect information on the reasons behind the operation's failure, Ukraine's Security Service sends an agent into the Zone center.

S.T.A.L.K.E.R.: CoP is developed on the X-Ray game engine v1.6, and implements several ambient occlusion (AO) techniques, including one that AMD developed. AMD's AO technique is optimized to run efficiently on Direct3D 11 hardware, and it has been chosen by a number of games (e.g. BattleForge, HAWX, and the new Aliens vs. Predator) for the distinct effect it adds to the final rendered image. The technique is called HDAO, which stands for 'High Definition Ambient Occlusion', because it picks up occlusion from fine details in normal maps.

MSI_N460GTX_Cyclone_1GD5_OC_STALKER_DX11_HDAO_Ultra.jpg

Once we turn on DirectX 11 with S.T.A.L.K.E.R.: CoP, we're left with only the latest GPUs to test with. There's a fairly even step up from one card to the next, similar to what you see in a synthetic benchmark. In this case, the GTX 460 doesn't jump to the head of the class like it did with Unigine's Heaven 2.0, primarily because there isn't as much emphasis on tessellation here. The primary influence on the graphics seems to be the features introduced in DirectX 10 and 10.1, namely SSAO. In fact, "shadows" is the first thing that comes to mind when trying to think of words to describe the scenes in this gloomy adventure.

Our next benchmark of the series is not for the faint of heart. Lions and tigers - OK, fine. Guys with guns - I can deal with that. But those nasty little spiders......NOOOOOO! How did I get stuck in the middle of a deadly fight between Aliens and Predators anyway? Check out the results from our newest DirectX 11 benchmark in the next section.

Aliens Vs. Predator Test Results

Rebellion, SEGA and Twentieth Century FOX have released the Aliens vs. Predator DirectX 11 Benchmark to the public. As with many of the already released DirectX 11 benchmarks, the Aliens vs. Predator DirectX 11 benchmark leverages your DirectX 11 hardware to provide an immersive game play experience through the use of DirectX 11 Tessellation and DirectX 11 Advanced Shadow features.

MSI_N460GTX_Cyclone_1GD5_OC_460GTX_AvP_Bench.png

In Aliens vs. Predator, DirectX 11 Geometry Tessellation is applied in an effective manner to enhance and more accurately depict H.R. Giger's famous Alien design. Through the use of a variety of adaptive schemes, applying tessellation when and where it is necessary, the perfect blend of performance and visual fidelity is achieved with at most a 4% change in performance.

DirectX 11 hardware also allows for higher quality, smoother and more natural looking shadows as well. DirectX 11 Advanced Shadows allow for the rendering of high-quality shadows, with smoother, artifact-free penumbra regions, which otherwise could not be realized, again providing for a higher quality, more immersive gaming experience.

Benchmark Reviews is committed to pushing the PC graphics envelope, and whenever possible we configure benchmark software to its maximum settings for our tests. In the case of Aliens vs. Predator, all cards were tested with the following settings: Texture Quality-Very High, Shadow Quality-High, HW Tessellation & Advanced Shadow Sampling-ON, Multi Sample Anti-Aliasing-4x, Anisotropic Filtering-16x, Screen Space Ambient Occlusion (SSAO)-ON. You will see that this is a challenging benchmark, with all the settings turned up and a screen resolution of 1920 x 1200, as only the HD5870 cards achieved an average frame rate of 30FPS.

MSI_N460GTX_Cyclone_1GD5_OC_Aliens_vs_Predator_1920.jpg

This is truly a DirectX 11-only benchmark, so we're limited to looking at only the latest generation cards that I had available. This is clearly a tough benchmark, and it's very useful for testing the latest and greatest graphics hardware. The stock ATI HD 5870, with a core clock of 850 MHz, just barely reached 30 FPS as an average frame rate. Using anything less than the top hardware, some scenes had a jumpy quality to them. The overclocked N460GTX got the closest, in terms of smooth video quality, with an average frame rate of 27 FPS. Once again, the GTX 460 put the hurt on the HD 5830, piling up some real pressure on the existing cards in this price sector.

In our next section, we investigate the thermal performance of the MSI N460GTX Cyclone, and see how well MSI's Cyclone cooler works on the latest Fermi offering.

MSI N460GTX Cyclone Temperatures

It's hard to know exactly when the first video card got overclocked, and by whom. What we do know is that it's hard to imagine a computer enthusiast or gamer today who doesn't overclock their hardware. Of course, not every video card has the headroom. Some products run so hot that they can't suffer any higher temperatures than they generate straight from the factory. This is why we measure the operating temperature of the video card products we test.

To begin testing, I use GPU-Z to measure the temperature at idle as reported by the GPU. Next I use FurMark 1.8.2 to generate maximum thermal load and record GPU temperatures at high-power 3D mode. The ambient room temperature remained stable at a very high 29C throughout testing. I know this is much higher than the average American household, but we had a massive heat wave this summer and my testing is done in an upstairs room that doesn't get as much of the central A/C as I would like... Besides, I know some of you are not living in iceboxes and would be interested in how well the GTX 460 handles high ambient temps. I do have a ton of airflow into the video card section of my benchmarking case, with a 200mm side fan blowing directly inward, so that helps alleviate the high ambient temps.

The MSI N460GTX Cyclone 1GD5/OC video card recorded 33C in idle 2D mode, and increased to 60C after 20 minutes of stability testing in full 3D mode, at 1920x1200 resolution, and the maximum MSAA setting of 8X. With the fan set on Automatic, the speed rose from 40% at idle to 70% under full load. Before we talk about the temps under load, it's worth paying attention to the idle temperatures. I have never seen idle temps this close to ambient, but if you follow along into the next section on power consumption, I think you'll see the explanation.

| Load | Fan Speed | GPU Temperature |
|---|---|---|
| Idle | 40% - AUTO | 33C |
| FurMark | 70% - AUTO | 60C |

60C is an excellent result for temperature stress testing, especially with such a powerful GPU, stock fan settings, a high ambient of 29C, and fan speeds controlled by the card. I'm used to seeing video card manufacturers keep the fan speeds low and let GPU temps climb much higher than this. I applaud MSI for keeping the fan speeds up and the temps low with their stock automatic fan settings. I rarely do my benchmarking tests with fans set on Automatic, preferring to give the GPU or CPU the best shot at surviving the day intact. With an integrated temperature controller in play though, I want to show how the manufacturer has programmed the system. This is one of the few video cards where I like the manufacturer's settings, right out of the box.
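With the ambient at an unusually high 29C, the deltas over ambient are the fairer way to read these numbers. A quick sketch of that arithmetic, using only the figures measured above:

```python
# Temperature rise over ambient for the MSI N460GTX Cyclone test above.
AMBIENT_C = 29  # room temperature during testing

def delta_over_ambient(gpu_temp_c: int) -> int:
    """How far the GPU temperature sits above the room temperature."""
    return gpu_temp_c - AMBIENT_C

print(delta_over_ambient(33))  # idle: only 4C above ambient
print(delta_over_ambient(60))  # FurMark load: 31C above ambient
```

In a more typical 22C room, the same deltas would suggest roughly 26C idle and 53C load, which is why the raw 60C figure understates how good this cooler really is.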

Load temps never got higher than 57C when running continuous gaming benchmarks with automatic fan speeds, so the cooling system definitely does the job, and there is a lot of temperature headroom left for the GPU. The noise at 100% speed was much lower than some other products I've tested recently that had squirrel cage blowers. For me, this type of fan noise is less irritating than what a radial fan produces, but I still prefer a design that pushes all the heated air out the back of the case. For normal usage patterns including gaming, I'd leave the fan settings on Auto. For benchmarking, it's worth it to put up with a little more noise, and drive the fan at 100%.

FurMark is an OpenGL benchmark that heavily stresses and overheats the graphics card with fur rendering. The benchmark offers several options allowing the user to tweak the rendering: fullscreen / windowed mode, MSAA selection, window size, and duration. The benchmark also includes a GPU Burner mode (stability test). FurMark requires an OpenGL 2.0 compliant graphics card with a lot of GPU power!

MSI_N460GTX_Cyclone_1GD5_OC_Furmark_temp.jpg

FurMark does two things extremely well: it drives the thermal output of any graphics processor higher than any other application or video game, and it does so consistently every time. While FurMark is not a true benchmark tool for comparing different video cards, it still works well for comparing one product against itself using different drivers or clock speeds, or for testing the stability of a GPU. But in the end, it's a rather limited tool.

In our next section, we discuss electrical power consumption and learn how well (or poorly) each video card will impact your utility bill...

VGA Power Consumption

Life is not as affordable as it used to be, and items such as gasoline, natural gas, and electricity all top the list of resources which have exploded in price over the past few years. Add to this the limit of non-renewable resources compared to current demands, and you can see that the prices are only going to get worse. Planet Earth needs our help, and needs it badly. With forests becoming barren of vegetation and snow-capped poles quickly turning brown, the technology industry has a new attitude towards turning "green". I'll spare you the powerful marketing hype that gets sent from various manufacturers every day, and get right to the point: your computer hasn't been doing much to help save energy... at least up until now. Take a look at the idle clock rates NVIDIA programmed into the BIOS for this GPU. Yes, that's two digits for core and memory clocks, right out of the box; no special power-saving software utilities required.

MSI_N460GTX_Cyclone_1GD5_OC_460GTX_GPU-Z0.4.4_Sensor_Ta.png

To measure isolated video card power consumption, I used the Kill-A-Watt EZ (model P4460) power meter made by P3 International. A baseline test is taken without a video card installed inside our computer system, which is allowed to boot into Windows and rest idle at the login screen before power consumption is recorded. Once the baseline reading has been taken, the graphics card is installed and the system is again booted into Windows and left idle at the login screen. Our final loaded power consumption reading is taken with the video card running a stress test using FurMark. Below is a chart with the isolated video card power consumption (not system total) displayed in Watts for each specified test product:
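The isolated figures in the chart are simple subtractions from that no-card baseline reading. As a quick sketch, using the actual system readings from this N460GTX test (122 W baseline, 140 W at idle, 304 W under FurMark):

```python
# Isolated video card power, per the Kill-A-Watt method described above.
# These wattages are the system readings from this review's N460GTX test:
BASELINE_W = 122   # system idling at the login screen, no video card installed
IDLE_W     = 140   # same system idling with the card installed
LOADED_W   = 304   # system with the card running the FurMark stress test

def card_power(system_watts: int, baseline_watts: int = BASELINE_W) -> int:
    """Subtract the no-card baseline to isolate the card's own draw."""
    return system_watts - baseline_watts

print(card_power(IDLE_W))    # 18  (card alone at idle)
print(card_power(LOADED_W))  # 182 (card alone under full load)
```

Remember the footnote on the chart: readings are only accurate to within +/- 5 W, so treat small differences between cards as noise.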

Video Card Power Consumption by Benchmark Reviews

| VGA Product Description (sorted by combined total power) | Idle Power | Loaded Power |
|---|---|---|
| NVIDIA GeForce GTX 480 SLI Set | 82 W | 655 W |
| NVIDIA GeForce GTX 590 Reference Design | 53 W | 396 W |
| ATI Radeon HD 4870 X2 Reference Design | 100 W | 320 W |
| AMD Radeon HD 6990 Reference Design | 46 W | 350 W |
| NVIDIA GeForce GTX 295 Reference Design | 74 W | 302 W |
| ASUS GeForce GTX 480 Reference Design | 39 W | 315 W |
| ATI Radeon HD 5970 Reference Design | 48 W | 299 W |
| NVIDIA GeForce GTX 690 Reference Design | 25 W | 321 W |
| ATI Radeon HD 4850 CrossFireX Set | 123 W | 210 W |
| ATI Radeon HD 4890 Reference Design | 65 W | 268 W |
| AMD Radeon HD 7970 Reference Design | 21 W | 311 W |
| NVIDIA GeForce GTX 470 Reference Design | 42 W | 278 W |
| NVIDIA GeForce GTX 580 Reference Design | 31 W | 246 W |
| NVIDIA GeForce GTX 570 Reference Design | 31 W | 241 W |
| ATI Radeon HD 5870 Reference Design | 25 W | 240 W |
| ATI Radeon HD 6970 Reference Design | 24 W | 233 W |
| NVIDIA GeForce GTX 465 Reference Design | 36 W | 219 W |
| NVIDIA GeForce GTX 680 Reference Design | 14 W | 243 W |
| Sapphire Radeon HD 4850 X2 11139-00-40R | 73 W | 180 W |
| NVIDIA GeForce 9800 GX2 Reference Design | 85 W | 186 W |
| NVIDIA GeForce GTX 780 Reference Design | 10 W | 275 W |
| NVIDIA GeForce GTX 770 Reference Design | 9 W | 256 W |
| NVIDIA GeForce GTX 280 Reference Design | 35 W | 225 W |
| NVIDIA GeForce GTX 260 (216) Reference Design | 42 W | 203 W |
| ATI Radeon HD 4870 Reference Design | 58 W | 166 W |
| NVIDIA GeForce GTX 560 Ti Reference Design | 17 W | 199 W |
| NVIDIA GeForce GTX 460 Reference Design | 18 W | 167 W |
| AMD Radeon HD 6870 Reference Design | 20 W | 162 W |
| NVIDIA GeForce GTX 670 Reference Design | 14 W | 167 W |
| ATI Radeon HD 5850 Reference Design | 24 W | 157 W |
| NVIDIA GeForce GTX 650 Ti BOOST Reference Design | 8 W | 164 W |
| AMD Radeon HD 6850 Reference Design | 20 W | 139 W |
| NVIDIA GeForce 8800 GT Reference Design | 31 W | 133 W |
| ATI Radeon HD 4770 RV740 GDDR5 Reference Design | 37 W | 120 W |
| ATI Radeon HD 5770 Reference Design | 16 W | 122 W |
| NVIDIA GeForce GTS 450 Reference Design | 22 W | 115 W |
| NVIDIA GeForce GTX 650 Ti Reference Design | 12 W | 112 W |
| ATI Radeon HD 4670 Reference Design | 9 W | 70 W |

* Results are accurate to within +/- 5W.

The MSI N460GTX Cyclone 1GD5/OC pulled just 18 (140-122) watts at idle and 182 (304-122) watts when running full out, using the test method outlined above. So, there's good news for those who were frightened off by the GF100 power consumption. The GF104 is much more frugal, especially at idle, where the device driver runs the clocks WAY down, without any apparent ill effects. Built on 40nm technology, those two billion transistors could be pulling a lot more power and generating a lot more heat with older chip technology, exactly like the GT200 cards built with 55nm chips did. Next, I'll offer you some final thoughts, and my conclusions. On to the next page...

GTX 460 Final Thoughts

I wrote earlier this year that the first Fermi cards from NVIDIA were not really "competitors" for ATI, because they occupied different price and market segments than the existing series of Radeon HD 5xxx video cards. Well, all that's changed now with the introduction of the GF104 GPU. With 1.95 billion transistors and an estimated die size of 366 mm², it's in the same league as the ATI Cypress chip, introduced last September on the Radeon HD 5870. On second thought, maybe NVIDIA is in the National League and ATI is in the American League. They both play the same game, but by different rules, and once a year everyone gets together and pretends that they are all the same. Then it's football season, thank goodness.

If I allow myself to anthropomorphize these products, I thought it was a bit cruel for the GF104 to go gunning for the HD 5830, the crippled sister of the Radeon family. As fate would have it, she held on to the $200-$240 market with only a hope and a prayer by her side. There was no better point for NVIDIA to attack, with a product more clearly focused on gaming graphics, than this thinly populated market segment. Resistance was futile; there was no way the GTX 460 was going to lose this battle. That's because the GTX 460 is a wolf in sheep's clothing. To put it more plainly, and give away my conclusion to those who are reading this entire page, the GTX 460 is a 5850-class video card with a $230 price tag.

From a technology standpoint, the GTX 460 has a whole lot more in common with the Radeon HD 5850 than it does with the HD 5830. Let's compare. The HD 5850 disables one out of ten (10%) possible stream processing units; the HD 5830 disables three out of every ten (30%). The GTX 460 ships with one out of eight possible Streaming Multiprocessor blocks (12.5%) disabled. Match 'em up... looks like a 5850 to me. Now let's look at clock rates. The top clock rate that ATI specs out for the Cypress line is 850 MHz, and the HD 5850 ships with a 725 MHz stock clock. It's too early to guess what the highest clock will be on the GF104 chip, but Galaxy and Palit are already shipping cards with factory core clocks over 800 MHz. Almost every reviewer who bothered to overclock their GTX 460 sample got it easily up to the 850 range. The base clock for the GTX 460 is 675 MHz. Once again, the similarity to the HD 5850 is pretty plain; chop off one (presumably dead) processing cluster and downclock the core significantly, so it doesn't compete with the top model (or the lame-duck GTX 465, in this case...).
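Those disabled-unit fractions are easy to double-check. A quick sketch, using only the unit counts quoted in the paragraph above:

```python
# Percentage of processing units disabled on each salvage part compared above.
def disabled_pct(disabled: int, total: int) -> float:
    """Fraction of the chip's processing units fused off, as a percentage."""
    return 100.0 * disabled / total

print(disabled_pct(1, 10))  # HD 5850: 1 of 10 SIMD units off -> 10.0
print(disabled_pct(3, 10))  # HD 5830: 3 of 10 SIMD units off -> 30.0
print(disabled_pct(1, 8))   # GTX 460: 1 of 8 SM blocks off   -> 12.5
```

At 12.5%, the GTX 460's harvest ratio sits right next to the HD 5850's 10%, which is the whole point of the comparison.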

Forgive me for dabbling in a bit of fairy tale economics, but I can't help myself. First of all, I'm going to make a bold assumption that an HD 5830 chip costs exactly the same amount of money to produce as an HD 5870 or HD 5850. Same amount of silicon, same pin out, same package, same testing costs - all the production costs are equal. Next, I'll extend the same bold assumption and conclude that every GF104 chip costs almost exactly the same as the Cypress chips I just mentioned. Same number of transistors, same technology node, same supplier, same production lines, same die area, etc. The only difference is the R&D and SG&A costs that have to get amortized in to establish a fully burdened cost. (I wish I could add a survey button here: agree or disagree.) The pricing model on the other hand, has you paying for performance, which seems realistic and fair for the consumer. That's where NVIDIA chose their battleground.

MSI_N460GTX_Cyclone_1GD5_OC_GF104_Headshot.jpg

I've come to one inescapable conclusion: the GTX 460 is really comparable to an HD 5850 from a technology standpoint, and NVIDIA chose to sell it at a price point currently occupied by a lesser model, the HD 5830. Sounds like a good marketing plan to me, especially since I believe that every Cypress-based card and every GF104-based card share the same cost structure. Sure, you can add or subtract features, but the fundamental production costs are comparable, even if the performance is not. ATI has had a monopoly on DX11 hardware for what seems like ages, so you can't blame NVIDIA for throwing a spanner in the works and trying to disrupt the market. Finally, I can say, "Fermi = Competition". BTW, just like you, I can't wait to read the next chapter in this continuing battle saga.

MSI N460GTX Cyclone 1GD5/OC Conclusion

From a performance standpoint, it's impossible to argue with the numbers this card puts up, at its price point. As I hypothesized in my Final Thoughts, this is really a 5850-class card from a technology standpoint, and it performed like one. Overclocked up to what seems like its natural operating point, at 850 MHz on the core, it swept the field in its market segment. The cooling performance was first rate, including the noise required to achieve it, which was quite low. This is also one of the few cards where I would leave the fan settings on factory automatic; the default curve is aimed at performance users. The combination of a new low-power Fermi GPU, a well designed, oversized cooler, and a performance oriented fan profile kept operating temperatures quite low during both intensive gaming and brutal stress testing.

The appearance of the MSI N460GTX Cyclone video card is excellent in my opinion, and also somewhat unique. That's always a plus in my book; I hate the same-old, same-old. The larger fan works well even without a full shroud to direct the airflow. While not a subtle design, the Cyclone avoids the garish themes that often show up on products marketed at gamers.

MSI_N460GTX_Cyclone_1GD5_OC_Box_and_Card.jpg

The build quality of the MSI Cyclone card was excellent. Everything is assembled well, the overall impression of the card was solid, and the packaging was also first rate. I was also impressed by the manufacturing quality of the PC board, especially compared to some recent samples I've seen. The nickel plating on the cooler components doesn't help the thermal performance in the short term, but it eliminates corrosion, which would eventually take its toll in the long run. The GTX 460 is a relatively easy card to make because of its simplicity, and MSI chose to execute a simple design well. Good choice, IMHO. I also have to give very high marks to MSI for the quality of their monitoring and control software, MSI Afterburner. It is one of the leading utilities in the enthusiast community for good reason: it is full featured, has an unusually wide application base, can be customized for individual cards, has very strong factory support, and is reliable. It has very few real competitors.

The features of the N460GTX are fully comparable with the latest offerings from both camps: Microsoft DirectX 11 support, PhysX technology, 3D Vision and 3D Vision Surround readiness, CUDA technology, SLI, 32x anti-aliasing, PureVideo HD, and HDMI 1.4a support. We've been using some of these same, or competitive, technologies on a whole host of Radeon 5xxx cards since last September. Still, it's good to finally have rough parity in the features and functions arena.

As of August 2010, the price for the MSI N460GTX CYCLONE 1GD5/OC is $234.99 at my favorite PC component supplier, Newegg. This is a little bit higher than the lowest-priced GTX 460 boards; the lower-spec cards are going for just under $200. It's hard to find a bad deal on any of the GTX 460 cards; even if you are paying a premium for certain features, more memory, or a software bundle, the price-to-performance ratio is so good that there's not a lot of downside anywhere. For people who are interested in exploring the overclocking potential of the GF104 chip, the "extra" $15 or so that these Cyclone models go for is a good investment.

Let's face it, almost any GTX 460 card is going to get high marks at this stage of the game. NVIDIA has priced it very aggressively, and until ATI responds with some serious price cuts, or releases its next generation of video cards, this is the card to beat in the $200-$250 price range. It's pretty obvious from all the reporting that has been done already, that early production units of the GF104 have tons of overclocking headroom. I got +25% on the core clock with no trouble at all, and that seems to be a typical result if you read through the enthusiast forums. MSI has leveraged that capability by providing better cooling, and one of the best software overclocking utilities on the market. While it's not a full-on assault on the top performance crown, it's hard to argue with a product that's (better) X (better) X (better). Highly recommended.

Pros:

+ Attractive and effective cooling system
+ World-class monitoring and control software
+ Excellent overclocking headroom
+ Outstanding price/performance ratio
+ 1000 MHz Samsung GDDR5 overclocks better than 1250 MHz parts
+ Very low idle clocks = low power consumption
+ Low fan noise, even with performance fan profile
+ Memory ICs are cooled by airflow from fan
+ PCB manufacturing quality looks good
+ Driver updates with improved performance were ready at launch

Cons:

- Military Class Components = Mostly Marketing Speak
- All heat from the card is pushed into the case interior
- Mini HDMI connector offers no benefit over standard size

Ratings:

  • Performance: 9.50
  • Appearance: 9.25
  • Construction: 9.25
  • Functionality: 9.00
  • Value: 9.50

Final Score: 9.3 out of 10.

Excellence Achievement: Benchmark Reviews Golden Tachometer Award.

Questions? Comments? Benchmark Reviews really wants your feedback. We invite you to leave your remarks in our Discussion Forum.



Comments 

 
# Has anyone else noticed?Stefan 2010-08-10 22:54
The fins at the base of the heat-sink spiral in the opposite direction to the fan rotation?

Whoops.
Report Comment
 
 
# RE: Has anyone else noticed?Olin Coles 2010-08-11 06:56
That removes laminar air flow through air friction, and delivers better cooling performance.
Report Comment
 
 
# Good Heat TransferBruceBruce 2010-08-11 07:07
For good heat transfer, you basically want the air banging into the surface of the heatsink. Turbulence is good, impingement is even better.
Report Comment
 
 
# I stand corrected!Stefan 2010-08-12 23:35
Well, I guess I should have taken an engineering class :)

I consider myself now a little more educated from your remarks, thanks ;)
Report Comment
 
 
# :DFederico La Morgia 2010-08-11 02:12
same question as always: D this video card and RAM chips mounted?
Report Comment
 
 
# RE: :DServando Silva 2010-08-11 04:49
You really want to read the "Detailed Features" section before asking.
Report Comment
 
 
# RE: RE: :DFederico La Morgia 2010-08-11 06:06
the code :)
Olin know what I mean to ask
Report Comment
 
 
# Yeah, but...BruceBruce 2010-08-11 07:22
All the information is right there, beautiful pictures of the RAM chips and everything.... Just click on the "Detailed Features" page in the table above.
Report Comment
 
 
# RE: Yeah, but...Servando Silva 2010-08-11 08:29
Thanks Bruce... Unless Federico and Olin are talking different things, I think you did a great job about the ICs in the detailed section. Great review!
Report Comment
 
 
# RE: MSI N460GTX Cyclone 1GD5/OC Video Cardmascotzel 2010-08-11 02:16
Mini-HDMI is there on the board because of space constraints. You can't put anything else next to 2 DVI's on a single slot bracket.
Report Comment
 
 
# RE: MSI N460GTX Cyclone 1GD5/OC Video CardAdam 2010-08-11 08:10
Favourite benchmarks so far! Uses an almost identical system to me and includes the GTX260 (my current card), great stuff for working out if I want to upgrade yet or not.
Report Comment
 
 
# RE: MSI N460GTX Cyclone 1GD5/OC Video CardDavid Ramsey 2010-08-11 16:42
FWIW, the Porsche 911 has been water cooled for more than a decade, and the tighter temperature regulation afforded by water cooling led to major increases in power and efficiency (parts could be machined to tighter tolerances since the narrower temperature envelope limited expansion of the parts). Still, for the time, the air cooled 911 was pretty good.
Report Comment
 
 
# ImagineBruceBruce 2010-08-12 19:38
Just think what the engineers could have done with heatpipes...LOL

Seriously, you don't want to get into a discussion of air cooled vs. watercooled Porsches, do you? That's just begging for an invasion of the body-snatchers! Don't you think the comment section has suffered enough lately?
Report Comment
 
 
# MSI N460GTXWayne Manor 2010-08-14 00:29
Thanks for the review, very informative! I currently have two of these puppies in SLI. I'm curious how the Gainward 2GB version compares with the 1GB versions.
 
 
# Excellent reviewDr_b_ 2010-08-18 18:53
Excellent review. Glad you added the older 200 series in for comparison; there doesn't seem to be a reason to keep using the old 285 that I have now. Going to SLI two of these.
 
 
# It was a great card in its day...BruceBruce 2010-08-18 19:05
I'm sure you got lots of good use out of it, but time and progress march on. One of the things I like about the GTX 460 cards is the variety of implementations. The ASUS that was reviewed here is another example of a solid non-reference design (4-phase PWM, etc.).
 
 
# Question for a purchaseKaelin 2010-08-23 03:58
Thank you for your very interesting article. I must change my graphics card, but I'm hesitating between the MSI N460GTX Cyclone 1GD5/OC and the Asus ENGTX460 DirectCU TOP/2DI/1GD5... I don't know which one to choose =( Which one do you advise?

(Sorry for my English level)
 
 
# Tough Choice...Bruce Bruce 2010-08-23 07:44
They each have larger fans and bigger HSF assemblies that both improve cooling and reduce noise levels. ASUS rolled their own PC board design, and it looks like they increased the PWM from 3-phase to 4-phase. MSI stuck with the reference design. I can't tell where you are located, but you may want to consider the support services that are available in your location, as well as local warranty provisions.
 
 
# Asus or MSI...Kaelin 2010-08-23 10:03
Thank you for your reply.
I am French, so the support service is not a problem (usually).
To help me decide, I would like to know which graphics card has the more efficient cooling system and which one stands up best to a higher overclock. And finally, which one do you prefer ^^

Thanks in advance
 
 
# RE: Asus or MSI...sportwarrior 2010-08-24 16:56
I'm also interested in the questions above... it seems that the two cards are quite similar in performance capability, though, so ultimately I'm guessing it doesn't really matter which one I select (I live in SoCal, for what it's worth). I will say I like the look of the ASUS quite a bit more than the MSI, but Amazon is making me wait a ridiculously long time to get my card and I NEED to build my new rig very, very soon.
 
 
# RE: RE: Asus or MSI...Olin Coles 2010-08-24 17:00
Research the warranty support for each, and go from there.
 
 
# French Pricing?BruceBruce 2010-08-24 18:47
What are the prices for each, locally? I don't want to say that the two cards are the same, because they are not. But.....they both do an admirable job supporting the GF104 chip, in slightly different ways. I like MSI Afterburner better than ASUS Smart Doctor, so that's how I would go... If the ASUS card supported the ASUS iTracker2 software, it might be a different story.
 
 
# The choice is madeKaelin 2010-08-24 23:39
In France, the price difference is only about 1 euro for the Asus graphics card. Moreover, the Asus ENGTX460 DirectCU TOP/2DI/1GD5 is guaranteed for 3 years versus 2 for the MSI, which is why I chose the Asus.

Thank you for your help =)
 
 
# RE: The choice is madeWayne Manor 2010-08-25 00:58
I would have gone with the Asus version too if it had come out earlier, as it is $35 cheaper here than the MSI Cyclone, both with a 3-year warranty.
 
 
# RE: RE: The choice is madeWayne Manor 2010-08-25 01:00
Here in Australia that is :p
 
 
# Wacky PricesBruceBruce 2010-08-25 17:45
That's a major difference Wayne. This is exactly why I asked Kaelin about the pricing in France. You just never know.....
 
 
# RE: MSI N460GTX Cyclone 1GD5/OC Video Cardnax 2010-08-25 23:07
I like the ASUS look, and my motherboard is ASUS too, so I will go with ASUS.
 
 
# RE: MSI N460GTX Cyclone 1GD5/OC Video Carddon 2011-01-03 21:27
Agreed.

Oh, and I like your writing style.
 

Comments have been disabled by the administrator.
