Gigabyte GeForce GTX 480 SOC GV-N480SO-15I
Written by Bruce Normann
Wednesday, 22 December 2010
Gigabyte GeForce GTX 480 Super Over Clock Review

The GeForce GTX 480 got an incredible amount of criticism when it first came out. To be honest, I think a lot of people were just letting off steam at NVIDIA for taking so long! The three biggest concerns were: an architecture that wasn't optimized for gaming, power consumption, and its evil twin - heat. A short while after the release date, vendor cards started appearing that consumed slightly less power and weren't quite so toasty. The consensus was that BIOS tweaks were responsible for the more socially acceptable behavior, but I don't know that anyone has been able to confirm that. Now, at the twilight of the product life for the GTX 480, a few vendors have attempted to put those problems completely to rest. The Gigabyte GV-N480SO-15I GeForce GTX 480 Super Over Clock video card is one of the best examples of that effort, and it represents what the GTX 480 could have, would have, and should have been at its launch.
We've had to wait for the GF110 GPU in the GTX 580 to correct the architectural shortcomings of Fermi for gaming purposes, but at least in the meantime we had the GF104 in the GTX 460 to offer a preview of things to come. The brand new GTX 570 seems to have very similar performance to the GTX 480, so there's going to be some pricing confusion in the marketplace for the next several months as suppliers and consumers decide which one to favor. Given the binning/downgrading scheme that all CPU and GPU manufacturers use to minimize production losses, it doesn't make sense to sell only one model of any chip design. I wouldn't be surprised if production of the GF100 GPU were stopped completely. NVIDIA is still looking at a big gap in performance and price between the GTX 460 series and the GTX 570, though. AMD introduced a brand new "tweener" GPU with the HD 68xx series to fill in their product line, and I have to wonder if NVIDIA will follow suit. One alternative is to continue offering the earlier versions of Full-House Fermi at reduced prices. Until there is a demi-Fermi, if there is one in the offing, the GeForce GTX 470 and GTX 480 are the only products left in the gap between the new 5xx series NVIDIA cards and the venerable GTX 460. For the near future there are going to be some old-stock reference cards in the pipeline and the few non-reference designs that made it to production before the 5-series hit. There are a ton of benchmarks out there for the reference GTX 480 already, but let's dig a little deeper and take a complete look, inside and out, at the Gigabyte GV-N480SO-15I GeForce GTX 480 Super Over Clock Video Card. Then we'll run this highly overclocked version through Benchmark Reviews' full suite of DirectX 9, 10, and 11 tests.
Manufacturer: GIGA-BYTE TECHNOLOGY CO., LTD
Full Disclosure: The product sample used in this article has been provided by Gigabyte.

Closer Look: Gigabyte GTX 480 SOC

The Gigabyte GV-N480SO-15I GeForce GTX 480 Super Over Clock video card couldn't be further from the reference card. The first thing you notice when you pick up this video card is that it doesn't seem as densely constructed as some other cards in this price and performance range that have a single radial blower. The Windforce 3X cooling assembly doesn't look or feel like a brute force approach to GPU cooling. Sure, the heatsink/fin assembly, the heatpipes, and the three fans add some weight, but it's modest. This is the turbocharged flat-six approach to performance, not the Detroit Iron way. Later, we'll see that it doesn't sound like brute force either, which is a very good thing.
The fan shroud is a very open design; there is very little chance that a major portion of the airflow is being directed to the rear outlet. After the air passes through the fin assembly, it hits the components mounted on the face of the card, at least on the two ends. The center section is taken up by a fairly large combination of copper vapor chamber, aluminum mounting plate, and heat pipes. The three fans are angled in different directions, and the fin assemblies under each fan are independent units that are laid out 90 degrees apart. All three fans are identical three-wire types with a tachometer output, and have an indicated maximum operating speed of 4075 RPM.
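For readers curious how that speed figure gets reported, here is a minimal sketch of the tach-to-RPM conversion. It assumes the common PC-fan convention of two tachometer pulses per revolution, which I have not verified for these particular fans.

```python
# Hypothetical sketch: converting a 3-wire fan's tachometer signal to RPM.
# Assumes the common convention of two tach pulses per revolution; check
# the fan datasheet, as this is not confirmed for these specific units.

PULSES_PER_REV = 2  # assumption

def tach_to_rpm(pulse_frequency_hz):
    """Convert tach pulse frequency (Hz) to fan speed (RPM)."""
    revs_per_second = pulse_frequency_hz / PULSES_PER_REV
    return revs_per_second * 60

# At the indicated 4075 RPM maximum, the tach line would toggle at about:
max_freq = 4075 / 60 * PULSES_PER_REV  # ~135.8 Hz
print(f"Expected tach frequency at full speed: {max_freq:.1f} Hz")
print(f"Round trip: {tach_to_rpm(max_freq):.0f} RPM")
```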
With high-end video cards, the cooling system is an integral part of the performance envelope for the product. Make it run cooler, and you can make it run faster, has always been the byword for achieving gaming-class GPU performance. The Gigabyte GTX 480 SOC has a very complex and sophisticated design for its cooling device, combining a vapor chamber and three 6mm diameter heatpipes. At first glance, it looks like four heatpipes exit from the central area, but one of them does a U-turn in there, so the total count is only three. None of the heatpipes passes directly over the center of the GPU die; instead, the design relies on the copper vapor chamber to transfer the heat over to the pipes. Once they exit, the pipe runs spread to the outer reaches of the aluminum fin assembly, two to the right and two to the left.
The layout of the various elements of the cooler design is a little easier to see in this straightforward view from the GPU's perspective once the cooler is removed. In addition to the contact patch for the GPU, the main mounting plate has multiple locations for thermal interfaces with the DRAM chips, which are easy to spot by the light blue foam tape. There was good, even compression on the soft thermal tape used to equalize the surface heights of the various devices, but a couple of the GDDR5 memory chips seem to have been left out in the cold. The driver transistors for the VRM Output section are clamped under a thin heatsink on the top side of the board, while the high-side and low-side outputs are mounted on the back side and are left to their own devices as far as cooling goes.
The copper mounting plate of this cooler is actually a vapor chamber instead of a solid block of metal. Rather than acting like a true heat-sink, where short heat pulses are absorbed quickly into a large thermal mass, this design relies on the high thermal conductivity of the vapor chamber to transfer heat quickly to the heat pipes, and then over to the aluminum fins where it gets transferred to the passing air. The fact that the heat pipes are nowhere near the center of the GPU package tells you that this is a completely different type of design than we're used to seeing.
The power for the Gigabyte GTX 480 SOC enters through one 6-pin and one 8-pin PCI Express connector at the far end of the board. There are several chokes and capacitors right at the input to do the bulk filtering of DC power coming in. This is the first chance the board has to knock down any nasties coming in from the power supply. Noise, voltage spikes, dropouts, and other transients should all be stopped right here. On the bottom corner of the board is the BIOS select button, which switches between the two independent BIOS chips installed on the card: the SOC (Super Overclock Series) Default BIOS and the LN2 BIOS. The LN2 BIOS prevents the Cold Bug phenomenon that causes the GPU to freeze up at extremely low temperatures.
The layout on the front and back of the printed circuit board is very typical for a high-end video card. The large number of power supply components mounted on the back side is a matter of necessity, both from a space standpoint and for thermal reasons. The lack of any heatsinks on the 24 primary power MOSFETs in the VRM section is a question mark, but the fact that there are so many of them means there is more surface area to dissipate heat. This non-reference card from Gigabyte is simpler in some ways than the basic NVIDIA design; maybe more straightforward is a better description. I doubt that it is as cheap to produce as the original GTX 480, with all the enhancements to the power supply and the trick cooling section. Every added dollar that went into the redesign went towards making the card faster and more conducive to further overclocking by the customer. It's hard for me to argue with that approach; it's the one I'd be most inclined to take, given a choice. In the next section, let's take a more detailed look at some of the new components on this decidedly non-reference board. I did a full tear-down, so we could see everything there is to see...

Gigabyte GV-N480SO-15I Detailed Features

Besides the upscale cooling system, which is yards ahead of the reference design, the Gigabyte GTX 480 Super Over Clock has an impressive selection of voltage regulator sections that supply cleaner power than the average video card gets. The input and controller sections are in their traditional place, at the far end of the card. The PWM-based voltage regulator section that supplies power to the GTX 480 GPU is based on the ADP4100 Programmable Multi-Phase Synchronous Buck Converter. It is a 6-phase design that was initially designed for Intel CPU voltage regulation. Voltage control is digitally programmable through an 8-bit VID for an output range of 0.375 V to 1.6 V. The 6 phases then get split up into 12 phases, and then 12 separate sets of driver, high-side, and low-side MOSFETs do the actual power switching. Purists will complain that this is not a "true" 12-phase design, but the truth is, the GPU doesn't care how the individual phases get generated; it is seeing twelve-phase power. There may not be as many discrete steps available for throttling back when the GPU only needs low power, but this card isn't really optimized for saving power; it's meant to run as fast as possible.
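For a sense of why splitting six controller phases into twelve power stages pays off, consider conduction loss, which scales with the square of per-phase current. Here is a minimal sketch; the power draw, core voltage, and MOSFET on-resistance figures are illustrative assumptions, not Gigabyte's specifications.

```python
# Illustrative sketch of why phase-doubling helps: conduction loss in each
# MOSFET stage scales as I^2 * R, so spreading the same total current over
# more phases cuts both per-stage heat and total conduction loss.
# All numbers below are assumptions for illustration, not measured specs.

gpu_power_w = 250.0   # assumed GPU power draw under load
vcore = 1.0           # assumed core voltage
r_ds_on = 0.004       # assumed MOSFET on-resistance, ohms

total_current = gpu_power_w / vcore  # 250 A total into the GPU

for phases in (6, 12):
    i_phase = total_current / phases
    loss_per_stage = i_phase**2 * r_ds_on   # simplified conduction loss
    total_loss = loss_per_stage * phases
    print(f"{phases:2d} phases: {i_phase:5.1f} A/phase, "
          f"{loss_per_stage:.1f} W per stage, {total_loss:.1f} W total")
```

With these placeholder numbers, doubling the phase count halves the total conduction loss and quarters the heat in each individual stage, which is one reason the bare MOSFETs can get away without dedicated heatsinks.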
The other two chips in the image above are an 8-bit CMOS microcontroller and a GPIO controller with 14 outputs and an SMBus™ interface. This is the computer that runs all of the low-level functions on the video card. As far as I know, no one has tried to overclock it yet... The driver transistors for the main VRM section are on the top side of the card, in the general vicinity of the biggest heat generator, so they get additional cooling via this thin aluminum heatsink. It's held on with a spring clip that's quite a bit stiffer than you might think at first glance. The totem-pole style output MOSFET pairs are on the back side, and the solid-core chokes and polymer capacitors are on the top side along with the drivers. There's nothing really new and innovative in this design so far, but it's very well executed with a lot of attention to detail, and has lots of headroom available for higher voltage and current levels.
The VRM section also features some heavy duty film capacitors on the output that are specially designed to eliminate the last vestiges of noise on the critical DC supply to the GPU and DRAM. These large, flat units from NEC-Tokin have the strange name "Proadlizer", which stands for prompt broadband stabilizer. They are rated for 2.5V, and each has a capacitance of 900 μF. That's about twice the capacitance of any of the twelve solid polymer units on the top side of the board, but it's not the total capacitance that matters; it's how well they work at the highest frequencies.
Every electronic component has some extra characteristics that are sometimes referred to as parasitic values. All resistors have parasitic capacitance and inductance, and all capacitors have extra resistance and inductance. The critical thing is that these deviations from perfect behavior are frequency dependent. For a capacitor, they tend to reduce its effectiveness at higher frequencies. Each of the various types of capacitor is optimized for a different frequency range, and film caps, like these Proadlizer units, are best at higher frequencies. You can see from the manufacturer's data that these are optimized for reducing noise in the 100 MHz range, but they are still effective out to 1.0 GHz.
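To make the frequency dependence concrete, here is a minimal sketch that models a capacitor as a series R-L-C network. The 900 μF value comes from the article above; the ESR and ESL figures are rough assumptions for illustration, not NEC-Tokin's published numbers.

```python
# Sketch of how parasitics shape a capacitor's impedance with frequency,
# modeling the part as a series R-L-C (ESR + ESL + ideal capacitance).
# Impedance falls as 1/(wC) at low frequency, bottoms out at the ESR
# floor, then rises as w*ESL takes over at high frequency.
import math

C = 900e-6    # farads (from the article)
ESR = 0.5e-3  # ohms, assumed
ESL = 10e-12  # henries, assumed

def impedance(freq_hz):
    """Magnitude of the series-RLC impedance at a given frequency."""
    w = 2 * math.pi * freq_hz
    reactance = w * ESL - 1 / (w * C)
    return math.sqrt(ESR**2 + reactance**2)

for f in (1e3, 1e6, 100e6, 1e9):
    print(f"{f/1e6:8.3f} MHz: |Z| = {impedance(f)*1e3:.3f} milliohms")
```

The exact numbers depend entirely on the assumed parasitics, but the shape of the curve is the point: a part with very low ESL, like these film caps, keeps its impedance low well into the hundreds of megahertz where ordinary polymer caps have already given up.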
A new trend I am seeing on the latest video cards is the switchover to SMD packaging for the crystal oscillators. Both NVIDIA and AMD have moved away from the traditional through-hole mounting style for this component. These were one of the last holdouts in the transition to full SMD production. Check out the miniature welding around the perimeter of this 27 MHz can; for size reference the entire package is 3mm x 6mm.
The PC board had excellent solder quality and precision component placement, as can be seen below. This is the area on the back side of the board, directly below the GPU, and it's one of the most crowded sections of any graphics card. On my LCD screen, this image is magnified 20X, compared to what the naked eye sees. The small SMD capacitors located side-by-side in this view are placed on 1mm centers. This is one of the most critical sections of the PCB for build quality, as variations in stray capacitance here could impact the performance of the GPU, and certainly its overclocking ability.
This Gigabyte board was well above average for cleanliness, compared to some of the samples I've looked at in the last year. There were some minor traces of residue on different sections of the board, but they were minimal compared to what I've been seeing lately. It's obvious that this card was not made in the same factory as the others I've tested recently. Once you start looking at macro photographs like this, there's no place for any manufacturing shortcuts to hide. All manufacturers are under intense pressure to minimize the environmental impact of their operations, and cleaning processes have historically produced some of the most prolific and toxic industrial waste streams. The combination of eco-friendly solvents, lead-free solder, and smaller SMD components have made cleaning of electronic assemblies much more difficult than it used to be.
The memory choice for the Gigabyte GTX 480 SOC is consistent with the NVIDIA reference designs. The basic GTX 480 reference specs only require 924 MHz chips for the memory, but most cards have been using these Samsung K4G10325FE-HC04 GDDR5 parts, which are designed for up to 1250 MHz. This Gigabyte Super Over Clock version only takes the memory up to 950 MHz, out of the box. It's nice that there is some headroom left on the memory chips, but the GPU has to be willing to play at these higher frequencies, too. Now that we've had the grand tour of the Gigabyte GTX 480 SOC, inside and out, it's time to put it to the test. Well, Benchmark is our first name, so don't worry. There are a wide variety of tests waiting for you in the next several sections, including some new entries. Let's start off with a complete description of the Video Card Testing Methodology.

NVIDIA GeForce GTX 480 GPU Features

The NVIDIA GeForce GTX 480 GPU contained in the Gigabyte GV-N480SO-15I GeForce GTX 480 Super Over Clock Video Card has all of the major technologies that the Radeon 5xxx cards have had since last September. AMD has added several new features in the interval though, and they are now available on the HD 6xxx series. The most important ones are: the new Morphological Anti-aliasing, the two DisplayPort 1.2 connections that support four monitors between them, 3rd generation UVD video acceleration, and AMD HD3D technology. The NVIDIA tech refresh with the GTX 580 and GTX 570 didn't really introduce any new customer-facing features. Full speed FP16 texture filtering and improved z-cull efficiency represent good performance enhancements, but I'm challenged to call them new "features". Just in case you are just starting your research for a new graphics card, here is the complete list of GPU features, as supplied by NVIDIA:

NVIDIA GeForce GTX 480 GPU Feature Summary:

3D Graphics
GPU Computing
NVIDIA Technology
GPU Interfaces
Advanced Display Functionality
Video
Digital Audio
Power Management Technology
NVIDIA GeForce GTX 480 GPU Detail Specifications

GPU Engine Specs:
Memory Specs:
Display Support:
Source: NVIDIA.com

Gigabyte GTX 480 Super Over Clock Features

The Gigabyte GV-N480SO-15I GeForce GTX 480 Super Overclock video card is much more than the sum of its GPU and memory components. Every support subsystem on the card has been enhanced compared to the reference design. Even the GPU and the GDDR5 memory have been selected for maximum performance. The GPUs especially have been hand-selected to make sure this card delivers maximum performance. Let's take a look at the unique features that Gigabyte has endowed this card with. In their own words, it features:

GIGABYTE Extreme Dual BIOS Technology

As opposed to the traditional concept of dual BIOS chips, where one is a main BIOS and the other is a backup BIOS, GIGABYTE Extreme Dual BIOS technology contains two different BIOS chips designed for separate purposes. A one-touch BIOS switching button shifts between the two BIOSes: the SOC (Super Overclock Series) Default BIOS and the LN2 BIOS. The SOC Default BIOS allows gamers to build their dream machines to conquer digital battles on the web. The LN2 BIOS is especially designed for overclockers to avoid potential cold bug issues at extremely low temperatures, equipping select GIGABYTE Super Overclock Series cards for extreme overclockability and record-breaking competition runs. The "cold bug" is an issue that occurs when a processor ceases functioning or overclocking at a low (usually sub-zero) temperature, preventing the use of extreme cooling methods such as dry ice or liquid nitrogen.

WINDFORCE - GIGABYTE's Latest Cooling Solution

GIGABYTE introduces WINDFORCE™, the latest cooling technology that differentiates the brand's graphics cards from the rest. GIGABYTE understands that the fans are crucial to a graphics card, so only the most efficient fans are selected for WINDFORCE™. To qualify for WINDFORCE™, the cooling solution must be equipped with anti-turbulence inclined fins, ultra quiet PWM fans, and pure copper heat pipes. Inclined fins redirect airflow and help to reduce excessive heat and turbulence. Depending on orientation, there are 3 types of inclined-fin designs: parallel-inclined, mirroring-inclined, and 3D-inclined. Each type contributes to generating excellent exterior airflow for efficient heat dissipation from the interior GPU core.

WINDFORCE 3X: Anti-turbulence Cooling

The GV-N480SO-15I is equipped with the WINDFORCE™ 3X cooling solution, which features 3 ultra quiet PWM fans. The special 3D-inclined triple-fan design effectively minimizes the flow of turbulence between the three fans. With a unique vapor chamber, WINDFORCE™ 3X is able to transfer heat from the hot spot to the cool spot as the working fluid evaporates and carries thermal energy to the surrounding air. By capillary action, the condensed liquid droplets circulate back to the base of the chamber. The cycles of evaporation and condensation enhance heat dissipation for greater cooling efficiency. Moreover, WINDFORCE™ 3X is equipped with three copper heat pipes to strengthen the speed of heat dissipation.

Pure Power Architecture

The GV-N480SO-15I has incredibly pure power, delivered on a standard ATX-length PCB. The 14-phase PWM design includes 12 phases for the GPU and 2 for the memory; the standard GTX 480 graphics card has 6+2 power phases, whereas GIGABYTE's GTX 480 SOC has 12+2. Not only does the GV-N480SO-15I have more power phases than the reference card, its 12 power-phase status LED indicators show the graphics card's current power status, helping users monitor power consumption.
This feature ensures a more stable system and better overclocking capability.

GPU Gauntlet Sorting

GIGABYTE's very own GPU Gauntlet™ Sorting process is the key to getting the best GPUs for the Super Overclock Series. A GPU must pass a sequence of tests to be selected, and this unique process offers three major advantages. First, full-throttle performance tuning improves overclocking ability by 10% to 30%. Second, optimized thermal efficiency lowers temperatures by 5% to 10%; fan speed is held at standard RPM in demanding 3D and physics-accelerated games to reduce noise for an absolutely quiet environment. Lastly, expert power switching lowers power switching loss by 10% to 30%. With GPU Gauntlet™ Sorting, a GIGABYTE SOC graphics card guarantees higher overclocking ability.

Ultra Durable VGA+

Thanks to Ultra Durable VGA+, which features 5+1 quality components, the Super Overclock Series supports top performance in graphics benchmarking. The 5 Proadlizers that GIGABYTE Ultra Durable VGA+ places on the GV-N480SO-15I provide purer power than a traditional power design with none.

Proadlizer

One of the quality components of Ultra Durable VGA+ is GIGABYTE's exclusive use of Proadlizers. On the GV-N480SO-15I, GIGABYTE uses five NEC Proadlizers, a type of film capacitor, to provide amazing current capacity and extremely low ESL/ESR for super overclocking ability. The added Proadlizers transmit more stable power even under heavy loading.

OC Guru - Graphics Card Smart Tuner

GIGABYTE proudly improves its graphics card smart tuner, OC Guru. This graphics card tweaking utility puts together the best combination of software/hardware overclocking. It now features 3 profiles: OC (overclocking mode), Gaming (gaming mode), and Green (power saving mode), catering to users' different needs. GIGABYTE's software lets gamers adjust memory, voltage, and fan speed settings. OC Guru's user-friendly design gives hardcore gamers liberty when overclocking.

Voltage Read Point

GIGABYTE's voltage read points give hardcore gamers the ultimate benching experience at overclocking competitions. Heavy users can change the voltage through software instead of manually adjusting any components on the PCB, and overclockers can monitor the actual GPU voltages at any time through the read points.

Video Card Testing Methodology

With the widespread adoption of Windows 7 in the marketplace, and given the prolonged and extensive pre-release testing that occurred on a global scale, there are compelling reasons to switch all testing to this highly anticipated operating system. Overall performance levels of Windows 7 are favorable compared to Windows XP, and there is solid support for the 64-bit version, something enthusiasts have anxiously awaited for years. After almost a year of product testing with Win7-64, I can vouch for its stability and performance; I can't think of any reasons why I would want to switch back to XP. Our site polls and statistics indicate that over 90% of our visitors use their PC for playing video games, and practically every one of you is using one of the screen resolutions mentioned below. Since all of the benchmarks we use for testing represent different game engine technology and graphic rendering processes, this battery of tests will provide a diverse range of results for you to gauge performance on your own computer system.
All of the benchmark applications are capable of utilizing DirectX 10 or DirectX 11, and that is how they were tested. Some of these benchmarks have been used widely for DirectX 9 testing in the XP environment, and it is critically important to differentiate between results obtained with different versions. Each game behaves differently in DX9 and DX10 formats. Crysis is an extreme example, with frame rates in DirectX 10 only about half what was available in DirectX 9. At the start of all tests, the previous display adapter driver is uninstalled and trace components are removed using Driver Cleaner Pro. We then restart the computer system to establish our display settings and define the monitor. Once the hardware is prepared, we begin our testing. According to the Steam Hardware Survey published at the time of the Windows 7 launch, the most popular gaming resolution is 1280x1024 (17-19" standard LCD monitors), closely followed by 1024x768 (15-17" standard LCD). However, because these resolutions are considered 'low' by most standards, our benchmark performance tests concentrate on the up-and-coming higher-demand resolutions: 1680x1050 (22-24" widescreen LCD) and 1920x1200 (24-28" widescreen LCD monitors). Each benchmark test program begins after a system restart, and the very first result for every test is discarded, since the first run often serves only to cache the test data. This process proved extremely important in several benchmarks, as the first run served to cache maps, allowing subsequent tests to perform much better than the first. Each test is completed five times, the high and low results are discarded, and the average of the three remaining results is displayed in our article. A combination of synthetic and video game benchmark tests have been used in this article to illustrate relative performance among graphics solutions. Our benchmark frame rate results are not intended to represent real-world graphics performance, as this experience would change based on supporting hardware and the perception of individuals playing the video game.
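Since the scoring rule above is easy to get wrong, here is a minimal sketch of it: five runs, discard the high and low, report the mean of the remaining three. The sample frame rates are made up purely to show the calculation.

```python
# Minimal sketch of the scoring rule described above: run each benchmark
# five times, discard the best and worst runs, and report the mean of the
# remaining three. Sample numbers are placeholders for illustration.

def trimmed_average(results):
    """Drop the high and low results, average the rest."""
    if len(results) < 3:
        raise ValueError("need at least three runs to trim high and low")
    trimmed = sorted(results)[1:-1]
    return sum(trimmed) / len(trimmed)

# Five hypothetical frame-rate results; the cache-cold first run would
# already have been discarded before these were recorded.
runs = [41.2, 41.6, 41.4, 43.9, 40.1]
print(f"Reported result: {trimmed_average(runs):.1f} FPS")
```

Trimming the extremes this way keeps one anomalously fast or slow run, like a background task firing mid-test, from skewing the published number.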
Intel P55 Express Test System
DirectX 9/10 Benchmark Applications
DirectX 11 Benchmark Applications
Video Card Test Products
3DMark Vantage Performance Tests

3DMark Vantage is a computer benchmark by Futuremark (formerly named Mad Onion) used to determine the DirectX 10 performance of graphics cards in 3D games. A 3DMark score is an overall measure of your system's 3D gaming capabilities, based on comprehensive real-time 3D graphics and processor tests. By comparing your score with those submitted by millions of other gamers you can see how your gaming rig performs, making it easier to choose the most effective upgrades or find other ways to optimize your system. There are two graphics tests in 3DMark Vantage: Jane Nash (Graphics Test 1) and New Calico (Graphics Test 2). The Jane Nash test scene represents a large indoor game scene with complex character rigs, physical GPU simulations, multiple dynamic lights, and complex surface lighting models. It uses several hierarchical rendering steps, including for water reflection and refraction, and physics simulation collision map rendering. The New Calico test scene represents a vast space scene with lots of moving but rigid objects and special content like a huge planet and a dense asteroid belt. At Benchmark Reviews, we believe that synthetic benchmark tools are just as valuable as video games, but only so long as you're comparing apples to apples. Since the same test is applied in the same controlled method with each test run, 3DMark is a reliable tool for comparing graphics cards against one another. 1680x1050 is rapidly becoming the new 1280x1024. More and more widescreen monitors are being sold with new systems or as upgrades to existing ones. Even in tough economic times, the tide cannot be turned back; screen resolution and size will continue to creep up. Using this resolution as a starting point, the maximum settings applied to 3DMark Vantage include 8x Anti-Aliasing, 16x Anisotropic Filtering, all quality levels at Extreme, and Post Processing Scale at 1:2.

3DMark Vantage GPU Test: Jane Nash
Our first test shows the Gigabyte GTX 480 SOC clearly besting all of the single-GPU cards we have in this test. The Radeon HD 5870 card is a factory overclocked PCS+ unit from PowerColor, so it's an Oranges-to-Oranges comparison. In DX10 synthetics, the 1600 stream processors of the top Cypress GPU model had ruled the roost for a while. All of the dual-GPU pairs beat the fastest single-GPU cards, by a big margin. The GTX 460 SLI combo really stands out here, and you'd better get used to it... This is a matched pair of MSI Cyclone cards, and they perform very well, even with the mild factory OC of 725 MHz on the core. This is where you can see the advantage the GF104 GPU has over the first Fermi chips, an advantage which has now been ported over to the GF110 GPU in the new 5xx series from NVIDIA.
At 1920x1200 native resolution, things are much the same as at the lower screen size; the absolute values are lower, but the ranking stays the same. BTW, the order of the bars, from left to right, is based on the current cost of the products. Only the sub-$300 cards seemed choppy at times; the rest managed to break free from the 30 FPS visual barrier. There is quite a price range represented here, from a low of $210 on the left to a high of $580 on the right. Oddly enough, if you take the e-tail prices and divide them by ten, you get a rough estimate of the frame rate in this benchmark. Unfortunately, the odd man out in that calculation is the GTX 480... Let's take a look at test #2 now, which has a lot more surfaces to render, with all those asteroids flying around the doomed planet New Calico.

3DMark Vantage GPU Test: New Calico
In the medium resolution New Calico test, the MSI N460GTX Cyclone SLI set does so well that it edges out the AMD HD 6870 CrossFireX pair with standard clocks. Once again, the GTX 480 beats all comers that only have a single GPU to keep warm; it's just that the GTX 460 seems to get a boost from this benchmark. In this test, it takes a $400 graphics solution to get over 30 FPS in this 1680x1050 benchmark, which shows how tough this medium resolution test really is. With two cards running in tandem though, you are well over the hump.
At the higher screen resolution of 1920x1200, the Gigabyte GTX 480 Super Over Clock card, with its substantial factory OC, finally hits the wall and dips just below 30 FPS. The GTX 460 SLI set kicks butt one more time, and ties with the HD 6870 CrossFireX pair again. This benchmark suite may have recently been replaced with DX11-based tests, but in the fading days of DX10 it has been a very reliable and challenging benchmark for high-end video cards. We need to look at some actual gaming performance to verify these results, so let's take a look in the next section at how these cards stack up in the standard bearer for DX10 gaming benchmarks, Crysis.
Crysis Performance Tests

Crysis uses a new graphics engine: the CryENGINE2, which is the successor to Far Cry's CryENGINE. CryENGINE2 is among the first engines to use the Direct3D 10 (DirectX 10) framework, but can also run using DirectX 9, on Vista, Windows XP and the new Windows 7. As we'll see, there are significant frame rate reductions when running Crysis in DX10. It's not an operating system issue; DX9 works fine in Win7, but DX10 knocks the frame rates in half. Roy Taylor, Vice President of Content Relations at NVIDIA, has spoken on the subject of the engine's complexity, stating that Crysis has over a million lines of code, 1GB of texture data, and 85,000 shaders. To get the most out of modern multicore processor architectures, CPU intensive subsystems of CryENGINE2 such as physics, networking and sound have been re-written to support multi-threading. Crysis offers an in-game benchmark tool, which is similar to World in Conflict. This short test does place some high amounts of stress on a graphics card, since there are so many landscape features rendered. For benchmarking purposes, Crysis can mean trouble as it places a high demand on both GPU and CPU resources. Benchmark Reviews uses the Crysis Benchmark Tool by Mad Boris to test frame rates in batches, which allows the results of many tests to be averaged. Low-resolution testing allows the graphics processor to plateau its maximum output performance, and shifts demand onto the other system components. At the lower resolutions Crysis will reflect the GPU's top-end speed in the composite score, indicating full-throttle performance with little load. This makes for a less GPU-dependent test environment, but it is sometimes helpful in creating a baseline for measuring maximum output performance. At the 1280x1024 resolution used by 17" and 19" monitors, the CPU and memory have too much influence on the results to be used in a video card test. At the widescreen resolutions of 1680x1050 and 1920x1200, the performance differences between video cards under test are mostly down to the cards themselves, but there is still some influence by the rest of the system components.
With medium screen resolution and no MSAA dialed in, Crysis shows a completely different picture than 3DMark. Unlike many so-called TWIMTBP titles, Crysis has always run quite well on the ATI architecture, and the Radeon HD 5870 is able to vie for top honors in this benchmark. The GTX 460 and GTX 480 are not quite as competitive in this game, at current pricing. The GTX 460 SLI pair could possibly have done better with a massively overclocked version, which is always a potential threat with the little GF104 chip. Crysis is one of those few games that stress the CPU almost as much as the GPU. As we increase the load on the graphics card, with higher resolution and AA processing, the situation may change. Remember all the test results in this article are with maximum allowable image quality settings, plus all the performance numbers in Crysis took a major hit when Benchmark Reviews switched over to the DirectX 10 API for all our testing. None of the cards are struggling at these low settings, though. You may have noticed that the CrossFireX combinations are missing from these results, and that is because I couldn't get meaningful results with that arrangement. Try as I might, the results were all over the place, and I just didn't trust them. All the single-GPU results were very consistent, like they normally are with this test.
At 1920x1200 resolution, the relative rankings stay the same; the raw numbers just go down. Even with the increased load on the GPU, every card from the HD 6870 on up still gets over the 30 FPS hump convincingly. Any of these high-end GPUs can muster up the muscle to play Crysis at high resolution with all the bells and whistles turned on, much to everyone's relief. Can it play Crysis? Yes.
Now let's turn up the heat a bit on the ROP units, and add some Multi-Sample Anti-Aliasing. With 4x MSAA cranked in, the top cards lose about 5 FPS at 1680x1050 screen resolution but they manage to stay well above the 30 FPS line. In all these Crysis tests, the Gigabyte GTX 480 SOC has managed to do quite well. It even keeps up with the GTX 460 SLI pair, which has the advantage of the second generation Fermi GPU, and 672 CUDA cores between the two of them.
This is one of our toughest tests: 1920x1200, maximum quality levels, and 4x AA. Only one GPU drops below 30 FPS in this test, which is unbelievable when I think back to the first days of DirectX 10 testing with Crysis. In the middle ranges, the HD 6870 lags the performance leader a bit, and also pushes the GTX 460 out of the running. Even a massive overclock on the GTX 460 won't pull it even with the HD 6870 in this test. I ran the MSI GTX 460 HAWK at 950 MHz core clock last month, and it only got 26 FPS at these settings. On this, the toughest of the four benchmark configurations, the GTX 480 SOC finally edges out an overall win and shows its thoroughbred pedigree. Our next test is a relatively new one for Benchmark Reviews. It's a DirectX 10 game with all the stops pulled out. Just Cause 2 uses a brand new game engine called the Avalanche Engine 2.0, which enabled the developers to create games of epic scale, with great variation across genres and artistic styles, for the next generation of gaming experiences. Sounds like fun; let's take a look...
Just Cause 2 Performance Tests"Just Cause 2 sets a new benchmark in free-roaming games with one of the most fun and entertaining sandboxes ever created," said Lee Singleton, General Manager of Square Enix London Studios. "It's the largest free-roaming action game yet with over 400 square miles of Panaun paradise to explore, and its 'go anywhere, do anything' attitude is unparalleled in the genre." In his interview with IGN, Peter Johansson, the lead designer on Just Cause 2 said, "The Avalanche Engine 2.0 is no longer held back by having to be compatible with last generation hardware. There are improvements all over - higher resolution textures, more detailed characters and vehicles, a new animation system and so on. Moving seamlessly between these different environments, without any delay for loading, is quite a unique feeling."
Just Cause 2 is one of those rare instances where the real game play looks even better than the benchmark scenes. It's amazing to me how well the graphics engine copes with the demands of an open world style of play. One minute you are driving through the jungle, the next you're diving off a cliff, hooking yourself to a passing airplane, and parasailing onto the roof of a high-rise building. The ability of the Avalanche Engine 2.0 to respond seamlessly to these kinds of dramatic switches is quite impressive. It's not DX11 and there's no tessellation, but the scenery goes by so fast there's no chance to study it in much detail anyway. Although we didn't use the feature in our testing, in order to equalize the graphics environment between NVIDIA and ATI, the GPU water simulation is a standout visual feature that rivals DirectX 11 techniques for realism. There's a lot of water in the environment, which is based around an imaginary Southeast Asian island nation, and it always looks right. The simulation routines use the CUDA functions in the Fermi architecture to calculate all the water displacements, and those functions are obviously not available when using an ATI-based video card. The same goes for the Bokeh setting, which is an obscure Japanese term for out-of-focus rendering. Neither of these techniques uses PhysX, but they do use specific computing functions that are only supported by NVIDIA's proprietary CUDA architecture. There are three scenes available for the in-game benchmark, and I used the last one, "Concrete Jungle", because it was the toughest and it also produced the most consistent results. That combination made it an easy choice for the test environment. All Advanced Display Settings were set to their highest level, and Motion Blur was turned on as well.
The results for the Just Cause 2 benchmark look similar to the ones we saw for 3DMark, with one or two exceptions. While I'm getting used to the GTX 460 SLI pair popping up above its nearest neighbors, the HD 6870 CrossFireX set really shines in this test as well. They take the top spot, despite being $100 cheaper than a pair of HD 5870 cards. The Gigabyte GTX 480 SOC hangs a little closer to the 460s in SLI here, despite being at a severe disadvantage in the number of CUDA cores. On the whole, I'd call this a pretty well behaved benchmark, and the game's a blast, too. Let's take a look at one more older DirectX gaming benchmark. This time, it's a wonderful blend of modern graphics technology and classic crime scenes, called Mafia II.
Mafia II DX9+SSAO Benchmark Results

Mafia II is a single-player third-person action shooter developed by 2K Czech for 2K Games, and is the sequel to Mafia: The City of Lost Heaven released in 2002. Players assume the life of World War II veteran Vito Scaletta, the son of a small Sicilian family that immigrates to Empire Bay. Growing up in the slums of Empire Bay teaches Vito about crime, and he's forced to join the Army in lieu of jail time. After sustaining wounds in the war, Vito returns home and quickly finds trouble as he again partners with his childhood friend and accomplice Joe Barbaro. Vito and Joe combine their passion for fame and riches to take on the city, and work their way to the top in Mafia II. Mafia II is a DirectX 9 PC video game built on 2K Czech's proprietary Illusion game engine, which succeeds the LS3D game engine used in Mafia: The City of Lost Heaven. In our Mafia-II Video Game Performance article, Benchmark Reviews explored characters and gameplay while illustrating how well this game delivers APEX PhysX features on both AMD and NVIDIA products. Thanks to APEX PhysX extensions that can be processed by the system's CPU, Mafia II offers gamers equal access to high-detail physics regardless of video card manufacturer. Equal access is not the same thing as equal performance, though. With PhysX technology turned off, both AMD and NVIDIA are on a level playing field in this test. In contrast to many gaming scenes, where other-worldly characters and environments allow the designers to amp up the detail, Mafia II uses human beings wearing ordinary period-correct clothes and natural scenery. Just as high-end audio equipment is easiest to judge using that most familiar of sounds - the human voice - graphics hardware is really put to the test when rendering things that we have real experience with. The drape of a wool overcoat is a deceptively simple construct; easy to understand and implement, but very difficult to get perfect.
This test shows a more linear trend, in terms of performance vs. cost. Many of the prior tests had more of a "step" function, which is what you might expect when comparing single and dual-GPU configurations of the same cards. By arranging the cards in price order, from left to right in the chart, you expect the bars to rise evenly as you progress up the pricing ladder. The fact that they do so in such a linear manner in this test indicates to me that this game is equally suited to either AMD or NVIDIA solutions. Given the fact that Mafia II makes excellent use of PhysX and 3D as described in our NVIDIA APEX PhysX Efficiency: CPU vs GPU article, both areas where NVIDIA has an edge, some of you are probably howling at that statement. But, except for the typical bump in the chart for the GTX 460 SLI, the results do scale pretty well with price.
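One simple way to quantify "scales with price" is frames per dollar. The sketch below shows the calculation with placeholder prices and frame rates, since the chart data isn't reproduced here; in a perfectly linear market, every card would land near the same figure.

```python
# Minimal sketch of the price-scaling idea discussed above: compute frames
# per dollar for each card. The entries below are placeholders to show the
# calculation, not actual chart values from this review.

cards = {
    "Card A (single GPU)": {"price": 210.0, "fps": 42.0},
    "Card B (single GPU)": {"price": 350.0, "fps": 68.0},
    "Card C (dual GPU)":   {"price": 450.0, "fps": 88.0},
}

for name, data in cards.items():
    fps_per_dollar = data["fps"] / data["price"]
    print(f"{name}: {fps_per_dollar * 100:.1f} FPS per $100")
```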
At the higher screen resolution of 1920x1200, the NVIDIA cards start to lose some ground relative to the ATI clan. For a game clearly developed using NVIDIA hardware, it surprises me a bit to see the Radeon series doing so well. Of course, I DO miss the PhysX features, which are always turned off during comparison testing. Since Mafia II can't rely on tessellation for enhancing realism, it leans heavily on PhysX. If tessellation were in the mix, the new and improved tessellation engines in the HD 6870 and the GTX 460 would be pushing those numbers up. Here is a game where brute force, meaning the number of shader processors, pays off. Our next benchmark of the series is not for the faint of heart. Lions and tigers - OK, fine. Guys with guns - I can deal with that. But those nasty little spiders......NOOOOOO! How did I get stuck in the middle of a deadly fight between Aliens vs. Predator anyway? Check out the results from one of our toughest new DirectX 11 benchmarks in the next section.
Aliens vs. Predator Test Results

Rebellion, SEGA and Twentieth Century FOX have released the Aliens vs. Predator DirectX 11 Benchmark to the public. As with many of the already released DirectX 11 benchmarks, the Aliens vs. Predator DirectX 11 benchmark leverages your DirectX 11 hardware to provide an immersive game play experience through the use of DirectX 11 Tessellation and DirectX 11 Advanced Shadow features.
In Aliens vs. Predator, DirectX 11 Geometry Tessellation is applied in an effective manner to enhance and more accurately depict H.R. Giger's famous Alien design. Through the use of a variety of adaptive schemes, applying tessellation when and where it is necessary, the perfect blend of performance and visual fidelity is achieved with at most a 4% change in performance. DirectX 11 hardware also allows for higher quality, smoother and more natural looking shadows as well. DirectX 11 Advanced Shadows allow for the rendering of high-quality shadows, with smoother, artifact-free penumbra regions, which otherwise could not be realized, again providing for a higher quality, more immersive gaming experience. Benchmark Reviews is committed to pushing the PC graphics envelope, and whenever possible we configure benchmark software to its maximum settings for our tests. In the case of Aliens vs. Predator, all cards were tested with the following settings: Texture Quality-Very High, Shadow Quality-High, HW Tessellation & Advanced Shadow Sampling-ON, Multi Sample Anti-Aliasing-4x, Anisotropic Filtering-16x, Screen Space Ambient Occlusion (SSAO)-ON. You will see that this is a challenging benchmark: with all the settings turned up and a screen resolution of 1920x1200, it takes an HD 5870 card to achieve an average frame rate higher than 30 FPS.
Now we get into the full DirectX 11 only benchmarks, so we're looking at the full potential for graphics rendering that's available on only the latest generation of video cards. AvP is a tough benchmark, but it has been a fair one so far, and it's very useful for testing the newest graphics hardware. The relatively high frame rates you see above are a testament to the very high performance of the latest and greatest cards, especially when paired up in SLI or CrossFireX. The GTX 460 SLI pair works its CUDA magic again and provides class-leading performance. The GTX 480 SOC turns in a good performance at 41.4 FPS, 22% higher than the ATI HD 5870, but it's no match for the multi-GPU choices. On this test, when using anything less than the top hardware, some scenes have a jumpy quality to them. This was only evident on the MSI N460GTX Cyclone and the MSI R6870, which picked up the two lowest slots, even with their enhanced tessellation muscle. This game needs shaders more than tessellation, as the performance of the GTX 460 SLI proves, with 672 CUDA Cores on the job. In our next section, Benchmark Reviews looks at one of the newest and most popular games, Battlefield: Bad Company 2. The game lacks a dedicated benchmarking tool, so we'll be using FRAPS to measure frame rates within portions of the game itself.
Battlefield: Bad Company 2 Test Results

The Battlefield franchise has been known to demand a lot from PC graphics hardware. DICE (Digital Illusions CE) has incorporated their Frostbite-1.5 game engine with the Destruction-2.0 feature set into Battlefield: Bad Company 2. The game features destructible environments using Frostbite Destruction-2.0, and adds gravitational bullet drop effects for projectiles shot from weapons at a long distance. The Frostbite-1.5 game engine used in Battlefield: Bad Company 2 consists of DirectX-10 primary graphics, with improved performance and softened dynamic shadows added for DirectX-11 users. At the time Battlefield: Bad Company 2 was published, DICE was also working on the Frostbite-2.0 game engine. This upcoming engine will include native support for DirectX-10.1 and DirectX-11, as well as parallelized processing support for 2-8 parallel threads. This will improve performance for users with an Intel Core-i7 processor. In our benchmark tests of Battlefield: Bad Company 2, the first three minutes of action in the single-player raft night scene are captured with FRAPS. Relative to the online multiplayer action, these frame rate results are nearly identical to daytime maps with the same video settings.
The Gigabyte GTX 480 SOC continues in what is becoming a familiar pattern, i.e. second place to all of the multi-GPU solutions. I know it sounds obvious, but the pricing of the various solutions is the key element to look for in the chart. The prices go up as you move from left to right, and it's easy to spot the standouts in the crowd. Both the GTX 460 SLI and the HD 6870 CrossFireX rise above their immediate neighbors, and provide the best bang for the buck. This is not as tough a benchmark as some others; the developers trod a fine line between juicing up the visuals and keeping the performance levels up. As much as people may Oooh and Aaah over some newer titles like Mafia II that have more exacting graphics, no one is complaining about the look and feel of BF:BC2. This benchmark does not utilize tessellation, so as in our DX10 testing, the strength of the newest GPUs in this area is not having an impact here. Don't worry; we'll see some results later that will show clear differences between the generations with some tessellation-heavy titles. The little-documented feature in the basic game setup, which allows the application to choose which DirectX API it uses during the session, is not a factor here. All of the tested cards are DX11-capable, and the game was running in DX11 mode for all the test results reported here. Even though this was primarily developed as a DX10 game, there are DX11 features incorporated in BF:BC2, like softened shadows. That one visual enhancement takes a small, but measurable, toll on frame rates. It doesn't have as big an impact as aggressive use of tessellation would, either from the visuals standpoint or the computing perspective. In the next section we use one of my favorite games, DiRT-2, to look at DX11 performance. Life isn't ALL about shooting aliens; sometimes you just need to get out of the city and drive...!
DiRT-2 Demo DX11 Benchmark Results

DiRT-2 features a roster of contemporary off-road events, taking players to diverse and challenging real-world environments. This World Tour has players competing in aggressive multi-car and intense solo races at extraordinary new locations. Everything from canyon racing and jungle trails to city stadium-based events. Span the globe as players unlock tours in stunning locations spread across the face of the world. USA, Japan, Malaysia, Baja Mexico, Croatia, London, and more venues await, as players climb to the pinnacle of modern competitive off-road racing. Multiple disciplines are featured, encompassing the very best that modern off-roading has to offer. Powered by the third generation of the EGO™ Engine's award-winning racing game technology, DiRT-2 benefits from tuned-up car-handling physics and new damaged engine effects. It showcases a spectacular new level of visual fidelity, with cars and tracks twice as detailed as those seen in GRID. The DiRT-2 garage houses a collection of officially licensed rally cars and off-road vehicles, specifically selected to deliver aggressive and fast paced racing. Covering seven vehicle classes, players are given the keys to powerful vehicles right away. In DiRT-2 the opening drive is the Group N Subaru, essentially making the ultimate car from the original game the starting point in the sequel, and the rides just get even more impressive as you rack up points.
The primary contribution that DirectX-11 makes to the DiRT-2 Demo benchmark is in the way water is displayed when a car is passing through it, and in the way cloth items are rendered. The water graphics are pretty obvious, and there are several places in the Moroccan race scene where cars are plowing through large and small puddles. Each one is unique, and they are all believable, especially when more than one car is in the scene. The cloth effects are not as obvious, except in the slower-moving menu screens; when there is a race on, there's precious little time to notice the realistic furls in a course-side flag. I should also note that the flags are much more noticeable in the actual game than in the demo, so they add a little more to the realism there that is absent from the benchmark. On a side note, I appreciate the fact that the demo's built-in benchmark has variable game play. I know it's lame, but I almost always watch it intently, just to see how well "my" car is being driven. So far, my finest telekinetic efforts have yielded a best finish of second place!
The race winner is the GTX 460 SLI combo again; imagine my shock and disbelief... For a title that was developed on AMD hardware, this is a somewhat surprising result, or it would be if I hadn't already seen the GTX 460 pick a fight with every high end card it encountered. The HD 6870 and HD 5870 results look pretty lackluster here, as this is the first time the GTX 480 SOC is able to beat out both of the CrossFireX solutions, each of which costs more than the single GTX 480 video card. Fortunately, every setup I tested with here did a great job rendering all of the various scenes. As I said above, this is one of my favorite games, and I can confirm that the results above are not far off from real gameplay. There has been some concern in the community about the veracity of the Demo Benchmark compared to the in-game one, and/or FRAPS results. Despite that, I like to use the Demo version because everyone has access to it, and can easily compare results obtained with their own hardware. In the next section we'll take a look at one of the newest benchmarking tools, H.A.W.X. 2. It's a high flying aerial adventure filled with lots of tessellated terrain, blown-up airplane bits, and masses of blue sky as a background.
H.A.W.X. 2 DX11 Benchmark Results

H.A.W.X. 2 has been optimized for DX11-enabled GPUs, with a number of enhancements that not only improve performance but also greatly improve the visual experience while taking to the skies.
The H.A.W.X. 2 benchmark test is not quite the tessellation monster that Unigine Heaven is. It is supposed to represent an actual game, after all. However, the developers have taken full advantage of the DirectX 11 technology to pump up the realism in this new title. The scenery on the ground in particular is very detailed and vividly portrayed, and there's a lot of it that goes by the window of the F-22 Raptor that is your point of view. The blue sky, not so much.... The enhanced ability of the newest GPU designs to handle tessellation is quite evident here. Both the Fermi and Barts GPUs win the battle against other comparable cards. The big surprise is the relative failure of the GTX 460 SLI pair to scale properly. I didn't have other SLI combinations on hand to see if this was strictly a GTX 460 issue, or whether all SLI configurations would be affected. This benchmark was launched by NVIDIA, and AMD had limited access during development, so I expect these numbers to shift a bit in the upcoming months as AMD updates their drivers and their hardware. The GTX 480 ends up on top of the pile in this test, because it doesn't do anything wrong. My guess is that once the GTX 460 SLI gets its act together, it will outclimb the GTX 480, though. Let's take a look at another DX11 benchmark, a fast-paced scenario on a Lost Planet called E.D.N. III. The dense vegetation in "Test A" is almost as challenging as it was in Crysis, and now we have tessellation and soft shadows thrown into the mix via DirectX 11.
Lost Planet 2 DX11 Benchmark Results

A decade has passed since the first game, and the face of E.D.N. III has changed dramatically. Terraforming efforts have been successful and the ice has begun to melt, giving way to lush tropical jungles and harsh unforgiving deserts. Players will enter this new environment and follow the exploits of their own customized snow pirate on their quest to seize control of the changing planet.
Test A:

The primary purpose of Test A is to give an indication of typical game play performance of the PC running Lost Planet 2 (i.e. if you can run Test A smoothly, the game will be playable in a similar condition). In this test, the character's motion is randomized to give a slightly different outcome each time.
In Test A of Lost Planet 2, we see a familiar pattern. That is, the newest games are implementing the latest software technology, and the newest graphics cards are optimized to handle exactly that. The HD 6870 does quite a bit better than the HD 5870, and the GTX 460 ran well in both single card mode and SLI. I saw one or two "slowdowns" during the test with the AMD cards that didn't occur with the NVIDIA products. They remained during the second and third runs of the benchmark, so it wasn't a "map loading" issue. They occurred in scene two, which is the most demanding no matter what card is installed; in fact it's usually tougher than Test B. For simplicity's sake, we are reporting the average result, as calculated by the benchmark application. It is not an average of the individual scores reported for the three scenes. I'm sure there will be further optimizations as time marches on, but right now we have an almost ideal gaming environment where the software and hardware are finally in sync. As long as you are happy with the story lines, characters, scoring systems, etc. of the new games, you can enjoy a level of realism and performance that was only hinted at with the first generation of DX11 software and hardware. I keep thinking of some of the early titles as "tweeners", as they were primarily developed using the DirectX 10 graphics API, and then some DX11 features were added right before the product was released. It was a nice glimpse into the technology, but the future is now.

Test B:

The primary purpose of Test B is to push the PC to its limits and to evaluate the maximum performance of the PC. It utilizes many functions of DirectX 11, resulting in a very performance-oriented, very demanding benchmark mode.
Test B shows a broadly similar ranking to Test A, but the GTX 480 SOC and the HD 5870 make a bit of a comeback. The sea monster (I can't quite say "River Monster" for some reason...) is a prime candidate for tessellation, and given the fact that it is in the foreground for most of the scene, the full level of detail is usually being displayed. The water effects also contribute to the graphics load in this test, making it just a little bit tougher than Test A. The single-card results are very close, but the dual-GPU tests reveal that the Radeon HD 6870 and the GeForce GTX 480 have the best measure of this benchmark. In our next section, we are going to continue our DirectX 11 testing with a look at our most demanding DX11 benchmark, straight from the depths of Moscow's underground rail system and the studios of 4A Games in Ukraine. Let's take a peek at what post-apocalyptic Moscow looks like in the year 2033.
METRO 2033 DX11 Benchmark Results

Metro 2033 is an action-oriented video game that combines survival horror and first-person shooter elements. The game is based on the novel Metro 2033 by Russian author Dmitry Glukhovsky. It was developed by 4A Games in Ukraine and released in March 2010 for Microsoft Windows. Metro 2033 uses the 4A game engine, developed by 4A Games. The 4A Engine supports DirectX 9, 10, and 11, along with NVIDIA PhysX and GeForce 3D Vision. The engine is multi-threaded: only PhysX has a dedicated thread, while everything else uses a task model without any pre-conditioning or pre/post-synchronizing, allowing tasks to be done in parallel. The 4A game engine can utilize a deferred shading pipeline and uses tessellation for greater detail; it also has HDR (complete with blue shift), real-time reflections, color correction, film grain and noise, and support for multi-core rendering. Metro 2033 features superior volumetric fog, double PhysX precision, object blur, sub-surface scattering for skin shaders, parallax mapping on all surfaces, and greater geometric detail with less aggressive LODs. Using PhysX, the engine supports features such as destructible environments, cloth and water simulations, and particles that can be fully affected by environmental factors.
NVIDIA has been diligently working to promote Metro 2033, and for good reason: it is the most demanding PC video game we've ever tested. When an overclocked GeForce GTX 480 struggles to produce 29 FPS, you know that only the strongest graphics processors will generate playable frame rates. All of my tests use the in-game benchmark that was added to the game as DLC earlier this year. Advanced Depth of Field and Tessellation effects are enabled, but the Advanced PhysX option is disabled to provide an equal load to both AMD and NVIDIA cards. All tests are run with 4x MSAA, which produces the higher load of the two anti-aliasing choices.
We're back to a step increase in performance when two cards are harnessed together. The GTX 460 pair gets the top spot again with an average frame rate of 31.6 FPS, and the GTX 480 SOC from Gigabyte holds down second place with a respectable 29.0 FPS. That may sound low, but METRO 2033 is a punishing graphics load, and that's a very good result for a single card. Once again, PhysX is disabled for all testing, although it only exacted about a 2 FPS penalty when it was enabled. IMHO, the minor hit in frame rates is fully justified by the additional realism that PhysX imparts to the gameplay. It adds a lot more than any amount of anti-aliasing, no matter what type...
At the higher screen resolution of 1920x1200, the step gets a little smaller as the multi-GPU scaling loses some steam. Once that happens, the GTX 480 SOC has an advantage, and it pulls down the top spot in this test by a margin of 7% over the second-place finisher, a pair of HD 5870 cards running at stock clocks of 850/1200. These are barely playable frame rates; it takes a bigger card than we have in the mix today to play this game with all the stops pulled out. In our next section, we are going to complete our DirectX 11 testing with a look at an unusual DX11 benchmark, straight from Mother Russia and the studios of Unigine. Their latest benchmark is called "Heaven", and it has some very interesting and non-typical graphics. So, let's take a peek at what Heaven v2.1 looks like.
Unigine Heaven 2.1 Benchmark Results

The Unigine "Heaven 2.1" benchmark is a free, publicly available tool that unleashes the graphics capabilities of DirectX 11 on Windows 7 or an updated Vista operating system. It reveals the enchanting magic of floating islands with a tiny village hidden in the cloudy skies. With the interactive mode, an immersive experience of exploring the intricate world is within reach. Through its advanced renderer, Unigine is one of the first to set a precedent in showcasing art assets with tessellation, bringing compelling visual finesse, utilizing the technology to the full extent, and exhibiting the possibilities of enriching 3D gaming. The distinguishing feature of the Unigine Heaven benchmark is hardware tessellation, a scalable technology aimed at automatic subdivision of polygons into smaller and finer pieces, so that developers can give their games a more detailed look almost free of charge in terms of performance. Thanks to this procedure, the elaboration of the rendered image finally approaches the boundary of veridical visual perception. Showcasing hardware tessellation this thoroughly is the "Heaven" benchmark's key strength.
Starting off with a lighter load of 4x MSAA, we see the Gigabyte GTX 480 SOC taking the single-GPU crown by a large margin. Even in the "normal" tessellation mode, this is a graphics test that really shows off the full impact of this DirectX 11 technology. The Fermi architecture has so much more computing power designated and available for tessellation that it's no surprise to see the card doing so well here. The same goes for the GTX 460 SLI combo, which ekes out a first-place finish in this test. The HD 6870, with its revamped tessellation engine, gets within striking distance of the HD 5870 in single-GPU mode and nips past it by less than one FPS when they're both in CrossFireX mode. I'm amazed that it does this with no more shaders than the Radeon HD 5830. There is no jerkiness to the display with any of the multi-GPU pairs at this resolution; having watched this landscape go by a couple hundred times now, I can spot the small stutters pretty easily. This test was run with 4x anti-aliasing; let's see how the cards stack up when we increase MSAA to the maximum level of 8x.
Increasing the anti-aliasing just improved the already convincing performance of the MSI N460GTX Cyclone, relative to all of the other cards. There's no denying that the Fermi chip, in its best interpretation yet, the GF104, is a killer when called upon for tessellation duty. The GTX 480 SOC also gains some ground in this test; it advances to second place in the rankings, compared to fourth place when we had the MSAA dialed down to 4x. Besides the most obvious trend of the GTX 460 SLI pair grabbing first place more often than not, the other thing I've noticed is a consistent improvement in performance by the GTX 480 when the going gets tough. In our next section, we investigate the thermal performance of the Gigabyte GV-N480SO-15I GeForce GTX 480 Super Overclock Video Card, and see how well this unusual non-reference cooler works on the GF100 Fermi GPU that is known affectionately as "big and hot".
Gigabyte GTX 480 SOC Temperatures

It's hard to know exactly when the first video card got overclocked, and by whom. What we do know is that it's hard to imagine a computer enthusiast or gamer today who doesn't overclock their hardware. Of course, not every video card has the headroom; some products run so hot that they can't suffer any higher temperatures than they generate straight from the factory. This is why we measure the operating temperature of the video card products we test. To begin testing, I use GPU-Z to measure the temperature at idle as reported by the GPU. Next, I use FurMark 1.8.2 to generate maximum thermal load and record GPU temperatures in high-power 3D mode. The ambient room temperature remained stable at 24C throughout testing. I have a ton of airflow into the video card section of my benchmarking case, with a 200mm side fan blowing directly inward, so that helps alleviate any high ambient temps. The Gigabyte GTX 480 SOC video card recorded 38C in idle mode, and increased to 77C after 30 minutes of stability testing in full 3D mode, at 1920x1200 resolution and the maximum MSAA setting of 8x. With the fan set on Automatic, the speed rose to 57% under full load. The idle fan speed is a relatively high 48%, which is fine because the three fans are pretty much inaudible at that setting. I then did a run with manual fan control at 100% fan speed, and was rewarded with a modest increase in fan noise and a reduced load temperature of 71C.
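For readers who want to automate this kind of logging, here is a minimal sketch of the polling loop I effectively run by hand with GPU-Z. It assumes a recent nvidia-smi build that supports the --query-gpu interface (a convenience that didn't exist when this card shipped), so treat it as illustrative rather than the exact method used above:

import subprocess, time

def read_gpu_stats():
    # Query core temperature (C) and fan speed (%) from the first GPU.
    out = subprocess.check_output(
        ["nvidia-smi",
         "--query-gpu=temperature.gpu,fan.speed",
         "--format=csv,noheader,nounits"],
        text=True)
    temp_c, fan_pct = (int(v) for v in out.strip().split(", "))
    return temp_c, fan_pct

peak = 0
for _ in range(1800):  # one sample per second for 30 minutes, as in the test above
    temp, fan = read_gpu_stats()
    peak = max(peak, temp)
    print(f"temp={temp}C fan={fan}% peak={peak}C")
    time.sleep(1)

Run the stress test (FurMark, or a looping game benchmark) in another window while the script records; the peak value is the number reported in the charts.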
77C may not seem like a stellar result for temperature stress testing, but in comparison to a stock GTX 480 it's darn good. The first batch of GTX 480 cards got up to 93C when Benchmark Reviews tested them, and a later model from ASUS hit 82C under load with an ambient of 20C. I've become used to seeing video card manufacturers keep fan speeds low, especially with radial blower wheels, but Gigabyte takes advantage of their Windforce 3X fan design to keep the idle speed up. Unless you've got the luxury, and the maniacal streak, needed to play video games 24 hours a day, your graphics card spends a lot of time idling while you're at work. With this card, the fan controller keeps the idle speed up at 48% and your card stays cool during the off-hours. There is definitely some thermal benefit to running the fan harder, as you can see from the 100% fan results above, and the increase in noise is not too bad at full tilt. Most users will not have to create custom software profiles to optimize the fan speeds on this non-reference design.
Load temps reached a maximum of 73C when running continuous gaming benchmarks, with automatic fan speeds ramping up to 51% in the most challenging titles. This is fairly close to the stress-test maximums, so despite all the industry protests about using an extreme tool like FurMark for stress testing, it's doing a good job of emulating a real-world graphics load, IMHO. That temperature is higher than I like to see, but the chip can obviously take it. FurMark is an OpenGL benchmark that heavily stresses and overheats the graphics card with fur rendering. The benchmark offers several options allowing the user to tweak the rendering: fullscreen / windowed mode, MSAA selection, window size, and duration. The benchmark also includes a GPU Burner mode (stability test). FurMark requires an OpenGL 2.0 compliant graphics card with a lot of GPU power!
FurMark does two things extremely well: it drives the thermal output of any graphics processor higher than any other application or video game, and it does so consistently, every time. While FurMark is not a true benchmark tool for comparing different video cards, it works well for comparing one product against itself with different drivers or clock speeds, or for testing the stability of a GPU, since it raises temperatures higher than any other program. But in the end, it's a rather limited tool. In our next section, we discuss electrical power consumption and learn how well (or poorly) each video card will impact your utility bill...

VGA Power Consumption

Life is not as affordable as it used to be, and items such as gasoline, natural gas, and electricity all top the list of resources which have exploded in price over the past few years. Add to this the limit of non-renewable resources compared to current demands, and you can see that the prices are only going to get worse. Planet Earth needs our help, and needs it badly. With forests becoming barren of vegetation and snow-capped poles quickly turning brown, the technology industry has a new attitude towards turning "green". I'll spare you the powerful marketing hype that gets sent from various manufacturers every day, and get right to the point: your computer hasn't been doing much to help save energy... at least up until now. Take a look at the idle clock rates that NVIDIA programmed into the BIOS for this GPU; no special power-saving software utilities are required. With the GPU core running at 50.6 MHz and the memory at 67.5 MHz, not a lot of power is required to keep things percolating. This is miles ahead of what the industry was doing 1-2 years ago, in terms of speed throttling to save energy.
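To put some rough numbers on why those idle clocks matter, here's a back-of-envelope sketch using the first-order CMOS dynamic power relation, P proportional to f times V squared. The 820 MHz figure is an assumption based on this card's factory 3D core clock, and the voltages are the GPU-Z readings quoted in the power results below; this illustrates the scaling principle, it is not a measurement:

# Back-of-envelope illustration of clock/voltage throttling. Assumes the
# card's 820 MHz factory-overclocked 3D core clock and uses the GPU-Z
# voltage readings reported later in this review. First-order CMOS
# dynamic power scales as P ~ f * V^2 (switched capacitance held constant).
IDLE_CLOCK_MHZ, IDLE_VOLTS = 50.6, 0.962
LOAD_CLOCK_MHZ, LOAD_VOLTS = 820.0, 1.037

ratio = (IDLE_CLOCK_MHZ * IDLE_VOLTS**2) / (LOAD_CLOCK_MHZ * LOAD_VOLTS**2)
print(f"Idle dynamic power is roughly {ratio:.1%} of full 3D dynamic power")
# ~5.3% -- leakage and board losses keep the real idle draw higher than this.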
To measure isolated video card power consumption, I used the Kill-A-Watt EZ (model P4460) power meter made by P3 International. A baseline test is taken without a video card installed inside our computer system, which is allowed to boot into Windows and rest idle at the login screen before power consumption is recorded. Once the baseline reading has been taken, the graphics card is installed and the system is again booted into Windows and left idle at the login screen. Our final loaded power consumption reading is taken with the video card running a stress test using FurMark. Below is a chart with the isolated video card power consumption (not system total) displayed in Watts for each specified test product:
* Results are accurate to within +/- 5W.
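The arithmetic behind those isolated figures is simple baseline subtraction, and a tiny script makes the method explicit. The wattages are the actual Kill-A-Watt readings from this review's test system:

# Baseline-subtraction method described above, using the Kill-A-Watt
# readings from this review (all values in watts at the wall).
BASELINE_W = 122  # system booted to the login screen with no video card
IDLE_W     = 171  # card installed, idle at the login screen
LOAD_W     = 430  # card installed, running the FurMark stress test

def isolated_draw(reading_w, baseline_w=BASELINE_W):
    # The card's share of the wall-socket draw is whatever it adds
    # on top of the no-card baseline.
    return reading_w - baseline_w

print(f"Idle: {isolated_draw(IDLE_W)} W")  # 49 W
print(f"Load: {isolated_draw(LOAD_W)} W")  # 308 W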
The Gigabyte GV-N480SO-15I GeForce GTX 480 Super Overclock Video Card pulled 49 (171-122) watts at idle and 308 (430-122) watts when running full out, using the test method outlined above. The GTX 480 Super Over Clock is more frugal than the first generation of full-sized Fermi cards, and within the same range as later cards from major board partners. Part of the increased efficiency comes from running cooler than most GTX 480 cards; I've consistently seen power usage climb right along with temps while stress testing a variety of graphics cards. We've all become used to the low-power ways of the newest processors, and there's no turning back. BTW, just because you asked... GPU-Z reported a GPU voltage of 0.962V at idle, 1.037V in 2D mode, and 1.037V in full 3D mode. Next, I'll offer you some final thoughts, and my conclusions. On to the next page...

Super Overclock Final Thoughts

Every technology reaches its zenith after it is already obsolete. A perfect example is LP record playback, AKA the "turntable", which now offers the highest quality playback of any consumer medium. Don't believe me? Find your local audio society, stop by one of their monthly meetings, and prepare to be amazed, both at the cost and the quality of the modern analog music source. As improbable as it may seem, the highest quality system of music reproduction available in the year 2010 was introduced commercially in 1889. The King is Dead, Long Live the King. Just a few short months ago, the first Fermi cards launched onto a market that had been waiting with bated breath for the Wunder Chip. In many ways, the product that finally made it out of production trials was stunning in its overall concept and its scale. GPU computing had arrived in a huge way, and no one was sure what the full impact of the new microarchitecture was going to be. In retrospect, it's now clear that NVIDIA saw a chance to completely own a very particular market that uses the massively parallel architecture of the modern GPU to solve math problems. It sounds simple, but the real-world implications are wide ranging. Want to break 128-bit security codes? Want to break into your neighbor's Wi-Fi? Brute-force attacks that used to take days can now be finished in a small fraction of the time on a massively parallel GPU. Of course, the availability of all this parallel computing power doesn't have to be directed at nefarious schemes. There's weather modeling, medical imaging, genetic decoding, and all manner of scientific endeavors that have nothing to do with breaking into your bank account. But then there's nuclear blast simulation.... Oh, did I mention that China just built the world's fastest supercomputer with multiple GPUs? It's just a coincidence that 99.999% of all video cards are built in China, right? My guess is that there are other countries doing it as well; it just hasn't shown up in WikiLeaks yet. Oh well, our common curse is that we live in interesting times.
Back to what most of us are here for: PC gaming. Let's be honest; the GeForce GTX 480 is a big enough chip to satisfy almost every gaming application out there. If you really need more graphics horsepower, the GTX 580 probably isn't going to cut it; if one 480 can't do it, one 580 can't either, and the only real answer is SLI. So let's dispense with the myth that this old warhorse is now suddenly useless. If the pricing drops enough, it will be the cheapest gateway to incredible SLI performance that will absolutely kill any gaming application. If the GeForce GTX 570 continues to undercut it in price, then it certainly makes sense to jump on the newest, fastest, coolest, radial-fan rocket sled that is also selling for less. Unless, of course, you want to overclock the thing until it begs for mercy, or you care about minimizing noise, or you just want to support a company that tried (and succeeded) to improve the breed with a more focused design approach. You get the picture... people buy last year's Audi instead of this year's VW all the time, for the same reason.

Gigabyte GTX 480 SOC Conclusion

From a performance standpoint, it seems odd to be criticizing an overclocked GTX 480 just because a couple of GTX 460 cards took their combined strength of 672 CUDA cores and leapt to the top of almost every benchmark. Multi-GPU setups are not the right answer for everyone, and as of today, only the new GTX 580 really improves on the performance of the Gigabyte GTX 480 SOC graphics card. I'm thrilled with the cooling solution, due to the low noise and the aggressive fan profile that keeps the fan speed above 48%, even at idle. All this leads to lower operating temperatures during normal gaming scenarios and also when the system is at rest, or doing 2D tasks. Power requirements are within reason, I think. NVIDIA recommends a 600W PSU for the GTX 480, which is the minimum I would really consider for any modern gaming rig. The appearance of the Gigabyte GTX 480 SOC video card is very attractive; the Windforce 3X fan shroud gives it an edgy look that implies toughness without the fake styling that is oftentimes used to portray that character trait. The decorative touches are few, and they're subtle and refined; the grayish color of the anodized aluminum shroud doesn't shout at you, either. The blue PCB material is a Gigabyte trademark, and it doesn't look out of place, even though the hardcore among us would probably have preferred black. When installed in a typical ATX-style PC case, the fans point down towards the ground and you mostly see the back side of the board, where twelve red LEDs provide a status report on the number of PWM phases in use at the time. They're not so bright that they overpower any other lighting in the case, but they are easy to spot, especially if you have a side window.
The build quality of the Gigabyte GTX 480 SOC card is top notch, as it should be at the sharp end of the gaming market. The overall impression of the card is solid, even though the cooler isn't a rigid block like on some other cards; all the open space on the shroud takes away from the rigidity of the assembly. The packaging is high quality and very informative. I was very happy to see that the PC board was clean and free from residue; I've seen too much gunk on video cards lately, leftovers from the wave solder/cleaning process. The power supply uses high quality parts, and a great deal of attention was paid to powering all of the sub-systems with localized voltage regulators. The 12-phase VRM section reserved for the GPU is unique to this card and is a definite bonus for ensuring stability in overclocking situations. Gigabyte also employs their GPU Gauntlet™ sorting process in production of the GV-N480SO-15I, ensuring that the owner is getting all the performance that's intended; no more wondering whether you got the golden sample or the duff chip. I do wonder what the "pass rate" is, but I'm sure that is very confidential info that NVIDIA does not want released. The basic features of the GTX 480 SOC are fully comparable with the latest offerings from both camps, plus it includes PhysX technology, which is a real must-have for some. One of the really unique features of the Gigabyte GTX 480 SOC is the ability to choose from two different BIOS files at startup. The original idea for this feature was to prevent "brain freeze" of the GPU when really extreme cooling methods are used for overclocking competitions. This cold-bug behavior is never encountered in normal use, even with high-end water cooling, so it's clear that Gigabyte has designed this card for the customer looking for extreme performance potential. On the power supply side, Gigabyte uses five NEC proprietary film capacitors to give extremely low parasitic inductance and resistance in the final filter section of the GPU and memory power supplies. It takes more than raw processing power to win over today's hardcore enthusiast, and this board doesn't hold back on build features. As of this writing, the price for the Gigabyte GV-N480SO-15I GeForce GTX 480 Super Overclock Video Card is $399.99 at my favorite PC component supplier, Newegg. This is at the very low end of the price range for all GTX 480 cards. Of course, the recent introduction of the GTX 580 and especially the GTX 570 has increased the volatility of GTX 480 prices. The extra costs associated with production of this card serve to increase performance, which is exactly where I like to see the money spent. The asking price is no more than that of the base card from other manufacturers, and a copy of Mafia II is included as an incentive at this time. Considering the extra value that this card brings compared to the reference designs, it's actually a bargain. You have to move way up the price ladder, to the MSI N480GTX Twin Frozr II at $480, to get away from the reference design, which definitely runs hotter and louder than this GTX 480 Super Over Clock edition. The problem with the value proposition on this and all the other GTX 480 cards is that a brand new GTX 570 generally matches it in performance, and they are currently going for only $350 - $370. They are all bog-standard reference cards at that price, but the new reference design is a big improvement over the previous generation.
I liked this card, despite the fact that it has essentially been replaced by the GTX 570 at a price some $50-$70 lower. This card would have been at the top of every category 3-4 months ago, but today it is eclipsed by the latest product refreshes from both AMD and NVIDIA. OTOH, this is the best example of a GTX 480 I've seen to date, even though the MSI product line gives it some competition at higher price points. The out-of-the-box clocks are fantastic, no doubt due to Gigabyte's GPU selection process. I almost always prefer the performance of cooler designs that use axial fans, and this example is one of the best; it's both quiet and efficient. For the extreme overclocker, and even the folks who want to run water cooling, the power supply and BIOS features make this a standout video card for bagging top clocks anywhere you go. For the person who's never going to take the factory cooler off, there are some cheaper alternatives out there today. The Gigabyte GeForce GTX 480 Super Overclock card is a viable choice, but only if its special features match your particular needs. It can't compete directly with the latest GPUs from NVIDIA, and it never will at its current price. I'm actually hoping the prices will bottom out in a month or so, so I can pick up a second one for SLI. I told you I liked it; I just don't like the current pricing.

Pros:
+ Ultra quiet cooling system with aggressive stock fan profile

Cons:
- The GTX 570 does it all for less

Ratings:
Final Score: 8.70 out of 10.
Recommended: Benchmark Reviews Seal of Approval.