NVIDIA GeForce GTX 670 Video Card Review
Written by Olin Coles
Thursday, 10 May 2012
Manufacturer: NVIDIA
Full Disclosure: The product sample used in this article has been provided by NVIDIA.

NVIDIA has enjoyed the fruits of its labors with the recent launch of Kepler, its latest ultra-efficient desktop GPU architecture. The NVIDIA GeForce GTX 680 seized the crown for graphics performance, but it also carries a price tag fit for kings. Now NVIDIA is back to address the needs of performance gamers with the GeForce GTX 670, which uses the same GK104 GPU found in the GTX 680 along with 2GB of GDDR5 memory running at the same clock speeds. At around $399 the GeForce GTX 670 matches the price of the AMD Radeon HD 7950, yet performs at the level of the Radeon HD 7970. In this article Benchmark Reviews tests the NVIDIA GeForce GTX 670 video card against the leading competition, including the GeForce GTX 570 that it replaces.

With the GeForce GTX 670, NVIDIA targets high-performance gaming enthusiasts looking to upgrade an aging graphics card, a group that tends to update its hardware every few years. To best illustrate the GTX 670's performance potential, we use the most demanding PC video game titles and benchmark software available. Video frame rate performance is tested against a large collection of competing desktop graphics products, such as the AMD Radeon HD 7970 (Tahiti). Crysis Warhead compares DirectX 10 performance levels, joined by newer DirectX 11 benchmarks such as 3DMark11, Batman: Arkham City, Battlefield 3, and Unigine Heaven 3.

Of the many platforms available for enjoying video games, there's no question that the highest quality graphics come from the PC. While game developers might not consider PC gaming as lucrative as entertainment consoles, companies like NVIDIA use desktop graphics to set the benchmark for the smaller, more compact designs that make it into notebooks, tablets, and smartphones.
NVIDIA's Kepler GPU architecture is an example of this, delivering unprecedented performance while operating cooler and consuming far less power than previous flagship discrete graphics cards. Featuring their new NVIDIA GPU Boost technology, the GeForce GTX 670 video card can dynamically adjust power and clock speeds based on real-time application demands.
NVIDIA's GeForce GTX 670 graphics card is designed around their next-generation Kepler GPU architecture, which adopts key aspects of the previous Fermi architecture. Building on the 32-core Streaming Multiprocessor (SM) from Fermi on the GeForce GTX 580, NVIDIA optimized Kepler for twice the performance per watt using an innovative 192-core streaming multiprocessor (referred to as SMX) that exchanges a double-speed processor clock for more processor cores. Utilizing seven SMX units, the GeForce GTX 670 boasts 1344 total CUDA cores, which manage shader, texture, geometry, and compute tasks. The GTX 670 shares an identical memory subsystem with the GeForce GTX 680, which reduces the pipeline penalty for its many cores and allows memory speeds up to 6.0 Gbps. Combined, these architecture improvements offer impressive performance gains while improving overall power efficiency, yet they represent only a small portion of the new technology available with this product.

In addition to the improved Kepler GPU architecture with NVIDIA GPU Boost technology, the GeForce GTX 670 video card delivers refinements in the user experience. Smoother FXAA and adaptive vSync technology result in less chop, stutter, and tearing in on-screen motion. Overclockers might see their enthusiast experiments threatened by the presence of NVIDIA GPU Boost technology, but its dynamic power and clock speed profiles can be supplemented with additional overclocking or shut off completely. Adaptive vSync, on the other hand, is a welcome addition for all users, from the gamer to the casual computer user. This new technology disables vertical sync whenever the frame rate drops too low to properly sustain it (when enabled), thereby reducing stutter and tearing artifacts.
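To make the GPU Boost idea concrete, here is a toy sketch of a boost controller that steps the clock up while the card remains under its power target and holds the base clock otherwise. This is not NVIDIA's actual control algorithm; the step size, cap, and power-scaling model are invented purely for illustration.

```python
# Toy GPU Boost sketch: raise the clock while power headroom remains.
# All constants are illustrative, not NVIDIA's real parameters.

def boost_clock(base_mhz, power_draw_w, power_target_w,
                step_mhz=13, max_offset_mhz=104):
    """Return an adjusted clock: step up under the power target, else hold base."""
    offset = 0
    while power_draw_w < power_target_w and offset < max_offset_mhz:
        offset += step_mhz
        # Crude assumption: power scales linearly with clock speed.
        power_draw_w *= (base_mhz + offset) / (base_mhz + offset - step_mhz)
    return base_mhz + offset

# A light load leaves power headroom, so the clock boosts above base;
# a load already at the 170W target keeps the 915 MHz base clock.
print(boost_clock(915, 140, 170))
print(boost_clock(915, 170, 170))  # → 915
```

The real mechanism also weighs temperature and voltage, but the headroom-driven feedback loop is the core concept the article describes.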
Finally, NVIDIA is introducing TXAA, a film-style anti-aliasing technique with a mix of hardware post-processing, a custom CG film-style AA resolve, and an optional temporal component for better image quality.

First Look: GeForce GTX 670

The NVIDIA GeForce GTX 670 is a double-bay graphics card measuring 1.5" tall, 3.9" wide, and 9.5" long, and it will fit into nearly all mid-tower computer case enclosures with room to spare. The GeForce GTX 670 is shorter than NVIDIA's GeForce GTX 570 and GeForce GTX 580, and also the AMD Radeon HD 6970 and Radeon HD 7970 (each 10.5" long).
A rear mounted 60mm (2.4") blower motor fan with a slight offset takes advantage of the chamfered depression to draw cool air into the angled fan shroud, allowing more air to reach the intake whenever two or more video cards are combined in close-proximity SLI configurations. NVIDIA's add-in card partners with engineering resources may incorporate their own cooling solution into the GTX 670, but most brands are likely to adopt the cool-running reference design.
Specified at 170W Thermal Design Power output, the GeForce GTX 670 requires less power than its predecessor and several other flagship products. Because TDP demands have been reduced, NVIDIA's GeForce GTX 670 has also reduced power supply requirements to a pair of six-pin PCI-E power connections - identical to the GeForce GTX 570. However, with GeForce GTX 670 the two power connections are relocated to the side of the video card so it fits better into small enclosures.
The GTX 670 offers two simultaneously functional dual-link DVI (DL-DVI) connections, a full-size HDMI 1.4a output, and a DisplayPort 1.2 connection. Add-in partners may elect to remove or possibly further extend any of these video interfaces, but most will likely retain the original engineering. Only one of these video cards is necessary to drive triple-display NVIDIA 3D-Vision Surround functionality, when using both DL-DVI and either the HDMI or DP connection for third output. All of these video interfaces consume exhaust-vent real estate, but this has very little impact on cooling because the 28nm Kepler GPU generates less heat than past GeForce processors, and also because NVIDIA intentionally positions the heatsink far enough from these vents to equalize exhaust pressure.
As with past-generation GeForce GTX series graphics cards, the GTX 670 is capable of two- and three-card SLI configurations. Because the GeForce GTX 670 is a PCI-Express 3.0-compliant device, the added bandwidth could come into demand as future games and applications make use of these resources. Most games work well at moderate settings on a single GeForce GTX 670 graphics card, but multi-card SLI configurations are perfect for gamers wanting to experience high-performance video games at their best quality settings with all the bells and whistles enabled.
The exposed printed circuit board on the backside of the video card reveals an interesting discovery: the GeForce GTX 670 uses a much smaller PCB than its profile suggests. Past GeForce products generally use a shroud covering the entire length of the circuit board, but on the GTX 670 the PCB spans only 7.0" of this 9.5" card, with a 2.5" extension to support the cooling fan.
In the next section, we detail our test methodology and give specifications for all of the benchmark tools and hardware used for our performance tests...

VGA Testing Methodology

The Microsoft DirectX-11 graphics API is native to the Microsoft Windows 7 Operating System, which will be the primary O/S for our test platform. DX11 is also available as a Microsoft Update for the Windows Vista O/S, so our test results apply to both versions of the Operating System. The majority of benchmark tests used in this article measure DX11 performance; however, some high-demand DX10 tests have also been included. In each benchmark test, one 'cache run' is conducted, followed by five recorded test runs. Results are collected at each setting, with the highest and lowest results discarded. The remaining three results are averaged and displayed in the performance charts on the following pages.
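The run-averaging procedure above (five recorded runs, discard the highest and lowest, average the remaining three) is a trimmed mean. A minimal sketch, with invented frame-rate numbers:

```python
def average_benchmark_runs(fps_results):
    """Average five recorded runs after discarding the highest and lowest,
    per the trimmed-mean procedure described in the methodology."""
    if len(fps_results) != 5:
        raise ValueError("expected exactly five recorded runs")
    trimmed = sorted(fps_results)[1:-1]  # drop the lowest and highest results
    return sum(trimmed) / len(trimmed)

# Five recorded runs (illustrative values, not measured data):
print(round(average_benchmark_runs([58.1, 60.4, 59.7, 61.2, 59.9]), 2))  # → 60.0
```

Discarding the extremes makes a single outlier run (a background task spike, for example) unable to skew the reported average.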
A combination of synthetic and video game benchmark tests has been used in this article to illustrate relative performance among graphics solutions. Our benchmark frame rate results are not intended to represent real-world graphics performance, as this experience changes based on supporting hardware and the perception of the individual playing the game.

Intel X79 Express Test System
DirectX-10 Benchmark Applications
DirectX-11 Benchmark Applications
PCI-Express Graphics Cards
| Graphics Card | GeForce GTX570 | Radeon HD6970 | GeForce GTX580 | Radeon HD7970 | GeForce GTX670 | GeForce GTX680 |
| GPU Cores | 480 | 1536 | 512 | 2048 | 1344 | 1536 |
| Core Clock (MHz) | 732 | 880 | 772 | 925 | 915 | 1006 |
| Shader Clock (MHz) | 1464 | N/A | 1544 | N/A | Boost 980 | Boost 1058 |
| Memory Clock (MHz) | 950 | 1375 | 1002 | 1375 | 1502 | 1502 |
| Memory Amount | 1280MB GDDR5 | 2048MB GDDR5 | 1536MB GDDR5 | 3072MB GDDR5 | 2048MB GDDR5 | 2048MB GDDR5 |
| Memory Interface | 320-bit | 256-bit | 384-bit | 384-bit | 256-bit | 256-bit |
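The memory figures in the table above translate directly into peak bandwidth: GDDR5 transfers four bits per pin per clock, so the GTX 670's 1502 MHz memory clock yields the 6.0 Gbps effective rate cited earlier. A quick sketch of the arithmetic:

```python
def gddr5_bandwidth_gbs(memory_clock_mhz, bus_width_bits):
    """Peak memory bandwidth in GB/s: GDDR5 is quad data rate,
    so effective transfers per pin = 4 x memory clock."""
    effective_rate_mtps = memory_clock_mhz * 4       # mega-transfers per second
    return effective_rate_mtps * bus_width_bits / 8 / 1000  # bits→bytes, MB→GB

print(gddr5_bandwidth_gbs(1502, 256))  # GTX 670/680: ~192.3 GB/s
print(gddr5_bandwidth_gbs(1375, 384))  # Radeon HD 7970: 264.0 GB/s
```

This shows why the 384-bit Radeon HD 7970 retains a raw-bandwidth advantage despite the GTX 670's faster memory clock.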
DX11: 3DMark11
Futuremark's 3DMark11 is the latest addition to the 3DMark benchmark series built by Futuremark Corporation. 3DMark11 is a PC benchmark suite designed to test DirectX-11 graphics card performance without vendor preference. Although 3DMark11 includes the unbiased Bullet Open Source Physics Library instead of NVIDIA PhysX for the CPU/Physics tests, Benchmark Reviews concentrates on the four graphics-only tests in 3DMark11 and runs them with the medium-level 'Performance' preset.
The 'Performance' level setting applies 1x multi-sample anti-aliasing and trilinear texture filtering to a 1280x720p resolution. The tessellation detail, when called upon by a test, is preset to level 5, with a maximum tessellation factor of 10. The shadow map size is limited to 5 and the shadow cascade count is set to 4, while the surface shadow sample count is at the maximum value of 16. Ambient occlusion is enabled, and preset to a quality level of 5.
- Futuremark 3DMark11 Professional Edition
- Settings: Performance Level Preset, 1280x720, 1x AA, Trilinear Filtering, Tessellation Level 5
3DMark11 Benchmark Test Results
DX11: Aliens vs Predator
Aliens vs. Predator is a science fiction first-person shooter video game, developed by Rebellion, and published by Sega for Microsoft Windows, Sony PlayStation 3, and Microsoft Xbox 360. Aliens vs. Predator utilizes Rebellion's proprietary Asura game engine, which had previously found its way into Call of Duty: World at War and Rogue Warrior. The self-contained benchmark tool is used for our DirectX-11 tests, which push the Asura game engine to its limit.
In our benchmark tests, Aliens vs. Predator was configured to use the highest quality settings with 4x AA and 16x AF. DirectX-11 features such as Screen Space Ambient Occlusion (SSAO) and tessellation have also been included, along with advanced shadows.
- Aliens vs Predator
- Settings: Very High Quality, 4x AA, 16x AF, SSAO, Tessellation, Advanced Shadows
Aliens vs Predator Benchmark Test Results
DX11: Batman Arkham City
Batman: Arkham City is a third-person action game that continues the story line set forth in Batman: Arkham Asylum, which launched for game consoles and PC back in 2009. Based on an updated Unreal Engine 3, Batman: Arkham City enjoys DirectX 11 graphics, using multi-threaded rendering to produce life-like tessellation effects. While game console versions of Batman: Arkham City deliver high-definition graphics at either 720p or 1080i, you'll only get the highest-quality graphics and special effects on the PC.
In an age when developers give game consoles priority over PC, it's becoming difficult to find games that show off the stunning visual effects and lifelike quality possible from modern graphics cards. Fortunately Batman: Arkham City is a game that does amazingly well on both platforms, while at the same time making it possible to cripple the most advanced graphics card on the planet by offering extremely demanding NVIDIA 32x CSAA and full PhysX capability. Also available to PC users (with NVIDIA graphics) is FXAA, a shader based image filter that achieves similar results to MSAA yet requires less memory and processing power.
Batman: Arkham City offers varying levels of PhysX effects, each with its own set of hardware requirements. You can turn PhysX off, or enable the 'Normal' level, which introduces GPU-accelerated PhysX elements such as debris particles, volumetric smoke, and destructible environments into the game, while the 'High' setting adds real-time cloth and paper simulation. Particles exist everywhere in real life, and this PhysX effect is used in many aspects of the game to add back that same sense of realism. PC gamers who are enthusiastic about graphics quality shouldn't skimp on PhysX: DirectX 11 makes it possible to enjoy many of these effects, and PhysX helps bring them to life in the game.
- Batman: Arkham City
- Settings: 8x AA, 16x AF, MVSS+HBAO, High Tessellation, Extreme Detail, PhysX Disabled
Batman: Arkham City Benchmark Test Results
DX11: Battlefield 3
In Battlefield 3, players step into the role of elite U.S. Marines. As the first boots on the ground, players experience heart-pounding missions across diverse locations including Paris, Tehran, and New York. As a U.S. Marine in the field, periods of tension and anticipation are punctuated by moments of complete chaos. As bullets whiz by, walls crumble, and explosions force players to the ground, the battlefield feels more alive and interactive than ever before.
The graphics engine behind Battlefield 3 is called Frostbite 2, which delivers realistic global illumination lighting along with dynamic destructible environments. The game uses a hardware terrain tessellation method that allows a high number of detailed triangles to be rendered entirely on the GPU when near the terrain. This allows for a very low memory footprint and relies on the GPU alone to expand the low res data to highly realistic detail.
Using Fraps to record frame rates, our Battlefield 3 benchmark test uses a three-minute capture on the 'Secure Parking Lot' stage of Operation Swordbreaker. Relative to the online multiplayer action, these frame rate results are nearly identical to daytime maps with the same video settings.
- Battlefield 3
- Settings: Ultra Graphics Quality, FOV 90, 180-second Fraps Scene
Battlefield 3 Benchmark Test Results
DX11: Lost Planet 2
Lost Planet 2 is the second installment in the saga of the planet E.D.N. III, ten years after the story of Lost Planet: Extreme Condition. The snow has melted and the lush jungle life of the planet has emerged with angry and luscious flora and fauna. With the new environment comes the addition of DirectX-11 technology to the game.
Lost Planet 2 takes advantage of DX11 features including tessellation and displacement mapping on water, level bosses, and player characters. In addition, soft body compute shaders are used on 'Boss' characters, and wave simulation is performed using DirectCompute. These cutting edge features make for an excellent benchmark for top-of-the-line consumer GPUs.
The Lost Planet 2 benchmark offers two different tests, which serve different purposes. This article uses tests conducted on benchmark B, which is designed to be a deterministic and effective benchmark tool featuring DirectX 11 elements.
- Lost Planet 2 Benchmark 1.0
- Settings: Benchmark B, 4x AA, Blur Off, High Shadow Detail, High Texture, High Render, High DirectX 11 Features
Lost Planet 2 Benchmark Test Results
DX11: Metro 2033
Metro 2033 is an action-oriented video game with a combination of survival horror, and first-person shooter elements. The game is based on the novel Metro 2033 by Russian author Dmitry Glukhovsky. It was developed by 4A Games in Ukraine and released in March 2010 for Microsoft Windows. Metro 2033 uses the 4A game engine, developed by 4A Games. The 4A Engine supports DirectX-9, 10, and 11, along with NVIDIA PhysX and GeForce 3D Vision.
The 4A Engine is multi-threaded such that only PhysX has a dedicated thread; it uses a task model without any pre-conditioning or pre/post-synchronizing, allowing tasks to be executed in parallel. The engine can utilize a deferred shading pipeline, uses tessellation for greater performance, and offers HDR (complete with blue shift), real-time reflections, color correction, film grain and noise, and multi-core rendering.
Metro 2033 features superior volumetric fog, double PhysX precision, object blur, sub-surface scattering for skin shaders, parallax mapping on all surfaces, and greater geometric detail with less aggressive LODs. Using PhysX, the engine supports features such as destructible environments, cloth and water simulations, and particles that are fully affected by environmental factors.
NVIDIA has been diligently working to promote Metro 2033, and for good reason: it's one of the most demanding PC video games we've ever tested. When the former flagship GeForce GTX 480 struggles to produce 27 FPS with DirectX-11 anti-aliasing turned down to its lowest setting, you know that only the strongest graphics processors will generate playable frame rates. All of our tests enable Advanced Depth of Field and Tessellation effects, but disable advanced PhysX options.
- Metro 2033 Benchmark
- Settings: Very-High Quality, 4x AA, 16x AF, Tessellation, PhysX Disabled
Metro 2033 Benchmark Test Results
DX11: Unigine Heaven 3.0
The Unigine Heaven benchmark is a freely available tool for exercising DirectX-11 graphics capabilities on Windows 7 (or an updated Windows Vista) system. It renders a scene of floating islands with a tiny village hidden in the cloudy skies, and an interactive mode lets you explore the intricate world directly. Through its advanced renderer, Unigine was among the first to showcase art assets with tessellation, using the technology to its full extent to exhibit the possibilities of enriched 3D gaming.

The distinguishing feature of the Unigine Heaven benchmark is hardware tessellation: a scalable technology for automatic subdivision of polygons into smaller and finer pieces, letting developers add geometric detail to their games at comparatively little performance cost. Thanks to this procedure, the rendered image comes much closer to realistic visual perception.
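As a toy illustration of what polygon subdivision means (this is not Unigine's implementation), the classic refinement step splits each triangle into four by connecting its edge midpoints, quadrupling the triangle count at each tessellation level:

```python
def midpoint(a, b):
    """Midpoint of two 3D vertices."""
    return ((a[0] + b[0]) / 2, (a[1] + b[1]) / 2, (a[2] + b[2]) / 2)

def subdivide(triangle):
    """Split one triangle into four by connecting its edge midpoints
    (the basic refinement step behind tessellation)."""
    a, b, c = triangle
    ab, bc, ca = midpoint(a, b), midpoint(b, c), midpoint(c, a)
    return [(a, ab, ca), (ab, b, bc), (ca, bc, c), (ab, bc, ca)]

def tessellate(triangles, levels):
    """Apply 'levels' rounds of subdivision: triangle count grows 4x per level."""
    for _ in range(levels):
        triangles = [t for tri in triangles for t in subdivide(tri)]
    return triangles

base = [((0, 0, 0), (1, 0, 0), (0, 1, 0))]
print(len(tessellate(base, 3)))  # → 64 (one triangle becomes 4**3)
```

Real hardware tessellators displace the new vertices using a height or displacement map, which is how flat geometry gains visible surface detail.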
Since only DX11-compliant video cards can run the Heaven benchmark properly, only those products that meet the requirements have been included.
- Unigine Heaven Benchmark 3.0
- Settings: DirectX 11, High Quality, Extreme Tessellation, 16x AF, 4x AA
Heaven Benchmark Test Results
VGA Power Consumption
In this section, PCI-Express graphics cards are isolated for idle and loaded electrical power consumption. In our power consumption tests, Benchmark Reviews utilizes an 80-PLUS GOLD certified OCZ Z-Series Gold 850W PSU, model OCZZ850. This power supply unit has been tested to provide over 90% typical efficiency by Chroma System Solutions. To measure isolated video card power consumption, Benchmark Reviews uses the Kill-A-Watt EZ (model P4460) power meter made by P3 International. In this particular test, all power consumption results were verified with a second power meter for accuracy.
The power consumption statistics discussed in this section are absolute maximum values, and may not represent real-world power consumption created by video games or graphics applications.
A baseline measurement is taken without any video card installed on our test computer system, which is allowed to boot into Windows 7 and rest idle at the login screen before power consumption is recorded. Once the baseline reading has been taken, the graphics card is installed and the system is again booted into Windows and left idle at the login screen before taking the idle reading. Our final loaded power consumption reading is taken with the video card running a stress test using graphics test #4 on 3DMark11. Below is a chart with the isolated video card power consumption (system without video card subtracted from measured combined total) displayed in Watts for each specified test product:
| VGA Product Description (sorted by combined total power) | Idle Power | Loaded Power |
|---|---|---|
| NVIDIA GeForce GTX 480 SLI Set | 82 W | 655 W |
| NVIDIA GeForce GTX 590 Reference Design | 53 W | 396 W |
| ATI Radeon HD 4870 X2 Reference Design | 100 W | 320 W |
| AMD Radeon HD 6990 Reference Design | 46 W | 350 W |
| NVIDIA GeForce GTX 295 Reference Design | 74 W | 302 W |
| ASUS GeForce GTX 480 Reference Design | 39 W | 315 W |
| ATI Radeon HD 5970 Reference Design | 48 W | 299 W |
| NVIDIA GeForce GTX 690 Reference Design | 25 W | 321 W |
| ATI Radeon HD 4850 CrossFireX Set | 123 W | 210 W |
| ATI Radeon HD 4890 Reference Design | 65 W | 268 W |
| AMD Radeon HD 7970 Reference Design | 21 W | 311 W |
| NVIDIA GeForce GTX 470 Reference Design | 42 W | 278 W |
| NVIDIA GeForce GTX 580 Reference Design | 31 W | 246 W |
| NVIDIA GeForce GTX 570 Reference Design | 31 W | 241 W |
| ATI Radeon HD 5870 Reference Design | 25 W | 240 W |
| ATI Radeon HD 6970 Reference Design | 24 W | 233 W |
| NVIDIA GeForce GTX 465 Reference Design | 36 W | 219 W |
| NVIDIA GeForce GTX 680 Reference Design | 14 W | 243 W |
| Sapphire Radeon HD 4850 X2 11139-00-40R | 73 W | 180 W |
| NVIDIA GeForce 9800 GX2 Reference Design | 85 W | 186 W |
| NVIDIA GeForce GTX 780 Reference Design | 10 W | 275 W |
| NVIDIA GeForce GTX 770 Reference Design | 9 W | 256 W |
| NVIDIA GeForce GTX 280 Reference Design | 35 W | 225 W |
| NVIDIA GeForce GTX 260 (216) Reference Design | 42 W | 203 W |
| ATI Radeon HD 4870 Reference Design | 58 W | 166 W |
| NVIDIA GeForce GTX 560 Ti Reference Design | 17 W | 199 W |
| NVIDIA GeForce GTX 460 Reference Design | 18 W | 167 W |
| AMD Radeon HD 6870 Reference Design | 20 W | 162 W |
| NVIDIA GeForce GTX 670 Reference Design | 14 W | 167 W |
| ATI Radeon HD 5850 Reference Design | 24 W | 157 W |
| NVIDIA GeForce GTX 650 Ti BOOST Reference Design | 8 W | 164 W |
| AMD Radeon HD 6850 Reference Design | 20 W | 139 W |
| NVIDIA GeForce 8800 GT Reference Design | 31 W | 133 W |
| ATI Radeon HD 4770 RV740 GDDR5 Reference Design | 37 W | 120 W |
| ATI Radeon HD 5770 Reference Design | 16 W | 122 W |
| NVIDIA GeForce GTS 450 Reference Design | 22 W | 115 W |
| NVIDIA GeForce GTX 650 Ti Reference Design | 12 W | 112 W |
| ATI Radeon HD 4670 Reference Design | 9 W | 70 W |
The GeForce GTX 670 accepts two 6-pin PCI-E power connections for normal operation, and will not activate the display unless proper power has been supplied. Beginning with the GTX 600-series, a low power notification message is displayed at boot up whenever the user fails to connect power connectors properly.
GeForce GTX 670's Thermal Design Power (TDP) is specified at 170W, which appears very accurate because our own tests were able to push a maximum of 167W under full load. NVIDIA recommends a 500W power supply unit for stable operation with GTX 670, which should include both required 6-pin PCI-E connections without the use of adapters.
If you're familiar with how electronics function, it will come as no surprise that lower power consumption equals lower heat output, as evidenced by our results below...
GeForce GTX 670 Temperatures
This section reports our temperature results with the GeForce GTX 670 under idle and maximum load conditions. During each test a 20°C ambient room temperature is maintained from start to finish, as measured by digital temperature sensors located outside the computer system. GPU-Z is used to measure the temperature at idle as reported by the GPU, and also under load.
Using a modified version of FurMark's "Torture Test" to generate maximum thermal load, peak GPU temperature is recorded in high-power 3D mode. FurMark does two things extremely well: it drives the thermal output of any graphics processor much higher than any video game realistically could, and it does so consistently every time. FurMark works great for testing the stability of a GPU as the temperature rises to its highest possible output.
The temperatures illustrated below are absolute maximum values, and do not represent real-world temperatures created by video games or graphics applications:
| Video Card | Idle Temp | Loaded Temp | Loaded Noise | Ambient |
|---|---|---|---|---|
| ATI Radeon HD 5850 | 39°C | 73°C | 7/10 | 20°C |
| NVIDIA GeForce GTX 460 | 26°C | 65°C | 4/10 | 20°C |
| AMD Radeon HD 6850 | 42°C | 77°C | 7/10 | 20°C |
| AMD Radeon HD 6870 | 39°C | 74°C | 6/10 | 20°C |
| ATI Radeon HD 5870 | 33°C | 78°C | 7/10 | 20°C |
| NVIDIA GeForce GTX 560 Ti | 27°C | 78°C | 5/10 | 20°C |
| NVIDIA GeForce GTX 570 | 32°C | 82°C | 7/10 | 20°C |
| ATI Radeon HD 6970 | 35°C | 81°C | 6/10 | 20°C |
| NVIDIA GeForce GTX 580 | 32°C | 70°C | 6/10 | 20°C |
| NVIDIA GeForce GTX 590 | 33°C | 77°C | 6/10 | 20°C |
| AMD Radeon HD 6990 | 40°C | 84°C | 8/10 | 20°C |
| NVIDIA GeForce GTX 670 | 26°C | 71°C | 3/10 | 20°C |
| NVIDIA GeForce GTX 680 | 26°C | 75°C | 3/10 | 20°C |
| NVIDIA GeForce GTX 690 | 30°C | 81°C | 4/10 | 20°C |
As we've already mentioned in the pages leading up to this section, NVIDIA's Kepler architecture yields a much more efficient GPU than previous designs. This becomes evident in the extremely low idle temperature and the modest loaded temperature. What's even more impressive than these results is how quietly the GeForce GTX 670 operates, barely changing levels from silent to almost silent as it reaches full load. Even with an open computer case exposing the video card, it's difficult to hear the cooling fan make any noise at all. While NVIDIA should be proud of updating their product line with one of the fastest graphics cards on the planet, I'm happy they also made it one of the quietest-running flagship video cards we've ever tested.
GeForce GTX 670 Conclusion
IMPORTANT: Although the rating and final score mentioned in this conclusion are made to be as objective as possible, please be advised that every author perceives these factors differently at various points in time. While we each do our best to ensure that all aspects of the product are considered, there are oftentimes unforeseen market conditions and manufacturer changes that occur after publication and could render our rating obsolete. Please do not base any purchase solely on our conclusion, as it represents our rating specifically for the product tested, which may differ from future versions of the same product. Benchmark Reviews begins our conclusion with a short summary for each of the areas we rate.
The GeForce GTX 670 replaces the GTX 570 in NVIDIA's product stack, and based on our test results the performance differences between them are night and day. On average the GeForce GTX 670 delivers an impressive 45% increase over the GTX 570, and occasionally reached as high as 70% over its predecessor. NVIDIA designed the GTX 670 to operate faster, offer more features, deliver more functionality, use less energy, and generate less heat... all things we proved it succeeds in achieving. Based on its $399 price tag the GeForce GTX 670 competes with the AMD Radeon HD 7950; however, after running each video card through several different benchmark tests, our FPS results often favored NVIDIA's GeForce GTX 670 over the more expensive AMD Radeon HD 7970. Let's look at the breakdown:
In the DirectX 10 game Crysis Warhead, the GeForce GTX 670 finished a full 11 FPS ahead of its predecessor (the GTX 570) at 1920x1080, while also leading the AMD Radeon HD 7970 by 4 FPS. DirectX 11 tests likewise had the GeForce GTX 670 ahead in most cases. The demanding DX11 graphics of Batman: Arkham City made use of Kepler's optimized architecture, delivering the GeForce GTX 670 a staggering 18 FPS lead over the more expensive Radeon HD 7970. Battlefield 3 continued the run, pushing the stock GTX 670 more than 8 FPS beyond the Radeon HD 7970. Lost Planet 2 played well on all graphics cards when set to high quality with 4x AA, yet the GeForce GTX 670 still surpassed the Radeon HD 7970 by 6 FPS. In one of the few exceptions, Aliens vs. Predator handed the lead back to AMD's Radeon products. Metro 2033 is another demanding game that requires high-end graphics to enjoy quality settings, and like AvP its benchmark favors Radeon products.
Synthetic benchmark tools offered a similar read on these products, mirroring some of the results seen in our video game tests. Futuremark's 3DMark11 benchmark suite strained our high-end graphics cards with only mid-level settings displayed at 720p, where the Radeon HD 7970 occasionally trailed the GeForce GTX 670 by 4 FPS when it wasn't holding even. Our Unigine Heaven 3.0 benchmark tests used maximum settings, which might explain why the Radeon HD 7970 pulled ahead. Taking all of our benchmark results into consideration, NVIDIA's GeForce GTX 670 commands a decisive lead over its competition while occasionally surpassing AMD's flagship graphics card.
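Percentage figures like the average 45% gain over the GTX 570 reduce to simple arithmetic. The sketch below uses hypothetical per-game FPS pairs, not our actual measurements, purely to show how the per-game and average gains are derived:

```python
def pct_gain(new_fps, old_fps):
    """Percentage FPS increase of one card over another."""
    return (new_fps / old_fps - 1.0) * 100.0

# Hypothetical GTX 670 vs. GTX 570 FPS pairs, illustrative values only.
pairs = [(58.0, 40.0), (85.0, 50.0), (44.0, 36.0)]
gains = [pct_gain(new, old) for new, old in pairs]

print([round(g) for g in gains])       # per-game gains -> [45, 70, 22]
print(round(sum(gains) / len(gains)))  # average gain   -> 46
```

Note that averaging per-game percentages weights every title equally, which is how a card can average 45% ahead while peaking near 70% in its best case.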
Appearance is a much more subjective matter, especially since this particular rating doesn't have any quantitative benchmark scores to fall back on. NVIDIA's GeForce GTX series has used a recognizable design over the past two years, and with the exception of more angular corners, the GTX 670 looks very similar to the GTX 570. Some add-in card partners may offer their own unique cooling solutions, but this might not happen with the GeForce GTX 670 since it operates so efficiently and exhausts nearly all of its heated air outside the computer case. Expect most partners to dress up the reference design with exciting graphics on the fan shroud or colored plastic components. While looks might mean a lot to some consumers, keep in mind that this product outperforms the competition while generating much less heat and producing very little noise.
Construction is the one area in which NVIDIA continually shines, and thanks in part to extremely quiet operation paired with more efficient cores that consume less energy and emit less heat, I'm confident the GeForce GTX 670 will continue this tradition. Requiring two 6-pin PCI-E power connections keeps this video card compatible with most power supply units, while tweaked heatsink and fan placement that optimizes cooling performance proves there are still ways to improve on commonplace technology. The GeForce GTX 670 has one of the shortest PCBs we've seen on a GTX-series model, which further reduces heat output and makes this product suitable for more robust HTPC applications. Better yet, consumers now have a single-GPU solution capable of driving three monitors in 3D Vision Surround, thanks to two DL-DVI ports supplemented by HDMI and DisplayPort outputs.
Defining value in the premium-priced high-end segment isn't easy, because hardware enthusiasts know they're going to pay top dollar to own the top product. Even still, rating value is like chasing a fast-moving target, so please believe me when I say that prices change by the minute in this industry. The GeForce GTX 670 "Kepler" graphics card demonstrates NVIDIA's ability to innovate in the graphics segment while maintaining a firm lead in their market, but it comes at a cost. The NVIDIA GeForce GTX 670 shares the same $399 price segment with AMD's Radeon HD 7950, yet performs like the 7970. So with regard to value, the GeForce GTX 670 delivers more features and better performance than the less-powerful AMD Radeon HD 7950, and occasionally meets or exceeds the performance of the $550 Radeon HD 7970. Even if we ignore the GTX 670's faster FPS results, its features and functionality are off the chart. Furthermore, only NVIDIA's video cards offer multi-display 3D gaming, Adaptive VSync, PhysX technology, GPU Boost, FXAA, and now TXAA.
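One way to frame the value argument is frames per dollar. In the sketch below the average FPS numbers are hypothetical stand-ins; only the $399 and $550 price points come from this review.

```python
def fps_per_dollar(avg_fps, price_usd):
    """Simple value metric: average frames per second per dollar spent."""
    return avg_fps / price_usd

# Hypothetical 1920x1080 FPS averages; prices as cited in the review.
gtx_670 = fps_per_dollar(62.0, 399)  # roughly 0.155 FPS per dollar
hd_7970 = fps_per_dollar(60.0, 550)  # roughly 0.109 FPS per dollar
```

Under any assumption where the two cards trade blows on raw FPS, the $151 price gap makes the value metric land decisively in the GTX 670's favor.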
As of launch day 10 May 2012, GeForce GTX 670 is available at Newegg from several NVIDIA partners:
While the NVIDIA GeForce GTX 680 may be the reigning champion, the GTX 670 saves gamers $100 while delivering remarkably similar performance. NVIDIA's 28nm GK104 'Kepler' GPU has made a huge difference in power consumption and heat output, features that really have my attention. It won't surprise me if enthusiasts find themselves divided on their purchase: overclock the GeForce GTX 670 to perform like a GTX 680 (which we may test in a follow-up article), or combine two into an SLI set. Regardless, the performance is there, and it reinforces the value. Still, I think most people are waiting to see what the GeForce GTX 650/660 will offer.
So what do you think of the NVIDIA GeForce GTX 670 graphics card, and are you planning to buy one?
Comments
Also, I was wondering if Battlefield 3 on three monitors with 3D Vision will be as memory hungry, or if I should wait for the upcoming 4GB cards?
Go to the toughest bench first -> Metro 2033: 3 frame difference
Go to "will it play nerfed" -> Crysis 2: 3 frame difference
Go to G-buffer tech to see a memory-intensive app -> Battlefield 3: 3 frame difference!
Finish off with a PhysX example/open city that opened with shaky driver support: Arkham City... 3 frame difference
Check the price again...
According to NVIDIA math, 3 frames represents the single-GPU flagship performance that justifies a $100 price difference?
Perhaps a Metro 2033/Battlefield 3 3D Vision + NV Surround setup with dual, tri (quad?) SLI is needed to show an advantage not yet exposed?
Otherwise, I sure am glad there hasn't been any GTX 680 stock.
(I suppose a new wave of powerful games that won't be as nerfed by console economics once the next generation of consoles is released will validate the $100 difference as well... but I would suspect that Metro 2033 should have been enough of a burden for the GTX 680 to prove the advantage of its muscle??)
Was a GK100 chip or GTX 780 rumor ever accounted for??
It would seem reasonable, looking at the results of this bench, that perhaps some of the rumors have some truth? Perhaps the GTX 680 is a different flagship class than the 480 and 580???
(Otherwise, the GTX 570 was virtually equal to a 480... compared to the KO "beatdown" the 670 gives the 580? Boom!)
Don't know why the article starts with a concentrated comparison against the 570 it replaces? The obvious story is: why is this card practically a GTX 680, a GK104 with the same memory?? Just turned down slightly? Better power handling under load, while at a lower temperature under load? (With a 3 frame difference at the max settings given, I would love to see if it can overclock as well?)
If it can... then, $300 cheaper, I can easily see GTX 670 tri-SLI being the big story of the year (if it scales well?)
Just seems so weird at this price point to actually dive in just yet. But if nothing develops that explains or ruins the ridiculous value... after x-mas I will definitely tri-SLI the 670.
???
Unbelievable!
Knowing that I might have saved $300 on my tri-SLI replacement makes me so happy, I have spent the past round of benchmarks squealing in glee like a little girl riding a pretty pink pony bareback! (Some of which show even less of a divide, like Tom's showing less than the 3-frame difference in Metro 2033, even at the extreme rez of 2560x1600!!)
Sure is hard for me to consider the possible implications "pointless"? Though I suppose current developments might just be dumb luck.
But let us assume you are exactly right? In which case, what happens to GTX 680 production? How can they possibly sell any more of them at $100 more now? What kind of idiot would you have to be? (I suppose some might just blindly trudge on despite the obvious value disparity?)
The single-GPU flagship has been a point of pride and prestige as much as anything else! And that's in light of the recent Jon Peddie Research numbers showing a projected multi-billion dollar ($23 billion) PC gaming market explosion driven by performance gaming systems. Where, according to their research:
The enthusiast market of systems costing over $3,400 a build represents as much of the final dollar share as performance systems costing over $1,000 a build! And each of those alone is more than double the market's dollar share compared to the ghetto gamer (gaming builds costing about $800), who invariably QQ's the loudest in support of their needs while contributing the least, to the detriment of the gaming experience suffered by the performance-and-above market who actually created this explosion in the first place! In other words, the low- and mid-range users at the end of this console cycle are poised to become the new least common denominator that brings down the level of gaming for the rest of us! (Now that the blame can soon no longer be leveled squarely at the consoles.)
In light of the new PC gaming revolution, the last thing NVIDIA would want to do is lose credibility on their flagship model when the enthusiast market numbers show how important they actually are.
(Unless they are going to play smoke and mirrors and just promote their dual-GPU flagship and the GTX 670, and ignore the GTX 680?)
Just as long as there is a price drop now on the GTX 690, so original GTX 680 hopefuls do not do something really dumb out of anger and haste like "go with AMD".
That would be a tough choice between them.
I am kind of excited now to see what results from the upcoming 660 benchmarks?
At this point maybe we will see another deal, where that card is too powerful compared to the 670!
It could happen. I haven't gone cheap in a long time. It would be awesome, actually? Considering Maxwell seems to be creeping up incredibly fast?
But then again, Metro 2033 was actually 4 frames, not 3!
So maybe the 680 did "start" to show its muscle by pulling away under the burden of Metro 2033?
(So perhaps a stress test representing the upcoming next-gen 2.0 market, with the added burden of extreme resolution and 3D Vision?)
That Metro 2033's burden still only resulted in a 1 frame difference does not seem to inspire much confidence though? (Heck, 1 frame isn't enough concern for most users to give any credence to a PCI-E x8 speed difference.)
Unbelievable?
Take it out, clean it, replace the thermal paste with some AS5, and come back to tell us how much the temps dropped.
I'm looking to use it for CUDA rendering.