Reviews - Featured Reviews: Video Cards
Written by Olin Coles
Thursday, 03 May 2012
NVIDIA GeForce GTX 690 Dual-Kepler GPU Video Card Performance

Benchmark Reviews tests performance for the world's most powerful graphics solution
Manufacturer: NVIDIA

Full Disclosure: The product sample used in this article has been provided by NVIDIA.

NOTE: Benchmark Reviews has published our NVIDIA GeForce GTX 690 Features Overview in a separate article.

Back on 22 March 2012, the NVIDIA GeForce GTX 680 video card made headlines and became the best-performing single-GPU graphics card on the market. Only six weeks later, NVIDIA engineers have successfully combined two 28nm GK104 GPUs to create their new GeForce GTX 690. In this article Benchmark Reviews tests game performance with the NVIDIA GeForce GTX 690, a double-slot graphics card equipped with a pair of Kepler GPUs. Featuring NVIDIA's cutting-edge GPU Boost technology, the GeForce GTX 690 video card can dynamically adjust power and clock speeds based on real-time application demands. Using EVGA Precision X, the GeForce GTX 690 has both GPUs overclocked beyond 1200 MHz to produce ultimate graphical performance in PC video games. NVIDIA targets top-end enthusiasts with their ultra-premium GeForce GTX 690 discrete graphics card, a product aimed squarely at the most affluent PC gamers. To best illustrate the GTX 690's dual-GPU performance, we use the most demanding PC video game titles and benchmark applications available. Video frame rate performance is tested against a large collection of competing desktop graphics products, such as the AMD Radeon HD 7970 (Tahiti). Crysis Warhead compares DirectX 10 performance levels, joined by newer DirectX 11 benchmarks such as 3DMark11, Batman: Arkham City, Battlefield 3, and Unigine Heaven 3.0.
VGA Testing Methodology

The Microsoft DirectX-11 graphics API is native to the Microsoft Windows 7 Operating System, which will be the primary O/S for our test platform. DX11 is also available as a Microsoft Update for the Windows Vista O/S, so our test results apply to both versions of the Operating System. The majority of benchmark tests used in this article measure DX11 performance; however, some high-demand DX10 tests have also been included. In each benchmark test, one 'cache run' is conducted first, followed by five recorded test runs. Results are collected at each setting, with the highest and lowest results discarded. The remaining three results are averaged and displayed in the performance charts on the following pages.
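The averaging scheme described above amounts to a trimmed mean. The sketch below is illustrative only (not Benchmark Reviews' actual tooling): five recorded results, drop the best and worst, average the middle three. The sample frame-rate values are invented for the example.

```python
def average_benchmark_runs(recorded_fps):
    """Drop the highest and lowest of five recorded results and
    average the remaining three."""
    if len(recorded_fps) != 5:
        raise ValueError("expected exactly five recorded runs")
    trimmed = sorted(recorded_fps)[1:-1]  # discard lowest and highest
    return sum(trimmed) / len(trimmed)

# Example: five recorded frame-rate results from one test setting
runs = [81.2, 79.8, 80.5, 84.1, 80.1]
print(round(average_benchmark_runs(runs), 2))
```

Discarding the outliers this way keeps a single anomalous run (a background task, a driver hiccup) from skewing the charted result.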
A combination of synthetic and video game benchmark tests has been used in this article to illustrate relative performance among graphics solutions. Our benchmark frame rate results are not intended to represent real-world graphics performance, as this experience would change based on supporting hardware and the perception of the individuals playing the video game.

Intel X79 Express Test System
DirectX-10 Benchmark Applications
DirectX-11 Benchmark Applications
PCI-Express Graphics Cards
Graphics Card | GeForce GTX570 | Radeon HD6970 | GeForce GTX580 | Radeon HD7970 | GeForce GTX680 | Radeon HD6990 | GeForce GTX590 | GeForce GTX690 |
---|---|---|---|---|---|---|---|---|
GPU Cores | 480 | 1536 | 512 | 2048 | 1536 | 3072 Total | 1024 | 3072 |
Core Clock (MHz) | 732 | 880 | 772 | 925 | 1006 (1187 OC) | 830/880 | 608 | 915 (1053 OC) |
Shader Clock (MHz) | 1464 | N/A | 1544 | N/A | Boost 1058 (1240 OC) | N/A | 1215 | Boost 1020 (1215 OC) |
Memory Clock (MHz) | 950 | 1375 | 1002 | 1375 | 1502 (1600 OC) | 1250 | 854 | 1502 (1601 OC) |
Memory Amount | 1280MB GDDR5 | 2048MB GDDR5 | 1536MB GDDR5 | 3072MB GDDR5 | 2048MB GDDR5 | 4096MB GDDR5 | 3072MB GDDR5 | 4096MB GDDR5 |
Memory Interface | 320-bit | 256-bit | 384-bit | 384-bit | 256-bit | 256-bit | 384-bit | 256-bit |
DX11: 3DMark11
Futuremark 3DMark11 is the latest addition to the 3DMark benchmark series built by Futuremark Corporation. 3DMark11 is a PC benchmark suite designed to test DirectX-11 graphics card performance without vendor preference. Although 3DMark11 includes the unbiased Bullet Open Source Physics Library instead of NVIDIA PhysX for the CPU/Physics tests, Benchmark Reviews concentrates on the four graphics-only tests in 3DMark11 and uses them with the medium-level 'Performance' preset.
The 'Performance' level setting applies 1x multi-sample anti-aliasing and trilinear texture filtering to a 1280x720p resolution. The tessellation detail, when called upon by a test, is preset to level 5, with a maximum tessellation factor of 10. The shadow map size is limited to 5 and the shadow cascade count is set to 4, while the surface shadow sample count is at the maximum value of 16. Ambient occlusion is enabled, and preset to a quality level of 5.
- Futuremark 3DMark11 Professional Edition
- Settings: Performance Level Preset, 1280x720, 1x AA, Trilinear Filtering, Tessellation Level 5
3DMark11 Benchmark Test Results
DX11: Aliens vs Predator
Aliens vs. Predator is a science fiction first-person shooter video game, developed by Rebellion, and published by Sega for Microsoft Windows, Sony PlayStation 3, and Microsoft Xbox 360. Aliens vs. Predator utilizes Rebellion's proprietary Asura game engine, which had previously found its way into Call of Duty: World at War and Rogue Warrior. The self-contained benchmark tool is used for our DirectX-11 tests, which push the Asura game engine to its limit.
In our benchmark tests, Aliens vs. Predator was configured to use the highest quality settings with 4x AA and 16x AF. DirectX-11 features such as Screen Space Ambient Occlusion (SSAO) and tessellation have also been included, along with advanced shadows.
- Aliens vs Predator
- Settings: Very High Quality, 4x AA, 16x AF, SSAO, Tessellation, Advanced Shadows
Aliens vs Predator Benchmark Test Results
DX11: Batman Arkham City
Batman: Arkham City is a third-person action game that adheres to the story line previously set forth in Batman: Arkham Asylum, which launched for game consoles and PC back in 2009. Based on an updated Unreal Engine 3 game engine, Batman: Arkham City enjoys DirectX 11 graphics which use multi-threaded rendering to produce life-like tessellation effects. While gaming console versions of Batman: Arkham City deliver high-definition graphics at either 720p or 1080i, you'll only get the highest-quality graphics and special effects on the PC.
In an age when developers give game consoles priority over the PC, it's becoming difficult to find games that show off the stunning visual effects and lifelike quality possible from modern graphics cards. Fortunately, Batman: Arkham City does amazingly well on both platforms, while at the same time making it possible to cripple the most advanced graphics card on the planet by offering extremely demanding NVIDIA 32x CSAA and full PhysX capability. Also available to PC users (with NVIDIA graphics) is FXAA, a shader-based image filter that achieves results similar to MSAA yet requires less memory and processing power.
Batman: Arkham City offers varying levels of PhysX effects, each with its own set of hardware requirements. You can turn PhysX off, or enable the 'Normal' level, which introduces GPU-accelerated PhysX elements such as Debris Particles, Volumetric Smoke, and Destructible Environments into the game, while the 'High' setting adds real-time cloth and paper simulation. Particles exist everywhere in real life, and this PhysX effect is used in many aspects of the game to add back that same sense of realism. For PC gamers who are enthusiastic about graphics quality, don't skimp on PhysX: DirectX 11 makes it possible to enjoy many of these effects, and PhysX helps bring them to life in the game.
- Batman: Arkham City
- Settings: 8x AA, 16x AF, MVSS+HBAO, High Tessellation, Extreme Detail, PhysX Disabled
Batman: Arkham City Benchmark Test Results
DX11: Battlefield 3
In Battlefield 3, players step into the role of elite U.S. Marines. As the first boots on the ground, players will experience heart-pounding missions across diverse locations including Paris, Tehran, and New York. As a U.S. Marine in the field, periods of tension and anticipation are punctuated by moments of complete chaos. As bullets whiz by, walls crumble, and explosions force players to the ground, the battlefield feels more alive and interactive than ever before.
The graphics engine behind Battlefield 3 is called Frostbite 2, which delivers realistic global illumination lighting along with dynamic destructible environments. The game uses a hardware terrain tessellation method that allows a high number of detailed triangles to be rendered entirely on the GPU when near the terrain. This allows for a very low memory footprint and relies on the GPU alone to expand the low res data to highly realistic detail.
Using Fraps to record frame rates, our Battlefield 3 benchmark test uses a three-minute capture on the 'Secure Parking Lot' stage of Operation Swordbreaker. Relative to the online multiplayer action, these frame rate results are nearly identical to daytime maps with the same video settings.
- Battlefield 3
- Settings: Ultra Graphics Quality, FOV 90, 180-second Fraps Scene
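For game engines without a built-in benchmark, the Fraps capture described above boils down to simple arithmetic: average FPS over a scene is frame count divided by elapsed time. The sketch below is a hedged illustration (the timestamp values and helper name are ours, not Fraps' actual log format), assuming per-frame timestamps in milliseconds from capture start.

```python
def average_fps(timestamps_ms):
    """Average frame rate from per-frame timestamps in milliseconds."""
    if len(timestamps_ms) < 2:
        raise ValueError("need at least two frames")
    elapsed_s = (timestamps_ms[-1] - timestamps_ms[0]) / 1000.0
    # Intervals between frames, not frame count, define elapsed time
    return (len(timestamps_ms) - 1) / elapsed_s

# Example: four frames spaced roughly 16.7 ms apart (about 60 FPS)
print(round(average_fps([0.0, 16.7, 33.4, 50.1]), 1))
```

Averaging over a full 180-second capture, as done here, smooths out momentary spikes and dips that a shorter sample would exaggerate.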
Battlefield 3 Benchmark Test Results
DX11: Gugila GroundWiz RTS
Gugila's GroundWiz RTS application showcases real-time shader technology. In DirectX 11 tests, terrain rendering uses displacement, tessellation and higher detail ground surfaces. GroundWiz RTS is optimized for parallel computing using multiple CPUs and GPU shaders to achieve real-time performance.
Procedural displacement tessellation is supported on DirectX11-compatible graphics cards. This feature adds a great amount of terrain detail, which is especially noticeable on rocks and mountainous terrain. The amount of tessellation is user controllable and should be adjusted to the speed of the graphics card.
Another important aspect is procedural terrain roughness, controllable per ground layer. Terrain roughness affects lighting via normal mapping, and also layer distribution. Optimized routines in the GroundWiz RTS Terrain Map make it possible to render a large layer tree in real-time (16 layers and more). The current version is optimized for graphics cards that support Shader Model 3.0 and above.
- Gugila GroundWiz RTS 2.1 Demo: Alpine
- Settings: DirectX 11 Renderer, 1280x720p Resolution, Tessellation Normal, Shadow Mapping 1024, CPU 1t, 60-Second Duration
Gugila GroundWiz Alpine Benchmark Test Results
EDITOR'S NOTE 22 March 2012: AMD representatives and their PR firm were both contacted nearly one week prior to publication of this article, alerting them to the failure of their Radeon HD 7900 series with the Gugila GroundWiz benchmark using DirectX 11 rendering. To date, no response has been received and no driver update has been posted. It remains unclear why the R7900 series functions with the DX9 version of this test, but fails in DX11 mode.
DX11: Lost Planet 2
Lost Planet 2 is the second installment in the saga of the planet E.D.N. III, ten years after the story of Lost Planet: Extreme Condition. The snow has melted and the lush jungle life of the planet has emerged with angry and luscious flora and fauna. With the new environment comes the addition of DirectX-11 technology to the game.
Lost Planet 2 takes advantage of DX11 features including tessellation and displacement mapping on water, level bosses, and player characters. In addition, soft body compute shaders are used on 'Boss' characters, and wave simulation is performed using DirectCompute. These cutting edge features make for an excellent benchmark for top-of-the-line consumer GPUs.
The Lost Planet 2 benchmark offers two different tests, which serve different purposes. This article uses tests conducted on benchmark B, which is designed to be a deterministic and effective benchmark tool featuring DirectX 11 elements.
- Lost Planet 2 Benchmark 1.0
- Settings: Benchmark B, 4x AA, Blur Off, High Shadow Detail, High Texture, High Render, High DirectX 11 Features
Lost Planet 2 Benchmark Test Results
DX11: Metro 2033
Metro 2033 is an action-oriented video game with a combination of survival horror, and first-person shooter elements. The game is based on the novel Metro 2033 by Russian author Dmitry Glukhovsky. It was developed by 4A Games in Ukraine and released in March 2010 for Microsoft Windows. Metro 2033 uses the 4A game engine, developed by 4A Games. The 4A Engine supports DirectX-9, 10, and 11, along with NVIDIA PhysX and GeForce 3D Vision.
The 4A engine is multi-threaded such that only PhysX has a dedicated thread; it uses a task model without any pre-conditioning or pre/post-synchronizing, allowing tasks to be done in parallel. The 4A game engine can utilize a deferred shading pipeline and uses tessellation for greater performance; it also offers HDR (complete with blue shift), real-time reflections, color correction, film grain and noise, and supports multi-core rendering.
Metro 2033 features superior volumetric fog, double PhysX precision, object blur, sub-surface scattering for skin shaders, parallax mapping on all surfaces, and greater geometric detail with less aggressive LODs. Using PhysX, the engine delivers features such as destructible environments, cloth and water simulation, and particles that can be fully affected by environmental factors.
NVIDIA has been diligently working to promote Metro 2033, and for good reason: it's one of the most demanding PC video games we've ever tested. When the former flagship GeForce GTX 480 struggles to produce 27 FPS with DirectX-11 anti-aliasing turned down to its lowest setting, you know that only the strongest graphics processors will generate playable frame rates. All of our tests enable Advanced Depth of Field and Tessellation effects, but disable advanced PhysX options.
- Metro 2033 Benchmark
- Settings: Very-High Quality, 4x AA, 16x AF, Tessellation, PhysX Disabled
Metro 2033 Benchmark Test Results
DX11: Unigine Heaven 3.0
The Unigine Heaven benchmark is a free, publicly available tool that exercises DirectX-11 graphics capabilities on Windows 7 or updated Vista Operating Systems. It reveals the enchanting magic of floating islands with a tiny village hidden in the cloudy skies, and an interactive mode lets you explore this intricate world first-hand. Through its advanced renderer, Unigine was one of the first to showcase art assets with tessellation, bringing compelling visual finesse, utilizing the technology to its full extent, and exhibiting the possibilities for enriching 3D gaming.
The distinguishing feature of the Unigine Heaven benchmark is hardware tessellation: a scalable technology for automatic subdivision of polygons into smaller and finer pieces, so that developers can give their games a more detailed look almost free of charge in terms of performance. Thanks to this procedure, the rendered image approaches the boundary of lifelike visual perception.
Since only DX11-compliant video cards will properly test on the Heaven benchmark, only those products that meet the requirements have been included.
- Unigine Heaven Benchmark 3.0
- Settings: DirectX 11, High Quality, Extreme Tessellation, 16x AF, 4x AA
Heaven Benchmark Test Results
VGA Power Consumption
In this section, PCI-Express graphics cards are isolated for idle and loaded electrical power consumption. In our power consumption tests, Benchmark Reviews utilizes an 80-PLUS GOLD certified OCZ Z-Series Gold 850W PSU, model OCZZ850. This power supply unit has been tested to provide over 90% typical efficiency by Chroma System Solutions. To measure isolated video card power consumption, Benchmark Reviews uses the Kill-A-Watt EZ (model P4460) power meter made by P3 International. In this particular test, all power consumption results were verified with a second power meter for accuracy.
The power consumption statistics discussed in this section are absolute maximum values, and may not represent real-world power consumption created by video games or graphics applications.
A baseline measurement is taken without any video card installed on our test computer system, which is allowed to boot into Windows 7 and rest idle at the login screen before power consumption is recorded. Once the baseline reading has been taken, the graphics card is installed and the system is again booted into Windows and left idle at the login screen before taking the idle reading. Our final loaded power consumption reading is taken with the video card running a stress test using graphics test #4 on 3DMark11. Below is a chart with the isolated video card power consumption (system without video card subtracted from measured combined total) displayed in Watts for each specified test product:
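The isolation arithmetic described above can be sketched in a few lines: subtract the baseline system draw (no video card installed) from the measured combined total. This is illustrative only; the wattage figures in the example are hypothetical, not new measurements.

```python
def isolated_card_power(combined_total_w, baseline_w):
    """Estimate video card draw as combined total minus the
    no-card baseline measured on the same system."""
    if combined_total_w < baseline_w:
        raise ValueError("combined total cannot be below baseline")
    return combined_total_w - baseline_w

# Example: a hypothetical 410 W loaded total against an 89 W baseline
print(isolated_card_power(410, 89))  # prints 321
```

Note that this method attributes any load-induced increase in CPU or PSU losses to the card, so isolated figures are best treated as comparative rather than absolute.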
VGA Product Description (sorted by combined total power) | Idle Power | Loaded Power |
---|---|---|
NVIDIA GeForce GTX 480 SLI Set | 82 W | 655 W |
NVIDIA GeForce GTX 590 Reference Design | 53 W | 396 W |
ATI Radeon HD 4870 X2 Reference Design | 100 W | 320 W |
AMD Radeon HD 6990 Reference Design | 46 W | 350 W |
NVIDIA GeForce GTX 295 Reference Design | 74 W | 302 W |
ASUS GeForce GTX 480 Reference Design | 39 W | 315 W |
ATI Radeon HD 5970 Reference Design | 48 W | 299 W |
NVIDIA GeForce GTX 690 Reference Design | 25 W | 321 W |
ATI Radeon HD 4850 CrossFireX Set | 123 W | 210 W |
ATI Radeon HD 4890 Reference Design | 65 W | 268 W |
AMD Radeon HD 7970 Reference Design | 21 W | 311 W |
NVIDIA GeForce GTX 470 Reference Design | 42 W | 278 W |
NVIDIA GeForce GTX 580 Reference Design | 31 W | 246 W |
NVIDIA GeForce GTX 570 Reference Design | 31 W | 241 W |
ATI Radeon HD 5870 Reference Design | 25 W | 240 W |
ATI Radeon HD 6970 Reference Design | 24 W | 233 W |
NVIDIA GeForce GTX 465 Reference Design | 36 W | 219 W |
NVIDIA GeForce GTX 680 Reference Design | 14 W | 243 W |
Sapphire Radeon HD 4850 X2 11139-00-40R | 73 W | 180 W |
NVIDIA GeForce 9800 GX2 Reference Design | 85 W | 186 W |
NVIDIA GeForce GTX 780 Reference Design | 10 W | 275 W |
NVIDIA GeForce GTX 770 Reference Design | 9 W | 256 W |
NVIDIA GeForce GTX 280 Reference Design | 35 W | 225 W |
NVIDIA GeForce GTX 260 (216) Reference Design | 42 W | 203 W |
ATI Radeon HD 4870 Reference Design | 58 W | 166 W |
NVIDIA GeForce GTX 560 Ti Reference Design | 17 W | 199 W |
NVIDIA GeForce GTX 460 Reference Design | 18 W | 167 W |
AMD Radeon HD 6870 Reference Design | 20 W | 162 W |
NVIDIA GeForce GTX 670 Reference Design | 14 W | 167 W |
ATI Radeon HD 5850 Reference Design | 24 W | 157 W |
NVIDIA GeForce GTX 650 Ti BOOST Reference Design | 8 W | 164 W |
AMD Radeon HD 6850 Reference Design | 20 W | 139 W |
NVIDIA GeForce 8800 GT Reference Design | 31 W | 133 W |
ATI Radeon HD 4770 RV740 GDDR5 Reference Design | 37 W | 120 W |
ATI Radeon HD 5770 Reference Design | 16 W | 122 W |
NVIDIA GeForce GTS 450 Reference Design | 22 W | 115 W |
NVIDIA GeForce GTX 650 Ti Reference Design | 12 W | 112 W |
ATI Radeon HD 4670 Reference Design | 9 W | 70 W |
The GeForce GTX 690 accepts two 8-pin PCI-E power connections for normal operation, and will not activate the display unless proper power has been supplied. NVIDIA recommends a 650W power supply unit for stable operation with GTX 690, which should include both required 8-pin PCI-E connections without the use of adapters.
If you're familiar with how electronics function, it will come as no surprise that less power consumption equals less heat output, evidenced by our results below...
GeForce GTX 690 Temperatures
This section reports our temperature results with the GeForce GTX 690 under idle and maximum load conditions. During each test a 20°C ambient room temperature is maintained from start to finish, as measured by digital temperature sensors located outside the computer system. GPU-Z is used to measure the temperature at idle as reported by the GPU, and also under load. Using a modified version of FurMark's "Torture Test" to generate maximum thermal load, peak GPU temperature is recorded in high-power 3D mode. FurMark does two things extremely well: drives the thermal output of any graphics processor much higher than any video games realistically could, and it does so with consistency every time. Furmark works great for testing the stability of a GPU as the temperature rises to the highest possible output.
The temperatures illustrated below are absolute maximum values, and do not represent real-world temperatures created by video games or graphics applications:
Video Card | Idle Temp | Loaded Temp | Loaded Noise | Ambient |
---|---|---|---|---|
ATI Radeon HD 5850 | 39°C | 73°C | 7/10 | 20°C |
NVIDIA GeForce GTX 460 | 26°C | 65°C | 4/10 | 20°C |
AMD Radeon HD 6850 | 42°C | 77°C | 7/10 | 20°C |
AMD Radeon HD 6870 | 39°C | 74°C | 6/10 | 20°C |
ATI Radeon HD 5870 | 33°C | 78°C | 7/10 | 20°C |
NVIDIA GeForce GTX 560 Ti | 27°C | 78°C | 5/10 | 20°C |
NVIDIA GeForce GTX 570 | 32°C | 82°C | 7/10 | 20°C |
ATI Radeon HD 6970 | 35°C | 81°C | 6/10 | 20°C |
NVIDIA GeForce GTX 580 | 32°C | 70°C | 6/10 | 20°C |
NVIDIA GeForce GTX 590 | 33°C | 77°C | 6/10 | 20°C |
AMD Radeon HD 6990 | 40°C | 84°C | 8/10 | 20°C |
NVIDIA GeForce GTX 680 | 26°C | 75°C | 3/10 | 20°C |
NVIDIA GeForce GTX 690 | 30°C | 81°C | 4/10 | 20°C |
As we've already mentioned on the pages leading up to this section, NVIDIA's Kepler architecture yields a much more efficient GPU compared to previous designs. This becomes evident in the extremely low idle temperature and the modest loaded temperature. What's even more impressive than these results is how quietly the GeForce GTX 690 operates, barely changing from silent to almost silent as it reaches full load. Even with an open computer case exposing the video card, it's difficult to hear the cooling fan make any noise at all. While NVIDIA should be proud of updating their product line with the fastest graphics card on the planet, I'm happy they also made it one of the quietest-running flagship video cards we've ever tested.
NVIDIA GPU Boost
Like the CPU found inside computer systems, the GPU found in video cards almost never reaches 100% usage. In fact, the CPU and GPU rarely use more than 10% of their available power, yet consume energy regardless of how much is actually needed. NVIDIA has recognized this, and followed suit with CPU manufacturers to offer variable clock speeds and power consumption. NVIDIA GPU Boost technology enabled the GeForce GTX 680 video card to dynamically adjust power and clock speeds based on real-time application demands, automatically tweaking these settings several times per second.
In some of the more demanding video games and applications there may occasionally be a time when the GPU is being fully utilized, which is when NVIDIA GPU Boost increases clock frequency to deliver improved graphics performance and higher image quality. NVIDIA GPU Boost is a combination of dedicated hardware circuitry that continually monitors GPU power consumption along with software technology that works in the background on GeForce GTX 690, and automatically adjusts the graphics clock speed based on application demands.
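The behavior described above can be caricatured as a feedback loop: step the graphics clock up while measured board power stays under a target, and back off when it does not. To be clear, this is a purely illustrative model, not NVIDIA's actual GPU Boost algorithm; the base clock, power target, and step size below are invented numbers.

```python
def boost_step(clock_mhz, power_w, power_target_w,
               base_mhz=915, step_mhz=13):
    """Return the next clock: step up while under the power target,
    step down (never below the base clock) when over it."""
    if power_w < power_target_w:
        return clock_mhz + step_mhz
    return max(base_mhz, clock_mhz - step_mhz)

# Simulate a few monitoring samples of board power (Watts, invented)
clock = 915
for power in [240, 250, 262, 275, 268]:
    clock = boost_step(clock, power, power_target_w=263)
print(clock)
```

The real mechanism also weighs voltage, temperature, and per-application demand, but the core idea is the same: clock speed floats as high as the power budget allows.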
Using the EVGA PrecisionX tool (version 3.0.2, illustrated above with a special GeForce GTX 690 skin), our test sample remained stable with a 1215 MHz peak overclock (1158 MHz Boost) at the NVIDIA GPU Boost threshold, with the GeForce GTX 690's 4GB GDDR5 video frame buffer memory set to operate at 1601 MHz. Both settings used the maximum power target increase (135%), and all tests were run at 95% fan speed.
AMD and NVIDIA already stretch their GPUs pretty thin in terms of overclocking headroom, but there's a difference between thin and non-existent. In this section, Benchmark Reviews compares stock versus overclocked video card performance on the GeForce GTX 690 with default voltage supplied to the GPUs. Here are the test results:
GPU Overclocking Results
Test Item | Standard GPU | Overclocked GPU | Improvement
GeForce GTX 690 | 915/1020* MHz | 1053/1158* MHz | 138 MHz (15.1%)
DX11: 3dMark11 GT1 | 80.5 | 95.0 | 14.5 FPS (18.0%)
DX11: 3dMark11 GT2 | 82.1 | 96.6 | 14.5 FPS (17.7%)
DX11: 3dMark11 GT3 | 108.1 | 124.3 | 16.2 FPS (15.0%)
DX11: 3dMark11 GT4 | 50.8 | 59.0 | 8.2 FPS (16.1%)
DX11: Aliens vs Predator | 109.8 | 120.1 | 10.3 FPS (9.4%)
DX11: Batman Arkham City | 127 | 135 | 8.0 FPS (6.3%)
DX11: Battlefield 3 | 113 | 123 | 10.0 FPS (8.9%)
DX11: Heaven 3.0 | 86.3 | 96.1 | 9.8 FPS (11.4%)
DX11: Lost Planet 2 | 103.9 | 110.0 | 6.1 FPS (5.9%)
DX11: Metro 2033 | 75.0 | 81.7 | 6.7 FPS (8.9%)
*Please note that the actual NVIDIA GPU Boost clock speed will vary, depending on system conditions and application demands. Typical GPU Boost speed shown.
Overclocking Summary: With a 138 MHz overclock that represents a 15.1% increase in typical GPU speed, our baseline results indicate an average increase of about 11.8% in actual frame rate performance at 1920x1200 resolution. This usually amounted to an additional 10+ FPS in most scenes. This is a decent performance boost, especially considering we're working with a dual-GPU graphics card, and every extra frame translates into an advantage over your enemy.
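The summary figures can be checked directly from the table above; a quick sketch of the arithmetic:

```python
# Verify the overclocking summary math from the results table.
# Per-test improvement percentages, in table order:
improvements = [18.0, 17.7, 15.0, 16.1, 9.4, 6.3, 8.9, 11.4, 5.9, 8.9]
average = sum(improvements) / len(improvements)
print(round(average, 1))   # average frame-rate gain, in percent

# The GPU overclock itself: 915 MHz typical -> 1053 MHz typical
overclock_pct = (1053 - 915) / 915 * 100
print(round(overclock_pct, 1))   # percent clock increase
```

The ten per-test gains average out to roughly 11.8%, against a 15.1% clock increase, which shows the frame rate scales a bit less than linearly with GPU clock on this dual-GPU card.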
GeForce GTX 690 Conclusion
IMPORTANT: Although the rating and final score mentioned in this conclusion are made to be as objective as possible, please be advised that every author perceives these factors differently at various points in time. While we each do our best to ensure that all aspects of the product are considered, there are often unforeseen market conditions and manufacturer changes that occur after publication and could render our rating obsolete. Please do not base any purchase solely on our conclusion, as it represents our product rating specifically for the product tested, which may differ from future versions of the same product. Benchmark Reviews begins our conclusion with a short summary for each of the areas that we rate.
As we discovered with the GeForce GTX 680 video card just a few weeks back, NVIDIA's Kepler architecture is designed to operate faster, offer more features, deliver more functionality, use less energy, and generate less heat. Taking these design goals into consideration, it seems that Kepler would be the perfect fit for the dual-GPU GeForce GTX 690. Making a bold claim, NVIDIA insists the GeForce GTX 690 is capable of matching the performance of two GTX 680s in SLI. After running fifteen different benchmark tests, the results have proven their point. Spoiler alert: the GeForce GTX 690 easily surpassed the graphics performance of every product available, and in some tests it even doubled the performance of its closest competitor, AMD's Radeon HD 6990. Expect nothing short of amazing performance from the GeForce GTX 690, but let the test results prove my point:
DirectX 11 tests delivered an astounding lead for the NVIDIA GeForce GTX 690 in nearly all benchmarks. Testing with Aliens vs Predator, the GTX 690 enjoyed a 15 FPS advantage over the AMD Radeon HD 6990 and pushed nearly 35 FPS past the GTX 590. The demanding DX11 graphics of Batman: Arkham City made use of Kepler's optimized architecture, delivering a staggering lead to the GeForce GTX 690 over every other graphics card tested. Battlefield 3 continued the run, pushing the stock GTX 690 more than 47 FPS beyond the Radeon HD 6990 before receiving an overclock that sent it another 10 FPS higher. Lost Planet 2 played well on all graphics cards when set to high quality with 4x AA, yet the GeForce GTX 690 still surpassed Radeon HD 6990 performance by 45 FPS. Metro 2033 is another demanding game that requires high-end graphics to enjoy quality settings, which reduced the GTX 690's lead over the Radeon HD 6990 to just over 12 FPS.
Synthetic benchmark tools produced performance results similar to those seen in our video game tests. Futuremark's 3DMark11 benchmark suite strains high-end graphics cards with only mid-level settings displayed at 720p, yet the GeForce GTX 690 makes the test look silly and nearly doubles the next best performer. Gugila GroundWiz RTS Demo uses the Alpine scene to cripple graphics cards with real-time shadows so demanding that our tests had to run at 1280x720 just to get decent frame rate results. NVIDIA's GeForce GTX 690 did well, but it seems that AMD needs to sort out issues on their new Radeon HD 7970, which failed this test. Unfortunately, AMD did not consider this issue worthy of a response, even though the test works nicely with their other products. Finally, the Unigine Heaven 3.0 benchmark confirmed what we've seen in most other tests: NVIDIA's GeForce GTX 690 leading well ahead of the AMD Radeon HD 6990 (by more than 31 FPS at 1920x1080), and then leaping past it by more than 41 FPS once overclocked to maximum GPU Boost.
Appearance is a much more subjective matter, especially since this particular rating doesn't have any quantitative benchmark scores to fall back on. NVIDIA's GeForce GTX series has used a recognizable design over the past two years, and with the exception of transparent windows and treated materials, the GTX 690 looks very similar to their GTX 590 model. Expect most partners to dress up the original reference design by placing exciting graphics over the fan shroud or using colored plastic components. While looks might mean a lot to some consumers, keep in mind that this product outperforms the competition while generating much less heat and producing very little noise.
Construction is the one area in which NVIDIA continually shines, and thanks in part to extremely quiet operation paired with more efficient cores that consume less energy and emit less heat, I'm confident the GeForce GTX 690 will continue this tradition. Benchmark Reviews has published our NVIDIA GeForce GTX 690 Features Overview in a separate article, which details these improvements. Gamers wanting a single-card solution capable of driving three monitors in 3D Vision Surround will be pleased with the GeForce GTX 690, which offers three DL-DVI ports with a supplementary mini-DisplayPort output.
Defining value for an ultra-premium high-end product isn't easy because hardware enthusiasts know that they're going to pay top dollar to own a cutting-edge product. GeForce GTX 690 is the ultimate enthusiast graphics card intended for affluent top-end gamers, and demonstrates NVIDIA's ability to innovate the graphics segment while establishing a huge lead in the discrete graphics market... but it comes at a cost: $999 MSRP to be exact. Keeping in mind that the GTX 690 also offers triple-display 3D gaming, PhysX technology support, GPU Boost overclocking, FXAA, and now TXAA post-processing, consumers get more than sheer graphical processing power for their purchase.
Our GeForce GTX 690 test sample took the standard 915/1020 MHz GPU clock and easily reached 1053/1158 MHz, with GPU Boost helping to produce 1215 MHz when needed. Add this to the record-setting 6.0 GHz GDDR5 memory clock (which we also overclocked to 6.4 GHz), and enabling VSync in every game becomes a possibility... especially with NVIDIA Adaptive VSync now available to smooth the frame rate gaps. Using just one GeForce GTX 690 video card is enough to surpass the competition by nearly 100%, so imagine the graphics quality settings and resolutions possible with two units combined into a quad-SLI set.
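If the "6.4 GHz" figure seems at odds with the 1601 MHz setting reported by PrecisionX, a quick sketch reconciles the two: GDDR5 transfers four data words per command-clock cycle, so the effective data rate is four times the reported clock.

```python
# Reconcile the PrecisionX memory clock with the quoted effective rate.
# GDDR5 is quad-pumped: effective rate = 4 x command clock.
command_clock_mhz = 1601          # overclocked setting from PrecisionX
effective_mts = command_clock_mhz * 4   # mega-transfers per second
print(effective_mts / 1000)       # effective rate in "GHz"

# Stock: a 1500 MHz command clock gives the 6.0 GHz effective rate
print(1500 * 4 / 1000)
```

The same quad-pumping convention explains why memory overclocks are sometimes quoted as command clocks (1500 to 1601 MHz) and sometimes as effective rates (6.0 to 6.4 GHz).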
In conclusion, the NVIDIA GeForce GTX 690 has dramatically surpassed all graphics cards on the market and could be the top contender for much longer than we expect. AMD has been forced to show their hand, and I expect the Radeon HD 7990 to surface sometime soon. Yet, even if AMD manages to double the performance of a single Radeon HD 7970, it still won't surpass GTX 690 frame rates and will still lack the thermal performance, quiet acoustics, and FXAA/TXAA post-processing.
So what do you think of the NVIDIA GeForce GTX 690 Kepler graphics card, and are you planning to buy one?
Related Articles:
- MSI GTX N650Ti Power Edition Video Card
- XFX Radeon HD R7790 Video Card
- HIS Radeon HD6850 IceQ-X Turbo Video Card
- Intel Core i7-3820 Extreme Edition CPU
- Cooler Master CM 690 NVIDIA Edition Case
- NVIDIA APEX PhysX: CPU vs GPU Efficiency
- AMD A10-5800K Trinity Desktop Processor
- OCZ Black Edition DDR3-1600 Memory Kit
- ASUS P9X79 WS LGA2011 Workstation Motherboard
- OCZ Vertex 4 Solid State Drive
Comments
GTX 680s in SLI completely destroy a single 690.
saves you space on your motherboard.
price of 2x gtx 680 is same as 1x gtx 690.
gtx 680 2x power consumption.
With SLI/CrossFire, a 2x GPU setup incurs delays because the GPUs need to communicate with each other before making computations.
Everyone knows that SLI or CrossFire with 2 GPUs doesn't give you 2x performance. Some games don't fully support SLI, so you usually have to wait for game updates. You also need to wait for GPU driver updates to make SLI/CrossFire work efficiently on new games, especially on newly released GPU cards.
Which, by the way, an SLI of GTX 680s needs to be plugged into two PCIe slots, which in many cases causes a bottleneck because some mobos mix PCIe Gen2 and Gen3, and sometimes even with two Gen3 slots in SLI, some mobos will run them at Gen2, which is the bottleneck. Too much hassle.
A single GTX 690 = one PCIe slot running at Gen3.
I'll buy a 690, and in less than a year I'll buy one to SLI for half the price. They've already stated that it'll clock over 1,100 on stock cooling, so as far as you being "really sure", I'm not sure what you're smoking.
I had the 7950 gx2 back in the day and it ran as well as the 8800, and it was cheaper, minus dx10. A couple months later I bought another for half the price because i HAD ROOM. It lasted another 3-4 years, and still kicked ass.
If you're going to buy anything and they're the same price, why would you buy 2 that have less performance, instead of one at the same price using the same chips and more room to expand?
I'm not sure of your reasoning on this, it makes no sense.
And don't correct me on price, that is the price for two, actually useful 680's.
I've read an article about GeForce GTX 680 3-way SLI and that didn't impress me much... I think a 690 or a 2-way SLI of 680s is a gamer's dream for now. Maybe NVIDIA has some driver work to do for more power with more cards, dunno...
But hey, maybe they will benchmark a GTX 690 Quad SLI pack here... that would be more than great...
Would the cpu hold the GTX690 or the GTX680 back too much to be worth the investment, or will a GTX680 or GTX690 be able to deliver nevertheless?
You have somewhat put my worries to rest.
I have been thinking of upgrading since GTA V was announced. My GTX 295 can handle all current demanding games with surprising ease, but it is at its limits when I want to push the GTA IV graphics. I hope the GTA V engine is better optimized for PC.
I will try to hold back my eagerness to buy a new video card until I feel it is really necessary.
And I didn't see the power usage idle/load on power chart.
This isn't a motherboard review, so there's no point in testing PCI-E 2 vs 3. If someone has $1000 to burn on a video card, they're not using it on old hardware.
Seriously, good stuff, but did I miss the usual scoring in the conclusion section or is it just not present?
Sure, if you're running a multiple monitor setup, higher FPS is what you want, but STILL, dual 680s in SLI is way better than a single 690.
Also, I disagree that SLI 680s is better than one GTX 690. It's your money, so buy what you want, but there's no 'right' answer.
Do you know what the "p" in terms like "720p" actually stands for?
So I am afraid to buy Radeon now.
I have tried a few generations of Nvidia cards and all my driver problems have gone away.
So you can see why I would be hard pressed to ever go back to the video cards that gave me so much trouble.
I am not saying I will never ever try an AMD card again, But at the moment I decline.