SLI Performance: GTX 660 Ti vs GTX 670
Reviews - Featured Reviews: Video Cards
Written by Olin Coles   
Monday, 20 August 2012

NVIDIA SLI: GeForce GTX 660 Ti vs GTX 670

Benchmark Reviews Compares SLI Performance Between NVIDIA GeForce GTX 660 Ti and GeForce GTX 670.

Back in May 2012, NVIDIA released its $400 GeForce GTX 670 video card, securing the number-two position in the single-GPU product stack. Just three short months later, the GeForce GTX 660 Ti arrived and filled store shelves at the $300 price point. With a substantial $100 price difference between these two products, consumers might (incorrectly) presume there's a significant difference in hardware or performance. To the surprise of many, the GeForce GTX 670 and GTX 660 Ti are nearly the same card. Both feature the identical 28nm NVIDIA 'Kepler' GK104 graphics processor with 1344 CUDA cores, clocked to the same 915 MHz base and 980 MHz Boost speeds. Both cards also carry the same 2GB GDDR5 frame buffer, clocked to 1502 MHz. The only physical difference between the two products lies in the memory subsystem: the GeForce GTX 670 receives four 64-bit memory controllers (a 256-bit bus), while the GeForce GTX 660 Ti is built with three (a 192-bit bus).
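
For perspective, peak theoretical memory bandwidth follows directly from bus width and effective data rate; the arithmetic below is a simple illustration (GDDR5 transfers data at four times the 1502 MHz clock quoted above, or 6.008 GT/s):

    # Peak memory bandwidth (GB/s) = bus width in bytes x effective data rate in GT/s.
    # GDDR5 at a 1502 MHz command clock transfers data at 6.008 GT/s effective.

    def peak_bandwidth_gbps(bus_width_bits: int, effective_rate_gtps: float) -> float:
        return (bus_width_bits / 8) * effective_rate_gtps

    EFFECTIVE_RATE_GTPS = 1.502 * 4  # 6.008 GT/s

    print(f"GeForce GTX 670:    {peak_bandwidth_gbps(256, EFFECTIVE_RATE_GTPS):.1f} GB/s")  # ~192.3
    print(f"GeForce GTX 660 Ti: {peak_bandwidth_gbps(192, EFFECTIVE_RATE_GTPS):.1f} GB/s")  # ~144.2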

So will memory bandwidth amount to any real difference in video game performance? Most PC gamers have a PCI-Express 2.0 motherboard inside their computer, although the most recent motherboards and all of NVIDIA's GeForce GTX 600-series cards are PCI-Express 3.0 compliant. Additionally, nearly all high-performance video cards feature a 256-bit memory interface or wider (such as the 384-bit bus on some AMD Radeon HD 7000-series graphics cards). However, testing with these high-end graphics cards has shown little indication that bottlenecks actually occur at the PCI-Express level, even in the most demanding DirectX 11 video games. It's more likely that a bandwidth bottleneck occurs at the video card's memory subsystem, where the GPU can generate more traffic than the frame buffer can absorb. We found some evidence of this in our recent testing of the heavily overclocked ASUS GeForce GTX 660 Ti DirectCU II TOP, which matched a stock GTX 670 in all but the most demanding video games featuring large maps or virtual worlds.
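
To put the PCI-Express question in context, even a PCI-Express 2.0 x16 slot moves far less data per second than either card's local frame buffer, which is why bottlenecks tend to surface at the memory subsystem rather than the bus. The figures below are nominal peak rates, shown for illustration only:

    # Nominal peak throughput per direction for a x16 slot versus local GDDR5 bandwidth.
    # PCIe 2.0: 5 GT/s per lane, 8b/10b encoding   -> 0.5 GB/s per lane.
    # PCIe 3.0: 8 GT/s per lane, 128b/130b encoding -> ~0.985 GB/s per lane.

    pcie_2_x16_gbps = 16 * 5 * (8 / 10) / 8       # ~8.0 GB/s
    pcie_3_x16_gbps = 16 * 8 * (128 / 130) / 8    # ~15.8 GB/s
    gtx_660_ti_vram_gbps = 144.2                  # 192-bit GDDR5 at 6.008 GT/s
    gtx_670_vram_gbps = 192.3                     # 256-bit GDDR5 at 6.008 GT/s

    for label, gbps in [("PCIe 2.0 x16", pcie_2_x16_gbps), ("PCIe 3.0 x16", pcie_3_x16_gbps),
                        ("GTX 660 Ti VRAM", gtx_660_ti_vram_gbps), ("GTX 670 VRAM", gtx_670_vram_gbps)]:
        print(f"{label:16s} {gbps:6.1f} GB/s")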

Obviously the GPU plays the starring role in creating that bottleneck: less powerful processors can't generate enough memory transactions to saturate the memory subsystem, but this isn't difficult for powerful Kepler-based GeForce GTX 600-series products. Since there are times when the GeForce GTX 660 Ti's 192-bit memory bus can become a bottleneck, what happens if we split the demand between two cards connected in an SLI set? In theory, combining two graphics cards with SLI technology essentially doubles the memory bandwidth available to a given workload. Given that 256-bit configurations suffer very few bandwidth bottlenecks while 192-bit cards hit occasional limits, it seems plausible. The results should also help gamers decide between building an SLI set of GTX 660 Ti or GTX 670 graphics cards. That's the purpose of this article.
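
That 'doubling' argument rests on how SLI's default alternate frame rendering (AFR) mode divides the work: each GPU renders every other frame out of its own frame buffer, so the aggregate bandwidth available to the workload scales with the number of cards even though each card keeps its original bus width. The sketch below is an idealized illustration that ignores real-world SLI overhead such as duplicated assets and frame synchronization:

    # Idealized AFR view: each GPU renders alternate frames from its own frame buffer,
    # so aggregate memory bandwidth scales with card count. Actual scaling is lower
    # because textures are duplicated in both buffers and frames must be synchronized.

    def aggregate_bandwidth_gbps(per_card_gbps: float, cards: int = 2) -> float:
        return per_card_gbps * cards

    print(f"GTX 660 Ti SLI: {aggregate_bandwidth_gbps(144.2):.1f} GB/s aggregate")  # 288.4
    print(f"GTX 670 SLI:    {aggregate_bandwidth_gbps(192.3):.1f} GB/s aggregate")  # 384.6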

NVIDIA-SLI-Ready-Badge.jpg

                    GeForce GTX 660 Ti    GeForce GTX 670
GPU Cores           1344                  1344
Core Clock (MHz)    915                   915
Boost Clock (MHz)   980                   980
Memory Clock (MHz)  1502                  1502
Memory Amount       2048MB GDDR5          2048MB GDDR5
Memory Interface    192-bit               256-bit

The new and improved Kepler GPU architecture with NVIDIA GPU Boost technology is only the start, because GeForce GTX 600-series video cards deliver plenty of additional refinements to the user experience. FXAA and Adaptive VSync technologies result in less chop, stutter, and tearing in on-screen motion. Overclockers might see their enthusiast experiments complicated by NVIDIA GPU Boost, but its dynamic power and clock-speed adjustments can be supplemented with additional overclocking or disabled completely. Adaptive VSync, on the other hand, is a welcome addition for all users, from the gamer to the casual computer user: it enforces vertical sync only while the frame rate can sustain the monitor's refresh rate, and releases it whenever the frame rate falls too low, thereby reducing stutter and tearing artifacts. Finally, NVIDIA is introducing TXAA, a film-style anti-aliasing technique that combines hardware multisample anti-aliasing, a custom CG film-style AA resolve, and an optional temporal component for better image quality.
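
Adaptive VSync's rule of thumb is easy to express in code. The following sketch is purely illustrative of the per-frame decision, not NVIDIA's actual driver logic:

    # Illustrative per-frame decision behind Adaptive VSync: enforce vertical sync only
    # while the renderer can keep up with the display; otherwise present immediately
    # rather than dropping to half the refresh rate and stuttering.

    def adaptive_vsync_enabled(current_fps: float, refresh_rate_hz: float) -> bool:
        return current_fps >= refresh_rate_hz

    print(adaptive_vsync_enabled(75.0, 60.0))  # True  -> vsync on, tearing suppressed
    print(adaptive_vsync_enabled(48.0, 60.0))  # False -> vsync off, stutter avoided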

VGA Testing Methodology

The Microsoft DirectX-11 graphics API is native to the Microsoft Windows 7 Operating System, which serves as the primary O/S for our test platform. DX11 is also available as a Microsoft Update for the Windows Vista O/S, so our test results apply to both versions of the Operating System. The majority of benchmark tests used in this article compare DX11 performance; however, some high-demand DX10 tests have also been included.

In each benchmark test, one 'cache run' is conducted first, followed by five recorded test runs. Results are collected at each setting, with the highest and lowest results discarded; the remaining three results are averaged and displayed in the performance charts on the following pages.
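
In other words, each number plotted in the charts is a trimmed mean of the five recorded runs. A minimal sketch of that reduction (the run values shown are made up):

    # One warm-up 'cache run' is discarded, five runs are recorded, the highest and
    # lowest are dropped, and the remaining three are averaged for the chart.

    def benchmark_average(recorded_runs: list) -> float:
        assert len(recorded_runs) == 5, "methodology records exactly five runs"
        trimmed = sorted(recorded_runs)[1:-1]  # discard lowest and highest
        return sum(trimmed) / len(trimmed)

    print(round(benchmark_average([61.2, 63.8, 62.9, 64.4, 62.5]), 2))  # 63.07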

A combination of synthetic and video game benchmark tests have been used in this article to illustrate relative performance among graphics solutions. Our benchmark frame rate results are not intended to represent real-world graphics performance, as this experience would change based on supporting hardware and the perception of individuals playing the video game.

Intel X79 Express Test System

DirectX-10 Benchmark Applications

  • Crysis Warhead v1.1 with HOC Benchmark
    • Settings: Airfield Demo, Very High Quality, 4x AA, 16x AF

DirectX-11 Benchmark Applications

  • 3DMark11 Professional Edition by Futuremark
    • Settings: Performance Level Preset, 1280x720, 1x AA, Trilinear Filtering, Tessellation Level 5
  • Aliens vs Predator Benchmark 1.0
    • Settings: Very High Quality, 4x AA, 16x AF, SSAO, Tessellation, Advanced Shadows
  • Batman: Arkham City
    • Settings: 8x AA, 16x AF, MVSS+HBAO, High Tessellation, Extreme Detail, PhysX Disabled
  • Battlefield 3
    • Settings: Ultra Graphics Quality, FOV 90, 180-second Fraps Scene
  • Lost Planet 2 Benchmark 1.0
    • Settings: Benchmark B, 4x AA, Blur Off, High Shadow Detail, High Texture, High Render, High DirectX 11 Features
  • Metro 2033 Benchmark
    • Settings: Very-High Quality, 4x AA, 16x AF, Tessellation, PhysX Disabled
  • Unigine Heaven Benchmark 3.0
    • Settings: DirectX 11, High Quality, Extreme Tessellation, 16x AF, 4x AA

PCI-Express Graphics Cards

Graphics Card       Radeon HD 7970    GeForce GTX 660 Ti   GeForce GTX 670   GeForce GTX 680   Radeon HD 6990   GeForce GTX 690
GPU Cores           2048              1344                 1344              1536              3072 (total)     3072
Core Clock (MHz)    925               915                  915               1006 (1187 OC)    830/880          915 (1053 OC)
Boost Clock (MHz)   N/A               980                  980               1058 (1240 OC)    N/A              1020 (1215 OC)
Memory Clock (MHz)  1375              1502                 1502              1502 (1600 OC)    1250              1502 (1601 OC)
Memory Amount       3072MB GDDR5      2048MB GDDR5         2048MB GDDR5      2048MB GDDR5      4096MB GDDR5     4096MB GDDR5
Memory Interface    384-bit           192-bit              256-bit           256-bit           256-bit          256-bit

DX10: Crysis Warhead

Crysis Warhead is an expansion pack based on the original Crysis video game. Crysis Warhead is based in the future, where an ancient alien spacecraft has been discovered beneath the Earth on an island east of the Philippines. Crysis Warhead uses a refined version of the CryENGINE2 graphics engine. Like Crysis, Warhead uses the Microsoft Direct3D 10 (DirectX-10) API for graphics rendering.

Benchmark Reviews uses the HOC Crysis Warhead benchmark tool to test and measure graphics performance using the Airfield 1 demo scene. This short test places a high amount of stress on a graphics card, both because of its detailed terrain and textures and because of the test settings used. Using the DirectX-10 test with Very High quality settings, the Airfield 1 demo scene receives 4x anti-aliasing and 16x anisotropic filtering to create maximum graphics load and separate the products according to their performance.

Using the highest quality DirectX-10 settings with 4x AA and 16x AF, only the most powerful graphics cards are expected to perform well in our Crysis Warhead benchmark tests. DirectX-11 extensions are not supported in Crysis: Warhead, and SSAO is not an available option.

  • Crysis Warhead v1.1 with HOC Benchmark
    • Settings: Airfield Demo, Very High Quality, 4x AA, 16x AF

Crysis_Warhead_Benchmark.jpg

Crysis Warhead Benchmark Test Results


DX11: 3DMark11

Futuremark 3DMark11 is the latest addition to the 3DMark benchmark series built by Futuremark Corporation. 3DMark11 is a PC benchmark suite designed to test DirectX-11 graphics card performance without vendor preference. Although 3DMark11 includes the vendor-neutral Bullet open-source physics library instead of NVIDIA PhysX for the CPU/physics tests, Benchmark Reviews concentrates on the four graphics-only tests in 3DMark11 and runs them with the medium-level 'Performance' preset.

The 'Performance' level setting applies 1x multi-sample anti-aliasing and trilinear texture filtering to a 1280x720p resolution. The tessellation detail, when called upon by a test, is preset to level 5, with a maximum tessellation factor of 10. The shadow map size is limited to 5 and the shadow cascade count is set to 4, while the surface shadow sample count is at the maximum value of 16. Ambient occlusion is enabled, and preset to a quality level of 5.

3DMark11-Performance-Test-Settings.png

  • Futuremark 3DMark11 Professional Edition
    • Settings: Performance Level Preset, 1280x720, 1x AA, Trilinear Filtering, Tessellation Level 5

3dMark2011_Performance_GT1-2_Benchmark.jpg

3dMark2011_Performance_GT3-4_Benchmark.jpg

3DMark11 Benchmark Test Results


DX11: Aliens vs Predator

Aliens vs. Predator is a science fiction first-person shooter video game, developed by Rebellion and published by Sega for Microsoft Windows, Sony PlayStation 3, and Microsoft Xbox 360. Aliens vs. Predator utilizes Rebellion's proprietary Asura game engine, which the studio has used across several of its own titles. The self-contained benchmark tool is used for our DirectX-11 tests, which push the Asura game engine to its limit.

In our benchmark tests, Aliens vs. Predator was configured to use the highest quality settings with 4x AA and 16x AF. DirectX-11 features such as Screen Space Ambient Occlusion (SSAO) and tessellation have also been included, along with advanced shadows.

  • Aliens vs Predator
    • Settings: Very High Quality, 4x AA, 16x AF, SSAO, Tessellation, Advanced Shadows

Aliens-vs-Predator_DX11_Benchmark.jpg

Aliens vs Predator Benchmark Test Results


DX11: Batman Arkham City

Batman: Arkham City is a third-person action game that continues the story line set forth in Batman: Arkham Asylum, which launched for game consoles and PC back in 2009. Built on an updated Unreal Engine 3, Batman: Arkham City adds DirectX 11 graphics and uses multi-threaded rendering to produce life-like tessellation effects. While the game-console versions of Batman: Arkham City deliver high-definition graphics at either 720p or 1080i, you'll only get the highest-quality graphics and special effects on the PC.

In an age when developers give game consoles priority over PC, it's becoming difficult to find games that show off the stunning visual effects and lifelike quality possible from modern graphics cards. Fortunately Batman: Arkham City is a game that does amazingly well on both platforms, while at the same time making it possible to cripple the most advanced graphics card on the planet by offering extremely demanding NVIDIA 32x CSAA and full PhysX capability. Also available to PC users (with NVIDIA graphics) is FXAA, a shader based image filter that achieves similar results to MSAA yet requires less memory and processing power.

Batman: Arkham City offers varying levels of PhysX effects, each with its own set of hardware requirements. You can turn PhysX off, or enable the 'Normal' level, which introduces GPU-accelerated PhysX elements such as debris particles, volumetric smoke, and destructible environments into the game, while the 'High' setting adds real-time cloth and paper simulation. Particles exist everywhere in real life, and this PhysX effect is applied to many aspects of the game to restore that same sense of realism. PC gamers who are enthusiastic about graphics quality shouldn't skimp on PhysX: DirectX 11 makes many of these effects possible, and PhysX helps bring them to life in the game.

  • Batman: Arkham City
    • Settings: 8x AA, 16x AF, MVSS+HBAO, High Tessellation, Extreme Detail, PhysX Disabled

Batman-Arkham-City-Benchmark.jpg

Batman: Arkham City Benchmark Test Results


DX11: Battlefield 3

In Battlefield 3, players step into the role of elite U.S. Marines. As the first boots on the ground, players experience heart-pounding missions across diverse locations including Paris, Tehran, and New York. As a U.S. Marine in the field, periods of tension and anticipation are punctuated by moments of complete chaos. As bullets whiz by, walls crumble, and explosions force players to the ground, the battlefield feels more alive and interactive than ever before.

The graphics engine behind Battlefield 3 is called Frostbite 2, which delivers realistic global illumination lighting along with dynamic destructible environments. The game uses a hardware terrain tessellation method that allows a high number of detailed triangles to be rendered entirely on the GPU when near the terrain. This allows for a very low memory footprint and relies on the GPU alone to expand the low res data to highly realistic detail.

Using Fraps to record frame rates, our Battlefield 3 benchmark test uses a three-minute capture on the 'Secure Parking Lot' stage of Operation Swordbreaker. Relative to the online multiplayer action, these frame rate results are nearly identical to daytime maps with the same video settings.
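
Reducing a Fraps capture to the single value shown in the chart is just an average of the samples logged over the 180-second scene. The sketch below assumes a list of per-second FPS readings like those Fraps can export; the numbers are a made-up excerpt, not our test data:

    # Average the per-second FPS samples that Fraps logs across the benchmark capture.
    # 'sample_log' is an illustrative excerpt, not actual Battlefield 3 results.

    def average_fps(per_second_fps: list) -> float:
        return sum(per_second_fps) / len(per_second_fps)

    sample_log = [92, 88, 95, 90, 87, 93]
    print(f"{average_fps(sample_log):.1f} FPS")  # 90.8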

  • Battlefield 3
    • Settings: Ultra Graphics Quality, FOV 90, 180-second Fraps Scene

Battlefield-3_Benchmark.jpg

Battlefield 3 Benchmark Test Results


DX11: Lost Planet 2

Lost Planet 2 is the second installment in the saga of the planet E.D.N. III, ten years after the story of Lost Planet: Extreme Condition. The snow has melted and the lush jungle life of the planet has emerged with angry and luscious flora and fauna. With the new environment comes the addition of DirectX-11 technology to the game.

Lost Planet 2 takes advantage of DX11 features including tessellation and displacement mapping on water, level bosses, and player characters. In addition, soft body compute shaders are used on 'Boss' characters, and wave simulation is performed using DirectCompute. These cutting edge features make for an excellent benchmark for top-of-the-line consumer GPUs.

The Lost Planet 2 benchmark offers two different tests, which serve different purposes. This article uses tests conducted on benchmark B, which is designed to be a deterministic and effective benchmark tool featuring DirectX 11 elements.

  • Lost Planet 2 Benchmark 1.0
    • Settings: Benchmark B, 4x AA, Blur Off, High Shadow Detail, High Texture, High Render, High DirectX 11 Features

Lost-Planet-2_DX11_Benchmark.jpg

Lost Planet 2 Benchmark Test Results


DX11: Metro 2033

Metro 2033 is an action-oriented video game with a combination of survival horror, and first-person shooter elements. The game is based on the novel Metro 2033 by Russian author Dmitry Glukhovsky. It was developed by 4A Games in Ukraine and released in March 2010 for Microsoft Windows. Metro 2033 uses the 4A game engine, developed by 4A Games. The 4A Engine supports DirectX-9, 10, and 11, along with NVIDIA PhysX and GeForce 3D Vision.

The 4A engine is multi-threaded such that only PhysX has a dedicated thread; it uses a task model without any pre-conditioning or pre/post-synchronizing, allowing tasks to be executed in parallel. The engine employs a deferred shading pipeline, uses tessellation for greater performance, and supports HDR (complete with blue shift), real-time reflections, color correction, film grain and noise, and multi-core rendering.

Metro 2033 features superior volumetric fog, double PhysX precision, object blur, sub-surface scattering for skin shaders, parallax mapping on all surfaces, and greater geometric detail with less aggressive LODs. Using PhysX, the engine supports destructible environments, cloth and water simulation, and particles that can be fully affected by environmental factors.

NVIDIA has been diligently working to promote Metro 2033, and for good reason: it's one of the most demanding PC video games we've ever tested. When their former flagship GeForce GTX 480 struggles to produce 27 FPS with DirectX-11 anti-aliasing set to its lowest level, you know that only the strongest graphics processors will generate playable frame rates. All of our tests enable Advanced Depth of Field and tessellation effects, but disable the advanced PhysX options.

  • Metro 2033 Benchmark
    • Settings: Very-High Quality, 4x AA, 16x AF, Tessellation, PhysX Disabled

Metro-2033_DX11_Benchmark.jpg

Metro 2033 Benchmark Test Results


DX11: Unigine Heaven 3.0

The Unigine Heaven benchmark is a freely available tool for exercising DirectX-11 graphics capabilities under Windows 7 or an updated Windows Vista Operating System. It renders floating islands with a tiny village hidden in the cloudy skies, and an interactive mode lets you explore this intricate world for yourself. Through its advanced renderer, Unigine was one of the first to showcase art assets with tessellation, bringing compelling visual finesse and demonstrating what the technology can add to 3D gaming.

The distinguishing feature of the Unigine Heaven benchmark is hardware tessellation: a scalable technique that automatically subdivides polygons into smaller and finer pieces, so developers can add geometric detail to their games at very little performance cost. Thanks to this procedure, the rendered image comes considerably closer to how we perceive the real world.

Since only DX11-compliant video cards can properly run the Heaven benchmark, only those products that meet the requirement have been included.

  • Unigine Heaven Benchmark 3.0
    • Settings: DirectX 11, High Quality, Extreme Tessellation, 16x AF, 4x AA

Unigine_Heaven_DX11_Benchmark.jpg

Heaven Benchmark Test Results


SLI: GTX 660 Ti vs GTX 670 Conclusion

IMPORTANT: Although the rating and final score mentioned in this conclusion are made to be as objective as possible, please be advised that every author perceives these factors differently at various points in time. While we each do our best to ensure that all aspects of the product are considered, there are oftentimes unforeseen market conditions and manufacturer changes which occur after publication that could render our rating obsolete. Please do not base any purchase solely on our conclusion, as it represents our product rating specifically for the product tested, which may differ from future versions of the same product. Benchmark Reviews begins our conclusion with a short summary for each of the areas that we rate.

SLI-Benchmark-Performance-Test-Comparison-NVIDIA-GeForce-GTX-660-Ti-vs-GTX-670.jpg

The NVIDIA GeForce GTX 660 Ti presently sells for $300 online, while the GeForce GTX 670 video card currently costs $400. After reviewing the benchmark results for each graphics card in single and SLI configurations, there is plenty of evidence to suggest that NVIDIA's GeForce GTX 660 Ti delivers much better value. In Battlefield 3, Batman: Arkham City, Lost Planet 2, and Aliens vs Predator, the single GTX 660 Ti trailed right behind the more expensive GTX 670. Given a modest overclock, a gamer could very well save $100 and still enjoy GeForce GTX 670 levels of performance.

However, the SLI results didn't follow the same pattern: many of these same video games gave the SLI set of GeForce GTX 670s a dominant lead over the GTX 660 Ti SLI set. This was clearly the result of varying demands on memory and graphics processor, as well as each video card's ability to satisfy those demands. Not every benchmark ended with the GTX 660 Ti keeping pace with the GTX 670 (Metro 2033, Crysis Warhead, and Unigine Heaven, for example), and most SLI tests separated the cards' performance even further.

The GeForce GTX 660 Ti is nearly identical to the GTX 670 in every way except for the number of memory controllers, and performance in most games was just as similar when tested in a single-card configuration. Unfortunately, the opposite was true for a few games, leaving the GTX 660 Ti trailing behind a single GTX 670 and eating dust in SLI. These results make for a difficult determination.

The bottom line is that it depends on your display resolution. A higher pixel count (a larger display resolution) creates a bigger gap between these graphics cards in the games you'll be playing. This was demonstrated with a single-monitor configuration in our tests, using 1680x1050 (1.76MP) and 1920x1080 (2.07MP, roughly 18% more pixels) display resolutions. Had we tested these products across multiple monitors, the differences would have been greater still, pushing the products even farther apart.

Aside from Crysis Warhead, almost every SLI benchmark favored the GeForce GTX 670 by a significant margin. Our test results provide enough data to confirm that two GTX 670s are far more capable of generating high frame rates than two GTX 660 Ti video cards, though at typical single-monitor resolutions the gap doesn't quite justify the extra $100 per card. If you're planning a single-card system and deciding between these two cards, it's my opinion that an overclocked GeForce GTX 660 Ti is the way to go. However, if the plan is an SLI computer system with multiple monitors, it may be worth the extra money to run a pair of GeForce GTX 670s. Of course, I welcome your suggestions in the comment area below.



Comments 

 
# weird?Chris 2012-08-21 04:38
Colin, you need to get a 2560 display if you're going to bench with this much horsepower! Those Korean monitors on E-bay are a great value right now!

IS it possible that nvidia needs to perform a driver update as well?
 
 
# RE: weird?Olin Coles 2012-08-21 07:30
Who's Colin? As I (Olin) mention in the article, a larger resolution would merely mean a larger difference in results. There was enough evidence to indicate the outcome at 1680x1050 and 1920x1080, so 2560x1080 would have just pushed the SLI sets even further apart. As for the driver update, anything is possible but I don't think that there's too much room for improvement on a fourth-generation Kepler product.
 
 
# 4th gen?Chris 2012-12-18 09:14
What info do you have on 4th gen? This is the second generation after fermi. Making it the 2nd
 
 
# RE: weird?Ben 2012-08-22 00:37
I've got one of those 27 inch Korean monitors :)... and I can't go back to playing on a 1900 X 1080p monitor...

Where are the reviews to compare them in sli on a single monitor at 2560 X 1440/1600 resolution?

Just bought a 670 gtx over a 660 ti... because i am fairly certain that the 670 gtx in sli will CRUSH a 660 ti sli configuration.

Need to see numbers please!
 
 
# my opinionmatt 2012-08-21 13:29
if you're going for a 2 graphic card solution and you're already willing to drop the money on 2 $400 cards coming out to $800 total already, i don't see a reason not to just jump up another $200 and gain what i can only imagine will be an even bigger jump, by going with a 690.

like you mentioned once you go for an sli build the 660 loses it's shine, and (this is just my opinion) once you're already talking about an investment that big, i don't see any reason not to go for the ultimate when it's only $200 more.

i feel the 660 ti was always meant to be a very value oriented card, and when you're running sli it's a $600+ investment, to me that's a pretty sizable purchase for graphics cards, so i don't see very many people really wanting to sli them, it's simply not who i think nvidia was attempting to appeal too.

i myself intend to settle with a single 660 ti after seeing this, as jumping to a 670 isn't worth it, and jumping to 2 660 tis would make me want to climb the price ladder until i was at a 690 because again to me that seems like the best true value for extreme graphics solutions.
 
 
# RE: my opinionMark 2012-08-23 15:49
660 Ti of great value when you purchase the 3 GB model and overclock. Just like when I take my intel 960 i7 to 4.2 ghz and have it perform at a 980 level. OCing is the way to go
 
 
# RE: RE: my opinionOpalSerpent 2012-10-19 16:20
You will reduce the lifespan of your pc overclocking too much.

I fried my xps m1730 sli years ago with only moderate overclocking

on just the graphics cards.

It's a gamble as we know, some cards are better then others and some have issues.
 
 
# RE: my opinionRicky 2013-02-16 06:02
660TI SLI makes a lot of sense actually... $600 vs $999 and for $1000 less you are only losing about 20 frames. 670 SLI makes 0 sense considering there is only a 10 frame difference. 660TI SLI is actually the best value by far...
 
 
# Wondering...Diego 2012-08-21 15:59
You'll have the chance to make a review of the 660Ti FTW+ (3GB)? Would be very cool to see a proper review of this card.

I believe this card, overclocked, is better than 670 FTW.

Keep doing this great reviews. =D
 
 
# RE: SLI Performance: GeForce GTX 660 Ti vs GTX 670Doug Dallam 2012-08-21 17:12
Perfect solution for the graphs Olin. Very nice.

One thing that has me a little worried about trusting the results that show only AVG FPS is that we don't know the low FPS.

For instance, if you get an average of 48FPS, that's plenty, but if every time an explosion goes off you dip to 15FPS, then that's no good at all.

On the other hand, perhaps I am misinterpreting avg fps as a viable benchmark for how a card will play a game?
 
 
# RE: RE: SLI Performance: GeForce GTX 660 Ti vs GTX 670Olin Coles 2012-08-21 17:53
So in framerate performance tests that use the exact same scenes to benchmark results for all video cards, you're worried that there's a mysterious point in which one card will suffer a frame rate loss that somehow won't affect all of the other cards?
 
 
# RE: RE: RE: SLI Performance: GeForce GTX 660 Ti vs GTX 670Dendikken 2012-08-22 06:27
You should take a good look at how these people test videocards:

##hardocp.com/

Just running benchmarks simply doesn't cut it anymore, you need to play the games to see which cards gives the best experience... you might be in for a surprise.

And you really should have tested 2560 X 1440/1600 resolution, if you are going to buy a 660ti/670 sli setup then you won't be playing at 1680x1050, it's just ridiculous.

Just my 2 cents.
 
 
# RE: SLI Performance: GeForce GTX 660 Ti vs GTX 670Doug Dallam 2012-08-21 19:50
That's a good point, and if it is true that all cards will render the exact minimum frame rate relative to their average frame rate ratio between each card, then we could calculate exactly what the minimum frame rate would be on each card--given we know one cards minimum frame rate!

How can I tell what the minimum frame rate will be with the current information?

I actually could care less what the average FPS is when buying a card to play games. I need to know what the minimum fps are so to maintain fluid game play throughout each scene.

For instance, if the average frame rate is 50 and the minimum is 18, that's a potential red flag.

If a card tests 40fps average over all tests and then tests minimum 18fps in the same test, that's a definite flag.
 
 
# key term: relative performanceOlin Coles 2012-08-21 20:35
I could test any card on my system using my settings and report the results, but that won't mean anything unless you use the exact same hardware and software. That's why we use the average to compare relative performance, since it contrasts the differences between products tested under the same circumstances.

Much like the charts, you're making much to do over nothing.
 
 
# Why we want lowest FPSChris 2012-12-17 10:32
Olin, it is a valid point about the lowest FPS. To give an example, let's say that the two SLI setups run a test (BF3 example) at 124 vs. 112 FPS. Particularly based on the criteria you set out at the beginning, based on memory throughput, it's important to know if those results indicated either:

1. A consistent 10.7% increased framerate for the 670 SLI setup... or...
2. A scenario during the test where the 660 TI SLI setup dropped down to 20 FPS for a few seconds due to the bandwidth being flooded.

Either of those two scenarios could produce the difference we see in the average FPS. From a gaming perspect, 112 FPS is plenty... but 20 for brief moments during game play could reduce the quality of play significantly, whereas a floor of 60 FPS would not.
 
 
# single card till price dropsdave gray 2012-10-02 15:15
the new superclocked 660 ti is little more than the 660 ti and boosts around 5% to 6% I'm going with that and when prices drop, pick up a 2nd card. At present I don't play over 1680x1050 but will go to 1920 x 1080 when VA or IPS monitors improve response times and by then the 2nd card for sli should have dropped 25% or more
 
 
# SLI or not??Rich M 2012-10-07 09:41
So I am I correct in saying that if you are going for the single card option the 6600ti is a better value? I really don't have $600+ to blow on 2video cards, I can build an entire decent system with the $300+ savings from the one card. The costs will even go higher for the sli being mobo and psu costs more money as well. I don't compete as mush as I used to, so will the 660ti be better suited for me?
 
 
# RE: SLI or not??matt 2012-10-07 10:17
read the review, he says point blank on the last page, for a single card configuration the 660ti is a better deal then a single 670. don't got $600 for graphics cards? then don't sli them, cause if you're going with sli the 670's a better deal anyways.
 
 
# 670 longterm availabilitycaleb 2012-10-10 17:56
Here is my main issue and the only reason I am hesitant to jump on the 670 as opposed to the 660. Two years ago around xmas I got a really good deal on a 470 and have been very pleased with it ever since. However now that I am looking for a little more horsepower, there is not a 470 to be had anywhere while you can still find 460's for sli. Would this likely be an issue in another year and a half to two years when I would want to sli a 670?
 
 
# RE: SLI Performance: GTX 660 Ti vs GTX 670matt 2012-10-10 22:27
only time will tell that, but with the lower price point, + being released later on (and therefore closer to the next generation of cards) i'd bet 660s n 660 tis will be easier to find then a 670 down the road, since demand for them won't slow down as much, since again they're newer.
 
 
# RE: SLI Performance: GTX 660 Ti vs GTX 670Geoffrey Bush 2012-12-05 09:21
I think saying the 660 Ti could keep up with a 670 if it's OC is a misreading of every DX11 benchmark graph in this entire review. The 670 is blowing the 660 Ti away and you're not even reviewing an OC version of the 670 like a FTW edition. It looks like you just don't want people to realize how close the 670 is to the 680 in these charts and you're hoping for a price drop.
 
 
# RE: SLI Performance: GTX 660 Ti vs GTX 670Greg 2012-12-06 05:31
I see absolutely no reason to get a GTX 690 at $999 when you can run 660 ti's in a 3 way sli configuration for $100 less than a 690 and get even higher frame rates. Heck even 2 way sli 660ti's are a better deal considering you'll be running any game at ultra settings over 100 fps and for $400 less than a 690, which you'll get maybe 20-30 more frames than the dual 660s. If getting 20-30 more frames is worth paying $400 extra to some of you, you have issues or absolutely no understanding of monetary value. Anyway here's a link where they compare dual/triple sli config 660ti's to several other cards including a GTX 690. The differences are so not even remotely worth $999 or even paying extra for dual 670s. ##xbitlabs.com/articles/graphics/display/geforce-gtx-660-ti-sli.html
 
 
# you're missing some other factors.matt 2012-12-06 11:56
take into consideration the additional power requirements that 3 way sli set up would need. the 690 uses 300 watts the 660ti uses 150 allegedly, however you can overclock it quite a bit and pull closer to 220, which means you'd want enough overhead on the power supply to leave aside 660 watts just for graphics. on top of needing 6 6 pin cables.

you've already covered that extra $100 difference for the most part, in just the cost of a higher end power supply. now think of the real world cost of running it at 450-660 watts, the electrical cost also works against you.

then you have the fact that 2 way sli tends to run smoother then 3 way sli, and last but not least think of airflow. 3 cards take up quite a bit of space in the case and restrict the airflow far more then a single would, as well as increase the noise levels dramatically.

i'd say the 690 is worth it if you really intended to choose between those set ups. and as an added bonus you get to say you have the best, rather then a pretty wicked set up.
 
 
# RE: you're missing some other factors.Greg 2012-12-06 22:32
@ Matt, that's definitely a fair point with the power consumption and I honostly don't know anyone who runs triple sli, just all things considered you're gonna get better performance than a single 690 for about the same price with a higher power supply in mind and if power supply isn't an issue it's 100 bucks saved.

Also I just don't see any reason to get a $999 card when you can run double for cheaper and get near to the performance. I myself just have a single 660 ti and have no problems running games like The Witcher 2, BF3 on Ultra settings @ 1920x1080 with no frame rate issues.

If you have some crazy 3 screen setup running 2560x1440 and you're into graphic design, then yea I can see that as a good reason to go with a 690, otherwise why? Bragging rights? I just don't see much of a graphical or performance differece once you're hitting at around 80fps and beyond or don't really care I guess. There's just absolutely nothing out there where you need a 690 or even running sli to max something out.
 
 
# RE: RE: you're missing some other factors.matt 2012-12-06 23:04
it really is all about bragging rights for the most part, that and future proofing your system. some people have nothing better to blow there money on, or want the absolute best, even if they don't realistically need it. others figure 1 epic card now is the same cost as a new epic card in 2 years + a 680 today, so why not grab it and not have to worry about it when money may be tighter in the future?

but either way you look at it, the 680s and 690's aren't orientated toward budget minded users anyways. they're all about power users and enthusiasts. people like you and me aren't their target consumer.
 
 
# 660 ti sliTranSir 2012-12-10 19:05
well i got meself 2 x evga 660 ti sc, 3770k, asrock z77 extreme4, gskill 2000mhz,ocz agility3, i had 1 x 660ti sc first and frame rate was pretty smooth but going to 2x 660 ti sc saw a 3d11 p of 14300 score, in bf3 ultra settings fps maxed at 200 ave 130-170 and it aint even pushing the cards full power or memeory,cant be more happy
 
 
# 660ti sli with 670gtxG-string8086 2012-12-12 11:06
Is it possible to sli,

x1 660ti (same gpu, mem, cores etc just 192bit,
with x1 670gtx of the same spec 256bit?

I have a PNY xlr8 660ti 2GB xlr8 (Amazon black Friday £80) + with an unecessary £20 pnp

and an evga 670gtx non ftw 2Gb £288


basically all that appears to be different is one is 256bit and the other 192bit.

My first test was an epic fail, as windows 8 crashed (my opinion is w8 is an epic fail...safe mode would not work.)

Windows will boot with both cards sli'd just haven't had time to do a full test.

btw its only a gap stop until I can get another 670gtx

my 850w is old school from 2009 so having to split rails isn't my prefered method but works.

If it works I will still gain better fps in Crysis 2, Heavens bench etc, just the 670gtx reaches 107fps which seems weak in Heavens Bench 3

so anyone had any success?
 
 
# previous post PC specsG-string8086 2012-12-12 11:14
FX8150 stock setting Hpet turned off
Noctua DH14
Noctua 120mm casing fan and 80mm for hdd caddy
16GB Corsair 2000XMS
x2 2gig wd greens
Sabertooth 990fx
850 corsair psu
670gtx
660ti
xonar D2x soundcard
 
 
# 660tiMc-boogerbalz 2012-12-13 06:17
No, it doesn't work, and the 660ti doesn't do anything when selected as a dedicated physx card...bummer :(

however the 660ti showed 198fps in heavens bench over 177 with the 670gtx somethings not right there, if the 660ti is better then I'm ditching the 670
 
 
# Mrsimo 2013-02-27 14:43
I don't know what test result your looking at because the gtx 660 ti smashed the 670 in every benchmark in sli. The 660 ti scaled nearly perfectly double the frame rate. The 670 was a fare margin under double frame rates, go back and have another look.
 
 
# 660 rather that 660TiRsdFnd 2013-05-29 10:28
I've read many info regarding these cards....where isn't there will be waste of money to buy Ti version when there's very small differences for non-Ti version? i'm talking about GTX660 vs GTX 660Ti.
 

Comments have been disabled by the administrator.
