SLI Performance: GTX 660 Ti vs GTX 670
Written by Olin Coles
Monday, 20 August 2012
NVIDIA SLI: GeForce GTX 660 Ti vs GTX 670

Benchmark Reviews compares SLI performance between the NVIDIA GeForce GTX 660 Ti and GeForce GTX 670.

Back in May 2012, NVIDIA released their $400 GeForce GTX 670 video card, securing the number-two position in their single-GPU product stack. Just three short months later, the GeForce GTX 660 Ti graphics card arrived to market and filled store shelves at the $300 price point. With a substantial $100 price difference between these two products, consumers might (incorrectly) presume there's a significant difference in hardware or performance. To the surprise of many, the GeForce GTX 670 and GTX 660 Ti are nearly the same card. Both feature the identical 28nm NVIDIA 'Kepler' GK104 graphics processor, complete with 1344 CUDA cores clocked to the same 915 MHz base and 980 MHz Boost speeds. The GTX 670 and GTX 660 Ti also carry the exact same 2GB GDDR5 video memory buffer, clocked to 1502 MHz on both cards. The only physical difference between these two products resides in the memory subsystem: the GeForce GTX 670 receives four 64-bit memory controllers (a 256-bit interface) while the GeForce GTX 660 Ti is designed with three (a 192-bit interface). So will memory bandwidth amount to any real difference in video game performance?

Most PC gamers have a PCI-Express 2.0 compatible motherboard inside their computer system, although the most recent motherboards and all of NVIDIA's GeForce GTX 600-series cards are PCI-Express 3.0 compliant. Additionally, nearly all high-performance video cards feature at least a 256-bit memory interface, and some AMD Radeon HD 7000-series graphics cards use 384-bit. However, most testing with these high-end graphics cards has shown little indication that bottlenecks actually occur at the PCI-Express level, even while playing the most demanding DirectX 11 video games. It's more likely that a bandwidth bottleneck occurs at the video card's memory subsystem, where the GPU may be capable of sending more information than the video frame buffer can accept. We discovered some evidence of this in our recent testing of the heavily overclocked ASUS GeForce GTX 660 Ti DirectCU II TOP, which kept pace with a stock GTX 670 in all but the most demanding video games featuring large maps or virtual worlds. Obviously the GPU plays the starring role in creating a bottleneck: less powerful processors lack the raw transaction rate to saturate the memory subsystem, but this isn't so difficult for powerful Kepler-based GeForce GTX 600-series products.

Because there are times when the GeForce GTX 660 Ti's 192-bit memory interface can become a bottleneck, what would happen if we split that demand between two cards connected together in an SLI set? In theory, combining two graphics cards with SLI technology essentially doubles the memory bandwidth available to a standard workload. Given that 256-bit memory configurations suffer very few bandwidth bottlenecks while 192-bit configurations can hit occasional limitations, it seems plausible. Furthermore, the results could help gamers decide on building an SLI set of either GTX 660 Ti or GTX 670 graphics cards. That's the purpose of this article.
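To put the interface difference in perspective, here is a back-of-the-envelope sketch (not a measurement from our testing) of the theoretical peak memory bandwidth each card offers at the 1502 MHz memory clock, assuming the usual GDDR5 effective rate of four transfers per clock:

    # Rough theoretical peak memory bandwidth; assumes the GDDR5 effective
    # data rate is 4x the 1502 MHz memory clock quoted above.
    def peak_bandwidth_gbs(bus_width_bits, mem_clock_mhz):
        effective_mtps = mem_clock_mhz * 4                    # GDDR5: 4 transfers per clock
        return (bus_width_bits / 8) * effective_mtps / 1000   # bytes per transfer -> GB/s

    print(f"GeForce GTX 670    (256-bit): {peak_bandwidth_gbs(256, 1502):.1f} GB/s")  # ~192.3
    print(f"GeForce GTX 660 Ti (192-bit): {peak_bandwidth_gbs(192, 1502):.1f} GB/s")  # ~144.2

On paper, then, the GTX 660 Ti gives up roughly a quarter of the GTX 670's memory bandwidth while sharing every other core specification.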
The new and improved Kepler GPU architecture with NVIDIA GPU Boost technology is only the start, because GeForce GTX 600-series video cards deliver plenty of additional refinements to the user experience. FXAA and Adaptive VSync result in less chop, stutter, and tearing in on-screen motion. Overclockers might see their enthusiast experiments threatened by the presence of NVIDIA GPU Boost, but the dynamically adjusted power and clock-speed profiles can be supplemented with additional overclocking or switched off completely. Adaptive VSync, on the other hand, is a welcome addition for all users, from the gamer to the casual computer user: it dynamically disables vertical sync whenever the frame rate drops too low to properly sustain it, thereby reducing stutter and tearing artifacts. Finally, NVIDIA is introducing TXAA, a film-style anti-aliasing technique with a mix of hardware post-processing, a custom CG film-style AA resolve, and an optional temporal component for better image quality.

VGA Testing Methodology

The Microsoft DirectX-11 graphics API is native to the Microsoft Windows 7 Operating System, which serves as the primary O/S for our test platform. DX11 is also available as a Microsoft Update for the Windows Vista O/S, so our test results apply to both versions of the Operating System. The majority of benchmark tests used in this article compare DX11 performance, however some high-demand DX10 tests have also been included.

In each benchmark test, one 'cache run' is conducted first, followed by five recorded test runs. Results are collected at each setting with the highest and lowest results discarded; the remaining three results are averaged and displayed in the performance charts on the following pages. (A short sketch of this averaging step follows the test configuration lists below.) A combination of synthetic and video game benchmark tests has been used in this article to illustrate relative performance among graphics solutions. Our benchmark frame rate results are not intended to represent real-world graphics performance, as that experience changes with the supporting hardware and the perception of the individual playing the video game.

Intel X79 Express Test System
DirectX-10 Benchmark Applications
DirectX-11 Benchmark Applications
PCI-Express Graphics Cards
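Before moving on to the results, here is a minimal sketch of the scoring method described in the testing methodology above: five recorded runs (after one warm-up 'cache run'), with the highest and lowest discarded and the remaining three averaged. The FPS values shown are hypothetical placeholders, not figures from our charts.

    def average_runs(recorded_fps):
        # Methodology: five recorded runs; drop the best and worst, average the rest.
        assert len(recorded_fps) == 5, "five recorded test runs per setting"
        trimmed = sorted(recorded_fps)[1:-1]   # discard lowest and highest
        return sum(trimmed) / len(trimmed)

    print(average_runs([58.2, 60.1, 59.7, 61.4, 59.9]))   # hypothetical FPS values -> 59.9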
DX10: Crysis Warhead

Crysis Warhead is an expansion pack based on the original Crysis video game. Crysis Warhead is set in the future, where an ancient alien spacecraft has been discovered beneath the Earth on an island east of the Philippines. Crysis Warhead uses a refined version of the CryENGINE2 graphics engine and, like Crysis, renders through the Microsoft Direct3D 10 (DirectX-10) API.

Benchmark Reviews uses the HOC Crysis Warhead benchmark tool to measure graphics performance using the Airfield 1 demo scene. This short test places a high amount of stress on a graphics card because of its detailed terrain and textures, but also because of the test settings used: the DirectX-10 test runs at Very High quality with 4x anti-aliasing and 16x anisotropic filtering to create maximum graphics load and separate the products according to their performance. At these settings, only the most powerful graphics cards are expected to perform well in our Crysis Warhead benchmark tests. DirectX-11 extensions are not supported in Crysis: Warhead, and SSAO is not an available option.
Crysis Warhead Benchmark Test Results
DX11: 3DMark11

FutureMark 3DMark11 is the latest addition to the 3DMark benchmark series built by Futuremark Corporation. 3DMark11 is a PC benchmark suite designed to test DirectX-11 graphics card performance without vendor preference. Although 3DMark11 includes the unbiased Bullet Open Source Physics Library instead of NVIDIA PhysX for the CPU/Physics tests, Benchmark Reviews concentrates on the four graphics-only tests in 3DMark11 and runs them with the medium-level 'Performance' preset. The 'Performance' preset applies 1x multi-sample anti-aliasing and trilinear texture filtering at a 1280x720 resolution. Tessellation detail, when called upon by a test, is preset to level 5, with a maximum tessellation factor of 10. The shadow map size is limited to 5 and the shadow cascade count is set to 4, while the surface shadow sample count is at the maximum value of 16. Ambient occlusion is enabled, and preset to a quality level of 5.
3DMark11 Benchmark Test Results
DX11: Aliens vs Predator

Aliens vs. Predator is a science fiction first-person shooter video game, developed by Rebellion and published by Sega for Microsoft Windows, Sony PlayStation 3, and Microsoft Xbox 360. Aliens vs. Predator utilizes Rebellion's proprietary Asura game engine, which had previously found its way into Call of Duty: World at War and Rogue Warrior. The self-contained benchmark tool is used for our DirectX-11 tests, which push the Asura game engine to its limit. In our benchmark tests, Aliens vs. Predator was configured to use the highest quality settings with 4x AA and 16x AF. DirectX-11 features such as Screen Space Ambient Occlusion (SSAO) and tessellation have also been included, along with advanced shadows.
Aliens vs Predator Benchmark Test Results
DX11: Batman: Arkham City

Batman: Arkham City is a third-person action game that continues the storyline set forth in Batman: Arkham Asylum, which launched for game consoles and PC back in 2009. Based on an updated Unreal Engine 3 game engine, Batman: Arkham City enjoys DirectX 11 graphics and uses multi-threaded rendering to produce life-like tessellation effects. While game console versions of Batman: Arkham City deliver high-definition graphics at either 720p or 1080i, you'll only get the highest-quality graphics and special effects on the PC.

In an age when developers give game consoles priority over the PC, it's becoming difficult to find games that show off the stunning visual effects and lifelike quality possible from modern graphics cards. Fortunately, Batman: Arkham City does amazingly well on both platforms, while at the same time making it possible to cripple the most advanced graphics card on the planet by offering extremely demanding NVIDIA 32x CSAA and full PhysX capability. Also available to PC users (with NVIDIA graphics) is FXAA, a shader-based image filter that achieves results similar to MSAA yet requires less memory and processing power.

Batman: Arkham City offers varying levels of PhysX effects, each with its own set of hardware requirements. You can turn PhysX off, or enable the 'Normal' level, which introduces GPU-accelerated PhysX elements such as debris particles, volumetric smoke, and destructible environments into the game, while the 'High' setting adds real-time cloth and paper simulation. Particles exist everywhere in real life, and this PhysX effect is used in many aspects of the game to add back that same sense of realism. For PC gamers who are enthusiastic about graphics quality, don't skimp on PhysX: DirectX 11 makes it possible to enjoy many of these effects, and PhysX helps bring them to life in the game.
Batman: Arkham City Benchmark Test Results
DX11: Battlefield 3

In Battlefield 3, players step into the role of elite U.S. Marines. As the first boots on the ground, players experience heart-pounding missions across diverse locations including Paris, Tehran, and New York. As a U.S. Marine in the field, periods of tension and anticipation are punctuated by moments of complete chaos. As bullets whiz by, walls crumble, and explosions force players to the ground, the battlefield feels more alive and interactive than ever before.

The graphics engine behind Battlefield 3 is called Frostbite 2, which delivers realistic global illumination lighting along with dynamic destructible environments. The game uses a hardware terrain tessellation method that allows a high number of detailed triangles to be rendered entirely on the GPU when near the terrain. This allows for a very low memory footprint and relies on the GPU alone to expand the low-resolution data into highly realistic detail. Using Fraps to record frame rates, our Battlefield 3 benchmark test uses a three-minute capture on the 'Secure Parking Lot' stage of Operation Swordbreaker. Relative to online multiplayer action, these frame rate results are nearly identical to daytime maps with the same video settings.
Battlefield 3 Benchmark Test Results
DX11: Lost Planet 2

Lost Planet 2 is the second installment in the saga of the planet E.D.N. III, set ten years after the story of Lost Planet: Extreme Condition. The snow has melted and the lush jungle life of the planet has emerged, with angry and luscious flora and fauna. With the new environment comes the addition of DirectX-11 technology to the game. Lost Planet 2 takes advantage of DX11 features including tessellation and displacement mapping on water, level bosses, and player characters. In addition, soft-body compute shaders are used on 'Boss' characters, and wave simulation is performed using DirectCompute. These cutting-edge features make it an excellent benchmark for top-of-the-line consumer GPUs. The Lost Planet 2 benchmark offers two different tests, which serve different purposes; this article uses benchmark B, which is designed to be a deterministic and effective benchmark tool featuring DirectX 11 elements.
Lost Planet 2 Benchmark Test Results
DX11: Metro 2033

Metro 2033 is an action-oriented video game that combines survival horror and first-person shooter elements. The game is based on the novel Metro 2033 by Russian author Dmitry Glukhovsky. It was developed by 4A Games in Ukraine and released in March 2010 for Microsoft Windows. Metro 2033 uses the 4A game engine, developed by 4A Games. The 4A Engine supports DirectX-9, 10, and 11, along with NVIDIA PhysX and GeForce 3D Vision.

The 4A engine is multi-threaded such that only PhysX has a dedicated thread; it uses a task model without any pre-conditioning or pre/post-synchronizing, allowing tasks to be executed in parallel. The engine can utilize a deferred shading pipeline, uses tessellation for greater performance, and also offers HDR (complete with blue shift), real-time reflections, color correction, film grain and noise, and multi-core rendering. Metro 2033 features superior volumetric fog, double PhysX precision, object blur, sub-surface scattering for skin shaders, parallax mapping on all surfaces, and greater geometric detail with less aggressive LODs. Using PhysX, the engine supports destructible environments, cloth and water simulation, and particles that can be fully affected by environmental factors.

NVIDIA has been diligently working to promote Metro 2033, and for good reason: it's one of the most demanding PC video games we've ever tested. When their flagship GeForce GTX 480 struggled to produce 27 FPS with DirectX-11 anti-aliasing turned to its lowest setting, you know that only the strongest graphics processors will generate playable frame rates. All of our tests enable Advanced Depth of Field and Tessellation effects, but disable advanced PhysX options.
Metro 2033 Benchmark Test Results
DX11: Unigine Heaven 3.0

The Unigine Heaven benchmark is a free, publicly available tool that exercises DirectX-11 graphics capabilities on Windows 7 or an updated Windows Vista Operating System. It reveals the enchanting magic of floating islands with a tiny village hidden in the cloudy skies, and an interactive mode lets users explore the intricate world on their own. Through its advanced renderer, Unigine was among the first to showcase art assets with tessellation, using the technology to its full extent to demonstrate the possibilities for enriching 3D gaming.

The distinguishing feature of the Unigine Heaven benchmark is hardware tessellation: a scalable technique that automatically subdivides polygons into smaller and finer pieces, so that developers can gain a more detailed look for their games almost free of charge in terms of performance. Thanks to this procedure, the detail of the rendered image approaches the boundary of convincing visual realism. Since only DX11-compliant video cards will properly run the Heaven benchmark, only those products that meet the requirements have been included.
Heaven Benchmark Test Results
SLI: GTX 660 Ti vs GTX 670 Conclusion

IMPORTANT: Although the rating and final score mentioned in this conclusion are made to be as objective as possible, please be advised that every author perceives these factors differently at various points in time. While we each do our best to ensure that all aspects of the product are considered, there are oftentimes unforeseen market conditions and manufacturer changes which occur after publication that could render our rating obsolete. Please do not base any purchase solely on our conclusion, as it represents our product rating specifically for the product tested, which may differ from future versions of the same product. Benchmark Reviews begins our conclusion with a short summary for each of the areas that we rate.
The SLI results, however, didn't follow the same patterns as our single-card tests, and many of these same video games gave the SLI set of GeForce GTX 670s a dominant lead over the GTX 660 Ti SLI set. This was clearly the result of varying demands on memory and graphics processor, as well as each video card's ability to satisfy those demands. Not every benchmark test ended with the GTX 660 Ti keeping pace with the GTX 670 (Metro 2033, Crysis Warhead, and Unigine Heaven were exceptions), and most SLI tests separated the cards' performance even further apart.

The GeForce GTX 660 Ti is nearly identical to the GTX 670 in every way except for the number of memory controllers, and performance in most games was just as similar when tested in a single-card configuration. Unfortunately, the opposite was true for a few games, leaving the GTX 660 Ti to trail behind a single GTX 670 and eat dust when compared in SLI.

These results make for a difficult determination. The bottom line is that it depends on your display resolution. A higher pixel count (larger display resolution) creates a more significant difference between these graphics cards in the video games you'll be playing. This was demonstrated with a single-monitor configuration in our tests, using 1680x1050 (1.76 MP) and 1920x1080 (2.07 MP, roughly 18% more pixels) display resolutions. Had we tested these products on multiple monitors, the differences would have been even more pronounced and pushed the products even farther apart.

Aside from Crysis Warhead, almost every SLI benchmark favored the GeForce GTX 670 by a significant margin. Our test results provided enough data to confirm that two GTX 670s are far more capable of generating high frame rates than two GTX 660 Ti video cards... but still not by enough to justify the $100 difference per card. If you're planning on building a single-card system and deciding between these two cards, it's my opinion that an overclocked GeForce GTX 660 Ti is the way to go. However, if that plan is for an SLI computer system with multiple monitors, it may be worth the extra money to run a pair of GeForce GTX 670s. Of course, I welcome your suggestions in the comment area below.
The NVIDIA GeForce GTX 660 Ti presently sells for $300 online, while the GeForce GTX 670 currently costs $400. After reviewing the benchmark results for each graphics card in single and SLI configurations, there was plenty of evidence to suggest that NVIDIA's GeForce GTX 660 Ti delivers a much better value. In Battlefield 3, Batman: Arkham City, Lost Planet 2, and Aliens vs Predator, the single GTX 660 Ti trailed right behind the more expensive GTX 670. Given a modest overclock, it's very possible for a gamer to save $100 and still enjoy GeForce GTX 670 levels of performance.
Comments
Is it possible that NVIDIA needs to perform a driver update as well?
Where are the reviews to compare them in sli on a single monitor at 2560 X 1440/1600 resolution?
Just bought a 670 gtx over a 660 ti... because i am fairly certain that the 670 gtx in sli will CRUSH a 660 ti sli configuration.
Need to see numbers please!
like you mentioned, once you go for an sli build the 660 loses its shine, and (this is just my opinion) once you're already talking about an investment that big, i don't see any reason not to go for the ultimate when it's only $200 more.
i feel the 660 ti was always meant to be a very value oriented card, and when you're running sli it's a $600+ investment. to me that's a pretty sizable purchase for graphics cards, so i don't see very many people really wanting to sli them; it's simply not who i think nvidia was attempting to appeal to.
i myself intend to settle with a single 660 ti after seeing this, as jumping to a 670 isn't worth it, and jumping to 2 660 tis would make me want to climb the price ladder until i was at a 690 because again to me that seems like the best true value for extreme graphics solutions.
I fried my xps m1730 sli years ago with only moderate overclocking on just the graphics cards.
It's a gamble as we know, some cards are better than others and some have issues.
I believe this card, overclocked, is better than 670 FTW.
Keep doing these great reviews. =D
One thing that has me a little worried about trusting the results that show only AVG FPS is that we don't know the low FPS.
For instance, if you get an average of 48FPS, that's plenty, but if every time an explosion goes off you dip to 15FPS, then that's no good at all.
On the other hand, perhaps I am misinterpreting avg fps as a viable benchmark for how a card will play a game?
##hardocp.com/
Just running benchmarks simply doesn't cut it anymore, you need to play the games to see which cards gives the best experience... you might be in for a surprise.
And you really should have tested 2560 X 1440/1600 resolution, if you are going to buy a 660ti/670 sli setup then you won't be playing at 1680x1050, it's just ridiculous.
Just my 2 cents.
How can I tell what the minimum frame rate will be with the current information?
I actually couldn't care less what the average FPS is when buying a card to play games. I need to know what the minimum fps are, so as to maintain fluid game play throughout each scene.
For instance, if the average frame rate is 50 and the minimum is 18, that's a potential red flag.
If a card tests 40fps average over all tests and then tests minimum 18fps in the same test, that's a definite flag.
Much like the charts, you're making much ado about nothing.
1. A consistent 10.7% increased framerate for the 670 SLI setup... or...
2. A scenario during the test where the 660 TI SLI setup dropped down to 20 FPS for a few seconds due to the bandwidth being flooded.
Either of those two scenarios could produce the difference we see in the average FPS. From a gaming perspective, 112 FPS is plenty... but dropping to 20 for brief moments during game play could reduce the quality of play significantly, whereas a floor of 60 FPS would not.
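To make the concern above concrete, here is an illustrative sketch using made-up frame times (not data from the review's tests) showing how two runs with similar average FPS can hide very different minimums:

    def fps_stats(frame_times_ms):
        # Average FPS over the whole run, plus the slowest single frame.
        total_seconds = sum(frame_times_ms) / 1000.0
        avg_fps = len(frame_times_ms) / total_seconds
        min_fps = 1000.0 / max(frame_times_ms)
        return avg_fps, min_fps

    steady = [9.0] * 112                    # ~111 FPS with no dips
    spiky  = [8.0] * 107 + [50.0] * 5       # mostly ~125 FPS, but five frames at 20 FPS

    for name, trace in (("steady", steady), ("spiky", spiky)):
        avg_fps, min_fps = fps_stats(trace)
        print(f"{name}: avg {avg_fps:.0f} FPS, min {min_fps:.0f} FPS")

Both traces average over 100 FPS, yet only the minimum reveals the brief dips that would be felt in game.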
you've already covered that extra $100 difference for the most part in just the cost of a higher end power supply. now think of the real world cost of running it at 450-660 watts; the electrical cost also works against you.
then you have the fact that 2 way sli tends to run smoother than 3 way sli, and last but not least think of airflow. 3 cards take up quite a bit of space in the case and restrict the airflow far more than a single card would, as well as increase the noise levels dramatically.
i'd say the 690 is worth it if you really intended to choose between those setups. and as an added bonus you get to say you have the best, rather than a pretty wicked setup.
Also I just don't see any reason to get a $999 card when you can run double for cheaper and get near to the performance. I myself just have a single 660 ti and have no problems running games like The Witcher 2, BF3 on Ultra settings @ 1920x1080 with no frame rate issues.
If you have some crazy 3 screen setup running 2560x1440 and you're into graphic design, then yeah I can see that as a good reason to go with a 690, otherwise why? Bragging rights? I just don't see much of a graphical or performance difference once you're hitting around 80fps and beyond, or don't really care I guess. There's just absolutely nothing out there where you need a 690 or even SLI to max something out.
but either way you look at it, the 680s and 690s aren't oriented toward budget minded users anyways. they're all about power users and enthusiasts. people like you and me aren't their target consumer.
Has anyone tried pairing x1 660ti (same gpu, mem, cores etc, just 192bit) with x1 670gtx of the same spec (256bit)?
I have a PNY xlr8 660ti 2GB (Amazon Black Friday £80) with an unnecessary £20 pnp
and an evga 670gtx non ftw 2Gb £288
basically all that appears to be different is one is 256bit and the other 192bit.
My first test was an epic fail, as windows 8 crashed (my opinion is w8 is an epic fail...safe mode would not work.)
Windows will boot with both cards sli'd just haven't had time to do a full test.
btw it's only a stopgap until I can get another 670gtx
my 850w is old school from 2009, so having to split rails isn't my preferred method but it works.
If it works I will still gain better fps in Crysis 2, Heavens bench etc, just the 670gtx reaches 107fps which seems weak in Heavens Bench 3
so anyone had any success?
Noctua DH14
Noctua 120mm casing fan and 80mm for hdd caddy
16GB Corsair 2000XMS
x2 2gig wd greens
Sabertooth 990fx
850 corsair psu
670gtx
660ti
xonar D2x soundcard
however the 660ti showed 198fps in Heaven bench versus 177 with the 670gtx; something's not right there. if the 660ti is better then I'm ditching the 670