ASUS GeForce GTX 660Ti DirectCU-II TOP
Reviews - Featured Reviews: Video Cards
Written by Olin Coles   
Thursday, 16 August 2012


ASUS DIGI+VRM Digital Power Tuning Enhances GPU Overclocking Results

Manufacturer: ASUSTeK Computer Inc.
Product Name: GeForce GTX 660Ti DirectCU-II TOP
Model Number: GTX660 TI-DC2T-2GD5
Part Number: 90YV0360-MONM00
UPC: 886227284740 EAN: 4716659284741
Suggested Retail Price: $329.99 MSRP

Full Disclosure: The product sample used in this article has been provided by ASUS.

Many computer hardware enthusiasts recall when NVIDIA unleashed their GeForce GTX 580 video card onto the gaming landscape back in November 2010, setting the bar so high that the closest threat to the $500 price point was a combined pair of the competition's best graphics cards. Less than two years later, NVIDIA returns with a 150W mid-range solution that beats their former flagship. Built upon the NVIDIA GK104 GPU, the GeForce GTX 660 Ti offers the same high-end features found on the top-end GTX 680, but with a much more affordable price tag. In this article Benchmark Reviews tests the ASUS GeForce GTX 660 Ti DirectCU II TOP, model TI-DC2T-2GD5.

Of the many platforms available for gamers to enjoy video games, there's no question that the highest quality graphics come from the PC. While game developers might not consider PC gaming as lucrative as entertainment consoles, companies like NVIDIA use desktop graphics to set the benchmark for smaller, more compact designs that make it into notebooks, tablets, and smartphone devices. NVIDIA's Kepler GPU architecture is an example of this, delivering unprecedented performance while operating cooler and consuming far less power than previous flagship discrete graphics cards.

ASUS further extends GeForce GTX 660 Ti's functionality with proprietary features available only on their brand of video cards, which for the GTX 660 Ti series include: DirectCU-II Standard (TI-DC2-2GD5), DirectCU-II OC (TI-DC2O-2GD5), and DirectCU-II TOP (TI-DC2T-2GD5). Clocked to 1137 MHz, the ASUS GeForce GTX 660 Ti TOP model features a 7% faster GPU boost speed, or 157 MHz over the reference design. Its DirectCU-II design yields 20% cooler performance, with 9dB less audible fan noise. Combined with the ASUS GPU Tweak software, GTX 660 Ti DirectCU-II TOP utilizes digital DIGI+ VRM with 6-Phase Super Alloy Power (SAP) electronic components to achieve even greater GPU clock speeds with stable overclocked performance.

ASUS-GeForce-GTX-660Ti-DirectCU-II-TOP-Package.jpg

NVIDIA's GeForce GTX 660 Ti graphics card is designed around the same 28nm GK104 Kepler GPU architecture found in the GeForce GTX 680 and 670, and replaces NVIDIA's GeForce GTX 570 series in terms of performance. NVIDIA GPU Boost enables 1344 CUDA cores to dynamically adjust power and clock speeds based on real-time game or application demands. GeForce GTX 660 Ti's core count and clock speeds are identical to GTX 670, with the same being true for the card's 2GB memory buffer - albeit reduced to 192-bit bandwidth. Designed to occupy the $300 price point, GeForce GTX 660 Ti competes on cost with AMD's Radeon HD 7870 but matches performance with the more expensive Radeon HD 7950. In this article, Benchmark Reviews will test the GeForce GTX 660 Ti and discover how this combination performs with DirectX 10/11 generation video games.

In addition to a new and improved Kepler GPU architecture with NVIDIA GPU Boost technology, the GeForce GTX 660 Ti video card delivers further refinements to the user experience. Smoother FXAA and adaptive vSync technologies result in less chop, stutter, and tearing in on-screen motion. Overclockers might see their enthusiast experiments threatened by the presence of NVIDIA GPU Boost technology, but its dynamic power and clock-speed adjustments can be supplemented with additional overclocking or shut off completely. Adaptive vSync, on the other hand, is a welcome addition for all users, from the gamer to the casual computer user. This new technology dynamically disables vertical sync whenever the frame rate drops too low to properly sustain it, thereby reducing stutter and tearing artifacts. Finally, NVIDIA is introducing TXAA, a film-style anti-aliasing technique with a mix of hardware anti-aliasing, a custom CG film-style AA resolve, and an optional temporal component for better image quality.

First Look: ASUS GeForce GTX 660 Ti TOP

This review examines the best-in-class ASUS GeForce GTX 660 Ti DirectCU-II TOP video card, model TI-DC2T-2GD5. The ASUS 660 Ti DirectCU-II TOP offers three major stand-out features that set it apart from the competition: 1) an extreme cooling solution, 2) nearly silent cooling fans, and 3) digital voltage control to ensure ultra-stable overclocking results.

The reference design NVIDIA GeForce GTX 660 Ti is a double-bay graphics card measuring 1.5" tall, 3.9" wide, and 9.0" long, which will fit into nearly all mid-tower computer case enclosures with plenty of room to spare. However, the added ASUS DirectCU-II heatpipe-laden cooler stretches the GTX 660 Ti to 4.6" wide and beyond 10.5" long. For comparison, NVIDIA's GeForce GTX 570 and GTX 580, along with AMD's Radeon HD 6970 and Radeon HD 7970, all share the same overall length.

ASUS-GeForce-GTX-660Ti-DirectCU-II-TOP-Front.jpg

Cooling on the ASUS GeForce GTX 660 Ti DirectCU-II TOP is extreme, leaving plenty of potential for heat-generating overclock settings. With 8mm nickel-plated heat-pipes bending out from beneath the plastic shroud, DirectCU-II is designed to soak up every watt of heat from the Kepler GPU secured beneath it. Two low-profile 74mm fans drive air down through the aluminum heatsink fins with almost no audible indication that they're spinning, while chamfered depressions in the shroud's surface help draw cool air whenever two or more video cards are combined into close-proximity SLI configurations.

ASUS-GeForce-GTX-660Ti-DirectCU-II-TOP-Angle.jpg

Specified at a 150W Thermal Design Power (TDP), the GeForce GTX 660 Ti demands much less of the power supply, allowing NVIDIA to recommend a 450W PSU for single-card installations. The GeForce GTX 660 Ti requires power from two six-pin PCI-E connections, which ASUS has implemented with special visual diagnostic LEDs: when a six-pin connection is fitted with proper power, a green LED shines to indicate ideal operating conditions, while the LED shines red when power is insufficient or not connected.

ASUS-GeForce-GTX-660Ti-DirectCU-II-TOP-Power.jpg

Both the ASUS and NVIDIA GeForce GTX 660 Ti designs offer two simultaneously functional dual-link DVI (DL-DVI) connections, a full-size HDMI 1.4a output, and a DisplayPort 1.2 connection. Only one GTX 660 Ti video card is necessary to drive triple-display NVIDIA 3D-Vision Surround functionality, using both DL-DVI connections and either the HDMI or DisplayPort connection for the third output. All of these video interfaces consume exhaust-vent real estate, but this has very little impact on cooling because the ASUS DirectCU-II cooler drives air out from under the heatsink. Furthermore, the 28nm Kepler GPU already generates less heat than past GeForce processors, requiring less ventilation for exhausted air.

ASUS-GeForce-GTX-660Ti-DirectCU-II-TOP-IO-Plate.jpg

As with past-generation GeForce GTX series graphics cards, the GTX 660 Ti is capable of two- and three-card SLI configurations. Because the GeForce GTX 660 Ti is a PCI-Express 3.0-compliant device, its added bandwidth could come into demand as future games and applications make use of these resources. Most games work well at medium settings with a single GeForce GTX 660 Ti graphics card, but multi-card SLI configurations are perfect for gamers wanting to experience high-performance video games played at their highest quality settings with all the bells and whistles enabled.

ASUS' GTX 660 Ti DirectCU-II TOP is certainly unique, but like any high-performance machine it's what hides under the hood that counts. On the inside NVIDIA's codename "Kepler" GPU architecture reshapes the internal landscape, and reveals a lot of unused space on the printed circuit board (PCB). That same 12-layer PCB ensures the highest signal integrity, and helps disperse heat more effectively across the entire board.

ASUS-GeForce-GTX-660Ti-DirectCU-II-TOP-Disassembled.jpg

With the ASUS DirectCU-II heatsink removed to expose an NVIDIA GK104 28nm graphics processor, three 8mm heat-pipe rods can be seen crossing through a solid aluminum base in a potent heat-pipe direct touch configuration.

The memory subsystem has been tweaked on the GeForce GTX 660 Ti, allowing the 2048MB GDDR5 video frame buffer to produce 144.2 GB/s of total memory bandwidth at an impressive 6008 MHz data rate. Three memory controllers combine six GDDR5 ICs into a 192-bit memory interface that moves data more efficiently than previous designs, while the GPU delivers a texture fill rate of 102.5 GigaTexels per second.
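The quoted bandwidth follows directly from the data rate and bus width; a minimal sketch of the arithmetic in Python, using the figures from this review:

```python
# GDDR5 memory bandwidth for the GTX 660 Ti:
# bandwidth (GB/s) = effective data rate (MHz) * bus width (bits) / 8 / 1000
data_rate_mhz = 6008    # effective GDDR5 data rate quoted above
bus_width_bits = 192    # three 64-bit memory controllers

bandwidth_gbs = data_rate_mhz * (bus_width_bits / 8) / 1000
print(f"{bandwidth_gbs:.1f} GB/s")  # 144.2 GB/s
```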

ASUS-GeForce-GTX-660Ti-DirectCU-II-TOP-PCB.jpg

ASUS utilizes nine solid polymer electrolytic chip capacitors (referred to as POSCAPs), whereas the reference design uses only two. Furthermore, the NVIDIA GeForce GTX 660 Ti reference design uses only a 4-phase PWM with analog ICs. But just as ASUS offers on their ultra-duty motherboards, electronic components such as chokes, capacitors, and MOSFETs are governed by fully digital DIGI+ VRM technology, combined with 6-Phase Super Alloy Power (SAP) componentry, to deliver precision voltage to the video card. By virtually eliminating flutter and voltage ripple through precision digital control, components last longer and overclocks are far more stable.

In the next section, we detail our test methodology and give specifications for all of the benchmarks and equipment used in our testing process...

VGA Testing Methodology

The Microsoft DirectX-11 graphics API is native to the Microsoft Windows 7 operating system, which serves as the primary O/S for our test platform. DX11 is also available as a Microsoft Update for Windows Vista, so our test results apply to both versions of the operating system. The majority of benchmark tests used in this article compare DX11 performance; however, some high-demand DX10 tests have also been included.

Each benchmark test begins with one 'cache run', followed by five recorded test runs. At each setting the highest and lowest results are discarded, and the remaining three results are averaged and displayed in the performance charts on the following pages.
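The averaging procedure can be sketched in a few lines of Python; the frame-rate values below are hypothetical, not actual test data:

```python
def benchmark_average(runs):
    """Average five recorded runs after discarding the highest and lowest,
    per the test methodology described above."""
    assert len(runs) == 5, "methodology calls for five recorded runs"
    trimmed = sorted(runs)[1:-1]        # drop the lowest and highest results
    return sum(trimmed) / len(trimmed)  # average the remaining three

# Hypothetical FPS samples from five recorded runs of one test setting:
print(benchmark_average([58.2, 61.0, 60.4, 59.8, 63.1]))
```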

A combination of synthetic and video game benchmark tests have been used in this article to illustrate relative performance among graphics solutions. Our benchmark frame rate results are not intended to represent real-world graphics performance, as this experience would change based on supporting hardware and the perception of individuals playing the video game.

GPU-Z_ASUS_GTX660Ti_DirectCU-II_TOP.gif

Intel X79 Express Test System

DirectX-10 Benchmark Applications

  • Crysis Warhead v1.1 with HOC Benchmark
    • Settings: Airfield Demo, Very High Quality, 4x AA, 16x AF

DirectX-11 Benchmark Applications

  • 3DMark11 Professional Edition by Futuremark
    • Settings: Performance Level Preset, 1280x720, 1x AA, Trilinear Filtering, Tessellation level 5
  • Aliens vs Predator Benchmark 1.0
    • Settings: Very High Quality, 4x AA, 16x AF, SSAO, Tessellation, Advanced Shadows
  • Batman: Arkham City
    • Settings: 8x AA, 16x AF, MVSS+HBAO, High Tessellation, Extreme Detail, PhysX Disabled
  • BattleField 3
    • Settings: Ultra Graphics Quality, FOV 90, 180-second Fraps Scene
  • Lost Planet 2 Benchmark 1.0
    • Settings: Benchmark B, 4x AA, Blur Off, High Shadow Detail, High Texture, High Render, High DirectX 11 Features
  • Metro 2033 Benchmark
    • Settings: Very-High Quality, 4x AA, 16x AF, Tessellation, PhysX Disabled
  • Unigine Heaven Benchmark 3.0
    • Settings: DirectX 11, High Quality, Extreme Tessellation, 16x AF, 4x AA

PCI-Express Graphics Cards

Graphics Card GeForce GTX570 Radeon HD6970 GeForce GTX580 GeForce GTX660Ti Radeon HD7970 GeForce GTX670 GeForce GTX680
GPU Cores 480 1536 512 1344 2048 1344 1536
Core Clock (MHz) 732 880 772 915 925 915 1006
Shader Clock (MHz) 1464 N/A 1544 980 Boost N/A 980 Boost 1058 Boost
Memory Clock (MHz) 950 1375 1002 1502 1375 1502 1502
Memory Amount 1280MB GDDR5 2048MB GDDR5 1536MB GDDR5 2048MB GDDR5 3072MB GDDR5 2048MB GDDR5 2048MB GDDR5
Memory Interface 320-bit 256-bit 384-bit 192-bit 384-bit 256-bit 256-bit

DX10: Crysis Warhead

Crysis Warhead is an expansion pack based on the original Crysis video game. Crysis Warhead is based in the future, where an ancient alien spacecraft has been discovered beneath the Earth on an island east of the Philippines. Crysis Warhead uses a refined version of the CryENGINE2 graphics engine. Like Crysis, Warhead uses the Microsoft Direct3D 10 (DirectX-10) API for graphics rendering.

Benchmark Reviews uses the HOC Crysis Warhead benchmark tool to test and measure graphics performance using the Airfield 1 demo scene. This short test places a high amount of stress on a graphics card because of its detailed terrain and textures, but also because of the test settings used. Using the DirectX-10 test with Very High Quality settings, the Airfield 1 demo scene receives 4x anti-aliasing and 16x anisotropic filtering to create maximum graphics load and separate the products according to their performance.

Using the highest quality DirectX-10 settings with 4x AA and 16x AF, only the most powerful graphics cards are expected to perform well in our Crysis Warhead benchmark tests. DirectX-11 extensions are not supported in Crysis: Warhead, and SSAO is not an available option.

  • Crysis Warhead v1.1 with HOC Benchmark
    • Settings: Airfield Demo, Very High Quality, 4x AA, 16x AF

Crysis_Warhead_Benchmark.jpg

Crysis Warhead Benchmark Test Results


DX11: 3DMark11

Futuremark 3DMark11 is the latest addition to the 3DMark benchmark series built by Futuremark Corporation. 3DMark11 is a PC benchmark suite designed to test DirectX-11 graphics card performance without vendor preference. Although 3DMark11 includes the unbiased Bullet Open Source Physics Library instead of NVIDIA PhysX for the CPU/Physics tests, Benchmark Reviews concentrates on the four graphics-only tests in 3DMark11 and uses them with the medium-level 'Performance' preset.

The 'Performance' level setting applies 1x multi-sample anti-aliasing and trilinear texture filtering to a 1280x720p resolution. The tessellation detail, when called upon by a test, is preset to level 5, with a maximum tessellation factor of 10. The shadow map size is limited to 5 and the shadow cascade count is set to 4, while the surface shadow sample count is at the maximum value of 16. Ambient occlusion is enabled, and preset to a quality level of 5.
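The preset values described above can be collected into a simple mapping for reference; this is a sketch in Python, and the key names are our own shorthand, not Futuremark's actual configuration format:

```python
# 3DMark11 'Performance' preset, per the description above.
# Key names are illustrative, not Futuremark's real settings file schema.
PERFORMANCE_PRESET = {
    "resolution": (1280, 720),
    "msaa_samples": 1,                 # 1x multi-sample anti-aliasing
    "texture_filtering": "trilinear",
    "tessellation_detail": 5,
    "max_tessellation_factor": 10,
    "shadow_map_size": 5,
    "shadow_cascade_count": 4,
    "surface_shadow_samples": 16,      # maximum value
    "ambient_occlusion": True,
    "ambient_occlusion_quality": 5,
}

print(PERFORMANCE_PRESET["resolution"])
```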

3DMark11-Performance-Test-Settings.png

  • Futuremark 3DMark11 Professional Edition
    • Settings: Performance Level Preset, 1280x720, 1x AA, Trilinear Filtering, Tessellation level 5

3dMark2011_Performance_GT1-2_Benchmark.jpg

3dMark2011_Performance_GT3-4_Benchmark.jpg

3DMark11 Benchmark Test Results


DX11: Aliens vs Predator

Aliens vs. Predator is a science fiction first-person shooter video game, developed by Rebellion, and published by Sega for Microsoft Windows, Sony PlayStation 3, and Microsoft Xbox 360. Aliens vs. Predator utilizes Rebellion's proprietary Asura game engine, which had previously found its way into Rogue Warrior. The self-contained benchmark tool is used for our DirectX-11 tests, which push the Asura game engine to its limit.

In our benchmark tests, Aliens vs. Predator was configured to use the highest quality settings with 4x AA and 16x AF. DirectX-11 features such as Screen Space Ambient Occlusion (SSAO) and tessellation have also been included, along with advanced shadows.

  • Aliens vs Predator
    • Settings: Very High Quality, 4x AA, 16x AF, SSAO, Tessellation, Advanced Shadows

Aliens-vs-Predator_DX11_Benchmark.jpg

Aliens vs Predator Benchmark Test Results


DX11: Batman Arkham City

Batman: Arkham City is a third-person action game that continues the storyline set forth in Batman: Arkham Asylum, which launched for game consoles and PC back in 2009. Based on an updated Unreal Engine 3 game engine, Batman: Arkham City enjoys DirectX 11 graphics, using multi-threaded rendering to produce life-like tessellation effects. While gaming console versions of Batman: Arkham City deliver high-definition graphics at either 720p or 1080i, you'll only get the highest-quality graphics and special effects on PC.

In an age when developers give game consoles priority over PC, it's becoming difficult to find games that show off the stunning visual effects and lifelike quality possible from modern graphics cards. Fortunately, Batman: Arkham City is a game that does amazingly well on both platforms, while at the same time making it possible to cripple the most advanced graphics card on the planet by offering extremely demanding NVIDIA 32x CSAA and full PhysX capability. Also available to PC users (with NVIDIA graphics) is FXAA, a shader-based image filter that achieves results similar to MSAA yet requires less memory and processing power.

Batman: Arkham City offers varying levels of PhysX effects, each with its own set of hardware requirements. You can turn PhysX off, or enable the 'Normal' level, which introduces GPU-accelerated PhysX elements such as Debris Particles, Volumetric Smoke, and Destructible Environments into the game, while the 'High' setting adds real-time cloth and paper simulation. Particles exist everywhere in real life, and this PhysX effect is seen in many aspects of the game to add back that same sense of realism. For PC gamers who are enthusiastic about graphics quality, don't skimp on PhysX. DirectX 11 makes it possible to enjoy many of these effects, and PhysX helps bring them to life in the game.

  • Batman: Arkham City
    • Settings: 8x AA, 16x AF, MVSS+HBAO, High Tessellation, Extreme Detail, PhysX Disabled

Batman-Arkham-City-Benchmark.jpg

Batman: Arkham City Benchmark Test Results


DX11: Battlefield 3

In Battlefield 3, players step into the role of elite U.S. Marines. As the first boots on the ground, players will experience heart-pounding missions across diverse locations including Paris, Tehran, and New York. As a U.S. Marine in the field, periods of tension and anticipation are punctuated by moments of complete chaos. As bullets whiz by, walls crumble, and explosions force players to the ground, the battlefield feels more alive and interactive than ever before.

The graphics engine behind Battlefield 3 is called Frostbite 2, which delivers realistic global illumination lighting along with dynamic destructible environments. The game uses a hardware terrain tessellation method that allows a high number of detailed triangles to be rendered entirely on the GPU when near the terrain. This allows for a very low memory footprint, relying on the GPU alone to expand the low-resolution data into highly realistic detail.

Using Fraps to record frame rates, our Battlefield 3 benchmark test uses a three-minute capture on the 'Secure Parking Lot' stage of Operation Swordbreaker. Relative to the online multiplayer action, these frame rate results are nearly identical to daytime maps with the same video settings.

  • BattleField 3
    • Settings: Ultra Graphics Quality, FOV 90, 180-second Fraps Scene

Battlefield-3_Benchmark.jpg

Battlefield 3 Benchmark Test Results


DX11: Lost Planet 2

Lost Planet 2 is the second installment in the saga of the planet E.D.N. III, set ten years after the story of Lost Planet: Extreme Condition. The snow has melted, and lush jungle life has emerged across the planet along with hostile flora and fauna. With the new environment comes the addition of DirectX-11 technology to the game.

Lost Planet 2 takes advantage of DX11 features including tessellation and displacement mapping on water, level bosses, and player characters. In addition, soft body compute shaders are used on 'Boss' characters, and wave simulation is performed using DirectCompute. These cutting edge features make for an excellent benchmark for top-of-the-line consumer GPUs.

The Lost Planet 2 benchmark offers two different tests, which serve different purposes. This article uses tests conducted on benchmark B, which is designed to be a deterministic and effective benchmark tool featuring DirectX 11 elements.

  • Lost Planet 2 Benchmark 1.0
    • Settings: Benchmark B, 4x AA, Blur Off, High Shadow Detail, High Texture, High Render, High DirectX 11 Features

Lost-Planet-2_DX11_Benchmark.jpg

Lost Planet 2 Benchmark Test Results


DX11: Metro 2033

Metro 2033 is an action-oriented video game with a combination of survival horror, and first-person shooter elements. The game is based on the novel Metro 2033 by Russian author Dmitry Glukhovsky. It was developed by 4A Games in Ukraine and released in March 2010 for Microsoft Windows. Metro 2033 uses the 4A game engine, developed by 4A Games. The 4A Engine supports DirectX-9, 10, and 11, along with NVIDIA PhysX and GeForce 3D Vision.

The 4A engine is multi-threaded such that only PhysX has a dedicated thread, and it uses a task model without any pre-conditioning or pre/post-synchronizing, allowing tasks to be done in parallel. The 4A game engine can utilize a deferred shading pipeline and uses tessellation for greater performance; it also offers HDR (complete with blue shift), real-time reflections, color correction, film grain and noise, and supports multi-core rendering.

Metro 2033 features superior volumetric fog, double PhysX precision, object blur, sub-surface scattering for skin shaders, parallax mapping on all surfaces, and greater geometric detail with less aggressive LODs. Using PhysX, the engine offers features such as destructible environments, cloth and water simulations, and particles that can be fully affected by environmental factors.

NVIDIA has been diligently working to promote Metro 2033, and for good reason: it's one of the most demanding PC video games we've ever tested. When their flagship GeForce GTX 480 struggles to produce 27 FPS with DirectX-11 anti-aliasing turned down to its lowest setting, you know that only the strongest graphics processors will generate playable frame rates. All of our tests enable Advanced Depth of Field and Tessellation effects, but disable advanced PhysX options.

  • Metro 2033 Benchmark
    • Settings: Very-High Quality, 4x AA, 16x AF, Tessellation, PhysX Disabled

Metro-2033_DX11_Benchmark.jpg

Metro 2033 Benchmark Test Results


DX11: Unigine Heaven 3.0

The Unigine Heaven benchmark is a freely available tool for exercising DirectX-11 graphics capabilities on Windows 7 or updated Vista operating systems. It renders floating islands with a tiny village hidden in the cloudy skies, and an interactive mode lets users explore the intricate world at their own pace. Through its advanced renderer, Unigine was one of the first to showcase art assets with tessellation, bringing compelling visual finesse and exhibiting the possibilities this technology offers for enriching 3D gaming.

The distinguishing feature of the Unigine Heaven benchmark is hardware tessellation: a scalable technology for the automatic subdivision of polygons into smaller and finer pieces, letting developers add far more detail to their games at almost no cost in performance. Thanks to this procedure, the rendered image approaches the fidelity of real visual perception.

Since only DX11-compliant video cards can properly run the Heaven benchmark, only those products that meet the requirements have been included.

  • Unigine Heaven Benchmark 3.0
    • Settings: DirectX 11, High Quality, Extreme Tessellation, 16x AF, 4x AA

Unigine_Heaven_DX11_Benchmark.jpg

Heaven Benchmark Test Results


VGA Power Consumption

In this section, PCI-Express graphics cards are isolated for idle and loaded electrical power consumption. In our power consumption tests, Benchmark Reviews utilizes an 80-PLUS GOLD certified OCZ Z-Series Gold 850W PSU, model OCZZ850. This power supply unit has been tested to provide over 90% typical efficiency by Chroma System Solutions. To measure isolated video card power consumption, Benchmark Reviews uses the Kill-A-Watt EZ (model P4460) power meter made by P3 International. In this particular test, all power consumption results were verified with a second power meter for accuracy.

The power consumption statistics discussed in this section are absolute maximum values, and may not represent real-world power consumption created by video games or graphics applications.

A baseline measurement is taken without any video card installed on our test computer system, which is allowed to boot into Windows 7 and rest idle at the login screen before power consumption is recorded. Once the baseline reading has been taken, the graphics card is installed and the system is again booted into Windows and left idle at the login screen before taking the idle reading. Our final loaded power consumption reading is taken with the video card running a stress test using graphics test #4 on 3DMark11. Below is a chart with the isolated video card power consumption (system without video card subtracted from measured combined total) displayed in Watts for each specified test product:
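The subtraction described above is simple enough to express directly. In this sketch the wall-meter readings are hypothetical values chosen for illustration; the resulting isolated figures (13 W idle, 154 W loaded) match those this review reports for the ASUS card:

```python
def isolated_card_power(total_watts, baseline_watts):
    """Isolated video card draw: the measured wall total with the card
    installed, minus the baseline reading taken with no video card present."""
    return total_watts - baseline_watts

# Hypothetical Kill-A-Watt readings (baseline system draw assumed at 90 W):
print(isolated_card_power(103, 90))   # idle   -> 13
print(isolated_card_power(244, 90))   # loaded -> 154
```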

Video Card Power Consumption by Benchmark Reviews

VGA Product Description (sorted by combined total power) Idle Power Loaded Power
NVIDIA GeForce GTX 480 SLI Set 82 W 655 W
NVIDIA GeForce GTX 590 Reference Design 53 W 396 W
ATI Radeon HD 4870 X2 Reference Design 100 W 320 W
AMD Radeon HD 6990 Reference Design 46 W 350 W
NVIDIA GeForce GTX 295 Reference Design 74 W 302 W
ASUS GeForce GTX 480 Reference Design 39 W 315 W
ATI Radeon HD 5970 Reference Design 48 W 299 W
NVIDIA GeForce GTX 690 Reference Design 25 W 321 W
ATI Radeon HD 4850 CrossFireX Set 123 W 210 W
ATI Radeon HD 4890 Reference Design 65 W 268 W
AMD Radeon HD 7970 Reference Design 21 W 311 W
NVIDIA GeForce GTX 470 Reference Design 42 W 278 W
NVIDIA GeForce GTX 580 Reference Design 31 W 246 W
NVIDIA GeForce GTX 570 Reference Design 31 W 241 W
ATI Radeon HD 5870 Reference Design 25 W 240 W
ATI Radeon HD 6970 Reference Design 24 W 233 W
NVIDIA GeForce GTX 465 Reference Design 36 W 219 W
NVIDIA GeForce GTX 680 Reference Design 14 W 243 W
Sapphire Radeon HD 4850 X2 11139-00-40R 73 W 180 W
NVIDIA GeForce 9800 GX2 Reference Design 85 W 186 W
NVIDIA GeForce GTX 780 Reference Design 10 W 275 W
NVIDIA GeForce GTX 770 Reference Design 9 W 256 W
NVIDIA GeForce GTX 280 Reference Design 35 W 225 W
NVIDIA GeForce GTX 260 (216) Reference Design 42 W 203 W
ATI Radeon HD 4870 Reference Design 58 W 166 W
NVIDIA GeForce GTX 560 Ti Reference Design 17 W 199 W
NVIDIA GeForce GTX 460 Reference Design 18 W 167 W
AMD Radeon HD 6870 Reference Design 20 W 162 W
NVIDIA GeForce GTX 670 Reference Design 14 W 167 W
ATI Radeon HD 5850 Reference Design 24 W 157 W
NVIDIA GeForce GTX 650 Ti BOOST Reference Design 8 W 164 W
AMD Radeon HD 6850 Reference Design 20 W 139 W
NVIDIA GeForce 8800 GT Reference Design 31 W 133 W
ATI Radeon HD 4770 RV740 GDDR5 Reference Design 37 W 120 W
ATI Radeon HD 5770 Reference Design 16 W 122 W
NVIDIA GeForce GTS 450 Reference Design 22 W 115 W
NVIDIA GeForce GTX 650 Ti Reference Design 12 W 112 W
ATI Radeon HD 4670 Reference Design 9 W 70 W
* Results are accurate to within +/- 5W.

Because this article discusses a factory-overclocked ASUS GeForce GTX 660 Ti DirectCU-II TOP video card, our power consumption results are not representative of GTX 660-series products that use the reference design. Both the reference design and the ASUS GeForce GTX 660 Ti DirectCU-II TOP accept two 6-pin PCI-E power connections for normal operation, and will not activate the display unless proper power has been supplied. NVIDIA recommends a 450W power supply unit for stable operation with GTX 660 Ti, which should include both required 6-pin PCI-E connections without the use of adapters. In our test results the ASUS GeForce GTX 660 Ti DirectCU-II TOP consumed only 13W at idle, and 154W under full load.

If you're familiar with electronics, it will come as no surprise that lower power consumption means lower heat output, as evidenced by our thermal results below...

GeForce GTX 660 Ti Temperatures

This section reports our temperature results from subjecting the video card to maximum load conditions. During each test a 20°C ambient room temperature is maintained from start to finish, as measured by digital temperature sensors located outside the computer system. GPU-Z is used to measure the GPU's reported temperature at idle and under load.

Using a modified version of FurMark's "Torture Test" to generate maximum thermal load, peak GPU temperature is recorded in high-power 3D mode. FurMark does two things extremely well: it drives the thermal output of any graphics processor much higher than any video game realistically could, and it does so consistently every time. FurMark works well for testing the stability of a GPU as the temperature rises toward its highest possible output.

The temperatures illustrated below are absolute maximum values, and do not represent real-world temperatures created by video games or graphics applications:

Video Card Ambient Idle Temp Loaded Temp Max Noise
ATI Radeon HD 5850 20°C 39°C 73°C 7/10
NVIDIA GeForce GTX 460 20°C 26°C 65°C 4/10
AMD Radeon HD 6850 20°C 42°C 77°C 7/10
AMD Radeon HD 6870 20°C 39°C 74°C 6/10
ATI Radeon HD 5870 20°C 33°C 78°C 7/10
NVIDIA GeForce GTX 560 Ti 20°C 27°C 78°C 5/10
NVIDIA GeForce GTX 570 20°C 32°C 82°C 7/10
ATI Radeon HD 6970 20°C 35°C 81°C 6/10
NVIDIA GeForce GTX 580 20°C 32°C 70°C 6/10
NVIDIA GeForce GTX 590 20°C 33°C 77°C 6/10
AMD Radeon HD 6990 20°C 40°C 84°C 8/10
ASUS GeForce GTX660Ti TOP 20°C 26°C 62°C 2/10
NVIDIA GeForce GTX 670 20°C 26°C 71°C 3/10
NVIDIA GeForce GTX 680 20°C 26°C 75°C 3/10
NVIDIA GeForce GTX 690 20°C 30°C 81°C 4/10

As we've mentioned on the pages leading up to this section, NVIDIA's Kepler architecture yields a much more efficient GPU than previous designs. This becomes evident in the extremely low idle temperature, and translates into modest full-load temperatures. Even more impressive is how quietly the ASUS GeForce GTX 660 Ti DirectCU-II TOP operates, barely changing levels from silent to almost silent as it reaches full load. With an open computer case and my ear positioned only a few inches from the video card, fan noise was almost non-existent under loaded operation. Even after manually turning fan power up to 100% with utilities such as ASUS GPU Tweak, fan noise remained extremely low and only slightly audible.

The ASUS GeForce GTX 660 Ti DirectCU-II TOP video card is by far the least audible graphics product I've ever tested, and at up to moderate load it's indistinguishable from passively cooled products.

NVIDIA GPU Boost

Like the CPU found inside computer systems, the GPU found in video cards almost never reaches 100% usage. In fact, the CPU and GPU rarely use more than 10% of their available power, yet consume energy regardless of how much is actually needed. NVIDIA has recognized this, and followed suit with CPU manufacturers to offer variable clock speeds and power consumption. NVIDIA GPU Boost is a combination of dedicated hardware circuitry that continually monitors GPU power consumption and software technology that works in the background. NVIDIA GPU Boost enables the GeForce GTX 660 Ti video card to dynamically adjust power and clock speeds based on real-time application demands, automatically tweaking these settings several times per second. In some of the more demanding video games and applications there may occasionally be a time when the GPU is fully utilized, which is when NVIDIA GPU Boost increases clock frequency to deliver improved graphics performance and higher image quality.
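The behavior described above can be pictured as a simple control loop: raise the clock when the GPU is fully utilized and power headroom remains, back off when the power target is exceeded. The sketch below is purely conceptual; the thresholds, step size, and update logic are invented for illustration and are not NVIDIA's actual algorithm:

```python
def boost_step(current_mhz, power_draw_w, gpu_util_pct,
               base_mhz=915, boost_mhz=980, power_target_w=150, step=13):
    """One iteration of a hypothetical GPU Boost-style controller.

    Backs the clock off when power draw exceeds the target; bins the
    clock upward when the GPU is fully utilized with power headroom.
    """
    if power_draw_w > power_target_w:
        return max(base_mhz, current_mhz - step)   # over budget: back off
    if gpu_util_pct >= 99 and current_mhz < boost_mhz:
        return min(boost_mhz, current_mhz + step)  # headroom: boost up
    return current_mhz                             # steady state

# A demanding scene with power headroom pushes the clock toward boost:
clock = 915
for _ in range(5):
    clock = boost_step(clock, power_draw_w=130, gpu_util_pct=100)
```

In this toy model the Power Target acts exactly as described in the GPU Tweak section: raising it gives the controller more room before it throttles back.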

ASUS GPU Tweak Utility

In the past, software-based overclocking on ASUS video cards had been accomplished through the ASUS SmartDoctor software (and GamerOSD tool prior to that). Now all overclocking and status monitoring tasks are handled by ASUS GPU Tweak and Monitor software (illustrated below).

ASUS GPU Tweak (GPUTweak.exe) allows users to adjust a controlled range of GPU core and boost clock speeds, memory clock speed, cooling fan power, minimum GPU voltage, and maximum boost power target. Tuning controls are part of the GPU Tweak portion of the software, which users can complement by adding the Monitor (Monitor.exe) tool, displayed along the left side when enabled.

The ASUS Monitor utility can be customized to show metrics for every measurable aspect of the graphics card, allowing the user to add or remove fields as they see fit. Additionally, ASUS Monitor can record data to a log file saved on the computer system. In the illustration of Monitor below (left side), I've demonstrated how particular times can be hovered over to gather data from that period, or how multiple periods can be set using right-click.

ASUS-GPU-Tweak-Monitor-GeForce-GTX660.png

GPU Tweak Overclocking

Unlike previous generations of NVIDIA GeForce video cards, the 600-series contains specific overclock ranges that ensure software tools do not adjust outside of safe parameters. These parameters vary between products and are set by the hardware's firmware programming, so not all products of the same model will share the same available range of settings.

GPU Boost Clock manipulates the top-end speed for the video card, which is controlled by hardware and software demands (GPU Boost is discussed elsewhere in this article). The GPU Boost speed is not the same as the base core clock speed, which cannot be adjusted as it could be on previous graphics products. GPU Boost works together with the Power Target ratio to determine how often, and how high, an overclocked 'boost' is established.

The graphics card uses an auto-tuned fan profile by default, although experienced users may manually tune fan speed using the sliding fan-power scale to suit their cooling needs. The ASUS GTX660Ti DirectCU-II TOP operates extremely quietly even at 100% fan power, but it's better to use a purpose-driven profile that reduces noise where possible. For this purpose, ASUS GPU Tweak enables users to define their own custom fan power scale with transition points.
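A user-defined fan power scale of the kind GPU Tweak exposes amounts to piecewise-linear interpolation between transition points. Here is a minimal sketch of that idea; the temperature/power pairs are invented examples, not ASUS defaults:

```python
def fan_power(temp_c, curve):
    """Interpolate fan power (%) from (temperature, power) transition
    points sorted by temperature; clamp outside the defined range."""
    if temp_c <= curve[0][0]:
        return curve[0][1]
    if temp_c >= curve[-1][0]:
        return curve[-1][1]
    for (t0, p0), (t1, p1) in zip(curve, curve[1:]):
        if t0 <= temp_c <= t1:
            # Linear interpolation between adjacent transition points
            return p0 + (p1 - p0) * (temp_c - t0) / (t1 - t0)

# Hypothetical quiet profile: gentle until 60°C, ramping to full at 90°C
quiet = [(40, 20), (60, 35), (75, 60), (90, 100)]
```

A profile like this keeps the fan near-silent at typical idle temperatures while still ramping aggressively before the GPU approaches its thermal limit.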

Once the desired settings have been tested as stable, ASUS GPU Tweak can store up to four custom profiles (using buttons 1, 2, 3, 4), or apply one of two pre-set profiles: Power Saving (button P) or Gaming (G).

ASUS-GPU-Tweak-Monitor-GeForce-GTX660-OC.png

660Ti DirectCU-II TOP Overclocking

Because the ASUS GeForce GTX 660Ti DirectCU-II TOP comes from the factory with a substantial overclock, it's difficult to imagine much overclocking headroom remaining in a GPU that's already been stretched 144 MHz/157 MHz (Boost) past the reference speeds. To my surprise, I was able to achieve a stable 1200 MHz (Boost) with NO additional voltage or increase to the power target. With a bump to the minimum voltage and power target settings, however, there was room to push the speed to a 'mostly stable' 1210 MHz Boost.

Test Item Reference GPU GTX660 TOP TOP Overclock
ASUS 660Ti DirectCU-II TOP 915/980*MHz 1059/1137*MHz 1122/1200*MHz
DX11: 3dMark11 GT1 37.8 40.0 43.1
DX11: 3dMark11 GT2 35.8 39.7 43.0
DX11: 3dMark11 GT3 48.4 50.8 54.7
DX11: 3dMark11 GT4 23.2 25.3 26.9
DX11: Aliens vs Predator 43.8 45.4 47.4
DX11: Batman Arkham City 60 62 64
DX11: Battlefield 3 53.5 59.6 61.1
DX11: Heaven 3.0 35.6 37.9 40.0
DX11: Lost Planet 2 69.7 70.9 74.9
DX11: Metro 2033 35.7 37.7 39.0

*Please note that the actual NVIDIA GPU Boost clock speed will vary, depending on system conditions and application demands. Typical GPU Boost speed shown.

Overclocking Summary: Upgrading performance from NVIDIA GeForce GTX 660 Ti reference speeds to ASUS GTX660Ti DirectCU-II TOP levels, nearly every game tested jumped 2-3 FPS thanks to the factory overclock. Pushing even farther to a 1200 MHz GPU Boost clock speed, we captured another 2-3 FPS over the TOP results at 1920x1080 resolution. Essentially, our maximum overclock resulted in 5-7 FPS beyond reference values and closed the gap between a stock GTX 660 Ti and GTX 670. Every extra frame translates into an advantage over your enemy.
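To put those frame-rate deltas into relative terms, the percentage gains can be computed directly from the benchmark table; the figures below are taken from our results:

```python
# FPS from our table: (reference GPU, factory TOP, manual TOP overclock)
results = {
    "Battlefield 3": (53.5, 59.6, 61.1),
    "Heaven 3.0":    (35.6, 37.9, 40.0),
    "Metro 2033":    (35.7, 37.7, 39.0),
}

for game, (ref, top, oc) in results.items():
    top_gain = 100 * (top - ref) / ref   # factory overclock vs reference
    oc_gain = 100 * (oc - ref) / ref     # manual overclock vs reference
    print(f"{game}: TOP +{top_gain:.1f}%, max OC +{oc_gain:.1f}%")
```

Expressed this way, the factory TOP clocks deliver roughly a 5-11% uplift over reference speeds depending on the title, with the manual 1200 MHz overclock adding a few points more.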

ASUS GTX 660Ti TOP Conclusion

IMPORTANT: Although the rating and final score mentioned in this conclusion are made to be as objective as possible, please be advised that every author perceives these factors differently at various points in time. While we each do our best to ensure that all aspects of the product are considered, there are often times unforeseen market conditions and manufacturer changes which occur after publication that could render our rating obsolete. Please do not base any purchase solely on our conclusion as it represents our product rating specifically for the product tested, which may differ from future versions of the same product. Benchmark Reviews begins our conclusion with a short summary for each of the areas that we rate.

NVIDIA's GeForce GTX 660 Ti easily dominates competing graphics solutions that share the same $300 price point, such as AMD's Radeon HD 7870, and pushes enough graphical power to match performance with the more expensive Radeon HD 7950. Taking things a step further, the ASUS GTX660Ti DirectCU-II TOP delivers an additional 7-10% FPS beyond the stock GTX 660 Ti, usually giving it the power to surpass the ultra-premium $420 AMD Radeon HD 7970 in many games. For the comparisons that follow, we'll skip past reference design results and focus instead on how well the ASUS GeForce GTX 660 Ti DirectCU-II TOP video card performed its duty.

In the DirectX 10 game Crysis Warhead, the ASUS GeForce GTX 660 Ti DirectCU-II TOP nudged ahead of the Radeon HD 7970 by 1 FPS at 1920x1080 before slipping behind it at the less-demanding 1680x1050 resolution. DirectX 11 test results kept the GTX660Ti DirectCU-II TOP ahead in most tests. For example, the ultra-demanding DX11 graphics of Batman: Arkham City made use of Kepler's optimized architecture, helping to deliver a staggering 7 FPS lead over the more expensive Radeon HD 7970. Battlefield 3 continued the run, pushing the GTX660Ti DirectCU-II TOP more than 6 FPS beyond the Radeon HD 7970. Lost Planet 2 played well on all graphics cards when set to high quality with 4x AA, but that didn't keep the GTX660Ti DirectCU-II TOP from surpassing AMD's Radeon HD 7970 results by 8 FPS. In one of the few exceptions, Aliens vs Predator gave the lead back to AMD Radeon products over their NVIDIA counterparts. Metro 2033 is another demanding game that requires high-end graphics to enjoy quality settings, and like AvP this game benchmark favors Radeon products.

Synthetic benchmark tools offer an unbiased read on graphics products, allowing manufacturers to display their performance without optimizations or driver influence. Futuremark's 3DMark11 benchmark suite strained our high-end graphics cards with only mid-level settings displayed at 720p, forcing the $420 AMD Radeon HD 7970 to trail the $320 GTX660Ti DirectCU-II TOP in every test. Our Unigine Heaven 3.0 benchmark used maximum settings that strained the GTX 660 Ti's limited 192-bit memory bandwidth, allowing it to still surpass the Radeon HD 6970 while trailing the Radeon HD 7970. Taking all our benchmark results into consideration, the ASUS GeForce GTX 660 Ti DirectCU-II TOP commanded a decisive lead over its price-relative competition and occasionally surpassed AMD's flagship Radeon HD 7970 graphics card, which costs $100 more.

ASUS-GeForce-GTX-660Ti-DirectCU-II-TOP-Angle.jpg

Appearance is a much more subjective matter, especially since this particular rating doesn't have any quantitative benchmark scores to fall back on. This fact notwithstanding, the ASUS GeForce GTX 660 Ti DirectCU-II TOP actually delivers something for its looks, offering more than flashy decals or colored plastic parts. ASUS' rugged DirectCU-II heatsink design renders function fashionable. DirectCU-II also keeps temperature and noise levels lower than we've seen on any previous GeForce GTX product, despite exhausting heated air back into the computer case.

Regardless of how you perceive this card's looks, construction is the one area where NVIDIA and ASUS continually shine. The GTX660Ti DirectCU-II TOP's extremely quiet operation is due to Kepler's super-efficient cores, which consume less energy and emit less heat than previous designs, earning my confidence that GeForce GTX 660 Ti will continue to impress gamers. ASUS SAP (Super Alloy Power) utilizes long-lasting electronic components (such as chokes, capacitors, and MOSFETs) that combine with ASUS' DIGI+ VRM digital voltage management to ensure the most precise and stable overclock possible. GTX 660 Ti requires two 6-pin PCI-E power connections, which helps keep this video card compatible with most users' existing power supply units. GeForce GTX 660 Ti has one of the shortest PCBs we've seen from a GTX-series model, which further reduces heat output and makes this a product suitable for more robust HTPC applications. Better yet, consumers now have a single-GPU solution capable of driving three monitors in 3D Vision Surround, thanks to two DL-DVI ports with supplementary HDMI and DisplayPort output.

Defining value isn't easy, simply because hardware manufacturers are constantly adjusting the prices of their products, and please believe me when I say that prices change by the minute in this industry. The GeForce GTX 660 Ti "Kepler" graphics card demonstrates NVIDIA's ability to innovate in the graphics industry while maintaining a firm lead in the mainstream market. The NVIDIA GeForce GTX 660 Ti shares the $299 price segment with AMD's Radeon HD 7870, yet performs beyond the more expensive 7950. With regard to the $320 ASUS GeForce GTX 660 Ti DirectCU-II TOP video card, model TI-DC2T-2GD5, there's direct competition with the $420 high-end Radeon HD 7970. Even if we ignore the GTX 660 Ti's faster FPS results, its added Kepler features and functionality run off the chart. Furthermore, only NVIDIA's video cards offer multi-display 3D gaming, Adaptive VSync, PhysX technology, GPU Boost, FXAA, and now TXAA.

Overall, I see excellent value from the GeForce GTX 660 series. Sharing the same clock speeds, number of CUDA cores, and 2GB of GDDR5 as the GTX 670 is a huge plus, but the narrower 192-bit memory interface may become a limiting factor and could penalize performance in large-scale video games that demand heavy memory resources. NVIDIA's 28nm GK104 'Kepler' GPU has made a huge difference in power consumption and heat output, benefits that really have my attention, and ASUS improves upon it by adding their DirectCU-II heatsink with ultra-quiet cooling fans. A fierce factory overclock defines the TOP branding for this card, and DIGI+ VRM features help ensure additional overclocking is met with success. I'm a fan of the GTX 660 Ti as a stand-alone product, but it's certainly worth considering two ASUS GTX 660 Ti's in SLI to help overcome the potential memory bandwidth bottleneck.

Pros:

+ Plenty of additional GPU overclocking headroom
+ Keeps pace with $420 Radeon HD 7970
+ DirectCU-II cools GPU extremely well
+ Digital DIGI+ VRM alloy power components
+ Best overall value for DX11 video games
+ Short profile fits into standard size computer cases
+ Triple-display and 3D Vision Surround support
+ Cooling fan operates at nearly-silent sound level
+ Supports cutting-edge DisplayPort monitor connectivity
+ Very low loaded power consumption
+ Kepler enables TXAA and NVIDIA GPU Boost

Cons:

- Heated air is exhausted back into computer case
- 192-bit bandwidth may potentially create bottleneck on memory-intensive games

Ratings:

  • Performance: 9.25
  • Appearance: 9.00
  • Construction: 9.50
  • Functionality: 9.50
  • Value: 7.75

Final Score: 9.0 out of 10.

Excellence Achievement: Benchmark Reviews Golden Tachometer Award.

So what do you think of the ASUS GeForce GTX660Ti DirectCU-II TOP, and do you plan to buy one?


Comments 

 
# Good value - RealNeil 2012-08-16 07:25
These things are gonna drive market prices down. With such stellar performance from these GTX660Ti cards, AMD will have to do something.
 
 
# RE: Good value - hcforde 2012-08-18 19:55
I think the BOOST bios is suppose to help in addressing that in the 7950 series and the GHZ cards in the 7970 series
 
 
# RE: ASUS GeForce GTX 660Ti DirectCU-II TOP - Doug Dallam 2012-08-16 16:28
I didn't read the entire article in depth, so if you have already stated this, excuse the redundancy.

Is the GTX 660 the same card as the 660ti TOP except for OCing?
 
 
# RE: RE: ASUS GeForce GTX 660Ti DirectCU-II TOP - Olin Coles 2012-08-16 16:35
GTX 660 is/should be GTX 660 Ti, and the TOP edition is an ASUS special product that uses a very high factory overclock.
 
 
# Reader friendly graphs - Sudarshan_SMD 2012-08-17 05:45
@benchmarkreviews team
The graph representation you are using looks ugly and are difficult to read. It's bit confusing to read the graph, for instance I read the figure on bar, ok, but what is that figure for? I have refer the bottom of graph again to find what card it is for.
Maybe I am over-reacting, but it can be much simpler.
 
 
# RE: Reader friendly graphs - Olin Coles 2012-08-17 07:27
Why don't you give us an example of how it could be done better?
 
 
# RE: RE: Reader friendly graphs - Sudarshan_SMD 2012-08-17 08:53
Hi Sire,
I am glad you took a notice of it and I hope you are not joking when you are asking for example.
Maybe something like this: ##anandtech.com/show/6159/the-geforce-gtx-660-ti-review/7


Sire, I am in no way saying anything about the content of the review. It's just about the graph. Cheers.
 
 
# RE: RE: RE: Reader friendly graphs - Doug Dallam 2012-08-18 16:01
I tend to agree with this. It's much easier to read. I've never liked the BMR graphs.
 
 
# RE: RE: RE: RE: Reader friendly graphs - Olin Coles 2012-08-18 16:11
Which is odd, since you've written for us and know that you're free to use/make any graph you like. Of course, then again, you never really made it into the demanding reviews that require charts.
 
 
# RE: RE: RE: RE: RE: Reader friendly graphs - Doug Dallam 2012-08-18 16:14
Right I've never created graphs or I may have suggested a different style. It's not a huge deal either. but the clearer the graph, the easier it is for readers to glean information at a glance.
 
 
# RE: RE: RE: RE: RE: Reader friendly graphs - hcforde 2012-08-18 19:53
OUCH!! Let's play nice
 
 
# RE: ASUS GeForce GTX 660Ti DirectCU-II TOP - Doug Dallam 2012-08-18 20:37
What's not nice? Olin and I are both making factual comments here. Unless I've said something mean, I see nothing here that would cause me to think anything otherwise. It's good someone pointed out the graphs could be more clear. I think the reason no one has is that it is true, they are a little hard too read, and also that it's not a deal breaker. So no one said anything.
 
 
# electroman0076 - electroman0076 2012-08-19 10:49
I have always found the graphs used by benchmark reviews to be hard to read the problem is matching colors from the key to the graph, it's often hard to determine the dark colors from one another in particular when they add more cards to the mix.
 
 
# everything's fine - resere 2012-08-20 06:10
Hi.
I mean, the review is ok. the board is good, even the comptetition is not bad.
And the graph is ok. not brilliant, but fine. Me genius? neah.
 
 
# Graph Clarity - Amyn 2012-08-21 08:49
After reading the comments first, I was intrigued to see the graphs.
I can't say I remember taking notice of them if ever the times I have read any review.
And I think they are fine, even the color coded part. I can understand the color brown and grayish black were a bit difficult to discern, but once I got they are in order on both top and bottom, it wasn't too challenging.
Bottom line: Graphs nice and clear.
One mans opinion.
 

