ASUS ENGTX560 Ti DCII TOP Video Card
Written by Servando Silva   
Monday, 14 February 2011

ASUS ENGTX560 Ti DirectCU II TOP Review

Manufacturer: ASUSTeK Computer Inc
Product Name: GeForce GTX 560 Ti
Model Number: ENGTX560 Ti DCII TOP/2DI/1GD5
Price As Tested: $249.99 at Newegg

Full Disclosure: The product sample used in this article has been provided by ASUS.

The GeForce GTX 560 Ti is the new star of the mid-range sector. It replaces the highly overclockable GTX 460 and adds extra MHz and cores, increasing performance by 20-40% in many games. As could be expected, brands like ASUS, GIGABYTE and MSI pair this GPU with a special heatsink-cooler and raised factory frequencies to sell it at a higher price. Benchmark Reviews tests the ASUS ENGTX560 Ti DCII TOP. DCII means this card comes with an improved version of the DirectCU cooler, now with two fans and added dissipation area. ASUS also tags this product as TOP because it sports a 900MHz GPU core instead of the reference design's 822MHz, and a 1050MHz memory clock instead of the stock 1000MHz. Beyond that, ASUS bundles its own overclocking/monitoring software to modify frequency and voltage values and add some extra performance for free. Let's have a look at the ASUS ENGTX560 Ti and see whether it's a good contender against the competition, including the highly acclaimed GeForce GTX 460.

It's difficult to say whether the GTX 560 Ti is good enough to replace the GTX 460 without considering the conditions surrounding this launch. The GTX 460 is an excellent GPU, but the fact that the rest of the Fermi GTX 4x0 family was highly disappointing in terms of heat and power consumption did a lot to push the GTX 460 to where it's currently positioned. It achieved great performance, surpassing the HD 5850 on many occasions, especially with tessellation enabled, and its superb overclocking capabilities could turn it into a GPU faster than the HD 5870. SLI vs. CFX scaling was also a big deal: many users preferred to buy a pair of GTX 460s and run them in SLI rather than pay more for a GTX 480 or a CrossFireX array of HD 5870s.

ENGTX560Ti_Accesories.jpg

Right now the conditions are a little different. AMD launched its HD 6800 series, which competes closely below and above the GTX 460. AMD also had the 6900 series out before the GTX 560 Ti, and even launched the HD 6950 1GB edition the same day NVIDIA announced the GTX 560 Ti. Even so, at a $249 MSRP, the GTX 560 Ti performs better than the HD 6870, which now sits at $219, and competes very closely with the HD 6950 1GB Radeon video card ($270). The ENGTX560 Ti DCII TOP model offers a modest 9% factory overclock, but I'm guessing there's more where that came from. Benchmark Reviews takes full advantage of ASUS hardware and software upgrades, as we explore the full potential of NVIDIA's latest midrange marvel on our test bed.

Nvidia GeForce GTX 560 Ti Features

NVIDIA's GeForce GTX 560 Ti introduces the new GF114 GPU that is largely based on the GF104 Fermi chip which drove the GTX 460 to great success. The differences are fourfold: full utilization of the die (no disabled cores), architecture improvements, more widespread use of low-leakage transistors, and layout changes based on signal traffic analysis.

While the GF104 enabled only seven out of eight possible Streaming Multiprocessors (SM), the GF114 is able to use that last SM to make even more cores available, a total of 384 compared to 336 in the GTX 460. Each SM still offers 48 CUDA cores, four dispatch units, and eight texture/special function units. The architecture improvements are the addition of full speed FP16 texture filtering and new tile formats that improve Z-cull efficiency. These enhancements alone offer performance improvements ranging from 5% on tessellation-heavy benchmarks like Heaven 2.1, to a whopping 14% gain in 3DMark Vantage, where it's all about shader power.
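The core counts follow directly from the SM arithmetic; here's a quick check in Python using the figures above:

    # GF114 enables all eight SMs at 48 CUDA cores each;
    # GF104 (GTX 460) shipped with one SM disabled.
    cores_per_sm = 48
    print(8 * cores_per_sm)  # 384 cores in the GTX 560 Ti
    print(7 * cores_per_sm)  # 336 cores in the GTX 460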

The last two improvements go hand in hand to improve both the power usage and the maximum clock rates that the GPU can support. Low leakage transistors run cooler, use less power, and can be driven faster due to their lower gate capacitance. NVIDIA increased the usage of this more expensive device type, primarily to reduce power consumption, but also to gain some overclocking headroom. They also looked at signal flow across the various sections of the GPU and did some rearranging to shorten signal paths for the high traffic areas. It's not that the GTX 460 was particularly bad in this regard, but the luxury of a second chance yielded some improvements. Taken together, NVIDIA was able to increase the base clock on the core from 675 MHz to 822 MHz, a whopping 22% increase that supposedly doesn't eat into any overclocking headroom. We'll test that supposition later in the review.

As for the rest of the capabilities of this very advanced graphics card, here is the complete list of GPU features, as supplied by NVIDIA:

NVIDIA GeForce GTX 5xx GPU Feature Summary:

3D Graphics

  • Full Microsoft DirectX 11 Shader Model 5.0 support:
    • NVIDIA PolyMorph Engine with distributed HW tessellation engines
    • BC6H and BC7 texture compression formats
    • Gather4 extensions
  • OpenGL 4.1 support
  • Advanced image quality features:
    • 32× coverage sample antialiasing
    • Transparent multisampling and transparent supersampling
    • 16× angle independent anisotropic filtering
    • 128-bit floating point high dynamic-range (HDR) lighting with antialiasing; 32-bit per-component floating point texture filtering and blending
  • Interactive ray tracing support
  • Full-speed frame buffer blending
  • Advanced lossless compression algorithms for color, texture, and Z data
  • Support for normal map compression
  • Z-cull
  • Early-Z

GPU Computing

NVIDIA_Black_Square_3D_Logo_250px.jpg

  • NVIDIA CUDA™ technology allows the GPU cores to provide performance improvements for applications such as video transcoding, gaming, ray tracing, and physics. API support includes:
    • CUDA C
    • CUDA C++
    • DirectCompute 5.0
    • OpenCL
    • Java, Python, and Fortran
  • Third Generation Streaming Multiprocessor (SM)
    • 48 CUDA cores per SM
    • Dual Warp Scheduler simultaneously schedules and dispatches instructions from two independent warps
    • 64 KB of RAM with a configurable partitioning of shared memory and L1 cache
  • Second Generation Parallel Thread Execution ISA
    • Unified Address Space with Full C++ Support
    • Optimized for OpenCL and DirectCompute
    • Full IEEE 754-2008 32-bit and 64-bit precision
    • Full 32-bit integer path with 64-bit extensions
    • Memory access instructions to support transition to 64-bit addressing
    • Improved Performance through Predication
  • Improved Memory Subsystem
    • NVIDIA Parallel DataCache™ hierarchy with Configurable L1 and Unified L2 Caches
    • Greatly improved atomic memory operation performance
  • NVIDIA GigaThread™ Engine
    • 10x faster application context switching
    • Concurrent kernel execution
    • Out of order thread block execution

NVIDIA Technology

  • NVIDIA SLI technology: patented hardware and software technology allows up to four NVIDIA GeForce GPUs to run in parallel to scale performance and enhance image quality on today's top games.
  • NVIDIA PhysX™ technology: allows advanced physics effects to be simulated and rendered on the GPU.
  • NVIDIA 3D Vision™ Ready: GeForce GPU support for NVIDIA 3D Vision, bringing a fully immersive stereoscopic 3D experience to the PC.
  • NVIDIA 3D Vision Surround™ Ready: scale games across 3 panels by leveraging the power of multiple GPUs in an NVIDIA SLI configuration. Combine with 3D Vision technology for the ultimate 3-panel stereoscopic 3D gaming experience.

GPU Interfaces

  • Designed for PCI Express 2.0 ×16 for a peak bandwidth (counting both directions) of up to 20 gigabytes (GB) per second (PCIe 2.0 devices are backwards compatible with PCI Express 1.x devices).
  • Up to 384-bit GDDR5 memory interface (memory interface width may vary by model)

Advanced Display Functionality

  • Two pipelines for dual independent display
  • Two dual-link DVI outputs for digital flat panel display resolutions up to 2560×1600
  • Dual integrated 400 MHz RAMDACs for analog display resolutions up to and including 2048×1536 at 85 Hz
  • HDMI 1.4a support including GPU accelerated Blu-ray 3D support, x.v.Color, HDMI Deep Color, and 7.1 digital surround sound. (Blu-ray 3D playback requires compatible software player. See www.nvidia.com/3dtv for more details).
  • DisplayPort 1.1a support
  • HDCP support up to 2560×1600 resolution on all digital outputs
  • 10-bit internal display processing, including hardware support for 10-bit scanout
  • Underscan/overscan compensation and hardware scaling

Video

  • NVIDIA PureVideo HD technology with VP4 programmable video processor
  • Decode acceleration for MPEG-2, MPEG-4 Part 2 Advanced Simple Profile, H.264, MVC, VC1, DivX (version 3.11 and later), and Flash (10.1 and later)
  • Blu-ray dual-stream hardware acceleration (supporting HD picture-in-picture playback)
  • Advanced spatial-temporal de-interlacing
  • Noise reduction
  • Edge enhancement
  • Bad edit correction
  • Inverse telecine (2:2 and 3:2 pull-down correction)
  • High-quality scaling
  • Motion Compensation
  • Video color correction
  • Dynamic contrast enhancement and color stretch

Digital Audio

  • Support for the following audio modes:
    • Dolby Digital (AC3), DTS 5.1, Multi-channel (7.1) LPCM, Dolby Digital Plus (DD+), MPEG2/MPEG4 AAC
  • Data rates of 44.1 kHz, 48 kHz, 88.2 kHz, 96 kHz, 176.4 kHz, and 192 kHz
  • Word sizes of 16-bit, 20-bit, and 24-bit

Power Management Technology

  • Advanced power and thermal management for optimal acoustics, power, and performance based on usage:
  • ASPM power management
  • Adaptive Clocking
  • Adaptive Power States
  • Advanced fan control and temperature monitoring

NVIDIA GeForce GTX 560 Ti GPU Detail Specifications

GPU Engine Specs:

MSI_N560GTX_Ti_GeForce_Video_Card_GTX_560Ti_Logo.jpg

  • Fabrication Process: TSMC 40nm Bulk CMOS
  • Die Size: 332 mm² (Estimated)
  • No. of Transistors: 1.95 Billion
  • Graphics Processing Clusters: 2
  • Streaming Multiprocessors: 8
  • CUDA Cores: 384
  • Texture Units: 64
  • ROP Units: 32
  • Engine Clock Speed: 822 MHz (ASUS OC @ 900MHz)
  • Texel Fill Rate (bilinear filtered): 56.3 Gigatexels/sec
  • Pixel Fill Rate: 28.2 Gigapixels/sec

Memory Specs:

  • Memory Clock: 2100 MHz - DDR
  • Memory Configurations: 1 GB GDDR5
  • Memory Interface Width: 256-bit
  • Memory data rate: 4.2 Gbps
  • Memory Bandwidth: 134.4 GB/sec
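As a quick sanity check on those numbers, GDDR5 bandwidth is just the per-pin data rate multiplied by the bus width. A minimal sketch in Python, using the figures from the list above:

    # Memory bandwidth = per-pin data rate x bus width (bits) / 8 bits-per-byte.
    data_rate_gbps = 4.2    # Gbps per pin (1050 MHz GDDR5, quad-pumped)
    bus_width_bits = 256    # memory interface width

    bandwidth_gbs = data_rate_gbps * bus_width_bits / 8
    print(f"{bandwidth_gbs:.1f} GB/sec")  # 134.4 GB/sec, matching the spec sheet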

Display Support:

  • Maximum DVI Resolution: 2560x1600
  • Maximum VGA Resolution: 2048x1536
  • Maximum Display Output: 4x - 1920x1200
  • Standard Display Connectors:
    • 2x Dual-Link DVI-I
    • 1x Mini HDMI v1.4a

Graphics card Dimensions:

  • Height: 4.376 inches (111 mm)
  • Length: 9.37 inches (238 mm)
  • Width: Dual-slot (37mm)
  • Weight: 669g

Thermal and Power Specs:

  • Maximum GPU Temperature: 104 C
  • Maximum Graphics Card Power: 170 W
  • Minimum Recommended System Power: 500 W
  • Power Connectors: Two 6-pin PCI Express (PCI-E)

Source: NVIDIA.com

Closer Look: ASUS GTX 560 Ti

As usual, ASUS packages its GPUs in a nice box with a warrior on it. All the technologies, features and bundle items are displayed on the front and back of the package. ASUS claims its Super Alloy Power technology, which uses a special alloy formula in critical power-delivery components, delivers a 15% performance boost, 35°C cooler operation and a 2.5 times longer lifespan. It's hard not to notice the 900MHz GPU core clock, as it's printed as large as the ASUS logo.

ENGTX560Ti_Box.jpg

Once we get the box open, we quickly come to the accessory set. This consists of a CD with drivers and ASUS utilities, a quick-installation manual, a pair of Molex-to-PCI-E power adapters, a DVI-to-VGA converter and a Mini-HDMI-to-HDMI converter. Of course, the card is included in the package too, and it carries a massive heatpipe heatsink with two 80mm fans.

ENGTX560Ti_Accesories.jpg

With high-end video cards, the cooling system is an integral part of the performance envelope for the product. Make it run cooler, and you can make it run faster, has been the by-word for achieving gaming-class performance with all recent GPUs. Even some midrange models have turned out to be supreme overclockers with enhanced cooling. Based on the ASUS DirectCU architecture, which uses copper heat pipes in direct contact with the GPU to speed up heat dissipation for over 20% cooler performance, DirectCU II takes cooling further with twin 80mm fans for doubled airflow. This specific model comes in black with red stripes, which reminds me of an F1 car. Also, notice the bar along the top of the PCB, which keeps the board from warping under the heatsink's weight or while the card is being moved inside a case.

ENGTX560Ti_Upperview.jpg

The back side of the PCB is not as crowded as on many other GPUs. It seems ASUS refined the overall design to reduce circuit traces; I've seen other GTX 560 Ti cards with PCBs that look like a map of New York City streets and buildings.

ENGTX560Ti_Back.jpg

As you can see, both fans push cool air onto the heatsink, but there's no special ducting to force the airflow out the rear of the PC case. Make sure you've got good ventilation and airflow inside your case to remove all the heat produced by this GPU.

ENGTX560Ti_Sideview.jpg

All the ports, including the PCI-E edge connector, are covered with blue plastic caps. The ENGTX560 Ti is a dual-slot GPU, and the second slot bracket exhausts hot air out of the case. As usual for NVIDIA cards, there is a pair of dual-link DVI ports and a mini-HDMI port, which means you can't create a Surround setup with this model alone.

ENGTX560Ti_Ports.jpg

The GTX 560 Ti uses two 6-pin PCI-E connectors to feed the GPU; the same number of connectors used by the GTX 460. There is a pair of ASUS logos here which look great, but the connectors' position won't help with cable management overall. The upside is that it works well in small PC cases, avoiding contact with storage drive bays.

ENGTX560Ti_PCIe_Connectors.jpg

In the next section, let's take a more detailed look at the ENGTX560 Ti DCII TOP video card. I did a full tear-down, so we could see everything there is to see...

ENGTX560 Ti TOP Detailed Features

The first job is to remove the DirectCU II cooler to see everything below it. This cooler is quite easy to take off, as it's retained by four small screws. As usual, ASUS put a lot of thermal paste between the GF114 core and the heatsink, which I had to clean and re-apply after removing it. We'll check cooler performance in the next sections to see if three heat-pipes and a pair of fans are enough to keep the GTX 560 Ti at reasonable temperatures (even when overclocked).

ENGTX560Ti_Dissasembled.jpg

Take a closer look at the PCB. This is not the classic shiny PCB; instead, ASUS opted for a matte-black PCB with rounded edges. The ends of the heatpipes are also visible from this point of view.

ENGTX560Ti_PCB_finish.jpg

Once the heatsink is off, we can see three direct-contact heatpipes with an aluminum base plate on the side where the GPU core sits. Small fins allow airflow over the memory circuits, and the rest of the heatsink also helps cool the VRM and MOSFETs. This design should work quite well, but examined closely it doesn't look as impressive as MSI's Twin Frozr design or other companies' high-end cooling solutions.

ENGTX560Ti_Heatsink.jpg

Here's a closer look at the heatpipes, which make direct contact with the GPU core to enhance cooling performance while transferring heat to the rest of the heatsink. By the way, these are 6mm heatpipes, not 8mm.

ENGTX560Ti_Heatpipes.jpg

The memory choice for the ENGTX560 Ti DCII TOP is consistent with the NVIDIA reference designs. The basic GTX 560 specs only require 1,000 MHz chips for the memory, but many cards have been using these Samsung K4G10325FE-HC04 GDDR5 parts, which are rated for up to 1250 MHz. GTX 460 cards have shown some gains in gaming performance with increases in memory speed, much more so than the ATI HD 5xxx series has. These 1250 MHz versions of the chip have been mediocre overclockers on the Radeon platform; they obviously fare better here, as ASUS already runs them at 1050MHz from the factory.

ENGTX560Ti_Memory_IC.jpg

I'm happy to see ASUS covered all the VRMs and MOSFETs with small heatsinks, and they stay cool enough thanks to the 80mm fan. It was quite annoying to see such important components left bare without any heatsink, as was common on many GTX 460 video cards. Oh, and do you notice that "power ranger" emblem on top of the chokes? That's the Super Alloy Power logo.

ENGTX560Ti_VRM_Heatsinks.jpg

On the back side, behind the core, there are lots of resistors, capacitors and ICs, which reaffirm the excellent soldering quality of ASUS products. Even in this crowded section of a graphics card, ASUS still places every needed component with good precision. This is one of the most critical sections of the PCB for build quality, as variations in stray capacitance here could impact the performance of the GPU, and certainly its overclocking ability. Notably, the ENGTX560 Ti is less crowded than any other GTX 560 Ti card I've seen before; there's even space for the ASUS logo just below the center.

ENGTX560Ti_PCB.jpg

Now that we've had the grand tour of the ENGTX560 Ti DCII TOP, inside and out, it's time to put it to the test. Before going to tests, let me show you the ASUS bundled software in the next section.

ASUS ENGTX560 Ti DCII TOP Included Software

Brands like ASUS and MSI always ship their cards with special software for monitoring and overclocking, and this time is no exception. Once you install the drivers from the CD (or download them from the NVIDIA Forceware page), you can install the ASUS utilities if you find them attractive. ASUS includes Gamer OSD, which helps capture, broadcast and save movies or screenshots from any game. As you can see, there's a maximum 720x480 capture size, and the frame rate can be increased up to 30 FPS. While the broadcasting feature sounds interesting, I don't think I'd want to add that processing load to my computer while gaming in a competition. It could be useful if you're acting as a referee or spectator, though.

ENGTX560Ti_ASUS_GamerOSD.png

ASUS SmartDoctor is an application to monitor and overclock your GPU. I must admit I still prefer MSI Afterburner, since it's got more options and controls, but SmartDoctor works for basic overclocking too. One important point is that this application guarantees voltage-tweaking support, at least until your favorite software adds support for the GPU model in your hands. Only three variables can be adjusted in SmartDoctor: memory frequency, GPU core frequency (Engine), and Vcore voltage. Vcore can be increased from 1025mV to 1150mV, which is the same range MSI Afterburner allows. Unfortunately, there's no option to decrease GPU voltage, so that's a bad point. Overall, the software still looks like a BETA version and the GUI is not as sleek as I'd like it to be, but it does its job.

ENGTX560Ti_ASUS_SmartDoctor.png

Perhaps the most interesting setting in the SmartDoctor utility is the fan control, which allows you to set different fan speeds depending on your GPU temperature; the rest of the settings are not as impressive as I'd hoped.
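To illustrate the idea behind that fan-control panel: a user-defined fan curve is just a set of temperature/duty-cycle breakpoints with interpolation in between. Here's a minimal sketch of the concept in Python; the breakpoints are hypothetical examples, not SmartDoctor's defaults:

    # Hypothetical fan curve: (temperature C, fan duty %) breakpoints.
    # These values are illustrative; SmartDoctor's defaults may differ.
    CURVE = [(40, 17), (60, 35), (75, 50), (90, 100)]

    def fan_duty(temp_c):
        """Linearly interpolate fan duty between the curve's breakpoints."""
        if temp_c <= CURVE[0][0]:
            return CURVE[0][1]
        for (t0, d0), (t1, d1) in zip(CURVE, CURVE[1:]):
            if temp_c <= t1:
                return d0 + (d1 - d0) * (temp_c - t0) / (t1 - t0)
        return CURVE[-1][1]

    print(fan_duty(70))  # 45.0 (percent), between the 60C and 75C points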

ENGTX560Ti_ASUS_SMartDoctor_Options.png

When you install and power up the card, a pair of LEDs on the back side of the PCB will turn green if everything is working properly, or red if something fails. For example, I intentionally left one PCI-E power connector unplugged and the LEDs immediately went red.

ENGTX560Ti_Installed.jpg

It's time to put this card through some serious tests. Well, Benchmark is our first name, so don't worry. Let's start off with a complete description of the Video Card Testing Methodology.

Video Card Testing Methodology

The Microsoft DirectX-11 graphics API is native to the Microsoft Windows 7 Operating System, and will be the primary O/S for our test platform. DX11 is also available as a Microsoft Update for the Windows Vista O/S, so our test results apply to both versions of the Operating System. The majority of benchmark tests used in this article are comparative to DX11 performance, however some high-demand DX10 tests have also been included.

According to the Steam Hardware Survey published for the month ending September 2010, the most popular gaming resolution is 1280x1024 (17-19" standard LCD monitors). However, because this 1.31MP resolution is considered 'low' by most standards, our benchmark performance tests concentrate on higher-demand resolutions: 1.76MP 1680x1050 (22-24" widescreen LCD) and 2.30MP 1920x1200 (24-28" widescreen LCD monitors). These resolutions are more likely to be used by high-end graphics solutions, such as those tested in this article.

In each benchmark test there is one 'cache run' that is conducted, followed by five recorded test runs. Results are collected at each setting with the highest and lowest results discarded. The remaining three results are averaged, and displayed in the performance charts on the following pages.
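In other words, each charted value is a trimmed mean of the five recorded runs. Here's a minimal sketch of that reduction in Python, assuming the results are collected as plain frame-rate numbers:

    # One cache run is discarded, then five runs are recorded; drop the
    # highest and lowest results and average the remaining three.
    def chart_value(runs):
        assert len(runs) == 5, "methodology records exactly five runs"
        trimmed = sorted(runs)[1:-1]        # discard highest and lowest
        return sum(trimmed) / len(trimmed)  # average the middle three

    print(chart_value([58.2, 61.0, 60.4, 59.7, 63.1]))  # ~60.37 FPS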

A combination of synthetic and video game benchmark tests have been used in this article to illustrate relative performance among graphics solutions. Our benchmark frame rate results are not intended to represent real-world graphics performance, as this experience would change based on supporting hardware and the perception of individuals playing the video game.

ENGTX560Ti_GPU-Z.png

Intel P67 Express Test System

  • Motherboard: ASUS P8P67 WS Revolution (1254 BIOS)
  • System Memory: 2x 4GB G. Skill Ripjaws X 1300MHz (9-9-9-24)
  • Processor: Intel Core i7-2600K 3.4GHz (Turbo up to 3.8 GHz)
  • CPU Cooler: Prolimatech Megashadow (2x Noctua NF-P12 Fans)
  • Video: ASUS ENGTX560 Ti DCII TOP (Forceware 266.66)
  • Drive 1: Intel X-25M 80GB
  • Drive 2: Seagate 1.5TB
  • PSU: Antec Signature 850 Watts
  • Monitor: Samsung 27" Widescreen LCD Monitor 1920x1200
  • Operating System: Windows 7 Ultimate x64 (Build 7600)

DirectX 10 Benchmark Applications

  • 3DMark Vantage v1.02
    • Extreme Settings: (Extreme Quality, 8x Multisample Anti-Aliasing, 16x Anisotropic Filtering, 1:2 Scale)
  • Crysis Warhead v1.1 Benchmark
    • Moderate Settings: (Very High Quality, 4x AA, 16x AF, Airfield Demo)
  • Just Cause 2
    • Extreme Settings: (Max Display Settings, 8x Anti-Aliasing, 16x Anisotropic Filtering, Motion Blur ON, GPU Water Simulation OFF, Bokeh OFF)

DirectX 11 Benchmark Applications

  • Aliens vs. Predator Benchmark 1.0
    • Extreme Settings: (Very High Quality, 4x AA, 16x AF, SSAO, Tessellation, Advanced Shadows)
  • BattleForge v1.2
    • Extreme Settings: (Very High Quality, 8x Anti-Aliasing, Auto Multi-Thread)
  • Lost Planet 2 Benchmark 1.0
    • Moderate Settings: (2x AA, Low Shadow Detail, High Texture, High Render, High DirectX 11 Features)
  • Metro 2033
    • Moderate Settings: (Very-High Quality, AAA, 16x AF, Advanced DoF, Tessellation, 180s Fraps Chase Scene)
  • Unigine Heaven Benchmark 2.1
    • Moderate Settings: (High Quality, Normal Tessellation, 16x AF, 4x AA)

Video Card Test Products

  • NVIDIA GeForce GTX 460 1GB (675 MHz GPU/1350 MHz Shader/900 MHz vRAM - Forceware 266.66)
  • AMD Radeon HD 6870 (900 MHz GPU/1050 MHz vRAM - AMD Catalyst Driver 10.11)
  • Nvidia GeForce GTX 560 Ti (900 MHz GPU/1800 MHz Shader/1050MHz vRAM - Forceware 266.66)
  • AMD Radeon HD 6850 (732 MHz GPU/1464 MHz Shader/950 MHz vRAM - AMD Catalyst 10.11)
  • AMD Radeon HD 6850 CFX (775 MHz GPU/1000 MHz vRAM - AMD Catalyst 10.11)
  • NVIDIA GeForce GTX 460 1GB SLI (675 MHz GPU/1350 MHz Shader/900 MHz vRAM - Forceware 266.66)

Graphics Card       GeForce GTX460    Radeon HD6850    Radeon HD6870    GeForce GTX560 Ti
GPU Cores           336               960              1120             384
Core Clock (MHz)    675               775              900              900
Shader Clock (MHz)  1350              N/A              N/A              1800
Memory Clock (MHz)  900               1000             1050             1050
Memory Amount       1024MB GDDR5      1024MB GDDR5     1024MB GDDR5     1024MB GDDR5
Memory Interface    256-bit           256-bit          256-bit          256-bit

3DMark Vantage Performance Tests

3DMark Vantage is a PC benchmark suite designed to test DirectX-10 graphics card performance. FutureMark 3DMark Vantage is the latest addition to the 3DMark benchmark series built by FutureMark Corporation. Although 3DMark Vantage requires NVIDIA PhysX to be installed for program operation, only the CPU/Physics test relies on this technology.

3DMark Vantage offers benchmark tests focusing on GPU, CPU, and Physics performance. Benchmark Reviews uses the two GPU-specific tests for grading video card performance: Jane Nash and New Calico. These tests isolate graphical performance, and remove processor dependence from the benchmark results.

  • 3DMark Vantage v1.02
    • Extreme Settings: (Extreme Quality, 8x Multisample Anti-Aliasing, 16x Anisotropic Filtering, 1:2 Scale)

3DMark Vantage GPU Test: Jane Nash

Of the two GPU tests 3DMark Vantage offers, the Jane Nash performance benchmark is slightly less demanding. In a short video scene the special agent escapes a secret lair by water, nearly losing her shirt in the process. Benchmark Reviews tests this DirectX-10 scene at 1680x1050 and 1920x1200 resolutions, and uses Extreme quality settings with 8x anti-aliasing and 16x anisotropic filtering. The 1:2 scale is utilized, and is the highest this test allows. By maximizing the processing levels of this test, the scene creates the highest level of graphical demand possible and sorts the strong from the weak.

ENGTX560Ti_3dMark_Vantage_Jane_Nash_Benchmark.jpg

Jane Nash Extreme Quality Settings

3DMark Vantage GPU Test: New Calico

New Calico is the second GPU test in the 3DMark Vantage test suite. Of the two GPU tests, New Calico is the most demanding. In a short video scene featuring a galactic battleground, there is a massive display of busy objects across the screen. Benchmark Reviews tests this DirectX-10 scene at 1680x1050 and 1920x1200 resolutions, and uses Extreme quality settings with 8x anti-aliasing and 16x anisotropic filtering. The 1:2 scale is utilized, and is the highest this test allows. Using the highest graphics processing level available allows our test products to separate themselves and stand out (if possible).

ENGTX560Ti_3dMark_Vantage_New_Calico_Benchmark.jpg

New Calico Extreme Quality Settings


DX10: Crysis Warhead

Crysis Warhead is an expansion pack based on the original Crysis video game. Crysis Warhead is based in the future, where an ancient alien spacecraft has been discovered beneath the Earth on an island east of the Philippines. Crysis Warhead uses a refined version of the CryENGINE2 graphics engine. Like Crysis, Warhead uses the Microsoft Direct3D 10 (DirectX-10) API for graphics rendering.

Benchmark Reviews uses the HOC Crysis Warhead benchmark tool to test and measure graphics performance using the Airfield 1 demo scene. This short test places a high amount of stress on a graphics card, not only because of the detailed terrain and textures, but also because of the test settings used. Using the DirectX-10 test with Very High Quality settings, the Airfield 1 demo scene receives 4x anti-aliasing and 16x anisotropic filtering to create maximum graphics load and separate the products according to their performance.

Using the highest quality DirectX-10 settings with 4x AA and 16x AF, only the most powerful graphics cards are expected to perform well in our Crysis Warhead benchmark tests. DirectX-11 extensions are not supported in Crysis: Warhead, and SSAO is not an available option.

  • Crysis Warhead v1.1 with HOC Benchmark
    • Moderate Settings: (Very High Quality, 4x AA, 16x AF, Airfield Demo)

ENGTX560Ti_Crysis_Warhead_Benchmark.jpg

Crysis Warhead Moderate Quality Settings


Aliens vs. Predator Test Results

Aliens vs. Predator is a science fiction first-person shooter video game, developed by Rebellion, and published by Sega for Microsoft Windows, Sony PlayStation 3, and Microsoft Xbox 360. Aliens vs. Predator utilizes Rebellion's proprietary Asura game engine, which had previously found its way into Call of Duty: World at War and Rogue Warrior. The self-contained benchmark tool is used for our DirectX-11 tests, which push the Asura game engine to its limit.

MSi_R6870_Radeon_Video_Card_Aliens_vs_Predator

In our benchmark tests, Aliens vs. Predator was configured to use the highest quality settings with 4x AA and 16x AF. DirectX-11 features such as Screen Space Ambient Occlusion (SSAO) and tessellation have also been included, along with advanced shadows.

  • Aliens vs. Predator
    • Extreme Settings: (Very High Quality, 4x AA, 16x AF, SSAO, Tessellation, Advanced Shadows)

ENGTX560Ti_Aliens-vs-Predator_DX11_Benchmark.jpg

Aliens vs. Predator Extreme Quality Settings


DX11: BattleForge Results

BattleForge is a free Massive Multiplayer Online Role Playing Game (MMORPG) developed by EA Phenomic with DirectX-11 graphics capability. Combining strategic cooperative battles, the community of MMO games, and trading card gameplay, BattleForge players are free to put their creatures, spells and buildings into combinations they see fit. These units are represented in the form of digital cards from which you build your own unique army. With minimal resources and a custom tech tree to manage, the gameplay is unbelievably accessible and action-packed.

Benchmark Reviews uses the built-in graphics benchmark to measure performance in BattleForge, using Very High quality settings (detail) and 8x anti-aliasing with auto multi-threading enabled. BattleForge is one of the first titles to take advantage of DirectX-11 in Windows 7, and offers a very robust color range throughout the busy battleground landscape. The charted results illustrate how performance measures-up between video cards when Screen Space Ambient Occlusion (SSAO) is enabled.

  • BattleForge v1.2
    • Extreme Settings: (Very High Quality, 8x Anti-Aliasing, Auto Multi-Thread)

EDITOR'S NOTE: AMD is aware of performance concerns with BattleForge, and offered us an official response:

"We are aware that there are some abnormal performance results in BattleForge with our new AMD Radeon HD 6900 Series graphics card. Keep in mind this is a new VLIW4 shader architecture and we are still fine tuning the shader compilation. We will be able to post a hotfix for Battleforge shortly that will provide a noticeable increase in performance."

ENGTX560Ti_BattleForge_DX11_Benchmark.jpg

BattleForge Extreme Quality Settings


Just Cause 2 Performance Tests

"Just Cause 2 sets a new benchmark in free-roaming games with one of the most fun and entertaining sandboxes ever created," said Lee Singleton, General Manager of Square Enix London Studios. "It's the largest free-roaming action game yet with over 400 square miles of Panaun paradise to explore, and its 'go anywhere, do anything' attitude is unparalleled in the genre." In his interview with IGN, Peter Johansson, the lead designer on Just Cause 2 said, "The Avalanche Engine 2.0 is no longer held back by having to be compatible with last generation hardware. There are improvements all over - higher resolution textures, more detailed characters and vehicles, a new animation system and so on. Moving seamlessly between these different environments, without any delay for loading, is quite a unique feeling."

Just Cause 2 is one of those rare instances where the real game play looks even better than the benchmark scenes. It's amazing to me how well the graphics engine copes with the demands of an open world style of play. One minute you are diving through the jungles, the next you're diving off a cliff, hooking yourself to a passing airplane, and parasailing onto the roof of a hi-rise building. The ability of the Avalanche Engine 2.0 to respond seamlessly to these kinds of dramatic switches is quite impressive. It's not DX11 and there's no tessellation, but the scenery goes by so fast there's no chance to study it in much detail anyway.

Although we didn't use the feature in our testing (in order to equalize the graphics environment between NVIDIA and ATI), the GPU water simulation is a standout visual feature that rivals DirectX 11 techniques for realism. There's a lot of water in the environment, which is based around an imaginary Southeast Asian island nation, and it always looks right. The simulation routines use the CUDA functions in the Fermi architecture to calculate all the water displacements, and those functions are obviously not available when using an ATI-based video card. The same goes for the Bokeh setting, which is a Japanese term for out-of-focus rendering. Neither of these techniques uses PhysX, but they do use specific computing functions that are only supported by NVIDIA's proprietary CUDA architecture.

There are three scenes available for the in-game benchmark, and I used the last one, "Concrete Jungle" because it was the toughest and it also produced the most consistent results. That combination made it an easy choice for the test environment. All Advanced Display Settings were set to their highest level, and Motion Blur was turned on, as well.

ENGTX560Ti_Just_Cause_2_Benchmark.jpg

Just Cause 2 Concrete Jungle Benchmark High Quality Settings


Lost Planet 2 DX11 Benchmark Results

Lost Planet 2 is the second installment in the saga of the planet E.D.N. III, ten years after the story of Lost Planet: Extreme Condition. The snow has melted and the lush jungle life of the planet has emerged with angry and luscious flora and fauna. With the new environment comes the addition of DirectX-11 technology to the game.

Lost Planet 2 takes advantage of DX11 features including tessellation and displacement mapping on water, level bosses, and player characters. In addition, soft body compute shaders are used on 'Boss' characters, and wave simulation is performed using DirectCompute. These cutting edge features make for an excellent benchmark for top-of-the-line consumer GPUs.

The Lost Planet 2 benchmark offers two different tests, which serve different purposes. This article uses tests conducted on benchmark B, which is designed to be a deterministic and effective benchmark tool featuring DirectX 11 elements.

  • Lost Planet 2 Benchmark 1.0
    • Moderate Settings: (2x AA, Low Shadow Detail, High Texture, High Render, High DirectX 11 Features)

ENGTX560Ti_Lost_Planet_2_Benchmark.jpg

Lost Planet 2 Moderate Quality Settings


DX11: Metro 2033

Metro 2033 is an action-oriented video game with a combination of survival horror, and first-person shooter elements. The game is based on the novel Metro 2033 by Russian author Dmitry Glukhovsky. It was developed by 4A Games in Ukraine and released in March 2010 for Microsoft Windows. Metro 2033 uses the 4A game engine, developed by 4A Games. The 4A Engine supports DirectX-9, 10, and 11, along with NVIDIA PhysX and GeForce 3D Vision.

The 4A engine is multi-threaded in such a way that only PhysX has a dedicated thread; it uses a task model without any pre-conditioning or pre/post-synchronizing, allowing tasks to be done in parallel. The 4A game engine can utilize a deferred shading pipeline, uses tessellation for greater performance, and also has HDR (complete with blue shift), real-time reflections, color correction, film grain and noise, and support for multi-core rendering.

Metro 2033 features superior volumetric fog, double PhysX precision, object blur, sub-surface scattering for skin shaders, parallax mapping on all surfaces and greater geometric detail with less aggressive LODs. Using PhysX, the engine offers features such as destructible environments, cloth and water simulations, and particles that can be fully affected by environmental factors.

NVIDIA has been diligently working to promote Metro 2033, and for good reason: it's one of the most demanding PC video games we've ever tested. When their flagship GeForce GTX 480 struggles to produce 27 FPS with DirectX-11 anti-aliasing turned down to its lowest setting, you know that only the strongest graphics processors will generate playable frame rates. All of our tests enable Advanced Depth of Field and Tessellation effects, but disable advanced PhysX options.

  • Metro 2033
    • Moderate Settings: (Very-High Quality, AAA, 16x AF, Advanced DoF, Tessellation, Frontline Scene)

ENGTX560Ti_Metro-2033_DX11_Benchmark.jpg

Metro 2033 Moderate Quality Settings


Unigine Heaven 2.1 Benchmark

The Unigine Heaven 2.1 benchmark is a free publicly available tool that grants the power to unleash the graphics capabilities in DirectX-11 for Windows 7 or updated Vista Operating Systems. It reveals the enchanting magic of floating islands with a tiny village hidden in the cloudy skies. With the interactive mode, an emerging experience of exploring the intricate world is within reach. Through its advanced renderer, Unigine is one of the first to set a precedent in showcasing art assets with tessellation, bringing compelling visual finesse, utilizing the technology to the full extent and exhibiting the possibilities of enriching 3D gaming.

The distinguishing feature in the Unigine Heaven benchmark is a hardware tessellation that is a scalable technology aimed for automatic subdivision of polygons into smaller and finer pieces, so that developers can gain a more detailed look of their games almost free of charge in terms of performance. Thanks to this procedure, the elaboration of the rendered image finally approaches the boundary of veridical visual perception: the virtual reality transcends conjured by your hand.

Although Heaven-2.1 was recently released and used for our DirectX-11 tests, the benchmark results were extremely close to those obtained with Heaven-1.0 testing. Since only DX11-compliant video cards will properly test on the Heaven benchmark, only those products that meet the requirements have been included.

  • Unigine Heaven Benchmark 2.1
    • Moderate Settings: (High Quality, Normal Tessellation, 16x AF, 4x AA)

ENGTX560Ti_Unigine_Heaven_DX11_Benchmark.jpg

Heaven 2.1 Moderate Quality Settings


ASUS ENGTX560 Ti DCII TOP Temperatures

Benchmark tests are always nice, so long as you care about comparing one product to another. But when you're an overclocker, gamer, or merely a PC hardware enthusiast who likes to tweak things on occasion, there's no substitute for good information. Benchmark Reviews has a very popular guide written on Overclocking Video Cards, which gives detailed instruction on how to tweak a graphics card for better performance. Of course, not every video card has overclocking headroom. Some products run so hot that they can't suffer any higher temperatures than they already do. This is why we measure the operating temperature of the video card products we test.

To begin my testing, I use GPU-Z to measure the temperature at idle as reported by the GPU. Next I use FurMark's "Torture Test" to generate maximum thermal load and record GPU temperatures at high-power 3D mode. FurMark does two things extremely well: drive the thermal output of any graphics processor much higher than any video games realistically could, and it does so with consistency every time. Furmark works great for testing the stability of a GPU as the temperature rises to the highest possible output. During all tests, the ambient room temperature remained at a stable 20°C. The temperatures discussed below are absolute maximum values, and may not be representative of real-world temperatures while gaming:
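For readers who want to reproduce this at home, the same idle/load readings can be captured with a trivial polling loop instead of GPU-Z's logger. Here's a minimal sketch using NVIDIA's nvidia-smi utility (assuming a driver version that exposes these query fields):

    # Poll GPU temperature and fan speed once per second while FurMark runs.
    # Assumes nvidia-smi is on the PATH and the driver supports these queries.
    import subprocess
    import time

    for _ in range(60):  # one minute of samples
        reading = subprocess.check_output([
            "nvidia-smi",
            "--query-gpu=temperature.gpu,fan.speed",
            "--format=csv,noheader",
        ], text=True)
        print(reading.strip())  # e.g. "83, 50 %"
        time.sleep(1)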

Load      Fan Speed                  GPU Temperature

Idle      17% - AUTO (1380rpm)       32C
FurMark   50% - AUTO (3200rpm)       83C
FurMark   100% - Manual (4620rpm)    73C

Even though the GPU carries two 80mm fans, their speed can be read with MSI Afterburner without problems. At idle, the card is very quiet: the fans rotate at 1380rpm (17%), keeping the temperature at merely 32 degrees. However, once I launched FurMark, temperatures sky-rocketed to 83 degrees in auto mode. This resulted in a very noisy setup, with both fans running at 3200rpm (50%, as Afterburner reports it). Manually setting the fans to 100% produced 4620rpm; at that speed, temperatures didn't pass 73 degrees, which is still very high for a GTX 560 Ti, especially if you're paying extra money for a decent cooler.

When I started gaming with some demanding titles, the temperatures were much lower. After running Unigine's Heaven benchmark for 30 minutes, the GPU core reached 75 degrees, which isn't great for overclocking. Other games like Metro 2033 or Crysis produced less heat, barely passing 70 degrees. At this point, I'm very disappointed with the ASUS DirectCU II cooler's design. ASUS did manage to sell a nearly silent GPU at idle, but the cooler falls short under load. I was expecting something more like the MSI GTX 560 Ti Twin Frozr design, which keeps the GPU core below 60 degrees. Instead, ASUS ships a cooler that barely outperforms NVIDIA's reference design; it just looks better.

ENGTX560Ti_Dissipation.jpg

ASUS ENGTX560 Ti DCII TOP Overclocking

When it comes to overclocking, I usually get excited and try many things to achieve the best solid overclock the GPU can hold, especially if it's known to be a good overclocker, which is the case with the GTX 560 Ti. Now that we have voltage control over many GPUs via software applications like MSI Afterburner, the only thing we need to keep in mind is heat. Since this is an already overclocked card, I'm not expecting to see that much of a change. I quickly installed the latest version of MSI Afterburner (2.1.7) to start doing some 3DMark damage at the ORB.

ENGTX560Ti_Afterburner_OC.png

It turns out that with a so-so cooler design, you can't achieve superb clocks. While I reached 970MHz fully stable in FurMark and benchmarks with just 1075mV, I couldn't reach the magic 1GHz number, not even after increasing core voltage up to 1150mV (which is the limit in MSI Afterburner). In fact, as I increased voltages, temperatures got worse and I had more trouble stabilizing the GPU at the same frequency. Keep in mind that this is still an 18% OC over NVIDIA's reference model, and it leads to better gaming performance. Temperatures also increased by 2-3 degrees, but nothing to be afraid of (beyond how I already felt after looking at those ugly thermal results above).
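For the record, that 18% figure is simply the highest stable core clock measured against NVIDIA's 822MHz reference clock:

    # Overclock margin relative to NVIDIA's reference core clock.
    reference_mhz = 822
    stable_mhz = 970
    print(f"{(stable_mhz / reference_mhz - 1) * 100:.0f}% OC")  # 18% OC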

VGA Power Consumption

Life is not as affordable as it used to be, and items such as gasoline, natural gas, and electricity all top the list of resources which have exploded in price over the past few years. Add to this the limit of non-renewable resources compared to current demands, and you can see that the prices are only going to get worse. Planet Earth needs our help, and needs it badly. With forests becoming barren of vegetation and snow-capped poles quickly turning brown, the technology industry has a new attitude towards turning "green". I'll spare you the powerful marketing hype that gets sent from various manufacturers every day, and get right to the point: your computer hasn't been doing much to help save energy... at least up until now. Take a look at the idle clock rates that NVIDIA programmed into the BIOS for this GPU; no special power-saving software utilities are required.

ENGTX560Ti_GPU-Z_Sensor.png

The GTX 560 Ti runs at 50.6/101/67.5MHz in idle mode, while VDDC drops to 0.950 volts. At full load it raises frequencies to 900/1050MHz and VDDC goes up to 1.025 volts. The good part is that the GTX 500 series can be overclocked while keeping these idle frequencies, saving some energy and holding temperatures down. Just so there's no mistake: non-OC GTX 560 models only raise their load voltage to 1.000v instead of 1.025v. For reference, my sample worked successfully at 1.000v load voltage while keeping GPU frequencies at 900/1050MHz.

To measure isolated video card power consumption, I used the Kill-A-Watt EZ (model P4460) power meter made by P3 International. A baseline test is taken without a video card installed inside our computer system, which is allowed to boot into Windows and rest idle at the login screen before power consumption is recorded. Once the baseline reading has been taken, the graphics card is installed and the system is again booted into Windows and left idle at the login screen. Our final loaded power consumption reading is taken with the video card running a stress test using FurMark. Below is a chart with the isolated video card power consumption (not system total) displayed in Watts for each specified test product:
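Expressed as arithmetic, the isolation method simply subtracts the no-card baseline from each wall reading. A minimal sketch, using this card's readings quoted after the chart (72W baseline, 91W idle, 291W loaded):

    # Isolated card power = wall reading with card - baseline without card.
    baseline_w = 72             # system idling at login screen, no video card
    idle_w, loaded_w = 91, 291  # same state with the ENGTX560 Ti installed

    print(idle_w - baseline_w)    # 19 W at idle
    print(loaded_w - baseline_w)  # 219 W under FurMark load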

Video Card Power Consumption by Benchmark Reviews

VGA Product Description                             Idle Power   Loaded Power
(sorted by combined total power)

NVIDIA GeForce GTX 480 SLI Set                      82 W         655 W
NVIDIA GeForce GTX 590 Reference Design             53 W         396 W
ATI Radeon HD 4870 X2 Reference Design              100 W        320 W
AMD Radeon HD 6990 Reference Design                 46 W         350 W
NVIDIA GeForce GTX 295 Reference Design             74 W         302 W
ASUS GeForce GTX 480 Reference Design               39 W         315 W
ATI Radeon HD 5970 Reference Design                 48 W         299 W
NVIDIA GeForce GTX 690 Reference Design             25 W         321 W
ATI Radeon HD 4850 CrossFireX Set                   123 W        210 W
ATI Radeon HD 4890 Reference Design                 65 W         268 W
AMD Radeon HD 7970 Reference Design                 21 W         311 W
NVIDIA GeForce GTX 470 Reference Design             42 W         278 W
NVIDIA GeForce GTX 580 Reference Design             31 W         246 W
NVIDIA GeForce GTX 570 Reference Design             31 W         241 W
ATI Radeon HD 5870 Reference Design                 25 W         240 W
ATI Radeon HD 6970 Reference Design                 24 W         233 W
NVIDIA GeForce GTX 465 Reference Design             36 W         219 W
NVIDIA GeForce GTX 680 Reference Design             14 W         243 W
Sapphire Radeon HD 4850 X2 11139-00-40R             73 W         180 W
NVIDIA GeForce 9800 GX2 Reference Design            85 W         186 W
NVIDIA GeForce GTX 780 Reference Design             10 W         275 W
NVIDIA GeForce GTX 770 Reference Design             9 W          256 W
NVIDIA GeForce GTX 280 Reference Design             35 W         225 W
NVIDIA GeForce GTX 260 (216) Reference Design       42 W         203 W
ATI Radeon HD 4870 Reference Design                 58 W         166 W
NVIDIA GeForce GTX 560 Ti Reference Design          17 W         199 W
NVIDIA GeForce GTX 460 Reference Design             18 W         167 W
AMD Radeon HD 6870 Reference Design                 20 W         162 W
NVIDIA GeForce GTX 670 Reference Design             14 W         167 W
ATI Radeon HD 5850 Reference Design                 24 W         157 W
NVIDIA GeForce GTX 650 Ti BOOST Reference Design    8 W          164 W
AMD Radeon HD 6850 Reference Design                 20 W         139 W
NVIDIA GeForce 8800 GT Reference Design             31 W         133 W
ATI Radeon HD 4770 RV740 GDDR5 Reference Design     37 W         120 W
ATI Radeon HD 5770 Reference Design                 16 W         122 W
NVIDIA GeForce GTS 450 Reference Design             22 W         115 W
NVIDIA GeForce GTX 650 Ti Reference Design          12 W         112 W
ATI Radeon HD 4670 Reference Design                 9 W          70 W
* Results are accurate to within +/- 5W.

The ENGTX560 Ti DCII TOP pulled just 19 (91-72) watts at idle and 219 (291-72) watts when running full out, using the test method outlined above. The ASUS ENGTX560 Ti consumes more power than the reference board, but that's logical, as it powers two 80mm fans and runs at 900MHz with 25 extra millivolts to do the job. Factor PSU efficiency into the equation as well, since I'm using an 80 Plus Bronze power supply. We've become used to the low-power ways of the newest processors, and there's no turning back.
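On that efficiency point: a wall-socket meter reads AC input, so the card's true DC draw is somewhat lower than the measured delta. A rough sketch, assuming the 82-85% efficiency range the 80 Plus Bronze certification requires (the exact figure depends on load):

    # Wall power is AC input; multiply by PSU efficiency to estimate DC draw.
    loaded_delta_w = 219  # measured wall-power delta under FurMark

    for efficiency in (0.82, 0.85):
        print(f"~{loaded_delta_w * efficiency:.0f} W DC at {efficiency:.0%}")
    # ~180 W and ~186 W, closer to the card's official 170 W maximum rating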

I'll offer some final thoughts and a conclusion on the next pages...

ASUS ENGTX560 Ti DCII TOP Conclusion

IMPORTANT: Although the rating and final score mentioned in this conclusion are made to be as objective as possible, please be advised that every author perceives these factors differently at various points in time. While we each do our best to ensure that all aspects of the product are considered, there are often times unforeseen market conditions and manufacturer changes which occur after publication that could render our rating obsolete. Please do not base any purchase solely on our conclusion, as it represents our product rating specifically for the product tested which may differ from future versions. Benchmark Reviews begins our conclusion with a short summary for each of the areas that we rate.

From a performance standpoint, the GTX 560 Ti exceeds my expectations. I'm a little jaded, I guess, about manufacturers' claims, but NVIDIA didn't pull any punches with this update. The GTX 460 is already a widely accepted success in the marketplace, and the GTX 560 Ti is simply better. In many games, there's a performance increase of up to 40-50% over the GTX 460. I'm sure the ~80MHz factory OC plays a part here, but it's still a very good solution. I'm just disappointed by the ASUS DCII cooling solution. I tried re-installing the heatsink and checking that it applied enough pressure against the GPU core; I even re-applied thermal paste, but the results were the same. How often do you get a great-looking cooler that performs like the common, simple reference design? On the other hand, power requirements are very modest, as NVIDIA recommends a 500W PSU, which is actually below the minimum I would personally consider for any modern gaming rig.

The appearance of the ENGTX560 Ti DCII TOP video card is very attractive; the stylish fan shroud with its red stripes gives the card a sporty look. The matte-black PCB and rounded edges, along with the side bar that prevents PCB warping, help even more. There are also three ASUS logos on the card that remain visible through any windowed PC chassis. ASUS did a nice job producing a subtle design that is business-like, yet manages to show off its muscles at the same time. Too bad the heatsink doesn't perform as well as it looks.

ENGTX560Ti_Final_Thoughts.jpg

The build quality of the ENGTX560 Ti DCII TOP was quite good. Everything is assembled well, everything fit when I put it back together, and the overall impression of the card was very solid. The cooler, along with the PCB and small heatsinks, adds heft to the card and also lends a good deal of rigidity to the package. The packaging was of the highest quality and very informative. The upgraded power-supply arrangement with Super Alloy Power components gives a good impression, and the not-so-crowded layout on the back of the PCB shows ASUS has been working on its PCB designs lately. Even when overclocked the GPU felt very solid, and it never stumbled or complained once.

The basic features of the ENGTX560 Ti DCII TOP are fully comparable with the latest offerings from both camps. It has Microsoft DirectX 11 support, PhysX technology, 3D Vision and 3D Vision Surround readiness, CUDA technology, SLI, 32x anti-aliasing, PureVideo HD, and HDMI 1.4a support. If PhysX and 3D Vision Surround matter to you, then you are already firmly anchored in the NVIDIA camp, and the GTX 560 Ti is just icing on the cake. Comparing that with AMD's offerings, I think NVIDIA wins this match. However, I still think NVIDIA should design a card that fully supports Surround mode on a single board (without SLI), as AMD does with its Eyefinity GPUs.

As of late January 2011, the price for the ENGTX560 Ti is $249.99 at Newegg. There are currently no rebates available and ASUS is not giving away any popular games at the moment, so consider that in your purchasing decisions. The price-to-performance ratio of this GPU is so good, there's not a lot of downside anywhere. This particular model offers the DirectCU II cooling system, which works great at idle but doesn't impress under load. Still, I think it looks better than NVIDIA's reference design. AMD just issued their challenge, in the form of new 1GB versions of the HD 6950 that are priced very aggressively, and I look forward to comparing that new competitor in the near future.

Almost any GTX 560 Ti card is going to get high marks at this stage of the game. NVIDIA has brought some pretty amazing performance improvements to a graphics platform that was already very competitive. AMD has responded with some serious price cuts on the HD 6870 ($219) and released a value version of the HD 6950 ($269) that will bracket the GTX560Ti in price, but as of today I think this is the card to beat in the $250 price range. One reason for that is the continued presence of serious overclocking headroom for this upgraded GPU. I got 970 MHz on the core clock with very little effort.

Pros:

goldentachaward.png

+ Very quiet cooling system with performance-oriented fan profile
+ Lower temps than reference designs at Idle mode
+ Performance improvement over GTX 460 is impressive (20-50%)
+ Overclocking headroom is similar to the GTX 460
+ PhysX capability is a great feature with minimal FPS impact
+ Upgraded power supply design with high quality components
+ Low price penalty for enhanced performance and features
+ Manufacturing quality is close to the very top
+ Industry leading 3D support by NVIDIA
+ Bundled software and utilities to OC and monitor your GPU

Cons:

- $250+ is still a lot of money for casual gamers to spend
- Hot air from GPU cooler stays inside case
- DirectCU II cooler's performance was way below expectations
- No game included with the GPU

Ratings:

  • Performance: 9.25
  • Appearance: 9.25
  • Construction: 9.00
  • Functionality: 9.50
  • Value: 9.00

Final Score: 9.20 out of 10.

Excellence Achievement: Benchmark Reviews Golden Tachometer Award

What do you think of the ASUS ENGTX560 Ti DCII TOP Video Card? Leave your comment below or ask questions in our Discussion Forum.



Comments 

 
# nice looking - RealNeil 2011-02-14 04:51
That's a damn good looking card, and I'm not speaking to the three sexy red stripes either. It just looks very functional.
(be back after I read this)
 
 
# RE: nice looking - Servando Silva 2011-02-14 09:05
That's true. Sadly, you won't be that happy after reading the "Temperatures & Overclocking" section though...
 
 
# RE: RE: nice looking - Adam 2011-02-14 12:43
That's strange considering that the previous Direct CU was pretty decent compared to the stock heatsink. Wonder if perhaps this one was poorly seated, or they might have just #ed up the design this time round.
Shame, I like ASUS normally.
 
 
# RE: RE: RE: nice looking - Servando Silva 2011-02-14 13:12
I re-installed the heatsink and changed TIM but results are the same. Other sites have tested this model and even if they don't get such bad results, it falls behind MSI, eVGA and GIGABYTE's solutions.
 
 
# RE: RE: RE: RE: nice looking - Alejandro 2011-09-22 09:20
The card gets hot because the "Auto" option in ASUS Smart Doctor doesn't work, it just keeps the fans running at ~1100RPM even when in load, I've found that manually setting the speed for certain ranges of temperature keeps it well under 65°C
Hope this helps, I have this card and it's plain awesome :D
 
 
# egg - egg chan 2012-08-17 16:53
because it's a 800mhz gpu, 1000mhz gddr5, card that's been overclocked to 900mhz, 1050mhz

wtf do you really expect?

a new gpu and mem with the same name on the card?

it is the same card with a new bios and higher clocks no TOP written anywhere but the packaging

hmmm how can we sell the left over stock and still make a prophet
 
 
# WOow - HWMSTR 2011-02-14 06:16
The card is really cool!! I wanna have it! But its too expensive for a high school student like me..T_T
 
 
# Cool - Elite360 2011-03-02 09:04
I am getting it by the end of this month and I am in high school as well.
My PC is amazing, bet it is better than yours.
 
 
# @Elite360 - Shane 2011-03-03 22:14
We can tell you're in high school, but don't gloat about what you've got, because there's always something better.
 
 
# Will it underclock itself in idle? - lowpoxm 2011-03-28 13:01
I just bought this card.
I do have a major headache though. When I run GPU-Z or ASUS SmartDoctor, it says the GPU clock is only running at 830MHz and the shader at 1660MHz.
Anyone know if it will underclock itself in idle mode?
I am getting desperate because it IS the "TOP" version of the card. And I can't find an answer anywhere.
I haven't had the time to test it with FurMark or anything like that.
 
 
# RE: Will it underclock itself in idle? - Olin Coles 2011-03-28 13:31
Yes, in idle mode all video cards turn the GPU clocks down to conserve energy.
 
 
# RE: RE: Will it underclock itself in idle? - lowpoxm 2011-03-28 23:28
Nice!
I really hope that's the case with this card as well!
I haven't been able to find that info anywhere else...
I was really bummed out, because it seemed like I paid for the "TOP" version of the card, but only got the stock one.

But I don't get it, if that's the case. Why do they even make a "top" version of the card?
If the stock version will overclock itself as well.
 
 
# RE: RE: RE: Will it underclock itself in idle? - Olin Coles 2011-03-29 08:45
Every video card does this. Once you use a program/game that forces 3D mode, the GPU clock speed returns to whatever it's been set to. Turning the GPU clock down during idle helps save electricity.
 
 
# RE: RE: RE: RE: Will it underclock itself in idle? - lowpoxm 2011-03-30 00:23
Well, I had FurMark running at max settings for 5 mins, logged the data with GPU-Z, and the speed remained the same at all times...
So the card I received is the stock version, even though the box says "TOP"
 
 
# RE: ASUS ENGTX560 Ti DCII TOP Video Card - Servando Silva 2011-04-01 06:02
Check out the VGA Power Consumption section to read more about the different load frequencies.
In idle mode it should underclock far more than that.
 
 
# RE: ASUS ENGTX560 Ti DCII TOP Video Card - Zmol 2011-11-05 08:05
Bought one, but returned it the same day as IT SOUNDS LIKE A HAIR DRYER and it runs hot.

35% fan speed: inaudible
40%: humming
50%: high pitch humming/whining
65-100%: HIGH PITCHED SCREAMING

Got a Gigabyte OC card instead. 900mhz also. Don't hear it until 60%, and even then it's just air noise as opposed to screeching and whining.
 
 
# RE: ASUS ENGTX560 Ti DCII TOP Video Card - iv 2012-01-22 13:47
- right, a good-looking card
- low temps at idle and unimpressive temps under full load or stress; I expected lower because of my Antec 900 case
- with my E8500 processor I'm pleased with its performance
- it's relatively easy to clean of dust
- as usual, the shroud of this video card cannot be removed for a better cleanup
- it doesn't strike me as solid construction, because the pipes aren't rigidly attached to the card at the junction with the upper cooler
 
