NVIDIA GeForce GTX 650 Ti Video Card
Reviews - Featured Reviews: Video Cards
Written by Olin Coles
Tuesday, 09 October 2012
NVIDIA GeForce GTX 650 Ti Video Card Review
Manufacturer: NVIDIA Corporation
Full Disclosure: The product sample used in this article has been provided by NVIDIA.

When I returned home last week, there was a large, unexpected package from NVIDIA. It came as a shock and a mystery, since I hadn't been advised that another product was launching from them in October. After tearing into the large box it shipped in, I stared down at what was probably the smallest desktop video card I've seen in at least five years. While the delivery caught me by surprise, it seemed the GeForce GTX 650 Ti might offer more than meets the eye. NVIDIA's GeForce GTX series traditionally offers enthusiast-level performance with features like multi-card SLI pairing and GPU Boost, an application-driven variable overclocking technology. The GeForce GTX 650 Ti keeps with tradition in terms of performance by offering the capable GK106 GPU with 768 CUDA cores, but skips some of the top-end extras. Nevertheless, NVIDIA's Kepler GPU architecture delivers additional proprietary features such as 3D Vision, Adaptive Vertical Sync, multi-display Surround, PhysX, and TXAA anti-aliasing. In this article Benchmark Reviews tests the NVIDIA GeForce GTX 650 Ti graphics card with DirectX 11 video games.

The NVIDIA GeForce GTX 650 Ti's reference design features a 28nm Kepler GK106 GPU, which houses four SMX units and offers 768 CUDA cores operating at a fixed base clock of 925 MHz with 64 texture units. There's 1024MB of GDDR5 video memory good for 86.4 GB/s over a 128-bit interface, all clocked at 1350 MHz (5400 MHz data rate). In comparison to the GeForce GTX 650, the GTX 650 Ti offers twice the number of CUDA cores and texture units, along with more memory bandwidth.
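The quoted 86.4 GB/s figure follows directly from the published memory clock and bus width; here's a quick back-of-envelope sketch verifying the arithmetic (GDDR5 transfers data at four times its command clock):

```python
# Sanity check of NVIDIA's quoted 86.4 GB/s memory bandwidth figure.
# GDDR5 transfers data at 4x its command clock (1350 MHz -> 5400 MT/s).
memory_clock_mhz = 1350                 # command clock, from the spec sheet
data_rate_mts = memory_clock_mhz * 4    # 5400 MT/s effective data rate
bus_width_bits = 128                    # GTX 650 Ti memory interface

bandwidth_gbs = data_rate_mts * 1e6 * bus_width_bits / 8 / 1e9
print(f"{bandwidth_gbs:.1f} GB/s")      # -> 86.4 GB/s
```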
Of the many platforms available for gamers to enjoy video games, there's no question that the highest-quality graphics come from the PC. While game developers might not consider PC gaming as lucrative as entertainment consoles, companies like NVIDIA use desktop graphics to set the benchmark for smaller, more compact designs that make it into notebooks, tablets, and smartphone devices. NVIDIA's Kepler GPU architecture is an example of this, delivering unprecedented performance while operating cooler and consuming far less power than previous-generation graphics cards. For example, gamers who upgrade from the GeForce 9600 GT could see up to 5x the performance from the GTX 650 Ti. The GeForce GTX 650 Ti offers many of the same high-end features found on top-end GTX video cards, but with a much more affordable price tag. In addition to the new and improved Kepler GPU architecture, the GeForce GTX 650 Ti video card delivers further refinements to the user experience. FXAA and Adaptive VSync technology result in less chop, stutter, and tearing in on-screen motion. Adaptive VSync adjusts vertical sync behavior whenever the frame rate becomes too low to properly sustain it, thereby reducing stutter and tearing artifacts. NVIDIA TXAA delivers a film-style anti-aliasing technique with a mix of hardware anti-aliasing, a custom CG film-style AA resolve, and an optional temporal component for better image quality.
First Look: GeForce GTX 650 Ti

In terms of appearance, the NVIDIA GeForce GTX 650 Ti is nothing like the other video cards in the GeForce GTX 600-series. Its squat 1.15" height requires two card slots for installation while allowing plenty of clearance, and the 5.65" long profile makes this one of the smallest GTX cards to come from NVIDIA in a very long time. In its reference form, the GeForce GTX 650 Ti should easily fit most installations with room to spare. The majority of double-slot video cards measure 1.5" tall, so with the GTX 650 Ti being closer to the 0.75" single-slot height, it's feasible we might see some NVIDIA partners creating such a design.
NVIDIA's GeForce GTX 650 Ti reference design operates quietly under full power thanks to a rear-mounted 75mm (2.95") fan positioned slightly above the plastic fan shroud. This allows sufficient air to reach the intake whenever two video cards are positioned in close proximity; while SLI configurations aren't possible, another video card could be installed to address PhysX demands. Specified at 110W Thermal Design Power, the GeForce GTX 650 Ti demands much less of the system's power supply unit, allowing NVIDIA to recommend a 400W PSU for single-card installations. The GeForce GTX 650 Ti requires power from a single six-pin PCI-E connection, which is relocated to the rear of the video card, so at least 0.75" of additional clearance is necessary for the power connector.
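NVIDIA's 400W recommendation is easy to sanity-check with rough component budgets. In the sketch below, every wattage other than the card's 110W TDP is an illustrative assumption rather than a measured value:

```python
# Rough PSU-sizing sketch; only the GPU's 110W TDP comes from NVIDIA's
# specification. The other component wattages are assumed for illustration.
gpu_tdp = 110            # GTX 650 Ti TDP (specified)
cpu_tdp = 95             # assumed mainstream quad-core CPU
board_drives_fans = 60   # assumed motherboard, RAM, storage, and fans

peak_draw = gpu_tdp + cpu_tdp + board_drives_fans   # ~265 W total
sizing_margin = 1.5      # common rule of thumb: leave ample headroom
print(round(peak_draw * sizing_margin))  # ~398 W -> a 400W PSU is sensible
```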
The GeForce GTX 650 Ti offers two simultaneously functional dual-link DVI (DVI-D and DVI-I) connections, alongside a mini-HDMI 1.4a output (which requires the included adapter). Only one of these video cards is necessary to drive triple-display NVIDIA Surround functionality, using both DVI ports with the HDMI connection for the third output. Since this is a dual-slot graphics card, there's a possibility some NVIDIA partners will use a larger I/O plate that offers full-size HDMI and possibly DisplayPort functionality. On other GTX-series cards these video interfaces would consume exhaust-vent real estate, but the GTX 650 Ti exhausts internally (inside the case), so a larger plate would have little impact on cooling.
The memory subsystem on the GeForce GTX 650 Ti delivers a 1024MB GDDR5 frame buffer producing 86.4 GB/s of total memory bandwidth at a noteworthy 5400 MHz data rate. Two memory controllers combine four GDDR5 ICs into a 128-bit memory bus, which moves data more efficiently than previous designs, while the GPU yields a fill rate of 59.2 GigaTexels per second. The GeForce GTX 650 Ti is a backwards-compatible, PCI-Express 3.0-compliant graphics device, although the card's 128-bit memory interface makes it unlikely the added bus bandwidth will ever be fully saturated by the demands of this video card.
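The 59.2 GigaTexel figure is simply the product of the core clock and the texture unit count, as this short sketch confirms:

```python
# Texture fill rate = core clock x number of texture units.
core_clock_mhz = 925    # fixed base clock, from the spec sheet
texture_units = 64      # GK106 as configured on the GTX 650 Ti

fill_rate_gtexels = core_clock_mhz * 1e6 * texture_units / 1e9
print(f"{fill_rate_gtexels:.1f} GTexel/s")   # -> 59.2 GTexel/s
```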
The card's exposed printed circuit board indicates that the GTX 650 Ti could potentially support up to 2GB of GDDR5 video memory, another option for daring card partners wanting to separate themselves from the competition. In some games with slow-moving yet extremely large worlds (World of Warcraft comes to mind), this could prove beneficial. In the next section, we detail our test methodology and give specifications for all of the benchmarks and equipment used in our testing process...

VGA Testing Methodology

The Microsoft DirectX-11 graphics API is native to the Microsoft Windows 7 Operating System, which will be the primary O/S for our test platform. DX11 is also available as a Microsoft Update for the Windows Vista O/S, so our test results apply to both versions of the Operating System. The majority of benchmark tests used in this article measure DX11 performance; however, some high-demand DX10 tests have also been included.
A combination of synthetic and video game benchmark tests has been used in this article to illustrate relative performance among graphics solutions. Our benchmark frame rate results are not intended to represent real-world graphics performance, as this experience would change based on supporting hardware and the perception of the individuals playing the video game.

Intel X79 Express Test System
DirectX-10 Benchmark Applications
DirectX-11 Benchmark Applications
PCI-Express Graphics Cards
| Graphics Card | Radeon HD7770 | GeForce GTX650Ti | GeForce GTX570 | Radeon HD6970 | GeForce GTX580 | GeForce GTX660 | GeForce GTX660Ti | Radeon HD7970 | GeForce GTX670 |
|---|---|---|---|---|---|---|---|---|---|
| GPU Cores | 640 | 768 | 480 | 1536 | 512 | 960 | 1344 | 2048 | 1344 |
| Core Clock (MHz) | 1000 | 925 | 732 | 880 | 772 | 980 | 915 | 925 | 915 |
| Shader Clock (MHz) | N/A | N/A | 1464 | N/A | 1544 | 1033 Boost | 980 Boost | N/A | 980 Boost |
| Memory Clock (MHz) | 1125 | 1350 | 950 | 1375 | 1002 | 1502 | 1502 | 1375 | 1502 |
| Memory Amount | 1024MB GDDR5 | 1024MB GDDR5 | 1280MB GDDR5 | 2048MB GDDR5 | 1536MB GDDR5 | 2048MB GDDR5 | 2048MB GDDR5 | 3072MB GDDR5 | 2048MB GDDR5 |
| Memory Interface | 128-bit | 128-bit | 320-bit | 256-bit | 384-bit | 192-bit | 192-bit | 384-bit | 256-bit |
DX11: 3DMark11
Futuremark 3DMark11 is the latest addition to the 3DMark benchmark series built by Futuremark Corporation. 3DMark11 is a PC benchmark suite designed to test DirectX-11 graphics card performance without vendor preference. Although 3DMark11 includes the unbiased Bullet Open Source Physics Library instead of NVIDIA PhysX for the CPU/Physics tests, Benchmark Reviews concentrates on the four graphics-only tests in 3DMark11 and uses them with the medium-level 'Performance' preset.
The 'Performance' level setting applies 1x multi-sample anti-aliasing and trilinear texture filtering to a 1280x720p resolution. The tessellation detail, when called upon by a test, is preset to level 5, with a maximum tessellation factor of 10. The shadow map size is limited to 5 and the shadow cascade count is set to 4, while the surface shadow sample count is at the maximum value of 16. Ambient occlusion is enabled, and preset to a quality level of 5.
- Futuremark 3DMark11 Professional Edition
- Settings: Performance Level Preset, 1280x720, 1x AA, Trilinear Filtering, Tessellation Level 5
3DMark11 Benchmark Test Results
DX11: Aliens vs Predator
Aliens vs. Predator is a science fiction first-person shooter video game, developed by Rebellion, and published by Sega for Microsoft Windows, Sony PlayStation 3, and Microsoft Xbox 360. Aliens vs. Predator utilizes Rebellion's proprietary Asura game engine, which had previously found its way into Call of Duty: World at War and Rogue Warrior. The self-contained benchmark tool is used for our DirectX-11 tests, which push the Asura game engine to its limit.
In our benchmark tests, Aliens vs. Predator was configured to use the highest quality settings with 4x AA and 16x AF. DirectX-11 features such as Screen Space Ambient Occlusion (SSAO) and tessellation have also been included, along with advanced shadows.
- Aliens vs Predator
- Settings: Very High Quality, 4x AA, 16x AF, SSAO, Tessellation, Advanced Shadows
Aliens vs Predator Benchmark Test Results
DX11: Batman Arkham City
Batman: Arkham City is a third-person action game that follows the storyline previously set forth in Batman: Arkham Asylum, which launched for game consoles and PC back in 2009. Based on an updated Unreal Engine 3, Batman: Arkham City enjoys DirectX 11 graphics, using multi-threaded rendering to produce lifelike tessellation effects. While gaming console versions of Batman: Arkham City deliver high-definition graphics at either 720p or 1080i, you'll only get the highest-quality graphics and special effects on the PC.
In an age when developers give game consoles priority over the PC, it's becoming difficult to find games that show off the stunning visual effects and lifelike quality possible from modern graphics cards. Fortunately, Batman: Arkham City does amazingly well on both platforms, while at the same time making it possible to cripple the most advanced graphics card on the planet by offering extremely demanding NVIDIA 32x CSAA and full PhysX capability. Also available to PC users (with NVIDIA graphics) is FXAA, a shader-based image filter that achieves results similar to MSAA yet requires less memory and processing power.
Batman: Arkham City offers varying levels of PhysX effects, each with its own set of hardware requirements. You can turn PhysX off, or enable the 'Normal' level, which introduces GPU-accelerated PhysX elements such as debris particles, volumetric smoke, and destructible environments into the game, while the 'High' setting adds real-time cloth and paper simulation. Particles exist everywhere in real life, and this PhysX effect is used in many aspects of the game to add back that same sense of realism. PC gamers who are enthusiastic about graphics quality shouldn't skimp on PhysX: DirectX 11 makes it possible to enjoy many of these effects, and PhysX helps bring them to life in the game.
- Batman: Arkham City
- Settings: 8x AA, 16x AF, MVSS+HBAO, High Tessellation, Extreme Detail, PhysX Disabled
Batman: Arkham City Benchmark Test Results
DX11: Battlefield 3
In Battlefield 3, players step into the role of elite U.S. Marines. As the first boots on the ground, players will experience heart-pounding missions across diverse locations including Paris, Tehran, and New York. As a U.S. Marine in the field, periods of tension and anticipation are punctuated by moments of complete chaos. As bullets whiz by, walls crumble, and explosions force players to the ground, the battlefield feels more alive and interactive than ever before.
The graphics engine behind Battlefield 3 is called Frostbite 2, which delivers realistic global illumination lighting along with dynamic destructible environments. The game uses a hardware terrain tessellation method that allows a high number of detailed triangles to be rendered entirely on the GPU when near the terrain. This allows for a very low memory footprint and relies on the GPU alone to expand the low res data to highly realistic detail.
Using Fraps to record frame rates, our Battlefield 3 benchmark test uses a three-minute capture on the 'Secure Parking Lot' stage of Operation Swordbreaker. Relative to the online multiplayer action, these frame rate results are nearly identical to daytime maps with the same video settings.
- Battlefield 3
- Settings: Ultra Graphics Quality, FOV 90, 180-second Fraps Scene
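Reducing the Fraps capture described above to reportable numbers is simple arithmetic over the log. This minimal sketch assumes the two-column "Frame, Time (ms)" frametimes CSV that Fraps typically writes, where Time is cumulative milliseconds since capture start; `frametimes.csv` is a hypothetical file name:

```python
# Reduce a Fraps frametimes log to average and minimum FPS.
import csv

def fps_stats(path):
    with open(path, newline="") as f:
        reader = csv.reader(f)
        next(reader)                                  # skip header row
        times = [float(row[1]) for row in reader]     # cumulative ms
    deltas = [b - a for a, b in zip(times, times[1:])]  # per-frame ms
    avg_fps = 1000 * len(deltas) / (times[-1] - times[0])
    min_fps = 1000 / max(deltas)                      # slowest single frame
    return avg_fps, min_fps

avg, low = fps_stats("frametimes.csv")   # hypothetical capture file
print(f"avg {avg:.1f} FPS, min {low:.1f} FPS")
```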
Battlefield 3 Benchmark Test Results
DX11: Lost Planet 2
Lost Planet 2 is the second installment in the saga of the planet E.D.N. III, ten years after the story of Lost Planet: Extreme Condition. The snow has melted and the lush jungle life of the planet has emerged with angry and luscious flora and fauna. With the new environment comes the addition of DirectX-11 technology to the game.
Lost Planet 2 takes advantage of DX11 features including tessellation and displacement mapping on water, level bosses, and player characters. In addition, soft body compute shaders are used on 'Boss' characters, and wave simulation is performed using DirectCompute. These cutting edge features make for an excellent benchmark for top-of-the-line consumer GPUs.
The Lost Planet 2 benchmark offers two different tests, which serve different purposes. This article uses tests conducted on benchmark B, which is designed to be a deterministic and effective benchmark tool featuring DirectX 11 elements.
- Lost Planet 2 Benchmark 1.0
- Settings: Benchmark B, 4x AA, Blur Off, High Shadow Detail, High Texture, High Render, High DirectX 11 Features
Lost Planet 2 Benchmark Test Results
DX11: Metro 2033
Metro 2033 is an action-oriented video game with a combination of survival horror, and first-person shooter elements. The game is based on the novel Metro 2033 by Russian author Dmitry Glukhovsky. It was developed by 4A Games in Ukraine and released in March 2010 for Microsoft Windows. Metro 2033 uses the 4A game engine, developed by 4A Games. The 4A Engine supports DirectX-9, 10, and 11, along with NVIDIA PhysX and GeForce 3D Vision.
The 4A Engine is multi-threaded such that only PhysX has a dedicated thread; it uses a task model without any pre-conditioning or pre/post-synchronizing, allowing tasks to be completed in parallel. The 4A game engine can utilize a deferred shading pipeline, uses tessellation for greater performance, and offers HDR (complete with blue shift), real-time reflections, color correction, film grain, and noise; the engine also supports multi-core rendering.
Metro 2033 features superior volumetric fog, double PhysX precision, object blur, sub-surface scattering for skin shaders, parallax mapping on all surfaces, and greater geometric detail with less aggressive LODs. Using PhysX, the engine supports features such as destructible environments, cloth and water simulation, and particles that are fully affected by environmental factors.
NVIDIA has been diligently working to promote Metro 2033, and for good reason: it's one of the most demanding PC video games we've ever tested. When the flagship GeForce GTX 480 struggles to produce 27 FPS with DirectX-11 anti-aliasing turned down to its lowest setting, you know that only the strongest graphics processors will generate playable frame rates. All of our tests enable Advanced Depth of Field and tessellation effects, but disable advanced PhysX options.
- Metro 2033 Benchmark
- Settings: Very-High Quality, 4x AA, 16x AF, Tessellation, PhysX Disabled
Metro 2033 Benchmark Test Results
DX11: Unigine Heaven 3.0
The Unigine Heaven benchmark is a freely available tool for exercising DirectX-11 graphics capabilities under Windows 7, or under Windows Vista once updated. It renders a fantasy scene of floating islands with a tiny village hidden in the cloudy skies, and an interactive mode lets users explore the intricate world at their own pace. Through its advanced renderer, Unigine was among the first to showcase art assets with tessellation, using the technology to its full extent to demonstrate the visual possibilities of enriched 3D gaming.
The distinguishing feature of the Unigine Heaven benchmark is hardware tessellation: a scalable technology for automatic subdivision of polygons into smaller and finer pieces, allowing developers to give their games a more detailed look almost free of charge in terms of performance. Thanks to this procedure, the rendered image comes far closer to lifelike visual quality.
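To make the subdivision idea concrete, here is a toy sketch (not Unigine's actual algorithm): one level of midpoint subdivision splits every triangle into four, so a handful of stored polygons can expand into dense geometry on the GPU:

```python
# Toy illustration of tessellation's geometric effect: each level of
# midpoint subdivision splits every triangle into four smaller ones,
# so detail grows rapidly while the stored mesh stays tiny.
def subdivide(tri):
    a, b, c = tri
    mid = lambda p, q: tuple((pi + qi) / 2 for pi, qi in zip(p, q))
    ab, bc, ca = mid(a, b), mid(b, c), mid(c, a)
    return [(a, ab, ca), (ab, b, bc), (ca, bc, c), (ab, bc, ca)]

mesh = [((0, 0), (1, 0), (0, 1))]        # a single stored triangle
for level in range(4):                   # four tessellation levels
    mesh = [t for tri in mesh for t in subdivide(tri)]
print(len(mesh))                         # 256 triangles from one
```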
Since only DX11-compliant video cards will properly run the Heaven benchmark, only those products that meet the requirements have been included.
- Unigine Heaven Benchmark 3.0
- Settings: DirectX 11, High Quality, Extreme Tessellation, 16x AF, 4x AA
Heaven Benchmark Test Results
VGA Power Consumption
In this section, PCI-Express graphics cards are isolated for idle and loaded electrical power consumption. In our power consumption tests, Benchmark Reviews utilizes an 80-PLUS GOLD certified OCZ Z-Series Gold 850W PSU, model OCZZ850. This power supply unit has been tested to provide over 90% typical efficiency by Chroma System Solutions. To measure isolated video card power consumption, Benchmark Reviews uses the Kill-A-Watt EZ (model P4460) power meter made by P3 International. In this particular test, all power consumption results were verified with a second power meter for accuracy.
The power consumption statistics discussed in this section are absolute maximum values, and may not represent real-world power consumption created by video games or graphics applications.
A baseline measurement is taken without any video card installed on our test computer system, which is allowed to boot into Windows 7 and rest idle at the login screen before power consumption is recorded. Once the baseline reading has been taken, the graphics card is installed and the system is again booted into Windows and left idle at the login screen before taking the idle reading. Our final loaded power consumption reading is taken with the video card running a stress test using graphics test #4 on 3DMark11. Below is a chart with the isolated video card power consumption (system without video card subtracted from measured combined total) displayed in Watts for each specified test product:
| VGA Product Description (sorted by combined total power) | Idle Power | Loaded Power |
|---|---|---|
| NVIDIA GeForce GTX 480 SLI Set | 82 W | 655 W |
| NVIDIA GeForce GTX 590 Reference Design | 53 W | 396 W |
| ATI Radeon HD 4870 X2 Reference Design | 100 W | 320 W |
| AMD Radeon HD 6990 Reference Design | 46 W | 350 W |
| NVIDIA GeForce GTX 295 Reference Design | 74 W | 302 W |
| ASUS GeForce GTX 480 Reference Design | 39 W | 315 W |
| ATI Radeon HD 5970 Reference Design | 48 W | 299 W |
| NVIDIA GeForce GTX 690 Reference Design | 25 W | 321 W |
| ATI Radeon HD 4850 CrossFireX Set | 123 W | 210 W |
| ATI Radeon HD 4890 Reference Design | 65 W | 268 W |
| AMD Radeon HD 7970 Reference Design | 21 W | 311 W |
| NVIDIA GeForce GTX 470 Reference Design | 42 W | 278 W |
| NVIDIA GeForce GTX 580 Reference Design | 31 W | 246 W |
| NVIDIA GeForce GTX 570 Reference Design | 31 W | 241 W |
| ATI Radeon HD 5870 Reference Design | 25 W | 240 W |
| ATI Radeon HD 6970 Reference Design | 24 W | 233 W |
| NVIDIA GeForce GTX 465 Reference Design | 36 W | 219 W |
| NVIDIA GeForce GTX 680 Reference Design | 14 W | 243 W |
| Sapphire Radeon HD 4850 X2 11139-00-40R | 73 W | 180 W |
| NVIDIA GeForce 9800 GX2 Reference Design | 85 W | 186 W |
| NVIDIA GeForce GTX 780 Reference Design | 10 W | 275 W |
| NVIDIA GeForce GTX 770 Reference Design | 9 W | 256 W |
| NVIDIA GeForce GTX 280 Reference Design | 35 W | 225 W |
| NVIDIA GeForce GTX 260 (216) Reference Design | 42 W | 203 W |
| ATI Radeon HD 4870 Reference Design | 58 W | 166 W |
| NVIDIA GeForce GTX 560 Ti Reference Design | 17 W | 199 W |
| NVIDIA GeForce GTX 460 Reference Design | 18 W | 167 W |
| AMD Radeon HD 6870 Reference Design | 20 W | 162 W |
| NVIDIA GeForce GTX 670 Reference Design | 14 W | 167 W |
| ATI Radeon HD 5850 Reference Design | 24 W | 157 W |
| NVIDIA GeForce GTX 650 Ti BOOST Reference Design | 8 W | 164 W |
| AMD Radeon HD 6850 Reference Design | 20 W | 139 W |
| NVIDIA GeForce 8800 GT Reference Design | 31 W | 133 W |
| ATI Radeon HD 4770 RV740 GDDR5 Reference Design | 37 W | 120 W |
| ATI Radeon HD 5770 Reference Design | 16 W | 122 W |
| NVIDIA GeForce GTS 450 Reference Design | 22 W | 115 W |
| NVIDIA GeForce GTX 650 Ti Reference Design | 12 W | 112 W |
| ATI Radeon HD 4670 Reference Design | 9 W | 70 W |
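The isolated wattages in the table above are produced by subtracting the bare-system baseline from each metered total. A minimal sketch of that arithmetic, using an assumed baseline and metered readings chosen to match the GTX 650 Ti row:

```python
# Sketch of how the isolated figures are derived: the bare-system baseline
# is subtracted from each metered reading. The 98W baseline and both totals
# below are illustrative assumptions, not published measurements.
baseline_w = 98            # assumed system draw with no video card installed
idle_total_w = 110         # metered at the Windows login screen, card installed
load_total_w = 210         # metered during 3DMark11 graphics test #4

print(idle_total_w - baseline_w)   # -> 12 W isolated idle
print(load_total_w - baseline_w)   # -> 112 W isolated load
```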
This article discusses the NVIDIA GeForce GTX 650 Ti video card in its reference design and clock speeds. Our power consumption results are not representative of the entire GTX 650 Ti series, as partner products may use a modified design. The GeForce GTX 650 Ti 1GB requires a single 6-pin PCI-E power connection for normal operation, and will not activate the display unless proper power has been supplied. NVIDIA recommends a 400W power supply unit for stable operation with the GTX 650 Ti, which should include the required 6-pin PCI-E connection without the use of an adapter.
In our test results the GeForce GTX 650 Ti consumed only 12W at idle and 112W under full load. This positions the GTX 650 Ti among the least power-hungry video cards we've ever tested under load, and it's all the more impressive that this efficiency comes from a GTX-series product. If you're familiar with electronics, it will come as no surprise that less power consumption equals less heat output, as evidenced by our thermal results below...
GeForce GTX 650 Ti Temperatures
This section reports our temperature results while subjecting the video card to maximum load conditions. During each test, a 20°C ambient room temperature is maintained from start to finish, as measured by digital temperature sensors located outside the computer system. GPU-Z is used to record the GPU's self-reported temperature at idle and under load.
Using a modified version of FurMark's "Torture Test" to generate maximum thermal load, peak GPU temperature is recorded in high-power 3D mode. FurMark does two things extremely well: it drives the thermal output of any graphics processor much higher than any video game realistically could, and it does so consistently every time. FurMark works great for testing the stability of a GPU as the temperature rises toward its highest possible output.
The temperatures illustrated below are absolute maximum values, and do not represent real-world temperatures created by video games or graphics applications:
| Video Card | Ambient | Idle Temp | Loaded Temp | Max Noise |
|---|---|---|---|---|
| ATI Radeon HD 5850 | 20°C | 39°C | 73°C | 7/10 |
| NVIDIA GeForce GTX 460 | 20°C | 26°C | 65°C | 4/10 |
| AMD Radeon HD 6850 | 20°C | 42°C | 77°C | 7/10 |
| AMD Radeon HD 6870 | 20°C | 39°C | 74°C | 6/10 |
| ATI Radeon HD 5870 | 20°C | 33°C | 78°C | 7/10 |
| NVIDIA GeForce GTX 560 Ti | 20°C | 27°C | 78°C | 5/10 |
| NVIDIA GeForce GTX 570 | 20°C | 32°C | 82°C | 7/10 |
| ATI Radeon HD 6970 | 20°C | 35°C | 81°C | 6/10 |
| NVIDIA GeForce GTX 580 | 20°C | 32°C | 70°C | 6/10 |
| NVIDIA GeForce GTX 590 | 20°C | 33°C | 77°C | 6/10 |
| AMD Radeon HD 6990 | 20°C | 40°C | 84°C | 8/10 |
| EVGA GeForce GTX 650 Ti | 20°C | 26°C | 62°C | 3/10 |
| NVIDIA GeForce GTX 670 | 20°C | 26°C | 71°C | 3/10 |
| NVIDIA GeForce GTX 680 | 20°C | 26°C | 75°C | 3/10 |
| NVIDIA GeForce GTX 690 | 20°C | 30°C | 81°C | 4/10 |
As we've mentioned on the pages leading up to this section, NVIDIA's Kepler architecture yields a much more efficient GPU compared to previous designs. This becomes evident in the low idle temperature, and translates into modest full-load temperatures. While NVIDIA's reference design cools the GK106 GPU exceptionally well, consumers should expect add-in card partners to advertise unnecessarily over-cooled versions for an extra premium. 62°C at 100% load after ten minutes is nothing at all, and is nowhere close to this card's 98°C thermal threshold.
GeForce GTX 650 Ti Conclusion
IMPORTANT: Although the rating and final score mentioned in this conclusion are made to be as objective as possible, please be advised that every author perceives these factors differently at various points in time. While we each do our best to ensure that all aspects of the product are considered, there are oftentimes unforeseen market conditions and manufacturer changes which occur after publication and could render our rating obsolete. Please do not base any purchase solely on our conclusion, as it represents our product rating specifically for the product tested, which may differ from future versions of the same product. Benchmark Reviews begins our conclusion with a short summary for each of the areas that we rate.
First and foremost is our performance rating. When the GeForce GTX 650 Ti surprised me with its arrival, I was caught off guard and lacked lower-end video cards that might compete well against it. The AMD Radeon HD 7770 GHz Edition seemed like a worthwhile foe, and while it was easily outperformed in every single benchmark test, it still did a decent job of offering a challenge. Lost Planet 2 and Battlefield 3 made the Radeon HD 6970 seem like a good match too, although that card generally outperforms the 650 Ti. So where does the GTX 650 Ti stand in comparison to the competition? According to NVIDIA it's rivaled by the HD 7770 series, but given these results I presume they're referring to an extremely overclocked version, because the 650 Ti trounced it every time. Perhaps the AMD Radeon HD 7850 might match up well, if only it didn't cost $25-40 more.
In the DirectX 10 game Crysis Warhead, the GeForce GTX 650 Ti surpassed the AMD Radeon HD 7770 GHz Edition, and trailed behind the more expensive Radeon HD 6970 at 1920x1080. DirectX 11 test results continued to keep the GeForce GTX 650 Ti consistently ahead of the Radeon HD 7770 in all tests. The ultra-demanding DX11 game Batman: Arkham City made use of Kepler's optimized architecture, helping to deliver a very playable 36 FPS. Battlefield 3 gave the GeForce GTX 650 Ti a 5-FPS lead over the Radeon HD 7770 while using Ultra quality settings. Lost Planet 2 played well on all graphics cards when set to high quality with 4x AA, allowing the GeForce GTX 650 Ti to keep pace with AMD's Radeon HD 6970. In Aliens vs Predator the GeForce GTX 650 Ti outmatched Radeon HD 7770 performance. Metro 2033 is another demanding game that requires high-end graphics to enjoy quality settings, and although this benchmark favors Radeon products, the GTX 650 Ti still pushed past the HD 7770.
Synthetic benchmark tools offer an unbiased read on graphics products, allowing manufacturers to display their performance without optimizations or driver influence. Futuremark's 3DMark11 benchmark suite strained our high-end graphics cards with only mid-level settings displayed at 720p, forcing the GTX 650 Ti to produce a low FPS result while still overtaking the AMD Radeon HD 7770. The Unigine Heaven 3.0 benchmark used maximum settings that strained the GTX 650 Ti's limited 128-bit memory bandwidth, yet still allowed it to surpass the Radeon HD 7770, if only by a small margin. Taking all our benchmark results into consideration, the GeForce GTX 650 Ti commands a decisive lead over its price-relative competition, and occasionally shadows AMD's more expensive Radeon HD 6970 graphics card.
Appearance is a much more subjective matter, especially since this particular rating doesn't have any quantitative benchmark scores to fall back on. NVIDIA's GeForce GTX series has traditionally used a recognizable design over the past two years with rearward-positioned fan and square corners, but GeForce GTX 650 Ti looks nothing like its siblings. Because GeForce GTX 650 Ti operates so efficiently and has a large 75mm fan cooling it, there's no worry about heated air being recirculated back into the computer case. While looks might mean a lot to some consumers, keep in mind that this product outperforms the competition while generating much less heat and producing very little noise.
Regardless of how you perceive this card's looks, construction is one area where NVIDIA continually shines. The 75mm fan on the GTX 650 Ti is extremely quiet during loaded operation, primarily due to an efficient design, but also a result of Kepler's CUDA cores consuming less energy and emitting less heat. The GTX 650 Ti requires only a single 6-pin PCI-E power connection, which helps keep this video card compatible with nearly every existing power supply unit. The GeForce GTX 650 Ti has one of the shortest PCBs we've seen on a GTX-series graphics card, measuring 5.65" long, making it a perfect graphics card for robust HTPC applications. Better yet, consumers now have an affordable single-GPU solution capable of driving three monitors, thanks to two DVI ports and a supplementary mini-HDMI output.
As of launch day, the NVIDIA GeForce GTX 650 Ti video card sells for $149.99. Please keep in mind that hardware manufacturers and retailers are constantly adjusting prices, so expect this to change a few times over the coming month. Even though the GeForce GTX 650 Ti easily outperforms the Radeon HD 7770, there's still plenty of value beyond the basic frame rate, since the added NVIDIA Kepler features and functionality set it apart. Furthermore, only NVIDIA video cards offer multi-display gaming, 3D Vision, Adaptive VSync, PhysX technology, GPU Boost, FXAA, and now TXAA.
Initially, I was disappointed that GeForce GTX 650 Ti lacked NVIDIA GPU Boost and SLI support. But then it occurred to me that overclocking would be as easy as it used to be, and most users aren't going to buy two of these cards for SLI in the first place with so many other choices available. For what it is, the GeForce GTX 650 Ti performs very well. Our benchmark tests confirmed that GTX 650 Ti can play games like BF3 at 1920x1080 with Ultra quality settings and still get good frame rates. For mainstream gamers who want to step up from aging hot-running graphics cards, the GeForce GTX 650 Ti is a value-packed option to consider.
Pros:
+ Easily outperforms the AMD Radeon HD 7770 GHz Edition
+ 75mm Cooling fan operates at very quiet acoustic levels
+ Extremely low power consumption and heat output
+ Short 5.65" profile fits into compact size computer cases
+ Triple-display NVIDIA Surround support
+ Delivers TXAA support
Cons:
- Does not offer NVIDIA GPU Boost
- Lacks SLI functionality
What do you think of the GeForce GTX 650 Ti graphics card, and would you buy one?
Comments
I'll probably spend more and end up with a matching pair of GTX-660s or if the finances permit, twin GTX660Ti cards.
Seeing as it's ~my~ money to 'waste' you should just get over it.
good review tough.
-Hank
I have three rigs here. One has two GTX-570s in it, one has two XFX Radeon R7770s in it in Crossfire, and one has a barely passable GT-240 in it.
I started using SLI and Crossfire about 6 months ago and never looked back. My crossfire setup is great and I have no problems with XFX's support.
I want the two GTX-660s (or 660Ti) for the main rig. Then the 570s move to #2 system, and the two Radeons move to the third box.
Battlefield 3 Ultra(1366x768)44-77fps....
Far Cry 3 Very High(1280x920)20-30fps...erm..
Decent Performance for 280watt load consumption.. all at 72c to 77c...very pleased coming from an xfx AMD 6870 1-gig it outperforms.
I'm set for a good long time now and happy with all three boxes. When my Grandkids come over, we have a blast with them.