XFX Radeon HD5750 Video Card HD-575X-ZNF7
Reviews - Featured Reviews: Video Cards
Written by Bruce Normann   
Wednesday, 21 October 2009

XFX Radeon HD5750

The launch of the Radeon HD57xx Juniper-GPU series cards is going very smoothly. ATI learned some hard lessons when they launched the HD4850 last year. All the partners seem to have their cards ready for distribution this time, and there's no price gouging, thanks to the stable supply. This is doubly important for the HD57xx, since they're in the middle of the pack, performance-wise, and there are lots of competitors. XFX is one of the premium retail partners in the video card industry, although they're a relative newcomer to the ATI camp, and they've supplied Benchmark Reviews with a model HD-575X-ZNF7 Radeon HD5750 to review. We recently looked at an early engineering sample of the HD5770; now we have the opportunity to take a look at a production version of the lower-priced companion card, the XFX Radeon HD5750. We already know it's not going to challenge the HD5770, but can it beat out its real competition at the lower price point?

XFX_Radeon_HD5750_Front_01.jpg

These mid-range cards compete in a much more crowded market, with many more competitors overlapping their performance and price zones. It's much more difficult to hit the bulls-eye in a market teeming with old standards and new stars, and of course, it's not a static target. Every day the market shifts; sometimes imperceptibly, sometimes radically. The target has been a bit jumpy these last few months, so let's see where this arrow lands.

About the company: XFX

XFX_Play_Hard_Logo_200px.jpg

XFX dares to go where the competition would like to go, but can't. That's because at XFX, the corporate philosophy is all about pushing the limits. For example, the Research and Development team never asks "why?" It asks "why not?" The company's tenacious quest to be the best of the best is what enables it to continually create the mind-blowing, performance-crushing, competition-obliterating video cards and motherboards that make up its core product line.

XFX's expansive product lineup continues to motivate, inspire and exceed the demands of its diverse audience. The company is a category leader thanks to its high-performance products that generate exceptional graphics and realistic and immersive 3D environments that vastly improve the gaming experience.

Satisfying the insatiable needs of gamers worldwide is just the tip of the iceberg; XFX is also highly visible in the gaming community. Over the years, the company has expanded its For Gamers By Gamers principle through a variety of award-winning advertising campaigns and a U.S. "search for the next professional gamer" that promote gaming as a professional sport rather than as entertainment. The company also maintains a strong alliance with the world's best known gamer, Fatal1ty, with whom XFX collaborates to create a professional level of video cards and gaming accessories. It is this dedication to producing the impossible that has enabled XFX to achieve a stronghold in the gaming community and to garner hundreds of awards along the way.

XFX is a division of PINE Technology Holdings Limited, a leading manufacturer and marketer of innovative solutions for the worldwide gaming technologies market. Founded in 1989, PINE designs, develops, manufactures and distributes high-performance video graphics technology and computer peripherals. The company's dedicated research and development team are continually pushing the limits to meet the demands of the ever-growing and performance-driven community.

The company has more than 1,000 employees worldwide with 17 offices around the globe. With distribution in over 50 countries around the world, PINE's XFX division maintains two state-of-the-art research and development facilities in Taiwan and Shenzhen, technical support centers in the U.S., Europe and Asia, product marketing in the U.S., and a factory in Mainland China. To learn more about PINE, please visit www.pinegroup.com

XFX Radeon HD5750 Features

The feature set of the entire ATI HD5700 video card series is nearly identical to the recently released HD5800 series. When you buy an HD5750, you get everything the top-of-the-line HD5870 video card has, at least feature-wise, for a much lower price. Performance-wise, it's another matter; all related to the fact that the HD5700 series chip (Juniper) is half the size of the HD5800 series (Cypress), with half the processing power. For those who perused the mountain of details that accompanied the 5800 series launch, this graphic should look half familiar.

ATI_Radeon_HD5770_Architecture_01.jpg

All XFX ATI Radeon™ HD 5700 Series graphics cards come with ATI Eyefinity Technology, which can instantly triple your visual real estate, up to three displays for the ultimate in innovative "wrap around" capabilities, all with crisp, sharp picture quality. ATI Eyefinity technology engages your peripheral vision and puts you right in the game. At the office, you can multi-task without needing to flip between windows. Ideal for multi-media applications, keep as many palettes or panels open as you would like while you edit images or videos.

ATI Stream Technology unleashes the massive parallel processing power of your GPU to help speed up demanding every-day applications. Experience fast video encoding and transcoding, so that video playback, editing and transferring content to your iPod or other portable media players is quick and easy.

As the first fully Microsoft DirectX 11-compatible GPUs in their class, the XFX ATI Radeon™ HD 5700 Series delivers unrivaled visual quality and intense gaming performance. Enjoy in-your-face 3D visual effects and dynamic interactivity, with features like HDR Texture Compression, DirectCompute 11 and Tessellation.

The 5700 Series is further supersized with GDDR5 memory, up to a 1.8X graphics performance boost with ATI CrossFireX™ technology in dual mode, and unparalleled anti-aliasing and enhanced anisotropic filtering for slick graphics and supreme realism.

XFX Radeon HD 5750 GPU Feature Summary

  • 1.04 billion 40nm transistors
  • TeraScale 2 Unified Processing Architecture
    • 720 Stream Processing Units
    • 36 Texture Units
    • 64 Z/Stencil ROP Units
    • 16 Color ROP Units
  • GDDR5 memory interface
  • PCI Express 2.1 x16 bus interface
  • DirectX 11 support
    • Shader Model 5.0
    • DirectCompute 11
    • Programmable hardware tessellation unit
    • Accelerated multi-threading
    • HDR texture compression
    • Order-independent transparency
  • OpenGL 3.2 support [1]
  • Image quality enhancement technology
    • Up to 24x multi-sample and super-sample anti-aliasing modes
    • Adaptive anti-aliasing
    • 16x angle independent anisotropic texture filtering
    • 128-bit floating point HDR rendering
  • ATI Eyefinity multi-display technology [2,3]
    • Three independent display controllers - Drive three displays simultaneously with independent resolutions, refresh rates, color controls, and video overlays
    • Display grouping - Combine multiple displays to behave like a single large display
  • ATI Stream acceleration technology
    • OpenCL 1.0 compliant
    • DirectCompute 11
    • Accelerated video encoding, transcoding, and upscaling [4,5]
    • Native support for common video encoding instructions
  • ATI CrossFireX™ multi-GPU technology [6]
    • Dual GPU scaling
  • ATI Avivo HD Video & Display technology [7]
    • UVD 2 dedicated video playback accelerator
    • Advanced post-processing and scaling [8]
    • Dynamic contrast enhancement and color correction
    • Brighter whites processing (blue stretch)
    • Independent video gamma control
    • Dynamic video range control
    • Support for H.264, VC-1, and MPEG-2
    • Dual-stream 1080p playback support [9,10]
    • DXVA 1.0 & 2.0 support
    • Integrated dual-link DVI output with HDCP [11]
      • Max resolution: 2560x1600 [12]
    • Integrated DisplayPort output
      • Max resolution: 2560x1600 [12]
    • Integrated HDMI 1.3 output with Deep Color, xvYCC wide gamut support, and high bit-rate audio
      • Max resolution: 1920x1200 [12]
    • Integrated VGA output
      • Max resolution: 2048x1536 [12]
    • 3D stereoscopic display/glasses support [13]
    • Integrated HD audio controller
      • Output protected high bit rate 7.1 channel surround sound over HDMI with no additional cables required
      • Supports AC-3, AAC, Dolby TrueHD and DTS Master Audio formats
  • ATI PowerPlay™ power management technology [7]
    • Dynamic power management with low power idle state
    • Ultra-low power state support for multi-GPU configurations
  • Certified drivers for Windows 7, Windows Vista, and Windows XP
  1. Driver support scheduled for release in 2010
  2. Driver version 8.66 (Catalyst 9.10) or above is required to support ATI Eyefinity technology; to enable a third display, one of the panels must use a DisplayPort connector
  3. ATI Eyefinity technology works with games that support non-standard aspect ratios which is required for panning across three displays
  4. Requires application support for ATI Stream technology
  5. Digital rights management restrictions may apply
  6. ATI CrossFireX™ technology requires an ATI CrossFireX Ready motherboard, an ATI CrossFireX™ Bridge Interconnect (one for each additional graphics card), and may require a specialized power supply
  7. ATI PowerPlay™, ATI Avivo™ and ATI Stream are technology platforms that include a broad set of capabilities offered by certain ATI Radeon™ HD GPUs. Not all products have all features, and full enablement of some capabilities may require complementary products
  8. Upscaling subject to available monitor resolution
  9. Blu-ray or HD DVD drive and HD monitor required
  10. Requires Blu-ray movie disc supporting dual 1080p streams
  11. Playing HDCP content requires additional HDCP ready components, including but not limited to an HDCP ready monitor, Blu-ray or HD DVD disc drive, multimedia application and computer operating system.
  12. Some custom resolutions require user configuration
  13. Requires 3D Stereo drivers, glasses, and display

ATI_Radeon_HD5770_Road_to_Fusion.jpg

AMD is slowly working towards a future vision of graphics computing, as is their main competitor, Intel. They both believe that integrating graphics processing with the CPU provides benefits that can only be achieved by taking the hard road. For now, the only thing we can see is their belief; the roadmap is both sketchy and proprietary. One look at the size of the Juniper GPU die and it starts to look more like a possibility than a pipe dream, though.

Radeon HD5750 Specifications

The XFX Radeon HD5750 specifications don't fit neatly between any two competing or legacy models. It has fewer stream processors than the HD4850, but they run at a higher clock rate. It has only half the memory bus width of the whole HD48xx series, but every single 5xxx card runs GDDR5 at very high clock rates. The memory bandwidth of the HD5750 compares favorably to the HD4850, at 73.6 GB/s versus 63.5 GB/s. You can see the complete specs in detail a little further below. The real story is how ATI has been able to reduce the cost of the HD5700 platform to below the HD4850. Take a look at where the four versions of the HD5xxx series end up relative to their forefathers. And remember, this is all based on launch pricing...
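If you want to check that bandwidth comparison yourself, it falls straight out of the memory clock, the data rate per pin, and the bus width. Here's a minimal sketch; the HD4850 inputs (993 MHz GDDR3 on a 256-bit bus) are the reference-card values, which we're assuming since only the resulting 63.5 GB/s figure appears above:

```python
# Back-of-the-envelope memory bandwidth check.
# GDDR5 transfers 4 bits per clock per pin, GDDR3 transfers 2.

def bandwidth_gb_s(mem_clock_mhz, transfers_per_clock, bus_width_bits):
    """Peak memory bandwidth in GB/s."""
    data_rate_mt_s = mem_clock_mhz * transfers_per_clock     # effective MT/s
    return data_rate_mt_s * (bus_width_bits / 8) / 1000      # MB/s -> GB/s

hd5750 = bandwidth_gb_s(1150, 4, 128)   # GDDR5 on a 128-bit bus
hd4850 = bandwidth_gb_s(993,  2, 256)   # reference HD4850: GDDR3 on a 256-bit bus (assumed)

print(f"HD5750: {hd5750:.2f} GB/s")     # 73.60 GB/s
print(f"HD4850: {hd4850:.2f} GB/s")     # 63.55 GB/s
```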

ATI_Radeon_HD5770_48-58_Progression.jpg

Now let's look at the actual HD5750 specs in detail:

Radeon HD5750 Speeds & Feeds

  • Engine clock speed: 700 MHz
  • Processing power (single precision): 1.008 TeraFLOPS
  • Polygon throughput: 700M polygons/sec
  • Data fetch rate (32-bit): 100.8 billion fetches/sec
  • Texel fill rate (bilinear filtered): 25.2 Gigatexels/sec
  • Pixel fill rate: 11.2 Gigapixels/sec
  • Anti-aliased pixel fill rate: 44.8 Gigasamples/sec
  • Memory clock speed: 1.15 GHz
  • Memory data rate: 4.6 Gbps
  • Memory bandwidth: 73.6 GB/sec
  • Maximum board power: 86 Watts
  • Idle board power: 16 Watts
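None of those headline throughput figures is mysterious; each one is simply the engine clock multiplied by a unit count from the feature summary. Here is a quick sketch of the arithmetic; the two-FLOPs-per-stream-processor-per-clock assumption is ours, though it matches how ATI quotes single-precision throughput:

```python
# How the headline numbers in the list above derive from clock and unit counts.

engine_clock_ghz = 0.700      # 700 MHz
stream_processors = 720
texture_units = 36
color_rops = 16

# Each stream processor can issue one fused multiply-add (2 FLOPs) per clock.
tflops = stream_processors * 2 * engine_clock_ghz / 1000     # 1.008 TFLOPS
texel_rate = texture_units * engine_clock_ghz                # 25.2 Gigatexels/sec
pixel_rate = color_rops * engine_clock_ghz                   # 11.2 Gigapixels/sec

print(f"{tflops:.3f} TFLOPS, {texel_rate:.1f} GT/s, {pixel_rate:.1f} GP/s")
```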

The HD5770 and the HD5750 were released at the same time, and the two cards are based on the same silicon. The HD5750 is likely built with chips that didn't meet the top clock spec, and/or had a defect that killed one of the SIMD blocks of stream processors (720 enabled here versus 800 on the HD5770). As anyone who has followed the AMD product line knows, modern processors are designed with the capability of disabling portions of the die. Sometimes it's done because there are defects on the chip (often a small particle of dust that ruins a transistor) and not all the internal sections pass testing. Sometimes it's done with perfectly good chips because the manufacturer needs to meet production requirements for lower cost market segments.

ATI_Radeon_HD5770_Series_Specs.jpg

It's always a delicate balance between economies of scale (building massive quantities of only one part) and the fact that you can usually meet the requirements for the lower specified product with a cheaper part. ATI has all the bases covered in this latest series of product launches; they've got the bigger, more expensive chips in the HD5800 series and the much cheaper, half-size chips in the HD5700 series. Within each series, they've got reduced spec versions that ensure that they make the most of the manufacturing yields that TSMC is able to achieve at the 40nm process node.
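To see why the half-size die matters so much for cost, consider a simple and deliberately rough yield model. The die areas below are approximate public figures and the defect density is invented purely for illustration; the point is the relative effect of halving the die, not the absolute percentages:

```python
import math

# Illustrative Poisson yield model: the chance that a die has zero killer
# defects falls off exponentially with die area. Areas are approximate public
# figures; the defect density is a made-up value for an immature 40nm process.

def poisson_yield(die_area_mm2, defects_per_mm2):
    return math.exp(-die_area_mm2 * defects_per_mm2)

defect_density = 0.005            # defects per mm^2 (illustrative only)
for name, area in [("Cypress (HD58xx)", 334), ("Juniper (HD57xx)", 166)]:
    print(f"{name}: ~{poisson_yield(area, defect_density):.0%} of dies defect-free")
```

Dies that fail with a single bad SIMD block can still be harvested as HD5750 parts, which is exactly the strategy described above.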

Closer Look: XFX Radeon HD5750

The HD5750 breaks with the general design of all the other HD5xxx cards in order to reduce production costs. The card is also smaller, at 18.4 x 11 x 3.8 cm, which means it will fit into almost any case without an issue. A radial-finned heatsink sits directly on the GPU and does all of its cooling without the extra expense of heatpipes or folded and fabricated fin blocks. A simple casting does the job, combined with a plastic shroud that directs the air from the fan through the heatsink, rather than around it. The reference design cooler also provides a large expanse of real estate for ATI's retail partners to display their branding. There are few secrets to the HD5750's overall design; what you see is what you get.

XFX_Radeon_HD5750_Front_34_01.jpg

The connections on the rear of the card are consistent with the entire HD5xxx series: two DVI, one HDMI and one DisplayPort connector. The collection of I/O ports leaves a small amount of room for some exhaust vents, but they serve no real purpose with the cooling arrangement this card uses.

XFX_HD5750_Bracket_01.jpg

The back of the Radeon HD5750 is bare, which is normal for a card in this market segment. The main features to be seen here are the metal cross-brace for the GPU heatsink screws, which are spring loaded, and the four Hynix GDDR5 memory chips on the back side. They are mounted back-to-back with four companion chips on the top side of the board. Together, they make up the full 1GB of memory contained on this card.

XFX_Radeon_HD5750_PCB_Back.jpg

For most high-end video cards, the cooling system is an integral part of the performance envelope for the card. "Make it run cooler, and you can make it run faster" is the byword for achieving gaming-class performance from the latest and greatest GPU. Even though the HD5750 is a mid-range card with a relatively small GPU die size, it's still a potential gaming product and will be pushed to maximum performance levels by many users.

XFX_Radeon_HD5750_Bottom_02.jpg

Looking at the heatsink from the side, we see that it is a very low-mass design. The fins are very thin and widely spaced, which may look like an economy measure, but is actually the most efficient design for moving heat away from the GPU and out into the airstream generated by the fan. We can also see that all the heat generated by the card is going to stay inside the case, but as we'll see later, there's not much of it to worry about. We'll be looking at cooling performance later on, to see if there are any issues caused by the lower cost cooling system.

Now, let's peek under the covers and have a good look at what's inside the XFX Radeon HD5750.

XFX Radeon HD5750 Detailed Features

The main attraction of ATI's new line of video cards is the brand new GPU with its 40nm transistors and an improved architecture. The chip in the 5700 series is called "Juniper" and is essentially half of the "Cypress", the high-end chip in the HD5800 series that was introduced in September, 2009.

ATI_Radeon_HD5770_Juniper_Headshot.jpg

The Juniper die is very small, as can be seen with this comparison to a well known dimensional standard. ATI still managed to cram over a billion transistors on there, and the small size is critical to the pricing strategy that ATI is pursuing with these new releases.

1 GB of GDDR5 memory, on a 128-bit bus with a 4.6 Gbps memory interface offers a maximum memory bandwidth of up to 73.6 GB/sec. Cutting the Cypress GPU in half limited the bus to 128-bit, but ATI has bumped up the clock rate on all their new boards. With GDDR5 running at 1150 MHz, the memory itself won't be a bottleneck on this card, but the narrower bus width does have a major performance impact. There is some room for memory overclocking via the Overdrive tool distributed by AMD.

ATI_Radeon_HD5770_HYNIX_GDDR5.jpg

The H5GQ1H24AFR-T2C chip from Hynix is rated for 5.0 Gbps, and is one of the higher rated chips in the series, as you can see in the table below. An overclock to the 1250-1300 MHz range is not unthinkable, especially if utilities become available to modify memory voltage.
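As a rough sanity check on that overclocking headroom, here's what the 5.0 Gbps rating implies relative to the stock 1150 MHz clock. This assumes the chips will actually reach their rated speed on this board, which is by no means guaranteed:

```python
# Headroom implied by the memory chips' rating (figures from the paragraph
# above; the bandwidth formula matches the one used earlier in this review).

stock_clock_mhz = 1150            # GDDR5 command clock as shipped
rated_clock_mhz = 5000 / 4        # Hynix parts rated for 5.0 Gbps -> 1250 MHz

headroom = rated_clock_mhz / stock_clock_mhz - 1
bandwidth_at_rated = rated_clock_mhz * 4 * (128 / 8) / 1000    # GB/s on the 128-bit bus

print(f"rated headroom: {headroom:.1%}")                       # ~8.7% before any overvolting
print(f"bandwidth at 1250 MHz: {bandwidth_at_rated:.1f} GB/s") # 80.0 GB/s
```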

ATI_Radeon_HD5770_Memory_Table.jpg

The power section provides 3-phase power to the GPU, which is about average for a mid-range graphics card. While increasing the number of power phases achieves better voltage regulation, improves efficiency, and reduces heat, ATI has instead relied on the inherently lower power requirements of the Juniper GPU and some fancy footwork in the power supply control chip to reduce power draw to very low levels. The fact that the GPU on the 5750 has 10% of the available Stream Processors disabled and runs 18% slower than the 5770 also helps.

XFX_Radeon_HD5750_Power_End_01.jpg

Where the HD5800 series used a number of Volterra regulators and controllers, the HD5750 makes do with one uP6209AQ controller chip from uPI. It's a relatively simple controller compared to the units on the HD5770 and the 58xx series. The combination of a lower-power GPU and low-power GDDR5 memory, however, yields an incredibly low power consumption of 16W at idle and 86W under duress. Another cost-cutting measure can be seen here: the use of standard electrolytic capacitors in a few locations.

XFX_Radeon_HD5750_Assembly_Q.jpg

The assembly quality on the XFX Radeon HD5750 PCB is up to the level I expect to see on a high-end retail product like this. The image above is from the back side of the printed circuit board, directly below the GPU. It is one of the most crowded portions of the PCB, and one where any small misplacement of a component can have serious implications for stability, especially when overclocking. Before we dive into the testing portion of the review, let's look at one of the most exciting new features available on every Radeon HD5xxx series product: Eyefinity.

ATI Eyefinity Multi-Monitors

ATI Eyefinity advanced multiple-display technology launches a new era of panoramic computing, helping to boost productivity and multitasking with innovative graphics display capabilities supporting massive desktop workspaces, creating ultra-immersive computing environments with superhigh resolution gaming and entertainment, and enabling easy configuration. High end editions will support up to six independent display outputs simultaneously.

In the past, multi-display systems catered to professionals in specific industries. Financial, energy, and medical are just some industries where multi-display systems are a necessity. Today, more and more graphic designers, CAD engineers and programmers are attaching more than one display to their workstations. A major benefit of a multi-display system is simple and universal: it enables increased productivity. This has been confirmed in industry studies which show that attaching more than one display device to a PC can significantly increase user productivity.

Early multi-display solutions were not ideal. Bulky CRT monitors claimed too much desk space; thinner LCD monitors were very expensive; and external multi-display hardware was inconvenient and also very expensive. These issues are much less of a concern today. LCD monitors are very affordable, and current generation GPUs can drive multiple display devices independently and simultaneously, without the need for external hardware. Despite the advancements in multi-display technology, AMD engineers still felt there was room for improvement, especially regarding the display interfaces. VGA carries analog signals and needs a dedicated DAC per display output, which consumes power and ASIC space. Dual-Link DVI is digital, but requires a dedicated clock source per display output and uses too many I/O pins from the GPU. It was clear that a superior display interface was needed.

ati_eyefinity_battle_forge.jpg

In 2004, a group of PC companies collaborated to define and develop DisplayPort, a powerful and robust digital display interface. At that time, engineers working for the former ATI Technologies Inc. were already thinking about a more elegant solution to drive more than two display devices per GPU, and it was clear that DisplayPort was the interface of choice for this task. In contrast to other digital display interfaces, DisplayPort does not require a dedicated clock signal for each display output. In fact, the data link is fixed at 1.62Gbps or 2.7Gbps per lane, irrespective of the timing of the attached display device. The benefit of this design is that one reference clock source provides the clock signal needed to drive as many DisplayPort display devices as there are display pipelines in the GPU. In addition, with the same number of I/O pins used for Single-Link DVI, a full speed DisplayPort link can be driven which provides more bandwidth and translates to higher resolutions, refresh rates and color depths. All these benefits perfectly complement ATI Eyefinity Multi-Display Technology.
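To put those per-lane figures in perspective, here is a rough sketch of why a single four-lane DisplayPort connection comfortably drives this card's 2560x1600 maximum. The 8b/10b encoding overhead and the roughly 10% blanking allowance are our assumptions, not numbers from the article:

```python
# Rough check that a 4-lane DisplayPort link can feed a 2560x1600 @ 60 Hz panel.

lanes = 4
lane_rate_gbps = 2.7                                  # HBR rate, per lane
payload_gbps = lanes * lane_rate_gbps * 8 / 10        # 8b/10b coding -> ~8.64 Gb/s usable

h, v, hz, bpp = 2560, 1600, 60, 24
blanking_overhead = 1.10                              # ~10% for reduced-blanking timings (assumed)
required_gbps = h * v * hz * bpp * blanking_overhead / 1e9

print(f"link payload {payload_gbps:.2f} Gb/s, 2560x1600@60 needs ~{required_gbps:.2f} Gb/s")
```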

ati_eyefinity_test_drive_unlimited.jpg

ATI Eyefinity Technology from AMD provides advanced multiple monitor technology delivering an immersive graphics and computing experience, supporting massive virtual workspaces and super-high resolution gaming environments. Legacy GPUs have supported up to two display outputs simultaneously and independently for more than a decade. Until now graphics solutions have supported more than two monitors by combining multiple GPUs on a single graphics card. With the introduction of AMD's next-generation graphics product series supporting DirectX 11, a single GPU now has the advanced capability of simultaneously supporting up to six independent display outputs.

ATI Eyefinity Technology is closely aligned with AMD's DisplayPort implementation, providing the flexibility and upgradability modern users demand. Up to two DVI, HDMI, or VGA display outputs can be combined with DisplayPort outputs for a total of up to six monitors, depending on the graphics card configuration. The initial AMD graphics products with ATI Eyefinity technology will support a maximum of three independent display outputs via a combination of two DVI, HDMI or VGA outputs with one DisplayPort monitor. AMD has a future product planned to support up to six DisplayPort outputs. Wider display connectivity is possible by using display output adapters that support active translation from DisplayPort to DVI or VGA.

The DisplayPort 1.2 specification is currently being developed by the same group of companies who designed the original DisplayPort specification. Its feature set includes higher bandwidth, enhanced audio and multi-stream support. Multi-stream, commonly referred to as daisy-chaining, is the ability to address and drive multiple display devices through one connector. This technology, coupled with ATI Eyefinity Technology, will be a key enabler for multi-display technology, and AMD will be at the forefront of this transition.

Video Card Testing Methodology

This is the beginning of a new era for testing at Benchmark Reviews. With the imminent release of Windows 7 to the marketplace, and given the prolonged and extensive pre-release testing that occurred on a global scale, there are compelling reasons to switch all testing to this new, and highly anticipated, operating system. Overall performance levels of Windows 7 have compared favorably to Windows XP, and there is solid support for the 64-bit version, something enthusiasts have been anxiously awaiting for several years.

Our site polls and statistics indicate that over 90% of our visitors use their PC for playing video games, and practically every one of you is using one of the screen resolutions mentioned below. Since all of the benchmarks we use for testing represent different game engine technology and graphic rendering processes, this battery of tests will provide a diverse range of results for you to gauge performance on your own computer system. All of the benchmark applications are capable of utilizing DirectX 10, and that is how they were tested. Some of these benchmarks have been used widely for DirectX 9 testing in the XP environment, and it is critically important to differentiate between results obtained with different versions. Each game behaves differently in DX9 and DX10 formats. Crysis is an extreme example, with frame rates in DirectX 10 only about half of what was available in DirectX 9.

At the start of all tests, the previous display adapter driver is uninstalled and trace components are removed using Driver Cleaner Pro. We then restart the computer system to establish our display settings and define the monitor. Once the hardware is prepared, we begin our testing. According to the Steam Hardware Survey published at the time of the Windows 7 launch, the most popular gaming resolution is 1280x1024 (17-19" standard LCD monitors), closely followed by 1024x768 (15-17" standard LCD). However, because these resolutions are considered 'low' by most standards, our benchmark performance tests concentrate on the up-and-coming higher-demand resolutions: 1680x1050 (22-24" widescreen LCD) and 1920x1200 (24-28" widescreen LCD monitors).

Each benchmark test program begins after a system restart, and the very first result for every test is ignored, since it often only caches the test. This process proved extremely important in the World in Conflict benchmarks, as the first run served to cache maps, allowing subsequent tests to perform much better than the first. Each test is completed five times, the high and low results are discarded, and the average of the three remaining results is displayed in our article.
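The scoring rule is easy to get subtly wrong when reproducing results at home, so here is a small sketch of it; the frame-rate values are invented purely for illustration:

```python
# Sketch of the scoring rule described above: ignore the cache-warming run,
# run five timed passes, drop the best and worst, and average the rest.

def benchmark_score(runs):
    """Average of the results after discarding the single high and low run."""
    if len(runs) < 3:
        raise ValueError("need at least three runs to trim high and low")
    trimmed = sorted(runs)[1:-1]
    return sum(trimmed) / len(trimmed)

warmup, *timed = [24.1, 27.8, 28.3, 27.5, 31.0, 28.1]   # first value is the discarded warm-up
print(f"{benchmark_score(timed):.1f} FPS")               # averages 27.8, 28.1, 28.3 -> 28.1 FPS
```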

Test System

  • Motherboard: Gigabyte GA-EP45-UD3P Rev 1.1 (F7c BIOS)
  • System Memory: 4X 1GB OCZ Reaper HPC DDR2 1150MHz (5-5-5-15)
  • Processor: Intel E7300 Core2 Duo 2.66GHz (Overclocked to 3.8 GHz)
  • CPU Cooler: CoolerMaster Hyper 212 RR-CCH-LB12-GP
  • Video: XFX Radeon HD5750, HD-575X-ZN
  • Drive 1: OCZ Summit SSD, 60GB
  • Drive 2: Western Digital VelociRaptor VR150, 150GB
  • Optical Drive: Sony NEC Optiarc AD-7190A-OB 20X IDE DVD Burner
  • Enclosure: SilverStone Fortress FT01BW ATX Case
  • PSU: Corsair CMPSU-750TX ATX12V V2.2 750Watt
  • Monitor: SOYO 24" Widescreen LCD Monitor (DYLM24E6) 1920x1200
  • Operating System: Windows 7 Ultimate Version 6.1 (Build 7600)

Benchmark Applications

  • 3DMark Vantage v1.0.1 (8x Anti Aliasing, 16x Anisotropic Filter, Extreme Quality)
  • Crysis v1.21 Benchmark (High Settings, 0x and 4x Anti-Aliasing)
  • Devil May Cry 4 Benchmark Demo (Ultra Quality, 8x MSAA)
  • Far Cry 2 v1.02 (Very High Performance, Ultra-High Quality, 8x Anti-Aliasing)
  • World in Conflict v1.0.0.9 Performance Test (Very High Setting: 4x AA/16x AF)
  • Battleforge Renegade v1.1 (Max Quality-Including SSAO, 8x Anti-Aliasing, MT Rendering)
  • Resident Evil 5 (8x Anti-Aliasing, Motion Blur ON, Quality Levels-High)

Video Card Test Products

Product Series                              | Stream Processors | Core Clock (MHz) | Shader Clock (MHz) | Memory Clock (MHz) | Memory Amount | Memory Interface
MSI Radeon HD4830 (R4830 T2D512)            | 640 | 585 | N/A  | 900  | 512MB GDDR3  | 256-bit
ASUS Radeon HD4850 (EAH4850 TOP)            | 800 | 680 | N/A  | 1050 | 512MB GDDR3  | 256-bit
XFX Radeon HD5750 (HD-575X-ZN)              | 720 | 700 | N/A  | 1150 | 1024MB GDDR5 | 128-bit
ATI Radeon HD5770 (Engineering Sample)      | 800 | 850 | N/A  | 1200 | 1024MB GDDR5 | 128-bit
ASUS GeForce GTX 260 (ENGTX260 MATRIX)      | 216 | 576 | 1242 | 999  | 896MB GDDR3  | 448-bit
MSI GeForce GTX 275 (N275GTX Twin Frozr OC) | 240 | 666 | 1476 | 1161 | 896MB GDDR3  | 448-bit

  • MSI Radeon HD4830 (R4830 T2D512 - Catalyst 8.66.6_Beta1)
  • ASUS Radeon HD4850 (EAH4850 TOP - Catalyst 8.66.6_Beta1)
  • XFX Radeon HD5750 (HD-575X-ZN - Catalyst 8.66.6_Beta1)
  • ATI Radeon HD5770 (ATI Radeon HD5770 - Catalyst 8.66.6_Beta1)
  • ASUS GeForce GTX 260 (ENGTX260 MATRIX - Forceware v190.62)
  • MSI GeForce GTX 275 (N275GTX Twin Frozr OC - Forceware v190.62)

Now we're ready to begin testing video game performance with these video cards, so please continue to the next page as we start with the 3DMark Vantage results.

3DMark Vantage Benchmark Results

3DMark Vantage is a computer benchmark by Futuremark (formerly named MadOnion) that determines the DirectX 10 performance of graphics cards in 3D games. A 3DMark score is an overall measure of your system's 3D gaming capabilities, based on comprehensive real-time 3D graphics and processor tests. By comparing your score with those submitted by millions of other gamers, you can see how your gaming rig performs, making it easier to choose the most effective upgrades or find other ways to optimize your system.

There are two graphics tests in 3DMark Vantage: Jane Nash (Graphics Test 1) and New Calico (Graphics Test 2). The Jane Nash test scene represents a large indoor game scene with complex character rigs, physical GPU simulations, multiple dynamic lights, and complex surface lighting models. It uses several hierarchical rendering steps, including for water reflection and refraction, and physics simulation collision map rendering. The New Calico test scene represents a vast space scene with lots of moving but rigid objects and special content like a huge planet and a dense asteroid belt.

At Benchmark Reviews, we believe that synthetic benchmark tools are just as valuable as video games, but only so long as you're comparing apples to apples. Since the same test is applied in the same controlled method with each test run, 3DMark is a reliable tool for comparing graphic cards against one-another.

1680x1050 is rapidly becoming the new 1280x1024. More and more widescreen monitors are being sold with new systems or as upgrades to existing ones. Even in tough economic times, the tide cannot be turned back; screen resolution and size will continue to creep up. Using this resolution as a starting point, the maximum settings applied to 3DMark Vantage include 8x Anti-Aliasing, 16x Anisotropic Filtering, all quality levels at Extreme, and Post Processing Scale at 1:2.

XFX_Radeon_HD5750_3DMark_Vantage_1680.jpg

The two test scenes in 3DMark Vantage provide a varied and modern set of challenges for the video cards and their subsystems, as described above. The results always produced higher frame rates for GT1 and so far, I haven't seen any curveball results like I used to see with 3DMark06. The XFX Radeon HD5750 basically equaled the performance of an overclocked (ASUS TOP series) HD4850 card in both GT1 and GT2. In both test cases, the HD5750 easily beat an HD4830. The HD5770 and GTX260 are in another league from the HD5750, though. There's no pretending that it's close; the extra stream processors in the HD5770 really do make a difference. The GTX275 pulls far away from the middle of the pack, as it should for the price difference.

XFX_Radeon_HD5750_3DMark_Vantage_1920.jpg

At a higher screen resolution, 1920x1200, the story changes a bit, as the HD5750 pulls out a two FPS lead on the HD4850. I know two FPS doesn't sound like much, but it's a 25% increase over the performance of the HD4850, so it's nothing to sneeze at. The HD5750 doesn't get any closer to the HD5770 or the GTX260, though. The 128-bit memory bus doesn't seem to hurt the card at higher resolutions. Just like the HD5770, the HD5750 beats the older, lower-spec HD48xx series cards, but it doesn't blow them out of the water, and wouldn't be as much of an upgrade for Radeon users with cards that are 1-2 years old. We need to look at actual gaming performance to verify that, so let's take a look in the next section at how these cards stack up in the standard bearer for gaming benchmarks, Crysis.

Crysis Benchmark Results

Crysis uses a new graphics engine: the CryENGINE2, which is the successor to Far Cry's CryENGINE. CryENGINE2 is among the first engines to use the Direct3D 10 (DirectX 10) framework, but it can also run using DirectX 9 on Vista, Windows XP and the new Windows 7. As we'll see, there are significant frame rate reductions when running Crysis in DX10. It's not an operating system issue; DX9 works fine in Win7, but DX10 knocks the frame rates in half.

Roy Taylor, Vice President of Content Relations at NVIDIA, has spoken on the subject of the engine's complexity, stating that Crysis has over a million lines of code, 1GB of texture data, and 85,000 shaders. To get the most out of modern multicore processor architectures, CPU intensive subsystems of CryENGINE 2 such as physics, networking and sound, have been re-written to support multi-threading.

Crysis offers an in-game benchmark tool, which is similar to World in Conflict. This short test does place some high amounts of stress on a graphics card, since there are so many landscape features rendered. For benchmarking purposes, Crysis can mean trouble as it places a high demand on both GPU and CPU resources. Benchmark Reviews uses the Crysis Benchmark Tool by Mad Boris to test frame rates in batches, which allows the results of many tests to be averaged.

Low-resolution testing allows the graphics processor to plateau at its maximum output performance, and shifts demand onto the other system components. At the lower resolutions, Crysis will reflect the GPU's top-end speed in the composite score, indicating full-throttle performance with little load. This makes for a less GPU-dependent test environment, but it is sometimes helpful in creating a baseline for measuring maximum output performance. At the 1280x1024 resolution used by 17" and 19" monitors, the CPU and memory have too much influence on the results to be used in a video card test. At the widescreen resolutions of 1680x1050 and 1920x1200, the performance differences between the video cards under test are mostly down to the cards themselves.

XFX_Radeon_HD5750_Crysis_NoAA.jpg

In my review of the HD5770, I said I was shocked by these numbers, and nothing has changed. Running XP-based systems and DirectX 9, the latest generation of video cards were starting to get a handle on Crysis. Certainly, in this test, with no anti-aliasing dialed in, any of the tested cards running in DX9 provided a usable solution. Now only the highest performing boards get close to an average frame rate of 30FPS. It seems like we've gone back in time, back to when only two or three video cards could run Crysis with all the eye candy turned on. Now, we'll have to wait until CryEngine3 comes out, and is optimized for the current generation of graphics APIs.

The results here are a bit disheartening, in that the HD5750 actually gets owned by the older DX9-era HD4850, albeit an overclocked version. We might be able to make the best out of a bad situation by overclocking the HD5750 to even up the match a bit, but the fact is, they are roughly equal at stock settings. Compared to the HD4830, there's not a big enough jump to justify upgrading if you want to run this game in DirectX 10. This may not be a universal problem; we'll have to see later on.

XFX_Radeon_HD5750_Crysis_4XAA.jpg

Once a decent amount of anti-aliasing is factored in, the HD5750 pulls up its bootstraps and moves ahead of the HD4850 a bit. All those little improvements ATI made to the rendering processor pay off here. It's especially noticeable at the higher resolution. Frame rates are still well below acceptable until you get to the high end cards. If you want to play this game in DX10, you are going to have to pay the man...

In our next section, Benchmark Reviews tests with Devil May Cry 4 Benchmark. Read on to see how a blended high-demand GPU test with low video frame buffer demand will impact our test products.

Devil May Cry 4 Benchmark

Devil May Cry 4 was released for the PC platform in mid-2008 as the fourth installment in the Devil May Cry video game series. DMC4 is a direct port from the PC platform to the console versions, which operate at the native 720P game resolution with no other platform restrictions. Devil May Cry 4 uses the refined MT Framework game engine, which has been used for many popular Capcom game titles over the past several years.

MT Framework is an exclusive seventh-generation game engine built to be used with games developed for the PlayStation 3 and Xbox 360, as well as PC ports. MT stands for "Multi-Thread", "Meta Tools" and "Multi-Target". Capcom originally intended to use an outside engine, but none matched their specific requirements for performance and flexibility. Games using the MT Framework are originally developed on the PC and then ported to the other two console platforms.

The PC version features a special bonus called Turbo Mode, giving the game a slightly faster speed, and implements a new difficulty called Legendary Dark Knight Mode. The PC version also has both DirectX 9 and DirectX 10 modes for the Windows XP, Vista, and Windows 7 operating systems.

It's always nice to be able to compare the results we receive here at Benchmark Reviews with the results you measure on your own computer system. Usually this isn't possible, since settings and configurations make it nearly impossible to match one system to the next; plus you have to own the game or benchmark tool we used.

Devil May Cry 4 addresses this by offering a free benchmark tool available for download. Because the DMC4 MT Framework game engine is rather low-demand for today's cutting-edge video cards, Benchmark Reviews uses the 1920x1200 resolution to test, with 8x AA (the highest AA setting available to Radeon HD video cards) and 16x AF.

XFX_Radeon_HD5750_DMC4_DX10.jpg

Devil May Cry 4 is not as demanding a benchmark as it used to be. Only scenes #2 and #4 are worth looking at from the standpoint of trying to separate the fastest video cards from the slower ones. Still, it represents a typical environment for many games that our readers still play on a regular basis, so it's good to see what works with it and what doesn't. Any of the tested cards will do a credible job in this application, and the performance scales in a fairly linear fashion. You get what you pay for when running this game, at least in benchmarks. This is one time where you can generally use the maximum available anti-aliasing settings, so NVIDIA users should feel free to crank it up to 16X. The DX10 "penalty" is of no consequence here.

The HD5750 once again loses out to the HD4850 and falls far behind the HD5770 and the GTX260. They all provide excellent frame rates, however, well above the recommended minimums. The surprise of this test is the excellent performance of both the HD4850 and the HD4830. There's something about those two old soldiers that just loves this game. Suffice it to say, if you are getting 60+ frames per second in all your video games, you don't need a video card upgrade.

Our next benchmark of the series is for a very popular FPS game that rivals Crysis for world-class graphics.

Far Cry 2 Benchmark Results

Ubisoft has developed Far Cry 2 as a sequel to the original, but with a very different approach to game play and story line. Far Cry 2 features a vast world built on Ubisoft's new game engine called Dunia, meaning "world", "earth" or "living" in Farsi. The setting in Far Cry 2 takes place in a fictional Central African landscape, set to a modern day timeline.

The Dunia engine was built specifically for Far Cry 2 by the Ubisoft Montreal development team. It delivers realistic semi-destructible environments, special effects such as dynamic fire propagation and storms, real-time night-and-day sunlight and moonlight cycles, a dynamic music system, and non-scripted enemy A.I. actions.

The Dunia game engine takes advantage of multi-core processors as well as multiple processors, and supports DirectX 9 as well as DirectX 10. Only 2 or 3 percent of the original CryEngine code is re-used, according to Michiel Verheijdt, Senior Product Manager for Ubisoft Netherlands. Additionally, the engine is less hardware-demanding than CryEngine 2, the engine used in Crysis. However, it should be noted that Crysis delivers greater character and object texture detail, as well as more destructible elements within the environment. For example, trees break into many smaller pieces and buildings break down to their component panels. Far Cry 2 also supports the amBX technology from Philips. With the proper hardware, this adds effects like vibrations, ambient colored lights, and fans that generate wind effects.

There is a benchmark tool in the PC version of Far Cry 2, which offers an excellent array of settings for performance testing. Benchmark Reviews used the maximum settings allowed for our tests, with the resolution set to 1920x1200. The performance settings were all set to 'Very High', Render Quality was set to 'Ultra High' overall quality level, 8x anti-aliasing was applied, and HDR and Bloom were enabled. Of course DX10 was used exclusively for this series of tests.

XFX_Radeon_HD5750_Far_Cry_2_DX10.jpg

Although the Dunia engine in Far Cry 2 is slightly less demanding than the CryEngine 2 engine in Crysis, the strain appears to be extremely close. In Crysis we didn't dare to test AA above 4x, whereas we use 8x AA and 'Ultra High' settings in Far Cry 2. Here we see the opposite effect when switching our testing to DirectX 10: Far Cry 2 seems to have been optimized, or at least written, with a clear understanding of DX10 requirements.

Using the short 'Ranch Small' time demo (which yields the lowest FPS of the three tests available), not all products are capable of producing playable frame rates with the settings all turned up. The Radeon HD5750 actually hangs close to its big brother, the HD5770 in this game. Although the Dunia engine seems to be optimized for NVIDIA chips, the improvements ATI incorporated in their latest GPUs are just enough to allow this game to be played with a mid-range card. Older ATI products struggle with this benchmark, and if you've got one of those, either the HD5750 or HD5770 would be an upgrade for playing this game.

Our next benchmark of the series puts our collection of video cards against some fresh graphics in the newly released Resident Evil 5 benchmark.

Resident Evil 5 Benchmark Results

PC gamers get the ultimate Resident Evil package in this new PC version with exclusive features including NVIDIA's new GeForce 3D Vision technology (wireless 3D Vision glasses sold separately), new costumes and a new mercenaries mode with more enemies on screen. Delivering an infinite level of detail, realism and control, Resident Evil 5 is certain to bring new fans to the series. Incredible changes to game play and the world of Resident Evil make it a must-have game for gamers across the globe.

Years after surviving the events in Raccoon City, Chris Redfield has been fighting the scourge of bio-organic weapons all over the world. Now a member of the Bioterrorism Security Assessment Alliance (BSAA), Chris is sent to Africa to investigate a biological agent that is transforming the populace into aggressive and disturbing creatures. New cooperatively-focused game play revolutionizes the way that Resident Evil is played. Chris and Sheva must work together to survive new challenges and fight dangerous hordes of enemies.

From a gaming performance perspective, Resident Evil 5 features "Next Generation of Fear" ground-breaking graphics that utilize an advanced version of Capcom's proprietary game engine, MT Framework, which powered the hit titles Devil May Cry 4, Lost Planet and Dead Rising. The game uses a wider variety of lighting to enhance the challenge. "Fear Light as much as Shadow": lighting effects provide a new level of suspense as players attempt to survive in both harsh sunlight and extreme darkness. As usual, we maxed out the graphics settings on the benchmark version of this popular game to put the hardware through its paces. Much like Devil May Cry 4, it's relatively easy to get good frame rates in this game, so take the opportunity to turn up all the knobs and maximize the visual experience.

XFX_Radeon_HD5750_Resident_Evil_5_DX10.jpg

The Resident Evil 5 benchmark tool provides a graph of continuous frame rates and averages for each of four distinct scenes. In addition, it calculates an overall average for the four scenes. The overall average is what we report here, as the scenes were pretty evenly matched and no scene had results so far above or below the average as to present a unique situation.

The 1680x1050 test results from this game scale almost as linearly as a synthetic benchmark. In the case of the video card we're interested in, the HD5750 sits on the exact same rung as the HD4850 and 6-7 FPS behind the HD5770. The GTX260-216 and GTX275 do very well in this game, beating both new ATI offerings easily.

Our next benchmark of the series features a strategy game with photorealistic modern-day wartime graphics: World in Conflict.

World in Conflict Benchmark Results

The latest version of Massive's proprietary Masstech engine utilizes DX10 technology and features advanced lighting and physics effects, and allows for a full 360-degree range of camera control. Massive's Masstech engine scales down to accommodate a wide range of PC specifications; if you've played a modern PC game within the last two years, you'll be able to play World in Conflict.

World in Conflict's FPS-like control scheme and 360-degree camera make its action-strategy game play accessible to strategy fans and fans of other genres... if you love strategy, you'll love World in Conflict. If you've never played strategy, World in Conflict is the strategy game to try.

Based on the test results charted below it's clear that WiC doesn't place a limit on the maximum frame rate (to prevent a waste of power) which is good for full-spectrum benchmarks like ours, but bad for electricity bills. The average frame rate is shown for each resolution in the chart below. World in Conflict just begins to place demands on the graphics processor at the 1680x1050 resolution, so we'll skip the low-res testing.

XFX_Radeon_HD5750_World_In_Conflict_DX10.jpg

The GT200 series GPUs from NVIDIA seem to have a distinct advantage with the World In Conflict benchmark. Once again, the older HD4850 improves on the performance of the HD5750, even in the higher resolution testing this time, despite having only 512MB of memory to play with. Something is clearly not optimized in this benchmark for the latest ATI version of pixel processing hardware.

Our last benchmark of the series brings DirectX 11 into the mix, a situation that only two of the cards under test are capable of handling.

BattleForge - Renegade Benchmark Results

In anticipation of the release of DirectX 11 with Windows 7, and coinciding with the release of AMD's ATI HD 5870, BattleForge has been updated to allow it to run using DirectX 11 on supported hardware. Well, what does all of this actually mean, you may ask? It gives us a sip of water from the Holy Grail of game design and computing in general: greater efficiency. What does this mean for you? It means that the game will demonstrate a higher level of performance for the same processing power, which in turn allows more to be done with the game graphically. In layman's terms, the game will have a higher frame rate and new ways of creating graphical effects, such as shadows and lighting. The culmination of all of this is a game that both runs and looks better. The game is running on a completely new graphics engine that was built for BattleForge.

BattleForge is a next-gen real time strategy game, in which you fight epic battles against evil along with your friends. What makes BattleForge special is that you can assemble your army yourself: the units, buildings and spells in BattleForge are represented by collectible cards that you can trade with other players. BattleForge is developed by EA Phenomic. The studio was founded by Volker Wertich, father of the classic "The Settlers" and the SpellForce series. Phenomic has been an EA studio since August 2006.

BattleForge was released on Windows in March 2009. On May 26, 2009, BattleForge became a Play 4 Free branded game with only 32 of the 200 cards available. In order to get additional cards, players will now need to buy points on the BattleForge website. The retail version comes with all of the starter decks and 3,000 BattleForge points.

XFX_Radeon_HD5750_Battleforge_DX11.jpg

Never mind the DX10 v. DX11 question, the real news here is that this game was almost certainly developed exclusively on ATI hardware, and it shows. The good news is that at both widescreen resolutions, the HD5750 trumps the GTX260, and comes within spitting distance of the GTX275, an almost unthinkable result. The bad news is that the old HD4850 does even better.

The BattleForge benchmark itself is a tough one, once all the settings are maxed out. In case you are wondering, these results are with SSAO "On" and set to the Very High setting. I know the NVIDIA cards do a little better when SSAO is set to "Off", and I will eventually get around to posting a full set of results with this setting. Personally though, I think the writing is on the wall as far as DirectX 11 goes, and if there isn't going to be a level playing field for 3-4 months, it's not ATI's fault. I mean, who DIDN'T know, more than a year ago, that Windows 7 and DX11 were coming?

In our next section, we investigate the thermal performance of the Radeon HD5750, and see if that half-size 40nm GPU die runs as cool as we think it will.

XFX Radeon HD5750 Temperature

It's hard to know exactly when the first video card got overclocked, and by whom. What we do know is that it's hard to imagine a computer enthusiast or gamer today that doesn't overclock their hardware. Of course, not every video card has the head room. Some products run so hot that they can't suffer any higher temperatures than they generate straight from the factory. This is why we measure the operating temperature of the video card products we test.

To begin testing, I use GPU-Z to measure the temperature at idle as reported by the GPU. Next, I use FurMark 1.7.0 to generate maximum thermal load and record GPU temperatures in high-power 3D mode. The ambient room temperature remained stable at 21°C throughout testing (it cooled off this week in DC...). The ATI Radeon HD5750 video card recorded 33°C in idle 2D mode, and increased to 62°C after 20 minutes of stability testing in full 3D mode, at 1920x1200 resolution and the maximum MSAA setting of 8X. The fan was left on its stock settings for this test.

62°C is an excellent result for a temperature stress test, especially with stock fan settings. The built-in fan controller generally runs the fan at 1200 RPM during 2D or idle operation. On most benchmarks, the temperature never got above 60°C and the fan stayed there. Once temps got above 60°C, the controller ramped the fan up to about 1400 RPM. With a few fewer stream processors and a lower GPU clock rate than the HD5770, it seems like you really can't push the thermal boundaries of this card. Overclockers are licking their chops about now....

FurMark is an OpenGL benchmark that heavily stresses and overheats the graphics card with fur rendering. The benchmark offers several options allowing the user to tweak the rendering: fullscreen/windowed mode, MSAA selection, window size, and duration. The benchmark also includes a GPU Burner mode (stability test). FurMark requires an OpenGL 2.0 compliant graphics card with lots of GPU power! As an oZone3D.net partner, Benchmark Reviews offers a free download of FurMark to our visitors.

XFX_Radeon_HD5750_Furmark_Temp.jpg

FurMark does two things extremely well: it drives the thermal output of any graphics processor higher than any other application or video game, and it does so consistently, every time. While FurMark is not a true benchmark tool for comparing different video cards, it works well for comparing one product against itself with different drivers or clock speeds, or for testing the stability of a GPU under maximum thermal load. In the end, though, it's a rather limited tool.

In our next section, we discuss electrical power consumption and learn how well (or poorly) each video card will impact your utility bill...

VGA Power Consumption

Life is not as affordable as it used to be, and items such as gasoline, natural gas, and electricity top the list of resources that have exploded in price over the past few years. Add to this the limited supply of non-renewable resources relative to current demand, and you can see that prices are only going to get worse. Planet Earth needs our help, and needs it badly. With forests becoming barren of vegetation and snow-capped poles quickly turning brown, the technology industry has a new attitude towards suddenly becoming "green". I'll spare you the powerful marketing hype that I get from various manufacturers every day, and get right to the point: your computer hasn't been doing much to help save energy... at least up until now.

XFX_Radeon-HD5750_CPU-Z.jpg

To measure isolated video card power consumption, Benchmark Reviews uses the Kill-A-Watt EZ (model P4460) power meter made by P3 International. A baseline test is taken without a video card installed inside our computer system, which is allowed to boot into Windows and rest idle at the login screen before power consumption is recorded. Once the baseline reading has been taken, the graphics card is installed and the system is again booted into Windows and left idle at the login screen. Our final loaded power consumption reading is taken with the video card running a stress test using FurMark. Below is a chart with the isolated video card power consumption (not system total) displayed in Watts for each specified test product:

Video Card Power Consumption by Benchmark Reviews

VGA Product Description (sorted by combined total power) | Idle Power | Loaded Power
NVIDIA GeForce GTX 480 SLI Set | 82 W | 655 W
NVIDIA GeForce GTX 590 Reference Design | 53 W | 396 W
ATI Radeon HD 4870 X2 Reference Design | 100 W | 320 W
AMD Radeon HD 6990 Reference Design | 46 W | 350 W
NVIDIA GeForce GTX 295 Reference Design | 74 W | 302 W
ASUS GeForce GTX 480 Reference Design | 39 W | 315 W
ATI Radeon HD 5970 Reference Design | 48 W | 299 W
NVIDIA GeForce GTX 690 Reference Design | 25 W | 321 W
ATI Radeon HD 4850 CrossFireX Set | 123 W | 210 W
ATI Radeon HD 4890 Reference Design | 65 W | 268 W
AMD Radeon HD 7970 Reference Design | 21 W | 311 W
NVIDIA GeForce GTX 470 Reference Design | 42 W | 278 W
NVIDIA GeForce GTX 580 Reference Design | 31 W | 246 W
NVIDIA GeForce GTX 570 Reference Design | 31 W | 241 W
ATI Radeon HD 5870 Reference Design | 25 W | 240 W
ATI Radeon HD 6970 Reference Design | 24 W | 233 W
NVIDIA GeForce GTX 465 Reference Design | 36 W | 219 W
NVIDIA GeForce GTX 680 Reference Design | 14 W | 243 W
Sapphire Radeon HD 4850 X2 11139-00-40R | 73 W | 180 W
NVIDIA GeForce 9800 GX2 Reference Design | 85 W | 186 W
NVIDIA GeForce GTX 780 Reference Design | 10 W | 275 W
NVIDIA GeForce GTX 770 Reference Design | 9 W | 256 W
NVIDIA GeForce GTX 280 Reference Design | 35 W | 225 W
NVIDIA GeForce GTX 260 (216) Reference Design | 42 W | 203 W
ATI Radeon HD 4870 Reference Design | 58 W | 166 W
NVIDIA GeForce GTX 560 Ti Reference Design | 17 W | 199 W
NVIDIA GeForce GTX 460 Reference Design | 18 W | 167 W
AMD Radeon HD 6870 Reference Design | 20 W | 162 W
NVIDIA GeForce GTX 670 Reference Design | 14 W | 167 W
ATI Radeon HD 5850 Reference Design | 24 W | 157 W
NVIDIA GeForce GTX 650 Ti BOOST Reference Design | 8 W | 164 W
AMD Radeon HD 6850 Reference Design | 20 W | 139 W
NVIDIA GeForce 8800 GT Reference Design | 31 W | 133 W
ATI Radeon HD 4770 RV740 GDDR5 Reference Design | 37 W | 120 W
ATI Radeon HD 5770 Reference Design | 16 W | 122 W
NVIDIA GeForce GTS 450 Reference Design | 22 W | 115 W
NVIDIA GeForce GTX 650 Ti Reference Design | 12 W | 112 W
ATI Radeon HD 4670 Reference Design | 9 W | 70 W

* Results are accurate to within +/- 5W.

The ATI Radeon HD5750 pulled 18 watts at idle (96 W with the card minus the 78 W baseline) and 94 watts when running full out (172 minus 78), using the test method outlined above. These numbers are reasonably close to the factory figures of 16 W at idle and 86 W under load. This is one area where this card excels. If you keep your computer running most of the day and/or night, this card could easily save you 1 kWh per day in electricity.
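To make that arithmetic explicit, here is a minimal sketch in Python. It is purely illustrative; the variable names are mine, and the around-the-clock duty cycle in the last step is an assumption used only to show what a 1 kWh-per-day saving implies in average watts.

    # Kill-A-Watt readings from the test above (whole-system watts)
    baseline_idle  = 78    # system idle, no discrete video card installed
    with_card_idle = 96    # system idle with the HD5750 installed
    with_card_load = 172   # system running FurMark with the HD5750 installed

    card_idle_w = with_card_idle - baseline_idle   # 18 W isolated idle draw
    card_load_w = with_card_load - baseline_idle   # 94 W isolated loaded draw

    # Saving 1 kWh over a 24-hour day requires an average reduction in draw of:
    avg_reduction_w = 1000 / 24                    # roughly 42 W around the clock
    print(card_idle_w, card_load_w, round(avg_reduction_w, 1))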

Radeon HD5750 Final Thoughts

The alternative title for this review could have been: "What Price DirectX 10?" or "Who Killed Crysis?" I know the big news is DirectX 11, and how it is a major advancement in both image quality and coding efficiency, but for the time being, we're stuck in a DirectX 10 world, for the most part. DX11 games won't be thick on the ground for at least a year, and some of us are going to continue playing our old favorites. So, with the switch to Windows 7, what's the impact on gaming performance? So far, it's a bit too random for my tastes.

ATI_Radeon_HD5770_DirectX_Comparis.jpg

We seem to be back to a situation where the software differences between games have a bigger influence on performance than hardware and raw pixel-processing power. As the adoption rate for Windows 7 ramps up, more and more gamers are going to be wondering whether DirectX 10 is a blessing or a curse. Crysis gets cut off at the knees, but Far Cry 2 gets a second wind with DX10. World In Conflict holds back its best gameplay for NVIDIA customers, but BattleForge swings the other way, in both DX10 and DX11.

I have a feeling this is why gamers resolutely stuck with Windows XP and never warmed up to Vista. It wasn't the operating system per se, as much as it was DirectX 10. And I want to clarify: there's probably nothing inherently wrong with DX10, it's just that so few games were designed to use it effectively. The other problem is that, unlike other image-enhancing features, DirectX has no sliding scale. I can't select 2x or 4x or 8x to optimize the experience; it's either all in, or all out.

ATI_Radeon_HD5770_DirectX11_Benefi.jpg

The good news is that the adoption rate for Windows 7 will probably set records, if anyone is keeping score. Combine that with the real-world benefit to software coders that DirectX 11 brings, and there is a good probability that we won't be stuck in DX10 land for very long. New graphics hardware from both camps, a new operating system, a new graphics API, and maybe an economic recovery in the works? It's going to be an interesting holiday season, this year!

XFX Radeon HD5750 Conclusion

The performance of the HD5750 is pretty good, considering the modest-looking hardware resources that make it all possible. One way of showing this quantitatively is to look at the power required to deliver the performance. The HD5750 offers roughly the same performance as an HD4850 for half the power at full load, and only one third the power at idle. The difference could easily equal a savings of 1 kWh per day. That's a nice perk for new users, but for those who may already have a mid-range card that's 1-2 years old, getting the equivalent performance of an HD4850 in late 2009 may not be enough. It's OK to want more, even in a world barely recovering from a global recession. We'll talk value in a minute, but the performance is what it is; it's competitive, not a giant killer. The presence of 1 GB of GDDR5 memory really helps at higher resolutions, so the card won't hold you back if you pick up a new monitor.

Performance is more than just frames-per-second, though; the ability to run 2-3 monitors with Full ATI EyeFinity Support counts, too. Plus, we've been measuring performance with Beta drivers. If you've read some of my recent video card reviews, you've got a better understanding of why driver performance on launch day is not a good measure of the final product. So, while the raw performance numbers are good enough for the target price point today, I predict even better things to come for both price and performance.

XFX_Radeon_HD5750_with-Box.jpg

The appearance of the product itself is a mixed bag. The card uses the reference cooler designed by ATI, and early pictures of the unadorned black fan shroud looked pretty goofy. Once XFX got their graphics artists to work up a product label, they improved the appearance by a large margin. The full cover and red hood scoops from the rest of the HD5xxx family were too expensive for this product, and the cheaper cooling system helped pay for the premium memory system; an excellent trade-off, I'd say. The reference design has plenty of cooling capacity for the tiny Juniper GPU, especially with the lower GPU clock and 80 disabled stream processors.

The build quality of the XFX Radeon HD5750 is much better than that of the engineering sample I received before the launch date. The retail version XFX is shipping had no quality issues that I could detect. The parts were all high quality, and the PC board was manufactured and assembled with care and precision.

The features of the HD5750 are amazing, having been carried over in full measure from the HD5800 series: DirectX 11, Full ATI Eyefinity Support, ATI Stream Technology Support, DirectCompute 11 and OpenCL Support, and HDMI 1.3 with Dolby TrueHD and DTS-HD Master Audio. By focusing almost exclusively on gaming performance, we've barely scratched the surface of all the capability on offer, but the card has other uses as well.

As of late October, Newegg is selling the XFX Radeon HD5750 at $139.99, which is $10 higher than several other vendors. XFX has always commanded a premium for their cards, because of their enthusiast-based support model. They offer a double lifetime warranty, which is quite useful for enthusiasts who buy and sell the latest hardware on a regular basis: the second owner gets a second full lifetime warranty. That's a very nice benefit if you know the guy who owned the card before you ran it 24/7, highly overclocked at full load, racking up points in Folding@Home. I think this is less likely to be an issue with a card in this price range, but it does explain the price premium a bit. I'm somewhat disappointed that inflation seems to have eaten up the cost advantage I had hoped to see over the existing HD4850, but progress is measured more in the feature set of this card than in its raw graphics processing power.

The XFX Radeon HD5750 earns a Silver Tachometer Award, because it fills an important slot in the graphics card middle ground. With the launch of Windows 7 and its DirectX 11 interface, anyone wanting to take advantage of the advanced features becoming more prevalent in the next 4-6 months needs new hardware. If you're shopping in this price range, this is the only card to get; every other choice is going to cost more or do less. I was a little disappointed that XFX didn't include a coupon for DiRT 2, as some other ATI partners are doing; it's one of the DirectX 11 titles I'm most looking forward to.

silvertachaward.png

Pros:

+ Unmatched feature set
+ Extremely low power consumption
+ 1GB of GDDR5 memory
+ HDMI and DisplayPort interfaces
+ Cool, quiet operation
+ Requires only one 6-pin power connector
+ Sleek, modern looks
+ Lowest cost DirectX 11 graphics card

Cons:

- Can't quite beat old faithful, the HD4850
- Premium pricing at launch

Ratings:

  • Performance: 8.50
  • Appearance: 9.00
  • Construction: 9.00
  • Functionality: 9.50
  • Value: 8.75

Final Score: 8.95 out of 10.

Quality Recognition: Benchmark Reviews Silver Tachometer Award.

Questions? Comments? Benchmark Reviews really wants your feedback. We invite you to leave your remarks in our Discussion Forum.

