XFX Radeon HD5830 DX11 Video Card
Written by Bruce Normann   
Friday, 26 March 2010

XFX Radeon HD5830 Video Card Review

The XFX Radeon HD 5830 looks deceptively simple; it doesn't even look like a new product. It doesn't shout, "Hey, look at me. I'm new and different." Every ATI Radeon HD 5000 series product launch until now has had a visually unique model range that you could point to and say, "That's a 5770 or That's a 5670." ad infinitum it seemed for the last several months. Now, with the release of almost a dozen new HD 5830 graphics cards in the last few weeks, there are hardly two HD5830 cards that look alike. It's a stealth product in chameleon's clothes. Well, Benchmark Reviews has one pinned down under the microscope, and we can all take a look at an actual retail video card, P/N HD-583X-ZNFV, from one of ATI's newest major AIB partners, XFX.

XFX_HD_5830_Video_Card_Front_View_02.jpg

Launch day for the ATI Radeon HD 5830 was a bit different than most; 90% of the dozens of reviews were of a card that would never be sold. The engineering prototypes that ATI supplied for the industry roll-out were all built on the HD 5870 reference design, because the 5830 needs more power than the 5850 voltage regulator modules could supply. Meanwhile, all the ATI Add-in-Board (AIB) partners were getting ready to release their own custom versions of the 5830. Most of them had their own custom designs for the 5870 and 5850 in the works, or on the market already, so it was supposed to be an easy matter to port the house design over to the 5830.

Ever since the introduction of the ATI Radeon HD5770, PC gamers and enthusiasts have been eyeing the wide gap between the HD5770 and the HD5850, knowing that it would only be a matter of time before ATI plugged the hole in its product line. ATI is taking a slightly different approach with the Radeon HD 5830 video card: selling only the ASIC chips to its Add-In-Board (AIB) partners. Benchmark Reviews recently tested a prototype ATI Radeon HD 5830 video card, and now we have a full retail product to dissect. No more BIOS flashes of well-used test mules; this is the real thing.

Please follow along as Benchmark Reviews gives you a detailed look at the latest Radeon 5000-series product from XFX.

About the company: XFX

xfx_logo_400px.png

XFX dares to go where the competition would like to go, but can't. That's because at XFX, the corporate philosophy is all about pushing the limits. For example, the Research and Development team never asks "why?" It asks "why not?" The company's tenacious quest to be the best of the best is what enables it to continually create the mind-blowing, performance-crushing, competition-obliterating video cards and motherboards that make up its core product line.

XFX's expansive product lineup continues to motivate, inspire and exceed the demands of its diverse audience. The company is a category leader thanks to its high-performance products that generate exceptional graphics and realistic and immersive 3D environments that vastly improve the gaming experience.

Satisfying the insatiable needs of gamers worldwide is just the tip of the iceberg; XFX is also highly visible in the gaming community. Over the years, the company has expanded its For Gamers By Gamers principle through a variety of award-winning advertising campaigns and a U.S. "search for the next professional gamer" that promote gaming as a professional sport rather than as entertainment. The company also maintains a strong alliance with the world's best known gamer, Fatal1ty, with whom XFX collaborates to create a professional level of video cards and gaming accessories. It is this dedication to producing the impossible that has enabled XFX to achieve a stronghold in the gaming community and to garner hundreds of awards along the way.

XFX is a division of PINE Technology Holdings Limited, a leading manufacturer and marketer of innovative solutions for the worldwide gaming technologies market. Founded in 1989, PINE designs, develops, manufactures and distributes high-performance video graphics technology and computer peripherals. The company's dedicated research and development team is continually pushing the limits to meet the demands of the ever-growing and performance-driven community.

The company has more than 1,000 employees worldwide with 17 offices around the globe. With distribution in over 50 countries around the world, PINE's XFX division maintains two state-of-the-art research and development facilities in Taiwan and Shenzhen, technical support centers in the U.S., Europe and Asia, product marketing in the U.S., and a factory in Mainland China. To learn more about PINE, please visit www.pinegroup.com.

XFX Radeon HD5830 Features

The feature set of the XFX HD5830 video card is identical to the two previously released HD5800 series cards. The only differences are in the number of processing units at various places in the architecture. For those who perused the mountain of details that accompanied the 5800 series launch, this graphic should look somewhat familiar. What might catch your attention are the big white spaces where I used a digital eraser to show what gets left out on this particular chip. It's not nearly as pretty as the corporate graphics, but it gives you a better feel for what gets left on the cutting room floor when a potential HD 5870 GPU becomes an HD 5830. Notice that the number of ROPs gets cut in half, and I suspect some of that L2 cache goes away, too. This could potentially have a bigger performance impact than the reduction in shaders from 1600 down to 1120.

ATI_Radeon_HD5830_Video_Card_Hacked_Architecture.jpg
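As a rough sanity check on that architecture diagram, the raw throughput ratios can be worked out directly from the unit counts and clocks. Here is a minimal Python sketch; the HD 5870's 80 texture units and 32 ROPs come from ATI's published Cypress specs, and "2 FLOPs per stream processor per clock" assumes the usual multiply-add accounting.

```python
# Back-of-envelope comparison of what the unit cuts cost the HD 5830
# relative to a full Cypress (HD 5870). Clocks are in MHz.

def rates(shaders, texture_units, rops, clock_mhz):
    """Return (GFLOPS, GTexels/s, GPixels/s) for a Cypress-class Radeon."""
    gflops = shaders * 2 * clock_mhz / 1000.0   # 2 FLOPs (MAD) per SP per clock
    gtexels = texture_units * clock_mhz / 1000.0
    gpixels = rops * clock_mhz / 1000.0
    return gflops, gtexels, gpixels

hd5870 = rates(1600, 80, 32, 850)
hd5830 = rates(1120, 56, 16, 800)

for name, a, b in (("shader GFLOPS", hd5830[0], hd5870[0]),
                   ("texture fill (GT/s)", hd5830[1], hd5870[1]),
                   ("pixel fill (GP/s)", hd5830[2], hd5870[2])):
    print(f"{name}: {a:.1f} vs {b:.1f} ({a / b:.0%} of HD 5870)")
```

Shader and texture throughput land at roughly two-thirds of an HD 5870, but pixel fill drops below half, which is exactly why the ROP cut could matter more than the shader cut.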

All ATI Radeon HD 5000 Series graphics cards come with ATI Eyefinity Technology, which can instantly triple your visual real estate across up to three displays, for the ultimate in innovative "wrap around" capabilities, all with crisp, sharp picture quality. ATI Eyefinity technology engages your peripheral vision and puts you right in the game. At the office, you can multi-task without needing to flip between windows. Ideal for multi-media applications, it lets you keep as many palettes or panels open as you would like while you edit images or videos.

ATI Stream Technology unleashes the massive parallel processing power of your GPU to help speed up demanding everyday applications. Experience fast video encoding and transcoding, so that video playback, editing and transferring content to your iPod or other portable media players is quick and easy.

As the first fully Microsoft DirectX 11-compatible GPUs, the ATI Radeon HD 5800 Series delivers unrivaled visual quality and intense gaming performance. Enjoy in-your-face 3D visual effects and dynamic interactivity, with features like HDR Texture Compression, DirectCompute 11 and Tessellation.

The 5800 Series is further supersized with GDDR5 memory, up to a 1.8x graphics performance boost with ATI CrossFireX technology in dual-GPU mode, and unparalleled anti-aliasing and enhanced anisotropic filtering for slick graphics and supreme realism.

Radeon HD5830 GPU Feature Summary

  • 2.15 billion 40nm transistors
  • TeraScale 2 Unified Processing Architecture
    • 1120 Stream Processing Units
    • 56 Texture Units
    • 64 Z/Stencil ROP Units
    • 16 Color ROP Units
  • 256-bit GDDR5 memory interface
  • PCI Express 2.1 x16 bus interface
  • DirectX 11 support
    • Shader Model 5.0
    • DirectCompute 11
    • Programmable hardware tessellation unit
    • Accelerated multi-threading
    • HDR texture compression
    • Order-independent transparency
  • OpenGL 3.2 support [1]
  • Image quality enhancement technology
    • Up to 24x multi-sample and super-sample anti-aliasing modes
    • Adaptive anti-aliasing
    • 16x angle independent anisotropic texture filtering
    • 128-bit floating point HDR rendering
  • ATI Eyefinity multi-display technology [2,3]
    • Three independent display controllers - Drive three displays simultaneously with independent resolutions, refresh rates, color controls, and video overlays
    • Display grouping - Combine multiple displays to behave like a single large display
  • ATI Stream acceleration technology
    • OpenCL 1.0 compliant
    • DirectCompute 11
    • Accelerated video encoding, transcoding, and upscaling [4,5]
    • Native support for common video encoding instructions
  • ATI CrossFireX™ multi-GPU technology [6]
    • Dual GPU scaling
  • ATI Avivo HD Video & Display technology [7]
    • UVD 2 dedicated video playback accelerator
    • Advanced post-processing and scaling [8]
    • Dynamic contrast enhancement and color correction
    • Brighter whites processing (blue stretch)
    • Independent video gamma control
    • Dynamic video range control
    • Support for H.264, VC-1, and MPEG-2
    • Dual-stream 1080p playback support [9,10]
    • DXVA 1.0 & 2.0 support
    • Integrated dual-link DVI output with HDCP [11]
      • Max resolution: 2560x1600 [12]
    • Integrated DisplayPort output
      • Max resolution: 2560x1600 [12]
    • Integrated HDMI 1.3 output with Deep Color, xvYCC wide gamut support, and high bit-rate audio
      • Max resolution: 1920x1200 [12]
    • Integrated VGA output
      • Max resolution: 2048x1536 [12]
    • 3D stereoscopic display/glasses support [13]
    • Integrated HD audio controller
      • Output protected high bit rate 7.1 channel surround sound over HDMI with no additional cables required
      • Supports AC-3, AAC, Dolby TrueHD and DTS Master Audio formats
  • ATI PowerPlay™ power management technology [7]
    • Dynamic power management with low power idle state
    • Ultra-low power state support for multi-GPU configurations
  • Certified drivers for Windows 7, Windows Vista, and Windows XP
  1. Driver support scheduled for release in 2010
  2. Driver version 8.66 (Catalyst 9.10) or above is required to support ATI Eyefinity technology; to enable a third display, one panel with a DisplayPort connector is required
  3. ATI Eyefinity technology works with games that support non-standard aspect ratios, which is required for panning across three displays
  4. Requires application support for ATI Stream technology
  5. Digital rights management restrictions may apply
  6. ATI CrossFireX™ technology requires an ATI CrossFireX Ready motherboard and an ATI CrossFireX™ Bridge Interconnect for each additional graphics card, and may require a specialized power supply
  7. ATI PowerPlay™, ATI Avivo™ and ATI Stream are technology platforms that include a broad set of capabilities offered by certain ATI Radeon™ HD GPUs. Not all products have all features, and full enablement of some capabilities may require complementary products
  8. Upscaling subject to available monitor resolution
  9. Blu-ray or HD DVD drive and HD monitor required
  10. Requires Blu-ray movie disc supporting dual 1080p streams
  11. Playing HDCP content requires additional HDCP ready components, including but not limited to an HDCP ready monitor, Blu-ray or HD DVD disc drive, multimedia application and computer operating system.
  12. Some custom resolutions require user configuration
  13. Requires 3D Stereo drivers, glasses, and display

ATI_Radeon_HD5830_Video_Card_Big_Bunny_02.jpg

Although most of this article will focus on gaming performance, it's important to remember that the Radeon HD5xxx series has the most extensive and effective video processing technology available today. Many of the features referenced in the list above have real-world implications for seemingly simple tasks like web browsing.

ATI Radeon HD5830 Specifications

If we just want to talk about the HD 5830 GPU, and the architecture that supports it, then this section is the second most important part of this review. There has been endless conjecture throughout the industry and among enthusiasts about how ATI was going to tweak the basic ingredients in order to hit the sweet spot that exists in the fairly wide performance gap between the HD 5770 and HD 5850. By way of introduction, I'll just say that when a group of journalists recently saw this chart, they had more questions after they saw it than they did before. Fortunately, ATI was very open with us and gave us some insights into the development process, which I'll share with you when we take a Closer Look, in the next section.

ATI_Radeon_HD5830_Video_Card_Specs_Chart_02.jpg

Specs are very important for this product, because they tell a vital part of the story. However, I believe testing is more important, where we see how the HD 5830 actually performs, relative to other options that are available now and some older products that users may want to upgrade from. Although you might think that pricing belongs in the top two, the PC graphics market has a life of its own, and it's very difficult to accurately predict where and when the price will eventually settle. The video card market has always been very dynamic, and with the upcoming (we all hope...) introduction of Fermi-based products from NVIDIA, there are going to be some major wrinkles in the market pricing structure that will have to be ironed out pretty quickly. For now, take a look at where the various versions of the HD5000 series end up relative to one another on this price v. performance chart, and remember this is all based on launch pricing...

ATI_Radeon_HD5830_Video_Card_HD5xxx_Price_v._Performance.jpg

The HD 5830 is likely built with chips that didn't meet the top clock spec, and/or had a defect that killed one or more of the stream processor units. As anyone who has followed the AMD product line knows, modern processors are designed with the capability of disabling portions of the die. Sometimes, it's done because there are defects on the chip (usually a small particle of dust that ruins a transistor) and all the internal sections don't pass testing. Sometimes it's done with perfectly good chips because the manufacturer needs to meet production requirements for lower cost market segments. Given the well publicized issues with 40nm manufacturing yields at TSMC, I seriously doubt that ATI is crippling perfectly good chips, just to sell more lower-spec cards. With the release of this minor variant, ATI finally has all the major bases covered for cards based on the Cypress class GPUs.

Every AIB partner had to go it alone on product development for this card, so let's take a closer look at the XFX Radeon HD 5830, and see how their interpretation of the 5830 design specs has been implemented.

Closer Look: XFX Radeon HD 5830

The XFX Radeon HD 5830 is definitely a full member of the 58xx family; the GPU is not some sort of hybrid between Cypress and Juniper. The one thing that might lead you to think that is the GPU cooler that XFX has chosen for this version of the 5830. It's a dead ringer for the non-reference cooler used on their HD 5770 video card. That might seem like a big downgrade, but even though the Juniper chip is half the size of the 58xx series Cypress chip, the little bugger runs at 850 MHz.

XFX_HD_5830_Video_Card_Power_End_34_01.jpg

Thermal performance is more dependent on clock speed than on the number of transistors (remember the Pentium 4 space heaters...?), so don't dismiss the 5830 cooler as a lightweight performer. Of course, there are no free passes at Benchmark Reviews; we'll examine the card's actual cooling performance in our testing section, a bit later.

XFX_HD_5830_Video_Card_Bare_Front_View.jpg

The XFX Radeon HD 5830 is not exclusively based on the 5870 design, like the prototypes from ATI were. It's a brand new board that incorporates a new power supply design with the muscle of the 5870, but without the complexity and flexibility of software voltage control. The overall size of the card is similar to an HD 5850, and this was achieved by both simplifying the power supply section and using the latest technology for the VRMs. The four small, square chips next to the iron-core chokes are DrMOS (Driver-MOSFET) chips from Renesas, which integrate three discrete power devices into a single chip, while occupying only half the space. We'll provide more detail on the power supply design in the next section. For now, let's just say that the availability of smaller, more highly integrated power chips couldn't have been timelier. XFX recommends a minimum PC power supply rating of 500 watts for systems that use this card. That's obviously going to depend on what else you have in the box, but the peak power numbers (175W max) are also there for you to use, if you need to perform a more detailed analysis.

XFX_HD_5830_Video_Card_Power_Section_01.jpg
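For readers who want to run that more detailed analysis, a system power budget is simple arithmetic. In the sketch below, only the 175 W board figure comes from the spec; every other load number is an illustrative assumption that you would replace with estimates for your own components.

```python
# Rough system power budget for PSU sizing. The 175 W GPU figure is the
# published peak for the HD 5830; all other numbers are assumed
# placeholders, not measurements.

loads_watts = {
    "XFX HD 5830 (peak, per spec)": 175,
    "Overclocked CPU (assumed)": 140,
    "Motherboard + RAM (assumed)": 50,
    "Drives, fans, USB (assumed)": 35,
}

peak = sum(loads_watts.values())
headroom = 0.80  # keep the PSU loaded to no more than ~80% of its rating
recommended = peak / headroom

print(f"Estimated peak draw: {peak} W")
print(f"Suggested PSU rating: {recommended:.0f} W or higher")
```

With these assumed loads the math lands right on XFX's 500-watt recommendation, which suggests their figure already bakes in a sensible safety margin for a typical single-GPU system.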

The build quality of the XFX card is excellent. The heatsink-fan assembly is a well thought out design that is executed perfectly. Attention to detail is clearly evident, as the tiny fan cable tie-down clip demonstrates. A part like this could have been easily left off, or been deleted by a budget-conscious product manager, but then half the users would have cursed the unruly cable routing as they installed the card in their systems. Somebody cared enough about the user to put it there, and leave it there when challenged; that small act speaks volumes to me.

XFX_HD_5830_Video_Card_Fan_Clip.jpg

The back of the board is pretty standard for a card in this class. There are fewer components mounted on the back side than you might see on an HD 5870 for example, if you took the full-coverage metal back plate off that card. Part of that has to do with the simpler power supply design; there is only one PWM controller mounted here, instead of the three separate ones on the more expensive cards. The GPU cooler is mounted with four screws and the help of a flexible, spring-loaded back plate. The dark, grey-green color of the PCB itself is a common feature of all the XFX Radeon HD 5xxx cards, and looks suitably industrial; in fact it looks just like a type of conformal coating that is used on cards designed to operate in harsh environments. It's not, because those specialty coatings cost a whole lot of money, especially the types that are meant for high speed circuits.

XFX_HD_5830_Video_Card_Back_View_01.jpg

The assembly quality of the board itself is up to modern SMT manufacturing standards. The component placement and solder quality is quite good, as you can see here; this is the area on the back side of the board, directly below the GPU, and is one of the most crowded sections of the board. It is also one of the most critical sections for build quality, as any variations in stray capacitance here could impact the performance of the board, and certainly its overclocking ability.

XFX_HD_5830_Video_Card_Solder_Mask_Macro.jpg

ATI made the decision to reduce the number of Stream Processors a little more aggressively than it did when creating the HD 5850. This step "takes away" another 320 shaders, instead of the 160 removed for the HD 5850. That accomplishes two very important things: it keeps the HD 5830 far enough away from the HD 5850 to prevent cannibalizing sales of that very popular card, and it helps ATI "recover" more defective Cypress GPU chips, which is very helpful when your supplier is having extended manufacturing yield problems with their latest technology node. The downside to only having 1120 Stream Processors is that the GPU had to run a fairly high clock rate in order to hit the performance target that was the whole reason for this product's existence. ATI wanted the HD 5830 to hit the exact middle of the performance gap between the 5770 and 5850; too far one way or the other and you haven't really filled the gap. Based on their internal testing, ATI feels they hit the mark. Pay attention to the scaling of this chart... the key takeaway is how close the HD 5830 bar lands to the middle, between the high and low bars on the left and right.

ATI_Radeon_HD5830_Video_Card_fill_the_gap_01.jpg
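A quick peak-throughput calculation shows roughly where 1120 shaders at 800 MHz land in that gap. This is only a sanity check on the shader math; real frame rates also hinge on ROPs, memory bandwidth and drivers, as our testing will show.

```python
# Peak shader throughput for the three cards bracketing the gap,
# using shader counts and clocks from the specs quoted in this review.

def gflops(shaders, clock_mhz):
    return shaders * 2 * clock_mhz / 1000.0  # MAD = 2 FLOPs per SP per clock

hd5770 = gflops(800, 850)    # 1360 GFLOPS
hd5830 = gflops(1120, 800)   # 1792 GFLOPS
hd5850 = gflops(1440, 725)   # 2088 GFLOPS

midpoint = (hd5770 + hd5850) / 2
print(f"HD 5830: {hd5830:.0f} GFLOPS, gap midpoint: {midpoint:.0f} GFLOPS")
```

On raw shader throughput the HD 5830 actually lands slightly above the midpoint of the gap, so if it tests out closer to the 5770 in games, the blame falls on the halved ROPs rather than the shader count.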

So, the stock clock for the HD 5830 came to be set at 800 MHz, and we have the strange situation where the lower performing HD 5830 actually pulls more power than a stock HD 5850. We'll examine the impact of these design decisions in our testing section. For now, though, let's look at some of the features of the XFX HD 5830 in more detail.

Detailed Features: XFX Radeon HD 5830

For most high-end video cards, the cooling system is an integral part of the performance envelope for the card. "Make it run cooler, and you can make it run faster" has been the byword for achieving gaming-class performance from the latest and greatest GPUs. The XFX HD 5830 uses a tried-and-true GPU cooler design that makes the most of its components. While seemingly simple compared to the monster coolers that we enthusiasts sometimes covet, the radial fin design is a very efficient one. Sometimes, less is more. Really.

XFX_HD_5830_Video_Card_HSF_and_Board.jpg

Two large, 8mm diameter heatpipes are clamped to a solid copper contact block that sits on top of the GPU. The surface finish of the block is a bit rough, especially when you consider that the mating surface of the ATI Radeon GPU is like glass. As any cooling aficionado can tell you, the combination of two mirror finishes with the smallest possible amount of Thermal Interface Material can't be beat for effective heat transfer. We'll have to wait for our testing to see if the rough surface is a deal breaker, or just a missed opportunity to put some icing on the cake. Although some excess Thermal Interface Material was pushed out from the sides of the GPU, it doesn't look like XFX used an over-abundance of TIM during assembly. The compound was initially placed on the surface of the GPU and flowed out from there.

XFX_HD_5830_Video_Card_Contact_Block_01.jpg

While there is no official HD 5830 reference design from ATI for sale anywhere, there still seems to be a broad consensus on power supply design, at least among several vendors. The power supply section on the XFX HD 5830 is very similar to some other HD 58xx non-reference designs from a variety of suppliers. For example, the architecture and component selection is almost identical to the design of the Powercolor HD 5870 PCS+, a factory-overclocked card. The implementation on the XFX card is actually a bit better, as XFX uses chokes instead of resistors in several filter circuits, and also includes a few extra decoupling capacitors in places that are left unpopulated on the Powercolor board. Any power supply designer will tell you that an L-C circuit is more effective than an R-C circuit, at filtering out AC components riding on top of your DC supply voltage.

XFX_HD_5830_Video_Card_DrMOS_Section_02.jpg
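The L-C versus R-C point is easy to put numbers on. The sketch below compares the ripple attenuation of a first-order R-C filter against an ideal (undamped) second-order L-C filter; the component values and the 300 kHz switching frequency are illustrative assumptions, not measurements from the XFX board.

```python
import math

# Attenuation (in dB, negative = quieter) of R-C vs L-C low-pass
# filters at an assumed VRM switching-ripple frequency.

def rc_atten_db(r_ohm, c_farad, f_hz):
    # First-order: |H| = 1 / sqrt(1 + (2*pi*f*R*C)^2)
    w = 2 * math.pi * f_hz
    return -20 * math.log10(math.sqrt(1 + (w * r_ohm * c_farad) ** 2))

def lc_atten_db(l_henry, c_farad, f_hz):
    # Ideal undamped second-order: |H| = 1 / |1 - (f/f0)^2|
    f0 = 1 / (2 * math.pi * math.sqrt(l_henry * c_farad))
    return -20 * math.log10(abs(1 - (f_hz / f0) ** 2))

f_ripple = 300e3  # assumed switching frequency, 300 kHz
print(f"R-C (0.1 ohm, 10 uF): {rc_atten_db(0.1, 10e-6, f_ripple):.1f} dB")
print(f"L-C (1 uH, 10 uF):    {lc_atten_db(1e-6, 10e-6, f_ripple):.1f} dB")
```

Even with these modest values, the L-C filter knocks down the ripple by an extra couple of dozen dB, because a second-order filter rolls off at 40 dB per decade instead of 20, and the choke wastes no DC power the way a series resistor does.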

The XFX Radeon HD5830 uses Renesas R2J20602NP DrMOS (Driver-MOSFET) power semiconductor chips in the VRM section, although you would never know it unless you pulled the heatsinks off and looked. XFX doesn't mention it in any of their marketing materials, unlike other manufacturers that make a big deal about it. DrMOS is a term that describes the integration of three discrete devices into one chip. Putting the driver circuit and the two power MOSFETs on one chip not only saves space, it improves both thermal and high frequency performance, compared to a discrete implementation. High clock frequencies aren't just helpful in computing; they also improve the efficiency and performance of many power supply circuits, in this case a DC-DC converter.

XFX_HD_5830_Video_Card_drmos_package.png
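To see why a higher switching frequency helps the converter, consider the inductor ripple current in a buck stage: it shrinks in direct proportion to frequency, so faster switching permits smaller chokes and capacitors. The values below are generic assumptions for illustration, not the HD 5830's actual VRM parameters.

```python
# Peak-to-peak inductor ripple current in a buck converter:
# delta_I = (Vin - Vout) * D / (L * f_sw), with duty cycle D = Vout/Vin.
# All component values are assumed, purely for illustration.

def inductor_ripple_amps(v_in, v_out, l_henry, f_sw_hz):
    duty = v_out / v_in
    return (v_in - v_out) * duty / (l_henry * f_sw_hz)

v_in, v_out, l = 12.0, 1.1, 0.5e-6  # 12 V rail, ~1.1 V core, 0.5 uH choke
for f_sw in (300e3, 600e3, 1e6):
    ripple = inductor_ripple_amps(v_in, v_out, l, f_sw)
    print(f"{f_sw / 1e3:.0f} kHz: {ripple:.1f} A peak-to-peak ripple")
```

Doubling the switching frequency halves the ripple for the same choke, which is exactly the trade that fast, well-integrated DrMOS parts make practical on a space-constrained board like this one.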

The main power supply controller used on the XFX HD 5830 is a uP6213AJ, a 4-phase PWM controller that, unlike the more expensive Volterra chips used on the reference 5870 boards, does not support software voltage control. The more adventurous overclockers among us can still perform old-school hardware volt mods, if the urge strikes. So, there's always hope for those who desperately want to join the 1GHz GPU club.

XFX_HD_5830_Video_Card_uP6213_PWM_Controller.jpg

The memory choice for the XFX HD5830 video card is consistent with the high GPU clock rate that ATI blessed this card with. Even though the basic 5830 spec only requires 1000 MHz memory chips, XFX has supplied the same memory chips that go into a 5870, which are rated for 1250 MHz. On the memory side, at least, it looks like there is plenty of headroom available for overclocking.

XFX_HD_5830_Video_Card_Samsung_5GHz_GDDR5.jpg
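That headroom translates directly into memory bandwidth. GDDR5 transfers four bits per pin per clock at the rates quoted here, so the arithmetic is straightforward:

```python
# Memory bandwidth at the stock HD 5830 clock vs the rated ceiling of
# the Samsung chips. GDDR5 moves 4 bits per pin per clock relative to
# the memory clock quoted in the specs.

def bandwidth_gbs(clock_mhz, bus_bits):
    return clock_mhz * 1e6 * 4 * (bus_bits / 8) / 1e9  # GB/s

stock = bandwidth_gbs(1000, 256)   # HD 5830 spec clock on a 256-bit bus
rated = bandwidth_gbs(1250, 256)   # ceiling of the 5 Gbps Samsung parts

print(f"Stock: {stock:.0f} GB/s, chip rating: {rated:.0f} GB/s "
      f"(+{rated / stock - 1:.0%} headroom)")
```

In other words, the chips alone allow a move from 128 GB/s to 160 GB/s, a 25% bandwidth overclock before the memory itself becomes the limiting factor.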

We've spent a lot more time in this review on the board design, since this is our first sample of a production model HD 5830. When we previewed the prototype for the 5830, we promised to look at the design and construction of a retail card in much more detail. Now that we've done that, we also want to validate that the production sample performs as well as the prototype GPU, so let's move on to the Testing section of our review.

Video Card Testing Methodology

This is the beginning of a new era for testing at Benchmark Reviews. With the recent release of Windows 7 to the marketplace, and given the prolonged and extensive pre-release testing that occurred on a global scale, there are compelling reasons to switch all testing to this new, and highly anticipated, operating system. Overall performance levels of Windows 7 have been favorably compared to Windows XP, and there is solid support for the 64-bit version, something enthusiasts have been anxiously awaiting for several years.

Our site polls and statistics indicate that over 90% of our visitors use their PC for playing video games, and practically every one of you is using one of the screen resolutions mentioned below. Since all of the benchmarks we use for testing represent different game engine technologies and graphics rendering processes, this battery of tests will provide a diverse range of results for you to gauge performance on your own computer system. All of the benchmark applications are capable of utilizing DirectX 10, and that is how they were tested. Some of these benchmarks have been used widely for DirectX 9 testing in the XP environment, and it is critically important to differentiate between results obtained with different versions. Each game behaves differently in DX9 and DX10 formats. Crysis is an extreme example, with frame rates in DirectX 10 only about half of what was available in DirectX 9.

At the start of all tests, the previous display adapter driver is uninstalled and trace components are removed using Driver Cleaner Pro. We then restart the computer system to establish our display settings and define the monitor. Once the hardware is prepared, we begin our testing. According to the Steam Hardware Survey published at the time of the Windows 7 launch, the most popular gaming resolution is 1280x1024 (17-19" standard LCD monitors), closely followed by 1024x768 (15-17" standard LCD). However, because these resolutions are considered 'low' by most standards, our benchmark performance tests concentrate on the up-and-coming higher-demand resolutions: 1680x1050 (22-24" widescreen LCD) and 1920x1200 (24-28" widescreen LCD monitors).

Radeon-HD5830_GPU-Z.png

Each benchmark test program begins after a system restart, and the very first result for every test will be ignored since it often only caches the test. This process proved extremely important in the World in Conflict benchmarks, as the first run served to cache maps allowing subsequent tests to perform much better than the first. Each test is completed five times, the high and low results are discarded, and the average of the three remaining results is displayed in our article.
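The averaging scheme just described reduces to a simple trimmed mean, and is easy to sanity-check in code. A minimal Python sketch (the FPS figures are invented example data, not results from this review):

```python
# Our scoring rule: run each test five times, discard the high and low
# results, and report the average of the middle three.

def trimmed_average(runs):
    if len(runs) != 5:
        raise ValueError("methodology calls for exactly five runs")
    middle = sorted(runs)[1:-1]  # drop the highest and lowest results
    return sum(middle) / len(middle)

fps_runs = [61.2, 58.9, 59.4, 73.0, 59.1]  # example numbers only
print(f"Reported result: {trimmed_average(fps_runs):.1f} FPS")
```

Note how the outlier 73.0 run, the kind of artifact a first caching pass can produce, is discarded rather than inflating the reported average.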

Test System

  • Motherboard: ASUS M4A79T Deluxe (2205 BIOS)
  • System Memory: 2x 2GB OCZ Reaper HPC DDR3 1600MHz (7-7-7-24)
  • Processor: AMD Phenom II 720 Black Edition (Overclock to 3.6 GHz)
  • CPU Cooler: CoolerMaster Hyper Z600
  • Video: XFX Radeon HD5830, HD-583X-ZNFV
  • Drive 1: OCZ Summit SSD, 60GB
  • Optical Drive: Sony NEC Optiarc AD-7190A-OB 20X IDE DVD Burner
  • Enclosure: CM STORM Sniper Gaming Case
  • PSU: Corsair CMPSU-750TX ATX12V V2.2 750Watt
  • Monitor: SOYO 24" Widescreen LCD Monitor (DYLM24E6) 1920X1200
  • Operating System: Windows 7 Ultimate Version 6.1 (Build 7600)

Benchmark Applications

  • 3DMark Vantage v1.0.1 Benchmark (8x Anti-Aliasing & 16x Anisotropic Filtering)
  • Crysis v1.21 Benchmark (Very High Settings, 0x and 4x Anti-Aliasing)
  • Devil May Cry 4 Benchmark Demo (Ultra Quality, 8x MSAA)
  • Far Cry 2 v1.02 Benchmark (Very High Performance, Ultra-High Quality, 8x Anti-Aliasing)
  • Resident Evil 5 Benchmark (8x Anti-Aliasing, Motion Blur ON, Quality Levels-High)
  • Unigine Heaven Benchmark (DX10, High Shaders, No Tessellation, 16x AF, 4x & 8x AA)
  • S.T.A.L.K.E.R. Call of Pripyat Benchmark (Ultra-Quality, Enhanced DX10 light, 4x MSAA, SSAO Off & Default-High)

Video Card Test Products

Product Series Stream Processors Core Clock (MHz) Shader Clock (MHz) Memory Clock (MHz) Memory Amount Memory Interface
MSI Radeon HD4830 (R4830 T2D512) 640 585 N/A 900 512MB GDDR3 256-bit
XFX Radeon HD5750 (HD-575X-ZNFC) 720 700 N/A 1150 1.0GB GDDR5 128-bit
ASUS Radeon HD4850 (EAH4850 TOP) 800 680 N/A 1050 512MB GDDR3 256-bit
ATI Radeon HD5770 (Engineering Sample) 800 850 N/A 1200 1.0GB GDDR5 128-bit
XFX Radeon HD5830 (HD-583X-ZNFV) 1120 800 N/A 1000 1.0GB GDDR5 256-bit
ASUS GeForce GTX 260 (ENGTX260 MATRIX) 216 576 1242 999 896MB GDDR3 448-bit
XFX Radeon HD5850 (21162-00-50R) 1440 725 N/A 1000 1.0GB GDDR5 256-bit
MSI GeForce GTX 275 (N275GTX Twin Frozr OC) 240 666 1476 1161 896MB GDDR3 448-bit
ASUS GeForce GTX 285 (GTX285 MATRIX) 240 662 1476 1242 1.0GB GDDR3 512-bit
XFX Radeon HD5870 (HD-587X-ZNFC) 1600 850 N/A 1200 1.0GB GDDR5 256-bit
  • MSI Radeon HD4830 (R4830 T2D512 - Catalyst 8.703.0.0)
  • ASUS Radeon HD4850 (EAH4850 TOP - Catalyst 8.703.0.0)
  • XFX Radeon HD5750 (HD-575X-ZN - Catalyst 8.703.0.0)
  • ATI Radeon HD5770 (Engineering Sample - Catalyst 8.703.0.0)
  • XFX Radeon HD5830 (HD-583X-ZNFV - Catalyst 8.703.0.0)
  • XFX Radeon HD5850 (21162-00-50R - ATI Catalyst 8.703.0.0)
  • ASUS GeForce GTX 260 (ENGTX260 MATRIX - Forceware v195.62)
  • MSI GeForce GTX 275 (N275GTX Twin Frozr OC - Forceware v195.62)
  • ASUS GeForce GTX 285 (GTX285 MATRIX - Forceware v195.62)
  • XFX Radeon HD5870 (HD-587X-ZNFC - Catalyst 8.703.0.0)

3DMark Vantage Benchmark Results

3DMark Vantage is a computer benchmark by Futuremark (formerly named MadOnion) used to determine the DirectX 10 gaming performance of graphics cards. A 3DMark score is an overall measure of your system's 3D gaming capabilities, based on comprehensive real-time 3D graphics and processor tests. By comparing your score with those submitted by millions of other gamers, you can see how your gaming rig performs, making it easier to choose the most effective upgrades or find other ways to optimize your system.

There are two graphics tests in 3DMark Vantage: Jane Nash (Graphics Test 1) and New Calico (Graphics Test 2). The Jane Nash test scene represents a large indoor game scene with complex character rigs, physical GPU simulations, multiple dynamic lights, and complex surface lighting models. It uses several hierarchical rendering steps, including for water reflection and refraction, and physics simulation collision map rendering. The New Calico test scene represents a vast space scene with lots of moving but rigid objects and special content like a huge planet and a dense asteroid belt.

At Benchmark Reviews, we believe that synthetic benchmark tools are just as valuable as video games, but only so long as you're comparing apples to apples. Since the same test is applied in the same controlled method with each test run, 3DMark is a reliable tool for comparing graphics cards against one another.

1680x1050 is rapidly becoming the new 1280x1024. More and more widescreen monitors are being sold with new systems or as upgrades to existing ones. Even in tough economic times, the tide cannot be turned back; screen resolution and size will continue to creep up. Using this resolution as a starting point, the maximum settings applied to 3DMark Vantage include 8x Anti-Aliasing, 16x Anisotropic Filtering, all quality levels at Extreme, and Post Processing Scale at 1:2.

XFX_HD_5830_3DMark_Vantage_Jane_Nash_1680.jpg

Well, our first test looks promising! At 1680x1050 the Radeon HD 5830 slots in nicely between the HD 5770 and HD 5850. If anything, it leans towards the 5850 in this synthetic test. Also, notice that it just edges out a mildly overclocked GTX285. We'll have to keep an eye on that comparison as we move through our testing regimen. Wouldn't it be funny if that was the target performance level all along? Also, it's important to note that the HD 5830 results from this retail sample are absolutely consistent with the prototype card I tested earlier. Of course, they should be, but I feel better having tested it.

XFX_HD_5830_3DMark_Vantage_Jane_Nash_1920.jpg

At 1920x1200 resolution, things look much the same as they did at the lower resolution. The low-end cards, with their limited 512MB of GDDR3, struggle to keep up, but everything else is the same. Let's take a look at test #2, which has a lot more surfaces to render, with all those asteroids flying around New Calico.

XFX_HD_5830_3DMark_Vantage_New_Calico_1680.jpg

In the New Calico test, the HD 5830 sits right in the center of the sweet spot between its siblings, the HD 5770 and HD 5850. So far, any concerns about the 50% reduction in ROPs seem unwarranted. The 5830 is keeping up with its big brother just fine.

XFX_HD_5830_3DMark_Vantage_New_Calico_1920.jpg

At the higher screen resolution of 1920x1200, we again see the 512MB cards falling behind, but the HD 5830 retains its spot halfway between the 5770 and 5850. It also barely tops the GTX285 again, so any complaints about the pricing of the HD 5830 need to take the competition into account. We need some actual gaming performance to verify these results, so in the next section let's look at how these cards stack up in the standard-bearer for gaming benchmarks, Crysis.

Product Series | Stream Processors | Core Clock (MHz) | Shader Clock (MHz) | Memory Clock (MHz) | Memory Amount | Memory Interface
MSI Radeon HD4830 (R4830 T2D512) | 640 | 585 | N/A | 900 | 512MB GDDR3 | 256-bit
XFX Radeon HD5750 (HD-575X-ZNFC) | 720 | 700 | N/A | 1150 | 1.0GB GDDR5 | 128-bit
ASUS Radeon HD4850 (EAH4850 TOP) | 800 | 680 | N/A | 1050 | 512MB GDDR3 | 256-bit
ATI Radeon HD5770 (Engineering Sample) | 800 | 850 | N/A | 1200 | 1.0GB GDDR5 | 128-bit
XFX Radeon HD5830 (HD-583X-ZNFV) | 1120 | 800 | N/A | 1000 | 1.0GB GDDR5 | 256-bit
ASUS GeForce GTX 260 (ENGTX260 MATRIX) | 216 | 576 | 1242 | 999 | 896MB GDDR3 | 448-bit
XFX Radeon HD5850 (21162-00-50R) | 1440 | 725 | N/A | 1000 | 1.0GB GDDR5 | 256-bit
MSI GeForce GTX 275 (N275GTX Twin Frozr OC) | 240 | 666 | 1476 | 1161 | 896MB GDDR3 | 448-bit
ASUS GeForce GTX 285 (GTX285 MATRIX) | 240 | 662 | 1476 | 1242 | 1.0GB GDDR3 | 512-bit
XFX Radeon HD5870 (HD-587X-ZNFC) | 1600 | 850 | N/A | 1200 | 1.0GB GDDR5 | 256-bit

Crysis Benchmark Results

Crysis uses a new graphics engine: the CryENGINE2, the successor to Far Cry's CryENGINE. CryENGINE2 is among the first engines to use the Direct3D 10 (DirectX 10) framework, but it can also run in DirectX 9 mode on Windows XP, Vista, and the new Windows 7. As we'll see, there are significant frame rate reductions when running Crysis in DX10. It's not an operating-system issue; DX9 works fine in Windows 7, but DX10 cuts the frame rates roughly in half.

Roy Taylor, Vice President of Content Relations at NVIDIA, has spoken on the subject of the engine's complexity, stating that Crysis has over a million lines of code, 1GB of texture data, and 85,000 shaders. To get the most out of modern multicore processor architectures, CPU intensive subsystems of CryENGINE 2 such as physics, networking and sound, have been re-written to support multi-threading.

Crysis offers an in-game benchmark tool, similar to the one in World in Conflict. This short test places a heavy load on the graphics card, since there are so many landscape features to render. For benchmarking purposes, Crysis can mean trouble, as it places high demands on both GPU and CPU resources. Benchmark Reviews uses the Crysis Benchmark Tool by Mad Boris to run frame-rate tests in batches, which allows the results of many runs to be averaged.
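The batch-averaging idea is simple enough to sketch. The function and FPS values below are illustrative stand-ins, not actual output from the Mad Boris tool:

```python
# Hypothetical sketch of batched benchmark averaging, in the spirit of
# the Crysis Benchmark Tool. The FPS numbers are invented for illustration.

def average_fps(runs):
    """Return the mean FPS across a batch of benchmark runs."""
    return sum(runs) / len(runs)

# Four loops of the in-game GPU timedemo at identical settings
batch = [30.8, 31.2, 31.1, 30.9]
print(f"Average: {average_fps(batch):.1f} FPS")  # -> Average: 31.0 FPS
```

Running the timedemo several times and averaging smooths out run-to-run variation from background tasks and driver warm-up, which is exactly why we test in batches.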

Low-resolution testing allows the graphics processor to plateau at its maximum output performance, and shifts demand onto the other system components. At lower resolutions, Crysis will reflect the GPU's top-end speed in the composite score, indicating full-throttle performance with little load. This makes for a less GPU-dependent test environment, but it is sometimes helpful in creating a baseline for measuring maximum output performance. At the 1280x1024 resolution used by 17" and 19" monitors, the CPU and memory have too much influence on the results to be useful in a video card test. At the widescreen resolutions of 1680x1050 and 1920x1200, the performance differences between the video cards under test are mostly down to the cards themselves.

XFX_HD_5830_Crysis_NoAA_1680.jpg

With a medium screen resolution and no AA dialed in, the HD 5830 continues to have a field day. Please remember that all the test results in this article use the maximum allowable image quality settings. It's also worth remembering how all the Crysis performance numbers took a major hit when Benchmark Reviews switched over to the DirectX 10 API for all our testing. Considering all that, 31 FPS is a great result, especially as it beats the GTX285 again. One frame per second isn't much of a difference in performance, but there is that $100 price difference between the two to consider.

XFX_HD_5830_Crysis_NoAA_1920.jpg

At 1920x1200 resolution, everything looks the same; even the 512MB cards are still hanging in there. Those old HD48xx series cards were really good performers in Crysis, but they are giving up 8-12 FPS to ATI's new budget king of the x8xx tier.

XFX_HD_5830_Crysis_4xAA_1680.jpg

Now let's turn up the heat a bit and add some anti-aliasing. With 4x AA cranked in, the HD 5830 backs off ever so slightly, making up only 42% of the performance difference between the HD 5770 and HD 5850. It's not 50% or above, but it is still a respectable result, and of course it squeaks by the GTX285 again.
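The "percentage of the gap" figures used throughout this review amount to simple linear interpolation between the two flanking cards. A quick sketch, with hypothetical FPS numbers rather than the measured results:

```python
# Sketch of the "percentage of the performance gap" metric: how far the
# HD 5830's frame rate sits between the HD 5770 and HD 5850. The FPS
# values below are hypothetical, chosen only to illustrate the math.

def gap_filled(low, mid, high):
    """Fraction of the low-to-high performance gap covered by mid."""
    return (mid - low) / (high - low)

fps_5770, fps_5830, fps_5850 = 40.0, 45.0, 52.0   # invented values
print(f"{gap_filled(fps_5770, fps_5830, fps_5850):.0%}")  # -> 42%
```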

XFX_HD_5830_Crysis_4xAA_1920.jpg

This is one of our toughest tests: 1920x1200, maximum quality levels, and 4x AA. Only one card gets above 30 FPS in this test, and it's the fastest single-GPU card on the planet, the Radeon HD 5870. In the middle ranges, the HD 5830 holds on to its spot, roughly halfway between the HD 5770 and HD 5850. What I like about this test is that it shows how far ATI has come in one generation of video cards. The HD4830, which was the equivalent card in the HD48xx lineup, only manages about 9 FPS, while the current-generation card puts up 21. I see real progress here, and I just don't get it when people want to compare every card in the HD5xxx series to the HD4890.

In our next section, Benchmark Reviews tests with Devil May Cry 4 Benchmark. Read on to see how a blended high-demand GPU test with low video frame buffer demand will impact our test products.

Devil May Cry 4 Benchmark

Devil May Cry 4 was released for the PC platform in 2008 as the fourth installment in the Devil May Cry video game series. DMC4's console versions are direct ports of the PC build, operating at the native 720p game resolution with no other platform restrictions. Devil May Cry 4 uses the refined MT Framework game engine, which has been used for many popular Capcom titles over the past several years.

MT Framework is an exclusive seventh-generation game engine built for games developed on the PlayStation 3, Xbox 360, and PC. MT stands for "Multi-Thread", "Meta Tools" and "Multi-Target". Capcom originally intended to license an outside engine, but none matched their specific requirements for performance and flexibility. Games using the MT Framework are developed first on the PC and then ported to the two console platforms.

On the PC version, a special bonus called Turbo Mode is featured, giving the game a slightly faster speed, and a new difficulty called Legendary Dark Knight Mode is implemented. The PC version also has both DirectX 9 and DirectX 10 modes for the Windows XP, Vista, and Windows 7 operating systems.

It's always nice to be able to compare the results we publish here at Benchmark Reviews with the results you obtain on your own computer system. Usually this isn't possible, since settings and configurations make it nearly impossible to match one system to the next; plus, you have to own the game or benchmark tool we used.

Devil May Cry 4 fixes this by offering a free benchmark tool available for download. Because the DMC4 MT Framework game engine is a rather light load for today's cutting-edge video cards, Benchmark Reviews tests at 1920x1200 resolution with 8x AA (the highest AA setting available to Radeon HD video cards) and 16x AF.

Devil May Cry 4 is not as demanding a benchmark as it used to be. Only scenes #2 and #4 are worth looking at from the standpoint of separating the fastest video cards from the slower ones. Still, it represents a typical environment for many games that our readers still play on a regular basis, so it's good to see what works with it and what doesn't. Any of the tested cards will do a credible job in this application, and performance scales in a fairly linear fashion; you get what you pay for when running this game, at least in the benchmarks. This is one title where you can generally use the maximum available anti-aliasing settings, so NVIDIA users should feel free to crank it up to 16x. The DX10 "penalty" is of no consequence here.

XFX_HD_5830_DMC4_DX10_Scene2.jpg

This looks like one benchmark where the reduction in number of ROPs makes a difference. The HD 5830 only beats the HD 5770 by 9%, and the GT200 cards get to strut their stuff.

XFX_HD_5830_DMC4_DX10_Scene4.jpg

In Scene #4, the HD 5850 doesn't turn in quite the stunning performance it did in Scene #2, so the gap between it and the HD 5770 isn't as large. Regardless, the HD 5830 sticks closer to the HD 5770 than to its big brother in this test, filling only 32% of the performance gap this time.

Our next benchmark in the series is for a very popular FPS game that rivals Crysis for world-class graphics in a faraway land.

Far Cry 2 Benchmark Results

Ubisoft developed Far Cry 2 as a sequel to the original, but with a very different approach to gameplay and story line. Far Cry 2 features a vast world built on Ubisoft's new game engine, called Dunia, meaning "world", "earth" or "living" in Farsi. Far Cry 2 takes place in a fictional Central African landscape, set to a modern-day timeline.

The Dunia engine was built specifically for Far Cry 2 by the Ubisoft Montreal development team. It delivers realistic semi-destructible environments, special effects such as dynamic fire propagation and storms, real-time night-and-day cycles of sunlight and moonlight, a dynamic music system, and non-scripted enemy A.I. actions.

The Dunia game engine takes advantage of multi-core processors as well as multiple processors, and supports DirectX 9 as well as DirectX 10. Only 2 or 3 percent of the original CryEngine code is re-used, according to Michiel Verheijdt, Senior Product Manager for Ubisoft Netherlands. Additionally, the engine is less hardware-demanding than CryEngine 2, the engine used in Crysis. However, it should be noted that Crysis delivers greater character and object texture detail, as well as more destructible elements within the environment; for example, trees breaking into many smaller pieces and buildings breaking down to their component panels. Far Cry 2 also supports the amBX technology from Philips, which, with the proper hardware, adds effects like vibrations, ambient colored lights, and fans that generate wind effects.

There is a benchmark tool in the PC version of Far Cry 2, which offers an excellent array of settings for performance testing. Benchmark Reviews used the maximum settings allowed for our tests, with the resolution set to 1920x1200. The performance settings were all set to 'Very High', Render Quality was set to 'Ultra High' overall quality level, 8x anti-aliasing was applied, and HDR and Bloom were enabled. Of course DX10 was used exclusively for this series of tests.

XFX_HD_5830_Far_Cry_2_DX10_1680.jpg

It's too early to call it a trend, but after just seeing the HD 5830 struggle a bit with the oldest benchmark in our test suite, I see it pretty much falling flat here on one of our newest gaming benchmarks. Once again, the HD 5850 really stands out here, and I think you have to point the finger at the fact that the 5850 has twice the number of ROPs as the HD 5830.

Although the Dunia engine in Far Cry 2 is slightly less demanding than the CryEngine 2 engine in Crysis, the load it places on the GPU appears to be very close. In Crysis we didn't dare test AA above 4x, whereas we use 8x AA and 'Ultra High' settings in Far Cry 2. Using the short 'Ranch Small' time demo (which yields the lowest FPS of the three tests available), many of the midrange products we've tested are capable of producing playable frame rates with the settings all turned up. We also see a different effect when switching our testing to DirectX 10: Far Cry 2 seems to have been optimized, or at least written, with a clear understanding of DX10 requirements.

XFX_HD_5830_Far_Cry_2_DX10_1920.jpg

The Radeon HD 5830 hangs disappointingly close to its little brother again in the higher-resolution testing. Although the Dunia engine seems to be optimized for NVIDIA chips, the mix of GPU components ATI incorporated in the 5850 and 5870 GPUs seems optimal for this game. That's obviously not the case for the HD 5830.

Our next benchmark of the series puts our collection of video cards against some fresh graphics in the newly released Resident Evil 5 benchmark.

Resident Evil 5 Benchmark Results

PC gamers get the ultimate Resident Evil package in this new PC version with exclusive features including NVIDIA's new GeForce 3D Vision technology (wireless 3D Vision glasses sold separately), new costumes and a new mercenary mode with more enemies on screen. Delivering an infinite level of detail, realism and control, Resident Evil 5 is certain to bring new fans to the series. Incredible changes to game play and the world of Resident Evil make it a must-have game for gamers across the globe.

Years after surviving the events in Raccoon City, Chris Redfield has been fighting the scourge of bio-organic weapons all over the world. Now a member of the Bioterrorism Security Assessment Alliance (BSAA), Chris is sent to Africa to investigate a biological agent that is transforming the populace into aggressive and disturbing creatures. New cooperatively-focused game play revolutionizes the way that Resident Evil is played. Chris and Sheva must work together to survive new challenges and fight dangerous hordes of enemies.

From a gaming performance perspective, Resident Evil 5 delivers its "Next Generation of Fear" with ground-breaking graphics that utilize an advanced version of Capcom's proprietary game engine, MT Framework, which powered the hit titles Devil May Cry 4, Lost Planet and Dead Rising. The game uses a wider variety of lighting to enhance the challenge, asking players to fear light as much as shadow: lighting effects provide a new level of suspense as players attempt to survive in both harsh sunlight and extreme darkness. As usual, we maxed out the graphics settings on the benchmark version of this popular game to put the hardware through its paces. Much like Devil May Cry 4, it's relatively easy to get good frame rates in this game, so take the opportunity to turn up all the knobs and maximize the visual experience.

XFX_HD_5830_Resident_Evil_5_DX10_Scene3.jpg

The Resident Evil 5 benchmark tool provides a graph of continuous frame rates and an average for each of four distinct scenes, which take place in different areas of the compound. In addition, it calculates an overall average across the four scenes. We report the averages for scenes #3 and #4 here, as they are the most challenging. Looking at area #3, two things are obvious: the NVIDIA cards do exceptionally well in this game, and the HD 5830 doesn't come anywhere near the performance of the HD 5850. There is quite a bit of variation in the gameplay between the four areas, so let's see what happens in the next most challenging scene, area #4.

XFX_HD_5830_Resident_Evil_5_DX10_Scene4.jpg

Once again, in this test the HD 5850 really stands out in the ATI lineup, and the HD 5830 hangs back with the likes of the 57xx series. Let's keep looking, especially at some new titles that were developed for DX11, and see if this trend continues.

In our next section, we look at the newest DX11 benchmark, straight from Russia and the studios of Unigine. Their latest benchmark is called "Heaven", and it has some very interesting and non-typical graphics. So, let's take a peek at what Heaven v1.0 looks like.

Unigine - Heaven Benchmark Results

Unigine Corp. released "Heaven", the first DirectX 11 benchmark, based on its proprietary Unigine engine. The company had already made a name among overclockers and gaming enthusiasts for uncovering the realm of true GPU capabilities with its previously released "Sanctuary" and "Tropics" demos. Their benchmarking capabilities are coupled with the striking visual integrity of refined graphic art.

The "Heaven" benchmark excels at providing the following key features:

  • Native support of OpenGL, DirectX 9, DirectX 10 and DirectX 11
  • Comprehensive use of tessellation technology
  • Advanced SSAO (screen-space ambient occlusion)
  • Volumetric cumulonimbus clouds generated by a physically accurate algorithm
  • Dynamic simulation of changing environment with high physical fidelity
  • Interactive experience with fly/walk-through modes
  • ATI EyeFinity support

The distinguishing feature of the benchmark is hardware tessellation, a scalable technology for the automatic subdivision of polygons into smaller and finer pieces, allowing developers to give their games a more detailed look almost free of charge in terms of performance. Thanks to this procedure, rendered images come closer than ever to lifelike visual perception.
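As a toy illustration of the principle, here is the simplest possible tessellation scheme, midpoint subdivision of triangles, in Python. Real DX11 hardware tessellation is far more sophisticated (fractional factors, displacement mapping, done in dedicated pipeline stages), but the idea of refining coarse geometry into finer pieces is the same:

```python
# Toy tessellation: split each triangle into four smaller ones by
# cutting every edge at its midpoint. This is concept only, not how
# the DX11 tessellator actually partitions patches.

def midpoint(a, b):
    return tuple((p + q) / 2 for p, q in zip(a, b))

def subdivide(tri):
    """Split a triangle (3 vertex tuples) into 4 smaller triangles."""
    a, b, c = tri
    ab, bc, ca = midpoint(a, b), midpoint(b, c), midpoint(c, a)
    return [(a, ab, ca), (ab, b, bc), (ca, bc, c), (ab, bc, ca)]

tris = [((0.0, 0.0), (1.0, 0.0), (0.0, 1.0))]
for _ in range(3):                      # three levels of refinement
    tris = [t for tri in tris for t in subdivide(tri)]
print(len(tris))                        # -> 64 (4**3 triangles)
```

Each level multiplies the triangle count by four, which is why tessellation can add so much geometric detail while the artist-authored mesh stays small.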

Unigine Corp. is an international company focused on top-notch real-time 3D solutions, with its development studio located in Tomsk, Russia. The main activity of Unigine Corp. is the development of Unigine, a cross-platform engine for virtual 3D worlds. Since the project's start in 2004, it has attracted the attention of many companies and independent developers, because Unigine has always been on the cutting edge of real-time 3D visualization and physics simulation technology.

XFX_HD_5830_Unigine_Heaven_DX10_4xAA.jpg

Getting back to a more synthetic type of benchmark, we see the HD 5830 doing a little better than it did with Far Cry 2 and Resident Evil 5. The HD 5850 still puts on a star performance and stands out from the crowd, but at least the HD 5830 distinguishes itself from the HD 5770. This test was run with 4x anti-aliasing; let's see how the cards stack up when we increase this to the maximum level of 8x.

XFX_HD_5830_Unigine_Heaven_DX10_8xAA.jpg

The impact of increasing the anti-aliasing is pretty clear. Two things happened: the older HD48xx cards took a nosedive, and so did the NVIDIA GT200 cards. While the HD 5830 still can't catch the HD 5850, it manages to get by the GTX285. One thing I noticed while watching the benchmark wind its way through the streets of Heaven 1.0: when smoke from the chimneys was in the scene, the frame rate dropped radically. It really hurt the older cards; I'm not sure if it was just their memory deficit or something else causing this effect.

Let's take a look at one more benchmark, a decidedly less cheerful scenario in a post-apocalyptic "Zone", which is traversed by mercenary guides called Stalkers.

S.T.A.L.K.E.R.: Call of Pripyat Benchmark Results

The events of S.T.A.L.K.E.R.: Call of Pripyat unfold shortly after the end of S.T.A.L.K.E.R.: Shadow of Chernobyl. Having discovered the open path to the Zone center, the government decides to mount a large-scale military operation, code-named "Fairway", aimed at taking the CNPP under control. According to the operation's plan, the first military group is to conduct aerial scouting of the territory to map out the detailed layout of anomalous fields. Then, using those maps, the main military forces are to be dispatched. Despite thorough preparations, the operation fails, and most of the avant-garde helicopters crash. To collect information on the reasons behind the failure, Ukraine's Security Service sends an agent into the Zone center.

S.T.A.L.K.E.R.: CoP is developed on the X-Ray game engine v1.6 and implements several ambient occlusion (AO) techniques, including one that AMD developed. AMD's AO technique is optimized to run efficiently on Direct3D 11 hardware, and it has been chosen by a number of games (e.g. BattleForge, HAWX, and the new Aliens vs. Predator) for the distinct effect it adds to the final rendered images. This technique is called HDAO, which stands for "High Definition Ambient Occlusion", because it picks up occlusions from fine details in normal maps.
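A heavily simplified sketch of the screen-space AO idea: darken a pixel in proportion to how many nearby depth samples sit in front of it. Production HDAO/SSAO implementations use surface normals, carefully shaped sampling kernels, and run on the GPU; this CPU toy shows the concept only:

```python
# Concept-only SSAO sketch: estimate occlusion of a pixel from how many
# of its neighbours in a depth buffer are closer to the camera. Not a
# real HDAO implementation; all names and values here are illustrative.

def ssao_factor(depth, x, y, radius=1, bias=0.05):
    """Return an occlusion factor in [0, 1] for pixel (x, y)."""
    occluded = total = 0
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            if dx == 0 and dy == 0:
                continue
            nx, ny = x + dx, y + dy
            if 0 <= ny < len(depth) and 0 <= nx < len(depth[0]):
                total += 1
                if depth[ny][nx] < depth[y][x] - bias:  # neighbour is closer
                    occluded += 1
    return occluded / total if total else 0.0

# Tiny depth buffer: the centre pixel sits in a crease behind its neighbours
depth = [[0.5, 0.5, 0.5],
         [0.5, 0.9, 0.5],
         [0.5, 0.5, 0.5]]
print(ssao_factor(depth, 1, 1))   # -> 1.0 (fully occluded crease)
```

Creases and fine surface detail produce high occlusion factors and get darkened, which is why AO adds so much depth to a rendered scene, and why it costs so many depth-buffer samples per pixel.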

XFX_HD_5830_STALKER_DX10_SSAO_Off.jpg

Within the limits imposed by the NVIDIA cards, which don't support DirectX 11, we turn the settings in S.T.A.L.K.E.R.: Call of Pripyat all the way up. The one thing we examine individually is SSAO, one of the technologies that made its appearance in DirectX 10. In the first test, with SSAO turned off, we see a familiar pattern in the comparison between the HD 5770, HD 5830, and HD 5850: the HD 5830 has very little performance advantage over the HD 5770, while the HD 5850 rises way above expectations. No wonder people love that card, and this testing was all done at stock clock rates, which are pretty conservative on the 5850 as it leaves the factory.

XFX_HD_5830_STALKER_DX10_SSAO_Default.jpg

Once we turn SSAO on and set it to High, the HD 5830 gains some of its performance advantage back, over the HD 5770. The other thing that happens is that the NVIDIA cards lose out big time. Despite the company's insistence that DX11 is largely unnecessary, their performance on one of the key enabling technologies of DX10 is less than compelling. This is one rendering technique that just pins the NVIDIA GPUs to the ground. How often do you see an HD4850 coming within 10% of a GTX285 and matching a GTX275?

In our next section, we investigate the thermal performance of the Radeon HD5830, and see if the cut-down Cypress GPU runs cool enough with the simple radial heatpipe cooler that XFX brings to bear on it.

XFX Radeon HD5830 Temperature

It's hard to know exactly when the first video card got overclocked, and by whom. What we do know is that it's hard to imagine a computer enthusiast or gamer today who doesn't overclock their hardware. Of course, not every video card has the headroom. Some products run so hot that they can't suffer temperatures any higher than those they generate straight from the factory. This is why we measure the operating temperature of the video card products we test.

To begin testing, I use GPU-Z to measure the idle temperature as reported by the GPU. Next, I use FurMark 1.7.0 to generate maximum thermal load and record GPU temperatures in high-power 3D mode. The ambient room temperature remained stable at 24C throughout testing. The XFX Radeon HD 5830 video card recorded 35C in idle 2D mode, and increased to 89C after 20 minutes of stability testing in full 3D mode at 1920x1200 resolution with the maximum MSAA setting of 8x. With the fan set to Automatic, the speed rose from 21% at idle to 50% under full load. I then set the fan speed manually to 70%, using Catalyst Control Center, and ran the load test again; the GPU reached a maximum temperature of 81C.

Load | Fan Speed | GPU Temperature
Idle | 21% (Auto) | 35C
FurMark | 50% (Auto) | 89C
FurMark | 70% (Manual) | 81C

89C is not a particularly good result for temperature stress testing, but with stock fan settings and fan speeds controlled by the card, it's the temperature XFX obviously feels comfortable with. I rarely do any benchmarking with fans set on Automatic, preferring to give the GPU or CPU the best shot at surviving the day intact. With an integrated temperature controller in play though, I want to see how the manufacturer programmed the system. 81C is obviously a better result, and running the fan on Manual at 70% is not unusual or unwarranted when running such a punishing benchmark as FurMark. With only a single axial fan running, the noise at 70% speed was not objectionable, and I wouldn't have any problem leaving it there permanently. Unfortunately, due to the simple 2-wire DC motor controller, I can't tell you the actual RPMs produced, only the percentages that were reported in GPU-Z.
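For illustration, a simple automatic fan controller could map GPU temperature to fan duty with linear interpolation between two breakpoints. The breakpoints below are chosen only to mimic the observed 21% at 35C and 50% at 89C behaviour; they are not taken from XFX's actual firmware:

```python
# Hypothetical fan curve: linear interpolation between an idle point and
# a full-load point, clamped at both ends. Breakpoints mimic the
# observed behaviour of this card and are NOT from XFX firmware.

def fan_duty(temp_c, t_low=35.0, d_low=21.0, t_high=89.0, d_high=50.0):
    """Interpolate fan duty (%) from GPU temperature (C), clamped."""
    if temp_c <= t_low:
        return d_low
    if temp_c >= t_high:
        return d_high
    frac = (temp_c - t_low) / (t_high - t_low)
    return d_low + frac * (d_high - d_low)

print(fan_duty(62.0))   # -> 35.5 (midpoint of the curve)
```

A curve like this explains why the card settles at 89C under FurMark: the controller simply stops ramping the fan once it hits its programmed ceiling, trading temperature for noise.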

FurMark is an OpenGL benchmark that heavily stresses and overheats the graphics card with fur rendering. The benchmark offers several options allowing the user to tweak the rendering: fullscreen / windowed mode, MSAA selection, window size, and duration. The benchmark also includes a GPU Burner mode (stability test). FurMark requires an OpenGL 2.0 compliant graphics card with lots of GPU power! As an oZone3D.net partner, Benchmark Reviews offers a free download of FurMark to our visitors.

XFX_HD_5830_Video_Card_furmark_temp.jpg

FurMark does two things extremely well: it drives the thermal output of any graphics processor higher than any other application or video game, and it does so consistently every time. While FurMark is not a true benchmark tool for comparing different video cards, it still works well to compare one product against itself using different drivers or clock speeds, or to test the stability of a GPU, as it raises temperatures higher than any other program. But in the end, it's a rather limited tool.

In our next section, we discuss electrical power consumption and learn how well (or poorly) each video card will impact your utility bill...

VGA Power Consumption

Life is not as affordable as it used to be, and items such as gasoline, natural gas, and electricity all top the list of resources which have exploded in price over the past few years. Add to this the limit of non-renewable resources compared to current demands, and you can see that the prices are only going to get worse. Planet Earth needs our help, and needs it badly. With forests becoming barren of vegetation and snow-capped poles quickly turning brown, the technology industry has a new attitude towards suddenly becoming "green". I'll spare you the powerful marketing hype that I get from various manufacturers every day, and get right to the point: your computer hasn't been doing much to help save energy... at least up until now.

XFX_HD_5830_Video_Card_Power_End_02.jpg

To measure isolated video card power consumption, Benchmark Reviews uses the Kill-A-Watt EZ (model P4460) power meter made by P3 International. A baseline test is taken without a video card installed inside our computer system, which is allowed to boot into Windows and rest idle at the login screen before power consumption is recorded. Once the baseline reading has been taken, the graphics card is installed and the system is again booted into Windows and left idle at the login screen. Our final loaded power consumption reading is taken with the video card running a stress test using FurMark. Below is a chart with the isolated video card power consumption (not system total) displayed in Watts for each specified test product:

Video Card Power Consumption by Benchmark Reviews

VGA Product Description                               Idle Power  Loaded Power
(sorted by combined total power)

NVIDIA GeForce GTX 480 SLI Set                              82 W         655 W
NVIDIA GeForce GTX 590 Reference Design                     53 W         396 W
ATI Radeon HD 4870 X2 Reference Design                     100 W         320 W
AMD Radeon HD 6990 Reference Design                         46 W         350 W
NVIDIA GeForce GTX 295 Reference Design                     74 W         302 W
ASUS GeForce GTX 480 Reference Design                       39 W         315 W
ATI Radeon HD 5970 Reference Design                         48 W         299 W
NVIDIA GeForce GTX 690 Reference Design                     25 W         321 W
ATI Radeon HD 4850 CrossFireX Set                          123 W         210 W
ATI Radeon HD 4890 Reference Design                         65 W         268 W
AMD Radeon HD 7970 Reference Design                         21 W         311 W
NVIDIA GeForce GTX 470 Reference Design                     42 W         278 W
NVIDIA GeForce GTX 580 Reference Design                     31 W         246 W
NVIDIA GeForce GTX 570 Reference Design                     31 W         241 W
ATI Radeon HD 5870 Reference Design                         25 W         240 W
ATI Radeon HD 6970 Reference Design                         24 W         233 W
NVIDIA GeForce GTX 465 Reference Design                     36 W         219 W
NVIDIA GeForce GTX 680 Reference Design                     14 W         243 W
Sapphire Radeon HD 4850 X2 11139-00-40R                     73 W         180 W
NVIDIA GeForce 9800 GX2 Reference Design                    85 W         186 W
NVIDIA GeForce GTX 780 Reference Design                     10 W         275 W
NVIDIA GeForce GTX 770 Reference Design                      9 W         256 W
NVIDIA GeForce GTX 280 Reference Design                     35 W         225 W
NVIDIA GeForce GTX 260 (216) Reference Design               42 W         203 W
ATI Radeon HD 4870 Reference Design                         58 W         166 W
NVIDIA GeForce GTX 560 Ti Reference Design                  17 W         199 W
NVIDIA GeForce GTX 460 Reference Design                     18 W         167 W
AMD Radeon HD 6870 Reference Design                         20 W         162 W
NVIDIA GeForce GTX 670 Reference Design                     14 W         167 W
ATI Radeon HD 5850 Reference Design                         24 W         157 W
NVIDIA GeForce GTX 650 Ti BOOST Reference Design             8 W         164 W
AMD Radeon HD 6850 Reference Design                         20 W         139 W
NVIDIA GeForce 8800 GT Reference Design                     31 W         133 W
ATI Radeon HD 4770 RV740 GDDR5 Reference Design             37 W         120 W
ATI Radeon HD 5770 Reference Design                         16 W         122 W
NVIDIA GeForce GTS 450 Reference Design                     22 W         115 W
NVIDIA GeForce GTX 650 Ti Reference Design                  12 W         112 W
ATI Radeon HD 4670 Reference Design                          9 W          70 W
* Results are accurate to within +/- 5W.

The XFX Radeon HD 5830 pulled 29 (159-130) watts at idle and 207 (337-130) watts when running full out, using the test method outlined above. The idle result is very close to the factory number of 25W, and the load value is roughly 30W above the 175W factory spec from ATI. That's about normal for this test, since it isn't possible to isolate the CPU load from the power measurements. The max power draw also depends on the fan design used, which is something ATI had little control over for any of the new HD 5830 cards.
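The arithmetic above is simple baseline subtraction; a minimal sketch of the calculation, using the wall-socket readings from this review (the function and variable names are my own, purely for illustration):

```python
# Isolated video card power via baseline subtraction.
# All readings are wall-socket watts from the Kill-A-Watt meter; the
# 130 W baseline is the test system booted to the login screen with
# no video card installed.
BASELINE_W = 130

def isolated_power(system_reading_w, baseline_w=BASELINE_W):
    """Subtract the no-card baseline from a whole-system reading."""
    return system_reading_w - baseline_w

idle_w = isolated_power(159)   # system idle with the XFX HD 5830 installed
load_w = isolated_power(337)   # system running FurMark
print(idle_w, load_w)          # 29 207
```

Note that the baseline system still shifts its own draw under load, which is why the CPU's contribution can't be fully isolated and the loaded figure runs a bit high.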

So, no major surprises in the power consumption area; it's a big GPU running at high clock rates. It's a good thing the chip is built on 40nm technology; otherwise, those two billion transistors would be pulling a lot more power and generating a lot more heat. Next, I offer some final thoughts and my conclusions. On to the next page...

Radeon HD 5830 Final Thoughts

Why did ATI leave this huge hole in their product line for so long? The flagship ATI video cards made a huge splash last September, but according to Mercury Research, cards costing over $200 make up only 7% of the market, while the 57xx series landed in the $100-$200 range, which makes up 27% of the market. That leaves a huge opening in the sub-$100 market, and ATI was busy filling in the gaps with all-new, DirectX 11 capable cards in this segment. Enthusiasts may laugh at the diminutive HD55xx series and the HD5450, with its 80 shaders, but they provide a much-needed revenue stream for ATI. Don't begrudge them that; it's what pays for all the R&D that produced the 58xx series in the first place.

So, the halo products were doing fine; in fact, they were in short supply for several months due to manufacturing yield problems at the chip foundry in Taiwan. Now the middle ground and the HTPC markets are taken care of, and there are enough chips flowing out of TSMC to keep retailers' shelves stocked. Now what...? Oh, yeah, let's go back and finish off the premier product line with a couple of easy wins. One card can fill the gap between the 58xx and 57xx series, and a dedicated Eyefinity HD5870 card for the AV market will sell like hotcakes at Belgian waffle prices, because in that market, you're always spending someone else's money.

Let's play a game of "What If". What if you were King of ATI, and you knew that there was a gap in your product line, so you told your minions to go and design something to fill that gap. Lo and behold, some weeks later, the engineers came back with three proposals, because they had been arguing for almost the entire time over how to design the product. It turns out that there are three very easy, very plausible ways to build a product that will meet the performance requirements. Each of them is correct from a technical perspective, so the King has to decide. (You all knew that Marketing is the King, right...LOL)

  1. Crank up the 57xx product with selected Juniper GPUs that will run 1 GHz+, and a slightly higher spec memory, easily available from several suppliers.
  2. Take another 160 Stream Processors (10%) away from the Cypress GPU (1280 left), and down-clock it to the exact performance target you want. (This was the highly successful strategy for the 5850, BTW.)
  3. Take away 320 additional Stream Processors from the Cypress GPU (1120 left), disable some additional Texture Units, and gut the ROPs down to half strength. Take advantage of the high clock rates that are achievable with the latest 40nm chips that you are already paying dearly for, and make up the performance you lost by disabling over 30% of the working parts in each section of the architecture.
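On paper, the three options land closer together than the part counts suggest, because peak single-precision shader throughput is just stream processors x 2 FLOPs per clock x clock speed. A back-of-the-envelope sketch (the clock for option 2 is my assumption chosen to hit the same target, not an ATI spec; option 3 uses the HD 5830's actual 800 MHz core clock):

```python
# Peak single-precision shader throughput in GFLOPS:
# stream processors x 2 FLOPs/clock (multiply-add) x clock in GHz
def peak_gflops(stream_processors, clock_ghz):
    return stream_processors * 2 * clock_ghz

# Option 1: hot-clocked Juniper (800 SPs) at the 1 GHz bin named above.
option_1 = peak_gflops(800, 1.0)    # 1600 GFLOPS
# Option 2: 1280-SP Cypress; roughly 0.7 GHz would land on the same
# target (this clock is an assumption for illustration).
option_2 = peak_gflops(1280, 0.7)   # ~1792 GFLOPS
# Option 3, the shipping HD 5830: 1120 SPs at the actual 800 MHz clock.
option_3 = peak_gflops(1120, 0.8)   # ~1792 GFLOPS, the 1.79 TFLOPS figure
```

Of course, raw shader FLOPS ignore the halved ROPs and trimmed texture units in option 3, which is exactly why the 5830's real-world results vary so much from benchmark to benchmark.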

Well, the world is waiting for your answer....Kings are infallible you know, so whatever you say will automatically be correct, for all time. It's just that the wrong decision is going to cost you money, somewhere down the road.

Kings have special privileges, so I'm going to invoke mine and answer "1 & 3". I think a turbocharged 57xx is already in the product roadmap, it's just a question of time. I think #2 is what the market wanted, because they had already seen how well the HD5850 scaled with GPU clock speed, and they wanted to be able to overclock the 5830 and get 5850 performance out of it. Just like they saw everyone doing with the 5850, juicing it up to compete with the 5870, they wanted a repeat performance of The People's Champion.

XFX_HD_5830_Video_Card_GPU_Die.jpg

Alas, the King didn't want to lose all those HD5850 sales, at those nice HD5850 prices. I can't blame him; I would have done the same thing. Now, if you'll excuse me, I'm going to go try and get that 5830 chip up to 1.0 GHz, and see what it'll really do.

XFX Radeon HD 5830 Conclusion

The XFX Radeon HD 5830 easily equaled the performance levels set by the engineering prototype we received from ATI. That's important, because each of the AIB partners was on their own during the development process. There's a lot of diversity in the 5830 product mix, something we don't often see. It was reassuring to see the frame rates match up, but it was also good to see the thermal and power consumption results fall within the ballpark established by the prototype.

The performance of the HD 5830 GPU is still a bit of an enigma to the enthusiast community. ATI claims to have hit their overall performance target, but the rub is that enthusiasts won't be able to jack up the GPU clock and reap the kind of performance gains that they were able to get with the HD 5850. The variance in relative performance between the various benchmarks is a bit confusing. Everyone runs their benchmarks with slightly different settings, and we saw in our DirectX 11 follow-up article that relative performance levels can shift around when AF and MSAA are cranked up or down. The mix of GPU components, the recipe for the HD 5830, if you will, is unique. You have to pay attention to what works in the games you like to play; Crysis certainly worked very well with this card, and for some, that will be enough.

XFX_HD_5830_Video_Card_Box_and_Card.jpg

The appearance of the XFX HD 5830 video card is quite good. It strays from the "box" approach without getting silly like some products that are done up to look like race cars. It's always been clear that the 5830 needs a healthy dose of cooling to perform reliably, and XFX has managed that without an outsized fin-and-heatpipe arrangement. The radial design they've chosen makes the most of the airflow from the fan. There continues to be an abundance of creativity in the area of GPU cooling, and I'm happy that companies like XFX don't feel they have to offer a me-too approach.

The build quality of the XFX Radeon HD 5830 is excellent. Everything is well put together and nothing is out of place. The assembly and soldering quality of the PC board is fully up to standard for this type of product, and the packaging was also first rate. The industry has been very tight-lipped about which AIB partners were having trouble with their 5830 power supply designs, but considering the XFX HD 5830's power section is built better than another vendor's factory-overclocked 5870 card, I'm not concerned about this product at all. Plus, the XFX Double-Lifetime warranty means you won't ever have to worry about product support, and you get to pass that on to the next owner, free of charge.

The features of the HD 5830 may seem slightly less amazing now that we've been using all of them on other Radeon 5xxx cards since last September. Still, no one else offers an equivalent combination of features: DirectX 11, full ATI Eyefinity support, ATI Stream technology support, DirectCompute 11, OpenCL support, and HDMI 1.3 with Dolby TrueHD and DTS-HD Master Audio. We've barely scratched the surface of all the features in this review, focusing almost exclusively on gaming performance, but the card excels at other uses as well.

As of March 2010, the price for the XFX Radeon HD 5830 (model HD-583X-ZNFV) is $259.99 at my favorite PC component supplier, Newegg. There is eventually going to be a wider price range than usual for this product, since there is no reference design to act as an anchor. For now, this XFX card is priced in the middle of the pack, and the inclusion of the Aliens vs. Predator game and XFX's exclusive Double-Lifetime warranty certainly adds some tangible value. Unless ATI jiggles the drivers around to give the 58xx series a performance advantage over the 57xx products, there will always be some who question the value of any HD 5830 video card that costs one cent more than the imaginary price point halfway between the 5770 and 5850. That's a criticism I think the free market will eventually answer.

The XFX Radeon HD 5830 earns a Silver Tachometer Award, because it pushes the design of the HD5xxx series further along than many of its contemporaries. The power supply is state-of-the-art in ways that make it smaller, cheaper and more efficient, not more complex. The cooling solution is right-sized, not over-sized, and also more efficient than most. It's always easier to solve engineering problems with a pile of parts and more money, rather than hard work, but I've consistently preferred the simpler, neater results from the latter approach.

Pros:silvertachaward.png

+ Robust, modern power supply design
+ Unmatched feature set
+ 1250 MHz Samsung GDDR5 memory
+ Full 256-bit memory architecture
+ 1.79 TeraFLOPS of single-precision computing
+ Free Aliens vs. Predator game
+ Double-Lifetime warranty
+ HDMI, (2) DVI-I, and DisplayPort interfaces
+ Easy to overclock with ATI Overdrive
+ Dual CrossFireX connectors

Cons:

- Only 1120 Stream Processors
- Only 16 ROPs, same as HD 5770
- No software voltage control, can limit OC abilities
- Stock fan settings make GPU too hot
- Stock GPU clock is nearly maxed out
- Requires more power than stock HD 5850

Ratings:

  • Performance: 8.25
  • Appearance: 9.25
  • Construction: 9.25
  • Functionality: 9.50
  • Value: 8.25

Final Score: 8.9 out of 10.

Quality Recognition: Benchmark Reviews Silver Tachometer Award.

Questions? Comments? Benchmark Reviews really wants your feedback. We invite you to leave your remarks in our Discussion Forum.

