ATI Radeon HD 5830 DirectX-11 Video Card
Written by Bruce Normann   
Thursday, 25 February 2010

ATI Radeon HD 5830 Video Card Review

Ever since the introduction of the ATI Radeon HD5770, PC gamers and enthusiasts have been eyeing the wide gap between the HD5770 and the HD5850, knowing it would only be a matter of time before ATI plugged the hole in its product line. ATI is taking a slightly different approach with the Radeon HD5830 video card: selling only the ASIC chips to its Add-In-Board (AIB) partners. In this article, Benchmark Reviews demonstrates a prototype ATI Radeon HD 5830 video card, and our benchmark tests are going to put an end to the rumors about the specifications and graphical performance of this highly anticipated GPU.

ATI_Radeon_HD5830_Video_Card_Glam_600.jpg

The Radeon HD 5830 is not a 'reference card' as we know it, because that concept is not being used this time around. We have to wait for the AIB partners to start shipping their own board designs to see what will actually be available in the marketplace. No more slapping a label on a complete card that you buy from ATI's captive supplier... For now though, we have a quick and easy way to assess the performance of the ATI Radeon HD5830 chip, lovingly wrapped in an HD5870 package. Yes, a 5870 package; I'll explain later why this is necessary.

Please follow along as we give you a sneak preview of things to come from some of ATI's AIB Partners.

About the company: ATI

ati_premium_graphics_logo_200.png

Over the course of AMD's four decades in business, silicon and software have become the steel and plastic of the worldwide digital economy. Technology companies have become global pacesetters, making technical advances at a prodigious rate - always driving the industry to deliver more and more, faster and faster.

However, "technology for technology's sake" is not the way we do business at AMD. Our history is marked by a commitment to innovation that's truly useful for customers - putting the real needs of people ahead of technical one-upmanship. AMD founder Jerry Sanders has always maintained that "customers should come first, at every stage of a company's activities." We believe our company history bears that out.

ATI Radeon HD5830 Features

The feature set of the ATI HD5830 video card is identical to the two previously released HD5800 series cards. The only differences are in the number of processing units at various places in the architecture. For those who perused the mountain of details that accompanied the 5800 series launch, this graphic should look somewhat familiar. What might catch your attention are the big white spaces where I used a digital eraser to show what gets left out on this particular chip. It's not nearly as pretty as the corporate graphics, but it gives you a better feel for what gets left on the cutting room floor when a potential HD5870 GPU becomes an HD5830. Notice that the number of ROPs gets cut in half, and I suspect some of that L2 cache goes away, too. This could potentially have a bigger performance impact than the reduction in shaders from 1600 down to 1120.
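As a rough sanity check on those cuts, the peak throughput numbers can be worked out directly from the unit counts and clocks. This is a back-of-the-envelope sketch, assuming the usual rules of thumb for this architecture: each ATI stream processor can retire two FLOPs per clock, and each color ROP writes one pixel per clock.

```python
# Back-of-the-envelope peak throughput for the Cypress-family cards.
# Assumes 2 FLOPs per stream processor per clock (one multiply-add)
# and 1 pixel per color ROP per clock.

def peak_gflops(stream_processors, core_mhz):
    """Peak single-precision GFLOPS: SPs x 2 FLOPs x clock (GHz)."""
    return stream_processors * 2 * core_mhz / 1000

def pixel_fillrate_gpix(color_rops, core_mhz):
    """Peak pixel fillrate in Gpixels/s: color ROPs x clock (GHz)."""
    return color_rops * core_mhz / 1000

cards = {                       # (SPs, color ROPs, core MHz)
    "HD5770": (800, 16, 850),
    "HD5830": (1120, 16, 800),
    "HD5850": (1440, 32, 725),
    "HD5870": (1600, 32, 850),
}

for name, (sps, rops, mhz) in cards.items():
    print(f"{name}: {peak_gflops(sps, mhz):.0f} GFLOPS, "
          f"{pixel_fillrate_gpix(rops, mhz):.1f} Gpix/s")
```

Worked through, the HD5830 lands at roughly 1.79 TFLOPS of shader throughput but only 12.8 Gpixels/s of fillrate, which actually falls below the HD5770's 13.6 Gpixels/s. That is exactly why the halved ROP count is the spec to watch.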

ATI_Radeon_HD5830_Video_Card_Hacked_Architecture.jpg

All ATI Radeon HD 5000 Series graphics cards come with ATI Eyefinity Technology, which can instantly triple your visual real estate, up to three displays for the ultimate in innovative "wrap around" capabilities, all with crisp, sharp picture quality. ATI Eyefinity technology engages your peripheral vision and puts you right in the game. At the office, you can multi-task without needing to flip between windows. Ideal for multi-media applications, keep as many palettes or panels open as you would like, while you edit images or videos.

ATI Stream Technology unleashes the massive parallel processing power of your GPU to help speed up demanding every-day applications. Experience fast video encoding and transcoding, so that video playback, editing and transferring content to your iPod or other portable media players is quick and easy.

As the first fully Microsoft DirectX 11-compatible GPUs, the ATI Radeon HD 5800 Series delivers unrivaled visual quality and intense gaming performance. Enjoy in-your-face 3D visual effects and dynamic interactivity, with features like HDR Texture Compression, DirectCompute 11 and Tessellation.

The 5800 Series is further supersized with GDDR5 memory, up to a 1.8X graphics performance boost with ATI CrossFireX technology in dual mode, and unparalleled anti-aliasing and enhanced anisotropic filtering for slick graphics and supreme realism.

ATI Radeon HD 5830 GPU Feature Summary

  • 2.15 billion 40nm transistors
  • TeraScale 2 Unified Processing Architecture
    • 1120 Stream Processing Units
    • 56 Texture Units
    • 64 Z/Stencil ROP Units
    • 16 Color ROP Units
  • 256-bit GDDR5 memory interface
  • PCI Express 2.1 x16 bus interface
  • DirectX 11 support
    • Shader Model 5.0
    • DirectCompute 11
    • Programmable hardware tessellation unit
    • Accelerated multi-threading
    • HDR texture compression
    • Order-independent transparency
  • OpenGL 3.2 support [1]
  • Image quality enhancement technology
    • Up to 24x multi-sample and super-sample anti-aliasing modes
    • Adaptive anti-aliasing
    • 16x angle independent anisotropic texture filtering
    • 128-bit floating point HDR rendering
  • ATI Eyefinity multi-display technology [2,3]
    • Three independent display controllers - Drive three displays simultaneously with independent resolutions, refresh rates, color controls, and video overlays
    • Display grouping - Combine multiple displays to behave like a single large display
  • ATI Stream acceleration technology
    • OpenCL 1.0 compliant
    • DirectCompute 11
    • Accelerated video encoding, transcoding, and upscaling [4,5]
    • Native support for common video encoding instructions
  • ATI CrossFireX™ multi-GPU technology [6]
    • Dual GPU scaling
  • ATI Avivo™ HD Video & Display technology [7]
    • UVD 2 dedicated video playback accelerator
    • Advanced post-processing and scaling [8]
    • Dynamic contrast enhancement and color correction
    • Brighter whites processing (blue stretch)
    • Independent video gamma control
    • Dynamic video range control
    • Support for H.264, VC-1, and MPEG-2
    • Dual-stream 1080p playback support [9,10]
    • DXVA 1.0 & 2.0 support
    • Integrated dual-link DVI output with HDCP [11]
      • Max resolution: 2560x1600 [12]
    • Integrated DisplayPort output
      • Max resolution: 2560x1600 [12]
    • Integrated HDMI 1.3 output with Deep Color, xvYCC wide gamut support, and high bit-rate audio
      • Max resolution: 1920x1200 [12]
    • Integrated VGA output
      • Max resolution: 2048x1536 [12]
    • 3D stereoscopic display/glasses support [13]
    • Integrated HD audio controller
      • Output protected high bit rate 7.1 channel surround sound over HDMI with no additional cables required
      • Supports AC-3, AAC, Dolby TrueHD and DTS Master Audio formats
  • ATI PowerPlay™ power management technology [7]
    • Dynamic power management with low power idle state
    • Ultra-low power state support for multi-GPU configurations
  • Certified drivers for Windows 7, Windows Vista, and Windows XP
  1. Driver support scheduled for release in 2010
  2. Driver version 8.66 (Catalyst 9.10) or above is required to support ATI Eyefinity technology, and enabling a third display requires one panel with a DisplayPort connector
  3. ATI Eyefinity technology works with games that support non-standard aspect ratios, which is required for panning across three displays
  4. Requires application support for ATI Stream technology
  5. Digital rights management restrictions may apply
  6. ATI CrossFireX™ technology requires an ATI CrossFireX Ready motherboard, an ATI CrossFireX™ Bridge Interconnect for each additional graphics card, and may require a specialized power supply
  7. ATI PowerPlay™, ATI Avivo™, and ATI Stream are technology platforms that include a broad set of capabilities offered by certain ATI Radeon™ HD GPUs. Not all products have all features, and full enablement of some capabilities may require complementary products
  8. Upscaling subject to available monitor resolution
  9. Blu-ray or HD DVD drive and HD monitor required
  10. Requires Blu-ray movie disc supporting dual 1080p streams
  11. Playing HDCP content requires additional HDCP ready components, including but not limited to an HDCP ready monitor, Blu-ray or HD DVD disc drive, multimedia application and computer operating system.
  12. Some custom resolutions require user configuration
  13. Requires 3D Stereo drivers, glasses, and display

ATI_Radeon_HD5830_Video_Card_Big_Bunny_02.jpg

Although most of this article will focus on gaming performance, it's important to remember that the Radeon HD5xxx series has the most extensive and effective video processing technology available today. Many of the features referenced in the list above have real-world implications for seemingly simple tasks like web browsing.

ATI Radeon HD5830 Specifications

If we just talk about the HD5830 GPU, and the architecture that supports it, then this section is the second most important part of this review. There has been endless conjecture throughout the industry and among enthusiasts about how ATI was going to tweak the basic ingredients in order to hit the sweet spot that exists in the fairly wide performance gap between the HD5770 and HD5850. By way of introduction, I'll just say that when a group of journalists recently saw this chart, they had more questions after they saw it than they did before. Fortunately, ATI was very open with us and gave us some insights into the development process, which I'll share with you when we take a Closer Look, in the next section.

ATI_Radeon_HD5830_Video_Card_Specs_Chart_02.jpg

Specs are very important for this product, because they tell a vital part of the story. However, I believe the most important part is the testing section, where we finally get to see how the HD5830 performs, relative to other options that are available now and some older products that users may want to upgrade from. Although you might think that pricing belongs in the top two, it has a life of its own, and it's very difficult to accurately predict where the price will eventually settle. The video card market has always been very dynamic, and with the upcoming (we all hope...) introduction of Fermi-based products from NVIDIA, there are going to be some major wrinkles in the market pricing structure that will have to be ironed out pretty quickly. For now, take a look at where the various versions of the HD5000 series end up relative to one another on this price v. performance chart, and remember this is all based on launch pricing...

ATI_Radeon_HD5830_Video_Card_HD5xxx_Price_v._Performance.jpg

The HD5830 is likely built with chips that didn't meet the top clock spec, and/or had a defect that killed one or more of the stream processor units. As anyone who has followed the AMD product line knows, modern processors are designed with the capability of disabling portions of the die. Sometimes, it's done because there are defects on the chip (usually a small particle of dust that ruins a transistor) and all the internal sections don't pass testing. Sometimes it's done with perfectly good chips because the manufacturer needs to meet production requirements for lower cost market segments. Given the well publicized issues with 40nm manufacturing yields at TSMC, I seriously doubt that ATI is crippling perfectly good chips, just to sell more lower-spec cards. With the release of this minor variant, ATI finally has all the major bases covered for cards based on the Cypress class GPUs.

Let's take a much closer look at the How and Why of the development process for the ATI Radeon HD5830. It's an interesting story that reveals how the obvious choices sometimes lead to a place you don't want to go.

Closer Look: ATI Radeon HD5830

The ATI Radeon HD5830 is definitely a full member of the 58xx family; the GPU is not some sort of hybrid between Cypress and Juniper. The card we are looking at today, a prototype version from ATI, follows the pattern of the HD5870 in size, shape and general design. Yes, that's right, the HD5870, that extra-long beast of a card that sits atop the single-GPU hill. That seems like an odd choice, but there's a method to the madness.

amd_ati_radeon_hd-5870_video_card_rear_angle.jpg

In contrast to earlier launches of the ATI HD5xxx products, this time there is no real HD5830 "Reference Design" that can be purchased from ATI's captive suppliers. ATI is only selling ASICs for the HD5830 models, and their AIB partners are doing the rest. At this stage in the product lifecycle, many of the partners have their own unique board designs for the 58xx product line. Most of them have taken the reference design and put it through a process known as "Value Engineering". This is a well-established technique for cutting costs while maintaining the full performance capabilities of the original. It works best when you have a well-documented design to start from, one that includes detailed records of design reviews, testing protocols, and most importantly, complete test results. Especially the kind of informal testing that goes on in any development lab; you know, the tests that end in smoke. Engineers tend not to document these, but if you're trying to cut cost from a product, you need to know its weaknesses more than you need to know its strengths.

When I first got my hands on the HD5830 prototype, my initial thought was, "Oh, they built it on the HD5850 platform. There's a hopeful sign that they didn't mess with the architecture." I guess that's what I was hoping to see, so that's what I DID see. Then I looked a bit closer, felt the obvious heft of the card, saw the full metal backplate and thought, "Why on earth would they build this on an HD5870 board?" It wasn't until several days later, during discussions with ATI, that the answer came to light.

amd_ati_radeon_hd-5870_video_card_bottom_underside.jpg

ATI made the decision to reduce the number of Stream Processors a little more aggressively than they did when they created the HD5850. They "took away" 320 Shaders this time, instead of only 160. This accomplishes two very important things: it keeps the HD5830 far enough away from the HD5850 to prevent cannibalizing sales of that very popular card, and it helps ATI "recover" more defective Cypress GPU chips, which is very helpful when your supplier is having extended manufacturing yield problems with their latest technology node. The downside to only having 1120 Stream Processors is that the GPU had to run a fairly high clock rate in order to hit the performance target that was the whole reason for this product's existence. ATI wanted the HD5830 to hit the exact middle of the performance gap between the 5770 and 5850; too far one way or the other and you haven't really filled the gap. Based on their internal testing, ATI feels they hit the mark. Pay attention to the scaling of this chart... the key takeaway is how close the HD5830 bar lands to the middle, between the high and low bars on the left and right.
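That "fill the gap" target can be expressed as a simple gap-position metric. Here's a minimal sketch; the frame rates below are made-up illustration values, not measured results:

```python
def gap_position(low, high, mid):
    """Fraction of the low-to-high performance gap covered by the middle card.
    0.0 = no better than the low card, 1.0 = matches the high card,
    0.5 = dead center, which is where ATI wanted the HD5830 to land."""
    return (mid - low) / (high - low)

# Hypothetical frame rates for illustration only:
hd5770_fps, hd5850_fps = 40.0, 60.0
hd5830_fps = 50.0
print(gap_position(hd5770_fps, hd5850_fps, hd5830_fps))  # 0.5 -> exactly mid-gap
```

Anything well below 0.5 means the card is really just a faster HD5770; well above 0.5 and it starts cannibalizing the HD5850.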

ATI_Radeon_HD5830_Video_Card_fill_the_gap_01.jpg

So, the stock clock for the HD5830 came to be set at 800 MHz, and we have the strange situation where the lower performing HD5830 actually pulls more power than the HD5850, which normally runs at a pedestrian clock rate of 725 MHz when it leaves the factory. This turns out to have the unintended consequence of requiring the HD5830 boards to have a more robust power supply than the HD5850. Indeed, the guidance given to the AIB partners was to use the power supply specs from the HD5870 when designing their boards for the HD5830. I'm sure the Electrical Engineers understood what had happened, but I'm just as sure that the Product Marketing people were having a cow over this thought.

Ironically, the weak point of the reference HD5870 board has been its power supply section. The voltage regulator modules (VRMs) are buried below the blower wheel and don't get as much cooling air as some other parts of the board. It's a strange twist of fate that the very thing the HD5830 needed from the HD5870 design was that design's weak point. Time will tell how well the AIB partners have mitigated this issue.

Now let's do something we often don't get a chance to; let's look into the future. Maybe only a few hours, maybe a day, maybe a week, who knows?

Looking Ahead: AIBs

For most high-end video cards, the cooling system is an integral part of the performance envelope for the card. "Make it run cooler, and you can make it run faster" has long been the byword for achieving gaming-class performance from the latest and greatest GPU. Let's take a quick look at what some of the AIB partners have planned for cooling solutions on the Radeon HD 5830; it looks like there could be huge differences in the designs.

ATI_Radeon_HD5830_Video_Card_Gigabyte_34_01.jpg

Here's a dual-cooler version that should really help keep the VRM section from getting overheated.

ATI_Radeon_HD5830_Video_Card_HIS_34_01.jpg

This will probably be a popular configuration, with a single fan pushing air into two finned sections, one in front and one in back. The VRM section will still get some decent cooling air in this arrangement. Not as much as the dual fan setup, obviously, but at least it's in the airstream.

ATI_Radeon_HD5830_Video_Card_Sapphire_34_01.jpg

We can see the heatpipe construction a little better with this image. Once again, the plastic shroud looks like a very simple and inexpensive part. No doubt the AIB partners are trying their best to keep cooling performance high and costs low.

ATI_Radeon_HD5830_Video_Card_XFX.jpg

This is the most interesting of the variants that we got a peek at. If it's a real part, and not an early non-functioning mock-up, then it looks like XFX is far ahead on Value Engineering. Not only is the cooling assembly much lower in cost, they've also shrunk the PCB down to HD5750 territory. This is definitely the lowest-cost card, if it can be built.

Even though the HD5830 has a lower performance profile than its big brothers, the relatively high stock clock rate means that the cooling solution couldn't be scaled back proportionately. Besides, it's still a full-on gaming product and will be pushed to maximum performance levels by most potential customers. We've not spent a lot of time in this review on the board design, since our current sample doesn't represent a production model. Once we get our hands on some retail units from the AIB partners, we'll spend more time looking closely at their design and construction. Today, we're going to dive into the HD5830 and find out how the GPU itself performs, so let's move on to the Testing section.

Video Card Testing Methodology

This is the beginning of a new era for testing at Benchmark Reviews. With the imminent release of Windows 7 to the marketplace, and given the prolonged and extensive pre-release testing that occurred on a global scale, there are compelling reasons to switch all testing to this new, highly anticipated operating system. Overall performance levels of Windows 7 compare favorably to Windows XP, and there is solid support for the 64-bit version, something enthusiasts have been anxiously awaiting for several years.

Our site polls and statistics indicate that over 90% of our visitors use their PC for playing video games, and practically every one of you uses one of the screen resolutions mentioned below. Since all of the benchmarks we use for testing represent different game engine technology and graphic rendering processes, this battery of tests will provide a diverse range of results for you to gauge performance on your own computer system. All of the benchmark applications are capable of utilizing DirectX 10, and that is how they were tested. Some of these benchmarks have been used widely for DirectX 9 testing in the XP environment, and it is critically important to differentiate between results obtained with different versions. Each game behaves differently in DX9 and DX10 formats. Crysis is an extreme example, with frame rates in DirectX 10 only about half of what was available in DirectX 9.

At the start of all tests, the previous display adapter driver is uninstalled and trace components are removed using Driver Cleaner Pro. We then restart the computer system to establish our display settings and define the monitor. Once the hardware is prepared, we begin our testing. According to the Steam Hardware Survey published at the time of the Windows 7 launch, the most popular gaming resolution is 1280x1024 (17-19" standard LCD monitors), closely followed by 1024x768 (15-17" standard LCD). However, because these resolutions are considered 'low' by most standards, our benchmark performance tests concentrate on the up-and-coming higher-demand resolutions: 1680x1050 (22-24" widescreen LCD) and 1920x1200 (24-28" widescreen LCD monitors).

Each benchmark test program begins after a system restart, and the very first result for every test will be ignored since it often only caches the test. This process proved extremely important in the World in Conflict benchmarks, as the first run served to cache maps allowing subsequent tests to perform much better than the first. Each test is completed five times, the high and low results are discarded, and the average of the three remaining results is displayed in our article.
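The averaging scheme described above amounts to a trimmed mean. A minimal sketch, using hypothetical frame rates (the cache-warming first run is assumed to have already been excluded):

```python
def benchmark_average(results):
    """Average a benchmark per the method above: discard the single highest
    and single lowest of five runs, then average the remaining three."""
    if len(results) != 5:
        raise ValueError("expected five benchmark runs")
    trimmed = sorted(results)[1:-1]   # drop the low and high outliers
    return sum(trimmed) / len(trimmed)

# Hypothetical frame-rate results for illustration:
runs = [31.2, 30.8, 31.5, 29.9, 31.1]
print(round(benchmark_average(runs), 2))  # 31.03
```

Discarding the extremes keeps a single stutter or an unusually clean run from skewing the published number.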

Test System

  • Motherboard: ASUS M4A79T Deluxe (2205 BIOS)
  • System Memory: 2x 2GB OCZ Reaper HPC DDR3 1600MHz (7-7-7-24)
  • Processor: AMD Phenom II 720 Black Edition (Overclocked to 3.6 GHz)
  • CPU Cooler: CoolerMaster Hyper Z600
  • Video: ATI Radeon HD5830, Engineering Sample
  • Drive 1: OCZ Summit SSD, 60GB
  • Optical Drive: Sony NEC Optiarc AD-7190A-OB 20X IDE DVD Burner
  • Enclosure: CM STORM Sniper Gaming Case
  • PSU: Corsair CMPSU-750TX ATX12V V2.2 750Watt
  • Monitor: SOYO 24" Widescreen LCD Monitor (DYLM24E6) 1920x1200
  • Operating System: Windows 7 Ultimate Version 6.1 (Build 7600)

Benchmark Applications

  • 3DMark Vantage v1.0.1 Benchmark (8x Anti-Aliasing & 16x Anisotropic Filtering)
  • Crysis v1.21 Benchmark (Very High Settings, 0x and 4x Anti-Aliasing)
  • Devil May Cry 4 Benchmark Demo (Ultra Quality, 8x MSAA)
  • Far Cry 2 v1.02 Benchmark (Very High Performance, Ultra-High Quality, 8x Anti-Aliasing)
  • Resident Evil 5 Benchmark (8x Anti-Aliasing, Motion Blur ON, Quality Levels-High)
  • Unigine Heaven Benchmark (DX10, High Shaders, No Tessellation, 16x AF, 4x & 8x AA)
  • S.T.A.L.K.E.R. Call of Pripyat Benchmark (Ultra-Quality, Enhanced DX10 light, 4x MSAA, SSAO Off & Default)

Video Card Test Products

Product Series Stream Processors Core Clock (MHz) Shader Clock (MHz) Memory Clock (MHz) Memory Amount Memory Interface
MSI Radeon HD4830 (R4830 T2D512) 640 585 N/A 900 512MB GDDR3 256-bit
XFX Radeon HD5750 (HD-575X-ZNFC) 720 700 N/A 1150 1.0GB GDDR5 128-bit
ASUS Radeon HD4850 (EAH4850 TOP) 800 680 N/A 1050 512MB GDDR3 256-bit
ATI Radeon HD5770 (Engineering Sample) 800 850 N/A 1200 1.0GB GDDR5 128-bit
ATI Radeon HD5830 (Engineering Sample) 1120 800 N/A 1000 1.0GB GDDR5 256-bit
ASUS GeForce GTX 260 (ENGTX260 MATRIX) 216 576 1242 999 896MB GDDR3 448-bit
XFX Radeon HD5850 (21162-00-50R) 1440 725 N/A 1000 1.0GB GDDR5 256-bit
MSI GeForce GTX 275 (N275GTX Twin Frozr OC) 240 666 1476 1161 896MB GDDR3 448-bit
ASUS GeForce GTX 285 (GTX285 MATRIX) 240 662 1476 1242 1.0GB GDDR3 512-bit
ATI Radeon HD5870 (Engineering Sample) 1600 850 N/A 1200 1.0GB GDDR5 256-bit
  • MSI Radeon HD4830 (R4830 T2D512 - Catalyst 8.703.0.0)
  • ASUS Radeon HD4850 (EAH4850 TOP - Catalyst 8.703.0.0)
  • XFX Radeon HD5750 (HD-575X-ZNFC - Catalyst 8.703.0.0)
  • ATI Radeon HD5770 (Engineering Sample - Catalyst 8.703.0.0)
  • ATI Radeon HD5830 (Engineering Sample - Catalyst 8.703.0.0)
  • XFX Radeon HD5850 (21162-00-50R - ATI Catalyst 8.703.0.0)
  • ASUS GeForce GTX 260 (ENGTX260 MATRIX - Forceware v195.62)
  • MSI GeForce GTX 275 (N275GTX Twin Frozr OC - Forceware v195.62)
  • ASUS GeForce GTX 285 (GTX285 MATRIX - Forceware v195.62)
  • ATI Radeon HD5870 (Reference Design - Catalyst 8.703.0.0)

3DMark Vantage Benchmark Results

3DMark Vantage is a computer benchmark by Futuremark (formerly MadOnion) that measures the DirectX 10 gaming performance of graphics cards. A 3DMark score is an overall measure of your system's 3D gaming capabilities, based on comprehensive real-time 3D graphics and processor tests. By comparing your score with those submitted by millions of other gamers, you can see how your gaming rig performs, making it easier to choose the most effective upgrades or find other ways to optimize your system.

There are two graphics tests in 3DMark Vantage: Jane Nash (Graphics Test 1) and New Calico (Graphics Test 2). The Jane Nash test scene represents a large indoor game scene with complex character rigs, physical GPU simulations, multiple dynamic lights, and complex surface lighting models. It uses several hierarchical rendering steps, including for water reflection and refraction, and physics simulation collision map rendering. The New Calico test scene represents a vast space scene with lots of moving but rigid objects and special content like a huge planet and a dense asteroid belt.

At Benchmark Reviews, we believe that synthetic benchmark tools are just as valuable as video games, but only so long as you're comparing apples to apples. Since the same test is applied in the same controlled method with each test run, 3DMark is a reliable tool for comparing graphics cards against one another.

1680x1050 is rapidly becoming the new 1280x1024. More and more widescreen monitors are being sold with new systems or as upgrades to existing ones. Even in tough economic times, the tide cannot be turned back; screen resolution and size will continue to creep up. Using this resolution as a starting point, the maximum settings applied to 3DMark Vantage include 8x Anti-Aliasing, 16x Anisotropic Filtering, all quality levels at Extreme, and Post Processing Scale at 1:2.

ATI_Radeon_HD5830_Video_Card_3DMark_Vantage_Jane_Nash_1680.jpg

Well, our first test looks promising! At 1680x1050 the Radeon HD5830 slots in nicely between the HD5770 and HD5850. If anything, it leans towards the 5850 in this synthetic test. Also, notice that it just edges out a mildly overclocked GTX285. We'll have to keep an eye on that comparison as we move through our testing regimen. Wouldn't it be funny if that was the target performance level all along?

ATI_Radeon_HD5830_Video_Card_3DMark_Vantage_Jane_Nash_1920.jpg

At 1920x1200 resolution, things look much the same as they did at the lower resolution. The low-end cards, with their limited 512MB of GDDR3, struggle to keep up, but everything else is the same. Let's take a look at test #2, which has a lot more surfaces to render, with all those asteroids flying around New Calico.

ATI_Radeon_HD5830_Video_Card_3DMark_Vantage_New_Calico_1680.jpg

In the New Calico test, the HD5830 sits right in the center of the sweet spot between its siblings, the HD5770 and HD5850. So far, any concerns about the 50% reduction in ROPs seem unwarranted. The 5830 is keeping up with its big brother just fine.

ATI_Radeon_HD5830_Video_Card_3DMark_Vantage_New_Calico_1920.jpg

At a higher screen resolution of 1920x1200, we again see the 512MB cards falling behind, but the HD5830 retains its spot halfway between the 5770 and 5850. It also barely tops the GTX285 again, so any complaints about the pricing on the HD5830 need to consider the competition. We need to look at some actual gaming performance to verify these results, so let's take a look in the next section, at how these cards stack up in the standard bearer for gaming benchmarks, Crysis.


Crysis Benchmark Results

Crysis uses a new graphics engine: the CryENGINE2, which is the successor to Far Cry's CryENGINE. CryENGINE2 is among the first engines to use the Direct3D 10 (DirectX 10) framework, but it can also run using DirectX 9 on Vista, Windows XP, and the new Windows 7. As we'll see, there are significant frame rate reductions when running Crysis in DX10. It's not an operating system issue; DX9 works fine in Win7, but DX10 knocks the frame rates in half.

Roy Taylor, Vice President of Content Relations at NVIDIA, has spoken on the subject of the engine's complexity, stating that Crysis has over a million lines of code, 1GB of texture data, and 85,000 shaders. To get the most out of modern multicore processor architectures, CPU intensive subsystems of CryENGINE 2 such as physics, networking and sound, have been re-written to support multi-threading.

Crysis offers an in-game benchmark tool, similar to the one in World in Conflict. This short test places a high amount of stress on a graphics card, since there are so many landscape features rendered. For benchmarking purposes, Crysis can mean trouble, as it places a high demand on both GPU and CPU resources. Benchmark Reviews uses the Crysis Benchmark Tool by Mad Boris to test frame rates in batches, which allows the results of many tests to be averaged.

Low-resolution testing allows the graphics processor to plateau at its maximum output performance, and shifts demand onto the other system components. At the lower resolutions, Crysis will reflect the GPU's top-end speed in the composite score, indicating full-throttle performance with little load. This makes for a less GPU-dependent test environment, but it is sometimes helpful in creating a baseline for measuring maximum output performance. At the 1280x1024 resolution used by 17" and 19" monitors, the CPU and memory have too much influence on the results to be used in a video card test. At the widescreen resolutions of 1680x1050 and 1920x1200, the performance differences between video cards under test are mostly down to the cards themselves.

ATI_Radeon_HD5830_Video_Card_Crysis_NoAA_1680.jpg

With medium screen resolution and no AA dialed in, the HD5830 continues to have a field day. Please remember all the test results in this article are with maximum allowable image quality settings. Also, it's good to remember how all the performance numbers in Crysis took a major hit when Benchmark Reviews switched over to the DirectX 10 API for all our testing. Considering all that, 31 FPS is a great result, especially as it beats the GTX285 again. One frame/second isn't much of a difference in performance, but there is that $100 price difference between the two to consider.

ATI_Radeon_HD5830_Video_Card_Crysis_NoAA_1920.jpg

At 1920x1200 resolution, everything looks the same; even the 512MB cards are still hanging in there. Those old HD48xx series cards were really good performers in Crysis, but they are giving up 8-12 FPS to ATI's new budget king.

ATI_Radeon_HD5830_Video_Card_Crysis_4xAA_1680.jpg

Now let's turn up the heat a bit and add some anti-aliasing. With 4x AA cranked in, the HD5830 backs off ever so slightly, covering 42% of the performance gap between the HD5770 and the HD5850. That's not 50% or above, but it's still a respectable result, and of course it squeaks by the GTX285 again.
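
The gap-fill percentages quoted throughout this review come from a simple calculation: how far the middle card's frame rate sits between its slower and faster siblings. A quick sketch (the FPS values below are placeholders for illustration, not our test data):

```python
def gap_filled(low_fps, mid_fps, high_fps):
    """Fraction of the (low -> high) performance gap covered by the middle card."""
    return (mid_fps - low_fps) / (high_fps - low_fps)

# Placeholder example: HD5770 at 20 FPS, HD5830 at 24.2 FPS, HD5850 at 30 FPS
print(f"{gap_filled(20.0, 24.2, 30.0):.0%}")  # -> 42%
```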

ATI_Radeon_HD5830_Video_Card_Crysis_4xAA_1920.jpg

This is one of our toughest tests: 1920x1200, maximum quality levels, and 4x AA. Only one card gets above 30 FPS in this test, and it's the fastest single-GPU card on the planet, the Radeon HD5870. In the middle ranges, the HD5830 holds on to its spot, roughly halfway between the HD5770 and HD5850. What I like about this test is that it shows how far ATI has come in one generation of video cards. The HD4830, which was the equivalent card in the HD48xx lineup, only manages about 9 FPS, while the current-generation card puts up 21. I see real progress here, and I just don't get it when people want to compare every card in the HD5xxx series to the HD4890.

Product Series Stream Processors Core Clock (MHz) Shader Clock (MHz) Memory Clock (MHz) Memory Amount Memory Interface
MSI Radeon HD4830 (R4830 T2D512) 640 585 N/A 900 512MB GDDR3 256-bit
XFX Radeon HD5750 (HD-575X-ZNFC) 720 700 N/A 1150 1.0GB GDDR5 128-bit
ASUS Radeon HD4850 (EAH4850 TOP) 800 680 N/A 1050 512MB GDDR3 256-bit
ATI Radeon HD5770 (Engineering Sample) 800 850 N/A 1200 1.0GB GDDR5 128-bit
ATI Radeon HD5830 (Engineering Sample) 1120 800 N/A 1000 1.0GB GDDR5 256-bit
ASUS GeForce GTX 260 (ENGTX260 MATRIX) 216 576 1242 999 896MB GDDR3 448-bit
XFX Radeon HD5850 (21162-00-50R) 1440 725 N/A 1000 1.0GB GDDR5 256-bit
MSI GeForce GTX 275 (N275GTX Twin Frozr OC) 240 666 1476 1161 896MB GDDR3 448-bit
ASUS GeForce GTX 285 (GTX285 MATRIX) 240 662 1476 1242 1.0GB GDDR3 512-bit
ATI Radeon HD5870 (Engineering Sample) 1600 850 N/A 1200 1.0GB GDDR5 256-bit
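
The spec table above also lets you work out rough theoretical numbers for each ATI card: single-precision shader throughput (each stream processor performs one multiply-add, i.e. 2 FLOPs, per clock) and memory bandwidth (GDDR5 transfers 4 bits per pin per clock, GDDR3 transfers 2). This is back-of-the-envelope math, not a measured result:

```python
def gflops(stream_processors, core_mhz):
    """Theoretical single-precision GFLOPS: SPs x 2 FLOPs x clock."""
    return stream_processors * 2 * core_mhz / 1000.0

def bandwidth_gbps(mem_mhz, bus_bits, gddr5=True):
    """Theoretical memory bandwidth in GB/s for a given clock and bus width."""
    transfers_per_clock = 4 if gddr5 else 2   # GDDR5 vs GDDR3
    return mem_mhz * 1e6 * transfers_per_clock * (bus_bits // 8) / 1e9

print(gflops(1120, 800))          # HD5830: 1792.0 GFLOPS
print(bandwidth_gbps(1000, 256))  # HD5830: 128.0 GB/s
print(bandwidth_gbps(1200, 128))  # HD5770: 76.8 GB/s
```

Note that the HD5830's 256-bit bus gives it a large bandwidth edge over the HD5770 even at a lower memory clock.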

In our next section, Benchmark Reviews tests with Devil May Cry 4 Benchmark. Read on to see how a blended high-demand GPU test with low video frame buffer demand will impact our test products.

Devil May Cry 4 Benchmark

Devil May Cry 4 was released for the PC platform in 2008 as the fourth installment in the Devil May Cry video game series. DMC4 is a direct port from the PC platform to the console versions, which operate at a native 720p game resolution with no other platform restrictions. Devil May Cry 4 uses the refined MT Framework game engine, which has been used for many popular Capcom game titles over the past several years.

MT Framework is an exclusive seventh-generation game engine built for games developed for the PlayStation 3, Xbox 360, and PC. MT stands for "Multi-Thread", "Meta Tools" and "Multi-Target". Capcom originally intended to license an outside engine, but none matched their specific requirements for performance and flexibility. Games using the MT Framework are originally developed on the PC and then ported to the two console platforms.

The PC version features a special bonus called Turbo Mode, which runs the game at a slightly faster speed, and adds a new difficulty called Legendary Dark Knight Mode. The PC version also offers both DirectX 9 and DirectX 10 modes for the Windows XP, Vista, and Windows 7 operating systems.

It's always nice to be able to compare the results we receive here at Benchmark Reviews with the results you get on your own computer system. Usually this isn't possible, since differences in settings and configurations make it nearly impossible to match one system to the next; plus, you have to own the game or benchmark tool we used.

Devil May Cry 4 fixes this by offering a free benchmark tool for download. Because the DMC4 MT Framework game engine places a rather low demand on today's cutting-edge video cards, Benchmark Reviews tests at 1920x1200 resolution with 8x AA (the highest AA setting available to Radeon HD video cards) and 16x AF.

Devil May Cry 4 is not as demanding a benchmark as it used to be. Only scenes #2 and #4 are worth looking at from the standpoint of separating the fastest video cards from the slower ones. Still, it represents a typical environment for many games our readers still play on a regular basis, so it's good to see what works with it and what doesn't. Any of the tested cards will do a credible job in this application, and the performance scales in a fairly linear fashion: you get what you pay for when running this game, at least in benchmarks. This is one title where you can generally use the maximum available anti-aliasing settings, so NVIDIA users should feel free to crank it up to 16x. The DX10 "penalty" is of no consequence here.

ATI_Radeon_HD5830_Video_Card_DMC4_DX10_Scene2.jpg

This looks like one benchmark where the reduction in number of ROPs makes a difference. The HD5830 only beats the HD5770 by 9%, and the GTX200 cards get to strut their stuff.

ATI_Radeon_HD5830_Video_Card_DMC4_DX10_Scene4.jpg

In Scene #4, the HD5850 doesn't turn in quite the stunning performance it did in Scene #2, so the gap between it and the HD5770 isn't as large. Regardless, the HD5830 sticks closer to the HD5770 than it does to its big brother in this test, filling only 32% of the performance gap this time.

Our next benchmark in the series is a very popular FPS game that rivals Crysis for world-class graphics in a faraway land.

Far Cry 2 Benchmark Results

Ubisoft has developed Far Cry 2 as a sequel to the original, but with a very different approach to gameplay and story line. Far Cry 2 features a vast world built on Ubisoft's new game engine called Dunia, meaning "world", "earth" or "living" in Farsi. Far Cry 2 takes place in a fictional Central African landscape, set in a modern-day timeline.

The Dunia engine was built specifically for Far Cry 2 by the Ubisoft Montreal development team. It delivers realistic semi-destructible environments, special effects such as dynamic fire propagation and storms, real-time day-and-night cycles of sunlight and moonlight, a dynamic music system, and non-scripted enemy A.I. actions.

The Dunia game engine takes advantage of multi-core processors as well as multiple processors, and supports both DirectX 9 and DirectX 10. Only 2 or 3 percent of the original CryEngine code is re-used, according to Michiel Verheijdt, Senior Product Manager for Ubisoft Netherlands. Additionally, the engine is less hardware-demanding than CryEngine 2, the engine used in Crysis. However, it should be noted that Crysis delivers greater character and object texture detail, as well as more destructible elements within the environment: for example, trees breaking into many smaller pieces and buildings breaking down into their component panels. Far Cry 2 also supports the amBX technology from Philips. With the proper hardware, this adds effects like vibrations, ambient colored lights, and fans that generate wind effects.

There is a benchmark tool in the PC version of Far Cry 2, which offers an excellent array of settings for performance testing. Benchmark Reviews used the maximum settings allowed for our tests, with the resolution set to 1920x1200. The performance settings were all set to 'Very High', Render Quality was set to 'Ultra High' overall quality level, 8x anti-aliasing was applied, and HDR and Bloom were enabled. Of course DX10 was used exclusively for this series of tests.

ATI_Radeon_HD5830_Video_Card_Far_Cry_2_DX10_1680.jpg

It's too early to call it a trend, but after seeing the HD5830 struggle a bit with the oldest benchmark in our test suite, I see it falling rather flat here on one of our newest gaming benchmarks. Once again the HD5850 really stands out, and I think you have to point the finger at the fact that the HD5850 has twice as many ROPs as the HD5830.

Although the Dunia engine in Far Cry 2 is slightly less demanding than the CryEngine 2 engine in Crysis, the load the two place on the GPU appears to be very close. In Crysis we didn't dare to test AA above 4x, whereas we use 8x AA and 'Ultra High' settings in Far Cry 2. Using the short 'Ranch Small' time demo (which yields the lowest FPS of the three tests available), many of the midrange products we've tested are capable of producing playable frame rates with the settings all turned up. We also see a different effect when switching our testing to DirectX 10: Far Cry 2 seems to have been optimized, or at least written, with a clear understanding of DX10 requirements.

ATI_Radeon_HD5830_Video_Card_Far_Cry_2_DX10_1920.jpg

The Radeon HD5830 hangs disappointingly close to its little brother again in the higher-resolution testing. Although the Dunia engine seems to be optimized for NVIDIA chips, the mix of GPU components ATI incorporated in the 5850 and 5870 GPUs seems optimal for this game. That's obviously not the case for the HD5830.

Our next benchmark of the series puts our collection of video cards against some fresh graphics in the newly released Resident Evil 5 benchmark.

Resident Evil 5 Benchmark Results

PC gamers get the ultimate Resident Evil package in this new PC version with exclusive features including NVIDIA's new GeForce 3D Vision technology (wireless 3D Vision glasses sold separately), new costumes and a new mercenary mode with more enemies on screen. Delivering an infinite level of detail, realism and control, Resident Evil 5 is certain to bring new fans to the series. Incredible changes to game play and the world of Resident Evil make it a must-have game for gamers across the globe.

Years after surviving the events in Raccoon City, Chris Redfield has been fighting the scourge of bio-organic weapons all over the world. Now a member of the Bio-terrorism Security Assessment Alliance (BSAA), Chris is sent to Africa to investigate a biological agent that is transforming the populace into aggressive and disturbing creatures. New cooperatively-focused game play revolutionizes the way that Resident Evil is played. Chris and Sheva must work together to survive new challenges and fight dangerous hordes of enemies.

From a gaming performance perspective, Resident Evil 5 delivers ground-breaking graphics that utilize an advanced version of Capcom's proprietary MT Framework engine, which powered the hit titles Devil May Cry 4, Lost Planet, and Dead Rising. The game uses a wider variety of lighting to enhance the challenge: players must fear light as much as shadow, as lighting effects provide a new level of suspense while they attempt to survive in both harsh sunlight and extreme darkness. As usual, we maxed out the graphics settings on the benchmark version of this popular game to put the hardware through its paces. Much like Devil May Cry 4, it's relatively easy to get good frame rates in this game, so take the opportunity to turn up all the knobs and maximize the visual experience.

ATI_Radeon_HD5830_Video_Card_Resident_Evil_5_DX10_Scene3.jpg

The Resident Evil 5 benchmark tool provides a graph of continuous frame rates and averages for each of four distinct scenes, which take place in different areas of the compound. In addition, it calculates an overall average across the four scenes. The averages for scenes #3 and #4 are what we report here, as they are the most challenging. Looking at area #3, two things are obvious: the NVIDIA cards do exceptionally well in this game, and the HD5830 doesn't come anywhere near the performance of the HD5850. There is quite a bit of variation in the gameplay between the four areas, so let's see what happens in the next most challenging scene, area #4.

ATI_Radeon_HD5830_Video_Card_Resident_Evil_5_DX10_Scene4.jpg

Once again, in this test the HD5850 really stands out in the ATI lineup, and the HD5830 hangs back with the likes of the 57xx series. Let's keep looking, especially at some new titles that were developed for DX11, and see if this trend continues.

In our next section, we look at the newest DX11 benchmark, straight from Russia and the studios of Unigine. Their latest benchmark is called "Heaven", and it has some very interesting and non-typical graphics. So, let's take a peek at what Heaven v1.0 looks like.

Unigine - Heaven Benchmark Results

Unigine Corp. released "Heaven", the first DirectX 11 benchmark, based on its proprietary Unigine engine. The company has already made a name among overclockers and gaming enthusiasts for uncovering the realm of true GPU capabilities with its previously released "Sanctuary" and "Tropics" demos. Those benchmarking capabilities are coupled with the striking visual quality of its refined graphic art.

The "Heaven" benchmark excels at providing the following key features:

  • Native support of OpenGL, DirectX 9, DirectX 10 and DirectX 11
  • Comprehensive use of tessellation technology
  • Advanced SSAO (screen-space ambient occlusion)
  • Volumetric cumulonimbus clouds generated by a physically accurate algorithm
  • Dynamic simulation of changing environment with high physical fidelity
  • Interactive experience with fly/walk-through modes
  • ATI EyeFinity support

The distinguishing feature of the benchmark is hardware tessellation, a scalable technology for the automatic subdivision of polygons into smaller and finer pieces, which lets developers add far more detail to their games almost free of charge in terms of performance. Thanks to this procedure, the rendered image finally approaches the boundary of truly lifelike visual perception.
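
The geometric amplification behind tessellation is easy to see: each uniform subdivision pass splits one triangle into four, so the GPU generates detail that would be far too expensive to store in memory or send over the bus. A toy illustration (the mesh size is made up):

```python
def triangles_after(base_triangles, levels):
    """Triangle count after `levels` uniform subdivision passes (1 triangle -> 4)."""
    return base_triangles * 4 ** levels

base = 1_000                       # a coarse 1,000-triangle mesh
for lvl in range(4):
    print(lvl, triangles_after(base, lvl))
# level 0: 1000, level 1: 4000, level 2: 16000, level 3: 64000
```

Three passes already multiply the geometry 64x, which is why tessellation benchmarks like Heaven stress DX11 hardware in a way earlier tests could not.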

Unigine Corp. is an international company focused on top-notch real-time 3D solutions, with its development studio located in Tomsk, Russia. The company's main activity is development of Unigine, a cross-platform engine for virtual 3D worlds. Since the project's start in 2004, it has attracted the attention of many companies and independent developers, because Unigine has stayed on the cutting edge of real-time 3D visualization and physics simulation technologies.

ATI_Radeon_HD5830_Video_Card_Unigine_Heaven_DX10_4xAA.jpg

Getting back to a more synthetic type of benchmark, we see the HD5830 doing a little better than it did in Far Cry 2 and Resident Evil 5. The HD5850 still puts on a star performance and stands out from the crowd, but at least the HD5830 distinguishes itself from the HD5770. This test was run with 4x anti-aliasing; let's see how the cards stack up when we increase that to the maximum level of 8x.

ATI_Radeon_HD5830_Video_Card_Unigine_Heaven_DX10_8xAA.jpg

The impact of increasing the anti-aliasing is pretty clear. Two things happened; the older HD48xx cards took a nosedive, and so did the NVIDIA GT200 cards. While the HD5830 still can't catch up to the HD5850, it manages to just get by the GTX285. One thing I noticed while observing the benchmark wind its way through the streets of Heaven 1.0; when smoke from the chimneys was in the scene, the frame rate dropped radically. It really hurt the older cards; I'm not sure if it was just their memory deficit, or what caused this effect.

Let's take a look at one more benchmark, a decidedly less cheerful scenario in a post-apocalyptic "Zone", which is traversed by mercenary guides called Stalkers.

S.T.A.L.K.E.R.: Call of Pripyat Benchmark Results

The events of S.T.A.L.K.E.R.: Call of Pripyat unfold shortly after the end of S.T.A.L.K.E.R.: Shadow of Chernobyl. Having discovered an open path to the Zone's center, the government decides to mount a large-scale military operation, code-named "Fairway", aimed at taking the CNPP under control. According to the operation's plan, the first military group is to conduct aerial reconnaissance of the territory to map out the detailed locations of anomalous fields. Making use of those maps, the main military force is then to be dispatched. Despite thorough preparations, the operation fails; most of the vanguard helicopters crash. To collect information on the reasons behind the operation's failure, Ukraine's Security Service sends an agent into the Zone's center.

S.T.A.L.K.E.R.: CoP is built on the X-Ray game engine v1.6 and implements several ambient occlusion (AO) techniques, including one that AMD developed. AMD's AO technique is optimized to run efficiently on Direct3D 11 hardware, and has been adopted by a number of games (e.g. BattleForge, HAWX, and the new Aliens vs. Predator) for the distinct effect it adds to the final rendered image. This AO technique is called HDAO, which stands for 'High Definition Ambient Occlusion', because it picks up occlusion from fine details in normal maps.
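
To give a feel for what screen-space AO techniques do, here is a deliberately toy sketch: for each pixel, compare its depth against nearby depth-buffer samples, and darken it in proportion to how many neighbors sit in front of it. Real SSAO/HDAO implementations sample in a hemisphere, weight by distance, and (in HDAO's case) read normal maps; none of the numbers or kernel choices below come from the game.

```python
import numpy as np

def toy_ssao(depth, radius=1, strength=0.5):
    """Return a lighting factor in [0, 1] per pixel (1 = fully lit, lower = occluded)."""
    occlusion = np.zeros_like(depth, dtype=float)
    samples = 0
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            if dx == 0 and dy == 0:
                continue
            # Shift the depth buffer so each pixel sees one neighbor's depth.
            neighbor = np.roll(np.roll(depth, dy, axis=0), dx, axis=1)
            occlusion += (neighbor < depth)   # a closer neighbor occludes this pixel
            samples += 1
    return 1.0 - strength * occlusion / samples

# A flat floor with one raised block: pixels beside the block get darkened.
depth = np.full((5, 5), 10.0)
depth[2, 2] = 5.0                  # the block is closer to the camera
ao = toy_ssao(depth)
print(ao[2, 1], ao[2, 2])          # edge pixel darkened, block itself fully lit
```

Even this crude version shows why the effect is expensive: every pixel reads many depth samples per frame, which is exactly the kind of workload that separates the GPU architectures in the charts below.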

ATI_Radeon_HD5830_Video_Card_STALKER_DX10_SSAO_Off.jpg

Within the limits imposed by the NVIDIA cards that don't support DirectX 11, we turn the settings in S.T.A.L.K.E.R.: Call of Pripyat all the way up. The one setting we look at individually is SSAO, one of the technologies that made its appearance in DirectX 10. In the first test, with SSAO turned off, we see a familiar pattern in the comparison between the HD5770, HD5830, and HD5850. Specifically, the HD5830 holds only a 10% performance advantage over the HD5770, while the HD5850 rises above all expectations. No wonder people love that card; and remember, this testing was all done at stock clock rates, which are pretty low for the HD5850 as it leaves the factory.

ATI_Radeon_HD5830_Video_Card_STALKER_DX10_SSAO_Default.jpg

Once we turn SSAO on and set it to High, the HD5830 gains back some of its performance advantage over the HD5770. The other thing that happens is that the NVIDIA cards lose out big time. Despite the company's insistence that DX11 is largely unnecessary, their performance on one of the key enabling technologies of DX10 is less than compelling. This is one rendering technique that just pins the NVIDIA GPUs to the ground. How often do you see an HD4850 coming within 10% of a GTX285 and matching a GTX275?

In our next section, we investigate the thermal performance of the Radeon HD5830, and see whether the cut-down Cypress GPU die runs cool with the full cooling complement of the HD5870 brought to bear on it.

ATI Radeon HD5830 Temperature

It's hard to know exactly when the first video card got overclocked, and by whom. What we do know is that it's hard to imagine a computer enthusiast or gamer today that doesn't overclock their hardware. Of course, not every video card has the head room. Some products run so hot that they can't suffer any higher temperatures than they generate straight from the factory. This is why we measure the operating temperature of the video card products we test.

To begin testing, I use GPU-Z to measure the idle temperature as reported by the GPU. Next I use FurMark 1.7.0 to generate maximum thermal load and record GPU temperatures in high-power 3D mode. The ambient room temperature remained stable at 23°C throughout testing. The ATI Radeon HD5830 video card recorded 35°C in idle 2D mode, and increased to 70°C after 20 minutes of stability testing in full 3D mode, at 1920x1200 resolution with the maximum MSAA setting of 8x. The fan was left on its stock, automatic settings for this test.

70°C is a good result for temperature stress testing, especially with stock fan settings. The built-in fan controller generally ran the fan at 1140 RPM during 2D or idle use. On most benchmarks the temperature never got above 57°C, and the fan stayed at that speed. Once temperatures got above 60°C, the controller ramped the fan up, to a maximum of 1600 RPM. Of course, retail versions of the card will have completely different cooling solutions, so these results are somewhat academic.
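
The fan behavior we observed can be sketched as a simple temperature-to-RPM curve: flat at 1140 RPM below roughly 60°C, then ramping toward 1600 RPM around the 70°C load temperature. The breakpoints below are inferred from our observations, not from any published fan table for this card.

```python
def fan_rpm(temp_c, idle_rpm=1140, max_rpm=1600, ramp_start=60.0, ramp_end=70.0):
    """Sketch of the observed fan curve: flat at idle, linear ramp, then capped."""
    if temp_c <= ramp_start:
        return idle_rpm
    if temp_c >= ramp_end:
        return max_rpm
    t = (temp_c - ramp_start) / (ramp_end - ramp_start)   # 0..1 across the ramp
    return idle_rpm + t * (max_rpm - idle_rpm)

print(fan_rpm(45))   # -> 1140   (idle and light benchmarks)
print(fan_rpm(65))   # -> 1370.0 (halfway up the ramp)
print(fan_rpm(70))   # -> 1600   (FurMark load)
```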

FurMark is an OpenGL benchmark that heavily stresses and overheats the graphics card with fur rendering. The benchmark offers several options allowing the user to tweak the rendering: fullscreen or windowed mode, MSAA selection, window size, and duration. The benchmark also includes a GPU Burner mode (stability test). FurMark requires an OpenGL 2.0 compliant graphics card with a lot of GPU power! As an oZone3D.net partner, Benchmark Reviews offers a free download of FurMark to our visitors.

ATI_Radeon_HD5830_Video_Card_furmark_temp.jpg

FurMark does two things extremely well: it drives the thermal output of any graphics processor higher than any other application or video game, and it does so consistently every time. While FurMark is not a true benchmark for comparing different video cards, it works well for comparing a product against itself with different drivers or clock speeds, or for testing the stability of a GPU, since it raises temperatures higher than any other program. But in the end, it's a rather limited tool.

In our next section, we discuss electrical power consumption and learn how well (or poorly) each video card will impact your utility bill...

VGA Power Consumption

Life is not as affordable as it used to be, and items such as gasoline, natural gas, and electricity top the list of resources whose prices have exploded over the past few years. Add to this the limits of non-renewable resources compared to current demand, and you can see that prices are only going to get worse. Planet Earth needs our help, and needs it badly. With forests becoming barren of vegetation and snow-capped poles quickly turning brown, the technology industry has a new attitude about suddenly becoming "green". I'll spare you the powerful marketing hype that I get from various manufacturers every day and get right to the point: your computer hasn't been doing much to help save energy... at least up until now.

ATI_Radeon_HD5830_Video_Card_Road_to_Fusion.jpg

To measure isolated video card power consumption, Benchmark Reviews uses the Kill-A-Watt EZ (model P4460) power meter made by P3 International. A baseline test is taken without a video card installed inside our computer system, which is allowed to boot into Windows and rest idle at the login screen before power consumption is recorded. Once the baseline reading has been taken, the graphics card is installed and the system is again booted into Windows and left idle at the login screen. Our final loaded power consumption reading is taken with the video card running a stress test using FurMark. Below is a chart with the isolated video card power consumption (not system total) displayed in Watts for each specified test product:

Video Card Power Consumption by Benchmark Reviews
(sorted by combined total power; values are Idle Power / Loaded Power)

NVIDIA GeForce GTX 480 SLI Set: 82 W / 655 W
NVIDIA GeForce GTX 590 Reference Design: 53 W / 396 W
ATI Radeon HD 4870 X2 Reference Design: 100 W / 320 W
AMD Radeon HD 6990 Reference Design: 46 W / 350 W
NVIDIA GeForce GTX 295 Reference Design: 74 W / 302 W
ASUS GeForce GTX 480 Reference Design: 39 W / 315 W
ATI Radeon HD 5970 Reference Design: 48 W / 299 W
NVIDIA GeForce GTX 690 Reference Design: 25 W / 321 W
ATI Radeon HD 4850 CrossFireX Set: 123 W / 210 W
ATI Radeon HD 4890 Reference Design: 65 W / 268 W
AMD Radeon HD 7970 Reference Design: 21 W / 311 W
NVIDIA GeForce GTX 470 Reference Design: 42 W / 278 W
NVIDIA GeForce GTX 580 Reference Design: 31 W / 246 W
NVIDIA GeForce GTX 570 Reference Design: 31 W / 241 W
ATI Radeon HD 5870 Reference Design: 25 W / 240 W
ATI Radeon HD 6970 Reference Design: 24 W / 233 W
NVIDIA GeForce GTX 465 Reference Design: 36 W / 219 W
NVIDIA GeForce GTX 680 Reference Design: 14 W / 243 W
Sapphire Radeon HD 4850 X2 11139-00-40R: 73 W / 180 W
NVIDIA GeForce 9800 GX2 Reference Design: 85 W / 186 W
NVIDIA GeForce GTX 780 Reference Design: 10 W / 275 W
NVIDIA GeForce GTX 770 Reference Design: 9 W / 256 W
NVIDIA GeForce GTX 280 Reference Design: 35 W / 225 W
NVIDIA GeForce GTX 260 (216) Reference Design: 42 W / 203 W
ATI Radeon HD 4870 Reference Design: 58 W / 166 W
NVIDIA GeForce GTX 560 Ti Reference Design: 17 W / 199 W
NVIDIA GeForce GTX 460 Reference Design: 18 W / 167 W
AMD Radeon HD 6870 Reference Design: 20 W / 162 W
NVIDIA GeForce GTX 670 Reference Design: 14 W / 167 W
ATI Radeon HD 5850 Reference Design: 24 W / 157 W
NVIDIA GeForce GTX 650 Ti BOOST Reference Design: 8 W / 164 W
AMD Radeon HD 6850 Reference Design: 20 W / 139 W
NVIDIA GeForce 8800 GT Reference Design: 31 W / 133 W
ATI Radeon HD 4770 RV740 GDDR5 Reference Design: 37 W / 120 W
ATI Radeon HD 5770 Reference Design: 16 W / 122 W
NVIDIA GeForce GTS 450 Reference Design: 22 W / 115 W
NVIDIA GeForce GTX 650 Ti Reference Design: 12 W / 112 W
ATI Radeon HD 4670 Reference Design: 9 W / 70 W
* Results are accurate to within +/- 5W.

The prototype ATI Radeon HD5830 pulled 26 watts (156 W total minus the 130 W baseline) at idle, and 142 watts (272 minus 130) when running full out, using the test method outlined above. The idle power consumption is very close to the factory number of 25 W, but the load value is well below the 175 W factory spec. Perhaps FurMark has been outfoxed by ATI this time? I've heard that inside ATI the program got nicknamed the "FurMark virus" after too many RMA units perished while running it, and the company is rumored to have incorporated safety measures in recent firmware to prevent this from happening.
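
The baseline subtraction behind those numbers is the whole method; as a quick sketch (the 130 W baseline and the 156 W / 272 W wall-meter totals are the figures quoted in this paragraph):

```python
BASELINE_W = 130   # system draw at the login screen with no video card installed

def card_power(total_watts, baseline_watts=BASELINE_W):
    """Isolated video card draw: wall-meter total minus the no-card baseline."""
    return total_watts - baseline_watts

print(card_power(156))   # HD5830 idle:   26 W
print(card_power(272))   # HD5830 loaded: 142 W
```

Note that the +/- 5 W meter tolerance applies to both readings, so the subtracted result carries a slightly larger uncertainty than either reading alone.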

Radeon HD5830 Final Thoughts

Why did ATI leave this huge hole in their product line for so long? The flagship ATI video cards made a huge splash last September, but according to Mercury Research, cards costing over $200 only make up 7% of the market, and the 57xx series landed in the $100-$200 range, which makes up 27% of the market. That leaves a huge opening in the sub-$100 market, and ATI was busy filling in the gaps with all-new, DirectX 11-capable cards in this segment. Enthusiasts may laugh at the diminutive HD55xx series and the HD5450, with its 80 shaders, but they provide a much-needed revenue stream for ATI. Don't begrudge them that; it's what pays for all the R&D that produced the 58xx series in the first place. BTW, did you notice that they released a Mobility Radeon HD5830 a while ago? No, it's not the same chip that we are looking at today; not by a long shot.

So, the halo products were doing fine; in fact they were in short supply for several months due to manufacturing yield problems at the chip foundry in Taiwan. Now, the middle ground and the HTPC markets are taken care of. There are enough chips floating out of TSMC to keep the retailer's shelves stocked. Now what...? Oh, yeah, let's go back and finish off the premier product line with a couple of easy wins. One card can fill the gap between the 58xx and 57xx series, and a dedicated Eyefinity HD5870 card for the AV market will sell like hotcakes at Belgian waffle prices. Because in that market, you're always spending someone else's money.

Let's play a game of "What If". What if you were King of ATI, and you knew that there was a gap in your product line, so you told your minions to go and design something to fill that gap. Lo and behold, some weeks later, the engineers came back with three proposals, because they had been arguing for almost the entire time over how to design the product. It turns out that there are three very easy, very plausible ways to build a product that will meet the performance requirements. Each of them is correct from a technical perspective, so the King has to decide. (You all knew that Marketing is the King, right...LOL)

  1. Crank up the 57xx product with selected Juniper GPUs that will run 1 GHz+, and a slightly higher spec memory, easily available from several suppliers.
  2. Take another 160 Stream Processors (10%) away from the Cypress GPU (1280 left), and down-clock it to the exact performance target you want. (This was the highly successful strategy for the 5850, BTW.)
  3. Take away 320 additional Stream Processors from the Cypress GPU (1120 left), disable some additional Texture Units, and gut the ROPs down to half strength. Take advantage of the high clock rates that are achievable with the latest 40nm chips that you are already paying dearly for, and make up the performance you lost by disabling over 30% of the working parts in each section of the architecture.
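
The three proposals can be compared with the usual 2-FLOPs-per-stream-processor-per-clock rule of thumb. A quick sketch (the option-1 clock is the speculative 1 GHz figure from the list above, not a shipping product):

```python
def gflops(stream_processors, core_mhz):
    """Theoretical single-precision GFLOPS: SPs x 2 FLOPs x clock."""
    return stream_processors * 2 * core_mhz / 1000.0

option_1 = gflops(800, 1000)    # hot-clocked Juniper (57xx silicon at 1 GHz)
option_3 = gflops(1120, 800)    # the HD5830 as shipped
print(option_1, option_3)       # 1600.0 vs 1792.0 GFLOPS

# Option 2: what clock would a 1280-SP Cypress need to hit the same raw target?
clock_needed = option_3 * 1000.0 / (1280 * 2)
print(round(clock_needed))      # -> 700 MHz
```

Raw shader throughput is only one axis, of course; option 3 also halves the ROPs, which is exactly where several of the benchmarks above suggest the HD5830 gives ground.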

Well, the world is waiting for your answer... Kings are infallible, you know, so whatever you say will automatically be correct, for all time. It's just that the wrong decision is going to cost you money somewhere down the road.

Kings have special privileges, so I'm going to invoke mine and answer "1 & 3". I think a turbocharged 57xx is already in the product roadmap, it's just a question of time. I think #2 is what the market wanted, because they had already seen how well the HD5850 scaled with GPU clock speed, and they wanted to be able to overclock the 5830 and get 5850 performance out of it. Just like they saw everyone doing with the 5850, juicing it up to compete with the 5870, they wanted a repeat performance of The People's Champion.

Alas, the King didn't want to lose all those HD5850 sales, at those nice HD5850 prices. I can't blame him; I would have done the same thing. Now, if you'll excuse me, I'm going to go try and get that 5830 chip up to 1.0 GHz, and see what it'll really do.

Radeon-HD5830_GPU-Z.png

ATI Radeon HD5830 Conclusion

The performance of the HD5830 GPU is really what this entire review is all about. The design is a derivative of a known entity, or perhaps I should say "entities", since the hard-working chip requires the power supply from the HD5870 in order to perform reliably at its 800 MHz clock rate. ATI had several choices to make when they down-sized the performance of this Cypress chip, and they chose to shed more Stream Processors this time around, rather than reduce the clock rate significantly. ATI hit their overall performance target, but the rub is that enthusiasts won't be able to jack up the clock and reap the kind of performance gains they got with the HD5850. The overclockers were hoping for the same easy increases this time, and I can understand their disappointment, but I don't think there is a legitimate complaint here. Just because you got something for nothing once doesn't mean you're entitled to it over and over again.
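To put a number on that clock-headroom argument, here is a rough sketch, assuming performance scales linearly with shader throughput (it doesn't quite, but the HD5850 came close) and using the stock HD5850 configuration of 1440 stream processors at 725 MHz:

```python
def peak_gflops(stream_processors, clock_ghz):
    # Two floating-point operations per stream processor per clock
    return stream_processors * 2 * clock_ghz

hd5830 = peak_gflops(1120, 0.800)   # ~1792 GFLOPS at stock
hd5850 = peak_gflops(1440, 0.725)   # ~2088 GFLOPS at stock

# Core clock the HD5830 would need just to match a stock HD5850 on paper
needed_mhz = hd5850 / (1120 * 2) * 1000
print(round(needed_mhz))  # ~932 MHz, uncomfortably close to the practical ceiling for 40nm Cypress
```

That is why "juicing it up" the way people did with the HD5850 is a much harder trick to repeat on this card.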

The variance in relative performance between the various benchmarks is a bit troubling. Everyone runs their benchmarks with slightly different settings, and we may have touched on some weak spots. The mix of GPU components, the recipe for the HD5830, if you will, is unique. It's not a linear scaling factor like the HD5850 was, so you have to pay attention to what works in the games you like to play. Crysis certainly worked very well with this card, and for some, that will be enough.

amd_ati_radeon_hd-5870_video_card_splash.jpg

The appearance of retail Radeon HD5830 cards is going to vary quite a bit, based on the in-house design of the Add-In-Board partners. It's clear that this design needs a healthy dose of cooling to perform reliably, so this is an area that each of the partners can highlight and differentiate their offering from the others. Based on some of the images ATI shared with us, there continues to be no shortage of creativity in this area.

The build quality of the Radeon HD5830 as a product is a bit hard to assess, as all I had to evaluate was a prototype. The card I got looked exactly like an HD5870 production part; that is to say, well built and solidly constructed. Honestly, at this end of the market it's hard to find products that are poorly crafted. The fully enclosed cooler, full metal back plate, and the general fit and finish are exemplary; now it's up to the AIB partners to match this level of quality at a price decidedly lower than the HD5870 commands.

The features of the HD5830 may seem slightly less amazing, now that we've been exposed to them since last September. Still, no one else offers an equivalent combination of features: DirectX 11, full ATI Eyefinity support, ATI Stream technology, DirectCompute 11 and OpenCL support, and HDMI 1.3 with Dolby TrueHD and DTS-HD Master Audio. We've barely scratched the surface in this review, focusing almost exclusively on gaming performance, but the card excels at other uses as well.

As of the launch date, February 25th, ATI is aiming at an average retail price of $239 for the HD5830. Since they're not producing the whole cards, and are only providing the ASICs and acting as technical advisors to the AIB partners, ATI predicts a much wider price range than usual for this product. A quick look at Newegg shows the midpoint between the 5770 and 5850 video card prices to be $230. I don't think any of the partners are going to have trouble hitting that target if they want to.

There is a huge price gap between the 5770 and 5850 video cards; currently it's the difference between $160 and $300, using the low end for both cards as a reference. You could drive a truck through that gap, which is why there has been so much speculation on where the performance of the Radeon HD5830 would eventually land. As it turns out, ATI was aiming right at the middle, and in some applications they hit it dead on. Unfortunately, in some other applications, the HD5830 performed too close to HD5770 territory. Still, it's a very good thing to be able to buy a card that will keep up with a GTX285 in most applications for only $239. I know some will be disappointed with the perceived value, but I've heard too many people say that if it didn't perform close to the 5850 and cost $200 at most, it would be a total failure. That's a completely unrealistic expectation.

The ATI Radeon HD 5830 is available in many speeds and cooling options. The PowerColor PCS+ version offers better cooling and a factory overclock for $240, and as of March 2010 this is the most affordable model. Gigabyte ($250) and Sapphire ($250) also offer stock-clocked models at a decent price.

The ATI Radeon HD5830 earns a Silver Tachometer Award, because it fills an important slot in the family tree at a price that most hardcore users can handle, and it gets them into the 58xx series. This is the cheapest double-precision card available from ATI, and as more and more games take advantage of the capabilities in DirectX 11, this will have an enabling effect on real-world gaming performance that the single-precision cards won't be able to match.

Pros:silvertachaward.png

+ The price might drift down to $200
+ Unmatched feature set
+ Fills the huge performance gap nicely
+ Full 256-bit memory architecture
+ 1.79 TeraFLOPS for < $250 (at launch)
+ HDMI and DisplayPort interfaces included
+ Wide selection of AIB Partner designs to choose from
+ Easy to overclock with ATI Overdrive
+ CrossfireX scalability has been excellent

Cons:

- Only 1120 Stream Processors
- Only 16 ROPs, same as HD5770
- GPU clock almost maxed out
- Requires more power and cooling than HD5850
- Power supply cost may keep card price from falling

Ratings:

  • Performance: 8.25
  • Appearance: 9.00
  • Construction: 9.00
  • Functionality: 9.50
  • Value: 8.25

Final Score: 8.8 out of 10.

Quality Recognition: Benchmark Reviews Silver Tachometer Award.

Questions? Comments? Benchmark Reviews really wants your feedback. We invite you to leave your remarks in our Discussion Forum.



Comments 

 
# Total FAIL of EPIC proportionSiliconDoc 2010-03-03 19:30
Ok, enough of the smoothing over and red rooster fantasizing. After carefully reviewing the review, the overpriced and unavailable 5830 "keeps with" the GTX260, NOT THE GTX285.
---
Oh gawd, the fanboyism from reviewers (excusable to a degree because of card tester givie me one for free review reverso rate pressure) let alone the pock marked apocalypse of the raging reds is almost unbearable.
--
THIS CARD IS COMPETING WITH THE GTX260, AND SINCE IT CANNOT BE OVERCLOCKED WORTH A DING DANG, THE GTX260 BEATS IT PERIOD!
---
Ok, now back to the red ragers fanboy fantasies that crank the red rooster cards a full card tier or two above where they actually are when comparing them to Nvidia. Carry on red raging ravers.
 
 
# The only EPIC FAIL is your comment...Olin Coles 2010-03-04 14:20
Not sure which review you 'carefully read', but it wasn't this one. The GTX260 performs WAY below the 5830... and doesn't even compare in DX11 games. All you had to do is look at the results. Not sure who at NVIDIA signs your paycheck, but you're the one that's clearly biased.
 
 
# RE: The only EPIC FAIL is your comment...Scaramunga 2010-03-26 03:31
You should read on past the crysis benchmark.. it didnt perform well in far cry 2 or devil may cry 4
 
 
# RE: RE: The only EPIC FAIL is your comment...gamechld 2010-03-26 14:46
Actually, the 5830 beats the GTX260 by a fair margin in the Devil May Cry 4 test, I think you might have been looking at the wrong colors. You are correct though regarding the Far Cry 2 test.
 
 
# goatgoatonastick 2010-04-19 12:14
what does it matter in dx11 tests? most people will buy cards for dx10 games....
 
 
# RE: Total FAIL of EPIC proportionBruce Normann 2010-03-10 15:38
Despite reports to the contrary, from people you think would know better, the HD5830 was available from Newegg in two flavors on Launch Day. I don't know why you would use the term "unavailable" to describe this product.
 
 
# @ Silicondoc , sorry sir but YOU ARE A TOTAL FAIL OF EPIC PROPORTIONRudi 2010-06-03 04:20
Im pretty sure you the epic fail cause in crysis this card beat the GTX 260 by about 6-8 fps and compared to the GTX 260 the ATI 5830 is real bang for your buck , $300 AUD with DX11 and Eyefinty unlike the GTX 260 ,$ AUD , DX10... cant even play crysis at a decent FPS , my 9500gt plays it at 15+ FPS..4 less than the GTX 260

So tell me , whos being a fan boy? , you or the truthfull reviewer
 
 
# ROFLMAOSiliconDoc 2010-06-04 10:08
Fan boy, you mention crysis, 4 games reviewed for both cards, and the two cards traded those 4.
YOU are the insane fanboy fool, not I.
The card competes with the 260 as I said, P E R I O D.
It doesn't compete with the 285, P E R I O D.
Six fps, or six percent, at 60-70 frames per sec, means very little without minimum numbers, and the GTX260 wins in that, especially with the 8x and 16x the reviewer kindly included.
( almost every other review site will only include 0xAA and 4xAA since the ati cards take a HUGE hit compared to the nvidia cards when AA is cranked up - same with Tesselation now).
So, I am still absolutely correct, PERIOD.
I'm sure the blathering, raging red rooster fanboy in you allowed you to only look at one game review, drool into your tinfoil hat cup (after it fell off your gyrating epileptic gourd), and screech "my 9500gt!" - oh man - WHAT A FREAKIN FAN BOY! HAHAHAHAH YER 9500GT ! HAHAHAHAHA
 
 
# WTF?myself 2010-06-06 01:28
Dude you sound like a wierdo, P E R I O D!
 
 
# Lol he isRudi 2010-06-06 04:06
yepers he is , the GX 260 is a good card no doubt , but the 5830 still beats it no matter what you say
 
 
# Cyber BullyOlin Coles 2010-06-21 21:49
SiliconDoc:
After several complaints of your behavior, you are banned.

SiliconDoc

67.175.194.49 Atlanta, Georgia, USA
98.214.9.49 Rockford, Illinois, USA
 
 
 

Comments have been disabled by the administrator.
