PowerColor PCS+ AX5870 1GBD5-PPDHG2
Reviews - Featured Reviews: Video Cards
Written by Bruce Normann   
Tuesday, 20 April 2010

PowerColor PCS+ AX5870 Video Card Review

It's been some time since the launch of the ATI Radeon HD 5870, over six months from September of 2009. In that time, the graphics card world has not stood still, yet the HD 5870 reigned supreme for the entire time as the world's fastest (single) GPU. Now that Fermi has finally launched, that title belongs to the GTX480, but the 5870 has matured since its launch and it's time to take a look at where it's at today. Development has continued on both the hardware side and the software side. Driver updates have been a constant source of improvement and anguish, nothing unusual there. It seems like ATI and NVIDIA typically produce only one set of drivers in any given year that gets universal acclaim from the user community.

PowerColor_PCS+_AX5870_Video_Card_Front_Full_01.jpg

The PowerColor PCS+ AX5870 is one of several new HD 58xx series cards released by ATI AIB partners in the last few months that follow a new design pattern. Advances in power-semiconductor packaging have allowed for a simpler VRM implementation that also consumes much less real estate on the board. At the same time, the use of a down-flow HSF arrangement eliminates the dead spots found below and behind the blower wheel on the reference design. The combination makes for a smaller, more efficient, and more reliable card that's less costly to produce. I call that progress, but there are others who bemoan the process of removing cost from a design, commonly known as Value Engineering. I can understand the angst if functions and features are being removed, but finding a cheaper way to deliver the same performance is a good thing, unless you need to establish and maintain snob appeal for your product.

Benchmark Reviews has tested a couple of ATI Radeon HD 5870 video cards already, so the performance and features of the GPU are hardly news, but with a number of second-generation cards appearing in the marketplace, we thought it was time to sample one and see how the basic design has matured. Please follow along as we give you a detailed look at one of the latest high-end Radeons from PowerColor.

About the company: PowerColor

powercolor_logo_300px.png

PowerColor, established by Tul Corporation in 1997, is now a leading brand name in the graphics industry, well known for outstanding performance and innovative technology. As a leading provider of graphics cards, PowerColor offers powerful, reliable and cost-effective solutions to customers worldwide.

PowerColor is the platform of choice for avid PC gamers and video prosumers looking to get the best possible performance out of the latest graphics processors from ATI. PowerColor graphics cards deliver every last drop of supercharged performance from ATI's most advanced visual processing units (VPUs). Running a PowerColor graphics card, today's gamers are able to unleash the 3D graphics performance of their favorite games, playing them the way they were meant to be played.

PowerColor PCS+ AX5870 Features

The feature set of the ATI HD5870 hasn't changed since its introduction, and is basically common across the entire HD 5xxx series of cards. The major differences between any two cards in the series are in the number of processing units at various places in the architecture, and the memory interface. For those who perused the mountain of details that accompanied the 5800 series launch, this graphic should look very familiar. This time, there is a full working complement of shaders, ROPs, and texture units; the full power of the ATI Cypress-class chip is available for graphics processing.

PowerColor_PCS+_AX5870_Video_Card_Architecture_Full_5870.jpg

All ATI Radeon HD 5xxx Series graphics cards come with ATI Eyefinity Technology, which can instantly triple your visual real estate, up to three displays for the ultimate in innovative "wrap around" capabilities, all with crisp, sharp picture quality. ATI Eyefinity technology engages your peripheral vision and puts you right in the game. At the office, you can multi-task without needing to flip between windows. Ideal for multi-media applications, keep as many palettes or panels open as you would like, while you edit images or videos.

ATI Stream Technology unleashes the massive parallel processing power of your GPU to help speed up demanding every-day applications. Experience fast video encoding and transcoding, so that video playback, editing and transferring content to your iPod or other portable media players is quick and easy.

As the first fully Microsoft DirectX 11-compatible GPUs, the ATI Radeon HD 58xx Series delivers unrivaled visual quality and intense gaming performance. Enjoy in-your-face 3D visual effects and dynamic interactivity, with features like HDR Texture Compression, DirectCompute 11 and Tessellation.

The 58xx Series is further supersized with GDDR5 memory, up to a 1.8x graphics performance boost from ATI CrossFireX technology in dual-GPU mode, and unparalleled anti-aliasing and enhanced anisotropic filtering for slick graphics and supreme realism.

ATI Radeon HD 5870 GPU Feature Summary

  • 2.15 billion 40nm transistors
  • TeraScale 2 Unified Processing Architecture
    • 1600 Stream Processing Units
    • 80 Texture Units
    • 128 Z/Stencil ROP Units
    • 32 Color ROP Units
  • 256-bit GDDR5 memory interface
  • PCI Express 2.1 x16 bus interface
  • DirectX 11 support
    • Shader Model 5.0
    • DirectCompute 11
    • Programmable hardware tessellation unit
    • Accelerated multi-threading
    • HDR texture compression
    • Order-independent transparency
  • OpenGL 3.2 support [1]
  • Image quality enhancement technology
    • Up to 24x multi-sample and super-sample anti-aliasing modes
    • Adaptive anti-aliasing
    • 16x angle independent anisotropic texture filtering
    • 128-bit floating point HDR rendering
  • ATI Eyefinity multi-display technology [2,3]
    • Three independent display controllers - Drive three displays simultaneously with independent resolutions, refresh rates, color controls, and video overlays
    • Display grouping - Combine multiple displays to behave like a single large display
  • ATI Stream acceleration technology
    • OpenCL 1.0 compliant
    • DirectCompute 11
    • Accelerated video encoding, transcoding, and upscaling [4,5]
    • Native support for common video encoding instructions
  • ATI CrossFireX™ multi-GPU technology [6]
    • Dual GPU scaling
  • ATI Avivo HD Video & Display technology [7]
    • UVD 2 dedicated video playback accelerator
    • Advanced post-processing and scaling [8]
    • Dynamic contrast enhancement and color correction
    • Brighter whites processing (blue stretch)
    • Independent video gamma control
    • Dynamic video range control
    • Support for H.264, VC-1, and MPEG-2
    • Dual-stream 1080p playback support [9,10]
    • DXVA 1.0 & 2.0 support
    • Integrated dual-link DVI output with HDCP [11]
      • Max resolution: 2560x1600 [12]
    • Integrated DisplayPort output
      • Max resolution: 2560x1600 [12]
    • Integrated HDMI 1.3 output with Deep Color, xvYCC wide gamut support, and high bit-rate audio
      • Max resolution: 1920x1200 [12]
    • Integrated VGA output
      • Max resolution: 2048x1536 [12]
    • 3D stereoscopic display/glasses support [13]
    • Integrated HD audio controller
      • Output protected high bit rate 7.1 channel surround sound over HDMI with no additional cables required
      • Supports AC-3, AAC, Dolby TrueHD and DTS Master Audio formats
  • ATI PowerPlay™ power management technology [7]
    • Dynamic power management with low power idle state
    • Ultra-low power state support for multi-GPU configurations
  • Certified drivers for Windows 7, Windows Vista, and Windows XP
  1. Driver support scheduled for release in 2010
  2. Driver version 8.66 (Catalyst 9.10) or above is required to support ATI Eyefinity technology; to enable a third display, one panel with a DisplayPort connector is required
  3. ATI Eyefinity technology works with games that support non-standard aspect ratios, which is required for panning across three displays
  4. Requires application support for ATI Stream technology
  5. Digital rights management restrictions may apply
  6. ATI CrossFireX™ technology requires an ATI CrossFireX Ready motherboard and an ATI CrossFireX™ Bridge Interconnect for each additional graphics card, and may require a specialized power supply
  7. ATI PowerPlay™, ATI Avivo™ and ATI Stream are technology platforms that include a broad set of capabilities offered by certain ATI Radeon™ HD GPUs. Not all products have all features, and full enablement of some capabilities may require complementary products
  8. Upscaling subject to available monitor resolution
  9. Blu-ray or HD DVD drive and HD monitor required
  10. Requires Blu-ray movie disc supporting dual 1080p streams
  11. Playing HDCP content requires additional HDCP ready components, including but not limited to an HDCP ready monitor, Blu-ray or HD DVD disc drive, multimedia application and computer operating system
  12. Some custom resolutions require user configuration
  13. Requires 3D Stereo drivers, glasses, and display

Although most of this article will focus on gaming performance, it's important to remember that the Radeon HD5xxx series has the most extensive and effective streaming video processing technology available today. Many of the features referenced in the list above have real-world implications for seemingly simple tasks like web browsing.

PowerColor PCS+ AX5870 Specifications

If we just talk about the HD 5870 GPU, and the architecture that supports it, then this section is the second most important part of this review. As I mentioned in the introduction, this review is more about the design changes that the AIB partners have implemented in their recently updated cards. The reference cards that ATI released were meant to show off the full range of technology on offer, and the top cards like the HD 5870 got all the bells and whistles. The new designs focus more on providing the best performance per dollar. As part of that design brief, the clock rates for the PowerColor PCS+ version are also bumped up slightly.

  • Engine clock speed: 875 MHz
  • Processing power (single precision): 2.8 TeraFLOPS
  • Processing power (double precision): 560 GigaFLOPS
  • Polygon throughput: 875M polygons/sec
  • Data fetch rate (32-bit): 280 billion fetches/sec
  • Texel fill rate (bilinear filtered): 70 Gigatexels/sec
  • Pixel fill rate: 28 Gigapixels/sec
  • Anti-aliased pixel fill rate: 112 Gigasamples/sec
  • Memory clock speed: 1250 MHz
  • Memory data rate: 5.0 Gbps
  • Memory bandwidth: 160 GB/sec
  • Maximum board power: 188 Watts
  • Idle board power: 27 Watts
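Most of the throughput figures in this list follow directly from the unit counts in the GPU feature summary and the two clock speeds. As a sanity check, here's a short Python sketch that reproduces the numbers above; the 2 FLOPs per stream processor per clock (one multiply-add) and the 1/5-rate double precision are characteristics of the Cypress architecture:

```python
# Recompute the PCS+ AX5870 spec-sheet throughput figures from the
# unit counts and clock speeds listed above.
ENGINE_CLK_MHZ = 875        # PowerColor's factory-overclocked engine clock
MEM_CLK_MHZ    = 1250       # GDDR5 memory clock
STREAM_PROCS   = 1600
TEXTURE_UNITS  = 80
COLOR_ROPS     = 32
Z_STENCIL_ROPS = 128
BUS_WIDTH_BITS = 256

# Each stream processor retires one multiply-add (2 FLOPs) per clock.
sp_gflops = STREAM_PROCS * 2 * ENGINE_CLK_MHZ / 1000
dp_gflops = sp_gflops / 5                       # double precision runs at 1/5 rate
texel_rate  = TEXTURE_UNITS  * ENGINE_CLK_MHZ / 1000   # Gigatexels/sec
pixel_rate  = COLOR_ROPS     * ENGINE_CLK_MHZ / 1000   # Gigapixels/sec
sample_rate = Z_STENCIL_ROPS * ENGINE_CLK_MHZ / 1000   # Gigasamples/sec
# GDDR5 transfers 4 bits per pin per clock cycle (quad data rate).
bandwidth_gbs = MEM_CLK_MHZ * 4 * (BUS_WIDTH_BITS // 8) / 1000

print(sp_gflops / 1000)   # 2.8 TFLOPS
print(dp_gflops)          # 560.0 GFLOPS
print(texel_rate)         # 70.0
print(pixel_rate)         # 28.0
print(sample_rate)        # 112.0
print(bandwidth_gbs)      # 160.0
```

Every derived figure lands exactly on PowerColor's published numbers, so the spec sheet is internally consistent with the 875 MHz core and 1250 MHz memory clocks.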

PowerColor_PCS+_AX5870_Video_Card_IO_Top_34_02.jpg

Specifications are very important for graphics cards, because they tell a vital part of the story. However, testing is still the only way to see how any card actually performs, relative to other options that are available now and some older products that users may want to upgrade from. Pricing is also a factor, but there are so many variables that affect the marketplace, and of course, it's not static by any means. That's one of the reasons we're doing this review now, in the wake of the Fermi release, just to validate where the HD 5870 is in the marketplace, six months after its release.

ATI_Radeon_HD5870_Video_Card_HD5xxx_Price_v._Performance.jpg

The PowerColor PCS+ AX5870 is built with GPU chips that fully meet the top clock specification, and have no defects that would have killed one or more of the stream processor units. I don't think ATI is doing any special binning for PowerColor in this case; that's reserved for the PCS++ version of this card, which is factory overclocked to 950 MHz!

This is our first chance to examine a 2nd generation card based on the ATI Radeon HD 5870, so let's take a closer look at the PowerColor PCS+ AX5870, and see how their interpretation of the 5870 design specs has been implemented.

Closer Look: PowerColor PCS+ AX5870

The PowerColor PCS+ AX5870 is a great example of the continuing evolution of technology. The first generation products of any new technology are often over-designed in order to showcase the best parts, hide the bad parts, and ensure the highest possible reliability. The reference designs for the ATI HD 5870 certainly fit this mold, featuring state-of-the-art power controllers and a cooling assembly that only needed a 40% fan speed to handle the toughest GPU stress tests. The PowerColor PCS+ products started with a clean sheet of paper, and a design brief to achieve maximum performance at the lowest cost, without diminishing reliability. We're going to take a closer look at exactly how they accomplished that here, and in the next section.

PowerColor_PCS+_AX5870_Video_Card_Fan_Front_01.jpg

The first thing you notice with this video card is the set of heat pipes sprouting from the top of the fan shroud. This is a relatively new design trend that several vendors have embraced recently, and it's not confined to ATI cards; take a look at the new GTX480, for instance. The idea behind it is that the area under the fan shroud can now be completely filled with cooling fins. Of course, the additional height of the card, where the pipes stick out, may cause problems with some smaller cases and/or side-mounted case fans. I had no problems with the CM Storm Sniper case, with its rather large 230mm side fan, but it does have an extremely wide footprint. BTW, don't pay any attention to the published dimensions of this card; the 111mm height specification just refers to the PC board, which is not very useful, IMHO. The real height is more like 138mm to the top of the heatpipes.

PowerColor_PCS+_AX5870_Video_Card_Power_VRM_Section.jpg

In addition to the cooling changes, the PowerColor PCS+ AX5870 incorporates significant changes to the power supply. The new design, which a couple of AIB partners have picked up, provides the muscle of the original HD 5870, but without the complexity and flexibility of software voltage control. The overall size of the card has been reduced, it's shorter by about an inch, and this was achieved by both simplifying the power supply section and using the latest technology for the VRMs. The four small, square chips next to the filter chokes are DrMOS (Driver-MOSFET) chips from Renesas, which integrate three discrete power devices into a single chip, while occupying only half the space. We'll provide more detail on the power supply design in the next section. For now, let's just say that the availability of smaller, more highly integrated power chips couldn't have been timelier. PowerColor recommends a minimum PC power supply rating of 500 watts for systems that use this card. That's obviously going to depend on what else you have in the box, but the peak power numbers are 188W max, if you need to perform a more detailed analysis.

I do have a few nits to pick based on the image above; you can easily spot three instances of cost cutting. On the lower right, the fan connector has only two of the three positions filled, so the fan can't send back the tachometer reference signal. It's open-loop control for the DC fan; not really a performance issue, just an inconvenience. The four capacitors, in a vertical line on the right, are missing their four companions. Maybe they're not strictly necessary, but for good DC filtering, bulking up on the first set of filter caps is a best practice. Just above them is a spot where one complete LC (inductor-capacitor) filter pair is left unpopulated. The zero-ohm jumpers put in their place don't perform any useful function. This is the power feed for the memory chips, and I think they deserve better.

PowerColor_PCS+_AX5870_Video_Card_Wave_Solder_Problem.jpg

The build quality of the PowerColor card is generally very good. The heatsink-fan assembly is well thought out and executed perfectly. The DrMOS chips mentioned above get their own aluminum heat sink, with plenty of surface area on the fins for heat transfer, and the airflow pattern suits their profile. There was one glaring quality error which I'll discuss now. That huge silver "spear" in the image above is a loose piece of solder that somehow fell onto the back side of the board, and remained stuck there in a blob of solder flux, until I removed it with some tweezers after testing was complete. I know that wave soldering is a marvelously complex process, and lead-free solder has made it even more challenging, but there are several inspection tools or processes that would have, and should have, detected this problem. Incredibly, because the errant solder didn't short out any connections on the board, it passed final test.

PowerColor_PCS+_AX5870_Video_Card_Back_Side_Solder.jpg

As if to compensate, most of the board had excellent solder quality, as can be seen here. The component placement is quite good; this is the area on the back side of the board, directly below the GPU, and is one of the most crowded sections of the board. I need to include a matchstick in these photos to show a sense of scale; the littlest components are just specks when viewed with the naked eye. The entire image above covers an area that is only 15mm wide, and the small components on the right are spaced on 1mm centers. Take a look at a metric ruler... It's one of the most critical sections for build quality, as variations in stray capacitance here could impact the performance of the board, and certainly its overclocking ability.

There was also some white residue on the back surface of the board that should have been removed in a cleaning step at the end of the wave soldering process. Large patches of it can also be seen in the image that shows the entire back surface of the board.

PowerColor_PCS+_AX5870_Video_Card_PCB_Back_Full_01.jpg

The layout on the back of the board is pretty standard for a card in this class. There are fewer components mounted on the back side than on the reference design. Part of that has to do with the simpler power supply design; there is only one PWM controller mounted here, instead of the three separate ones on the more expensive cards. The GPU cooler is mounted with four spring-loaded screws, without the aid of a back plate. The red color of the PCB itself is a common feature of all recent ATI cards from PowerColor. The symbolism should be obvious for anyone who follows the graphics card market.

PowerColor_PCS+_AX5870_Video_Card_Power_End_01.jpg

The board is fed from two 6-pin PCI-E power connectors, keeping within the original constraints of the reference design. Some vendors are piling it on by replacing one of the 6-pin connections with an 8-pin. The 6-pin connection is highly underrated, at 75W each. Since the 8-pin connection is rated for 150W, tell me how 33% more pins give 100% more power. And BTW, the extra two pins are both used for the ground connection; there are still only three +12V pins, so it's really 0% more power pins providing 100% more power. The real capacity of a 6-pin connector is at least 100W, so there is closer to 275W available from the system, well above the card's 188W maximum requirement.
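Putting rough numbers on that argument, the budget works out as follows. Note that the 100W "real" 6-pin figure is an estimate, not an official PCI-E rating:

```python
# Power-budget arithmetic for the two 6-pin feed: spec ratings per the
# PCI Express specification, plus a more realistic 6-pin capacity estimate.
SLOT_W       = 75    # PCI-E x16 slot supplies up to 75W
SIX_PIN_SPEC = 75    # official 6-pin connector rating
SIX_PIN_REAL = 100   # estimated real-world 6-pin capacity (assumption)
BOARD_MAX_W  = 188   # PowerColor's stated maximum board power

spec_budget = SLOT_W + 2 * SIX_PIN_SPEC   # by the book
real_budget = SLOT_W + 2 * SIX_PIN_REAL   # closer to reality
print(spec_budget)                 # 225
print(real_budget)                 # 275
print(real_budget - BOARD_MAX_W)   # 87W of headroom
```

Even by the official ratings alone, 225W comfortably clears the card's 188W maximum, which is why the two 6-pin connectors are adequate here.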

So far, what we're seeing is a design that makes use of the knowledge gained from six months of experience in the marketplace with the reference design. Some features that no one dared omit on launch day have now been deleted, such as software voltage control. Some areas have benefited from new component technologies, like DrMOS power chips in the VRM section. Some parts have benefited from a different design philosophy: the down-flow HSF. We'll examine the impact of these design decisions in our testing section. For the moment, let's look at some of the features of the PowerColor PCS+ AX5870 in more detail.

Detailed Features: PowerColor PCS+ AX5870

For most high-end video cards, the cooling system is an integral part of the performance envelope for the card. "Make it run cooler, and you can make it run faster" has been the byword for achieving gaming-class performance from the latest and greatest GPUs. The PowerColor PCS+ AX5870 uses a modern GPU cooler design that pushes the envelope, literally. The heatpipes are routed out from underneath the fan shroud into the open, leaving more room below the shroud for cooling fins. All other things being equal, more surface area on the fins means more heat is transferred from the pipes into the air.

PowerColor_PCS+_AX5870_Video_Card_HSF_Pipes_01.jpg

Two large, 8mm diameter heatpipes are combined with two 6mm pipes, passing directly over the GPU die and then spreading to the outer reaches of the fin assembly once they make their escape from the confines of the shroud. You can see how maximum use is made of the area inside the shroud for the fins. Another positive feature of this down flow arrangement is that the memory chips get air directly blown onto them. Heat spreaders and heat sinks have their place, but direct impingement on the bare surface of the memory chips is pretty efficient. The downside to this arrangement is that almost all of the heat generated by this powerful card goes directly into the case, and very little goes out the vents in the back. The reference design is generally better suited to multi-card CrossFireX applications, for this reason.
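The fin-area argument is just Newton's law of cooling, Q = h * A * dT: at a fixed airflow (convection coefficient h) and a fixed fin-to-air temperature difference, dissipation scales linearly with fin area. The numbers below are illustrative round figures, not measurements of this cooler:

```python
# Toy model: heat shed by a fin stack under forced convection.
# All three inputs are assumed round numbers for illustration only.
def heat_dissipated(h, area_m2, delta_t):
    """Newton's law of cooling: watts = h (W/m^2.K) * area (m^2) * dT (K)."""
    return h * area_m2 * delta_t

H_CONV  = 50.0   # forced-convection coefficient, W/m^2.K (assumed)
DELTA_T = 40.0   # fin-to-air temperature difference, K (assumed)

print(heat_dissipated(H_CONV, 0.08, DELTA_T))  # smaller fin stack: 160.0 W
print(heat_dissipated(H_CONV, 0.10, DELTA_T))  # 25% more fin area: 200.0 W
```

In other words, filling the space under the shroud with 25% more fin area buys roughly 25% more heat rejection at the same fan speed, which is exactly what routing the heatpipes outside the shroud makes room for.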

PowerColor_PCS+_AX5870_Video_Card_Front_HSF_Separated_01.jpg

The GPU makes contact with a copper block that is soldered to the four heatpipes passing directly over the top of the die. The back sides of the heat pipes make direct contact with the cooling fins, providing an extra, very short path for heat to get transferred into the fins. The finish on the copper block was rough by modern standards, but at least it was flat. The thermal interface material (TIM) was evenly distributed, but was still piled on thicker than necessary. Excess TIM can cover up sloppy assembly methods, but for the most part I've seen even distributions that indicate a certain amount of care was used to mate the two surfaces. The TIM was applied to the top surface of the GPU and then spread out once the HSF was mounted. For a thorough discussion of best practices for applying TIM, take a look here.

PowerColor_PCS+_AX5870_Video_Card_Copper_Contact_Block_01.jpg

The PowerColor PCS+ 5870 uses Renesas R2J20602NP DrMOS (Driver-MOSFET) power semiconductor chips in the VRM section, although you would never know it unless you pulled the heatsinks off and looked. PowerColor doesn't mention it in any of their marketing materials, unlike other manufacturers that make a big deal about it. DrMOS is a term that describes the integration of three discrete devices into one chip. Putting the driver circuit and the two power MOSFETs on one chip not only saves space, it improves both thermal and high-frequency performance, compared to a discrete implementation. High clock frequencies aren't just helpful in computing; they also improve the efficiency and performance of many power supply circuits, in this case a DC-DC converter.

XFX_HD_5870_Video_Card_drmos_package.png

The main power supply controller chip used on the PowerColor PCS+ AX5870 is a UP6213AJ, a 4-phase PWM controller that, unlike the more expensive Volterra chips used on the reference 5870 boards, does not support software voltage control. The more adventurous overclockers among us can still perform old-school hardware volt mods, if the urge strikes. So, hope remains for those who desperately want to join the 1GHz GPU club without paying the full membership dues.

PowerColor_PCS+_AX5870_Video_Card_PWM_Control_Chip.jpg

The memory choice for the PowerColor PCS+ 5870 video card is consistent with the high GPU clock rates that ATI blessed this card with, and the mild overclock that PowerColor added on top of that. The basic 5870 specs only require 1200 MHz chips for the memory, but PowerColor has taken full advantage of these Samsung GDDR5 parts, which are good for 1250 MHz. Very few folks have successfully overclocked this RAM above 1300 MHz, but meeting its rated spec of 1250 is a cakewalk for high quality parts like these.

PowerColor_PCS+_AX5870_Video_Card_Samsung_Memory.jpg

We've spent a lot of time in this review on the board design, since this is our first sample of a true 2nd generation HD5870. Early custom cards from ATI AIB partners, like the Sapphire Radeon HD5870 Vapor-X we reviewed last November, used the reference PCB with all of its design features intact. Now that we've seen what's new and different with this latest generation, let's move on to the Testing section of our review.

Video Card Testing Methodology

This is the beginning of a new era for testing at Benchmark Reviews. With the release of Windows 7 to the marketplace, and given the prolonged and extensive pre-release testing that occurred on a global scale, there are compelling reasons to switch all testing to this new, and highly anticipated, operating system. Overall performance levels of Windows 7 have compared favorably to Windows XP, and there is solid support for the 64-bit version, something enthusiasts have been anxiously awaiting for several years.

Our site polls and statistics indicate that over 90% of our visitors use their PC for playing video games, and practically every one of you is using one of the screen resolutions mentioned below. Since all of the benchmarks we use for testing represent different game engine technology and graphic rendering processes, this battery of tests will provide a diverse range of results for you to gauge performance on your own computer system. All of the benchmark applications are capable of utilizing DirectX 10, and that is how they were tested. Some of these benchmarks have been used widely for DirectX 9 testing in the XP environment, and it is critically important to differentiate between results obtained with different versions. Each game behaves differently in DX9 and DX10 formats. Crysis is an extreme example, with frame rates in DirectX 10 only about half what was available in DirectX 9.

PowerColor_PCS+_AX5870_Video_Card_GPU-Z_CCC_1.png

At the start of all tests, the previous display adapter driver is uninstalled and trace components are removed using Driver Cleaner Pro. We then restart the computer system to establish our display settings and define the monitor. Once the hardware is prepared, we begin our testing. According to the Steam Hardware Survey published at the time of the Windows 7 launch, the most popular gaming resolution is 1280x1024 (17-19" standard LCD monitors), closely followed by 1024x768 (15-17" standard LCD). However, because these resolutions are considered 'low' by most standards, our benchmark performance tests concentrate on the up-and-coming higher-demand resolutions: 1680x1050 (22-24" widescreen LCD) and 1920x1200 (24-28" widescreen LCD monitors).

Each benchmark test program begins after a system restart, and the very first result for every test is ignored, since it often serves only to cache the test data. This process proved extremely important in several benchmarks, as the first run served to cache maps, allowing subsequent tests to perform much better than the first. Each test is completed five times, the high and low results are discarded, and the average of the three remaining results is displayed in our article.
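The run-averaging scheme described above amounts to a trimmed mean; here's a minimal sketch, with hypothetical frame-rate values:

```python
# Benchmark Reviews' averaging method: five runs, discard the highest and
# lowest results, and report the mean of the remaining three.
def trimmed_average(runs):
    if len(runs) != 5:
        raise ValueError("methodology calls for exactly five runs")
    kept = sorted(runs)[1:-1]      # drop the high and the low result
    return sum(kept) / len(kept)

fps_runs = [61.2, 63.8, 63.5, 64.1, 62.9]   # hypothetical FPS results
print(round(trimmed_average(fps_runs), 2))  # 63.4
```

Dropping the outliers this way keeps one unusually fast (cached) or unusually slow (background-task) run from skewing the published number.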

Test System

  • Motherboard: ASUS M4A79T Deluxe (2205 BIOS)
  • System Memory: 2x 2GB OCZ Reaper HPC DDR3 1600MHz (7-7-7-24)
  • Processor: AMD Phenom II 720 Black Edition (Overclock to 3.6 GHz)
  • CPU Cooler: CoolerMaster Hyper Z600
  • Video: PowerColor PCS+ AX5870 1GBD5-PPDHG2
  • Drive 1: OCZ Summit SSD, 60GB
  • Optical Drive: Sony NEC Optiarc AD-7190A-OB 20X IDE DVD Burner
  • Enclosure: CM STORM Sniper Gaming Case
  • PSU: Corsair CMPSU-750TX ATX12V V2.2 750Watt
  • Monitor: SOYO 24" Widescreen LCD Monitor (DYLM24E6) 1920x1200
  • Operating System: Windows 7 Ultimate Version 6.1 (Build 7600)

Benchmark Applications

  • 3DMark Vantage v1.0.1 Benchmark (8x Anti-Aliasing & 16x Anisotropic Filtering)
  • Crysis v1.21 Benchmark (Very High Settings, 0x and 4x Anti-Aliasing)
  • Devil May Cry 4 Benchmark Demo (Ultra Quality, 8x MSAA)
  • Far Cry 2 v1.02 Benchmark (Very High Performance, Ultra-High Quality, 8x Anti-Aliasing)
  • Resident Evil 5 Benchmark (8x Anti-Aliasing, Motion Blur ON, Quality Levels-High)
  • Unigine Heaven Benchmark (DX11, High Shaders, Normal Tessellation, 16x AF, 4x & 8x AA)
  • S.T.A.L.K.E.R. Call of Pripyat Benchmark (Ultra-Quality, Enhanced DX10 and DX11 light, 4x MSAA, SSAO Default-High and HDAO-Ultra)

Video Card Test Products

| Product Series | Stream Processors | Core Clock (MHz) | Shader Clock (MHz) | Memory Clock (MHz) | Memory Amount | Memory Interface |
|----------------|-------------------|------------------|-------------------|--------------------|---------------|------------------|
| ASUS Radeon HD4850 (EAH4850 TOP) | 800 | 680 | N/A | 1050 | 512MB GDDR3 | 256-bit |
| ATI Radeon HD5770 (Engineering Sample) | 800 | 850 | N/A | 1200 | 1.0GB GDDR5 | 128-bit |
| XFX Radeon HD5830 (HD-583X-ZNFV) | 1120 | 800 | N/A | 1000 | 1.0GB GDDR5 | 256-bit |
| ASUS GeForce GTX 260 (ENGTX260 MATRIX) | 216 | 576 | 1242 | 999 | 896MB GDDR3 | 448-bit |
| XFX Radeon HD5850 (21162-00-50R) | 1440 | 725 | N/A | 1000 | 1.0GB GDDR5 | 256-bit |
| MSI GeForce GTX 275 (N275GTX Twin Frozr OC) | 240 | 666 | 1476 | 1161 | 896MB GDDR3 | 448-bit |
| ASUS GeForce GTX 285 (GTX285 MATRIX) | 240 | 662 | 1476 | 1242 | 1.0GB GDDR3 | 512-bit |
| XFX Radeon HD5870 (HD-587X-ZNFC) | 1600 | 850 | N/A | 1200 | 1.0GB GDDR5 | 256-bit |
| PowerColor AX5870 (1GBD5-PPDHG2) | 1600 | 875 | N/A | 1250 | 1.0GB GDDR5 | 256-bit |
| PowerColor AX5870 (1GBD5-PPDHG2) | 1600 | 925 (OC) | N/A | 1300 (OC) | 1.0GB GDDR5 | 256-bit |

  • ASUS Radeon HD4850 (EAH4850 TOP - Catalyst 8.712.0.0)
  • XFX Radeon HD5750 (HD-575X-ZN - Catalyst 8.712.0.0)
  • ATI Radeon HD5770 (Engineering Sample - Catalyst 8.712.0.0)
  • XFX Radeon HD5830 (HD-583X-ZNFV - Catalyst 8.712.0.0)
  • XFX Radeon HD5850 (21162-00-50R - ATI Catalyst 8.712.0.0)
  • ASUS GeForce GTX 260 (ENGTX260 MATRIX - Forceware v195.62)
  • MSI GeForce GTX 275 (N275GTX Twin Frozr OC - Forceware v195.62)
  • ASUS GeForce GTX 285 (GTX285 MATRIX - Forceware v195.62)
  • XFX Radeon HD5870 (HD-587X-ZNFC - Catalyst 8.712.0.0)
  • PowerColor PCS+ AX5870 (1GBD5-PPDHG2 - Catalyst 8.712.0.0)

3DMark Vantage Benchmark Results

3DMark Vantage is a computer benchmark by Futuremark (formerly MadOnion) that measures the DirectX 10 gaming performance of graphics cards. A 3DMark score is an overall measure of your system's 3D gaming capabilities, based on comprehensive real-time 3D graphics and processor tests. By comparing your score with those submitted by millions of other gamers, you can see how your gaming rig performs, making it easier to choose the most effective upgrades or find other ways to optimize your system.

There are two graphics tests in 3DMark Vantage: Jane Nash (Graphics Test 1) and New Calico (Graphics Test 2). The Jane Nash test scene represents a large indoor game scene with complex character rigs, physical GPU simulations, multiple dynamic lights, and complex surface lighting models. It uses several hierarchical rendering steps, including for water reflection and refraction, and physics simulation collision map rendering. The New Calico test scene represents a vast space scene with lots of moving but rigid objects and special content like a huge planet and a dense asteroid belt.

At Benchmark Reviews, we believe that synthetic benchmark tools are just as valuable as video games, but only so long as you're comparing apples to apples. Since the same test is applied in the same controlled method with each test run, 3DMark is a reliable tool for comparing graphic cards against one-another.

1680x1050 is rapidly becoming the new 1280x1024. More and more widescreen monitors are being sold with new systems or as upgrades to existing ones. Even in tough economic times, the tide cannot be turned back; screen resolution and size will continue to creep up. Using this resolution as a starting point, the maximum settings applied to 3DMark Vantage include 8x Anti-Aliasing, 16x Anisotropic Filtering, all quality levels at Extreme, and Post Processing Scale at 1:2.

PowerColor_PCS+_AX5870_Video_Card_3DMark_Vantage_Jane_Nash_1680.jpg

Our first test looks promising; at 1680x1050 the PowerColor PCS+ 5870 shows nice gains from both the 25 MHz factory overclock and the additional 50 MHz overclock I threw on top of that. All the results are very even and linear, just the way synthetic benchmarks are supposed to be.

PowerColor_PCS+_AX5870_Video_Card_3DMark_Vantage_Jane_Nash_1920.jpg

At 1920x1200 native resolution, things look much the same as they did at the lower resolution. The 5870 shows that it keeps going and going as the GPU clock rate goes up. It's the only card that can break 30 FPS at this resolution, and it's pretty obvious as the test plays out on the screen. All the lower choices seem choppy by comparison. Let's take a look at test #2, which has a lot more surfaces to render, with all those asteroids flying around New Calico.

PowerColor_PCS+_AX5870_Video_Card_3DMark_Vantage_New_Calico_1680.jpg

In the New Calico test, the PowerColor PCS+ AX5870 sits right on top again and performance scales well with higher clock rates. Once again, the only other card that comes close is the HD 5850, everyone's favorite overachiever.

PowerColor_PCS+_AX5870_Video_Card_3DMark_Vantage_New_Calico_1920.jpg

At a higher screen resolution of 1920x1200, we see the lone 512MB card falling behind, and the HD 5850 retains its spot as the closest competitor to the 5870 cards. Even the fastest single GPU cards have trouble rendering this scene, with an average frame rate in the low 20s. Soon this benchmark suite may be replaced with DX11-based tests, but in the fading days of DX10 it has been a very reliable benchmark for high-end video cards. We need to look at some actual gaming performance to verify these results, so let's take a look in the next section, at how these cards stack up in the standard bearer for gaming benchmarks, Crysis.

| Product Series | Stream Processors | Core Clock (MHz) | Shader Clock (MHz) | Memory Clock (MHz) | Memory Amount | Memory Interface |
|---|---|---|---|---|---|---|
| ASUS Radeon HD4850 (EAH4850 TOP) | 800 | 680 | N/A | 1050 | 512MB GDDR3 | 256-bit |
| ATI Radeon HD5770 (Engineering Sample) | 800 | 850 | N/A | 1200 | 1.0GB GDDR5 | 128-bit |
| XFX Radeon HD5830 (HD-583X-ZNFV) | 1120 | 800 | N/A | 1000 | 1.0GB GDDR5 | 256-bit |
| ASUS GeForce GTX 260 (ENGTX260 MATRIX) | 216 | 576 | 1242 | 999 | 896MB GDDR3 | 448-bit |
| XFX Radeon HD5850 (21162-00-50R) | 1440 | 725 | N/A | 1000 | 1.0GB GDDR5 | 256-bit |
| MSI GeForce GTX 275 (N275GTX Twin Frozr OC) | 240 | 666 | 1476 | 1161 | 896MB GDDR3 | 448-bit |
| ASUS GeForce GTX 285 (GTX285 MATRIX) | 240 | 662 | 1476 | 1242 | 1.0GB GDDR3 | 512-bit |
| XFX Radeon HD5870 (HD-587X-ZNFC) | 1600 | 850 | N/A | 1200 | 1.0GB GDDR5 | 256-bit |
| PowerColor AX5870 (1GBD5-PPDHG2) | 1600 | 875 | N/A | 1250 | 1.0GB GDDR5 | 256-bit |
| PowerColor AX5870 (1GBD5-PPDHG2) | 1600 | 925 OC | N/A | 1300 OC | 1.0GB GDDR5 | 256-bit |

Crysis Benchmark Results

Crysis uses a new graphics engine: the CryENGINE2, the successor to Far Cry's CryENGINE. CryENGINE2 is among the first engines to use the Direct3D 10 (DirectX 10) framework, but it can also run using DirectX 9 on Vista, Windows XP, and the new Windows 7. As we'll see, there are significant frame rate reductions when running Crysis in DX10. It's not an operating system issue; DX9 works fine in Windows 7, but DX10 cuts the frame rates in half.

Roy Taylor, Vice President of Content Relations at NVIDIA, has spoken on the subject of the engine's complexity, stating that Crysis has over a million lines of code, 1GB of texture data, and 85,000 shaders. To get the most out of modern multicore processor architectures, CPU intensive subsystems of CryENGINE 2 such as physics, networking and sound, have been re-written to support multi-threading.

Crysis offers an in-game benchmark tool, similar to the one in World in Conflict. This short test places a high amount of stress on a graphics card, since there are so many landscape features rendered. For benchmarking purposes, Crysis can mean trouble as it places a high demand on both GPU and CPU resources. Benchmark Reviews uses the Crysis Benchmark Tool by Mad Boris to test frame rates in batches, which allows the results of many tests to be averaged.
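That batch-averaging approach can be sketched in a few lines — a minimal example with hypothetical FPS numbers, not actual output from the Crysis Benchmark Tool:

```python
# Average frame rates across repeated benchmark runs, the way batch
# testing smooths out run-to-run variation before results are reported.
from statistics import mean, stdev

# Hypothetical per-run average FPS from four identical timedemo passes.
runs = [41.8, 42.3, 41.5, 42.0]

avg = mean(runs)
spread = stdev(runs)

print(f"Average FPS: {avg:.1f} (std dev {spread:.2f})")
# A small standard deviation means the runs were consistent enough for
# the average to serve as a fair comparison number.
```

If the standard deviation is large relative to the mean, something (background tasks, thermal throttling) is disturbing the test, and the batch should be re-run rather than averaged.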

Low-resolution testing allows the graphics processor to reach its maximum output performance, and shifts demand onto the other system components. At the lower resolutions Crysis will reflect the GPU's top-end speed in the composite score, indicating full-throttle performance with little load. This makes for a less GPU-dependent test environment, but it is sometimes helpful in creating a baseline for measuring maximum output performance. At the 1280x1024 resolution used by 17" and 19" monitors, the CPU and memory have too much influence on the results to be used in a video card test. At the widescreen resolutions of 1680x1050 and 1920x1200, the performance differences between video cards under test are mostly down to the cards themselves.

PowerColor_PCS+_AX5870_Video_Card_Crysis_NoAA_1680.jpg

With medium screen resolution and no AA dialed in, the PowerColor PCS+ 5870 card continues to have a field day. Remember all the test results in this article are with maximum allowable image quality settings, plus all the performance numbers in Crysis took a major hit when Benchmark Reviews switched over to the DirectX 10 API for all our testing. Considering all that, the 5870 hits the sweet spot for Crysis, with no lag detectable in game play.

PowerColor_PCS+_AX5870_Video_Card_Crysis_NoAA_1920.jpg

At 1920x1200 resolution, everything looks the same; even the 512MB card is still hanging in there. Those old HD4850 cards were really good performers in Crysis with DX9, but they can't compete in DX10 with today's powerhouses. Both the HD5870 cards take about a 7 FPS hit when moving up to 1920x1200, but they're still comfortably above the 30 FPS mark.

PowerColor_PCS+_AX5870_Video_Card_Crysis_4xAA_1680.jpg

Now let's turn up the heat a bit, and add some Anti-Aliasing. With 4x MSAA cranked in, the PCS+ AX5870 backs off slightly, only making 42 FPS when overclocked to 925 MHz. Of course this result demolishes all the GTX cards again.

PowerColor_PCS+_AX5870_Video_Card_Crysis_4xAA_1920.jpg

This is one of our toughest tests: 1920x1200, maximum quality levels, and 4x AA. Only one GPU gets above 30 FPS in this test, and until recently it was the fastest single-GPU card on the planet, the Radeon HD 5870. The PowerColor PCS+ gains a little ground on its stock counterpart, and gains even more when pushed to a 9% overclock. The scaling in Crysis is almost as linear as a synthetic benchmark's; I think that's one of the reasons it has stayed relevant all this time. In the middle ranges, the HD 5850 holds on to its spot as the value proposition to beat. Even with its low stock clocks, it beats the GTX285 by 29%.


In our next section, Benchmark Reviews tests with Devil May Cry 4 Benchmark. Read on to see how a blended high-demand GPU test with low video frame buffer demand will impact our test products.

Devil May Cry 4 Benchmark

Devil May Cry 4 was released for the PC platform in mid-2008 as the fourth installment in the Devil May Cry video game series. DMC4's console versions are direct ports from the PC platform, operating at the native 720P game resolution with no other platform restrictions. Devil May Cry 4 uses the refined MT Framework game engine, which has been used for many popular Capcom game titles over the past several years.

MT Framework is an exclusive seventh-generation game engine built for games developed for the PlayStation 3 and Xbox 360, and for PC ports. MT stands for "Multi-Thread", "Meta Tools" and "Multi-Target". Capcom originally intended to use an outside engine, but none matched their specific requirements for performance and flexibility. Games using the MT Framework are originally developed on the PC and then ported to the two console platforms. The PC version features a special bonus called Turbo Mode, which gives the game a slightly faster speed, and implements a new difficulty called Legendary Dark Knight Mode. The PC version also has both DirectX 9 and DirectX 10 modes for the Windows XP, Vista, and Windows 7 operating systems.

It's always nice to be able to compare the results we receive here at Benchmark Reviews with the results you test for on your own computer system. Usually this isn't possible, since settings and configurations make it nearly impossible to match one system to the next; plus you have to own the game or benchmark tool we used. Devil May Cry 4 fixes this by offering a free benchmark tool available for download. Because the DMC4 MT Framework game engine is rather low-demand for today's cutting edge video cards, Benchmark Reviews uses the 1920x1200 resolution to test with 8x AA (the highest AA setting available to Radeon HD video cards) and 16x AF.

Devil May Cry 4 is not as demanding a benchmark as it used to be. Only scene #2 and #4 are worth looking at from the standpoint of trying to separate the fastest video cards from the slower ones. Still, it represents a typical environment for many games that our readers still play on a regular basis, so it's good to see what works with it and what doesn't. Any of the tested cards will do a credible job in this application, and the performance scales in a pretty linear fashion. You get what you pay for when running this game, at least for benchmarks. This is one time where you can generally use the maximum available anti-aliasing settings, so NVIDIA users should feel free to crank it up to 16X. The DX10 "penalty" is of no consequence here.

PowerColor_PCS+_AX5870_Video_Card_DMC4_DX10_Scene2.jpg

The GTX cards from NVIDIA stage a comeback in Devil May Cry 4, but the 5870 cards still take top place. They also show that a little bit of overclocking still works, even at crazy frame rates, where the slowest HD 5870 video card still pulls down 117 FPS.

PowerColor_PCS+_AX5870_Video_Card_DMC4_DX10_Scene4.jpg

In Scene #4, the GTX cards pull a little bit closer, but are still about 30 FPS behind the 5870 trio, not that you'd notice at 90+ frame rates.


Our next benchmark of the series is for a very popular FPS game that rivals Crysis for world-class graphics in a faraway land.

Far Cry 2 Benchmark Results

Ubisoft has developed Far Cry 2 as a sequel to the original, but with a very different approach to game play and story line. Far Cry 2 features a vast world built on Ubisoft's new game engine called Dunia, meaning "world", "earth" or "living" in Farsi. The setting in Far Cry 2 takes place on a fictional Central African landscape, set to a modern day timeline.

The Dunia engine was built specifically for Far Cry 2 by the Ubisoft Montreal development team. It delivers realistic semi-destructible environments, special effects such as dynamic fire propagation and storms, real-time night-and-day sunlight and moonlight cycles, a dynamic music system, and non-scripted enemy A.I. actions.

The Dunia game engine takes advantage of multi-core processors as well as multiple processors and supports DirectX 9 as well as DirectX 10. Only 2 or 3 percent of the original CryEngine code is re-used, according to Michiel Verheijdt, Senior Product Manager for Ubisoft Netherlands. Additionally, the engine is less hardware-demanding than CryEngine 2, the engine used in Crysis. However, it should be noted that Crysis delivers greater character and object texture detail, as well as more destructible elements within the environment. For example; trees breaking into many smaller pieces and buildings breaking down to their component panels. Far Cry 2 also supports the amBX technology from Philips. With the proper hardware, this adds effects like vibrations, ambient colored lights, and fans that generate wind effects.

There is a benchmark tool in the PC version of Far Cry 2, which offers an excellent array of settings for performance testing. Benchmark Reviews used the maximum settings allowed for our tests, with the resolution set to 1920x1200. The performance settings were all set to 'Very High', Render Quality was set to 'Ultra High' overall quality level, 8x anti-aliasing was applied, and HDR and Bloom were enabled. Of course DX10 was used exclusively for this series of tests.

PowerColor_PCS+_AX5870_Video_Card_Far_Cry_2_DX10_1680.jpg

Even on a game that typically favors the Green Machine, the HD 5870 cards are top dog again. They also respond well to the GPU overclocks of 25 MHz and 75 MHz above standard. Even with the higher overclock, the GPU temperature only maxed out at 59C. This test is generally one of the lighter GPU loads among our benchmarks; the coding appears to be highly optimized.

Although the Dunia engine in Far Cry 2 is slightly less demanding than the CryEngine 2 engine in Crysis, the load it imposes appears to be very close. In Crysis we didn't dare to test AA above 4x, whereas we use 8x AA and 'Ultra High' settings in Far Cry 2. Using the short 'Ranch Small' time demo (which yields the lowest FPS of the three tests available), many of the midrange products we've tested are capable of producing playable frame rates with the settings all turned up. We also see a different effect when switching our testing to DirectX 10. Far Cry 2 seems to have been optimized, or at least written with a clear understanding of DX10 requirements.

PowerColor_PCS+_AX5870_Video_Card_Far_Cry_2_DX10_1920.jpg

The higher resolution test doesn't change the rankings at all, and the overclock scaling for the PowerColor PCS+ AX5870 is right on target, netting a 9% gain in frame rates for a 9% overclock.
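That clock-for-frame-rate relationship is easy to sanity-check with a quick calculation — a sketch with hypothetical FPS values, not the exact figures from our charts:

```python
# Compare the relative GPU clock increase to the relative FPS increase
# to get an overclock "scaling efficiency" (1.0 = perfectly linear).
def scaling_efficiency(base_clock, oc_clock, base_fps, oc_fps):
    clock_gain = (oc_clock - base_clock) / base_clock
    fps_gain = (oc_fps - base_fps) / base_fps
    return fps_gain / clock_gain

# 850 -> 925 MHz is roughly a 9% overclock; the FPS numbers here are
# hypothetical stand-ins showing a matching ~9% frame rate gain.
eff = scaling_efficiency(850, 925, 40.0, 43.5)
print(f"Scaling efficiency: {eff:.2f}")
```

An efficiency near 1.0 suggests the test is GPU-core-bound; a value well below 1.0 means something else (memory bandwidth, CPU) is holding the frame rate back.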


Our next benchmark of the series puts our collection of video cards against some fresh graphics in the newly released Resident Evil 5 benchmark.

Resident Evil 5 Benchmark Results

PC gamers get the ultimate Resident Evil package in this new PC version with exclusive features including NVIDIA's new GeForce 3D Vision technology (wireless 3D Vision glasses sold separately), new costumes and a new mercenary mode with more enemies on screen. Delivering an infinite level of detail, realism and control, Resident Evil 5 is certain to bring new fans to the series. Incredible changes to game play and the world of Resident Evil make it a must-have game for gamers across the globe.

Years after surviving the events in Raccoon City, Chris Redfield has been fighting the scourge of bio-organic weapons all over the world. Now a member of the Bio-terrorism Security Assessment Alliance (BSAA), Chris is sent to Africa to investigate a biological agent that is transforming the populace into aggressive and disturbing creatures. New cooperatively-focused game play revolutionizes the way that Resident Evil is played. Chris and Sheva must work together to survive new challenges and fight dangerous hordes of enemies.

From a gaming performance perspective, Resident Evil 5 uses Next Generation of Fear - Ground breaking graphics that utilize an advanced version of Capcom's proprietary game engine, MT Framework, which powered the hit titles Devil May Cry 4, Lost Planet and Dead Rising. The game uses a wider variety of lighting to enhance the challenge. Fear Light as much as Shadow - Lighting effects provide a new level of suspense as players attempt to survive in both harsh sunlight and extreme darkness. As usual, we maxed out the graphics settings on the benchmark version of this popular game, to put the hardware through its paces. Much like Devil May Cry 4, it's relatively easy to get good frame rates in this game, so take the opportunity to turn up all the knobs and maximize the visual experience.

PowerColor_PCS+_AX5870_Video_Card_Resident_Evil_5_DX10_Scene3.jpg

The Resident Evil 5 benchmark tool provides a graph of continuous frame rates and averages for each of four distinct scenes which take place in different areas of the compound. In addition it calculates an overall average for the four scenes. The averages for scenes #3 and #4 are what we report here, as they are the most challenging. Looking at the results for area #3, it's obvious that the NVIDIA cards do exceptionally well in this benchmark, and the HD 5870 doesn't have nearly the same advantage it had in the other tests. The overclocks on the PowerColor PCS+ card help put some distance between the two competitors, but the GTX285 card we tested still has some overclocking room left in it, too. If this is your main game, the GTX cards may offer better value in this one instance. There is quite a bit of variation in the game play between the four areas, so let's see what happens in the next most challenging scene, area #4.

PowerColor_PCS+_AX5870_Video_Card_Resident_Evil_5_DX10_Scene4.jpg

In area #4, the 5870 convincingly reclaims its title, and the 5850 comes back to compete with the GTX285, just like we've seen on the other titles so far. I'm not sure what it is in area #3 that gives the GT200 cards such an advantage, but it doesn't last throughout the entire benchmark. Let's keep looking, especially at some new titles that were developed for DX11, and see if there's any more give-and-take, or if it remains all take for the HD 5870.


In our next section, we look at one of the newest DX11 benchmarks, straight from Russia and the studios of Unigine. Their latest benchmark is called "Heaven", and it has some very interesting and non-typical graphics. So, let's take a peek at what Heaven v2.0 looks like.

Unigine - Heaven Benchmark Results

Unigine Corp. released the first DirectX 11 benchmark, "Heaven", based on its proprietary Unigine engine. The company has already made a name among overclockers and gaming enthusiasts for uncovering the realm of true GPU capabilities with the previously released "Sanctuary" and "Tropics" demos.

Recently Unigine released Heaven 2.0 with some new scenes and features. The biggest difference (besides all the newly installed cannons...?!?) is the ability to change the tessellation load. The jaded among us call the new Extreme Mode "Fermi Mode", given the timing of the release and the fact that Heaven 1.0 was developed using ATI Cypress chips while Heaven 2.0 was likely developed using NVIDIA GTX480s. For now, the Normal Mode seems best suited for today's hardware and it matches the results from Heaven 1.0 pretty well, so that's what I used for testing.

Changes in version 2.0

  • Heavier tessellation load
  • Several major optimizations of the engine (including more effective culling of tessellated geometry)
  • Added new elaborated objects in the world (airship, fort, pier)
  • Physics-driven flags
  • Added more dynamic lights
  • Enhancements of some old assets
  • Introduced "moderate" and "extreme" tessellation modes in addition to the "normal" one
    • Moderate Mode - This mode is targeted to provide reasonable performance on a wide range of DX11 hardware.
    • Normal Mode - Default mode available in the benchmark shows optimal quality-to-performance ratio.
    • Extreme Mode - Designed to meet the perspectives of the next series of DX11-capable hardware, pushing up the tessellation level to the extreme in the next 1-2 years.

The "Heaven" benchmark excels at providing the following key features:

  • Native support of OpenGL, DirectX 9, DirectX 10 and DirectX 11
  • Comprehensive use of tessellation technology
  • Advanced SSAO (screen-space ambient occlusion)
  • Volumetric cumulonimbus clouds generated by a physically accurate algorithm
  • Dynamic simulation of changing environment with high physical fidelity
  • Interactive experience with fly/walk-through modes
  • ATI EyeFinity support

Unigine Corp. is an international company focused on top-notch real-time 3D solutions. The development studio is located in Tomsk, Russia. The main activity of Unigine Corp. is the development of Unigine, a cross-platform engine for virtual 3D worlds. Since the project's start in 2004, it has attracted the attention of various companies and groups of independent developers, because Unigine is always on the cutting edge of real-time 3D visualization and physics simulation technologies.

PowerColor_PCS+_AX5870_Video_Card_Unigine_Heaven_DX11_4xAA.jpg

Starting off with a lighter load of 4x MSAA, we see a steady progression of performance as you move up the ATI 5xxx ladder. The HD 5870 still puts on a star performance and leads the crowd, and the overclocks certainly don't hurt anything either. There is still some jerkiness with most of the cards, until you get to the top two. This test was run with 4x anti-aliasing; let's see how the cards stack up when we increase MSAA to the maximum level of 8x.

PowerColor_PCS+_AX5870_Video_Card_Unigine_Heaven_DX11_8xAA.jpg

Increasing the anti-aliasing did nothing to the relative rankings; it just pushed everything down about 15%. It's interesting to note that the HD 5850 doesn't stand out so much with this benchmark; everywhere else, it seems to jump a little higher than its neighbors. I also noticed less impact in Heaven 2.0 from the chimney smoke. In Heaven 1.0, when there was a bit of smoke in the scene, the frame rate dropped radically, and it really hurt the older cards in DX10 mode.


Let's take a look at one more benchmark, a decidedly less cheerful scenario in a post-apocalyptic "Zone", which is traversed by mercenary guides called Stalkers.

S.T.A.L.K.E.R.: Call of Pripyat Benchmark Results

The events of S.T.A.L.K.E.R.: Call of Pripyat unfold shortly after the end of S.T.A.L.K.E.R.: Shadow of Chernobyl. Having discovered the open path to the Zone center, the government decides to mount a large-scale military operation, "Fairway", aimed at taking the CNPP under control. According to the operation's plan, the first military group is to conduct air scouting of the territory to map out the detailed layout of anomalous field locations. Thereafter, making use of the maps, the main military forces are to be dispatched. Despite thorough preparations, the operation fails. Most of the avant-garde helicopters crash. In order to collect information on the reasons behind the operation's failure, Ukraine's Security Service sends an agent into the Zone center.

S.T.A.L.K.E.R.: CoP is developed on the X-Ray game engine v1.6, and implements several ambient occlusion (AO) techniques, including one that AMD has developed. AMD's AO technique is optimized to run efficiently on Direct3D 11 hardware. It has been chosen by a number of games (e.g. BattleForge, HAWX, and the new Aliens vs. Predator) for the distinct effect it adds to the final rendered images. This AO technique is called HDAO, which stands for 'High Definition Ambient Occlusion', because it picks up occlusions from fine details in normal maps.

PowerColor_PCS+_AX5870_Video_Card_STALKER_DX10_SSAO_Default.jpg

Within the limits imposed by the NVIDIA cards that don't support DirectX 11, we can turn the settings on S.T.A.L.K.E.R.: Call of Pripyat all the way up. We're using SSAO, one of the technologies that first made its appearance in DirectX 10. In the first test, with SSAO turned on in Default Mode, and Quality set to High, we see a drastic performance advantage with the ATI boards. As an example, the HD5850 burns up the screen with 70% better performance than a mildly overclocked, and more expensive GTX285. Apparently, SSAO really hates the GTX platform. Despite the company's insistence that DX11 is largely unnecessary, their performance on one of the key enabling technologies of DX10 is less than compelling. One other thing looks a little different; the overclocking of the Radeon HD 5870 didn't have nearly as much effect in this benchmark. For a 9% increase in GPU clock, we only got a 4% increase in frames per second.

PowerColor_PCS+_AX5870_Video_Card_STALKER_DX11_HDAO_Ultra.jpg

Once we turn on DirectX 11, we're left with only Radeon GPUs to test with. There's a fairly even step up from one card to the next, similar to what you see in a synthetic benchmark. Also, overclocking of the GPU failed to provide any real advantage here with DX11.


In our next section, we investigate the thermal performance of the PowerColor PCS+ AX5870, and see if the Cypress GPU runs cool enough with the down-flow heatpipe cooler that PowerColor brings to bear on it.

PowerColor PCS+ AX5870 Temperature

It's hard to know exactly when the first video card got overclocked, and by whom. What we do know is that it's hard to imagine a computer enthusiast or gamer today that doesn't overclock their hardware. Of course, not every video card has the head room. Some products run so hot that they can't suffer any higher temperatures than they generate straight from the factory. This is why we measure the operating temperature of the video card products we test.

To begin testing, I use GPU-Z to measure the temperature at idle as reported by the GPU. Next I use FurMark 1.7.0 to generate maximum thermal load and record GPU temperatures at high-power 3D mode. The ambient room temperature remained stable at 24C throughout testing. The PowerColor PCS+ AX5870 video card recorded 36C in idle 2D mode, and increased to 79C after 20 minutes of stability testing in full 3D mode, at 1920x1200 resolution, and the maximum MSAA setting of 8X. With the fan set on Automatic, the speed rose from 11% at idle to 61% under full load. I then set the fan speed manually, using Catalyst Control Center, to 100% and ran the load test again, and the GPU reached a slightly lower maximum temperature of 70C.

| Load | Fan Speed | GPU Temperature |
|---|---|---|
| Idle | 11% - AUTO | 36C |
| FurMark | 61% - AUTO | 79C |
| FurMark | 100% - MANUAL | 70C |

79C is a good result for temperature stress testing, especially with such a powerful GPU, stock fan settings and fan speeds controlled by the card. It's not quite as good as the result on the Sapphire Vapor-X that Benchmark Reviews tested last year, but the test conditions are not identical, particularly the ambient temp, so direct comparisons are not really possible. I rarely do my benchmarking tests with fans set on Automatic, preferring to give the GPU or CPU the best shot at surviving the day intact. With an integrated temperature controller in play though, I want to show how the manufacturer programmed the system. 70C is obviously a better result, and running the fan on Manual at 100% is not unusual or unwarranted when running such a punishing benchmark as FurMark.
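While absolute readings from different sessions can't be compared directly, the temperature rise over ambient gets you much closer to an apples-to-apples number — a simple sketch using this card's figures plus a hypothetical second card tested in a cooler room:

```python
# Normalize load temperatures by subtracting the ambient temperature,
# so cards tested on different days (at different room temperatures)
# can be compared on roughly equal footing.
def delta_over_ambient(gpu_temp_c, ambient_c):
    return gpu_temp_c - ambient_c

# This review: 79C FurMark load at 24C ambient.
pcs_delta = delta_over_ambient(79, 24)
# Hypothetical card: 76C load reading, but at a 21C ambient.
other_delta = delta_over_ambient(76, 21)

print(pcs_delta, other_delta)  # both show a 55C rise over ambient
```

It's a rough normalization (heatsink efficiency isn't perfectly linear with ambient), but it prevents a cooler test room from flattering one card over another.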

Load temps never got higher than 68C when running gaming benchmarks on Automatic fan settings, so the cooling system definitely does the job, and there is a lot of temperature headroom left for the GPU. With only a single axial fan running, the noise at 100% speed was noticeable, and had the typical sound characteristic for this fan type, but I wouldn't have any problem leaving it there while gaming. For normal usage patterns, I'd leave the fan settings on Auto. Unfortunately, due to the simple 2-wire DC motor controller, I can't tell you the actual RPMs produced, only the percentages that were reported in GPU-Z.

FurMark is an OpenGL benchmark that heavily stresses and overheats the graphics card with fur rendering. The benchmark offers several options allowing the user to tweak the rendering: fullscreen/windowed mode, MSAA selection, window size, and duration. The benchmark also includes a GPU Burner mode (stability test). FurMark requires an OpenGL 2.0 compliant graphics card with a lot of GPU power! As an oZone3D.net partner, Benchmark Reviews offers a free download of FurMark to our visitors.

PowerColor_PCS+_AX5870_Video_Card_furmark_tem.jpg

FurMark does two things extremely well: it drives the thermal output of any graphics processor higher than any other application or video game, and it does so consistently, every time. While FurMark is not a true benchmark tool for comparing different video cards, it works well for comparing one product against itself with different drivers or clock speeds, or for testing the stability of a GPU, since it raises temperatures higher than any other program. But in the end, it's a rather limited tool.

In our next section, we discuss electrical power consumption and learn how well (or poorly) each video card will impact your utility bill...

VGA Power Consumption

Life is not as affordable as it used to be, and items such as gasoline, natural gas, and electricity top the list of resources that have exploded in price over the past few years. Add to this the finite supply of non-renewable resources against current demand, and you can see that prices are only going to get worse. Planet Earth needs our help, and needs it badly. With forests becoming barren of vegetation and snow-capped poles quickly turning brown, the technology industry has a new attitude towards suddenly becoming "green". I'll spare you the powerful marketing hype that I get from various manufacturers every day, and get right to the point: your computer hasn't been doing much to help save energy... at least up until now.

PowerColor_PCS+_AX5870_Video_Card_HSF_Pipes_on_Card_01.jpg

To measure isolated video card power consumption, Benchmark Reviews uses the Kill-A-Watt EZ (model P4460) power meter made by P3 International. A baseline test is taken without a video card installed inside our computer system, which is allowed to boot into Windows and rest idle at the login screen before power consumption is recorded. Once the baseline reading has been taken, the graphics card is installed and the system is again booted into Windows and left idle at the login screen. Our final loaded power consumption reading is taken with the video card running a stress test using FurMark. Below is a chart with the isolated video card power consumption (not system total) displayed in Watts for each specified test product:

Video Card Power Consumption by Benchmark Reviews

VGA Product Description (sorted by combined total power) | Idle Power | Loaded Power
NVIDIA GeForce GTX 480 SLI Set | 82 W | 655 W
NVIDIA GeForce GTX 590 Reference Design | 53 W | 396 W
ATI Radeon HD 4870 X2 Reference Design | 100 W | 320 W
AMD Radeon HD 6990 Reference Design | 46 W | 350 W
NVIDIA GeForce GTX 295 Reference Design | 74 W | 302 W
ASUS GeForce GTX 480 Reference Design | 39 W | 315 W
ATI Radeon HD 5970 Reference Design | 48 W | 299 W
NVIDIA GeForce GTX 690 Reference Design | 25 W | 321 W
ATI Radeon HD 4850 CrossFireX Set | 123 W | 210 W
ATI Radeon HD 4890 Reference Design | 65 W | 268 W
AMD Radeon HD 7970 Reference Design | 21 W | 311 W
NVIDIA GeForce GTX 470 Reference Design | 42 W | 278 W
NVIDIA GeForce GTX 580 Reference Design | 31 W | 246 W
NVIDIA GeForce GTX 570 Reference Design | 31 W | 241 W
ATI Radeon HD 5870 Reference Design | 25 W | 240 W
ATI Radeon HD 6970 Reference Design | 24 W | 233 W
NVIDIA GeForce GTX 465 Reference Design | 36 W | 219 W
NVIDIA GeForce GTX 680 Reference Design | 14 W | 243 W
Sapphire Radeon HD 4850 X2 11139-00-40R | 73 W | 180 W
NVIDIA GeForce 9800 GX2 Reference Design | 85 W | 186 W
NVIDIA GeForce GTX 780 Reference Design | 10 W | 275 W
NVIDIA GeForce GTX 770 Reference Design | 9 W | 256 W
NVIDIA GeForce GTX 280 Reference Design | 35 W | 225 W
NVIDIA GeForce GTX 260 (216) Reference Design | 42 W | 203 W
ATI Radeon HD 4870 Reference Design | 58 W | 166 W
NVIDIA GeForce GTX 560 Ti Reference Design | 17 W | 199 W
NVIDIA GeForce GTX 460 Reference Design | 18 W | 167 W
AMD Radeon HD 6870 Reference Design | 20 W | 162 W
NVIDIA GeForce GTX 670 Reference Design | 14 W | 167 W
ATI Radeon HD 5850 Reference Design | 24 W | 157 W
NVIDIA GeForce GTX 650 Ti BOOST Reference Design | 8 W | 164 W
AMD Radeon HD 6850 Reference Design | 20 W | 139 W
NVIDIA GeForce 8800 GT Reference Design | 31 W | 133 W
ATI Radeon HD 4770 RV740 GDDR5 Reference Design | 37 W | 120 W
ATI Radeon HD 5770 Reference Design | 16 W | 122 W
NVIDIA GeForce GTS 450 Reference Design | 22 W | 115 W
NVIDIA GeForce GTX 650 Ti Reference Design | 12 W | 112 W
ATI Radeon HD 4670 Reference Design | 9 W | 70 W

* Results are accurate to within +/- 5W.

The PowerColor PCS+ AX5870 pulled 27 (157-130) watts at idle and 253 (383-130) watts when running full out, using the test method outlined above. The idle power consumption test is right on the factory number of 27W, and the load value is 65W above the 188W factory spec from PowerColor. That's about normal for this test, as it isn't possible to isolate the CPU load from the power measurements. You also have to factor in the efficiency of the power supply, which changes at different load levels. I think it's fair to say that the card pulls every bit of the full load current that is specified by the manufacturer, and maybe a few watts beyond that.
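The baseline-subtraction arithmetic above can be sketched as follows. The 130 W baseline and the 157 W / 383 W wall readings are the ones quoted in this review; the PSU-efficiency correction is my own assumption, included only to illustrate why the wall-side delta overshoots the 188 W factory spec (the Kill-A-Watt measures AC draw at the wall, not DC draw at the card):

```python
# Isolated card power via baseline subtraction: wall reading with the
# card installed, minus the system's wall draw with no card installed.

BASELINE_W = 130  # wall draw of the test system with no video card installed

def isolated_card_power(wall_reading_w, baseline_w=BASELINE_W, psu_efficiency=1.0):
    """Estimate card draw from a Kill-A-Watt wall reading.

    With psu_efficiency=1.0 this is the raw wall-side delta used in the
    text; pass e.g. 0.85 to roughly approximate the DC-side draw instead.
    """
    return (wall_reading_w - baseline_w) * psu_efficiency

idle_w = isolated_card_power(157)                          # 157 - 130 = 27 W
load_w = isolated_card_power(383)                          # 383 - 130 = 253 W
load_dc_w = isolated_card_power(383, psu_efficiency=0.85)  # ~215 W
print(idle_w, load_w, round(load_dc_w))
```

With a typical ~85% efficient supply, the 253 W wall-side load delta corresponds to roughly 215 W on the DC side; subtract the CPU's share of the FurMark load, which this method can't isolate, and the estimate lands reasonably near PowerColor's 188 W figure.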

So, no major surprises in the power consumption area; it's ATI's biggest GPU, running at high clock rates. It's a good thing it's built on 40nm technology; otherwise, those two billion transistors would be pulling a lot more power and generating a lot more heat. I next offer you some final thoughts, and my conclusions. On to the next page...

Radeon HD 5870 Final Thoughts

The ATI Radeon HD 5870 has been king of the single-GPU hill for 6 months now. Fermi finally launched early this month, and cards are now winding their way into the retail channels. With 50% more transistors and an architecture that is able to use the shader cores for tessellation computing, the GTX480 almost always pulls ahead of the 5870 in gaming benchmarks, particularly DX11 titles. Some of the games that feature "TWIMTBP" still radically favor the NVIDIA architecture, even though the GF100 has evolved quite a bit from the GT200 of the last decade.

So, where does that leave the HD 5870? Last I heard, the word on everyone's lips was "Fermi = competition". Well, unfortunately, I don't see it happening any time soon. The GTX480 launched at $500, with 50% more transistors, about 25% better performance, and a price that's 25% higher, give or take a few bucks. That's not competition for the HD 5870; that's a different price point. Tell me why ATI or their partners would lower their 5870 prices just because the GTX480 costs more, does more, and sucks electricity like it's free. In fact, pricing on the lowest priced 5870 cards is up $10 since the Fermi launch. Maybe if NVIDIA launches a GTX475 we'll have a comparable card that offers real competition. That also won't happen soon, because the Green Team needs to focus on filling out the lower price segments, where there's a much larger market to tap into.

By all accounts, it wasn't the drivers that delayed the release of Fermi, it was the hardware. The software developers actually had an extra couple of months to optimize the driver package before the products were released to the public. So I think you have to figure that ATI doesn't really have a 6-month lead on NVIDIA for their drivers. Still, beta testing is not the same thing as consumer usage, so from a "bug" standpoint, there is probably still some work to be done on the NVIDIA side. There are also some features that they will want to expand upon and refine as time goes on. As far as graphics performance goes, they're probably on equal footing for now.

It seems like ATI and NVIDIA typically produce only one set of drivers in any given year that gets universal acclaim from the user community. The other ten times, there is always a small group who are dreadfully unhappy and a sizeable number who say, "That was OK, but what I really want is..." There are also a large number of people who are genuinely happy with that particular update because it fixes the one thing that they were having trouble with. Unfortunately, there is a vocal minority that insists every driver update should improve performance by 15-20% over the last update. I don't know why they get as much press as they do, since they only incite confusion and disappointment for those who have recently joined the gaming community.

PowerColor_PCS+_AX5870_Video_Card_ATI_HD5870_GPU.jpg

So, at the end of its six month reign as champion, I still think you can call the HD 5870 a viable leader in its segment. It has no competition at its price point and it runs cooler with less power consumption than the GT200 or GF100 GPUs from NVIDIA. Essentially, the Fermi introduction had absolutely no effect on the HD 5870 market position, and the GTX285 had already been blown into the weeds six months ago.

PowerColor PCS+ AX5870 Conclusion

The PowerColor PCS+ AX5870 easily improved on the basic performance levels set by the reference cards. That's important, because the AIB partners don't want to be in the position of degrading the performance capability just to save a few bucks. In almost every benchmark, the mild overclock that they incorporated improved gaming performance. Further proof that the 2nd generation card hasn't lost anything in translation is the increased performance I got by overclocking it even further. A 9% increase in clock rates, with no loss of stability or extreme temperatures is respectable, if not extraordinary.

The appearance of the PowerColor PCS+ video cards is quite good. It follows the trend of partial shrouds with a central fan, but the routing of the heatpipes above ground, so to speak, adds some flair to the visual design. The image on the box cover makes it look like the shroud is done up to look like a race car; fortunately that's just artistic license. Although some may have issues with the height of the card, most gamers will have a case large enough to fit it without interference problems. There continues to be an abundance of creativity in the area of GPU cooling, and I expect to see more use of this heatpipe arrangement on cards with high thermal loads.

PowerColor_PCS+_AX5870_Video_Card_Glam_600.jpg

The build quality of the PowerColor PCS+ AX5870 was mixed. Everything is well put together, the overall assembly of the card is fully up to standard for this type of product, and the packaging was also first rate. That's the good part; now the bad: I usually say "assembly and soldering" together in this section, but there was a major solder defect present on the board. After testing, I removed the defect and looked carefully for any others; there were none. There was also some residue on the back surface of the board that should have been removed in a cleaning step at the end of the wave soldering process. These are not the sort of defects that a company like PowerColor should put up with; I expect they will either hound their board supplier to improve their quality, or find a new one. This kind of defect almost always either produces infant failures or does nothing at all, which was the case here.

The features of the HD 5870 may seem slightly less amazing now that we've been using them on a whole host of Radeon 5xxx cards since last September. Still, no one else offers an equivalent combination of features: DirectX 11, full ATI Eyefinity support, ATI Stream technology support, DirectCompute 11, OpenCL support, and HDMI 1.3 with Dolby TrueHD and DTS-HD Master Audio. We've barely scratched the surface of all the features in this review, focusing almost exclusively on gaming performance, but the card excels at other uses as well. This is an area that will change eventually, as NVIDIA ramps up the feature set of their product line with new capabilities. They just got their first working products out the door; now they can expand on some features as they release new driver packages.

As of mid-April 2010, the price for the PowerColor PCS+ AX5870 is $409.99 at my favorite PC component supplier, Newegg. This is currently the lowest price on that site for an HD 5870, and the PowerColor card is the only one at that price. It's fair to call that a good value.

The PowerColor PCS+ AX5870 earns a recommendation for pushing the design of the HD58xx series further down the path of increasing value. As a result, they've been able to undercut the pricing of most of the competition in the e-tail market with this new series of cards. The power supply is state-of-the-art in ways that make it smaller, cheaper and more efficient, not more complex. The cooling solution is more efficient than most, because of the innovative use of space and it cools the VRM section better than the reference design. With the exception of software voltage control, they've improved the stock performance of what was a high performance model to begin with. Were it not for the solder quality issues that showed up on this sample, the overall rating would clearly have been higher.

Pros:

+ Robust, modern power supply design runs cool
+ Unmatched feature set of HD 5xxx series
+ High value, lowest pricing
+ 1250 MHz Samsung GDDR5 memory
+ Easy to overclock with ATI Overdrive
+ Good cooling performance
+ Free Call of Duty: Modern Warfare 2 game
+ Driver updates have offered real improvements
+ The power to run ATI Eyefinity

Cons:

- Major quality problem with wave soldering process
- No software voltage control, limits OC abilities
- Fan noise unpleasant at 100%
- Still a few driver bugs to work out on the latest games

Ratings:

  • Performance: 9.75
  • Appearance: 9.25
  • Construction: 7.75
  • Functionality: 9.50
  • Value: 7.50

Final Score: 8.75 out of 10.

Quality Recognition: Benchmark Reviews Silver Tachometer Award.

Questions? Comments? Benchmark Reviews really wants your feedback. We invite you to leave your remarks in our Discussion Forum.


Comments 

 
# I'm not sure about good soldering quality — Rod, 2010-04-19 15:59
The solder drip you can see on the 3rd image down isn't from wave soldering; it's in the wrong direction. It would more likely be from the rework station.

The solder joint on R639 may be intact, but I would guess that given a bit more time and temperature cycling, you'll have a dry joint on the upper left.

C658 & C660 don't seem to have a good flow from the upper pad to the component; this should have been picked up in rework/inspection before it went into the tester.

Though you say those caps are on a 1mm grid, they are still HUGE components compared to what's out there and what I have worked with; 1mm is still quite a large grid to work with.
# Thanks — BruceBruce, 2010-04-19 16:21
Rod,

Care to see the hi-res photos?
BTW, what component pitch is considered SOTA today?

Thanks for the feedback. FWIW, it's tough for the average person to even see this stuff. I have a 10X loupe, and it doesn't cut it. I have to go 2.5X with my Micro-Nikkor on a DSLR to see it properly.
# soldering quality — Hap, 2010-04-19 16:27
Most of the inspection is now done using an Automated Optical Inspection system, and it's up to the programmer what gets caught as a bad solder joint or a minimum pass, as in the Landrex AOI unit.
BTDT
# Thanks to the reviewer! — Greg, 2010-09-27 10:47
I really appreciate a well-written, and in depth review. I had recently purchased the ASUS EAH5870, and am sad to say that it is junk by my standards. Poor cooling, and came well-equipped to fail and cause artifacting due to a bad memory chip. As I am now on the quest to RMA, I am considering this card. Your review was extremely helpful in my decision, and I will be interested in ANY other hardware reviews you write.

Thanks!
# Appreciated — BruceBruce, 2010-09-27 17:25
Hi Greg, Glad you found the review helpful. Although I complained about the cleaning solution residue, this card has been providing good service since the day I got it. I recently tried an aftermarket cooler on it, and the performance was only slightly better, really just lower noise. /index.php?option=com_content&task=view&id=595&Itemid=62

I am a little surprised by your experience with the ASUS model, they have been good cards for me...but nobody's perfect ALL the time.