ATI Radeon HD5670 HDMI Video Card
Reviews - Featured Reviews: Video Cards
Written by Bruce Normann   
Thursday, 14 January 2010

ATI Radeon HD5670 Video Card Review

Oops, I did it again. Not me, but ATI. They've gone and cut their latest 40nm GPU in half again, given it a new name, and released a brand new video card that smacks the competition around. The original, full-size die was Cypress, cut it in half and you get Juniper, cut it in half again and you get Redwood. Just four months after the release of their first 5xxx series video card (and two million unit sales later...), ATI is releasing the sixth iteration of DX11-capable hardware. The Radeon HD5670 is targeted at the mainstream gaming audience, which in terms of units (not dollars) is two thirds of the total available market.

With the Radeon HD5670, ATI brings us another card based on class-leading 40nm GPUs and GDDR5 memory. The new card uses the exact same architecture as the HD58xx and HD57xx series; ATI basically cut the Juniper chip in half to create a brand new video card with hardware specs somewhere below the HD5750. How far below is what we need to find out and Benchmark Reviews is pleased to offer you the results of our extensive testing. One of the big differentiators for this card is the lack of any external power connection. The maximum power draw is well below the allowable current provided directly by the PCI-e slot.

ATI_Radeon_HD5670_Front_34_Top.jpg

The flagship ATI video cards made a huge splash in September, but according to Mercury Research, cards costing over $200 only make up 7% of the market, and the 57xx series landed in the $100-$200 range, which makes up 27% of the market. That leaves a huge opening in the sub-$100 market, and ATI is the first one to offer a DirectX 11 capable card in this segment. The specs indicate a performance level that will struggle with FPS games at high resolution, but will have no problem supporting the millions of people playing World of Warcraft.

ATI_Radeon_Logo_250px.jpg

About the company: ATI

Over the course of AMD's four decades in business, silicon and software have become the steel and plastic of the worldwide digital economy. Technology companies have become global pacesetters, making technical advances at a prodigious rate - always driving the industry to deliver more and more, faster and faster.

However, "technology for technology's sake" is not the way we do business at AMD. Our history is marked by a commitment to innovation that's truly useful for customers - putting the real needs of people ahead of technical one-upmanship. AMD founder Jerry Sanders has always maintained that "customers should come first, at every stage of a company's activities." We believe our company history bears that out.

Radeon HD5670 Features

The feature set of the ATI HD5670 video card is nearly identical to the entire HD5xxx series. The important differences are all related to the fact that the HD5670 chip is half the size of the HD5770, with half the processing power. For those who perused the mountain of details that accompanied the 5800 series launch, this graphic should look half again as familiar.

ATI_Radeon_HD5670_Architecture.jpg

ATI Radeon HD 5670 GPU Feature Summary

  • 627 million 40nm transistors
  • TeraScale 2 Unified Processing Architecture
    • 400 Stream Processing Units
    • 20 Texture Units
    • 32 Z/Stencil ROP Units
    • 8 Color ROP Units
  • GDDR5 memory interface
  • PCI Express 2.1 x16 bus interface
  • DirectX 11 support
    • Shader Model 5.0
    • DirectCompute11
    • Programmable hardware tessellation unit
    • Accelerated multi-threading
    • HDR texture compression
    • Order-independent transparency
  • OpenGL 3.2 support [1]
  • Image quality enhancement technology
    • Up to 24x multi-sample and super-sample anti-aliasing modes
    • Adaptive anti-aliasing
    • 16x angle independent anisotropic texture filtering
    • 128-bit floating point HDR rendering
  • ATI Eyefinity multi-display technology [2,3]
    • Three independent display controllers - Drive three displays simultaneously with independent resolutions, refresh rates, color controls, and video overlays
    • Display grouping - Combine multiple displays to behave like a single large display
  • ATI Stream acceleration technology
    • OpenCL 1.0 compliant
    • DirectCompute11
    • Accelerated video encoding, transcoding, and upscaling [4,5]
    • Native support for common video encoding instructions
  • ATI CrossFireX multi-GPU technology [6]
    • Dual GPU scaling
  • ATI Avivo HD Video & Display technology [7]
    • UVD 2 dedicated video playback accelerator
    • Advanced post-processing and scaling [8]
    • Dynamic contrast enhancement and color correction
    • Brighter whites processing (blue stretch)
    • Independent video gamma control
    • Dynamic video range control
    • Support for H.264, VC-1, and MPEG-2
    • Dual-stream 1080p playback support [9,10]
    • DXVA 1.0 & 2.0 support
    • Integrated dual-link DVI output with HDCP [11]
      • Max resolution: 2560x1600 [12]
    • Integrated DisplayPort output
      • Max resolution: 2560x1600 [12]
    • Integrated HDMI 1.3 output with Deep Color, xvYCC wide gamut support, and high bit-rate audio
      • Max resolution: 1920x1200 [12]
    • Integrated VGA output
      • Max resolution: 2048x1536 [12]
    • 3D stereoscopic display/glasses support [13]
    • Integrated HD audio controller
      • Output protected high bit rate 7.1 channel surround sound over HDMI with no additional cables required
      • Supports AC-3, AAC, Dolby TrueHD and DTS Master Audio formats
  • ATI PowerPlay power management technology [7]
    • Dynamic power management with low power idle state
    • Ultra-low power state support for multi-GPU configurations
  • Certified drivers for Windows 7, Windows Vista, and Windows XP
  1. Driver support scheduled for release in 2010
  2. Driver version 8.66 (Catalyst 9.10) or above is required to support ATI Eyefinity technology and to enable a third display you require one panel with a DisplayPort connector
  3. ATI Eyefinity technology works with games that support non-standard aspect ratios which is required for panning across three displays
  4. Requires application support for ATI Stream technology
  5. Digital rights management restrictions may apply
  6. ATI CrossFireX technology requires an ATI CrossFireX Ready motherboard and an ATI CrossFireX Bridge Interconnect for each additional graphics card, and may require a specialized power supply
  7. ATI PowerPlay, ATI Avivo and ATI Stream are technology platforms that include a broad set of capabilities offered by certain ATI Radeon HD GPUs. Not all products have all features, and full enablement of some capabilities may require complementary products
  8. Upscaling subject to available monitor resolution
  9. Blu-ray or HD DVD drive and HD monitor required
  10. Requires Blu-ray movie disc supporting dual 1080p streams
  11. Playing HDCP content requires additional HDCP ready components, including but not limited to an HDCP ready monitor, Blu-ray or HD DVD disc drive, multimedia application and computer operating system.
  12. Some custom resolutions require user configuration
  13. Requires 3D Stereo drivers, glasses, and display

AMD is slowly working towards a future vision of graphics computing, as is their main competitor, Intel. They both believe that integrating graphics processing with the CPU provides benefits that can only be achieved by taking the hard road. For now, the only thing we can see is their belief; the roadmap is both sketchy and proprietary. One look at the size of their ever-shrinking GPU dies, and it starts to look more like a possibility than a pipe dream, though.

Radeon Video Card Specifications

I mentioned in the introduction that the HD5670 had hardware specs somewhere below an HD5750. You can see the 5670 specs in detail a little further below, and they're a mixed bag compared to the 57xx series. The memory is on a slightly slower clock, but the GPU is running faster than a stock HD5750. While the specs give us a good clue to the performance of the HD5670, ultimately, it's the real-world performance we care about, and the price for that performance. This graphic puts the general pricing strategy in perspective for you:

ATI_Radeon_HD5670_Sweet_Spot.jpg

But before we get to our detailed teardown and testing, let's look at the actual HD5670 specs:

Radeon HD5670 Specifications

  • Engine clock speed: 775 MHz
  • Processing power (single precision): 620 GigaFLOPS
  • Texel fill rate (bilinear filtered): 15.5 Gigatexels/sec
  • Pixel fill rate: 6.2 Gigapixels/sec
  • Anti-aliased pixel fill rate: 24.8 Gigasamples/sec
  • Memory clock speed: 1.0 GHz
  • Memory data rate: 4.0 Gbps
  • Memory bandwidth: 64 GB/sec
  • Maximum board power: 61 Watts
  • Idle board power: 14 Watts
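
Most of those headline numbers fall straight out of the clocks and unit counts in the feature summary above. As a quick sanity check, here is a minimal Python sketch of the arithmetic (the "2 FLOPs per stream processor per clock" figure is the usual multiply-add assumption, and the variable names are mine, not ATI's):

```python
# Back-of-the-envelope derivation of the Radeon HD5670 headline specs
engine_clock_hz = 775e6        # 775 MHz engine clock
stream_processors = 400
texture_units = 20
z_stencil_rops = 32
color_rops = 8
mem_data_rate_gbps = 4.0       # effective GDDR5 data rate per pin
mem_bus_width_bits = 128

gflops     = engine_clock_hz * stream_processors * 2 / 1e9   # multiply-add = 2 FLOPs per clock
texel_rate = engine_clock_hz * texture_units / 1e9           # bilinear Gigatexels/sec
pixel_rate = engine_clock_hz * color_rops / 1e9              # Gigapixels/sec
aa_rate    = engine_clock_hz * z_stencil_rops / 1e9          # anti-aliased Gigasamples/sec
bandwidth  = mem_data_rate_gbps * mem_bus_width_bits / 8     # GB/sec

print(gflops, texel_rate, pixel_rate, aa_rate, bandwidth)    # 620.0 15.5 6.2 24.8 64.0
```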

OK, tired of looking at numbers? Let's take a closer look at the hardware, then. ATI was kind enough to send us a 512MB reference design video card, so let's examine the ins and outs.

Closer Look: Radeon HD5670

The HD5670 breaks with the general design of the previous HD5xxx cards. For one thing, the reference design is a single-slot unit, which is a first for the Radeon HD5xxx series, and for DirectX 11. The card is only 170 mm long, which means it will fit into any case without an issue. It's quite small and, externally, a simple, straightforward design. The thing that makes it interesting, and worth testing, is that pint-sized 40nm GPU hidden below decks. Keep in mind during this review that there will be some variation in offerings from ATI's partners. Dual-slot coolers, 1GB vs. 512MB, and different I/O connections are all open to interpretation by the supplier.

ATI_Radeon_HD5670_Front_34_Bottom.jpg

The red blower wheel on the reference cooler is quite different on this card, more of an inclined paddle wheel than the squirrel cage blower on the beefier 5xxx cards. It pushes air through a rather small, finned heatsink block that sits on top of the GPU, and out into the case. I don't see any evidence of heatpipe style construction, and it's probably not necessary, given the low power involved. With an idle power consumption of 14W, heat buildup in the case is not likely to be an issue. Those heatsink-style posts over the DRAM modules are there for show, really; as is the "Hot" sticker on the side...

ATI_Radeon_HD5670_HSF_Blower_Wheel.jpg

The connections on the rear of the card are arranged quite differently on this single slot offering, as well. From left to right: one DisplayPort, one HDMI and one DVI connector - one for everyone.

ATI_Radeon_HD5670_IO_Plate_01.jpg

There will be some flexibility in the I/O port arrangement for ATI partners, so pay attention to the product specs when you buy, as it can be hard to tell the HDMI and DisplayPort connections apart with a casual glance. Some units will also be shipped with a VGA connector displacing the DisplayPort, like this.

ATI_Radeon_HD5670_34_White_ATI_01.jpg

The back of the Radeon HD5670 is filled with little surface-mount-technology components, which is normal for a card in this market segment. The main feature to be seen here is the metal cross-brace for the GPU heatsink screws, which are spring loaded and connect to threaded standoffs under the blower assembly on the front side of the card. Also, note the absence of back-side DRAM on the 512MB version of the card.

ATI_Radeon_HD5670_Rear_PCB_Full.jpg

For most high-end video cards, the cooling system is an integral part of the performance envelope for the card. "Make it run cooler, and you can make it run faster" has always been the byword for achieving gaming-class performance from the latest and greatest GPU. The HD5670 is a mainstream video card with a very small GPU die, so even though it will be pushed to maximum performance levels by most potential customers, there just aren't enough transistors there to produce a lot of heat. To top it off, it's built on the same process technology used for the larger 5xxx series cards, which has proven to be exceedingly efficient, with very little waste heat generation. What little heat there is will all come out here, at the back end of the cooler housing.

ATI_Radeon_HD5670_HSF_Exhaust.jpg

That's all there really is to see on the outside, so let's peel back the covers and have a good look around on the inside.

Radeon HD5670 Detailed Features

The main attraction of ATI's new line of video cards is the brand new GPU with its 40nm transistors and an improved architecture. The chip in the 5670 is called "Redwood" and is essentially half of the "Juniper", or if you like, one quarter of the "Cypress", the high-end chip that was first introduced in the HD5800 series, in September, 2009.

ATI_Radeon_HD5670_Redwood_Die_wDime.jpg

The Redwood die is very small, as can be seen in this comparison to a well known dimensional standard. ATI still managed to cram over 600 million transistors on there, and the small size is critical to the pricing strategy that ATI is pursuing with these new releases.

The base card uses 512MB of GDDR5 memory, on a 128-bit bus with a 4.0 Gbps memory interface. This combination offers a maximum memory bandwidth of up to 64 GB/sec. There will also be 1GB versions of the card available, at a slight price premium. Memory prices are going through the roof these days, so it's not surprising that ATI is offering a choice here. When they cut the Juniper GPU in half, they kept the memory bus at 128-bit, so don't expect memory to be a bottleneck on this card. There may not be much room for memory overclocking via the Overdrive tool distributed by AMD, but with only half the shader processors running point, it's not a capability that's likely to be missed.

ATI_Radeon_HD5670_Memory_Chips.jpg

The H5GQ1H24MFR-TOC DRAM chip from Hynix is in the same family as the GDDR5 memory used in the HD57xx and HD58xx series video cards. It's in the lower tier, and only rated for 4.0 Gbps, which is a downgrade from the HD57xx series, which used a 5.0 Gbps version. An overclock to the 1250-1300 MHz range is going to be tougher with these chips, even if you increase the memory voltage. To be fair to ATI, this card is NOT meant for tweakers, it's meant to be an easy plug-and-play unit for people currently stuck with low end OEM or IGP graphics. As such, they are fully justified in optimizing the memory performance/price ratio to offer the best value to the intended customer.

ATI_Radeon_HD5670_ATI_Radeon_HD5770_Memory_Table.jpg

The power section provides 2-phase power to the GPU based on the uP6201 controller; a very simple arrangement compared to the 58xx series, and more like what was used on the HD5750. The combination of a very low power GPU and low power GDDR5 memory means that a smart power supply design isn't really warranted in this situation. Even with the simple power scheme, they have achieved an incredibly low power consumption of 14W at idle and 61W under stress.

ATI_Radeon_HD5670_Power_Controller.jpg

The assembly quality on the PCB was not the best I've seen, but I had engineering samples to look at. Before we dive into the testing portion of the review, let's look at one of the most exciting new features available on every Radeon HD5xxx series product, Eyefinity.

ATI Eyefinity Multi-Monitors

Even at this low price point, ATI felt that people might want to take advantage of the new Eyefinity technology. Especially if you look at this product as an upgrade part for someone with OEM or IGP video, they can add a second monitor at the same time and really improve their gaming and HD video experience.

ATI_Radeon_HD5670_3_Main_Features_01.jpg

ATI Eyefinity advanced multiple-display technology launches a new era of panoramic computing, helping to boost productivity and multitasking with innovative graphics display capabilities supporting massive desktop workspaces, creating ultra-immersive computing environments with superhigh resolution gaming and entertainment, and enabling easy configuration. High end editions will support up to six independent display outputs simultaneously.

In the past, multi-display systems catered to professionals in specific industries. Financial, energy, and medical are just some industries where multi-display systems are a necessity. Today, more and more graphic designers, CAD engineers and programmers are attaching more than one display to their workstation. A major benefit of a multi-display system is simple and universal - it enables increased productivity. This has been confirmed in industry studies which show that attaching more than one display device to a PC can significantly increase user productivity.

Early multi-display solutions were non-ideal. Bulky CRT monitors claimed too much desk space; thinner LCD monitors were very expensive; and external multi-display hardware was inconvenient and also very expensive. These issues are much less of a concern today. LCD monitors are very affordable and current generation GPUs can drive multiple display devices independently and simultaneously, without the need for external hardware. Despite the advancements in multi-display technology, AMD engineers still felt there was room for improvement, especially regarding the display interfaces. VGA carries analog signals and needs a dedicated DAC per display output, which consumes power and ASIC space. Dual-Link DVI is digital, but requires a dedicated clock source per display output and uses too many I/O pins from the GPU. It was clear that a superior display interface was needed.

In 2004, a group of PC companies collaborated to define and develop DisplayPort, a powerful and robust digital display interface. At that time, engineers working for the former ATI Technologies Inc. were already thinking about a more elegant solution to drive more than two display devices per GPU, and it was clear that DisplayPort was the interface of choice for this task. In contrast to other digital display interfaces, DisplayPort does not require a dedicated clock signal for each display output. In fact, the data link is fixed at 1.62Gbps or 2.7Gbps per lane, irrespective of the timing of the attached display device. The benefit of this design is that one reference clock source provides the clock signal needed to drive as many DisplayPort display devices as there are display pipelines in the GPU. In addition, with the same number of I/O pins used for Single-Link DVI, a full speed DisplayPort link can be driven which provides more bandwidth and translates to higher resolutions, refresh rates and color depths. All these benefits perfectly complement ATI Eyefinity Multi-Display Technology.
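
To put those per-lane rates in perspective, here is a rough Python sketch of the bandwidth math (the 8b/10b encoding overhead and the 25% blanking allowance are my own illustrative assumptions, not ATI figures):

```python
# Can a 4-lane DisplayPort link at 2.7 Gbps per lane drive a 2560x1600 panel at 60 Hz?
lanes = 4
lane_rate_gbps = 2.7
link_capacity_gbps = lanes * lane_rate_gbps * 8 / 10       # 8b/10b coding leaves 80% for data

width, height, refresh_hz, bits_per_pixel = 2560, 1600, 60, 24
pixel_payload_gbps = width * height * refresh_hz * bits_per_pixel / 1e9
required_gbps = pixel_payload_gbps * 1.25                  # crude allowance for blanking intervals

print(f"link capacity ~{link_capacity_gbps:.2f} Gbps, needed ~{required_gbps:.2f} Gbps")
# -> link capacity ~8.64 Gbps, needed ~7.37 Gbps
```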

ATI_Radeon_HD5670_Eyefinity_01.jpg

ATI Eyefinity Technology from AMD provides advanced multiple monitor technology delivering an immersive graphics and computing experience, supporting massive virtual workspaces and super-high resolution gaming environments. Legacy GPUs have supported up to two display outputs simultaneously and independently for more than a decade. Until now graphics solutions have supported more than two monitors by combining multiple GPUs on a single graphics card. With the introduction of AMD's next-generation graphics product series supporting DirectX 11, a single GPU now has the advanced capability of simultaneously supporting up to six independent display outputs.

ATI Eyefinity Technology is closely aligned with AMD's DisplayPort implementation, providing the flexibility and upgradability modern users demand. Up to two DVI, HDMI, or VGA display outputs can be combined with DisplayPort outputs for a total of up to six monitors, depending on the graphics card configuration. The initial AMD graphics products with ATI Eyefinity technology will support a maximum of three independent display outputs via a combination of two DVI, HDMI or VGA with one DisplayPort monitor. AMD has a future product planned to support up to six DisplayPort outputs. Wider display connectivity is possible by using display output adapters that support active translation from DisplayPort to DVI or VGA.

The DisplayPort 1.2 specification is currently being developed by the same group of companies who designed the original DisplayPort specification. Its feature set includes higher bandwidth, enhanced audio and multi-stream support. Multi-stream, commonly referred to as daisy-chaining, is the ability to address and drive multiple display devices through one connector. This technology, coupled with ATI Eyefinity Technology, will be a key enabler for multi-display technology, and AMD will be at the forefront of this transition.

Video Card Testing Methodology

This is the beginning of a new era for testing at Benchmark Reviews. With the remarkably quick adoption rate of Windows 7, and given the prolonged and extensive pre-release testing that occurred on a global scale, there are compelling reasons to switch all testing to this new, and highly anticipated, operating system. Overall performance levels of Windows 7 have been favorably compared to Windows XP, and there is solid support for the 64-bit version, something enthusiasts have been anxiously awaiting for several years.

Our site polls and statistics indicate that over 90% of our visitors use their PC for playing video games, and practically every one of you is using one of the screen resolutions discussed below. Since all of the benchmarks we use for testing represent different game engine technology and graphic rendering processes, this battery of tests will provide a diverse range of results for you to gauge performance on your own computer system. All of the benchmark applications are capable of utilizing DirectX 10, and that is how they were tested. Some of these benchmarks have been used widely for DirectX 9 testing in the XP environment, and it is critically important to differentiate between results obtained with different versions. Each game behaves differently in DX9 and DX10 formats. Crysis is an extreme example, with frame rates in DirectX 10 only about half what was available in DirectX 9.

At the start of all tests, the previous display adapter driver is uninstalled and trace components are removed using Driver Cleaner Pro. We then restart the computer system to establish our display settings and define the monitor. Once the hardware is prepared, we begin our testing. According to the Steam Hardware Survey published at the time of Windows 7 launch, the most popular gaming resolution is 1280x1024 (17-19" standard LCD monitors) closely followed by 1024x768 (15-17" standard LCD). Given the mainstream user base that this card is aimed at, we limited our testing to 1280x1024 and the commonly encountered wide-screen format of 1680x1050 (20-22" wide-screen LCD).

Each benchmark test program begins after a system restart, and the very first result for every test will be ignored since it often only caches the test. This process proved extremely important in the World in Conflict benchmarks, as the first run served to cache maps allowing subsequent tests to perform much better than the first. Each test is completed five times, the high and low results are discarded, and the average of the three remaining results is displayed in our article.
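
For readers who want to reproduce that aggregation step, here is a minimal sketch of it in Python (the function name and the sample frame rates are illustrative, not part of our actual tooling):

```python
def trimmed_average(runs):
    """Average five benchmark runs after discarding the single highest and lowest results."""
    if len(runs) != 5:
        raise ValueError("expected exactly five runs")
    kept = sorted(runs)[1:-1]          # drop the low and high outliers
    return sum(kept) / len(kept)

# Example: five frame-rate results from repeated runs of the same test
print(trimmed_average([28.4, 29.1, 29.0, 31.6, 28.9]))   # -> 29.0
```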

ATI_Radeon_HD5670_Road_to_Fusion.jpg

Test System

  • Motherboard: ASUS M4A79T Deluxe (2205 BIOS)
  • System Memory: 2X 2GB OCZ Reaper HPC DDR3 1600MHz (7-7-7-24)
  • Processor: AMD Phenom II 720 Black Edition (Overclocked to 3.8 GHz)
  • CPU Cooler: CoolerMaster Hyper Z600
  • Video: ATI Radeon HD5670, Engineering Sample
  • Drive 1: OCZ Summit SSD, 60GB
  • Optical Drive: Sony NEC Optiarc AD-7190A-OB 20X IDE DVD Burner
  • Enclosure: CM STORM Sniper Gaming Case
  • PSU: Corsair CMPSU-750TX ATX12V V2.2 750Watt
  • Monitor: SOYO 24" Widescreen LCD Monitor (DYLM24E6) 1920x1200
  • Operating System: Windows 7 Ultimate Version 6.1 (Build 7600)

Benchmark Applications

  • 3DMark Vantage v1.0.1 (8x Anti Aliasing & 16x Anisotropic Filtering)
  • Crysis v1.21 Benchmark (High Settings, 0x and 4x Anti-Aliasing)
  • Devil May Cry 4 Benchmark Demo (Ultra Quality, 8x MSAA)
  • Far Cry 2 v1.02 (Very High Performance, Ultra-High Quality, 8x Anti-Aliasing)
  • Resident Evil 5 (8x Anti-Aliasing, Motion Blur ON, Quality Levels-High)

Video Card Test Products

  • EVGA GeForce 8600GT (256-P2-N751-TR): 32 stream processors, 540 MHz core clock, 1180 MHz shader clock, 700 MHz memory clock, 256MB GDDR3, 128-bit memory interface
  • ATI Radeon HD5670 (Mfr. Sample): 400 stream processors, 775 MHz core clock, 1000 MHz memory clock, 512MB GDDR5, 128-bit memory interface
  • MSI Radeon HD4830 (R4830 T2D512): 640 stream processors, 585 MHz core clock, 900 MHz memory clock, 512MB GDDR3, 256-bit memory interface
  • XFX Radeon HD5750 (HD-575X-ZN): 720 stream processors, 700 MHz core clock, 1150 MHz memory clock, 1024MB GDDR5, 128-bit memory interface
  • ATI Radeon HD5770 (Mfr. Sample): 800 stream processors, 850 MHz core clock, 1200 MHz memory clock, 1024MB GDDR5, 128-bit memory interface
  • ASUS GeForce GTX 260 (ENGTX260 MATRIX): 216 stream processors, 576 MHz core clock, 1242 MHz shader clock, 999 MHz memory clock, 896MB GDDR3, 448-bit memory interface

  • EVGA GeForce 8600GT (256-P2-N751-TR Forceware v195.62 WHQL)
  • ATI Radeon HD5670 (Engineering Sample Catalyst 8.69 RC3)
  • MSI Radeon HD4830 (R4830 T2D512 Catalyst 8.69_RC3)
  • XFX Radeon HD5750 (HD-575X-ZN Catalyst 8.69_RC3)
  • ATI Radeon HD5770 (Engineering Sample Catalyst 8.69_RC3)
  • ASUS GeForce GTX 260 (ENGTX260 MATRIX Forceware v195.62 WHQL)

Now we're ready to begin testing video game performance with these video cards, so please continue to the next page as we start with the 3DMark Vantage results.

3DMark Vantage Benchmark Results

3DMark Vantage is a computer benchmark by Futuremark (formerly named MadOnion) that measures the DirectX 10 gaming performance of graphics cards. A 3DMark score is an overall measure of your system's 3D gaming capabilities, based on comprehensive real-time 3D graphics and processor tests. By comparing your score with those submitted by millions of other gamers you can see how your gaming rig performs, making it easier to choose the most effective upgrades or find other ways to optimize your system.

There are two graphics tests in 3DMark Vantage: Jane Nash (Graphics Test 1) and New Calico (Graphics Test 2). The Jane Nash test scene represents a large indoor game scene with complex character rigs, physical GPU simulations, multiple dynamic lights, and complex surface lighting models. It uses several hierarchical rendering steps, including for water reflection and refraction, and physics simulation collision map rendering. The New Calico test scene represents a vast space scene with lots of moving but rigid objects and special content like a huge planet and a dense asteroid belt.

At Benchmark Reviews, we believe that synthetic benchmark tools are just as valuable as video games, but only so long as you're comparing apples to apples. Since the same test is applied in the same controlled method with each test run, 3DMark is a reliable tool for comparing graphic cards against one-another.

1680x1050 is rapidly becoming the new 1280x1024. More and more widescreen monitors are being sold with new systems or as upgrades to existing ones. Even in tough economic times, the tide cannot be turned back; screen resolution and size will continue to creep up. These two relatively low resolutions are the most appropriate for a review of mainstream hardware, and we'll also be using the following reduced settings for 3DMark Vantage: No Anti-Aliasing, 2x Anisotropic Filtering, all quality levels at Entry, and Post Processing Scale set at 1:2.

ATI_Radeon_HD5670_3DMark_Vantage_Jane_Nash.jpg

The first test ran pretty smoothly for most of the hardware, with the exception of the NVIDIA-based 8600GT. This low level of graphics performance is exactly why people need a low-cost upgrade solution. Once I installed the HD5670, I was back in the game, so to speak. 22 and 29 frames per second aren't exactly OMG performance, but at least games will be playable at reasonable screen resolutions. The newcomer's results are comparable to an HD4830, which was a respectable mainstream performer in the prior generation of ATI cards. From there, things only get better with increasing transistor count and corresponding increases in cost.

ATI_Radeon_HD5670_3DMark_Vantage_New_Calico.jpg

Test two is a little more challenging for most video cards, due to the large number of irregularly shaped asteroids that need to be rendered in New Calico. Once again, the HD5670 just barely gets in the gate, with FPS numbers in the teens at 1680x1050 resolution. Dropping down to 1280x1024 gets you up to 20 FPS, but for some reason this scene gives the HD4830 a leg up to near-5750 standards. We need to look at actual gaming performance to verify these results, so let's take a look in the next section, at how these cards stack up in the traditional standard bearer for gaming benchmarks, Crysis.

Crysis Benchmark Results

Crysis uses a new graphics engine: the CryENGINE2, which is the successor to Far Cry's CryENGINE. CryENGINE2 is among the first engines to use the Direct3D 10 (DirectX 10) framework, but can also run using DirectX 9, on Vista, Windows XP and the new Windows 7. As we'll see, there are significant frame rate reductions when running Crysis in DX10. It's not an operating system issue; DX9 works fine in Windows 7, but DX10 knocks the frame rates in half.

Roy Taylor, Vice President of Content Relations at NVIDIA, has spoken on the subject of the engine's complexity, stating that Crysis has over a million lines of code, 1GB of texture data, and 85,000 shaders. To get the most out of modern multicore processor architectures, CPU intensive subsystems of CryENGINE 2 such as physics, networking and sound, have been re-written to support multi-threading.

Crysis offers an in-game benchmark tool, which is similar to World in Conflict. This short test places a high amount of stress on a graphics card, since there are so many landscape features rendered. For benchmarking purposes, Crysis can mean trouble as it places a high demand on both GPU and CPU resources. Benchmark Reviews uses the Crysis Benchmark Tool by Mad Boris to test frame rates in batches, which allows the results of many tests to be averaged.

Once again, we are going to concentrate on relatively low-resolution testing performance. At the 1280x1024 resolution used by 17" and 19" monitors, the CPU and memory may have some influence on the results, but our test rig in this case is well above the specs that a typical HD5670 user will have, so we've eliminated that variable. At 1280x1024, and the widescreen resolution of 1680x1050, the performance differences between the video cards under test are mostly down to the cards.

ATI_Radeon_HD5670_Crysis_NoAA.jpg

The results here are consistent with 3DMark Vantage, in that the HD5670 pulls way ahead of the typical low-end video card. It loses about 5 FPS to an older HD4830, and its performance is always better than half that of the HD5770. For most users in this market space, the performance of the HD5670 is going to be a revelation, and they'll be pleased with the low cost and ease of installation. I chose to compare the HD5670 directly against its higher priced siblings because I wanted to see how well the GPU scales, when you cut the number of stream processors in half.

ATI_Radeon_HD5670_Crysis_4xAA.jpg

Once a decent amount of anti-aliasing is factored in, the HD5670 loses about 5 FPS. All those little improvements ATI made to the rendering processor pay off here, as the performance closes in on the HD4830. Frame rates are still somewhat below acceptable until you get to the high end cards. If you want to play this game in DX10 with eye candy turned on, you are going to have to pay.

In our next section, Benchmark Reviews tests with Devil May Cry 4 Benchmark. Read on to see how a blended high-demand GPU test with low video frame buffer demand will impact our test products.

Devil May Cry 4 Benchmark

Devil May Cry 4 was released for the PC platform in 2008 as the fourth installment in the Devil May Cry video game series. DMC4 is a direct port from the PC platform to the console versions, which operate at the native 720P game resolution with no other platform restrictions. Devil May Cry 4 uses the refined MT Framework game engine, which has been used for many popular Capcom game titles over the past several years.

MT Framework is an exclusive seventh generation game engine built to be used with games developed for the PlayStation 3 and Xbox 360, and for PC ports. MT stands for "Multi-Thread", "Meta Tools" and "Multi-Target". Capcom originally intended to license an outside engine, but none matched their specific requirements in performance and flexibility. Games using the MT Framework are originally developed on the PC and then ported to the other two console platforms.

On the PC version a special bonus called Turbo Mode is featured, giving the game a slightly faster speed, and a new difficulty called Legendary Dark Knight Mode is implemented. The PC version also has both DirectX 9 and DirectX 10 modes for the Windows XP, Vista, and Windows 7 operating systems.

It's always nice to be able to compare the results we receive here at Benchmark Reviews with the results you measure on your own computer system. Usually this isn't possible, since settings and configurations make it nearly impossible to match one system to the next; plus you have to own the game or benchmark tool we used.

Devil May Cry 4 fixes this, and offers a free benchmark tool available for download. Because the DMC4 MT Framework game engine is rather low-demand for today's cutting edge video cards, Benchmark Reviews used the 1680x1050 resolution to test with 8x AA (highest AA setting available to Radeon HD video cards) and 16x AF.

ATI_Radeon_HD5670_DMC4_DX10.jpg

Devil May Cry 4 is not as demanding a benchmark as it used to be. Only scenes #2 and #4 are worth looking at from the standpoint of trying to separate the fastest video cards from the slower ones. Still, it represents a typical environment for many games that our readers still play on a regular basis, so it's good to see what works with it and what doesn't. Any of the tested cards will do a credible job in this application, and the performance scales in a pretty linear fashion. You get what you pay for when running this game, at least for benchmarks. This is one time where you can generally use the maximum available anti-aliasing settings, so NVIDIA users should feel free to crank it up to 16X. The DX10 "penalty" is of no consequence here.

The HD5670 once again annihilates the 8600GT, and this time provides excellent frame rates of 45 and 55 FPS, well above the recommended 30 FPS minimum. This is a perfect example of a less demanding game achieving full performance with a mainstream graphics card. Despite the torture treatment that most of us put the cards through for benchmarking applications, a lot of games just don't put the same demands on a video card that we see in typical testing scenarios.

Our next benchmark of the series is for a very popular FPS game that rivals Crysis for world-class graphics.

Far Cry 2 Benchmark Results

Ubisoft has developed Far Cry 2 as a sequel to the original, but with a very different approach to game play and story line. Far Cry 2 features a vast world built on Ubisoft's new game engine called Dunia, meaning "world", "earth" or "living" in Farsi. The setting in Far Cry 2 takes place on a fictional Central African landscape, set to a modern day timeline.

The Dunia engine was built specifically for Far Cry 2 by the Ubisoft Montreal development team. It delivers realistic semi-destructible environments, special effects such as dynamic fire propagation and storms, real-time night-and-day sun and moon light cycles, a dynamic music system, and non-scripted enemy A.I. actions.

The Dunia game engine takes advantage of multi-core processors as well as multiple processors and supports DirectX 9 as well as DirectX 10. Only 2 or 3 percent of the original CryEngine code is re-used, according to Michiel Verheijdt, Senior Product Manager for Ubisoft Netherlands. Additionally, the engine is less hardware-demanding than CryEngine 2, the engine used in Crysis. However, it should be noted that Crysis delivers greater character and object texture detail, as well as more destructible elements within the environment. For example, trees break into many smaller pieces and buildings break down into their component panels. Far Cry 2 also supports the amBX technology from Philips. With the proper hardware, this adds effects like vibrations, ambient colored lights, and fans that generate wind effects.

There is a benchmark tool in the PC version of Far Cry 2, which offers an excellent array of settings for performance testing. Benchmark Reviews used slightly lower settings for this test, with the resolution set to 1280x1024 and 1680x1050. The performance settings were all set to 'Medium', Render Quality was set to 'Optimum' (which was the same as "High" in this case), 4x anti-aliasing was applied, and HDR and Bloom were enabled. Of course DX10 was used exclusively for this series of tests.

ATI_Radeon_HD5670_Far_Cry_2_DX10.jpg

Although the Dunia engine in Far Cry 2 is slightly less demanding than CryEngine 2 engine in Crysis, the strain appears to be similar. Far Cry 2 also seems to have been optimized for, or at least written with a clear understanding of, DX10 requirements.

Using the short 'Ranch Small' time demo (which yields the lowest FPS of the three tests available), only the modern video cards are capable of producing playable frame rates with moderate settings applied. The Radeon HD5670 is one of those that gets its chin above the bar in this game. Although the Dunia engine seems to be optimized for NVIDIA chips, the improvements ATI incorporated in their latest GPUs are just enough to allow this game to be played with a mid-range card, albeit one at the upper end of the range. Older ATI products struggle with this benchmark, and if you've got one of those, the HD5xxx cards would be a good upgrade for playing this game.

Our next benchmark of the series puts our collection of video cards against some very demanding graphics in the newly released Resident Evil 5 benchmark.

Resident Evil 5 Benchmark Results

PC gamers get the ultimate Resident Evil package in this new PC version with exclusive features including NVIDIA's new GeForce 3D Vision technology (wireless 3D Vision glasses sold separately), new costumes and a new mercenaries mode with more enemies on screen. Delivering an infinite level of detail, realism and control, Resident Evil 5 is certain to bring new fans to the series. Incredible changes to game play and the world of Resident Evil make it a must-have game for gamers across the globe.

Years after surviving the events in Raccoon City, Chris Redfield has been fighting the scourge of bio-organic weapons all over the world. Now a member of the Bioterrorism Security Assessment Alliance (BSAA), Chris is sent to Africa to investigate a biological agent that is transforming the populace into aggressive and disturbing creatures. New cooperatively-focused game play revolutionizes the way that Resident Evil is played. Chris and Sheva must work together to survive new challenges and fight dangerous hordes of enemies.

From a gaming performance perspective, Resident Evil 5 uses Next Generation of Fear - Ground breaking graphics that utilize an advanced version of Capcom's proprietary game engine, MT Framework, which powered the hit titles Devil May Cry 4, Lost Planet and Dead Rising. The game uses a wider variety of lighting to enhance the challenge. Fear Light as much as Shadow - Lighting effects provide a new level of suspense as players attempt to survive in both harsh sunlight and extreme darkness. As usual, we maxed out the graphics settings on the benchmark version of this popular game, to put the hardware through its paces. Much like Devil May Cry 4, it's relatively easy to get good frame rates in this game, so take the opportunity to turn up all the knobs and maximize the visual experience.

ATI_Radeon_HD5670_Resident_Evil_5_DX10.jpg

The Resident Evil 5 benchmark tool provides a graph of continuous frame rates and averages for each of four distinct scenes. In addition, it calculates an overall average for the four scenes. The averages for scenes #3 and #4 are what we report here, as they are the most challenging.

This new game is quite playable with this mainstream hardware, at lower screen resolutions. Once again, the HD5670 trumps the older NVIDIA card, and hangs tight with last year's gaming bargain, the HD4830. This is quite an accomplishment when you consider that the HD4830 has 60% more stream processors to put in play. You are still going to get a smoother experience out of the higher priced cards, but we ran this test with all the settings turned up to full, so there is room for improvement in these scores.

Our next sections look at thermal performance and power consumption, both key qualities for this mainstream product.

ATI Radeon HD5670 Temperature

It's hard to know exactly when the first video card got overclocked, and by whom. What we do know is that it's hard to imagine a computer enthusiast or gamer today that doesn't overclock their hardware. Of course, not every video card has the head room. Some products run so hot that they can't suffer any higher temperatures than they generate straight from the factory. This is why we measure the operating temperature of the video card products we test.

To begin testing, I use GPU-Z to measure the temperature at idle as reported by the GPU. Next I use FurMark 1.7.0 to generate maximum thermal load and record GPU temperatures at high-power 3D mode. The ambient room temperature remained stable at 23C throughout testing. The ATI Radeon HD5670 video card recorded 38C in idle 2D mode, and increased to 60C after 20 minutes of stability testing in full 3D mode, at 1920x1200 resolution and the maximum MSAA setting of 8X. The fan was left on its stock settings for this test.
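
If you want to repeat this at home, GPU-Z can log its sensor readings to a text file while FurMark runs; a minimal Python sketch for pulling the idle and peak temperatures back out of such a log might look like this (the log file name and the temperature column header are assumptions that vary by GPU-Z version, so check your own log first):

```python
import csv

LOG_FILE = "GPU-Z Sensor Log.txt"   # assumed default log name; adjust to match your setup
TEMP_KEY = "GPU Temperature"        # assumed substring of the temperature column header

temps = []
with open(LOG_FILE, newline="", encoding="utf-8", errors="ignore") as f:
    for row in csv.DictReader(f):
        for column, value in row.items():
            if column and TEMP_KEY in column:
                try:
                    temps.append(float(value))
                except (TypeError, ValueError):
                    pass   # skip blank or malformed readings

print(f"first reading (idle): {temps[0]:.0f} C, peak under load: {max(temps):.0f} C")
```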

60°C is an excellent result for temperature stress testing, especially with stock fan settings. The fan has no controller that I could see, and the motor runs on two wires, so there was no tachometer output available. For such a simple, low cost card, it does the job without any excess complications. People are going to see versions of the HD5670 come out with dual-slot coolers, and wonder if the single-slot reference cooler is up to the job. The answer is, undoubtedly yes.

FurMark is an OpenGL benchmark that heavily stresses and overheats the graphics card with fur rendering. The benchmark offers several options allowing the user to tweak the rendering: fullscreen / windowed mode, MSAA selection, window size, and duration. The benchmark also includes a GPU Burner mode (stability test). FurMark requires an OpenGL 2.0 compliant graphics card with lots of GPU power! As an oZone3D.net partner, Benchmark Reviews offers a free download of FurMark to our visitors.

ATI_Radeon_HD5670_furmark_temp.jpg

FurMark does two things extremely well: it drives the thermal output of any graphics processor higher than any other application or video game, and it does so with consistency every time. While FurMark is not a true benchmark tool for comparing different video cards, it still works well to compare one product against itself using different drivers or clock speeds, or to test the stability of a GPU, since it raises temperatures higher than any other program. But in the end, it's a rather limited tool.

In our next section, we discuss electrical power consumption and learn how well (or poorly) each video card will impact your utility bill...

VGA Power Consumption

Life is not as affordable as it used to be, and items such as gasoline, natural gas, and electricity all top the list of resources which have exploded in price over the past few years. Add to this the limit of non-renewable resources compared to current demands, and you can see that the prices are only going to get worse. Planet Earth needs our help, and needs it badly. With forests becoming barren of vegetation and snow capped poles quickly turning brown, the technology industry has a new attitude towards suddenly becoming "green". I'll spare you the powerful marketing hype that I get from various manufacturers every day, and get right to the point: your computer hasn't been doing much to help save energy... at least up until now.

ATI_Radeon_HD5670_GPU-Z_0.3.png

To measure isolated video card power consumption, Benchmark Reviews uses the Kill-A-Watt EZ (model P4460) power meter made by P3 International. A baseline test is taken without a video card installed inside our computer system, which is allowed to boot into Windows and rest idle at the login screen before power consumption is recorded. Once the baseline reading has been taken, the graphics card is installed and the system is again booted into Windows and left idle at the login screen. Our final loaded power consumption reading is taken with the video card running a stress test using FurMark. Below is a chart with the isolated video card power consumption (not system total) displayed in Watts for each specified test product:

Video Card Power Consumption by Benchmark Reviews

VGA Product Description (sorted by combined total power) - Idle Power / Loaded Power

NVIDIA GeForce GTX 480 SLI Set: 82 W idle / 655 W loaded
NVIDIA GeForce GTX 590 Reference Design: 53 W idle / 396 W loaded
ATI Radeon HD 4870 X2 Reference Design: 100 W idle / 320 W loaded
AMD Radeon HD 6990 Reference Design: 46 W idle / 350 W loaded
NVIDIA GeForce GTX 295 Reference Design: 74 W idle / 302 W loaded
ASUS GeForce GTX 480 Reference Design: 39 W idle / 315 W loaded
ATI Radeon HD 5970 Reference Design: 48 W idle / 299 W loaded
NVIDIA GeForce GTX 690 Reference Design: 25 W idle / 321 W loaded
ATI Radeon HD 4850 CrossFireX Set: 123 W idle / 210 W loaded
ATI Radeon HD 4890 Reference Design: 65 W idle / 268 W loaded
AMD Radeon HD 7970 Reference Design: 21 W idle / 311 W loaded
NVIDIA GeForce GTX 470 Reference Design: 42 W idle / 278 W loaded
NVIDIA GeForce GTX 580 Reference Design: 31 W idle / 246 W loaded
NVIDIA GeForce GTX 570 Reference Design: 31 W idle / 241 W loaded
ATI Radeon HD 5870 Reference Design: 25 W idle / 240 W loaded
ATI Radeon HD 6970 Reference Design: 24 W idle / 233 W loaded
NVIDIA GeForce GTX 465 Reference Design: 36 W idle / 219 W loaded
NVIDIA GeForce GTX 680 Reference Design: 14 W idle / 243 W loaded
Sapphire Radeon HD 4850 X2 11139-00-40R: 73 W idle / 180 W loaded
NVIDIA GeForce 9800 GX2 Reference Design: 85 W idle / 186 W loaded
NVIDIA GeForce GTX 780 Reference Design: 10 W idle / 275 W loaded
NVIDIA GeForce GTX 770 Reference Design: 9 W idle / 256 W loaded
NVIDIA GeForce GTX 280 Reference Design: 35 W idle / 225 W loaded
NVIDIA GeForce GTX 260 (216) Reference Design: 42 W idle / 203 W loaded
ATI Radeon HD 4870 Reference Design: 58 W idle / 166 W loaded
NVIDIA GeForce GTX 560 Ti Reference Design: 17 W idle / 199 W loaded
NVIDIA GeForce GTX 460 Reference Design: 18 W idle / 167 W loaded
AMD Radeon HD 6870 Reference Design: 20 W idle / 162 W loaded
NVIDIA GeForce GTX 670 Reference Design: 14 W idle / 167 W loaded
ATI Radeon HD 5850 Reference Design: 24 W idle / 157 W loaded
NVIDIA GeForce GTX 650 Ti BOOST Reference Design: 8 W idle / 164 W loaded
AMD Radeon HD 6850 Reference Design: 20 W idle / 139 W loaded
NVIDIA GeForce 8800 GT Reference Design: 31 W idle / 133 W loaded
ATI Radeon HD 4770 RV740 GDDR5 Reference Design: 37 W idle / 120 W loaded
ATI Radeon HD 5770 Reference Design: 16 W idle / 122 W loaded
NVIDIA GeForce GTS 450 Reference Design: 22 W idle / 115 W loaded
NVIDIA GeForce GTX 650 Ti Reference Design: 12 W idle / 112 W loaded
ATI Radeon HD 4670 Reference Design: 9 W idle / 70 W loaded
* Results are accurate to within +/- 5W.

The ATI Radeon HD5670 pulled 16 (146-130) watts at idle and 79 (209-130) watts when running full out, using the test method outlined above. These numbers are broadly consistent with the factory figures of 14W at idle and 61W under load. With the video card's power demands dropping so low, this type of test starts to show its limitations. Nevertheless, the numbers are still in the ballpark.
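
The arithmetic behind those isolated numbers is simply the wall reading with the card installed minus the baseline reading without it. Here is a minimal sketch using the readings quoted above (no correction is made for power supply efficiency, which is one reason the loaded figure comes out higher than ATI's board power spec):

```python
def isolated_card_power(reading_with_card_w, baseline_w):
    """Isolated video card draw: wall reading with the card minus the baseline without it."""
    return reading_with_card_w - baseline_w

baseline = 130        # system idle at the login screen with no discrete video card (W)
idle_reading = 146    # idle at the login screen with the HD5670 installed (W)
load_reading = 209    # FurMark stress test running on the HD5670 (W)

print(isolated_card_power(idle_reading, baseline))   # -> 16 W idle
print(isolated_card_power(load_reading, baseline))   # -> 79 W loaded
```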

Radeon HD5670 Final Thoughts

The alternative title for this review could have been "Honey, I Shrunk the Kids". I wonder how many more times ATI can slice the pie and still come up with a fully functional video card. There may be one more cut, for an ultra-low power solution, and God knows what kind of mischief the engineers play with the mobile solutions, but I think this is probably it, for a card that can honestly support gaming applications as well as general usage and HD video. I almost wonder if ATI didn't work backwards: take the marketing spec for the HD5670 and multiply it by four to get the HD5870. However they did it (and trust me, someone ran the numbers; you can be sure of that), the fact remains that the Radeon HD5670 is perfectly suited for the target market.

ATI_Radeon_HD5670_Fusion_Ball.jpg

I used to laugh at the concept of marrying the GPU with a CPU. I mean, every new CPU seemed to be bigger and burn hotter than the previous iteration. The problem was, I kept looking at the top of the pile, where 140 watts power dissipation from the CPU was not a bad dream, but a reality to be dealt with. One look at an NVIDIA GT200 chip and it was all I could do to stop from laughing out loud when thinking about CPU-GPU integration. Well, with just one generation of progress in IC lithography, from 55nm down to 40nm, suddenly, all things seem possible. Once everyone gets their feet wet at 32nm, I think there will be some products on the table that do exactly what I thought impossible last year or so. Clearly, the first chips to market won't support realistic gaming scenarios, but in one or two more generations, it could happen.

ATI Radeon HD5670 Conclusion

The performance of the HD5670 is pretty amazing, considering the modest looking hardware and low cost. One way of showing this objectively is to look at the power required to deliver the performance. The HD5670 offers slightly less performance than an HD4830 for less than half the power, and that's at full load, without all the power saving tricks that are used to get the idle power to 14 watts. It's 10 degrees cooler, too, at both idle and full load, with a tiny, single-slot cooler. Performance is more than just frames-per-second, though; the ability to run 2-3 monitors with full ATI Eyefinity support counts, too. Plus, we've been measuring performance with Beta drivers. So, while the raw performance numbers are good enough for the target price point today, I predict even better things to come for both price and performance.

ATI_Radeon_HD5670_34_Black.jpg

The appearance of the product itself is both small and substantial. The cooler housing is pretty simple for the most part; the design is clean and offers a perfect canvas for the partners to display their best artwork. There are definitely some non-reference designs in the works, but the usual motivation for that effort is improved thermal performance. I don't see that as a real necessity with this card/chip combo. The reference design has plenty of cooling capacity for the tiny Redwood GPU.

The build quality of the Radeon 5670 was good, for an engineering sample. The parts were all high quality, and while the PC board may have had a few rough edges, the cooler section was manufactured and assembled perfectly.

The features of the HD5670 have been carried over in full measure from the HD5800 series: DirectX 11, Full ATI Eyefinity Support, ATI Stream Technology Support, DirectCompute 11 and OpenCL Support, HDMI 1.3a with Dolby True HD and DTS Master Audio. Nothing was left out on this card, despite it being produced for a price point well below its kin. We've barely scratched the surface of the features in this review, but clearly the card will thrive in a multi-functional role, as well as provide a solid entry-level gaming experience.

ATI is aiming at a price point of $99 for the HD5670 with 512MB of GDDR5 RAM. The Sapphire Radeon HD 5670 sells for $90, while the PowerColor AX5670 lists for $95 and XFX for $100. A quick look at Newegg shows this to be the target price for most GT240 cards, with GDDR5 memory. ATI priced this card right at the GT240, knowing that it had an advantage, performance-wise. In addition, it has advanced features that the other cards can't match. I expect pricing to be more dynamic in this sector, as the competition is fierce for this, the largest share of the consumer pie.

The ATI Radeon HD5670 earns a Golden Tachometer Award, because it's the card many people kept wishing for. The mainstream consumer wanted something powerful, easy to install, and cheap. ATI hit all three targets with a low power solution that answers the eternal question: "Can I use this video card with my XYZ OEM power supply?" For every mainstream user you know, who wants a cheap, easy upgrade, and maybe a dual-monitor setup, too...the ATI Radeon HD5670 fits the bill.

Pros:

+ Unmatched feature set
+ Extremely low power consumption
+ 620 GigaFLOPS for $99 (at launch)
+ HDMI and DisplayPort interfaces
+ Cool, quiet operation
+ Sleek, modern looks
+ Low heat generation inside the case

Cons:

- Small, simple hardware design won't impress others
- Price structure seems a little high (launch pricing...)

Ratings:

  • Performance: 9.50
  • Appearance: 9.00
  • Construction: 9.25
  • Functionality: 9.75
  • Value: 9.00

Final Score: 9.3 out of 10.

Excellence Achievement: Benchmark Reviews Golden Tachometer Award.

Questions? Comments? Benchmark Reviews really wants your feedback. We invite you to leave your remarks in our Discussion Forum.


Comments

# awesome card! - Awesome man, 2010-03-04 11:25
Best card i have ever used!

# which company's 5670 did you buy? - Don, 2010-03-24 07:00
Which company's 5670 did you buy Aus or XFX?

# HIS 5670 - Bill, 2010-04-05 11:32
I bought a HIS 5670 early march. It gave me horrifying crashes during playing games. I played BF2:BC2 and Dirt 2, both at 1680x1050 with high settings.

Although the games played well, the crashes couldn't be tolerated and so I returned the card. I am thinking of buying it again as the new drivers seem to fix the problem as I read on many forums.

If the new 10.3 drivers did in fact fix the problems, please confirm someone with this experience.

Bill

# HD5670 - Ian Ray Betron, 2010-05-01 03:40
Yep, ATI Catalyst 10.4 stabilized everything (on my XFX 5670). Very nice card for its class.

# Price/=/performance - Banzai, 2010-05-19 07:07
Overall it's a great card for people on a tight budget, smaller cases such as prebuilts and slim builds, and those who have a psu under 300-400w or can't connect via pci-e. This card is a great substitution for those who originally wanted a 4650/70, or even a 9800/4850 as it competes with them. Price wise, you get the performance of a 4850 card without the need for a 6-pin, a med size case, or an extremely powerful cpu (anything less than a dual core at 2.4GHz will bottleneck it). The main gains over a 4850, is that you get a dx11 card for nearly 20$ less, supports eyefinity (although it costs to make this card use three screens) and has DDR5. So if you wanted a 4850/lesser or a 9800gt/lesser than this card is the better option.
Sadly Radeon has been upping their prices, although their still lower than Nvidia cards. Although you get a 10fps performance over a 4670, the near 20-30$ increase on a card that costs lesser to manufacture and produce isn't worth it.

# My HD 5760 is too hot - LittleBear, 2010-06-22 20:06
The idle temp. is at 59°C. Do you think something is wrong with the card?

# I.T pro. - Joe, 2010-07-13 00:38
update the software of the 5760...

# Which SW? - BruceBruce, 2010-07-13 05:03
Do you mean the BIOS on the card or the Driver SW?

# hit tab to see score then crashes to desktop - Pall, 2010-07-17 22:14
238fb22844c3fe1cfcrashea.dice.RomePC.Cl2010-07-16 07:16:08553292 0x61D866 0x61D866 0xCD3A6B 0xCDA77D 0xCEDE9E 0xCEE518 0xCD43D0 0xD8C0EF 0xCD8EC0 0x13F3758 0x9089AE 0x62B8E2 0x7635166B 0x800000 0x6B3B2A49 0x6B3B90E8 0x6B3D8C16 0x6B3C0ED5 0x6D0B26FB 0x6CBE62C1 0x6CBE87E0 0xD908C1 0x508FA7 0xA6D50C 0x7635166B 0x7635166B 0x62BEB9 0x595F3B 0x76351126 0x596149 0x75980849 0x591B4E 0x591B4E 0x153BD80 0x5992A6 0x153BDA4 0xAAF058 0x14B6808 0x71F929BB 0x153BD80 0x71F98CED 0x71F92A47 0x76353677 0x76EC9D42 0x76379775 0x76379775 0x76F003DD 0x72A84B 0x76EC9D15 0x71F929E1 0x71F929E1registers: Edi 52057c10, Esi 00000000, Ebx 191019f0, Edx 00000000, Ecx 0d30b700, Eax 20f604d3 Ebp 0e92fa50, Eip 0061d866, Esp 0e92fa40

# RE: hit tab to see score then crashes to desktop - Olin Coles, 2010-07-17 22:16
Is that supposed to be a crash report? Not very helpful information, and probably a game/driver bug.

# same as above - Pall, 2010-07-17 22:15
I have updated my driver to 10.6 and have not found a solution

# RE: same as above - Olin Coles, 2010-07-17 22:19
Did this start before or after the driver update? I've had many problems with 10.6, and a few issues with 10.5. I'm actually still using 10.4 for gaming.
What game is this? Is it via Steam?
