XFX Radeon HD 5850 HD-585A-ZNFC
Reviews - Featured Reviews: Video Cards
Written by Olin Coles
Saturday, 19 December 2009
XFX Radeon HD5850 HD-585A-ZNFC

XFX knows that gamers want unmatched performance from their hardware, which is why they now supply ATI Radeon desktop graphics as well as NVIDIA GeForce products. There's no better time for AMD-designed video cards than now, as the Radeon 5800-series has climbed to the top of gamers' most-wanted lists. In this article, Benchmark Reviews tests the XFX Radeon HD5850 HD-585A-ZNFC video card against a large cross-section of modern graphics accelerators and explores the visual quality Microsoft Windows 7 will deliver with DirectX 11. Armed with 1440 shader cores, the 40nm Cypress GPU in the HD5850 is positioned to offer excellent value in the upper mid-range, and hits the sweet spot for DX11 gamers. The list of DirectX 11 video games has only just started to grow, with one of the first being a free Massively Multiplayer Online Role Playing Game (MMORPG) named BattleForge. Perhaps ATI has created the perfect storm for their Radeon HD 5800-series by offering a price-competitive graphics card with several free games included or available. While NVIDIA toils away with CUDA and PhysX, ATI is busy delivering the next generation of hardware for the gaming community to enjoy. AMD launched the Radeon 5800-series as the first showcase of their multi-monitor ATI Eyefinity Technology, using native HDMI 1.3 output paired with DisplayPort connectivity. The new Cypress GPU features the latest ATI Stream Technology, which is designed to utilize DirectCompute 5.0 and OpenCL code. These new features improve all graphical aspects of the end-user experience, such as faster multimedia transcode times and better GPGPU compute performance. AMD has already introduced a DirectCompute partnership with CyberLink, and the recent Open Physics Initiative with Pixelux promises to offer physics middleware built around OpenCL and Bullet Physics.
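The shader-core counts quoted here translate directly into the peak-throughput figures cited for these cards, since each stream processor can issue one multiply and one add (two floating-point operations) per clock. A back-of-the-envelope sketch, assuming the published ATI reference core clocks of 725 MHz (HD 5850) and 850 MHz (HD 5870), which are not stated in this review:

```python
def peak_gflops(shader_cores: int, core_clock_mhz: float) -> float:
    """Peak single-precision GFLOPS: 2 FLOPs (multiply + add) per core per clock."""
    return shader_cores * 2 * core_clock_mhz / 1000.0

# Radeon HD 5850: 1440 shader cores at an assumed 725 MHz reference clock
print(peak_gflops(1440, 725))   # 2088.0 GFLOPS, i.e. about 2.09 TFLOPS
# Radeon HD 5870: 1600 shader cores at an assumed 850 MHz reference clock
print(peak_gflops(1600, 850))   # 2720.0 GFLOPS -- the 2.72 TFLOPS quoted for the series
```

This is why the 2.72 TFLOPS headline number belongs to the HD 5870 specifically; the HD 5850, with fewer cores and a lower clock, peaks somewhat below it.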
This looks like ATI's recipe for success, since NVIDIA does not yet have a GPU that can compete against the Radeon 5800 series or support DirectX 11. It doesn't help matters that NVIDIA GPUs do not support OpenCL and DirectCompute 11 environments, leaving them out in the cold for the coming winter months. Through these developments ATI has distanced itself from NVIDIA by placing gamers first in its considerations, and has positioned the ATI 5xxx-series to introduce enthusiasts to a new world of DirectX 11 video games on the Microsoft Windows 7 operating system. While most hardware enthusiasts are familiar with the back-and-forth competition between these two leading GPU makers, it might come as a surprise that NVIDIA actually remarked that DirectX 11 video games won't fuel video card sales, and has instead decided to revolutionize the military with CUDA technology. Perhaps we're seeing the evolution of two companies: NVIDIA transitioning to the industrial sector and departing the enthusiast gaming space, while ATI successfully answers retail consumer demand.
As of 23 September 2009, AMD could rightfully claim that the ATI Cypress GPU inside the Radeon HD 5800 series achieves 2.72 TeraFLOPS, more compute power than any other known microprocessor. ATI's next-generation Radeon HD 5800 graphics cards are also the world's first and only GPUs to fully support Microsoft DirectX 11, the new gaming and compute standard that ships with the Microsoft Windows 7 operating system. The ATI Radeon HD 5800 series effectively doubles the value consumers can expect from their graphics purchases, beginning with the release of two cards: the ATI Radeon HD 5870 and the ATI Radeon HD 5850, each with 1GB of GDDR5 memory. With the ATI Radeon HD 5800 series of graphics cards, PC users can expand their computing experience with ATI Eyefinity multi-display technology, accelerate their computing experience with ATI Stream technology, and dominate the competition with superior gaming performance and full support of Microsoft DirectX 11, making it a "must-have" consumer purchase just in time for the Microsoft Windows 7 operating system. Built to the full DirectX 11 specification, the ATI Radeon HD 5800 series delivers up to 2.72 TeraFLOPS of compute power in a single card, translating to superior performance in the latest DirectX 11 games, as well as in DirectX 9, DirectX 10, DirectX 10.1 and OpenGL titles, in single-card configurations or multi-card configurations using ATI CrossFireX technology. When measured in terms of performance in some of today's most popular games, the ATI Radeon HD 5800 series is up to twice as fast as the closest competing product in its class, allowing gamers to enjoy incredible new DirectX 11 games - including the forthcoming DiRT 2 from Codemasters, Aliens vs. Predator from Rebellion, and updated versions of The Lord of the Rings Online and Dungeons and Dragons Online: Eberron Unlimited from Turbine - all in stunning detail with incredible frame rates.

About XFX (Pine Group)
XFX is a division of PINE Technology Holdings Limited, a leading manufacturer and marketer of innovative solutions for the worldwide gaming technologies market. Founded in 1989, PINE designs, develops, manufactures and distributes high-performance video graphics technology and computer peripherals. The company's dedicated research and development team continually pushes the limits to meet the demands of the ever-growing, performance-driven community. The company has more than 1,000 employees worldwide with 17 offices around the globe. With distribution in over 50 countries, PINE's XFX division maintains two state-of-the-art research and development facilities in Taiwan and Shenzhen, technical support centers in the U.S., Europe and Asia, product marketing in the U.S., and a factory in Mainland China. To learn more about PINE, please visit https://www.pinegroup.com/

XFX Double Lifetime Protection

Nothing tops our warranty. It's not just a limited lifetime warranty, it's a transferable lifetime warranty. So, should you sell your XFX Radeon HD Series card, whoever you give it to or sell it to is protected as well. Better still, it's the best card on the planet for gamers who push our cards to the limit.

ATI Radeon HD 5850 Features
HD-585A-ZNFC Specifications
Video Card Attributes
XFX HD5850 Closer Look

The video card industry is hurting as badly as any other during this economic recession, and nobody is walking around happy about PC graphics these days. They can't, really, not when many of the latest video game titles for the personal computer are released only after console versions have been made available first. Even once you get past that initial burn, you're greeted by yet another. During the 2009 business year we've seen dozens of great video games released on the PC platform, but very few of them demand any more graphical processing power than games demanded back in 2006. Video cards certainly got bigger and faster, but video games were seriously lacking fresh development. DirectX 10 helped the industry, but every step forward came with two steps back because of Windows Vista. Now, with DirectX 11 introduced alongside Windows 7, enthusiasts finally have new levels of detail and special effects in their video games.
The XFX Radeon HD 5850 is identical to the ATI reference design and measures exactly 9.5" long, a full inch shorter than the 10.5"-long GeForce GTX 260/285 it competes with. ATI produces and directly supplies every Radeon HD 5850 that operates at this reference specification, with add-in board partners (such as XFX) supplying the necessary artwork.
Most overclocker-enthusiasts prefer an externally-exhausting VGA cooler (such as the one used on reference-design Radeon HD 5850 video cards) over a cooler that vents back into the computer case. While the majority of the heated air does pass through the vent on the I/O plate, that vent measures only about 0.5" x 1.5". Additional ventilation is located directly beside the external vent, along the 'spine' of the video card near the I/O plate (visible below).
Probably the newest addition to the ATI Radeon series is the inclusion of a DisplayPort output beside a native HDMI 1.3 port; this is available on all Radeon HD 5700/5800-series video cards, and is not specific to the XFX HD-585A-ZNFC model we're testing. Two DVI digital video outputs can be connected to monitors for dual-view, or a third monitor can be added via DisplayPort to enable ATI Eyefinity technology.
The reference cooling unit on the XFX/ATI Radeon HD 5850 video card is held tight to the 40nm "Cypress" GPU with four screws in the corner-reinforced bracket and ten plastic screws on the back of the PCB. Considering the Cypress GPU die is a rather large 334mm² and packs 2,154 million transistors, the overall heat dissipation is spread over a suitable landscape. The double-height cooler does a very good job of cooling the 5850, but a tremendous amount of heat still builds up on the backside of the PCB. If you're an overclocker, there isn't much that can be done to help cool the unit from the reverse side of the circuit board; then again, there aren't any vRAM modules mounted on the backside of the PCB on this video card.
In the next several sections, Benchmark Reviews details our methodology for testing video cards, followed by a performance comparison against many of the most popular graphics accelerators available. The XFX ATI Radeon HD 5850 competes against the NVIDIA GeForce GTX 260 and 285 and the Radeon HD 4890, so of course we'll be keeping a close eye on comparative performance.

VGA Testing Methodology

As of October 2009, Benchmark Reviews has discontinued testing on the Windows XP (DirectX 9) operating system and moved to the recently-launched Windows 7, although we recognize that 52% or more of the gaming world still uses Windows XP at this time. DirectX 11 is native to the Microsoft Windows 7 operating system, and will be the centerpiece of our test platform for the foreseeable future. In some tests DirectX 10 is utilized on the Windows 7 platform simply to allow uniform test levels between hardware devices that may not support the DX11 API. According to the Steam Hardware Survey published at the time of the Windows 7 launch, the most popular gaming resolution is 1280x1024 (17-19" standard LCD monitors), closely followed by 1024x768 (15-17" standard LCD). However, because these resolutions are considered 'low' by most standards, our benchmark performance tests concentrate on the up-and-coming higher-demand resolutions: 1680x1050 (22-24" widescreen LCD) and 1920x1200 (24-28" widescreen LCD monitors). In each benchmark conducted, five tests are run at each setting, with the highest and lowest results discarded. The remaining three results are averaged and displayed in the performance charts.
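The sampling procedure described above (five runs per setting, discard the best and worst, average the middle three) is a simple trimmed mean, and can be sketched as follows; the function name is ours, not Benchmark Reviews':

```python
def trimmed_average(runs: list) -> float:
    """Average five benchmark runs after discarding the highest and lowest result."""
    if len(runs) != 5:
        raise ValueError("methodology expects exactly five runs per setting")
    ordered = sorted(runs)
    return sum(ordered[1:-1]) / 3  # keep only the middle three results

# Example: five FPS samples from a hypothetical benchmark pass
print(trimmed_average([42.1, 44.0, 43.2, 47.9, 43.6]))  # averages 43.2, 43.6 and 44.0
```

Discarding the outliers this way keeps a single driver hiccup or background task from skewing the charted figure.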
Intel X58 Test System
Benchmark Applications
Video Card Test Products
ATI Eyefinity Multi-Monitors

ATI Eyefinity advanced multiple-display technology launches a new era of panoramic computing: it helps boost productivity and multitasking with innovative graphics display capabilities supporting massive desktop workspaces, creates ultra-immersive computing environments with super-high resolution gaming and entertainment, and enables easy configuration of up to six independent display outputs simultaneously. In the past, multi-display systems catered to professionals in specific industries. Financial, gas and oil, and medical are just some industries where multi-display systems are not only desirable, but a necessity. Today, even graphic designers, CAD engineers and programmers are attaching more than one display to their workstations. A major benefit of a multi-display system is simple and universal: it enables increased productivity. This has been demonstrated in industry studies which have shown that attaching more than one display device to a PC can significantly increase user productivity. The early multi-display solutions were non-ideal. Bulky CRT monitors claimed too much desk space, thinner LCD monitors were very expensive, and external multi-display hardware was inconvenient and also very expensive. These issues are much less of a concern today. LCD monitors are very affordable, and current-generation GPUs can drive multiple display devices independently and simultaneously, without the need for external hardware. Despite the advancements in multi-display technology, AMD engineers still felt there was room for improvement, especially regarding the display interfaces. VGA carries analog signals and needs a dedicated DAC per display output, which consumes power and ASIC space. Dual-Link DVI is digital, but requires a dedicated clock source per display output and uses too many I/O pins on the GPU. To overcome the two-displays-per-GPU barrier, it was clear that a superior display interface was needed.
In 2004, a group of PC companies collaborated to define and develop DisplayPort, a powerful and robust digital display interface. At that time, engineers working for the former ATI Technologies Inc. were already thinking about a more elegant solution to drive more than two display devices per GPU, and it was clear that DisplayPort was the interface of choice for this task. In contrast to other digital display interfaces, DisplayPort does not require a dedicated clock signal for each display output. In fact the data link is fixed at 1.62Gbps or 2.7Gbps per lane, irrespective of the timing of the attached display device. The benefit of this design is that one reference clock source can provide the clock signals needed to drive as many DisplayPort display devices as there are display pipelines in the GPU. In addition, with the same number of IO pins used for Single-Link DVI, a full speed DisplayPort link can be driven which provides more bandwidth and translates to higher resolutions, refresh rates and color depths. All these benefits perfectly complement ATI Eyefinity Multi-Display Technology.
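The fixed per-lane link rates quoted above make DisplayPort bandwidth easy to reason about. A rough sketch, assuming a standard four-lane link and the 8b/10b line coding used by DisplayPort 1.1 (the 80% efficiency factor is standard DisplayPort behavior, not a figure from this article):

```python
def dp_payload_gbps(lanes: int, lane_rate_gbps: float) -> float:
    """Effective DisplayPort payload bandwidth: lanes x lane rate x 8b/10b efficiency."""
    return lanes * lane_rate_gbps * 0.8

# Four lanes at the 2.7 Gbps high bit rate -- the full-speed link described above
print(dp_payload_gbps(4, 2.7))   # 8.64 Gbps of payload bandwidth
# Four lanes at the 1.62 Gbps reduced bit rate
print(dp_payload_gbps(4, 1.62))  # 5.184 Gbps
```

Because the link runs at one of these fixed rates regardless of the attached panel's pixel clock, a single reference clock can serve every DisplayPort output on the GPU, which is precisely the property Eyefinity exploits.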
ATI Eyefinity Technology from AMD provides advanced multiple-monitor capability, delivering an incredibly immersive graphics and computing experience with innovative display features that support massive desktop workspaces and super-high resolution gaming environments. Legacy GPUs have supported up to two simultaneous, independent display outputs for more than a decade; until now, graphics solutions have supported more than two monitors only by combining multiple GPUs on a single graphics card. With the introduction of AMD's next-generation graphics product series supporting DirectX 11, a single GPU now has the advanced capability of simultaneously supporting up to six independent display outputs. ATI Eyefinity Technology is closely aligned with AMD's DisplayPort implementation, providing the flexibility and upgradability modern users demand. Up to two DVI, HDMI, or VGA display outputs can be combined with DisplayPort outputs for a total of up to six monitors, depending on the graphics card configuration. The initial AMD graphics products with ATI Eyefinity technology will support a maximum of three independent display outputs via a combination of two DVI, HDMI or VGA monitors with one DisplayPort monitor. AMD has a future product planned to support up to six DisplayPort outputs. Wider display connectivity is possible by using display output adapters that support active translation from DisplayPort to DVI or VGA.
The DisplayPort 1.2 specification is currently being developed by the same group of companies who designed the original DisplayPort specification. This new spec will include exciting new features: higher bandwidth, enhanced audio, and multi-stream support. Multi-stream, commonly referred to as daisy-chaining, is the ability to address and drive multiple display devices through one connector. This technology, coupled with ATI Eyefinity Technology, will re-introduce multi-display computing with AMD at the forefront of the transition.

3DMark Vantage GPU Tests

3DMark Vantage is a PC benchmark suite designed to test DirectX 10 graphics card performance, and is the latest addition to the 3DMark benchmark series built by Futuremark Corporation. Although 3DMark Vantage requires NVIDIA PhysX to be installed for program operation, only the CPU/Physics test relies on this technology. 3DMark Vantage offers benchmark tests focusing on GPU, CPU, and Physics performance. Benchmark Reviews uses the two GPU-specific tests for grading video card performance: Jane Nash and New Calico. These tests isolate graphical performance and remove processor dependence from the benchmark results.

3DMark Vantage GPU Test: Jane Nash

Of the two GPU tests 3DMark Vantage offers, the Jane Nash performance benchmark is slightly less demanding. In a short video scene, the special agent escapes a secret lair by water, nearly losing her shirt in the process. Benchmark Reviews tests this DirectX 10 scene at 1680x1050 and 1920x1200 resolutions, and uses Extreme quality settings with 8x anti-aliasing and 16x anisotropic filtering. The 1:2 scale is utilized, which is the highest this test allows. By maximizing the processing levels of this test, the scene creates the highest level of graphical demand possible and sorts the strong from the weak.
Beginning with the NVIDIA GeForce GTS 250 (aka GeForce 9800 GTX+ prior to product line renaming), performance is extremely sub-par and not on level with many of the products that gamers consider mainstream by today's standards. The ATI Radeon HD 5770, the mainstream DirectX 11 video card, managed to perform almost as well as the Radeon HD 4890. An overclocked Palit GTX 260 Sonic narrowly outperforms the 4890 in this test, before the overclocked ASUS GeForce GTX 285 TOP is edged out by the XFX Radeon HD 5850 DirectX 11 video card. The reference ATI Radeon HD 5870 outperformed the overclocked NVIDIA counterpart by 30% at 1680x1050 and nearly 32% at 1920x1200.

3DMark Vantage GPU Test: New Calico

New Calico is the second GPU test in the 3DMark Vantage test suite, and of the two GPU tests it is the more demanding. In a short video scene featuring a galactic battleground, there is a massive display of busy objects across the screen. Benchmark Reviews tests this DirectX 10 scene at 1680x1050 and 1920x1200 resolutions, and uses Extreme quality settings with 8x anti-aliasing and 16x anisotropic filtering. The 1:2 scale is utilized, which is the highest this test allows. Using the highest graphics processing level available allows our test products to separate themselves and stand out (if possible).
Starting with the lowest performer, the NVIDIA GeForce GTS 250/GeForce 9800 GTX+, it becomes obvious that GPU performance from this yesteryear-mainstream product is well below what's going to be required for modern DirectX 10 and DirectX 11 video games. If you're using a product from the NVIDIA GeForce 9800 series or older, it might be time to consider an upgrade. ATI's mainstream Juniper GPU powers the Radeon HD 5770 video card, which happens to nearly match the performance of the former premium-level ATI Radeon HD 4890 in the New Calico benchmark. The Radeon 4890 performs at nearly the same frame rate as the overclocked GeForce GTX 260, which is in turn only a few FPS away from the overclocked ASUS GeForce GTX 285 TOP. The DirectX 11-compatible Cypress GPU clearly dominates the field in this test, allowing the XFX Radeon HD 5850 to easily overtake the GeForce GTX 285 by 11% at 1680x1050 and an astonishing 27% at 1920x1200. The top-of-the-line Radeon HD 5870 is well ahead of the others, especially NVIDIA's closest graphics solution, and outperforms our overclocked ASUS GTX 285 TOP by 40% at 16x10 and 46% at 19x12 resolution.
BattleForge Performance

BattleForge is a free Massively Multiplayer Online Role Playing Game (MMORPG) developed by EA Phenomic with DirectX 11 graphics capability. Combining strategic cooperative battles, the community of MMO games, and trading card gameplay, BattleForge players are free to put their creatures, spells and buildings into whatever combinations they see fit. These units are represented in the form of digital cards from which you build your own unique army. With minimal resources and a custom tech tree to manage, the gameplay is unbelievably accessible and action-packed. Benchmark Reviews uses the built-in graphics benchmark to measure performance in BattleForge, using Very High quality settings and 8x anti-aliasing with auto multi-threading enabled. BattleForge is one of the first titles to take advantage of DirectX 11 in Windows 7, and offers a very robust color range throughout the busy battleground landscape. The first chart illustrates how performance measures up between video cards when Screen Space Ambient Occlusion (SSAO) is disabled, which runs the tests at DirectX 10 levels.
When SSAO is disabled, older GeForce and Radeon products are compared on a more even playing field (so long as you discount the fact that we have a few DirectX 11 cards in the mix, and that BattleForge is a DirectX 11 game). Looking at performance at the 1920x1200 resolution, the ATI Radeon HD5770 Juniper GPU does extremely well, slightly outperforming the overclocked GTX 260 model with 29.1 FPS. The ASUS GeForce GTX 285 TOP scores 34.0 FPS. ATI's Radeon HD 4890 still has muscle to flex, rendering 36.2 FPS and trailing just behind the XFX Radeon HD 5850's score of 38.8 FPS. Even when Screen Space Ambient Occlusion is disabled to give older cards their last chance at high frame rates, the ATI Radeon HD 5870 reminds them that SSAO isn't a challenge it has to concern itself with. Scoring 45.3 FPS, the Radeon 5870 outperforms an overclocked GTX 285 by more than 33% when settings accommodate all products. The next chart (below) illustrates how BattleForge reacts when SSAO is enabled, which forces multi-core optimizations that only DirectX 11-compatible video cards can utilize.
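The percentage leads cited throughout these results are simple relative differences between average frame rates. A quick sketch using two figures from the chart above:

```python
def percent_lead(fps_a: float, fps_b: float) -> float:
    """How much faster card A is than card B, expressed as a percentage."""
    return (fps_a / fps_b - 1) * 100

# Radeon HD 5870 (45.3 FPS) versus the overclocked ASUS GTX 285 TOP (34.0 FPS)
print(round(percent_lead(45.3, 34.0), 1))  # 33.2 -- the "more than 33%" lead cited
```

The same arithmetic underlies every other lead quoted in the benchmark sections.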
NVIDIA's GTS 250 has already demonstrated how poorly it performs in DirectX 10 tests, and DirectX 11 is that much worse for the DirectX 9-era rebranded 9800 GTX+. Suffice it to say, NVIDIA should not have included this product in its current product family, because it's well past due for retirement and DirectX 11 games won't tolerate it. As expected, the DirectX 11-compatible ATI 5800 series runs rings around everything NVIDIA currently offers. In EA's BattleForge, a reference-design mainstream ATI Radeon HD 5770 is able to outperform the overclocked ASUS GeForce GTX 285 TOP by 29% at 1920x1200, proving that Windows 7 will re-center the definition of 'mainstream' products. What was top shelf in Windows XP or Vista will soon become the low end with DirectX 11 in Windows 7. For gamers who plan to use Windows 7, and especially those who play BattleForge, the Radeon HD 5850 offered excellent performance and topped our overclocked GTX 285 by 91%. If that wasn't proof enough that NVIDIA should be worried, the Sapphire ATI Radeon HD 5870 DirectX 11 video card (21161-00-50R) was able to easily outperform NVIDIA's GTX 285 by more than 125% at the same price point when DirectX 11 is called into action.
Crysis Warhead Test

Crysis Warhead is an expansion pack based on the original Crysis video game. Crysis Warhead is set in the future, where an ancient alien spacecraft has been discovered beneath the Earth on an island east of the Philippines. Crysis Warhead uses a refined version of the CryENGINE2 graphics engine. Like Crysis, Warhead uses the Microsoft Direct3D 10 (DirectX 10) API for graphics rendering. Benchmark Reviews uses the HOC Crysis Warhead benchmark tool to measure graphics performance using the Airfield 1 demo scene. This short test places a high amount of stress on a graphics card because of its detailed terrain and textures, but also because of the test settings used. Using the DirectX 10 test with Very High quality settings, the Airfield 1 demo scene receives 4x anti-aliasing and 16x anisotropic filtering to create maximum graphics load and separate the products according to their performance.
Using only the highest quality settings with 4x AA and 16x AF, only the most powerful graphics cards are expected to perform well in our Crysis Warhead benchmark tests. The re-branded GeForce GTS 250 (aka GeForce 9800 GTX+) shows us how low the bottom gets by producing only 10 FPS at 1680x1050, while the Radeon HD 5870 marks the top with 27 FPS. The ATI Radeon HD 5770 does well, especially considering it's only meant as a lower mid-level graphics solution, yet still falls short of the GeForce GTX 260 and Radeon HD 4890, which share the same 21 frames per second. There are a few frames between the XFX Radeon HD 5850 and the overclocked ASUS GeForce GTX 285 TOP, but at 1920x1200 those differences disappear. The ATI Radeon HD 5870 doesn't flex its muscle in this NVIDIA-optimized game the way it does in DirectX 11, but it still outperforms the GTX 285 at the same price point.
Devil May Cry 4 Benchmark

Devil May Cry 4 was released on the PC in 2008 as the fourth installment in the Devil May Cry video game series. DMC4 is a direct port from the PC platform to the console versions, which operate at the native 720P game resolution with no other platform restrictions. Devil May Cry 4 uses the refined MT Framework game engine, which has been used for many popular Capcom game titles over the past several years. MT Framework is an exclusive seventh-generation game engine built to be used with games developed for the PlayStation 3, Xbox 360, and PC ports. MT stands for "Multi-Thread", "Meta Tools" and "Multi-Target". Capcom originally intended to use an outside engine, but none matched their specific requirements for performance and flexibility. Games using the MT Framework are originally developed on the PC and then ported to the two console platforms. The PC version features a special bonus called Turbo Mode, which gives the game a slightly faster speed, and implements a new difficulty called Legendary Dark Knight Mode. The PC version also has both DirectX 9 and DirectX 10 modes for the Microsoft Windows XP and Vista operating systems. It's always nice to be able to compare the results we receive here at Benchmark Reviews with the results you measure on your own computer system. Usually this isn't possible, since settings and configurations make it nearly impossible to match one system to the next; plus, you have to own the game or benchmark tool we used. Devil May Cry 4 fixes this by offering a free benchmark tool available for download. Because the DMC4 MT Framework game engine is rather low-demand for today's cutting-edge multi-GPU video cards, Benchmark Reviews uses the DirectX 10 test at 1920x1200 resolution with 8x AA (the highest common AA setting available between GeForce and Radeon video cards) and 16x AF. The benchmark runs through four different test scenes, but scenes #2 and #4 usually offer the most graphical challenge.
Devil May Cry 4 doesn't stress the GPU to the extent that other game engines do. This isn't to say that the graphics don't look good, because they do; it's just that the MT Framework game engine is very well optimized. Even the GeForce GTS 250 produces 49 FPS in scene #2 at 1920x1200, which means that gamers can enjoy DMC4 with older video cards. The Juniper GPU inside ATI's Radeon HD 5770 produced 72 FPS, which is extremely close to the 79 FPS rendered by our overclocked Palit GeForce GTX 260 Sonic video card. The ATI Radeon HD 4890 matched the overclocked ASUS GeForce GTX 285 TOP with 89 FPS. ATI's Cypress GPU, found in the Radeon HD 5850 and 5870, certainly stood out from the crowd. The XFX Radeon HD 5850 HD-585A-ZNFC produced 108 FPS, which equates to 21% better performance than the GeForce GTX 285. ATI's Radeon HD 5870 rendered 126 FPS in the second benchmark scene, outperforming the NVIDIA GTX 285 by nearly 42%. Despite wearing an NVIDIA The Way It's Meant to be Played (TWIMTBP) logo, the Capcom MT Framework game engine appears to enjoy ATI's latest Cypress and Juniper GPUs.
Far Cry 2 Benchmark

Ubisoft has developed Far Cry 2 as a sequel to the original, but with a very different approach to gameplay and story line. Far Cry 2 features a vast world built on Ubisoft's new game engine called Dunia, meaning "world", "earth" or "living" in Farsi. Far Cry 2 takes place in a fictional Central African landscape, set in a modern-day timeline. The Dunia engine was built specifically for Far Cry 2 by the Ubisoft Montreal development team. It delivers realistic semi-destructible environments, special effects such as dynamic fire propagation and storms, real-time night-and-day sunlight and moonlight cycles, a dynamic music system, and non-scripted enemy A.I. actions. The Dunia game engine takes advantage of multi-core processors as well as multiple processors, and supports DirectX 9 as well as DirectX 10. Only 2 or 3 percent of the original CryEngine code is re-used, according to Michiel Verheijdt, Senior Product Manager for Ubisoft Netherlands. Additionally, the engine is less hardware-demanding than CryEngine 2, the engine used in Crysis. However, it should be noted that Crysis delivers greater character and object texture detail, as well as more destructible elements within the environment; for example, trees breaking into many smaller pieces and buildings breaking down to their component panels. Far Cry 2 also supports the amBX technology from Philips. With the proper hardware, this adds effects like vibrations, ambient colored lights, and fans that generate wind effects. The PC version of Far Cry 2 includes a benchmark tool, which offers an excellent array of settings for performance testing. Benchmark Reviews used the maximum settings allowed for DirectX 10 tests, with the resolution set to 1920x1200. Performance settings were all set to 'Very High', Render Quality was set to 'Ultra High' overall quality, 8x anti-aliasing was applied, and HDR and Bloom were enabled.
Although the Dunia engine in Far Cry 2 is slightly less demanding than the CryEngine 2 engine in Crysis, the strain appears to be extremely close. In Crysis we didn't dare to test AA above 4x, whereas we used 8x AA and 'Ultra High' settings in Far Cry 2. The net effect is a clear separation between the cards that are capable of maximum settings and those that are not. Using the short 'Ranch Small' time demo (which yields the lowest FPS of the three tests available), we noticed that very few products are capable of producing playable frame rates with the settings all turned up. Inspecting the performance at 1920x1200 resolution, it appears that every graphics card aside from the GTS 250 (GeForce 9800 GTX+) can handle higher quality settings and post-processing effects in Far Cry 2. ATI's Radeon HD 5770 produces 29.0 FPS, which is extremely close to the 31.4 FPS delivered by the Radeon HD 4890. Palit's GeForce GTX 260 Sonic performed at 36.4 FPS, followed by the ASUS GeForce GTX 285 TOP with 43.0 FPS. The XFX Radeon HD 5850 leads the overclocked GTX 285 by only about one frame, producing 43.9 FPS on average. The ATI Radeon HD 5870 DirectX 11 video card topped the single-GPU Far Cry 2 performance chart with 51.3 FPS at 1920x1200, a 19% lead over NVIDIA's direct competition.
Resident Evil 5 Tests

Built upon an advanced version of Capcom's proprietary MT Framework game engine to deliver DirectX 10 graphic detail, Resident Evil 5 offers gamers non-stop action similar to Devil May Cry 4, Lost Planet, and Dead Rising. The MT Framework is an exclusive seventh-generation game engine built to be used with games developed for the PlayStation 3 and Xbox 360, and PC ports. MT stands for "Multi-Thread", "Meta Tools" and "Multi-Target". Games using the MT Framework are originally developed on the PC and then ported to the other two console platforms. On the PC version of Resident Evil 5, both DirectX 9 and DirectX 10 modes are available for the Microsoft Windows XP and Vista Operating Systems. Microsoft Windows 7 will play Resident Evil 5 with backward-compatible Direct3D APIs. Resident Evil 5 is branded with the NVIDIA The Way It's Meant to be Played (TWIMTBP) logo, and receives NVIDIA GeForce 3D Vision functionality enhancements. NVIDIA and Capcom offer the Resident Evil 5 benchmark demo for free download from their website, and Benchmark Reviews encourages visitors to compare their own results to ours. Because the Capcom MT Framework game engine is very well optimized and produces high frame rates, Benchmark Reviews uses the DirectX 10 version of the test at 1920x1200 resolution. Super-High quality settings are configured, with 8x MSAA post-processing effects for maximum demand on the GPU. Test scenes from Area #3 and Area #4 require the most graphics processing power, and the results are collected for the chart illustrated below.
Resident Evil 5 has really proven how good the proprietary Capcom MT Framework game engine can look with DirectX 10 effects. The Area 3 and 4 tests are the most graphically demanding in this free downloadable demo benchmark, but the results make it appear that the Area #3 test scene performs better with NVIDIA GeForce products, while the Area #4 scene favors ATI Radeon GPUs. Although this benchmark tool is distributed directly by NVIDIA, and GeForce Forceware drivers likely have optimizations written for the Resident Evil 5 game, there doesn't appear to be any favoritism towards GeForce products over Radeon counterparts from within the game itself. Even so, the ATI Radeon HD 5770 rendered 36 FPS in test scene 3, while jumping to 47 FPS in test scene 4. This loosely indicates that lower-end graphics cards can still play Resident Evil 5 at 1920x1200 and produce good 30+ frame rates with maximum settings. For these results however, it seems that driver optimizations between manufacturers could account for the disparity among test scenes, although the Resident Evil 5 game itself 'normalizes' in the two other (less demanding) scenes. Many of the test results in Resident Evil 5 were identical to the performance standings in our other tests. The GTX 260 and HD 4890 produce the same frame rates, while the HD 5850 and GTX 285 push and pull between tests. The Radeon HD 5870 keeps up with the GTX 295, while the Radeon HD 5970 performs ahead of them all with a 38% lead.
Pro/ENGINEER Benchmark

SPECviewperf 10 is a synthetic benchmark designed to be a predictor of application performance and a measure of graphics subsystem performance. SPECviewperf 10 provides the ability to compare performance of systems running in higher-quality graphics modes that use full-scene anti-aliasing, and measures how effectively graphics subsystems scale when running multithreaded graphics content. The SPECopc project group's SPECviewperf 10 is performance evaluation software requiring OpenGL 1.5 and a minimum of 1GB of system memory. It currently supports 32/64-bit versions of the Microsoft Windows Operating System. Since the SPECviewperf source and binaries have been upgraded to support changes, no comparisons should be made between past results and current results for viewsets running under SPECviewperf 10. The proe-04 viewset was created from traces of the graphics workload generated by the Pro/ENGINEER 2001 application from PTC. Mirroring the application, draw arrays are used for the shaded tests and immediate mode is used for the wireframe. The gradient background used by the Pro/E application is also included to better model the application workload. Two models and three rendering modes are measured during the test. PTC contributed the models to SPEC for use in measurement of the Pro/ENGINEER application. The first of the models, the PTC World Car, represents a large-model workload composed of 3.9 to 5.9 million vertices. This model is measured in shaded, hidden-line-removal, and wireframe modes. The wireframe workloads are measured both in normal and antialiased mode. The second model is a copier. It is a medium-sized model made up of 485,000 to 1.6 million vertices. Shaded and hidden-line-removal modes were measured for this model. This viewset includes state changes as made by the application throughout the rendering of the model, including matrix, material, light and line-stipple changes.
The PTC World Car shaded frames include more than 100MB of state and vertex information per frame. All state changes are derived from a trace of the running application. The state changes put considerably more stress on graphics subsystems than the simple geometry dumps found in older viewsets.
Unlike the other benchmark tests in this article, the proe-04 viewset targets industrial applications and disregards video game performance, although the two overlap. SPECviewperf 10 tests the proe-04 viewset in seven different tests, ranging from shaded to wireframe and anti-aliased line views. The results are NOT the average FPS, but are actually the weighted geometric mean for the combined performance of all seven tests. Judging from the results, there's very little difference in weighted geometric mean between the 13.1 score of NVIDIA's GTS 250 and the 14.0 scores of nearly all ATI Radeon products. But when you look a little closer, you notice that the GTS 250, the overclocked GTX 260, and the overclocked ASUS GeForce GTX 285 TOP all score the same 13.1 geometric mean. Consider then that the mid-level Radeon HD 4770 scores a 13.7, while the ATI Radeon HD 4890, 5850, and 5870 all score a solid 14.0 geometric mean. So what can we infer from these SPECviewperf Pro/ENGINEER tests? From what little can be gathered, it appears that ATI Radeon video cards are generally superior to NVIDIA GeForce products in terms of industrial CAD applications. To be fair, these are all consumer-level display adapters, and the NVIDIA Quadro or AMD/ATI FirePro series is designed specifically for workstation graphics.
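A weighted geometric mean differs from a simple FPS average: each test's score is raised to its weight before the results are multiplied together, so one outlier test cannot dominate the composite. The sketch below illustrates the arithmetic; the per-test scores and weights are hypothetical examples, not the actual proe-04 values shipped with SPECviewperf.

```python
import math

def weighted_geometric_mean(scores, weights):
    """Exponential of the weighted average of log-scores.

    Weights are normalized to sum to 1, so only their ratios matter.
    """
    total = sum(weights)
    return math.exp(sum(w / total * math.log(s) for s, w in zip(scores, weights)))

# Hypothetical per-test FPS scores and weights for the seven proe-04 tests;
# the real weights ship with SPECviewperf, these are illustration only.
scores  = [14.2, 13.8, 15.1, 12.9, 14.5, 13.6, 14.0]
weights = [20, 20, 15, 15, 10, 10, 10]

print(f"composite: {weighted_geometric_mean(scores, weights):.1f}")
```

With equal weights this reduces to the ordinary geometric mean, which is why the composite sits near the middle of the individual test scores rather than being pulled upward by the fastest one.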
Radeon HD 5850 Temperatures

Benchmark tests are always nice, so long as you care about comparing one product to another. But when you're an overclocker, or merely a hardware enthusiast who likes to tweak things on occasion, there's no substitute for good information. Benchmark Reviews has a very popular guide written on Overclocking the NVIDIA GeForce Video Card, which gives detailed instruction on how to tweak a GeForce graphics card for better performance. Of course, not every video card has the headroom. Some products run so hot that they can't suffer any higher temperatures than they already do. This is why we measure the operating temperature of the video card products we test. FurMark does two things extremely well: it drives the thermal output of any graphics processor higher than any other application or video game, and it does so consistently every time. While I have shown that FurMark is not a true benchmark tool for comparing video cards, it still works very well for comparing one product against itself at different stages. FurMark is very useful for comparing the same GPU against itself using different drivers or clock speeds, or for testing the stability of a GPU as temperatures rise higher than any other program can push them. But in the end, it's a rather limited tool.
To begin my testing, I use GPU-Z to measure the temperature at idle as reported by the GPU. Next I use FurMark 1.7.0 to generate maximum thermal load and record GPU temperatures at high-power 3D mode. The ambient room temperature remained at a stable 20.0°C throughout testing, while the inner-case temperature hovered around 36°C. The XFX Radeon HD 5850 HD-585A-ZNFC video card recorded a cool 38°C in idle 2D mode, and increased to only 72°C in full 3D mode. These temperatures were much lower than the reference-design HD 4890 at idle, indicating that reduced heat is a direct result of lower idle power draw. The Cypress GPU has a massive 334 mm² die size, which offers a much larger footprint for cooling the 2.154 billion transistors when they're just sitting idle. So with this in mind, it's understandable to see an impressively low idle temperature (although it's slightly higher than the Radeon HD 5870's idle temp). The reduced clock speed lends itself to a reduced maximum loaded temperature, making clear that the ATI Cypress GPU has thermal and power management under close control.

VGA Power Consumption

Life is not as affordable as it used to be, and items such as gasoline, natural gas, and electricity all top the list of resources which have exploded in price over the past few years. Add to this the limit of non-renewable resources compared to current demands, and you can see that the prices are only going to get worse. Planet Earth needs our help, and needs it badly. With forests becoming barren of vegetation and snow-capped poles quickly turning brown, the technology industry has a new attitude towards suddenly becoming "green". I'll spare you the powerful marketing hype that I get from various manufacturers every day, and get right to the point: your computer hasn't been doing much to help save energy... at least up until now.
To measure isolated video card power consumption, Benchmark Reviews uses the Kill-A-Watt EZ (model P4460) power meter made by P3 International. A baseline test is taken without a video card installed inside our computer system, which is allowed to boot into Windows and rest idle at the login screen before power consumption is recorded. Once the baseline reading has been taken, the graphics card is installed and the system is again booted into Windows and left idle at the login screen. Our final loaded power consumption reading is taken with the video card running a stress test using FurMark. Below is a chart with the isolated video card power consumption (not system total) displayed in Watts for each specified test product:
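The baseline-subtraction method described above boils down to simple arithmetic: the card's isolated draw is the wall-outlet reading with the card installed minus the reading without it. The sketch below illustrates the calculation; the baseline and system wattages are hypothetical example values, not our measured data, though they were chosen so the results match the HD 5850 idle and load figures reported below.

```python
# Sketch of the baseline-subtraction method; wattage readings here are
# hypothetical example values, not Kill-A-Watt measurements.
def isolated_card_power(system_watts: float, baseline_watts: float) -> float:
    """Subtract the no-card baseline reading from the with-card reading."""
    return system_watts - baseline_watts

baseline = 90.0          # system idle at login screen, no video card installed
idle_with_card = 114.0   # same state with the video card installed
load_with_card = 247.0   # FurMark stress test running

print(isolated_card_power(idle_with_card, baseline))  # 24.0 W at idle
print(isolated_card_power(load_with_card, baseline))  # 157.0 W under load
```

Note that this method attributes any baseline drift (fan speed, PSU efficiency curve) to the card, which is one reason the results carry the ±5W tolerance stated below.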
* Results are accurate to within +/- 5W.
Sipping only 24W of electricity at idle, the XFX Radeon HD 5850 HD-585A-ZNFC video card matches energy consumption with the GeForce 8800 GT, GTX 285, and Radeon HD 4890. Once 3D applications begin to demand power from the GPU, electrical power consumption climbs. Under full 3D load the ATI Radeon HD 5850 requires 157W, which is 23% more efficient than the 204W consumed by the GeForce GTX 260 under load, and 38% better than the GeForce GTX 275 that it more directly competes with.

Radeon 5000-Series Final Thoughts

Reading the editorial articles surrounding the launch of ATI's Radeon 5800-series video cards has become entertainment in and of itself. Websites loyal to NVIDIA assert that the Cypress GPU is nothing more than an overpowered product trying to push DirectX 11 onto unwilling consumers, or that NVIDIA is doing more for gamers than ATI because they offer The Way It's Meant to be Played, GeForce 3D Vision, PhysX, and CUDA. Some sites have even taken the time to research the amount of progress AMD has made with their Stream technology, and then complain that ATI isn't doing enough to compete with NVIDIA in regard to GPGPU. All of this rhetoric amounts to a desperate attempt at hiding some very frightening facts. NVIDIA was extremely vocal when Windows Vista launched with DirectX 10, and they couldn't emphasize enough how important Vista/DirectX 10 was going to be to gamers, and that enthusiasts should upgrade to their DirectX 10-compliant video cards. Oddly enough, gamers didn't take to Windows Vista like NVIDIA had hoped, and even now as Windows 7 launches there is a 52.6% market share still using Windows XP compared to 36.4% using Vista (with 10% of the market already using a beta version of Windows 7). The DirectX 11 Direct3D API is native to the Windows 7 Operating System, a product Microsoft is releasing, not AMD.
ATI has simply prepared for the inevitable launch of Windows 7 better than NVIDIA, and now the green machine claims nobody will buy a video card for DirectX 11.
Through these developments ATI has pulled ahead of NVIDIA by placing gamers first in their consideration, and has positioned the ATI 5000-series to introduce enthusiasts to a new world of DirectX 11 video games on the Microsoft Windows 7 Operating System. While most hardware enthusiasts are familiar with the back-and-forth competition between these two leading GPU chip makers, it might come as a surprise that NVIDIA actually states that DirectX 11 video games won't fuel video card sales, and has instead decided to revolutionize the military with CUDA technology. Perhaps we're seeing the evolution of two companies: NVIDIA transitions to the industrial sector and departs the enthusiast gaming space, while ATI successfully answers retail consumer demand. AMD has launched the Radeon 5870 as the first showcase for its multi-monitor ATI Eyefinity Technology feature, using native HDMI 1.3 output paired with DisplayPort connectivity. The new Cypress GPU features the latest ATI Stream Technology, which is designed to utilize DirectCompute 5.0 and OpenCL code. These new features improve all graphical aspects of the end-user experience, such as faster multimedia transcode times and better GPGPU compute performance. AMD has already introduced a DirectCompute partnership with CyberLink, and the recent Open Physics Initiative with Pixelux promises to offer physics middleware built around OpenCL and Bullet Physics. This looks like ATI's recipe for success, since NVIDIA does not have a GPU to compete against the Radeon 5800 series or support DirectX 11. It doesn't help matters any that NVIDIA GPUs do not support OpenCL and DirectCompute 11 environments, leaving them out in the cold for the coming winter months. Any GeForce DirectX 11 graphics solution is still many months away for NVIDIA and not expected until 2010, which leaves very few options in the fiercely competitive discrete graphics market.
So far NVIDIA's only counter-attack on ATI's new 5000-series product line has been the ultra low-end GeForce 200 (no letter designation) and GeForce GT 220 series, meant to one-up integrated graphics. Integrated graphics? You read that correctly: NVIDIA launched a product so feeble that it competes with older integrated graphics. Outstanding. Maybe you can enable triple-SLI and get GeForce GTS 250 performance out of them for a good game of Solitaire. So what can NVIDIA do to compete with ATI? Since DirectX 11 is dominated by ATI for the near future, it would seem that price reductions should be in order. Just not yet, apparently. It's not clear what NVIDIA is waiting for, but the price of their current GeForce family hasn't changed much since the ATI Radeon 5870/5850 launch. Perhaps NVIDIA could help develop video games that punish gamers by excluding non-GeForce products from using post-processing effects. While ATI was busy building a better video card, NVIDIA teamed up with Eidos to produce Batman: Arkham Asylum. The game looks great when you use maximum "NVIDIA Anti-Aliasing" (later renamed to Anti-Aliasing with the version 1.1 patch that added PhysX support), but products like the ATI Radeon must enable full-time AA in the control panel and suffer a performance hit when it's not needed by the game. ATI Eyefinity Technology is a tough nut to crack for NVIDIA, since their DirectX 10 products only offer dual-DVI output. As an alternative, NVIDIA GeForce owners can use the $300 Matrox TripleHead2Go add-on peripheral to spread the picture across up to three screens. Make sure you're using a GeForce GTX 275 or higher, since the added resolution is more than the video card was designed to accommodate.

XFX Radeon HD5850 Conclusion

Although our rating and final score are made to be as objective as possible, please be advised that every author perceives these factors differently at different points in time.
While we do our best to ensure that all aspects of the product are considered, there are often unforeseen market conditions and manufacturer changes which occur after publication that would render our rating obsolete. Please do not base your purchases solely on our conclusion, as it represents our product rating at the time of publication. Benchmark Reviews begins our conclusion with a short summary for each of the areas that we rate. The first section we rate in our conclusion is performance, which considers how effectively the XFX Radeon HD 5850 HD-585A-ZNFC DirectX 11 video card performs in designated operations against direct competitor products. Nailing down a direct competitor model is tricky though, since the NVIDIA GeForce GTX 260 and GTX 275 both sell in the same $240-260 price segment as the ATI Radeon HD 5850. In terms of DirectX 10 performance however, the HD5850 was more of a threat to the GeForce GTX 285. In synthetic 3DMark Vantage tests, the Radeon HD 5850 performed up to 22% better than an overclocked GTX 285, depending on the test scene and resolution. In DX10 games the HD5850 either met or exceeded GTX285 performance, but completely left the GeForce series behind in the DirectX 11 game BattleForge. Temperatures at load were very good, even though the Cypress GPU has 54% more transistors per square millimeter of chip die than the GT200. Aside from impressive DirectX 11 performance, electrical power consumption was extremely efficient, with only 31W consumed at idle and 157W under full 3D load. For the time being, XFX's Radeon HD 5850 is the best value among single-GPU products available, and one of the few graphics cards capable of playing DirectX 11 games in Windows 7.
Product appearance is relative to personal tastes, but the entire Radeon 5000-series looks very similar from one card to the next, with the only difference being overall length. The XFX Radeon HD 5850 measures 9.5" long, making it a good fit for most ATX cases. Lately it seems that almost everything has been encased in a plastic housing with a label fixed to the top, so I'm rather used to the basic style lines. In the end, ATI's design strikes a comfortable blend of elegance and flair that seems appealing to my senses. Construction is solid, but there seems to be some room for design improvements. While I appreciate ATI not placing memory module ICs on the backside of the PCB, the heated exhaust vents could have received more attention. Most overclockers are no fans of hot air inside the computer case, and the ventilation design of the Radeon 5800-series doesn't exhaust all heated air outside of the case. Instead of the 0.5x1.5" vent in the I/O plate, ATI could have extended the vent to at least 2.0-2.25" wide with larger vent holes. While most consumers buy a discrete graphics card for the sole purpose of PC video games, there's a very small niche who expect extra features beyond frame rates. AMD isn't the market leader in GPGPU functionality, but their ATI Stream Technology is the only one designed to utilize DirectCompute 5.0 and OpenCL code. ATI Eyefinity technology is impressive, as it demonstrates yet another dimension of visual experience that the competition cannot offer. For most consumers, it's the added connectivity that really counts: native DisplayPort and HDMI interfaces. As of December 2009 the reference-design 725/1000MHz XFX Radeon HD 5850 HD-585A-ZNFC video card sells at NewEgg for $309.99, with overclocked 755/1125 MHz and 765/1125 MHz versions also available. Even though nine other suppliers of the Radeon HD5850 sell their product for the exact same price, only XFX offers a double-lifetime warranty on their products.
This means that either you, or the second owner of the video card, will be protected by a full lifetime warranty against failure. In some cases, when the product has been end-of-life for a significant amount of time, XFX will offer a fair-market-value replacement from a current-generation product. Consider these things if you're in the market for an NVIDIA GeForce GTX 285 video card (which sells for roughly $330), because the XFX Radeon HD 5850 is obviously the winning choice. DirectX 11 and energy efficiency change this dynamic considerably, and make the decision even more favorable towards the new Radeon 5800-series. In conclusion, there's a long future ahead for the Radeon HD 5850... especially when XFX gives it a double-lifetime warranty (there's a pun somewhere in there). DirectX 11 gaming is here and now whether the competition likes it or not, and ATI has a huge head-start on absorbing an early market share. Eyefinity is a nice touch and it certainly adds to the gaming experience, but there's such an incredibly small portion of potential users for the technology that in reality the Radeon 5800-series has only its sheer graphics power to make the sales pitch. Although the XFX HD5850 is clocked at 725MHz and doesn't have the top-end power (or shaders) of the HD5870, it also doesn't share the same price. For the cost, my recommendation is for the XFX Radeon HD 5850 HD-585A-ZNFC DirectX 11 video card. High-performance gamers and multi-monitor power users can't go wrong at this price point, and it will only get better.

Pros:
+ Double-lifetime product replacement warranty service

Cons:

- Fan exhausts some heated air back into case

Ratings:

Final Score: 9.0 out of 10.
Excellence Achievement: Benchmark Reviews Golden Tachometer Award.
Nomination: 2009 Editor's Choice Award for Performance Graphics Products.

Questions? Comments? Benchmark Reviews really wants your feedback. We invite you to leave your remarks in our Discussion Forum.