ATI Radeon HD5770 Juniper GPU Video Card
Written by Bruce Normann
Tuesday, 13 October 2009
Coming right on the heels of the HD5800 series launch, ATI brings us another batch of cards based on class-leading 40nm GPUs and GDDR5 memory. The new cards, Radeon HD5770 and HD5750, use the same architecture as the new HD5800 series, but ATI basically cut the Cypress chip in half to create a brand new video card with hardware specs somewhere between an HD4870 and an HD4890. If you're thinking that's not a bad place to be, but want to see some proof of how the HD5770 performs, Benchmark Reviews is pleased to offer you the results of our extensive testing.
While the flagship ATI products got their day in the sun in September, these mid-range cards are going to compete in a much larger market with a lot more competitors. It's much more difficult to hit the bulls-eye in a market teeming with old standards and new stars, and of course, it's not a static target. Every day the market shifts; sometimes imperceptibly, sometimes radically. The target has been a bit jumpy these last few months, so let's see where this arrow lands.

About the company: ATI
Over the course of AMD's four decades in business, silicon and software have become the steel and plastic of the worldwide digital economy. Technology companies have become global pacesetters, making technical advances at a prodigious rate - always driving the industry to deliver more and more, faster and faster. However, "technology for technology's sake" is not the way we do business at AMD. Our history is marked by a commitment to innovation that's truly useful for customers - putting the real needs of people ahead of technical one-upmanship. AMD founder Jerry Sanders has always maintained that "customers should come first, at every stage of a company's activities." We believe our company history bears that out.

Radeon HD5770 Features

The feature set of the ATI HD5700 series video cards is nearly identical to the recently released HD5800 series. The important differences are all related to the fact that the HD5700 series chip is half the size of the HD5800, with half the processing power. For those who perused the mountain of details that accompanied the 5800 series launch, this graphic should look half familiar.
ATI Radeon HD 5770 GPU Feature Summary
AMD is slowly working towards a future vision of graphics computing, as is their main competitor, Intel. They both believe that integrating graphics processing with the CPU provides benefits that can only be achieved by taking the hard road. For now, the only thing we can see is their belief; the roadmap is both sketchy and proprietary.

Radeon HD5770 Specifications

I mentioned in the introduction that the HD5770 has hardware specs somewhere between an HD4870 and an HD4890. You can see them in detail a little further below. The real story is how ATI has been able to reduce the cost of the HD5700 platform to below the HD4850. What, you say...you sneaked a peek at the end and saw the launch price? Well, the launch price and the price one year after launch are two different things altogether. For now, look at where the four versions of the HD5000 series end up relative to their forefathers. And remember, this is all based on launch pricing...
Now let's look at the actual HD5770 specs in detail:

Radeon HD5770 Speeds & Feeds
Although this review is for the HD5770, the HD5750 is being released at the same time and the two cards are based on the same silicon. The HD5750 is likely built with chips that didn't meet the top clock spec, and/or had a defect that killed one of the stream processor units. As anyone who has followed the AMD product line knows, modern processors are designed with the capability of disabling portions of the die. Sometimes, it's done because there are defects on the chip (usually a small particle of dust that ruins a transistor) and all the internal sections don't pass testing. Sometimes it's done with perfectly good chips because the manufacturer needs to meet production requirements for lower cost market segments.
It's always a delicate balance between economies of scale (building massive quantities of only one part) and the fact that you can usually meet the requirements for the lower priced product with a cheaper part. ATI has all the bases covered in this latest series of product launches; they've got the more expensive chips in the HD5800 series and the much cheaper, half-size chips in the HD5700 series. Within each series, they've got reduced-spec versions that ensure they make the most of the manufacturing yields that TSMC is able to achieve at the 40nm process node.

Closer Look: Radeon HD5770

The HD5770 follows the general design of the HD5850 card, only on a slightly smaller scale. The card is only 220 mm long (8.63"), which means it will fit into almost any case without an issue. The signature red blower wheel, sourced from Delta, is there, pushing air through a finned heatsink block that sits on top of the GPU and out the back of the card through the small set of vents on the I/O plate.
The connections on the rear of the card mimic the HD5800 series exactly: two DVI, one HDMI and one DisplayPort connector. The collection of I/O ports doesn't leave much room for the exhaust vents, but if ATI can keep the HD5800 series cool with the same design, the half-size HD5700 series GPU should be fine. The housing is a one-piece plastic affair, black with red accents. The external appearance hints at a simplistic design; it looks like a cover and nothing more. Once we look inside, that impression will be laid to rest.
The far end of the card showcases the new "hood scoop" design that is carried over from the high-end ATI cards. The scoops don't really feed air into the blower, but they do provide some ventilation for some of the power supply components located at this end of the card. Power supply + ventilation is always a good thing. The red racing graphics on the top edge of the new cards are both decorative and functional, as there are some additional vents molded in there.
The back of the Radeon HD5770 is bare, which is normal for a card in this market segment. The main features to be seen here are the metal cross-brace for the GPU heatsink screws, which are spring loaded, and the four Hynix GDDR5 memory chips on the back side. They are mounted back-to-back with four companion chips on the top side of the board. There are also five little surprises on the back of the PCB, but I'll show you those when we look at detailed features in the next section.

For most high-end video cards, the cooling system is an integral part of the performance envelope for the card. "Make it run cooler, and you can make it run faster" was the byword for achieving gaming-class performance from the latest and greatest GPU. Even though the HD5770 is a mid-range card with a small GPU die size, it's still a gaming product and will be pushed to maximum performance levels by most potential customers.
Popping off the cover reveals a deceptively simple, ducted heat sink with a copper base and tightly spaced aluminum fins. The blower is thinner than the units in the HD5800 series, but follows the same format. A portion of the duct opens up to the case by way of some vents in the top rail, molded here in red. Clearly, the majority of the air is meant to exhaust through the I/O plate, but it never hurts to have a backup plan. Let's really peel back the covers and have a good look inside.

Radeon HD5770 Detailed Features

The main attraction of ATI's new line of video cards is the brand new GPU with its 40nm transistors and an improved architecture. The chip in the 5700 series is called "Juniper" and is essentially half of "Cypress", the high-end chip in the HD5800 series that was introduced in September 2009.
The Juniper die is very small, as can be seen in the comparison with a well-known dimensional standard. ATI still managed to cram over a billion transistors on there, and the small size is critical to the pricing strategy that ATI is pursuing with these new releases. 1 GB of GDDR5 memory on a 128-bit bus, with a 4.8 Gbps memory interface, offers a maximum memory bandwidth of up to 76.8 GB/sec. Cutting the Cypress GPU in half limited the bus to 128-bit, but ATI has bumped up the memory clock rate on all their new boards. With GDDR5 running at 1200 MHz, don't expect memory to be a bottleneck on this card. There is some limited room for memory overclocking, all sanctioned by corporate, via the Overdrive tool distributed by AMD.
The H5GQ1H24AFR-T2C chip from Hynix is rated for 5.0 Gbps, and is one of the higher rated chips in the series, as you can see in the table below. An overclock to the 1250-1300 MHz range is not unthinkable, especially if utilities become available to modify memory voltage.
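To put those memory numbers in context, here is a minimal Python sketch of the bandwidth arithmetic. The 128-bit bus, 1200 MHz clock, and GDDR5's quad data rate come from the review; the 1300 MHz figure is only the hypothetical overclock mentioned above, and the 256-bit line is there purely for scale against the full Cypress chip.

```python
# Peak GDDR5 bandwidth = (bus width in bytes) x (effective data rate per pin).
# GDDR5 moves four bits per pin per command-clock cycle, so 1200 MHz works
# out to an effective 4.8 Gbps per pin.

def gddr5_bandwidth_gb_s(bus_width_bits, command_clock_mhz):
    """Return peak memory bandwidth in GB/s for a GDDR5 interface."""
    effective_gbps_per_pin = command_clock_mhz * 4 / 1000.0  # quad data rate
    return (bus_width_bits / 8) * effective_gbps_per_pin

print(gddr5_bandwidth_gb_s(128, 1200))  # stock HD5770: 76.8 GB/s
print(gddr5_bandwidth_gb_s(128, 1300))  # hypothetical 1300 MHz overclock: 83.2 GB/s
print(gddr5_bandwidth_gb_s(256, 1200))  # Cypress' 256-bit bus, for scale: 153.6 GB/s
```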
The power section provides 3-phase power to the GPU, which is about average for a mid-range graphics card. Increasing the number of power phases achieves better voltage regulation, improves efficiency, and reduces heat, but ATI has instead relied on the inherently lower power requirements of the Juniper GPU and some fancy footwork in the power supply control chip to reduce power draw to very low levels. Where the HD5800 series used a number of Volterra regulators and controllers, the HD5770 makes do with one L6788A controller chip from ST. It's still a relatively sophisticated controller, and the combination of a lower power GPU, low power GDDR5 memory, and smart power supply design yields an incredibly low power consumption of 18W at idle and 108W under duress. ATI benchmarked these chips with 3DMark03, which they claim pulls higher current than more recent versions of the synthetic benchmark.
We've already looked at the back side of the board, and I promised a surprise. Well, when is the last time you saw a DIP switch on a discrete graphics card? I can't remember if I've ever seen one. The Radeon HD5770 has five dual-switch modules mounted on the PCB and they're not labeled, except by their component designator, SW400x. They are covered with tiny pieces of Kapton tape, so they don't stand out quite as much as if they were bare, but after a second glance I knew they couldn't be anything else. I don't know what they are for, so let the conspiracy theorizing begin. While we're at it, it's interesting to note that the switches don't appear on production units, just solder links, where required.
The assembly quality on the PCB was not the best I've seen, but I had engineering samples to look at. Once we have some ATI partner cards in house, we'll take another look. Now, let's dive into the testing portion of the review, where there are a few surprises waiting.

ATI Eyefinity Multi-Monitors

ATI Eyefinity advanced multiple-display technology launches a new era of panoramic computing, helping to boost productivity and multitasking with innovative graphics display capabilities supporting massive desktop workspaces, creating ultra-immersive computing environments with super-high resolution gaming and entertainment, and enabling easy configuration. High-end editions will support up to six independent display outputs simultaneously.

In the past, multi-display systems catered to professionals in specific industries. Financial, energy, and medical are just some industries where multi-display systems are a necessity. Today, more and more graphic designers, CAD engineers and programmers are attaching more than one display to their workstation. A major benefit of a multi-display system is simple and universal - it enables increased productivity. This has been confirmed in industry studies which show that attaching more than one display device to a PC can significantly increase user productivity.

Early multi-display solutions were non-ideal. Bulky CRT monitors claimed too much desk space; thinner LCD monitors were very expensive; and external multi-display hardware was inconvenient and also very expensive. These issues are much less of a concern today. LCD monitors are very affordable, and current generation GPUs can drive multiple display devices independently and simultaneously, without the need for external hardware.

Despite the advancements in multi-display technology, AMD engineers still felt there was room for improvement, especially regarding the display interfaces. VGA carries analog signals and needs a dedicated DAC per display output, which consumes power and ASIC space. Dual-Link DVI is digital, but requires a dedicated clock source per display output and uses too many I/O pins from the GPU. It was clear that a superior display interface was needed.
In 2004, a group of PC companies collaborated to define and develop DisplayPort, a powerful and robust digital display interface. At that time, engineers working for the former ATI Technologies Inc. were already thinking about a more elegant solution to drive more than two display devices per GPU, and it was clear that DisplayPort was the interface of choice for this task. In contrast to other digital display interfaces, DisplayPort does not require a dedicated clock signal for each display output. In fact, the data link is fixed at 1.62Gbps or 2.7Gbps per lane, irrespective of the timing of the attached display device. The benefit of this design is that one reference clock source provides the clock signal needed to drive as many DisplayPort display devices as there are display pipelines in the GPU. In addition, with the same number of I/O pins used for Single-Link DVI, a full speed DisplayPort link can be driven which provides more bandwidth and translates to higher resolutions, refresh rates and color depths. All these benefits perfectly complement ATI Eyefinity Multi-Display Technology.
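As a rough illustration of that bandwidth argument, the sketch below compares DisplayPort payload rates with Single-Link DVI. The 1.62 and 2.7 Gbps lane rates come from the text; the four-lane link width, the 8b/10b encoding overhead, and the 165 MHz DVI pixel clock are standard interface figures rather than anything taken from ATI's material.

```python
# DisplayPort runs each lane at a fixed 1.62 or 2.7 Gbps regardless of the
# attached display's timing; 8b/10b encoding leaves 80% of the raw rate for
# pixel data. Single-Link DVI tops out at a 165 MHz pixel clock x 24 bpp.

def displayport_payload_gbps(lanes, lane_rate_gbps):
    """Effective DisplayPort payload bandwidth in Gbps after 8b/10b overhead."""
    return lanes * lane_rate_gbps * 0.8

def single_link_dvi_gbps():
    """Single-Link DVI payload bandwidth in Gbps."""
    return 165e6 * 24 / 1e9

print(displayport_payload_gbps(4, 2.7))   # full-speed link:  8.64 Gbps
print(displayport_payload_gbps(4, 1.62))  # reduced bit rate: 5.18 Gbps
print(single_link_dvi_gbps())             # Single-Link DVI:  3.96 Gbps
```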
ATI Eyefinity Technology from AMD provides advanced multiple monitor technology delivering an immersive graphics and computing experience, supporting massive virtual workspaces and super-high resolution gaming environments. Legacy GPUs have supported up to two display outputs simultaneously and independently for more than a decade. Until now, graphics solutions have supported more than two monitors by combining multiple GPUs on a single graphics card. With the introduction of AMD's next-generation graphics product series supporting DirectX 11, a single GPU now has the advanced capability of simultaneously supporting up to six independent display outputs.

ATI Eyefinity Technology is closely aligned with AMD's DisplayPort implementation, providing the flexibility and upgradability modern users demand. Up to two DVI, HDMI, or VGA display outputs can be combined with DisplayPort outputs for a total of up to six monitors, depending on the graphics card configuration. The initial AMD graphics products with ATI Eyefinity technology will support a maximum of three independent display outputs via a combination of two DVI, HDMI or VGA with one DisplayPort monitor. AMD has a future product planned to support up to six DisplayPort outputs. Wider display connectivity is possible by using display output adapters that support active translation from DisplayPort to DVI or VGA.

The DisplayPort 1.2 specification is currently being developed by the same group of companies who designed the original DisplayPort specification. Its feature set includes higher bandwidth, enhanced audio and multi-stream support. Multi-stream, commonly referred to as daisy-chaining, is the ability to address and drive multiple display devices through one connector. This technology, coupled with ATI Eyefinity Technology, will be a key enabler for multi-display technology, and AMD will be at the forefront of this transition.

Video Card Testing Methodology

This is the beginning of a new era for testing at Benchmark Reviews. With the imminent release of Windows 7 to the marketplace, and given the prolonged and extensive pre-release testing that occurred on a global scale, there are compelling reasons to switch all testing to this new, and highly anticipated, operating system. Overall performance levels of Windows 7 have been favorably compared to Windows XP, and there is solid support for the 64-bit version, something enthusiasts have been anxiously awaiting for several years.

Our site polls and statistics indicate that over 90% of our visitors use their PC for playing video games, and practically every one of you is using one of the screen resolutions mentioned below. Since all of the benchmarks we use for testing represent different game engine technology and graphic rendering processes, this battery of tests will provide a diverse range of results for you to gauge performance on your own computer system. All of the benchmark applications are capable of utilizing DirectX 10, and that is how they were tested. Some of these benchmarks have been used widely for DirectX 9 testing in the XP environment, and it is critically important to differentiate between results obtained with different versions. Each game behaves differently in DX9 and DX10 formats. Crysis is an extreme example, with frame rates in DirectX 10 only about half what was available in DirectX 9.
At the start of all tests, the previous display adapter driver is uninstalled and trace components are removed using Driver Cleaner Pro. We then restart the computer system to establish our display settings and define the monitor. Once the hardware is prepared, we begin our testing. According to the Steam Hardware Survey published at the time of the Windows 7 launch, the most popular gaming resolution is 1280x1024 (17-19" standard LCD monitors), closely followed by 1024x768 (15-17" standard LCD). However, because these resolutions are considered 'low' by most standards, our benchmark performance tests concentrate on the up-and-coming higher-demand resolutions: 1680x1050 (22-24" widescreen LCD) and 1920x1200 (24-28" widescreen LCD monitors). Each benchmark test program begins after a system restart, and the very first result for every test is ignored, since it often only caches the test. This process proved extremely important in the World in Conflict benchmarks, as the first run served to cache maps, allowing subsequent tests to perform much better than the first. Each test is completed five times, the high and low results are discarded, and the average of the three remaining results is displayed in our article (a quick sketch of this scoring method follows the test system listings below).

Test System
Benchmark Applications
Video Card Test Products
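For readers who want to reproduce the scoring method described in the methodology above, here is a minimal Python sketch of it: the first warm-up run is thrown away, five scored runs follow, the high and low results are discarded, and the remaining three are averaged. The function name and the sample frame rates are illustrative only.

```python
def score_benchmark(runs):
    """Average five recorded runs after dropping the highest and lowest."""
    if len(runs) != 5:
        raise ValueError("expected exactly five recorded runs")
    trimmed = sorted(runs)[1:-1]            # discard the high and low results
    return sum(trimmed) / len(trimmed)

# Hypothetical World in Conflict numbers: the first run only warms the map
# cache and is ignored, then five scored runs are recorded.
warmup_run = 31.2
scored_runs = [38.5, 39.1, 38.8, 40.2, 37.9]
print(round(score_benchmark(scored_runs), 1))  # 38.8 FPS reported in the charts
```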
Now we're ready to begin testing video game performance on these video cards, so please continue to the next page as we start with the 3DMark Vantage results.

3DMark Vantage Benchmark Results

3DMark Vantage is a computer benchmark by Futuremark (formerly named MadOnion) used to determine the DirectX 10 performance of graphics cards in 3D games. A 3DMark score is an overall measure of your system's 3D gaming capabilities, based on comprehensive real-time 3D graphics and processor tests. By comparing your score with those submitted by millions of other gamers you can see how your gaming rig performs, making it easier to choose the most effective upgrades or find other ways to optimize your system.

There are two graphics tests in 3DMark Vantage: Jane Nash (Graphics Test 1) and New Calico (Graphics Test 2). The Jane Nash test scene represents a large indoor game scene with complex character rigs, physical GPU simulations, multiple dynamic lights, and complex surface lighting models. It uses several hierarchical rendering steps, including for water reflection and refraction, and physics simulation collision map rendering. The New Calico test scene represents a vast space scene with lots of moving but rigid objects and special content like a huge planet and a dense asteroid belt.

At Benchmark Reviews, we believe that synthetic benchmark tools are just as valuable as video games, but only so long as you're comparing apples to apples. Since the same test is applied in the same controlled method with each test run, 3DMark is a reliable tool for comparing graphic cards against one another.

1680x1050 is rapidly becoming the new 1280x1024. More and more widescreen monitors are being sold with new systems or as upgrades to existing ones. Even in tough economic times, the tide cannot be turned back; screen resolution and size will continue to creep up. Using this resolution as a starting point, the maximum settings applied to 3DMark Vantage include 8x Anti-Aliasing, 16x Anisotropic Filtering, all quality levels at Extreme, and Post Processing Scale at 1:2.
The two test scenes in 3DMark Vantage provide a varied and modern set of challenges for the video cards and their subsystems, as described above. The results always produced higher frame rates for GT1 and so far, I haven't seen any curveball results like I used to see with 3DMark06. The HD5770 came close to the GTX260-216 in GT1 and basically equaled it in GT2. In both test cases, the HD5770 easily beat an overclocked (ASUS TOP series) HD4850 card. The GTX275 and GTX285 pull away from the middle of the pack, as they should for the price difference. The relative parity of these two high end contenders is due to the factory overclocks that they came with, out of the box. The GTX285 is capable of higher numbers, which I demonstrated in this article.
At a higher screen resolution, 1920x1200, the story is similar, but the HD5770 gets a little closer to the GTX260 in GT1 and actually surpasses it by a small margin in GT2. The 128-bit memory bus doesn't seem to hurt the card at all with higher resolutions. Once again the HD5770 beats the older HD48xx series cards, hinting that the latest mid-range card could be an excellent upgrade for Radeon users with cards that are 1-2 years old. We need to look at actual gaming performance to verify that, so let's take a look in the next section, at how these cards stack up in the standard bearer for gaming benchmarks, Crysis.
Crysis Benchmark Results

Crysis uses a new graphics engine: the CryENGINE2, which is the successor to Far Cry's CryENGINE. CryENGINE2 is among the first engines to use the Direct3D 10 (DirectX 10) framework, but can also run using DirectX 9, on Vista, Windows XP and the new Windows 7. As we'll see, there are significant frame rate reductions when running Crysis in DX10. It's not an operating system issue; DX9 works fine in Windows 7, but DX10 knocks the frame rates in half. Roy Taylor, Vice President of Content Relations at NVIDIA, has spoken on the subject of the engine's complexity, stating that Crysis has over a million lines of code, 1GB of texture data, and 85,000 shaders. To get the most out of modern multi-core processor architectures, CPU-intensive subsystems of CryENGINE 2 such as physics, networking and sound have been re-written to support multi-threading.

Crysis offers an in-game benchmark tool, which is similar to World in Conflict. This short test places a high amount of stress on a graphics card, since there are so many landscape features rendered. For benchmarking purposes, Crysis can mean trouble, as it places a high demand on both GPU and CPU resources. Benchmark Reviews uses the Crysis Benchmark Tool by Mad Boris to test frame rates in batches, which allows the results of many tests to be averaged.

Low-resolution testing allows the graphics processor to plateau at its maximum output performance, and shifts demand onto the other system components. At the lower resolutions Crysis will reflect the GPU's top-end speed in the composite score, indicating full-throttle performance with little load. This makes for a less GPU-dependent test environment, but it is sometimes helpful in creating a baseline for measuring maximum output performance. At the 1280x1024 resolution used by 17" and 19" monitors, the CPU and memory have too much influence on the results to be used in a video card test. At the widescreen resolutions of 1680x1050 and 1920x1200, the performance differences between the video cards under test are mostly down to the cards themselves.
Quite frankly, I was shocked by these numbers. Running XP-based systems and DirectX 9, the latest generation of video cards was starting to get a handle on Crysis. Certainly, in this test, with no anti-aliasing dialed in, any of the tested cards running in DX9 provided a usable solution. Now only the highest performing boards get close to an average frame rate of 30 FPS. It seems like we've gone back in time, back to when only two or three video cards could run Crysis with all the eye candy turned on. Now, we'll have to wait until CryEngine3 comes out and is optimized for the current generation of graphics APIs. The results here are similar to the GT2 scene in 3DMark Vantage, in that the HD5770 pulls slightly ahead of the GTX260-216. Of course we can make the situation go back and forth a bit by fiddling with clock rates, but the fact is, they are roughly equal at stock settings. Compared to older cards, such as the HD4850 and HD4830, there's not a big enough jump to justify upgrading if you want to run this game in DirectX 10. Keep in mind, this is not a universal problem, as we'll see later.
Once a decent amount of anti-aliasing is factored in, the HD5770 pulls a little further ahead of the GTX260-216. All those little improvements ATI made to the rendering processor pay off here. Frame rates are still well below acceptable until you get to the high end cards. If you want to play this game in DX10, you are going to have to pay.
In our next section, Benchmark Reviews tests with the Devil May Cry 4 Benchmark. Read on to see how a blended high-demand GPU test with low video frame buffer demand will impact our test products.

Devil May Cry 4 Benchmark

Devil May Cry 4 was released for the PC platform in 2008 as the fourth installment in the Devil May Cry video game series. DMC4 is a direct port from the PC platform to the console versions, which operate at the native 720P game resolution with no other platform restrictions. Devil May Cry 4 uses the refined MT Framework game engine, which has been used for many popular Capcom game titles over the past several years.

MT Framework is an exclusive seventh-generation game engine built to be used with games developed for the PlayStation 3 and Xbox 360, and PC ports. MT stands for "Multi-Thread", "Meta Tools" and "Multi-Target". Capcom originally intended to use an outside engine, but none matched their specific requirements in performance and flexibility. Games using the MT Framework are originally developed on the PC and then ported to the other two console platforms. On the PC version a special bonus called Turbo Mode is featured, giving the game a slightly faster speed, and a new difficulty called Legendary Dark Knight Mode is implemented. The PC version also has both DirectX 9 and DirectX 10 modes for the Windows XP, Vista, and Windows 7 operating systems.

It's always nice to be able to compare the results we receive here at Benchmark Reviews with the results you test for on your own computer system. Usually this isn't possible, since settings and configurations make it nearly impossible to match one system to the next; plus, you have to own the game or benchmark tool we used. Devil May Cry 4 fixes this, and offers a free benchmark tool available for download. Because the DMC4 MT Framework game engine is rather low-demand for today's cutting edge video cards, Benchmark Reviews uses the 1920x1200 resolution to test with 8x AA (the highest AA setting available to Radeon HD video cards) and 16x AF.
Devil May Cry 4 is not as demanding a benchmark as it used to be. Only scenes #2 and #4 are worth looking at from the standpoint of trying to separate the fastest video cards from the slower ones. Still, it represents a typical environment for many games that our readers still play on a regular basis, so it's good to see what works with it and what doesn't. Any of the tested cards will do a credible job in this application, and the performance scales in a pretty linear fashion. You get what you pay for when running this game, at least for benchmarks. This is one time where you can generally use the maximum available anti-aliasing settings, so NVIDIA users should feel free to crank it up to 16X. The DX10 "penalty" is of no consequence here. The HD5770 once again hangs tight with the GTX260-216, and they both provide excellent frame rates, well above the recommended minimums. The surprise of this test is the excellent performance of the HD4850. Suffice it to say, if you are getting 60+ frames per second in all your video games, you don't need a video card upgrade.
Our next benchmark of the series is for a very popular FPS game that rivals Crysis for world-class graphics.

Far Cry 2 Benchmark Results

Ubisoft has developed Far Cry 2 as a sequel to the original, but with a very different approach to game play and story line. Far Cry 2 features a vast world built on Ubisoft's new game engine called Dunia, meaning "world", "earth" or "living" in Farsi. The setting in Far Cry 2 takes place on a fictional Central African landscape, set to a modern day timeline. The Dunia engine was built specifically for Far Cry 2 by the Ubisoft Montreal development team. It delivers realistic semi-destructible environments, special effects such as dynamic fire propagation and storms, real-time night-and-day sunlight and moonlight cycles, a dynamic music system, and non-scripted enemy A.I. actions.

The Dunia game engine takes advantage of multi-core processors as well as multiple processors, and supports DirectX 9 as well as DirectX 10. Only 2 or 3 percent of the original CryEngine code is re-used, according to Michiel Verheijdt, Senior Product Manager for Ubisoft Netherlands. Additionally, the engine is less hardware-demanding than CryEngine 2, the engine used in Crysis. However, it should be noted that Crysis delivers greater character and object texture detail, as well as more destructible elements within the environment. For example, trees breaking into many smaller pieces and buildings breaking down to their component panels. Far Cry 2 also supports the amBX technology from Philips. With the proper hardware, this adds effects like vibrations, ambient colored lights, and fans that generate wind effects.

There is a benchmark tool in the PC version of Far Cry 2, which offers an excellent array of settings for performance testing. Benchmark Reviews used the maximum settings allowed for our tests, with the resolution set to 1920x1200. The performance settings were all set to 'Very High', Render Quality was set to the 'Ultra High' overall quality level, 8x anti-aliasing was applied, and HDR and Bloom were enabled. Of course, DX10 was used exclusively for this series of tests.
Although the Dunia engine in Far Cry 2 is slightly less demanding than the CryEngine 2 engine in Crysis, the strain on the video cards appears to be extremely close. In Crysis we didn't dare to test AA above 4x, whereas we use 8x AA and 'Ultra High' settings in Far Cry 2. Here we see the opposite effect when switching our testing to DirectX 10. Far Cry 2 seems to have been optimized, or at least written with a clear understanding of DX10 requirements. Using the short 'Ranch Small' time demo (which yields the lowest FPS of the three tests available), not all products are capable of producing playable frame rates with the settings all turned up. The Radeon HD5770 is one of those that gets its chin above the bar in this game. Although the Dunia engine seems to be optimized for NVIDIA chips, the improvements ATI incorporated in their latest GPUs are just enough to allow this game to be played with a mid-range card, albeit one at the upper end of the range. Older ATI products struggle with this benchmark, and if you've got one of those, the HD5770 would be a good upgrade for playing this game.
Our next benchmark of the series puts our collection of video cards against some very demanding graphics in the newly released Resident Evil 5 benchmark.

Resident Evil 5 Benchmark Results

PC gamers get the ultimate Resident Evil package in this new PC version, with exclusive features including NVIDIA's new GeForce 3D Vision technology (wireless 3D Vision glasses sold separately), new costumes, and a new mercenaries mode with more enemies on screen. Delivering an infinite level of detail, realism and control, Resident Evil 5 is certain to bring new fans to the series. Incredible changes to game play and the world of Resident Evil make it a must-have game for gamers across the globe.

Years after surviving the events in Raccoon City, Chris Redfield has been fighting the scourge of bio-organic weapons all over the world. Now a member of the Bioterrorism Security Assessment Alliance (BSAA), Chris is sent to Africa to investigate a biological agent that is transforming the populace into aggressive and disturbing creatures. New cooperatively-focused game play revolutionizes the way that Resident Evil is played. Chris and Sheva must work together to survive new challenges and fight dangerous hordes of enemies.

From a gaming performance perspective, Resident Evil 5 uses the Next Generation of Fear - ground-breaking graphics that utilize an advanced version of Capcom's proprietary game engine, MT Framework, which powered the hit titles Devil May Cry 4, Lost Planet and Dead Rising. The game uses a wider variety of lighting to enhance the challenge. Fear Light as much as Shadow - lighting effects provide a new level of suspense as players attempt to survive in both harsh sunlight and extreme darkness. As usual, we maxed out the graphics settings on the benchmark version of this popular game to put the hardware through its paces. Much like Devil May Cry 4, it's relatively easy to get good frame rates in this game, so take the opportunity to turn up all the knobs and maximize the visual experience.
The Resident Evil 5 benchmark tool provides a graph of continuous frame rates and averages for each of four distinct scenes. In addition, it calculates an overall average for the four scenes. The overall average is what we report here, as the scenes were pretty evenly matched and no scene had results that were so far above or below the average as to present a unique situation. The 1680x1050 test results from this game scale almost as linearly as a synthetic benchmark. In the case of the video card we're interested in, the HD5770 sits neatly between the HD4850 and the GTX260-216. The 1920x1200 test brings the two a little closer together in performance, but the ranking remains the same.
Our next benchmark of the series features a strategy game with photorealistic graphics: World in Conflict.

World in Conflict Benchmark Results

The latest version of Massive's proprietary MassTech engine utilizes DX10 technology and features advanced lighting and physics effects, and allows for a full 360-degree range of camera control. The MassTech engine scales down to accommodate a wide range of PC specifications; if you've played a modern PC game within the last two years, you'll be able to play World in Conflict. World in Conflict's FPS-like control scheme and 360-degree camera make its action-strategy game play accessible to strategy fans and fans of other genres... if you love strategy, you'll love World in Conflict. If you've never played strategy, World in Conflict is the strategy game to try.

Based on the test results charted below, it's clear that WiC doesn't place a limit on the maximum frame rate (to prevent a waste of power), which is good for full-spectrum benchmarks like ours, but bad for electricity bills. The average frame rate is shown for each resolution in the chart below. World in Conflict just begins to place demands on the graphics processor at the 1680x1050 resolution, so we'll skip the low-res testing.
The GT200 series GPUs from NVIDIA seem to have a distinct advantage with the World In Conflict benchmark. The GTX260-216 pulls out a 6 FPS lead over the HD5770 at 1680x1050 and holds onto it at the higher 1920x1200 resolution. The GTX275 and GTX285 pile on another 4-6 FPS above that. Also, the HD4850 comes close to the performance of the HD5770, even in the higher resolution testing, despite having only 512MB of memory to play with. Something is clearly not optimized in this benchmark for the latest ATI version of pixel processing hardware.
Our last benchmark of the series brings DirectX 11 into the mix, a situation that only one of the cards under test is capable of handling.

BattleForge - Renegade Benchmark Results

In anticipation of the release of DirectX 11 with Windows 7, and coinciding with the release of AMD's ATI HD 5870, BattleForge has been updated to allow it to run using DirectX 11 on supported hardware. Well, what does all of this actually mean, you may ask? It gives us a sip of water from the Holy Grail of game designing and computing in general: greater efficiency! What does this mean for you? It means that the game will demonstrate a higher level of performance for the same processing power, which in turn allows more to be done with the game graphically. In layman's terms, the game will have a higher frame rate and new ways of creating graphical effects, such as shadows and lighting. The culmination of all of this is a game that both runs and looks better. The game is running on a completely new graphics engine that was built for BattleForge.

BattleForge is a next-gen real time strategy game, in which you fight epic battles against evil along with your friends. What makes BattleForge special is that you can assemble your army yourself: the units, buildings and spells in BattleForge are represented by collectible cards that you can trade with other players. BattleForge is developed by EA Phenomic. The studio was founded by Volker Wertich, father of the classic "The Settlers" and the SpellForce series. Phenomic has been an EA studio since August 2006. BattleForge was released on Windows in March 2009. On May 26, 2009, BattleForge became a Play 4 Free branded game with only 32 of the 200 cards available. In order to get additional cards, players will now need to buy points on the BattleForge website. The retail version comes with all of the starter decks and 3,000 BattleForge points.
Never mind the DX10 vs. DX11 question; the real news here is that this game was almost certainly developed exclusively on ATI hardware, and it shows. At both widescreen resolutions, the HD5770 trumps the GTX285, an almost unthinkable result. Each of the product families seems to scale appropriately, but the NVIDIA cards take a beating here. The benchmark itself is a tough one, once all the settings are maxed out. EDITOR'S NOTE: SSAO was enabled for these tests, which utilizes DirectX 11 code that older products may not be compatible with.
In our next section, we investigate the thermal performance of the Radeon HD5770, and see if that half-size 40nm GPU die runs as cool as we think it will.

ATI Radeon HD5770 Temperature

It's hard to know exactly when the first video card got overclocked, and by whom. What we do know is that it's hard to imagine a computer enthusiast or gamer today who doesn't overclock their hardware. Of course, not every video card has the headroom. Some products run so hot that they can't suffer any higher temperatures than they generate straight from the factory. This is why we measure the operating temperature of the video card products we test.

To begin testing, I use GPU-Z to measure the temperature at idle as reported by the GPU. Next I use FurMark 1.7.0 to generate maximum thermal load and record GPU temperatures in high-power 3D mode. The ambient room temperature remained stable at 25C throughout testing (it's still warm in DC...). The ATI Radeon HD5770 video card recorded 36C in idle 2D mode, and increased to 69C after 20 minutes of stability testing in full 3D mode, at 1920x1200 resolution and the maximum MSAA setting of 8X. The fan was left on its stock settings for this test. 69°C is an excellent result for temperature stress testing, especially with stock fan settings. The built-in fan controller generally runs the fan at 1200 RPM during 2D or idle operation. On most benchmarks, the temperature never got above 65C and the fan stayed there. Once temps got above 65C, the controller ramped the fan up, to a maximum of 1600 RPM.

FurMark is an OpenGL benchmark that heavily stresses and overheats the graphics card with fur rendering. The benchmark offers several options allowing the user to tweak the rendering: fullscreen / windowed mode, MSAA selection, window size, and duration. The benchmark also includes a GPU Burner mode (stability test). FurMark requires an OpenGL 2.0 compliant graphics card with lots of GPU power! As an oZone3D.net partner, Benchmark Reviews offers a free download of FurMark to our visitors.
FurMark does two things extremely well: it drives the thermal output of any graphics processor higher than any other application or video game, and it does so consistently every time. While FurMark is not a true benchmark tool for comparing different video cards, it still works well to compare one product against itself using different drivers or clock speeds, or for testing the stability of a GPU, as it raises temperatures higher than any other program. But in the end, it's a rather limited tool. In our next section, we discuss electrical power consumption and learn how well (or poorly) each video card will impact your utility bill...

VGA Power Consumption

Life is not as affordable as it used to be, and items such as gasoline, natural gas, and electricity all top the list of resources which have exploded in price over the past few years. Add to this the limit of non-renewable resources compared to current demands, and you can see that the prices are only going to get worse. Planet Earth needs our help, and needs it badly. With forests becoming barren of vegetation and snow-capped poles quickly turning brown, the technology industry has a new attitude towards suddenly becoming "green". I'll spare you the powerful marketing hype that I get from various manufacturers every day, and get right to the point: your computer hasn't been doing much to help save energy... at least up until now.
To measure isolated video card power consumption, Benchmark Reviews uses the Kill-A-Watt EZ (model P4460) power meter made by P3 International. A baseline test is taken without a video card installed inside our computer system, which is allowed to boot into Windows and rest idle at the login screen before power consumption is recorded. Once the baseline reading has been taken, the graphics card is installed and the system is again booted into Windows and left idle at the login screen. Our final loaded power consumption reading is taken with the video card running a stress test using FurMark. Below is a chart with the isolated video card power consumption (not system total) displayed in Watts for each specified test product:
* Results are accurate to within +/- 5W.
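A minimal sketch of how the isolated figures in the chart are derived from the Kill-A-Watt readings; the 97, 119 and 212 watt values used here are the HD5770 readings reported in the paragraph that follows.

```python
def isolated_card_watts(system_with_card_w, baseline_w):
    """Subtract the no-card baseline to isolate the video card's own draw."""
    return system_with_card_w - baseline_w

baseline_w = 97    # system idling at the login screen with no video card installed
idle_w = 119       # same system with the HD5770 installed, idle at the login screen
furmark_w = 212    # HD5770 loaded with the FurMark stress test

print(isolated_card_watts(idle_w, baseline_w))     # 22 W at idle
print(isolated_card_watts(furmark_w, baseline_w))  # 115 W under load
```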
The ATI Radeon HD5770 pulled 22 (119-97) watts at idle and 115 (212-97) watts when running full out, using the test method outlined above. These numbers are very close to the factory numbers of 18W at idle and 108W under load.

Radeon HD5770 Final Thoughts

The alternative title for this review could have been: "What Price DirectX 10?" or "Who Killed Crysis?". I know the big news is DirectX 11, and how it is a major advancement in both image quality and coding efficiency, but for the time being, we're stuck in a DirectX 10 world, for the most part. DX11 games won't be thick on the ground for at least a year, and some of us are going to continue playing our old favorites. So, with the switch to Windows 7, what's the impact on gaming performance? So far, it's a bit too random for my tastes.
We seem to be back to a situation where the software differences between games have a bigger influence on performance than hardware and raw pixel processing power. As the adoption rate for Windows 7 ramps up, more and more gamers are going to be wondering if DirectX 10 is a blessing or a curse. Crysis gets cut off at the knees, but Far Cry 2 gets a second wind with DX10. World In Conflict holds back its best game play for NVIDIA customers, but BattleForge swings the other way, with DX10 and DX11. I have a feeling this is why gamers resolutely stuck with Windows XP, and never warmed up to Vista. It wasn't the operating system per se, as much as it was DirectX 10. And I want to clarify: there's probably nothing inherently wrong with DX10, it's just that so few games were designed to use it effectively. The other problem is that, unlike other image enhancing features, DirectX has no sliding scale. I can't select 2x or 4x or 8x to optimize the experience; it's either all in, or all out.
The good news is that the adoption rate for Windows 7 will probably set records, if anyone is keeping score. Combine that with the real-world benefit to software coders that DirectX 11 brings, and there is a good probability that we won't be stuck in DX10 land for very long. New graphics hardware from both camps, a new operating system, a new graphics API, and maybe an economic recovery in the works? It's going to be an interesting holiday season, this year!

ATI Radeon HD5770 Conclusion

The performance of the HD5770 is pretty amazing, considering the modest looking hardware resources that make it all possible. One way of showing this objectively is to look at the power required to deliver the performance. The HD5770 offers roughly the same performance as an HD4870 for approximately half the power, and that's at full load, without all the power saving tricks that are used to get the idle power below 20 watts. Performance is more than just frames-per-second, though; the ability to run 2-3 monitors with full ATI Eyefinity support counts, too. Plus, we've been measuring performance with beta drivers. If you've read some of my recent video card reviews, you've got a better understanding of why driver performance on launch day is not a good measure of the final product. So, while the raw performance numbers are good enough for the target price point today, I predict even better things to come for both price and performance.
The appearance of the product itself is both retro and futuristic at once. The red hood scoops may have been lampooned as Batmobile wannabes, but for the most part, the design is clean and sleek; a perfect canvas for the partners to display their best artwork. There may be some non-reference designs in the works, but the usual motivation for that effort is improving thermal performance. I don't see that as a real necessity with this card/chip combo. The reference design has plenty of cooling capacity for the tiny Juniper GPU.

The build quality of the Radeon HD5770 is a bit hard to assess. I've already noted some unique characteristics of the engineering sample I received, like DIP switches that don't appear on production versions, so I hesitate to pass judgment on something that a consumer will never see. Overall, the parts were all high quality, and while the PC board may have had a few rough edges, the cooler section was manufactured and assembled perfectly.

The features of the HD5770 are also amazing, having been carried over in full measure from the HD5800 series: DirectX 11, full ATI Eyefinity support, ATI Stream technology support, DirectCompute 11 and OpenCL support, and HDMI 1.3 with Dolby TrueHD and DTS-HD Master Audio. We've barely scratched the surface in this review, focusing almost exclusively on gaming performance, but the card has other uses as well.

As of October 13, at launch, ATI is aiming at a price point of $159 for the HD5770. A quick look at Newegg shows this to be right in the mix between the HD4870 and the GTX260-216. ATI priced this card right in the middle of the pack, performance-wise, but it has advanced features that the other cards can't match. Many ATI partners will also be bundling the card with a free download of DiRT 2, the latest version of an action-packed favorite, in order to showcase the new DirectX 11 capability. Once prices come down ten or twenty dollars, this card is going to be a screaming value.

The ATI Radeon HD5770 earns a Golden Tachometer Award, because it's a game-changing package for the middle ground. Perfectly timed to match up with the launch of Windows 7 and its DirectX 11 interface, it's the card to get if you are putting a system together in its price range.

Pros:
+ Unmatched feature set

Cons:
- 800 Stream Processors at 850MHz, but performance < HD4890?

Ratings:
Final Score: 9.2 out of 10.
Excellence Achievement: Benchmark Reviews Golden Tachometer Award.

Questions? Comments? Benchmark Reviews really wants your feedback. We invite you to leave your remarks in our Discussion Forum.
Comments
I can't seem to figure out the problem with my setup, or why I'm getting low scores on the benchmark test with Resident Evil 5 using the ATI 5770 GPU.
Scores with the same setup: 1920x1200 with all settings maxed out and 8x AA was only 24.1 FPS with the single card and 42.1 FPS with the CrossFire setup.
Specs:
GPU: Sapphire ATI 5770 Vapor-X, CrossFire config with 1 CrossFire cable connected
CPU: Thuban 1090T, H50 water cooled
Mobo: MSI 790GX-65G
HDD: Samsung 500GB 7200RPM 3.5"
Mem: OCZ Obsidian DDR3 1600MHz 4GB (2x2GB)
PSU: Thermaltake Litepower 600W
Monitor: Sony 32in Bravia LCD TV
Software: DX11, Catalyst 10.11, Windows 7
Can somebody please help? Could some component be a bottleneck?