XFX Radeon R7950 Black Edition Video Card
Reviews - Featured Reviews: Video Cards
Written by David Ramsey
Tuesday, 31 January 2012
XFX R7950 Black Edition Double Dissipation
Manufacturer: XFX Creation, Inc.

Full Disclosure: The product sample used in this article has been provided by XFX.

AMD might have trouble keeping up with the competition in the CPU arena, but their acquisition of ATI allowed them to become a major player in the graphics world. AMD and NVIDIA regularly trade places in the "fastest video card" rankings, and while NVIDIA has been good at holding down the title with its current high end, the Fermi-based GTX 580, AMD's new "Tahiti" GPU decisively stole the performance crown in its 7970 form. While our recent test of the XFX R7970 Black Edition Double Dissipation video card praised its performance, we did gulp at the price: an eye-watering $600. That's $50 more (at Newegg prices) than a 3GB GTX 580 and about $130 more than a standard 1.5GB GTX 580. Today we have the R7970's little brother, the R7950, which comes in at a still-high $500...but how will its performance compare?
Like its big brother, this card comes with a factory overclock and an elaborate dual-fan cooling system. I'll compare its performance against NVIDIA and AMD's current top-end cards, as well as the R7970 we tested recently, to see how well it performs.

AMD "Southern Islands" GPU

It's always exciting to see AMD or NVIDIA come out with a completely new GPU architecture. The "Southern Islands" GPUs are AMD's implementation of its "Graphics Core Next" architecture, and comprise three different families of GPUs:
Right now, the only Southern Islands GPUs available are the Tahiti-based Radeon HD 7970 and 7950, but cards based on the Pitcairn and Cape Verde variants will appear in the coming months.

Graphics Core Next

AMD had several goals in mind for Graphics Core Next, and one of the main things they wanted to do was to catch up with NVIDIA in the "GPU compute" arena. Right now, NVIDIA's CUDA (Compute Unified Device Architecture) dominates GPU computing, with a robust set of developer tools and years of track record behind it. AMD's "DirectCompute" alternative has been around almost as long but has failed to catch on with developers to the degree that CUDA has. AMD is making a real push for DirectCompute with these new GPUs, and claims that over 200 applications already benefit from DirectCompute technology.

For Southern Islands, AMD has grouped simple ALUs (arithmetic logic units) into a single SIMD (Single Instruction Multiple Data) unit. A number of SIMD units, along with instruction decoders and schedulers, branch units, vector processors, and other items, comprise a compute unit, and a number of these compute units (along with memory controllers and whatnot) comprise a Southern Islands GPU chip. Each compute unit comprises 64 shaders; the 7970 has 32 of them (and thus 2,048 shaders), while the Radeon 7950 gets by with only 28 (and 1,792 shaders). That's a decrease of only 12.5%, which doesn't seem like much. Additionally, the standard 7950 clock speed is 800MHz as compared to the 7970's 925MHz. Here's a summary of the differences between these two XFX video cards:
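As a quick, back-of-the-envelope recap of those differences, the sketch below works out the shader counts and the relative deficit. The compute-unit and clock figures are the ones quoted in the text; the snippet is purely illustrative arithmetic, not anything AMD or XFX publishes.

```python
# Illustrative arithmetic only; figures are the ones quoted in the review text.
SHADERS_PER_CU = 64  # each GCN compute unit contains 64 shaders

cards = {
    "Radeon HD 7970": {"compute_units": 32, "stock_clock_mhz": 925},
    "Radeon HD 7950": {"compute_units": 28, "stock_clock_mhz": 800},
}

for name, spec in cards.items():
    shaders = spec["compute_units"] * SHADERS_PER_CU
    print(f"{name}: {spec['compute_units']} CUs -> {shaders} shaders "
          f"at {spec['stock_clock_mhz']} MHz stock")

# Relative shader-count deficit of the 7950 versus the 7970
deficit = 1 - (28 * SHADERS_PER_CU) / (32 * SHADERS_PER_CU)
print(f"7950 shader deficit: {deficit:.1%}")   # 12.5%
```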
Pre-release rumors had the 7950 running with a 256-bit interface to memory, but it has the same 384-bit path as its big brother. While the 7950 will be available with 1.5GB of onboard memory, the XFX R7950 Black Edition Double Dissipation that Benchmark Reviews has to test today carries the larger 3GB memory option.

AMD has tweaked their VLIW (very long instruction word) architecture to provide more consistent performance. Previous generations of AMD GPUs often left many compute units/stream processors idle, because dependencies in the data being worked on meant that not all the compute units could be used at once. The Southern Islands architecture provides a greater degree of parallelism (it's that SIMD stuff, really, being used effectively) and can keep most compute units working all the time, leading to more consistent (and higher) performance. This has obvious advantages in both graphics processing and general GPU-compute operations. Other enhancements include:

- Partially Resident Textures: As games increasingly use very large textures, loading and manipulating the texture data takes more time. A Southern Islands GPU can load only the part of the texture that will actually be visible in a frame, reducing the memory bandwidth and workload.
- Error-correcting code support: There's not much detail on this feature yet, but it looks as if AMD will be able to offer optional ECC support (important for industrial applications) without having to use ECC memory. This will detect and correct memory errors, although AMD's tech white paper doesn't go into specifics such as how many bits can be detected/corrected.
- PowerTune and ZeroPower: These features dynamically clock down the card's GPU and memory (PowerTune) when high performance isn't needed, and can shut off entire sections of the GPU (ZeroPower) when the card is idle. For example, the second card in a CrossFireX system can be idled down to less than 5 watts if you're just browsing the Windows desktop; a single-card system will power down if your display goes to sleep. Combined with the inherent efficiency of the 28nm fabrication process, this results in significant power savings. Side benefits you'll notice include less heat and noise emanating from your rig, especially when you're not gaming.
- Eyefinity 2.0: New support for 5x1 monitor layouts, improved bezel correction, and support for custom resolutions enhance AMD's existing Eyefinity feature. I saw a 5x1 system demonstrated at an AMD press event a few months ago and it was quite impressive.
- 28nm fabrication process: If you make 'em smaller, you can fit more of 'em in. The 7970 GPU has a staggering 4.3 billion transistors. The original Intel 4004 microprocessor had about 2,300.
- PCI Express 3.0 support: This has twice the bandwidth of PCI-E 2.0, but I'm not sure what real-world effect this will have, especially in x16 slots. Even the beefiest current video cards aren't hobbled by x8 PCI-E 2.0 bandwidth.

Let's take a look at what these features actually mean on a real, live video card.

Closer Look: XFX Radeon HD 7950

The XFX R7950 Black Edition Double Dissipation comes with an accessory package identical to its big brother R7970. Along with the various bits of documentation is a driver CD, a CrossFireX cable, and a passive HDMI-to-DVI cable. There's also a cool metal and plastic stick-on "XFX Black Edition" badge you can put somewhere on your case to show off.
If you've owned or read reviews on AMD video cards before, you're familiar with the "black slab with red accents" styling that AMD reference cards have used for the past few years. Indeed, even the new Tahiti-based cards use this styling, and most of the cards you'll see will consist of the reference design with a vendor-specific sticker (often featuring guns, women, and monsters) applied. XFX, however, has decided that their new high-end cards deserve better:
The first thing you'll notice is the dual-fan cooler, which is quite elaborate: XFX replaces the stock AMD vapor chamber cooler with a "HydroCell" vapor chamber of their own design. They then apply two Duratec IP-5X dust-free fans, whose design prevents dust from ever reaching the fan bearings, and surround the whole assembly with an aluminum "Ghost Thermal Technology" shroud whose design is said to improve cooling by directing air out the top and bottom of the card.
That said, there doesn't actually appear to be a lot of space for air to exit from the top of the card. A red trim panel and the cooler base above it block most of the airflow from the top of the card, although there is some space below the trim panel. As we'll see later, this doesn't seem to affect the GPU temperatures, and besides, it looks pretty cool in a tower case with a window.
The back of the card doesn't have any sort of cover plate, but you can see 15 or 16 of the 20+ screws that secure the cooling apparatus to the card. White circular "Warranty void if removed" stickers cover two of the screws. I don't know if it makes any difference in the real world, but I prefer backplates to protect the minuscule components of the card from damage and static. Let's take a closer look at this card in the next section.

FX-795A-TDBC Detailed Features

The connectors available include one DVI, one HDMI, and two mini-DisplayPort. The DVI connector is a startling red color. XFX claims their custom backplate with the "XFX" cutout permits twice the airflow of a standard backplate, but this seems unlikely.
The XFX R7950 Black Edition Double Dissipation video card has two 6-pin PCI-E power connectors. However, solder pads on the back of the card reveal that a version with an 8-pin connector could be made.
While the "Double Dissipation" cooler on the R7950 superficially seems identical to the one on the R7970, looking at the bottom we can see a visible copper heat pipe on the R7970 (upper card in the image below) that's not present on the R7950.
With the cooler removed, we can see the copper plate for GPU cooling and thermal pads for the memory and VRMs. As is sadly often the case, way too much thermal compound was used during the assembly of this card.
The Radeon 7950 GPU is a blank mirror with no part numbers or other markings. Twelve Hynix memory chips (one with a thermal pad over it) surround the GPU. We'll cover the features and specifications of this card in the next section.

R7950 Black Edition Features
FX-795A-TDBC Specifications
There's a lot going on "under the hood" of this card. In addition to the elaborate cooler, XFX enhances the reference 7950 design with a 2-ounce copper PCB, ferrite core chokes, and solid capacitors rated for operation up to 105 degrees Centigrade. Their Black Edition 7950 cards use hand-selected GPUs that XFX says are in the "top 1%" and can reach higher clock speeds without having to increase the wattage being drawn by the card. XFX delivers the card pre-overclocked with a GPU speed of 900MHz (as compared to a stock speed of 800MHz) and a memory speed of 1375MHz (stock speed 1250MHz). We already know that the R7970 is the fastest single GPU you can buy. Let's see how its little brother R7950 compares...

VGA Testing Methodology

The Microsoft DirectX-11 graphics API is native to the Microsoft Windows 7 operating system, which will be the primary O/S for our test platform. DX11 is also available as a Microsoft Update for the Windows Vista O/S, so our test results apply to both versions of the operating system. All of the tests in this review were run with DX11 graphics.

According to the Steam hardware survey, as of December 2011 the most popular desktop resolution (for Steam users) is 1680x1050 pixels with a 17.59% share, with 1920x1080 pixels coming in second at only 7.7%. However, when testing video cards at this level, the higher the resolution you can test with, the better. I ran most tests at both 1680x1050 and 1920x1200 (I continue to prefer the 16:10 ratio of 1920x1200 to the mysteriously more popular 1920x1080 resolution). Frankly, the top-end video cards these days are so powerful that even 1920x1200 isn't much of a challenge...

I used a combination of synthetic and video game benchmark tests in this article to illustrate relative performance among graphics solutions. Our benchmark frame rate results are not intended to represent real-world graphics performance, as this experience would change based on supporting hardware and the perception of the individuals playing the video game.

Note: Since Benchmark Reviews tested the XFX R7970 Black Edition Double Dissipation video card, AMD has released a new driver that promises increased performance across a broad range of applications. For this test I re-ran the benchmarks for the R7970 with the new driver, so the scores for that card in this review will differ from the scores in the original review. AMD has not rolled the Tahiti drivers into their standard Catalyst release yet, so we're stuck with mysterious numbers: we originally tested the R7970 with driver "8.921-111202a-129903E-ATI", while in this test we're using the newer "8.921.2-120119a-132101E-ATI".

DX11 Cost to Performance Ratio

For this article Benchmark Reviews has included cost per FPS for graphics performance results. Only the least expensive product price is used, and it does not include tax, freight, promotional offers, or rebates. All prices reflect product series components, and do not represent any specific manufacturer, model, or brand. These retail prices for each product were obtained from NewEgg.com on 8-January-2012:
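As a rough illustration of how each cost-per-FPS figure in the benchmark sections is derived from one of those prices and a measured average frame rate, here is a minimal sketch. The prices and frame rates below are placeholders, not the actual NewEgg figures or test results used in the charts.

```python
# Hypothetical example of the cost-per-FPS calculation used in the charts.
# Prices and frame rates here are placeholders, not the measured values.
def cost_per_fps(price_usd: float, average_fps: float) -> float:
    """Return the dollars paid per frame per second of performance."""
    return price_usd / average_fps

example_cards = {
    "Card A": {"price": 500.0, "fps": 60.0},
    "Card B": {"price": 480.0, "fps": 50.0},
}

for name, data in example_cards.items():
    ratio = cost_per_fps(data["price"], data["fps"])
    print(f"{name}: ${ratio:.2f} per FPS")  # lower is better
```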
Intel X79-Express Test System
DirectX-11 Benchmark Applications
Video Card Test Products
| Graphics Card | GeForce GTX570 | GeForce GTX580 | GeForce GTX590 | Radeon HD6970 | Radeon HD6990 | Radeon HD7950 | Radeon HD7970 |
|---|---|---|---|---|---|---|---|
| GPU Cores | 480 | 512 | 1024 | 1536 | 3072 | 1792 | 2048 |
| Core Clock (MHz) | 732 | 772 | 607 | 880 | 800 | 900 | 1000 |
| Shader Clock (MHz) | 1464 | 1544 | 1215 | N/A | N/A | N/A | N/A |
| Memory Clock (MHz) | 1900 | 2004 | 1707 | 900 | 1250 | 1375 | 1425 |
| Memory Amount | 1280MB GDDR5 | 1536MB GDDR5 | 3072MB GDDR5 | 2048MB GDDR5 | 4096MB GDDR5 | 3072MB GDDR5 | 3072MB GDDR5 |
| Memory Interface | 320-bit | 384-bit | 384-bit | 256-bit | 256-bit | 384-bit | 384-bit |
DX11: Crysis 2
The latest entry in Crytek's "Crysis" series launched in early spring of 2011. The initial release lacked DX11 support and was widely criticized for low-quality textures. A few months later Crytek released patches that provided DX11 features and more than a gigabyte of new high-resolution textures. Thus fortified, Crysis 2 is one of the most visually impressive games available.
I used the Adrenaline Crysis 2 benchmark tool to benchmark a scripted run-through of Times Square. I set the quality level to "Extreme" and turned on DirectX 11 and high-resolution textures. AA was enabled in the benchmark (although it's not a game option normally) to increase the load on these graphics cards.
- Crysis 2 with Adrenaline Benchmark
- Extreme quality, DX11 features, high-resolution textures, 4xAA, Edge AA, Times Square sequence
Cost Analysis: Crysis 2 (1920x1200)
Test Summary: The 6990's surprisingly low score is something I can't explain; I double-checked my settings and re-ran the test several times. Look how close the overclocked 7950 comes to the mighty GTX 590! It's only about 8% slower.
DX11: Batman: Arkham City
If there was ever a game that showcased the growing gap between game consoles and high-end gaming PCs, Batman: Arkham City is it. In this dystopian near-future, part of Gotham City has been walled off as an enclave for criminals (rather like Escape from New York). It's a third-person action game that continues the story line set forth in Batman: Arkham Asylum, and is based on an updated Unreal Engine 3 game engine. Batman: Arkham City is a DirectX 11 title that uses multi-threaded rendering to produce life-like tessellation effects.
One annoyance with the game is that all game settings must be made through a hidden application called "BMLauncher". Once you've made your settings, though, an in-game benchmark provides the feedback you'll need to tune your system's performance.
- Batman: Arkham City
- FXAA (high), DirectX 11 features, High tessellation, Extreme Detail, Dynamic Shadows, Motion Blur, Distortion, Lens Flares, Light Shafts, Reflections, Ambient Occlusion, hardware-accelerated PhysX
The AMD Radeon graphics cards are at a disadvantage here, since, like Arkham Asylum, Arkham City is a showcase of PhysX effects, which can be directly accelerated by NVIDIA cards but not by AMD cards. Some might argue that turning PhysX off would provide a fairer comparison, but the extensive PhysX effects are such an integral part of the game that I decided to leave them on. In this case, the CPU handles the PhysX computations, and while the Intel Core i7-3960X processor is a best-case example, I'd expect similar performance from a 2500K or 2600K, since the per-core performance of these CPUs is similar to that of the 3960X.
Note that the dual-GPU Radeon HD 6990 turns in the same frame rates as the 6970. This is almost certainly because AMD doesn't have a CrossFireX profile for this new game, so the card runs in single-GPU mode. But the Tahiti-core cards are a solid 36% faster than the previous-generation cards, and deliver solidly playable frame rates even with PhysX enabled. That's impressive.
Cost Analysis: Arkham City (1920x1200)
Test Summary: Although the HD 7950 is at a disadvantage here with its lack of PhysX support, it still turns in very playable frame rates. Oddly, the overclocked score is virtually identical to the stock-clocked score.
DX11: Aliens vs Predator
Aliens vs. Predator is a science fiction first-person shooter video game, developed by Rebellion, and published by Sega for Microsoft Windows, Sony PlayStation 3, and Microsoft Xbox 360. Aliens vs. Predator utilizes Rebellion's proprietary Asura game engine, which had previously found its way into Call of Duty: World at War and Rogue Warrior. The self-contained benchmark tool is used for our DirectX-11 tests, which push the Asura game engine to its limit.
I configured Aliens vs. Predator to use the highest quality settings with 4x AA and 16x AF, as well as turning on DirectX-11 features such as Screen Space Ambient Occlusion (SSAO) and tessellation, along with advanced shadows.
- Aliens vs Predator
- Texture quality High, Shadow quality High, 4xAA, 16xAF, SSAO on, Hardware tessellation, Advanced Shadow Sampling
Cost Analysis: Aliens vs Predator (1920x1200)
Test Summary: The Radeon HD 6990 comes roaring back here, easily beating the GTX 590. The R7950 beats the GTX 580 by just over 26% at 1920x1200.
DX11: Lost Planet 2
Capcom provides a stand-alone benchmark tool for Lost Planet 2. Reviewers love stand-alone benchmarks, and users should, too, since they allow the evaluation of a system without the trouble and expense of purchasing and configuring the actual game. Lost Planet 2 takes place on E.D.N. III, the same planet as in the original Lost Planet game, but ten years later. The snow has melted and somehow giant tropical jungles have grown to fill the landscape.
Lost Planet 2 takes advantage of DX11 features including tessellation and displacement mapping on water, level bosses, and player characters. In addition, soft body compute shaders are used on 'Boss' characters, and wave simulation is performed using DirectCompute. These cutting edge features make for an excellent benchmark for top-of-the-line consumer GPUs. There are two parts to the benchmark: Test A, which is a semi-random script that's a good example of normal game play, and Test B, which is a deterministic script that places a significantly heavier load on the card being tested.
- Lost Planet 2
- 1920x1200, 8X AA, Motion Blur, High Shadow Detail, High Texture Detail, High Rendering, High DX11 features
Cost Analysis: Lost Planet 2 (1920x1200, Test B)
Test Summary: Radeon cards do very well in this test, although the dual-GPU GTX 590 still posts the highest scores. It's interesting to note that while the Radeon HD 6990 beats the Tahiti cards in Test A, the tables are turned in Test B.
DX11: Metro 2033
Metro 2033 is an action-oriented video game with a combination of survival horror and first-person shooter elements. The game is based on the novel Metro 2033 by Russian author Dmitry Glukhovsky. It was developed by 4A Games in Ukraine and released in March 2010 for Microsoft Windows. Metro 2033 uses the 4A game engine, developed by 4A Games. The 4A Engine supports DirectX-9, 10, and 11, along with NVIDIA PhysX and GeForce 3D Vision.

The 4A engine is multi-threaded in such a way that only PhysX has a dedicated thread (although PhysX is disabled for this test); it uses a task model without any pre-conditioning or pre/post-synchronizing, allowing tasks to be done in parallel. The 4A game engine can utilize a deferred shading pipeline, uses tessellation for greater performance, and also offers HDR (complete with blue shift), real-time reflections, color correction, film grain and noise, and multi-core rendering.

Metro 2033 features superior volumetric fog, double PhysX precision, object blur, sub-surface scattering for skin shaders, parallax mapping on all surfaces, and greater geometric detail with less aggressive LODs. Using PhysX, the engine supports features such as destructible environments, cloth and water simulations, and particles that can be fully affected by environmental factors.
- Metro 2033
- DirectX 11, Very High quality, 4xAA, 16xAF, tessellation, DOF, "Frontline" scene, no PhysX
NVIDIA has been diligently working to promote Metro 2033, and for good reason: it is the most demanding PC video game we've ever tested. When their flagship GeForce GTX 580 struggles to produce 26 FPS at 1920x1200 with DirectX-11 anti-aliasing turned down to its lowest setting, you know that only the strongest graphics processors will generate playable frame rates. All our tests disable advanced PhysX options.
Cost Analysis: Metro 2033 (1920x1200)
Test Summary: There's no doubt that this is a very demanding game. What Crysis was to top-end video cards a few years ago, Metro 2033 is now, with even the strongest single-GPU cards struggling to break 30 frames per second at 1920x1200. And note that this is with PhysX features turned off! Still, both Tahiti-based cards managed to post average frame rates above 30 FPS, even at 1920x1200 with the game's most demanding settings.
DX11: Unigine Heaven 2.5
The Unigine "Heaven 2.1" benchmark is a free publicly available tool that grants the power to unleash the graphics capabilities in DirectX-11 for Windows 7 or updated Vista Operating Systems. It reveals the enchanting magic of floating islands with a tiny village hidden in the cloudy skies. With the interactive mode, emerging experience of exploring the intricate world is within reach. Through its advanced renderer, Unigine is one of the first to set precedence in showcasing the art assets with tessellation, bringing compelling visual finesse, utilizing the technology to the full extend and exhibiting the possibilities of enriching 3D gaming.
The distinguishing feature in the Unigine Heaven benchmark is a hardware tessellation that is a scalable technology aimed for automatic subdivision of polygons into smaller and finer pieces, so that developers can gain a more detailed look of their games almost free of charge in terms of performance. Thanks to this procedure, the elaboration of the rendered image finally approaches the boundary of veridical visual perception: the virtual reality transcends conjured by your hand. The "Heaven" benchmark excels at providing the following key features:
- Native support of OpenGL, DirectX 9, DirectX-10 and DirectX-11
- Comprehensive use of tessellation technology
- Advanced SSAO (screen-space ambient occlusion)
- Volumetric cumulonimbus clouds generated by a physically accurate algorithm
- Dynamic simulation of changing environment with high physical fidelity
- Interactive experience with fly/walk-through modes
- ATI Eyefinity support
- Unigine Heaven 2.5
- High Shaders, Normal tessellation, 8xAA, 16xAF
Cost Analysis: Unigine Heaven (1920x1200)
Test Summary: AMD cards dominate in this test, albeit only at the 1920x1200 resolution. At both resolutions, though, the overclocked XFX R7950 Black Edition Double Dissipation is nipping at the heels of the $250 more expensive GTX 590. At 1920x1200 it decisively beats the GTX 580.
XFX 7950 DD Temperatures
Benchmark tests are always nice, so long as you care about comparing one product to another. But when you're an overclocker, gamer, or merely a PC hardware enthusiast who likes to tweak things on occasion, there's no substitute for good information. Benchmark Reviews has a very popular guide on Overclocking Video Cards, which gives detailed instructions on how to tweak a graphics card for better performance. Of course, not every video card has overclocking headroom. Some products run so hot that they can't suffer any higher temperatures than they already do. This is why we measure the operating temperature of the video card products we test.
At the start of the test, I measure the idle temperature of the card as it sits at the Windows desktop, using the GPU-Z utility. Next, I start FurMark's stress test and let it run until the temperature curve flattens and the reading has not varied by more than 1 degree over the last five minutes.
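To make that stopping rule concrete, here is a minimal sketch of the steady-state check. The temperature samples are hypothetical, and in practice the curve is simply watched in GPU-Z rather than scripted.

```python
# Hypothetical illustration of the "flat temperature curve" stopping rule:
# stop once the last five minutes of readings vary by no more than 1 degree C.
def reached_steady_state(readings_c, window=5, tolerance_c=1.0):
    """readings_c: one temperature sample per minute, oldest first."""
    if len(readings_c) < window:
        return False
    recent = readings_c[-window:]
    return max(recent) - min(recent) <= tolerance_c

# Example: card heating up under FurMark, then levelling off
samples = [38, 55, 64, 69, 71, 72, 72, 73, 72, 72]
print(reached_steady_state(samples))  # True: the last 5 samples span only 1 degree
```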
FurMark does two things extremely well: it drives the thermal output of any graphics processor higher than applications or video games realistically could, and it does so consistently every time. FurMark works great for testing the stability of a GPU as the temperature rises to its highest possible level. The temperatures discussed below are absolute maximum values, and not representative of real-world performance.
Equipped with a very energy-efficient 28nm GPU and an enhanced vapor chamber cooler, the R7950 Black Edition Double Dissipation returns temperatures a good 30 degrees lower under stress than I've seen from an NVIDIA GTX 580 card...and that's when the card is overclocked beyond its already overclocked specs!
It's interesting to note that the overclocked load temperatures are only a single degree higher than the standard load temperatures. The XFX R7950 video card seems to have a different, and more aggressive, fan profile than the XFX R7970 I reviewed. Under load, that R7970's fans only reached 49% of their maximum speed, while the R7950's fans hit 66% of their maximum speed. The R7950 is noticeably noisier than its big brother under load, but the payoff is very low temperatures.
Note: After we completed testing on this card, XFX notified us that retail cards will have a new BIOS with a less aggressive fan profile, although the BIOS we tested will be available as a download for users who want the lowest possible temperatures. With the new BIOS the temperatures and noise should be closer to those we recorded with the XFX R7970.
VGA Power Consumption
AMD's Tahiti GPU is the first GPU fabricated on a 28nm process. There are two advantages to making transistors smaller: you can make 'em faster, and they use less power. Both are true here, but AMD didn't stop there, since their Tahiti architecture has a number of clever power-saving features.
Like a modern CPU, a Tahiti GPU will aggressively clock itself down when its full capabilities aren't needed, reducing current draw with what AMD calls "PowerTune". But they go even further, with "ZeroCore" technology turning off entire sections of the chip when they aren't in use. These features work amazingly well.
To measure isolated video card power consumption, Benchmark Reviews uses the Kill-A-Watt EZ (model P4460) power meter made by P3 International. A baseline test is taken without a video card installed inside our test computer system, which is allowed to boot into Windows 7 and rest idle at the login screen before power consumption is recorded. Once the baseline reading has been taken, the graphics card is installed and the system is again booted into Windows and left idle at the login screen. Our final loaded power consumption reading is taken with the video card running a stress test using FurMark. Below is a chart with the isolated video card power consumption (not system total) displayed in Watts for each specified test product:
| Situation | Power | Card delta |
|---|---|---|
| Windows login, no video card | 101 watts | -- |
| Windows login, video card | 117 watts | 16 watts |
| Windows desktop | 119 watts | 18 watts |
| Windows desktop, display sleep | 109 watts | 8 watts |
| FurMark load | 332 watts | 231 watts |
| Overclocked FurMark load | 347 watts | 246 watts |
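The "Card delta" column above is simply each system reading minus the 101-watt no-card baseline. A minimal sketch of that arithmetic, using the values from the table:

```python
# Values from the table above; the card's isolated draw is each system
# reading minus the 101 W baseline measured with no video card installed.
BASELINE_W = 101

readings_w = {
    "Windows login, video card": 117,
    "Windows desktop": 119,
    "Windows desktop, display sleep": 109,
    "FurMark load": 332,
    "Overclocked FurMark load": 347,
}

for situation, system_w in readings_w.items():
    card_delta = system_w - BASELINE_W
    print(f"{situation}: {card_delta} W attributable to the card")
```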
| VGA Product Description (sorted by combined total power) | Idle Power | Loaded Power |
|---|---|---|
| NVIDIA GeForce GTX 480 SLI Set | 82 W | 655 W |
| NVIDIA GeForce GTX 590 Reference Design | 53 W | 396 W |
| ATI Radeon HD 4870 X2 Reference Design | 100 W | 320 W |
| AMD Radeon HD 6990 Reference Design | 46 W | 350 W |
| NVIDIA GeForce GTX 295 Reference Design | 74 W | 302 W |
| ASUS GeForce GTX 480 Reference Design | 39 W | 315 W |
| ATI Radeon HD 5970 Reference Design | 48 W | 299 W |
| NVIDIA GeForce GTX 690 Reference Design | 25 W | 321 W |
| ATI Radeon HD 4850 CrossFireX Set | 123 W | 210 W |
| ATI Radeon HD 4890 Reference Design | 65 W | 268 W |
| AMD Radeon HD 7970 Reference Design | 21 W | 311 W |
| NVIDIA GeForce GTX 470 Reference Design | 42 W | 278 W |
| NVIDIA GeForce GTX 580 Reference Design | 31 W | 246 W |
| NVIDIA GeForce GTX 570 Reference Design | 31 W | 241 W |
| ATI Radeon HD 5870 Reference Design | 25 W | 240 W |
| ATI Radeon HD 6970 Reference Design | 24 W | 233 W |
| NVIDIA GeForce GTX 465 Reference Design | 36 W | 219 W |
| NVIDIA GeForce GTX 680 Reference Design | 14 W | 243 W |
| Sapphire Radeon HD 4850 X2 11139-00-40R | 73 W | 180 W |
| NVIDIA GeForce 9800 GX2 Reference Design | 85 W | 186 W |
| NVIDIA GeForce GTX 780 Reference Design | 10 W | 275 W |
| NVIDIA GeForce GTX 770 Reference Design | 9 W | 256 W |
| NVIDIA GeForce GTX 280 Reference Design | 35 W | 225 W |
| NVIDIA GeForce GTX 260 (216) Reference Design | 42 W | 203 W |
| ATI Radeon HD 4870 Reference Design | 58 W | 166 W |
| NVIDIA GeForce GTX 560 Ti Reference Design | 17 W | 199 W |
| NVIDIA GeForce GTX 460 Reference Design | 18 W | 167 W |
| AMD Radeon HD 6870 Reference Design | 20 W | 162 W |
| NVIDIA GeForce GTX 670 Reference Design | 14 W | 167 W |
| ATI Radeon HD 5850 Reference Design | 24 W | 157 W |
| NVIDIA GeForce GTX 650 Ti BOOST Reference Design | 8 W | 164 W |
| AMD Radeon HD 6850 Reference Design | 20 W | 139 W |
| NVIDIA GeForce 8800 GT Reference Design | 31 W | 133 W |
| ATI Radeon HD 4770 RV740 GDDR5 Reference Design | 37 W | 120 W |
| ATI Radeon HD 5770 Reference Design | 16 W | 122 W |
| NVIDIA GeForce GTS 450 Reference Design | 22 W | 115 W |
| NVIDIA GeForce GTX 650 Ti Reference Design | 12 W | 112 W |
| ATI Radeon HD 4670 Reference Design | 9 W | 70 W |
It's obvious that AMD's power-saving technologies work incredibly well. Unless you're gaming, playing video, or running stress tests or benchmarks, the card's power use is amazingly low. Low power means less heat, longer component life, and a smaller electric bill.
XFX R7950 DD Overclocking
The XFX R7950 Black Edition Double Dissipation video card comes pre-overclocked. While the stock Radeon 7950 runs its GPU at 800MHz, out of the box the XFX card is running at 900MHz. But maybe there's more to be had. I first tried overclocking with a beta version of MSI's Afterburner utility, but wasn't able to get things working correctly (especially the voltage adjustments), so I was limited to AMD's own Overdrive utility, built into the Catalyst Control Center.
While I was able to take XFX's R7970 GPU as far as Overdrive would take it (1100MHz), I could only reach 1020MHz on the R7950. Actually, 1050MHz would work on every benchmark except Crysis 2, where it would reliably lock up just a few seconds into the benchmark, every time...and since it was the last benchmark in my tests, I had to re-run everything at 1020MHz. I was able to overclock the memory the full additional 200MHz that Overdrive allowed, to 1575MHz. A 13.3% overclock might not seem like much, but remember XFX has already taken the GPU from its stock 800MHz clock to 900MHz, so 1020MHz represents a 27.5% overclock from the "real" stock setting.
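For clarity, the two overclock percentages quoted above work out as follows. This is trivial arithmetic on the clock figures from the text, nothing more:

```python
# Clock figures from the text: AMD stock 800 MHz, XFX factory clock 900 MHz,
# and the 1020 MHz this sample reached with AMD Overdrive.
amd_stock_mhz = 800
xfx_factory_mhz = 900
achieved_mhz = 1020

over_factory = achieved_mhz / xfx_factory_mhz - 1   # ~13.3% over the XFX clock
over_stock = achieved_mhz / amd_stock_mhz - 1       # ~27.5% over AMD's stock clock

print(f"Over the factory overclock: {over_factory:.1%}")
print(f"Over the reference clock:   {over_stock:.1%}")
```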
The card was completely stable at these settings, and the fans didn't seem any louder except under FurMark testing. Here's how the overclock helped performance at 1920x1200:
| Benchmark | Stock FPS | OC FPS | % improvement |
|---|---|---|---|
| Heaven 2.5 | 45.6 | 50.6 | 10.96 |
| 3DMark11 GT1 | 11.27 | 13 | 15.35 |
| 3DMark11 GT2 | 12.68 | 14.29 | 12.70 |
| 3DMark11 GT3 | 11.71 | 13.03 | 11.27 |
| 3DMark11 GT4 | 6.69 | 7.53 | 12.56 |
| Aliens vs. Predator | 55.3 | 62.2 | 12.48 |
| Lost Planet 2 Test A | 61.6 | 70.6 | 14.61 |
| Lost Planet 2 Test B | 52.1 | 58.3 | 11.90 |
| Metro 2033 | 31.67 | 35 | 10.51 |
| Arkham City | 48 | 49 | 2.08 |
| Crysis 2 | 62.2 | 69.0 | 10.93 |
| Average Improvement | | | 11.4% |
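A minimal sketch of how those percentage figures fall out of the stock and overclocked frame rates; the numbers are simply copied from the table above:

```python
# (stock FPS, overclocked FPS) pairs taken from the table above.
results = {
    "Heaven 2.5":           (45.6, 50.6),
    "3DMark11 GT1":         (11.27, 13.0),
    "3DMark11 GT2":         (12.68, 14.29),
    "3DMark11 GT3":         (11.71, 13.03),
    "3DMark11 GT4":         (6.69, 7.53),
    "Aliens vs. Predator":  (55.3, 62.2),
    "Lost Planet 2 Test A": (61.6, 70.6),
    "Lost Planet 2 Test B": (52.1, 58.3),
    "Metro 2033":           (31.67, 35.0),
    "Arkham City":          (48.0, 49.0),
    "Crysis 2":             (62.2, 69.0),
}

improvements = []
for name, (stock, oc) in results.items():
    pct = (oc / stock - 1) * 100
    improvements.append(pct)
    print(f"{name}: {pct:.2f}% faster when overclocked")

print(f"Average improvement: {sum(improvements) / len(improvements):.1f}%")  # ~11.4%
```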
As you can see from the charts, when overclocked, the R7950 comes very close to the performance of the R7970.
XFX Radeon 7950 DD Final Thoughts
I've tested and owned a number of video cards in the last few years; my main system currently runs a pair of NVIDIA GTX 580 cards. The performance they provide is amazing, but so are the power draw and the noise under load (although they're certainly quieter than their GTX 480 forebears). Even when my system's idling, the GTX 580s are pulling enough power that their heat sinks are quite hot to the touch. With the release of AMD's Tahiti-architecture cards, I can now have better performance with much less power and heat.
AMD's Tahiti GPUs represent a number of firsts for the industry:
- The first 28nm GPU
- The first DirectX 11.1 GPU
- The first PCI-E 3.0 GPU
...so AMD has a lot to be proud of. There's even more, like support for 4K ultra-high-resolution displays via the DisplayPort connectors, but it will probably be a while before you can buy one of those.
AMD's 7950 GPU is simply a 7970 with four of its compute units (representing 256 shaders) disabled. Before the 7970 and 7950 were introduced, the buzz was that the 7970 would be the NVIDIA GTX580 competitor, but as Benchmark Reviews has shown, it's really the 7950 that fills that niche: at 1920x1200, the XFX R7950 Black Edition Double Dissipation card easily beat a reference-design NVIDIA GTX580 in every test except Arkham City, where it was handicapped by its lack of PhysX support; the 7950 would have easily won that test had PhysX been disabled. PhysX is a significant competitive advantage for NVIDIA now, but it's nice to see that this card, combined with a powerful CPU, can still generate playable frame rates with PhysX effects enabled.
The Black Edition Double Dissipation is XFX's most expensive version of the 7950; as with the 7970, it's offered in four different versions: Core Edition, Black Edition, Double Dissipation, and Black Edition Double Dissipation. The highest-end card carries a $50 premium over the base Core Edition, and it's exactly $100 less than the 7970 version of the same card. For this 20% savings you give up a little over 9% in frame rate, measured across these benchmarks. That seems a reasonable tradeoff, especially since you can easily overclock the R7950 to virtual performance parity with the stock-clocked R7970. And even when overclocked, the GPU temperatures remained very low (at the cost of some noise under load, although according to XFX the retail cards will be quieter), so you don't need to fear for your card's longevity.
At $500, the XFX R7950 is a very expensive video card. But you might feel better about it if you look at the benchmark charts again and compare its performance to the $700 Radeon 6990 and the $750 NVIDIA GTX590. Tahiti performance is quite close to these monsters, and in one or two cases it beats them...all for much less money and much less power. Its $-per-FPS is lower than the GTX 580 (often substantially lower) on every single test except the PhysX-enabled Arkham City.
XFX R7950 Black Edition Conclusion
IMPORTANT: Although the rating and final score mentioned in this conclusion are made to be as objective as possible, please be advised that every author perceives these factors differently at various points in time. While we each do our best to ensure that all aspects of the product are considered, there are oftentimes unforeseen market conditions and manufacturer changes which occur after publication that could render our rating obsolete. Please do not base any purchase solely on our conclusion, as it represents our product rating specifically for the product tested, which may differ from future versions. Benchmark Reviews begins our conclusion with a short summary for each of the areas that we rate.
The performance of the card was excellent. Out of 16 separate tests, it beat the reigning single-GPU champ, the NVIDIA GTX 580, in 13 of them, losing out only in PhysX-heavy Batman: Arkham City and the 1680x1050 Unigine Heaven 2.5 benchmark. This card is easily powerful enough to handle a triple-monitor gaming setup all by itself, especially given its 3GB of onboard memory.
The appearance of the card was striking and original. XFX even made the effort of using a red, lettered top plate so everyone will know what card you have should you display it in a windowed case. I do think the card would look a little "cleaner" with a back plate to cover the exposed circuitry on the rear of the card.
The construction of the card seems very solid. I was disappointed to find a huge glob of thermal paste over the GPU, but it doesn't seem to have caused any problem with the actual cooling, since the load temperatures were very low. Amazingly low, actually.
AMD's Tahiti GPUs bring new functionality to the graphics card world: extremely efficient power usage, 4K monitor support, PCI-E 3.0 support, and DirectX 11.1. Admittedly some of these features have no immediate use (it's not as if even dual GPU cards saturate a PCI-E 2.0 x16 slot), but it's nice to know they're there. Topping things off is a 3 gigabyte slug of video RAM, which means the card will easily support multi-monitor systems at high resolutions. The only thing lacking is PhysX support. NVIDIA continues to insist PhysX is an open standard, although nobody else has implemented it yet, and there must be a reason why. Nonetheless, PhysX-enabled games such as Batman: Arkham City bring new levels of realism to the consumer space, and not being able to support this feature well is definitely a drawback.
Although a $500 graphics card is still expensive by any measure, the XFX R7950 Black Edition Double Dissipation represents a better value than its R7970 big brother and a much better value than any other existing high-end video card -- just check the "$ per FPS" charts on each benchmark! Average Newegg prices for the NVIDIA GTX 580 are still in the $479-and-up range, and that's with the reference-standard 1.5GB of memory; the 3GB cards go for $550 or more. With AMD's new driver, the performance delta between the GTX 580 and the Tahiti cards grows even wider, and since the Fermi drivers are very mature now, I doubt we'll see any significant improvements on that score; we'll have to wait for the forthcoming Kepler cards to see NVIDIA's answer. In the meantime, AMD's Tahiti-based cards set new standards for performance and efficiency.
The first run of AMD's 7970-based video cards sold out virtually instantly and they remain unavailable as of the time of this writing. Considering that the 7950 represents only slightly less performance for a lot less money, you should grab one of these quickly if you want one.
Pros:
+ Incredible performance
+ Incredible power efficiency
+ Incredibly low noise except at maximum load
+ Striking appearance
+ 3GB video RAM
+ 4K display support
+ Switchable BIOS
+ Very low GPU temperatures under load
Cons:
- No PhysX support
- Noisier under load than the R7970 (XFX says that retail cards will be quieter)
Ratings:
- Performance: 9.9
- Appearance: 9.5
- Construction: 9.5
- Functionality: 8.5
- Value: 9.0
Final Score: 9.28 out of 10.
Excellence Achievement: Benchmark Reviews Golden Tachometer Award.
Benchmark Reviews invites you to leave constructive feedback below, or ask questions in our Discussion Forum.
Comments
Even games that are console ports use higher definition models and textures for the PC versions, and many games still don't exist in console versions...
Just because they add a few hd texture doesnt make the gameplay anybetter. Is just aesthetic and to be honest, even without those, they still look great on a 1080p display using the consoles.
The point of the matter is that this card is way too expensive to justify it. I agree with hcf regarding the Crysis 1 times where it even became a model of " but can it run crysis" because it was a product for PC. Like i said, There is no point in getting these expensive cards or even running crossfire or sli because of the ports that we now enjoy.
Well, sure. It depends on the individual, of course, but many serious gamers obsess over pixel-level detail and can easily see when GPU-intensive features like SSAO are turned on. More obviously, your card can't perform tesselation, so many modern games will look "flat".
But if you're happy with it, there's certainly no reason to go out and drop $500 on a 7950. But much cheaper cards like the GTX 560 series would still improve your gameplay experience.
When i saw that first video of BF3, remember, that game looked amazing. The pre release video. I said, holy # i am going to drop 3K on a new system just for that.
Well, the beta comes out and i can play it just as good...ive seen the game on ultra high settings and it does not look like that video AT ALL.
I have one of those top end cards and I crank everything to max.
BTW - it makes a big difference in game play if your system is smoking fast. No, you don't need to spend $3,000.00 - but an i3/2100 a decent sandy bridge board, some cheap ddr3, and a good video card will do it.
You already have a system - 600-800 bucks would get you a smoking modern gamer player - and you'd definitely change your tune then.
I'll watch the video and take it back if need be if you provide the link.
A PC has different settings such as Ultra and AA modifications.
This gen consoles run on about a low to medium setting. My PC runs everything at ultra
Do you know anything about this?
There are lots of games out there and deciding which ones to use for benchmarks is up to the judgement of the reviewer. When you write your own reviews, feel free to include whatever games you wish.
Ultra high fps is not needed to kill people. You really care about grass? you care about more shadows that actually hinders you when in jungle because of how camouflage works in the game ?
The point is that it does not justify the price when the PS3 version and the Xbox version run the same and it surely looks the same on a 1080p display. Yeah its less people, but thats only due to the networking limitations on consoles. The next Gen consoles are going to make our PCs look like #. They will start to make games for Tablets then. port it over to the pc and call it a day
I think you are mixing visual fidelity with "game quality"... they are comparable as apples & oranges...
visual fidelity just adds immersion, doesn't change the game mechanics.
--
You should read that Hexus article on WHY consoles can have decent graphics on crappy chips and on a PC it takes a lot more work - it has to do with the way calls are made with DX9+ versions...
--
In any case you are sorely mistaken no console looks as good as PC today.
You are just clinging to your formerly $600 card and unwilling to bone up the $ to keep up on PC.
That's ok, your call, but the whining excuses are just that - excuses and incorrect ones.
There's not a card out today powerful enough for a range of what you whine are ported PC games - they fail in fps strength and eye candy - on AMD you have to turn down MSAA to 2X or zero - you won't be turning on PhysX, you won't be running high and highest tessellation.
The real truth is even the 7970 lacks enough power on the now common 1920x1050 common flat panels. It's not enough guy.
I'm sorry your attitude frankly stinks to the point of disinformation and I'm sorry it's become a common meme or theme - BECAUSE IT IS ABSOLUTELY INCORRECT.
But boy there are a lot of you - feel good you have plenty of company, plenty of sour puss whining company.
Get a paper route and in a few months you'll have plenty to have a top tier smoker of a system - it's not that much money - do it reasonable
core i3 2100 $125 $300 vid, $100 mboard $50 ram $100 ssd
--
That's $675 to get to nearly the top of all performance rungs - and certainly enough to bury your $600+ release price 8800GTX
---
If you don't like the cost, fine, but don't feed me those sorry lies in the whines please - it's time this repeated pop culture theme whine be put to rest it is a LIE.
Prices for DDR3 Memory and SSD's has remained competitive, but how long will it remain so?
Hard disk prices aside, you can build a system that would crush the very best possible a few years ago for a lot less money than those older machines cost. As long as you don't need the very top end parts, I think prices are at historic lows.
---
There ! Howsa bout it amd fanboys!!?? DON'T YOU WISH AMD HAD A SET TOO ?
They don't - they whine and bullet physics is freaking nowhere ...
---
Now amd had to redo their MLAA for a fps cheat on the 79xx-- LOL - AND IT SUCKS WORSE THAN FXAA ANYWAY - even though it doesn't blur huds anymore like it did foreva cause amd sucks sucks suck suck suck.
-
Dear God, thank you for one company that isn't filled and followed by whining sour pussed crybabies, for one company that doesn't whine and moan in every PR stunt to stir up hate for the superior competition, BUT INSTEAD GOES THE DRAWING BOARD AND CRANKS OUT FXAA !
THANK YOU GOD THAT ONE COMPANY IS NOT A CRYBABY WHINING SACK OF CRAP!
---
GOD!!!!! will you people ever stop ? ( I suspect not )
and i love to see AMD putting some tough competition this time, It is always good for the end users ... Both new releases by AMD are really interesting and are very much capable ... well done AMD !!!
--
You cannot take super supreme extreme massive rig and compare it to plain vanilla - then whine about pricing.
I want to know what the top end decked out x360 costs with all the addons... OH FREAK IT'S TWO GRAND !
---
lol-
KISS IT MAN...no do not
NVIDIA says this is because they can't guarantee the operation of PhysX unless the primary card is an NVIDIA card, but that's just bull#.
AMD won't pay to use something that cost 100 million to develop.
AMD fgans far and wide scream I hate PhysX and it sucks for the 2 games it means nothing to that it is used on.
So really, who gives a damn that AMD and their sick child fans are so raging mad - they won't pay up for tech, they slam the competition that won't hand it to them for free, then after ripping that same comp in public for endless months of shrieking terror they are surprised when they get their crap card hack disabled.
IT'S THEIR OWN FAULT. THEY DESERVE IT, AND DESERVED IT.
NOW PAY UP YOU CRYBABIES, OR GET US BULLET PHYSICS AND IT BETTER BE NON PROPRIETARY, IT BETTER RUN FOR FREE ON NVIDIA CARDS AND YOU BETTER BE SPENDING LOTS OF YOUR AMD $$$ TO GET IT IN TWO CRAPPY GAMES...
---
Ok, thanks I feel better. I am sorry Nvidia didn't punch amd in the kisser and say "if our physx sucks so bad, and you don't want it, then you've got your wish ! "
---
Way to go - if all the little crybabies had AN OUNCE of honesty in their bones they would have asked their king god amd TO PAY A FEW BUCKS FOR PHYSX.... OR PAY NVIDIA HALF THE DEVELOPEMENT COSTS ALREADY ACCRUED.
I GUES AMD IS A LEECH WELFARE CRYBABY COMPANY ! SO ARE ALL IT'S FANS!( well most of em cause even ones who didn't agree never shut up the little terrorists )
----
PAY FOR WHAT YOU WANT FOR FREE DAVE RAMSEY ! PAY UP !
That's just stupid: it's like having your anti-lock brakes disabled if you buy non-Ford tires or something.
The industry doesn't exist in a bubble in your computer - there's a lot more at stake than your personal preference for extra graphics in your machine.
The competitors decided they would go on a PR bash campaign, and fight the future - and blame it all on the competition, and demand to pay nothing, while incurring costs for Nvidia for porting, and then went about destroying their public image.
A million IDIOTS bought it.
---
Since what you buy is connected not just to your rig but to a worldwide competition, make your choice... you saw what amd fanboys did - you saw what Richard Huddy of AMD did...
You support THAT CRAP ? - keep your amd card in and suck it up.
Nvidia is PUNISHING you. Now pull that rooster card out or live with it or HACK IT.
FACE REALITY that's what's going on and playing a 3rd grader "it's stupid" is PLAYING LIKE YOU ARE A BLIND FOOL.
Are you that stupid Dave ?
I certainly don't think so - so stop pretending you are.
But you haven't explained why NVIDIA should disable PhysX on their cards if there's an AMD card in the system. I bought the NVIDIA card. Why aren't I entitled to its full capabilities?
Nvidia paid AGEIA. Nvidia spent a hundred million on dev.
PAY UP - that means BUY WHAT YOU NEED TO USE IT - YOU NEED TO BUY NVIDIA TO USE IT.
---
It's SAD you aren't aware of AMD's PR campaign.
Check Hexus, dude.
I'm sorry, really, that you don't have a clue, and the brainwash crew owns you. Really, that is so astounding.
" But you haven't explained why NVIDIA should disable PhysX on their cards if there's an AMD card in the system. "
---
If you don't get it yet, you never will. You called Nvidia a liar when they told you it is a software issue- it is their lawsuit exposure - okay so Nvidia lied their tokus off to you...let's give you that - they have no costs now or ever when the crappy amd radeon drivers blow away a systems capability to stably run PhysX - let's give you that one, that their vbendor partners would have no tech support over AMD cards ***** it all up - let's give you that one...
---
Guess what's left Dave ?
AMD went on a PAID PR to demand nvidia open source PhysX, then cried foul and claimed Nvidia is evil and amd would never do such a thing developing proprietary tech... I guess you lived under a cave at the time and cannot even gauge your own commentary and others in this thread - who've with less said so. Missed that too right ? LOL wow.
---
So in any case - since AMD wasn't going to play fair, wasn't going to cooperate, went on a smear campaign, and riled up it little minions totally on purpose and all we sentient beings watched it happen - NVidia did what it had to do...
I _did_ buy NVIDIA. I'm hardly an AMD fanboy: my primary gaming system runs two GTX 580s in SLI.
And they're cards I bought myself, not review samples. I don't understand why plugging in an ATI card for should disable PhysX. And you haven't given me any good reasons why it should other than "That's the way NVIDIA wants it."
(We know they're probably lying about the undefined technical issues since there are hacks out there that re-enable PhysX with an AMD card in the system, and they work just fine.)
NVIDIA claims PhysX is an open API that anyone can write to, and that they'll even help (well, they used to claim this. Haven't seen it in some years). I can't find anything about AMD ever demanding that NVIDIA make PhysX open source...can you provide a link?
NGOHQ David - the guy there wrote the PORT for PhysX to AMD cards - and as he tells, NVIDIA supported his work and sent several engineers to assist him - and he was successful. He also claims - says - that AMD refused to work with him, and PURPOSEFULLY BROKE the PhysX he ported to their cards in their next driver release.
It's all public information man.
What you need to understand is how the relationship between industry rivals works in the real world. It involves LAWS, and lawsuits ion many cases, and especially between the parties in question - and we can add in Intel and Microsoft...
What's going on, is what I tried in vain to explain to you, that you do not want to acknowledge. Let me try harder....
PhysX, with an AMD card as main, would by default be modifying screen output in the games - and would be related to the stability of the competing card in question,as well as it's own.
Someone has to write those drivers - and someone has to constantly update those drivers, and test those drivers.
NVidia does not have - by law, all the hooks or code secrets for AMD drivers - and as far as I understand it, is not on legal ground without AMD permission and cooperation.
I mentioned the lawsuit issue, and assumed you had the sense and back round knowledge to understand...
Why you will say, was it then working for some time ?
Well, that's also how industry works - things are done, and until the legal departments issue cease and desist notices, things can happen for a time period that do not go on forever...
As far as AMD's PR Richard Huddy and Nvidia's PR - they both publicly claimed certain things - (this is separate from the NGOHQ work Nvidia supported and AMD put a stop to) both claimed NVIDIA has repeatedly offered PhysX for a price to AMD. Huddy from AMD said yes it has happened, and claimed, that Nvidia was not really serious. Nvidia claims AMD does not want to pay - which coincides with Huddy's demand for PhysX going open source...( let's face it - must not be that hard to make 1 driver work on 1 amd driver set, the guy at NGOHQ did it with help from 2 NV engineers ) ....
It works 100% for AMD by destroying PhysX, turning the public against Nvidia, and placing all the blame on them - the PR lines are far cheaper than paying a fair amount for PhysX and paying for the coding for the monthly driver releases, and paying Nvidia in some joint agreement on PhysX driver releases.
---
So much more detail can be gone into concerning the legalities, but the bottom line is Nvidia can't just do it - and their attempt for FORCE AMD into some deal with NGOHQ assistance was crushed by AMD...
---
AMD much prefers the public to be disgruntled, to hate Nvidia, to scream PhysX sucks, to not pay a dime to work with game developers to implement it, and to take the HUGE PR VICTORY people like you give them.
--
Nvidia's statement on the matter, if you read it, points out the legal problems (in corporate speak one could complain), but anyone with common knowledge on the matter knows what's going on.
---
Same type of thing happened with Batman AA - Nvidia did the work, spent the money, and AMD wanted a PR campaign victory - demanded it all for free and said "cooperation is key!" and the fools of the world bought it.
---
AMD is near bankrupt - maybe if their PR was nice, and their rabid fan base wasn't twisted into NV hatred, maybe NV would do their video card division a solid for free... maybe they do all the time and we don't hear about that...
But that's the bottom line - Nvidia can no longer just let AMD's card work with their PhysX - AMD could sue them, vendors could sue them, end users could sue them - they don't have permission - and are NOT going to spend the money upkeeping drivers for AMD and are likely to be absolutely legally banned from that too.
When AGEIA cards were still in use drivers were there that worked with AMD... that's the nature of an independent company.
What AMD fans SHOULD BE grateful for and are not- is the fact that NVIDIA NEVER hard coded PhysX into their proceeding chip lineups and really made it completely proprietary at some silicon level... we know it runs on amd because of NGOHQ.
Oh yes the guys name there is ERAN, I just recalled.
Then use your Nvidia cards. You're making a problem you claim you don't have.
You also completely spun what I said, because of your rose colored glasses. YOU claim it's because Nvidia wants it that way, not me. YOU claim it's because they're liars, not me.
I claim it's because AMD WON'T PAY - and in fact, that's the way THEY want it.
Anyone with a lick of business sense, any experience knows it's true.
NVidia would love nothing more than for the entire video card and mobile handset world to adopt PhysX - they would profit handsomely.
You claimed something about open source this or that- but that's another one of those pop culture spews that has very little sense in reality - it means to the duped idiot "free" - but PhysX IS NOT FREE.
It's like linux or flavors of the same, let's say - MANY builds sell for many millions of dollars all the time - but the "word in stupidville" is "it's open source".
Not only would they not pay, not only did they spread "physx sucks" - they demanded an open port source for free.... and all their little minions loved it - all the little amd fans repeat it...
---
ITS IN THIS THREAD.
---
So man up - take out your AMD card, sell it, and buy Nvidia for primary and secondary PERIOD. Or, go buy amd - go ahead - Nvidia invites you to. Please do. In fact if you cannot understand, you ought to buy only AMD. And you don't get it.
--
PS - amd has no competing physics - bullet doesn't compete (as our little retard saying goes) .
--
That's what happens in the big bad world Dave - companies compete - they make decisions based upon competitions misdeeds, do they not ?
YES.
Time to grow up and face reality.
AMD and their cards do not get a pass from Nvidia - you're supporting AMD, you're supporting the problem - so NVIDIA makes you choose - MAN UP - you're not getting PhysX "with amd" for free. YOU WILL PAY NVIDIA. NO YOU HAVEN'T ALREADY. NO GUARANTEE OF COMPATIBILITY WITH RADEON - SORRY DAVE ! GROW UP.
PS2 - NO AGREEMENT IS MADE BY EITHER OF THE BIG TWO THAT FUTURE DRIVERS WILL CARRY ALL FEATURES FORWARD FOREVER.
---
You're part of the "influential" reviewers, and I see exactly "what your influence is".
I guarantee when NVidia reps look at it like I have here - the answr is "PULL YOUR AND CARD IF YOU WANT PHYSX - IF YOU HAVE A PROBLEM GO WHINE TO AMD WHO WON'T PAY UP".
None of us live in the bubble of stupidity and singularity your excuse or cry foul against "stupid Nvidia who disabled the brakes" demands.