ZOTAC GeForce GTX 280 AMP! Edition Video Card
Reviews - Featured Reviews: Video Cards
Written by Olin Coles   
Tuesday, 01 July 2008

ZOTAC GTX 280 AMP! Edition

I want to begin this article by expressing that, like many of you reading it, I spent money on 9800-series GeForce products thinking that my choice would be free of buyer's remorse for at least a year. After all, it wasn't all that long ago that the GeForce 8800 series landed itself atop the competition and reigned supreme for well over a year. So it seemed logical that when NVIDIA launched their 9800 series, things would somehow remain the same. This is where I break some painful news to owners of premium top-end GeForce products: there's a new king named GTX 280, and he's not just a bigger, better version of something we've already seen. The new GeForce GTX 280 presents a completely new core design, and introduces NVIDIA's second-generation DirectX 10 architecture as it opens up a new dimension of heterogeneous computing.

ZOTAC_AMP!_Logo_250px.jpg

Hot on the heels of the rapid-succession GeForce 9800 GX2 and GeForce 9800 GTX launches only two short months ago, NVIDIA has recently launched the GeForce GTX 280 and GTX 260 video cards. Both the GTX 280 and GTX 260 position themselves at the most elite segment of the GeForce product line, so just imagine how much more powerful the GeForce GTX 280 could become after ZOTAC gives it their special AMP! Edition treatment. The recent Radeon HD 4870 launch may have shown how close ATI/AMD can get to NVIDIA's bar of performance, but the ZOTAC GeForce GTX 280 AMP! Edition graphics card has just raised that bar much higher. Benchmark Reviews tests the ZOTAC ZT-X28E3LA-FCP against the GeForce 9800 GX2 and 9800 GTX, as well as the new Radeon HD 4850 in CrossFireX configuration.

ZOTAC_GeForce_GTX_280_AMP!_Edition_Kit.jpg

Now would also be a good time to explain why the new GTX 280 and GTX 260 product launch had to occur just nine weeks after the last GeForce 9-series launch. We offer a full explanation in the following section, but the short explanation is that the GT200 GPU isn't just another GPU with a few extra cores and speed increases; this is a whole new creature that does more than just render graphics.

Sure, you can realistically expect phenomenal frame rate results out of this video card, but you can also expect that real-world applications such as Adobe's upcoming CS4 software suite will actually perform every manner of task faster with this new GPU than any multi-core CPU ever could (which I witnessed first-hand at the NVIDIA Editors Day 2008 event). Finally, graphical demands of every imaginable level are handled by a GPU that out-paces the ability of a CPU, making it a lot more than just another video card. It's going to be tough to contain my enthusiasm, since I've been testing this card for almost two weeks now; but I assure you that the performance is every bit as real as I say it is.

| Currently Supported GeForce Products | GeForce GT200 Family | GeForce 9 Series Family | GeForce 8 Series Family |
|---|---|---|---|
| GeForce GTX 280 | GeForce GTX 280 | GeForce 9800 GX2 | GeForce 8800 Ultra |
| GeForce GTX 260 | GeForce GTX 260 | GeForce 9800 GTX | GeForce 8800 GTX |
| GeForce 9800 GTX | | GeForce 9600 GT | GeForce 8800 GTS (640 MB, 512 MB and 320 MB) |
| GeForce 8800 GTS 512 MB | | GeForce 9600 GSO | GeForce 8800 GT |
| GeForce 8800 GT | | GeForce 9400 GT | GeForce 8800 GT for Mac |
| GeForce 9600 GT | | GeForce 9300 GS | GeForce 8800 GS |
| GeForce 8600 GTS | | GeForce 9300 GE | GeForce 8600 GTS |
| GeForce 8600 GT | | | GeForce 8600 GT |
| GeForce 8500 GT | | | GeForce 8500 GT |
| GeForce 8400 GS | | | GeForce 8400 GS |
| GeForce 7300 GS | | | |

As sure as our name is Benchmark Reviews, this article will report every GTX 280 benchmark result we've collected; but please, lest you miss out on something very big, don't skip all of the information we offer here just to see video game performance charts and glimpse at our conclusion. This isn't just another article about the latest and greatest video card or how well it handles the latest game titles; this article is also meant to explain why the GTX 200 graphics processor is going to change the way we all use computer hardware now and into the future.

Even before the GeForce GT200 GPU, NVIDIA had been consistently overwhelming the graphics card industry. These days it seems like the only products that manage to outperform their video cards are other GeForce graphics cards. Industry competitors have been very unsuccessful at beating NVIDIA, and very recently their biggest rival waved a white flag in surrender and relegated themselves to feeding off a low-end market segment just to maintain an identity. Sometimes, though, I think that you can become so good at what you do that you begin to compete with yourself. Not surprisingly, NVIDIA has already anticipated this problem and planned for a solution, which is why this article will introduce a lot more than just video game frame rates for the new compute-ready GT200 graphics processor.

GeForce_GTX-200_GPU_Pair.jpg

In the next section, Benchmark Reviews takes the time to give a full explanation as to why the GTX 200 graphics processor arrived so quickly after the last product launch, and begins to point out why it deserves some special attention.

About the Company: ZOTAC International (MCO) Limited

ZOTAC International (MCO) Limited was established in 2006 with a mission to deliver superb-quality NVIDIA graphics solutions to the industry. It has strong backing from its parent group, PC Partner Ltd., with headquarters in Hong Kong, a factory in mainland China, and regional sales offices in Europe, Asia Pacific, and North America. The support network ZOTAC provides is currently the largest of its kind in the world.

With 40 SMT lines, 6,000 workers, and 100,000 square meters of floor space, ZOTAC features a full array of state-of-the-art facilities and machinery. In addition, ZOTAC has over 130 R&D professionals in Hong Kong and China, plus warranty and service centers in strategic countries, enabling effective and efficient worldwide as well as localized sales and marketing support.

ZOTAC with NVIDIA not only means superb quality; it also means high performance, absolute reliability, and great value. In the past year, ZOTAC products were compared and tested by several influential members of the media, proving themselves to be well-built, worth-buying graphics cards. With overclocked performance, excellent cooling properties, and unique packaging, ZOTAC products definitely exceed users' expectations.

ZOTAC_Logo_600px.jpg

ZOTAC's commitment to its users is to bring the latest products to market quickly and with the best value. Needless to say, ZOTAC is the right choice for those who require high-quality graphics solutions. For additional information please visit the ZOTAC website.

GT200 GPU: Why Now?

As my review of the GeForce 9800 GTX was being published for the April 1st launch, there were already rumors circulating about a mystery "GeForce 9900" video card. At first, I found myself a little irritated at the prospect of working on one major GeForce product launch while another was right around the corner. For most of early May there was a strong buzz around the coming product line, but it wasn't until I attended NVIDIA Editors Day 2008 that it was all laid out in front of me. Once I witnessed first-hand how the new GT200 GPU transcoded video at speeds I never imagined (and I transcode DVD publications often), it began to make sense. Further reinforcing my interest in NVIDIA's latest technology was information about CUDA that would enable me to actually leverage GeForce products in commercial environments for the purpose of increasing productivity. Not only was the GT200 changing the way we will perceive a video card, but it was evident that the term "display adapter" may no longer apply.

GeForce_GTX-200_Block_Diagram.jpg

Before I share any more information on the new architecture and the advanced technology it utilizes, I will answer the fundamental question: why now? To understand the answer, you must first accept how the industry works, and that when there's a development breakthrough it may not always be scheduled on a calendar. Most people don't realize that it takes between one and two years (according to NVIDIA sources) to produce a stable graphics processor architecture. In fact, you might consider the development timeline a lot like a chess game because of the constant trial-and-error turn-taking. So when NVIDIA finalizes a newly engineered design and makes it retail-ready, company personnel go from a year-long yellow light to a full-blown green. And when a one-to-two-year development successfully completes with amazing results, you can understand the urgency of getting that bleeding-edge technology to market.

GT200 GPU: So What's New?

GeForce GT200 GPUs (presently the backbone of both the GTX 260 and GTX 280 products) are massively multithreaded, many-core, visual computing processors that incorporate both a second-generation unified graphics architecture and an enhanced high-performance, parallel-computing architecture. Two over-arching themes drove GeForce GT200 architectural design and are represented by two key phrases: "Beyond Gaming" and "Gaming Beyond." You may have caught this emphasis when I gave my report on NVIDIA's Editors Day 2008.

"Beyond Gaming" means the GPU has finally evolved beyond being used primarily for 3D games and driving standard PC display capabilities. This is what I was referring to when I said that calling the GTX 280 a display adapter was now inappropriate. You're going to see this be commonplace more and more often, because GPUs are accelerating non-gaming, computationally-intensive applications for both professionals and consumers. "Gaming Beyond" means that the GeForce GT200 GPUs will also enable amazing new gaming effects and dynamic realism, delivering much higher levels of scene and character detail, more natural character motion, and very accurate and convincing physics effects. The GeForce GT200 GPUs are designed to be fully compliant with Microsoft DirectX 10 and Open GL 2.1.

NVIDIAs_3_Kings.jpg

NVIDIA's second generation unified visual computing architecture as embodied in the new GeForce GTX 200 GPUs is a significant evolution over the original unified architecture of GeForce 8 and 9 series GPUs. Numerous extensions and functional enhancements to the architecture permit a performance increase averaging 1.5× the prior architecture. Improvements in sheer processing power combined with improved architectural efficiency allow amazing speedups in gaming, visual computing, and high-end computation.

GT200 GPU: Bigger and Better

NVIDIA engineers specified the following design goals for the GeForce GT200 GPUs:

  • Design a processor with up to twice the performance of GeForce 8800 GTX
  • Rebalance the architecture for future games that use more complex shaders and more memory
  • Improve architectural efficiency per watt and per square millimeter
  • Improve performance for DirectX 10 features such as geometry shading and stream out
  • Provide significantly enhanced computation ability for high-performance CUDA applications and GPU physics
  • Deliver improved power management capability, including a substantial reduction in idle power

| Features | 8800 GTX | GTX 280 | % Increase |
|---|---|---|---|
| Cores | 128 | 240 | 87.5% |
| TEX | 64 t/clk | 80 t/clk | 25% |
| ROP Blend | 12 p/clk | 32 p/clk | 167% |
| Precision | fp32 | fp64 | -- |
| GFLOPs | 518 | 933 | 80% |
| FB Bandwidth | 86 GB/s | 142 GB/s | 65% |
| Texture Fill | 37 GT/s | 48 GT/s | 29.7% |
| ROP Blend | 7 GBL/s | 19 GBL/s | 171% |
| PCI Express | 6.4 GB/s | 12.8 GB/s | 100% |
| Video | VP1 | VP2 | -- |

The new second-generation SPA (scalable processor array) architecture in the GeForce GTX 280 improves performance compared to the prior-generation G80 and G92 designs on two levels. First, it increases the number of SMs (streaming multiprocessors) per TPC (texture processing cluster) from two to three. Second, it increases the maximum number of TPCs per chip from 8 to 10. The effect is multiplicative: with eight processor cores per SM, 10 TPCs of 3 SMs each yield 240 processor cores.
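As a quick sanity check of that arithmetic, the short host-side snippet below multiplies out the unit counts for both generations. It is plain C++ written for this article (not vendor code), and the only figure it assumes beyond the text above is the eight processor cores per SM common to both designs.

```cpp
// cores = TPCs x (SMs per TPC) x (cores per SM)
#include <cstdio>

int main() {
    const int coresPerSm = 8;              // same for G80/G92 and GT200

    int g80   = 8  * 2 * coresPerSm;       // 8 TPCs x 2 SMs  = 128 cores
    int gt200 = 10 * 3 * coresPerSm;       // 10 TPCs x 3 SMs = 240 cores

    printf("G80:   %d cores\n", g80);
    printf("GT200: %d cores (%.2fx G80)\n", gt200, (double)gt200 / g80);
    return 0;
}
```

The 1.88x ratio it prints is the same "1.88× more processing cores" figure quoted in the list below.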

Compared to earlier GPUs such as GeForce 8800 GTX, the GeForce GTX 280 provides:

  • 1.88× more processing cores
  • 2.5× more threads per chip
  • Doubled register file size
  • Double-precision floating-point support
  • Much faster geometry shading
  • 1 GB frame buffer with 512-bit memory interface
  • More efficient instruction scheduling and instruction issue
  • Higher clocked and more efficient frame buffer memory access
  • Improvements in on-chip communications between various units
  • Improved Z-cull and compression supporting higher performance at high resolutions
  • 10-bit color support

What makes the GeForce GT200 a great parallel processor?

NVIDIA_GTX-280_Splash.jpg

There are four key ingredients:

  • CUDA: The greatest obstacle to parallel computing has always been the software. The GeForce GTX 280 supports CUDA, the industry's first parallel computing language to achieve deep penetration (a 70-million-unit installed base) on the PC. CUDA is simple, powerful, and offers exceptional scaling in visual computing applications (a minimal kernel sketch appears below).
  • GPU Computing Architecture: The GeForce GTX 280 is designed specifically for parallel computing, incorporating unique features like shared memory, atomic operations and double precision support.
  • Many-core architecture: With 240 cores running at 1.3GHz, the GeForce GTX 280 is the most powerful floating point processor ever created for the PC.
  • Torrential bandwidth: Due to their high data content, visual computing applications become bandwidth-starved on the CPU. With eight on-die memory controllers, the GeForce GTX 280 can access 141 GB of data per second, greatly accelerating HD video transcoding, physics, and image processing applications.

Please see our NVIDIA GPU Computing FAQ for additional information on this topic, including the GPU F@H client, and BadaBoom transcoding software.
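For readers who have never seen what CUDA looks like in practice, here is a minimal vector-addition sketch of the programming model: one scalar function (the kernel) is launched across thousands of lightweight threads, each handling a single array element. This example was written for this article, not supplied by NVIDIA or ZOTAC, and it sticks to the classic cudaMalloc/cudaMemcpy API that GT200-class (compute capability 1.3) hardware supports.

```cpp
#include <cstdio>
#include <cuda_runtime.h>

// Kernel: each thread computes one element of c = a + b.
__global__ void vectorAdd(const float* a, const float* b, float* c, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;  // unique global index
    if (i < n) c[i] = a[i] + b[i];
}

int main() {
    const int n = 1 << 20;                  // one million elements
    const size_t bytes = n * sizeof(float);

    // Host buffers
    float* ha = new float[n];
    float* hb = new float[n];
    float* hc = new float[n];
    for (int i = 0; i < n; ++i) { ha[i] = 1.0f; hb[i] = 2.0f; }

    // Device buffers on the video card
    float *da, *db, *dc;
    cudaMalloc(&da, bytes);
    cudaMalloc(&db, bytes);
    cudaMalloc(&dc, bytes);
    cudaMemcpy(da, ha, bytes, cudaMemcpyHostToDevice);
    cudaMemcpy(db, hb, bytes, cudaMemcpyHostToDevice);

    // Launch enough 256-thread blocks to cover all n elements
    int threads = 256;
    int blocks  = (n + threads - 1) / threads;
    vectorAdd<<<blocks, threads>>>(da, db, dc, n);

    cudaMemcpy(hc, dc, bytes, cudaMemcpyDeviceToHost);  // waits for the kernel
    printf("c[0] = %.1f\n", hc[0]);                     // expect 3.0

    cudaFree(da); cudaFree(db); cudaFree(dc);
    delete[] ha; delete[] hb; delete[] hc;
    return 0;
}
```

On a GTX 280, those 256-thread blocks are distributed across the 240 processor cores automatically; the same source scales up or down with the hardware, which is the scaling property described above.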

GTX 280 AMP! Specifications

Coupled with PureVideo HD technology, the NVIDIA GTX 280 video card delivers an astounding multimedia experience. The GeForce GTX 280 features two dual-link, HDCP-enabled DVI-I outputs for connection to analog and digital PC monitors and HDTVs, plus a 7-pin analog video-out port that supports S-Video directly, with composite and component (YPrPb) output available via an included dongle.

  • ZOTAC GeForce GTX 280 AMP! Edition video card ZT-X28E3LA-FCP
  • HDMI resolutions: 480p/720p/1080i/1080p
  • PCI Express 2.0 interface
  • Dual card-slot active cooling solution
  • PureVideo HD technology with hardware decoding of high-definition video formats
  • Dual dual-link DVI - up to 2560x1600
  • DVI HDTV output: 480p/720p/1080i

Bus Support

  • PCI Express 2.0
  • PCI Express x16 Backwards Compatible

3D Acceleration

GeForce_GTX-200_GPU_Silicon_Die.jpg

  • Microsoft DirectX10 support
  • Unified Shader Model 4.0
  • OpenGL 2.1

Others

  • HDTV Ready (using dongle adapter)
  • Vista Ready
  • SLI and 3-Way SLI Ready
  • HDCP Ready
  • DVI Audio (using digital audio connection)
  • Dual Link Dual DVI
  • RoHS Compliant

Dual-Stream Decode

Recently, studios have begun taking advantage of the additional space high-definition media such as Blu-Ray and HD DVD discs provide by adding dual-stream picture-in-picture functionality to movies. Often the PiP content is coupled with advanced BD-J (Java) or HDi (XML) features, so taking the processing burden off of the CPU is even more important for titles with these advanced features. The latest PureVideo HD engine now supports dual-stream hardware acceleration which takes the workload off of the CPU and gives it to the more powerful GPU.

GT200 Graphics Processing Unit

NVIDIA_GTX-280_NF790i_SLI.jpg

  • 1.4 Billion transistors
  • 933 gigaFLOPs of processing power
  • 16 KB of local shared memory per SM
  • 700 MHz Graphics engine clock speed
  • 240 Graphics processor cores
  • 1400 MHz graphics processor clock speed
  • 400 MHz RamDAC
  • Maximum resolution of 2560x1600
  • True 128-bit floating point high dynamic-range (HDR) lighting with 16x full-screen anti-aliasing
  • Double precision 64-bit floating point computation support
  • 2nd Generation NVIDIA Unified Architecture
  • Supports future 10-bit color and 120 Hz LCD panels

GT200 Video Memory

  • 1 GB GDDR3 video memory
  • 1150 MHz memory clock (2300 MHz DDR realized; see the bandwidth calculation below)
  • 16Mx32 Memory configuration
  • 512-bit memory bus
  • Hynix H5RS5223CFR-N2C (Rated 1200 MHz 0.8ns 2.05V)
  • Memory pieces: 16
  • Memory package: uBGA
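The memory bandwidth figures quoted throughout this article follow directly from the specifications above: bus width in bytes multiplied by the effective GDDR3 data rate. The short host-side C++ snippet below runs that standard calculation for both the reference clock and ZOTAC's overclock.

```cpp
// Frame-buffer bandwidth = (bus width / 8 bytes) x (effective data rate).
#include <cstdio>

double bandwidthGBps(int busBits, double memClockMHz) {
    double effectiveMTps = memClockMHz * 2.0;        // GDDR3 moves data on both clock edges
    return (busBits / 8.0) * effectiveMTps / 1000.0; // bytes per transfer x GT/s = GB/s
}

int main() {
    printf("Reference GTX 280 (1107 MHz): %.1f GB/s\n", bandwidthGBps(512, 1107.0));
    printf("ZOTAC AMP! Edition (1150 MHz): %.1f GB/s\n", bandwidthGBps(512, 1150.0));
    return 0;
}
```

The first result, about 141.7 GB/s, is the roughly 141 GB per second figure quoted earlier in this article; the AMP! Edition's 1150 MHz memory clock lifts it to roughly 147 GB/s.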

HDCP over dual-link allows video enthusiasts to enjoy high-definition movies on extreme high-resolution panels such as the 30" Dell 3007WFP at 2560x1600 with no black borders. The GeForce GTX 280 also provides native support for HDMI output, using a certified DVI-to-HDMI adapter in conjunction with the built-in SPDIF audio connector.

Aero with HD DVD and Blu-ray Playback

Until now, users have been unable to take advantage of the Aero user interface in Windows Vista while playing HD video. When this was attempted, Vista would revert to a basic theme and Aero would be disabled.

PureVideo HD now supports HD movie playback in Aero mode. This creates a more seamless user experience by eliminating the pop-up message notifying that Vista has switched to basic mode. As you can see in the screenshot below, Aero windows are enabled in conjunction with HD movie playback.

With HDMI support, the GTX 280-based graphics solution is among the fastest graphics cards available, and when paired with a 7 Series NVIDIA nForce motherboard it creates the latest in a line of powerful NVIDIA gaming platforms. Be blown away by scorching frame rates, true-to-life extreme HD gaming, and picture-perfect Blu-ray and HD DVD movies.

GeForce GTX 280 Features

Backed by NVIDIA's Lumenex Engine, the GeForce GTX 280 delivers true 128-bit floating point high dynamic-range (HDR) lighting capabilities with up to 16x full-screen anti-aliasing. Second-generation NVIDIA PureVideo HD technology with HDCP compliance delivers the ultimate high-definition video viewing experience on the NVIDIA GeForce GTX 280 video card.

With hardware decoding for Blu-ray and HD DVD formats, PureVideo HD technology lowers CPU utilization when watching high-definition video by decoding the entire video stream on the graphics processor, freeing up the processor for other tasks. In addition to low CPU utilization, PureVideo HD enhances standard-definition video content with de-interlacing and other post-processing algorithms to ensure standard DVD movies look their best on the PC screen and on high-definition television sets. High-bandwidth Digital Content Protection (HDCP) technology ensures a secure connection between the GTX 280 graphics card and an HDCP-capable monitor for viewing protected content such as high-definition Blu-ray or HD DVD movies.

Coupled with PureVideo HD technology, the GeForce GTX 280 delivers the ultimate multimedia experience. HDMI technology allows users to connect PCs to high-definition television sets with a single cable, delivering high-definition surround sound audio and video with resolutions up to 1080p. PureVideo HD technology scales video at the highest quality up to resolutions of 2560x1600, from standard- and high-definition file formats, while preserving the details of the original content. PureVideo HD technology also accelerates high-definition video decode, freeing up CPU cycles while watching high-definition Blu-ray and HD DVD movies or other VC-1 and H.264 encoded file formats.

NVIDIA Unified Architecture

purevideo.jpg

  • Unified shader architecture
  • GigaThread™ technology
  • Full support for Microsoft DirectX 10
  • Geometry shaders
  • Geometry instancing
  • Streamed output
  • Shader Model 4.0
  • Full 128-bit floating point precision through the entire rendering pipeline

NVIDIA Lumenex Engine

purevideo_hd_logos.jpg

  • 16x full screen anti-aliasing
  • Transparent multisampling and transparent supersampling
  • 16x angle independent anisotropic filtering
  • 128-bit floating point high dynamic-range (HDR) lighting with anti-aliasing
  • 32-bit per component floating point texture filtering and blending
  • Advanced lossless compression algorithms for color, texture, and z-data
  • Support for normal map compression
  • Z-cull
  • Early-Z

NVIDIA Quantum Effects Technology

  • Advanced shader processors architecture for physics computation
  • Simulate and render physics effects on the graphics processor

NVIDIA Triple-SLI Technology

  • Patented hardware and software technology allows three GeForce-based graphics cards to run in parallel to scale performance and enhance image quality on today's top titles.

header_purevideo_hd_new.jpg

NVIDIA PureVideo HD Technology

with_purevideo.jpg

Along with world-class video acceleration, PureVideo HD has been at the forefront of advanced video post-processing. With the R174 series driver, we are introducing new features for PureVideo HD for GeForce GTX 200-based products. These new features, Dynamic Contrast Enhancement and Dynamic Blue, Green, and Skin Tone Enhancements, are extremely computationally intensive and not found on even the most high-end Blu-ray or HD DVD players. But by tapping into the enormous pool of computational power offered by our processor cores, we can now enable post-processing techniques that have yet to be realized in fixed-function video processors.

  • Dedicated on-chip video processor
  • High-definition H.264, VC-1, MPEG2 and WMV9 decode acceleration
  • Advanced spatial-temporal de-interlacing
  • HDCP capable
  • Noise Reduction
  • Edge Enhancement
  • Bad Edit Correction
  • Inverse telecine (2:2 and 3:2 pull-down correction)
  • High-quality scaling
  • Video color correction
  • Microsoft Video Mixing Renderer (VMR) support

Advanced Display Functionality

  • Two dual-link DVI outputs for digital flat panel display resolutions up to 2560x1600
  • Dual integrated 400MHz RAMDACs for analog display resolutions up to and including 2048x1536 at 85Hz
  • Integrated HDTV encoder provides analog TV-output (Component/Composite/S-Video) up to 1080i resolution
  • NVIDIA nView multi-display technology capability
  • 10-bit display processing

Dynamic Color Enhancement

By analyzing the color components of each frame, we can also isolate and improve the appearance of blue, green, and skin tones, to which the human eye is particularly sensitive. Unlike televisions, which have built-in image processors, PC monitors typically display the input picture without any processing, which can result in comparatively dull images. Dynamic blue, green, and skin tone enhancement alleviates this problem by applying correction curves to these sensitive colors. The result is improved tonal balance and clarity, without over-saturation.

Built for Microsoft Windows Vista

purevideo_ecosystem.jpg

  • Full DirectX 10 support
  • Dedicated graphics processor powers the new Windows Vista Aero 3D user interface
  • VMR-based video architecture

High Speed Interfaces

  • Designed for PCI Express x16
  • Designed for high-speed GDDR3 and DDR3 memory

Operating Systems

  • Built for Microsoft Windows Vista
  • Windows XP/Windows XP 64
  • Linux

API Support

  • Complete DirectX support, including Microsoft DirectX 10 Shader Model 4.0
  • Full OpenGL support, including OpenGL 2.0

NVIDIA Hybrid SLI Technology

Benchmark Reviews learned of Hybrid SLI during our time with NVIDIA at the 2008 International CES. I thought that seeing the Stereoscopic 3D Gaming demonstration would be the highlight of NVIDIA's offerings, but then the following morning they proved to have at least one more trick up their sleeve. At CES we were privileged to see Hybrid SLI make its formal release.

NVIDIA announced the industry's first hybrid technology for PC platforms, Hybrid SLI™, which addresses two critical issues: increasing graphics performance and reducing power consumption. NVIDIA Hybrid SLI technology will be incorporated into a wide variety of desktop and notebook graphics and motherboard products that the company is rolling out for both AMD and Intel computing platforms throughout 2008.

"From the introduction of programmable GPU's to the rapid adoption of our multi-GPU SLI technology, NVIDIA has repeatedly pioneered and innovated to solve difficult problems for the industry. We believe Hybrid SLI technology is one of the most important innovations we've come up with to date," said Jen-Hsun Huang, CEO of NVIDIA. "Hybrid SLI delivers new multi-GPU technology to a large segment of the PC market, delivering consumers a level of PC graphics performance and power efficiency never before seen."

NVIDIA Announces Hybrid SLI Multi-GPU Technology

First disclosed in June 2007, NVIDIA Hybrid SLI technology is based on the Company's market-leading GeForce graphics processor units (GPUs) and SLI multi-GPU technology. Hybrid SLI enables NVIDIA motherboard GPUs (mGPUs) to work cooperatively with discrete NVIDIA GPUs (dGPUs) when paired in the same PC platform. Hybrid SLI provides two new technologies - GeForce Boost and HybridPower - that allow the PC to deliver graphics performance for today's applications and games when 3D graphics horsepower is required, or transition to a lower-powered operating state when not.

NVIDIA HybridPower Technology

For lower energy consumption and quieter PC operation, HybridPower allows the PC to switch processing from a single GPU or multiple GPUs in SLI configuration to the onboard motherboard GPU. HybridPower is most useful in situations where graphics horsepower is not required, such as high-definition movie playback on a notebook platform, or simple e-mail and Internet browsing on a desktop. It is also beneficial for users who want a quiet operating state with reduced thermals and noise. For notebooks, HybridPower can also dramatically extend battery life, by up to 3 hours. When a game or application that requires the additional 3D horsepower is started, the PC can automatically transition back to the discrete graphics cards and power up their 3D capabilities, all transparently to the end user.

In applications where 3D performance is required, GeForce Boost turbo-charges 3D operation by combining the processing power of the traditional NVIDIA GeForce-based graphics card with that of the second GPU integrated into the motherboard core logic. In media-rich applications, both GPUs work in tandem to render the combined images with the end user benefiting from the increase in performance and frame rate. For typical games and 3D applications, GeForce Boost can kick in automatically resulting in a greatly enhanced consumer experience.

Innovative Multi-GPU Technology Raises Performance, Reduces Power Consumption for PC Graphics

When coupled with a HybridPower-enabled motherboard, the GeForce GTX 280 (and GTX 260) can be powered down completely. For everyday computing and watching HD movies, the motherboard GPU is used and the GTX 280 can be turned off, consuming no power at all. When an intensive 3D application is engaged, users can turn on the GeForce GTX 280 for maximum performance. HybridPower works by sending the output of the discrete GPU through the output connector on the motherboard. This allows the system to use both GPUs as it sees fit without physically changing the connector.

Hybrid Multi-GPU Technology Raises Performance, Reduces Power Consumption for PC Graphics

NVIDIA is the recognized market leader in GPU desktop and notebook solutions for both Intel and AMD platforms, and has a full lineup of Hybrid SLI-capable graphics and motherboard products planned for 2008. New Hybrid SLI-capable products include the upcoming NVIDIA nForce 780a SLI, nForce 750a SLI, and nForce 730a media and communication processors (MCPs) for AMD CPUs, which will be released next month, as well as the new GeForce 8200, the industry's first micro-ATX motherboard solution with an onboard Microsoft DirectX 10-compliant motherboard GPU. NVIDIA Hybrid SLI notebooks as well as desktop products designed for Intel CPUs will be available next quarter. Look for Hybrid SLI to make its way into everything NVIDIA produces from this point forward.

ZOTAC GTX 280 Closer Look

The NVIDIA GeForce GTX 280 comes in one color: black. All GTX 280 and GTX 260 products are currently identical to NVIDIA reference versions, with manufacturers simply branding their allotment of video cards with a decal. Although it is uncertain, NVIDIA may eventually offer their add-in card partners (AICs) the engineering information necessary to create working modifications, so that unique varieties of the GT200-based graphics card can be independently designed.

ZOTAC_GeForce_GTX_280_AMP!_Edition_Package.jpg

Utilizing a glossy piano-black shell to encase the GT200 GPU, the delicate electronics inside are kept safe from accidental impact damage. I still feel the sting of a past incident where an accidental snag loosened a critical electronic component from the PCB of our older GeForce 8800 GTX between benchmark tests, which resulted in flawed results and some nasty fan mail. Thanks to the new well-conceived design, those worries are all behind us now. The blower fan is angled so that the line of airflow drafts directly against the graphics processor.

ZOTAC_GeForce_GTX_280_AMP!_Edition_Corner.jpg

The NVIDIA GTX 280 graphics card is performance-optimized on each and every level. Power is taken from the PCI Express host bus as well as from one 8-pin and one 6-pin PCI Express power connector. Without any auxiliary power provided to the GeForce GTX 280 graphics card, an LED on the header bracket will shine red and the graphics card will not boot. In addition, any connector that is not adequately powered will turn red. Together, this new functionality offers immediate feedback for enthusiasts concerned about providing adequate power to the GPU. In the past, low- or no-auxiliary-power situations sounded a piezo buzzer so loud that you could often mislocate the origin of the alarm.

NVIDIA_GTX-280_Bottom.jpg

The underside (top, actually, once installed) of the new GTX 280 shown above is made from metal, and acts as a heat-dissipating device similar to a heatsink, with a few vents allowing for a small amount of air circulation. The entire row of ventilation slots is 1.75" wide by 8.25" long; however, only about 3.25" of that length is actually an open vented slot. This metal piece gets very saturated with residual heat, so perhaps all of the vents will be opened up for better cooling performance in future retail models.

At the upper-right corner of the GeForce GTX 280 shown above, you would be keen to notice the rubber cover that hides two SLI connections. NVIDIA has designed the GeForce GTX 280 to operate in a 3-way SLI configuration. For many extremely demanding applications and video games, a set of GeForce GTX 280s in 3-way SLI will be much faster than a set of Quad SLI GeForce 9800 GX2's. The big question gamers and hardware enthusiasts will need to answer for themselves is whether their configuration will support this functionality in terms of power supply, case, and cooling.

ZOTAC_GeForce_GTX_280_AMP!_Edition_Upright.jpg

The angled blower fan is going to help do a better job of cooling the GT200 GPU, and the contoured design will assist in bringing in an unrestricted supply of fresh air, but the ventilation is not exclusively exhausted outside of the case. In my testing, I found that a tremendous amount of heat radiated from the video card itself, making it very hot to the touch, and that an additional fan needed to be added inside the case to cope with the added exhaust air.

This concludes our skin-deep look at the ZOTAC GeForce GTX 280 AMP! Edition video card, which has revealed several interesting discoveries about the hardware design and the cooling process. In our next section, Benchmark Reviews gives a detailed look at the factory-overclocked ZOTAC ZT-X28E3LA-FCP product.

GTX 280 Detailed Look

At first, the ZOTAC GeForce GTX 280 AMP! Edition video card could easily be mistaken for the older 9800 GTX. The GeForce GTX 280 uses a dual-slot design with improved clearance around the fan for optimal cooling and airflow. The board is cooled with an exceptionally quiet on-board "smart" blower fan; even when playing the most graphics-intensive 3D games, the GTX 280 remained at a hushed level that caused no real interference with my concentration.

Since I'm sure you just read the long list of new features and specifications for the GTX 280, you already know that you can use the GT200 graphics processor for a lot more than playing video games. Aside from the entire parallel computing architecture, the HDMI functionality paired with the smart fan design and external exhaust ventilation may help the GTX 280 find a home inside premium-level HTPC environments.

GTX-280-and-9800-GTX-Top-View.jpg

A few months back we reviewed ZOTAC's GeForce 8800 GT AMP! Edition HDMI video card, which used a DVI-to-HDMI adapter and S/PDIF audio input cable to stream full HDMI audio and video output for the first time in any NVIDIA product. Then, just months ago, the GeForce 9800 GTX series launched with the same HDMI functionality and features. HDMI is back again (although not with the native interface found on the 9800 GX2), as ZOTAC offers it in their AMP!'ed version of the GTX 280.

NVIDIA_GTX-280_Header_Panel.jpg

Because the HDMI audio functionality is controlled at a hardware level, there is no need for additional drivers or software. Much like the SPDIF connection on the back of a motherboard, the video card's audio-out function is plug-and-play. The S/PDIF cable included with the kit connects between a small two-pin port on the power-connection side of the unit and the HT Omega Claro Plus+ AD8620BR Op Amp sound card we used for testing. Your setup may be different, so the cable may connect between the GTX 280 and the digital audio input pins on either your motherboard or add-in sound card. Not all motherboards and sound cards support this option, so make sure it's available before you make your purchase.

GeForce_GTX_280_SLI_Connections.jpg

Looking at how the ZOTAC GeForce GTX 280 AMP! Edition might stack up in a triple-SLI configuration, you can see a few advantages and disadvantages over previous GeForce products. One of the primary problems encountered with the 8800 GTX and Ultra series was the tight confinement created inside the case, which often led to poor cooling. NVIDIA later redesigned the tail end of their 9800 GTX using contours to help open up air channels. The opening at the rear of the card is hollowed to allow supplemental cooling-air intake, resembling a jet's intake manifold.

NVIDIA_GTX-280_NF790i_Triple-SLI.jpg

Whether the ZOTAC GeForce GTX 280 AMP! Edition video card will be used in single-unit standalone mode, or multi-unit SLI mode, the GTX 280 has already proven itself capable of matching the power of NVIDIA's previously most-powerful offering. Continue on to our next section, which clarifies the testing methodology here at Benchmark Reviews.

Video Card Testing Methodology

Benchmark Reviews has high hopes that one day we will be so large and well-known that every available graphics card combination will be on hand for our product testing... and we're getting closer! I envy the review sites that have twenty other video cards tested in stand-alone, SLI, and CrossFireX arrays for each and every review. Eventually we will be that big, and offer all of those configurations. Readers can help us grow to that size by spreading the word, but for now we'll have to make do with what our budget can afford. In this article, Benchmark Reviews tests and compares the NVIDIA GeForce GTX 280 1GB graphics card against several other closely-ranked products from within the GeForce family.

At the start of all tests, the previous display adapter driver is uninstalled and trace components are removed using Driver Cleaner Pro. We then restart the computer system to establish our display settings and define the monitor. Once the hardware is prepared, we begin our testing. The synthetic benchmark tests in 3DMark06 utilize shader models 2.0 and 3.0. Every test is conducted at the following resolutions: 1024x768 (17" standard LCD), 1280x1024 (19" standard LCD), 1680x1050 (22-24" widescreen LCD), and 1920x1200 (24-28" widescreen LCD). In some tests we utilized widescreen monitor resolutions, since more users are adopting these displays for their own computing.

Each benchmark test program begins after a system restart, and the very first result for every test is ignored, since it often serves only to cache the test. This process proved extremely important in the World in Conflict and Supreme Commander benchmarks, as the first run served to cache maps, allowing subsequent runs to perform much better than the first. Each test is completed five times, and the averaged results are displayed in our article; a short sketch of this averaging rule follows below.
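As that sketch, the snippet below discards the first (cache-warming) run and averages the rest. It is illustrative C++ only, not our actual tooling, and the frame-rate values in it are made up.

```cpp
// Average a batch of benchmark runs, ignoring the first (cache-warming) pass.
#include <cstdio>
#include <numeric>
#include <vector>

double reportedScore(const std::vector<double>& runs) {
    // runs[0] only serves to cache the test, so it is skipped
    return std::accumulate(runs.begin() + 1, runs.end(), 0.0)
           / static_cast<double>(runs.size() - 1);
}

int main() {
    // Hypothetical FPS results: the first run is low while maps are still caching
    std::vector<double> fps = {48.2, 55.1, 54.8, 55.3, 54.9};
    printf("Reported average: %.1f FPS\n", reportedScore(fps));
    return 0;
}
```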

NVIDIA_GTX-280_CAD.jpg

Our site polls and statistics indicate that over 90% of our visitors use their PC for playing video games, and practically every one of you is using one of the screen resolutions mentioned above. Since all of the benchmarks we use for testing represent different game engine technology and graphics rendering processes, I feel that this battery of tests will provide a diverse range of results for you to gauge performance on your own computer system. Since most gamers and enthusiasts are still using Windows XP, we decided that DirectX 9 would be used for all tests until demand and software support improve for Windows Vista.

Test System

Benchmark Applications

  • 3DMark06 v1.1.0 (8x Anti Aliasing & 16x Anisotropic Filtering)
  • Crysis v1.21 Benchmark (High Settings, No Anti Aliasing)
  • Lightsmark 2007 v1.3
  • Unreal Tournament 3 v1.2 (High Quality, 16x Anisotropic Filtering)
  • World in Conflict v1.0.0.8 Performance Test (Very High Setting: 4x AA/4x AF)

Video Card Test Products

| Product Series | Stream Processors | Core Clock (MHz) | Shader Clock (MHz) | Memory Clock (MHz) | Memory Amount | Memory Interface |
|---|---|---|---|---|---|---|
| FOXCONN GeForce 9800 GTX 9800GTX-512N | 128 | 685 | 1713 | 1100 | 512 MB GDDR3 | 256-bit |
| Sapphire Radeon HD 4850 102-B50102-00-AT | 800 | 625 | N/A | 993 | 512 MB GDDR3 | 256-bit |
| Gigabyte GeForce 9800 GX2 GV-NX98X1GHI-B | 128 (x2) | 600 (x2) | 1500 (x2) | 1000 (x2) | 512 MB (x2) GDDR3 | 256-bit (x2) |
| NVIDIA GeForce GTX 280 Reference | 240 | 602 | 1296 | 1107 | 1024 MB GDDR3 | 512-bit |
| ZOTAC GeForce GTX 280 AMP! Edition ZT-X28E3LA-FCP | 240 | 700 | 1400 | 1150 | 1024 MB GDDR3 | 512-bit |

ZOTAC_GTX280_AMP!_Edition_GPU-Z.png

Using the latest GPU-Z utility available for free from our industry affiliate techPowerUp!, we verify manufacturer specifications with the actual internal specifications. In regard to this GeForce GTX 280 AMP! Edition video card, it appears that all specifications for the ZT-X28E3LA-FCP model match those stated by ZOTAC.

Now we're ready to begin testing video game performance on the GeForce GTX 280 AMP! Edition graphics card, so please continue to the next page as we start with the 3DMark06 results.

3DMark06 Benchmark Results

3DMark is a computer benchmark by Futuremark (formerly named MadOnion) that measures the DirectX 9 gaming performance of graphics cards. 3DMark06 uses advanced real-time 3D game workloads to measure PC performance using a suite of DirectX 9 3D graphics tests, CPU tests, and 3D feature tests.

3DMark06 includes all-new HDR/SM3.0 graphics tests, SM2.0 graphics tests, AI and physics-driven single- and multi-core CPU tests, and a collection of comprehensive feature tests to reliably measure next-generation gaming performance today. Some enthusiasts may note that Benchmark Reviews does not include the CPU-bound tests in our benchmark battery; only graphics-bound tests are included.

Here at Benchmark Reviews, we believe that synthetic benchmark tools are just as valuable as video games, but only so long as you're comparing apples to apples. Since the same test is applied in the same controlled method with each run, I believe 3DMark is a very reliable tool for comparing graphics cards against one another.

Using a base resolution of 1024x768 as our starting point (representative of 17" LCD monitors), the maximum settings were applied to 3DMark06, which for these tests includes 8x anti-aliasing and 16x anisotropic filtering. Low-resolution testing allows the graphics processor to reach peak output performance, which shifts demand onto the other system components to keep up. At the lower resolutions, 3DMark will reflect the GPU's top-end speed in the composite score, indicating full-throttle performance under a light load. This makes for a less GPU-dependent test environment, and is helpful in measuring maximum output performance in the test results.

3DMark06_1024x768.jpg

There doesn't seem to be any question that 3DMark06 really likes the CrossFireX pair of Sapphire Radeon HD 4850 video cards. Because of the added overhead of combining graphics processors and video frame buffer memory, the light load created by the shader model 2.0 tests has a negative impact on the CrossFireX score. Another way of describing this phenomenon is to compare the combined Radeon HD 4850 video cards to a race car with only the highest gears available: it will have a faster top-end speed, but it will take longer to get there.

3DMark06_1280x1024.jpg

The ZOTAC GeForce GTX 280 AMP! Edition video card appears to be another high-gear example, as it is nudged out by the GeForce 9800 GX2 in the SM 2.0 tests but outperforms the GX2 in the more complex HDR/SM3.0 tests. At our lowest test resolution of 1024x768, the ZOTAC GeForce GTX 280 AMP! Edition puts the reference NVIDIA GeForce GTX 280 a decent margin below it.

Bumping the GPU strain up a notch to 1280x1024, the scores remain relatively comparable in terms of performance ratio. More visitors to Benchmark Reviews operate at this resolution than any other, as it represents the native resolution of 19" LCD monitors. The Sapphire Radeon HD 4850 keeps pace with the GeForce 9800 GTX at this resolution, in the same way the Gigabyte GeForce 9800 GX2 keeps pace with the overclocked GeForce GTX 280 AMP! Edition video card. However, moving into the more advanced HDR tests, the Radeon HD 4850 takes a decisive lead over the 9800 GTX, and the ZOTAC GTX 280 takes a healthy single-GPU lead over the crowd.

3DMark06_1680x1050.jpg

At the widescreen resolution of 1680x1050, the scores are practically identical in ratio to all of our previous tests. Once again, the shader model 2.0 tests put the 9800 GTX barely ahead of the HD 4850, at least until they reach the shader model 3.0 tests, where everything is reversed. Twin Sapphire Radeon HD 4850 video cards in a CrossFireX set are still running circles around the competition in 3DMark06, and the ZOTAC GeForce GTX 280 AMP! Edition is the most powerful single-GPU video card.

While the entire GeForce 9-series GPU family is PCI Express 2.0 compatible, there doesn't seem to be enough demand to create an immediate advantage. However, with the much higher-output GT200 GPU, the bandwidth demands rise from 6.4 GB/s on the GeForce 8800 GTX to 12.8 GB/s on the GTX 280. The Radeon CrossFireX set of HD 4850 video cards actually seems to work very well with our test motherboard, the Gigabyte GA-X48T-DQ6.

3DMark06_1920x1200.jpg

Finishing up the series of synthetic benchmark tests under heavy load, the FOXCONN GeForce 9800 GTX Standard OC Edition video card matches the Sapphire Radeon HD 4850 in the SM 2.0 tests, yet the Radeon HD 4850 dominates the 9800 GTX by 32% in the more advanced shader model 3.0 tests. It would take two Sapphire Radeon HD 4850's to beat out the GeForce 9800 GX2 and both the standard and ZOTAC-overclocked GTX 280 video cards.

One of NVIDIA's goals for the GT200 was to produce a GPU that doubles the performance of the 8800 GTX, but it looks like ZOTAC's goal was a little different, as this card nearly doubles the performance of the newer 9800 GTX. Producing 3647 HDR/SM3.0 points, the ZOTAC GTX 280 AMP! Edition outperforms the twin-G92 9800 GX2 by only 6%, and the Radeon HD 4850 by 41%. However, taking cost into consideration, the CrossFireX set of Sapphire 4850's outperforms the more expensive GTX 280 by over 40% in 3DMark06.

Take the 3DMark06 tests at face value, because in our next section we begin real-world testing on a cadre of popular video games known for taxing the graphics processor, and the performance curve is expected to change. First up is Crysis, so please continue on...

Crysis Benchmark Results

Crysis uses a new graphics engine: the CryENGINE2, which is the successor to Far Cry's CryENGINE. CryENGINE2 is among the first engines to use the Direct3D 10 (DirectX10) framework of Windows Vista, but can also run using DirectX9, both on Vista and Windows XP.

Roy Taylor, Vice President of Content Relations at NVIDIA, has spoken on the subject of the engine's complexity, stating that Crysis has over a million lines of code, 1GB of texture data, and 85,000 shaders. To get the most out of modern multicore processor architectures, CPU intensive subsystems of CryENGINE 2 such as physics, networking and sound, have been re-written to support multi-threading.

Crysis offers an in-game benchmark tool, which is similar to World in Conflict. This short test does place some high amounts of stress on a graphics card, since there are so many landscape features rendered. For benchmarking purposes, Crysis can mean trouble as it places a high demand on both GPU and CPU resources. Benchmark Reviews uses the Crysis Benchmark Tool by Mad Boris to test frame rates in batches, which allows the results of many tests to be averaged.

The very first thing we discovered in the low-resolution tests was how poorly both of our multi-GPU products seemed to perform. The Gigabyte GeForce 9800 GX2 was matched in average frame rate by the Sapphire Radeon HD 4850, and the GeForce 9800 GTX edged out the CrossFireX set of 4850's. To be fair, none of these video cards will probably ever realistically see this low a resolution, so the performance only illustrates how high-end GPU power can be cut short if the monitor (resolution) doesn't match it.

Crysis_1024x768.jpg

Low-resolution testing allows the graphics processor to reach peak output performance, which shifts demand onto the other system components. At the lower resolutions, Crysis will reflect the GPU's top-end speed in the composite score, indicating full-throttle performance under a light load. This makes for a less GPU-dependent test environment, and is helpful in creating a baseline for measuring maximum output performance in the next few test results.

At the 1280x1024 resolution used by 19" monitors, our results show that performance is beginning to really drop, despite the small difference in pixels drawn. In terms of general performance, all of these products maintain the same performance ratio as before, except for the 9800 GX2, which seems to hold its ground.

Crysis_1280x1024.jpg

The CrossFireX set of HD 4850's will soon reach its limit, as it is in its last gear while the 9800 GX2 is still shifting up through the gears. The NVIDIA GeForce GTX 280 reference product is outperformed by the overclocked ZOTAC GeForce GTX 280 AMP! Edition ZT-X28E3LA-FCP by an 11% margin, which goes to show how far a roughly 100 MHz overclock will take the GT200 GPU.

For widescreen users, our benchmarks below indicate that the ATI Radeon HD 4850 matches the performance of NVIDIA's GeForce 9800 GTX video card, although the 4850 stops delivering post-processing effects at 8x AA, while the 9800 GTX can reach 32x AA if the software supports it.

Crysis_1680x1050.jpg

A CrossFireX set of HD 4850's beats out the GeForce 9800 GX2 by almost 13% at this lower widescreen resolution, but they can't touch the GTX 280 series... yet. Testing in high-pressure Crysis also seems to have affected both of the GeForce GTX 280 products we've tested, which are barely ahead of the GeForce 9800 GX2 dual-GPU graphics card.

Heading into the 1920x1200 resolution produced on the SOYO DYLM26E6 used for testing, Crysis forces 2.3 million pixels to be processed by our graphical test products. Despite what 3DMark06 has reported, the CrossFireX set of Radeon HD 4850's is not king; the GeForce GTX 280 series is. If only by a small difference, the overclocked ZOTAC GTX 280 enjoys a 5% (2 FPS) lead over the GeForce 9800 GX2, which also seems to keep its own 8% (3 FPS) lead over the CrossFireX set.

Crysis_1920x1200.jpg

At our highest widescreen resolution, the overclocked Foxconn 9800GTX-512N performs the same as Sapphire's Radeon HD 4850. At the end of our Crysis testing, neither the GeForce 9800 GTX nor the CrossFireX HD 4850 set could touch the single ZOTAC GTX 280 AMP! Edition video card. But before we leave Crysis, I decided to include a look at post-processing performance with 4x AA enabled at the 1680x1050 resolution, which is really about the only resolution at which AA is playable with these products.

Crysis_1680x1050_4x-AA.jpg

Since NVIDIA has recently reduced the price of GeForce 9800 GTX products to compete with the HD 4850, there will be some intense fighting between these two products. My professional opinion is that if these two products shared the exact same price and I wanted to buy just one of them, my money would go to the GeForce 9800 GTX over the Radeon HD 4850. However, if you want excellent bang for the buck from a multi-GPU array, my money would go to CrossFireX because of its performance and widespread compatibility.

Even with a decent dose of anti-aliasing added to Crysis (at 1680x1050), performance is still relatively decent for all products. Our Island timedemo mixes in some beach and water views, so it's going to be on the high side of frame rates when compared to actual gameplay, but as you can see, every product we've tested hovers near the 30 FPS barrier for playable frame rates. It's worth noting that the reference GTX 280 produces a 3% lead over the 9800 GX2, which is minimal at best, and the 14% lead that a ZOTAC-overclocked GTX 280 AMP! Edition can deliver may not make the best argument for its price.

In our next section, Benchmark Reviews switches to video-output only benchmarking with Lightsmark 2007. Read on to see how a blended high-demand GPU test with low video frame buffer demand will impact our test products.

Lightsmark Frame Rates

Stepan Hrbek is the mastermind behind Lightsmark 2007, a program that allows you to benchmark real-time global illumination. Natural lighting makes artificial graphics life-like and real. Computers get faster, but rendering more polygons doesn't add value if lighting still looks faked, so insiders know that the next big thing is proper lighting; aka real-time global illumination.

Typical workloads in real-time rendering will shift, and Lightsmark simulates it. Global Illumination renders often take hours, so is your computer fast enough for real-time?

Before Lightsmark, real-time global illumination was limited to small scenes, small resolutions, and small speeds, in specially crafted scenes with handmade optimizations. Lightsmark breaks all of these limits at once, running in a reasonably sized scene (220,000 triangles) at high resolutions and excellent speed. Lightsmark is comparable to lower-demand OpenGL video games such as Call of Duty 4, Prey, Quake 4, and Doom 3.

At the ultra-low resolution of 1024x768, Lightsmark doesn't need to work very hard to get our graphics cards to render 786,432 pixels. This resolution forces each GPU to open up to full throttle and react to rapidly called tasks. Demands are quick-paced, and not surprisingly, a larger hardware communication overhead means that performance suffers. The extended memory addressing of the CrossFireX set and the GeForce GTX 280 will be put to the test in this speed-critical benchmark.

Lightsmark2007_1024x768.jpg

Lighting is computed fully automatically in an original, unmodified scene from the 2007 game World of Padman. The scene is not tweaked for Lightsmark, and contains all sorts of geometrical difficulties, with extra rooms hidden below the floor.

This scene places medium to low demands on a graphics card processor and tests the maximum speed with which the scene can be properly displayed at each resolution. At the lower resolution, the large frame buffer does not offer the same benefits as it would at a higher resolution. Additionally, the larger video memory means a longer round-trip for information, and when the resolution is low that trip doesn't last very long and needs to be completed very quickly.

Lightsmark2007_1280x1024.jpg

This is our first evidence that matching the video card to the rest of your hardware is just as important as matching it to the expected task. Notice from this test that Lightsmark doesn't favor the goliath Gigabyte GeForce 9800 GX2, or the CrossFireX set of Radeon HD 4850 graphics cards. In fact, our GeForce 9800 GX2 was outperformed in every single Lightsmark test by the GeForce 9800 GTX.

When we tested Crysis at 1680x1050, video frame buffer capacity was not as critical as raw processing power. It helped, but it obviously didn't make a huge margin of difference. In Lightsmark, information is passed through the buffer and called on very quickly, and the only thing that was going to benefit this test was an appropriate ratio of graphical stream processors to video memory buffer, so that the card could keep up with demands.

Lightsmark2007_1680x1050.jpg

In terms of performance, this test offers very short but taxing graphics, and only the most nimble products with capable muscle can take advantage. This translates into trouble for anyone using new graphics hardware to render older (OpenGL) video games such as Doom 3 or Quake 4.

Lightsmark2007_1920x1200.jpg

After all of the Lightsmark tests were complete, I'm sure these results aren't going to indicate anything in particular to most readers. As I mentioned before, the frame buffer has a whole lot to do with the speed of rendering: the larger the frame buffer, the longer it takes to complete the strobe of information. Lightsmark is meant to represent that collection of older games which some of you might still be playing. Even Call of Duty 4: Modern Warfare runs on a proprietary game engine that Infinity Ward based off of the tried-and-true Quake 3 structure. So keep this in mind as you're shopping for a new video card, and don't overpower an older video game with multi-GPU graphics solutions, because they will not produce the results you want.

In the next section we change gears to compare our group of video cards in Unreal Tournament 3. Please continue on to see how the Unreal Engine 3 performs with our collection of test products.

Unreal Tournament 3

Unreal Tournament 3 (UT3) is a first-person shooter and online multiplayer video game by Epic Games and is the next installment of the Unreal series after Unreal Tournament 2004. It is published by Midway Games and was released in North America for Windows on November 19, 2007.

Unreal Tournament 3 is actually the fourth game in the Unreal Tournament series and the eighth Unreal game, but it has been numbered in terms of the engine it runs on. UT3 is consequently part of the third generation, because it runs on the Unreal Engine 3 and does not reuse any content from previous versions.

Since Unreal Tournament 3 was designed as a DirectX 9 video game with no current support expected for DirectX 10, we use Windows XP Pro (Service Pack 3) for our benchmark testing.

Beginning with the entry-level resolution of 1024x768, the benchmark scores are so close for some products that it might be time to eliminate this setting from our testing process. Nevertheless, it looks like the Unreal Engine 3 doesn't care too much for the ATI Radeon HD 4850 video card, since the CrossFireX set was just barely able to keep pace with the others. Even with High Quality settings, all of the tweaks, and 16x anisotropic filtering enabled, Unreal Tournament 3 doesn't seem to strain the graphics card enough to create a noticeable difference in benchmark scores.

Unreal_Tournament_III_1024x768.jpg

Tested at 1280x1024, the resolution is just beginning to create any real load on our test products. The ZOTAC GeForce GTX 280 AMP! Edition loses just a single frame per second (0.6%), while the Sapphire Radeon drops only 6% and the GeForce 9800 GTX 5%. As the resolution increases, these scores should begin to separate more effectively. For now, however, it appears that just about any graphics card can play Unreal Tournament 3 without issue.

Unreal_Tournament_III_1280x1024.jpg

When I tested the Honeywell HWLM2216 recently, I noticed how the 1680x1050 widescreen resolution of this 22" LCD monitor placed very little additional strain on a graphics card compared to a 19" standard LCD monitor. Comparatively, 1680x1050 produces 1.76 MP and 1280x1024 produces 1.31 MP, so only a very small difference in performance is expected. The biggest difference is in the user experience, because the widescreen monitor comes in very handy for watching multimedia video or playing large world-scape video games.
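The megapixel figures quoted here are plain arithmetic, and the same calculation applies to every resolution used in this review. A quick sketch for anyone who wants to reproduce them:

    def megapixels(width, height):
        """Pixels rendered per frame, in millions."""
        return width * height / 1e6

    for w, h in [(1024, 768), (1280, 1024), (1680, 1050), (1920, 1200)]:
        print(f"{w}x{h}: {megapixels(w, h):.2f} MP")
    # 1024x768: 0.79 MP, 1280x1024: 1.31 MP, 1680x1050: 1.76 MP, 1920x1200: 2.30 MP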

Unreal_Tournament_III_1680x1050.jpg

At 1680x1050 resolution the differences are beginning to show, but I am getting the impression that only 1920x1200 will be useful for illustrating how each product performs. Although it's still a tight race, the ZOTAC GTX 280 leads with an average of 160 frames per second, and the Sapphire Radeon HD 4850 trails behind the pack with 107 FPS. The GeForce 9800 GTX leads the HD 4850 by 16%, but the CrossFireX set of 4850s fights back with a 13% lead.

Unreal_Tournament_III_1920x1200.jpg

Finally arriving at 2.3 MP with a 1920x1200 resolution on our 26" SOYO DYLM26E6 test monitor, we can begin to see how the playing field has leveled out. The Sapphire Radeon HD 4850 still trails the GeForce 9800 GTX by 6%, but put another 4850 together for a CrossFireX set and they lead by 28%. The GeForce 9800 GX2 puts both G92 graphics processors to good use, and beats both the CrossFireX 4850s (by 16%) and the NVIDIA GeForce GTX 280 engineering sample (by only 4%). When everything was said and done, the overclocked ZOTAC GTX 280 AMP! Edition video card pulled off a very narrow victory.

Product Series:     FOXCONN GeForce 9800 GTX 9800GTX-512N | Sapphire Radeon HD 4850 102-B50102-00-AT | Gigabyte GeForce 9800 GX2 GV-NX98X1GHI-B | NVIDIA GeForce GTX 280 Reference | ZOTAC GeForce GTX 280 AMP! Edition ZT-X28E3LA-FCP
Stream Processors:  128 | 800 | 128 (x2) | 240 | 240
Core Clock (MHz):   685 | 625 | 600 (x2) | 602 | 700
Shader Clock (MHz): 1713 | N/A | 1500 (x2) | 1296 | 1400
Memory Clock (MHz): 1100 | 993 | 1000 (x2) | 1107 | 1150
Memory Amount:      512 MB GDDR3 | 512 MB GDDR3 | 512 MB (x2) GDDR3 | 1024 MB GDDR3 | 1024 MB GDDR3
Memory Interface:   256-bit | 256-bit | 256-bit (x2) | 512-bit | 512-bit

Our last benchmark of the article is coming next, which puts our collection of video cards against some very demanding graphics.

World in Conflict Benchmark Results

The latest version of Massive's proprietary Masstech engine utilizes DX10 technology, features advanced lighting and physics effects, and allows for a full 360-degree range of camera control. The Masstech engine scales down to accommodate a wide range of PC specifications; if you've played a modern PC game within the last two years, you'll be able to play World in Conflict.

World in Conflict's FPS-like control scheme and 360-degree camera make its action-strategy game play accessible to strategy fans and fans of other genres... if you love strategy, you'll love World in Conflict. If you've never played strategy, World in Conflict is the strategy game to try.

World in Conflict offers an in-game benchmark, which records the minimum, average, and maximum frame rates during the test. Very recently another hardware review website made the assertion that these tests are worthless, but we couldn't disagree more. When used to compare video cards that depend on the same driver and use the same GPU architecture, the in-game benchmark works very well, and comparisons are apples-to-apples.
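For readers curious about what the benchmark actually reports, the minimum, average, and maximum frame rates all fall out of per-frame render times. A minimal sketch; the frame-time values below are illustrative only, not World in Conflict data:

    def fps_stats(frame_times_ms):
        """Convert per-frame render times (ms) into min/avg/max frames per second."""
        instantaneous = [1000.0 / t for t in frame_times_ms]
        # Average FPS is total frames over total elapsed time,
        # not the mean of the instantaneous values.
        avg = 1000.0 * len(frame_times_ms) / sum(frame_times_ms)
        return min(instantaneous), avg, max(instantaneous)

    print(fps_stats([12.5, 14.0, 16.7, 20.0, 13.3]))  # (50.0, ~65.4, 80.0)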

First tested was the 1024x768 resolution in WiC, which represents the (very few) gamers using a 17" LCD monitor with this widescreen-preferred video game. Based on the test results charted below, it's clear that WiC doesn't cap the maximum frame rate (which would conserve power): good for full-spectrum benchmarks like ours, but bad for electricity bills. The average frame rate is shown in each chart, but our initial results are so close that it becomes obvious WiC doesn't ask much from the graphics card at low resolutions. That's okay, because we've got three more resolutions to offer.

World_in_Conflict_1024x768.jpg

At 1024x768 the 9800 GTX was ahead of the Sapphire Radeon HD 4850 by a single FPS, but at 1280x1024 the positions and results are exactly reversed. The CrossFireX set of HD 4850s is just a step behind the average frame rate of the GeForce 9800 GX2. Ultimately the overclocked ZOTAC GTX 280 would secure the lead with an average frame rate of 69 FPS, but a 3 FPS lead over the GeForce 9800 GTX is not exactly impressive.

World_in_Conflict_1280x1024.jpg

Moving up a small step to the 1680x1050 widescreen resolution, the trends hold to the same ratios they have shown for the past two tests. The ZOTAC GeForce GTX 280 holds its ground and drops only 2 FPS, which results in a small 2 FPS lead over the 9800 GX2. The Foxconn GeForce 9800 GTX is neck-and-neck with the Sapphire Radeon HD 4850, and the CrossFireX setup is in line with the 9800 GX2 and both GTX 280s.

World_in_Conflict_1680x1050.jpg

With a balanced demand for CPU and GPU power, World in Conflict just begins to place demands on the graphics processor at the 1920x1200 resolution. I was expecting more of the same, and that is pretty much exactly what I got.

The performance decay had its hardest impact on the mid-level video cards, the GeForce 9800 GTX and Radeon HD 4850, which for all intents and purposes performed exactly the same throughout our entire WiC testing. Two HD 4850s in CrossFireX configuration yield a 46% improvement over using only one, while matching the performance of our reference NVIDIA GeForce GTX 280. The GeForce 9800 GX2 moved barely two full frames per second as it worked without effort from 0.79 MP up to 2.3 MP.

World_in_Conflict_1920x1200.jpg

Taking a broader look at the average frame rate, there appears to be a major difference between the mid-range and high-end video card products when it comes to World in Conflict. This game offers DirectX 10 functionality, which could lend itself to taxing the CrossFireX, 9800 GX2, and GTX 280 graphics cards more appropriately. For our testing, it appears that only the mid-level GeForce 9800 GTX and Sapphire Radeon HD 4850 demonstrate a performance decay as the resolution is raised.

Product Series:     FOXCONN GeForce 9800 GTX 9800GTX-512N | Sapphire Radeon HD 4850 102-B50102-00-AT | Gigabyte GeForce 9800 GX2 GV-NX98X1GHI-B | NVIDIA GeForce GTX 280 Reference | ZOTAC GeForce GTX 280 AMP! Edition ZT-X28E3LA-FCP
Stream Processors:  128 | 800 | 128 (x2) | 240 | 240
Core Clock (MHz):   685 | 625 | 600 (x2) | 602 | 700
Shader Clock (MHz): 1713 | N/A | 1500 (x2) | 1296 | 1400
Memory Clock (MHz): 1100 | 993 | 1000 (x2) | 1107 | 1150
Memory Amount:      512 MB GDDR3 | 512 MB GDDR3 | 512 MB (x2) GDDR3 | 1024 MB GDDR3 | 1024 MB GDDR3
Memory Interface:   256-bit | 256-bit | 256-bit (x2) | 512-bit | 512-bit

In our next section, we discuss electrical power consumption and temperature levels for these products. Learn how well (or poorly) each video card will impact your utility bill...

ZOTAC GTX 280 AMP! Temperatures

This section is probably the most popular for me, not so much as a reviewer but more for my enthusiast side. Benchmark tests are always nice, so long as you care about comparing one product to another. But when you're an overclocker, or merely a hardware enthusiast who likes to tweak things on occasion, there's no substitute for good information.

Benchmark Reviews has a very popular guide on How To Overclock the NVIDIA GeForce Series video card, but it was published shortly after the 8th generation of GeForce products launched. We are currently preparing an updated article with additional information on shader overclocking and temperature control as the newest GeForce products become available. Once published, you can expect more detailed information than what is shown below; for now, the temperatures depicted are GPU core temperatures at idle and under load.

To begin my testing, I used FurMark v1.4.0 to record GPU temperatures at idle and again in high-power 3D mode. The ambient room temperature was a comfortable 22.0°C and the inner-case temperature hovered around 34°C. The ZOTAC ZT-X28E3LA-FCP GeForce GTX 280 AMP! Edition video card recorded 45°C in idle 2D mode, and increased to 86°C in full 3D mode.

FurMark is an OpenGL benchmark that heavily stresses and overheats the graphics card with fur rendering. The benchmark offers several options allowing the user to tweak the rendering: fullscreen / windowed mode, MSAA selection, window size, and duration. The benchmark also includes a GPU Burner mode (stability test). FurMark requires an OpenGL 2.0 compliant graphics card with a lot of GPU power! As an oZone3D.net partner, Benchmark Reviews offers a free download of FurMark to our visitors.

ZOTAC_GeForce_GTX_280_AMP!_Edition_Temps.jpg

FurMark does two things extremely well: it drives the thermal output of any graphics processor higher than any other application or video game, and it does so with consistency every time. While I have shown that FurMark is not a true benchmark tool for comparing different video cards, it still works very well for comparing one product against itself at different stages. FurMark is very useful for comparing the same GPU against itself using different drivers or clock speeds, or for testing the stability of a GPU as it raises temperatures higher than any other program can. But in the end, it's a rather limited tool.

I must admit that at 86°C this is not the coolest-running GeForce product I have ever tested. Since the metal underplate acts to dissipate heat, you don't really want to touch it after loaded use. Keeping in mind that the GT200 is an entirely new graphics processor, you can relate this product launch back to the day G80 processors launched in the GeForce 8800 series. What this means is that the graphics processor will undergo some level of fabrication refinement over time, and new GPUs will be binned for faster products, similar to how the GeForce 8800 Ultra was one year ago. Adding to the improvements that refinement will undoubtedly bring, you can expect cooling to improve with more efficient die processes.

The most favored feature of past upper-level GeForce designs has been the focused exhaust design, since heated air recirculating inside the computer case could reduce stability for your sensitively overclocked computer system. At 86°C the GTX 280 runs considerably hot under full load, more than ten degrees hotter than the 9800 GTX, which means there's some room for improvement in the cooling department. But here's a little fact you probably didn't know: the GT200 GPU is designed to operate safely up to its 105°C thermal threshold. What happens after that? Believe it or not, if the GPU exceeds this temperature the clock speed will automatically be dialed down to avoid damage.
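That protective behavior amounts to a simple thermal control loop. Here is a hypothetical sketch of the concept; the step size, floor clock, and recovery margin are illustrative values of my own, not NVIDIA's firmware logic:

    THERMAL_THRESHOLD_C = 105.0  # GT200's stated safe operating ceiling

    def adjust_core_clock(core_temp_c, clock_mhz, step_mhz=25.0):
        """Dial the clock down past the thermal threshold; recover it once cool."""
        if core_temp_c > THERMAL_THRESHOLD_C:
            return max(clock_mhz - step_mhz, 300.0)    # assumed floor clock
        if core_temp_c < THERMAL_THRESHOLD_C - 10.0:   # assumed recovery margin
            return min(clock_mhz + step_mhz, 700.0)    # AMP! Edition stock core clock
        return clock_mhz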

GTX 280 Power Consumption

It's becoming difficult to dodge the "doom and gloom" talk these days. Planet Earth needs our help, badly. With forests becoming barren of vegetation and snow-capped poles quickly turning brown, the technology industry has a new attitude towards suddenly becoming "green". I'll spare you the powerful marketing hype that I get from various manufacturers every day, and get right to the point: your CPU has been doing a lot more to save the planet than your GPU has... until now. It's taken some time, but NVIDIA has finally worked out that problem.

GeForce GTX 200 GPUs (which include the GTX 280 and GTX 260 at the time of this writing) include a more dynamic and flexible power management architecture than past-generation NVIDIA GPUs. Four different performance / power modes are employed on the new GT200 processor:

  1. Idle/2D power mode (approx 25 W)
  2. Blu-ray DVD playback mode (approx 35 W)
  3. Full 3D performance mode (varies; worst-case TDP 236 W)
  4. HybridPower mode (effectively 0 W)

Using a HybridPower-capable nForce motherboard, such as those based on the nForce 780a chipset, a GeForce GT200 GPU can be fully powered off when not performing intensive graphics operations and graphics output can be handled by the motherboard GPU (mGPU). For 3D graphics-intensive applications, the NVIDIA driver can seamlessly switch between the power modes based on utilization of the GPU.

GeForce_GTX_280_Exposed_Top.jpg

Each of the new GeForce GTX 200 GPUs integrates utilization monitors ("digital watchdogs") that constantly check the amount of traffic occurring inside the GPU. Based on the level of utilization reported by these monitors, the GPU driver can dynamically set the appropriate performance mode (i.e., a defined clock and voltage level) that minimizes the power draw of the graphics card, all fully transparent to the end user.
The GPU also has clock-gating circuitry, which effectively "shuts down" blocks of the GPU that are not being used at a particular time (where time is measured in milliseconds), further reducing power during periods of non-peak GPU utilization.
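Putting the four power modes and the utilization watchdogs together, the driver's job resembles a small mode selector. The sketch below is speculative and intended only to illustrate the concept; the thresholds, names, and wattage comments are invented for illustration, not NVIDIA's internals:

    from enum import Enum

    class PowerMode(Enum):
        HYBRIDPOWER_OFF = 0   # effectively 0 W: motherboard GPU drives the display
        IDLE_2D = 1           # approx. 25 W
        VIDEO_PLAYBACK = 2    # approx. 35 W
        FULL_3D = 3           # varies; worst-case TDP 236 W

    def select_mode(gpu_utilization, video_decode_active, hybridpower_available):
        """Map watchdog-reported utilization (0.0-1.0) onto a clock/voltage mode."""
        if gpu_utilization > 0.30:                     # assumed 3D threshold
            return PowerMode.FULL_3D
        if video_decode_active:
            return PowerMode.VIDEO_PLAYBACK
        if hybridpower_available and gpu_utilization < 0.01:
            return PowerMode.HYBRIDPOWER_OFF
        return PowerMode.IDLE_2D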

All of this enables GeForce GTX 200 graphics cards to deliver idle power that is nearly 1/10th of maximum power (approximately 25 W on the GeForce GTX 280). This dynamic power range gives you incredible power efficiency across a full range of applications (gaming, video playback, surfing the web, etc.). While current Intel central processing units use a power-efficient 45 nm die process, the graphics processor is a bit slower to catch up to this refinement level, and the GT200 is built on TSMC's 65 nm fabrication process. Below is a chart with the isolated video card Watts (not system total) consumed by each specified test product:

Video Card Power Consumption by Benchmark Reviews

VGA Product Description (sorted by combined total power)    Idle Power    Loaded Power
NVIDIA GeForce GTX 480 SLI Set                               82 W          655 W
NVIDIA GeForce GTX 590 Reference Design                      53 W          396 W
ATI Radeon HD 4870 X2 Reference Design                      100 W          320 W
AMD Radeon HD 6990 Reference Design                          46 W          350 W
NVIDIA GeForce GTX 295 Reference Design                      74 W          302 W
ASUS GeForce GTX 480 Reference Design                        39 W          315 W
ATI Radeon HD 5970 Reference Design                          48 W          299 W
NVIDIA GeForce GTX 690 Reference Design                      25 W          321 W
ATI Radeon HD 4850 CrossFireX Set                           123 W          210 W
ATI Radeon HD 4890 Reference Design                          65 W          268 W
AMD Radeon HD 7970 Reference Design                          21 W          311 W
NVIDIA GeForce GTX 470 Reference Design                      42 W          278 W
NVIDIA GeForce GTX 580 Reference Design                      31 W          246 W
NVIDIA GeForce GTX 570 Reference Design                      31 W          241 W
ATI Radeon HD 5870 Reference Design                          25 W          240 W
ATI Radeon HD 6970 Reference Design                          24 W          233 W
NVIDIA GeForce GTX 465 Reference Design                      36 W          219 W
NVIDIA GeForce GTX 680 Reference Design                      14 W          243 W
Sapphire Radeon HD 4850 X2 11139-00-40R                      73 W          180 W
NVIDIA GeForce 9800 GX2 Reference Design                     85 W          186 W
NVIDIA GeForce GTX 780 Reference Design                      10 W          275 W
NVIDIA GeForce GTX 770 Reference Design                       9 W          256 W
NVIDIA GeForce GTX 280 Reference Design                      35 W          225 W
NVIDIA GeForce GTX 260 (216) Reference Design                42 W          203 W
ATI Radeon HD 4870 Reference Design                          58 W          166 W
NVIDIA GeForce GTX 560 Ti Reference Design                   17 W          199 W
NVIDIA GeForce GTX 460 Reference Design                      18 W          167 W
AMD Radeon HD 6870 Reference Design                          20 W          162 W
NVIDIA GeForce GTX 670 Reference Design                      14 W          167 W
ATI Radeon HD 5850 Reference Design                          24 W          157 W
NVIDIA GeForce GTX 650 Ti BOOST Reference Design              8 W          164 W
AMD Radeon HD 6850 Reference Design                          20 W          139 W
NVIDIA GeForce 8800 GT Reference Design                      31 W          133 W
ATI Radeon HD 4770 RV740 GDDR5 Reference Design              37 W          120 W
ATI Radeon HD 5770 Reference Design                          16 W          122 W
NVIDIA GeForce GTS 450 Reference Design                      22 W          115 W
NVIDIA GeForce GTX 650 Ti Reference Design                   12 W          112 W
ATI Radeon HD 4670 Reference Design                           9 W           70 W

* Results are accurate to within +/- 5W.
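One quick way to digest the chart above is as an idle-to-load ratio. A small sketch using our measured GTX 280 figures alongside NVIDIA's published numbers:

    def idle_fraction(idle_w, loaded_w):
        """Fraction of full-load power draw consumed while sitting idle."""
        return idle_w / loaded_w

    # Measured above: NVIDIA GeForce GTX 280 Reference Design
    print(f"{idle_fraction(35, 225):.0%}")  # ~16% of loaded draw at idle
    # NVIDIA's quoted ~25 W idle against the 236 W worst-case TDP
    print(f"{idle_fraction(25, 236):.0%}")  # ~11%, close to the 1/10 claim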

In regard to power requirements, the GeForce GTX 280 has the same hunger that haunted the older 8800 GTX, and requires one 8-pin and one 6-pin PCI-Express power connection for proper operation. Taken in broader perspective, the GTX 280 consumes nearly the same power as the older 8800 GTX while producing twice the performance in most gaming applications and adding parallel computing power. So the loaded power consumption has become more efficient, which is not very common since emphasis is usually placed on idle/standby efficiency and conservation.

NVIDIA has designed the GT200 graphics processor to be an efficient GeForce product at idle, too, thereby reducing power consumption full-time. The newly improved design inherently gives the GT200 an efficiency advantage at every level, and its idle power consumption falls to the same low level as recorded for the Palit GeForce 9600 GT 1GB Sonic NE/960TSX0202. NVIDIA's top-end GeForce product consumes just as little power in idle 2D mode as their lowest models. That's really quite impressive, to say the least.

Please continue to the review conclusion in the next section, where I share my final thoughts on the NVIDIA GT200 graphics processor and give my opinion of the new high-level AMP!'ed GTX 280 product offering.

GT200 GPU Final Thoughts

Paying to be an early adopter of technology or buying from the top shelf has never really been my personal taste, even for someone as immersed in technology as I am. There are always new technologies that people talk up as they are developed, such as Blu-ray Disc for example. Yet, because there isn't enough value behind the added features or functionality to warrant paying the premium price tag, most people simply wait extended periods of time before making their purchase. There is occasionally the rare exception, however, when you find a revolutionary new product that really makes the price worth the purchase. For me, the GT200 graphics processor conducting a symphony of 240 processor cores inside the GeForce GTX 280 video card makes me believe a uniquely rare exception may have occurred... if the price is right.

Now I'm not going to tell you that these GT200-based video cards are a must-have item for everyone. After all, the GTX 280 and GTX 260 are now NVIDIA's top-shelf premium GeForce products and not everyone can pay the high price of admission. However, there will undoubtedly be products launched with lower model numbers that make the argument much more plausible for the average person. Either way, the GT200 is as revolutionary to graphics and computing as the electric motor has been to automobiles. I would even go so far as to say that NVIDIA's GT200 GPU is an evolution in video cards in much the same way as Solid State Drives are to storage. It feels that big... and so far I've only touched on the graphics side of the product.

NVIDIA_GTX-280_Splash_Angle.jpg

There has been continued mention of parallel computing architecture throughout this article, and for very good reason. The GT200 isn't just a graphics processor, at least not in the sense we have all experienced for the past decades. Beginning with the GT200, you'll need to look at NVIDIA GPUs the same way you view AMD or Intel CPUs. While they each have their strengths, these days they play more of a multi-purpose role. Intel and AMD processors have long since been capable of lower-level graphics processing (mostly limited to 2D), and lately they have "evolved" into four cores. Well, I suppose evolution made a special visit to NVIDIA's TSMC facility, because the GT200 has 240 processor cores and can tackle compute-level tasks with high-performance results.

What's going to be difficult to pull off is educating the end-user, the consumer, and the corporate buyer. After my testing was complete, I experimented with different Intel processors to see what kind of difference they made. To make a long story short, the benchmark results for Crysis at 1920x1200 were virtually identical between the dual-core E8700 and quad-core Q6700. But I already know how this works: Benchmark Reviews recommends that gamers spend more money on the GPU and less on the CPU, and readers promptly dismiss us as NVIDIA fan boys. That same visitor will then read the same opinion at a few other websites and either suspect we're all being paid off (and will probably post something of the sort in a forum somewhere) or begin to suspect that something is actually happening in the world of technology. So after the world reports that the GT200 is a better investment than a new Extreme Edition processor, that visitor will still go out and buy a new quad core and keep the old 7900 GT that's smoldering inside his case.

For everyone else who actually read this entire article, there's a lot going on with the GT200 that is not available anywhere else. For those with deep pockets, NVIDIA SLI technology is taken to new levels with GeForce GTX 200-series graphics cards. NVIDIA PhysX technology, which is becoming mainstream in game development, will require no additional accelerator to enjoy the amazing new graphical effects of upcoming game titles. Even enterprise computing environments will benefit from CUDA applications coded to make use of the many cores inside the GT200, with more threads, double-precision math, and an increased register file size.
Hopefully, the money-wise hardware enthusiast will begin making smarter decisions when purchasing new computer systems, and might conduct a rudimentary performance analysis to match the CPU with the GPU. I think they will find that a lower-end CPU paired with a higher-end GPU produces more performance than the reverse, for the same price.

This idea of heterogeneous computing is what NVIDIA has been working hard to accomplish. Selecting the most appropriate graphics processor is now exactly as important as choosing the right processor for any specific task. Please see our NVIDIA GPU Computing FAQ for additional information on this topic.

ZOTAC GTX 280 Conclusion

When Benchmark Reviews tested the GeForce 9800 GX2, the box-like NVIDIA reference design was not incredibly appealing to me. Apparently I just needed to wait for the 9800 GTX design before I would see curves influence an NVIDIA product's appearance. When NVIDIA's GeForce GTX 280 launched last week, a new king greeted the public wearing clothes that aren't exactly new. While I never really considered the pre-G92 GeForce 8800 series very attractive as a whole, primarily because of the awkward half-covered products, the GTX 280 has finished what was started. One particular favorite of mine is the tilted blower fan, which corrects the functional flaws of the parallel blower fan found in the 9800 series. Unlike the past generation of products, this GeForce video card does not offer LED lights for cosmetic accents; they are now utilized for functional indication of hardware status.

In the not-so-distant past I had to replace my GeForce 8800 GTX because an errant SATA cable swiped off one of the capacitors. At that moment, I felt that NVIDIA definitely should have done something more to protect the electronics on their product. Unlike the higher-end 8800 series GeForce products, the GTX 280 leaves no sensitive electronic components exposed to potential damage. NVIDIA has engineered the GeForce GTX 280 to sustain above-average abuse, which also means you'll have very little chance of having to RMA this product because it falls apart on you. The plastic shell covering the GTX 280 will work very well in cramped environments where the video card will be in contact with cables and components, just so long as it can fit.

In regards to performance and functionality, NVIDIA has redefined the graphics card space. Beginning with 240 processor cores, the GeForce GTX 280 is everything that previous products have not been: parallel-computing ready. Without question, the GeForce GTX 280 has earned the top position in NVIDIA's video card product lineup. ZOTAC's AMP! Edition clocks already sit well above the launch-date reference levels, so it might be a short while before drivers mature enough to gain additional stable overclocks. Optimized post-process compression combined with a future-proof 1024 MB video frame buffer will make this the must-have card for extreme gamers for the foreseeable future (*see intro). A long-overdue 512-bit memory bus calls upon the PCI-E 2.0 bandwidth opportunities, and opens the design to GDDR4 components as the product line matures. Additionally, full HDMI audio and video output is available for HTPC builds and viewing high-definition copyright-protected material. Unfortunately though, there is no DisplayPort functionality in the new GTX 280.

ZOTAC_GeForce_280_GTX_AMP!_Edition_Splash.jpg

At the time of this writing, ZOTAC's ZT-X28E3LA-FCP GeForce GTX 280 AMP! Edition video card has just been sent out to distributors, so you can expect to see it available at retail locations very soon. I expect NewEgg to keep in line with prices for other GTX 280 products around the $629 price point. Some enthusiasts have started to complain that these products have become too expensive, but I am reminded that the GeForce 8800 GTX and GTS launched with nearly identical price tags almost two years ago. So let's see: account for inflation and a US dollar in decline, then add in a 100% graphics performance improvement, 240 compute-ready cores, and a very power-efficient architecture, and you might begin to see the value a little more clearly. Helping to blur the line of value is the GeForce 9800 GX2, which might not offer the same level of compute power but can play video games at nearly the same level of performance. As of December 2008, the ZOTAC GTX 280 AMP! Edition was found on NewEgg for $399.99 with a $40 rebate for a limited time.

In summary, the ZOTAC GeForce GTX 280 AMP! Edition compute-ready GT200 video card has proved itself to be the long-overdue solution to intensive graphics applications. To describe its performance, you have to think of more than just video game frame rates, because now transcoding, rasterization, and video ripping can occur in a fraction of the time they previously took. With the power of CUDA technology and the new CUDA runtime for Windows Vista, intensive computational tasks can be offloaded from the CPU to the GPU, making this the first GeForce product worthy of enterprise computing environments.

The GT200 processor is a remarkable achievement that NVIDIA should be proud of, and for once I find myself giving an expensive premium product my highest recommendation; but it's not without some reservations. It's nice that the GTX 200-series offers HDMI video output (via adapter) along with digital audio output through the attached S/PDIF audio cable, but I think that a product of this level should also be looking at native DisplayPort connectivity to fully secure the idea of future-proof hardware. If multimedia transcoding is a selling point, then connecting to the equipment that cutting-edge professionals will be using should be just as important.

ZOTAC includes the CodeMasters PC video game Race Driver: GRID with the GTX 280 AMP! Edition, and I have to personally confess to all of you that this became a secret obsession of mine for at least a week straight. With games like Crysis and World in Conflict being replaced later this year, the newest titles are beginning to revolve around features like PhysX and heavier post-processing effects. Expect the GTX 280 to shine in upcoming titles like Far Cry 2, which uses the Dunia game engine and will place real demand on the 1 GB video frame buffer; even Shadow Harvest and S.T.A.L.K.E.R.: Clear Sky should make this product worthwhile. The future of gaming might let you play the game with an older graphics solution, but it doesn't make any promises on enjoyment. So if you're a competitive hardcore gamer with an appetite (and disposable income) for the absolute best, ZOTAC's GTX 280 AMP! Edition is the undisputed champion of graphics cards. If you're not so extreme, then the ZOTAC GeForce 9800 GX2 still performs nearly as well.

Pros:

+ Outstanding AA/AF performance from demanding games
+ Supports DirectX 10, OpenGL 2.1, and Shader Model 4
+ 700 MHz GPU/1400 MHz Shader/1150 MHz RAM
+ Parallel Compute ability for CUDA applications and GPU physics
+ Enables NVIDIA HybridPower technology
+ Unprecedented single-GPU performance - outperforms 9800 GX2
+ Double-precision floating-point support
+ 240 Compute-capable processing cores
+ HDMI Audio and Video supported for HDCP output
+ Contoured enclosure offers improved airflow and cooling
+ 16x Coverage Sampling Antialiasing (CSAA) algorithm
+ Supports triple-SLI functionality
+ Ultra-efficient 65nm GT200 processor
+ 512-bit GDDR3 memory interface with 1 GB frame buffer (roughly 147 GB/s peak bandwidth)

Cons:

- Cooling improvements would be desirable
- Large footprint full ATX form factor VGA space
- Expensive enthusiast product
- Lacks DisplayPort interface

Ratings:

  • Presentation: 9.50
  • Appearance: 9.00
  • Construction: 9.75
  • Functionality: 10.0
  • Value: 8.00

Final Score: 9.25 out of 10.

Excellence Achievement: Benchmark Reviews Golden Tachometer Award.

Questions? Comments? Benchmark Reviews really wants your feedback. We invite you to leave your remarks in our Discussion Forum.

