GeForce 8 series

From Wikipedia, the free encyclopedia

The GeForce 8 Series, codenamed G80, is the eighth generation of NVIDIA's GeForce graphics cards. The GeForce 8 is the third fundamentally new architecture developed by NVIDIA, the first being the GeForce 256.[1] The GeForce 8 is also the world's first fully unified shader architecture for the PC market.[citation needed]

GeForce 8 Series Overview

[Image: GeForce logo (File:GeForce newlogo.jpg)]

The GeForce 8 series arrives with NVIDIA's first unified shader architecture, supporting DirectX 10.0 Shader Model 4.0 and OpenGL 2.1. It is believed that "G80" is simply a renamed "NV50". The design is a major shift in GPU functionality and capability, the most obvious change being the move from the separate functional units (pixel shaders, vertex shaders) of older GPUs to a homogeneous collection of universal floating point processors (called "stream processors") that can perform any of these tasks. This allows processing resources to be assigned arbitrarily so that the GPU maintains a more optimal workload, because not all applications use each type of shader program in the same ratio. For instance, for a program that needs more vertex shader geometry power, the GPU can allocate more resources to that task instead of to pixel shading. GeForce 8 performs this allocation in hardware.
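
As a rough illustration of the idea (the workload shares below are hypothetical, and the real load balancing is performed by the GPU's hardware scheduler, not by software like this), the following host-side C++ sketch divides a pool of 128 unified stream processors according to a changing vertex/pixel workload ratio:

    #include <cstdio>

    // Illustrative only: the actual balancing happens in the GPU's hardware
    // scheduler. The point is that one pool of unified stream processors is
    // shared between vertex and pixel work in whatever ratio the frame needs.
    static void allocate(int total_sps, double vertex_share)
    {
        int vertex_sps = (int)(total_sps * vertex_share + 0.5);
        int pixel_sps  = total_sps - vertex_sps;
        printf("vertex share %.0f%% -> %3d SPs for vertex work, %3d SPs for pixel work\n",
               vertex_share * 100.0, vertex_sps, pixel_sps);
    }

    int main()
    {
        const int total_sps = 128;   // e.g. GeForce 8800 GTX
        allocate(total_sps, 0.70);   // geometry-heavy frame (hypothetical share)
        allocate(total_sps, 0.20);   // fill-rate-bound frame (hypothetical share)
        return 0;
    }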

While GeForce 8 has a large number of stream processors, those processors are relatively simple compared to the shader units of older GPUs. Each is scalar and thus operates on only one component at a time, making it less complex while still being flexible and universal. Scalar shader units have the advantage of being close to 100% efficient in any given case, whereas vector shader units typically use only 30-80% of their computational resources at any given time; previous-generation shader units operated on data in a vector fashion. The simplicity of G80's processors is largely compensated by the high clock speed at which they run and by the efficiency of being scalar, parallel and streaming (see stream processing for more information). The GeForce 8800 runs the various parts of its core at differing clock speeds (clock domains), similar to the operation of the previous NVIDIA G7x GPUs. The stream processors of the 8800 GTX, for example, operate at 1.35 GHz while the rest of the chip runs at 575 MHz.
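
A back-of-the-envelope example of why this matters (the instruction mix below is invented for illustration, not measured data): a 4-wide vector ALU only fills all of its lanes when an operation happens to be four components wide, whereas a scalar stream processor is busy on every clock regardless of operand width.

    #include <cstdio>

    int main()
    {
        // Hypothetical shader instruction mix: 60% 3-component ops (e.g. colour
        // or normal math), 30% 4-component ops, 10% scalar ops.
        const double share[3]      = {0.60, 0.30, 0.10};
        const double components[3] = {3.0, 4.0, 1.0};

        // Fraction of a Vec4 ALU's lanes that actually do useful work.
        double vec4_utilisation = 0.0;
        for (int i = 0; i < 3; ++i)
            vec4_utilisation += share[i] * components[i] / 4.0;

        printf("Vec4 ALU utilisation:   %.1f%%\n", vec4_utilisation * 100.0);  // 77.5%
        printf("Scalar SP utilisation:  100.0%% (one component per SP per clock)\n");
        return 0;
    }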

The GeForce 8800 GTX has 32 texture filtering (TF) units. NVIDIA's new chip also performs correct texture filtering, a major upgrade from previous generations, which used various optimizations and visual tricks to speed up rendering at the expense of filtering quality. It correctly renders an angle-independent anisotropic filtering algorithm along with full trilinear texture filtering. NVIDIA has also introduced new polygon edge anti-aliasing methods, including the ability of the GPU's ROPs to perform both multisample anti-aliasing (MSAA) and HDR lighting at the same time, correcting various limitations of previous generations. GeForce 8 can perform MSAA with both FP16 and FP32 texture formats, and supports 128-bit HDR rendering, an increase from prior cards' 64-bit support. The chip's new anti-aliasing technology, called coverage sampling AA (CSAA), uses Z, color, and coverage information to determine the final pixel color; this optimization allows 16X CSAA to look crisp and sharp.

Another addition is the capability of the GPU to use its processors for physics calculations, a feature NVIDIA calls Quantum Effects Technology. Additionally, NVIDIA has created the Compute Unified Device Architecture (CUDA), an interface for the GeForce 8 cards that exposes the processors within the GPU for general-purpose computation, a developing technique known more generically as general-purpose computing on GPUs (GPGPU). Performance-wise, in the general-purpose computing arena the NVIDIA GeForce 8800 GTX provides up to 197 times the performance of an Intel Core 2 Duo E6700 dual-core Conroe processor running at 2.67 GHz.[2]
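
As an illustration of the programming model CUDA exposes on GeForce 8 hardware, below is a minimal sketch of a general-purpose kernel, a simple element-wise vector addition. The kernel name, buffer sizes and launch parameters are illustrative choices, not taken from NVIDIA documentation.

    #include <cstdio>
    #include <cstdlib>
    #include <cuda_runtime.h>

    // Each thread adds one pair of elements; the CUDA runtime maps the threads
    // onto the GPU's stream processors.
    __global__ void vectorAdd(const float *a, const float *b, float *c, int n)
    {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n)
            c[i] = a[i] + b[i];
    }

    int main()
    {
        const int n = 1 << 20;                    // 1M elements (illustrative size)
        const size_t bytes = n * sizeof(float);

        // Host buffers.
        float *hA = (float *)malloc(bytes);
        float *hB = (float *)malloc(bytes);
        float *hC = (float *)malloc(bytes);
        for (int i = 0; i < n; ++i) { hA[i] = 1.0f; hB[i] = 2.0f; }

        // Device buffers.
        float *dA, *dB, *dC;
        cudaMalloc(&dA, bytes);
        cudaMalloc(&dB, bytes);
        cudaMalloc(&dC, bytes);
        cudaMemcpy(dA, hA, bytes, cudaMemcpyHostToDevice);
        cudaMemcpy(dB, hB, bytes, cudaMemcpyHostToDevice);

        // 256 threads per block is a common (illustrative) choice.
        const int threadsPerBlock = 256;
        const int blocks = (n + threadsPerBlock - 1) / threadsPerBlock;
        vectorAdd<<<blocks, threadsPerBlock>>>(dA, dB, dC, n);

        cudaMemcpy(hC, dC, bytes, cudaMemcpyDeviceToHost);  // implicit synchronisation
        printf("c[0] = %f\n", hC[0]);                       // expect 3.000000

        cudaFree(dA); cudaFree(dB); cudaFree(dC);
        free(hA); free(hB); free(hC);
        return 0;
    }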

Current GPUs, however, operate only on 32-bit values, limiting them to single-precision floating point arithmetic rather than the double-precision (64-bit) arithmetic available on today's CPUs. Compared to the broad range of calculations a standard CPU can perform, GPUs can run only a limited number of calculation types, which restricts them to certain kinds of applications and keeps them from being used as fully functioning standalone CPUs. However, NVIDIA has stated in the CUDA Release Notes Version 0.8 file available on its site that NVIDIA GPUs supporting double-precision (64-bit) floating point arithmetic in hardware will become available late in 2007.[3] Even with only single-precision floating point support at the moment, the GeForce 8 Series is still a great leap forward in the development of GPGPU technology for a broader spectrum of uses.

The GeForce 8 series also supports 10-bit display output, up from 8 bits on previous cards, potentially allowing higher-fidelity color representation on capable displays. NVIDIA's PureVideo HD video rendering technology is an improved version of the original PureVideo introduced with the GeForce 6. The HD edition includes GPU-based hardware acceleration for decoding HD movie formats, post-processing of HD video for enhanced images, and built-in High-bandwidth Digital Content Protection (HDCP) support at the card level.[4] The GeForce 8 series also supports Scalable Link Interface (SLI) for multi-card rendering.

GeForce 8800 Series

[Image: GeForce 8800 GTX, the flagship card in the GeForce 8800 Series (XFX model).]

The 8800 series, codenamed G80, was launched on November 8, 2006 with the release of the GeForce 8800 GTX, with 768 MB of RAM, and the less powerful 8800 GTS,[5] with 640 MB of RAM. The 8800 series replaces the GeForce 79x0 series as NVIDIA's top-performing consumer video card line. The GeForce 8800 GTX and GTS use identical GPU cores, but the GTS model disables parts of the GPU and reduces the RAM size and bus width to lower the product's cost.

The 8800 GTX has 8 clusters of 16 stream processors, for a total of 128 stream processors (called "SPs" by NVIDIA). The 8800 GTS, in comparison, features a G80 processor with 2 of the 8 clusters disabled, leaving 96 stream processors arranged as 6 clusters of 16. As for processing power, NVIDIA rates the GeForce 8800 GTX at 518.4 gigaflops, based on 128 processors dual-issuing a MADD and a MUL at 1.35 GHz: (MADD (2 flops) + MUL (1 flop)) × 1350 MHz × 128 SPs = 518.4 gigaflops.[6] This may not be an achievable peak, however, because the MUL operation is not always available.[7] Even so, the scalar design keeps FLOP efficiency high, so the figure cannot be compared directly to previous-generation GPU architectures, whose vector (Vec3+1) shader designs had instruction-issue limits that kept their gigaflops ratings from ever reaching their theoretical peaks.

Both the 8800 GTX and the 8800 GTS are built on PCBs larger than any previous consumer graphics card, with the 8800 GTX measuring 10.6 in (~26.9 cm) in length and the GTS measuring 9 in (~23 cm). This raises the concern that the cards will not fit inside some smaller computer cases. Both cards have two dual-link DVI connectors and an HDTV/S-Video out connector. The 8800 GTX requires two PCIe power inputs to keep within the PCIe standard, while the GTS requires just one.

Performance-wise, the GeForce 8800 GTX provides 2 to 3 times the performance of a Radeon X1950 XTX 512 MB in shader-intensive PC games; compared to a 7900 GTX 512 MB the gap is even larger, at 2.5 to 3.5 times in favour of the 8800 GTX. The GeForce 8800 GTX also has over 2 times the performance of a 7950 GX2 in shader-intensive PC games. The "G80" GeForce 8800 GTX thus marks the largest technology and performance jump from one generation to the next in NVIDIA's history, and the gap over last-generation GPUs is expected to grow as future games become more shader-intensive. For example, in certain shader operations the 8800 GTX has been found to be 11 times faster than the Radeon X1950 XTX 512 MB and 7900 GTX 512 MB, and NVIDIA believes this trend will continue in future PC games and applications.
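
The gigaflops ratings above (and in the technical summary table below) follow directly from the shader clocks and stream-processor counts under the dual-issue MADD+MUL assumption; a small host-side check:

    #include <cstdio>

    // Theoretical shader throughput under the dual-issue assumption:
    // one MADD (2 FLOPs) plus one MUL (1 FLOP) per stream processor per shader clock.
    static double peak_gflops(int stream_processors, double shader_clock_mhz)
    {
        const double flops_per_sp_per_clock = 3.0;   // MADD (2) + MUL (1)
        return stream_processors * shader_clock_mhz * flops_per_sp_per_clock / 1000.0;
    }

    int main()
    {
        printf("8800 GTS:   %.2f GFLOPS\n", peak_gflops(96, 1200.0));    // 345.60
        printf("8800 GTX:   %.2f GFLOPS\n", peak_gflops(128, 1350.0));   // 518.40
        printf("8800 Ultra: %.2f GFLOPS\n", peak_gflops(128, 1500.0));   // 576.00
        return 0;
    }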

NVIDIA released a 320 MB version of the 8800 GTS on February 12, 2007 in order to tap into a more mainstream market. Aside from the decreased amount of video memory, all other aspects of the 8800 GTS remained unchanged. The unit retails at US$299. At this price it is expected to compete with the ATI Radeon X1950 XTX, in addition to NVIDIA's own GeForce 7900 GTX and 7950 GT. Such competition will likely lower demand for the aging DirectX 9.0 parts, resulting in price drops.[8]

Although a minor manufacturing defect related to a resistor of improper value caused a recall of the 8800 GTX models (but not the 8800 GTS) just two days before the product launch, the launch itself was unaffected.[9]

As of April 2007, the G80 is the largest commercial GPU ever constructed, consisting of 681 million transistors covering a 480 mm² die surface area built on a 90 nm process. In fact, the complete G80 design totals roughly 690 million transistors, but because of the limitations and yields of the 90 nm process NVIDIA split it into two chips: the main shader core, at 681 million transistors, and an NV I/O core of about 5 million transistors. This makes G80 the largest and most complex design yet made for the PC market.

On May 2, 2007, NVIDIA released the 8800 Ultra, which retails at US$829.

Technical Summary

Model | Release date | Codename | Process (nm) | Core clock (MHz) | Fillrate (billion texels/s) | Stream processors | Shader clock (MHz) | Memory bandwidth (GB/s) | Memory type | Bus width (bit) | Memory (MB) | Memory clock (MHz) | Power (W) | Transistors (millions) | Shader power (GFLOPS)
GeForce 8800 GTS [10] [11] [12] | November 8, 2006 | G80 | 90 | 500 | 24.00 | 96 | 1200 | 64.00 | GDDR3 | 320 | 320/640 | 1600 | 108 | 681 (~690) | 345.60
GeForce 8800 GTX [10] [11] [12] | November 8, 2006 | G80 | 90 | 575 | 36.80 | 128 | 1350 | 86.40 | GDDR3 | 384 | 768 | 1800 | 145 | 681 (~690) | 518.40
GeForce 8800 Ultra | May 2, 2007 | G80 | 90 | 612 | 39.16 | 128 | 1500 | 103.68 | GDDR3 | 384 | 768 | 2160 | 175 | 681 (~690) | 576.00
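
The bandwidth column above can likewise be reproduced from the memory clock and bus width, assuming the listed memory clock is the effective (double data rate) figure in MHz:

    #include <cstdio>

    // Peak memory bandwidth in GB/s =
    // effective memory clock (MHz) * bus width (bits) / 8 bits per byte / 1000.
    static double bandwidth_gb_s(double effective_clock_mhz, int bus_width_bits)
    {
        return effective_clock_mhz * bus_width_bits / 8.0 / 1000.0;
    }

    int main()
    {
        printf("8800 GTS:   %.2f GB/s\n", bandwidth_gb_s(1600.0, 320));   // 64.00
        printf("8800 GTX:   %.2f GB/s\n", bandwidth_gb_s(1800.0, 384));   // 86.40
        printf("8800 Ultra: %.2f GB/s\n", bandwidth_gb_s(2160.0, 384));   // 103.68
        return 0;
    }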

GeForce 8600 & 8500 Series

On April 17, 2007, NVIDIA released three new members of the GeForce 8 product family: the GeForce 8500 GT, 8600 GT, and 8600 GTS.

The performance of these cards does not quite meet the expectations set by earlier x600-series parts. The 8600 GT performs on par with the 7900 GS (except at higher resolutions, such as 1600x1200), and the 8600 GTS is around the level of a Radeon X1950 Pro in terms of graphics performance.[13] The 8-series midrange cards also seem to take a larger performance hit than similarly priced competitors when anti-aliasing is enabled. The MSRP for an 8600 GTS is US$199-229, the MSRP for the 8600 GT is US$149-169, and the MSRP for the 8500 GT is set at US$89. Some graphics card manufacturers, such as BFG and XFX, are releasing factory-overclocked versions of the 8600 series, which will cost around US$20-40 more than the non-overclocked editions.[citation needed]

Compared to the earlier GeForce 8 products (the 8800 series), the 8500/8600 family introduces the PureVideo 2 engine. PureVideo 2 improves upon PureVideo by adding more decoding assistance for VC-1 and H.264. With the 8600, NVIDIA claims that PCs with slow CPUs can play HD DVD/Blu-ray titles without skipping frames. The functionality of PureVideo 2 is similar to ATI's Universal Video Decoder.

Technical summary

Model | Release date | Codename | Process (nm) | Core clock (MHz) | Fillrate (billion texels/s) | Stream processors | Shader clock (MHz) | Memory bandwidth (GB/s) | Memory type | Bus width (bit) | Memory (MB) | Memory clock (MHz) | Power (W) | Transistors (millions) | Shader power (GFLOPS)
GeForce 8300 GS (OEM) [14] | May 2007 | G86 | 80 | 450 | 1.80 | 8 | 900 | 6.40 | GDDR2 | 64 | 128/256 | 800 | ? | 210 | 21.60
GeForce 8400 GS (OEM) [14] | May 2007 | G86 | 80 | 450 | 3.60 | 16 | 900 | 6.40 | GDDR2 | 64 | 256/128 | 800 | ? | 210 | 43.20
GeForce 8500 GT [15] [14] | April 17, 2007 | G86 | 80 | 450 | 3.60 | 16 | 900 | 12.80 | GDDR2 | 128 | 256/512 | 800 | 40 | 210 | 43.20
GeForce 8600 GT [14] | April 17, 2007 | G84 | 80 | 540 | 8.64 | 32 | 1190 | 22.40 | GDDR3 | 128 | 256 | 1400 | 43 | 289 | 113.28
GeForce 8600 GTS [14] | April 17, 2007 | G84 | 80 | 675 | 10.80 | 32 | 1450 | 32.00 | GDDR3 | 128 | 256 | 2000 | 71 | 289 | 139.20

GeForce 8M GPUs

On May 10, 2007, NVIDIA announced the availability of the first GeForce 8 notebook GPUs through select OEMs. So far the lineup consists of the 8400M series and the 8600M series chips.[16]

GeForce 8600M Series

Announced chips are the GeForce 8600M GS and GeForce 8600M GT versions.

GeForce 8400M Series

Announced chips are the GeForce 8400M G, GeForce 8400M GS and GeForce 8400M GT.


Future development

  • There may not be a mobile version of the G80 core, as it is too power-hungry for notebooks on a 90 nm process. Instead, the high-end mobile version will have the code name G81M and will be built on an 80 nm process.[4] [5] [6] [7]

There have also been alleged leaks about planned releases for the future of the 8000 series of graphics cards, including an upgrade of the 8800 GTX to an 8900 GTX, rumoured to have 25% more shaders than the 8800 GTX while still using the same G80 core. Also rumoured is the reintroduction of a dual-GPU card, the 8950 GX2, the successor to the 7950 GX2, which combined two GPUs into a single PCIe slot, thus allowing a maximum of four GPUs on an SLI motherboard, a configuration often referred to as Quad SLI.[8]

G90 (G92) related:

According to a statement made by NVIDIA, the "G92" GeForce 9800 GTX will be released in Q4 2007, likely in November 2007, just a month before Christmas. Refer to GeForce 9 Series for more information.

See also

References

  1. ^ Q3 2007 NVIDIA Corporation Earnings Conference. Nvidia.com. November 9, 2006.
  2. ^ [1]
  3. ^ http://developer.nvidia.com/object/cuda.html#documentation
  4. ^ Shrout, Ryan. Nvidia's PureVideo HD Technology - Is the PC Ready for HD Video?, PC Perspective, December 5, 2006.
  5. ^ GeForce 8800 Press Release, NVIDIA.com, accessed November 9, 2006.
  6. ^ http://forum.beyond3d.com/showthread.php?t=33576&page=83
  7. ^ NVIDIA G80: Architecture and GPU Analysis - Page 11
  8. ^ Shilov, Anton. Nvidia Prepares GeForce 8800 GTS 320MB, xbitlabs.com, January 10, 2007.
  9. ^ "Visionary". All 8800 GTX Cards Being Recalled, VR-Zone.com, November 6, 2006.
  10. ^ a b GeForce 8800 specifications, NVIDIA.com, accessed November 9, 2006.
  11. ^ a b NVIDIA GeForce 8800 GTX/GTS Tech Report, TechARP.com, accessed April 10, 2007.
  12. ^ a b NVIDIA GeForce 8800 GTX/GTS Performance Preview, FiringSquad.com, accessed April 10, 2007.
  13. ^ AnandTech:DX10 for the Masses:NVIDIA 8600 and 8500 Series Launch
  14. ^ a b c d e [2], Theinquirer.net, accessed April 12, 2007.
  15. ^ "Mid-range GeForce 8000 series Launch Dates, Prices". [3], DailyTech.com, accessed April 8, 2007.
  16. ^ NVIDIA GeForce 8M Series, nvidia.com, May 10, 2007.