GeForce FX

[Image: NVIDIA GeForce FX logo]
[Image: The Dawn demo, released by NVIDIA to showcase the pixel and vertex shader effects of the GeForce FX series]

The GeForce FX (codenamed NV30) is a line of graphics cards from NVIDIA, part of the GeForce series.

Overview

NVIDIA's GeForce FX series is the fifth generation in the GeForce line. With the GeForce 3, NVIDIA introduced programmable shader units into its 3D rendering capabilities, in line with the release of Microsoft's DirectX 8.0. As real-time 3D graphics technology continued to advance, the release of DirectX 9.0 brought a further refinement of the programmable pipeline with the arrival of Shader Model 2.0. The GeForce FX series is NVIDIA's first generation of hardware to support Shader Model 2, and its architecture was a major departure from the GeForce 4 series.


Although it is the fifth major revision in the series of GeForce graphics cards, it was not marketed as a GeForce 5. The FX ("effects") in the name was chosen to emphasize the design's major improvements and new features, and to distinguish the FX series as something greater than a revision of earlier designs. The FX branding was also used to market the fact that the GeForce FX was the first GPU to be a combined effort of NVIDIA's own engineers and those from the previously acquired 3dfx. NVIDIA's intention was to underline the cards' extended capability for cinema-like effects using their numerous new shader units.


The FX features DDR, DDR2 or GDDR3 memory, a 130 nm fabrication process, and Shader Model 2.0/2.0A-compliant vertex and pixel shaders. The series is fully compliant with DirectX 9.0b. The GeForce FX also included an improved VPE (Video Processing Engine), first deployed in the GeForce4 MX; its main upgrade was per-pixel video deinterlacing, a feature first offered in ATI's Radeon but seeing little use until the maturation of Microsoft's DirectX-VA and VMR (video mixing renderer) APIs. Among other features was an improved anisotropic filtering algorithm which, unlike that of its competitor, the Radeon 9700/9800 series, was not angle-dependent and so offered better quality, at some cost in performance. Although NVIDIA reduced the filtering quality in its drivers for a while, the company eventually restored it, and this feature remains one of the high points of the GeForce FX family (NVIDIA later dropped this method of anisotropic filtering with the GeForce 6 series for performance reasons).


The last model, the GeForce FX 5950 Ultra, is comparable to its competitor, ATI Technologies' Radeon 9800 XT.


The advertising campaign for the GeForce FX featured the Dawn fairy demo, the work of several veterans from the computer-animated film Final Fantasy: The Spirits Within. NVIDIA touted it as "The Dawn of Cinematic Computing", while critics noted that it was the strongest use yet of sex appeal to sell graphics cards. It remains probably the best known of NVIDIA's demos.


Delays

The NV30 project was delayed for three key reasons. First, NVIDIA decided to produce an optimized version of the GeForce 3 (NV20), which resulted in the GeForce 4 Ti (NV25), while ATI cancelled its competing optimized chip (R250) and opted instead to focus on the Radeon 9700. Second, NVIDIA was committed to delivering the graphics processor for Microsoft's Xbox console (NV2A); the Xbox venture diverted most of NVIDIA's engineers not only over the NV2A's initial design cycle but also during the mid-life product revisions needed to discourage hackers. Finally, NVIDIA's transition to a 130 nm manufacturing process encountered unexpected difficulties. NVIDIA had ambitiously selected TSMC's then state-of-the-art (but unproven) low-k dielectric 130 nm process node. After sample silicon wafers exhibited abnormally high defect rates and poor circuit performance, NVIDIA was forced to re-tool the NV30 for a conventional (FSG) 130 nm process node. (These manufacturing difficulties spurred the company to search for a second foundry; NVIDIA selected IBM to fabricate several future GeForce chips, citing IBM's process technology leadership, yet curiously avoided IBM's low-k process.)


Disappointment

Analysis of the Hardware

[Image: GeForce FX 5800]

Hardware enthusiasts saw the GeForce FX series as a disappointment: it did not live up to expectations. NVIDIA had aggressively hyped the card throughout the summer and fall of 2002 to counter ATI Technologies' fall release of the powerful Radeon 9700. ATI's very successful Shader Model 2 card arrived several months before NVIDIA's first NV30 board, the GeForce FX 5800.


When the FX 5800 launched, extensive testing by hardware review websites found that it was not a match for the Radeon 9700, especially when pixel shading was involved. The 5800 had roughly a 30% memory bandwidth deficit, caused by its narrower 128-bit memory bus (compared with ATI's 256-bit bus). The card used expensive, hot DDR2 RAM, while ATI was able to use cheaper, lower-clocked DDR SDRAM with its wider bus. And while the R300 core used in the 9700 could output 8 pixels per clock with its 8 pipelines, the NV30 turned out to be a 4-pipeline chip. Thanks to the fast RAM and the 130 nm process, NVIDIA was able to clock both components significantly higher than ATI, closing these gaps somewhat, but ATI's architecturally more robust solution meant the FX 5800 still failed to defeat the older Radeon 9700.

The initial version of the GeForce FX (the 5800) was so large that it required two slots, with a massive heat sink and blower arrangement called "Flow FX" that produced a great deal of noise. This earned it the nickname "Dustbuster", and loud graphics cards are still often compared to the GeForce FX 5800 for this reason. To make matters worse, ATI's refresh of the Radeon 9700, the Radeon 9800, arrived shortly after NVIDIA's boisterous launch of the disappointing FX 5800 and brought a significant performance boost over the already superior 9700, further separating the FX 5800 from its competition.
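The bandwidth deficit described above follows directly from bus width and memory clock. A minimal sketch of the arithmetic, using the FX 5800 Ultra's figures from the model table later in this article (treat the numbers as illustrative):

```python
def memory_bandwidth_gbs(bus_width_bits: int, mem_clock_mhz: float,
                         transfers_per_clock: int = 2) -> float:
    """Peak memory bandwidth in GB/s (DDR-style RAM transfers twice per clock)."""
    bytes_per_transfer = bus_width_bits / 8
    return bytes_per_transfer * mem_clock_mhz * transfers_per_clock * 1e6 / 1e9

# FX 5800 Ultra: 128-bit bus, 500 MHz DDR2 -> 16.0 GB/s peak.
print(memory_bandwidth_gbs(128, 500))   # 16.0
# A 256-bit bus reaches the same figure at only half the memory clock,
# which is why ATI could get away with cheaper, lower-clocked DDR.
print(memory_bandwidth_gbs(256, 250))   # 16.0
```

The doubling of the bus width, not raw clock speed, is what gave the R300 boards their headroom.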


With regard to the much-vaunted Shader Model 2 capabilities of the NV3x series, performance was shockingly poor. The chips were designed for a mixed-precision programming methodology: 64-bit FP16 where high-precision math was unnecessary to maintain image quality, and the 128-bit FP32 mode only when absolutely necessary. The GeForce FX architecture was also extremely sensitive to instruction ordering in pixel shaders. This demanded more complicated programming from developers, who had to concern themselves not only with shader mathematics and instruction order but also with testing whether they could get by with lower precision. The R300-based cards from ATI, by contrast, did not benefit from partial precision at all, because they were designed purely around DirectX 9's required minimum of 96-bit FP24 for full precision. The NV30, NV31, and NV34 were further handicapped by containing a mixture of DirectX 7 fixed-function T&L units, DirectX 8 integer pixel shaders, and DirectX 9 floating-point pixel shaders; the R300 chips emulated these older functions on pure Shader Model 2 hardware, so for the same transistor budget far more transistors could be devoted to SM2 performance. For NVIDIA's mixed hardware, this meant non-optimal performance in pure SM2 code, since only a portion of the chip could perform that math, and programmers tended to neglect partial-precision optimizations because ATI's chips performed far better without any extra effort. NVIDIA released several guidelines for writing GeForce FX-optimized code over the product's lifetime and worked with Microsoft to create a special shader profile, "Shader Model 2.0A", which generated optimal code for the GeForce FX and improved performance noticeably.

It was later found that even with partial precision and Shader Model 2.0A, the GeForce FX's performance in shader-heavy applications trailed the competition. The GeForce FX nevertheless remained competitive in OpenGL applications, largely because most OpenGL applications use manufacturer-specific extensions, which are perfectly optimized for the target hardware, to support advanced features and obtain the best possible performance.
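The precision trade-off is easy to see numerically. Python's struct module can round a value through IEEE 754 half (FP16) and single (FP32) precision; a short sketch (FP24 has no standard software representation, so it is omitted):

```python
import struct

def round_fp16(x: float) -> float:
    # Squeeze a value through IEEE 754 half precision (10 mantissa bits).
    return struct.unpack('<e', struct.pack('<e', x))[0]

def round_fp32(x: float) -> float:
    # Squeeze a value through IEEE 754 single precision (23 mantissa bits).
    return struct.unpack('<f', struct.pack('<f', x))[0]

# FP16 resolves only ~3 decimal digits: above 2048 the spacing between
# representable half-precision values is already 2.0, so intermediate
# per-pixel math can visibly drift.
print(round_fp16(2049.0))  # 2048.0 -- information lost
print(round_fp32(2049.0))  # 2049.0 -- intact
```

This is exactly the judgment call partial-precision hints forced on developers: whether a given intermediate result could survive rounding like this without visible artifacts.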


To industry analysts, the GeForce FX's poor Shader Model 2.0 performance was evidence of bad architectural decisions. A contractual dispute over the pricing of the Xbox's NV2A graphics processor had led Microsoft to withhold the Shader Model 2.0 specifications (in DirectX 9.0) from NVIDIA, and the FX's unusual feature set and pipeline architecture have been attributed directly to NVIDIA's designers mis-guessing the direction of the Direct3D API. NVIDIA felt confident that Microsoft would base DirectX 9's shader model on NVIDIA's own Cg programming language; Microsoft instead chose the High Level Shader Language (HLSL) model, which misled NVIDIA's design teams. The shader designs were also focused differently: while ATI designed the R300 series to support the minimum DirectX 9 requirements and optimized for speed, NVIDIA designed the GeForce FX's shaders to offer far more capability than the DirectX 9 specifications required. NVIDIA heavily promoted this fact during the initial launch of the series, but it backfired badly when it emerged that actually using the shaders in the way NVIDIA promoted produced terrible, effectively unusable performance. The succeeding generations of GeForces would discard the FX designation, reverting to the labels GeForce 6 and GeForce 7.


The FX series was a moderate commercial success, but because of its delayed introduction and its flaws, NVIDIA ceded market leadership to ATI's Radeon 9700. Due to market demand and the FX's deficiencies as a successor, NVIDIA extended the production life of the aging GeForce 4, keeping both the FX and the 4 series in production for some time, at great expense.


Valve's Presentation

In late 2003, the GeForce FX series became known for poor performance with DirectX 9 vertex and pixel shaders, thanks to a very vocal presentation by the popular game developer Valve Software. Early indicators of potentially poor Pixel Shader 2.0 performance had come from synthetic benchmarks (such as 3DMark 2003), but outside of the developer community and tech-savvy computer gamers, few mainstream users were aware of such issues. Then Valve dropped a bombshell on the gaming public: using a pre-release build of the highly anticipated Half-Life 2, built on the "Source" engine, Valve published benchmarks revealing a full generational gap (80-120% or more) between the GeForce FX 5900 Ultra and the ATI Radeon 9800. In Shader 2.0-enabled game levels, NVIDIA's top-of-the-line FX 5900 Ultra performed about as fast as ATI's mainstream Radeon 9600, which cost approximately a third as much. Valve had initially planned to support partial floating-point precision (FP16) to optimize for NV3x, but eventually concluded that this would take far too long. As noted earlier, ATI's cards did not benefit from FP16 mode, so all of the work would have been solely for NVIDIA's NV3x cards, a niche too small to be worth the effort, especially at a time when DirectX 8 cards such as the GeForce4 were still far more prevalent than DirectX 9 cards. When Half-Life 2 was released a year later, Valve made all GeForce FX hardware default to the game's DirectX 8 shaders in order to avoid the FX series' poor Shader 2.0 performance.


It is possible to force Half-Life 2 to run in DirectX 9 mode on all cards with a simple tweak to a configuration file. When this was tried, users and reviewers noted a significant performance loss on NV3x cards, with only the top-of-the-line variants (5900 and 5950) remaining playable. However, an unofficial fan-made patch that optimized the Half-Life 2 shaders for the GeForce FX allowed users of lower-end cards (5600 and 5700) to play the game comfortably in DirectX 9 mode, and considerably improved performance on the FX 5800, 5900 and 5950. This only underlined that the GeForce FX was a poor performer when DX9 shaders were not optimized for its architecture.


Questionable Tactics

NVIDIA's GeForce FX era was one of great controversy for the company. The competition had soundly beaten it on the technological front, and the only way to make the FX chips competitive with the Radeon R300 chips was to optimize the drivers to the extreme.


This took several forms. NVIDIA has historically been known for impressive OpenGL driver performance and quality, and the FX series certainly maintained this. With image quality in both Direct3D and OpenGL, however, the company began aggressively applying questionable optimization techniques not seen before. It started with filtering optimizations: changing how trilinear filtering operated on game textures reduced its accuracy, and thus its quality, visibly. Anisotropic filtering also saw dramatic tweaks that limited its use on as many textures as possible to save memory bandwidth and fillrate. Tweaks to these kinds of texture filtering can often be spotted in games as a shimmering phenomenon on floor textures as the player moves through the environment, often signifying poor transitions between mip maps. Changing the driver settings to "High Quality" can alleviate this at a cost in performance.
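The trilinear reduction described above is commonly nicknamed "brilinear" filtering. A hypothetical sketch of the idea (the band width is an illustrative parameter, not NVIDIA's actual value): full trilinear blends adjacent mip levels smoothly across the whole fractional LOD range, while the reduced version blends only in a narrow band around the crossover and otherwise samples a single mip level.

```python
def trilinear_blend(lod: float) -> float:
    """Weight of the coarser mip level under full trilinear filtering."""
    return lod - int(lod)  # smooth 0..1 blend across every mip transition

def brilinear_blend(lod: float, band: float = 0.25) -> float:
    """Reduced-accuracy blend: interpolate only inside a narrow band around
    the mip crossover, sampling a single level elsewhere. Cheaper, but the
    abrupt switches show up as the shimmering bands noted in the text."""
    frac = lod - int(lod)
    lo, hi = 0.5 - band / 2, 0.5 + band / 2
    if frac <= lo:
        return 0.0          # only the finer mip level is sampled
    if frac >= hi:
        return 1.0          # only the coarser mip level is sampled
    return (frac - lo) / band

print(trilinear_blend(3.25))   # 0.25 -- gradual transition
print(brilinear_blend(3.25))   # 0.0  -- snapped to a single level
```

Skipping the blend over most of the LOD range saves a second bilinear fetch per texel, which is exactly the bandwidth and fillrate saving the optimization was after.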


NVIDIA also began clandestinely replacing pixel shader code in software with hand-coded, lower-accuracy optimized versions, by detecting which program was being run. These "tweaks" were especially noticed in benchmark software from Futuremark. In 3DMark03, NVIDIA was found to have gone to extremes to limit scene complexity, through driver shader swaps and aggressive hacks that prevented parts of the scene from rendering at all, artificially boosting the FX series' scores. Side-by-side analysis of screenshots in games and 3DMark03 showed vast differences between what a Radeon 9800/9700 displayed and what the FX series was doing. NVIDIA also publicly attacked the usefulness of these programs and the techniques used within them, in order to undermine their influence on consumers.
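The application-detection mechanism can be illustrated with a toy sketch (the names and the replacement table are hypothetical, not NVIDIA's actual driver internals): the driver keys off the running executable's name and substitutes a hand-tuned shader for the one the application supplied.

```python
import os

# Hypothetical lookup: executable name -> hand-tuned replacement shader.
HAND_TUNED = {
    "3dmark03.exe": "clipped_scene_fp16_shader",
}

def select_shader(executable_path: str, compiled_shader: str) -> str:
    """Return the driver-substituted shader if the running program is on
    the list, otherwise the shader the application actually supplied."""
    name = os.path.basename(executable_path).lower()
    return HAND_TUNED.get(name, compiled_shader)

print(select_shader("C:/benchmarks/3DMark03.exe", "generic_sm2_shader"))
# clipped_scene_fp16_shader -- benchmark detected, shader swapped
print(select_shader("C:/games/game.exe", "generic_sm2_shader"))
# generic_sm2_shader -- unknown program, left untouched
```

This is also why Futuremark's later countermeasure of renaming executables and shuffling shader code between releases was effective at exposing the substitutions.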


In short, NVIDIA programmed its driver to look for specific software and apply aggressive optimizations tailored to the limitations of the weak NV3x hardware. Upon discovery of these tweaks there was a very vocal uproar from the enthusiast community and from several popular hardware analysis websites. Disabling most of the optimizations, however, showed that NVIDIA's hardware was simply incapable of rendering the scenes at a level of detail comparable to ATI's, so most of the optimizations stayed, except in 3DMark, where Futuremark began updating its software and screening driver releases for hacks.


Both NVIDIA and ATI have historically optimized drivers in this way, but NVIDIA went to a new extreme with the FX series. Both companies still optimize their drivers for specific applications today (2006), but a now better-educated and more watchful user community keeps a tight rein on the results of these optimizations.


Competitive Response

By early 2003, ATI had captured a considerable chunk of the high-end graphics market, and its popular Radeon 9600 was dominating the mid-to-high performance segment as well. In the meantime, NVIDIA introduced the mid-range 5600 and low-end 5200 models to address the mainstream market. With conventional single-slot cooling and a more affordable price tag, the 5600 had respectable performance but failed to measure up to its direct competitor, the Radeon 9600. In fact, the mid-range GeForce FX parts did not even advance performance over the chips they were designed to replace, the GeForce 4 Ti: in DirectX 8 applications, the 5600 lost to or merely matched the Ti 4200. Likewise, the entry-level FX 5200 performed only about as well as the GeForce 4 MX 460, despite possessing a far better 'checkbox' feature set. The FX 5200 was easily matched in value by ATI's older R200-based Radeon 9000-9250 series and outperformed by the even older Radeon 8500.


With the launch of the GeForce FX 5900, NVIDIA fixed many of the problems of the 5800. While the 5800 used fast but hot and expensive DDR2 on a 128-bit memory bus, the 5900 reverted to slower, cheaper DDR but more than made up for it with a wider 256-bit bus. The 5900 performed somewhat better than the Radeon 9800 in everything that did not use shaders heavily, and had a quieter cooling system than the 5800, but most 5900-based cards still occupied two slots (the Radeon 9700 and 9800 were both single-slot cards). By mid-2003, ATI's top product, the Radeon 9800, was outselling NVIDIA's top-line FX 5900, perhaps the first time ATI had displaced NVIDIA as market leader.

[Image: GeForce FX 5950]

NVIDIA later attacked ATI's mid-range card, the Radeon 9600, with the GeForce FX 5700 and 5900 XT. The 5700 was a new chip sharing the architectural improvements of the 5900's NV35 core, but its use of GDDR2 memory kept prices high, leading NVIDIA to introduce the FX 5900 XT: identical to the 5900, but clocked lower and with slower memory.


The final GeForce FX model released was the 5950 Ultra, a 5900 Ultra with higher clock speeds. It did not prove particularly popular, as it was not much faster than the 5900 Ultra yet commanded a considerable price premium. The board was fairly competitive with the Radeon 9800 XT, again as long as pixel shaders were lightly used.


The Way It's Meant To Be Played

NVIDIA debuted a new campaign to motivate developers to optimize their titles for NVIDIA hardware at the Game Developers Conference (GDC) in 2002. In exchange for being consciously optimized for NVIDIA graphics solutions, participating games received the added publicity of NVIDIA's program. The program aims at delivering the best possible user experience on the GeForce line of graphics processing units.


Windows Vista and GeForce FX PCI cards

Although ATI's DirectX 9 cards clearly surpassed the GeForce FX series among gamers, NVIDIA may still get the last laugh with the release of Windows Vista, which requires DirectX 9 support for its signature Windows Aero interface.


Many integrated-graphics users without AGP or PCIe slots are likely to demand DirectX 9 PCI video cards for Vista upgrades. Except for the now-discontinued Volari V3XT chip from XGI Technology and two PCI cards BFG Technologies has made with the GeForce 6200 chip, all such cards to date use GeForce FX-series chips (most use the FX 5200, but some use the FX 5500 or FX 5700 LE).


ATI has not only refused (so far) to put its DirectX 9 chips in PCI cards, but may also have helped assure NVIDIA's dominance of the field by buying some of XGI's assets, helping XGI exit the graphics card business in early 2006. [1]


GeForce FX Models

Name           Codename Core Design Clocks
core/mem
Memory Bus Architecture Info
Model | Core | Design (VS:PP:ROP) | Clock core/mem (MHz) | Memory bus | Notes
FX 5200 | NV34 | 1:2:4 | 250/200 | 64 or 128 bit | Entry-level chip; replacement for the GeForce4 MX family. The Quadro FX 330, 500 and 600 are based on the GeForce FX 5200. Lacks IntelliSample technology: no lossless color compression or Z compression. The PCX variant uses an AGP-to-PCIe bridge chip for use on PCIe motherboards. Has 4 pixel pipelines when no pixel shading is used. Each pixel pipe = 1 FP32 ALU handling 2 TMUs + 2 FX12 mini-ALUs (each can do 2 MULs, or 1 ADD, or 1 MAD).
FX 5200 Ultra | NV34 | 1:2:4 | 325/325 | 128 bit
PCX 5300 | NV34 | 1:2:4 | 250/325 | 64 or 128 bit
FX 5500 | NV34 | 1:2:4 | 270/200 | 128 bit
FX 5600 | NV31 | 1:2:4 | 325/275 | 64 or 128 bit | Midrange chip; sometimes slower than the GeForce4 Ti 4200. No Quadro equivalent. Actually has 3 vertex shaders, but 2 are defective. Has 4 pixel pipelines when no pixel shading is used. Each pixel pipe = 1 FP32 ALU handling 2 TMUs + 2 FX12 mini-ALUs (each can do 2 MULs, or 1 ADD, or 1 MAD).
FX 5600 Ultra | NV31 | 1:2:4 | 350/350 | 128 bit
FX 5600 XT | NV31 | 1:2:4 | 235/200 | 128 bit
FX 5700 | NV36 | 3:2:4 | 425/250 | 128 bit | NV36, like NV35, swapped the hardwired DirectX 7 T&L units and DirectX 8 integer pixel shader units for DirectX 9 floating-point units. Quadro equivalent is the Quadro FX 1100. Later models were equipped with GDDR3, clocked higher than the DDR2 modules previously used; on the Ultra, a RAM speed of 475 MHz was also seen. The PCX variants use an AGP-to-PCIe bridge chip for use on PCIe motherboards. Has 4 pixel pipelines when no pixel shading is used. Each pixel pipe = 1 FP32 ALU handling 2 TMUs + 2 FP32 mini-ALUs (each can do 1 MUL, or 1 ADD, or 1 FP16 MAD).
FX 5700 LE | NV36 | 3:2:4 | 250/200 | 128 bit
FX 5700 Ultra | NV36 | 3:2:4 | 475/450 | 128 bit (DDR2/GDDR3)
PCX 5700 | NV36 | 3:2:4 | 425/250 | 128 bit
PCX 5750 | NV36 | 3:2:4 | 475/425 | 128 bit (GDDR3)
FX 5800 | NV30 | 3:4:4 | 400/400 | 128 bit (DDR2) | Production was troubled by the migration to TSMC's 130 nm process. Produced a great deal of heat; the cooler was nicknamed the 'Dustbuster', 'Vacuum Cleaner', or 'Hoover' by some sites, and NVIDIA later released a video mocking it. Due to manufacturing delays it was quickly replaced by the on-schedule NV35. Its Quadro siblings, the Quadro FX 1000 and 2000, were somewhat more successful. Double Z fillrate (helps shadowing). Each pixel pipe = 1 FP32 ALU handling 2 TMUs + 2 FX12 mini-ALUs (each can do 2 MULs, or 1 ADD, or 1 MAD).
FX 5800 Ultra | NV30 | 3:4:4 | 500/500 | 128 bit (DDR2)
FX 5900 | NV35 | 3:4:4 | 400/425 | 256 bit | Swapped the hardwired DirectX 7 T&L units and DirectX 8 integer pixel shader units for DirectX 9 floating-point units. Introduced a new feature called 'UltraShadow' and was upgraded to the CineFX 2.0 specification. Removed the noisy cooler, but still occupied the adjacent PCI slot by default. Quadro equivalents are the Quadro FX 700 and 3000. The PCX variant uses an AGP-to-PCIe bridge chip for use on PCIe motherboards. Double Z fillrate (helps shadowing). Each pixel pipe = 1 FP32 ALU handling 2 TMUs + 2 FP32 mini-ALUs (each can do 1 MUL, or 1 ADD, or 1 FP16 MAD).
FX 5900 Ultra | NV35 | 3:4:4 | 450/425 | 256 bit
PCX 5900 | NV35 | 3:4:4 | 350/275 | 256 bit
FX 5900 XT | NV35 | 3:4:4 | 400/350 | 256 bit
FX 5950 | NV38 | 3:4:4 | 475/475 | 256 bit | Essentially a speed-bumped GeForce FX 5900, with some antialiasing and shader-unit tweaks in hardware. The PCX variant uses an AGP-to-PCIe bridge chip for use on PCIe motherboards. Quadro equivalent is the Quadro FX 1300.
PCX 5950 | NV38 | 3:4:4 | 350/475 | 256 bit
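As a rough guide to the clock and memory-bus columns above, peak memory bandwidth follows from the effective (double-data-rate) memory clock and the bus width. A minimal sketch, using the standard bandwidth formula (the helper function below is illustrative, not from the source):

```python
def peak_bandwidth_gbps(mem_clock_mhz: float, bus_width_bits: int) -> float:
    """Theoretical peak memory bandwidth in GB/s for a DDR-type memory interface."""
    effective_mhz = mem_clock_mhz * 2           # DDR transfers twice per clock
    bytes_per_transfer = bus_width_bits / 8     # bus width in bytes
    return effective_mhz * bytes_per_transfer / 1000  # MB/s -> GB/s

# FX 5800 Ultra: 500 MHz DDR2 on a 128-bit bus
print(peak_bandwidth_gbps(500, 128))   # 16.0 GB/s
# FX 5900 Ultra: 425 MHz memory on a 256-bit bus
print(peak_bandwidth_gbps(425, 256))   # 27.2 GB/s
```

This illustrates why the 256-bit NV35 boards had far more bandwidth headroom than the 128-bit NV30 despite similar memory clocks.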

Core Design = # Vertex Shaders : # Pixel Pipelines : # ROPs
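The figures in that legend combine with the core clock to give theoretical fillrates. A minimal sketch of the standard calculation (the function name is illustrative; the 2-TMUs-per-pipe default comes from the table notes):

```python
def fillrates(core_design: str, core_clock_mhz: int, tmus_per_pipe: int = 2):
    """Theoretical fillrates from a 'VS:PP:ROP' core-design string.

    Pixel fillrate = ROPs * core clock (Mpixels/s)
    Texel fillrate = pixel pipelines * TMUs per pipe * core clock (Mtexels/s)
    """
    vertex_shaders, pixel_pipes, rops = map(int, core_design.split(":"))
    pixel_mps = rops * core_clock_mhz
    texel_mts = pixel_pipes * tmus_per_pipe * core_clock_mhz
    return pixel_mps, texel_mts

# FX 5900 Ultra: 3:4:4 design at 450 MHz core
print(fillrates("3:4:4", 450))   # (1800, 3600)
```

Note these are peak figures; as the table notes say, the effective pipeline count drops when pixel shading is in use.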



See also

Comparison of NVIDIA graphics processing units

External links


NVIDIA Gaming Graphics Processors
Early Chips: NV1 · NV2
DirectX 5/6: RIVA 128 · RIVA TNT · RIVA TNT2
DirectX 7.x: GeForce 256 · GeForce2
DirectX 8.x: GeForce3 · GeForce4
DirectX 9.x: GeForce FX · GeForce 6 · GeForce 7
DirectX 10: GeForce 8
Other NVIDIA Technologies
nForce: 220/415/420 · 2 · 3 · 4 · 500 · SoundStorm
Professional Graphics: Quadro
Software: Gelato · Cg
Consumer Electronics: GoForce
Game Consoles: Xbox · PlayStation 3


The Wikipedia article included on this page is licensed under the GFDL.
Images may be subject to relevant owners' copyright.
All other elements are (c) copyright NationMaster.com 2003-5. All Rights Reserved.
Usage implies agreement with terms.