FLOPS

In computing, FLOPS (or flops or flop/s) is an acronym meaning FLoating point Operations Per Second. FLOPS is a measure of a computer's performance, especially in fields of scientific computing that make heavy use of floating-point calculations; it is analogous to instructions per second. Since the final S stands for "second", conservative speakers treat "FLOPS" as both the singular and the plural of the term, although the singular "FLOP" is frequently encountered. Alternatively, the singular flop (or FLOP) is used as an abbreviation for "FLoating-point OPeration", and a flop count is a count of these operations (e.g., the number required by a given algorithm or computer program). In this context, "flops" is simply a plural rather than a rate.
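To illustrate the flop-count sense of the term, the sketch below counts the operations in a naive matrix multiplication (an illustrative example not taken from the article; the function name is ours). Each of the n² output entries requires n multiplications and n additions, for 2n³ flops total.

```python
# Flop count of naive n x n matrix multiplication: every output entry
# needs n multiplications and n additions, so the total is 2 * n**3.
def matmul_flop_count(n):
    flops = 0
    for i in range(n):
        for j in range(n):
            for k in range(n):
                flops += 2  # one multiply, one add per inner-loop step
    return flops

assert matmul_flop_count(10) == 2 * 10**3  # 2000 flops for n = 10
```

A flop count like this says nothing about execution time on its own; dividing it by a machine's sustained FLOPS rate gives a rough runtime estimate.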


Computing devices exhibit an enormous range of performance levels in floating-point applications, so it makes sense to introduce units larger than FLOPS. The standard SI prefixes can be used for this purpose, resulting in such units as the teraFLOPS (1×10^12 FLOPS).
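The prefixed units are plain decimal multiples, as a small sketch (our own helper, assumed for illustration) makes concrete:

```python
# Decimal SI prefixes applied to FLOPS.
PREFIXES = {"kilo": 1e3, "mega": 1e6, "giga": 1e9, "tera": 1e12, "peta": 1e15}

def to_unit(flops, prefix):
    """Express a raw FLOPS figure in the given prefixed unit."""
    return flops / PREFIXES[prefix]

# 360 teraFLOPS expressed in petaFLOPS:
assert to_unit(360e12, "peta") == 0.36
```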


According to Top500.org, the fastest computer in the world as of June 2007 was the IBM Blue Gene/L supercomputer, with a measured peak of 360 TFLOPS. The Cray XT4 held second place at 101.7 TFLOPS.


A basic calculator performs relatively few FLOPS. Each calculation request to a typical calculator requires only a single operation, so there is rarely any need for its response time to be faster than the operator can perceive. Any response time below 0.1 second is perceived as instantaneous by a human operator, so a simple calculator could be said to operate at about 10 FLOPS.
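The 10 FLOPS estimate is just the arithmetic of one operation delivered within the 0.1-second perception threshold:

```python
# One floating-point operation per keypress, delivered within the 0.1 s
# threshold of perceived instantaneity, gives an effective rate of
# 1 / 0.1 = 10 FLOPS.
operations_per_request = 1
response_time_s = 0.1
effective_flops = operations_per_request / response_time_s
assert effective_flops == 10.0
```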

Measuring performance

In order for FLOPS to be useful as a measure of floating-point performance, a standard benchmark must be available on all computers of interest. One example is the LINPACK benchmark.
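The principle behind any such benchmark is rate = operations / time. The toy microbenchmark below (our own sketch, not LINPACK) times a fixed number of multiply-adds; interpreter overhead dominates in pure Python, so it measures far below hardware peak and only illustrates the idea.

```python
import time

# Toy FLOPS microbenchmark (NOT LINPACK): time a known number of
# floating-point multiply-adds and divide by the elapsed wall time.
def measure_flops(n=1_000_000):
    x = 1.0000001
    acc = 0.0
    start = time.perf_counter()
    for _ in range(n):
        acc = acc * x + 1.0  # one multiply + one add = 2 flops
    elapsed = time.perf_counter() - start
    return 2 * n / elapsed, acc

rate, _ = measure_flops()
print(f"~{rate / 1e6:.1f} MFLOPS (Python loop overhead, not hardware peak)")
```

Real benchmarks such as LINPACK instead solve a dense linear system whose exact flop count is known, then divide by the measured solve time.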


FLOPS in isolation are arguably not very useful as a benchmark for modern computers. There are many factors in computer performance other than raw floating-point computation speed, such as I/O performance, interprocessor communication, cache coherence, and the memory hierarchy. This means that supercomputers are in general capable of only a small fraction of their "theoretical peak" FLOPS throughput (obtained by adding together the theoretical peak FLOPS of every element of the system). Even when operating on large, highly parallel problems, their performance will be bursty, mostly due to the residual effects of Amdahl's law. Real benchmarks therefore measure both peak and sustained FLOPS performance.
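Amdahl's law quantifies why sustained throughput falls short of peak: with serial fraction s, the speedup on p processors is S(p) = 1 / (s + (1 − s)/p). A quick sketch of the formula:

```python
# Amdahl's law: speedup on p processors with serial fraction s is
#   S(p) = 1 / (s + (1 - s) / p)
# so aggregate throughput saturates well below p times single-node peak.
def amdahl_speedup(serial_fraction, processors):
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / processors)

# Even with only 1% serial work, 1024 processors give ~91x, not 1024x:
assert round(amdahl_speedup(0.01, 1024)) == 91
```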


For ordinary (non-scientific) applications, integer operations (measured in MIPS) are far more common, so measuring floating-point speed does not accurately predict how a processor will perform on an arbitrary workload. For many scientific jobs, however, such as analysis of data, a FLOPS rating is effective.


Historically, the earliest reliably documented serious use of the floating-point operation as a metric appears to be the AEC's justification to Congress for purchasing a Control Data CDC 6600 in the mid-1960s.


The terminology is currently so confusing that until April 24, 2006, U.S. export control was based upon measurement of "Composite Theoretical Performance" (CTP) in millions of "Theoretical Operations Per Second", or MTOPS. On that date, however, the U.S. Department of Commerce's Bureau of Industry and Security amended the Export Administration Regulations to base controls on Adjusted Peak Performance (APP) in Weighted teraFLOPS (WT).


Records

Today Blue Gene/L is the world's fastest computer, at 360 TFLOPS. On June 26, 2007, IBM announced the second generation of its top supercomputer, dubbed Blue Gene/P and designed to operate continuously at speeds exceeding one petaFLOPS. When configured to do so, it can reach speeds in excess of three petaFLOPS.


In June 2006, a new computer was announced by Japanese research institute RIKEN, the MDGRAPE-3. The computer's performance tops out at one petaFLOPS, over three times faster than the Blue Gene/L. MDGRAPE-3 is not a general-purpose computer, which is why it does not appear in the Top500.org list; it has special-purpose pipelines for simulating molecular dynamics. MDGRAPE-3 houses 4,808 custom processors, 64 servers each with 256 dual-core processors, and 37 servers each containing 74 processors, for a total of 40,314 processor cores, compared with the 131,072 needed for the Blue Gene/L. MDGRAPE-3 is able to do many more computations with fewer chips because of its specialized architecture. The computer is a joint project between RIKEN, Hitachi, Intel, and NEC subsidiary SGI Japan.
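The 40,314-core total quoted above can be checked directly from the component figures in the paragraph:

```python
# MDGRAPE-3 core count, reconstructed from the figures above.
custom_chips = 4808                 # special-purpose pipeline processors
dual_core_cores = 64 * 256 * 2      # 64 servers x 256 dual-core processors
other_cores = 37 * 74               # 37 servers x 74 processors each
total = custom_chips + dual_core_cores + other_cores
assert total == 40314               # matches the quoted figure
```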


Distributed computing uses the Internet to link personal computers to achieve a similar effect:

  • The entire BOINC network averages 663 TFLOPS as of September 8, 2007.[1]
  • SETI@home averages more than 265 TFLOPS.[2]
  • Folding@home has reached over 1 PFLOPS[3] as of September 15, 2007.[4] Since March 22, 2007, PlayStation 3 owners may participate in the Folding@home project; as a result, Folding@home now sustains considerably more than 210 TFLOPS (1,267 TFLOPS as of September 23, 2007). See the current stats[5] for details.
  • Einstein@Home is crunching more than 70 TFLOPS.[6]
  • As of June 2007, GIMPS is sustaining 23 TFLOPS.[7]
  • Intel Corporation has recently unveiled the experimental multi-core POLARIS chip, which achieves 1 TFLOPS at 3.2 GHz. The 80-core chip can raise this to 1.8 TFLOPS at 5.6 GHz, although its thermal dissipation at that frequency exceeds 260 watts.
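The two POLARIS figures are roughly consistent with peak FLOPS scaling linearly with clock frequency on a fixed design (a first-order assumption that ignores voltage and architectural limits):

```python
# First-order check: scale POLARIS's 1 TFLOPS at 3.2 GHz up to 5.6 GHz,
# assuming peak FLOPS is proportional to clock frequency.
scaled_tflops = 1.0 * 5.6 / 3.2
assert abs(scaled_tflops - 1.75) < 1e-9  # close to the quoted 1.8 TFLOPS
```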

As of 2007, the fastest PC processors perform over 30 GFLOPS.[8] GPUs in PCs are considerably more powerful in terms of raw FLOPS. For example, in the GeForce 8 series, the NVIDIA 8800 Ultra performs around 576 GFLOPS on 128 processing elements. This equates to around 4.5 GFLOPS per element, compared with 2.75 GFLOPS per core for the Blue Gene/L. Note that the 8800 series performs only single-precision calculations, and that while GPUs are highly efficient at such calculations, they are not as flexible as a general-purpose CPU; current top-end ATI GPU cards do perform double-precision operations.
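Both per-element figures quoted above follow from simple division of aggregate throughput by element count:

```python
# Per-element throughput from the figures above, both in GFLOPS.
gpu_per_element = 576 / 128          # GeForce 8800 Ultra: 576 GFLOPS / 128 elements
bgl_per_core = 360_000 / 131_072     # Blue Gene/L: 360 TFLOPS / 131,072 cores

assert gpu_per_element == 4.5
assert round(bgl_per_core, 2) == 2.75
```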


Cost of computing

  • 1997: about US$30,000 per GFLOPS, with two Beowulf cluster computers of 16 Pentium Pro processors each[9]
  • 2000, April: $1,000 per GFLOPS, Bunyip, Australian National University; the first sub-US$1/MFLOPS machine. Gordon Bell Prize, 2000.
  • 2000, May: $640 per GFLOPS, KLAT2, University of Kentucky
  • 2003, August: $82 per GFLOPS, KASY0, University of Kentucky
  • 2006, February: about $1 per GFLOPS in ATI PC add-in graphics cards (X1900 architecture); these figures are disputed, as they refer to highly parallelized GPU power.
  • 2007, March: about $0.42 per GFLOPS in the Ambric AM2045.[10]
  • 2007, October: about $0.20 per GFLOPS with the cheapest retail Sony PS3 console, at US$400, which runs at a claimed 2 teraFLOPS; these figures represent the processing power of the GPU. The console's seven CPU cores run collectively at a lower 218 GFLOPS.[11]

This trend toward lower and lower cost for the same computing power follows Moore's law.
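The rate of decline implied by the milestones above can be sketched as a geometric average per year (using the rounded dollar figures from the list):

```python
# Cost-per-GFLOPS milestones from the list above: (year, US$ per GFLOPS).
milestones = [(1997, 30000), (2000, 1000), (2003, 82), (2007, 0.42)]

# Geometric-average yearly price ratio between consecutive milestones.
for (y0, c0), (y1, c1) in zip(milestones, milestones[1:]):
    ratio = (c0 / c1) ** (1 / (y1 - y0))
    print(f"{y0}-{y1}: price per GFLOPS fell ~{ratio:.1f}x per year")
```

A sustained 3-4x yearly drop is considerably faster than the roughly 2x-per-two-years transistor doubling usually quoted for Moore's law, reflecting architectural gains on top of process scaling.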


See also

  • Gordon Bell Prizes

References

  1. ^ Berkeley Open Infrastructure for Network Computing (BOINC)
  2. ^ SETI at home
  3. ^ Folding@home
  4. ^ [1]
  5. ^ http://fah-web.stanford.edu/cgi-bin/main.py?qtype=osstats
  6. ^ Einstein@Home - Server Status
  7. ^ Internet PrimeNet Server Parallel Technology for the Great Internet Mersenne Prime Search
  8. ^ Tom's Hardware's 2007 CPU Charts
  9. ^ Loki and Hyglac
  10. ^ http://www.ambric.com/pdf/MPR_Ambric_Article_10-06_204101.pdf
  11. ^ http://news.bbc.co.uk/2/hi/technology/4554025.stm

The Wikipedia article included on this page is licensed under the GFDL.
Images may be subject to relevant owners' copyright.
All other elements are (c) copyright NationMaster.com 2003-5. All Rights Reserved.
Usage implies agreement with terms.