Computer
A computer in a wristwatch.

Computers take numerous physical forms. The first devices that resemble modern computers date to the mid-20th century (around 1940–1941), although the computer concept and various machines similar to computers existed earlier. Early electronic computers were the size of a large room, consuming as much power as several hundred modern personal computers.[1] Modern computers are based on comparatively tiny integrated circuits and are millions to billions of times more capable while occupying a fraction of the space.[2] Today, simple computers may be made small enough to fit into a wristwatch and be powered from a watch battery. Personal computers in various forms are icons of the information age and are what most people think of as "a computer". However, the most common form of computer in use today is by far the embedded computer. Embedded computers are small, simple devices that are often used to control other devices — for example, they may be found in machines ranging from fighter aircraft to industrial robots, digital cameras, and even children's toys.

The ability to store and execute lists of instructions called programs makes computers extremely versatile and distinguishes them from calculators. The Church–Turing thesis is a mathematical statement of this versatility: any computer with a certain minimum capability is, in principle, capable of performing the same tasks that any other computer can perform. Therefore, computers with capability and complexity ranging from that of a personal digital assistant to a supercomputer are all able to perform the same computational tasks, given enough time and storage capacity.

History of computing

Main article: History of computing
The Jacquard loom was one of the first programmable devices.

It is difficult to identify any one device as the earliest computer, partly because the term "computer" has been subject to varying interpretations over time.

Originally, the term "computer" referred to a person who performed numerical calculations (a human computer), often with the aid of a mechanical calculating device. Examples of early mechanical computing devices included the abacus, the slide rule and arguably the astrolabe and the Antikythera mechanism (which dates from about 150–100 BC). The end of the Middle Ages saw a re-invigoration of European mathematics and engineering, and Wilhelm Schickard's 1623 device was the first of a number of mechanical calculators constructed by European engineers.

However, none of those devices fit the modern definition of a computer because they could not be programmed. In 1801, Joseph Marie Jacquard made an improvement to the textile loom that used a series of punched paper cards as a template to allow his loom to weave intricate patterns automatically. The resulting Jacquard loom was an important step in the development of computers because the use of punched cards to define woven patterns can be viewed as an early, albeit limited, form of programmability.

In 1837, Charles Babbage was the first to conceptualize and design a fully programmable mechanical computer that he called "The Analytical Engine".[3] Due to limited finances and an inability to resist tinkering with the design, Babbage never actually built his Analytical Engine.

Large-scale automated data processing of punched cards was performed for the U.S. Census in 1890 by tabulating machines designed by Herman Hollerith; his Tabulating Machine Company later merged into the Computing-Tabulating-Recording Company, which in turn became IBM. By the end of the 19th century a number of technologies that would later prove useful in the realization of practical computers had begun to appear: the punched card, Boolean algebra, the vacuum tube (thermionic valve) and the teleprinter.

During the first half of the 20th century, many scientific computing needs were met by increasingly sophisticated analog computers, which used a direct mechanical or electrical model of the problem as a basis for computation. However, these were not programmable and generally lacked the versatility and accuracy of modern digital computers.

Defining characteristics of five early operational digital computers

Computer | Shown working | Binary | Electronic | Programmable | Turing complete
Zuse Z3 | May 1941 | Yes | No | By punched film stock | Yes (1998)
Atanasoff–Berry Computer | Summer 1941 | Yes | Yes | No | No
Colossus | December 1943 / January 1944 | Yes | Yes | Partially, by rewiring | No
Harvard Mark I – IBM ASCC | 1944 | No | No | By punched paper tape | No
ENIAC | 1944 | No | Yes | Partially, by rewiring | Yes
ENIAC (modified) | 1948 | No | Yes | By Function Table ROM | Yes

EDSAC was one of the first computers to implement the stored program (von Neumann) architecture.
• Konrad Zuse's electromechanical "Z machines". The Z3 (1941) was the first working machine featuring binary arithmetic, including floating point arithmetic and a measure of programmability. In 1998 the Z3 was proved to be Turing complete, making it, in principle, the world's first operational computer.
• The non-programmable Atanasoff–Berry Computer (1941), which used vacuum tube based computation, binary numbers, and regenerative capacitor memory.
• The secret British Colossus computer (1944), which had limited programmability but demonstrated that a device using thousands of tubes could be reasonably reliable and electronically reprogrammable. It was used for breaking German wartime codes.
• The Harvard Mark I (1944), a large-scale electromechanical computer with limited programmability.
• The U.S. Army's Ballistics Research Laboratory ENIAC (1946), which used decimal arithmetic and is sometimes called the first general purpose electronic computer (since Konrad Zuse's Z3 of 1941 used electromagnets instead of electronics). Initially, however, ENIAC had an inflexible architecture which essentially required rewiring to change its programming.

Nearly all modern computers implement some form of the stored program architecture, making it the single trait by which the word "computer" is now defined. By this standard, many earlier devices would no longer be called computers by today's definition, but are usually referred to as such in their historical context. While the technologies used in computers have changed dramatically since the first electronic, general-purpose computers of the 1940s, most still use the von Neumann architecture. The design made the universal computer a practical reality.

Microprocessors are miniaturized devices that often implement stored program CPUs.

Vacuum tube-based computers were in use throughout the 1950s, but were largely replaced in the 1960s by transistor-based devices, which were smaller, faster, cheaper, used less power and were more reliable. These factors allowed computers to be produced on an unprecedented commercial scale. By the 1970s, the adoption of integrated circuit technology and the subsequent creation of microprocessors such as the Intel 4004 caused another leap in size, speed, cost and reliability. By the 1980s, computers had become sufficiently small and cheap to replace simple mechanical controls in domestic appliances such as washing machines. Around the same time, computers became widely accessible for personal use by individuals in the form of home computers and the now ubiquitous personal computer. In conjunction with the widespread growth of the Internet since the 1990s, personal computers are becoming as common as the television and the telephone and almost all modern electronic devices contain a computer of some kind.

Stored program architecture

The defining feature of modern computers which distinguishes them from all other machines is that they can be programmed. That is to say, a list of instructions (the program) can be given to the computer and it will store them and carry them out at some time in the future.

In most cases, computer instructions are simple: add one number to another, move some data from one location to another, send a message to some external device, etc. These instructions are read from the computer's memory and are generally carried out (executed) in the order they were given. However, there are usually specialized instructions to tell the computer to jump ahead or backwards to some other place in the program and to carry on executing from there. These are called "jump" instructions (or branches). Furthermore, jump instructions may be made to happen conditionally so that different sequences of instructions may be used depending on the result of some previous calculation or some external event. Many computers directly support subroutines by providing a type of jump that "remembers" the location it jumped from and another instruction to return to that point.
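
In a high-level language these jumps are usually hidden behind loops and function calls, but the correspondence can be made visible in a short C sketch. All names below are invented for illustration: a label targeted by goto acts as a jump, an if guarding the goto makes the jump conditional, and a function call behaves as the kind of jump that "remembers" where it came from.

    #include <stdio.h>

    /* A subroutine: calling it jumps away while remembering where the
       call came from, and returning jumps back to that point. */
    void report(int n) {
        printf("count is %d\n", n);
    }

    int main(void) {
        int count = 0;
    loop:                      /* a label: a place a jump can target      */
        report(count);         /* jump to the subroutine, then come back  */
        count = count + 1;
        if (count < 3)
            goto loop;         /* a conditional backwards jump (a branch) */
        return 0;
    }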

Program execution might be likened to reading a book. While a person will normally read each word and line in sequence, they may at times jump back to an earlier place in the text or skip sections that are not of interest. Similarly, a computer may sometimes go back and repeat the instructions in some section of the program over and over again until some internal condition is met. This is called the flow of control within the program and it is what allows the computer to perform tasks repeatedly without human intervention.

Comparatively, a person using a pocket calculator can perform a basic arithmetic operation such as adding two numbers with just a few button presses. But to add together all of the numbers from 1 to 1,000 would take thousands of button presses and a lot of time — with a near certainty of making a mistake. On the other hand, a computer may be programmed to do this with just a few simple instructions. For example:

    sum ← 0                  ; set sum to 0
    num ← 1                  ; set num to 1
    loop:                    ; define the beginning of the loop
    sum ← (num + sum)        ; add num to sum and store the new result
    num ← (num + 1)          ; add 1 to num
    if num ≤ 1000 goto loop  ; continue to loop while num ≤ 1000, otherwise continue below
    halt                     ; end of program, stop running

Once told to run this program, the computer will perform the repetitive addition task without further human intervention. It will almost never make a mistake and a modern PC can complete the task in about a millionth of a second.[4]

However, computers cannot "think" for themselves in the sense that they only solve problems in exactly the way they are programmed to. An intelligent human faced with the above addition task might soon realize that instead of actually adding up all the numbers one can simply use the equation

$1 + 2 + 3 + \cdots + n = \frac{n(n+1)}{2}$

and arrive at the correct answer (500,500) with little work.[5] In other words, a computer programmed to add up the numbers one by one as in the example above would do exactly that without regard to efficiency or alternative solutions.
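
Both strategies can be written out in C for comparison. This is an illustrative sketch only, with function names invented for the example: the first function mirrors the pseudocode loop above, the second applies the closed-form equation, and both print 500500.

    #include <stdio.h>

    /* Add the numbers 1..n one at a time, as the pseudocode above does. */
    long sum_by_loop(long n) {
        long sum = 0;
        for (long num = 1; num <= n; num = num + 1)
            sum = sum + num;
        return sum;
    }

    /* Use the closed-form equation n(n+1)/2 instead of looping. */
    long sum_by_formula(long n) {
        return n * (n + 1) / 2;
    }

    int main(void) {
        printf("%ld %ld\n", sum_by_loop(1000), sum_by_formula(1000));
        /* prints: 500500 500500 */
        return 0;
    }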

Programs

A 1970s punched card containing one line from a FORTRAN program. The card reads: "Z(1) = Y + W(1)" and is labelled "PROJ039" for identification purposes.

Large computer programs may take teams of computer programmers years to write, and it is unlikely that the entire program has been written completely in the manner intended. Errors in computer programs are called bugs. Sometimes bugs are benign and do not affect the usefulness of the program; in other cases they might cause the program to fail completely (crash); in yet other cases there may be subtle problems. Sometimes otherwise benign bugs may be used for malicious intent, creating a security exploit. Bugs are usually not the fault of the computer. Since computers merely execute the instructions they are given, bugs are nearly always the result of programmer error or an oversight made in the program's design.[6]

In most computers, individual instructions are stored as machine code with each instruction being given a unique number (its operation code or opcode for short). The command to add two numbers together would have one opcode, the command to multiply them would have a different opcode, and so on. The simplest computers are able to perform any of a handful of different instructions; the more complex computers have several hundred to choose from — each with a unique numerical code. Since the computer's memory is able to store numbers, it can also store the instruction codes. This leads to the important fact that entire programs (which are just lists of instructions) can be represented as lists of numbers and can themselves be manipulated inside the computer just as if they were numeric data. The fundamental concept of storing programs in the computer's memory alongside the data they operate on is the crux of the von Neumann, or stored program, architecture. In some cases, a computer might store some or all of its program in memory that is kept separate from the data it operates on. This is called the Harvard architecture after the Harvard Mark I computer. Modern von Neumann computers display some traits of the Harvard architecture in their designs, such as in CPU caches.
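
The fact that instructions are just numbers held in the same memory as data can be shown with a small C sketch. The opcode names and their numeric values below are invented for the example and do not correspond to any real instruction set.

    #include <stdio.h>

    /* Invented numeric codes standing in for a machine's opcodes. */
    enum { LOAD = 1, ADD = 2, HALT = 3 };

    int main(void) {
        /* A tiny "program" held in ordinary memory — just a list of
           numbers (LOAD 7, ADD 5, HALT), indistinguishable from data. */
        int memory[] = { LOAD, 7, ADD, 5, HALT };
        int length = sizeof memory / sizeof memory[0];

        /* Because instructions are numbers, they can be inspected
           like any other data...                                    */
        for (int i = 0; i < length; i++)
            printf("%d ", memory[i]);   /* prints: 1 7 2 5 3 */
        printf("\n");

        /* ...and even rewritten: change the operand of ADD from 5 to 6. */
        memory[3] = 6;
        return 0;
    }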

While it is possible to write computer programs as long lists of numbers (machine language) and this technique was used with many early computers,[7] it is extremely tedious to do so in practice, especially for complicated programs. Instead, each basic instruction can be given a short name that is indicative of its function and easy to remember — a mnemonic such as ADD, SUB, MULT or JUMP. These mnemonics are collectively known as a computer's assembly language. Converting programs written in assembly language into something the computer can actually understand (machine language) is usually done by a computer program called an assembler. Machine languages and the assembly languages that represent them (collectively termed low-level programming languages) tend to be unique to a particular type of computer. For instance, an ARM architecture computer (such as may be found in a PDA or a hand-held videogame) cannot understand the machine language of an Intel Pentium or the AMD Athlon 64 computer that might be in a PC.[8]

Though considerably easier than in machine language, writing long programs in assembly language is often difficult and error prone. Therefore, most complicated programs are written in more abstract high-level programming languages that are able to express the needs of the computer programmer more conveniently (and thereby help reduce programmer error). High-level languages are usually "compiled" into machine language (or sometimes into assembly language and then into machine language) using another computer program called a compiler.[9] Since high-level languages are more abstract than assembly language, it is possible to use different compilers to translate the same high-level language program into the machine language of many different types of computer. This is part of the means by which software like video games may be made available for different computer architectures such as personal computers and various video game consoles.

The task of developing large software systems is an immense intellectual effort. It has proven, historically, to be very difficult to produce software with an acceptably high reliability, on a predictable schedule and budget. The academic and professional discipline of software engineering concentrates specifically on this problem.

Example

A traffic light showing red.

Suppose a computer is being employed to drive a traffic light. A simple stored program might say:

1. Turn off all of the lights
2. Turn on the red light
3. Wait for sixty seconds
4. Turn off the red light
5. Turn on the green light
6. Wait for sixty seconds
7. Turn off the green light
8. Turn on the yellow light
9. Wait for two seconds
10. Turn off the yellow light

With this set of instructions, the computer would cycle the light continually through red, green, yellow and back to red again until told to stop running the program.

However, suppose there is a simple on/off switch connected to the computer that is intended to be used to make the light flash red while some maintenance operation is being performed. The program might then instruct the computer to:

1. Turn off all of the lights
2. Turn on the red light
3. Wait for sixty seconds
4. Turn off the red light
5. Turn on the green light
6. Wait for sixty seconds
7. Turn off the green light
8. Turn on the yellow light
9. Wait for two seconds
10. Turn off the yellow light
11. If the maintenance switch is NOT turned on then jump to instruction number 2
12. Turn on the red light
13. Wait for one second
14. Turn off the red light
15. Wait for one second
16. Jump to instruction number 11

In this manner, the computer is either running the instructions from number (2) to (11) over and over, or it is running the instructions from (11) down to (16) over and over, depending on the position of the switch.[10]
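
The same control flow can be sketched in C. Everything hardware-related here is hypothetical: set_light, read_maintenance_switch and sleep_seconds stand in for whatever interface a real controller would expose, and are stubbed out so the sketch compiles. As in instruction (11), the switch is tested once per pass.

    #include <stdio.h>

    /* Hypothetical hardware interface, stubbed out so the sketch runs:
       a real controller would talk to lamps, a switch and a clock. */
    enum light { RED, YELLOW, GREEN };

    void set_light(enum light l, int on) {
        const char *names[] = { "red", "yellow", "green" };
        printf("%s %s\n", names[l], on ? "on" : "off");
    }
    int read_maintenance_switch(void) { return 0; }   /* stub: switch off */
    void sleep_seconds(int s) { (void)s; }            /* stub: no delay   */

    int main(void) {
        /* Step 1: turn off all of the lights. */
        set_light(RED, 0); set_light(YELLOW, 0); set_light(GREEN, 0);

        for (int pass = 0; pass < 3; pass++) {   /* a real controller loops forever */
            if (!read_maintenance_switch()) {    /* the test in instruction (11)    */
                /* Steps 2-10: the normal red-green-yellow cycle. */
                set_light(RED, 1);    sleep_seconds(60); set_light(RED, 0);
                set_light(GREEN, 1);  sleep_seconds(60); set_light(GREEN, 0);
                set_light(YELLOW, 1); sleep_seconds(2);  set_light(YELLOW, 0);
            } else {
                /* Steps 12-16: flash red while under maintenance. */
                set_light(RED, 1); sleep_seconds(1);
                set_light(RED, 0); sleep_seconds(1);
            }
        }
        return 0;
    }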

How computers work

A general purpose computer has four main sections: the arithmetic and logic unit (ALU), the control unit, the memory, and the input and output devices (collectively termed I/O). These parts are interconnected by buses, often made of groups of wires.

The control unit, ALU, registers, and basic I/O (and often other hardware closely linked with these) are collectively known as a central processing unit (CPU). Early CPUs were composed of many separate components but since the mid-1970s CPUs have typically been constructed on a single integrated circuit called a microprocessor.

Control unit

Main articles: CPU design and Control unit

The control unit (often called a control system or central controller) directs the various components of a computer. It reads and interprets (decodes) instructions in the program one by one. The control system decodes each instruction and turns it into a series of control signals that operate the other parts of the computer.[11] Control systems in advanced computers may change the order of some instructions so as to improve performance.

A key component common to all CPUs is the program counter, a special memory cell (a register) that keeps track of which location in memory the next instruction is to be read from.[12]

Diagram showing how a particular MIPS architecture instruction would be decoded by the control system.

The control system's function is as follows — note that this is a simplified description and some of these steps may be performed concurrently or in a different order depending on the type of CPU:

1. Read the code for the next instruction from the cell indicated by the program counter.
2. Decode the numerical code for the instruction into a set of commands or signals for each of the other systems.
3. Increment the program counter so it points to the next instruction.
4. Read whatever data the instruction requires from cells in memory (or perhaps from an input device). The location of this required data is typically stored within the instruction code.
5. Provide the necessary data to an ALU or register.
6. If the instruction requires an ALU or specialized hardware to complete, instruct the hardware to perform the requested operation.
7. Write the result from the ALU back to a memory location or to a register or perhaps an output device.
8. Jump back to step (1).

Since the program counter is (conceptually) just another set of memory cells, it can be changed by calculations done in the ALU. Adding 100 to the program counter would cause the next instruction to be read from a place 100 locations further down the program. Instructions that modify the program counter are often known as "jumps" and allow for loops (instructions that are repeated by the computer) and often conditional instruction execution (both examples of control flow). In computer science control flow (or alternatively, flow of control) refers to the order in which the individual statements, instructions or function calls of an imperative or functional program are executed or evaluated. ...
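
As an illustration of how the numbered steps above combine with program-counter arithmetic, here is a toy fetch-decode-execute loop in C. The opcodes, the two-cell instruction format and the sample program are all invented for this sketch; a real control unit implements the cycle in hardware rather than in software.

    #include <stdio.h>

    /* Invented toy encoding: each instruction is two cells, opcode then operand. */
    enum { SET = 1, ADD = 2, JMP = 3, HALT = 4 };

    int main(void) {
        int memory[] = {
            SET, 5,     /* address 0: acc = 5                   */
            JMP, 6,     /* address 2: jump ahead to address 6   */
            ADD, 100,   /* address 4: skipped over by the jump  */
            ADD, 7,     /* address 6: acc = acc + 7             */
            HALT, 0     /* address 8: stop                      */
        };
        int acc = 0;    /* accumulator register */
        int pc = 0;     /* program counter      */

        for (;;) {
            int opcode  = memory[pc];      /* step 1: fetch the instruction code */
            int operand = memory[pc + 1];  /* step 4: read the data it requires  */
            pc += 2;                       /* step 3: increment the counter      */
            switch (opcode) {              /* step 2: decode into an action      */
            case SET:  acc = operand;  break;   /* steps 5-7: execute, write back */
            case ADD:  acc += operand; break;
            case JMP:  pc = operand;   break;   /* a jump simply rewrites pc      */
            case HALT: printf("acc = %d\n", acc);   /* prints: acc = 12 */
                       return 0;
            }                              /* step 8: loop back to step 1 */
        }
    }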

It is noticeable that the sequence of operations that the control unit goes through to process an instruction is in itself like a short computer program - and indeed, in some more complex CPU designs, there is another, yet smaller, computer called a microsequencer that runs a microcode program that causes all of these events to happen.

Arithmetic/logic unit (ALU)

Main article: Arithmetic logic unit

The ALU is capable of performing two classes of operations: arithmetic and logic.

The set of arithmetic operations that a particular ALU supports may be limited to adding and subtracting or might include multiplying or dividing, trigonometric functions (sine, cosine, etc.) and square roots. Some can only operate on whole numbers (integers) whilst others use floating point to represent real numbers — albeit with limited precision. However, any computer that is capable of performing just the simplest operations can be programmed to break down the more complex operations into simple steps that it can perform. Therefore, any computer can be programmed to perform any arithmetic operation — although it will take more time to do so if its ALU does not directly support the operation. An ALU may also compare numbers and return boolean truth values (true or false) depending on whether one is equal to, greater than or less than the other ("is 64 greater than 65?").
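
As an illustration of breaking a complex operation into simple steps, the C sketch below multiplies using nothing but addition, subtraction and comparison. It is written for this article only; real ALUs that lack a multiply instruction are usually helped by a faster shift-and-add routine instead.

    #include <stdio.h>

    /* Multiply two non-negative integers using only addition,
       subtraction and comparison — slower, but built entirely from
       operations even the simplest ALU supports. */
    unsigned long multiply(unsigned long a, unsigned long b) {
        unsigned long product = 0;
        while (b > 0) {       /* comparison: "is b greater than 0?" */
            product += a;     /* addition, repeated b times         */
            b -= 1;           /* subtraction                        */
        }
        return product;
    }

    int main(void) {
        printf("%lu\n", multiply(64, 65));  /* prints: 4160 */
        return 0;
    }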

Logic operations involve Boolean logic: AND, OR, XOR and NOT. These can be useful both for creating complicated conditional statements and processing boolean logic.
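
A brief C illustration, with operand values chosen arbitrarily: the operators &, |, ^ and ~ apply AND, OR, XOR and NOT to each bit of their operands, and comparison results can be combined into the kind of compound conditional statements mentioned above.

    #include <stdio.h>

    int main(void) {
        unsigned a = 12, b = 10;        /* 1100 and 1010 in binary */
        printf("%u %u %u %u\n",
               a & b,      /* AND: 1000 = 8                   */
               a | b,      /* OR:  1110 = 14                  */
               a ^ b,      /* XOR: 0110 = 6                   */
               ~a & 0xF);  /* NOT (low four bits): 0011 = 3   */

        int num = 64;
        if (num > 0 && num < 65)        /* a compound conditional */
            printf("num is between 1 and 64\n");
        return 0;
    }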

Superscalar computers contain multiple ALUs so that they can process several instructions at the same time. Graphics processors and computers with SIMD and MIMD features often provide ALUs that can perform arithmetic on vectors and matrices.

Memory

Main article: Computer storage
Magnetic core memory was popular main memory for computers through the 1960s until it was completely replaced by semiconductor memory.

In almost all modern computers, each memory cell is set up to store binary numbers in groups of eight bits (called a byte). Each byte is able to represent 256 different numbers, either from 0 to 255 or from -128 to +127. To store larger numbers, several consecutive bytes may be used (typically two, four or eight). When negative numbers are required, they are usually stored in two's complement notation. Other arrangements are possible, but are usually not seen outside of specialized applications or historical contexts. A computer can store any kind of information in memory as long as it can be somehow represented in numerical form. Modern computers have billions or even trillions of bytes of memory.
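
The quoted ranges can be checked with a short C sketch: the same eight-bit pattern reads as 255 when treated as unsigned and as -1 in two's complement, and a larger number is assembled from consecutive bytes. The little-endian byte order used below is one common convention, not the only one.

    #include <stdio.h>
    #include <stdint.h>

    int main(void) {
        uint8_t byte = 0xFF;             /* bit pattern 11111111 */
        printf("%u\n", byte);            /* as unsigned: 255     */
        printf("%d\n", (int8_t)byte);    /* as two's complement on today's
                                            machines: -1         */

        /* A larger number stored in two consecutive bytes,
           least-significant byte first (little-endian). */
        uint8_t bytes[2] = { 0x34, 0x12 };
        uint16_t value = bytes[0] | (bytes[1] << 8);
        printf("%u\n", value);           /* 0x1234 = 4660        */
        return 0;
    }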

The CPU contains a special set of memory cells called registers that can be read and written to much more rapidly than the main memory area. There are typically between two and one hundred registers depending on the type of CPU. Registers are used for the most frequently needed data items to avoid having to access main memory every time data is needed. Since data is constantly being worked on, reducing the need to access main memory (which is often slow compared to the ALU and control units) greatly increases the computer's speed.

Computer main memory comes in two principal varieties: random access memory or RAM and read-only memory or ROM. RAM can be read and written to anytime the CPU commands it, but ROM is pre-loaded with data and software that never changes, so the CPU can only read from it. ROM is typically used to store the computer's initial start-up instructions. In general, the contents of RAM are erased when the power to the computer is turned off while ROM retains its data indefinitely. In a PC, the ROM contains a specialized program called the BIOS that orchestrates loading the computer's operating system from the hard disk drive into RAM whenever the computer is turned on or reset. In embedded computers, which frequently do not have disk drives, all of the software required to perform the task may be stored in ROM. Software that is stored in ROM is often called firmware because it is notionally more like hardware than software. Flash memory blurs the distinction between ROM and RAM by retaining data when turned off but being rewritable like RAM. However, flash memory is typically much slower than conventional ROM and RAM, so its use is restricted to applications where high speeds are not required.[13]

In more sophisticated computers there may be one or more RAM cache memories which are slower than registers but faster than main memory. Generally computers with this sort of cache are designed to move frequently needed data into the cache automatically, often without the need for any intervention on the programmer's part.

Input/output (I/O)

Main article: Input/output
Hard disks are common I/O devices used with computers.

Often, I/O devices are complex computers in their own right with their own CPU and memory. A graphics processing unit might contain fifty or more tiny computers that perform the calculations necessary to display 3D graphics. Modern desktop computers contain many smaller computers that assist the main CPU in performing I/O.

While a computer may be viewed as running one gigantic program stored in its main memory, in some systems it is necessary to give the appearance of running several programs simultaneously. This is achieved by having the computer switch rapidly between running each program in turn. One means by which this is done is with a special signal called an interrupt which can periodically cause the computer to stop executing instructions where it was and do something else instead. By remembering where it was executing prior to the interrupt, the computer can return to that task later. If several programs are running "at the same time", then the interrupt generator might be causing several hundred interrupts per second, causing a program switch each time. Since modern computers typically execute instructions several orders of magnitude faster than human perception, it may appear that many programs are running at the same time even though only one is ever executing in any given instant. This method of multitasking is sometimes termed "time-sharing" since each program is allocated a "slice" of time in turn.
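
The interleaving effect can be caricatured in a few lines of C. This cooperative round-robin sketch, with all names invented, only imitates the result; a real system forces the switch with timer interrupts rather than waiting for each task to politely return.

    #include <stdio.h>

    /* Two "programs", each advancing one step per time slice. */
    void task_a(void) { static int i = 0; printf("A%d ", i++); }
    void task_b(void) { static int i = 0; printf("B%d ", i++); }

    int main(void) {
        void (*tasks[])(void) = { task_a, task_b };
        int ntasks = 2;

        /* The "scheduler": hand out slices in strict rotation. */
        for (int slice = 0; slice < 6; slice++)
            tasks[slice % ntasks]();   /* prints: A0 B0 A1 B1 A2 B2 */
        printf("\n");
        return 0;
    }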

Before the era of cheap computers, the principal use for multitasking was to allow many people to share the same computer.

It might seem that multitasking would cause a computer that is switching between several programs to run more slowly, in direct proportion to the number of programs it is running. However, most programs spend much of their time waiting for slow input/output devices to complete their tasks. If a program is waiting for the user to click on the mouse or press a key on the keyboard, then it will not take a "time slice" until the event it is waiting for has occurred. This frees up time for other programs to execute, so that many programs may be run at the same time without unacceptable speed loss.

Multiprocessing

Main article: Multiprocessing
Cray designed many supercomputers that used multiprocessing heavily.

Supercomputers in particular often have highly distinctive architectures that differ significantly from the basic stored-program architecture and from general purpose computers.[14] They often feature thousands of CPUs, customized high-speed interconnects, and specialized computing hardware. Such designs tend to be useful only for specialized tasks due to the large scale of program organization required to successfully utilize most of the available resources at once. Supercomputers usually see usage in large-scale simulation, graphics rendering, and cryptography applications, as well as with other so-called "embarrassingly parallel" tasks.
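
A minimal sketch of an embarrassingly parallel task, using POSIX threads (the split into exactly two halves is arbitrary): each thread sums an independent range of numbers, with no communication needed until the results are combined at the end.

    #include <stdio.h>
    #include <pthread.h>

    /* Each thread sums its own range; the halves are fully independent. */
    struct range { long lo, hi, sum; };

    void *sum_range(void *arg) {
        struct range *r = arg;
        r->sum = 0;
        for (long n = r->lo; n <= r->hi; n++)
            r->sum += n;
        return NULL;
    }

    int main(void) {    /* compile with -lpthread */
        struct range halves[2] = { { 1, 500, 0 }, { 501, 1000, 0 } };
        pthread_t threads[2];

        for (int i = 0; i < 2; i++)
            pthread_create(&threads[i], NULL, sum_range, &halves[i]);
        for (int i = 0; i < 2; i++)
            pthread_join(threads[i], NULL);

        printf("%ld\n", halves[0].sum + halves[1].sum);  /* prints: 500500 */
        return 0;
    }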

Networking and the Internet

Main articles: Computer networking and Internet
Visualization of a portion of the routes on the Internet.

Computers have been used to coordinate information in multiple locations since the 1950s, with the U.S. military's SAGE system the first large-scale example of such a system; it led to a number of special-purpose commercial systems like Sabre.

Further topics

Hardware

Main article: Computer hardware

The term hardware covers all of those parts of a computer that are tangible objects. Circuits, displays, power supplies, cables, keyboards, printers and mice are all hardware.

First generation (mechanical/electromechanical)
  Calculators: Antikythera mechanism, Difference Engine, Norden bombsight
  Programmable devices: Jacquard loom, Analytical Engine, Harvard Mark I, Z3
Second generation (vacuum tubes)
  Calculators: Atanasoff–Berry Computer, IBM 604, UNIVAC 60, UNIVAC 120
  Programmable devices: ENIAC, EDSAC, EDVAC, UNIVAC I, IBM 701, IBM 702, IBM 650, Z22
Third generation (discrete transistors and SSI, MSI, LSI integrated circuits)
  Mainframes: IBM 7090, IBM 7080, System/360, BUNCH
  Minicomputers: PDP-8, PDP-11, System/32, System/36
Fourth generation (VLSI integrated circuits)
  Minicomputers: VAX, AS/400
  4-bit microcomputers: Intel 4004, Intel 4040
  8-bit microcomputers: Intel 8008, Intel 8080, Motorola 6800, Motorola 6809, MOS Technology 6502, Zilog Z80
  16-bit microcomputers: 8088, Zilog Z8000, WDC 65816/65802
  32-bit microcomputers: 80386, Pentium, 68000, ARM architecture
  64-bit microcomputers:[15] x86-64, PowerPC, MIPS, SPARC
  Embedded computers: 8048, 8051
  Personal computers: Desktop computer, Home computer, Laptop computer, Personal digital assistant (PDA), Portable computer, Tablet computer, Wearable computer
Theoretical/experimental: Quantum computer, Chemical computer, DNA computing, Optical computer, Spintronics-based computer

Peripheral devices (input/output)
  Input: Mouse, Keyboard, Joystick, Image scanner
  Output: Monitor, Printer
  Both: Floppy disk drive, Hard disk, Optical disc drive, Teleprinter
Computer buses
  Short range: RS-232, SCSI, PCI, USB
  Long range (computer networking): Ethernet, ATM, FDDI

Software

Main article: Computer software

Software refers to parts of the computer which do not have a material form, such as programs, data, protocols, etc. When software is stored in hardware that cannot easily be modified (such as BIOS ROM in an IBM PC compatible), it is sometimes called "firmware" to indicate that it falls into an uncertain area somewhere between hardware and software.

Operating system
  Unix/BSD: UNIX System V, AIX, HP-UX, Solaris (SunOS), FreeBSD, OpenBSD, NetBSD, IRIX
  GNU/Linux: List of Linux distributions, Comparison of Linux distributions
  Microsoft Windows: Windows 9x, Windows NT, Windows XP, Windows Vista, Windows CE
  DOS: 86-DOS (QDOS), PC-DOS, MS-DOS, FreeDOS
  Mac OS: Mac OS classic, Mac OS X
  Embedded and real-time: List of embedded operating systems
  Experimental: Amoeba, Oberon/Bluebottle, Plan 9 from Bell Labs
Library
  Multimedia: DirectX, OpenGL, OpenAL
  Programming library: C standard library, Standard Template Library
Data
  Protocol: TCP/IP, Kermit, FTP, HTTP, SMTP
  File format: HTML, XML, JPEG, MPEG, PNG
User interface
  Graphical user interface (WIMP): Microsoft Windows, GNOME, QNX Photon, CDE, GEM
  Text user interface: Command line interface, shells
  Other
Application
  Office suite: Word processing, Desktop publishing, Presentation program, Database management system, Scheduling & Time management, Spreadsheet, Accounting software
  Internet access: Browser, E-mail client, Web server, Mail transfer agent, Instant messaging
  Design and manufacturing: Computer-aided design, Computer-aided manufacturing, Plant management, Robotic manufacturing, Supply chain management
  Graphics: Raster graphics editor, Vector graphics editor, 3D modeler, Animation editor, 3D computer graphics, Video editing, Image processing
  Audio: Digital audio editor, Audio playback, Mixing, Audio synthesis, Computer music
  Software engineering: Compiler, Assembler, Interpreter, Debugger, Text editor, Integrated development environment, Performance analysis, Revision control, Software configuration management
  Educational: Edutainment, Educational game, Serious game, Flight simulator
  Games: Strategy, Arcade, Puzzle, Simulation, First-person shooter, Platform, Massively multiplayer, Interactive fiction
  Misc: Artificial intelligence, Antivirus software, Malware scanner, Installer/Package management systems, File manager

Programming languages

Programming languages provide various ways of specifying programs for computers to run. Unlike natural languages, programming languages are designed to permit no ambiguity and to be concise. They are purely written languages and are often difficult to read aloud. They are generally either translated into machine language by a compiler or an assembler before being run, or translated directly at run time by an interpreter. Sometimes programs are executed by a hybrid method of the two techniques. There are thousands of different programming languages — some intended to be general purpose, others useful only for highly specialized applications.

Lists of programming languages: Timeline of programming languages, Categorical list of programming languages, Generational list of programming languages, Alphabetical list of programming languages, Non-English-based programming languages
Commonly used assembly languages: ARM, MIPS, x86
Commonly used high-level languages: BASIC, C, C++, C#, COBOL, Fortran, Java, Lisp, Pascal
Commonly used scripting languages: Bourne script, JavaScript, Python, Ruby, PHP, Perl

Professions and organizations

As the use of computers has spread throughout society, there are an increasing number of careers involving computers. Following the theme of hardware, software and firmware, the brains of people who work in the industry are sometimes known irreverently as wetware or "meatware".

Hardware-related: Electrical engineering, Electronics engineering, Computer engineering, Telecommunications engineering, Optical engineering, Nanoscale engineering
Software-related: Human-computer interaction, Information technology, Software engineering, Scientific computing, Web design, Desktop publishing

The need for computers to work well together and to be able to exchange information has spawned the need for many standards organizations, clubs and societies of both a formal and informal nature.

Standards groups: ANSI, IEC, IEEE, IETF, ISO, W3C
Professional societies: ACM, ACM Special Interest Groups, IET, IFIP
Free/open source software groups: Free Software Foundation, Mozilla Foundation, Apache Software Foundation
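
As an illustration of why such standards matter (a minimal sketch added here, not drawn from the original article), the code below writes a record in JSON, a format standardized as Ecma-404 and IETF RFC 8259, so that any conforming parser, in any language and on any platform, can reconstruct it.

    # Interoperability through a standard format: JSON (Ecma-404 / RFC 8259).
    import json

    record = {"name": "ENIAC", "year": 1946, "power_kw": 174}

    text = json.dumps(record)          # serialize to the standard format
    print(text)                        # {"name": "ENIAC", "year": 1946, "power_kw": 174}
    print(json.loads(text) == record)  # the record round-trips: True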

Notes

1. ^ In 1946, ENIAC consumed an estimated 174 kW. By comparison, a typical personal computer may use around 400 W, over four hundred times less. (Kempf 1961)
2. ^ Early computers such as Colossus and ENIAC were able to process between 5 and 100 operations per second. A modern "commodity" microprocessor (as of 2007) can process billions of operations per second, and many of these operations are more complicated and useful than early computer operations.
3. ^ The Analytical Engine should not be confused with Babbage's difference engine which was a non-programmable mechanical calculator.
4. ^ This program was written in a style similar to programs for the PDP-11 minicomputer and shows some typical things a computer can do. All of the text after the semicolons consists of comments for the benefit of human readers; they have no significance to the computer and are ignored. (Digital Equipment Corporation 1972)
5. ^ Attempts are often made to create programs that can overcome this fundamental limitation of computers. Software that mimics learning and adaptation is part of artificial intelligence.
6. ^ It is not universally true that bugs are solely due to programmer oversight. Computer hardware may fail, or may itself have a fundamental design flaw that produces unexpected results in certain situations. For instance, the Pentium FDIV bug caused some Intel microprocessors in the early 1990s to produce inaccurate results for certain floating point division operations. This was caused by a flaw in the microprocessor design and resulted in a partial recall of the affected devices. (A sketch of the canonical check for this flaw appears after these notes.)
7. ^ Even some later computers were commonly programmed directly in machine code. Some minicomputers like the DEC PDP-8 could be programmed directly from a panel of switches. However, this method was usually used only as part of the booting process. Most modern computers boot entirely automatically by reading a boot program from some non-volatile memory.
8. ^ However, there is sometimes some form of machine language compatibility between different computers. An x86-64 compatible microprocessor like the AMD Athlon 64 is able to run most of the same programs that an Intel Core 2 microprocessor can, as well as programs designed for earlier microprocessors like the Intel Pentiums and Intel 80486. This contrasts with very early commercial computers, which were often one-of-a-kind and totally incompatible with other computers.
9. ^ High level languages are also often interpreted rather than compiled. Interpreted languages are translated into machine code on the fly by another program called an interpreter.
10. ^ Although this is a simple program, it contains a software bug. If the traffic signal is showing red when someone switches the "flash red" switch, it will cycle through green once more before starting to flash red as instructed. This bug is quite easy to fix by changing the program to test the switch repeatedly throughout each "wait" period, but writing large programs that have no bugs is exceedingly difficult. (A sketch of this fix appears after these notes.)
11. ^ The control unit's role in interpreting instructions has varied somewhat in the past. Although the control unit is solely responsible for instruction interpretation in most modern computers, this is not always the case. Some computers include instructions that are interpreted only partially by the control unit, with the remainder of the interpretation performed by another device. This is especially true of specialized computing hardware that may be partially self-contained. For example, EDVAC, the first modern stored-program computer to be designed, used a central control unit that interpreted only four instructions. All of the arithmetic-related instructions were passed on to its arithmetic unit and further decoded there.
12. ^ Instructions often occupy more than one memory address, so the program counter usually increases by the number of memory locations required to store one instruction.
13. ^ Flash memory may also be rewritten only a limited number of times before wearing out, making it less useful for heavy random-access usage. (Verma 1988)
14. ^ However, it is also very common to construct supercomputers out of many pieces of cheap commodity hardware, usually individual computers connected by networks. These so-called computer clusters can often provide supercomputer performance at a much lower cost than customized designs. While custom architectures are still used for many of the most powerful supercomputers, there has been a proliferation of cluster computers in recent years. (TOP500 2006)
15. ^ Most major 64-bit instruction set architectures are extensions of earlier designs. All of the architectures listed in this table existed in 32-bit forms before their 64-bit incarnations were introduced.
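
The division flaw described in note 6 had a well-known trigger. The sketch below (added here for illustration, in Python) computes the canonical check; on an affected Pentium the expression evaluated to 256 rather than approximately 0, because the quotient came back wrong in its fourth decimal digit.

    # Canonical check for the Pentium FDIV flaw (note 6). On a correct FPU
    # this prints (approximately) 0.0; affected chips famously returned 256.
    x, y = 4195835.0, 3145727.0
    print(x - (x / y) * y)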
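
Note 10 describes both a bug and its fix. The sketch below is a hypothetical reconstruction in Python (the names flash_red_requested, set_signal, and run_cycle, and the timings, are assumptions, not taken from the original program): instead of sleeping through an entire phase, the controller polls the switch in short steps, so a "flash red" request interrupts the cycle promptly.

    import time

    # Hypothetical stand-ins for the real hardware interface (assumptions,
    # not from the original program).
    flash_red = False

    def flash_red_requested():
        return flash_red            # in reality: read the physical switch

    def set_signal(color):
        print("signal:", color)     # in reality: drive the signal lamps

    def wait(seconds, step=0.1):
        # The fix from note 10: poll the switch repeatedly throughout each
        # wait period, instead of testing it only once per full cycle.
        deadline = time.monotonic() + seconds
        while time.monotonic() < deadline:
            if flash_red_requested():
                return False        # abandon the current phase at once
            time.sleep(step)
        return True

    def run_cycle():
        # One green-yellow-red cycle; any phase is cut short by the switch.
        for color, duration in (("green", 30), ("yellow", 5), ("red", 30)):
            set_signal(color)
            if not wait(duration):
                return              # caller starts flashing red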

References

Digital Equipment Corporation (1972). PDP-11/40 Processor Handbook. Maynard, Massachusetts: Digital Equipment Corporation.
Kempf, Karl (1961). Historical Monograph: Electronic Computers Within the Ordnance Corps. Aberdeen Proving Ground, Maryland: United States Army.
Meuer, Hans; Strohmaier, Erich; Simon, Horst; Dongarra, Jack (2006). "Architectures Share Over Time". TOP500. Retrieved November 27, 2006.
Verma, G.; Mielke, N. (1988). "Reliability performance of ETOX based flash memories". IEEE International Reliability Physics Symposium.
