Zero or nought (0) is the number that precedes one and all positive numbers, and follows negative one and all negative numbers.
Zero is a number that means nothing: null, void, or an absence of value. For example, if the number of one's brothers is zero, then that person has no brothers. If the difference between the numbers of pieces in two piles is zero, the two piles contain the same number of pieces.
Certain calendars, such as the proleptic Gregorian calendar and the proleptic Julian calendar, conventionally omit the year zero.
The numeral or digit zero is used in numeral systems where the position of a digit signifies its value. Successive positions of digits have higher values, so the digit zero is used to skip a position and give appropriate value to the preceding and following digits.
By the mid-second millennium BC, the Babylonians had a sophisticated sexagesimal positional numeral system. The lack of a positional value (or zero) was indicated by a space between sexagesimal numerals. By 300 BC a punctuation symbol (two slanted wedges) had been co-opted as a placeholder in the same Babylonian system.
The Ancient Greeks were unsure about the status of zero as a number: they asked themselves "how can 'nothing' be something?", leading to interesting philosophical and, by the Medieval period, religious arguments about the nature and existence of zero and the vacuum. The paradoxes of Zeno of Elea depend in large part on the uncertain interpretation of zero.
By AD 130 Ptolemy, influenced by Hipparchus and the Babylonians, had begun to use a symbol for zero (a small circle with a long overbar) within a sexagesimal system otherwise using alphabetic Greek numerals. Because it was used alone, not just as a placeholder, this Hellenistic zero was the first true zero in the Old World. In later Byzantine manuscripts of his Syntaxis Mathematica (Almagest), the Hellenistic zero had morphed into the Greek letter omicron (which usually stood for 70).
In the New World, however, the late Olmec had already begun to use a true zero (a shell glyph) several centuries before Ptolemy (possibly by the fourth century BC but certainly by 40 BC), and it became an integral part of Maya numerals. Another true zero was used in tables alongside Roman numerals by 525 (its first known use is by Dionysius Exiguus), but as a word, nulla, meaning nothing, not as a symbol. When division produced zero as a remainder, nihil, also meaning nothing, was used. These medieval zeros were used by all later computists (calculators of Easter). An isolated use of their initial, N, as a true zero symbol appears in a table of Roman numerals by Bede or a colleague about 725.
The first decimal zero was introduced by Indian mathematicians about AD 300. An early study of zero by Brahmagupta dates to 628. By this time it was already known in Cambodia, and it later spread to China and the Islamic world, from where it reached Europe in the 12th century.
The word zero (as well as cipher) comes from Arabic sifr, meaning "empty".
Zero (0) is both a number and a numeral. The natural number following zero is one and no natural number precedes zero. Zero may or may not be counted as a natural number, depending on the definition of natural numbers.
In set theory, the number zero is the size of the empty set: if you do not have any apples, then you have zero apples. In fact, in certain axiomatic developments of mathematics from set theory, zero is defined to be the empty set.
The following are some basic rules for dealing with the number zero. These rules apply for any complex number x, unless otherwise stated.
- Addition: x + 0 = x and 0 + x = x. (That is, 0 is an identity element with respect to addition.)
- Subtraction: x − 0 = x and 0 − x = −x.
- Multiplication: x · 0 = 0 · x = 0.
- Division: 0 / x = 0, for nonzero x. But x / 0 is undefined, because 0 has no multiplicative inverse, a consequence of the previous rule.
- Exponentiation: x^0 = 1, except that the case x = 0 may be left undefined in some contexts. For all positive real x, 0^x = 0.
The expression "0/0" is an "indeterminate form". That does not simply mean that it is undefined; rather, it means that if f(x) and g(x) both approach 0 as x approaches some number, then f(x)/g(x) could approach any finite number, or ∞, or −∞; the result depends on the particular functions f and g. See L'Hôpital's rule.
The sum of 0 numbers is 0, and the product of 0 numbers is 1.
Extended use of zero in mathematics
- Zero is the identity element in an additive group or the additive identity of a ring.
- A zero of a function is a point in the domain of the function whose image under the function is zero. See zero (complex analysis).
- In geometry, the dimension of a point is 0.
- In analytic geometry, 0 is the origin.
- The concept of an "almost impossible" event (an event of probability zero) in probability theory; more generally, the concept of "almost nowhere" in measure theory.
- A zero function is a function with 0 as its only possible output value. A particular zero function is a zero morphism. A zero function is the identity in the additive group of functions.
- The zeros of a function, the preimage of zero, are also called the roots of the function.
- Zero is one of three possible return values of the Möbius function. Passed an integer divisible by a perfect square, that is, one of the form x^2 or x^2·y, the Möbius function returns zero.
- It is the number of n×n magic squares for n = 2.
- It is the number of n-queens problem solutions for n = 2, 3.
Counting from 1 or 0?
Human beings usually count things from one, not zero. Yet in computer science zero has become the conventional starting point. In many early programming languages, arrays start from 1 by default, which is natural for humans. As programming languages developed, it became more common for arrays to start from zero by default (zero-based indexing). This is because, with a one-based index, one must be subtracted from the index to obtain the correct offset of an element from the start of the array.
A null pointer in C is usually represented by the memory address zero. However, it is not required to be zero; some computer architectures use bit patterns other than all-zero bits for their null pointer.
Distinguishing zero from O
The oval-shaped zero (appearing like a rugby ball stood on end) and rectangular letter O together came into use on modern character displays. The zero with a dot in the centre seems to have originated as an option on IBM 3270 controllers (with the drawback that it resembles the Greek letter theta). The slashed zero, identical to the letter O except for the slash, is used in old-style ASCII graphic sets descended from the default typewheel on the venerable ASR-33 Teletype. This format causes problems for certain Scandinavian languages that use Ø as a letter.
The convention which has the letter O with a slash and the zero without was used at IBM and a few other early mainframe makers; this is even more problematic for Scandinavians because it means two of their letters collide. Some Burroughs/Unisys equipment displays a zero with a reversed slash. And yet another convention common on early line printers left zero unornamented but added a tail or hook to the letter-O so that it resembled an inverted Q or cursive capital letter-O.
The typeface used on some European car number plates distinguishes the two symbols by making the O rather egg-shaped and the zero more rectangular, but most of all by opening the zero on the upper right side, so that the circle is no longer closed (as on German plates).
In handwriting, one may not distinguish the 0 and O at all, or may add a slash across the zero to show the difference, although this sometimes causes ambiguity with the symbol for the empty set.
"Zero" as a verb
In computing, zero is the default digit, meaning "none" and serving as an initial value. To zero (or zeroise or zeroize) a set of data means to set every bit in the data to zero (or off). This is usually said of small pieces of data, such as bits or words (especially in the construction "zero out").
To zero can also mean to erase: to discard all data from. This is often said of disks and directories, where "zeroing" need not involve actually writing zeroes throughout the area being zeroed. One may speak of something being "logically zeroed" rather than "physically zeroed".
In firearms, to zero a weapon means to adjust the iron sights or the telescopic sight so that it aims exactly where the bullet strikes at a given distance. If the weapon is zeroed at 100 yards, shooting at a target at 150 yards requires aiming higher, as ballistics dictates.
This article was originally based on material from the Free On-line Dictionary of Computing, which is licensed under the GFDL.