Integer (computer science)

In computer science, the term integer is used to refer to any data type which can represent some subset of the mathematical integers. These are also known as integral data types.


Value and representation

The value of a datum with an integral type is the mathematical integer that it corresponds to. The representation of this datum is the way the value is stored in the computer’s memory. Integral types may be unsigned (capable of representing only non-negative integers) or signed (capable of representing negative integers as well).


The most common representation of a positive integer is a string of bits, using the binary numeral system. The order of the bits varies; see Endianness. The width or precision of an integral type is the number of bits in its representation. An integral type with n bits can encode 2^n numbers; for example, an unsigned type typically represents the non-negative values 0 through 2^n − 1.


There are three different ways to represent negative numbers in a binary numeral system. The most common is two’s complement, which allows a signed integral type with n bits to represent numbers from −2^(n−1) through 2^(n−1) − 1. Two’s complement arithmetic is convenient because there is a perfect one-to-one correspondence between representations and values, and because addition, subtraction and multiplication do not need to distinguish between signed and unsigned types. The other possibilities are sign-magnitude and ones' complement. See Signed number representations for details.
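As a concrete illustration, here is a small C sketch. (Strictly, reinterpreting an out-of-range value as a signed type is implementation-defined in C before C23, but on two’s-complement hardware, which is virtually universal, it behaves as shown.)

    #include <stdint.h>
    #include <stdio.h>

    int main(void) {
        uint8_t bits = 0xFF;                       /* bit pattern 1111 1111 */

        /* Read as unsigned, the eight bits mean 255 ... */
        printf("unsigned: %u\n", (unsigned)bits);            /* 255 */

        /* ... reinterpreted as two's-complement signed, they mean -1. */
        int8_t s = (int8_t)bits;
        printf("signed:   %d\n", (int)s);                    /* -1  */

        /* Two's-complement negation: invert all bits, then add one. */
        int8_t five = 5;
        int8_t neg  = (int8_t)(~five + 1);
        printf("-5 via ~x + 1: %d\n", (int)neg);             /* -5  */

        /* Unsigned arithmetic is modular: 255 + 1 wraps around to 0. */
        uint8_t wrapped = (uint8_t)(bits + 1);
        printf("255 + 1 wraps to: %u\n", (unsigned)wrapped); /* 0   */
        return 0;
    }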


Another, rather different, representation for integers is binary-coded decimal (BCD), which is still commonly used in mainframe financial applications and in databases.
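A minimal sketch of how packed BCD works: two decimal digits per byte, one per nibble. The helper names here are illustrative, not from any particular mainframe or database API.

    #include <stdint.h>
    #include <stdio.h>

    /* Pack two decimal digits (each 0-9) into one byte: tens in the high
       nibble, ones in the low nibble. */
    static uint8_t bcd_pack(unsigned tens, unsigned ones) {
        return (uint8_t)((tens << 4) | ones);
    }

    /* Recover the decimal value from a packed-BCD byte. */
    static unsigned bcd_unpack(uint8_t b) {
        return (b >> 4) * 10u + (b & 0x0Fu);
    }

    int main(void) {
        uint8_t b = bcd_pack(4, 2);   /* decimal 42 is stored as the byte 0x42 */
        printf("stored as 0x%02X, decodes to %u\n", (unsigned)b, bcd_unpack(b));
        return 0;
    }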


Common integral data types

8 bits (byte, octet)
    Signed:   −128 to +127
    Unsigned: 0 to +255
    Uses: ASCII characters; C int8_t; Java byte

16 bits (halfword, word)
    Signed:   −32,768 to +32,767
    Unsigned: 0 to +65,535
    Uses: UCS-2 characters; C int16_t; Java char and short

32 bits (word, doubleword, longword)
    Signed:   −2,147,483,648 to +2,147,483,647
    Unsigned: 0 to +4,294,967,295
    Uses: UCS-4 characters; truecolor with alpha; C int32_t; Java int

64 bits (doubleword, longword, quadword)
    Signed:   −9,223,372,036,854,775,808 to +9,223,372,036,854,775,807
    Unsigned: 0 to +18,446,744,073,709,551,615
    Uses: C int64_t; Java long

128 bits
    Signed:   −170,141,183,460,469,231,731,687,303,715,884,105,728 to +170,141,183,460,469,231,731,687,303,715,884,105,727
    Unsigned: 0 to +340,282,366,920,938,463,463,374,607,431,768,211,455
    Uses: in C, available only as a non-standard compiler-specific extension

n bits (n-bit integer)
    Signed:   −2^(n−1) to +2^(n−1) − 1
    Unsigned: 0 to 2^n − 1
 

Different CPUs support different integral data types. Typically, hardware will support both signed and unsigned types, but only a small, fixed set of widths.


The table above lists integral type widths that are supported in hardware by common processors. High-level programming languages provide more possibilities. It is common to have a ‘double-width’ integral type that has twice as many bits as the biggest hardware-supported type. Many languages also have bit-field types (a specified number of bits, usually constrained to be less than the maximum hardware-supported width) and range types (which can represent only the integers in a specified range).
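C’s bit-fields are one common form of the ‘specified number of bits’ type mentioned above. A minimal sketch follows; note that how a compiler packs the fields, and the struct’s total size, is implementation-defined.

    #include <stdio.h>

    /* A 16-bit RGB565 pixel expressed as three bit-fields. */
    struct Pixel565 {
        unsigned blue  : 5;   /* holds 0 .. 31 */
        unsigned green : 6;   /* holds 0 .. 63 */
        unsigned red   : 5;   /* holds 0 .. 31 */
    };

    int main(void) {
        struct Pixel565 p = { 31, 63, 31 };   /* all channels at maximum */
        p.green = 64;                         /* only 6 bits kept: wraps to 0 */
        printf("green after overflow: %u\n", (unsigned)p.green);
        printf("struct size: %zu bytes\n", sizeof p);  /* implementation-defined */
        return 0;
    }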


Some languages, such as Lisp, REXX and Haskell, support arbitrary-precision integers (also known as infinite-precision integers or bignums). Other languages that do not support this concept as a top-level construct may have libraries available to represent very large numbers using arrays of smaller variables, such as Java's BigInteger class or Perl's "bigint" package. These use as much of the computer’s memory as is necessary to store the numbers; however, a computer has only a finite amount of storage, so they too can represent only a finite subset of the mathematical integers. These schemes can hold very large numbers: one kilobyte of memory could store a number up to 2,466 decimal digits long.
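The ‘arrays of smaller variables’ approach can be sketched directly. The toy C routine below treats an array of 32-bit limbs as one large number in base 2^32 and adds with carry propagation, much like the inner loop of a real bignum library. The function and layout here are illustrative, not any library’s actual API.

    #include <inttypes.h>
    #include <stdint.h>
    #include <stdio.h>

    #define LIMBS 4   /* 4 limbs x 32 bits = one 128-bit unsigned integer */

    /* r = a + b over little-endian limb arrays (least significant limb first).
       Each step adds two limbs plus the incoming carry in 64-bit arithmetic,
       keeps the low 32 bits, and carries the high bit into the next limb:
       schoolbook addition in base 2^32. Returns the final carry. */
    static uint32_t big_add(uint32_t r[LIMBS],
                            const uint32_t a[LIMBS],
                            const uint32_t b[LIMBS]) {
        uint64_t carry = 0;
        for (int i = 0; i < LIMBS; i++) {
            uint64_t sum = (uint64_t)a[i] + b[i] + carry;
            r[i]  = (uint32_t)sum;   /* low 32 bits become this limb */
            carry = sum >> 32;       /* 0 or 1, carried into the next limb */
        }
        return (uint32_t)carry;
    }

    int main(void) {
        /* a = 2^64 - 1, b = 1, so a + b = 2^64: limb 2 becomes 1, lower limbs 0. */
        uint32_t a[LIMBS] = { 0xFFFFFFFFu, 0xFFFFFFFFu, 0, 0 };
        uint32_t b[LIMBS] = { 1, 0, 0, 0 };
        uint32_t r[LIMBS];
        big_add(r, a, b);
        printf("%08" PRIX32 " %08" PRIX32 " %08" PRIX32 " %08" PRIX32 "\n",
               r[3], r[2], r[1], r[0]);  /* 00000000 00000001 00000000 00000000 */
        return 0;
    }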


A Boolean or Flag type is a type which can represent only two values: 0 and 1, usually identified with false and true respectively. This type can be stored in memory using a single bit, but is often given a full byte for convenience of addressing and speed of access.
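The ‘full byte’ point is easy to observe in C, which gained a Boolean type in C99 via <stdbool.h>; its size is implementation-defined, but it is one whole byte on common platforms.

    #include <stdbool.h>
    #include <stdio.h>

    int main(void) {
        bool flag = true;   /* logically a single bit ... */
        /* ... but typically stored as a whole addressable byte. */
        printf("sizeof(bool) = %zu\n", sizeof flag);   /* usually 1 */
        return 0;
    }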


A four-bit quantity is known as a nibble (when eating, being smaller than a bite) or nybble (being a pun on the form of the word byte). One nibble corresponds to one digit in hexadecimal and holds one digit or a sign code in binary-coded decimal.
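Because one nibble is exactly one hexadecimal digit, splitting a byte into its two nibbles is how a byte is rendered in hex, as in this small C sketch:

    #include <stdint.h>
    #include <stdio.h>

    int main(void) {
        uint8_t byte = 0xB7;
        unsigned high = byte >> 4;     /* high nibble: 0xB, i.e. 11 */
        unsigned low  = byte & 0x0F;   /* low nibble:  0x7, i.e.  7 */
        printf("%X%X\n", high, low);   /* prints "B7" */
        return 0;
    }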


Data type names

Bits  Signed  Java   C#              VB.Net            SQL92               VBScript  C
8     yes     byte   sbyte           SByte             -                   -         int8_t, signed char
16    yes     short  short, Int16    Short, Int16      smallint, int2      int       int16_t, short, int
32    yes     int    int, Int32      Integer, Int32    integer, int, int4  long      int32_t, long
64    yes     long   long, Int64     Long, Int64       bigint, int8        -         int64_t, long long
8     no      -      byte            Byte              tinyint, int1       byte      uint8_t, unsigned char
16    no      char   ushort, UInt16  UShort, UInt16    -                   -         uint16_t, unsigned short, unsigned
32    no      -      uint, UInt32    UInteger, UInt32  -                   -         uint32_t, unsigned long
64    no      -      ulong, UInt64   ULong, UInt64     -                   -         uint64_t, unsigned long long

Note: C++ has no compiler-independent integer types with fixed bit widths. C has had them only since C99, in the form (u)intN_t. The C standard does specify minimum widths for char, short, int, long, and long long (reflected in the C column of the table above). It also specifies that each of those types is no larger than the next, and that char is exactly one byte (eight bits on virtually all modern computers; the exact value is defined as CHAR_BIT in <limits.h>, which also accommodates older machines with wider bytes).
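A quick way to check what a particular compiler provides is to print the widths directly. This sketch assumes a C99-or-later compiler for <stdint.h>; the exact sizes vary by platform, and the comments give only the standard’s guarantees.

    #include <limits.h>
    #include <stdint.h>
    #include <stdio.h>

    int main(void) {
        printf("CHAR_BIT          = %d\n", CHAR_BIT);            /* bits per byte   */
        printf("sizeof(short)     = %zu\n", sizeof(short));      /* >= 16 bits      */
        printf("sizeof(int)       = %zu\n", sizeof(int));        /* >= 16 bits      */
        printf("sizeof(long)      = %zu\n", sizeof(long));       /* >= 32 bits      */
        printf("sizeof(long long) = %zu\n", sizeof(long long));  /* >= 64 bits      */
        printf("sizeof(int32_t)   = %zu\n", sizeof(int32_t));    /* exactly 32 bits */
        return 0;
    }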


Pointers

A pointer is often, but not always, represented by an unsigned integer of specified width. This is often, but not always, the widest integer that the hardware supports directly. The value of this integer is often, but not always, the memory address of whatever the pointer points to.
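C captures this pointer-as-unsigned-integer view with the optional uintptr_t type from <stdint.h>: where it exists, converting an object pointer to uintptr_t and back is guaranteed to yield a pointer comparing equal to the original. A minimal sketch:

    #include <inttypes.h>
    #include <stdint.h>
    #include <stdio.h>

    int main(void) {
        int x = 42;
        int *p = &x;

        /* View the pointer as an unsigned integer wide enough to hold it. */
        uintptr_t addr = (uintptr_t)p;
        printf("address: 0x%" PRIxPTR "\n", addr);

        /* The round trip back is guaranteed to compare equal to the original. */
        int *q = (int *)addr;
        printf("round trip ok: %s\n", (q == p) ? "yes" : "no");
        return 0;
    }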


Bytes and octets

Main article: Byte

The term byte initially meant ‘the smallest addressable unit of memory’. In the past, 5-, 6-, 7-, 8-, and 9-bit bytes have all been used. There have also been computers that could address individual bits (‘bit-addressed machines’), or that could only address 16- or 32-bit quantities (‘word-addressed machines’). The term byte was usually not used at all in connection with bit- and word-addressed machines.


The term octet always refers to an 8-bit quantity. It is mostly used in the field of computer networking, where computers with different byte widths might have to communicate.


In modern usage byte almost invariably means eight bits, since all other sizes have fallen into disuse; thus byte has come to be synonymous with octet.


Words

Main article: Word (computing)

The term word is used for a small group of bits which are handled simultaneously by processors of a particular architecture. The size of a word is thus CPU-specific. Many different word sizes have been used, including 6-, 8-, 12-, 16-, 18-, 24-, 32-, 36-, 39-, 48-, 60-, and 64-bit. Since it is architectural, the size of a word is usually set by the first CPU in a family, rather than the characteristics of a later compatible CPU. The meanings of terms derived from word, such as longword, doubleword, quadword, and halfword, also vary with the CPU and OS.


As of 2006, 32-bit word sizes are most common among general-purpose computers, with 64-bit machines used mostly for large installations. Embedded processors with 8- and 16-bit word sizes are still common. The 36-bit word length was common in the early days of computers, but word sizes that are not a multiple of 8 have vanished along with non-8-bit bytes.

