64-bit


N-bit computers
4-bit | 8-bit | 16-bit | 32-bit | 64-bit | 128-bit
N-bit applications
4-bit | 8-bit | 16-bit | 32-bit | 64-bit | 128-bit

In computing, a 64-bit component is one in which data are processed or stored in 64-bit units (words). The term often refers to a computer's CPU, describing the size of the registers used to hold memory addresses and other data, as well as the ALU that operates on those registers. As of 2004, 64-bit CPUs are common in servers, and have recently been introduced to the (previously 32-bit) mainstream personal computer arena in the form of the AMD64, EM64T, and PowerPC 970 (or "G5") processor architectures.


Though a CPU may be 64-bit internally, its external data bus or address bus may have a different size, either larger or smaller, and the term is often used to describe the size of these buses as well. For instance, many current machines with 32-bit processors use 64-bit buses, and may occasionally be referred to as "64-bit" for this reason. The term may also refer to the size of an instruction in the computer's instruction set or to any other item of data. Without further qualification, however, a computer architecture described as "64-bit" generally has integer registers that are 64 bits wide and thus directly supports dealing both internally and externally with 64-bit "chunks" of data.


Architectural implications

Registers in a processor are generally divided into three groups: integer, floating point, and other. In all common general purpose processors, only the integer registers are capable of storing pointer values (that is, an address of some data in memory). The non-integer registers cannot be used to store pointers for the purpose of reading or writing to memory, and therefore cannot be used to bypass any memory restrictions imposed by the size of the integer registers.


Nearly all common general-purpose processors (with the notable exception of the ARM and most 32-bit MIPS implementations) have integrated floating-point hardware, which may or may not use 64-bit registers to hold data for processing. For example, the AMD64 architecture defines an SSE unit that includes sixteen 128-bit registers, while the traditional x87 floating-point unit defines eight 64-bit registers in a stack configuration. By contrast, the 64-bit Alpha family of processors defines thirty-two 64-bit floating-point registers in addition to its thirty-two 64-bit integer registers.


Memory limitations

Most CPUs are designed so that the contents of a single integer register can store the address (location) of any datum in the computer's virtual memory. Therefore, the total number of addresses in the virtual memory (the total amount of data the computer can keep in its working area) is determined by the width of these registers. Beginning in the 1960s with the IBM System/360, then (amongst many others) the DEC VAX minicomputer in the 1970s, and then with the Intel 80386 in the mid-1980s, a de facto consensus developed that 32 bits was a convenient register size. A 32-bit register meant that 2³² addresses, or 4 gigabytes of RAM, could be referenced. At the time these architectures were devised, 4 gigabytes was so far beyond the typical quantities of memory available in installations that this was considered enough "headroom" for addressing. A 4-gigabyte address space was also considered appropriate for another important reason: 4 billion integers are enough to assign unique references to most physically countable things in applications like databases.
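
The arithmetic behind the 4-gigabyte figure is simple enough to verify directly. The following short C program (a minimal sketch, not part of the original article) prints the size of a flat 32-bit address space:

    #include <stdio.h>
    #include <stdint.h>

    int main(void)
    {
        /* A 32-bit register can hold 2^32 distinct values, so a flat
           32-bit address space spans 2^32 bytes. */
        uint64_t bytes = 1ULL << 32;
        printf("32-bit address space: %llu bytes = %llu GiB\n",
               (unsigned long long)bytes,
               (unsigned long long)(bytes >> 30));   /* prints 4 GiB */
        return 0;
    }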


However, with the march of time and continual reductions in the cost of memory (see Moore's Law), by the early 1990s installations with quantities of RAM approaching 4 gigabytes began to appear, and the use of virtual memory spaces greater than the 4-gigabyte limit became desirable for handling certain types of problems. In response, a number of companies began releasing new families of chips with 64-bit architectures, initially for supercomputers and high-end servers. 64-bit computing has gradually drifted down to the personal computer desktop, with Apple Computer's Power Mac desktop line (as of 2003) and its iMac home computer line (as of 2004) both using 64-bit PowerPC 970 processors (which Apple brands the "G5"), and AMD's "AMD64" architecture (and Intel's "EM64T") becoming common in high-end PCs.


32-bit vs 64-bit

A change from a 32-bit to a 64-bit architecture is a fundamental alteration, as operating systems must be modified to take advantage of the new architecture. Other software must also be ported to use the new capabilities; older software is usually supported through either a hardware compatibility mode (in which the new processors support an older 32-bit instruction set as well as the new modes), or through software emulation.


While 64-bit architectures indisputably make working with huge data sets in applications such as digital video, scientific computing, and large databases easier, there has been considerable debate as to whether they or their 32-bit compatibility modes will be faster than comparably-priced 32-bit systems for other tasks.


Theoretically, some programs could well be faster in 32-bit mode. Instructions for 64-bit computing take up more storage space than the earlier 32-bit ones, so it is possible that some 32-bit programs will fit into the CPU's high-speed cache while equivalent 64-bit programs will not. However, in applications like scientific computing, the data being processed often fits naturally in 64-bit chunks, and will be faster on a 64-bit architecture because the CPU will be designed to process such information directly rather than requiring the program to perform multiple steps. Such assessments are complicated by the fact that in the process of designing the new 64-bit architectures, the instruction set designers have also taken the opportunity to make other changes that address some of the deficiencies in older instruction sets by adding new performance-enhancing facilities (such as the extra registers in the AMD64 design).
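
As a concrete illustration of the "multiple steps" point, consider summing an array of 64-bit integers in C (a sketch; the function name sum64 is ours). On a 64-bit CPU the addition in the loop is a single register-wide instruction per element, while a 32-bit CPU must typically synthesize each 64-bit addition from two 32-bit operations:

    #include <stddef.h>
    #include <stdint.h>

    /* On a 64-bit CPU each addition below is one register-wide
       instruction; a 32-bit CPU must combine two 32-bit operations
       (typically an add followed by an add-with-carry). */
    uint64_t sum64(const uint64_t *data, size_t n)
    {
        uint64_t total = 0;
        for (size_t i = 0; i < n; i++)
            total += data[i];
        return total;
    }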


Pros and cons

A common misconception is that 64-bit architectures are no better than 32-bit architectures unless the computer has more than 4 GB of memory. This is not entirely true:

  • Some operating systems reserve portions of each process's address space for OS use, effectively reducing the total address space available for mapping memory for user programs. For instance, Windows XP DLLs and userland OS components are mapped into each process's address space, leaving only 2 or 3 GB (depending on the settings) of address space available, even if the computer has 4 GB of RAM. This restriction is not present in 64-bit Windows.
  • Memory mapping of files is becoming less practical on 32-bit architectures, especially given the introduction of relatively cheap recordable DVD technology. A 4 GB file is no longer uncommon, and such large files cannot easily be memory mapped on a 32-bit architecture; only a portion of the file can be mapped at a time. This is an issue because memory mapping, when properly implemented by the OS, remains one of the most efficient disk-to-memory methods (see the sketch after this list).
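
The following is a minimal sketch of the memory-mapping point, assuming a POSIX system such as Linux; the file name big.dat is hypothetical. Even when the C library offers a 64-bit off_t, a 32-bit process cannot map a full 4 GB file in one piece, because the mapping must fit inside the process's own (at most 4 GB) virtual address space; a 64-bit process can map it in a single call:

    #define _FILE_OFFSET_BITS 64   /* on 32-bit Linux, makes off_t 64 bits */
    #include <sys/mman.h>
    #include <sys/stat.h>
    #include <fcntl.h>
    #include <stdio.h>

    int main(void)
    {
        const char *path = "big.dat";      /* hypothetical large file */
        int fd = open(path, O_RDONLY);
        if (fd < 0) { perror("open"); return 1; }

        struct stat st;
        if (fstat(fd, &st) < 0) { perror("fstat"); return 1; }

        /* On a 32-bit system this call is likely to fail (or the size
           cast to truncate) for files over 4 GB; on a 64-bit system it
           maps the whole file at once. */
        void *p = mmap(NULL, (size_t)st.st_size, PROT_READ, MAP_PRIVATE, fd, 0);
        if (p == MAP_FAILED) { perror("mmap"); return 1; }

        /* ... read the file contents through the pointer p ... */
        munmap(p, (size_t)st.st_size);
        return 0;
    }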

The main disadvantage of 64-bit architectures is that, relative to 32-bit architectures, the same data occupies slightly more space in memory (due to larger pointers, alignment padding, and possibly other widened types). This increases the memory requirements of a given process and can have implications for efficient processor cache utilisation. Maintaining a partial 32-bit data model is one way to handle this, and is in general reasonably effective.
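
A small C example (ours, not from the original article) makes the pointer-bloat effect visible. The struct below is 12 bytes under a typical 32-bit (ILP32) compiler; under LP64 the two pointers double to 8 bytes each and alignment pads the int, giving 24 bytes, twice the footprint for the same data:

    #include <stdio.h>

    /* A list node with two pointers and a small payload.
       ILP32: 4 + 4 + 4            = 12 bytes
       LP64:  8 + 8 + 4 + 4 (pad)  = 24 bytes */
    struct node {
        struct node *next;
        struct node *prev;
        int          value;
    };

    int main(void)
    {
        printf("sizeof(struct node) = %zu bytes\n", sizeof(struct node));
        return 0;
    }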


64-bit data models

Converting application software written in a high-level language from a 32-bit architecture to a 64-bit architecture varies in difficulty. One common recurring problem is that some programmers assume that pointers (variables that store memory addresses) have the same length as some other data type, and therefore that values can be transferred between those types without losing information. That assumption happens to hold on some 32-bit machines (and even some 16-bit machines), but it no longer holds on 64-bit machines. The C programming language and its descendant C++ make it particularly easy to make this sort of mistake.
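
The classic instance of this mistake in C is storing a pointer in an int, as in the sketch below (variable names ours). Under ILP32 the round trip happens to work; under LP64 the cast to int throws away the upper 32 bits of the 64-bit address:

    #include <stdio.h>
    #include <stdint.h>

    int main(void)
    {
        int x = 42;

        /* Assuming a pointer fits in an int: works by accident on
           ILP32, truncates the address under LP64. */
        int bad = (int)(intptr_t)&x;
        int *p  = (int *)(intptr_t)bad;   /* may no longer point at x */

        printf("original %p, after round trip %p\n", (void *)&x, (void *)p);
        return 0;
    }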


To avoid this mistake in C and C++, the sizeof operator can be used to determine the size of these primitive types when decisions based on their size must be made at run time. The limits.h header in the C99 standard, and climits in the C++ standard, provide more helpful information; sizeof only returns a count of bytes, which can be misleading, because the number of bits in a byte is itself implementation-defined in C and C++ (it is exposed as CHAR_BIT, and is 8 on virtually all modern machines). One also needs to be careful to use the ptrdiff_t type (in the standard header <stddef.h>) when doing pointer arithmetic; too much code incorrectly uses "int" or "long" instead.
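
The following short program (a sketch using only standard headers) pulls these facilities together, reporting the sizes sizeof sees, the bits-per-byte figure from limits.h, and a pointer difference held in the correct ptrdiff_t type:

    #include <stdio.h>
    #include <stddef.h>   /* ptrdiff_t */
    #include <limits.h>   /* CHAR_BIT */

    int main(void)
    {
        /* sizeof reports byte counts; CHAR_BIT gives the bits per byte
           on this implementation. */
        printf("bits/byte: %d, int: %zu, long: %zu, pointer: %zu bytes\n",
               CHAR_BIT, sizeof(int), sizeof(long), sizeof(void *));

        /* Pointer differences belong in ptrdiff_t; "int" can be too
           narrow on a 64-bit machine with very large arrays. */
        double a[10];
        ptrdiff_t d = &a[9] - &a[0];
        printf("element distance: %td\n", d);
        return 0;
    }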


Neither C nor C++ defines the length of a pointer, int, or long to be a specific number of bits.


In most programming environments on 32-bit machines, pointers, "int" variables, and "long" variables are all 32 bits wide.


However, in many programming environments on 64-bit machines, "int" variables are still 32 bits wide, but "long"s and pointers are 64 bits wide. This is described as the LP64 data model. Another alternative is the ILP64 data model, in which all three types are 64 bits wide. A third is the LLP64 model, which maintains compatibility with 32-bit code by leaving both int and long 32 bits wide and defining a new 64-bit "long long" type to match the width of a pointer. In most cases the modifications a program requires are relatively minor and straightforward, and many well-written programs can simply be recompiled for the new environment without changes.
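
A small program (ours, as a sketch) can report which of these models a given compiler uses, simply by inspecting the three widths in question:

    #include <stdio.h>
    #include <stddef.h>

    int main(void)
    {
        /* Widths in bytes under the common models:
             ILP32: int 4, long 4, pointer 4
             LP64:  int 4, long 8, pointer 8
             LLP64: int 4, long 4, pointer 8
             ILP64: int 8, long 8, pointer 8 */
        size_t i = sizeof(int), l = sizeof(long), p = sizeof(void *);
        const char *model = (p == 4) ? "ILP32"
                          : (i == 8) ? "ILP64"
                          : (l == 8) ? "LP64"
                          :            "LLP64";
        printf("int=%zu long=%zu pointer=%zu -> %s\n", i, l, p, model);
        return 0;
    }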


Note that a data model is a choice made on a per-compiler basis, and several can coexist on the same OS. However, the model chosen as the primary model of the OS's API typically dominates.


Another consideration is the data model used for drivers. Drivers make up the majority of the operating system code in most modern operating systems (although many may not be loaded when the operating system is running). Many drivers use pointers heavily to manipulate data, and in some cases have to load pointers of a certain size into the hardware they support for DMA. As an example, a driver for a 32-bit PCI device asking the device to DMA data into upper areas of a 64-bit machine's memory could not satisfy requests from the operating system to load data from the device to memory above the 4 gigabyte barrier, because the pointers for those addresses would not fit into the DMA registers of the device. This problem is solved by having the OS take the memory restrictions of the device into account when generating requests to drivers for DMA.
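
The reachability check described above can be sketched in a few lines of C. This is a hypothetical illustration only: the names device_can_reach, buf_phys, and DMA_LIMIT_32BIT are ours and do not correspond to any real driver API:

    #include <stdint.h>
    #include <stdbool.h>

    /* A 32-bit PCI device can only address the low 4 GB of memory. */
    #define DMA_LIMIT_32BIT 0xFFFFFFFFull

    /* Returns true if every byte of the buffer, including the last,
       lies at or below the device's addressing limit. */
    static bool device_can_reach(uint64_t buf_phys, uint64_t len)
    {
        return len != 0 && buf_phys + len - 1 <= DMA_LIMIT_32BIT;
    }

    /* When this check fails, the OS allocates the buffer from low
       memory in the first place, or copies data through a low
       "bounce buffer" the device can reach. */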


64-bit processor architectures

64-bit processor architectures include:

  • DEC Alpha
  • SPARC V9 (UltraSPARC)
  • MIPS III and later
  • HP PA-RISC 2.0
  • IBM POWER3 and later, including the PowerPC 970 ("G5")
  • Intel IA-64 (Itanium)
  • AMD64 (and Intel's compatible EM64T)
  • IBM z/Architecture (zSeries)

Beyond 64 bits

64-bit words seem to be sufficient for most practical uses today. Still, it is worth noting that IBM's System/370 used 128-bit floating-point numbers, and many modern processors also include 128-bit floating-point registers. The System/370 was notable, however, in that it also used variable-length decimal numbers of up to 16 bytes (i.e., 128 bits).


External links

  • Data Size Neutrality and 64-bit Support (http://www.usenix.org/publications/login/standards/10.data.html)
  • Henry Spencer's 10 Commandments for C Programmers (http://herd.plethora.net/~seebs/c/10com.html) specifically mentions 64-bit portability

This article was originally based on material from the Free On-line Dictionary of Computing, which is licensed under the GFDL.


