In computer science, a hash table, or hash map, is a data structure that associates keys with values. The primary operation it supports efficiently is lookup: given a key (e.g. a person's name), find the corresponding value (e.g. that person's telephone number). It works by transforming the key with a hash function into a hash, a number that is used as an index into an array to locate the bucket where the value is stored.
Hash tables support efficient insertion of new entries, in expected O(1) time. The time spent searching depends on the hash function and the load of the hash table; both insertion and search approach O(1) time with well-chosen hash functions and load factors.
A small phone book as a hash table.
Time complexity and common uses of hash tables
Hash tables are often used to implement associative arrays, sets and caches. Like arrays, hash tables provide constant-time O(1) lookup on average, regardless of the number of items in the table. While the worst-case lookup time can in theory be as bad as O(n), this is, for practical purposes, statistically unlikely unless the hash function is poorly designed or the set of keys is maliciously chosen with the given hash function in mind. These corner cases are addressed in mathematical analysis with the simple uniform hashing assumption, which places basic conditions on the hash function.
Compared to other associative array data structures, hash tables are most useful when large numbers of records are to be stored, especially if the size of the data set can be predicted. Hash tables may be used as in-memory data structures. They may also be adapted for use with persistent data structures; database indices sometimes use disk-based data structures built on hash tables, although balanced trees are more popular.
Choosing a good hash function 
Main article: Hash function
A good hash function is essential for good hash table performance. A poor choice of hash function is likely to lead to clustering, in which the probability of keys mapping to the same hash bucket (i.e. a collision) is significantly greater than would be expected from a random function. A non-zero collision probability is inevitable in any hash implementation, but the number of operations required to resolve a collision usually scales linearly with the number of keys mapping to the same bucket, so excess collisions degrade performance significantly. In addition, some hash functions are computationally expensive, so the amount of time (and, in some cases, memory) taken to compute the hash may be burdensome.
Choosing a good hash function is tricky. The literature is replete with poor choices, at least when measured by modern standards. For example, the very popular multiplicative hash advocated by Donald Knuth in The Art of Computer Programming (see reference below) has particularly poor clustering behavior. [1] However, since poor hashing merely degrades hash table performance for particular input key distributions, such problems commonly go undetected.
The literature is similarly sparse on the criteria for choosing a hash function. Unlike most other fundamental algorithms and data structures, there is no universal consensus on what makes a "good" hash function. The remainder of this section is organized by three criteria: simplicity, speed, and strength, and surveys algorithms known to perform well by them. Simplicity and speed are readily measured objectively (by number of lines of code and CPU benchmarks, for example), but strength is a more slippery concept. Obviously, a cryptographic hash function such as SHA-1 would satisfy the relatively lax strength requirements needed for hash tables, but its slowness and complexity make it unappealing. However, using cryptographic hash functions can protect against collision attacks when the hash table modulus and its factors can be kept secret from the attacker,^{[citation needed]} or alternatively, by applying a secret salt. For these specialized cases, a universal hash function can also be used instead of one static hash.
In the absence of a standard measure for hash function strength, the current state of the art is to employ a battery of statistical tests to measure whether the hash function can be readily distinguished from a random function. Arguably the most important test is to determine whether the hash function displays the avalanche effect, which essentially requires that any single-bit change in the input key should affect, on average, half the bits in the output. Bret Mulvey advocates testing the strict avalanche condition in particular, which states that, for any single-bit change, each of the output bits should change with probability one-half, independent of the other bits in the key. Purely additive hash functions such as CRC fail this stronger condition miserably.
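The strict avalanche condition can be checked empirically. The sketch below (illustrative only; the function names and the small deterministic random-number generator are ours, not part of any standard test suite) estimates, for one input-bit/output-bit pair, how often the output bit flips when the input bit is toggled across many random 3-byte keys. Under the strict avalanche condition the result should be close to 0.5 for every pair. It uses the Jenkins one-at-a-time hash, shown later in this article, as the function under test.

```c
#include <stdint.h>
#include <stddef.h>

/* Jenkins one-at-a-time hash (see later in this article), used here
   only as the function under test. */
static uint32_t oaat_hash(const unsigned char *key, size_t len)
{
    uint32_t h = 0;
    for (size_t i = 0; i < len; i++) {
        h += key[i];
        h += h << 10;
        h ^= h >> 6;
    }
    h += h << 3;
    h ^= h >> 11;
    h += h << 15;
    return h;
}

/* Small deterministic generator so the measurement is repeatable. */
static uint32_t rng_state = 12345u;
static uint32_t next_random(void)
{
    rng_state = rng_state * 1664525u + 1013904223u;  /* simple LCG */
    return rng_state;
}

/* Fraction of trials in which output bit out_bit flips when input bit
   in_bit of a random 3-byte key is toggled.  Under the strict
   avalanche condition this should be close to 0.5 for every pair. */
double avalanche_probability(int in_bit, int out_bit, int trials)
{
    int flips = 0;
    for (int t = 0; t < trials; t++) {
        unsigned char key[3];
        for (int i = 0; i < 3; i++)
            key[i] = (unsigned char)(next_random() >> 24);
        uint32_t h1 = oaat_hash(key, 3);
        key[in_bit / 8] ^= (unsigned char)(1u << (in_bit % 8));
        uint32_t h2 = oaat_hash(key, 3);
        if (((h1 ^ h2) >> out_bit) & 1u)
            flips++;
    }
    return (double)flips / trials;
}
```

A full test in the style described above would sweep all 24 input bits against all 32 output bits and flag any pair whose probability strays far from one-half.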
Clearly, a strong hash function should have a uniform distribution of hash values. Bret Mulvey proposes the use of a chi-squared test for uniformity, based on power-of-two hash table sizes ranging from 2^{1} to 2^{16}. This test is considerably more sensitive than many others proposed for measuring hash functions, and finds problems in many popular hash functions.
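Such a test reduces to computing the chi-squared statistic over the bucket occupancy counts obtained from hashing a key set into a table of some power-of-two size. For a hash that distributes uniformly, the statistic should be close to its degrees of freedom (roughly the number of buckets); a much larger value signals bias. This is an illustrative sketch, not Mulvey's actual test harness:

```c
#include <stddef.h>

/* Chi-squared statistic over bucket occupancy counts: the sum of
   (observed - expected)^2 / expected across buckets, where expected
   is nkeys / nbuckets.  Values far above the number of buckets
   indicate a non-uniform hash. */
double chi_squared(const unsigned long *counts, size_t nbuckets,
                   unsigned long nkeys)
{
    double expected = (double)nkeys / (double)nbuckets;
    double stat = 0.0;
    for (size_t b = 0; b < nbuckets; b++) {
        double d = (double)counts[b] - expected;
        stat += d * d / expected;
    }
    return stat;
}
```

For example, 100 keys spread evenly over 4 buckets give a statistic of 0, while all 100 keys landing in one of 4 buckets give 300 — far above the 3 degrees of freedom, flagging severe bias.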
Fortunately, there are good hash functions that satisfy all these criteria. The simplest class all consume one byte of the input key per iteration of the inner loop. Within this class, simplicity and speed are closely related, as fast algorithms simply don't have time to perform complex calculations. A byte-by-byte implementation that performs particularly well is the Jenkins one-at-a-time hash, adapted here from an article by Bob Jenkins, its creator.

 uint32_t jenkins_one_at_a_time_hash(unsigned char *key, size_t key_len)
 {
     uint32_t hash = 0;
     size_t i;

     for (i = 0; i < key_len; i++) {
         hash += key[i];
         hash += (hash << 10);
         hash ^= (hash >> 6);
     }
     hash += (hash << 3);
     hash ^= (hash >> 11);
     hash += (hash << 15);
     return hash;
 }

Avalanche behavior of the Jenkins one-at-a-time hash over 3-byte keys.

The avalanche behavior of this hash is shown on the right. The image was made using Bret Mulvey's AvalancheTest in his Hash.cs toolset. Each of the 24 rows corresponds to a single bit in the 3-byte input key, and each of the 32 columns corresponds to a bit in the output hash. Colors are chosen by how well the input key bit affects the given output hash bit: a green square indicates good mixing behavior, a yellow square weak mixing behavior, and red would indicate no mixing. Only a few bits in the last byte of the input key are weakly mixed to a minority of bits in the output hash, a performance vastly better than that of a number of widely used hash functions. Many commonly used hash functions perform poorly when subjected to such rigorous avalanche testing. The widely favored FNV hash, for example, shows many bits with no mixing at all, especially for short keys. See the evaluation of FNV by Bret Mulvey for a more thorough analysis.
If speed is more important than simplicity, then the class of hash functions which consume multibyte chunks per iteration may be of interest. One of the most sophisticated is "lookup3" by Bob Jenkins, which consumes input in 12-byte (96-bit) chunks. Note, though, that any speed improvement from the use of this hash is only likely to be useful for large keys, and that the increased complexity may also have speed consequences such as preventing an optimizing compiler from inlining the hash function. Bret Mulvey analyzed an earlier version, lookup2, and found it to have excellent avalanche behavior. One desirable property of a hash function is that conversion from the hash value (typically 32 bits) to a bucket index for a particular-size hash table can be done simply by masking, preserving only the lower k bits for a table of size 2^{k} (an operation equivalent to computing the hash value modulo the table size). This property enables the technique of incrementally doubling the size of the hash table: each bucket in the old table maps to only two in the new table. Because of its use of XOR-folding, the FNV hash does not have this property. Some older hashes are even worse, requiring table sizes to be a prime number rather than a power of two, again computing the bucket index as the hash value modulo the table size. In general, such a requirement is a sign of a fundamentally weak function; using a prime table size is a poor substitute for using a stronger function.
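The masking operation described above can be sketched as follows; the function name is ours, and the equivalence with the modulo operation only holds when the table size is a power of two:

```c
#include <stdint.h>

/* For a table whose size is a power of two, keeping only the low bits
   of the hash is the same as reducing it modulo the table size, but
   needs only a single AND instruction. */
uint32_t bucket_index(uint32_t hash, uint32_t table_size)
{
    return hash & (table_size - 1);  /* valid only when table_size is 2^k */
}
```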
Collision resolution
If two keys hash to the same index, the corresponding records cannot be stored in the same location. So, if a location is already occupied, we must find another location to store the new record, and do it in such a way that we can find it when we look it up later. To give an idea of the importance of a good collision resolution strategy, consider the following result, derived using the birthday paradox. Even if we assume that our hash function outputs random indices uniformly distributed over the array, and even for a hash table with 1 million indices, there is a 95% chance of at least one collision occurring before it contains 2500 records.
There are a number of collision resolution techniques, but the most popular are open addressing and chaining.
Open hashing
Open hashing is any hash method where the data indexed by the hash is stored externally to the hash table. The data is initially divided up into groups; for instance, each group might store ten items, with allocation done on a first-come, first-served basis. In this case, the first item goes in group 0, as does the second, and so on, with the 11th through 20th items in group 1. Each group is normally stored in its own file or 'chunk' of a file or memory, so that the system can quickly load up the entire group; these group storage areas are often called 'buckets'. The key that identifies a bucket is then put into the hash table (in this case it is as simple as the bucket number). Now, when we are looking for an item, we look up its entry in the hash table, and this tells us which bucket it is in. We then load up the bucket and retrieve the desired item from it. The advantage of open hashing over closed hashing is that the data stored in the bucket can be of variable size. Hash tables normally require the items stored in them to be the same size, since that makes finding the n'th item as simple as:

 Hash_Table_Start + (n * Item_Length)
Hence, if we store a fixed-length 'bucket pointer' in the hash table, we can put variable-length data in the buckets.
Separate chaining
Hash collision resolved by chaining.

Sometimes called simply chaining or direct chaining, this technique in its simplest form has each slot of the array reference a linked list of the inserted records that collide to that slot. Insertion requires finding the correct slot and appending to either end of the list in that slot; deletion requires searching the list and removal.
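A minimal sketch of separate chaining, mapping strings to integers. The fixed table size, the djb2-style string hash, and all names are illustrative choices, not a canonical implementation:

```c
#include <stdlib.h>
#include <string.h>

#define NSLOTS 64

struct entry {
    char *key;
    int value;
    struct entry *next;  /* next record colliding in this slot */
};

static struct entry *slots[NSLOTS];

static size_t slot_for(const char *key)
{
    size_t h = 5381;                        /* djb2-style string hash */
    while (*key)
        h = h * 33 + (unsigned char)*key++;
    return h % NSLOTS;
}

/* Insert or update: search the chain for the key, else prepend. */
void chain_put(const char *key, int value)
{
    size_t s = slot_for(key);
    for (struct entry *e = slots[s]; e; e = e->next)
        if (strcmp(e->key, key) == 0) { e->value = value; return; }
    struct entry *e = malloc(sizeof *e);
    e->key = malloc(strlen(key) + 1);
    strcpy(e->key, key);
    e->value = value;
    e->next = slots[s];                     /* prepend to the chain */
    slots[s] = e;
}

/* Returns 1 and fills *value_out if the key is present, else 0. */
int chain_get(const char *key, int *value_out)
{
    for (struct entry *e = slots[slot_for(key)]; e; e = e->next)
        if (strcmp(e->key, key) == 0) { *value_out = e->value; return 1; }
    return 0;
}
```

Note that insertion prepends in O(1) regardless of chain length, while lookup and deletion must walk the chain.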
Chained hash tables have advantages over open addressed hash tables in that the removal operation is simple and resizing the table can be postponed for much longer, because performance degrades more gracefully even when every slot is used. Indeed, many chaining hash tables may not require resizing at all, since performance degradation is linear as the table fills. For example, a chaining hash table containing twice its recommended capacity of data would only be about twice as slow on average as the same table at its recommended capacity.
Chained hash tables inherit the disadvantages of linked lists. When storing small records, the overhead of the linked list can be significant. An additional disadvantage is that traversing a linked list has poor cache performance.
Alternative data structures can be used for chains instead of linked lists. By using a self-balancing tree, for example, the theoretical worst-case time of a hash table can be brought down to O(log n) rather than O(n). However, since each list is intended to be short, this approach is usually inefficient unless the hash table is designed to run at full capacity or there are unusually high collision rates, as might occur in input designed to cause collisions. Dynamic arrays can also be used to decrease space overhead and improve cache performance when records are small.
Some chaining implementations use an optimization where the first record of each chain is stored in the table. ^{[1]} The purpose is to increase cache efficiency of hash table access. In order to avoid wasting large amounts of space, such hash tables would maintain a load factor of 1.0 or greater. The term direct chaining is sometimes used to describe implementations that do not use this optimization.
Open addressing 
Hash collision resolved by linear probing (interval=1).

Open addressing hash tables store the records directly within the array. This approach is also called closed hashing. A hash collision is resolved by probing, or searching through alternate locations in the array (the probe sequence) until either the target record is found, or an unused array slot is found, which indicates that there is no such key in the table. ^{[2]} Well-known probe sequences include:
 linear probing
 in which the interval between probes is fixed, often at 1.
 quadratic probing
 in which the interval between probes increases linearly (so the indices are described by a quadratic function).
 double hashing
 in which the interval between probes is computed by another hash function.
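Linear probing, the first of these, can be sketched as follows for integer keys. The fixed power-of-two table size, the multiplicative mixing function, and the use of key 0 as the "empty" marker are all simplifying assumptions made for the sketch:

```c
#include <stdint.h>

#define TSIZE 128   /* power of two, so masking gives the bucket index */

static uint32_t keys[TSIZE];   /* 0 means "empty slot" */
static int vals[TSIZE];

/* Simple multiplicative integer mixer (an illustrative choice). */
static uint32_t mix(uint32_t k)
{
    k ^= k >> 16;
    k *= 0x45d9f3bu;
    k ^= k >> 16;
    return k;
}

/* Probe forward with interval 1 until an empty slot or the key itself
   is found.  Returns 0 only if the table is completely full. */
int probe_put(uint32_t key, int value)
{
    for (uint32_t i = 0; i < TSIZE; i++) {
        uint32_t slot = (mix(key) + i) & (TSIZE - 1);
        if (keys[slot] == 0 || keys[slot] == key) {
            keys[slot] = key;
            vals[slot] = value;
            return 1;
        }
    }
    return 0;
}

/* Follow the same probe sequence; an empty slot proves absence. */
int probe_get(uint32_t key, int *value_out)
{
    for (uint32_t i = 0; i < TSIZE; i++) {
        uint32_t slot = (mix(key) + i) & (TSIZE - 1);
        if (keys[slot] == 0)
            return 0;
        if (keys[slot] == key) { *value_out = vals[slot]; return 1; }
    }
    return 0;
}
```

Deletion is deliberately omitted: removing a record from a probe chain requires tombstones or re-insertion of the records that follow it, which is exactly the complication the chaining comparison below alludes to.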
Open addressing versus chaining
Chained hash tables have the following benefits over open addressing:
 They are simple to implement effectively and only require basic data structures.
 From the point of view of writing suitable hash functions, chained hash tables are insensitive to clustering, only requiring minimization of collisions. Open addressing depends upon better hash functions to avoid clustering. This is particularly important if novice programmers can add their own hash functions, but even experienced programmers can be caught out by unexpected clustering effects.
 They degrade in performance more gracefully. Although chains grow longer as the table fills, a chained hash table cannot "fill up" and does not exhibit the sudden increases in lookup times that occur in a near-full table with open addressing. (see right)
 If the hash table stores large records, about 5 or more words per record, chaining uses less memory than open addressing.
 If the hash table is sparse (that is, it has a big array with many free array slots), chaining uses less memory than open addressing even for small records of 2 to 4 words per record due to its external storage.
This graph compares the average number of cache misses required to look up elements in tables with chaining and linear probing. As the table passes the 80%-full mark, linear probing's performance drastically degrades.

For small record sizes (a few words or less) the benefits of in-place open addressing compared to chaining are:
 They can be more space-efficient than chaining since they don't need to store any pointers or allocate any additional space outside the hash table. Simple linked lists require a word of overhead per element.
 Insertions avoid the time overhead of memory allocation, and can even be implemented in the absence of a memory allocator.
 Because it uses internal storage, open addressing avoids the extra indirection required for chaining's external storage. It also has better locality of reference, particularly with linear probing. With small record sizes, these factors can yield better performance than chaining, particularly for lookups.
 They can be easier to serialize, because they don't use pointers.
On the other hand, normal open addressing is a poor choice for large elements, since these elements fill entire cache lines (negating the cache advantage), and a large amount of space is wasted on large empty table slots. If the open addressing table only stores references to elements (external storage), it uses space comparable to chaining even for large records but loses its speed advantage.
This article is about data structure encoding. ...
Diagram of a CPU memory cache A CPU cache is a cache used by the central processing unit of a computer to reduce the average time to access memory. ...
Generally speaking, open addressing is better used for hash tables with small records that can be stored within the table (internal storage) and fit in a cache line. They are particularly suitable for elements of one word or less. In cases where the tables are expected to have high load factors, the records are large, or the data is variable-sized, chained hash tables often perform as well or better. Ultimately, used sensibly, any kind of hash table algorithm is usually fast enough, and the percentage of a calculation spent in hash table code is low. Memory usage is rarely considered excessive. Therefore, in most cases the differences between these algorithms are marginal, and other considerations typically come into play.
Coalesced hashing 
A hybrid of chaining and open addressing, coalesced hashing links together chains of nodes within the table itself. ^{[2]} Like open addressing, it achieves space usage and (somewhat diminished) cache advantages over chaining. Like chaining, it does not exhibit clustering effects; in fact, the table can be efficiently filled to a high density. Unlike chaining, it cannot have more elements than table slots.
Perfect hashing 
If all of the keys that will be used are known ahead of time, and there are no more keys than can fit in the hash table, perfect hashing can be used to create a perfect hash table, in which there will be no collisions. If minimal perfect hashing is used, every location in the hash table can be used as well.
Perfect hashing gives a hash table where the time to make a lookup is constant in the worst case. This is in contrast to chaining and open addressing methods, where the time for lookup is low on average, but may be arbitrarily large. There exist methods for maintaining a perfect hash function under insertions of keys, known as dynamic perfect hashing. A simpler alternative that also gives worst-case constant lookup time is cuckoo hashing.
Probabilistic hashing
Perhaps the simplest solution to a collision is to replace the value that is already in the slot with the new value, or slightly less commonly, drop the record that is to be inserted. In later searches, this may result in a search not finding a record which has been inserted. This technique is particularly useful for implementing caching. An even more space-efficient solution which is similar to this is to use a bit array (an array of one-bit fields) for our table. Initially all bits are set to zero, and when we insert a key, we set the corresponding bit to one. False negatives cannot occur, but false positives can, since if the search finds a 1 bit, it will claim that the value was found, even if it was just another value that hashed into the same array slot by coincidence. In reality, such a hash table is merely a specific type of Bloom filter.
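The bit-array variant can be sketched as a one-hash-function Bloom filter; the names, the size, and the integer mixing function are illustrative choices:

```c
#include <stdint.h>

#define NBITS 1024

static uint8_t bits[NBITS / 8];   /* one bit per table slot */

/* Arbitrary multiplicative mixer reducing a key to a bit position. */
static uint32_t bit_hash(uint32_t key)
{
    key ^= key >> 16;
    key *= 0x7feb352du;
    key ^= key >> 15;
    return key % NBITS;
}

void filter_add(uint32_t key)
{
    uint32_t b = bit_hash(key);
    bits[b / 8] |= (uint8_t)(1u << (b % 8));
}

/* May report 1 for a key never added (false positive), but never
   reports 0 for a key that was added (no false negatives). */
int filter_maybe_contains(uint32_t key)
{
    uint32_t b = bit_hash(key);
    return (bits[b / 8] >> (b % 8)) & 1;
}
```

A true Bloom filter lowers the false-positive rate by setting k independent bit positions per key instead of one.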
Robin Hood hashing
One interesting variation on double-hashing collision resolution is Robin Hood hashing. The idea is that a key already inserted may be displaced by a new key if its probe count is larger than that of the key at the current position. The net effect of this is that it reduces worst-case search times in the table. This is similar to Knuth's ordered hash tables except that the criterion for bumping a key does not depend on a direct relationship between the keys.^{[3]}
Table resizing
With a good hash function, a hash table can typically contain about 70%–80% as many elements as it does table slots and still perform well. Depending on the collision resolution mechanism, performance can begin to suffer either gradually or dramatically as more elements are added. To deal with this, when the load factor exceeds some threshold, it is necessary to allocate a new, larger table, and add all the contents of the original table to this new table. In Java's HashMap class, for example, the default load factor threshold is 0.75.
This can be a very expensive operation, and the necessity for it is one of the hash table's disadvantages. In fact, some naive methods for doing this, such as enlarging the table by one each time you add a new element, reduce performance so drastically as to make the hash table useless. However, if the table is enlarged by some fixed percentage, such as 10% or 100%, it can be shown using amortized analysis that these resizings are so infrequent that the average time per insertion remains constant. To see why this is true, suppose a hash table using chaining begins at the minimum size of 1 and is doubled each time it fills above 100%. If in the end it contains n elements, then the total number of add operations performed for all the resizings is:
 1 + 2 + 4 + 8 + ... + n = 2n - 1.
Because the costs of the resizings form a geometric series, the total cost is O(n). But it is necessary also to perform n operations to add the n elements in the first place, so the total time to add n elements with resizing is O(n), an amortized time of O(1) per element.
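The arithmetic can be checked with a short count of element copies under the two growth policies; this is an illustrative model that counts only the copies done during resizing, not the insertions themselves:

```c
/* Copies performed by resizing when growing from capacity 1 by
   doubling: each time the table fills, every element is re-added. */
unsigned long copies_doubling(unsigned long n)
{
    unsigned long copies = 0, cap = 1;
    for (unsigned long size = 0; size < n; size++) {
        if (size == cap) {   /* table full: double and copy everything */
            copies += size;
            cap *= 2;
        }
    }
    return copies;
}

/* Copies performed when the table grows by one slot per insertion:
   every insert re-copies the whole table. */
unsigned long copies_grow_by_one(unsigned long n)
{
    unsigned long copies = 0;
    for (unsigned long size = 1; size < n; size++)
        copies += size;
    return copies;
}
```

For n = 1024 the doubling policy performs 1023 copies in total, matching the 2n - 1 bound above once the n insertions are included, while growing by one performs 523,776 copies, which is O(n^2).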
On the other hand, some hash table implementations, notably in real-time systems, cannot pay the price of enlarging the hash table all at once, because it may interrupt time-critical operations. One simple approach is to initially allocate the table with enough space for the expected number of elements and forbid the addition of too many elements. Another useful but more memory-intensive technique is to perform the resizing gradually:
 Allocate the new hash table, but leave the old hash table and check both tables during lookups.
 Each time an insertion is performed, add that element to the new table and also move k elements from the old table to the new table.
 When all elements are removed from the old table, deallocate it.
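The steps above can be sketched as follows. The fixed sizes, integer keys with 0 reserved as "empty", the identity hash, and full-scan lookups are all simplifications chosen to keep the sketch short; a real implementation would free the old table once it empties:

```c
#include <stdint.h>

#define OLD_SIZE 8
#define NEW_SIZE 16
#define K 2                       /* entries migrated per insertion */

struct table { uint32_t keys[NEW_SIZE]; int vals[NEW_SIZE]; uint32_t size; };

static struct table old_t = { .size = OLD_SIZE };
static struct table new_t = { .size = NEW_SIZE };
static uint32_t scan = 0;         /* migration cursor into the old table */

/* Linear-probing insert; the hash is just the key itself here. */
static void raw_put(struct table *t, uint32_t key, int val)
{
    for (uint32_t i = 0; i < t->size; i++) {
        uint32_t s = (key + i) & (t->size - 1);
        if (t->keys[s] == 0 || t->keys[s] == key) {
            t->keys[s] = key;
            t->vals[s] = val;
            return;
        }
    }
}

/* Full-scan lookup: always correct even while entries migrate. */
static int raw_get(const struct table *t, uint32_t key, int *out)
{
    for (uint32_t s = 0; s < t->size; s++)
        if (t->keys[s] == key) { *out = t->vals[s]; return 1; }
    return 0;
}

/* Step 2: each insertion also moves up to K entries old -> new. */
void gradual_put(uint32_t key, int val)
{
    raw_put(&new_t, key, val);
    for (int moved = 0; moved < K && scan < OLD_SIZE; scan++) {
        if (old_t.keys[scan] != 0) {
            raw_put(&new_t, old_t.keys[scan], old_t.vals[scan]);
            old_t.keys[scan] = 0;  /* step 3: free old table once empty */
            moved++;
        }
    }
}

/* Step 1: lookups consult the new table first, then the old one. */
int gradual_get(uint32_t key, int *out)
{
    return raw_get(&new_t, key, out) || raw_get(&old_t, key, out);
}

/* Helper for seeding the old table before resizing begins. */
void old_table_put(uint32_t key, int val) { raw_put(&old_t, key, val); }
```

After enough insertions the migration cursor passes the end of the old table, every entry lives in the new table, and the old one can be deallocated.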
To ensure that the old table will be completely copied over before the new table itself needs to be enlarged, it is necessary to increase the size of the table by a factor of at least (k + 1)/k during the resizing. Linear hashing ^{[4]} is a hash table algorithm that permits incremental hash table expansion. It is implemented using a single hash table, but with two possible lookup functions.
Another way to decrease the cost of table resizing is to choose a hash function in such a way that the hashes of most values do not change when the table is resized. This approach, called consistent hashing, is prevalent in disk-based and distributed hashes, where resizing is prohibitively costly.
Ordered retrieval issue
Hash tables store data in pseudo-random locations, so accessing the data in a sorted manner is a very time-consuming operation. Other data structures such as self-balancing binary search trees generally operate more slowly (since their lookup time is O(log n)) and are rather more complex to implement than hash tables, but maintain a sorted data structure at all times. See a comparison of hash tables and self-balancing binary search trees.
Problems with hash tables
Although hash table lookups take constant time on average, the time spent can be significant. Evaluating a good hash function can be a slow operation; in particular, if simple array indexing can be used instead, that is usually faster. Hash tables in general also exhibit poor locality of reference: the data to be accessed is distributed seemingly at random in memory. Because hash tables cause access patterns that jump around, they can trigger CPU cache misses that cause long delays. Compact data structures such as arrays, searched with linear search, may be faster if the table is relatively small and keys are cheap to compare, as with simple integer keys. Since cache sizes continue to grow, what is considered "small" may be increasing; the optimal performance point varies from system to system.
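The array-versus-hash-table trade-off mentioned above can be illustrated as follows. The names here are invented for the example, and which variant is actually faster for a given size depends on the machine, the key type, and the cost of the hash function; the sketch only shows that both return the same answers.

```python
# A deliberately tiny key set: at this scale a linear scan over a
# compact array of pairs touches contiguous memory and needs no
# hash computation, so it can compete with a hash lookup.
keys = list(range(8))
table = {k: k * k for k in keys}   # hash-based lookup (Python dict)
pairs = list(table.items())        # compact array of (key, value) pairs

def linear_lookup(k):
    # Sequential (linear) search through the compact array.
    for kk, vv in pairs:
        if kk == k:
            return vv
    return None
```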
More significantly, hash tables are more difficult and error-prone to write and use. Hash tables require the design of an effective hash function for each key type, which in some situations is more difficult and time-consuming to design and debug than the simple comparison function required for a self-balancing binary search tree. In open-addressed hash tables it is fairly easy to create a poor hash function.
Additionally, in some applications, an attacker with knowledge of the hash function may be able to supply keys that create worst-case behavior by causing excessive collisions, resulting in very poor performance (i.e., a denial-of-service attack). In critical applications, either universal hashing can be used or a data structure with better worst-case guarantees may be preferable. For details, see Crosby and Wallach's Denial of Service via Algorithmic Complexity Attacks.
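Universal hashing defends against such attacks by choosing the hash function at random from a family when the table is created, so an adversary cannot precompute colliding keys. Below is a sketch of the classic Carter-Wegman construction h(x) = ((ax + b) mod p) mod m for integer keys; the class name and the particular prime are illustrative choices.

```python
import random

class UniversalHash:
    """Sketch of a Carter-Wegman universal hash family for integer
    keys: h(x) = ((a*x + b) mod p) mod m, with a and b drawn at
    random per table. An adversary who knows the scheme but not
    (a, b) cannot reliably force collisions."""

    P = (1 << 61) - 1  # a Mersenne prime larger than any expected key

    def __init__(self, m, rng=None):
        rng = rng or random.Random()
        self.m = m                        # number of buckets
        self.a = rng.randrange(1, self.P) # random multiplier, nonzero
        self.b = rng.randrange(0, self.P) # random offset

    def __call__(self, x):
        return ((self.a * x + self.b) % self.P) % self.m
```

Each table instance draws fresh (a, b), so two tables generally hash the same key to different buckets while each table remains internally consistent.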
Implementations
While many programming languages already provide hash table functionality (see language support for associative arrays), there are several independent implementations worth mentioning.
 Google Sparse Hash The Google SparseHash project contains several hash-map implementations in use at Google, with different performance characteristics, including an implementation that optimizes for space and one that optimizes for speed. The memory-optimized one is extremely memory-efficient, with only 2 bits per entry of overhead.
 SunriseDD An open source C library for hash table storage of arbitrary data objects with lock-free lookups, built-in reference counting and guaranteed-order iteration. The library can participate in external reference-counting systems or use its own built-in reference counting. It comes with a variety of hash functions and allows the use of runtime-supplied hash functions via a callback mechanism. The source code is well documented.
 uthash This is an easy-to-use hash table for C structures.
 A number of language runtimes and/or standard libraries use hash tables to implement their support for associative arrays.
 Software written to minimize memory usage can conserve memory by keeping all allocated strings in a hash table: if an already existing string is found, a pointer to that string is returned; otherwise, a new string is allocated and added to the hash table. (This is the technique normally used in Lisp for the names of variables and functions; see the documentation for the intern and intern-soft functions if you are using that language.) The memory savings achieved in this manner are usually around 40%.^{[citation needed]}
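The string-interning technique in the last item above reduces to a one-line hash table idiom. The function name below is invented for the sketch; Python's built-in sys.intern provides the same service for real.

```python
# Sketch of string interning: keep one canonical copy of every
# distinct string in a hash table and hand out pointers to it.
_interned = {}

def intern_str(s):
    # setdefault stores s on first sight and thereafter returns the
    # previously stored (canonical) object for equal strings.
    return _interned.setdefault(s, s)
```

Because every call for an equal string returns the same stored object, duplicates can be compared by identity and their storage is shared.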
See also
 Bloom filter
 Distributed hash table
 Hash function
 Rabin-Karp string search algorithm
 Hash list
 Hash tree
 Judy array
 Trie
 Stable hashing
 Extendible hashing
References
 ^ Cormen, Thomas H.; Leiserson, Charles E.; Rivest, Ronald L. & Stein, Clifford (2001). Introduction to Algorithms, second edition, MIT Press and McGraw-Hill, p. 222. ISBN 978-0-262-53196-2.
 ^ ^{a} ^{b} Tenenbaum, Aaron M.; Langsam, Yedidyah & Augenstein, Moshe J. (1990). Data Structures Using C, Prentice Hall, pp. 456–461, 472. ISBN 0-13-199746-7.
 ^ Celis, Pedro (1986). Robin Hood Hashing. Technical Report CS-86-14, Computer Science Department, University of Waterloo.
 ^ Litwin, Witold (1980). "Linear hashing: A new tool for file and table addressing". Proc. 6th Conference on Very Large Databases: 212–223.
Further reading
 Donald Knuth. The Art of Computer Programming, Volume 3: Sorting and Searching, second edition. Addison-Wesley, 1998. ISBN 0-201-89685-0. Section 6.4: Hashing, pp. 513–558.
 Thomas H. Cormen, Charles E. Leiserson, Ronald L. Rivest & Clifford Stein. Introduction to Algorithms, second edition. MIT Press and McGraw-Hill, 2001. ISBN 0-262-03293-7. Chapter 11: Hash Tables, pp. 221–252.
 Michael T. Goodrich & Roberto Tamassia. Data Structures and Algorithms in Java, 4th edition. John Wiley & Sons, Inc. ISBN 0-471-73884-0. Chapter 9: Maps and Dictionaries, pp. 369–418.
External links
 Tables and Hashing, a presentation
 A Hash Function for Hash Table Lookup by Bob Jenkins
 Hash Functions by Bret Mulvey (Pluto Scarab)  with nice graphs
 Hash functions by Paul Hsieh
 Hashtables, Part 2 by Maciej Stachowiak
 Libhashish, a feature-rich hash library (built-in hash functions, several collision strategies, extensive analysis functionality)
 NIST entry on hash tables
 Open addressing hash table removal algorithm from ICI programming language, ici_set_unassign in set.c (and other occurrences, with permission).
 The Perl Wikibook  Perl Hash Variables
 A basic explanation of how the hash table works by Reliable Software
 Lecture on Hash Tables
 Hash'em all! — free online text and file hashing with different algorithms
 qDecoder's C/C++ Hashtable API, an open-source hash table library including both dynamically allocated and static array-based hash table APIs
