Cache algorithms

Cache algorithms (also frequently called replacement algorithms or replacement policies) are optimizing instructions that a computer program or a hardware-maintained structure can follow to manage a cache of information stored on the computer. Cache size is usually limited, and when the cache is full, the program must decide which items to keep and which to discard to make room for new items.


Examples of caching algorithms are:

Least Recently Used (LRU)
LRU discards the least recently used items first. This algorithm requires keeping track of what was used when, which is expensive if one wants to make sure the algorithm always discards the least recently used item (see the sketch after this list).
Most Recently Used (MRU)
MRU discards, in contrast to LRU, the most recently used items first. This caching mechanism is used when access is unpredictable and determining the least recently used section of the cache is a high-time-complexity operation. A common example of this is database memory caches.
Pseudo-LRU (PLRU)
For caches with large associativity (generally >4 ways), the implementation cost of LRU becomes prohibitive. If a probabilistic scheme that almost always discards one of the least recently used items is sufficient, the Pseudo-LRU algorithm can be used; it needs only one bit per cache item to work.
Least Frequently Used (LFU) 
LFU counts how often an item is needed. Those that are used least often are discarded first.
Belady's Min 
The most efficient caching algorithm would be to always discard the information that will not be needed for the longest time in the future. Since it is generally impossible to predict how far in the future information will be needed, this is not implementable in hardware; however, replayed over a pre-recorded access trace in simulation, it can be used as a gauge of the effectiveness of other algorithms (see the simulation sketch below).
Adaptive Replacement Cache (ARC) 
ARC improves on LRU by constantly balancing between recency and frequency.
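
To make the LRU bookkeeping concrete, here is a minimal Python sketch built on collections.OrderedDict, which keeps keys in access order so the least recently used entry is always at the front. The class name LRUCache and its interface are illustrative choices, not taken from any particular library:

```python
from collections import OrderedDict

class LRUCache:
    """Fixed-capacity cache that discards the least recently used item."""

    def __init__(self, capacity):
        self.capacity = capacity
        self._items = OrderedDict()  # ordered from least to most recently used

    def get(self, key):
        if key not in self._items:
            return None  # miss
        self._items.move_to_end(key)  # a hit makes this key most recently used
        return self._items[key]

    def put(self, key, value):
        if key in self._items:
            self._items.move_to_end(key)
        self._items[key] = value
        if len(self._items) > self.capacity:
            self._items.popitem(last=False)  # evict the least recently used key
```

For example, with capacity 2, after put("a", 1), put("b", 2), and get("a"), a subsequent put("c", 3) evicts "b", since "a" was touched more recently.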

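Belady's Min itself can be demonstrated offline. The sketch below replays a recorded access trace and, on each miss with a full cache, evicts the resident key whose next use lies furthest in the future; the function name belady_hit_rate and the list-form trace are assumptions for illustration, and the quadratic lookahead is kept deliberately simple:

```python
def belady_hit_rate(trace, capacity):
    """Replay an access trace under Belady's Min and return the hit rate."""
    cache, hits = set(), 0
    for i, key in enumerate(trace):
        if key in cache:
            hits += 1
            continue
        if len(cache) >= capacity:
            def next_use(k):
                # Index of k's next access after position i; inf if none.
                for j in range(i + 1, len(trace)):
                    if trace[j] == k:
                        return j
                return float("inf")  # never used again: the ideal victim
            cache.remove(max(cache, key=next_use))
        cache.add(key)
    return hits / len(trace) if trace else 0.0
```

On the trace a, b, a, c, b, a with capacity 2, belady_hit_rate scores two hits where LRU would score only one, which illustrates why Min serves as an upper bound when gauging other policies.
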
Other things to consider:

  • Cost: if items have different costs, keep those items that are expensive to obtain, e.g. those that take a long time to fetch.
  • Size: if items have different sizes, the cache may want to discard a large item to store several smaller ones.
  • Time: some caches keep information that expires (e.g. a news cache, a DNS cache, or a web browser cache). The computer may discard items because they have expired. Depending on the size of the cache, no further caching algorithm to discard items may be necessary (see the expiry sketch after this list).
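
For the time-based case, expiry checks can stand in for an explicit replacement policy. Below is a minimal Python sketch of a cache with a fixed time-to-live; the name TTLCache, the ttl_seconds parameter, and the lazy deletion on lookup are illustrative assumptions:

```python
import time

class TTLCache:
    """Cache whose entries expire after a fixed time-to-live (e.g. a DNS cache)."""

    def __init__(self, ttl_seconds):
        self.ttl = ttl_seconds
        self._items = {}  # key -> (value, expiry deadline)

    def get(self, key):
        entry = self._items.get(key)
        if entry is None:
            return None
        value, expires_at = entry
        if time.monotonic() >= expires_at:
            del self._items[key]  # expired: discard rather than serve stale data
            return None
        return value

    def put(self, key, value):
        self._items[key] = (value, time.monotonic() + self.ttl)
```

A DNS-style use would be put("example.com", "192.0.2.1") with a ttl_seconds of 30: after 30 seconds the entry is silently dropped on the next lookup, so no separate eviction pass is needed while the working set stays small.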

Various algorithms also exist to maintain cache coherence, i.e. the consistency of data stored in the local caches of a shared resource.


See also

  • Page replacement algorithm

External links

  • fully associative cache
  • set associative cache
  • direct mapped cache

