Garbage collection (computer science)

In computer science, garbage collection (GC) is a form of automatic memory management. The garbage collector, or just collector, attempts to reclaim garbage, or memory used by objects that will never be accessed or mutated again by the application. Garbage collection was invented by John McCarthy around 1959 to solve the problems of manual memory management in his Lisp programming language.[1]


Garbage collection is often portrayed as the opposite of manual memory management, which requires the programmer to specify which objects to deallocate and return to the memory system. However, many systems use a combination of the two approaches, and there are other techniques being studied (such as region inference) to solve the same fundamental problem.[1]


Description

The basic principle of how a garbage collector works is:

  1. Determine what data objects in a program will not be accessed in the future
  2. Reclaim the resources used by those objects

By making manual memory deallocation unnecessary (and typically forbidding it), garbage collection frees the programmer from having to worry about releasing objects that are no longer needed, which can otherwise consume a significant amount of design effort. It also aids programmers in their efforts to make programs more stable, because it prevents several classes of runtime errors. For example, it prevents dangling pointer errors, where a reference to a deallocated object is used. (The pointer still points to the location in memory where the object or data was, even though the object or data has since been deleted and the memory may now be used for other purposes, creating a dangling pointer.) Many computer languages require garbage collection, either as part of the language specification (e.g. C# and most scripting languages) or effectively for practical implementation (e.g. formal languages like lambda calculus); these are said to be garbage-collected languages. Other languages were designed for use with manual memory management, but have garbage collected implementations (e.g. C, C++). Newer Delphi versions support garbage collected dynamic arrays, long strings, variants and interfaces. Some languages, like Modula-3, allow garbage collection and manual memory management to co-exist in the same application by using separate heaps for collected and manually managed objects; others, like D, are garbage-collected but allow the user to manually delete objects and to disable garbage collection entirely when speed is required. In any case, it is far easier to implement garbage collection as part of the language's compiler and runtime system,[citation needed] but post hoc GC systems exist, including ones that do not require recompilation. The garbage collector will almost always be closely integrated with the memory allocator.


Benefits

Garbage collection frees the programmer from manually dealing with memory allocation and deallocation. As a result, certain categories of bugs are eliminated or substantially reduced:

  • Dangling pointer bugs, which occur when a piece of memory is freed while there are still pointers to it, and one of those pointers is used.
  • Double free bugs, which occur when the program attempts to free a region of memory that is already free.
  • Certain kinds of memory leaks, in which a program fails to free memory that is no longer referenced by any variable, leading over time to memory exhaustion.

Researchers draw a distinction between "physical" and "logical" memory leaks. In a physical memory leak, the last pointer to a region of allocated memory is removed, but the memory is not freed. In a logical memory leak, a region of memory is still referenced by a pointer, but is never actually used.[2] Garbage collectors generally can do nothing about logical memory leaks. Novice programmers sometimes believe that garbage collection makes memory leaks impossible, not realizing that logical leaks are still possible.
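
For example, a logical leak typically arises when a long-lived container keeps references to objects the program will never consult again. The following sketch, in Python (a garbage-collected language), uses an invented request_log list purely for illustration; because every entry stays reachable, no collector can reclaim it:

    # A module-level log that is only ever appended to.  Every entry stays
    # reachable from the global list, so a garbage collector can never free
    # it, even though the program never reads old entries again: a logical leak.
    request_log = []

    def handle_request(request):
        result = request.upper()                 # stand-in for real work
        request_log.append((request, result))    # grows without bound
        return result

    for i in range(1000):
        handle_request("request %d" % i)         # 1000 entries now retained,
                                                 # none of which will be used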


Tracing garbage collectors

Tracing garbage collectors are the most common type of garbage collector. They first determine which objects are reachable (or potentially reachable), and then discard all remaining objects.


Reachability of an object

Informally, a reachable object can be defined as an object for which there exists some variable in the program environment that leads to it, either directly or through references from other reachable objects. More precisely, objects can be reachable in only two ways:

  1. A distinguished set of objects is assumed to be reachable; these are known as the roots. Typically, these include all the objects referenced from anywhere in the call stack (that is, all local variables and parameters in the functions currently being invoked) and any global variables.
  2. Anything referenced from a reachable object is itself reachable; more formally, the set of reachable objects is the transitive closure of the root set under the reference relation.
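
As an illustration only (not the layout any real collector uses), reachability can be computed as a plain graph traversal from the roots; here the heap is modelled as a Python dictionary mapping invented object names to the names they reference:

    # The heap modelled as a dictionary: object name -> names it references.
    heap = {
        "stack_frame": ["a", "b"],   # root: local variables and parameters
        "globals":     ["c"],        # root: global variables
        "a": ["d"],
        "b": [],
        "c": ["a"],
        "d": ["a"],                  # the cycle between a and d is still reachable
        "e": ["f"],                  # e and f are unreachable garbage
        "f": [],
    }
    roots = ["stack_frame", "globals"]

    def reachable_objects(roots, heap):
        """Transitive closure of the reference relation, starting at the roots."""
        reachable = set(roots)
        worklist = list(roots)
        while worklist:
            obj = worklist.pop()
            for child in heap[obj]:
                if child not in reachable:
                    reachable.add(child)
                    worklist.append(child)
        return reachable

    print(reachable_objects(roots, heap))    # 'e' and 'f' are absent: garbage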

The reachability definition of "garbage" is not optimal, insofar as the last time a program uses an object could be long before that object falls out of the environment scope. A distinction is sometimes drawn between syntactic garbage, those objects the program cannot possibly reach, and semantic garbage, those objects the program will in fact never again use. For example:

    var x = new Foo();
    var y = new Bar();
    x = new Quux();
    // at this point, we know that the Foo() object will
    // never be accessed - it is syntactic garbage

    if (x.check_something()) {
        x.do_something(y);
    }
    exit();
    // in the above block, y *could* be semantic garbage,
    // but we won't know until x.check_something() returns
    // some value - if it returns at all

The problem of precisely identifying semantic garbage can easily be shown to be undecidable: a program that allocates an object X, runs an arbitrary input program P, and uses X if and only if P finishes would require a semantic garbage collector to solve the halting problem. Although conservative heuristic methods for semantic garbage detection remain an active research area, essentially all practical garbage collectors focus on syntactic garbage as described here.


Basic algorithm

Tracing collectors are so called because they trace through the working set of memory. These garbage collectors perform collection in cycles. A cycle is started when the collector decides (or is notified) that it needs to reclaim storage, which in particular happens when the system is low on memory. The original method is a naive mark-and-sweep, in which the entire memory set is touched several times.


In this method, each object in memory has a flag (typically a single bit) reserved for garbage collection use only. This flag is always cleared (counter-intuitively), except during the collection cycle. The first stage of collection sweeps the entire 'root set', marking each accessible object as being 'in-use'. All objects transitively accessible from the root set are marked, as well. Finally, each object in memory is again examined; those with the in-use flag still cleared are not reachable by any program or data, and their memory is freed. (For objects which are marked in-use, the in-use flag is cleared again, preparing for the next cycle.)
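
The following Python sketch mirrors that description on a toy heap; the Obj class and its single in_use flag are assumptions of the example, not a real object header:

    # Naive mark-and-sweep over a toy heap.
    class Obj:
        def __init__(self, name):
            self.name = name
            self.refs = []                 # outgoing references
            self.in_use = False            # the per-object GC flag, normally clear

    def mark(obj):
        if not obj.in_use:
            obj.in_use = True              # flag the object as reachable
            for child in obj.refs:
                mark(child)

    def collect(all_objects, roots):
        for root in roots:                 # mark phase: walk out from the roots
            mark(root)
        survivors = []
        for obj in all_objects:            # sweep phase: examine every object
            if obj.in_use:
                obj.in_use = False         # clear the flag for the next cycle
                survivors.append(obj)
            # objects still unmarked here are unreachable; their memory is freed
        return survivors

    a, b, c = Obj("a"), Obj("b"), Obj("c")
    a.refs.append(b)                       # c is referenced by nothing
    live = collect([a, b, c], roots=[a])
    print([o.name for o in live])          # ['a', 'b']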


This method has several disadvantages, the most notable being that the entire system must be suspended during collection; no mutation of the working set can be allowed. This causes programs to 'freeze' periodically (and generally unpredictably), making it unsuitable for real-time and time-critical applications. In addition, the entire working memory must be examined, much of it twice, potentially causing problems in paged memory systems.


Because of these pitfalls, all modern tracing garbage collectors implement some variant of the tri-colour marking abstraction, but simple collectors (such as the mark-and-sweep collector) often do not make this abstraction explicit. Tri-colour marking works as follows:

  1. Create initial white, grey, and black sets; these sets will be used to maintain progress during the cycle. Initially the white set or condemned set is the set of objects that are candidates for having their memory recycled. The black set is the set of objects that cheaply can be proven to have no references to objects in the white set; in many implementations the black set starts off empty. The grey set is all the remaining objects that may or may not have references to objects in the white set (and elsewhere). These sets partition memory; every object in the system, including the root set, is in precisely one set.
  2. Pick an object from the grey set. Blacken this object (move it to the black set), by greying all the white objects it references directly.
  3. Repeat the previous step until the grey set is empty.
  4. When there are no more objects in the grey set, then all the objects remaining in the white set are provably not reachable and the storage occupied by them can be reclaimed.
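
A minimal sketch of one such cycle, assuming the same toy heap-as-dictionary model used above (object names are invented), might look like this; note that it blackens an object only after greying the white objects it references, which is what preserves the invariant discussed below:

    def tricolour_collect(heap, roots):
        """One collection cycle with explicit white/grey/black sets.

        `heap` maps each object name to the names it references; this toy
        representation is an assumption of the sketch.
        """
        white = set(heap)                  # step 1: everything starts condemned
        grey, black = set(), set()
        for r in roots:                    # the roots are greyed first
            white.discard(r)
            grey.add(r)

        while grey:                        # steps 2 and 3
            obj = grey.pop()
            for child in heap[obj]:
                if child in white:         # grey the white objects it references,
                    white.remove(child)
                    grey.add(child)
            black.add(obj)                 # then blacken the object itself

        return white                       # step 4: still-white objects are garbage

    heap = {"r": ["a"], "a": ["b"], "b": [], "x": ["y"], "y": ["x"]}
    print(tricolour_collect(heap, roots=["r"]))   # {'x', 'y'} can be reclaimed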

The tri-colour marking algorithm preserves an important invariant:

No black object points directly to a white object.

This ensures that the white objects can be safely destroyed once the grey set is empty. (Some variations on the algorithm do not preserve the tricolour invariant but they use a modified form for which all the important properties hold.)


The tri-colour method has an important advantage: it can be performed 'on-the-fly', without halting the system for significant time periods. This is accomplished by marking objects as they are allocated and during mutation, maintaining the various sets. By monitoring the size of the sets, the system can perform garbage collection periodically, rather than as-needed. Also, the need to touch the entire working set each cycle is avoided.


Implementation strategies

In order to implement the basic tri-colour algorithm, several important design decisions must be made, which can significantly affect the performance characteristics of the garbage collector.


Moving vs. non-moving

Once the unreachable set has been determined, the garbage collector may simply release the unreachable objects and leave everything else as it is, or it may copy some or all of the reachable objects into a new area of memory, updating all references to those objects as needed. These are called "non-moving" and "moving" garbage collectors, respectively.


At first, a moving GC strategy may seem inefficient and costly compared to the non-moving approach, since much more work would appear to be required on each cycle. In fact, however, the moving GC strategy leads to several performance advantages, both during the garbage collection cycle itself and during actual program execution:

  • No additional work is required to reclaim the space freed by dead objects; the entire region of memory from which reachable objects were moved can be considered free space. In contrast, a non-moving GC must visit each unreachable object and somehow record that the memory it alone occupied is available.
  • Similarly, new objects can be allocated very quickly. Since large contiguous regions of memory are usually made available by the moving GC strategy, new objects can be allocated by simply incrementing a 'free memory' pointer. A non-moving strategy may, after some time, lead to a heavily fragmented heap, requiring expensive consultation of "free lists" of small available blocks of memory in order to allocate new objects.
  • If an appropriate traversal order is used, objects that refer to each other frequently can be moved very close to each other in memory, increasing the likelihood that they will be located in the same cache line or virtual memory page. This can significantly speed up access to these objects through these references.

One disadvantage of a moving garbage collector is that it only allows access through references that are managed by the garbage collected environment, and does not allow pointer arithmetic. This is because any native pointers to objects will be invalidated when the garbage collector moves the object (they become dangling pointers). For interoperability with native code, the garbage collector must copy the object contents to a location outside of the garbage collected region of memory. An alternative approach is to allow pinning the object in memory, preventing the garbage collector from moving it, and allowing the memory to be directly shared with native pointers (and possibly allowing pointer arithmetic).[2]


Copying vs. mark-and-sweep vs. mark-and-don't-sweep

To further refine the distinction, tracing collectors can also be divided by considering how the three sets of objects (white, grey, and black) are maintained during a collection cycle.


The most straightforward approach is the semi-space collector, which dates to 1969. In this moving GC scheme, memory is partitioned into a "from space" and a "to space". Initially, objects are allocated in the "to space" until it becomes full and a collection is triggered. At the start of a collection, the "to space" becomes the "from space", and vice versa. The objects reachable from the root set are copied from the "from space" to the "to space". These objects are scanned in turn, and all objects that they point to are copied to the "to space", until all reachable objects have been copied. Once the program continues execution, new objects are again allocated in the "to space" until it fills up and the process is repeated. This approach has the advantage of conceptual simplicity (the three object colour sets are implicitly constructed during the copying process), but the disadvantage that a (possibly) very large contiguous region of free memory is necessarily required on every collection cycle. This technique is also known as stop-and-copy. Cheney's algorithm is an improvement on the semi-space collector.
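
The following toy Python sketch works in the spirit of Cheney's algorithm; the Obj class, its forward field (used as a forwarding pointer) and the list standing in for "to space" are all assumptions of the illustration:

    # A toy stop-and-copy collection.
    class Obj:
        def __init__(self, name, refs=()):
            self.name = name
            self.refs = list(refs)
            self.forward = None            # set once the object has been copied

    def copy_collect(roots):
        to_space = []

        def copy(obj):
            if obj.forward is None:        # not yet copied: evacuate it
                clone = Obj(obj.name, obj.refs)
                obj.forward = clone
                to_space.append(clone)
            return obj.forward

        new_roots = [copy(r) for r in roots]
        scan = 0
        while scan < len(to_space):        # Cheney's scan pointer
            obj = to_space[scan]
            obj.refs = [copy(c) for c in obj.refs]   # copy children, fix references
            scan += 1
        return new_roots, to_space         # the old "from space" is now free

    a = Obj("a"); b = Obj("b", [a]); g = Obj("g", [a])   # g is unreachable
    roots, heap = copy_collect([b])
    print([o.name for o in heap])          # ['b', 'a']; 'g' was never copied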


A mark and sweep garbage collector maintains a bit (or two) with each object to record whether it is white or black; the grey set is either maintained as a separate list (such as the process stack) or using another bit. As the reference tree is traversed during a collection cycle (the "mark" phase), these bits are manipulated by the collector to reflect the current state. A final "sweep" of the memory areas then frees white objects. The mark and sweep strategy has the advantage that, once the unreachable set is determined, either a moving or non-moving collection strategy can be pursued; this choice of strategy can even be made at runtime, as available memory permits. It has the disadvantage of "bloating" objects by a small amount.


A mark and don't sweep garbage collector, like the mark-and-sweep, maintains a bit with each object to record whether it is white or black; the grey set is either maintained as a separate list (such as the process stack) or using another bit. There are two key differences here. First, black and white mean different things than they do in the mark and sweep collector. In a "mark and don't sweep" system, all reachable objects are always black. An object is marked black at the time it is allocated, and it will stay black even if it becomes unreachable. A white object is unused memory and may be allocated. Second, the interpretation of the black/white bit can change. Initially, the black/white bit may have the sense of (0=white, 1=black). If an allocation operation ever fails to find any available (white) memory, that means all objects are marked used (black). The sense of the black/white bit is then inverted (for example, 0=black, 1=white). Everything becomes white. This momentarily breaks the invariant that reachable objects are black, but a full marking phase follows immediately, to mark them black again. Once this is done, all unreachable memory is white. No "sweep" phase is necessary.
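
A toy sketch of this bit-sense trick, with a fixed-size heap of numbered cells and invented helper names, might look as follows; allocation flips the meaning of the bit and re-marks only when it cannot find a white cell:

    # Toy "mark and don't sweep" heap of numbered cells.  Which value of the
    # per-cell bit means "black" is a property of the whole heap and can flip.
    HEAP_SIZE = 8
    bits = [0] * HEAP_SIZE                 # per-cell black/white bit
    black_is = 1                           # current meaning of "black"
    refs = {i: [] for i in range(HEAP_SIZE)}   # cell -> cells it references
    roots = []                             # root cells (empty in this demo)

    def mark(cell):                        # blacken everything reachable
        if bits[cell] != black_is:
            bits[cell] = black_is
            for child in refs[cell]:
                mark(child)

    def allocate():
        global black_is
        for cell in range(HEAP_SIZE):
            if bits[cell] != black_is:     # a white cell is free memory
                bits[cell] = black_is      # allocated objects are always black
                refs[cell] = []
                return cell
        # No white cell left: flip the sense of the bit (everything turns
        # white at once), then immediately re-mark the reachable cells black.
        black_is = 1 - black_is
        for r in roots:
            mark(r)
        for cell in range(HEAP_SIZE):
            if bits[cell] != black_is:     # genuinely unreachable, hence free
                bits[cell] = black_is
                refs[cell] = []
                return cell
        raise MemoryError("every cell is reachable")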


Generational GC (aka Ephemeral GC)

It has been empirically observed that in many programs, the most recently created objects are also those most likely to quickly become unreachable (known as infant mortality or the generational hypothesis). A generational GC divides objects into generations and, on most cycles, will place only the objects of a subset of generations into the initial white (condemned) set. Furthermore, the runtime system maintains knowledge of when references cross generations by observing the creation and overwriting of references. When the garbage collector runs, it may be able to use this knowledge to prove that some objects in the initial white set are unreachable without having to traverse the entire reference tree. If the generational hypothesis holds, this results in much faster collection cycles while still reclaiming most unreachable objects.


In order to implement this concept, many[citation needed] generational garbage collectors use separate memory regions for different ages of objects. When a region becomes full, those few objects that are referenced from older memory regions are promoted (copied) up to the next highest region, and the entire region can then be overwritten with fresh objects. This technique permits very fast incremental garbage collection, since the garbage collection of only one region at a time is all that is typically required.
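
As a rough illustration of the idea (not any particular runtime's design), the sketch below keeps two generations as Python sets and lets an invented write_ref helper play the role of the write barrier that maintains the remembered set:

    # Toy two-generation heap.  A minor collection traces only the young
    # generation, starting from the roots plus a remembered set of old
    # objects known (via the write barrier) to reference young ones.
    class Obj:
        def __init__(self):
            self.refs = []

    young, old = set(), set()
    remembered = set()                   # old objects that point at young ones

    def allocate():
        obj = Obj()
        young.add(obj)                   # new objects start in the young generation
        return obj

    def write_ref(src, dst):
        """Store a reference src -> dst, maintaining the remembered set."""
        src.refs.append(dst)
        if src in old and dst in young:
            remembered.add(src)

    def minor_collect(roots):
        """Trace only the young generation; survivors are promoted."""
        live = set(r for r in roots if r in young)
        worklist = list(roots) + list(remembered)
        while worklist:
            obj = worklist.pop()
            for child in obj.refs:
                if child in young and child not in live:
                    live.add(child)
                    worklist.append(child)
        dead = young - live              # young objects nobody can reach
        young.clear()
        old.update(live)                 # promote the survivors
        remembered.clear()               # no old-to-young references remain
        return len(dead)

    r = allocate()                       # a young root
    t = allocate(); write_ref(r, t)      # kept alive through the root
    allocate()                           # a young object nobody references
    print(minor_collect(roots=[r]))      # 1: only the unreferenced object died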


Generational garbage collection is a heuristic approach, and some unreachable objects may not be reclaimed on each cycle. It may therefore occasionally be necessary to perform a full mark and sweep or copying garbage collection to reclaim all available space. In fact, runtime systems for modern programming languages (such as Java and the .NET Framework) usually use some hybrid of the various strategies that have been described thus far; for example, most collection cycles might look only at a few generations, while occasionally a mark-and-sweep is performed, and even more rarely a full copying is performed to combat fragmentation. The terms "minor cycle" and "major cycle" are sometimes used to describe these different levels of collector aggression.


Stop-the-world vs. incremental vs. concurrent

Simple stop-the-world garbage collectors completely halt execution of the program to run a collection cycle, thus guaranteeing that new objects are not allocated and objects do not suddenly become unreachable while the collector is running. This has the obvious disadvantage that the program can perform no useful work while a collection cycle is running (sometimes called the "embarrassing pause").


Incremental garbage collectors are designed to reduce this disruption by interleaving their work with activity from the main program. Careful design is necessary to ensure that the main program does not interfere with the garbage collector and vice versa; for example, when the program needs to allocate a new object, the runtime system may either need to suspend it until the collection cycle is complete, or somehow notify the garbage collector that there exists a new, reachable object.


Finally, a concurrent garbage collector can run concurrently in real time with the main program on a symmetric multiprocessing machine. Complex locking regimes may be necessary in order to guarantee correctness, and cache issues also make this less helpful than one might imagine. Nonetheless, concurrent GC may be desirable for SMP applications with high performance requirements.


Precise vs. conservative and internal pointers

Some collectors can correctly identify all pointers (references) in an object; these are called "precise" (or "exact" or "accurate") collectors, the opposite being a "conservative" or "partly conservative" collector. "Conservative" collectors have to assume that any bit pattern in memory is a pointer if (when interpreted as a pointer) it would point into any allocated object. Thus, conservative collectors may have some false negatives, where storage is not released because of accidental fake pointers, but this is rarely a significant drawback in practice. Whether a precise collector is practical usually depends on type safety properties of the programming language.


A related issue concerns internal pointers, or pointers to fields within an object. If the semantics of a language allow internal pointers, then there may be many different addresses that can refer to the same object, which complicates determining whether an object is garbage or not.
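
A conservative collector's pointer test can be pictured as follows; the sketch simulates addresses with plain integers and invented block bounds, and accepting any address inside a block (not just its start) is what also accommodates internal pointers:

    # Conservative pointer identification over a toy heap.  Allocated blocks
    # are (start, size) pairs and "memory words" are plain integers; real
    # collectors examine actual machine words, so this is only an illustration.
    allocated_blocks = [(0x1000, 64), (0x2000, 128), (0x5000, 16)]

    def block_containing(word):
        """Return the block a word would point into, or None."""
        for start, size in allocated_blocks:
            if start <= word < start + size:
                return (start, size)
        return None

    # Any word that happens to look like an address inside a block keeps that
    # block alive, possibly by accident (a "fake pointer").
    words_seen_on_stack = [0x2040, 0x1234, 0x9999, 0x5000]
    retained = {block_containing(w) for w in words_seen_on_stack} - {None}
    print(retained)        # the 0x2000 and 0x5000 blocks are kept; the 0x1000
                           # block is not referenced by any word and may be freed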


Performance implications

Tracing garbage collectors require some implicit runtime overhead that may be beyond the control of the programmer, and can sometimes lead to performance problems. For example, commonly used stop-the-world garbage collectors, which pause program execution at arbitrary times, may make GC languages inappropriate for some embedded systems, high-performance server software, and applications with real-time needs.


A more fundamental issue is that garbage collectors violate locality of reference, since they deliberately go out of their way to find bits of memory that haven't been accessed recently. The performance of modern computer architectures is increasingly tied to caching, which depends on the assumption of locality of reference for its effectiveness. Some garbage collection methods result in better locality of reference than others. Generational garbage collection is relatively cache-friendly, and copying collectors automatically defragment memory, helping to keep related data together. Nonetheless, poorly timed garbage collection cycles could have a severe performance impact on some computations, and for this reason many runtime systems provide mechanisms that allow the program to temporarily suspend, delay or activate garbage collection cycles.


Despite these issues, for many practical purposes, allocation/deallocation-intensive algorithms implemented in modern garbage collected languages can actually be faster than their equivalents using explicit memory management (at least without optimizations by an expert programmer).[citation needed] A major reason for this is that the garbage collector allows the runtime system to amortize allocation and deallocation operations in a potentially advantageous fashion.[citation needed] Each has different sorts of overheads:

Manual allocation:
  • search for best/first-fit block of sufficient size
  • free list maintenance
Garbage collection:
  • locate reachable objects
  • copy reachable objects for moving collectors
  • read/write barriers for incremental collectors
  • search for best/first-fit block and free list maintenance for non-moving collectors

It is difficult to compare the two cases directly, as their behavior depends on the situation. For example, in the best case for a garbage-collecting system, allocation just increments a pointer; in the best case for manual allocation, the allocator maintains free lists of specific sizes and allocation only requires following a pointer. However, this size segregation usually causes a large degree of external fragmentation, which can impact cache behaviour.
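
The two best cases can be sketched side by side; the numbers, size classes and helper names below are invented for illustration:

    # Best-case allocation paths contrasted.

    # (a) Bump allocation, the typical fast path after a moving
    #     (compacting) collection has produced one large free region:
    heap_top, heap_limit = 0, 1 << 20

    def bump_alloc(size):
        global heap_top
        if heap_top + size > heap_limit:
            raise MemoryError("a collection cycle would be triggered here")
        addr = heap_top
        heap_top += size                   # allocation is a single increment
        return addr

    # (b) Size-segregated free lists, the fast path of a manual allocator:
    free_lists = {16: [0x1000, 0x1010], 32: [0x2000]}   # size class -> free blocks

    def freelist_alloc(size):
        blocks = free_lists.get(size)
        if not blocks:
            raise MemoryError("size class empty; would refill or split here")
        return blocks.pop()                # allocation follows one pointer

    print(bump_alloc(16), bump_alloc(32))  # 0 16
    print(hex(freelist_alloc(16)))         # 0x1010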


The overhead of write barriers is more likely to be noticeable in an imperative-style program, which frequently writes pointers into existing data structures, than in a functional-style program, which constructs data only once and never changes it.
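
One common form of write barrier, sketched here in Python with the tri-colour sets from the earlier sketch (an illustration, not a specific collector's code), intercepts every pointer store and re-greys a white object that is being stored into a black one:

    # A Dijkstra-style "insertion" write barrier for an incremental
    # tri-colour collector.
    white, grey, black = set(), set(), set()

    def write_barrier(src, field, dst):
        """Intercept every pointer store `src.field = dst`."""
        if src in black and dst in white:
            white.remove(dst)              # re-grey the stored object so the
            grey.add(dst)                  # collector will still scan it,
                                           # preserving "no black points to white"
        setattr(src, field, dst)           # then perform the actual store

Every pointer store in the running program pays the cost of this check, which is why the overhead is more visible in programs that frequently rewrite existing data structures.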


Some advances in garbage collection can be understood as reactions to performance issues. Early collectors were stop-the-world collectors, but the pauses this approach introduced proved distracting in interactive applications. Incremental collection avoided this disruption, but at the cost of decreased efficiency due to the need for barriers. Generational collection techniques are used with both stop-the-world and incremental collectors to increase performance; the trade-off is that some garbage is not detected as garbage for longer than normal.


Reference counting

Main article: reference counting

Reference counting is a form of automatic memory management where each object has a count of the number of references to it. An object's reference count is incremented when a reference to it is created, and decremented when a reference is destroyed. The object's memory is reclaimed when the count reaches zero.


There are two major disadvantages to reference counting:

  • If two or more objects refer to each other, they can create a cycle in which neither will be collected, because their mutual references never let their reference counts reach zero. Some GC systems that use reference counting (such as the one in CPython) employ specific cycle-detecting algorithms to deal with this issue.
  • In naive implementations, each assignment of a reference and each reference falling out of scope often require modifications of one or more reference counters. However, optimizations are described in the literature. When used in a multithreaded environment, these modifications (increment and decrement) may need to be atomic (interlocked), which is usually a very expensive operation.
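
The counter updates, and the reason cycles defeat them, can be sketched as follows; the rc field and helper functions are inventions of this example rather than any real runtime's machinery:

    # Minimal reference counting.
    class Obj:
        def __init__(self):
            self.rc = 0
            self.refs = []

    def new_ref(target):
        target.rc += 1                     # a reference was created
        return target

    def drop_ref(target):
        target.rc -= 1                     # a reference was destroyed
        if target.rc == 0:                 # nothing refers to it any more:
            for child in target.refs:      # reclaim it now, dropping the
                drop_ref(child)            # references it held in turn
            target.refs = []
            print("freed one object")

    a, b = Obj(), Obj()
    a.refs.append(new_ref(b))
    b.refs.append(new_ref(a))              # a and b now form a cycle
    root = new_ref(a)                      # one external reference to the cycle
    drop_ref(root)                         # the external reference goes away,
    print(a.rc, b.rc)                      # but both counts stay at 1 (prints
                                           # "1 1"): the cycle is never reclaimed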


Escape analysis

Main article: Escape analysis

Escape analysis can be used to convert heap allocations to stack allocations, thus reducing the amount of work the garbage collector has to do.
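
For illustration (expressed in Python only for consistency with the other sketches; CPython itself does not stack-allocate objects), the difference between an allocation that escapes and one that does not looks like this:

    def average(numbers):
        # `totals` never escapes: no reference to it is returned or stored
        # anywhere that outlives the call, so a compiler with escape analysis
        # could place it on the stack instead of the garbage-collected heap.
        totals = [0, 0]
        for n in numbers:
            totals[0] += n
            totals[1] += 1
        return totals[0] / totals[1]

    def make_accumulator():
        # `state` escapes through the returned closure, so it must remain a
        # heap allocation managed by the garbage collector.
        state = [0]
        def add(n):
            state[0] += n
            return state[0]
        return add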


Availability

Generally speaking, higher-level programming languages are more likely to have garbage collection as a standard feature. In languages that do not have it built in, garbage collection can often be added as a library, as with the Boehm garbage collector for C and C++.


Functional programming languages, like ML, Haskell, and APL, have garbage collection built in. Lisp, which introduced functional programming, is especially notable for using garbage collection before this technique was commonly used.


Other dynamic languages, such as Ruby (but not Perl or PHP, which use reference counting), also tend to use GC. Object-oriented programming languages like Smalltalk, Java and ECMAScript usually provide integrated garbage collection, a notable exception being C++.


Historically, languages intended for beginners, such as BASIC and the Lisp-derived Logo, have often used garbage collection for variable-length data types, such as strings and lists, so as to not burden novice programmers with the hassles of memory management. On early microcomputers, with their limited memory and slow processors, garbage collection could often cause apparently random, inexplicable pauses in the midst of program operation.


Limited environments

Garbage collection is rarely used on embedded or real-time systems because of the perceived need for very tight control over the use of limited resources. However, garbage collectors compatible with such limited environments have been developed.[3] The Microsoft .NET Micro Framework and Java Platform, Micro Edition are embedded software platforms that, like their larger cousins, include garbage collection.


See also

  • Cheney's algorithm
  • Memory management
  • Reversible computing
  • Memory leak
  • Weak reference

References

  1. ^ Note that there is an ambiguity of terms, as theory often uses the terms manual garbage-collection and automatic garbage-collection rather than manual memory management and garbage-collection, and does not restrict garbage-collection to memory management, rather considering that any logical or physical resource may be garbage-collected.
  2. ^ Maebe, Ronsse, and De Bosschere, "Precise detection of memory leaks." http://www.cs.virginia.edu/woda2004/papers/maebe.pdf
  3. ^ Wei Fu and Carl Hauser, "A Real-Time Garbage Collection Framework for Embedded Systems". ACM SCOPES '05, 2005. http://portal.acm.org/ft_gateway.cfm?id=1140392&type=pdf&coll=GUIDE&dl=GUIDE&CFID=15151515&CFTOKEN=6184618

