Kernel (computer science)
A kernel connects the application software to the hardware of a computer.

In computer science, the kernel is the central component of most computer operating systems (OS). Its responsibilities include managing the system's resources (the communication between hardware and software components).[1] As a basic component of an operating system, a kernel provides the lowest-level abstraction layer for the resources (especially memory, processors and I/O devices) that application software must control to perform its function. It typically makes these facilities available to application processes through inter-process communication mechanisms and system calls.


These tasks are done differently by different kernels, depending on their design and implementation. While monolithic kernels try to achieve these goals by executing all the code in the same address space to increase the performance of the system,[citation needed] microkernels run most of their services in user space, aiming to improve maintainability and modularity of the codebase.[2] A range of possibilities exists between these two extremes.


Overview

A typical vision of a computer architecture as a series of abstraction layers: hardware, firmware, assembler, kernel, operating system and applications (see also Tanenbaum 79).

On the definition of 'kernel', Jochen Liedtke said that the word is "traditionally used to denote the part of the operating system that is mandatory and common to all other software."[3]


Most operating systems rely on the kernel concept. The existence of a kernel is a natural consequence of designing a computer system as a series of abstraction layers,[4] each relying on the functions of layers beneath itself. The kernel, from this viewpoint, is simply the name given to the lowest level of abstraction that is implemented in software. In order to avoid having a kernel, one would have to design all the software on the system not to use abstraction layers; this would increase the complexity of the design to such a point that only the simplest systems could feasibly be implemented.


While it is today mostly called the kernel, the same part of the operating system has also in the past been known as the nucleus or core.[5][6][1][7] (Note, however, that the term core has also been used to refer to the primary memory of a computer system, typically because some early computers used a form of memory called core memory.)


In most cases, the boot loader starts executing the kernel in supervisor mode.[8] The kernel then initializes itself and starts the first process. After this, the kernel does not typically execute directly, only in response to external events (e.g. via system calls used by applications to request services from the kernel, or via interrupts used by the hardware to notify the kernel of events). Additionally, the kernel typically provides a loop that is executed whenever no processes are available to run; this is often called the idle process.
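As a minimal sketch of such an idle loop, assuming two hypothetical primitives, schedule() to hand the CPU to any runnable process and cpu_halt() to wait for the next interrupt (neither name comes from a real kernel):

    /* Assumed primitives, declared only so the sketch is self-contained. */
    void schedule(void);   /* hand the CPU to a runnable process, if there is one */
    void cpu_halt(void);   /* wait (often in a low-power state) until the next interrupt */

    /* The idle loop: executed only when no other process is available to run. */
    void idle_process(void)
    {
        for (;;) {
            schedule();
            cpu_halt();
        }
    }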


Kernel development is considered one of the most complex and difficult tasks in programming.[9] Its central position in an operating system implies the necessity for good performance, which defines the kernel as a critical piece of software and makes its correct design and implementation difficult. For various reasons, a kernel might not even be able to use the abstraction mechanisms it provides to other software. Such reasons include memory management concerns (for example, a user-mode function might rely on memory being subject to demand paging, but as the kernel itself provides that facility it cannot use it, because then it might not remain in memory to provide that facility) and lack of reentrancy, thus making its development even more difficult for software engineers.


A kernel will usually provide features for low-level scheduling[10] of processes (dispatching), inter-process communication, process synchronization, context switching, manipulation of process control blocks, interrupt handling, process creation and destruction, and process suspension and resumption (see process states).[5][7]


Kernel basic facilities

The kernel's primary purpose is to manage the computer's resources and allow other programs to run and use these resources.[1] Typically, the resources consist of:

  • The CPU (frequently called the processor). This is the most central part of a computer system, responsible for running or executing programs on it. The kernel takes responsibility for deciding at any time which of the many running programs should be allocated to the processor or processors (each of which can usually run only one program at a time).
  • The computer's memory. Memory is used to store both program instructions and data. Typically, both need to be present in memory in order for a program to execute. Often multiple programs will want access to memory, frequently demanding more memory than the computer has available. The kernel is responsible for deciding which memory each process can use, and determining what to do when not enough is available.
  • Any Input/Output (I/O) devices present in the computer, such as keyboards, mice, disk drives, printers and displays. The kernel allocates requests from applications to perform I/O to an appropriate device (or subsection of a device, in the case of files on a disk or windows on a display) and provides convenient methods for using the device (typically abstracted to the point where the application does not need to know implementation details of the device).

Key aspects of resource management are the definition of an execution domain (address space) and the protection mechanism used to mediate access to the resources within a domain.[1]


Kernels also usually provide methods for synchronization and communication between processes (called inter-process communication or IPC).


A kernel may implement these features itself, or rely on some of the processes it runs to provide the facilities to other processes, although in this case it must provide some means of IPC to allow processes to access the facilities provided by each other.


Finally, a kernel must provide running programs with a method to make requests to access these facilities.


Process management

The main task of a kernel is to allow the execution of applications and support them with features such as hardware abstractions. A process defines which memory portions the application can access.[11] (For this introduction, process, application and program are used as synonyms.) Kernel process management must take into account the hardware's built-in support for memory protection.[12]


To run an application, a kernel typically sets up an address space for the application, loads the file containing the application's code into memory (perhaps via demand paging), sets up a stack for the program and branches to a given location inside the program, thus starting its execution.[13]
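For illustration only, that sequence might be sketched as below; every type and helper here (address_space_create, load_image, stack_setup, process_create, process_run) is a hypothetical name invented for the example, not the API of any real kernel:

    #include <stddef.h>
    #include <stdint.h>

    /* Hypothetical types and helpers, declared only so the sketch is self-contained. */
    struct address_space;
    struct process;
    struct address_space *address_space_create(void);
    uintptr_t load_image(struct address_space *as, const char *path);  /* may map pages lazily */
    uintptr_t stack_setup(struct address_space *as, size_t size);
    struct process *process_create(struct address_space *as, uintptr_t entry, uintptr_t stack);
    void process_run(struct process *p);

    /* Sketch of starting an application, following the steps described above. */
    void kernel_exec(const char *path)
    {
        struct address_space *as = address_space_create();     /* 1. create an address space   */
        uintptr_t entry = load_image(as, path);                 /* 2. load the program's code   */
        uintptr_t stack = stack_setup(as, 64 * 1024);           /* 3. set up a stack            */
        struct process *p = process_create(as, entry, stack);
        process_run(p);                                         /* 4. branch to the entry point */
    }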


Multi-tasking kernels are able to give the user the illusion that the number of processes being run simultaneously on the computer is higher than the maximum number of processes the computer is physically able to run simultaneously. Typically, the number of processes a system may run simultaneously is equal to the number of CPUs installed (however this may not be the case if the processors support simultaneous multithreading).


In a pre-emptive multitasking system, the kernel will give every program a slice of time and switch from process to process so quickly that it appears to the user as if these processes were being executed simultaneously. The kernel uses scheduling algorithms to determine which process runs next and how much time it will be given. The algorithm chosen may allow for some processes to have higher priority than others. The kernel generally also provides these processes a way to communicate; this is known as inter-process communication (IPC) and the main approaches are shared memory, message passing and remote procedure calls (see concurrent computing).
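A hedged sketch of the simplest such policy, round-robin time slicing driven by the timer interrupt, follows; the slice length, the structures and the context_switch() routine are all assumptions made for the example:

    #define TIME_SLICE_TICKS 10         /* length of one time slice, in timer ticks (assumed) */

    struct process {
        struct process *next;           /* the run queue is kept as a circular list */
        /* saved registers, address space, priority, etc. would live here */
    };

    static struct process *current;     /* process currently on the CPU */
    static unsigned ticks_left = TIME_SLICE_TICKS;

    /* Assumed low-level routine, usually written in assembly. */
    void context_switch(struct process *from, struct process *to);

    /* Called from the timer interrupt: charge the running process one tick
     * and pre-empt it once its slice is used up. */
    void timer_tick(void)
    {
        if (--ticks_left == 0) {
            struct process *prev = current;
            current = current->next;    /* round robin: take the next process in the ring */
            ticks_left = TIME_SLICE_TICKS;
            if (current != prev)
                context_switch(prev, current);
        }
    }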


Other systems (particularly on smaller, less powerful computers) may provide co-operative multitasking, where each process is allowed to run uninterrupted until it makes a special request that tells the kernel it may switch to another process. Such requests are known as "yielding", and typically occur in response to requests for inter-process communication, or while waiting for an event to occur. Older versions of Windows and Mac OS both used co-operative multitasking but switched to pre-emptive schemes as the power of the computers to which they were targeted grew.


The operating system might also support multiprocessing (SMP or Non-Uniform Memory Access); in that case, different programs and threads may run on different processors. A kernel for such a system must be designed to be re-entrant, meaning that it may safely run two different parts of its code simultaneously. This typically means providing synchronization mechanisms (such as spinlocks) to ensure that no two processors attempt to modify the same data at the same time.
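As an illustration, a spinlock can be built from a single atomic exchange; the sketch below uses the GCC/Clang __atomic built-ins and omits details (such as disabling pre-emption) that a real kernel would need:

    #include <stdbool.h>

    typedef struct { volatile bool locked; } spinlock_t;

    /* Spin until the atomic exchange returns false, i.e. this CPU was the one
     * that flipped the flag from unlocked to locked. */
    static void spin_lock(spinlock_t *l)
    {
        while (__atomic_exchange_n(&l->locked, true, __ATOMIC_ACQUIRE))
            ;   /* busy-wait; another processor currently holds the lock */
    }

    static void spin_unlock(spinlock_t *l)
    {
        __atomic_store_n(&l->locked, false, __ATOMIC_RELEASE);
    }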


Memory management

The kernel has full access to the system's memory and must allow processes to safely access this memory as they require it. Often the first step in doing this is virtual addressing, usually achieved by paging and/or segmentation. Virtual addressing allows the kernel to make a given physical address appear to be another address, the virtual address. Virtual address spaces may be different for different processes; the memory that one process accesses at a particular (virtual) address may be different memory from what another process accesses at the same address. This allows every program to behave as if it is the only one (apart from the kernel) running and thus prevents applications from crashing each other.[13]
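To make the idea of paging concrete, here is a deliberately simplified sketch: a virtual address is split into a page number and an offset, and the page number is looked up in the process's own table. The single-level table and 4 KiB page size are assumptions chosen for brevity; real processors use multi-level or hashed structures:

    #include <stdbool.h>
    #include <stdint.h>

    #define PAGE_SHIFT 12                     /* assume 4 KiB pages */
    #define PAGE_SIZE  (1u << PAGE_SHIFT)
    #define NUM_PAGES  1024                   /* tiny single-level table, for illustration only */

    struct pte {                              /* one page-table entry */
        uint32_t frame;                       /* physical frame number */
        bool     present;                     /* is the page currently in RAM? */
    };

    /* Translate a virtual address through one process's page table.
     * Returns false when the page is not mapped or not present, which on
     * real hardware raises a page fault that the kernel must handle. */
    bool translate(const struct pte table[NUM_PAGES], uint32_t vaddr, uint32_t *paddr)
    {
        uint32_t page   = vaddr >> PAGE_SHIFT;
        uint32_t offset = vaddr & (PAGE_SIZE - 1);

        if (page >= NUM_PAGES || !table[page].present)
            return false;

        *paddr = (table[page].frame << PAGE_SHIFT) | offset;
        return true;
    }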


On many systems, a program's virtual address may refer to data which is not currently in memory. The layer of indirection provided by virtual addressing allows the operating system to use other data stores, like a hard drive, to store what would otherwise have to remain in main memory (RAM). As a result, operating systems can allow programs to use more memory than the system has physically available. When a program needs data which is not currently in RAM, the CPU signals to the kernel that this has happened, and the kernel responds by writing the contents of an inactive memory block to disk (if necessary) and replacing it with the data requested by the program. The program can then be resumed from the point where it was stopped. This scheme is generally known as demand paging.
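A hedged sketch of that fault-handling path follows; evict_some_page(), read_page_from_disk(), map_page() and resume() are helper names invented for the example:

    #include <stdint.h>

    struct process;

    /* Assumed helpers. */
    uint32_t evict_some_page(void);                      /* pick a victim frame, writing it to disk if it was modified */
    void read_page_from_disk(struct process *p, uint32_t vpage, uint32_t frame);
    void map_page(struct process *p, uint32_t vpage, uint32_t frame);
    void resume(struct process *p);                      /* restart the faulting instruction */

    /* Invoked when a process touches a virtual page that is not present in RAM. */
    void page_fault_handler(struct process *p, uint32_t faulting_vpage)
    {
        uint32_t frame = evict_some_page();              /* make room, writing an inactive block to disk if necessary */
        read_page_from_disk(p, faulting_vpage, frame);   /* bring in the requested data */
        map_page(p, faulting_vpage, frame);              /* update the page table */
        resume(p);                                       /* the program continues where it stopped */
    }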


Virtual addressing also allows creation of virtual partitions of memory in two disjointed areas, one being reserved for the kernel (kernel space) and the other for the applications (user space). The applications are not permitted by the processor to address kernel memory, thus preventing an application from damaging the running kernel. This fundamental partition of memory space has contributed much to the current designs of general-purpose kernels and is almost universal in such systems, although some research kernels (e.g. Singularity) take other approaches.


Device management

To perform useful functions, processes need access to the peripherals connected to the computer, which are controlled by the kernel through device drivers. For example, to show the user something on the screen, an application would make a request to the kernel, which would forward the request to its display driver, which is then responsible for actually plotting the character/pixel.[13]
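One common way to express that forwarding, sketched here with invented names rather than any real kernel's driver API, is a table of function pointers that each driver fills in and the kernel calls through:

    #include <stddef.h>

    /* A hypothetical, minimal driver interface. */
    struct device;
    struct device_ops {
        int  (*open) (struct device *dev);
        long (*read) (struct device *dev, void *buf, size_t len);
        long (*write)(struct device *dev, const void *buf, size_t len);
    };

    struct device {
        const char              *name;
        const struct device_ops *ops;   /* filled in by the driver for this device */
    };

    /* The kernel forwards an application's write request to whichever driver owns the device. */
    long kernel_write(struct device *dev, const void *buf, size_t len)
    {
        if (dev->ops && dev->ops->write)
            return dev->ops->write(dev, buf, len);
        return -1;                      /* the device does not support writing */
    }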


A kernel must maintain a list of available devices. This list may be known in advance (e.g. on an embedded system where the kernel will be rewritten if the available hardware changes), configured by the user (typical on older PCs and on systems that are not designed for personal use) or detected by the operating system at run time (normally called plug and play).


In a plug and play system, a device manager first performs a scan on different hardware buses, such as Peripheral Component Interconnect (PCI) or Universal Serial Bus (USB), to detect installed devices, then searches for the appropriate drivers.


As device management is a very OS-specific topic, these drivers are handled differently by each kind of kernel design, but in every case, the kernel has to provide the I/O to allow drivers to physically access their devices through some port or memory location. Very important decisions have to be made when designing the device management system, as in some designs accesses may involve context switches, making the operation very CPU-intensive and easily causing a significant performance overhead.[citation needed]


System calls

To actually perform useful work, a process must be able to access the services provided by the kernel. This is implemented differently by each kernel, but most provide a C library or an API, which in turn invokes the related kernel functions.[citation needed]


The method of invoking the kernel function varies from kernel to kernel. If memory isolation is in use, it is impossible for a user process to call the kernel directly, because that would be a violation of the processor's access control rules. A few possibilities are:

  • Using a software-simulated interrupt (a short sketch of this method appears after this list). This method is available on most hardware, and is therefore very common.
  • Using a call gate. A call gate is a special address which the kernel has added to a list stored in kernel memory and which the processor knows the location of. When the processor detects a call to that location, it instead redirects to the target location without causing an access violation. Requires hardware support, but the hardware for it is quite common.
  • Using a special system call instruction. This technique requires special hardware support, which common architectures (notably, x86) may lack. System call instructions have been added to recent models of x86 processors, however, and some (but not all) operating systems for PCs make use of them when available.
  • Using a memory-based queue. An application that makes large numbers of requests but does not need to wait for the result of each may add details of requests to an area of memory that the kernel periodically scans to find requests.
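The sketch below illustrates the first method from the list above, a software-simulated interrupt, in the style of 32-bit x86 inline assembly; the vector number 0x80 and the register convention are assumptions made for the example, not a description of any particular operating system:

    /* Ask the kernel for service via a software interrupt, passing the
     * system-call number in EAX and one argument in EBX (assumed convention). */
    static inline long syscall1(long number, long arg)
    {
        long result;
        __asm__ volatile ("int $0x80"
                          : "=a" (result)             /* the return value comes back in EAX */
                          : "a" (number), "b" (arg)   /* EAX = call number, EBX = argument */
                          : "memory");
        return result;
    }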


Kernel design decisions

Issues of kernel support for protection

An important consideration in the design of a kernel is the support it provides for protection from faults (fault tolerance) and from malicious behaviors (security). These two aspects are usually not clearly distinguished, and the adoption of this distinction in the kernel design leads to the rejection of a hierarchical structure for protection.[1]


The mechanisms or policies provided by the kernel can be classified according to several criteria, such as: static (enforced at compile time) or dynamic (enforced at run time); pre-emptive or post-detection; the protection principles they satisfy (i.e. Denning[14][15]); whether they are hardware supported or language based; whether they are more an open mechanism or a binding policy; and many more.


Fault tolerance

A useful measure of the level of fault tolerance of a system is how closely it adheres to the principle of least privilege.[16] In cases where multiple programs are running on a single computer, it is usually important to prevent a fault in one of the programs from negatively affecting the others.[citation needed] Extended to malicious design rather than a fault, this also applies to security, and is necessary to prevent processes from accessing information without being granted permission.[citation needed]


The two major hardware approaches[17] for protection (of sensitive information) are hierarchical protection domains (also called ring architectures, segment architectures or supervisor mode)[18] and capability-based addressing.[19]

Privilege rings, such as in the x86, are a common implementation of hierarchical protection domains used in many commercial systems to provide some level of fault tolerance.

Hierarchical protection domains are much less flexible, as is the case with every kernel with a hierarchical structure assumed as a global design criterion.[1] In the case of protection it is not possible to assign different privileges to processes that are at the same privileged level, and therefore it is not possible to satisfy Denning's four principles for fault tolerance[14][15] (particularly the principle of least privilege). Hierarchical protection domains also have a major performance drawback, since interaction between different levels of protection, when a process has to manipulate a data structure both in 'user mode' and 'supervisor mode', always requires message copying (transmission by value).[20] A kernel based on capabilities, however, is more flexible in assigning privileges, can satisfy Denning's fault tolerance principles,[21] and typically doesn't suffer from the performance issues of copy by value.


Both approaches typically require some hardware or firmware support to be operable and efficient. The hardware support for hierarchical protection domains[22] is typically that of "CPU modes." An efficient and simple way to provide hardware support for capabilities is to delegate to the MMU the responsibility of checking access rights for every memory access, a mechanism called capability-based addressing.[21] Most commercial computer architectures lack MMU support for capabilities. An alternative approach is to simulate capabilities using commonly supported hierarchical domains; in this approach, each protected object must reside in an address space that the application does not have access to; the kernel also maintains a list of capabilities in such memory. When an application needs to access an object protected by a capability, it performs a system call and the kernel performs the access for it. The performance cost of address space switching limits the practicality of this approach in systems with complex interactions between objects, but it is used in current operating systems for objects that are not accessed frequently or which are not expected to perform quickly.[23][24] Approaches where the protection mechanism is not firmware supported but is instead simulated at higher levels (e.g. simulating capabilities by manipulating page tables on hardware that does not have direct support) are possible, but there are performance implications.[25] Lack of hardware support may not be an issue, however, for systems that choose to use language-based protection.[26]
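To make the simulated-capability idea concrete, a hedged sketch: the kernel keeps, in memory the process cannot touch, a per-process list of (object, rights) pairs and consults it inside the system call before performing the access on the process's behalf. All names here are invented for illustration:

    #include <stdbool.h>
    #include <stddef.h>

    enum rights { RIGHT_READ = 1, RIGHT_WRITE = 2 };

    struct capability {
        const void *object;   /* the protected object, living in kernel-only memory */
        unsigned    rights;   /* bitmask of operations this process may perform on it */
    };

    struct process {
        struct capability *caps;   /* capability list, stored in kernel space */
        size_t             ncaps;
    };

    /* During a system call, the kernel checks whether the calling process holds
     * a capability granting the requested right before touching the object. */
    bool capability_check(const struct process *p, const void *object, unsigned needed)
    {
        for (size_t i = 0; i < p->ncaps; i++)
            if (p->caps[i].object == object && (p->caps[i].rights & needed) == needed)
                return true;
        return false;
    }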


Security

An important kernel design decision is the choice of the abstraction levels where the security mechanisms and policies should be implemented. Kernel security mechanisms play a critical role in supporting security at higher levels.[27][28][29][30][31][21]


One approach is to use firmware and kernel support for fault tolerance (see above), and build the security policy for malicious behavior on top of that (adding features such as cryptography mechanisms where necessary), delegating some responsibility to the compiler. Approaches that delegate enforcement of security policy to the compiler and/or the application level are often called language-based security.


The lack of many critical security mechanisms in current mainstream operating systems makes it difficult to implement adequate security policies at the application abstraction level.[27] In fact, a common misconception in computer security is that any security policy can be implemented in an application regardless of kernel support.[27]


Hardware-based protection or language-based protection

Typical computer systems today use hardware-enforced rules about what programs are allowed to access what data. The processor monitors the execution and stops a program that violates a rule (e.g., a user process that is about to read or write to kernel memory, and so on). In systems that lack support for capabilities, processes are isolated from each other by using separate address spaces.[32] Calls from user processes into the kernel are regulated by requiring them to use one of the above-described system call methods.


An alternative approach is to use language-based protection. In a language-based protection system, the kernel will only allow code to execute that has been produced by a trusted language compiler. The language may then be designed such that it is impossible for the programmer to instruct it to do something that will violate a security requirement.[26]


Advantages of this approach include:

  • Lack of need for separate address spaces. Switching between address spaces is a slow operation that causes a great deal of overhead, and a lot of optimization work is currently performed in order to prevent unnecessary switches in current operating systems. Switching is completely unnecessary in a language-based protection system, as all code can safely operate in the same address space.
  • Flexibility. Any protection scheme that can be designed to be expressed via a programming language can be implemented using this method. Changes to the protection scheme (e.g. from a hierarchical system to a capability-based one) do not require new hardware.

Disadvantages include:

  • Longer application start up time. Applications must be verified when they are started to ensure they have been compiled by the correct compiler, or may need recompiling either from source code or from bytecode.
  • Inflexible type systems. On traditional systems, applications frequently perform operations that are not type safe. Such operations cannot be permitted in a language-based protection system, which means that applications may need to be rewritten and may, in some cases, lose performance.

Examples of systems with language-based protection include JX and Microsoft's Singularity.


Process cooperation

Edsger Dijkstra proved that from a logical point of view, atomic lock and unlock operations operating on binary semaphores are sufficient primitives to express any functionality of process cooperation.[33] However, this approach is generally held to be lacking in terms of safety and efficiency, whereas a message passing approach is more flexible.[7]
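For illustration, the two primitives involved, usually written P (wait/lock) and V (signal/unlock), can be sketched as below; block_on() and wakeup_one() are assumed helpers, and on a real system the counter updates would also have to be made atomic (for example by disabling interrupts around them):

    /* Initialise count to 1 for a binary semaphore; a negative count records
     * how many processes are currently waiting. */
    struct semaphore { int count; };

    void block_on(struct semaphore *s);    /* assumed: suspend the caller on s's wait queue */
    void wakeup_one(struct semaphore *s);  /* assumed: resume one waiting process */

    void semaphore_wait(struct semaphore *s)     /* Dijkstra's P ("lock") operation */
    {
        s->count--;
        if (s->count < 0)
            block_on(s);       /* the resource is taken: wait until someone signals */
    }

    void semaphore_signal(struct semaphore *s)   /* Dijkstra's V ("unlock") operation */
    {
        s->count++;
        if (s->count <= 0)
            wakeup_one(s);     /* at least one process was waiting: let it proceed */
    }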


I/O devices management

The idea of a kernel where I/O devices are handled uniformly with other processes, as parallel co-operating processes, was first proposed and implemented by Brinch Hansen (although similar ideas were suggested in 1967[34][35]). In Hansen's description of this, the "common" processes are called internal processes, while the I/O devices are called external processes.[7]


Kernel-wide design approaches

Naturally, the above listed tasks and features can be provided in many ways that differ from each other in design and implementation.


The principle of separation of mechanism and policy is the substantial difference between the philosophy of micro and monolithic kernels.[36][37] Here a mechanism is the support that allows the implementation of many different policies, while a policy is a particular "mode of operation". In a minimal microkernel just some very basic policies are included,[37] and its mechanisms allow what is running on top of the kernel (the remaining part of the operating system and the other applications) to decide which policies to adopt (such as memory management, high-level process scheduling, file system management, etc.).[1][7] A monolithic kernel instead tends to include many policies, therefore restricting the rest of the system to rely on them.


Per Brinch Hansen presented cogent arguments in favor of separation of mechanism and policy.[1][7] The failure to properly fulfill this separation is one of the major causes of the lack of substantial innovation in existing operating systems,[1] a problem common in computer architecture.[38][39][40] The monolithic design is induced by the "kernel mode"/"user mode" architectural approach to protection (technically called hierarchical protection domains), which is common in conventional commercial systems;[41] in fact, every module needing protection is therefore preferably included in the kernel.[41] This link between monolithic design and "privileged mode" can be traced back to the key issue of mechanism-policy separation;[1] in fact, the "privileged mode" architectural approach merges the protection mechanism with the security policies, while the major alternative architectural approach, capability-based addressing, clearly distinguishes between the two, leading naturally to a microkernel design[1] (see Separation of protection and security).


While monolithic kernels execute all of their code in the same address space (kernel space), microkernels try to run most of their services in user space, aiming to improve maintainability and modularity of the codebase.[2] Most kernels do not fit exactly into one of these categories, but are rather found in between these two designs. These are called hybrid kernels. More exotic designs such as nanokernels and exokernels are available, but are seldom used for production systems. The Xen hypervisor, for example, is an exokernel.


Monolithic kernels

Main article: Monolithic kernel
Diagram of a monolithic kernel

In a monolithic kernel, all OS services run along with the main kernel thread, thus also residing in the same memory area. This approach provides rich and powerful hardware access. Some developers, such as UNIX developer Ken Thompson, maintain that monolithic systems are easier to design and implement than other solutions.[citation needed] The main disadvantages of monolithic kernels are the dependencies between system components - a bug in a device driver might crash the entire system - and the fact that large kernels can become very difficult to maintain.


Microkernels

Main article: Microkernel
In the microkernel approach, the kernel itself only provides basic functionality that allows the execution of servers, separate programs that assume former kernel functions, such as device drivers, GUI servers, etc.

The microkernel approach consists of defining a simple abstraction over the hardware, with a set of primitives or system calls to implement minimal OS services such as memory management, multitasking, and inter-process communication. Other services, including those normally provided by the kernel such as networking, are implemented in user-space programs, referred to as servers. Microkernels are easier to maintain than monolithic kernels, but the large number of system calls and context switches might slow down the system because they typically generate more overhead than plain function calls.
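A hedged sketch of that message-passing pattern: instead of calling a file-system routine inside the kernel, a client process sends a request message to a file-server process through the microkernel's IPC primitives and waits for the reply. Every structure, constant and call here is hypothetical:

    #include <stddef.h>

    /* Hypothetical IPC primitives provided by a microkernel. */
    typedef int port_t;                      /* identifies a server's message queue */

    struct message {
        int  op;                             /* requested operation, e.g. OP_READ */
        char payload[64];                    /* arguments and results, kept small here */
    };

    int ipc_send(port_t dest, const struct message *m);     /* assumed kernel call */
    int ipc_receive(port_t from, struct message *m);        /* assumed kernel call */

    #define FILE_SERVER_PORT 1               /* assumed well-known port of the file server */
    #define OP_READ          1

    /* Client side: a "read" becomes a request/reply exchange with the file server. */
    int client_read_file(struct message *reply)
    {
        struct message req = { .op = OP_READ };
        if (ipc_send(FILE_SERVER_PORT, &req) != 0)
            return -1;
        return ipc_receive(FILE_SERVER_PORT, reply);         /* block until the server answers */
    }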


A microkernel allows the implementation of the remaining part of the operating system as a normal application program written in a high-level language, and the use of different operating systems on top of the same unchanged kernel.[7] It is also possible to dynamically switch among operating systems and to have more than one active simultaneously.[7]


Monolithic kernels vs microkernels

As the computer kernel grows, a number of problems become evident. One of the most obvious is that the memory footprint increases. This is mitigated to some degree by perfecting the virtual memory system, but not all computer architectures have virtual memory support.[42] To reduce the kernel's footprint, extensive editing has to be performed to carefully remove unneeded code, which can be very difficult with non-obvious interdependencies between parts of a kernel with millions of lines of code.


Due to the problems that monolithic kernels pose, they were considered obsolete by the early 1990s. As a result, the design of Linux using a monolithic kernel rather than a microkernel was the topic of a famous flame war between Linus Torvalds and Andrew Tanenbaum.[43] There is merit on both sides of the argument presented in the Tanenbaum/Torvalds debate.


Some, including early UNIX developer Ken Thompson, argued[citation needed] that while microkernel designs were more aesthetically appealing, monolithic kernels were easier to implement. However, a bug in a monolithic system usually crashes the entire system, while this doesn't happen in a microkernel with servers running apart from the main thread. Monolithic kernel proponents reason that incorrect code doesn't belong in a kernel, and that microkernels offer little advantage over correct code. Microkernels are often used in embedded robotic or medical computers where crash tolerance is important and most of the OS components reside in their own private, protected memory space. This is impossible with monolithic kernels, even with modern module-loading ones.


Performance

Monolithic kernels are designed to have all of their code in the same address space (kernel space) to increase the performance of the system.[citation needed] Some developers, such as UNIX developer Ken Thompson, maintain that monolithic systems are extremely efficient if well-written.[citation needed] The monolithic model tends to be more efficient[citation needed] through the use of shared kernel memory, rather than the slower IPC system of microkernel designs, which is typically based on message passing.[citation needed]


The performance of microkernels constructed in the 1980s and early 1990s was poor.[3][44] The studies that empirically measured the performance of some of those microkernels did not analyze the reasons for such inefficiency.[3] The explanations for this performance were left to "folklore", with the common and then unproven beliefs that it was due to the increased frequency of switches from "kernel mode" to "user mode"[3] (but such a hierarchical design of protection is not inherent in microkernels),[1][41] to the increased frequency of inter-process communication (but IPC can be implemented an order of magnitude faster than previously believed),[3] and to the increased frequency of context switches.[3]


In fact, as conjectured in 1995, the reasons for this poor performance might as well have been: (1) an actual inefficiency of the whole microkernel approach, (2) the particular concepts implemented in those microkernels, or (3) the particular implementation of those concepts.[3] It therefore remained to be studied whether the solution for building an efficient microkernel was, unlike previous attempts, to apply the correct construction techniques.[3]


On the other hand, the hierarchical protection domains architecture that leads to the design of a monolithic kernel[41] has a significant performance drawback each time there is an interaction between different levels of protection (i.e. when a process has to manipulate a data structure both in 'user mode' and 'supervisor mode'), since this requires message copying by value.[20]


By the mid-1990s, most researchers had abandoned the belief that careful tuning could reduce this overhead dramatically,[citation needed] but recently, newer microkernels optimized for performance, such as L4[45] and K42, have addressed these problems.[verification needed]


Hybrid kernels

Main article: Hybrid kernel
The hybrid kernel approach tries to combine the speed and simpler design of a monolithic kernel with the modularity and execution safety of a microkernel.

Hybrid kernels are essentially a compromise between the monolithic kernel approach and the microkernel system. This implies running some services (such as the network stack or the filesystem) in kernel space to reduce the performance overhead[citation needed] of a traditional microkernel, but still running kernel code (such as device drivers) as servers in user space.


Nanokernels

Main article: Nanokernel

A nanokernel delegates virtually all services — including even the most basic ones like interrupt controllers or the timer — to device drivers to make the kernel memory requirement even smaller than a traditional microkernel.[46]


Exokernels

Main article: Exokernel

An exokernel is a type of kernel that does not abstract hardware into theoretical models. Instead it allocates physical hardware resources, such as processor time, memory pages, and disk blocks, to different programs. A program running on an exokernel can link to a library operating system that uses the exokernel to simulate the abstractions of a well-known OS, or it can develop application-specific abstractions for better performance.[47]
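
The shape of this design can be sketched as follows. The exo_* calls are invented, stubbed-out stand-ins, not the actual MIT exokernel interface; they only illustrate the division of labour in which the kernel hands out raw resources and a library operating system builds the familiar abstractions on top.

    /* Hypothetical sketch only: the exo_* primitives are invented stubs so
     * the example compiles; they are not the real MIT exokernel API. */
    #include <stdio.h>

    /* --- stubbed "exokernel" primitives: allocate raw resources only --- */
    static int exo_alloc_disk_block(void) { static int next = 100; return next++; }
    static int exo_alloc_page(void)       { static int next = 7;   return next++; }

    /* --- a tiny "library OS" building a file abstraction on top of them --- */
    struct libos_file {
        int first_block;   /* raw disk block granted by the kernel */
        int cache_page;    /* raw physical page granted by the kernel */
    };

    static struct libos_file libos_create_file(void)
    {
        struct libos_file f;
        f.first_block = exo_alloc_disk_block();
        f.cache_page  = exo_alloc_page();
        return f;
    }

    int main(void)
    {
        /* The application links the library OS and sees a familiar-looking
         * abstraction, but policy (layout, caching) is application-level code. */
        struct libos_file f = libos_create_file();
        printf("libOS file: block %d, cache page %d\n", f.first_block, f.cache_page);
        return 0;
    }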


History of kernel development

Early operating system kernels

Main article: History of operating systems

Strictly speaking, an operating system (and thus, a kernel) is not required to run a computer. Programs can be directly loaded and executed on the "bare metal" machine, provided that the authors of those programs are willing to work without any hardware abstraction or operating system support. Most early computers operated this way during the 1950s and early 1960s; they were reset and reloaded between the execution of different programs. Eventually, small ancillary programs such as program loaders and debuggers were left in memory between runs, or loaded from ROM. As these were developed, they formed the basis of what became early operating system kernels. The "bare metal" approach is still used today on some video game consoles and embedded systems, but in general, newer computers use modern operating systems and kernels.


In 1969 the RC 4000 Multiprogramming System introduced the system design philosophy of a small nucleus "upon which operating systems for different purposes could be built in an orderly manner",[48] what would be called the microkernel approach.


Time-sharing operating systems

Main article: Time-sharing

In the decade preceding Unix, computers had grown enormously in power, to the point where computer operators were looking for new ways to get people to use the spare time on their machines. One of the major developments during this era was time-sharing, whereby a number of users would get small slices of computer time, at a rate at which it appeared they were each connected to their own, slower, machine.[49]
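
The effect can be illustrated with a toy round-robin simulation (not real scheduler code): each user receives a fixed slice of processor time in turn, so over a longer interval each appears to own a proportionally slower machine of their own.

    /* Toy simulation of the time-sharing idea, not real scheduler code: each
     * user job gets a fixed time slice in round-robin order. */
    #include <stdio.h>

    #define NUM_USERS  4
    #define SLICE_MS   50          /* arbitrary illustrative quantum */
    #define TOTAL_MS   1000        /* one simulated second of CPU time */

    int main(void)
    {
        int cpu_ms_used[NUM_USERS] = {0};

        for (int elapsed = 0; elapsed < TOTAL_MS; elapsed += SLICE_MS) {
            int user = (elapsed / SLICE_MS) % NUM_USERS;  /* next user in turn */
            cpu_ms_used[user] += SLICE_MS;
        }

        for (int u = 0; u < NUM_USERS; u++)
            printf("user %d: %d ms of CPU per simulated second (~1/%d of the machine)\n",
                   u, cpu_ms_used[u], NUM_USERS);
        return 0;
    }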


The development of time-sharing systems led to a number of problems. One was that users, particularly at universities where the systems were being developed, seemed to want to hack the system to get more CPU time. For this reason, security and access control became a major focus of the Multics project in 1965.[50] Another ongoing issue was properly handling computing resources: users spent most of their time staring at the screen instead of actually using the resources of the computer, and a time-sharing system should give the CPU time to an active user during these periods. Finally, the systems typically offered a memory hierarchy several layers deep, and partitioning this expensive resource led to major developments in virtual memory systems.


Unix

Main article: Unix
A diagram of the predecessor/successor family relationship for Unix-like systems.

Unix represented the culmination of decades of development towards a modern operating system. During the design phase, programmers decided to model every high-level device as a file, because they believed the purpose of computation was data transformation.[51] For instance, printers were represented as a "file" at a known location — when data was copied to the file, it printed out. Other systems, to provide similar functionality, tended to virtualize devices at a lower level — that is, both devices and files would be instances of some lower-level concept. Virtualizing the system at the file level allowed users to manipulate the entire system using their existing file management utilities and concepts, dramatically simplifying operation. As an extension of the same paradigm, Unix allowed programmers to manipulate files using a series of small programs, through the concept of pipes, which let users complete operations in stages, feeding a file through a chain of single-purpose tools. Although the end result was the same, using smaller programs in this way dramatically increased flexibility as well as ease of development and use, allowing users to modify their workflow by adding or removing a program from the chain.
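
A pipeline of this kind can be set up by a program through the kernel's pipe facility. The following minimal POSIX sketch runs the equivalent of "ls | wc -l", feeding the output of one small program into the next; error handling is kept to a minimum for brevity.

    /* Minimal POSIX sketch of the pipe idea: run the equivalent of
     * "ls | wc -l" by chaining two small programs together. */
    #include <stdio.h>
    #include <unistd.h>
    #include <sys/wait.h>

    int main(void)
    {
        int fds[2];
        if (pipe(fds) == -1) { perror("pipe"); return 1; }

        if (fork() == 0) {                  /* first stage: ls */
            dup2(fds[1], STDOUT_FILENO);    /* stdout -> write end of the pipe */
            close(fds[0]); close(fds[1]);
            execlp("ls", "ls", (char *)NULL);
            perror("execlp ls"); _exit(1);
        }

        if (fork() == 0) {                  /* second stage: wc -l */
            dup2(fds[0], STDIN_FILENO);     /* stdin <- read end of the pipe */
            close(fds[0]); close(fds[1]);
            execlp("wc", "wc", "-l", (char *)NULL);
            perror("execlp wc"); _exit(1);
        }

        close(fds[0]); close(fds[1]);       /* parent keeps no pipe ends open */
        wait(NULL); wait(NULL);
        return 0;
    }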


In the Unix model, the operating system consists of two parts: the large collection of utility programs that drive most operations, and the kernel that runs them.[51] Under Unix, from a programming standpoint, the distinction between the two is fairly thin: the kernel is a program running in supervisor mode[8] that acts as a program loader and supervisor for the small utility programs making up the rest of the system, and that provides locking and I/O services for those programs; beyond that, the kernel did not intervene in user space at all.
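
This loader-and-supervisor role is visible in the way a Unix utility starts another program. The minimal POSIX sketch below asks the kernel to create a process, load an arbitrary example command ("date") into it, and wait for its completion.

    /* Minimal POSIX sketch of the kernel's loader/supervisor role: a utility
     * asks the kernel to create a process (fork), load another program into
     * it (exec), and then waits for it. "date" is just an example command. */
    #include <stdio.h>
    #include <unistd.h>
    #include <sys/wait.h>

    int main(void)
    {
        pid_t child = fork();               /* kernel duplicates the process */
        if (child == 0) {
            execlp("date", "date", (char *)NULL);  /* kernel loads the new program */
            perror("execlp");               /* only reached if the load failed */
            _exit(1);
        }
        int status = 0;
        waitpid(child, &status, 0);         /* kernel supervises completion */
        printf("child exited with status %d\n", WEXITSTATUS(status));
        return 0;
    }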


Over the years the computing model changed, and Unix's treatment of everything as a file no longer seemed as universally applicable as before. Although a terminal could be treated as a file or a stream, which is printed to or read from, the same did not seem to be true for a graphical user interface. Networking posed another problem. Even though network communication can be compared to file access, the low-level packet-oriented architecture dealt with discrete chunks of data and not with whole files. As the capability of computers grew, Unix became increasingly cluttered with code. While kernels might have had 100,000 lines of code in the seventies and eighties, kernels of modern Unix successors like Linux have more than 4.5 million lines.[52] Thus, the biggest problem with monolithic kernels, or monokernels, was sheer size: the code was so extensive that working on such a large codebase was extremely tedious and time-consuming.


Modern Unix derivatives are generally based on module-loading monolithic kernels. Examples include Linux distributions such as Debian GNU/Linux, Red Hat Linux and Ubuntu, as well as Berkeley Software Distributions such as FreeBSD and NetBSD. Apart from these alternatives, amateur developers maintain an active operating system development community, populated by self-written hobby kernels which mostly end up sharing many features with Linux and/or being compatible with it.[53]


Mac OS

Main article: Mac OS history

Apple Computer first launched Mac OS in 1984, bundled with its Apple Macintosh personal computer. For the first few releases, Mac OS (or System Software, as it was then called) lacked many essential features, such as multitasking and a hierarchical filesystem. Over time the OS evolved, eventually becoming Mac OS 9 with many new features added, but the kernel basically stayed the same. Against this, Mac OS X is based on Darwin, which uses a hybrid kernel called XNU, created by combining the 4.3BSD kernel and the Mach kernel.[54]


Amiga

Main article: AmigaOS

The Commodore Amiga was released in 1985, and was among the first (and certainly most successful) home computers to feature a microkernel operating system. The Amiga's kernel, exec.library, was small but capable, providing fast pre-emptive multitasking on similar hardware to the cooperatively-multitasked Apple Macintosh, and an advanced dynamic linking system that allowed for easy expansion.[55]


Windows

Main article: History of Microsoft Windows

Microsoft Windows was first released in 1985 as an add-on to MS-DOS. Because of its dependence on another operating system, some argue that it cannot be considered an operating system itself, although whether this is true depends entirely on the definition of operating system in use. This product line continued through the Windows 9x series (upgrading the system's capabilities to 32-bit addressing and pre-emptive multitasking) and ended with Windows Me. Meanwhile, Microsoft had been developing Windows NT, an operating system intended for high-end and business users; this line began with the release of Windows NT 3.1 in 1993 and continued with the NT-based Windows 2000.


The highly successful Windows XP brought these two product lines together, combining the stability of the NT line with consumer features from the 9x series.[56] It uses the NT kernel, which is generally considered a hybrid kernel because the kernel itself contains tasks such as the Window Manager and the IPC Manager, but several subsystems run in user mode.[57]


Development of microkernels

Although Mach, developed at Carnegie Mellon University from 1985 to 1994, is the best-known general-purpose microkernel, other microkernels have been developed with more specific aims. The L4 microkernel family (mainly the L3 and the L4 kernel) was created to demonstrate that microkernels are not necessarily slow.[45] Newer implementations such as Fiasco and Pistachio are able to run Linux next to other L4 processes in separate address spaces.[58][59]


QNX is a real-time operating system with a minimalistic microkernel design that has been developed since 1982, having been far more successful than Mach in achieving the goals of the microkernel paradigm.[60] It is principally used in embedded systems and in situations where software is not allowed to fail, such as the robotic arms on the space shuttle and machines that control grinding of glass to extremely fine tolerances, where a tiny mistake may cost hundreds of thousands of dollars, as in the case of the mirror of the Hubble Space Telescope.[61]


See also

  • Operating system
  • Input/output
  • Random-access memory
  • Virtual memory
  • Paging
  • Memory segmentation
  • Swap space
  • User space
  • Memory management unit
  • Computer multitasking
  • Process (computing)
  • Thread (computer science)
  • Scheduling (computing)
  • Time-sharing
  • Context switch
  • Inter-process communication
  • Interrupt request
  • Device driver
  • Compiler
  • Exception handling

Footnotes

For notes referring to sources, see bibliography below.

  1. ^ a b c d e f g h i j k l Wulf 74 pp.337-345
  2. ^ a b Roch 2004
  3. ^ a b c d e f g h Liedtke 95
  4. ^ Tanenbaum 79, chapter 1
  5. ^ a b Deitel 82, p.65-66 cap. 3.9
  6. ^ Lorin 81 pp.161-186, Schroeder 77, Shaw 75 pp.245-267
  7. ^ a b c d e f g h Brinch Hansen 70 pp.238-241
  8. ^ a b The highest privilege level has various names throughout different architectures, such as supervisor mode, kernel mode, CPL0, DPL0, Ring 0, etc. See Ring (computer security) for more information.
  9. ^ Bona Fide OS Development - Bran's Kernel Development Tutorial, by Brandon Friesen
  10. ^ for low level scheduling see Deitel 82, chap.10 pp.249-268
  11. ^ Levy 1984, p.5
  12. ^ Needham, R.M., Wilkes, M. V. Domains of protection and the management of processes, Computer Journal, vol. 17, no. 2, May 1974, pp 117-120.
  13. ^ a b c Silberschatz 1990
  14. ^ a b Denning 1976
  15. ^ a b Swift 2005, p.29 quote: "isolation, resource control, decision verification (checking), and error recovery."
  16. ^ Cook, D.J. Measuring memory protection, accepted for 3rd International Conference on Software Engineering, Atlanta, Georgia, May 1978.
  17. ^ Swift 2005 p.26
  18. ^ Intel Corporation 2002
  19. ^ Houdek et al. 1981
  20. ^ a b Hansen 73, section 7.3 p.233 "interactions between different levels of protection require transmission of messages by value"
  21. ^ a b c Linden 76
  22. ^ Schroeder 72
  23. ^ Stephane Eranian & David Mosberger, Virtual Memory in the IA-64 Linux Kernel, Prentice Hall PTR, 2002
  24. ^ Silberschatz & Galvin, Operating System Concepts, 4th ed, pp445 & 446
  25. ^ Hoch, Charles; J. C. Browne (University of Texas, Austin) (July 1980). "An implementation of capabilities on the PDP-11/45" (pdf). ACM SIGOPS Operating Systems Review 14 (3): 22 - 32. DOI:10.1145/850697.850701. Retrieved on 2007-01-07. 
  26. ^ a b A Language-Based Approach to Security, Schneider F., Morrissett G. (Cornell University) and Harper R. (Carnegie Mellon University)
  27. ^ a b c P. A. Loscocco, S. D. Smalley, P. A. Muckelbauer, R. C. Taylor, S. J. Turner, and J. F. Farrell. The Inevitability of Failure: The Flawed Assumption of Security in Modern Computing Environments. In Proceedings of the 21st National Information Systems Security Conference, pages 303–314, Oct. 1998. [1].
  28. ^ J. Lepreau et al. The Persistent Relevance of the Local Operating System to Global Applications. Proceedings of the 7th ACM SIGOPS European Workshop, September 1996.
  29. ^ M. Abrams et al, Information Security: An Integrated Collection of Essays, IEEE Comp. 1995.
  30. ^ J. Anderson, Computer Security Technology Planning Study, Air Force Elect. Systems Div., ESD-TR-73-51, October 1972.
  31. ^ * Jerry H. Saltzer, Mike D. Schroeder (September 1975). "The protection of information in computer systems". Proceedings of the IEEE 63 (9): 1278 - 1308. 
  32. ^ Jonathan S. Shapiro; Jonathan M. Smith; David J. Farber. "EROS: a fast capability system". Proceedings of the seventeenth ACM symposium on Operating systems principles. 
  33. ^ Dijkstra, E. W. Cooperating Sequential Processes. Math. Dep., Technological U., Eindhoven, Sept. 1965.
  34. ^ SHARER, a time sharing system for the CDC 6600. Retrieved on 2007-01-07.
  35. ^ Dynamic Supervisors - their design and construction. Retrieved on 2007-01-07.
  36. ^ Baiardi 1988
  37. ^ a b Levin 75
  38. ^ Denning 1980
  39. ^ Jürgen Nehmer The Immortality of Operating Systems, or: Is Research in Operating Systems still Justified? Lecture Notes In Computer Science; Vol. 563. Proceedings of the International Workshop on Operating Systems of the 90s and Beyond. pp. 77 - 83 (1991) ISBN:3-540-54987-0 [2] quote: "The past 25 years have shown that research on operating system architecture had a minor effect on existing main stream systems." [3]
  40. ^ Levy 84, p.1 quote: "Although the complexity of computer applications increases yearly, the underlying hardware architecture for applications has remained unchanged for decades."
  41. ^ a b c d Levy 84, p.1 quote: "Conventional architectures support a single privileged mode of operation. This structure leads to monolithic design; any module needing protection must be part of the single operating system kernel. If, instead, any module could execute within a protected domain, systems could be built as a collection of independent modules extensible by any user."
  42. ^ Virtual addressing is most commonly achieved through a built-in memory management unit.
  43. ^ Recordings of the debate between Torvalds and Tanenbaum can be found at dina.dk, groups.google.com, oreilly.com and Andrew Tanenbaum's website
  44. ^ Härtig 97
  45. ^ a b The L4 microkernel family - Overview
  46. ^ KeyKOS Nanokernel Architecture
  47. ^ MIT Exokernel Operating System
  48. ^ Hansen 2001 (os), pp.17-18
  49. ^ BSTJ version of C.ACM Unix paper
  50. ^ Introduction and Overview of the Multics System, by F. J. Corbató and V. A. Vissotsky.
  51. ^ a b The UNIX System — The Single Unix Specification
  52. ^ Linux Kernel 2.6: It's Worth More!, by David A. Wheeler, October 12, 2004
  53. ^ This community mostly gathers at Bona Fide OS Development and The Mega-Tokyo Message Board.
  54. ^ XNU: The Kernel
  55. ^ Sheldon Leemon. What makes it so great! (Commodore Amiga). Creative Computing. Retrieved on 2006-02-05.
  56. ^ LinuxWorld IDC: Consolidation of Windows won't happen
  57. ^ Windows History: Windows Desktop Products History
  58. ^ The Fiasco microkernel - Overview
  59. ^ L4Ka - The L4 microkernel family and friends
  60. ^ QNX Realtime Operating System Overview
  61. ^ Hubble Facts, by NASA, January 1997


References

  • Deitel, Harvey M. [1982] (1984). An introduction to operating systems, revisited first edition, Addison-Wesley, 673. ISBN 0-201-14502-2. 
  • Denning, Peter J. (April 1980). "Why not innovations in computer architecture?". ACM SIGARCH Computer Architecture News 8 (2): 4-7. ISSN 0163-5964. 
  • Hansen, Per Brinch (April 1970). "The nucleus of a Multiprogramming System". Communications of the ACM 13 (4): 238-241. ISSN 0001-0782. 
  • Houdek, M. E., Soltis, F. G., and Hoffman, R. L. 1981. IBM System/38 support for capability-based addressing. In Proceedings of the 8th ACM International Symposium on Computer Architecture. ACM/IEEE, pp. 341–348.
  • Intel Corporation (2002) The IA-32 Architecture Software Developer’s Manual, Volume 1: Basic Architecture
  • Levin, R.; E. Cohen, W. Corwin, F. Pollack, William Wulf (1975). "Policy/mechanism separation in Hydra". ACM Symposium on Operating Systems Principles / Proceedings of the fifth ACM symposium on Operating systems principles: 132-140. 
  • Levy, Henry M. (1984). Capability-based computer systems. Maynard, Mass: Digital Press. ISBN 0-932376-22-3. 
  • Linden, Theodore A. (December 1976). "Operating System Structures to Support Security and Reliable Software". ACM Computing Surveys (CSUR) 8 (4): 409 - 445. ISSN 0360-0300.  [5]
  • Lorin, Harold (1981). Operating systems. Boston, Massachusetts: Addison-Wesley, pp.161-186. ISBN 0-201-14464-6. 
  • Schroeder, Michael D.; Jerome H. Saltzer (March 1972). "A hardware architecture for implementing protection rings". Communications of the ACM 15 (3): 157 - 170. ISSN 0001-0782. 
  • Shaw, Alan C. (1974). The logical design of Operating systems. Prentice-Hall, 304. ISBN 0-13-540112-7. 
  • Wulf, W.; E. Cohen, W. Corwin, A. Jones, R. Levin, C. Pierson, F. Pollack (June 1974). "HYDRA: the kernel of a multiprocessor operating system". Communications of the ACM 17 (6): 337 - 345. ISSN 0001-0782.  [6]
  • Baiardi, F.; A. Tomasi, M. Vanneschi (1988). Architettura dei Sistemi di Elaborazione, volume 1 (in Italian). Franco Angeli. ISBN 88-204-2746-X. 
  • Swift, Michael M; Brian N. Bershad , Henry M. Levy, Improving the reliability of commodity operating systems, [7] ACM Transactions on Computer Systems (TOCS), v.23 n.1, p.77-110, February 2005


Further reading

  • Andrew Tanenbaum, Operating Systems - Design and Implementation (Third edition);
  • Andrew Tanenbaum, Modern Operating Systems (Second edition);
  • Daniel P. Bovet, Marco Cesati, The Linux Kernel;
  • David A. Patterson, John L. Hennessy, Computer Organization and Design, Morgan Kaufmann (ISBN 1-55860-428-6);
  • B.S. Chalk, Computer Organisation and Architecture, Macmillan P.(ISBN 0-333-64551-0).


External links

  • KERNEL.ORG, official Linux kernel home.
  • Operating System Kernels at SourceForge.
  • Operating System Kernels at Freshmeat.
  • MIT Exokernel Operating System.
  • Kernel image - Debian Wiki.
  • The KeyKOS Nanokernel Architecture, a 1992 paper by Norman Hardy et al.
  • An Overview of the NetWare Operating System, a 1994 paper by Drew Major, Greg Minshall, and Kyle Powell (primary architects behind the NetWare OS).
  • Kernelnewbies, a community for learning Linux kernel hacking.
  • Detailed comparison between most popular operating system kernels.

