Mach (kernel)

Mach is an operating system microkernel developed at Carnegie Mellon University to support operating system research, primarily distributed and parallel computation. It is one of the earliest examples of a microkernel, and it is still the standard by which similar projects are measured.


The project at Carnegie Mellon ran from 1985 to 1994, ending with Mach 3.0. A number of other efforts have continued Mach research, including the University of Utah's Mach 4. Mach was developed as a replacement for the kernel in the BSD version of Unix, so that no new operating system would have to be designed around it. Today further experimental research on Mach appears to have ended, although Mach and its derivatives are in use in a number of commercial operating systems, such as NEXTSTEP and OPENSTEP, and most notably Mac OS X (using the XNU kernel).


Mach is the logical successor to Carnegie Mellon's Accent kernel. The lead developer on the Mach project, Richard Rashid, has been working at Microsoft since 1991 in various top-level positions revolving around the Microsoft Research division. Another of the original Mach developers, Avie Tevanian, was formerly head of software at NeXT, then Chief Software Technology Officer at Apple Computer until March 2006.[1]



Mach concepts

Since Mach was designed as a "drop-in" replacement for the traditional Unix kernel, this discussion focuses on what distinguishes Mach from Unix. It became clear early on that Unix's concept of everything-as-a-file would no longer work on modern systems, although some systems, such as Plan 9 from Bell Labs, have since tried this route. Nevertheless, those same developers lamented the loss of flexibility that the original concept offered. Another level of virtualization was sought that would make the system "work" again.


The critical abstraction in Unix was the pipe. What was needed was a pipe-like concept that worked at a much more general level, allowing a broad variety of information to be passed between programs. Such a system did exist in the form of inter-process communication (IPC): a pipe-like mechanism that would move any information between two programs, as opposed to file-like information. While many systems, including most Unices, had added various IPC implementations over the years, these were special-purpose libraries only really useful for one-off tasks.


Carnegie Mellon University started experimentation along these lines under the Accent kernel project, using an IPC system based on shared memory. Accent was a purely experimental system with many features and developed in an ad-hoc fashion over a period of time with changing research interests. Additionally, Accent's usefulness for research was limited because it was not Unix-compatible, and Unix was already the de-facto standard for almost all operating system research. Finally, Accent was tightly coupled with the hardware platform it was developed on, and at the time in the early 1980s it appeared there would soon be an explosion of new platforms, many of them massively parallel. To meet Wikipedias quality standards, this article or section may require cleanup. ... Accent was a message passing kernel developed at Carnegie Mellon University designed to handle large networks of uniprocessor workstations. ... In computer hardware, shared memory refers to a (typically) large block of random access memory that can be accessed by several different central processing units (CPUs) in a multiple-processor computer system. ... Massively parallel is a description which appears in computer science, life science, medical diagnositcs, and other fields. ...


Mach started largely as an effort to produce a cleanly-defined, Unix-based, highly portable Accent. The result was a short list of generic concepts:

  • a "task" is a set of resources that enable "threads" to run
  • a "thread" is a single unit of code running on a processor
  • a "port" defines a secure pipe for IPC between tasks
  • "messages" are passed between programs on ports

Mach built on Accent's IPC concepts, but made the system much more Unix-like in nature, even able to run Unix programs with little or no modification. To do this Mach introduced the concept of a port, representing each endpoint of a two-way IPC. Ports had security and rights like files under Unix, allowing a very Unix-like model of protection to be applied to them. Additionally, Mach allowed any program to be handed privileges that would normally be given to the kernel only, in order to allow user-space programs to handle things like interacting with hardware.


Under Mach, as under Unix, the operating system again becomes primarily a collection of utilities. Like Unix, Mach keeps the concept of a driver for handling the hardware, so all the drivers for the present hardware have to be included in the microkernel. Other architectures, based on a hardware abstraction layer (HAL) or on exokernels, could move the drivers out of the microkernel.


The main difference from Unix is that instead of utilities handling files, they could handle any "task". More code was moved out of the kernel and into user space, resulting in a much smaller kernel and the rise of the term microkernel. Unlike traditional systems, under Mach a process, or "task", can consist of a number of threads. While this is common in modern systems, Mach was the first system to define tasks and threads in this way. The kernel's job was reduced from essentially being the operating system to maintaining the "utilities" and scheduling their access to hardware.


The existence of ports and the use of IPC is perhaps the most fundamental difference between Mach and traditional kernels. Under Unix, calling the kernel consists of an operation known as a syscall or trap. The program uses a library to place data in a well-known location in memory and then causes a fault, a type of error. When the system is first started the kernel is set up to be the "handler" of all faults, so when the program causes a fault the kernel takes over, examines the information passed to it, and then carries out the instructions.


Under Mach, the IPC system was used for this role instead. In order to call system functionality, a program would ask the kernel for access to a port, then use the IPC system to send messages to that port. Although the messages were triggered by syscalls as they would be on other kernels, under Mach that was nearly all the kernel did; handling the actual request was up to some other program.


The use of IPC for message passing also benefited threading and concurrency. Since tasks consisted of multiple threads, and it was the code in the threads that used the IPC mechanism, Mach was able to freeze and unfreeze threads while a message was handled. This allowed the system to be distributed over multiple processors, either using shared memory directly, as in most Mach messages, or by adding code to copy the message to another processor if needed. In a traditional kernel this is difficult to implement; the system has to be sure that different programs don't try to write to the same memory from different processors. Under Mach this was well defined and easy to implement; it was the very process of accessing that memory, the ports, that was made a first-class citizen of the system.


The IPC system initially had performance problems, so a few strategies were developed to minimize the impact. In particular, Mach used a single shared-memory mechanism for physically passing a message from one program to another. Physically copying the message would be too slow, so Mach relies on the machine's memory management unit (MMU) to quickly map the data from one program to another. Only if the data is written to does it have to be physically copied, a process known as copy-on-write.


Messages were also checked for validity by the kernel, to prevent bad data from crashing one of the many programs making up the system. Ports were deliberately modeled on Unix file system concepts. This allowed the user to find ports using existing file system navigation concepts, as well as to assign rights and permissions as they would on the file system.


Development under such a system would be easier. Not only would the code being worked on exist in a traditional program that could be built using existing tools, it could also be started, debugged and killed off using the same tools. With a monokernel a bug in new code would take down the entire machine and require a reboot, whereas under Mach it would only require that program to be restarted. Additionally, the user could tailor the system to include or exclude whatever features they required. Since the operating system was simply a collection of programs, parts could be added or removed by simply running or killing them like any other program.


Finally, under Mach, all of these features were deliberately designed to be extremely platform neutral. To quote one text on Mach:

Unlike UNIX, which was developed without regard for multiprocessing, Mach incorporates multiprocessing support throughout. Its multiprocessing support is also exceedingly flexible, ranging from shared memory systems to systems with no memory shared between processors. Mach is designed to run on computer systems ranging from one to thousands of processors. In addition, Mach is easily ported to many varied computer architectures. A key goal of Mach is to be a distributed system capable of functioning on heterogeneous hardware.

There are a number of disadvantages, however. A relatively mundane one is that it is not clear how to find ports. Under Unix this problem was solved over time as programmers agreed on a number of "well known" locations in the file system to serve various duties. While this same approach worked for Mach's ports as well, under Mach the operating system was assumed to be much more fluid, with ports appearing and disappearing all the time. Without some mechanism to find ports and the services they represented, much of this flexibility would be lost.


Development

Mach was initially hosted as additional code written directly into the existing 4.2BSD kernel, allowing the team to work on the system long before it was complete. Work started with the already functional Accent IPC/port system, and moved on to the other key portions of the OS: tasks, threads and virtual memory. As portions were completed, various parts of the BSD system were re-written to call into Mach, and a change to 4.3BSD was also made during this process.


By 1986 the system was complete to the point of being able to run on its own on the DEC VAX. Although doing little of practical value, the goal of making a microkernel was realized. This was soon followed by versions on the IBM PC/RT and for Sun Microsystems 68030-based workstations, proving the system's portability. By 1987 the list included the Encore Multimax and Sequent Balance machines, testing Mach's ability to run on multiprocessor systems. A public Release 1 was made that year, and Release 2 followed the next year.


Throughout this time the promise of a "true" microkernel had not yet been delivered. These early Mach versions included the majority of 4.3BSD in the kernel, a system known as POE, resulting in a kernel that was actually larger than the Unix it was based on. The goal, however, was to move the Unix layer out of the kernel into user space, where it could be more easily worked on and even replaced outright. Unfortunately performance proved to be a major problem, and a number of architectural changes were made in order to solve it.


The resulting Mach 3 was released in 1990, and generated intense interest. A small team had built Mach and ported it to a number of platforms, including complex multiprocessor systems which were causing serious problems for older-style kernels. This generated considerable interest in the commercial market, where a number of companies were in the midst of considering changing hardware platforms. If the existing system could be ported to run on Mach, it seemed it would then be easy to change the platform underneath.


Mach received a major boost in visibility when the Open Software Foundation (OSF) announced they would be hosting future versions of OSF/1 on Mach 2.5, and were investigating Mach 3 as well. Mach 2.5 was also selected for the NeXTSTEP system and a number of commercial multiprocessor vendors. Mach 3 led to a number of efforts to port other operating systems to the kernel, including IBM's Workplace OS and several efforts by Apple Computer to build a cross-platform version of the Mac OS.


For some time it appeared that every operating system in the world would be based on Mach by the late 1990s.


Performance problems

Mach was originally intended to be a replacement for classical Unix, and for this reason contained many Unix-like ideas. For instance, Mach used a permissioning and security system patterned on Unix's file system. Since the kernel was privileged (running in kernel-space) it was possible for malfunctioning or malicious programs to send it commands that would cause damage to the system, and for this reason the kernel checked every message for validity. Additionally most of the functionality was to be located in user-space programs, so this meant there needed to be some way for the kernel to grant these programs additional privileges, to operate on hardware for instance.


Some of Mach's more esoteric features were also based on this same IPC mechanism. For instance, Mach was able to support multi-processor machines with ease. In a traditional kernel extensive work needs to be carried out to make it reentrant or interruptible, as programs running on different processors could call into the kernel at the same time. Under Mach, the bits of the operating system are isolated in servers, which are able to run, like any other program, on any processor. Although in theory the Mach kernel would also have to be reentrant, in practice this isn't an issue because its response times are so fast it can simply wait and serve requests in turn. Mach also included a server that could forward messages not just between programs, but even over the network, which was an area of intense development in the late 1980s and early 1990s.


Unfortunately, the use of IPC for almost all tasks turned out to have a serious performance impact. Benchmarks on 1997 hardware showed that Mach 3.0-based Unix single-server implementations were about 50% slower than native Unix.[1][2]


Studies showed the vast majority of this performance hit, 73% by one measure, was due to the overhead of the IPC [citation needed]. And this was measured on a system with a single large server providing the operating system; breaking the system down further into smaller servers would only make the problem worse. It appeared the goal of a collection-of-servers was simply not possible.


Many attempts were made to improve the performance of Mach and Mach-like microkernels, but by the mid-1990s much of the early intense interest had died. The concept of an operating system based on IPC appeared to be dead, the idea itself flawed [citation needed].


In fact, further study of the exact nature of the performance problems turned up a number of interesting facts. One was that the IPC itself was not the problem: there was some overhead associated with the memory mapping needed to support it, but this added only a small amount of time to making a call. The rest, 80% of the time being spent, was due to additional tasks the kernel was running on the messages. Primary among these were the port rights checks and message validity checks. In tests on a 486DX-50, a standard Unix system call took 21 microseconds to complete, while the corresponding operation on Mach took 114 microseconds. Only 18 microseconds of this was hardware-related; the rest was the Mach kernel running various routines on the message [citation needed].


When Mach was first being seriously used in the 2.x versions, performance was slower than traditional kernels, perhaps as much as 25% [citation needed]. This cost was not considered particularly worrying, however, because the system was also offering multi-processor support and easy portability. Many felt this was an expected and acceptable price to pay. In fact the system was hiding a serious performance problem, one that only became obvious when Mach 3 started to be widely used and developers attempted to build systems running in user space.


When Mach 3 attempted to move the operating system into user-space, the overhead suddenly became overwhelming. In this case consider the simple task of asking the system for the time. Under a true user-space system, there would be a server handling this request. The caller would trigger the IPC system to run the kernel, causing a context switch and memory mapping. The kernel would then examine the contents of the message to see if the caller had rights to call the server, and if so, do another mapping into the server's memory and another context switch to allow it to run. The process then repeats to return the results, adding up to a total of four context switches and memory mappings, as well as two runs of the code to check the rights and validity of the messages.


To put numbers to this, a call into the BSD kernel on a 486DX-50 requires about 20 microseconds (μs). The same call on the same system running Mach 3 required 114 μs. Given a syscall that does nothing, a full round-trip under BSD would require about 40 μs, whereas on a user-space Mach system it would take just under 500 μs. In detailed testing published in 1991, Chen and Bershad found overall system performance was degraded by up to 66% compared to a traditional kernel [citation needed].


This was not the only source of performance problems. Another centered on the problems of trying to handle memory properly when physical memory ran low and paging had to occur. In traditional monokernels the authors had direct experience with which parts of the kernel called which others, allowing them to fine-tune their pager to avoid paging out code that was about to be used. Under Mach this wasn't possible because the kernel had no real idea what the operating system consisted of. Instead it had to use a single one-size-fits-all solution, which added to the performance problems. Mach 3 attempted to address this by providing a simple pager and relying on user-space pagers for better specialization, but this turned out to have little effect: in practice, any benefits were wiped out by the expensive IPC needed to call them.


Some of these problems would exist in any system that had to work on multiple processors, and in the mid-1980s it appeared the future market would be filled with them. In fact, this evolution did not play out as expected. Multiprocessor machines were used for a brief time in server applications in the early 1990s, but then faded away. Meanwhile, commodity CPUs grew in performance at a rate of about 60% a year, magnifying any inefficiency in code. Worse, the speed of memory access grew at only 7% a year, meaning that the cost of accessing memory grew tremendously over this period, and since Mach was based on mapping memory around between programs, any "cache miss" made IPC calls excruciatingly slow.


Regardless of the advantages of the Mach approach, these sorts of real-world performance hits were simply not acceptable. As other teams found the same sorts of results, the early Mach enthusiasm quickly disappeared. After a short time many in the development community seemed to conclude that the entire concept of using IPC as the basis of an operating system was inherently flawed [citation needed].


Potential solutions

IPC overhead is thus a major issue for Mach 3 systems. Nevertheless, the concept of a multi-server operating system is still promising, though it requires further research. Developers have to be careful to isolate code into modules that do not call from server to server. For instance, the majority of the networking code would be placed in a single server, thereby minimizing IPC for normal networking tasks. Under Unix this isn't very easy, however, because the system is based on using the file system as the basis for everything from security to networking.


Most developers instead stuck with the original POE concept of a single large server providing the operating system functionality. In order to ease development, they allowed the operating system server to run either in user space or kernel space. This allowed them to develop in user space and have all the advantages of the original Mach idea, and then move the debugged server into kernel space in order to get better performance. Several operating systems have since been constructed using this method, known as co-location, among them Lites (a port of 4.4BSD-Lite), MkLinux, OSF/1 and NeXTSTEP/OPENSTEP/Mac OS X. The Chorus microkernel made this a feature of the basic system, allowing servers to be raised into kernel space using built-in mechanisms.


Mach 4 attempted to address these problems, this time with a more radical set of upgrades. In particular, it was found that program code was typically not writable, so potential hits due to copy-on-write were rare. It therefore made sense not to map the memory between programs for IPC, but instead to migrate the program code being used into the local space of the program. This led to the concept of "shuttles", and performance seemed to improve, but the developers moved on, leaving the system in a semi-usable state. Mach 4 also introduced built-in co-location primitives, making co-location part of the kernel itself.


In all of these tests the IPC performance was found to be the main contributor to the problem, accounting for about 73% of the lost cycles.


By the mid-1990s, work on microkernel systems was largely dead. Although the market had generally believed that all modern operating systems would be microkernel-based by the 1990s, today the only widespread desktop use is in Apple's Mac OS X, which uses a co-located server running on top of a heavily modified Mach 3.


The Next Generation

Further analysis demonstrated that the IPC performance problem was not as clear-cut as it seemed. Recall that a one-way syscall took 20 μs under BSD and 114 μs under Mach running on the same system. Of the 114, 11 μs were the context switch, identical to BSD. A further 18 μs were used by the MMU to map the message between user space and kernel space. That adds up to only 29 μs, longer than a traditional syscall, but not by much.


The rest, the majority of the actual problem, was due to the kernel performing tasks such as checking the message for port access rights. While this would seem to be an important security concern, it only makes sense in a Unix-like system. For instance, a single-user operating system running on a cell phone might not need any of these features, and this is exactly the sort of system where Mach's pick-and-choose operating system would be most valuable. Likewise, Mach caused problems when memory had been moved by the operating system, another task that only really makes sense if the system has more than one address space. DOS and the early Mac OS had a single large address space shared by all programs, so under these systems the mapping was a waste of time.


These realizations led to a series of second-generation microkernels, which further reduced the complexity of the system and placed almost all functionality in user space. For instance, the L4 kernel includes only seven functions and uses 12 KB of memory, whereas Mach 3 includes about 140 functions and uses about 330 KB. IPC calls under L4 on a 486DX-50 take only 5 μs, faster than a Unix syscall on the same system, and over 20 times as fast as Mach. Of course this ignores the fact that L4 is not handling permissioning or security; but by leaving this to user-space programs, they can select as much or as little overhead as they require.


The potential performance gains of L4 are tempered by the fact that user-space applications often have to provide many of the functions formerly supplied by the kernel. To test end-to-end performance, MkLinux running in co-located mode was compared with an L4 port running in user space. L4 added only about 5%-10% overhead, compared with Mach's 15%, a result made all the more interesting given the double context switches needed[citation needed].


These newer microkernels have revitalized the industry as a whole, and many formerly dead projects such as the GNU Hurd have received new attention as a result.


Operating systems based on Mach

  • GNU Hurd
  • GNU Mach
  • Lites
  • MkLinux
  • MachTen
  • MacMach
  • Mac OS X
  • NEXTSTEP
  • Workplace OS
  • UNICOS

See also

  • Microkernel
  • L4 microkernel family

References

  1. ^ M. Condict, D. Bolinger, E. McManus, D. Mitchell, S. Lewontin (April 1994). "Microkernel modularity with integrated kernel performance". Technical report, OSF Research Institute, Cambridge, MA.
  2. ^ Hermann Härtig, Michael Hohmuth, Jochen Liedtke, Sebastian Schönberg, Jean Wolter (October 1997). "The performance of μ-kernel-based systems". Proceedings of the 16th ACM symposium on Operating systems principles (SOSP), Saint-Malo, France. ISBN 0-89791-916-5.
  • J. Bradley Chen, Brian N. Bershad. "The impact of operating system structure on memory system performance", ACM Press, 1994. ISBN 0-89791-632-8.


External links

  • The Mach project at Carnegie Mellon
  • THE MACH SYSTEM – an excellent introduction to Mach concepts
  • A comparison of Mach, Amoeba and Chorus
  • Towards Real Microkernels – contains numerous performance measurements, including those quoted in the article
  • The Performance of µ-Kernel-Based Systems – contains an excellent performance comparison of Linux running as a monokernel, on Mach 3 and on L4
  • Rick Rashid's page at Microsoft Research


The Wikipedia article included on this page is licensed under the GFDL.