Computer multitasking

In computing, multitasking is a method by which multiple tasks, also known as processes, share common processing resources such as a CPU. In the case of a computer with a single CPU, only one task is said to be running at any point in time, meaning that the CPU is actively executing instructions for that task. Multitasking addresses this limitation by scheduling which task may be the one running at any given time, and when another waiting task gets a turn. The act of reassigning a CPU from one task to another is called a context switch. When context switches occur frequently enough, the illusion of parallelism is achieved. Even on computers with more than one CPU (called multiprocessor machines), multitasking allows many more tasks to be run than there are CPUs.


Operating systems may adopt one of many different scheduling strategies, which generally fall into the following categories:

  • In multiprogramming systems, the running task keeps running until it performs an operation that requires waiting for an external event (e.g. reading from a tape) or until the computer's scheduler forcibly swaps the running task out of the CPU. Multiprogramming systems are designed to maximize CPU usage.
  • In time-sharing systems, the running task is required to relinquish the CPU, either voluntarily or through an external event such as a hardware interrupt. Time-sharing systems are designed to allow several programs to execute apparently simultaneously.
  • In real-time systems, some waiting tasks are guaranteed to be given the CPU when an external event occurs. Real-time systems are designed to control mechanical devices such as industrial robots, which require timely processing.

The term time-sharing is no longer commonly used, having been replaced by simply multitasking.


Multiprogramming

In the early days of computing, CPU time was expensive, and peripherals were very slow. When the computer ran a program that needed access to a peripheral, the CPU would have to stop executing program instructions while the peripheral processed the data. This was deemed very inefficient.


The first efforts to create multiprogramming systems took place in the 1960s. Several different programs were loaded in batch into the computer's memory, and the first one began to run. When the first program reached an instruction that required waiting for a peripheral, the context of this program was stored away, and the second program in memory was given a chance to run. The process continued until all programs finished running.
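The switching rule described above — run a program until it blocks on a peripheral, then hand the CPU to the next program in memory — can be sketched in a few lines. This is only an illustrative simulation (the program names, the "cpu"/"io" step lists, and the `run_multiprogrammed` helper are all invented for the example), not how any particular 1960s system was implemented:

```python
from collections import deque

def run_multiprogrammed(programs):
    """Run each program until it blocks on (simulated) I/O, then switch.

    `programs` maps a program name to a list of steps, each either
    "cpu" or "io". Returns the order in which steps were executed.
    """
    ready = deque(programs.items())
    trace = []
    while ready:
        name, steps = ready.popleft()
        # Run this program until it needs a peripheral or finishes.
        while steps:
            step = steps.pop(0)
            trace.append((name, step))
            if step == "io":
                break  # context switch: store state, give the CPU away
        if steps:  # not finished yet: requeue behind the others
            ready.append((name, steps))
    return trace

trace = run_multiprogrammed({
    "A": ["cpu", "cpu", "io", "cpu"],
    "B": ["cpu", "io", "cpu"],
})
```

Program A keeps the CPU through its first two steps and only yields it at the "io" step, after which B runs — exactly the "run until an external wait" policy, with no fairness guarantee.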


Multiprogramming gives no guarantee that a program will run in a timely manner. Indeed, the first program may well run for hours without needing access to a peripheral. As there were no users waiting at an interactive terminal, this was no problem: users handed a deck of punched cards to an operator and came back a few hours later for printed results. Multiprogramming greatly reduced the overall waiting.


As noted in the OS/360 entry, its original control program followed the above model, but it was replaced the very next year, 1967, by MFT, which set limits on the amount of CPU time any single process could consume before being switched out. The hours-long waits referred to above were largely a reflection of the fact that computers were simply not very fast compared to today's.


Cooperative multitasking/time-sharing

When computer usage evolved from batch mode to interactive mode, multiprogramming was no longer a suitable approach. Each user wanted to see his program running as if it were the only program in the computer. The use of time sharing made this possible, with the concession that the computer would not seem as fast to any one user as it would if it were running only that user's program.


Early multitasking systems consisted of suites of related applications that voluntarily ceded time to each other. This approach, which was eventually supported by many computer operating systems, is today known as cooperative multitasking. Although it is rarely used in larger systems, Microsoft Windows prior to Windows 95 and Windows NT, and Mac OS prior to Mac OS X both used cooperative multitasking to enable the running of multiple applications simultaneously.


Cooperative multitasking has many shortcomings. For one, a cooperatively multitasked system must rely on each process to regularly give time to other processes on the system. Any single poorly designed program, or any single "hung" process, can effectively bring the system to a halt. The design requirements of a cooperatively multitasked program can also be onerous for some purposes, and may result in irregular (or inefficient) use of system resources. To remedy this situation, most time-sharing systems quickly evolved a more advanced approach known as preemptive multitasking.
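The cooperative discipline — and its fragility — can be illustrated with Python generators, where `yield` plays the role of the voluntary cession of the CPU. The task and scheduler names here are invented for this sketch; a real cooperatively multitasked OS works at a very different level:

```python
from collections import deque

def task(name, steps):
    """A cooperative task: does one unit of work, then yields the CPU."""
    for i in range(steps):
        yield f"{name}:{i}"  # yielding is the voluntary cession of time

def cooperative_schedule(tasks, max_switches=100):
    """Round-robin over tasks; each runs only until it next yields."""
    ready = deque(tasks)
    trace = []
    while ready and len(trace) < max_switches:
        t = ready.popleft()
        try:
            trace.append(next(t))  # give this task the CPU
            ready.append(t)        # it yielded: back of the queue
        except StopIteration:
            pass                   # task finished
    return trace

trace = cooperative_schedule([task("A", 2), task("B", 2)])
```

The two tasks interleave (A:0, B:0, A:1, B:1) only because each one reaches `yield`. A task whose body looped without ever yielding would monopolize the scheduler forever — the "hung process" problem described above.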


Preemptive multitasking/time-sharing

Preemptive multitasking allows the computer system to more reliably guarantee each process a regular "slice" of operating time. It also allows the system to rapidly deal with important external events like incoming data, which might require the immediate attention of one or another process.


At any specific time, processes can be grouped into two categories: those that are waiting for input or output (called "I/O bound"), and those that are fully utilizing the CPU ("CPU bound"). In early systems, processes would often "poll", or "busywait" while waiting for requested input (such as disk, keyboard or network input). During this time, the process was not performing useful work, but still maintained complete control of the CPU. With the advent of interrupts and preemptive multitasking, these I/O bound processes could be "blocked", or put on hold, pending the arrival of the necessary data, allowing other processes to utilize the CPU. As the arrival of the requested data would generate an interrupt, blocked processes could be guaranteed a timely return to execution.
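The difference between busy-waiting and a blocked wait can be sketched with a thread that sleeps inside a blocking call until data "arrives". Here Python's `queue.Queue.get` stands in for the block-until-interrupt mechanism, and the device thread and its delay are invented for the example:

```python
import threading
import time
import queue

def device(q):
    """Simulated slow peripheral: delivers its data after a delay."""
    time.sleep(0.05)
    q.put("data ready")   # like an interrupt: wakes the blocked reader

q = queue.Queue()
threading.Thread(target=device, args=(q,)).start()

# Blocked wait: get() deschedules this thread until data arrives,
# instead of spinning in a busy-wait loop and hogging the CPU.
got = q.get()
```

A polling version would instead loop on `q.empty()`, burning CPU the whole time the peripheral is busy; the blocked version consumes essentially none.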


Real time

Another reason for multitasking was in the design of real-time computing systems, where a number of possibly unrelated external activities needed to be controlled by a single processor system. In such systems a hierarchical interrupt system was coupled with process prioritization to ensure that key activities were given a greater share of available process time.


Multithreading

As multitasking greatly improved the throughput of computers, programmers started to implement applications as sets of cooperating processes (e.g. one process gathering input data, one process processing input data, one process writing out results on disk). This, however, required some tools to allow processes to efficiently exchange data.
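A minimal sketch of cooperating processes exchanging data, assuming a POSIX system (`os.fork` and `os.pipe` are not available on Windows): one process gathers input and a second transforms it, with pipes as the data-exchange tool. It also illustrates the overhead threads were later invented to avoid — the two address spaces are separate, so every byte must be explicitly copied through the pipe:

```python
import os

# Parent "gathers" data, child "processes" it; pipes carry bytes between
# the two separate address spaces (nothing is shared, unlike threads).
r1, w1 = os.pipe()   # parent -> child
r2, w2 = os.pipe()   # child -> parent

pid = os.fork()
if pid == 0:                       # child: the "processing" stage
    os.close(w1)
    os.close(r2)
    raw = os.read(r1, 1024)
    os.write(w2, raw.upper())      # transform and send the result back
    os._exit(0)

os.close(r1)                       # parent: the "gathering" stage
os.close(w2)
os.write(w1, b"input data")
os.close(w1)
processed = os.read(r2, 1024)
os.waitpid(pid, 0)
```

After the round trip, `processed` holds the transformed bytes; all data the stages exchanged had to travel through the kernel via the pipes.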


Threads were born from the idea that the most efficient way for cooperating processes to exchange data would be to share their entire memory space. Thus, threads are basically processes that run in the same memory context. Threads are described as lightweight because switching between threads does not involve changing the memory context.
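Because threads share one memory context, data handed from one thread to another needs no copying at all. A minimal Python sketch (the `shared` list and `worker` function are invented for the example):

```python
import threading

shared = []          # one memory context, visible to every thread

def worker():
    # Writes straight into memory the main thread also sees: no pipe,
    # no serialization, no copy between address spaces.
    shared.append("computed by worker")

t = threading.Thread(target=worker)
t.start()
t.join()             # wait for the worker, then read its result directly
```

After `join()`, the main thread simply reads `shared` — contrast this with the inter-process pipe, where the same hand-off required explicit reads and writes through the kernel.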


While threads are scheduled preemptively, some operating systems provide a variant to threads, named fibers, that are scheduled cooperatively. On operating systems that do not provide fibers, an application may implement its own fibers using repeated calls to worker functions. Fibers are even more lightweight than threads, and somewhat easier to program with, although they tend to lose some or all of the benefits of threads on machines with multiple processors.[citation needed]


Some systems directly support multithreading in hardware.


Memory protection

When multiple programs are present in memory, an ill-behaved program may (inadvertently or deliberately) overwrite memory belonging to another program, or even to the operating system itself.


The operating system therefore restricts the memory accessible to the running program. A program trying to access memory outside its allowed range is immediately stopped before it can change memory belonging to another process.


Another key innovation was the idea of privilege levels. Low privilege tasks are not allowed some kinds of memory access and are not allowed to perform certain instructions. When a task tries to perform a privileged operation a trap occurs and a supervisory program running at a higher level is allowed to decide how to respond. This created the possibility of virtualizing the entire system, including virtual peripheral devices. Such a simulation is called a virtual machine operating system. Early virtual machine systems did not have virtual memory, but both are common today.


Memory swapping

Use of a swap file or swap partition is a way for the operating system to provide more memory than is physically available by keeping portions of the primary memory in secondary storage. While multitasking and memory swapping are two completely unrelated techniques, they are very often used together, as swapping memory allows more tasks to be loaded at the same time. Typically, a multitasking system allows another process to run when the running process hits a point where it has to wait for some portion of memory to be reloaded from secondary storage.


Programming in a multitasking environment

Processes that are entirely independent are not much trouble to program. Most of the complexity in multitasking systems comes from the need to share computer resources between tasks and to synchronize the operation of co-operating tasks. Various concurrent computing techniques are used to avoid potential problems caused by multiple tasks attempting to access the same resource.
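The classic shared-resource hazard is a lost update on a shared counter. A minimal sketch of one concurrent-computing technique, mutual exclusion with a lock (the counter and thread counts are arbitrary); note that even under CPython's global interpreter lock, the read-modify-write in `counter += 1` can be interleaved between threads, so the explicit lock is still required:

```python
import threading

counter = 0
lock = threading.Lock()

def increment(n):
    """Add to the shared counter n times, one protected update at a time."""
    global counter
    for _ in range(n):
        with lock:            # serialize access to the shared resource
            counter += 1      # read-modify-write is now one atomic step

threads = [threading.Thread(target=increment, args=(10_000,))
           for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

With the lock, the final count is exactly 4 × 10,000; without it, two threads can read the same old value, both add one, and both write back — losing an update.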


Bigger computer systems were sometimes built with one or more central processors and some number of I/O processors, a kind of asymmetric multiprocessing. One use for interrupts is to allow a simpler processor to simulate the dedicated I/O processors that it does not have.


Over the years, multitasking systems have been refined. Modern operating systems generally include detailed mechanisms for prioritizing processes, while symmetric multiprocessing has introduced new complexities and capabilities.


See also

  • Process (computing)
  • Process states
  • System time
  • Task (computing)

External links

  • How to Select an RTOS
  • How Preemption Works
  • Perils of Preemption


 
 


The Wikipedia article included on this page is licensed under the GFDL.
Images may be subject to relevant owners' copyright.
All other elements are (c) copyright NationMaster.com 2003-5. All Rights Reserved.