Interrupt latency

Interrupt latency is the time between the generation of an interrupt by a device and the servicing of the device which generated the interrupt. For many operating systems, a device is considered serviced once its interrupt handler begins executing. Interrupt latency may be affected by interrupt controllers, interrupt masking, and the operating system's (OS) interrupt handling methods.


Background

There is usually a tradeoff between interrupt latency, throughput, and processor utilization. Many of the CPU and OS design techniques that improve interrupt latency will decrease throughput and increase processor utilization. Techniques that increase throughput may increase both interrupt latency and processor utilization. Lastly, trying to reduce processor utilization may increase interrupt latency and decrease throughput.


Minimum interrupt latency is largely determined by the interrupt controller circuit and its configuration. The controller and its configuration can also affect the jitter in the interrupt latency, which can drastically affect the real-time schedulability of the system. The Intel APIC Architecture is known for producing a large amount of interrupt latency jitter.


Maximum interrupt latency is largely determined by the methods an OS uses for interrupt handling. For example, most processors allow programs to disable interrupts, putting off the execution of interrupt handlers, in order to protect critical sections of code. During the execution of such a critical section, all interrupt handlers that cannot execute safely within it are blocked (they save the minimum amount of information required to restart the interrupt handler after all critical sections have exited). So the interrupt latency for a blocked interrupt is extended to the end of the critical section, plus the time taken by any interrupts of equal or higher priority that arrived while the block was in place.


Many computer systems require low interrupt latencies, especially embedded systems that need to control machinery in real time. Sometimes these systems use a real-time operating system (RTOS). An RTOS promises that no more than an agreed-upon maximum amount of time will pass between executions of subroutines. In order to do this, the RTOS must also guarantee that interrupt latency will never exceed a predefined maximum.


Considerations

There are many methods hardware can use to increase the interrupt latency that can be tolerated, including buffering and flow control. For example, most network cards implement transmit and receive ring buffers, interrupt rate limiting, and hardware flow control. Buffers allow data to be stored until it can be transferred, and flow control allows the device to ask the sender to pause transmission before its buffers overflow.


Modern hardware also implements interrupt rate limiting. This helps prevent interrupt storms or livelock by having the hardware wait a programmable minimum amount of time between each interrupt it generates. Interrupt rate limiting reduces the amount of time spent servicing interrupts, allowing the processor to spend more time doing useful work. Exceeding the latency that can be tolerated results in a soft (recoverable) or hard (non-recoverable) error.


The Wikipedia article included on this page is licensed under the GFDL.