Safety engineering

Safety engineering is an applied science strongly related to systems engineering and its subset, system safety engineering. Safety engineering assures that a life-critical system behaves as needed even when components fail.


In the real world, the term "safety engineering" refers to any act of accident prevention by a person qualified in the field. Safety engineering is often reactive to adverse events, also described as "incidents", as reflected in accident statistics. This arises largely because of the complexity and difficulty of collecting and analysing data on "near misses".


Increasingly, the safety review is being recognised as an important risk-management tool. Failure to identify risks to safety, and the consequent inability to address or "control" those risks, can result in massive costs, both human and economic. The multidisciplinary nature of safety engineering means that a very broad array of professionals is actively involved in accident prevention.


The majority of those practicing safety engineering are employed in industry to keep workers safe on a day-to-day basis. See the American Society of Safety Engineers publication Scope and Function of the Safety Profession.


Safety engineers distinguish different extents of defective operation. A "failure" is "the inability of a system or component to perform its required functions within specified performance requirements", while a "fault" is "a defect in a device or component, for example: a short circuit or a broken wire".[1] System-level failures are caused by lower-level faults, which are ultimately caused by basic component faults. (Some texts reverse or confuse these two terms; see NUREG-0492, page V-1.) The unexpected failure of a device operating within its design limits is a "primary failure", while the expected failure of a component stressed beyond its design limits is a "secondary failure". A device that appears to malfunction because it has responded as designed to a bad input is suffering from a "command fault".[2] A "critical" fault endangers one or a few people; a "catastrophic" fault endangers, harms or kills a significant number of people.
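The taxonomy above can be sketched as a small classification routine. This is an illustrative sketch, not a standard algorithm: the field names, thresholds, and the simplification of "bad input" to a single boolean are all assumptions made for the example.

```python
# Classify a device failure using the primary/secondary/command taxonomy.
# Fields and the boolean input model are illustrative simplifications.
from dataclasses import dataclass

@dataclass
class FailureEvent:
    stress: float          # observed stress on the device
    design_limit: float    # maximum stress the device is rated for
    input_was_valid: bool  # did the device receive a correct command/input?

def classify(e: FailureEvent) -> str:
    if not e.input_was_valid:
        return "command fault"      # responded as designed to a bad input
    if e.stress > e.design_limit:
        return "secondary failure"  # expected: pushed past its rating
    return "primary failure"        # unexpected: failed within design limits

print(classify(FailureEvent(stress=80, design_limit=100, input_was_valid=True)))
# primary failure
```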


Safety engineers also identify different modes of safe operation. A "probabilistically safe" system has no single point of failure and enough redundant sensors, computers and effectors that it is very unlikely to cause harm (usually "very unlikely" means, on average, less than one human life lost in a billion hours of operation). An "inherently safe" system is a mechanical arrangement that cannot be made to cause harm – obviously the best arrangement, but not always possible. A "fail-safe" system is one that cannot cause harm when it fails. A "fault-tolerant" system can continue to operate with faults, though its operation may be degraded in some fashion.
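The effect of redundancy on the "very unlikely" threshold can be illustrated with a quick calculation. The sketch below assumes independent channels and a constant-rate exponential failure model; the failure rate and mission time are made-up numbers, not from any standard.

```python
# Probability that ALL of n independent, identical channels have failed
# by time t, under a constant-rate (exponential) failure model.
import math

def p_channel_failed(rate_per_hour: float, hours: float) -> float:
    """P(one channel has failed by time t)."""
    return 1.0 - math.exp(-rate_per_hour * hours)

def p_all_failed(rate_per_hour: float, hours: float, channels: int) -> float:
    """P(every redundant channel has failed), assuming independence."""
    return p_channel_failed(rate_per_hour, hours) ** channels

# Illustrative numbers: a sensor failing once per 10,000 hours on average,
# over a 10-hour mission.
rate = 1e-4
for n in (1, 2, 3):
    print(n, p_all_failed(rate, hours=10.0, channels=n))
```

Each added channel multiplies the loss probability by roughly the single-channel failure probability, which is why duplication is such a cheap route to "probabilistically safe" (provided the failures really are independent, i.e. no common cause).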


These terms combine to describe the safety needed by systems. For example, most biomedical equipment is only "critical", and often another identical piece of equipment is nearby, so it can be merely "probabilistically fail-safe". Train signals can cause "catastrophic" accidents (imagine chemical releases from tank cars) and are usually "inherently safe". Aircraft "failures" are "catastrophic" (at least for their passengers and crew), so aircraft are usually "probabilistically fault-tolerant". Without any safety features, nuclear reactors might have "catastrophic failures", so real nuclear reactors are required to be at least "probabilistically fail-safe", and some, such as pebble bed reactors, are "inherently fault-tolerant".


The process

Ideally, safety engineers take an early design of a system, analyze it to find what faults can occur, and then propose safety requirements in design specifications up front, and changes to existing systems, to make the system safer. In an early design stage, a fail-safe system can often be made acceptably safe with a few sensors and some software to read them. Probabilistic fault-tolerant systems can often be made by using more, but smaller and less expensive, pieces of equipment.


Far too often, rather than actually influencing the design, safety engineers are assigned to prove that an existing, completed design is safe. If a safety engineer then discovers significant safety problems late in the design process, correcting them can be very expensive; this type of error has the potential to waste large sums of money.


The exception to this conventional approach is the way some large government agencies approach safety engineering from a more proactive and proven process perspective, known as system safety. The system safety philosophy, supported by the System Safety Society, is to be applied to complex and critical systems, such as commercial airliners, military aircraft, munitions and complex weapon systems, spacecraft and space systems, rail and transportation systems, the air traffic control system, and more complex and safety-critical industrial systems. The proven system safety methods and techniques aim to prevent, eliminate and control hazards and risks through design influence, by a collaboration of key engineering disciplines and product teams. Software safety is a fast-growing field, since the functionality of modern systems is increasingly placed under the control of software. The whole concept of system safety and software safety, as a subset of systems engineering, is to influence the design of safety-critical systems by conducting several types of hazard analyses to identify risks and to specify design safety features and procedures that strategically mitigate risk to acceptable levels before the system is certified.


Additionally, failure mitigation can go beyond design recommendations, particularly in the area of maintenance. There is an entire realm of safety and reliability engineering known as "Reliability Centered Maintenance" (RCM), a discipline that is a direct result of analyzing potential failures within a system and determining maintenance actions that can mitigate the risk of failure. This methodology is used extensively on aircraft and involves understanding the failure modes of the serviceable replaceable assemblies, in addition to the means to detect or predict an impending failure. Every automobile owner is familiar with this concept when they take their car in to have the oil changed or brakes checked. Even filling up one's car with gas is a simple example of a failure mode (failure due to fuel starvation), a means of detection (the fuel gauge), and a maintenance action (filling the tank).


For large-scale complex systems, hundreds if not thousands of maintenance actions can result from the failure analysis. These maintenance actions are based on observed conditions (e.g., a gauge reading or a leaky valve), on hard limits (e.g., a component is known to fail after 100 hours of operation with 95% certainty), or on inspections that determine the maintenance action (e.g., for metal fatigue). The Reliability Centered Maintenance concept then analyzes each individual maintenance item for its risk contribution to safety, mission, operational readiness, or cost to repair if a failure does occur. The sum total of all the maintenance actions is then bundled into maintenance intervals, so that maintenance does not occur around the clock but rather at regular intervals. This bundling process introduces further complexity, as it might stretch some maintenance cycles, thereby increasing risk, but shorten others, thereby potentially reducing risk, with the end result being a comprehensive maintenance schedule, purpose-built to reduce operational risk and ensure acceptable levels of operational readiness and availability.
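The bundling step described above can be sketched as a simple grouping rule: each action has its own analyzed ideal interval, and is pulled back to the nearest scheduled interval that does not exceed it, so no action is deferred beyond its limit. The action names, intervals, and one-directional rounding rule are all illustrative assumptions, not an RCM standard.

```python
# Bundle maintenance actions into a few fixed intervals. Each action is
# performed at the largest scheduled interval that does not exceed its
# ideal interval, so risk is never increased by the bundling.
from collections import defaultdict

SCHEDULED_INTERVALS_HRS = [50, 100, 250, 500]  # illustrative

actions = {  # action name -> ideal interval (hours), illustrative
    "check brake pads": 120,
    "replace fuel filter": 300,
    "inspect fatigue-prone fitting": 75,
    "lubricate actuator": 60,
}

def bundle(actions, intervals):
    plan = defaultdict(list)
    for name, ideal in actions.items():
        eligible = [i for i in intervals if i <= ideal]
        slot = max(eligible) if eligible else min(intervals)
        plan[slot].append(name)
    return dict(plan)

print(bundle(actions, SCHEDULED_INTERVALS_HRS))
# e.g. {100: ['check brake pads'], 250: ['replace fuel filter'],
#       50: ['inspect fatigue-prone fitting', 'lubricate actuator']}
```

A real RCM program would also weigh the cost of over-maintaining (the 75-hour inspection now happens every 50 hours) against the operational benefit of fewer maintenance stand-downs.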


Analysis techniques

The two most common fault modeling techniques are called "failure modes and effects analysis" (FMEA) and "fault tree analysis" (FTA). These techniques are just ways of finding problems and of making plans to cope with failures, as in probabilistic risk assessment (PRA or PSA). One of the earliest complete studies using PRA techniques on a commercial nuclear plant was the Reactor Safety Study (RSS), edited by Prof. Norman Rasmussen[3] (see WASH-1400).


Failure modes and effects analysis

In the technique known as "failure mode and effects analysis" (FMEA), an engineer starts with a block diagram of a system. The safety engineer then considers what happens if each block of the diagram fails. The engineer draws up a table in which failures are paired with their effects and an evaluation of those effects. The design of the system is then corrected, and the table adjusted, until the system is not known to have unacceptable problems. It is very helpful to have several engineers review the failure modes and effects analysis.
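The table the engineer draws up can be sketched in a few lines of code. This is a deliberately minimal worksheet: the blocks, ratings, the severity-times-occurrence risk number, and the threshold are all illustrative assumptions, not values from any FMEA standard.

```python
# A minimal FMEA worksheet: each row pairs a block's failure mode with its
# effect and severity/occurrence ratings; rows above a risk threshold are
# flagged for design correction. All entries are illustrative.
rows = [
    # (block, failure mode, effect, severity 1-10, occurrence 1-10)
    ("pump",   "seizes",      "loss of coolant flow",  9, 3),
    ("sensor", "reads low",   "late overheat warning", 7, 4),
    ("valve",  "sticks open", "tank overflow",         4, 2),
]

RISK_THRESHOLD = 25  # severity x occurrence above this needs redesign

def review(rows, threshold=RISK_THRESHOLD):
    flagged = []
    for block, mode, effect, sev, occ in rows:
        rpn = sev * occ  # simplified risk priority number
        if rpn > threshold:
            flagged.append((block, mode, rpn))
    return flagged

print(review(rows))  # [('pump', 'seizes', 27), ('sensor', 'reads low', 28)]
```

After a design change, the engineer re-rates the affected rows and re-runs the review until nothing is flagged, mirroring the correct-and-adjust loop described above.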


Fault tree analysis

In the technique known as "fault tree analysis", an undesired effect is taken as the root ("top event") of a tree of logic. Then, each situation that could cause that effect is added to the tree as a series of logic expressions. When fault trees are labelled with actual failure probabilities (which are often unavailable in practice because of the expense of testing), computer programs can calculate failure probabilities from fault trees.

A fault tree diagram

The tree is usually written out using conventional logic gate symbols. A route through the tree between an event and an initiator is called a cut set. The shortest credible way through the tree from a fault to the initiating event is called a minimal cut set.
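The gate arithmetic behind such a tree is straightforward to sketch. Assuming independent basic events (a strong assumption that real analyses must justify), an AND gate multiplies its input probabilities and an OR gate combines them as one minus the chance that none occur. The tree structure and numbers below are illustrative.

```python
# Evaluate a tiny fault tree with AND/OR gates, assuming the basic
# events are statistically independent.

def gate_and(*ps):
    out = 1.0
    for p in ps:
        out *= p
    return out

def gate_or(*ps):
    # For independent events: 1 - P(no input event occurs)
    out = 1.0
    for p in ps:
        out *= (1.0 - p)
    return 1.0 - out

# Top event: "pump fails to deliver coolant"
# = (motor fails) OR (both redundant power feeds fail)
p_motor = 1e-4
p_feed_a = 1e-3
p_feed_b = 1e-3
p_top = gate_or(p_motor, gate_and(p_feed_a, p_feed_b))
print(p_top)  # ~1.01e-4
```

Note how the AND gate over the redundant feeds contributes only 1e-6: the single motor, a one-component minimal cut set, dominates the top-event probability, which is exactly the kind of insight cut-set analysis is meant to surface.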


Some industries use both fault trees and event trees (see probabilistic risk assessment). An event tree starts from an undesired initiator (loss of critical supply, component failure, etc.) and follows possible further system events through to a series of final consequences. As each new event is considered, a new node is added to the tree, with a split of probabilities between the branches. The probabilities of a range of "top events" arising from the initial event can then be seen.
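An event-tree walk is just a chain of multiplications along each path from the initiator to an end state. The scenario and branch probabilities below are invented for illustration; they are not drawn from any published PRA.

```python
# Walk a small event tree: multiply branch probabilities along each path
# from the initiating event to get the chance of each end state.
P_INITIATOR = 1e-3          # loss of critical supply, per demand (illustrative)
P_BACKUP_WORKS = 0.99       # branch at node 1 (illustrative)
P_OPERATOR_RECOVERS = 0.9   # branch at node 2 (illustrative)

outcomes = {
    "recovered by backup":
        P_INITIATOR * P_BACKUP_WORKS,
    "recovered by operator":
        P_INITIATOR * (1 - P_BACKUP_WORKS) * P_OPERATOR_RECOVERS,
    "core damage":
        P_INITIATOR * (1 - P_BACKUP_WORKS) * (1 - P_OPERATOR_RECOVERS),
}
for name, p in outcomes.items():
    print(f"{name}: {p:.2e}")
```

The end-state probabilities sum to the initiator probability, a useful sanity check on any event tree: every path from the initiating event must land in exactly one end state.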


Classic programs include the Electric Power Research Institute's (EPRI) CAFTA software, which is used by almost all the US nuclear power plants and by a majority of US and international aerospace manufacturers, and the Idaho National Laboratory's SAPHIRE, which is used by the U.S. government to evaluate the safety and reliability of nuclear reactors, the Space Shuttle, and the International Space Station.


Safety certification

Usually a failure in safety-certified systems is acceptable if, on average, less than one life per 10⁹ (one billion) hours of continuous operation is lost to failure. Most Western nuclear reactors, medical equipment, and commercial aircraft are certified to this level. The cost versus loss of lives has been considered appropriate at this level (by the FAA for aircraft, under the Federal Aviation Regulations).
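The per-hour criterion translates into fleet-level expectations with simple arithmetic. The fleet size and utilization figures below are illustrative assumptions, chosen only to show the scale of the calculation.

```python
# Expected catastrophic events per year for a fleet whose system is
# certified to a 1e-9 per-hour failure rate. Fleet figures are illustrative.
FAILURE_RATE_PER_HOUR = 1e-9
fleet_size = 5000            # aircraft (illustrative)
hours_per_year_each = 3000   # utilization per aircraft (illustrative)

fleet_hours_per_year = fleet_size * hours_per_year_each   # 1.5e7 hours
expected_failures_per_year = fleet_hours_per_year * FAILURE_RATE_PER_HOUR
print(expected_failures_per_year)  # 0.015, about one event per ~67 years
```

This is why the 10⁻⁹ figure is meaningful at fleet scale even though no single airframe will ever accumulate a billion hours.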


Preventing failure

Probabilistic fault tolerance: adding redundancy to equipment and systems

A NASA graph shows the relationship between the survival of a crew of astronauts and the amount of redundant equipment in their spacecraft (the "MM", Mission Module).

Once a failure mode is identified, it can usually be mitigated by adding extra equipment to the system. For example, nuclear reactors emit dangerous radiation and contain toxic materials, and nuclear reactions can generate so much heat that no single substance can be relied upon to contain them. Therefore reactors have emergency core cooling systems to keep the temperature down, shielding to contain the radiation, and engineered barriers (usually several, nested, surmounted by a containment building) to prevent accidental leakage.


Most biological organisms have a certain amount of redundancy: multiple organs, multiple limbs, etc.


For any given failure, a fail-over or redundancy can almost always be designed and incorporated into a system.


Inherent fail-safe design

For more details on this topic, see Inherent safety.

When adding equipment is impractical (usually because of expense), the least expensive form of design is often "inherently fail-safe". The typical approach is to arrange the system so that ordinary single failures cause the mechanism to shut down in a safe way. (For nuclear power plants, this is termed a passively safe design, although more than ordinary failures are covered.)


One of the most common fail-safe systems is the overflow tube in baths and kitchen sinks. If the valve sticks open, rather than causing an overflow and damage, the tank spills into the overflow.


Another common example is that in an elevator, the cable supporting the car holds spring-loaded brakes open. If the cable breaks, the brakes grab the rails, and the elevator car does not fall.


Inherent fail-safes are common in medical equipment, traffic and railway signals, communications equipment, and safety equipment.


Containing failure

It is also common practice to plan for the failure of safety systems through containment and isolation methods. The use of isolating valves, also known as block and bleed manifolds, is very common in isolating pumps, tanks, and control valves that may fail or need routine maintenance. In addition, nearly all tanks containing oil or other hazardous chemicals are required to have containment barriers set up around them to contain 100% of the volume of the tank in the event of a catastrophic tank failure. Similarly, long pipelines have remote-closing valves installed periodically along the line, so that in the event of failure the entire pipeline's contents are not lost. The goal of all such containment systems is to limit the damage done by a failure to a small, localized area.


References

  1. ^ Radatz, Jane (28 September 1990). IEEE Standard Glossary of Software Engineering Terminology (PDF). New York, NY, USA: The Institute of Electrical and Electronics Engineers. ISBN 1-55937-067-X. Retrieved 2006-09-05.
  2. ^ Vesely, W. E.; Goldberg, F. F.; Roberts, N. H.; Haasl, D. F. (January 1981). Fault Tree Handbook (PDF). Washington, DC, USA: U.S. Nuclear Regulatory Commission. NUREG-0492, page V-3. Retrieved 2006-08-31.
  3. ^ Rasmussen, Norman C.; et al. (October 1975). Reactor Safety Study (PDF). Washington, DC, USA: U.S. Nuclear Regulatory Commission. Appendix VI, "Calculation of Reactor Accident Consequences". WASH-1400 (NUREG-75/014). Retrieved 2006-08-31.


See also

Articles

  • Lutz, Robyn R. (2000). "Software Engineering for Safety: A Roadmap" (PDF). In The Future of Software Engineering. ACM Press. ISBN 1-58113-253-0. Retrieved 2006-08-31.
  • Grunske, Lars (2005). "Specification and Evaluation of Safety Properties in a Component-based Software Engineering Process" (PDF). Springer. Retrieved 2006-08-31.
  • US DOD (10 February 2000). Standard Practice for System Safety (PDF). MIL-STD-882D. Washington, DC, USA: U.S. Department of Defense. Retrieved 2006-08-31.
  • US FAA (30 December 2000). System Safety Handbook (PDF). Washington, DC, USA: U.S. Federal Aviation Administration. Retrieved 2006-08-31.


Related concepts

  • Public safety
  • Safety engineer
  • Defence in depth
  • Life-critical system
  • Reliability engineering
  • Reliability theory
  • Human reliability
  • Risk assessment
  • Risk analysis
  • SAPHIRE
  • Security engineering
  • Redundancy (engineering)
  • Double switching
  • Workplace safety
  • DO-178B
  • DO-254
  • ARP4761
  • Hazard analysis
  • Process Safety Management

The Wikipedia article included on this page is licensed under the GFDL.