Beta testing

Software testing is a process used to identify the correctness, completeness and quality of developed computer software. In fact, testing can never establish the correctness of computer software, as this can only be done by formal verification. Testing can only find defects, not prove that there are none. A number of different testing approaches are used, ranging from the most informal ad hoc testing to formally specified and controlled methods such as automated testing.

The quality of the application can and normally does vary widely from system to system, but some of the common quality attributes include reliability, stability, portability, maintainability and usability. Refer to the ISO 9126 standard for a more complete list of attributes and criteria.



In general, software engineers distinguish between software faults and software failures. In the case of a failure, the software does not do what the user expects. A fault is a programming error that may or may not actually manifest as a failure; it can also be described as an error in the semantics of a computer program. A fault becomes a failure only if the exact computation conditions are met, one of them being that the faulty portion of the software actually executes on the CPU. A fault can also turn into a failure when the software is ported to a different hardware platform or a different compiler, or when the software gets extended.
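The distinction can be illustrated with a small sketch (the helper and its bug are hypothetical): the fault is present in the code from day one, but no failure is observed until the faulty path actually executes.

```python
def mean(values):
    # Fault: no guard for an empty sequence. Every call with data works,
    # so the fault stays dormant; the first empty input turns it into a
    # visible failure (a ZeroDivisionError).
    return sum(values) / len(values)

print(mean([2, 4, 6]))  # 4.0 -- fault present, but no failure observed
# mean([])              # failure: the exact computation conditions are met
```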

Software testing may be viewed as a sub-field of software quality assurance (SQA), but typically exists independently (and there may be no SQA areas in some companies). In SQA, software process specialists and auditors take a broader view of software and its development. They examine and change the software engineering process itself to reduce the number of faults that end up in the code, or to deliver faster.

Regardless of the methods used or the level of formality involved, the desired result of testing is a level of confidence in the software, so that the developers are confident the software has an acceptable defect rate. What constitutes an acceptable defect rate depends on the nature of the software. An arcade video game designed to simulate flying an airplane would presumably have a much higher tolerance for defects than software used to control an actual airliner.

Defect rates are measured against function points in the software under test. Function points are a measure of software complexity, while lines of code are merely a measure of size. Every block of code, whether a single statement or a sequence connected by loops and conditional constructs, can be a single function point, and complex expressions contain function points at each conjunction operator ('and', 'or', etc.).
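A defect rate measured this way is simply defects found per function point; a minimal sketch (the function-point count itself would come from a separate complexity analysis, and is just an input here):

```python
def defect_rate(defects_found: int, function_points: int) -> float:
    """Return defects per function point for the software under test."""
    if function_points <= 0:
        raise ValueError("function_points must be positive")
    return defects_found / function_points

# Illustrative numbers only: 12 defects against 480 function points.
print(defect_rate(12, 480))  # 0.025 defects per function point
```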

A problem with software testing is that the number of defects in a software product can be very large, and the number of configurations of the product larger still. Bugs that occur infrequently are difficult to find in testing. A rule of thumb is that a system expected to function without faults for a certain length of time must already have been tested for at least that length of time. This has severe consequences for projects that aim to write long-lived, reliable software.

A common practice is for software testing to be performed by an independent group of testers after the software product is finished and before it is shipped to the customer. This practice often results in the testing phase being used as a project buffer to compensate for project delays. Another practice is to start software testing at the same moment the project starts and to continue it as a continuous process until the project finishes.

Another common practice is for test suites to be developed during technical support escalation procedures. Such tests are then maintained in regression testing suites to ensure that future updates to the software don't repeat any of the known mistakes.
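Such a regression test might look like the following sketch, assuming a hypothetical `parse_version()` helper whose crash on a trailing dot was reported through support; keeping the test in the suite stops future updates from reintroducing the known mistake:

```python
def parse_version(text: str) -> tuple:
    """Parse 'X.Y.Z' into a tuple of ints, tolerating a trailing dot."""
    return tuple(int(part) for part in text.strip(".").split("."))

def test_trailing_dot_regression():
    # Known past failure, found via support escalation:
    # "1.2." used to raise ValueError before the fix.
    assert parse_version("1.2.") == (1, 2)

test_trailing_dot_regression()
```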

It is commonly believed that the earlier a defect is found the cheaper it is to fix it.

In counterpoint, some emerging software disciplines, such as extreme programming and the agile development movement, adhere to a "test-driven software development" model. In this process, unit tests are written first by the programmers (often with pair programming in the extreme programming methodology). Of course, these tests fail initially, as they are expected to. Then, as code is written, it passes incrementally larger portions of the test suites. The test suites are continuously updated as new failure conditions and corner cases are discovered, and they are integrated with any regression tests that are developed.
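The test-first cycle can be sketched as follows (all names are illustrative): the tests below are written before the code they exercise, fail at first because `slugify` does not yet exist, and the implementation is then written just to make them pass.

```python
def test_spaces_become_hyphens():
    assert slugify("Hello World") == "hello-world"

def test_empty_string():
    # Corner case added later, when it was discovered; the suite grows
    # alongside the code and doubles as a regression suite.
    assert slugify("") == ""

# The implementation written to satisfy the tests:
def slugify(title: str) -> str:
    return "-".join(title.lower().split())

test_spaces_become_hyphens()
test_empty_string()
```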

Unit tests are maintained along with the rest of the software source code and generally integrated into the build process (with inherently interactive tests being relegated to a partially manual build acceptance process).

The software, tools, samples of data input and output, and configurations are all referred to collectively as a test harness.

Alpha testing

In software development, testing is usually required before release to the general public. In-house developers often test the software in what is known as 'alpha' testing, which is often performed under a debugger or with hardware-assisted debugging to catch bugs quickly.

It can then be handed over to testing staff for additional inspection in an environment similar to how it was intended to be used. This technique is known as black box testing. This is often known as the second stage of alpha testing.

Beta testing

Following that, limited public tests known as beta versions are often released to groups of people so that further testing can ensure the product has few faults or bugs. Sometimes, beta versions are made available to the general public to maximize feedback from as many future users as possible.

Gamma testing

This is an informal phrase that refers derisively to the release of "buggy" (defect-ridden) products. It is not technically a term of art among testers, but rather an example of referential humor.

Some cynics refer to all software releases as "gamma testing", since defects are eventually found in almost all commercial, commodity and publicly available software. (Some classes of embedded and highly specialized process control software are tested far more thoroughly and subjected to other forms of rigorous software quality assurance, particularly those that control "life-critical" equipment where a failure can result in injury or death; see Ivars Peterson's Fatal Defect for counterexamples.)

White Box vs Black Box

In the terminology of testing professionals (software and some hardware), the phrases "white box" and "black box" testing refer to whether the test case developer has access to the source code of the software under test, and to whether the testing is done through (simulated) user interfaces or through application programming interfaces, either exposed (published) by the target or internal to it.

In white box testing, the test developer has access to the source code and can write code that links into the libraries which are linked into the target software. This is typical of unit tests, which test only parts of a software system. They ensure that components used in the construction are functional and robust to some degree.

In black box testing, the test engineer only accesses the software through the same interfaces that the customer or user would, or possibly through remotely controllable automation interfaces that connect another computer or another process to the target of the test. For example, a test harness might push virtual keystrokes and mouse or other pointer operations into a program through an inter-process communication mechanism, with the assurance that these events are routed through the same code paths as real keystrokes and mouse clicks.
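A minimal black box sketch: the harness drives the target only through its public interface (here a command-line program, with a Python one-liner standing in for a real application binary) and checks only its externally visible output.

```python
import subprocess
import sys

# Run the "application" as a separate process and feed it input through
# its normal interface, exactly as a user (or pipeline) would.
result = subprocess.run(
    [sys.executable, "-c", "print(input().upper())"],
    input="hello\n",
    capture_output=True,
    text=True,
)

# Only observable behaviour is checked; no internals are linked against.
assert result.stdout.strip() == "HELLO"
```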

Where "alpha" and "beta" refer to stages before release (and also implicitly to the size of the testing community and the constraints on the testing methods), white box and black box refer to the ways in which the tester accesses the target.

Beta testing is generally constrained to black box techniques (though a core of test engineers is likely to continue with white box testing in parallel to the beta tests). Thus the term "beta test" can refer to the stage of the software (closer to release than being "in alpha"), or it can refer to the particular group and process at work during that stage. So a tester might continue to work in white box testing while the software is "in beta" (a stage), but he or she would then not be part of "the beta test" (the group and activity).

Code Coverage

In contrast, code coverage is inherently a white box testing activity. The target software is built with special options or libraries and/or run under a special environment, so that every function exercised (executed) in the program is mapped back to the function points in the source code. This process allows developers and quality assurance personnel to look for parts of a system that are rarely or never accessed under normal conditions (error handling and the like) and helps reassure test engineers that the most important conditions (function points) have been tested.
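The idea can be sketched with the interpreter's trace hook (real coverage tools such as coverage.py are far more capable; this toy only records which lines of one function execute):

```python
import sys

executed = set()  # line numbers of `classify` that actually ran

def tracer(frame, event, arg):
    # Record a line event for every line executed inside `classify`.
    if event == "line" and frame.f_code.co_name == "classify":
        executed.add(frame.f_lineno)
    return tracer

def classify(n):
    if n < 0:
        return "negative"   # this branch is never reached below
    return "non-negative"

sys.settrace(tracer)
classify(5)
sys.settrace(None)

# Lines inside the untested `n < 0` branch are absent from `executed`,
# pointing the test engineer at the missing negative-input test case.
```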

Test engineers can look at code coverage test results to help them devise test cases and input or configuration sets that will increase the code coverage over vital functions.

Generally code coverage tools and libraries exact a performance and/or memory or other resource cost which is unacceptable to normal operations of the software. Thus they are only used in the lab. As one might expect there are classes of software that cannot be feasibly subjected to these coverage tests, though a degree of coverage mapping can be approximated through analysis rather than direct testing.

There are also some sorts of defects which are affected by such tools. In particular, some race conditions or similarly real-time-sensitive operations are impossible to detect while running under code coverage environments; conversely, some of these defects are only triggered as a result of the additional overhead of the testing code.

Quis custodiet ipsos custodes?

One principle in software testing is best summed up by the classical Latin question posed by Juvenal: Quis custodiet ipsos custodes? (Who watches the watchmen?), or is alternatively referred to informally as the "Heisenbug" concept. Heisenberg's uncertainty principle makes it clear that any form of observation is also an interaction: the act of testing can also affect that which is being tested.

In practical terms, the test engineer is testing software (and sometimes hardware or firmware) with other software (and hardware and firmware). The tools can have their own defects, and the process can fail in ways that are not the result of defects in the target but rather artifacts of the harness.

See also

Software testing activities


  • "An effective way to test code is to exercise it at its natural boundaries" -- Brian Kernighan


The Wikipedia article included on this page is licensed under the GFDL.
Images may be subject to relevant owners' copyright.
All other elements are (c) copyright NationMaster.com 2003-5. All Rights Reserved.
Usage implies agreement with terms.