Serial Digital Interface

The Serial Digital Interface (SDI), standardized in ITU-R BT.656 and SMPTE 259M, is a digital video interface used for broadcast-grade video. A related standard, known as High Definition Serial Digital Interface (HD-SDI), is standardized in SMPTE 292M; this provides a nominal data rate of 1.485 Gbit/s. An emerging interface, commonly known in the industry as dual link HD-SDI and consisting essentially of a pair of SMPTE 292M links, is standardized in SMPTE 372M; this provides a nominal 2.970 Gbit/s interface used in applications (such as digital cinema) that require greater fidelity and resolution than standard HDTV can provide. A more recent interface, consisting of a single 2.97 Gbit/s serial link, is standardized in SMPTE 424M.


These standards are used for transmission of uncompressed, unencrypted digital video signals (optionally including embedded audio) within television facilities; they can also be used for packetized data. They are designed for operation over short distances; because of their high bit rates they are unsuitable for long-distance transmission. SDI and HD-SDI are currently only available in professional video equipment; various licensing agreements, which restrict the use of unencrypted digital interfaces to professional equipment, prohibit their use in consumer equipment. (Various mod kits exist that add a serial digital interface to DVD players and other consumer devices.)


Electrical interface

The various serial digital interface standards all use one (or more) coaxial cables with BNC connectors, with a nominal impedance of 75 ohms. This is the same type of cable used in analog video setups, which potentially makes for easier upgrades (though higher quality cables may be necessary for long runs at the higher bitrates). The specified signal amplitude at the source is 800 mV (±10%) peak-to-peak; far lower voltages may be measured at the receiver owing to attenuation. Using equalisation at the receiver, it is possible to send 270 Mbit/s SDI over 300 metres without use of repeaters, but shorter lengths are preferred. The HD bitrates have a shorter maximum run length, typically 100 meters.


Uncompressed digital component signals are transmitted. Data is encoded in NRZI format, and a linear feedback shift register is used to scramble the data, reducing the likelihood that long strings of zeroes or ones will be present on the interface. The interface is self-synchronizing and self-clocking. Framing is done by detection of a special synchronization pattern, which appears on the (unscrambled) serial digital signal as a sequence of ten ones followed by twenty zeroes (twenty ones followed by forty zeroes in HD); this bit pattern is not legal anywhere else within the data payload.
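
As an illustration of the channel coding just described, the following Python sketch scrambles a bit sequence with a self-synchronizing generator of x^9 + x^4 + 1 and then applies NRZI conversion (x + 1). These are the polynomials conventionally cited for SDI; the bit ordering and register convention used here are assumptions made for readability, not a normative implementation.

    # Minimal sketch of SDI channel coding: self-synchronizing scrambling
    # with x^9 + x^4 + 1, followed by NRZI conversion (x + 1).
    def scramble_and_nrzi(bits):
        state = 0          # 9-bit scrambler register of previously sent bits
        level = 0          # current NRZI output level
        out = []
        for b in bits:
            # Scrambled bit = input XOR the bits sent 9 and 4 positions earlier.
            s = b ^ ((state >> 8) & 1) ^ ((state >> 3) & 1)
            state = ((state << 1) | s) & 0x1FF
            # NRZI: a scrambled '1' toggles the transmitted level.
            level ^= s
            out.append(level)
        return out

    # Example usage on an arbitrary bit pattern.
    print(scramble_and_nrzi([1, 0, 0, 1, 1, 0, 1, 0, 0, 0]))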


Standards

Standard   | Name             | Bitrates                                       | Example video formats
SMPTE 259M | SD-SDI           | 270 Mbit/s, 360 Mbit/s, 143 Mbit/s, 177 Mbit/s | 480i, 576i
SMPTE 344M | —                | 540 Mbit/s                                     | 480p, 576p
SMPTE 292M | HD-SDI           | 1.485 Gbit/s and 1.485/1.001 Gbit/s            | 720p, 1080i
SMPTE 372M | Dual Link HD-SDI | 2.970 Gbit/s and 2.970/1.001 Gbit/s            | 1080p
SMPTE 424M | 3G-SDI           | 2.970 Gbit/s and 2.970/1.001 Gbit/s            | 1080p


Bit rates

Several bit rates are used in serial digital video:

  • For standard definition applications, as defined by SMPTE 259M, the possible bit rates are 270 Mbit/s, 360 Mbit/s, 143 Mbit/s, and 177 Mbit/s. 270 Mbit/s is by far the most commonly used, though the 360 Mbit/s interface (used for widescreen standard definition) is sometimes encountered; a worked derivation of the 270 Mbit/s figure appears after this list. The 143 and 177 Mbit/s interfaces were intended for digital transmission of composite-encoded (NTSC or PAL) video, and are now considered obsolete.
  • For enhanced definition applications (mainly 525p), there are several 540 Mbit/s interfaces defined, as well as an interface standard for a dual-link 270 Mbit/s interface. These are rarely encountered.
  • For HDTV applications, the serial digital interface is defined by SMPTE 292M. Two bit rates are defined, 1.485 Gbit/s, and 1.485/1.001 Gbit/s. The factor of 1/1.001 is provided to allow SMPTE 292M to support video formats with frame rates of 59.94 Hz, 29.97 Hz, and 23.98 Hz, in order to be upwards compatible with existing NTSC systems. The 1.485 Gbit/s version of the standard supports other frame rates in widespread use, including 60 Hz, 50 Hz, 30 Hz, 25 Hz, and 24 Hz. It is common to collectively refer to both standards using a nominal bit rate of 1.5 Gbit/s.
  • For very high-definition applications, requiring greater resolution, frame rate, or color fidelity than the HD-SDI interface can provide, the SMPTE 372M standard defines the dual link interface. As the name suggests, this interface consists of two SMPTE 292M interconnects operating in parallel. In particular, the dual link interface supports 10-bit, 4:2:2, 1080p formats at frame rates of 60 Hz, 59.94 Hz, and 50 Hz, as well as 12-bit color depth, RGB encoding, and 4:4:4 color sampling.
  • A nominal 3 Gbit/s interface (more accurately 2.97 Gbit/s, but commonly referred to as "3 gig") is standardized in SMPTE 424M; as of June 2006, chipsets for this interface were just becoming available. It is intended to support all of the features of the dual 1.485 Gbit/s interface, but requires only one cable rather than two.
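
As promised above, the 270 Mbit/s figure can be derived from the 525-line raster parameters of ITU-R BT.601 (858 total samples per line, 525 lines per frame, a 30000/1001 Hz frame rate, 10-bit words, 4:2:2 sampling); the short Python calculation below is a sketch of that arithmetic.

    # Worked derivation of the 270 Mbit/s SD-SDI rate from the 525-line raster.
    samples_per_line = 858            # total (active + blanking) luma samples per line
    lines_per_frame  = 525
    frame_rate       = 30000 / 1001   # ~29.97 Hz
    luma_rate = samples_per_line * lines_per_frame * frame_rate   # 13.5 MHz
    word_rate = luma_rate * 2         # 4:2:2: one chroma word per luma word
    bit_rate  = word_rate * 10        # 10 bits per word
    print(round(bit_rate / 1e6, 3))   # 270.0 Mbit/s

The 625-line raster (864 total samples per line, 625 lines, 25 Hz) yields the same 13.5 MHz luma rate and therefore the same 270 Mbit/s result.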


Other interfaces

SMPTE 292M defines an optical interface as well as an electrical one; this interface is widely considered to be obsolete. An 8-bit parallel digital interface is defined by CCIR 601; this is also obsolete (however, many clauses in the various standards accommodate the possibility of an 8-bit interface).


Data format

In SD and ED applications, the parallel data format is defined to be 10 bits wide, whereas in HD applications, it is 20 bits wide, divided into two parallel 10-bit datastreams (known as Y and C). The SD datastream is arranged like this:

Cb Y Cr Y' Cb Y Cr Y'

whereas the HD datastreams are arranged like this:

Y:  Y  Y'  Y  Y'  Y  Y'  Y  Y'
C:  Cb Cr  Cb Cr  Cb Cr  Cb Cr

For all serial digital interfaces (excluding the obsolete composite encodings), the native color encoding is 4:2:2 YCbCr format. The luminance channel (Y) is encoded at full bandwidth (13.5 MHz in 270 Mbit/s SD, 74.25 MHz in HD), and the two chrominance channels (Cb and Cr) are subsampled horizontally and encoded at half bandwidth (6.75 MHz or 37.125 MHz). The Y, Cr, and Cb samples are co-sited (acquired at the same instant in time), and the Y' sample is acquired at the instant halfway between two adjacent Y samples.


In the above, Y refers to luminance samples, and C to chrominance samples. Cr and Cb further refer to the red and blue "color difference" channels; see Component Video for more information. This section only discusses the native color encoding of SDI; other color encodings are possible by treating the interface as a generic 10-bit data channel. The use of other colorimetry encodings, and the conversion to and from RGB colorspace, is discussed below.


Video payload (as well as ancillary data payload) may use any 10-bit word in the range 4 to 1019 (004 to 3FB in hexadecimal) inclusive; the values 0-3 (000-003) and 1020-1023 (3FC-3FF) are reserved and may not appear anywhere in the payload. These reserved words have two purposes: they are used both for synchronization packets and for ancillary data headers.
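
A trivial check of this rule could look like the following fragment (the function name is illustrative):

    # Payload words must lie in the range 4..1019; the reserved values are
    # kept for TRS and ancillary data headers as described above.
    def is_legal_payload_word(word):
        return 4 <= word <= 1019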


Synchronization packets

A synchronization packet (commonly known as the timing reference signal or TRS) occurs immediately before the first active sample on every line, and immediately after the last active sample (and before the start of the horizontal blanking region). The synchronization packet consists of four 10-bit words. The first three words are always the same--0x3FF, 0, 0; the fourth consists of three flag bits, along with an error correcting code. As a result, there are eight different synchronization packets possible.


In the HD-SDI and dual link interfaces, synchronization packets must occur simultaneously in both the Y and C datastreams. (Some delay between the two cables in a dual link interface is permissible; equipment which supports dual link is expected to buffer the leading link in order to allow the other link to catch up). In SD-SDI and enhanced definition interfaces, there is only one datastream, and thus only one synchronization packet at a time. Other than the issue of how many packets appear, their format is the same in all versions of the serial-digital interface.


The flag bits found in the fourth word (commonly known as the XYZ word) are known as H, F, and V. The H bit indicates the start of horizontal blank; the synchronization packet immediately preceding the horizontal blanking region must have H set to one. Such packets are commonly referred to as End of Active Video, or EAV packets. Likewise, the packet appearing immediately before the start of the active video has H set to 0; this is the Start of Active Video or SAV packet.


Similarly, the V bit is used to indicate the start of the vertical blanking region; an EAV packet with V=1 indicates that the following line (lines are deemed to start at EAV) is part of the vertical interval, whereas an EAV packet with V=0 indicates that the following line is part of the active picture.


The F bit is used in interlaced and segmented-frame formats to indicate whether the line comes from the first or second field (or segment). In progressive scan formats, the F bit is always set to zero.
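
To make the packet structure concrete, the Python sketch below builds a TRS and its XYZ word, assuming the protection-bit layout given in ITU-R BT.656 for 10-bit interfaces (bit 9 fixed to one, F, V, and H in bits 8-6, four parity bits in bits 5-2, and bits 1-0 zero). It is an illustration, not a reference implementation.

    # Sketch of TRS construction; protection-bit layout assumed from ITU-R BT.656.
    def xyz_word(f, v, h):
        p3, p2, p1, p0 = v ^ h, f ^ h, f ^ v, f ^ v ^ h   # parity over the flags
        return ((1 << 9) | (f << 8) | (v << 7) | (h << 6)
                | (p3 << 5) | (p2 << 4) | (p1 << 3) | (p0 << 2))

    def trs(f, v, h):
        # A synchronization packet: 0x3FF, 0x000, 0x000, then the XYZ word.
        return [0x3FF, 0x000, 0x000, xyz_word(f, v, h)]

    # EAV on an active-picture line of the first field.
    print([hex(w) for w in trs(f=0, v=0, h=1)])   # ['0x3ff', '0x0', '0x0', '0x274']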


Line counter and CRC

In the high definition serial digital interface (and in dual-link HD), additional check words are provided to increase the robustness of the interface. In these formats, the four samples immediately following the EAV packets (but not the SAV packets) contain a cyclic redundancy check field, and a line count indicator. The CRC field provides a CRC of the preceding line (CRCs are computed independently for the Y and C streams), and can be used to detect bit errors in the interface. The line count field indicates the line number of the current line.
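
A bit-serial CRC of this kind can be sketched as follows. The generator polynomial x^18 + x^5 + x^4 + 1 and a zero initial value are the parameters usually quoted for SMPTE 292M, but the exact span of words covered per line and the standard's bit-ordering convention are not reproduced here, so this is an illustration of the technique rather than a conformant implementation.

    # Generic bit-serial CRC-18 over 10-bit words, polynomial x^18 + x^5 + x^4 + 1.
    # The exact word span and bit ordering used by SMPTE 292M are not reproduced here.
    CRC18_POLY = (1 << 5) | (1 << 4) | 1      # low-order taps; x^18 is the register width

    def crc18(words, crc=0):
        for w in words:
            for i in range(10):               # take each 10-bit word LSB first
                feedback = ((crc >> 17) ^ (w >> i)) & 1
                crc = (crc << 1) & 0x3FFFF
                if feedback:
                    crc ^= CRC18_POLY
        return crc

    print(hex(crc18([0x3FF, 0x000, 0x000, 0x274])))   # example over a short word list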


The CRC and line counts are not provided in the SD and ED interfaces. Instead, a special ancillary data packet known as an EDH packet may be optionally used to provide a CRC check on the data.


Line and sample numbering

Each sample within a given datastream is assigned a unique line and sample number. In all formats, the first sample immediately following the SAV packet is assigned sample number 0; the next sample is sample 1; all the way up to the XYZ word in the following SAV packet. In SD interfaces, where there is only one datastream, the 0th sample is a Cb sample; the 1st sample a Y sample, the 2nd sample a Cr sample, and the third sample is the Y' sample; the pattern repeats from there. In HD interfaces, each datastream has its own sample numbering--so the 0th sample of the Y datastream is the Y sample, the next sample the Y' sample, etc. Likewise, the first sample in the C datastream is Cb, followed by Cr, followed by Cb again.


Lines are numbered sequentially, starting from 1, up to the number of lines per frame of the indicated format (typically 525, 625, 750, or 1125). Determination of line 1 is somewhat arbitrary; however, it is unambiguously specified by the relevant standards. In 525-line systems, the first line of vertical blank is line 1, whereas in other interlaced systems (625 and 1125-line), the first line after the F bit transitions to zero is line 1.


Note that lines are deemed to start at EAV, whereas sample zero is the sample following SAV. This produces the somewhat confusing result that the first sample in a given line of 1080i video is sample number 1920 (the first EAV sample in that format), and the line ends at the following sample 1919 (the last active sample in that format). This behavior differs somewhat from analog video interfaces, where the line transition is deemed to occur at the sync pulse, which occurs roughly halfway through the horizontal blanking region.


Link numbering

Link numbering is only an issue in dual-link interfaces. The first (primary) link is assigned a link number of 1; subsequent links are assigned increasing link numbers, so the second (secondary) link in a dual-link system is link 2. The link number of a given interface is indicated by a VPID packet located in the vertical ancillary data space.


Note that the data layout in dual link is designed so that the primary link can be fed into a single-link interface and still produce usable (though somewhat degraded) video. The secondary link generally contains things like additional LSBs (in 12-bit formats), non-cosited samples in 4:4:4 sampled video (so that the primary link is still valid 4:2:2), and alpha or data channels. If the second link of a 1080p dual link configuration is absent, the first link still contains a valid 1080i signal.


In the case of 1080p60, 59.94, or 50 Hz video over a dual link, each link contains a valid 1080i signal at the same field rate. The first link contains the 1st, 3rd, 5th, etc. lines of odd fields and the 2nd, 4th, 6th, etc. lines of even fields, and the second link contains the even lines of the odd fields and the odd lines of the even fields. When the two links are combined, the result is a progressive-scan picture at the higher frame rate.
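
Following the prose description above (and not the normative mapping tables of SMPTE 372M), the distribution of lines between the two links can be sketched as:

    # Sketch of the dual-link 1080p line distribution described above:
    # within each interlace-style field, alternate lines go to alternate links.
    def link_for_line(line_in_field, field):
        odd_line  = (line_in_field % 2 == 1)
        odd_field = (field == 1)
        return 1 if odd_line == odd_field else 2

    # Field 1: lines 1, 3, 5, ... on link 1; field 2: lines 2, 4, 6, ... on link 1.
    print([link_for_line(n, 1) for n in range(1, 7)])   # [1, 2, 1, 2, 1, 2]
    print([link_for_line(n, 2) for n in range(1, 7)])   # [2, 1, 2, 1, 2, 1]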


Ancillary data

Main article: Ancillary data

Like SMPTE 259M, SMPTE 292M supports the SMPTE 291M standard for ancillary data. Ancillary data is provided as a standardized transport for non-video payload within a serial digital signal; it is used for things such as embedded audio, closed captions, timecode, and other sorts of metadata. Ancillary data is indicated by a 3-word packet consisting of 0, 3FF, 3FF (the opposite of the synchronization packet header), followed by a two-word identification code, a data count word (indicating 0 - 255 words of payload), the actual payload, and a one-word checksum. Other than in their use in the header, the codes prohibited to video payload are also prohibited to ancillary data payload.
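
The packet layout just described can be sketched in Python as follows. The parity bits on the header words (bit 8 carrying even parity over bits 0-7, bit 9 its complement) and the checksum layout (a nine-bit sum with bit 9 set to the inverse of bit 8) are assumptions based on SMPTE 291M rather than quotations from it.

    # Sketch of an ancillary data packet following the description above.
    # Header-word parity and checksum layout are assumptions based on SMPTE 291M.
    def anc_header_word(value):
        p = bin(value & 0xFF).count("1") & 1          # even parity over bits 0-7
        return (value & 0xFF) | (p << 8) | ((p ^ 1) << 9)

    def anc_packet(did, sdid, payload):
        words = [anc_header_word(did), anc_header_word(sdid),
                 anc_header_word(len(payload))] + list(payload)
        s = sum(w & 0x1FF for w in words) & 0x1FF     # nine-bit sum
        checksum = s | ((((s >> 8) & 1) ^ 1) << 9)    # bit 9 = NOT bit 8
        return [0x000, 0x3FF, 0x3FF] + words + [checksum]

    # Example: a packet with a four-word payload (DID/SDID values are arbitrary here).
    print([hex(w) for w in anc_packet(0x41, 0x01, [0x100, 0x200, 0x150, 0x2AA])])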


Specific applications of ancillary data include embedded audio, EDH, VPID and SDTI.


In dual link applications, ancillary data is mostly found on the primary link; the secondary link is to be used for ancillary data only if there is no room on the primary link. One exception to this rule is the VPID packet; both links must have a valid VPID packet present.


Embedded audio

Both the HD and SD serial interfaces provide for 16 channels of embedded audio. The two interfaces use different audio encapsulation methods--SD uses the SMPTE 272M standard, whereas HD uses the SMPTE 299M standard. In either case, an SDI signal may carry up to sixteen embedded audio channels (eight pairs) along with the video. Typically, 48 kHz, 24-bit (20-bit in SD) PCM audio is stored, in a manner directly compatible with the AES3 digital audio interface. These are placed in the (horizontal) blanking periods, when the SDI signal carries nothing useful, since the receiver generates its own blanking signals from the TRS.


In dual-link applications, 32 channels of audio are available, as each link may carry 16 channels.


EDH

As the standard definition interface carries no checksum, CRC, or other data integrity check, an EDH (Error Detection and Handling) packet may optionally be placed in the vertical interval of the video signal. This packet includes CRC values for both the active picture and the entire field (excluding those lines at which switching may occur, and which should contain no useful data); equipment can compute its own CRC and compare it with the received CRC in order to detect errors.


EDH is typically only used with the standard definition interface; the presence of CRC words in the HD interface makes EDH packets unnecessary.


VPID

VPID (or video payload identifier) packets are increasingly used to describe the video format. In early versions of the serial digital interface, it was always possible to uniquely determine the video format by counting the number of lines and samples between H and V transitions in the TRS. With the introduction of dual link interfaces, and segmented-frame standards, this is no longer possible; thus the VPID standard (defined by SMPTE 352M) provides a way to uniquely and unambiguously identify the format of the video payload.


Video payload and blanking

The active portion of the video signal is defined to be those samples which follow a SAV packet and precede the next EAV packet, where the corresponding EAV and SAV packets have the V bit set to zero. It is in the active portion that the actual image information is stored.


Color encoding

Several color encodings are possible in the serial digital interface. The default (and most common case) is 10-bit linearly sampled video data encoded as 4:2:2 YCbCr. (YCbCr is a digital representation of the YPbPr colorspace). Samples of video are stored as described above. Data words correspond to signal levels of the respective video components, as follows:

  • The luminance (Y) channel is defined such that a signal level of 0 mV is assigned the code word 64 (40 hex), and 700 millivolts (full scale) is assigned the code word 940 (3AC hex).
  • For the chroma channels, 0 mV is assigned the code word 512 (200 hex), -350 mV is assigned a code word of 64 (40 hex), and +350 mV is assigned a code word of 960 (3C0 hex).

Note that the scaling of the luma and chroma channels is not identical. The minimum and maximum of these ranges represent the preferred signal limits, though the video payload may venture outside these ranges (providing that the reserved code words of 0 - 3 and 1020 - 1023 are never used for video payload). In addition, the corresponding analog signal may have excursions further outside of this range.
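
The two scalings can be made explicit with a short conversion sketch; the function names are illustrative, and rounding behaviour at the extremes is not addressed.

    # Illustrative mapping from signal level (mV) to 10-bit code words,
    # using the end points given above; function names are hypothetical.
    def y_codeword(mv):
        return round(64 + mv * (940 - 64) / 700)      # 0 mV -> 64, 700 mV -> 940

    def c_codeword(mv):
        return round(512 + mv * (960 - 64) / 700)     # -350 mV -> 64, +350 mV -> 960

    print(y_codeword(700), c_codeword(-350), c_codeword(350))   # 940 64 960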


Colorimetry

As YPbPr (and YCbCr) are both derived from the RGB colorspace, a means of conversion is required. There are three colorimetries typically used with digital video:

  • SD and ED applications typically use a colorimetry matrix specified in CCIR 601.
  • Most HD, dual link, and 3 Gbit/s applications use a different matrix, specified in CCIR 709.
  • The 1035-line HD standards specified by SMPTE 260M (primarily used in Japan and now largely considered obsolete) used a colorimetry matrix specified by SMPTE 240M. This colorimetry is nowadays rarely used, as the 1035-line formats have been superseded by 1080-line formats.


Other color encodings

The dual-link and 3 Gbit/s interfaces additionally support other color encodings besides 4:2:2 YCbCr, namely:

  • 4:2:2 and 4:4:4 YCbCr, with an optional alpha (used for color keying) or data (used for non-video payload) channel
  • 4:4:4 RGB, also with an optional alpha or data channel
  • 4:2:2 YCbCr, 4:4:4 YCbCr, and 4:4:4 RGB, with 12 bits of color information per sample, rather than 10. Note that the interface itself is still 10 bit; the additional 2 bits per channel are multiplexed into an additional 10-bit channel on the second link.

If an RGB encoding is used, the three primaries are all encoded in the same fashion as the Y channel; a value of 64 (40 hex) corresponds to 0 mV, and 940 (3AC hex) corresponds to 700 mV.


12-bit applications are scaled in a similar fashion to their 10-bit counterparts; the additional two bits are considered to be LSBs.
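
As a simple illustration of that point, a 12-bit sample can be viewed as a 10-bit word for the primary link plus two LSBs for the secondary link; exactly how those LSBs are packed into the extra channel is defined by SMPTE 372M and is not reproduced here.

    # Illustrative split of a 12-bit sample into a 10-bit word plus two LSBs.
    # The actual packing of the LSB channel on the second link is defined by
    # SMPTE 372M and is not shown here.
    def split_12bit(sample):
        return (sample >> 2) & 0x3FF, sample & 0x3

    print(split_12bit(0xABC))   # (687, 0): the 10 MSBs (0x2AF) and the two LSBs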


Vertical and horizontal blanking regions

For portions of the vertical and horizontal blanking regions which are not used for ancillary data, it is recommended that the luma samples be assigned the code word 64 (40 hex), and the chroma samples be assigned 512 (200 hex), both of which correspond to 0 mV. It is permissible to encode analog vertical interval information (such as vertical interval timecode or vertical interval test signals) without breaking the interface, but such usage is nonstandard (and ancillary data is the preferred means for transmitting metadata). Conversion of analog sync and burst signals into digital form, however, is not recommended, nor is it necessary in the digital interface.


Supported video formats

The various versions of the serial digital interface support numerous video formats.

  • The 270 Mbit/s interface supports 525-line interlaced video at a 59.94 Hz field rate (29.97 Hz frame rate), and 625-line, 50 Hz interlaced video. These formats are highly compatible with NTSC and PAL respectively, and the terms NTSC and PAL are often (incorrectly) used to refer to them. (NTSC and PAL are composite color encoding schemes; the serial digital interface--other than the obsolete 143 Mbit/s and 177 Mbit/s forms--is a component standard.)
  • The 360 Mbit/s interface supports 525i and 625i widescreen. It can also carry 525p if 4:2:0 sampling is used.
  • The various 540 Mbit/s interfaces support 525p and 625p formats.
  • The nominal 1.5 Gbit/s interfaces support most high definition formats. Supported formats include 1080i60, 1080i59.94, 1080i50, 1080p30, 1080p29.97, 1080p25, 1080p24, 1080p23.98, 720p60, 720p59.94, and 720p50. In addition, there are several 1035i formats (an obsolete Japanese television standard), half-bandwidth 720p standards such as 720p24 (used in some film conversion applications, and unusual because it has an odd number of samples per line), and various 1080psf (progressive, segmented frame) formats. Progressive segmented-frame formats appear as interlaced video but contain video which is progressively scanned. This is done to support analog monitors and televisions, many of which are incapable of locking to low field rates such as 30 Hz and 24 Hz.
  • The dual link HD interface supports 1080p60, 1080p59.94, and 1080p50, as well as 4:4:4 encoding, greater color depth, RGB encoding, alpha channels, and nonstandard resolutions (often encountered in computer graphics or digital cinema).


Related interfaces

In addition to the regular serial digital interface described here, there are several related interfaces which are similar to, or are contained within, a serial digital interface.


SDTI

There is an expanded specification called SDTI (Serial Data Transport Interface), which allows compressed (e.g. DV, MPEG and others) video streams to be transported over an SDI line. This allows for multiple video streams in one cable or faster-than-realtime (2x, 4x, ...) video transmission. A related standard, known as HD-SDTI, provides similar capability over a SMPTE 292M interface.


The SDTI interface is specified by SMPTE 305M. The HD-SDTI interface is specified by SMPTE 348M.


SMPTE 349M

The standard SMPTE 349M, Transport of Alternate Source Image Formats through SMPTE 292M, specifies a means to encapsulate non-standard and lower-bitrate video formats within an HD-SDI interface. This standard allows, for example, several independent standard definition video signals to be multiplexed onto an HD-SDI interface and transmitted down one wire. The standard does not merely adjust EAV and SAV timing to meet the requirements of the lower-bitrate formats; instead, it provides a means by which an entire SDI format (including synchronization words, ancillary data, and video payload) can be encapsulated and transmitted as ordinary data payload within a 292M stream.


G.703

Main article: G.703

The G.703 standard is another high-speed digital interface, originally designed for telephony.


External links

  • Society of Motion Picture and Television Engineers - Home page
  • Standards of SMPTE
  • Society of Motion Picture and Television Engineers: SMPTE 274M-2005: Image Sample Structure, Digital Representation and Digital Timing Reference Sequences for Multiple Picture Rates
  • Society of Motion Picture and Television Engineers: SMPTE 292M-1998: Bit-Serial Digital Interface for High Definition Television
  • Society of Motion Picture and Television Engineers: SMPTE 291M-1998: Ancillary Data Packet and Space Formatting
  • Society of Motion Picture and Television Engineers: SMPTE 372M-2002: Dual Link 292M Interface for 1920 x 1080 Picture Raster
