Monday, December 20, 2010
GLOSSARY OF TERMS USED IN VIDEO ENGINEERING
There are certain terms that are very important in video engineering and are widely used to describe the properties of a video signal.
Video Basics Continued
Composite/CVBS Interface
Composite signals are the most commonly used analog video interface. Composite video is also referred to as CVBS, which stands for color, video, blanking, and sync, or composite video baseband signal. It combines the brightness information (luma), the color information (chroma), and the synchronizing signals on just one cable. The connector is typically an RCA jack. This is the same connector as that used for standard line-level audio connections. A typical waveform of an all-white NTSC composite video signal is shown in Figure 5.
Figure 5. NTSC composite video waveform.
This figure depicts the portion of the signal that represents one horizontal scan line. Each line is made up of the active video portion and the horizontal blanking portion. The active video portion contains the picture brightness (luma) and color (chroma) information. The brightness information is the instantaneous amplitude at any point in time. Amplitude is measured in IRE units, an arbitrary scale in which 140 IRE = 1 Vp-p. From the figure, you can see that the voltage during the active video portion would yield a bright-white picture for this horizontal scan line, whereas the horizontal blanking portion would be displayed as black and therefore not be seen on the screen. Please refer back to Figure 1 for a pictorial explanation. Some video systems (NTSC only) use something called "setup," which places reference black at a level of 7.5 IRE, or about 54 mV above the blanking level.
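As a quick sketch of the IRE arithmetic above (140 IRE = 1 Vp-p, so one IRE unit is 1000/140 ≈ 7.14 mV):

```python
# Convert IRE units to millivolts for a standard 1 Vp-p composite
# signal, where 140 IRE spans the full 1 V range.

MV_PER_IRE = 1000.0 / 140.0  # ~7.14 mV per IRE unit

def ire_to_mv(ire):
    """Signal level in millivolts relative to blanking (0 IRE)."""
    return ire * MV_PER_IRE

print(round(ire_to_mv(7.5)))   # 54  (NTSC "setup" black level)
print(round(ire_to_mv(100)))   # 714 (reference white)
```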
Color information is added on top of the luma signal and is a sine wave with the colors identified by a specific phase difference between it and the color-burst reference phase. This can be seen in Figure 6, which shows a horizontal scan line of color bars.
Figure 6. Composite video waveform: color bars.
The amplitude of the modulation is proportional to the amount of color (or saturation), and the phase information denotes the tint (or hue) of the color. The horizontal blanking portion contains the horizontal synchronizing pulse (sync pulse) as well as the color reference (color burst) located just after the rising edge of the sync pulse (called the "back porch"). It is important to note here that the horizontal blanking portion of the signal is positioned in time such that it is not visible on the display screen.
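The amplitude/phase relationship above can be sketched numerically. This is an illustrative calculation, not any particular decoder's implementation; the function name and sample values are hypothetical.

```python
import math

# Recover saturation (modulation amplitude) and hue (phase relative
# to the color-burst reference) from the two quadrature components
# of the chroma sine wave at one sample point.

def chroma_to_saturation_hue(u, v):
    saturation = math.hypot(u, v)                    # amplitude -> saturation
    hue_deg = math.degrees(math.atan2(v, u)) % 360   # phase -> hue (tint)
    return saturation, hue_deg

# Illustrative values, not taken from a real signal:
sat, hue = chroma_to_saturation_hue(0.3, 0.3)
print(round(sat, 3), round(hue, 1))  # 0.424 45.0
```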
Y/C Interfaces
The Y/C signal is a video signal with less encoding. Brightness (luma), which is the Y signal, and color (chroma), the C signal, are carried on two separate sets of wires. The connector is a mini-DIN type that resembles a small version of a keyboard connector.
Note: The term "S-video" stands for "separate video." It is sometimes used to refer to a Y/C signal and sometimes in reference to recording formats: it was originally a recording format, as used for Sony Betamax, in which the luma was recorded separately from the chroma. The term is also commonly used to refer to the S-VHS (Super VHS) video recording format.
Component Interfaces
Component signal interfaces are the highest performance, because they have the least encoding. The signals exist in a nearly native format. They always utilize three pairs of wires that are typically in either a luma (Y) and two-color-difference-signals format or a red, green, blue (RGB) format. RGB formats are almost always used in computer applications, whereas color-difference formats are generally used in television applications. The Y signal contains the brightness (luma) and synchronizing information, and the color-difference signals contain the red (R) signal minus Y and the blue (B) signal minus Y. The theory behind this combination is that each of the base R, G, and B components can be derived from these difference signals. Common variations of these signals are as follows:
- Y, B-Y, R-Y: Luma and color-difference signals.
- Y, Pr, Pb: Pr and Pb are scaled versions of B-Y and R-Y. Commonly found in high-end consumer equipment.
- Y, Cr, Cb: Digital-signal equivalent to Y, Pr, Pb. Sometimes incorrectly used in place of Y, Pr, Pb.
- Y, U, V: Not an interface standard. These are intermediate, quadrature signals used in the formation of composite and Y/C signals. Sometimes incorrectly referred to as a "component interface."
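The derivation mentioned above can be sketched numerically. The luma weights used here are the standard-definition (BT.601) ones; the helper names are illustrative.

```python
# Form Y, B-Y, R-Y from R, G, B (BT.601 luma weights), then recover
# R, G, B — including G, which falls out of the luma equation once
# R and B are known.

KR, KG, KB = 0.299, 0.587, 0.114  # BT.601 luma weights (sum to 1)

def rgb_to_ydiff(r, g, b):
    y = KR * r + KG * g + KB * b
    return y, b - y, r - y  # Y, B-Y, R-Y

def ydiff_to_rgb(y, b_y, r_y):
    r = y + r_y
    b = y + b_y
    g = (y - KR * r - KB * b) / KG  # solve Y = KR*R + KG*G + KB*B for G
    return r, g, b

y, b_y, r_y = rgb_to_ydiff(0.5, 0.25, 0.75)
r, g, b = ydiff_to_rgb(y, b_y, r_y)
print(all(abs(x - t) < 1e-9 for x, t in zip((r, g, b), (0.5, 0.25, 0.75))))  # True
```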
Computer Signal Interfaces
Virtually all computer interfaces utilize RGB format signals. The picture information is carried separately by the three base components of red, green, and blue. Synchronizing information is typically carried as separate horizontal (H) and vertical (V) signals. The five signals, R, G, B, H, and V, are carried on one cable consisting of a shielded bundle of wires. The connector is almost always a 15-pin D-type connector. Sometimes the H and V sync information is merged with one of the RGB signals, typically the green component; this is referred to as "sync on green," but it is becoming less common. In rarer cases, the sync information is on the red or the blue signal.
Video Basics
Here I have made an attempt to provide the basics of video. The theory of video is very vast, and I have presented it in a very simple way.
It covers many of the fundamentals of analog video. It is divided into four sections: "Picture Basics" covers how a video picture is generated; "Resolution: Visual versus Format" discusses the different resolution formats and how resolution is specified and measured; "Formats and Interfaces" includes different types of video signals, waveforms, and interfaces; and the glossary at the end defines terms specific to video.
Picture Basics
A picture is "drawn" on a television or computer display screen by sweeping an electrical signal horizontally across the display one line at a time. The amplitude of this signal versus time represents the instantaneous brightness at that physical point on the display. Figure 1 shows the signal amplitude relationship to the brightness on the display.
Figure 1. Horizontal scan versus display brightness.
At the end of each line, there is a portion of the waveform (horizontal blanking interval) that tells the scanning circuit in the display to retrace to the left edge of the display and then start scanning the next line. Starting at the top, all of the lines on the display are scanned in this way. One complete set of lines makes a picture. This is called a frame. Once the first complete picture is scanned, there is another portion of the waveform (vertical blanking interval, not shown) that tells the scanning circuit to retrace to the top of the display and start scanning the next frame, or picture. This sequence is repeated at a fast enough rate so that the displayed images are perceived to have continuous motion. This is the same principle as that behind the "flip books" that you rapidly flip through to see a moving picture or cartoons that are drawn and rapidly displayed one picture at a time.
Interlaced versus Progressive Scans
These are two different types of scanning systems. They differ in the technique used to "paint" the picture on the screen. Television signals and compatible displays are typically interlaced, and computer signals and compatible displays are typically progressive (non-interlaced). These two formats are incompatible with each other; one would need to be converted to the other before any common processing could be done. Interlaced scanning is where each picture, referred to as a frame, is divided into two separate sub-pictures, referred to as fields. Two fields make up a frame. An interlaced picture is painted on the screen in two passes, by first scanning the horizontal lines of the first field, then retracing to the top of the screen and scanning the horizontal lines of the second field in between the first set. In the 525-line NTSC system, field 1 consists of lines 1 through 262 1/2, and field 2 consists of lines 262 1/2 through 525. The interlaced principle is illustrated in Figure 2. Only a few lines at the top and the bottom of each field are shown.
Figure 2. Interlaced scanning system.
A progressive, or non-interlaced, picture is painted on the screen by scanning all of the horizontal lines of the picture in one pass from the top to the bottom. This is illustrated in Figure 3.
Figure 3. Progressive (non-interlaced) scanning system.
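The two scan orders described above can be illustrated with a toy frame (real 525-line frames involve half-line details that this sketch ignores):

```python
# Toy illustration of scan order. An interlaced display paints every
# other line in one pass (field 1), retraces to the top, then paints
# the remaining lines in between (field 2); a progressive display
# paints all lines in a single pass.

LINES = 9  # toy frame height

progressive = list(range(1, LINES + 1))
field_1 = list(range(1, LINES + 1, 2))  # lines 1, 3, 5, 7, 9
field_2 = list(range(2, LINES + 1, 2))  # lines 2, 4, 6, 8
interlaced = field_1 + field_2          # two fields make one frame

print(progressive)  # [1, 2, 3, 4, 5, 6, 7, 8, 9]
print(interlaced)   # [1, 3, 5, 7, 9, 2, 4, 6, 8]
```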
Resolution: Visual versus Format
The visual resolution of a video signal or display is the amount of detail that can actually be seen. This is different from the resolution format of a signal or display. For example, in a computer application, an XGA signal has a format resolution of 1024 horizontal pixels and 768 vertical pixels (lines), which is the implied visual resolution. However, if the signal or display has any limitations that degrade the performance, it may not be possible to actually view all of this detail.
Visual Resolution in Television Systems
Visual resolution in television systems is specified in terms of a parameter called "TV lines." This parameter is typically used to indicate horizontal resolution, but the same technique can be used for vertical resolution. TV lines are determined by viewing a test pattern consisting of alternating black and white lines that are placed closer and closer together. The pair of lines with the closest spacing that can still be distinguished as separate lines determines the resolution. The number of such lines counted over a width equal to one picture height gives the TV lines of resolution. Figure 4 shows a representative picture for determining resolution.
Figure 4. Representative visual resolution test pattern.
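A small sketch of the "one picture height" scaling: a line count made across the full screen width is multiplied by the inverse aspect ratio (3/4 on a 4:3 display) to express it in TV lines. The 440-line figure below is purely illustrative.

```python
# Horizontal "TV lines" are counted over a width equal to one picture
# height, so a count across the full screen width is scaled by
# height/width (3/4 for a 4:3 display).

def tv_lines(count_across_full_width, aspect_w=4, aspect_h=3):
    return count_across_full_width * aspect_h / aspect_w

print(tv_lines(440))  # 330.0 TV lines on a 4:3 display
```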
Visual Resolution in Computer Systems
Computer resolution formats are typically specified by the visible number of pixels in the horizontal and vertical dimensions. For example, a VGA format signal has 640 visible pixels in the horizontal direction and 480 visible pixels in the vertical direction. An XGA format signal has 1024 visible pixels in the horizontal direction and 768 visible pixels in the vertical direction. In a well-designed computer system that is specified to reach a given maximum format resolution, all of the signal processing would be designed such that the visual resolution would be at least as good as the format resolution. If any circuit in the chain does not have the required performance, the visual resolution will be less than the format resolution.
Formats and Interfaces
There are many different kinds of video signals, which can be divided into either television or computer types. The format of television signals varies from country to country. In the United States and Japan, the NTSC format is used. NTSC stands for National Television Systems Committee, the name of the organization that developed the standard. In Europe, the PAL format is common. PAL (phase alternating line), developed after NTSC, is an improvement over NTSC. SECAM is used in France and stands for séquentiel couleur à mémoire (sequential color with memory). It should be noted that there are a total of about 15 different sub-formats contained within these three general formats. Each of the formats is generally not compatible with the others. Although they all utilize the same basic scanning system, they differ in specific scanning frequencies, number of scan lines, and color modulation techniques (NTSC and PAL modulate the phase of a color subcarrier, while SECAM uses frequency modulation), among other parameters. The various computer formats (such as VGA, XGA, and UXGA) also differ substantially, with the primary difference being the scan frequencies. These differences do not cause as much concern, because most computer equipment is now designed to handle variable scan rates. This compatibility is a major advantage for computer formats, in that media and content can be interchanged on a global basis.
Table 1. Typical Frequencies for Common TV and Computer Video Formats
Columns: Description; Vertical Resolution Format (visible lines per frame); Horizontal Resolution Format (visible pixels per line); Horizontal Rate (kHz); Vertical Frame Rate (Hz); Highest Frequency (MHz).
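The frequencies in a table like Table 1 follow from simple multiplication. The sketch below assumes the standard figures of 525 total lines at ~29.97 Hz for NTSC and 625 lines at 25 Hz for PAL, and the conventional 800 × 525 total VGA raster (including blanking); the VGA standard's nominal pixel clock is actually 25.175 MHz at a 59.94 Hz refresh.

```python
# Horizontal (line) rate = total lines per frame x frame rate.
# Computer pixel clock = total pixels/line x total lines x refresh rate.

def line_rate_khz(total_lines, frame_rate_hz):
    return total_lines * frame_rate_hz / 1000.0

def pixel_clock_mhz(h_total, v_total, refresh_hz):
    return h_total * v_total * refresh_hz / 1e6

print(round(line_rate_khz(525, 30000.0 / 1001.0), 3))  # NTSC: 15.734 kHz
print(round(line_rate_khz(625, 25.0), 3))              # PAL: 15.625 kHz
print(round(pixel_clock_mhz(800, 525, 60), 1))         # 25.2 MHz
```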
There are three basic levels of baseband signal interfaces. In order of increasing quality, they are composite (or CVBS), which uses one wire pair; Y/C (or S-video), which uses two wire pairs; and component, which uses three wire pairs. Each wire pair consists of a signal and a ground. These three interfaces differ in their level of information combination (or encoding). More encoding typically degrades the quality but allows the signal to be carried on fewer wires. Component has the least amount of encoding, and composite the most.