Friday, January 21, 2011

What is gamma?

[Figure: Gamma, output level vs. input level]



Gamma is a numerical value that describes the relationship between the input and output signals of a camera or monitor, as illustrated in the graph above. Generally, the closer a gamma curve is to a straight line (gamma = 1), the more natural the image appears to the human eye. Because monitors have a gamma curve that dips below that straight line, cameras are usually set up with a reverse gamma so that the combined curve of the two is close to a straight line. The lower area of the curve affects dark colors, the higher area determines the colors close to white, and the middle area represents the colors in between. The PMW-EX1 is preset with several gamma curves (STANDARD 1-4 and CINE 1-4) that can be selected to obtain the desired color balance and contrast. These presets can be modified with the Gamma function to obtain customized color reproduction. In addition, the Black Gamma function lets you apply gamma-curve adjustments only to the darker sections of the image. These gamma adjustments are an effective way to widen the unit's expressive range.
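
As a rough numeric sketch of how the two curves cancel, assuming a typical display gamma of 2.2 (the EX1 presets are not published as a single exponent, so the value here is purely illustrative):

```python
import numpy as np

signal = np.linspace(0.0, 1.0, 11)       # normalized scene brightness

camera_out = signal ** (1 / 2.2)         # camera's "reverse" gamma encoding
monitor_out = camera_out ** 2.2          # monitor's native gamma curve

# The cascade of the two curves is a straight line (gamma = 1):
print(np.allclose(monitor_out, signal))  # True
```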

knee

KNEE: A function that allows contrast to be reproduced in the bright areas of objects within the video signal's dynamic range (the range of the luminance that can be processed).
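
As a sketch of the idea only (not Sony's actual processing), the following hypothetical function compresses levels above an assumed knee point by a gentler slope, so that highlight detail still fits within range:

```python
import numpy as np

def apply_knee(y, knee_point=0.8, slope=0.25):
    """Compress video levels above knee_point by the given slope."""
    y = np.asarray(y, dtype=float)
    out = y.copy()
    over = y > knee_point
    out[over] = knee_point + (y[over] - knee_point) * slope
    return out

# A highlight 40% over nominal white is folded back below clipping:
print(apply_knee([0.5, 0.9, 1.4]))   # [0.5, 0.825, 0.95]
```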

understanding the terms shadows, midtones and highlights

It's important to know what shadows, midtones and highlights are in photographic terms when it comes to using Photoshop. These three elements are present in every photograph, color or black and white, so it's important to understand how to improve and adjust them. Even if it sounds daunting, read on to find out more about these factors.
Highlights are the lightest areas of an image, the parts with the most light hitting them. If an image has too many highlights, we may say that it is overexposed, and those areas will lack detail. Midtones are the middle tones of an image, the colors in between. For example, in a black-and-white image, the midtone would be gray, somewhere between the two extremes. You want a good amount of midtone in a balanced image, but at the same time you don't want everything to look 'gray' or flat. Shadows are the darkest areas of a photograph. A shadow is also devoid of color; it could be the black in a photograph or just the areas that carry little light. An image with too many shadows may be underexposed and will not show much detail, although this can normally be adjusted.
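
To make the three ranges concrete, here is a rough sketch that splits an 8-bit grayscale image into the three tonal groups; the thresholds are arbitrary illustrative choices, not a Photoshop standard:

```python
import numpy as np

def tonal_masks(gray, shadow_max=85, highlight_min=170):
    """Classify 0-255 grayscale pixels into shadows, midtones, highlights."""
    gray = np.asarray(gray)
    shadows = gray < shadow_max
    highlights = gray >= highlight_min
    midtones = ~shadows & ~highlights
    return shadows, midtones, highlights

img = np.array([[10, 128, 250],
                [70, 180, 90]])
s, m, h = tonal_masks(img)
print(s.sum(), m.sum(), h.sum())   # 2 shadows, 2 midtones, 2 highlights
```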

Saturday, January 15, 2011

phase of an audio signal

If you’ve paid attention for long enough, you’ve seen heated debate in online forums and letters to the editor in magazines. One side will claim that it has been proven that people can’t hear the effects of phase errors in music, and the other is just as adamant that the opposite is true.
Much of the confusion about phase lies with the fact that there are several facets to this issue. Narrow arguments on the subject can be much like the story of the blind men and the elephant—one believes that the animal is snake-like, while another insists that it’s more like a wall. Both sides may be right, as far as their knowledge allows, but both are equally wrong because they’re hampered by a limited understanding of the subject.

What is phase?

Phase is a frequency-dependent time delay. If all frequencies in a sound wave (music, for instance) are delayed by the same amount as they pass through a device, we call that device "phase linear." A digital delay has this characteristic: it simply delays the sound as a whole, without altering the relationships of frequencies to each other. The human ear is insensitive to this kind of phase change, as long as the delay is constant and we don't have another signal to reference it to. The audio from a CD player is always delayed due to processing, for instance, but this has no effect on our listening enjoyment.
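
A quick numeric sketch of this property, using a circular shift as a stand-in for a pure delay so the comparison is exact:

```python
import numpy as np

fs = 48000
t = np.arange(1024) / fs
x = np.sin(2 * np.pi * 440 * t) + 0.5 * np.sin(2 * np.pi * 1320 * t)

y = np.roll(x, 100)        # delay the whole signal by 100 samples

X, Y = np.fft.rfft(x), np.fft.rfft(y)
# Every magnitude is untouched; only the phase changes, and it changes
# linearly with frequency -- the definition of "phase linear".
print(np.allclose(np.abs(X), np.abs(Y)))   # True
```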

Relative phase

Now, even if the phase is linear (simply an overall delay), we can easily detect a phase difference if we have a reference. For instance, you can get closer to one of your stereo speakers than the other; even if you use the stereo balance control to even out the relative loudness between speakers, it won't sound the same as being equidistant between them.
Another obvious case is when we have a direct reference to compare to. When you delay music and mix it with the un-delayed version, for instance, it’s easy to hear the effect; short delays cause frequency-dependent cancellation between the two signals, while longer delays result in an obvious echo.
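
The cancellation frequencies are easy to predict: nulls fall wherever the delay equals an odd number of half-periods. A quick sketch, assuming an illustrative 1 ms delay:

```python
# Null frequencies of a comb filter formed by mixing a signal with a
# delayed copy of itself: f = (2k + 1) / (2 * delay).
delay_s = 0.001   # 1 ms delay (illustrative)
nulls = [(2 * k + 1) / (2 * delay_s) for k in range(4)]
print(nulls)      # [500.0, 1500.0, 2500.0, 3500.0] Hz
```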
If you connect one of your stereo speakers up backwards, inverting the signal, you’ll get phase cancellation between many harmonic components simultaneously as they cancel in the air. This is particularly noticeable with mono input and at low frequencies, where the distance between the speakers has less effect.

The general case

Having dispensed with linear phase, let’s look at the more general case of phase as a frequency-dependent delay.
Does it seem likely that we could hear the difference between a music signal and the same signal with altered phase?
First, I should point out that phase error, in the real world, is typically constant and affects a group of frequencies, usually by progressive amounts. By "constant", I mean that the phase error is not moving around, as in the effect a phase shifter device is designed to produce. By "group of frequencies", I mean that it's typically not a single frequency that's shifted, or unrelated frequencies; phase shift typically "smears" an area of the music spectrum.
Back to the question: Does it seem likely that we could hear the difference between an audio signal and the same signal with altered phase? The answer is… No… and ultimately Yes.
No: The human ear is insensitive to a constant relative phase change in a static waveform. For instance, you cannot hear the difference between a steady sawtooth wave (which contains all harmonic frequencies) and a waveform that contains the same harmonic content but with the phase of the harmonics delayed by various (but constant) amounts. The second waveform would not look like a sawtooth on an oscilloscope, but you would not be able to hear the difference. And this is true no matter how ridiculous you get with the phase shifting.
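
This is easy to verify numerically. The sketch below builds a sawtooth from its harmonics, rebuilds it with the same harmonic amplitudes but arbitrary constant phase offsets, and confirms the magnitude spectra are identical:

```python
import numpy as np

fs, f0, n = 48000, 110, 40
t = np.arange(fs) / fs                  # one second of audio
rng = np.random.default_rng(0)

# Sawtooth-like wave: harmonics k with amplitude 1/k, all in phase.
saw = sum(np.sin(2 * np.pi * k * f0 * t) / k for k in range(1, n))
# Same amplitudes, but each harmonic shifted by a random constant phase.
scrambled = sum(np.sin(2 * np.pi * k * f0 * t + rng.uniform(0, 2 * np.pi)) / k
                for k in range(1, n))

mag = lambda x: np.abs(np.fft.rfft(x))
# Different waveforms on a scope, identical magnitude spectra to the ear:
print(np.allclose(mag(saw), mag(scrambled), atol=1e-5))   # True
```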
Yes: Dynamically changing waveforms are a different matter. In particular, it’s not only reasonable, but easy to demonstrate (at least under artificially produced conditions) that musical transients (pluck, ding, tap) can be severely damaged by phase shift. Many frequencies of short duration combine to produce a transient, and phase shift smears their time relationship, turning a “tock!” into a “thwock!”.
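
A miniature of the same effect: scrambling the phase spectrum of an idealized click preserves its energy and magnitude spectrum exactly, yet spreads the click out in time.

```python
import numpy as np

rng = np.random.default_rng(0)
click = np.zeros(1024)
click[512] = 1.0                        # an idealized transient

spectrum = np.fft.rfft(click)
phases = rng.uniform(0, 2 * np.pi, spectrum.size)
phases[0] = phases[-1] = 0.0            # keep DC and Nyquist bins real
smeared = np.fft.irfft(spectrum * np.exp(1j * phases), n=click.size)

# Same energy, but no longer concentrated in a single sample:
print(np.isclose(np.sum(click**2), np.sum(smeared**2)))   # True
print(np.abs(smeared).max())   # far below the original peak of 1.0
```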
Because music is a dynamic waveform, the answer has to be "yes": phase shift can indeed affect the sound. The second part, "how much?", is certainly a tougher question. It depends on the degree of phase error, the area of the spectrum it occupies, and the music itself. Clearly we can tolerate phase shift to a degree: all forms of analog equalization, such as on mixing consoles, impart significant phase shift. It's probably wise, though, to minimize phase shift where we can.

Monday, December 20, 2010

GLOSSARY OF TERMS USED IN VIDEO ENGINEERING

There are certain terms that are very important in video engineering and are widely used to describe the properties of a video signal.


Aspect Ratio

The ratio of the visible-picture width to the height. Standard television and computers have an aspect ratio of 4:3 (1.33). HDTV has an aspect ratio of 16:9 (1.78). Additional aspect ratios such as 1.85:1 or 2.35:1 are used in cinema.

Back Porch

The area of a composite video signal defined as the time between the end of the color burst and the start of active video. Also loosely used to mean the total time from the rising edge of sync to the start of active video.

Blanking Interval

There are horizontal and vertical blanking intervals. Horizontal blanking interval is the time period allocated for retrace of the signal from the right edge of the display back to the left edge to start another scan line. Vertical blanking interval is the time period allocated for retrace of the signal from the bottom back to the top to start another field or frame. Synchronizing signals occupy a portion of the blanking interval.

Blanking Level

The blanking level is the nominal voltage of a video waveform during the horizontal and vertical blanking periods, excluding the more negative sync tips.

Breezeway

The area of a composite video signal defined as the time between the rising edge of the sync pulse and the start of the color burst.

Chroma

The color portion of a video signal. This term is sometimes incorrectly called "chrominance," which is the actual displayed color information.

Clamp

A circuit that forces a specific portion (either the back porch or the sync tip) of the video signal to a specific DC voltage, to restore the DC level. Also called "DC restore." A black level clamp to ground circuit forces the back-porch voltage to be equal to zero volts. A peak clamp forces the sync-tip voltage to be equal to a specified voltage.

Color Bars

A standard video waveform used to test the calibration of a video system. It consists of a sequence of the six primary and secondary colors plus white, with standard amplitude and timing. From left to right, the color sequence is white, yellow, cyan, green, magenta, red, and blue. There are several amplitude standards, the most common being 75% amplitude (brightness) with 100% saturation (intensity of the color).

Color Burst

The color burst, also commonly called the "color subcarrier," is 8 to 10 cycles of the color reference frequency. It is positioned between the rising edge of sync and the start of active video for a composite video signal.

Color Saturation

The amplitude of the color modulation on a standard video signal. The larger the amplitude of this modulation, the more saturated (more intense) the color.

Color Subcarrier

See Color Burst. 

Component Video

A three-wire video interface that carries the video information as its basic RGB components, or as luma (brightness) plus two color-difference signals.

Composite Video

A video signal that combines the luma (brightness), chroma (color), burst (color reference), and sync (horizontal and vertical synchronizing signals) into a single waveform carried on a single wire pair.

Differential Gain

Important measurement parameter for composite video signals. Not applicable in Y/C or component signals. Differential gain is the amount of change in the color saturation (amplitude of the color modulation) for a change in low-frequency luma (brightness) amplitude. Closely approximated by measuring the change in the amplitude of a sine wave for a change in its DC level.

Differential Phase

Important measurement parameter for composite video signals. Not applicable in Y/C or component signals. Differential phase is the change in hue (phase of the color modulation) for a change in low-frequency luma (brightness) amplitude. Closely approximated by measuring the change in the phase of a sine wave for a change in its DC level.

Fields and Frames

A frame is one complete scan of a picture. In NTSC it consists of 525 horizontal scan lines. In interlaced scanning systems, a field is half of a frame; thus, two fields make a frame.

Front Porch

The area of a composite video waveform between the end of the active video and the leading edge of sync.

Horizontal Blanking

See Blanking Level and Blanking Interval. 

Horizontal Line Frequency

The inverse of the time (or period) for one horizontal scan line.

Interlaced Scan

The process whereby each frame of a picture is created by first scanning half of the lines and then scanning the second set of lines, which are interleaved between the first to complete the picture. Each half is referred to as a field. Two fields make a frame.

IRE

An arbitrary unit of measurement equal to 1/100 of the excursion from blanking to reference white level. In NTSC systems, 100 IRE equals 714mV and 1-volt p-p equals 140 IRE.
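
The arithmetic is simple; a one-line helper makes the scale concrete:

```python
def ire_to_mv(ire):
    """Convert IRE units to millivolts in NTSC: 1 IRE = 714/100 mV."""
    return ire * 7.14

print(ire_to_mv(100))   # 714.0 mV: reference white
print(ire_to_mv(-40))   # -285.6 mV: sync tip (140 IRE p-p, roughly 1V)
```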

Luma

The monochrome or black-and-white portion of a video signal. This term is sometimes incorrectly called "luminance," which refers to the actual displayed brightness.

Monochrome

The luma (brightness) portion of a video signal without the color information. Monochrome, commonly known as black-and-white, predates current color television.

NTSC

National Television Systems Committee. A group that established black-and-white television standards in the United States in 1941 and later added color in 1953. NTSC is used to refer to the systems and signals compatible with this specific color-modulation technique. It consists of quadrature-modulated color-difference signals added to the luma, with a color subcarrier reference of 455/2 times the horizontal line rate, typically 3.579545MHz with an H rate of 15.734kHz. Commonly used in 525-line, 59.94Hz scanning systems.
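
These figures are self-consistent, as a quick check shows:

```python
# NTSC color subcarrier = 455/2 times the horizontal line rate.
f_h = 15734.2657             # Hz, NTSC color horizontal line rate
f_sc = (455 / 2) * f_h
print(round(f_sc))           # 3579545 Hz = 3.579545 MHz
```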

PAL

Phase alternate line. PAL is used to refer to systems and signals that are compatible with this specific modulation technique. Similar to NTSC, but uses subcarrier phase alternation to reduce the sensitivity to phase errors, which would be displayed as color errors. Commonly used with 625-line, 50Hz scanning systems with a subcarrier frequency of 4.43362MHz.

Pixel

Picture element. A pixel is the smallest piece of display detail that has a unique brightness and color. In a digital image, a pixel is an individual point in the image, represented by a certain number of bits to indicate the brightness.

Progressive Scan

The process whereby a picture is created by scanning all of the lines of a frame in one pass. See also Interlaced Scan. The process of converting from interlaced to progressive scan is called deinterlacing, or "line doubling."

Raster

The collection of horizontal scan lines that makes up a picture on a display. A reference to it normally assumes that the sync elements of the signal are included.

Refresh Rate

See Vertical Frame Rate. 

RGB

Stands for red, green, and blue. It is a component interface typically used in computer graphics systems.

Setup

A reference black level 7.5% (7.5IRE) above blanking level in NTSC analog systems. It is not used in PAL or digital or HDTV systems. In these systems, reference black is the same level as blanking.

Subcarrier

See Color Burst. 

S-Video

Commonly, and incorrectly, used interchangeably with Y/C; see Y/C. Technically, S-video is a magnetic-tape modulation format.

Sync Signals/Pulses

Sync signals, also known as sync pulses, are negative-going timing pulses in video signals that are used by video-processing or display devices to synchronize the horizontal and vertical portions of the display.

Vertical Blanking

See Blanking Level and Blanking Interval. 

Vertical Field Frequency

The inverse of the time (or period) to produce one field of video (half of a frame). In NTSC it is 59.94Hz.

Vertical Frame Rate

The inverse of the time (or period) to produce one frame of video. Also called "refresh rate" or "vertical refresh rate."

Video Bandwidth, Minimum

The minimum analog bandwidth required to reproduce the smallest amount of detail contained in the video signal.

Y Cr Cb

A digital component video interface. Y is the luma (brightness) portion, and Cr and Cb are the color-difference portions of the signal.

Y Pr Pb

An analog-component video interface. Y is the luma (brightness) portion, and Pr and Pb are the color-difference portions of the signal. Typically used on high-end consumer video equipment.

Y/C

An analog video interface in which the chroma (color) information is carried separately from the luma (brightness) and sync information. Two wire pairs are used, denoted Y and C or Y/C. Often incorrectly referred to as "S-video."

Video Basics Continued

Composite/CVBS Interface

Composite signals are the most commonly used analog video interface. Composite video is also referred to as CVBS, which stands for color, video, blanking, and sync, or composite video baseband signal. It combines the brightness information (luma), the color information (chroma), and the synchronizing signals on just one cable. The connector is typically an RCA jack, the same connector used for standard line-level audio connections. A typical waveform of an all-white NTSC composite video signal is shown in Figure 5.

Figure 5. NTSC composite video waveform.

This figure depicts the portion of the signal that represents one horizontal scan line. Each line is made up of the active video portion and the horizontal blanking portion. The active video portion contains the picture brightness (luma) and color (chroma) information; the brightness information is the instantaneous amplitude at any point in time. Amplitude is measured in IRE units, an arbitrary scale in which 140 IRE = 1Vp-p. From the figure, you can see that the voltage during the active video portion would yield a bright-white picture for this horizontal scan line, whereas the horizontal blanking portion would be displayed as black and therefore not be seen on the screen. Please refer back to Figure 1 for a pictorial explanation. Some video systems (NTSC only) use something called "setup," which places reference black at 7.5 IRE, or about 54mV, above the blanking level.
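
As a rough sketch of those levels (timing rounded to whole microseconds and the color burst omitted), one all-white scan line can be modeled like this:

```python
import numpy as np

# IRE levels of one all-white NTSC scan line, 1 sample per microsecond.
BLANKING, SYNC_TIP, WHITE = 0.0, -40.0, 100.0

line = np.full(64, BLANKING)   # one line lasts ~63.5 us
line[2:7] = SYNC_TIP           # ~4.7 us horizontal sync pulse
                               # back porch (burst region) stays at blanking
line[11:62] = WHITE            # ~52 us of active video: flat white

print(line.min(), line.max())  # -40.0 to 100.0 IRE, i.e. ~1V p-p
```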

Color information is added on top of the luma signal and is a sine wave with the colors identified by a specific phase difference between it and the color-burst reference phase. This can be seen in Figure 6, which shows a horizontal scan line of color bars.

Figure 6. Composite video waveform: color bars.

The amplitude of the modulation is proportional to the amount of color (or saturation), and the phase information denotes the tint (or hue) of the color. The horizontal blanking portion contains the horizontal synchronizing pulse (sync pulse) as well as the color reference (color burst) located just after the rising edge of the sync pulse (called the "back porch"). It is important to note here that the horizontal blanking portion of the signal is positioned in time such that it is not visible on the display screen.
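
In other words, once the chroma sine wave is demodulated against the burst reference into two components (call them U and V here, purely for illustration), saturation and hue fall out of simple polar arithmetic:

```python
import numpy as np

def chroma_to_hue_sat(u, v):
    """Polar form of demodulated chroma: amplitude -> saturation,
    phase (relative to burst) -> hue."""
    saturation = np.hypot(u, v)
    hue_deg = np.degrees(np.arctan2(v, u))
    return hue_deg, saturation

print(chroma_to_hue_sat(0.3, 0.3))   # (45.0, ~0.424)
```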

Y/C Interfaces

The Y/C signal is a video signal with less encoding. Brightness (luma), which is the Y signal, and the color (chroma), the C signal, are carried on two separate sets of wires. The connector is a mini-DIN type and resembles a small version of a keyboard connector.

Note: The term "S-video" stands for "separate video" and is sometimes used to refer to a Y/C signal, sometimes to recording formats. It was originally a recording format, used for Sony Betamax, in which the luma was recorded separately from the chroma. The term is also commonly used to refer to the S-VHS (Super VHS) video recording format.

Component Interfaces

Component signal interfaces are the highest performance, because they have the least encoding. The signals exist in a nearly native format. They always utilize three pairs of wires, typically carrying either a luma (Y) signal plus two color-difference signals or a red, green, blue (RGB) format. RGB formats are almost always used in computer applications, whereas color-difference formats are generally used in television applications. The Y signal contains the brightness (luma) and synchronizing information, and the color-difference signals contain the red (R) minus Y and the blue (B) minus Y signals. The theory behind this combination is that each of the base R, G, and B components can be derived from these difference signals. Common variations of these signals are as follows (a small conversion sketch follows the list):

  • Y, B-Y, R-Y: Luma and color-difference signals.
  • Y, Pr, Pb: Pr and Pb are scaled versions of B-Y and R-Y. Commonly found in high-end consumer equipment.
  • Y, Cr, Cb: Digital-signal equivalent to Y, Pr, Pb. Sometimes incorrectly used in place of Y, Pr, Pb.
  • Y, U, V: Not an interface standard. These are intermediate, quadrature signals used in the formation of composite and Y/C signals. Sometimes incorrectly referred to as a "component interface."
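
As a concrete sketch of the color-difference idea, here is an RGB to Y, Pb, Pr conversion using the Rec. 601 luma weights; the divisors normalize the two differences to a ±0.5 range:

```python
def rgb_to_ypbpr(r, g, b):
    """Convert normalized (0-1) RGB to Y, Pb, Pr (Rec. 601 weights)."""
    y = 0.299 * r + 0.587 * g + 0.114 * b   # luma
    pb = (b - y) / 1.772                    # scaled B-Y
    pr = (r - y) / 1.402                    # scaled R-Y
    return y, pb, pr

# Pure red survives intact in the three components:
print(rgb_to_ypbpr(1.0, 0.0, 0.0))   # (0.299, -0.1687..., 0.5)
```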

Computer Signal Interfaces

Virtually all computer interfaces utilize RGB format signals. The picture information is carried separately by the three base components of red, green, and blue. Synchronizing information is typically carried as separate horizontal (H) and vertical (V) signals. The five signals, R, G, B, H, and V, are carried on one cable consisting of a shielded bundle of wires. The connector is almost always a 15-pin D-type connector. Sometimes the H and V sync information is merged with one of the RGB signals, typically the green component; this is referred to as "sync on green," although it is becoming less common. In rarer cases, the sync information is on the red or the blue signal.