Tuesday, May 10, 2011

Charge-Coupled Device (CCD)

A charge-coupled device (CCD) is a device for the movement of electrical charge, usually from within the device to an area where the charge can be manipulated, for example conversion into a digital value. This is achieved by "shifting" the signals between stages within the device one at a time. CCDs move charge between capacitive bins in the device, with the shift allowing for the transfer of charge between bins.
Often the device is integrated with an image sensor, such as a photoelectric device to produce the charge that is being read, thus making the CCD a major technology for digital imaging. Although CCDs are not the only technology to allow for light detection, CCDs are widely used in professional, medical, and scientific applications where high-quality image data is required.

3 CCD Camera

A three-CCD camera is a camera whose imaging system uses three separate charge-coupled devices (CCDs), each one taking a separate measurement of red, green, or blue light. Light coming into the lens is split by a trichroic prism assembly, which directs the appropriate wavelength ranges of light to their respective CCDs. The system is employed by some still cameras, video cameras, telecine systems and camcorders.
Compared to cameras with only one CCD, three-CCD cameras generally provide superior image quality and resolution. By taking separate readings of red, green, and blue values for each pixel, three-CCD cameras achieve much better precision than single-CCD cameras. By contrast, almost all single-CCD cameras use a Bayer filter, which allows them to detect only one-third of the color information for each pixel. The other two-thirds must be interpolated with a demosaicing algorithm to 'fill in the gaps', resulting in a much lower effective resolution.
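The difference can be sketched with a toy example in Python (purely illustrative; `bayer_mosaic` is my own name, and real cameras use far more sophisticated demosaicing). A three-CCD camera records all three channels at every pixel, while a Bayer single-CCD camera keeps only one channel per pixel and must interpolate the other two:

```python
# Toy RGGB Bayer mosaic: keep one colour sample per pixel, discarding
# the other two channels, as a single-CCD sensor effectively does.
def bayer_mosaic(rgb_image):
    """rgb_image: 2D list of (r, g, b) tuples. Returns one (channel, value) per pixel."""
    out = []
    for y, row in enumerate(rgb_image):
        out_row = []
        for x, (r, g, b) in enumerate(row):
            if y % 2 == 0:
                # Even rows alternate red and green sites.
                out_row.append(("R", r) if x % 2 == 0 else ("G", g))
            else:
                # Odd rows alternate green and blue sites.
                out_row.append(("G", g) if x % 2 == 0 else ("B", b))
            # Two-thirds of the colour information is discarded at each pixel
            # and must later be reconstructed by a demosaicing algorithm.
        out.append(out_row)
    return out
```

A 3-CCD camera, by contrast, would keep the full (r, g, b) triple at every position, which is why no interpolation step (and no associated resolution loss) is needed.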
The combination of the three sensors can be done in the following ways:
  • Composite sampling, where the three sensors are perfectly aligned to avoid any color artifact when recombining the information from the three color planes
  • Pixel shifting, where the three sensors are shifted by a fraction of a pixel. After recombining the information from the three sensors, higher spatial resolution can be achieved. Pixel shifting can be horizontal only, to provide higher horizontal resolution in a standard-resolution camera, or both horizontal and vertical, to produce a high-resolution image from standard-resolution imagers. The alignment of the three sensors is achieved by micro-mechanical movements of the sensors relative to each other.
  • Arbitrary alignment, where the random alignment errors due to the optics are comparable to or larger than the pixel size.
Three-CCD cameras are generally more expensive than single-CCD cameras because they require three times as many elements to form the image detector, and because they require a precision color-separation beam-splitter optical assembly.
The concept of cameras using three image pickups, one for each primary color, was first developed for color photography on three glass plates in the late nineteenth century, and in the 1960s through 1980s was the dominant method to record color images in television, as other possibilities to record more than one color on the video camera tube were difficult.
Three-CCD cameras are often referred to as "three-chip" cameras; this term is actually more descriptive and inclusive, since it also covers cameras that use CMOS active pixel sensors instead of CCDs. Camcorders with three chips were once marketed as "3CCD"; some are now sold as "3MOS" (Panasonic's term, derived from 3x CMOS).

Saturday, April 2, 2011

ND FILTER

Neutral Density Filters are often used to achieve motion blur effects with slow shutter speeds
In photography and optics, a neutral density filter (ND filter) is a colorless (clear) or grey filter. An ideal neutral density filter reduces the intensity of all wavelengths (colors) of light equally, producing no change in hue or color rendition.
The purpose of standard photographic neutral density filters is to allow the photographer greater flexibility to change the aperture, exposure time and/or blur of subject in different situations and atmospheric conditions.
For an ND filter with optical density d, the fraction of optical power transmitted through the filter is the ratio of the transmitted intensity I to the incident intensity I0:

fractional transmittance = I/I0 = 10^(−d), or equivalently

d = −log10(I/I0)
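These relations can be checked with a minimal Python sketch (the function names are my own, chosen for illustration):

```python
import math

def transmittance(density: float) -> float:
    """Fraction of light an ND filter passes: T = 10**(-d)."""
    return 10 ** (-density)

def density(transmittance_fraction: float) -> float:
    """Optical density from measured transmittance: d = -log10(I/I0)."""
    return -math.log10(transmittance_fraction)

def f_stop_reduction(density_value: float) -> float:
    """Each stop halves the light, so one stop corresponds to d = log10(2) ~ 0.3."""
    return density_value / math.log10(2)
```

For example, a density of 0.3 transmits about half the light, i.e. a one-stop reduction.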
For example, on a very bright day, one might wish to photograph a waterfall at a slow shutter speed to create a deliberate motion-blur effect. To do this, one would need a shutter speed on the order of tenths of a second. There might be so much light that, even at minimum film speed and a minimum aperture such as f/32, the corresponding shutter speed would still be too fast. In this situation, applying an appropriate neutral density filter takes one or more stops out of the exposure, allowing a slower shutter speed and the more pleasing effect.
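The filter choice for a scene like this can be worked out numerically. The sketch below (illustrative function names and example shutter times of my own choosing) computes how many stops separate the metered exposure from the desired one, and which ND filter removes that many stops:

```python
import math

def stops_needed(metered_shutter_s: float, desired_shutter_s: float) -> float:
    """Exposure difference, in stops, between the metered and desired shutter times
    at a fixed aperture and ISO. Each stop doubles the exposure time."""
    return math.log2(desired_shutter_s / metered_shutter_s)

def nd_for_stops(stops: int) -> dict:
    """ND filter that removes the given whole number of stops."""
    return {
        "nd_number": 2 ** stops,                        # e.g. ND128 for 7 stops
        "optical_density": round(0.3 * stops, 1),       # ~0.3 density per stop
        "transmittance_pct": round(100 / 2 ** stops, 3),
    }
```

If the meter calls for 1/500 s but the waterfall needs 1/4 s, roughly seven stops must be removed, which points to an ND128 (density 2.1) filter.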

Comparison of two pictures showing the effect of using an ND filter on a landscape: the first uses only a polarizer, the second a polarizer plus a 1000x ND filter (ND 3.0).
The use of an ND filter allows the photographer to use a larger aperture while staying at or below the diffraction limit, which varies with the size of the sensory medium (film or digital) and, for many cameras, lies between f/8 and f/11; smaller sensory media require larger apertures, while larger ones can use smaller apertures.
Instead of reducing the aperture to limit light, the photographer can add an ND filter to limit light. The shutter speed can then be set according to the particular motion desired (blur of water movement, for example) and the aperture set as needed (a small aperture for maximum sharpness, or a large aperture for a narrow depth of field, with the subject in focus and the background out of focus). With a digital camera, the photographer can see the image right away and choose the best ND filter for the scene: first find the aperture that gives the desired sharpness, then the shutter speed that gives the desired blur from subject movement. With the camera set up this way in manual mode, the overall exposure is adjusted darker by changing either aperture or shutter speed, noting the number of stops needed to bring the exposure to the desired level. That offset is the strength, in stops, of the ND filter to use for that scene.
Examples of this use include:
  • Blurring water motion (e.g. waterfalls, rivers, oceans).
  • Reducing depth of field in very bright light (e.g. daylight).
  • When using a flash on a camera with a focal-plane shutter, exposure time is limited to the maximum sync speed (often 1/250th of a second at best) at which the entire film or sensor is exposed to light at one instant. Without an ND filter this can force the use of f/8 or higher.
  • Using a wider aperture to stay below the diffraction limit.
  • Reducing the visibility of moving objects.
  • Adding motion blur to subjects.
Neutral density filters are used to control exposure with photographic catadioptric lenses, since the use of a traditional iris diaphragm increases the ratio of the central obstruction found in those systems, leading to poor performance.
ND filters find applications in several high-precision laser experiments because the power of a laser cannot be adjusted without changing other properties of the laser light (e.g. collimation of the beam). Moreover, most lasers have a minimum power setting at which they can be operated. To achieve the desired light attenuation, one or more neutral density filters can be placed in the path of the beam.
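When several ND filters are stacked in a beam path, their optical densities add, which means their transmittances multiply. A small sketch of that arithmetic (function names are my own):

```python
def combined_density(*densities: float) -> float:
    """Stacked ND filters: optical densities simply add."""
    return sum(densities)

def combined_transmittance(*densities: float) -> float:
    """Equivalent statement: transmittances multiply, T = 10**(-(d1 + d2 + ...))."""
    return 10 ** (-combined_density(*densities))
```

So a 0.3 filter (one stop) stacked with a 0.6 filter (two stops) behaves like a single 0.9 filter, passing about 12.6% of the light.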
A graduated ND filter is similar except the intensity varies across the surface of the filter. This is useful when one region of the image is bright and the rest is not, as in a picture of a sunset.
The transition area, or edge, is available in different variations (soft, hard, attenuator). The most common is the soft edge, which provides a smooth transition from the ND side to the clear side. Hard-edge grads have a sharp transition from ND to clear, and the attenuator edge changes gradually over most of the filter, so the transition is less noticeable.
Another type of ND filter configuration is the ND Filter-wheel. It consists of two perforated glass disks which have progressively denser coating applied around the perforation on the face of each disk. When the two disks are counter-rotated in front of each other they gradually and evenly go from 100% transmission to 0% transmission. These are used on catadioptric telescopes mentioned above and in any system that is required to work at 100% of its aperture (usually because the system is required to work at its maximum angular resolution).
Practical ND filters are not perfect, as they do not reduce the intensity of all wavelengths equally. This can sometimes create color casts in recorded images, particularly with inexpensive filters. More significantly, most ND filters are only specified over the visible region of the spectrum, and do not proportionally block all wavelengths of ultraviolet or infrared radiation. This can be dangerous if using ND filters to view sources (such as the sun or white-hot metal or glass) which emit intense non-visible radiation, since the eye may be damaged even though the source does not look bright when viewed through the filter. Special filters must be used if such sources are to be safely viewed.

ND filters are quantified by their optical density or, equivalently, their f-stop reduction, as follows:

Filter   Fraction of light transmitted   Optical density   f-stop reduction   % transmittance
(none)   1                               0.0               0                  100%
ND2      1/2                             0.3               1                  50%
ND4      1/4                             0.6               2                  25%
ND8      1/8                             0.9               3                  12.5%
ND16     1/16                            1.2               4                  6.25%
ND32     1/32                            1.5               5                  3.125%
ND64     1/64                            1.8               6                  1.563%
ND128    1/128                           2.1               7                  0.781%
ND256    1/256                           2.4               8                  0.391%
ND512    1/512                           2.7               9                  0.195%
ND1024   1/1024                          3.0               10                 0.098%
ND2048   1/2048                          3.3               11                 0.049%
ND4096   1/4096                          3.6               12                 0.024%
ND8192   1/8192                          3.9               13                 0.012%
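The table above follows a simple pattern: each row doubles the ND number, adds one stop, and adds roughly 0.3 of optical density. A short Python sketch can generate it (the dictionary keys are my own naming):

```python
def nd_table(max_stops: int = 13):
    """One row per stop: ND number 2**stops, density ~0.3*stops,
    transmittance 100 / 2**stops percent."""
    rows = []
    for stops in range(max_stops + 1):
        nd = 2 ** stops
        rows.append({
            "filter": "ND%d" % nd if stops else "(none)",
            "fraction": "1/%d" % nd if stops else "1",
            "density": round(0.3 * stops, 1),
            "stops": stops,
            "transmittance_pct": round(100 / nd, 3),
        })
    return rows
```

This makes the equivalences explicit: an ND1024 filter, a density of 3.0, a ten-stop reduction, and 0.098% transmittance are all the same thing.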
Another practical way of specifying an ND filter is by the percentage of light it allows to pass (transmittance). This convention is more typical of microscopy applications than of photography.

DIGITAL CAMERAS


ENG cameras


Sony camera head with Betacam SP dock recorder.
Though ENG (electronic news gathering) video cameras were, by definition, originally designed for use by news camera operators, they have become the dominant style of professional video camera for most productions, from dramas to documentaries, from music videos to corporate training. While they have some similarities to the smaller consumer camcorder, they differ in several regards:
  • ENG cameras are larger and heavier, and usually supported by a shoulder stock on the cameraman's shoulder, taking the weight off the hand, which is freed to operate the lens zoom control. The weight of the cameras also helps dampen small movements.
  • Three CCDs are used instead of one, one for each primary color.
  • They have interchangeable lenses.
  • All settings, white balance, focus, and iris can be manually adjusted, and automatics can be completely disabled.
  • The lens is focused manually and directly, without intermediate servo controls. However the lens zoom and focus can be operated with remote controls in a studio configuration.
  • Professional BNC connectors for video and at least two XLR input connectors for audio are included.
  • A complete time code section is available, allowing time code presets; and multiple cameras can be timecode-synchronized with a cable.
  • "Bars and tone" are available in-camera (the color bars are SMPTE (Society of Motion Picture and Television Engineers) Bars, a reference signal that simplifies calibration of monitors and setting levels when duplicating and transmitting the picture. )
  • Recording is to a professional medium such as a variant of Betacam or DVCPRO, or to direct-to-disk recording or flash memory. Where, as with the latter two, it is a data recording, much higher data rates (or less compression) are used than in consumer devices.
  • The camera is mounted on tripods and other supports with a quick release plate.
  • A rotating behind-the-lens filter wheel, for selecting an 85A color-conversion filter and neutral density filters.
  • Controls that need quick access are on hard physical switches, not in menu selections.
  • Gain Select, White/Black balance, color bar select, and record start controls are all in the same general place on the camera, irrespective of the camera manufacturer.
  • Audio is adjusted manually, with easily accessed physical knobs.

EFP Camera operator at a baseball game.


EFP Cameras

Electronic Field Production cameras are similar to studio cameras in that they are used primarily in multiple camera switched configurations, but outside the studio environment, for concerts, sports and live news coverage of special events. These versatile cameras can be carried on the shoulder, or mounted on camera pedestals and cranes, with the large, very long focal length zoom lenses made for studio camera mounting. These cameras have no recording ability on their own, and transmit their signals back to the broadcast truck through triax, fibre-optic or the virtually obsolete multicore cable.


Dock cameras

Some manufacturers build camera heads, which only contain the optical block, the CCD sensors and the video encoder, and can be used with a studio adapter for connection to a CCU in EFP mode, or with various dock recorders for direct recording in the preferred format, making them very versatile. However, this versatility leads to greater size and weight. They are favored for EFP and low-budget studio use, because they tend to be smaller, lighter, and less expensive than most studio cameras.

A remote-controlled camera mounted on a miniature cable car for mobility.


Remote cameras

Remote cameras are typically very small camera heads designed to be operated by remote control. Despite their small size, they are often capable of performance close to that of the larger ENG and EFP types.
"Lipstick cameras" are so called because the lens and sensor block combined are similar in size and appearance to a lipstick container. These are either hard mounted in a small location, such as a race car, or on the end of a boom pole. The sensor block and lens are separated from the rest of the camera electronics by a long thin multi conductor cable. The camera settings are manipulated from this box, while the lens settings are normally set when the camera is mounted in place.
Block cameras are so called because the camera head is a small block, often smaller than the lens itself. Some block cameras are completely self contained, while others only contain the sensor block and its pre-amps, thus requiring connection to a separate camera control unit in order to operate. All the functions of the camera can be controlled from a distance, and often there is a facility for controlling the lens focus and zoom as well. These cameras are mounted on pan and tilt heads, and may be placed in a stationary position, such as atop a pole or tower, in a corner of a broadcast booth, or behind a basketball hoop. They can also be placed on robotic dollies, at the end of camera booms and cranes, or "flown" in a cable supported harness, as shown in the illustration.

Wednesday, December 22, 2010

HEADROOM

In photography, headroom or head room is a concept of aesthetic composition that addresses the relative vertical position of the subject within the frame of the image. Headroom refers specifically to the distance between the top of the subject's head and the top of the frame, but the term is sometimes used instead of lead room, nose room or 'looking room' to include the sense of space on both sides of the image. The amount of headroom that is considered aesthetically pleasing is a dynamic quantity; it changes relative to how much of the frame is filled by the subject. The rule of thumb taken from classic portrait painting techniques, called the "rule of thirds", is that the subject's eyes, or the center of interest, is ideally positioned one-third of the way down from the top of the frame. Moving images such as film and video cameras have the same headroom issues as still photography, but with the added factors of the movement of the subject, the movement of the camera, and the possibility of zooming in or out.
Perceptual psychological studies have been carried out with experimenters using a white dot placed in various positions within a frame to demonstrate that observers attribute potential motion to a static object within a frame, relative to its position. The unmoving object is described as 'pulling' toward the center or toward an edge or corner. Proper headroom is achieved when the object is no longer seen to be slipping out of the frame—when its potential for motion is seen to be neutral in all directions.
Headroom changes as the camera zooms in or out, and the camera must simultaneously tilt up or down to keep the center of interest approximately one-third of the way down from the top of the frame. The closer the subject, the less headroom needed. In extreme close-ups, the top of the head is out of the frame, but the concept of headroom still applies via the rule of thirds.
In television broadcast camera work, the amount of headroom seen by the production crew is slightly greater than the amount seen by home viewers, whose frames are reduced in area by about 5%. To adjust for this, broadcast camera headroom is slightly expanded so that home viewers will see the correct amount of headroom. Professional video camera viewfinders and professional video monitors often include an overscan setting to compare between full screen resolution and "domestic cut-off" as an aid to achieving good headroom and lead room.
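The arithmetic behind that adjustment is straightforward: a roughly 5% loss of picture area corresponds to scaling each dimension by the square root of 0.95. A small sketch (illustrative; the function names and the 1920x1080 example are my own, not broadcast standards):

```python
import math

def domestic_cutoff(width: int, height: int, area_loss: float = 0.05):
    """Visible frame after overscan trims about `area_loss` of the picture area.
    A 5% area loss scales each dimension by sqrt(0.95), about 2.5% per axis."""
    scale = math.sqrt(1 - area_loss)
    return round(width * scale), round(height * scale)

def eye_line(height: float) -> float:
    """Rule-of-thirds eye line: one-third of the way down from the top."""
    return height / 3
```

On a 1920x1080 frame the home viewer sees roughly 1871x1053 of it, which is why the operator leaves slightly more headroom than looks correct in the full viewfinder image.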
One of the most common mistakes that casual camera users make is to have too much headroom: too much space above the subject's head.

Rule of thirds

The rule of thirds is a compositional rule of thumb in visual arts such as painting, photography and design. The rule states that an image should be imagined as divided into nine equal parts by two equally-spaced horizontal lines and two equally-spaced vertical lines, and that important compositional elements should be placed along these lines or their intersections. Proponents of the technique claim that aligning a subject with these points creates more tension, energy and interest in the composition than simply centering the subject would.
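The grid described above is easy to compute for any frame. A minimal sketch in Python (`thirds_grid` is my own name for illustration):

```python
def thirds_grid(width: float, height: float):
    """Two vertical lines, two horizontal lines, and the four intersections
    ('power points') that divide a frame into nine equal parts."""
    vx = (width / 3, 2 * width / 3)
    hy = (height / 3, 2 * height / 3)
    power_points = [(x, y) for x in vx for y in hy]
    return vx, hy, power_points
```

For a 1920x1080 frame, the vertical lines fall at x = 640 and x = 1280, the horizontal lines at y = 360 and y = 720, and a subject placed near any of the four intersections follows the rule.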

The photograph to the right demonstrates the application of the rule of thirds. The horizon sits at the horizontal line dividing the lower third of the photo from the upper two-thirds. The tree sits at the intersection of two lines, sometimes called a power point or a crash point. Points of interest in the photo don't have to actually touch one of these lines to take advantage of the rule of thirds. For example, the brightest part of the sky near the horizon where the sun recently set does not fall directly on one of the lines, but does fall near the intersection of two of the lines, close enough to take advantage of the rule.

The rule of thirds is applied by aligning a subject with the guide lines and their intersection points, placing the horizon on the top or bottom line, or allowing linear features in the image to flow from section to section. The main reason for observing the rule of thirds is to discourage placement of the subject at the center, or prevent a horizon from appearing to divide the picture in half.

When photographing or filming people, it is common to line the body up with a vertical line and to place the person's eyes in line with a horizontal one. If filming a moving subject, the same pattern is often followed, with the majority of the extra room in front of the person (in the direction they are moving).

Tuesday, January 26, 2010

PANNING SHOT


In photography, panning refers to the horizontal movement or rotation of a still or video camera, or the scanning of a subject horizontally on video or a display device. Panning a camera results in a motion similar to that of someone shaking their head "no" or of an aircraft performing a yaw rotation.
Movie and television cameras pan by turning horizontally on a vertical axis, but the effect may be enhanced by adding other techniques, such as rails to move the whole camera platform. Slow panning is also combined with zooming in or out on a single subject, leaving the subject in the same portion of the frame, to emphasize or de-emphasize the subject respectively.
In video technology, the use of a camera to scan a subject horizontally is called panning.
In still photography, the panning technique is used to suggest fast motion, and bring out foreground from background. In photographic pictures it is usually noted by a foreground subject in action appearing still (i.e. a runner frozen in mid-stride) while the background is streaked and/or skewed in the apparently opposite direction of the subject's travel.
The term panning is derived from panorama, a word coined in 1787 by Robert Barker for an 18th-century forerunner of these applications: a machine that unrolled or unfolded a long horizontal painting to give the impression that the scene was passing by. Barker also invented the cyclorama, in which a large painting encircles an audience.