Camera developments
Television cameras have added new features over the years at an orderly pace. Once CCD and CMOS sensors replaced tubes in video cameras, development became incremental. The most recent step change has been the migration from recording on 1/2in or 1/4in tape to solid-state, optical and hard disk storage.
Today's typical HD broadcast camera features three 2/3in CCD sensors and an optical beam splitter. In the quest for lighter, smaller and less expensive cameras, manufacturers also offer 1/2in sensors for field production and ENG, as well as 1/3in or even 1/4in sensors for the semipro user with a limited budget.
What distinguishes broadcast cameras from semipro cameras varies. At one time, the tape format was the deciding factor; if it was miniDV, it was a consumer camera.
Expect a broadcast camera to feature an interchangeable lens and to record at higher data rates than the typical 25Mb/s of low-end devices. But for applications like news and observational documentaries, the distinction becomes blurred.
Digital cinematography has evolved along a different route from television cameras. The need to use 35mm cine lenses has led to the adoption of single sensors with an overlaid color filter array (CFA), rather than the beam splitter and separate RGB sensors. The signal from the sensor is demosaiced to derive the RGB color channels.
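As a minimal sketch of what demosaicing involves (illustrative only; real cameras use far more sophisticated, edge-aware algorithms), bilinear interpolation can reconstruct RGB from an assumed RGGB Bayer mosaic:

```python
# Bilinear demosaic of an RGGB Bayer mosaic (a teaching sketch, not any
# camera's actual pipeline).
import numpy as np
from scipy.ndimage import convolve

def demosaic_rggb(raw):
    """raw: 2-D float array straight off the sensor, RGGB pattern."""
    h, w = raw.shape
    r_mask = np.zeros((h, w)); r_mask[0::2, 0::2] = 1
    b_mask = np.zeros((h, w)); b_mask[1::2, 1::2] = 1
    g_mask = 1 - r_mask - b_mask

    # Each kernel averages a channel's nearest sampled sites into the gaps.
    k_rb = np.array([[1, 2, 1], [2, 4, 2], [1, 2, 1]], float) / 4
    k_g = np.array([[0, 1, 0], [1, 4, 1], [0, 1, 0]], float) / 4
    r = convolve(raw * r_mask, k_rb)
    g = convolve(raw * g_mask, k_g)
    b = convolve(raw * b_mask, k_rb)
    return np.dstack([r, g, b])
```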
The use of a single sensor is also the norm for the digital SLR (DSLR) still camera, but save for consumer camcorders, the three-chip approach has dominated the television market. Why do the film guys want the single large sensors? Apart from compatibility with their favored film lenses, the larger sensor delivers two important characteristics:
- Higher sensitivity: larger pixels gather more light. In the language of film, the sensor is faster.
- Smaller depth of field
The latter characteristic enables the differential focus effects used by cinematographers to define the object of interest in a scene. It also allows the background to be rendered out of focus, useful with fast pans at 24fps. Television cameramen have resorted to wide-aperture prime lenses to reproduce the same effects with 2/3in sensors.
This year, a revolution has occurred amongst cameramen, driven predominantly by the use of the DSLR.
News agencies
Newsgathering budgets have been under pressure, and for some networks news runs 24 hours a day. Under pressure to shoot more material without increasing costs, crews have become smaller. A photographer may be sent on a job with a full still-camera kit for newspapers and a video kit for television news. Agencies realized that if a still camera could also shoot video, one person could send back both stills and video, needing only one set of lenses and one type of camera body. In response to this requirement, some consumer DSLR manufacturers have added to their cameras the capability to shoot video and capture associated audio.
It didn't take long for video cameramen to realize that such a DSLR also provides a low-cost way to shoot film-style, with a small depth of field. Does this represent the Holy Grail, the end of the television camera? The answer is no, as there are some catches. First, the audio capability of a DSLR is rudimentary; at best it can be considered cue audio. Second, the reflex viewfinder doesn't work with the mirror up for continuous shooting, so the rear LCD screen becomes the only viewfinder, and it is difficult to see in direct sunlight.
The sensor may have 10 million to 15 million pixels, far more than the 2 million of a 1080-line television camera, and its optical low-pass filter, which controls spatial aliasing, is designed for the higher-resolution still capture. As a consequence, HD images will suffer more spatial aliasing than those from a video camera designed specifically for HD capture.
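A toy calculation shows why. In the one-dimensional sketch below (hypothetical numbers, not a model of any real sensor), fine detail that the still-resolution optics pass cleanly folds back as a false coarse pattern when the photosites are naively decimated to an HD raster:

```python
# Toy 1-D illustration of spatial aliasing from naive decimation.
import numpy as np

n = 4096                        # photosites across a hypothetical sensor row
x = np.arange(n)
f_detail = 0.3                  # fine detail, in cycles per photosite
row = np.sin(2 * np.pi * f_detail * x)

naive = row[::4]                # 4:1 decimation with no low-pass filter
spectrum = np.abs(np.fft.rfft(naive))
print(np.argmax(spectrum) / len(naive))   # ~0.2 cycles/sample
# 0.3 cycles/photosite becomes 1.2 cycles/sample after 4:1 decimation,
# which folds back to 0.2 cycles/sample: a coarse false pattern that the
# scene never contained. A filter matched to the HD raster would remove it.
```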
Although DSLRs can capture RAW (unprocessed, high bit-depth) files in still-picture mode, current implementations use an eight-bit codec for video capture. That reduces the scope for color grading in post without introducing posterization effects.
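A rough sketch of the problem (toy numbers, not any particular codec): quantize a dark gradient to eight bits, lift it three stops in the grade, and count the code values left to span the stretched range:

```python
# Why 8-bit capture limits grading: stretching shadows in post leaves
# few distinct code values, which shows up as banding.
import numpy as np

gradient = np.linspace(0.0, 0.05, 1920)   # dark gradient across one line
rec8 = np.round(gradient * 255) / 255     # 8-bit quantization at capture
graded = np.clip(rec8 * 8, 0, 1)          # lift the shadows 3 stops in post
print(len(np.unique(graded)))             # ~14 levels: visible posterization
# The same range in a 12-bit RAW file would retain around 205 levels.
```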
Some manufacturers have responded to user demands and added frame rates, including 23.98fps, 25fps and 29.97fps. Other manufacturers stick with a single rate, often 24fps, which must be converted in post.
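Note that the fractional rates are shorthand for exact NTSC-family rationals; a small lookup (an illustrative snippet, not any product's API) makes the relationship explicit:

```python
from fractions import Fraction

# "23.98" and "29.97" are shorthand for exact rationals, not decimals.
FRAME_RATES = {
    "23.98": Fraction(24000, 1001),
    "24":    Fraction(24, 1),
    "25":    Fraction(25, 1),
    "29.97": Fraction(30000, 1001),
}

for name, rate in FRAME_RATES.items():
    print(f"{name}: {float(rate):.5f} fps, frame duration {1 / rate} s")
```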
However, a whole industry has grown up supplying DSLR add-ons for videographers. A hood and eyepiece can shield the rear LCD panel from ambient light. And although a small depth of field is a desired feature, it necessarily adds to the problem of focusing the subject accurately; several companies make follow-focus systems to aid focusing.
One approach to audio capture is to add a separate audio recorder. The cameras do not record timecode, so editors must resort to older techniques like the slate/clapper to ensure audio sync. There is even software that can automatically align the separate audio recording with guide audio recorded in the camera body.
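Such alignment tools typically rely on cross-correlation. A bare-bones sketch, assuming both tracks are mono arrays at the same sample rate (real products add resampling, drift correction and robustness):

```python
# Find the offset that lines a field recorder's track up with the
# camera's guide audio, via FFT-based cross-correlation.
import numpy as np

def find_offset(guide, field, sample_rate=48000):
    """Return the delay, in seconds, to apply to `field` to match `guide`."""
    n = len(guide) + len(field) - 1
    corr = np.fft.irfft(np.fft.rfft(guide, n) *
                        np.conj(np.fft.rfft(field, n)), n)
    lag = int(np.argmax(corr))
    if lag > n // 2:          # negative lags wrap around to the top half
        lag -= n
    return lag / sample_rate
```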
Once focus and audio are under control, it's time to add camera support and a matte box, and the DSLR body has become a video camera. However, issues still remain, such as battery life and possible overheating. These camera bodies are not designed for long-duration shots, and the electronics can get very hot in prolonged use. Manufacturers warn that it may occasionally be necessary to power down the body while it cools. Regular batteries will have a short life, and any shoot will need a stock of fully charged battery packs.
So why use a DSLR with all those drawbacks? It comes down to the creative possibilities for shooters who cannot afford to rent a digital cinematography camera. These cameras have already been used to shoot prime-time programming. Television camera manufacturers are responding to this demand, and at NAB, two lower-cost, single large-sensor cameras were previewed for future release. These will, of course, include proper viewfinders and audio recording facilities, so the current rigs may be short-lived.
3-D rigs
There is great interest in 3-D as several major networks gear up for 3-D transmissions. Much of the development of the technology comes from cinematography and centers on the rig. Although stereoscopic cameras are in development, notably from Panasonic, the current method is to use two regular cameras, either side by side or at 90 degrees in a mirror rig.
Stereoscopic 3-D uses two views of the scene, shot from horizontally displaced cameras (just like our eyes), to aid the brain's perception of depth. This cue, called stereopsis, is just one of many that the brain uses, including the occlusion of objects, aerial perspective, linear perspective and our experience of what we are viewing.
The horizontal displacement creates a parallax between the two views, with closer objects showing greater displacement than distant ones. The brain uses this parallax to derive depth cues. Human eyes have an average separation of around 63mm, the interocular or interpupillary distance (IPD). The camera spacing is referred to as the interaxial distance, after the axes of the two lenses. Although the interaxial spacing can be the same as the IPD, it does not have to be in order to create a realistic stereoscopic effect.
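For parallel (non-converged) cameras, the disparity follows from similar triangles: sensor disparity = focal length × interaxial ÷ subject distance. A quick worked sketch with assumed, illustrative numbers:

```python
# Sensor disparity for a parallel stereo pair, by similar triangles.
def sensor_disparity_mm(interaxial_mm, focal_mm, distance_mm):
    return focal_mm * interaxial_mm / distance_mm

ia, f = 63.0, 25.0               # interaxial near the average IPD; 25mm lens
for z_m in (2, 5, 20, 100):      # subject distances in metres
    d = sensor_disparity_mm(ia, f, z_m * 1000)
    print(f"{z_m:>4} m -> {d:.3f} mm disparity on the sensor")
# Near objects show far more disparity than distant ones, which is why
# the interaxial distance is the main lever for fitting a scene's depth
# into a comfortable on-screen range.
```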
In the original scene, the eyes can roam around and focus on anything from nearby objects right out to the horizon. When the scene is viewed on a display in the home, it is reproduced entirely in the single plane of the screen. If objects are to be placed in front of or behind the display plane, a conflict arises between the convergence of the eyes on an object and their accommodation, or focus distance, which remains at the screen plane. To avoid eyestrain, the depth range of the original scene must be mapped into a limited but comfortable viewing range.
UK broadcaster Sky is working to a depth budget of 3 percent of the screen diagonal, which is 1.5 percent parallax for objects in front of or behind the display plane. This raises the question of which screen diagonal to choose: a movie graded for cinema reproduction and viewed at some distance may not suit the smaller screen viewed close up in the home, so the result is inevitably a compromise.
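To see why screen size matters, here is a worked example using the percentage quoted above and illustrative screen sizes; the 63mm eye separation is the limit beyond which far objects force the eyes to diverge:

```python
# The same depth budget produces very different physical parallax on
# different screens (illustrative sizes, using the 1.5% figure above).
SCREENS_MM = {"46in living-room TV": 46 * 25.4, "12m cinema screen": 12000}

for name, diagonal in SCREENS_MM.items():
    parallax = 0.015 * diagonal      # 1.5% each side of the display plane
    print(f"{name}: {parallax:.0f} mm parallax each way")
# The TV gives ~18 mm, comfortably below the ~63 mm between the eyes;
# the cinema screen gives ~180 mm, which would force the eyes to diverge.
```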
Parallax can be controlled at the camera or, if the program is not a live event, in post production. The cameraman cannot control the depth of the scene itself, so to achieve a comfortable viewing range within the broadcaster's chosen depth budget, one parameter that can be changed is the interaxial distance. By this means, the parallax can be adjusted to bring near or far objects within the designated range; this is why the interaxial spacing of the cameras must be variable beyond the typical 63mm of the IPD. Parallax can also be controlled by converging the lens axes (toe-in), just as we cross our eyes to view close-up objects.
For a wide spacing, the cameras can simply be placed side by side, but the size of the camera body and lens sets a minimum spacing. To achieve a smaller interaxial spacing, a beam splitter, which is a partially transparent mirror, gives complete freedom to configure the spacing. This mirror rig places the direct-view camera behind the mirror, which sits at 45 degrees to the incident light, and the reflected-view camera at 90 degrees to the incident light. The rig has a number of adjustments to control the pitch, roll and Z-axis of one camera so that the two images match.
A differential horizontal transform can be applied to the left and right signals to shift the zero-parallax point; the two channels are then cropped and rescaled to the full raster. Toe-in creates opposing keystone distortions of the scene, which must also be corrected for the left and right rasters to overlay correctly.
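In its simplest form, the shift-crop-rescale step might look like the sketch below (a hypothetical helper handling whole-pixel shifts only; production stereo processors resample with sub-pixel filtering):

```python
# Shift the zero-parallax point by cropping the two views against each
# other, then rescale both back to the full raster width.
import numpy as np
from scipy.ndimage import zoom

def shift_zero_parallax(left, right, shift_px):
    """left/right: HxWx3 arrays; shift_px: added parallax in whole pixels."""
    h, w = left.shape[:2]
    l_crop = left[:, shift_px:]        # move the left view's content leftwards
    r_crop = right[:, :w - shift_px]   # keep the right view's origin fixed
    scale = w / (w - shift_px)         # stretch the common region to full width
    return (zoom(l_crop, (1, scale, 1), order=1),
            zoom(r_crop, (1, scale, 1), order=1))
```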
As an aid to post-production adjustments, it is useful to record the lens metadata: focus, iris and zoom, plus interaxial and convergence parameters. These are especially useful for matching CGI effects to live action.
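There is no single required schema here, but a per-frame record might carry fields along these lines (the names are illustrative assumptions, not an industry standard):

```python
from dataclasses import dataclass

# One possible per-frame lens/rig metadata record for stereo post work.
@dataclass
class StereoLensSample:
    timecode: str          # e.g. "01:02:03:04"
    focus_m: float         # focus distance
    iris_t: float          # T-stop
    focal_mm: float        # zoom position
    interaxial_mm: float   # rig camera spacing
    convergence_deg: float # toe-in angle
```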
If a beam-splitter rig is used, the partially transmitting mirror may well introduce color shading, which must be corrected for the left and right channels to match.
The focus and zoom tracking of the two lenses must also be carefully calibrated so that the left and right pictures match. For live events, the need to make all these raster transforms and color corrections dictates a stereoscopic processor between the CCU and the production switcher, although there are switchers that can perform the adjustments internally.
What is the future for 3-D? Are we stuck with the rig? Some television camera manufacturers are looking at unitary cameras, notably Panasonic with its twin-lens camcorder and Sony with its single-lens research project. And why would a cameraman want to give up a camcorder that sits perfectly balanced on the shoulder for a complex-looking DSLR rig? It looks like something built in one of the workshops usually found on the back lot of a film studio: boxes add XLR connections, remote follow-focus, larger LCD viewfinders and viewing hoods. Because of small production volumes, the cost of a fully loaded rig can be more than the body and lens! Stereo mirror rigs look even more unwieldy, presenting quite a challenge for the Steadicam operator.
It's about the freedom for directors of photography to choose the shooting tools to get the look they seek. After the smooth lines of the modern camcorder, 3-D and DSLR rigs look anachronistic, but they are, after all, a means to an end.