Field acquisition
Much has changed in the processes of acquisition as file-based cameras replace shooting to film and videotape. Treated as digital data, files can be stored on any number of media formats, from memory cards used as temporary storage to optical media retained as an archive. The ease with which files can be copied also opens up the opportunity to change traditional workflows, and changing them efficiently can save both time and money.
Television systems start with RGB sensors and end with RGB displays. To fresh eyes, the system has been hobbled by decisions made in the 1920s. Interlace is one such drawback, but it is gradually working its way out of the system. Another is gamma, the nonlinearity of the CRT. The gamma law describes the relationship between the video voltage driving the tube and the light intensity emitted by the phosphors. Every television system must apply an inverse gamma to correct for the display gamma; this is defined in ITU Recommendation BT.709.
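As a concrete illustration, the BT.709 transfer characteristic pairs a short linear segment near black with a 0.45 power law; a minimal Python sketch of the encoding side:

```python
def bt709_oetf(l):
    """Rec. 709 opto-electronic transfer function (inverse gamma).

    l: scene-linear light, normalized 0.0-1.0
    returns: non-linear video signal, 0.0-1.0
    """
    if l < 0.018:
        return 4.500 * l                     # linear segment near black
    return 1.099 * (l ** 0.45) - 0.099       # power-law segment above it

# An 18% gray card encodes to roughly 0.41 on the video scale
print(round(bt709_oetf(0.18), 3))
```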
Scene contrast range
The contrast of the original scene can cover a very wide range of intensity. This is especially true outside of a studio's controlled lighting. Domestic TV displays, especially when viewed under normal room lighting, can only reproduce a small brightness range — maybe 30:1.
The dynamic range of the original scene's light intensity must be compressed to suit the capabilities of the final display. This compression can be performed in the camera, or the full range of the sensor can be captured and the range compressed in post during color grading. The former approach is used in traditional television; the latter is typical of cinematography.
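In-camera compression is often done with a "knee" that reduces the slope above a set level so highlights fit into the remaining headroom. A simplified sketch, with an illustrative knee point and slope rather than any manufacturer's values:

```python
def apply_knee(level, knee_point=0.85, slope=0.15):
    """Compress levels above knee_point so scene highlights (which may
    exceed 1.0 before the knee) fit into the remaining video headroom.
    The constants are illustrative, not any camera's actual settings."""
    if level <= knee_point:
        return level
    return knee_point + (level - knee_point) * slope

# A highlight at 1.8x reference white folds down to about 0.99
print(round(apply_knee(1.8), 2))
```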
A 35mm camera naturally used film. With a videotape camcorder, you had to shoot to the tape format integral to the camera. Videotape formats all record inverse-gamma-corrected samples at 8-bit or 10-bit resolution per component. In the compressed domain, MPEG-2 and DV coding are both 8-bit. The tape delivered to the post house was at best 10-bit, 4:2:2.
Although taped video can be graded, the range of corrections that can be applied is limited compared with film negative. If too much correction is applied, contouring becomes visible.
Modern tapeless cameras offer more operational flexibility, with integral slots for memory cards and outputs for external recorders via single- or dual-link HD-SDI connectors, or even HDMI on low-cost cameras. MPEG-4 and JPEG 2000 coding schemes support higher bit depths, so greater precision can be delivered to the post house. Log coding provides a way to record, on 10-bit recorders, a signal that is perceptually equivalent to 12-bit Rec. 709 coding.
Zone system
None of this is new. Before WWII, Ansel Adams and Fred Archer developed the zone system for still, photo-chemical photography, previsualizing how the exposure zones of the original scene could be mapped into the much more restricted reproduction range of the final print. Adams adjusted exposure and development to control tone mapping. The same principles apply equally to video.
Television camera technology has evolved over the years to capture wider intensity ranges. The sampling bit depth determines how many distinct steps of intensity can be captured. The human visual system (HVS) can perceive about 100 distinct levels of intensity, but the response is nonlinear and can be modeled by a log law. The 18-percent gray card used to measure exposure appears about half as bright as a white reference; in a linear system, half brightness would be 50-percent gray. By a quirk of nature, the transfer characteristic of the HVS is close to the inverse of the CRT's gamma.
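A quick worked check of that gray-card figure, assuming a representative end-to-end gamma of about 2.2 (the exact exponent depends on the display and viewing conditions):

```python
# Perceived (gamma-encoded) lightness of an 18% gray card,
# assuming a representative display gamma of about 2.2.
gamma = 2.2
gray = 0.18
print(round(gray ** (1 / gamma), 2))   # ~0.46: roughly half of reference white
```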
An 8-bit system with 220 levels (Rec. 601 coding) can easily reproduce 100 levels of intensity at the display for continuous-tone reproduction with no visible banding. However, a real scene has a much wider intensity range than the white and gray of a test chart. Outside on a sunny day, scene brightness ranges from white clouds to dark shadows. To record this full range as imperceptible steps requires up to 16 bits of resolution, depending on the scene contrast ratio. Film negative has long been able to capture such a range; digital sensors have managed it only recently. Now that sensor dynamic range approaches that of film negative, the DP can capture detail in the clouds and in the shadows, but this extreme dynamic range must still be compressed to 8 bits for final delivery to the viewer.
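A back-of-envelope model shows why. Assuming quantization steps must stay below roughly a 1 percent Weber fraction at the darkest tones to remain invisible (the 1 percent figure is an assumption for illustration), linear coding needs one code per step across the whole range, while log-style coding spaces its codes by a constant ratio:

```python
import math

def bits_linear(contrast_ratio, weber=0.01):
    """Linear coding: the step size must be <= weber * darkest level,
    so the number of codes is roughly contrast_ratio / weber."""
    return math.ceil(math.log2(contrast_ratio / weber))

def bits_log(contrast_ratio, weber=0.01):
    """Log coding: each code sits a constant ratio (1 + weber) above the last."""
    return math.ceil(math.log2(math.log(contrast_ratio) / math.log(1 + weber)))

print(bits_linear(100))   # ~14 bits for a 100:1 studio scene, coded linearly
print(bits_linear(650))   # ~16 bits for a contrasty exterior
print(bits_log(650))      # ~10 bits for the same scene, coded logarithmically
```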
Programs shot on video (not film) in the '70s and '80s exhibit what would be considered “overlighting” by today's more realistic standards. The contrast ratios were very low, but they matched the video technology of the day. Achieving a better “look” meant shooting on film. That has all changed, as directors and DPs now expect the film look from video cameras. Just as a film transfer involves a grading stage, modern digital cameras can be similarly graded if the raw signal is available. The full contrast range can be captured, and grading becomes an essential part of the workflow.
A popular camera like the ARRI Alexa captures a signal with a 16-bit range through the use of parallel high- and low-gain amplifiers in the sensor. Put that through a 10-bit HD-SDI connection, and potentially 6 bits of information must be thrown away.
Preserving the data the camera captures, rather than losing it in an AVCHD 4:2:0 encode, is one of several issues complicating workflows. Since tapeless cameras took over, many different ways have evolved to move images from camera to edit.
In any camera-to-transmission chain, the raw signal from the sensor goes through several stages, including color space conversion, inverse gamma and MPEG compression. It may also include spatial scaling and de-Bayering. (See Figure 1.) The question a production person must ask is: Where will this happen, at the shooting stage or in post? Cameras may output RAW signals, log, uncompressed video or compressed video to cater for outputs at different points in this processing chain.
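The sketch below mirrors that chain with placeholder stages; the function names are hypothetical, and the workflow decision is simply which of them run in the camera and which are deferred to post:

```python
# Placeholder stages; real implementations depend on the camera and codec.
def debayer(frame):             return frame   # reconstruct RGB from the Bayer mosaic
def convert_color_space(frame): return frame   # camera primaries -> Rec. 709 primaries
def scale(frame):               return frame   # optional spatial scaling
def inverse_gamma(frame):       return frame   # Rec. 709 transfer function
def compress(frame):            return frame   # e.g. MPEG for distribution

def camera_to_delivery(raw_sensor_frame):
    """Illustrative order of the stages described above; each stage can run
    in the camera or be deferred to post."""
    frame = debayer(raw_sensor_frame)
    frame = convert_color_space(frame)
    frame = scale(frame)
    frame = inverse_gamma(frame)
    return compress(frame)
```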
Getting the look
Presented with a camera that can capture a contrast ratio near film, the DP will want a more film-like workflow. Decisions on the look can be made in post and not baked-in at the capture stage. For live television, the shader is effectively grading on the fly, adjusting the mapping of scene brightness to video levels before the camera output.
Users who have migrated from film are familiar with grading pictures captured on film and coded to the DPX transfer function. This is a log law, allowing intensity to be mapped to video level in a manner that matches HVS perception. Files are graded and then converted to Rec. 709 coding for delivery as television.
Log coding can capture the full range of the sensor for later grading, whereas Rec. 709 will clip highlights and crush shadow detail. Video camera manufacturers have all created coding schemes similar to DPX/Cineon film coding. Sony's is called S-log, Canon's is Canon Log, and ARRI's is LOG-C. Also, there is RedLog, and Panasonic has a gamma setting designated FILMREC. Log footage appears extremely low contrast on a monitor. For viewing, the picture is mapped with a LUT to Rec. 709 space.
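The principle can be sketched with a generic log curve and a 1D viewing LUT; the constants below are purely illustrative, not the published S-Log, Log C or Cineon parameters:

```python
import math

# Illustrative constants only; real S-Log, Log C and Cineon curves differ.
LOG_BLACK = 0.001   # scene-linear value mapped to code 0.0
LOG_WHITE = 8.0     # scene-linear value mapped to code 1.0 (above diffuse white)

def log_encode(linear):
    """Map scene-linear light (1.0 = diffuse white) to a 0-1 log code."""
    linear = max(linear, LOG_BLACK)
    return math.log(linear / LOG_BLACK) / math.log(LOG_WHITE / LOG_BLACK)

def log_decode(code):
    """Invert log_encode back to scene-linear light."""
    return LOG_BLACK * (LOG_WHITE / LOG_BLACK) ** code

def bt709_oetf(linear):
    """Rec. 709 transfer function, used here for the viewing transform."""
    return 4.5 * linear if linear < 0.018 else 1.099 * linear ** 0.45 - 0.099

# A 1D viewing LUT: 1024 entries mapping log code values to Rec. 709 levels,
# clipping highlights above diffuse white for the monitor.
VIEW_LUT = [bt709_oetf(min(log_decode(i / 1023.0), 1.0)) for i in range(1024)]
```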
New generation of cameras
Tapeless cameras come in all flavors, from prosumer camcorders recording AVCHD up to cameras like the Sony F65, which can record 16-bit linear RAW. This spawns a multitude of possible workflows, in contrast to videotape, which constrained the workflow to simple serial processes around ingest to the NLE.
The divergence in workflows stems from the availability of record formats: RAW, log and compressed video. (See Figure 2.) Most cameras allow for internal recording to removable media, or external recording via HD-SDI. A typical camera of the new generation, the Sony F3, records 8-bit MPEG-2 at 35Mb/s to an SxS card like others in the XDCAM-EX range. But, the camera also has a dual-link HD-SDI output with an option of RGB 4:4:4 and S-log recording externally.
Edit codecs
Any field-production workflow is a compromise. How much data is going to be created in a day's shooting? Does the material require grading? Many program genres do not need the finesse of grading and “looks.” A simple color correction may be all that is needed. Whatever the artistic requirements, the less data created, the better. Common memory cards have capacities of 64GB, which is not much when shooting RAW. The big questions are: How much compression can be used, and which codec should be used?
RAW compression schemes mostly use the wavelet transform, the film community's choice for compression. The mild compression is considered visually lossless for all but the most demanding applications.
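As a minimal illustration of why wavelets compress well, here is a single-level 1D Haar decomposition (not the actual transform used by any particular RAW codec); in smooth image regions the detail coefficients cluster near zero, which makes them cheap to encode:

```python
def haar_step(samples):
    """One level of a 1D Haar wavelet decomposition.
    Returns (averages, details); detail coefficients are near zero
    wherever neighboring samples are similar."""
    averages, details = [], []
    for a, b in zip(samples[0::2], samples[1::2]):
        averages.append((a + b) / 2)
        details.append((a - b) / 2)
    return averages, details

print(haar_step([10, 10, 12, 14, 200, 202, 90, 30]))
```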
For many applications where RAW recording is deemed an unnecessary overhead, the alternative of the cameras' internal codecs is too constraining. Many cameras record between 24Mb/s and 50Mb/s using AVC or MPEG-2, producing video that cannot stand up to heavy grading.
Field recorders
Just as earlier shooters used cameras paired with portable VTRs, the portable field recorder provides an alternative to integral recording and offers much more flexibility. The recorders take a single- or dual-link SDI feed from the camera, or even an HDMI tap for lower-cost camcorders, and record to either memory cards or larger-footprint SSDs. The latter have the advantage of greater capacity, 250GB being normal, as opposed to the 64GB more typical of CF and SD cards. HDMI and HD-SDI both support 10-bit coding, although many cameras only provide an 8-bit signal to the connector.
A recent innovation has been the use of the mezzanine editing codecs, Apple ProRes and Avid DNxHD, by field recorders. The resultant files can be copied rapidly to post-house editing drives and do not need transcoding, avoiding the problem of concatenated compression artifacts. Transcoding consumes what is often valuable time in fast-turnaround productions. These mezzanine codecs circumvent the heavy compression of typical camera codecs and, in the case of ProRes 4444, allow RGB coding at up to 12 bits, which is ideal for color grading. The Sony MPEG-4 Simple Studio Profile (SStP) codec, SR HQ, also supports 4:4:4 12-bit coding.
Field recorders allow security of parallel recording, with cameras recording directly to a memory card and the recorder creating the mezzanine-coded file stored on an SSD.
All of these options can add complexity to a shoot. There may be a second unit with DSLRs creating files coded differently from the primary camera's. The potential for error is also much higher than with the handling of tapes. Memory cards are recycled during a production, so careful backup to disk and archive to LTO data tape now form an essential part of any acquisition workflow.
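A typical offload step pairs the copy with checksum verification before a card is wiped; a minimal sketch, with the paths and the choice of MD5 as illustrative assumptions:

```python
import hashlib
import shutil
from pathlib import Path

def checksum(path, algo="md5"):
    """Hash a file in chunks so large camera files do not exhaust memory."""
    h = hashlib.new(algo)
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def offload(card_root, backup_root):
    """Copy every file from the card to the backup and verify each copy
    against the source checksum before the card is recycled."""
    for src in Path(card_root).rglob("*"):
        if src.is_file():
            dst = Path(backup_root) / src.relative_to(card_root)
            dst.parent.mkdir(parents=True, exist_ok=True)
            shutil.copy2(src, dst)
            if checksum(src) != checksum(dst):
                raise IOError(f"Verification failed for {src}")
```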
Summary
Tapeless cameras provide new flexibility. The more expensive ones offer RAW output to deliver the ultimate image quality for grading in post. Log coding, an idea borrowed from film transfer, has become popular by delivering a wide dynamic range to grading. Field recorders add versatility to the workflow and allow a mezzanine edit codec to be recorded directly from the camera, which cuts later transcoding at ingest to the NLE.
Ultimately, workflows must be designed to deliver optimum quality for the production budget, while also keeping camera files secure against problems such as inadvertent erasure or corruption. So important is this that the digital imaging technician (DIT) and data wrangler have become essential members of any crew.