4K2K sensors and more
Several paradoxes lie in the path from NTSC to 4K2K production. First, although it is trivial to build a CMOS sensor with several times the 8.3MP needed for 4K2K, it is not trivial to read such a sensor 60 times per second. Second, although compressing 8.3MP frames is possible, recording the resulting data is challenging. Third, although 4K2K (4096 × 2160 or 3840 × 2160) footage will be imported during post production, most editing will be done on a full-HD (1920 × 1080) timeline.
As described in the previous two articles (“4K2K” in the December 2011 issue and “4K2K, part 2” in the January 2012 issue), 4K2K has evolved from DSLR cameras that shoot video. The evolution is natural because 4K2K can be described as shooting 35mm pictures at video frame rates. The need for frame rates five to 10 times the maximum photographic burst rate creates the first issue.
Camera designers have myriad ways to read out and process a sensor's photosites. This article will look at a few of them, with the goal of better understanding current full-HD and future 4K2K cameras and camcorders.
This article will use an APS-C 16MP sensor for its examples. The sensor has 4912 × 3264 photosites with a 1.50:1 aspect ratio. Using a 16:9 window, the chip can provide a 4912 × 2760-pixel, 13.6MP capture.
The first design choice depends on the answer to the question: How rapidly is the sensor to be read? Obviously, the simplest way to keep a sensor's clock rate low, thereby keeping heat under control, is to limit a camera's frame rate to 24p and/or 30p. Unfortunately, this rules out the ATSC standards of 59.94p (1280 × 720) and 59.94i (1920 × 1080). Nevertheless, the initial generation of video-shooting DSLRs used this approach. Current DSLRs and camcorders can capture 720p59.94, 1080i59.94 and even 1080p59.94. Therefore, today, the question is how many photosites can be read 60 times per second.
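To get a feel for what that question implies with the example sensor described above, here is a quick back-of-the-envelope sketch (Python used purely for the arithmetic; real readout overhead such as blanking is ignored):

```python
# Rough read-rate arithmetic for the example APS-C sensor (4912 x 3264 photosites).
SENSOR_W = 4912
WINDOW_H = 2760                          # rows inside the 16:9 window (per the text)

window_photosites = SENSOR_W * WINDOW_H  # ~13.6 million

for fps in (23.976, 29.97, 59.94):
    rate = window_photosites * fps       # photosites that must be read each second
    print(f"{fps:>6} fps -> {rate / 1e6:6.1f} Mphotosites/s")
```

At 59.94fps the example chip's 16:9 window demands roughly 813 million photosite reads per second, which is why frame rate is the first design constraint.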
As described in the “Understanding de-Bayering” sidebar in the December installment of the series, for a single-sensor camera to provide luminance resolution equivalent to that from a three-chip camcorder, the image to be de-Bayered must be 3.4MP (full HD) or 13.6MP (4K2K). A 16MP chip is able to provide the necessary pixels for both frame sizes. For a full-HD camcorder, two sensor-read options are possible.
The first option reduces the amount of data to be read and processed by a factor of two. To preserve the Bayer pattern, sensor control logic skips every other pair of rows. With the example sensor, 2760 rows are reduced to 1380 rows. Skipping rows, as is done by several popular DSLRs, introduces aliasing and moiré.
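A minimal sketch of that row-pair skipping, assuming a standard RGGB Bayer mosaic held in a NumPy array (the array and the selection logic are illustrative only, not any vendor's actual readout circuitry):

```python
import numpy as np

# Hypothetical raw Bayer data from the 16:9 window: 2760 rows x 4912 columns.
bayer = np.zeros((2760, 4912), dtype=np.uint16)

# Keep rows 0-1, skip rows 2-3, keep rows 4-5, ... so every surviving pair of
# rows still contains a complete RGGB pattern.
keep = (np.arange(bayer.shape[0]) // 2) % 2 == 0
skipped_readout = bayer[keep]

print(skipped_readout.shape)   # (1380, 4912): half the rows, Bayer pattern preserved
```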
When the sensor and DSP are fast enough, no rows need be skipped. In this, the second design option, all rows are de-Bayered. Therefore, aliasing and moiré are reduced to only that caused by an inadequate optical low-pass filter (OLPF).
As each pair of rows is read, they are de-Bayered to YCrCb values. Assuming a de-Bayer efficiency of 78 percent, sensor horizontal luminance resolution will be about 3832 pixels.
With either option, unless the sensor has exactly 1920 or 3840 columns, the YCrCb row values are downscaled to the target line width in pixels. And unless the sensor has exactly 1080 or 2160 rows, the frame must also be downscaled to the target number of lines. For our example chip, the full-HD downscale factor is 0.391 (1920/4912) in both dimensions; the 4K2K downscale factor is 0.782.
During an HD downscale, post de-Bayer luminance resolution is reduced by an equal factor. For our example sensor, 3832 × 0.391 divided by 1.78 (the aspect ratio) yields about 842 TVI/ph. (With a lower-cost camera that has a de-Bayer efficiency of only 70 percent, resolution drops to about 756 TVI/ph.) When shooting 4K2K, estimated horizontal luminance resolution is about 1682 TVI/ph, i.e., 3832 × 0.782/1.78.
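The resolution arithmetic above collapses into a few lines. This sketch simply restates the article's numbers (the 78 percent and 70 percent de-Bayer efficiencies and the 16:9 picture-height normalization); it is not a camera model:

```python
SENSOR_COLUMNS = 4912
ASPECT = 16 / 9                          # ~1.78, used to express TVI per picture height

def est_tvi_ph(debayer_efficiency, target_width):
    """Estimated horizontal luminance resolution, in TVI/ph, after downscaling."""
    luma_pixels = SENSOR_COLUMNS * debayer_efficiency   # ~3832 at 78 percent
    downscale = target_width / SENSOR_COLUMNS           # 0.391 (full HD) or 0.782 (4K2K)
    return luma_pixels * downscale / ASPECT

print(round(est_tvi_ph(0.78, 1920)))   # ~842 TVI/ph, full HD
print(round(est_tvi_ph(0.70, 1920)))   # ~756 TVI/ph, lower-cost camera
print(round(est_tvi_ph(0.78, 3840)))   # ~1685 TVI/ph, 4K2K (the article rounds to ~1682)
```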
Once a de-Bayered and downscaled image has been obtained, the next steps are compression and recording. Although this presents no problem for a full-HD camera, currently there are no 4K2K versions of AVC-Intra, DVCPRO HD or HDCAM.
The H.264 specification does include Level 5.1 supporting 3840 × 2160 at up to 25fps and Level 5.2 supporting 3840 × 2160 at up to 30fps. To employ AVCHD encoding of 4K2K, its specification would need to be enhanced to at least Level 5.1.
The JVC GY-HMQ10 camcorder illustrates one way compression and recording problems can be solved. The HMQ10's 1/2.3in CMOS sensor has 8.3 million active photosites and delivers 3840 × 2160 video at 23.976p, 50p and 59.94p.
From the sensor onward to the SDHC/SDXC recording cards, each 8.3MP frame is processed by the JVC Falconbrid LSI chip. Falconbrid de-Bayers the Quad Full HD (QFHD) frame, divides it into four streams and simultaneously compresses these streams using H.264/AVC at 36Mb/s each. The aggregate data rate for 2160p is, therefore, 144Mb/s. The four H.264 data streams are written to four memory cards.
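In outline, such a four-stream split looks something like the sketch below. The quadrant layout shown here is an assumption for illustration; JVC does not document exactly how Falconbrid tiles the frame, and the 36Mb/s encoders themselves appear only as the aggregate arithmetic:

```python
import numpy as np

def split_qfhd(frame):
    """Divide a 3840x2160 frame into four 1920x1080 tiles (quadrant layout assumed)."""
    assert frame.shape[:2] == (2160, 3840)
    h, w = 1080, 1920
    return [frame[:h, :w], frame[:h, w:], frame[h:, :w], frame[h:, w:]]

tiles = split_qfhd(np.zeros((2160, 3840, 3), dtype=np.uint8))
aggregate_mbps = 4 * 36                           # four H.264 streams at 36Mb/s each
print([t.shape for t in tiles], aggregate_mbps)   # four (1080, 1920, 3) tiles, 144Mb/s
```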
During playback, these cards are read, and the streams are decompressed and output via four HDMI ports. (Projectors and monitors that display 4K2K have four HDMI input ports.)
Once 4K2K has been recorded, focus shifts to post production. How will the data be imported into an NLE and edited? One solution for the JVC HMQ10 is to transfer the data from its four cards, via USB, to a utility that merges the streams and transcodes them to an intermediate codec. (Under OS X, I have had no problems creating and editing 2160p60 ProRes files.)
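Continuing the quadrant assumption from the earlier split sketch, the merge step performed by such a utility would be the inverse operation (again an illustration, not JVC's documented internals):

```python
import numpy as np

def merge_qfhd(tiles):
    """Reassemble four 1920x1080 tiles into a 3840x2160 frame (quadrant layout assumed)."""
    top = np.hstack((tiles[0], tiles[1]))
    bottom = np.hstack((tiles[2], tiles[3]))
    return np.vstack((top, bottom))               # shape (2160, 3840, channels)

print(merge_qfhd([np.zeros((1080, 1920, 3), np.uint8)] * 4).shape)   # (2160, 3840, 3)
```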
Post production
Those shooting QFHD with an F65 and recording on SRMemory cards using a Sony SR-R1000 can make use of HD-SDI connections to transfer files in real time. (During shooting, four HD-SDI connections send QFHD data from an F65 to a pair of SRK-R201 input boards installed in an SR-R1000.)
Two SRK-202 output boards installed in an R1000 provide four HD-SDI output ports. Each port provides uncompressed digital video.
With a Blackmagic Design DeckLink 4K board installed in your PC or Mac, you can transfer QFHD digital video to 4:2:2 8-bit YCrCb uncompressed files. To accomplish this, you'll first need a powerful RAID, because 2160p24 video and audio require about 400MB/s.
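That 400MB/s figure is easy to sanity-check (a sketch only; 4:2:2 8-bit YCrCb averages two bytes per pixel, and audio adds a comparatively negligible amount):

```python
WIDTH, HEIGHT = 3840, 2160
BYTES_PER_PIXEL = 2              # 4:2:2 8-bit YCrCb: 8 bits luma + 8 bits shared chroma
FPS = 24

video_bytes_per_sec = WIDTH * HEIGHT * BYTES_PER_PIXEL * FPS
print(f"{video_bytes_per_sec / 1e6:.0f} MB/s")   # ~398 MB/s before audio and file overhead
```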
Second, four AJA HD10AM AES audio embedders/disembedders will be needed. Each HD10AM inputs one 1920 × 1080 signal (from one of a pair of HD-SDI output ports on an SRK-202) plus a pair of audio signals (from two AES output ports on an SRK-202). Each audio embedder then sends a 1920 × 1080 signal with embedded audio to one HD-SDI input port on the DeckLink 4K board.

Once 4K2K files have been imported, online editors choose a workflow. Only certain NLEs can deliver a 4K2K file: FCP 7, FCP X, EDIUS 6 and Premiere Pro support 4K2K timelines and exports.
Although the Media Composer 6 from Avid does not support 4K2K sequences, it will accept 4K2K ProRes and uncompressed files via AMA. (When source files are 2160p60, edit them in a 1080i60 sequence.) Media Composer's Resize filter creates a 1920 × 1080 image from a 3840 × 2160 image. Pan-and-scan of the larger image is possible by key-framing the X and/or Y position control. (See Figure 1.)
An NLE such as FCP X, which can apply a pan-scan-zoom effect (like iMovie's Ken Burns effect), can use the “extra” pixels to zoom in and out of the larger image, as shown in Figure 2 (3840 × 2160 downscaled to 1920 × 1080) and Figure 3 (a 1920 × 1080 crop of the 3840 × 2160 frame).
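Conceptually, that kind of pan-scan-zoom is just a keyframed crop of the 3840 × 2160 frame down to a 1920 × 1080 window. The sketch below shows a simple linear pan between two keyframes; real NLEs implement this internally with their own position and zoom controls, so treat the function as an illustration only:

```python
import numpy as np

def pan_crop_1080(frame_4k, t, start_xy=(0, 0), end_xy=(1920, 1080)):
    """Return a 1920x1080 crop of a 3840x2160 frame, panning linearly from
    start_xy to end_xy as t runs from 0.0 to 1.0 (the two crop keyframes)."""
    x = round(start_xy[0] + t * (end_xy[0] - start_xy[0]))
    y = round(start_xy[1] + t * (end_xy[1] - start_xy[1]))
    return frame_4k[y:y + 1080, x:x + 1920]

frame_4k = np.zeros((2160, 3840, 3), dtype=np.uint8)   # placeholder QFHD frame
print(pan_crop_1080(frame_4k, t=0.5).shape)            # (1080, 1920, 3), mid-move
```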
Although working with 4K2K requires a greater understanding of camera technology and more effort during post, these issues are surmountable.
Steve Mullen is the owner of DVC. He can be reached via his website at http://home.mindspring.com/~d-v-c.