Future Developments in Imaging Technology

Cameras are getting better and more versatile. NAB saw a number of new product launches, and the dominant trend was towards 4K imaging. However, another trend was also in evidence: post-processing for VFX, 3D and more.

The human visual system comprises the eye and the brain, specifically the visual cortex, working together to create our view of the world. So far cameras have modelled only the eye, with a lens and a sensor, and have used little processing downstream. The new world of computational cinematography brings the processing power of the CPU and GPU to bear, enabling features not possible with a conventional camera.

The SMPTE held technical sessions on the weekend of NAB, with an opening session on 'Advancing Cameras'. The panel looked at high frame rate (HFR) and high dynamic range (HDR) imaging, as well as advances in 4K and 8K camera technology.

One speaker, Eric Fossum, talked about the future beyond CMOS sensors. Fossum was on the team at the NASA Jet Propulsion Laboratory that invented the CMOS active pixel sensor that is set to replace the CCD as the primary imaging technology in digital cameras. Now a professor at the Thayer School of Engineering at Dartmouth, he is developing the Quanta Image Sensor (QIS). The goal of the QIS is to count every photon that strikes the image sensor, to provide a resolution of one billion or more specialized photoelements (called jots) per sensor, and to read out jot bit planes hundreds or thousands of times per second, resulting in 0.25 to 100 Tb/s of data. By contrast, a CMOS pixel creates a voltage proportional to the number of photons collected during the exposure of a frame. An analogy for QIS versus a CMOS sensor is counting individual raindrops rather than reporting that 3mm of rain fell in an hour.
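To make the contrast concrete, here is a minimal sketch in Python of how an intensity image could be recovered from binary jot bit planes by simple summation. The grid size, plane count and detection model are assumptions chosen for illustration, not Fossum's actual readout pipeline.

```python
import numpy as np

# Illustrative sketch only: a QIS reads out binary "jot" bit planes at a very
# high rate; summing many planes per output pixel recovers a conventional
# intensity value, like counting raindrops instead of measuring millimetres.

rng = np.random.default_rng(0)

height, width = 64, 64      # hypothetical jot grid (real sensors aim for ~1 billion jots)
planes_per_frame = 1000     # hypothetical number of bit planes summed per output frame

# Assume a ground-truth scene expressed as the probability that a jot
# detects at least one photon during a single bit-plane exposure.
scene = np.clip(rng.random((height, width)), 0.05, 0.95)

# Each bit plane is a binary detection map: 1 if the jot fired, 0 otherwise.
bit_planes = rng.random((planes_per_frame, height, width)) < scene

# Summing the bit planes approximates a photon count, which maps back to intensity.
photon_counts = bit_planes.sum(axis=0)
image = photon_counts / planes_per_frame   # normalised estimate of the scene

print(image.shape, image.min(), image.max())
```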

This technology is a few years away from practical application, but indications from the presentation were that it would be useful in computational cinematography for VFX. CMOS imaging has developed at an amazing pace since the introduction of the technology. One only has to look at Red's latest sensor, the Red Dragon, which captures 6K x 3K at 60fps with a claimed dynamic range of over 16 stops. S/N is claimed to be 80dB; I remember when cameras struggled to reach 55dB.
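For readers who want to relate stops to decibels, here is a quick back-of-the-envelope conversion, assuming the usual 20·log10 voltage convention of roughly 6dB per stop. Note that dynamic range and S/N are distinct measurements, so the two claimed figures need not agree.

```python
import math

def stops_to_db(stops: float) -> float:
    """Convert dynamic range in photographic stops (factors of two) to decibels."""
    return 20 * math.log10(2 ** stops)

# The claims quoted above: over 16 stops of dynamic range and 80dB S/N.
print(f"16 stops is about {stops_to_db(16):.1f} dB")          # ~96.3 dB
print(f"80 dB is about {80 / stops_to_db(1):.1f} stops")      # ~13.3 stops
```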

Lightfield Imaging

Another speaker, Siegfried Foessel of the Fraunhofer Institute, described their work on lightfield imaging. They have been experimenting with a plenoptic camera, which uses a microlens array to capture the direction as well as the intensity of incoming light rays. This allows a 4D representation of the scene to be computed, which in turn allows refocusing in post production. Looks like the focus puller is going to be moving from set to the edit bay! Lightfield information can also be used to create depth maps, avoiding the need for greenscreen shooting with matte work.
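Refocusing in post works by re-projecting the captured rays onto a new virtual focal plane. The sketch below shows the standard shift-and-add idea on a 4D lightfield indexed by sub-aperture view (u, v) and pixel (s, t); the array size and focus parameter are illustrative assumptions, not the Fraunhofer implementation.

```python
import numpy as np

def refocus(lightfield: np.ndarray, shift_per_view: float) -> np.ndarray:
    """Shift-and-add refocusing of a lightfield shaped (n_u, n_v, height, width).

    Each sub-aperture view is shifted in proportion to its offset from the
    array centre and a chosen focus parameter, then all views are averaged.
    """
    n_u, n_v, height, width = lightfield.shape
    cu, cv = (n_u - 1) / 2.0, (n_v - 1) / 2.0
    out = np.zeros((height, width))
    for u in range(n_u):
        for v in range(n_v):
            dy = int(round((u - cu) * shift_per_view))
            dx = int(round((v - cv) * shift_per_view))
            out += np.roll(lightfield[u, v], shift=(dy, dx), axis=(0, 1))
    return out / (n_u * n_v)

# Toy 5x5 array of 64x64 views; varying shift_per_view moves the focal plane.
lf = np.random.default_rng(1).random((5, 5, 64, 64))
near_focus = refocus(lf, shift_per_view=1.5)
far_focus = refocus(lf, shift_per_view=-1.5)
print(near_focus.shape, far_focus.shape)
```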

Another camera type they are working with is the camera array. A number of small, low-cost cameras are mounted in an X-Y array, and the data stream is processed to create, among other data sets, an HDR output or a lightfield. Since the direction of light is captured by both camera types, it is possible to create multiple views for 3D imaging. Indications are that autostereoscopic (eyewear-free) displays could require up to 20 views, and these can be computed from lightfield data or synthesized from a stereo camera.
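As one illustration of the HDR output mentioned above, here is a hedged sketch of merging differently exposed frames from an array into a single radiance estimate. The exposure values, the simple mid-tone weighting and the function name are assumptions for the example, not the Fraunhofer processing chain.

```python
import numpy as np

def merge_hdr(frames: np.ndarray, exposure_times: np.ndarray) -> np.ndarray:
    """Merge frames shaped (N, H, W) with values in [0, 1] into an HDR radiance map.

    exposure_times is shaped (N,), one exposure (in seconds) per frame.
    """
    # Weight mid-range pixels most heavily; clipped highlights and deep shadows least.
    weights = np.clip(1.0 - np.abs(frames - 0.5) * 2.0, 1e-3, None)
    # Scale each frame back to scene radiance before the weighted average.
    radiance = frames / exposure_times[:, None, None]
    return (weights * radiance).sum(axis=0) / weights.sum(axis=0)

# Toy example: three hypothetical exposures of the same scene from the array.
rng = np.random.default_rng(2)
exposures = np.array([0.004, 0.016, 0.064])
frames = np.clip(rng.random((3, 48, 48)) * exposures[:, None, None] * 20, 0, 1)
hdr = merge_hdr(frames, exposures)
print(hdr.shape, hdr.min(), hdr.max())
```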
