A Simple Guide to Formats and Codecs
Because the world can never have enough codecs, right?
ORLANDO, Fla.—“All I need is a QuickTime file.” How many times have you heard this from a client? Unfortunately, that’s not enough information. Here’s why.
Digital files consist of format wrappers, data, and metadata. A format wrapper defines the type of file, as identified by the file extension, such as .mpeg, .mov, .mp4, or .mxf. Think of a wrapper like a videocassette tape: it signals which player application should be compatible with that file, just as a VHS cassette told you a VHS player was required. The data is the audio/video content contained within that format wrapper. The metadata is information about that data, such as date/time stamps, color profiles, and more.
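You can see all three parts of that anatomy for yourself by probing any file. A minimal sketch in Python, assuming FFmpeg’s ffprobe tool is installed and a hypothetical file named clip.mov:

    import json
    import subprocess

    # Ask ffprobe to report the wrapper, the streams inside it, and the
    # metadata as JSON. "clip.mov" is a hypothetical file name.
    result = subprocess.run(
        ["ffprobe", "-v", "error", "-print_format", "json",
         "-show_format", "-show_streams", "clip.mov"],
        capture_output=True, text=True, check=True)
    info = json.loads(result.stdout)

    print("Wrapper: ", info["format"]["format_name"])   # the "videocassette"
    for stream in info["streams"]:
        print("Data:    ", stream["codec_type"])        # audio or video content
    print("Metadata:", info["format"].get("tags", {}))  # date/time stamps, etc.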
UNDERSTANDING CODECS
Audio and video signals are encoded into digital files using codecs—shorthand for compressor/decompressor. Most video files use data compression, whereas the audio signal in professional formats is typically uncompressed. Consumer audio formats such as .mp3 and .m4a, on the other hand, use highly compressed codecs. While we tend to associate certain codecs with specific formats, like ProRes and .mov, they are not synonymous. For example, files can be encoded with the ProRes codec and stored in .mxf wrappers.
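To make that concrete, FFmpeg can write the same ProRes video essence into either a .mov or an .mxf wrapper. A minimal sketch, assuming FFmpeg is installed and a hypothetical in.mov source (audio is dropped to keep the example simple):

    import subprocess

    # Encode the same ProRes 422 HQ video essence into two different wrappers.
    # prores_ks is FFmpeg's ProRes encoder; profile 3 corresponds to 422 HQ.
    for wrapper in ("out.mov", "out.mxf"):
        subprocess.run(
            ["ffmpeg", "-i", "in.mov", "-an",
             "-c:v", "prores_ks", "-profile:v", "3", wrapper],
            check=True)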
Video compression is defined as either lossy or visually lossless. Which you get depends on the codec type, the data rate used to encode the file, the frame rate, and the frame size. File sizes will be smaller when a lossy method is used, but the images will display compression artifacts visible to the eye, because so much signal information is thrown away. Streaming services like Netflix and YouTube use lossy methods to get the signal to your home over the internet. Visually lossless codecs, like high bit-rate versions of the Avid DNx or Apple ProRes families of codecs, are used in camera acquisition and post production. These codecs employ a high data rate for compression and produce significantly larger files, but their compression artifacts are generally indiscernible from uncompressed video.
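The data-rate difference is easy to put in concrete numbers. A back-of-the-envelope sketch in Python, using Apple’s published target of roughly 220 Mbps for ProRes 422 HQ at 1080p/29.97 against a typical 5 Mbps streaming rate:

    # Approximate file size in gigabytes:
    # size = data rate (bits/s) x duration (s) / 8 bits per byte
    def size_gb(mbps, seconds):
        return mbps * 1e6 * seconds / 8 / 1e9

    hour = 3600
    print(f"ProRes 422 HQ (~220 Mbps): {size_gb(220, hour):.0f} GB/hour")  # ~99 GB
    print(f"Streaming H.264 (~5 Mbps): {size_gb(5, hour):.2f} GB/hour")    # ~2.25 GB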
The vast majority of codecs used in production and post today are the proprietary intellectual property of companies or associations. They are not open source or open standard, even though their use may be ubiquitous. Using a codec, especially to encode data with it, requires a licensing agreement. Typically this is transparent and appears to be free to the user, but rest assured that an arrangement between companies has been made. As with all intellectual property, a codec can stop being available within an application if that arrangement ends.
LIBRARY COMPONENTS AND THE 64-BIT TRANSITION
When a video file is played by an application, its data is decoded on the fly and displayed as RGB pixels on your screen or in a viewer within the interface. This requires a set of installed library components that the application draws upon to read, decode, and display the video data. These components may be part of the computer’s operating system, or they may be custom-installed components that function only within that one application.
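A minimal sketch of that decode loop, assuming the PyAV library (Python bindings to FFmpeg’s decoder components) and a hypothetical clip.mov:

    import av  # PyAV: Python bindings to FFmpeg's libraries

    container = av.open("clip.mov")             # the wrapper is parsed here
    for frame in container.decode(video=0):     # each frame is decoded on the fly...
        rgb = frame.to_ndarray(format="rgb24")  # ...and converted to RGB pixels
        print(frame.pts, rgb.shape)             # a viewer would draw this array
        break  # decode just the first frame for the sketch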
Over time, Apple and Microsoft have dropped or “deprecated” support for older codecs within their own operating systems. For example, Apple’s Catalina is a 64-bit operating system with no support for 32-bit applications and library components. This means that certain codecs, including many still in active use like DNxHD/HR, can no longer be decoded (played) by any application that depends on the 32-bit QuickTime framework used in previous versions of macOS. This is not an issue with the codec itself, but rather with the library components used.
For such files to work within Catalina, Apple or the application developer has to write new 64-bit library components to play them going forward. In the case of DNx, Avid and Adobe can read and write these files in the .mxf format, but Apple’s own applications, like Final Cut Pro X and QuickTime Player, cannot yet do the same. Expect that to be updated later this year.
WHY CAN’T I WRITE CAMERA RAW FILES IN POST?
Digital video cameras convert raw sensor data into RGB pixel information and record it to a digital file using a defined codec and format. Processing is done in the camera to “bake in” the conversion of the Bayer-pattern sensor data to RGB, along with a native ISO and a color profile. These files are then easily played by most professional editing and player applications.
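To illustrate what that conversion involves, here is a toy demosaic in Python that collapses each 2x2 RGGB block of sensor data into one RGB pixel. Real cameras use far more sophisticated interpolation, plus the ISO and color-profile processing, before the file is written:

    import numpy as np

    def rggb_to_rgb_preview(mosaic):
        """Toy half-resolution demosaic of an RGGB Bayer mosaic."""
        r  = mosaic[0::2, 0::2]      # red sites
        g1 = mosaic[0::2, 1::2]      # first green site
        g2 = mosaic[1::2, 0::2]      # second green site
        b  = mosaic[1::2, 1::2]      # blue sites
        g = (g1 + g2) / 2.0          # average the two greens
        return np.dstack([r, g, b])  # H/2 x W/2 x 3 RGB image

    sensor = np.random.randint(0, 4096, (8, 8)).astype(np.float32)  # fake 12-bit data
    print(rggb_to_rgb_preview(sensor).shape)  # (4, 4, 3)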
Ever since the launch of RED Digital Cinema’s RED One camera, compressed raw codecs have grown in popularity. When you record a camera raw signal, the conversion/processing step to RGB is skipped, which preserves more latitude for color correction in post. It also yields higher-quality images for an equivalent data rate and file size. The trade-off is that camera raw files tax the hardware systems used in post, because the conversions that would have been done in the camera must now be performed in real time by the computer.
Raw codecs store the information generated by a camera’s sensor and are only intended for image acquisition. You cannot write or re-encode camera raw files in post, because once the image has been converted to RGB, the original sensor data no longer exists. Camera raw codecs are also proprietary to individual companies, including RED, ARRI, Apple, Blackmagic Design, and others. For now, there is no video equivalent to CinemaDNG, a photographic raw format developed by Adobe and then offered up as an open standard to the community, though it could be argued that GoPro’s CineForm RAW codec is a comparable solution for video.
The video from non-raw codecs can be adjusted through color correction, but you aren’t actually altering the underlying color processing of the file itself; you are skewing the RGB information that is already there. In contrast, camera raw codecs let you control how the sensor data is decoded in the first place, applying different color science and adjustment schemes. This requires each company to create a camera raw plug-in specific to its proprietary codec, which the user can access in order to “develop” the image. Software engineers have several options: 1) keep that proprietary process isolated to their own applications, 2) create plug-ins for other companies to use, 3) create OS components that everyone can tap into, or 4) offer an SDK and let other companies write their own tools to use within their own—often competing—applications.
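A tiny illustration of the difference, assuming baked-in 8-bit RGB versus 12-bit linear sensor data and an arbitrary 1.5x exposure push:

    import numpy as np

    push = 1.5  # an exposure lift, for illustration

    # Baked-in 8-bit RGB: the push drives two distinct highlights to the
    # same clipped value, and that separation is gone for good.
    baked = np.array([120.0, 200.0, 250.0])
    print(np.clip(baked * push, 0, 255))     # [180. 255. 255.]

    # Raw: the same push is applied to linear 12-bit sensor values before
    # the RGB conversion, where there is headroom, so separation survives.
    raw = np.array([800.0, 1400.0, 1900.0])  # 12-bit values, max 4095
    print(np.clip(raw * push, 0, 4095))      # [1200. 2100. 2850.]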
Codecs like REDCODE are widely supported in most applications, thanks to RED’s plug-ins. Others, like ProRes RAW and Blackmagic RAW, are in a transition state as NLEs add support over the coming year. In practical terms, if you own a camera that records raw video, don’t assume that your favorite editing tool or post house can deal with those files. As always, do your homework and ask the right questions.