Editing systems

In the early years of television, any editing was done strictly on film. Programs that needed the sophistication editing allowed had to use film production techniques and tools. In the bargain, the ability to broadcast a program more than once was also preserved, for at that time there was no effective means of recording a program without film. Live video productions were preserved by kinescope recording, a process in which a film camera captured the program from a CRT in real time. Once the video was transferred to this non-electronic medium, all manner of well-established film techniques could be applied.

By the mid-1950s a number of commercial research departments in the U.S. and elsewhere had been working hard to find a commercially viable way to record electronic images without chemical photography. Ampex, and later RCA, Sony, Panasonic, Bosch, Hitachi, NEC and others achieved considerable success in making increasingly practical electronic copies of live images. However, it was a number of years before practical electronic editing was perfected. For a decade, editing was done by physically splicing videotape segments together, in the same manner as film was spliced. Indeed, editors stored clips in bins just as film professionals did (bins the janitorial industry would more accurately call waste baskets).

In the 1970s, “Rowan and Martin's Laugh-In” took such efforts to new heights, with literally hundreds of splices in some programs. While this made for thoroughly enjoyable television, it was impractical for wide use in production. For instance, a split edit (one where the video and audio cuts are not coincident) was impossible. The push to create usable electronic editing was spurred on by Hollywood's desire to produce electronically as it did on film, on the theory that video production would be as cheap as or cheaper than film approaches.

However, unless you physically cut videotape (or use modern computer-based editing tools), video editing is relegated to a linear process in which scenes are transferred (dubbed) from the original videotape to the master recording used for air. While a full treatment is beyond the scope of this article, it is important to appreciate the precision with which this had to be done. Scenes had to be written on the master tape in sequence, from the head of the program to the tail, and the tracks on the tape had to be controlled precisely enough that a mechanical transport, with its considerable hysteresis, could play back the edited signal. To edit, one had to begin erasing the old video as the tape passed the erase head, and a precise interval later begin recording the new material exactly where the erasure had started. Eventually, methods using tones on a cue track permitted “precisely” repeatable edits, though there was no means of achieving time synchronization between VTRs unless the editor had extraordinary skill and the VTRs were carefully maintained.
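
To put an illustrative number on that interval between erasing and recording (the tape speed and head spacing below are assumptions for the sketch, not figures from any particular machine):

```python
# Illustrative sketch only: on a tape transport the erase head sits
# physically upstream of the record head, so erasure of the old video
# must begin a fixed interval before new recording starts at the same
# physical spot on the tape. Both numbers below are assumed.

tape_speed_ips = 15.0   # assumed linear tape speed, inches per second
head_spacing_in = 1.5   # assumed erase-to-record head spacing, inches

delay_s = head_spacing_in / tape_speed_ips
print(f"Begin erasing {delay_s * 1000:.0f} ms before recording")  # 100 ms
```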

In the early 1970s a timecode that could be recorded on tape and used both to synchronize transports and to accurately control the timing of edits was developed by commercial companies and standardized by SMPTE (SMPTE 12M is still titled “Time and Control Code”). Initially, implementations were finicky and prone to less than perfectly repeatable results, due in part to the analog nature of the timecode. Over time, systems utilizing this common “sync track” grew more capable. Companies including RCA and Ampex created systems that used special-purpose, hardware-programmed computers to control multiple VTRs, and in some cases provided GPI control over production switchers and other devices, making integrated systems almost possible.
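
SMPTE timecode addresses every frame as hours:minutes:seconds:frames, and an edit controller's synchronization work reduces to arithmetic on those addresses. As a minimal sketch, the Python below converts non-drop-frame timecode to absolute frame counts and back, assuming a 30 fps rate; the drop-frame compensation used for NTSC's 29.97 fps rate is deliberately omitted:

```python
# Minimal sketch: SMPTE non-drop-frame timecode <-> absolute frame count,
# assuming 30 fps. (Drop-frame timecode for NTSC's 29.97 fps adds a
# frame-skipping rule that is omitted here for clarity.)

FPS = 30  # frames per second, non-drop-frame

def timecode_to_frames(tc: str) -> int:
    """Convert 'HH:MM:SS:FF' to an absolute frame count."""
    hh, mm, ss, ff = (int(part) for part in tc.split(":"))
    return ((hh * 60 + mm) * 60 + ss) * FPS + ff

def frames_to_timecode(frames: int) -> str:
    """Convert an absolute frame count back to 'HH:MM:SS:FF'."""
    ff = frames % FPS
    ss = (frames // FPS) % 60
    mm = (frames // (FPS * 60)) % 60
    hh = frames // (FPS * 3600)
    return f"{hh:02d}:{mm:02d}:{ss:02d}:{ff:02d}"

# A controller synchronizing two VTRs needs exactly this arithmetic to
# compute the offset between a source in-point and a record in-point.
offset = timecode_to_frames("01:00:10:15") - timecode_to_frames("01:00:00:00")
print(offset)  # 315 frames
```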

CMX, however, took a fresh look at the problem and created software, run on a general-purpose computer, that took control of VTRs, ATRs, production switchers and audio consoles in a far more powerful environment. The CMX 300 and its successors were the mainstay of the production industry for nearly 20 years. It is true that no early computer editing system radically changed the topology of the editing landscape; scenes were still laid down in strictly controlled places on the tape. However, it was now possible, with adequate care, to make a “tracking edit,” in which a scene could be spliced to itself seamlessly and reliably. This was a breakthrough capability that remains critically important to many production processes today. A film editor spent most of his or her time thinking about the flow of the content; a video editor was typically a technician who understood what made the complicated process tick and fixed the often-broken equipment.

As hardware costs and sophistication increased, producers demanded less expensive facilities in which to make first-cut decisions. The software and other tools were recreated in cheaper incarnations built around the then-new U-matic 3/4-inch videotape format, a medium less prone to technical complications and cheaper to run. In some environments the upward pressure began again, with offline editing booths acquiring editor-controlled audio consoles and larger production switchers. In some markets these tools were even used for the final cut.

However, the ability to replicate the film production process, which is essentially nonlinear in nature, had not yet been achieved. In a historical footnote, CMX produced the first nonlinear electronic editing system. The CMX 600 was intended strictly for offline work: it was monochrome, and the quality of its preview image was never intended for more than first-cut purposes. One could, however, take the decision list it output to punched tape and feed it to a CMX online editing system to auto-conform an edit master from the source material. Only a few CMX 600s were made, notably used in Hollywood by CBS with some success. The system cost hundreds of thousands of dollars, which in the mid-1970s was serious cash. It would be more than a decade before nonlinear editing returned as a commercial product.
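
To make the auto-conform idea concrete, here is a hedged sketch of reading events from an edit decision list. The field layout follows the widely known CMX-style EDL convention (event number, source reel, track, transition, then source and record in/out timecodes); the CMX 600's actual punched-tape format differed, so treat this purely as an illustration of decisions-as-data:

```python
# Illustrative sketch of parsing CMX-style EDL event lines: the decision
# list is data, so an online system can re-execute ("auto-conform") the
# offline editor's cuts against the original source tapes. Real EDLs
# also carry comments and effects, which are not handled here.

from dataclasses import dataclass

@dataclass
class Event:
    number: int
    reel: str
    track: str       # V = video, A = audio, etc.
    transition: str  # C = cut, D = dissolve, etc.
    src_in: str
    src_out: str
    rec_in: str
    rec_out: str

def parse_event(line: str) -> Event:
    parts = line.split()
    return Event(int(parts[0]), parts[1], parts[2], parts[3],
                 parts[4], parts[5], parts[6], parts[7])

sample = "001  TAPE1  V  C  01:00:00:00 01:00:05:00 00:00:00:00 00:00:05:00"
print(parse_event(sample))
```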

Other media were used in other attempts at nonlinear editing. Lucasfilm invested in the creation of a system called EditDroid (and its companion SoundDroid), using analog laserdiscs as the playback medium. The theory was simple: provide enough copies of the media on enough machines, and random-access editing would look as if it were using a single copy of the media in a nonlinear process. Too complicated and too expensive for general use, EditDroid and some similar competitors withered.

The idea behind the Droid and the CMX 600 was critical to the future of editing. Real-time playback with immediate nonlinear access to any scene would make practical a whole new class of tools. It still would not quite emulate film, but if the software controlling the process was sophisticated enough, the editor might escape the technology and return to thinking principally about content. To a large measure the cat was finally out of the bag, and the effect on the production process was profound.

When it became possible to compress video, put it on computer disks and play it back in a truly random-access manner, the Holy Grail was within sight. Avid, and then a growing number of competitors, achieved considerable commercial success as soon as the technology became practical. At first these rudimentary computer editing systems were still offline tools, but as the quality of compression and the speed of computers improved, they moved ever closer to the mainstream of electronic editing.

Today, products we generally categorize as nonlinear editors deliver fully acceptable quality for most editing needs, including uncompressed editing of both 525-line video and HDTV. This history lesson may have taken the long way around, but it is clear that we have indeed come a long way from razor blades for editing television content. Computer editing has now begun to move down to the home PC, where IEEE 1394 interfaces to remarkable consumer cameras permit convincing quality and considerable sophistication in widely distributed editing products.

Now we have software that can provide many of the functions of a very expensive linear editing bay. But despite that progress, we still do not have software tools that can totally replace certain linear editing functions, nor all of the capabilities of a fully configured linear editing bay. One of the limitations of nonlinear processes is the acquisition process itself.

In film, the camera original is seldom edited directly. In early razor-blade videotape editing, the same was true. The original is simply too valuable to risk losing in the editing process. Film editing eventually conforms the camera original to the edited version, producing a clean copy only after the decisions are final.

Nonlinear editors do the same thing, for different reasons. Fundamentally, the camera original (videotape in almost all cases) is transferred, or dubbed, to computer-accessible media because most systems cannot work any other way. Thus “load” time must be allocated before editing can begin. When nonlinear editors first became available and disk space was quite limited, this was a severely limiting factor. A one-hour documentary with a 20:1 shooting ratio became an exercise in media management at least as complicated as linear editing. With disk space now as cheap as paperclips, load time is considerably less of an issue. However, the load-time problem is not eliminated by the quantity of bits available.
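
For a sense of the numbers, a quick back-of-the-envelope calculation; the 25 Mb/s bit rate is an assumption, roughly DV-class, and real formats vary widely:

```python
# Rough sketch of why load time and disk space both mattered: a one-hour
# documentary shot at a 20:1 ratio means ingesting 20 hours of source
# material, and a real-time dub takes those same 20 hours no matter how
# cheap the disks get. The bit rate below is an illustrative assumption.

program_hours = 1
shooting_ratio = 20
bitrate_mbps = 25  # assumed video bit rate, megabits per second

source_hours = program_hours * shooting_ratio
gigabytes = source_hours * 3600 * bitrate_mbps / 8 / 1000  # Mb -> MB -> GB
print(f"{source_hours} hours of source ≈ {gigabytes:.0f} GB to load")
# 20 hours at 25 Mb/s ≈ 225 GB: trivial today, prohibitive in the early 1990s.
```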

Imagine a single shot brought to a session on tape. To use it, one must ingest (load) it into the system and then proceed with the edit. What if there are dozens of such shots, as there might be in a news editing environment? It can actually be less efficient to edit nonlinearly in such circumstances. The ability to preview non-destructively, and to use features like fit to fill at will, makes nonlinear systems shine. But for a simple cuts-only program that is well understood and not subject to an interactive editing process, it may well be faster to lace up the tape, dump it to the master and move on.
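
As a small illustration of one such feature: fit to fill is a four-point edit in which the clip's playback speed is altered so the marked source material exactly fills the marked duration in the program. The calculation itself is simple (durations here are in frames, as in the timecode sketch earlier):

```python
# Sketch of the "fit to fill" calculation: a four-point edit retimes the
# source so that src_duration frames of material occupy rec_duration
# frames of program. Speeds above 1.0 play faster than real time.

def fit_to_fill_speed(src_duration: int, rec_duration: int) -> float:
    """Playback speed needed for src_duration frames to fill rec_duration."""
    return src_duration / rec_duration

# A 150-frame source shot filling a 100-frame hole plays at 1.5x.
print(fit_to_fill_speed(150, 100))  # 1.5
```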

Linear editing systems descended from the early CMX 300 can provide features that strictly nonlinear systems cannot. They can operate in a hybrid environment, for instance, utilizing disk drives for random-access features while also controlling “linear tools” like VTRs and production switchers, achieving the best of both worlds. Sony, Accom and others provide just such systems, offering the freedom to complete a project, or portions of one, in a linear fashion when appropriate while drawing on the essence of nonlinear techniques as well.

The future holds interesting possibilities as the power of general-purpose computers reaches performance levels orders of magnitude beyond those used in the first nonlinear systems. One powerful concept that will certainly come to fruition is “proxy editing.” Consider doing all your editing on inexpensive platforms using low bit rate proxies of the actual media, with modest quality but no reduction in production capability. The metadata representing all of your decisions is then applied to the actual media, stored elsewhere in the same computing environment, on platforms with greater capability and more sophisticated algorithms. With this approach, many people could edit the material simultaneously and non-destructively. It would also allow an environment in which the expensive tools needed to do high-quality work need be purchased only once; the proxy editing stations could be much less expensive, turning back the clock to the early days when offline editing was done in less expensive bays with U-matic proxies of high-quality media.

At the end of a session you might simply hit the print button, and the final copy of the program would be assembled and delivered over the WAN to the distributor for release to air, while the original media and the proxies remained in a secure environment controlled by the owner of the content.
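
As a hedged sketch of how such a proxy workflow might be structured (the names and data structures below are purely illustrative, not any real product's API), the key property is that one set of decision metadata can be resolved against either a proxy store or the full-quality masters:

```python
# Illustrative sketch of proxy editing: decisions are metadata (an edit
# list), previewed against low bit rate proxies and later "printed" by
# resolving the same list against the full-quality masters on a more
# capable platform. All identifiers here are hypothetical.

from dataclasses import dataclass

@dataclass
class Cut:
    clip_id: str  # identifies the clip in both the proxy and master stores
    src_in: int   # in-point, frames
    src_out: int  # out-point, frames

def conform(edit_list: list[Cut], media_store: dict[str, str]) -> list[str]:
    """Resolve an edit list against a media store (proxy or full-res).

    The same decision metadata drives both passes: editors preview with
    the proxy store; the final print resolves against the masters."""
    return [f"{media_store[cut.clip_id]}[{cut.src_in}:{cut.src_out}]"
            for cut in edit_list]

decisions = [Cut("shot_01", 0, 120), Cut("shot_07", 30, 90)]
proxies = {"shot_01": "proxy/shot_01.lowres", "shot_07": "proxy/shot_07.lowres"}
masters = {"shot_01": "vault/shot_01.master", "shot_07": "vault/shot_07.master"}

print(conform(decisions, proxies))  # what the editor previews
print(conform(decisions, masters))  # what the print button assembles
```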

John Luff is vice president of business development for AZCAR USA.
