Media asset-management systems
If you have it but you can't find it, you don't have it. In a sentence, this describes why media asset-management (MAM) systems are important. They help the user locate content. MAM systems have been a part of broadcast operations for years. The first MAM systems were file cards and sheets of paper. As collections grew, broadcasters and post-production facilities began to use computers to track material in their archives. The archive was viewed as an end-of-pipe process, and MAM systems were largely confined to simple catalog systems.
Figure 1. In this storyboard view from Artesia’s TEAMS MAM system, the user is presented with keyframes to quickly locate important content.
MAM systems have evolved significantly, and the role of the archive has changed dramatically. MAM systems now can locate and track content throughout a facility. Broadcasters have created a new category of archive: shared-content storage that often operates at the center of networked production facilities.
MAM anatomy
MAM systems have several functional areas, including ingest, annotation, cataloging, storage, retrieval and delivery. Although these are described linearly here, work may occur simultaneously in these areas.
Ingest
During the ingest process, storage systems capture essence (video and audio) in digital form. Usually, they capture low-resolution and full-resolution content simultaneously. Ingest operators or automated systems link the captured material to a metadata record in the MAM system, and enter some preliminary information. Once the ingest process has started, the essence may be available for further processing. (Some MAM systems require that the ingest process be complete before users can begin working with the system.) Most MAM systems can also track material that is not ingested, such as tapes on shelves.
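To make the idea concrete, the sketch below shows one way an ingest process might link captured essence to a metadata record while also tracking shelf material. It is a minimal sketch; the field names and the register_ingest() helper are illustrative assumptions, not any vendor's API.

```python
# Illustrative sketch of an ingest-time metadata record (not a real MAM API).
from dataclasses import dataclass, field
from datetime import datetime
from typing import Optional

@dataclass
class AssetRecord:
    asset_id: str
    title: str = ""                        # preliminary info entered at ingest
    hires_path: Optional[str] = None       # full-resolution essence
    proxy_path: Optional[str] = None       # low-resolution browse copy
    shelf_location: Optional[str] = None   # material tracked but not ingested (tapes on shelves)
    ingested_at: Optional[datetime] = None
    metadata: dict = field(default_factory=dict)

def register_ingest(asset_id, title, hires_path, proxy_path):
    """Create the metadata record as capture begins; work can continue while ingest runs."""
    return AssetRecord(asset_id=asset_id, title=title,
                       hires_path=hires_path, proxy_path=proxy_path,
                       ingested_at=datetime.now())

# A tape on a shelf is tracked in the catalog even though no essence is captured.
shelf_item = AssetRecord(asset_id="T-0042", title="Election night raw",
                         shelf_location="Vault 3, shelf B")
```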
Annotation
During annotation, a user types in notes while viewing the content. Typical annotation clients include VTR-like functions that allow the annotator to pause the content while entering notes. It is extremely important that the annotation and VTR commands be intuitive and quickly accessible from the keyboard. Most users perform a quick annotation as the system ingests the material. For important content, they may go back and perform an in-depth analysis.
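As a rough illustration, timecoded notes might be stored along the lines sketched below; the structure and field names are assumptions rather than any product's data model.

```python
# Illustrative sketch of timecoded annotations attached to an asset.
from dataclasses import dataclass

@dataclass
class Annotation:
    asset_id: str
    timecode_in: str    # e.g. "00:01:03:00", where the annotator paused playback
    timecode_out: str
    note: str

notes = [
    Annotation("A-1001", "00:00:12:00", "00:00:45:00", "Wide shot of crowd outside city hall"),
    Annotation("A-1001", "00:01:03:00", "00:01:20:00", "Mayor sound bite on budget vote"),
]
```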
Cataloging
During this extremely important phase, users enter information that others will later use to retrieve the content. They typically enter title, date, location and keywords, along with other information such as segment length, overall length, and talent. Since this information is critical to retrieval, users might employ a limited thesaurus to restrict the entries allowed in certain fields. The MAM system may also use automated cataloging technologies to capture keyframes, closed-caption text and other information. MAM systems can also populate their catalogs with information from editing and automation systems.
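The sketch below illustrates the thesaurus (controlled-vocabulary) idea; the term list and the validate_keywords() helper are invented for illustration only.

```python
# Sketch of restricting a catalog field to a controlled vocabulary.
ALLOWED_KEYWORDS = {"election", "weather", "sports", "city-council", "breaking"}

def validate_keywords(keywords):
    """Accept only terms found in the thesaurus; anything else is rejected at entry time."""
    rejected = [k for k in keywords if k.lower() not in ALLOWED_KEYWORDS]
    if rejected:
        raise ValueError(f"Not in thesaurus: {rejected}")
    return keywords

validate_keywords(["election", "city-council"])   # accepted
# validate_keywords(["eleciton"])                 # raises ValueError, catching the typo early
```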
Storage
MAM systems continuously store material. During the initial ingest process, the MAM stores essence and metadata. Additional metadata is stored during annotation, cataloging and automated analysis.
Retrieval
Users employ the retrieval process to locate, identify and view previously cataloged and annotated content. The first and most common way to retrieve material is a text search. The retrieval client may use simple or sophisticated search techniques to look through the catalog and annotations and return a list of content matching the search criteria. More commonly, though, the system presents the user with a view that includes still images of the search results. In some cases, the client will even take the user to the point in the content where it found a match. This can be useful if you are trying to locate a particular scene in a two-hour movie.
Figure 1 shows a screenshot from a MAM product made by Artesia called TEAMS. In this storyboard view, the system presents the user with keyframes to quickly locate important content.
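A bare-bones text search over the catalog and annotations might look like the following sketch, which continues the illustrative record shapes above; a real retrieval client would add ranking, stemming and keyframe display.

```python
# Minimal text-search sketch over catalog metadata and timecoded annotations.
def search(query, assets, annotations):
    q = query.lower()
    hits = []
    for asset in assets:
        # Match against catalog fields such as title and keywords.
        if q in asset.title.lower() or any(q in str(v).lower() for v in asset.metadata.values()):
            hits.append((asset.asset_id, None))
    for note in annotations:
        # Match against timecoded annotations, pointing into the content itself.
        if q in note.note.lower():
            hits.append((note.asset_id, note.timecode_in))
    return hits

# e.g. search("mayor", assets, annotations) might return [("A-1001", "00:01:03:00")],
# letting the client cue the low-resolution proxy to that spot.
```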
Delivery
In some cases, the retrieval function is also the delivery function. In other configurations, the MAM system delivers high-resolution content by file transfer. Sometimes, the MAM retrieval process is part of a larger retrieval function where users view low-resolution proxies to quickly locate material in the system or in an offline library.
Workflow support
Several years ago, production workflows were much more linear than they are today. A news feed would come into a facility, someone in the tape room would record the feed, an editor would pick up the tape and begin editing and, finally, the completed story would be delivered to the control room for integration into the evening news. Then if the story was particularly important, someone would put it on a shelf in the library and perhaps catalog its location for later retrieval.
But technology has enabled major changes to this workflow. In new workflows, the archive serves as a central repository for content, and the MAM system allows people to quickly locate the material they need. Several editors can work on the same source material at the same time to create different products. For example, a producer may send a particularly important news story directly to air as it arrives from the field. At the same time, the system feeds the material into a central repository. Editors begin creating rough-cut stories from the incoming feed almost immediately. Taped pieces begin appearing within a few minutes. As this is happening, different groups of editors may have already begun working on pieces for the 6:00 p.m. and 11:00 p.m. news.
The point is that many people may want access to the same content at the same time. This was difficult to do when editing systems were primarily tape-based. But as we move to networked editing environments, it becomes possible for the user to change from a linear workflow to a more collaborative environment. Once material is stored on a server and different groups begin putting various completed pieces back on the server, some sort of content-tracking system becomes critical. That is the function of a MAM system — to keep track of where content is located and help users find it.
The MAM dilemma
While MAM sounds great, there are challenges in implementing these systems. One challenge is figuring out how to pay for them. A MAM system typically concentrates its costs in one area while delivering value to that area and several others. Purchasing and installing a system, and training personnel to operate it, can cost a significant amount of money. The archive group may also bear ongoing costs in time and personnel for annotation and cataloging. The benefits, however, typically accrue to the system's users: researchers, post-production departments and on-air promotion people, for example. This usually means that the archive group gets hit with all the cost, while other departments receive the benefits.
Where does the metadata come from?
Another dilemma facing owners of MAM systems is how to obtain metadata such as cataloging information and annotations. It is one thing to go to a search engine on the Web, type in a word or phrase and have hundreds of likely Web pages appear. Search engines parse text to build databases that yield quick search results. It's quite another to search video content.
The dilemma for video is simple to explain, but difficult to resolve. Who do you designate to watch movies or news stories and type in the information that others will later use to retrieve the video or audio? If your organization already has an archive, it is likely that someone there is familiar with its contents. If the volume of new material entering your archive is low, then it may be possible for this person to enter detailed information on a scene-by-scene basis. But many larger organizations face a huge task, either because they have large amounts of new material coming into their facilities or because they have a huge backlog of material waiting to be cataloged. In either case, any single individual is likely to burn out quickly if asked to complete this task.
Faced with this problem, one facility hired students to help with its cataloging effort. While the students cataloged a great deal of material in a relatively short time, it was not long before users of the system found that some of the catalog information was missing, some was in error, and much was either irrelevant or did not include the terms people were likely to search for.
What's next?
But better solutions are appearing on the horizon. Cataloging technology is improving. Speech-to-text, on-screen text recognition, closed-caption capture and other automated tools may improve the cataloging process. Metadata-aware file formats such as MXF and AAF may also aid in metadata collection. Slate information from an MXF-capable camera may be retained in the MXF file, and editor comments may be held in an AAF file. Later, a MAM system can harvest metadata from these files and use it to locate a particular piece of video.
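Purely as an illustration of that harvesting idea, a MAM system might fold file-borne metadata into its catalog along these lines; the extract_slate_metadata() helper is hypothetical and stands in for a real MXF or AAF parser, which is not shown here.

```python
# Hypothetical sketch: merge metadata carried inside a file (e.g. slate fields
# from an MXF wrapper) into the MAM catalog record sketched earlier.
def extract_slate_metadata(path):
    # Placeholder: a real implementation would parse the file's header metadata.
    return {"scene": "12A", "take": "3", "camera_operator": "J. Smith"}

def harvest_into_catalog(asset, path):
    """Copy file-borne metadata into the catalog so later text searches can find it."""
    asset.metadata.update(extract_slate_metadata(path))
```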
Brad Gilmer is president of Gilmer & Associates, executive director of the AAF Association and executive director of the Video Services Forum.
Send questions and comments to: brad_gilmer@primediabusiness.com