Knowledge is power

In the good old days, storing and archiving sound and picture content was a fairly straightforward task for content providers. As long as their chosen medium was stable and their repository was suitable and safe, they simply labeled the film reel or videotape and then placed it carefully and neatly on an appropriate shelf (usually in their post-production house of choice).

A few notes might have been added — scribbled on a piece of paper — that revealed in more detail what was on a particular piece of content. And those forward-thinking enough within the industry might even have catalogued the library in some fashion. Either way, there on the shelf the content would sit until someone needed to use it again. Once it became less useful, this content would then be moved to long-term storage, somewhere remote but safe, where it was doubtless forgotten about. Easy — job done.

Fast-forward to today’s media landscape, though, and things are quite different, and quite a lot more complicated.

Mining the archive

Library content is one of the most valuable assets a producer or broadcaster can own. And its uses go far beyond simply having some extra footage to add to future programs. Such is the fragmentation of the viewing audience these days that an archive of clips or programs can be exploited, repurposed, reused, re-edited, sold, clipped and repackaged for a multitude of uses and a variety of devices, from mobile phones and tablets to PCs and televisions.

It’s a potential gold mine, but one that can be exploited only if the original content is accessible, easy to find and available quickly. This pretty much rules out the old-school option of sticking content in an old aircraft hangar somewhere.

It’s not just archive footage that can (and needs to) be exploited in this way, either; this is the cheesy “make once, use often” mantra that you hear so much about at trade shows.

The thing is, although content owners realize that their sound and pictures are valuable, many don’t know how to go about successfully exploiting them. Legacy workflows (and systems) mean that content is often not created efficiently enough to facilitate this approach, and, as a result, when current footage becomes archive footage, the opportunities are lost too. This is where digital comes in.

The general approach to “going digital” for many has been to devise a bespoke solution featuring different systems and technologies, protocols and conventions that happen to suit a particular environment.

Unfortunately, although this might work on a general level, it often simply replicates a tape-based way of working, replacing the videotape with a file-based medium of choice. This can make media delivery more complicated and may even negate the time- and cost-saving advantages that a digital workflow potentially brings.

At the same time, there is rarely any budget available to deliver any extra versions of content or to digitize an archive. And, if that wasn’t bad enough, broadcasters are starting to specify that programs should be delivered as files and not tape.

All of this leaves content owners with a bit of a conundrum.

Efficient and cost-conscious

On the one hand, content providers need to be more efficient when producing their content, and they need to be better at exploiting it once they have produced it. On the other, they have to make it happen within existing (and ever-reducing) budgets.

Constructing a new digital way of working using different bits of existing (or even new) technology produces huge inefficiencies, mainly because so many broadcast systems are not interoperable. Quite simply, they don’t talk to one another. As a result, few of the necessary processes required for “make once, use often” can be automated. This is a key issue, as any manual work that needs to be done takes creative people away from what they are best at — being creative.

In addition, there are the hidden costs that arise from interoperability issues, the rework that follows system failures, and the unnecessary expense incurred as a direct result of legacy tools (such as the need for multiple tape versions).

Those same legacy tools can typically be maintained by only a handful of people who actually know how to use them, and, with the passage of time, this talent pool will continue to shrink as the cost of that knowledge grows in direct relation to its scarcity.

Further, as if the lack of automation and interoperability weren’t enough, successful exploitation of content also requires efficient tools for search, regardless of whether the systems work together.

In short, the creation and distribution of media in a tapeless world has to change. The broadcast industry needs technologies that allow content owners and buyers to understand their content and also ensure that value can be added to it to make it appropriate to target audiences and platforms.

The adoption of business practices from the enterprise IT world, and its understanding of the management of assets, should form the basis of this change. Data has to be at the heart of the new workflow. Let’s explore how this might work.

Enterprises have been successfully managing vast amounts of data for some time, having realized the benefits of the migration from analogue to digital working.

With IP (Internet Protocol, not Intellectual Property) now the connectivity technology of choice, the broadcast industry can do the same if it learns from that experience.

Data (and metadata in particular) is everything. It will empower and drive change across the entire value chain. Data is so much more than a one-line entry in the listings magazine or a series of data tags that identify, describe and classify video content. It is not a static, isolated, fixed-point-in-time descriptor. It is the capture of a much wider, richer sentiment from the people who not only watched the content, but also talked about it and engaged further with it. This detail, captured from the on-screen activity itself, makes the content richer. The more it can be fed back into the workflow, the more it can be used to drive meaningful, increased automation.

By adding living, experiential feedback as data fields, our understanding of the content improves, allowing us to see our content as viewers do. This allows for far greater exploitation than production-added data alone. But it doesn’t stop there. It can also help to improve workflow processes, provide full visibility and accountability of costs, create new values around the content itself, allow for informed decisions around the brand, and generate new monetization opportunities such as derivative content or new platforms.
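
As a minimal sketch of what this could look like in practice (the data model, field names and use of Python here are illustrative assumptions rather than any defined standard), an asset record might carry its static production metadata alongside a set of living, experiential fields that are updated as audiences watch, discuss and engage with the content:

```python
from dataclasses import dataclass, field
from typing import List


@dataclass
class ProductionMetadata:
    """Static, fixed-point-in-time descriptors added during production."""
    asset_id: str
    title: str
    synopsis: str
    duration_s: int
    tags: List[str] = field(default_factory=list)


@dataclass
class ExperientialMetadata:
    """Living feedback captured after the content has reached its audience."""
    audience_sentiment: float = 0.0   # e.g. -1.0 (negative) to 1.0 (positive)
    social_mentions: int = 0          # how often viewers talked about it
    completion_rate: float = 0.0      # share of viewers who watched to the end
    trending_topics: List[str] = field(default_factory=list)


@dataclass
class Asset:
    """An archive asset enriched with both kinds of knowledge."""
    production: ProductionMetadata
    experiential: ExperientialMetadata = field(default_factory=ExperientialMetadata)
```

Because the experiential fields keep changing as viewers respond, downstream tools can treat them as live inputs for search, automation and valuation rather than as a one-off catalogue entry.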

Knowledge-oriented Workflow

For this to become a reality, we need to move away from the idea of a static linear workflow and work toward a flexible cycle where any work activity can be started at any point, with an intelligent platform as the hub. We call this a “Knowledge-oriented Workflow,” or KnOW.
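
One way to picture that hub, purely as a rough sketch (the KnowledgeHub class and event names below are hypothetical, and the article does not prescribe an implementation), is a lightweight publish-and-subscribe core: each work activity registers its interest in events, so any activity can be triggered at any point in the cycle rather than in a fixed order.

```python
from collections import defaultdict
from typing import Callable, Dict, List


class KnowledgeHub:
    """Minimal publish/subscribe hub: any activity can react to any event."""

    def __init__(self) -> None:
        self._subscribers: Dict[str, List[Callable[[dict], None]]] = defaultdict(list)

    def subscribe(self, event_type: str, handler: Callable[[dict], None]) -> None:
        self._subscribers[event_type].append(handler)

    def publish(self, event_type: str, payload: dict) -> None:
        for handler in self._subscribers[event_type]:
            handler(payload)


hub = KnowledgeHub()

# Activities plug into the hub instead of sitting in a fixed, linear chain.
hub.subscribe("asset.ingested", lambda e: print("Start QC for", e["asset_id"]))
hub.subscribe("asset.ingested", lambda e: print("Extract metadata for", e["asset_id"]))
hub.subscribe("audience.feedback", lambda e: print("Enrich knowledge for", e["asset_id"]))

hub.publish("asset.ingested", {"asset_id": "A123"})
hub.publish("audience.feedback", {"asset_id": "A123", "sentiment": 0.8})
```

The point of the sketch is the shape rather than the code: activities are decoupled from one another and coordinate through shared knowledge instead of a fixed production line.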

Figures 1 and 2 show two models that illustrate the opportunity a KnOW-based approach offers broadcasters and content owners. In Figure 1, the level of knowledge deployment is quite limited, whereas in Figure 2, anthropomorphic knowledge puts the media into its social context.

Figure 1. Without a Knowledge-oriented Workflow, the level of knowledge deployment is limited.

By understanding how television is watched, it is possible to make improvements to any processes that are currently manual.

Extending the knowledge about what you have (as well as where it is and what it is) will enable better-informed business decisions. For example, if you have 100,000 assets but are only selling 5,000, it may be because no one knows what is in the library. But more likely, it is because not enough people within your organization know what the audience wants. Or, even if they do, by the time the material is made available, the moment has passed.

Figure 2. Anthropomorphic knowledge puts the media into its social context.

A knowledge-based approach solves that problem by allowing content owners to have a better understanding of the context and sentiment associated with a piece of footage and enabling a deeper appreciation of who wants to watch what, where and when (be it commissioned or acquired material). In turn, these tools enable the identification of content that was highly valued (or not) by an audience and allow for a better understanding of trends, making it possible to respond appropriately and quickly.
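
As a purely hypothetical illustration of that kind of tool (the scoring formula, weights and field names are assumptions made for the example), an archive could be ranked against current audience signals so that valued, relevant material surfaces while the moment is still live:

```python
def demand_score(asset, trending_topics, sentiment_weight=0.5):
    """Rough, illustrative score: how well an archive asset matches current demand."""
    topic_overlap = len(set(asset["tags"]) & set(trending_topics))
    return topic_overlap + sentiment_weight * asset["audience_sentiment"]


archive = [
    {"id": "A001", "tags": ["election", "debate"], "audience_sentiment": 0.9},
    {"id": "A002", "tags": ["cookery"], "audience_sentiment": 0.4},
]

trending = ["election", "polling"]
ranked = sorted(archive, key=lambda a: demand_score(a, trending), reverse=True)
print([a["id"] for a in ranked])  # the assets most worth releasing right now
```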

Workflow processes can also benefit from improved knowledge. Automation can be extended to include access services, for example, while recommendation engines on on-demand platforms can be made that much richer, learning about a user and responding accordingly. The knock-on effect is that users stay longer on the site or portal.
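
A simple sketch of that kind of enrichment (with hypothetical data and an assumed weighting, not any particular platform’s algorithm) might blend what a viewer has watched with what they have actively engaged with:

```python
def recommend(catalogue, watch_history, engagement_tags, top_n=3):
    """Illustrative recommender: favour items that overlap with what the viewer
    watched and, more heavily, with what they actively engaged with."""
    def score(item):
        watched = len(item["tags"] & watch_history)
        engaged = len(item["tags"] & engagement_tags)
        return watched + 2 * engaged  # engagement counts double, by assumption
    return sorted(catalogue, key=score, reverse=True)[:top_n]


catalogue = [
    {"id": "E10", "tags": {"drama", "period"}},
    {"id": "E11", "tags": {"drama", "crime"}},
    {"id": "E12", "tags": {"comedy"}},
]

print(recommend(catalogue, watch_history={"drama"}, engagement_tags={"crime"}))
```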

Dailies management is also easier because of the extra data — as is intelligent search/query. And let’s not forget the financial benefits: KnOW makes it possible to achieve better ad revenues or a higher audience appreciation index by releasing material in a timely and context-sensitive way.

A KnOW recognizes that all the linear processes inside a media corporation need to be connected and it links them together, gluing, for example, a rights system to playout, catch-up, archive or workflow. In short, it allows organizations to know what assets they have, where those assets are, what rights they have and what they can do with them.
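
To make that linkage concrete in a deliberately simplified form (the rights records, platform names and lookup below are assumptions made for illustration), every downstream platform could consult a shared rights check before publishing an asset:

```python
from datetime import date

# Hypothetical rights records keyed by asset and platform.
rights = {
    ("A123", "catch_up"): {"start": date(2024, 1, 1), "end": date(2025, 12, 31)},
    ("A123", "playout"):  {"start": date(2023, 6, 1), "end": date(2026, 6, 1)},
}


def can_publish(asset_id: str, platform: str, on: date) -> bool:
    """True if a rights window exists for this asset and platform on the given date."""
    window = rights.get((asset_id, platform))
    return bool(window) and window["start"] <= on <= window["end"]


print(can_publish("A123", "catch_up", date(2025, 3, 1)))      # True
print(can_publish("A123", "archive_sale", date(2025, 3, 1)))  # False: no rights record
```

In a full KnOW the same check would draw on the shared knowledge base rather than a hard-coded table, but the principle holds: rights, playout, catch-up and archive all consult one source of truth about what each asset is and what can be done with it.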

In conclusion, there are many benefits to be had from the migration to a file-based workflow. But there are an equal number of potential pitfalls to avoid along the way. A knowledge-based, data-driven approach can serve not only to avoid these pitfalls, but also to help a content owner or broadcaster move away from technology dependence and the complexity it brings, all while releasing valuable resources.

Joe Trainor is managing director at Deluxe Media Technologies.
