It's An Analog Life
The beauty of digital is in the eye of the beholder
JOHNSTON, IOWA: One of the projects that IPTV has begun working on is the conversion of our studios to HD production. To that end, we have begun evaluating HD production switchers, graphics, still stores, monitors, and of course, cameras. It is the camera system selection process that I really want to address in this article.
CCD REVOLUTION
We have a long history of replacing cameras in the studio with field cameras. In the early days of three- and four-tube studio cameras, the primary performance-determining components of the camera system were the tubes. Bigger tubes meant better overall image quality, and the zoom lens was a relatively new technology. Everything from the lens to the home viewer's receiver was analog, and each device along the way added its own unique degradation and distortion to the image.
Enter the CCD imager! I first started seeing these devices while working at KHON-TV in Honolulu. I think the first solid-state imager camera I ever saw was a prototype of a Hitachi unit, designated the "SK-1." It was definitely not ready for prime time, but it certainly pointed toward the future and the end of the tube camera.
While at KHON we began converting from our Sony BVP-250s and BVP-330s to the BVP-5 with BVV-1 Beta back. With the introduction of the Sony BVP-7, there was finally a CCD camera whose performance exceeded that of our RCA TK-44s, and we rolled them into studio service. There were certainly three-tube studio cameras that outperformed the BVP-7s, but the cost differential was so great that we never even did a shootout. This pattern continued for me at WSAZ, where I replaced TK-45s with Ikegami HL-55s, and at KRQE, where Hitachi 110s were replaced with Sony EFP cameras.
In all of these cases, our decision was to use EFP lenses rather than the more expensive studio glass. Clearly, a case could be made that the studio glass outperformed the EFP glass, but the delivery system was still analog and the performance-cost differential didn't make sense. The improvement we saw moving from tube cameras to solid-state was substantial, while the added performance of the studio glass was virtually indistinguishable to the home viewer because of all the previously mentioned degradations and distortions.
Now as I look at converting another studio, I find that things have changed. I still think that the EFP camera can meet the demands of the studio, and I know I am not alone in this belief. Visit the manufacturers' Web sites and you'll find that some don't offer a true studio camera at all, but rather an EFP camera with a studio buildup that adds the creature comforts and capabilities that studio camera operators expect. A comparison of the specifications from the camera manufacturers that do offer distinct studio models and compatible portable units shows virtually no difference in technical performance.
THE DIGITAL CHAIN
So what has changed? Virtually everything in the chain is now digital, all the way from the camera input to the consumer's display. All that unique degradation and distortion that used to mask problems is now being systematically corrected, so that in theory the quality at the origin is identical to the quality at the destination. That is a considerable change from where we came from, and all indicators point to system quality increasing over time. So the only analog devices left in the process are, at the studio, the lenses and, at the receiver, the eyes and brains of the viewers. Given that viewers will watch reality programming, I suspect there is little we can do to improve the eye/brain end of the chain, so we are better off focusing on the lens/image-capture end of the system.
For me, this has been a rather new experience. I come from an environment where lens selection is based more on "religious fervor" than on any real performance issue. If technological considerations do actually enter into the equation, it is usually more along the lines of service than performance. In Hawaii we were a Fujinon shop because their service was better than Canon's. The same was true in Albuquerque. Here at IPTV we're a Canon shop because their service is better than Fujinon's. It is somewhat regional and changes over time.
As we move forward from this point, however, we have to make sure that the lens decision is based not on transient factors but on demonstrable performance specifications. Just looking at zoom ratio, focal lengths and minimum object distance isn't necessarily going to lead to the best choice.

In the world of the 4:3 analog home receiver, what are the considerations at the studio end when setting up a shot? Most places I have been associated with have some very basic ideas. On the news set, they want the talent in focus with the proper amount of skin-detail enhancement (selective image blurring). They want enough light to allow the viewer to see the set and, in some cases, the activity in the newsroom behind the anchor. They want the lens aperture adjusted to defocus the set somewhat, so the viewer gets a sense of depth without losing touch with the activity going on. For the vast majority of the time, there is a lower-third super covering most of the desk and an over-the-shoulder graphic box covering the somewhat defocused news set. All of this is packed into a 4:3 image. Now the image is considerably wider and dramatically sharper. In this scenario, do the same directorial decisions make sense? Given the added screen space, does the aperture setting for the lens remain the same to give the sense of depth while maintaining the integrity of the shot?
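To put a rough number on that aperture question, the classic depth-of-field relationships are a useful starting point. The following is a minimal Python sketch, not a measurement of any real lens: it assumes a hypothetical 2/3-inch imager with a circle of confusion of about 0.011 mm, a 50 mm focal length and an anchor seated roughly 3 m from the lens, all of which are illustrative values rather than anything from this article.

```python
# Approximate depth-of-field for a hypothetical studio shot.
# Assumed values (illustrative only): 2/3-inch imager, circle of
# confusion ~0.011 mm, 50 mm focal length, subject at 3 m.

def depth_of_field(focal_mm, f_number, subject_m, coc_mm=0.011):
    """Return the approximate (near, far) in-focus limits in meters."""
    f = focal_mm / 1000.0                                   # focal length, m
    hyperfocal = (focal_mm ** 2) / (f_number * coc_mm) / 1000.0 + f
    near = subject_m * (hyperfocal - f) / (hyperfocal + subject_m - 2 * f)
    if subject_m >= hyperfocal:
        return near, float("inf")
    far = subject_m * (hyperfocal - f) / (hyperfocal - subject_m)
    return near, far

# Compare a few working apertures for the assumed shot.
for stop in (2.8, 4.0, 5.6, 8.0):
    near, far = depth_of_field(50, stop, 3.0)
    print(f"f/{stop}: sharp from {near:.2f} m to {far:.2f} m "
          f"(depth {far - near:.2f} m)")
```

Under these assumptions the in-focus zone roughly doubles between f/4 and f/8, which is exactly the kind of trade-off between talent sharpness and set defocus that the wider, sharper picture forces you to revisit.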
As television moves more toward the shape and quality of movies, shot decisions become more in line with cinematography, where there is a lot more creative input and manipulation at the lens than in the simple television shot. So in order to ensure that the image quality is as good as it can be, the lens decision needs to be based on sound knowledge of the science and what it means. You should also consider what factors are important based on the application. A lens in a studio operates in a very controlled environment and, in general, on a very limited number of shots compared with a lens in a field production application or sporting event.
Probably the most important specification to understand is the MTF (Modulation Transfer Function), or spatial frequency response, which determines the sharpness of an imaging system. If you look at a multi-burst image on a monitor, you get an idea of the MTF of the system you're looking at. The more closely spaced vertical lines you can see, the better the MTF and the sharper the image. So in evaluating a lens, all you have to do is focus on a multi-burst chart and look at the results on a monitor. The only problem is that you are actually looking at the MTF of the whole system, not just the lens. If it is a zoom lens, you are looking at the MTF contribution of the lens at one specific point in its operational range. Do you think that, as the 30 or 40 elements within the lens move, the MTF remains constant? No way. Even more challenging, the MTF performance of the lens will also vary with the frequency of the light. So in order to effectively evaluate the performance of the lens, you'll need to make multiple tests at various apertures and zoom ratios as well as shoot typical scenes. To maintain consistency, the tests should all be done on the same camera using the same signal path.
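The "whole system" problem can be made concrete with a little arithmetic: to a first approximation, the MTF of a cascade of components is the product of the individual MTFs at each spatial frequency. The sketch below is a simplified Python model, assuming a hypothetical Gaussian roll-off for the lens and a sinc-shaped pixel-aperture response for an assumed 5-micron photosite; the curves are illustrative, but the multiplication is the point.

```python
import math

# Simplified cascade model: the system MTF is (approximately) the product
# of the component MTFs at each spatial frequency. The curves below are
# illustrative assumptions, not measurements of any real lens or camera.

def lens_mtf(lp_per_mm, rolloff=80.0):
    """Hypothetical lens response modeled as a Gaussian roll-off."""
    return math.exp(-(lp_per_mm / rolloff) ** 2)

def sensor_mtf(lp_per_mm, pixel_pitch_mm=0.005):
    """Pixel-aperture (sinc) response for an assumed 5-micron photosite."""
    x = lp_per_mm * pixel_pitch_mm
    return 1.0 if x == 0 else abs(math.sin(math.pi * x) / (math.pi * x))

print(" lp/mm   lens  sensor  system")
for lp in (10, 20, 40, 60, 80, 100):
    l, s = lens_mtf(lp), sensor_mtf(lp)
    print(f"{lp:6d}  {l:5.2f}  {s:6.2f}  {l * s:6.2f}")
```

Because the lens contribution is buried inside that product, comparing two lenses only tells you something if the camera, its settings and the signal path stay fixed between tests.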
Finally, don't go nuts trying to find a lens that will give you perfect imagery at 1,000 television lines per picture height (TVL/ph). Look at performance at 800 TVL/ph and, more importantly, below. Those are the spatial frequencies where the majority of the image information lives. The stuff above that is fine detail; it is fun to look at from an engineering point of view but doesn't do much for the viewer. Assuming there is any motion in the scene, that detail is gone anyway. If it is a static shot, you are now enhancing the lines on the face of the talent, and that is definitely not going to make it past the skin-detail circuit.
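For context on what those TVL/ph numbers mean at the lens, television lines per picture height can be converted to line pairs per millimeter on the imager by dividing by twice the image height. The short Python sketch below does that arithmetic, assuming a nominal 2/3-inch 16:9 imager with an active image height of roughly 5.4 mm; that sensor dimension is an assumption for illustration, not a specification quoted here.

```python
# Convert television lines per picture height (TVL/ph) to line pairs per
# millimeter on the imager. Assumes a nominal 2/3-inch 16:9 sensor with an
# active image height of about 5.4 mm (illustrative value).

IMAGE_HEIGHT_MM = 5.4

def tvl_to_lp_per_mm(tvl_ph, image_height_mm=IMAGE_HEIGHT_MM):
    """Two TV lines (one black, one white) make one line pair."""
    return tvl_ph / (2.0 * image_height_mm)

for tvl in (600, 800, 1000):
    print(f"{tvl:4d} TVL/ph  ≈ {tvl_to_lp_per_mm(tvl):5.1f} lp/mm on the assumed imager")
```

Under that assumption, 1,000 TVL/ph corresponds to better than 90 lp/mm at the sensor, which is the kind of contrast that rarely survives motion, compression and the skin-detail circuit anyway.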
In conclusion, remember that life is analog; all digital does is provide a very flexible and largely lossless way to transport and manipulate the content. Both ends of the chain will always be analog, and making sure that the last analog device before the A/D is as good as it can be will ensure that the image delivered from the last D/A is a true representation.
Bill Hayes is the former director of engineering and technology for Iowa PBS and has been at the forefront of broadcast TV technology for more than 40 years. He is a former president of IEEE's Broadcast Technology Society, a Partnership Board Member of the International Broadcasting Convention (IBC), and has contributed extensively to SMPTE and ATSC. He is a recipient of Future's 2021 Tech Leadership Award and a SMPTE Fellow.