Tick-tock talk

Have you heard of Intel’s “Tick-Tock” model? Essentially, it’s an extension of Moore’s Law, which, incidentally, is named for Intel co-founder Gordon Moore. Moore’s Law is an observation, not a natural law; it states that approximately every two years, the number of transistors on integrated circuits doubles. Moore first described his observation in a 1965 technical paper. For almost 50 years, the phenomenon has continued, with similar trends in processing speeds, memory sizes and pixel counts. Some say the two-year cycle is closer to 18 months. Others say it’s a trend nearing its end, perhaps in another decade or two.
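To make the doubling concrete, here is a minimal back-of-the-envelope sketch. The starting point of roughly 2,300 transistors (the 1971 Intel 4004) and the flat two-year doubling period are simplifying assumptions used only to show the shape of the curve, not a precise model of any product line:

```python
# Back-of-the-envelope Moore's Law projection.
# Assumes ~2,300 transistors as the 1971 starting point (Intel 4004)
# and a constant two-year doubling period -- illustrative numbers only.

START_YEAR = 1971
START_TRANSISTORS = 2_300
DOUBLING_PERIOD_YEARS = 2

def projected_transistors(year: int) -> float:
    """Rough transistor-count projection for a given year."""
    doublings = (year - START_YEAR) / DOUBLING_PERIOD_YEARS
    return START_TRANSISTORS * 2 ** doublings

for year in (1971, 1985, 2000, 2013):
    print(f"{year}: ~{projected_transistors(year):,.0f} transistors")
```

Run as written, the projection lands in the billions of transistors by the early 2010s, which is the right order of magnitude for modern desktop processors.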

Tock before tick

On the Intel-defined progress clock, as illustrated in Figure 1, a tock represents a new microarchitecture. A tick represents an advance in manufacturing technology, often known as a die shrink, optical shrink or process shrink.

Shrinking, or scaling as others may call it, is often achieved by advances in photolithography and fabrication processes. A tick reproduces essentially the same circuitry on a smaller scale, and it lowers costs in two ways. First, a silicon wafer of the same size yields more processor dies. Second, the research and development behind a tock’s new architecture is generally more expensive than the manufacturing refinements of a tick.
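As a rough illustration of the first point, here is a minimal sketch using a common dies-per-wafer approximation. The 300 mm wafer diameter and the two example die areas are assumptions chosen only to show the trend; they are not actual Intel figures:

```python
import math

# Common approximation for gross dies per wafer:
#   dies ~= pi * (d/2)^2 / A  -  pi * d / sqrt(2 * A)
# where d is the wafer diameter and A is the die area (same length units).
# The second term roughly accounts for partial dies lost at the wafer edge.

def gross_dies_per_wafer(wafer_diameter_mm: float, die_area_mm2: float) -> int:
    area_term = math.pi * (wafer_diameter_mm / 2) ** 2 / die_area_mm2
    edge_term = math.pi * wafer_diameter_mm / math.sqrt(2 * die_area_mm2)
    return int(area_term - edge_term)

WAFER_MM = 300  # assumed 300 mm wafer

# Hypothetical die areas before and after a shrink -- illustrative only.
for die_area in (160.0, 100.0):
    print(f"{die_area} mm^2 die: ~{gross_dies_per_wafer(WAFER_MM, die_area)} dies per wafer")
```

Shrinking the hypothetical die from 160 mm² to 100 mm² raises the yield from roughly 390 to roughly 640 gross dies per wafer, which is where much of a tick’s cost advantage comes from.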

An Intel tick is generally associated with a reduction of the physical width of circuit traces etched on chips. In 1971, that dimension was 10 micrometers. By 1985, it was one micrometer. The first Pentium chips were manufactured using a 0.8-micrometer node process. Today’s latest microprocessors, such as Intel’s fourth-generation “Haswell” Core i7 chips for desktop and mobile, are manufactured using a 22-nanometer node process, the same node introduced with the third-generation “Ivy Bridge” chips. The 22-nanometer Haswell chips are identified by the brand name Core ix-4xxx.

Next in line to hit the market are the company’s “Broadwell” chips, which use a 14-nanometer node process. Right now, Intel’s Broadwell tick is experiencing yield problems, so the introduction of 14-nanometer Broadwell chips in real products has been delayed until 2014.

The trend isn’t limited to any particular chip manufacturer, but Intel seems to have turned it into a profitable self-fulfilling prophecy.

Double-edged progress

Broadcast engineers know from experience that a dark side usually accompanies the glitter of new technology. New hardware usually inspires new software, and that combination often introduces a new set of technical issues for stations and facilities to overcome. The good news is that broadcast engineers overcome technical issues for a living, and we like a good challenge, when we have time for it.

The broadcasting and video production industry has its own ticks and tocks. A broadcast tick might be significant advancements in imaging/display performance and sizes. A broadcast tock might be storage/speed improvements. You might have some other suggestions, and I’m not all that sure which is a tick or a tock. However, in digital television, it is abundantly clear that one needs and drives the other. It’s kind of our industry’s yin and yang.

Intel’s new Broadwell chips are said to be 30 percent more power efficient than their Haswell counterparts while also being faster. CPU performance improvements of 10 percent to 20 percent over Haswell chips are expected.

But, wait. The Broadwell chips haven’t ticked yet. Haswell is a real tock. The newest Haswell-based computers are just becoming available for purchase. What’s the Haswell advantage to broadcasters?

FinFET

Haswell chips are built on Intel’s FinFET transistor design. A FinFET is a nonplanar, double-gate transistor, originally built on a silicon-on-insulator (SOI) substrate; some call Intel’s version a tri-gate or 3D transistor. Physically, the channel between the source and drain is a three-dimensional bar on top of the substrate, called the fin. The thickness of the fin determines the effective channel length of the device. The gate electrode wraps around the channel, allowing much tighter control of the channel’s electric field. The result is highly efficient, effective and tiny.
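As a back-of-the-envelope illustration of why wrapping the gate around the fin helps, here is a minimal sketch comparing the effective gate width of a tri-gate fin with a planar device occupying the same footprint. The fin dimensions are illustrative assumptions, not Intel’s actual 22-nanometer figures:

```python
# Effective gate width of a tri-gate (FinFET) transistor.
# With the gate wrapped over the top and both sides of the fin,
# W_eff ~= 2 * fin_height + fin_thickness, versus only the footprint
# width for a planar transistor. Dimensions below are illustrative.

FIN_HEIGHT_NM = 34.0     # assumed fin height
FIN_THICKNESS_NM = 8.0   # assumed fin thickness (footprint width)

planar_width = FIN_THICKNESS_NM                        # planar: top surface only
tri_gate_width = 2 * FIN_HEIGHT_NM + FIN_THICKNESS_NM  # three sides under gate control

print(f"Planar effective width:   {planar_width:.0f} nm")
print(f"Tri-gate effective width: {tri_gate_width:.0f} nm")
print(f"Roughly {tri_gate_width / planar_width:.1f}x more channel under gate control")
```

More channel surface under the gate’s control, in the same footprint, is what lets the transistor switch harder or leak less without growing larger.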

The technology limit for planar transistors is generally agreed to be approximately 20 nanometers. FinFETs provide a path (no pun intended) for continued ticks. Some manufacturers are predicting 10-nanometer FinFETs by 2015. Compared with planar transistors, FinFETs allow faster speeds at the same power consumption, or a lower power drain at the same speeds. FinFETs are also said to have 90 percent less static leakage current and a lower switching voltage than planar transistors. Intel unveiled its first FinFETs in 2011, when the 22-nanometer semiconductor fabrication node was introduced.

New fourth-generation Haswell chips also have an updated version of Quick Sync Video embedded, Intel’s dedicated hardware for video encoding and decoding. Initial information promises that Quick Sync Video will significantly improve the speed of downloads, video editing, format conversion and DVD burning. If super-complex functions like that are built into the microprocessor, marketing claims of dramatic speed improvements may be valid. We’ll be the judges of that when the new Haswell-based computers hit our stations, desks and laps.
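For a sense of how that hardware gets used in practice, here is a minimal sketch of a format conversion driven from Python. It assumes an FFmpeg build with Quick Sync (h264_qsv) support and suitable drivers, which not every installation has, and the file names are placeholders for illustration:

```python
import subprocess

# A minimal sketch of a transcode that asks FFmpeg to use Intel's
# Quick Sync Video hardware encoder (h264_qsv). Requires an FFmpeg
# build with Quick Sync support; file names are placeholders.

cmd = [
    "ffmpeg",
    "-i", "input.mov",    # source clip (placeholder name)
    "-c:v", "h264_qsv",   # H.264 encode on the Quick Sync hardware
    "-b:v", "8M",         # target video bitrate
    "-c:a", "copy",       # pass the audio through untouched
    "output.mp4",         # destination file (placeholder name)
]

subprocess.run(cmd, check=True)
```

The point of offloading the encode to fixed-function silicon is that the CPU cores stay free for editing, effects and everything else running on the workstation.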

He or she who hesitates

We’ve all learned from experience that technology and its capabilities often progress more frequently than most budgets will allow. If there isn’t a demonstrable need to add or replace computers, ticks and tocks become more of a spectator event than a milestone for most television, video production and post facilities. If you can’t prove new technology will make or save money, it’s a tough sell. But, with accurate research and supporting data, even incremental time-saving steps can be identified and perhaps shown as a legitimate operational cost reduction. Keep your eyes and ears open for the real advantages all new processors offer. Sometimes, even when computers aren’t broken, they need to be replaced for business reasons.

The next tock on Intel’s progress clock is “Skylake.” Like virtually every other tick or tock, it promises to be faster, better and more efficient, and to cost less and consume less power than anything before it. And it probably will be, just as the tick that comes after it will be, and so on.

Meanwhile, back in reality, there’s a lesser-known law called The Great Moore’s Law Compensator (TGMLC), also referred to as software bloat or Wirth’s law. In 1995, Niklaus Wirth popularized the observation that software gets slower faster than hardware gets faster. I believe we’re all familiar with that concept. Yin or yang, it’s all called progress.
