Network protocols
Figure 1. A typical protocol stack in a network server will contain both UDP and TCP running over IP.
It would be hard to work for long in this industry without running into the term protocol. A protocol is a set of agreements that govern communications between two entities. The entities can be VTRs, remote control units, computers, transmitters — almost anything that would need to talk to something else.
The protocol helps define the interface. You can imagine that it would be a drawn-out process to start from scratch every time you wanted to get two devices to communicate; there can be literally thousands of decisions that have to be made before you are successful.
Many of these decisions are already made if you use common protocols. For example, if you know that you have a VTR and a remote control that speaks the Sony VTR control protocol over an RS-422 circuit, you know that an RS-422 electrical and physical interface will be used, and you know that the commands sent over the interface comply with the VTR protocol. Saying “Sony protocol over 422” defines the connector, the electrical coding on the wire, the command structure and more.
You are probably already familiar with the names of several Internet protocols, but you may not have thought of them in broadcast terms. Internet protocol (IP) is the most ubiquitous protocol in the computer environment. IP over Ethernet specifies a great number of things in a short phrase. It is actually saying something similar to Sony protocol over RS-422. It says, “Let's communicate using IP over an Ethernet electrical/physical connection.”
It is a common practice to stack protocols on top of each other. (See Figure 1.) Perhaps you have heard of the term protocol stack. This practice allows systems designers and engineers flexibility in exchanging components in computer systems without having to rebuild the system from scratch.
One common example of this is TCP/IP. This combination of acronyms stands for transmission control protocol (TCP) running over Internet protocol (IP). The application hands its data to TCP, which breaks it into segments; IP then packages those segments into packets and sends them over an Ethernet electrical and physical connection. There are many protocols in the computer world. For now, let's focus on the most important ones, the Internet core protocols.
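To make the layering concrete, here is a minimal sketch in Python (any language with a sockets interface would do). Asking the operating system for an AF_INET socket selects IP as the network layer; choosing SOCK_STREAM puts TCP on top of it, while SOCK_DGRAM selects UDP. Ethernet never appears in the code at all, because the operating system supplies the physical layer.

    import socket

    # AF_INET asks the operating system for an IP (version 4) network layer.
    # SOCK_STREAM layers TCP on top of IP; SOCK_DGRAM layers UDP on top of IP.
    tcp_sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    udp_sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)

    # The application never deals with Ethernet directly -- the operating
    # system maps IP onto whatever physical network is attached.
    print(tcp_sock)   # e.g. <socket ... type=SocketKind.SOCK_STREAM ...>
    print(udp_sock)   # e.g. <socket ... type=SocketKind.SOCK_DGRAM ...>

    tcp_sock.close()
    udp_sock.close()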
Applications from the user world will interface to the network using either user datagram protocol (UDP) or TCP (in some cases, both) running over IP. If the system connects to both Ethernet and ATM networks, there will be two physical interfaces in the server, one for each network.

Internet control message protocol
Internet control message protocol (ICMP) is primarily used to signal problems on the network. You may have seen messages such as “Network unreachable” or “Destination network unknown.” These messages are displayed as the result of conversations between network devices that use ICMP.
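The best-known ICMP exchange is the echo request and reply behind the ping utility, covered in the next paragraph. As a minimal sketch, a script can simply call the system's ping command and look at its exit code; the -c (count) option assumed here is the Linux/macOS form, and Windows uses -n instead.

    import subprocess

    host = "192.168.30.1"  # hypothetical address on the local network

    # ping sends ICMP echo requests; exit code 0 means a reply came back.
    # The -c 1 option (send one request) is the Linux/macOS form.
    result = subprocess.run(["ping", "-c", "1", host],
                            capture_output=True, text=True)

    if result.returncode == 0:
        print(host, "answered the ICMP echo request")
    else:
        print(host, "did not answer (down, unreachable, or ignoring ping)")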
The most common use of ICMP is to ping another computer on the Internet. While ping can provide a quick check to verify that a computer is connected to a network, you should be aware that many servers do not respond to ping requests for security reasons. ICMP is also used to transfer information between routers and to provide initial network information to diskless devices on a network.

Internet protocol
IP is truly a core protocol of the Internet. IP's job is to get datagrams from one device to another using the addressing scheme for that physical network. It is the responsibility of other protocols to provide end-to-end routing information.
IP is low on the protocol stack, so it is closely related to the physical and electrical media that will be used to carry the data. IP prepares data sent to it by higher protocols for transmission across a specific network, taking into account such things as the packet length, hardware addressing structure and how data should be split across multiple packets (if this is allowed).
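If you want to experiment with the addressing side of this, Python's standard ipaddress module is a convenient sandbox. The sketch below, using made-up addresses, performs roughly the test a host makes with every outgoing datagram: is the destination on my local network, or does the datagram have to go to a router?

    import ipaddress

    # Hypothetical addresses on a /24 network.
    network = ipaddress.ip_network("192.168.30.0/24")
    host_a = ipaddress.ip_address("192.168.30.20")
    host_b = ipaddress.ip_address("192.168.31.7")

    # IP compares the network portion of the address to decide whether a
    # datagram can be delivered on the local network or must be routed.
    print(host_a in network)   # True  -- same network, deliver directly
    print(host_b in network)   # False -- different network, send to a router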
These days, Ethernet is the dominant electrical and physical networking technology. But when IP was developed, there were several different and competing technologies available. Some of these are still in use today. IP works just as well with token ring and ATM as it does with Ethernet. It is the IP layer that accounts for these differences.

Address resolution protocol
Address resolution protocol (ARP) associates a particular IP address (e.g., 192.168.30.20) with a specific piece of hardware. Behind the scenes, routers build ARP tables that contain the IP address of a device and the hardware address of the device.
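Most desktop operating systems will show you this table directly. The sketch below simply runs the arp command from Python and prints whatever it reports; the -a flag is the common form on macOS, BSD and Windows, while modern Linux systems may prefer the ip neigh command instead.

    import subprocess

    # Print the local ARP cache: each line pairs an IP address with the
    # hardware (MAC) address the system has learned for it.
    result = subprocess.run(["arp", "-a"], capture_output=True, text=True)
    print(result.stdout)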
For Ethernet networks, the hardware address is known as the media access control (MAC) address. While IP addresses can be assigned by anyone, MAC addresses are assigned by the equipment manufacturer and are unique. Once the router knows the unique hardware address for a given IP address, it can transfer the data to the correct device.

User datagram protocol
UDP is used to send datagrams from one place to another. Dictionary.com defines a datagram as, “A self-contained, independent entity of data carrying sufficient information to be routed from the source to the destination computer without reliance on earlier exchanges between this source and destination computer and the transporting network.”
There is one particularly important thing to know about UDP. Nothing in the protocol guarantees that packets sent across the network will reach the other end. In fact, UDP explicitly does not check to see that packets have been received. UDP is a fire and forget protocol. As such, it has extremely low overhead.
You might wonder why this protocol was developed. After all, the whole point of having a network is to move data from one place to another. There are some cases where checking on the delivery of each packet is not practical. One example: In a multicast service, a server may send data to hundreds, or perhaps thousands, of clients. Checking with each client to see that every packet has been received would be prohibitive.
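A minimal sketch of this fire-and-forget behavior, using Python's standard socket module and a made-up address and port, looks like the following. Notice that sendto() returns as soon as the datagram is handed off; the sender never finds out whether anything was listening on the other end.

    import socket

    UDP_ADDR = ("192.168.30.20", 5004)   # hypothetical receiver and port

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)

    # sendto() hands the datagram to the network and returns immediately.
    # UDP provides no acknowledgment, so the sender never learns whether
    # the datagram arrived -- that is the "fire and forget" trade-off.
    sock.sendto(b"frame 0001 payload", UDP_ADDR)
    sock.close()

    # A receiver would bind to the port and wait for datagrams:
    #   recv_sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    #   recv_sock.bind(("", 5004))
    #   data, sender = recv_sock.recvfrom(2048)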
You should be aware that UDP packet size can vary, and in some cases UDP packets can be large. This brings up the issue of fairness. Large UDP packets may hog more of the available bandwidth, causing other traffic to suffer. For this reason, and for other security reasons, some system administrators do not permit UDP traffic to cross their firewalls. This can cause headaches for broadcasters who are using UDP to distribute multicast video over the public Internet.

Transmission control protocol
TCP is also used to send datagrams from one place to another over a network. One of the biggest differences between TCP and UDP is that TCP guarantees delivery of the data. TCP stamps each datagram with a unique sequence number. It then looks for the receiver to acknowledge that it received the datagram. TCP also implements a number of rate control mechanisms to deal with rate limits imposed by the receiver and with congestion issues on the network.
TCP does one other unique thing besides handling lost packets. It reorders packets that have been received out of sequence. Remember that once a packet is launched onto the Internet, it is on its own, and there is no association between this packet and the one that comes before or after it. Packets can and do arrive in a different order from the order in which they were sent.
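All of this bookkeeping stays hidden from the application, which simply sees a reliable, ordered stream of bytes. The short Python sketch below (the server address is hypothetical) shows how little of it surfaces in code.

    import socket

    SERVER = ("192.168.30.20", 9000)   # hypothetical TCP server

    # socket.create_connection() performs the TCP handshake for us.
    with socket.create_connection(SERVER, timeout=5) as sock:
        sock.sendall(b"hello")      # TCP numbers, acknowledges and, if
                                    # necessary, retransmits these bytes
        reply = sock.recv(1024)     # data is delivered in order, with any
                                    # out-of-sequence packets reassembled
        print(reply)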
TCP almost always runs on top of IP, so the notation TCP/IP is common. Just bear in mind that TCP/IP specifies two separate protocols.

File transfer protocol
As the name implies, file transfer protocol (FTP) is used to transfer files across the Internet. FTP has some excellent features that make it an indispensable protocol. FTP handles lost packets and reordering. It also senses congestion on a link and employs automatic rate control to relieve the congestion.
That said, FTP has some characteristics that make it unsuitable for moving professional video files. First, many FTP applications have a file size limit of 2GB. Professional video files can be much larger than this, so this limitation can be a real problem. Second, FTP has rate control mechanisms that can interfere with transmission of large files.
If the network is congested, FTP senses this and adjusts its rate — fairly drastically! FTP responds to congestion by cutting its speed in half. If the congestion continues, FTP cuts its speed in half again. This continues until the transfer aborts due to timeout. If the session has not timed out and the congestion situation improves, FTP increases its speed, but it can take a long time (several tens of seconds) for FTP to get back to its initial speed. You can see this reduction in rate on a network traffic monitor as a stair step pattern. Unfortunately, in some cases, FTP's rate control mechanisms can limit throughput to a low level even though the available bandwidth is high and congestion does not exist.
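If you want to see FTP at work from a script, Python's standard ftplib module is a quick way to do it. In the sketch below, the server name, credentials and file name are all made up; the transfer itself rides on a TCP connection, which is where the lost-packet handling and reordering come from.

    from ftplib import FTP

    # All of these values are hypothetical.
    HOST, USER, PASSWORD = "ftp.example.com", "anonymous", "guest@example.com"
    REMOTE_FILE = "clip0001.mxf"

    with FTP(HOST) as ftp:
        ftp.login(USER, PASSWORD)
        with open(REMOTE_FILE, "wb") as out:
            # RETR streams the file back over the FTP data connection;
            # TCP underneath handles lost packets and reordering.
            ftp.retrbinary("RETR " + REMOTE_FILE, out.write)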
Table 1 lists the acronyms and functions for some other common protocols you should be familiar with. Also, I strongly recommend that you read “Internet Core Protocols” by Eric A. Hall (O'Reilly). This is an excellent book on the subject and will give you much more detail on these protocols than I could possibly give you here.
Table 1. Common Internet core protocols

HTTP (hypertext transfer protocol): Used primarily by Web browsers, but increasingly used for the transmission and retrieval of files and other data.
IGMP (Internet group management protocol): The core multicasting protocol. Allows a single host to send out messages to multiple clients. Note that there are many other multicast protocols.
POP (post office protocol): Checks and retrieves mail on remote mail servers.
SMTP (simple mail transfer protocol): Sends mail through a mail server.
SNMP (simple network management protocol): Remotely monitors equipment on a network. May also be used to execute limited remote commands.
SSH (secure shell): Secure terminal emulation for use between clients (usually system administrators) and Internet servers.
Telnet: A non-secure terminal emulation protocol for use between clients (usually system administrators) and Internet servers.
Brad Gilmer is president of Gilmer & Associates, executive director of the AAF Association, and executive director of the Video Services Forum.
Send questions and comments to: brad_gilmer@primediabusiness.com