CCNP ROUTE 300-101 Part 1.5 – Describe UDP Operations

User Datagram Protocol Operations

UDP is defined to make available a datagram mode of packet-switched computer communication in the environment of an interconnected set of computer networks. This protocol assumes that the Internet Protocol is used as the underlying protocol.

This protocol provides a procedure for application programs to send messages to other programs with a minimum of protocol mechanism. The protocol is transaction oriented, and delivery and duplicate protection are not guaranteed. Applications requiring ordered, reliable delivery of streams of data should use TCP.

Because UDP is considered to be a connectionless, unreliable protocol, it lacks the sequence numbering, window size, and acknowledgment numbering present in the header of a TCP segment. Rather, the UDP segment’s header contains only source and destination port numbers, a UDP checksum (an optional field used to detect transmission errors), and the segment length (measured in bytes).
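As a sketch of this connectionless behavior, a single sendto() call in Python delivers a datagram with no handshake and no acknowledgment; the loopback address, port choice, and payload here are arbitrary illustration:

```python
import socket

# Receiver: bind a datagram socket; no listen()/accept() -- UDP is connectionless.
rx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
rx.bind(("127.0.0.1", 0))             # port 0 lets the OS pick a free port
rx.settimeout(2)                      # don't block forever if the datagram is lost
port = rx.getsockname()[1]

# Sender: one sendto() call emits one datagram; no handshake, no acknowledgment.
tx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
tx.sendto(b"hello", ("127.0.0.1", port))

data, peer = rx.recvfrom(2048)        # message boundaries are preserved
print(data)                           # b'hello'
```

Note that if the receiver were not running, the sender would get no error at all: the datagram would simply be lost, which is exactly the "unreliable" part of the definition.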

                  0      7 8     15 16    23 24    31  
                 +--------+--------+--------+--------+ 
                 |     Source      |   Destination   | 
                 |      Port       |      Port       | 
                 +--------+--------+--------+--------+ 
                 |                 |                 | 
                 |     Length      |    Checksum     | 
                 +--------+--------+--------+--------+ 
                 |                                     
                 |          data octets ...            
                 +---------------- ...
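Those four 16-bit header fields can be unpacked directly with Python’s struct module; the sample segment below is made up for illustration (an ephemeral source port and the DNS destination port, with the checksum left at zero):

```python
import struct

def parse_udp_header(segment: bytes) -> dict:
    """Split a raw UDP segment into its RFC 768 header fields plus payload."""
    src_port, dst_port, length, checksum = struct.unpack("!HHHH", segment[:8])
    return {"src_port": src_port, "dst_port": dst_port,
            "length": length, "checksum": checksum, "data": segment[8:]}

# Illustrative segment: 8-byte header plus a 5-byte payload (length field = 13).
segment = struct.pack("!HHHH", 49152, 53, 8 + 5, 0) + b"hello"
print(parse_udp_header(segment))
```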


Source Port

is an optional field; when meaningful, it indicates the port of the sending process, and may be assumed to be the port to which a reply should be addressed in the absence of any other information. If not used, a value of zero is inserted.


Destination Port
has a meaning within the context of a particular internet destination address.


Length

is the length in octets of this user datagram, including this header and the data. (This means the minimum value of the length is eight.)


Checksum

is the 16-bit one’s complement of the one’s complement sum of a pseudo header of information from the IP header, the UDP header, and the data, padded with zero octets at the end (if necessary) to make a multiple of two octets.


The pseudo header conceptually prefixed to the UDP header contains the source address, the destination address, the protocol, and the UDP length. This information gives protection against misrouted datagrams. This checksum procedure is the same as is used in TCP.

                  0      7 8     15 16    23 24    31 
                 +--------+--------+--------+--------+
                 |          source address           |
                 +--------+--------+--------+--------+
                 |        destination address        |
                 +--------+--------+--------+--------+
                 |  zero  |protocol|   UDP length    |
                 +--------+--------+--------+--------+


If the computed checksum is zero, it is transmitted as all ones (the equivalent in one’s complement arithmetic). An all zero transmitted checksum value means that the transmitter generated no checksum (for debugging or for higher level protocols that don’t care).
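As a sketch, the whole procedure (pseudo header, zero-octet padding, one’s complement arithmetic, and the zero-becomes-all-ones rule) fits in a few lines of Python. The function names are my own, 17 is IP’s assigned protocol number for UDP, and the segment passed in must carry a zero in its checksum field while the checksum is being computed:

```python
import struct

def ones_complement_sum(data: bytes) -> int:
    """One's complement sum of 16-bit big-endian words, with end-around carry."""
    if len(data) % 2:
        data += b"\x00"                            # pad with a zero octet if needed
    total = 0
    for i in range(0, len(data), 2):
        total += (data[i] << 8) | data[i + 1]
        total = (total & 0xFFFF) + (total >> 16)   # fold the carry back in
    return total

def udp_checksum(src_ip: bytes, dst_ip: bytes, udp_segment: bytes) -> int:
    """RFC 768 checksum over pseudo header + UDP header + data (IPv4).

    The udp_segment's checksum field must be zero while computing.
    """
    # Pseudo header: source address, destination address, zero, protocol 17, length.
    pseudo = src_ip + dst_ip + struct.pack("!BBH", 0, 17, len(udp_segment))
    csum = ~ones_complement_sum(pseudo + udp_segment) & 0xFFFF
    return csum or 0xFFFF                          # a computed zero is sent as all ones
```

A receiver can verify a segment by running the same sum over the pseudo header and the segment with the transmitted checksum in place; for an intact segment the result is all ones (0xFFFF).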

Because a UDP segment header is so much smaller than a TCP segment header, UDP becomes a good candidate for the transport layer protocol serving applications that need to maximize bandwidth and do not require acknowledgments (for example, audio or video streams). In fact, the primary protocol used to carry voice and video traffic, Real-time Transport Protocol (RTP), is a Layer 4 protocol that is encapsulated inside of UDP.

If RTP is carrying interactive voice or video streams, the latency between the participants in a voice and/or video call should ideally be no greater than 150 ms. To help ensure that RTP experiences minimal latency, even during times of congestion, Cisco recommends a queuing technology called Low Latency Queuing (LLQ). LLQ allows one or more traffic types to be buffered in a priority queue, which is serviced first (up to a maximum bandwidth limit) during times of congestion.

Metaphorically, LLQ works much like a carpool lane found in highway systems in larger cities. With a carpool lane, if you are a special type of traffic (for example, a vehicle with two or more passengers), you get to drive in a separate lane with less congestion.

However, the carpool lane is not the autobahn (a German highway without a speed limit). You are still restricted as to how fast you can go.

With LLQ, you can treat special traffic types (voice and video using RTP) in a special way, by placing them in a priority queue. Traffic in the priority queue (like a carpool lane) gets to go ahead of nonpriority traffic; however, there is a bandwidth limit (much like a speed limit) that traffic in the priority queue cannot exceed. Therefore, priority traffic does not starve out nonpriority traffic.
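On a Cisco IOS router, an LLQ policy along these lines might look as follows. The class and policy names, the 512-kbps priority cap, and the assumption that voice traffic is already marked DSCP EF are all illustrative choices, not recommendations:

```
class-map match-any VOICE
 match dscp ef                      ! assumes voice/video is already marked EF
policy-map WAN-EDGE
 class VOICE
  priority 512                      ! LLQ: priority queue, policed to 512 kbps
 class class-default
  fair-queue                        ! everything else shares the remaining bandwidth
!
interface Serial0/0
 service-policy output WAN-EDGE
```

The priority command is what turns an ordinary CBWFQ class into the LLQ "carpool lane": it is serviced first, but the stated rate acts as the speed limit during congestion.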


Starvation

When there is congestion on the network, either because of bandwidth limitations or due to QoS mechanisms such as Weighted Random Early Detection (WRED), which intentionally drops certain types of traffic, UDP will naturally tend to “win” the battle over TCP. The reason for this is that TCP has congestion avoidance and error discovery mechanisms that allow it to know when it needs to slow things down a bit.

UDP, however, does not have these built-in congestion avoidance mechanisms. That means a UDP-based traffic flow will just keep on blasting its destination with traffic, with no regard to how that may be affecting other traffic flows. TCP’s backoff algorithms might slow it to a crawl while UDP is enjoying the uncluttered highways.

Two related concepts can come into play to solve this. First, put critical TCP traffic flows into queues that ensure they always have a fair chance at sending their payloads. Second, avoid putting TCP and UDP into the same queues. While this isolates TCP from the UDP traffic placed into different queues, you will still experience starvation of your TCP applications if the queue is at its threshold; at that point you will need to increase the bandwidth available to the queue, which might in turn require increased bandwidth on the link altogether.
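The two steps above could be sketched as a CBWFQ policy that gives critical TCP flows their own class with a guaranteed minimum, keeping them out of the queues that UDP traffic lands in. Again, the class names, matched protocols, and kbps values are illustrative assumptions:

```
class-map match-any CRITICAL-TCP
 match protocol ssh
 match protocol telnet
class-map match-any BULK-UDP
 match protocol syslog
policy-map PROTECT-TCP
 class CRITICAL-TCP
  bandwidth 256                     ! guaranteed kbps for TCP flows during congestion
 class BULK-UDP
  bandwidth 128                     ! UDP traffic queued separately
 class class-default
  fair-queue
```

The bandwidth command is a guarantee during congestion, not a cap, so the TCP class can still use idle link capacity when the UDP class is quiet.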


Latency

I described latency in the previous topic, 1.4, but Cisco is most likely referring to the fact that UDP can reduce latency because of its much smaller and simpler header. There is also the fact that UDP has no reliability mechanisms to slow down (or speed up) the rate of segments passing over the wire; it just sends bursts of data and that’s it. So we can conclude (correct me if I’m wrong) that UDP latency tends to be lower than TCP latency.


I hope this helps someone else!