Packet Switching
We rarely stop to wonder how the data we send over the internet gets from one location to another without any errors.
Suppose you’re transferring a 100 MB file and you lose your connection partway through. Would the file have to be sent all over again? And what happens if one byte of data gets corrupted during the transfer?
All of these scenarios are handled by a process called packet switching. As previously discussed,
Packets are bundles of data, carved out of a file, that can be sent simultaneously through different routes on the internet.
Different routes can transmit data at different rates, with the transmission rate of a link measured in bits/second. Packet switching is the fundamental way of transferring data on the internet.
Most packet switches use store-and-forward transmission at the inputs to the links. Store-and-forward transmission means that the packet switch must receive the entire packet before it can begin to transmit the first bit of the packet onto the outbound link.
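To make the store-and-forward idea concrete, here is a minimal Python sketch (my own illustration, not part of the original text). It models only the transmission delay of one packet crossing several links, assuming every link runs at the same rate R and ignoring processing, queueing, and propagation delays:

```python
def store_and_forward_delay(L_bits, R_bps, num_links):
    """End-to-end delay for one packet of L_bits crossing num_links links.

    Each switch must receive the whole packet (L/R seconds) before it can
    start forwarding it, so the L/R delay is paid once per link.
    """
    return num_links * (L_bits / R_bps)


# Assumed example values: a 1 MB packet (8,000,000 bits) over three 2 Mbps links.
print(store_and_forward_delay(L_bits=8_000_000, R_bps=2_000_000, num_links=3))  # 12.0 s
```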
How fast does data travel?
We can calculate the transmission time by looking at the process. The host first divides the file into packets of length L bits.
These packets are then pushed onto the link at a rate of R bits/sec. From this, the transmission delay of one packet is simply L/R seconds.
These packet delays are for a single hop and will vary for each hop.
Problem:
- Suppose we have a 5 MB file that is divided into 5 packets. The transmission rate R is 3 MB/s. How long will it take to transfer the file?
Solution:
Here the packet size L works out to 1 MB.
According to the formula, L/R:
(1 MB)/(3 MB/s) ≈ 0.33 s is the time taken to transmit one packet.
So the whole file takes 0.33 × 5 ≈ 1.67 seconds (equivalently, 5 MB ÷ 3 MB/s).
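As a quick sanity check, here is the same arithmetic in a few lines of Python, using the problem’s values and keeping sizes in megabytes so no unit conversion is needed:

```python
L = 1.0   # packet size in MB (a 5 MB file split into 5 packets)
R = 3.0   # transmission rate in MB/s

per_packet = L / R            # ~0.333 s to transmit one packet
whole_file = per_packet * 5   # ~1.667 s for all five packets back to back
print(f"per packet: {per_packet:.3f} s, whole file: {whole_file:.3f} s")
```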
But wait, that’s not all
The transmission delay discussed above is just one of the major types of delay that can occur during packet transfer. The full list of delays a packet can experience is:
- Processing Delay: The router requires some time to process the packets that it receives and to decide where to send them.
- Queueing Delay: If too many packets arrive at a router at once, they must wait in a queue before they can be transmitted, which adds to the delay.
- Transmission Delay: This is the L/R delay discussed above.
- Propagation Delay: The distance a packet has to travel also plays a major role in the overall delay. It can be calculated as d/s, where d is the length of the link and s is the propagation speed of the medium (roughly 2×10⁸ m/s in fiber). The sketch below combines all four of these delays.
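Putting the four components together, here is a minimal sketch of the total per-hop (nodal) delay. The function and its sample inputs are my own assumptions for illustration, not measured values:

```python
def nodal_delay(d_proc, d_queue, L_bits, R_bps, distance_m, prop_speed_mps=2e8):
    """Total delay at one hop: processing + queueing + transmission + propagation."""
    d_trans = L_bits / R_bps              # transmission delay, L/R
    d_prop = distance_m / prop_speed_mps  # propagation delay, distance / speed
    return d_proc + d_queue + d_trans + d_prop


# Assumed example: a 1500-byte packet on a 10 Mbps link spanning 1000 km of fiber,
# with 20 microseconds of processing and 1 ms of queueing.
print(nodal_delay(d_proc=20e-6, d_queue=1e-3,
                  L_bits=1500 * 8, R_bps=10e6, distance_m=1_000_000))  # ~0.0072 s
```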
Packet Loss:
Packet loss, or packet dropping, is an important factor in the overall delay of a transfer. Whether a packet gets dropped depends on several things:
the transmission rate of the link, and the nature of the arriving traffic, that is, whether the traffic arrives periodically or in bursts.
A packet is lost when the rate of incoming packets exceeds the rate at which the router can send them out, causing a queue to build up.
Packets are generally dropped when that queue has no more room to accommodate another packet. This is a troublesome problem, as it can lead to many packets being lost.
However, we can anticipate the problem by estimating how likely packets are to be dropped.
Suppose packets arrive at an average rate of a packets/sec. Each packet is L bits, as previously discussed, so the average incoming rate is La bits/sec. The outgoing rate, as we know, is R bits/sec. The ratio La/R is known as the traffic intensity. If
La/R < 1
then there is a very high probability that no packet will be lost. The closer this ratio gets to 1, the more traffic accumulates in the queue, eventually reaching the point where packets start to be dropped.
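Here is a short sketch of that traffic-intensity check. The parameter names and the sample numbers are assumptions made for illustration:

```python
def traffic_intensity(a_pkts_per_sec, L_bits, R_bps):
    """La/R: average incoming bit rate divided by the outgoing link rate."""
    return (L_bits * a_pkts_per_sec) / R_bps


rho = traffic_intensity(a_pkts_per_sec=700, L_bits=12_000, R_bps=10_000_000)
if rho < 1:
    print(f"La/R = {rho:.2f}: the queue stays manageable, drops are unlikely")
else:
    print(f"La/R = {rho:.2f}: arrivals exceed capacity, expect queueing and drops")
```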
Bandwidth & Throughput:
So far we have been calculating the theoretical time it takes for a packet to travel. In the real world, however, many factors affect the actual delay.
Bandwidth is the maximum theoretical capacity of a link, whereas throughput is the amount of data that actually gets through each second. Think of it like a processor: the maximum number of calculations it could do in one second is the bandwidth, while the number it actually performs in practice is the throughput.
Throughput matters because we can never predict exactly how much data will get through in a second; there are many reasons a packet may be delayed.
The server on one end of a connection might send more slowly than the link can carry, or it might be so busy that it doesn’t have enough memory to buffer the packets it is handling. Many such factors affect the actual transfer rate of a connection.
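The distinction can be summed up in a few lines of Python. The link capacity and the measured transfer below are assumed values, chosen only to show the difference:

```python
bandwidth_bps = 100_000_000   # link capacity: 100 Mbps (the theoretical maximum)

bytes_received = 60_000_000   # what the receiver actually got during the transfer
elapsed_seconds = 8.0         # measured wall-clock time of the transfer

throughput_bps = bytes_received * 8 / elapsed_seconds  # what actually got through
print(f"bandwidth : {bandwidth_bps / 1e6:.0f} Mbps")
print(f"throughput: {throughput_bps / 1e6:.0f} Mbps")  # 60 Mbps in this example
```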