Network Performance

Data traffic types and network performance measures (Level 1)

Network Performance

Throughput

We begin by understanding the throughput of a flow through a subsystem. A flow is a stream of bits of interest, e.g., all bits flowing through a link in one direction, the bits corresponding to a video stored on a server and playing at a device, etc. A subsystem is any part of the network through which bits can flow, e.g., a link in a network, a subnetwork, a router, an entire network except for the endpoints of a particular flow.

_images/Figure-120.png

Figure-1: We define the throughput of a flow through a subsystem. In the top image, the flow of interest is shown in blue. In the middle image, the subsystem is a link and the throughput is measured over all flows combined. In the bottom image, the flow of interest is again in blue, while the subsystem is a network.

In the context of NetSim, we have:

  • Application throughput: The subsystem is the entire network, except for the two entities involved in the application, and the flow is the bit stream of that application.

  • Point-to-point link throughput: The subsystem is the link, and the flow is all the bits flowing through the link in a specified direction.

Let \(A(t)\) be the number of bits of the flow that pass through the subsystem during the interval \([0,t]\). Then the throughput up to time \(t\) is \(\frac{A(t)}{t}\), and the long-run average throughput is \(\lim_{t \to \infty} \frac{A(t)}{t}\).
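As a small illustration (not NetSim code; the packet record below is hypothetical), the running throughput \(A(t)/t\) can be computed directly from a list of delivery times and payload sizes:

```python
# Running throughput A(t)/t from a (hypothetical) record of packet deliveries.
# Each entry is (delivery_time_in_seconds, payload_bits).
deliveries = [(0.010, 11680), (0.020, 11680), (0.035, 11680), (0.050, 11680)]

def throughput_bps(deliveries, t):
    """Throughput up to time t, i.e. A(t)/t in bits per second."""
    bits_up_to_t = sum(bits for (time, bits) in deliveries if time <= t)
    return bits_up_to_t / t

print(throughput_bps(deliveries, 0.050))  # ~934,400 bps for this toy record
```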

Delay

Delay is reported for an entity, e.g., packets in a stream flowing through a subsystem, a user request (such as a connection setup request), or a task such as a file transfer.

Let \(t_k\) be the time instant at which delay for the \(k^{th}\) entity starts,

  • e.g., the time of arrival of a packet at a router, the time of a connection request initiation, or the time at which a file transfer is requested.

Let \(u_k\) be the time instant at which the delay of the \(k^{th}\) entity ends,

  • e.g., the time at which a packet leaves the router, the time at which a connection request is completed, or the time at which a file transfer is completed.

Then \(d_k = u_k - t_k\) is the delay of the \(k^{th}\) entity among the entities of interest.

  • e.g., the delay of the \(k^{th}\) packet passing through a router, the delay for the connection setup, or the file transfer delay.

The average delay over the last \(K\) entities (up to entity \(N\)) is equal to

\[\frac{1}{K} \sum_{k=N-K+1}^{N} d_k\]

This is called a “moving window” average with a window of \(K\), and it will track changes in the system. As can be seen, the implementation of the above average requires the storage of the past \(K\) values of \(d_k\). An averaging approach that does not require such storage is the Exponentially Weighted Moving Average (EWMA)

\[\bar{d}(N) = \sum_{k=1}^{N} (1-\alpha)^{N-k} \alpha d_k = \bar{d}(N-1) + \alpha(d_N - \bar{d}(N-1))\]

which is roughly like averaging over the past \(1/\alpha\) values of \(d_k\); for example, \(\alpha = 0.001\) gives an averaging window of 1000. The limiting average delay is equal to,

\[\lim_{K \to \infty} \left(\frac{1}{K}\right) \sum_{k=1}^{K} d_k\]

This limiting average delay is meaningful if the system is statistically invariant over a long time.
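The two averaging schemes above are easy to sketch in code. The following Python fragment (a minimal sketch with made-up delay samples, not taken from NetSim) contrasts the moving-window average, which must store the last \(K\) samples, with the EWMA recursion, which stores only the current estimate:

```python
# A minimal sketch (not NetSim code) of the two delay averages discussed above.
from collections import deque

def moving_window_avg(delays, K):
    """Average of the last K delay samples; needs the last K values stored."""
    window = deque(maxlen=K)
    for d in delays:
        window.append(d)
    return sum(window) / len(window)

def ewma(delays, alpha=0.001):
    """EWMA: d_bar <- d_bar + alpha * (d - d_bar); no per-sample storage needed."""
    d_bar = delays[0]
    for d in delays[1:]:
        d_bar += alpha * (d - d_bar)
    return d_bar

delays_us = [11280, 11310, 11295, 11350, 11290]   # hypothetical per-packet delays (µs)
print(moving_window_avg(delays_us, K=3), ewma(delays_us, alpha=0.5))
```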

_images/Figure-219.png

Figure-2: Components of network delay

The following are some examples of the network aspects that determine the delay of an entity.

  • Packet delay through a router

      • Processing delay: in moving the packet through the “fabric” to the queue at its exit port.

      • Queuing delay: at the exit port.

      • Transmission delay: in sending the packet bits out onto the digital transmission link.

  • File transfer delay

      • Connection set-up delay: from the user’s connection request packet until the connection is set up and the transfer starts.

      • Data transfer delay: the time taken to transmit all the bytes in the file reliably to the user, so that the user has an exact copy of the file and the file shows up at the user level in the user’s computer.

Generally, the delay of an entity in a network subsystem will depend on:

  • the “capacity” of the subsystem (e.g., the bit rate of a link), and

  • the way the capacity is allotted to the various entities being transmitted over the link.

The delay through a subsystem will include the propagation delay (due to speed of light), router queuing delay, router processing delay, and transmission delay.

Types of Traffic

Elastic Traffic

Elastic traffic is generated by applications that do not have any intrinsic “time dependent” rate requirement and can, thus, be transported at arbitrary transfer rates. For example, the file transfer application only requires that each file be transferred reliably but does not impose a transfer rate requirement. Of course, it is desirable that the file is transferred quickly, but that is not a part of the file transfer application requirement.

The following are the QoS requirements of elastic traffic.

  • Transfer delay and delay variability can be tolerated. An elastic transfer can be performed over a wide range of transfer rates, and the rate can even vary over the duration of the transfer.

  • The application cannot tolerate data loss.

  • This does not mean, however, that the network cannot lose any data. Packets can be lost in the network - owing to uncorrectable transmission errors or buffer overflows - provided that the lost packets are recovered by an automatic retransmission procedure.

  • Thus, effectively the application would see a lossless transport service. Because elastic sources do not require delay guarantees, the delay involved in recovering lost packets can be tolerated.

Stream Traffic

This source of traffic has an intrinsic “time-dependent” behavior. This pattern must be preserved for faithful reproduction at the receiver. The packet network will introduce delays: a fixed propagation delay and queueing delay that can vary from packet to packet. Applications such as real-time interactive speech or video telephony are examples of stream sources.

The following are typical QoS requirements of stream sources.

  • Delay (average and variation) must be controlled. Real-time interactive traffic such as that from packet telephony would require tight control of end-to-end delay; for example, for packet telephony the end-to-end delay may need to be controlled to less than 200 ms, with a probability more than 0.99.

  • There is tolerance for data loss. Because of the high levels of redundancy in speech and images, a certain amount of data loss is imperceptible. As an example, for packet voice 5 to 10% of the packets can be lost without significant degradation of the speech quality.

Appendix: Obtaining delay metrics in NetSim

The average end-to-end packet delay can be obtained directly from the delay column in the application metrics of the NetSim Results dashboard. Queuing delay, transmission time and propagation delay can be calculated from the packet trace, as explained in the section below.

In the packet trace:

  • The average of the difference between the PHY LAYER ARRIVAL TIME (µs) and the NW LAYER ARRIVAL TIME (µs) gives us the average queuing delay of packets in a link.

\[Avg.Queuing\ delay\ (\mu s) = Avg.\ \left( PHY\ LAYER\ ARRIVAL\ TIME\ (µs) - NW\ LAYER\ ARRIVAL\ TIME\ (µs) \right)\]
  • The average of the difference between the PHY LAYER START TIME (µs) and the PHY LAYER ARRIVAL TIME (µs) gives us the average transmission time of packets in a link.

\[Avg.Transmission\ Time\ (\mu s) = Avg.\left( PHY\ LAYER\ START\ TIME\ (µs) - PHY\ LAYER\ ARRIVAL\ TIME\ (µs) \right)\]
  • The average of the difference between the PHY LAYER END TIME (µs) and the PHY LAYER START TIME (µs) gives us the average propagation delay of packets in a link.

\[Avg.\ Propagation\ delay\ (\mu s) = Avg.\ \left( PHY\ LAYER\ END\ TIME\ (µs) - PHY\ LAYER\ START\ TIME\ (µs) \right)\]
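If the packet trace is exported to CSV, these three averages can be computed in a few lines. The sketch below assumes column names matching the headers quoted above; the actual NetSim export may name them slightly differently:

```python
import pandas as pd

# Assumed column names; adjust to match the actual NetSim packet-trace export.
trace = pd.read_csv("packet_trace.csv")

queuing_us      = trace["PHY LAYER ARRIVAL TIME(US)"] - trace["NW LAYER ARRIVAL TIME(US)"]
transmission_us = trace["PHY LAYER START TIME(US)"]   - trace["PHY LAYER ARRIVAL TIME(US)"]
propagation_us  = trace["PHY LAYER END TIME(US)"]     - trace["PHY LAYER START TIME(US)"]

print("Average queuing delay (us):    ", queuing_us.mean())
print("Average transmission time (us):", transmission_us.mean())
print("Average propagation delay (us):", propagation_us.mean())
```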

The procedure for calculating Avg.Queuing delay (μs)

Consider case 2(b),

  • To compute the average queuing delay, we take the average queuing delay in each link (2, 3 and 5) and sum the delays.

  • Open Packet Trace file using the Open packet trace option available in the Simulation Results window.

  • In the Packet Trace, filter the data packets using the column Control Packet Type/App Name to the option App1 Video (see Figure-20).

_images/Figure-206.png

Figure-20: Filter the data packets in Packet Trace by selecting App1 video.

  • Now, to compute the average queuing delay in Link 3, we will select Transmitter Id as Router-5 and Receiver Id as Router-6. This filters all the successful packets from Router 5 to Router 6.

  • The columns Network Layer Arrival Time(µS) and Phy Layer Arrival Time(µS) correspond to the arrival time and departure time of the packets in the buffer at Link 3, respectively (see Figure-21).

_images/Figure-2110.png

Figure-21: Packet arrival and departure times in the link buffer

  • The difference between the PHY Layer Arrival Time (µs) and the Network Layer Arrival Time (µs) gives us the queuing delay of a packet in the link (see Figure-22).

    \[Queuing\ Delay = PHY\ Layer\ Arrival\ Time\ (\mu s) - NW\ Layer\ Arrival\ Time\ (\mu s)\]
_images/Figure-226.png

Figure-22: Queuing Delay

  • Now, calculate the average queuing delay by taking the mean of the queueing delay of all the packets (see Figure-23).

_images/Figure-236.png

Figure-23: Average queuing delay

The average queuing delay obtained from the packet trace is 1836.82 µs, which matches the value reported in Table-3. Similarly, computing the queuing delay for links 2 and 5 gives an average queuing delay of 0. Hence, the average queuing delay in the network is 1836.82 µs.

Similar steps can be followed to obtain Avg.Transmission Time (μs) and Avg.Propagation Delay (μs).

The procedure for calculating File Transfer Time(s)

  • To calculate the File Transfer Time, filter the data packets using the column Control Packet Type/App Name to option App2 FTP

_images/Figure-246.png

Figure-24: Filter the data packets in Packet Trace by selecting App2_FTP.

  • Calculate the difference between the last value of Phy layer end time and the first value of App layer arrival time.

\[FTP\ file\ transfer\ Time\ (s) \;=\; (Phy\ layer\ end\ time\ (\mu s) - App\ layer\ arrival\ time\ (\mu s))\]
\[= 16742892.32\ \mu s - 0\ \mu s \;=\; 16.74\ s\]
_images/Figure-255.png

Figure-25: File Transfer Time

Exercises

  1. Redo the experiment by using the following inputs.

Part-A:

  1. Consider an application layer file size of 50 MB and modify the link speed to 120 Mbps. Derive the theoretical file transfer time and compare it against the NetSim result.

  2. For variations, consider different combinations of file sizes, such as 20 MB, 30 MB, 40 MB, 60 MB, 70 MB, etc., and different link speeds, such as 10 Mbps, 50 Mbps, 200 Mbps, etc. Derive the theoretical file transfer time for each case and compare it against the NetSim result.

Part-B:

  1. Redo the experiment mentioned in Part-B with the customized video traffic generation rate set to 20 Mbps (packet size = 1000 B, IAT = 400 µs). Derive the theoretical file transfer time for each case and compare it against the NetSim result obtained using the packet trace.

  2. Redo the experiment mentioned in Part-B and generate FTP traffic with a 10 MB file size (application settings: File size = 10000000 B, IAT = 5 s) and a link speed of 50 Mbps. Derive the theoretical file transfer time for each case and compare it against the NetSim result obtained using the packet trace.

Delay and Little’s Law (Level 2)

Introduction

Delay is another important measure of the quality of a network, and it is particularly relevant for real-time applications. Application processes are concerned with different types of delay - connection establishment delay, session response delay, end-to-end packet delay, jitter, etc. In this experiment, we will review the most basic and fundamental of these measures, the end-to-end packet delay in the network.

The end-to-end packet delay denotes the sojourn time of a packet in the network and is computed as follows. Let \(a_i\) and \(d_i\) denote the time of arrival of packet i into the network (into the transport layer at the source node) and time of departure of the packet i from the network (from the transport layer at the destination node), respectively.

Then, the sojourn time of packet i is computed as \((d_i - a_i)\) seconds. A useful measure of delay of a flow is the average end-to-end delay of all the packets in the flow, and is computed as:

\[\text{average packet delay} \;=\; \frac{1}{N} \sum_{i=1}^{N} (d_i - a_i) \; secs\]

where N is the count of packets in the flow.

A packet may encounter delay at different layers (and nodes) in the network. The transport layer at the end hosts may delay packets to control flow rate and congestion in the network. At the network layer (at the end hosts and at the intermediate routers), the packets may be delayed due to queues in the buffers. In every link (along the route), the packets see channel access delay and switching/forwarding delay at the data link layer, and packet transmission delay and propagation delay at the physical layer.

In addition, the packets may encounter processing delay (due to hardware restrictions). It is a common practice to group the various components of the delay under the following four categories: queueing delay (caused due to congestion in the network), transmission delay (caused due to channel access and transmission over the channel), propagation delay and processing delay. We will assume zero processing delay and define packet delay as:

\[end\ to\ end\ packet\ delay \;=\; queueing\ delay \;+\; transmission\ delay \;+\; propagation\ delay\]

We would like to note that, in many scenarios, the propagation delay and transmission delay are relatively constant in comparison with the queueing delay. This permits us (including applications and algorithms) to use packet delay to estimate congestion (indicated by the queueing delay) in the network.
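As a quick numeric illustration of this decomposition (using the link parameters of this experiment and an assumed queueing delay, since queueing varies with load):

```python
# Delay decomposition for a 1460-byte UDP packet (12112 bits with all overheads)
# crossing a 10 Mbps link with a 10 ms propagation delay; the queueing delay is an
# assumed value, since it depends on the load.
packet_bits     = 12112
link_rate_bps   = 10e6
transmission_us = packet_bits / link_rate_bps * 1e6      # ~1211 us
propagation_us  = 10_000.0
queueing_us     = 2_000.0                                 # assumed
print(queueing_us + transmission_us + propagation_us)     # ~13211 us end-to-end
```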

Little’s Law

The average end-to-end packet delay in the network is related to the average number of packets in the network. Little’s law states that the average number of packets in the network is equal to the average arrival rate of packets into the network multiplied by the average end-to-end delay in the network, i.e.,

\[\text{average number of packets in the network} \;=\; \text{average arrival rate into the network} \times \text{average end to end delay in the network}\]

Likewise, the average queueing delay in a buffer is also related to the average number of packets in the queue via Little’s law:

\[\text{average number of packets in queue} \;=\; \text{average arrival rate into the queue} \times \text{average delay in the queue}\]

The following figure illustrates the basic idea behind Little’s law. In Figure-36a, we plot the arrival process \(a(t)\) (thick black line) and the departure process \(d(t)\) (thick red line) of a queue as a function of time. We have also indicated the times of arrival (\(a_i\)) and times of departure (\(d_i\)) of the four packets in Figure-36a.

In Figure-36b, we plot the queue process \(q(t) = a(t) - d(t)\) as a function of time, and in Figure-36c, we plot the waiting time \((d_i - a_i)\) of the four packets in the network. From the figures, we note that the area under the queue process equals the sum of the waiting times of the four packets. Now, the average number of packets in the queue \(\left(\frac{14}{10}\right)\), if we consider a duration of ten seconds for the experiment, is equal to the product of the average arrival rate of packets \(\left(\frac{4}{10}\right)\) and the average delay in the queue \(\left(\frac{14}{4}\right)\).

_images/Figure-361.png

Figure-36: Illustration of Little’s law in a queue.
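The figure's bookkeeping can be reproduced with a few lines of arithmetic. The arrival and departure instants below are hypothetical, chosen only so that the totals match the numbers quoted above (4 packets, 14 packet-seconds of waiting, over a 10-second window):

```python
arrivals   = [1.0, 2.0, 4.0, 6.0]     # a_i (seconds), hypothetical
departures = [4.0, 6.0, 8.0, 9.0]     # d_i (seconds), hypothetical
T = 10.0

waiting = [d - a for a, d in zip(arrivals, departures)]    # 3 + 4 + 4 + 3 = 14 s
avg_delay    = sum(waiting) / len(arrivals)                # 14/4  = 3.5 s
avg_rate     = len(arrivals) / T                           # 4/10  = 0.4 pkts/s
avg_in_queue = sum(waiting) / T                            # 14/10 = 1.4 pkts
print(avg_in_queue, avg_rate * avg_delay)                  # both 1.4: Little's law
```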

In Experiment 3 (Throughput and Bottleneck Server Analysis), we noted that bottleneck server analysis can provide tremendous insights on the flow and network performance. Using M/G/1 analysis of the bottleneck server and Little’s law, we can analyze queueing delay at the bottleneck server and predict end-to-end packet delay as well (assuming constant transmission times and propagation delays).

NetSim Simulation Setup

Open NetSim and click on Experiments > Internetworks > Network Performance > Delay and Littles Law, then click on the tile in the middle panel to load the example, as shown below in Figure-37.

_images/Figure-371.png

Figure-37: List of scenarios for the example of Delay and Littles Law

Part-1: A Single Flow Scenario

We will study a simple network setup with a single flow illustrated in Figure-38 to review end-to-end packet delay in a network as a function of network traffic. An application process at Wired Node 1 seeks to transfer data to an application process at Wired Node 2.

We will consider a custom traffic generation process (at the application) that generates data packets of constant length (say, \(L\) bits) with i.i.d. inter-arrival times (say, with average inter-arrival time \(v\) seconds). The application traffic generation rate in this setup is \(\dfrac{L}{v}\) bits per second.

We prefer to minimize communication overheads (including delay at the transport layer) and hence, will use UDP for data transfer between the application processes.

In this setup, we will vary the traffic generation rate \(\left(\dfrac{L}{v}\right)\) by varying the average inter-arrival time (\(v\)), and review the average queue at the different links, average queuing delay at the different links, and end-to-end packet delay.

Procedure

We will simulate the network setup illustrated in Figure-38 with the configuration parameters listed in detail in Table-6 to study the single flow scenario.

NetSim UI displays the configuration file corresponding to this experiment as shown below:

_images/Figure-381.png

Figure-38: Network set up for studying a single flow

The following set of procedures were done to generate this sample:

Step 1: Drop two wired nodes and two routers onto the simulation environment. The wired nodes and the routers are connected to wired links as shown in (See Figure-38).

Step 2: Configure an application between any two nodes by selecting a Custom application from the Set Traffic tab. Right click on the Application Flow App1 CBR and select Properties. In the Application configuration window (see Figure-39), select Transport Protocol as UDP. In the PACKET SIZE tab, select Distribution as CONSTANT and Value as 1460 bytes. In the INTER ARRIVAL TIME tab, select Distribution as EXPONENTIAL and Mean as 11680 microseconds.

_images/Figure-391.png

Figure-39: Application configuration window

Step 3: The properties of the wired nodes are left to the default values.

Step 4: Right-click the link ID (of a wired link) and select Properties to access the link’s properties window (see Figure-40). Set Max Uplink Speed and Max Downlink Speed to 10 Mbps for link 2 (the backbone link connecting the routers) and 1000 Mbps for links 1 and 3 (the access links connecting the Wired Nodes to the routers).

Set Uplink BER and Downlink BER as 0 for links 1, 2 and 3. Set Uplink Propagation Delay and Downlink Propagation Delay as 0 microseconds for the two access links 1 and 3 and 10 milliseconds for the backbone link 2.

_images/Figure-401.png

Figure-40: Link_ID_2 Properties window

Step 5: Right-click the Router 3 icon and select Properties to access the router’s properties window (see Figure-41). In the INTERFACE 2 (WAN) tab, under the NETWORK LAYER properties, set Buffer size (MB) to 8.

_images/Figure-411.png

Figure-41: Router Properties window

Step 6: Enable Packet Trace check box. Packet Trace can be used for packet level analysis.

Step 7: Click on Run icon to access the Run Simulation window (see Figure-42) and set the Simulation Time to 100 seconds and click on Run.

_images/Figure-421.png

Figure-42: Run Simulation window

Step 8: Now, repeat the simulation with different average inter-arrival times (such as 5840 µs, 3893 µs, 2920 µs, 2336 µs and so on). We vary the input flow rate by varying the average inter-arrival time. This should permit us to identify the bottleneck link and the maximum achievable throughput.

The detailed list of network configuration parameters is presented in (See Table-6).

| Parameter | Value |
|---|---|
| LINK PARAMETERS | |
| Wired Link Speed (access link) | 1000 Mbps |
| Wired Link Speed (backbone link) | 10 Mbps |
| Wired Link BER | 0 |
| Wired Link Propagation Delay (access link) | 0 |
| Wired Link Propagation Delay (backbone link) | 10 milliseconds |
| APPLICATION PARAMETERS | |
| Application | Custom |
| Source ID | 1 |
| Destination ID | 2 |
| Transport Protocol | UDP |
| Packet Size – Value | 1460 bytes |
| Packet Size – Distribution | Constant |
| Inter Arrival Time – Mean | AIAT (µs) (Table-26) |
| Inter Arrival Time – Distribution | Exponential |
| ROUTER PARAMETERS | |
| Buffer Size – Interface (WAN) | 8 |
| MISCELLANEOUS | |
| Simulation Time | 100 Sec |
| Packet Trace | Enabled |

Table-6: Detailed Network Parameters

Performance Measure

In Table-7 and Table-9, we report the flow average inter-arrival time v and the corresponding application traffic generation rate, input flow rate (at the physical layer), average queue and delay of packets in the network and in the buffers, and packet loss rate.

Given the average inter-arrival time \(v\) and the application payload size \(L\) bits (here, \(1460 \times 8 = 11680\) bits), we have,

\[\text{Traffic generation rate} \;=\; \frac{L}{v} \;=\; \frac{11680}{v} \; bps\]
\[\text{PHY rate} \;=\; \frac{11680 + 54 \times 8}{v} \;=\; \frac{12112}{v} \; bps\]

where the packet overhead of 54 bytes is computed as \(54 = 8 (UDP\ header) + 20 (IP\ header) + 26 (MAC + PHY\ header)\ bytes\).
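These two formulas are easy to verify for the inter-arrival times used in this experiment; a short Python check (plain arithmetic, no NetSim APIs) is shown below:

```python
L_bits        = 1460 * 8     # application payload per packet
overhead_bits = 54 * 8       # UDP + IP + MAC/PHY overheads
for aiat_us in (11680, 5840, 3893, 2920, 2336):
    v = aiat_us * 1e-6                                   # mean inter-arrival time (s)
    gen_rate_mbps = L_bits / v / 1e6                     # application traffic generation rate
    phy_rate_mbps = (L_bits + overhead_bits) / v / 1e6   # input flow rate at the PHY layer
    print(aiat_us, round(gen_rate_mbps, 4), round(phy_rate_mbps, 4))
# e.g. 11680 us -> 1.0 Mbps generated and ~1.037 Mbps at the PHY, matching Table-7
```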

Let \(Q_l(u)\) denote the instantaneous queue at link \(l\) at time \(u\). Then, the average queue at link \(l\) is computed as:

\[\text{average queue at link l} \;=\; \frac{1}{T} \int_0^T Q_l(u)\;du\ bits\]

where \(T\) is the simulation time. Let \(N(u)\) denote the instantaneous number of packets in the network at time \(u\). Then, the average number of packets in the network is computed as:

\[\text{average number of packets in the network} \;=\; \frac{1}{T} \int_0^T N(u)\;du\]

Let \(a_{i,l}\) and \(d_{i,l}\) denote the time of arrival of packet \(i\) into link \(l\) (the corresponding router) and the time of departure of packet \(i\) from link \(l\) (the corresponding router), respectively. Then, the average queuing delay at link \(l\) (the corresponding router) is computed as:

\[\text{average queuing delay at link l} \;=\; \frac{1}{N} \sum_{i=1}^{N} (d_{i,l} - a_{i,l})\]

where \(N\) is the count of packets in the flow. Let \(a_i\) and \(d_i\) denote the time of arrival of a packet i into the network (into the transport layer at the source node) and time of departure of the packet i from the network (from the transport layer at the destination node), respectively. Then, the end-to-end delay of packet i is computed as \((d_i - a_i)\) seconds, and the average end-to-end delay of the packets in the flow is computed as:

\[\text{average end-to-end delay} \;=\; \frac{1}{N} \sum_{i=1}^{N} (d_i - a_i)\]

Average Queue Computation from Packet Trace

  • Open Packet Trace file using the Open Packet Trace option available in the Simulation Results window.

  • In the Packet Trace, filter the data packets using the column CONTROL PACKET TYPE/APP NAME and the option App1 CUSTOM (see Figure-43).

_images/Figure-431.png

Figure-43: Filter the data packets in Packet Trace by selecting App1 CUSTOM.

  • Now, to compute the average queue in Link 2, we will select TRANSMITTER ID as ROUTER-3 and RECEIVER ID as ROUTER-4. This filters all the successful packets from Router 3 to Router 4.

  • The columns NW LAYER ARRIVAL TIME(US) and PHY LAYER ARRIVAL TIME(US) correspond to the arrival time and departure time of the packets in the buffer at Link 2, respectively (see Figure-44).

  • You may now count the number of packet arrivals (departures) into (from) the buffer up to time \(t\) using the NW LAYER ARRIVAL TIME(US) (PHY LAYER ARRIVAL TIME(US)) column. The difference between the number of arrivals and the number of departures gives us the number of packets in the queue at any time.

_images/Figure-441.png

Figure-44: Packet arrival and departure times in the link buffer

  • Calculate the average queue by taking the mean of the number of packets in queue at every time interval during the simulation.

  • The difference between the PHY LAYER ARRIVAL TIME(US) and the NW LAYER ARRIVAL TIME(US) will give us the delay of a packet in the link (see Figure-45).

\[Queuing\ Delay \;=\; PHY\ LAYER\ ARRIVAL\ TIME(US)\;-\;NW\ LAYER\ ARRIVAL\ TIME(US)\]
_images/Figure-451.png

Figure-45: Queuing Delay

  • Now, calculate the average queuing delay by taking the mean of the queueing delay of all the packets (see Figure-45). A pandas sketch of these filtering and averaging steps is given below.
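The column and application names in the sketch below are assumptions based on the headers quoted in this section and may need adjusting to the actual trace export:

```python
import numpy as np
import pandas as pd

trace = pd.read_csv("packet_trace.csv")        # hypothetical CSV export of the packet trace
app   = trace[trace["CONTROL PACKET TYPE/APP NAME"] == "App1_CUSTOM"]
link2 = app[(app["TRANSMITTER_ID"] == "ROUTER-3") & (app["RECEIVER_ID"] == "ROUTER-4")]

# Per-packet queuing delay in the Link 2 buffer, and its mean
queuing_us = link2["PHY LAYER ARRIVAL TIME(US)"] - link2["NW LAYER ARRIVAL TIME(US)"]
print("Average queuing delay (us):", queuing_us.mean())

# Average queue: (arrivals up to t) - (departures up to t), averaged on a 1 ms grid
t_grid     = np.arange(0.0, link2["PHY LAYER ARRIVAL TIME(US)"].max(), 1000.0)
arrivals   = np.searchsorted(np.sort(link2["NW LAYER ARRIVAL TIME(US)"].values),  t_grid, side="right")
departures = np.searchsorted(np.sort(link2["PHY LAYER ARRIVAL TIME(US)"].values), t_grid, side="right")
print("Average queue (packets):", (arrivals - departures).mean())
```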

Network Delay Computation from Packet Trace

  • Open Packet Trace file using the Open Packet Trace option available in the Simulation Results window

  • In the Packet Trace, filter the data packets using the column CONTROL PACKET TYPE/APP NAME and the option App1 CUSTOM (see Figure-43).

  • Now, we will select the RECEIVER ID as NODE-2. This filters all the successful packets in the network that reached Wired Node 2

  • The columns APP LAYER ARRIVAL TIME(US) and PHY LAYER END TIME(US) correspond to the arrival time and departure time of the packets in the network respectively.

  • You may now count the number of arrivals (departures) into (from) the network upto time t using the APP LAYER ARRIVAL TIME(US) (PHY LAYER END TIME(US)) column. The difference between the number of arrivals and the number of departures gives us the number of packets in the network at any time.

  • Calculate the average number of packets in the network by taking the mean of the number of packets in network at every time interval during the simulation.

  • Packet Delay at a per packet level can be calculated using the columns Application Layer Arrival Time and Physical Layer End Time in the packet trace as:

\[End-to-End\ Delay \;=\; PHY\ LAYER\ END\ TIME(US)\;-\;APP\ LAYER\ ARRIVAL\ TIME(US)\]
  • Calculate the average end-to-end packet delay by taking the mean of the difference between Phy Layer End Time and App Layer Arrival Time columns.

NOTE: To calculate the average number of packets in the queue, refer to the experiment on Throughput and Bottleneck Server Analysis. A pandas sketch of the end-to-end delay computation described above follows.
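Again, the column names below are assumptions and may need adjusting to the actual trace export:

```python
import pandas as pd

trace = pd.read_csv("packet_trace.csv")        # hypothetical CSV export of the packet trace
app   = trace[trace["CONTROL PACKET TYPE/APP NAME"] == "App1_CUSTOM"]
rxd   = app[app["RECEIVER_ID"] == "NODE-2"]    # packets that reached Wired Node 2

delay_us     = rxd["PHY LAYER END TIME(US)"] - rxd["APP LAYER ARRIVAL TIME(US)"]
arrival_rate = len(rxd) / (trace["PHY LAYER END TIME(US)"].max() * 1e-6)   # pkts/s over the run

print("Average end-to-end delay (us):", delay_us.mean())
print("Little's law estimate of packets in the network:",
      arrival_rate * delay_us.mean() * 1e-6)
```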

Results

In Table-7, we report the flow average inter-arrival time (AIAT) and the corresponding application traffic generation rate (TGR), input flow rate (at the physical layer), average number of packets in the system, end-to-end packet delay in the network and packet loss rate.

| AIAT v (µs) | App layer traffic gen rate L/v (Mbps) | PHY rate with overheads (Mbps) | Arrival rate (pkts/sec) | End-to-end packet delay (µs) | Avg no. of pkts in system |
|---|---|---|---|---|---|
| 11680 | 1 | 1.037 | 86 | 11282.19 | 0.97 |
| 5840 | 2 | 2.074 | 171 | 11367.91 | 1.95 |
| 3893 | 3.0003 | 3.1112 | 257 | 11474.12 | 2.95 |
| 2920 | 4 | 4.1479 | 342 | 11621.20 | 3.98 |
| 2336 | 5 | 5.1849 | 428 | 11833.88 | 5.05 |
| 1947 | 5.999 | 6.2209 | 514 | 12142.78 | 6.24 |
| 1669 | 6.982 | 7.257 | 599 | 12663.59 | 7.59 |
| 1460 | 8 | 8.2959 | 685 | 13846.73 | 9.73 |
| 1298 | 8.9985 | 9.3313 | 770 | 17900.40 | 13.78 |
| 1284 | 9.0966 | 9.433 | 779 | 18905.81 | 14.72 |
| 1270 | 9.1969 | 9.537 | 787 | 20300.54 | 15.98 |
| 1256 | 9.2994 | 9.6433 | 796 | 22083.41 | 17.62 |
| 1243 | 9.3966 | 9.7442 | 805 | 25255.67 | 20.31 |
| 1229 | 9.5037 | 9.8552 | 814 | 31604.84 | 25.71 |
| 1217 | 9.5974 | 9.9523 | 822 | 42665.76 | 35.05 |

Table-7: Packet arrival rate, average number of packets in the system, end-to-end delay and packet loss rate. In this table “Average number of packets in the system” is calculated as “Arrival rate” times “End-to-end Delay”.

We can infer the following from Table-7:

  • The average end-to-end packet delay (between the source and the destination) is bounded below by the sum of the packet transmission durations and the propagation delays of the constituent links. This value is equal to \(2 \times 12 \ \mu s\) (transmission time in the node–router links) + \(1211 \ \mu s\) (transmission time in the router–router link) + \(10000 \ \mu s\) (propagation delay in the router–router link) which is \(11235 \ \mu s\).

  • As the input flow rate increases, the packet delay increases as well (due to congestion and queueing in the intermediate routers). As the input flow rate matches or exceeds the bottleneck link capacity, the end-to-end packet delay increases unbounded (limited by the buffer size).

  • The average number of packets in the network can be found to be equal to the product of the average end-to-end packet delay and the average input flow rate into the network. This is a validation of Little’s law; a one-line numerical check using a row of Table-7 is given below. In cases where the packet loss rate is positive, the arrival rate is to be multiplied by (1 - packet loss rate).
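As a quick check, take the Table-7 row with AIAT = 1229 µs (arrival rate 814 pkts/s, end-to-end delay 31604.84 µs):

```python
arrival_rate_pps = 814              # pkts/s, Table-7 row with AIAT = 1229 us
delay_s = 31604.84e-6               # measured end-to-end delay (s)
print(arrival_rate_pps * delay_s)   # ~25.7, consistent with the 25.71 packets reported
```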

Independent verification of Little’s law

We know that from Little’s law

\[\text{Average number of packets in the system} \;=\; Arrival\ rate \times Delay\]

Additionally, we know that in simulation

\[\text{Average number of packets in the system} \;=\; Packet\ generated - Packet\ received\]

We verify that the Average number of packets in the system computed from these two independent methods matches.

Since Little’s law deals with averages, a good match between theory and simulation is improbable from any one trial. We therefore run multiple simulations for each scenario by changing the seed for the random number generator, and at the end of each simulation obtain (i) Packets Generated, and (ii) Packets Received. We then subtract packets received from packets generated for each simulation run and then average over all simulation runs. This data is then compared against the Average number of packets in the system obtained theoretically.

Consider the case where \(AIAT = 1229 \ \mu s\). We run 10 simulations, as shown in the table below.

| RNG Seed 1 | RNG Seed 2 | Arrival Rate (Pkts/Sec) | End to End Delay (µs) | No. of pkts in the system (Theory) | Pkts Generated | Pkts Received | No. of pkts in the system (Simulation) |
|---|---|---|---|---|---|---|---|
| 1 | 2 | 814 | 30795.90699 | 25.06 | 81461 | 81430 | 31 |
| 2 | 3 | 814 | 27154.86007 | 22.10 | 81078 | 81056 | 22 |
| 3 | 4 | 814 | 34090.20559 | 27.74 | 81776 | 81749 | 27 |
| 4 | 5 | 814 | 28166.82512 | 22.92 | 81399 | 81377 | 22 |
| 5 | 6 | 814 | 26717.70032 | 21.74 | 81309 | 81279 | 30 |
| 6 | 7 | 814 | 32580.07104 | 26.51 | 81498 | 81456 | 42 |
| 7 | 8 | 814 | 28673.19279 | 23.33 | 81431 | 81426 | 5 |
| 8 | 9 | 814 | 31786.77442 | 26.02 | 81873 | 81860 | 13 |
| 9 | 10 | 814 | 28730.26673 | 23.38 | 81241 | 81231 | 10 |
| 10 | 11 | 814 | 29707.92242 | 24.17 | 81537 | 81515 | 22 |

Avg No. of pkts in the system (Theory): 24.28

Avg No. of pkts in the system (Simulation): 22.4

Table-8: Results for validating the number of packets in systems for 1229 μs IAT.

Observe that we are generating ≈80,000 packets, and the results from simulation and theory fall within a margin of about 2 packets, which shows good accuracy. Results for each of the average inter-arrival times (AIATs) are provided in the Appendix.

Average queue length and the average queuing delay

In Table-9, we report the average queue and average queueing delay at the link buffers (at Wired Node 1, Router 3 and Router 4) and the average end-to-end packet delay as a function of the input flow rate.

| Input flow rate (Mbps) | Arrival rate (pkts/sec) | Avg pkts in queue: Node 1 | Avg pkts in queue: Router 3 | Avg pkts in queue: Router 4 | Avg queueing delay (µs): Node 1 | Avg queueing delay (µs): Router 3 | Avg queueing delay (µs): Router 4 | End-to-end packet delay (µs) |
|---|---|---|---|---|---|---|---|---|
| 1.037 | 86 | 0 | 0 | 0 | 0.008 | 67.55 | 0 | 11270.07 |
| 2.074 | 171 | 0 | 0 | 0 | 0.015 | 153.26 | 0 | 11355.79 |
| 3.1112 | 257 | 0 | 0.08 | 0 | 0.021 | 259.47 | 0 | 11462.00 |
| 4.1479 | 342 | 0 | 0.13 | 0 | 0.029 | 406.54 | 0 | 11609.08 |
| 5.1849 | 428 | 0 | 0.26 | 0 | 0.036 | 619.21 | 0 | 11821.70 |
| 6.2209 | 514 | 0 | 0.45 | 0 | 0.046 | 928.11 | 0 | 12130.67 |
| 7.257 | 599 | 0 | 0.98 | 0 | 0.054 | 1488.91 | 0 | 12651.48 |
| 8.2959 | 685 | 0 | 1.93 | 0 | 0.062 | 2632.03 | 0 | 13834.63 |
| 9.3313 | 770 | 0 | 5.34 | 0 | 0.070 | 6625.60 | 0 | 17828.18 |
| 9.433 | 779 | 0 | 6.83 | 0 | 0.070 | 7991.14 | 0 | 18905.81 |
| 9.537 | 787 | 0 | 7.82 | 0 | 0.071 | 9095.84 | 0 | 20300.54 |
| 9.6433 | 796 | 0 | 11.02 | 0 | 0.072 | 11106.29 | 0 | 22088.47 |
| 9.7442 | 805 | 0 | 11.18 | 0 | 0.073 | 14040.96 | 0 | 25255.66 |
| 9.8552 | 814 | 0 | 16.44 | 0 | 0.074 | 20313.30 | 0 | 31604.83 |
| 9.9523 | 822 | 0 | 25.7 | 0 | 0.073 | 31451.04 | 0 | 42665.76 |
| 10.0598 | 831 | 0 | 43.23 | 0 | 0.074 | 51441.43 | 0 | 62660.95 |
| 10.1611 | 839 | 0 | 93.14 | 0 | 0.075 | 112374.10 | 0 | 123595.75 |
| 10.2649 | 847 | 0 | 437.92 | 0 | 0.076 | 518145.71 | 0 | 528204.94 |
| 11.4069 | 942 | 0 | 3873.8 | 0 | 0.085 | 4610484.2 | 0 | 4620748.06 |
| 12.4481 | 1028 | 0 | 4111.3 | 0 | 0.093 | 5468684.0 | 0 | 5479858.08 |
| 13.4873 | 1113 | 0 | 4859.1 | 0 | 0.096 | 5786635.9 | 0 | 5798190.08 |
| 14.5228 | 1199 | 0 | 5185.6 | 0 | 0.106 | 5953385.9 | 0 | 5964588.20 |
| 15.5481 | 1284 | 0 | 5275.6 | 0 | 0.113 | 6055075.2 | 0 | 6066277.82 |

Table-9: Average queue and average queueing delay in the intermediate buffers and end-to-end packet delay.

We can infer the following from Table-9.

  • There is queue buildup as well as queueing delay at Router_3 (Link 2) as the input flow rate increases. Clearly, link 2 is the bottleneck link where the packets see large queueing delay.

  • As the input flow rate matches or exceeds the bottleneck link capacity, the average queueing delay at the (bottleneck) server increases unbounded. Here, we note that the maximum queueing delay is limited by the buffer size (8 MB) and the link capacity (10 Mbps); an upper bound is \(\frac{8 \times 1024 \times 1024 \times 8}{10^{7}} \approx 6.7\) seconds (a one-line check is given after this list).

  • The average number of packets in a queue can be found to be equal to the product of the average queueing delay and the average input flow rate into the network. This is again a validation of Little’s law. In cases where the packet loss rate is positive, the arrival rate is to be multiplied by (1 - packet loss rate).

  • The average end-to-end packet delay can be found to be equal to the sum of the packet transmission delays (12.112 µs on link 1, 1211 µs on link 2, 12.112 µs on link 3), the propagation delay (10000 µs) and the average queueing delays in the three links.
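The buffer-limited bound quoted in the second bullet above is a one-line computation:

```python
# An 8 MB buffer drained at 10 Mbps holds at most ~6.7 seconds' worth of bits.
print(8 * 1024 * 1024 * 8 / 10e6)   # ~6.71 seconds
```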

For clarity, we present the following plots. In Figure-46, we plot the average end-to-end packet delay as a function of the traffic generation rate. We note that the average packet delay increases unbounded as the traffic generation rate matches or exceeds the bottleneck link capacity.

Figure 46a

a) Linear Scale

Figure 46b

b) Log Scale

Figure-46: Average end-to-end packet delay as a function of the traffic generation rate.

In Figure-47, we plot the queueing delay experienced by few packets at the buffers of Links 1 and 2 for two different input flow rates. We note that the packet delay is a stochastic process and is a function of the input flow rate and the link capacity as well.

Figure 47a

a) At Wired Node 1 for TGR = 8 Mbps

Figure 47b

b) At Router 3 for TGR = 8 Mbps

Figure 47c

c) At Wired Node 1 for TGR = 9.5037 Mbps

Figure 47d

d) At Router 3 for TGR = 9.5037 Mbps

Figure-47: Queueing Delay of packets at Wired_Node_1 (Link 1) and Router_3 (Link 2) for two different traffic generation rates

Bottleneck Server Analysis as M/G/1 Queue

Suppose that the application packet inter-arrival times are i.i.d. with an exponential distribution. From the M/G/1 queue analysis (in fact, M/D/1 queue analysis), we know that the average queueing delay at the link buffer (assuming a large buffer size) must be

\[\text{average queueing delay} \;=\; \frac{1}{\mu} \;+\; \frac{1}{2\mu} \cdot \frac{\rho}{1 - \rho}, \qquad \text{average queue} \;=\; \lambda \times \text{average queueing delay}\]

where \(\rho\) is the offered load to the link, \(\lambda\) is the input flow rate in packet arrivals per second, and \(\mu\) is the service rate of the link in packets served per second. Notice that the average queueing delay increases unbounded as \(\rho \to 1\).
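The formula is straightforward to evaluate; the sketch below (assuming the M/D/1 expression quoted above and the 10 Mbps bottleneck link of this experiment) shows how the predicted queueing delay grows with the offered load \(\rho\):

```python
mu = 10e6 / 12112.0        # service rate of the 10 Mbps link in packets/s (12112-bit packets)
for rho in (0.1, 0.5, 0.8, 0.9, 0.95, 0.99):
    lam = rho * mu                                    # arrival rate in packets/s
    delay_s = 1/mu + (1/(2*mu)) * rho / (1 - rho)     # average queueing delay (s), per the M/D/1 formula
    avg_queue = lam * delay_s                         # Little's law
    print(rho, round(delay_s * 1e6, 1), round(avg_queue, 2))
# The predicted delay grows without bound as rho -> 1, as seen in Figure-48.
```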

_images/Figure-481.png

Figure-48: Average queueing delay (in seconds) at the bottleneck link 2 (at Router 3) as a function of the offered load.

In Figure-48, we plot the average queueing delay obtained from simulation and the value predicted by the M/D/1 formula above (the bottleneck analysis) as a function of the offered load \(\rho\). Clearly, the bottleneck link analysis predicts the average queueing delay seen in simulation very well. Also, we note from the formula that the network performance depends on \(\lambda\) and \(\mu\) only through \(\rho = \lambda/\mu\).

Appendix

IAT

RNG Seed

1

RNG Seed

2

Arrival Rate (Pkts/Sec)

End to End Delay (µs)

No of pkts in the system (Theory)

Pkts Generated

Pkts Received

No. of pkts in the system(Sim)

11680

1

2

86

11283.56

0.97

8623

8622

1

11680

2

3

86

11278.78

0.97

8535

8535

0

11680

3

4

86

11280.88

0.97

8566

8566

0

11680

4

5

86

11285.21

0.97

8538

8538

0

11680

5

6

86

11277.80

0.97

8514

8514

0

11680

6

7

86

11283.66

0.97

8583

8583

0

11680

7

8

86

11279.83

0.97

8577

8577

0

11680

8

9

86

11283.53

0.97

8562

8561

1

11680

9

10

86

11282.39

0.97

8442

8442

0

11680

10

11

86

11283.05

0.97

8472

8472

0

11680

Avg No. of pkts in the system (Theory)

0.97

Avg No. of pkts in system (Simulation)

0.3

Table-10: Results for validating the number of packets in systems for 11680 μs IAT.

IAT

RNG Seed

1

RNG Seed

2

Arrival Rate (Pkts/Sec)

End to End Delay (µs)

No of pkts in the system (Theory)

Pkts Generated

Pkts Received

No. of pkts in the system(Sim)

5840

1

2

171

11364.89

1.95

17202

17200

2

5840

2

3

171

11366.71

1.95

17076

17073

3

5840

3

4

171

11363.02

1.95

17161

17156

5

5840

4

5

171

11362.85

1.95

17155

17155

0

5840

5

6

171

11364.89

1.95

16997

16996

1

5840

6

7

171

11371.79

1.95

17155

17155

2

5840

7

8

171

11367.94

1.95

17112

17110

2

5840

8

9

171

11370.50

1.95

17178

17177

1

5840

9

10

171

11365.97

1.95

17002

17000

2

5840

10

11

171

11366.97

1.95

17077

17076

1

5840

Avg No. of pkts in the system (Theory)

1.95

Avg No. of pkts in system (Simulation)

2.2

Table-11: Results for validating the number of packets in systems for 5840 μs IAT.

IAT

RNG Seed

1

RNG Seed

2

Arrival Rate (Pkts/Sec)

End to End Delay (µs)

No of pkts in the system (Theory)

Pkts Generated

Pkts Received

No. of pkts in the system(Sim)

3893

1

2

257

11475.81

2.95

25860

25854

6

3893

2

3

257

11477.72

2.95

25556

25555

1

3893

3

4

257

11472.91

2.95

25680

25676

4

3893

4

5

257

11479.34

2.95

25737

25735

2

3893

5

6

257

11481.69

2.95

25714

25710

4

3893

6

7

257

11480.16

2.95

25785

25782

3

3893

7

8

257

11480.76

2.95

25687

25683

4

3893

8

9

257

11475.06

2.95

25736

25732

4

3893

9

10

257

11470.14

2.95

25522

25519

3

3893

10

11

257

11478.44

2.95

25629

25625

4

3893

Avg No. of pkts in the system (Theory)

2.95

Avg No. of pkts in system (Simulation)

3.5

Table-12: Results for validating the number of packets in systems for 3893 μs IAT.

IAT

RNG Seed

1

RNG Seed

2

Arrival Rate (Pkts/Sec)

End to End Delay (µs)

No of pkts in the system (Theory)

Pkts Generated

Pkts Received

No. of pkts in the system(Sim)

2920

1

2

342

11623.20

3.98

34593

34589

4

2920

2

3

342

11626.28

3.98

34061

34059

2

2920

3

4

342

11614.69

3.98

34236

34228

8

2920

4

5

342

11625.14

3.98

34245

34240

5

2920

5

6

342

11629.86

3.98

34215

34209

6

2920

6

7

342

11631.24

3.98

34349

34344

5

2920

7

8

342

11627.24

3.98

34320

34318

2

2920

8

9

342

11625.62

3.98

34390

34387

3

2920

9

10

342

11617.11

3.98

34033

34030

3

2920

10

11

342

11624.00

3.98

34251

34243

8

2920

Avg No. of pkts in the system (Theory)

3.98

Avg No. of pkts in system (Simulation)

4.6

Table-13: Results for validating the number of packets in systems for 2920 μs IAT.

IAT

RNG Seed

1

RNG Seed

2

Arrival Rate (Pkts/Sec)

End to End Delay (µs)

No of pkts in the system (Theory)

Pkts Generated

Pkts Received

No. of pkts in the system(Sim)

2336

1

2

428

11838.59

5.07

43145

43139

6

2336

2

3

428

11816.87

5.06

42445

42440

5

2336

3

4

428

11816.53

5.06

42801

42794

7

2336

4

5

428

11840.76

5.07

42820

42813

7

2336

5

6

428

11827.88

5.06

42798

42793

5

2336

6

7

428

11847.27

5.06

42986

42982

4

2336

7

8

428

11835.97

5.07

42862

42853

9

2336

8

9

428

11836.49

5.06

42978

42974

4

2336

9

10

428

11826.71

5.06

42664

42659

5

2336

10

11

428

11825.99

5.06

42792

42787

5

2336

Avg No. of pkts in the system (Theory)

5.06

Avg No. of pkts in system (Simulation)

5.7

Table-14: Results for validating the number of packets in systems for 2336 μs IAT.

IAT

RNG Seed

1

RNG Seed

2

Arrival Rate (Pkts/Sec)

End to End Delay (µs)

No of pkts in the system (Theory)

Pkts Generated

Pkts Received

No. of pkts in the system(Sim)

1947

1

2

514

12149.61

6.24

51581

51575

6

1947

2

3

514

12132.72

6.23

51159

51147

12

1947

3

4

514

12118.78

6.22

51388

51383

5

1947

4

5

514

12158.55

6.24

51387

51378

9

1947

5

6

514

12146.59

6.24

51338

51334

4

1947

6

7

514

12156.33

6.24

51515

51510

5

1947

7

8

514

12146.09

6.24

51347

51339

8

1947

8

9

514

12168.86

6.25

51777

51772

5

1947

9

10

514

12150.55

6.24

51271

51265

6

1947

10

11

514

12141.11

6.24

51355

51351

4

1947

Avg No. of pkts in the system (Theory)

6.24

Avg No. of pkts in system (Simulation)

6.4

Table-15: Results for validating the number of packets in systems for 1947 μs IAT.

IAT

RNG Seed

1

RNG Seed

2

Arrival Rate (Pkts/Sec)

End to End Delay (µs)

No of pkts in the system (Theory)

Pkts Generated

Pkts Received

No. of pkts in the system(Sim)

1669

1

2

599

12694.47

7.61

60146

60138

8

1669

2

3

599

12659.47

7.59

59736

59728

8

1669

3

4

599

12652.29

7.58

60020

60011

9

1669

4

5

599

12712.53

7.62

59986

59975

11

1669

5

6

599

12687.75

7.60

59890

59878

12

1669

6

7

599

12704.78

7.61

60422

60230

12

1669

7

8

599

12680.35

7.60

60047

60038

9

1669

8

9

599

12728.32

7.63

60458

60456

9

1669

9

10

599

12676.02

7.59

59870

59858

12

1669

10

11

599

12687.70

7.60

60020

60015

5

1669

Avg No. of pkts in the system (Theory)

7.60

Avg No. of pkts in system (Simulation)

8.8

Table-16: Results for validating the number of packets in systems for 1669 μs IAT.

IAT

RNG Seed

1

RNG Seed

2

Arrival Rate (Pkts/Sec)

End to End Delay (µs)

No of pkts in the system (Theory)

Pkts Generated

Pkts Received

No. of pkts in the system(Sim)

1460

1

2

685

13841.40

9.48

68752

68734

18

1460

2

3

685

13703.41

9.39

68303

68295

8

1460

3

4

685

13720.77

9.40

68604

68597

7

1460

4

5

685

13871.45

9.50

68522

68514

8

1460

5

6

685

13825.66

9.47

68390

68380

10

1460

6

7

685

13880.92

9.53

68731

68719

12

1460

7

8

685

13840.10

9.48

68524

68508

16

1460

8

9

685

13928.14

9.54

68992

68985

7

1460

9

10

685

13771.70

9.43

68422

68411

11

1460

10

11

685

13842.15

9.48

68587

68571

16

1460

Avg No. of pkts in the system (Theory)

9.47

Avg No. of pkts in system (Simulation)

11.3

Table-17: Results for validating the number of packets in systems for 1460 μs IAT.

IAT

RNG Seed

1

RNG Seed

2

Arrival Rate (Pkts/Sec)

End to End Delay (µs)

No of pkts in the system (Theory)

Pkts Generated

Pkts Received

No. of pkts in the system(Sim)

1298

1

2

770

17642.09

13.59

77110

77094

16

1298

2

3

770

17284.05

13.32

76765

76743

22

1298

3

4

770

17906.68

13.80

77332

77316

16

1298

4

5

770

17942.55

13.82

77100

77089

11

1298

5

6

770

17330.87

13.35

76921

76913

8

1298

6

7

770

17980.07

13.75

77213

77201

12

1298

7

8

770

17793.99

13.71

77212

77190

22

1298

8

9

770

18201.85

14.02

77572

77566

6

1298

9

10

770

17616.95

13.57

76862

76856

9

1298

10

11

770

17972.09

13.85

77092

77083

9

1298

Avg No. of pkts in the system (Theory)

13.69

Avg No. of pkts in system (Simulation)

12.8

Table-18: Results for validating the number of packets in systems for 1298 μs IAT.

IAT

RNG Seed

1

RNG Seed

2

Arrival Rate (Pkts/Sec)

End to End Delay (µs)

No of pkts in the system (Theory)

Pkts Generated

Pkts Received

No. of pkts in the system(Sim)

1284

1

2

779

18677.06

14.55

77980

77947

33

1284

2

3

779

18228.55

14.20

77602

77594

8

1284

3

4

779

19086.73

14.87

78191

78171

20

1284

4

5

779

18916.42

14.73

77924

77903

21

1284

5

6

779

18205.63

14.18

77778

77757

21

1284

6

7

779

19228.28

14.93

78067

78045

12

1284

7

8

779

18554.90

14.68

77967

77951

16

1284

8

9

779

19222.07

14.97

78393

78373

20

1284

9

10

779

18531.55

14.43

77697

77682

10

1284

10

11

779

19097.05

14.87

77984

77974

10

1284

Avg No. of pkts in the system (Theory)

14.65

Avg No. of pkts in system (Simulation)

18.1

Table-19: Results for validating the number of packets in systems for 1284 μs IAT.

IAT

RNG Seed

1

RNG Seed

2

Arrival Rate (Pkts/Sec)

End to End Delay (µs)

No of pkts in the system (Theory)

Pkts Generated

Pkts Received

No. of pkts in the system(Sim)

1270

1

2

787

20051.08

15.79

78768

78761

7

1270

2

3

787

19407.29

15.28

78449

78429

20

1270

3

4

787

20760.65

16.35

79125

79091

34

1270

4

5

787

20133.15

15.85

78798

78766

22

1270

5

6

787

19314.91

15.21

78692

78666

26

1270

6

7

787

20993.89

16.53

78860

78849

11

1270

7

8

787

20085.40

15.82

78800

78791

9

1270

8

9

787

20607.46

16.23

79223

79209

14

1270

9

10

787

19836.88

15.62

78579

78562

17

1270

10

11

787

20480.04

16.13

78868

78847

21

1270

Avg No. of pkts in the system (Theory)

15.88

Avg No. of pkts in system (Simulation)

18.1

Table-20: Results for validating the number of packets in systems for 1270 μs IAT.

IAT

RNG Seed

1

RNG Seed

2

Arrival Rate (Pkts/Sec)

End to End Delay (µs)

No of pkts in the system (Theory)

Pkts Generated

Pkts Received

No. of pkts in the system(Sim)

1256

1

2

796

22133.73

17.62

79679

79649

30

1256

2

3

796

21074.87

16.78

79286

79272

14

1256

3

4

796

23681.15

18.85

80019

79998

21

1256

4

5

796

21731.11

17.30

79610

79601

9

1256

5

6

796

20840.28

16.59

79568

79562

6

1256

6

7

796

23481.11

18.70

79751

79728

23

1256

7

8

796

21819.70

17.37

79687

79664

23

1256

8

9

796

22823.29

18.07

80158

80130

28

1256

9

10

796

21188.70

17.37

79493

79466

27

1256

10

11

796

22451.04

17.88

79797

79776

22

1256

Avg No. of pkts in the system (Theory)

17.66

Avg No. of pkts in system (Simulation)

22

Table-21: Results for validating the number of packets in systems for 1256 μs IAT.

IAT

RNG Seed

1

RNG Seed

2

Arrival Rate (Pkts/Sec)

End to End Delay (µs)

No of pkts in the system (Theory)

Pkts Generated

Pkts Received

No. of pkts in the system(Sim)

1243

1

2

805

25180.05

20.26

80517

80498

19

1243

2

3

805

23621.91

19.00

80152

80138

14

1243

3

4

805

27685.64

22.27

80894

80844

50

1243

4

5

805

23932.33

19.25

80479

80447

32

1243

5

6

805

22965.12

18.48

80394

80378

12

1243

6

7

805

27007.94

21.73

80551

80542

9

1243

7

8

805

24249.05

19.51

80528

80500

28

1243

8

9

805

26040.47

20.95

80995

80959

36

1243

9

10

805

24433.37

19.66

80356

80336

20

1243

10

11

805

25106.62

20.20

80599

80591

8

1243

Avg No. of pkts in the system (Theory)

20.13

Avg No. of pkts in system (Simulation)

23.2

Table-22: Results for validating the number of packets in systems for 1243 μs IAT.

IAT

RNG Seed

1

RNG Seed

2

Arrival Rate (Pkts/Sec)

End to End Delay (µs)

No of pkts in the system (Theory)

Pkts Generated

Pkts Received

No. of pkts in the system(Sim)

1229

1

2

814

30795.91

25.06

81461

81430

31

1229

2

3

814

27154.86

22.10

81078

81056

22

1229

3

4

814

34090.21

27.74

81776

81749

27

1229

4

5

814

28166.83

22.92

81399

81377

22

1229

5

6

814

26717.70

21.74

81309

81279

30

1229

6

7

814

32580.07

26.51

81498

81456

42

1229

7

8

814

28673.19

23.33

81431

81426

5

1229

8

9

814

31786.77

25.86

81873

81860

13

1229

9

10

814

28730.27

23.38

81241

81231

10

1229

10

11

814

29707.92

24.17

81537

81515

20

1229

Avg No. of pkts in the system (Theory)

24.28

Avg No. of pkts in system (Simulation)

22.4

Table-23: Results for validating the number of packets in systems for 1229 μs IAT.

IAT

RNG Seed

1

RNG Seed

2

Arrival Rate (Pkts/Sec)

End to End Delay (µs)

No of pkts in the system (Theory)

Pkts Generated

Pkts Received

No. of pkts in the system(Sim)

1217

1

2

822

41953.86

34.47

82287

82246

41

1217

2

3

822

33117.16

27.21

81893

81870

23

1217

3

4

822

44215.58

36.33

82578

82519

59

1217

4

5

822

34847.11

28.63

82199

82176

23

1217

5

6

822

33705.32

27.70

82016

81905

22

1217

6

7

822

40737.94

33.47

82248

82240

8

1217

7

8

822

35997.53

29.58

82189

82176

13

1217

8

9

822

49519.81

40.69

82676

82667

9

1217

9

10

822

37348.67

30.69

81989

81979

10

1217

10

11

822

36543.17

30.03

82318

82305

13

1217

Avg No. of pkts in the system (Theory)

31.88

Avg No. of pkts in system (Simulation)

26.7

Table-24: Results for validating the number of packets in systems for 1217 μs IAT.

Throughput and Bottleneck Server Analysis (Level 2)

Introduction

An important measure of quality of a network is the maximum throughput available to an application process (we will also call it a flow) in the network. Throughput is commonly defined as the rate of transfer of application payload through the network, and is often computed as

\[\text{Throughput} = \frac{\text{application bytes transferred}}{\text{Transferred duration}} \ \text{bps}\]

A Single Flow Scenario

_images/Figure-491.png

Figure-49: A flow \( f \) passing through a link \( l \) of fixed capacity \( C_l \).

Application throughput depends on many factors, including the nature of the application, the transport protocol, the queueing and scheduling policies at the intermediate routers, the MAC protocol and PHY parameters of the links along the route, as well as the dynamic link and traffic profile in the network. A key and fundamental aspect of the network that limits or determines application throughput is the capacity of the constituent links (capacity may be defined at the MAC/PHY layer). Consider a flow \(f\) passing through a link \(l\) with fixed capacity \(C_l\) bps. Trivially, the amount of application data transferred via the link over a duration of \(T\) seconds is upper bounded by \(C_l \times T\) bits. Hence,

\[\text{Throughput} = \frac{\text{application bytes transferred}}{\text{Transferred duration}} \leq C_l \ \text{bps}\]

The upper bound is nearly achievable if the flow can generate sufficient input traffic to the link. Here, we would like to note that the actual throughput may be slightly less than the link capacity due to overheads in the communication protocols.

_images/Figure-50.png

Figure-50: A single flow \( f \) passing through a series of links. The link with the least capacity will be identified as the bottleneck link for the flow \( f \).

If a flow \(f\) passes through multiple links \(l \in L_f\) (in series), then, the application throughput will be limited by the link with the least capacity among them, i.e.,

\[\text{throughput} \leq \min_{i \in L_f} C_i \ \text{bps}\]

The link \(l_f^* = \arg \min_{i \in L_f} C_i\) may be identified as the bottleneck link for the flow \(f\). Typically, a server or a link that determines the performance of a flow is called the bottleneck server or bottleneck link for the flow. In the case where a single flow \(f\) passes through multiple links (\(L_f\)) in series, the link \(l_f^*\) will limit the maximum throughput achievable and is the bottleneck link for the flow \(f\).

A noticeable characteristic of the bottleneck link is queue (of packets of the flow) build-up at the bottleneck server. The queue tends to increase with the input flow rate and is known to grow unbounded as the input flow rate matches or exceeds the bottleneck link capacity.

_images/Figure-516.png

Figure-51: Approximation of a network using bottleneck server technique.

It is a common and useful technique to reduce a network to a bottleneck link (from the perspective of a flow or flows) to study throughput and queue buildup. For example, a network with two links (in series) can be approximated by a single link of capacity \(\min(C_1, C_2)\), as illustrated in Figure-51. Such analysis is commonly known as bottleneck server analysis. Single-server queueing models such as M/M/1, M/G/1, etc. can provide tremendous insight into flow and network performance with the bottleneck server analysis.

Multiple Flow Scenario

_images/Figure-521.png

Figure-52: Two flows \( f_1 \) and \( f_2 \) passing through a link \( l \) of capacity \( C_l \).

Consider a scenario where multiple flows compete for the network resources. Suppose that the flows interact at some link buffer/server, say \(\hat{l}\), and compete for capacity. In such scenarios, the link capacity \(C_{\hat{l}}\) is shared among the competing flows, and it is quite possible that the link becomes the bottleneck link for the flows (limiting throughput). Here again, the queue tends to increase with the combined input flow rate and will grow unbounded as the combined input flow rate matches or exceeds the bottleneck link capacity. A plausible bound on the per-flow throughput in this case is (under reasonable assumptions on the competing flows)

\[\text{throughput} = \frac{C_{\hat{l}}}{\text{number of flows competing for capacity at link } \hat{l}} \ \text{bps}\]

NetSim Simulation Setup

Open NetSim and click on Experiments > Internetworks > Network Performance > Throughput and Bottleneck Server Analysis, then click on the tile in the middle panel to load the example, as shown below in Figure-53.

_images/Figure-531.png

Figure-53: List of scenarios for the example of Throughput and Bottleneck Server Analysis

Part-1: A Single Flow Scenario

We will study a simple network setup with a single flow, illustrated in Figure-54, to review the definition of a bottleneck link and the maximum application throughput achievable in the network. An application process at Wired_Node_1 seeks to transfer data to an application process at Wired_Node_2. We consider a custom traffic generation process (at the application) that generates data packets of constant length (say, \(L\) bits) with i.i.d. inter-arrival times (say, with average inter-arrival time \(\nu\) seconds). The application traffic generation rate in this setup is \(\tfrac{L}{\nu}\) bits per second. We prefer to minimize the communication overheads and hence, will use UDP for data transfer between the application processes.

In this setup, we will vary the traffic generation rate by varying the average inter-arrival time \(\nu\) and review the average queue at the different links, packet loss rate and the application throughput.

Procedure

We will simulate the network setup illustrated in Figure-54 with the configuration parameters listed in detail in Table-25 to study the single flow scenario.

NetSim UI displays the configuration file corresponding to this experiment as shown below:

_images/Figure-541.png

Figure-54: Network set up for studying a single flow

The following set of procedures were done to generate this sample.

Step 1: Drop two wired nodes and two routers onto the simulation environment. The wired nodes and the routers are connected with wired links as shown in (See Figure-54).

Step 2: Click on the 'Set Traffic' tab in the ribbon at the top and configure a custom application between two wired nodes. To set the application properties as shown in below figure, click on the created application, expand the property panel on the right, and set the transport protocol to UDP. In the Packet Size tab, select Distribution as CONSTANT and set the value to 1460 bytes. In the Inter-Arrival Time tab, select Distribution as EXPONENTIAL and set the mean to 11680 microseconds.

_images/Figure-551.png

Figure-55: Application configuration window

Step 3: The properties of the wired nodes are left to the default values.

Step 4: Click the link ID (of a wired link) and expand the property panel on the right (see Figure-56). Set Max Uplink Speed and Max Downlink Speed to 10 Mbps for link 2 (the backbone link connecting the routers) and 1000 Mbps for links 1 and 3 (the access links connecting the Wired Nodes to the routers).

Set Uplink BER and Downlink BER as 0 for links 1, 2 and 3. Set Uplink Propagation Delay and Downlink Propagation Delay as 0 microseconds for the two access links 1 and 3 and 100 microseconds for the backbone link 2.

_images/Figure-561.png

Figure-56: Link Properties window

Step 5: Click on Router 3, expand the property panel on the right and set the Buffer Size to 8 MB in the network layer of Interface 2 (WAN), as shown in Figure-57.

_images/Figure-571.png

Figure-57: Router Properties window

Step 6: Click on the Configure Reports tab in the ribbon at the top and enable Packet Trace. The packet trace can be used for packet-level analysis.

Step 7: Click on the Run icon to open the Run Simulation window (see Figure-58), set the Simulation Time to 100 seconds and click on Run.

_images/Figure-581.png

Figure-58: Run Simulation window

Step 8: Now, repeat the simulation with different average inter-arrival times (such as 5840 µs, 3893 µs, 2920 µs, 2336 µs and so on). We vary the input flow rate by varying the average inter-arrival time. This allows us to identify the bottleneck link and the maximum achievable throughput.
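For a target traffic generation rate, the required mean inter-arrival time follows directly from \(\nu = L / \text{TGR}\). For example, to generate 2 Mbps with 1460-byte packets,

\[\nu = \frac{11680 \ \text{bits}}{2 \times 10^{6} \ \text{bps}} = 5840 \ \mu s,\]

which is the second value listed above.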

The detailed list of network configuration parameters is presented in Table-25.

Simulation Parameters

| Parameter | Value |
| LINK PARAMETERS | |
| Wired Link Speed (access links) | 1000 Mbps |
| Wired Link Speed (backbone link) | 10 Mbps |
| Wired Link BER | 0 |
| Wired Link Propagation Delay (access links) | 0 µs |
| Wired Link Propagation Delay (backbone link) | 100 µs |
| APPLICATION PARAMETERS | |
| Application | Custom |
| Source ID | 1 |
| Destination ID | 2 |
| Transport Protocol | UDP |
| Packet Size – Value | 1460 bytes |
| Packet Size – Distribution | Constant |
| Inter-Arrival Time – Mean | AIAT (µs) |
| Inter-Arrival Time – Distribution | Exponential |
| ROUTER PARAMETERS | |
| Buffer Size | 8 MB |
| MISCELLANEOUS | |
| Simulation Time | 100 seconds |
| Packet Trace | Enabled |

Table-25: Detailed Network Parameters

Performance Measure

In Table-26, we report the flow average inter-arrival time \(\nu\) and the corresponding application traffic generation rate, input flow rate (at the physical layer), average queue at the three buffers (of Wired Node 1, Router 3 and Router 4), average throughput (over the simulation time) and packet loss rate (computed at the destination).

Given the average inter-arrival time \(\nu\) and the application payload size \(L\) bits (here, \(1460 \times 8 = 11680\) bits), we have,

\[\text{Traffic generation rate} = \frac{L}{\nu} = \frac{11680}{\nu} \ \text{bps}\]
\[\text{input flow rate} = \frac{11680 + 54 \times 8}{\nu} = \frac{12112}{\nu} \ \text{bps}\]

where the packet overhead of 54 bytes is computed as \(54 = 8 (\text{UDP header}) + 20 (\text{IP header}) + 26 (\text{MAC + PHY header}) \ \text{bytes}\). Let \(Q_l(u)\) denote the instantaneous queue at link \(l\) at time \(u\). Then, the average queue at link \(l\) is computed as

\[\text{average queue at link } l = \frac{1}{T} \int_{0}^{T} Q_l(u) \, du \ \text{bits}\]

where \(T\) is the simulation time. The average throughput of the flow is computed as

\[\text{throughput} = \frac{\text{application bits transferred}}{T} \ \text{bps}\]

The packet loss rate is defined as the fraction of application data lost (here, due to buffer overflow at the bottleneck server):

\[\text{packet loss rate} = \frac{\text{application bytes not received at destination}}{\text{application bytes transmitted at source}}\]
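The following Python sketch collects these formulas in one place. The function names are ours, and the example values are taken from the first row of Table-26.

```python
L_BITS        = 1460 * 8      # application payload per packet, in bits
OVERHEAD_BITS = 54 * 8        # UDP + IP + MAC/PHY headers, in bits
T_S           = 100.0         # simulation time, in seconds

def traffic_generation_rate_bps(nu_s):
    """Application traffic generation rate L/nu, in bits per second."""
    return L_BITS / nu_s

def input_flow_rate_bps(nu_s):
    """Physical-layer input flow rate, including the 54-byte per-packet overhead."""
    return (L_BITS + OVERHEAD_BITS) / nu_s

def throughput_bps(app_bits_received):
    """Average application throughput over the simulation time."""
    return app_bits_received / T_S

def packet_loss_rate(bits_sent, bits_received):
    """Fraction of application data lost (e.g., to buffer overflow)."""
    return (bits_sent - bits_received) / bits_sent

# Example: nu = 11680 us gives 1.000 Mbps generated and about 1.037 Mbps
# offered to the physical layer, as in the first row of Table-26.
nu = 11680e-6
print(traffic_generation_rate_bps(nu) / 1e6)   # ~1.000
print(input_flow_rate_bps(nu) / 1e6)           # ~1.037
```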

Average Queue Computation from Packet Trace

  • Open Packet Trace from the Simulation Results window as shown below

_images/Figure-591.png

Figure-59: Opening Packet trace from Simulation results window

  • In the packet trace spreadsheet, click on Insert in the top ribbon → select Pivot Table, as shown in the figure below.

_images/Figure-601.png

Figure-60: Creating pivot table from packet trace

_images/Figure-612.png

Figure-61: Top Ribbon

  • The packet trace range is selected and Table 1 is chosen by default; just click on OK.

_images/Figure-621.png

Figure-62: Packet Trace Pivot Table

  • A blank Pivot Table then appears.

_images/Figure-631.png

Figure-63: Blank Pivot Table

  • Drag and drop PACKET ID into the Values field twice; drag CONTROL PACKET TYPE/APP NAME, TRANSMITTER ID and RECEIVER ID into the Filters field; and drag NW LAYER ARRIVAL TIME (US) into the Rows field (see Figure-64).

  • For both Values fields, change Sum of PACKET ID via Values Field Settings → select Count → OK. Then set the filters: CONTROL PACKET TYPE/APP NAME to APP1 CUSTOM, TRANSMITTER ID to Router 3 and RECEIVER ID to Router 4.

_images/Figure-641.png

Figure-64: Adding fields into Filter, Columns, Rows and Values

  • Right-click the first value under Row Labels → Group → set the 'By' value to 1000000.

  • In the Values area, click on Count of PACKET ID2 → Values Field Settings → Show Values As → Running Total In → click on OK.

  • Create one more Pivot Table: click on Insert in the top ribbon → select Pivot Table.

  • Select the packet trace data by pressing Ctrl + A → click OK.

  • A blank Pivot Table then appears (see Figure-65).

  • Drag and drop PACKET ID into the Values field twice; drag CONTROL PACKET TYPE/APP NAME, TRANSMITTER ID and RECEIVER ID into the Filters field; and drag PHY LAYER ARRIVAL TIME (US) into the Rows field (see Figure-65).

  • For both Values fields, change Sum of PACKET ID via Values Field Settings → select Count → OK. Then set the filters: CONTROL PACKET TYPE/APP NAME to APP1 CUSTOM, TRANSMITTER ID to Router 3 and RECEIVER ID to Router 4.

  • In the second Pivot Table, right-click the first value under Row Labels → Group → set the 'By' value to 1000000.

_images/Figure-651.png

Figure-65: Create one more Pivot Table and Add All Fields

  • In the Values area, click on Count of PACKET ID → Values Field Settings → Show Values As → Running Total In → click on OK.

  • Calculate the average queue by taking the mean of the number of packets in queue at every time interval during the simulation.

  • The number of packets in the queue in each interval is the difference between the Count of PACKET ID2 in Column C and the Count of PACKET ID2 in Column G. Note down the average value of this difference (see Figure-66).

_images/Figure-661.png

Figure-66: Average Packets in Queue

\[\text{Packet Loss Rate (in percent)} = \frac{\text{Packet Generated} - \text{Packet Received}}{\text{Packet Generated}} \times 100\]
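If you prefer a scripted alternative to the Excel pivot tables, the pandas sketch below estimates the time-average queue at the bottleneck link directly from the exported packet trace: each packet contributes the time it spends in the Router 3 buffer (from its network-layer arrival to its physical-layer departure), and dividing the total by the simulation time gives the average queue in packets. The file name and the exact column headers and filter strings are assumptions; check them against your own trace file.

```python
import pandas as pd

# Hypothetical file name; export the NetSim packet trace to CSV first.
trace = pd.read_csv("Packet_Trace.csv")

# Keep only the data packets of the flow as they cross the bottleneck link
# (transmitted by Router 3 towards Router 4). The exact header and filter
# strings may differ between NetSim versions.
flow = trace[(trace["CONTROL PACKET TYPE/APP NAME"] == "App1 CUSTOM")
             & (trace["TRANSMITTER ID"] == "ROUTER-3")
             & (trace["RECEIVER ID"] == "ROUTER-4")]

T_US = 100e6  # simulation time, in microseconds (100 s)

# Time each packet spends in the Router 3 buffer: from its network-layer
# arrival to its physical-layer departure on the WAN interface.
time_in_queue_us = (flow["PHY LAYER ARRIVAL TIME (US)"]
                    - flow["NW LAYER ARRIVAL TIME (US)"])

# Time-average queue (in packets): total packet-microseconds spent queued,
# divided by the simulation time.
avg_queue_pkts = time_in_queue_us.sum() / T_US
print(f"Average queue at the bottleneck link: {avg_queue_pkts:.2f} packets")
```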

Results

In Table-26, we report the flow average inter-arrival time (AIAT) and the corresponding application traffic generation rate (TGR), input flow rate, average queue at the three buffers (of Wired Node 1, Router 3 and Router 4), average throughput and packet loss rate.

| AIAT ν (µs) | TGR L/ν (Mbps) | Input Flow Rate (Mbps) | Avg. Queue at Node 1 (pkts) | Avg. Queue at Router 3 (pkts) | Avg. Queue at Router 4 (pkts) | Avg. Throughput (Mbps) | Packet Loss Rate (%) |
| 11680 | 1.0000 | 1.0370 | 0 | 0.00 | 0 | 0.999925 | 0 |
| 5840 | 2.0000 | 2.0740 | 0 | 0.02 | 0 | 1.998214 | 0 |
| 3893 | 3.0003 | 3.1120 | 0 | 0.09 | 0 | 2.999309 | 0 |
| 2920 | 4.0000 | 4.1479 | 0 | 0.11 | 0 | 3.996429 | 0 |
| 2336 | 5.0000 | 5.1849 | 0 | 0.32 | 0 | 5.010136 | 0 |
| 1947 | 5.9990 | 6.2209 | 0 | 0.60 | 0 | 6.000016 | 0.01 |
| 1669 | 6.9982 | 7.2570 | 0 | 0.92 | 0 | 7.004262 | 0.01 |
| 1460 | 8.0000 | 8.2959 | 0 | 2.05 | 0 | 8.028131 | 0.01 |
| 1298 | 8.9985 | 9.3313 | 0 | 5.43 | 0 | 9.009718 | 0.01 |
| 1168 | 9.9966 | 9.4330 | 0 | 6.78 | 0 | 9.104629 | 0.01 |
| 1270 | 9.1969 | 9.5370 | 0 | 7.78 | 0 | 9.208629 | 0.01 |
| 1256 | 9.2994 | 9.6433 | 0 | 7.81 | 0 | 9.313515 | 0.01 |
| 1243 | 9.3966 | 9.7442 | 0 | 11.32 | 0 | 9.416182 | 0.02 |
| 1229 | 9.5037 | 9.8552 | 0 | 16.21 | 0 | 9.522470 | 0.02 |
| 1217 | 9.5974 | 9.9523 | 0 | 25.89 | 0 | 9.614626 | 0.04 |
| 1204 | 9.7010 | 10.0598 | 0 | 42.80 | 0 | 9.717994 | 0.05 |
| 1192 | 9.7987 | 10.1611 | 0 | 93.19 | 0 | 9.795950 | 0.26 |
| 1180 | 9.8983 | 10.2644 | 0 | 445.40 | 0 | 9.807698 | 1.15 |
| 1168 | 10.0000 | 10.3699 | 0 | 856.25 | 0 | 9.808981 | 2.09 |
| 1162 | 10.9981 | 11.4049 | 0 | 3989.11 | 0 | 9.811667 | 11.00 |
| 973 | 12.0041 | 12.4481 | 0 | 4752.38 | 0 | 9.811667 | 18.53 |
| 898 | 13.0067 | 13.4878 | 0 | 5035.80 | 0 | 9.811667 | 24.75 |
| 834 | 14.0048 | 14.5228 | 0 | 5185.92 | 0 | 9.811667 | 29.75 |
| 779 | 14.9936 | 15.5481 | 0 | 5275.58 | 0 | 9.811667 | 34.75 |

Table-26: Average queue, throughput and loss rate as a function of traffic generation rate.

We can infer the following from Table-26.

  1. The input flow rate is slightly larger than the application traffic generation rate. This is due to the overheads in communication.

  2. There is queue buildup at Router 3 (Link 2) as the input flow rate increases. So, Link 2 is the bottleneck link for the flow.

  3. As the input flow rate increases, the average queue increases at the (bottleneck) server at Router 3. The traffic generation rate matches the application throughput (with nearly zero packet loss rate) as long as the input flow rate is less than the capacity of the link.

  4. As the input flow rate reaches or exceeds the link capacity, the average queue at the (bottleneck) server at Router 3 grows unboundedly (limited only by the buffer size) and the packet loss rate increases as well.

For clarity, we have also made the following plots. In Figure-67, we plot the application throughput as a function of the traffic generation rate. We note that the application throughput saturates as the traffic generation rate (in fact, the input flow rate) approaches the link capacity. The maximum application throughput achievable in this setup is 9.81 Mbps (for a bottleneck link with capacity 10 Mbps).

_images/Figure-671.png

Figure-67: Application throughput as a function of the traffic generation rate.

In Figure-68, we plot the queue evolution at the buffers of Links 1 and 2 for two different input flow rates. We note that the buffer occupancy is a stochastic process and is a function of the input flow rate as well as the link capacity.

a) At Wired Node 1 for TGR = 8 Mbps

b) At Router 3 for TGR = 8 Mbps

c) At Wired Node 1 for TGR = 9.5037 Mbps

d) At Router 3 for TGR = 9.5037 Mbps

Figure-68: Queue evolution at Wired Node 1 (Link 1) and Router 3 (Link 2) for two different traffic generation rates

In Figure-69, we plot the average queue at the bottleneck link 2 (at Router 3) as a function of the traffic generation rate. We note that the average queue increases gradually before it increases unboundedly near the link capacity.

_images/Figure-691.png

Figure-69: Average queue (in packets) at the bottleneck link 2 (at Router 3) as a function of the traffic generation rate

Bottleneck Server Analysis as M/G/1 Queue

Let us now analyze the network by focusing on the flow and the bottleneck link (Link 2). Consider a single flow (with average inter-arrival time \(\nu\)) into a bottleneck link (with capacity \(C\)). Let us denote the input flow rate in packet arrivals per second as \(\lambda\), where \(\lambda = 1/\nu\). Let us also denote the service rate of the bottleneck server in packets served per second as \(\mu\), where

\[\mu = \frac{C}{L + 54 \times 8} .\]

Then,

\[\rho = \lambda \times \frac{1}{\mu} = \frac{\lambda}{\mu}\]

denotes the offered load to the server. When \(\rho < 1\), \(\rho\) also denotes (approximately) the fraction of time the server is busy serving packets (i.e., \(\rho\) denotes link utilization). When \(\rho \ll 1\), then the link is barely utilized. When \(\rho > 1\), then the link is said to be overloaded or saturated (and the buffer will grow unbounded). The interesting regime is when \(0 < \rho < 1\).

Suppose that the application packet inter-arrival times are i.i.d. with exponential distribution. From the M/G/1 queue analysis (in fact, M/D/1 queue analysis, since the packets have constant size), we know that the average queue at the link buffer (assuming a large buffer size) must be:

\[\text{average queue} = \rho \times \frac{1}{2} \left( \frac{\rho}{1 - \rho} \right) = \frac{\rho^2}{2(1-\rho)}, \quad 0 < \rho < 1 \qquad (1)\]

where \(\rho\) is the offered load. In Figure-69, we also plot the average queue from (1) (from the bottleneck analysis) and compare it with the average queue from the simulation. You will notice that the bottleneck link analysis predicts the average queue (from simulation) very well.

An interesting fact is that the average queue depends on \(\lambda\) and \(\mu\) only as \(\rho = \lambda / \mu\).
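A minimal Python sketch of this bottleneck calculation is given below; the helper names are ours, and the constants mirror the single-flow setup of Part 1.

```python
C_BPS    = 10e6         # bottleneck link capacity (Link 2), in bps
L_BITS   = 1460 * 8     # application payload per packet, in bits
OVERHEAD = 54 * 8       # per-packet overhead, in bits

def offered_load(nu_s):
    """rho = lambda / mu for a mean inter-arrival time nu (in seconds)."""
    lam = 1.0 / nu_s                   # packet arrivals per second
    mu = C_BPS / (L_BITS + OVERHEAD)   # packets served per second
    return lam / mu

def md1_average_queue(rho):
    """M/D/1 average queue (in packets), valid for 0 < rho < 1."""
    return rho * 0.5 * (rho / (1.0 - rho))

for nu_us in (2336, 1460, 1298):       # a few AIAT values from Table-26
    rho = offered_load(nu_us * 1e-6)
    print(f"AIAT {nu_us} us: rho = {rho:.3f}, "
          f"predicted average queue = {md1_average_queue(rho):.2f} pkts")
```

For AIAT = 1460 µs, for example, this gives \(\rho \approx 0.83\) and a predicted average queue of about 2.0 packets, in line with the 2.05 packets observed in Table-26.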

Part - 2: Two Flow Scenario

We will consider a simple network setup with two flows, illustrated in Figure-70, to review the definition of a bottleneck link and the maximum application throughput achievable in the network. An application process at Wired Node 1 seeks to transfer data to an application process at Wired Node 2, and an application process at Wired Node 3 seeks to transfer data to an application process at Wired Node 4. The two flows interact at the buffer of Router 5 (Link 3) and compete for the link capacity. We will again consider custom traffic generation processes (at the application processes) that generate data packets of constant length (\(L\) bits) with i.i.d. inter-arrival times drawn from a common distribution (with average inter-arrival time \(\nu\) seconds).

The application traffic generation rate in this setup is:

\[\frac{L}{\nu} \ \text{bits per second (for either application)} .\]

In this setup, we will vary the traffic generation rate of the two sources (by identically varying the average inter-arrival time \(\nu\)) and review the average queue at the different links, the application throughputs and the packet loss rates.
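Since the two sources are identical, the combined input flow rate at the bottleneck is twice the per-flow input flow rate. For example, for \(\nu = 2336\ \mu s\),

\[\text{combined input flow rate} = 2 \times \frac{12112}{2336 \times 10^{-6}} \approx 10.37 \ \text{Mbps},\]

which already exceeds the 10 Mbps bottleneck capacity; this is also the first row of Table-27 in which significant queueing and packet loss appear.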

Procedure

We will simulate the network setup illustrated in Figure-70 with the configuration parameters listed in detail in Table-25 to study the two-flow scenario. We will assume identical configuration parameters for the access links and the two application processes.

_images/Figure-70.png

Figure-70: Network set up for studying two flows

Step 1: Right-click the link ID (of a wired link) and select Properties to access the link's properties window. Set Max Uplink Speed and Max Downlink Speed to 10 Mbps for link 3 (the backbone link connecting the routers) and to 1000 Mbps for links 1, 2, 4 and 5 (the access links connecting the wired nodes to the routers). Set Uplink BER and Downlink BER to 0 for all links. Set Uplink Propagation Delay and Downlink Propagation Delay to 0 microseconds for links 1, 2, 4 and 5, and to 100 microseconds for the backbone link 3.

Step 2: Enable Packet trace in NetSim GUI.

Step 3: Set the simulation time to 100 seconds for all samples.

Results

In Table-27, we report the common flow average inter-arrival time (AIAT) and the corresponding application traffic generation rate (TGR), input flow rate, combined input flow rate, average queue at the buffers (of Wired Node 1, Wired Node 3 and Router 5), average throughputs and packet loss rates.

| AIAT ν (µs) | TGR L/ν (Mbps) | Input Flow Rate (Mbps) | Combined Input Flow Rate (Mbps) | Avg. Queue at WN1 (pkts) | Avg. Queue at WN2 (pkts) | Avg. Queue at Router 5 (pkts) | App1 Throughput (Mbps) | App2 Throughput (Mbps) | App1 Packet Loss Rate (%) | App2 Packet Loss Rate (%) |
| 11680 | 1.000 | 1.037 | 2.074 | 0 | 0 | 0.030 | 0.999 | 1.002 | 0 | 0 |
| 5840 | 2.000 | 2.074 | 4.148 | 0 | 0 | 0.160 | 1.998 | 2.006 | 0 | 0 |
| 3893 | 3.000 | 3.111 | 6.222 | 0 | 0 | 0.320 | 2.999 | 3.001 | 0 | 0 |
| 2920 | 4.000 | 4.147 | 8.295 | 0 | 0 | 1.940 | 3.996 | 4.018 | 0 | 0 |
| 2336 | 5.000 | 5.184 | 10.369 | 0 | 0 | 852.73 | 4.903 | 4.907 | 2.12 | 2.10 |
| 1947 | 5.999 | 6.220 | 12.441 | 0 | 0 | 4767.77 | 4.896 | 4.915 | 18.42 | 18.35 |
| 1669 | 6.998 | 7.257 | 14.514 | 0 | 0 | 5195.21 | 4.896 | 4.915 | 30.27 | 29.82 |
| 1460 | 8.000 | 8.295 | 16.591 | 0 | 0 | 5345.82 | 4.906 | 4.905 | 38.92 | 38.75 |
| 1298 | 8.998 | 9.331 | 18.662 | 0 | 0 | 5422.26 | 4.904 | 4.906 | 45.56 | 45.52 |
| 1168 | 10.000 | 10.369 | 20.738 | 0 | 0 | 5468.07 | 4.918 | 4.892 | 50.90 | 51.15 |

Table-27: Average queue, throughputs and packet loss rates as a function of the traffic generation rate

We can infer the following.

  1. There is queue buildup at Router 5 (Link 3) as the combined input flow rate increases. So, link 3 is the bottleneck link for the two flows.

  2. The traffic generation rate matches the application throughput(s) (with nearly zero packet loss rate) when the combined input flow rate is less than the capacity of the bottleneck link.

  3. As the combined input flow rate reaches or exceeds the bottleneck link capacity, the average queue at the (bottleneck) server at Router 5 grows unboundedly (limited only by the buffer size) and the packet loss rate increases as well.

  4. The two flows share the available link capacity and each sees a maximum application throughput of about 4.9 Mbps (approximately half of the bottleneck link capacity of 10 Mbps).

Useful Exercises

  1. Redo the single flow experiment with constant inter-arrival time for the application process. Comment on average queue evolution and maximum throughput at the links.

  2. Redo the single flow experiment with a small buffer size (8 KB) at the bottleneck link 2. Compute the average queue evolution at the bottleneck link and the average throughput of the flow as a function of the traffic generation rate. Compare it with the case when the buffer size is 8 MB.

  3. Redo the single flow experiment with a bottleneck link capacity of 100 Mbps. Evaluate the average queue as a function of the traffic generation rate. Now, plot the average queue as a function of the offered load and compare it with the case of bottleneck link with 10 Mbps capacity (studied in the report).