Routing & Switching

The OSPF weight setting problem and the performance comparison of the OSPF vs. RIP

Part A: The OSPF Weight Setting Problem

Routing is the task of finding a path from a sender to a desired destination. It is complex in large networks because of the many intermediate routers a packet might traverse before reaching its destination. A routing table instructs the router how to forward packets. Routing protocols process incoming update messages in different ways to produce their routing tables. Given a packet with an IP destination address in its header, the router performs a routing table lookup, which returns the IP address of the packet’s next hop.

OSPF (Open Shortest Path First) is a routing protocol used in IP networks. OSPF requires routers to exchange routing information with all other routers in the network. Complete network topology knowledge (i.e., the arrangement of all routers and links in the network) is required. Because each router knows the complete topology, each router can compute all needed shortest paths.

OSPF calculates routes as follows. Each link is assigned a dimensionless metric, called cost or weight. This integer cost ranges from 1 to 65,535 (\(2^{16} - 1\)). The cost of a path in the directed graph is the sum of the link costs. Using Dijkstra’s shortest path algorithm, OSPF mandates that each router computes a tree of shortest paths with itself as the root.
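As a sketch of what each router computes, here is a minimal Dijkstra implementation in Python on a hypothetical three-router topology (the node names and weights are invented for illustration; they are not taken from the figures in this experiment):

```python
import heapq

def dijkstra(graph, source):
    """Shortest-path costs from `source` to every reachable node.
    `graph` maps node -> {neighbour: OSPF output cost}."""
    dist = {source: 0}
    heap = [(0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue  # stale heap entry
        for v, w in graph[u].items():
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd  # found a cheaper path to v
                heapq.heappush(heap, (nd, v))
    return dist

# Hypothetical topology: the A-C link is expensive (cost 4),
# so the cheapest A-to-C path goes via B (cost 1 + 1 = 2).
graph = {
    "A": {"B": 1, "C": 4},
    "B": {"A": 1, "C": 1},
    "C": {"A": 4, "B": 1},
}
print(dijkstra(graph, "A"))  # {'A': 0, 'B': 1, 'C': 2}
```

Each router runs this computation with itself as the source, yielding its shortest-path tree.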

The link weights are assigned by the network operator. The lower the weight, the greater the chance that traffic will get routed on that link. One approach is to assign the OSPF metric as the inverse of the link bandwidth. If each link cost is set to 1, the cost of a path is equal to the number of links (hops) in the path [1].

The OSPF weight setting problem seeks a set of weights that optimizes network performance. Finding the right set of weights can greatly improve network performance. To see the importance of good OSPF weight setting, we consider the example given in [2].

_images/Figure-126.png

Figure-1: A seven-node, nine-link network is shown. All the links are assumed to be bidirectional. The capacity of each link is 100 Mbps. Nodes q, r, s, and w generate 50 Mbps of traffic each destined for node t. The weights assigned to each link are also shown.

Nodes q, r, s, and w each generate 50 Mbps of traffic destined for node t. The weights assigned to each link are shown beside the links. Two different weight assignments are considered; we will simulate these two scenarios in NetSim and compare their network performance.

Network Set up

Open NetSim and click on Experiments> Internetworks> Routing and Switching> The OSPF weight setting problem and the performance comparison of the OSPF vs. RIP then click on the tile in the middle panel to load the example as shown below.

_images/Figure-227.png

Figure-2: List of scenarios for the example of OSPF weight setting problem.

_images/Figure-315.png

Figure-3: The same scenario is replicated in NetSim. Each node from the earlier figure is represented in NetSim by one router and one host. Traffic is generated from Q1, R1, S1 and W1 to T1 at a rate of 50 Mbps and all link capacities are 100 Mbps.

Procedure

Step 1: A network scenario is designed in the NetSim GUI comprising 7 Wired Nodes and 7 Routers.

Step 2: Go to Router Properties. In the Application Layer, the Routing Protocol is set to OSPF.

_images/Figure-413.png

Figure-4: Application Layer Window - Routing Protocol is set as OSPF

The Router Configuration Window shown above indicates that the Routing Protocol is set to OSPF along with its associated parameters. The “Routing Protocol” parameter is Global, i.e., changing it in one router changes it in all the other routers. So, in all the routers, the Routing Protocol is now set to OSPF.

Step 3: Go to Router Properties > Application Layer > OSPF > Expand the Interfaces. In the WAN outgoing interfaces of all the routers, set the Output Cost to 1.

_images/Figure-518.png

Figure-5: WAN Interfaces - Output Cost is set to 1.

The “Output Cost” parameter in the WAN Interface > Application Layer of a router indicates the cost of sending a data packet on that interface and is expressed in the link state metric.

Step 4: Go to Router Properties > All WAN Interfaces > Network Layer > set the Buffer Size (MB) parameter to 1024.

Step 5: Go to Link Properties, set the properties as below.

Parameter                                    Parameter Value

Uplink / Downlink Speed (Mbps)               100

Uplink / Downlink BER                        0

Uplink / Downlink Propagation Delay (µs)     0

Table-1: Wired Link Properties.

Step 6: Go to the Link ID 12 Properties (Link Between Router T and Node T1) > Set Uplink and Downlink Speed to 1000 Mbps.

Step 7: Configure a CBR application between the source and destination nodes by selecting a CBR application from the Set Traffic tab. Right-click on the Application Flow and set the Transport Protocol to UDP.

Here, a CBR application is generated from each of Q1, R1, S1, and W1 to T1 at a generation rate of 50 Mbps, by setting the Packet Size to 1460 bytes and the Inter Arrival Time to 233.6 µs.
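The generation rate follows directly from the chosen packet size and inter-arrival time; a quick check:

```python
packet_size_bytes = 1460
inter_arrival_us = 233.6

# Generation rate = bits per packet / inter-arrival time
rate_bps = packet_size_bytes * 8 / (inter_arrival_us * 1e-6)
print(round(rate_bps / 1e6, 3))  # 50.0 Mbps
```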

Additionally, the “Start Time (s)” parameter is set to 10s. This time is typically chosen to be greater than the time required for OSPF convergence (i.e., Exchange of OSPF information between all the routers), and it increases as the size of the network increases.

Step 8: Packet Trace is enabled, and post simulation we can observe the route which the packets have chosen to reach the destination based on the Open Shortest Path First Routing Protocol.

Step 9: Run the Simulation for 20 Seconds.

Step 10: Note down the Throughput and Observe the flow of Packet using Packet Trace.

Case 2: OSPF Output cost as 3. Network configuration:

For the same scenario, go to Router R Properties > Application Layer > OSPF > Expand Interface 1 > set the Output Cost to 3, and run the simulation for 20 seconds.

Results

Case-1: OSPF Output cost as 1

All link weights are set as 1.

  • Shortest Path from Q1 to T1 is Q1 > Q > U > T > T1. Total cost is 2.

  • Shortest Path from R1 to T1 is R1 > R > U > T > T1. Total cost is 2.

  • Shortest Path from S1 to T1 is S1 > S > U > T > T1. Total cost is 2.

  • Shortest Path from W1 to T1 is W1 > W > T > T1. Total cost is 1.

_images/Figure-614.png

Figure-6: OSPF Packet Flow for Case 1, Output cost set to 1.

  • The shortest paths from nodes Q1, R1, S1, and W1 to node T1 are determined based on OSPF's calculation of total cost.

  • The traffic from nodes Q1, R1, and S1 is routed through node U because that is the lowest-cost path to T1. The traffic from node W1 takes the direct single-hop path to T1 because of its lower cost. This packet flow along the shortest paths can be observed in the Packet Trace.

After the simulation, open the packet trace, filter the PACKET TYPE column to CBR, and further filter the CONTROL PACKET TYPE/APP NAME column to the respective traffic flow to observe the packet flow. Here we have filtered CONTROL PACKET TYPE/APP NAME to TRAFFIC_R1_T1.

_images/Figure-716.png

Figure-7: Packet flow in packet trace for OSPF output cost as 1.

Case-2: OSPF Output cost as 3

Link-6 cost is set to 3, all other link weights are 1.

  • Increasing the output cost on link R → U to 3 results in different shortest paths being chosen by OSPF for some nodes compared to Case 1.

  • For instance, in Case 2, node R1 chooses a different path through nodes V and W to reach T1 due to the increased cost on the direct link R → U.

  • This weight setting aims to balance the load on the links by redistributing traffic, relieving congestion on link U → T.

Users can observe the shortest paths from NetSim Packet Trace.

  • Shortest Path from Q1 to T1 is Q1 > Q > U > T > T1. Total cost is 2.

  • Shortest Path from R1 to T1 is R1 > R > V > W > T > T1. Total cost is 3.

  • Shortest Path from S1 to T1 is S1 > S > U > T > T1. Total cost is 2.

  • Shortest Path from W1 to T1 is W1 > W > T > T1. Total cost is 1.

_images/Figure-814.png

Figure-8: OSPF Packet Flow for Case 2, Output cost set to 3 in Interface 1 of Router R.

After the simulation, open the packet trace, filter CONTROL PACKET TYPE/APP NAME to TRAFFIC_R1_T1, and observe the change in the packet flow due to the change in output cost.

_images/Figure-98.png

Figure-9: Packet flow in packet trace for OSPF output cost as 3.

Discussion

Scenario    Throughput (Q1-T1)    Throughput (R1-T1)    Throughput (S1-T1)    Throughput (W1-T1)

Case 1      32.71                 32.71                 32.70                 49.99

Case 2      49.06                 49.06                 49.06                 49.06

All throughputs are in Mbps.

Table-2: NetSim simulation outputs from both cases. Recall that the traffic demand (generation rate) at each node is ≈50 Mbps. In case #1, the three flows Q1-T1, R1-T1, and S1-T1 obtain only ≈32.7 Mbps. In case #2, with the right OSPF weight settings, each traffic demand gets the required throughput of ≈50 Mbps.

The traffic generation rate at each node was approximately 50 Mbps. In case #1, the three flows—Q1-T1, R1-T1, and S1-T1—achieved a throughput of only 32.71 Mbps. This indicates a significant underperformance relative to the traffic demand. This happens because the link U → T has a capacity of 100 Mbps while the input rate is 150 Mbps. The available bandwidth is split equally between the three flows with each flow getting approximately \(\frac{100}{3}\) Mbps.
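The equal split can be verified with a line of arithmetic (a sketch of the ideal fair share; NetSim's measured 32.71 Mbps is slightly below 100/3 because of packet overheads):

```python
link_capacity_mbps = 100.0
demand_per_flow_mbps = 50.0
flows_on_link = 3  # Q1-T1, R1-T1, and S1-T1 all cross link U -> T

offered_load = flows_on_link * demand_per_flow_mbps  # 150 Mbps > 100 Mbps capacity
fair_share = link_capacity_mbps / flows_on_link      # equal split of the capacity
print(offered_load, round(fair_share, 2))  # 150.0 33.33
```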

In case #2, after optimizing OSPF weight configurations, traffic from R is routed to T through V and W. The link U → T now carries two flows totaling 100 Mbps and the link W → T carries the other two flows also totaling 100 Mbps. The network is now able to handle the traffic demands. This shows how setting the right link weights in OSPF can greatly improve network performance.

Part-B: Performance comparison of OSPF vs. RIP

Routing information protocol (RIP) is a routing protocol based on the Bellman-Ford (or distance vector) algorithm. This algorithm has been used for routing computations in computer networks since the early days of the ARPANET. Distance vector algorithms are based on a table in each router listing the best route to every destination in the system. Of course, in order to define which route is best, we have to have some way of measuring goodness. This is referred to as the "metric". RIP uses a metric that simply counts how many routers a message must go through i.e., to get the metric of a complete route, one just adds up the costs of the individual hops that make up the route [3]. The best route is the route with the minimum hops; RIP therefore is a minimum hop routing algorithm.
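A minimal sketch of the distance-vector iteration on hop counts, run on a hypothetical topology patterned after the x-y paths of Figure-10 (three hops via a-b, four via c-d-e):

```python
def distance_vector(links, nodes):
    """Synchronous distance-vector (Bellman-Ford) iteration.
    Every hop costs 1, as in RIP's hop-count metric."""
    INF = float("inf")
    # dist[u][d] = best known hop count from u to destination d
    dist = {u: {d: (0 if u == d else INF) for d in nodes} for u in nodes}
    neigh = {u: set() for u in nodes}
    for u, v in links:  # undirected links
        neigh[u].add(v)
        neigh[v].add(u)
    for _ in range(len(nodes)):  # converges within |nodes| rounds
        for u in nodes:
            for d in nodes:
                for n in neigh[u]:
                    # cost via neighbour n = 1 hop + n's advertised distance
                    dist[u][d] = min(dist[u][d], 1 + dist[n][d])
    return dist

nodes = ["x", "a", "b", "y", "c", "d", "e"]
links = [("x", "a"), ("a", "b"), ("b", "y"),              # 3-hop x-y path
         ("x", "c"), ("c", "d"), ("d", "e"), ("e", "y")]  # 4-hop x-y path
dist = distance_vector(links, nodes)
print(dist["x"]["y"])  # 3 -- RIP picks the a-b path regardless of congestion
```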

The advantage of min hop routing is that a demand from a source to the corresponding destination can be routed through the network while consuming the least amount of total bandwidth resources. If a demand requires an amount of bandwidth \(d\), then the total bandwidth consumed on a route is \(d \times H\) where \(H\) is the number of hops on the chosen route. If \(H_{min}\) is the number of hops on the shortest path, then \(H_{min} \leq H\), and it follows that min hop routing consumes the least resources. However, the amount of resources consumed in routing a demand is often not the only important criterion to be considered. We show this through the example below and compare the performance of RIP with OSPF.
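The d × H argument for the two x-y paths of Figure-10, assuming the 50 Mbps demand from the example:

```python
demand_mbps = 50
hops_min = 3  # x - a - b - y       (min-hop path)
hops_alt = 4  # x - c - d - e - y   (alternate path)

consumed_min = demand_mbps * hops_min  # total link bandwidth consumed: 150 Mbps
consumed_alt = demand_mbps * hops_alt  # total link bandwidth consumed: 200 Mbps
print(consumed_min, consumed_alt)  # 150 200
```

Min-hop routing indeed consumes 50 Mbps less in total, but that saving is worthless when the a-b link is already congested; the 4-hop path leaves a-b free for the w-z and u-v demands.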

_images/Figure-107.png

Figure-10: Example showing possible problems with min-hop routing. Suppose each link has a bandwidth of 100 Mbps. Let us say w is sending 50 Mbps of traffic to z, and u is sending 50 Mbps of traffic to v. We now have a traffic demand between x and y asking for 50 Mbps of bandwidth. Rather than route the traffic through c – d – e, RIP uses min-hop routing, i.e., the traffic is sent over the link a – b, which is already congested. This problem can be overcome in OSPF by setting the right link weights (as discussed in Part A).

Network Set up

Open NetSim and click on Experiments> Internetworks> Routing and Switching> The OSPF weight setting problem and the performance comparison of the OSPF vs. RIP then click on the tile in the middle panel to load the example as shown below.

_images/Figure-1111.png

Figure-11: List of scenarios for the example of the performance comparison of OSPF vs. RIP.

_images/Figure-127.png

Figure-12: NetSim scenario consists of 11 Routers and 6 Wired Nodes. CBR Traffic is generated from W1 to Z1, U1 to V1, X1 to Y1 at 50 Mbps rate and link capacity is 100 Mbps.

Case 1: RIP. Network configuration

Step 1: Design a network in the NetSim GUI comprising 6 Wired Nodes and 11 Routers.

Step 2: Go to Router Properties. In the Application Layer, Routing Protocol is set as RIP.

_images/Figure-135.png

Figure-13: Application Layer Window - Routing Protocol is set as RIP.

The “Routing Protocol” parameter is Global. i.e., changing in one Router will update this parameter in all other Routers.

Step 3: Go to Router Properties > All WAN Interfaces > Network Layer > set the Buffer Size (MB) parameter to 1024.

Step 4: Go to Link Properties, set the properties as below.

Parameter                                    Parameter Value

Uplink / Downlink Speed (Mbps)               100

Uplink / Downlink BER                        0

Uplink / Downlink Propagation Delay (µs)     0

Table-3: Wired Link Properties.

Step 5: Configure a CBR application between the source and destination nodes by selecting a CBR application from the Set Traffic tab. Right-click on the Application Flow and set the Transport Protocol to UDP.

The CBR Application is created with a packet size of 1460 bytes and an inter-arrival time of 233.6 μs. Additionally, the "Start Time (s)" parameter is set to 10 while configuring the application. This time is typically set to be greater than the time taken for route convergence, and it increases as the size of the network increases.

Step 6: Packet Trace is enabled, and post-simulation, we can observe the route that the packets have chosen to reach the destination based on the Routing Protocol set.

Step 7: Run the Simulation for 20 Seconds.

Step 8: Note down the Throughput and observe the flow of packets using Packet Trace.

Case 2: OSPF. Network configuration

For the same scenario, change the settings as follows.

Step 1: Go to Router Properties. In the Application Layer, Routing Protocol is set as OSPF.

_images/Figure-146.png

Figure-14: Application Layer Window - Routing Protocol is set as OSPF.

Step 2: Go to Router Properties > Application Layer > OSPF > Expand the Interfaces. In the WAN outgoing interfaces of all the routers, set the “Output Cost” to 1 as shown below.

_images/Figure-156.png

Figure-15: WAN Interfaces - Output Cost is set to 1.

Step 3: Go to Router X Properties > Application Layer > OSPF > Expand Interface 1 > set the Output Cost to 4.

Step 4: Run the Simulation for 20 Seconds.

Step 5: Note down the Throughput and observe the flow of packets using Packet Trace.

Results

Case-1: RIP

Users can observe from the packet trace that:

  • Shortest Path from W1 to Z1 is W1 > W > A > B > Z > Z1. Hop count is 3.

  • Shortest Path from U1 to V1 is U1 > U > A > B > V > V1. Hop count is 3.

  • Shortest Path from X1 to Y1 is X1 > X > A > B > Y > Y1. Hop count is 3.

_images/Figure-167.png

Figure-16: Packet flow when routing protocol is set to RIP.

  • The shortest paths from nodes W1, U1 and X1 to nodes Z1, V1, and Y1 are determined based on hop-count. This is because the protocol in operation is RIP.

  • Min-hop routing uses the three-hop path X → A → B → Y. The link A → B then becomes completely full and, as far as servicing future demands is concerned, the network becomes partitioned.

  • It would have been better to route the demand on the four-hop path even though it consumes more resources. The point is that the resources on the links constituting the four-hop path between x and y cannot be used, because RIP always selects the minimum-hop path.

After simulation, open packet trace, filter the PACKET TYPE column to CBR and further filter the CONTROL PACKET TYPE/APP NAME to respective traffic flow and observe the packet flow. Here we have filtered the CONTROL PACKET TYPE/APP NAME as TRAFFIC_X1_Y1 and PACKET ID to 1.

_images/Figure-177.png

Figure-17: Shows the packet flow for Traffic X1-Y1 in the packet trace after applying the filters for RIP case

Case 2: OSPF

Users can observe from the Packet Trace that:

  • Shortest Path from W1 to Z1 is W1 > W > A > B > Z > Z1. Total cost is 3.

  • Shortest Path from U1 to V1 is U1 > U > A > B > V > V1. Total cost is 3.

  • Shortest Path from X1 to Y1 is X1 > X > C > D > E > Y > Y1. Total cost is 4. Here, the packet flow from X1 to Y1 does not go via X1 > X > A > B > Y > Y1 because the total cost of that path is 6 (4 + 1 + 1). Hence, OSPF chooses the path with the least total cost.

_images/Figure-187.png

Figure-18: Packet flow when routing protocol is set to OSPF.

  • The OSPF link output cost is set to 4 on the link X → A.

  • The shortest paths from nodes W1, U1, and X1 to nodes Z1, V1, and Y1 are determined based on OSPF's calculation of total cost, which is based on the configured link weights.

After simulation, open packet trace, filter the PACKET TYPE column to CBR and further filter the CONTROL PACKET TYPE/APP NAME to respective traffic flow and observe the packet flow. Here we have filtered the CONTROL PACKET TYPE/APP NAME as TRAFFIC_X1_Y1 and PACKET ID to 1.

_images/Figure-197.png

Figure-19: Shows the packet flow for Traffic X1-Y1 in the packet trace after applying the filters for OSPF case.

Discussion

Scenario         Throughput (W1-Z1)    Throughput (U1-V1)    Throughput (X1-Y1)

Case 1 (RIP)     32.71                 32.70                 32.70

Case 2 (OSPF)    49.06                 49.06                 49.99

All throughputs are in Mbps.

Table-4: NetSim simulation outputs from both cases. Recall that the traffic demand (generation rate) at each node is ≈50 Mbps. In case #1, with RIP, the three flows W1-Z1, U1-V1, and X1-Y1 obtain a throughput of only ≈32.7 Mbps. In case #2, with OSPF, each traffic demand gets the required throughput of ≈50 Mbps.

Here again, the traffic generation rate at each node was ≈ 50 Mbps. In case #1 with RIP, the three flows—W1-Z1, U1-V1, and X1-Y1—achieve a throughput of only 32.70 Mbps. This shows a significant underperformance relative to the traffic demand. Conversely, in case #2, with OSPF each flow successfully meets its expected throughput demand.

The reason for this difference is that RIP uses hop count as its metric, forcing the X1-Y1 flow to use the same congested A-B link. OSPF, however, provides flexibility in setting the link weights. When we set the X-A link weight to 4, the X1-Y1 traffic demand flows through the uncongested path X-C-D-E-Y. Thus the 100 Mbps A-B link is able to provide ≈50 Mbps of throughput to the W1-Z1 and U1-V1 flows, and the other path (X-C-D-E-Y) provides ≈50 Mbps of throughput to the X1-Y1 flow.

Exercises

1. Effect of Output Cost on OSPF Routing Path Selection

Construct the network as shown below, using OSPF as the application-layer routing protocol with the default output cost set to 100. Set the application start time to 10 seconds and enable packet trace to observe the data flow in the trace file after the simulation. Analyse how modifying the output cost in the routers influences OSPF’s path selection for data transmission.

Case a: In the scenario below, OSPF by default selects the Router 3 > Router 7 > Router 6 path for data transmission due to its lower cost. Your task is to modify the OSPF link costs in the routers so that the data follows the path Router 3 > Router 4 > Router 5 > Router 6.

_images/img12.png

Case b: Similarly, configure the output cost on the router links such that the data takes the path shown below: Router 3 > Router 8 > Router 9 > Router 6.

_images/img21.png

2. Understanding OSPF Rerouting After Link Failure

Construct the network scenario as shown below and configure OSPF as the routing protocol. Introduce a link failure on Link 2 at 25 seconds, and analyse how OSPF detects the failure, recomputes the routes, and selects an alternate path for data transmission. Enable packet trace prior to simulation and explain your observations with relevant screenshots.

_images/img31.png

Network Settings

Router Properties: Router > Application layer
    Routing Protocol          OSPF

Wired link properties: Link 2
    Up time (sec)             0
    Down time (sec)           25

Application properties
    Start time                15 sec

Run time
    Simulation time           50 seconds

3. Understanding OSPF weight setting problem

Construct the network scenario as shown below, using OSPF as the routing protocol at the application layer. In this setup, generate 50 Mbps of traffic from Nodes 9, 10, 11, and 12 to Node 13. Set the output cost for all router links to 100, and follow the additional settings provided in the table below.

Case a: In this scenario, Nodes 9, 10, and 11 are using the same path for data transmission, leading to network congestion. Your tasks are to: (i) obtain the transmission path for each application from the packet trace, (ii) tabulate the throughput for each application, and (iii) highlight the congested links in the network.

Enable packet trace prior to simulation and explain your observations with relevant screenshots

_images/img41.png

Network settings

Router Properties: Router > Application layer
    Application layer routing protocol          OSPF
    Output costs for all routers                100

Router > Interface (WAN) > Network layer
    Buffer size                                 1024

All Link Properties
    Uplink / Downlink Speed (Mbps)              100 (Link 12: 1000)
    Uplink / Downlink BER                       0
    Uplink / Downlink Propagation Delay (µs)    0

Application properties
    Start time (seconds)                        10
    Packet size (Bytes)                         1460
    Inter arrival time (µs)                     233.6
    Transport layer protocol                    UDP

Run time
    Simulation time (seconds)                   30

Case b: For the same scenario, adjust the weight (output cost) so that Node 11 takes an alternate path for data transmission, reducing the traffic load on the original path. The throughput should improve for all nodes, and network congestion should be minimized. (i) Tabulate the throughput obtained for each node and compare it with Case a. (ii) Explain the alternate path taken by Node 11 with relevant screenshots from the packet trace.

_images/img51.png

4. Understanding performance evaluation between OSPF and RIP protocol.

Construct the scenario as shown below, set the application-layer routing protocol to RIP, and generate traffic from each node to Node 4 at a rate of 50 Mbps; set the start time to 10 seconds and simulate for 30 seconds. Enable packet trace prior to simulation and explain your observations with relevant screenshots.

Case a: In this case, configure the network settings as mentioned, tabulate the throughput for each application, and analyse any congestion points in the network.

_images/img61.png

Case b: Consider the OSPF protocol and set the output cost for all routers to 100. Adjust the weight (output cost) for Router 8 in such a way that it takes an alternate path for data transmission, avoiding the congested path. Explain how OSPF's flexibility in weight setting improves network performance over the RIP protocol.

_images/img7.png

References

[1] M. Ericsson, M. G. C. Resende and P. M. Pardalos, “A genetic algorithm for the weight setting problem in OSPF routing,” Journal of Combinatorial Optimization , vol. 6, p. 299–333, 2002.

[2] A. Kumar, D. Manjunath and J. Kuri, Communication Networking, ISBN: 0-12-428751-4, 2004.

[3] “RFC 2453: RIP Version 2,” IETF, [Online]. Available: https://datatracker.ietf.org/doc/html/rfc2453#page-3.

Understand working of ARP and IP Forwarding within a LAN and across a router (Level 1)

Theory

In a network architecture, different layers have their own addressing schemes. This helps the different layers to be largely independent. The application layer uses host names, the network layer uses IP addresses, and the link layer uses MAC addresses. Whenever a source node wants to send an IP datagram to a destination node, it needs to know the address of the destination. Since there are both IP addresses and MAC addresses, there needs to be a translation between them. This translation is handled by the Address Resolution Protocol (ARP). In IP networks, IP routing involves determining a suitable path for a network packet from a source to its destination. If the destination address is not on the local network, routers forward the packet to the next adjacent network.

(Reference: A good reference for this topic is Section 5.4.1: Link Layer Addressing and ARP, of the book, Computer Networking, A Top-Down Approach, 6th Edition by Kurose and Ross)

ARP protocol Description

  1. The ARP module in the sending host takes an IP address on the same LAN as input and returns the corresponding MAC address.

  2. First the sender constructs a special packet called an ARP packet, which contains several fields including the sending and receiving IP and MAC addresses.

  3. Both ARP request and response packets have the same format.

  4. The purpose of the ARP request packet is to query all the other hosts and routers on the subnet to determine the MAC address corresponding to the IP address that is being resolved.

  5. The sender broadcasts the ARP request packet, which is received by all the hosts in the subnet.

  6. Each node checks if its IP address matches the destination IP address in the ARP packet.

  7. The one with the match sends back to the querying host a response ARP packet with the desired mapping.

  8. Each host and router has an ARP table in its memory, which contains mappings of IP addresses to MAC addresses.

  9. The ARP table also contains a Time-to-live (TTL) value, which indicates when each mapping will be deleted from the table.
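Steps 8 and 9 can be sketched as a toy ARP table with TTL-based expiry (a simplified illustration, not NetSim's implementation; the addresses are invented):

```python
import time

class ArpCache:
    """Toy ARP table: IP -> (MAC, expiry time).
    Entries expire `ttl` seconds after they were learned."""
    def __init__(self, ttl=1200.0):
        self.ttl = ttl
        self.table = {}

    def update(self, ip, mac, now=None):
        now = time.monotonic() if now is None else now
        self.table[ip] = (mac, now + self.ttl)

    def lookup(self, ip, now=None):
        now = time.monotonic() if now is None else now
        entry = self.table.get(ip)
        if entry is None:
            return None          # miss -> would trigger an ARP request
        mac, expiry = entry
        if now > expiry:
            del self.table[ip]   # stale mapping is purged
            return None
        return mac

cache = ArpCache(ttl=1200)
cache.update("192.168.0.2", "aa:bb:cc:dd:ee:02", now=0.0)
print(cache.lookup("192.168.0.2", now=100.0))   # aa:bb:cc:dd:ee:02
print(cache.lookup("192.168.0.2", now=2000.0))  # None (entry expired)
```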

ARP Frame Format

_images/Figure-207.png

Figure-20: ARP Frame Format

The ARP message format is designed to accommodate layer two and layer three addresses of various sizes. This diagram shows the most common implementation, which uses 32 bits for the layer three (“Protocol”) addresses, and 48 bits for the layer two hardware addresses.
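The layout described above (16-bit hardware and protocol types, 8-bit address lengths, 16-bit opcode, then the two MAC/IP address pairs) can be packed with Python's struct module; a sketch with invented addresses:

```python
import struct

def build_arp_request(sender_mac, sender_ip, target_ip):
    """Pack an ARP request for IPv4 over Ethernet:
    hardware type 1 (Ethernet), protocol type 0x0800 (IPv4),
    6-byte hardware addresses, 4-byte protocol addresses, opcode 1."""
    htype, ptype, hlen, plen, oper = 1, 0x0800, 6, 4, 1  # oper 1 = request
    target_mac = b"\x00" * 6  # unknown -- this is what we are asking for
    return struct.pack("!HHBBH6s4s6s4s",
                       htype, ptype, hlen, plen, oper,
                       sender_mac, sender_ip,
                       target_mac, target_ip)

pkt = build_arp_request(bytes.fromhex("aabbccddee01"),  # invented sender MAC
                        bytes([192, 168, 0, 1]),
                        bytes([192, 168, 0, 2]))
print(len(pkt))  # 28 -- the standard IPv4-over-Ethernet ARP payload size
```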

IP Forwarding Description

  1. Every router has a forwarding table that maps the destination addresses (or portions of the destination addresses) to that router’s outbound links.

  2. A router forwards a packet by examining the value of a field in the arriving packet’s header, and then using this header value to index into the router’s forwarding table.

  3. The value stored in the forwarding table entry for that header indicates the router’s outgoing link interface to which that packet is to be forwarded.

  4. Depending on the network-layer protocol, the header value could be the destination address of the packet or an indication of the connection to which the packet belongs.

  5. ARP operates when a host wants to send a datagram to another host on the same subnet.

  6. When sending a Datagram off the subnet, the datagram must first be sent to the first-hop router on the path to the final destination. The MAC address of the router interface is acquired using ARP.

  7. The router determines the interface on which the datagram is to be forwarded by consulting its forwarding table.

  8. Router obtains the MAC address of the destination node using ARP.

  9. The router sends the packet into the respective subnet from the interface that was identified using the forwarding table.
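Steps 1-3 amount to a longest-prefix-match lookup. A sketch over a small hypothetical forwarding table (the interface names and entries are invented for illustration):

```python
import ipaddress

# Hypothetical forwarding table: (destination network, outgoing interface).
forwarding_table = [
    (ipaddress.ip_network("192.168.0.0/24"), "eth0"),
    (ipaddress.ip_network("192.169.0.0/24"), "eth1"),
    (ipaddress.ip_network("0.0.0.0/0"), "eth2"),  # default route
]

def forward(dst_ip):
    """Pick the matching entry with the longest prefix, as a router does."""
    dst = ipaddress.ip_address(dst_ip)
    matches = [(net, ifc) for net, ifc in forwarding_table if dst in net]
    net, ifc = max(matches, key=lambda m: m[0].prefixlen)
    return ifc

print(forward("192.169.0.7"))  # eth1
print(forward("10.0.0.1"))     # eth2 (default route)
```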

Network Set up

Open NetSim and click on Experiments > Internetworks > Routing and Switching > Working of ARP and IP Forwarding within a LAN and across a router, then click on the tile in the middle panel to load the example as shown in Figure-21.

_images/Figure-2111.png

Figure-21: List of scenarios for the example of Working of ARP and IP Forwarding within a LAN and across a router.

NetSim UI displays the configuration file corresponding to this experiment, as shown in Figure-22 below.

_images/Figure-228.png

Figure-22: Network set up for studying the ARP across a LAN

Procedure

ARP across a LAN

Step 1: A network scenario is designed in the NetSim GUI comprising 3 Wired Nodes, 2 L2 Switches, and 1 Router in the “Internetworks” Network Library.

Step 2: Configure an application between any two nodes by selecting a CBR application from the Set Traffic tab. Click on the created application, expand the application property panel on the right, and set the transport protocol to UDP instead of TCP by keeping other properties as default.

If set to TCP, the ARP table will get updated due to the transmission of TCP control packets thereby eliminating the need for ARP to resolve addresses.

Step 3: Packet Trace is enabled from Configure report tab, and hence we can view the ARP Request and ARP Reply packets exchanged initially, before transmission of the data packets.

Step 4: Click on Run simulation. The simulation time is set to 10 seconds.

Step 5: Under Options, in the “Static ARP” tab, Static ARP is set to Disable; see Figure-23.

_images/Figure-237.png

Figure-23: Static ARP Configuration Window

Click on OK.

If Static ARP is enabled, then NetSim will automatically create an ARP table for each node. To see the working of the ARP protocol users should disable Static ARP.

By doing so, an ARP request will be sent to the destination to find out the destination's MAC address.

Output – ARP across a LAN

Once the simulation is complete, to view the packet trace file, click on “Open Packet Trace” option present in the left-hand-side of the Results Dashboard.

_images/Figure-247.png

Figure-24: Open Packet Trace

NODE-1 sends an ARP Request to SWITCH-4; SWITCH-4 forwards it to ROUTER-6 and also to NODE-2. The ARP Reply is sent by NODE-2 to SWITCH-4, which in turn forwards it to NODE-1.

Discussion – ARP across a LAN

Intra-LAN-IP-forwarding:

_images/Figure-256.png

Figure-25: Intra LAN IP Forwarding

NODE-1 broadcasts ARP Request, which is then broadcasted by SWITCH-4. NODE-2 sends the ARP Reply to NODE-1 via SWITCH-4. After this step, datagrams are transmitted from NODE-1 to NODE-2. Notice the DESTINATION ID column for ARP Request type packets, which indicates Broadcast-0.

ARP across a WAN

NetSim UI displays the configuration file corresponding to this experiment as shown below Figure-26.

_images/Figure-264.png

Figure-26: Network set up for studying the ARP across a WAN

Procedure

The following set of procedures were done to generate this sample.

Step 1: A network scenario is designed in the NetSim GUI comprising 3 Wired Nodes, 2 L2 Switches, and 1 Router.

Step 2: Click on the Set Traffic tab and configure the application between the nodes. Click on the application, expand the property panel on the right and set the properties as mentioned below.

APP 1 CBR is created from Wired Node 1 to Wired Node 3, with the Packet Size set to 1460 bytes, the Inter Arrival Time to 20000 µs, and the Transport Layer Protocol to UDP.

APP 2 CBR is created from Wired Node 2 to Wired Node 3, with the Packet Size set to 1460 bytes, the Inter Arrival Time to 20000 µs, and the Transport Layer Protocol to UDP. Additionally, the start time is set to 1 second and the end time to 3 seconds.

Transport Protocol is set to UDP instead of TCP. If set to TCP, the ARP table will get updated during transmission of TCP control packets thereby eliminating the need for ARP to resolve addresses.

Step 3: Packet Trace is enabled from the Configure Reports tab, and hence we can view the ARP Request and ARP Reply packets exchanged initially, before transmission of the data packets.

Step 4: Click on Run simulation. The simulation time is set to 10 seconds.

Step 5: Under Options, the “Static ARP” tab, Static ARP is set to disable.

Output I – ARP across a WAN

Once the simulation is complete, to view the packet trace file, click on “Open Packet Trace” option present in the left-hand-side of the Results Dashboard.

In the packet trace, filter the CONTROL PACKET TYPE/APP NAME field to view APP 1 CBR, ARP_REQUEST, and ARP_REPLY.

_images/Figure-274.png

Figure-27: Open Packet Trace

NODE-1 sends an ARP Request to SWITCH-4; SWITCH-4 forwards it to ROUTER-6 and also to NODE-2. The ARP Reply is sent by ROUTER-6 to SWITCH-4, which in turn forwards it to NODE-1. ROUTER-6 then sends an ARP Request to SWITCH-5, which forwards it to NODE-3. The ARP Reply is sent by NODE-3 to SWITCH-5, which in turn forwards it to ROUTER-6.

The IP forwarding table formed in the router can be accessed from the IP Forwarding Table list present in the Simulation Results window as shown below Figure-28.

_images/Figure-284.png

Figure-28: IP Forwarding Table

Click on Detailed View checkbox to view the additional fields as indicated above.

Router forwards packets intended to the subnet 192.169.0.0 to the interface with the IP 192.168.0.1 based on the first entry in its routing table.

Discussion I – ARP across a WAN

From the above case we can understand that, since Router 6 did not know the destination address, the Application packets reach only till Router 6, and ARP mechanism continues with Router 6 re-broadcasting the ARP REQUEST, finding the destination address and the datagram is getting transferred to Wired node 3 (destination).

Output II – ARP across a WAN

In same packet trace, filter the CONTROL PACKET TYPE/APP NAME column to view APP 2 CBR, ARP REQUEST, ARP REPLY only.

In the below figure user can observe that ARP REQUEST is broadcasted from Wired Node 2, the ARP Reply is sent from the Router 6, upon receiving the ARP REPLY. Router 6 directly starts sending the data packet to the Wired Node 3 unlike the previous sample.

_images/Figure-294.png

Figure-29: Open Packet Trace

Discussion II – ARP across a WAN

Across-Router-IP-forwarding

ARP PROTOCOL- WORKING

_images/Figure-304.png

Figure-30: Across Router IP Forwarding

NODE-2 transmits ARP Request which is further broadcasted by SWITCH-4. ROUTER-6 sends ARP Reply to NODE-2 which goes through SWITCH-4. Then NODE-2 starts sending datagrams to NODE-3. If router has the MAC address of NODE-3 in its ARP table, then ARP ends here, and router starts forwarding the datagrams to NODE-3 by consulting its forwarding table. Router 6, has this information updated during transmission of APP1 packets and hence ARP request for identifying the MAC address of NODE-3, need not be sent again. In the other case (Output -I), Router sends ARP Request to appropriate subnet and after getting the MAC address of NODE-3, it then forwards the datagrams to NODE-3 using its forwarding table.

Exercises

  1. Construct a below ARP experiment by manually adding the ARP entries using the Static ARP option with the file option. Refer to user manual section 3.15, “Static ARP configuration in NetSim”.

_images/Figure-316.png

Figure-31: Network scenario for ARP exercise

Simulate and study the spanning tree protocol (Level 1)

Introduction

Spanning Tree Protocol (STP) is a link management protocol. Using the spanning tree algorithm, STP provides path redundancy while preventing undesirable loops in a network that are created by multiple active paths between stations. Loops occur when there are alternate routes between hosts. To establish path redundancy, STP creates a tree that spans all of the switches in an extended network, forcing redundant paths into a standby, or blocked state. STP allows only one active path at a time between any two network devices (this prevents the loops) but establishes the redundant links as a backup if the initial link should fail. Without spanning tree in place, it is possible that both connections may simultaneously live, which could result in an endless loop of traffic on the LAN.

(Reference: A good reference for this topic is Section 3.1.4: Bridges and LAN switches, of the book, Computer Networks, 5th Edition by Peterson and Davie)

Network Setup

Open NetSim and click on Experiments> Internetworks> Routing and Switching> Simulate and study the spanning tree protocol then click on the tile in the middle panel to load the example as shown in below Figure-32.

_images/Figure-323.png

Figure-32: List of scenarios for the example of Simulate and study the spanning tree protocol

NetSim UI displays the configuration file corresponding to this experiment as shown below Figure-33.

_images/Figure-333.png

Figure-33: Network set up for studying the STP 1

NOTE: At least three L2 Switches are required in the network to analyze the spanning tree formation.

Procedure

STP-1

Step 1: A network scenario is designed in the NetSim GUI comprising of 3 Wired Nodes and 3 L2 Switches in the “Internetworks” Network Library.

Step 2: Go to L2 Switch 1 Properties. In the Interface 1 (ETHERNET) > Datalink Layer, “Switch Priority” is set to 2. Similarly, for the other interfaces of L2 Switch 1, Switch Priority is set to 2.

To configure any properties in the device, click on the device, expand the property panel on the right side, and change the properties as mentioned in the steps.

Step 3: Go to L2 Switch 2 Properties. In the Interface 1 (ETHERNET) > Datalink Layer, “Switch Priority” is set to 1. Similarly, for the other interfaces of L2 Switch 2, Switch Priority is set to 1.

Step 4: Go to L2 Switch 3 Properties. In the Interface 1 (ETHERNET) > Datalink Layer, “Switch Priority” is set to 3. Similarly, for the other interfaces of L2 Switch 3, Switch Priority is set to 3.

L2_Switch Properties

L2_Switch 1

L2_Switch 2

L2_Switch 3

Switch Priority

2

1

3

Table-5: Switch Priorities for STP-1

NOTE: Switch Priority is set to all the 3 L2 Switches and Switch Priority has to be changed for all the interfaces of L2 Switch.

Switch Priority is interpreted as the weights associated with each interface of a L2 Switch. A higher value indicates a higher priority.

Step 5: Configure Custom application between two wired nodes by clicking on set traffic tab from ribbon on the top. Click on the created application and expand the application property panel on right, set the start time to 1 second and by keeping other properties as default.

_images/Figure-343.png

Figure-34: Application Configuring Window

Step 6: Enable the packet trace from configure reports tab and run simulation for 10 seconds.

STP-2

The following changes in settings are done from the previous Sample:

In STP 2, the “Switch Priority” of all the 3 L2 Switches are changed as follows Table-6:

L2 Switch Properties

L2 Switch 1

L2 Switch 2

L2 Switch 3

Switch Priority

1

2

3

Table-6: Switch Priorities for STP-2

Output

The active and blocked ports for the two samples STP-1 and STP-2 are illustrated in the below screenshots based on the data flow observed in the packet trace.

STP-1

_images/Figure-353.png

Figure-35: A representative image showing active and blocked ports for sample STP-1 based on the path of data flow we observe in the packet trace

Go to Packet Trace and observe that, after the exchange of control packets, the data packets take the following path Wired Node 4 > L2 Switch 1 > L2 Switch 3 > Wired Node 5.

_images/Figure-362.png

Figure-36: Observe the transmitter ID and the receiver ID columns in the Packet Trace to determine the path of data flow.

STP-2

_images/Figure-372.png

Figure-37: A representative image showing active and blocked ports for sample STP-2 based on the path of data flow we observe in the packet trace.

Go to Packet Trace and observe that, after the exchange of control, the data packets take the following path. Wired Node 4 > L2 Switch 1 > L2 Switch 3 > Wired Node 5.

_images/Figure-382.png

Figure-38: Observe the transmitter ID and the receiver ID columns in the Packet Trace to determine the path of data flow.

Go to Simulation Results window, In the left panel of the Results Dashboard, click on the Additional metrics and slide down to obtain the Switch MAC address table list of all the L2 Switches.

For each L2 Switch, a Switch MAC Address Table containing the MAC address entries see Figure-39, the port that is used for reaching it, along with the type of entry can be obtained at the end of Simulation.

_images/Figure-392.png

Figure-39: STP 2 MAC Address table

Discussion

Each L2 Switch has an ID which is a combination of its Lowest MAC address and priority. The Spanning tree algorithm selects the L2 Switch with the smallest ID as the root node of the Spanning Tree. The root node forward frames out over all its ports. In the other L2 Switches, the ports that have the least cost of reaching the root switch are set as Forward Ports and the remaining are set as Blocked Ports.

In the STP-1, L2 Switch 2 was assigned least priority and was selected as a Root Switch. The green line indicates the forward path, and the red line indicates the blocked path. The frame from Wired Node 4 should take the path through the L2 Switch 1, 2 and 3 to reach the Wired Node 5. In the STP-2, L2 Switch 1 was assigned least priority and selected as a Root switch. In this case, the frame from Wired Node 4 takes the path through the L2 Switch 1 and 3 to reach the destination Wired Node 5.

Understanding VLAN operation in L2 and L3 Switches (Level 2)

Introduction to VLAN

VLAN is called as virtual local area network, used in Switches and it operates at Layer 2 and Layer 3. A VLAN is a group of hosts which communicate as if they were attached to the same broadcast domain, regardless of their physical Location.

For example, all workstations and servers used by a particular workgroup team can be connected to the same VLAN, regardless of their physical connections to the network or the fact that they might be intermingled with other teams. VLANs have the same attributes as physical LANs, but you can group end stations even if they are not physically located on the same LAN Segment.

_images/Figure-402.png

Figure-40: Virtual local area network (VLAN)

A VLAN behaves just like a LAN in all respects but with additional flexibility. By using VLAN technology, it is possible to subdivide a single physical switch into several logical switches. VLANs are implemented by using the appropriate switch configuration commands to create the VLANs and assign specific switch interfaces to the desired VLAN.

Switches implement VLANs by adding a VLAN tag to the Ethernet frames as they enter the switch. The VLAN tag contains the VLAN ID and other information, which is determined by the interface from which the frame enters the switch. The switch uses VLAN tags to ensure that each Ethernet frame is confined to the VLAN to which it belongs based on the VLAN ID contained in the VLAN tag. The VLAN tags are removed as the frames exit the switch on the way to their destination.

Any port can belong to a VLAN, and unicast, broadcast, and multicast packets are forwarded and flooded only to end stations in that VLAN. Each VLAN is considered a logical network. Packets destined for stations that do not belong to the VLAN must be forwarded through a router.

In the below screenshot, the stations in the development department are assigned to one VLAN, the stations in the marketing department are assigned to another VLAN, and the stations in the testing department are assigned to another VLAN.

_images/Figure-414.png

Figure-41: Hosts in one VLAN need to communicate with hosts in another VLAN This is known as Inter-VLAN routing.

VLANs divide broadcast domains in a LAN environment. Whenever hosts in one VLAN need to communicate with hosts in another VLAN, the traffic must be routed between them. This is known as Inter-VLAN routing. This can be possible by using L3 Switch.

What is a layer 3 switch?

Layer 3 switch (also known as a multi-layer switch) is a multi-functional device that have the same functionality like a layer 2 switch, but behaves like a router when necessary. It’s generally faster than a router due to its hardware-based routing functions, but it’s also more expensive than a normal switch.

Network setup

Open NetSim and click on Experiments > Advanced Routing> Understanding VLAN operation in L2 and L3 Switches then click on the tile in the middle panel to load the example as shown in below in Figure-42.

_images/Figure-422.png

Figure-42: List of scenarios for the example of Understanding VLAN operation in L2 and L3 Switches

NetSim UI displays the configuration file corresponding to this experiment as shown below Figure-43.

_images/Figure-432.png

Figure-43: Network set up for studying the Intra-VLAN

Procedure

Intra-VLAN

Intra-VLAN is a mechanism in which hosts in same VLAN can communicate to each other.

The following set of procedures were done to generate this sample:

Step 1: A network scenario is designed in NetSim GUI comprising of 3 Wired Nodes and 1 L2 Switch in the “Internetworks” Network Library.

Step 2: Click on L2 Switch 1 and expand property panel on the right and set the properties as shown in Table-7.

L2 Switch 1

Interface ID

VLAN Status

VLAN ID

VLAN Port Type

Interface 1

TRUE

2

Access Port

Interface 2

TRUE

2

Access Port

Interface 3

TRUE

3

Access Port

Table-7: L2 Switch 1 Properties

In all the INTERFACE (ETHERNET) > DATALINK LAYER Properties of L2 Switch 1, “VLAN Status” is set to TRUE.

_images/Figure-442.png

Figure-44: DATALINK LAYER Properties of L2 Switch 1

Now click on “Configure VLAN” option and the VLAN 2 fields are entered as shown below Figure-39.

_images/Figure-452.png

Figure-45: VLAN Configure window

To add a new entry after entering the required fields, click on the ADD button.

_images/Figure-461.png

Figure-46: Configuring VLAN Properties in VLAN 2

To configure another VLAN, click on the “+” symbol located in the top.

_images/Figure-471.png

Figure-47: Configuring VLAN Properties in VLAN 3

And then we can add the entry to it.

Step 3: Click on the "Configure Reports" tab in the top ribbon, enable the plots, run the simulation for 10 seconds and observe the throughputs.

Inter-VLAN

NetSim UI displays the configuration file corresponding to this experiment as shown below Figure-48.

_images/Figure-482.png

Figure-48: Network set up for studying the Inter-VLAN

The following set of procedures were done to generate this sample:

Step 1: A network scenario is designed in NetSim GUI comprising of 5 Wired Nodes and 1 L3 Switch in the “Internetworks” Network Library.

Step 2: Click on Wired Node and expand property panel on the right and set the properties are as per the below Table-8.

Node

Wired Node2

Wired Node3

Wired Node4

Wired Node5

Wired Node6

I/f1_Ethernet

I/f1_Ethernet

I/f1_Ethernet

I/f1_Ethernet

I/f1_Ethernet

IP Address

10.0.0.4

10.1.0.4

11.2.0.4

11.3.0.4

11.4.0.4

Default Gateway

10.0.0.3

10.1.0.3

11.2.0.3

11.3.0.3

11.4.0.3

Table-8: Wired Node properties

Step 3: The L3 Switch 1 Properties are set as per the below table:

L3 Switch

If1_Ethernet

If2_Ethernet

If3_Ethernet

If4_Ethernet

If5_Ethernet

IP Address

IP Address

IP Address

IP Address

IP Address

L3 Switch 1

10.0.0.3

10.1.0.3

11.2.0.3

11.3.0.3

11.4.0.3

Table-9: L3 Switch 1 Properties

L3 Switch 1

Interface ID

VLAN Status

VLAN ID

VLAN Port Type

Interface 1

TRUE

2

Access Port

Interface 2

TRUE

2

Access Port

Interface 3

TRUE

3

Access Port

Interface 4

TRUE

3

Access Port

Interface 5

TRUE

3

Access Port

Table-10: VLAN configurations Properties

The VLAN configurations done are shown as follows:

_images/Figure-492.png

Figure-49: Configuring VLAN Properties in VLAN 2

_images/Figure-501.png

Figure-50: Configuring VLAN Properties in VLAN 3

Step 4: Click on the "Configure Reports" tab in the top ribbon, enable the plots, run the simulation for 10 seconds and observe the throughputs.

Output and Inference for Intra-VLAN

Throughput (Mbps)

Application 1

0.58

Application 2

0

Table-11: Results Comparison

The throughput for 2nd application is zero because the source and destination is in different VLANs, thereby traffic flow or communication between 2 VLANs using Layer 2 switch is not possible. To overcome this problem, an L3 switch is used.

Output and Inference for Inter-VLAN

Throughput (Mbps)

Application 1

0.58

Application 2

0.58

Application 3

0.58

Table-12: Results Comparison

In this case, application1 is in VLAN2, application2 is in VLAN3 and application 3 is in between VLAN2 and VLAN3. From the above results, the throughput for application 3 (different VLANs) is nonzero, because of using L3 switch. So, communication between 2 VLANs is possible using L3 Switch.

Exercises

  1. Construct the network below by configuring VLANs in L2Switch 4 as follows.

  1. Configure Node1 and Node2 as part of one VLAN and Node2 and Node3 in a different VLAN. Analyse traffic flow within and between different VLANs.

  2. Configure Node1 and Node3 as part of one VLAN and Node2 and Node3 in a different VLAN. Analyse traffic flow within and between different VLANs.

_images/Figure-519.png

Figure-51: Network scenario for VLAN.

Understanding the working of Public IP Address and Network Address Translation (NAT). (Level 2)

Theory

Public Address

A public IP address is assigned to every computer that connects to the Internet where each IP is unique. Hence there cannot exist two computers with the same public IP address all over the Internet. This addressing scheme makes it possible for the computers to “find each other” online and exchange information. User has no control over the IP address (public) that is assigned to the computer. The public IP address is assigned to the computer by the Internet Service Provider as soon as the computer is connected to the Internet gateway.

Private Address

An IP address is considered private if the IP number falls within one of the IP address ranges reserved for private networks such as a Local Area Network (LAN). The Internet Assigned Numbers Authority (IANA) has reserved the following three blocks of the IP address space for private networks (local networks):

Class

Starting IP address

Ending IP address

No. of hosts

A

10.0.0.0

10.255.255.255

16,777,216

B

172.16.0.0

172.31.255.255

1,048,576

C

192.168.0.0

192.168.255.255

65,536

Table-18: Private IP address table

Private IP addresses are used for numbering the computers in a private network including home, school and business LANs in airports and hotels which makes it possible for the computers in the network to communicate with each other. For example, if a network A consists of 30 computers each of them can be given an IP starting from 192.168.0.1 to 192.168.0.30.

Devices with private IP addresses cannot connect directly to the Internet. Likewise, computers outside the local network cannot connect directly to a device with a private IP. It is possible to interconnect two private networks with the help of a router or a similar device that supports Network Address Translation.

If the private network is connected to the Internet (through an Internet connection via ISP) then each computer will have a private IP as well as a public IP. Private IP is used for communication within the network whereas the public IP is used for communication over the Internet.

Network address translation (NAT)

A NAT (Network Address Translation or Network Address Translator) is the virtualization of Internet Protocol (IP) addresses. NAT helps to improve security and decrease the number of IP addresses an organization needs.

A device that is configured with NAT will have at least one interface to the inside network and one to the outside network. In a typical environment, NAT is configured at the exit device between a stub domain (inside network) and the backbone. When a packet leaves the domain, NAT translates the locally significant source address into a globally unique address. When a packet enters the domain, NAT translates the globally unique destination address into a local address. If more than one exit point exists, each NAT must have the same translation table. NAT can be configured to advertise to the outside world only one address for the entire network. This ability provides additional security by effectively hiding the entire internal network behind that one address. If NAT cannot allocate an address because it has run out of addresses, it drops the packet and sends an Internet Control Message Protocol (ICMP) host unreachable packet to the destination.

_images/Figure-602.png

Figure-60: NAT implementation

NAT is secure since it hides network from the Internet. All communications from internal private network are handled by the NAT device, which will ensure all the appropriate translations are performed and provide a flawless connection between internal devices and the Internet.

In the above figure, a simple network of 4 hosts and one router that connects this network to the Internet. All hosts in the network have a private Class C IP Address, including the router's private interface (192.168.0.1), while the public interface that's connected to the Internet has a real IP Address (203.31.220.134). This is the IP address the Internet sees as all internal IP addresses are hidden.

Network Setup

Open NetSim and click on Experiments> Advanced Routing> Understanding Public IP Address and NAT (Network Address Translation) then click on the tile in the middle panel to load the example as shown in below Figure-61.

_images/Figure-615.png

Figure-61: List of scenarios for the example of Understanding Public IP Address and NAT (Network Address Translation)

NetSim UI displays the configuration file corresponding to this experiment as shown below Figure-62.

_images/Figure-622.png

Figure-62: Network set up for studying the Understanding Public IP Address and NAT (Network Address Translation)

Procedure

The following set of procedures were done to generate this sample:

Step 1: A network scenario is designed in NetSim GUI comprising of 6 Wired Nodes, 2 L2 Switches, and 4 Routers in the “Internetworks” Network Library.

Step 2: Click on Wired Nodes and then open right-side property panel. In the INTERFACE (ETHERNET) > NETWORK LAYER, the IP Address and the Subnet Mask are set as per the table given below Table-19.

Wired Node

IP address

Subnet mask

7

10.0.0.2

255.0.0.0

8

10.0.0.3

255.0.0.0

9

10.0.0.4

255.0.0.0

10

172.16.0.2

255.255.0.0

11

172.16.0.3

255.255.0.0

12

172.16.0.4

255.255.0.0

Table-19: IP Address and the Subnet mask for Wired nodes

Step 3: The IP Address and the Subnet Mask in Routers are set as per the table given below Table-20.

Router

Interface

IP address

Subnet mask

Router 1

Interface 2(WAN)

11.1.1.1

255.0.0.0

Interface 1(Eth)

10.0.0.1

255.0.0.0

Router 2

Interface 1(WAN)

11.1.1.2

255.0.0.0

Interface 2(WAN)

12.1.1.1

255.0.0.0

Router 3

Interface 1(WAN)

12.1.1.2

255.0.0.0

Interface 2(WAN)

13.1.1.2

255.0.0.0

Router 4

Interface 1(WAN)

13.1.1.1

255.0.0.0

Interface 2(Eth)

172.16.0.1

255.255.0.0

Table-20: IP Address and the Subnet Mask for Routers

Step 4: Configure an application between any two nodes by selecting a CBR application from Wired Node 7 i.e., Source to Wired Node 10 i.e., Destination from Set Traffic tab in the ribbon. Click on the application, expand the right-side property panel and Set Packet Size: 1460 Bytes, Inter Arrival Time remaining 20000µs

Additionally, the “Start Time(s)” parameter is set to 50(Figure-57), while configuring the application. This time is usually set to be greater than the time taken for OSPF Convergence (i.e., Exchange of OSPF information between all the routers), and it increases as the size of the network increases.

_images/Figure-632.png

Figure-63: Application Properties Window

Step 5: Click on Configure reports tab in ribbon on the top and enable packet trace. Packet Trace can be used for packet level analysis.

Step 6: Click on the "Configure Reports" tab in the top ribbon, enable the plots, run the simulation for 100 seconds.

Output

After simulation Open Packet Trace from the Simulation Results window and filter Packet ID to 1 as shown below.

_images/Figure-642.png

Figure-64: Packet Trace

SOURCE IP – source node IP (Node)

DESTINATION IP – gateway IP/ destination IP (Router/ Node)

GATEWAY IP – IP of the device which is transmitting a packet (Router/ Node)

NEXT HOP IP – IP of the next hop (Router/ Node)

Source node 7 (10.0.0.2) wouldn’t know how to route to the destination and hence its default gateway is Router 1 with interface IP (10.0.0.1). So, the first line in the above screenshot specifies packet flow from Source Node 7 to L2 Switch 6 with SOURCE_IP (10.0.0.2), DESTINATION IP (10.0.0.1), GATEWAY IP (10.0.0.2) and NEXT HOP IP (10.0.0.1). Since Switch is Layer2 device there is no change in the IPs in second line. Third line specifies the packet flow from Router 1 to Router 2 with SOURCE IP (10.0.0.2), DESTINATION_IP (13.1.1.1- IP of the router connected to destination. Since OSPF is running, the router is looks up the route to its destination from routing table), GATEWAY IP (11.1.1.1) and NEXT HOP IP (11.1.1.2) and so on.

Exercises

  1. Create a scenario different from the one in the experiment. It should consist of 3 Local Area Networks (LANs). Each LAN can have 1 switch and 2 nodes. Connect each switch to a Router and interconnect the routers. This completes the network set-up. Next, configure LAN1 to have class-A IP addresses, the LAN2 have class-B IP addresses and the LAN3 have class-C IP addresses. Finally, configure the following data traffic flows (i) from a node in LAN1 to a node in LAN2 (ii) From a node in LAN2 to a node in LAN3 (iii) and from a node in LAN3 to a node in LAN1.

Post simulation, using the packet trace explain the Source IP, Destination IP, Gateway IP and Next hop IP for each of the traffic flows.

M/D/1 and M/G/1 Queues (Level 3)

Motivation

In this simulation experiment, we will study a model that is important to understand the queuing and delay phenomena in packet communication links. Let us consider the network shown in Figur-66. Wired Node 1 is transmitting UDP packets to Wired Node 2 through a router. Link 1 and Link 2 are of speed 10 Mbps. The packet lengths are 1250 bytes plus a 54-byte header, so that the time taken to transmit a packet on each 10 Mbps link is

\[\frac{1304 \times 8}{10} \, \mu sec = 1043.2 \, \mu sec\]

In this setting, we would like answers to the following questions:

  1. We notice that the maximum rate at which these packets can be carried on a 10 Mbps link is

    \[\frac{10^{6}}{1043.2} = 958.59 \, \text{packets per second}\]

    Can the UDP application send packets at this rate?

  2. The time taken for a UDP packet to traverse the two links is 2 × 1043.2 = 2086.4 μsec. Is this the time it actually takes for a UDP packet generated at Wired Node 1 to reach Wired Node 2?

The answer to these questions depends on the manner in which the UDP packets are being generated at Wired Node 1. If the UDP packets are generated at intervals of 1043.2 μsec then successive packets will enter the Link 1, just when the previous packet departs. In practice, however, the UDP packets will be generated by a live voice or video source. Depending on the voice activity, the activity in the video scene, and the coding being used for the voice and the video, the rate of generation of UDP packets will vary with time. Suppose two packets were generated during the time that one packet is sent out on Link 1, then one will have to wait, giving rise to queue formation. This also underlines the need for a buffer to be placed before each link; a buffer is just some dynamic random-access memory in the link interface card into which packets can be stored while waiting for the link to free up.

Queuing models permit us to understand the phenomenon of mismatch between the service rate (e.g., the rate at which the link can send out packets) and the rate at which packets arrive. In the network in Figure-66, looking at the UDP flow from Wired Node 1 to Wired Node 2, via Router 3, there are two places at which queueing can occur. At the interface between Wired Node 1 and Link 1, and at the interface between Router 3 and Link 2. Since the only flow of packets is from Wired Node 1 to Wired Node 2, all the packets entering Link 2 are from Link 1, and these are both of the same bit rate. Link 2, therefore, cannot receive packets faster than it can serve them and, at any time, only the packet currently in transmission will be at Link 2. On the other hand at the Wired Node 1 to Link 1 interface, the packets are generated directly by the application, which can be at arbitrary rates, or inter-packet times.

Suppose that, at Wired Node 1, the application generates the successive packets such that the time intervals between the successive packets being generated are statistically independent, and the probability distribution of the time intervals has a negative exponential density, i.e., of the form

\[\lambda e^{-\lambda x}\]

where \(\lambda\) (packets per second) is a parameter, called the rate parameter, and \(x\) (seconds) is the argument of the density. The application generates the entire packet instantaneously, i.e., all the bits of the packet arrive from the application together, and enter the buffer at Link 1, to wait behind the other packets, in a first-in-first-out manner. The resulting random process of the points at which packets enter the buffer of Link 1 is called a Poisson Process of rate \(\lambda\) packets per second. The buffer queues the packets while Link 1 serves them with service time b = 1043.2 μsec. Such a queue is called an \(M/D/1\) queue, where the notation is to be read as follows.

  • The M before the first slash (denoting “Markov”) denotes the Poisson Process of instants at which packets enter the buffer.

  • The D between the two slashes (denoting “Deterministic”) denotes the fixed time taken to serve each queued packet.

  • The 1 after the second slash denotes that there is just a single server (Link 1 in our example)

This way of describing a single server queueing system is called Kendall’s Notation.

In this experiment, we will understand the M/D/1 model by simulating the above-described network on NetSim. The M/D/1 queueing model, however, is simple enough that it can be mathematically analyzed in substantial detail. We will summarize the results of this analysis in the next section. The simulation results from NetSim will be compared with the analytical results.

Mathematical Analysis of the M/D/1 Queue

The M/D/1 queueing system has a random number of arrivals during any time interval. Therefore, the number of packets waiting at the buffer is also random. It is possible to mathematically analyze the random process of the number of waiting packets. The procedure for carrying out such analysis is, however, beyond the scope of this document. We provide the final formulas so that the simulation results from NetSim can be compared with those provided by these formulas.

As described earlier, in this chapter, the M/D/1 queue is characterized by two parameters: λ (packets per second), which is the arrival rate of packets into the buffer, and μ (packets per second), which is the rate at which packets are removed from a nonempty queue. Note that 1/μ is the service time of each packet.

Define

\[\rho = \lambda \times \frac{1}{\mu} = \frac{\lambda}{\mu}.\]

We note that \(\rho\) is the average number of packets that arrive during the service time of a packet. Intuitively, it can be expected that if \(\rho > 1\) then packets arrive faster than the rate at which they can be served, and the queue of packets can be expected to grow without bound. When \(\rho < 1\) we can expect the queue to be “stable.” When \(\rho = 1\), the service rate is exactly matched with the arrival rate; due to the randomness, however, the queue can still grow without bound. The details of this case are beyond the scope of this document.

For the \(k^{th}\) arriving packet, denote the instant of arrival by \(a_k\), the instant at which service for this packet starts as \(s_k\), and the instant at which the packet leaves the system as \(d_k\). Clearly, for all \(k\),

\[d_k - s_k = \frac{1}{\mu}\]

the deterministic service time. Further define, for each \(k\),

\[W_k = s_k - a_k\]
\[T_k = d_k - a_k\]

i.e., \(W_k\) is called the queuing delay, i.e., time from the arrival of the \(k^{th}\) packet until it starts getting transmitted, whereas \(T_k\) is called the total delay, i.e., the time from the arrival of the \(k^{th}\) packet until its transmission is completed. Considering a large number of packets, we are interested in the average of the values \(W_1, W_2, W_3, \cdots\), i.e., the average queueing time of the packets. Denote this average by \(W\). By mathematical analysis of the packet queue process, it can be shown that for an M/D/1 queueing system,

\[W = \frac{1}{2\mu} \times \frac{\rho}{1 - \rho}\]

Denoting by \(T\), the average total time in the system (i.e., the average of \(T_1, T_2, T_3, \cdots\)), clearly

\[T = W + \frac{1}{\mu}.\]
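These two formulas are straightforward to evaluate numerically. The Python sketch below implements them; the rate μ = 958.59 packets/s anticipates the 1304-byte frames on a 10 Mbps link used later in this chapter.

```python
def md1_w(lam, mu):
    """Mean M/D/1 queueing delay W = (1/(2*mu)) * rho/(1 - rho), in seconds."""
    rho = lam / mu
    if rho >= 1:
        raise ValueError("unstable queue: rho must be < 1")
    return (1.0 / (2.0 * mu)) * rho / (1.0 - rho)

def md1_t(lam, mu):
    """Mean total time in system, T = W + 1/mu, in seconds."""
    return md1_w(lam, mu) + 1.0 / mu

# rho = 0.5 with mu = 958.59 packets/s (1304-byte frames on a 10 Mbps link)
mu = 958.59
print(round(md1_w(0.5 * mu, mu) * 1e6, 1))  # 521.6 µs
```

For ρ = 0.5 this gives W ≈ 521.6 µs, which matches the theory column of Table-24 later in this chapter.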

Observe the following from the above formula:

  1. As ρ approaches 0, W approaches 0. This is clear since, when the arrival rate becomes very small, an arriving packet sees a very small queue. For arrival rates approaching 0, packets get served immediately on arrival.

  2. As ρ increases, W increases.

  3. As ρ approaches 1 (from values smaller than 1), the mean delay goes to ∞.

We will verify these observations in the NetSim simulation.

The Experimental Setup

Open NetSim and click on Experiments> Internetworks> Network Performance> MD1 and MG1 Queues, then click on the tile in the middle panel to load the example as shown in Figure-65 below.

_images/Figure-652.png

Figure-65: List of scenarios for the example of MD1 and MG1 Queues

NetSim UI displays the configuration file corresponding to this experiment as shown above.

The model described at the beginning of this chapter is shown in Figure-66.

_images/Figure-662.png

Figure-66: A single wired node (Wired Node 1) sending UDP packets to another wired node (Wired Node 2) through a router (Router 3). The packet interarrival times at Wired Node 1 are exponentially distributed, and packets are all of the same length, i.e., 1250 bytes plus UDP/IP header.

Procedure

Queuing delay for the IAT = 20863 µs sample:

The following procedure was used to generate this sample:

Step 1: A network scenario is designed in NetSim GUI comprising 2 Wired Nodes and 1 Router in the “Internetworks” Network Library.

Step 2: Link Properties are set as per Table-21 below. To set the link properties, click on the link, expand the property panel on the right, and configure as mentioned.

Link Properties

Property                        | Link 1 | Link 2
--------------------------------|--------|-------
Uplink Speed (Mbps)             | 10     | 10
Downlink Speed (Mbps)           | 10     | 10
Uplink BER                      | 0      | 0
Downlink BER                    | 0      | 0
Uplink Propagation Delay (µs)   | 0      | 0
Downlink Propagation Delay (µs) | 0      | 0

Table-21: Wired link properties

Step 3: Configure a Custom application between Wired Node 1 and Wired Node 2 by clicking on the Set Traffic tab in the ribbon at the top. To set the application properties, click on the application and set the Transport Protocol to UDP, the Packet Size to 1250 bytes, the Distribution to Exponential, and the Mean to 20863 µs.

The Packet Size and Inter Arrival Time parameters are set such that the Generation Rate equals 0.479 Mbps. Generation Rate can be calculated using the formula:

\[\textit{Generation Rate (Mbps)} = \textit{Packet Size (Bytes)} \times \frac{8}{\textit{Interarrival time (µs)}}\]
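As a quick numerical check of this formula (a minimal Python sketch, using the values of this sample):

```python
packet_size_bytes = 1250
interarrival_us = 20863
# bits per microsecond is numerically equal to Mbps
generation_rate_mbps = packet_size_bytes * 8 / interarrival_us
print(round(generation_rate_mbps, 3))  # 0.479
```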

Step 4: Packet Trace is enabled by clicking on the Configure Reports tab at the top. At the end of the simulation, a very large .csv file containing all the packet information is available for users to perform packet-level analysis.

Step 5: Run the Simulation for 100 seconds. Similarly, the other samples are created by changing the Inter Arrival Time as per the formula

\[IAT = \frac{10^{6}}{958.59 \times \rho}\]

as given in Table-22 below.

ρ    | IAT (µs)
-----|---------
0.05 | 20863
0.1  | 10431
0.15 | 6954
0.2  | 5215
0.25 | 4172
0.3  | 3477
0.35 | 2980
0.4  | 2607
0.45 | 2318
0.5  | 2086
0.55 | 1896
0.6  | 1738
0.65 | 1604
0.7  | 1490
0.75 | 1390
0.8  | 1303
0.85 | 1227
0.9  | 1159
0.95 | 1098

Table-22: Inter Arrival Time Settings
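Table-22 can be reproduced from the IAT formula with a few lines of Python (μ = 958.59 packets/s is the service rate for 1304-byte frames on a 10 Mbps link; IAT values are truncated to whole microseconds, matching the table):

```python
mu = 958.59  # service rate in packets/s: 1304-byte frames on a 10 Mbps link
# rho -> inter-arrival time in microseconds, truncated to an integer
iat_table = {i / 20: int(1e6 / (mu * (i / 20))) for i in range(1, 20)}
for rho, iat in iat_table.items():
    print(rho, iat)
```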

Even though the packet size at the application layer is 1250 bytes, overhead is added as the packet moves down the layers. The overheads added in the different layers are shown in Table-23 below and can be obtained from the packet trace:

Layer           | Overhead (Bytes)
----------------|-----------------
Transport Layer | 8
Network Layer   | 20
MAC Layer       | 26
Physical Layer  | 0
Total           | 54

Table-23: Overheads added to a packet as it flows down the network stack
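Summing these overheads gives the on-wire frame size, and hence the service rate μ used in the delay formulas. A short sketch:

```python
app_payload_bytes = 1250
overhead_bytes = {"Transport (UDP)": 8, "Network (IP)": 20, "MAC": 26, "PHY": 0}
frame_bytes = app_payload_bytes + sum(overhead_bytes.values())  # 1304 bytes on the wire
mu = 10e6 / (frame_bytes * 8)  # service rate on a 10 Mbps link, packets/s
print(frame_bytes, round(mu, 2))  # 1304 958.59
```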

Obtaining the Mean Queuing delay from the Simulation Output

After running the simulation, note down the “Mean Delay” from the Application Metrics in NetSim results window. This is the average time between the arrival of packets into the buffer at Wired Node 1, and their reception at Wired Node 2.

_images/Figure-672.png

Figure-67: Observing Delay value from the NetSim simulation results window

As explained in the beginning of this chapter, for the network shown in Figure-66, the end-to-end delay of a packet is the sum of the queueing delay at the buffer between the wired-node and Link 1, the transmission time on Link 1, and the transmission time on Link 2 (there being no queueing delay between the Router and Link_2). It follows that

\[\textit{Mean Delay} = \left( \frac{1}{2\mu} \times \frac{\rho}{1 - \rho} \right) + \frac{1}{\mu} + \frac{1}{\mu}\]
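Evaluating this expression in Python (μ = 958.59 packets/s for the 1304-byte frames, so each link transmission takes 1/μ ≈ 1043.2 µs):

```python
mu = 958.59  # packets/s; 1/mu ≈ 1043.2 µs per-link transmission time

def mean_delay_us(rho):
    """End-to-end mean delay: M/D/1 queueing delay plus two link transmission times."""
    w_us = (1e6 / (2 * mu)) * rho / (1 - rho)
    return w_us + 2 * 1e6 / mu

print(round(mean_delay_us(0.5), 1))  # ≈ 2608.0 µs
```

For ρ = 0.5 the formula gives about 2608 µs, against the simulated 2608.38 µs in Table-24.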

Note: The Simulation results are calculated from the Packet Trace whereas the theoretical results are calculated using the formula.

Output Table

Sample | ρ    | λ (pkts/s) | Mean Delay (µs) | Queuing Delay (µs) (Simulation) | Queuing Delay (µs) (Theory)
-------|------|------------|-----------------|---------------------------------|----------------------------
1      | 0.05 | 47.93      | 2112.87         | 26.47                           | 27.45
2      | 0.10 | 95.86      | 2144.01         | 57.61                           | 57.96
3      | 0.15 | 143.79     | 2178.86         | 92.46                           | 92.05
4      | 0.20 | 191.72     | 2218.09         | 131.69                          | 130.40
5      | 0.25 | 239.65     | 2259.11         | 172.71                          | 173.87
6      | 0.30 | 287.58     | 2309.49         | 223.09                          | 223.54
7      | 0.35 | 335.51     | 2365.74         | 279.34                          | 280.86
8      | 0.40 | 383.44     | 2435.65         | 349.25                          | 347.73
9      | 0.45 | 431.37     | 2513.79         | 427.39                          | 426.76
10     | 0.50 | 479.30     | 2608.38         | 521.98                          | 521.60
11     | 0.55 | 527.22     | 2721.59         | 635.19                          | 637.51
12     | 0.60 | 575.15     | 2864.88         | 778.48                          | 782.40
13     | 0.65 | 623.08     | 3052.84         | 966.44                          | 968.68
14     | 0.70 | 671.01     | 3304.58         | 1218.18                         | 1217.07
15     | 0.75 | 718.94     | 3633.66         | 1547.26                         | 1564.80
16     | 0.80 | 766.87     | 4160.39         | 2073.99                         | 2086.40
17     | 0.85 | 814.80     | 5115.95         | 3029.55                         | 2955.73
18     | 0.90 | 862.73     | 6967.16         | 4880.76                         | 4694.39
19     | 0.95 | 910.66     | 12382.98        | 10296.58                        | 9910.39

Table-24: Mean Delay, Queueing delay from Simulation and Queueing delay from analysis

Comparison Chart

_images/Figure-681.png

Figure-68: Comparison of queueing delay from simulation and analysis

Advanced Topic: The M/G/1 Queue

In the M/D/1 model studied above, successive packets were generated at exponentially distributed time intervals (i.e., at the points of a Poisson process); this gave the “M” in the notation. The packets were all of fixed length; this gave the “D” in the notation. Such a model is motivated by the transmission of packetized voice over a fixed-bit-rate wireline link. The voice samples are packetized into constant-length UDP packets; typically, 20 ms of voice samples make up a packet, which is emitted at the instant the 20 ms worth of voice samples has been collected. A voice source that is part of a conversation has listening periods, and “silence” periods between words and sentences, so the intervals between the emission instants of successive UDP packets are random. A simple model for these random intervals is that they are exponentially distributed and independent from packet to packet; formally, the emission instants then form a Poisson point process. With exponentially distributed (and independent) inter-arrival times and fixed-length packets, we obtain the M/D/1 model. On the other hand, some applications, such as video, generate unequal-length packets. Video frames are encoded into packets. To reduce the number of bits transmitted, if there is not much change in a frame compared to the previous one, the frame is encoded into a small number of bits; on the other hand, if there is a large change, a large number of bits is needed to encode the new information in the frame. This motivates variable packet sizes.

Let us suppose that, from such an application, the packets arrive at the points of a Poisson process of rate \(\lambda\), and the randomly varying packet transmission times can be modelled as independent and identically distributed random variables, \(B_1, B_2, B_3, \cdots\), with mean \(b\) and second moment \(b^{(2)}\), i.e., variance \(b^{(2)} - b^2\). Such a model is denoted by M/G/1, where M denotes the Poisson arrival process, and G (“general”) the “generally” distributed service times. Recall the notation M/D/1 (from earlier in this section), where the D denoted fixed (or “deterministic”) service times. Evidently, the M/D/1 model is a special case of the M/G/1 model.

Again, as defined earlier in this section, let \(W\) denote the mean queueing delay in the M/G/1 system. Mathematical analysis of the M/G/1 queue yields the following formula for \(W\):

\[W = \frac{\rho}{1 - \rho} \cdot \frac{b^{(2)}}{2b}\]

where, as before, \(\rho = \lambda b\). This formula is called the Pollaczek–Khinchine formula or P–K formula, after the researchers who first obtained it. Denoting the variance of the service time by \(Var(B)\), the P–K formula can also be written as

\[W = \frac{\rho b}{2(1 - \rho)} \left( \frac{Var(B)}{b^2} + 1 \right)\]

Applying this formula to the M/D/1 queue, we have \(Var(B) = 0\). Substituting this in the M/G/1 formula, we obtain

\[W = \frac{\rho}{1 - \rho} \cdot \frac{b}{2}\]

which, with \(b = 1/\mu\), is exactly the M/D/1 mean queueing delay formula displayed earlier in this section.
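This reduction can be checked numerically. The sketch below implements the second form of the P–K formula and recovers the M/D/1 value when Var(B) = 0 (the rate 958.59 packets/s is the M/D/1 service rate used earlier in this chapter):

```python
def pk_w(lam, b, var_b):
    """P-K mean queueing delay: W = (rho*b / (2*(1 - rho))) * (Var(B)/b**2 + 1)."""
    rho = lam * b
    return (rho * b / (2 * (1 - rho))) * (var_b / b**2 + 1)

mu = 958.59
# Deterministic service (Var(B) = 0) at rho = 0.5 reproduces the M/D/1 result
print(round(pk_w(0.5 * mu, 1 / mu, 0.0) * 1e6, 1))  # 521.6 µs
```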

A NetSim Exercise Utilising the M/G/1 Queue

In this section we demonstrate the use of the M/G/1 queueing model in the context of the network setup shown in Figure-66. The application generates exponentially distributed data segments with mean \(d\) bits, i.e., successive data segment lengths are sampled independently from an exponential distribution with rate parameter \(\frac{1}{d}\). Note that, since packets are integer multiples of bits, the exponential distribution only serves as an approximation. These data segments are then packetized by adding a constant-length header of \(h\) bits. The packet generation instants form a Poisson process of rate \(\lambda\).

Let us denote the link speed by \(c\). Let us denote the random data segment length by \(X\) and the packet transmission time by \(B\), so that

\[B = \frac{X + h}{c}\]

Denoting the mean of \(B\) by \(b\), we have

\[b = \frac{d + h}{c}\]

Further, since \(h\) is a constant,

\[Var(B) = \frac{Var(X)}{c^2}\]

These can now be substituted in the P–K formula to obtain the mean delay in the buffer between Node 1 and Link 1.

We set the mean packet size to 100B or 800 bits, the header length \(h = 54B\) or 432 bits, and the arrival rate \(\lambda = 3246.7\) packets per second (so that \(\rho = \lambda/\mu = 0.4\)).

For a 10 Mbps link, the service rate \(\mu\) is

\[\mu = \frac{10 \times 10^6}{154 \times 8} = 8116.8\]

Using the Pollaczek–Khinchine (PK) formula, the waiting time for a M/G/1 queueing system is

\[w = \frac{\rho + \lambda \times \mu \times Var(s)}{2(\mu - \lambda)}\]

Where \(Var(s)\) is the variance of the service time distribution \(S\). Note that

\[Var(s) = \frac{1}{(\mu')^2}\]

where \(\mu'\) is the rate parameter of the exponentially distributed payload transmission time (the 100B data segments, not the full 154B packets):

\[\mu' = \frac{10 \times 10^6}{100 \times 8} = 12500\]

Hence substituting into the PK formula, one gets

\[w = \frac{0.4 + \left(\frac{3246.7 \times 8116.8}{12500^2}\right)}{2(8116.8 - 3246.7)} = 58.4 \,\mu s\]

By simulation the queuing delay is \(60.5 \,\mu s\).
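The computation can also be carried out directly from the P–K formula in its moment form \(W = \frac{\rho b}{2(1-\rho)}(Var(B)/b^2 + 1)\). A sketch, using λ = 3246.7 packets/s, the arrival rate consistent with ρ = 0.4:

```python
c = 10e6              # link speed, bits/s
d, h = 800.0, 432.0   # mean payload and (constant) header, bits
b = (d + h) / c       # mean service time; 1/b = mu ≈ 8116.88 packets/s
var_b = (d / c) ** 2  # Var(B) = Var(X)/c**2, with Var(X) = d**2 (exponential payload)
lam = 3246.7          # arrival rate, packets/s, giving rho ≈ 0.4
rho = lam * b
w = (rho * b / (2 * (1 - rho))) * (var_b / b**2 + 1)
print(round(w * 1e6, 1))  # ≈ 58.4 µs
```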

The queueing delay is not available in the NetSim results dashboard. It can be obtained from the packet trace: it is the average of (PHY layer arrival time – APP layer arrival time) over the packets sent from Node 1.

Understand the working of OSPF and SPF (Level 3)

NOTE: NetSim Academic supports a maximum of 20 routers and hence this experiment cannot be done with NetSim Academic. NetSim Standard/Pro would be required to simulate this configuration.

Objective

To understand the working of OSPF and Shortest Path First (SPF) tree creation.

Theory

OSPF

Open Shortest Path First (OSPF) is an Interior Gateway Protocol (IGP) standardized by the Internet Engineering Task Force (IETF) and commonly used in large Enterprise networks. OSPF is a link-state routing protocol providing fast convergence and excellent scalability. Like all link-state protocols, OSPF is very efficient in its use of network bandwidth.

Shortest path First Algorithm

OSPF uses a shortest path first algorithm to build and calculate the shortest path to all known destinations. The shortest path is calculated with the use of the Dijkstra algorithm. The algorithm by itself is quite complicated. This is a very high-level, simplified way of looking at the various steps of the algorithm:

  • Upon initialization or due to any change in routing information, a router generates a link-state advertisement. This advertisement represents the collection of all link-states on that router.

  • All routers exchange link-states by means of flooding. Each router that receives a link-state update should store a copy in its link-state database and then propagate the update to other routers.

  • After the database of each router is completed, the router calculates a Shortest Path Tree to all destinations. The router uses the Dijkstra algorithm in order to calculate the shortest path tree. The destinations, the associated cost and the next hop to reach those destinations form the IP routing table.

  • In case no changes in the OSPF network occur, such as cost of a link or a network being added or deleted, OSPF should be very quiet. Any changes that occur are communicated through link-state packets, and the Dijkstra algorithm is recalculated in order to find the shortest path.

The algorithm places each router at the root of a tree and calculates the shortest path to each destination based on the cumulative cost required to reach that destination. Each router will have its own view of the topology even though all the routers will build a shortest path tree using the same link-state database.

Example

Refer to OSPF RFC 2328 (https://tools.ietf.org/html/rfc2328#section-2.3).

The network below shows a sample map of an Autonomous System.

_images/Figure-692.png

Figure-69: Sample maps of an Autonomous system

A cost is associated with the output side of each router interface. This cost is configurable by the system administrator. The lower the cost, the more likely the interface is to be used to forward data traffic. Costs are also associated with the externally derived routing data (e.g., the BGP-learned routes).

The directed graph resulting from the above network is depicted in the following table. Arcs are labelled with the cost of the corresponding router output interface. Arcs having no labelled cost have a cost of 0. Note that arcs leading from networks to routers always have cost 0.

Each row lists a destination (TO) and the arcs leading into it, labelled with the source (FROM) and the cost of the corresponding router output interface.

TO   | FROM (cost)
-----|--------------------------------------
RT1  | N3 (0)
RT2  | N3 (0)
RT3  | RT6 (6), N3 (0)
RT4  | RT5 (8), N3 (0)
RT5  | RT4 (8), RT6 (6), RT7 (6)
RT6  | RT3 (8), RT5 (7)
RT7  | RT5 (6), N6 (0)
RT8  | N6 (0)
RT9  | N9 (0)
RT10 | RT6 (7), N6 (0), N8 (0)
RT11 | N8 (0), N9 (0)
RT12 | N9 (0)
N1   | RT1 (3)
N2   | RT2 (3)
N3   | RT1 (1), RT2 (1), RT3 (1), RT4 (1)
N4   | RT3 (2)
N5   | —
N6   | RT7 (1), RT8 (1), RT10 (1)
N7   | RT8 (4)
N8   | RT10 (3), RT11 (2)
N9   | RT9 (1), RT11 (1), RT12 (1)
N10  | RT12 (2)
N11  | RT9 (3)
N12  | RT5 (8), RT7 (2)
N13  | RT5 (8)
N14  | RT5 (8)
N15  | RT7 (9)
H1   | RT12 (10)

Table-25: Directed graph

A router generates its routing table from the above directed graph by calculating a tree of shortest paths with the router itself as root. Obviously, the shortest-path tree depends on the router doing the calculation. The shortest-path tree for Router RT6 in our example is depicted in the following figure.
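This computation is easy to replicate. The sketch below runs Dijkstra's shortest path first algorithm over the directed arcs of Table-25 with RT6 as root; the resulting distances match the RT6 routing table that follows.

```python
import heapq

# Directed arcs (from, to): cost, transcribed from the Table-25 directed graph
arcs = {
    ("N3", "RT1"): 0, ("N3", "RT2"): 0, ("N3", "RT3"): 0, ("N3", "RT4"): 0,
    ("RT6", "RT3"): 6, ("RT6", "RT5"): 6, ("RT6", "RT10"): 7,
    ("RT5", "RT4"): 8, ("RT4", "RT5"): 8, ("RT7", "RT5"): 6,
    ("RT3", "RT6"): 8, ("RT5", "RT6"): 7, ("RT5", "RT7"): 6,
    ("N6", "RT7"): 0, ("N6", "RT8"): 0, ("N6", "RT10"): 0,
    ("N8", "RT10"): 0, ("N8", "RT11"): 0, ("N9", "RT9"): 0,
    ("N9", "RT11"): 0, ("N9", "RT12"): 0,
    ("RT1", "N1"): 3, ("RT2", "N2"): 3,
    ("RT1", "N3"): 1, ("RT2", "N3"): 1, ("RT3", "N3"): 1, ("RT4", "N3"): 1,
    ("RT3", "N4"): 2, ("RT7", "N6"): 1, ("RT8", "N6"): 1, ("RT10", "N6"): 1,
    ("RT8", "N7"): 4, ("RT10", "N8"): 3, ("RT11", "N8"): 2,
    ("RT9", "N9"): 1, ("RT11", "N9"): 1, ("RT12", "N9"): 1,
    ("RT12", "N10"): 2, ("RT9", "N11"): 3,
    ("RT5", "N12"): 8, ("RT7", "N12"): 2,
    ("RT5", "N13"): 8, ("RT5", "N14"): 8, ("RT7", "N15"): 9, ("RT12", "H1"): 10,
}

def spf(root):
    """Dijkstra's shortest path first from `root` over the directed arc set."""
    adj = {}
    for (u, v), cost in arcs.items():
        adj.setdefault(u, []).append((v, cost))
    dist, heap = {root: 0}, [(0, root)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue  # stale heap entry
        for v, cost in adj.get(u, []):
            if d + cost < dist.get(v, float("inf")):
                dist[v] = d + cost
                heapq.heappush(heap, (d + cost, v))
    return dist

dist = spf("RT6")
print(dist["N1"], dist["H1"], dist["N15"])  # 10 21 17, matching the RT6 table
```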

_images/Figure-701.png

Figure-70: SPF tree for Router 6

Routing Table

The tree gives the entire path to any destination network or host. However, only the next hop to the destination is used in the forwarding process. Note also that the best route to any router has also been calculated. For the processing of external data, we note the next hop and distance to any router advertising external routes. The resulting routing table for Router RT6 is shown in the following table.

Destination | IP Address | Next hop | Distance
------------|------------|----------|---------
N1          | 11.0.0.130 | RT3      | 10
N2          | 11.0.0.138 | RT3      | 10
N3          | 11.0.0.2   | RT3      | 7
N4          | 11.0.0.170 | RT3      | 8
N6          | 11.0.0.66  | RT10     | 8
N7          | 11.0.0.194 | RT10     | 12
N8          | 11.0.0.106 | RT10     | 10
N9          | 11.0.0.82  | RT10     | 11
N10         | 11.0.0.202 | RT10     | 13
N11         | 11.0.0.211 | RT10     | 14
H1          | 11.0.0.227 | RT10     | 21
RT5         | 11.0.0.34  | RT5      | 6
RT7         | 11.0.0.58  | RT10     | 8
N12         | 11.0.0.147 | RT10     | 10
N13         | 11.0.0.154 | RT5      | 14
N14         | 11.0.0.162 | RT5      | 14
N15         | 11.0.0.186 | RT10     | 17

Table-26: Routing Table for RT6

Distance calculation

RT6 has 3 interfaces, towards RT3, RT5 and RT10. The distance obtained for destination N1 via the RT3 interface is 10. Packets from RT6 reach N1 via RT3, N3 and RT1, and the cost along this path is 6+1+0+3 = 10 (the costs can be seen in the SPF tree for RT6).

Network Setup

Open NetSim and click on Experiments> Internetworks> Routing and Switching> Understand the working of OSPF, then click on the tile in the middle panel to load the example as shown in Figure-71 below.

_images/Figure-717.png

Figure-71: List of scenarios for the example of Understand the working of OSPF

NetSim UI displays the configuration file corresponding to this experiment as shown in Figure-72 below.

_images/Figure-721.png

Figure-72: Network topology created in NetSim, similar to the network in OSPF RFC 2328.

Procedure

The following procedure was used to generate this sample:

Step 1: A network scenario is designed in NetSim GUI comprising 28 Routers and 13 Wired Nodes in the “Internetworks” Network Library.

Step 2: The Output Cost for all the Routers in the network is set in Applications > OSPF > WAN Interface as per Table-25.

Step 3: Configure CBR applications to all the destination nodes by selecting the CBR application icon from the Set Traffic tab. Right-click on each application, open its properties in a new window, and set the start time of all the applications to 30 seconds.

Step 4: Packet Trace is enabled in the NetSim GUI, so we can track the routes the packets have taken to reach the destination based on the Output Cost that is set.

Step 5: Run the Simulation for 40 seconds.

Output

The following image depicts the shortest path first tree created in NetSim. It is for representational purposes and cannot be opened in NetSim. The blue numbers are the “Output Cost” parameter of each link, set by the user in Router > Application Layer > Interface. The red numbers are the IP addresses of the router interfaces.

_images/Figure-731.png

Figure-73: SPF tree for Router 6

NOTE: NetSim does not implement link type 3 (Link to Stub Network). Hence users will notice a slight difference between the SPF trees of the RFC and NetSim.

The IP forwarding table formed in the routers can be accessed from the IP Forwarding Table list present in the Simulation Results window, as shown in the table below.

Network Destination | Gateway    | Interface  | Metrics | Type
--------------------|------------|------------|---------|-----
11.0.0.18           | 11.0.0.50  | 11.0.0.51  | 6       | OSPF
11.0.0.50           | 11.0.0.50  | 11.0.0.51  | 6       | OSPF
11.0.0.171          | 11.0.0.50  | 11.0.0.51  | 6       | OSPF
11.0.0.34           | 11.0.0.43  | 11.0.0.42  | 6       | OSPF
11.0.0.43           | 11.0.0.43  | 11.0.0.42  | 6       | OSPF
11.0.0.59           | 11.0.0.43  | 11.0.0.42  | 6       | OSPF
11.0.0.146          | 11.0.0.43  | 11.0.0.42  | 6       | OSPF
11.0.0.155          | 11.0.0.43  | 11.0.0.42  | 6       | OSPF
11.0.0.163          | 11.0.0.43  | 11.0.0.42  | 6       | OSPF
11.0.0.3            | 11.0.0.50  | 11.0.0.51  | 7       | OSPF
11.0.0.131          | 11.0.0.50  | 11.0.0.51  | 7       | OSPF
11.0.0.11           | 11.0.0.50  | 11.0.0.51  | 7       | OSPF
11.0.0.139          | 11.0.0.50  | 11.0.0.51  | 7       | OSPF
11.0.0.26           | 11.0.0.50  | 11.0.0.51  | 7       | OSPF
11.0.0.35           | 11.0.0.50  | 11.0.0.51  | 7       | OSPF
11.0.0.115          | 11.0.0.122 | 11.0.0.123 | 7       | OSPF
11.0.0.122          | 11.0.0.122 | 11.0.0.123 | 7       | OSPF
11.0.0.218          | 11.0.0.122 | 11.0.0.123 | 7       | OSPF
11.0.0.2            | 11.0.0.50  | 11.0.0.51  | 7       | OSPF
11.0.0.10           | 11.0.0.50  | 11.0.0.51  | 7       | OSPF
11.0.0.19           | 11.0.0.50  | 11.0.0.51  | 7       | OSPF
11.0.0.27           | 11.0.0.50  | 11.0.0.51  | 7       | OSPF
11.0.0.58           | 11.0.0.122 | 11.0.0.123 | 8       | OSPF
11.0.0.67           | 11.0.0.122 | 11.0.0.123 | 8       | OSPF
11.0.0.179          | 11.0.0.122 | 11.0.0.123 | 8       | OSPF
11.0.0.187          | 11.0.0.122 | 11.0.0.123 | 8       | OSPF
11.0.0.74           | 11.0.0.122 | 11.0.0.123 | 8       | OSPF
11.0.0.195          | 11.0.0.122 | 11.0.0.123 | 8       | OSPF
11.0.0.66           | 11.0.0.122 | 11.0.0.123 | 8       | OSPF
11.0.0.75           | 11.0.0.122 | 11.0.0.123 | 8       | OSPF
11.0.0.114          | 11.0.0.122 | 11.0.0.123 | 8       | OSPF
11.0.0.170          | 11.0.0.50  | 11.0.0.51  | 8       | OSPF
11.0.0.98           | 11.0.0.122 | 11.0.0.123 | 10      | OSPF
11.0.0.107          | 11.0.0.122 | 11.0.0.123 | 10      | OSPF
11.0.0.130          | 11.0.0.50  | 11.0.0.51  | 10      | OSPF
11.0.0.138          | 11.0.0.50  | 11.0.0.51  | 10      | OSPF
11.0.0.178          | 11.0.0.122 | 11.0.0.123 | 10      | OSPF
11.0.0.106          | 11.0.0.122 | 11.0.0.123 | 10      | OSPF
11.0.0.219          | 11.0.0.122 | 11.0.0.123 | 10      | OSPF
11.0.0.83           | 11.0.0.122 | 11.0.0.123 | 11      | OSPF
11.0.0.210          | 11.0.0.122 | 11.0.0.123 | 11      | OSPF
11.0.0.90           | 11.0.0.122 | 11.0.0.123 | 11      | OSPF
11.0.0.203          | 11.0.0.122 | 11.0.0.123 | 11      | OSPF
11.0.0.226          | 11.0.0.122 | 11.0.0.123 | 11      | OSPF
11.0.0.82           | 11.0.0.122 | 11.0.0.123 | 11      | OSPF
11.0.0.91           | 11.0.0.122 | 11.0.0.123 | 11      | OSPF
11.0.0.99           | 11.0.0.122 | 11.0.0.123 | 11      | OSPF
11.0.0.194          | 11.0.0.122 | 11.0.0.123 | 12      | OSPF
11.0.0.202          | 11.0.0.122 | 11.0.0.123 | 13      | OSPF
11.0.0.154          | 11.0.0.43  | 11.0.0.42  | 14      | OSPF
11.0.0.162          | 11.0.0.43  | 11.0.0.42  | 14      | OSPF
11.0.0.211          | 11.0.0.122 | 11.0.0.123 | 14      | OSPF
11.0.0.147          | 11.0.0.43  | 11.0.0.42  | 14      | OSPF
11.0.0.186          | 11.0.0.122 | 11.0.0.123 | 17      | OSPF
11.0.0.227          | 11.0.0.122 | 11.0.0.123 | 21      | OSPF

Table-27: Exported list of IP forwarding table of RT 6 from NetSim’s Result Dashboard

From the above table, the router forwards packets intended for each subnet as follows:

From | Destination | IP Address | Route to Destination (cost)            | Distance (Metrics)
-----|-------------|------------|----------------------------------------|-------------------
RT6  | N1          | 11.0.0.130 | RT3>N3>RT1>N1 (6+1+0+3)                | 10
RT6  | N2          | 11.0.0.138 | RT3>N3>RT2>N2 (6+1+0+3)                | 10
RT6  | N3          | 11.0.0.2   | RT3>N3 (6+1)                           | 7
RT6  | N4          | 11.0.0.170 | RT3>N4 (6+2)                           | 8
RT6  | N6          | 11.0.0.66  | RT10>N6 (7+1)                          | 8
RT6  | N7          | 11.0.0.194 | RT10>N6>RT8>N7 (7+1+0+4)               | 12
RT6  | N8          | 11.0.0.106 | RT10>N8 (7+3)                          | 10
RT6  | N9          | 11.0.0.82  | RT10>N8>RT11>N9 (7+3+0+1)              | 11
RT6  | N10         | 11.0.0.202 | RT10>N8>RT11>N9>RT12>N10 (7+3+0+1+0+2) | 13
RT6  | N11         | 11.0.0.211 | RT10>N8>RT11>N9>RT9>N11 (7+3+0+1+0+3)  | 14
RT6  | H1          | 11.0.0.227 | RT10>N8>RT11>N9>RT12>H1 (7+3+0+1+0+10) | 21
RT6  | RT5         | 11.0.0.34  | RT5 (6)                                | 6
RT6  | RT7         | 11.0.0.58  | RT10>N6>RT7 (7+1+0)                    | 8
RT6  | N12         | 11.0.0.147 | RT10>N6>RT7>N12 (7+1+0+2)              | 10
RT6  | N13         | 11.0.0.154 | RT5>N13 (6+8)                          | 14
RT6  | N14         | 11.0.0.162 | RT5>N14 (6+8)                          | 14
RT6  | N15         | 11.0.0.186 | RT10>N6>RT7>N15 (7+1+0+9)              | 17

Table-28: Distance calculated from RT6 to destinations.

Open Packet Trace and filter PACKET ID to 1 and CONTROL PACKET TYPE/APP NAME to WN13-WN1 to view the detailed information of the routes taken to reach the destination WN1.

_images/Figure-741.png

Figure-74: Packet Trace showing the route in which the packets have chosen to reach the destination.

Similarly, filter CONTROL PACKET TYPE/APP NAME to the remaining destination nodes to view the routes taken.

Thus, we are able to simulate the exact example provided in the RFC, and the SPF tree obtained and the routing costs match the analysis provided in the RFC.