NSX

Spoof Guard

  1. One of the security features offered by NSX is Spoof Guard.
  2. The Spoof Guard feature protects against IP spoofing, preventing malicious attacks.
  3. Spoof Guard trusts the IP addresses reported to NSX Manager by vCenter, which learns them with the help of VMware Tools.
  4. In case of a spoofed IP or a violation, Spoof Guard blocks the traffic on that particular vNIC (it prevents the virtual machine's vNIC from accessing the network).
  5. Spoof Guard functions independently of the NSX Distributed Firewall.
  6. Spoof Guard supports both IPv4 and IPv6 addresses.
Use cases:
  • Preventing a rogue VM from assuming the IP address of an existing VM & sending malicious traffic.
  • Preventing any unauthorized IP address change for a VM without proper approval.
  • Preventing any VM from bypassing the DFW firewall policies by changing its IP address.
Enabling the Spoof Guard feature is simple & takes only a few clicks.
By default, the Spoof Guard feature is disabled.
 
Creating Spoof Guard Policy:
  1. By default, the IP detection type is None. It should be changed.
  2. Two options are supported:
     a. DHCP Snooping
     b. ARP Snooping
 
  1. As the next step, you can edit the default policy or create a new policy. In this example we will create a new policy.
  2. Create the policy with the name “Test” & select the option “Enabled”.
  3. In this example we will select the option to manually inspect & approve all IP addresses.

  1. As the next step, select the network to which this policy should be applied.
  2. The network can be a distributed port group, a legacy (standard) port group or a logical switch.
 
  1. Once the network is selected, you will be able to view the IPs detected, which are waiting for the “Approve” action.
  2. Unless approved, the VMs will not be part of the network & no traffic passes.
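
For teams that prefer automation over the UI, the same information can be pulled from the NSX Manager REST API. The sketch below is only illustrative: the hostname, credentials and the SpoofGuard endpoint paths are assumptions based on the NSX for vSphere API and should be verified against the API guide for your version.

# Assumption: NSX for vSphere SpoofGuard API paths; confirm them in the NSX API guide for your release
# List all SpoofGuard policies (the default policy plus custom ones such as "Test")
curl -k -u admin:password https://nsxmgr/api/4.0/services/spoofguard/policies/

# Retrieve a single policy (replace policyId with the ID returned by the call above)
curl -k -u admin:password https://nsxmgr/api/4.0/services/spoofguard/policies/policyId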

NSX Traceflow

 

NSX Traceflow:

  • Troubleshooting a virtual environment is challenging & also quite interesting.
  • Traceflow is one of the tools introduced in NSX for vSphere 6.2 and is used for troubleshooting & planning.
  • It allows you to inject a packet into the network & monitor its flow across the network.
  • The traffic can be injected at the vNIC level of the VM without touching the guest operating system or logging in to the VM.
  • One of the key benefits of Traceflow is that it can be used even when the VM is powered off.
  • The output of Traceflow indicates the hops traversed by the traffic from source to destination.
  • It also indicates whether the packet is delivered to the destination or not (for example, whether the DFW is blocking the traffic).

 

Traceflow Use Cases:

  • Troubleshooting network failures to see the exact path that traffic takes
  • Performance monitoring to see link utilization
  • Network planning to see how a network will behave when it is in production

The following traffic types are supported by Traceflow:

  1. Layer 2 unicast
  2. Layer 3 unicast
  3. Layer 2 broadcast
  4. Layer 2 multicast

Note: The source for any Traceflow should always be the vNIC of a VM. The destination can be any device in the NSX overlay or underlay.

 

Using Traceflow:

  • Log in to vCenter & navigate to Networking & Security -> Tools -> Traceflow.
  • Select the source VM vNIC & the destination VM vNIC (refer to the screenshot below).
 
  • Under advanced options, choose the protocol of your choice from the drop-down (supported protocols are TCP, UDP & ICMP).
  • In this example we have selected the protocol “TCP”.
  • Destination port TCP 22 is selected in this example.

Click on “Trace” to initiate the trace between the source & the destination.

  • The simulated traffic is initiated between the vNICs of the source & destination VMs.
  • The complete traffic flow, including the vNIC, firewall & ESXi host, is visible.
  • It is easy to identify whether the packet was delivered or not.

  • To identify which firewall rule was hit, just click on the firewall entry & it shows the Rule ID that allowed or blocked the traffic.

Traceflow is a very simple & easy tool for troubleshooting the virtual network infrastructure.
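
The same trace can also be started through the NSX Manager REST API, which is handy for scripted troubleshooting. The sketch below is only illustrative: the hostname, credentials, request body and endpoint paths are assumptions based on the NSX for vSphere API and should be verified against the API guide for your version.

# Assumption: NSX for vSphere Traceflow API paths; confirm them in the NSX API guide for your release
# Start a trace by posting a traceflow request (source vNIC, destination & protocol go in request.xml)
curl -k -u admin:password -X POST -H "Content-Type: application/xml" -d @request.xml https://nsxmgr/api/2.0/vdn/traceflow

# Retrieve the observations (hops, firewall hits, delivered/dropped) for the returned trace ID
curl -k -u admin:password https://nsxmgr/api/2.0/vdn/traceflow/traceflowId/observations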

NSX Backup

  • One of the key considerations during an NSX deployment is proper planning for backing up the NSX Manager.
  • Proper & regular backups of the NSX Manager are critical to ensure that NSX can be recovered after any failure or unforeseen issue.
  • The NSX Manager backup is as critical as the backup of any other component in the SDDC environment.
  • As a best practice when deploying an SDDC, ensure that a proper backup procedure & process is in place.
The NSX backup contains the configuration of the following:
  • Controllers
  • Logical switching
  • Routing entities
  • Security
  • Firewall Rules
  • Events & Audit logs

Virtual switches (vDS) are not part of NSX Manager backup.

 

 

Best Practices:

  • Take a backup before & after any NSX upgrade or vCenter upgrade.
  • Take a backup after any configuration changes related to NSX Controllers, logical switches, logical routers, Edge Services Gateways & security/firewall policies.
  • Ensure that the vCenter Server, including its database server, is backed up along with the NSX backup schedule.
  • When an issue requires restoring the entire environment, it is always recommended to restore the NSX backup together with the vCenter Server backup (including its database) taken at the same time.
  • Create a backup strategy to schedule the backup periodically along with the vCenter Server & its database.
 

NSX Manager Backup Methods:

  1. Web interface: NSX Manager with FTP/SFTP
  2. REST API method
The recommended way to take the NSX backup is via the web interface using FTP/SFTP, since it is very simple & easy to configure.
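
For reference, the REST API method can be scripted as shown below. This is only a sketch: the hostname, credentials and endpoint paths are assumptions based on the NSX for vSphere appliance-management API and should be verified against the API guide for your version.

# Assumption: NSX for vSphere appliance-management API paths; confirm them in the NSX API guide
# View the currently configured FTP/SFTP backup settings
curl -k -u admin:password https://nsxmgr/api/1.0/appliance-management/backuprestore/backupsettings

# Trigger an on-demand backup once the FTP/SFTP settings are in place
curl -k -u admin:password -X POST https://nsxmgr/api/1.0/appliance-management/backuprestore/backup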
 

Procedure – NSX Manager Backup:

  • The NSX Manager backup is a very simple & straightforward procedure.
  • The VMware article below explains it & it is easy to set up.

 

Ref:

https://docs.vmware.com/en/VMware-NSX-for-vSphere/6.4/com.vmware.nsx.upgrade.doc/GUID-2A75A102-518D-4D6C-B23D-877C421B1536.html

 

Restoring an NSX Manager Backup:

  • Restoring the NSX Manager requires the backup file to be loaded onto an NSX Manager appliance.
  • The VMware recommendation is to deploy a new NSX Manager appliance & then restore the backup file to it.
  • A restore is only supported between NSX Managers of the same version (the backup file version & the version of the NSX Manager being restored must match).
  • Restoring the backup file to the existing NSX Manager appliance may also work, but it can sometimes cause issues.
VMware also recommends noting the old NSX Manager settings, such as the IP address, subnet mask & default gateway, in advance, since they need to be specified on the newly deployed NSX Manager appliance.

 

There may be situations where an NSX Edge becomes inaccessible or fails for some reason.
In this case the NSX Edge can easily be restored by clicking Redeploy NSX Edge in the vSphere Web Client.
It is not required to restore the complete NSX Manager backup.
Note: Individual backup of NSX Edge devices is not supported.
 
 

VXLAN in NSX

Virtual Extensible LAN (VXLAN):

  • VXLAN is the foundation of network virtualization and provides the network overlay.
  • VXLAN encapsulates Ethernet frames in routable UDP/IP packets.
  • VXLAN allows a single L2 segment to be extended across L3 boundaries.
  • VXLAN also overcomes the VLAN limits. The 802.1Q standard allows a maximum of 4094 VLANs.
  • VXLAN overcomes this with a maximum of 2^24 (about 16 million) VNIs (VXLAN Network Identifiers).

Overlay Architecture: NSX

  • The term “overlay” refers to any virtual network built over an “underlay” network (the underlay is the physical network).
  • Virtual networks are created with a MAC-over-IP encapsulation using VXLAN.
  • The encapsulation allows two VMs on the same logical network to talk to each other even if the path between the VMs needs to be routed.
  • The VXLAN modules operate in the ESXi hypervisor.
  • VTEPs encapsulate & de-capsulate network packets.
  • VTEPs terminate the VXLAN tunnels.
  • They wrap a UDP packet header around the L2 frame.
  • The VXLAN packet header includes the VNI (VXLAN Network Identifier).
  • Managed by the NSX Controllers
                  – ARP, VTEP & MAC tables
  • Encapsulated packets are forwarded between VTEPs over the physical network like any other IP traffic.
  • A VTEP is a host interface which forwards Ethernet frames from a virtual network via VXLAN, or vice versa.
  • All hosts with the same VNI configured must be able to retrieve and synchronize data (ARP & MAC tables).

MTU Considerations:
VXLAN is an overlay technology which uses encapsulation, so the MTU needs to be adjusted.
VXLAN adds 50 bytes of overhead to the header.
The entire underlay path needs to be configured to support the MTU requirement of VXLAN.

  • IPv4 Header – 20 bytes
  • UDP Header – 8 bytes
  • VXLAN Header – 8 bytes
  • Original Ethernet Header with VLAN – 18 bytes
  • Original Ethernet Payload – 1500 bytes

Total = 1554 bytes

  • VMware recommends setting the MTU value to 1600 bytes.
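
As an illustration, the sketch below raises the MTU on a Cisco Nexus underlay interface. The interface name and value are assumptions; the same MTU must be applied end to end on the underlay path and on the vSphere Distributed Switch carrying the VTEP VMkernel ports (some Nexus platforms set the L2 MTU through a network-qos policy instead of per interface).

! Assumption: example underlay uplink carrying VTEP traffic
interface Ethernet1/1
 description VXLAN transport uplink
 mtu 1600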

VMware – Interview Questions

What are the types of Ports groups in ESX/ESXi?

There are 3 types of port groups in ESX

  1. Service console port group
  2. VMkernel Port group
  3. Virtual machine port group

There are only 2 types of port group in ESXi

  1. VMkernel Port group
  2. Virtual Machine Port group

What is VMkernel?

The VMkernel is a proprietary kernel of VMware and is not based on any flavor of the Linux operating system.
In ESX, the VMkernel relies on an operating system, the Service Console, to boot and manage the kernel.
The Service Console is provided when the VMkernel is booted.
Only the Service Console is based on Red Hat Linux, not the VMkernel.

What is the use of Service Console port?

The Service Console port group is required to manage the ESX server and acts as the management network for ESX.
vCenter/the vSphere Client uses the Service Console IPs to communicate with the ESX server.

What is the use of VMKernel Port?

The VMkernel port is used by ESX/ESXi for vMotion, iSCSI & NFS communications. ESXi uses a VMkernel port as the management network since it does not have a Service Console built in.

What is the use of Virtual Machine Port Group?

The Virtual Machine port group is used for virtual machine communication.

How Virtual Machine communicates to other servers in Network?

All the virtual machines which are configured in a VM port group are able to connect to the other machines on the network. This port group enables communication between the vSwitch and the physical switch through the uplink (physical NIC) associated with the vSwitch.
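
For reference, the vSwitches, port groups and VMkernel interfaces described above can be inspected from the ESXi shell with esxcli; a minimal sketch (ESXi 5.x and later):

# List standard vSwitches with their uplinks, MTU and attached port groups
esxcli network vswitch standard list

# List all port groups and the vSwitch each one belongs to
esxcli network vswitch standard portgroup list

# List VMkernel interfaces (management, vMotion, iSCSI/NFS traffic)
esxcli network ip interface list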

What are the different types of Partitions in ESX server?

/ (root)
swap
/var
/var/core
/opt
/home
/tmp

 

Cisco VSS

Virtual Switching System – VSS

 

What is a Virtual Switching System (VSS)?

  • VSS is a network system virtualization technology that pools multiple Cisco Catalyst 6500 Series switches into one virtual switch.
  • It helps in increasing operational efficiency, boosting nonstop communication & scaling system bandwidth capacity up to 1.4 Tbps.
  • Once VSS is enabled, two Cisco Catalyst 6509 chassis can be managed as a single 18-slot chassis.
  • VSS eliminates the dependency on Spanning Tree & provides increased bandwidth.

 

Supported Platform:

  • Catalyst 6500 & Catalyst 4500 series
  • Virtual Switching Supervisor 720-10GE (VS-S720-10GE-3C and VS-S720-10GE-3CXL) with IOS 12.2(33)SXH1 & IP Base
  • Supervisor 2T (VS-S2T-10G and VS-S2T-10G-XL) with IOS 12.2(50) SY and IP Base

The peer switches which form the VSS should be identical to each other with respect to both hardware & software version.

Cisco VSS – Logical Representation:

Cisco VSS Architecture:

  • A Virtual Switching System combines two switches into a single logical network entity from the network control-plane & management perspective.
  • VSS uses Cisco IOS Stateful Switchover (SSO) & Non-Stop Forwarding (NSF) to provide a single logical switching & routing entity.
  • To neighboring devices, the VSS appears as a single logical switch.
  • From a data-plane & traffic-forwarding perspective, both switches in the Virtual Switching System actively forward traffic.
  • From a control-plane perspective, one switch is active & the other is standby.

 

Key Features:

  • One active, redundant control plane
  • One configuration
  • Single point of management
  • Two active data planes
  • The standby switch is essentially a set of additional line cards
  • Control messages & data frames flow between the active and standby via the VSL (which can be seen as a backplane extension)
  • A special encapsulation on VSL frames carries additional information

 

 

VSL:

  • The VSL is a dedicated link which bonds the two chassis together into a single logical node.
  • This link is responsible for transferring both data & control traffic between the peer chassis.
  • The VSL is formed as a Cisco EtherChannel interface which carries VSS control traffic & normal data traffic.
  • Up to 8 physical ports can be bundled into the EtherChannel that forms the VSL.

Sample Configuration:

 

VSS Configuration

Switch A

! Setting up the Domain & the priority
switch virtual domain 10
 switch 1
 switch 1 priority 110
 switch 2 priority 100
 exit

! Setting Up the VSL link
interface port-channel 1
 no shut
 description #VSL to switch 2#
 switch virtual link 1

interface range T1/1
 no shutdown
 channel-group 1 mode on

interface range T1/2
 no shutdown
 channel-group 1 mode on

! Enabling the VSS
switch convert mode virtual
switch accept mode virtual

Switch B

! Setting up the Domain & the priority
switch virtual domain 10
 switch 2
 switch 1 priority 110
 switch 2 priority 100
 exit

! Setting Up the VSL link
interface port-channel 2
 no shut
 description #VSL to switch 1#
 switch virtual link 2

interface range T1/1
 no shutdown
 channel-group 2 mode on

interface range T1/2
 no shutdown
 channel-group 2 mode on

! Enabling the VSS
switch convert mode virtual
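
After the conversion completes, the VSS state can be verified with the commands below (a minimal sketch; the output varies by supervisor and IOS version):

! Verify the virtual switch domain, switch numbers and mode
show switch virtual

! Verify which chassis is active and which is standby, and their priorities
show switch virtual role

! Verify the VSL and its member links
show switch virtual link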

 

 

Hyper Convergence

What is Hyper-convergence?

Hyper-convergence is a type of infrastructure system with a software-centric architecture that tightly integrates compute, storage, networking, virtualization & other technologies in a hardware box supported by a single vendor.

  • Hyper-convergence was born out of the converged infrastructure concept of products that include storage, compute & networking in one box.
  • Systems that fall under the hyper-convergence category also have the hypervisor built in and are used specifically in virtual environments.
  • Storage & compute are typically managed separately in a virtual environment, but hyper-convergence provides simplification by allowing everything to be managed through one plug-in.

How do hyper-converged systems differ from converged systems?

Hyper-converged systems take the concept of convergence to the next level.

  • Converged systems are separate components engineered to work together.

     

     

     

    • Storage, networking, compute & virtualization components are integrated together to provide a converged infrastructure.
    • Storage & the management of computing power are handled independently of the virtual environment.
  •  Hyper-converged systems are modular systems designed to scale out by adding additional modules.

     

     

     

    • These systems are mainly designed around storage & compute on a single x86 server chassis interconnected by 10 Gb Ethernet.
    • From a physical perspective, it is like a server with a bunch of storage.

Key Benefits of Hyper-convergence

  • Simple design.
  • Decreased administrative overhead.
  • Simplified vendor management since single vendor provides the complete solution.

What can Hyperconvergence do for you?

  • Hyperconvergence is based on the Software-Defined Data Center (SDDC). Since it is based on software, it provides the flexibility & agility that the business demands from IT.

     

     

    • As it is software driven, any new features are made available in software releases and can be easily applied without any change or upgrade of the hardware.
  • Hyperconvergence solutions combine flash & spinning disk for storage, which offers better capacity & performance and helps to eliminate resource islands (underutilized resources).
  • Hyperconvergence solutions offer a single-vendor approach to procurement, implementation & operation.

    • All components (compute, storage, network & backup) are combined into a single shared resource pool with hypervisor technology.
    • The software layer which forms the base of the hyperconvergence technology is designed to accommodate the hardware failures that eventually happen & cannot be prevented.
    • It offers a single, centralized interface for managing all the resources across multiple nodes.
  • Hyperconvergence solutions go far beyond the servers & storage that traditional or legacy solutions offer. They provide the below services, which a legacy solution does not:

     

     

    • Data protection products which includes backup & replication
    • De-duplication appliances
    • Wide-area network optimization appliances
    • Solid State Drive (SSD) arrays
    • SSD cache arrays
    • Replication Appliances & Software

Who are the vendors?

  • Nutanix
  • Simplivity – Omnicube
  • Scale Computing

Interview Questions for Network Engineers

  1. Which slots can be used for the Supervisor Engine 720 in a Catalyst 6509?

Ans – Slots 5 or 6

  2. Which slots can be used for the Supervisor Engine 720 in a Catalyst 6513?

Ans – Slots 7 or 8

  3. What is the difference between Sup720 & Sup32?

     

    1. The Sup720 can forward up to 400 Mpps & the Sup32 can forward up to 15 Mpps.
    2. The Sup720 supports Distributed Cisco Express Forwarding, which is not possible with the Sup32.
  4. How many Cisco Catalyst 3750-X Series Switches can make up a Cisco StackPower stack?

           Ans – Up to four switches can become part of the same Cisco StackPower stack in a ring topology

  5. What are the different OSPF LSA types?

LSA 1 (Router LSA)

Generated by all routers in an area to describe their directly attached links (Intra-area routes). These do not leave the area.

LSA 2 (Network LSA)

Generated by the DR of a broadcast or nonbroadcast segment to describe the neighbors connected to the segment. These do not leave the area.

LSA 3 (Summary LSA)

Generated by the ABR to describe a route to neighbors outside the area. (Inter-area routes)

LSA 4 (Summary LSA)

Generated by the ABR to describe a route to an ASBR to neighbors outside the area.

LSA 5 (External LSA)

Generated by ASBR to describe routes redistributed into the area. These routes appear as E1 or E2 in the routing table. E2 (default) uses a static cost throughout the OSPF domain as it only takes the cost into account that is reported at redistribution. E1 uses a cumulative cost of the cost reported into the OSPF domain at redistribution plus the local cost to the ASBR.

LSA 6 (Multicast LSA)

Not supported on Cisco routers.

LSA 7 (NSSA External LSA)

Generated by an ASBR inside a NSSA to describe routes redistributed into the NSSA. LSA 7 is translated into LSA 5 as it leaves the NSSA by the ABR. These routes appear as N1 or N2 in the IP routing table inside the NSSA. Much like LSA 5, N2 is a static cost while N1 is a cumulative cost that includes the cost to the ASBR.

  6. What are the criteria or parameters which are checked for establishing an OSPF neighbor relationship? (A minimal interface configuration sketch follows the list.)
  • Subnet mask used on the subnet
  • Subnet number (as derived using the subnet mask and each router's interface IP address)
  • Hello interval
  • Dead interval
  • OSPF area ID
  • Must pass authentication checks (if used)
  • Value of the stub area flag
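
As an illustration, the minimal IOS sketch below shows where those parameters live; the interface name, addresses, process ID and key are assumptions.

interface GigabitEthernet0/1
 ip address 10.1.1.1 255.255.255.0       ! subnet number & mask must match the neighbor
 ip ospf hello-interval 10               ! hello interval must match
 ip ospf dead-interval 40                ! dead interval must match
 ip ospf authentication message-digest   ! authentication type & key must match (if used)
 ip ospf message-digest-key 1 md5 MyKey
!
router ospf 1
 network 10.1.1.0 0.0.0.255 area 0       ! area ID must match on the shared segment
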
  7. Why does OSPF require all traffic between non-backbone areas to pass through a backbone area (area 0)? [Ans courtesy – Jeff Doyle]

The first concept is this:

Every link state router floods information about itself, its links, and its neighbors to every other router. From this flooded information each router builds an identical link state database. Each router then independently runs a shortest-path-first calculation on its database – a local calculation using distributed information – to derive a shortest-path tree. This tree is a sort of map of the shortest path to every other router.

One of the advantages of link state protocols is that the link state database provides a “view” of the entire network, preventing most routing loops. This is in contrast to distance vector protocols, in which route information is passed hop-by-hop through the network and a calculation is performed at each hop – a distributed calculation using local information. Each router along a route is dependent on the router before it to perform its calculations correctly and then correctly pass along the results. When a router advertises the prefixes it learns to its neighbors it’s basically saying, “I know how to reach these destinations.” And because each distance vector router knows only what its neighbors tell it, and has no “view” of the network beyond the neighbors, the protocol is vulnerable to loops.

The second concept is this:

When link state domains grow large, the flooding and the resulting size of the link state database becomes a scaling problem. The problem is remedied by breaking the routing domain into areas: That first concept is modified so that flooding occurs only within the boundaries of an area, and the resulting link state database contains only information from the routers in the area.  This, in turn, means that each router’s calculated shortest-path tree only describes the path to other routers within the area.

The third concept is this:

OSPF areas are connected by one or more Area Border Routers (the other main link state protocol, IS-IS, connects areas somewhat differently) which maintain a separate link state database and calculate a separate shortest-path tree for each of their connected areas. So an ABR by definition is a member of two or more areas. It advertises the prefixes it learns in one area to its other areas by flooding Type 3 LSAs into the areas that basically say, “I know how to reach these destinations.”

Wait a minute – what that last concept described is not link state, it’s distance vector. The routers in an area cannot “see” past the ABR, and rely on the ABR to correctly tell them what prefixes it can reach. The SPF calculation within an area derives a shortest-path tree that depicts all prefixes beyond the ABR as leaf subnets connected to the ABR at some specified cost.

And that leads us to the answer to the question:

Because inter-area OSPF is distance vector, it is vulnerable to routing loops. It avoids loops by mandating a loop-free inter-area topology, in which traffic from one area can only reach another area through area 0.

  8. What is Cisco Express Forwarding?

Ans: Cisco Express Forwarding (CEF) is a packet-switching technique that has been the default on many of Cisco's router lines over the last ten years. It provides the ability to switch packets through a device in a very quick, efficient way while also keeping the load on the router's processor low. This way the route processor can be tasked with other duties that require larger amounts of processor time (Quality of Service, encryption, etc.).

  9. What is the purpose of the passive-interface command?

Ans: The passive-interface command is used to control the advertisement of routing information. It enables the suppression of routing updates (or hellos, depending on the protocol) over some interfaces while allowing updates to be exchanged normally over other interfaces.
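
A minimal IOS sketch, with assumed process IDs and interface names:

router ospf 1
 ! Suppress OSPF hellos towards the LAN segment; its subnet is still advertised
 passive-interface GigabitEthernet0/2
!
router eigrp 100
 ! Alternatively, make all interfaces passive and open up only the ones that need neighbors
 passive-interface default
 no passive-interface GigabitEthernet0/1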

  10. What are FECN & BECN?

Ans: FECN – Forward Explicit Congestion Notification.

BECN – Backward Explicit Congestion Notification.

In a frame relay network FECN is a header bit transmitted by the source (sending) terminal requesting that the destination (receiving) terminal slow down its requests for data.

BECN is a header bit transmitted by the destination terminal requesting that the source terminal send data more slowly.

FECN and BECN are intended to minimize the possibility that packets will be discarded (and thus have to be resent) when more packets arrive than can be handled.

  11. What is the difference between HSRP & VRRP?

Ans: HSRP and VRRP are both first-hop redundancy protocols that overcome the problem of a single gateway failure. They work with many similarities, such as redundancy and load balancing, but with the significant differences below. (A minimal configuration sketch follows the list.)

  • HSRP (Hot Standby Router Protocol) is a Cisco proprietary protocol, whereas VRRP (Virtual Router Redundancy Protocol) is an open, standards-based protocol.
  • HSRP uses a default hello timer of 3 seconds with a hold timer of 10 seconds, whereas VRRP uses a default hello timer of 1 second with a hold timer of 3 seconds.
  • In HSRP, one router is active, one is standby and the rest are in the listen state if more than 3 routers are in the group. In VRRP the active router is called the master router, and all other routers in the group are in the backup state.
  • VRRP supports preemption by default, whereas HSRP requires it to be configured.
  • In HSRP, with default priorities the highest interface IP address wins the election, whereas in VRRP, if a router uses the virtual IP as its interface IP, that router becomes the master when the priorities are at default.
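
A minimal IOS sketch of both protocols; the VLANs, group numbers and addresses are assumptions.

! HSRP on one SVI
interface Vlan10
 ip address 10.1.10.2 255.255.255.0
 standby 10 ip 10.1.10.1
 standby 10 priority 110
 standby 10 preempt               ! preemption must be enabled explicitly in HSRP
!
! VRRP on another SVI
interface Vlan20
 ip address 10.1.20.2 255.255.255.0
 vrrp 20 ip 10.1.20.1
 vrrp 20 priority 110             ! preemption is enabled by default in VRRP
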
  12. What is BGP synchronization?

Ans: The BGP synchronization rule states that if an AS provides transit service to another AS, BGP should not advertise a route until all of the routers within the AS have learned about the route via an IGP.

VXLAN Basics

VXLAN – Virtual Extensible Local Area Networks

What is VXLAN?

  • Virtual Extensible LAN (VXLAN) is a network virtualization technology that addresses the scalability problems of data center networks.
  • VXLAN is an L2 overlay over an L3 network. It uses a VLAN-like encapsulation technique to encapsulate MAC-based OSI Layer 2 Ethernet frames within Layer 3 UDP packets.
  • Each overlay network is known as a VXLAN segment and is identified by a unique 24-bit segment ID called a VXLAN Network Identifier (VNI).
  • Only virtual machines on the same VNI are allowed to communicate with each other. Virtual machines are identified uniquely by the combination of their MAC address and VNI.
  • The VXLAN technology was created by Cisco, VMware, Citrix and Red Hat.

Why VXLAN is required?

VXLAN technology was developed to address the problems below, which are frequently encountered in the data center & in cloud computing networks.

Limitation of 4094 broadcast domains (VLANs)

  • The number of VLANs supported in a traditional network is limited to 4094.
  • Most cloud service providers face this VLAN shortcoming, since multiple companies & tenants require unique VLAN IDs & the segmentation of each tenant's resources quickly consumes the available VLANs. Scalability becomes an issue.
  • VXLAN addresses this problem by increasing the traditional VLAN limit from 4094 to 16 million.
  • It uses a 24-bit segment identifier to scale beyond the 4094-VLAN limitation.

Layer 2 extensions across Data Center & Mobility

  • VXLAN addresses Layer 2 extension between different data center sites that must share the same logical networks.

    • Extending Layer 2 domains across a Layer 3 network is traditionally not possible; the same VLAN cannot be extended beyond a Layer 3 boundary.
    • VXLAN addresses this by binding two separate Layer 2 domains and making them look like one.
  • VXLAN supports long-distance vMotion & High Availability (HA) across data centers.
  • VXLAN also addresses the problem of scalability by expanding the L2 network across data centers while maintaining the same network.

Key Benefits

  • It does not depend on STP to converge the topology. Instead, Layer 3 routing protocols are used. (A minimal configuration sketch follows this list.)
  • No links within the fabric are blocked. All links are active and can carry traffic.
  • The fabric can load-balance traffic across all active links, ensuring no bandwidth sits idle.
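
As an illustration, the minimal Cisco NX-OS flood-and-learn sketch below maps a VLAN to a VNI on a VTEP; the VLAN/VNI numbers, loopback and multicast group are assumptions, and EVPN-based control planes configure this differently.

feature nv overlay                       ! enable the VXLAN (NVE) function
feature vn-segment-vlan-based            ! allow VLAN-to-VNI mapping

vlan 10
 vn-segment 10010                        ! map VLAN 10 to VNI 10010

interface nve1
 no shutdown
 source-interface loopback0              ! VTEP address used for encapsulation
 member vni 10010 mc-group 239.1.1.10    ! BUM traffic carried over multicast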

VXLAN Use Cases – Summary

  • Cloud service providers or data centers which require more than 4094 VLANs for network segmentation.
  • Stretching Layer 2 domains across data centers in order to accommodate growth without breaking the Layer 2 adjacency requirement of services & applications.

 

 

 

 

vPC – Virtual Port Channel

Virtual Port Channel – vPC

Nexus Platform:

The Nexus platform provides several advantages, including the fact that it was built to be more modular than the Catalyst platform.
This means that processes such as OSPF, EIGRP, HSRP and LACP can be started, stopped and upgraded without affecting the kernel. It also provides a path to perform In-Service Software Upgrades (ISSU), which allow upgrading a switch's software without incurring downtime on the network.

The Nexus platform switches are considered data center switches. In this blog I want to focus on the technology known as vPC.
vPC stands for Virtual Port Channel and is a way to spread link aggregation across multiple switches.

Link aggregation is a way to combine multiple ports together into a single logical port. For example, if we have 2 * 10 Gbps ports, we can combine them to form a 20 Gbps logical port. This gives better throughput and redundancy in case one of the ports goes down.

Cisco calls this a port channel or EtherChannel.

The main limitation is that a traditional port channel has to be contained on a single switch.

To get redundancy when interconnecting switches as shown in the diagram below, & also to avoid switching loops, we need to rely on the Spanning Tree Protocol.

 

 

 

vPC & elimination of Spanning Tree

vPC allows us to build a port channel that spans two different switches.

This means that both switches know about the MAC addresses being seen on both ports and can effectively decide what to listen to and what not to listen to.

Cisco has tried to take care of this in the past with technologies like VSS (Virtual Switching System) and the switch stack on the 3750. The problem with these technologies is that the switches act as a single unit with shared resources. If we need to upgrade one switch, it requires an upgrade to all.

vPC provides redundancy in that we have two switches that are independent and can be taken down and upgraded independently, but at the same time they share port channel information and can eliminate the need for the Spanning Tree Protocol.

vPC Configuration – Example

For this example I am using a pair of Nexus 5K switches for the vPC setup & configuration.

Step -1:

The Nexus switches require the needed features to be enabled.

In this case we need vPC and LACP on both switches [Switch#1 & Switch#2].

 

feature lacp

feature vpc

 

Step -2:

Set up the management interfaces on each Nexus 5K switch.

These will be used as the vPC keepalive link. We can also connect both switches with a crossover cable & use it as the keepalive link, as shown in the diagram below.

The switches require L3 reachability between them to exchange the keepalive messages.

 

On switch#1:

interface mgmt0

ip address 192.168.100.5/30

 

On switch#2:

interface mgmt0

 ip address 192.168.100.6/30

 

Make sure that both switches are able to reach each other via ping to confirm network reachability.

 

Step -3:

The next step is to set up a vPC domain using the management address of each management interface.

 

On Switch#1:

vpc domain 10

peer-keepalive destination 192.168.100.6 source 192.168.100.5 vrf management

 

On Switch#2:

vpc domain 10

peer-keepalive destination 192.168.100.5 source 192.168.100.6 vrf management

 

 

 

 

Step -4:

 

The next step is to configure the peer link that will carry the data.

This peer link should have adequate bandwidth & also offer redundancy.

In this example we are using 2 * 10 Gbps SFP interfaces, which offer an aggregated bandwidth of 20 Gbps.

 

We are using interfaces 1/1 & 1/2 on both switches.

The port channel number used is 57.

 

On Switch#1

 

interface port-channel57

 description ## vPc to Switch#2 ##

 switchport mode trunk

 spanning-tree port type network

 

interface Ethernet1/1

 description ##  To Switch#2 E1/1 ###

 switchport mode trunk

 channel-group 57 mode active

 

interface Ethernet1/2

 description ##  To Switch#2 E1/2 ###

 switchport mode trunk

channel-group 57 mode active

 

 

 

On Switch#2

 

interface port-channel57

 description ## vPc to Switch#1 ##

 switchport mode trunk

 spanning-tree port type network

 

interface Ethernet1/1

 description ##  To Switch#1 E1/1 ###

 switchport mode trunk

 channel-group 57 mode active

 

interface Ethernet1/2

 description ##  To Switch#1 E1/2 ###

 switchport mode trunk

channel-group 57 mode active

 

 

The Port channel 57 needs to be defined as peer link using the below command.

 

interface port-channel57

 vpc peer-link

 

Perform a show vpc brief to see the status of the port channel:

 

Switch#1# show vpc brief
Legend:
                (*) – local vPC is down, forwarding via vPC peer-link
 
vPC domain id                   : 10
Peer status                     : peer adjacency formed ok
vPC keep-alive status           : peer is alive
Configuration consistency status: success
Per-vlan consistency status     : success
Type-2 consistency status       : success
vPC role                        : primary
Number of vPCs configured       : 0
Peer Gateway                    : Disabled
Dual-active excluded VLANs      : –
Graceful Consistency Check      : Enabled

 

vPC Peer-link status
———————————————————————
id   Port   Status Active vlans
—   —-   —— ————————————————–
1    Po57 up     1

 

vPC Port Setup

The next step is to set up a port channel on Switch#3.
Switch#3 in this example is a Catalyst 3750 with two Ten Gigabit ports, T1/0/1 & T1/0/2.

Ports T1/0/1 and T1/0/2 on the 3750 are connected to port Ethernet 1/10 on Switch#1 and Switch#2 respectively, and are combined into a single port channel towards both switches.

 

On Switch#3

interface Port-channel100
 description ## To Switch#1 & Switch#2 ##
 switchport trunk encapsulation dot1q
 switchport mode trunk
 
interface TenGigabitEthernet1/0/1
 description ## To Switch#1 Ethernet 1/10 ##
 switchport trunk encapsulation dot1q
 switchport mode trunk
 channel-group 100 mode active
 
interface TenGigabitEthernet1/0/2
 description ## To Switch#2 Ethernet 1/10 ##
 switchport trunk encapsulation dot1q
 switchport mode trunk
 channel-group 100 mode active

Then on Switch#1 & Switch#2 configure a port channel in a vPC. The configuration is the same on both switches, & with this configuration I have allowed only VLANs 1, 10 & 20 on the trunk port.

interface port-channel101
 description ## To Switch#3 ##
 switchport mode trunk
 vpc 101
 switchport trunk allowed vlan 1,10,20
 spanning-tree port type edge trunk
 spanning-tree bpduguard enable

interface Ethernet1/10
 description ## To Switch#3 ##
 switchport mode trunk
 switchport trunk allowed vlan 1,10,20
 channel-group 101 mode active
 
Once the connectivity is set up, the port channel comes up & the status of the vPC can be confirmed as shown below.
After this we have 20 Gbps of port channel connectivity to Switch#3.
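
For reference, the state of the vPC can be checked with the commands below (a minimal sketch; the port channel numbers follow this example):

Switch#1# show vpc                                   ! peer status, keepalive status & per-vPC state
Switch#1# show vpc consistency-parameters global     ! confirm type-1/type-2 parameters match on both peers
Switch#1# show port-channel summary                  ! confirm Po57 (peer link) and Po101 (vPC 101) are up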

Cisco OTV

Cisco OTV – Overlay Transport Virtualization

 Data Center Design with Cisco OTV 

1. What is Cisco OTV?

  • Overlay Transport Virtualization (OTV) is a Layer 2 technology for providing data center interconnection.
  • It provides L2 extension capabilities between different data centers.
  • With OTV, a VLAN & IP subnet can be extended across data centers. It helps in having the same IP address range across different data centers.
  • OTV only requires IP connectivity between the remote data center sites and does not require any changes to the existing design. Currently it is supported only on Nexus 7000 series switches with M1-Series line cards.
  • OTV helps in achieving workload mobility.
  • Without virtualization, we can add resources in the other data center if the existing data center runs out of space.
  • With the virtualization concept of workload mobility, virtual machines can be moved across data centers & maintain the same IP subnet & VLAN.

 

2. How Cisco OTV works?

  • OTV uses the concept of MAC routing, a.k.a. “MAC-in-IP routing”.
  • OTV works on the concept of MAC routing, which means a control-plane protocol is used to exchange MAC reachability information between the network devices providing the LAN extension functionality.
  • The MAC-in-IP routing is done by encapsulating an Ethernet frame in an IP packet before forwarding it across the transport IP network.
  • The action of encapsulating the traffic between the OTV devices creates an overlay between the data center sites.
  • OTV is deployed on devices at the edge of the data center sites, called OTV edge devices.
  • These edge devices perform typical L2 learning & forwarding functions on their internal interfaces and perform IP-based virtualization functions on the outside interface for traffic destined between the two DCs via the overlay interface. They basically exchange the MAC addresses learned between the DCs.

3. How does OTV behave in a CoB scenario? What are the benefits of OTV for CoB?

     A. Without Virtualization

  • OTV helps to have the same IP address segment available in the CoB site.
  • It helps the servers to be available in each data center while maintaining the same LAN segment.
  • Physical server migration from one data center to the other can be achieved without changing the IP address of the server & without any change to the application.
  • Microsoft Cluster servers, which require the same L2 network connectivity, can be placed in different data centers using the benefits of OTV.

     B. With Virtualization

  • A virtualization solution with SRM functionality takes advantage of OTV to bring the server back up in the CoB site while maintaining the same IP address.
  • A virtualization solution with the vMotion feature helps in live migration of a VM from one data center to the other while maintaining the same IP address.

4. What are the requirements for deploying Cisco OTV?

  • Hardware – Cisco Nexus 7000 series switch.

     

    • Each data center requires a Cisco Nexus 7000 series switch to have the OTV feature enabled.
    • M1-Series line cards
    • IOS Requirement: NX-OS 5.0(3) & above
  • License – Transport Service Licenses for the OTV feature.

     

    • Enterprise License (N7K-LAN1K9) – We have this license in our existing Nexus 7010.
    • Transport Services License (N7K-TRS1K9)
    • LAN_ADVANCED_SERVICES (N7K-ADV1K9)
  • Topology – L2 Data center topology

5. What is the specs/scalability/capacity of Cisco OTV technology?

  • Cisco OTV is scalable up to 6 sites with 2 devices at each location.
  • A maximum of 256 VLANs can be extended.
  • Distance limitation – distance/latency is not a constraint for OTV.

6. What are the commands/configurations for deploying Cisco OTV?

  • Enabling OTV requires only a few commands on each of the Nexus 7000 series switches.
  • The below commands are required to enable OTV on the Cisco Nexus 7000 series switches. The example below extends VLANs 5 – 10 across the data centers.
! Configure the physical interface that OTV uses to reach the DCI transport infrastructure
interface ethernet 2/1
 ip address 192.0.2.1/24
 ip igmp version 3
 no shutdown

! Configure the VLANs that will be extended on the overlay network and the site VLAN
vlan 2,5-10

! Configure OTV, including the VLANs that will be extended
feature otv
otv site-vlan 2
otv site-identifier 256

interface Overlay1
 otv control-group 239.1.1.1
 otv data-group 232.1.1.0/28
 otv join-interface ethernet 2/1
 ! Extend the configured VLANs
 otv extend-vlan 5-10
 no shutdown
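
Once the overlay is configured on both sites, its state can be verified with the commands below (a minimal sketch; the output varies by NX-OS release):

! Verify the overlay, the adjacencies with the remote edge devices & the extended VLANs
show otv overlay 1
show otv adjacency
show otv vlan
show otv route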

7. What are the other benefits of Cisco OTV, besides CoB?

  • OTV is designed for data center interconnection & for the availability of services across data centers.
  • It extends the same IP segment across multiple data center locations.