vPC – Virtual Port Channel

Nexus Platform:

The Nexus platform provides several advantages, one of which is that it was built to be more modular than the Catalyst platform.
This means that processes such as OSPF, EIGRP, HSRP, and LACP can be started, stopped, and upgraded individually without affecting the kernel. It also provides In-Service Software Upgrade (ISSU), which allows a switch's kernel to be upgraded without incurring downtime on the network.
 
The Nexus switches are designed for the data center. In this blog I want to focus on a technology known as vPC, which stands for Virtual Port Channel and is a way to spread link aggregation across multiple switches.

Link Aggregation is a way to combine multiple physical ports into a single logical port. For example, two 10 Gbps ports can be combined to form a single 20 Gbps logical port. This gives better throughput and provides redundancy in case one of the ports goes down.

Cisco calls this a port-channel or EtherChannel.

The main limitation is that a traditional port-channel has to be contained within a single switch.
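
For reference, a traditional single-switch port-channel is built by placing each member port in the same channel group. The sketch below uses illustrative interface and group numbers:

interface Ethernet1/1
 channel-group 10 mode active
interface Ethernet1/2
 channel-group 10 mode active

Both physical ports are bundled into the single logical interface port-channel10, and LACP (mode active) negotiates the bundle with the far end.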

To get redundancy when interconnecting switches, and to avoid switching loops, we have to rely on Spanning Tree Protocol, which blocks the redundant links.

vPC & Elimination of Spanning Tree

vPC allows us to build a port-channel that spans two different switches.

This means that both switches know about the MAC addresses seen on the member ports and can decide which traffic each of them should forward and which to ignore.

Cisco has addressed this in the past with technologies like VSS (Virtual Switching System) and switch stacking on the 3750. The problem with these technologies is that the switches act as a single unit with shared resources: if we need to upgrade one switch, we have to upgrade them all.

vPC, by contrast, provides redundancy with two switches that remain independent and can be taken down and upgraded separately, while still sharing port-channel information. Because both links in a vPC forward traffic, it eliminates the need for Spanning Tree Protocol to block redundant links (STP still runs as a safeguard, but no vPC link is blocked).

vPC Configuration – Example

For this example I am using a pair of Nexus 5K switches for the vPC setup & configuration.

Step 1:

On the Nexus switches, the features we need have to be enabled explicitly. In this case we need vPC and LACP on both switches [Switch#1 & Switch#2]:

feature lacp
feature vpc
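
As a quick check before moving on, the feature state can be verified on each switch:

show feature | include lacp
show feature | include vpc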

 

Step 2:

Set up the management interfaces on each Nexus 5K switch; these will carry the vPC peer-keepalive link. A crossover cable directly connecting the two switches can also be used as the keepalive link. Either way, the switches need L3 reachability between them so that the keepalive messages can be exchanged.

 

On switch#1:

interface mgmt0
 ip address 192.168.100.5/30

On switch#2:

interface mgmt0
 ip address 192.168.100.6/30

 

Make sure both switches can reach each other via ping before proceeding.
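
On the Nexus 5K the mgmt0 interface lives in the dedicated management VRF, so the VRF has to be specified when testing. For example, from Switch#1:

ping 192.168.100.6 vrf management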

 

Step 3:

The next step is to set up a vPC domain and point the peer-keepalive at the management address of the peer switch.

 

On Switch#1:

vpc domain 10
 peer-keepalive destination 192.168.100.6 source 192.168.100.5 vrf management

On Switch#2:

vpc domain 10
 peer-keepalive destination 192.168.100.5 source 192.168.100.6 vrf management
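
Once both sides are configured, the keepalive can be verified with the standard NX-OS status command; it should report that the peer is alive before we build the peer link:

show vpc peer-keepalive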


Step 4:

 

The next step is to configure the peer link that will carry data between the two switches. The peer link should have adequate bandwidth and offer redundancy; in this example we are using 2 * 10 Gbps SFP+ interfaces, which gives an aggregate bandwidth of 20 Gbps.

 

We are using interfaces 1/1 & 1/2 on both switches, and port-channel number 57.

 

On Switch#1:

interface port-channel57
 description ## vPC to Switch#2 ##
 switchport mode trunk
 spanning-tree port type network

interface Ethernet1/1
 description ## To Switch#2 E1/1 ##
 switchport mode trunk
 channel-group 57 mode active

interface Ethernet1/2
 description ## To Switch#2 E1/2 ##
 switchport mode trunk
 channel-group 57 mode active

On Switch#2:

interface port-channel57
 description ## vPC to Switch#1 ##
 switchport mode trunk
 spanning-tree port type network

interface Ethernet1/1
 description ## To Switch#1 E1/1 ##
 switchport mode trunk
 channel-group 57 mode active

interface Ethernet1/2
 description ## To Switch#1 E1/2 ##
 switchport mode trunk
 channel-group 57 mode active

Port-channel 57 now needs to be defined as the vPC peer link, using the command below on both switches:

interface port-channel57
 vpc peer-link

Run show vpc brief to check the status of the vPC peering:

Switch#1# show vpc brief
Legend:
                (*) - local vPC is down, forwarding via vPC peer-link

vPC domain id                     : 10
Peer status                       : peer adjacency formed ok
vPC keep-alive status             : peer is alive
Configuration consistency status  : success
Per-vlan consistency status       : success
Type-2 consistency status         : success
vPC role                          : primary
Number of vPCs configured         : 0
Peer Gateway                      : Disabled
Dual-active excluded VLANs        : -
Graceful Consistency Check        : Enabled

vPC Peer-link status
---------------------------------------------------------------------
id   Port   Status Active vlans
--   ----   ------ --------------------------------------------------
1    Po57   up     1
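
The state of the individual member links can also be confirmed with the usual port-channel summary:

show port-channel summary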

 

vPC Port Setup

The next step is to set up a port-channel on Switch#3. In this example Switch#3 is a Catalyst 3750 with two 10 Gbps ports, T1/0/1 and T1/0/2. Port T1/0/1 connects to Ethernet 1/10 on Switch#1 and T1/0/2 connects to Ethernet 1/10 on Switch#2, and both are bundled into a single port-channel. From the 3750's point of view this is an ordinary LACP port-channel; it is unaware that the other end is two separate switches.

 

On Switch#3:

interface Port-channel100
 description ## To Switch#1 & Switch#2 ##
 switchport trunk encapsulation dot1q
 switchport mode trunk
 
interface TenGigabitEthernet1/0/1
 description ## To Switch#1 Ethernet 1/10 ##
 switchport trunk encapsulation dot1q
 switchport mode trunk
 channel-group 100 mode active
 
interface TenGigabitEthernet1/0/2
 description ## To Switch#2 Ethernet 1/10 ##
 switchport trunk encapsulation dot1q
 switchport mode trunk
 channel-group 100 mode active
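
On the 3750 side, the bundle can be verified with the usual IOS EtherChannel summary, which should show Po100 with both TenGig members bundled:

show etherchannel summary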

Then, on Switch#1 & Switch#2, configure a port-channel and bind it to the vPC. The configuration is identical on both switches; the vpc 101 command under the port-channel interface is what ties it into the vPC domain. With this configuration I have allowed only VLANs 1, 10 & 20 on the trunk.

interface port-channel101
 description ## To Switch#3 ##
 switchport mode trunk
 switchport trunk allowed vlan 1,10,20
 vpc 101
 spanning-tree port type edge trunk
 spanning-tree bpduguard enable

interface Ethernet1/10
 description ## To Switch#3 ##
 switchport mode trunk
 switchport trunk allowed vlan 1,10,20
 channel-group 101 mode active

Once the connectivity is in place the port-channel comes up and the status of the vPC can be confirmed. After this we have 20 Gbps of port-channel connectivity to Switch#3.
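
As a final check, show vpc on either Nexus should now list vPC 101 with a status of up:

show vpc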

Cisco OTV – Overlay Transport Virtualization

Data Center Design with Cisco OTV

1. What is Cisco OTV?

  • Overlay Transport Virtualization (OTV) is a Layer 2 technology for Data Center Interconnection (DCI).
  • It provides L2 extension capabilities between different data centers.
  • With OTV, a VLAN and its IP subnet can be extended across data centers, which makes it possible to use the same IP address range in different data centers.
  • OTV only requires IP connectivity between the remote data center sites and does not require any changes to the existing design. It is currently supported only on Nexus 7000 series switches with M1-Series line cards.
  • OTV helps in achieving workload mobility.
  • Without virtualization, we can add resources in another data center when the existing data center runs out of space.
  • With virtualization and workload mobility, Virtual Machines can be moved across data centers while keeping the same IP subnet and VLAN.

 

2. How does Cisco OTV work?

  • OTV works on the concept of "MAC routing," also called MAC-in-IP routing: a control-plane protocol is used to exchange MAC reachability information between the network devices providing the LAN extension.
  • The MAC-in-IP routing is done by encapsulating an Ethernet frame in an IP packet before forwarding it across the transport IP network.
  • The act of encapsulating the traffic between the OTV devices creates an overlay between the data center sites.
  • OTV is deployed on devices at the edge of the data center sites, called OTV Edge Devices.
  • These edge devices perform typical L2 learning and forwarding on their internal interfaces, and perform IP-based virtualization on the outside interfaces for traffic destined between the two DCs via the Overlay interface. In essence, they exchange the MAC addresses learned in each DC.

3. How does OTV behave in a CoB (Continuity of Business) scenario? What are the benefits of OTV for CoB?

     A. Without Virtualization

  • OTV makes the same IP address segment available at the CoB site.
  • It allows servers to be available in each data center while staying on the same LAN segment.
  • A physical server can be migrated from one data center to the other without changing its IP address and without any change to the application.
  • Microsoft Cluster servers, which require L2 adjacency, can be placed in different data centers thanks to OTV.

     B. With Virtualization

  • A virtualization solution with SRM (Site Recovery Manager) functionality takes advantage of OTV to bring a server back up at the CoB site with the same IP address.
  • A virtualization solution with the vMotion feature allows live migration of a VM from one data center to the other while keeping the same IP address.

4. What are the requirements for deploying Cisco OTV?

  • Hardware – Cisco Nexus 7000 series switches.
    • Each data center needs a Cisco Nexus 7000 series switch to run the OTV feature.
    • M1-Series line cards.
    • Software requirement: NX-OS 5.0(3) or later.
  • License – Transport Services license for the OTV feature (a quick check of installed licenses is shown after this list).
    • Enterprise License (N7K-LAN1K9) – we have this license on our existing Nexus 7010.
    • Transport Services License (N7K-TRS1K9).
    • LAN_ADVANCED_SERVICES (N7K-ADV1K9).
  • Topology – L2 data center topology.
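
A quick way to confirm which licenses are installed on a Nexus 7000 is the standard NX-OS license command:

show license usage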

5. What are the scalability limits of Cisco OTV?

  • Cisco OTV scales up to 6 sites, with 2 devices at each location.
  • A maximum of 256 VLANs can be extended.
  • Distance limitation – distance/latency is not a constraint for OTV.

6. What are the commands/configurations for deploying Cisco OTV?

  • Enabling OTV requires only a few commands on each of the Nexus 7000 series switches.
  • The commands below enable OTV on a Cisco Nexus 7000 series switch; this example extends VLANs 5 – 10 across the data centers.
! Configure the physical interface that OTV uses to reach the DCI transport infrastructure
interface ethernet 2/1
 ip address 192.0.2.1/24
 ip igmp version 3
 no shutdown

! Configure the VLANs that will be extended on the overlay network, plus the site VLAN
vlan 2,5-10

! Enable OTV and configure the overlay, including the VLANs that will be extended
feature otv
otv site-vlan 2
otv site-identifier 256

interface Overlay1
 otv control-group 239.1.1.1
 otv data-group 232.1.1.0/28
 otv join-interface ethernet 2/1
 otv extend-vlan 5-10
 no shutdown
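
Once both sites are configured, the overlay, the OTV adjacencies, and the extended VLANs can be verified with the standard OTV show commands:

show otv overlay 1
show otv adjacency
show otv vlan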

7. Are there any other benefits of Cisco OTV besides CoB?

  • OTV is designed for data center interconnection and for making services available across data centers.
  • It extends the same IP segment across multiple data center locations.