Spoof Guard

  1. One of the security features offered by NSX is Spoof Guard.
  2. The Spoof Guard feature protects against IP address spoofing, preventing malicious attacks.
  3. Spoof Guard trusts the IP addresses reported to the NSX Manager by vCenter, which collects them through VMware Tools.
  4. In case of a spoofed IP or a violation, Spoof Guard blocks the traffic on that particular vNIC (it prevents the virtual machine's vNIC from accessing the network).
  5. It functions independently of the NSX Distributed Firewall (DFW).
  6. Spoof Guard supports both IPv4 and IPv6 addresses.
Use cases:
  • Preventing a rogue VM from assuming the IP address of an existing VM & sending malicious traffic.
  • Preventing any unauthorized IP address change for a VM without proper approval.
  • An enhanced security capability that prevents any VM from bypassing the DFW policies by changing its IP address.
Enabling the Spoof Guard feature is simple & takes only a few clicks.
By default, the Spoof Guard feature is disabled.
 
Creating Spoof Guard Policy:
1. By default, the IP detection type is None; it should be changed.
2. Two detection options are supported:
   a. DHCP snooping
   b. ARP snooping
 
  1. Next, you can edit the default policy or create a new policy. In this example we will create a new policy.
  2. Name the policy “Test” & select the option “Enabled”.
  3. In this example we select the option to manually inspect & approve all IP addresses.

  1. Next, select the network to which this policy should apply.
  2. The network can be a distributed port group, a legacy (standard) port group or a logical switch.
 
  1. Once the network is selected, you will be able to view the detected IPs waiting for the “Approve” action.
  2. Until approved, the VMs will not be part of the network & no traffic passes. (A scripted alternative to clicking “Approve” is sketched below.)
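For larger environments, the same approvals can be handled over the NSX Manager REST API instead of clicking through the UI. The sketch below is illustrative only: the endpoint path, manager address & credentials are assumptions modeled on the NSX for vSphere API style, so verify them against the API guide for your version.

```python
# Hedged sketch: list Spoof Guard policies (& the IPs awaiting approval)
# over the NSX Manager REST API. The endpoint path is an assumption.
import requests

NSX_MANAGER = "https://nsx-manager.lab.local"   # hypothetical address
AUTH = ("admin", "password")                    # NSX Manager credentials

resp = requests.get(
    f"{NSX_MANAGER}/api/4.0/services/spoofguard/policies",
    auth=AUTH,
    verify=False,   # lab only; use valid certificates in production
)
resp.raise_for_status()
print(resp.text)    # XML listing of policies & pending IP approvals
```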
NSX Traceflow

  • Troubleshooting a virtual environment is challenging & also quite interesting.
  • Traceflow is one of the tools introduced in NSX for vSphere 6.2, used for troubleshooting & planning.
  • It allows you to inject a packet into the network & monitor its flow across the network.
  • The traffic can be injected at the vNIC level of the VM without touching the operating system or logging in to the VM.
  • One of the key benefits of using Traceflow is that it can be used even when the VM is down.
  • The output of Traceflow indicates the hops traversed by the traffic from source to destination.
  • It also indicates whether the packet is delivered to the destination or not (i.e., whether the DFW is blocking the traffic).

 

Traceflow Use Cases:

  • Troubleshooting network failures to see the exact path that traffic takes
  • Performance monitoring to see link utilization
  • Network planning to see how a network will behave when it is in production

The following traffic types are supported by Traceflow:

  1. Layer 2 unicast
  2. Layer 3 unicast
  3. Layer 2 broadcast
  4. Layer 2 multicast

Note: The source for a Traceflow should always be the vNIC of a VM. The destination can be any device in the NSX overlay or underlay.

 

Using Traceflow:

  • Log in to vCenter & navigate to Networking & Security -> Tools -> Traceflow.
  • You need to select the source VM vNIC & the destination VM vNIC (refer to the screenshot below).
 
  • Under advanced options, choose the protocol of your choice from the drop-down (supported protocols are TCP, UDP & ICMP).
  • In this example we have selected the protocol “TCP”.
  • Destination port TCP 22 is selected in this example.

Click on “Trace” to initiate the trace between the source & the destination.

  • The simulated traffic is initiated between the source & destination VM vNICs.
  • The complete traffic flow, including the vNIC, firewall & ESXi host, is visible.
  • It is easy to identify whether the packet is delivered or not.

  • To identify which firewall policy was hit, just click on the firewall entry & it shows the Rule ID that allowed or blocked the traffic.

Traceflow is a very simple & easy-to-use tool for troubleshooting virtual network infrastructure.
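Beyond the Web Client, a Traceflow session can also be started programmatically against the NSX Manager REST API, which is handy for scripted health checks. The sketch below is only illustrative: the endpoint path & XML body shape are assumptions modeled on the NSX for vSphere API style, & the manager address, credentials & vNIC ID are placeholders; confirm the exact schema in the API guide for your version.

```python
# Hedged sketch: start a Traceflow (TCP, destination port 22, matching the
# example above) via the NSX Manager REST API. Endpoint & body are assumed.
import requests

NSX_MANAGER = "https://nsx-manager.lab.local"   # hypothetical address
AUTH = ("admin", "password")                    # NSX Manager credentials

# Hypothetical XML body: inject at the source VM's vNIC.
body = """<traceflowRequest>
  <vnicId>50012345-aaaa-bbbb-cccc-000000000001.000</vnicId>
  <packet><protocol>TCP</protocol><destinationPort>22</destinationPort></packet>
</traceflowRequest>"""

resp = requests.post(
    f"{NSX_MANAGER}/api/4.0/traceflow",          # assumed endpoint path
    auth=AUTH,
    data=body,
    headers={"Content-Type": "application/xml"},
    verify=False,                                # lab only
)
resp.raise_for_status()
print(resp.text)   # expected to return a traceflow ID to poll for results
```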

NSX Backup

  • One of the key considerations during an NSX deployment is proper planning of NSX Manager backups.
  • Proper & regular backups of the NSX Manager are critical to ensure NSX can be recovered after a failure or unforeseen issue.
  • Backing up the NSX Manager is as critical as backing up any other component in the SDDC environment.
  • As a best practice when deploying an SDDC, ensure that proper backup procedures & processes are in place.
The NSX backup contains the configuration of the following components:
  • Controllers
  • Logical switching
  • Routing entities
  • Security
  • Firewall Rules
  • Events & Audit logs

Virtual switches (vDS) are not part of the NSX Manager backup.

Best Practices:

  • Before & after any NSX upgrade or vCenter upgrade
  • After any configuration changes related to NSX controllers, logical switches, logical routers, Edge Service Gateways, Security & Firewall policies.
  • Ensure that the vCenter Server including its database server are backed up along with the NSX backup schedule.
  • In case of any trouble or issue & when it is required to restore the entire environment, it is always recommended to restore the NSX backup along with the vCenter server backup including its database which has been taken at the same time.
  • Create a backup strategy policy to schedule the backup periodically along with the vCenter & its database.
 

NSX Manager Backup Method:

  1. Web interface: NSX Manager UI with FTP/SFTP
  2. REST API method
The recommended way to take an NSX backup is via the web interface using FTP/SFTP, since it is simple & easy to configure.
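The REST API method is useful when the backup should be scripted, for example to trigger an on-demand backup before a change window. The endpoint below is an assumption based on the NSX for vSphere appliance-management API style; the FTP/SFTP target is expected to have been configured beforehand, & the address & credentials are placeholders.

```python
# Hedged sketch: request an on-demand NSX Manager backup over REST.
# The appliance-management endpoint path is an assumption; verify it in
# the API guide for your NSX version.
import requests

NSX_MANAGER = "https://nsx-manager.lab.local"   # hypothetical address
AUTH = ("admin", "password")                    # NSX Manager credentials

resp = requests.post(
    f"{NSX_MANAGER}/api/1.0/appliance-management/backuprestore/backup",
    auth=AUTH,
    verify=False,   # lab only; use valid certificates in production
)
resp.raise_for_status()
print("Backup requested, HTTP status:", resp.status_code)
```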
 

Procedure – NSX Manager Backup:

  • The NSX Manager backup is a simple & straightforward procedure.
  • The VMware article below explains it & is easy to follow.

 

Ref:

https://docs.vmware.com/en/VMware-NSX-for-vSphere/6.4/com.vmware.nsx.upgrade.doc/GUID-2A75A102-518D-4D6C-B23D-877C421B1536.html

 

Restoring NSX Manager Backup:

  • Restoring the NSX Manager requires the backup file to be loaded onto the NSX Manager appliance.
  • VMware's recommendation is to install a new NSX Manager appliance & then restore the backup file to it.
  • Restores are only compatible between NSX Managers of the same version (the backup file version & the restoring NSX Manager version must match).
  • Restoring the backup file to the existing NSX Manager appliance may also work, but it can sometimes cause issues.
VMware also recommends recording the old NSX Manager settings, such as the IP address, subnet mask & default gateway, in advance, since they need to be specified on the newly deployed NSX Manager appliance.

 

There may be situations where an NSX Edge becomes inaccessible or fails for some reason.
In this case the NSX Edge can be easily restored by clicking “Redeploy NSX Edge” in the vSphere Web Client.
It is not required to restore the complete NSX Manager backup.
Note: Individual backups of NSX Edge devices are not supported.
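The same redeploy action can also be invoked programmatically. The "?action=redeploy" pattern below is an assumption modeled on NSX for vSphere Edge API conventions, & the Edge ID is a placeholder; confirm the call in the API guide before relying on it.

```python
# Hedged sketch: redeploy an NSX Edge via REST, the programmatic
# equivalent of the "Redeploy NSX Edge" button. Endpoint is assumed.
import requests

NSX_MANAGER = "https://nsx-manager.lab.local"   # hypothetical address
AUTH = ("admin", "password")                    # NSX Manager credentials
EDGE_ID = "edge-1"                              # placeholder Edge ID

resp = requests.post(
    f"{NSX_MANAGER}/api/4.0/edges/{EDGE_ID}?action=redeploy",
    auth=AUTH,
    verify=False,   # lab only
)
resp.raise_for_status()
```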
 
 

VXLAN in NSX

Virtual Extensible LAN (VXLAN):

  • VXLAN is the basis of network virtualization, providing the network overlay.
  • VXLAN encapsulates Ethernet frames in a routable UDP packet.
  • VXLAN allows extending a single L2 segment across L3 boundaries.
  • VXLAN also overcomes VLAN scale limits: the 802.1Q standard allows a maximum of 4094 VLANs.
  • VXLAN overcomes this with a maximum of 2^24 VNIs (VXLAN Network Identifiers).

Overlay Architecture: NSX

  • The term “overlay” refers to any virtual network running over an “underlay” network (the underlay being the physical network).
  • Virtual networks are created with a MAC-over-IP encapsulation using VXLAN (see the sketch after this list).
  • The encapsulation allows two VMs on the same logical network to talk to each other even if the path between them needs to be routed.
  • The VXLAN modules operate in the ESXi hypervisor.
  • VTEPs encapsulate & de-encapsulate network packets.
  • VTEPs terminate the VXLAN tunnels.
  • VTEPs wrap a UDP packet header around the L2 frame.
  • The VXLAN packet header includes the VNI (VXLAN Network Identifier).
  • VTEPs are managed by the NSX Controllers (ARP, VTEP & MAC tables).
  • Encapsulated packets are forwarded between VTEPs over the physical network like any other IP traffic.
  • A VTEP is a host interface which forwards Ethernet frames from a virtual network via VXLAN, & vice versa.
  • All hosts with the same VNI configured must be able to retrieve & synchronize data (ARP & MAC tables).
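To make the MAC-over-IP idea concrete, the sketch below builds a VXLAN-encapsulated frame with the scapy packet-crafting library. All MAC/IP addresses & the VNI value are invented for the example; only UDP port 4789 is the IANA-assigned VXLAN port.

```python
# Illustrative sketch of VXLAN encapsulation: an inner VM-to-VM Ethernet
# frame wrapped in an outer IP/UDP header exchanged between two VTEPs.
from scapy.all import Ether, IP, UDP
from scapy.layers.vxlan import VXLAN

# Inner frame: what the two VMs on logical segment VNI 5001 actually see.
inner = Ether(src="00:50:56:aa:aa:aa", dst="00:50:56:bb:bb:bb") / \
        IP(src="10.0.0.1", dst="10.0.0.2")

# Outer headers: VTEP-to-VTEP transport across the routed underlay.
frame = Ether() / \
        IP(src="192.168.1.10", dst="192.168.2.10") / \
        UDP(dport=4789) / \
        VXLAN(vni=5001) / \
        inner

frame.show()   # prints the encapsulated packet layer by layer
```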

MTU Considerations:
Because VXLAN is an overlay technology which uses encapsulation, the MTU needs to be adjusted.
VXLAN adds roughly 50 bytes of overhead to each frame.
The entire underlay path needs to be configured to support the MTU requirement of VXLAN.

  • IPv4 Header – 20 bytes
  • UDP Header – 8 bytes
  • VXLAN Header – 8 bytes
  • Original Ethernet Header with VLAN – 18 bytes
  • Original Ethernet Payload – 1500 bytes

Total = 1554 bytes

  • VMware recommends setting the MTU to at least 1600 bytes.
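The arithmetic above can be sanity-checked in a few lines; the byte counts are taken directly from the breakdown in this section.

```python
# Sanity check of the VXLAN MTU arithmetic from the breakdown above.
ipv4_header    = 20    # outer IPv4 header
udp_header     = 8     # outer UDP header
vxlan_header   = 8     # VXLAN header carrying the 24-bit VNI
inner_ethernet = 18    # original Ethernet header incl. 802.1Q VLAN tag
payload        = 1500  # original Ethernet payload

total = ipv4_header + udp_header + vxlan_header + inner_ethernet + payload
print(total)   # 1554 -> hence the recommended underlay MTU of 1600 bytes
```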

Hyper Convergence

What is Hyper-convergence?

Hyper-convergence is a type of infrastructure system with a software-centric architecture that tightly integrates compute, storage, networking, virtualization & other technologies in a hardware box supported by a single vendor.

  • Hyper-convergence was born out of the converged infrastructure concept of products that include storage, compute & networking in one box.
  • Systems that fall under the hyper-convergence category also have the hypervisor built in & are used specifically in virtual environments.
  • Storage & compute are typically managed separately in a virtual environment, but hyper-convergence provides simplification by allowing everything to be managed through a single plug-in.

How do hyper-converged systems differ from converged systems?

Hyper-converged systems take the concept of convergence to the next level.

  • Converged systems are separate components engineered to work together.
    • Storage, networking, compute & virtualization components are integrated to provide a converged infrastructure.
    • Storage & the management of computing power are handled independently of the virtual environment.
  • Hyper-converged systems are modular systems designed to scale out by adding additional modules.
    • These systems are mainly designed around storage & compute on a single x86 server chassis, interconnected by 10 Gb Ethernet.
    • From a physical perspective, it’s like a server with a bunch of storage.

Key Benefits of Hyper-convergence

  • Simple design.
  • Decreased administrative overhead.
  • Simplified vendor management, since a single vendor provides the complete solution.

What can hyperconvergence do for you?

  • Hyperconvergence is based on the Software-Defined Data Center (SDDC). Since it is software-based, it provides the flexibility & agility that the business demands from IT.
    • As it is software driven, any new features are made available in software releases, which can be applied without any hardware change or upgrade.
  • Hyperconvergence solutions combine flash & spinning disk for storage, which offers better capacity & performance & helps eliminate resource islands (underutilized resources).
  • Hyperconvergence solutions offer a single-vendor approach to procurement, implementation & operation.
    • All components (compute, storage, network & backup) are combined into a single shared resource pool with hypervisor technology.
    • The software layer which forms the base of hyperconvergence technology is designed to accommodate the hardware failures that eventually happen & cannot be prevented.
    • It offers a single, centralized interface for managing all resources across multiple nodes.
  • Hyperconvergence solutions go far beyond the servers & storage that traditional or legacy offerings provide. They include the services below, which a legacy offering does not:
    • Data protection products, including backup & replication
    • De-duplication appliances
    • Wide-area network optimization appliances
    • Solid State Drive (SSD) arrays
    • SSD cache arrays
    • Replication appliances & software

Who are the vendors?

  • Nutanix
  • SimpliVity – OmniCube
  • Scale Computing

VXLAN Basics

VXLAN – Virtual Extensible Local Area Networks

What is VXLAN?

  • Virtual Extensible LAN (VXLAN) is a network virtualization technology that addresses the scalability problems across data center networks.
  • VXLAN is an L2 overlay over an L3 network. It uses a VLAN-like encapsulation technique to encapsulate MAC-based OSI layer 2 Ethernet frames within layer 3 UDP packets.
  • Each overlay network is known as a VXLAN Segment and identified by a unique 24-bit segment ID called a VXLAN Network Identifier (VNI). 
  • Only virtual machines on the same VNI are allowed to communicate with each other.  Virtual machines are identified uniquely by the combination of their MAC addresses and VNI. 
  • The VXLAN technology was created by Cisco, VMware, Citrix & Red Hat.

Why is VXLAN required?

VXLAN technology was developed to address the following problems, which are frequently encountered in data center & cloud computing networks.

Limitation of 4094 broadcast domains (VLANs)

  • The number of VLANs supported in a traditional network is limited: 802.1Q allows 4094 VLANs.
  • Most cloud service providers face this VLAN shortcoming, since multiple companies & tenants each require unique VLAN IDs, & segmenting each company’s resources quickly consumes the available VLANs. Scalability is an issue.
  • VXLAN addresses this problem by increasing the traditional VLAN limit of 4094 to 16 million.
  • It uses a 24-bit segment identifier (VNI) to scale beyond the 4094-VLAN limitation, as the short calculation below shows.
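A one-line comparison of the two ID spaces (VLAN IDs 0 & 4095 are reserved, which is why 4094 are usable):

```python
# VLAN vs. VXLAN segment ID space.
usable_vlans = 2 ** 12 - 2   # 12-bit 802.1Q VLAN ID, minus 2 reserved -> 4094
vxlan_vnis   = 2 ** 24       # 24-bit VNI -> 16,777,216 ("16 million") segments
print(usable_vlans, vxlan_vnis)
```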

Layer 2 extension across data centers & mobility

  • VXLAN addresses the Layer 2 extension between different data center sites that must share the same logical networks.
    • Extending Layer 2 domains across a Layer 3 network is not natively possible; the same VLAN cannot be extended beyond a Layer 3 boundary.
    • VXLAN addresses this by binding the two separate Layer 2 domains & making them look like one.
  • VXLAN supports long-distance vMotion & High Availability (HA) across data centers.
  • VXLAN also addresses scalability by expanding the L2 network across data centers while maintaining the same network.

Key Benefits

  • It does not depend on STP to converge the topology; instead, Layer 3 routing protocols are used.
  • No links within the fabric are blocked. All links are active & can carry traffic.
  • The fabric can load-balance traffic across all active links, ensuring no bandwidth sits idle.

VXLAN Use Cases – Summary

  • Cloud service providers or data centers which require more than 4094 VLANs for network segmentation.
  • Stretching Layer 2 domains across data centers to accommodate growth without breaking the Layer 2 adjacency requirement of services & applications.