NSX Controllers

NSX Control Plane:

  • The NSX control plane is based on the NSX Controller cluster.
  • NSX Controller is a virtual appliance (deployed as a three-node cluster for high availability & scale) that manages the distributed switching & routing modules on the ESXi hosts.
  • The controller does not have any data plane traffic passing through it.
  • The NSX Controller is the central control point for all logical switches within a network & maintains information about all virtual machines, hosts, logical switches, distributed logical routers & VXLANs.
  • NSX Controller handles both the Layer 2 & Layer 3 control plane.


Layer 2 Control Plane:
  • VTEP table:
    • Table that lists all VTEPs that have VMs connected to the logical switch.
    • There is one VTEP table per logical switch.
  • MAC table:
    • Table that lists the MAC addresses of VMs connected to the logical switches.
  • ARP table:
    • Table that lists the ARP entries of VMs connected to the logical switches.
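
As a sketch, the three Layer 2 tables can be pictured as in-memory maps keyed by the logical switch's VNI. The table contents, names & the lookup helper below are illustrative assumptions, not VMware's implementation:

```python
# Illustrative sketch (not VMware code): the three Layer 2 control-plane
# tables the controller cluster maintains, keyed per logical switch (VNI).
# All identifiers, IPs & MACs below are hypothetical.

l2_control_plane = {
    5001: {  # VNI of one logical switch
        "vtep": {"192.168.10.11", "192.168.10.12"},      # host VTEP IPs
        "mac":  {"00:50:56:aa:bb:01": "192.168.10.11"},  # VM MAC -> VTEP IP
        "arp":  {"10.0.0.5": "00:50:56:aa:bb:01"},       # VM IP  -> VM MAC
    }
}

def lookup_vtep_for_mac(vni, mac):
    """Resolve which VTEP a frame destined to `mac` should be tunnelled to."""
    return l2_control_plane[vni]["mac"].get(mac)
```

This is the kind of lookup that lets a host forward a VXLAN-encapsulated frame to the right remote VTEP without flooding.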

Layer 3 Control Plane:

  • The NSX controllers maintain the routing table for each distributed logical router.
  • They also maintain the list of all hosts running a copy of each distributed logical router.
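
The Layer 3 state can be pictured the same way. The DLR id, host names & `push_routes` helper below are hypothetical, shown only to illustrate what the controller tracks per DLR:

```python
# Illustrative sketch (not VMware code): per-DLR state the controller
# cluster keeps -- a routing table plus the hosts running that DLR copy.

l3_control_plane = {
    "dlr-01": {
        "routes": {"10.0.1.0/24": "logical-switch-5001",
                   "10.0.2.0/24": "logical-switch-5002"},
        "hosts": ["esxi-01", "esxi-02", "esxi-03"],
    }
}

def push_routes(dlr_id):
    """Return the (host, routing table) pairs the controller would push."""
    dlr = l3_control_plane[dlr_id]
    return [(host, dlr["routes"]) for host in dlr["hosts"]]
```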

Deploying NSX Controllers

  • NSX controllers are virtual appliances deployed by the NSX Manager.
  • The NSX controllers must be deployed in the same vCenter that is associated with the NSX Manager. (They can also be deployed via REST API.)
    • At least one NSX controller should be deployed before deploying any logical switches & DLRs (Distributed Logical Routers).
    • If the NSX Manager cannot establish communication with an NSX controller for any reason, it deletes the appliance.
    • The ports required for NSX controller communication with the NSX Manager & ESXi hosts are listed in the VMware documentation & must be opened in the network.


  • In a production environment there should be three NSX controllers per NSX Manager to provide redundancy & failover capability.
  • The NSX controllers should be deployed on separate ESXi hosts to prevent a single point of failure.
    • It is recommended to configure DRS anti-affinity rules to keep the controllers from residing on the same host.
  • When the NSX controllers are deployed, they automatically form a cluster among themselves.
    • The 1st NSX Controller forms the cluster by itself before the next controllers are deployed.
    • The 2nd & 3rd controllers can be deployed only after the 1st controller has been successfully deployed; they cannot be deployed in parallel.

NSX Controller Master & Recovery:

  • When the cluster is formed, the Layer 2 & Layer 3 control plane responsibilities are shared among all the NSX controllers.
    • The layer 2 NSX controller master assigns layer 2 control plane responsibility on a per-logical-switch basis to each NSX controller in the cluster.
    • The layer 3 NSX controller master assigns the layer 3 forwarding tables on a per-DLR basis to each NSX controller in the cluster, including the master.
    • The process of assigning the logical switches & logical routers to different NSX controllers is called “slicing”.
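
Slicing can be pictured as follows. This is a deliberately simplified round-robin sketch; the actual NSX slicing algorithm is internal to the controller cluster & not this code:

```python
# Illustrative sketch of "slicing" (not the real NSX algorithm): the
# master divides logical switches & DLRs among the controller nodes so
# each node owns a share of the control plane.

controllers = ["controller-1", "controller-2", "controller-3"]

def slice_objects(object_ids, nodes):
    """Assign each object (a VNI or DLR id) to a controller round-robin."""
    assignment = {}
    for i, obj in enumerate(object_ids):
        assignment[obj] = nodes[i % len(nodes)]
    return assignment

slices = slice_objects([5001, 5002, 5003, "dlr-01"], controllers)
```

The point of the sketch: every logical switch & DLR has exactly one owning controller, & the work spreads across the cluster rather than landing on the master alone.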
  • When an NSX controller goes down or becomes unresponsive, the data plane continues to operate without interruption.
    • When one of the NSX controllers goes down, the master reassigns the failed controller's responsibilities to the remaining NSX controllers.
    • If the master NSX controller itself goes down, the remaining NSX controllers elect a new master.
    • The newly elected master then recovers the control plane for the affected logical switches & distributed logical routers.
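
The recovery steps above can be sketched as follows. The lowest-id election rule & the `reassign` helper are illustrative assumptions, not the actual NSX election protocol:

```python
# Illustrative sketch (not the real NSX protocol): on controller failure,
# the survivors elect a new master (here: lowest node id wins, a common
# tie-break), & the master moves the failed node's slices to survivors.

def elect_master(nodes):
    """Pick a new master from the surviving controllers."""
    return min(nodes)

def reassign(assignment, failed, survivors):
    """Move every slice owned by the failed node onto surviving nodes."""
    orphans = [obj for obj, owner in assignment.items() if owner == failed]
    for i, obj in enumerate(orphans):
        assignment[obj] = survivors[i % len(survivors)]
    return assignment
```

Because the hosts keep forwarding with their last-known state while this happens, the data plane is unaffected during re-election & reassignment.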