- Identify management and edge cluster requirements
- Minimum 2 hosts per cluster
- Minimum 1 vCenter; optimally, one dedicated vCenter for the management cluster plus one for each NSX domain
- Management and edge clusters can be combined but this limits future expansion
- Management and edge clusters need to share top-of-rack connectivity (shared L2 for management traffic; shared L2 from edge to the external network)
- Recommended to deploy dedicated compute clusters rather than combining with Mgmt/Edge
- Describe minimum/optimal physical infrastructure requirements for a VMware NSX implementation
- 2+ hosts per Cluster
- 3+ hosts for Management Cluster
- Minimum : 1 Compute cluster + 1 Edge/Management cluster
- Optimal : 1+ Compute cluster, 1 Management cluster, 1 Edge cluster
- 1Gb fabric
- Minimum 4 NICs per host (2 suffice with 10GigE)
- Edge racks are normally the only racks connected to the external physical network
- Management racks need to share L2 connectivity
- 10Gb fabric (VMware ran performance tests on 10Gb X540 Intel NICs)
- Leaf-Spine fabric
- Separate Edge routers connected to Edge racks.
- Describe how traffic types are handled in a physical infrastructure
The following four traffic types should be placed in segregated VLANs:
- VXLAN Traffic
A VTEP IP address is associated with a VMkernel interface on the host and is used to transport VXLAN frames across the fabric. VXLAN tunnels are initiated and terminated by VTEP interfaces, and this encapsulation ensures the external fabric never sees the VM IP or MAC addresses. Because VXLAN provisioning is done at the cluster level, assigning VTEP IP addresses presents a challenge.
- If the “Use IP Pool” option is used, only a single IP address range can be applied to a cluster, even though the cluster's hosts may sit in different VLANs. The VTEP IP addresses would then have to be changed manually, per host, at the command line.
- If “Use DHCP” option is used, the VTEP will receive a valid IP for the VLAN it is in, depending on the specific rack it is connected to. This is the recommended approach for production deployments.
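VXLAN encapsulation adds header overhead that the transport fabric must accommodate. The sketch below (header sizes per the VXLAN specification; the 1600-byte figure is the commonly cited NSX-v fabric MTU guidance, not stated in these notes) shows why the fabric MTU must be raised above the default 1500:

```python
# Sketch: VXLAN encapsulation overhead and the resulting fabric MTU need.
# Header sizes follow RFC 7348; values are bytes.
OUTER_ETHERNET = 14   # outer Ethernet header
OUTER_DOT1Q    = 4    # 802.1Q tag on the VXLAN transport VLAN
OUTER_IPV4     = 20   # outer IP header
OUTER_UDP      = 8    # outer UDP header
VXLAN_HEADER   = 8    # VXLAN header (flags + 24-bit VNI)

overhead = OUTER_ETHERNET + OUTER_DOT1Q + OUTER_IPV4 + OUTER_UDP + VXLAN_HEADER
guest_mtu = 1500
required_fabric_mtu = guest_mtu + overhead

print(overhead)             # 54 bytes with a tagged transport VLAN
print(required_fabric_mtu)  # 1554 -> hence the usual "fabric MTU >= 1600" guidance
```

Since 1554 bytes is the minimum, fabrics are typically configured with an MTU of 1600 or 9000 to leave headroom.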
- Management Traffic
Management traffic is sourced and terminated by the management VMkernel interface on the host. It includes communication between vCenter Server and hosts, as well as communication with other management tools such as NSX Manager. A single VDS can span hypervisors deployed beyond a single leaf switch. Because VLANs cannot be extended beyond a leaf switch, the management interfaces of hypervisors that participate in a common VDS but connect to separate leaf switches are in separate IP subnets.
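With L3 terminated at the leaf, each rack gets its own management subnet carved from a larger block. A minimal sketch using Python's `ipaddress` module (the supernet, prefix lengths, and rack names are illustrative assumptions, not values from the design guide):

```python
# Hypothetical sketch: per-rack management subnets in an L3 leaf-spine fabric.
# Hosts in different racks land in different subnets because no VLAN/subnet
# extends beyond its leaf switch.
import ipaddress

mgmt_supernet = ipaddress.ip_network("10.10.0.0/16")   # illustrative block
racks = ["rack-1", "rack-2", "rack-3"]

# One /24 per leaf switch / rack.
per_rack = dict(zip(racks, mgmt_supernet.subnets(new_prefix=24)))
for rack, subnet in per_rack.items():
    print(rack, subnet)   # rack-1 10.10.0.0/24, rack-2 10.10.1.0/24, ...
```

The same per-rack addressing pattern applies to the storage VMkernel interfaces described below.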
- vSphere vMotion Traffic
From a VMware support point of view, the historical recommendation has always been to deploy all the VMkernel interfaces used for vMotion as part of a common IP subnet. This is clearly not possible when designing the network for network virtualization using L3 in the access layer, where it is mandated to select different subnets in different racks for those VMkernel interfaces. Until this design is fully and officially supported by VMware, it is recommended that users go through the RPQ process so VMware can validate the design on a case-by-case basis.
- Storage Traffic
A VMkernel interface is used to provide features such as shared or non-directly attached storage. This typically refers to storage attached over an IP connection—NAS or iSCSI, for example—rather than FC or FCoE. From an IP addressing standpoint, the same rules that apply to management traffic apply to storage VMkernel interfaces: the storage VMkernel interfaces of servers inside a rack—that is, connected to the same leaf switch—are part of the same IP subnet.
This subnet, however, cannot span beyond the leaf switch. Therefore, the storage VMkernel interface of a host in a different rack is in a different subnet.
- Determine use cases for available virtual architectures
An enterprise wishes to host multiple applications and provide connectivity among the different tiers of the application as well as connectivity to the external network.
- Multiple Tenant
A service provider environment has multiple tenants, and each tenant can have different requirements in terms of the number of isolated logical networks and other network services such as load balancing, firewalling, and VPN. A single NSX Edge is limited to 9 tenants, and these tenants cannot have overlapping IP addressing.
- Multi-Tenant Scalable
A large service provider can have an additional layer of aggregation (a route aggregation Edge) which allows multiple groups of up to 9 tenants. It also permits overlapping IP addressing between groups of tenants.
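The scaling arithmetic implied above can be sketched as follows. The 9-tenants-per-Edge limit comes from the notes; the fan-out of tenant Edges behind each aggregation Edge is an illustrative assumption:

```python
# Sketch of multi-tenant scale-out: each tenant-facing NSX Edge supports up to
# 9 tenants, and a route-aggregation Edge fans out to multiple tenant Edges.
TENANTS_PER_EDGE = 9

def max_tenants(aggregation_edges: int, tenant_edges_per_aggregation: int) -> int:
    """Upper bound on tenants for a two-tier Edge design (illustrative)."""
    return aggregation_edges * tenant_edges_per_aggregation * TENANTS_PER_EDGE

print(max_tenants(1, 9))   # 81 tenants behind a single aggregation Edge
```

Because routing is aggregated per group, IP addressing may overlap between groups even though it cannot overlap within a single tenant Edge.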
- Describe ESXi host vmnic requirements
- Minimum 2x physical NIC if 10GigE
- Minimum 4x physical NIC if 1GigE
- Different traffic types should be on different VLANs
- Differentiate virtual to physical switch connection methods
The design criteria used for connecting hosts are as follows:
- The type of traffic carried – VXLAN, vMotion, management, storage. The focus here is on VXLAN, as it is the additional traffic type found in NSX-v deployments.
- Type of isolation required based on traffic SLA – dedicated uplinks (for example for vMotion/Management) vs. shared uplinks.
- Type of cluster – compute workloads, edge and management with or without storage etc.
- Amount of bandwidth required for VXLAN traffic, which may determine whether to deploy a single VTEP or multiple VTEPs.
- The teaming options for the port group used for VXLAN include Explicit Failover Order, Source Port (SRC-ID), Source MAC Hash (SRC-MAC), LACP, and Static EtherChannel.
- The selection criteria for type of uplink configuration to deploy can be based on the following considerations:
- Simplicity of configuration – a single VTEP vs. additional physical switch configuration.
- BW Required for each type of traffic.
- Convergence requirement.
- Cluster specifics – compute, edge and management.
- The uplink utilization factors – flow based vs. MAC based
- The recommended teaming mode for VXLAN traffic for ESXi hosts in compute clusters is LACP. It provides good utilization of both links and reduced failover time, and it offers simplicity of VTEP configuration and troubleshooting at the expense of extra configuration and coordination with the physical switch. Note that ToR diversity for ESXi attachment can only be achieved if the deployed physical switches support a multi-chassis EtherChannel capability, such as vPC (Cisco) or MLAG (Arista, Juniper, etc.).
For ESXi hosts that are part of edge clusters, it is instead recommended to avoid the LACP and Static EtherChannel options. Because the NSX Edge must establish routing adjacencies with the next-hop L3 devices – generally the ToR switches – an LACP/EtherChannel connection would complicate this and/or be unsupported. Therefore the recommendation for edge clusters is to select Explicit Failover Order or SRC-ID/SRC-MAC Hash as the teaming mode for VXLAN traffic.
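The "uplink utilization – flow based vs. MAC based" criterion can be illustrated conceptually (this is not ESXi code; the hash function and uplink names are stand-ins): a source-MAC/port policy pins all of a vNIC's traffic to one uplink, while a flow-based hash, as used with LACP, can spread different flows from the same VM across uplinks.

```python
# Conceptual sketch of uplink selection under different teaming policies.
import hashlib

UPLINKS = ["vmnic0", "vmnic1"]

def pick_uplink(key: str) -> str:
    """Deterministically map a hash key to one of the available uplinks."""
    h = int(hashlib.md5(key.encode()).hexdigest(), 16)
    return UPLINKS[h % len(UPLINKS)]

vm_mac = "00:50:56:aa:bb:cc"
flows = [("10.0.0.5", 443), ("10.0.0.9", 80)]

# SRC-MAC policy: every flow from this VM hashes on the MAC alone,
# so all of its traffic lands on a single uplink.
print({flow: pick_uplink(vm_mac) for flow in flows})

# Flow-based policy: hashing on the full flow key may place different
# flows from the same VM on different uplinks.
print({flow: pick_uplink(vm_mac + str(flow)) for flow in flows})
```

This is why LACP tends to utilize both links better for a small number of chatty VMs, at the cost of the physical-switch coordination noted above.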
- Describe VMkernel networking recommendations
- 3 VIBs are installed on each host – VXLAN, Distributed Firewall, and Logical Router. These are VMkernel modules.
- Separate VMkernel NIC interfaces should be configured for the following services:
- Management
- vMotion
- VXLAN (VTEP)
- IP Storage (if used)
- If Source Port (SRC-ID) or Source MAC teaming is used, NSX creates multiple VTEPs to load-balance traffic across uplinks.
If LACP, Explicit Failover, or Static EtherChannel is used, NSX creates a single VTEP by default.
- DHCP should be used for VTEP IP configuration to avoid manual configuration of each VTEP address.
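The VTEP-count behavior above can be summarized as a small rule. The mapping from policy to count follows the notes; the assumption that "multiple VTEPs" means one per dvUplink is common NSX-v behavior but not stated here:

```python
# Sketch of the VTEP-count rule: port/MAC-hash teaming yields one VTEP per
# dvUplink (assumed), while LACP/EtherChannel/failover use a single VTEP.
def vtep_count(teaming_policy: str, dvuplinks: int) -> int:
    multi_vtep = {"SRC-ID", "SRC-MAC"}
    single_vtep = {"LACP", "Static EtherChannel", "Explicit Failover"}
    if teaming_policy in multi_vtep:
        return dvuplinks
    if teaming_policy in single_vtep:
        return 1
    raise ValueError(f"unknown teaming policy: {teaming_policy}")

print(vtep_count("SRC-ID", 2))  # 2 VTEPs on a host with two dvUplinks
print(vtep_count("LACP", 2))    # 1 VTEP
```

More VTEPs means more addresses to allocate per host, which reinforces the DHCP recommendation above.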
- VMware NSX Network Virtualization Design Guide
- NSX User’s Guide