NSX-T 3.1.1 released with support for OSPFv2

VMware NSX-T 3.1.1 has just been released with the long-awaited OSPF routing support for northbound connectivity. Prior to 3.1.1 there was no OSPF support, so BGP was the only dynamic routing protocol option for connecting to the corporate and external networks.

OSPF can now be enabled on external interfaces only, and all interfaces can be in the same OSPF area, even across multiple Edge Nodes. That's great news if you have NSX-V in your environment and are planning to migrate to NSX-T, because OSPFv2 support will make the migration a lot easier if you are already using OSPF.

There are lots of other enhancements in 3.1.1; I have listed some of the key ones below:

L3 Networking

  • OSPFv2 Support on Tier-0 Gateways
    • NSX-T Data Center now supports OSPF version 2 as a dynamic routing protocol between Tier-0 gateways and physical routers. OSPF can be enabled only on external interfaces and can all be in the same OSPF area (standard area or NSSA), even across multiple Edge Nodes. This simplifies migration from the existing NSX for vSphere deployment already using OSPF to NSX-T Data Center.
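
If you want to drive the new OSPF support from automation, the Policy API exposes an OSPF configuration object under the Tier-0 locale services. Below is a minimal PowerShell sketch; the manager name, gateway ID, endpoint path and payload fields are assumptions based on the NSX-T 3.1.1 Policy API, so verify them against the official API guide before use.

$nsxManager = "nsx-mgr.lab.local"      # hypothetical NSX Manager FQDN
$cred       = Get-Credential           # NSX admin credentials
$t0         = "tier0-corp"             # hypothetical Tier-0 gateway ID
$ls         = "default"                # locale-services ID

# Assumed OSPF config payload; HELPER_ONLY graceful restart is just an example value
$ospfBody = @{
    resource_type         = "OspfRoutingConfig"
    enabled               = $true
    graceful_restart_mode = "HELPER_ONLY"
} | ConvertTo-Json

# PATCH the OSPF config on the Tier-0 locale services
# (add -SkipCertificateCheck on PowerShell 7 if the manager uses a self-signed certificate)
Invoke-RestMethod -Method Patch -Credential $cred -ContentType "application/json" `
    -Uri "https://$nsxManager/policy/api/v1/infra/tier-0s/$t0/locale-services/$ls/ospf" `
    -Body $ospfBody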

NSX Data Center for vSphere to NSX-T Data Center Migration

  • Support of Universal Objects Migration for a Single Site
    • You can migrate your NSX Data Center for vSphere environment deployed with a single NSX Manager in Primary mode (not secondary).
  • Migration of NSX-V Environment with vRealize Automation – Phase 2
    • The Migration Coordinator interacts with vRealize Automation (vRA) to migrate environments where vRealize Automation provides automation capabilities. This release adds additional topologies and use cases to those already supported in NSX-T 3.1.0.
  • Modular Migration for Hosts and Distributed Firewall
    • The NSX-T Migration Coordinator adds a new mode to migrate only the distributed firewall configuration and the hosts, leaving the logical topology (L3 topology, services) for you to complete. You can benefit from the in-place migration offered by the Migration Coordinator (hosts moved from NSX-V to NSX-T while going through maintenance mode, firewall states and memberships maintained, layer 2 extended between NSX for vSphere and NSX-T during migration) while you (or third-party automation) deploy the Tier-0/Tier-1 gateways and related services, giving greater flexibility in terms of topologies. This feature is available from both the UI and the API.
  • Modular Migration for Distributed Firewall available from UI
    • The NSX-T user interface now exposes the Modular Migration of firewall rules. This feature simplifies lift-and-shift migration where you vMotion VMs between an environment with hosts with NSX for vSphere and another environment with hosts with NSX-T by migrating firewall rules and keeping states and memberships (hence maintaining security between VMs in the old environment and the new one).
  • Fully Validated Scenario for Lift and Shift Leveraging vMotion, Distributed Firewall Migration and L2 Extension with Bridging
    • This feature supports the complete scenario for migration between two parallel environments (lift and shift), leveraging the NSX-T bridge to extend L2 between NSX for vSphere and NSX-T together with the Modular Distributed Firewall migration.

Identity Firewall

  • NSX Policy API support for Identity Firewall configuration
    • Active Directory, for use in Identity Firewall rules, can now be configured through the NSX Policy API.

Advanced Load Balancer Integration

  • Support Policy API for Avi Configuration
  • Service Insertion Phase 2 – Transparent LB in NSX-T advanced load balancer

Some other key features and changes:

  • Support for Guest Users and Local User accounts
  • Upgraded FIPS-compliant Bouncy Castle
  • NSX Cloud
    • NSX Marketplace Appliance in Azure
    • NSX Cloud Service Manager HA
    • NSX Cloud for Horizon Cloud VDI enhancements
  • UI-based Upgrade Readiness Tool for migration from NVDS to VDS with NSX-T Data Center
  • Enable VDS in all vSphere Editions for NSX-T Data Center Users
  • This release supports a maximum scale of 50 Clusters (ESXi clusters) per vCenter enabled with vLCM, on clusters enabled for vSphere with Tanzu
  • Starting with NSX-T 3.1.1, NSX-T will reject x509 certificates with duplicate extensions

There is a long list of bug fixes in this release.

Check out the details in the official VMware release notes here.

VMware Cloud on AWS is now available with a two-host deployment, starting from 33% cheaper

When VMware Cloud on AWS was introduced three years ago, it required a minimum of four hosts to be provisioned in production clusters. The requirement was later reduced to three hosts, and it has now been dropped once more: the minimum is two hosts.

A few days ago VMware and Amazon AWS announced new upgrades to VMware Cloud on AWS (VMCA). Here are the key changes:

  • The minimum requirement for a production cluster deployment has been reduced to two hosts, so the entry deployment cost drops by roughly 33% (going from three hosts to two removes a third of the entry-level host cost), which will attract small businesses.
  • Storage-optimized AWS EC2 (I3en) instances are now available on VMCA for data-intensive workloads with high random I/O, such as relational databases.

While talking about the VMware Cloud on AWS upgrades, I thought it might be good to add some basic information about VMware Cloud on AWS here:

  • VMware Cloud on AWS is essentially the VMware SDDC solution, based on the VMware Cloud Foundation platform, with optimized access to native AWS services. VMCA runs on elastic, dedicated hosts on Amazon AWS infrastructure.
  • VMCA is currently available in 16 AWS regions. AWS is planning to expand the availability of VMCA to 21 regions by the end of the year.
  • VMCA is a cloud choice for easily migrating VMs between an on-premises VMware platform and a cloud-managed VMware SDDC platform, and it also provides integration with AWS services.
  • VMware Cloud on AWS can be purchased either directly from AWS or through APN partners.
  • You can use your existing Windows Server licenses in VMCA. Consult your Microsoft product terms for any restrictions.
  • Each host is equivalent to an Amazon EC2 i3.metal instance (2 sockets with 18 cores per socket, 512 GiB RAM, and 15.2 TB raw SSD storage).
  • Production clusters can have a minimum of 2 and a maximum of 16 ESXi hosts.
  • The single-host SDDC starter is a 30-day plan that can reduce costs for proofs of concept.
  • VMs can be cold migrated to VMCA from an on-premises DC running vSphere 6.0 or later.
  • Hybrid Linked Mode is supported with vSphere 6.5 or later.
  • Live migration can be done using vMotion or by leveraging VMware HCX (Hybrid Cloud Extension).

Support for NSX-T in VMware Skyline 2.5

Good news for NSX-T users: VMware has announced the VMware Skyline Collector 2.5 and Advisor releases with support for NSX-T and new Findings & Recommendations.

Skyline now supports NSX-T 2.5 and above, which means you can connect your NSX-T endpoints to your Collectors, and Skyline will then surface proactive NSX-T Findings and Recommendations within Advisor. Just bear in mind that it may take 24-48 hours for these new findings to appear within Skyline Advisor.

The other handy feature is the ability to automatically upload NSX-T tech support log bundles to VMware technical support using Log Assist, which will save operations teams a lot of time when dealing with NSX-T support cases.

There are new Findings and Recommendations:

  • NSX-T Findings that pick up deployment issues within your NSX-T environment
  • New VMware Security Advisories that inform you about potential vulnerabilities so you can stay vigilant about security risks

If you have the Auto Upgrade feature enabled in your Skyline Collector, your Collectors will update automatically. Otherwise you can download the new version from the Collector VAMI. Note: the Skyline Collector must be able to receive update notifications from vapp-updates.vmware.com.

vSphere 6.7 General Support Extended

Previously, general support for both vSphere 6.5 and 6.7 ran for a full five years from the official release of vSphere 6.5, ending on 15 November 2021.

Earlier this month VMware announced an extension of General Support for vSphere 6.7. General support for vSphere 6.5 still ends on 15 November 2021, while for vSphere 6.7 it is now extended to 15 October 2022.

This allows VMware customers to keep their vSphere platforms in support while preparing to upgrade to vSphere 7.

VMware provides bug and security fixes, patches, upgrades and high priority (P1) technical support for customers on active general support.

Below are the End of General Support (EoGS) dates for vSphere:

Product      | General Availability | End of General Support | End of Technical Guidance
vSphere 6.0  | 12 Mar 2015          | 12 Mar 2020            | 12 Mar 2022
vSphere 6.5  | 15 Nov 2016          | 15 Nov 2021            | 15 Nov 2023
vSphere 6.7  | 17 Apr 2018          | 15 Oct 2022            | 15 Nov 2023
vSphere 7.0  | 02 Apr 2020          | 02 Apr 2025            | 02 Apr 2027
(vSphere Lifecycle Matrix)

You might still get technical guidance from VMware before EoTG if you have active VMware support, even if your vSphere version is out of general support. However, you won't be able to log high-priority (P1) tickets with VMware after EoGS.

In terms of licensing, there is no requirement to upgrade license keys if you are upgrading from 6.0 to 6.5 or 6.7, as they are all vSphere 6.x versions. But if you are planning to upgrade to vSphere 7.0, the vSphere 6.x licenses won't work on the upgraded products and you will need to assign new licenses.

If you have an active subscription and support with VMware, then you can easily upgrade your vSphere licenses via the myVMware portal. Otherwise, check out the link below to verify your license upgrade eligibility with VMware.

https://www.vmware.com/products/vsphere/upgrade-center.html#licensing

PowerCLI script to move a virtual disk between two VMs

<#
    MoveVD.ps1
    Move a virtual disk between two VMs

    Recently I was asked to write a script to easily detach a virtual disk from one VM and attach it to another VM.
	

    .History.
	2020/05/28 - 0.1 - Reza Rafiee		- Initial version
	

#>

###############################
Write-host (" ")
$SourceVM = Read-Host "Enter Source VM Name "
$srcVM=Get-VM -Name $SourceVM

Write-host ("The attached virtual disks on $srcVM.name ")
get-vm -name $srcVM | Get-HardDisk | Select Name,CapacityGB,Persistence,Filename

Write-host (" ")

$VDiskNumber = Read-Host "Enter the Virtual Hard Disk Number that you want to detach from $($srcVM.Name) "

$VDiskSize = Read-Host "Enter the Disk Size (GB) "

Write-host (" ")

$TargetVM = Read-Host "Enter Target VM Name "




$trgVM= Get-VM -Name $TargetVM
$trgDisk="Hard Disk $VDiskNumber"

$disk = Get-HardDisk -VM $srcVM | Where-Object {($_.Name -eq $trgDisk) -and ($_.CapacityGB -eq [decimal]$VDiskSize)}


If ($null -eq $disk) {
    Write-Host ("No Hard Disk found as ($trgDisk - $VDiskSize GB) on $SourceVM")
    exit
}

$confirmation = Read-Host -Prompt "Are you sure you want to detach ($trgDisk - $VDiskSize GB) on $SourceVM and attach it to $TargetVM ? [y/n]"

If ($confirmation -eq "y") {
	Remove-HardDisk $disk -Confirm:$false
	New-HardDisk -VM $trgVM -DiskPath $disk.Filename
#You can also specify the SCSI controller of which the disk should be attached to by adding the following parameter to the above command:  -Controller "SCSI Controller 0"
	Write-host (" ")
	Write-host ("The attached virtual disks on $trgVM.name ")
	get-vm -name $trgVM | Get-HardDisk | Select Name,CapacityGB,Persistence,Filename
	
	}
###############################
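
A quick usage note on MoveVD.ps1: the script assumes an existing vCenter connection, so connect first and then run it interactively. A minimal sketch (the server name is hypothetical):

Connect-VIServer -Server "vcenter.lab.local"   # the script itself does not call Connect-VIServer
.\MoveVD.ps1
# You will be prompted for the source VM name, the hard disk number and size (GB),
# and the target VM name, then asked to confirm before the disk is detached and re-attached.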

VMware NSX-T 3.0 released

VMware announced NSX-T 3.0 General Availability a few days ago and it's now available for download in VMware's portal.

NSX-T 3.0 is a major upgrade from 2.5.1 and has plenty of new features, improvements as well as bug fixes.

I have summarized some of the important features and improvements of the new NSX-T 3.0 in this post and I hope you will find it informative.

Here are the new features:

NSX Federation

  • NSX Federation is the ability to manage, control and synchronize multiple NSX-T deployments across different locations, both on-premises and in public clouds such as AWS and Azure.
  • Global Manager is the key component of NSX Federation. It provides a GUI and a REST API endpoint and enables you to configure consistent security policies across multiple locations, as well as stretched networking objects such as Tier-0 and Tier-1 gateways and segments, through a single pane of glass.
  • In the YouTube video below, Dimitri Desmidt explains NSX-T Federation in detail as part of Tech Field Day 21 (VMware demo and preview program).
  • Security policies are attached to the workload, which means the policies move with the workload during failover or migration between environments. This takes care of full network and security failover along with SRM VM failover, which simplifies DR, as the network entities are created once and the segments are stretched across locations. So in the event of a disaster the workload can be fully failed over to the recovery location with all the security policies in place.

Comprehensive Threat Protection (Distributed IDS/IPS)

  • NSX Distributed Firewall (DFW) now supports Windows 2016 physical servers in addition to Linux physical servers.
  • New Firewall configuration wizard that simplifies rule creation, especially for VLAN-backed micro-segmentation
  • Distributed IDS/IPS, Micro-Segmentation for Windows Physical Servers, Time-based Firewall Rules, and a feature preview of URL Analysis for URL Classification and Reputation.
  • Intrusion detection and prevention capabilities can now be enabled within the hypervisor to detect vulnerable network traffic on a per-VM basis, or even more granularly per vNIC of a VM, with context-based rule inspection; NSX Manager downloads and keeps the threat signature pack updated.
  • Threat detection in NSX IDS is much more efficient compared to traditional IDS due to its context-based inspection mechanism, so you can assign relevant signatures to a VM based on the running services, i.e. Linux or Windows.

NSX-T networking and security for vSphere with Kubernetes

  • Supports full-stack networking and security for vSphere with Kubernetes, including key networking functions: switching, distributed routing (T0/T1), distributed firewalling, load balancing, distributed LB, NAT, IPAM, and network identity lifecycle.
  • Watch the YouTube video below from Vinay Reddy that explains the networking and security capabilities of NSX-T in vSphere with Kubernetes:
NSX-T for vSphere Kubernetes by Vinay Reddy
  • Integration with VMware Tanzu Kubernetes Grid Service
  • L2-7 container networking services to non-VMware Kubernetes platforms

Telco cloud enhancements

  • Multi-tenancy enhancements and support by adding VRF Lite and Overlay EVPN
  • VRF Lite support provides multi-tenant data plane isolation through Virtual Routing and Forwarding (VRF) on the Tier-0 gateway
  • L3 EVPN support provides northbound connectivity for Telco VNFs to the overlay networks and maintains isolation on the data plane by using one VNI per VRF
  • Multicast routing for scalable networking and accelerated data plane performance. Multicast replication is only supported on T0; according to VMware, T1 will be supported in future releases.
  • NAT64, which provides stateful NAT from IPv6 to IPv4
  • East-West service chaining for NFV: the ability to chain multiple services can now also be extended to redirect edge traffic
  • IPv6 support for containers

Some other new features

Converged VDS 7.0

  • NSX-T now supports VDS: you can deploy NSX-T on an existing VDS 7.0 with no VM network disruption, which makes deployments much easier in brownfield environments.

Support for vRNI 5.2

  • “In addition to NSX, VMware also rolled out VMware vRealize Network Insight 5.2, the company’s network visibility and analytics software. The new software features machine learning support for Flow Based Application Discovery will automatically group VMs into applications and tiers for a better understanding of what is occurring on the infrastructure,” VMware stated.
  • “vRealize Network Insight 5.2 has new end-to-end visibility of the network path from VM through to VMware Cloud on AWS including the AWS Direct Connect section. For VMware SD-WAN users, there will be additional visibility into SD-WAN application and business policy support,” VMware stated.
  • I will review vRNI 5.2's new features and improvements in another post.

Automation, OpenStack and other CMP

  • Search API: Exposes NSX-T Search capabilities (already available in the UI) through the API (see the sketch after this list)
  • Terraform Provider for NSX-T – Declarative API support: Provides infrastructure-as-code by covering a wider range of constructs from networking (T0/T1 Gateway, segments), security (centralized and distributed firewall, groups) and services (load balancer, NAT, DHCP).
  • Enhanced Ansible Module for NSX-T support for Upgrade (in addition to install) and Logical object support.
  • OpenStack Integration Improvements: extended IPv6, VPNaaS support and VRF Lite support
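
As an illustration of the Search API bullet above, here is a minimal PowerShell sketch; the manager name is hypothetical, and the endpoint path and query syntax are assumptions based on the NSX-T Policy API, so verify them against the API guide for your version.

$nsxManager = "nsx-mgr.lab.local"      # hypothetical NSX Manager FQDN
$cred       = Get-Credential           # NSX admin credentials

# Search for all segments; the search expression format is an assumption
# (add -SkipCertificateCheck on PowerShell 7 if the manager uses a self-signed certificate)
Invoke-RestMethod -Method Get -Credential $cred `
    -Uri "https://$nsxManager/policy/api/v1/search/query?query=resource_type:Segment"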

User interface improvements

  • Brand new Alarms dashboard and Network Topology Visualizations: Provides an interactive network topology diagram of Tier 0 Gateways, Tier 1 Gateways, Segments, and connected workloads (VMs, Containers), with the ability to export to PDF.
  • New Getting Started Wizards: A new getting started wizard is introduced for preparing clusters for VLAN Micro-Segmentation in three easy steps.
  • Quick Access to Actions and Alarms from Search Results: Enhanced search results page to include quick access to relevant actions and alarms. Added more search criteria across Networking, Security, Inventory, and System objects.
  • User Interface Preferences for NSX Policy versus Manager Modes: You can switch between NSX Policy mode and NSX Manager mode within the user interface, as well as control the default display. By default, new installations display the UI in NSX Policy mode, and the UI Mode switcher is hidden. Environments that contain objects created through NSX Manager mode (such as from NSX upgrades or cloud management platforms) by default display the UI Mode switcher in the top right-hand corner of the UI.
  • UI Design Improvements for System Appliances Overview: Improved UI design layout for displaying resource activity and operational status of NSX system appliances.
  • Security Dashboards: NSX-T 3.0 introduces new Security Overview Dashboards for security and firewall admins to see at-a-glance the current operational state of firewall and distributed IDS.
  • Security wizards for VLAN-based Micro-Segmentation: You can configure your data centers to introduce segmentation using NSX-T in very easy steps.
  • Container Inventory & Monitoring in User Interface: Container cluster, Namespace, Network Policy, Pod level inventory can be visualized in the NSX-T User Interface. Visibility is also provided into co-relation of Container/K8 objects to NSX-T logical objects.
  • NCP Component Health Monitoring: The NSX Container Plugin and related component health information like NCP Status, NSX Node Agent Status, NSX Hyperbus Agent Status can be monitored using the NSX Manager UI/API.
  • Physical Servers Listing: NSX-T adds UI support for listing physical servers.

Wrap-up

As I mentioned before, this release is a major upgrade for the VMware NSX solution and I believe it's moving in the right direction. The combination of NSX-T and SD-WAN would be a tempting solution for telco service providers, as telcos are adopting virtualization more than ever and network virtualization plays a key role in that transformation.

The “What's new at a glance” slide gives a quick review of the new features, but more details can be found in the product release notes.

If you are keen to deep-dive into NSX-T 3.0, I would suggest checking out the NSX-T 3.0 release notes, enrolling in the VMware Hands-On Lab NSX-T sessions to practice in a very well-built lab environment, and then downloading the product to build your own sandbox and try the new features in practice.

Credits

Release notes:
https://docs.vmware.com/en/VMware-NSX-T-Data-Center/3.0/rn/VMware-NSX-T-Data-Center-30-Release-Notes.html

Download:

https://my.vmware.com/en/web/vmware/info/slug/networking_security/vmware_nsx_t_data_center/3_x

I hope you find this post useful and thank you for reading!

Disclaimer

The material and information contained in this article and on my blog are for general information purposes only. You should not rely upon the information in this article as a basis for making any business, legal or other decisions. While I try to keep the information up to date and correct, I will not be liable for any false, inaccurate, inappropriate or incomplete information presented in this article. I would advise you to check with VMware as a reference before making any decision.

Automate SNMP configuration on multiple ESXi hosts

I have created a PowerCLI script that applies SNMP configuration to multiple ESXi hosts in a vSphere cluster by replicating the configuration from a reference ESXi host.

Leave a comment with your email address if you have a question and I will get back to you soon.

Enjoy the script 🙂

  1. <#
  2.   configure-snmp-on-esxi.ps1
  3.  
  4.   Configure SNMP settings on multiple ESXi hosts using a reference host settings
  5.  
  6.   .History.
  7.   2020/04/09 - 0.1 - Reza Rafiee - First version
  8.  
  9.   .Variables.
  10.   $VC: vCenter Server
  11.   $targethosts: Target ESXi host cluster
  12.   (to apply on single ESXi host refer to line30)
  13.   $refesxhost : Reference ESXi host
  14.  
  15. #>
  16.  
  17. $VC="vCenter Server Name"
  18. $targethosts="Target cluster name"
  19. $refesxhost = "Reference ESXi host name"
  20.  
  21.  
  22.  
  23. Connect-viserver $VCServer
  24. $refhost = get-vmhost $refesxhost
  25. $refesxcli = Get-EsxCli -VMhost $refhost -V2
  26. $snmp=$refesxcli.system.snmp.get.invoke()
  27. write-host "SNMP configuration on " $refesxhost," (Refernce Host): "
  28. $snmp
  29.  
  30. $vmhosts = get-cluster -name $targethosts | get-vmhost
  31. <#If you want to apply the snmp config on a single host
  32.  then enter ESXi host name for $targethosts variable and
  33.  replace the above line with the below line:
  34.  
  35.  $vmhosts = get-vmhost -name $targethosts
  36.  
  37.  #>
  38.  
  39.  
  40. foreach ($vmhost in $vmhosts){
  41.  
  42. $esxcli = Get-EsxCli -VMHost $vmhost -V2
  43.  
  44. <#Reset SNMP settings to factory default on the target host prior to
  45. reconfigure SNMP settings on that host#>
  46. $snmpreset = $esxcli.system.snmp.set.CreateArgs()
  47. $snmpreset.reset = $true
  48. $esxcli.system.snmp.set.Invoke($snmpreset)
  49.  
  50. write-host "SNMP settigs has been reset to default on $vmhost"
  51. #SNMP settings reset complete
  52.  
  53. $esxcli = Get-EsxCli -VMHost $vmhost -V2
  54. $arguments = $esxcli.system.snmp.set.CreateArgs()
  55.  
  56. #The below arguments (if statements) cannot be null hence we skip the null ones
  57.  
  58. if ($snmp.communities -ne $null) {
  59. $arguments.communities = $snmp.communities
  60. }
  61.  
  62. if ($snmp.engineid -ne "$null") {
  63. write-host "engineid is nt null"
  64. $arguments.engineid = $snmp.engineid
  65. }
  66.  
  67. if ($snmp.targets -ne $null) {
  68. $arguments.targets = $snmp.targets
  69. }
  70.  
  71. if ($snmp.users -ne $null) {
  72. $arguments.users = $snmp.users
  73. }
  74.  
  75. if ($snmp.privacy -in ("none", "AES128")) {
  76. $arguments.privacy = $snmp.privacy
  77. }
  78.  
  79. if ($snmp.remoteusers -ne $null) {
  80. $arguments.remoteusers = $snmp.remoteusers
  81. }
  82.  
  83. if ($snmp.authentication -in ("none", "MD5", "SHA1")) {
  84. $arguments.authentication = $snmp.authentication
  85. }
  86.  
  87. if ($snmp.v3targets -in ("none", "auth", "priv")) {
  88. $arguments.v3targets = $snmp.v3targets
  89. }
  90.  
  91. $arguments.hwsrc = $snmp.hwsrc
  92. $arguments.largestorage = $snmp.largestorage
  93. $arguments.loglevel = $snmp.loglevel
  94. $arguments.notraps = $snmp.notraps
  95. $arguments.enable = $snmp.enable
  96. $arguments.port = $snmp.port
  97. $arguments.syscontact = $snmp.syscontact
  98. $arguments.syslocation = $snmp.syslocation
  99.  
  100. $esxcli.system.snmp.set.Invoke($arguments)
  101.  
  102. $newsnmp=$esxcli.system.snmp.get.Invoke()
  103. write-host "SNMP configuration on", $vmhost, ": "
  104. $newsnmp
  105.  
  106. }

Create multiple VDS port groups using PowerCLI

This is a PowerCLI script for creating multiple portgroups in a VMware Distributed Switch.

It can be helpful for migrating from Nexus 1000v to VMware VDS.

Please note that this script is only for creating switchport port groups (single VLAN) and not for trunk port groups (VLAN range).

If you have any question about the script, please leave a comment.

<#
    Create_VDS_PortGroups

    Creates Port Groups in a VMware Distributed Switch

    Feed the script with "portgroups.csv" file with portgroup names and vlan IDs and then update $VDS with the terget VDS_Name.
	
	portgroup.csv must have "portgroup" and "vlan" header to identify the portgroup name and corresponding vlan ID.
	
	Please note that this code is only for creating switchports (single VLAN) not trunk port groups (VLAN range).
	
	I wrote this script for migrating portgroups from Nexus 1000v to VMware Distributed Switch
    
    .History.
	2020/03/06 - 0.1 - Reza Rafiee	- First version

#>

############################
$PGs = Import-CSV .\portgroups.csv
$VDS = "VDS Name"
$RefPG = Get-VDPortgroup -Name "Reference Portgroup Name"

ForEach ($PG in $PGs) {
    # Create the new portgroup by cloning the reference portgroup, then set its notes
    $newPG = Get-VDSwitch -Name $VDS |
        New-VDPortgroup -Name $PG.portgroup -ReferencePortgroup $RefPG |
        Set-VDPortgroup -Notes $PG.description
    # Override the VLAN ID with the value from the CSV
    Set-VDVlanConfiguration -VDPortgroup $newPG -VlanId $PG.vlan -Confirm:$false
}
############################
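
For reference, a minimal portgroups.csv might look like this (the names, VLAN IDs and descriptions are hypothetical); the script reads the portgroup, vlan and description columns:

portgroup,vlan,description
PG-App-110,110,Application servers
PG-DB-120,120,Database servers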


Connect vNIC on a VM to network using command line

Once upon a time I had an ESXi host in a disconnected state; the management services were out of order, and even restarting them couldn't bring the host back to a manageable state.

While the host was only partially manageable, we had to connect a network interface of a VM to the network, and the only option was the command line. The commands below did the job.

First, you will need to find the VM ID and the vNIC device ID using the two commands below:

vim-cmd vmsvc/getallvms | grep "VM_Name"
vim-cmd vmsvc/get.configuration "VM_ID"

Then you can run the below command to connect/disconnect the vNIC:

vim-cmd vmsvc/device.connection "VM_ID" "DEVICE_ID" true|false

For example, assuming the VM ID returned were 42 and the vNIC device key 4000 (both hypothetical values), the calls would look like this:
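
vim-cmd vmsvc/getallvms | grep "app01"         # returns the VM ID, e.g. 42
vim-cmd vmsvc/get.configuration 42             # shows the vNIC device key, e.g. 4000
vim-cmd vmsvc/device.connection 42 4000 true   # connect the vNIC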

How to know if ESXi or Xen server is using UEFI or Legacy boot mode

There might be times when you need to know whether an ESXi host's boot mode is set to UEFI or Legacy, and obviously one option is to reboot the host and check the boot mode in the BIOS. But that requires downtime, and sometimes that's not an option in a critical production environment.

Here is a simple check for both ESXi and Xen servers that lets you identify the boot mode without rebooting the server:

VMware:

To check the boot type of an ESXi host, run this command from an SSH session (e.g. PuTTY):

vsish -e get /hardware/firmwareType

Xen:

To check the boot type of a Xen host, check for the efi folder under /sys/firmware/.

Open the Xen host console and list the contents of the /sys/firmware folder by running the commands below:

cd /sys/firmware/
ls

If the listing contains a folder labeled efi, then the host booted in UEFI mode. Otherwise it's Legacy (BIOS) boot.