Posts

Showing posts from 2022

NSX-T Application Platform - setup and deployment with vSphere TKGS/NSX-T/self-signed Harbor

     With the introduction of NSX-T v3.2, a new option is available in the environment - NSX Application Platform (NAPP), whose main purpose is delivering NGFW services inside the VMware SDN solution (Advanced Threat Protection, NDR, Intelligence, AMP etc.). It is a solution built the modern-app way, meaning a Kubernetes (k8s) infrastructure must be in place. Different options are available for that - vanilla k8s, OpenShift, or the option with deep integration into the well-known vSphere solution - Tanzu. I would like to split this, more or less, complicated process into a couple of segments, highlighting the points that could be most helpful. The environment I used for the demo is based on a 4-node vSAN cluster / vSphere v7 / NSX-T v3.2.1.2 / TKGS for vSphere (k8s platform). I - TKGS for vSphere setup 1) Content library creation - the place to download the k8s images required for Supervisor cluster deployment inside vSphere. The subscription URL is available at:  https://wp-content.vmware.com/...
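As a quick illustration of the TKGS side of this setup, the sketch below shows how a deployed Supervisor namespace and guest cluster could be verified with the kubectl vSphere plugin before moving on to the NAPP deployment wizard; the server address, username and namespace name are placeholders for this example, not values from the post.

    # log in to the Supervisor cluster with the kubectl vSphere plugin (placeholder address/user)
    kubectl vsphere login --server=192.168.10.10 --vsphere-username administrator@vsphere.local --insecure-skip-tls-verify

    # switch to the vSphere namespace prepared for NAPP (placeholder name)
    kubectl config use-context napp-ns

    # confirm the Tanzu Kubernetes (guest) cluster is provisioned and running
    kubectl get tanzukubernetescluster -n napp-ns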

NSX-T implementation plan / steps overview

     Here I would like to lay out, in one place, the main steps needed for a successful NSX-T deployment, considering versions above 3.x: - NSX Manager deployment - OVA, with the typical infrastructure services prepared (IP/DNS/NTP/passwords...) - pay attention to the GRUB setup (the last resort in case the root password is lost) - Compute manager creation - the interconnection between NSX and vCenter - Deployment of additional NSX Managers through the NSX Manager UI - Cluster VIP setup - Disable user password expiration (root/admin/audit) per your security policies ( clear user <user> password-expiration ) or change the expiration period ( set user <user> password-expiration <days> ), as shown in the sketch below - DRS rules setup to keep NSX nodes on different servers - Setup of SFTP backup - a FIPS compliant server store is needed - Setup of LDAP for AAA services - IP pool creation for TEPs - Transport zone creation / Transport node profile (for ESXi) / Host prep - Transport node profile (for Edge nodes) / Edg...
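For the password-expiration step above, the commands are run from the NSX Manager CLI as admin - check the current value first, then either clear it or extend the period, and verify overall cluster health afterwards. A minimal sketch (user names and period per your own policy):

    get user admin password-expiration
    clear user admin password-expiration
    set user audit password-expiration 9999
    get cluster status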

vSphere Microsoft clustering - RDM & VMDK shared disk option for WSFC

    There are multiple improvements in the clustering techniques available to customers from vSphere 7 onwards, giving different options and simplifying the work needed to satisfy high-availability requirements. Before v7 you could rely only on the different pRDM (physical RDM) options or on vVols (VMware vSphere Virtual Volumes), with clustered VMDK support first appearing on vSAN 6.7 U3.     Starting with vSphere v7.x there is support for Windows Server Failover Cluster (WSFC) with shared VMDK based disk resources, achieved by transparently passing to the underlying storage, or emulating at the datastore level, the SCSI-3 Persistent Reservations (SCSI3-PR) commands required for a WSFC node (a VM participating in a WSFC) to arbitrate access to a shared disk.     In summary, from the storage perspective the following table shows the current options for disk sharing in WSFC cluster environments - pay attention that, for clustered VMDK based disks, only FC based storage is supported a...
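For the clustered VMDK option, the shared disk itself has to be eager-zeroed thick; a minimal sketch from the ESXi shell, with a hypothetical datastore path and size:

    # create an eager-zeroed thick VMDK to be attached to both WSFC node VMs (hypothetical path/size)
    vmkfstools -c 40G -d eagerzeroedthick /vmfs/volumes/datastore1/wsfc/quorum.vmdk

The disk is then attached to each WSFC node on a dedicated SCSI controller with bus sharing set to physical, which is where the SCSI-3 PR handling mentioned above comes into play.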

NSX-T Layer 2 bridging - scenarios & use cases

     Layer 2 bridging is a very useful NSX-T feature which provides a connection to a VLAN backed port group, or to a device such as a gateway, that resides outside of the NSX-T DC environment. Useful scenarios, among others, are: workload migration from VLAN-backed networks to NSX overlay segments, NSX-V to NSX-T migration in customer environments, leveraging security features such as the NSX-T Gateway firewall etc. The L2 bridging feature requires the use of Edge clusters and Edge Bridge profiles.      Deployments should consider the different options below, covering the most important scenarios (the Edge VM deployment option is taken as the typical use case): Edge VM on a VSS portgroup  --> promiscuous mode and forged transmits on the portgroup REQUIRED / on the ESXi host running the Edge VM the command " esxcli system settings advanced set -o /Net/ReversePathFwdCheckPromisc -i 1 " is REQUIRED / active and standby Edge VMs should be on different hosts; Edge VM on a VDS 6.6 (or later) portgroup  -->...
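For the VSS scenario above, the portgroup security settings can also be adjusted from the ESXi shell instead of the vSphere Client; a minimal sketch, assuming a hypothetical portgroup named Edge-Bridge-PG:

    # allow promiscuous mode and forged transmits on the bridge-facing portgroup (hypothetical name)
    esxcli network vswitch standard portgroup policy security set -p "Edge-Bridge-PG" --allow-promiscuous=true --allow-forged-transmits=true

    # the reverse path forwarding exception from the scenario above, on the host running the Edge VM
    esxcli system settings advanced set -o /Net/ReversePathFwdCheckPromisc -i 1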

NSX ALB LetsEncrypt script parameters and usage

     In one of my previous posts about NSX ALB (Avi) and Let's Encrypt integration LINK , I explained how useful implementing a service like this can be, especially in customer environments where a large number of different DNS records exist, serving different virtual services with a publicly trusted, signed digital certificate. Based on the main part of this functionality, the GitHub script used for the certificate management service inside NSX ALB ( LINK - v0.9.7 current at the time of writing), I would like to show you the different options that are available and useful depending on the use case and scenario.     The parameters used by the script are well defined and usable inside the certificate management configuration on NSX ALB: user / password - self-explanatory and needed by the certificate management service for a successful run of the script. Permissions come from a custom role defined with read & write access enabled for Virtual Service, Application Profile, SSL/TLS Certificates and Cer...
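Before wiring the script into a certificate management profile, it can be worth checking that the HTTP-01 challenge path of the virtual service is reachable at all from the outside; a minimal sketch, with a hypothetical FQDN and token:

    # the ACME HTTP-01 challenge is served over plain HTTP on port 80 (hypothetical FQDN/token)
    curl -v http://app.example.com/.well-known/acme-challenge/test-token

A 404 from the virtual service itself is expected at this point; what matters is that the request reaches the VS on port 80, since the Let's Encrypt validation servers will use the same path during issuance.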

NSX-T - useful CLI commands at one place

    In this article I will try to summarize the most useful CLI commands in an NSX-T environment, the ones I personally favor, so you can quickly make observations and troubleshooting decisions, hopefully in an easy manner with relevant outputs. The NSX-T environment and its CLI support come with many options - many GET / SET CLI commands etc., including the Central CLI (more in a very nice post at this   LINK ) - but here I'm going to list the most interesting ones, from my perspective, and this list will surely be expanded: - PING test using the TEP interface: vmkping ++netstack=vxlan <IP>  [vmkping ++netstack=vxlan -d -s 1572 <destination IP>] - example sending a 1572-byte packet without fragmentation - Enable firewall logging for rules configured inside NSX-T: esxcli network firewall ruleset set -r syslog -e true  - enables firewall SYSLOG generation inside the ESXi transport node; tail -f /var/log/dfwpktlogs.log | grep ...
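As a small addition to the TEP ping example above, the TEP vmkernel interfaces themselves can be listed from the ESXi shell first; a minimal sketch (the netstack name stays vxlan even though the overlay encapsulation is GENEVE):

    # list vmkernel interfaces and note the ones bound to the vxlan netstack (the TEPs)
    esxcli network ip interface list

    # check their IPv4 addresses before running the vmkping tests above
    esxcli network ip interface ipv4 get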

NSX Edge / Transport node TEP networking scenarios

    Over time and across different NSX-T versions, different options have been available from the perspective of Edge/ESXi transport node TEP ( Tunnel EndPoint ) networking, giving multiple ways to fulfil even the most demanding scenarios in this area. Some of them offer more flexibility or simplicity, but the ultimate goal of a functional SDN is always satisfied.     One VMware article gives a summary overview of what you can use and plan, and I have found it very useful on several occasions as a reminder of how something in the TEP/VLAN area can be achieved - LINK     In summary, the mentioned KB gives the following options from the TEP networking perspective, comparing Edge nodes and ESXi transport nodes: Edge TEP and ESXi host transport node TEP can be configured on the same VLAN in the following configurations: - Edge VM TEP interface connected to a portgroup on an ESXi host not prepared for NSX - Edge VM TEP interface connected to a portgroup on a switch not used by...
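Whatever VLAN layout is chosen, the quickest sanity check is that the host TEP can reach the Edge TEP with overlay-sized packets; a minimal sketch from the ESXi shell, with a placeholder Edge TEP address:

    # from the ESXi transport node, ping the Edge node TEP over the TEP netstack
    vmkping ++netstack=vxlan <Edge TEP IP>

    # repeat with a large payload and the don't-fragment bit set to validate the overlay MTU end to end
    vmkping ++netstack=vxlan -d -s 1572 <Edge TEP IP>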

NSX-T T0 BGP routing considerations regarding MTU size

     Recently I had a serious NSX-T production issue involving BGP and the T0 routing instance on an Edge VM cluster: routes that were supposed to be received from the ToR L3 device were missing from the T0 routing table. An NSX-T environment has several options for connecting the fabric to the outside world. These connections go over the T0 instance, which can be configured to use static or dynamic routing (BGP, OSPF) for this purpose. MTU is an important consideration inside an NSX environment because of the GENEVE encapsulation in the overlay (it should be a minimum of 1600 bytes - 9k ideally). Routing protocols are also subject to MTU checks (OSPF is out of scope for this article, but you know that MTU is checked during neighborship establishment). Different networking vendors use various options for MTU path discovery - by default these mechanisms should be enabled in BGP (but of course this should be checked and confirmed). The problem arrives when you configure the ToR as i.e. ...
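When the T0 routing table comes up empty like that, the state of the BGP sessions can be checked directly from the Edge node CLI; a minimal sketch (the VRF number of the T0 service router will differ per environment):

    get logical-routers
    vrf 1
    get bgp neighbor summary
    get route bgp

A session that establishes but never delivers the expected prefixes, or keeps flapping, fits the MTU-related behaviour described above, and comparing the interface MTU on both ends is then the next step.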

ESXi 7 and TPM 2.0 - Host TPM attestation alarm explanation

     With ESXi 7, on new systems or systems upgraded from 6.x, a couple of changes are introduced at the host hardware security (tamper detection) level using the Trusted Platform Module (TPM) chip. Occasionally an alarm appears in the vCenter console like the one in the picture below (I encountered this myself on Dell PowerEdge hardware):     Per this VMware LINK 1 , the TPM 2.0 chip provides, with UEFI secure boot configured, successful attestation, verified remotely by the vCenter system, based on stored measurements of the software modules booted in the ESXi system. Specifically, from vSphere v7 a new " vSphere Trust Authority Attestation Service is introduced, which signs a JSON Web Token (JWT) that it issues to the ESXi host, providing the  assertions about the identity, validity, and configuration of the ESXi host " - giving the option to build something like a completely trusted infrastructure inside vSphere LINK 2 . But, before that can happen, a couple of requirements are ment...
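When chasing this alarm, the TPM and secure boot state can also be checked from the ESXi shell before digging into the vCenter side; a minimal sketch:

    # show whether a TPM is present and trusted/measured boot is active on the host
    esxcli hardware trustedboot get

    # show the host encryption mode and secure boot enforcement settings (vSphere 7)
    esxcli system settings encryption get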

NSX Advanced Load Balancer (ex AVI Networks) Lets Encrypt script integration

     I would like to share a very useful setup for VMware NSX ALB (the former Avi solution) that makes use of the freely available Let's Encrypt certificate management service. Basically, the provided script gives you automation inside the NSX ALB environment, without the need for external tools or systems. To summarize, these are the required steps: - create the appropriate virtual service (VS) you will use for the SSL setup with the Let's Encrypt cert - this can be a standalone service or SNI ( Server Name Indication ) based ( Parent/Child ) if needed. Initially you can select the "System-Default" SSL cert during the VS setup; - create the appropriate DNS records for the new service - out of scope of NSX ALB most of the time. The NSX ALB Controllers should have access to the Let's Encrypt public servers for successful ACME based HTTP-01 certificate generation/renewal; - download the required script from  HERE - follow the rest of the required configuration steps at this link  NSX-ALB-Lets-Encrypt-S...
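Once the certificate has been issued and assigned to the virtual service, the result can be verified from any client with OpenSSL; a minimal sketch, with a hypothetical FQDN:

    # connect with SNI and print the issuer and validity dates of the presented certificate
    openssl s_client -connect app.example.com:443 -servername app.example.com </dev/null 2>/dev/null | openssl x509 -noout -issuer -dates

The issuer line should now show a Let's Encrypt intermediate instead of the System-Default self-signed certificate.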

Linux VM online disk expansion

      Occasionally there is a need to expand a disk assigned to a Linux virtual machine, depending on the infrastructure or service provided inside the datacenter. Personally, I liked the feature in the Windows OS where, after you resize the HDD through the virtual machine settings, the Disk Management tool gives you an easy option to do an online expansion, without a reboot or anything similar.     Most of the time the equivalent action on Linux based VMs was, at least on my side, performed with a reboot followed by the disk expansion process. The particular commands needed to expand a disk inside Linux VMs are very well explained at the following links: extending a logical volume (LVM) in a Linux virtual machine  - a Red Hat/CentOS example, or increasing the size of a Linux ext3 virtual machine disk . There is also a useful KB if you need to create a new disk on an existing Linux VM, for test/lab purposes, before trying the process below - LINK     I would like to introduce a couple of new o...
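For the online path, a minimal sketch of the usual LVM flow is below; the device, partition and volume group names are placeholders for this example, and growpart comes from the cloud-utils package:

    # make the guest kernel notice the resized virtual disk without a reboot (placeholder device sdb)
    echo 1 > /sys/class/block/sdb/device/rescan

    # grow partition 1 on that disk, then the LVM physical volume on top of it
    growpart /dev/sdb 1
    pvresize /dev/sdb1

    # extend the logical volume with all free space and resize the filesystem in one step
    lvextend -r -l +100%FREE /dev/mapper/vg_data-lv_data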

NSX-T - North/South Edge uplink connection options and scenarios

     In this slightly longer post I'm going to explain a couple of typical use case scenarios for the connection options on the Edge side of an NSX-T environment, covering TEP and North/South traffic. Every environment is a special case, but I hope you will find here a summary of options you can use for successful deployment and design planning. First, a couple of assumptions I made here: - the vSphere environment is v7.x - a vDS distributed switch is in place - this can dramatically simplify NSX-T design and implementation because of the NSX support built into vSphere 7. vSphere 7 is the prerequisite for this, and if you have that kind of infrastructure then, unless you have some special reason, N-VDS is not necessary at all - basically, we are talking about the post NSX-T v2.5 era where, similar to bare metal Edge nodes, the Edge VM form factor also supports the same vDS/N-VDS for overlay and external traffic - definitely no more "three N-VDS per Edge VM design"...
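With the Edge VM attached to vDS portgroups as described, it can be handy to confirm from the host which portgroup and uplink each Edge vNIC actually lands on; a minimal sketch from the ESXi shell (the Edge VM name is a placeholder):

    # find the world ID of the Edge VM, then list its ports with portgroup and uplink mapping
    esxcli network vm list
    esxcli network vm port list -w <world ID of edge-node-01>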