NSX-T Application Platform - setup and deployment with vSphere TKGS/NSX-T/self-signed Harbor

    With the introduction of NSX-T v3.2, a new option is available in the environment - NSX Application Platform (NAPP), whose main purpose is delivering NGFW services inside the VMware SDN solution (Advanced Threat Protection, NDR, Intelligence, AMP, etc.). It is a solution built the modern-app way - meaning a Kubernetes (k8s) infrastructure must be in place. Different options are available for that - vanilla k8s, OpenShift, or the one with deep integration into the well-known vSphere solution - Tanzu. I would like to split this, more or less, complicated process into a couple of segments, with the most interesting points that could perhaps help. The environment I used for play/demo is based on a 4-node vSAN environment / vSphere v7 / NSX-T v3.2.1.2 / TKGS for vSphere (k8s platform).


I - TKGS for vSphere setup

1) Content library creation - the place from which the required k8s images for the Supervisor cluster deployment are downloaded into vSphere. The subscription URL is: https://wp-content.vmware.com/v2/latest/lib.json (if vCenter does not have access to the Internet, this step must be done manually!):

2) Supervisor cluster setup - different options are available depending on how the networking and load balancing (LB) features are utilised. A good quick-setup guide is already available directly from VMware in the form of a PoC guide (which can be used for a lab/demo environment): https://core.vmware.com/resource/tanzu-proof-concept-guide#poc-guide-overview Generally - as mentioned - different options are available from the networking/LB point of view, but they can be summarised as:

- option 1 - NSX-T - with all the nice, already included NSX stuff (vSphere Pod + k8s cluster support) - the option I used for the rest of the lab/demo, or

- option 2 - vSphere vDS networking with NSX ALB (the next-generation LB product, formerly Avi Networks) or an open-source alternative (e.g. HAProxy) - only k8s clusters are supported.

During this step - just a couple of suggestions:

- don't use a *.local domain if possible, because of checks that different k8s cluster solutions may perform in the future,

- different networks are required for the NSX deployment, entered in the wizard steps (my lab values are summarised right after this list):

namespace network (IP addresses for workloads attached to Supervisor cluster namespace segments - if NAT mode is unchecked, this IP CIDR must be routable) - should be at least a /22, or better a /20 (10.245.0.0/20 in my lab);

service CIDR (internal block from which IPs for k8s ClusterIP services are allocated - it cannot overlap with the IPs of the Workload Management components: vCenter, NSX, ESXi hosts, DNS, NTP) - 10.96.0.0/22 in my lab;

ingress CIDR (used to allocate IP addresses for services published via service type LoadBalancer and Ingress across all Supervisor cluster namespaces) - advertised from the NSX domain to the external world - basically, incoming traffic to this space is translated to the namespace network (172.16.24.0/24 in my lab);

egress CIDR (allocates IP addresses for SNAT (Source Network Address Translation) of traffic exiting the Supervisor cluster namespaces) - basically the opposite of the ingress space, used for outbound traffic and also advertised from the NSX routing domain (172.16.26.0/24 in my lab); and

management CIDR for the obvious purpose - the only network that is created manually during the Workload Management enablement (it can also be an NSX segment).
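For reference, all the values from my lab in one place:

namespace network:  10.245.0.0/20
service CIDR:       10.96.0.0/22
ingress CIDR:       172.16.24.0/24
egress CIDR:        172.16.26.0/24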

3) After the TKGS deployment, web access to the Supervisor cluster should present a page for downloading the kubectl + vSphere plugin CLI tools:

These tools should be installed on a Mac, Linux or Windows jump host for successful manipulation of the k8s clusters inside vSphere. Useful commands at this stage, in one place, from my perspective:

kubectl.exe vsphere login --server <IP> -u <username> --- login to supervisor TKGS cluster

kubectl config get-contexts / kubectl config use-context --- list/switch namespace contexts

kubectl config use-context <namespace name> --- switch to the desired namespace

kubectl.exe apply -f <name_of_yaml_config_file> --- apply a YAML configuration file

kubectl get tkr --- get available Tanzu Kubernetes releases (runtimes)

kubectl.exe get nodes -o wide --- get nodes with IPs

 

II - setup namespace for NAPP

1) Create a new namespace / assign the required permissions and storage policies

2) Create the required VM classes per the NAPP requirements and the form factor you want to deploy, and assign them to the created NAPP namespace - info available at https://docs.vmware.com/en/VMware-NSX/4.0/nsx-application-platform/GUID-85CD2728-8081-45CE-9A4A-D72F49779D6A.html#GUID-85CD2728-8081-45CE-9A4A-D72F49779D6A
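Before writing the YAML in the next step, it can help to verify what the namespace actually sees - a quick check from the Supervisor context (a sketch, using the standard vSphere with Tanzu resources available in recent vSphere 7 releases; napp-final is the namespace name used in the example below):

kubectl config use-context napp-final
kubectl get virtualmachineclassbindings     --- VM classes assigned to this namespace
kubectl get storageclass                    --- storage policies exposed as storage classes
kubectl get tkr                             --- available Tanzu Kubernetes releases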

3) Create a YAML file for the k8s cluster deployment - this can be tricky, so I will put an example of mine here, with a thorough explanation:


apiVersion: run.tanzu.vmware.com/v1alpha1
kind: TanzuKubernetesCluster
metadata:
  name: napp-final-cluster                            #cluster name - user defined
  namespace: napp-final                               #vSphere namespace - already created
spec:
  distribution:
    version: v1.19.7---vmware.1-tkg.2.f52f85a         #check tkr inside TKGS and the versions supported for NAPP deployment
  topology:
    controlPlane:
      count: 1                                        #number of control plane nodes
      class: napp-std-ctrl                            #vmclass created for control plane nodes
      storageClass: vsan-default-storage-policy       #storageclass created for control plane
      volumes:
        - name: containerd
          mountPath: /var/lib/containerd
          capacity:
            storage: 64Gi
    workers:
      count: 3                                        #number of worker nodes that will be created
      class: napp-standard                            #vmclass created for worker nodes
      storageClass: vsan-default-storage-policy       #storageclass for worker nodes
      volumes:
        - name: containerd
          mountPath: /var/lib/containerd
          capacity:
            storage: 64Gi
  #settings:
    #network:
      #cni:
        #name: calico                                 ###optional - change the default CNI from Antrea to open-source Calico
      #pods:
        #cidrBlocks:
        #- 10.4.44.0/22                               ###optional - change the already defined namespace network
      #services:
        #cidrBlocks:
        #- 10.6.68.0/22                               ###optional - change the already defined services CIDR
      #trust:                                         ###optional - if you're using a private Harbor image registry and a self-signed certificate - explained in section III during the Harbor setup
        #additionalTrustedCAs:
          #- name: harbor-lab-selfSigned
            #data: |
              #LS0tLS1CRUdJT..............................<BASE-64 encoded>

 


4) Login to the namespace, switch to the required context, and apply the prepared YAML file for your environment - all commands are mentioned above, and a short recap follows.
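A minimal sequence, reusing the names from the example YAML (napp-final namespace, napp-final-cluster cluster) - the file name is just an example:

kubectl vsphere login --server <supervisor-IP> -u <username>
kubectl config use-context napp-final
kubectl apply -f napp-final-cluster.yaml
kubectl get tanzukubernetesclusters -n napp-final     --- watch until the cluster reports a running phase
kubectl vsphere login --server <supervisor-IP> -u <username> --tanzu-kubernetes-cluster-name napp-final-cluster --tanzu-kubernetes-cluster-namespace napp-final     --- later, to log into the workload cluster itself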


III - Harbor registry setup

1) The public Harbor registry from VMware can be used if the k8s cluster has Internet access - a private one should be created in all other cases. A typical VM for the Harbor purpose could be:

Ubuntu Server 20.04 LTS / 2 vCPU and 4-8 GB RAM / a disk with enough space for images, i.e. >100 GB

2) Install the Docker engine on Ubuntu - https://docs.docker.com/engine/install/ubuntu/

sudo apt-get update

sudo apt-get install ca-certificates curl gnupg lsb-release

sudo mkdir -p /etc/apt/keyrings

curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /etc/apt/keyrings/docker.gpg

echo "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.gpg] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null

sudo apt-get update

sudo apt-get install docker-ce docker-ce-cli containerd.io docker-compose-plugin

sudo systemctl status docker

docker compose version     (the docker-compose-plugin installed above provides the "docker compose" command rather than the standalone docker-compose binary)
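Optionally, the standard smoke test from the Docker install guide:

sudo docker run hello-world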

3) Install the Helm package manager for k8s - needed for the NAPP deployment

curl -fsSL -o get_helm.sh https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3     (the script checks for the latest version and downloads it)

chmod 700 get_helm.sh

./get_helm.sh

4) Download Harbor: wget https://github.com/goharbor/harbor/releases/download/v2.6.1/harbor-online-installer-v2.6.1.tgz

tar xvfz harbor-online-installer-v2.6.1.tgz

cp harbor.yml.tmpl harbor.yml     (so you can edit it without problems)

harbor.yml should be edited with your favourite editor (vi, nano, etc.) - a couple of settings are the most important:

hostname - should match the certificate you plan to use with Harbor (public or private - if private, additional steps are needed, as mentioned below)

data_volume: /data     (where images are being stored)

certificate: /home/certs/...

private_key: /home/certs/...

harbor_admin_password: <xyz321>
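Put together, the relevant fragment of harbor.yml would look roughly like this (the hostname matches the lab registry used later; certificate paths and password are placeholders):

hostname: harbor-lab.sr

https:
  port: 443
  certificate: /home/certs/harbor-lab.sr.crt
  private_key: /home/certs/harbor-lab.sr.key

harbor_admin_password: <xyz321>

data_volume: /data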

Regarding the certificate - if you plan to use a private CA and certificate for Harbor inside your lab, you can follow the steps from https://goharbor.io/docs/1.10/install-config/configure-https/ - basically, it explains how to configure HTTPS access to Harbor from scratch (CA creation, server certificate creation, upload, etc.), and the generated certificate is what you reference in the harbor.yml settings above.
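A condensed sketch of that guide, assuming harbor-lab.sr as the Harbor hostname and /home/certs as the target directory (names and paths are lab examples):

openssl genrsa -out ca.key 4096
openssl req -x509 -new -nodes -sha512 -days 3650 -subj "/CN=Lab-Root-CA" -key ca.key -out ca.crt
openssl genrsa -out harbor-lab.sr.key 4096
openssl req -sha512 -new -subj "/CN=harbor-lab.sr" -key harbor-lab.sr.key -out harbor-lab.sr.csr
cat > v3.ext <<EOF
authorityKeyIdentifier=keyid,issuer
basicConstraints=CA:FALSE
keyUsage = digitalSignature, nonRepudiation, keyEncipherment, dataEncipherment
extendedKeyUsage = serverAuth
subjectAltName = DNS:harbor-lab.sr
EOF
openssl x509 -req -sha512 -days 3650 -extfile v3.ext -CA ca.crt -CAkey ca.key -CAcreateserial -in harbor-lab.sr.csr -out harbor-lab.sr.crt
sudo mkdir -p /home/certs && sudo cp harbor-lab.sr.crt harbor-lab.sr.key /home/certs/

The resulting harbor-lab.sr.crt / harbor-lab.sr.key pair is what harbor.yml points to, and ca.crt is the file that gets Base64-encoded later for the additionalTrustedCAs section of the k8s cluster YAML.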

Install Harbor with: sudo ./install.sh --with-chartmuseum

Using a web browser, try accessing Harbor over HTTPS and log in.
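A quick way to check that all Harbor containers came up healthy before trying the UI (run from the directory where install.sh was executed; uses the Compose plugin installed earlier):

sudo docker compose ps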

5) Create a project in Harbor:

Download the required NAPP files from the VMware Customer Connect portal --> NSX Intelligence --> NSX Application Platform (over 24 GB as of this writing)

Edit file upload_artifacts_to_private_harbor.sh in terms of:

DOCKER_REPO=harbor-lab.sr/nsx-napp (or whatever you configured for your Harbor and project)

DOCKER_USERNAME=admin

DOCKER_PASSWORD=<your-harbor-password>

Upload the files:

chmod +x upload_artifacts_to_private_harbor.sh

sudo ./upload_artifacts_to_private_harbor.sh
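If the push steps fail with x509 "certificate signed by unknown authority" errors, the Docker engine running the script must also trust the self-signed Harbor CA - the same Harbor HTTPS guide covers this by placing the CA under Docker's certs.d directory (hostname is the lab example):

sudo mkdir -p /etc/docker/certs.d/harbor-lab.sr
sudo cp ca.crt /etc/docker/certs.d/harbor-lab.sr/ca.crt
sudo systemctl restart docker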

Because we introduced a private Harbor with self-signed certs, we need to slightly change the settings section inside the k8s cluster YAML file:

- add additionalTrustedCAs with the custom CA certificate created earlier as Base64-encoded data - easily transformed using https://base64.guru/converter/encode

- after deploying the k8s cluster for NAPP in your lab with the changed YAML (trusted CAs), authenticate to it and set the proper role bindings so containers can be created with the correct permissions:

kubectl create clusterrolebinding psp:authenticated --clusterrole=psp:vmware-system-privileged --group=system:authenticated

As mentioned - these changes and the usage of a private Harbor registry with a self-signed certificate are suitable only for lab/demo environments - NOT for production.


IV - setup NSX Application Platform

Generate a Tanzu k8s cluster configuration file (kubeconfig) with a non-expiring token, per https://docs.vmware.com/en/VMware-NSX/4.0/nsx-application-platform/GUID-52A52C0B-9575-43B6-ADE2-E8640E22C29F.html :

Login to the cluster, choose the correct context, and create a service account:

kubectl create serviceaccount napp-admin -n kube-system

kubectl create clusterrolebinding napp-admin --serviceaccount=kube-system:napp-admin --clusterrole=cluster-admin

Generate the napp-token.txt kubeconfig file (or your chosen name) by running:

SECRET=$(kubectl get serviceaccount napp-admin -n kube-system -ojsonpath='{.secrets[].name}')

TOKEN=$(kubectl get secret $SECRET -n kube-system -ojsonpath='{.data.token}' | base64 -d)

kubectl get secrets $SECRET -n kube-system -o jsonpath='{.data.ca\.crt}' | base64 -d > ./ca.crt

CONTEXT=$(kubectl config view -o jsonpath='{.current-context}')

CLUSTER=$(kubectl config view -o jsonpath='{.contexts[?(@.name == "'"$CONTEXT"'")].context.cluster}')

URL=$(kubectl config view -o jsonpath='{.clusters[?(@.name == "'"$CLUSTER"'")].cluster.server}')

TO_BE_CREATED_KUBECONFIG_FILE="napp-token.txt"

kubectl config --kubeconfig=$TO_BE_CREATED_KUBECONFIG_FILE set-cluster $CLUSTER --server=$URL --certificate-authority=./ca.crt --embed-certs=true

kubectl config --kubeconfig=$TO_BE_CREATED_KUBECONFIG_FILE set-credentials napp-admin --token=$TOKEN

kubectl config --kubeconfig=$TO_BE_CREATED_KUBECONFIG_FILE set-context $CONTEXT --cluster=$CLUSTER --user=napp-admin

kubectl config --kubeconfig=$TO_BE_CREATED_KUBECONFIG_FILE use-context $CONTEXT

Copy the napp-token.txt file to a station from which you can upload it to NSX - it will be needed during the NAPP setup.
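Before uploading it, a quick sanity check that the generated file works on its own:

kubectl --kubeconfig=napp-token.txt get nodes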


It's time to finally install NAPP - go to NSX Manager --> System --> NSX Application Platform

Enter your prepared Harbor information:

https://<harbor-FQDN>/chartrepo/nsx-napp (or whatever you used for the project name) - Helm repository

<harbor-FQDN>/nsx-napp/clustering - Docker registry (entered WITHOUT https://)

Upload created napp-token.txt file

Create 2 DNS FQDN entries resolving to addresses from the ingress CIDR you configured - one for the Interface (ingress) URL and one for the Messaging URL - and populate them in the required fields

Choose the appropriate form factor for your deployment

Optionally - upload the required k8s tools (downloaded from VMware Customer Connect), matching the k8s version you're using

After a couple of pre-checks, the installation will start and, hopefully, finish successfully :)


V - Possible errors and resolutions - that I experienced :)

Harbor image pulling - can be checked inside the cluster context with:

$ kubectl get pods -n cert-manager --- if ErrImagePull is observed in the output, something is not OK with the self-signed Harbor certificate used inside the lab and must be resolved
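To see the exact reason behind a failed pull (a standard kubectl troubleshooting step; the pod name is whichever one shows ErrImagePull):

kubectl describe pod <failing-pod-name> -n cert-manager     --- the Events section shows the certificate/registry error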

Pay attention to the required storage sizes and VM classes (vCPU, RAM) used inside the k8s cluster for the NAPP platform

If you plan to try the advanced services - Network Detection and Response, Malware Prevention, etc. - the NAPP platform will need outbound Internet access, which in the lab configuration I described is possible by NAT-ing the egress CIDR configured earlier for k8s.

Be ready to deploy and re-deploy k8s multiple times, possibly with Harbor changes as well :)
