Installation

How to install the EKS Anywhere CLI, set up prerequisites, and create EKS Anywhere clusters

This section explains how to set up and run EKS Anywhere. The pages in this section are purposefully ordered, and we recommend stepping through the pages one-by-one until you are ready to choose an infrastructure provider for your EKS Anywhere cluster.

1 - Overview

Overview of the EKS Anywhere cluster creation process

Overview

Kubernetes clusters require infrastructure capacity for the Kubernetes control plane, etcd, and worker nodes. EKS Anywhere provisions and manages this capacity on your behalf when you create EKS Anywhere clusters by interacting with the underlying infrastructure interfaces. Today, EKS Anywhere supports vSphere, bare metal, Snow, Apache CloudStack and Nutanix infrastructure providers. EKS Anywhere can also run on Docker for dev/test and non-production deployments only.

If you are creating your first EKS Anywhere cluster, you must first prepare an Administrative machine (Admin machine) where you install and run the EKS Anywhere CLI. The EKS Anywhere CLI (eksctl anywhere) is the primary tool you will use to create and manage your first cluster.

Your interface for configuring EKS Anywhere clusters is the cluster specification yaml (cluster spec). This cluster spec is where you define cluster configuration including cluster name, network, Kubernetes version, control plane settings, worker node settings, and operating system. You also specify environment-specific configuration in the cluster spec for vSphere, bare metal, Snow, CloudStack, and Nutanix. When you perform cluster lifecycle operations, you modify the cluster spec, and then apply the cluster spec changes to your cluster in a declarative manner.
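
For example, a routine lifecycle change is edited in the cluster spec and then applied with the CLI. The following is a minimal sketch; the file name is an example and the field path assumes the standard cluster spec layout shown later in this guide:

# Sketch: scale the first worker node group to 3 nodes in an existing cluster spec
yq -i '.spec.workerNodeGroupConfigurations[0].count = 3' my-cluster.yaml

# Apply the change declaratively
eksctl anywhere upgrade cluster -f my-cluster.yaml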

Before creating EKS Anywhere clusters, you must determine the operating system you will use. EKS Anywhere supports Bottlerocket, Ubuntu, and Red Hat Enterprise Linux (RHEL). Not all operating systems are supported on every infrastructure provider. If you are using Ubuntu or RHEL, you must build your images before creating your cluster. For details, reference the Operating System Management documentation.

During initial cluster creation, the EKS Anywhere CLI performs the following high-level actions:

  • Confirms the target cluster environment is available
  • Confirms authentication succeeds to the target environment
  • Performs infrastructure provider-specific validations
  • Creates a bootstrap cluster (Kind cluster) on the Admin machine
  • Installs Cluster API (CAPI) and EKS-A core components on the bootstrap cluster
  • Creates the EKS Anywhere cluster on the infrastructure provider
  • Moves the Cluster API and EKS-A core components from the bootstrap cluster to the EKS Anywhere cluster
  • Shuts down the bootstrap cluster

During initial cluster creation, you can observe the progress through the EKS Anywhere CLI output and by monitoring the CAPI and EKS-A controller manager logs on the bootstrap cluster. To access the bootstrap cluster, use the kubeconfig file in the <cluster-name>/generated/<cluster-name>.kind.kubeconfig file location.
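
For example, you can point kubectl at the bootstrap cluster kubeconfig to follow the controller logs while the cluster comes up. The namespace and deployment names below are the usual Cluster API defaults and may differ by version, so treat this as a sketch:

export KUBECONFIG=<cluster-name>/generated/<cluster-name>.kind.kubeconfig

# List the CAPI and EKS-A controller pods running on the bootstrap cluster
kubectl get pods -A

# Follow the core Cluster API controller logs
kubectl logs -n capi-system deployment/capi-controller-manager -f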

After initial cluster creation, you can access your cluster using the kubeconfig file, which is located in the <cluster-name>/<cluster-name>-eks-a-cluster.kubeconfig file location. You can SSH to the nodes that EKS Anywhere created on your behalf with the keys in the <cluster-name>/eks-a-id_rsa location by default.
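
For example (the SSH user and node IP below are placeholders; the SSH user depends on the operating system you chose):

# Use the cluster kubeconfig and list the nodes with their IP addresses
export KUBECONFIG=<cluster-name>/<cluster-name>-eks-a-cluster.kubeconfig
kubectl get nodes -o wide

# SSH to a node with the generated key
ssh -i <cluster-name>/eks-a-id_rsa <ssh-user>@<node-ip>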

While you do not need to maintain your Admin machine, you must save your kubeconfig, SSH keys, and EKS Anywhere cluster spec to a safe location if you intend to use a different Admin machine in the future.

See the Admin machine page for details and requirements to get started setting up your Admin machine.

Infrastructure Providers

EKS Anywhere uses an infrastructure provider model for creating, upgrading, and managing Kubernetes clusters that is based on the Kubernetes Cluster API (CAPI) project.

Like CAPI, EKS Anywhere runs a Kind cluster on the Admin machine to act as a bootstrap cluster. However, instead of using CAPI directly with the clusterctl command to manage EKS Anywhere clusters, you use the eksctl anywhere command which simplifies that operation.

Before creating your first EKS Anywhere cluster, you must choose your infrastructure provider and ensure the requirements for that environment are met. Reference the infrastructure provider-specific sections below for more information.

Deployment Architectures

EKS Anywhere supports two deployment architectures:

  • Standalone clusters: If you are only running a single EKS Anywhere cluster, you can deploy a standalone cluster. This deployment type runs the EKS Anywhere management components on the same cluster that runs workloads. Standalone clusters must be managed with the eksctl CLI. A standalone cluster is effectively a management cluster, but in this deployment type, it only manages itself.

  • Management cluster with separate workload clusters: If you plan to deploy multiple EKS Anywhere clusters, it’s recommended to deploy a management cluster with separate workload clusters. With this deployment type, the EKS Anywhere management components are only run on the management cluster, and the management cluster can be used to perform cluster lifecycle operations on a fleet of workload clusters. The management cluster must be managed with the eksctl CLI, whereas workload clusters can be managed with the eksctl CLI or with Kubernetes API-compatible clients such as kubectl, GitOps, or Terraform.

For details on the EKS Anywhere architectures, see the Architecture page.

EKS Anywhere software

When setting up your Admin machine, you need Internet access to the repositories hosting the EKS Anywhere software. EKS Anywhere software is divided into two types of components: The EKS Anywhere CLI for managing clusters and the cluster components and controllers used to run workloads and configure clusters.

  • Command line tools: Binaries installed on the Admin machine include eksctl, eksctl-anywhere, kubectl, and aws-iam-authenticator.
  • Cluster components and controllers: These components are listed on the artifacts page for each provider.

If you are operating behind a firewall that limits access to the Internet, you can configure EKS Anywhere to use a proxy service to connect to the Internet.
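
As a rough sketch, the proxy is configured in the cluster spec. The field names below (proxyConfiguration with httpProxy, httpsProxy, and noProxy) follow the EKS Anywhere proxy configuration, but verify them against the Proxy Configuration documentation for your version; the proxy endpoint and file name are placeholders:

yq -i '.spec.proxyConfiguration.httpProxy = "http://proxy.internal.example.com:3128"' my-cluster.yaml
yq -i '.spec.proxyConfiguration.httpsProxy = "http://proxy.internal.example.com:3128"' my-cluster.yaml
yq -i '.spec.proxyConfiguration.noProxy = ["localhost", "10.96.0.0/12", "192.168.0.0/16"]' my-cluster.yaml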

For more information on the software used in EKS Distro, which includes the Kubernetes release and related software in EKS Anywhere, see the EKS Distro Releases page.

2 - 1. Admin Machine

Steps for setting up the Admin Machine

EKS Anywhere will create and manage Kubernetes clusters on multiple providers. Currently we support creating development clusters locally using Docker and production clusters from providers listed on the providers page.

Creating an EKS Anywhere cluster begins with setting up an Administrative machine where you run all EKS Anywhere lifecycle operations as well as Docker, kubectl, and prerequisite utilities. From here you will need to install eksctl, a CLI tool for creating and managing clusters on EKS, and the eksctl-anywhere plugin, an extension to create and manage EKS Anywhere clusters on-premises, on your Administrative machine. You can then proceed to the cluster networking and provider-specific steps. See Create cluster workflow for an overview of the cluster creation process.

NOTE: For Snow provider, if you ordered a Snowball Edge device with EKS Anywhere enabled, it is preconfigured with an Admin AMI which contains the necessary binaries, dependencies, and artifacts to create an EKS Anywhere cluster. Skip to the steps on Create Snow production cluster to get started with EKS Anywhere on Snow.

Administrative machine prerequisites

System and network requirements

  • Mac OS 10.15+ / Ubuntu 20.04.2 LTS or 22.04 LTS / RHEL or Rocky Linux 8.8+
  • 4 CPU cores
  • 16GB memory
  • 30GB free disk space
  • If you are running in an airgapped environment, the Admin machine must be amd64.
  • If you are running EKS Anywhere on bare metal, the Admin machine must be on the same Layer 2 network as the cluster machines.

Here are a few other things to keep in mind:

  • If you are using Ubuntu, use the Docker CE installation instructions to install Docker and not the Snap installation, as described here.

  • If you are using EKS Anywhere v0.15 or earlier and Ubuntu 21.10 or 22.04, you will need to switch from cgroups v2 to cgroups v1. For details, see Troubleshooting Guide.

  • If you are using Docker Desktop, you need to know that:

    • For EKS Anywhere Bare Metal, Docker Desktop is not supported
    • For EKS Anywhere vSphere, if you are using EKS Anywhere v0.15 or earlier and Mac OS Docker Desktop 4.4.2 or newer, "deprecatedCgroupv1": true must be set in ~/Library/Group\ Containers/group.com.docker/settings.json (a sketch for applying this setting follows the list).
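
A minimal sketch for applying that Docker Desktop setting from a terminal is shown below; it assumes jq is installed, and you should restart Docker Desktop afterwards:

SETTINGS="$HOME/Library/Group Containers/group.com.docker/settings.json"
# Back up the file, then set the flag
cp "$SETTINGS" "$SETTINGS.bak"
jq '."deprecatedCgroupv1" = true' "$SETTINGS" > /tmp/settings.json && mv /tmp/settings.json "$SETTINGS"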

Tools

Install EKS Anywhere CLI tools

Via Homebrew (macOS and Linux)

You can install eksctl and eksctl-anywhere with Homebrew. This package will also install kubectl and aws-iam-authenticator, which are helpful for testing EKS Anywhere clusters.

brew install aws/tap/eks-anywhere

Manually (macOS and Linux)

Install the latest release of eksctl. The EKS Anywhere plugin requires eksctl version 0.66.0 or newer.

curl "https://github.com/eksctl-io/eksctl/releases/latest/download/eksctl_$(uname -s)_amd64.tar.gz" \
    --silent --location \
    | tar xz -C /tmp
sudo install -m 0755 /tmp/eksctl /usr/local/bin/eksctl

Install the eksctl-anywhere plugin.

RELEASE_VERSION=$(curl https://anywhere-assets.eks.amazonaws.com/releases/eks-a/manifest.yaml --silent --location | yq ".spec.latestVersion")
EKS_ANYWHERE_TARBALL_URL=$(curl https://anywhere-assets.eks.amazonaws.com/releases/eks-a/manifest.yaml --silent --location | yq ".spec.releases[] | select(.version==\"$RELEASE_VERSION\").eksABinary.$(uname -s | tr A-Z a-z).uri")
curl $EKS_ANYWHERE_TARBALL_URL \
    --silent --location \
    | tar xz ./eksctl-anywhere
sudo install -m 0755 ./eksctl-anywhere /usr/local/bin/eksctl-anywhere

Install the kubectl Kubernetes command line tool. This can be done by following the instructions here.

Or you can install the latest kubectl directly with the following.

export OS="$(uname -s | tr A-Z a-z)" ARCH=$(test "$(uname -m)" = 'x86_64' && echo 'amd64' || echo 'arm64')
curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/${OS}/${ARCH}/kubectl"
sudo install -m 0755 ./kubectl /usr/local/bin/kubectl

Upgrade eksctl-anywhere

If you installed eksctl-anywhere via homebrew you can upgrade the binary with

brew update
brew upgrade aws/tap/eks-anywhere

If you installed eksctl-anywhere manually you should follow the installation steps to download the latest release.

You can verify your installed version with

eksctl anywhere version

Prepare for airgapped deployments (optional)

For more information on how to prepare the Administrative machine for airgapped environments, go to the Airgapped page.

Deploy a cluster

Once you have the tools installed, go to the EKS Anywhere providers page for instructions on creating a cluster on your chosen provider.

3 - 2. Airgapped (optional)

Configure EKS Anywhere for airgapped environments

EKS Anywhere can be used in airgapped environments, where clusters are not connected to the internet or external networks. The following diagrams illustrate how to set up for cluster creation in an airgapped environment:

Download EKS Anywhere artifacts to Admin machine

If you are planning to run EKS Anywhere in an airgapped environment, before you create a cluster, you must temporarily connect your Admin machine to the internet to install the eksctl CLI and pull the required EKS Anywhere dependencies.

Disconnect Admin machine from Internet to create cluster

Once these dependencies are downloaded and imported in a local registry, you no longer need internet access. In the EKS Anywhere cluster specification, you can configure EKS Anywhere to use your local registry mirror. When the registry mirror configuration is set in the EKS Anywhere cluster specification, EKS Anywhere configures containerd to pull from that registry instead of Amazon ECR during cluster creation and lifecycle operations. For more information, reference the Registry Mirror Configuration documentation.
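
For reference, the registry mirror is set under registryMirrorConfiguration in the cluster spec. The sketch below shows only the general shape; the endpoint and file name are placeholders, and additional fields such as the CA certificate and authentication settings are covered in the Registry Mirror Configuration documentation:

yq -i '.spec.registryMirrorConfiguration.endpoint = "registry.internal.example.com"' my-cluster.yaml
yq -i '.spec.registryMirrorConfiguration.port = "443"' my-cluster.yaml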

If you are using Ubuntu or RHEL as the operating system for nodes in your EKS Anywhere cluster, you must connect to the internet while building the images with the EKS Anywhere image-builder tool. After building the operating system images, you can configure EKS Anywhere to pull the operating system images from a location of your choosing in the EKS Anywhere cluster specification. For more information on the image building process and operating system cluster specification, reference the Operating System Management documentation.

Overview

The process for preparing your airgapped environment for EKS Anywhere is summarized by the following steps:

  1. Use the eksctl anywhere CLI to download EKS Anywhere artifacts. These artifacts are yaml files that contain the list and locations of the EKS Anywhere dependencies.
  2. Use the eksctl anywhere CLI to download EKS Anywhere images. These images include EKS Anywhere dependencies including EKS Distro components, Cluster API provider components, and EKS Anywhere components such as the EKS Anywhere controllers, Cilium CNI, kube-vip, and cert-manager.
  3. Set up your local registry following the steps in the Registry Mirror Configuration documentation.
  4. Use the eksctl anywhere CLI to import the EKS Anywhere images to your local registry.
  5. Optionally use the eksctl anywhere CLI to copy EKS Anywhere Curated Packages images to your local registry.

Prerequisites

  • An existing Admin machine
  • Docker running on the Admin machine
  • At least 80GB of storage space on the Admin machine to temporarily store the EKS Anywhere images locally before importing them to your local registry. Currently, when downloading images, EKS Anywhere pulls all dependencies for all infrastructure providers and supported Kubernetes versions.
  • The download and import images commands must be run on an amd64 machine to import amd64 images to the registry mirror.

Procedure

  1. Download the EKS Anywhere artifacts that contain the list and locations of the EKS Anywhere dependencies. A compressed file eks-anywhere-downloads.tar.gz will be downloaded. You can use the eksctl anywhere download artifacts --dry-run command to see the list of artifacts it will download.

    eksctl anywhere download artifacts
    
  2. Decompress the eks-anywhere-downloads.tar.gz file using the following command. This will create an eks-anywhere-downloads folder.

    tar -xvf eks-anywhere-downloads.tar.gz
    
  3. Download the EKS Anywhere image dependencies to the Admin machine. This command may take several minutes (10+) to complete. To monitor the progress of the command, you can run with the -v 6 command line argument, which will show details of the images that are being pulled. Docker must be running for the following command to succeed.

    eksctl anywhere download images -o images.tar
    
  4. Set up a local registry mirror to host the downloaded EKS Anywhere images and configure your Admin machine with the certificates and authentication information if your registry requires it. For details, refer to the Registry Mirror Configuration documentation.

  5. Import images to the local registry mirror using the following command. Set REGISTRY_MIRROR_URL to the url of the local registry mirror you created in the previous step. This command may take several minutes to complete. To monitor the progress of the command, you can run with the -v 6 command line argument. When using self-signed certificates for your registry, you should run with the --insecure command line argument to indicate skipping TLS verification while pushing helm charts and bundles.

    export REGISTRY_MIRROR_URL=<registryurl>
    
    eksctl anywhere import images -i images.tar -r ${REGISTRY_MIRROR_URL} \
       --bundles ./eks-anywhere-downloads/bundle-release.yaml
    
  6. Optionally import curated packages to your registry mirror. The curated packages images are copied from Amazon ECR to your local registry mirror in a single step, as opposed to separate download and import steps. For post-cluster creation steps, reference the Curated Packages documentation.


    If your EKS Anywhere cluster is running in an airgapped environment, and you set up a local registry mirror, you can copy curated packages from Amazon ECR to your local registry mirror with the following command.

    Set $KUBEVERSION to be equal to the spec.kubernetesVersion of your EKS Anywhere cluster specification.

    The copy packages command uses the credentials in your docker config file. So you must docker login to the source registries and the destination registry before running the command.

    When using self-signed certificates for your registry, you should run with the --dst-insecure command line argument to indicate skipping TLS verification while copying curated packages.

    eksctl anywhere copy packages \
      ${REGISTRY_MIRROR_URL}/curated-packages \
      --kube-version $KUBEVERSION \
      --src-chart-registry public.ecr.aws/eks-anywhere \
      --src-image-registry 783794618700.dkr.ecr.us-west-2.amazonaws.com
    

If the previous steps succeeded, all of the required EKS Anywhere dependencies are now present in your local registry. Before you create your EKS Anywhere cluster, configure registryMirrorConfiguration in your EKS Anywhere cluster specification with the information for your local registry. For details see the Registry Mirror Configuration documentation.

NOTE: If you are running EKS Anywhere on bare metal, you must configure osImageURL and hookImagesURLPath in your EKS Anywhere cluster specification with the location of your node operating system image and the hook OS image. For details, reference the bare metal configuration documentation.

Next Steps

4 - 3. Cluster Networking

EKS Anywhere cluster networking

Cluster Networking

EKS Anywhere clusters use the clusterNetwork field in the cluster spec to allocate pod and service IPs. Once the cluster is created, the pods.cidrBlocks, services.cidrBlocks and nodes.cidrMaskSize fields are immutable. As a result, extra care should be taken to ensure that there are sufficient IPs and IP blocks available when provisioning large clusters.

apiVersion: anywhere.eks.amazonaws.com/v1alpha1
kind: Cluster
metadata:
  name: my-cluster-name
spec:
  clusterNetwork:
    pods:
      cidrBlocks:
      - 192.168.0.0/16
    services:
      cidrBlocks:
      - 10.96.0.0/12

The cluster pods.cidrBlocks is subdivided between nodes with a default block of size /24 per node, which can also be configured via the nodes.cidrMaskSize field. This node CIDR block is then used to assign pod IPs on the node.
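
For example, to change the per-node block from the default /24 before creating the cluster (a sketch only; the field path follows the clusterNetwork example above and the file name is a placeholder):

# Allocate a /25 pod CIDR block per node instead of the default /24
yq -i '.spec.clusterNetwork.nodes.cidrMaskSize = 25' my-cluster.yaml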

Ports and Protocols

EKS Anywhere requires that various ports on control plane and worker nodes be open. Some Kubernetes-specific ports need open access only from other Kubernetes nodes, while others are exposed externally. Beyond Kubernetes ports, someone managing an EKS Anywhere cluster must also have external access to ports on the underlying EKS Anywhere provider (such as VMware) and to external tooling (such as Jenkins).

If you are responsible for network firewall rules between nodes on your EKS Anywhere clusters, the following tables describe both Kubernetes and EKS Anywhere-specific ports you should be aware of.

Kubernetes control plane

The following table represents the ports published by the Kubernetes project that must be accessible on any Kubernetes control plane.

Protocol Direction Port Range Purpose Used By
TCP Inbound 6443 Kubernetes API server All
TCP Inbound 10250 Kubelet API Self, Control plane
TCP Inbound 10259 kube-scheduler Self
TCP Inbound 10257 kube-controller-manager Self

Although etcd ports are included in the control plane section, you can also host your own etcd cluster externally or on custom ports.

Protocol Direction Port Range Purpose Used By
TCP Inbound 2379-2380 etcd server client API kube-apiserver, etcd

Use the following to access the SSH service on the control plane and etcd nodes:

Protocol Direction Port Range Purpose Used By
TCP Inbound 22 SSHD server SSH clients
Kubernetes worker nodes

The following table represents the ports published by the Kubernetes project that must be accessible from worker nodes.

Protocol Direction Port Range Purpose Used By
TCP Inbound 10250 Kubelet API Self, Control plane
TCP Inbound 30000-32767 NodePort Services All

The API server port is sometimes switched to 443. Alternatively, the default port is kept as is and the API server is put behind a load balancer that listens on 443 and routes requests to the API server on the default port.

Use the following to access the SSH service on the worker nodes:

Protocol Direction Port Range Purpose Used By
TCP Inbound 22 SSHD server SSH clients
Bare Metal provider

On the Admin machine for a Bare Metal provider, the following ports need to be accessible to all the nodes in the cluster, on the same Layer 2 network, for initial network booting:

Protocol Direction Port Range Purpose Used By
UDP Inbound 67 Boots DHCP All nodes, for network boot
UDP Inbound 69 Boots TFTP All nodes, for network boot
UDP Inbound 514 Boots Syslog All nodes, for provisioning logs
TCP Inbound 80 Boots HTTP All nodes, for network boot
TCP Inbound 42113 Tink-server gRPC All nodes, talk to Tinkerbell
TCP Inbound 50061 Hegel HTTP All nodes, talk to Tinkerbell
TCP Outbound 623 Rufio IPMI All nodes, out-of-band power and next boot (optional)
TCP Outbound 80,443 Rufio Redfish All nodes, out-of-band power and next boot (optional)
VMware provider

The following table displays ports that need to be accessible from the VMware provider running EKS Anywhere:

Protocol Direction Port Range Purpose Used By
TCP Inbound 443 vCenter Server vCenter API endpoint
TCP Inbound 6443 Kubernetes API server Kubernetes API endpoint
TCP Inbound 2379 Manager Etcd API endpoint
TCP Inbound 2380 Manager Etcd API endpoint
Nutanix provider

The following table displays ports that need to be accessible from the Nutanix provider running EKS Anywhere:

Protocol Direction Port Range Purpose Used By
TCP Inbound 9440 Prism Central Server Prism Central API endpoint
TCP Inbound 6443 Kubernetes API server Kubernetes API endpoint
TCP Inbound 2379 Manager Etcd API endpoint
TCP Inbound 2380 Manager Etcd API endpoint
Snow provider

In addition to the Ports Required to Use AWS Services on an AWS Snowball Edge Device, the following table displays ports that need to be accessible from the Snow provider running EKS Anywhere:

Protocol Direction Port Range Purpose Used By
TCP Inbound 9092 Device Controller EKS Anywhere and CAPAS controller
TCP Inbound 8242 EC2 HTTPS endpoint EKS Anywhere and CAPAS controller
TCP Inbound 6443 Kubernetes API server Kubernetes API endpoint
TCP Inbound 2379 Manager Etcd API endpoint
TCP Inbound 2380 Manager Etcd API endpoint
Control plane management tools

A variety of control plane management tools are available to use with EKS Anywhere. One example is Jenkins.

Protocol Direction Port Range Purpose Used By
TCP Inbound 8080 Jenkins Server HTTP Jenkins endpoint
TCP Inbound 8443 Jenkins Server HTTPS Jenkins endpoint

5 - 4. Choose provider

Choose an infrastructure provider for EKS Anywhere clusters

EKS Anywhere supports many different types of infrastructure including VMware vSphere, bare metal, Snow, Nutanix, and Apache CloudStack. You can also run EKS Anywhere on Docker for dev/test use cases only. EKS Anywhere clusters can only run on a single infrastructure provider. For example, you cannot have some vSphere nodes, some bare metal nodes, and some Snow nodes in a single EKS Anywhere cluster. Management clusters also must run on the same infrastructure provider as their workload clusters.

Detailed information on each infrastructure provider can be found in the sections below. Review the infrastructure provider’s prerequisites in-depth before creating your first cluster.

Install on vSphere
Install on Bare Metal
Install on Snow
Install on CloudStack
Install on Nutanix
Install on Docker (dev only)

6 - Create vSphere cluster

Create an EKS Anywhere cluster on vSphere

6.1 - Overview

Overview of EKS Anywhere cluster creation on vSphere

Creating a vSphere cluster

The following diagram illustrates what happens when you start the cluster creation process.

Start creating a vSphere cluster

Start creating EKS Anywhere cluster

1. Generate a config file for vSphere

With this command, you specify the name of the provider (-p vsphere) and a cluster name, and redirect the output to a file. The result is a config file template that you need to modify for the specific instance of your provider.
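
A typical invocation looks like the following (the cluster name is an example; the full command also appears in the create cluster steps later in this section):

CLUSTER_NAME=mgmt
eksctl anywhere generate clusterconfig $CLUSTER_NAME \
   --provider vsphere > eksa-mgmt-cluster.yaml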

2. Modify the config file

Using the generated cluster config file, make modifications to suit your situation. Details about this config file are contained on the vSphere Config page.

3. Launch the cluster creation

Once you have modified the cluster configuration file, run eksctl anywhere create cluster -f $CLUSTER_NAME.yaml to start the cluster creation process. To see details on the cluster creation process, increase verbosity (-v=9 provides maximum verbosity).
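
For example:

# Start cluster creation with maximum verbosity to follow each step
eksctl anywhere create cluster -f $CLUSTER_NAME.yaml -v=9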

4. Authenticate and create bootstrap cluster

After authenticating to vSphere and validating the assets there, the cluster creation process starts off creating a temporary Kubernetes bootstrap cluster on the Administrative machine. To begin, the cluster creation process runs a series of govc commands to check on the vSphere environment:

  • Checks that the vSphere environment is available.

  • Using the URL and credentials provided in the cluster spec files, authenticates to the vSphere provider.

  • Validates that the datacenter and the datacenter network exists.

  • Validates that the identified datastore (to store your EKS Anywhere cluster) exists, that the folder holding your EKS Anywhere cluster VMs exists, and that the resource pools containing compute resources exist. If you have multiple VSphereMachineConfig objects in your config file, you will see these validations repeated.

  • Validates the virtual machine templates to be used for the control plane and worker nodes (such as ubuntu-2004-kube-v1.20.7).

If all validations pass, you will see this message:

✅ Vsphere Provider setup is valid
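
If you want to spot-check the same vSphere assets yourself from the Admin machine, a few govc commands can be used. This is only a sketch with placeholder paths, not the exact commands the CLI runs, and it assumes GOVC_URL, GOVC_USERNAME, and GOVC_PASSWORD (plus GOVC_INSECURE if needed) are exported:

govc about                                                            # vCenter reachable, credentials valid
govc datastore.info /YourDatacenter/datastore/YourDatastore          # datastore exists
govc pool.info /YourDatacenter/host/Cluster-01/Resources             # resource pool exists
govc vm.info /YourDatacenter/vm/Templates/ubuntu-2004-kube-v1.20.7   # VM template exists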

Next, the process runs the kind command to build a single-node Kubernetes bootstrap cluster on the Administrative machine. This includes pulling the kind node image, preparing the node, writing the configuration, starting the control plane, and installing the CNI plugin.

After this point the bootstrap cluster is installed, but not yet fully configured.

Continuing cluster creation

The following diagram illustrates the activities that occur next:

Continue creating EKS Anywhere cluster

1. Add CAPI management

Cluster API (CAPI) management is added to the bootstrap cluster to direct the creation of the target cluster.

2. Set up cluster

Configure the control plane and worker nodes.

3. Add Cilium networking

Add Cilium as the CNI plugin to use for networking between the cluster services and pods.

4. Add CAPI to target cluster

Add the CAPI service to the target cluster in preparation for it to take over management of the cluster after the cluster creation is completed and the bootstrap cluster is deleted. The bootstrap cluster can then begin moving the CAPI objects over to the target cluster, so it can take over the management of itself.

With the bootstrap cluster running and configured on the Administrative machine, the creation of the target cluster begins. It uses kubectl to apply a target cluster configuration as follows:

  • Once etcd, the control plane, and the worker nodes are ready, it applies the networking configuration to the target cluster.

  • CAPI providers are configured on the target cluster, in preparation for the target cluster to take over responsibilities for running the components needed to manage itself.

  • With CAPI running on the target cluster, CAPI objects for the target cluster are moved from the bootstrap cluster to the target cluster’s CAPI service (done internally with the clusterctl command).

  • Add Kubernetes CRDs and other addons that are specific to EKS Anywhere.

  • The cluster configuration is saved.

Once etcd, the control plane, and the worker nodes are ready, it applies the networking configuration to the workload cluster:

Installing networking on workload cluster

After that, the CAPI providers are configured on the workload cluster, in preparation for the workload cluster to take over responsibilities for running the components needed to manage itself:

Installing cluster-api providers on workload cluster

With CAPI running on the workload cluster, CAPI objects for the workload cluster are moved from the bootstrap cluster to the workload cluster’s CAPI service (done internally with the clusterctl command):

Moving cluster management from bootstrap to workload cluster

At this point, the cluster creation process will add Kubernetes CRDs and other addons that are specific to EKS Anywhere. That configuration is applied directly to the cluster:

Installing EKS-A custom components (CRD and controller) on workload cluster
Creating EKS-A CRDs instances on workload cluster
Installing GitOps Toolkit on workload cluster

If you did not specify GitOps support, starting the flux service is skipped:

GitOps field not specified, bootstrap flux skipped

The cluster configuration is saved:

Writing cluster config file

With the cluster up, and the CAPI service running on the new cluster, the bootstrap cluster is no longer needed and is deleted:

Delete EKS Anywhere bootstrap cluster

At this point, cluster creation is complete. You can now use your target cluster as either:

  • A standalone cluster (to run workloads) or
  • A management cluster (to optionally create one or more workload clusters)

Creating workload clusters (optional)

As described in Create separate workload clusters , you can use the cluster you just created as a management cluster to create and manage one or more workload clusters on the same vSphere provider as follows:

  • Use eksctl to generate a cluster config file for the new workload cluster.
  • Modify the cluster config with a new cluster name and different vSphere resources.
  • Use eksctl to create the new workload cluster from the new cluster config file and credentials from the initial management cluster.

6.2 - Requirements for EKS Anywhere on VMware vSphere

VMware vSphere provider requirements for EKS Anywhere

To run EKS Anywhere, you will need:

Prepare Administrative machine

Set up an Administrative machine as described in Install EKS Anywhere .

Prepare a VMware vSphere environment

To prepare a VMware vSphere environment to run EKS Anywhere, you need the following:

  • A vSphere 7 or 8 environment running vCenter.

  • Capacity to deploy 6-10 VMs.

  • DHCP service running in vSphere environment in the primary VM network for your workload cluster.

  • One network in vSphere to use for the cluster. EKS Anywhere clusters need access to vCenter through the network to enable self-managing and storage capabilities.

  • An OVA imported into vSphere and converted into a template for the workload VMs

  • It’s critical that you set up your vSphere user credentials properly.

  • One IP address routable from cluster but excluded from DHCP offering. This IP address is to be used as the Control Plane Endpoint IP.

    Below are some suggestions to ensure that this IP address is never handed out by your DHCP server.

    You may need to contact your network engineer.

    • Pick an IP address reachable from cluster subnet which is excluded from DHCP range OR
    • Alter DHCP ranges to leave out one or more IP addresses at the top and/or the bottom of the range OR
    • Create an IP reservation for this IP on your DHCP server. This is usually accomplished by adding a dummy mapping of this IP address to a non-existent mac address.

Each VM will require:

  • 2 vCPUs
  • 8GB RAM
  • 25GB Disk

The administrative machine and the target workload environment will need network access (TCP/443) to:

  • vCenter endpoint (must be accessible to EKS Anywhere clusters)
  • public.ecr.aws
  • anywhere-assets.eks.amazonaws.com (to download the EKS Anywhere binaries, manifests and OVAs)
  • distro.eks.amazonaws.com (to download EKS Distro binaries and manifests)
  • d2glxqk2uabbnd.cloudfront.net (for EKS Anywhere and EKS Distro ECR container images)
  • api.ecr.us-west-2.amazonaws.com (for EKS Anywhere package authentication matching your region)
  • d5l0dvt14r5h8.cloudfront.net (for EKS Anywhere package ECR container images)
  • api.github.com (only if GitOps is enabled)

vSphere information needed before creating the cluster

You need to get the following information before creating the cluster:

  • Static IP Addresses: You will need one IP address for the management cluster control plane endpoint, and a separate IP address for the control plane of each workload cluster you add.

    Let’s say you are going to have the management cluster and two workload clusters. For those, you would need three IP addresses, one for each cluster. All of those addresses will be configured the same way in the configuration file you will generate for each cluster.

    A static IP address will be used for each control plane VM in your EKS Anywhere cluster. Choose IP addresses in your network range that do not conflict with other VMs and make sure they are excluded from your DHCP offering.

    An IP address will be the value of the property controlPlaneConfiguration.endpoint.host in the config file of the management cluster. A separate IP address must be assigned for each workload cluster.


  • vSphere Datacenter Name: The vSphere datacenter to deploy the EKS Anywhere cluster on.


  • VM Network Name: The VM network to deploy your EKS Anywhere cluster on.


  • vCenter Server Domain Name: The vCenter server fully qualified domain name or IP address. If the server IP is used, the thumbprint must be set or insecure must be set to true.


  • thumbprint (required if insecure=false): The SHA1 thumbprint of the vCenter server certificate which is only required if you have a self-signed certificate for your vSphere endpoint.

    There are several ways to obtain your vCenter thumbprint. If you have govc installed, you can run the following command in the Administrative machine terminal, and take a note of the output:

    govc about.cert -thumbprint -k
    
  • template: The VM template to use for your EKS Anywhere cluster. This template was created when you imported the OVA file into vSphere.


  • datastore: The vSphere datastore to deploy your EKS Anywhere cluster on.


  • folder: The folder parameter in VSphereMachineConfig allows you to organize the VMs of an EKS Anywhere cluster. With this, each cluster can be organized as a folder in vSphere. You will have a separate folder for the management cluster and each cluster you are adding.


  • resourcePool: The vSphere resource pool for the VMs in your EKS Anywhere cluster. With a dedicated resource pool this takes the form /<datacenter>/host/<cluster-name>/Resources/<resource-pool-name>; without one, use the cluster's default pool, /<datacenter>/host/<cluster-name>/Resources.


6.3 - Preparing vSphere for EKS Anywhere

Set up a vSphere provider to prepare it for EKS Anywhere

Certain resources must be in place with appropriate user permissions to create an EKS Anywhere cluster using the vSphere provider.

Configuring Folder Resources

Create a VM folder:

For each user that needs to create workload clusters, have the vSphere administrator create a VM folder. That folder will host:

  • A nested folder for the management cluster and another one for each workload cluster.
  • Each cluster VM in its own nested folder under this folder.
vm/
├── YourVMFolder/
    ├── mgmt-cluster <------ Folder with vms for management cluster
        ├── mgmt-cluster-7c2sp
        ├── mgmt-cluster-etcd-2pbhp
        ├── mgmt-cluster-md-0-5c5844bcd8xpjcln-9j7xh
    ├── workload-cluster-0 <------ Folder with vms for workload cluster 0
        ├── workload-cluster-0-8dk3j
        ├── workload-cluster-0-etcd-20ksa
        ├── workload-cluster-0-md-0-6d964979ccxbkchk-c4qjf
    ├── workload-cluster-1 <------ Folder with vms for workload cluster 1
        ├── workload-cluster-1-59cbn
        ├── workload-cluster-1-etcd-qs6wv
        ├── workload-cluster-1-md-0-756bcc99c9-9j7xh

To see how to create folders on vSphere, see the vSphere Create a Folder documentation.

Configuring vSphere User, Group, and Roles

You need a vSphere user with the right privileges to let you create EKS Anywhere clusters on top of your vSphere cluster.

Configure via EKSA CLI

To configure a new user via CLI, you will need two things:

  • a set of vSphere admin credentials with the ability to create users and groups. If you do not have the rights to create new groups and users, you can invoke govc commands directly as outlined here.
  • a user.yaml file:
apiVersion: "eks-anywhere.amazon.com/v1"
kind: vSphereUser
spec:
  username: "eksa"                # optional, default eksa
  group: "MyExistingGroup"        # optional, default EKSAUsers
  globalRole: "MyGlobalRole"      # optional, default EKSAGlobalRole
  userRole: "MyUserRole"          # optional, default EKSAUserRole
  adminRole: "MyEKSAAdminRole"    # optional, default EKSACloudAdminRole
  datacenter: "MyDatacenter"
  vSphereDomain: "vsphere.local"  # this should be the domain used when you login, e.g. YourUsername@vsphere.local
  connection:
    server: "https://my-vsphere.internal.acme.com"
    insecure: false
  objects:
    networks:
      - !!str "/MyDatacenter/network/My Network"
    datastores:
      - !!str "/MyDatacenter/datastore/MyDatastore2"
    resourcePools:
      - !!str "/MyDatacenter/host/Cluster-03/MyResourcePool" # NOTE: see below if you do not want to use a resource pool
    folders:
      - !!str "/MyDatacenter/vm/OrgDirectory/MyVMs"
    templates:
      - !!str "/MyDatacenter/vm/Templates/MyTemplates"

NOTE: If you do not want to create a resource pool, you can instead specify the cluster directly as /MyDatacenter/host/Cluster-03 in user.yaml, where Cluster-03 is your cluster name. In your cluster spec, you will need to specify /MyDatacenter/host/Cluster-03/Resources for the resourcePool field.

Set the admin credentials as environment variables:

export EKSA_VSPHERE_USERNAME=<ADMIN_VSPHERE_USERNAME>
export EKSA_VSPHERE_PASSWORD=<ADMIN_VSPHERE_PASSWORD>

If the user does not already exist, you can create the user and all the specified group and role objects by running:

eksctl anywhere exp vsphere setup user -f user.yaml --password '<NewUserPassword>'

If the user or any of the group or role objects already exist, use the force flag instead to overwrite Group-Role-Object mappings for the group, roles, and objects specified in the user.yaml config file:

eksctl anywhere exp vsphere setup user -f user.yaml --force

Please note that there is one more manual step to configure global permissions here.

Configure via govc

If you do not have the rights to create a new user, you can still configure the necessary roles and permissions using the govc CLI.

#! /bin/bash
# govc calls to configure a user with minimal permissions
set -x
set -e

EKSA_USER='<Username>@<UserDomain>'
USER_ROLE='EKSAUserRole'
GLOBAL_ROLE='EKSAGlobalRole'
ADMIN_ROLE='EKSACloudAdminRole'

FOLDER_VM='/YourDatacenter/vm/YourVMFolder'
FOLDER_TEMPLATES='/YourDatacenter/vm/Templates'

NETWORK='/YourDatacenter/network/YourNetwork'
DATASTORE='/YourDatacenter/datastore/YourDatastore'
RESOURCE_POOL='/YourDatacenter/host/Cluster-01/Resources/YourResourcePool'

govc role.create "$GLOBAL_ROLE" $(curl https://raw.githubusercontent.com/aws/eks-anywhere/main/pkg/config/static/globalPrivs.json | jq .[] | tr '\n' ' ' | tr -d '"')

govc role.create "$USER_ROLE" $(curl https://raw.githubusercontent.com/aws/eks-anywhere/main/pkg/config/static/eksUserPrivs.json | jq .[] | tr '\n' ' ' | tr -d '"')

govc role.create "$ADMIN_ROLE" $(curl https://raw.githubusercontent.com/aws/eks-anywhere/main/pkg/config/static/adminPrivs.json | jq .[] | tr '\n' ' ' | tr -d '"')

govc permissions.set -group=false -principal "$EKSA_USER"  -role "$GLOBAL_ROLE" /

govc permissions.set -group=false -principal "$EKSA_USER"  -role "$ADMIN_ROLE" "$FOLDER_VM"

govc permissions.set -group=false -principal "$EKSA_USER"  -role "$ADMIN_ROLE" "$FOLDER_TEMPLATES"

govc permissions.set -group=false -principal "$EKSA_USER"  -role "$USER_ROLE" "$NETWORK"

govc permissions.set -group=false -principal "$EKSA_USER"  -role "$USER_ROLE" "$DATASTORE"

govc permissions.set -group=false -principal "$EKSA_USER"  -role "$USER_ROLE" "$RESOURCE_POOL"

NOTE: If you do not want to create a resource pool, you can instead specify the cluster directly as /MyDatacenter/host/Cluster-03 in user.yaml, where Cluster-03 is your cluster name. In your cluster spec, you will need to specify /MyDatacenter/host/Cluster-03/Resources for the resourcePool field.

Please note that there is one more manual step to configure global permissions here.

Configure via UI

Add a vCenter User

Ask your vSphere administrator to add a vCenter user that will be used for the provisioning of the EKS Anywhere cluster in VMware vSphere.

  1. Log in with the vSphere Client to the vCenter Server.
  2. Specify the user name and password for a member of the vCenter Single Sign-On Administrators group.
  3. Navigate to the vCenter Single Sign-On user configuration UI.
    • From the Home menu, select Administration.
    • Under Single Sign On, click Users and Groups.
  4. If vsphere.local is not the currently selected domain, select it from the drop-down menu. You cannot add users to other domains.
  5. On the Users tab, click Add.
  6. Enter a user name and password for the new user. The maximum number of characters allowed for the user name is 300. You cannot change the user name after you create a user. The password must meet the password policy requirements for the system.
  7. Click Add.

For more details, see vSphere Add vCenter Single Sign-On Users documentation.

Create and define user roles

When you add a user for creating clusters, that user initially has no privileges to perform management operations. So you have to add this user to groups with the required permissions, or assign a role or roles with the required permission to this user.

Three roles are needed to be able to create the EKS Anywhere cluster:

  1. Create a global custom role: For example, you could name this EKS Anywhere Global. Define it for the user on the vCenter domain level and its children objects. Create this role with the following privileges:

    > Content Library
    * Add library item
    * Check in a template
    * Check out a template
    * Create local library
    * Update files
    > vSphere Tagging
    * Assign or Unassign vSphere Tag
    * Assign or Unassign vSphere Tag on Object
    * Create vSphere Tag
    * Create vSphere Tag Category
    * Delete vSphere Tag
    * Delete vSphere Tag Category
    * Edit vSphere Tag
    * Edit vSphere Tag Category
    * Modify UsedBy Field For Category
    * Modify UsedBy Field For Tag
    > Sessions
    * Validate session
    
  2. Create a user custom role: The second role is also a custom role that you could call, for example, EKSAUserRole. Define this role with the following objects and children objects.

    • The resource pool level and its children objects. This is the resource pool that your EKS Anywhere VMs will be part of.
    • The storage object level and its children objects. This is the storage that will be used to store the cluster VMs.
    • The network VLAN object level and its children objects. This is the network that will host the cluster VMs.
    • The VM and Template folder level and its children objects.

    Create this role with the following privileges:

    > Content Library
    * Add library item
    * Check in a template
    * Check out a template
    * Create local library
    > Datastore
    * Allocate space
    * Browse datastore
    * Low level file operations
    > Folder
    * Create folder
    > vSphere Tagging
    * Assign or Unassign vSphere Tag
    * Assign or Unassign vSphere Tag on Object
    * Create vSphere Tag
    * Create vSphere Tag Category
    * Delete vSphere Tag
    * Delete vSphere Tag Category
    * Edit vSphere Tag
    * Edit vSphere Tag Category
    * Modify UsedBy Field For Category
    * Modify UsedBy Field For Tag
    > Network
    * Assign network
    > Resource
    * Assign virtual machine to resource pool
    > Scheduled task
    * Create tasks
    * Modify task
    * Remove task
    * Run task
    > Profile-driven storage
    * Profile-driven storage view
    > Storage views
    * View
    > vApp
    * Import
    > Virtual machine
    * Change Configuration
      - Add existing disk
      - Add new disk
      - Add or remove device
      - Advanced configuration
      - Change CPU count
      - Change Memory
      - Change Settings
      - Configure Raw device
      - Extend virtual disk
      - Modify device settings
      - Remove disk
    * Edit Inventory
      - Create from existing
      - Create new
      - Remove
    * Interaction
      - Power off
      - Power on
    * Provisioning
      - Clone template
      - Clone virtual machine
      - Create template from virtual machine
      - Customize guest
      - Deploy template
      - Mark as template
      - Read customization specifications
    * Snapshot management
      - Create snapshot
      - Remove snapshot
      - Revert to snapshot
    
  3. Create a default Administrator role: The third role is the default system role Administrator, which you assign to the user on the folder level and its children objects (VMs and OVA templates) that the vSphere administrator created for you.

    To create a role and define privileges, check the Create a vCenter Server Custom Role and Defined Privileges pages.

Manually set Global Permissions role in Global Permissions UI

vSphere does not currently support a public API for setting global permissions. Because of this, you will need to manually assign the Global Role you created to your user or group in the Global Permissions UI.

Make sure to select the Propagate to children box so the permissions get propagated down properly.


Deploy an OVA Template

If the user creating the cluster has permission and network access to create and tag a template, you can skip these steps because EKS Anywhere will automatically download the OVA and create the template if it can. If the user does not have the permissions or network access to create and tag the template, follow this guide. The OVA contains the operating system (Ubuntu, Bottlerocket, or RHEL) for a specific EKS Distro Kubernetes release and EKS Anywhere version. The following example uses Ubuntu as the operating system, but a similar workflow would work for Bottlerocket or RHEL.

Steps to deploy the OVA

  1. Go to the artifacts page and download or build the OVA template with the newest EKS Distro Kubernetes release to your computer.
  2. Log in to the vCenter Server.
  3. Right-click the folder you created above and select Deploy OVF Template. The Deploy OVF Template wizard opens.
  4. On the Select an OVF template page, select the Local file option, specify the location of the OVA template you downloaded to your computer, and click Next.
  5. On the Select a name and folder page, enter a unique name for the virtual machine or leave the default generated name, if you do not have other templates with the same name within your vCenter Server virtual machine folder. The default deployment location for the virtual machine is the inventory object where you started the wizard, which is the folder you created above. Click Next.
  6. On the Select a compute resource page, select the resource pool where to run the deployed VM template, and click Next.
  7. On the Review details page, verify the OVF or OVA template details and click Next.
  8. On the Select storage page, select a datastore to store the deployed OVF or OVA template and click Next.
  9. On the Select networks page, select a source network and map it to a destination network. Click Next.
  10. On the Ready to complete page, review the page and click Finish. For details, see Deploy an OVF or OVA Template.

To build your own Ubuntu OVA template check the Building your own Ubuntu OVA section.

To use the deployed OVA template to create the VMs for the EKS Anywhere cluster, you have to tag it with specific values for the os and eksdRelease keys. The value of the os key is the operating system of the deployed OVA template, which is ubuntu in our scenario. The value of the eksdRelease key holds the Kubernetes and EKS-D release used in the deployed OVA template. Check the Customize OVAs page for more details.
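
If you prefer the command line over the UI steps below, the tags can also be created and attached with govc. The sketch here uses the example values from this page (os:ubuntu and eksdRelease:kubernetes-1-21-eks-8) and a placeholder template path; confirm the exact tag values for your OVA on the artifacts page:

# Create the tag categories and tags (skip the category commands if they already exist)
govc tags.category.create os
govc tags.category.create eksdRelease
govc tags.create -c os os:ubuntu
govc tags.create -c eksdRelease eksdRelease:kubernetes-1-21-eks-8
# Attach both tags to the deployed template
govc tags.attach os:ubuntu /YourDatacenter/vm/Templates/ubuntu-2004-kube-v1.20.7
govc tags.attach eksdRelease:kubernetes-1-21-eks-8 /YourDatacenter/vm/Templates/ubuntu-2004-kube-v1.20.7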

Steps to tag the deployed OVA template:

  1. Go to the artifacts page and take notes of the tags and values associated with the OVA template you deployed in the previous step.
  2. In the vSphere Client, select Menu > Tags & Custom Attributes.
  3. Select the Tags tab and click Tags.
  4. Click New.
  5. In the Create Tag dialog box, copy the os tag name associated with your OVA that you took note of, which in our case is os:ubuntu, and paste it as the name for the first tag required.
  6. Specify the tag category os if it exists or create it if it does not exist.
  7. Click Create.
  8. Now to add the release tag, repeat steps 2-4.
  9. In the Create Tag dialog box, copy the eksdRelease tag name associated with your OVA that you took note of, which in our case is eksdRelease:kubernetes-1-21-eks-8, and paste it as the name for the second tag required.
  10. Specify the tag category eksdRelease if it exists or create it if it does not exist.
  11. Click Create.
  12. Navigate to the VM and Template tab.
  13. Select the folder that was created.
  14. Select deployed template and click Actions.
  15. From the drop-down menu, select Tags and Custom Attributes > Assign Tag.
  16. Select the tags we created from the list and confirm the operation.

6.4 - Create vSphere cluster

Create an EKS Anywhere cluster on VMware vSphere

EKS Anywhere supports a VMware vSphere provider for EKS Anywhere deployments. This document walks you through setting up EKS Anywhere on vSphere in a way that:

  • Deploys an initial cluster on your vSphere environment. That cluster can be used as a self-managed cluster (to run workloads) or a management cluster (to create and manage other clusters)
  • Deploys zero or more workload clusters from the management cluster

If your initial cluster is a management cluster, it is intended to stay in place so you can use it later to modify, upgrade, and delete workload clusters. Using a management cluster makes it faster to provision and delete workload clusters. Also it lets you keep vSphere credentials for a set of clusters in one place: on the management cluster. The alternative is to simply use your initial cluster to run workloads. See Cluster topologies for details.

Note: Before you create your cluster, you have the option of validating the EKS Anywhere bundle manifest container images by following instructions in the Verify Cluster Images page.

Prerequisite Checklist

EKS Anywhere needs to:

Also, see the Ports and protocols page for information on ports that need to be accessible from control plane, worker, and Admin machines.

Steps

The following steps are divided into two sections:

  • Create an initial cluster (used as a management or self-managed cluster)
  • Create zero or more workload clusters from the management cluster

Create an initial cluster

Follow these steps to create an EKS Anywhere cluster that can be used either as a management cluster or as a self-managed cluster (for running workloads itself).

  1. Optional Configuration

    Set License Environment Variable

    Add a license to any cluster for which you want to receive paid support. If you are creating a licensed cluster, set and export the license variable (see License cluster if you are licensing an existing cluster):

    export EKSA_LICENSE='my-license-here'
    

    After you have created your eksa-mgmt-cluster.yaml and set your credential environment variables, you will be ready to create the cluster.

    Configure Curated Packages

    The Amazon EKS Anywhere Curated Packages are only available to customers with the Amazon EKS Anywhere Enterprise Subscription. To request a free trial, talk to your Amazon representative or connect with one here. Cluster creation will succeed if authentication is not set up, but some warnings may be generated. Detailed package configurations can be found here.

    If you are going to use packages, set up authentication. These credentials should have limited capabilities:

    export EKSA_AWS_ACCESS_KEY_ID="your*access*id"
    export EKSA_AWS_SECRET_ACCESS_KEY="your*secret*key"
    export EKSA_AWS_REGION="us-west-2"  
    
  2. Generate an initial cluster config (named mgmt for this example):

    CLUSTER_NAME=mgmt
    eksctl anywhere generate clusterconfig $CLUSTER_NAME \
       --provider vsphere > eksa-mgmt-cluster.yaml
    
  3. Modify the initial cluster config (eksa-mgmt-cluster.yaml) as follows:

    • Refer to vsphere configuration for information on configuring this cluster config for a vSphere provider.
    • Add Optional configuration settings as needed. See Github provider to see how to identify your Git information.
    • Create at least two control plane nodes, three worker nodes, and three etcd nodes to provide high availability and support rolling upgrades (see the sketch below).
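
    For example, a minimal sketch of just the high-availability counts (all other required fields are omitted; adjust names and values for your environment):

    controlPlaneConfiguration:
      count: 2
    externalEtcdConfiguration:
      count: 3
    workerNodeGroupConfigurations:
    - count: 3
      name: md-0
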
  4. Set Credential Environment Variables

    Before you create the initial cluster, you will need to set and export these environment variables for your vSphere user name and password. Make sure you use single quotes around the values so that your shell does not interpret the values:

    export EKSA_VSPHERE_USERNAME='billy'
    export EKSA_VSPHERE_PASSWORD='t0p$ecret'
    
  5. Create cluster

    For a regular cluster create (with internet access), type the following:

    eksctl anywhere create cluster \
       -f eksa-mgmt-cluster.yaml \
       # --install-packages packages.yaml \ # uncomment to install curated packages at cluster creation      
    

    For an airgapped cluster create, follow Preparation for airgapped deployments instructions, then type the following:

    eksctl anywhere create cluster \
       -f eksa-mgmt-cluster.yaml \
       --bundles-override ./eks-anywhere-downloads/bundle-release.yaml \
       # --install-packages packages.yaml \ # uncomment to install curated packages at cluster creation      
    
  6. Once the cluster is created you can use it with the generated KUBECONFIG file in your local directory:

    export KUBECONFIG=${PWD}/${CLUSTER_NAME}/${CLUSTER_NAME}-eks-a-cluster.kubeconfig
    
  7. Check the cluster nodes:

    To check that the cluster completed, list the machines to see the control plane, etcd, and worker nodes:

    kubectl get machines -A
    

    Example command output

    NAMESPACE   NAME                PROVIDERID        PHASE    VERSION
    eksa-system mgmt-b2xyz          vsphere:/xxxxx    Running  v1.24.2-eks-1-24-5
    eksa-system mgmt-etcd-r9b42     vsphere:/xxxxx    Running  
    eksa-system mgmt-md-8-6xr-rnr   vsphere:/xxxxx    Running  v1.24.2-eks-1-24-5
    ...
    

    The etcd machine doesn’t show the Kubernetes version because it doesn’t run the kubelet service.

  8. Check the initial cluster’s CRD:

    To ensure you are looking at the initial cluster, list the CRD to see that the name of its management cluster is itself:

    kubectl get clusters mgmt -o yaml
    

    Example command output

    ...
    kubernetesVersion: "1.28"
    managementCluster:
      name: mgmt
    workerNodeGroupConfigurations:
    ...
    

Create separate workload clusters

Follow these steps if you want to use your initial cluster to create and manage separate workload clusters.

  1. Set License Environment Variable (Optional)

    Add a license to any cluster for which you want to receive paid support. If you are creating a licensed cluster, set and export the license variable (see License cluster if you are licensing an existing cluster):

    export EKSA_LICENSE='my-license-here'
    
  2. Generate a workload cluster config:

    CLUSTER_NAME=w01
    eksctl anywhere generate clusterconfig $CLUSTER_NAME \
       --provider vsphere > eksa-w01-cluster.yaml
    

    Refer to the initial config described earlier for the required and optional settings.

    NOTE: Ensure workload cluster object names (Cluster, vSphereDatacenterConfig, vSphereMachineConfig, etc.) are distinct from management cluster object names.

  3. Be sure to set the managementCluster field to identify the name of the management cluster.

    For example, the management cluster, mgmt is defined for our workload cluster w01 as follows:

    apiVersion: anywhere.eks.amazonaws.com/v1alpha1
    kind: Cluster
    metadata:
      name: w01
    spec:
      managementCluster:
        name: mgmt
    
  4. Create a workload cluster in one of the following ways:

    • GitOps: See Manage separate workload clusters with GitOps

    • Terraform: See Manage separate workload clusters with Terraform

      NOTE: spec.users[0].sshAuthorizedKeys must be specified to SSH into your nodes when provisioning a cluster through GitOps or Terraform, as the EKS Anywhere Cluster Controller will not generate the keys like eksctl CLI does when the field is empty.

    • eksctl CLI: To create a workload cluster with eksctl, run:

      eksctl anywhere create cluster \
          -f eksa-w01-cluster.yaml  \
          --kubeconfig mgmt/mgmt-eks-a-cluster.kubeconfig \
          # --install-packages packages.yaml \ # uncomment to install curated packages at cluster creation      
      

      As noted earlier, adding the --kubeconfig option tells eksctl to use the management cluster identified by that kubeconfig file to create a different workload cluster.

    • kubectl CLI: The cluster lifecycle feature lets you use kubectl, or other tools that can talk to the Kubernetes API, to create a workload cluster. To use kubectl, run:

      kubectl apply -f eksa-w01-cluster.yaml 
      

      To check the state of a cluster managed with the cluster lifecycle feature, use kubectl to show the cluster object with its status.

      The status field on the cluster object holds information about the current state of the cluster.

      kubectl get clusters w01 -o yaml
      

      The cluster has been fully created once the status of the Ready condition is marked True. See the cluster status guide for more information.

  5. To check the workload cluster, get the workload cluster credentials and run a test workload:

    • If your workload cluster was created with eksctl, change your credentials to point to the new workload cluster (for example, w01), then run the test application with:

      export CLUSTER_NAME=w01
      export KUBECONFIG=${PWD}/${CLUSTER_NAME}/${CLUSTER_NAME}-eks-a-cluster.kubeconfig
      kubectl apply -f "https://anywhere.eks.amazonaws.com/manifests/hello-eks-a.yaml"
      
    • If your workload cluster was created with GitOps or Terraform, the kubeconfig for your new cluster is stored as a secret on the management cluster. You can get credentials and run the test application as follows:

      kubectl get secret -n eksa-system w01-kubeconfig -o jsonpath='{.data.value}' | base64 --decode > w01.kubeconfig
      export KUBECONFIG=w01.kubeconfig
      kubectl apply -f "https://anywhere.eks.amazonaws.com/manifests/hello-eks-a.yaml"
      
  6. Add more workload clusters:

    To add more workload clusters, go through the same steps for creating the initial workload, copying the config file to a new name (such as eksa-w02-cluster.yaml), modifying resource names, and running the create cluster command again.
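
    For example, a sketch of that flow with the eksctl CLI (file and object names are illustrative):

    cp eksa-w01-cluster.yaml eksa-w02-cluster.yaml
    # edit eksa-w02-cluster.yaml: rename the Cluster, VSphereDatacenterConfig, and
    # VSphereMachineConfig objects (for example, w01 -> w02), then create the cluster
    eksctl anywhere create cluster \
       -f eksa-w02-cluster.yaml \
       --kubeconfig mgmt/mgmt-eks-a-cluster.kubeconfig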

Next steps

  • See the Cluster management section for more information on common operational tasks like scaling and deleting the cluster.

  • See the Package management section for more information on post-creation curated packages installation.

6.5 - Configure for vSphere

Full EKS Anywhere configuration reference for a VMware vSphere cluster.

This is a generic template with detailed descriptions below for reference.


apiVersion: anywhere.eks.amazonaws.com/v1alpha1
kind: Cluster
metadata:
   name: my-cluster-name             # Name of the cluster (required)
spec:
   clusterNetwork:                   # Cluster network configuration (required)
      cniConfig:                     # Cluster CNI plugin - default: cilium (required)
         cilium: {}
      pods:
         cidrBlocks:                 # Internal Kubernetes subnet CIDR block for pods (required)
            - 192.168.0.0/16
      services:
         cidrBlocks:                 # Internal Kubernetes subnet CIDR block for services (required)
            - 10.96.0.0/12
   controlPlaneConfiguration:        # Specific cluster control plane config (required)
      count: 2                       # Number of control plane nodes (required)
      endpoint:                      # IP for control plane endpoint on your network (required)
         host: xxx.xxx.xxx.xxx
      machineGroupRef:               # vSphere-specific Kubernetes node config (required)
        kind: VSphereMachineConfig
        name: my-cluster-machines
      taints:                        # Taints applied to control plane nodes 
      - key: "key1"
        value: "value1"
        effect: "NoSchedule"
      labels:                        # Labels applied to control plane nodes 
        "key1": "value1"
        "key2": "value2"
   datacenterRef:                    # Kubernetes object with vSphere-specific config 
      kind: VSphereDatacenterConfig
      name: my-cluster-datacenter
   externalEtcdConfiguration:
     count: 3                        # Number of etcd members 
     machineGroupRef:                # vSphere-specific Kubernetes etcd config
        kind: VSphereMachineConfig
        name: my-cluster-machines
   kubernetesVersion: "1.25"         # Kubernetes version to use for the cluster (required)
   workerNodeGroupConfigurations:    # List of node groups you can define for workers (required) 
   - count: 2                        # Number of worker nodes 
     machineGroupRef:                # vSphere-specific Kubernetes node objects (required) 
       kind: VSphereMachineConfig
       name: my-cluster-machines
     name: md-0                      # Name of the worker nodegroup (required) 
     taints:                         # Taints to apply to worker node group nodes 
     - key: "key1"
       value: "value1"
       effect: "NoSchedule"
     labels:                         # Labels to apply to worker node group nodes 
       "key1": "value1"
       "key2": "value2"
---
apiVersion: anywhere.eks.amazonaws.com/v1alpha1
kind: VSphereDatacenterConfig
metadata:
   name: my-cluster-datacenter
spec:
  datacenter: "datacenter1"          # vSphere datacenter name on which to deploy EKS Anywhere (required) 
  server: "myvsphere.local"          # FQDN or IP address of vCenter server (required) 
  network: "network1"                # Path to the VM network on which to deploy EKS Anywhere (required) 
  insecure: false                    # Set to true if vCenter does not have a valid certificate 
  thumbprint: "1E:3B:A1:4C:B2:..."   # SHA1 thumbprint of vCenter server certificate (required if insecure=false)

---
apiVersion: anywhere.eks.amazonaws.com/v1alpha1
kind: VSphereMachineConfig
metadata:
   name: my-cluster-machines
spec:
  diskGiB:  25                         # Size of disk on VMs, if no snapshots
  datastore: "datastore1"              # Path to vSphere datastore to deploy EKS Anywhere on (required)
  folder: "folder1"                    # Path to VM folder for EKS Anywhere cluster VMs (required)
  numCPUs: 2                           # Number of CPUs on virtual machines
  memoryMiB: 8192                      # Size of RAM on VMs
  osFamily: "bottlerocket"             # Operating system on VMs
  resourcePool: "resourcePool1"        # vSphere resource pool for EKS Anywhere VMs (required)
  storagePolicyName: "storagePolicy1"  # Storage policy name associated with VMs
  template: "bottlerocket-kube-v1-25"  # VM template for EKS Anywhere (required for RHEL/Ubuntu-based OVAs)
  cloneMode: "fullClone"               # Clone mode to use when cloning VMs from the template
  users:                               # Add users to access VMs via SSH
  - name: "ec2-user"                   # Name of each user set to access VMs
    sshAuthorizedKeys:                 # SSH keys for user needed to access VMs
    - "ssh-rsa AAAAB3NzaC1yc2E..."
  tags:                                # List of tags to attach to cluster VMs, in URN format
  - "urn:vmomi:InventoryServiceTag:5b3e951f-4e1d-4511-95b1-5ba1ea97245c:GLOBAL"
  - "urn:vmomi:InventoryServiceTag:cfee03d0-0189-4f27-8c65-fe75086a86cd:GLOBAL"

The following additional optional configuration can also be included:

Cluster Fields

name (required)

Name of your cluster my-cluster-name in this example

clusterNetwork (required)

Network configuration.

clusterNetwork.cniConfig (required)

CNI plugin configuration. Supports cilium.

clusterNetwork.cniConfig.cilium.policyEnforcementMode (optional)

Optionally specify a policyEnforcementMode of default, always or never.

clusterNetwork.cniConfig.cilium.egressMasqueradeInterfaces (optional)

Optionally specify a network interface name or interface prefix used for masquerading. See EgressMasqueradeInterfaces option.

clusterNetwork.cniConfig.cilium.skipUpgrade (optional)

When true, skip Cilium maintenance during upgrades. Also see Use a custom CNI.

clusterNetwork.cniConfig.cilium.routingMode (optional)

Optionally specify the routing mode. Accepts default and direct. Also see RoutingMode option.

clusterNetwork.cniConfig.cilium.ipv4NativeRoutingCIDR (optional)

Optionally specify the CIDR to use when RoutingMode is set to direct. When specified, Cilium assumes networking for this CIDR is preconfigured and hands traffic destined for that range to the Linux network stack without applying any SNAT.

clusterNetwork.cniConfig.cilium.ipv6NativeRoutingCIDR (optional)

Optionally specify the IPv6 CIDR to use when RoutingMode is set to direct. When specified, Cilium assumes networking for this CIDR is preconfigured and hands traffic destined for that range to the Linux network stack without applying any SNAT.

clusterNetwork.pods.cidrBlocks[0] (required)

The pod subnet specified in CIDR notation. Only 1 pod CIDR block is permitted. The CIDR block should not conflict with the host or service network ranges.

clusterNetwork.services.cidrBlocks[0] (required)

The service subnet specified in CIDR notation. Only 1 service CIDR block is permitted. This CIDR block should not conflict with the host or pod network ranges.

clusterNetwork.dns.resolvConf.path (optional)

File path to a file containing a custom DNS resolver configuration.

controlPlaneConfiguration (required)

Specific control plane configuration for your Kubernetes cluster.

controlPlaneConfiguration.count (required)

Number of control plane nodes

controlPlaneConfiguration.machineGroupRef (required)

Refers to the Kubernetes object with vsphere specific configuration for your nodes. See VSphereMachineConfig Fields below.

controlPlaneConfiguration.endpoint.host (required)

A unique IP you want to use for the control plane VM in your EKS Anywhere cluster. Choose an IP in your network range that does not conflict with other VMs.

NOTE: This IP should be outside the network DHCP range as it is a floating IP that gets assigned to one of the control plane nodes for kube-apiserver loadbalancing. Suggestions on how to ensure this IP does not cause issues during cluster creation process are here

controlPlaneConfiguration.taints (optional)

A list of taints to apply to the control plane nodes of the cluster.

Replaces the default control plane taint. For k8s versions prior to 1.24, it replaces node-role.kubernetes.io/master. For k8s versions 1.24+, it replaces node-role.kubernetes.io/control-plane. The default control plane components will tolerate the provided taints.

Modifying the taints associated with the control plane configuration will cause new nodes to be rolled-out, replacing the existing nodes.

NOTE: The taints provided will be used instead of the default control plane taint. Any pods that you run on the control plane nodes must tolerate the taints you provide in the control plane configuration.

controlPlaneConfiguration.labels (optional)

A list of labels to apply to the control plane nodes of the cluster. This is in addition to the labels that EKS Anywhere will add by default.

Modifying the labels associated with the control plane configuration will cause new nodes to be rolled out, replacing the existing nodes.

workerNodeGroupConfigurations (required)

This takes in a list of node groups that you can define for your workers. You may define one or more worker node groups.

workerNodeGroupConfigurations[*].count (optional)

Number of worker nodes (default: 1). This value is ignored if the cluster autoscaler curated package is installed and autoscalingConfiguration is used to specify the desired range of replicas.

Refer to the troubleshooting topic machine health check remediation not allowed and choose a count sufficient to allow machine health check remediation.

workerNodeGroupConfigurations[*].machineGroupRef (required)

Refers to the Kubernetes object with vsphere specific configuration for your nodes. See VSphereMachineConfig Fields below.

workerNodeGroupConfigurations[*].name (required)

Name of the worker node group (default: md-0)

workerNodeGroupConfigurations[*].autoscalingConfiguration.minCount (optional)

Minimum number of nodes for this node group’s autoscaling configuration.

workerNodeGroupConfigurations[*].autoscalingConfiguration.maxCount (optional)

Maximum number of nodes for this node group’s autoscaling configuration.
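
For example, a minimal sketch of a worker node group that the cluster autoscaler can scale between one and five nodes (values are illustrative):

  workerNodeGroupConfigurations:
  - name: md-0
    machineGroupRef:
      kind: VSphereMachineConfig
      name: my-cluster-machines
    autoscalingConfiguration:
      minCount: 1
      maxCount: 5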

workerNodeGroupConfigurations[*].taints (optional)

A list of taints to apply to the nodes in the worker node group.

Modifying the taints associated with a worker node group configuration will cause new nodes to be rolled-out, replacing the existing nodes associated with the configuration.

At least one node group must NOT have NoSchedule or NoExecute taints applied to it.

workerNodeGroupConfigurations[*].labels (optional)

A list of labels to apply to the nodes in the worker node group. This is in addition to the labels that EKS Anywhere will add by default.

Modifying the labels associated with a worker node group configuration will cause new nodes to be rolled out, replacing the existing nodes associated with the configuration.

workerNodeGroupConfigurations[*].kubernetesVersion (optional)

The Kubernetes version you want to use for this worker node group. Supported values : 1.28, 1.27, 1.26, 1.25, 1.24

Must be less than or equal to the cluster kubernetesVersion defined at the root level of the cluster spec. The worker node kubernetesVersion must be no more than two minor Kubernetes versions lower than the cluster control plane’s Kubernetes version. Removing workerNodeGroupConfiguration.kubernetesVersion will trigger an upgrade of the node group to the kubernetesVersion defined at the root level of the cluster spec.
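
For example, a sketch of a worker node group pinned one minor version below the control plane (values are illustrative):

  kubernetesVersion: "1.28"
  workerNodeGroupConfigurations:
  - name: md-0
    count: 2
    kubernetesVersion: "1.27"
    machineGroupRef:
      kind: VSphereMachineConfig
      name: my-cluster-machines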

externalEtcdConfiguration.count (optional)

Number of etcd members

externalEtcdConfiguration.machineGroupRef (optional)

Refers to the Kubernetes object with vsphere specific configuration for your etcd members. See VSphereMachineConfig Fields below.

datacenterRef (required)

Refers to the Kubernetes object with vsphere environment specific configuration. See VSphereDatacenterConfig Fields below.

kubernetesVersion (required)

The Kubernetes version you want to use for your cluster. Supported values : 1.28, 1.27, 1.26, 1.25, 1.24

VSphereDatacenterConfig Fields

datacenter (required)

The name of the vSphere datacenter to deploy the EKS Anywhere cluster on. For example SDDC-Datacenter.

network (required)

The path to the VM network to deploy your EKS Anywhere cluster on. For example, /<DATACENTER>/network/<NETWORK_NAME>. Use govc find -type n to see a list of networks.

server (required)

The vCenter server fully qualified domain name or IP address. If the server IP is used, the thumbprint must be set or insecure must be set to true.

insecure (optional)

Set insecure to true if the vCenter server does not have a valid certificate. (Default: false)

thumbprint (required if insecure=false)

The SHA1 thumbprint of the vCenter server certificate which is only required if you have a self signed certificate.

There are several ways to obtain your vCenter thumbprint. The easiest way, if you have govc installed, is to run:

govc about.cert -thumbprint -k

Another way is from the vCenter web UI: go to Administration/Certificate Management and click view details of the machine certificate. Note that the thumbprint shown there does not exactly match the required format; you will need to add a : to separate each pair of hexadecimal values.

Another way to get the thumbprint is to use this command with your server's certificate in a file named ca.crt:

openssl x509 -sha1 -fingerprint -in ca.crt -noout
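
If you do not already have the certificate saved locally, the following sketch first retrieves it from the server (this assumes the vCenter server listens on port 443; replace myvsphere.local with your server):

openssl s_client -connect myvsphere.local:443 </dev/null 2>/dev/null | openssl x509 > ca.crt
openssl x509 -sha1 -fingerprint -in ca.crt -noout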

If you specify the wrong thumbprint, an error message will be printed with the expected thumbprint. If no valid certificate is being used, insecure must be set to true.

VSphereMachineConfig Fields

memoryMiB (optional)

Size of RAM on virtual machines (Default: 8192)

numCPUs (optional)

Number of CPUs on virtual machines (Default: 2)

osFamily (optional)

Operating System on virtual machines. Permitted values: bottlerocket, ubuntu, redhat (Default: bottlerocket)

diskGiB (optional)

Size of disk on virtual machines if snapshots aren’t included (Default: 25)

users (optional)

The users you want to configure to access your virtual machines. Only one is permitted at this time

users[0].name (optional)

The name of the user you want to configure to access your virtual machines through ssh.

The default is ec2-user if osFamily=bottlerocket and capv if osFamily=ubuntu

users[0].sshAuthorizedKeys (optional)

The SSH public keys you want to configure to access your virtual machines through ssh (as described below). Only 1 is supported at this time.

users[0].sshAuthorizedKeys[0] (optional)

This is the SSH public key that will be placed in authorized_keys on all EKS Anywhere cluster VMs so you can ssh into them. The user will be what is defined under name above. For example:

ssh -i <private-key-file> <user>@<VM-IP>

If you do not specify a value, a key is generated in your $(pwd)/<cluster-name> folder by default.

template (optional)

The VM template to use for your EKS Anywhere cluster. This template was created when you imported the OVA file into vSphere . This is a required field if you are using Ubuntu-based or RHEL-based OVAs. The template must contain the Cluster.Spec.KubernetesVersion or Cluster.Spec.WorkerNodeGroupConfiguration[].KubernetesVersion version (in case of modular upgrade). For example, if the Kubernetes version is 1.24, template must include 1.24, 1_24, 1-24 or 124.

cloneMode (optional)

cloneMode defines the clone mode to use when creating the cluster VMs from the template. Allowed values are:

  • fullClone: With full clone, the cloned VM is a separate independent copy of the template. This makes provisioning the VMs a bit slower at the cost of better customization and performance.
  • linkedClone: With linked clone, the cloned VM shares the parent template’s virtual disk. This makes provisioning the VMs faster while also saving the disk space. Linked clone does not allow customizing the disk size. The template should meet the following properties to use linkedClone:
    • The template needs to have a snapshot
    • The template’s disk size must match the VSphereMachineConfig’s diskGiB

If this field is not specified, EKS Anywhere tries to determine the clone mode based on the following criteria:

  • It uses linkedClone if the template has snapshots and the template diskSize matches the machineConfig DiskGiB.
  • Otherwise, it uses full clone.

datastore (required)

The path to the vSphere datastore to deploy your EKS Anywhere cluster on, for example /<DATACENTER>/datastore/<DATASTORE_NAME>. Use govc find -type s to get a list of datastores.

folder (required)

The path to a VM folder for your EKS Anywhere cluster VMs. This allows you to organize your VMs. If the folder does not exist, it will be created for you. If the folder is blank, the VMs will go in the root folder. For example /<DATACENTER>/vm/<FOLDER_NAME>/.... Use govc find -type f to get a list of existing folders.

resourcePool (required)

The vSphere Resource pools for your VMs in the EKS Anywhere cluster. Examples of resource pool values include:

  • If there is no resource pool: /<datacenter>/host/<cluster-name>/Resources
  • If there is a resource pool: /<datacenter>/host/<cluster-name>/Resources/<resource-pool-name>
  • The wild card option */Resources also often works.

Use govc find -type p to get a list of available resource pools.

storagePolicyName (optional)

The storage policy name associated with your VMs. Generally this can be left blank. Use govc storage.policy.ls to get a list of available storage policies.

tags (optional)

Optional list of tags to attach to your cluster VMs in the URN format.

Example:

  tags:
  - urn:vmomi:InventoryServiceTag:8e0ce079-0675-47d6-8665-16ada4e6dabd:GLOBAL

hostOSConfig (optional)

Optional host OS configurations for the EKS Anywhere Kubernetes nodes. More information in the Host OS Configuration section.

Optional VSphere Credentials

Use the following environment variables to configure the Cloud Provider with different credentials.

EKSA_VSPHERE_CP_USERNAME

Username for Cloud Provider (Default: $EKSA_VSPHERE_USERNAME).

EKSA_VSPHERE_CP_PASSWORD

Password for Cloud Provider (Default: $EKSA_VSPHERE_PASSWORD).
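
For example (placeholder values shown; use single quotes so your shell does not interpret special characters):

export EKSA_VSPHERE_CP_USERNAME='cloud-provider-user'
export EKSA_VSPHERE_CP_PASSWORD='cloud-provider-password'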

6.6 - Customize vSphere

Customizing EKS Anywhere on vSphere

6.6.1 - Import OVAs

Importing EKS Anywhere OVAs to vSphere

If you want to specify an OVA template, you will need to import OVA files into vSphere before you can use them in your EKS Anywhere cluster. This guide was written using VMware Cloud on AWS, but the VMware OVA import guide can be found here.

EKS Anywhere supports the following operating system families:

  • Bottlerocket (default)
  • Ubuntu
  • RHEL

A list of OVAs for this release can be found on the artifacts page.

Using vCenter Web User Interface

  1. Right click on your Datacenter and select Deploy OVF Template.

  2. Select an OVF template using a URL or by selecting a local OVF file, and click Next. If you are not able to select an OVF template using a URL, download the file and use the Local file option.

    Note: If you are using Bottlerocket OVAs, please select the local file option.

  3. Select a folder where you want to deploy your OVF package (most of our OVF templates are under the SDDC-Datacenter directory) and click Next. You cannot have an OVF template with the same name in one directory. For workload VM templates, leave the Kubernetes version in the template name for reference. A workload VM template will support at least one prior Kubernetes major version.

  4. Select any compute resource (cluster-1, 10.2.34.5, etc.) on which to run the deployed VM and click Next.

  5. Review the details and click Next.

  6. Accept the agreement and click Next.

  7. Select the appropriate storage (e.g. “WorkloadDatastore“) and click Next.

  8. Select destination network (e.g. “sddc-cgw-network-1”) and click Next.

  9. Finish.

  10. Snapshot the VM. Right click on the imported VM and select Snapshots -> Take Snapshot… (It is highly recommended that you snapshot the VM. This reduces the time it takes to provision machines and makes cluster creation faster. If you prefer not to take a snapshot, skip to step 13.)

  11. Name your snapshot (e.g. “root”) and click Create.

  12. Snapshots for the imported VM should now show up under the Snapshots tab for the VM.

  13. Right click on the imported VM and select Template -> Convert to Template.

Steps to deploy a template using GOVC (CLI)

To deploy a template using govc, you must first ensure that you have govc installed. You need to set and export three environment variables to run govc: GOVC_USERNAME, GOVC_PASSWORD, and GOVC_URL.
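
For example (placeholder values shown; GOVC_URL is the vCenter server FQDN or IP address):

export GOVC_USERNAME='billy@vsphere.local'
export GOVC_PASSWORD='t0p$ecret'
export GOVC_URL='myvsphere.local'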

  1. Import the template to a content library in vCenter using URL or selecting a local OVA file

    Using URL:

    govc library.import -k -pull <library name> <URL for the OVA file>
    

    Using a file from the local machine:

    govc library.import <library name> <path to OVA file on local machine>
    
  2. Deploy the template

    govc library.deploy -pool <resource pool> -folder <folder location to deploy template> /<library name>/<template name> <name of new VM>
    

    2a. If you are using a Bottlerocket template for a Kubernetes version newer than 1.21, resize disk 1 to 22G:

    govc vm.disk.change -vm <template name> -disk.label "Hard disk 1" -size 22G
    

    2b. If you are using a Bottlerocket template for Kubernetes version 1.21, resize disk 2 to 20G:

    govc vm.disk.change -vm <template name> -disk.label "Hard disk 2" -size 20G
    
  3. Take a snapshot of the VM. (It is highly recommended that you snapshot the VM. This reduces the time it takes to provision machines and makes cluster creation faster. If you prefer not to take a snapshot, skip this step.)

    govc snapshot.create -vm ubuntu-2004-kube-v1.25.6 root
    
  4. Mark the new VM as a template

    govc vm.markastemplate <name of new VM>
    

Important Additional Steps to Tag the OVA

Using vCenter UI

Tag to indicate OS family

  1. Select the template that was newly created in the steps above and navigate to Summary -> Tags.
  2. Click Assign -> Add Tag to create a new tag and attach it.
  3. Name the tag os:ubuntu or os:bottlerocket.

Tag to indicate eksd release

  1. Select the template that was newly created in the steps above and navigate to Summary -> Tags.
  2. Click Assign -> Add Tag to create a new tag and attach it.
  3. Name the tag eksdRelease:{eksd release for the selected ova}, for example eksdRelease:kubernetes-1-25-eks-5 for the 1.25 ova. You can find the rest of the eksd releases in the previous section. If this is the first time you have added an eksdRelease tag, you will need to create the category first. Click on “Create New Category” and name it eksdRelease.

Using govc

Tag to indicate OS family

  1. Create tag category
govc tags.category.create -t VirtualMachine os
  2. Create tags os:ubuntu and os:bottlerocket
govc tags.create -c os os:bottlerocket
govc tags.create -c os os:ubuntu
  3. Attach newly created tag to the template
govc tags.attach os:bottlerocket <Template Path>
govc tags.attach os:ubuntu <Template Path>
  4. Verify tag is attached to the template
govc tags.ls <Template Path> 

Tag to indicate eksd release

  1. Create tag category
govc tags.category.create -t VirtualMachine eksdRelease
  2. Create the proper eksd release tag, depending on your template. You can find the eksd releases in the previous section. For example eksdRelease:kubernetes-1-25-eks-5 for the 1.25 template.
govc tags.create -c eksdRelease eksdRelease:kubernetes-1-25-eks-5
  3. Attach newly created tag to the template
govc tags.attach eksdRelease:kubernetes-1-25-eks-5 <Template Path>
  4. Verify tag is attached to the template
govc tags.ls <Template Path> 

After you are done you can use the template for your workload cluster.

6.6.2 - Custom Ubuntu OVAs

Customizing Imported Ubuntu OVAs

You may need to make specific configuration changes on the imported OVA template before using it to create or update EKS-A clusters.

Set up SSH Access for Imported OVA

An SSH user and key need to be configured in order to allow SSH login to the VM template.

Clone template to VM

Create an environment variable to hold the name of modified VM/template

export VM=<vm-name>

Clone the imported OVA template to create a VM:

govc vm.clone -on=false -vm=<full-path-to-imported-template> -folder=<full-path-to-folder-that-will-contain-the-VM> -ds=<datastore> $VM

Configure VM with cloud-init and the VMX GuestInfo datasource

Create a metadata.yaml file

instance-id: cloud-vm
local-hostname: cloud-vm
network:
  version: 2
  ethernets:
    nics:
      match:
        name: ens*
      dhcp4: yes

Create a userdata.yaml file

#cloud-config

users:
  - default
  - name: <username>
    primary_group: <username>
    sudo: ALL=(ALL) NOPASSWD:ALL
    groups: sudo, wheel
    ssh_import_id: None
    lock_passwd: true
    ssh_authorized_keys:
    - <user's ssh public key>

Export environment variables containing the cloud-init metadata and userdata

export METADATA=$(gzip -c9 <metadata.yaml | { base64 -w0 2>/dev/null || base64; }) \
       USERDATA=$(gzip -c9 <userdata.yaml | { base64 -w0 2>/dev/null || base64; })

Assign metadata and userdata to VM’s guestinfo

govc vm.change -vm "${VM}" \
  -e guestinfo.metadata="${METADATA}" \
  -e guestinfo.metadata.encoding="gzip+base64" \
  -e guestinfo.userdata="${USERDATA}" \
  -e guestinfo.userdata.encoding="gzip+base64"

Power the VM on

govc vm.power -on "$VM"

Customize the VM

Once the VM is powered on and fetches an IP address, ssh into the VM using your private key corresponding to the public key specified in userdata.yaml

ssh -i <private-key-file> username@<VM-IP>

At this point, you can make the desired configuration changes on the VM. The following sections describe some of the things you may want to do:

Add a Certificate Authority

Copy your CA certificate under /usr/local/share/ca-certificates and run sudo update-ca-certificates which will place the certificate under the /etc/ssl/certs directory.
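
For example, a short sketch (the certificate file name is illustrative):

sudo cp my-ca.crt /usr/local/share/ca-certificates/my-ca.crt
sudo update-ca-certificates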

Add Authentication Credentials for a Private Registry

If /etc/containerd/config.toml is not present initially, the default configuration can be generated by running the containerd config default > /etc/containerd/config.toml command. To configure a credential for a specific registry, create/modify the /etc/containerd/config.toml as follows:

# explicitly use v2 config format
version = 2

# The registry host has to be a domain name or IP. Port number is also
# needed if the default HTTPS or HTTP port is not used.
[plugins."io.containerd.grpc.v1.cri".registry.configs."registry1-host:port".auth]
  username = ""
  password = ""
  auth = ""
  identitytoken = ""
 # The registry host has to be a domain name or IP. Port number is also
 # needed if the default HTTPS or HTTP port is not used.
[plugins."io.containerd.grpc.v1.cri".registry.configs."registry2-host:port".auth]
  username = ""
  password = ""
  auth = ""
  identitytoken = ""

Restart containerd service with the sudo systemctl restart containerd command.

Convert VM to a Template

After you have customized the VM, you need to convert it to a template.

Cleanup the machine and power off the VM

This step is needed because of a known issue in Ubuntu which results in cloned VMs getting the same DHCP IP address.

sudo su
echo -n > /etc/machine-id
rm /var/lib/dbus/machine-id
ln -s /etc/machine-id /var/lib/dbus/machine-id
cloud-init clean -l --machine-id

Delete the hostname from the file:

/etc/hostname

Delete the networking config file

rm -rf /etc/netplan/50-cloud-init.yaml

Edit the cloud-init config to set preserve_hostname to false:

vi /etc/cloud/cloud.cfg
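
After the edit, the relevant line in /etc/cloud/cloud.cfg should read:

preserve_hostname: false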

Power the VM down

govc vm.power -off "$VM"

Take a snapshot of the VM

It is recommended to take a snapshot of the VM as it reduces the provisioning time for the machines and makes cluster creation faster.

If you do snapshot the VM, you will not be able to customize the disk size of your cluster VMs. If you prefer not to take a snapshot, skip this step.

govc snapshot.create -vm "$VM" root

Convert VM to template

govc vm.markastemplate $VM

Tag the template appropriately as described here

Use this customized template to create/upgrade EKS Anywhere clusters

6.7 -

EKS Anywhere needs to be able to reach the following endpoints:

  • vCenter endpoint (must be accessible to EKS Anywhere clusters)
  • public.ecr.aws
  • anywhere-assets.eks.amazonaws.com (to download the EKS Anywhere binaries, manifests and OVAs)
  • distro.eks.amazonaws.com (to download EKS Distro binaries and manifests)
  • d2glxqk2uabbnd.cloudfront.net (for EKS Anywhere and EKS Distro ECR container images)
  • api.ecr.us-west-2.amazonaws.com (for EKS Anywhere package authentication matching your region)
  • d5l0dvt14r5h8.cloudfront.net (for EKS Anywhere package ECR container images)
  • api.github.com (only if GitOps is enabled)

7 - Create Bare Metal cluster

Create an EKS Anywhere cluster on Bare Metal

7.1 - Overview

Overview of EKS Anywhere cluster creation on bare metal

Creating a Bare Metal cluster

The following diagram illustrates what happens when you create an EKS Anywhere cluster on bare metal. You can run EKS Anywhere on bare metal as a single node cluster with the Kubernetes control plane and workloads co-located on a single server, as a multi-node cluster with the Kubernetes control plane and workloads co-located on the same servers, and as a multi-node cluster with the Kubernetes control plane and worker nodes on different, dedicated servers.

Start creating a Bare Metal cluster

Start creating EKS Anywhere Bare Metal cluster

1. Generate a config file for Bare Metal

Pass the provider (--provider tinkerbell) and the cluster name to the eksctl anywhere generate clusterconfig command and direct the output into a cluster config .yaml file.

2. Modify the config file and hardware CSV file

Modify the generated cluster config file to suit your situation. Details about this config file are contained on the Bare Metal Config page. Create a hardware configuration file (hardware.csv) as described in Prepare hardware inventory .

3. Launch the cluster creation

Run the eksctl anywhere create cluster command, providing the cluster config and hardware CSV files. To see details on the cluster creation process, increase verbosity (-v=9 provides maximum verbosity).
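
For example, using the file names from the sample output later in this section:

eksctl anywhere create cluster -f eksa-mgmt-cluster.yaml --hardware-csv hardware.csv -v=9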

4. Create bootstrap cluster and provision hardware

The cluster creation process starts by creating a temporary Kubernetes bootstrap cluster on the Administrative machine. Containerized components of the Tinkerbell provisioner run either as pods on the bootstrap cluster (Hegel, Rufio, and Tink) or directly as containers on Docker (Boots). Those Tinkerbell components drive the provisioning of the operating systems and Kubernetes components on each of the physical computers.

With the information gathered from the cluster specification and the hardware CSV file, three custom resource definitions (CRDs) are created. These include:

  • Hardware custom resources: Which store hardware information for each machine
  • Template custom resources: Which store the tasks and actions
  • Workflow custom resources: Which put together the complete hardware and template information for each machine. There are different workflows for control plane and worker nodes.

As the bootstrap cluster comes up and Tinkerbell components are started, you should see messages like the following:

$ eksctl anywhere create cluster --hardware-csv hardware.csv -f eksa-mgmt-cluster.yaml
Performing setup and validations
Tinkerbell Provider setup is valid
Validate certificate for registry mirror
Create preflight validations pass
Creating new bootstrap cluster
Provider specific pre-capi-install-setup on bootstrap cluster
Installing cluster-api providers on bootstrap cluster
Provider specific post-setup
Creating new workload cluster

At this point, Tinkerbell will try to boot up the machines in the target cluster.

Continuing cluster creation

Tinkerbell takes over the activities for provisioning the Bare Metal machines that become the new target cluster. See Overview of Tinkerbell in EKS Anywhere for examples of commands you can run to watch over this process.

Continue creating EKS Anywhere Bare Metal cluster

1. Tinkerbell network boots and configures nodes

  • Rufio uses BMC information to set the power state for the first control plane node it wants to provision.
  • When the node boots from its NIC, it talks to the Boots DHCP server, which fetches the kernel and initramfs (HookOS) needed to network boot the machine.
  • With HookOS running on the node, the operating system identified by IMG_URL in the cluster specification is copied to the identified DEST_DISK on the machine.
  • The Hegel component provides data stores that contain information used by services such as cloud-init to configure each system.
  • Next, the workflow is run on the first control plane node, followed by network booting and running the workflow for each subsequent control plane node.
  • Once the control plane is up, worker nodes are network booted and workflows are run to deploy each node.

2. Tinkerbell components move to the target cluster

Once all the defined nodes are added to the cluster, the Tinkerbell components and associated data are moved to run as pods on worker nodes in the new workload cluster.

Deleting Tinkerbell from Admin machine

All Tinkerbell-related pods and containers are then deleted from the Admin machine. Further management of Tinkerbell and related information can be done from the new cluster using tools such as kubectl.

Delete Tinkerbell pods and container

Using Tinkerbell on EKS Anywhere

The sections below step through how Tinkerbell is integrated with EKS Anywhere to deploy a Bare Metal cluster. While based on features described in Tinkerbell Documentation , EKS Anywhere has modified and added to Tinkerbell components such that the entire Tinkerbell stack is now Kubernetes-friendly and can run on a Kubernetes cluster.

Create bare metal CSV file

The information that Tinkerbell uses to provision machines for the target EKS Anywhere cluster needs to be gathered in a CSV file with the following format:

hostname,bmc_ip,bmc_username,bmc_password,mac,ip_address,netmask,gateway,nameservers,labels,disk
eksa-cp01,10.10.44.1,root,PrZ8W93i,CC:48:3A:00:00:01,10.10.50.2,255.255.254.0,10.10.50.1,8.8.8.8,type=cp,/dev/sda
...

Each physical, bare metal machine is represented by a comma-separated list of information on a single line. It includes information needed to identify each machine (the NIC’s MAC address), network boot the machine, point to the disk to install on, and then configure and start the installed system. See Preparing hardware inventory for details on the content and format of that file.

Modify the cluster specification file

Before you create a cluster using the Bare Metal configuration file, you can make Tinkerbell-related changes to that file. In particular, TinkerbellDatacenterConfig fields, TinkerbellMachineConfig fields, and Tinkerbell Actions can be added or modified.

Tinkerbell actions vary based on the operating system you choose for your EKS Anywhere cluster. Actions are stored internally and not shown in the generated cluster specification file, so you must add those sections yourself to change from the defaults (see Ubuntu TinkerbellTemplateConfig example and Bottlerocket TinkerbellTemplateConfig example for details).

In most cases, you don’t need to touch the default actions. However, you might want to modify an action (for example to change kexec to a reboot action if the hardware requires it) or add an action to further configure the installed system. Examples in Advanced Bare Metal cluster configuration show a few actions you might want to add.

Once you have made all your modifications, you can go ahead and create the cluster. The next section describes how Tinkerbell works during cluster creation to provision your Bare Metal machines and prepare them to join the EKS Anywhere cluster.

7.2 - Tinkerbell Concepts

Overview of Tinkerbell and network booting for EKS Anywhere on Bare Metal

NOTE: The Boots service has been renamed to Smee by the upstream Tinkerbell community. Any reference to Boots or Smee refer to the same service. The commands for the logs and expected pods mentioned in this doc are still the proper commands to run.

EKS Anywhere uses Tinkerbell to provision machines for a Bare Metal cluster. Understanding what Tinkerbell is and how it works with EKS Anywhere can help you take advantage of advanced provisioning features or overcome provisioning problems you encounter.

As someone deploying an EKS Anywhere cluster on Bare Metal, you have several opportunities to interact with Tinkerbell:

  • Create a hardware CSV file: You are required to create a hardware CSV file that contains an entry for every physical machine you want to add at cluster creation time.
  • Create an EKS Anywhere cluster: By modifying the Bare Metal configuration file used to create a cluster, you can change some Tinkerbell settings or add actions to define how the operating system on each machine is configured.
  • Monitor provisioning: You can follow along with the Tinkerbell Overview in this page to monitor the progress of your hardware provisioning, as Tinkerbell finds machines and attempts to network boot, configure, and restart them.

When you run the command to create an EKS Anywhere Bare Metal cluster, a set of Tinkerbell components start up on the Admin machine. One of these components runs in a container on Docker (Boots), while other components run as either controllers or services in pods on the Kubernetes kind cluster that is started up on the Admin machine. Tinkerbell components include Boots, Hegel, Rufio, and Tink.

Tinkerbell Boots service (Smee service)

The Boots service runs in a single container to handle the DHCP service and network booting activities. In particular, Boots hands out IP addresses, serves iPXE binaries via HTTP and TFTP, delivers an iPXE script to the provisioned machines, and runs a syslog server.

Boots is different from the other Tinkerbell services because the DHCP service it runs must listen directly to layer 2 traffic. (The kind cluster running on the Admin machine doesn’t have the ability to have pods listening on layer 2 networks, which is why Boots is run directly on Docker instead, with host networking enabled.)

Because Boots is running as a container in Docker, you can see the output in the logs for the Boots container by running:

docker logs boots

From the logs output, you will see iPXE try to network boot each machine. If the process doesn’t get all the information it wants from the DHCP server, it will time out. You can see iPXE loading variables, loading a kernel and initramfs (via DHCP), then booting into that kernel and initramfs: in other words, you will see everything that happens with iPXE before it switches over to the kernel and initramfs. The kernel, initramfs, and all images retrieved later are obtained remotely over HTTP and HTTPS.

Tinkerbell Hegel, Rufio, and Tink components

After Boots comes up on Docker, a small Kubernetes kind cluster starts up on the Admin machine. Other Tinkerbell components run as pods on that kind cluster. Those components include:

  • Hegel: Manages Tinkerbell’s metadata service. The Hegel service gets its metadata from the hardware specification stored in Kubernetes in the form of custom resources. The format that it serves is similar to the EC2 metadata format.
  • Rufio: Handles talking to BMCs (which manage things like starting and stopping systems with IPMI or Redfish). The Rufio Kubernetes controller sets things such as power state and persistent boot order. BMC authentication is managed with Kubernetes secrets.
  • Tink: The Tink service consists of three components: Tink server, Tink controller, and Tink worker. The Tink controller manages hardware data, templates you want to execute, and the workflows that each target specific hardware you are provisioning. The Tink worker is a small binary that runs inside of HookOS and talks to the Tink server. The worker sends the Tink server its MAC address and asks the server for workflows to run. The Tink worker will then go through each action, one-by-one, and try to execute it.

To see those services and controllers running on the kind bootstrap cluster, type:

kubectl get pods -n eksa-system
NAME                                      READY STATUS    RESTARTS AGE
hegel-sbchp                               1/1   Running   0        3d
rufio-controller-manager-5dcc568c79-9kllz 1/1   Running   0        3d
tink-controller-manager-54dc786db6-tm2c5  1/1   Running   0        3d
tink-server-5c494445bc-986sl              1/1   Running   0        3d

Provisioning hardware with Tinkerbell

After you start up the cluster create process, the following is the general workflow that Tinkerbell performs to begin provisioning the bare metal machines and prepare them to become part of the EKS Anywhere target cluster. You can set up kubectl on the Admin machine to access the bootstrap cluster and follow along:

export KUBECONFIG=${PWD}/${CLUSTER_NAME}/generated/${CLUSTER_NAME}.kind.kubeconfig

Power up the nodes

Tinkerbell starts by finding a node from the hardware list (based on MAC address) and contacting it to identify a baseboard management job (job.bmc) that runs a set of baseboard management tasks (task.bmc). To see that information, type:

kubectl get job.bmc -A
NAMESPACE    NAME                                           AGE
eksa-system  mycluster-md-0-1656099863422-vxvh2-provision   12m
kubectl get tasks.bmc -A
NAMESPACE    NAME                                                AGE
eksa-system  mycluster-md-0-1656099863422-vxh2-provision-task-0  55s
eksa-system  mycluster-md-0-1656099863422-vxh2-provision-task-1  51s
eksa-system  mycluster-md-0-1656099863422-vxh2-provision-task-2  47s

The following shows snippets from the tasks.bmc output that represent the three tasks: Power Off, enable network boot, and Power On.

kubectl describe tasks.bmc -n eksa-system mycluster-md-0-1656099863422-vxh2-provision-task-0
...
  Task:
    Power Action:  Off
Status:
  Completion Time:   2022-06-27T20:32:59Z
  Conditions:
    Status:    True
    Type:      Completed 
kubectl describe tasks.bmc -n eksa-system mycluster-md-0-1656099863422-vxh2-provision-task-1
...
  Task:
    One Time Boot Device Action:
      Device:
        pxe
      Efi Boot:  true
Status:
  Completion Time:   2022-06-27T20:33:04Z
  Conditions:
    Status:    True
    Type:      Completed   
kubectl describe tasks.bmc -n eksa-system mycluster-md-0-1656099863422-vxh2-provision-task-2
  Task:
    Power Action:  on
Status:
  Completion Time:   2022-06-27T20:33:10Z
  Conditions:
    Status:    True
    Type:      Completed   

Rufio converts the baseboard management jobs into task objects, then goes ahead and executes each task. To see Rufio logs, type:

kubectl logs -n eksa-system rufio-controller-manager-5dcc568c79-9kllz | less

Network booting the nodes

Next the Boots service netboots the machine and begins streaming the HookOS (vmlinuz and initramfs) to the machine. HookOS runs in memory and provides the installation environment. To watch the Boots log messages as each node powers up, type:

docker logs boots 

You can search the output for vmlinuz and initramfs to watch as the HookOS is downloaded and booted from memory on each machine.

Running workflows

Once the HookOS is up, Tinkerbell begins running the tasks and actions contained in the workflows. This is coordinated between the Tink worker, running in memory within the HookOS on the machine, and the Tink server on the kind cluster. To see the workflows being run, type the following:

kubectl get workflows.tinkerbell.org -n eksa-system
NAME                                TEMPLATE                            STATE
mycluster-md-0-1656099863422-vxh2   mycluster-md-0-1656099863422-vxh2   STATE_RUNNING

This shows the workflow for the first machine that is being provisioned. Add -o yaml to see details of that workflow template:

kubectl get workflows.tinkerbell.org -n eksa-system -o yaml
...
status:
  state: STATE_RUNNING
  tasks:
  - actions:
    - environment:
        COMPRESSED: "true"
        DEST_DISK: /dev/sda
        IMG_URL: https://anywhere-assets.eks.amazonaws.com/releases/bundles/11/artifacts/raw/1-22/bottlerocket-v1.22.10-eks-d-1-22-8-eks-a-11-amd64.img.gz
      image: public.ecr.aws/eks-anywhere/tinkerbell/hub/image2disk:6c0f0d437bde2c836d90b000312c8b25fa1b65e1-eks-a-15
      name: stream-image
      seconds: 35
      startedAt: "2022-06-27T20:37:39Z"
      status: STATE_SUCCESS
...

You can see that the first action in the workflow is to stream (stream-image) the operating system to the destination disk (DEST_DISK) on the machine. In this example, the Bottlerocket operating system that will be copied to disk (/dev/sda) is being served from the location specified by IMG_URL. The action was successful (STATE_SUCCESS) and it took 35 seconds.

Each action and its status is shown in this output for the whole workflow. To see details of the default actions for each supported operating system, see the Ubuntu TinkerbellTemplateConfig example and Bottlerocket TinkerbellTemplateConfig example.

In general, the actions include:

  • Streaming the operating system image to disk on each machine.
  • Configuring the network interfaces on each machine.
  • Setting up the cloud-init or similar service to add users and otherwise configure the system.
  • Identifying the data source to add to the system.
  • Setting the kernel to pivot to the installed system (using kexec) or having the system reboot to bring up the installed system from disk.

If all goes well, you will see all actions set to STATE_SUCCESS, except for the kexec-image action. That should show as STATE_RUNNING for as long as the machine is running.

You can review the CAPT logs to see provisioning activity. For example, at the start of a new provisioning event, you would see something like the following:

kubectl logs -n capt-system capt-controller-manager-9f8b95b-frbq | less
..."Created BMCJob to get hardware ready for provisioning"...

You can follow this output to see the machine as it goes through the provisioning process.

After the node is initialized, completes all the Tinkerbell actions, and is booted into the installed operating system (Ubuntu or Bottlerocket), the new system starts cloud-init to do further configuration. At this point, the system will reach out to the Tinkerbell Hegel service to get its metadata.

If something goes wrong, viewing Hegel files can help you understand why a stuck system that has booted into Ubuntu or Bottlerocket has not joined the cluster yet. To see the Hegel logs, get the internal IP address for one of the new nodes. Then check for the names of Hegel logs and display the contents of one of those logs, searching for the IP address of the node:

kubectl get nodes -o wide
NAME        STATUS   ROLES                 AGE    VERSION               INTERNAL-IP    ...
eksa-da04   Ready    control-plane,master  9m5s   v1.22.10-eks-7dc61e8  10.80.30.23
kubectl get pods -n eksa-system | grep hegel
hegel-n7ngs
kubectl logs -n eksa-system hegel-n7ngs
..."Retrieved IP peer IP..."userIP":"10.80.30.23...

If the log shows you are getting requests from the node, the problem is not a cloud-init issue.

After the first machine successfully completes the workflow, each other machine repeats the same process until the initial set of machines is all up and running.

Tinkerbell moves to target cluster

Once the initial set of machines is up and the EKS Anywhere cluster is running, all the Tinkerbell services and components (including Boots) are moved to the new target cluster and run as pods on that cluster. Those services are deleted on the kind cluster on the Admin machine.

Reviewing the status

At this point, you can change your kubectl credentials to point at the new target cluster to get information about Tinkerbell services on the new cluster. For example:

export KUBECONFIG=${PWD}/${CLUSTER_NAME}/${CLUSTER_NAME}-eks-a-cluster.kubeconfig

First check that the Tinkerbell pods are all running by listing pods from the eksa-system namespace:

kubectl get pods -n eksa-system
NAME                                        READY   STATUS    RESTARTS   AGE
smee-5dc66b5d4-klhmj                       1/1     Running   0          3d
hegel-sbchp                                 1/1     Running   0          3d
rufio-controller-manager-5dcc568c79-9kllz   1/1     Running   0          3d
tink-controller-manager-54dc786db6-tm2c5    1/1     Running   0          3d
tink-server-5c494445bc-986sl                1/1     Running   0          3d

Next, check the list of Tinkerbell machines. If all of the machines were provisioned successfully, you should see true under the READY column for each one.

kubectl get tinkerbellmachine -A
NAMESPACE    NAME                                                   CLUSTER    STATE  READY  INSTANCEID                          MACHINE
eksa-system  mycluster-control-plane-template-1656099863422-pqq2q   mycluster         true   tinkerbell://eksa-system/eksa-da04  mycluster-72p72

You can also check the machines themselves. Watch the PHASE change from Provisioning to Provisioned to Running. The Running phase indicates that the machine is now running as a node on the new cluster:

kubectl get machines -n eksa-system
NAME              CLUSTER    NODENAME    PROVIDERID                         PHASE    AGE  VERSION
mycluster-72p72   mycluster  eksa-da04   tinkerbell://eksa-system/eksa-da04 Running  7m25s   v1.22.10-eks-1-22-8

Once you have confirmed that all your machines are successfully running as nodes on the target cluster, there is not much for Tinkerbell to do. It stays around to continue running the DHCP service and to be available to add more machines to the cluster.
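
For example, if you later add machines, you can confirm that their hardware definitions and provisioning workflows are registered with the Tinkerbell stack on the target cluster. A minimal check (sample output omitted), assuming your kubeconfig points at the target cluster:

kubectl get hardware -n eksa-system
kubectl get workflows -n eksa-system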

7.3 - Requirements for EKS Anywhere on Bare Metal

Bare Metal provider requirements for EKS Anywhere

To run EKS Anywhere on Bare Metal, you need to meet the hardware and networking requirements described below.

Administrative machine

Set up an Administrative machine as described in Install EKS Anywhere.

Compute server requirements

The minimum number of physical machines needed to run EKS Anywhere on bare metal is 1. To configure EKS Anywhere to run on a single server, set controlPlaneConfiguration.count to 1, and omit workerNodeGroupConfigurations from your cluster configuration.
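
For reference, this is a minimal sketch of the relevant fields in the cluster spec for a single-server deployment; the machine config name is illustrative, and the full reference appears in the Configure for Bare Metal section:

controlPlaneConfiguration:
  count: 1
  endpoint:
    host: "<Control Plane Endpoint IP>"
  machineGroupRef:
    kind: TinkerbellMachineConfig
    name: single-node-cp
# workerNodeGroupConfigurations is omitted entirely; workloads run on the (untainted) control plane node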

The recommended number of physical machines for production is at least:

  • Control plane physical machines: 3
  • Worker physical machines: 2

The compute hardware you need for your Bare Metal cluster must meet the following capacity requirements:

  • vCPU: 2
  • Memory: 8GB RAM
  • Storage: 25GB

Operating system requirements

If you intend to use a non-Bottlerocket OS, you must build it using image-builder. See the OS Management Artifacts page for help building the OS.

Upgrade requirements

If you are running a standalone cluster with only one control plane node, you will need at least one additional, temporary machine for each control plane node grouping. For clusters with multiple control plane nodes, you can perform a rolling upgrade with or without an extra temporary machine. For worker node upgrades, you can perform a rolling upgrade with or without an extra temporary machine.

When upgrading without an extra machine, keep in mind that your control plane and your workload must be able to tolerate node unavailability. When upgrading with extra machine(s), you will need additional temporary machine(s) for each control plane and worker node grouping. Refer to Upgrade Bare Metal Cluster and Advanced configuration for upgrade rollout strategy.

NOTE: For single-node clusters that require an additional temporary machine for upgrading, if you don’t want to set up the extra hardware, you may recreate the cluster for upgrading and handle data recovery manually.

Network requirements

Each machine should include the following features:

  • Network Interface Cards: at least one NIC is required. It must be capable of network booting.

  • BMC integration (recommended): an IPMI or Redfish implementation (such as Dell iDRAC, HP iLO, or another Redfish-compatible or legacy IPMI controller) on the computer’s motherboard or on a separate expansion card. This feature is used to allow remote management of the machine, such as turning the machine on and off.

NOTE: BMC integration is not required for an EKS Anywhere cluster. However, without BMC integration, upgrades are not supported and you will have to physically turn machines off and on when appropriate.

Here are other network requirements:

  • All EKS Anywhere machines, including the Admin, control plane and worker machines, must be on the same layer 2 network and have network connectivity to the BMC (IPMI, Redfish, and so on).

  • You must be able to run DHCP on the control plane/worker machine network.

NOTE: If you have another DHCP service running on the network, you need to prevent it from interfering with the EKS Anywhere DHCP service. You can do that by configuring the other DHCP service to explicitly block all MAC addresses and exclude all IP addresses that you plan to use with your EKS Anywhere clusters.
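
For example, if the other DHCP service happens to be dnsmasq, one way to do this is to ignore the cluster machines by MAC address. This is a sketch only, using the illustrative MAC addresses from the hardware CSV example later in this document:

# /etc/dnsmasq.conf (illustrative): never answer DHCP for the EKS Anywhere machines,
# leaving them to the Tinkerbell DHCP service
dhcp-host=cc:48:3a:00:00:01,ignore
dhcp-host=cc:48:3a:00:00:02,ignore
# ...one entry per machine in the hardware CSV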

  • If you have not followed the steps for airgapped environments , then the administrative machine and the target workload environment need network access (TCP/443) to:

    • public.ecr.aws

    • anywhere-assets.eks.amazonaws.com: to download the EKS Anywhere binaries, manifests and OVAs

    • distro.eks.amazonaws.com: to download EKS Distro binaries and manifests

    • d2glxqk2uabbnd.cloudfront.net: for EKS Anywhere and EKS Distro ECR container images

  • Two IP addresses routable from the cluster, but excluded from DHCP offering. One IP address is to be used as the Control Plane Endpoint IP. The other is for the Tinkerbell IP address on the target cluster. Below are some suggestions to ensure that these IP addresses are never handed out by your DHCP server. You may need to contact your network engineer to manage these addresses.

    • Pick IP addresses reachable from the cluster subnet that are excluded from the DHCP range or

    • Create an IP reservation for these addresses on your DHCP server. This is usually accomplished by adding a dummy mapping of the IP address to a non-existent MAC address.

NOTE: When you set up your cluster configuration YAML file, the endpoint and Tinkerbell addresses are set in the controlPlaneConfiguration.endpoint.host and tinkerbellIP fields, respectively.
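
For reference, the two addresses map to the following fields (shown with placeholders; the complete spec appears in the Configure for Bare Metal section):

# In the Cluster object:
controlPlaneConfiguration:
  endpoint:
    host: "<Control Plane Endpoint IP>"   # excluded from the DHCP range
# In the TinkerbellDatacenterConfig object:
spec:
  tinkerbellIP: "<Tinkerbell IP>"         # excluded from the DHCP range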

Validated hardware

Through extensive testing in a variety of on-premises environments, we have validated that Amazon EKS Anywhere on bare metal works without modification on most modern hardware that meets the above requirements. Compatibility is determined by the host operating system selected when Building Node Images. Installation may require you to Customize HookOS for EKS Anywhere on Bare Metal to add drivers or modify configuration specific to your environment. Bottlerocket support for bare metal was deprecated with the EKS Anywhere v0.19 release.

7.4 - Preparing Bare Metal for EKS Anywhere

Set up a Bare Metal cluster to prepare it for EKS Anywhere

After gathering the hardware described in Bare Metal Requirements, you need to prepare the hardware and create a CSV file describing that hardware.

Prepare hardware

To prepare your computer hardware for EKS Anywhere, you need to connect your computer hardware and do some configuration. Once the hardware is in place, you need to:

  • Obtain IP and MAC addresses for your machines' NICs.
  • Obtain IP addresses for your machines' BMC interfaces.
  • Obtain the gateway address for your network to reach the Internet.
  • Obtain the IP address for your DNS servers.
  • Make sure the following settings are in place:
    • UEFI is enabled on all target cluster machines, unless you are provisioning RHEL systems. Enable legacy BIOS on any RHEL machines.
    • Netboot (PXE or HTTP) boot is enabled for the NIC on each machine for which you provided the MAC address. This is the interface on which the operating system will be provisioned.
    • IPMI over LAN and/or Redfish is enabled on all BMC interfaces.
  • Go to the BMC settings for each machine and set the IP address (bmc_ip), username (bmc_username), and password (bmc_password) to use later in the CSV file.

Prepare hardware inventory

Create a CSV file to provide information about all physical machines that you are ready to add to your target Bare Metal cluster. This file will be used:

  • When you generate the hardware file to be included in the cluster creation process described in the Create Bare Metal production cluster Getting Started guide.
  • To provide information that is passed to each machine from the Tinkerbell DHCP server when the machine is initially network booted.

NOTE: When using kubectl, GitOps, or Terraform for workload cluster creation, make sure to refer to this section.

The following is an example of an EKS Anywhere Bare Metal hardware CSV file:

hostname,bmc_ip,bmc_username,bmc_password,mac,ip_address,netmask,gateway,nameservers,labels,disk
eksa-cp01,10.10.44.1,root,PrZ8W93i,CC:48:3A:00:00:01,10.10.50.2,255.255.254.0,10.10.50.1,8.8.8.8|8.8.4.4,type=cp,/dev/sda
eksa-cp02,10.10.44.2,root,Me9xQf93,CC:48:3A:00:00:02,10.10.50.3,255.255.254.0,10.10.50.1,8.8.8.8|8.8.4.4,type=cp,/dev/sda
eksa-cp03,10.10.44.3,root,Z8x2M6hl,CC:48:3A:00:00:03,10.10.50.4,255.255.254.0,10.10.50.1,8.8.8.8|8.8.4.4,type=cp,/dev/sda
eksa-wk01,10.10.44.4,root,B398xRTp,CC:48:3A:00:00:04,10.10.50.5,255.255.254.0,10.10.50.1,8.8.8.8|8.8.4.4,type=worker,/dev/sda
eksa-wk02,10.10.44.5,root,w7EenR94,CC:48:3A:00:00:05,10.10.50.6,255.255.254.0,10.10.50.1,8.8.8.8|8.8.4.4,type=worker,/dev/sda

The CSV file is a comma-separated list of values in a plain text file, holding information about the physical machines in the datacenter that are intended to be a part of the cluster creation process. Each line represents a physical machine (not a virtual machine).

The following sections describe each value.

hostname

The hostname assigned to the machine.

bmc_ip (optional)

The IP address assigned to the BMC interface on the machine.

bmc_username (optional)

The username assigned to the BMC interface on the machine.

bmc_password (optional)

The password associated with the bmc_username assigned to the BMC interface on the machine.

mac

The MAC address of the network interface card (NIC) that provides access to the host computer.

ip_address

The IP address providing access to the host computer.

netmask

The netmask associated with the ip_address value. In the example above, a /23 subnet mask is used, allowing you to use up to 510 IP addresses in that range.

gateway

IP address of the interface that provides access (the gateway) to the Internet.

nameservers

The IP address of the server that provides DNS service to the cluster. Multiple nameservers can be specified by separating them with a pipe (|), as in the example above (8.8.8.8|8.8.4.4).

labels

The optional labels field can consist of a key/value pair to use in conjunction with the hardwareSelector field when you set up your Bare Metal configuration. The key/value pair is connected with an equal (=) sign.

For example, a TinkerbellMachineConfig with a hardwareSelector containing type: cp will match entries in the CSV containing type=cp in its label definition.
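
For example, the type=cp label from the sample CSV above would be matched by a machine config fragment like the following sketch (hardwareSelector is described in full in the configuration reference later in this document):

# TinkerbellMachineConfig fragment for control plane machines
spec:
  hardwareSelector:
    type: "cp"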

disk

The device name of the disk on which the operating system will be installed. For example, it could be /dev/sda for the first SCSI disk or /dev/nvme0n1 for the first NVME storage device.

7.5 - Create Bare Metal cluster

Create a cluster on Bare Metal

EKS Anywhere supports a Bare Metal provider for EKS Anywhere deployments. EKS Anywhere allows you to provision and manage Kubernetes clusters based on Amazon EKS software on your own infrastructure.

This document walks you through setting up EKS Anywhere on Bare Metal as a standalone, self-managed cluster or combined set of management/workload clusters. See Cluster topologies for details.

Note: Before you create your cluster, you have the option of validating the EKS Anywhere bundle manifest container images by following instructions in the Verify Cluster Images page.

Prerequisite checklist

EKS Anywhere needs:

Steps

The following steps are divided into two sections:

  • Create an initial cluster (used as a management or self-managed cluster)
  • Create zero or more workload clusters from the management cluster

Create an initial cluster

Follow these steps to create an EKS Anywhere cluster that can be used either as a management cluster or as a self-managed cluster (for running workloads itself).

  1. Optional Configuration

    Set License Environment Variable

    Add a license to any cluster for which you want to receive paid support. If you are creating a licensed cluster, set and export the license variable (see License cluster if you are licensing an existing cluster):

    export EKSA_LICENSE='my-license-here'
    

    After you have created your eksa-mgmt-cluster.yaml and set your credential environment variables, you will be ready to create the cluster.

    Configure Curated Packages

    The Amazon EKS Anywhere Curated Packages are only available to customers with the Amazon EKS Anywhere Enterprise Subscription. To request a free trial, talk to your Amazon representative or connect with one here . Cluster creation will succeed if authentication is not set up, but some warnings may be generated. Detailed package configurations can be found here.

    If you are going to use packages, set up authentication. These credentials should have limited capabilities:

    export EKSA_AWS_ACCESS_KEY_ID="your*access*id"
    export EKSA_AWS_SECRET_ACCESS_KEY="your*secret*key"
    export EKSA_AWS_REGION="us-west-2"  
    
  2. Set an environment variable for your cluster name:

    export CLUSTER_NAME=mgmt
    
  3. Generate a cluster config file for your Bare Metal provider (using tinkerbell as the provider type).

    eksctl anywhere generate clusterconfig $CLUSTER_NAME --provider tinkerbell > eksa-mgmt-cluster.yaml
    
  4. Modify the cluster config (eksa-mgmt-cluster.yaml) by referring to the Bare Metal configuration reference documentation.

  5. Create the cluster, using the hardware.csv file you made in Bare Metal preparation .

    For a regular cluster create (with internet access), type the following:

    eksctl anywhere create cluster \
       --hardware-csv hardware.csv \
       -f eksa-mgmt-cluster.yaml \
       # --install-packages packages.yaml \ # uncomment to install curated packages at cluster creation
    

    For an airgapped cluster create, follow Preparation for airgapped deployments instructions, then type the following:

    eksctl anywhere create cluster \
       --hardware-csv hardware.csv \
       -f eksa-mgmt-cluster.yaml \
       --bundles-override ./eks-anywhere-downloads/bundle-release.yaml \
       # --install-packages packages.yaml \ # uncomment to install curated packages at cluster creation
    
  6. Once the cluster is created you can use it with the generated KUBECONFIG file in your local directory:

    export KUBECONFIG=${PWD}/${CLUSTER_NAME}/${CLUSTER_NAME}-eks-a-cluster.kubeconfig
    
  7. Check the cluster nodes:

    To check that the cluster completed, list the machines to see the control plane and worker nodes:

    kubectl get machines -A
    

    Example command output:

    NAMESPACE     NAME                        CLUSTER   NODENAME        PROVIDERID                              PHASE     AGE   VERSION
    eksa-system   mgmt-47zj8                  mgmt      eksa-node01     tinkerbell://eksa-system/eksa-node01    Running   1h    v1.23.7-eks-1-23-4
    eksa-system   mgmt-md-0-7f79df46f-wlp7w   mgmt      eksa-node02     tinkerbell://eksa-system/eksa-node02    Running   1h    v1.23.7-eks-1-23-4
    ...
    
  8. Check the cluster:

    You can now use the cluster as you would any Kubernetes cluster. To try it out, run the test application with:

    export CLUSTER_NAME=mgmt
    export KUBECONFIG=${PWD}/${CLUSTER_NAME}/${CLUSTER_NAME}-eks-a-cluster.kubeconfig
    kubectl apply -f "https://anywhere.eks.amazonaws.com/manifests/hello-eks-a.yaml"
    

    Verify the test application in Deploy test workload.

Create separate workload clusters

Follow these steps if you want to use your initial cluster to create and manage separate workload clusters.

  1. Set License Environment Variable (Optional)

    Add a license to any cluster for which you want to receive paid support. If you are creating a licensed cluster, set and export the license variable (see License cluster if you are licensing an existing cluster):

    export EKSA_LICENSE='my-license-here'
    
  2. Generate a workload cluster config:

    CLUSTER_NAME=w01
    eksctl anywhere generate clusterconfig $CLUSTER_NAME \
       --provider tinkerbell > eksa-w01-cluster.yaml
    

    Refer to the initial config described earlier for the required and optional settings. Ensure workload cluster object names (Cluster, TinkerbellDatacenterConfig, TinkerbellMachineConfig, etc.) are distinct from management cluster object names. Keep the tinkerbellIP of the workload cluster the same as the tinkerbellIP of the management cluster.
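
    For example, this is a sketch of the workload cluster's TinkerbellDatacenterConfig reusing the management cluster's Tinkerbell IP (the IP value is a placeholder):

    apiVersion: anywhere.eks.amazonaws.com/v1alpha1
    kind: TinkerbellDatacenterConfig
    metadata:
      name: w01
    spec:
      tinkerbellIP: "<same Tinkerbell IP as the management cluster>"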

  3. Be sure to set the managementCluster field to identify the name of the management cluster.

    For example, the management cluster, mgmt is defined for our workload cluster w01 as follows:

    apiVersion: anywhere.eks.amazonaws.com/v1alpha1
    kind: Cluster
    metadata:
      name: w01
    spec:
      managementCluster:
        name: mgmt
    
  4. Create a workload cluster

    To create a new workload cluster from your management cluster run this command, identifying:

    • The workload cluster YAML file
    • The initial cluster’s credentials (this causes the workload cluster to be managed from the management cluster)

    Create a workload cluster in one of the following ways:

    • eksctl CLI: To create a workload cluster with eksctl, run:

      eksctl anywhere create cluster \
          -f eksa-w01-cluster.yaml  \
          --kubeconfig mgmt/mgmt-eks-a-cluster.kubeconfig \
          # --install-packages packages.yaml \ # uncomment to install curated packages at cluster creation
          # --hardware-csv <hardware.csv> \ # uncomment to add more hardware
          # --bundles-override ./eks-anywhere-downloads/bundle-release.yaml \ # uncomment for airgapped install
      

      As noted earlier, adding the --kubeconfig option tells eksctl to use the management cluster identified by that kubeconfig file to create a different workload cluster.

    • kubectl CLI: The cluster lifecycle feature lets you use kubectl to talk to the Kubernetes API to create a workload cluster. To use kubectl, run:

      kubectl apply -f eksa-w01-cluster.yaml --kubeconfig mgmt/mgmt-eks-a-cluster.kubeconfig
      

      To check the state of a cluster managed with the cluster lifecycle feature, use kubectl to show the cluster object with its status.

      The status field on the cluster object holds information about the current state of the cluster.

      kubectl get clusters w01 -o yaml
      

      The cluster has been fully created once the Ready condition is marked True. See the cluster status guide for more information.

    • GitOps: See Manage separate workload clusters with GitOps

    • Terraform: See Manage separate workload clusters with Terraform

      NOTE: For kubectl, GitOps and Terraform:

      • The baremetal controller does not support scaling upgrades and Kubernetes version upgrades in the same request.
      • While creating a new workload cluster if you need to add additional machines for the target workload cluster, run:
        eksctl anywhere generate hardware -z updated-hardware.csv > updated-hardware.yaml
        kubectl apply -f updated-hardware.yaml
        
      • For creating multiple workload clusters, it is essential that the hardware labels and selectors defined for a given workload cluster are unique to that workload cluster. For instance, for an EKS Anywhere cluster named eksa-workload1, the hardware assigned to this cluster should have labels that are used only for this cluster, such as type=eksa-workload1-cp and type=eksa-workload1-worker. Another workload cluster named eksa-workload2 can have labels like type=eksa-workload2-cp and type=eksa-workload2-worker. Although labels can be arbitrary, they must be unique for each workload cluster. Reusing labels across workload clusters can cause cluster creation to behave in unexpected ways, which may lead to unsuccessful creations and unstable clusters. See the hardware selectors section for more information and the sketch below for an example.
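
        As a sketch, the hardware labeled type=eksa-workload1-cp in the CSV would be selected only by the control plane TinkerbellMachineConfig of eksa-workload1:

        # eksa-workload1 control plane TinkerbellMachineConfig fragment (illustrative)
        spec:
          hardwareSelector:
            type: "eksa-workload1-cp"
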
  5. Check the workload cluster:

    You can now use the workload cluster as you would any Kubernetes cluster. Change your credentials to point to the new workload cluster (for example, mgmt-w01), then run the test application with:

    export CLUSTER_NAME=mgmt-w01
    export KUBECONFIG=${PWD}/${CLUSTER_NAME}/${CLUSTER_NAME}-eks-a-cluster.kubeconfig
    kubectl apply -f "https://anywhere.eks.amazonaws.com/manifests/hello-eks-a.yaml"
    

    Verify the test application in the deploy test application section.

  6. Add more workload clusters:

    To add more workload clusters, go through the same steps for creating the initial workload, copying the config file to a new name (such as eksa-w02-cluster.yaml), modifying resource names, and running the create cluster command again.

Next steps:

  • See the Cluster management section for more information on common operational tasks like deleting the cluster.

  • See the Package management section for more information on post-creation curated packages installation.

7.6 - Configure for Bare Metal

Full EKS Anywhere configuration reference for a Bare Metal cluster.

This is a generic template with detailed descriptions below for reference. The following additional optional configuration can also be included:

To generate your own cluster configuration, follow instructions from the Create Bare Metal cluster section and modify it using descriptions below. For information on how to add cluster configuration settings to this file for advanced node configuration, see Advanced Bare Metal cluster configuration.

NOTE: Bare Metal cluster creation with RHEL 9 raw OS images requires advanced cluster configurations to be set. To create Bare Metal RHEL 9 clusters, modify the cluster configurations using descriptions below and follow Advanced Bare Metal cluster configuration.

apiVersion: anywhere.eks.amazonaws.com/v1alpha1
kind: Cluster
metadata:
  name: my-cluster-name
spec:
  clusterNetwork:
    cniConfig:
      cilium: {}
    pods:
      cidrBlocks:
      - 192.168.0.0/16
    services:
      cidrBlocks:
      - 10.96.0.0/12
  controlPlaneConfiguration:
    count: 1
    endpoint:
      host: "<Control Plane Endpoint IP>"
    machineGroupRef:
      kind: TinkerbellMachineConfig
      name: my-cluster-name-cp
  datacenterRef:
    kind: TinkerbellDatacenterConfig
    name: my-cluster-name
  kubernetesVersion: "1.28"
  managementCluster:
    name: my-cluster-name
  workerNodeGroupConfigurations:
  - count: 1
    machineGroupRef:
      kind: TinkerbellMachineConfig
      name: my-cluster-name
    name: md-0

---
apiVersion: anywhere.eks.amazonaws.com/v1alpha1
kind: TinkerbellDatacenterConfig
metadata:
  name: my-cluster-name
spec:
  tinkerbellIP: "<Tinkerbell IP>"

---
apiVersion: anywhere.eks.amazonaws.com/v1alpha1
kind: TinkerbellMachineConfig
metadata:
  name: my-cluster-name-cp
spec:
  hardwareSelector: {}
  osFamily: bottlerocket
  templateRef: {}
  users:
  - name: ec2-user
    sshAuthorizedKeys:
    - ssh-rsa AAAAB3NzaC1yc2... jwjones@833efcab1482.home.example.com

---
apiVersion: anywhere.eks.amazonaws.com/v1alpha1
kind: TinkerbellMachineConfig
metadata:
  name: my-cluster-name
spec:
  hardwareSelector: {}
  osFamily: bottlerocket
  templateRef:
    kind: TinkerbellTemplateConfig
    name: my-cluster-name
  users:
  - name: ec2-user
    sshAuthorizedKeys:
    - ssh-rsa AAAAB3NzaC1yc2... jwjones@833efcab1482.home.example.com

Cluster Fields

name (required)

Name of your cluster (my-cluster-name in this example).

clusterNetwork (required)

Network configuration.

clusterNetwork.cniConfig (required)

CNI plugin configuration. Supports cilium.

clusterNetwork.cniConfig.cilium.policyEnforcementMode (optional)

Optionally specify a policyEnforcementMode of default, always or never.

clusterNetwork.cniConfig.cilium.egressMasqueradeInterfaces (optional)

Optionally specify a network interface name or interface prefix used for masquerading. See EgressMasqueradeInterfaces option.

clusterNetwork.cniConfig.cilium.skipUpgrade (optional)

When true, skip Cilium maintenance during upgrades. Also see Use a custom CNI.

clusterNetwork.cniConfig.cilium.routingMode (optional)

Optionally specify the routing mode. Accepts default and direct. Also see RoutingMode option.

clusterNetwork.cniConfig.cilium.ipv4NativeRoutingCIDR (optional)

Optionally specify the CIDR to use when RoutingMode is set to direct. When specified, Cilium assumes networking for this CIDR is preconfigured and hands traffic destined for that range to the Linux network stack without applying any SNAT.

clusterNetwork.cniConfig.cilium.ipv6NativeRoutingCIDR (optional)

Optionally specify the IPv6 CIDR to use when RoutingMode is set to direct. When specified, Cilium assumes networking for this CIDR is preconfigured and hands traffic destined for that range to the Linux network stack without applying any SNAT.

clusterNetwork.pods.cidrBlocks[0] (required)

The pod subnet specified in CIDR notation. Only 1 pod CIDR block is permitted. The CIDR block should not conflict with the host or service network ranges.

clusterNetwork.services.cidrBlocks[0] (required)

The service subnet specified in CIDR notation. Only 1 service CIDR block is permitted. This CIDR block should not conflict with the host or pod network ranges.

clusterNetwork.dns.resolvConf.path (optional)

File path to a file containing a custom DNS resolver configuration.

controlPlaneConfiguration (required)

Specific control plane configuration for your Kubernetes cluster.

controlPlaneConfiguration.count (required)

Number of control plane nodes. This number needs to be odd to maintain ETCD quorum.

controlPlaneConfiguration.endpoint.host (required)

A unique IP you want to use for the control plane in your EKS Anywhere cluster. Choose an IP in your network range that does not conflict with other machines.

NOTE: This IP should be outside the network DHCP range as it is a floating IP that gets assigned to one of the control plane nodes for kube-apiserver loadbalancing.

controlPlaneConfiguration.machineGroupRef (required)

Refers to the Kubernetes object with Tinkerbell-specific configuration for your nodes. See TinkerbellMachineConfig Fields below.

controlPlaneConfiguration.taints (optional)

A list of taints to apply to the control plane nodes of the cluster.

Replaces the default control plane taint (For k8s versions prior to 1.24, node-role.kubernetes.io/master. For k8s versions 1.24+, node-role.kubernetes.io/control-plane). The default control plane components will tolerate the provided taints.

Modifying the taints associated with the control plane configuration will cause new nodes to be rolled-out, replacing the existing nodes.

NOTE: The taints provided will be used instead of the default control plane taint. Any pods that you run on the control plane nodes must tolerate the taints you provide in the control plane configuration.

controlPlaneConfiguration.labels (optional)

A list of labels to apply to the control plane nodes of the cluster. This is in addition to the labels that EKS Anywhere will add by default.

Modifying the labels associated with the control plane configuration will cause new nodes to be rolled out, replacing the existing nodes.

controlPlaneConfiguration.upgradeRolloutStrategy (optional)

Configuration parameters for upgrade strategy.

controlPlaneConfiguration.upgradeRolloutStrategy.type (optional)

Default: RollingUpdate

Type of rollout strategy. Supported values: RollingUpdate, InPlace.

NOTE: The upgrade rollout strategy type must be the same for all control plane and worker nodes.

controlPlaneConfiguration.upgradeRolloutStrategy.rollingUpdate (optional)

Configuration parameters for customizing rolling upgrade behavior.

NOTE: The rolling update parameters can only be configured if upgradeRolloutStrategy.type is RollingUpdate.

controlPlaneConfiguration.upgradeRolloutStrategy.rollingUpdate.maxSurge (optional)

Default: 1

This can not be 0 if maxUnavailable is 0.

The maximum number of machines that can be scheduled above the desired number of machines.

Example: When this is set to n, the new worker node group can be scaled up immediately by n when the rolling upgrade starts. Total number of machines in the cluster (old + new) never exceeds (desired number of machines + n). Once scale down happens and old machines are brought down, the new worker node group can be scaled up further ensuring that the total number of machines running at any time does not exceed the desired number of machines + n.

controlPlaneConfiguration.skipLoadBalancerDeployment (optional)

Optional field to skip deploying the control plane load balancer. Make sure your infrastructure can handle control plane load balancing when you set this field to true. In most cases, you should not set this field to true.

datacenterRef (required)

Refers to the Kubernetes object with Tinkerbell-specific configuration. See TinkerbellDatacenterConfig Fields below.

kubernetesVersion (required)

The Kubernetes version you want to use for your cluster. Supported values: 1.28, 1.27, 1.26, 1.25, 1.24

managementCluster (required)

Identifies the name of the management cluster. If your cluster spec is for a standalone or management cluster, this value is the same as the cluster name.

workerNodeGroupConfigurations (optional)

This takes in a list of node groups that you can define for your workers.

You can omit workerNodeGroupConfigurations when creating Bare Metal clusters. If you omit workerNodeGroupConfigurations, control plane nodes will not be tainted and all pods will run on the control plane nodes. This mechanism can be used to deploy Bare Metal clusters on a single server. You can also run multi-node Bare Metal clusters without workerNodeGroupConfigurations.

NOTE: Empty workerNodeGroupConfigurations is not supported when Kubernetes version <= 1.21.

workerNodeGroupConfigurations[*].count (optional)

Number of worker nodes (default: 1). This value is ignored if the cluster autoscaler curated package is installed and autoscalingConfiguration is used to specify the desired range of replicas.

Refer to the troubleshooting topic machine health check remediation not allowed and choose a count large enough to allow machine health check remediation.

workerNodeGroupConfigurations[*].machineGroupRef (required)

Refers to the Kubernetes object with Tinkerbell-specific configuration for your nodes. See TinkerbellMachineConfig Fields below.

workerNodeGroupConfigurations[*].name (required)

Name of the worker node group (default: md-0)

workerNodeGroupConfigurations[*].autoscalingConfiguration (optional)

Configuration parameters for Cluster Autoscaler.

NOTE: Autoscaling configuration is not supported when using the InPlace upgrade rollout strategy.

workerNodeGroupConfigurations[*].autoscalingConfiguration.minCount (optional)

Minimum number of nodes for this node group’s autoscaling configuration.

workerNodeGroupConfigurations[*].autoscalingConfiguration.maxCount (optional)

Maximum number of nodes for this node group’s autoscaling configuration.
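
A sketch of an autoscaling-enabled worker node group (the bounds and names are illustrative); as noted above, count is ignored when the cluster autoscaler manages this group:

workerNodeGroupConfigurations:
- name: md-0
  machineGroupRef:
    kind: TinkerbellMachineConfig
    name: my-cluster-name
  autoscalingConfiguration:
    minCount: 1
    maxCount: 5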

workerNodeGroupConfigurations[*].taints (optional)

A list of taints to apply to the nodes in the worker node group.

Modifying the taints associated with a worker node group configuration will cause new nodes to be rolled-out, replacing the existing nodes associated with the configuration.

At least one node group must not have NoSchedule or NoExecute taints applied to it.
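
A sketch of a tainted worker node group (the taint key and value are illustrative); remember that at least one other node group must remain free of NoSchedule and NoExecute taints:

workerNodeGroupConfigurations:
- count: 2
  name: md-1
  machineGroupRef:
    kind: TinkerbellMachineConfig
    name: my-cluster-name
  taints:
  - key: dedicated
    value: storage
    effect: NoSchedule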

workerNodeGroupConfigurations[*].labels (optional)

A list of labels to apply to the nodes in the worker node group. This is in addition to the labels that EKS Anywhere will add by default.

Modifying the labels associated with a worker node group configuration will cause new nodes to be rolled out, replacing the existing nodes associated with the configuration.

workerNodeGroupConfigurations[*].kubernetesVersion (optional)

The Kubernetes version you want to use for this worker node group. Supported values : 1.28, 1.27, 1.26, 1.25, 1.24

Must be less than or equal to the cluster kubernetesVersion defined at the root level of the cluster spec. The worker node kubernetesVersion must be no more than two minor Kubernetes versions lower than the cluster control plane’s Kubernetes version. Removing workerNodeGroupConfiguration.kubernetesVersion will trigger an upgrade of the node group to the kubernetesVersion defined at the root level of the cluster spec.
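
For example, a sketch of a worker node group pinned one minor version behind the cluster (versions and names are illustrative):

kubernetesVersion: "1.28"            # cluster-level (control plane) version
workerNodeGroupConfigurations:
- count: 2
  name: md-0
  machineGroupRef:
    kind: TinkerbellMachineConfig
    name: my-cluster-name
  kubernetesVersion: "1.27"          # this node group stays on 1.27 until the field is removed or raised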

workerNodeGroupConfigurations[*].upgradeRolloutStrategy (optional)

Configuration parameters for upgrade strategy.

workerNodeGroupConfigurations[*].upgradeRolloutStrategy.type (optional)

Default: RollingUpdate

Type of rollout strategy. Supported values: RollingUpdate, InPlace.

NOTE: The upgrade rollout strategy type must be the same for all control plane and worker nodes.

workerNodeGroupConfigurations[*].upgradeRolloutStrategy.rollingUpdate (optional)

Configuration parameters for customizing rolling upgrade behavior.

NOTE: The rolling update parameters can only be configured if upgradeRolloutStrategy.type is RollingUpdate.

workerNodeGroupConfigurations[*].upgradeRolloutStrategy.rollingUpdate.maxSurge (optional)

Default: 1

This can not be 0 if maxUnavailable is 0.

The maximum number of machines that can be scheduled above the desired number of machines.

Example: When this is set to n, the new worker node group can be scaled up immediately by n when the rolling upgrade starts. Total number of machines in the cluster (old + new) never exceeds (desired number of machines + n). Once scale down happens and old machines are brought down, the new worker node group can be scaled up further ensuring that the total number of machines running at any time does not exceed the desired number of machines + n.

workerNodeGroupConfigurations[*].upgradeRolloutStrategy.rollingUpdate.maxUnavailable (optional)

Default: 0

This can not be 0 if MaxSurge is 0.

The maximum number of machines that can be unavailable during the upgrade.

Example: When this is set to n, the old worker node group can be scaled down by n machines immediately when the rolling upgrade starts. Once new machines are ready, old worker node group can be scaled down further, followed by scaling up the new worker node group, ensuring that the total number of machines unavailable at all times during the upgrade never falls below n.

TinkerbellDatacenterConfig Fields

tinkerbellIP (required)

Required field to identify the IP address of the Tinkerbell service. This IP address must be a unique IP in the network range that does not conflict with other IPs. Once the Tinkerbell services move from the Admin machine to run on the target cluster, this IP address makes it possible for the stack to be used for future provisioning needs. When you run separate management and workload clusters on Bare Metal, this IP address is required.

osImageURL (required)

Required field to set the operating system. To use Ubuntu or RHEL, see building baremetal node images. This field is also useful if you want to provide a customized operating system image or simply host the standard image locally. To upgrade a node or group of nodes to a new operating system version (for example, RHEL 8.7 to RHEL 8.8), modify this field to point to the new operating system image URL and run the upgrade cluster command. The osImageURL must contain the Cluster.Spec.KubernetesVersion or Cluster.Spec.WorkerNodeGroupConfiguration[].KubernetesVersion version (in case of a modular upgrade). For example, if the Kubernetes version is 1.24, the osImageURL name should include 1.24, 1_24, 1-24 or 124.

NOTE: osImageURL field cannot be set both in the TinkerbellDatacenterConfig and TinkerbellMachineConfig objects. If this value is set for TinkerbellDatacenterConfig, osImageURL has to be set to empty string "" for all the TinkerbellMachineConfigs.

hookImagesURLPath (optional)

Optional field to replace the HookOS image. This field is useful if you want to provide a customized HookOS image or simply host the standard image locally. See Artifacts for details.

Example TinkerbellDatacenterConfig.spec

spec:
  tinkerbellIP: "192.168.0.10"                                          # Available, routable IP
  osImageURL: "http://my-web-server/ubuntu-v1.23.7-eks-a-12-amd64.gz"   # Full URL to the OS Image hosted locally
  hookImagesURLPath: "http://my-web-server/hook"                        # Path to the hook images. This path must contain vmlinuz-x86_64 and initramfs-x86_64

This is the folder structure for my-web-server:

my-web-server
├── hook
│   ├── initramfs-x86_64
│   └── vmlinuz-x86_64
└── ubuntu-v1.23.7-eks-a-12-amd64.gz

skipLoadBalancerDeployment (optional)

Optional field to skip deploying the default load balancer for the Tinkerbell stack.

EKS Anywhere for Bare Metal uses the kube-vip load balancer by default to expose the Tinkerbell stack externally. You can disable this feature by setting this field to true.

NOTE: If you skip load balancer deployment, you will have to ensure that the Tinkerbell stack is available at tinkerbellIP once the cluster creation is finished. One way to achieve this is by using the MetalLB package.

TinkerbellMachineConfig Fields

In the example, there are TinkerbellMachineConfig sections for control plane (my-cluster-name-cp) and worker (my-cluster-name) machine groups. The following fields identify information needed to configure the nodes in each of those groups.

NOTE: Currently, you can only have one machine group for all machines in the control plane, although you can have multiple machine groups for the workers.

hardwareSelector (optional)

Use fields under hardwareSelector to add key/value pair labels to match particular machines that you identified in the CSV file where you defined the machines in your cluster. Choose any label name you like. For example, if you had added the label node=cp-machine to the machines listed in your CSV file that you want to be control plane nodes, the following hardwareSelector field would cause those machines to be added to the control plane:

---
apiVersion: anywhere.eks.amazonaws.com/v1alpha1
kind: TinkerbellMachineConfig
metadata:
  name: my-cluster-name-cp
spec:
  hardwareSelector:
    node: "cp-machine"

osFamily (required)

Operating system on the machine. Permitted values: bottlerocket, ubuntu, redhat (Default: bottlerocket).

osImageURL (required)

Required field to set the operating system. To use Ubuntu or RHEL, see building baremetal node images. This field is also useful if you want to provide a customized operating system image or simply host the standard image locally. To upgrade a node or group of nodes to a new operating system version (for example, RHEL 8.7 to RHEL 8.8), modify this field to point to the new operating system image URL and run the upgrade cluster command. The osImageURL must contain the Cluster.Spec.KubernetesVersion or Cluster.Spec.WorkerNodeGroupConfiguration[].KubernetesVersion version (in case of a modular upgrade). For example, if the Kubernetes version is 1.24, the osImageURL name should include 1.24, 1_24, 1-24 or 124.

NOTE: If this value is set for a single TinkerbellMachineConfig, osImageURL has to be set for all the TinkerbellMachineConfigs. osImageURL field cannot be set both in the TinkerbellDatacenterConfig and TinkerbellMachineConfig objects. If set for TinkerbellMachineConfig, the value must be set to empty string "" for TinkerbellDatacenterConfig
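
For example, a sketch of machine-config-level image URLs for a modular upgrade, where the worker node group runs a different Kubernetes version than the control plane. The URLs are illustrative and must embed the matching Kubernetes version; osImageURL in the TinkerbellDatacenterConfig is then left as an empty string:

# Control plane machine config (cluster kubernetesVersion "1.28")
spec:
  osFamily: ubuntu
  osImageURL: "http://my-web-server/ubuntu-2204-v1.28-eks-a-amd64.gz"
---
# Worker machine config (workerNodeGroupConfigurations[].kubernetesVersion "1.27")
spec:
  osFamily: ubuntu
  osImageURL: "http://my-web-server/ubuntu-2204-v1.27-eks-a-amd64.gz"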

templateRef (optional)

Identifies the template that defines the actions that will be applied to the TinkerbellMachineConfig. See TinkerbellTemplateConfig fields below. EKS Anywhere will generate default templates based on osFamily during the create command. You can override this default template by providing your own template here.

users (optional)

The name of the user you want to configure for SSH access to your machines.

The default is ec2-user. Currently, only one user is supported.

users[0].sshAuthorizedKeys (optional)

The SSH public keys you want to configure to access your machines through SSH (as described below). Only 1 is supported at this time.

users[0].sshAuthorizedKeys[0] (optional)

This is the SSH public key that will be placed in authorized_keys on all EKS Anywhere cluster machines so you can SSH into them. The user will be what is defined under name above. For example:

ssh -i <private-key-file> <user>@<machine-IP>

If you do not specify a value, a key is generated in your $(pwd)/<cluster-name> folder by default.

hostOSConfig (optional)

Optional host OS configurations for the EKS Anywhere Kubernetes nodes. More information in the Host OS Configuration section.

Advanced Bare Metal cluster configuration

When you generate a Bare Metal cluster configuration, the TinkerbellTemplateConfig is kept internally and not shown in the generated configuration file. TinkerbellTemplateConfig settings define the actions done to install each node, such as get installation media, configure networking, add users, and otherwise configure the node.

Advanced users can override the default values set for TinkerbellTemplateConfig. They can also add their own Tinkerbell actions to make personalized modifications to EKS Anywhere nodes.

The following shows three TinkerbellTemplateConfig examples that you can add to your cluster configuration file to override the values that EKS Anywhere sets: one for Ubuntu, one for RHEL and one for Bottlerocket. Most actions used differ for different operating systems.

NOTE: For the stream-image action, DEST_DISK points to the device representing the entire hard disk (for example, /dev/sda). For UEFI-enabled images, such as Ubuntu and RHEL 9, write actions use DEST_DISK to point to the second partition (for example, /dev/sda2), with the first being the EFI partition. For the Bottlerocket image, which has 12 partitions, DEST_DISK is partition 12 (for example, /dev/sda12). Device names will be different for different disk types.

Ubuntu TinkerbellTemplateConfig example

---
apiVersion: anywhere.eks.amazonaws.com/v1alpha1
kind: TinkerbellTemplateConfig
metadata:
  name: my-cluster-name
spec:
  template:
    global_timeout: 6000
    id: ""
    name: my-cluster-name
    tasks:
    - actions:
      - environment:
          COMPRESSED: "true"
          DEST_DISK: /dev/sda
          IMG_URL: https://my-file-server/ubuntu-v1.23.7-eks-a-12-amd64.gz
        image: public.ecr.aws/eks-anywhere/tinkerbell/hub/image2disk:6c0f0d437bde2c836d90b000312c8b25fa1b65e1-eks-a-15
        name: stream-image
        timeout: 360
      - environment:
          DEST_DISK: /dev/sda2
          DEST_PATH: /etc/netplan/config.yaml
          STATIC_NETPLAN: true
          DIRMODE: "0755"
          FS_TYPE: ext4
          GID: "0"
          MODE: "0644"
          UID: "0"
        image: public.ecr.aws/eks-anywhere/tinkerbell/hub/writefile:6c0f0d437bde2c836d90b000312c8b25fa1b65e1-eks-a-15
        name: write-netplan
        timeout: 90
      - environment:
          CONTENTS: |
            datasource:
              Ec2:
                metadata_urls: [<admin-machine-ip>, <tinkerbell-ip-from-cluster-config>]
                strict_id: false
            manage_etc_hosts: localhost
            warnings:
              dsid_missing_source: off            
          DEST_DISK: /dev/sda2
          DEST_PATH: /etc/cloud/cloud.cfg.d/10_tinkerbell.cfg
          DIRMODE: "0700"
          FS_TYPE: ext4
          GID: "0"
          MODE: "0600"
          UID: "0"
        image: public.ecr.aws/eks-anywhere/tinkerbell/hub/writefile:6c0f0d437bde2c836d90b000312c8b25fa1b65e1-eks-a-15
        name: add-tink-cloud-init-config
        timeout: 90
      - environment:
          CONTENTS: |
            network:
              config: disabled            
          DEST_DISK: /dev/sda2
          DEST_PATH: /etc/cloud/cloud.cfg.d/99-disable-network-config.cfg
          DIRMODE: "0700"
          FS_TYPE: ext4
          GID: "0"
          MODE: "0600"
          UID: "0"
        image: public.ecr.aws/eks-anywhere/tinkerbell/hub/writefile:6c0f0d437bde2c836d90b000312c8b25fa1b65e1-eks-a-15
        name: disable-cloud-init-network-capabilities
        timeout: 90
      - environment:
          CONTENTS: |
                        datasource: Ec2
          DEST_DISK: /dev/sda2
          DEST_PATH: /etc/cloud/ds-identify.cfg
          DIRMODE: "0700"
          FS_TYPE: ext4
          GID: "0"
          MODE: "0600"
          UID: "0"
        image: public.ecr.aws/eks-anywhere/tinkerbell/hub/writefile:6c0f0d437bde2c836d90b000312c8b25fa1b65e1-eks-a-15
        name: add-tink-cloud-init-ds-config
        timeout: 90
      - environment:
          BLOCK_DEVICE: /dev/sda2
          FS_TYPE: ext4
        image: public.ecr.aws/eks-anywhere/tinkerbell/hub/kexec:6c0f0d437bde2c836d90b000312c8b25fa1b65e1-eks-a-15
        name: kexec-image
        pid: host
        timeout: 90
      name: my-cluster-name
      volumes:
      - /dev:/dev
      - /dev/console:/dev/console
      - /lib/firmware:/lib/firmware:ro
      worker: '{{.device_1}}'
    version: "0.1"

RHEL TinkerbellTemplateConfig example

---
apiVersion: anywhere.eks.amazonaws.com/v1alpha1
kind: TinkerbellTemplateConfig
metadata:
  name: my-cluster-name
spec:
  template:
    global_timeout: 6000
    id: ""
    name: my-cluster-name
    tasks:
    - actions:
      - environment:
          COMPRESSED: "true"
          DEST_DISK: /dev/sda
          IMG_URL: https://my-file-server/rhel-9-uefi-amd64.gz
        image: public.ecr.aws/eks-anywhere/tinkerbell/hub/image2disk:6c0f0d437bde2c836d90b000312c8b25fa1b65e1-eks-a-15
        name: stream-image
        timeout: 360
      - environment:
          DEST_DISK: /dev/sda2
          DEST_PATH: /etc/netplan/config.yaml
          STATIC_NETPLAN: true
          DIRMODE: "0755"
          FS_TYPE: ext4
          GID: "0"
          MODE: "0644"
          UID: "0"
        image: public.ecr.aws/eks-anywhere/tinkerbell/hub/writefile:6c0f0d437bde2c836d90b000312c8b25fa1b65e1-eks-a-15
        name: write-netplan
        timeout: 90
      - environment:
          CONTENTS: |
            datasource:
              Ec2:
                metadata_urls: [<admin-machine-ip>, <tinkerbell-ip-from-cluster-config>]
                strict_id: false
            manage_etc_hosts: localhost
            warnings:
              dsid_missing_source: off            
          DEST_DISK: /dev/sda2
          DEST_PATH: /etc/cloud/cloud.cfg.d/10_tinkerbell.cfg
          DIRMODE: "0700"
          FS_TYPE: ext4
          GID: "0"
          MODE: "0600"
          UID: "0"
        image: public.ecr.aws/eks-anywhere/tinkerbell/hub/writefile:6c0f0d437bde2c836d90b000312c8b25fa1b65e1-eks-a-15
        name: add-tink-cloud-init-config
        timeout: 90
      - environment:
          CONTENTS: |
            network:
              config: disabled            
          DEST_DISK: /dev/sda2
          DEST_PATH: /etc/cloud/cloud.cfg.d/99-disable-network-config.cfg
          DIRMODE: "0700"
          FS_TYPE: ext4
          GID: "0"
          MODE: "0600"
          UID: "0"
        image: public.ecr.aws/eks-anywhere/tinkerbell/hub/writefile:6c0f0d437bde2c836d90b000312c8b25fa1b65e1-eks-a-15
        name: disable-cloud-init-network-capabilities
        timeout: 90
      - environment:
          CONTENTS: |
                        datasource: Ec2
          DEST_DISK: /dev/sda2
          DEST_PATH: /etc/cloud/ds-identify.cfg
          DIRMODE: "0700"
          FS_TYPE: ext4
          GID: "0"
          MODE: "0600"
          UID: "0"
        image: public.ecr.aws/eks-anywhere/tinkerbell/hub/writefile:6c0f0d437bde2c836d90b000312c8b25fa1b65e1-eks-a-15
        name: add-tink-cloud-init-ds-config
        timeout: 90
      - name: "reboot"
        image: public.ecr.aws/eks-anywhere/tinkerbell/hub/reboot:6c0f0d437bde2c836d90b000312c8b25fa1b65e1-eks-a-15
        timeout: 90
        volumes:
          - /worker:/worker
      name: my-cluster-name
      volumes:
      - /dev:/dev
      - /dev/console:/dev/console
      - /lib/firmware:/lib/firmware:ro
      worker: '{{.device_1}}'
    version: "0.1"

Bottlerocket TinkerbellTemplateConfig example

Pay special attention to the BOOTCONFIG_CONTENTS environment section below if you wish to set up console redirection for the kernel and systemd. If you are only using a direct attached monitor as your primary display device, no additional configuration is needed here. However, if you need all boot output to be shown via a server’s serial console for example, extra configuration should be provided inside BOOTCONFIG_CONTENTS.

An empty kernel {} key is provided below in the example; inside this key is where you will specify your console devices. You may specify multiple comma-delimited console devices in quotes to a console key, for example: console = "tty0", "ttyS0,115200n8". The order of the devices is significant; systemd will output to the last device specified. The console key belongs inside the kernel key like so:

kernel {
    console = "tty0", "ttyS0,115200n8"
}

The above example will send all kernel output to both consoles, and systemd output to ttyS0. Additional information about serial console setup can be found in the Linux kernel documentation.

---
apiVersion: anywhere.eks.amazonaws.com/v1alpha1
kind: TinkerbellTemplateConfig
metadata:
  name: my-cluster-name
spec:
  template:
    global_timeout: 6000
    id: ""
    name: my-cluster-name
    tasks:
    - actions:
      - environment:
          COMPRESSED: "true"
          DEST_DISK: /dev/sda
          IMG_URL: https://anywhere-assets.eks.amazonaws.com/releases/bundles/11/artifacts/raw/1-22/bottlerocket-v1.22.10-eks-d-1-22-8-eks-a-11-amd64.img.gz
        image: public.ecr.aws/eks-anywhere/tinkerbell/hub/image2disk:6c0f0d437bde2c836d90b000312c8b25fa1b65e1-eks-a-15
        name: stream-image
        timeout: 360
      - environment:
          # An example console declaration that will send all kernel output to both consoles, and systemd output to ttyS0.
          # kernel {
          #     console = "tty0", "ttyS0,115200n8"
          # }
          BOOTCONFIG_CONTENTS: |
                        kernel {}
          DEST_DISK: /dev/sda12
          DEST_PATH: /bootconfig.data
          DIRMODE: "0700"
          FS_TYPE: ext4
          GID: "0"
          MODE: "0644"
          UID: "0"
        image: public.ecr.aws/eks-anywhere/tinkerbell/hub/writefile:6c0f0d437bde2c836d90b000312c8b25fa1b65e1-eks-a-15
        name: write-bootconfig
        timeout: 90
      - environment:
          CONTENTS: |
            # Version is required, it will change as we support
            # additional settings
            version = 1
            # "eno1" is the interface name
            # Users may turn on dhcp4 and dhcp6 via boolean
            [eno1]
            dhcp4 = true
            # Define this interface as the "primary" interface
            # for the system.  This IP is what kubelet will use
            # as the node IP.  If none of the interfaces has
            # "primary" set, we choose the first interface in
            # the file
            primary = true            
          DEST_DISK: /dev/sda12
          DEST_PATH: /net.toml
          DIRMODE: "0700"
          FS_TYPE: ext4
          GID: "0"
          MODE: "0644"
          UID: "0"
        image: public.ecr.aws/eks-anywhere/tinkerbell/hub/writefile:6c0f0d437bde2c836d90b000312c8b25fa1b65e1-eks-a-15
        name: write-netconfig
        timeout: 90
      - environment:
          HEGEL_URLS: http://<hegel-ip>:50061
          DEST_DISK: /dev/sda12
          DEST_PATH: /user-data.toml
          DIRMODE: "0700"
          FS_TYPE: ext4
          GID: "0"
          MODE: "0644"
          UID: "0"
        image: public.ecr.aws/eks-anywhere/tinkerbell/hub/writefile:6c0f0d437bde2c836d90b000312c8b25fa1b65e1-eks-a-15
        name: write-user-data
        timeout: 90
      - name: "reboot"
        image: public.ecr.aws/eks-anywhere/tinkerbell/hub/reboot:6c0f0d437bde2c836d90b000312c8b25fa1b65e1-eks-a-15
        timeout: 90
        volumes:
          - /worker:/worker
      name: my-cluster-name
      volumes:
      - /dev:/dev
      - /dev/console:/dev/console
      - /lib/firmware:/lib/firmware:ro
      worker: '{{.device_1}}'
    version: "0.1"

TinkerbellTemplateConfig Fields

The values in the TinkerbellTemplateConfig fields are created from the contents of the CSV file used to generate a configuration. The template contains actions that are performed on a Bare Metal machine when it first boots up to be provisioned. For advanced users, you can add these fields to your cluster configuration file if you have special needs to do so.

While there are fields that apply to all provisioned operating systems, actions are specific to each operating system. Examples below describe actions for Ubuntu and Bottlerocket operating systems.

template.global_timeout

Sets the timeout value for completing the configuration. Set to 6000 (100 minutes) by default.

template.id

Not set by default.

template.tasks

Within the TinkerbellTemplateConfig template under tasks is a set of actions. The following descriptions cover the actions shown in the example templates for Ubuntu and Bottlerocket:

template.tasks.actions.name.stream-image (Ubuntu and Bottlerocket)

The stream-image action streams the selected image to the machine you are provisioning. It identifies:

  • environment.COMPRESSED: When set to true, Tinkerbell expects IMG_URL to be a compressed image, which Tinkerbell will uncompress when it writes the contents to disk.
  • environment.DEST_DISK: The hard disk on which the operating system is deployed. The default is the first SCSI disk (/dev/sda), but can be changed for other disk types.
  • environment.IMG_URL: The operating system tarball (ubuntu or other) to stream to the machine you are configuring.
  • image: Container image needed to perform the steps needed by this action.
  • timeout: Sets the amount of time (in seconds) that Tinkerbell has to stream the image, uncompress it, and write it to disk before timing out. Consider increasing this limit from the default 600 to a higher limit if this action is timing out.

Ubuntu-specific actions

template.tasks.actions.name.write-netplan (Ubuntu)

The write-netplan action writes Ubuntu network configuration information to the machine (see Netplan for details). It identifies:

  • environment.CONTENTS.network.version: Identifies the network version.
  • environment.CONTENTS.network.renderer: Defines the service to manage networking. By default, the networkd systemd service is used.
  • environment.CONTENTS.network.ethernets: Network interface to external network (eno1, by default) and whether or not to use dhcp4 (true, by default).
  • environment.DEST_DISK: Destination block storage device partition where the operating system is copied. By default, /dev/sda2 is used (sda1 is the EFI partition).
  • environment.DEST_PATH: File where the networking configuration is written (/etc/netplan/config.yaml, by default).
  • environment.DIRMODE: Linux directory permissions bits to use when creating directories (0755, by default)
  • environment.FS_TYPE: Type of filesystem on the partition (ext4, by default).
  • environment.GID: The Linux group ID to set on file. Set to 0 (root group) by default.
  • environment.MODE: The Linux permission bits to set on file (0644, by default).
  • environment.UID: The Linux user ID to set on file. Set to 0 (root user) by default.
  • image: Container image used to perform the steps needed by this action.
  • timeout: Time needed to complete the action, in seconds.

template.tasks.actions.add-tink-cloud-init-config (Ubuntu)

The add-tink-cloud-init-config action configures cloud-init features to further configure the operating system. See cloud-init Documentation for details. It identifies:

  • environment.CONTENTS.datasource: Identifies Ec2 (Ec2.metadata_urls) as the data source and sets Ec2.strict_id: false to prevent cloud-init from producing warnings about this datasource.
  • environment.CONTENTS.system_info: Creates the tink user and gives it administrative group privileges (wheel, adm) and passwordless sudo privileges, and set the default shell (/bin/bash).
  • environment.CONTENTS.manage_etc_hosts: Updates the system’s /etc/hosts file with the hostname. Set to localhost by default.
  • environment.CONTENTS.warnings: Sets dsid_missing_source to off.
  • environment.DEST_DISK: Destination block storage device partition where the operating system is located (/dev/sda2, by default).
  • environment.DEST_PATH: Location of the cloud-init configuration file on disk (/etc/cloud/cloud.cfg.d/10_tinkerbell.cfg, by default)
  • environment.DIRMODE: Linux directory permissions bits to use when creating directories (0700, by default)
  • environment.FS_TYPE: Type of filesystem on the partition (ext4, by default).
  • environment.GID: The Linux group ID to set on file. Set to 0 (root group) by default.
  • environment.MODE: The Linux permission bits to set on file (0600, by default).
  • environment.UID: The Linux user ID to set on file. Set to 0 (root user) by default.
  • image: Container image used to perform the steps needed by this action.
  • timeout: Time needed to complete the action, in seconds.

template.tasks.actions.add-tink-cloud-init-ds-config (Ubuntu)

The add-tink-cloud-init-ds-config action configures cloud-init data store features. This identifies the location of your metadata source once the machine is up and running. It identifies:

  • environment.CONTENTS.datasource: Sets the datasource. Uses Ec2, by default.
  • environment.DEST_DISK: Destination block storage device partition where the operating system is located (/dev/sda2, by default).
  • environment.DEST_PATH: Location of the data store identity configuration file on disk (/etc/cloud/ds-identify.cfg, by default)
  • environment.DIRMODE: Linux directory permissions bits to use when creating directories (0700, by default)
  • environment.FS_TYPE: Type of filesystem on the partition (ext4, by default).
  • environment.GID: The Linux group ID to set on file. Set to 0 (root group) by default.
  • environment.MODE: The Linux permission bits to set on file (0600, by default).
  • environment.UID: The Linux user ID to set on file. Set to 0 (root user) by default.
  • image: Container image used to perform the steps needed by this action.
  • timeout: Time needed to complete the action, in seconds.
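
In practice, the CONTENTS this action writes to /etc/cloud/ds-identify.cfg reduce to a single line pinning the datasource, roughly:

datasource: Ec2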

template.tasks.actions.kexec-image (Ubuntu)

The kexec-image action performs provisioning activities on the machine, then allows kexec to pivot the kernel to use the system installed on disk. This action identifies:

  • environment.BLOCK_DEVICE: Disk partition on which the operating system is installed (/dev/sda2, by default)
  • environment.FS_TYPE: Type of filesystem on the partition (ext4, by default).
  • image: Container image used to perform the steps needed by this action.
  • pid: Process ID. Set to host, by default.
  • timeout: Time needed to complete the action, in seconds.
  • volumes: Identifies mount points that need to be remounted to point to locations in the installed system.

There are known issues related to drivers with some hardware that may make it necessary to replace the kexec-image action with a full reboot. If you require a full reboot, you can change the kexec-image setting as follows:

actions:
- name: "reboot"
  image: public.ecr.aws/l0g8r8j6/tinkerbell/hub/reboot-action:latest
  timeout: 90
  volumes:
  - /worker:/worker

Bottlerocket-specific actions

template.tasks.actions.write-bootconfig (Bottlerocket)

The write-bootconfig action identifies the location on the machine to put content needed to boot the system from disk.

  • environment.BOOTCONFIG_CONTENTS.kernel: Add kernel parameters that are passed to the kernel when the system boots.
  • environment.DEST_DISK: Identifies the block storage device that holds the boot partition.
  • environment.DEST_PATH: Identifies the file holding boot configuration data (/bootconfig.data in this example).
  • environment.DIRMODE: The Linux permissions assigned to the boot directory.
  • environment.FS_TYPE: The filesystem type associated with the boot partition.
  • environment.GID: The group ID associated with files and directories created on the boot partition.
  • environment.MODE: The Linux permissions assigned to files in the boot partition.
  • environment.UID: The user ID associated with files and directories created on the boot partition. UID 0 is the root user.
  • image: Container image used to perform the steps needed by this action.
  • timeout: Time needed to complete the action, in seconds.

template.tasks.actions.write-netconfig (Bottlerocket)

The write-netconfig action configures networking for the system.

  • environment.CONTENTS: Add network values, including: version = 1 (version number), [eno1] (external network interface), dhcp4 = true (turns on dhcp4), and primary = true (identifies this interface as the primary interface used by kubelet).
  • environment.DEST_DISK: Identifies the block storage device that holds the network configuration information.
  • environment.DEST_PATH: Identifies the file holding network configuration data (/net.toml in this example).
  • environment.DIRMODE: The Linux permissions assigned to the directory holding network configuration settings.
  • environment.FS_TYPE: The filesystem type associated with the partition holding network configuration settings.
  • environment.GID: The group ID associated with files and directories created on the partition. GID 0 is the root group.
  • environment.MODE: The Linux permissions assigned to files in the partition.
  • environment.UID: The user ID associated with files and directories created on the partition. UID 0 is the root user.
  • image: Container image used to perform the steps needed by this action.
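
Assembled from the values above, the CONTENTS written to /net.toml amount to something like the following sketch (the interface name is a placeholder for your machine's primary interface):

version = 1

[eno1]
dhcp4 = true
primary = true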

template.tasks.actions.write-user-data (Bottlerocket)

The write-user-data action configures the Tinkerbell Hegel service, which provides the metadata store for Tinkerbell.

  • environment.HEGEL_URLS: The IP address and port number of the Tinkerbell Hegel service.
  • environment.DEST_DISK: Identifies the block storage device that holds the network configuration information.
  • environment.DEST_PATH: Identifies the file holding network configuration data (/net.toml in this example).
  • environment.DIRMODE: The Linux permissions assigned to the directory holding network configuration settings.
  • environment.FS_TYPE: The filesystem type associated with the partition holding network configuration settings.
  • environment.GID: The group ID associated with files and directories created on the partition. GID 0 is the root group.
  • environment.MODE: The Linux permissions assigned to files in the partition.
  • environment.UID: The user ID associated with files and directories created on the partition. UID 0 is the root user.
  • image: Container image used to perform the steps needed by this action.
  • timeout: Time needed to complete the action, in seconds.

template.tasks.actions.reboot (Bottlerocket)

The reboot action defines how the system restarts to bring up the installed system.

  • image: Container image used to perform the steps needed by this action.
  • timeout: Time needed to complete the action, in seconds.
  • volumes: The volume (directory) to mount into the container from the installed system.

version

Matches the current version of the Tinkerbell template.

Custom Tinkerbell action examples

By creating your own custom Tinkerbell actions, you can add to or modify the installed operating system so those changes take effect when the installed system first starts (from a reboot or pivot). The following example shows how to add a .deb package (openssl) to an Ubuntu installation:

      - environment:
          BLOCK_DEVICE: /dev/sda1
          CHROOT: "y"
          CMD_LINE: apt -y update && apt -y install openssl
          DEFAULT_INTERPRETER: /bin/sh -c
          FS_TYPE: ext4
        image: public.ecr.aws/eks-anywhere/tinkerbell/hub/cexec:6c0f0d437bde2c836d90b000312c8b25fa1b65e1-eks-a-15
        name: install-openssl
        timeout: 90

The following shows an example of adding a new user (tinkerbell) to an installed Ubuntu system:

      - environment:
          BLOCK_DEVICE: <block device path> # E.g. /dev/sda1
          FS_TYPE: ext4
          CHROOT: "y"
          DEFAULT_INTERPRETER: "/bin/sh -c"
          CMD_LINE: "useradd --password $(openssl passwd -1 tinkerbell) --shell /bin/bash --create-home --groups sudo tinkerbell"
        image: public.ecr.aws/eks-anywhere/tinkerbell/hub/cexec:6c0f0d437bde2c836d90b000312c8b25fa1b65e1-eks-a-15
        name: "create-user"
        timeout: 90

Look for more examples as they are added to the Tinkerbell examples page.

7.7 - Customize Bare Metal

Customizing EKS Anywhere on Bare Metal

7.7.1 - Customize HookOS for EKS Anywhere on Bare Metal

Customizing HookOS for EKS Anywhere on Bare Metal

To network boot bare metal machines in EKS Anywhere clusters, machines acquire a kernel and initial ramdisk that is referred to as HookOS. A default HookOS is provided when you create an EKS Anywhere cluster. However, there may be cases where you want and/or need to customize the default HookOS, such as to add drivers required to boot your particular type of hardware.

The following procedure describes how to customize and build HookOS. For more information on Tinkerbell’s HookOS Installation Environment, see the Tinkerbell Hook repo .

System requirements

  • >= 2G memory
  • >= 4 CPU cores (more cores speed up the kernel build)
  • >= 20G disk space

Dependencies

Be sure to install all the following dependencies.

  • jq
  • envsubst
  • pigz
  • docker
  • curl
  • bash >= 4.4
  • git
  • findutils
  1. Clone the Hook repo or your fork of that repo:

    git clone https://github.com/tinkerbell/hook.git
    cd hook/
    
  2. Run the Linux kernel menuconfig TUI and configure the kernel as needed. Be sure to save the config before exiting. The result of this step will be a modified kernel configuration file (./kernel/configs/generic-6.6.y-x86_64).

    ./build.sh kernel-config hook-latest-lts-amd64
    
  3. Build the kernel container image. The result of this step will be a container image. Use docker images quay.io/tinkerbell/hook-kernel to see it.

    ./build.sh kernel hook-latest-lts-amd64
    
  4. Build the HookOS kernel and initramfs artifacts. The result of this step will be the kernel and initramfs. These files are located at ./out/hook/vmlinuz-latest-lts-x86_64 and ./out/hook/initramfs-latest-lts-x86_64 respectively.

    ./build.sh linuxkit hook-latest-lts-amd64 
    
  5. Rename the kernel and initramfs files to vmlinuz-x86_64 and initramfs-x86_64 respectively.

    mv ./out/hook/vmlinuz-latest-lts-x86_64 ./out/hook/vmlinuz-x86_64
    mv ./out/hook/initramfs-latest-lts-x86_64 ./out/hook/initramfs-x86_64
    
  6. To use the kernel (vmlinuz-x86_64) and initial ram disk (initramfs-x86_64) when you build your EKS Anywhere cluster, see the description of the hookImagesURLPath field in your cluster configuration file.
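
    For reference, the following is a hedged sketch of where these artifacts are referenced in the cluster spec; the URL is a placeholder for wherever you host the renamed files, and the authoritative description of hookImagesURLPath is in the Bare Metal configuration reference:

    apiVersion: anywhere.eks.amazonaws.com/v1alpha1
    kind: TinkerbellDatacenterConfig
    metadata:
      name: my-cluster-name
    spec:
      tinkerbellIP: "<tinkerbell-ip>"
      hookImagesURLPath: "http://<http-server-ip>:<port>"   # placeholder URL serving vmlinuz-x86_64 and initramfs-x86_64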

7.7.2 - DHCP options for EKS Anywhere

Using your existing DHCP service with EKS Anywhere Bare Metal

In order to facilitate network booting machines, EKS Anywhere bare metal runs its own DHCP server, Boots (a standalone service in the Tinkerbell stack). There are numerous reasons why you may want to use an existing DHCP service instead of Boots: security, compliance, access issues, existing layer 2 constraints, existing automation, and so on.

In environments where there is an existing DHCP service, this DHCP service can be configured to interoperate with EKS Anywhere. This document will cover how to make your existing DHCP service interoperate with EKS Anywhere bare metal. In this scenario EKS Anywhere will have no layer 2 DHCP responsibilities.

Note: Currently, Boots is responsible for more than just DHCP. So Boots can’t be entirely avoided in the provisioning process.

Additional Services in Boots

  • HTTP and TFTP servers for iPXE binaries
  • HTTP server for iPXE script
  • Syslog server (receiver)

Process

As a prerequisite, your existing DHCP service must serve host/address/static reservations for all machines that EKS Anywhere bare metal will provision. This means that the IPAM details you enter into your hardware.csv must be used to create host/address/static reservations in your existing DHCP service.

Now, you configure your existing DHCP service to provide the location of the iPXE binary and script. This is a two-step interaction between machines and the DHCP service and enables the provisioning process to start.

  • Step 1: The machine broadcasts a request to network boot. Your existing DHCP service then provides the machine with all IPAM info as well as the location of the Tinkerbell iPXE binary (ipxe.efi). The machine configures its network interface with the IPAM info then downloads the Tinkerbell iPXE binary from the location provided by the DHCP service and runs it.

  • Step 2: Now with the Tinkerbell iPXE binary loaded and running, iPXE again broadcasts a request to network boot. The DHCP service again provides all IPAM info as well as the location of the Tinkerbell iPXE script (auto.ipxe). iPXE configures its network interface using the IPAM info and then downloads the Tinkerbell iPXE script from the location provided by the DHCP service and runs it.

Note: The auto.ipxe file is an iPXE script that tells iPXE where to download the HookOS kernel and initrd so that they can be loaded into memory.

The following diagram illustrates the process described above. Note that the diagram only describes the network booting parts of the DHCP interaction, not the exchange of IPAM info.

(Diagram: the two-step network boot interaction between the machine, iPXE, and the existing DHCP service)

Configuration

Below you will find code snippets showing how to add the two-step process described above to an existing DHCP service. Each config checks whether DHCP option 77 (the user class option) equals “Tinkerbell”. If it matches, the Tinkerbell iPXE script (auto.ipxe) is served; if it does not, the iPXE binary (ipxe.efi) is served.

DHCP option: next server

Most DHCP services define a next server option. This option generally corresponds to either DHCP option 66 or the DHCP header sname. It is used to tell a machine where to download the initial bootloader.

Special consideration is required for the next server value when using EKS Anywhere to create your initial management cluster. This is because during this initial create phase a temporary bootstrap cluster is created and used to provision the management cluster.

The bootstrap cluster runs the Tinkerbell stack. When the management cluster is successfully created, the Tinkerbell stack is redeployed to the management cluster and the bootstrap cluster is deleted. This means that the IP address of the Tinkerbell stack will change.

As a temporary, one-time step, the IP address used by the existing DHCP service for next server will need to be the IP address of the temporary bootstrap cluster. This is the IP of the Admin machine or, if you use the CLI flag --tinkerbell-bootstrap-ip, the IP you specify with that flag.

Once the management cluster is created, the IP address used by the existing DHCP service for next server will need to be updated to the tinkerbellIP. This IP is defined in your cluster spec at tinkerbellDatacenterConfig.spec.tinkerbellIP. The next server IP will not need to be updated again.

Note: The upgrade phase of a management cluster or the creation of any workload clusters will not require you to change the next server IP in the config of your existing DHCP service.
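
For reference, the relevant lines in the TinkerbellDatacenterConfig section of your cluster spec look roughly like this sketch (the IP is a placeholder that matches the snippets below):

kind: TinkerbellDatacenterConfig
spec:
  tinkerbellIP: "192.168.2.112"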

Code snippets

The following code snippets are generic examples of the config needed to add the two-step process to an existing DHCP service. They do not cover the IPAM info that is also required.

dnsmasq

dnsmasq.conf

dhcp-match=tinkerbell, option:user-class, Tinkerbell
dhcp-boot=tag:!tinkerbell,ipxe.efi,none,192.168.2.112
dhcp-boot=tag:tinkerbell,http://192.168.2.112/auto.ipxe

Kea DHCP

kea.json

{
    "Dhcp4": {
        "client-classes": [
            {
                "name": "tinkerbell",
                "test": "substring(option[77].hex,0,10) == 'Tinkerbell'",
                "boot-file-name": "http://192.168.2.112/auto.ipxe"
            },
            {
                "name": "default",
                "test": "not(substring(option[77].hex,0,10) == 'Tinkerbell')",
                "boot-file-name": "ipxe.efi"
            }
        ],
        "subnet4": [
            {
                "next-server": "192.168.2.112"
            }
        ]
    }
}

ISC DHCP

dhcpd.conf

 if exists user-class and option user-class = "Tinkerbell" {
     filename "http://192.168.2.112/auto.ipxe";
 } else {
     filename "ipxe.efi";
 }
 next-server 192.168.1.112;

Microsoft DHCP server

Please follow the ipxe.org guide on how to configure Microsoft DHCP server.

8 - Create Snow cluster

Create an EKS Anywhere cluster on Snow

8.1 - Create Snow cluster

Create an EKS Anywhere cluster on AWS Snowball Edge

EKS Anywhere supports an AWS Snow provider for EKS Anywhere deployments.

This document walks you through setting up EKS Anywhere on Snow as a standalone, self-managed cluster or combined set of management/workload clusters. See Cluster topologies for details.

Note: Before you create your cluster, you have the option of validating the EKS Anywhere bundle manifest container images by following instructions in the Verify Cluster Images page.

Prerequisite checklist

EKS Anywhere on Snow needs:

Also, see the Ports and protocols page for information on ports that need to be accessible from control plane, worker, and Admin machines.

Steps

The following steps are divided into two sections:

  • Create an initial cluster (used as a management or standalone cluster)
  • Create zero or more workload clusters from the management cluster

Create an initial cluster

Follow these steps to create an EKS Anywhere cluster that can be used either as a management cluster or as a standalone cluster (for running workloads itself).

  1. Optional Configuration

    Set License Environment Variable

    Add a license to any cluster for which you want to receive paid support. If you are creating a licensed cluster, set and export the license variable (see License cluster if you are licensing an existing cluster):

    export EKSA_LICENSE='my-license-here'
    

    After you have created your eksa-mgmt-cluster.yaml and set your credential environment variables, you will be ready to create the cluster.

    Configure Curated Packages

    The Amazon EKS Anywhere Curated Packages are only available to customers with the Amazon EKS Anywhere Enterprise Subscription. To request a free trial, talk to your Amazon representative or connect with one here. Cluster creation will succeed if authentication is not set up, but some warnings may be generated. Detailed package configurations can be found here.

    If you are going to use packages, set up authentication. These credentials should have limited capabilities:

    export EKSA_AWS_ACCESS_KEY_ID="your*access*id"
    export EKSA_AWS_SECRET_ACCESS_KEY="your*secret*key"
    export EKSA_AWS_REGION="us-west-2"  
    
  2. Set an environment variable for your cluster name

    export CLUSTER_NAME=mgmt
    
  3. Generate a cluster config file for your Snow provider

    eksctl anywhere generate clusterconfig $CLUSTER_NAME --provider snow > eksa-mgmt-cluster.yaml
    
  4. Optionally import images to private registry

    This optional step imports EKS Anywhere artifacts and release bundle to a local registry. This is required for air-gapped installation.

    eksctl anywhere import images \
       --input /usr/lib/eks-a/artifacts/artifacts.tar.gz \
       --bundles /usr/lib/eks-a/manifests/bundle-release.yaml \
       --registry $PRIVATE_REGISTRY_ENDPOINT \
       --insecure=true
    
  5. Modify the cluster config (eksa-mgmt-cluster.yaml) as follows:

    • Refer to the Snow configuration for information on configuring this cluster config for a Snow provider.
    • Add Optional configuration settings as needed.
  6. Set Credential Environment Variables

    Before you create the initial cluster, you will need to use the credentials and ca-bundles files that are in the Admin instance, and export these environment variables for your AWS Snowball device credentials. Make sure you use single quotes around the values so that your shell does not interpret the values:

    export EKSA_AWS_CREDENTIALS_FILE='/PATH/TO/CREDENTIALS/FILE'
    export EKSA_AWS_CA_BUNDLES_FILE='/PATH/TO/CABUNDLES/FILE'
    

    After you have created your eksa-mgmt-cluster.yaml and set your credential environment variables, you will be ready to create the cluster.

  7. Create cluster

    For a regular cluster create (with internet access), type the following:

    eksctl anywhere create cluster \
       -f eksa-mgmt-cluster.yaml
    

    For an airgapped cluster create, follow Preparation for airgapped deployments instructions, then type the following:

    eksctl anywhere create cluster \
       -f eksa-mgmt-cluster.yaml \
       --bundles-override /usr/lib/eks-a/manifests/bundle-release.yaml
    
  8. Once the cluster is created you can use it with the generated KUBECONFIG file in your local directory:

    export KUBECONFIG=${PWD}/${CLUSTER_NAME}/${CLUSTER_NAME}-eks-a-cluster.kubeconfig
    
  9. Check the cluster nodes:

    To check that the cluster completed, list the machines to see the control plane and worker nodes:

    kubectl get machines -A
    

    Example command output:

    NAMESPACE    NAME                        CLUSTER  NODENAME                    PROVIDERID                                       PHASE    AGE    VERSION
    eksa-system  mgmt-etcd-dsxb5             mgmt                                 aws-snow:///192.168.1.231/s.i-8b0b0631da3b8d9e4  Running  4m59s  
    eksa-system  mgmt-md-0-7b7c69cf94-99sll  mgmt     mgmt-md-0-1-58nng           aws-snow:///192.168.1.231/s.i-8ebf6b58a58e47531  Running  4m58s  v1.24.9-eks-1-24-7
    eksa-system  mgmt-srrt8                  mgmt     mgmt-control-plane-1-xs4t9  aws-snow:///192.168.1.231/s.i-8414c7fcabcf3d7c1  Running  4m58s  v1.24.9-eks-1-24-7
    ...    
    
  10. Check the cluster:

    You can now use the cluster as you would any Kubernetes cluster. To try it out, run the test application with:

    export CLUSTER_NAME=mgmt
    export KUBECONFIG=${PWD}/${CLUSTER_NAME}/${CLUSTER_NAME}-eks-a-cluster.kubeconfig
    kubectl apply -f "https://anywhere.eks.amazonaws.com/manifests/hello-eks-a.yaml"
    

    Verify the test application in Deploy test workload .

Create separate workload clusters

Follow these steps if you want to use your initial cluster to create and manage separate workload clusters.

  1. Set License Environment Variable (Optional)

    Add a license to any cluster for which you want to receive paid support. If you are creating a licensed cluster, set and export the license variable (see License cluster if you are licensing an existing cluster):

    export EKSA_LICENSE='my-license-here'
    
  2. Generate a workload cluster config:

    CLUSTER_NAME=w01
    eksctl anywhere generate clusterconfig $CLUSTER_NAME \
       --provider snow > eksa-w01-cluster.yaml
    

    Refer to the initial config described earlier for the required and optional settings.

    NOTE: Ensure workload cluster object names (Cluster, SnowDatacenterConfig, SnowMachineConfig, etc.) are distinct from management cluster object names.

  3. Be sure to set the managementCluster field to identify the name of the management cluster.

    For example, the management cluster, mgmt is defined for our workload cluster w01 as follows:

    apiVersion: anywhere.eks.amazonaws.com/v1alpha1
    kind: Cluster
    metadata:
      name: w01
    spec:
      managementCluster:
        name: mgmt
    
  4. Create a workload cluster in one of the following ways:

    • GitOps: See Manage separate workload clusters with GitOps

    • Terraform: See Manage separate workload clusters with Terraform

      NOTE: snowDatacenterConfig.spec.identityRef and a Snow bootstrap credentials secret need to be specified when provisioning a cluster through GitOps or Terraform, as the EKS Anywhere Cluster Controller will not create a Snow bootstrap credentials secret like the eksctl CLI does when the field is empty.

      snowMachineConfig.spec.sshKeyName must be specified to SSH into your nodes when provisioning a cluster through GitOps or Terraform, as the EKS Anywhere Cluster Controller will not generate the keys like eksctl CLI does when the field is empty.

    • eksctl CLI: To create a workload cluster with eksctl, run:

      eksctl anywhere create cluster \
          -f eksa-w01-cluster.yaml  \
          --kubeconfig mgmt/mgmt-eks-a-cluster.kubeconfig
      

      As noted earlier, adding the --kubeconfig option tells eksctl to use the management cluster identified by that kubeconfig file to create a different workload cluster.

    • kubectl CLI: The cluster lifecycle feature lets you use kubectl, or other tools that can talk to the Kubernetes API, to create a workload cluster. To use kubectl, run:

      kubectl apply -f eksa-w01-cluster.yaml
      

      To check the state of a cluster managed with the cluster lifecycle feature, use kubectl to show the cluster object with its status.

      The status field on the cluster object holds information about the current state of the cluster.

      kubectl get clusters w01 -o yaml
      

      The cluster has been fully created once the status of the Ready condition is marked True. See the cluster status guide for more information.

  5. Check the workload cluster:

    You can now use the workload cluster as you would any Kubernetes cluster.

    • If your workload cluster was created with eksctl, change your credentials to point to the new workload cluster (for example, w01), then run the test application with:

      export CLUSTER_NAME=w01
      export KUBECONFIG=${PWD}/${CLUSTER_NAME}/${CLUSTER_NAME}-eks-a-cluster.kubeconfig
      kubectl apply -f "https://anywhere.eks.amazonaws.com/manifests/hello-eks-a.yaml"
      
    • If your workload cluster was created with GitOps or Terraform, the kubeconfig for your new cluster is stored as a secret on the management cluster. You can get credentials and run the test application as follows:

      kubectl get secret -n eksa-system w01-kubeconfig -o jsonpath='{.data.value}' | base64 --decode > w01.kubeconfig
      export KUBECONFIG=w01.kubeconfig
      kubectl apply -f "https://anywhere.eks.amazonaws.com/manifests/hello-eks-a.yaml"
      

    Verify the test application in the deploy test application section.

  6. Add more workload clusters:

    To add more workload clusters, go through the same steps for creating the initial workload, copying the config file to a new name (such as eksa-w02-cluster.yaml), modifying resource names, and running the create cluster command again.

Next steps:

  • See the Cluster management section for more information on common operational tasks like deleting the cluster.

  • See the Package management section for more information on post-creation curated packages installation.

8.2 - Configure for Snow

Full EKS Anywhere configuration reference for an AWS Snow cluster.

This is a generic template with detailed descriptions below for reference. The following additional optional configuration can also be included:

apiVersion: anywhere.eks.amazonaws.com/v1alpha1
kind: Cluster
metadata:
  name: my-cluster-name
spec:
  clusterNetwork:
    cniConfig:
      cilium: {}
    pods:
      cidrBlocks:
      - 10.1.0.0/16
    services:
      cidrBlocks:
      - 10.96.0.0/12
  controlPlaneConfiguration:
    count: 3
    endpoint:
      host: ""
    machineGroupRef:
      kind: SnowMachineConfig
      name: my-cluster-machines
  datacenterRef:
    kind: SnowDatacenterConfig
    name: my-cluster-datacenter
  externalEtcdConfiguration:
    count: 3
    machineGroupRef:
      kind: SnowMachineConfig
      name: my-cluster-machines
  kubernetesVersion: "1.28"
  workerNodeGroupConfigurations:
  - count: 1
    machineGroupRef:
      kind: SnowMachineConfig
      name: my-cluster-machines
    name: md-0
---
apiVersion: anywhere.eks.amazonaws.com/v1alpha1
kind: SnowDatacenterConfig
metadata:
  name: my-cluster-datacenter
spec: {}

---
apiVersion: anywhere.eks.amazonaws.com/v1alpha1
kind: SnowMachineConfig
metadata:
  name: my-cluster-machines
spec:
  amiID: ""
  instanceType: sbe-c.large
  sshKeyName: ""
  osFamily: ubuntu
  devices:
  - ""
  containersVolume:
    size: 25
  network:
    directNetworkInterfaces:
    - index: 1
      primary: true
      ipPoolRef:
        kind: SnowIPPool
        name: ip-pool-1
---
apiVersion: anywhere.eks.amazonaws.com/v1alpha1
kind: SnowIPPool
metadata:
  name: ip-pool-1
spec:
  pools:
  - ipStart: 192.168.1.2
    ipEnd: 192.168.1.14
    subnet: 192.168.1.0/24
    gateway: 192.168.1.1
  - ipStart: 192.168.1.55
    ipEnd: 192.168.1.250
    subnet: 192.168.1.0/24
    gateway: 192.168.1.1

Cluster Fields

name (required)

Name of your cluster (my-cluster-name in this example).

clusterNetwork (required)

Network configuration.

clusterNetwork.cniConfig (required)

CNI plugin configuration. Supports cilium.

clusterNetwork.cniConfig.cilium.policyEnforcementMode (optional)

Optionally specify a policyEnforcementMode of default, always or never.

clusterNetwork.cniConfig.cilium.egressMasqueradeInterfaces (optional)

Optionally specify a network interface name or interface prefix used for masquerading. See EgressMasqueradeInterfaces option.

clusterNetwork.cniConfig.cilium.skipUpgrade (optional)

When true, skip Cilium maintenance during upgrades. Also see Use a custom CNI.

clusterNetwork.cniConfig.cilium.routingMode (optional)

Optionally specify the routing mode. Accepts default and direct. Also see RoutingMode option.

clusterNetwork.cniConfig.cilium.ipv4NativeRoutingCIDR (optional)

Optionally specify the CIDR to use when RoutingMode is set to direct. When specified, Cilium assumes networking for this CIDR is preconfigured and hands traffic destined for that range to the Linux network stack without applying any SNAT.

clusterNetwork.cniConfig.cilium.ipv6NativeRoutingCIDR (optional)

Optionally specify the IPv6 CIDR to use when RoutingMode is set to direct. When specified, Cilium assumes networking for this CIDR is preconfigured and hands traffic destined for that range to the Linux network stack without applying any SNAT.
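
Pulled together, these Cilium options nest under cniConfig in the cluster spec; the values below are placeholders for illustration only:

clusterNetwork:
  cniConfig:
    cilium:
      policyEnforcementMode: default
      routingMode: direct
      ipv4NativeRoutingCIDR: 10.1.0.0/16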

clusterNetwork.pods.cidrBlocks[0] (required)

The pod subnet specified in CIDR notation. Only 1 pod CIDR block is permitted. The CIDR block should not conflict with the host or service network ranges.

clusterNetwork.services.cidrBlocks[0] (required)

The service subnet specified in CIDR notation. Only 1 service CIDR block is permitted. This CIDR block should not conflict with the host or pod network ranges.

clusterNetwork.dns.resolvConf.path (optional)

File path to a file containing a custom DNS resolver configuration.

controlPlaneConfiguration (required)

Specific control plane configuration for your Kubernetes cluster.

controlPlaneConfiguration.count (required)

Number of control plane nodes

controlPlaneConfiguration.machineGroupRef (required)

Refers to the Kubernetes object with Snow specific configuration for your nodes. See SnowMachineConfig Fields below.

controlPlaneConfiguration.endpoint.host (required)

A unique IP you want to use for the control plane VM in your EKS Anywhere cluster. Choose an IP in your network range that does not conflict with other devices.

NOTE: This IP should be outside the network DHCP range as it is a floating IP that gets assigned to one of the control plane nodes for kube-apiserver loadbalancing.

controlPlaneConfiguration.taints (optional)

A list of taints to apply to the control plane nodes of the cluster.

Replaces the default control plane taint. For k8s versions prior to 1.24, it replaces node-role.kubernetes.io/master. For k8s versions 1.24+, it replaces node-role.kubernetes.io/control-plane. The default control plane components will tolerate the provided taints.

Modifying the taints associated with the control plane configuration will cause new nodes to be rolled-out, replacing the existing nodes.

NOTE: The taints provided will be used instead of the default control plane taint. Any pods that you run on the control plane nodes must tolerate the taints you provide in the control plane configuration.

controlPlaneConfiguration.labels (optional)

A list of labels to apply to the control plane nodes of the cluster. This is in addition to the labels that EKS Anywhere will add by default.

Modifying the labels associated with the control plane configuration will cause new nodes to be rolled out, replacing the existing nodes.
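
As an illustration only (the endpoint, keys, and values are placeholders), taints and labels sit directly under controlPlaneConfiguration:

controlPlaneConfiguration:
  count: 3
  endpoint:
    host: "<control-plane-endpoint-ip>"
  machineGroupRef:
    kind: SnowMachineConfig
    name: my-cluster-machines
  taints:
  - key: "key1"
    value: "value1"
    effect: "NoSchedule"
  labels:
    "key1": "value1"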

workerNodeGroupConfigurations (required)

This takes in a list of node groups that you can define for your workers. You may define one or more worker node groups.

workerNodeGroupConfigurations[*].count (optional)

Number of worker nodes. (default: 1) It will be ignored if the cluster autoscaler curated package is installed and autoscalingConfiguration is used to specify the desired range of replicas.

See the troubleshooting topic machine health check remediation not allowed and choose a count sufficient to allow machine health check remediation.

workerNodeGroupConfigurations[*].machineGroupRef (required)

Refers to the Kubernetes object with Snow specific configuration for your nodes. See SnowMachineConfig Fields below.

workerNodeGroupConfigurations[*].name (required)

Name of the worker node group (default: md-0)

workerNodeGroupConfigurations[*].autoscalingConfiguration.minCount (optional)

Minimum number of nodes for this node group’s autoscaling configuration.

workerNodeGroupConfigurations[*].autoscalingConfiguration.maxCount (optional)

Maximum number of nodes for this node group’s autoscaling configuration.
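
For example, a worker node group with autoscaling bounds looks roughly like the following sketch (names and counts are placeholders):

workerNodeGroupConfigurations:
- name: md-0
  machineGroupRef:
    kind: SnowMachineConfig
    name: my-cluster-machines
  autoscalingConfiguration:
    minCount: 1
    maxCount: 5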

workerNodeGroupConfigurations[*].taints (optional)

A list of taints to apply to the nodes in the worker node group.

Modifying the taints associated with a worker node group configuration will cause new nodes to be rolled-out, replacing the existing nodes associated with the configuration.

At least one node group must not have NoSchedule or NoExecute taints applied to it.

workerNodeGroupConfigurations[*].labels (optional)

A list of labels to apply to the nodes in the worker node group. This is in addition to the labels that EKS Anywhere will add by default.

Modifying the labels associated with a worker node group configuration will cause new nodes to be rolled out, replacing the existing nodes associated with the configuration.

workerNodeGroupConfigurations[*].kubernetesVersion (optional)

The Kubernetes version you want to use for this worker node group. Supported values: 1.28, 1.27, 1.26, 1.25, 1.24

externalEtcdConfiguration.count (optional)

Number of etcd members.

externalEtcdConfiguration.machineGroupRef (optional)

Refers to the Kubernetes object with Snow specific configuration for your etcd members. See SnowMachineConfig Fields below.

datacenterRef (required)

Refers to the Kubernetes object with Snow environment specific configuration. See SnowDatacenterConfig Fields below.

kubernetesVersion (required)

The Kubernetes version you want to use for your cluster. Supported values: 1.28, 1.27, 1.26, 1.25, 1.24

SnowDatacenterConfig Fields

identityRef (required)

Refers to the Kubernetes secret object with Snow devices credentials used to reconcile the cluster.

SnowMachineConfig Fields

amiID (optional)

AMI ID from which to create the machine instance. If the field is empty, the Snow provider uses AMI lookup logic to find a suitable AMI ID based on the Kubernetes version and osFamily.

instanceType (optional)

Type of the Snow EC2 machine instance. See Quotas for Compute Instances on a Snowball Edge Device for supported instance types on Snow (Default: sbe-c.large).

osFamily

Operating System on instance machines. Permitted value: ubuntu.

physicalNetworkConnector (optional)

Type of Snow physical network connector to use for creating direct network interfaces. Permitted values: SFP_PLUS, QSFP, RJ45 (Default: SFP_PLUS).

sshKeyName (optional)

Name of the AWS Snow SSH key pair you want to configure to access your machine instances.

The default is eksa-default-{cluster-name}-{uuid}.

devices

A device IP list from which to bootstrap and provision machine instances.

network

Custom network setting for the machine instances. DHCP and static IP configurations are supported.

network.directNetworkInterfaces[0].index (optional)

Index number of a direct network interface (DNI) used to clarify the position in the list. Must be no smaller than 1 and no greater than 8.

network.directNetworkInterfaces[0].primary (optional)

Whether the DNI is primary or not. One and only one primary DNI is required in the directNetworkInterfaces list.

network.directNetworkInterfaces[0].vlanID (optional)

VLAN ID to use for the DNI.

network.directNetworkInterfaces[0].dhcp (optional)

Whether DHCP is to be used to assign IP for the DNI.

network.directNetworkInterfaces[0].ipPoolRef (optional)

Refers to a SnowIPPool object which provides a range of IP addresses. When specified, an IP address selected from the pool will be allocated to the DNI.

containersVolume (optional)

Configuration option for customizing containers data storage volume.

containersVolume.size (optional)

Size of the storage for containerd runtime in Gi.

The field is optional for Ubuntu and if specified, the size must be no smaller than 8 Gi.

containersVolume.deviceName (optional)

Containers volume device name.

containersVolume.type (optional)

Type of the containers volume. Permitted values: sbp1, sbg1. (Default: sbp1)

sbp1 stands for capacity-optimized HDD. sbg1 is performance-optimized SSD.

nonRootVolumes (optional)

Configuration options for the non root storage volumes.

nonRootVolumes[0].deviceName (optional)

Non root volume device name. Must be specified and cannot have prefix “/dev/sda” as it is reserved for root volume and containers volume.

nonRootVolumes[0].size (optional)

Size of the storage device for the non root volume. Must be no smaller than 8 Gi.

nonRootVolumes[0].type (optional)

Type of the non root volume. Permitted values: sbp1, sbg1. (Default: sbp1)

sbp1 stands for capacity-optimized HDD. sbg1 is performance-optimized SSD.

SnowIPPool Fields

pools[0].ipStart (optional)

Start address of an IP range.

pools[0].ipEnd (optional)

End address of an IP range.

pools[0].subnet (optional)

An IP subnet for determining whether an IP is within the subnet.

pools[0].gateway (optional)

Gateway of the subnet, used for routing purposes.

9 - Create CloudStack cluster

Create an EKS Anywhere cluster on Apache CloudStack

9.1 - Requirements for EKS Anywhere on CloudStack

CloudStack provider requirements for EKS Anywhere

To run EKS Anywhere, you will need:

Prepare Administrative machine

Set up an Administrative machine as described in Install EKS Anywhere .

Prepare a CloudStack environment

To prepare a CloudStack environment to run EKS Anywhere, you need the following:

  • A CloudStack 4.14 or later environment. CloudStack 4.16 is used for examples in these docs.

  • Capacity to deploy 6-10 VMs.

  • One shared network in CloudStack to use for the cluster. EKS Anywhere clusters need access to CloudStack through the network to enable self-managing and storage capabilities.

  • A Red Hat Enterprise Linux qcow2 image built using the image-builder tool as described in artifacts .

  • User credentials (CloudStack API key and Secret key) to create VMs and attach networks in CloudStack.

  • Prepare DHCP IP addresses pool

  • One IP address routable from the cluster but excluded from DHCP offering. This IP address is to be used as the Control Plane Endpoint IP. Below are some suggestions to ensure that this IP address is never handed out by your DHCP server. You may need to contact your network engineer.

    • Pick an IP address reachable from the cluster subnet which is excluded from DHCP range OR
    • Alter DHCP ranges to leave out one or more IP addresses at the top and/or the bottom of the range OR
    • Create an IP reservation for this IP on your DHCP server. This is usually accomplished by adding a dummy mapping of this IP address to a non-existent mac address.

Each VM will require:

  • 2 vCPUs
  • 8GB RAM
  • 25GB Disk

The administrative machine and the target workload environment will need network access (TCP/443) to:

CloudStack information needed before creating the cluster

You need at least the following information before creating the cluster. See CloudStack configuration for a complete list of options and Preparing CloudStack for instructions on creating the assets.

  • Static IP Addresses: You will need one IP address for the management cluster control plane endpoint, and a separate one for the control plane of each workload cluster you add.

Let’s say you are going to have the management cluster and two workload clusters. For those, you would need three IP addresses, one for each. All of those addresses will be configured the same way in the configuration file you will generate for each cluster.

A static IP address will be used for each control plane VM in your EKS Anywhere cluster. Choose IP addresses in your network range that do not conflict with other VMs and make sure they are excluded from your DHCP offering. An IP address will be the value of the property controlPlaneConfiguration.endpoint.host in the config file of the management cluster. A separate IP address must be assigned for each workload cluster.

  • CloudStack datacenter: You need the name of the CloudStack Datacenter plus the following for each Availability Zone (availabilityZones). Most items can be represented by name or ID:
    • Account (account): Account with permission to create a cluster (optional, admin by default).
    • Credentials (credentialsRef): Credentials provided in an ini file used to access the CloudStack API endpoint. See CloudStack Getting started for details.
    • Domain (domain): The CloudStack domain in which to deploy the cluster (optional, ROOT by default)
    • Management endpoint (managementApiEndpoint): Endpoint used by the CloudStack client to make API calls.
    • Zone network (zone.network): Either name or ID of the network.
  • CloudStack machine configuration: For each set of machines (for example, you could configure a separate set of machines for control plane, worker, and etcd nodes), obtain the following information. This must be predefined in the CloudStack instance and identified by name or ID:
    • Compute offering (computeOffering): Choose an existing compute offering (such as large-instance), reflecting the amount of resources to apply to each VM.
    • Operating system (template): Identifies the operating system image to use (such as rhel8-k8s-118).
    • Users (users.name): Identifies users and SSH keys needed to access the VMs.

9.2 - Preparing CloudStack for EKS Anywhere

Set up a CloudStack cluster to prepare it for EKS Anywhere

Before you can create an EKS Anywhere cluster in CloudStack, you must do some setup on your CloudStack environment. This document helps you get what you need to fulfill the prerequisites described in the Requirements and values you need for CloudStack configuration .

Set up a domain and user credentials

Either use the ROOT domain or create a new domain to deploy your EKS Anywhere cluster. One or more users are grouped under a domain. This example creates a user account for the domain with a Domain Administrator role. From the Apache CloudStack console:

  1. Select Domains.

  2. Select Add Domain.

  3. Fill in the Name for the domain (eksa in this example) and select OK.

  4. Select Accounts -> Add Account, then fill in the form to add a user with DomainAdmin role, as shown in the following figure:

    Add a user account with the DomainAdmin role

  5. To generate API credentials for the user, select Accounts -> View Users, then select the Generate Keys button.

  6. Select OK to confirm key generation. The API Key and Secret Key should appear as shown in the following figure:

    Generate API Key and Secret Key

  7. Copy the API Key and Secret Key to a credentials file to use when you generate your cluster. For example:

    [Global]
    api-url = http://10.0.0.2:8080/client/api
    api-key = OI7pm0xrPMYjLlMfqrEEj...
    secret-key = tPsgAECJwTHzbU4wMH...
    

Import template

You need to build at least one operating system image and import it as a template to use for your cluster nodes. Currently, only Red Hat Enterprise Linux 8 images are supported. To build a RHEL-based image to use with EKS Anywhere, see Build node images .

  1. Make your image accessible from your local machine or from a URL that is accessible to your CloudStack setup.

  2. Select Images -> Templates, then select either Register Template from URL or Select local Template. The following figure shows registering a template from a URL:

    Adding a RHEL-based EKS Anywhere image template

    This example imports a RHEL image (QCOW2), identifies the zone from which it will be available, uses KVM as the hypervisor, uses the osdefault Root disk controller, and identifies the OS Type as Red Hat Enterprise Linux 8.0. Select OK to save the template.

  3. Note the template name and zone so you can use it later when you deploy your cluster.

Create CloudStack configurations

Take a look at the following CloudStack configuration settings before creating your EKS Anywhere cluster. You will need to identify many of these assets when you create your cluster specification:

DatacenterConfig information

Here is how to get information to go into the CloudStackDatacenterConfig section of the CloudStack cluster configuration file:

  • Domain: Select Domains, then select your domain name from under the ROOT domain. Select View Users, then the user with the Domain Admin role, and consider setting limits to what each user can consume from the Resources and Configure Limits tabs.

  • Zones: Select Infrastructure -> Zones. Find a Zone where you can deploy your cluster or create a new one.

    Select from available Zones

  • Network: Select Network -> Guest networks. Choose a network to use for your cluster or create a new one.

Here is what some of that information would look like in a cluster configuration:

apiVersion: anywhere.eks.amazonaws.com/v1alpha1
kind: CloudStackDatacenterConfig
metadata:
  name: my-cluster-name-datacenter
spec:
  availabilityZones:
  - account: admin
    credentialsRef: global
    domain: eksa
    managementApiEndpoint: ""
    name: az-1
    zone:
      name: Zone2
      network:
        name: "SharedNet2"

MachineConfig information

Here is how to get information to go into CloudStackMachineConfig sections of the CloudStack cluster configuration file:

  • computeOffering: Select Service Offerings -> Compute Offerings to see a list of available combinations of CPU cores, CPU, and memory to apply to your node instances. See the following figure for an example:

    Choose or add a compute offering to set node resources

  • template: Select Images -> Templates to see available operating system image templates.

  • diskOffering: Select Storage -> Volumes, then select Create Volume, if you want to create disk storage to attach to your nodes (optional). You can use this to store logs or other data you want saved outside of the nodes. When you later create the cluster configuration, you can identify things like where you want the device mounted, the type of file system, labels and other information.

  • AffinityGroupIds: Select Compute -> Affinity Groups, then select Add new affinity group (optional). By creating an affinity group, you can tell all VMs from a set of instances to either all run on different physical hosts (anti-affinity) or just run anywhere they can (affinity).

Here is what some of that information would look like in a cluster configuration:

apiVersion: anywhere.eks.amazonaws.com/v1alpha1
kind: CloudStackMachineConfig
metadata:
  name: my-cluster-name-cp
spec:
  computeOffering:
    name: "Medium Instance"
  template:
    name: "rhel8-kube-1.28-eksa"
  diskOffering:
    name: "Small"
    mountPath: "/data-small"
    device: "/dev/vdb"
    filesystem: "ext4"
    label: "data_disk"
  symlinks:
    /var/log/kubernetes: /data-small/var/log/kubernetes
  affinityGroupIds:
  - control-plane-anti-affinity

9.3 - Create CloudStack cluster

Create a cluster on CloudStack

EKS Anywhere supports a CloudStack provider for EKS Anywhere deployments. This document walks you through setting up EKS Anywhere on CloudStack in a way that:

  • Deploys an initial cluster on your CloudStack environment. That cluster can be used as a standalone cluster (to run workloads) or a management cluster (to create and manage other clusters)
  • Deploys zero or more workload clusters from the management cluster

If your initial cluster is a management cluster, it is intended to stay in place so you can use it later to modify, upgrade, and delete workload clusters. Using a management cluster makes it faster to provision and delete workload clusters. Also it lets you keep CloudStack credentials for a set of clusters in one place: on the management cluster. The alternative is to simply use your initial cluster to run workloads. See Cluster topologies for details.

Note: Before you create your cluster, you have the option of validating the EKS Anywhere bundle manifest container images by following instructions in the Verify Cluster Images page.

Prerequisite Checklist

EKS Anywhere needs to:

Also, see the Ports and protocols page for information on ports that need to be accessible from control plane, worker, and Admin machines.

Steps

The following steps are divided into two sections:

  • Create an initial cluster (used as a management or standalone cluster)
  • Create zero or more workload clusters from the management cluster

Create an initial cluster

Follow these steps to create an EKS Anywhere cluster that can be used either as a management cluster or as a standalone cluster (for running workloads itself).

  1. Optional Configuration

    Set License Environment Variable

    Add a license to any cluster for which you want to receive paid support. If you are creating a licensed cluster, set and export the license variable (see License cluster if you are licensing an existing cluster):

    export EKSA_LICENSE='my-license-here'
    

    After you have created your eksa-mgmt-cluster.yaml and set your credential environment variables, you will be ready to create the cluster.

    Configure Curated Packages

    The Amazon EKS Anywhere Curated Packages are only available to customers with the Amazon EKS Anywhere Enterprise Subscription. To request a free trial, talk to your Amazon representative or connect with one here. Cluster creation will succeed if authentication is not set up, but some warnings may be generated. Detailed package configurations can be found here.

    If you are going to use packages, set up authentication. These credentials should have limited capabilities:

    export EKSA_AWS_ACCESS_KEY_ID="your*access*id"
    export EKSA_AWS_SECRET_ACCESS_KEY="your*secret*key"
    export EKSA_AWS_REGION="us-west-2"
    
  2. Generate an initial cluster config (named mgmt for this example):

    export CLUSTER_NAME=mgmt
    eksctl anywhere generate clusterconfig $CLUSTER_NAME \
       --provider cloudstack > eksa-mgmt-cluster.yaml
    
  3. Create credential file

    Create a credential file (for example, cloud-config) and add the credentials needed to access your CloudStack environment. The file should include:

    • api-key: Obtained from CloudStack
    • secret-key: Obtained from CloudStack
    • api-url: The URL to your CloudStack API endpoint

    For example:

    [Global]
    api-key     =  -Dk5uB0DE3aWng
    secret-key  =  -0DQLunsaJKxCEEHn44XxP80tv6v_RB0DiDtdgwJ
    api-url     =  http://172.16.0.1:8080/client/api
    
    

    You can have multiple credential entries. To match this example, you would enter global as the credentialsRef in the cluster config file for your CloudStack availability zone. You can configure multiple credentials for multiple availability zones.
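
    The following is a sketch of what a file with more than one credential entry could look like. All keys and URLs are placeholders, and the extra section name (az-2) is only an assumption for illustration; match your section names to the credentialsRef values in your CloudStackDatacenterConfig:

    [Global]
    api-key     = <api-key-for-az-1>
    secret-key  = <secret-key-for-az-1>
    api-url     = http://172.16.0.1:8080/client/api

    [az-2]
    api-key     = <api-key-for-az-2>
    secret-key  = <secret-key-for-az-2>
    api-url     = http://172.16.0.2:8080/client/api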

  4. Modify the initial cluster config (eksa-mgmt-cluster.yaml) as follows:

    • Refer to Cloudstack configuration for information on configuring this cluster config for a CloudStack provider.
    • Add Optional configuration settings as needed.
    • Create at least two control plane nodes, three worker nodes, and three etcd nodes, to provide high availability and rolling upgrades.
  5. Set Environment Variables

    Convert the credential file into base64 and set the following environment variable to that value:

    export EKSA_CLOUDSTACK_B64ENCODED_SECRET=$(base64 -i cloud-config)
    
  6. Create cluster

    For a regular cluster create (with internet access), type the following:

    eksctl anywhere create cluster \
       -f eksa-mgmt-cluster.yaml \
       # --install-packages packages.yaml \ # uncomment to install curated packages at cluster creation
    

    For an airgapped cluster create, follow Preparation for airgapped deployments instructions, then type the following:

    eksctl anywhere create cluster \
       -f eksa-mgmt-cluster.yaml \
       --bundles-override ./eks-anywhere-downloads/bundle-release.yaml \
       # --install-packages packages.yaml \ # uncomment to install curated packages at cluster creation
    
  7. Once the cluster is created you can use it with the generated KUBECONFIG file in your local directory:

    export KUBECONFIG=${PWD}/${CLUSTER_NAME}/${CLUSTER_NAME}-eks-a-cluster.kubeconfig
    
  8. Check the cluster nodes:

    To check that the cluster completed, list the machines to see the control plane, etcd, and worker nodes:

    kubectl get machines -A
    

    Example command output

    NAMESPACE   NAME                PROVIDERID           PHASE    VERSION
    eksa-system mgmt-b2xyz          cloudstack:/xxxxx    Running  v1.23.1-eks-1-21-5
    eksa-system mgmt-etcd-r9b42     cloudstack:/xxxxx    Running
    eksa-system mgmt-md-8-6xr-rnr   cloudstack:/xxxxx    Running  v1.23.1-eks-1-21-5
    ...
    

    The etcd machine doesn’t show the Kubernetes version because it doesn’t run the kubelet service.

  9. Check the initial cluster’s CRD:

    To ensure you are looking at the initial cluster, list the CRD to see that the name of its management cluster is itself:

    kubectl get clusters mgmt -o yaml
    

    Example command output

    ...
    kubernetesVersion: "1.28"
    managementCluster:
      name: mgmt
    workerNodeGroupConfigurations:
    ...
    

Create separate workload clusters

Follow these steps if you want to use your initial cluster to create and manage separate workload clusters.

  1. Set License Environment Variable (Optional)

    Add a license to any cluster for which you want to receive paid support. If you are creating a licensed cluster, set and export the license variable (see License cluster if you are licensing an existing cluster):

    export EKSA_LICENSE='my-license-here'
    
  2. Generate a workload cluster config:

    CLUSTER_NAME=w01
    eksctl anywhere generate clusterconfig $CLUSTER_NAME \
       --provider cloudstack > eksa-w01-cluster.yaml
    
  3. Modify the workload cluster config (eksa-w01-cluster.yaml) as follows. Refer to the initial config described earlier for the required and optional settings. In particular:

    • Ensure workload cluster object names (Cluster, CloudStackDatacenterConfig, CloudStackMachineConfig, etc.) are distinct from management cluster object names.
  4. Be sure to set the managementCluster field to identify the name of the management cluster.

    For example, the management cluster, mgmt is defined for our workload cluster w01 as follows:

    apiVersion: anywhere.eks.amazonaws.com/v1alpha1
    kind: Cluster
    metadata:
      name: w01
    spec:
      managementCluster:
        name: mgmt
    
  5. Create a workload cluster in one of the following ways:

    • GitOps: See Manage separate workload clusters with GitOps

    • Terraform: See Manage separate workload clusters with Terraform

      NOTE: spec.users[0].sshAuthorizedKeys must be specified to SSH into your nodes when provisioning a cluster through GitOps or Terraform, as the EKS Anywhere Cluster Controller will not generate the keys like eksctl CLI does when the field is empty.

    • eksctl CLI: To create a workload cluster with eksctl, run:

      eksctl anywhere create cluster \
          -f eksa-w01-cluster.yaml  \
          # --install-packages packages.yaml \ # uncomment to install curated packages at cluster creation
          --kubeconfig mgmt/mgmt-eks-a-cluster.kubeconfig
      

      As noted earlier, adding the --kubeconfig option tells eksctl to use the management cluster identified by that kubeconfig file to create a different workload cluster.

    • kubectl CLI: The cluster lifecycle feature lets you use kubectl, or other tools that can talk to the Kubernetes API, to create a workload cluster. To use kubectl, run:

      kubectl apply -f eksa-w01-cluster.yaml
      

      To check the state of a cluster managed with the cluster lifecycle feature, use kubectl to show the cluster object with its status.

      The status field on the cluster object holds information about the current state of the cluster.

      kubectl get clusters w01 -o yaml
      

      The cluster has been fully created once the status of the Ready condition is marked True. See the cluster status guide for more information.
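
      If you provision through GitOps or Terraform, a minimal sketch of the users field referenced in the note earlier in this list might look like the following (the object name and key value are placeholders; replace them with your own):

      apiVersion: anywhere.eks.amazonaws.com/v1alpha1
      kind: CloudStackMachineConfig
      metadata:
        name: w01-cp
      spec:
        users:
        - name: capc
          sshAuthorizedKeys:
          - ssh-rsa AAAA... # replace with your own public key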

  6. To check the workload cluster, get the workload cluster credentials and run a test workload:

    • If your workload cluster was created with eksctl, change your credentials to point to the new workload cluster (for example, w01), then run the test application with:

      export CLUSTER_NAME=w01
      export KUBECONFIG=${PWD}/${CLUSTER_NAME}/${CLUSTER_NAME}-eks-a-cluster.kubeconfig
      kubectl apply -f "https://anywhere.eks.amazonaws.com/manifests/hello-eks-a.yaml"
      
    • If your workload cluster was created with GitOps or Terraform, the kubeconfig for your new cluster is stored as a secret on the management cluster. You can get credentials and run the test application as follows:

      kubectl get secret -n eksa-system w01-kubeconfig -o jsonpath='{.data.value}' | base64 --decode > w01.kubeconfig
      export KUBECONFIG=w01.kubeconfig
      kubectl apply -f "https://anywhere.eks.amazonaws.com/manifests/hello-eks-a.yaml"
      
  7. Add more workload clusters:

    To add more workload clusters, go through the same steps for creating the initial workload, copying the config file to a new name (such as eksa-w02-cluster.yaml), modifying resource names, and running the create cluster command again.

Next steps

  • See the Cluster management section for more information on common operational tasks like scaling and deleting the cluster.

  • See the Package management section for more information on post-creation curated packages installation.

Optional configuration

Disable KubeVIP

The KubeVIP deployment used for load balancing Kube API Server requests can be disabled by setting an environment variable that will be interpreted by the eksctl anywhere create cluster command. Disabling the KubeVIP deployment is useful if you wish to use an external load balancer for load balancing Kube API Server requests. When disabling the KubeVIP load balancer you become responsible for hosting the Spec.ControlPlaneConfiguration.Endpoint.Host IP which must route requests to a Kube API Server instance of the cluster being provisioned.

export CLOUDSTACK_KUBE_VIP_DISABLED=true

9.4 - CloudStack configuration

Full EKS Anywhere configuration reference for a CloudStack cluster

This is a generic template with detailed descriptions below for reference. Additional optional configuration, described in the Optional Configuration section, can also be included in the cluster spec:

apiVersion: anywhere.eks.amazonaws.com/v1alpha1
kind: Cluster
metadata:
  name: my-cluster-name
spec:
  clusterNetwork:
    cniConfig:
      cilium: {}
    pods:
      cidrBlocks:
      - 192.168.0.0/16
    services:
      cidrBlocks:
      - 10.96.0.0/12
  controlPlaneConfiguration:
    count: 3
    endpoint:
      host: ""
    machineGroupRef:
      kind: CloudStackMachineConfig
      name: my-cluster-name-cp
    taints:
    - key: ""
      value: ""
      effect: ""
    labels:
      "<key1>": ""
      "<key2>": ""
  datacenterRef:
    kind: CloudStackDatacenterConfig
    name: my-cluster-name-datacenter
  externalEtcdConfiguration:
    count: 3
    machineGroupRef:
      kind: CloudStackMachineConfig
      name: my-cluster-name-etcd
  kubernetesVersion: "1.28"
  managementCluster:
    name: my-cluster-name
  workerNodeGroupConfigurations:
  - count: 2
    machineGroupRef:
      kind: CloudStackMachineConfig
      name: my-cluster-name
    taints:
    - key: ""
      value: ""
      effect: ""
    labels:
      "<key1>": ""
      "<key2>": ""
---
apiVersion: anywhere.eks.amazonaws.com/v1alpha1
kind: CloudStackDatacenterConfig
metadata:
  name: my-cluster-name-datacenter
spec:
  availabilityZones:
  - account: admin
    credentialsRef: global
    domain: domain1
    managementApiEndpoint: ""
    name: az-1
    zone:
      name: zone1
      network:
        name: "net1"
---
apiVersion: anywhere.eks.amazonaws.com/v1alpha1
kind: CloudStackMachineConfig
metadata:
  name: my-cluster-name-cp
spec:
  computeOffering:
    name: "m4-large"
  users:
  - name: capc
    sshAuthorizedKeys:
    - ssh-rsa AAAA...
  template:
    name: "rhel8-k8s-118"
  diskOffering:
    name: "Small"
    mountPath: "/data-small"
    device: "/dev/vdb"
    filesystem: "ext4"
    label: "data_disk"
  symlinks:
    /var/log/kubernetes: /data-small/var/log/kubernetes
  affinityGroupIds:
  - control-plane-anti-affinity
---
apiVersion: anywhere.eks.amazonaws.com/v1alpha1
kind: CloudStackMachineConfig
metadata:
  name: my-cluster-name
spec:
  computeOffering:
    name: "m4-large"
  users:
  - name: capc
    sshAuthorizedKeys:
    - ssh-rsa AAAA...
  template:
    name: "rhel8-k8s-118"
  diskOffering:
    name: "Small"
    mountPath: "/data-small"
    device: "/dev/vdb"
    filesystem: "ext4"
    label: "data_disk"
  symlinks:
    /var/log/pods: /data-small/var/log/pods
    /var/log/containers: /data-small/var/log/containers
  affinityGroupIds:
  - worker-affinity
  userCustomDetails:
    foo: bar
---
apiVersion: anywhere.eks.amazonaws.com/v1alpha1
kind: CloudStackMachineConfig
metadata:
  name: my-cluster-name-etcd
spec:
  computeOffering:
    name: "m4-large"
  users:
  - name: "capc"
    sshAuthorizedKeys:
    - "ssh-rsa AAAAB3N...
  template:
    name: "rhel8-k8s-118"
  diskOffering:
    name: "Small"
    mountPath: "/data-small"
    device: "/dev/vdb"
    filesystem: "ext4"
    label: "data_disk"
  symlinks:
    /var/lib: /data-small/var/lib
  affinityGroupIds:
  - etcd-affinity
---

Cluster Fields

name (required)

Name of your cluster my-cluster-name in this example

clusterNetwork (required)

Network configuration.

clusterNetwork.cniConfig (required)

CNI plugin configuration. Supports cilium.

clusterNetwork.cniConfig.cilium.policyEnforcementMode (optional)

Optionally specify a policyEnforcementMode of default, always or never.

clusterNetwork.cniConfig.cilium.egressMasqueradeInterfaces (optional)

Optionally specify a network interface name or interface prefix used for masquerading. See EgressMasqueradeInterfaces option.

clusterNetwork.cniConfig.cilium.skipUpgrade (optional)

When true, skip Cilium maintenance during upgrades. Also see Use a custom CNI.

clusterNetwork.cniConfig.cilium.routingMode (optional)

Optionally specify the routing mode. Accepts default and direct. Also see RoutingMode option.

clusterNetwork.cniConfig.cilium.ipv4NativeRoutingCIDR (optional)

Optionally specify the CIDR to use when RoutingMode is set to direct. When specified, Cilium assumes networking for this CIDR is preconfigured and hands traffic destined for that range to the Linux network stack without applying any SNAT.

clusterNetwork.cniConfig.cilium.ipv6NativeRoutingCIDR (optional)

Optionally specify the IPv6 CIDR to use when RoutingMode is set to direct. When specified, Cilium assumes networking for this CIDR is preconfigured and hands traffic destined for that range to the Linux network stack without applying any SNAT.
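
As a sketch only (the CIDR value is a placeholder), direct routing with a native routing CIDR might be configured as follows:

    clusterNetwork:
      cniConfig:
        cilium:
          routingMode: direct
          ipv4NativeRoutingCIDR: 10.0.0.0/8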

clusterNetwork.pods.cidrBlocks[0] (required)

The pod subnet specified in CIDR notation. Only 1 pod CIDR block is permitted. The CIDR block should not conflict with the host or service network ranges.

clusterNetwork.services.cidrBlocks[0] (required)

The service subnet specified in CIDR notation. Only 1 service CIDR block is permitted. This CIDR block should not conflict with the host or pod network ranges.

clusterNetwork.dns.resolvConf.path (optional)

File path to a file containing a custom DNS resolver configuration.
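
For example, a minimal sketch pointing the cluster at a custom resolver file (the path is a placeholder):

    clusterNetwork:
      dns:
        resolvConf:
          path: /etc/my-resolv.conf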

controlPlaneConfiguration (required)

Specific control plane configuration for your Kubernetes cluster.

controlPlaneConfiguration.count (required)

Number of control plane nodes

controlPlaneConfiguration.endpoint.host (required)

A unique IP you want to use for the control plane VM in your EKS Anywhere cluster. Choose an IP in your network range that does not conflict with other VMs.

NOTE: This IP should be outside the network DHCP range as it is a floating IP that gets assigned to one of the control plane nodes for kube-apiserver loadbalancing. Suggestions on how to ensure this IP does not cause issues during cluster creation process are here

controlPlaneConfiguration.machineGroupRef (required)

Refers to the Kubernetes object with CloudStack specific configuration for your nodes. See CloudStackMachineConfig Fields below.

controlPlaneConfiguration.taints (optional)

A list of taints to apply to the control plane nodes of the cluster.

Replaces the default control plane taint, node-role.kubernetes.io/master. The default control plane components will tolerate the provided taints.

Modifying the taints associated with the control plane configuration will cause new nodes to be rolled-out, replacing the existing nodes.

NOTE: The taints provided will be used instead of the default control plane taint node-role.kubernetes.io/master. Any pods that you run on the control plane nodes must tolerate the taints you provide in the control plane configuration.

controlPlaneConfiguration.labels (optional)

A list of labels to apply to the control plane nodes of the cluster. This is in addition to the labels that EKS Anywhere will add by default.

A special label value is supported by the CAPC provider:

    labels:
      cluster.x-k8s.io/failure-domain: ds.meta_data.failuredomain

The ds.meta_data.failuredomain value will be replaced with a failuredomain name where the node is deployed, such as az-1.

Modifying the labels associated with the control plane configuration will cause new nodes to be rolled out, replacing the existing nodes.

datacenterRef (required)

Refers to the Kubernetes object with CloudStack environment specific configuration. See CloudStackDatacenterConfig Fields below.

externalEtcdConfiguration.count (optional)

Number of etcd members

externalEtcdConfiguration.machineGroupRef (optional)

Refers to the Kubernetes object with CloudStack specific configuration for your etcd members. See CloudStackMachineConfig Fields below.

kubernetesVersion (required)

The Kubernetes version you want to use for your cluster. Supported values: 1.28, 1.27, 1.26, 1.25, 1.24

managementCluster (required)

Identifies the name of the management cluster. If this is a standalone cluster or if it were serving as the management cluster for other workload clusters, this will be the same as the cluster name.

workerNodeGroupConfigurations (required)

This takes in a list of node groups that you can define for your workers. You may define one or more worker node groups.

workerNodeGroupConfigurations[*].count (optional)

Number of worker nodes. (default: 1) It will be ignored if the cluster autoscaler curated package is installed and autoscalingConfiguration is used to specify the desired range of replicas.

Refer to the troubleshooting topic machine health check remediation not allowed and choose a count sufficient to allow machine health check remediation.

workerNodeGroupConfigurations[*].machineGroupRef (required)

Refers to the Kubernetes object with CloudStack specific configuration for your nodes. See CloudStackMachineConfig Fields below.

workerNodeGroupConfigurations[*].name (required)

Name of the worker node group (default: md-0)

workerNodeGroupConfigurations[*].autoscalingConfiguration.minCount (optional)

Minimum number of nodes for this node group’s autoscaling configuration.

workerNodeGroupConfigurations[*].autoscalingConfiguration.maxCount (optional)

Maximum number of nodes for this node group’s autoscaling configuration.
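
A minimal sketch of a worker node group with autoscaling bounds (the group name and bounds are illustrative); as noted above, count is ignored when the cluster autoscaler curated package is installed and autoscalingConfiguration is used:

    workerNodeGroupConfigurations:
    - name: md-0
      autoscalingConfiguration:
        minCount: 1
        maxCount: 5
      machineGroupRef:
        kind: CloudStackMachineConfig
        name: my-cluster-name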

workerNodeGroupConfigurations[*].taints (optional)

A list of taints to apply to the nodes in the worker node group.

Modifying the taints associated with a worker node group configuration will cause new nodes to be rolled-out, replacing the existing nodes associated with the configuration.

At least one node group must not have NoSchedule or NoExecute taints applied to it.
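
For example, a sketch with two worker node groups where only one is tainted, so at least one group remains schedulable (the group names, machine config names, and taint values are illustrative):

    workerNodeGroupConfigurations:
    - name: md-0        # untainted group
      count: 2
      machineGroupRef:
        kind: CloudStackMachineConfig
        name: my-cluster-name
    - name: md-gpu      # tainted group; only pods with a matching toleration schedule here
      count: 1
      machineGroupRef:
        kind: CloudStackMachineConfig
        name: my-cluster-name-gpu
      taints:
      - key: dedicated
        value: gpu
        effect: NoSchedule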

workerNodeGroupConfigurations[*].labels (optional)

A list of labels to apply to the nodes in the worker node group. This is in addition to the labels that EKS Anywhere will add by default. A special label value is supported by the CAPC provider:

    labels:
      cluster.x-k8s.io/failure-domain: ds.meta_data.failuredomain

The ds.meta_data.failuredomain value will be replaced with a failuredomain name where the node is deployed, such as az-1.

Modifying the labels associated with a worker node group configuration will cause new nodes to be rolled out, replacing the existing nodes associated with the configuration.

workerNodeGroupConfigurations[*].kubernetesVersion (optional)

The Kubernetes version you want to use for this worker node group. Supported values: 1.28, 1.27, 1.26, 1.25, 1.24

CloudStackDatacenterConfig

availabilityZones.account (optional)

Account used to access CloudStack. As long as you pass valid credentials through availabilityZones.credentialsRef, this value is not required.

availabilityZones.credentialsRef (required)

If you passed credentials through the environment variable EKSA_CLOUDSTACK_B64ENCODED_SECRET noted in Create CloudStack production cluster , you can identify those credentials here. For that example, you would use the profile name global. You can instead use a previously created secret on the Kubernetes cluster in the eksa-system namespace.

availabilityZones.domain (optional)

CloudStack domain to deploy the cluster. The default is ROOT.

availabilityZones.managementApiEndpoint (required)

Location of the CloudStack API management endpoint. For example, http://10.11.0.2:8080/client/api.

availabilityZones.zone.{id,name} (required)

Name or ID of the CloudStack zone on which to deploy the cluster.

availabilityZones.zone.network.{id,name} (required)

CloudStack network name or ID to use with the cluster.

CloudStackMachineConfig

In the example above, there are separate CloudStackMachineConfig sections for the control plane (my-cluster-name-cp), worker (my-cluster-name) and etcd (my-cluster-name-etcd) nodes.

computeOffering.{id,name} (required)

Name or ID of the CloudStack compute offering.

users[0].name (optional)

The name of the user you want to configure to access your virtual machines through ssh. You can add as many user objects as you want.

The default is capc.

users[0].sshAuthorizedKeys (optional)

The SSH public keys you want to configure to access your virtual machines through ssh (as described below). Only 1 is supported at this time.

users[0].sshAuthorizedKeys[0] (optional)

This is the SSH public key that will be placed in authorized_keys on all EKS Anywhere cluster VMs so you can ssh into them. The user will be what is defined under name above. For example:

ssh -i <private-key-file> <user>@<VM-IP>

If you do not specify a value, a key is generated in your $(pwd)/<cluster-name> folder by default.

template.{id,name} (required)

The VM template to use for your EKS Anywhere cluster. Currently, a VM based on RHEL 8.6 is required. This can be a name or ID. The template.name must contain the Cluster.Spec.KubernetesVersion or Cluster.Spec.WorkerNodeGroupConfiguration[].KubernetesVersion version (in case of modular upgrade). For example, if the Kubernetes version is 1.24, the template.name field name should include 1.24, 1_24, 1-24 or 124. See the Artifacts page for instructions for building RHEL-based images.
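
For example (the template name is illustrative), a template for a 1.28 cluster could be referenced as:

    template:
      name: "rhel8-kube-1-28"   # the name contains the Kubernetes version as "1-28"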

diskOffering (optional)

Name representing a disk you want to mount into nodes for this CloudStackMachineConfig.

diskOffering.mountPath (optional)

Mount point on which to mount the disk.

diskOffering.device (optional)

Device name of the disk partition to mount.

diskOffering.filesystem (optional)

File system type used to format the filesystem on the disk.

diskOffering.label (optional)

Label to apply to the disk partition.

symlinks (optional)

Symbolic link of a directory or file you want to mount from the host filesystem to the mounted filesystem.

userCustomDetails (optional)

Add key/value pairs to nodes in a CloudStackMachineConfig. These can be used for things like identifying sets of nodes that you want to add to a security group that opens selected ports.

affinityGroupIds (optional)

Group ID to attach to the set of host systems to indicate how affinity is done for services on those systems.

affinity (optional)

Allows you to set pro and anti affinity for the CloudStackMachineConfig. This can be used in a mutually exclusive fashion with the affinityGroupIds field.

10 - Create Nutanix cluster

Create an EKS Anywhere cluster on Nutanix Cloud Infrastructure with AHV

10.1 - Overview

Overview of EKS Anywhere cluster creation on Nutanix

Creating a Nutanix cluster

The following diagram illustrates the cluster creation process for the Nutanix provider.

Start creating a Nutanix cluster

Start creating EKS Anywhere cluster

1. Generate a config file for Nutanix

Identify the provider (--provider nutanix) and the cluster name in the eksctl anywhere generate clusterconfig command and direct the output to a cluster config .yaml file.

2. Modify the config file

Modify the generated cluster config file to suit your situation. Details about this config file can be found on the Nutanix Config page.

3. Launch the cluster creation

After modifying the cluster configuration file, run the eksctl anywhere cluster create command, providing the cluster config. The verbosity can be increased to see more details on the cluster creation process (-v=9 provides maximum verbosity).

4. Create bootstrap cluster

The cluster creation process starts with creating a temporary Kubernetes bootstrap cluster on the Administrative machine.

First, the cluster creation process runs a series of commands to validate the Nutanix environment:

  • Checks that the Nutanix environment is available.
  • Authenticates the Nutanix provider to the Nutanix environment using the supplied Prism Central endpoint information and credentials.

For each of the NutanixMachineConfig objects, the following validations are performed:

  • Validates the provided resource configuration (CPU, memory, storage)
  • Validates the Nutanix subnet
  • Validates the Nutanix Prism Element cluster
  • Validates the image
  • (Optional) Validates the Nutanix project

If all validations pass, you will see this message:

✅ Nutanix Provider setup is valid

During bootstrap cluster creation, the following messages will be shown:

Creating new bootstrap cluster
Provider specific pre-capi-install-setup on bootstrap cluster
Installing cluster-api providers on bootstrap cluster
Provider specific post-setup

Next, the Nutanix provider will create the machines in the Nutanix environment.

Continuing cluster creation

The following diagram illustrates the activities that occur next:

Continue creating EKS Anywhere cluster

1. CAPI management

Cluster API (CAPI) management will orchestrate the creation of the target cluster in the Nutanix environment.

Creating new workload cluster

2. Create the target cluster nodes

The control plane and worker nodes will be created and configured using the Nutanix provider.

3. Add Cilium networking

Add Cilium as the CNI plugin to use for networking between the cluster services and pods.

Installing networking on workload cluster

4. Moving cluster management to target cluster

CAPI components are installed on the target cluster. Next, cluster management is moved from the bootstrap cluster to the target cluster.

Creating EKS-A namespace
Installing cluster-api providers on workload cluster
Installing EKS-A secrets on workload cluster
Installing resources on management cluster
Moving cluster management from bootstrap to workload cluster
Installing EKS-A custom components (CRD and controller) on workload cluster
Installing EKS-D components on workload cluster
Creating EKS-A CRDs instances on workload cluster

5. Saving cluster configuration file

The cluster configuration file is saved.

Writing cluster config file

6. Delete bootstrap cluster

The bootstrap cluster is no longer needed and is deleted when the target cluster is up and running:

Delete EKS Anywhere bootstrap cluster

The target cluster can now be used as either:

  • A standalone cluster (to run workloads) or
  • A management cluster (to optionally create one or more workload clusters)

Creating workload clusters (optional)

The target cluster acts as a management cluster. One or more workload clusters can be managed by this management cluster as described in Create separate workload clusters :

  • Use eksctl to generate a cluster config file for the new workload cluster.
  • Modify the cluster config with a new cluster name and different Nutanix resources.
  • Use eksctl to create the new workload cluster from the new cluster config file.

10.2 - Requirements for EKS Anywhere on Nutanix Cloud Infrastructure

Preparing a Nutanix Cloud Infrastructure provider for EKS Anywhere

To run EKS Anywhere, you will need:

Prepare Administrative machine

Set up an Administrative machine as described in Install EKS Anywhere .

Prepare a Nutanix environment

To prepare a Nutanix environment to run EKS Anywhere, you need the following:

  • A Nutanix environment running AOS 5.20.4+ with AHV and Prism Central 2022.1+

  • Capacity to deploy 6-10 VMs

  • DHCP service or Nutanix IPAM running in your environment in the primary VM network for your workload cluster

  • Prepare DHCP IP addresses pool

  • A VM image imported into the Prism Image Service for the workload VMs

  • User credentials to create VMs and attach networks, etc

  • One IP address routable from the cluster but excluded from the DHCP/IPAM offering. This IP address is to be used as the Control Plane Endpoint IP

    Below are some suggestions to ensure that this IP address is never handed out by your DHCP server.

    You may need to contact your network engineer.

    • Pick an IP address reachable from the cluster subnet that is excluded from the DHCP range, OR
    • Alter the DHCP range to leave out one or more IP addresses at the top and/or bottom of the range, OR
    • Create an IP reservation for this IP on your DHCP server. This is usually accomplished by adding a dummy mapping of this IP address to a nonexistent MAC address.
    • Block an IP address from the Nutanix IPAM managed network using aCLI

Each VM will require:

  • 2 vCPUs
  • 4GB RAM
  • 40GB Disk

The administrative machine and the target workload environment will need network access (TCP/443) to:

  • Prism Central endpoint (must be accessible to EKS Anywhere clusters)
  • Prism Element Data Services IP and CVM endpoints (for CSI storage connections)
  • public.ecr.aws (for pulling EKS Anywhere container images)
  • anywhere-assets.eks.amazonaws.com (to download the EKS Anywhere binaries and manifests)
  • distro.eks.amazonaws.com (to download EKS Distro binaries and manifests)
  • d2glxqk2uabbnd.cloudfront.net (for EKS Anywhere and EKS Distro ECR container images)
  • api.ecr.us-west-2.amazonaws.com (for EKS Anywhere package authentication matching your region)
  • d5l0dvt14r5h8.cloudfront.net (for EKS Anywhere package ECR container images)
  • api.github.com (only if GitOps is enabled)

Nutanix information needed before creating the cluster

You need to get the following information before creating the cluster:

  • Static IP Addresses: You will need one IP address for the management cluster control plane endpoint, and a separate one for the control plane of each workload cluster you add.

    Let’s say you are going to have the management cluster and two workload clusters. For those, you would need three IP addresses, one for each. All of those addresses will be configured the same way in the configuration file you will generate for each cluster.

    A static IP address will be used for control plane API server HA in each of your EKS Anywhere clusters. Choose IP addresses in your network range that do not conflict with other VMs and make sure they are excluded from your DHCP offering.

    An IP address will be the value of the property controlPlaneConfiguration.endpoint.host in the config file of the management cluster. A separate IP address must be assigned for each workload cluster.


  • Prism Central FQDN or IP Address: The Prism Central fully qualified domain name or IP address.

  • Prism Element Cluster Name: The AOS cluster to deploy the EKS Anywhere cluster on.

  • VM Subnet Name: The VM network to deploy your EKS Anywhere cluster on.

  • Machine Template Image Name: The VM image to use for your EKS Anywhere cluster.

  • additionalTrustBundle (required if using a self-signed PC SSL certificate): The PEM encoded CA trust bundle of the root CA that issued the certificate for Prism Central.

10.3 - Preparing Nutanix Cloud Infrastructure for EKS Anywhere

Set up a Nutanix cluster to prepare it for EKS Anywhere

Certain resources must be in place with appropriate user permissions to create an EKS Anywhere cluster using the Nutanix provider.

Configuring Nutanix User

You need a Prism Admin user to create EKS Anywhere clusters on top of your Nutanix cluster.

Build Nutanix AHV node images

Follow the steps outlined in artifacts to create an Ubuntu-based image for Nutanix AHV and import it into the AOS Image Service.

10.4 - Create Nutanix cluster

Create an EKS Anywhere cluster on Nutanix Cloud Infrastructure with AHV

EKS Anywhere supports a Nutanix Cloud Infrastructure (NCI) provider for EKS Anywhere deployments. This document walks you through setting up EKS Anywhere on Nutanix Cloud Infrastructure with AHV in a way that:

  • Deploys an initial cluster in your Nutanix environment. That cluster can be used as a self-managed cluster (to run workloads) or a management cluster (to create and manage other clusters)
  • Deploys zero or more workload clusters from the management cluster

If your initial cluster is a management cluster, it is intended to stay in place so you can use it later to modify, upgrade, and delete workload clusters. Using a management cluster makes it faster to provision and delete workload clusters. It also lets you keep NCI credentials for a set of clusters in one place: on the management cluster. The alternative is to simply use your initial cluster to run workloads. See Cluster topologies for details.

Note: Before you create your cluster, you have the option of validating the EKS Anywhere bundle manifest container images by following instructions in the Verify Cluster Images page.

Prerequisite Checklist

EKS Anywhere needs the Administrative machine and the Nutanix environment prepared as described in the Requirements for EKS Anywhere on Nutanix Cloud Infrastructure section.

Also, see the Ports and protocols page for information on ports that need to be accessible from control plane, worker, and Admin machines.

Steps

The following steps are divided into two sections:

  • Create an initial cluster (used as a management or self-managed cluster)
  • Create zero or more workload clusters from the management cluster

Create an initial cluster

Follow these steps to create an EKS Anywhere cluster that can be used either as a management cluster or as a self-managed cluster (for running workloads itself).

  1. Optional Configuration

    Set License Environment Variable

    Add a license to any cluster for which you want to receive paid support. If you are creating a licensed cluster, set and export the license variable (see License cluster if you are licensing an existing cluster):

    export EKSA_LICENSE='my-license-here'
    

    After you have created your eksa-mgmt-cluster.yaml and set your credential environment variables, you will be ready to create the cluster.

    Configure Curated Packages

    The Amazon EKS Anywhere Curated Packages are only available to customers with the Amazon EKS Anywhere Enterprise Subscription. To request a free trial, talk to your Amazon representative or connect with one here . Cluster creation will succeed if authentication is not set up, but some warnings may be generated. Detailed package configurations can be found here .

    If you are going to use packages, set up authentication. These credentials should have limited capabilities :

    export EKSA_AWS_ACCESS_KEY_ID="your*access*id"
    export EKSA_AWS_SECRET_ACCESS_KEY="your*secret*key"
    export EKSA_AWS_REGION="us-west-2"  
    
  2. Generate an initial cluster config (named mgmt for this example):

    CLUSTER_NAME=mgmt
    eksctl anywhere generate clusterconfig $CLUSTER_NAME \
       --provider nutanix > eksa-mgmt-cluster.yaml
    
  3. Modify the initial cluster config (eksa-mgmt-cluster.yaml) as follows:

    • Refer to Nutanix configuration for information on configuring this cluster config for a Nutanix provider.
    • Add Optional configuration settings as needed.
    • Create at least three control plane nodes, three worker nodes, and three etcd nodes to provide high availability and support rolling upgrades, as in the sketch below.
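
    A minimal sketch of the relevant count fields (only the counts are shown; see the Nutanix configuration reference for the complete spec):

    controlPlaneConfiguration:
      count: 3
    externalEtcdConfiguration:
      count: 3
    workerNodeGroupConfigurations:
    - count: 3
      name: md-0
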
  4. Set Credential Environment Variables

    Before you create the initial cluster, you will need to set and export these environment variables for your Nutanix Prism Central user name and password. Make sure you use single quotes around the values so that your shell does not interpret the values:

    export EKSA_NUTANIX_USERNAME='billy'
    export EKSA_NUTANIX_PASSWORD='t0p$ecret'
    
  5. Create cluster

    For a regular cluster create (with internet access), type the following:

    eksctl anywhere create cluster \
       -f eksa-mgmt-cluster.yaml \
       # --install-packages packages.yaml \ # uncomment to install curated packages at cluster creation
    

    For an airgapped cluster create, follow Preparation for airgapped deployments instructions, then type the following:

    eksctl anywhere create cluster \
       -f eksa-mgmt-cluster.yaml \
       --bundles-override ./eks-anywhere-downloads/bundle-release.yaml \
       # --install-packages packages.yaml \ # uncomment to install curated packages at cluster creation
    
  6. Once the cluster is created, you can access it with the generated KUBECONFIG file in your local directory:

    export KUBECONFIG=${PWD}/${CLUSTER_NAME}/${CLUSTER_NAME}-eks-a-cluster.kubeconfig
    
  7. Check the cluster nodes:

    To check that the cluster is ready, list the machines to see the control plane, and worker nodes:

    kubectl get machines -n eksa-system
    

    Example command output

       NAME              CLUSTER  NODENAME                                 PROVIDERID       PHASE     AGE   VERSION
       mgmt-4gtt2        mgmt     mgmt-control-plane-1670343878900-2m4ln   nutanix://xxxx   Running   11m   v1.24.7-eks-1-24-4
       mgmt-d42xn        mgmt     mgmt-control-plane-1670343878900-jbfxt   nutanix://xxxx   Running   11m   v1.24.7-eks-1-24-4
       mgmt-md-0-9868m   mgmt     mgmt-md-0-1670343878901-lkmxw            nutanix://xxxx   Running   11m   v1.24.7-eks-1-24-4
       mgmt-md-0-njpk2   mgmt     mgmt-md-0-1670343878901-9clbz            nutanix://xxxx   Running   11m   v1.24.7-eks-1-24-4
       mgmt-md-0-p4gp2   mgmt     mgmt-md-0-1670343878901-mbktx            nutanix://xxxx   Running   11m   v1.24.7-eks-1-24-4
       mgmt-zkwrr        mgmt     mgmt-control-plane-1670343878900-jrdkk   nutanix://xxxx   Running   11m   v1.24.7-eks-1-24-4
    
  8. Check the initial cluster’s CRD:

    To ensure you are looking at the initial cluster, list the cluster CRD to see that the name of its management cluster is itself:

    kubectl get clusters mgmt -o yaml
    

    Example command output

    ...
    kubernetesVersion: "1.28"
    managementCluster:
      name: mgmt
    workerNodeGroupConfigurations:
    ...
    

Create separate workload clusters

Follow these steps if you want to use your initial cluster to create and manage separate workload clusters.

  1. Set License Environment Variable (Optional)

    Add a license to any cluster for which you want to receive paid support. If you are creating a licensed cluster, set and export the license variable (see License cluster if you are licensing an existing cluster):

    export EKSA_LICENSE='my-license-here'
    
  2. Generate a workload cluster config:

    CLUSTER_NAME=w01
    eksctl anywhere generate clusterconfig $CLUSTER_NAME \
       --provider nutanix > eksa-w01-cluster.yaml
    

    Refer to the initial config described earlier for the required and optional settings. Ensure workload cluster object names (Cluster, NutanixDatacenterConfig, NutanixMachineConfig, etc.) are distinct from management cluster object names.

  3. Be sure to set the managementCluster field to identify the name of the management cluster.

    For example, the management cluster, mgmt is defined for our workload cluster w01 as follows:

    apiVersion: anywhere.eks.amazonaws.com/v1alpha1
    kind: Cluster
    metadata:
      name: w01
    spec:
      managementCluster:
        name: mgmt
    
  4. Create a workload cluster

    To create a new workload cluster from your management cluster run this command, identifying:

    • The workload cluster YAML file
    • The initial cluster’s kubeconfig (this causes the workload cluster to be managed from the management cluster)
    eksctl anywhere create cluster \
       -f eksa-w01-cluster.yaml  \
       --kubeconfig mgmt/mgmt-eks-a-cluster.kubeconfig \
       # --install-packages packages.yaml \ # uncomment to install curated packages at cluster creation
    

    As noted earlier, adding the --kubeconfig option tells eksctl to use the management cluster identified by that kubeconfig file to create a different workload cluster.

  5. Check the workload cluster:

    You can now use the workload cluster as you would any Kubernetes cluster. Change your kubeconfig to point to the new workload cluster (for example, w01), then run the test application with:

    export CLUSTER_NAME=w01
    export KUBECONFIG=${PWD}/${CLUSTER_NAME}/${CLUSTER_NAME}-eks-a-cluster.kubeconfig
    kubectl apply -f "https://anywhere.eks.amazonaws.com/manifests/hello-eks-a.yaml"
    

    Verify the test application in the deploy test application section.

  6. Add more workload clusters:

    To add more workload clusters, go through the same steps for creating the initial workload, copying the config file to a new name (such as eksa-w02-cluster.yaml), modifying resource names, and running the create cluster command again.

Next steps:

  • See the Cluster management section for more information on common operational tasks like scaling and deleting the cluster.

  • See the Package management section for more information on post-creation curated packages installation.

10.5 - Configure for Nutanix

Full EKS Anywhere configuration reference for a Nutanix cluster

This is a generic template with detailed descriptions below for reference.

Additional optional configuration, described in the Optional Configuration section, can also be included in the cluster spec:

apiVersion: anywhere.eks.amazonaws.com/v1alpha1
kind: Cluster
metadata:
 name: mgmt
 namespace: default
spec:
 clusterNetwork:
   cniConfig:
     cilium: {}
   pods:
     cidrBlocks:
       - 192.168.0.0/16
   services:
     cidrBlocks:
       - 10.96.0.0/16
 controlPlaneConfiguration:
   count: 3
   endpoint:
     host: ""
   machineGroupRef:
     kind: NutanixMachineConfig
     name: mgmt-cp-machine
 datacenterRef:
   kind: NutanixDatacenterConfig
   name: nutanix-cluster
 externalEtcdConfiguration:
   count: 3
   machineGroupRef:
     kind: NutanixMachineConfig
     name: mgmt-etcd
 kubernetesVersion: "1.28"
 workerNodeGroupConfigurations:
   - count: 1
     machineGroupRef:
       kind: NutanixMachineConfig
       name: mgmt-machine
     name: md-0
---
apiVersion: anywhere.eks.amazonaws.com/v1alpha1
kind: NutanixDatacenterConfig
metadata:
 name: nutanix-cluster
 namespace: default
spec:
 endpoint: pc01.cloud.internal
 port: 9440
 credentialRef:
   kind: Secret
   name: nutanix-credentials
---
apiVersion: anywhere.eks.amazonaws.com/v1alpha1
kind: NutanixMachineConfig
metadata:
 annotations:
   anywhere.eks.amazonaws.com/control-plane: "true"
 name: mgmt-cp-machine
 namespace: default
spec:
 cluster:
   name: nx-cluster-01
   type: name
 image:
   name: eksa-ubuntu-2004-kube-v1.28
   type: name
 memorySize: 4Gi
 osFamily: ubuntu
 subnet:
   name: vm-network
   type: name
 systemDiskSize: 40Gi
 project:
   type: name
   name: my-project
 users:
   - name: eksa
     sshAuthorizedKeys:
       - ssh-rsa AAAA…
 vcpuSockets: 2
 vcpusPerSocket: 1
 additionalCategories:
   - key: my-category
     value: my-category-value
---
apiVersion: anywhere.eks.amazonaws.com/v1alpha1
kind: NutanixMachineConfig
metadata:
 name: mgmt-etcd
 namespace: default
spec:
 cluster:
   name: nx-cluster-01
   type: name
 image:
   name: eksa-ubuntu-2004-kube-v1.28
   type: name
 memorySize: 4Gi
 osFamily: ubuntu
 subnet:
   name: vm-network
   type: name
 systemDiskSize: 40Gi
 project:
   type: name
   name: my-project
 users:
   - name: eksa
     sshAuthorizedKeys:
       - ssh-rsa AAAA…
 vcpuSockets: 2
 vcpusPerSocket: 1
---
apiVersion: anywhere.eks.amazonaws.com/v1alpha1
kind: NutanixMachineConfig
metadata:
 name: mgmt-machine
 namespace: default
spec:
 cluster:
   name: nx-cluster-01
   type: name
 image:
   name: eksa-ubuntu-2004-kube-v1.28
   type: name
 memorySize: 4Gi
 osFamily: ubuntu
 subnet:
   name: vm-network
   type: name
 systemDiskSize: 40Gi
 project:
   type: name
   name: my-project
 users:
   - name: eksa
     sshAuthorizedKeys:
       - ssh-rsa AAAA…
 vcpuSockets: 2
 vcpusPerSocket: 1
---

Cluster Fields

name (required)

Name of your cluster mgmt in this example.

clusterNetwork (required)

Network configuration.

clusterNetwork.cniConfig (required)

CNI plugin configuration. Supports cilium.

clusterNetwork.cniConfig.cilium.policyEnforcementMode (optional)

Optionally specify a policyEnforcementMode of default, always or never.

clusterNetwork.cniConfig.cilium.egressMasqueradeInterfaces (optional)

Optionally specify a network interface name or interface prefix used for masquerading. See EgressMasqueradeInterfaces option.

clusterNetwork.cniConfig.cilium.skipUpgrade (optional)

When true, skip Cilium maintenance during upgrades. Also see Use a custom CNI.

clusterNetwork.cniConfig.cilium.routingMode (optional)

Optionally specify the routing mode. Accepts default and direct. Also see RoutingMode option.

clusterNetwork.cniConfig.cilium.ipv4NativeRoutingCIDR (optional)

Optionally specify the CIDR to use when RoutingMode is set to direct. When specified, Cilium assumes networking for this CIDR is preconfigured and hands traffic destined for that range to the Linux network stack without applying any SNAT.

clusterNetwork.cniConfig.cilium.ipv6NativeRoutingCIDR (optional)

Optionally specify the IPv6 CIDR to use when RoutingMode is set to direct. When specified, Cilium assumes networking for this CIDR is preconfigured and hands traffic destined for that range to the Linux network stack without applying any SNAT.

clusterNetwork.pods.cidrBlocks[0] (required)

The pod subnet specified in CIDR notation. Only 1 pod CIDR block is permitted. The CIDR block should not conflict with the host or service network ranges.

clusterNetwork.services.cidrBlocks[0] (required)

The service subnet specified in CIDR notation. Only 1 service CIDR block is permitted. This CIDR block should not conflict with the host or pod network ranges.

clusterNetwork.dns.resolvConf.path (optional)

File path to a file containing a custom DNS resolver configuration.

controlPlaneConfiguration (required)

Specific control plane configuration for your Kubernetes cluster.

controlPlaneConfiguration.count (required)

Number of control plane nodes

controlPlaneConfiguration.machineGroupRef (required)

Refers to the Kubernetes object with Nutanix specific configuration for your nodes. See NutanixMachineConfig fields below.

controlPlaneConfiguration.endpoint.host (required)

A unique IP you want to use for the control plane VM in your EKS Anywhere cluster. Choose an IP in your network range that does not conflict with other VMs.

NOTE: This IP should be outside the network DHCP range as it is a floating IP that gets assigned to one of the control plane nodes for kube-apiserver loadbalancing. Suggestions on how to ensure this IP does not cause issues during cluster creation process are here .

workerNodeGroupConfigurations (required)

This takes in a list of node groups that you can define for your workers. You may define one or more worker node groups.

workerNodeGroupConfigurations[*].count (optional)

Number of worker nodes. (default: 1) It will be ignored if the cluster autoscaler curated package is installed and autoscalingConfiguration is used to specify the desired range of replicas.

Refer to the troubleshooting topic machine health check remediation not allowed and choose a count sufficient to allow machine health check remediation.

workerNodeGroupConfigurations[*].machineGroupRef (required)

Refers to the Kubernetes object with Nutanix specific configuration for your nodes. See NutanixMachineConfig fields below.

workerNodeGroupConfigurations[*].name (required)

Name of the worker node group (default: md-0)

workerNodeGroupConfigurations[*].autoscalingConfiguration.minCount (optional)

Minimum number of nodes for this node group’s autoscaling configuration.

workerNodeGroupConfigurations[*].autoscalingConfiguration.maxCount (optional)

Maximum number of nodes for this node group’s autoscaling configuration.

workerNodeGroupConfigurations[*].kubernetesVersion (optional)

The Kubernetes version you want to use for this worker node group. Supported values: 1.28, 1.27, 1.26, 1.25, 1.24
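
As a sketch (the versions and names are illustrative), a worker node group can run an older Kubernetes version than the cluster-level kubernetesVersion, subject to Kubernetes version skew rules:

    kubernetesVersion: "1.28"
    workerNodeGroupConfigurations:
    - name: md-0
      kubernetesVersion: "1.27"   # this node group stays on 1.27 while the control plane runs 1.28
      count: 1
      machineGroupRef:
        kind: NutanixMachineConfig
        name: mgmt-machine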

externalEtcdConfiguration.count (optional)

Number of etcd members

externalEtcdConfiguration.machineGroupRef (optional)

Refers to the Kubernetes object with Nutanix specific configuration for your etcd members. See NutanixMachineConfig fields below.

datacenterRef (required)

Refers to the Kubernetes object with Nutanix environment specific configuration. See NutanixDatacenterConfig fields below.

kubernetesVersion (required)

The Kubernetes version you want to use for your cluster. Supported values: 1.28, 1.27, 1.26, 1.25, 1.24

NutanixDatacenterConfig Fields

endpoint (required)

The Prism Central server fully qualified domain name or IP address. If the server IP is used, the PC SSL certificate must have an IP SAN configured.

port (required)

The Prism Central server port. (Default: 9440)

credentialRef (required)

Reference to the Kubernetes secret that contains the Prism Central credentials.

insecure (optional)

Set insecure to true if the Prism Central server does not have a valid certificate. This is not recommended for production use cases. (Default: false)

additionalTrustBundle (optional; required if using a self-signed PC SSL certificate)

The PEM encoded CA trust bundle.

The additionalTrustBundle needs to be populated with the PEM-encoded x509 certificate of the Root CA that issued the certificate for Prism Central. Suggestions on how to obtain this certificate are here .

Example:

 additionalTrustBundle: |
    -----BEGIN CERTIFICATE-----
    <certificate string>
    -----END CERTIFICATE-----
    -----BEGIN CERTIFICATE-----
    <certificate string>
    -----END CERTIFICATE-----    

NutanixMachineConfig Fields

cluster (required)

Reference to the Prism Element cluster.

cluster.type (required)

Type to identify the Prism Element cluster. (Permitted values: name or uuid)

cluster.name (name or UUID required)

Name of the Prism Element cluster.

cluster.uuid (name or UUID required)

UUID of the Prism Element cluster.

image (required)

Reference to the OS image used for the system disk.

image.type (required)

Type to identify the OS image. (Permitted values: name or uuid)

image.name (name or UUID required)

Name of the image. The image.name must contain the Cluster.Spec.KubernetesVersion or Cluster.Spec.WorkerNodeGroupConfiguration[].KubernetesVersion version (in case of modular upgrade). For example, if the Kubernetes version is 1.24, image.name must include 1.24, 1_24, 1-24 or 124.

image.uuid (name or UUID required)

UUID of the image. The name of the image associated with the uuid must contain the Cluster.Spec.KubernetesVersion or Cluster.Spec.WorkerNodeGroupConfiguration[].KubernetesVersion version (in case of modular upgrade). For example, if the Kubernetes version is 1.24, the name associated with the image.uuid field must include 1.24, 1_24, 1-24 or 124.

memorySize (optional)

Size of RAM on virtual machines (Default: 4Gi)

osFamily (optional)

Operating System on virtual machines. Permitted values: ubuntu and redhat. (Default: ubuntu)

subnet (required)

Reference to the subnet to be assigned to the VMs.

subnet.name (name or UUID required)

Name of the subnet.

subnet.type (required)

Type to identify the subnet. (Permitted values: name or uuid)

subnet.uuid (name or UUID required)

UUID of the subnet.

systemDiskSize (optional)

Amount of storage assigned to the system disk. (Default: 40Gi)

vcpuSockets (optional)

Amount of vCPU sockets. (Default: 2)

vcpusPerSocket (optional)

Amount of vCPUs per socket. (Default: 1)

project (optional)

Reference to an existing project used for the virtual machines.

project.type (required)

Type to identify the project. (Permitted values: name or uuid)

project.name (name or UUID required)

Name of the project

project.uuid (name or UUID required)

UUID of the project

additionalCategories (optional)

Reference to a list of existing Nutanix Categories to be assigned to virtual machines.

additionalCategories[0].key

Nutanix Category to add to the virtual machine.

additionalCategories[0].value

Value of the Nutanix Category to add to the virtual machine

users (optional)

The users you want to configure to access your virtual machines. Only one is permitted at this time.

users[0].name (optional)

The name of the user you want to configure to access your virtual machines through ssh.

The default is eksa if osFamily=ubuntu

users[0].sshAuthorizedKeys (optional)

The SSH public keys you want to configure to access your virtual machines through ssh (as described below). Only 1 is supported at this time.

users[0].sshAuthorizedKeys[0] (optional)

This is the SSH public key that will be placed in authorized_keys on all EKS Anywhere cluster VMs so you can ssh into them. The user will be what is defined under name above. For example:

ssh -i <private-key-file> <user>@<VM-IP>

If you do not specify a value, a key is generated in your $(pwd)/<cluster-name> folder by default.


11 - Create Docker Cluster (dev only)

Create an EKS Anywhere cluster with Docker on your local machine, laptop, or cloud instance

EKS Anywhere docker provider deployments

EKS Anywhere supports a Docker provider for development and testing use cases only. This allows you to try EKS Anywhere on your local machine or laptop before deploying to other infrastructure such as vSphere or bare metal.

Prerequisites

System and network requirements

  • Mac OS 10.15+ / Ubuntu 20.04.2 LTS or 22.04 LTS / RHEL or Rocky Linux 8.8+
  • 4 CPU cores
  • 16GB memory
  • 30GB free disk space
  • If you are running in an airgapped environment, the Admin machine must be amd64.

Here are a few other things to keep in mind:

  • If you are using Ubuntu, use the Docker CE installation instructions to install Docker and not the Snap installation, as described here.

  • If you are using EKS Anywhere v0.15 or earlier and Ubuntu 21.10 or 22.04, you will need to switch from cgroups v2 to cgroups v1. For details, see Troubleshooting Guide.

Tools

Install EKS Anywhere CLI tools

To get started with EKS Anywhere, you must first install the eksctl CLI and the eksctl anywhere plugin. This is the primary interface for EKS Anywhere and what you will use to create a local Docker cluster. The EKS Anywhere plugin requires eksctl version 0.66.0 or newer.

Homebrew

Note: if you already have eksctl installed, you can install the eksctl anywhere plugin manually following the instructions in the next section. This package also installs kubectl and aws-iam-authenticator.

brew install aws/tap/eks-anywhere

Manual

Install eksctl

curl "https://github.com/weaveworks/eksctl/releases/latest/download/eksctl_$(uname -s)_amd64.tar.gz" \
    --silent --location \
    | tar xz -C /tmp
sudo install -m 0755 /tmp/eksctl /usr/local/bin/eksctl

Install the eksctl-anywhere plugin

RELEASE_VERSION=$(curl https://anywhere-assets.eks.amazonaws.com/releases/eks-a/manifest.yaml --silent --location | yq ".spec.latestVersion")
EKS_ANYWHERE_TARBALL_URL=$(curl https://anywhere-assets.eks.amazonaws.com/releases/eks-a/manifest.yaml --silent --location | yq ".spec.releases[] | select(.version==\"$RELEASE_VERSION\").eksABinary.$(uname -s | tr A-Z a-z).uri")
curl $EKS_ANYWHERE_TARBALL_URL \
    --silent --location \
    | tar xz ./eksctl-anywhere
sudo install -m 0755 ./eksctl-anywhere /usr/local/bin/eksctl-anywhere

Install kubectl. See the Kubernetes documentation for more information.

export OS="$(uname -s | tr A-Z a-z)" ARCH=$(test "$(uname -m)" = 'x86_64' && echo 'amd64' || echo 'arm64')
curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/${OS}/${ARCH}/kubectl"
sudo install -m 0755 ./kubectl /usr/local/bin/kubectl

Create a local Docker cluster

  1. Generate a cluster config. The cluster config will contain the settings for your local Docker cluster. The eksctl anywhere generate command populates a cluster config with EKS Anywhere defaults and best practices.

    CLUSTER_NAME=mgmt
    eksctl anywhere generate clusterconfig $CLUSTER_NAME \
       --provider docker > $CLUSTER_NAME.yaml
    

    The command above creates a file named mgmt.yaml (from $CLUSTER_NAME.yaml) with the contents below in the path where it is executed. The configuration specification is divided into two sections: Cluster and DockerDatacenterConfig. These are the minimum configuration settings you must provide to create a Docker cluster. You can optionally configure OIDC, etcd, proxy, and GitOps as described here.

    apiVersion: anywhere.eks.amazonaws.com/v1alpha1
    kind: Cluster
    metadata:
       name: mgmt
    spec:
       clusterNetwork:
          cniConfig:
             cilium: {}
          pods:
             cidrBlocks:
                - 192.168.0.0/16
          services:
             cidrBlocks:
                - 10.96.0.0/12
       controlPlaneConfiguration:
          count: 1
       datacenterRef:
          kind: DockerDatacenterConfig
          name: mgmt
       externalEtcdConfiguration:
          count: 1
       kubernetesVersion: "1.28"
       managementCluster:
          name: mgmt
       workerNodeGroupConfigurations:
          - count: 1
            name: md-0
    ---
    apiVersion: anywhere.eks.amazonaws.com/v1alpha1
    kind: DockerDatacenterConfig
    metadata:
       name: mgmt
    spec: {}
    
    
  2. Create Docker Cluster. Note the following command may take several minutes to complete. You can run the command with -v 6 to increase logging verbosity to see the progress of the command.

    For a regular cluster create (with internet access), type the following:

    eksctl anywhere create cluster -f $CLUSTER_NAME.yaml
    

    For an airgapped cluster create, follow Preparation for airgapped deployments instructions, then type the following:

    eksctl anywhere create cluster -f $CLUSTER_NAME.yaml --bundles-override ./eks-anywhere-downloads/bundle-release.yaml
    

    Expand for sample output:

    Performing setup and validations
    ✅ validation succeeded {"validation": "docker Provider setup is valid"}
    Creating new bootstrap cluster
    Installing cluster-api providers on bootstrap cluster
    Provider specific setup
    Creating new workload cluster
    Installing networking on workload cluster
    Installing cluster-api providers on workload cluster
    Moving cluster management from bootstrap to workload cluster
    Installing EKS-A custom components (CRD and controller) on workload cluster
    Creating EKS-A CRDs instances on workload cluster
    Installing GitOps Toolkit on workload cluster
    GitOps field not specified, bootstrap flux skipped
    Deleting bootstrap cluster
    🎉 Cluster created!
    ----------------------------------------------------------------------------------
    The Amazon EKS Anywhere Curated Packages are only available to customers with the
    Amazon EKS Anywhere Enterprise Subscription
    ----------------------------------------------------------------------------------
    ...
    

    NOTE: to install curated packages during cluster creation, use --install-packages packages.yaml flag

  3. Access Docker cluster

    Once the cluster is created you can use it with the generated kubeconfig in the local directory. If you used the same naming conventions as the example above, you will find a mgmt/mgmt-eks-a-cluster.kubeconfig in the directory where you ran the commands.

    export KUBECONFIG=${PWD}/${CLUSTER_NAME}/${CLUSTER_NAME}-eks-a-cluster.kubeconfig
    kubectl get ns
    

    Example command output

    NAME                                STATUS   AGE
    capd-system                         Active   21m
    capi-kubeadm-bootstrap-system       Active   21m
    capi-kubeadm-control-plane-system   Active   21m
    capi-system                         Active   21m
    capi-webhook-system                 Active   21m
    cert-manager                        Active   22m
    default                             Active   23m
    eksa-packages                       Active   23m
    eksa-system                         Active   20m
    kube-node-lease                     Active   23m
    kube-public                         Active   23m
    kube-system                         Active   23m
    

    You can now use the cluster like you would any Kubernetes cluster.

  4. The following command will deploy a test application:

    kubectl apply -f "https://anywhere.eks.amazonaws.com/manifests/hello-eks-a.yaml"
    

    To interact with the deployed application, review the steps in the Deploy test workload page .

Next steps:

  • See the Cluster management section for more information on common operational tasks like scaling and deleting the cluster.

  • See the Package management section for more information on post-creation curated packages installation.

12 - Optional Configuration

Optional Config references for EKS Anywhere clusters such as etcd, OS, CNI, IRSA, proxy, and registry mirror

The configuration pages below describe optional features that you can add to your EKS Anywhere provider’s clusterspec file. See each provider’s installation section for details on which optional features are supported.

12.1 - etcd

EKS Anywhere cluster yaml etcd specification reference

Provider support details

                 vSphere    Bare Metal    Nutanix    CloudStack    Snow
    Supported?      ✓           —             ✓           ✓          ✓

There are two types of etcd topologies for configuring a Kubernetes cluster:

  • Stacked: The etcd members and control plane components are colocated (run on the same node/machines)
  • Unstacked/External: With the unstacked or external etcd topology, etcd members have dedicated machines and are not colocated with control plane components

The unstacked etcd topology is recommended for an HA cluster for the following reasons:

  • External etcd topology decouples the control plane components and etcd member. For example, if a control plane-only node fails, or if there is a memory leak in a component like kube-apiserver, it won’t directly impact an etcd member.
  • Etcd is resource intensive, so it is safer to have dedicated nodes for etcd, since it could use more disk space or higher bandwidth. Having a separate etcd cluster for these reasons could ensure a more resilient HA setup.

EKS Anywhere supports both topologies. In order to configure a cluster with the unstacked/external etcd topology, you need to configure your cluster by updating the configuration file before creating the cluster. This is a generic template with detailed descriptions below for reference:

apiVersion: anywhere.eks.amazonaws.com/v1alpha1
kind: Cluster
metadata:
   name: my-cluster-name
spec:
   clusterNetwork:
      pods:
         cidrBlocks:
            - 192.168.0.0/16
      services:
         cidrBlocks:
            - 10.96.0.0/12
      cniConfig:
         cilium: {}
   controlPlaneConfiguration:
      count: 1
      endpoint:
         host: ""
      machineGroupRef:
         kind: VSphereMachineConfig
         name: my-cluster-name-cp
   datacenterRef:
      kind: VSphereDatacenterConfig
      name: my-cluster-name
   # etcd configuration
   externalEtcdConfiguration:
      count: 3
      machineGroupRef:
        kind: VSphereMachineConfig
        name: my-cluster-name-etcd
   kubernetesVersion: "1.27"
   workerNodeGroupConfigurations:
      - count: 1
        machineGroupRef:
           kind: VSphereMachineConfig
           name: my-cluster-name
        name: md-0

externalEtcdConfiguration (under Cluster)

External etcd configuration for your Kubernetes cluster.

count (required)

This determines the number of etcd members in the cluster. The recommended number is 3.

machineGroupRef (required)

Refers to the Kubernetes object with provider specific configuration for your nodes.
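
For example, a minimal sketch (fields elided) of the machine config object that the etcd machineGroupRef above points to; its metadata.name must match machineGroupRef.name:

apiVersion: anywhere.eks.amazonaws.com/v1alpha1
kind: VSphereMachineConfig
metadata:
   name: my-cluster-name-etcd
spec:
   ...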

12.2 - Encrypting Confidential Data at Rest

EKS Anywhere cluster specification for encryption of etcd data at-rest

You can configure EKS Anywhere clusters to encrypt confidential API resource data, such as secrets, at-rest in etcd using a KMS encryption provider. EKS Anywhere supports a hybrid model for configuring etcd encryption where cluster admins are responsible for deploying and maintaining the KMS provider on the cluster and EKS Anywhere handles configuring kube-apiserver with the KMS properties.

Because of this model, etcd encryption can only be enabled on cluster upgrades after the KMS provider has been deployed on the cluster.

Before you begin

Before enabling etcd encryption, make sure you have done the following:

  • Deployed a KMS provider plugin on the cluster that kube-apiserver can reach over a UNIX domain socket, for example the AWS Encryption Provider DaemonSet shown later on this page.
  • Created the key (for example, an AWS KMS key) that the KMS provider plugin will use to encrypt and decrypt the data encryption keys.

Example etcd encryption configuration

The following cluster spec enables etcd encryption configuration:

apiVersion: anywhere.eks.amazonaws.com/v1alpha1
kind: Cluster
metadata:
  name: my-cluster
  namespace: default
spec:
  ...
  etcdEncryption:
  - providers:
    - kms:
        cachesize: 1000
        name: example-kms-config
        socketListenAddress: unix:///var/run/kmsplugin/socket.sock
        timeout: 3s
    resources:
    - secrets

Description of etcd encryption fields

etcdEncryption

Key used to specify etcd encryption configuration for a cluster. This field is only supported on cluster upgrades.

  • providers

    Key used to specify which encryption provider to use. Currently, only one provider can be configured.

    • kms

      Key used to configure KMS encryption provider.

      • name

        Key used to set the name of the KMS plugin. This cannot be changed once set.

      • socketListenAddress

        Key used to specify the listen address of the gRPC server (KMS plugin). The endpoint is a UNIX domain socket.

      • cachesize

        Number of data encryption keys (DEKs) to be cached in the clear. When cached, DEKs can be used without another call to the KMS; whereas DEKs that are not cached require a call to the KMS to unwrap. If cachesize isn’t specified, a default of 1000 is used.

      • timeout

        How long kube-apiserver should wait for the kms-plugin to respond before returning an error. If a timeout isn’t specified, a default timeout of 3s is used.

  • resources

    Key used to specify a list of resources that should be encrypted using the corresponding encryption provider. These can be native Kubernetes resources such as secrets and configmaps or custom resource definitions such as clusters.anywhere.eks.amazonaws.com.

Example AWS Encryption Provider DaemonSet

Here’s a sample AWS encryption provider daemonset configuration.

apiVersion: apps/v1
kind: DaemonSet
metadata:
  labels:
    app: aws-encryption-provider
  name: aws-encryption-provider
  namespace: kube-system
spec:
  selector:
    matchLabels:
      app: aws-encryption-provider
  template:
    metadata:
      labels:
        app: aws-encryption-provider
    spec:
      containers:
      - image: <AWS_ENCRYPTION_PROVIDER_IMAGE>    # Specify the AWS KMS encryption provider image 
        name: aws-encryption-provider
        resources:
          requests:
            memory: "64Mi"
            cpu: "250m"
          limits:
            memory: "128Mi"
            cpu: "500m"
        command:
        - /aws-encryption-provider
        - --key=<KEY_ARN>                         # Specify the arn of KMS key to be used for encryption/decryption
        - --region=<AWS_REGION>                   # Specify the region in which the KMS key exists
        - --listen=<KMS_SOCKET_LISTEN_ADDRESS>    # Specify a socket listen address for the KMS provider. Example: /var/run/kmsplugin/socket.sock
        ports:
        - containerPort: 8080
          protocol: TCP
        livenessProbe:
          httpGet:
            path: /healthz
            port: 8080
        volumeMounts:
          - mountPath: /var/run/kmsplugin
            name: var-run-kmsplugin
          - mountPath: /root/.aws
            name: aws-credentials
      tolerations:
      - key: "node-role.kubernetes.io/master"
        effect: "NoSchedule"
      - key: "node-role.kubernetes.io/control-plane"
        effect: "NoSchedule"
      volumes:
      - hostPath:
          path: /var/run/kmsplugin
          type: DirectoryOrCreate
        name: var-run-kmsplugin
      - hostPath:
          path: /etc/kubernetes/aws
          type: DirectoryOrCreate
        name: aws-credentials
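
After the provider is running and the cluster has been upgraded with etcdEncryption enabled, one way to verify encryption at rest (a sketch; assumes etcdctl is available on an etcd node and uses typical kubeadm-style certificate paths, which may differ in your environment) is to create a secret and read its raw value from etcd:

kubectl create secret generic test-secret -n default --from-literal=key=supersecret

# On an etcd node:
ETCDCTL_API=3 etcdctl \
  --cacert=/etc/kubernetes/pki/etcd/ca.crt \
  --cert=/etc/kubernetes/pki/etcd/server.crt \
  --key=/etc/kubernetes/pki/etcd/server.key \
  get /registry/secrets/default/test-secret | hexdump -C | head

The stored value should show a k8s:enc:kms: prefix rather than the plain-text secret.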

12.3 - Operating system

EKS Anywhere cluster yaml specification for host OS configuration

Host OS Configuration

You can configure certain host OS settings through EKS Anywhere.

Provider support details

vSphere Bare Metal Nutanix CloudStack Snow
Supported?

The following cluster spec shows an example of how to configure host OS settings:

apiVersion: anywhere.eks.amazonaws.com/v1alpha1
kind: VSphereMachineConfig        # Replace "VSphereMachineConfig" with "TinkerbellMachineConfig" for Tinkerbell clusters
metadata:
  name: machine-config
spec:
  ...
  hostOSConfiguration:
    ntpConfiguration:
      servers:
        - time-a.ntp.local
        - time-b.ntp.local
    certBundles:
    - name: "bundle_1"
      data: |
        -----BEGIN CERTIFICATE-----
        MIIF1DCCA...
        ...
        es6RXmsCj...
        -----END CERTIFICATE-----

        -----BEGIN CERTIFICATE-----
        ...
        -----END CERTIFICATE-----        
    bottlerocketConfiguration:
      kubernetes:
        allowedUnsafeSysctls:
          - "net.core.somaxconn"
          - "net.ipv4.ip_local_port_range"
        clusterDNSIPs:
          - 10.96.0.10
        maxPods: 100
      kernel:
        sysctlSettings:
          net.core.wmem_max: "8388608"
          net.core.rmem_max: "8388608"
          ...
      boot:
        bootKernelParameters:
          slub_debug:
          - "options,slabs"
          ...

Host OS Configuration Spec Details

hostOSConfiguration

Top level object used for host OS configurations.

  • ntpConfiguration

    Key used for configuring NTP servers on your EKS Anywhere cluster nodes.

    • servers
      Servers is a list of NTP servers that should be configured on EKS Anywhere cluster nodes.
  • certBundles

    Key used for configuring custom trusted CA certs on your EKS Anywhere cluster nodes. Multiple cert bundles can be configured.

    • name

    Name of the cert bundle that should be configured on EKS Anywhere cluster nodes. This must be a unique name for each entry.

    • data

    Data of the cert bundle that should be configured on EKS Anywhere cluster nodes. This takes in a PEM formatted cert bundle and can contain more than one CA cert per entry.


  • bottlerocketConfiguration

    Key used for configuring Bottlerocket-specific settings on EKS Anywhere cluster nodes. These settings are only valid for Bottlerocket.

    • kubernetes - DEPRECATED

      Key used for configuring Bottlerocket Kubernetes settings.

      • allowedUnsafeSysctls

        List of unsafe sysctls that should be enabled on the node.

      • clusterDNSIPs

        List of IPs of DNS service(s) running in the kubernetes cluster.

      • maxPods

        Maximum number of pods that can be scheduled on each node.

    • kernel

      Key used for configuring Bottlerocket Kernel settings.

      • sysctlSettings
        Map of kernel sysctl settings that should be enabled on the node.
    • boot

      Key used for configuring Bottlerocket Boot settings.

      • bootKernelParameters
        Map of Boot Kernel parameters Bottlerocket should configure.

12.4 - Container Networking Interface

EKS Anywhere cluster yaml cni plugin specification reference

Specifying CNI Plugin in EKS Anywhere cluster spec

Provider support details

vSphere Bare Metal Nutanix CloudStack Snow
Supported?

EKS Anywhere currently supports two CNI plugins: Cilium and kindnetd. Only one can be selected for a cluster, and the plugin cannot be changed after the cluster is created. Up until the 0.7.x releases, the plugin had to be specified using the cni field in the cluster spec. Starting with release 0.8, the plugin should be specified using the new cniConfig field as follows:

  • For selecting Cilium as the CNI plugin:

    apiVersion: anywhere.eks.amazonaws.com/v1alpha1
    kind: Cluster
    metadata:
      name: my-cluster-name
    spec:
      clusterNetwork:
        pods:
          cidrBlocks:
          - 192.168.0.0/16
        services:
          cidrBlocks:
          - 10.96.0.0/12
        cniConfig:
          cilium: {}
    

    EKS Anywhere selects this as the default plugin when generating a cluster config.

  • Or for selecting Kindnetd as the CNI plugin:

    apiVersion: anywhere.eks.amazonaws.com/v1alpha1
    kind: Cluster
    metadata:
      name: my-cluster-name
    spec:
      clusterNetwork:
        pods:
          cidrBlocks:
          - 192.168.0.0/16
        services:
          cidrBlocks:
          - 10.96.0.0/12
        cniConfig:
          kindnetd: {}
    

NOTE: EKS Anywhere allows specifying only 1 plugin for a cluster and does not allow switching the plugins after the cluster is created.

Policy Configuration options for Cilium plugin

Cilium accepts policy enforcement modes from users to determine the allowed traffic between pods. The allowed values for this mode are default, always, and never. Please refer to the official Cilium documentation for more details on how each mode affects communication within the cluster, and choose a mode accordingly. You can choose not to set this field, in which case Cilium will be launched with the default mode. Starting with release 0.8, Cilium’s policy enforcement mode can be set through the cluster spec as follows:

apiVersion: anywhere.eks.amazonaws.com/v1alpha1
kind: Cluster
metadata:
  name: my-cluster-name
spec:
  clusterNetwork:
    pods:
      cidrBlocks:
      - 192.168.0.0/16
    services:
      cidrBlocks:
      - 10.96.0.0/12
    cniConfig:
      cilium:
        policyEnforcementMode: "always"

Please note that if the always mode is selected, all communication between pods is blocked unless NetworkPolicy objects allowing communication are created. To ensure that the cluster gets created successfully, EKS Anywhere will create the required NetworkPolicy objects for all of its core components. However, it is up to the user to create the NetworkPolicy objects needed for user workloads once the cluster is created.

Network policies created by EKS Anywhere for “always” mode

As mentioned above, if Cilium is configured with policyEnforcementMode set to always, EKS Anywhere creates NetworkPolicy objects to enable communication between its core components. EKS Anywhere will create NetworkPolicy resources in the following namespaces allowing all ingress/egress traffic by default:

  • kube-system
  • eksa-system
  • All core Cluster API namespaces:
    • capi-system
    • capi-kubeadm-bootstrap-system
    • capi-kubeadm-control-plane-system
    • etcdadm-bootstrap-provider-system
    • etcdadm-controller-system
    • cert-manager
  • Infrastructure provider’s namespace (for instance, capd-system OR capv-system)
  • If Gitops is enabled, then the gitops namespace (flux-system by default)

This is the NetworkPolicy that will be created in these namespaces for the cluster:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-all-ingress-egress
  namespace: test
spec:
  podSelector: {}
  ingress:
  - {}
  egress:
  - {}
  policyTypes:
  - Ingress
  - Egress

Switching the Cilium policy enforcement mode

The policy enforcement mode for Cilium can be changed as part of a cluster upgrade through the CLI upgrade command.

  1. Switching to always mode: When switching from default/never to always mode, EKS Anywhere will create the required NetworkPolicy objects for its core components (listed above). This will ensure that the cluster gets upgraded successfully. But it is up to the user to create the NetworkPolicy objects required for the user workloads.

  2. Switching from always mode: When switching from always to default mode, EKS Anywhere will not delete any of the existing NetworkPolicy objects, including the ones required for EKS Anywhere components (listed above). The user must delete NetworkPolicy objects as needed.

EgressMasqueradeInterfaces option for Cilium plugin

Cilium accepts the EgressMasqueradeInterfaces option from users to limit which interfaces masquerading is performed on. The allowed values for this mode are an interface name such as eth0 or an interface prefix such as eth+. Please refer to the official Cilium documentation for more details on how this option affects masquerading traffic.

By default, masquerading will be performed on all traffic leaving on a non-Cilium network device. This only has an effect on traffic egressing from a node to an external destination not part of the cluster and does not affect routing decisions.

This field can be set as follows:

apiVersion: anywhere.eks.amazonaws.com/v1alpha1
kind: Cluster
metadata:
  name: my-cluster-name
spec:
  clusterNetwork:
    pods:
      cidrBlocks:
      - 192.168.0.0/16
    services:
      cidrBlocks:
      - 10.96.0.0/12
    cniConfig:
      cilium:
        egressMasqueradeInterfaces: "eth0"

RoutingMode option for Cilium plugin

By default, all traffic is sent by Cilium over Geneve tunneling on the network. The routingMode option allows users to switch to native routing instead.

This field can be set as follows:

apiVersion: anywhere.eks.amazonaws.com/v1alpha1
kind: Cluster
metadata:
  name: my-cluster-name
spec:
  clusterNetwork:
    pods:
      cidrBlocks:
      - 192.168.0.0/16
    services:
      cidrBlocks:
      - 10.96.0.0/12
    cniConfig:
      cilium:
        routingMode: "direct"

Use a custom CNI

EKS Anywhere can be configured to skip EKS Anywhere’s default Cilium CNI upgrades via the skipUpgrade field. skipUpgrade can be true or false. When not set, it defaults to false.

When creating a new cluster with skipUpgrade enabled, EKS Anywhere Cilium will be installed as it is required to successfully provision an EKS Anywhere cluster. When the cluster successfully provisions, EKS Anywhere Cilium may be uninstalled and replaced with a different CNI. Subsequent upgrades to the cluster will not attempt to upgrade or re-install EKS Anywhere Cilium.

Once enabled, skipUpgrade cannot be disabled.

apiVersion: anywhere.eks.amazonaws.com/v1alpha1
kind: Cluster
metadata:
  name: my-cluster-name
spec:
  clusterNetwork:
    pods:
      cidrBlocks:
      - 192.168.0.0/16
    services:
      cidrBlocks:
      - 10.96.0.0/12
    cniConfig:
      cilium:
        skipUpgrade: true

The Cilium CLI can be used to uninstall EKS Anywhere Cilium via cilium uninstall. See the replacing Cilium task for a walkthrough on how to successfully replace EKS Anywhere Cilium.
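
For example (a sketch; requires the Cilium CLI and access to the cluster kubeconfig):

export KUBECONFIG=${PWD}/${CLUSTER_NAME}/${CLUSTER_NAME}-eks-a-cluster.kubeconfig
cilium uninstall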

Node IPs configuration option

Starting with release v0.10, the node-cidr-mask-size flag for the Kubernetes controller manager (kube-controller-manager) is configurable via the EKS Anywhere cluster spec. Because clusterNetwork.nodes is an optional field, it is not generated in the EKS Anywhere spec by the generate clusterconfig command. The nodes block needs to be manually added to the cluster spec under the clusterNetwork section:

  clusterNetwork:
    pods:
      cidrBlocks:
      - 192.168.0.0/16
    services:
      cidrBlocks:
      - 10.96.0.0/12
    cniConfig:
      cilium: {}
    nodes:
      cidrMaskSize: 24

If the user does not specify the clusterNetwork.nodes field in the cluster yaml spec, the value for this flag defaults to 24 for IPv4. Please note that this mask size needs to be greater than the pods CIDR mask size. In the above spec, the pod CIDR mask size is 16 and the node CIDR mask size is 24. This gives the cluster 256 blocks of /24 networks. For example, node1 will get 192.168.0.0/24, node2 will get 192.168.1.0/24, node3 will get 192.168.2.0/24, and so on.

To support more than 256 nodes, the cluster CIDR block needs to be large and the node CIDR mask size needs to be small enough to support that many IPs. For instance, to support 1024 nodes, a user can do either of the following (a sketch of the first option follows this list):

  • Set the pods cidr blocks to 192.168.0.0/16 and node cidr mask size to 26
  • Set the pods cidr blocks to 192.168.0.0/15 and node cidr mask size to 25
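
For example, a minimal clusterNetwork sketch for the first option above (values are illustrative):

  clusterNetwork:
    pods:
      cidrBlocks:
      - 192.168.0.0/16
    services:
      cidrBlocks:
      - 10.96.0.0/12
    cniConfig:
      cilium: {}
    nodes:
      cidrMaskSize: 26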

Please note that the node-cidr-mask-size needs to be large enough to accommodate the number of pods you want to run on each node. A mask size of 24 gives enough IP addresses for about 250 pods per node, whereas a mask size of 26 only gives about 60. This is an immutable field, and the value can’t be updated once the cluster has been created.

12.5 - IAM Roles for Service Accounts configuration

EKS Anywhere cluster spec for IAM Roles for Service Accounts (IRSA)

IAM Role for Service Account on EKS Anywhere clusters with self-hosted signing keys

IAM Roles for Service Accounts (IRSA) enables applications running in clusters to authenticate with AWS services using IAM roles. The current solution for leveraging this in EKS Anywhere involves creating your own OIDC provider for the cluster and hosting your cluster’s public service account signing key. The public keys, along with the OIDC discovery document, should be hosted somewhere that AWS STS can discover them.

The steps below are based on the guide for configuring IRSA for DIY Kubernetes, with modifications specific to EKS Anywhere’s cluster provisioning workflow. The main modification is the process of generating the keys.json document. As per the original guide, the user creates the service account signing keys and then uses them to create the keys.json document prior to cluster creation. This order is reversed for EKS Anywhere clusters: you create the cluster first, then retrieve the service account signing key generated by the cluster and use it to create the keys.json document. The sections below show how to do this in detail.

Create an OIDC provider and make its discovery document publicly accessible

You must use a single OIDC provider per EKS Anywhere cluster, which is the best practice to prevent a token from one cluster being used with another cluster. These steps describe the process of using a public S3 bucket to host the OIDC discovery.json and keys.json documents.

  1. Create an S3 bucket to host the public signing keys and OIDC discovery document for your cluster. Make a note of the $HOSTNAME and $ISSUER_HOSTPATH.
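
    For example, using the AWS CLI (the bucket name and region here are assumptions; adjust the host path to your bucket’s regional S3 endpoint):

    export S3_BUCKET=my-eksa-oidc-bucket
    aws s3 mb s3://$S3_BUCKET --region us-west-2
    export HOSTNAME=s3.us-west-2.amazonaws.com
    export ISSUER_HOSTPATH=$HOSTNAME/$S3_BUCKET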

  2. Create the OIDC discovery document as follows:

    cat <<EOF > discovery.json
    {
        "issuer": "https://$ISSUER_HOSTPATH",
        "jwks_uri": "https://$ISSUER_HOSTPATH/keys.json",
        "authorization_endpoint": "urn:kubernetes:programmatic_authorization",
        "response_types_supported": [
            "id_token"
        ],
        "subject_types_supported": [
            "public"
        ],
        "id_token_signing_alg_values_supported": [
            "RS256"
        ],
        "claims_supported": [
            "sub",
            "iss"
        ]
    }
    EOF
    
  3. Upload the discovery.json file to the S3 bucket:

    aws s3 cp --acl public-read ./discovery.json s3://$S3_BUCKET/.well-known/openid-configuration
    
  4. Create an OIDC provider for your cluster. Set the Provider URL to https://$ISSUER_HOSTPATH and Audience to sts.amazonaws.com.
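
    If you prefer the AWS CLI to the console, the provider can be created with a command like the following (a sketch; the thumbprint for your S3 endpoint’s certificate chain is an assumption you must supply):

    aws iam create-open-id-connect-provider \
      --url https://$ISSUER_HOSTPATH \
      --client-id-list sts.amazonaws.com \
      --thumbprint-list <CERT_THUMBPRINT>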

  5. Make a note of the Provider field of OIDC provider after it is created.

  6. Assign an IAM role to the OIDC provider.

    1. Navigate to the AWS IAM Console.

    2. Click on the OIDC provider.

    3. Click Assign role.

    4. Select Create a new role.

    5. Select Web identity as the trusted entity.

    6. In the Web identity section:

      • If your Identity provider is not auto selected, select it.
      • Select sts.amazonaws.com as the Audience.
    7. Click Next.

    8. Configure your desired Permissions policies.

    9. Below is a sample trust policy for the IAM role used by your pods. Replace ACCOUNT_ID, ISSUER_HOSTPATH, NAMESPACE, and SERVICE_ACCOUNT. Example: scoped to a service account

      {
          "Version": "2012-10-17",
          "Statement": [
              {
                  "Effect": "Allow",
                  "Principal": {
                      "Federated": "arn:aws:iam::ACCOUNT_ID:oidc-provider/ISSUER_HOSTPATH"
                  },
                  "Action": "sts:AssumeRoleWithWebIdentity",
                  "Condition": {
                      "StringEquals": {
                          "ISSUER_HOSTPATH:sub": "system:serviceaccount:NAMESPACE:SERVICE_ACCOUNT"
                      }
                  }
              }
          ]
      }
      
    10. Create the IAM Role and make a note of the Role name.

    11. After the cluster is created you can grant service accounts access to the role by modifying the trust relationship. See the How to use trust policies with IAM Roles for more information on trust policies. Refer to Configure the trust relationship for the OIDC provider’s IAM Role for a working example.

Create (or upgrade) the EKS Anywhere cluster

When creating (or upgrading) the EKS Anywhere cluster, you need to configure the kube-apiserver’s service-account-issuer flag so it can issue and mount projected service account tokens in pods. For this, use the value obtained in the first section for $ISSUER_HOSTPATH as the service-account-issuer. Configure the kube-apiserver by setting this value through the EKS Anywhere cluster spec:

apiVersion: anywhere.eks.amazonaws.com/v1alpha1
kind: Cluster
metadata:
    name: my-cluster-name
spec:
    podIamConfig:
        serviceAccountIssuer: https://$ISSUER_HOSTPATH

Set the remaining fields in cluster spec as required and create the cluster.

Generate keys.json and make it publicly accessible

  1. The cluster provisioning workflow generates a pair of service account signing keys. Retrieve the public signing key from the cluster and create a keys.json document with the content.

    git clone https://github.com/aws/amazon-eks-pod-identity-webhook
    cd amazon-eks-pod-identity-webhook
    kubectl get secret ${CLUSTER_NAME}-sa -n eksa-system -o jsonpath={.data.tls\\.crt} | base64 --decode > ${CLUSTER_NAME}-sa.pub
    go run ./hack/self-hosted/main.go -key ${CLUSTER_NAME}-sa.pub | jq '.keys += [.keys[0]] | .keys[1].kid = ""' > keys.json
    
  2. Upload the keys.json document to the S3 bucket.

    aws s3 cp --acl public-read ./keys.json s3://$S3_BUCKET/keys.json
    

Deploy pod identity webhook

The Amazon Pod Identity Webhook configures pods with the necessary environment variables and tokens (via file mounts) to interact with AWS services. The webhook will configure any pod associated with a service account that has an eks.amazonaws.com/role-arn annotation.

  1. Clone amazon-eks-pod-identity-webhook .

  2. Set the $KUBECONFIG environment variable to the path of the EKS Anywhere cluster.

  3. Apply the manifests for the amazon-eks-pod-identity-webhook. The image used here will be pulled from docker.io. Optionally, the image can be imported into (or proxied through) your private registry. Change the IMAGE argument here to your private registry if needed.

    make cluster-up IMAGE=amazon/amazon-eks-pod-identity-webhook:latest
    
  4. Create a service account with an eks.amazonaws.com/role-arn annotation set to the IAM Role created for the OIDC provider.

    apiVersion: v1
    kind: ServiceAccount
    metadata:
      name: my-serviceaccount
      namespace: default
      annotations:
        # set this with value of OIDC_IAM_ROLE
        eks.amazonaws.com/role-arn: "arn:aws:iam::ACCOUNT_ID:role/s3-reader"
    
        # optional: Defaults to "sts.amazonaws.com" if not set
        eks.amazonaws.com/audience: "sts.amazonaws.com"
    
        # optional: When set to "true", adds AWS_STS_REGIONAL_ENDPOINTS env var
        #   to containers
        eks.amazonaws.com/sts-regional-endpoints: "true"
    
        # optional: Defaults to 86400 for expirationSeconds if not set
        #   Note: This value can be overwritten if specified in the pod
        #         annotation as shown in the next step.
        eks.amazonaws.com/token-expiration: "86400"
    
  5. Finally, apply the manifest above (saved as my-service-account.yaml) to create your service account.

    kubectl apply -f my-service-account.yaml
    
  6. You can validate IRSA by following IRSA setup and test. Ensure the awscli pod is deployed in the same namespace as the pod-identity-webhook ServiceAccount.

Configure the trust relationship for the OIDC provider’s IAM Role

In order to grant certain service accounts access to the desired AWS resources, edit the trust relationship for the OIDC provider’s IAM Role (OIDC_IAM_ROLE) created in the first section, and add in the desired service accounts.

  1. Choose the role in the console to open it for editing.

  2. Choose the Trust relationships tab, and then choose Edit trust relationship.

  3. Find the line that looks similar to the following:

    "$ISSUER_HOSTPATH:aud": "sts.amazonaws.com"
    
  4. Change the line to look like the following line. Replace aud with sub, and replace KUBERNETES_SERVICE_ACCOUNT_NAMESPACE and KUBERNETES_SERVICE_ACCOUNT_NAME with the Kubernetes namespace that the account exists in and the name of your Kubernetes service account.

    "$ISSUER_HOSTPATH:sub": "system:serviceaccount:KUBERNETES_SERVICE_ACCOUNT_NAMESPACE:KUBERNETES_SERVICE_ACCOUNT_NAME"
    

    The allow list example below grants access to the my-serviceaccount service account in the default namespace and to all service accounts in the amazon-cloudwatch namespace for the us-west-2 region. Remember to replace Account_ID and S3_BUCKET with the required values.

    {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Effect": "Allow",
                "Principal": {
                    "Federated": "arn:aws:iam::$Account_ID:oidc-provider/s3.us-west-2.amazonaws.com/$S3_BUCKET"
                },
                "Action": "sts:AssumeRoleWithWebIdentity",
                "Condition": {
                    "StringLike": {
                        "s3.us-west-2.amazonaws.com/$S3_BUCKET:sub": [
                            "system:serviceaccount:default:my-serviceaccount",
                            "system:serviceaccount:amazon-cloudwatch:*"
                        ]
                    }
                }
            }
        ]
    }
    
  5. Refer to this doc for different ways of configuring one or multiple service accounts through the condition operators in the trust relationship.

  6. Choose Update Trust Policy to finish.

12.6 - IAM Authentication

EKS Anywhere cluster yaml specification AWS IAM Authenticator reference

AWS IAM Authenticator support (optional)

Provider support details

vSphere Bare Metal Nutanix CloudStack Snow
Supported?

EKS Anywhere can create clusters that support AWS IAM Authenticator-based API server authentication. In order to add IAM Authenticator support, you need to configure your cluster by updating the configuration file before creating the cluster. This is a generic template with detailed descriptions below for reference:

apiVersion: anywhere.eks.amazonaws.com/v1alpha1
kind: Cluster
metadata:
   name: my-cluster-name
spec:
   ...
   # IAM Authenticator support
   identityProviderRefs:
      - kind: AWSIamConfig
        name: aws-iam-auth-config
---
apiVersion: anywhere.eks.amazonaws.com/v1alpha1
kind: AWSIamConfig
metadata:
   name: aws-iam-auth-config
spec:
    awsRegion: ""
    backendMode:
        - ""
    mapRoles:
        - roleARN: arn:aws:iam::XXXXXXXXXXXX:role/myRole
          username: myKubernetesUsername
          groups:
          - ""
    mapUsers:
        - userARN: arn:aws:iam::XXXXXXXXXXXX:user/myUser
          username: myKubernetesUsername
          groups:
          - ""
    partition: ""

identityProviderRefs (Under Cluster)

List of identity providers you want configured for the Cluster. This would include a reference to the AWSIamConfig object with the configuration below.

awsRegion (required)

  • Description: awsRegion can be any region in the aws partition that the IAM roles exist in.
  • Type: string

backendMode (required)

  • Description: backendMode configures the IAM Authenticator server’s backend mode (i.e., where to source mappings from). EKS Anywhere supports the EKSConfigMap and CRD modes of AWS IAM Authenticator; for more details refer to backendMode
  • Type: string

mapRoles, mapUsers (optional)

  • Description: When using EKSConfigMap backendMode, we recommend providing either mapRoles or mapUsers to set the IAM role mappings at the time of creation. This input is added to an EKS-style ConfigMap. For more details refer to EKS IAM

  • Type: list of objects

    roleARN, userARN (required)

    • Description: IAM ARN to authenticate to the cluster. roleARN specifies an IAM role and userARN specifies an IAM user.
    • Type: string

    username (required)

    • Description: The Kubernetes username the IAM ARN is mapped to in the cluster. The ARN gets mapped to the Kubernetes cluster permissions associated with the username.
    • Type: string

    groups

    • Description: List of kubernetes user groups that the mapped IAM ARN is given permissions to.
    • Type: list string

partition

  • Description: This field is used to set the AWS partition that the IAM roles are present in. The default value is aws.
  • Type: string

12.7 - OIDC

EKS Anywhere cluster yaml specification OIDC reference

OIDC support (optional)

EKS Anywhere can create clusters that support API server OIDC authentication.

Provider support details

vSphere Bare Metal Nutanix CloudStack Snow
Supported?

In order to add OIDC support, you need to configure your cluster by updating the configuration file to include the details below. The OIDC configuration can be added at cluster creation time, or introduced via a cluster upgrade in VMware and CloudStack.

This is a generic template with detailed descriptions below for reference:

apiVersion: anywhere.eks.amazonaws.com/v1alpha1
kind: Cluster
metadata:
   name: my-cluster-name
spec:
   ...
   # OIDC support
   identityProviderRefs:
      - kind: OIDCConfig
        name: my-cluster-name
---
apiVersion: anywhere.eks.amazonaws.com/v1alpha1
kind: OIDCConfig
metadata:
   name: my-cluster-name
spec:
    clientId: ""
    groupsClaim: ""
    groupsPrefix: ""
    issuerUrl: "https://x"
    requiredClaims:
      - claim: ""
        value: ""
    usernameClaim: ""
    usernamePrefix: ""

identityProviderRefs (Under Cluster)

List of identity providers you want configured for the Cluster. This would include a reference to the OIDCConfig object with the configuration below.

clientId (required)

  • Description: ClientId defines the client ID for the OpenID Connect client
  • Type: string

groupsClaim (optional)

  • Description: GroupsClaim defines the name of a custom OpenID Connect claim for specifying user groups
  • Type: string

groupsPrefix (optional)

  • Description: GroupsPrefix defines a string to be prefixed to all groups to prevent conflicts with other authentication strategies
  • Type: string

issuerUrl (required)

  • Description: IssuerUrl defines the URL of the OpenID issuer, only HTTPS scheme will be accepted
  • Type: string

requiredClaims (optional)

List of RequiredClaim objects listed below. Only one is supported at this time.

requiredClaims[0] (optional)

  • Description: RequiredClaim defines a key=value pair that describes a required claim in the ID Token
    • claim
      • type: string
    • value
      • type: string
  • Type: object

usernameClaim (optional)

  • Description: UsernameClaim defines the OpenID claim to use as the user name. Note that claims other than the default (‘sub’) are not guaranteed to be unique and immutable
  • Type: string

usernamePrefix (optional)

  • Description: UsernamePrefix defines a string to be prefixed to all usernames. If not provided, username claims other than ‘email’ are prefixed by the issuer URL to avoid clashes. To skip any prefixing, provide the value ‘-’.
  • Type: string

12.8 - Proxy

EKS Anywhere cluster yaml specification proxy configuration reference

Proxy support (optional)

Provider support details

vSphere Bare Metal Nutanix CloudStack Snow
Supported?

You can configure EKS Anywhere to use a proxy to connect to the Internet. This is the generic template with proxy configuration for your reference:

apiVersion: anywhere.eks.amazonaws.com/v1alpha1
kind: Cluster
metadata:
   name: my-cluster-name
spec:
   ...
   proxyConfiguration:
      httpProxy: http-proxy-ip:port
      httpsProxy: https-proxy-ip:port
      noProxy:
      - list of no proxy endpoints

Configuring Docker daemon

Given the configuration file above, EKS Anywhere will configure the cluster to use the proxy for you. However, to successfully use EKS Anywhere, you will also need to ensure your Docker daemon is configured to use the proxy.

This generally means updating your daemon to launch with the HTTPS_PROXY, HTTP_PROXY, and NO_PROXY environment variables.

For an example of how to do this with systemd, please see Docker’s documentation here .
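
A minimal sketch of a systemd drop-in for the Docker daemon (the file path and values here are assumptions; adjust them for your proxy):

# /etc/systemd/system/docker.service.d/http-proxy.conf
[Service]
Environment="HTTP_PROXY=http://http-proxy-ip:port"
Environment="HTTPS_PROXY=http://https-proxy-ip:port"
Environment="NO_PROXY=localhost,127.0.0.1,.example.com"

After adding the drop-in, reload systemd and restart the Docker daemon:

sudo systemctl daemon-reload
sudo systemctl restart docker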

Configuring EKS Anywhere proxy without config file

For commands using a cluster config file, EKS Anywhere will derive its proxy config from the cluster configuration file.

However, for commands that do not utilize a cluster config file, you can set the following environment variables:

export HTTPS_PROXY=https-proxy-ip:port
export HTTP_PROXY=http-proxy-ip:port
export NO_PROXY=no-proxy-domain.com,another-domain.com,localhost

Proxy Configuration Spec Details

proxyConfiguration (required)

  • Description: top level key; required to use proxy.
  • Type: object

httpProxy (required)

  • Description: HTTP proxy to use to connect to the internet; must be in the format IP:port
  • Type: string
  • Example: httpProxy: 192.168.0.1:3218

httpsProxy (required)

  • Description: HTTPS proxy to use to connect to the internet; must be in the format IP:port
  • Type: string
  • Example: httpsProxy: 192.168.0.1:3218

noProxy (optional)

  • Description: list of endpoints that should not be routed through the proxy; can be an IP, CIDR block, or a domain name
  • Type: list of strings
  • Example
  noProxy:
   - localhost
   - 192.168.0.1
   - 192.168.0.0/16
   - .example.com

12.9 - KubeletConfiguration

EKS Anywhere cluster yaml specification for Kubelet Configuration

Kubelet Configuration Support

Provider support details

vSphere Bare Metal Nutanix CloudStack Snow
Ubuntu 20.04
Ubuntu 22.04
Bottlerocket
RHEL 8.x
RHEL 9.x

Starting with v0.20.0, you can configure EKS Anywhere to specify kubelet settings for control plane and/or worker nodes. This is done using kubeletConfiguration. The following cluster spec shows an example of how to configure kubeletConfiguration:

apiVersion: anywhere.eks.amazonaws.com/v1alpha1
kind: Cluster
metadata:
   name: my-cluster-name
spec:
   ...
   controlPlaneConfiguration:          # Kubelet configuration for control plane nodes
      kubeletConfiguration:
         kind: KubeletConfiguration
         maxPods: 80
   ...
   workerNodeGroupConfigurations:      # Kubelet configuration for worker nodes
      - count: 1
        kubeletConfiguration:
           kind: KubeletConfiguration
           maxPods: 85
   ...

kubeletConfiguration should contain the configuration to be used by the kubelet while creating or updating a node. It must contain the kind key with the value KubeletConfiguration for EKS Anywhere to process the settings. This configuration must only be used with valid settings, as it may cause unexpected behavior from the kubelet if misconfigured. EKS Anywhere performs a limited set of data type validations for the Kubelet Configuration; however, it is ultimately the user’s responsibility to ensure a valid configuration is set.

More details on the Kubelet Configuration object and its supported fields can be found here . EKS Anywhere only supports the latest Kubernetes version’s KubeletConfiguration.

Bottlerocket Support

The only provider that supports kubeletConfiguration with Bottlerocket is vSphere. The list of settings that can be configured for Bottlerocket can be found here. That page also describes other settings, such as Kubelet Options; the settings supported by Bottlerocket include information specific to the Kubelet Configuration keyword. Refer to that documentation to learn about the supported fields and their data types, which may differ from the upstream object’s data types.

Note that this is the preferred and supported way to specify any kubelet settings from release v0.20.0 onwards. Previously, the hostOSConfiguration.bottlerocketConfiguration.kubernetes field was used to specify Bottlerocket Kubernetes settings; that field has been deprecated as of v0.20.0.

Here’s a list of supported fields by Bottlerocket for Kubelet Configuration -

  • allowedUnsafeSysctls
  • clusterDNSIPs
  • clusterDomain
  • containerLogMaxFiles
  • containerLogMaxSize
  • cpuCFSQuota
  • cpuManagerPolicy
  • cpuManagerPolicyOptions
  • cpuManagerReconcilePeriod
  • eventBurst
  • eventRecordQPS
  • evictionHard
  • evictionMaxPodGracePeriod
  • evictionSoft
  • evictionSoftGracePeriod
  • imageGCHighThresholdPercent
  • imageGCLowThresholdPercent
  • kubeAPIBurst
  • kubeAPIQPS
  • kubeReserved
  • maxPods
  • memoryManagerPolicy
  • podPidsLimit
  • providerID
  • registryBurst
  • registryPullQPS
  • shutdownGracePeriod
  • shutdownGracePeriodCriticalPods
  • systemReserved
  • topologyManagerPolicy
  • topologyManagerScope

Special fields

Duplicate fields

The clusterNetwork.dns.resolvConf field is the path to a file containing a custom DNS resolver configuration. This can now also be provided in the Kubelet Configuration using the resolvConf field. Note that if both fields are set, the Kubelet Configuration’s field takes precedence and overrides the value from clusterNetwork.dns.resolvConf.
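
For example, a sketch of setting a custom resolver file through the Kubelet Configuration (the file path is an assumption):

  controlPlaneConfiguration:
    kubeletConfiguration:
      kind: KubeletConfiguration
      resolvConf: "/etc/my-custom-resolv.conf"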

Blocked fields

Fields like providerID or cloudProvider are set by EKS Anywhere and can’t be set by users. This is to maintain seamless support for all providers.

Node Rollouts

Adding, updating, or deleting the Kubelet Configuration will cause node rollouts to the nodes the configuration affects. This is especially important to consider with providers like Bare Metal, since node rollouts caused by Kubelet config changes could require extra hardware to be provisioned, depending on your rollout strategy.

12.10 - MachineHealthCheck

EKS Anywhere cluster yaml specification for MachineHealthCheck configuration

MachineHealthCheck Support

Provider support details

vSphere Bare Metal Nutanix CloudStack Snow
Supported?

You can configure EKS Anywhere to specify timeouts and maxUnhealthy values for machine health checks.

A MachineHealthCheck (MHC) is a resource in Cluster API which allows users to define conditions under which Machines within a Cluster should be considered unhealthy. A MachineHealthCheck is defined on a management cluster and scoped to a particular workload cluster.

Note: Even though the MachineHealthCheck configuration in the EKS-A spec is optional, MachineHealthChecks are still installed for all clusters using the default values mentioned below.

EKS Anywhere allows users to have granular control over MachineHealthChecks in their cluster configuration, with default values (derived from Cluster API) being applied if the MHC is not configured in the spec. The top-level machineHealthCheck field governs the global MachineHealthCheck settings for all Machines (control-plane and worker). These global settings can be overridden through the nested machineHealthCheck field in the control plane configuration and each worker node configuration. If the nested MHC fields are not configured, then the top-level settings are applied to the respective Machines.

The following cluster spec shows an example of how to configure health check timeouts and maxUnhealthy:

apiVersion: anywhere.eks.amazonaws.com/v1alpha1
kind: Cluster
metadata:
   name: my-cluster-name
spec:
   ...
   machineHealthCheck:                 # Top-level MachineHealthCheck configuration
      maxUnhealthy: "60%"
      nodeStartupTimeout: "10m0s"
      unhealthyMachineTimeout: "5m0s"
   ...
   controlPlaneConfiguration:          # MachineHealthCheck configuration for control plane
      machineHealthCheck:
         maxUnhealthy: 100%
         nodeStartupTimeout: "15m0s"
         unhealthyMachineTimeout: 10m
   ...
   workerNodeGroupConfigurations:
      - count: 1
        name: md-0
        machineHealthCheck:            # MachineHealthCheck configuration for Worker Node Group 0
           maxUnhealthy: 100%
           nodeStartupTimeout: "10m0s"
           unhealthyMachineTimeout: 20m
      - count: 1
        name: md-1
        machineHealthCheck:            # MachineHealthCheck configuration for Worker Node Group 1
           maxUnhealthy: 100%
           nodeStartupTimeout: "10m0s"
           unhealthyMachineTimeout: 20m
   ...

MachineHealthCheck Spec Details

machineHealthCheck (optional)

  • Description: top-level key; required to configure global MachineHealthCheck timeouts and maxUnhealthy.
  • Type: object

machineHealthCheck.maxUnhealthy (optional)

  • Description: determines the maximum permissible number or percentage of unhealthy Machines in a cluster before further remediation is prevented. This ensures that MachineHealthChecks only remediate Machines when the cluster is healthy.
  • Default: 100% for control plane machines, 40% for worker nodes (Cluster API defaults).
  • Type: integer (count) or string (percentage)

machineHealthCheck.nodeStartupTimeout (optional)

  • Description: determines how long a MachineHealthCheck should wait for a Node to join the cluster, before considering a Machine unhealthy.
  • Default: 20m0s for Tinkerbell provider, 10m0s for all other providers.
  • Minimum Value (If configured): 30s
  • Type: string

machineHealthCheck.unhealthyMachineTimeout (optional)

  • Description: determines how long the unhealthy Node conditions (e.g., Ready=False, Ready=Unknown) should be matched for, before considering a Machine unhealthy.
  • Default: 5m0s
  • Type: string

controlPlaneConfiguration.machineHealthCheck (optional)

  • Description: Control plane level configuration for MachineHealthCheck timeouts and maxUnhealthy values.
  • Type: object

controlPlaneConfiguration.machineHealthCheck.maxUnhealthy (optional)

  • Description: determines the maximum permissible number or percentage of unhealthy control plane Machines in a cluster before further remediation is prevented. This ensures that MachineHealthChecks only remediate Machines when the cluster is healthy.
  • Default: Top-level MHC maxUnhealthy if set or 100% otherwise.
  • Type: integer (count) or string (percentage)

controlPlaneConfiguration.machineHealthCheck.nodeStartupTimeout (optional)

  • Description: determines how long a MachineHealthCheck should wait for a control plane Node to join the cluster, before considering the Machine unhealthy.
  • Default: Top-level MHC nodeStartupTimeout if set or 20m0s for Tinkerbell provider, 10m0s for all other providers otherwise.
  • Minimum Value (if configured): 30s
  • Type: string

controlPlaneConfiguration.machineHealthCheck.unhealthyMachineTimeout (optional)

  • Description: determines how long the unhealthy conditions (e.g., Ready=False, Ready=Unknown) should be matched for a control plane Node, before considering the Machine unhealthy.
  • Default: Top-level MHC unhealthyMachineTimeout if set or 5m0s otherwise.
  • Type: string

workerNodeGroupConfigurations.machineHealthCheck (optional)

  • Description: Worker node level configuration for MachineHealthCheck timeouts and maxUnhealthy values.
  • Type: object

workerNodeGroupConfigurations.machineHealthCheck.maxUnhealthy (optional)

  • Description: determines the maximum permissible number or percentage of unhealthy worker Machines in a cluster before further remediation is prevented. This ensures that MachineHealthChecks only remediate Machines when the cluster is healthy.
  • Default: Top-level MHC maxUnhealthy if set or 40% otherwise.
  • Type: integer (count) or string (percentage)

workerNodeGroupConfigurations.machineHealthCheck.nodeStartupTimeout (optional)

  • Description: determines how long a MachineHealthCheck should wait for a worker Node to join the cluster, before considering the Machine unhealthy.
  • Default: Top-level MHC nodeStartupTimeout if set or 20m0s for Tinkerbell provider, 10m0s for all other providers otherwise.
  • Minimum Value (if configured): 30s
  • Type: string

workerNodeGroupConfigurations.machineHealthCheck.unhealthyMachineTimeout (optional)

  • Description: determines how long the unhealthy conditions (e.g., Ready=False, Ready=Unknown) should be matched for a worker Node, before considering the Machine unhealthy.
  • Default: Top-level MHC unhealthyMachineTimeout if set or 5m0s otherwise.
  • Type: string

12.11 - Registry Mirror

EKS Anywhere cluster specification for registry mirror configuration

Registry Mirror Support (optional)

Provider support details

vSphere Bare Metal Nutanix CloudStack Snow
Supported?

You can configure EKS Anywhere to use a local registry mirror for its dependencies. When a registry mirror is configured in the EKS Anywhere cluster specification, EKS Anywhere will use it instead of defaulting to Amazon ECR for its dependencies. For details on how to configure your local registry mirror for EKS Anywhere, see the Configure local registry mirror section.

See the airgapped documentation page for instructions on downloading and importing EKS Anywhere dependencies to a local registry mirror.

Registry Mirror Cluster Spec

The following cluster spec shows an example of how to configure registry mirror:

apiVersion: anywhere.eks.amazonaws.com/v1alpha1
kind: Cluster
metadata:
  name: my-cluster-name
spec:
  ...
  registryMirrorConfiguration:
    endpoint: <private registry IP or hostname>
    port: <private registry port>
    ociNamespaces:
      - registry: <upstream registry IP or hostname>
        namespace: <namespace in private registry>
      ...
    caCertContent: |
      -----BEGIN CERTIFICATE-----
      MIIF1DCCA...
      ...
      es6RXmsCj...
      -----END CERTIFICATE-----        

Registry Mirror Cluster Spec Details

registryMirrorConfiguration (optional)

  • Description: top level key; required to use a private registry.
  • Type: object

endpoint (required)

  • Description: IP address or hostname of the private registry for pulling images
  • Type: string
  • Example: endpoint: 192.168.0.1

port (optional)

  • Description: port for the private registry. This is an optional field. If a port is not specified, the default HTTPS port 443 is used
  • Type: string
  • Example: port: 443

ociNamespaces (optional)

  • Description: when you need to mirror multiple registries, you can map each upstream registry to a “namespace” in your mirror. The namespace is appended to the endpoint, as <endpoint>/<namespace>, to set up the mirror for the specified registry. Note that while using ociNamespaces, you need to specify all the registries that need to be mirrored. This includes an entry for the public.ecr.aws registry to pull EKS Anywhere images from.

  • Type: array

  • Example:

    ociNamespaces:
      - registry: "public.ecr.aws"
        namespace: ""
      - registry: "783794618700.dkr.ecr.us-west-2.amazonaws.com"
        namespace: "curated-packages"
    

caCertContent (optional)

  • Description: Certificate Authority (CA) certificate for the private registry. When using self-signed certificates, it is necessary to pass this parameter in the cluster spec. This must be the individual public CA cert used to sign the registry certificate. This will be added to the cluster nodes so that they are able to pull images from the private registry.

    It is also possible to configure caCertContent by exporting an environment variable:
    export EKSA_REGISTRY_MIRROR_CA="/path/to/certificate-file"

  • Type: string

  • Example:

    caCertContent: |
      -----BEGIN CERTIFICATE-----
      MIIF1DCCA...
      ...
      es6RXmsCj...
      -----END CERTIFICATE-----  
    

authenticate (optional)

  • Description: optional field to authenticate with a private registry. When using private registries that require authentication, it is necessary to set this parameter to true in the cluster spec.
  • Type: boolean

When this value is set to true, the following environment variables need to be set:

export REGISTRY_USERNAME=<username>
export REGISTRY_PASSWORD=<password>

insecureSkipVerify (optional)

  • Description: optional field to skip the registry certificate verification. Only use this solution for isolated testing or in a tightly controlled, air-gapped environment. Currently only supported for Ubuntu and RHEL OS.
  • Type: boolean

Configure local registry mirror

Project configuration

The following projects must be created in your registry before importing the EKS Anywhere images:

bottlerocket
eks-anywhere
eks-distro
isovalent
cilium-chart

For example, if a registry is available at private-registry.local, then the following projects must be created.

https://private-registry.local/bottlerocket
https://private-registry.local/eks-anywhere
https://private-registry.local/eks-distro
https://private-registry.local/isovalent
https://private-registry.local/cilium-chart

Admin machine configuration

You must configure the Admin machine with the information it needs to communicate with your registry.

Add the registry’s CA certificate to the list of CA certificates on the Admin machine if your registry uses self-signed certificates.

If your registry uses authentication, the following environment variables must be set on the Admin machine before running the eksctl anywhere import images command.

export REGISTRY_USERNAME=<username>
export REGISTRY_PASSWORD=<password>

12.12 - Autoscaling configuration

EKS Anywhere cluster yaml autoscaling specification reference

EKS Anywhere supports autoscaling worker node groups using the Kubernetes Cluster Autoscaler. The Kubernetes Cluster Autoscaler Curated Package is an image and Helm chart installed via the Curated Packages Controller.

The Helm chart utilizes the Cluster Autoscaler clusterapi mode to scale resources.

Configure an EKS Anywhere worker node group to be picked up by a Cluster Autoscaler deployment by adding an autoscalingConfiguration block to the workerNodeGroupConfiguration:

    apiVersion: anywhere.eks.amazonaws.com/v1alpha1
    kind: Cluster
    metadata:
      name: my-cluster-name
    spec:
      workerNodeGroupConfigurations:
        - name: md-0
          autoscalingConfiguration:
            minCount: 1
            maxCount: 5
          machineGroupRef:
            kind: VSphereMachineConfig
            name: worker-machine-a
        - name: md-1
          autoscalingConfiguration:
            minCount: 1
            maxCount: 3
          machineGroupRef:
            kind: VSphereMachineConfig
            name: worker-machine-b

Note that if count is specified for the worker node group, its value will be ignored during cluster creation as well as cluster upgrade. If only one of minCount or maxCount is specified, the other will default to 0 and count will default to minCount.

EKS Anywhere automatically applies the following annotations to your MachineDeployment objects for worker node groups with autoscaling enabled. The Cluster Autoscaler component uses these annotations to identify which node groups to autoscale. If a node group is not autoscaling as expected, check for these annotations on the MachineDeployment to troubleshoot.

cluster.x-k8s.io/cluster-api-autoscaler-node-group-min-size: <minCount>
cluster.x-k8s.io/cluster-api-autoscaler-node-group-max-size: <maxCount>
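
For example, to confirm the annotations are present on a node group (a sketch; for EKS Anywhere clusters the MachineDeployment objects typically live in the eksa-system namespace of the management cluster):

kubectl get machinedeployments -n eksa-system -o yaml | grep cluster-api-autoscaler-node-group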

12.13 - Skipping validations configuration

EKS Anywhere cluster annotations to skip validations

EKS Anywhere runs a set of validations while performing cluster operations. Some of these validations can be skipped.

One such validation checks whether the cluster’s control plane IP is already in use.

  • If a cluster is being created using the EKS Anywhere CLI, this validation can be skipped by using the --skip-ip-check flag or by setting the annotation below on the Cluster object.
  • If a workload cluster is being created using tools like kubectl or GitOps, the validation can only be skipped by adding the annotation below.

Configure an EKS Anywhere cluster to skip the validation for checking the uniqueness of the control plane IP by setting the anywhere.eks.amazonaws.com/skip-ip-check annotation to true, as shown below.

    apiVersion: anywhere.eks.amazonaws.com/v1alpha1
    kind: Cluster
    metadata:
      annotations:
        anywhere.eks.amazonaws.com/skip-ip-check: "true"
      name: my-cluster-name
    spec:
    .
    .
    .

Note that this annotation is also automatically set if you use the --skip-ip-check flag while running the EKS Anywhere create cluster command.
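
For example (a sketch; the cluster config file name is an assumption):

    eksctl anywhere create cluster -f my-cluster-name.yaml --skip-ip-check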

12.14 - GitOps

Configuration reference for GitOps cluster management.

GitOps Support (Optional)

Provider support details

vSphere Bare Metal Nutanix CloudStack Snow
Supported?

EKS Anywhere can create clusters that support GitOps configuration management with Flux. In order to add GitOps support, you need to configure your cluster by specifying the configuration file with the gitOpsRef field when creating or upgrading the cluster. We currently support two types of configurations: FluxConfig and GitOpsConfig.

Flux Configuration

The flux configuration spec has three optional fields, regardless of the chosen git provider.

Flux Configuration Spec Details

systemNamespace (optional)

  • Description: Namespace in which to install the gitops components in your cluster. Defaults to flux-system
  • Type: string

clusterConfigPath (optional)

  • Description: The path relative to the root of the git repository where EKS Anywhere will store the cluster configuration files. Defaults to the cluster name
  • Type: string

branch (optional)

  • Description: The branch to use when committing the configuration. Defaults to main
  • Type: string

EKS Anywhere currently supports two git providers for FluxConfig: Github and Git.

Github provider

Please note that for the Flux config to work successfully with the Github provider, the environment variable EKSA_GITHUB_TOKEN needs to be set with a valid GitHub PAT . This is a generic template with detailed descriptions below for reference:

apiVersion: anywhere.eks.amazonaws.com/v1alpha1
kind: Cluster
metadata:
  name: my-cluster-name
  namespace: default
spec:
  ...
  #GitOps Support
  gitOpsRef:
    name: my-github-flux-provider
    kind: FluxConfig
---
apiVersion: anywhere.eks.amazonaws.com/v1alpha1
kind: FluxConfig
metadata:
  name: my-github-flux-provider
  namespace: default
spec:
  systemNamespace: "my-alternative-flux-system-namespace"
  clusterConfigPath: "path-to-my-clusters-config"
  branch: "main"
  github:
    personal: true
    repository: myClusterGitopsRepo
    owner: myGithubUsername

---
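
The token can be exported in the shell before running the CLI. A minimal example, where the token value is a placeholder:

    export EKSA_GITHUB_TOKEN=ghp_MyGithubPersonalAccessToken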

github Configuration Spec Details

repository (required)

  • Description: The name of the repository where EKS Anywhere will store your cluster configuration, and sync it to the cluster. If the repository exists, we will clone it from the git provider; if it does not exist, we will create it for you.
  • Type: string

owner (required)

  • Description: The owner of the Github repository; either a Github username or Github organization name. The Personal Access Token used must belong to the owner if this is a personal repository, or have permissions over the organization if this is not a personal repository.
  • Type: string

personal (optional)

  • Description: Is the repository a personal or organization repository? If personal, this value is true; otherwise, false. If using an organizational repository (i.e., personal is false), the owner field will be used as the organization when authenticating to github.com
  • Default: true
  • Type: boolean

Git provider

Before you create a cluster using the Git provider, you will need to set and export the EKSA_GIT_KNOWN_HOSTS and EKSA_GIT_PRIVATE_KEY environment variables.

EKSA_GIT_KNOWN_HOSTS

EKS Anywhere uses the provided known hosts file to verify the identity of the git provider when connecting to it with SSH. The EKSA_GIT_KNOWN_HOSTS environment variable should be a path to a known hosts file containing entries for the git server to which you’ll be connecting.

For example, if you want to provide a known hosts file which allows you to connect to and verify the identity of github.com using a private key based on the ecdsa key algorithm, you can use the OpenSSH utility ssh-keyscan to obtain the known hosts entry used by github.com for the ecdsa key type. EKS Anywhere supports the ecdsa, rsa, and ed25519 key types, which can be specified via the sshKeyAlgorithm field of the git provider config.

ssh-keyscan -t ecdsa github.com >> my_eksa_known_hosts

This will produce a file which contains known-hosts entries for the ecdsa key type supported by github.com, mapping the host to the key-type and public key.

github.com ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBEmKSENjQEezOmxkZMy7opKgwFB9nkt5YRrYMjNuG5N87uRgg6CLrbo5wAdT/y6v0mKV0U2w0WZ2YB/++Tpockg=

EKS Anywhere will use the content of the file at the path EKSA_GIT_KNOWN_HOSTS to verify the identity of the remote git server, and the provided known hosts file must contain an entry for the remote host and key type.

EKSA_GIT_PRIVATE_KEY

The EKSA_GIT_PRIVATE_KEY environment variable should be a path to the private key file associated with a valid SSH public key registered with your Git provider. This key must have permission to both read from and write to your repository. The key can use the key algorithms rsa, ecdsa, and ed25519.

This key file must have restricted file permissions, allowing only the owner to read and write, such as octal permissions 600.

If your private key file is passphrase protected, you must also set EKSA_GIT_SSH_KEY_PASSPHRASE with that value.
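
As a sketch of preparing these inputs, the following commands generate an ecdsa key pair with ssh-keygen (only needed if you do not already have one registered with your git provider), restrict its permissions, and export the environment variables. The file paths and passphrase are placeholders:

    # Optionally generate a new ecdsa key pair and register the public key with your git provider
    ssh-keygen -t ecdsa -f ~/.ssh/my_eksa_git_key

    # Only the owner may read and write the private key
    chmod 600 ~/.ssh/my_eksa_git_key

    export EKSA_GIT_PRIVATE_KEY=~/.ssh/my_eksa_git_key
    export EKSA_GIT_KNOWN_HOSTS=~/my_eksa_known_hosts
    # Only needed if the private key is passphrase protected
    export EKSA_GIT_SSH_KEY_PASSPHRASE='my-passphrase'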

This is a generic template with detailed descriptions below for reference:

apiVersion: anywhere.eks.amazonaws.com/v1alpha1
kind: Cluster
metadata:
  name: my-cluster-name
  namespace: default
spec:
  ...
  #GitOps Support
  gitOpsRef:
    name: my-git-flux-provider
    kind: FluxConfig
---
apiVersion: anywhere.eks.amazonaws.com/v1alpha1
kind: FluxConfig
metadata:
  name: my-git-flux-provider
  namespace: default
spec:
  systemNamespace: "my-alternative-flux-system-namespace"
  clusterConfigPath: "path-to-my-clusters-config"
  branch: "main"
  git:
    repositoryUrl: ssh://git@github.com/myAccount/myClusterGitopsRepo.git
    sshKeyAlgorithm: ecdsa
---

git Configuration Spec Details

repositoryUrl (required)

  • Description: The URL of an existing repository where EKS Anywhere will store your cluster configuration and sync it to the cluster. For private repositories, the SSH URL will be of the format ssh://git@provider.com/$REPO_OWNER/$REPO_NAME.git
  • Type: string
  • Value: A common repositoryUrl value can be of the format ssh://git@provider.com/$REPO_OWNER/$REPO_NAME.git. This may differ from the default SSH URL given by your provider. Consider these differences between github and CodeCommit URLs:
    • The github.com user interface provides an SSH URL containing a : before the repository owner, rather than a /. Make sure to replace this : with a /, if present, as shown in the example after this list.
    • The CodeCommit SSH URL must include the SSH key ID, in the format ssh://<SSH-Key-ID>@git-codecommit.<region>.amazonaws.com/v1/repos/<repository>.
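
For example, a GitHub SSH URL copied from the web interface would be rewritten as follows; myAccount and myClusterGitopsRepo are placeholders:

    # SSH URL as shown in the github.com UI (colon before the owner)
    git@github.com:myAccount/myClusterGitopsRepo.git

    # repositoryUrl value expected by EKS Anywhere (colon replaced with a slash)
    ssh://git@github.com/myAccount/myClusterGitopsRepo.git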

sshKeyAlgorithm (optional)

  • Description: The SSH key algorithm of the private key specified via EKSA_GIT_PRIVATE_KEY. Defaults to ecdsa
  • Type: string

Supported SSH key algorithm types are ecdsa, rsa, and ed25519.

Be sure that this SSH key algorithm matches the private key file provided via EKSA_GIT_PRIVATE_KEY and that the known hosts entry for the key type is present in EKSA_GIT_KNOWN_HOSTS.

GitOps Configuration

Please note that for the GitOps config to work successfully, the environment variable EKSA_GITHUB_TOKEN needs to be set with a valid GitHub personal access token (PAT). This is a generic template with detailed descriptions below for reference:

apiVersion: anywhere.eks.amazonaws.com/v1alpha1
kind: Cluster
metadata:
  name: my-cluster-name
  namespace: default
spec:
  ...
  #GitOps Support
  gitOpsRef:
    name: my-gitops
    kind: GitOpsConfig
---
apiVersion: anywhere.eks.amazonaws.com/v1alpha1
kind: GitOpsConfig
metadata:
  name: my-gitops
  namespace: default
spec:
  flux:
    github:
      personal: true
      repository: myClusterGitopsRepo
      owner: myGithubUsername
      fluxSystemNamespace: ""
      clusterConfigPath: ""

GitOps Configuration Spec Details

flux (required)

  • Description: Flux is our supported GitOps provider; this is the only supported value.
  • Type: object

Flux Configuration Spec Details

github (required)

  • Description: github is the only currently supported git provider. This defines your github configuration to be used by EKS Anywhere and flux.
  • Type: object

github Configuration Spec Details

repository (required)

  • Description: The name of the repository where EKS Anywhere will store your cluster configuration, and sync it to the cluster. If the repository exists, we will clone it from the git provider; if it does not exist, we will create it for you.
  • Type: string

owner (required)

  • Description: The owner of the Github repository; either a Github username or Github organization name. The Personal Access Token used must belong to the owner if this is a personal repository, or have permissions over the organization if this is not a personal repository.
  • Type: string

personal (optional)

  • Description: Is the repository a personal or organization repository? If personal, this value is true; otherwise, false. If using an organizational repository (i.e., personal is false), the owner field will be used as the organization when authenticating to github.com
  • Default: true
  • Type: boolean

clusterConfigPath (optional)

  • Description: The path relative to the root of the git repository where EKS Anywhere will store the cluster configuration files.
  • Default: clusters/$MANAGEMENT_CLUSTER_NAME
  • Type: string

fluxSystemNamespace (optional)

  • Description: Namespace in which to install the gitops components in your cluster.
  • Default: flux-system.
  • Type: string

branch (optional)

  • Description: The branch to use when committing the configuration.
  • Default: main
  • Type: string

12.15 - Package controller

EKS Anywhere cluster yaml specification for package controller configuration

Package Controller Configuration (optional)

You can specify EKS Anywhere package controller configurations. For more on the curated packages and the package controller, visit the package management documentation.

The following cluster spec shows an example of how to configure the package controller:

apiVersion: anywhere.eks.amazonaws.com/v1alpha1
kind: Cluster
metadata:
  name: my-cluster-name
spec:
  ...
  packages:
    disable: false
    controller:
      resources:
        requests:
          cpu: 100m
          memory: 50Mi
        limits:
          cpu: 750m
          memory: 450Mi
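
After the cluster is created or upgraded with this configuration, you can confirm the controller picked up the resource settings by inspecting its deployment. This sketch assumes the controller runs in the eksa-packages namespace, which is where EKS Anywhere typically installs it:

    kubectl get deployment -n eksa-packages -o jsonpath='{.items[*].spec.template.spec.containers[*].resources}'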


Package Controller Configuration Spec Details

packages (optional)

  • Description: Top-level key; required for any package controller configuration.
  • Type: object

packages.disable (optional)

  • Description: Disable the package controller.
  • Type: bool
  • Example: disable: true

packages.controller (optional)

  • Description: Package controller configuration.
  • Type: object

packages.controller.resources (optional)

  • Description: Resources for the package controller.
  • Type: object

packages.controller.resources.limits (optional)

  • Description: Resource limits.
  • Type: object

packages.controller.resources.limits.cpu (optional)

  • Description: CPU limit.
  • Type: string

packages.controller.resources.limits.memory (optional)

  • Description: Memory limit.
  • Type: string

packages.controller.resources.requests (optional)

  • Description: Requested resources.
  • Type: object

packages.controller.resources.requests.cpu (optional)

  • Description: Requested cpu.
  • Type: string

packages.controller.resources.requests.memory (optional)

  • Description: Requested memory.
  • Type: string

packages.cronjob (optional)

  • Description: Cron job configuration for the package controller.
  • Type: object

packages.cronjob.disable (optional)

  • Description: Disable the cron job.
  • Type: bool
  • Example: disable: true

packages.cronjob.resources (optional)

  • Description: Resources for the cron job.
  • Type: object

packages.cronjob.resources.limits (optional)

  • Description: Resource limits.
  • Type: object

packages.cronjob.resources.limits.cpu (optional)

  • Description: CPU limit.
  • Type: string

packages.cronjob.resources.limits.memory (optional)

  • Description: Memory limit.
  • Type: string

packages.cronjob.resources.requests (optional)

  • Description: Requested resources.
  • Type: object

packages.cronjob.resources.requests.cpu (optional)

  • Description: Requested cpu.
  • Type: string

packages.cronjob.resources.requests.memory (optional)

  • Description: Requested memory.
  • Type: string

12.16 - API Server Extra Args

EKS Anywhere cluster yaml specification for Kubernetes API Server Extra Args reference

API Server Extra Args support (optional)

As of EKS Anywhere version v0.20.0, you can pass additional flags to configure the Kubernetes API server in your EKS Anywhere clusters.

Provider support details

             vSphere   Bare Metal   Nutanix   CloudStack   Snow
Supported?      ✓           ✓           ✓          ✓          ✓

To configure a cluster with API Server extra args, update the cluster configuration file to include the details below. The feature flag API_SERVER_EXTRA_ARGS_ENABLED=true must also be set as an environment variable.

This is a generic template with some example API Server extra args configuration below for reference:

apiVersion: anywhere.eks.amazonaws.com/v1alpha1
kind: Cluster
metadata:
  name: my-cluster-name
spec:
  ...
  controlPlaneConfiguration:
    apiServerExtraArgs:
      ...
      disable-admission-plugins: "DefaultStorageClass,DefaultTolerationSeconds"
      enable-admission-plugins: "NamespaceAutoProvision,NamespaceExists"

The above example configures the disable-admission-plugins and enable-admission-plugins options of the API Server to enable additional admission plugins or disable some of the default ones. You can configure any of the API Server options using the above template.
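
As an example, the feature flag and the cluster create command might look like the following, assuming the spec above is saved as my-cluster.yaml:

    export API_SERVER_EXTRA_ARGS_ENABLED=true
    eksctl anywhere create cluster -f my-cluster.yaml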

controlPlaneConfiguration.apiServerExtraArgs (optional)

Reference the Kubernetes documentation for the list of flags that can be configured for the Kubernetes API server in EKS Anywhere.

13 -

clusterNetwork (required)

Network configuration.

clusterNetwork.cniConfig (required)

CNI plugin configuration. Supports cilium.

clusterNetwork.cniConfig.cilium.policyEnforcementMode (optional)

Optionally specify a policyEnforcementMode of default, always or never.

clusterNetwork.cniConfig.cilium.egressMasqueradeInterfaces (optional)

Optionally specify a network interface name or interface prefix used for masquerading. See EgressMasqueradeInterfaces option.

clusterNetwork.cniConfig.cilium.skipUpgrade (optional)

When true, skip Cilium maintenance during upgrades. Also see Use a custom CNI.

clusterNetwork.cniConfig.cilium.routingMode (optional)

Optionally specify the routing mode. Accepts default and direct. Also see RoutingMode option.

clusterNetwork.cniConfig.cilium.ipv4NativeRoutingCIDR (optional)

Optionally specify the CIDR to use when RoutingMode is set to direct. When specified, Cilium assumes networking for this CIDR is preconfigured and hands traffic destined for that range to the Linux network stack without applying any SNAT.

clusterNetwork.cniConfig.cilium.ipv6NativeRoutingCIDR (optional)

Optionally specify the IPv6 CIDR to use when RoutingMode is set to direct. When specified, Cilium assumes networking for this CIDR is preconfigured and hands traffic destined for that range to the Linux network stack without applying any SNAT.

clusterNetwork.pods.cidrBlocks[0] (required)

The pod subnet specified in CIDR notation. Only 1 pod CIDR block is permitted. The CIDR block should not conflict with the host or service network ranges.

clusterNetwork.services.cidrBlocks[0] (required)

The service subnet specified in CIDR notation. Only 1 service CIDR block is permitted. This CIDR block should not conflict with the host or pod network ranges.

clusterNetwork.dns.resolvConf.path (optional)

File path to a file containing a custom DNS resolver configuration.
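
To illustrate how these fields fit together, here is a minimal clusterNetwork snippet. The CIDR ranges and the resolv.conf path are placeholder values, not required defaults:

    clusterNetwork:
      cniConfig:
        cilium: {}
      pods:
        cidrBlocks:
          - 192.168.0.0/16
      services:
        cidrBlocks:
          - 10.96.0.0/12
      dns:
        resolvConf:
          path: /etc/my-custom-resolv.conf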