Operating System management

Managing node operating systems in EKS Anywhere clusters.

1 - Overview

Overview of operating system management for nodes in EKS Anywhere clusters.

Bottlerocket, Ubuntu, and Red Hat Enterprise Linux (RHEL) can be used as operating systems for nodes in EKS Anywhere clusters. You can only use a single operating system per cluster. Bottlerocket is the only operating system distributed and fully supported by AWS. If you use Ubuntu or RHEL, you must build the operating system images and configure EKS Anywhere to use those images when installing or updating clusters. AWS will assist with troubleshooting and configuration guidance for Ubuntu and RHEL as part of EKS Anywhere Enterprise Subscriptions. For official support for the Ubuntu and RHEL operating systems themselves, you must purchase support through their respective vendors.

Reference the table below for the operating systems supported per deployment option for the latest version of EKS Anywhere. See Admin machine for supported operating systems.

OS             vSphere   Bare metal   Snow   CloudStack   Nutanix
Bottlerocket   Yes       Yes          No     No           No
Ubuntu         Yes       Yes          Yes    No           Yes
RHEL           Yes       Yes          No     Yes          Yes

OS             Supported Versions
Bottlerocket   1.19.x
Ubuntu         20.04.x, 22.04.x
RHEL           8.x, 9.x*

*Nutanix and CloudStack only

With the vSphere, bare metal, Snow, CloudStack and Nutanix deployment options, EKS Anywhere provisions the operating system when new machines are deployed during cluster creation, upgrade, and scaling operations. You can configure the operating system to use through the EKS Anywhere cluster spec, which varies by deployment option. See the deployment option sections below for an overview of how the operating system configuration works per deployment option.

vSphere

To configure the operating system to use for EKS Anywhere clusters on vSphere, use the VSphereMachineConfig spec.template field. The template name corresponds to the template you imported into your vSphere environment. See the Customize OVAs and Import OVAs documentation pages for more information. Changing the template after cluster creation will result in the deployment of new machines.
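
For illustration, here is a minimal sketch of the relevant VSphereMachineConfig fragment; the metadata name and template path are hypothetical examples:

apiVersion: anywhere.eks.amazonaws.com/v1alpha1
kind: VSphereMachineConfig
metadata:
  name: my-cluster-cp                                           # hypothetical name
spec:
  template: "/Datacenter/vm/Templates/ubuntu-2204-kube-v1-30"   # template imported into vSphere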

Bare metal

To configure the operating system to use for EKS Anywhere clusters on bare metal, use the TinkerbellDatacenterConfig spec.osImageURL field. This field can be used to stream the operating system from a custom location and is required to use Ubuntu or RHEL. You cannot change the osImageURL after creating your cluster. To upgrade the operating system, you must replace the image at the existing osImageURL location with a new image. Operating system changes are only rolled out when new machines are deployed; at this time, only Kubernetes version upgrades trigger such a deployment.
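
For illustration, here is a minimal sketch of the relevant TinkerbellDatacenterConfig fragment; the metadata name and image URL are hypothetical examples:

apiVersion: anywhere.eks.amazonaws.com/v1alpha1
kind: TinkerbellDatacenterConfig
metadata:
  name: my-cluster                                              # hypothetical name
spec:
  osImageURL: "http://my-artifact-server:8080/ubuntu-v1.30.0-eks-a-amd64.gz"   # hypothetical image location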

Snow

To configure the operating system to use for EKS Anywhere clusters on Snow, use the SnowMachineConfig spec.osFamily field. At this time, only Ubuntu is supported for use with EKS Anywhere clusters on Snow. You can customize the instance image with the SnowMachineConfig spec.amiID field and the instance type with the SnowMachineConfig spec.instanceType field. Changes to these fields after cluster creation will result in the deployment of new machines.
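
For illustration, here is a minimal sketch of the relevant SnowMachineConfig fragment; the metadata name, AMI ID, and instance type are hypothetical examples:

apiVersion: anywhere.eks.amazonaws.com/v1alpha1
kind: SnowMachineConfig
metadata:
  name: my-cluster-machines        # hypothetical name
spec:
  osFamily: ubuntu
  amiID: ami-0123456789abcdef0     # hypothetical AMI ID
  instanceType: sbe-c.xlarge       # hypothetical instance type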

CloudStack

To configure the operating system to use for EKS Anywhere clusters on CloudStack, use the CloudStackMachineConfig spec.template.name field. At this time, only RHEL is supported for use with EKS Anywhere clusters on CloudStack. Changing the template name field after cluster creation will result in the deployment of new machines.
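
For illustration, here is a minimal sketch of the relevant CloudStackMachineConfig fragment; the metadata name and template name are hypothetical examples:

apiVersion: anywhere.eks.amazonaws.com/v1alpha1
kind: CloudStackMachineConfig
metadata:
  name: my-cluster-machines        # hypothetical name
spec:
  template:
    name: "rhel-8-kube-v1-30"      # hypothetical template name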

Nutanix

To configure the operating system to use for EKS Anywhere clusters on Nutanix, use the NutanixMachineConfig spec.image.name or spec.image.uuid field. At this time, only Ubuntu and RHEL are supported for use with EKS Anywhere clusters on Nutanix. Changing the image name or uuid after cluster creation will result in the deployment of new machines.
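
For illustration, here is a minimal sketch of the relevant NutanixMachineConfig fragment; the metadata name and image name are hypothetical examples:

apiVersion: anywhere.eks.amazonaws.com/v1alpha1
kind: NutanixMachineConfig
metadata:
  name: my-cluster-machines          # hypothetical name
spec:
  image:
    type: name                       # set to "uuid" and use a uuid field to reference by UUID
    name: "ubuntu-2204-kube-v1-30"   # hypothetical image name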

2 - Artifacts

Artifacts associated with this release: OVAs and images.

EKS Anywhere supports three different node operating systems:

  • Bottlerocket: For vSphere and Bare Metal providers
  • Ubuntu: For vSphere, Bare Metal, Nutanix, and Snow providers
  • Red Hat Enterprise Linux (RHEL): For vSphere, CloudStack, Nutanix, and Bare Metal providers

Bottlerocket OVAs and images are distributed by the EKS Anywhere project. To build your own Ubuntu-based or RHEL-based EKS Anywhere node images, see Building node images.

Prerequisites

Several code snippets on this page use curl and yq commands. Refer to the Tools section to learn how to install them.

Bare Metal artifacts

Artifacts for EKS Anywhere Bare Metal clusters are listed below. If you like, you can download these images and serve them locally to speed up cluster creation. See descriptions of the osImageURL and hookImagesURLPath fields for details.

Ubuntu or RHEL OS images for Bare Metal

EKS Anywhere does not distribute Ubuntu or RHEL OS images. However, see Building node images for information on how to build EKS Anywhere images from those Linux distributions. Note: if you use your Admin machine to build images, you must review the DHCP integration provided by libvirtd and ensure it is disabled. If libvirtd DHCP is enabled, the Boots container will detect a port conflict and terminate.

Bottlerocket OS images for Bare Metal

Bottlerocket vends its Baremetal variant images using a secure distribution tool called tuftool. Please refer to Download Bottlerocket node images for instructions on downloading Bottlerocket Baremetal images. You can also get the download URIs for EKS Anywhere-vended Bottlerocket Baremetal images from the bundle release by running the following commands:

Using the latest EKS Anywhere version

EKSA_RELEASE_VERSION=$(curl -sL https://anywhere-assets.eks.amazonaws.com/releases/eks-a/manifest.yaml | yq ".spec.latestVersion")

OR

Using a specific EKS Anywhere version

EKSA_RELEASE_VERSION=<EKS-A version>
KUBEVERSION=1.30 # Replace this with the Kubernetes version you wish to use

BUNDLE_MANIFEST_URL=$(curl -sL https://anywhere-assets.eks.amazonaws.com/releases/eks-a/manifest.yaml | yq ".spec.releases[] | select(.version==\"$EKSA_RELEASE_VERSION\").bundleManifestUrl")
curl -s $BUNDLE_MANIFEST_URL | yq ".spec.versionsBundles[] | select(.kubeVersion==\"$KUBEVERSION\").eksD.raw.bottlerocket.uri"

HookOS (kernel and initial ramdisk) for Bare Metal

Using the latest EKS Anywhere version

EKSA_RELEASE_VERSION=$(curl -sL https://anywhere-assets.eks.amazonaws.com/releases/eks-a/manifest.yaml | yq ".spec.latestVersion")

OR

Using a specific EKS Anywhere version

EKSA_RELEASE_VERSION=<EKS-A version>

kernel:

BUNDLE_MANIFEST_URL=$(curl -sL https://anywhere-assets.eks.amazonaws.com/releases/eks-a/manifest.yaml | yq ".spec.releases[] | select(.version==\"$EKSA_RELEASE_VERSION\").bundleManifestUrl")
curl -s $BUNDLE_MANIFEST_URL | yq ".spec.versionsBundles[0].tinkerbell.tinkerbellStack.hook.vmlinuz.amd.uri"

initial ramdisk:

BUNDLE_MANIFEST_URL=$(curl -sL https://anywhere-assets.eks.amazonaws.com/releases/eks-a/manifest.yaml | yq ".spec.releases[] | select(.version==\"$EKSA_RELEASE_VERSION\").bundleManifestUrl")
curl -s $BUNDLE_MANIFEST_URL | yq ".spec.versionsBundles[0].tinkerbell.tinkerbellStack.hook.initramfs.amd.uri"

vSphere artifacts

Bottlerocket OVAs

Bottlerocket vends its VMware variant OVAs using a secure distribution tool called tuftool. Please refer to Download Bottlerocket node images for instructions on downloading Bottlerocket OVAs. You can also get the download URIs for EKS Anywhere-vended Bottlerocket OVAs from the bundle release by running the following commands:

Using the latest EKS Anywhere version

EKSA_RELEASE_VERSION=$(curl -sL https://anywhere-assets.eks.amazonaws.com/releases/eks-a/manifest.yaml | yq ".spec.latestVersion")

OR

Using a specific EKS Anywhere version

EKSA_RELEASE_VERSION=<EKS-A version>
KUBEVERSION=1.30 # Replace this with the Kubernetes version you wish to use

BUNDLE_MANIFEST_URL=$(curl -sL https://anywhere-assets.eks.amazonaws.com/releases/eks-a/manifest.yaml | yq ".spec.releases[] | select(.version==\"$EKSA_RELEASE_VERSION\").bundleManifestUrl")
curl -s $BUNDLE_MANIFEST_URL | yq ".spec.versionsBundles[] | select(.kubeVersion==\"$KUBEVERSION\").eksD.ova.bottlerocket.uri"

Ubuntu or RHEL OVAs

EKS Anywhere no longer distributes Ubuntu or RHEL OVAs for use with EKS Anywhere clusters. Building your own Ubuntu or RHEL-based OVAs as described in Building node images is the only supported way to get that functionality.

OVA Template Tags

There are two categories of tags to be attached to the OVA templates in vCenter.

os: This category represents the OS corresponding to this template. The possible values for this tag are os:bottlerocket, os:redhat and os:ubuntu.

eksdRelease: This category represents the EKS Distro release corresponding to this template. The value for this tag can be obtained programmatically as follows.

Using the latest EKS Anywhere version

EKSA_RELEASE_VERSION=$(curl -sL https://anywhere-assets.eks.amazonaws.com/releases/eks-a/manifest.yaml | yq ".spec.latestVersion")

OR

Using a specific EKS Anywhere version

EKSA_RELEASE_VERSION=<EKS-A version>
KUBEVERSION=1.30 # Replace this with the Kubernetes version you wish to use

BUNDLE_MANIFEST_URL=$(curl -sL https://anywhere-assets.eks.amazonaws.com/releases/eks-a/manifest.yaml | yq ".spec.releases[] | select(.version==\"$EKSA_RELEASE_VERSION\").bundleManifestUrl")
curl -sL $BUNDLE_MANIFEST_URL | yq ".spec.versionsBundles[] | select(.kubeVersion==\"$KUBEVERSION\").eksD.name"

Download Bottlerocket node images

Bottlerocket vends its VMware variant OVAs and Baremetal variant images using a secure distribution tool called tuftool. Follow the instructions below to download Bottlerocket node images.

  1. Install Rust and Cargo
curl https://sh.rustup.rs -sSf | sh
  2. Install tuftool using Cargo
CARGO_NET_GIT_FETCH_WITH_CLI=true cargo install --force tuftool
  3. Download the root role that will be used by tuftool to download the Bottlerocket images
curl -O "https://cache.bottlerocket.aws/root.json"
sha512sum -c <<<"a3c58bc73999264f6f28f3ed9bfcb325a5be943a782852c7d53e803881968e0a4698bd54c2f125493f4669610a9da83a1787eb58a8303b2ee488fa2a3f7d802f  root.json"
  4. Export the desired Kubernetes version. EKS Anywhere currently supports 1.25, 1.26, 1.27, 1.28, and 1.29.
export KUBEVERSION="1.27"
  5. Programmatically retrieve the Bottlerocket version corresponding to this release of EKS-A and the Kubernetes version, and export it.

    Using the latest EKS Anywhere version

    EKSA_RELEASE_VERSION=$(curl -sL https://anywhere-assets.eks.amazonaws.com/releases/eks-a/manifest.yaml | yq ".spec.latestVersion")
    

    OR

    Using a specific EKS Anywhere version

    EKSA_RELEASE_VERSION=<EKS-A version>
    

    Set the Bottlerocket image format to the desired value (ova for the VMware variant or raw for the Baremetal variant)

    export BOTTLEROCKET_IMAGE_FORMAT="ova"
    
    BUNDLE_MANIFEST_URL=$(curl -sL https://anywhere-assets.eks.amazonaws.com/releases/eks-a/manifest.yaml | yq ".spec.releases[] | select(.version==\"$EKSA_RELEASE_VERSION\").bundleManifestUrl")
    BUILD_TOOLING_COMMIT=$(curl -s $BUNDLE_MANIFEST_URL | yq ".spec.versionsBundles[0].eksD.gitCommit")
    export BOTTLEROCKET_VERSION=$(curl -sL https://raw.githubusercontent.com/aws/eks-anywhere-build-tooling/$BUILD_TOOLING_COMMIT/projects/kubernetes-sigs/image-builder/BOTTLEROCKET_RELEASES | yq ".$(echo $KUBEVERSION | tr '.' '-').$BOTTLEROCKET_IMAGE_FORMAT-release-version")
    
  6. Download Bottlerocket node image

    a. To download the VMware variant Bottlerocket OVA:

    OVA="bottlerocket-vmware-k8s-${KUBEVERSION}-x86_64-${BOTTLEROCKET_VERSION}.ova"
    tuftool download ${TMPDIR:-/tmp/bottlerocket-ovas} --target-name "${OVA}" \
       --root ./root.json \
       --metadata-url "https://updates.bottlerocket.aws/2020-07-07/vmware-k8s-${KUBEVERSION}/x86_64/" \
       --targets-url "https://updates.bottlerocket.aws/targets/"
    

     The above command will download a Bottlerocket OVA. Please refer to Deploy an OVA Template to proceed with the downloaded OVA.

    b. To download the Baremetal variant Bottlerocket image:

    IMAGE="bottlerocket-metal-k8s-${KUBEVERSION}-x86_64-${BOTTLEROCKET_VERSION}.img.lz4"
    tuftool download ${TMPDIR:-/tmp/bottlerocket-metal} --target-name "${IMAGE}" \
       --root ./root.json \
       --metadata-url "https://updates.bottlerocket.aws/2020-07-07/metal-k8s-${KUBEVERSION}/x86_64/" \
       --targets-url "https://updates.bottlerocket.aws/targets/"
    

     The above command will download an lz4-compressed Bottlerocket image. Decompress and gzip the image with the following commands, then host it on a web server to use it with an EKS Anywhere Baremetal cluster.

    lz4 --decompress ${TMPDIR:-/tmp/bottlerocket-metal}/${IMAGE} ${TMPDIR:-/tmp/bottlerocket-metal}/bottlerocket.img
    gzip ${TMPDIR:-/tmp/bottlerocket-metal}/bottlerocket.img
    

Building node images

The image-builder CLI lets you build your own Ubuntu-based vSphere OVAs, Nutanix qcow2 images, RHEL-based qcow2 images, or Bare Metal gzip images to use in EKS Anywhere clusters. When you run image-builder, it will pull in all components needed to build images to be used as Kubernetes nodes in an EKS Anywhere cluster, including the latest operating system, Kubernetes control plane components, and EKS Distro security updates, bug fixes, and patches. When building an image using this tool, you get to choose:

  • Operating system type (for example, ubuntu, redhat) and version
  • Provider (vsphere, cloudstack, baremetal, ami, nutanix)
  • Release channel for EKS Distro (generally aligning with Kubernetes releases)
  • vSphere only: configuration file providing information needed to access your vSphere setup
  • CloudStack only: configuration file providing information needed to access your CloudStack setup
  • Snow AMI only: configuration file providing information needed to customize your Snow AMI build parameters
  • Nutanix only: configuration file providing information needed to access Nutanix Prism Central

Because image-builder creates images in the same way that the EKS Anywhere project does for its own testing, images built with that tool are supported.

The table below shows the support matrix for the hypervisor and OS combinations that image-builder supports.

OS       vSphere   Baremetal   CloudStack   Nutanix   Snow
Ubuntu   Yes       Yes         No           Yes       Yes
RHEL     Yes       Yes         Yes          Yes       No

Prerequisites

To use image-builder, you must meet the following prerequisites:

System requirements

image-builder has been tested on Ubuntu (20.04, 21.04, 22.04), RHEL 8, and Amazon Linux 2 machines. The following system requirements must be met by the machine on which image-builder is run:

  • AMD 64-bit architecture
  • 50 GB disk space
  • 2 vCPUs
  • 8 GB RAM
  • Baremetal only: Run on a bare metal machine with virtualization enabled

Network connectivity requirements

  • public.ecr.aws (to download container images from EKS Anywhere)
  • anywhere-assets.eks.amazonaws.com (to download the EKS Anywhere artifacts such as binaries, manifests and OS images)
  • distro.eks.amazonaws.com (to download EKS Distro binaries and manifests)
  • d2glxqk2uabbnd.cloudfront.net (to pull the EKS Anywhere and EKS Distro ECR container images)
  • api.ecr.us-west-2.amazonaws.com (for EKS Anywhere package authentication matching your region)
  • d5l0dvt14r5h8.cloudfront.net (for EKS Anywhere package ECR container)
  • github.com (to download binaries and tools required for image builds from GitHub releases)
  • objects.githubusercontent.com (to download binaries and tools required for image builds from GitHub releases)
  • raw.githubusercontent.com (to download binaries and tools required for image builds from GitHub releases)
  • releases.hashicorp.com (to download Packer binary for image builds)
  • galaxy.ansible.com (to download Ansible packages from Ansible Galaxy)
  • vSphere only: VMware vCenter endpoint
  • CloudStack only: Apache CloudStack endpoint
  • Nutanix only: Nutanix Prism Central endpoint
  • Red Hat only: dl.fedoraproject.org (to download RPMs and GPG keys for RHEL image builds)
  • Ubuntu only: cdimage.ubuntu.com (to download Ubuntu server ISOs for Ubuntu image builds)

vSphere requirements

image-builder uses the Hashicorp vsphere-iso Packer Builder for building vSphere OVAs.

Permissions

Configure a user with a role containing the following permissions.

The role can be configured programmatically with the govc command below, or configured in the vSphere UI using the table below as reference.

Note that no matter how the role is created, it must be assigned to the user or user group at the Global Permissions level.

Unfortunately there is no API for managing vSphere Global Permissions, so they must be set on the user via the UI under Administration > Access Control > Global Permissions.

To generate a role named EKSAImageBuilder with the required privileges via govc, run the following:

govc role.create "EKSAImageBuilder" $(curl https://raw.githubusercontent.com/aws/eks-anywhere/main/pkg/config/static/imageBuilderPrivs.json | jq .[] | tr '\n' ' ' | tr -d '"')

If creating a role with these privileges via the UI, refer to the table below.

Category UI Privilege Programmatic Privilege
Datastore Allocate space Datastore.AllocateSpace
Datastore Browse datastore Datastore.Browse
Datastore Low level file operations Datastore.FileManagement
Network Assign network Network.Assign
Resource Assign virtual machine to resource pool Resource.AssignVMToPool
vApp Export vApp.Export
VirtualMachine Configuration > Add new disk VirtualMachine.Config.AddNewDisk
VirtualMachine Configuration > Add or remove device VirtualMachine.Config.AddRemoveDevice
VirtualMachine Configuration > Advanced configuration VirtualMachine.Config.AdvancedConfiguration
VirtualMachine Configuration > Change CPU count VirtualMachine.Config.CPUCount
VirtualMachine Configuration > Change memory VirtualMachine.Config.Memory
VirtualMachine Configuration > Change settings VirtualMachine.Config.Settings
VirtualMachine Configuration > Change Resource VirtualMachine.Config.Resource
VirtualMachine Configuration > Set annotation VirtualMachine.Config.Annotation
VirtualMachine Edit Inventory > Create from existing VirtualMachine.Inventory.CreateFromExisting
VirtualMachine Edit Inventory > Create new VirtualMachine.Inventory.Create
VirtualMachine Edit Inventory > Remove VirtualMachine.Inventory.Delete
VirtualMachine Interaction > Configure CD media VirtualMachine.Interact.SetCDMedia
VirtualMachine Interaction > Configure floppy media VirtualMachine.Interact.SetFloppyMedia
VirtualMachine Interaction > Connect devices VirtualMachine.Interact.DeviceConnection
VirtualMachine Interaction > Inject USB HID scan codes VirtualMachine.Interact.PutUsbScanCodes
VirtualMachine Interaction > Power off VirtualMachine.Interact.PowerOff
VirtualMachine Interaction > Power on VirtualMachine.Interact.PowerOn
VirtualMachine Interaction > Create template from virtual machine VirtualMachine.Provisioning.CreateTemplateFromVM
VirtualMachine Interaction > Mark as template VirtualMachine.Provisioning.MarkAsTemplate
VirtualMachine Interaction > Mark as virtual machine VirtualMachine.Provisioning.MarkAsVM
VirtualMachine State > Create snapshot VirtualMachine.State.CreateSnapshot

CloudStack requirements

Refer to the CloudStack Permissions for CAPC doc for required CloudStack user permissions.

Snow AMI requirements

Packer will require prior authentication with your AWS account to launch EC2 instances for the Snow AMI build. Refer to the Authentication guide for Amazon EBS Packer builder for possible modes of authentication. We recommend that you run image-builder on a pre-existing Ubuntu EC2 instance and use an IAM instance role with the required permissions.

Nutanix permissions

Prism Central Administrator permissions are required to build a Nutanix image using image-builder.

Downloading the image-builder CLI

You will need to download the image-builder CLI corresponding to the version of EKS Anywhere you are using. The image-builder CLI can be downloaded using the commands provided below:

Using the latest EKS Anywhere version

EKSA_RELEASE_VERSION=$(curl -sL https://anywhere-assets.eks.amazonaws.com/releases/eks-a/manifest.yaml | yq ".spec.latestVersion")

OR

Using a specific EKS Anywhere version

EKSA_RELEASE_VERSION=<EKS-A version>
cd /tmp
BUNDLE_MANIFEST_URL=$(curl -s https://anywhere-assets.eks.amazonaws.com/releases/eks-a/manifest.yaml | yq ".spec.releases[] | select(.version==\"$EKSA_RELEASE_VERSION\").bundleManifestUrl")
IMAGEBUILDER_TARBALL_URI=$(curl -s $BUNDLE_MANIFEST_URL | yq ".spec.versionsBundles[0].eksD.imagebuilder.uri")
curl -s $IMAGEBUILDER_TARBALL_URI | tar xz ./image-builder
sudo install -m 0755 ./image-builder /usr/local/bin/image-builder   
cd -

Build vSphere OVA node images

These steps use image-builder to create an Ubuntu-based or RHEL-based image for vSphere. Before proceeding, ensure that the above system-level, network-level and vSphere-specific prerequisites have been met.

  1. Create a Linux user for running image-builder.

    sudo adduser image-builder
    

    Follow the prompt to provide a password for the image-builder user.

  2. Add the image-builder user to the sudo group and switch to the image-builder user, providing the password from the previous step when prompted.

    sudo usermod -aG sudo image-builder
    su image-builder
    cd /home/$USER
    
  3. Install packages and prepare environment:

     On Ubuntu:

       sudo apt update -y
       sudo apt install jq unzip make -y
       sudo snap install yq
       mkdir -p /home/$USER/.ssh
       echo "HostKeyAlgorithms +ssh-rsa" >> /home/$USER/.ssh/config
       echo "PubkeyAcceptedKeyTypes +ssh-rsa" >> /home/$USER/.ssh/config
       sudo chmod 600 /home/$USER/.ssh/config
       
     On RHEL 8:

       sudo dnf update -y
       sudo dnf install jq unzip make wget -y
       sudo wget -qO /usr/local/bin/yq https://github.com/mikefarah/yq/releases/latest/download/yq_linux_amd64
       mkdir -p /home/$USER/.ssh
       echo "HostKeyAlgorithms +ssh-rsa" >> /home/$USER/.ssh/config
       echo "PubkeyAcceptedKeyTypes +ssh-rsa" >> /home/$USER/.ssh/config
       sudo chmod 600 /home/$USER/.ssh/config
       
     On Amazon Linux 2:

       sudo yum update -y
       sudo yum install jq unzip make wget -y
       sudo wget -qO /usr/local/bin/yq https://github.com/mikefarah/yq/releases/latest/download/yq_linux_amd64
       mkdir -p /home/$USER/.ssh
       echo "HostKeyAlgorithms +ssh-rsa" >> /home/$USER/.ssh/config
       echo "PubkeyAcceptedKeyTypes +ssh-rsa" >> /home/$USER/.ssh/config
       sudo chmod 600 /home/$USER/.ssh/config
       

    • Starting with image-builder version v0.3.0, the minimum required Python version is Python 3.9. However, many Linux distros ship only up to Python 3.8, so you will need to install Python 3.9 from external sources. Refer to the pyenv installation and usage documentation to install Python 3.9 and make it the default Python version.
    • Once you have Python 3.9, you can install Ansible using pip.
      python3 -m pip install --user ansible
      
  4. Get the latest version of govc:

    curl -L -o - "https://github.com/vmware/govmomi/releases/latest/download/govc_$(uname -s)_$(uname -m).tar.gz" | sudo tar -C /usr/local/bin -xvzf - govc
    
  5. Create a vSphere configuration file (for example, vsphere.json):

    {
      "cluster": "",
      "convert_to_template": "",
      "create_snapshot": "",
      "datacenter": "",
      "datastore": "",
      "folder": "",
      "insecure_connection": "",
      "linked_clone": "",
      "network": "",
      "password": "",
      "resource_pool": "",
      "username": "",
      "vcenter_server": "",
    }
    
    cluster

    The vSphere cluster where the virtual machine is created.

    convert_to_template

    Convert VM to a template.

    create_snapshot

    Create a snapshot so the VM can be used as a base for linked clones.

    datacenter

    The vSphere datacenter name. Required if there is more than one datacenter in the vSphere inventory.

    datastore

    The vSphere datastore where the virtual machine is created.

    folder

     The VM folder where the virtual machine is created.

    insecure_connection

    Do not validate the vCenter Server TLS certificate.

    linked_clone

     Create the virtual machine as a linked clone from the latest snapshot.

    network

     The network to which the VM will be connected.

    password

    The password used to connect to vSphere.

    resource_pool

    The vSphere resource pool where the virtual machine is created. If this is not specified, the root resource pool associated with the cluster is used.

    username

    The username used to connect to vSphere.

    vcenter_server

    The vCenter Server hostname.

    For RHEL images, add the following fields:

    {
      "iso_url": "<https://endpoint to RHEL ISO endpoint or path to file>",
      "iso_checksum": "<for example: ea5f349d492fed819e5086d351de47261c470fc794f7124805d176d69ddf1fcd>",
      "iso_checksum_type": "<for example: sha256>",
      "rhel_username": "<RHSM username>",
      "rhel_password": "<RHSM password>"
    }
    
  6. Create an Ubuntu or Red Hat image:

    Ubuntu

    To create an Ubuntu-based image, run image-builder with the following options:

    • --os: ubuntu
    • --os-version: 20.04 or 22.04 (default: 20.04)
    • --hypervisor: For vSphere use vsphere
    • --release-channel: Supported EKS Distro releases include 1-25, 1-26, 1-27, 1-28 and 1-29.
    • --vsphere-config: vSphere configuration file (vsphere.json in this example)
    image-builder build --os ubuntu --hypervisor vsphere --release-channel 1-29 --vsphere-config vsphere.json
    

    Red Hat Enterprise Linux

    To create a RHEL-based image, run image-builder with the following options:

    • --os: redhat
    • --os-version: 8 (default: 8)
    • --hypervisor: For vSphere use vsphere
    • --release-channel: Supported EKS Distro releases include 1-25, 1-26, 1-27, 1-28 and 1-29.
    • --vsphere-config: vSphere configuration file (vsphere.json in this example)
    image-builder build --os redhat --hypervisor vsphere --release-channel 1-29 --vsphere-config vsphere.json
    

Build Bare Metal node images

These steps use image-builder to create an Ubuntu-based or RHEL-based image for Bare Metal. Before proceeding, ensure that the above system-level, network-level and baremetal-specific prerequisites have been met.

  1. Create a Linux user for running image-builder.

    sudo adduser image-builder
    

    Follow the prompt to provide a password for the image-builder user.

  2. Add the image-builder user to the sudo group and switch to the image-builder user, providing the password from the previous step when prompted.

    sudo usermod -aG sudo image-builder
    su image-builder
    cd /home/$USER
    
  3. Install packages and prepare environment:

     On Ubuntu:

       sudo apt update -y
       sudo apt install jq make qemu-kvm libvirt-daemon-system libvirt-clients virtinst cpu-checker libguestfs-tools libosinfo-bin unzip -y
       sudo snap install yq
       sudo usermod -a -G kvm $USER
       sudo chmod 666 /dev/kvm
       sudo chown root:kvm /dev/kvm
       mkdir -p /home/$USER/.ssh
       echo "HostKeyAlgorithms +ssh-rsa" >> /home/$USER/.ssh/config
       echo "PubkeyAcceptedKeyTypes +ssh-rsa" >> /home/$USER/.ssh/config
       sudo chmod 600 /home/$USER/.ssh/config
       
     On RHEL 8:

       sudo dnf update -y
       sudo dnf install jq make qemu-kvm libvirt virtinst cpu-checker libguestfs-tools libosinfo unzip wget -y
       sudo wget -qO /usr/local/bin/yq https://github.com/mikefarah/yq/releases/latest/download/yq_linux_amd64
       sudo usermod -a -G kvm $USER
       sudo chmod 666 /dev/kvm
       sudo chown root:kvm /dev/kvm
       mkdir -p /home/$USER/.ssh
       echo "HostKeyAlgorithms +ssh-rsa" >> /home/$USER/.ssh/config
       echo "PubkeyAcceptedKeyTypes +ssh-rsa" >> /home/$USER/.ssh/config
       sudo chmod 600 /home/$USER/.ssh/config
       
     On Amazon Linux 2:

       sudo yum update -y
       sudo yum install jq make qemu-kvm libvirt libvirt-clients libguestfs-tools unzip wget -y
       sudo wget -qO /usr/local/bin/yq https://github.com/mikefarah/yq/releases/latest/download/yq_linux_amd64
       sudo usermod -a -G kvm $USER
       sudo chmod 666 /dev/kvm
       sudo chown root:kvm /dev/kvm
       mkdir -p /home/$USER/.ssh
       echo "HostKeyAlgorithms +ssh-rsa" >> /home/$USER/.ssh/config
       echo "PubkeyAcceptedKeyTypes +ssh-rsa" >> /home/$USER/.ssh/config
       sudo chmod 600 /home/$USER/.ssh/config
       

    • Starting with image-builder version v0.3.0, the minimum required Python version is Python 3.9. However, many Linux distros ship only up to Python 3.8, so you will need to install Python 3.9 from external sources. Refer to the pyenv installation and usage documentation to install Python 3.9 and make it the default Python version.
    • Once you have Python 3.9, you can install Ansible using pip.
      python3 -m pip install --user ansible
      
  4. Create an Ubuntu or Red Hat image:

    Ubuntu

    To create an Ubuntu-based image, run image-builder with the following options:

    • --os: ubuntu
    • --os-version: 20.04 or 22.04 (default: 20.04)
    • --hypervisor: baremetal
    • --release-channel: Supported EKS Distro releases include 1-25, 1-26, 1-27, 1-28 and 1-29.
    • --baremetal-config: baremetal config file if using proxy
    image-builder build --os ubuntu --hypervisor baremetal --release-channel 1-29
    

    Red Hat Enterprise Linux (RHEL)

    RHEL images require a configuration file to identify the location of the RHEL 8 or RHEL 9 ISO image and Red Hat subscription information. The image-builder command will temporarily consume a Red Hat subscription that is removed once the image is built.

    {
      "iso_url": "<https://endpoint to RHEL ISO endpoint or path to file>",
      "iso_checksum": "<for example: ea5f349d492fed819e5086d351de47261c470fc794f7124805d176d69ddf1fcd>",
      "iso_checksum_type": "<for example: sha256>",
      "rhel_username": "<RHSM username>",
      "rhel_password": "<RHSM password>",
      "extra_rpms": "<space-separated list of RPM packages; useful for adding required drivers or other packages>"
    }
    

    Run the image-builder with the following options:

    • --os: redhat
    • --os-version: 8 or 9 (default: 8)
    • --hypervisor: baremetal
    • --release-channel: Supported EKS Distro releases include 1-25, 1-26, 1-27, 1-28 and 1-29.
    • --baremetal-config: Bare metal config file

     image-builder only supports building RHEL 9 raw images with EFI firmware. Refer to UEFI support to enable image builds with EFI firmware.

    image-builder build --os redhat --hypervisor baremetal --release-channel 1-29 --baremetal-config baremetal.json
    
  5. To consume the image, serve it from an accessible web server, then create the bare metal cluster spec, setting the osImageURL field to the URL of the image. For example:

    osImageURL: "http://<artifact host address>/my-ubuntu-v1.23.9-eks-a-17-amd64.gz"
    

    See descriptions of osImageURL for further information.

Build CloudStack node images

These steps use image-builder to create a RHEL-based image for CloudStack. Before proceeding, ensure that the above system-level, network-level and CloudStack-specific prerequisites have been met.

  1. Create a Linux user for running image-builder.

    sudo adduser image-builder
    

    Follow the prompt to provide a password for the image-builder user.

  2. Add the image-builder user to the sudo group and switch to the image-builder user, providing the password from the previous step when prompted.

    sudo usermod -aG sudo image-builder
    su image-builder
    cd /home/$USER
    
  3. Install packages and prepare environment:

     On Ubuntu:

       sudo apt update -y
       sudo apt install jq make qemu-kvm libvirt-daemon-system libvirt-clients virtinst cpu-checker libguestfs-tools libosinfo-bin unzip -y
       sudo snap install yq
       sudo usermod -a -G kvm $USER
       sudo chmod 666 /dev/kvm
       sudo chown root:kvm /dev/kvm
       mkdir -p /home/$USER/.ssh
       echo "HostKeyAlgorithms +ssh-rsa" >> /home/$USER/.ssh/config
       echo "PubkeyAcceptedKeyTypes +ssh-rsa" >> /home/$USER/.ssh/config
       sudo chmod 600 /home/$USER/.ssh/config
       
     On RHEL 8:

       sudo dnf update -y
       sudo dnf install jq make qemu-kvm libvirt virtinst cpu-checker libguestfs-tools libosinfo unzip wget -y
       sudo wget -qO /usr/local/bin/yq https://github.com/mikefarah/yq/releases/latest/download/yq_linux_amd64
       sudo usermod -a -G kvm $USER
       sudo chmod 666 /dev/kvm
       sudo chown root:kvm /dev/kvm
       mkdir -p /home/$USER/.ssh
       echo "HostKeyAlgorithms +ssh-rsa" >> /home/$USER/.ssh/config
       echo "PubkeyAcceptedKeyTypes +ssh-rsa" >> /home/$USER/.ssh/config
       sudo chmod 600 /home/$USER/.ssh/config
       
     On Amazon Linux 2:

       sudo yum update -y
       sudo yum install jq make qemu-kvm libvirt libvirt-clients libguestfs-tools unzip wget -y
       sudo wget -qO /usr/local/bin/yq https://github.com/mikefarah/yq/releases/latest/download/yq_linux_amd64
       sudo usermod -a -G kvm $USER
       sudo chmod 666 /dev/kvm
       sudo chown root:kvm /dev/kvm
       mkdir -p /home/$USER/.ssh
       echo "HostKeyAlgorithms +ssh-rsa" >> /home/$USER/.ssh/config
       echo "PubkeyAcceptedKeyTypes +ssh-rsa" >> /home/$USER/.ssh/config
       sudo chmod 600 /home/$USER/.ssh/config
       

    • Starting with image-builder version v0.3.0, the minimum required Python version is Python 3.9. However, many Linux distros ship only up to Python 3.8, so you will need to install Python 3.9 from external sources. Refer to the pyenv installation and usage documentation to install Python 3.9 and make it the default Python version.
    • Once you have Python 3.9, you can install Ansible using pip.
      python3 -m pip install --user ansible
      
  4. Create a CloudStack configuration file (for example, cloudstack.json) to provide the location of a Red Hat Enterprise Linux 8 ISO image and related checksum and Red Hat subscription information:

    {
      "iso_url": "<https://endpoint to RHEL ISO endpoint or path to file>",
      "iso_checksum": "<for example: ea5f349d492fed819e5086d351de47261c470fc794f7124805d176d69ddf1fcd>",
      "iso_checksum_type": "<for example: sha256>",
      "rhel_username": "<RHSM username>",
      "rhel_password": "<RHSM password>"
    }
    

    NOTE: To build the RHEL-based image, image-builder temporarily consumes a Red Hat subscription. That subscription is removed once the image is built.

  5. To create a RHEL-based image, run image-builder with the following options:

    • --os: redhat
    • --os-version: 8 (default: 8)
    • --hypervisor: For CloudStack use cloudstack
    • --release-channel: Supported EKS Distro releases include 1-25, 1-26, 1-27, 1-28 and 1-29.
    • --cloudstack-config: CloudStack configuration file (cloudstack.json in this example)
    image-builder build --os redhat --hypervisor cloudstack --release-channel 1-29 --cloudstack-config cloudstack.json
    
  6. To consume the resulting RHEL-based image, add it as a template to your CloudStack setup as described in Preparing CloudStack.

Build Snow node images

These steps use image-builder to create an Ubuntu-based Amazon Machine Image (AMI) that is backed by EBS volumes for Snow. Before proceeding, ensure that the above system-level, network-level and AMI-specific prerequisites have been met.

  1. Create a Linux user for running image-builder.

    sudo adduser image-builder
    

    Follow the prompt to provide a password for the image-builder user.

  2. Add the image-builder user to the sudo group and switch to the image-builder user, providing the password from the previous step when prompted.

    sudo usermod -aG sudo image-builder
    su image-builder
    cd /home/$USER
    
  3. Install packages and prepare environment:

     On Ubuntu:

       sudo apt update -y
       sudo apt install jq unzip make -y
       sudo snap install yq
       mkdir -p /home/$USER/.ssh
       echo "HostKeyAlgorithms +ssh-rsa" >> /home/$USER/.ssh/config
       echo "PubkeyAcceptedKeyTypes +ssh-rsa" >> /home/$USER/.ssh/config
       sudo chmod 600 /home/$USER/.ssh/config
       
     On RHEL 8:

       sudo dnf update -y
       sudo dnf install jq unzip make wget -y
       sudo wget -qO /usr/local/bin/yq https://github.com/mikefarah/yq/releases/latest/download/yq_linux_amd64
       mkdir -p /home/$USER/.ssh
       echo "HostKeyAlgorithms +ssh-rsa" >> /home/$USER/.ssh/config
       echo "PubkeyAcceptedKeyTypes +ssh-rsa" >> /home/$USER/.ssh/config
       sudo chmod 600 /home/$USER/.ssh/config
       
     On Amazon Linux 2:

       sudo yum update -y
       sudo yum install jq unzip make wget -y
       sudo wget -qO /usr/local/bin/yq https://github.com/mikefarah/yq/releases/latest/download/yq_linux_amd64
       mkdir -p /home/$USER/.ssh
       echo "HostKeyAlgorithms +ssh-rsa" >> /home/$USER/.ssh/config
       echo "PubkeyAcceptedKeyTypes +ssh-rsa" >> /home/$USER/.ssh/config
       sudo chmod 600 /home/$USER/.ssh/config
       

    • Starting with image-builder version v0.3.0, the minimum required Python version is Python 3.9. However, many Linux distros ship only up to Python 3.8, so you will need to install Python 3.9 from external sources. Refer to the pyenv installation and usage documentation to install Python 3.9 and make it the default Python version.
    • Once you have Python 3.9, you can install Ansible using pip.
      python3 -m pip install --user ansible
      
  4. Create an AMI configuration file (for example, ami.json) that contains various AMI parameters. For example:

    {
       "ami_filter_name": "ubuntu/images/*ubuntu-focal-20.04-amd64-server-*",
       "ami_filter_owners": "679593333241",
       "ami_regions": "us-east-2",
       "aws_region": "us-east-2",
       "ansible_extra_vars": "@/home/image-builder/eks-anywhere-build-tooling/projects/kubernetes-sigs/image-builder/packer/ami/ansible_extra_vars.yaml",
       "builder_instance_type": "t3.small",
       "custom_role_name_list" : ["/home/image-builder/eks-anywhere-build-tooling/projects/kubernetes-sigs/image-builder/ansible/roles/load_additional_files"],
       "manifest_output": "/home/image-builder/manifest.json",
       "root_device_name": "/dev/sda1",
       "volume_size": "25",
       "volume_type": "gp3",
    }
    
    ami_filter_name

    Regular expression to filter a source AMI. (default: ubuntu/images/*ubuntu-focal-20.04-amd64-server-*).

    ami_filter_owners

    AWS account ID or AWS owner alias such as ‘amazon’, ‘aws-marketplace’, etc. (default: 679593333241 - the AWS Marketplace AWS account ID).

    ami_regions

    A list of AWS regions to copy the AMI to. (default: us-west-2).

    aws_region

    The AWS region in which to launch the EC2 instance to create the AMI. (default: us-west-2).

    ansible_extra_vars

     The absolute path to the additional variables to pass to Ansible. These are converted to the --extra-vars command-line argument. This path must be prefixed with '@'. (default: @/home/image-builder/eks-anywhere-build-tooling/projects/kubernetes-sigs/image-builder/packer/ami/ansible_extra_vars.yaml)

    builder_instance_type

    The EC2 instance type to use while building the AMI. (default: t3.small).

    custom_role_name_list

    Array of strings representing the absolute paths of custom Ansible roles to run. This field is mutually exclusive with custom_role_names.

    custom_role_names

    Space-delimited string of the custom roles to run. This field is mutually exclusive with custom_role_name_list and is provided for compatibility with Ansible’s input format.

    manifest_output

    The absolute path to write the build artifacts manifest to. If you wish to export the AMI using this manifest, ensure that you provide a path that is not inside the /home/$USER/eks-anywhere-build-tooling path since that will be cleaned up when the build finishes. (default: /home/image-builder/manifest.json).

    root_device_name

    The device name used by EC2 for the root EBS volume attached to the instance. (default: /dev/sda1).

    subnet_id

    The ID of the subnet where Packer will launch the EC2 instance. This field is required when using a non-default VPC.

    volume_size

    The size of the root EBS volume in GiB. (default: 25).

    volume_type

    The type of root EBS volume, such as gp2, gp3, io1, etc. (default: gp3).

  5. To create an Ubuntu-based image, run image-builder with the following options:

    • --os: ubuntu
    • --os-version: 20.04 or 22.04 (default: 20.04)
    • --hypervisor: For AMI, use ami
    • --release-channel: Supported EKS Distro releases include 1-25, 1-26, 1-27, 1-28 and 1-29.
    • --ami-config: AMI configuration file (ami.json in this example)
    image-builder build --os ubuntu --hypervisor ami --release-channel 1-29 --ami-config ami.json
    
  6. After the build, the Ubuntu AMI will be available in your AWS account in the AWS region specified in your AMI configuration file. If you wish to export it as a raw image, you can achieve this using the AWS CLI.

    ARTIFACT_ID=$(cat <manifest output location> | jq -r '.builds[0].artifact_id')
    AMI_ID=$(echo $ARTIFACT_ID | cut -d: -f2)
    IMAGE_FORMAT=raw
    AMI_EXPORT_BUCKET_NAME=<S3 bucket to export the AMI to>
    AMI_EXPORT_PREFIX=<S3 prefix for the exported AMI object>
    EXPORT_RESPONSE=$(aws ec2 export-image --disk-image-format $IMAGE_FORMAT --s3-export-location S3Bucket=$AMI_EXPORT_BUCKET_NAME,S3Prefix=$AMI_EXPORT_PREFIX --image-id $AMI_ID)
    EXPORT_TASK_ID=$(echo $EXPORT_RESPONSE | jq -r '.ExportImageTaskId')
    

     The exported image will be available at the location s3://$AMI_EXPORT_BUCKET_NAME/$AMI_EXPORT_PREFIX/$EXPORT_TASK_ID.raw.

Build Nutanix node images

These steps use image-builder to create an Ubuntu-based image for Nutanix AHV and import it into the AOS Image Service. Before proceeding, ensure that the above system-level, network-level and Nutanix-specific prerequisites have been met.

  1. Download an Ubuntu cloud image or RHEL cloud image pertaining to your desired OS and OS version and upload it to the AOS Image Service using Prism. You will need to specify the image’s name in AOS as the source_image_name in the nutanix.json config file specified below. You can also skip this step and directly use the image_url field in the config file to provide the URL of a publicly accessible image as source.

  2. Create a Linux user for running image-builder.

    sudo adduser image-builder
    

    Follow the prompt to provide a password for the image-builder user.

  3. Add the image-builder user to the sudo group and switch to the image-builder user, providing the password from the previous step when prompted.

    sudo usermod -aG sudo image-builder
    su image-builder
    cd /home/$USER
    
  4. Install packages and prepare environment:

     On Ubuntu:

       sudo apt update -y
       sudo apt install jq unzip make -y
       sudo snap install yq
       mkdir -p /home/$USER/.ssh
       echo "HostKeyAlgorithms +ssh-rsa" >> /home/$USER/.ssh/config
       echo "PubkeyAcceptedKeyTypes +ssh-rsa" >> /home/$USER/.ssh/config
       sudo chmod 600 /home/$USER/.ssh/config
       
     On RHEL 8:

       sudo dnf update -y
       sudo dnf install jq unzip make wget -y
       sudo wget -qO /usr/local/bin/yq https://github.com/mikefarah/yq/releases/latest/download/yq_linux_amd64
       mkdir -p /home/$USER/.ssh
       echo "HostKeyAlgorithms +ssh-rsa" >> /home/$USER/.ssh/config
       echo "PubkeyAcceptedKeyTypes +ssh-rsa" >> /home/$USER/.ssh/config
       sudo chmod 600 /home/$USER/.ssh/config
       
     On Amazon Linux 2:

       sudo yum update -y
       sudo yum install jq unzip make wget -y
       sudo wget -qO /usr/local/bin/yq https://github.com/mikefarah/yq/releases/latest/download/yq_linux_amd64
       mkdir -p /home/$USER/.ssh
       echo "HostKeyAlgorithms +ssh-rsa" >> /home/$USER/.ssh/config
       echo "PubkeyAcceptedKeyTypes +ssh-rsa" >> /home/$USER/.ssh/config
       sudo chmod 600 /home/$USER/.ssh/config
       

    • Starting with image-builder version v0.3.0, the minimum required Python version is Python 3.9. However, many Linux distros ship only up to Python 3.8, so you will need to install Python 3.9 from external sources. Refer to the pyenv installation and usage documentation to install Python 3.9 and make it the default Python version.
    • Once you have Python 3.9, you can install Ansible using pip.
      python3 -m pip install --user ansible
      
  5. Create a nutanix.json config file. More details on values can be found in the image-builder documentation. See the example below:

    {
      "nutanix_cluster_name": "Name of PE Cluster",
      "source_image_name": "Name of Source Image",
      "image_name": "Name of Destination Image",
      "image_url": "URL where the source image is hosted",
      "image_export": "Exports the raw image to disk if set to true",
      "nutanix_subnet_name": "Name of Subnet",
      "nutanix_endpoint": "Prism Central IP / FQDN",
      "nutanix_insecure": "false",
      "nutanix_port": "9440",
      "nutanix_username": "PrismCentral_Username",
      "nutanix_password": "PrismCentral_Password"
    }
    

    For RHEL images, add the following fields:

    {
      "rhel_username": "<RHSM username>",
      "rhel_password": "<RHSM password>"
    }
    
  6. Create an Ubuntu or Red Hat image:

    Ubuntu

    To create an Ubuntu-based image, run image-builder with the following options:

    • --os: ubuntu
    • --os-version: 20.04 or 22.04 (default: 20.04)
    • --hypervisor: For Nutanix use nutanix
    • --release-channel: Supported EKS Distro releases include 1-25, 1-26, 1-27, 1-28 and 1-29.
    • --nutanix-config: Nutanix configuration file (nutanix.json in this example)
    image-builder build --os ubuntu --hypervisor nutanix --release-channel 1-29 --nutanix-config nutanix.json
    

    Red Hat Enterprise Linux

    To create a RHEL-based image, run image-builder with the following options:

    • --os: redhat
    • --os-version: 8 or 9 (default: 8)
    • --hypervisor: For Nutanix use nutanix
    • --release-channel: Supported EKS Distro releases include 1-25, 1-26, 1-27, 1-28 and 1-29.
    • --nutanix-config: Nutanix configuration file (nutanix.json in this example)
    image-builder build --os redhat --hypervisor nutanix --release-channel 1-29 --nutanix-config nutanix.json
    

Configuring OS version

image-builder supports an os-version option that allows you to configure which version of the OS you wish to build. If no OS version is supplied, it will build the default for that OS, according to the table below.

Operating system   Supported versions   Corresponding os-version value   Default os-version value   Hypervisors supported
Ubuntu             20.04.6              20.04                            20.04                      All hypervisors except CloudStack
                   22.04.3              22.04
RHEL               8.8                  8                                8                          All hypervisors except AMI
                   9.2                  9                                                           CloudStack and Nutanix only

Building images for a specific EKS Anywhere version

This section provides information about the relationship between image-builder and EKS Anywhere CLI version, and provides instructions on building images pertaining to a specific EKS Anywhere version.

Every release of EKS Anywhere includes a new version of the image-builder CLI. For EKS-A releases prior to v0.16.3, the corresponding image-builder CLI builds images for the latest version of EKS-A released thus far. The EKS-A version determines which artifacts are bundled into the final OS image, i.e., the core Kubernetes components vended by EKS Distro as well as several binaries vended by EKS Anywhere, such as crictl and etcdadm. Users may not always want the latest versions of these, and may instead wish to bake specific component versions into the image.

This was improved in the image-builder versions released with EKS-A v0.16.3 through v0.16.5. You can now override the default behavior of building for the latest release and build images corresponding to a specific EKS-A release, including previous releases. This can be achieved by setting the EKSA_RELEASE_VERSION environment variable to the desired EKS-A release (v0.16.0 and above). For example, if you want to build an image for EKS-A version v0.16.5, you can run the following command.

export EKSA_RELEASE_VERSION=v0.16.5
image-builder build --os <OS> --hypervisor <hypervisor> --release-channel <release channel> --<hypervisor>-config config.json

With image-builder versions v0.2.1 and above (released with EKS-A version v0.17.0), the image-builder CLI has the EKS-A version baked into it, so it will build images pertaining to that release of EKS Anywhere by default. You can override the default version using the eksa-release option.

image-builder build --os <OS> --hypervisor <hypervisor> --release-channel <release channel> --<hypervisor>-config config.json --eksa-release v0.16.5

Building images corresponding to dev versions of EKS-A

image-builder also provides the option to build images pertaining to dev releases of EKS-A. In the above cases, using a production release of image-builder leads to manifests and images being sourced from production locations. While this is usually the desired behavior, it is sometimes useful to build images pertaining to the development branch. Often, new features or enhancements are added to image-builder or other EKS-A dependency projects, but are only released to production weeks or months later, based on the release cadence. In other cases, users may want to build EKS-A node images for new Kubernetes versions that are available in dev EKS-A releases but have not been officially released yet. This feature of image-builder supports both these use-cases and other similar ones.

This can be achieved using an image-builder CLI that has the dev version of EKS-A (v0.0.0-dev) baked into it, or by passing v0.0.0-dev to the eksa-release option. For both of these methods, you need to set the environment variable EKSA_USE_DEV_RELEASE to true.

image-builder obtained from a production EKS-A release:

export EKSA_USE_DEV_RELEASE=true
image-builder build --os <OS> --hypervisor <hypervisor> --release-channel <release channel> --<hypervisor>-config config.json --eksa-release v0.0.0-dev

image-builder obtained from a dev EKS-A release:

export EKSA_USE_DEV_RELEASE=true
image-builder build --os <OS> --hypervisor <hypervisor> --release-channel <release channel> --<hypervisor>-config config.json

In both these above approaches, the artifacts embedded into the images will be obtained from the dev release bundle manifest instead of production. This manifest contains the latest artifacts built from the main branch, and is generally more up-to-date than production release artifact versions.

UEFI support

image-builder supports UEFI-enabled images for Ubuntu OVA, Ubuntu Raw and RHEL 9 Raw images. UEFI is turned on by default for Ubuntu Raw image builds, but the default firmware for Ubuntu OVAs and RHEL Raw images is BIOS. This can be toggled with the firmware option.

For example, to build a Kubernetes v1.27 Ubuntu 22.04 OVA with UEFI enabled, you can run the following command.

image-builder build --os ubuntu --hypervisor vsphere --os-version 22.04 --release-channel 1-27 --vsphere-config config.json --firmware efi

The table below shows the possible firmware options for the hypervisor and OS combinations that image-builder supports.

OS       vSphere               Bare Metal                    CloudStack   Nutanix   Snow
Ubuntu   bios (default), efi   efi                           bios         bios      bios
RHEL     bios                  bios (RHEL 8), efi (RHEL 9)   bios         bios      bios

Mounting additional files

image-builder allows you to customize your image by adding files located on your host onto the image at build time. This is helpful when you want your image to have a custom DNS resolver configuration, systemd service unit files, custom scripts and executables, etc. This option is supported for all OS and hypervisor combinations.

To do this, create a configuration file (say, files.json) containing the list of files you want to copy:

{
   "additional_files_list": [
      {
         "src": "<Absolute path of the file on the host machine>",
         "dest": "<Absolute path of the location you want to copy the file to on the image",
         "owner": "<Name of the user that should own the file>",
         "group": "<Name of the group that should own the file>",
         "mode": "<The permissions to apply to the file on the image>"
      },
      ...
   ]
}

You can now run the image-builder CLI with the files-config option, with this configuration file as input.

image-builder build --os <OS> --hypervisor <hypervisor> --release-channel <release channel> --<hypervisor>-config config.json --files-config files.json

Using Proxy Server

image-builder supports proxy-enabled build environments. To use a proxy server to route outbound requests to the Internet, add the following fields to the hypervisor or provider configuration file (for example, baremetal.json):

{
   "http_proxy": "<http proxy endpoint, for example, http://username:passwd@proxyhost:port>",
   "https_proxy": "<https proxy endpoint, for example, https://proxyhost:port/>",
   "no_proxy": "<optional comma-separated list of domains that should be excluded from proxying>"
}

In a proxy-enabled environment, image-builder uses wget to download artifacts instead of curl, as curl does not support reading proxy environment variables. In order to add wget to the node OS, add the extra_rpms field (if the node OS being built is RHEL) or the extra_debs field (if it is Ubuntu) to the above JSON configuration file:

{
   "extra_rpms": "wget"
}

OR

{
   "extra_debs": "wget"
}

Run the image-builder CLI with the hypervisor configuration file:

image-builder build --os <OS> --hypervisor <hypervisor> --release-channel <release channel> --<hypervisor>-config config.json

Red Hat Satellite Support

While building Red Hat node images, image-builder uses public Red Hat subscription endpoints to register the build virtual machine with the provided Red Hat account and download required packages.

Alternatively, image-builder can also use a private Red Hat Satellite to register the build virtual machine and pull packages from the Satellite. In order to use Red Hat Satellite in the image build process follow the steps below.

Prerequisites

  1. Ensure the host running image-builder has bi-directional network connectivity with the Red Hat Satellite
  2. image-builder only supports Red Hat Satellite versions 6.8 and above
  3. Add the following Red Hat repositories for the latest 8.x or 9.x (for Nutanix) version on the Satellite and initiate a sync to replicate the required packages
    1. Base OS RPMs
    2. Base OS - Extended Update Support RPMs
    3. AppStream - Extended Update Support RPMs
  4. Create an activation key on the Satellite and ensure the Library environment is enabled

Build Red Hat node images using Red Hat Satellite

  1. Add the following fields to the hypervisor or provider configuration file
    {
      "rhsm_server_hostname": "fqdn of Red Hat Satellite server",
      "rhsm_server_release_version": "Version of Red hat OS Packages to pull from Satellite. e.x. 8.8",
      "rhsm_activation_key": "activation key from Satellite",
      "rhsm_org_id": "org id from Satellite"
    }
    

     rhsm_server_release_version should always point to the latest 8.x or 9.x minor Red Hat release synced and available on the Red Hat Satellite.

  2. Run the image-builder CLI with the hypervisor configuration file:
    image-builder build --os <OS> --hypervisor <hypervisor> --release-channel <release channel> --<hypervisor>-config config.json
    

Air Gapped Image Building

image-builder supports building node OS images in an air-gapped environment. Currently, only Ubuntu-based node OS images for the Bare Metal provider can be built in air-gapped mode.

Prerequisites

  1. Air-gapped image building requires
    • a private artifacts server, e.g., JFrog Artifactory
    • a private Git server
  2. Ensure the host running image-builder has bi-directional network connectivity with the artifacts server and the Git server
  3. The artifacts server should be able to host and serve standalone artifacts and Ubuntu OS packages

Building node images in an air-gapped environment

  1. Identify the EKS-D release channel (generally aligning with the Kubernetes version) to build and set it as the RELEASE_CHANNEL environment variable, for example, 1-27 or 1-28.

  2. Identify the latest release of EKS-A from the changelog and set it as the EKSA_RELEASE_VERSION environment variable; the download script below requires both EKSA_RELEASE_VERSION and RELEASE_CHANNEL to be set.

  3. In an environment with internet connectivity, run the image-builder CLI to download the manifests:

    image-builder download manifests
    

    This command will download a tarball containing all currently released and supported EKS-A and EKS-D manifests that are required for image building in an air-gapped environment.
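
    Optionally, you can list the tarball contents before carrying it into the air-gapped environment; the file name below matches the one passed to --manifest-tarball in the final step.

    # Quick sanity check of the downloaded manifest tarball
    tar -tf eks-a-manifests.tar | head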

  4. Create a local file named download-airgapped-artifacts.sh with the contents below. This script downloads the EKS-A and EKS-D artifacts required for image building.

    #!/usr/bin/env bash
    set +o nounset
    
    function downloadArtifact() {
       local -r artifact_url=${1}
       local -r artifact_path_pre=${2}
    
       # Removes hostname from url
       artifact_path_post=$(echo ${artifact_url} | sed -E 's:[^/]*//[^/]*::')
       artifact_path="${artifact_path_pre}${artifact_path_post}"
       curl -sL ${artifact_url} --output ${artifact_path} --create-dirs
    }
    
    if [ -z "${EKSA_RELEASE_VERSION}" ]; then
       echo "EKSA_RELEASE_VERSION not set. Please refer https://anywhere.eks.amazonaws.com/docs/whatsnew/ or https://github.com/aws/eks-anywhere/releases to get latest EKS-A release"
       exit 1
    fi
    
    if [ -z "${RELEASE_CHANNEL}" ]; then
       echo "RELEASE_CHANNEL not set. Supported EKS Distro releases include 1-25, 1-26, 1-27, 1-28 and 1-29"
       exit 1
    fi
    
    # Convert RELEASE_CHANNEL to dot schema
    kube_version="${RELEASE_CHANNEL/-/.}"
    echo "Setting Kube Version: ${kube_version}"
    
    # Create a local directory to download the artifacts
    artifacts_dir="eks-a-d-artifacts"
    eks_a_artifacts_dir="eks-a-artifacts"
    eks_d_artifacts_dir="eks-d-artifacts"
    echo "Creating artifacts directory: ${artifacts_dir}"
    mkdir ${artifacts_dir}
    
    # Download EKS-A bundle manifest
    cd ${artifacts_dir}
     bundles_url=$(curl -sL https://anywhere-assets.eks.amazonaws.com/releases/eks-a/manifest.yaml | yq ".spec.releases[] | select(.version==\"$EKSA_RELEASE_VERSION\").bundleManifestUrl")
    echo "Identified EKS-A Bundles URL: ${bundles_url}"
    echo "Downloading EKS-A Bundles manifest file"
    bundles_file_data=$(curl -sL "${bundles_url}" | yq)
    
    # Download EKS-A artifacts
    eks_a_artifacts="containerd crictl etcdadm"
    for eks_a_artifact in ${eks_a_artifacts}; do
       echo "Downloading EKS-A artifact: ${eks_a_artifact}"
       artifact_url=$(echo "${bundles_file_data}" | yq e ".spec.versionsBundles[] | select(.kubeVersion==\"${kube_version}\").eksD.${eks_a_artifact}.uri" -)
       downloadArtifact ${artifact_url} ${eks_a_artifacts_dir}
    done
    
    # Download EKS-D artifacts
    echo "Downloading EKS-D manifest file"
    eks_d_manifest_url=$(echo "${bundles_file_data}" | yq e ".spec.versionsBundles[] | select(.kubeVersion==\"${kube_version}\").eksD.manifestUrl" -)
    eks_d_manifest_file_data=$(curl -sL "${eks_d_manifest_url}" | yq)
    
    # Get EKS-D kubernetes base url from kube-apiserver
    eks_d_kube_tag=$(echo "${eks_d_manifest_file_data}" | yq e ".status.components[] | select(.name==\"kubernetes\").gitTag" -)
    echo "EKS-D Kube Tag: ${eks_d_kube_tag}"
    api_server_artifact="bin/linux/amd64/kube-apiserver.tar"
    api_server_artifact_url=$(echo "${eks_d_manifest_file_data}" | yq e ".status.components[] | select(.name==\"kubernetes\").assets[] | select(.name==\"${api_server_artifact}\").archive.uri")
    eks_d_base_url=$(echo "${api_server_artifact_url}" | sed -E "s,/${eks_d_kube_tag}/${api_server_artifact}.*,,")
    echo "EKS-D Kube Base URL: ${eks_d_base_url}"
    
    # Downloading EKS-D Kubernetes artifacts
    eks_d_k8s_artifacts="kube-apiserver.tar kube-scheduler.tar kube-controller-manager.tar kube-proxy.tar pause.tar coredns.tar etcd.tar kubeadm kubelet kubectl"
    for eks_d_k8s_artifact in ${eks_d_k8s_artifacts}; do
       echo "Downloading EKS-D artifact: Kubernetes - ${eks_d_k8s_artifact}"
       artifact_url="${eks_d_base_url}/${eks_d_kube_tag}/bin/linux/amd64/${eks_d_k8s_artifact}"
       downloadArtifact ${artifact_url} ${eks_d_artifacts_dir}
    done
    
     # Downloading EKS-D etcd and CNI plugins artifacts
    eks_d_extra_artifacts="etcd cni-plugins"
    for eks_d_extra_artifact in ${eks_d_extra_artifacts}; do
       echo "Downloading EKS-D artifact: ${eks_d_extra_artifact}"
       eks_d_artifact_tag=$(echo "${eks_d_manifest_file_data}" | yq e ".status.components[] | select(.name==\"${eks_d_extra_artifact}\").gitTag" -)
       artifact_url=$(echo "${eks_d_manifest_file_data}" | yq e ".status.components[] | select(.name==\"${eks_d_extra_artifact}\").assets[] | select(.name==\"${eks_d_extra_artifact}-linux-amd64-${eks_d_artifact_tag}.tar.gz\").archive.uri")
       downloadArtifact ${artifact_url} ${eks_d_artifacts_dir}
    done
    
    
  5. Make the saved file download-airgapped-artifacts.sh executable

    chmod +x download-airgapped-artifacts.sh
    
  6. Set the EKS-A release version and the EKS-D release channel as environment variables, then execute the script

    EKSA_RELEASE_VERSION=<EKS-A version> RELEASE_CHANNEL=1-28 ./download-airgapped-artifacts.sh
    

    Executing this script will create a local directory eks-a-d-artifacts and download the required EKS-A and EKS-D artifacts.

  7. Create two repositories on the private artifacts server, one for EKS-A and one for EKS-D. Upload the contents of eks-a-d-artifacts/eks-a-artifacts to the EKS-A repository and the contents of eks-a-d-artifacts/eks-d-artifacts to the EKS-D repository. Note that the paths of the artifacts inside the downloaded directories must be preserved when hosted on the artifacts server, as in the sketch below.
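
    The loop below is an illustrative sketch only, assuming a JFrog Artifactory-style server at a hypothetical endpoint and hypothetical credentials in UPLOAD_USER and UPLOAD_TOKEN; it uploads every file while keeping its relative path intact.

    # Upload all downloaded artifacts, preserving relative paths.
    cd eks-a-d-artifacts
    for repo in eks-a-artifacts eks-d-artifacts; do
       find "${repo}" -type f | while read -r file; do
          rel_path="${file#${repo}/}"
          curl -u "${UPLOAD_USER}:${UPLOAD_TOKEN}" -T "${file}" \
             "http://private-artifacts-server:8081/artifactory/${repo}/${rel_path}"
       done
    done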

  8. Download the base ISO image and host it on the artifacts server (see the sketch below).
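
    For example, on a host with internet access you might fetch the ISO and record its SHA-256 checksum for the iso_checksum field in step 12. The mirror URL below is an assumption; use any official Ubuntu mirror, then host the ISO on the artifacts server.

    # Download the base ISO and compute the checksum used in step 12.
    curl -LO "https://cdimage.ubuntu.com/ubuntu-legacy-server/releases/20.04/release/ubuntu-20.04.1-legacy-server-amd64.iso"
    sha256sum ubuntu-20.04.1-legacy-server-amd64.iso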

  9. Replicate the public eks-anywhere-build-tooling and image-builder git repositories (referenced in the configuration file in step 12) to the private git server. Make sure to sync all branches and tags to the private repositories; one way to do this is sketched below.
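
    A minimal sketch for one repository, assuming a placeholder private URL that matches the step 12 configuration; repeat the same steps for image-builder.

    # Mirror-clone brings over all branches and tags; mirror-push
    # replicates them to the private git server.
    git clone --mirror https://github.com/aws/eks-anywhere-build-tooling.git
    cd eks-anywhere-build-tooling.git
    git push --mirror https://internal-git-host/eks-anywhere-build-tooling.git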

  10. Replicate the public Ubuntu packages to the private artifacts server. Refer to your artifacts server's documentation for detailed instructions.

  11. Create a sources.list file that configures apt to use the private artifacts server for OS packages:

    deb [trusted=yes] http://<private-artifacts-server>/debian focal main restricted universe multiverse
    deb [trusted=yes] http://<private-artifacts-server>/debian focal-updates main restricted universe multiverse
    deb [trusted=yes] http://<private-artifacts-server>/debian focal-backports main restricted universe multiverse
    deb [trusted=yes] http://<private-artifacts-server>/debian focal-security main restricted universe multiverse
    

    focal in the above file refers to the code name for the Ubuntu 20.04 release. If using Ubuntu version 22.04, replace focal with jammy.
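
    To optionally validate the mirror from any Ubuntu host before building, you can refresh the apt indexes against only this file. This is a sketch; adjust the path to your sources.list.

    # Use only the private sources.list, ignoring the host's defaults.
    sudo apt-get update \
       -o Dir::Etc::sourcelist="/path/to/sources.list" \
       -o Dir::Etc::sourceparts="-"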

  12. Create a provider or hypervisor configuration file and add the following fields:

    {
       "eksa_build_tooling_repo_url": "https://internal-git-host/eks-anywhere-build-tooling.git",
       "image_builder_repo_url": "https://internal-repos/image-builder.git",
       "private_artifacts_eksd_fqdn": "http://private-artifacts-server/artifactory/eks-d-artifacts",
       "private_artifacts_eksa_fqdn": "http://private-artifacts-server:8081/artifactory/eks-a-artifacts",
       "extra_repos": "<full path of sources.list>",
       "disable_public_repos": "true",
       "iso_url": "http://<private-base-iso-url>/ubuntu-20.04.1-legacy-server-amd64.iso",
       "iso_checksum": "<sha256 of the base iso>",
       "iso_checksum_type": "sha256"
    }
    
  13. Run the image-builder CLI with the hypervisor configuration file and the downloaded manifest tarball:

    image-builder build --os <OS> --hypervisor <hypervisor> --release-channel <release channel> --<hypervisor>-config config.json --airgapped --manifest-tarball <path to eks-a-manifests.tar>
    

Container Images