Create Snow cluster
1 - Create Snow cluster
EKS Anywhere supports an AWS Snow provider for EKS Anywhere deployments.
This document walks you through setting up EKS Anywhere on Snow as a standalone, self-managed cluster or combined set of management/workload clusters. See Cluster topologies for details.
Note: Before you create your cluster, you have the option of validating the EKS Anywhere bundle manifest container images by following instructions in the Verify Cluster Images page.
Prerequisite checklist
EKS Anywhere on Snow needs:
- Certain pre-steps to complete before interacting with a Snowball device. See Actions to complete before ordering a Snowball Edge device for Amazon EKS Anywhere .
- EKS Anywhere enabled Snowball devices. See Ordering a Snowball Edge device for use with Amazon EKS Anywhere for ordering experience through the AWS Snow Family console.
- To be run on an Admin instance in a Snowball Edge device. See Configuring and starting Amazon EKS Anywhere on Snowball Edge devices for setting up the devices, launching the Admin instance, and fetching and copying the device credentials to the Admin instance for the eksctl CLI to consume.
- Prepare DHCP IP addresses pool
Also, see the Ports and protocols page for information on ports that need to be accessible from control plane, worker, and Admin machines.
Steps
The following steps are divided into two sections:
- Create an initial cluster (used as a management or standalone cluster)
- Create zero or more workload clusters from the management cluster
Create an initial cluster
Follow these steps to create an EKS Anywhere cluster that can be used either as a management cluster or as a standalone cluster (for running workloads itself).
-
Optional Configuration
Set License Environment Variable
Add a license to any cluster for which you want to receive paid support. If you are creating a licensed cluster, set and export the license variable (see License cluster if you are licensing an existing cluster):
export EKSA_LICENSE='my-license-here'
After you have created your eksa-mgmt-cluster.yaml and set your credential environment variables, you will be ready to create the cluster.
Configure Curated Packages
The Amazon EKS Anywhere Curated Packages are only available to customers with the Amazon EKS Anywhere Enterprise Subscription. To request a free trial, talk to your Amazon representative or connect with one here. Cluster creation will succeed if authentication is not set up, but some warnings may be generated. Detailed package configurations can be found here.
If you are going to use packages, set up authentication. These credentials should have limited capabilities:
export EKSA_AWS_ACCESS_KEY_ID="your*access*id"
export EKSA_AWS_SECRET_ACCESS_KEY="your*secret*key"
export EKSA_AWS_REGION="us-west-2"
-
Set an environment variable for your cluster name
export CLUSTER_NAME=mgmt
-
Generate a cluster config file for your Snow provider
eksctl anywhere generate clusterconfig $CLUSTER_NAME --provider snow > eksa-mgmt-cluster.yaml
-
Optionally import images to a private registry
This optional step imports EKS Anywhere artifacts and the release bundle to a local registry. This is required for air-gapped installation.
- Configuring Amazon EKS Anywhere for disconnected operation shows AWS examples of selecting and building a private registry in a Snowball Edge device.
- For an air-gapped scenario, run import images with the --input and --bundles arguments pointing to the artifacts and bundle release files that already exist on the Admin instance.
- Refer to the Registry Mirror configuration for more information about using a private registry.
eksctl anywhere import images \
   --input /usr/lib/eks-a/artifacts/artifacts.tar.gz \
   --bundles /usr/lib/eks-a/manifests/bundle-release.yaml \
   --registry $PRIVATE_REGISTRY_ENDPOINT \
   --insecure=true
-
Modify the cluster config (eksa-mgmt-cluster.yaml) as follows:
- Refer to the Snow configuration for information on configuring this cluster config for a Snow provider. The fields you typically need to fill in are sketched below.
- Add Optional configuration settings as needed.
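For example, the fields most deployments edit are the control plane endpoint host in the Cluster object and the device list and SSH key in the SnowMachineConfig object. A hedged sketch with placeholder values (not defaults from your environment):
# In the Cluster object:
controlPlaneConfiguration:
  endpoint:
    host: "192.168.1.100"        # placeholder: an unused IP outside the device DHCP range

# In the SnowMachineConfig object:
spec:
  sshKeyName: "my-snow-key"      # placeholder: an existing Snow SSH key pair
  devices:
  - "192.168.1.231"              # placeholder: Snowball Edge device IPs
  - "192.168.1.232"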
-
Set Credential Environment Variables
Before you create the initial cluster, you will need to use the credentials and ca-bundles files that are in the Admin instance, and export these environment variables for your AWS Snowball device credentials. Make sure you use single quotes around the values so that your shell does not interpret the values:
export EKSA_AWS_CREDENTIALS_FILE='/PATH/TO/CREDENTIALS/FILE'
export EKSA_AWS_CA_BUNDLES_FILE='/PATH/TO/CABUNDLES/FILE'
After you have created your eksa-mgmt-cluster.yaml and set your credential environment variables, you will be ready to create the cluster.
-
Create cluster
For a regular cluster create (with internet access), type the following:
eksctl anywhere create cluster \
   -f eksa-mgmt-cluster.yaml
For an airgapped cluster create, follow Preparation for airgapped deployments instructions, then type the following:
eksctl anywhere create cluster \
   -f eksa-mgmt-cluster.yaml \
   --bundles-override /usr/lib/eks-a/manifests/bundle-release.yaml
-
Once the cluster is created you can use it with the generated KUBECONFIG file in your local directory:
export KUBECONFIG=${PWD}/${CLUSTER_NAME}/${CLUSTER_NAME}-eks-a-cluster.kubeconfig
-
Check the cluster nodes:
To check that the cluster completed, list the machines to see the control plane and worker nodes:
kubectl get machines -A
Example command output:
NAMESPACE     NAME                         CLUSTER   NODENAME                     PROVIDERID                                        PHASE     AGE     VERSION
eksa-system   mgmt-etcd-dsxb5              mgmt                                   aws-snow:///192.168.1.231/s.i-8b0b0631da3b8d9e4   Running   4m59s
eksa-system   mgmt-md-0-7b7c69cf94-99sll   mgmt      mgmt-md-0-1-58nng            aws-snow:///192.168.1.231/s.i-8ebf6b58a58e47531   Running   4m58s   v1.24.9-eks-1-24-7
eksa-system   mgmt-srrt8                   mgmt      mgmt-control-plane-1-xs4t9   aws-snow:///192.168.1.231/s.i-8414c7fcabcf3d7c1   Running   4m58s   v1.24.9-eks-1-24-7
...
-
Check the cluster:
You can now use the cluster as you would any Kubernetes cluster. To try it out, run the test application with:
export CLUSTER_NAME=mgmt
export KUBECONFIG=${PWD}/${CLUSTER_NAME}/${CLUSTER_NAME}-eks-a-cluster.kubeconfig
kubectl apply -f "https://anywhere.eks.amazonaws.com/manifests/hello-eks-a.yaml"
Verify the test application in Deploy test workload .
Create separate workload clusters
Follow these steps if you want to use your initial cluster to create and manage separate workload clusters.
-
Set License Environment Variable (Optional)
Add a license to any cluster for which you want to receive paid support. If you are creating a licensed cluster, set and export the license variable (see License cluster if you are licensing an existing cluster):
export EKSA_LICENSE='my-license-here'
-
Generate a workload cluster config:
CLUSTER_NAME=w01
eksctl anywhere generate clusterconfig $CLUSTER_NAME \
   --provider snow > eksa-w01-cluster.yaml
Refer to the initial config described earlier for the required and optional settings.
NOTE: Ensure workload cluster object names (Cluster, SnowDatacenterConfig, SnowMachineConfig, etc.) are distinct from management cluster object names.
-
Be sure to set the managementCluster field to identify the name of the management cluster. For example, the management cluster, mgmt, is defined for our workload cluster w01 as follows:
apiVersion: anywhere.eks.amazonaws.com/v1alpha1
kind: Cluster
metadata:
  name: w01
spec:
  managementCluster:
    name: mgmt
-
Create a workload cluster in one of the following ways:
-
GitOps: See Manage separate workload clusters with GitOps
-
Terraform: See Manage separate workload clusters with Terraform
NOTE: snowDatacenterConfig.spec.identityRef and a Snow bootstrap credentials secret need to be specified when provisioning a cluster through GitOps or Terraform, as the EKS Anywhere Cluster Controller will not create a Snow bootstrap credentials secret like the eksctl CLI does when the field is empty.
snowMachineConfig.spec.sshKeyName must be specified to SSH into your nodes when provisioning a cluster through GitOps or Terraform, as the EKS Anywhere Cluster Controller will not generate the keys like the eksctl CLI does when the field is empty.
-
eksctl CLI: To create a workload cluster with eksctl, run:
eksctl anywhere create cluster \
   -f eksa-w01-cluster.yaml \
   --kubeconfig mgmt/mgmt-eks-a-cluster.kubeconfig
As noted earlier, adding the --kubeconfig option tells eksctl to use the management cluster identified by that kubeconfig file to create a different workload cluster.
-
kubectl CLI: The cluster lifecycle feature lets you use kubectl, or other tools that can talk to the Kubernetes API, to create a workload cluster. To use kubectl, run:
kubectl apply -f eksa-w01-cluster.yaml
To check the state of a cluster managed with the cluster lifecycle feature, use kubectl to show the cluster object with its status. The status field on the cluster object holds information about the current state of the cluster:
kubectl get clusters w01 -o yaml
The cluster has been fully created once the status of the Ready condition is marked True. See the cluster status guide for more information.
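If you only want the readiness signal rather than the whole object, a jsonpath query such as the following can be used (this assumes the standard Kubernetes conditions layout on the EKS Anywhere Cluster object):
kubectl get clusters w01 -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'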
-
-
Check the workload cluster:
You can now use the workload cluster as you would any Kubernetes cluster.
-
If your workload cluster was created with eksctl, change your credentials to point to the new workload cluster (for example, w01), then run the test application with:
export CLUSTER_NAME=w01
export KUBECONFIG=${PWD}/${CLUSTER_NAME}/${CLUSTER_NAME}-eks-a-cluster.kubeconfig
kubectl apply -f "https://anywhere.eks.amazonaws.com/manifests/hello-eks-a.yaml"
-
If your workload cluster was created with GitOps or Terraform, the kubeconfig for your new cluster is stored as a secret on the management cluster. You can get credentials and run the test application as follows:
kubectl get secret -n eksa-system w01-kubeconfig -o jsonpath='{.data.value}' | base64 --decode > w01.kubeconfig
export KUBECONFIG=w01.kubeconfig
kubectl apply -f "https://anywhere.eks.amazonaws.com/manifests/hello-eks-a.yaml"
Verify the test application in the deploy test application section.
-
-
Add more workload clusters:
To add more workload clusters, go through the same steps for creating the initial workload cluster, copying the config file to a new name (such as eksa-w02-cluster.yaml), modifying resource names, and running the create cluster command again, as sketched below.
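For example, a second workload cluster w02 might be created roughly like this (file and object names are placeholders you would adapt):
cp eksa-w01-cluster.yaml eksa-w02-cluster.yaml
# Edit eksa-w02-cluster.yaml: rename the Cluster, SnowDatacenterConfig,
# SnowMachineConfig (and SnowIPPool, if used) objects so they do not clash with w01
eksctl anywhere create cluster \
   -f eksa-w02-cluster.yaml \
   --kubeconfig mgmt/mgmt-eks-a-cluster.kubeconfig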
Next steps:
-
See the Cluster management section for more information on common operational tasks like deleting the cluster.
-
See the Package management section for more information on post-creation curated packages installation.
2 - Configure for Snow
This is a generic template with detailed descriptions below for reference. Additional optional configuration settings can also be included; see Optional configuration.
apiVersion: anywhere.eks.amazonaws.com/v1alpha1
kind: Cluster
metadata:
name: my-cluster-name
spec:
clusterNetwork:
cniConfig:
cilium: {}
pods:
cidrBlocks:
- 10.1.0.0/16
services:
cidrBlocks:
- 10.96.0.0/12
controlPlaneConfiguration:
count: 3
endpoint:
host: ""
machineGroupRef:
kind: SnowMachineConfig
name: my-cluster-machines
datacenterRef:
kind: SnowDatacenterConfig
name: my-cluster-datacenter
externalEtcdConfiguration:
count: 3
machineGroupRef:
kind: SnowMachineConfig
name: my-cluster-machines
kubernetesVersion: "1.28"
workerNodeGroupConfigurations:
- count: 1
machineGroupRef:
kind: SnowMachineConfig
name: my-cluster-machines
name: md-0
---
apiVersion: anywhere.eks.amazonaws.com/v1alpha1
kind: SnowDatacenterConfig
metadata:
name: my-cluster-datacenter
spec: {}
---
apiVersion: anywhere.eks.amazonaws.com/v1alpha1
kind: SnowMachineConfig
metadata:
name: my-cluster-machines
spec:
amiID: ""
instanceType: sbe-c.large
sshKeyName: ""
osFamily: ubuntu
devices:
- ""
containersVolume:
size: 25
network:
directNetworkInterfaces:
- index: 1
primary: true
ipPoolRef:
kind: SnowIPPool
name: ip-pool-1
---
apiVersion: anywhere.eks.amazonaws.com/v1alpha1
kind: SnowIPPool
metadata:
name: ip-pool-1
spec:
pools:
- ipStart: 192.168.1.2
ipEnd: 192.168.1.14
subnet: 192.168.1.0/24
gateway: 192.168.1.1
- ipStart: 192.168.1.55
ipEnd: 192.168.1.250
subnet: 192.168.1.0/24
gateway: 192.168.1.1
Cluster Fields
name (required)
Name of your cluster (my-cluster-name in this example).
clusterNetwork (required)
Network configuration.
clusterNetwork.cniConfig (required)
CNI plugin configuration. Supports cilium.
clusterNetwork.cniConfig.cilium.policyEnforcementMode (optional)
Optionally specify a policyEnforcementMode of default, always, or never.
clusterNetwork.cniConfig.cilium.egressMasqueradeInterfaces (optional)
Optionally specify a network interface name or interface prefix used for masquerading. See EgressMasqueradeInterfaces option.
clusterNetwork.cniConfig.cilium.skipUpgrade (optional)
When true, skip Cilium maintenance during upgrades. Also see Use a custom CNI.
clusterNetwork.cniConfig.cilium.routingMode (optional)
Optionally specify the routing mode. Accepts default and direct. Also see the RoutingMode option.
clusterNetwork.cniConfig.cilium.ipv4NativeRoutingCIDR (optional)
Optionally specify the CIDR to use when RoutingMode is set to direct. When specified, Cilium assumes networking for this CIDR is preconfigured and hands traffic destined for that range to the Linux network stack without applying any SNAT.
clusterNetwork.cniConfig.cilium.ipv6NativeRoutingCIDR (optional)
Optionally specify the IPv6 CIDR to use when RoutingMode is set to direct. When specified, Cilium assumes networking for this CIDR is preconfigured and hands traffic destined for that range to the Linux network stack without applying any SNAT.
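As an illustration only (the values are hypothetical, not defaults), these Cilium options are set under clusterNetwork.cniConfig:
clusterNetwork:
  cniConfig:
    cilium:
      policyEnforcementMode: always       # default, always, or never
      routingMode: direct                 # assumes routing is preconfigured
      ipv4NativeRoutingCIDR: 10.1.0.0/16  # traffic to this CIDR is not SNATed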
clusterNetwork.pods.cidrBlocks[0] (required)
The pod subnet specified in CIDR notation. Only 1 pod CIDR block is permitted. The CIDR block should not conflict with the host or service network ranges.
clusterNetwork.services.cidrBlocks[0] (required)
The service subnet specified in CIDR notation. Only 1 service CIDR block is permitted. This CIDR block should not conflict with the host or pod network ranges.
clusterNetwork.dns.resolvConf.path (optional)
File path to a file containing a custom DNS resolver configuration.
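For example (the path is a placeholder):
clusterNetwork:
  dns:
    resolvConf:
      path: /etc/my-custom-resolv.conf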
controlPlaneConfiguration (required)
Specific control plane configuration for your Kubernetes cluster.
controlPlaneConfiguration.count (required)
Number of control plane nodes
controlPlaneConfiguration.machineGroupRef (required)
Refers to the Kubernetes object with Snow specific configuration for your nodes. See SnowMachineConfig Fields below.
controlPlaneConfiguration.endpoint.host (required)
A unique IP you want to use for the control plane VM in your EKS Anywhere cluster. Choose an IP in your network range that does not conflict with other devices.
NOTE: This IP should be outside the network DHCP range as it is a floating IP that gets assigned to one of the control plane nodes for kube-apiserver loadbalancing.
controlPlaneConfiguration.taints (optional)
A list of taints to apply to the control plane nodes of the cluster.
Replaces the default control plane taint. For k8s versions prior to 1.24, it replaces node-role.kubernetes.io/master. For k8s versions 1.24+, it replaces node-role.kubernetes.io/control-plane. The default control plane components will tolerate the provided taints.
Modifying the taints associated with the control plane configuration will cause new nodes to be rolled-out, replacing the existing nodes.
NOTE: The taints provided will be used instead of the default control plane taint. Any pods that you run on the control plane nodes must tolerate the taints you provide in the control plane configuration.
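A sketch of a custom control plane taint (the key, value, and effect are hypothetical):
controlPlaneConfiguration:
  taints:
  - key: dedicated
    value: control-plane
    effect: NoSchedule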
controlPlaneConfiguration.labels (optional)
A list of labels to apply to the control plane nodes of the cluster. This is in addition to the labels that EKS Anywhere will add by default.
Modifying the labels associated with the control plane configuration will cause new nodes to be rolled out, replacing the existing nodes.
workerNodeGroupConfigurations (required)
This takes in a list of node groups that you can define for your workers. You may define one or more worker node groups.
workerNodeGroupConfigurations[*].count (optional)
Number of worker nodes (default: 1). It will be ignored if the cluster autoscaler curated package is installed and autoscalingConfiguration is used to specify the desired range of replicas.
Refer to troubleshooting machine health check remediation not allowed and choose a number sufficient to allow machine health check remediation.
workerNodeGroupConfigurations[*].machineGroupRef (required)
Refers to the Kubernetes object with Snow specific configuration for your nodes. See SnowMachineConfig Fields below.
workerNodeGroupConfigurations[*].name (required)
Name of the worker node group (default: md-0)
workerNodeGroupConfigurations[*].autoscalingConfiguration.minCount (optional)
Minimum number of nodes for this node group’s autoscaling configuration.
workerNodeGroupConfigurations[*].autoscalingConfiguration.maxCount (optional)
Maximum number of nodes for this node group’s autoscaling configuration.
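A sketch of a worker node group that relies on autoscaling bounds instead of a fixed count (the bounds shown are hypothetical):
workerNodeGroupConfigurations:
- name: md-0
  machineGroupRef:
    kind: SnowMachineConfig
    name: my-cluster-machines
  autoscalingConfiguration:
    minCount: 1
    maxCount: 5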
workerNodeGroupConfigurations[*].taints (optional)
A list of taints to apply to the nodes in the worker node group.
Modifying the taints associated with a worker node group configuration will cause new nodes to be rolled-out, replacing the existing nodes associated with the configuration.
At least one node group must not have NoSchedule or NoExecute taints applied to it.
workerNodeGroupConfigurations[*].labels (optional)
A list of labels to apply to the nodes in the worker node group. This is in addition to the labels that EKS Anywhere will add by default.
Modifying the labels associated with a worker node group configuration will cause new nodes to be rolled out, replacing the existing nodes associated with the configuration.
workerNodeGroupConfigurations[*].kubernetesVersion (optional)
The Kubernetes version you want to use for this worker node group. Supported values: 1.28, 1.27, 1.26, 1.25, 1.24
externalEtcdConfiguration.count (optional)
Number of etcd members.
externalEtcdConfiguration.machineGroupRef (optional)
Refers to the Kubernetes object with Snow specific configuration for your etcd members. See SnowMachineConfig Fields below.
datacenterRef (required)
Refers to the Kubernetes object with Snow environment specific configuration. See SnowDatacenterConfig Fields below.
kubernetesVersion (required)
The Kubernetes version you want to use for your cluster. Supported values: 1.28, 1.27, 1.26, 1.25, 1.24.
SnowDatacenterConfig Fields
identityRef (required)
Refers to the Kubernetes secret object with Snow devices credentials used to reconcile the cluster.
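For illustration, an identityRef pointing at a credentials secret might be sketched as follows (the secret name is a placeholder, and the kind: Secret reference mirrors what the eksctl CLI creates, so treat it as an assumption if you manage credentials yourself):
apiVersion: anywhere.eks.amazonaws.com/v1alpha1
kind: SnowDatacenterConfig
metadata:
  name: my-cluster-datacenter
spec:
  identityRef:
    kind: Secret
    name: my-snow-credentials-secret   # placeholder: secret holding device credentials and CA bundles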
SnowMachineConfig Fields
amiID (optional)
AMI ID from which to create the machine instance. If the field is empty, the Snow provider's AMI lookup logic will find a suitable AMI ID based on the Kubernetes version and osFamily.
instanceType (optional)
Type of the Snow EC2 machine instance. See Quotas for Compute Instances on a Snowball Edge Device for supported instance types on Snow (Default: sbe-c.large).
osFamily
Operating System on instance machines. Permitted value: ubuntu.
physicalNetworkConnector (optional)
Type of Snow physical network connector to use for creating direct network interfaces. Permitted values: SFP_PLUS, QSFP, RJ45 (Default: SFP_PLUS).
sshKeyName (optional)
Name of the AWS Snow SSH key pair you want to configure to access your machine instances.
The default is eksa-default-{cluster-name}-{uuid}.
devices
A device IP list from which to bootstrap and provision machine instances.
network
Custom network setting for the machine instances. DHCP and static IP configurations are supported.
network.directNetworkInterfaces[0].index (optional)
Index number of a direct network interface (DNI) used to clarify the position in the list. Must be no smaller than 1 and no greater than 8.
network.directNetworkInterfaces[0].primary (optional)
Whether the DNI is primary or not. One and only one primary DNI is required in the directNetworkInterfaces list.
network.directNetworkInterfaces[0].vlanID (optional)
VLAN ID to use for the DNI.
network.directNetworkInterfaces[0].dhcp (optional)
Whether DHCP is to be used to assign IP for the DNI.
network.directNetworkInterfaces[0].ipPoolRef (optional)
Refers to a SnowIPPool object which provides a range of IP addresses. When specified, an IP address selected from the pool will be allocated to the DNI.
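The generic template above shows static allocation through a SnowIPPool; a DHCP-based DNI would instead be sketched like this (the VLAN ID is hypothetical):
network:
  directNetworkInterfaces:
  - index: 1
    primary: true
    dhcp: true
    vlanID: 100   # optional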
containersVolume (optional)
Configuration option for customizing containers data storage volume.
containersVolume.size (optional)
Size of the storage for containerd runtime in Gi.
The field is optional for Ubuntu and if specified, the size must be no smaller than 8 Gi.
containersVolume.deviceName (optional)
Containers volume device name.
containersVolume.type (optional)
Type of the containers volume. Permitted values: sbp1, sbg1 (Default: sbp1). sbp1 stands for capacity-optimized HDD; sbg1 is performance-optimized SSD.
nonRootVolumes (optional)
Configuration options for the non root storage volumes.
nonRootVolumes[0].deviceName (optional)
Non root volume device name. Must be specified and cannot have prefix “/dev/sda” as it is reserved for root volume and containers volume.
nonRootVolumes[0].size (optional)
Size of the storage device for the non root volume. Must be no smaller than 8 Gi.
nonRootVolumes[0].type (optional)
Type of the non root volume. Permitted values: sbp1, sbg1 (Default: sbp1). sbp1 stands for capacity-optimized HDD; sbg1 is performance-optimized SSD.
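Putting the volume fields together, a SnowMachineConfig spec with a containers volume and an extra non-root volume might be sketched as follows (the device name and sizes are hypothetical):
spec:
  containersVolume:
    size: 25
    type: sbg1               # performance-optimized SSD
  nonRootVolumes:
  - deviceName: /dev/sdc     # must not use the reserved /dev/sda prefix
    size: 50
    type: sbp1               # capacity-optimized HDD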
SnowIPPool Fields
pools[0].ipStart (optional)
Start address of an IP range.
pools[0].ipEnd (optional)
End address of an IP range.
pools[0].subnet (optional)
An IP subnet for determining whether an IP is within the subnet.
pools[0].gateway (optional)
Gateway of the subnet for routing purpose.