Amazon Elastic Kubernetes Service (Amazon EKS) Setup

EKS stands for Elastic Kubernetes Service, and it is fully managed by AWS.

EKS is a strong choice for running Kubernetes applications because of its security, reliability, and scalability.

EKS integrates with other AWS services such as ELB, CloudWatch, Auto Scaling, IAM, and VPC.

EKS makes it easy to run Kubernetes on AWS without needing to install, operate, and maintain your own Kubernetes control plane.

Amazon EKS runs the Kubernetes control plane across three Availability Zones to ensure high availability, and it automatically detects and replaces unhealthy control plane nodes.

AWS has complete control over the control plane; we do not manage or access it directly.

We need to create worker nodes and attach them to the control plane.

Note: We will create a worker node group backed by an Auto Scaling group (ASG).

Steps to Create an EKS Cluster in AWS:

Step 1) Create an IAM role in AWS (use case EKS)

IAM > Access management > Roles > create role > AWS service

Use case > EKS > EKS Cluster

Role name: Eksclusterrole, then click Create role.

The Eksclusterrole role is now created.
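
The same role can also be created from the AWS CLI. The sketch below is just an illustration, assuming the role name Eksclusterrole and a local trust policy file named eks-trust-policy.json (both are examples, not part of the console flow above):

# Trust policy that lets the EKS service assume the role (saved as eks-trust-policy.json)
$ cat > eks-trust-policy.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [
    { "Effect": "Allow", "Principal": { "Service": "eks.amazonaws.com" }, "Action": "sts:AssumeRole" }
  ]
}
EOF

# Create the role and attach the managed policy used by the EKS control plane
$ aws iam create-role --role-name Eksclusterrole --assume-role-policy-document file://eks-trust-policy.json
$ aws iam attach-role-policy --role-name Eksclusterrole --policy-arn arn:aws:iam::aws:policy/AmazonEKSClusterPolicy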

Step 2) Create a VPC using CloudFormation with the S3 template URL below

CloudFormation > Create stack > Template is ready > Amazon S3 URL

Paste the template URL below into the Amazon S3 URL field:

https://s3.us-west-2.amazonaws.com/amazon-eks/cloudformation/2020-10-29/amazon-eks-vpc-private-subnets.yaml

Stack name: Eksclustervpc

Tags > IAM role (optional) > Next > Submit

A VPC will be created; this takes a few minutes.
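
If you prefer the CLI, a rough equivalent of this console step is sketched below, reusing the stack name Eksclustervpc from above:

# Create the VPC stack from the Amazon-provided EKS VPC template
$ aws cloudformation create-stack \
    --stack-name Eksclustervpc \
    --template-url https://s3.us-west-2.amazonaws.com/amazon-eks/cloudformation/2020-10-29/amazon-eks-vpc-private-subnets.yaml

# Wait for the stack to finish, then list its outputs (VPC ID, subnet IDs, security group)
$ aws cloudformation wait stack-create-complete --stack-name Eksclustervpc
$ aws cloudformation describe-stacks --stack-name Eksclustervpc --query "Stacks[0].Outputs"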

Step 3) Create an EKS cluster using the VPC and IAM role that we created

Elastic Kubernetes Service > Add cluster > Create > Name > Kubernetes version > Cluster service role (provide the IAM role)

The name can be anything of your choice

Kubernetes version: 1.27 (the latest at the time of writing)

The cluster service role is the Eksclusterrole role that we created earlier.

VPC (select the custom one we created) > Security group (the EKS VPC control plane security group) > IPv4 > Cluster endpoint access = Public and private > Next > Next > Create

The cluster is created.
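
For reference, a rough CLI equivalent of this step is shown below. The account ID, subnet IDs, and security group ID are placeholders; substitute the values from the Eksclustervpc stack outputs. The cluster name myEkscluster matches the one used later in this guide:

$ aws eks create-cluster \
    --name myEkscluster \
    --kubernetes-version 1.27 \
    --role-arn arn:aws:iam::<account-id>:role/Eksclusterrole \
    --resources-vpc-config subnetIds=<subnet-1>,<subnet-2>,securityGroupIds=<sg-id>

# Check the cluster status (it becomes ACTIVE once provisioning finishes)
$ aws eks describe-cluster --name myEkscluster --query "cluster.status"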

Step 4) Create a Red Hat EC2 instance, t2.micro (K8s_Client_Machine)

Connect to the K8s_Client_Machine using MobaXterm and run the commands below.

Install kubectl

$ sudo yum update -y

$ curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl"

$ sudo install -o root -g root -m 0755 kubectl /usr/local/bin/kubectl

$ kubectl version --client

Install the AWS CLI

$ sudo yum install curl

$ curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip"

$ sudo yum install unzip

$ unzip awscliv2.zip

$ sudo ./aws/install

The AWS CLI is now installed.
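
You can verify the installation before moving on:

# Prints the installed AWS CLI version
$ aws --version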

Configure the AWS CLI with credentials

Configure the AWS CLI either as the root user or by creating a new IAM user with programmatic access.

We will be providing the root user credentials here.

  1. Navigate to the AWS console > Go to profile > Security credentials > Access keys > Create access key

Now save the credentials and go back to the terminal.

$ aws configure
# provide the access details

$ aws eks list-clusters

# Update the kubeconfig file on the client machine with the EKS cluster details
$ aws eks update-kubeconfig --name <cluster-name> --region us-east-1
$ aws eks update-kubeconfig --name myEkscluster --region us-east-1
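
After updating the kubeconfig, you can confirm that kubectl is pointing at the EKS cluster:

# The current context should reference the EKS cluster ARN
$ kubectl config current-context

# Listing the default kubernetes service confirms connectivity to the API server
$ kubectl get svc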

Checking the Pods in the cluster

$ kubectl get pods
$ kubectl get pods --all-namespaces

The pods are in the Pending state because there are no worker nodes yet to run them.
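
You can see the reason by describing one of the pending pods (the exact pod name will differ in your cluster):

# List the system pods; the coredns pods show STATUS Pending
$ kubectl get pods -n kube-system

# The Events section reports that scheduling failed because no nodes are available
$ kubectl describe pod <coredns-pod-name> -n kube-system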

Step 5) Create an IAM role for the EKS worker nodes (use case: EC2) with the policies below

IAM > Access management > Roles > create role > AWS service

Use case > EC2 > Next

Attach these policies

  1. AmazonEKSWorkerNodePolicy

  2. AmazonEKS_CNI_Policy

  3. AmazonEC2ContainerRegistryReadOnly

Provide the role details and click Create role.

The Eksworkerrole role is now created in IAM.
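
As with the cluster role, this can also be done from the CLI. The sketch below assumes the role name Eksworkerrole and a local trust policy file ec2-trust-policy.json that allows ec2.amazonaws.com to assume the role (analogous to the EKS trust policy shown in Step 1):

$ aws iam create-role --role-name Eksworkerrole --assume-role-policy-document file://ec2-trust-policy.json

# Attach the three managed policies the worker nodes need
$ aws iam attach-role-policy --role-name Eksworkerrole --policy-arn arn:aws:iam::aws:policy/AmazonEKSWorkerNodePolicy
$ aws iam attach-role-policy --role-name Eksworkerrole --policy-arn arn:aws:iam::aws:policy/AmazonEKS_CNI_Policy
$ aws iam attach-role-policy --role-name Eksworkerrole --policy-arn arn:aws:iam::aws:policy/AmazonEC2ContainerRegistryReadOnly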

Step 6) Create a worker Node Group

Go to the EKS cluster > Compute > Add node group

Provide a name and select the Eksworkerrole role we created for the worker nodes > Next

Node group compute configuration > Amazon Linux > instance type t2.large

Minimum size 2, maximum size 5

Enable remote access to the nodes > choose a key pair > allow remote access from all source security groups

Click on Create
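
A rough CLI equivalent of this step is sketched below; the account ID, subnet IDs, node group name, and key pair name are placeholders, and the desired size of 2 is an assumption:

$ aws eks create-nodegroup \
    --cluster-name myEkscluster \
    --nodegroup-name myworkernodes \
    --node-role arn:aws:iam::<account-id>:role/Eksworkerrole \
    --subnets <subnet-1> <subnet-2> \
    --instance-types t2.large \
    --scaling-config minSize=2,maxSize=5,desiredSize=2 \
    --remote-access ec2SshKey=<key-pair-name>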

Once it is created, check the nodes using the commands below on the client EC2 instance.

$ kubectl get nodes 

$ kubectl get pods --all-namespaces

Create a pod and expose it outside the cluster using a NodePort service

$ kubectl get pods -o wide

Two nodes were created right after the node group was added under the EKS cluster's Compute section.

Now create a pod and expose it outside the cluster using a NodePort service, as shown in the sketch below.
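
A minimal sketch of this last step using imperative kubectl commands, assuming an nginx image and the names webpod and websvc (both are just examples):

# Create a pod running nginx
$ kubectl run webpod --image=nginx --port=80

# Expose the pod outside the cluster with a NodePort service
$ kubectl expose pod webpod --type=NodePort --port=80 --name=websvc

# Find the assigned node port (a value in the 30000-32767 range by default)
$ kubectl get svc websvc

# The pod is then reachable at http://<worker-node-public-ip>:<node-port>,
# provided the node security group allows inbound traffic on that port.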