For our general deployments we use the EKS service provided by AWS, but due to a special demand from a client to reduce costs, we started looking for alternative ways to host our service, such as Elastic Beanstalk and ECS.
Using an EC2 instance for deployment
Finally, we decided to go with a Kubernetes deployment, but to reduce costs, instead of using the AWS EKS service we decided to manage our own Kubernetes cluster on an EC2 instance.
Step 1: Creating a new EC2 instance
- Create a new EC2 instance with Amazon Linux as the OS and a t3.medium instance type.
- Create a new security group to expose the required ports of that EC2 instance for public access.
- Create a new key pair to log in to this instance from your machine, and save the .pem file on your local machine.
- Now run this command to log in to the instance:
ssh -i /path/to/your/key.pem ec2-user@<your-instance-public-ip>
Installing kubelet, kubeadm, and kubectl
Here are the steps to install Kubernetes, kubeadm, and kubectl on an Amazon Linux EC2 instance. Please note that these steps should be performed as the root user or a user with sudo privileges:
Install Docker:
sudo amazon-linux-extras install docker
sudo service docker start
sudo usermod -a -G docker ec2-user
Enable Docker to start on boot:
sudo systemctl enable docker
Install Kubernetes:
First, add the Kubernetes repository:
sudo tee /etc/yum.repos.d/kubernetes.repo <<EOF
[kubernetes]
name=Kubernetes
baseurl=https://pkgs.k8s.io/core:/stable:/v1.29/rpm/
enabled=1
gpgcheck=1
gpgkey=https://pkgs.k8s.io/core:/stable:/v1.29/rpm/repodata/repomd.xml.key
exclude=kubelet kubeadm kubectl cri-tools kubernetes-cni
EOF
Then, install Kubernetes:
sudo yum install -y kubelet kubeadm kubectl --disableexcludes=kubernetes
Start and enable the kubelet:
sudo systemctl start kubelet
sudo systemctl enable kubelet
Disable swap (the kubelet requires swap to be off; to make this persist across reboots, also comment out any swap entries in /etc/fstab):
sudo swapoff -a
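Before kubectl can be configured, the control plane itself has to be initialized: the /etc/kubernetes/admin.conf file copied in the next step is generated by kubeadm init. A minimal sketch of that step, assuming a single-node cluster (the pod network CIDR and the Flannel add-on are assumptions here, not part of the original setup; the CIDR must match whichever CNI plugin you install):

```shell
# Initialize the control plane; this generates /etc/kubernetes/admin.conf
# --pod-network-cidr=10.244.0.0/16 is an assumption matching Flannel's default
sudo kubeadm init --pod-network-cidr=10.244.0.0/16

# After configuring kubectl (next step), install a pod network add-on, e.g. Flannel:
# kubectl apply -f https://github.com/flannel-io/flannel/releases/latest/download/kube-flannel.yml

# On a single-node cluster, allow application pods to schedule on the control-plane node:
# kubectl taint nodes --all node-role.kubernetes.io/control-plane-
```

kubeadm init also prints a join command for adding worker nodes; on a single EC2 instance it can be ignored.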
Configure kubectl:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
Configure Nginx Ingress controller
- Add the NGINX Ingress Controller Helm Repository:
helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
- Install the NGINX Ingress Controller:
helm install nginx-ingress ingress-nginx/ingress-nginx --namespace kube-system
- Converting NGINX from LoadBalancer to NodePort
In Kubernetes, LoadBalancer and NodePort are types of services that allow you to expose your applications to external traffic. Here's what they do:
LoadBalancer: A LoadBalancer service is the standard way to expose a service to the internet. When you create a service of type LoadBalancer, Kubernetes provisions a cloud network load balancer and creates a stable IP address that you can use to reach your service. The load balancer routes incoming traffic to your pods, helping distribute the load and providing a single point of access to your service.
NodePort: A NodePort service is a simpler way to expose a service to the outside world. When you create a service of type NodePort, Kubernetes allocates a specific port on each node and forwards incoming traffic on that port to your service. This means your service can be accessed using the IP address of any node in your cluster and the allocated port. NodePort services are typically used in on-premises or bare metal environments where a cloud load balancer is not available.
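On a self-managed cluster there is no cloud controller to provision a load balancer, so the ingress controller's service can be switched to NodePort. A hedged sketch of the conversion (the release name and namespace match the install command above; the specific node port numbers are assumptions):

```shell
# Switch the ingress controller's service type from LoadBalancer to NodePort
helm upgrade nginx-ingress ingress-nginx/ingress-nginx \
  --namespace kube-system \
  --set controller.service.type=NodePort \
  --set controller.service.nodePorts.http=30080 \
  --set controller.service.nodePorts.https=30443

# Verify the service type and the allocated ports
kubectl get svc -n kube-system nginx-ingress-ingress-nginx-controller
```

Remember that whichever node ports end up allocated must also be opened in the EC2 instance's security group, or external traffic will never reach the ingress controller.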
Step 2: Deploying our service
- We use ECR to store our images, and we pull the latest image to build the pod for deployment.
- But to pull from that ECR repo, we need a secret that authorizes the image pull. This is handled automatically in an EKS cluster, but in our case we have to create the secret ourselves and use it for pulling the image.
namespaces=(<array of namespace where this deployment needs to be done>)

# Loop over the namespaces
for namespace in "${namespaces[@]}"
do
  # Create the namespace
  kubectl create namespace "$namespace"

  # Create the image-pull secret in the namespace
  kubectl create secret docker-registry ecr-secret \
    --docker-server=<docker-server> \
    --docker-username=AWS \
    --docker-password=<docker-password> \
    -n "$namespace"
done
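The secret only takes effect once a workload references it. A minimal sketch of a Deployment that pulls from ECR using the ecr-secret created above (the deployment name and image path are hypothetical placeholders):

```shell
kubectl apply -n <namespace> -f - <<EOF
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-service              # hypothetical name
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-service
  template:
    metadata:
      labels:
        app: my-service
    spec:
      imagePullSecrets:
        - name: ecr-secret      # the secret created in the loop above
      containers:
        - name: my-service
          image: <docker-server>/my-service:latest   # hypothetical image
EOF
```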
- But note that this secret expires after 12 hours, so we have to set up a cron job that creates a new secret for each namespace every 4 to 8 hours.
1. Install cronie on the machine.
2. Create the cron job script:
#!/bin/bash

# Find every namespace that currently has an ecr-secret
kube_namespaces=($(kubectl get secret --all-namespaces | grep ecr-secret | awk '{print $1}'))

# Recreate the secret in each of those namespaces
for i in "${kube_namespaces[@]}"
do
  echo "$(date): Updating secret for namespace - $i"
  kubectl delete secret ecr-secret --namespace "$i"
  kubectl create secret docker-registry ecr-secret \
    --docker-server=<docker-server> \
    --docker-username=<docker-username> \
    --docker-password=$(/usr/local/bin/aws ecr get-login-password) \
    --namespace="$i"
done
Note: the Docker password is fetched automatically by aws ecr get-login-password, provided the AWS CLI is configured with valid credentials. The secret must be named ecr-secret for this script to find and rotate it.
3. Give the script executable permissions so it can be run as a program:
chmod +x /root/scripts/aws-ecr-credentials.sh
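With the script executable, it can be scheduled. A sketch of a crontab entry running it every 6 hours, within the 4-8 hour window mentioned above (the log file path is an assumption):

```shell
# Edit root's crontab with: sudo crontab -e
# Run the rotation script every 6 hours and append its output to a log
0 */6 * * * /root/scripts/aws-ecr-credentials.sh >> /var/log/ecr-secret-rotation.log 2>&1
```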
About Apoorv Pandey
Apoorv Pandey is a Backend Developer at CyberMind Works. Specializing in optimizing backend systems and enhancing process efficiency, he excels in building resilient backend architectures. Apoorv is dedicated to harnessing technology to boost innovation and operational effectiveness in his projects.