How We Reduced Our Client's AWS Bill by 50%

In the world of cloud computing, AWS stands as a giant, providing a plethora of services to meet virtually any computational need. However, with great power comes great responsibility—and potentially great cost. As cloud usage scales, so do the expenses, and often, hidden or overlooked costs can balloon a company’s AWS bill unexpectedly.

Rising and Hidden Costs in AWS

AWS’s flexible, pay-as-you-go pricing model is a double-edged sword. While it allows businesses to scale their infrastructure dynamically, it also means that inefficiencies and unused resources can lead to significant wastage. Costs can escalate due to factors like unused instances, over-provisioned resources, data transfer fees, and more.

Recently, one of our clients faced this challenge. Their AWS bill had spiraled to an unsustainable $10,000 per month. We undertook a comprehensive review of their AWS usage and successfully reduced their monthly expenses by around 50%. Here's how we did it.

Identifying and Eliminating Waste

1. Unused and Over-Provisioned Instances
Our first step was to audit EC2 instances. We identified and terminated unused instances and rightsized over-provisioned ones. Similarly, for RDS instances, we adjusted the capacity to better match actual usage, ensuring no resources were unnecessarily oversized.
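As a rough illustration of this kind of audit, the sketch below flags running EC2 instances whose average CPU over the last two weeks sits below a threshold, so they can be reviewed for termination or rightsizing. The region and the 10% threshold are illustrative assumptions, and CPU alone is only a starting point, not a complete rightsizing analysis.

```python
# Minimal sketch: flag running EC2 instances with low two-week average CPU.
import boto3
from datetime import datetime, timedelta, timezone

REGION = "us-east-1"     # assumption: adjust to your region
CPU_THRESHOLD = 10.0     # assumption: average CPU (%) considered "idle"

ec2 = boto3.client("ec2", region_name=REGION)
cloudwatch = boto3.client("cloudwatch", region_name=REGION)

end = datetime.now(timezone.utc)
start = end - timedelta(days=14)

for page in ec2.get_paginator("describe_instances").paginate(
    Filters=[{"Name": "instance-state-name", "Values": ["running"]}]
):
    for reservation in page["Reservations"]:
        for instance in reservation["Instances"]:
            instance_id = instance["InstanceId"]
            stats = cloudwatch.get_metric_statistics(
                Namespace="AWS/EC2",
                MetricName="CPUUtilization",
                Dimensions=[{"Name": "InstanceId", "Value": instance_id}],
                StartTime=start,
                EndTime=end,
                Period=86400,            # one datapoint per day
                Statistics=["Average"],
            )
            datapoints = stats.get("Datapoints", [])
            if not datapoints:
                continue
            avg_cpu = sum(d["Average"] for d in datapoints) / len(datapoints)
            if avg_cpu < CPU_THRESHOLD:
                print(f"{instance_id} ({instance['InstanceType']}): "
                      f"avg CPU {avg_cpu:.1f}%, review for rightsizing")
```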

2. Switching to Graviton Processor Instances
To cut costs further, we migrated many instances to AWS Graviton-based equivalents. Graviton instances deliver comparable performance at roughly 20% lower cost than their x86-based counterparts, and this switch alone contributed significantly to the savings.
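A rough sketch of how candidates for migration could be shortlisted is below, assuming a hand-maintained mapping from x86 instance types to Graviton equivalents. The mapping is illustrative and incomplete, and workload compatibility (ARM-compatible AMIs, binaries, and container images) still has to be verified per instance.

```python
# Sketch: list running instances whose type has a Graviton counterpart.
import boto3

X86_TO_GRAVITON = {          # assumption: partial, illustrative mapping
    "m5.large": "m6g.large",
    "m5.xlarge": "m6g.xlarge",
    "c5.large": "c6g.large",
    "r5.large": "r6g.large",
}

ec2 = boto3.client("ec2", region_name="us-east-1")  # assumption: region

for page in ec2.get_paginator("describe_instances").paginate(
    Filters=[{"Name": "instance-state-name", "Values": ["running"]}]
):
    for reservation in page["Reservations"]:
        for instance in reservation["Instances"]:
            current = instance["InstanceType"]
            target = X86_TO_GRAVITON.get(current)
            if target:
                print(f"{instance['InstanceId']}: {current} -> candidate {target}")
```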

3. Removing Unused Resources
We cleaned up unused resources (a short detection sketch follows this list), including:

1. Unused load balancers
2. Unused provisioned capacity in DynamoDB
3. Unused Elastic IPs
4. Unused EBS volumes
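The sketch below shows how two of these items can be detected: unattached EBS volumes and Elastic IPs with no association, both of which keep accruing charges. The region is an assumption; load balancer and DynamoDB checks follow the same pattern but are omitted here.

```python
# Sketch: detect unattached EBS volumes and unassociated Elastic IPs.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")  # assumption: region

# EBS volumes in the "available" state are not attached to any instance.
for page in ec2.get_paginator("describe_volumes").paginate(
    Filters=[{"Name": "status", "Values": ["available"]}]
):
    for volume in page["Volumes"]:
        print(f"Unattached EBS volume: {volume['VolumeId']} ({volume['Size']} GiB)")

# Elastic IPs with no association still incur an hourly charge.
for address in ec2.describe_addresses()["Addresses"]:
    if "AssociationId" not in address:
        print(f"Unassociated Elastic IP: {address.get('PublicIp')}")
```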

Optimizing RDS Costs

1. Switching to Aurora I/O-Optimized
A major expense was I/O operations in Amazon Aurora, costing about $1,000 per month. Switching the cluster to the Aurora I/O-Optimized configuration eliminates per-request I/O charges in exchange for somewhat higher storage pricing. Given our relatively low storage requirements, this switch resulted in substantial savings.
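For reference, the switch itself is a single cluster modification. The sketch below assumes a hypothetical cluster named "app-aurora-cluster"; model both the I/O savings and the higher storage pricing before applying it.

```python
# Sketch: switch an Aurora cluster's storage configuration to I/O-Optimized.
import boto3

rds = boto3.client("rds", region_name="us-east-1")  # assumption: region

rds.modify_db_cluster(
    DBClusterIdentifier="app-aurora-cluster",  # hypothetical cluster name
    StorageType="aurora-iopt1",                # Aurora I/O-Optimized
    ApplyImmediately=True,
)
```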

2. Archiving Old Data
We discovered that a portion of the data in our RDS databases was old and infrequently accessed. Using AWS Database Migration Service (DMS), we transferred this data to S3, significantly reducing storage costs. The archived data remains queryable via Amazon Athena for any future needs.
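Once the archive is catalogued, querying it through Athena looks like the sketch below. The database, table, and results bucket names are hypothetical placeholders.

```python
# Sketch: run an Athena query against data archived from RDS to S3.
import boto3

athena = boto3.client("athena", region_name="us-east-1")  # assumption: region

response = athena.start_query_execution(
    QueryString=(
        "SELECT * FROM archived_orders "            # hypothetical table
        "WHERE order_date < DATE '2022-01-01' LIMIT 10"
    ),
    QueryExecutionContext={"Database": "rds_archive"},  # hypothetical database
    ResultConfiguration={
        "OutputLocation": "s3://example-athena-results/"  # hypothetical bucket
    },
)
print("Query started:", response["QueryExecutionId"])
```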

Enhancing Cost Visibility and Management

1. Enabling Cost Categorization by Resource
An often overlooked but crucial step was enabling cost categorization by resource. This feature in AWS Cost Explorer provides detailed insights into cost distribution, allowing us to identify high-cost resources easily. Here’s how we enabled it:

Step 1: Open the AWS Cost Management Console.
Step 2: Navigate to Cost Explorer.
Step 3: Select "Preferences" from the navigation pane.
Step 4: Under the "Cost Explorer Settings" section, enable the "Resource level data" checkbox.
Step 5: Enable "Daily granularity" to get detailed daily cost data by resource.
Step 6: Wait for up to 48 hours for the categorization to take effect.

Once enabled, this feature provided a clear breakdown of costs by resource, making it easier to identify and target specific areas for cost reduction.
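With resource-level data enabled, a similar breakdown is also available programmatically. The sketch below groups recent EC2 cost by resource ID; note that resource-level data in Cost Explorer only covers roughly the trailing 14 days, and the service filter here is an illustrative choice.

```python
# Sketch: daily EC2 cost grouped by resource ID via the Cost Explorer API.
import boto3
from datetime import date, timedelta

ce = boto3.client("ce", region_name="us-east-1")  # Cost Explorer API endpoint

end = date.today()
start = end - timedelta(days=14)

response = ce.get_cost_and_usage_with_resources(
    TimePeriod={"Start": start.isoformat(), "End": end.isoformat()},
    Granularity="DAILY",
    Filter={"Dimensions": {
        "Key": "SERVICE",
        "Values": ["Amazon Elastic Compute Cloud - Compute"],  # illustrative filter
    }},
    Metrics=["UnblendedCost"],
    GroupBy=[{"Type": "DIMENSION", "Key": "RESOURCE_ID"}],
)

for day in response["ResultsByTime"]:
    for group in day["Groups"]:
        resource_id = group["Keys"][0]
        cost = group["Metrics"]["UnblendedCost"]["Amount"]
        print(day["TimePeriod"]["Start"], resource_id, cost)
```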

2. Managing DB Snapshots

Our cost analysis revealed large, unused DB snapshots costing around $300 each. We deleted these unnecessary snapshots, contributing further to our cost reductions.
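A hedged sketch of the snapshot audit is below: it lists manual RDS snapshots older than a cutoff so they can be reviewed before deletion. The 90-day cutoff is an illustrative assumption, and the delete call is left commented out.

```python
# Sketch: list manual RDS snapshots older than a review cutoff.
import boto3
from datetime import datetime, timedelta, timezone

rds = boto3.client("rds", region_name="us-east-1")  # assumption: region
cutoff = datetime.now(timezone.utc) - timedelta(days=90)  # assumption: cutoff

for page in rds.get_paginator("describe_db_snapshots").paginate(SnapshotType="manual"):
    for snapshot in page["DBSnapshots"]:
        created = snapshot.get("SnapshotCreateTime")
        if created and created < cutoff:
            print(f"Candidate for deletion: {snapshot['DBSnapshotIdentifier']} "
                  f"({snapshot.get('AllocatedStorage')} GiB, created {created:%Y-%m-%d})")
            # Review before uncommenting:
            # rds.delete_db_snapshot(
            #     DBSnapshotIdentifier=snapshot["DBSnapshotIdentifier"])
```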

Streamlining EKS Clusters and NAT Gateways

1. Reducing EKS Clusters
We consolidated our Kubernetes deployments from four EKS clusters to two, cutting the EKS cluster charges in half.

2. Re-evaluating NAT Gateway Usage
NAT gateways, which provide outbound internet access for resources in private subnets, were another significant cost. We evaluated open-source alternatives such as fck-nat but ultimately decided to reconfigure the subnets to be public, given our limited use of private subnets.
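Before deciding, it helps to inventory what is actually in use, since each NAT gateway carries a fixed hourly charge plus per-GB data processing. A minimal listing sketch, with the region as an assumption:

```python
# Sketch: list active NAT gateways and where they live.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")  # assumption: region

nat_gateways = ec2.describe_nat_gateways(
    Filters=[{"Name": "state", "Values": ["available"]}]
)["NatGateways"]

for nat in nat_gateways:
    print(f"NAT gateway {nat['NatGatewayId']} in VPC {nat['VpcId']} "
          f"(subnet {nat['SubnetId']})")
```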

Implementing Intelligent Tiering and Custom Logging

1. S3 Intelligent-Tiering
To manage S3 costs, we enabled S3 Intelligent-Tiering. This storage class automatically moves objects to the most cost-effective access tier based on access patterns, optimizing our storage expenses without compromising performance.
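One way to apply this is a lifecycle rule that transitions objects into Intelligent-Tiering shortly after upload, as sketched below. The bucket name and the immediate (0-day) transition are illustrative assumptions; new objects can also be written with the INTELLIGENT_TIERING storage class directly.

```python
# Sketch: lifecycle rule moving all objects into S3 Intelligent-Tiering.
import boto3

s3 = boto3.client("s3")

s3.put_bucket_lifecycle_configuration(
    Bucket="example-app-assets",  # hypothetical bucket name
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "move-to-intelligent-tiering",
                "Status": "Enabled",
                "Filter": {"Prefix": ""},  # apply to all objects
                "Transitions": [
                    {"Days": 0, "StorageClass": "INTELLIGENT_TIERING"}
                ],
            }
        ]
    },
)
```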

2. Replacing CloudWatch Logs

CloudWatch Logs was a major cost driver. To mitigate this, we set up our own logging infrastructure using SigNoz, an open-source observability tool, and discontinued the use of CloudWatch for application logging.
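One small, hedged step in a wind-down like this is capping retention on existing CloudWatch log groups so stored log data stops accumulating charges while application logging is redirected to the self-hosted stack. The 7-day retention below is an illustrative assumption.

```python
# Sketch: set a short retention period on all CloudWatch log groups.
import boto3

logs = boto3.client("logs", region_name="us-east-1")  # assumption: region

for page in logs.get_paginator("describe_log_groups").paginate():
    for group in page["logGroups"]:
        logs.put_retention_policy(
            logGroupName=group["logGroupName"],
            retentionInDays=7,  # assumption: retention suited to the wind-down
        )
        print(f"Set 7-day retention on {group['logGroupName']}")
```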

ECR Lifecycle Policies

To minimize costs associated with our container registry, we set up lifecycle policies in ECR to automatically delete unused images, thus saving additional storage costs.
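A sketch of such a policy is below, assuming a hypothetical repository named "app-backend". The rule expires untagged images after 14 days; retention counts and tag filters should match your release process.

```python
# Sketch: apply an ECR lifecycle policy that expires old untagged images.
import json
import boto3

ecr = boto3.client("ecr", region_name="us-east-1")  # assumption: region

policy = {
    "rules": [
        {
            "rulePriority": 1,
            "description": "Expire untagged images older than 14 days",
            "selection": {
                "tagStatus": "untagged",
                "countType": "sinceImagePushed",
                "countUnit": "days",
                "countNumber": 14,   # assumption: retention window
            },
            "action": {"type": "expire"},
        }
    ]
}

ecr.put_lifecycle_policy(
    repositoryName="app-backend",          # hypothetical repository
    lifecyclePolicyText=json.dumps(policy),
)
```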

Conclusion

Through diligent auditing, strategic optimization, and leveraging cost-saving AWS features, we successfully reduced our client’s AWS bill from $10,000 to around $5,000 per month. This significant saving improved their bottom line and enhanced operational efficiency.

Our expertise in cloud optimization and resource management enabled us to tailor a solution that maximized value without compromising performance. Partnering with us ensures access to industry-leading expertise and innovative solutions for optimal cloud efficiency.


About Boopesh Mahendran

Boopesh is one of the Co-Founders of CyberMind Works and the Head of Engineering. An alum of Madras Institute of Technology with a rich professional background, he has previously worked at Adobe and Amazon. His expertise drives the innovative solutions at CyberMind Works.


Want to build something like this for yourself? Reach out to us!
