If you're thinking about moving your operations to the public cloud, you probably know about the benefits like improved agility, better performance, and lower costs. But there's one more important factor to think about: being eco-friendly.
When we look at how much energy global data centers use, it's almost as much as Spain uses in a whole year. Moving towards a green cloud model could make a big difference by reducing global CO2 emissions. This shift could cut total IT emissions by 5.9%, which is like taking 22 million cars off the road.
Why it Matters for Businesses
Carbon footprint refers to the total amount of greenhouse gases, specifically carbon dioxide (CO2) and other carbon compounds, that are released into the atmosphere as a result of human activities. This measurement is usually expressed in equivalent tons of CO2 emitted.
In July 2023, we experienced one of the hottest months in about 120,000 years. This highlights the pressing need to tackle climate change. As consumer habits change in response to these challenges, the demand for eco-friendly products and services is skyrocketing.
The tech industry is heading towards cutting ties with data centers powered by fossil fuels. To stay ahead in the market and maintain the trust of customers and investors, companies must act now to reduce their environmental impact. This can involve making smart choices about their cloud provider or even making significant changes themselves.
The good news is that major cloud providers not only offer solutions for your immediate needs but can also be a great starting point for your journey toward sustainable software development.
Embracing cloud solutions boosts sustainability and brings notable cost savings to organizations. Cloud providers offer scalable and pay-as-you-go models, enabling businesses to optimize resource use and cut operational costs.
Why Cloud is More Affordable
Cloud computing transforms the landscape of IT services, moving away from traditional desktop setups to remote data centers. Users can effortlessly access on-demand infrastructure, eliminating the need for on-site installation and maintenance.
Green cloud computing takes this concept a step further by utilizing renewable energy sources, reducing energy consumption, and making a significant dent in the carbon footprint.
Virtualization and containerization, which partition hardware so multiple operating systems and workloads can share it, reduce the number of servers needed and cut energy consumption. AI-based resource scheduling, guided by historical usage data, conserves energy. Infrastructure as a Service (IaaS) optimization, focused on virtual machines and containers, contributes to eco-conscious IT.
A notable 2020 study revealed an interesting trend: despite a 550% increase in computing output, data center energy consumption only grew by 6%. This underscores the efficiency achieved through sustainable practices in cloud computing.
Ready to embrace the benefits of cloud migration? Contact Gart today, and let us guide you through a seamless transition to the cloud. The time is now to elevate your operations and embrace the future of digital efficiency.
A Closer Look at Each Cloud
In a world where the demand for environmentally-friendly businesses is growing, it's essential to be wary of greenwashing—superficial claims without real actions. When seeking vendors, prioritize those who openly share their sustainability efforts and show tangible results. Analyzing these details safeguards against false promises and ensures genuine steps toward a greener future receive support.
Top cloud service providers—Google, Amazon, and Microsoft—are actively committed to becoming carbon-neutral and transitioning their data centers to fully sustainable hosting by 2030.
The efficiency of public cloud infrastructure surpasses that of on-premises data centers, with a strong focus on adopting green energy sources.
AWS aims for 100% renewable energy by 2025 and net-zero emissions by 2040.
Azure plans to use 100% renewable energy by 2025 and strives to be carbon negative by 2030.
GCP is set to rely on 100% carbon-free energy by 2030.
Choosing these providers aligns with the pursuit of a sustainable and eco-friendly digital landscape.
Don't miss the opportunity to transform your operations. Contact Gart now, and let's initiate your seamless transition to the cloud—where innovation meets sustainability.
Google: Carbon-Free Operations, Water Conservation, and Cloud Sustainability
Google aims to power all its global operations with 100% carbon-free energy around the clock by 2030. They achieved carbon-neutrality in 2007 and have been using renewable energy for their data centers since 2017.
The company invests in technology for carbon removal solutions to offset its emissions. Google also has a goal to replenish 120% of the water consumed in its data centers and facilities.
Public cloud services, like Google's, rely on energy-efficient hyperscale data centers. These centers outperform smaller server rooms thanks to innovative infrastructure design and advanced cooling technology. Operating in a Google data center reduces the electricity needed per unit of IT work, yielding better (that is, lower) power usage effectiveness (PUE) than typical enterprise data centers.
Google Cloud not only prioritizes sustainability in its operations but also offers the Carbon Footprint tool for customers. This tool allows users to monitor and measure carbon emissions from their cloud applications, covering Scope 1, 2, and 3. It serves as an emissions calculator, aiding companies in reporting their gross carbon footprint and offering best practices for building low-carbon applications in Google Cloud.
Read more: Google Cloud Migration Services
Microsoft: Pioneering Carbon Reduction, Circular Solutions, and Cloud Sustainability
Microsoft aims to cut carbon emissions by over 50% by 2030 and eliminate its historical carbon footprint by 2050. They're shifting to 100% renewable energy for data centers and buildings by 2025, and zero waste is on the agenda by 2030.
Microsoft's Circular Centers, introduced in 2020 as part of its sustainability strategy, repurpose old servers to combat growing e-waste.
Tools like Microsoft Cloud for Sustainability offer real-time insights into carbon emissions, while the Emissions Impact Dashboard for Microsoft 365 calculates cloud workload footprints.
Microsoft's focus areas include lowering energy consumption, green data centers, water management, and waste reduction through responsible sourcing and recycling.
Four key drivers reduce the energy and carbon footprint of the Microsoft Cloud: IT operational efficiency, equipment efficiency, datacenter infrastructure efficiency, and new renewable electricity, targeting 100% by 2025.
Read more: Azure Migration Services
Amazon: Leading the Charge with Net-Zero Commitment and Sustainable Solutions
As a co-founder of The Climate Pledge, Amazon joins 400 global companies committed to achieving net-zero carbon emissions by 2040. Their strategies include reducing material usage, innovating for energy efficiency, and embracing renewable energy solutions.
Amazon, the largest corporate buyer of renewable energy since 2020, leads in sustainable practices to decarbonize its transportation network.
A study by 451 Research found that US enterprises, on average, could cut their carbon footprint by up to 88% by moving to AWS from on-premises data centers.
Amazon introduces the AWS Customer Carbon Footprint Tool, an emissions calculator for customers. It provides data on carbon footprint, including Scope 1 and Scope 2 emissions from cloud service usage. It also estimates the carbon emission reduction achieved by transitioning operations to the cloud.
Read more: AWS Migration Services
Empower Your Green Transition
Ready to take the leap into the public cloud? Before you dive in, a word of advice: Cloud migration is more than a simple "lift and shift." It requires a strategic approach, choosing the right vendor, ensuring infrastructure readiness, and aligning IT and business objectives.
However, the investment in this transition pays off. Shifting operations to the public cloud and prioritizing cloud-based applications can potentially reduce global emissions and energy consumption by up to 20 percent.
Feeling inspired to make a positive impact? Now's the time to act. Contact Gart, and we'll guide you through the migration process. Let's contribute to a greener future together!
The shift to the cloud is more than a technological choice—it's a crucial transformation shaping the way organizations operate.
Your organization's objectives and business results play a pivotal role in shaping your approach to financial matters. The cloud can enhance the flexibility of your IT cost structure.
Today we'll provide insights to help you construct a compelling business case for migrating to the cloud.
Financial Considerations in Cloud Transformation
Several crucial factors shape the success of a cloud migration journey:
Cloud Pricing Models and CAPEX to OPEX Shift
CapEx and OpEx expenditures differ across various aspects, encompassing their treatment for tax, financial, and operational reporting. Let's explore these distinctions.
Examples of CapEx expenditures in the cloud may include:
Infrastructure Purchases: Procuring physical servers, networking equipment, or storage devices for a cloud deployment.
Software Licenses: Upfront costs for purchasing software licenses or subscriptions with long-term agreements.
Custom Development: Investing in the development of custom applications or solutions tailored to specific business needs.
Data Center Construction: If an organization constructs its own data center to house cloud infrastructure, the construction costs would be considered CapEx.
Migration Costs: Initial expenses associated with migrating existing systems and data to the cloud.
Hardware Upgrades: Costs related to upgrading or expanding hardware components within the cloud infrastructure.
It's important to note that cloud services often operate on an OpEx (Operational Expenditure) model, providing a more flexible cost structure where expenses are incurred as services are used, rather than requiring significant upfront capital investments. The distinction between CapEx and OpEx is crucial for organizations to optimize their financial strategies when adopting cloud technologies.
Examples of Operational Expenditure (OpEx) in the cloud include:
Subscription Fees: Regular payments for ongoing subscriptions to cloud services, such as Software as a Service (SaaS) applications.
Usage-based Costs: Charges based on the actual usage of resources, such as compute power, storage, and data transfer.
Managed Services Fees: Payments for cloud-managed services that handle specific tasks, reducing the need for in-house management.
Data Transfer Costs: Charges associated with transferring data between different regions or out of the cloud provider's network.
Support and Maintenance: Fees for support services and ongoing maintenance of cloud infrastructure.
Scaling Costs: Additional expenses incurred when scaling resources up or down based on demand.
Training and Certification: Expenditures related to training employees on cloud technologies and obtaining certifications.
Security Services: Payments for cloud security services to protect data and applications.
Backup and Recovery Services: Costs for cloud-based backup and recovery solutions.
Consulting Services: Fees for external consulting services to optimize cloud usage and architecture.
OpEx in the cloud offers a pay-as-you-go model, providing organizations with flexibility and the ability to align expenses with actual usage.
| Criteria | CAPEX (Capital Expenditure) | OPEX (Operational Expenditure) |
|---|---|---|
| Definition | Investments in assets with long-term value | Day-to-day expenses for ongoing business operations |
| Nature of Expense | Significant upfront costs | Regular, recurring costs |
| Time Horizon | Long-term focus with benefits realized over time | Short-term focus with immediate benefits |
| Tax Treatment | Generally depreciated over time | Deductible in the year incurred |
| Flexibility | Limited flexibility for adjustments | High flexibility to scale up or down as needed |
| Budgeting | Upfront budgeting and planning required | Easier to budget as costs are predictable |
| Examples | Purchasing equipment, buildings, software licenses | Rent, utilities, salaries, maintenance costs |

CAPEX involves significant initial investments in long-term assets, while OPEX covers day-to-day operational expenses with more flexibility and a shorter-term focus.
Unlike the traditional model of capital expenditures (CAPEX), the cloud operates on an operational expenditures (OPEX) basis. This shift provides a more flexible cost structure, aligning expenses with actual usage and allowing for dynamic scalability.
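As a rough illustration of this shift, the sketch below contrasts an amortized upfront purchase with pay-as-you-go billing. All figures are hypothetical, chosen only to show the mechanics.

```python
# Illustrative CapEx vs OpEx cost profiles. All figures are hypothetical,
# chosen only to show the mechanics of the shift.

def capex_monthly_cost(upfront: float, lifetime_months: int) -> float:
    """Straight-line amortization of an upfront purchase: the monthly
    cost is fixed whether the asset is busy or idle."""
    return upfront / lifetime_months

def opex_monthly_cost(unit_price: float, units_used: float) -> float:
    """Pay-as-you-go: cost tracks actual usage."""
    return unit_price * units_used

# A $36,000 server amortized over 3 years costs $1,000 every month.
capex = capex_monthly_cost(36_000, 36)

# A cloud equivalent at a hypothetical $0.10 per instance-hour scales
# with demand instead: ~$200 in a quiet month, ~$1,200 in a busy one.
quiet_month = opex_monthly_cost(0.10, 2_000)
busy_month = opex_monthly_cost(0.10, 12_000)
```

The point is not the exact numbers but the shape of the curve: CapEx is flat regardless of demand, while OpEx rises and falls with it.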
Reduced Data Center Footprint and Increased Productivity
Moving to the cloud reduces the need for big on-site data centers, saving costs and making operations more efficient. It also allows quick adjustments to resources, matching IT needs with actual demand, boosting productivity.
DevOps Integration for Efficiency and Time-to-Market
The cloud and DevOps work together to improve how businesses operate. Combining DevOps practices with cloud technology makes processes more efficient, speeds up bringing products to market, and encourages collaboration between development and operations teams. This teamwork streamlines growth, especially for startups, by providing scalable resources in the cloud.
This combination also cuts operating costs through automation, which is crucial for business leaders focused on digital transformation. It encourages innovation, saves money, motivates employees, and aligns with the need for efficient processes to deliver top-notch goods and services. Overall, blending DevOps and the cloud accelerates important technological changes that affect business goals.
Immediate Sustainability Benefits of Cloud Migration
The initial step in the journey towards reducing greenhouse gas (GHG) emissions is understanding the magnitude of the IT estate's carbon footprint. Data centers, contributing significantly to carbon emissions, present a crucial area for improvement. According to the World Economic Forum, data centers have a larger carbon footprint than the aviation industry, accounting for 2.5% of all human-induced carbon dioxide. For some organizations, IT's contribution to the total carbon footprint ranges between 5-10%, with potential highs of 45%.
A survey by Gartner, Inc. revealed that 87% of business leaders expect to increase their investment in sustainability over the next two years.
Cloud providers invest in green technologies on a large scale, reducing the carbon footprint of organizations. This shift aligns with environmental goals and allows organizations to optimize carbon efficiency by focusing on operational expenditure.
For example, Microsoft, a key player in the industry, is taking substantial steps to measure and enhance the sustainability of its Azure Cloud. The company's commitment to addressing environmental challenges was underscored at COP26, the global climate conference held in November 2021.
The company introduced the Microsoft Cloud for Sustainability, an Azure-based platform designed to consolidate disparate data sources. This platform enables organizations to gain insights into improving their sustainability approaches. Microsoft provides data on its datacenter Power Usage Effectiveness (PUE) and Water Usage Effectiveness (WUE) metrics. PUE measures the efficiency of energy consumption in datacenters, while WUE assesses water use efficiency.
Unlock the full potential of your business with Azure Migration Services. Seamlessly transition to the cloud, optimize performance, and accelerate innovation. Embrace the future of digital transformation with confidence – let Azure Migration Services guide your journey.
AWS, as the largest corporate buyer of renewable energy, demonstrates a strong commitment to sustainability. In 2022, all electricity consumed across 19 AWS Regions was sourced from 100% renewable energy.
Research from 451 Research suggests that migrating on-premises workloads to AWS can reduce workload carbon footprints by at least 80%. This figure may reach an impressive 96% once AWS achieves its 100% renewable energy goal by 2025.
Case studies from companies like IBM, Accenture, Deloitte, ATOS, and Illumina highlight how sustainability motivates cloud migration. Illumina, in particular, reduced carbon emissions by 89% and lowered data storage costs using AWS.
Understanding the carbon footprint reduction potential requires precise tools. While generic calculators exist, AWS offers a specialized tool called AWS Migration Evaluator (ME). This tool uses real-time IT resource utilization data to provide projected cost and carbon emission savings.
Elevate your business to new heights with AWS Migration Services. Seamlessly migrate to the cloud, enhance scalability, and drive innovation. Unleash the power of AWS to transform your digital landscape today.
Conclusion
The transformation to the cloud is a pivotal shift that extends beyond technology, fundamentally reshaping how organizations operate. Considering your organization's goals and financial strategy is crucial in navigating this transformative journey. The cloud introduces flexibility into your IT cost structure, enabling dynamic scalability based on actual usage.
Migrating on-premises workloads to the cloud not only reduces carbon footprints but also contributes to significant cost savings.
To explore how your company can benefit from cloud migration, including potential cost savings, consider consulting with our expert engineers. Schedule a call today for personalized insights and guidance on navigating your digital transformation journey efficiently.
In my experience optimizing cloud costs, especially on AWS, I often find that many quick wins are in the "easy to implement - good savings potential" quadrant.
That's why I've decided to share some straightforward methods for optimizing expenses on AWS that will help you save over 80% of your budget.
Choose reserved instances
Potential Savings: Up to 72%
Reserved Instances involve committing to a subscription, at least in part, in exchange for a discount on long-term rentals of one to three years. While planning a year ahead is often considered long-term for many companies, especially in Ukraine, reserving resources for 1-3 years carries risk but comes with the reward of a maximum discount of up to 72%.
You can check all the current pricing details on the official website - Amazon EC2 Reserved Instances
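To see what such a discount means in practice, here is a minimal sketch of the effective hourly rate under an RI discount. The $0.096 on-demand price is an assumed example; real discounts vary by instance type, term length, and payment option.

```python
# Effective hourly rate after a Reserved Instance discount. The 72%
# figure is the maximum discount mentioned above; the $0.096 on-demand
# price is an assumed example, not a quoted AWS rate.

def reserved_hourly_rate(on_demand_hourly: float, discount: float) -> float:
    """Price per hour after applying a fractional discount (0.0-1.0)."""
    return on_demand_hourly * (1 - discount)

# At the maximum 72% discount, a $0.096/hour instance costs ~$0.027/hour.
effective = reserved_hourly_rate(0.096, 0.72)
```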
Purchase Savings Plans (Instead of On-Demand)
Potential Savings: Up to 72%
There are three types of Savings Plans: the Compute Savings Plan, the EC2 Instance Savings Plan, and the SageMaker Savings Plan.
AWS Compute Savings Plan is an Amazon Web Services option that allows users to receive discounts on computational resources in exchange for committing to using a specific volume of resources over a defined period (usually one or three years). This plan offers flexibility in utilizing various computing services, such as EC2, Fargate, and Lambda, at reduced prices.
AWS EC2 Instance Savings Plan is a program from Amazon Web Services that offers discounted rates exclusively for the use of EC2 instances. This plan is specifically tailored for the utilization of EC2 instances, providing discounts for a specific instance family, regardless of the region.
AWS SageMaker Savings Plan allows users to get discounts on SageMaker usage in exchange for committing to using a specific volume of computational resources over a defined period (usually one or three years).
Discounts are available for one- and three-year terms with full upfront, partial upfront, or no upfront payment. The EC2 Instance Savings Plan can save up to 72%, but it applies exclusively to EC2 instances.
Utilize Various Storage Classes for S3 (Including Intelligent-Tiering)
Potential Savings: 40% to 95%
AWS offers numerous options for storing data at different access levels. For instance, S3 Intelligent-Tiering automatically stores objects across three access tiers: one optimized for frequent access, a roughly 40% cheaper tier optimized for infrequent access, and a roughly 68% cheaper tier optimized for rarely accessed data (e.g., archives).
S3 Intelligent-Tiering has the same price per 1 GB as S3 Standard — $0.023 USD.
However, the key advantage of Intelligent-Tiering is its ability to automatically move objects that haven't been accessed for a set period to cheaper access tiers.
After 30, 90, and 180 days without access, Intelligent-Tiering automatically shifts an object to the next access tier, potentially saving companies from 40% to 95%. For certain objects (e.g., archives), this can mean paying only $0.0125 or even $0.004 per GB instead of the standard $0.023.
You can find current details on the Amazon S3 pricing page.
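The tier economics above can be sketched with a small calculator. The per-GB prices are the ones quoted in this section; they vary by region and change over time, so treat them as illustrative.

```python
# Monthly S3 storage cost per access tier, using the per-GB prices
# quoted in this section. Prices vary by region and change over time,
# so treat these as illustrative.

PRICE_PER_GB = {
    "frequent": 0.023,          # S3 Standard / frequent-access tier
    "infrequent": 0.0125,       # infrequent-access tier
    "archive_instant": 0.004,   # rarely accessed data
}

def monthly_cost(gb: float, tier: str) -> float:
    return gb * PRICE_PER_GB[tier]

def saving_vs_standard(tier: str) -> float:
    """Fractional saving of a tier relative to the frequent-access price."""
    return 1 - PRICE_PER_GB[tier] / PRICE_PER_GB["frequent"]

# 1 TB that Intelligent-Tiering has drifted down to the cheapest tier
# costs ~$4 per month instead of ~$23.5 at the standard rate.
archived = monthly_cost(1024, "archive_instant")
standard = monthly_cost(1024, "frequent")
```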
AWS Compute Optimizer
Potential Savings: quite significant
The AWS Compute Optimizer dashboard is a tool that lets users assess and prioritize optimization opportunities for their AWS resources.
The dashboard provides detailed information about potential cost savings and performance improvements, as the recommendations are based on an analysis of resource specifications and usage metrics.
The dashboard covers various types of resources, such as EC2 instances, Auto Scaling groups, Lambda functions, Amazon ECS services on Fargate, and Amazon EBS volumes.
For example, AWS Compute Optimizer surfaces information about underutilized or overutilized resources allocated to ECS Fargate services or Lambda functions. Regularly reviewing this dashboard can help you make informed decisions to optimize costs and improve performance.
Use Fargate in EKS for underutilized EC2 nodes
If your EKS nodes sit underutilized most of the time, it makes sense to consider Fargate profiles. With AWS Fargate, you pay for the specific amount of memory and CPU your Pod needs, rather than for an entire EC2 virtual machine.
For example, say you have an application deployed in a Kubernetes cluster managed by Amazon EKS (Elastic Kubernetes Service). The application experiences variable traffic, with peak loads during specific hours of the day or week (like a marketplace or an online store), and you want to optimize infrastructure costs. To address this, create a Fargate profile that defines which Pods should run on Fargate, and configure the Kubernetes Horizontal Pod Autoscaler (HPA) to automatically scale the number of Pod replicas based on their resource usage (such as CPU or memory).
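As a minimal sketch, this is the shape of the request you would pass to boto3's `eks.create_fargate_profile` (as `create_fargate_profile(**payload)`). The cluster name, namespace, role ARN, and subnet IDs are hypothetical placeholders.

```python
# Builds the request you would pass to boto3's
# eks.create_fargate_profile(**payload). All names below (cluster,
# namespace, role ARN, subnets) are hypothetical placeholders.

def fargate_profile_payload(cluster: str, namespace: str,
                            pod_role_arn: str, subnets: list) -> dict:
    return {
        "fargateProfileName": f"{namespace}-fargate",
        "clusterName": cluster,
        # IAM role the Fargate-run Pods use to pull images and ship logs.
        "podExecutionRoleArn": pod_role_arn,
        # Only Pods in this namespace (optionally narrowed by labels)
        # are scheduled onto Fargate instead of EC2 nodes.
        "selectors": [{"namespace": namespace}],
        "subnets": subnets,
    }

payload = fargate_profile_payload(
    "demo-cluster",
    "web",
    "arn:aws:iam::123456789012:role/demo-pod-execution-role",
    ["subnet-0aaa", "subnet-0bbb"],
)
```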
Manage Workload Across Different Regions
Potential Savings: significant in most cases
When handling workload across multiple regions, it's crucial to consider various aspects such as cost allocation tags, budgets, notifications, and data remediation.
Cost Allocation Tags: Classify and track expenses based on different labels like program, environment, team, or project.
AWS Budgets: Define spending thresholds and receive notifications when expenses exceed set limits. Create budgets specifically for your workload or allocate budgets to specific services or cost allocation tags.
Notifications: Set up alerts when expenses approach or surpass predefined thresholds. Timely notifications help take actions to optimize costs and prevent overspending.
Remediation: Implement mechanisms to rectify expenses based on your workload requirements. This may involve automated actions or manual interventions to address cost-related issues.
Regional Variances: Consider regional differences in pricing and data transfer costs when designing workload architectures.
Reserved Instances and Savings Plans: Utilize reserved instances or savings plans to achieve cost savings.
AWS Cost Explorer: Use this tool for visualizing and analyzing your expenses. Cost Explorer provides insights into your usage and spending trends, enabling you to identify areas of high costs and potential opportunities for cost savings.
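As a concrete example of the budgets-plus-notifications setup above, here is a hedged sketch of the arguments for boto3's `budgets.create_budget`. The account ID, budget name, limit, and email address are placeholders.

```python
# Arguments for boto3's budgets.create_budget(**budget_request): a
# monthly cost budget with an email alert at 80% of the limit. The
# account ID, budget name, amount, and address are placeholders.

def monthly_cost_budget(name: str, limit_usd: str, email: str) -> dict:
    return {
        "AccountId": "123456789012",  # placeholder account
        "Budget": {
            "BudgetName": name,
            "BudgetLimit": {"Amount": limit_usd, "Unit": "USD"},
            "TimeUnit": "MONTHLY",
            "BudgetType": "COST",
        },
        "NotificationsWithSubscribers": [{
            "Notification": {
                "NotificationType": "ACTUAL",
                "ComparisonOperator": "GREATER_THAN",
                "Threshold": 80.0,            # percent of the limit
                "ThresholdType": "PERCENTAGE",
            },
            "Subscribers": [
                {"SubscriptionType": "EMAIL", "Address": email},
            ],
        }],
    }

budget_request = monthly_cost_budget("team-web-monthly", "500",
                                     "ops@example.com")
```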
Transition to Graviton (ARM)
Potential Savings: Up to 30%
Graviton utilizes Amazon's server-grade ARM processors developed in-house. The new processors and instances prove beneficial for various applications, including high-performance computing, batch processing, electronic design automation (EDA), multimedia encoding, scientific modeling, distributed analytics, and machine learning inference.
The processor family is based on the ARM architecture and is built as a system on a chip (SoC). This translates to lower power consumption while still offering satisfactory performance for the majority of clients. Key advantages of AWS Graviton include cost reduction, low latency, improved scalability, enhanced availability, and security.
Spot Instances Instead of On-Demand
Potential Savings: Up to 30%
Utilizing spot instances is essentially a resource exchange. When Amazon has surplus resources lying idle, you can set the maximum price you're willing to pay for them. The catch is that if there are no available resources, your requested capacity won't be granted.
However, there's a risk that if demand suddenly surges and the spot price exceeds your set maximum price, your spot instance will be terminated.
Spot instances operate like an auction, so the price is not fixed. We specify the maximum we're willing to pay, and AWS determines who gets the computational power. If we are willing to pay $0.1 per hour and the market price is $0.05, we will pay exactly $0.05.
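The billing rule just described can be captured in a few lines:

```python
# The spot billing rule described above: you pay the current market
# price as long as it stays at or below your maximum bid; if the market
# price rises above the bid, the instance is reclaimed.

def spot_outcome(max_bid: float, market_price: float):
    """Hourly price paid, or None if the instance is terminated."""
    return market_price if market_price <= max_bid else None

assert spot_outcome(0.10, 0.05) == 0.05  # the example from the text
assert spot_outcome(0.10, 0.12) is None  # price spiked above the bid
```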
Use Interface Endpoints or Gateway Endpoints to save on traffic costs (S3, SQS, DynamoDB, etc.)
Potential Savings: Depends on the workload
Interface Endpoints operate based on AWS PrivateLink, allowing access to AWS services through a private network connection without going through the internet. By using Interface Endpoints, you can save on data transfer costs associated with traffic.
Utilizing Interface Endpoints or Gateway Endpoints can indeed help save on traffic costs when accessing services like Amazon S3, Amazon SQS, and Amazon DynamoDB from your Amazon Virtual Private Cloud (VPC).
Key points:
Amazon S3: With an Interface Endpoint for S3, you can privately access S3 buckets without incurring data transfer costs between your VPC and S3.
Amazon SQS: Interface Endpoints for SQS enable secure interaction with SQS queues within your VPC, avoiding data transfer costs for communication with SQS.
Amazon DynamoDB: Using an Interface Endpoint for DynamoDB, you can access DynamoDB tables in your VPC without incurring data transfer costs.
Additionally, Interface Endpoints provide private access to AWS services using private IP addresses within your VPC, so traffic never needs to traverse an internet gateway. This removes the data transfer costs of reaching services like S3, SQS, and DynamoDB from your VPC.
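For reference, a sketch of the parameters you would pass to boto3's `ec2.create_vpc_endpoint`. A Gateway endpoint for S3 is shown, since Gateway endpoints for S3 and DynamoDB carry no hourly charge; the VPC and route table IDs are placeholders.

```python
# Parameters for boto3's ec2.create_vpc_endpoint(**params). A Gateway
# endpoint for S3 is shown. The VPC and route table IDs are placeholders.

def s3_gateway_endpoint_params(vpc_id: str, route_table_ids: list,
                               region: str = "eu-central-1") -> dict:
    return {
        "VpcEndpointType": "Gateway",
        "VpcId": vpc_id,
        # Service names follow the com.amazonaws.<region>.<service> scheme.
        "ServiceName": f"com.amazonaws.{region}.s3",
        # Gateway endpoints attach to route tables rather than subnets.
        "RouteTableIds": route_table_ids,
    }

params = s3_gateway_endpoint_params("vpc-0123456789abcdef0", ["rtb-0abc"])
```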
Optimize Image Sizes for Faster Loading
Potential Savings: Depends on the workload
Optimizing image sizes can help you save in various ways.
Reduce ECR Costs: By storing smaller images, you can cut down expenses on Amazon Elastic Container Registry (ECR).
Minimize EBS Volumes on EKS Nodes: Keeping smaller volumes on Amazon Elastic Kubernetes Service (EKS) nodes helps in cost reduction.
Accelerate Container Launch Times: Faster container launch times ultimately lead to quicker task execution.
Optimization Methods:
Use the Right Image: Employ the most efficient image for your task; for instance, Alpine may be sufficient in certain scenarios.
Remove Unnecessary Data: Trim excess data and packages from the image.
Multi-Stage Image Builds: Utilize multi-stage image builds by employing multiple FROM instructions.
Use .dockerignore: Prevent the addition of unnecessary files by employing a .dockerignore file.
Reduce Instruction Count: Minimize the number of instructions, as each one creates an additional image layer. Group related commands using the && operator.
Layer Ordering: Move frequently changing layers to the end of the Dockerfile so the earlier, stable layers stay cached.
These optimization methods can contribute to faster image loading, reduced storage costs, and improved overall performance in containerized environments.
Use Load Balancers to Save on IP Address Costs
Potential Savings: depends on the workload
Starting in February 2024, Amazon bills for each public IPv4 address. Employing a load balancer can reduce IP address costs by sharing a single IP address, multiplexing traffic between ports, applying load balancing algorithms, and handling SSL/TLS termination.
By consolidating multiple services and instances under a single IP address, you can achieve cost savings while effectively managing incoming traffic.
Optimize Database Services for Higher Performance (MySQL, PostgreSQL, etc.)
Potential Savings: depends on the workload
AWS provides default settings for databases that are suitable for average workloads. If a significant portion of your monthly bill is related to AWS RDS, it's worth paying attention to parameter settings related to databases.
Some of the most effective settings may include:
Use Database-Optimized Instances: For example, instances in the R5 or X1 class are optimized for working with databases.
Choose Storage Type: General Purpose SSD (gp2) is typically cheaper than Provisioned IOPS SSD (io1/io2).
AWS RDS Auto Scaling: Automatically increase or decrease storage size based on demand.
If you can optimize the database workload, it may allow you to use smaller instance sizes without compromising performance.
Regularly Update Instances for Better Performance and Lower Costs
Potential Savings: Minor
As Amazon deploys new servers in its data centers to provide capacity for customers, those servers come with the latest hardware, typically better than previous generations. Usually the latest two to three generations are available; update regularly to make effective use of them.
Take memory-optimized instances, for example, and compare how the price changes from one generation to the next. Regular updates help ensure you are using resources efficiently.
| Instance | Generation | Description | On-Demand Price (USD/hour) |
|---|---|---|---|
| m6g.large | 6th | Instances based on ARM processors offer improved performance and energy efficiency. | $0.077 |
| m5.large | 5th | General-purpose instances with a balanced combination of CPU and memory, designed to support high-speed network access. | $0.096 |
| m4.large | 4th | A good balance between CPU, memory, and network resources. | $0.10 |
| m3.large | 3rd | One of the previous generations, less efficient than m5 and m4. | Not available |
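Using the on-demand prices from the table above (illustrative; actual prices vary by region), the generational saving is easy to compute:

```python
# Fractional saving from moving between generations, using the
# on-demand prices in the table above (illustrative; actual prices
# vary by region).

PRICES = {"m6g.large": 0.077, "m5.large": 0.096, "m4.large": 0.100}

def generation_saving(newer: str, older: str) -> float:
    """How much cheaper the newer instance is, as a fraction."""
    return 1 - PRICES[newer] / PRICES[older]

# m5.large -> m6g.large is roughly a 20% hourly saving, before any
# performance gains from the newer hardware are counted.
saving = generation_saving("m6g.large", "m5.large")
```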
Use RDS Proxy to reduce the load on RDS
Potential for savings: Low
RDS Proxy relieves load on RDS database servers by reusing existing connections instead of creating new ones. It also improves failover when a standby read replica is promoted to primary.
Imagine you have a web application that uses Amazon RDS to manage the database. This application experiences variable traffic intensity, and during peak periods, such as advertising campaigns or special events, it undergoes high database load due to a large number of simultaneous requests.
During peak loads, the RDS database may encounter performance and availability issues due to the high number of concurrent connections and queries. This can lead to delays in responses or even service unavailability.
RDS Proxy manages connection pools to the database, significantly reducing the number of direct connections to the database itself.
By efficiently managing connections, RDS Proxy provides higher availability and stability, especially during peak periods.
Using RDS Proxy reduces the load on RDS, and consequently, the costs are reduced too.
Define the storage policy in CloudWatch
Potential for savings: depends on the workload, could be significant.
The retention policy in Amazon CloudWatch determines how long log data is kept in CloudWatch Logs before it is automatically deleted.
Setting the right retention period is crucial for efficient data management and cost optimization. While the "Never expire" option is available, it is generally not recommended for most use cases: logs accumulate indefinitely, and so do the storage costs.
Typically, best practice involves defining a specific retention period based on your organization's requirements, compliance policies, and needs.
Avoid an indefinite retention period unless there is a specific reason for it. Simply defining one is already a cost saving.
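To see why an indefinite retention period gets expensive, here is a rough steady-state cost sketch; the ~$0.03 per GB-month storage price is an assumption based on us-east-1 rates at the time of writing, so verify current pricing for your region:

```python
# Rough storage-cost comparison for a log group with steady ingestion,
# assuming CloudWatch Logs storage at ~$0.03 per GB-month (assumed rate).
STORAGE_PRICE_GB_MONTH = 0.03

def monthly_storage_cost(gb_ingested_per_month: float, retention_months: float) -> float:
    """Steady-state monthly cost once retained volume plateaus at
    ingestion rate x retention window."""
    retained_gb = gb_ingested_per_month * retention_months
    return round(retained_gb * STORAGE_PRICE_GB_MONTH, 2)

# 50 GB/month of logs: 30-day retention vs. two years of "Never expire"
print(monthly_storage_cost(50, 1))   # -> 1.5 (USD/month)
print(monthly_storage_cost(50, 24))  # -> 36.0 (USD/month, and still growing)
```

With "Never expire" the retained volume never plateaus at all, so the second figure keeps climbing every month.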
Configure AWS Config to monitor only the events you need
Potential for savings: depends on the workload
AWS Config allows you to track and record changes to AWS resources, helping you maintain compliance, security, and governance. AWS Config provides compliance reports based on rules you define. You can access these reports on the AWS Config dashboard to see the status of tracked resources.
You can set up Amazon SNS notifications to receive alerts when AWS Config detects non-compliance with your defined rules. This can help you take immediate action to address the issue. By configuring AWS Config with specific rules and resources you need to monitor, you can efficiently manage your AWS environment, maintain compliance requirements, and avoid paying for rules you don't need.
Use lifecycle policies for S3 and ECR
Potential for savings: depends on the workload
S3 allows you to configure automatic deletion of individual objects or groups of objects based on specified conditions and schedules. You can set up lifecycle policies for objects in each specific bucket. By creating data migration policies using S3 Lifecycle, you can define the lifecycle of your object and reduce storage costs.
Migration policies can be based on object age, and you can apply a policy to an entire S3 bucket or to specific prefixes. Note that lifecycle transitions themselves carry a per-object transition cost. Similarly, by configuring a lifecycle policy for ECR, you can avoid paying to store Docker images you no longer need.
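For illustration, a lifecycle configuration of the kind accepted by boto3's put_bucket_lifecycle_configuration might look like this; the prefix and day thresholds are examples, not recommendations:

```python
# Illustrative S3 lifecycle configuration: age out objects under "logs/"
# to cheaper storage classes, then delete them after a year.
lifecycle = {
    "Rules": [
        {
            "ID": "archive-then-expire-logs",
            "Status": "Enabled",
            "Filter": {"Prefix": "logs/"},  # apply only to this prefix
            "Transitions": [
                {"Days": 30, "StorageClass": "STANDARD_IA"},  # infrequent access
                {"Days": 90, "StorageClass": "GLACIER"},      # cold archive
            ],
            "Expiration": {"Days": 365},  # delete after a year
        }
    ]
}

# Applying it would look like this (requires AWS credentials, shown for context):
# import boto3
# boto3.client("s3").put_bucket_lifecycle_configuration(
#     Bucket="my-example-bucket", LifecycleConfiguration=lifecycle)
print(len(lifecycle["Rules"]))
```

The same pattern, one rule per prefix or tag group, scales to buckets with mixed retention needs.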
Switch to using GP3 storage type for EBS
Potential for savings: 20%
By default, AWS creates gp2 EBS volumes, but it's almost always preferable to choose gp3, the latest generation of EBS volumes, which provides a higher IOPS baseline by default and costs less.
For example, in the us-east-1 region, the price for a gp2 volume is $0.10 per gigabyte-month of provisioned storage, while for gp3 it's $0.08 per gigabyte-month. If you have 5 TB of EBS volumes on your account, you can save about $100 per month simply by switching from gp2 to gp3.
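The arithmetic behind that estimate, as a quick sketch (prices as quoted above; verify your region's current rates):

```python
# Savings from switching EBS volumes from gp2 to gp3, using the us-east-1
# per-GB-month prices quoted above (assumed current; verify before use).
GP2_PRICE = 0.10  # USD per GB-month
GP3_PRICE = 0.08  # USD per GB-month

def monthly_savings_usd(total_gb: float) -> float:
    return round(total_gb * (GP2_PRICE - GP3_PRICE), 2)

print(monthly_savings_usd(5000))  # 5 TB of gp2 volumes -> 100.0 USD/month saved
```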
Switch the format of public IP addresses from IPv4 to IPv6
Potential for savings: depending on the workload
Starting from February 1, 2024, AWS will begin charging for each public IPv4 address at a rate of $0.005 per IP address per hour. For example: 100 public IPv4 addresses on EC2 x $0.005 per address per hour x 730 hours = $365.00 per month.
While this figure may not seem huge on its own, at scale it adds up to significant network costs. The best time to start transitioning to IPv6 was a couple of years ago; the second-best time is now.
For details on this change and guidance on using IPv6 with widely used services, see the AWS announcement: AWS Public IPv4 Address Charge.
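The charge estimate above can be reproduced in a couple of lines, which also makes it easy to model your own address count (rate as announced; verify current pricing):

```python
# Public IPv4 charge estimate: $0.005 per address per hour, 730 hours/month.
PRICE_PER_IP_HOUR = 0.005
HOURS_PER_MONTH = 730

def monthly_ipv4_cost(num_addresses: int) -> float:
    return round(num_addresses * PRICE_PER_IP_HOUR * HOURS_PER_MONTH, 2)

print(monthly_ipv4_cost(100))  # -> 365.0 USD/month
```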
Collaborate with AWS professionals and partners for expertise and discounts
Potential for savings: ~5% of the contract amount through discounts.
AWS Partner Network (APN) Discounts: Companies that are members of the AWS Partner Network (APN) can access special discounts, which they can pass on to their clients. Partners reaching a certain level in the APN program often have access to better pricing offers.
Custom Pricing Agreements: Some AWS partners may have the opportunity to negotiate special pricing agreements with AWS, enabling them to offer unique discounts to their clients. This can be particularly relevant for companies involved in consulting or system integration.
Reseller Discounts: As resellers of AWS services, partners can purchase services at wholesale prices and sell them to clients with a markup, still offering a discount from standard AWS prices. They may also provide bundled offerings that include AWS services and their own additional services.
Credit Programs: AWS frequently offers credit programs or vouchers that partners can pass on to their clients. These could be promo codes or discounts for a specific period.
Seek assistance from AWS professionals and partners. Often, this is more cost-effective than purchasing and configuring everything independently. Given the intricacies of cloud space optimization, expertise in this matter can save you tens or hundreds of thousands of dollars.
More valuable tips for optimizing costs and improving efficiency in AWS environments:
Scheduled TurnOff/TurnOn for NonProd environments: If the development team is in the same timezone, significant savings can be achieved by, for example, scaling Auto Scaling groups of instances/clusters/RDS down to zero at night and on weekends, when services are not actively used.
Move static content to an S3 Bucket & CloudFront: To prevent service charges for static content, consider utilizing Amazon S3 for storing static files and CloudFront for content delivery.
Use API Gateway/Lambda/Lambda@Edge where possible: In such setups, you only pay for the actual usage of the service. This is especially noticeable in NonProd environments, where resources are often underutilized.
If your CI/CD agents are on EC2, migrate to CodeBuild: AWS CodeBuild can be a more cost-effective and scalable solution for your continuous integration and delivery needs.
CloudWatch covers the needs of 99% of projects for Monitoring and Logging: Avoid using third-party solutions if AWS CloudWatch meets your requirements. It provides comprehensive monitoring and logging capabilities for most projects.
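To put a number on the scheduled turn-off tip above, here is a rough estimate of the share of instance hours saved; the 12-hour, five-day working window is an assumption to adjust to your team's actual schedule:

```python
# Share of compute hours avoided by running non-prod only during working
# hours (assumed 08:00-20:00, Mon-Fri).
HOURS_PER_WEEK = 7 * 24        # 168
working_hours = 12 * 5         # 60 hours of actual use per week

saved_fraction = 1 - working_hours / HOURS_PER_WEEK
print(round(saved_fraction * 100))  # -> 64 (% of instance hours avoided)
```

In other words, a non-prod environment left running 24/7 spends roughly two-thirds of its billed hours doing nothing.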
Feel free to reach out to me or other specialists for an audit, a comprehensive optimization package, or just advice.