How to Scale your AWS Infrastructure - Part 1

When designing a solution, you should keep future needs in mind. If the number of users increases dramatically in a short period, the solution should be able to scale to handle the new growth. Making systems scalable in the cloud is considerably easier than scaling on-premises infrastructure, and AWS provides excellent tools and services to make your applications as scalable as you need. Note that scaling down is as important as scaling up, because cost matters when designing for scalability. In this two-part article, we will go through the tools and services provided by AWS that can help you scale your existing infrastructure, and evaluate the benefits and business case of each.
Morgan Perry
Co-founder
Horizontally Scaling your Web Servers

To scale vertically or horizontally is one of the decisions you face (or will face) every day. Regardless of the type of scaling, you need to ensure that the system design itself is not a hindrance to scaling. Any memory leaks, technical debt, etc. must be eliminated first; otherwise, scaling up or out will not address the root cause.

If your application follows good programming practices and carries no technical debt, vertical scaling can be a quick fix, particularly when the current infrastructure cannot meet demand. However, horizontal scaling, especially auto-scaling, is usually the more durable solution in the long term.

Why horizontal scaling?

Horizontal scaling (or scaling out) adds additional nodes or machines to your infrastructure to manage the new workload demand. It does add some complexity to the overall design but has numerous benefits like:

  • Reduced downtime - Because you add machines instead of replacing them, existing machines do not have to be switched off while scaling. As a result, overall downtime is reduced.
  • Improved availability and fault tolerance - Relying on a single machine for all your data and operations puts you at a high risk of losing it all when it goes down. Distributing it among several nodes saves you from putting all your eggs in one basket.
  • Increased performance – Horizontal scaling allows you to delegate the traffic to more endpoints for connections, which takes the load off the main server.

Adding AWS ELB

Elastic Load Balancer (ELB) is an AWS service that distributes load across your EC2 instances or Lambda functions. The most common type of ELB is the Application Load Balancer (ALB), which is suited for HTTP/HTTPS traffic. It takes just a few minutes to set up and start balancing load across your targets, whether they are EC2 instances, Lambda functions, or Docker containers.
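As a minimal sketch of what setting this up looks like in code, here are the parameter shapes you would pass to boto3's `elbv2` client (all names, subnet, security-group, and VPC IDs below are placeholders):

```python
# Sketch of the request parameters for an Application Load Balancer
# and a target group, as consumed by boto3:
#   elbv2 = boto3.client("elbv2")
#   elbv2.create_load_balancer(**alb_params)
#   elbv2.create_target_group(**target_group_params)
# (calls shown as comments so the sketch runs without AWS credentials)

alb_params = {
    "Name": "web-alb",                              # hypothetical name
    "Scheme": "internet-facing",
    "Type": "application",                          # ALB for HTTP/HTTPS traffic
    "Subnets": ["subnet-aaa111", "subnet-bbb222"],  # two AZs for availability
    "SecurityGroups": ["sg-0123abcd"],
}

target_group_params = {
    "Name": "web-targets",
    "Protocol": "HTTP",
    "Port": 80,
    "VpcId": "vpc-0123abcd",
    "TargetType": "instance",     # could also be "lambda" or "ip" (containers)
    "HealthCheckPath": "/health", # hypothetical health endpoint
}

print(alb_params["Type"], target_group_params["TargetType"])
```

The `TargetType` field is what lets the same load balancer front EC2 instances, Lambda functions, or container tasks registered by IP.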

Auto-scaling/Dynamic scaling

AWS Auto Scaling automatically increases or decreases capacity based on load or other metrics. For example, if average CPU utilization rises above 90%, Amazon EC2 Auto Scaling can add a new instance dynamically. As soon as the load returns to normal, total capacity shrinks back to its original size, e.g. a single node.

  • Auto scaling is not limited to load metrics; you can also configure scaling on a schedule. For example, if you anticipate a heavy user load on Black Friday, you can set up scheduled scaling so that the system scales out during the window you configure and returns to its original capacity when the schedule is over. Scaling out every weekend is another common example of scheduled scaling.
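Both flavors of scaling can be sketched as the parameter dicts you would hand to boto3's Auto Scaling client (group, policy, and schedule names below are placeholders):

```python
# Sketch of a dynamic (target-tracking) policy and a scheduled action,
# as consumed by boto3:
#   autoscaling = boto3.client("autoscaling")
#   autoscaling.put_scaling_policy(**policy_params)
#   autoscaling.put_scheduled_update_group_action(**schedule_params)
# (calls shown as comments so the sketch runs without AWS credentials)

# Dynamic scaling: keep the group's average CPU utilization near 70%.
policy_params = {
    "AutoScalingGroupName": "web-asg",          # hypothetical group name
    "PolicyName": "cpu-target-70",
    "PolicyType": "TargetTrackingScaling",
    "TargetTrackingConfiguration": {
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "ASGAverageCPUUtilization",
        },
        "TargetValue": 70.0,
    },
}

# Scheduled scaling: scale out every Friday at midnight UTC,
# e.g. ahead of weekend traffic.
schedule_params = {
    "AutoScalingGroupName": "web-asg",
    "ScheduledActionName": "friday-scale-out",
    "Recurrence": "0 0 * * 5",                  # cron: Fridays 00:00 UTC
    "MinSize": 4,
    "MaxSize": 12,
    "DesiredCapacity": 6,
}

print(policy_params["PolicyType"], schedule_params["Recurrence"])
```

With target tracking, Auto Scaling adds and removes instances on its own to hold the metric near the target, so scale-in comes for free when load drops.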

Bring in automation

When do you need automation?

Imagine a scenario where you have to perform rapid deployments many times a day in a fast-paced environment. Doing this manually not only wastes time but is also error-prone. The same applies to infrastructure: manually spinning up 200 servers is a cumbersome, inefficient job. Automation is the solution to all of these problems, whether for deployment, infrastructure management, or environment replication. The more automated a system is, the more scalable and less error-prone it becomes. Let's look at the top areas where automation should be applied to achieve scalability.

Repeatable deployments (Pipelines, CI/CD)

Modern systems rely heavily on Continuous Integration (CI) and Continuous Deployment (CD). In large projects, many developers work on the same codebase.

Continuous Integration ensures that any code conflict or system breakage is caught almost immediately, so breaks can be fixed sooner rather than later.

Continuous Deployment is an integral part of scalable systems developed with agile methodology. It ensures you always have a deployable build that can be tested and evaluated by stakeholders, or is at least QA-ready. Pipelines serve this purpose with simplicity. Many cloud vendors have built-in CI/CD capability: your build is automatically created and deployed as soon as you push code to the repository. A complete ecosystem also involves automated testing to ensure the newly created build passes all tests.

The relevant AWS services here include CodePipeline, CodeBuild, and CodeDeploy.
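As an illustration, CodeBuild is driven by a `buildspec.yml` file in the repository root. A minimal sketch for a Node.js project (the npm scripts are hypothetical) might look like this:

```yaml
# buildspec.yml - minimal CodeBuild sketch; adjust runtime and
# commands to your project.
version: 0.2

phases:
  install:
    runtime-versions:
      nodejs: 18
  build:
    commands:
      - npm ci          # reproducible dependency install
      - npm test        # fail the build if any test fails
      - npm run build   # produce the deployable artifact

artifacts:
  files:
    - '**/*'
  base-directory: build
```

CodePipeline can then trigger this build on every push and hand the resulting artifact to CodeDeploy.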

Replicate Environments (Containerization)

Containers isolate the code from the environment it runs in. Consider a team of 50 developers: if each one has to set up the project's development environment manually, it takes time, effort, and technical supervision to ensure every environment is set up correctly and on time. Docker containers solve this. All the developers need is a single Dockerfile or image, and even a complex environment takes only minutes to reproduce.

AWS provides fully managed services for containerization, including ECS, EKS, and Elastic Beanstalk with multi-container Docker support.
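As a sketch of the "one file reproduces the environment" idea, here is a minimal Dockerfile for a hypothetical Python web service (the `app:app` module and gunicorn setup are assumptions, not from the article):

```dockerfile
# Minimal sketch: every developer gets the same environment from this file.
FROM python:3.11-slim

WORKDIR /app

# Install dependencies first so this layer is cached between code changes.
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# Copy the application code.
COPY . .

EXPOSE 8000
CMD ["gunicorn", "--bind", "0.0.0.0:8000", "app:app"]
```

The same image runs unchanged on a laptop, in CI, and on ECS or EKS, which is what makes environment replication essentially free.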

Automated Testing

When you have just two developers on a small project, testing the builds is easy and manageable. Now consider the same application grown to 250 developers and dozens of complex modules. Growing the QA team in proportion is linear and not a practical approach. Take the example of Facebook, where reportedly more than 50,000 builds a day are created for Android alone. No amount of manual testing can keep up with that.

Automated testing is the solution to this problem. Integrated with CI/CD, automated tests run against each build and notify you whether the build passes. If any test fails, an automatic rollback of the build can also be configured.
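At its simplest, the tests the pipeline runs are plain unit tests. A tiny illustrative example (the pricing function is hypothetical), written in the pytest style so a CI step like `pytest` picks it up automatically:

```python
# test_pricing.py - a tiny illustrative unit test. In CI, a failing
# assertion here fails the build, which can in turn trigger a rollback.

def apply_discount(price: float, percent: float) -> float:
    """Return the price after applying a percentage discount."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

def test_apply_discount():
    assert apply_discount(100.0, 20) == 80.0
    assert apply_discount(50.0, 0) == 50.0

if __name__ == "__main__":
    test_apply_discount()
    print("all tests passed")
```

The same pattern scales from two developers to 250: the suite grows, but the cost of running it on every build stays near zero.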

A notable AWS service that helps with test automation is AWS Device Farm; the most common tools in this space include Jenkins, Selenium, etc.

Infrastructure Automation

As an application grows and gets more complex, its infrastructure needs to grow as well. Replicating the same infrastructure was long a manual operation, until Infrastructure as Code (IaC) arrived. Through simple text files, you can define all your infrastructure needs, whether databases, container services, or message queues, and everything is created automatically from your configuration. With this technique, you can clone and replicate hundreds of servers easily and quickly.

For infrastructure automation, CloudFormation is an excellent AWS service.
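As a minimal sketch of what those "simple text files" look like, here is a CloudFormation template that declares one web server behind a security group (the AMI ID is a placeholder, and a real template would parameterize it):

```yaml
# Minimal CloudFormation sketch: one EC2 instance plus its security group.
AWSTemplateFormatVersion: '2010-09-09'
Description: Minimal sketch - a single web server behind a security group.

Resources:
  WebSecurityGroup:
    Type: AWS::EC2::SecurityGroup
    Properties:
      GroupDescription: Allow inbound HTTP
      SecurityGroupIngress:
        - IpProtocol: tcp
          FromPort: 80
          ToPort: 80
          CidrIp: 0.0.0.0/0

  WebServer:
    Type: AWS::EC2::Instance
    Properties:
      InstanceType: t3.micro
      ImageId: ami-0123456789abcdef0   # placeholder AMI ID
      SecurityGroupIds:
        - !GetAtt WebSecurityGroup.GroupId
```

Deploying a second identical environment is then a matter of creating another stack from the same template, which is exactly the replication problem IaC solves.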

What's next?

This was part one of the article. In the next part, we will continue this discussion with databases, loose coupling, CDNs, caching, and more. Continue reading: How to Scale your AWS Infrastructure - Part 2
