4

Currently all of our PHP (Laravel) applications are running on one EC2 instance. To deploy code, we SSH onto the live box and pull from CodeCommit.

We now need to scale and deploy properly. I'm not sure where to start.

Most examples seem to use Elastic Beanstalk but I have been advised that this could be too restrictive.

I guess we want to end up with a load balancer and application servers which are created on demand.

Can anyone point me in the right direction? (I am a bit overwhelmed by all the AWS documentation.)

Ok, this is the plan:

I have signed up for A Cloud Guru and I'm going to work through some of the videos on there.

I'm going to sign up for an AWS Free Tier account and start tinkering.

I'm considering the AWS Certified Solutions Architect - Associate course; it looks like a great way to develop myself.

Mick

2 Answers

3

Here's a rough approach to scaling any stateless app on AWS:

  1. Run the app in an Auto Scaling Group (ASG). An ASG makes it easy to manage multiple servers, will automatically replace failed servers, and allows you to automatically scale the number of servers up or down in response to load.

  2. To run an ASG, you need to create a Launch Configuration. This is a "template" that specifies the configuration for each server in the ASG: i.e., what type of instance to run (e.g. t2.micro), what AMI to run on the instance, the security group settings, etc.

  3. The AMI you run on each instance in the ASG should have your app already installed (i.e., your PHP app). You can define your AMI as code using a tool like Packer.

  4. The app in your AMI should be configured to run on boot using a process supervisor (e.g., systemd or supervisord); this will also ensure the app is restarted automatically if it crashes.

  5. Every time you change something in your app, run a build process to create a new AMI, update the Launch Configuration to use this new AMI, and roll that AMI out across your ASG. For example, you could do a zero-downtime deployment (somewhat similar to a blue-green deployment) by doubling the size of the ASG, waiting until all the new servers (from the updated Launch Configuration) launch and pass health checks, and then reducing the ASG back to its previous size (the ASG will terminate oldest instances by default).

  6. With an ASG, you have more than one server, but you want to give your users a single endpoint to use. The typical solution is to deploy a load balancer in front of the ASG. If you're building a typical HTTP/HTTPS app (e.g., just about all PHP apps), you'll want the Application Load Balancer (ALB); if you're building an app that listens directly for TCP connections, you'll want the Classic Load Balancer or Network Load Balancer. Whichever load balancer you pick, AWS will create a domain name for it, and any requests you send to that domain name will be routed in round-robin fashion to different servers in your ASG.

  7. You should a) configure the load balancer to perform health checks on your app so that it only routes to healthy servers and b) configure your ASG to register all of its servers in the load balancer automatically and to use the load balancer for health checks.

  8. You can create Auto Scaling Policies for the ASG that change the number of servers depending on various CloudWatch metrics. For example, you could configure the ASG to add servers when average CPU usage is over 80% and to remove servers when it drops below 60%.
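The double-then-halve rollout in step 5 comes down to a small piece of arithmetic: temporarily raise the ASG's desired capacity (capped at its max size), wait for health checks, then drop back so the oldest instances are terminated. A minimal sketch in Python — the ASG name is hypothetical, and the boto3 call that would apply the change only works with AWS credentials configured:

```python
def rollout_capacities(current_desired, max_size):
    """Return (temporary, final) desired capacity for a double-then-halve
    zero-downtime rollout. The temporary size is capped at the ASG's max."""
    temporary = min(current_desired * 2, max_size)
    return temporary, current_desired

def start_rollout(asg_name, current_desired, max_size):
    """Sketch only: bump the ASG so new-AMI instances launch alongside the
    old ones. Requires AWS credentials; the ASG name is an assumption."""
    import boto3  # AWS SDK for Python
    temporary, _ = rollout_capacities(current_desired, max_size)
    boto3.client("autoscaling").set_desired_capacity(
        AutoScalingGroupName=asg_name,
        DesiredCapacity=temporary,
        HonorCooldown=False,
    )
```

Once the new instances pass health checks, setting the desired capacity back to the original value lets the ASG terminate the oldest instances first (its default termination policy).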

For more info, check out A Comprehensive Guide to Building a Scalable Web App on Amazon Web Services.

1

Currently all of our PHP (Laravel) applications are running on one EC2 instance. To deploy code, we SSH onto the live box and pull from CodeCommit.

The first thing you need to decide on is a continuous deployment strategy. There are a lot of CD servers out there with different use cases; you don't have to pick the most popular one, just the one that fits your use case.

I guess we want to end up with a load balancer and application servers which are created on demand.

AWS has some great reference architectures, but overall I would say start simple. At a base level you'll be using Route 53, Certificate Manager, ALB (ELBv2), VPC, and EC2. I would get used to the options you have with each of these and play with them. The AWS CSA course will not teach you reference architectures; rather, it goes over what each product is used for.

Before you go building anything for production I would also recommend using an infrastructure deployment tool. AWS provides CloudFormation out of the box but that can be difficult to interface with. Some others are Sceptre and Terraform. These allow you to store your infrastructure as code in a repository and deploy updates via a CD server.
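As a sketch of what "deploy updates via a CD server" can look like with CloudFormation, here's a hedged boto3 example — the stack name, template, and parameters are all placeholders you'd supply yourself, and the deploy call needs AWS credentials, so only the parameter-shaping helper runs offline:

```python
def to_cfn_parameters(params):
    """Shape a plain dict into the Parameters list the CloudFormation API expects."""
    return [{"ParameterKey": k, "ParameterValue": str(v)} for k, v in params.items()]

def deploy_stack(name, template_body, params):
    """Sketch of a CD step: create the stack, or update it if it already exists.
    Requires AWS credentials; the names used here are hypothetical."""
    import boto3  # AWS SDK for Python
    cfn = boto3.client("cloudformation")
    kwargs = dict(
        StackName=name,
        TemplateBody=template_body,
        Parameters=to_cfn_parameters(params),
        Capabilities=["CAPABILITY_IAM"],
    )
    try:
        cfn.create_stack(**kwargs)
    except cfn.exceptions.AlreadyExistsException:
        cfn.update_stack(**kwargs)  # raises if there are no changes to apply
```

A CD server would run something like this after every merge, with the template pulled from the same repository as the application code.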

Also keep in mind that if you don't have Continuous Integration yet, you'll need to work on that first, followed by Continuous Delivery, then Continuous Deployment. Each of these is a critical step in the right direction.

Note on "Created on Demand"

Please keep in mind that many people come to the cloud thinking that "scaling" is instantaneous. It's not: depending on your provisioning, launching new instances can take several minutes, so you will not scale yourself out of burst traffic. Your traffic will need to be profiled, you will need to determine your instance capacity (for web servers I measure instances in requests per second with a load-testing tool), and plan accordingly.

Scaling is primarily designed for a gradual rise in traffic over time, such as if your website traffic grows gradually larger around noon (peak) and then settles late in the day.
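The capacity planning above can be sketched in a few lines of Python; the numbers (peak requests per second, per-instance capacity, headroom) are placeholders you'd replace with your own load-test results:

```python
import math

def instances_needed(peak_rps, rps_per_instance, headroom=0.2, minimum=2):
    """Capacity-planning sketch: how many instances to provision for a
    measured peak, with spare headroom and a minimum of two for redundancy.
    All default values here are assumptions, not recommendations."""
    required = math.ceil(peak_rps * (1 + headroom) / rps_per_instance)
    return max(required, minimum)

# e.g. a load-tested 250 req/s per instance against a 900 req/s profiled peak
print(instances_needed(900, 250))
```

The output of a calculation like this is what you'd feed into the ASG's min/desired/max sizes, rather than trusting reactive scaling to absorb a spike on its own.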

Matt O.