9

I have an iOS app on the App Store, and recently I received a huge surge of traffic to my landing page hosted on EC2, which resulted in the page not responding. Luckily I managed to recover it by restarting and upgrading the instance to a t2.medium.

Now I'm looking to hire someone to implement something that prevents the same outage from happening again. My experience only stretches far enough to understand basic DevOps, but not load balancers on AWS, so I want to know what an affordable setup would look like for my instance.

My landing page and iOS app backend are hosted on the same instance.

Missus

6 Answers

8

If you want something quick to get this sorted without much more knowledge, I'd recommend Elastic Beanstalk. It's another AWS service which will handle the load balancer configuration and instance scaling for you.

There's no extra cost on top of the load balancer and instances themselves, so you can keep using t2-type instances but let Elastic Beanstalk scale them out as much as you need.

Auto scaling isn't instant; in times of spiking traffic it will take a short amount of time, normally a few minutes, to be able to handle a spike, but it will be much better than manually scaling the instance size and is really easy to get to grips with.
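
In case it helps, here's a rough boto3 sketch of the kind of scaling settings Elastic Beanstalk manages for you; the environment name and sizes below are just placeholders, and the same values can be set in the EB console instead:

    # Rough sketch (boto3): raise the instance limits on an existing Elastic
    # Beanstalk environment so it can scale out under load. Names and sizes
    # are placeholders.
    import boto3

    eb = boto3.client("elasticbeanstalk")

    eb.update_environment(
        EnvironmentName="my-landing-page-env",  # hypothetical environment name
        OptionSettings=[
            # Let the auto scaling group grow from 1 up to 4 instances.
            {"Namespace": "aws:autoscaling:asg", "OptionName": "MinSize", "Value": "1"},
            {"Namespace": "aws:autoscaling:asg", "OptionName": "MaxSize", "Value": "4"},
            # Keep using small, cheap instances; Beanstalk adds more as needed.
            {"Namespace": "aws:autoscaling:launchconfiguration",
             "OptionName": "InstanceType", "Value": "t2.small"},
        ],
    )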

Briansbum
1

I would recommend auto scaling as mentioned above, with the addition of some CloudWatch alarms to start the scaling process when specific metrics begin to climb, not when things are already too far gone.

For example: configure CloudWatch to monitor your server, and when CPU utilization is at 50% or higher for 30 seconds or more, start the scaling process.

This may not be completely faultless, but it's easy to do by following online guides and is all configurable via the GUI.
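
If it helps, here's a rough boto3 sketch of that alarm-plus-policy setup; the group and policy names are placeholders, and the period is 60 seconds because standard EC2 metrics aren't published at 30-second resolution:

    # Rough sketch (boto3): a simple scale-out policy plus a CloudWatch alarm
    # that triggers it when average CPU is at or above 50%. Names are
    # placeholders.
    import boto3

    autoscaling = boto3.client("autoscaling")
    cloudwatch = boto3.client("cloudwatch")

    # Add one instance each time the alarm fires.
    policy = autoscaling.put_scaling_policy(
        AutoScalingGroupName="landing-page-asg",  # hypothetical ASG name
        PolicyName="scale-out-on-cpu",
        AdjustmentType="ChangeInCapacity",
        ScalingAdjustment=1,
        Cooldown=300,
    )

    # Fire when average CPU across the group is >= 50% for one 60-second period.
    cloudwatch.put_metric_alarm(
        AlarmName="landing-page-high-cpu",
        Namespace="AWS/EC2",
        MetricName="CPUUtilization",
        Dimensions=[{"Name": "AutoScalingGroupName", "Value": "landing-page-asg"}],
        Statistic="Average",
        Period=60,
        EvaluationPeriods=1,
        Threshold=50.0,
        ComparisonOperator="GreaterThanOrEqualToThreshold",
        AlarmActions=[policy["PolicyARN"]],
    )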

Also, if your landing page is static, why not host it on a free-tier t2.micro and use another free-tier t2.micro for your app backend?

jto
1

I'd love to help with this if you're looking for someone. Depending on your page you may not need EC2 at all. For instance, if you're serving something static or JavaScript, it can be served from S3 with a CloudFront distribution. Or we could possibly use an Auto Scaling group if absolutely necessary.
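
As a rough illustration of the S3 option (the bucket name and file are placeholders, and the bucket still needs a policy that allows public reads, or a CloudFront origin access identity in front of it):

    # Rough sketch (boto3): serve a static landing page straight from S3.
    # Bucket name and file are placeholders; a CloudFront distribution can be
    # put in front of the bucket for HTTPS and edge caching.
    import boto3

    s3 = boto3.client("s3")
    bucket = "my-landing-page-bucket"  # hypothetical bucket name

    s3.upload_file(
        "index.html", bucket, "index.html",
        ExtraArgs={"ContentType": "text/html"},
    )

    # Turn on static website hosting so the bucket serves index.html directly.
    s3.put_bucket_website(
        Bucket=bucket,
        WebsiteConfiguration={
            "IndexDocument": {"Suffix": "index.html"},
            "ErrorDocument": {"Key": "index.html"},
        },
    )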

user4564
1

There are two general strategies for dealing with traffic surges: increasing capacity and reducing load.

Increasing capacity means auto scaling, which everyone was very excited about when public clouds first became available. In its most basic sense, this boots up more webservers for you based on load and adds them to a load balancer, but since that can be a pain to manage, there are more automagic solutions as well, like Elastic Beanstalk.

The trouble with automated capacity expansion is that it's also automated bill expansion: 10x normal traffic means 10x servers means 10x money you have to pay. That's why, while it's a useful strategy to keep in mind, I think you should always start by seeing how much you can cheat.

By cheat, I mean cache, which rests on the idea that most of the time you can give users slightly out-of-date data and they won't notice, and that can save you a tremendous amount of work. Imagine you have a page that you decide is fine being five seconds out of date, and it gets 20 req/s. Without caching, you're running that calculation 1,200 times a minute, whereas with caching it's only 12. You can see how this can make a tremendous difference.

There are of course many types of caching, and a successful website will use several of them. But for your use case, there are two pretty good and easy options.

The first is to make the site completely static. This assumes that you can do so, but if you can, then you just have Nginx serve up the HTML directly, and it can handle tons of requests without breaking a sweat.

If you need some level of dynamic content, then doing some full-page caching is a good option. Nginx has some capability to do this, but I really like Varnish because of its flexibility.
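
Varnish and Nginx configuration is its own topic, but to make the idea concrete, here's a rough Python stand-in for what a full-page cache does: hold on to the rendered page for a few seconds and only rebuild it once it has gone stale. The five-second TTL matches the example above, and the render function is a placeholder for whatever expensive work your page actually does.

    # Rough Python stand-in for a full-page cache (what Varnish or Nginx would
    # do in front of your app): keep the rendered page for 5 seconds and only
    # rebuild it when it has gone stale.
    import time
    from wsgiref.simple_server import make_server

    TTL = 5.0      # seconds a cached page stays fresh
    _cache = {}    # path -> (expires_at, body)

    def render_page(path):
        # Placeholder for the "expensive" work (templates, DB queries, ...).
        return f"<html><body>Rendered {path} at {time.time():.0f}</body></html>".encode()

    def app(environ, start_response):
        path = environ.get("PATH_INFO", "/")
        now = time.time()
        expires_at, body = _cache.get(path, (0.0, b""))
        if now >= expires_at:               # stale or missing: rebuild once
            body = render_page(path)
            _cache[path] = (now + TTL, body)
        start_response("200 OK", [("Content-Type", "text/html")])
        return [body]

    if __name__ == "__main__":
        make_server("", 8000, app).serve_forever()

At 20 req/s that rebuilds the page at most 12 times a minute instead of 1,200, which is the whole point of the trick.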

Whatever option or options you go with, make sure you do load testing to validate that you've set it up properly; sometimes fixing one spot exposes a new bottleneck.
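
If you don't have a load-testing tool handy (ab, wrk, and friends will give you better numbers), even a crude script tells you a lot. A minimal stdlib sketch, with the URL, worker count, and request count as placeholders:

    # Crude load-test sketch (stdlib only): fire concurrent GETs at one URL
    # and report rough latency percentiles. Values below are placeholders.
    import time
    from concurrent.futures import ThreadPoolExecutor
    from urllib.request import urlopen

    URL = "http://example.com/"   # hypothetical: your landing page
    REQUESTS = 200
    WORKERS = 20

    def hit(_):
        start = time.time()
        with urlopen(URL, timeout=10) as resp:
            resp.read()
        return time.time() - start

    with ThreadPoolExecutor(max_workers=WORKERS) as pool:
        latencies = sorted(pool.map(hit, range(REQUESTS)))

    print(f"median: {latencies[len(latencies) // 2]:.3f}s  "
          f"p95: {latencies[int(len(latencies) * 0.95)]:.3f}s")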

Xiong Chiamiov
0

I would like to share our experience with AWS. We deployed our application on EC2 and faced the same problem, along with high cost. We then moved the application to Amazon EC2 Container Service (ECS), even though it is monolithic, and we achieved:

  • Availability
  • Cost effectiveness
  • Scalability

An Application Load Balancer will handle the traffic and route it to healthy instances, and you can run multiple tasks of the same service without worrying about scaling and balancing the load.

This architecture makes it easier to implement failure isolation. Techniques such as health checking, caching, bulkheads, or circuit breakers allow you to reduce the blast radius of a failing component and improve the overall availability of a given application.
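
For what it's worth, the health-check routing lives on the ALB's target group. A rough boto3 sketch, with the names, VPC, and subnet IDs as placeholders:

    # Rough sketch (boto3): an ALB target group whose health check decides
    # which instances or tasks receive traffic. Names, VPC, and subnets are
    # placeholders.
    import boto3

    elbv2 = boto3.client("elbv2")

    tg = elbv2.create_target_group(
        Name="app-targets",                  # hypothetical
        Protocol="HTTP",
        Port=80,
        VpcId="vpc-0123456789abcdef0",       # hypothetical
        HealthCheckPath="/health",           # only healthy targets get traffic
        HealthCheckIntervalSeconds=30,
        HealthyThresholdCount=2,
        UnhealthyThresholdCount=2,
    )

    lb = elbv2.create_load_balancer(
        Name="app-alb",                                   # hypothetical
        Subnets=["subnet-aaaa1111", "subnet-bbbb2222"],   # hypothetical
        Type="application",
    )

    elbv2.create_listener(
        LoadBalancerArn=lb["LoadBalancers"][0]["LoadBalancerArn"],
        Protocol="HTTP",
        Port=80,
        DefaultActions=[{
            "Type": "forward",
            "TargetGroupArn": tg["TargetGroups"][0]["TargetGroupArn"],
        }],
    )

In ECS you then point the service at that target group, and its tasks are registered and deregistered for you.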

Adiii
0

This depends a lot on the specific architecture, but for example:

  • Front your website with CloudFront to reduce load on the host
  • Host the client-side assets on something like S3 for scale
  • Put an Elastic Load Balancer in front of an Auto Scaling group (see the sketch below)
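
A rough boto3 sketch of that last bullet, assuming a launch template for the web host already exists (names, subnets, and the target group ARN are placeholders):

    # Rough sketch (boto3): an Auto Scaling group registered with an ALB
    # target group, so new instances receive traffic once they pass health
    # checks. Launch template, names, subnets, and ARN are placeholders.
    import boto3

    autoscaling = boto3.client("autoscaling")

    autoscaling.create_auto_scaling_group(
        AutoScalingGroupName="landing-page-asg",             # hypothetical
        LaunchTemplate={
            "LaunchTemplateName": "landing-page-template",   # hypothetical, must exist
            "Version": "$Latest",
        },
        MinSize=1,
        MaxSize=4,
        DesiredCapacity=1,
        VPCZoneIdentifier="subnet-aaaa1111,subnet-bbbb2222",   # hypothetical subnets
        TargetGroupARNs=["arn:aws:elasticloadbalancing:..."],  # your ALB target group ARN
        HealthCheckType="ELB",            # replace instances the load balancer marks unhealthy
        HealthCheckGracePeriod=120,
    )
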
Henry