
I'm working on a project that's using Jenkins to build and deploy microservices to Elastic Beanstalk. We deploy an integration branch to a test environment, release branches to a staging environment, and then a final master build to production. I have a couple of concerns with doing it this way: first, it means we end up with a matrix of one build per project per environment, duplicating effort; and second, it means we aren't deploying the same build artifacts to production that were validated in staging.

I'm inclined to abandon Beanstalk and move to plain ASGs using something like Chef for deployments. That would leave us with one build per project, producing a build artifact, and we could deploy the same artifact to production that was approved in staging. Transitioning has a not-insignificant up-front cost, however. Is there some way to use Beanstalk better that would allow for more reliable, easier-to-manage CI/CD?

Note: Promoting the same build artifact is exactly what I want to do, but from the docs I don't see any clear way to do that; they explain how to deploy to EB from your app source, but not how to promote an existing version to another environment, unless I managed to scroll right past it. If it is available in EB itself, there may be a limitation in the Jenkins EB deployment plugin that prevents it from being done in Jenkins specifically, but I haven't seen a way to do it at all.


1 Answer


IMO your issue isn't with Elastic Beanstalk in that scenario; it's with Jenkins, or at least with the way you're using it. You should really concentrate on building "a thing" only once, regardless of what that thing is.

Full disclosure: I work for ThoughtWorks and am incredibly biased toward GoCD. I'll try to spell out what I mean in as neutral a way as I can. I'll use our tool's docs as examples, but hopefully people can extrapolate to their systems.

Somewhere early in your pipeline you're building "artifacts". These could be binaries representing all or part of your application, or they could be the output of any number of tools, such as test reports. These artifacts should be stored by the system and never built again; the system should then fetch the artifact from the proper revision whenever it's needed.
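As a minimal sketch of that store-once/fetch-later idea, assuming S3 as the artifact store (the bucket name build-artifacts, the Maven build, and the key layout are all made up for illustration):

    # Build stage: produce and store the artifact exactly once,
    # keyed by the revision that produced it.
    REVISION=$(git rev-parse --short HEAD)
    mvn -q package
    aws s3 cp target/myapp.jar "s3://build-artifacts/myapp/${REVISION}/myapp.jar"

    # Any later stage: fetch that exact artifact; never rebuild it.
    aws s3 cp "s3://build-artifacts/myapp/${REVISION}/myapp.jar" myapp.jar

The exact mechanism matters less than the rule: later stages consume the stored artifact rather than rebuilding it.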

For example...

  1. I build a .jar file and run some unit tests on it, basic CI stuff. If it passes, that .jar file and the output of the tests get uploaded to that specific pipeline job.
  2. The next pipeline could be your deployment to a more complex testing environment. It should fetch the exact jar from the exact job that built it, then execute Elastic Beanstalk to deploy that jar to the correct environment (see the sketch after this list).
  3. The next pipeline is your staging deployment. It goes all the way back to the first pipeline, fetches the exact jar from the exact job that built it, and then executes Elastic Beanstalk to deploy that jar to the correct environment.
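Elastic Beanstalk supports exactly this promotion model, even though the docs don't make it obvious: you register an application version once from a source bundle in S3 (aws elasticbeanstalk create-application-version), then point any environment at that existing version label (aws elasticbeanstalk update-environment). The Jenkins EB plugin may not expose this, but a plain shell step can. Continuing the sketch above, with application and environment names as placeholders:

    # Register the version once, from the artifact the build stage stored.
    aws elasticbeanstalk create-application-version \
      --application-name myapp \
      --version-label "${REVISION}" \
      --source-bundle "S3Bucket=build-artifacts,S3Key=myapp/${REVISION}/myapp.jar"

    # Deploy that version to test, then promote the identical version
    # to staging and production as each gate passes.
    aws elasticbeanstalk update-environment --environment-name myapp-test    --version-label "${REVISION}"
    aws elasticbeanstalk update-environment --environment-name myapp-staging --version-label "${REVISION}"
    aws elasticbeanstalk update-environment --environment-name myapp-prod    --version-label "${REVISION}"

Because every environment is pointed at the same version label, the bytes that reach production are the bytes that were validated in staging.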

Each of these is a separate pipeline because that allows you to run them in parallel, or on demand, without blocking one another.

You could use Elastic Beanstalk, Chef, Puppet, Ansible, uDeploy, or any number of other tools to do the actual deployments; that's not where your issue is coming from. Continuous Integration servers weren't originally built to do this, though there are lots of plugins you can use to get to the same place if that's your preference.

Continuous Delivery servers like GoCD, Chef Automate and ConcourseCI were built specifically to solve things like this.

Ken Mugrage