In today’s world, customers take it for granted that their applications will receive constant, seamless updates. To keep your product competitive, you need to meet that expectation. If you don’t, your competition surely will.
Regular application updates are nearly impossible with the legacy approach, in which developers commit a large batch of upgrades, updates, and fixes at once, every few months. Customers, however, often expect more frequent releases. What’s more, introducing bulk changes can generate a great number of issues, as there may not be enough time to test them all before deployment.
The legacy approach called for a change. And so it came.
Modern Approach to Seamless Updates: Continuous Integration / Delivery
The modern approach is to release, test, and deploy smaller changes more often. It relies on a Continuous Integration and Continuous Delivery pipeline, which automates the steps of the software delivery process. Thanks to this method, companies can deliver software quickly, securely, and reliably, with no outages.
Continuous Integration (CI)
Continuous Integration is a software development practice in which all team members use a version control system and integrate their code changes into a shared repository several times a day. An automated process then verifies each change to detect integration errors as quickly as possible. CI focuses on automated code building and testing.
Continuous Delivery (CD)
Continuous Delivery extends the above practice to a methodology that automates the entire release process. This means that every software change can be automatically built, tested, and delivered with a single “push of the button”.
Continuous Delivery is NOT Continuous Deployment. Continuous Delivery aims to make sure that every change passes a test and is ready to be deployed into production. It doesn’t mean, however, that this change is actually deployed. The change may require additional auditing and authorization by a person or tooling before that happens.
CI/CD Step 1: Integration
You can picture Continuous Integration and Delivery as a pipeline: new code is submitted at one end, passes through a series of stages, and is eventually published as production-ready code. The first step of CI/CD is committing the code to the central repository.
Developers should commit their code regularly and merge all changes into the release branch, rather than keeping fragments of code off the main branch for long.
A workflow engine monitors each branch; once code is pushed to the repository, a command is sent to a build tool, which builds the code and runs the unit tests and quality checks. The final build step creates binaries and other artifacts. Once that finishes, the Continuous Integration part ends.
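The build-and-test step above is usually driven by a declarative build specification. As a rough sketch, here is the shape of such a definition, expressed as a Python dict for illustration (in AWS CodeBuild the real file is YAML, and the commands below are hypothetical examples for a generic Node.js project, not taken from any particular repository):

```python
# Illustrative sketch of a CodeBuild-style buildspec (normally a YAML
# file, shown here as a Python dict). The commands are placeholder
# examples for a generic Node.js project.
buildspec = {
    "version": 0.2,
    "phases": {
        "install": {"commands": ["npm ci"]},        # restore dependencies
        "build": {"commands": ["npm run build"]},   # compile the code
        "post_build": {"commands": ["npm test"]},   # run unit tests
    },
    # The build output: binaries and artifacts handed to the next stage.
    "artifacts": {"files": ["dist/**/*"]},
}

print(sorted(buildspec["phases"]))
```

Each phase runs in order; if any command fails, the build fails and the pipeline stops before anything reaches staging.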
CI/CD Step 2: Delivery
The next phase of the process is Continuous Delivery. At this stage, the application code is deployed to a staging environment, an exact copy of the production stack, where more functional tests can be run.
You can automate building the staging environment with Infrastructure as Code tools. The same approach then pays off when you create the production environment using the same templates.
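The idea of reusing one infrastructure definition for both environments can be sketched as follows. This is a minimal illustration, not a real CloudFormation template: the function, resource names, and parameters are all made up for the example.

```python
# Minimal sketch of Infrastructure as Code reuse: one parameterized
# definition produces both the staging and the production environment.
# Resource and parameter names are hypothetical.
def make_template(environment: str, instance_count: int) -> dict:
    return {
        "Environment": environment,
        "Resources": {
            f"web-{environment}-{i}": {"Type": "AWS::EC2::Instance"}
            for i in range(instance_count)
        },
    }

# Staging mirrors production, so tests run against a faithful copy.
staging = make_template("staging", 2)
production = make_template("production", 2)
print(len(staging["Resources"]) == len(production["Resources"]))
```

Because both environments come from the same definition, drift between staging and production is far less likely than with hand-built servers.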
CI/CD Step 3: Deployment
The final phase of the pipeline is deployment. Once the software is well-tested and ready for production, it can be manually or automatically deployed. After that, the entire process restarts. New features, updates and/or upgrades are coded, committed, built, tested, delivered and deployed. And so on, for the whole application lifecycle.
CI/CD Pipeline AWS Tools
With a CI/CD pipeline, the entire process seems straightforward. However, to ensure full automation and a stable deployment, Amazon provides a continuous delivery orchestration tool: AWS CodeStar.
AWS CodeStar is an overlay for the other members of the AWS Code Services family that sit underneath it. It leverages AWS CodePipeline, AWS CodeBuild, AWS CodeCommit, and AWS CodeDeploy, and integrates their setup, tools, templates, and dashboards. For more control, you can also use each tool separately.
You can use this service for application and infrastructure updates. It builds, tests, and deploys code every time the code changes. You run CodePipeline through CodeStar or through the AWS Management Console. As a source, you can use GitHub, AWS CodeCommit, or Amazon S3.
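A pipeline of this kind is itself declared as structured data. The sketch below shows an abbreviated CodePipeline-style declaration with a CodeCommit source and a CodeBuild stage; the names (demo-pipeline, MyRepo, my-build-project) are placeholders, and a real declaration requires additional fields such as a role ARN and an artifact store.

```python
# Abbreviated sketch of a CodePipeline declaration. Names are
# placeholders; a real pipeline needs roleArn, artifactStore, etc.
pipeline = {
    "name": "demo-pipeline",
    "stages": [
        {
            "name": "Source",
            "actions": [{
                "actionTypeId": {"category": "Source",
                                 "owner": "AWS",
                                 "provider": "CodeCommit"},
                "configuration": {"RepositoryName": "MyRepo",
                                  "BranchName": "main"},
            }],
        },
        {
            "name": "Build",
            "actions": [{
                "actionTypeId": {"category": "Build",
                                 "owner": "AWS",
                                 "provider": "CodeBuild"},
                "configuration": {"ProjectName": "my-build-project"},
            }],
        },
    ],
}

stage_names = [s["name"] for s in pipeline["stages"]]
print(stage_names)  # → ['Source', 'Build']
```

Swapping the source provider (for example, to GitHub or S3) means changing only the `actionTypeId` and `configuration` of the Source action, while the rest of the pipeline stays the same.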
CodePipeline works seamlessly with AWS CodeBuild, a fully managed build service that compiles source code, runs tests, and produces ready-to-deploy software packages. This eliminates the need to provision, manage, and scale your own build servers.
AWS CodeBuild scales continuously and can process multiple builds concurrently. AWS CodePipeline can also integrate with build servers such as Jenkins, TeamCity, and others.
For CodePipeline, the Continuous Integration part ends here. Now it’s time for Continuous Delivery.
The CD pipeline consists of the Staging and Production steps.
The Production step requires manual approval. For each pipeline stage, you can add a specific action or actions from different categories.
This is where you can integrate CodePipeline with other AWS services.
You can stop the pipeline execution so that someone with the proper permissions can approve or reject the action; you can commit code with AWS CodeCommit, build and test code with AWS CodeBuild, deploy to EC2 or on-premises instances with AWS CodeDeploy, or invoke a Lambda function.
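The manual gate before Production is itself just another action in the pipeline definition. A sketch of such an approval action is shown below; the SNS topic ARN and the message are placeholders.

```python
# Sketch of a manual-approval action as it might appear in a pipeline's
# Production stage. The NotificationArn is a placeholder value.
approval_action = {
    "name": "ApproveRelease",
    "actionTypeId": {"category": "Approval",
                     "owner": "AWS",
                     "provider": "Manual",
                     "version": "1"},
    # Optional: notify reviewers and attach context for the decision.
    "configuration": {
        "NotificationArn": "arn:aws:sns:us-east-1:123456789012:approvals",
        "CustomData": "Please review before the production deploy.",
    },
}

print(approval_action["actionTypeId"]["category"])  # → Approval
```

Until someone with the right permissions approves this action, the pipeline pauses and nothing is pushed to production.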
Integration with AWS Lambda
Both AWS CodeStar and AWS CodePipeline support integration with AWS Lambda to create resources, integrate with third-party systems, or perform various checks. Lambda actions can be especially useful during Continuous Delivery. With them, you can:
- Apply or update a CloudFormation template
- Swap a CNAME for a zero-downtime deployment with Elastic Beanstalk
- Back up resources
- Integrate with third-party systems, for example, Slack, to send messages to Slack channels
- Deploy Docker containers to Amazon ECS on EC2 instances
- Create resources on demand in one stage and delete them in another
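A Lambda function invoked as a pipeline action receives the job details in its event and must report success or failure back to CodePipeline. Below is a minimal handler sketch; the actual work is a placeholder, and the client is injectable so the sketch can be exercised without AWS credentials (with the default client, `put_job_success_result` and `put_job_failure_result` are the real CodePipeline API calls).

```python
# Minimal sketch of a Lambda handler used as a CodePipeline action.
# The "work" here is a placeholder; a real function might swap a CNAME,
# post to Slack, or create resources. The client parameter exists so
# the sketch can run without AWS access.
def handler(event, context, codepipeline=None):
    if codepipeline is None:          # default to the real client
        import boto3                  # (requires AWS credentials)
        codepipeline = boto3.client("codepipeline")
    job_id = event["CodePipeline.job"]["id"]
    try:
        result = "ok"                 # placeholder for the real work
        codepipeline.put_job_success_result(jobId=job_id)
    except Exception as exc:
        codepipeline.put_job_failure_result(
            jobId=job_id,
            failureDetails={"type": "JobFailed", "message": str(exc)},
        )
        raise
    return result

# Exercise the handler with a stub client (no AWS access needed):
class _StubClient:
    def __init__(self):
        self.succeeded_job = None
    def put_job_success_result(self, jobId):
        self.succeeded_job = jobId
    def put_job_failure_result(self, jobId, failureDetails):
        pass

stub = _StubClient()
outcome = handler({"CodePipeline.job": {"id": "job-1"}}, None, stub)
print(outcome, stub.succeeded_job)  # → ok job-1
```

If the function never reports a result, the pipeline action eventually times out and fails, so reporting back is not optional.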
In addition to the above, there are other services you may use in a Continuous Delivery pipeline. For instance, AWS OpsWorks can manage configuration using Chef, and AWS Elastic Beanstalk can deploy and scale web applications. At any stage of the pipeline, you can also select AWS CloudFormation as a deployment action.
CloudFormation is a robust service that operates on stacks, i.e., sets of AWS resources used to provision infrastructure as code. With CloudFormation templates, you can provision, reconfigure, and delete resources, all with a single click. The templates describe all the resources and the dependencies between them. Once a stack is deployed, all resources from its template are provisioned and configured. When the stack is deleted, all its resources are removed as well, again with a single click.
You can also use all of the above tools to build CI/CD pipelines for serverless applications based on AWS Lambda functions, which are triggered by events.
Going Back to Deployment
After the software is built, tested, and delivered, it can be deployed. There are multiple deployment strategies for rolling out new software versions. The most common are:
- all at once,
- rolling, and
- immutable.
All at once
With all at once, you deploy new code to the existing fleet of servers, replacing all the code in a single deployment action. Because the update runs on all servers at the same time, it entails downtime, but there’s no need for a DNS change. The only rollback option is to redeploy the old code on all servers again. AWS Elastic Beanstalk calls this deployment type ‘All at Once’; AWS CodeDeploy refers to it as an ‘In-place Deployment’.
Rolling
In a rolling deployment, the fleet of servers is divided into portions that are upgraded one by one, so two versions of the software exist on the same fleet at the same time. This option enables zero-downtime updates, and if an update fails, it affects only part of the fleet.
AWS Elastic Beanstalk refers to this type of deployment as ‘rolling’ and ‘rolling with additional batch’. In AWS CodeDeploy it’s a variation of the In-Place Deployment called ‘OneAtATime’ and ‘HalfAtATime’.
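The difference between the two strategies so far can be sketched as a small simulation over a fake fleet. The server names, versions, and batch size are all made up for the example.

```python
# Illustrative simulation of two deployment strategies on a fake fleet
# of four servers.

def deploy_all_at_once(fleet, version):
    # Every server is replaced in a single action: simplest and fastest,
    # but the whole fleet is briefly down, and rollback = full redeploy.
    for server in fleet:
        fleet[server] = version

def deploy_rolling(fleet, version, batch_size=2):
    # Servers are upgraded batch by batch; two versions coexist in the
    # fleet mid-rollout, and traffic is never fully interrupted.
    servers = list(fleet)
    for i in range(0, len(servers), batch_size):
        for server in servers[i:i + batch_size]:
            fleet[server] = version
        yield sorted(set(fleet.values()))  # versions live after each batch

fleet = {f"server-{i}": "v1" for i in range(4)}
snapshots = list(deploy_rolling(fleet, "v2"))
print(snapshots[0])   # → ['v1', 'v2']  (both versions live mid-rollout)
print(snapshots[-1])  # → ['v2']
```

The intermediate snapshot is exactly the situation the text describes: during a rolling update, part of the fleet serves v1 while the rest already runs v2.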
Immutable
An immutable deployment leverages the cloud’s ability to start a new set of servers running the new version of an application with simple API calls.
Blue/Green deployment is a type of immutable deployment. It requires creating a new (Green) environment; once it’s up and passes all tests, traffic switches to it. The old (Blue) environment remains idle as a backup, in case a rollback is needed.
Both AWS Elastic Beanstalk (immutable and blue/green) and AWS CodeDeploy (blue/green) support this type of deployment.
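The blue/green switch itself can be pictured as flipping a single pointer, such as a DNS CNAME, from the old environment to the new one. A toy sketch, with hypothetical environment names and a placeholder hostname:

```python
# Toy sketch of a blue/green traffic switch. The hostname and versions
# are placeholders; in practice the pointer is a DNS record or a load
# balancer target, not a Python dict.
environments = {"blue": "v1", "green": "v2"}
dns = {"app.example.com": "blue"}   # CNAME-style pointer

def switch_traffic(dns, target):
    previous = dns["app.example.com"]
    dns["app.example.com"] = target
    return previous                  # old environment kept for rollback

old = switch_traffic(dns, "green")
print(environments[dns["app.example.com"]])  # → v2
print(old)                                   # → blue
```

Rollback is just as cheap: calling `switch_traffic(dns, "blue")` points traffic back at the untouched old environment.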
As you can see, Amazon natively supports Continuous Integration, Delivery, and Deployment by providing you with multiple tools and services. The ease of provisioning new resources makes the cloud a perfect choice for CI/CD pipeline implementation.
CI/CD allows you to deploy applications with just a few clicks. At the same time, the process is quick, reliable, and you can repeat it as often as you wish. Moreover, you can easily integrate AWS services with the tools used in on-prem deployments, such as Ansible, Chef, Jenkins, and others.
If you haven’t tried CI/CD in the cloud yet, do it now and see how easy it is.