April 20, 2022

Continuous Integration and Continuous Deployment (CI/CD) Pipelines

A continuous integration and continuous deployment (CI/CD) pipeline is the series of steps performed to deliver a new version of software. CI/CD is a practice focused on improving software delivery throughout the software development life cycle via automation.

By automating CI/CD throughout the development, testing, production, and monitoring phases of the software development lifecycle, organizations can develop higher-quality code faster and more securely. Although it’s possible to execute each step of a CI/CD pipeline manually, the true value of CI/CD pipelines is realized through automation.

What’s a CI/CD pipeline?

A pipeline is a process that drives software development through a path of building, testing, and deploying code, also known as CI/CD. The objective of automating the process is to minimize human error and keep the way software is released consistent. Steps in the pipeline can include compiling code, running unit tests, performing code analysis and security scans, and creating binaries. For containerized environments, the pipeline also packages the code into a container image to be deployed across a hybrid cloud.

CI/CD is the backbone of a DevOps methodology, bringing developers and IT operations teams together to deploy software. As custom applications become key to how companies differentiate, the rate at which code can be released has become a competitive differentiator.

Understanding OpenShift Pipelines

Red Hat OpenShift Pipelines is a cloud-native, continuous integration and continuous delivery (CI/CD) solution based on Kubernetes resources. It uses Tekton building blocks to automate deployments across multiple platforms by abstracting away the underlying implementation details. Tekton introduces a number of standard custom resource definitions (CRDs) for defining CI/CD pipelines that are portable across Kubernetes distributions.
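Because Tekton pipelines and their runs are ordinary Kubernetes custom resources, they can be inspected with standard Kubernetes tooling. The snippet below is a rough sketch that lists PipelineRun objects with the Kubernetes Python client; the namespace name is hypothetical, and it assumes the client is installed and a kubeconfig grants access to a cluster running OpenShift Pipelines.

```python
from kubernetes import client, config

# A sketch, assuming the `kubernetes` Python client and cluster access.
config.load_kube_config()
api = client.CustomObjectsApi()

# PipelineRuns are plain custom resources in the tekton.dev API group.
# The namespace name here is hypothetical.
runs = api.list_namespaced_custom_object(
    group="tekton.dev",
    version="v1beta1",
    namespace="pipelines-demo",
    plural="pipelineruns",
)

for run in runs.get("items", []):
    conditions = run.get("status", {}).get("conditions", [])
    state = conditions[0].get("reason", "Unknown") if conditions else "Pending"
    print(run["metadata"]["name"], state)
```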

Key features

  • Red Hat OpenShift Pipelines is a serverless CI/CD system that runs pipelines with all the required dependencies in isolated containers.
  • Red Hat OpenShift Pipelines is designed for decentralized teams that work on microservice-based architectures.
  • Red Hat OpenShift Pipelines uses standard CI/CD pipeline definitions that are easy to extend and integrate with existing Kubernetes tools, enabling you to scale on demand.
  • You can use Red Hat OpenShift Pipelines to build images with Kubernetes tools such as Source-to-Image (S2I), Buildah, Buildpacks, and Kaniko that are portable across any Kubernetes platform.
  • You can use the OpenShift Container Platform Developer console to create Tekton resources, view logs of pipeline runs, and manage pipelines in your OpenShift Container Platform namespaces.

How continuous integration improves collaboration and code quality

Continuous integration is a development philosophy backed by process mechanics and automation. When practicing continuous integration, developers commit their code into the version control repository frequently; most teams have a standard of committing code at least daily. The rationale is that it’s easier to identify defects and other software quality issues on smaller code differentials than on larger ones developed over an extensive period. In addition, when developers work on shorter commit cycles, it is less likely that multiple developers will edit the same code and require a merge when committing.

Teams implementing continuous integration often start with the version control configuration and practice definitions. Although code is checked in frequently, features and fixes are developed on both short and longer timeframes. Development teams practicing continuous integration use different techniques to control which features and code are ready for production.

Many teams use feature flags, a configuration mechanism to turn features and code on or off at runtime. Features that are still under development are wrapped with feature flags in the code, deployed with the main branch to production, and turned off until they are ready to be used. In recent research, devops teams using feature flags had a ninefold increase in development frequency. Feature flagging tools such as CloudBees, Optimizely Rollouts, and LaunchDarkly integrate with CI/CD tools to support feature-level configurations.
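As a minimal illustration of the idea (not tied to any particular flagging product), the sketch below gates an unfinished code path behind a flag read at runtime; the flag name and the use of an environment variable as its source are assumptions for the example.

```python
import os

def new_checkout_enabled() -> bool:
    """Read the flag at runtime; the name and source are hypothetical."""
    return os.environ.get("FEATURE_NEW_CHECKOUT", "off").lower() == "on"

def legacy_checkout(cart: list) -> str:
    return f"legacy checkout of {len(cart)} items"

def new_checkout(cart: list) -> str:
    return f"new checkout of {len(cart)} items"  # still under development

def checkout(cart: list) -> str:
    # The unfinished path ships to production but stays dark until the flag flips.
    return new_checkout(cart) if new_checkout_enabled() else legacy_checkout(cart)

if __name__ == "__main__":
    print(checkout(["book", "pen"]))
```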

Automated builds

In an automated build process, all the software, database, and other components are packaged together. For example, if you were developing a Java application, continuous integration would package all the static web server files such as HTML, CSS, and JavaScript along with the Java application and any database scripts.

Continuous integration not only packages all the software and database components, but the automation will also execute unit tests and other types of tests. Testing provides vital feedback to developers that their code changes didn’t break anything.
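For instance, a build might run a unit test suite like the small Python sketch below; the function under test is a stand-in, and the point is simply that a failing assertion produces a nonzero exit code the CI server can act on.

```python
import unittest

def apply_discount(price: float, percent: float) -> float:
    """Toy function standing in for real application code."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

class DiscountTests(unittest.TestCase):
    def test_applies_percentage(self):
        self.assertEqual(apply_discount(100.0, 15), 85.0)

    def test_rejects_invalid_percentage(self):
        with self.assertRaises(ValueError):
            apply_discount(100.0, 150)

if __name__ == "__main__":
    # A CI server runs this the same way a developer would; a nonzero exit
    # code marks the build as failed.
    unittest.main()
```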

Most CI/CD tools let developers kick off builds on demand, triggered by code commits in the version control repository, or on a defined schedule. Teams need to determine the build schedule that works best for the size of the team, the number of daily commits expected, and other application considerations. A best practice is to ensure that commits and builds are fast; otherwise, these processes may impede teams trying to code quickly and commit frequently.
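An on-demand trigger is often just an authenticated HTTP call to the CI server. The sketch below posts to a hypothetical webhook URL; the endpoint, payload fields, and token variable are all assumptions, since each CI tool defines its own trigger API and authentication scheme.

```python
import json
import os
import urllib.request

# Hypothetical CI webhook endpoint and token; real tools define their own.
CI_WEBHOOK_URL = os.environ.get("CI_WEBHOOK_URL", "https://ci.example.com/hooks/build")
CI_TOKEN = os.environ.get("CI_TOKEN", "")  # expected to be set in the environment

payload = json.dumps({"branch": "main", "reason": "on-demand build"}).encode()
request = urllib.request.Request(
    CI_WEBHOOK_URL,
    data=payload,
    headers={
        "Content-Type": "application/json",
        "Authorization": f"Bearer {CI_TOKEN}",
    },
)

with urllib.request.urlopen(request) as response:
    print("CI server responded with status", response.status)
```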

Continuous testing and security automation

Automated testing frameworks help quality assurance engineers define, execute, and automate various types of tests that can help development teams know whether a software build passes or fails. They include functionality tests developed at the end of every sprint and aggregated into a regression test for the entire application. The regression test informs the team whether a code change failed one or more of the tests developed across the functional areas of the application where there is test coverage.

A best practice is to enable and require developers to run all or a subset of regression tests in their local environments. This step ensures developers only commit code to version control after code changes have passed regression tests.
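One lightweight way to enforce this is a pre-commit-style script that runs the suite and refuses to continue on failure. The sketch below assumes pytest and a tests/ directory; substitute whatever runner and layout the team actually uses.

```python
import subprocess
import sys

# Run the regression suite; a nonzero exit code blocks the commit.
result = subprocess.run([sys.executable, "-m", "pytest", "-q", "tests/"])
if result.returncode != 0:
    print("Regression tests failed; fix them before committing.")
    sys.exit(result.returncode)
print("Regression tests passed.")
```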

Regression tests are just the beginning, however. Devops teams also automate performance, API, browser, and device testing. Today, teams can also embed static code analysis and security testing in the CI/CD pipeline for shift-left testing. Agile teams can also test interactions with third-party APIs, SaaS, and other systems outside of their control using service virtualization. The key is being able to trigger these tests through the command line, a webhook, or a web service, and get a success or failure response.

Continuous testing implies that the CI/CD pipeline integrates test automation. Some unit and functionality tests will flag issues before or during the continuous integration process. Tests that require a full delivery environment, such as performance and security testing, are often integrated into continuous delivery and done after a build is delivered to its target environments.

Stages in the continuous delivery pipeline

Continuous delivery is the automation that pushes applications to one or more delivery environments. Development teams typically have several environments to stage application changes for testing and review. A devops engineer uses a CI/CD tool such as Jenkins, CircleCI, AWS CodeBuild, Azure DevOps, Atlassian Bamboo, Argo CD, Buddy, Drone, or Travis CI to automate the steps and provide reporting.

For example, Jenkins users define their pipelines in a Jenkinsfile that describes different stages such as build, test, and deploy. Environment variables, options, secret keys, certificates, and other parameters are declared in the file and then referenced in stages. The post section handles error conditions and notifications.

A typical continuous delivery pipeline has build, test, and deploy stages. The following activities could be included at different stages:

  • Pulling code from version control and executing a build.
  • Enabling stage gates for automated security, quality, and compliance checks and supporting approvals when required.
  • Executing any required infrastructure steps automated as code to stand up or tear down cloud infrastructure.
  • Moving code to the target computing environment.
  • Managing environment variables and configuring them for the target environment.
  • Pushing application components to their appropriate services, such as web servers, APIs, and database services.
  • Executing any steps required to restart services or call service endpoints needed for new code pushes.
  • Executing continuous tests and rolling back environments if tests fail.
  • Providing log data and alerts on the state of the delivery.
  • Updating configuration management databases and sending alerts to IT service management workflows on completed deployments.

A more sophisticated continuous delivery pipeline might have additional steps such as synchronizing data, archiving information resources, or patching applications and libraries.
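To make the build, test, and deploy stages listed above concrete, here is a deliberately simplified Python driver that runs the stages in order and stops at the first failure. The commands and the deploy script path are placeholders; a real pipeline would live in a CI/CD tool such as Jenkins or OpenShift Pipelines rather than a hand-rolled script.

```python
import subprocess
import sys

# Placeholder commands for each stage; swap in the team's real tooling.
STAGES = [
    ("build",  [sys.executable, "-m", "build"]),         # packaging step (placeholder)
    ("test",   [sys.executable, "-m", "pytest", "-q"]),  # automated test gate
    ("deploy", ["./scripts/deploy.sh", "staging"]),      # hypothetical deploy script
]

def run_pipeline() -> int:
    for name, command in STAGES:
        print(f"--- stage: {name} ---")
        result = subprocess.run(command)
        if result.returncode != 0:
            # Fail fast and surface the failing stage so the team is alerted.
            print(f"stage '{name}' failed with exit code {result.returncode}")
            return result.returncode
    print("pipeline completed successfully")
    return 0

if __name__ == "__main__":
    sys.exit(run_pipeline())
```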

Teams using continuous deployment to deliver to production may use different cutover practices to minimize downtime and manage deployment risks. One option is configuring canary deployments with an orchestrated shift of traffic usage from the older software version to the newer one.
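The logic of such a cutover can be sketched as a loop that shifts traffic in steps and rolls back when a health signal degrades. Everything below is illustrative: the weight steps, the error budget, and the two helper functions stand in for a real service mesh or router API and a real metrics query.

```python
import time

def serving_error_rate() -> float:
    """Placeholder for a real metrics query (for example, against a monitoring system)."""
    return 0.001

def set_canary_weight(percent: int) -> None:
    """Placeholder for updating the router or service mesh traffic split."""
    print(f"routing {percent}% of traffic to the new version")

def canary_rollout(steps=(5, 25, 50, 100), error_budget=0.01, soak_seconds=1) -> bool:
    # Orchestrated traffic shift: move users over in small steps, rolling back
    # if the error rate climbs. Thresholds and step sizes are illustrative.
    for weight in steps:
        set_canary_weight(weight)
        time.sleep(soak_seconds)      # let the new version soak under load
        if serving_error_rate() > error_budget:
            set_canary_weight(0)      # shift traffic back to the old version
            print("canary failed; rolled back")
            return False
    print("canary promoted to 100% of traffic")
    return True

if __name__ == "__main__":
    canary_rollout()
```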

CI/CD tools and plugins

CI/CD tools typically support a marketplace of plugins. For example, Jenkins lists more than 1,800 plugins that support integration with third-party platforms, user interface, administration, source code management, and build management.

Once the development team has selected a CI/CD tool, it must ensure that all environment variables are configured outside the application. CI/CD tools allow development teams to set these variables, mask variables such as passwords and account keys, and configure them at the time of deployment for the target environment.
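In the application itself, this usually means reading configuration from the environment rather than hard-coding it, as in the small sketch below. The variable names are illustrative, and the masked key represents the kind of value a CI/CD tool would inject at deploy time.

```python
import os

# Configuration is read from the environment at deploy time, not hard-coded.
DATABASE_URL = os.environ.get("DATABASE_URL", "postgres://localhost/dev")
API_KEY = os.environ.get("API_KEY", "")  # injected and masked by the CI/CD tool

def masked(secret: str) -> str:
    """Show only enough of a secret to confirm which one is loaded."""
    return secret[:2] + "***" if len(secret) > 2 else "***"

print(f"database: {DATABASE_URL}")
print(f"api key:  {masked(API_KEY)}")
```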

Continuous delivery tools also provide dashboard and reporting functions, which are enhanced when devops teams implement observable CI/CD pipelines. Developers are alerted if a build or delivery fails. The dashboard and reporting functions integrate with version control and agile tools to help developers determine what code changes and user stories made up the build.

Conclusion

To recap, continuous integration packages and tests software builds and alerts developers if their changes fail any unit tests. Continuous delivery is the automation that delivers applications, services, and other technology deployments to the runtime infrastructure and may execute additional tests.

Developing a CI/CD pipeline is a standard practice for businesses that frequently improve applications and require a reliable delivery process. Once in place, the CI/CD pipeline lets the team focus more on enhancing applications and less on the details of delivering them to various environments.

Getting started with CI/CD requires devops teams to collaborate on technologies, practices, and priorities. Teams need to develop consensus on the right approach for their business and technologies. Once a pipeline is in place, the team should follow CI/CD practices consistently.
