Part 2. DevOps model and practices

DevOps requires a delivery cycle that comprises planning, development, testing, deployment, release, and monitoring, with active cooperation among all members of a team.


A DevOps lifecycle



To break down the process even more, let’s have a look at the core practices that constitute DevOps:


Agile planning

In contrast to traditional project management approaches, Agile planning organizes work in short iterations (e.g. sprints) to increase the number of releases. This means that the team outlines only high-level objectives, while making detailed plans for two iterations in advance. This allows for flexibility and pivots once the ideas are tested on an early product increment.


Continuous development

The concept of continuous “everything” embraces continuous or iterative software development, meaning that all development work is divided into small portions for better and faster production. Engineers commit code in small chunks multiple times a day so that it can be easily tested. Code builds and unit tests are automated as well.
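
To give a simple example, here is what one of those automated unit tests might look like in Python. The function under test and all names are made up for illustration; a CI job would run such tests on every commit:

  # test_pricing.py -- a hypothetical unit test executed on every commit
  import unittest

  def apply_discount(price, percent):
      """Return the price reduced by the given percentage."""
      if not 0 <= percent <= 100:
          raise ValueError("percent must be between 0 and 100")
      return round(price * (1 - percent / 100), 2)

  class ApplyDiscountTest(unittest.TestCase):
      def test_regular_discount(self):
          self.assertEqual(apply_discount(100.0, 20), 80.0)

      def test_invalid_percent_rejected(self):
          with self.assertRaises(ValueError):
              apply_discount(100.0, 150)

  if __name__ == "__main__":
      unittest.main()  # a CI job would typically run: python -m unittest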


Continuous automated testing

A quality assurance team sets up automated testing of committed code using tools like Selenium, Ranorex, and UFT. If bugs or vulnerabilities are revealed, the findings are sent back to the engineering team. This stage also entails version control to detect integration problems in advance. A version control system (VCS) allows developers to record changes in the files and share them with other members of the team, regardless of their location.
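
For instance, a browser-level smoke test with Selenium might be sketched as follows. This is a minimal illustration, assuming the selenium package and a local Chrome driver are installed; the URL and page title are placeholders:

  # A hedged sketch of a Selenium smoke test; URL and title are placeholders.
  from selenium import webdriver

  driver = webdriver.Chrome()  # requires a local ChromeDriver
  try:
      driver.get("https://example.com/login")  # placeholder URL
      assert "Login" in driver.title           # fail the check if the page is broken
  finally:
      driver.quit()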


Continuous integration and continuous delivery (CI/CD)

The code that passes automated tests is integrated into a single, shared repository on a server. Frequent code submissions prevent so-called “integration hell,” when the differences between individual code branches and the mainline code become so drastic over time that integration takes more time than the actual coding.


Continuous delivery is an approach that merges development, testing, and deployment operations into a streamlined process, as it relies heavily on automation. This stage enables the automated delivery of code updates to a production environment.
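
To make the idea concrete, here is a simplified Python stand-in for the stages a CI/CD server automates. The shell commands are placeholders for whatever build, test, and delivery steps a real project defines; actual pipelines are usually written in a CI tool's own configuration format:

  # pipeline.py -- a simplified stand-in for a CI/CD pipeline definition.
  import subprocess
  import sys

  STAGES = [
      ("test",    ["python", "-m", "unittest", "discover"]),
      ("build",   ["python", "-m", "build"]),     # placeholder build step
      ("deliver", ["./deploy.sh", "staging"]),    # placeholder delivery step
  ]

  for name, command in STAGES:
      print(f"--- stage: {name} ---")
      result = subprocess.run(command)
      if result.returncode != 0:
          sys.exit(f"stage '{name}' failed; stopping the pipeline")
  print("all stages passed; the change is ready for release")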


Continuous deployment

At this stage, the code is deployed to run in production on a public server. Code must be deployed in a way that doesn’t disrupt already functioning features and remains available to a large number of users. Frequent deployment allows for a “fail fast” approach, meaning that new features are tested and verified early. There are various automated tools that help engineers deploy a product increment; the most popular are Chef, Puppet, Azure Resource Manager, and Google Cloud Deployment Manager.
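
One common safeguard behind “fail fast” is gating a deployment on a health check so a broken release can be rolled back immediately. Here is a minimal sketch of that idea; the deploy and rollback commands and the health URL are all hypothetical:

  # A hedged sketch of a health-gated deployment; commands and URL are hypothetical.
  import subprocess
  import urllib.request

  def healthy(url="https://example.com/health", timeout=5):
      try:
          with urllib.request.urlopen(url, timeout=timeout) as response:
              return response.status == 200
      except OSError:
          return False

  subprocess.run(["./deploy.sh", "production"], check=True)  # placeholder deploy step
  if not healthy():
      # "fail fast": revert immediately instead of leaving users on a broken build
      subprocess.run(["./rollback.sh"], check=True)          # placeholder rollback step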


Continuous monitoring

The final stage of the DevOps lifecycle is oriented toward assessing the whole cycle. The goal of monitoring is to detect problematic areas of the process and to analyze feedback from the team and users, reporting existing inaccuracies and improving the product’s functioning.
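
As a bare-bones illustration of the idea, a script can poll an application’s health endpoint and raise an alert when it misbehaves. The endpoint and the alert mechanism below are placeholders; real teams rely on the dedicated monitoring tools covered later in this article:

  # A minimal monitoring loop; endpoint and alerting are placeholder choices.
  import time
  import urllib.request

  ENDPOINT = "https://example.com/health"  # placeholder endpoint

  while True:
      try:
          with urllib.request.urlopen(ENDPOINT, timeout=5) as response:
              status = response.status
      except OSError:
          status = None
      if status != 200:
          # a real setup would page the on-call engineer here
          print(f"ALERT: {ENDPOINT} unhealthy (status={status})")
      time.sleep(60)  # poll once a minute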


Infrastructure as code

Infrastructure as code (IaC) is an infrastructure management approach that makes continuous delivery and DevOps possible. It entails using scripts to automatically set the deployment environment (networks, virtual machines, etc.) to the needed configuration regardless of its initial state.


Without IaC, engineers would have to treat each target environment individually, which becomes a tedious task as you may have many different environments for development, testing, and production use.


Having the environment configured as code, you:

  1. can test it the way you test the source code itself, and

  2. can use a virtual machine that behaves like a production environment to test early.


Once the need to scale arises, the script can automatically set the needed number of environments to be consistent with each other.
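
The key property of such scripts is idempotence: running them repeatedly converges the environment to the same desired state no matter where it started. Here is a toy Python illustration of that converge-to-desired-state idea; the file path and its contents are hypothetical, and real teams would use a dedicated IaC tool rather than hand-rolled scripts:

  # A toy illustration of idempotent, declarative configuration.
  from pathlib import Path

  DESIRED_STATE = {
      Path("/etc/myapp/app.conf"): "port=8080\nworkers=4\n",  # hypothetical config
  }

  def converge(state):
      for path, content in state.items():
          if not path.exists() or path.read_text() != content:
              path.parent.mkdir(parents=True, exist_ok=True)
              path.write_text(content)  # bring the file to the desired state
          # if it already matches, do nothing -- safe to run any number of times

  if __name__ == "__main__":
      converge(DESIRED_STATE)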


Containerization

Virtual machines emulate hardware behavior to share the computing resources of a physical machine, which enables running multiple application environments or operating systems (e.g. Linux and Windows Server) on a single physical server or distributing an application across multiple physical machines.


Containers, on the other hand, are more lightweight: they are packaged with all runtime components (files, libraries, etc.) but don’t include whole operating systems, only the minimum required resources. Within DevOps, containers are used to instantly deploy applications across various environments, and they combine well with the IaC approach described above. A container can be tested as a unit before deployment. Currently, Docker provides the most popular container toolset.
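
As an illustration, a script can drive the Docker CLI to build an image once and run it identically anywhere. This assumes Docker is installed; the image tag and port mapping are placeholders:

  # A hedged sketch that builds and runs a container via the Docker CLI.
  import subprocess

  IMAGE = "myapp:1.0"  # hypothetical image tag

  # Package the application and its runtime dependencies into an image...
  subprocess.run(["docker", "build", "-t", IMAGE, "."], check=True)
  # ...then run it the same way on a laptop, a test server, or in production.
  subprocess.run(["docker", "run", "--rm", "-p", "8080:8080", IMAGE], check=True)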


Microservices

The microservice architectural approach entails building one application as a set of independent services that communicate with each other but are configured individually. Building an application this way, you can isolate arising problems, ensuring that a failure in one service doesn’t break the rest of the application’s functions. With a high rate of deployment, microservices allow for keeping the whole system stable while fixing problems in isolation. Learn more about microservices and modernizing legacy monolithic architectures in our article.
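
Here is a toy sketch of that failure-isolation idea using only Python’s standard library. The service names and ports are made up: the catalog service keeps serving responses even when its hypothetical recommendations dependency is down:

  # A toy microservice sketch; service names and ports are hypothetical.
  import json
  import urllib.request
  from http.server import BaseHTTPRequestHandler, HTTPServer

  class CatalogHandler(BaseHTTPRequestHandler):
      def do_GET(self):
          # Call a separate "recommendations" service, tolerating its failure
          try:
              with urllib.request.urlopen("http://localhost:9001/recs", timeout=2) as r:
                  recs = json.loads(r.read())
          except OSError:
              recs = []  # the catalog keeps working even if recommendations is down
          body = json.dumps({"items": ["book", "pen"], "recommendations": recs})
          self.send_response(200)
          self.send_header("Content-Type", "application/json")
          self.end_headers()
          self.wfile.write(body.encode())

  if __name__ == "__main__":
      HTTPServer(("localhost", 9000), CatalogHandler).serve_forever()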


Cloud infrastructure

Today most organizations use hybrid clouds, a combination of public and private ones. But the shift toward fully public clouds (i.e. those managed by an external provider such as AWS or Microsoft Azure) continues. While cloud infrastructure isn’t a must for DevOps adoption, it provides flexibility, toolsets, and scalability to applications. With the recent introduction of serverless architectures in the cloud, DevOps-driven teams can dramatically reduce their effort by essentially eliminating server-management operations.
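
In a serverless setup, the team ships only function code and the provider runs it on demand. For example, a Python handler for AWS Lambda follows the shape below; the event fields are hypothetical:

  # The general shape of an AWS Lambda handler; the event fields are hypothetical.
  import json

  def handler(event, context):
      # The platform provisions, scales, and patches the servers; the team
      # only supplies this function and its configuration.
      name = event.get("name", "world")  # hypothetical input field
      return {
          "statusCode": 200,
          "body": json.dumps({"message": f"hello, {name}"}),
      }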

An important part of these processes is the set of automation tools that facilitate the workflow. Below we explain why this automation matters and how it is done.


DevOps tools


The main reason to implement DevOps is to improve the delivery pipeline and integration process by automating these activities. As a result, the product gets a shorter time to market. To achieve this automated release pipeline, the team should adopt existing, purpose-built tools rather than building them from scratch.


Currently, existing DevOps tools cover almost all stages of continuous delivery, from continuous integration environments to containerization and deployment. While some processes are still automated with custom scripts, DevOps engineers mostly use off-the-shelf products. Let’s have a look at the most popular ones.


Server configuration tools are used to manage and configure servers in DevOps. Puppet is one of the most widely used systems in this category. Chef is a tool for infrastructure-as-code management that runs both on cloud and hardware servers. Another popular solution is Ansible, which automates configuration management, cloud provisioning, and application deployment.
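
These tools are typically driven from the command line or from a CI job. As a small sketch, assuming Ansible is installed and the inventory and playbook file names are placeholders:

  # Running an Ansible playbook from a script; file names are placeholders.
  import subprocess

  subprocess.run(
      ["ansible-playbook", "-i", "inventory.ini", "site.yml"],  # placeholder files
      check=True,
  )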

CI/CD stages also require task-specific tools for automation, such as Jenkins, which comes with many plugins to tweak the continuous delivery workflow, or GitLab CI, a free and open-source CI/CD tool provided by GitLab.


For more solutions, check our corresponding article where we compare the major CI tools on today’s market.


Containerization and orchestration stages rely on a range of dedicated tools to build, configure, and manage containers that allow software products to function across various environments. Docker is the most popular instrument for building self-contained units and packaging code into them. The widely used container orchestration platforms are the commercial OpenShift and the open-source Kubernetes.


Monitoring and alerting in DevOps is typically facilitated by Nagios, a powerful tool that presents analytics in visual reports, or by the open-source Prometheus.
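
For instance, an application can expose metrics for Prometheus to scrape using the official Python client. This sketch assumes the prometheus_client package is installed; the metric name is a placeholder:

  # Exposing a metric for Prometheus to scrape; metric name is a placeholder.
  import random
  import time
  from prometheus_client import Counter, start_http_server

  REQUESTS = Counter("myapp_requests_total", "Total requests handled")

  if __name__ == "__main__":
      start_http_server(8000)  # metrics served at http://localhost:8000/metrics
      while True:
          REQUESTS.inc()       # a real app would increment this per request
          time.sleep(random.uniform(0.1, 1.0))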


While a DevOps engineer (we’ll discuss this role in more detail below) must operate these tools, the rest of the team also uses them with a DevOps engineer’s facilitation.
