Emerging Tech Part 2 Student Version


Tags

DevOps Software Development Cloud Computing Emerging Technology

Summary

This document describes DevOps, a set of practices that merges development, quality assurance, and operations into a single continuous process. It covers core concepts, advantages, and principles of DevOps.

Full Transcript


- [**Part 4: DevOps**](#part-4-devops)
- [**Chapter 1: Introduction to DevOps**](#chapter-1-introduction-to-devops)
- [**Chapter 2: DevOps Tools - Configuration Management and Containerization**](#chapter-2-devops-tools---configuration-management-and-containerization)
- [**Chapter 3: DevOps Tools - Infrastructure as Code and Cloud Platforms**](#chapter-3-devops-tools---infrastructure-as-code-and-cloud-platforms)
- [**Chapter 4: DevOps Best Practices and Case Studies**](#chapter-4-devops-best-practices-and-case-studies)
- [**Additional Resources**](#additional-resources)

Part 4: DevOps
==============

###### Chapter 1: Introduction to DevOps
###### Chapter 2: DevOps Tools - Configuration Management and Containerization
###### Chapter 3: DevOps Tools - Infrastructure as Code and Cloud Platforms
###### Chapter 4: DevOps Best Practices and Case Studies

Chapter 1: Introduction to DevOps
---------------------------------

##### Chapter Contents

- What is DevOps?
- Advantages of Adopting DevOps
- DevOps Principles
- DevOps Model and Practices
- DevOps Tools
- A DevOps Engineer: Roles and Responsibilities
- DevOps Engineer Skillset
- Comparison of Traditional SDLC vs. DevOps

### 1.1. What is DevOps?

DevOps is a combination of two words, "Development" and "Operations". It is a practice that aims to merge development, quality assurance, and operations (deployment and integration) into a single, continuous set of processes. This methodology is a natural extension of Agile and continuous delivery approaches.

### 1.2. Advantages of Adopting DevOps

By adopting DevOps, companies gain three core advantages that cover the technical, business, and cultural aspects of development.

-
-
-

These benefits come only with the understanding that DevOps isn't merely a set of actions, but rather a philosophy that fosters cross-functional team communication. More importantly, it doesn't require substantial technical changes, as the main focus is on altering the way people work. Success depends entirely on adhering to DevOps principles.

### 1.3. DevOps Principles

In 2010, Damon Edwards and John Willis came up with the CAMS model to showcase the key values of DevOps. CAMS is an acronym that stands for Culture, Automation, Measurement, and Sharing.

#### 1. Culture

DevOps is first and foremost a culture and mindset that forges strong collaborative bonds between software development and infrastructure operations teams. This culture is built upon the following pillars.

-
-
-
-

#### 2. Automation of Processes

Automating as many development, testing, configuration, and deployment procedures as possible is the golden rule of DevOps. It allows specialists to get rid of time-consuming repetitive work and focus on other important activities that by their nature can't be automated.

#### 3. Measurement of KPIs (Key Performance Indicators)

Decision-making should be driven by factual information in the first place. To get optimal performance, it is necessary to keep track of the progress of the activities composing the DevOps flow. Measuring various metrics of a system allows for understanding what works well and what can be improved; a small sketch at the end of this section illustrates the idea.

#### 4. Sharing

Sharing is caring. This phrase explains the DevOps philosophy better than anything else, as it highlights the importance of collaboration. It is crucial to share feedback, best practices, and knowledge among teams, since this promotes transparency, creates collective intelligence, and eliminates constraints. You don't want to put the whole development process on pause just because the only person who knows how to handle certain tasks went on vacation or quit.
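To make the Measurement principle concrete, here is a small, hypothetical Python sketch that computes two simple delivery metrics, deployment frequency and average lead time, from a made-up list of deployment records. The record structure and numbers are purely illustrative.

```python
from datetime import datetime, timedelta

# Hypothetical records: when a change was committed and when it reached production.
deployments = [
    {"committed": datetime(2024, 5, 1, 9, 0), "deployed": datetime(2024, 5, 1, 15, 30)},
    {"committed": datetime(2024, 5, 2, 11, 0), "deployed": datetime(2024, 5, 3, 10, 0)},
    {"committed": datetime(2024, 5, 6, 14, 0), "deployed": datetime(2024, 5, 6, 18, 45)},
]

# Deployment frequency: deployments per week over the observed period.
period = max(d["deployed"] for d in deployments) - min(d["deployed"] for d in deployments)
weeks = max(period / timedelta(weeks=1), 1.0)
frequency = len(deployments) / weeks

# Lead time: average delay between committing a change and deploying it.
lead_times = [d["deployed"] - d["committed"] for d in deployments]
avg_lead_time = sum(lead_times, timedelta()) / len(lead_times)

print(f"Deployments per week: {frequency:.1f}")
print(f"Average lead time:    {avg_lead_time}")
```

Tracking numbers like these over time makes it visible whether process changes actually shorten the path from commit to production.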
### 1.4. DevOps Model and Practices

DevOps requires a delivery cycle that comprises planning, development, testing, deployment, release, and monitoring, with active cooperation between the different members of a team.

![DevOps lifecycle](media/image1.jpg)

A DevOps lifecycle

To break the process down even further, let's have a look at the core practices that constitute DevOps.

#### 1. Agile Planning

In contrast to traditional project management approaches, Agile planning organises work in short iterations (e.g. sprints) to increase the number of releases. This means that the team outlines only high-level objectives, while making detailed plans for just two iterations in advance. This allows for flexibility and pivots once the ideas are tested on an early product increment. Check our Agile infographics to learn more about the different methods applied.

#### 2. Continuous Development

The concept of continuous "everything" embraces continuous or iterative software development, meaning that all development work is divided into small portions for better and faster production. Engineers commit code in small chunks multiple times a day so that it can be easily tested. Code builds and unit tests are automated as well.

#### 3. Continuous Automated Testing

A quality assurance team sets up automated testing of committed code using tools like Selenium, Ranorex, UFT, etc. If bugs or vulnerabilities are revealed, they are sent back to the engineering team. This stage also entails version control to detect integration problems in advance. A Version Control System (VCS) allows developers to record changes in files and share them with other members of the team, regardless of their location.

#### 4. Continuous Integration and Continuous Delivery (CI/CD)

The code that passes automated tests is integrated into a single, shared repository on a server. Frequent code submissions prevent the so-called "integration hell", where the differences between individual code branches and the mainline code become so drastic over time that integration takes longer than the actual coding.

Continuous delivery, detailed in our dedicated article, is an approach that merges development, testing, and deployment operations into a streamlined process, as it heavily relies on automation. This stage enables the automatic delivery of code updates into a production environment.

#### 5. Continuous Deployment

At this stage, the code is deployed to run in production on a public server. Code must be deployed in a way that doesn't affect already functioning features and is available to a large number of users. Frequent deployment allows for a "fail fast" approach, meaning that new features are tested and verified early. There are various automated tools that help engineers deploy a product increment. The most popular are Chef, Puppet, Azure Resource Manager, and Google Cloud Deployment Manager.

#### 6. Continuous Monitoring

The final stage of the DevOps lifecycle is oriented toward assessing the whole cycle. The goal of monitoring is to detect problematic areas of the process and analyse feedback from the team and users in order to report existing inaccuracies and improve the product's functioning.

#### 7. Infrastructure as Code

Infrastructure as Code (IaC) is an infrastructure management approach that makes continuous delivery and DevOps possible. It entails using scripts to automatically set the deployment environment (networks, virtual machines, etc.) to the needed configuration, regardless of its initial state.

Without IaC, engineers would have to treat each target environment individually, which becomes a tedious task, as you may have many different environments for development, testing, and production use.

With the environment configured as code, you can test it the way you test the source code itself and use a virtual machine that behaves like a production environment to test early. Once the need to scale arises, the script can automatically set up the required number of environments, consistent with one another.
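As a minimal sketch of this idea, the following Python snippet uses boto3 (the AWS SDK for Python) to bring one small piece of infrastructure, a security group, to a desired state regardless of whether it already exists. The group name, rule, and region are hypothetical, and AWS credentials are assumed to be configured; this is just one possible way to script an environment, not a prescribed workflow.

```python
import boto3

ec2 = boto3.client("ec2", region_name="eu-west-1")  # hypothetical region

DESIRED_NAME = "web-sg"  # hypothetical security group name

# Check the current state: does the security group already exist?
existing = ec2.describe_security_groups(
    Filters=[{"Name": "group-name", "Values": [DESIRED_NAME]}]
)["SecurityGroups"]

if existing:
    group_id = existing[0]["GroupId"]  # already in the desired state
else:
    # Create the group and open HTTPS, bringing the environment to the desired state.
    group_id = ec2.create_security_group(
        GroupName=DESIRED_NAME, Description="Web tier security group"
    )["GroupId"]
    ec2.authorize_security_group_ingress(
        GroupId=group_id,
        IpPermissions=[{
            "IpProtocol": "tcp",
            "FromPort": 443,
            "ToPort": 443,
            "IpRanges": [{"CidrIp": "0.0.0.0/0"}],
        }],
    )

print(f"Security group {DESIRED_NAME} is ready: {group_id}")
```

Because the script checks the current state before acting, running it repeatedly converges on the same configuration, which is exactly the property that makes such scripts suitable for setting up many consistent environments.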
#### 8. Containerization

Virtual machines emulate hardware behaviour to share the computing resources of a physical machine, which enables running multiple application environments or operating systems (Linux and Windows Server) on a single physical server, or distributing an application across multiple physical machines.

Containers, on the other hand, are more lightweight and are packaged with all runtime components (files, libraries, etc.), but they don't include whole operating systems, only the minimum required resources. Containers are used within DevOps to instantly deploy applications across various environments and combine well with the IaC approach described above. A container can be tested as a unit before deployment. Currently, Docker provides the most popular container toolset.

#### 9. Microservices

The microservice architectural approach entails building one application as a set of independent services that communicate with each other but are configured individually. Building an application this way, you can isolate any arising problems, ensuring that a failure in one service doesn't break the rest of the application's functions. With a high rate of deployment, microservices allow for keeping the whole system stable while fixing problems in isolation. Learn more about microservices and modernising legacy monolithic architectures in our article.

#### 10. Cloud Infrastructure

Today most organisations use hybrid clouds, a combination of public and private ones. But the shift towards fully public clouds (i.e. managed by an external provider such as AWS or Microsoft Azure) continues. While cloud infrastructure isn't a must for DevOps adoption, it provides flexibility, toolsets, and scalability to applications. With the recent introduction of serverless architectures on clouds, DevOps-driven teams can dramatically reduce their effort by essentially eliminating server-management operations.

### 1.5. DevOps Tools

The main reason to implement DevOps is to improve the delivery pipeline and integration process by automating these activities. As a result, the product gets a shorter time to market. To achieve this automated release pipeline, the team must adopt specific tools rather than building them from scratch; a small scripted sketch of one such pipeline step follows the list below.

Currently, existing DevOps tools cover almost all stages of continuous delivery, starting from continuous integration environments and ending with containerization and deployment. While some of the processes are still automated with custom scripts, DevOps engineers mostly use various products. Let's have a look at the most popular ones.

-
-
-
-
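To make the automated release pipeline more concrete, here is a small, hypothetical Python sketch of a single pipeline step: run the test suite, and only if it passes, build and push a container image. The image name, registry, and directory layout are made up, and the sketch assumes `pytest` and the Docker CLI are installed; real pipelines would normally run steps like these from a CI server rather than a hand-rolled script.

```python
import subprocess
import sys

IMAGE = "registry.example.com/demo-service:latest"  # hypothetical image and registry

def run(*cmd: str) -> None:
    """Run a shell command and abort the pipeline if it fails."""
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

try:
    run("pytest", "-q")                       # continuous testing: unit tests must pass
    run("docker", "build", "-t", IMAGE, ".")  # build the deployable artefact
    run("docker", "push", IMAGE)              # publish it for deployment
except subprocess.CalledProcessError as err:
    print(f"Pipeline step failed: {err}")
    sys.exit(1)

print("Build published; deployment can proceed.")
```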
### 1.6. A DevOps Engineer: Roles and Responsibilities

The main function of a DevOps engineer is to introduce the continuous delivery and continuous integration workflow, which requires an understanding of the aforementioned tools and knowledge of several programming languages.

Depending on the organisation, job descriptions differ. Smaller businesses look for engineers with broader skillsets and responsibilities. For example, the job description may require building the product alongside the developers. Larger companies may look for an engineer for a specific stage of the DevOps lifecycle who will work with a certain automation tool.

The basic and widely accepted responsibilities of a DevOps engineer are:

-
-
-
-
-
-

Additionally, a DevOps engineer can be responsible for IT infrastructure maintenance and management, which comprises hardware, software, networks, storage, virtual and remote assets, and control over cloud data storage.

![Scheme of IT Infrastructure management](media/image4.jpg)

Scheme of IT Infrastructure management

### 1.7. DevOps Engineer Skillset

While this title doesn't require a candidate to be a system administrator or a developer, this person must have experience in both fields. When hiring a DevOps engineer, pay attention to the following characteristics:

-
-
-
-
-

### 1.8. Comparison of Traditional SDLC vs. DevOps

The Software Development Life Cycle (SDLC) and DevOps are two distinct approaches to software development and deployment, each with unique methodologies, team structures, and goals. Here's a comparison of traditional SDLC versus DevOps:

#### 1. Process Flow

-
-

#### 2. Team Structure and Collaboration

-
-

#### 3. Speed of Delivery

-
-

#### 4. Testing Approach

-
-

#### 5. Automation

-
-

#### 6. Deployment and Releases

-
-

#### 7. Feedback and Improvement

-
-

#### 8. Risk Management

-
-

#### 9. Tooling and Technology

-
-

Chapter 2: DevOps Tools - Configuration Management and Containerization
------------------------------------------------------------------------

##### Chapter Contents

- Introduction to Configuration Management Tools
- Introduction to Containerization
- Integration of Configuration Management and Containerization

DevOps is a crucial approach that streamlines the processes of software development, deployment, and operation. Key to DevOps are configuration management and containerization tools, which enable efficient, repeatable, and scalable application deployment. In this guide, we'll delve into popular configuration management tools (Ansible, Puppet, and Chef) as well as containerization with Docker, and discuss how to integrate these tools to automate the creation and deployment of Docker images using configuration management.

### 2.1. Introduction to Configuration Management Tools

Configuration management involves maintaining consistency in an application's software environment and infrastructure across the various stages of its lifecycle. This is particularly important for large-scale applications and distributed systems.

-
-
-

### 2.2. Introduction to Containerization

Containerization encapsulates an application and its dependencies in a container that can run on any system with minimal configuration. This portability makes containerization a powerful tool for DevOps, allowing developers to "build once, run anywhere". Docker is the most popular containerization tool in the industry.

-
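As a minimal sketch of the "build once, run anywhere" workflow, the snippet below uses the Docker SDK for Python (the `docker` package) to build an image and start a container from it. The directory, image tag, and port mapping are hypothetical, and a Dockerfile is assumed to exist in `./app`; the section above names Docker as the tool, and this is just one way to drive it programmatically.

```python
import docker

# Connect to the local Docker daemon using environment defaults.
client = docker.from_env()

# Build an image from a hypothetical ./app directory that contains a Dockerfile.
image, build_logs = client.images.build(path="./app", tag="demo-service:1.0")

# Run the image as a detached container, mapping container port 8000 to host port 8000.
container = client.containers.run(
    "demo-service:1.0",
    detach=True,
    name="demo-service",
    ports={"8000/tcp": 8000},
)

print(f"Started container {container.short_id} from image {image.tags[0]}")
```

Because the image carries the application together with its runtime dependencies, the same artefact can be started unchanged on a developer laptop, a test server, or a production host.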
### 2.3. Integration of Configuration Management and Containerization

Integrating configuration management with containerization allows for advanced automation capabilities in DevOps. By combining tools like Docker with Ansible, Puppet, or Chef, organisations can automate the deployment and configuration of containers across their environments.

#### Key Integration Concepts

1.
2.
3.

#### For instance:

-
-
-

Chapter 3: DevOps Tools - Infrastructure as Code and Cloud Platforms
---------------------------------------------------------------------

##### Chapter Contents

- Popular Infrastructure as Code Tools
- Cloud Platforms: AWS, GCP, and Azure
- Integration of IaC and Cloud Platforms

In modern software development and IT operations, the concept of Infrastructure as Code (IaC) has transformed the way infrastructure is managed. Traditionally, configuring servers, networks, and other IT resources required manual work, which was both time-consuming and prone to errors. With IaC, infrastructure is managed in a codified format, allowing for automated, consistent, and repeatable deployments.

Infrastructure as Code essentially brings the principles of DevOps and software development to the infrastructure level, enabling teams to define, provision, and manage infrastructure through code. This approach allows teams to track changes to infrastructure, version it, and apply the same methodologies as they do in software development, such as testing and version control.

Infrastructure as Code also facilitates collaboration among teams by ensuring that everyone works from a single source of truth, namely the code. This approach ensures that configurations are consistent across environments, which reduces configuration drift and operational issues.

In IaC, developers write code that defines the desired state of the infrastructure, including servers, storage, networking, and other resources. IaC tools then interpret this code and automatically configure resources to match this desired state.

### 3.1. Popular Infrastructure as Code Tools

Two of the most widely used IaC tools are Terraform and CloudFormation. Each tool has its own approach to infrastructure management, and both are highly popular among DevOps engineers for different reasons.

Terraform is an open-source IaC tool developed by HashiCorp that is widely appreciated for its versatility and flexibility. Terraform allows users to define infrastructure using a declarative configuration language called HashiCorp Configuration Language (HCL). It can manage resources across multiple cloud platforms, including AWS, GCP, Azure, and more, making it a multi-cloud tool.

One of the main strengths of Terraform is its ability to maintain and manage resources across various cloud providers from a single configuration. This cross-platform compatibility allows teams to avoid vendor lock-in and supports multi-cloud strategies. Terraform uses a concept called "providers" to interact with cloud platforms and other services, which allows users to extend its functionality and manage various services.

Additionally, Terraform's "state" feature maintains an up-to-date record of all resources managed by Terraform, making it easy to track and manage changes over time. Using state files also allows Terraform to calculate the differences between the current infrastructure and the desired configuration, which helps with incremental deployments and efficient updates.
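As a minimal, hypothetical sketch of Terraform's workflow driven from Python, the snippet below shells out to the Terraform CLI to initialise, plan, and apply a configuration. It assumes Terraform is installed and that an HCL configuration already exists in a made-up `./infra` directory; the text above describes Terraform itself, and this wrapper is only one way to automate it.

```python
import subprocess

def terraform(*args: str, workdir: str = "./infra") -> None:
    """Run a Terraform CLI command in the directory holding the HCL configuration."""
    subprocess.run(["terraform", *args], cwd=workdir, check=True)

# Download the required providers and initialise state handling.
terraform("init", "-input=false")

# Compare the current state with the desired configuration and save the plan.
terraform("plan", "-input=false", "-out=tfplan")

# Apply the saved plan, bringing the real infrastructure in line with the code.
terraform("apply", "-input=false", "tfplan")
```

Because Terraform tracks state, re-running the same sequence only changes resources that have drifted from the configuration.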
CloudFormation is a service provided by Amazon Web Services (AWS) and is tailored specifically for managing AWS resources. With CloudFormation, users can define infrastructure using JSON or YAML templates, which specify the configuration of AWS resources and how they should be interconnected. CloudFormation then provisions and configures these resources according to the specifications defined in the templates. One of the main benefits of using CloudFormation is its deep integration with AWS services, which enables a high level of functionality within the AWS ecosystem.

CloudFormation is particularly well-suited for organisations that are fully committed to using AWS and seek a comprehensive solution for managing infrastructure within that environment. Unlike Terraform, CloudFormation is specific to AWS, meaning it lacks the cross-cloud functionality that Terraform offers. However, it provides extensive capabilities for managing complex AWS deployments, including nested stacks, parameterized templates, and the ability to manage lifecycle events.
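As a small, hypothetical illustration of the CloudFormation workflow just described, the Python sketch below uses boto3 to create a stack from an inline template containing a single S3 bucket. The stack name, bucket name, and region are invented for the example, and AWS credentials are assumed to be configured.

```python
import json
import boto3

# A minimal template describing one S3 bucket (names here are purely illustrative).
template = {
    "AWSTemplateFormatVersion": "2010-09-09",
    "Resources": {
        "DemoBucket": {
            "Type": "AWS::S3::Bucket",
            "Properties": {"BucketName": "demo-devops-bucket-example"},
        }
    },
}

cfn = boto3.client("cloudformation", region_name="eu-west-1")

# Ask CloudFormation to provision everything declared in the template.
cfn.create_stack(StackName="demo-devops-stack", TemplateBody=json.dumps(template))

# Block until AWS reports that all resources in the stack have been created.
cfn.get_waiter("stack_create_complete").wait(StackName="demo-devops-stack")
print("Stack created")
```

Updating the template and calling `update_stack` with the same stack name would then let CloudFormation work out and apply only the differences.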
### 3.2. Cloud Platforms: AWS, GCP, and Azure

In the modern cloud landscape, three major providers dominate the field: Amazon Web Services (AWS), Google Cloud Platform (GCP), and Microsoft Azure. Each platform provides a vast array of services that can be managed through IaC tools, allowing DevOps teams to deploy, configure, and manage infrastructure at scale.

Amazon Web Services (AWS) is the largest and most widely used cloud provider, known for its comprehensive set of services and global reach. AWS offers a vast range of services, including compute, storage, networking, databases, machine learning, analytics, and much more. AWS also provides tools for security, monitoring, and compliance, which makes it particularly appealing for enterprises and industries with strict regulatory requirements. The platform is known for its scalability and reliability, and it has data centres in multiple regions worldwide, allowing organisations to deploy applications with low latency for global users. AWS supports several IaC tools, including its native CloudFormation as well as Terraform, enabling teams to automate the provisioning and management of AWS resources. For teams looking to manage large-scale infrastructures, AWS provides robust options like Elastic Load Balancing, Auto Scaling, and Amazon RDS for managed databases, among others.

Google Cloud Platform (GCP) is Google's cloud offering, and it stands out for its strong data and machine learning capabilities. GCP provides a suite of services similar to those offered by AWS, including compute, storage, networking, and data management, but it is particularly popular in industries that rely heavily on data analysis and artificial intelligence. Google's machine learning and artificial intelligence tools, such as TensorFlow and AutoML, are industry-leading and widely used for building data-driven applications. GCP has a flexible pricing model and is known for its cost efficiency, making it appealing for organisations focused on optimising expenses. GCP also has a global infrastructure with regions and zones across multiple continents, providing options for low-latency, high-availability deployments. For IaC, GCP integrates well with Terraform, and it also has its own tool, Google Cloud Deployment Manager, which allows users to define infrastructure as code for GCP-specific resources.

Microsoft Azure is another major cloud provider and is especially popular in enterprise environments due to Microsoft's extensive integration capabilities with existing enterprise software. Azure offers a wide range of services, including computing, databases, machine learning, DevOps, and IoT. Azure is often chosen by companies that use Microsoft products, such as Windows Server, Active Directory, and Microsoft SQL Server, as it offers seamless integration with these technologies. Additionally, Azure provides hybrid cloud capabilities that allow organisations to connect on-premises environments with Azure cloud resources, which is particularly useful for enterprises looking to adopt a hybrid cloud approach. Azure has its own IaC tool called Azure Resource Manager (ARM), which allows users to define and manage Azure resources. Terraform is also compatible with Azure, providing users with a flexible option for multi-cloud management.

### 3.3. Integration of IaC and Cloud Platforms

Integrating IaC with cloud platforms is essential for deploying and managing infrastructure in a scalable, automated manner. By using IaC tools, teams can define the desired state of infrastructure and deploy it across one or multiple cloud platforms without manual intervention. The integration of IaC with cloud platforms provides several benefits, including version control, reusability, and consistency. For instance, with Terraform, teams can use the same configuration to deploy infrastructure on AWS, GCP, and Azure, making it easier to implement a multi-cloud or hybrid cloud strategy. CloudFormation, although AWS-specific, offers a powerful way to manage complex AWS deployments and maintain a consistent configuration across multiple environments within AWS.

When deploying IaC in a cloud environment, the process typically involves writing code that defines the desired infrastructure, which includes resources such as virtual machines, storage accounts, networks, and databases. This code is then executed by the IaC tool, which interacts with the cloud platform's API to provision and configure resources accordingly. Most cloud platforms provide SDKs and APIs that allow IaC tools to interact with their services, and each platform has a specific set of permissions and authentication methods to secure access to resources.

In Terraform, for example, users configure providers that are responsible for interacting with the cloud platform's API. Each cloud platform offers its own set of providers, which are responsible for managing resources on that platform. Terraform uses these providers to authenticate and interact with the cloud services, allowing teams to create, modify, and delete resources in an automated way.

In a CI/CD pipeline, IaC can be integrated to automate infrastructure provisioning as part of the deployment process. For instance, when a new feature is deployed, the IaC tool can automatically provision the necessary infrastructure and configure it according to predefined templates. This approach ensures that infrastructure is consistent and up to date, reducing the risk of configuration drift or errors due to manual processes.

Chapter 4: DevOps Best Practices and Case Studies
--------------------------------------------------

##### Chapter Contents

- *DevOps Best Practices*
- *Real-World DevOps Case Studies*

### 4.1. DevOps Best Practices

1.
2.
3.
4.
5.
6.

### 4.2. Real-World DevOps Case Studies

1.
2.
3.
4.

Additional Resources
--------------------

**Introduction to DevOps**

-

**DevOps Principles and Practices**

-
-

**DevOps Tools: Version Control and Continuous Integration**

-
-
-

**DevOps Tools: Configuration Management and Containerization**

-
-

**DevOps Tools: Infrastructure as Code and Cloud Platforms**

-
-
-

**DevOps: Best Practices and Case Studies**

-
