DevSecOps and Common Software Weaknesses
Document Details
IU Internationale Hochschule
2022
Augustine Imonlaime, PROF. DR. STEPHAN SPITZ
Summary
This document is a course book on DevSecOps and common software vulnerabilities. It covers topics such as Agile development, IT operations, security integration, and common software weaknesses. The book is published by IU Internationale Hochschule GmbH.
DEVSECOPS AND COMMON SOFTWARE WEAKNESSES
DLBCSEDCSW01_E

MASTHEAD
Publisher: IU Internationale Hochschule GmbH
IU International University of Applied Sciences
Juri-Gagarin-Ring 152
D-99084 Erfurt
Mailing address: Albert-Proeller-Straße 15-19, D-86675 Buchdorf
[email protected]
www.iu.de
Version No.: 001-2022-0930
Augustine Imonlaime
© 2022 IU Internationale Hochschule GmbH
This course book is protected by copyright. All rights reserved. This course book may not be reproduced and/or electronically edited, duplicated, or distributed in any form without written permission from IU Internationale Hochschule GmbH. The authors/publishers have identified the authors and sources of all graphics to the best of their abilities. However, if any erroneous information has been provided, please notify us accordingly.

PROF. DR. STEPHAN SPITZ
Mr. Spitz is a professor of cyber security at IU International University. He studied electrical engineering and information technology and earned his doctorate in applied cryptology at the Technical University of Munich (Germany). He went on to teach applied IT security for a decade and co-wrote a textbook on the subject. He is the author of numerous scientific publications and has been awarded more than 50 patents. Mr. Spitz has worked in the field of information security for 30 years in various positions, including research and development of new security systems, security and management consulting, and strategic management of international corporations. He was involved in the founding of the Trustonic joint venture and advises technology companies in development, such as SecureThingz Ltd. and Rubean AG. In addition to serving on standards bodies such as ETSI, ISO/IEC, the IoT Security Foundation, and Global Platform, he is also an advisor to ENISA (the European Union Agency for Cybersecurity).
TABLE OF CONTENTS

Module Director

Introduction
Signposts Throughout the Course Book
Basic Reading
Further Reading
Learning Objectives

Unit 1 DevSecOps – Integrating Agile Software Development, IT Operations, and Security
1.1 The Benefit of DevOps and the Need for DevSecOps
1.2 Agile Software Development: Values, Principles, and Methods
1.3 IT Operations and Service Improvement
1.4 Resolving the Core Chronic Conflict
1.5 DevOps Ways and Ideals
1.6 DevSecOps – Ensuring Security in a DevOps World

Unit 2 DevOps Capabilities, Accelerators, and Enablers
2.1 The Science of DevOps – Key Capabilities to Accelerate
2.2 Organizational Culture, Collaboration, and Experimentation
2.3 VCS and Automated Testing
2.4 CI/CD Pipeline
2.5 Infrastructure as Code, Containers, and Cloud Computing
2.6 Supporting Tools and Architecture

Unit 3 Fundamentals of Software Systems Security
3.1 CIA and Non-Repudiation
3.2 Important Concepts and Terminology
3.3 Compliance, Regulation, and Security Standards
3.4 Defense in Depth
3.5 Common Misconceptions

Unit 4 Common Software Weaknesses – Bugs and Flaws
4.1 Bugs, Flaws, and Taxonomies
4.2 OWASP
4.3 MITRE
4.4 GitHub Advisory Database and CodeQL
4.5 Application Security Education and Tooling

Unit 5 DevSecOps – Shifting/Pushing Left to Secure the SDLC
5.1 Securing DevOps Culture
5.2 Security Requirements
5.3 Secure Design
5.4 Secure Code
5.5 Security Testing

Unit 6 DevSecOps – Securing the CI/CD Pipeline
6.1 Securing the Continuous Delivery Pipeline
6.2 Supply Chain Security
6.3 Security in the Pipeline (SIP)
6.4 Security of the Pipeline (SOP)
6.5 Security Around the Pipeline (SAP)

Unit 7 Continuous Security
7.1 Monitoring, Observability, and Incident Response
7.2 Vulnerability Disclosures and the MITRE CVE Process
7.3 Continuous Patch Management
7.4 Running Bug Bounty Programs
7.5 Cloud Security – Shared Responsibilities

Appendix
List of References
List of Tables and Figures

INTRODUCTION
WELCOME

SIGNPOSTS THROUGHOUT THE COURSE BOOK
This course book contains the core content for this course. Additional learning materials can be found on the learning platform, but this course book should form the basis for your learning.
The content of this course book is divided into units, which are divided further into sections. Each section contains only one new key concept to allow you to quickly and efficiently add new learning material to your existing knowledge.
At the end of each section of the digital course book, you will find self-check questions. These questions are designed to help you check whether you have understood the concepts in each section.
For all modules with a final exam, you must complete the knowledge tests on the learning platform. You will pass the knowledge test for each unit when you answer at least 80% of the questions correctly.
When you have passed the knowledge tests for all the units, the course is considered finished and you will be able to register for the final assessment. Please ensure that you complete the evaluation prior to registering for the assessment.
Good luck!

BASIC READING
Forsgren, N., Humble, J., & Kim, G. (2018). Accelerate: The science behind DevOps: Building and scaling high performing technology organizations (Illustrated ed.). IT Revolution Press. http://search.ebscohost.com.pxz.iubh.de:8080/login.aspx?direct=true&db=cat05114a&AN=ihb.46721&site=eds-live&scope=site
Kim, G., Humble, J., Debois, P., Willis, J., & Forsgren, N. (2021). The DevOps handbook: How to create world-class agility, reliability, & security in technology organizations (2nd ed.). IT Revolution Press.
http://search.ebscohost.com.pxz.iubh.de:8080/login.aspx?direct=true&db=cat05114a&AN=ihb.48152&site=eds-live&scope=site
Toesland, F. (2019). The rise of DevSecOps. Computer Weekly. http://search.ebscohost.com.pxz.iubh.de:8080/login.aspx?direct=true&db=bsu&AN=135096575&site=eds-live&scope=site

FURTHER READING
UNIT 1
Layton, M. C. (2020). Agile project management for dummies (3rd ed.). John Wiley & Sons. Chapter 2. http://search.ebscohost.com.pxz.iubh.de:8080/login.aspx?direct=true&db=cat05114a&AN=ihb.51865&site=eds-live&scope=site
Ajam, M. A. (2018). Project management beyond Waterfall and Agile. CRC Press. Chapters 6–7. http://search.ebscohost.com.pxz.iubh.de:8080/login.aspx?direct=true&db=cat05114a&AN=ihb.51862&site=eds-live&scope=site

UNIT 2
Coupland, M. (2021). DevOps adoption strategies (1st ed.). Packt Publishing. Chapters 7–8. http://search.ebscohost.com.pxz.iubh.de:8080/login.aspx?direct=true&db=cat05114a&AN=ihb.51860&site=eds-live&scope=site
Hüttermann, M. (2012). DevOps for developers (2nd ed.). Apress. Chapter 9. http://search.ebscohost.com.pxz.iubh.de:8080/login.aspx?direct=true&db=cat05114a&AN=ihb.51864&site=eds-live&scope=site

UNIT 3
Kim, D., & Solomon, M. G. (2016). Fundamentals of information systems security. Jones & Bartlett. Chapters 3, 9. http://search.ebscohost.com.pxz.iubh.de:8080/login.aspx?direct=true&db=cat05114a&AN=ihb.51863&site=eds-live&scope=site
Bell, L., Brunton-Spall, M., & Smith, R. (2017). Agile application security. O'Reilly Media. Chapter 6. http://search.ebscohost.com.pxz.iubh.de:8080/login.aspx?direct=true&db=cat05114a&AN=ihb.51806&site=eds-live&scope=site

UNIT 4
Rice, L. (2020). Container security: Fundamental technology concepts that protect containerized applications. O'Reilly Media. Chapter 14. http://search.ebscohost.com.pxz.iubh.de:8080/login.aspx?direct=true&db=cat05114a&AN=ihb.51764&site=eds-live&scope=site

UNIT 5
Hüttermann, M. (2012). DevOps for developers (2nd ed.). Apress.
Chapter 9. http://search.ebscohost.com.pxz.iubh.de:8080/login.aspx?direct=true&db=cat05114a&AN=ihb.51864&site=eds-live&scope=site

UNIT 6
Mulder, J. (2021). Enterprise DevOps for architects. Packt Publishing. Section 3. http://search.ebscohost.com.pxz.iubh.de:8080/login.aspx?direct=true&db=cat05114a&AN=ihb.51861&site=eds-live&scope=site

UNIT 7
McKinsey Digital. (2021, July 22). Security as code: The best (and maybe only) path to securing cloud applications and systems. Available online.

LEARNING OBJECTIVES
The idea of DevOps was developed to break down the silos between the development team and the operations team in large companies. The end of these silos gave rise to the concept of continuous software development, which entails continuous development, delivery, and deployment, along with microservices, within the agile management model.
Furthermore, the world we live in is constantly propelled by information technology (IT), and there is an increasing dependence on software in our daily lives and businesses. This software is delivered via the internet as a server-side application or mobile technology. As dependence on the internet becomes more prevalent, there is an increasing need for fast and regular delivery cycles for software distribution. This makes understanding DevSecOps and common software weaknesses critical.
Because of its increasing popularity, there is a growing security concern regarding DevOps implementation. Efforts to secure the DevOps pipeline have largely followed the waterfall model, treating security primarily as a long-term, up-front plan. Additionally, such traditional testing methods are often not quick enough to match the rapid production of software. Some software developers and IT professionals work side-by-side with security organizations to ensure that security is given appropriate attention and to reduce the likelihood of a security incident.
UNIT 1
DEVSECOPS – INTEGRATING AGILE SOFTWARE DEVELOPMENT, IT OPERATIONS, AND SECURITY

STUDY GOALS
On completion of this unit, you will be able to...
– explain the stages of the software development lifecycle using the waterfall model as a foundation.
– understand Agile software development methodology.
– evaluate the role of operations in the software development lifecycle.
– discuss the security challenges associated with the development and operations of software and the role of DevOps.
– describe how DevSecOps mitigates security challenges in the DevOps cycle.

1. DEVSECOPS – INTEGRATING AGILE SOFTWARE DEVELOPMENT, IT OPERATIONS, AND SECURITY

Introduction
DevSecOps stands for development, security, and operations, and its main purpose is to automate the integration of security at each stage of the software development lifecycle, from basic design through integration, testing, deployment, and, finally, software delivery. DevSecOps is fundamental to how development organizations handle security. Before its advent, security used to be added to software at the end of the development cycle, almost as an afterthought. It was evaluated by a quality assurance (QA) team and handled by a separate security team. Because software changes were delivered only once or twice a year, this approach was workable. Nevertheless, when software engineers adopted Agile and DevOps methodologies in an attempt to cut software development cycles to weeks or even days, the conventional tacked-on approach to software security became unworkable.
Agile and DevOps methods and technologies integrate easily into DevSecOps, providing application and infrastructure security. DevSecOps addresses security concerns as they arise, when fixing them is simpler, quicker, and less costly (and before they are put into production).
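The "fix it as it arises" idea is easiest to see in an automated security check that runs on every build. The following minimal Python sketch is illustrative only; the validate_username helper and its validation rules are invented for this example, not taken from the course book. It shows how a security requirement on user input can be expressed as an ordinary unit test, so every code change is re-checked long before production:

```python
import re

# Hypothetical input-validation helper; the name and allowed pattern
# are assumptions made for illustration.
USERNAME_RE = re.compile(r"^[A-Za-z0-9_]{3,32}$")

def validate_username(name: str) -> bool:
    """Accept only 3-32 alphanumeric/underscore characters, rejecting
    injection-prone input such as quotes, semicolons, or path separators."""
    return bool(USERNAME_RE.fullmatch(name))

def test_validate_username() -> None:
    # A security requirement expressed as an automated test: it runs on
    # every pipeline build, so a regression is caught before release.
    assert validate_username("alice_01")
    for hostile in ("bob'; DROP TABLE users;--", "../etc/passwd", ""):
        assert not validate_username(hostile)

test_validate_username()
```

In a DevSecOps pipeline, a test like this lives next to the application code and runs automatically on every commit; a change that loosens the validation fails the build instead of reaching production.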
DevSecOps also shifts the burden of application and infrastructure security from a security silo to a shared duty across development, security, and information technology (IT) operations teams. By automating the supply of secure software without delaying the software development cycle, it makes "software, safer, sooner" possible. This is the DevSecOps philosophy.

1.1 The Benefit of DevOps and the Need for DevSecOps
The rapidly evolving competitive environment, changing security requirements, and demands for performance scalability have become challenges for today's enterprises. Organizations must quickly and reliably deliver and operate software to meet this demand. Also, to remain competitive, enterprises need to create a conduit between rapid feature development and operational stability.
Furthermore, rapid innovation is the fundamental differentiator for organizations competing in this digital economy. This innovation contest brought about DevOps, the main essence of which is to boost collaboration between application developers and the operations team, who rarely work together seamlessly. Although the main reason for the creation of DevOps, aside from enhancing development and operations collaboration, is to reduce the development lifecycle of systems and ship updates and features in parallel with the business's objectives, the fast improvement of digital technology has stretched it to its limit. With the emergence of contemporary trends such as the Internet of Things, cloud computing, and Agile methodology, DevOps has been adopted as a model for implementation.
However, an upsurge of security risks has been an unexpected consequence. Since corporate business executives have adopted technology-driven business models, there has been an increase in the misalignment of cybersecurity programs within the business.
The Digital Trust Survey reports that about 53 percent of businesses executing digital transformation projects incorporate an extensive risk management procedure from inception (PwC, 2021). Meanwhile, billions of dollars have been lost to cybercriminals as they maintain a slight edge over cybersecurity professionals. Compliance, governance, and security lag behind innovation and development. Furthermore, many security and compliance monitoring tools used to mitigate these challenges were not designed to function at the pace DevOps requires (Null, 2019).
In 2018, Sonatype's DevSecOps Community Survey found that there is only one security expert for every 100 developers in an organization (Accelerate Report, 2019). This disparity makes it difficult for a security team to match the velocity at which DevOps teams operate. Similarly, in the same year, DORA's Accelerate: State of DevOps report estimated that there is just one information security person for every 10 infrastructure people per 100 developers in large organizations (Accelerate Report, 2019). It also stated that, in most companies, security is thought about only after the end of a delivery cycle. Furthermore, the report rightly deduced that security at that phase can be expensive for the business and stressful for professionals. Finally, all discoveries in DevOps research point in one direction: only a handful of organizations make security a top priority, and to avoid future disasters, implementing security within the DevOps cycle is more imperative now than ever.
To create secure applications, the implementation of DevSecOps is rapidly gaining traction. DevSecOps converges DevOps, software, and security engineers to cooperate from the early stages of development and deployment, thereby eliminating or reducing delays.
According to the Department of Defense of the United States, the primary goal of DevSecOps is to automate, monitor, and embed security at all stages of the software lifecycle. Functional, integration, and security testing begin at an early stage of the development process and are automated (Department of Defense, 2021). Embedding security from the beginning can result in an IT infrastructure with flexible, streamlined processes and a highly proactive, robust defense against cyber threats.

1.2 Agile Software Development: Values, Principles, and Methods
Software development is a systematized procedure to deliver applications quicker, better, and less expensively (Ruparelia, 2010). In recent years, researchers have studied and made suggestions on ways to improve the development process of applications (Ruparelia, 2010). This interest has led to the creation of a new application development method known as Agile software development. The introduction of this new method helped organizations meet quickly evolving business needs that were difficult to achieve using traditional methods. Agile's primary focus is to develop solutions quickly and efficiently while emphasizing customer satisfaction, structuring the software development process in iterations. Agile software development is unlike the traditional approach because it pays less attention to rigid, plan-centered control and instead focuses on managing change during the development lifecycle. The fundamental principles of Agile software development rely on existing theories and principles of software engineering, information systems, and project management.

Software Development Lifecycle (SDLC)
Software development is a multi-step process for developing and delivering a complex product. That is one thing that all the diverse approaches have in common: software, like all products, begins as an idea. Depending on the method used, the concept becomes a document or a prototype.
The item developed in one phase becomes the input to the following process: a document, diagram, or working software. The software is eventually supplied to the customer. The software development lifecycle (SDLC) refers to the sequence of actions followed by various methodologies.

Planning
Project and product management are parts of the planning process. Examples include the allocation of resources (both human and material), requirements analysis, capacity planning, project planning, cost estimation, and provisioning. Project plans, timetables, cost estimates, and procurement needs are among the outputs of the planning phase. To ensure that all viewpoints are reflected, project managers and development employees should cooperate with operations and security teams.

Requirements engineering
The business must communicate its new development and enhancement requirements to IT teams. This information is gathered from business stakeholders and subject matter experts (SMEs) during the requirements process. Architects, development teams, and product managers collaborate with SMEs to document business processes that require software automation. For example, in a Waterfall project, the outcome of this phase is usually a document that summarizes these criteria. On the other hand, Agile procedures may result in a backlog of activities to be completed.

Designing and prototyping
Software architects and developers start designing the application after the requirements are known. For application architecture and software development, the design process follows well-established patterns. Architects can construct an application from existing components using an architecture framework like The Open Group Architecture Framework (TOGAF), which promotes reuse and standardization. To tackle algorithmic challenges consistently, developers employ proven design patterns.
This phase may also include quick prototyping, sometimes known as spiking, to compare options and determine the best match. This phase's output includes the patterns and components chosen for the project, which are listed in design documents.

Implementation
This stage results in the development of the software. This phase may be completed in time-boxed "sprints" (Agile) or as a single block of effort (Waterfall), depending on the technique. Development teams should deliver working software as rapidly as possible, regardless of approach. Regular engagement with business stakeholders is necessary to satisfy their expectations. This phase produces testable, functional software.

Testing
The SDLC's testing phase is among the most crucial. Without testing, it is impossible to deliver high-quality software. Quality is guaranteed by performing a wide range of tests, including the following:
– code integrity
– unit testing (functional tests)
– integration testing
– performance evaluation
– security testing
The V-model methodology is divided into two phases: verification and validation. During the validation step called unit testing, unit tests written during the module design phase are conducted on the code. This process involves testing at the code level and helps eliminate flaws at an early stage; however, it cannot detect all defects. Integration testing is related to the process of architectural design. Integration tests are conducted to examine the coexistence and communication of the system's internal elements. System testing is intimately related to the phase of system design. System tests examine the complete system's functionality and its ability to communicate with external systems. During the execution of this system test, most software and hardware compatibility issues can be identified. Acceptance testing is related to the phase of requirements analysis and involves testing the product in the user environment.
Acceptance tests expose incompatibility concerns with other systems accessible in the user environment. Additionally, acceptance testing identifies non-functional concerns, such as load and performance difficulties, in the real user environment. Finally, automating tests is the most reliable way to ensure that they are run regularly and are never omitted for convenience. Continuous integration tools, such as Codeship, can be used to automate tests. The testing step produces working software that can be deployed in a production environment and verifies that the results match the requirements.

Deployment
The deployment procedure should be as automated as possible. This phase is almost unnoticeable in high-maturity enterprises; software is deployed as soon as it is ready. Manual approvals are required in enterprises with lower maturity or in some heavily regulated industries. Even in those circumstances, it is better if the deployment is fully automated and done in a continuous deployment style. Medium and big businesses employ application release automation (ARA) technologies to automate app deployment to production environments. ARA systems are commonly integrated with continuous integration tools. Working software is released to production at the end of this phase.

Operation and maintenance
The operation and maintenance phase is, in a sense, the "end of the beginning." It is not the end of the SDLC. For operations to be maintained at an optimal level, software must be monitored frequently. Bugs and faults detected in production must be reported and addressed, which frequently requires additional labor. Bug fixes may not go through the complete cycle, but at least a simplified version is required to confirm that the patch does not break existing functionality (a failure known as a regression).

Agile Manifesto: Values and Principles
The Agile Manifesto outlines the principles and core values for the development of software.
It was set up in 2001 by 17 professionals, led by Kent Beck, each of whom was practicing some form of Agile (Beck et al., 2022). The Agile Manifesto has four core values and 12 principles (Agile Alliance, 2022).

The core values of the Agile Manifesto
There are four core values of the Agile Manifesto, as described below:
1. Individuals and interactions over processes and tools: This value emphasizes communication and teamwork. The development of software is a human activity; therefore, the interaction between individuals is essential. While tools are a key aspect of software development, making excellent software requires teamwork, regardless of the tools used.
2. Working software over comprehensive documentation: Documentation is crucial because it provides a reference for both users and other team members. Nevertheless, software development aims to create software that benefits the business rather than extensive documentation.
3. Customer collaboration over contract negotiation: It is essential to have frequent communication between the development team and the customer. The needs of all stakeholders will be understood when teams listen actively and get constant feedback.
4. Responding to change over following a plan: In software development, change is constant, and this reality should be reflected in software processes. The project plan should be malleable enough to accommodate change when needed.

The 12 principles of the Agile Manifesto
The 12 principles of the Agile Manifesto are highlighted below.
1. The highest priority is customer satisfaction via early and continuous delivery of valuable software.
2. Changing requirements are welcomed, even late in development. The Agile process uses change to give the customer a competitive edge.
3. Frequent delivery of working software is essential, from a few weeks to a few months, with a preference for shorter timescales.
4. Business people and developers must work together throughout the project.
5.
The project should be built around motivated people. An excellent environment should be provided, along with the support needed to get the job done.
6. Face-to-face conversation is the most effective medium of communication with the development team.
7. The primary measure of progress is working software.
8. The Agile process promotes sustainable development. Users, sponsors, and developers should maintain a constant pace.
9. Continuous attention should be given to excellent design and technical excellence to enhance agility.
10. Simplicity is essential.
11. Self-organizing teams produce the best architectures, designs, and requirements.
12. Teams reflect at regular intervals on becoming more effective and adjust their behavior accordingly.

Scrum
Takeuchi and Nonaka (1986) first mentioned the term Scrum while presenting a quick, self-organizing, and adaptive product development approach. This approach was developed for the management of systems development processes. Scrum applies the theory of industrial process control to systems development, resulting in a flexible, adaptive, and productive approach (Rubin, 2012). There are no precise software development techniques prescribed for the implementation of applications. The focus of Scrum is on the function of team members so that the system remains flexible in an environment that constantly changes (Rubin, 2012). The fundamental principle of Scrum is that the development of systems encompasses several variables, both technical and environmental, such as time, customer requirements, resources, and technology, which can change throughout the process (Rubin, 2012). This results in the unpredictability of the development process; therefore, a flexible system development procedure is required to respond to these changes.

Figure 1: Scrum Practices
Source: Augustine Imonlaime (2022).
Scrum roles
Scrum development efforts consist of one or more Scrum teams, each containing three Scrum roles: Scrum master, product owner, and development team members. Of course, additional roles are possible depending on the team, but the Scrum framework requires only the three roles listed above. Stakeholders should also be kept in mind.
The product owner is the focal point of product leadership. Product owners are responsible for deciding what features and functionality should be built and in what order. They give the team a clear visualization of what the product should look like. Therefore, they are responsible for the complete success of the product. The product owner ensures that the most valuable work is completed. To achieve this, they collaborate actively with both the development team and the Scrum master.
The responsibility of the Scrum master is to assist everyone involved in the project in understanding and embracing the practices, principles, and values of Scrum. They assume the position of coach by providing leadership to the Scrum team and the organization towards developing a Scrum approach that is high-performance and specific to the organization. The Scrum master also assists the organization through the complicated change management process associated with the adoption of Scrum. It is also the responsibility of the Scrum master to protect the team from external distractions and interference and to take the lead in removing impediments to the team's productivity. Finally, it is not the duty of the Scrum master to act as a development or project manager or to exercise control over the team; their duty is to lead the team.
In traditional software development, various job roles exist, such as architect, programmer, tester, user interface designer, and database administrator. The development team's role in Scrum is diverse and cross-functional.
The team consists of a group of individuals whose responsibility is to design, build, and test the product. The development team is self-organizing and devises the best approach to meet the goals set by the product owner. The size of the development team is typically between five and nine individuals who have the skills required to produce an application of high quality. Scrum can, however, be used in developments that involve a larger team; the large team is broken down into smaller teams, each with a development team of about five to nine members or fewer.

Extreme Programming (XP)
The purpose of the extreme programming (XP) approach to Agile methodology is to produce software of high quality and to improve the quality of life of the development team. XP deals with the engineering practices of software development. According to Don Wells, extreme programming is appropriate for use on a project that meets the characteristics below (Wells, 2009):
– The requirements of the software change dynamically.
– The development team is small, extended, and co-located.
– There is risk associated with timeboxed projects that use innovative technology.
– The technology in use accommodates automated functional and unit tests.

Extreme Programming Values
Extreme programming consists of five values, which are explained in detail below.

Communication
The development of software involves a team rather than a single individual, and it depends on adequate communication to aid the transfer of knowledge from one team member to another. Therefore, XP stresses the importance of the appropriate kind of knowledge-sharing, such as face-to-face communication with the help of whiteboards or other mechanisms for drawing.

Simplicity
This refers to achieving a goal using the most straightforward approach. It aims to eliminate waste by doing only what is necessary and making the design as simple as possible so that it is easy to support, maintain, and revise.
Feedback

Areas of improvement can be identified by constantly receiving feedback on previous efforts. This can assist the team in quickly identifying areas that need improvement and revising their practices. In addition, the team can get feedback on a design and implementation and adjust the product in the future, aiding straightforward design.

Courage

According to Beck (2004), courage can be defined as effective action when faced with anxieties or fears. In practice, this means preferring action based on the other principles so that harmful results for the team are avoided. Courage is needed to point out organizational issues affecting the team's effectiveness and to accept and act on feedback.

Respect

There is a need for mutual respect among members of the team to foster communication, provide and accept feedback, and work in unison to identify designs and solutions that are simple.

Extreme Programming Practices

The following sections discuss fundamental aspects of XP. Even though it is possible to perform these practices individually, several teams perform some practices in conjunction with others to eliminate risks associated with software development.

The planning game

This entails choosing the right strategy for releasing plans for the project and for meetings between the customers and the developers. XP practice also focuses on strategies to improve communication among stakeholders. Release plans and the dates for meetings and project reviews are established, and discussions centered around the work progression are conducted. Stakeholders meet and deliberate on open queries to ensure that they have been resolved and that the developers and the customers are on the same page regarding the system and the flow of the project (Beck, 2004).

Small releases

The whole project consists of several functions, and when each function is completed, the version is released to the customer by the development team.
The feedback from the customer is quick because of the shortened release cycle of XP. This also helps reduce the risks and errors associated with the release of each version, thereby assuring better accountability and efficiency. When each function is released, it is integrated into the previously released functions; therefore, continuous integration is executed (Beck, 2004).

Metaphor

This is a description of the look and feel of the program. This document explains how the system works and expresses the evolving vision of the project that defines the scope and purpose of the system. The metaphor is derived from the principles and standards of the project architecture and requirements.

Testing

One of the major emphases of XP is regular testing. Testing is done to ensure that code is error-free. Unit tests are carried out on each function to guarantee the safety and efficiency of every piece of code written for that function before progressing to the following function. Acceptance testing is performed to ascertain that user requirements are understood and met by the developers. Also, integration testing is carried out to ensure that each function interacts with the others and is perfectly fused to form one unit. The developers and testers work together to achieve a fault-free system by initially running small pieces of code and eventually the entire system.

Refactoring

This involves systematically improving code without creating new functionality to ensure that the code is clean and the design is simple. It ensures that duplicate code is removed and that the code is easy to understand. XP does not encourage complex coding practices; long methods and redundant classes are also avoided (Beck, 2004). This stage involves restructuring the internal structure of the software without causing an observable behavioral change in the software.

Pair programming

Pair programming involves two programmers working on a single task at one workstation. The programmer on the keyboard is referred to as the driver, while the other programmer, who focuses on the task's overall direction, is called the navigator; the programmers are expected to swap roles every few minutes (Beck, 2004). In XP, pair programming is used to implement user stories, for example.

Collective ownership

Collective ownership gives everyone the opportunity to contribute fresh ideas to every segment of the project. Changes can be made by any developer and on any line of code, bugs can be fixed, and improvements can be made to designs.

40-hour week

XP projects require developers to be fast and efficient and to maintain the quality of the product. Therefore, team members should not be stretched to their limits; they should be able to have a healthy work-life balance. In XP, the ideal number of working hours is 40 and must not exceed 45 hours each week. Overtime is only acceptable if there was none in the previous week.

On-site customer

In XP, the end customer participates fully in the development process. Therefore, the customer needs to be available to answer any doubts the team may have, set priorities, and resolve disputes when necessary (Beck, 2004).

Coding standards

There should be a standard set of coding practices for the team, and the same format and writing style should be used. When standards are applied, the team can easily read, share, and refactor code, track input from team members, and learn from each other faster. In addition, when code is written following the same set of rules, collective ownership is encouraged.

Kanban Method

The Kanban approach recommends managing the workflow with continuous improvement and without placing too much burden on the development team, focusing on efficiency and productivity. Kanban was created initially as a just in time (JIT) manufacturing process (Institute for Manufacturing, 2016).
With JIT manufacturing, products are produced when needed, rather than in excess or ahead of schedule. JIT production aims to eliminate waste connected to excess inventory, waiting, and overproduction – three of the seven waste categories listed in the Toyota Production System (known in North America as the lean production model).

The next phase of Kanban was the introduction of new principles and practices to make it more efficient for knowledge workers (Anderson, 2010). The Kanban method is implemented by businesses to assist teams in optimizing their workflow and utilizing their full potential. This strategy endeavors to improve the business's return on investment by reducing waiting inventory and eliminating any cost associated with it. At its most basic level, each prioritized task (or card) on a Kanban board moves through a visualization of the team's workflow. Every main activity in the team's workflow is arranged as a visual column on the Kanban board, which usually begins with the definition of the task and ends with the delivery to the customer. Of course, being Agile, these cards and activities are visible to all participants, including the customer. While the Kanban workflow can become complex, there are four distinct stages of progression of a task in a basic workflow visualization.

1. Backlog: These are tasks that are waiting to be worked on.
2. In progress: These are functions that a team member is presently developing.
3. Testing: The function is undergoing testing; these tests are integration, system, or user acceptance tests (UAT).
4. Done: The task is completed and ready to be demonstrated or deployed at this stage.
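The four-stage board above can be expressed as a small data structure. The sketch below is illustrative only: the stage names follow the text, while the task names and the work-in-progress (WIP) limit values are invented for the example. Limiting the number of cards per column is a common Kanban practice for surfacing bottlenecks.

```python
# Minimal sketch of a Kanban board with the four basic workflow stages.
# Stage names follow the text; tasks and WIP limit values are illustrative.

class KanbanBoard:
    def __init__(self, wip_limits):
        # Columns ordered from task definition to delivery.
        self.stages = ["Backlog", "In progress", "Testing", "Done"]
        self.columns = {stage: [] for stage in self.stages}
        self.wip_limits = wip_limits  # e.g., {"In progress": 2}

    def add_task(self, task):
        self.columns["Backlog"].append(task)

    def move(self, task, to_stage):
        """Move a task into a stage, enforcing that stage's WIP limit."""
        limit = self.wip_limits.get(to_stage)
        if limit is not None and len(self.columns[to_stage]) >= limit:
            raise RuntimeError(f"WIP limit reached for '{to_stage}'")
        for tasks in self.columns.values():
            if task in tasks:
                tasks.remove(task)
        self.columns[to_stage].append(task)

board = KanbanBoard(wip_limits={"In progress": 2})
for t in ["login page", "search API", "billing fix"]:
    board.add_task(t)
board.move("login page", "In progress")
board.move("search API", "In progress")
# Moving a third task into "In progress" would exceed the limit and
# immediately surface the bottleneck instead of hiding it.
```

The point of the WIP limit is that the board refuses new work in a congested stage, which makes bottlenecks visible rather than letting cards pile up silently.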
Each workflow stage has a limit, called the work in progress (WIP) limit, on the number of presently active tasks, in order to identify and control bottlenecks and process limitations. This ensures regular monitoring and measurement of the flow of the work. There are six major elements of Kanban:

1. Visualize
2. Limit work in progress (WIP)
3. Manage workflow (eliminate or reduce any bottleneck)
4. Ensure explicit policies
5. Create a feedback loop
6. Collaboratively improve

1.3 IT Operations and Service Improvement

The emergence of DevOps does not diminish the importance of IT operations. It is expedient to perceive the Dev and Ops teams as separate entities. The Information Technology Infrastructure Library (ITIL) describes the distinctive features and nature of IT operations. ITIL is founded on the idea of "services" and the responsibilities of operations to support the design of services with their implementation, operation, and improvement in line with the framework.

Figure 2: The ITIL Lifecycle
Source: Augustine Imonlaime (2022).

The steps of the ITIL Lifecycle for services are depicted in the above diagram. Service strategy, service design, service transition, service operation, and continual service improvement are all stages in the ITIL Lifecycle for services. Service strategy is at the heart of the ITIL Lifecycle, as seen in the figure above. The service design, service transition, and service operation steps form a logical sequence to transform an idea into a fully functional service for clients. All of them are connected to the service strategy stage, since any decisions made in these three phases must align with the service's strategic goals. The ITIL Lifecycle model's continual service improvement stage encompasses all preceding stages. This implies that continuous improvement is necessary throughout the service lifespan, not only at the end after the service operation stage.
Service managers should aim to improve their service procedures from the service strategy stage through to the service operation stage.

Responsibilities and Services of IT Operations

The services of IT operations may include providing hardware and software or providing support for several IT functions. These services may also encompass service level agreement (SLA) specification and monitoring, information security, capacity planning, disaster recovery, and business continuity. The IT operations team consists of the IT manager, who oversees the team, and a mix of other cross-functional team members, including systems and network engineers and network security professionals, working to manage and maintain the IT infrastructure. As identified by the Disciplined Agile framework, there are six classifications of tasks for IT operations, each with associated activities linked to a strategic objective.

Evolve infrastructure

IT operations must innovate and evolve the IT infrastructure to meet the ever-changing demands of their organizations. These activities include identifying the impact of change, applying patches for software, and driving service performance by introducing more efficient hardware and software applications.

Run solutions

The primary reason for the existence of IT operations is to ensure that all solutions are running. In addition, it is the operations team's responsibility to perform back-ups, restore systems after an outage or an update, tune servers and other configuration items for performance, and promote the effective delivery of IT services by allocating resources where they are needed most.

Manage infrastructure

Preventing a lapse in the IT infrastructure is another key reason why an operations team is set up. IT infrastructure consists of network and computing hardware coupled with the applications they process.
The infrastructure managed includes on-premises, hybrid cloud, and cloud-deployed applications; network security management; management of facilities; and components of hardware infrastructure.

Configuration management

The IT operations team must document hardware configurations; this documentation should also include solution dependencies. Reconfiguration should be done where necessary to optimize the performance of services and IT infrastructure.

Disaster recovery

The IT operations team is responsible for the business's disaster recovery plan. The team protects the organization from significant downtime and revenue loss from unexpected disasters by planning, simulating, and practicing disaster recovery situations.

IT operations governance

IT operations monitors and measures the performance of the IT infrastructure and the organization's security posture. They also develop metrics for operations to assist in evaluating the performance of significant processes and services, the management of licenses, and software compliance. Finally, they conduct audits on the infrastructure to ascertain whether security and operational targets are achieved.

Governance, Risk, and Compliance (GRC)

The Open Compliance and Ethics Group (OCEG) refers to governance, risk, and compliance (GRC) as a well-planned and integrated array of all the capabilities needed to ensure principled performance at all organizational levels. The extent of these capabilities includes

- the duties performed by internal audit, risk, legal, HR, IT, finance, and compliance;
- the tasks performed by the line of business, the board, and the executive suite; and
- the tasks assigned to third parties involving external stakeholders.

The term principled performance can be seen as an approach or a point of view toward business that assists organizations in achieving their objectives reliably while paying attention to uncertainty and acting with integrity (OCEG, 2022).
ITIL 4 explains the constituent elements of GRC in specific terms, as outlined in the following sections.

Governance

The International Organization for Standardization and the International Electrotechnical Commission (ISO/IEC) 38500:2015 standard for IT governance describes it as a system through which the present and future use of IT is controlled and directed (International Organization for Standardization, 2015). Governance enables the effective and prudent management of IT resources, resulting in long-term business success. IT governance is a branch of corporate governance; therefore, both concepts align in several ways. The significance of IT governance is divided into three activities, according to Control Objectives for Information and Related Technologies (COBIT; Kidd, 2019):

1. It evaluates the needs of the stakeholders to ascertain that the objectives are balanced and agreed on. This entails a review of past business performance and future imperatives, as well as the operating model and environment both now and in the future.
2. It steers the organization by prioritizing and making decisions. This usually takes the form of policies, strategies, and established controls.
3. It monitors compliance and performance to ensure the organization moves in the agreed-upon direction. This is achieved through compliance audits and performance reports.

Corporate governance in most organizations is the responsibility of the board of directors. However, specific governance responsibilities are assigned to a specific subset of the organization at the appropriate level. For example, the IT governance team may consist of some members of the board who have in-depth knowledge of IT, or a mix of individuals from both IT and business directly in charge of management, IT, and finance (Kidd, 2019).
Compliance

Compliance refers to an array of digital security requirements and practices that ensure a company's business processes are secure and sensitive data are protected from unauthorized parties. Compliance requirements may also be a legal precondition for specific IT security standards or industries (Steinberg, 2011). Compliance is also the procedure for meeting the requirements of a third party to enable business operations in a niche market (IBM Cloud Education, 2020). The need for compliance revolves around one or more of the following four concepts:

1. Industry regulations
2. Governance policies
3. Security frameworks
4. Contractual terms for clients

Regulatory compliance involves external laws, regulations, and industry standards that apply to a company. Internal or corporate compliance covers rules, regulations, and controls that individual companies set. Integrating external compliance requirements into an internal compliance management program is imperative (IBM Cloud Education, 2020). This internal compliance management program should entail procedures for creating, updating, distributing, and tracking compliance policies, as well as employee training on those policies (IBM Cloud Education, 2020). Several laws outline criteria specific to various industries that a business is compelled to meet. Below are a few examples:

Health Insurance Portability and Accountability Act (HIPAA): This is a law that defines how personal health information should be protected and shared in the United States.
The Sarbanes–Oxley Act of 2002 (SOX): This is a financial regulation applicable in the United States, and it applies to a broad spectrum of industries.
Payment Card Industry Data Security Standard (PCI-DSS): This applies to all organizations involved in the processing of payment cards, to safeguard cardholder data (CHD) and/or sensitive authentication data (SAD) like personal identification numbers (PINs).
The PCI Security Standards Council, which was created by Visa, Mastercard, and other card schemes, has defined 12 categories of requirements.
ISO 27001: This outlines the conditions necessary to create, put into place, maintain, and constantly enhance an information security management system within the context of the enterprise. Additionally, it contains standards for the evaluation and management of information security risks that are specific to the requirements of the company.

Risk management

Risk management is how organizations identify, assess, and control strategic, financial, legal, and security risks. The organization needs to apply resources towards minimizing, monitoring, and controlling the effect of adverse events while maximizing the impact of positive ones in order to reduce risk. In general terms, risk management is a systematic interaction between people, processes, and technology that empowers an organization to establish objectives in line with its values and risks. The purpose of an enterprise risk management program is to optimize risk and secure value while accomplishing the corporate objective. Prioritizing stakeholders' expectations and delivering reliable information is part of the task. A risk management program also applies to identifying information security and cybersecurity risks and threats, including software vulnerabilities, and to implementing plans to mitigate them (IBM Cloud Education, 2020). The program should assess the effectiveness and performance of systems, identify technological and operational shortcomings that could affect the core business, assess legacy technology, and monitor risks associated with infrastructure and the potential failure of computing resources and networks (IBM Cloud Education, 2020).
A risk management program must align with legal, internal, contractual, ethical, and social objectives, as well as monitor regulations associated with any innovative technology.

1.4 Resolving the Core Chronic Conflict

Many organizations find it challenging to deploy production changes within a brief period (i.e., minutes or hours); instead, such changes require weeks or even months. Also, most organizations cannot deploy hundreds or thousands of changes per day to production and instead find it difficult to achieve this even monthly or quarterly. In this environment, competitive advantage requires organizations to shorten their time to market, improve their service levels, and experiment relentlessly. Project stakeholders and customers also put pressure on teams to deliver changes frequently. To achieve this, the core chronic conflict must be resolved within their technology organization. There is an intrinsic conflict between the IT operations and development teams in many organizations, resulting in a downward spiral, slowing the time to market for new products and features, increasing outages, and reducing quality. One factor that leads to this conflict is the competing goals of development and IT operations. Two primary objectives of IT organizations, among many, are to respond to the ever-changing competitive landscape and to provide stable, secure, and reliable services to customers regularly. It is the development team's responsibility to respond to market changes by introducing new features and changes into production as fast as possible. The operations team bears the responsibility of providing the customer with secure, stable, and reliable IT services, which can make it hard to introduce production changes. With this configuration, the development and operations teams have divergent goals and incentives. According to Dr. Eliyahu M. Goldratt, this configuration forms the core chronic conflict (Kim et al., 2018).
This is when the achievement of an organizational or global goal is hindered by siloed teams' organizational measurements and incentives. The product of this conflict is a downward spiral which can prevent the organization from achieving its business outcomes both within and outside the organization.

The need for automation (100:10:1)

According to a study by Sonatype, there has been a 50 percent rise in breaches relating to open-source components in applications since 2017; this emphasizes the need for developers to adopt DevSecOps practices (DeBoer, 2019). In 2018, a report by CA Veracode showed that only 52 percent of developers around the globe update open-source components when there is a new vulnerability. In general, one out of three respondents in the Sonatype study suspected that a breach due to a vulnerability in a web application had occurred over the previous year (DeBoer, 2019). The report also shows that it is imperative to have automated application security testing to tackle cyber security issues and improve the productivity of businesses. For instance, the ratio of developers to security personnel is 100:1, while 48 percent of respondents claimed that they do not have sufficient time to spend on application security. Moreover, the implementation of DevOps is a pathway to the implementation of DevSecOps: those who practice DevOps to maturity are 24 percent more likely to have automated security practices deployed throughout their lifecycle.

DevOps Infinity Loop

The DevOps process involves a delivery pipeline that empowers teams to create and release applications continuously and to improve the software in terms of features and security in a timely fashion (Kelly, 2019). The term DevOps infinity loop is used by some industry professionals to describe the continuous development and integration process (Kelly, 2019). The process has no final or waterfall stage.
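Automated dependency checking of the kind these reports advocate can be sketched very simply: compare each pinned component version against the first version in which a known flaw was fixed. Everything in this sketch is invented for illustration: the component names, versions, and the `ADVISORIES` table are not drawn from any real advisory feed, which a production scanner would query instead.

```python
# Illustrative sketch of an automated open-source dependency audit.
# The advisory data below is fictional; a real DevSecOps pipeline would
# query a vulnerability database rather than a hard-coded dict.

def parse_version(v):
    """Turn '2.3.0' into (2, 3, 0) so versions compare numerically."""
    return tuple(int(part) for part in v.split("."))

# Hypothetical advisories: component name -> first fixed version.
ADVISORIES = {"examplelib": "2.4.1", "demoframework": "1.9.0"}

def audit(dependencies):
    """Return (name, current, first_fixed) for each vulnerable pin."""
    findings = []
    for name, version in dependencies.items():
        fixed = ADVISORIES.get(name)
        if fixed and parse_version(version) < parse_version(fixed):
            findings.append((name, version, fixed))
    return findings

deps = {"examplelib": "2.3.0", "demoframework": "1.9.0"}
for name, current, fixed in audit(deps):
    print(f"{name} {current} is vulnerable; upgrade to {fixed} or later")
```

Run on every build, a check like this turns the "update components when there is a new vulnerability" discipline from a manual chore into an automated gate.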
Figure 3: Key Phases of the DevOps Infinity Loop
Source: Augustine Imonlaime (2022), based on Mukherjee (2021).

Planning

At this stage, the development team, the operations team, and other stakeholders decide on the list of features to be added at each stage of the project, as well as iteration values and criteria. Developers, system administrators, product management, marketing personnel, and technical writers all require a seat at the table to participate in the development plan. Project plans should be kept in a secure, central location, such as Atlassian Jira or Confluence. In addition, every team member should have access to the project plans at any time and from any location.

Code

Some teams separate the coding and building operations, depending on what works best for their business. Developers complete the coding tasks that have been assigned to them during the code phase. To code their allocated jobs, they typically use integrated development environments (IDEs). As they complete their tasks, they check their work into a centralized source code repository, such as GitLab or GitHub, which must serve as the sole source of truth for code. Static code analyses, for example with SonarQube, are occasionally executed on the code while it is being committed. SonarQube is a continuous code quality inspection tool.

Build

The build stage entails retrieving software code from the centralized source using an automated build tool such as Jenkins or Maven. This automated program converts software code into a binary artifact, tests it, and saves it to a centralized shared repository. In most cases, these repositories are set up so that code is automatically built following every commit. Continuous integration (CI) is a DevOps software development technique in which developers regularly merge their source code into a common repository, accompanied by automated builds and tests.
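The commit-triggered build-and-test loop described above can be sketched as a toy pipeline runner. The stage functions here are stand-ins for real tools (a compiler, a test runner) invoked by a CI service; none of this mirrors any particular CI product's API.

```python
# Toy continuous-integration pipeline: run each stage on a commit in
# order and stop at the first failure, so a known fault never moves
# further downstream. Stages are stand-ins for real build/test tools.

def run_pipeline(commit, stages):
    """Run stages in order; return (passed, log_of_stage_results)."""
    log = []
    for name, stage in stages:
        ok = stage(commit)
        log.append((name, "pass" if ok else "fail"))
        if not ok:
            return False, log  # stop: do not deploy a broken build
    return True, log

stages = [
    ("build", lambda c: c["compiles"]),
    ("unit tests", lambda c: c["tests_pass"]),
]

ok, log = run_pipeline({"compiles": True, "tests_pass": True}, stages)
print(ok, log)
```

The essential property is the early exit: a failed build never reaches the test stage, and a failed test never reaches deployment, which is what lets teams merge frequently with confidence.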
Continuous integration refers to the build or integration stage of the software release process, which comprises both an automated component (e.g., a CI or build service) and a cultural component (e.g., learning to integrate frequently). The main goals of continuous integration are to identify and resolve bugs faster, improve software quality, and reduce the time it takes to validate and deploy new software upgrades.

Testing

Continuous testing can be achieved by using technologies like Selenium or JUnit to test several codebases in parallel. An automated testing technique ensures that the functionality of an application is faultless. Automated testing also produces extensive codebase reports. These data can be used by company stakeholders to learn more about the development cycle and product maturity.

Continuous deployment

DevOps eliminates the manual procedure at the end of the development cycle. Instead, every code change is routed through the entire pipeline and deployed to production in real time. An organization can schedule as many deployments per day as necessary, depending on its needs and the pace of its workforce. All deployments are automated in this procedure. Thus, after each build, the output, along with all dependencies, is distributed to specified servers.

Operate

During this step of the DevOps process, IT administrators oversee software in production. Tools such as Ansible, Puppet, PowerShell, Chef, Salt, and Otter provide management and data collection capabilities, as well as operational visibility into production applications.

Continuous monitoring

The development and operations teams must constantly monitor their production apps. Most organizations require access to information regarding the health of the application. Collaboration and communication channels should be set up to notify all teams of any production issues that develop.
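A minimal sketch of such a health check follows: application metrics are evaluated against alert thresholds, and any breach produces a notification for the shared channel. The metric names and threshold values are illustrative assumptions, not taken from any monitoring product.

```python
# Minimal continuous-monitoring sketch: compare application metrics
# against alert thresholds and collect notifications for the teams.
# Metric names and threshold values are illustrative.

THRESHOLDS = {"error_rate": 0.05, "p95_latency_ms": 800}

def check_health(metrics):
    """Return an alert message for every metric above its threshold."""
    alerts = []
    for name, limit in THRESHOLDS.items():
        value = metrics.get(name)
        if value is not None and value > limit:
            alerts.append(f"ALERT: {name}={value} exceeds {limit}")
    return alerts

sample = {"error_rate": 0.12, "p95_latency_ms": 650}
for alert in check_health(sample):
    print(alert)  # in practice, routed to a shared team channel
```

Real monitoring platforms (such as those named below) layer dashboards, history, and alert routing on top of exactly this kind of threshold evaluation.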
Some of the most common monitoring tools for this phase are New Relic, Datadog, Grafana, Wireshark, Splunk, and Nagios.

1.5 DevOps Ways and Ideals

The Three Ways of DevOps

According to Kim et al. (2018) in their book, The Phoenix Project, there are three "ways" of DevOps:

The first way: Flow/systems thinking

The first way stresses overall system performance rather than the performance of a single silo of work or department. This might be as vast as a division (e.g., development or IT operations) or as small as a single contributor (e.g., a developer or system administrator). The emphasis is on all IT-enabled corporate value streams. Alternatively, it begins with the identification of requirements (e.g., by the business or IT), continues through development, and transitions to IT operations, where the value is supplied to the client as a service. Never sending a known fault to downstream work centers, never permitting local optimization to cause global degradation, constantly attempting to boost flow, and always seeking to obtain profound insight into the system are all outcomes of implementing the first way.

The second way: Amplify feedback loops

The second way entails the establishment of right-to-left feedback loops. Every process improvement project aims to shorten and amplify feedback loops so that essential modifications can be implemented on a regular basis. Identifying and responding to all consumers, both internal and external, shortening and amplifying all feedback loops, and embedding knowledge where it is needed are all results of the second way.

The third way: Culture of continual experimentation and learning

The third way is about cultivating a culture that encourages two functions: (a) continuous experimentation, taking chances, and acquiring knowledge from failure, as well as (b) a realization that proficiency requires repetition and practice. Both are essential.
Experimentation and taking risks are what keep us pushing forward, even if it means venturing deeper into the danger zone than ever before. We also require proficiency in the skills that will allow us to retreat from danger if we have gone too far. Allocating time for the improvement of daily work, building rituals that reward the team for taking chances, and injecting faults into the system to increase resilience are all results of the third way.

Five Ideals of DevOps

The Unicorn Project by Kim (2019) highlighted five ideals of DevOps, each of which is summarized below.

Locality and simplicity

There is a need to design systems, and the organizations that construct them, with locality in mind. Everything that is done should be simple. This ideal refers to a development team's ability to make local code modifications in a single area without affecting other teams. Whether in code, structure, or procedures, the last place one should have complexity is internally.

Focus, flow, and joy

The second ideal concerns how team members feel at work daily. Is work characterized by monotony and waiting for others to complete tasks on our behalf? Do team members work blindly on small portions of the whole, only seeing the results of their efforts when everything goes wrong, which results in firefighting, punishment, and burnout? Or do team members operate in small batches, ideally in single-piece flows, with immediate and continuous feedback? These are the conditions that allow teams to focus and flow, learn, discover, master their topic, and even enjoy work.

Improvement of daily work

The third ideal focuses on reducing technical debt while also enhancing architecture. When technical debt is prioritized and paid down and the architecture is continuously enhanced and modernized, teams can work more fluidly. This helps them deliver better value faster, more safely, and more happily. When developers can meet corporate performance requirements, the company gains.
Psychological safety

One of the most important determinants of team effectiveness is psychological safety. Problems can not only be fixed but also prevented when team members feel confident talking about them. Problem-solving necessitates honesty, and honesty necessitates the absence of fear. Psychological safety in knowledge work should be given the same priority as physical safety in production.

Customer focus

Geoffrey Moore defines customer focus as the distinction between core and context (Moore, 2014). Core, the bread and butter of any business, is what clients are willing and able to pay for. Context, which clients do not care about, is what it takes to deliver that product, including all of an organization's backend systems such as HR, marketing, and development. It is crucial to consider these context systems as mission critical and to fund them effectively. Core should never be killed by context.

Value Stream, Product Thinking, Shift Left

Software value stream

A software value stream encompasses all activities required to offer software products or services to clients, from concept to production (Onuta, 2020). The client determines the value of software products (Onuta, 2020). Customers will favor software solutions or services that consistently deliver value, which in turn creates corporate value. Therefore, the stages of the value stream should create "value" in the customer-centric understanding of the term in order to maximize the return on investment and the potential to please customers. As a concept progresses through development, the build process, and testing, its value to the client must increase until it is finally delivered. However, because software development and delivery in large businesses are complicated, mapping the value stream from beginning to end can be difficult (Plutora, 2022b).

Value stream management

Value stream management aims to please consumers by delivering software products that provide value to their lives.
Value stream management takes a comprehensive approach to software product development that allows any company to operate like a software behemoth. All parts of the software delivery process are captured in this total view of the value stream. It therefore provides value stream managers, release managers, DevOps managers, product managers, and leadership with tools to continuously improve software development. Platforms for value stream management use an integration architecture that combines toolchains into a standard data model, providing end-to-end visibility and traceability across the value chain.

Workflow measurement through value streams

To manage value streams from beginning to end, the operational silos between tools and teams must first be broken down. By combining toolchains across the portfolio with value stream management systems, teams can see the workflow across the business in real time. These toolchains allow teams to track what is currently in the value streams, how it got there, and trends over time. This can take engineering and product teams from having little visibility into their processes to being bombarded with real-time performance information. Value stream management platforms enable teams to focus on the most critical key performance indicators (KPIs) and track them in real time.

DevOps metrics

Teams should begin by using DevOps metrics to track the throughput and consistency of their value streams. These metrics serve as indicators of a value stream's health and how it is changing over time. According to the Accelerate State of DevOps Report, there are four leading DevOps indicators (Accelerate Report, 2019).

1. Deployment frequency: This is how regularly code in a team's value stream is deployed to production.
2. Lead time: This is the time it takes from when code is committed to when it is running in production within the team's value stream.
3.
Mean time to restore: This is how long it takes to restore service when a service incident or a defect that affects users occurs within the team's value stream (e.g., an unplanned outage or service degradation).
4. Change fail rate: This is the percentage of production changes that degrade service (e.g., cause a service outage) and necessitate remediation (e.g., a hotfix or rollback) within the team's value stream.

Flow metrics

A team might begin to integrate flow metrics after introducing DevOps metrics into its procedures. These metrics are based on the notion of a flow item and provide insight into what is going on within the value stream. A flow item is a piece of work that is important to the company: a feature, defect, risk, or debt. Each of the flow metrics examines a different aspect of the value stream. Together, they analyze how work flows through the value stream, showing how planning, release, and investment decisions are increasing and protecting value delivery and how far along the DevOps journey a team is. Consider the following definitions (Plutora, 2022a):

Flow velocity: Flow velocity, also known as throughput, is the number of flow items of each type completed in a given amount of time; it indicates whether value delivery is speeding up.

Flow distribution: Flow distribution measures the ratio of the four flow item types completed over a certain period. This aids in prioritizing different sorts of work during specific time frames to achieve the intended business objective.

Flow time: Flow time quantifies the time it takes for flow items to go from "work start" to "work complete," including both active and wait times, and can be used to detect when time to value or cycle time is increasing.

Flow efficiency: This is the ratio of active time to wait time as a percentage of total flow time. It can help determine when waste in a process is increasing or decreasing.
Flow load: Flow load measures the number of flow items currently in progress (active or waiting) within a value stream. It checks for overuse and underuse of value streams, both of which can lower productivity.

Product thinking

Product thinking is a set of methods for identifying, understanding, and prioritizing challenges encountered by a specific group of customers and then systematically developing and validating solutions over a long lifecycle (Copsey, 2020). Product thinking stands in stark contrast to project execution. When discussing a project, it is assumed that it will be completed; the success of the project is decided by completing a specific set of requirements by a particular deadline and within a particular budget (the "iron triangle"). Product thinking, however, is a mindset that enables one to continuously produce desired and meaningful value by allowing a product to change based on actual and demonstrable needs.

There are four main themes of product thinking (Copsey, 2020):

1. Validating the problem
2. Ruthless prioritization
3. Success that is well-defined and measurable
4. Empathy for customers

Shifting left and Agile methodology

The "shift left" testing idea aims to bring testing closer to the beginning of the software development process. By testing early and often, a project can reduce the number of problems and improve code quality. The goal is to avoid finding critical defects that require code patching during the deployment phase. The Agile methodology's shortened development cycle necessitates this: The testing engineer must test each code increment, typically produced in a two-week sprint, which makes shift-left testing a good fit for Agile.

Some companies like to shift testing even further left, closer to the coding stage. Test-driven development is a valuable strategy here: To use it, you must first write the tests for the code you wish to create.
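This test-first loop can be sketched in a minimal, hypothetical example (the `slugify` helper and its expected behavior are illustrative assumptions, not part of the course material). The test is written before the code it exercises, so it fails first, and the implementation is then written to make it pass:

```python
# Step 1 (red): write the test first. It fails until slugify exists
# and behaves as specified.
def test_slugify():
    assert slugify("Hello DevOps World") == "hello-devops-world"
    assert slugify("  Trimmed  ") == "trimmed"

# Step 2 (green): write just enough code to make the test pass.
def slugify(text: str) -> str:
    """Lowercase, trim, and join words with hyphens."""
    return "-".join(text.strip().lower().split())

# Step 3: run the test; a silent pass confirms the behavior immediately.
test_slugify()
```

Because the test already exists when the first line of production code is written, every subsequent change can be validated immediately.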
As a result, the validity of the code can be checked immediately. Static analysis tools are another technique to push testing to the left (DeBoer, 2019). A static analysis tool can help uncover issues such as incorrect parameter types or improper interface usage. ESLint, for example, is a well-known static code checker, or linter, in the Node.js community that highlights coding errors.

Node.js
Node.js is an open-source, cross-platform runtime environment for JavaScript and a well-liked tool for practically every kind of project. It runs the heart of Google Chrome, the V8 JavaScript engine, outside of the browser, which makes Node.js fast.

In addition, testing experts believe that behavior-driven development (BDD) can hasten the shift to the left. BDD establishes a standard design language that all stakeholders, including product owners, testing engineers, and developers, can understand (Testim, 2022). As a result, all engaged stakeholders can work on the same product feature simultaneously, increasing the team's agility. In other words, BDD encourages cross-team collaboration while speeding up feature delivery.

Benefits of DevOps

There are several benefits brought about by DevOps.

Maximizes efficiency with automation

According to the late DevOps guru Robert Stroud, DevOps is all about transforming businesses, which includes changes in people, processes, and culture (Foster, 2016). The most effective DevOps transformation initiatives concentrate on structural changes that foster community. A successful DevOps program necessitates a culture (or mindset) shift that fosters increased collaboration among many teams, as well as automation, to meet business objectives. DevOps stresses delivering software more frequently, reliably, and safely by managing engineering processes.
Optimizes the entire business

According to system architect Patrick Debois, the founder of the DevOps movement, the primary benefit of DevOps is the insight it delivers. It forces businesses to optimize the entire system, not just IT silos, to benefit the overall business. In other words, businesses must align with customer and business needs and become more adaptable and data-driven (Foster, 2016).

Improves speed and stability of software development and deployment

According to a multi-year analysis published in the annual Accelerate State of DevOps Report, top-performing DevOps businesses excel at both the speed and the stability of software development and deployment, as well as at guaranteeing that their product or service is available to end consumers. However, since the concept of DevOps is somewhat nebulous, how can a company tell whether its DevOps strategy is paying off? The 2019 Accelerate report identifies five performance metrics that provide a high-level view of software delivery and performance and predict DevOps success: lead time, deployment frequency, change fail rate, time to restore, and availability (Accelerate Report, 2019).

Focus on the most important aspect: People

The most crucial component of a DevOps strategy is people, not tools. Key role players such as a DevOps evangelist, a compelling leader who can communicate the commercial benefits of DevOps methods and dispel preconceptions and worries, can improve the chances of success. An automation specialist can build strategies for continuous integration and deployment, ensuring that production and pre-production systems are fully software-defined, flexible, adaptive, and available, which is critical to DevOps success.

Challenges of Adopting DevOps

A DevOps endeavor faces numerous hurdles. To improve processes, organizations need to rethink their structure. However, companies frequently underestimate the effort required for a DevOps shift.
According to a Gartner analysis, 75 percent of DevOps initiatives failed to fulfill their objectives in 2020 due to organizational learning and transition challenges (Costello, 2019).

Difficulty picking the right metrics

Businesses shifting to DevOps principles must use metrics to track progress, document achievements, and identify areas for improvement. An increase in deployment pace without a corresponding gain in quality, for example, is not a success. A successful DevOps effort requires analytics that drive intelligent automation decisions, yet organizations frequently struggle with DevOps metrics. One solution is to find measures related to throughput and velocity.

Adjustments will take time due to the large organizational and IT changes involved, with formerly isolated teams joining forces, changing job positions, and dealing with other shifts. According to a study of IT executives conducted by software company Pensa (2017), the biggest challenges to DevOps success are budget constraints (cited by 19.7 percent of respondents), legacy software (17.2 percent), application complexity (12.8 percent), the difficulty of managing numerous environments (11.3 percent), and corporate culture (9.4 percent).

Complexity

DevOps efforts can become complicated. Key executives may struggle to understand the business value of IT leaders' work. Will centralization and standardization improve governance, or will they add more layers of innovation-stifling bureaucracy? Then there is organizational transformation: Can teams overcome resistance to change and inertia, unlearning many years of doing things one way, while sharing, learning from others, and integrating and orchestrating the correct tools?

DevOps can be ruined by unrealistic goals and poor metrics
Setting unreasonable expectations, tracking metrics that do not align with business goals, or deploying an incomplete DevOps effort that adopts Agile principles while retaining IT operations and engineering/development teams in traditional silos are all reasons DevOps attempts fail (Null, 2019).

1.6 DevSecOps - Ensuring Security in a DevOps World

Application security has evolved with the industry's shift to DevOps, weaving itself across the three "ways" to ensure that the best quality software is generated. Security has had to adapt to sprint alongside development and operations, adding security checks to the pipeline and dividing its activities into smaller, faster chunks. The security team can no longer be brought in toward the end of the software development lifecycle (SDLC) due to the new requirement for rapid feedback (the second way). DevSecOps refers to the addition of security to the three ways: conducting application security (AppSec) within a DevOps context.

Five Strategies for Building a DevSecOps Pipeline

According to Janca (2019), there are five strategies organizations can employ to incorporate security into their pipeline.

Use unit tests as a weapon

The first strategy is to use unit tests as weapons. A typical unit test is a "positive test," which ensures that the code accomplishes what it should.

Ensure third-party components are safe

The second strategy is to double-check the security of any third-party components, libraries, application dependencies, or any other code used in the app that was authored by someone outside your development team. Third-party components now account for more than half of all code in applications, and 26 percent of those components have known vulnerabilities (Contrast Security, 2014). When dependencies are included in a project, you accept the risk of any vulnerabilities they may contain. For many years, this issue of employing components with known vulnerabilities has appeared on the Open Web Application Security Project (OWASP) Top Ten (OWASP, 2017).
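As a rough sketch of what such a dependency check can look like in a pipeline, the fragment below gates a build on a hard-coded, entirely hypothetical advisory list. A real pipeline would query a CVE/NVD-backed scanner rather than this toy data, and all names and advisory IDs here are invented for illustration:

```python
# Hypothetical advisory data standing in for a real vulnerability feed;
# real pipelines would query a CVE/NVD-backed tool instead.
KNOWN_VULNERABLE = {
    ("left-pad", "1.0.0"): "EXAMPLE-2024-0001",
    ("old-crypto", "2.3.1"): "EXAMPLE-2024-0002",
}

def audit_dependencies(dependencies):
    """Return the advisories matching any (name, version) pair in use."""
    return {
        dep: KNOWN_VULNERABLE[dep]
        for dep in dependencies
        if dep in KNOWN_VULNERABLE
    }

# A build step could fail the pipeline when any match is found.
project_deps = [("left-pad", "1.0.0"), ("http-client", "4.2.0")]
findings = audit_dependencies(project_deps)
if findings:
    print(f"Build blocked: {len(findings)} vulnerable dependency(ies) found")
```

The essential idea is that the check runs automatically on every build, so a newly disclosed vulnerability in an existing dependency is caught without anyone having to remember to look.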
Fortunately, MITRE maintains the Common Vulnerabilities and Exposures (CVE) database (Mitre, 2022), and the US government created the National Vulnerability Database (NVD; National Institute of Standards and Technology, 2022). Both contain a list of all officially known (i.e., publicly disclosed) vulnerabilities that can be searched quickly and efficiently in any pipeline. Various paid and free tools accomplish this purpose, with differing levels of quality and usability. If feasible, utilize two tools in case one makes errors or overlooks something. Each tool uses different methods to verify the components, and there are more databases to search than just these two.

Audit the state of systems and settings

The third tactic is verifying the state of the server's or container's patches and configuration, encryption status (key length, algorithms, expiration, health, forcing HTTPS, and other TLS settings), and security headers (browser/client-side hardening). Although system administrators may believe they have implemented patches or changed various settings, this step is to audit and verify that the policy matches the reality of the application.

Container
A container is a standardized software component that wraps up code and all of its dependencies to ensure that an application will run swiftly and consistently in different computing environments.

Some tools can handle all three checks in one pass, while others specialize in one or more. No application should be deployed on a platform with security flaws, missing updates, or weak encryption, and no user should be compelled to visit a website that does not employ the security capabilities available in the browser they are using.
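A small sketch of this kind of audit: the function below compares the headers a site actually returns against a required baseline. The header names are real, but the required set and the pass/fail logic are illustrative assumptions, not a standard, and the observed headers are invented for the example:

```python
# Security headers a hardened web application is commonly expected to send;
# this required set is illustrative, not an official baseline.
REQUIRED_HEADERS = {
    "Strict-Transport-Security",  # force HTTPS on returning visitors
    "Content-Security-Policy",    # restrict script/resource origins
    "X-Content-Type-Options",     # disable MIME sniffing
    "X-Frame-Options",            # mitigate clickjacking
}

def audit_headers(response_headers):
    """Return the set of required security headers the response is missing."""
    return REQUIRED_HEADERS - set(response_headers)

# Hypothetical headers captured from a deployed application.
observed = {
    "Content-Type": "text/html",
    "Strict-Transport-Security": "max-age=63072000",
    "X-Content-Type-Options": "nosniff",
}
missing = audit_headers(observed)
# A pipeline step would fail the deployment if `missing` is non-empty.
```

This is the "audit and verify" step in miniature: the policy is written down as data, and the running system is checked against it on every deployment rather than trusted on faith.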
Dynamic application security testing (DAST) in the pipeline

The fourth technique includes dynamic application security testing (DAST) in the pipeline by conducting scripted attacks and fuzzing (i.e., automatic bug detection using malformed data injection) against the application running on a web server. DAST is not a rapid procedure, so one or both of the following solutions should be used: Execute only a baseline scan, or run DAST in a parallel security pipeline that does not publish to production, runs only after hours, and has an unlimited amount of time to complete.

Static application security testing (SAST) in the pipeline

The final strategy is to include static application security testing (SAST), also known as static code analysis, in the workflow. SAST tools are generally not only slow (running for hours or even days) and expensive, but they also have a high false positive rate (sometimes over 90 percent), which may make this proposal surprising (Fischer, 2021). However, if only one sort of vulnerability (for example, XSS or injection) is searched for per code sprint and the tool is fine-tuned, you can potentially eliminate an entire bug class from your application(s).

SUMMARY

DevSecOps is the seamless integration of security testing and protection throughout the software development and deployment lifecycle. DevSecOps, like DevOps, is as much about culture and shared accountability as it is about particular technologies and procedures.

Like DevOps, DevSecOps aims to deploy better software more quickly and to discover and respond to software issues in production more quickly and efficiently. This unit introduced the concept of DevOps and the importance of DevSecOps. The Agile software development methodology was also explained, with emphasis on some of its methods. Furthermore, the role of IT operations was discussed. This unit went further to discuss the three ways and five ideas of DevOps according to Kim et al.
(2018) in The Phoenix Project and Kim (2019) in The Unicorn Project. This unit concluded with a brief explanation of why DevSecOps is important in the DevOps cycle.

UNIT 2
DEVOPS CAPABILITIES, ACCELERATORS, AND ENABLERS

STUDY GOALS

On completion of this unit, you will be able to...

– understand the benefits of DevOps and the need for security in the DevOps lifecycle.
– explain the software development lifecycle from an Agile perspective.
– elaborate on the role of operations in the software development lifecycle.
– describe infrastructure as code and explain its significance in DevOps.
– discuss how DevOps is instrumental to the modern software development process.

2. DEVOPS CAPABILITIES, ACCELERATORS, AND ENABLERS

Introduction

DevOps is a mindset that combines collaboration, cultural transformation, and automation to boost corporate efficiency and customer satisfaction. Collaboration is essential to DevOps. Once the culture of collaborative delivery is ingrained in the organization, each team joins in the pursuit of the same business objectives. This cultural shift typically necessitates a substantial adjustment in how people operate and might provoke resistance to transformation. An effective DevOps strategy will formalize and communicate the route to the final objective without losing sight of the actual advantages along the way.

DevOps focuses on the three pillars of people, process, and technology to increase the speed and quality of software delivery by integrating information technology (IT) development and operations. The optimal objective state is total delivery process automation, which streamlines every aspect of the operation. However, this is not a one-time quick fix. In fact, it requires continuing lean optimization to maintain peak efficiency, security, and maintainability. Managing the DevOps approach as a whole is a challenging endeavor.
To make the move into a DevOps model as seamless as possible, it is crucial to address people, process, and technology in a consistent, long-term approach. This unit gives a detailed overview of the three pillars of DevOps.

2.1 The Science of DevOps - Key Capabilities to Accelerate

The State of DevOps research program by DevOps Research and Assessment (DORA) is based on seven years of research and data from over 32,000 professionals worldwide (Accelerate Report, 2019). It is the longest-running academically rigorous research project on DevOps. It objectively assesses the behaviors and competencies that promote high performance in technology delivery and organizational outcomes. The study uses behavioral science to identify the most effective and efficient ways to design and deploy software.

Forsgren et al. (2018) have studied how various practices (or capabilities) affect team effectiveness. As the result of a multiyear study, they identified 24 essential capabilities that software teams and organizations should strive to adopt to improve software delivery. These capabilities fall into five categories and are listed in no particular sequence within each category (Forsgren et al., 2018):

– continuous delivery (CD)
– architecture
– product and process
– lean management and monitoring
– culture

Continuous Delivery Capabilities

Version control for all production artifacts

All production artifacts, including application code, application configurations, system configurations, and scripts for automating the build and configuration of the environment, are version controlled using a version control system such as GitHub or Subversion.

Automate your deployment process

Deployment automation refers to the degree to which deployments are automated and do not require manual intervention.

Implement continuous integration

The first step toward continuous delivery is continuous integration (CI). This is a development strategy in which code is checked in regularly.
Each check-in initiates a series of fast tests to find significant regressions, which developers correct right away. The continuous integration process generates canonical builds and packages, which are then deployed and delivered.

Use trunk-based development

Trunk-based development has been shown to predict good software development and delivery performance. A code repository has fewer than three active branches, and branches and forks have noticeably short lifetimes (e.g., less than a day) before being merged into the master branch. Furthermore, application teams rarely, if ever, have "code lock" periods during which no one can check in code or open pull requests due to merge conflicts, code freezes, or stabilization phases.

Implement test automation

Test automation runs software tests automatically (rather than manually) throughout the development process. Effective test suites are reliable: They detect real faults and only pass releasable code. It is worth noting that developers should oversee creating and maintaining automated test suites.

Support test data management

Test data must be carefully maintained, and test data management is becoming a more significant aspect of automated testing. Having enough data to execute the test suite, being able to get essential data on demand, being able to condition test data in the pipeline, and preventing a lack of data from restricting the number of tests that can be run are all good practices. However, teams should reduce the quantity of test data required to perform automated tests whenever possible.

Shift left on security

Integrating security into the design and testing stages of the software development process is critical for improving IT performance.
This includes conducting application security reviews, involving the information security team in the application design and demo process, leveraging preapproved security libraries and packages, and testing security features as part of the automated testing suite.

Implement continuous delivery

Continuous delivery is a software development practice in which the team prioritizes keeping the software in a deployable state over working on new features throughout its lifecycle. All team members have access to immediate feedback on the system's quality and deployability, and when they receive reports that the system is not deployable, fixes are made quickly. Finally, the solution can be deployed to production or end customers at any time.

Architecture Capabilities

Use loosely coupled architecture

A loosely coupled architecture affects a team's ability to test and deploy applications on demand without requiring orchestration with other services. It enables teams to work independently, without relying on other teams for support and services, allowing them to work more quickly and add value to the organization.

Architect for empowered teams

Forsgren et al. (2018) demonstrated that teams that can choose which tools they use perform better at continuous delivery, leading to improved software development and delivery. No one knows better than practitioners themselves what they require to be effective.

Product and Process Capabilities

Gather and implement customer feedback

Research shows that software delivery success is influenced by whether firms actively and routinely seek customer feedback and incorporate that feedback into product design (Forsgren et al., 2018).

Make the flow of work visible through the value stream

Teams should have a clear grasp of and visibility into the flow of work from the company to the customer, including the status of products and features. Research shows that this has a positive impact on IT performance.
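The DevOps metrics introduced earlier (deployment frequency, lead time, and change fail rate) can be made concrete with a toy calculation over a small deployment log. The record format and all values here are illustrative assumptions, not data from any real system:

```python
from datetime import datetime, timedelta

# Hypothetical deployment log: (commit time, deploy time, caused_failure).
deployments = [
    (datetime(2022, 9, 1, 9), datetime(2022, 9, 1, 15), False),
    (datetime(2022, 9, 2, 10), datetime(2022, 9, 3, 11), True),
    (datetime(2022, 9, 5, 8), datetime(2022, 9, 5, 12), False),
    (datetime(2022, 9, 6, 14), datetime(2022, 9, 7, 9), False),
]

# Deployment frequency: deployments per observed day.
days_observed = 7
frequency = len(deployments) / days_observed

# Lead time: mean delay from commit to running in production.
lead_times = [deploy - commit for commit, deploy, _ in deployments]
mean_lead_time = sum(lead_times, timedelta()) / len(lead_times)

# Change fail rate: share of deployments that degraded service.
change_fail_rate = sum(failed for *_, failed in deployments) / len(deployments)

print(f"{frequency:.2f} deploys/day, lead time {mean_lead_time}, "
      f"fail rate {change_fail_rate:.0%}")
```

Even a sketch like this illustrates the point about visibility: Once deployments are logged consistently, the indicators fall out of simple arithmetic and can be tracked as trends over time.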
Foster and enable team experimentation

Team experimentation refers to developers' capacity to try out innovative ideas and establish and update specifications during the development process without requiring outside approval, allowing them to innovate quickly and create value. This is especially effective when combined with working in small batches, gathering customer feedback, and making the flow of work visible.

Lean Management and Monitoring Capabilities

Lightweight change approval processes

A lightweight change approval process based on peer review (pair programming or intra-team code review) outperforms external change approval boards (CABs) in terms of IT performance.

Monitor across applications and infrastructure to inform business decisions

Business decisions should be based on data from application and infrastructure monitoring tools. This is more than