Reviewer Integrative Programming PDF

Summary

This document reviews the core topics of an integrative programming course: traditional development approaches (Agile and Waterfall), the DevOps concept with its core principles and lifecycle, Git and version control, virtual machines versus containers, Jenkins, and Docker.

Full Transcript


REVIEWER INTEGRATIVE PROGRAMMING

I. DEVOPS

AGILE METHOD PROCESS:
1. Plan
2. Design
3. Develop
4. Testing
5. Deploy
6. Review
7. Launch

TRADITIONAL DEVELOPMENT APPROACH: AGILE METHOD
Agile is a set of principles and practices for software development and project management that emphasizes flexibility, collaboration, customer feedback, and iterative progress. In an agile model, programmers create prototypes to understand client requirements.

Feedback and Prototype:
o The client provides feedback and a list of changes to be made.
o Developers code the changes.

The entire process of building software is broken down into small actionable blocks called sprints.

Sprint Process:
1. Plan
2. Code
3. Test
4. Review
5. Repeat

TRADITIONAL DEVELOPMENT APPROACH: AGILE METHOD ADVANTAGES
- Client requirements are better understood because of constant feedback.
- The product is delivered faster compared to the Waterfall method.

TRADITIONAL DEVELOPMENT APPROACH: WATERFALL METHOD
A traditional approach to software development. In waterfall development, work happens in a step-by-step manner and follows a sequential, linear approach to software development.

TRADITIONAL DEVELOPMENT APPROACH: WATERFALL METHOD DISADVANTAGES
- A new requirement from the client will restart the development cycle.
- Resource-intensive.
- Developers and operations teams work in silos.

COMPANIES REALIZED DURING THE WATERFALL METHOD:
- It is very expensive to make changes near the end of the project.
- Software must be delivered faster with fewer resources.

INTRODUCTION TO DEVOPS
The word DevOps is a combination of the terms development and operations. It represents a collaborative or shared approach to the tasks performed by a company's application development and IT operations teams.

More information:
- DevOps is NOT a technology. It is an approach.
- DevOps can change the software delivery chain, services, job roles, IT tools, and best practices.
- DevOps is one of many techniques IT staff use to execute IT projects that meet business needs.

CORE PRINCIPLES OF DEVOPS
1. Collaboration: Breaking down silos and fostering collaboration between software development and IT operations.
2. Automation: Increases efficiency, reduces errors, and accelerates the software delivery process.

CORE PRINCIPLES: CI/CD
- CI (Continuous Integration): Ensures that code changes are tested and integrated as soon as possible to reduce integration issues and detect defects early.
- CD (Continuous Deployment): Automatically deploys code changes to production without manual intervention.

TECH GIANTS USING DEVOPS
Many tech giants and organizations have adopted DevOps.
o Example: A product released in 2007 suffered many downtimes in 2014; its profit increased after adopting the DevOps approach.

DEVOPS LIFECYCLE
- Plan: Define the goals, requirements, and features of the software.
- Code: Involves the actual software development.
- Build: The code is compiled, integrated, and transformed into executable binaries or artifacts.
- Test: Automated and manual testing to identify and fix bugs, errors, and issues.
- Release: Involves deploying the software to a staging or production environment.
  o Includes Blue-Green Deployments to minimize downtime and reduce risk during new releases.
- Deploy: The software is deployed to the target production environment.
- Operate: Once deployed, the software needs monitoring and maintenance.
- Monitor: Continuous monitoring of the application's performance and health in real time.
- Feedback: Feedback is collected from users, stakeholders, and monitoring systems.
- Optimize: Teams make improvements based on feedback and monitoring data.
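To make the CI/CD principle above concrete, here is a toy sketch (not from the reviewer) of the steps a CI server automates whenever new code is pushed; the script names are hypothetical placeholders, not a real pipeline:

$ cat ci-build.sh
# ci-build.sh - illustrative only: what a CI server runs on each push
set -e                                        # abort on the first failing step
git clone https://example.com/repo.git app    # fetch the latest integrated code
cd app
./build.sh                                    # compile and package the artifacts (hypothetical script)
./run_tests.sh                                # run the automated test suite (hypothetical script)
./deploy_staging.sh                           # only reached if build and tests succeed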
BENEFITS OF WORKING IN A DEVOPS ENVIRONMENT
- Faster time-to-market.
- Improved collaboration.
- Increased efficiency.
- Higher quality software.
- Scalability.

DEVOPS LIFECYCLE WITH EXAMPLE TOOLS
- Git: A distributed version control system.
- Maven/Gradle: Build automation tools used in software development.
- Selenium: A popular open-source framework for automating web browser interactions.
- Jenkins: An open-source automation server used for building, deploying, and automating software development pipelines.
- Amazon Web Services (AWS): A leading cloud computing platform offering a wide range of services for building, deploying, and managing applications.
- Ansible/Docker/Kubernetes: Tools for containerization and orchestration.
- Nagios: An open-source monitoring and alerting system for IT infrastructure.

CHALLENGES IN DEVOPS ADOPTION
1. Cultural Transformation:
o Resistance to new practices and changes in culture can slow down DevOps adoption.
o Traditional silos and lack of collaboration between development and operations teams are common hurdles.
2. Security Concerns (DevSecOps):
o Integrating security practices into the DevOps pipeline can be challenging.
o Ensuring that security is part of every development stage is crucial but complex.
3. Tooling and Technology Selection:
o The wide range of tools available can be overwhelming, making it difficult to select the right ones for specific organizational needs and workflows.

II. GIT

Version Control
Version control is a system that records changes to a file or set of files over time so that you can recall specific versions later. Version control systems are also software tools that help software teams manage changes to source code over time. As development environments have accelerated, version control systems help software teams work faster and smarter. They are especially useful for DevOps teams since they help reduce development time and increase successful deployments.

Timeline of Version Control
- Manual File Copying
- Source Code Control System (SCCS - 1970)
- Revision Control System (RCS - 1980)
- Concurrent Versions System (CVS - 1980)
- Subversion (SVN - 2000)
- Git (2005)

Local Version Control Systems
Many people's version-control method of choice is to copy files into another directory (perhaps a time-stamped directory, if they're clever). This approach is very common because it is so simple, but it is also incredibly error prone. It is easy to forget which directory you're in and accidentally write to the wrong file or copy over files you don't mean to. To deal with this issue, programmers long ago developed local VCSs that had a simple database that kept all the changes to files under revision control.

One of the most popular VCS tools was a system called RCS (Revision Control System), which is still distributed with many computers today. RCS works by keeping patch sets (that is, the differences between files) in a special format on disk; it can then re-create what any file looked like at any point in time by adding up all the patches.

Centralized Version Control Systems
The next major issue that people encounter is that they need to collaborate with developers on other systems. To deal with this problem, Centralized Version Control Systems (CVCSs) were developed. These systems have a single server that contains all the versioned files, and a number of clients that check out files from that central place. For many years, this has been the standard for version control.

Distributed Version Control Systems
In a DVCS (such as Git, Mercurial, Bazaar or Darcs), clients don't just check out the latest snapshot of the files; rather, they fully mirror the repository, including its full history. Thus, if any server dies, and these systems were collaborating via that server, any of the client repositories can be copied back up to the server to restore it. Every clone is really a full backup of all the data.

A Short History of Git
- In 2002, the Linux kernel project began using a proprietary DVCS called BitKeeper.
- In 2005, the relationship between the community that developed the Linux kernel and the commercial company that developed BitKeeper broke down, and the tool's free-of-charge status was revoked. This prompted the Linux development community (and in particular Linus Torvalds, the creator of Linux) to develop their own tool based on some of the lessons they learned while using BitKeeper. Some of the goals of the new system were as follows:
o Speed
o Simple design
o Strong support for non-linear development (thousands of parallel branches)
o Fully distributed
o Able to handle large projects like the Linux kernel efficiently (speed and data size)
What is GIT?
By far, the most widely used modern version control system in the world today is Git. Git is a mature, actively maintained open source project originally developed in 2005 by Linus Torvalds, the famous creator of the Linux operating system kernel. A staggering number of software projects rely on Git for version control, including commercial projects as well as open source. Developers who have worked with Git are well represented in the pool of available software development talent, and it works well on a wide range of operating systems and IDEs (Integrated Development Environments). Having a distributed architecture, Git is an example of a DVCS (Distributed Version Control System). Rather than having only one single place for the full version history of the software, as is common in once-popular version control systems like CVS or Subversion (also known as SVN), in Git every developer's working copy of the code is also a repository that can contain the full history of all changes. In addition to being distributed, Git has been designed with performance, security and flexibility in mind.

The Three States
Git has three main states that your files can reside in: modified, staged, and committed.
- Modified means that you have changed the file but have not committed it to your database yet.
- Staged means that you have marked a modified file in its current version to go into your next commit snapshot.
- Committed means that the data is safely stored in your local database.

The Three Main Sections
- The working tree is a single checkout of one version of the project. These files are pulled out of the compressed database in the Git directory and placed on disk for you to use or modify.
- The staging area is a file, generally contained in your Git directory, that stores information about what will go into your next commit. Its technical name in Git parlance is the "index", but the phrase "staging area" works just as well.
- The Git directory is where Git stores the metadata and object database for your project. This is the most important part of Git, and it is what is copied when you clone a repository from another computer.

Getting Help
If you ever need help while using Git, there are three equivalent ways to get the comprehensive manual page (manpage) help for any of the Git commands, for example:
$ git help config
These commands are nice because you can access them anywhere, even offline. In addition, if you don't need the full-blown manpage help, but just need a quick refresher on the available options for a Git command, you can ask for the more concise "help" output with the -h option, as in $ git add -h.
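The transcript shows only one of the three forms; for completeness, the three equivalent ways to open the full manpage for a command (here, config) are:

$ git help config
$ git config --help
$ man git-config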
There are the database in the Git directory and placed on disk for you original command-line tools, and there are many to use or modify. graphical user interfaces of varying capabilities. For one, The staging area is a file, generally contained in your Git the command line is the only place you can run all Git directory, that stores information about what will go into commands — most of the GUIs implement only a partial your next commit. Its technical name in Git parlance is subset of Git functionality for simplicity the “index”, but the phrase “staging area” works just as well. REAL EXAMPLE OF GIT WORKFLOW The Git directory is where Git stores the metadata and Git init = files in the computer object database for your project. This is the most Git add File1 File2 = file in the Staging Area important part of Git, and it is what is copied when you Git commit –m “Initial commit” = file In the git Repository clone a repository from another computer. Basic Git Commands THE BASIC GIT WORKFLOW git init -To take a directory and turn it into a new Git The basic Git workflow goes something like this: repository so you can start version controlling it, you can simply run git init. 1. You modify files in your working tree. git add - The git add command adds content from the 2. You selectively stage just those changes you want to be working directory into the staging area (or “index”) for part of your next commit, which adds only those changes the next commit. When the git commit command is run, to the staging area. by default it only looks at this staging area, so git add is 3. You do a commit, which takes the files as they are in used to craft what exactly you would like your next the staging area and stores that snapshot permanently to commit snapshot to look like. your Git directory. git status - The git status command will show you the different states of files in your working directory and Configuring Git staging area. Which files are modified and unstaged and The first thing you should do when you install Git is to set which are staged but not yet committed. In its normal your username and email address. This is important form, it also will show you some basic hints on how to because every Git commit uses this information, and it’s move files between these stages. immutably baked into the commits you start creating: git commit - The git commit command takes all the file $git config -- global user.name “John Doe” contents that have been staged with git add and records $git config -- global [email protected] a new permanent snapshot in the database and then moves the branch pointer on the current branch up to it. Again, you need to do this only once if you pass the -- global option, because then Git will always use that STAR and FORK information for anything you do on that system. If you Star – or starring a repository on GitHub is a way to want to override this with a different name or email bookmark or mark it as interesting or valuable to you. It's address for specific projects, you can run the command similar to adding a repository to your favorites or without the --global option when you’re in that project. watchlist. Fork - Forking a repository on GitHub creates a copy of the repository under your GitHub account. It allows you to freely modify, experiment with, and contribute to the Benefits of CONTAINER codebase without affecting the original repository. Increased portability - Applications running in containers can be deployed easily to multiple III. VM vs. 
III. VM vs. CONTAINER

What is a VIRTUAL MACHINE?
Virtual machines are heavy software packages that provide complete emulation of low-level hardware devices like the CPU, disk, and networking devices. Virtual machines may also include a complementary software stack to run on the emulated hardware. These hardware and software packages combined produce a fully functional snapshot of a computational system. A VM is an emulation of a physical computer. VMs enable teams to run what appear to be multiple machines, with multiple operating systems, on a single computer.

Benefits of Virtual Machines
- Lower hardware costs - Many organizations don't fully utilize their hardware resources. Instead of investing in another server, organizations can spin up virtual servers instead.
- Enhanced Data Security - Virtualization streamlines disaster recovery by replicating your servers in the cloud. Since VMs are independent of the underlying hardware, organizations don't need the same physical servers offsite to facilitate a secondary recovery site. In the event of a disaster, employees can be back online quickly with a cost-effective backup and disaster recovery solution.
- Portability - It's possible to seamlessly move VMs across virtual environments and even from one physical server to another, with minimal input on the part of IT teams. VMs are isolated from one another and have their own virtual hardware, making them hardware-independent. Moving physical servers to another location is a more resource-intensive task.

What is a CONTAINER?
Containers are lightweight software packages that contain all the dependencies required to execute the contained software application. These dependencies include things like system libraries, external third-party code packages, and other operating-system-level applications. The dependencies included in a container exist in stack levels that are higher than the operating system. Containers are designed to be consistent and run reliably across different environments, from a developer's laptop to a production server.

Benefits of Containers
- Increased portability - Applications running in containers can be deployed easily to multiple different operating systems and hardware platforms.
- Greater efficiency - Containers allow applications to be more rapidly deployed, patched, or scaled.
- Less overhead - Containers require fewer system resources than traditional or hardware virtual machine environments because they don't include operating system images.
- Better application development - Containers support agile and DevOps efforts to accelerate development, test, and production cycles.

VM vs. Container
Virtual machines (VMs) and containers are often compared because they are both technologies used for virtualization and application isolation, but they serve different purposes and have distinct characteristics. People sometimes confuse container technology with virtual machines (VMs) or server virtualization technology. Although there are some basic similarities, containers are very different from VMs.

VMs and containers are two different virtualization technologies that operate at different levels of the computing stack, which is why VMs are often associated with the hardware level, while containers are associated with the operating system (OS) level.

Which option is better?
- Speed - Container
- Security - VM
- Reliability - Container & VM
- Cost-Effectiveness - Container

SUMMARY
In summary, the comparison between virtual machines (VMs) and containers revolves around key differences in isolation, resource efficiency, and use cases:

Virtual Machines (VMs):
- VMs offer strong isolation by running multiple complete operating systems on a single host.
- They are resource-intensive due to each VM having its own OS and kernel.
- Ideal for scenarios requiring strict isolation, compatibility testing with different OSes, and full-system testing.
- Offer high security but with higher resource overhead and slower startup times.

Containers:
- Containers provide lightweight isolation by sharing the host OS kernel while maintaining separate user spaces.
- They are highly resource-efficient, with minimal overhead, making them quick to start and efficient in resource utilization.
- Suitable for component-level testing, microservices, rapid development, and scaling.
- Offer good security for most use cases but with less strict isolation compared to VMs.

Ultimately, the choice between VMs and containers depends on the specific requirements of your project, including the need for strong isolation, resource efficiency, and the nature of the testing or deployment tasks at hand. In many cases, a combination of both technologies may be used to leverage their respective strengths.
IV. JENKINS

What is JENKINS?
Jenkins is an open-source automation tool written in Java with plugins built for continuous integration. Jenkins is used to build and test your software projects continuously, making it easier for developers to integrate changes to the project and easier for users to obtain a fresh build. It also allows you to continuously deliver your software by integrating with a large number of testing and deployment technologies.

Hudson's Beginnings (2004-2010)
Jenkins has its roots in a project called "Hudson," which was created by Kohsuke Kawaguchi while he worked at Sun Microsystems (later acquired by Oracle). Hudson was initially released in 2004. In 2011, there was an infamous dispute between the independent Hudson open source community and Oracle, which by then had Sun Microsystems under its umbrella.

How does continuous integration work?
Continuous integration (CI) is an integral part of the software development process. It can consist of a number of different tasks, including the use of unique functionality in the repository, feature development, and bug fixes, amongst others. A continuous integration tool such as Jenkins is great at identifying issues with current application sources and provides a speedy response by checking the integration process with the help of automated build and test features. Listed below are common CI practices:
- Regular code committing
- Build staging
- A build machine dedicated to the integration
- Continuous feedback
- Developer test categorization

JENKINS Terminologies
- Pipeline: A pipeline in Jenkins is a series of automated steps that define the process of building, testing, and deploying software. Pipelines can be defined using the "Pipeline DSL" (Domain-Specific Language) in a Jenkinsfile or using visual tools like Blue Ocean.
- Job: A job is a basic unit of work in Jenkins. It represents a single task, such as building code, running tests, or deploying an application. Jobs can be configured and executed independently.
- Node/Agent: A node (also known as an agent) is a machine (physical or virtual) that is part of the Jenkins environment and is capable of running Jenkins jobs. Jenkins can distribute jobs to different nodes based on their capabilities.
- Executor: An executor is a computational resource on a node that can execute a Jenkins job. Nodes can have one or more executors, allowing them to run multiple jobs simultaneously.
- Workspace: Each job in Jenkins has its own workspace, which is a directory on the node where the job's files and artifacts are stored during its execution.
- Jenkinsfile: A Jenkinsfile is a text file that defines the entire pipeline using the Pipeline DSL. It allows for version control and codification of the pipeline.
- Freestyle Project: A freestyle project is a type of Jenkins job that provides a simple and flexible way to configure and run tasks. It is suitable for basic build and deployment tasks.
- Multibranch Pipeline: A multibranch pipeline is a pipeline that is automatically created and run for each branch in a version control repository (e.g., Git). It is useful for managing multiple branches and pull requests.
- Artifact: An artifact is a file or set of files produced as a result of a Jenkins job. These files can include compiled code, deployment packages, or other build artifacts.
- SCM (Source Code Management): SCM refers to the integration of Jenkins with version control systems like Git, Subversion, or Mercurial. Jenkins can automatically trigger jobs based on changes in the source code repository.
- Build Trigger: A build trigger is an event or condition that causes a Jenkins job to run. Common triggers include code commits, pull requests, or scheduled builds.
- Plugin: Plugins are extensions that add functionality to Jenkins. There are thousands of Jenkins plugins available for various purposes, including SCM integration, notification, reporting, and more.
- Master: The Jenkins master is the primary Jenkins server that manages the configuration and scheduling of jobs. Agents (nodes) connect to the master to execute jobs.
- Artifact Repository: An artifact repository is a storage location where Jenkins can publish and retrieve build artifacts. Common artifact repository tools include Nexus, Artifactory, and Docker Hub.
JENKINS Installation
1. Open a CLI/Bash terminal and run: java -jar jenkins.war
2. Open https://localhost:8080 in your browser and enter the initial admin password printed in the terminal.

A Jenkinsfile is defined using two types of syntax:
- Declarative pipeline syntax: Creating pipelines is much easier with this syntax. It features a well-established hierarchy that helps in creating pipelines, and it offers simple ways to exercise control over every aspect associated with the execution of pipelines (a minimal example appears after this list).
- Scripted pipeline syntax: It uses a lightweight executor and runs on the Jenkins master. It has its own set of resources that it puts to use to convert pipelines into atomic commands. As is quite evident from their definitions, these two syntaxes are quite different from each other, and they are even defined in different ways.
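As an illustration of the declarative syntax, here is a minimal Jenkinsfile sketch (not from the reviewer; the stage bodies are placeholder echo steps standing in for real build, test, and deploy commands):

pipeline {
    agent any                        // run on any available node/agent
    stages {
        stage('Build') {
            steps {
                echo 'Building...'   // placeholder; e.g. a Maven/Gradle build step
            }
        }
        stage('Test') {
            steps {
                echo 'Testing...'    // placeholder; e.g. run the automated test suite
            }
        }
        stage('Deploy') {
            steps {
                echo 'Deploying...'  // placeholder; e.g. publish an artifact or image
            }
        }
    }
}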
SUMMARY
In summary, Jenkins is an automation server that plays a crucial role in software development and has the following key purposes and functions:
- Continuous Integration (CI)
- Continuous Delivery (CD)
- Automated Builds
- Testing Automation
- Integration with Version Control
- Customizable Pipelines
- Plugins and Extensibility
- Monitoring and Reporting
- Notifications and Alerts
- Version Control and Traceability
- Parallel and Distributed Builds

In essence, Jenkins automates various aspects of the software development lifecycle, from code integration and testing to deployment and monitoring. Its primary purpose is to enhance development efficiency, reduce manual tasks, and ensure the reliability and quality of software projects. Jenkins is a critical tool in modern DevOps practices, enabling continuous integration and continuous delivery, ultimately helping organizations deliver software more rapidly and with fewer errors.

V. DOCKER

BRIEF HISTORY
The first use of Docker dates back to around 2010, when Solomon Hykes and his team at dotCloud (now Docker, Inc.) started developing a platform for deploying applications in containers. The motivation behind this project was to solve the problem of inconsistent and unreliable application deployments across different environments. In 2013, Solomon Hykes unveiled the project as Docker during a lightning talk at PyCon US. The key innovation of Docker was the use of container technology to provide lightweight and isolated environments for running applications, along with a set of tools to manage these containers. Docker's first public release was in March 2013.

REASONS TO USE DOCKER:
- One or more files missing
- Software version mismatch
- Different configuration settings

What is Docker?
Docker is an open source platform that enables developers to build, deploy, run, update and manage containers — standardized, executable components that combine application source code with the operating system (OS) libraries and dependencies required to run that code in any environment.

Why use Docker?
Because of the simplicity and improvements it brings to the app development lifecycle, Docker has gathered a large community of users. Big companies like Adobe, Netflix, PayPal, and Stripe use Docker today for a number of reasons. Here are some of the main benefits that you'd get from implementing Docker:
- Portability Across Machines - You may deploy your containerized program to any other system that runs Docker after testing it. You can be confident that it will perform precisely as it did during the test.
- Rapid Performance - Although virtual machines are an alternative to containers, containers do not contain an operating system (whereas virtual machines do), which implies that containers have a considerably smaller footprint and are faster to construct and start than virtual machines.
- Lightweight - Containers' portability and performance advantages can aid in making your development process more fluid and responsive. Using containers and technology like Enterprise Developer Build Tools for Windows to improve your continuous integration and continuous delivery processes makes it easier to provide the appropriate software at the right time. Enterprise Developer Build Tools for Windows is a component of Enterprise Developer that provides all of Enterprise Developer's features for compiling, building, and testing COBOL code without the need for an IDE.
- Isolation - Any supporting software your application requires is likewise included in a Docker container that hosts one of your applications. It's not a problem if other Docker containers include apps that require different versions of the same supporting software, because the Docker containers are completely self-contained. This also implies that as you progress through the stages of your development lifecycle, you can be confident that an image you create during development will operate identically in testing and, potentially, in front of your users.
- Scalability - If the demand for your apps necessitates it, you can quickly generate new containers. You can use a variety of container management techniques when using multiple containers.

Docker Container Lifecycle (a command-line demo follows this list)
- Created - This is the initial state of a Docker container after it has been created, but before it has been started.
- Running - When a Docker container is started, it transitions to the running state. In this state, the container is actively executing its processes.
- Paused - If a container is paused, it is temporarily stopped from running its processes, but it is not terminated.
- Exited - If a container's main process completes, the container stops and transitions to the exited state.
- Dead - If a container fails to start, it is in the dead state. Containers in this state cannot be restarted and must be recreated.
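The lifecycle states above can be walked through from the command line; a short illustrative session (assuming the public nginx image is available locally or can be pulled, and the container name web is an arbitrary choice):

$ docker create --name web nginx    # Created: the container exists but is not started
$ docker start web                  # Running: the main process is now executing
$ docker pause web                  # Paused: processes are frozen, not terminated
$ docker unpause web                # back to Running
$ docker stop web                   # Exited: the main process has been stopped
$ docker ps -a                      # lists all containers with their current status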
Docker Detached and Foreground Mode
In Docker, you can run containers in both detached (background) mode and foreground mode. Each mode serves different purposes depending on your needs.

Detached (Background) Mode:
- To run a Docker container in detached mode, you use the -d or --detach flag when starting a container.
- Detached mode is typically used for long-running services or background processes where you don't need to interact with the container's console directly. Common use cases include running web servers, databases, or microservices as containers. Detached mode allows these services to run in the background while freeing up your terminal for other tasks.

Foreground Mode:
- By default, when you run a Docker container without the -d flag, it runs in the foreground (also known as attached mode).
- Foreground mode is useful for debugging, development, and scenarios where you need to interact with the container directly, such as running a shell inside a container for troubleshooting or testing purposes. It's also valuable when you want to see the real-time output of an application to monitor its behavior.

Docker Architecture
(The original document illustrates the Docker architecture with a diagram at this point; it is not reproduced in this transcript.)

Docker Image
A Docker image (or container image) is a lightweight, standalone, executable package that contains everything needed to run a piece of software, including the code, a runtime, system tools, libraries, and settings. Docker images are the building blocks for creating and running containers, which are instances of these images.

Docker Compose
Docker Compose is a tool for defining and running multi-container Docker applications. It allows you to define your application's services, networks, and volumes in a single, easy-to-read YAML file called docker-compose.yml. With Docker Compose, you can manage and orchestrate complex applications consisting of multiple containers, making it easier to develop, test, and deploy applications that rely on multiple interconnected services. For example, suppose we are creating a project with the following parts (a compose file sketch for this project follows the list):
- Frontend - React
- Backend - NodeJS
- Database - MongoDB
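A docker-compose.yml sketch for that React/NodeJS/MongoDB project might look like the following (the build paths, ports, and volume name are illustrative assumptions, not part of the reviewer):

# docker-compose.yml - illustrative sketch only
version: "3.8"
services:
  frontend:                  # React app
    build: ./frontend        # assumes a Dockerfile in ./frontend
    ports:
      - "3000:3000"
  backend:                   # NodeJS API
    build: ./backend         # assumes a Dockerfile in ./backend
    ports:
      - "5000:5000"
    depends_on:
      - database
  database:                  # MongoDB
    image: mongo             # official MongoDB image from Docker Hub
    volumes:
      - db-data:/data/db     # persist database files between restarts
volumes:
  db-data:

Running docker-compose up would then start all three services together.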
It's often used for Docker Hub installing software and setting up the Docker Hub is a cloud-based repository and platform for environment. sharing, distributing, and managing Docker container EXPOSE: Informs Docker that the container will images. It serves as a central hub where developers, listen on a specific network port at runtime. It's a teams, and organizations can publish, store, and access metadata instruction and does not actually container images. Docker Hub is an integral part of the publish the port. Docker ecosystem and is widely used in the CMD and ENTRYPOINT: Specifies the default containerization community. command to run when the container is started. CMD is often used for providing default Docker Commands arguments to the main application, while 1.docker --version: ENTRYPOINT is used to specify the primary Displays the Docker version installed on executable. your system. ENV: Sets environment variables inside the container, which can be used by applications and 2. docker run [OPTIONS] IMAGE [COMMAND] [ARG...]: scripts. Runs a container based on the specified LABEL: Adds metadata to the image in the form image. You can specify various options of key-value pairs. Labels can be useful for and provide a command to run inside the organizing and annotating images. container. USER: Sets the user or UID that the container should run as when executing subsequent 3. docker build [OPTIONS] PATH | URL | -: commands. Builds a Docker image from a Dockerfile VOLUME: Creates a mount point and marks it as located at the specified path or URL. You externally mounted. It is often used for persisting can use the -t option to tag the image. data outside the container. 4. docker images [OPTIONS]: Lists all locally available Docker images. You can use options like -a to show all images, including intermediate ones. 5. docker-compose [OPTIONS] [COMMAND] [ARGS...]: Manages multi-container applications using a Compose file (usually named docker-compose.yml). It simplifies container orchestration for complex setups. 6. docker login: Logs in to a Docker Hub account or another container registry, allowing you to push and pull images from that registry. 7. docker push IMAGE_NAME: Uploads a local Docker image to a container registry, making it accessible to others. 8. docker logs [OPTIONS] CONTAINER_ID or CONTAINER_NAME: Displays the logs generated by a running container. You can use options like -- follow to continuously stream logs. 9. docker pull IMAGE_NAME This command is used to download a Docker image from a container registry (such as Docker Hub or a private registry) to your local machine These are some of the essential Docker commands used for managing containers and images in Docker. Docker offers many more commands and options for fine- grained control over containerized applications. You can explore additional commands and their options in the Docker documentation for specific use cases and scenarios.
