Summary
These notes cover the practices of continuous integration and continuous deployment/delivery, detailing their benefits (increased productivity, improved quality, and faster deployment), the build automation process, types of builds, and CI servers such as Jenkins. They then cover microservices, cloud and edge computing, containers, Docker and Kubernetes, serverless computing, APIs and API security, and infrastructure as code with Ansible and Terraform.
Lec 3

Automated Builds: Automatically compile, test, and package applications, often integrated with continuous integration. Builds are triggered by source code changes such as commits or pull requests.

Continuous Integration (CI): A practice of frequently integrating team members' code changes into the mainline, ideally multiple times daily, to prevent divergence. Combined with automated testing, CI ensures code reliability and stability.

Continuous Deployment: A practice linked to continuous integration, ensuring the application is always deployable. It automates frequent deployments of the master branch to production after successful automated testing.

Continuous Delivery (CD): A practice ensuring application changes can be reliably released at any time. It extends continuous integration by including all configurations needed for production deployment. CD encompasses idea generation, development, and readiness, ensuring smooth workflows with CI inherently integrated.

Benefits of Automated Builds & Continuous Integration:

Benefit | Description
--- | ---
Increased Productivity | Automates the build process, saving time and enabling developers to focus on critical tasks, leading to faster development cycles.
Improved Quality | Reduces errors by running automated tests and validating code before integration, ensuring only high-quality code reaches production.
Faster Deployment | Accelerates feature releases and bug fixes, improving customer satisfaction with quicker access to updates.
Easier Collaboration | Simplifies teamwork by enabling seamless integration of code changes without manual merging.

Build Automation Process:
1. Commit Code: A developer commits code to a repository in a version control system (VCS) like GitHub, GitLab, or Bitbucket, sending updates to a shared repository.
2. CI Server Detects Changes: The CI server monitors the repository for new commits, saves changes in its database, and triggers the CI pipeline. Engineers can configure build settings, add build agents, and integrate with language-specific tools.
3. Trigger Build Agent: A build configuration (rules for when and how builds happen) detects changes and activates a build agent to execute build jobs. Triggers may include new commits, scheduled times, or external processes.
4. Build Agent Activates Tools: The build agent, deployed on a server, listens to the CI server and orders build tools to perform tasks like compiling, compressing, and testing the code. Parallel agents can run on different machines for various environments.
5. Build Tools Execute Tasks: Build tools, specific to programming languages (e.g., Maven for Java, MSBuild for C#, Grunt for JavaScript), compile the code, run unit tests, and perform other operations.
6. Generate Build Artifacts: Successfully built and tested code produces build artifacts (e.g., .exe files) that are sent back to the CI server for deployment, further testing, or storage.

Build Automation Process Summary:

Step | Description
--- | ---
Commit Code | Developer commits updates to a shared repository in a version control system (VCS).
CI Server Detects | CI server monitors changes, saves updates, and triggers the CI pipeline.
Trigger Build Agent | Build configuration activates an agent to execute build jobs based on defined triggers.
Build Agent Activates | Agent directs tools to perform tasks like compiling and testing on a server.
Build Tools Execute | Tools compile code, run tests, and perform operations specific to the programming language.
Generate Artifacts | Tested code produces artifacts (e.g., .exe files) for deployment, testing, or storage.
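To make the six steps concrete, here is a minimal sketch that compresses the whole flow into one Python script: it "detects" a change by fingerprinting the source tree, then runs build, test, and packaging commands. The directory layout, the use of pytest, and the zip packaging step are assumptions for illustration, not part of any particular CI server.

```python
import hashlib
import pathlib
import subprocess

SRC = pathlib.Path("src")            # hypothetical source directory
STATE = pathlib.Path(".last_build")  # fingerprint of the last built tree

def source_fingerprint() -> str:
    """Hash every source file so any new commit changes the fingerprint."""
    digest = hashlib.sha256()
    for path in sorted(SRC.rglob("*.py")):
        digest.update(path.read_bytes())
    return digest.hexdigest()

def run_pipeline() -> None:
    # Steps 4-5: the "agent" drives language-specific build tools.
    subprocess.run(["python", "-m", "compileall", str(SRC)], check=True)  # compile
    subprocess.run(["python", "-m", "pytest", "tests"], check=True)       # unit tests (assumes pytest)
    # Step 6: produce a build artifact (here just an archive of the sources).
    subprocess.run(["zip", "-r", "artifact.zip", str(SRC)], check=True)   # assumes the zip CLI

if __name__ == "__main__":
    current = source_fingerprint()
    # Step 2: the "CI server" notices the tree changed since the last build.
    if not STATE.exists() or STATE.read_text() != current:
        run_pipeline()               # Step 3: trigger the build job
        STATE.write_text(current)
```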
Types of Builds:
1. Incremental Builds: Rebuild only the parts of the codebase that have changed, making the process faster (usually under 10 minutes). Commonly used for efficiency, as build tools detect and process only modified files.
2. Full (Clean) Builds: Rebuild the entire codebase from scratch, often used in cases of stability issues or to remove outdated files. These are less common due to their longer execution times.

CI Servers: The CI server market is well established, and most tools perform the required tasks effectively. The differences between them are often subtle. For a more detailed comparison, it is recommended to explore dedicated resources on CI/CD tools.

Jenkins: Jenkins is an open-source, Java-based CI server that automates build, test, and deployment processes, enabling continuous integration and delivery.

How Jenkins Works:
1. Developers modify and commit code to the repository.
2. The Jenkins CI server monitors the repository for changes.
3. The build server compiles the code, generating an executable if successful. If the build fails, an email with the build logs is sent to the developer.
4. On success, the application is deployed to the test server.
5. If no issues are found, the application is automatically deployed to production.

For larger projects or testing across different environments, Jenkins uses a distributed Master-Agent architecture to handle the load and integrate various languages and platforms.

How Jenkins Works Summary:

Step | Description
--- | ---
Commit Code | Developers modify and commit code to the repository.
Monitor Repository | The Jenkins CI server monitors the repository for changes.
Compile Code | The build server compiles code and generates an executable on success; it sends failure logs to the developer if the build fails.
Deploy to Test Server | On success, the application is deployed to the test server.
Deploy to Production | If no issues are found in testing, the application is automatically deployed to production.
Master-Agent Architecture | For larger projects, Jenkins uses a distributed Master-Agent setup to handle load and integrate multiple languages and platforms.

Master-Agent Architecture in Jenkins: The master-agent architecture in Jenkins is used to manage distributed builds. The master and agent(s) communicate via TCP/IP.

Roles of the Jenkins Master:
- Schedules build jobs.
- Selects the appropriate agent for dispatching builds.
- Monitors agents, taking them online/offline as needed.
- Presents build results and reports to developers.
While the master can execute jobs, it is recommended to delegate tasks to the appropriate agents.

Roles of Jenkins Agent(s):
- A remote machine connected to the master.
- Executes build jobs dispatched by the master.
- Agents can run on different operating systems and handle specific build requirements.
Developers can choose which agent or type of agent to run the build on, though the master typically selects the most suitable agent for the task.

Component | Role
--- | ---
Jenkins Master | Schedules jobs and selects agents. Monitors agents and presents build results. Can execute jobs but delegates tasks to agents.
Jenkins Agent(s) | Remote machine that executes jobs from the master. Can run on different operating systems and handle specific builds. The master selects the appropriate agent.

How Does Jenkins Work in Master-Agent Architecture? As developers keep pushing code, Jenkins agents can run builds of different versions of the code for different platforms, while the Jenkins Master (or master node) controls how the respective builds should operate.
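As a loose analogy for this split (a deliberately simplified sketch, not Jenkins' actual implementation), the Python snippet below has a "master" schedule build jobs onto a queue while "agent" worker threads pick them up and execute them:

```python
import queue
import threading

job_queue = queue.Queue()

def agent(name):
    """An 'agent' loops forever, executing whatever jobs the master dispatches."""
    while True:
        job = job_queue.get()
        if job is None:                 # sentinel: the master takes this agent offline
            break
        print(f"{name} building commit {job}")
        job_queue.task_done()

# The "master" starts agents, schedules jobs, and waits for the results.
agents = [threading.Thread(target=agent, args=(f"agent-{i}",)) for i in range(2)]
for t in agents:
    t.start()
for commit in ["abc123", "def456", "ghi789"]:   # hypothetical commit IDs
    job_queue.put(commit)
job_queue.join()                                 # block until every build finishes
for _ in agents:
    job_queue.put(None)                          # take the agents offline
for t in agents:
    t.join()
```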
Lec 4

Microservices: An architectural style that breaks an application into independently deployable services, each with its own business logic and database. These services can be updated, tested, deployed, and scaled independently. This approach decouples major business concerns, making complexity more manageable. Microservices often pair with DevOps to support continuous delivery and adapt to changing user requirements.

Monolithic Architecture: A software design where the entire application is a single unit with one code base, coupling all business functions. Changes require updating and redeploying the whole system, making updates time-consuming. While easy to manage and deploy in the early stages, it becomes restrictive as the application grows.

Cloud Computing: Provides computing resources like storage and processing power over the internet, reducing the need for physical infrastructure and offering flexibility, innovation, and cost savings.

Types of Cloud Deployments:
- Public Cloud: Shared resources from providers like AWS or Azure.
- Private Cloud: Dedicated resources for one organization, offering higher security.
- Hybrid Cloud: Combines public and private clouds for flexibility and optimized deployment.

Cloud Service | Description | Key Features
--- | --- | ---
IaaS | Provides raw infrastructure (servers, storage, OS) to rent. | Users provision resources; ideal for DevOps toolchains and specialized tasks.
PaaS | Offers a platform for building and deploying apps without managing infrastructure. | Supports tech stacks; automatic scaling and deployment.
SaaS | Delivers software over the internet, managed by cloud providers. | Examples: CRM, webmail, productivity tools.
FaaS | Serverless computing where code runs without managing infrastructure. | Executes code temporarily; pay-per-use.

Containers vs. Virtual Machines: Both are resource virtualization technologies, but the key difference is that virtual machines virtualize entire machines down to the hardware layer, while containers virtualize only the software layers above the operating system.

Aspect | Containers | Virtual Machines (VMs)
--- | --- | ---
Virtualization | Software layers above the OS. | Full machine, including the OS.
Weight | Lightweight; app dependencies only. | Heavy; includes a full OS.
Performance | Faster; lower resource usage. | Slower; higher resource usage.
Isolation | Shares the host OS; less isolated. | Strong isolation; separate OS.
Use Case | Scalable apps, microservices. | Multi-OS, isolated apps.

Containers: Lightweight software packages that include all dependencies required to run an application, such as libraries and third-party code. They operate at a higher level than the operating system.

Pros of Containers:
- Fast iteration: Containers are lightweight and easy to modify, speeding up development.
- Robust ecosystem: Many container runtimes offer public repositories of pre-made containers, saving development time.

Cons of Containers:
- Shared host risks: Containers share the same hardware, so an exploit in one could affect others.
- Security risks: Public repositories may contain vulnerable or malicious containers.

What is Docker? Docker is a containerization platform that simplifies building, deploying, and running containers.
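As a minimal sketch of what "running a container" looks like in practice, the example below uses the Docker SDK for Python (the docker package) to run a throwaway container from a public image; it assumes a local Docker daemon is available:

```python
import docker  # Docker SDK for Python: pip install docker

client = docker.from_env()  # connect to the local Docker daemon

# Run a short-lived container from a public image and capture its output.
output = client.containers.run(
    "alpine:latest",
    ["echo", "hello from a container"],
    remove=True,  # delete the container once it exits
)
print(output.decode())
```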
Docker uses a client-server architecture and automates processes via a unified API. Developers package applications into container images using Docker's toolkit and Dockerfiles. These images can be deployed on any platform supporting containers, such as Kubernetes or Docker Swarm.

Key Benefits:
- Simplifies container creation and management.
- Ensures consistency across development, testing, and production environments.

Challenges: Managing containers at scale requires additional tools.

Solutions: Container orchestration platforms like Kubernetes and Docker Swarm address scaling, load balancing, zero-downtime deployments, and security for large-scale containerized applications.

What is Kubernetes? Kubernetes (K8s) is an open-source platform for orchestrating containers across a networked cluster of resources. Initially developed by Google to manage billions of containers weekly, it simplifies the deployment and management of complex distributed systems while maximizing resource efficiency.

Key Features:
- Groups containers (e.g., app server and database) to optimize network and resource use.
- Handles service discovery, load balancing, automated rollouts/rollbacks, and self-healing.
- Supports configuration management for streamlined workflows.

Use in DevOps: Kubernetes is vital for building reliable CI/CD pipelines, making it especially valuable for DevOps teams. It enhances scalability, efficiency, and resilience in containerized environments.

Aspect | Docker | Kubernetes
--- | --- | ---
Definition | A containerization platform for building, deploying, and managing containers. | An open-source orchestration platform for managing containers across a cluster.
Purpose | Simplifies creation and management of individual containers. | Manages containerized applications at scale, ensuring efficiency and resilience.
Key Features | Builds and deploys containers. Ensures environment consistency. | Automates container orchestration. Handles service discovery, load balancing, and self-healing.
Challenges | Managing containers at scale requires additional tools. | Requires setup and expertise to manage clusters effectively.
Solutions | Integrates with orchestration tools like Kubernetes or Docker Swarm. | Streamlines CI/CD pipelines and supports large-scale deployments.
Use Case | Best for individual container lifecycle management. | Ideal for distributed systems and container orchestration in DevOps workflows.
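To give a feel for driving a cluster from code, here is a minimal sketch using the official Kubernetes Python client (the kubernetes package); it assumes a kubeconfig for a reachable cluster already exists:

```python
from kubernetes import client, config  # pip install kubernetes

config.load_kube_config()  # read cluster credentials from ~/.kube/config
v1 = client.CoreV1Api()

# List every pod the cluster is currently running, across all namespaces.
for pod in v1.list_pod_for_all_namespaces().items:
    print(pod.metadata.namespace, pod.metadata.name, pod.status.phase)
```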
Edge Computing: A distributed paradigm that processes and stores data near its source, such as on IoT devices or local servers, rather than relying on centralized data centers. This approach reduces latency, saves bandwidth, and improves data processing efficiency. Edge devices are equipped with processors, memory, storage, and I/O capabilities, allowing applications to process and analyze data directly at the generation site.

How Edge Computing Works: Edge computing processes data near its source, like IoT devices or local servers, instead of in centralized data centers. This reduces latency, saves bandwidth, and handles growing data volumes. By bringing computing closer to where data is generated, it ensures faster and more efficient processing, especially for time-sensitive tasks. In short, edge computing:
- Processes data closer to its source (e.g., IoT devices, local servers).
- Reduces latency and bandwidth usage.
- Handles growing data volumes efficiently.
- Shifts computing resources from central data centers to local sites.
- Ensures faster, more efficient processing for time-sensitive data.

Aspect | Edge Computing | Cloud Computing | Fog Computing
--- | --- | --- | ---
Definition | Deploys computing and storage resources where data is generated, at the network edge. | Centralized, scalable compute and storage resources in globally distributed data centers. | Places compute and storage resources between the edge and the cloud, closer to the data.
Location | Resources are physically located near the data source (e.g., a wind turbine or train station). | Resources are located in centralized global cloud data centers. | Resources are distributed between edge devices and the cloud, often in regional hubs or intermediary nodes.
Data Processing | Processes data locally to reduce latency and bandwidth usage. | Processes data in remote data centers, often resulting in higher latency. | Processes data in local or nearby nodes, reducing latency compared to the cloud while not at the edge.
Use Case | Best for real-time, low-latency applications (e.g., sensors, autonomous vehicles). | Ideal for large-scale storage, analytics, and centralized IoT operations. | Suitable for resource-constrained IoT needing local processing without full edge deployment.
Connection to Data | Data is processed directly at the source (the data generation point). | Data is transmitted to the cloud for processing, possibly leading to delay. | Processes data near the edge but not directly at the data source, balancing resources and latency.
Example | Servers on wind turbines or in train stations for immediate data processing. | Cloud-based platforms for storing and analyzing IoT data from various sources. | Intermediate nodes processing sensor data from IoT devices before sending it to the cloud.

Benefits of Edge Computing:

Benefit | Description
--- | ---
Autonomy | Operates in remote or low-connectivity areas, processing data locally before transferring it when stable connections are available.
Data Authority | Keeps data close to its origin, ensuring privacy and security and complying with regulations like GDPR.
Edge Security | Enhances security by encrypting data during transfer and protecting against malicious activities.
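As a toy illustration of the core edge pattern (process locally, transmit only what matters), the sketch below aggregates raw readings on a hypothetical edge node and forwards only a compact summary upstream; read_sensor and send_upstream are stand-ins, not a real device API:

```python
import random
import statistics

def read_sensor() -> float:
    """Stand-in for reading a real sensor on the edge device."""
    return 20.0 + random.random() * 5.0

def send_upstream(summary: dict) -> None:
    """Stand-in for the uplink to a cloud data center; here we just print."""
    print("uploading:", summary)

# Process a window of 1000 readings locally; ship only the aggregate,
# saving the bandwidth that sending every raw reading would cost.
window = [read_sensor() for _ in range(1000)]
send_upstream({
    "count": len(window),
    "mean": round(statistics.mean(window), 2),
    "max": round(max(window), 2),
})
```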
Serverless Computing Overview: Serverless architecture enables developers to build and run services without managing infrastructure. Cloud providers handle server provisioning, scaling, and maintenance. Compute resources are dynamically allocated based on user requests and computing events, allowing applications to run without server management or configuration. This model simplifies the deployment process, enabling developers to focus on code rather than infrastructure.

Serverless vs. FaaS:

Aspect | Serverless | FaaS (Function-as-a-Service)
--- | --- | ---
Definition | A broader architecture where developers can focus on writing code without managing infrastructure. | A subset of serverless, specifically focused on event-driven code execution.
Scope | Includes services like storage, databases, event streaming, messaging, and API gateways. | Focuses solely on backend code execution.

Relation: FaaS is one aspect of serverless computing.

Serverless vs. BaaS:

Aspect | Serverless | BaaS (Backend-as-a-Service)
--- | --- | ---
Definition | A broader architecture where developers focus on writing code without managing infrastructure; it includes both FaaS and BaaS. | A subset of serverless that provides ready-made backend services, such as user authentication, push notifications, cloud storage, and database management.
Scope | Encompasses both FaaS and BaaS. | Focuses on providing backend services to simplify development.
Focus | Developers manage code execution (FaaS) and backend services (BaaS). | Handles common backend tasks, enabling developers to focus on front-end development.
In Short | Serverless includes both BaaS and FaaS. | BaaS handles common backend tasks.

Serverless vs. SaaS:

Aspect | Serverless | SaaS (Software-as-a-Service)
--- | --- | ---
Definition | An infrastructure model where developers build and deploy applications without managing servers. | Delivers fully functional software applications over the internet, with the provider managing everything, including infrastructure and updates.
Scope | Provides scalable backend services but does not deliver ready-to-use applications. | Users simply access the software, with everything managed by the provider.
In Essence | Serverless is a backend architecture for developers. | SaaS is software delivery for end users.

Fundamental Concepts in Serverless Architecture:
1. Invocation: A single execution of a serverless function.
2. Duration: The time taken for a serverless function to execute.
3. Cold Start: The latency that occurs when a function is triggered for the first time or after a period of inactivity.
4. Concurrency Limit: The maximum number of function instances that can run simultaneously in one region, set by the cloud provider. Exceeding this limit results in throttling.
5. Timeout: The duration a function is allowed to run before being terminated by the cloud provider. Most providers set a default and a maximum timeout.

Component | Description
--- | ---
Function as a Service (FaaS) | Purpose-built offerings like AWS Lambda, Azure Functions, and Google Cloud Functions that execute backend operations triggered by user events.
The Client Interface | Supports stateless interactions, handles high and low volume data transfers, and accommodates short spikes of requests in a serverless setup.
Web Server on the Cloud | A stateless interaction point where user requests are initiated, processed, and terminated before responses are delivered by the FaaS service.
Security Service | Ensures secure handling of concurrent requests using token services and identity management tools, like AWS Cognito, for authentication and secure access.
Backend Database | Stores information ranging from static content to dynamic databases, often using Backend as a Service (BaaS) solutions for reduced maintenance.
API Gateway | Acts as the bridge between the client interface and FaaS, relaying user actions and enabling interactions with multiple FaaS services for added functionality.

Advantages and Disadvantages of Serverless

AWS Lambda: A serverless, event-driven platform by Amazon Web Services (AWS) that lets developers run code without managing servers; it automatically handles the required computing resources. AWS Lambda supports languages like Node.js, Python, Java, Go, Ruby, and C# (via .NET), and allows custom runtimes.
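A minimal Python Lambda handler looks like the sketch below. The handler signature is Lambda's standard one, but the shape of event depends on the trigger; the S3 field access here is illustrative of an S3 upload event:

```python
import json

def lambda_handler(event, context):
    """Entry point AWS Lambda invokes; `event` carries the trigger's payload."""
    # For an S3-triggered invocation, each record describes one uploaded object.
    records = event.get("Records", [])
    keys = [r["s3"]["object"]["key"] for r in records if "s3" in r]
    return {
        "statusCode": 200,
        "body": json.dumps({"processed": keys}),
    }
```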
AWS Lambda Invocation Methods:
1. Push-based Invocation: Lambda functions are triggered directly by events from AWS services (e.g., a file upload to S3). This is event-driven or asynchronous invocation.
2. Event-based Invocation: Lambda functions are triggered by specific events from AWS services, such as changes in S3, records in DynamoDB, or messages in SQS.
3. Poll-based Invocation: Lambda functions actively check for changes or events in resources (e.g., polling an SQS queue or an external API for updates).

Additional Services:
- Kinesis Data Analytics: Used for real-time streaming data transformation and analysis.
- Amazon SNS: A service for message notifications and event-driven Lambda triggers.

Introduction to APIs: APIs (Application Programming Interfaces) enable communication between two software components using defined protocols and methods. Example: a weather app on your phone communicates with a weather bureau's system through an API to display daily weather updates.

What does API stand for? API stands for Application Programming Interface. "Application" refers to any software with a distinct function. "Interface" acts as a contract between two applications, defining how they communicate via requests and responses. API documentation provides developers with guidelines on structuring these requests and responses.

Different Types of APIs:
1. Private APIs: Internal to an organization; used for connecting systems and data within the business.
2. Public APIs: Open to the public; may or may not require authorization or come with a cost.
3. Partner APIs: Accessible only by authorized external developers; used for business-to-business partnerships.
4. Composite APIs: Combine two or more APIs to address complex system requirements or behaviors.

How APIs Work:
1. Client and Server:
- Client: The application sending the request (e.g., a mobile app).
- Server: The application sending the response (e.g., a weather database).
2. API Types Based on Protocols:
- SOAP APIs: Use the Simple Object Access Protocol (XML-based). Less flexible; more popular in the past.
- RPC APIs: Remote Procedure Calls. The client executes a procedure on the server, and the server sends the result back.
- WebSocket APIs: Use JSON objects for data exchange and support two-way communication, allowing the server to send callback messages to connected clients; this makes them more efficient than REST APIs.
- REST APIs: The most popular and flexible type, commonly used on the web. The client sends requests with data to the server; the server processes the request, performs internal functions, and returns the output to the client.

SOAP APIs:
1. Client Request: The client wraps a method call in SOAP/XML and sends it over HTTP to the server.
2. Server Processing: The server parses the XML to read the method name and parameters, then processes the request.
3. Response: The server sends an XML response back to the client with the return value or fault data.
4. Client Response Parsing: The client parses the response XML to use the return value.

RPC APIs:
1. Client Request: The client sends an RPC request, including the function name, arguments, and any required credentials.
2. Server Processing: The server processes the request, verifying authentication and authorization credentials.
3. Function Execution: The server executes the specified function or method using the provided arguments.
4. Server Response: The server returns the result of the execution (value, object, error message, etc.) to the client.
5. Client Processing: The client receives the result, performing any necessary validation or error checking.
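This RPC round trip maps almost one-to-one onto Python's built-in XML-RPC modules. The sketch below is a self-contained toy with the server and client in one file; the add method is of course hypothetical:

```python
import threading
from xmlrpc.client import ServerProxy
from xmlrpc.server import SimpleXMLRPCServer

# Server side: expose a function that clients can call remotely.
server = SimpleXMLRPCServer(("localhost", 8000), logRequests=False)
server.register_function(lambda a, b: a + b, "add")
threading.Thread(target=server.serve_forever, daemon=True).start()

# Client side: the method call is serialized, executed remotely, and the
# return value is parsed back (steps 1-5 above).
client = ServerProxy("http://localhost:8000")
print(client.add(2, 3))  # -> 5
```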
WebSocket APIs:
1. Handshake: The client sends an HTTP request to the server, asking to upgrade the connection to a WebSocket connection.
2. Handshake Response: The server responds with an HTTP message confirming the upgrade to WebSocket.
3. Open Connection: Once the handshake is complete, the connection is open, allowing both client and server to send and receive data.
4. Data Exchange: Data is transmitted in WebSocket messages, each consisting of a header (with message type, length, and flags) and a payload (the actual data).
5. Close Connection: Either the client or the server can send a special WebSocket message to close the connection, completing the termination after both parties acknowledge the message.

REST APIs:
1. Resource Identification: REST APIs are based on resources identified by unique URIs (Uniform Resource Identifiers). Resources represent entities like users, products, or orders.
2. HTTP Verbs: REST APIs use HTTP methods (verbs) to perform actions on resources:
- GET: Retrieves a resource or a collection of resources.
- POST: Creates a new resource.
- PUT: Updates an existing resource.
- DELETE: Deletes a resource.
3. Request and Response: The client sends a request to the API with the URI of the resource and the HTTP method to use.
4. Stateless: Each request is independent of previous ones, making REST APIs scalable and allowing clients to cache responses for better performance.
5. Representations: Data is transferred between client and server in representations, often in formats like JSON or XML.
6. Hypermedia: REST APIs can include hypermedia, which provides links and navigation elements in responses, allowing clients to discover and interact with resources dynamically.
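Mapping those verbs onto code, the sketch below exercises a hypothetical /users resource with Python's requests library; the base URL and payload fields are made up for illustration:

```python
import requests

BASE = "https://api.example.com"  # hypothetical REST API

# POST: create a new resource; the server assigns it an id/URI.
created = requests.post(f"{BASE}/users", json={"name": "Ada"}).json()
user_uri = f"{BASE}/users/{created['id']}"

print(requests.get(user_uri).json())              # GET: retrieve the resource
requests.put(user_uri, json={"name": "Ada L."})   # PUT: update it
requests.delete(user_uri)                         # DELETE: remove it
```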
API Security:
1. Authentication: Verifying the identity of the user or application accessing the API. Common methods include API keys, OAuth2, and JSON Web Tokens (JWT).
2. Authorization: Determining which actions and resources the authenticated user or application can access, often based on roles, permissions, or other criteria.
3. Encryption: Encrypting data transmitted between the client and server to protect sensitive information. Common methods include HTTPS and SSL/TLS.
4. Rate Limiting: Restricting the number of requests that can be made to the API within a certain timeframe to prevent Denial-of-Service (DoS) attacks and abuse.
5. Input Validation: Ensuring that input data is properly formatted and within expected parameters to prevent attacks such as SQL injection and cross-site scripting (XSS).
6. Auditing and Logging: Tracking and recording API usage and activity to detect and investigate security incidents and unauthorized access attempts.
7. API Gateway: A layer between the client and the API that provides added security features like access control, rate limiting, and monitoring.

API Authentication Mechanisms:
1. API Keys: A unique key is provided to each authorized client. The client includes the API key with each request, and the server verifies the key to ensure authorization.
2. OAuth2: A three-step process of authentication, authorization, and token exchange. It allows access to resources on behalf of a user without needing the user's credentials.
3. JSON Web Tokens (JWT): A token-based authentication method. User information is encoded into a signed JSON token; the client includes the JWT with each request, and the server verifies its authenticity.

API Keys:
- Public Key: Included in the request made by the client; identifies the client and allows access to the API.
- Private Key: Used for server-to-server communication; treated like a password and kept confidential for security.

OAuth 2.0:
1. Initial Request: The consumer application sends its application key and secret to the authentication server's login page. Upon successful authentication, the authentication server responds with an access token.
2. Redirect with Access Token: The access token is included as a query parameter in a redirect (HTTP 302) to the resource server (API server).
3. API Request: The user makes a request to the resource server (API server) with the access token in the header, prefixed by "Bearer". The API server validates the access token to authenticate the user.
4. Access Tokens: The access token not only authenticates the user but also defines the permissions the user has on the API. Access tokens typically expire after a set period, requiring re-authentication by the user.

JWT (JSON Web Token):
1. Token Generation: After successful authentication, the server generates a JWT containing claims, which include information about the user (e.g., username, role).
2. Token Storage: The client stores the JWT (usually in local storage or cookies) and includes it in the Authorization header of subsequent API requests.
3. Token Verification: For each API request, the server verifies the JWT by checking its signature and decoding its contents. If valid, the request is allowed to proceed.
4. Token Expiration: JWTs have an expiration time. Once expired, the client must request a new token to continue accessing the API.
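Steps 1 and 3 of this flow can be sketched with the PyJWT library (pip install pyjwt); the secret, claims, and lifetime below are placeholders:

```python
import datetime
import jwt  # PyJWT

SECRET = "change-me"  # placeholder signing secret

# Step 1: after login, the server issues a signed token carrying claims.
token = jwt.encode(
    {
        "sub": "alice",   # placeholder username claim
        "role": "admin",  # placeholder role claim
        "exp": datetime.datetime.now(datetime.timezone.utc)
               + datetime.timedelta(hours=1),
    },
    SECRET,
    algorithm="HS256",
)

# Step 3: on each request, the server checks the signature and expiry;
# jwt.decode raises an exception if either check fails.
claims = jwt.decode(token, SECRET, algorithms=["HS256"])
print(claims["sub"], claims["role"])
```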
Infrastructure as Code (IaC): IaC manages and provisions infrastructure through machine-readable code rather than manual configuration, reducing errors and ensuring consistency. It allows infrastructure components like servers, networks, and storage to be defined and managed using code.

Key Points:
- Automation: IaC automates infrastructure provisioning, reducing manual effort and enabling faster deployment.
- Evolution: It started with static virtual machines, advanced to containers and provisioning tools, and now includes complex cloud infrastructures with serverless and managed services.
- Methods:
  - Manual: Point-and-click resource management.
  - Ad-hoc Automation: Using CLI commands or scripts.
  - IaC: Using code to declaratively manage resources and configurations.

Tools for Infrastructure as Code (IaC):
1. Configuration Management Tools (Ansible, Chef, Puppet): Define infrastructure configurations as code and automate the deployment and management of servers and infrastructure components.
2. Infrastructure Automation Tools (Terraform, CloudFormation, Pulumi): Define infrastructure resources as code and automate the deployment and management of cloud infrastructure, ensuring the desired state of the infrastructure is achieved.

Ansible Overview:
- Purpose: Ansible is an open-source automation tool that simplifies the deployment and management of infrastructure and applications.
- Language: Uses YAML ("YAML Ain't Markup Language") to define configurations.
- Managed Components: Servers, network devices, cloud resources.

How Ansible Works:
- Connection: Ansible connects to target systems via SSH using credentials from the Ansible inventory.
- Modules: Once connected, Ansible pushes small programs called modules to the target system for execution, then removes them after completion.
- Management Node: The controlling node from which playbooks are executed. It connects to remote systems via SSH to perform tasks and install products/software.
- Execution: Ansible executes tasks on the target system, removing the modules after use.

Key Ansible Terms:
- Ansible Server: The machine where Ansible is installed and from which playbooks are run.
- Fact: Information gathered from the client system using the gather_facts operation.
- Play: The execution of a playbook.
- Handler: A task that only runs if a notifier is triggered.
- Notifier: A section within a task that calls a handler if the output changes.
- Tag: A name assigned to a task, allowing it to be run independently or as part of a group.

Terraform Overview:
- Purpose: An open-source tool for Infrastructure as Code (IaC), enabling declarative management of infrastructure.
- Providers: Supports AWS, Azure, Google Cloud, and more.
- Language: Uses the HashiCorp Configuration Language (HCL), similar to JSON/YAML.

Key Features:
- Declarative Approach: Define the desired state, and Terraform ensures it is maintained.
- Cross-Cloud Compatibility: Manage resources across multiple cloud platforms.

Terraform Core Concepts:

Concept | Description
--- | ---
Variables | Key-value pairs used to customize Terraform modules, allowing dynamic input to control the behavior of configurations.
Provider | A plugin that interacts with the APIs of cloud services (e.g., AWS, Azure, GCP) to enable access to cloud resources.
Module | A folder containing Terraform templates where configurations and infrastructure resources are defined. Modules are reusable and help organize code.
State | Cached information about the current state of infrastructure managed by Terraform, tracking resources and their configurations.
Resources | Blocks representing infrastructure objects (e.g., compute instances, virtual networks) created, managed, or updated using Terraform configurations.
Data Source | Data returned by providers about external objects, such as data from other systems, for use in Terraform configurations.
Output Values | Return values from a Terraform module, used in other parts of the configuration or shown to the user.
Plan | The phase where Terraform calculates the changes needed to transition infrastructure from its current state to the desired state, identifying what to create, update, or destroy.
Apply | The phase where Terraform applies the changes from the plan, updating the infrastructure to match the desired state.

How Terraform Works:
- Terraform Configuration: The user-defined configurations that specify which resources need to be created or managed.
- State: The current setup of the infrastructure that Terraform maintains in order to track and compare the real state of resources.
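At its core, the plan/apply cycle is a diff between the desired state (the configuration) and the current state (the state file). The Python sketch below mimics that reconciliation logic in miniature; it is a conceptual toy with made-up resource names, not how Terraform is actually implemented:

```python
# Desired state (what the configuration declares) vs. current state
# (what the state file says exists); all names are hypothetical.
desired = {"web-1": {"size": "t3.micro"}, "db-1": {"size": "t3.small"}}
current = {"web-1": {"size": "t2.micro"}, "cache-1": {"size": "t3.micro"}}

def plan(desired, current):
    """Compute what to create, update, or destroy (the 'plan' phase)."""
    create = [r for r in desired if r not in current]
    destroy = [r for r in current if r not in desired]
    update = [r for r in desired if r in current and desired[r] != current[r]]
    return create, update, destroy

create, update, destroy = plan(desired, current)
print("create:", create)    # ['db-1']
print("update:", update)    # ['web-1'] (its size changed)
print("destroy:", destroy)  # ['cache-1']
# The 'apply' phase would then make the real infrastructure match `desired`.
```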