Software Fundamentals PDF

Summary

This document is reading material about software fundamentals, covering software basics, system software, application software, operating systems, and device drivers. It is geared towards an undergraduate-level audience.

Full Transcript


Software basics

Computer software is a set of instructions, data or programs that tells a computer how to operate and execute specific tasks. Software is the non-physical (intangible) aspect of a computing system. In a way, it is the “opposite” of hardware, which is the physical parts of a computing system. Software refers to applications, scripts and programs that run on a device (PC, server, mobile device, etc.). All software is written in some programming language and executed by the computer’s processor to produce results. Broadly speaking, software is divided into two main types: system software and application software.

System software

System software is a type of software that is designed to allow the computer hardware to communicate and operate. System software is used to manage the computer itself. It runs in the background, maintaining the computer’s basic functions. System software also acts as the “bridge” between hardware and the user’s applications. System software is typically written in a low-level language (e.g. assembly), since it requires direct access to the hardware and CPU, or in C and C++ (since they offer excellent support for low-level programming).

Some of the main features of system software include:

- hardware management: manages and controls computer hardware resources.
- user interface: provides interfaces like a command line or a graphical UI (GUI).
- platform for applications: supports and runs application software.
- resource allocation: prioritizes tasks and manages system resources.
- device control: includes drivers for communication with hardware.
- security: offers basic security features like authentication and access control.

The main types of system software are:

- operating systems
- drivers
- firmware (BIOS, boot programs)
- hypervisors
- assemblers
- utility programs

Operating systems

An operating system (OS) is a core type of system software that manages a computer’s hardware and software resources. It provides a foundation for applications to run and enables users to interact with the computer. The OS handles critical computer functions such as memory management (how to utilize the main memory (RAM) of the computer), process scheduling (determining when to run certain instructions and programs), device control (managing communication between input/output devices and the processor), and so on. The OS ensures the system operates smoothly and efficiently. The most widely used operating systems for PCs are Microsoft Windows, macOS and Linux. Whereas Windows dominates the overall market share, Linux is the most widely used option on server hardware. On mobile devices, the main competition is between the Android and iOS operating systems. An end user mainly interacts with the OS through its graphical user interface (GUI) and, in some operating systems, a less complex command-line interface (CLI). For example, most server operating systems feature only a CLI, but no GUI.

Drivers

A device driver, or simply a driver, is a special kind of system software that controls a specific hardware device attached to a computer. Device drivers are essential for a computer to work properly - without a device driver, the respective hardware will fail to work accordingly. Device drivers provide a software interface for attached hardware that enables the operating system (OS) and other applications to access that hardware’s functionality. They define the messages and mechanisms by which the computer - the OS and applications - can access the device or make requests for the device to fulfill.

The driver program converts the more general input/output (I/O) instructions of the OS into messages that the device can understand. For example, drivers translate “key presses” and “mouse clicks” into “1s and 0s” the computer can understand, or translate the “1s and 0s” of the computer into something that a printer can print out. Device drivers are hardware-dependent and specific to the OS. In other words, you usually have to install different drivers for different kinds of printers, and you cannot install Windows drivers onto a Linux machine (and vice versa). Drivers communicate with computer hardware through a bus or a communications subsystem that is connected to the hardware.
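To make the driver’s translation role concrete, here is a minimal, purely illustrative sketch in Python. Real drivers run inside the OS (often in kernel space) and are usually written in C; the device, the bus object and the command bytes below are all invented for the example:

class FakeBus:
    # Stands in for the bus / communications subsystem mentioned above.
    def write(self, data: bytes) -> None:
        print("bytes sent to device:", data)

class ToyPrinterDriver:
    RESET = b"\x1b\x40"  # hypothetical "reset printer" byte sequence

    def __init__(self, bus):
        self.bus = bus

    def print_text(self, text: str) -> None:
        # Translate a general "print this text" request from the OS
        # into the byte messages this particular device understands.
        self.bus.write(self.RESET)
        self.bus.write(text.encode("ascii") + b"\n")

ToyPrinterDriver(FakeBus()).print_text("Hello")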
Firmware

In general, firmware is permanent software embedded into a read-only memory (ROM). It is a set of instructions (semi-)permanently stored on a hardware device, which provides essential information regarding how the device interacts with other hardware. The “semi-permanent” in the definition is due to some firmware being upgradable via special software known as a firmware updater. In computers, the BIOS (basic input/output system) is a kind of firmware that starts the computer system after it is powered on. The BIOS manages the data flow between the OS and attached devices, such as the hard drive, video adapter, keyboard, mouse and printer. The BIOS “lives” on a chip on the motherboard. The boot program is firmware (closely tied to the BIOS) which loads the OS into the computer’s main memory, or RAM.

Hypervisor

The hypervisor is system software that can be used to run multiple virtual machines on a single physical machine. In other words, a hypervisor lets you run multiple operating systems on a single computer. The hypervisor allocates the underlying physical computing resources, such as CPU and memory, to individual virtual machines as required. Every virtual machine has its own operating system and applications, and they share the hardware resources of the physical machine.

Utility programs

Utility system software is designed to aid in analyzing, optimizing, configuring, and maintaining a computer system. These utilities perform specific tasks to support the operating system, such as managing files, protecting data, and optimizing system resources. Examples of utility software include file management systems (Windows Explorer), disk cleanup tools (the defragmenter, Disk Cleanup), backup utilities, “System Restore”, etc.

Application software

Application software is software designed for end users to perform specific tasks. Application software uses the computer’s operating system (OS) and other system software to function, and provides tools to create, interact with, and manage content. Application software is generally written in high-level languages (Python, Java, JavaScript, …). Some common types of application software include:

- word processors (Microsoft Word, Google Docs, …)
- database programs (DBeaver, …)
- Web browsers (Chrome, Firefox, Safari, …)
- communication platforms (Slack, Discord, Instagram, Facebook, …)
- image editors
- programming and deployment tools (VSCode, Git, …)

An “application” is simply another name for application software. While the word “application” might be most associated with smartphone apps in our mind, in reality it is a moniker for any application software, regardless of platform. An application requests services from and communicates with other technologies via an application programming interface (API).
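As a small illustration of what calling an API looks like, the Python sketch below asks a web service for data over HTTP. The URL is a made-up placeholder, not a real endpoint, so this is the shape of the code rather than a working request:

import json
import urllib.request

# Hypothetical endpoint - the URL is a placeholder, not a real service.
url = "https://api.example.com/v1/weather?city=Vienna"
with urllib.request.urlopen(url) as response:
    data = json.load(response)  # the service replies in JSON
print(data)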
Unlike system software, which runs in the background and ensures our computer system keeps working, application software is directly accessed and used by the end user. Application software can also be divided into several categories, based on the kind of platform it runs on.

Desktop applications

Desktop applications are programs installed on individual computers (PCs). Desktop applications run directly on the operating system. They are usually more powerful, but limited to the device they are installed on. Basically, any program that you “install” on your computer is a desktop application. Some examples include Office programs, Photoshop, Steam and Steam games, and more.

Web applications

Web applications are programs which run in a Web browser, requiring no installation. They are accessible from any device with internet access. Some examples include Gmail, Google Docs, an LMS, and basically any Web page.

Native applications

Native applications are programs built specifically for a particular operating system (e.g., iOS or Android). Native apps can take full advantage of the device’s hardware and features, providing high performance. However, each mobile operating system has its own programming language that you have to use, so you would need to rewrite the application in a different language for a different platform. Some examples include the iPhone’s iMessage, Android’s Google Maps, and similar.

Hybrid applications

Hybrid applications are a combination of native and web applications. Hybrid apps are developed using web technologies but are “wrapped” in a native shell, allowing them to be installed like native apps. Some examples include Instagram, Uber, and so on.

Cross-platform applications

Cross-platform applications are programs designed to work on multiple platforms with a single codebase. In other words, you write a single codebase that will work on multiple different platforms. Cross-platform apps aim to provide a consistent experience across devices, using tools like Flutter or React Native. Some examples include Slack, Facebook, and similar.

The two illustrations above represent the differences between system and application software, and a “stacked” architecture of the different software that is used on a daily basis.

Types of programming languages

Machine language

As discussed in previous weeks, 1s and 0s are the only kind of “language” a computer can “understand” and carry out. This is known as machine language - binary-coded instructions that are directly used by the computer. These instructions are built into the hardware of a particular computer. Every computer chip belongs to a certain architecture (x86, ARM, …), and each architecture has a predefined “meaning” built into it for what certain combinations of 1s and 0s represent. When a CPU receives a combination of 1s and 0s (electrical signals, or lacks thereof), the circuits (gates / transistors) on it can “understand” and interpret which operation (instruction) they represent. Each machine-language instruction performs one very low-level task. Even a simple addition requires three steps: load a number into a register, add another number to it, store the result to another register or main memory. When computers were first invented, programmers had no choice but to write programs in machine language, since no other programming language existed yet. Therefore, they had to remember and keep track of complex strings of 1s and 0s in order to write early programs.
Here is one example of a program written in machine language:

10100001 00000000 00010000
10001011 00011110 00000000 00010010
11110111 11100011

While unintelligible to us, these binary strings actually contain instructions for the multiplication of two numbers in the x86 architecture.

Assembly languages

Assembly languages (sometimes simply called assembly) are programming languages which assign mnemonics (“keywords”) to each machine-language instruction for a particular computer. Rather than using binary digits, the programmer can now use keywords and special symbols to program in a more efficient and less error-prone way. After all, it is much easier to remember e.g. the keyword “MUL” for multiplication than the binary string 11110111. However, every program that is executed on a computer eventually must be in the form of the computer’s machine language (1s and 0s). To that end, the system software known as an assembler is used. Assemblers translate an assembly-language program into machine language. The assembler reads each of the instructions in assembly form and translates them into the machine-language equivalent. Due to their complexity and “closeness” to the hardware layer of the computer, machine language and assembly languages are collectively known as low-level programming languages.

Here is an example of an assembly language - in fact, the above-mentioned machine language code converted into assembly:

MOV AX, [NUM1]
MOV BX, [NUM2]
MUL BX

It is now much easier to understand that this was an example of a multiplication operation. While assembly was a step up from machine language, it had its own issues. For one, it takes multiple assembly language commands to program very basic operations (such as our multiplication example - we need 3 instructions for a single multiplication). Once we start dealing with more “complex” logic such as conditions, loops, repeated code, etc., it would require very large amounts of assembly code. Secondly, assembly language is not portable. Because each computer architecture has a different machine language, each computer also has its own corresponding assembly language. In other words, you cannot write assembly code on an x86 device and then run that code on an ARM chip (and vice versa).

High-level languages

During the era of second-generation software, high-level languages first appeared. High-level programming languages are programming languages that are designed to allow humans to write computer programs without having specific knowledge of the processor or hardware that the program will run on. Rather than being machine-oriented like low-level languages (where you have to think about what hardware the program will run on, and use its language), programming is now problem-oriented: you are designing a solution to a problem, which can run on various hardware. A high-level programming language is written in a language that is designed to be easily understood by humans. High-level languages use command words and something called a syntax, which is a specific set of rules in which program statements must be written. The syntax generally reflects everyday language, making these languages easier to learn and use. High-level languages are an “abstraction” over low-level ones. Each line of code or command accomplishes multiple things (rather than a single thing, as in low-level languages), and the programmer does not need to think about underlying hardware details such as the chip or architecture (something a low-level programmer has to take into consideration). For example, rather than having to write 3 lines of assembly code and think about how to get and store data in memory, I can just write “num1 * num2” in a high-level language.
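For comparison, here is the same multiplication written in a high-level language (Python). The concrete values are chosen for the example, since the assembly version loads its operands from memory:

num1 = 6   # example values
num2 = 7
result = num1 * num2  # one statement replaces the three assembly instructions
print(result)         # 42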
Similar to low-level languages, a high-level language also has to eventually be translated into machine language. This can be accomplished through the use of two types of software: compilers and interpreters. A compiler analyzes the entire source code, checks it for errors, and translates the entire code into machine language. An interpreter reads the source code one line or statement at a time, translating it into machine language and executing it immediately. One of the primary divisions of high-level programming languages is based on whether they are compiled or interpreted.

To summarize the main characteristics of high-level languages:

- Syntax that is easy for humans to understand and learn.
- Syntax that uses command words similar to natural human language.
- A single line of code can accomplish multiple tasks.
- They allow the programmer to focus on what the program is trying to achieve rather than how the computer or specific hardware operates.
- Source code is translated into machine code for the computer to process.

Programming paradigms

There exist two primary “paradigms” (ways of “doing things”) in high-level languages: imperative and declarative. In imperative languages, programmers solve a problem by writing a set of instructions that state how a problem should be solved. The program describes the processing necessary to solve the problem. The majority of programming languages throughout history have been imperative, with some notable examples being Python, C, C++, Java, and so on. In declarative programming, programmers describe what the program should do (the result of the program), but the steps to accomplish the result are not stated. The programmer “declares” the problem to be solved, without having to know how the solution is actually executed. Notable examples include SQL, Prolog, Haskell, and the like.

Imperative languages can be further divided into procedural and object-oriented languages. Procedural programming is an imperative model in which the statements are grouped into subprograms (procedures). A program is a hierarchy of subprograms, each of which performs a specific task necessary to the solution of the overall problem. Object-oriented programming (OOP) is an imperative model that organizes software design around data (objects), rather than procedures. An object can be defined as a module that contains unique attributes (data) and behavior. These modules work together and combine into the overall program. Below, you have an example of a procedural language on the left (C) and an object-oriented language on the right (Java).
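The side-by-side C and Java listings from the original illustration are not reproduced here. As a rough stand-in, here is one small task written twice in Python: first procedurally, then in an object-oriented style. The bank-account scenario is invented for the example:

# Procedural style: the logic lives in standalone procedures.
def deposit(balance, amount):
    return balance + amount

balance = deposit(100, 50)
print(balance)  # 150

# Object-oriented style: data (the balance) and behavior (deposit)
# are bundled together in an object.
class Account:
    def __init__(self, balance):
        self.balance = balance

    def deposit(self, amount):
        self.balance += amount

account = Account(100)
account.deposit(50)
print(account.balance)  # 150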
Declarative languages can be further divided into functional and logic languages. Functional programming is based on the mathematical concept of functions - computation is expressed in terms of the evaluation of functions. There are no variables in these languages, and data is always constant. Logic programming is based on the principles of symbolic logic - you create a “set of facts” about objects and a “set of rules” about the relationships among the objects. You “program” by asking questions about these objects and their relationships, which can be deduced from the facts and the rules. Below, you have an example of a functional language on the left (Lisp) and a logic language on the right (Prolog).
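The Lisp and Prolog listings from the original illustration are likewise not reproduced. As a loose approximation of the functional style in Python (a multi-paradigm language, not a purely functional one), the computation below is expressed entirely as function evaluation, without modifying any data in place:

from functools import reduce

numbers = (1, 2, 3, 4, 5)                       # an immutable tuple, never modified
squares = tuple(map(lambda n: n * n, numbers))  # evaluate a function over each element
total = reduce(lambda a, b: a + b, squares)     # combine results by function evaluation
print(total)                                    # 55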
Problem solving in computer science

Computer science is sometimes defined as “the study of algorithms and their efficient implementation in a computer”. An algorithm is a set of instructions for solving a problem or subproblem in a finite amount of time using a finite amount of data. “Finite amount of time and data” means that an algorithm should not take too much time or too many resources to solve a problem. An algorithm is a “plan” of the solution to a problem we are having, and as such, there exists a certain methodology on how to approach a computer problem. In order to design an algorithm, you can follow a few steps.

First of all, you need to analyze the problem. Start by listing the information you have to work with. Most likely, this will be the data given in the problem statement. Try to underline / note key information you have, and any assumptions about the problem or the given information. Try to “fill in” any “gaps” in your understanding of the problem. After that, try specifying what the general solution should look like, and account for any special (edge) cases you might encounter. For example, if implementing a division, you need to think about how to handle division by 0. Lastly, think about how you would solve the problem by hand. For example, try to solve the problem on paper first, or draw a diagram showing what the solution should look like.

As a second step, you can list the main tasks. You can use English words or pseudocode to restate the problem. Pseudocode is a detailed yet readable description of what a computer program or algorithm should do, written in something resembling a programming language (but it is not an actual language). Try to divide the problem into smaller functional areas. If the problem is too large or complex, divide it into smaller parts (subtasks, procedures, subprograms, etc.). This is known as the divide-and-conquer approach to algorithm development. As a part of this, list what “control structures” you will need (conditions, loops, functions, etc.). However, try not to “reinvent the wheel”. If a solution exists, or you solved a similar problem earlier, use that solution.

In the third step, develop and implement the algorithm. Using the data from the previous steps, you develop a logical sequence of steps to be used to solve the problem - your algorithm. Try to put your hand-written solution and pseudocode into actual lines of code and programming concepts. You should translate the algorithm (the general solution) into a programming language. If you already broke the problem down into smaller pieces, first “translate” the individual pieces into code, and then connect them into a larger program. Lastly, run your code and check the results, making corrections if necessary until the answers are correct.

The fourth and final step is to maintain and revise the algorithm. Use your program and modify it if necessary to meet changing requirements or to correct any errors. For instance, in the future, you might discover a special case you did not think about in the beginning, detect a certain problem, or a new requirement may arise. Moreover, plan for change. Software is rarely a “one-and-done” kind of thing - you will often need to refine your solutions. Do not be afraid to start over if you determine your solution was not appropriate enough. The problem-solving strategy outlined in the previous four paragraphs is known as top-down design.
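As a small end-to-end illustration of these steps, the division example mentioned above might turn into the following Python code, with the division-by-zero edge case identified during analysis handled explicitly:

def divide(dividend, divisor):
    # Edge case identified up front: division by 0 is undefined.
    if divisor == 0:
        raise ValueError("division by zero is undefined")
    return dividend / divisor

print(divide(10, 4))  # 2.5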
Software development life cycle (SDLC)

The software development life cycle (SDLC) is a cost-effective and time-efficient structured process that is used by development teams to design, develop, and test good-quality software. The goal of the SDLC is to minimize project risks through forward planning, and to deliver high-quality, maintainable software that meets the user’s requirements. The SDLC achieves this by dividing software development into a series of phases (stages) that can be assigned, completed, and measured. The SDLC provides a systematic management framework with specific deliverables (end results) at every stage of the software development process. Some common benefits of the SDLC include:

- increased visibility of the development process for all stakeholders involved
- efficient estimation, planning, and scheduling
- improved risk management and cost estimation
- systematic software delivery and better customer satisfaction

Different literature and different teams might employ varying steps in the SDLC, but the following are the most common phases:

1. planning
2. analysis (requirements gathering)
3. design
4. implementation (coding)
5. testing
6. deployment
7. maintenance

As can be seen in the illustration, the SDLC is a cycle; client requirements change and software continually evolves, so these “phases” have to be repeated again and again as we develop the product. In some literature, you may see steps 4-7 combined into “implementation”, or steps 1 and 2 combined into “planning” or “analysis” - but these seven phases are the baseline SDLC.

Planning

In the planning phase, the initial aspects of both project management and product management are defined, such as opportunities and feasibility: is it feasible (worth it) to build the system, is there a market for it, and do the customers want the product? Secondly, a work plan is developed, and scheduling, time management and capacity planning are done: what are the overall deadlines, how many people do we need involved, etc. The project is staffed, and material and human resources are allocated to different parts of the system. Lastly, cost estimation is also done. The planning phase answers the question: “Why build the system?” If planning is successful, you should have a project plan and a system request document at the end of it. Collaboration between management, developers and clients is required at this stage.

Analysis

The analysis phase, sometimes also called requirements gathering, is when requirements are collected from various stakeholders such as customers, internal and external experts, and managers. This phase answers the question: “Who, what, where and when for this system?” Developers and managers work together with the customer to document all business processes (requirements) that the software should implement. There are two types of requirements at this phase: functional and nonfunctional requirements. Functional requirements define what a product must do and what its features and functions are. Simply put, these are the features of the system. Nonfunctional requirements describe the general properties of a system. They are also known as quality attributes. Nonfunctional requirements are things like: what operating systems are supported, how fast the system should be, what security measures should be implemented, etc. The end result of analysis is an SRS (Software Requirement Specification) document.

Design

After defining requirements, developers and software architects start to design the software. In the design phase, software engineers analyze the requirements and identify the best solutions to create the software. “Design” does not only mean creating a UI (user interface) for the system - the UI is simply one part of the overall work done during the design phase. During design, we choose the software architecture and technologies (which programming language, which database, which servers, etc.), identify development tools, create diagrams of various requirements (illustrations of the features and data flow), design initial mockups of the UI, etc. The end results of this phase are design documents with the patterns and components selected for the project’s upcoming implementation.

Implementation

During the implementation (sometimes called coding or development) phase, the previously defined requirements and designs are actually coded. If you recall our discussion of the differences between “programming” and “coding”, you can see that “coding” is only one part of the overall SDLC, whereas “programming” covers all of the phases. The main goal in this phase is to produce working software as quickly as possible. Developers analyze the requirements to identify smaller coding tasks they can do daily to achieve the final result. The end result is testable and functional software.

Testing

The testing phase is one of the most important in the SDLC: there is no way to deliver high-quality software without testing. “Testing” means checking the software for errors (bugs) and checking whether it meets customer requirements. Software bugs are flaws or errors in software code that cause the program to behave unexpectedly, produce incorrect results, or crash. Bugs can result from mistakes in programming, unexpected interactions between software components, or unhandled special (edge) cases. Bugs vary in severity, from minor visual issues to critical errors that compromise functionality or security. Identifying and fixing bugs is essential for creating reliable and secure software.

There are several software quality assurance procedures that can be done during this phase. Code quality review is ensuring that the written code adheres to the best practices and standards of the company / industry. Unit testing is testing of individual smaller parts of the software (usually done by developers, in parallel with implementation). Integration testing is testing how individual parts of the software (units) work together. Performance testing involves testing the speed, responsiveness, and reliability of the software. Security testing is ensuring that the software is secure enough and not susceptible to hacking. And lastly, user acceptance testing is asking a select group of customers if they actually like and would use the software in its current state. Because many teams immediately test the code they write (which is known as unit testing), the testing phase often runs parallel to the implementation phase. At the end of testing, we get fully functional software that is ready for deployment.
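As a tiny example of unit testing, Python’s built-in unittest module can check an individual function such as the divide() sketch from the problem-solving section:

import unittest

def divide(dividend, divisor):
    if divisor == 0:
        raise ValueError("division by zero is undefined")
    return dividend / divisor

class TestDivide(unittest.TestCase):
    def test_normal_case(self):
        self.assertEqual(divide(10, 4), 2.5)

    def test_division_by_zero(self):
        # The edge case must raise an error rather than fail silently.
        with self.assertRaises(ValueError):
            divide(1, 0)

if __name__ == "__main__":
    unittest.main()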
Deployment

The deployment phase is when the tested software is made available to end users - the customers. When teams develop software, they code and test on a different copy of the software than the one that the users will have access to. The software that customers use is called the production environment, while the other copies, which are used for development and testing, are said to be in the build, testing or staging environments. This phase is usually highly automated and “invisible” to a regular developer. In modern development, this process is usually handled by a job role known as DevOps engineers. The end result of this phase is the product release.
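A common, minimal way for code to tell these environments apart is an environment variable; the variable name APP_ENV below is just a convention assumed for this sketch, not a standard:

import os

# APP_ENV is an assumed naming convention, not a standard variable.
env = os.environ.get("APP_ENV", "staging")  # default to a non-production copy
debug_enabled = env != "production"         # never expose debug output to customers
print("environment:", env, "| debug:", debug_enabled)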
Maintenance

The SDLC does not end with the deployment of the system - it has to be continuously maintained and improved. In the maintenance phase, among other tasks, the team fixes bugs, resolves customer issues, and manages software changes. Patching and updates are a regular part of the maintenance phase. Patching involves applying small updates, often to address security vulnerabilities or bugs. Patches help protect software from new threats and resolve specific issues. Updates enhance software by adding new features, improving compatibility, or optimizing performance. Updates ensure the software remains current and functional for users. In addition to this, the team monitors overall system performance, security, and user experience to identify new ways to improve the existing software. The “end result” of maintenance is planning for the next round / cycle of the SDLC.

SDLC models

The SDLC conceptually presents the steps to develop, test and maintain good software in an organized fashion. Different companies use different SDLC models, which are concrete implementations of the SDLC phases. While there are many SDLC models out there “in the wild”, our focus will be on the three most important and commonly used ones:

- the waterfall model
- the iterative model
- the Agile model

The waterfall model

The waterfall model is the oldest and “most traditional” SDLC model. It was used for a long time historically before “better” models were developed, but it is still widely used today. The waterfall model arranges all the phases sequentially, so that each new phase depends on the outcome of the previous phase. In other words, software development moves linearly from phase to phase, and each phase has to be 100% completed before you can move on to the next phase. Since the work “flows” from one phase down to the next, this model is known as waterfall. In the example image, if the user wants a “vehicle”, you build it phase by phase. You do every phase 100% completely before moving on to the next step, and have a working product at the end.

The key advantage of the waterfall model is that it identifies system requirements long before programming begins, which minimizes changes (and makes it easier on the developers). The waterfall model provides discipline to project management and gives a tangible output at the end of each phase. However, since there is no way to “go back” to a previous phase if a change has to be made, this can affect the software’s delivery time, cost, and quality. We would have to “scrap” almost all work in a previous phase, and go back to it if a change is required. Moreover, it can take a very long time for all requirements and design decisions to be completed (as each phase has to be fully completed), so by the time we get to implementation, the project might be outdated already.

The waterfall model is best used in two (kind of opposite) cases. Firstly, waterfall might be a good fit for small software development projects, where tasks are easy to arrange and manage and requirements can be pre-defined accurately (there are few requirements, so it is not likely they will change). Secondly, it can be good in large-scale, stable projects like accounting software or military systems. These projects have well-defined requirements and need thorough documentation, with minimal changes once development begins (accounting or military software is “stable”, which means the clients will know beforehand what they want and are unlikely to come into the middle of development and request something new).

The iterative model

While waterfall worked well for early software development, it was soon realized that clients were often waiting too long to see some actual results of the system. This led to the creation of the iterative model. In iterative development, the team begins software development with a small subset of requirements and implements them. Then, they iteratively enhance versions over time based on customer feedback, until the complete software is ready for production. The team produces a new software version at the end of each iteration (cycle). In the example image, if the user wants a “vehicle”, you start small with basic functionality (e.g. a skateboard is a “vehicle” with basic functions) and then iterate into better versions, until you end up with complete software.

The iterative model has a few advantages over waterfall. For one, it is flexible and allows adjustments and refinements at each iteration, making it adaptable to changing requirements. Secondly, it reduces risk. With iterative development, we can detect and address issues early, minimizing project risk by allowing regular testing and feedback. Lastly, it offers continuous improvement. The quality of the project improves over time, as feedback from users and stakeholders is incorporated in each cycle. On the other hand, there are some drawbacks as well. Iterative models can introduce “scope creep”. This means that frequent updates can lead to additional requirements and an expansion of the original scope (planned features), prolonging timelines. Apart from that, the model is resource intensive. It requires ongoing commitment from developers and stakeholders, increasing time and resource demands (even beyond initial planning). Lastly, it can be complex to manage. Tracking progress and managing frequent iterations can be challenging, especially in larger teams.

The iterative model is a good fit for enterprise applications (e.g. customer management systems, resource planning systems, learning management systems, etc.). This model allows for gradual improvements and adaptations based on user feedback, making it suitable for complex applications that evolve over time.

The agile model

The agile model is a step up from the iterative approach. It was envisioned in 2001 by the “Agile Alliance” (a group of software engineers), and its main “values” are presented in a document known as the “Agile Manifesto”. The core tenets of the Agile Manifesto are as follows:

- individuals and interactions over processes and tools
- working software over comprehensive documentation
- customer collaboration over contract negotiation
- responding to change over following a plan

The agile model arranges the SDLC phases into several development cycles (commonly known as “sprints”). The team iterates through the phases rapidly, delivering only small, incremental software changes in each cycle. They continuously evaluate requirements, plans, and results so that they can respond quickly to change. The agile model is both iterative and incremental, making it more efficient than other SDLC models.
The difference between “iterative” and “incremental” can be explained as follows. Let us say you are building a learning management system (LMS), which will have features like videos, assignments and quizzes, course management, profile management, etc. If the application is built using an iterative model, we will build a little bit of all the features to have a working LMS, show that to the customer, and iterate from the feedback. If we use an incremental model, we will focus on a single part first: e.g. we build the profile management feature, show it to the customer, and update it based on feedback. Then, we will focus on a different part of the system, and so on. The figure below shows a more illustrative example of the difference between the two. The “incremental” way of building a hamburger would be adding the required ingredients layer by layer (“increments”). The “iterative” way would be starting with a very basic hamburger, and making a better, juicier one in every next “iteration”. Agile is both iterative and incremental, as shown in yet another example below.

The key difference between waterfall and agile is that in waterfall, a working product is available only at the end of the SDLC. In agile, a minimum working product is available after the first “cycle” (sprint) of the SDLC. This product will then undergo updates and improvements based on continuous user feedback.

The agile model has several major advantages over the previous approaches. For one, it is customer-centric. Frequent feedback loops ensure the product aligns closely with customer needs and preferences. Secondly, it is flexible. Agile easily adapts to changing requirements, enabling quick changes and adjustments based on new insights. Next up, it enables faster delivery. In agile, working software is delivered in shorter cycles, allowing users to experience and benefit from new features sooner. The last advantage is that it fosters team collaboration. Agile encourages strong communication and collaboration, fostering a cohesive team environment. On the other hand, it also suffers from some drawbacks (many similar to the iterative model). Firstly, it is less predictable. Agile’s flexible nature can make project timelines and budgets harder to predict accurately. Secondly, it can introduce “scope creep” - continuous changes and additions can lead to uncontrolled growth of the project scope, impacting deadlines. Moreover, agile requires high engagement. It demands frequent interaction with stakeholders and users, which may strain resources. Lastly, it can be difficult to document. The emphasis on rapid delivery can sometimes lead to insufficient documentation for future reference.

Agile is great for Web-based services and mobile apps. Frequent changes and updates driven by user needs make agile ideal for fast-paced projects with evolving requirements. Moreover, agile is generally one of the most commonly used models in modern software development.
