CC105 Module ASC Approved 1 PDF

Document Details


Cris Norman P. Olipas, Angelito I. Cunanan Jr., Ronaldin V. Bauat, Vanessa C. Pascual

Tags

software engineering, application development, operating systems, computer science

Summary

This learning module covers fundamental concepts of application development and software engineering, highlighting basic concepts of emerging technologies. It also discusses the importance of software and operating systems, compares different types of operating systems, and identifies software-related problems.

Full Transcript


FOREWORD

The world today has witnessed the use of different applications and systems to transform processes and procedures into more effective and efficient ones. Information Technology plays a vital role in producing relevant applications for different organizations, groups, individuals, and communities. This course covers the relevant fundamental concepts of application development and software engineering. Basic concepts of emerging technologies are also given emphasis so that students gain an idea of how such technologies can be integrated into application development. While the course focuses on application development, recent advancements and technologies are also covered to give students essential information and updates about them. The contents of this learning module conform to the minimum requirements of CHED Memorandum Order No. 25, Series of 2015, or the Policies, Standards, and Guidelines of the BSCS, BSIS, and BSIT Programs. The author and contributors seek to continuously refine the contents of this learning module, together with a laboratory guide for application development using JavaFX. Students are expected to engage in synchronous and asynchronous learning modalities to have a better experience in learning the course. Have fun learning how to develop your first application project.

Cris Norman P. Olipas
Angelito I. Cunanan Jr.
Ronaldin V. Bauat
Vanessa C. Pascual

TABLE OF CONTENTS

LESSON 1: OVERVIEW ON SOFTWARE AND HARDWARE APPLICATIONS
LESSON 2: INTRODUCTION TO SOFTWARE ENGINEERING AND APPLICATION DEVELOPMENT
LESSON 3: REQUIREMENTS ANALYSIS AND MODELLING
LESSON 4: PROCESS MODELLING
LESSON 5: SOFTWARE DESIGN
LESSON 6: APPLICATION AND SOFTWARE PROTOTYPING
LESSON 7: SOFTWARE QUALITY ASSURANCE
LESSON 8: SOFTWARE TESTING
LESSON 9: SOFTWARE MAINTENANCE

LESSON 1
OVERVIEW ON SOFTWARE AND HARDWARE APPLICATIONS

Introduction

Today's modern era has observed significant changes in the way people work, communicate, and perform different tasks and activities. Relative to this, applications and different kinds of software play a vital role in transforming different processes into more effective and efficient ones. This lesson covers an overview of software and hardware applications, essential for you to start this journey of learning more about application development and emerging technologies.

Learning Objectives

At the end of the lesson, you must be able to:
1. define what software and operating systems are;
2. discuss the importance of software and operating systems;
3. compare and contrast the different types of operating systems;
4. identify the different software-related problems; and
5. construct insights on the different software applications.

Discussion

Software is a set of computer instructions that, when executed, provide desired functions and performance, together with the data structures that enable programs to manipulate information adequately. The evolution of software began in the early years with batch orientation, limited distribution, and custom software. In the second era, software became multi-user and real-time, integrated databases, and moved toward product software. During the third era, distributed systems and embedded intelligence emerged, hardware costs became low, and consumer impact was prioritized.
In the fourth era, robust desktop systems were observed, and object-oriented technologies were produced, as well as expert systems, artificial neural networks, and parallel computing.

Software-Related Problems
1. Hardware advances continue to outpace our ability to build software that taps hardware's potential.
2. Our ability to build new programs cannot keep pace with the demand for new programs, nor can we build programs rapidly enough to meet business and market needs.
3. The widespread use of computers has made society more and more dependent on the reliable operation of software.
4. We struggle to build high-quality and reliable computer software.
5. Poor design and inadequate resources threaten our ability to support and enhance existing programs.

Characteristics of Software
1. Software is constructed according to the needs of the client; it is not produced in the usual sense that conforms to a "one size fits all" principle. This means that software is custom engineered.
2. Software does not wear out.

The collection of data or computer instructions that tells the computer how to perform different operations is commonly known as software. In computing disciplines, software pertains to all the information processed by computer systems, programs, and data. It includes computer programs, libraries, and related non-executable data such as documentation and digital media. System software is designed to provide a platform for other software. Examples of system software include Linux, macOS, Android, and Microsoft Windows.

Figure 1: Types of Operating Systems

Types of Operating Systems

Operating systems are vital for performing the essential tasks of a computer, including managing files, processes, and memory. An operating system behaves like a manager of the computer that oversees its various functions, and it acts as the interface between the machine and the user. The following are the types of operating systems widely used.

1. Batch Operating System
Batch operating systems do not interact with the computer directly. There is an operator that performs the job. The operator takes similar jobs with the same requirements and groups them into a batch. The operator's primary responsibility is to sort similar jobs with similar needs. Examples of batch systems include payroll systems and bank statement processing.

Figure 2: Batch Operating System

Advantages of Batch Operating System
a. The time required to complete any job can be difficult to know in advance; however, the processors of batch systems can estimate how long a job will take while it is in the queue.
b. Multiple types of users can share the batch systems.
c. There is a lesser amount of idle time in a batch OS.
d. Managing a large amount of repeatedly performed work is much easier in batch systems.

Disadvantages of Batch Operating System
a. Computer operators should be well-oriented with batch systems.
b. Debugging a batch system is a challenging task.
c. The cost of batch systems can be high.
d. If a job fails, the other jobs in the queue have to wait.

2. Time-Sharing Operating Systems
In time-sharing operating systems, each task is given enough time to execute. To execute each task smoothly, each user gets a share of CPU time, since they all use a single system. These systems are commonly known as multi-tasking systems. Tasks can be generated by a single user or by multiple users. Quantum time is the term used to describe the time each task gets to execute.
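To make the idea of a time quantum concrete, the short Java sketch below simulates round-robin time sharing: each task repeatedly receives a fixed quantum of simulated CPU time until it finishes. This is only an illustrative sketch; the class name, task names, and the quantum of 4 units are arbitrary choices, and a real operating system performs the switching with timer interrupts and context switches rather than with application-level code like this.

import java.util.ArrayDeque;
import java.util.Queue;

public class RoundRobinDemo {

    // A task with a name and the amount of simulated CPU time it still needs.
    static class Task {
        final String name;
        int remainingTime;

        Task(String name, int remainingTime) {
            this.name = name;
            this.remainingTime = remainingTime;
        }
    }

    public static void main(String[] args) {
        final int quantum = 4; // the time quantum each task receives per turn

        Queue<Task> readyQueue = new ArrayDeque<>();
        readyQueue.add(new Task("Payroll report", 10));
        readyQueue.add(new Task("Text editor", 6));
        readyQueue.add(new Task("Compiler", 8));

        // Keep switching between tasks until every task has finished.
        while (!readyQueue.isEmpty()) {
            Task current = readyQueue.poll();
            int slice = Math.min(quantum, current.remainingTime);
            current.remainingTime -= slice;

            System.out.printf("Ran %-14s for %d units, %d remaining%n",
                    current.name, slice, current.remainingTime);

            if (current.remainingTime > 0) {
                // After its quantum, the unfinished task goes to the back of the queue.
                readyQueue.add(current);
            }
        }
    }
}

Running the sketch prints one line per turn, showing every task making steady progress instead of one task monopolizing the processor.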
After the allotted time quantum elapses, the OS switches over to the next task. Examples of time-sharing operating systems include Multics and Unix.

Figure 3: Time-Sharing Operating System

Advantages of Time-Sharing Operating System
a. Each task is given an equal opportunity to execute.
b. The chances of duplication are lower than in the other types.
c. The idle time of the CPU can be reduced.

Disadvantages of Time-Sharing Operating System
a. Reliability is one of the challenges of time-sharing operating systems.
b. Security must be taken care of to maintain integrity.
c. Problems can arise in terms of data communication.

3. Distributed Operating System
Distributed operating systems are widely accepted worldwide and are growing at a great pace, for they enable autonomous, interconnected computers to communicate with each other through shared communication networks. Each autonomous, independent computer has its own memory unit and CPU. Distributed operating systems are commonly known as loosely coupled systems. Since the computers are interconnected, it is possible to share files or software with other computers through remote access. LOCUS is one example of a distributed operating system.

Figure 4: Distributed Operating System

Advantages of Distributed Operating System
a. Since each system has its own memory and CPU, the failure of one does not affect the network communication of the others.
b. The data exchange speed of electronic mail increases.
c. Computation is highly fast and durable because of the shared resources.
d. The load on host computers is reduced.
e. Scalability is possible since systems can be easily added to the network.
f. Delay in data processing is reduced.

Disadvantages of Distributed Operating System
a. Since a central network is responsible for the entire communication procedure, connections between systems can be stopped when the main network fails.
b. The languages used to establish distributed systems are not yet very well defined.
c. Distributed operating systems are expensive because of their complexity.

4. Network Operating System
Network operating systems run on a server and provide the capacity to manage data, users, groups, security, applications, and other networking functions. They enable the sharing of files, printers, security, applications, and other essential networking functions over a small private network. This type of operating system is also known as a tightly coupled system, which means that the network members know the configuration of, and the other users connected within, the same network. Examples of network operating systems include Microsoft Windows Server 2003, Microsoft Windows Server 2008, UNIX, Linux, Mac OS X, Novell NetWare, and BSD.

Figure 5: Network Operating System

Advantages of Network Operating System
a. Centralized servers are highly stable.
b. The server handles security concerns.
c. New technologies and hardware upgrades can be quickly integrated into the system.
d. Servers can be accessed remotely from different locations and types of systems.

Disadvantages of Network Operating System
a. The cost of servers is high.
b. Since there is a server, users have to depend on it for most operations.
c. The system has to be maintained and updated regularly.

5. Real-Time Operating Systems
When time is part of the requirements, real-time operating systems are very efficient. Applications such as air traffic control systems, robotics, and missile systems are some of the few examples where real-time operating systems are used.
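To illustrate what it means for time to be part of the requirements, the following minimal Java sketch measures how long a piece of work takes and compares it against a deadline. This is only a rough analogy for a soft time constraint: the 50-millisecond deadline, the method names, and the simulated workload are invented for the example, and an ordinary JVM on a general-purpose operating system cannot actually guarantee that the deadline is met.

public class DeadlineCheckDemo {

    // Deadline for one unit of work, in nanoseconds (50 ms, an arbitrary example value).
    private static final long DEADLINE_NANOS = 50_000_000L;

    public static void main(String[] args) throws InterruptedException {
        long start = System.nanoTime();

        doSensorProcessing(); // placeholder for the time-critical work

        long elapsed = System.nanoTime() - start;
        if (elapsed > DEADLINE_NANOS) {
            // In a soft real-time setting, a missed deadline degrades quality but is tolerated.
            System.out.printf("Deadline missed by %.1f ms%n",
                    (elapsed - DEADLINE_NANOS) / 1_000_000.0);
        } else {
            System.out.printf("Finished with %.1f ms to spare%n",
                    (DEADLINE_NANOS - elapsed) / 1_000_000.0);
        }
    }

    // Simulated workload; a real system would read and process sensor data here.
    private static void doSensorProcessing() throws InterruptedException {
        Thread.sleep(30); // pretend the processing takes about 30 ms
    }
}

The stricter guarantees of hard real-time systems cannot be obtained by checks like this in application code; they require support from the operating system and hardware, which is why such systems are treated as a separate category below.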
There are two categories of real-time operating systems: hard real-time systems and soft real-time systems. Hard real-time systems are those that require an application to be very strict with time requirements, meaning that possible delays in response are not acceptable. This type of system is built to save lives, as in automatic parachutes or airbags. Also, virtual memory is almost never found in this type of system. On the other hand, soft real-time systems are those that have less strict time constraints.

Figure 6: Real-Time Software

Advantages of Real-Time Software
a. Maximum consumption
b. Task shifting
c. Focus on application
d. Real-time operating systems in embedded systems
e. Error free
f. Memory allocation

Disadvantages of Real-Time Software
a. Limited tasks
b. Use of heavy system resources
c. Complex algorithms
d. Device drivers and interrupt signals
e. Thread priority

ACTIVITY 1.1 (Suggested Mode of Submission of Output: Email/Learning Management System)
1. Construct a 500-word essay about your understanding of the different types of operating systems. Your answer will be graded based on the following: Content, 30 points; Organization, 10 points; and Clarity, 10 points.

Software Applications

Software applications include system software, real-time software, business software, engineering and scientific software, embedded software, personal computer software, and artificial intelligence software. System software is a collection of programs written to service other programs; examples include compilers, editors, file management utilities, operating systems, and drivers. Real-time software is a program that monitors, analyzes, and controls real-world events as they occur. Business software is an application that is used in the business environment and supports business operations. Engineering and scientific software are applications in fields such as astronomy, volcanology, and the space shuttle. Meanwhile, embedded software resides in read-only memory and controls products and systems for the consumer and industrial markets; examples include keypad control for a microwave oven and braking systems. Personal computer software is an application that supports individual needs such as word processing, spreadsheets, multimedia applications, and entertainment. Lastly, artificial intelligence software uses non-numerical algorithms to solve complex problems that are not amenable to computation or straightforward analysis; examples include pattern recognition and game playing.

Cutting-Edge Hardware and Software Technologies
1. Artificial and Augmented Intelligence
2. Real-Time Supply Chain Visibility
3. Data Standards and Advanced Analytics
4. Warehouse Robotics
5. Driverless Deliveries
6. Wearable Devices
7. More Space for Technology

ACTIVITY 1.2 (Suggested Mode of Submission of Output: Uploading of Video Recording in the Learning Group/Learning Management System)
1. In a group of five (5) members, search the internet for one example of each of the cutting-edge hardware and software technologies listed above. Your team must present it in a video recording, allowing your classmates to know more about these technologies. Provide a brief description and explanation for each cutting-edge technology. The presentation should not be more than 10 minutes.

ASSESSMENT – QUIZ NO.
1 (Suggested Mode of Delivery of Assessment: Google Form/Quizziz/Learning Management System) 14 LESSON 2 INTRODUCTION TO SOFTWARE ENGINEERING AND APPLICATION DEVELOPMENT Introduction Applications Development and Software Engineering are two growing trends in the field of Information Technology. Today, different companies and organizations seek to employ application developers and software engineers to cater the needs of organization and the demands of the community as well. In this lesson, you will be able to understand the fundamental concepts of application development and software engineering. These are essential for you to start in the journey of developing your own applications which can then be used to transform communities through Information Technology. Learning Objectives At the end of the lesson, you must be able to: 1. explain what is software engineering and its difference to application development; 2. identify the layers of software engineering and its similarities to application development; 3. recognize the different software engineering and application development paradigms; and 4. construct an understanding and insights about the use of different software and application development paradigms and models. Discussion Software engineering is the establishment and use of sound engineering principles to obtain economic software that is reliable and works efficiently on real machines. Software engineering layers include tools, methods, process, and focus on quality. 15 1. Process. It is the glue that holds the technology layers together and enables rational and timely development of computer software. 2. Methods. These provide technical “how to’s” for building software. 3. Tools. These provide automated or semi-automated support for the process and the methods. Software Engineering and Application Development Paradigms 1. Prototyping Model. It is a method in which a prototype is built, tested and then reworked as necessary until an acceptable outcome is achieved, from which the complete system or product can be developed.. Figure 7: Prototyping Model Prototyping can be problematic for the following reasons: a. In prototyping, the customer thought that the working version of the software satisfies the required expected quality. Unaware that it is a “prototype”, the customer will not consider some problems. Thus, the quality is compromised. b. Oftentimes, developers quickly deploy the prototype resulting to more software or application-related problems. 2. Spiral Model. The spiral model is similar to incremental development for a system, with more emphasis placed on risk analysis. 16 Figure 8: Spiral Model Six task regions: a. Customer communication. At this region of the spiral model, it important that the application developer or software engineer establish an effective communication with the customer to gather all the essential requirements for the project. One of the major causes of application or software failure is the lack of communication between stakeholders. Project leaders and other key technology- drivers of a project must seek to it that proper and effective communication medium and relationship is well established. b. Planning. Planning is vital to provide important insights about the project at hand. At this region of the spiral model, key players must be able to identify the needed resources; define its relevance to the project; draw timeline; and gather all the project related data which contributes to the overall success of the undertaking. c. 
Risk analysis – This region of the spiral model highlights the core essence of the model. Tasks essential in this section require assessment of the 17 Figure 9: Spiral Model technical tools needed, and the risk management capacities of the players and the project at hand. Successful analysis of the possible risks reduces greater challenges and failures as the project progress. d. Engineering. In this region of the model, application and software developers are required to develop representations of the application. It is relevant to perform such tasks to have an idea of how the system would work and how it captures all the needed requirements from the customers. Also, engineering phase allows further analysis to perform immediate changes and adjustments. e. Construction and release. After the successful completion of the project, this phase of the model requires construction of user support manual and documentation, testing and initial deployment, as well as providing user support and training. f. Customer evaluation. When the actual project is deployed, it is relevant to gather the customer's feedback and evaluation of the application to continuously refine the project and to provide additional necessary changes and adjustments. In this phase, the plan is reviewed and counterchecked if it conforms to what the customer needs and requires. 3. Fourth Generation Technique. The source code is automatically generated according to the specifications made by the developers. Fourth generation technique utilizes mechanisms that is close to natural language or specific notation to achieve significant functions. It includes some or all of the following tools: nonprocedural languages for database query, report generation, data manipulation, screen interaction and definition and code generation. Figure 10: 4GL Technique 18 Summary of the current state of 4GT approaches: a. Over the years, the use of 4GT has made possible the development of different applications in different areas. It has been a practical approach in conducting software and application projects. b. Immense amount of time, effort, and resources has been reduced based from the data collected from different organizations, entities, or groups that have employed and utilized 4GT approach. c. To achieve a better quality of output, for large application projects, 4GT focused more on the analysis of essential requirements to save substantial time, effort, and resources while trying to produce a more quality output. 19 ACTIVITY 2.1 (Suggested Mode of Submission of Output: Google Drive/Learning Management System) 1. Form a team of five (5) members and look for studies that have applied prototyping, spiral, incremental, agile and waterfall models in international journals. Your team should be able to make an analysis for each article on how they have applied each model. The analysis must not be less than 300-words and not more than 500-words. ASSESSMENT - QUIZ NO. 2 (Suggested Mode of Delivery of Assessment: Google Form/Quizziz/Learning Management System) 20 LESSON 3 REQUIREMENTS ANALYSIS AND MODELLING Introduction Requirements play a vital role in the overall success and complete delivery of software applications. When requirements are not properly identified, analyzed, and collected, failure of delivery might happen. 
In this lesson, you will learn more about requirements analysis, the different steps involved in conducting the process, and the ways on how to convert these into meaningful models that will be significant and helpful in developing applications. LEARNING OBJECTIVES At the end of the lesson, you must be able to: 1. explain the importance of requirements and the requirements analysis process; 2. differentiate feasibility study, requirements gathering, software requirement specifications, and software requirements validation; 3. demonstrate understanding on the different types of requirements gathering techniques; and 4. compare and contrast functional and non-functional requirements Discussion Requirement Analysis is a software engineering and application development task that bridges the gap between system-level software allocation and software design. It enables the system engineer or application developer to specify software function and performance, indicate software’s interface with other system elements, and establish constraints that software must meet. Requirements analysis allows developers to allocate software in a more refined manner and to build relevant data models including functional and behavioral domains which will be treated by software. Different designs may be produced using requirements analysis which include data, architectural, interface, and procedural design. 21 Figure 11: Requirements Analysis as Bridge between Systems Engineering and Design Five Areas of Efforts 1. Problem recognition 2. Evaluation and synthesis 3. Modeling 4. Specification 5. Review Figure 12: Configuration Management Diagram 22 Configuration Management is a set of procedures that track the requirements that define the system, the design modules that are generated from the requirements, the program code that implements the design, the tests that verify the functionality of the system, and the documents that describe the system. Software Requirements Software Requirements include the description of features and functionalities of the target system. Requirements are essential for it convey the expectations of the users from the software product or application. It can be obvious or hidden, known or unknown, expected or unexpected from the client’s point of view. It covers the process of requirements engineering. Requirements engineering is the process of gathering the requirements from the client, analyzing it, and documenting them. Steps in requirements engineering includes feasibility study, requirements gathering, software requirement specifications, and software requirements validation. a. Feasibility Study. It is the process of studying in details whether the desired system and its functionality are feasible to be developed. Typically, feasibility studies focus on the goals of the organization. Developers must take time to analyze every detail to see if the desired system is possible to be implemented and will contribute to the organization success. In addition, identifying cost constraints is essential in conducting feasibility studies to see if the desired system is practical and achievable. The technical aspects are also analyzed in terms of usability, maintainability, productivity, and integration ability. The output of this process is a feasibility study report which contains comprehensive and adequate comments and recommendations for the management to decide whether to pursue or not a software project. 
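Before moving on to the activity, the requirements engineering steps just described can be pictured as a simple progression that each requirement item passes through. The following Java sketch is illustrative only: the enum values mirror the four steps named above (feasibility study, requirements gathering, specification, validation), while the class name, requirement IDs, and descriptions are invented for the example.

import java.util.ArrayList;
import java.util.List;

public class RequirementsEngineeringDemo {

    // The four steps named in the text, in the order they are performed.
    enum Stage { FEASIBILITY_STUDY, REQUIREMENTS_GATHERING, SPECIFICATION, VALIDATION, ACCEPTED }

    static class RequirementItem {
        final String id;
        final String description;
        Stage stage = Stage.FEASIBILITY_STUDY;

        RequirementItem(String id, String description) {
            this.id = id;
            this.description = description;
        }

        // Advance to the next step only after the current one is finished.
        void advance() {
            Stage[] stages = Stage.values();
            if (stage.ordinal() < stages.length - 1) {
                stage = stages[stage.ordinal() + 1];
            }
        }
    }

    public static void main(String[] args) {
        List<RequirementItem> backlog = new ArrayList<>();
        backlog.add(new RequirementItem("R-01", "Generate monthly payroll report"));
        backlog.add(new RequirementItem("R-02", "Allow remote access for managers"));

        // Walk every requirement through all four steps.
        for (RequirementItem item : backlog) {
            while (item.stage != Stage.ACCEPTED) {
                System.out.println(item.id + " is in stage " + item.stage);
                item.advance();
            }
            System.out.println(item.id + " accepted: " + item.description);
        }
    }
}

In practice a requirement does not advance automatically; each step produces artifacts, such as the feasibility study report or the SRS, that are reviewed before work proceeds.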
ACTIVITY 3.1 (Suggested Mode of Submission of Output: Video Presentation via Google Form/Learning Group/Learning Management System) 1. Think of a possible application development project for a community in the new normal. Make a three (3) to five (5)- minute video explaining about the concept and other features. Upload the video presentation in the Facebook Community Learning Group. b. Requirements Gathering – In requirements gathering, analysts and software developers communicate with the client and end-users to know their ideas on what the application or system should provide including the intended features they desired. 23 c. Software Requirements Specifications (SRS) – The SRS is a document created by system analysts after the requirements gathering is conducted and essential requirements are collected from the different types of stakeholders. SRS defines how the intended application or system will interact with hardware, external interfaces, speed of operation, response time of the system, portability across different platforms, maintainability, and speed of recovery after crashing, security, quality, limitations, and the likes. Since the requirements collected are written in the natural language, it is the responsibility of the analyst to translate them in technical languages to be understood by the members of the development team. The following are the vital features of the SRS document. i. User requirements are expressed in natural language ii. Technical requirements are expressed in structured language, which is used inside the organization iii. Design description should be written in pseudo code iv. Format of Forms and GUI screen prints v. Conditional and mathematical notations for DFDs, etc. d. Software Requirements Validation – When software requirement specification document has been developed, the contents are validated. Users might ask for any illegal, impractical solutions, or experts may interpret the requirements inaccurately. If not nipped in the bud, this may result in increase in cost. The following conditions may be considered when checking requirements. i. If they can be practically implemented ii. If they are valid and as per functionality and domain of software iii. If there are any ambiguities iv. If they are complete v. If they can be demonstrated Software Requirements Elicitation Process Requirements elicitation process includes requirements gathering, requirement organization, negotiation and discussion, and requirements specification. Figure 13: The Software Requirements Elicitation Process 24 a. Requirements Gathering. The developers discuss with the client and end-users and know their expectations from the software b. Organizing Requirements. The developers prioritize and arrange the requirements in order of importance, urgency, and convenience. c. Negotiation and Discussion. If the requirements are ambiguous or there are some conflicts in requirements of various stakeholders, it is then negotiated and discussed with the stakeholders. Requirements may be then prioritized and reasonably compromised. The requirements come from various stakeholders. To remove ambiguity and conflicts, they are discussed for clarity and correctness. Unrealistic requirements are compromised reasonably. d. Documentation. All formal and informal, functional and non-functional requirements are documented and made available for next phase processing. 
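The organizing step above can be pictured as a simple sort of the elicited requirements. The Java sketch below is a minimal illustration: the field names, the 1-to-3 importance and urgency scales, and the sample requirements are assumptions made for the example rather than anything prescribed by the elicitation process itself.

import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;

public class OrganizeRequirementsDemo {

    // One elicited requirement; 1 means highest importance or urgency.
    static class ElicitedRequirement {
        final String id;
        final String statement;
        final int importance;
        final int urgency;

        ElicitedRequirement(String id, String statement, int importance, int urgency) {
            this.id = id;
            this.statement = statement;
            this.importance = importance;
            this.urgency = urgency;
        }
    }

    public static void main(String[] args) {
        List<ElicitedRequirement> elicited = new ArrayList<>();
        elicited.add(new ElicitedRequirement("R-07", "Search invoices by customer name", 2, 3));
        elicited.add(new ElicitedRequirement("R-03", "Encrypt stored passwords", 1, 1));
        elicited.add(new ElicitedRequirement("R-12", "Export reports to spreadsheet", 3, 2));

        // Order by importance first, then by urgency, mirroring the organizing step.
        elicited.sort(Comparator
                .<ElicitedRequirement>comparingInt(r -> r.importance)
                .thenComparingInt(r -> r.urgency));

        for (ElicitedRequirement r : elicited) {
            System.out.printf("%s (importance %d, urgency %d): %s%n",
                    r.id, r.importance, r.urgency, r.statement);
        }
    }
}

The ordered list then feeds the negotiation and discussion step, where conflicting or unrealistic items near the top of the list receive attention first.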
Requirements Elicitation Techniques This process includes activities to find out the requirements for an intended software system by communicating with client, system users, and others who have a stake in the software system development. The following are the commonly used techniques in requirements elicitation process. a. Interviews. Interviews are strong medium to collect requirements. Organizations may conduct several types of interviews such as structured (closed) interviews where the information to be gathered is already identified. A pattern is followed in conducting structured interview. On the other hand, unstructured or open interviews are more flexible, less biased, and information to be gathered is not yet identified before the conduct of interview. Oral interviews and written interviews can also be done. In terms of dynamics, one-to-one interview or group interviews may be done. b. Survey. Organizations may conduct surveys among various stakeholders by querying about their expectations and requirements from the upcoming system. c. Questionnaire. It is a document with pre-defined set of objective questions and respective options is handed over to all stakeholders to answer, which are collected and compiled. A disadvantage of this technique is, if an option for some issue is not mentioned in the questionnaire, the issue might be left unattended. d. Task Analysis. Members of the development team may analyze the operation for which the new system is required. If the client already has some software to perform certain operation, it is studied and requirements of proposed system are collected. e. Domain Analysis. Every software falls into some domain category. The expert people in the domain can be a great help to analyze general and specific requirements 25 f. Brainstorming. It is an informal debate held among several stakeholders and all their inputs are recorded for further requirements analysis g. Prototyping. It is the process of building user interface without adding functionality for user to interpret the features of intended software product. It helps giving better idea of requirements. h. Observation. Development team visits the client’s organization to observe the actual working of the existing installed system. They also observe the workflow and how execution problems are dealt. In this process, the team draws conclusions which aid to form requirements from the software. Software Requirements Characteristics 1. Clear 2. Correct 3. Consistent 4. Coherent 5. Comprehensive 6. Modifiable 7. Verifiable 8. Prioritized 9. Unambiguous 10. Traceable 11. Credible source Categories of Software Requirements 1. Functional Requirements. These requirements are related to the functional aspect of the software. They define functions and functionality within and from the software system. Examples: a. Search option given to user to search from various invoices b. User should be able to main any report to management c. Users can be divided into groups and groups can be given separate rights d. Should comply business rules and administrative functions e. Software is developed keeping downward compatibility intact 2. Non-Functional Requirements. Non-Functional Requirements are not related to the functional aspect of the system. They are the implicit or expected characteristics of software, which users make assumption of. Examples: a. Security b. Logging 26 c. Storage d. Configuration e. Performance f. Cost g. Interoperability h. Flexibility i. Disaster Recovery j. 
Accessibility Requirements are categorized logically as: 1. Must Have 2. Should Have 3. Could Have 4. Wish List User Interface Requirements User interface (UI) are important aspects of systems. A software is widely accepted if it is easy to operate, quick in response, effectively handle operational errors, and provide simple yet consistent UI. Since UI is the only way users perceive the system, it is important that the system is equipped with attractive, clear, consistent, and responsive user interface. Otherwise, the functionalities cannot be used effectively. A system is said to be good if it provides the users means to effectively use it. The following are the UI requirements: 1. Content presentation 2. Easy Navigation 3. Simple interface 4. Responsive 5. Consistent UI elements 6. Feedback mechanisms 7. Default settings 8. Purposeful layout 9. Strategical use of color and texture 10. Provide help information 11. User centric approach 12. Group based view settings ASSESSMENT - QUIZ NO. 3 (Suggested Mode of Delivery of Assessment: Google Form/Quizziz/Learning Management System) 27 LESSON 4 PROCESS MODELLING INTRODUCTION In the previous lesson, you learned about the relevance of requirements and the ways on how to effectively collect and analyze them. In this lesson, you will learn more about process modelling. Process modeling in application development helps the team in deepening their understanding about the project at hand. It is important that you will be able to construct your own process model like data flow diagram to convert the requirements into meaningful model representation of the project. LEARNING OBJECTIVES At the end of the lesson, you must be able to: 1. determine the importance of modelling in application development; 2. recognize what are process models; 3. identify the different essential components of data flow diagram; and 4. create a data flow diagram based on given specifications and requirements. DISCUSSION Modelling incorporates text and graphics and is used to develop information systems. Examples include data models, dynamic models, object models, process models, solution-based models, and workflow models. Modelling provides a common set of graphic symbols, provides examples for imitation or comparison, provides a blueprint about the solution to be developed, use to clarify concepts, identifies areas of interest, decreases ambiguity or redundancy, decreases details or complexity, manages complexity, and minimizes development cost. Data models, process models, and dynamic models' goal is to explain a portion of the customer’s problem or solution domain. When models are created, the focus is on collecting the information that has the largest impact to the solution. Process Models Process models was formalized in the late 1970s. Process models specify the essential processing requirements, that is, state what must be accomplish without specifying the technology to accomplish it. It identifies the information needed for each business event. Process models manage complexity by limiting the amount of detail at each level. It enforces the business rules defined in the data model. Also, process models minimize development cost by clearly defining customer processes and provide necessary input to determine development cost. 28 Process models define the functional behavior of a customer’s business. Components include customer involvement wherein customer provide the business knowledge, data, and data associations. 
Then, analyst capture the information using a specific development approach to quickly convey important aspects of the customer’s business to help understand the problems. Data Flow Diagram Data flow diagram depicts a set of processes and each process represents a particular task or set of tasks that needs to be accomplished. Each component in the DFD includes textual documentation that further explains its purpose. DFD is use to limit and manage the complexity of the customer’s business. Purpose of DFD 1. Defines the scope of the customer’s business processes. 2. Divides each process into less complex activities. 3. Facilitates a more detailed analysis of the processing requirement. 4. Specifies how the data is manipulated within the customer’s business functions. 5. Serves as an input for preparing the system’s behaviour. Components of DFD a. Process – also called as process bubble. It is illustrated as a circle on a data flow diagram. It manipulates information and remains dormant until it receives 29 data from a data flow or is triggered by a control flow. Process identifiers are used on DFD to depict the relationship between the parent process and the child process. It may be either complex or simple. Complex process performs more than one function while simple process performs only one function. b. Agent – illustrated as a square or rectangle and acts as a source, a sink or both. It can be a person, group, organization, or other systems. It can be classified as external agent and internal agent. Agent acting as a source provides information to the application while agent acting as a sink receives information from the application. Internal agents can only act as a source and send a control flow c. Data flow – Illustrated as a directed continuous arc, comprises a set of attributes. It describes the movement of information from one part of the application to another. Data flow enters data stores, processes, and agents acting as sources. These are used to carry data from an origin object to a destination object. Also, data flow has descriptive label, usually a noun that identifies the type and purpose of the information carried. d. Control flow – Control flow is illustrated as a directed dashed arc which triggers a process by notifying it that a certain condition has been met. It contains no data. It is given a descriptive name that helps identify the purpose or the business policy governing the trigger. Also, control flow triggers a process to take action. e. Junction – Junctions are illustrated as a solid dot, representing a process that provides data to another process. It is depicted as a solid dot on one end of the flow and must be used whenever both of the following conditions exists 1. A process provides input (data or control) to another process 2. The process that provides input to another process is a parent process, or the process that receives input from a process in a parent process f. Data store – It is illustrated as two parallel lines that represents a group of like information at rest. Data store represents a logical table from the data model. The table can represent either an entity or a regular relationship that did not create an associative entity. Data store cannot interact with a parent process. ACTIVITY 4.1 (Suggested Mode of Delivery of Output: Google Form/Learning Group/Learning Management System) 1. 
Based on the video presentation of yours containing the possible application development project, construct the context diagram and the level 1 diagram of your proposed project. 30 Process Specification Process specification provides the details of how processes perform their functions and describe specific steps performed by a process to achieve its expected output. It is a description of specific steps performed by a process. It is a prescription for solving a particular problem or achieving an expected response without regard to any arbitrary sequencing requirements or to the details of how the problem or response is solved. Also, it is written using a non-procedural language built on a set of structured keywords. Purposes of Process Specification 1. Verify understanding of a process with the customer. 2. Describe the data input, the junction performed and the data output for each simple process. 3. Provide a basis for estimating the amount of development or maintenance work in the project definition and project plan. 4. Serve as an input for preparing module specification. 5. Validate any previously developed data flow diagrams or processes. 6. Uncover additional functions that must be incorporated into the data flow diagrams. 7. Serve as a checklist to record the tables accessed by the process. 8. Validate the data model in terms of logical table structure. 9. Define what must be done to achieve the expected process response. Structured Nonprocedural Language Process specifications are termed nonprocedural, meaning the solution should be specified in terms of structures that are relevant to the problem, rather than the operations or control structures that are specific to a particular physical target environment. The structure of a process specifications may change slightly from one method to another because of the vocabulary. This vocabulary enables other engineers to easily and quickly understand the customer’s business. 31 Data Conservation Every process should receive only the information that the process needs to perform its function. If data passes through a process without contributing to the operation of the process, the data should be removed from the data flows. To ensure that the process adheres to data conservation, you must understand the purpose of the process. It is important to make the data flow diagram more understandable by limiting the data carried on its data flows to only the data needed by each process. Also, provide the necessary information for the analyst to create the process specification. Context Diagram A context diagram is a data flow diagram that depicts the highest level of a process model. It has only one process, and one or more external agents that either provide information to or receive information from the process as indicated by the data flows. Context Diagram is at the highest level of abstraction within the process model. Context Diagram provides only a small understanding of the overall behavior of the application’s functions. Context diagram defines the boundary of focus, identify the information needed to provide the information requested, and identify where the information is coming from and where it is going to outside the boundary. Rules in Context Diagram 1. It must have only one process illustrated to represent the logical grouping of the business processes being analyzed. 32 2. It must have at least one external agent, acting as a source, a sink or both. Also, it may have more agents acting as a source, a sink or both. 3. 
It must have a data flow connecting each of the agents to the process. The directions of the data flow indicate the role of the agent has with the process (either as a source or as a sink). High Level Process Diagram (HLP) HLP illustrates a process one level below the context diagram. It is a child diagram of the context diagram. It corresponds directly to one event on the event response list. HLP illustrates somewhat more detail than the context diagram, but for only one event. The high-level process diagram breaks the context diagram into manageable pieces. It focuses on a particular event of the event-response list. Rule in High Level Process Diagram 1. It must have one and only one process. 2. It must have one agent (internal and external), acting as source. Also, it may have external agents acting as sinks. 3. It must have a flow (data or control) connecting each of the agents to the process. The direction of the flow indicates the role the agents has with the process (either as a source or as sink). Composite Event A composite event is composed of two or more interrelated events. It identifies when an additional process and input attributes are needed to direct which processes to trigger or one of the incoming data flows is not expected to contain any data on the initial triggering of the high-level process. When an additional process and input attributes are needed to direct which tasks (process) to trigger, this attribute is a design technique. Intermediate Level Process It represents a child process that is not simple process and that consists of two or more child processes. Intermediate level process manages the detail of a data flow diagram by reducing the number of processes illustrated on a level, increase the readability of its parent process diagram, and decrease the complexity of its parent process’ diagram. In constructing intermediate level process, it must be a child of a parent process and it must have at least two child processes. ASSESSMENT - QUIZ NO. 4 (Suggested Mode of Delivery of Assessment: Google Form/Quizziz/Learning Management System) 33 LESSON 5 SOFTWARE DESIGN INTRODUCTION In this chapter, you will learn more about software design and the different techniques and procedures use to perform the activities. Software design is more than what the eyes can see, but also how the users can use and make it more relevant. It is expected that in this lesson, you will be able to apply the learned concepts and techniques to successfully construct the design of your own application. LEARNING OBJECTIVES At the end of the lesson, you must be able to: 1. state the importance of software design in application development; 2. identify the different design steps; and 3. recognize the different design concepts and explain their importance. DISCUSSION Software design sits at the technical kernel of the software engineering process and is applied regardless of the software process model that is used. The first of three technical activities – design, code generation, testing – is required to build and verify the software. It includes an iterative process through which requirements are translated into a “blueprint” for constructing the software. Figure 14: Application Design 34 ACTIVITY 5.1 (Suggested Model of Submission of Output: Email/Learning Management System) 1. Thinking about your proposed application as the reference: a. Construct the proposed entity-relationship diagram b. Construct the proposed data flow diagram (Context Diagram and Level 1) c. 
Construct the state-transition diagram Design Steps a. Data Design transforms the information domain model created during analysis into the data structures that will be required to implement the software. b. Architectural Design defines the relationship among major structural elements of the program. c. Interface Design describes how the software communicates within itself, to systems that interoperate with it, and with humans who use it. d. Procedural Design transforms structural elements of the program architecture into procedural description of software components. Design Concepts 1. Abstraction 2. Refinement 3. Modularity 4. Software architecture 5. Control hierarchy 6. Structural partitioning 7. Data structure 8. Software procedure 9. Information hiding Levels of Abstraction a. Procedural Abstraction is a named sequence of instructions that has a specific and limited function. b. Data Abstraction is a named collection of data that describes a data object. c. Control Abstraction implies a program control mechanism without specifying internal details. Refinement Stepwise refinement is a top-down design strategy originally proposed by Niklaus Wirth. It is a process of elaboration. It causes the designer to elaborate on the original statement, providing more and more detail as each successive refinement occurs. Refinement helps the designer to reveal low-level details as design progress. It helps the designer to reveal low-level details. 35 Modularity It includes the division of software into separately named and addressable components (modules) integrated to satisfy problem requirements. Modularity is the single attribute of software that allows a program to be intellectually manageable. Criteria to define an effective modular system 1. Modular decomposability 2. Modular composability 3. Modular understandability 4. Modular continuity 5. Modular protection Software Architecture Alludes to “the overall structure of the software and the ways in which that structure provides conceptual integrity of the system”. The hierarchical structure of program components (modules), the manner in which these components interact, and the structure of the data that are used by the components. Different models in architectural design 1. Structural model 2. Framework model 3. Dynamic model 4. Process model 5. Functional model Control Hierarchy It is also called program structure. Control hierarchy represents the organization (often hierarchical) of program components (modules) and implies a hierarchy of control. It does not represent procedural aspects of software such as sequence of processes, occurrence/order of decisions or repetition of operations. It also represents two subtly different characteristics of the software architecture: visibility and connectivity. 36 Structural Partitioning Horizontal partitioning defines separate branches of the modular hierarchy for each major program functions while vertical partitioning, often called "factoring", suggests that control (decision making) and work should be distributed top-down in the program architectural. Benefits of horizontal partitioning includes: results in software that is easier to test, leads to software that is easier to maintain, results in propagation of fewer side effects, and results in software that is easier to extend. 37 Data Structure Data structure is a representation of the logical relationship among individual elements of data. It is as important as program structure to the representation of software architecture. 
Data structures dictates the organization, methods of access, degree of associativity, and processing alternatives for information. Its organization and complexity are limited only be the ingenuity of the designer. A scalar item, the simplest of all data structures, represents a single element of information that may be addressed by an identifier. Vectors are the most common of all data structures and open the door to variable indexing of information. When the sequential vector is extended to two, three and ultimately, an arbitrary number of dimensions, an n-dimensional space (also called "array") is created. A linked list is a data structure that organizes non-contiguous scalar items, vectors or spaces in manner (called nodes) that enables them to be processed as a list. For example, a hierarchical data structure is implemented using multi-linked lists that contain scalar items, vectors and possibly n-dimensional spaces. Like program 38 structures, data structures can be represented at different levels of abstraction. For example, a stack is a conceptual model of a data structure that can be implemented as a vector or a linked list. Software Procedure Software procedure focuses on the processing details of each module individually. It must provide a precise specification of processing including sequence o of events, exact decision points, repetitive operations, and even data organization/structure. Information Hiding In information hiding, modules should be specified and designed so that information (procedure and data) contained within a module is inaccessible to other modules that have no need for such information. It implies that effective modularity can be achieved by defining a set of independent modules that communicate with one another only, and that information is necessary to achieve software function. It defines and enforces access constraints to both procedural detail within a module and any local data structure used by the module. Cohesion Cohesion is a natural extension of the information hiding concept. A cohesive module performs a single task within a software procedure, requiring little interaction with procedures being performed in other parts of a program. It may be represented as a “spectrum”. A module that performs a set of tasks that relate to each other loosely, if at all, is termed coincidentally cohesive. A module that performs tasks that are related logically is logically cohesive. When a module contains tasks that are related by the fact that all must be executed within the same span of time, the module exhibits temporal cohesion. Figure 15: Cohesion 39 Coupling Coupling is a measure of interconnection among modules in a program structure. It may also be represented on a spectrum. Coupling depends on the interface complexity between modules, the points at which entry or reference is made to a module, and what data pass across the interface. At moderate levels, coupling is characterized by passage of control between modules. Figure 16: Coupling Data Design Data design is the first of four design activities that are conducted during software engineering and application development. The impact of data structure on program structure and procedural complexity causes it to have a profound influence on software quality. Its primary activity is to select logical representations of data objects (data structures) identified during the requirements definition and specification phase. 
The important related activity is to identify those program modules that must operate directly upon the logical data structure. Principles for data specification a. The systematic analysis principles applied to function and behavior should also be applied to data. b. All data structures and the operations to be performed on each should be identified. c. A data dictionary should be established and used to define both data and program design. d. Low level data design decisions should be deferred until late in the design process. e. The representation of data structure should be known only to those modules that must make direct use of data contained within the structure. f. A library of useful data structures and the operations that may be applied to them should be developed. g. A software design and programming language should support the specification and realization of abstract data types. 40 Architectural Design Architectural Design is the initial design process of identifying these subsystems and establishing a framework for subsystem control and communication. Its objective is to develop a modular program structure and represent the control relationships between modules. It melds program structure and data structure, defining interfaces that enables data to flow throughout the program. Architectural design maybe based on a particular architectural model or style. Five step process to accomplish the transition from information flow to structure: 1. The type of information flow is established. 2. The boundaries are indicated. 3. The DFD is mapped into program structure. 4. Control hierarchy is defined by factoring. 5. The resultant structure is refined using design measures and heuristics. Interface Design It focuses on three areas of concern: the design of interfaces between software modules; the design of interfaces between the software and other nonhuman producers and consumer’s information; and the design of the interface between a human (the user) and the computer. The design of internal program interfaces, sometimes called intermodular interface design, is driven by the data that must flow between modules and the characteristics of the programming language in which the software is to be implemented. Procedural Design Occurs after data, architectural and interface designs have been established. It must specify procedural detail unambiguously. Its foundation was formed in the early 1960s and was solidified with the work of Edsgar Dijktra and his colleagues. They proposed the use of a set of existing logical constructs from which any program could be formed. Constructs of procedural design are the following: sequence – implements processing steps that are essential in the specification of any algorithm; condition – provides the facility for selected processing based on some logical occurrence; and repetition – provides for looping. ASSESSMENT - QUIZ NO. 5 (Suggested Mode of Delivery of Assessment: Google Form/Quizziz/Learning Management System) 41 LESSON 6 APPLICATION AND SOFTWARE PROTOTYPING INTRODUCTION In this lesson, you will be able to learn about prototyping and its relevance in software and application development. Prototyping has different forms and types and these will be discussed in this lesson. After this lesson, you will be making your own prototype based on the models and requirements you have from the previous lesson of this learning module. LEARNING OBJECTIVES At the end of the lesson, you must be able to: 1. 
identify what is a prototype, the benefits of prototyping, and the advantages of prototyping; 2. compare and contrast the different types of prototyping; 3. construct prototype of the proposed application project. DISCUSSION A prototype is an early sample, model, or release of a product built to test a concept or process. It is a term used in a variety of contexts, including semantics, design, electronics, and application development and software programming. A prototype is generally used to evaluate a new design to enhance precision by system analysts and users. A prototype is a draft version of a product that allows you to explore your ideas and show the intention behind a feature or the overall design concept to users before investing time and money into development. A prototype is a simple experimental model of a proposed solution used to test or validate ideas, design assumptions, and other aspects. It is a partial implementation of a product expressed either logically or physically with all external interfaces presented. Also, a prototype typically simulates only a few aspects of, and may be completely different from, the final product. SOFTWARE AND APPLICATION PROTOTYPE A software or application prototype is an executable model of the proposed software system or application. It must be producible with significantly less effort than the planned product. Software prototyping is the activity of creating prototypes of software applications i.e. incomplete versions of the software program being developed. It is an activity that can occur in software and application development and is comparable to prototyping as known from other fields, such as mechanical engineering 42 or manufacturing. The degree of completeness and the techniques used in prototyping have been in development and debate since its proposal in the early 1970s. Benefits of Prototyping 1. The software designer and implementer can get valuable feedback from the users early in the project. 2. The client and the contractor can compare if the software made matches the software specification according to which the software program is built. 3. It also allows the software or application engineer some insight into the accuracy of initial project estimates and whether the deadlines and milestones proposed can be successfully met. Advantages of Prototyping 1. Collect feedback from users/ stakeholders about the functionality of the product before the public release. 2. Reveal areas for improvement and help identify faults and usability issues before the public release. Help reduce unnecessary costs. 3. Improve team efficiency and collaboration. 4. Allow the user to interact with a working model of their product. 5. Help convert an abstract idea into a tangible product in a cost-effective way. 6. Identify if your product idea is a weak one and cost you heavily before actually moving forward with it. Types of Prototyping 1. Low Fidelity Prototypes  Wireframes are used to represent the basic structure of a website/ web page/ app. It serves as a blueprint, highlighting the layout of key elements on a page and its functionality. 43 Figure 17: Wireframes  Storyboards are another low-fidelity prototyping method that helps visualize the user’s experience in using your product or how the user would interact with your product. 
Figure 18: Storyboards
• Diagrams of multiple types can help you visualize different aspects of a product, which can in turn help you optimize your prototype; examples include mind maps, customer journey maps, and flowcharts.
Figure 19: Diagrams
• Animation can be used to visualize how your product works. For example, if it is a mobile app, you can animate how a user would navigate from one screen to the other. This will help the stakeholders or users get an idea about the functionality of the product.
2. High Fidelity Prototypes
• Interactive UI Mockups. A UI mockup is a more fleshed-out version of the wireframe. It represents the color schemes, typography, and other visual elements that you have chosen for the final product. (A minimal JavaFX sketch of such a mockup is shown after this list.)
Figure 20: Interactive UI Mock-ups
• Physical Models. If the final product is a physical one, you can use different materials to create a model that represents the final look, shape, and feel of the product. You can use materials such as cardboard, rubber, clay, etc.
• Wizard of Oz Prototyping. This is a type of prototype with faked functions. This means that when a user interacts with the product, the system responses are generated by a human behind the scenes rather than by software or code. This prototyping technique allows you to study the reaction of the user at a lower cost.
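To give a concrete picture of an interactive UI mockup, here is a minimal sketch of a single screen written in JavaFX, the toolkit used with this module's laboratory guide. The screen, field names, and button behavior are illustrative assumptions only; the mockup is clickable, but nothing is wired to real logic yet:

```java
import javafx.application.Application;
import javafx.geometry.Insets;
import javafx.scene.Scene;
import javafx.scene.control.Button;
import javafx.scene.control.Label;
import javafx.scene.control.PasswordField;
import javafx.scene.control.TextField;
import javafx.scene.layout.VBox;
import javafx.stage.Stage;

// A throwaway interactive mockup of a login screen. Clicking the button only
// changes a status label, which is enough to let users react to the layout.
public class LoginMockup extends Application {

    @Override
    public void start(Stage stage) {
        TextField username = new TextField();
        username.setPromptText("Username");

        PasswordField password = new PasswordField();
        password.setPromptText("Password");

        Label status = new Label();
        Button login = new Button("Log in");
        // Faked behavior: no authentication logic, just a canned response.
        login.setOnAction(e -> status.setText("Pretending to log in as " + username.getText()));

        VBox root = new VBox(10, username, password, login, status);
        root.setPadding(new Insets(20));

        stage.setTitle("Login (mockup)");
        stage.setScene(new Scene(root, 300, 200));
        stage.show();
    }

    public static void main(String[] args) {
        launch(args);
    }
}
```

Because the button only updates a label, the mockup stays cheap to build and change while still letting users click through the screen and react to its layout before any real development effort is invested.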
The Prototyping Process
1. Identify Obstacles.
2. Select the Features.
3. Sketch Your Design.
4. Share Your Design.
5. Continue to Develop.

ACTIVITY 7.1 (Suggested Mode of Submission of Requirements: Email/Learning Management System)
Choose among the presented types of prototyping techniques and develop the prototype of your proposed project. The prototype must showcase your creativity in designing the UI of your proposed project.

ASSESSMENT - QUIZ NO. 7 (Suggested Mode of Delivery of Assessment: Google Form/Quizziz/Learning Management System)

LESSON 7
SOFTWARE QUALITY ASSURANCE

INTRODUCTION
Quality is what software development teams always look for. Teams must be able to produce quality output to make a significant impact on organizations, business entities, and individuals. In this lesson, you will learn more about the importance of quality, and software quality in particular. The output of this lesson is a continuation of the output produced in the previous lesson.

LEARNING OBJECTIVES
At the end of the lesson, you must be able to:
1. recognize what software quality assurance is and its importance;
2. differentiate quality control and quality assurance;
3. explain the different factors affecting software quality; and
4. demonstrate understanding of the different software quality metrics.

DISCUSSION
Software Quality Assurance (SQA) is an umbrella activity that is applied throughout the software engineering process. It is an error-prevention technique that focuses on the process of systems and software development. It is a planned and systematic pattern of actions that are required to ensure quality in software.

SQA encompasses:
1. analysis, design, coding, and testing methods and tools
2. formal technical reviews that are applied during each software engineering step
3. a multi-tiered testing strategy
4. control of software documentation and the changes made to it
5. a procedure to assure compliance with software development standards
6. measurement and reporting mechanisms

SQA is often thought of as a software testing activity – WRONG. If quality is not part of a product prior to testing, it will not be part of the product after testing is completed. SQA must be part of software engineering from the beginning.

The goals of SQA
1. To improve software quality by monitoring both the process and the product.
2. To ensure compliance with all local standards for SE.
3. To ensure that any product defect, process variance, or standards non-compliance is noted and fixed.

Quality Control vs. Quality Assurance
• Quality control is about the work product; quality assurance is about the work process.
• Quality control activities are work-product oriented. They measure the product, identify deficiencies, and suggest improvements. The direct result of these activities is changes to the product. These can range from single-line code changes to completely reworking a product. Quality control is an error-removal technique. Quality assurance activities are work-process oriented. They measure the process, identify deficiencies, and suggest improvements. The direct result of these activities is changes to the process. These changes can range from better compliance with the process to entirely new processes. The output of quality control activities is often the input to quality assurance activities.

Software Quality
Conformance to explicitly stated functional and performance requirements, explicitly documented development standards, and implicit characteristics that are expected of all professionally developed software.

An alternative definition by W. Edwards Deming: striving for excellence in reliability and functions by continuous (process) improvement, supported by statistical analysis of the causes of failure.

Three (3) important points to remember about software quality
1. Software requirements are the foundation from which quality is measured. Lack of conformance to requirements is lack of quality.
2. Specified standards define a set of development criteria that guide the manner in which software is engineered. If the criteria are not followed, lack of quality will almost surely result.
3. There is a set of implicit requirements that often goes unmentioned (e.g., the desire for good maintainability). If software conforms to its explicit requirements but fails to meet these implicit requirements, its quality is suspect.

Quality Characteristics
These refer to any property or element that can be used to define the nature of a product. Each characteristic can be a physical or chemical property such as size, weight, volume, color, or composition.

Software Quality
1. It is achieved through a disciplined approach, called software engineering (SE).
2. It can be defined, described, and measured.
3. It can be assessed before any code has been written.
4. It cannot be tested into a product.

Software quality challenges
1. Defining it.
2. Describing it (qualitatively).
3. Measuring it (quantitatively).
4. Achieving it (technically).

Designing software is a creative task, and like most such tasks, success is more likely if the designer follows what might be termed a set of rules of form. The rules of form also provide some way of assessing the quality of the eventual product, and possibly of the processes that led to it.

Quality example
Music provides a good analogy: we could write music by scattering notes around at random, but we would get a better result if we considered melody and harmony.
Figure 21: Quality Example (a scale running from Awful through Bearable and Good to Extremely Good, with "my music" near the low end and Beethoven's music near the high end, asking how quality might be scaled)

In his book Quality Is Free, Philip Crosby suggests an analogy between quality and sex:
1. Everyone is for it, except in certain situations.
2. Everyone believes they understand it, but they don't want to explain it.
3. Most people believe the execution is merely a matter of following natural inclinations.
4. Everyone believes that problems with it are always someone else's fault.

REALISING QUALITY
A set of abstract quality factors ("the ilities") has been defined. These cannot be measured directly but do relate to the ultimate goal.
Figure 22: Model of Realizing Quality (quality factors, the "ilities", are mapped onto quality criteria, realized through measurable quantities taken from designs, and measured by counts)

Software Quality Factors (by McCall)
1) Product Revision (changing it)
• Flexibility (Can I change it?) The effort required to modify an operational program. Change and enhancement of the system should be easily implementable.
• Maintainability (Can I fix it?) The effort required to locate and fix an error in a program. The system should be easy to keep up for its intended use. Changes for improving operational efficiency should be easy to implement. Failed operations should be easy to restore to a satisfactory condition.
• Testability (Can I test it?) The effort required to test a program to ensure that it performs its intended function. The ability of the system to produce quality product units should be easily testable. Useful messages should be generated for testing and debugging purposes.
2) Product Transition (modifying it to work in a different environment)
• Interoperability (Will I be able to interface it with another system?) The effort required to couple one system to another.
• Portability (Will I be able to use it on another machine?) The effort required to transfer the program from one hardware and/or software system environment to another. The system should be portable among people and among machines. Attainment of the other quality characteristics greatly facilitates portability.
• Reusability (Will I be able to reuse some of the software?) The extent to which a program (or part of a program) can be reused in other applications, related to the packaging and scope of the functions that the program performs.
3) Product Operations (using it)
• Correctness (Does it do what I want?) The extent to which a program satisfies its specification and fulfills the customer's mission objectives. The extent to which software is free from design defects and from coding defects; that is, fault-free.
• Reliability (Does it do it accurately all of the time?) The extent to which a program can be expected to perform its intended function with the required precision under stated conditions for a stated period of time.
• Efficiency (Will it run on my hardware as well as it can?) The extent to which software performs its function with a minimum consumption of computing resources. It should not use any hardware components or peripheral equipment unnecessarily.
• Integrity (Is it secure?) The extent to which access to software or data by unauthorized persons can be controlled.
• Usability (Is it designed for the user?) The effort required to learn, operate, prepare input for, and interpret the output of a program.

Quality Metrics
Quality metrics provide an indication of how closely software conforms to implicit (essential) and explicit (specific) requirements.
• Auditability refers to the ease with which conformance to standards can be checked.
• Accuracy is the precision of computations and control: a qualitative assessment of freedom from error and a quantitative measure of the magnitude of error. The correct data values are recorded.
• Communication commonality is the degree to which standard interfaces, protocols, and bandwidths are used.
• Completeness is the degree to which full implementation of a required function has been achieved. All data items are captured and stored for use. Data items are properly identified with time periods.
• Conciseness is the compactness of the program in terms of lines of code.
• Consistency is the use of uniform design and documentation techniques throughout the software development project.
• Data commonality is the use of standard data structures and types throughout the program.
• Error tolerance is the damage that occurs when the program encounters an error. Suitable error prevention and detection procedures are in place. There are procedures for reporting and correcting errors. Various audit procedures are applied.
• Execution efficiency refers to the run-time performance of a program.
• Expandability is the degree to which the architectural, data, or procedural design can be extended.
• Generality refers to the breadth of potential application of program components.
• Hardware independence is the degree to which the software is decoupled from the hardware on which it operates.
• Instrumentation is the degree to which the program monitors its own operation and identifies errors that do occur.
• Modularity refers to the functional independence of program components.
• Operability refers to the ease of operation of a program.
• Robustness is the extent to which software can continue to operate correctly despite the introduction of invalid inputs.
• Security is the availability of mechanisms that control or protect programs and data. The system and its operations are protected from various environmental and operational risks. There are provisions for recovery in the event of failure or destruction of part or all of the system.
• Self-documentation is the degree to which the source code provides meaningful documentation.
• Simplicity is the degree to which a program can be understood without difficulty.
• Software system independence refers to the degree to which the program is independent of nonstandard programming language features, operating system characteristics, and other environmental constraints.
• Traceability refers to the ability to trace a design representation or actual program component back to requirements.
• Training is the degree to which the software assists in enabling new users to apply the system.

Laws of software evolution dynamics (by Belady and Lehman):
1. Law of Continuing Change (self-evident). A large program that is being used undergoes continuing change until it is judged more cost-effective to rewrite it. Software must be changed over time not only to repair errors that are discovered, but also to incorporate enhancements and to adapt to new hardware systems and a changing environment.
2. Law of Increasing Entropy (intuitive). The entropy (disarray) of a system increases with time unless specific work is done to maintain or reduce it. Continuous changes made to a system tend to destroy the integrity of the system, thus increasing entropy.
3. Law of Statistically Smooth Change (controversial). Measures of global system attributes and project attributes may appear quite irregular for a particular system, but the system is self-regulating, with statistically identifiable invariances and well-defined long-range trends. Work input, or the effort expended per unit time, remains constant over the lifetime of the system.

SQA ACTIVITIES
1. Application of technical methods
The use of technical tools and methods helps the analyst achieve a high-quality specification and the designer develop a high-quality design.
2. Conduct of formal technical reviews (FTR)
Reviews are another quality control activity. Again, reviews are held to find defects in a work product. The result is changes to the work product. Over time, the collection of these changes may induce process changes, but that is not necessary. The objectives of the FTR are:
a. to uncover errors in function, logic, or implementation for any representation of the software;
b. to verify that the software meets its requirements;
c. to ensure that the software has been represented according to pre-defined standards;
d. to achieve software that is developed in a uniform manner; and
e. to make projects more manageable.
It is not the task of the review team to correct faults, but merely to record them for later correction. A defect found early in the software life cycle can be repaired at much less expense than one found later in the life cycle.

Types of Reviews
1) Walkthrough: an interactive process intended to evoke questions and discussion. Time limit: 2 hours. Participants: 4-6, including SQA and senior technical staff.
Ways to conduct a walkthrough:
a) Participant-driven. Each participant goes through his or her list of unclear items or items that may appear incorrect.
b) Document-driven. A person responsible for the document walks the participants through that document, with the reviewers interrupting either with their prepared comments or with comments triggered by the presentation.
2) Inspection (goes far beyond a walkthrough)
Formal steps of inspection:
a) An overview of the document is given to participants.
b) Participants prepare for the inspection, aided by lists of fault types previously found.
c) Inspection: every piece of logic is covered at least once, and every branch is taken at least once. A written report of the inspection is produced.
d) Rework: resolves all faults and problems noted in the written report.
e) Follow-up: ensures that every single issue raised has been satisfactorily resolved.

Review Guidelines
1. Review the product, not the producer.
2. Set an agenda and maintain it.
3. Limit debate and rebuttal.
4. Enunciate problem areas, but do not attempt to solve every problem noted.
5. Take written notes.
6. Limit the number of participants and insist upon advance preparation.
7. Develop a checklist for each product that is likely to be reviewed.
8. Allocate resources and a time schedule for FTRs.
9. Conduct meaningful training for all reviewers.
10. Review your early reviews.
3. Software testing
Software testing is the activity of running software to find errors in the software. The direct output of testing results in product changes. A study of these changes may result in process changes, but this is not necessary. Thus, testing is a quality control activity. It combines a multi-step strategy with a series of test case design methods that help ensure effective error detection. Program testing can be a very effective way to show the presence of bugs, but it is hopelessly inadequate for showing their absence.
4. Enforcement of standards
SQA must be established to ensure that standards are being followed. An assessment of compliance to standards must be conducted.
5. Control of change
Every change to software has the potential for introducing errors or creating side effects that propagate errors. Requests for change must be formalized: evaluate the nature of the change and control the impact of the change.
6. Measurement
To track software quality and assess the impact of changes on software quality.
7. Record keeping and reporting
Provide procedures for the collection and dissemination of SQA information.

Software Quality and Productivity: What Top Management Must Do
A. Create constancy of purpose to increase software quality and productivity, with a plan to become competitive and to stay in business. Top management is responsible to the general public, especially the software users whose satisfaction is the supreme justification for the existence of a software business.
B. Adopt the new philosophy of quality and cost-effective software with the understanding that we can no longer live with delays, mistakes, poor quality, and costly software.
C. Cease dependence on conventional software methods, requiring, instead, statistical evidence that software quality is built in, to eliminate the need for inspection and correction on the job where the software is used. Top management of every computer and software developer and user company has a new job and must learn it. Every user company can now demand a meaningful software warranty using statistical quality control. Every developer company must now show credibility by delivering such a warranty.
D. End the practice of awarding software business on a lowest-cost basis, demanding, instead, meaningful measures of quality along with price. Software developers who cannot qualify with statistical evidence of quality must be eliminated.
E. Find problems. Management must work continually on improving software practices: training, supervision, retraining, and improvement of software development methodologies using statistical methods.
F. Institute modern methods and rigorous programs of training in software engineering and quality assurance with statistical quality control.
G. Institute modern methods of supervision of software personnel. The responsibility of software personnel must change from sheer numbers to both numbers and quality. Achievement of quality will automatically increase productivity.
H. Drive out fear so that everyone in a software company will work effectively.
I. Break down barriers among departments. Personnel in research, design, development, sales, and production must work as a team to foresee problems of software development.
J. Create a structure in top management that will direct and control work every day on the above nine points.

ACTIVITY 8.1 (Suggested Mode of Submission of Output: Email/Google Form/Learning Management System)
1. Write an insight paper of at least 500 words on the importance of software quality assurance, its impact on application development, and the need to further improve the quality of applications and software in the new normal. Your insight paper will be assessed based on content and relevance to the topic (30 points) and organization of thoughts and clarity (20 points).

ASSESSMENT - QUIZ NO. 8 (Suggested Mode of Delivery of Assessment: Google Form/Quizziz/Learning Management System)

LESSON 8
SOFTWARE TESTING

INTRODUCTION
In the previous lesson, you learned about software quality assurance and its fundamental concepts. In this lesson, you will learn more about software testing and the different software testing techniques and activities you may conduct in developing your application project. At the end of the lesson, you are expected to conduct software testing activities on your proposed project application.

LEARNING OBJECTIVES
At the end of the lesson, you must be able to:
1. identify what software testing is and its importance in software and application development;
2. compare and contrast different software testing techniques; and
3. construct an understanding of how to evaluate and apply proper and appropriate testing techniques.

DISCUSSION
Software testing is a critical element of software quality assurance that represents the ultimate review of specification, design, and coding. A series of test cases intended to "demolish" the software that has been built is created. Software testing is one step in the software engineering process that could be viewed as destructive rather than constructive. It requires that the developer discard preconceived notions of the "correctness" of the software just developed and overcome the conflict of interest that occurs when errors are uncovered. Testing is a process of executing a program with the intent of finding an error. A good test is one that has a high probability of finding an as-yet undiscovered error. A successful test is one that uncovers an as-yet undiscovered error.

Testing Principles
a. All tests should be traceable to customer requirements.
b. Tests should be planned long before testing begins.
c. The Pareto Principle applies to software testing.
d. Testing should begin "in the small" and progress toward testing "in the large".
e. Exhaustive testing is not possible.
f. To be most effective, testing should be conducted by an independent third party.

Testability
Testability is simply how easily a computer program can be tested. Since testing can be difficult and time-consuming, it pays to know what can be done to streamline it. Testability is sometimes used to mean how adequately a particular set of tests will cover the product.

Set of characteristics that lead to testable software
a. Operability
b. Observability
c. Controllability
d. Decomposability
e. Simplicity
f. Stability
g. Understandability

Software Testing Strategies
A testing strategy provides a road map for the software developer, the quality assurance organization, and the customer. It incorporates test planning, test case design, test execution, and resultant data collection and evaluation. Strategies should be flexible enough to promote the creativity and customization that are necessary to adequately test all large software-based systems, yet rigid enough to promote reasonable planning and management tracking as the project progresses.

Characteristics of Testing Strategies
a. Testing begins at the module level and works "outward" toward the integration of the entire computer-based system.
b. Different testing techniques are appropriate at different points in time.
c. Testing is conducted by the developer of the software and an independent test group.
d. Testing and debugging are different activities, but debugging must be accommodated in any testing strategy.

Testing Techniques
Softwaretestinghelp.com presents the different testing techniques as follows:
1. Functional Testing
a. Unit Testing
Testing of an individual software component or module is referred to as Unit Testing. Typically, this is done by the programmer and not by the tester, as it requires detailed knowledge of the internal program design and code. It may also require test driver modules or test harnesses to be developed.
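As an illustration, a unit test for a single module might look like the sketch below, written in Java with JUnit 5 (assumed to be available on the classpath). The Calculator class and its divide method are hypothetical and exist only for this example:

```java
import static org.junit.jupiter.api.Assertions.assertEquals;
import static org.junit.jupiter.api.Assertions.assertThrows;

import org.junit.jupiter.api.Test;

// Hypothetical module under test.
class Calculator {
    int divide(int dividend, int divisor) {
        if (divisor == 0) {
            throw new IllegalArgumentException("divisor must not be zero");
        }
        return dividend / divisor;
    }
}

// Unit tests written by the programmer against the module in isolation.
class CalculatorTest {

    @Test
    void dividesEvenly() {
        assertEquals(5, new Calculator().divide(10, 2));
    }

    @Test
    void rejectsZeroDivisor() {
        assertThrows(IllegalArgumentException.class,
                () -> new Calculator().divide(10, 0));
    }
}
```

Each test exercises the module in isolation and checks one expected behavior, which is what makes a failure easy to trace back to a specific unit.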
b. Integration Testing
Testing of all integrated modules to verify the combined functionality after integration is referred to as Integration Testing. Modules are typically code modules, individual applications, network client and server applications, etc. This type of testing is particularly relevant for client/server and distributed systems.
c. System Testing
In System Testing, the entire system is tested against the requirements. It is a black-box type of testing that is based on the overall requirements and covers all the combined parts of the system.
d. Sanity Testing
Sanity Testing is done to determine whether or not a new software version is performing well enough to be accepted for a major test effort. If the application crashes on initial use, the system is not stable enough for further testing, and the build is sent back to be fixed.
e. Smoke Testing
Whenever a new build is provided by the development team, the Software Testing team validates the build and ensures that there is no major issue. The testing team ensures that the build is stable and that further detailed testing can be carried out. Smoke Testing checks that there is no show-stopper defect in the build that prevents the testing team from testing the application in detail. If testers find that major critical functionality is broken at the initial stage itself, the testing team can reject the build and inform the development team accordingly. Smoke Testing is carried out prior to any detailed Functional or Regression Testing.
f. Interface Testing
The objective of this GUI Testing is to validate the GUI as per the business requirements. The expected GUI of the application is described in the Detailed Design Document and GUI mockup screens. GUI Testing includes the size of the buttons and input fields present on the screen, and the alignment of all text, tables, and content in the tables. It also validates the menu of the application after selecting different menus and menu items, and verifies that the page does not fluctuate and the alignment remains the same after hovering the mouse over a menu or sub-menu.
g. Regression Testing
Testing an application as a whole for a modification in any module or functionality is termed Regression Testing. It is difficult to cover the entire system in Regression Testing, so Automation Testing Tools are typically used for this type of testing.
h. Beta/Acceptance Testing
Beta Testing is a formal type of Software Testing which is carried out by the customer. It is performed in the real environment before releasing the product to the market for the actual end-users. Beta Testing is carried out to ensure that there are no major failures in the software or product and that it satisfies the business requirements from an end-user perspective. Beta Testing is successful when the customer accepts the software. This testing is usually done by end-users or others. It is the final testing done before releasing an application for commercial purposes. Usually, the beta version of the software or product released is limited to a certain number of users in a specific area, so end-users actually use the software and share their feedback with the company. The company then takes the necessary action before releasing the software worldwide.
2. Non-Functional Testing
a. Performance Testing
This term is often used interchangeably with "stress" and "load" testing. Performance Testing is done to check whether the system meets the performance requirements. Different performance and load tools are used to do this testing.
b. Load Testing
Load Testing is a type of Non-Functional Testing whose objective is to check how much load or maximum workload a system can handle without any performance degradation. Load Testing helps to find the maximum capacity of the system under a specific load and any issue that causes software performance degradation. Load testing is performed using tools like JMeter, LoadRunner, WebLoad, Silk Performer, etc.
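Before reaching for one of those tools, the basic idea can be illustrated with a small Java sketch that fires a fixed number of concurrent requests at a hypothetical endpoint and reports how many succeed and the average response time. The URL and the number of simulated users are placeholders; a real load test of your project would use a dedicated tool such as those named above:

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;
import java.util.concurrent.atomic.AtomicLong;

public class SimpleLoadTest {

    public static void main(String[] args) throws Exception {
        int concurrentUsers = 50;                                 // simulated users (placeholder)
        URI target = URI.create("http://localhost:8080/login");   // placeholder endpoint

        HttpClient client = HttpClient.newHttpClient();
        HttpRequest request = HttpRequest.newBuilder(target).GET().build();

        AtomicInteger successes = new AtomicInteger();
        AtomicLong totalMillis = new AtomicLong();

        ExecutorService pool = Executors.newFixedThreadPool(concurrentUsers);
        for (int i = 0; i < concurrentUsers; i++) {
            pool.submit(() -> {
                long start = System.currentTimeMillis();
                try {
                    HttpResponse<String> response =
                            client.send(request, HttpResponse.BodyHandlers.ofString());
                    if (response.statusCode() == 200) {
                        successes.incrementAndGet();
                    }
                } catch (Exception e) {
                    // A failed request simply does not count as a success.
                } finally {
                    totalMillis.addAndGet(System.currentTimeMillis() - start);
                }
            });
        }
        pool.shutdown();
        pool.awaitTermination(1, TimeUnit.MINUTES);

        System.out.println("Successful responses: " + successes.get() + "/" + concurrentUsers);
        System.out.println("Average response time: " + (totalMillis.get() / concurrentUsers) + " ms");
    }
}
```

Raising the number of simulated users step by step and watching where the success rate or response time starts to degrade is, in miniature, what dedicated load testing tools do at much larger scale.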
c. Stress Testing
This testing is done when a system is stressed beyond its specifications in order to check how and when it fails. It is performed under heavy load, such as putting a volume of data beyond the storage capacity, running complex database queries, and giving continuous input to the system or database load.
d. Volume Testing
Volume Testing is a type of Non-Functional Testing performed by the Performance Testing team. The software or application is subjected to a huge amount of data, and Volume Testing checks the system behavior and response time of the application when the system comes across such a high volume of data. This high volume of data may impact the system's performance and processing speed.
e. Security Testing
It is a type of testing performed by a special team of testers. A system can be penetrated through many kinds of hacking attempts. Security Testing is done to check how the software, application, or website is secured from internal and external threats. This testing includes how well the software is secured against malicious programs and viruses, and how secure and strong the authorization and authentication processes are. It also checks how the software behaves under any hacker attack or malicious program and how data security is maintained after such an attack.
f. Compatibility Testing
It is a testing type that validates how software behaves and runs in different environments, web servers, hardware, and network environments. Compatibility testing ensures that software can run on a different configuration, di
