Manual Testing FAQ
Summary
This is a frequently-asked-questions (FAQ) study document about manual testing. It covers testing levels and techniques, defect management, and test process considerations.
**[Manual FAQ]**

- **What are the types or levels of testing?**
  Test levels are:
  - Unit testing -- **Developer**
  - Integration testing -- **Tester**
  - System integration testing (**SIT**) -- **Tester**
  - User acceptance testing (**UAT**), i.e. acceptance testing such as alpha and beta testing.
    **Alpha testing** is when we test the final product at the development site; **beta testing** is done by real users at the client's site.

  Testing types include:
  - Static testing
  - Dynamic testing
  - Functional testing
  - Non-functional testing (performance, usability, scalability)

- **Performance testing**: a testing technique performed to determine a system's responsiveness and stability under various workloads.

- **When should I know that the system needs performance testing? Is it mandatory for all apps?**

- **Tell me more about the testing techniques.**
  - **Black box testing**: testing the behavior of the application through its inputs and outputs, without looking at the internal structure; we send inputs and check that the outputs match the expected results.
  - **White box testing**: testing the internal code of the application (mainly done by automation engineers and developers; it requires strong coding skills), e.g. unit testing.
  - Experience-based techniques such as exploratory testing: mainly used when time or requirements are lacking, and driven by the tester's experience.
  - **Gray box testing**: a combination of black box and white box testing.

- **Verification vs. Validation:**
  - **Verification** is the process of checking whether **we are building the product right**, i.e. whether it meets the requirements we have (QC).
  - **Validation** checks whether **we built the right product**, i.e. whether the final product meets the client's needs, as in acceptance testing (QA).
  - An application can be 100% verified and still not meet the client's expectations or needs.

- **Difference between dynamic testing and static testing:**

- **Quality Control vs. Quality Assurance:**

- **What does the test plan include?**
  - Test scope
  - Environment
  - Entry and exit criteria
  - Testing objectives

- **Retesting vs. Regression:**
  - **Retesting (confirmation testing)**: when we find a defect and it gets fixed, we retest to make sure it is fixed and the defect is no longer present.
  - **Regression testing**: when we fix a defect or add new features, we test the features related to the fixed bug or the new feature, to make sure the change did not impact other features. Mainly done when:
    - new functionality is added to the application
    - requirements change (CRs)
    - defects are fixed
    - the environment changes

- **Smoke testing**: done when a new build is deployed; we run the core, high-risk, or full-workflow scenarios to make sure the application's core functions work properly. If the smoke test suite fails, we as testers reject the build: the build is not stable, and there is no point continuing to test until the core functions are fixed and the development team deploys a new build. A minimal sketch of such a suite follows below.
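As an illustration of the idea (not from the original document): a minimal pytest smoke suite. The `AppClient` class, its methods, and the flows it exercises are stand-ins invented for this sketch.

```python
# A minimal smoke-suite sketch in pytest. AppClient is a stand-in for a
# real API/UI driver; its methods and the flows tested are assumptions
# chosen to illustrate the idea, not a real project's API.
import pytest

pytestmark = pytest.mark.smoke  # tag every test here as part of the smoke suite


class AppClient:
    """Stand-in client so the sketch runs; replace with your real driver."""

    def login(self, user, password):
        return {"ok": bool(user and password)}

    def place_order(self, item, quantity):
        return {"status": "confirmed", "item": item, "quantity": quantity}


@pytest.fixture
def client():
    return AppClient()


def test_login_works(client):
    # Core flow 1: a registered user can log in.
    assert client.login("demo", "demo-pass")["ok"]


def test_order_flow(client):
    # Core flow 2: the main business workflow completes end to end.
    order = client.place_order(item="sku-123", quantity=1)
    assert order["status"] == "confirmed"
```

Running `pytest -m smoke` against each new build and rejecting the build on any failure mirrors the practice described above; registering the `smoke` marker in `pytest.ini` keeps pytest from warning about it.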
- **Severity vs. Priority:**
  - **Severity**: how much the defect impacts the application's functionality. For example, if the registration feature doesn't work, the defect has critical severity because we can't continue testing until it is fixed.
  - **Priority**: how soon we should fix the defect, based on its importance to the business. If the company's name is misspelled on a web page, that is high priority but low severity: it doesn't affect the app's functions, but it has high business value, so it should be fixed soon.
  - **High priority, high severity**: the submit button on the login screen is not working.
  - **Low priority, high severity**: the app crashes when a user orders 100,000 units of a product; priority is low because an end user is unlikely to hit such a case.
  - **High priority, low severity**: a spelling mistake in the company name on the web page.
  - **Low priority, low severity**: the FAQ page takes too long to load.

- **Who decides the severity and the priority of a bug?**
  - **Severity** is the impact of the defect on the application; the **tester** determines it after the defect is detected.
  - **Priority** is determined by the **Product Owner**.

- **What if you raise a defect but the team says it is not a defect?**
  We should review the requirements again: if, according to the requirements, it is a bug, then it is a BUG.

- **What is a postcondition in a test case, with an example?**
  A postcondition is the state the system is expected to be in after the test case executes. For example, after a successful registration test, the new account exists and the user can log in with it.

- **When do we automate tests?**
  - Tests that take a long time to execute
  - Repetitive tests
  - Complex scenarios that are hard to execute manually

- **What should a defect report contain?**
  - A descriptive title
  - Steps to reproduce
  - The environment in which it occurred (e.g. the Chrome browser but not Firefox)
  - Severity
  - Priority
  - Attachments and screenshots

- **Best practices for writing or designing test cases:**
  - Write test cases from the end user's perspective
  - Write test cases simply, so that anyone can run them easily
  - Set the priority
  - Include test data, if there is any

- **Difference between positive and negative testing** (illustrated in the sketch after this section):
  - **Positive testing** verifies what the system should actually do; it checks whether the application satisfies the requirements.
  - **Negative testing** determines what the system is not supposed to do; it helps find defects in the app and increases the product's quality.

- **Test suite**: a collection of test cases, mainly test cases related to each other.

- **Test scenario** (also called a test condition): gives the idea of what we test. For example, registration is a test condition that has positive and negative test cases.

- **Test case**: a set of positive and negative executable steps for a test scenario.

- **Test environment**: the combination of hardware and software on which the testing team performs testing:
  - OS
  - Web browser
  - Application under test

- **Bug life cycle:**
  - New
  - Assigned to a developer
  - Open (while they work on fixing it)
  - Closed, after retesting confirms the fix
  - Reopened, if the bug still exists; then the cycle starts over
  - Deferred: will be fixed, but in a later sprint (decided by the **Product Owner**)
  - Duplicate: two defects report the same problem or function
  - Not a bug: the "bug" was caused by wrong test data or a poor internet connection, for example
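To make the positive/negative distinction above concrete: a minimal pytest sketch, assuming a hypothetical rule that an age field must be an integer between 18 and 120. The rule and the `validate_age` helper are invented for illustration.

```python
# Positive vs. negative test cases for a hypothetical age field that must
# be an integer between 18 and 120; the rule is an assumption for this sketch.
import pytest


def validate_age(value):
    # Stand-in implementation so the example is runnable.
    if not isinstance(value, int) or not 18 <= value <= 120:
        raise ValueError("age must be an integer between 18 and 120")
    return value


def test_valid_age_accepted():
    # Positive test: verify what the system should do.
    assert validate_age(30) == 30


@pytest.mark.parametrize("bad_value", [17, 121, -1, "thirty", None])
def test_invalid_age_rejected(bad_value):
    # Negative tests: verify the system rejects what it should not accept.
    with pytest.raises(ValueError):
        validate_age(bad_value)
```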
- **Black box testing types** (illustrated in the sketch at the end of this section):
  - Equivalence partitioning
  - Boundary value analysis
  - Decision table

- **Entry criteria**: the prerequisites that must be met before testing starts (e.g. test data and the environment are ready).

- **Exit criteria (acceptance criteria)**: the conditions that must be verified to stop testing, or to say that a specific condition has been tested.

- **What is an RTM?**
  A Requirements Traceability Matrix is a document created to trace requirements to tests, to verify whether the requirements are fulfilled.

- **What are release notes?**
  A document summarizing what happened during the sprint: sprint dates, the tester who covered the testing phase, build number, features covered, testing notes, issues listed with their status, device versions (mobile model and software), testing language, and what should be done in the next sprint.

- **When should we stop testing?**
  - Requirement coverage reaches a specified point
  - Testing deadlines are reached
  - The agreed pass percentage of test cases is reached
  - The project's risk is under an acceptable limit
  - All high-priority bugs are fixed
  - Acceptance or exit criteria are met
  - Management decides to stop

- **What is the difference between SDLC and STLC?**

- **What are the different roles in Scrum? / Who are the members involved in the sprint?**

- **Scrum vs. Agile:**

- **What are the Agile meeting types?**
  - Sprint planning (identifying the requirements)
  - Sprint (acceptance of the features we will work on)
  - Daily standup (daily meeting for updates)
  - Sprint review (demo of the completed features with the PO)
  - Sprint retrospective (process review for further improvement)
  - Product backlog refinement (adding detail, estimates, and order to items in the product backlog)

- **Sprint Backlog vs. Product Backlog:**

- **What is defect age?**
  Defect age (in time) is the difference between the time a defect is detected and the time it is fixed: a defect detected on March 1 and fixed on March 5 has a defect age of 4 days. The fix date may be a past date on which it was fixed, or the current date if the defect is still open/unresolved.

- **Incident vs. bug vs. issue:**
  - **Incident**: a problem found during testing; at this point we don't know whether it is a bug or just a misunderstanding.
  - **Bug**: a fault in the system that needs to be fixed (e.g. the app doesn't work in IE11).
  - **Issue**: a business question that needs to be addressed (the app doesn't work in IE8; do we need to support IE8?).

- **Mobile Application Testing**
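A sketch of the first two black box techniques listed above, reusing the same hypothetical 18-120 age rule from the previous example: equivalence partitioning picks one representative value per partition, while boundary value analysis tests at and just beyond each edge of the valid range.

```python
# Equivalence partitioning and boundary value analysis for the same
# hypothetical rule: age must be an integer between 18 and 120.
# The rule and the values below are assumptions chosen for illustration.
import pytest


def validate_age(value):
    if not isinstance(value, int) or not 18 <= value <= 120:
        raise ValueError("age must be an integer between 18 and 120")
    return value


def check(value, valid):
    # Shared assertion helper for both techniques.
    if valid:
        assert validate_age(value) == value
    else:
        with pytest.raises(ValueError):
            validate_age(value)


# Equivalence partitioning: one representative value per partition is enough.
@pytest.mark.parametrize("value,valid", [
    (10, False),   # partition: below the valid range
    (50, True),    # partition: inside the valid range
    (200, False),  # partition: above the valid range
])
def test_equivalence_partitions(value, valid):
    check(value, valid)


# Boundary value analysis: test exactly at and just beyond each boundary.
@pytest.mark.parametrize("value,valid", [
    (17, False), (18, True),    # lower boundary
    (120, True), (121, False),  # upper boundary
])
def test_boundaries(value, valid):
    check(value, valid)
```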