As we all know, software testing is a set of activities used to find bugs and improve the quality of a system. To achieve this goal we have two ways to improve the quality and reliability of a software system: manual testing and automation testing. Nowadays automation is a hot area in the testing world, and everyone who wants to start a career in quality wants to start with automation. But before building any floor in the testing land we should lay our base, and that base is manual testing. It teaches us why we should test a system, what the roles of quality and reliability are, what the risks in testing an application are, how to improve testability through risk factors, and much more.
We deal with many terms in manual testing every day, and sometimes we don't stop to ask why we use them. Here I am trying to define some common and useful concepts that are required to build a solid base in manual testing.
Bug Life Cycle:
The bug life cycle starts at the "New" stage and then moves to the "Open" stage. After the developer resolves the issue it moves to the "Resolved" stage, and then to the "Fixed" stage once it works in the staging environment. If regression testing shows the issue still exists, the bug moves to the "Reopened" stage; if it turns out not to be a valid issue it moves to the "Invalid" stage; otherwise it moves to the "Closed" stage.
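The stages above can be sketched as a small state machine. This is only an illustration of the flow described in this post; stage names and allowed transitions vary between bug trackers, so treat the transition table as an assumption, not a standard.

```python
# Bug life cycle sketch: each stage maps to the stages it may move to next.
# The transitions follow the description above and are illustrative only.
ALLOWED_TRANSITIONS = {
    "New":      {"Open"},
    "Open":     {"Resolved"},
    "Resolved": {"Fixed"},                        # verified in staging
    "Fixed":    {"Reopened", "Invalid", "Closed"},# outcome of regression testing
    "Reopened": {"Resolved"},                     # goes back to the developer
    "Invalid":  set(),
    "Closed":   set(),
}

class Bug:
    def __init__(self, title: str):
        self.title = title
        self.stage = "New"

    def move_to(self, stage: str) -> None:
        # Reject any transition the workflow does not allow.
        if stage not in ALLOWED_TRANSITIONS[self.stage]:
            raise ValueError(f"Cannot move from {self.stage} to {stage}")
        self.stage = stage

bug = Bug("Login button unresponsive")
for stage in ["Open", "Resolved", "Fixed", "Closed"]:
    bug.move_to(stage)
print(bug.stage)  # Closed
```

Modeling the life cycle this way makes it easy to see that a bug can never jump, for example, straight from "New" to "Closed".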
Integration Testing:
After integrating all modules and functionality, we check the links, the data exchange between modules, and the flow of control in the application.
Regression Testing:
In regression testing, after every update we should check the system thoroughly and verify the effect of the change on all vulnerable points of the system.
Test Plan:
A test plan is a high-level testing document that describes the testing project. A test plan defines:
- How will the testing be done?
- Who will do it?
- What will be tested?
- How long will it take?
- What quality level is required?
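The five questions above can be captured as a lightweight record. The field names and sample answers below are my own illustration, not a standard test-plan format.

```python
# A hypothetical, minimal test-plan record answering the five questions above.
# Field names and values are invented for illustration.
test_plan = {
    "how":      "Manual functional testing followed by regression testing",
    "who":      ["QA engineer A", "QA engineer B"],
    "what":     ["Login module", "Checkout module"],
    "how_long": "2 weeks",
    "quality":  "No open high-severity bugs at release",
}

for question, answer in test_plan.items():
    print(f"{question}: {answer}")
```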
Risk of Test Project:
The test factors describe the broad objectives of testing. These are the risks that the tester needs to evaluate to ensure that the objectives identified by each factor have been achieved.
Severity and Priority levels:
We use different severity and priority levels to define and communicate bugs.
Data Driven Testing:
When we test an application, we may want to check the same functionality with multiple sets of data; that is data-driven testing. Whenever one piece of functionality must be verified against many different inputs, data-driven testing is the right approach.
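A minimal sketch of the idea: the test logic is written once and the data lives in a table. The `is_valid_email` function here is an assumed stand-in for whatever functionality you are actually testing.

```python
import re

def is_valid_email(address: str) -> bool:
    # Deliberately simple validator, a stand-in for real functionality.
    return re.fullmatch(r"[^@\s]+@[^@\s]+\.[a-zA-Z]{2,}", address) is not None

# Data-driven testing: the same check runs against every row of the table.
test_data = [
    ("user@example.com", True),
    ("user@example",     False),
    ("",                 False),
    ("a@b.co",           True),
]

for address, expected in test_data:
    result = is_valid_email(address)
    assert result == expected, f"{address!r}: expected {expected}, got {result}"
print("All data-driven checks passed")
```

Adding a new test case is then just adding a new row, which is exactly what makes data-driven testing scale.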
Define Low Priority and High Severity bug:
Suppose some functionality blocks the application but is not needed for the current deliverable; that is a high-severity, low-priority bug.
Define High Priority and Low Severity bug:
Suppose the client asks to change the label of a button for the current deliverable; that is a high-priority, low-severity bug.
Define High Priority and High Severity Bug:
Any functionality that blocks the application and is also needed for the current deliverable is a high-priority, high-severity bug.
Severity describes how badly a bug damages the application (e.g. whether it is a blocker). Priority describes how urgent the fix is for the current deliverable.
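The three combinations above can be summed up in a tiny triage function. The two yes/no inputs and the categories follow the examples in this post; they are an illustration, not an industry standard.

```python
# Severity comes from how badly the bug damages the application;
# priority comes from whether the fix is needed for the current deliverable.
def triage(blocks_application: bool, needed_for_current_deliverable: bool) -> str:
    severity = "High" if blocks_application else "Low"
    priority = "High" if needed_for_current_deliverable else "Low"
    return f"Severity: {severity}, Priority: {priority}"

# A blocker that is not part of this deliverable:
print(triage(blocks_application=True, needed_for_current_deliverable=False))
# Severity: High, Priority: Low

# A button-label change the client wants in this deliverable:
print(triage(blocks_application=False, needed_for_current_deliverable=True))
# Severity: Low, Priority: High
```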
A method of dividing an application system into segments so that testing can occur within the boundaries of those segments. The concept complements top-down design.
Equivalence Partitioning:
A specification frequently partitions the set of all possible inputs into classes that receive equivalent treatment. Such partitioning is called equivalence partitioning.
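For example, instead of testing every possible value of an input field, we pick one representative value per class. The age field accepting 18-60 below is an assumed example, not from the specification of any real system.

```python
# Equivalence partitioning sketch: an age field that accepts 18-60.
# Rather than testing all integers, we test one representative per class.
def accepts_age(age: int) -> bool:
    return 18 <= age <= 60

# One representative input per equivalence class:
partitions = {
    "below range (invalid)": 10,
    "within range (valid)":  30,
    "above range (invalid)": 75,
}

for name, representative in partitions.items():
    print(f"{name}: accepts_age({representative}) = {accepts_age(representative)}")
```

Any other value from the same class (say, 45 instead of 30) should behave the same way, which is exactly why one representative per class is enough.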
Types of Integration Testing:
Integration testing is of two types:
- Top-down integration testing
- Bottom-up integration testing
What we Test in Web Application:
In web-based testing, we first test general things such as who the users of the application are and what purpose the application will be used for. For appearance, we test the consistency of the layout, the look and feel, consistent colors and font types, page titles, status bar messages, and window minimize, maximize, and resize behavior. For navigation, we test whether the user can always answer "Where am I?", "Where have I been?", and "Where can I go?", check for a clear indication of the current page, and validate form submission.
Traceability Matrix:
A traceability matrix is a document that defines the relationship between test requirements and test cases.
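A toy version of such a matrix can be built from two mappings. The requirement and test-case IDs below are made up for illustration.

```python
# Hypothetical requirements and the test cases that claim to cover them.
requirements = ["REQ-1", "REQ-2", "REQ-3"]
coverage = {
    "TC-101": ["REQ-1"],
    "TC-102": ["REQ-1", "REQ-2"],
}

# Flatten the matrix to find which requirements are covered at all.
covered = {req for reqs in coverage.values() for req in reqs}

for req in requirements:
    cases = [tc for tc, reqs in coverage.items() if req in reqs]
    status = ", ".join(cases) if cases else "NOT COVERED"
    print(f"{req}: {status}")
```

The main value of the matrix is exactly this kind of gap report: here it would immediately flag that REQ-3 has no test case at all.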
Test Life Cycle:
A test life cycle starts with planning the tests; after that we write test cases and run them against the application. After running the test cases we analyze the test results and then document them. Once result documentation is complete, we prepare the validation report and then perform regression testing.
I may not have covered everything in this post, but this is not my last one 🙂 I will keep updating this space as my knowledge grows.