Test Plan Template Document for Mobile and Web Applications
1 Test Methodology
The list below is not intended to limit the extent of the test plan and may be modified to suit the particular project.
The purpose of the Test Plan is to achieve the following:
· Define testing strategies for each area and sub-area, covering all functional and quality (non-functional) requirements.
· Divide the Design Specifics into testable areas and sub-areas (not to be confused with the more detailed test spec). Be sure to identify and include the areas that are to be omitted (not tested) as well.
· Define bug-tracking procedures.
· Identify testing risks.
· Identify required resources and related information.
· Provide a testing schedule.
The purpose of usability testing is to ensure that the new components and features will function in a manner that is acceptable to the customer.
Development will typically create a non-functioning prototype of the UI components to evaluate the proposed design. Usability testing can be coordinated by Testing, but the actual testing must be performed by non-testers (as close to end users as possible). Testing will review the findings and provide the project team with its evaluation of the impact these changes will have on the testing process and on the project as a whole.
Unit testing is conducted by the developer during the code development process to ensure that proper functionality and code coverage have been achieved by each developer, both during coding and in preparation for acceptance into iteration testing.
The following are example areas of the project that must be unit-tested and signed off before being passed on to regression testing:
· Databases, stored procedures, triggers, tables, and indexes
· NT services
· Database conversion
· .OCX, .DLL, .EXE and other binary-formatted executables
During the repeated cycles of identifying bugs and taking receipt of new builds (containing bug fix code changes), there are several processes which are common to this phase across all projects. These include the various types of tests: functionality, performance, stress, configuration, etc. There is also the process of communicating results from testing and ensuring that new drops/iterations contain stable fixes (regression). The project should plan for a minimum of 2-3 cycles of testing (drops/iterations of new builds).
At each iteration, a debriefing should be held. Specifically, the report must show that, to the best degree achievable during the iteration testing phase, all identified severity 1 and severity 2 bugs have been communicated and addressed. At a minimum, all priority 1 and priority 2 bugs should be resolved prior to entering the beta phase.
Below are examples. Any example may be used if deemed appropriate for the particular project. New content that is reasoned to be suitable to the project may also be added.
Important deliverables required for acceptance into Final Release testing include:
· Application SETUP.EXE
· Installation instructions
· All documentation (beta test scripts, manuals or training guides, etc.)
The testing team, together with end users, participates in this milestone process as well, providing confirmation feedback on new issues uncovered and input based on identical or similar issues detected earlier. The intention is to verify that the product is ready for distribution and acceptable to the customer, and to iron out potential operational issues.
Assuming critical bugs were resolved during previous iteration testing, bug fixes throughout the Final Release test cycle will focus on minor and trivial bugs (severity 3 and 4). Testing will continue its process of verifying the stability of the application through regression testing (existing known bugs, as well as existing test cases).
The milestone target of this phase is to establish that the application under test has reached a level of stability appropriate for its usage (number of users, etc.) such that it can be released to the end users and cabin community.
Release for production can occur only after the application under test has successfully completed all of the phases and milestones discussed above.
The milestone target is to place the release/app (build) into production after the app has been shown to have reached a level of stability that meets or exceeds the client expectations as defined in the Requirements, Functional Spec., and cabin Production Standards.
Testing of an application can be broken down into three primary categories and several sub-levels. The three primary categories include tests conducted every build (Build Tests), tests conducted every major milestone (Milestone Tests), and tests conducted at least once every project release cycle (Release Tests). The test categories and test levels are defined below:
Build Acceptance Tests should take less than 2-3 hours to complete (15 minutes is typical). These test cases simply ensure that the application can be built and installed successfully. Other related test cases ensure that adopters received the proper Development Release Document plus other build-related information (drop point, etc.). The objective is to determine whether further testing is possible. If any Level 1 test case fails, the build is returned to the developers untested.
Smoke Tests should be automated and take less than 2-3 hours (20 minutes is typical). These test cases verify the major functionality at a high level.
The objective is to determine whether further testing is possible. These test cases should emphasize breadth more than depth. All components should be touched, and every major feature should be tested briefly by the Smoke Test. If any Level 2 test case fails, the build is returned to the developers untested.
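As an illustration, a Level 2 smoke-test gate can be sketched as follows. The component checks and their names are invented placeholders, not part of this template:

```python
def check_install():  # placeholder: app installs and launches
    return True

def check_login():    # placeholder: user can sign in
    return True

def check_reports():  # placeholder: main report renders
    return True

# Breadth over depth: every major component gets one brief check.
SMOKE_CHECKS = [check_install, check_login, check_reports]

def run_smoke_tests(checks=SMOKE_CHECKS):
    """Run every check once; a build with any failure is rejected untested."""
    failed = [c.__name__ for c in checks if not c()]
    return (len(failed) == 0, failed)

accepted, failures = run_smoke_tests()
```

If `accepted` is false, the build goes back to development without further testing, mirroring the rule above.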
Every bug that was “Open” during the previous build, but marked as “Fixed, Needs Re-Testing” for the current build under test, will need to be regressed, or re-tested. Once the smoke test is completed, all resolved bugs need to be regressed. It should take between 5 minutes and 1 hour to regress most bugs.
Critical Path test cases target features and functionality that the user will see and use every day.
Critical Path test cases must pass by the end of every 2-3 Build Test Cycles. They do not need to be tested every drop, but must be tested at least once per milestone. Thus, the Critical Path test cases must all be executed at least once during the Iteration cycle, and once during the Final Release cycle.
These are test cases that need to be run at least once during the entire test cycle for this release. They are run once, not repeated like the test cases in the previous levels. Examples include Functional Testing and Detailed Design Testing (Functional Spec and Design Spec test cases, respectively). These can be tested multiple times within each Milestone Test Cycle (Iteration, Final Release, etc.).
Standard test cases usually include Installation, Data, GUI, and other test areas.
These are test cases that would be nice to execute, but may be omitted due to time constraints.
Most Performance and Stress test cases are classic examples of Suggested test cases (although some should be considered Standard test cases). Other examples of Suggested test cases include WAN, LAN, Network, and Load testing.
Bug regression will be a central tenet throughout all testing phases.
All bugs that are resolved as “Fixed, Needs Re-Testing” will be regressed when the testing team is notified of the new drop containing the fixes. When a bug passes regression, it will be considered “Closed, Fixed”. If a bug fails regression, the adopters' testing team will notify the development team by entering notes into GForge. When a Severity 1 bug fails regression, the adopters' testing team should also send an immediate email to development. The Test Lead will be responsible for tracking and reporting the status of regression testing to development and product management.
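The regression flow above can be sketched as a simple status transition. The first two status strings follow the conventions in this section; the failed-regression status is an assumption for illustration, and the function is not the GForge API:

```python
def regress_bug(status, passed_regression):
    """Apply a regression outcome to a bug's status string.

    Only bugs resolved as "Fixed, Needs Re-Testing" are regressed;
    a failure implies notes entered in GForge (plus an immediate
    email to development when the bug is Severity 1).
    """
    if status != "Fixed, Needs Re-Testing":
        return status  # nothing to regress for this bug
    if passed_regression:
        return "Closed, Fixed"
    return "Open, Failed Regression"  # assumed label for a failed regression
```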
Testing will be considered complete when the following conditions have been met:
1. Adopters and Developers agree that testing is complete, the app is stable, and the application meets functional requirements.
2. Scripted executions of all test cases in all areas have passed.
3. Automated test cases in all areas have passed.
4. All priority 1 and 2 bugs have been resolved and closed.
5. NCI approves the test completion.
6. Each test area has been signed off as completed by the Test Lead.
7. 50% of all resolved severity 1 and 2 bugs have been successfully regressed as final validation.
8. Ad hoc testing in all areas has been completed.
The following bug reporting and triage conditions will be submitted and evaluated to measure the current status:
1. Bug find rate shows a decreasing trend prior to Zero Bug Rate (no new Severity 1/2/3 bugs found).
2. Bug find rate remains at 0 new bugs found (Severity 1/2/3) despite a constant test effort across 3 or more days.
3. Bug severity distribution shows a steady decrease in Severity 1 and 2 bugs discovered.
4. No ‘Must Fix’ bugs remaining despite sustained testing.
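Condition 2 above (zero new Severity 1/2/3 bugs across 3 or more days of constant test effort) can be sketched as:

```python
def zero_bug_rate_reached(daily_new_sev123, days_required=3):
    """True when no new Severity 1-3 bugs were found on each of the
    last `days_required` days of testing (a list of daily counts)."""
    if len(daily_new_sev123) < days_required:
        return False
    return all(n == 0 for n in daily_new_sev123[-days_required:])
```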
Testing will provide specific deliverables during the project. These deliverables fall into three basic categories: Documents, Test Cases / Bug Write-Ups, and Reports. Here is a diagram indicating the dependencies of the various deliverables:
As the diagram above shows, there is a progression from one deliverable to the next. Each deliverable has its own dependencies, without which it is not possible to fully complete the deliverable.
The following page contains a matrix depicting all of the deliverables that Testing will use.
Below is the list of artifacts that are process driven and should be produced during the testing lifecycle. Certain deliverables should be delivered as part of test validation; you may add deliverables to the list below that support the overall objectives and help maintain quality.
This matrix should be updated routinely throughout the project development cycle in your project-specific Test Plan.
Deliverable:
· Documents
  - Test Approach
  - Test Plan
  - Test Schedule
  - Test Specifications
· Test Case / Bug Write-Ups
  - Test Cases / Results
  - Test Coverage Reports
· Reports
  - Test Results Report
  - Test Final Report - Sign-Off
The Test Approach document is derived from the Project Plan, Requirements, and Functional Specification documents. This document defines the overall test approach to be taken for the project. The Standard Test Approach document that you are currently reading is a boilerplate from which the more specific project Test Approach document can be extracted.
When this document is completed, the Test Lead will distribute it to the Product Manager, Development Lead, User Representative, Program Manager, and others as needed for review and sign-off.
The Test Plan is derived from the Test Approach, Requirements, Functional Specs, and detailed Design Specs. The Test Plan identifies the details of the test approach, identifying the associated test case areas within the specific product for this release cycle.
The purpose of the Test Plan document is to:
1. Specify the approach that Testing will use to test the product, and the deliverables (extracted from the Test Approach).
2. Break the product down into distinct areas and identify features of the product that are to be tested.
3. Specify the procedures to be used for testing sign-off and product release.
4. Indicate the tools used to test the product.
5. List the resource and scheduling plans.
6. Indicate the contact persons responsible for various areas of the project.
7. Identify risks and contingency plans that may impact the testing of the product.
8. Specify bug management procedures for the project.
9. Specify criteria for acceptance of development drops to testing (of builds).
This section is not vital to the document as a whole and can be modified or deleted if needed by the author.
The Test Schedule is the responsibility of the Test Lead (or the Department Scheduler, if one exists) and will be based on information from the project schedule (maintained by the Product Manager). The project-specific Test Schedule may be done in MS Project.
A Test Specification document is derived from the Test Plan as well as the Requirements, Functional Spec., and Design Spec documents. It provides specifications for the construction of Test Cases and includes list(s) of test case areas and test objectives for each of the components to be tested, as identified in the project’s Test Plan.
A Requirements Traceability Matrix (RTM), which is used to link the test scenarios to the requirements and use cases, is a required part of the Test Plan documentation for all projects. Requirements traceability is defined as the ability to describe and follow the life of a requirement, in both a forward and backward direction (i.e. from its origins, through its development and specification, to its subsequent deployment and use, and through periods of ongoing refinement and iteration in any of these phases).
Attached is a sample basic RTM which could provide a starting point for this documentation. The important thing is to choose a template or document basis that achieves thorough traceability throughout the life of the project.
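A minimal sketch of a forward-traceability check over such an RTM; the requirement and test-case IDs are invented examples, not part of the attached sample:

```python
# requirement ID -> covering test case IDs (all IDs are invented examples)
rtm = {
    "REQ-001": ["TC-101", "TC-102"],
    "REQ-002": ["TC-201"],
    "REQ-003": [],  # traceability gap: no test case yet
}

def uncovered_requirements(matrix):
    """Requirements with no linked test case (a forward-traceability gap)."""
    return sorted(req for req, cases in matrix.items() if not cases)
```

Running the check over the sample matrix flags `REQ-003` as the requirement still lacking a linked test scenario.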
The workflow below illustrates the testing workflow process for Developers and Adopters for User Acceptance and End-to-End testing.
Please note the yellow-highlighted process, where the Adopter is required to send the defect list with evidence directly to the Developer. Similarly, the Developer is required to confirm directly with the Adopter after bug fixes, along with updating Bugzilla.
All High priority defects should be addressed within 1 day of the request and resolved/closed within 2 days of the initial request.
All Medium priority defects should be addressed within 2 days of the request and resolved/closed within 4 days of the initial request.
All Low priority defects should be resolved/closed within 5 days of the initial request.
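These service levels can be sketched as a simple SLA check. The field names are illustrative, and leaving Low without a first-response target reflects that none is stated above:

```python
# Defect SLA targets from the section above, in days:
# (days to first response, days to resolution).
SLA_DAYS = {
    "High": (1, 2),
    "Medium": (2, 4),
    "Low": (None, 5),  # no explicit response target stated for Low
}

def sla_breached(priority, days_to_response, days_to_resolve):
    """Return True if either the response or resolution target was missed."""
    respond_by, resolve_by = SLA_DAYS[priority]
    if respond_by is not None and days_to_response > respond_by:
        return True
    return days_to_resolve > resolve_by
```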
The Test Lead will be responsible for writing and disseminating the following reports to appropriate project personnel as required.
A weekly or bi-weekly status report will be provided by the Test Lead to project personnel. This report will summarize weekly testing activities, issues, risks, bug counts, test case coverage, and other relevant metrics.
When each phase of testing is completed, the Test Lead will distribute a Phase Completion Report to the Product Manager, Development Lead, and Program Manager for review and sign-off.
The bullets below illustrate an example of what the document may include. The document must contain the following metrics:
· Total test cases, number executed, number passed/failed, number yet to execute
· Number of bugs found to date, number resolved, and number still open
· Breakdown of bugs by Severity/Priority matrix
· Discussion of unresolved risks
· Discussion of schedule progress (are we where we are supposed to be?)
A Final Test Report will be issued by the Test Lead. It will certify the extent to which testing has actually been completed (a test case coverage report is suggested) and provide an assessment of the product’s readiness for Release to Production.
Bug Severity and Priority fields are both very important for categorizing bugs and prioritizing if and when they will be fixed. The bug Severity and Priority levels are defined in the tables below. Testing will assign a severity level to all bugs. The Test Lead will be responsible for seeing that a correct severity level is assigned to each bug.
The Test Lead, Development Lead, and Program Manager will participate in bug review meetings to assign the priority of all currently active bugs. These meetings will be known as “Bug Triage Meetings”. The Test Lead is responsible for setting up these meetings on a routine basis to address the current set of new and existing but unresolved bugs.
The tester entering a bug into GForge is also responsible for entering the bug Severity.
Severity ID | Severity Level | Severity Description
1 | Critical | The module/product crashes or the bug causes non-recoverable conditions. System crashes, GP faults, database or file corruption, potential data loss, and program hangs requiring a reboot are all examples of a Severity 1 bug.
2 | High | A major system component is unusable due to failure or incorrect functionality. Severity 2 bugs cause serious problems, such as a lack of functionality or insufficient/unclear error messages, that have a major impact on the user or prevent other areas of the app from being tested. Severity 2 bugs can have a workaround, but the workaround is inconvenient or difficult.
3 | Medium | Incorrect functionality of a component or process. A Severity 3 bug has a simple workaround.
4 | Minor | Documentation errors or signed-off Severity 3 bugs.
2.5.2 Priority List
Priority ID | Priority Level | Priority Description
5 | Must Fix | This bug must be fixed immediately; the product cannot ship with this bug.
4 | Should Fix | These are important problems that should be fixed as soon as possible. It would be an embarrassment to the company if this bug shipped.
3 | Fix When Have Time | The problem should be fixed within the time available. If the bug does not delay the shipping date, then fix it.
2 | Low Priority | It is not important (at this time) that these bugs be addressed. Fix these bugs after all other bugs have been fixed.
1 | Trivial | Enhancements/good-to-have features that are out of the current scope.
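The two tables can be sketched as enumerations, which keeps the numeric IDs and named levels in sync when bugs are recorded programmatically. This is an illustration, not part of the template:

```python
from enum import IntEnum

class Severity(IntEnum):
    """Severity IDs per the Severity List above (1 is most severe)."""
    CRITICAL = 1
    HIGH = 2
    MEDIUM = 3
    MINOR = 4

class Priority(IntEnum):
    """Priority IDs per the Priority List above (5 is most urgent)."""
    MUST_FIX = 5
    SHOULD_FIX = 4
    FIX_WHEN_HAVE_TIME = 3
    LOW = 2
    TRIVIAL = 1

def blocks_ship(priority):
    """Per the Priority table, only 'Must Fix' bugs block shipping outright."""
    return priority == Priority.MUST_FIX
```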