Software Test Plan for Mobile Application

Introduction

Overview
This document describes the testing methodology for a mobile application and serves as a guide for the testing activity.

The intended audience for this document:
Project managers
Product development team members
Test engineers

Scope
The scope of testing described in this document is to test the operating characteristics of an application that runs on mobile devices supporting J2ME. The tests are organized by requirement category, such as usability, functionality, and security. The procedures for carrying out testing, in terms of test case preparation, test environment setup, and defect logging and reporting, are also explained.

The document does not address the following:
Content censorship (i.e. assessment against standards for violence, gambling, political messaging, etc.) for the purpose of preventing the deployment or sale of an application, and concerns such as distribution and DRM.
Testing requirements specific to a particular manufacturer’s (or network operator’s) device, user interface, and standards (e.g. WAP) implementation.

References
List the documents referenced by this test plan here.
Acronyms
Acronym – Expansion
DRM – Digital Rights Management
J2ME™ – Java™ 2 Platform, Micro Edition

Test Plan and Strategy

Unit Testing

Objective
The objective of unit testing is to verify that a particular module of source code works properly. Unit tests are designed to exercise a single class, component, or module in isolation. Unit tests are run by developers, and only for the components they are working on.
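As a hedged illustration of testing a single class in isolation, the sketch below uses a hypothetical `PhoneNumberValidator` component (invented for this example) and a plain-Java check helper standing in for whatever test harness the team actually uses:

```java
// Minimal unit-test sketch for a single class in isolation.
// PhoneNumberValidator is a hypothetical component; replace it with
// the actual class under test.
class PhoneNumberValidator {
    // Accepts only non-empty strings consisting entirely of digits.
    static boolean isValid(String input) {
        if (input == null || input.length() == 0) {
            return false;
        }
        for (int i = 0; i < input.length(); i++) {
            if (!Character.isDigit(input.charAt(i))) {
                return false;
            }
        }
        return true;
    }
}

public class PhoneNumberValidatorTest {
    public static void main(String[] args) {
        // One behaviour per check, independent of other modules.
        check(PhoneNumberValidator.isValid("5551234"), "digits accepted");
        check(!PhoneNumberValidator.isValid(""), "empty rejected");
        check(!PhoneNumberValidator.isValid("555-1234"), "dash rejected");
        check(!PhoneNumberValidator.isValid(null), "null rejected");
        System.out.println("all unit tests passed");
    }

    static void check(boolean condition, String name) {
        if (!condition) {
            throw new AssertionError("failed: " + name);
        }
    }
}
```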

Entry Criteria
Test cases are reviewed
Build is complete and self test done
Unit Test environment is set up

Exit Criteria
All planned test cases are executed
Units are working as per the expected results
Defects are fixed in the code and tracked to closure

Logging Tests and Reporting
The developer will fix the defects that are found in unit testing. Additionally, if defects corresponding to other modules or components are found during unit testing, these will be reported.

System Testing
In system testing, separate units (packages, modules, or components), or groups of units of the application, are integrated and tested as a complete application. The purpose of system testing is to identify defects that only surface when the complete system is assembled. Verification at this stage may cover functionality, usability, security, installation, and so on. It is intended to validate the application as a whole.
The rest of this document mainly explains how System Testing is performed by the testing team.

Testing Procedure
The steps in testing consist of:
Creation of all the test scenarios and test cases
Preparation of a test case document that contains a brief description of each test case, the steps to conduct the test, and the expected result
Defect report generation
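The test case document entry described above (brief description, steps, expected result) can be sketched as a simple record; the field names and sample values are illustrative, not mandated by this plan:

```java
// Sketch of one entry in the test case document: a brief description,
// the steps to conduct the test, and the expected result.
// All names and sample values are illustrative.
public class TestCase {
    final String id;
    final String description;
    final String[] steps;
    final String expectedResult;

    TestCase(String id, String description, String[] steps, String expectedResult) {
        this.id = id;
        this.description = description;
        this.steps = steps;
        this.expectedResult = expectedResult;
    }

    // Renders the entry in a test-case-document style layout.
    String toDocumentEntry() {
        StringBuilder sb = new StringBuilder();
        sb.append(id).append(": ").append(description).append('\n');
        for (int i = 0; i < steps.length; i++) {
            sb.append("  Step ").append(i + 1).append(". ").append(steps[i]).append('\n');
        }
        sb.append("  Expected: ").append(expectedResult);
        return sb.toString();
    }

    public static void main(String[] args) {
        TestCase tc = new TestCase(
                "AL-001",
                "Application launches from the device menu",
                new String[] { "Select the application icon", "Wait for the splash screen" },
                "Main screen is displayed");
        System.out.println(tc.toDocumentEntry());
    }
}
```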

Outlined below are the main test types that will be performed:
1. Application Characteristics (AC) – Information about the application is provided to help the testing team in its work.
2. Stability (ST) – Focuses on the application remaining stable on the device.
3. Application Launch (AL) – Once loaded, the application must start (launch) and stop correctly in relation to the device and the other applications on it.
4. User Interface (UI)
5. Functionality (FN) – Documented features are implemented in the application and work as expected. Sources for this information are user manuals, formal application specification documents, and online documentation.
6. Connectivity (CO) – The application must demonstrate that it communicates over a network correctly. It must be capable of dealing with both network problems and server-side problems.
7. Personal Information Management (PI) – An application that accesses user information must do so appropriately and must not corrupt or destroy that information.
8. Security
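For the Connectivity (CO) category in particular, a test must show that network failures are handled gracefully rather than crashing the application. A minimal sketch, assuming a hypothetical `NewsFetcher` component and `Transport` interface that stand in for the application's real networking layer:

```java
// Hedged sketch of a connectivity (CO) check: the code under test must
// report a user-facing error, not crash, when the network fails.
// NewsFetcher and Transport are hypothetical stand-ins for the
// application's real networking layer.
interface Transport {
    String get(String url) throws java.io.IOException;
}

class NewsFetcher {
    private final Transport transport;

    NewsFetcher(Transport transport) {
        this.transport = transport;
    }

    // Must degrade gracefully on network or server-side problems.
    String fetchHeadlines() {
        try {
            return transport.get("http://example.com/headlines");
        } catch (java.io.IOException e) {
            return "Network unavailable - please retry";
        }
    }
}

public class ConnectivityTest {
    public static void main(String[] args) {
        // Stub transport that simulates a dropped connection.
        Transport failing = new Transport() {
            public String get(String url) throws java.io.IOException {
                throw new java.io.IOException("connection reset");
            }
        };
        String result = new NewsFetcher(failing).fetchHeadlines();
        if (!result.equals("Network unavailable - please retry")) {
            throw new AssertionError("CO check failed");
        }
        System.out.println("CO check passed: " + result);
    }
}
```

A stubbed transport keeps the check deterministic: the test controls exactly which failure occurs, instead of waiting for a real network outage.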

Regression Testing
Regression testing is an additional step, performed before system testing of new functionality. It consists of running a set of standard tests to ensure that existing functionality has not been broken by the new functionality. Regression tests are also run when a new release is made after a number of defects have been fixed.
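The comparison a regression run performs, checking current results against a previously passing baseline, can be mechanized as follows; the test IDs and results are illustrative:

```java
import java.util.*;

// Sketch of a regression check: report any test that passed in the
// baseline release but fails in the current build. Test IDs and
// results below are illustrative.
public class RegressionRun {
    static List<String> regressions(Map<String, Boolean> baseline,
                                    Map<String, Boolean> current) {
        List<String> broken = new ArrayList<>();
        for (Map.Entry<String, Boolean> e : baseline.entrySet()) {
            Boolean now = current.get(e.getKey());
            // Old functionality is broken if it used to pass and no longer does.
            if (e.getValue() && (now == null || !now)) {
                broken.add(e.getKey());
            }
        }
        return broken;
    }

    public static void main(String[] args) {
        Map<String, Boolean> baseline = new LinkedHashMap<>();
        baseline.put("FN-001", true);
        baseline.put("UI-003", true);

        Map<String, Boolean> current = new LinkedHashMap<>();
        current.put("FN-001", true);
        current.put("UI-003", false); // new functionality broke this test

        System.out.println("Regressions: " + regressions(baseline, current));
        // prints Regressions: [UI-003]
    }
}
```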

Pass/Fail Conditions
An application must pass all the tests in each test category to be considered successful.

Test Report
For each report, the following information is provided:
1. The name of the application
2. The version number of the application
3. Device used for testing
4. Device firmware version

For each error reported, the following information is provided:
Description of the error
Frequency of occurrence of error: Systematic or Random or Once
Location of the error in the application
Steps to reproduce the error
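Taken together, the report fields and per-error fields above can be sketched as a single record; the structure and sample values are illustrative, not a mandated format:

```java
// Sketch combining the report fields (application, version, device,
// firmware) with the per-error fields (description, frequency,
// location, steps to reproduce). Sample values are illustrative.
public class DefectReport {
    enum Frequency { SYSTEMATIC, RANDOM, ONCE }

    String application;        // name of the application
    String appVersion;         // version number of the application
    String device;             // device used for testing
    String firmware;           // device firmware version
    String description;        // description of the error
    Frequency frequency;       // systematic, random, or once
    String location;           // location of the error in the application
    String[] stepsToReproduce; // steps to reproduce the error

    String summary() {
        return application + " " + appVersion + " on " + device
                + " (firmware " + firmware + "): " + description
                + " at " + location + " [" + frequency + "]";
    }

    public static void main(String[] args) {
        DefectReport r = new DefectReport();
        r.application = "SampleApp";
        r.appVersion = "1.2";
        r.device = "Reference handset";
        r.firmware = "fw 3.1";
        r.description = "Crash when resuming from an incoming call";
        r.frequency = Frequency.RANDOM;
        r.location = "Main menu";
        r.stepsToReproduce = new String[] {
                "Launch the application", "Receive a call", "Resume the application" };
        System.out.println(r.summary());
    }
}
```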

Schedules for Testing
This will be decided in consultation with the project manager.

Risks and Assumptions

Risks
The following may impact the test cycle:
Device availability
Any new feature addition/modification to the application which is not communicated in advance.
Any delay in the software delivery schedule, including defect fixes
Any changes in the functional requirements since the requirements were signed off/formulated

Assumptions
Every release to QA will be accompanied by a release note specifying the features implemented and their impact on the module under test.
All "Show-Stopper" bugs receive immediate attention from the development team.
All bugs found in a version of the software will be fixed and unit tested by the development team before the next version is released
All documentation will be up-to-date and delivered to the system test team.
Devices, Emulators and other support tools will be fully functional prior to project commencement.
In case of lack of required equipment or changes in the feature requirements, the test schedules may need to be reviewed.

Entry and Exit Criteria


Entry Criteria
Development of the application is complete
Successful completion of unit testing for the application
Release of software to the test environment
Dedicated resources are allocated
Approved test bed to carry out system testing.
Test environment is up and working

Exit Criteria
Testing for this module will be stopped when the following criteria are met:
All test cases have been executed and at least 95% have passed successfully; the remaining failures do not impact critical functionality
All test results have been evaluated and accepted.
There are no unresolved or outstanding showstopper or high-criticality defects
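The exit thresholds above (all planned cases executed, at least 95% passing, no open showstoppers) can be checked mechanically. The thresholds come from this plan; the figures in the example are illustrative:

```java
// Sketch of the exit-criteria check: all planned cases executed,
// at least 95% of them passed, and no open showstopper or
// high-criticality defects remain. Example figures are illustrative.
public class ExitCriteria {
    static boolean met(int executed, int planned, int passed, int openCriticalDefects) {
        if (executed < planned) {
            return false; // not all planned test cases were executed
        }
        double passRate = (double) passed / executed;
        return passRate >= 0.95 && openCriticalDefects == 0;
    }

    public static void main(String[] args) {
        System.out.println(met(200, 200, 192, 0)); // 96% pass rate -> true
        System.out.println(met(200, 200, 188, 0)); // 94% pass rate -> false
        System.out.println(met(200, 200, 200, 1)); // open showstopper -> false
    }
}
```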

Test Metrics
The following metrics will be captured and reported as part of the Test Summary Report:
Test Design effort
Test execution effort
Number of Test Cases executed
Number of Defects and their classification
Test Coverage (Number of test cases executed/Number planned)
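The coverage metric in the last bullet is a straightforward ratio of executed to planned test cases; a small sketch with illustrative figures:

```java
// Test coverage as defined above: test cases executed divided by
// test cases planned, expressed as a percentage. Figures are illustrative.
public class TestMetrics {
    static double coveragePercent(int executed, int planned) {
        if (planned == 0) {
            return 0.0; // avoid division by zero when nothing is planned
        }
        return 100.0 * executed / planned;
    }

    public static void main(String[] args) {
        System.out.println(coveragePercent(180, 200)); // prints 90.0
    }
}
```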

Logging Tests and Reporting
A third-party defect-tracking tool will be used for reporting bugs found during test execution. The QA team will log defects in the tool as testing progresses.

Roles and Responsibilities
The roles and responsibilities of the testing team for the project are as follows:
Project Manager / Test Manager
Responsibilities:
Overall responsibility for managing the testing process
Approval of test documents
Approval of inspections/reviews done as per the test plan
Providing resources for the project

Test Lead
Responsibilities:
Requirement gathering
Planning and estimating for testing
Tracking and monitoring the testing as per the test plan
Reporting the project status

Test Engineer
Responsibilities:
Creating the test cases as per the test plan
Executing the test cases
Documenting the results and logging errors.
Test Deliverables
Test Plan – This document.
Test Cases Document – A document with a description and the expected result for each test case.
Test Results – The pass/fail status of each test case and the list of issues.
