Master Test Plan

Document: Master Test Plan
Author: Niko Satejeff
Version: 0.2
Date: 19.2.2024

General information

A master test plan (MTP) is a high-level document that describes the overall testing strategy, objectives, and scope for a software project or product. It provides a comprehensive overview of the key decisions, resources, risks, and deliverables involved in the testing process. It also defines the relationship and coordination among different test levels, such as unit testing, integration testing, system testing, and acceptance testing. An MTP helps to ensure that the testing activities are aligned with the project goals and requirements, and that the quality of the software is verified and validated.

Master Test Plan

1. Introduction

The purpose of this document is to give all project members an overview of what will be tested and how. The main testing target is Tukko and all the features that will be added to it. For now, all testing will be done manually.

2. Test Objectives

Our team's testing objectives are to test functionality, performance, security, and availability. Test cases will aim to identify different types of problems. The outcome should meet the team's requirements and user expectations.

3. Test Items

In the application we will test the functionality of the map, the communication between the different frameworks, and the added features.

4. Features to be Tested

The following table lists the features that will be tested:

| Feature | User stories | Priority |
|---------|--------------|----------|
| FEA101 Compare different LAM stations side by side | US001, US067, US068 | P1 |
| FEA105 Implement web app accessibility measures | US041, US042, US043, US044 | P3 |
| FEA106 Improve dark mode colors | US045 | P1 |
| FEA110 Enhance color contrast for color blindness | US046 | P1 |
| FEA112 Change branding to team and JAMK brand | US062 | P1 |
| FEA304 Localization for Swedish | US058 | P2 |
| FEA305 Localization for Norwegian | US059 | P3 |
| FEA403 Regularly scan for unknown security vulnerabilities | US017 | P1 |
| FEA404 Enforce secure coding practices | US018 | P1 |
| FEA405 Implement automated security testing pipeline | US019, US022 | P2 |
| FEA406 Harden all the containers | US056 | P3 |
| FEA407 Control access to the server | US057 | P2 |
| FEA409 Protect applications with Web Application Firewall | US055 | P3 |
| FEA517 Maintainable documentation | US061 | P1 |

5. Features not to be Tested

| Feature | User stories | Priority | Reason |
|---------|--------------|----------|--------|
| FEA516 Manual testing | US060 | P1 | Testing a test won't be necessary |

6. Approach

The testing types that will be used are unit, integration, system, acceptance, sanity, security, accessibility, and exploratory testing. All testing will be done manually unless specific cases emerge that require otherwise. If automated testing is needed, we will use Robot Framework.
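If manual test cases are later automated, they could look roughly like the sketch below. Robot Framework is the team's stated choice; this plain-Python `unittest` version only illustrates the idea, and the localization table and `translate` function are hypothetical examples, not part of Tukko.

```python
# Hypothetical sketch of automating a manual test case (here: FEA304,
# Swedish localization). The data and function are illustrative only.
import unittest

# Hypothetical localization table; not Tukko's actual translation data.
TRANSLATIONS = {
    "sv": {"traffic": "trafik", "station": "station"},
    "fi": {"traffic": "liikenne", "station": "asema"},
}

def translate(key, locale):
    """Look up a UI string for the given locale, falling back to the key."""
    return TRANSLATIONS.get(locale, {}).get(key, key)

class LocalizationTest(unittest.TestCase):
    def test_swedish_strings_exist(self):
        self.assertEqual(translate("traffic", "sv"), "trafik")

    def test_unknown_locale_falls_back_to_key(self):
        # An untranslated locale should not crash the UI.
        self.assertEqual(translate("traffic", "no"), "traffic")

if __name__ == "__main__":
    unittest.main()
```

Each automated case would replace one manual check and report pass/fail directly, which fits the deliverables in section 9.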

7. Item Pass/Fail Criteria

A test has passed when no unexpected issues occur during testing. A test has failed if its output falls outside the expected output.
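The criterion above amounts to comparing actual output against expected output. The sketch below illustrates this; the function name and the LAM station data are hypothetical, chosen only to mirror the comparison feature (FEA101).

```python
# Minimal sketch of the pass/fail criterion: a test passes only when the
# actual output matches the expected output. Names and data are hypothetical.

def evaluate_test(expected, actual):
    """Return 'PASS' when actual output matches expected, 'FAIL' otherwise."""
    return "PASS" if actual == expected else "FAIL"

# Hypothetical case: comparing two LAM stations should return both station IDs.
expected_output = ["23001", "23002"]

print(evaluate_test(expected_output, ["23001", "23002"]))  # prints "PASS"
print(evaluate_test(expected_output, ["23001"]))           # prints "FAIL"
```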

8. Suspension Criteria and Resumption Requirements

Testing of an item will be suspended when a bug is found. Testing will resume once the bug has been fixed.

9. Test Deliverables

When testing starts, a test case form will be filled out.

If a bug is found, create a new ticket for it in GitLab explaining what you were testing, how you encountered the bug, and possible fixes for it.

After the test, you also need to fill out a test result form.

10. Testing Tasks

Before starting any testing, you need to know what type of testing you are supposed to do and how to report your findings.

11. Environmental Needs

The application runs on CSC virtual machines to ensure that it works properly. On the client side we use Chrome, unless we are testing functionality in other browsers.

12. Responsibilities

Niko is the test manager and the only dedicated tester in this project, but everyone in the team will do some form of testing at some point during the project. The developer who implemented the tested part or function may not test their own work.

13. Staffing and Training Needs

Depending on the type of testing, the tester might not need any training. However, if the testing is, for example, white-box or security testing, the tester needs at least a basic understanding of how the code or the security measures work.

14. Schedule

The following schedule is only a guideline for when to start testing. Testing can start earlier or a little later depending on the circumstances.

| Feature | Test case ID | Test start-end date |
|---------|--------------|---------------------|
| FEA517 Maintainable documentation | TC517 | 22.1-26.4 |
| FEA407 Control access to the server | TC407 | 11.3-18.3 |
| FEA110 Enhance color contrast for color blindness | TC110 | 11.3-18.3 |
| FEA112 Change branding to team and JAMK brands | TC112 | 11.3-18.3 |
| FEA405 Implement automated security testing pipeline | TC405 | 11.3-18.3 |
| FEA106 Improve dark mode colors | TC106 | 18.3-25.3 |
| FEA409 Protect applications with Web Application Firewall | TC409 | 25.3-1.4 |
| FEA105 Implement web app accessibility measures | TC105 | 25.3-1.4 |
| FEA406 Harden all the containers | TC406 | 25.3-1.4 |
| FEA104 Visualize analyzed data in user friendly format | TC104 | 1.4-8.4 |
| FEA101 Compare different LAM stations side by side | TC101 | 1.4-8.4 |
| FEA304 Localization for Swedish | TC304 | 8.4-15.4 |
| FEA305 Localization for Norwegian | TC305 | 8.4-15.4 |
| FEA403 Regularly scan for unknown security vulnerabilities | TC403 | 8.4-15.4 |
| FEA404 Enforce secure coding practices | TC404 | 12.4-19.4 |

15. Risks and Contingencies

The main risks for testing are technical challenges, resource constraints, communication problems, and outside factors such as power outages.

16. Approvals

Each test case needs to be approved by at least one team member.