Using a real-world case, I would like to show how the Test Architecture, Test Strategy, and Implementation Plan are built within an existing project to improve and optimize its testing process.
This is a general-purpose CRM project. It consists of a user portal and two mobile applications (one for internal and one for external CRM end users), with Live Chat and several chatbots for solving delivery problems, placing orders, and contacting technical support.
Part of the CRM functionality is outsourced to other companies.
From a business perspective
From the customer’s business perspective, the most important CRM features are the mobile applications’ functionality, the chatbots, and the dashboards on the user portal.
Less important are development work and improvements to the external API, the reporting system, and the company’s internal portal.
In terms of data
In a test environment, all data is created by the tests themselves.
If problems arise in the production environment, defect root cause analysis is performed there.
Unit tests use a predefined set of test data.
The third-party companies to which part of the functionality is outsourced provide test accounts with a set of test data.
From the application point of view
There are web applications, mobile applications, APIs and external services that provide interfaces for:
- mass e-mailing
- creation of electronic documents using templates
- IP telephony service.
In terms of technology
Each module is covered by unit tests in the main programming language of the module.
Integration and main business scenarios are tested by end-to-end tests for web applications and for mobile applications.
After studying the current state of the project, the following gaps were identified:
- It is not possible to determine test coverage because of an inconsistent approach to writing and storing test cases: manual testers use checklists, while test automation developers use Cucumber feature files.
- Automated testing on the test environment is not possible because the test data became incompatible after several updates.
- Chatbot testing is not automated, except for a couple of basic scenarios on mobile applications.
- There is no performance or security testing in the project.
- Testers who validate releases on the production environment have access to users’ personal data.
Opportunities and Solutions
After careful analysis of existing testing processes, the following improvement steps have been proposed:
- Introduce a unified approach to writing and storing test cases. Establish a transparent connection between requirements and test cases. For automated tests, implement automated execution reporting.
- Split test data into Master data (the initial data set in the database needed to start testing) and Test data for testing mobile applications and chatbots.
- Deploy test environments following the “Infrastructure as Code” approach.
- Integrate static code analysis tools into CI/CD pipelines to enforce security and performance requirements.
- Shift chatbot testing “to the left”: before the chatbot tests start, prepare the necessary test data using automated scripts, then send chatbot commands via the API and check the expected results. Redistribute the testing scope: verify the entire functionality of the chatbots via the API, and check only the integration and the presence of graphic elements and controls on mobile devices.
- Implement performance testing of chatbots. To eliminate the influence of network connections of mobile devices, performance testing should be carried out at the API level.
- Generate de-personalized test data based on production data. Shift testing “to the right”: use “production-like data” on test environments and check the correctness of data updates after the release of new functionality.
- Generate a large amount of test data and start the performance testing of web applications.
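To make the API-level chatbot checks above more concrete, here is a minimal sketch of such a test. Everything in it is hypothetical: the endpoint, the command format, and the `ChatbotClient` helper stand in for the project's real chatbot API; an injectable transport lets the test run without a live service.

```python
import json
from urllib import request


class ChatbotClient:
    """Minimal API-level chatbot client (hypothetical endpoint and protocol)."""

    def __init__(self, base_url, transport=None):
        self.base_url = base_url
        # An injectable transport lets tests run without a live chatbot service.
        self.transport = transport or self._http_post

    def _http_post(self, url, payload):
        req = request.Request(
            url,
            data=json.dumps(payload).encode(),
            headers={"Content-Type": "application/json"},
        )
        with request.urlopen(req) as resp:
            return json.loads(resp.read())

    def send_command(self, session_id, text):
        return self.transport(
            f"{self.base_url}/messages", {"session": session_id, "text": text}
        )


def test_order_status_command():
    # Stub transport simulating the chatbot's expected reply.
    def fake_transport(url, payload):
        assert payload["text"] == "order status 42"
        return {"intent": "order_status", "order_id": 42, "state": "shipped"}

    bot = ChatbotClient("https://crm.example.com/chatbot", transport=fake_transport)
    reply = bot.send_command("session-1", "order status 42")
    assert reply["intent"] == "order_status"
    assert reply["state"] == "shipped"
```

The same client, pointed at a real URL with the default transport, could drive both functional and performance scenarios at the API level, bypassing the mobile devices' network connections.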
A note: I use […] to indicate places where specific tools, approaches, utilities, or technologies would be named in a real test strategy document; for the purposes of this article, it doesn’t matter which ones.
Test Cases and Testing Scope:
Test cases are written in a single format using a single Test Management tool […].
Prioritization of test cases depends on business priorities.
The results of manual test case execution are updated manually; automated testing results are reported automatically using the following technology: […].
A direct connection between requirements and test cases is established via […].
The basic principle in determining the test scopes: everything that can be tested earlier and in isolation from other systems should be tested earlier.
Master data is stored in […] and updated by […] every time the database structure is updated.
Test data for testing mobile applications is created by […]; scripts are stored in […] and updated by […].
Test data for performance testing is generated by […]; scripts are stored in […] and updated by […].
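As an illustration of the kind of script hidden behind the […] placeholders, here is a minimal sketch of a seeded generator for performance-testing data. The customer schema, field names, and CSV output are invented for the example; a real generator would mirror the project's actual database schema.

```python
import csv
import io
import random
import string
import uuid


def random_name(rng, length=8):
    return "".join(rng.choices(string.ascii_lowercase, k=length)).capitalize()


def generate_customers(count, seed=0):
    """Yield synthetic customer records for performance testing.

    The record layout (id, name, email, region) is a hypothetical example.
    """
    rng = random.Random(seed)  # seeded, so every run produces the same data set
    for _ in range(count):
        name = random_name(rng)
        yield {
            "id": str(uuid.UUID(int=rng.getrandbits(128))),
            "name": name,
            "email": f"{name.lower()}@example.com",
            "region": rng.choice(["north", "south", "east", "west"]),
        }


def write_csv(records, stream):
    writer = csv.DictWriter(stream, fieldnames=["id", "name", "email", "region"])
    writer.writeheader()
    writer.writerows(records)


# Generate a batch in memory; a real script would scale the count up
# and write to a file or load directly into the test database.
buf = io.StringIO()
write_csv(generate_customers(1000), buf)
```

Seeding the generator makes performance runs reproducible: the same data set can be recreated on any test environment without copying files around.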
Automated testing developers provide DevOps engineers and developers with a set of test suites that can be run on any test environment.
Component testing is performed using mocks or simulators […].
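A component test with a mock might look like the sketch below. The `DocumentService` and the template API it calls are hypothetical stand-ins for the real external document-generation service mentioned earlier.

```python
from unittest import mock


class DocumentService:
    """Creates electronic documents via an external template API (hypothetical)."""

    def __init__(self, template_api):
        self.template_api = template_api

    def create_contract(self, customer):
        # Delegates rendering to the external service.
        return self.template_api.render("contract", {"customer": customer})


def test_create_contract_uses_template_api():
    api = mock.Mock()
    api.render.return_value = b"%PDF-1.7 ..."
    service = DocumentService(api)

    result = service.create_contract("ACME Ltd")

    # The component is verified in isolation: no real template service needed.
    api.render.assert_called_once_with("contract", {"customer": "ACME Ltd"})
    assert result.startswith(b"%PDF")
```

Because the external dependency is injected, the same component can be tested early and in isolation, in line with the basic scoping principle above.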
API testing of chatbots is carried out using […] technologies.
End-to-end scripts for mobile devices are executed on […] simulators/devices using […] technologies.
To test integration with third-party systems, test data […] is used; this test data is stored in […].
Web application testing is performed using […] frameworks and […] technologies.
Testing environments for development – […]
The environment for writing autotests – […]
Integration Testing / End-to-End testing environment – […]
CI/CD pipeline environments – […]
Mobile applications testing environment – […]
Manual testing environment – […]
Testers and developers participate in the daily analysis of failed tests in CI/CD pipelines.
After each major release, test managers analyze test coverage, passed/failed tests, and defects, and decide whether to release the code to Production.
Each major release is tested for Performance and Security.
The CI/CD pipeline runs the […] test suites.
This is a high-level plan; I want to show how, once the updated testing strategy has been developed, specific tasks for its implementation are formed.
- Develop a test data generator based on production data.
- Develop test data requirements for testing chatbots.
- Based on the requirements from task #2, write scripts to generate test data for chatbot testing.
- Conduct a workshop with testers and developers on testing chatbots and using scripts to create test data.
- Configure autotest suites for the updated CI/CD pipelines.
- Expand the performance testing environment.
- Develop scenarios for performance testing. Write automatic tests.
- Develop scenarios for security testing.
- Perform security testing of the latest major release on the production environment.
- Write/implement libraries for testing the chatbots via API in the test framework.
- Export existing test cases to Test Management Tool.
- Implement automated reporting in all automated test frameworks.
- In the Project Management Tool, configure a dashboard with information about test coverage of each module, test results, and registered defects.
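The first task above (a test data generator based on production data) could start from a de-personalization step like this sketch. The field list is invented for the example, and a real implementation must of course satisfy the project's data protection requirements.

```python
import hashlib

# Hypothetical list of fields containing personal data.
SENSITIVE_FIELDS = {"name", "email", "phone"}


def pseudonym(value, salt="test-env-salt"):
    """Deterministically replace a sensitive value with a stable pseudonym.

    Determinism preserves referential integrity: the same production value
    maps to the same pseudonym in every exported table.
    """
    digest = hashlib.sha256((salt + value).encode()).hexdigest()[:12]
    return f"user-{digest}"


def depersonalize(record):
    # Non-sensitive fields pass through unchanged, keeping the data
    # "production-like" for testing on lower environments.
    return {
        key: pseudonym(str(value)) if key in SENSITIVE_FIELDS else value
        for key, value in record.items()
    }


prod_row = {"id": 7, "name": "Jane Roe", "email": "jane@corp.com", "status": "active"}
safe_row = depersonalize(prod_row)
```

With such a step in place, testers validating releases no longer need access to real users' personal data, closing the last gap identified above.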
What did we get as a result of all these changes?
- Improved transparency of the testing process in the project.
- Improved measurability and manageability of the testing process.
- Built stable test environments due to more accurate test data.
- Achieved independence of development and testing in teams.
- Achieved a significant reduction in testing time, not only through faster test execution but also through the redistribution of testing levels.
- Non-functional testing became mandatory for every major release.