Building a Test Architecture and Test Strategy in a CRM Project

Using a real case, I would like to show how a Test Architecture, Test Strategy, and Implementation Plan are built within an existing project to improve and optimize the testing process.

Project background

This is a general-purpose CRM project. It consists of a user portal and two mobile applications (one for internal and one for external CRM end users), with Live Chat and several chatbots for solving delivery problems, placing orders, and contacting technical support.

Part of the CRM functionality is outsourced to other companies.

From a business perspective

From the customer’s business perspective, the most important CRM features are the mobile applications’ functionality, the chatbots, and the dashboards on the user portal.

Less important are the development and improvement of the external API, the reporting system, and the company’s internal portal.

In terms of data

In a test environment, all data is created by the tests themselves.

If problems arise in the production environment, defect root cause analysis is performed there.

Unit tests use a predefined set of test data.

The third-party companies to which part of the functionality is outsourced provide test accounts with a set of test data.

From the application point of view

There are web applications, mobile applications, APIs and external services that provide interfaces for:

  • mass e-mailing
  • creation of electronic documents using templates
  • IP telephony service.

In terms of technology

Each module is covered by unit tests in the main programming language of the module.

Integration and the main business scenarios are covered by end-to-end tests for both the web and the mobile applications.

Gap analysis

After studying the current state of the project, the following gaps were identified:

  1. It is not possible to determine test coverage because of an inconsistent approach to writing and storing test cases: manual testers use checklists, while test automation developers use Cucumber feature files.
  2. Automated testing on the test environment is not possible because the test data became incompatible after several updates.
  3. Chatbot testing is not automated, except for a couple of basic scenarios in the mobile applications.
  4. There is no performance or security testing in the project.
  5. Testers who validate releases on the production environment have access to users’ personal data.

Opportunities and Solutions

After careful analysis of existing testing processes, the following improvement steps have been proposed:

  1. Introduce a unified approach to writing and storing test cases. Establish a transparent connection between requirements and test cases. For automated tests, implement automated execution reporting.
  2. Break test data into Master data (the initial data set in the database needed to start testing) and Test data for testing the mobile applications and chatbots.
  3. Deploy test environments using the “Infrastructure as Code” approach.
  4. Integrate static code analysis tools into the CI/CD pipelines to achieve compliance with security and performance requirements.
  5. Shift chatbot testing “to the left”: before running the chatbot tests, prepare the necessary test data with automated scripts, then use the API to send chatbot commands and verify the expected results. Redistribute the testing scope: verify the entire chatbot functionality via the API, and on mobile devices check only the integration and the presence of graphic elements and controls (see the sketch after this list).
  6. Implement performance testing of the chatbots. To eliminate the influence of the mobile devices’ network connections, carry it out at the API level.
  7. Based on production data, generate de-personalized test data and shift testing “to the right”: use production-like data in the test environments and check the correctness of data updates after new functionality is released.
  8. Generate a large amount of test data and start performance testing of the web applications.
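
To make item 5 concrete, here is a minimal sketch of an API-level chatbot check, written with pytest and requests. The base URL, endpoints, payload fields, and the data-seeding helper are hypothetical placeholders rather than the project’s actual chatbot API; the same approach also supports the API-level performance checks from item 6.

```python
# A minimal sketch of an API-level chatbot test (pytest + requests).
# All URLs, endpoints, and payload fields below are hypothetical placeholders.
import requests

BASE_URL = "https://crm-test.example.com/api/v1"  # assumed test-environment URL


def create_delayed_order(customer_id: str) -> str:
    """Seed the test data the scenario needs (hypothetical data-preparation endpoint)."""
    response = requests.post(
        f"{BASE_URL}/orders",
        json={"customerId": customer_id, "status": "DELAYED"},
        timeout=10,
    )
    response.raise_for_status()
    return response.json()["orderId"]


def test_chatbot_reports_delayed_delivery():
    # Arrange: prepare test data with an automated script instead of manual setup.
    order_id = create_delayed_order(customer_id="test-customer-001")

    # Act: send the chatbot command via the API, bypassing the mobile UI entirely.
    response = requests.post(
        f"{BASE_URL}/chatbot/messages",
        json={"sessionId": "test-session-001", "text": f"Where is my order {order_id}?"},
        timeout=10,
    )

    # Assert: the chatbot answers with the expected delivery status.
    assert response.status_code == 200
    reply = response.json()["reply"]
    assert order_id in reply and "delayed" in reply.lower()

    # The same API-level call is a natural unit for performance checks (item 6):
    # response.elapsed measures server response time without mobile network noise.
    assert response.elapsed.total_seconds() < 2.0
```

Running such a suite in the CI/CD pipeline gives fast feedback on the chatbots long before the mobile builds are ready, which is the point of shifting this testing to the left.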

Test strategy

In brief: I use […] to indicate places where specific tools, approaches, utilities, or technologies would be named in the actual test strategy document; for the purposes of this article, it does not matter which ones.

Test Cases and Testing Scope:
Test cases are written in a single format using a single Test Management tool […].
Prioritization of test cases depends on business priorities.
Results of manual test case execution are updated manually; automated testing results are reported automatically using the following technology […].
A direct connection between requirements and test cases is established in the following way: […].
The basic principle in determining the test scope: everything that can be tested earlier and in isolation from other systems should be tested earlier.

Data:
Master data is stored in […] and is updated by […] every time the database structure is updated.
Test data for mobile application testing is created by […]; the scripts are stored in […] and updated by […].
Test data for performance testing is generated by […]; the scripts are stored in […] and updated by […].

Test Automation:
Test automation developers provide DevOps engineers and developers with a set of test suites that can be run on any test environment.
Component testing is performed using mocks or simulators […] (a mock-based sketch follows this block).
API testing of the chatbots is carried out using […] technologies.
End-to-end scripts for mobile devices are executed on […] simulators/devices using […] technologies.
To test integration with third-party systems, test data […] is used; the test data is stored in […].
Web application testing is performed using […] frameworks and […] technologies.
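
Whichever mocking or simulation technology the […] stands for, the idea of component testing in isolation looks roughly like the sketch below. The OrderNotifier class and the EmailGateway interface are hypothetical stand-ins for a CRM module and the external mass e-mailing service.

```python
# A minimal sketch of a component test with the external service mocked out
# (unittest.mock). OrderNotifier and the gateway interface are hypothetical.
from unittest.mock import Mock


class OrderNotifier:
    """Example CRM component: notifies a customer when an order status changes."""

    def __init__(self, email_gateway):
        self.email_gateway = email_gateway

    def notify_status_change(self, customer_email: str, order_id: str, status: str):
        subject = f"Order {order_id} is now {status}"
        self.email_gateway.send(to=customer_email, subject=subject)


def test_notifier_sends_email_through_gateway():
    # The real mass e-mailing service is replaced by a mock, so the component
    # can be tested early and in isolation from external systems.
    gateway = Mock()
    notifier = OrderNotifier(email_gateway=gateway)

    notifier.notify_status_change("user@test.example.com", "ORD-42", "SHIPPED")

    gateway.send.assert_called_once_with(
        to="user@test.example.com", subject="Order ORD-42 is now SHIPPED"
    )
```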

Test environments:
Testing environments for development – […]
The environment for writing autotests – […]
Integration Testing / End-to-End testing environment – […]
The CI/CD pipeline uses the following environments – […]
Mobile applications testing environment – […]
Manual testing environment – […]

Testing frequency:
Testers and developers participate in the daily analysis of failed tests in CI/CD pipelines.
After each major release, test managers analyze test coverage, passed/failed tests, and defects, and decide whether to release the code to Production.
Each major release is tested for performance and security.
The CI/CD pipeline includes the […] suites.

Implementation plan

In brief: this is a high-level plan. I want to show how, once the updated test strategy has been developed, the specific tasks for its implementation are formed.

  1. Develop a test data generator based on production data (see the sketch after this list).
  2. Develop test data requirements for testing chatbots.
  3. Based on the requirements from task #2, write scripts to generate test data for chatbot testing.
  4. Conduct a workshop with testers and developers on testing chatbots and using the scripts to create test data.
  5. Configure the automated test suites for the updated CI/CD pipelines.
  6. Expand the performance testing environment.
  7. Develop scenarios for performance testing. Write automatic tests.
  8. Develop scenarios for security testing.
  9. Perform security testing of the latest major release on the production environment.
  10. Write/implement libraries in the test framework for testing the chatbots via the API.
  11. Export the existing test cases to the Test Management Tool.
  12. Implement automated reporting in all automated test frameworks.
  13. In the Project Management Tool, configure a dashboard with information about test coverage of each module, test results, and registered defects.
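
As an illustration of task #1, below is a minimal sketch of a de-personalizing test data generator. It assumes a CSV export of production records; the column names (customer_name, email, phone) and the masking rules are hypothetical and would follow the project’s actual data model and privacy requirements.

```python
# A minimal sketch of a de-personalizing test data generator (task #1).
# Column names and masking rules are hypothetical examples.
import csv
import hashlib
import random


def pseudonymize(value: str, prefix: str) -> str:
    """Replace a personal value with a stable, non-reversible pseudonym."""
    digest = hashlib.sha256(value.encode("utf-8")).hexdigest()[:10]
    return f"{prefix}_{digest}"


def depersonalize_row(row: dict) -> dict:
    """Mask personal fields; keep business fields intact for realistic testing."""
    return {
        **row,
        "customer_name": pseudonymize(row["customer_name"], "customer"),
        "email": pseudonymize(row["email"], "user") + "@test.example.com",
        "phone": "+1000000" + str(random.randint(1000, 9999)),
    }


def generate_test_data(source_csv: str, target_csv: str) -> None:
    """Read an export of production data and write a production-like test set."""
    with open(source_csv, newline="") as src, open(target_csv, "w", newline="") as dst:
        reader = csv.DictReader(src)
        writer = csv.DictWriter(dst, fieldnames=reader.fieldnames)
        writer.writeheader()
        for row in reader:
            writer.writerow(depersonalize_row(row))


if __name__ == "__main__":
    generate_test_data("production_export.csv", "test_data.csv")
```

The key design choice here is that personal fields are replaced with stable pseudonyms rather than random noise, so relationships between records survive and the test environment stays production-like.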

Summary

What have we got as a result of all these changes?

  1. Improved transparency of the testing process in the project.
  2. Improved measurability and manageability of the testing process.
  3. Built stable test environments due to more accurate test data.
  4. Achieved independence of development and testing in teams.
  5. Achieved a significant reduction in testing time, not only due to faster test execution but also due to the redistribution of testing across levels.
  6. Non-functional testing became mandatory for every major release.
