
QA process at Miro

We have been working on our current QA process for about two years, and we still keep improving it. This process may seem obvious, but when we started to implement it in a new team that consisted entirely of new developers, we realized that it was difficult to do right away. Many people are used to working differently and need to make a lot of changes at once to switch, which is difficult. At the same time, it is ill-advised to implement this process in parts, because it can negatively affect the quality.

What do we do? We do preliminary preparation for each block of the development process: task decomposition, estimation and planning, development itself, investigative testing, and release. This preparation does not consist of simply throwing old parts out of the process but of replacing them with practices that increase quality.

In this article, I will talk in detail about our testing process at each stage of creating a new feature and also about the introduced changes that have increased the quality and speed of development.


The transition from Waterfall to Scrum

A few years ago, we realized that our development process, which was based on the classic waterfall model, needed to be adjusted to deliver value to users faster and more often. Scrum was great for this because it allowed us to end each sprint with an increment.

We introduced Scrum events and short iterations. Everything looked good in theory:

  • have the functionality implemented by Wednesday so we can release at the end of the week;
  • test it on Thursday;
  • fix the bugs;
  • roll it out to the production environment on Friday.

In reality, it turned out differently because we had not in fact changed the process but only put our waterfall inside a weekly sprint. As a result, the functionality was most often ready by Friday (not by Wednesday) because we could not correctly evaluate the tasks and because we were faced with new, higher priority tasks in the middle of a sprint. Testing was completely left out of the sprint process.

Then we moved the preparation of acceptance scenarios to the beginning of a sprint. This immediately produced results: scenario preparation takes approximately 60 percent of the testing time, and going through prepared scenarios is quick. In addition, we learn about nonstandard cases before development starts and can immediately take them into account in planning.

Stages of the QA process

Kick-off meeting, example mapping, acceptance scenarios

Say a product manager gives the team a user story or a technical lead brings the technical story of a component’s development.

The first thing you need to do is decompose the story. To do that:

  • The team forms an understanding of the story’s requirements that is common for all participants, using, among other things, an exhaustive list of questions for the product manager. This also helps to find requirements that were not taken into account initially. For meetings, we use an approach called “example mapping” (a map of test cases), which greatly improves their efficiency. It’s important not to apply this approach formally without an understanding of how it works, because in that case it will not work, and the team will end up with a negative attitude toward such changes. Learn more about example mapping.
  • The UX designer anticipates user behavior and creates mockups.
  • The developer designs the technical side of the implementation.
  • The QA engineer writes acceptance criteria for each story and creates acceptance scenarios based on them (not a draft, but a complete list of the tests that need to be performed to ensure that everything is checked).

An acceptance scenario (acceptance criteria are the part of the definition of done) is not just a simple list of test cases, but the result of an exhaustively detailed decomposition of the task. After that, you should be left in a state of “there is nothing more to discuss here.”
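To make this concrete, here is a minimal sketch of how such a complete list of acceptance scenarios can be turned into an automated checklist with pytest. The "rename board" story, the `Board` class, and the scenario names are hypothetical illustrations, not from the article:

```python
# Hypothetical acceptance scenarios for a "rename board" story,
# expressed as parametrized pytest cases. The toy in-memory Board
# class stands in for the real system under test.
import pytest


class Board:
    """Toy stand-in for the system under test."""
    MAX_NAME_LEN = 60

    def __init__(self, name="Untitled"):
        self.name = name

    def rename(self, new_name):
        if not new_name.strip():
            raise ValueError("name must not be empty")
        if len(new_name) > self.MAX_NAME_LEN:
            raise ValueError("name too long")
        self.name = new_name


# Each tuple is one positive scenario from the decomposition:
# (scenario description, input, expected outcome).
SCENARIOS = [
    ("renames with a valid name", "Roadmap Q3", "Roadmap Q3"),
    ("keeps the name verbatim, no trimming", "  spaced  ", "  spaced  "),
]


@pytest.mark.parametrize("description,new_name,expected", SCENARIOS)
def test_positive_scenarios(description, new_name, expected):
    board = Board()
    board.rename(new_name)
    assert board.name == expected


@pytest.mark.parametrize("bad_name", ["", "   ", "x" * 61])
def test_negative_scenarios(bad_name):
    board = Board()
    with pytest.raises(ValueError):
        board.rename(bad_name)
```

The point of the exercise is that the parametrized lists are the acceptance scenarios: once the decomposition is truly exhaustive, the test file is a direct transcription of it.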

Backlog grooming and sprint planning

At this stage, we estimate the tasks, including test coverage tasks, and think through the investigative testing that may be needed: load testing, security testing, consumer testing, etc. Then, during sprint planning, we either explicitly take test coverage tasks into the sprint or write acceptance criteria for the main tasks in which tests are also explicitly taken into account.

Test coverage is an integral part of a task, and test writing is a regular job of a developer. But not everyone is used to that yet, so it is better to take test coverage tasks into a sprint explicitly, at least in the early stages. Now, fortunately, we are already encountering cases where the developers themselves remind us that we have not made a scenario for a specific task.
If we introduce restrictions and rules (for example, you cannot merge a task unless all acceptance scenarios are automated and passed successfully), then the only way to speed up time to market is to improve quality. We can do it faster only if we can do it better.
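As an illustration only (the scenario-status structure and field names are invented, not Miro's tooling), such a merge restriction can be sketched as a small gate that fails unless every acceptance scenario is both automated and passing:

```python
# Hypothetical pre-merge gate: refuse to merge while any acceptance
# scenario is not yet automated or is currently failing. The scenario
# list is an invented in-memory structure standing in for CI metadata.
def can_merge(scenarios):
    """Return (ok, reasons): ok is True only if every scenario
    is both automated and passing."""
    reasons = []
    for s in scenarios:
        if not s.get("automated"):
            reasons.append(f"{s['name']}: not automated")
        elif not s.get("passing"):
            reasons.append(f"{s['name']}: failing")
    return (not reasons, reasons)


scenarios = [
    {"name": "rename with valid name", "automated": True, "passing": True},
    {"name": "rename with empty name", "automated": False, "passing": False},
]

ok, reasons = can_merge(scenarios)
# ok is False here: one scenario has not been automated yet,
# so the merge is blocked until quality catches up.
```

With a gate like this, cutting corners on tests stops being a way to ship faster, which is exactly the incentive the rule is meant to create.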

Improving quality reduces the number of iterations and development time. In our experience, this has allowed us to reduce development time by more than half.


Development and manual testing

The main difficulty here is a large number of development iterations. For example, one of the features in our product went through twenty-six iterations. Why? Because previously, instead of testing the code themselves, an engineer handed it over to the QA team, which led to errors and many rounds of fixes and improvements.

Here is what it looked like:

  1. The developer implements the task but does not thoroughly test it because they know that a QA engineer will check everything anyway.
  2. The QA engineer finds errors and returns the task for revision.
  3. The developer fixes the found errors but makes new ones.
  4. The cycle is repeated many times.

As a result, no one can guarantee the quality of the functionality. The developer does not remember what they did in the previous iteration, and the QA engineer does not remember exactly what they checked and when. Both suffer from blurred perception (it is difficult to look at the same thing again and again) while also being busy with several features at different stages of development at once.

What do we do about it? We could have transferred manual testing from QA engineers to developers, but this could have led to a loss of quality. Changes in processes are needed only when they guarantee an increase in the quality of the result. Therefore, instead of simply removing manual testing, we replaced it with new tasks that improve quality:

  • Preparation of acceptance scenarios, thanks to which a developer knows exactly what to check and has every opportunity to do that.
  • Test coverage at different levels. We release daily and have about thirty teams that make changes to the codebase. At the same time, our website, frontend, and backend are three monoliths divided into modules and components, but there are still interconnections between them that can break.
  • Test automation. We do test coverage during development; for this, all QA engineers in the company can write autotests. Test coverage is organized differently in different teams: in some teams, all types of tests are written by developers (unit, integration, component, module, E2E); in others, QA engineers write API tests or all autotests.
  • Validation of positive scenarios together with the product owner. This allows the team to understand the reasoning behind the task better and to walk through a user story one more time.
  • Verification of the design and layout. This stage takes place before the merge request, and both the designer and the frontend developer participate in it.

Our product works in different browsers as well as several desktop and mobile applications. Due to the large number of changes that affect many browsers and applications, we are unable to verify today what we implemented yesterday. It is impossible to test everything frequently manually, so automation becomes a necessity rather than a fashion choice.

We have mandatory low-level testing. For example, method logic should be covered at the unit level. The number of cases is far too large to cover at the E2E level: it is essentially the Cartesian product of all possible uses of every method.
With a large number of users, there will always be someone who will trigger a specific variation, and without low-level tests, it can be missed in testing. This is one of the main reasons for bugs in the production environment.
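To illustrate why low-level coverage scales where E2E coverage cannot, here is a sketch with a hypothetical permission rule (the function and roles are invented for the example): every input combination is checked at the unit level, where each combination would otherwise cost one slow E2E run.

```python
# Hypothetical example: exhaustively covering the input variations of
# a small permission rule at the unit level.
import itertools


def can_edit(role, board_locked, is_owner):
    """Toy rule: owners can always edit; otherwise editing requires
    the editor role and an unlocked board."""
    if is_owner:
        return True
    if board_locked:
        return False
    return role == "editor"


def test_all_combinations():
    # 2 roles x 2 lock states x 2 ownership flags = 8 cases,
    # checked in milliseconds instead of 8 separate E2E runs.
    for role, locked, owner in itertools.product(
        ["viewer", "editor"], [False, True], [False, True]
    ):
        result = can_edit(role, locked, owner)
        if owner:
            assert result is True
        elif locked:
            assert result is False
        else:
            assert result == (role == "editor")
```

An E2E suite then only needs a handful of journeys through the UI; the combinatorial explosion of inputs is absorbed at the cheapest level.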


Now the developer knows that no one will test the functionality for them and that everything they merge will be automatically deployed to the production environment. Therefore, developers do manual testing — not because there is no QA engineer in this workflow, but because it increases the level of responsibility and quality. In any workflow model, a developer must ensure that everything works as they planned: not putting blind trust in their experience, but checking everything according to it. Another thing to note is that the developer does not want to do manual testing, which stimulates them to cover their code with tests. Also, unit tests help them to avoid double-checking the same functionality multiple times, which means that we do not transfer the problem of blurred perception from QA engineer to developer.

There are cases where some details cannot be thought out at the previous stages; when this happens, a QA engineer gets involved in the development stage to change the scenarios or for manual testing. But these cases are isolated.

Thanks to these changes, we implement both simple and large, complex tasks (that take, for example, a month of work for five engineers) in a few iterations, often in just one. Internally, we’ve agreed that backend tasks should be implemented in one or two iterations, five tops if the frontend is complicated. If the number of iterations increases, this is a signal for us that there is a problem in our processes.

Feature checkmarks and investigative testing

By removing routine testing tasks from the QA engineers, we freed up 80 percent of their time. It is very easy for the team to fill that freed-up time, but doing so does not always improve quality. We spend it on additional testing, which helps us dig deeper and find nonstandard cases that previously slipped into the production environment unnoticed.

A big feature is usually implemented by several people, is a sequence of tasks, and is released in parts, and the functionality itself is initially hidden from users (we use a technique of feature checkmarks for this). When a feature is deployed on the production server and is still hidden from the user, a QA engineer performs all the investigative tests that were worked out during backlog refinement (grooming): load tests, security tests, consumer tests, etc. For example, they could allocate time to break the whole functionality purposefully. The QA engineers have everything they need for that: they understand how it works, since they analyzed it in detail at the meetings and during the writing of acceptance scenarios, and their perception is not blurred since they did not participate in the development process.
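A feature checkmark of the kind described above can be sketched as a minimal feature flag. The flag store, the flag name, and the toolbar function below are hypothetical illustrations, not Miro's actual implementation:

```python
# Minimal sketch of a feature checkmark (feature flag): new code is
# deployed to production but stays hidden until the flag is enabled.
class FeatureFlags:
    def __init__(self):
        self._enabled = set()

    def enable(self, name):
        self._enabled.add(name)

    def disable(self, name):
        self._enabled.discard(name)

    def is_enabled(self, name):
        return name in self._enabled


flags = FeatureFlags()


def render_toolbar():
    items = ["select", "draw", "text"]
    # The new tool ships dark: present in the code and testable by QA
    # on production, but invisible to users until the flag is enabled.
    if flags.is_enabled("smart-connectors"):
        items.append("smart-connectors")
    return items
```

Releasing the feature then amounts to flipping the flag (removing the checkmark, in the article's terminology) with no new deployment, which is what makes investigative testing on the production environment possible before users see anything.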

At this stage, a product manager must ensure that the implemented functionality is the same as the planned functionality. They verify that the result matches the task description, check the main positive scenarios, and try to use the feature.

Investigative testing covers the new functionality in its entirety as well as how it fits into the current product: how consistent it is, how it interacts with other functionality, etc.


Release and monitoring

After all investigative tests are performed, we release the functionality to users (remove the feature checkmark), and the team begins to monitor the feature. The release process itself consists of several stages, but I will talk about that in another article.


Summary of all the changes we made to the testing process

Testing no longer takes place at the end of a sprint but is distributed throughout it.
The quality of the result is not the responsibility of a QA engineer alone but of the whole team. Previously, a QA engineer took responsibility for everything done by the team because they were the only one who did the testing and gave the go-ahead for release.

Now, everyone has a role in maintaining quality:

  • The designer is responsible for the consistency of the UX and the ease of use of the feature;
  • The developer is responsible for test coverage, including E2E;
  • The QA engineer is responsible for the tricky cases of interconnection with other parts of the system and for the various testing approaches that help test the feature in its entirety;
  • The product manager ensures that the team implements the features that users need, or rather, that the implemented feature meets all the originally defined criteria.

The diagram of the whole QA process in an easy-to-view format.

P.S. This article was first published on Medium.
Tags: miro, realtimeboard, testing, qa management, qa testing, testing framework
Hubs: Miro, IT systems testing, Web services testing, Mobile applications testing, Development Management
Sergey Shabalin

Habr blog