Custom Software Testing: How We Use the Pareto Principle and Escape Murphy's Law
The quality of custom software cannot be guaranteed without proper testing.
| Both the client and the software developers need to be sure that: | Testing procedures used: |
| --- | --- |
| The software works correctly when an end user uses it in an expected, intended manner. | “Positive” or “smoke” testing. |
| The software works correctly when an end user uses it in an unexpected or unintended manner. | “Negative” testing. |
| The software still works correctly after it has been changed or new features have been added (for software that has already been tested). | “Regression” testing. |
How We Use the Pareto Principle
“20% of your time produces 80% of your results, and vice versa” — the Pareto Principle in custom software testing is about focusing on positive test cases first.
A Test Case is a scripted sequence of steps performed during testing: a textual description of which steps must be done and what result is expected. It is written by a manual testing specialist based on User Stories and in accordance with the Software Requirements Specification document.
In custom software testing, test cases are divided into two types:
- The positive test cases check whether the software works without bugs in the common, obvious usage scenarios that about 80% of the software's users follow.
- The negative test cases check whether the software works without bugs in the less likely scenarios that the remaining 20% of users follow.
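For illustration, here is a minimal sketch of what the two kinds of test cases can look like once automated in pytest style; the `parse_age` function and its validation rules are hypothetical, not taken from any real project:

```python
# Hypothetical function under test: parses a user's age from form input.
def parse_age(value: str) -> int:
    age = int(value)  # raises ValueError for non-numeric input
    if not 0 <= age <= 150:
        raise ValueError("age out of range")
    return age

# Positive test case: the obvious scenario most users follow.
def test_parse_age_positive():
    assert parse_age("42") == 42

# Negative test cases: unlikely or invalid inputs from the remaining users.
def test_parse_age_negative():
    for bad in ("", "abc", "-5", "999"):
        try:
            parse_age(bad)
        except ValueError:
            continue
        raise AssertionError(f"expected ValueError for {bad!r}")
```

Run under pytest, both functions are discovered automatically; the positive case is cheap to write and catches the bulk of everyday breakage, while the negative cases guard the edges.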
If a QA specialist tries to run both positive and negative test cases for every newly developed feature at once, the programmers end up waiting for the results. What if there is one tester for every 4-5 programmers, and all of them have just sent recently developed features for testing? What about more complex projects with 2 testers and 8-10 programmers?
As a result, all the developers are simply waiting for feedback from the testers, which lowers productivity and causes client dissatisfaction!
How to use the Pareto principle to boost productivity?
- First, the manual QA specialist runs only the positive test cases for each task and reports the results to the first developer, then does the same for the second developer's task, and so on. The positive test cases take about 20% of the manual QA specialist's time but find about 80% of all bugs.
- The manual QA specialist does not run the negative test cases for a task until all of its positive test cases have passed.
- Then, if the project budget allows, the negative test cases are performed (the remaining 80% of the manual QA specialist's time), finding the remaining ~20% of bugs.
In detail, the general process of manual software testing looks like this:
- Before a programmer marks a feature as ready for testing by the manual QA specialist, he runs the positive test cases himself, using the test cases previously written by the QA tester.
- The manual tester performs positive testing for each task and reports feedback to the programmer ASAP in JIRA.
- The manual tester does not wait for the programmer to fix the bugs; he continues positive testing on other tasks.
- Once the developer has fixed the bugs, the manual QA specialist performs regression testing.
- If no bugs are found, the manual tester moves on to negative testing.
Organizing the process this way boosts the development team's productivity for the same money.
How We Escape Murphy's Law
“Anything that can go wrong will go wrong” — Murphy's Law is about being prepared for the worst-case scenarios and not letting them happen. The worst scenario of all is when recently implemented features break previously implemented ones. You can escape Murphy's Law by making regression testing based on autotests mandatory.
An autotest is a script written by an automation testing specialist to automate the execution of a test case. Over time, most manual test cases should be transformed into autotests. Autotests are required for regression testing: it makes no sense to perform regression testing manually (tens or hundreds of test cases for large software projects), because that would violate the Pareto principle in software testing.
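As a sketch of what a regression run looks like in practice, the snippet below collects every autotest and executes the whole suite after a change, the way a CI job would. The `slugify` feature and the test names are hypothetical stand-ins:

```python
# Existing, previously tested feature: turn "Hello World" into "hello-world".
def slugify(title: str) -> str:
    return "-".join(title.lower().split())

# Autotests that were once manual test cases, now scripted.
def test_slugify_basic():
    assert slugify("Hello World") == "hello-world"

def test_slugify_extra_spaces():
    assert slugify("  a   b ") == "a-b"

def run_regression_suite():
    # Collect every callable named test_* and run it, as a CI job would.
    failures = []
    for name, fn in sorted(globals().items()):
        if name.startswith("test_") and callable(fn):
            try:
                fn()
            except AssertionError as exc:
                failures.append((name, exc))
    return failures
```

An empty failure list after a change means nothing that the suite covers has regressed; any entry pinpoints exactly which earlier behavior a new feature broke.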
How to organize the process of autotesting to escape Murphy's Law?
Automated software testing is divided into back-end automation testing (unit testing and API testing) and front-end automation testing (web testing and mobile testing).
How do we organize the process of automated software testing?
Let's take automated API testing as an example. Why is API testing important? APIs are the backbone connecting today's software applications. The process goes like this:
- Once a back-end programmer has developed an API, he should create the technical API documentation for the front-end developers (web developers, iOS developers, Android developers and others) and a set of positive autotests for each feature. He should also run these tests himself before delivering the feature to the automation QA tester.
- The automation QA tester checks whether the autotests written by the API developer are enough to cover the API, or whether more need to be written. After creating the missing positive autotests, he can also write negative autotests based on:
  - The manual test cases from the manual QA specialist.
  - The API documentation from the back-end developer.
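A minimal sketch of what the resulting positive and negative API autotests can look like, using a hypothetical in-process handler in place of a real HTTP endpoint (the `get_user` contract below is an assumption for illustration, not a real API):

```python
# Hypothetical in-process stand-in for an endpoint: GET /users/<id>.
# Returns an HTTP-like (status, body) pair; the data is illustrative.
USERS = {1: {"id": 1, "name": "Alice"}}

def get_user(user_id):
    if not isinstance(user_id, int) or user_id < 1:
        return 400, {"error": "invalid id"}
    if user_id not in USERS:
        return 404, {"error": "not found"}
    return 200, USERS[user_id]

# Positive autotest: the documented, expected call.
def test_get_user_ok():
    status, body = get_user(1)
    assert status == 200 and body["name"] == "Alice"

# Negative autotests, derived from manual test cases and the API docs.
def test_get_user_missing():
    assert get_user(99)[0] == 404

def test_get_user_invalid():
    assert get_user(-1)[0] == 400
    assert get_user("1")[0] == 400
```

Against a live service the same tests would issue real HTTP requests, but the structure is identical: one autotest per documented behavior, covering both the happy path and the error contract.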
As a result, we have a battle-worthy army of autotests that catches bugs before they ever reach users.
If autotests fail, we know exactly what to do to fix that fast:
- Either the developer fixes the code that is causing the current autotests to fail,
- Or the outdated autotests are rewritten (actualized).
As a result:
- We increase the overall efficiency of the development team and reduce manual testing hours, thus saving the client's budget.
- We spend the minimum of time and budget on updating either the source code or the autotests.
- Proper organization of the autotesting process blocks Murphy's Law from affecting the software products we create: we can be sure of the stability of previously developed functionality when adding new functionality.