How to define & measure quality?
Everyone talks about quality. It is the top priority of any product team because it builds customer trust and confidence. The real question is how we measure it.
Quality must be defined before it can be measured, yet many talk about quality without ever defining it. Such subjective talk helps neither the organization nor the customer.
Anything that is not defined cannot be measured.
According to ChatGPT, quality is defined as "how well something meets its intended purpose or achieves its desired outcome."
This definition is subjective and does not quantify quality in a way that can be measured.
Anything subjective cannot be quantified.
Let us decode the phrase "how well something meets its intended purpose or achieves its desired outcome."
From the customer's perspective, there are two types of checks that help assess a deliverable's quality.
Functional checks
Acceptance criteria - A list of use cases the application must satisfy to meet expected behavior. It is quantified as the % of test cases passed.
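As an illustration, acceptance criteria map naturally onto parameterized test cases, and the pass rate falls out of the test run. This is only a minimal pytest sketch: the total_price function and its coupon rule are hypothetical stand-ins for a real feature.

```python
import pytest

def total_price(items, coupon=None):
    """Hypothetical feature under test: cart total with an optional coupon."""
    total = sum(items)
    if coupon == "SAVE10":
        total *= 0.9  # assumed 10% discount rule
    return total

# Each row is one acceptance criterion: a use case and its expected behavior.
@pytest.mark.parametrize(
    "items, coupon, expected",
    [
        ([10.0, 5.0], None, 15.0),      # plain sum, no coupon
        ([10.0, 5.0], "SAVE10", 13.5),  # discount coupon applied
        ([], None, 0.0),                # empty cart is still a valid order
    ],
)
def test_acceptance_criterion(items, coupon, expected):
    assert total_price(items, coupon) == pytest.approx(expected)
```

Running pytest reports passed versus total test cases, from which the % of test cases passed is computed directly.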
Non-Functional checks
Latency / Response Time - Covered as part of performance testing to understand application response time for various requests. The expected response time should be expressed as percentiles and the median rather than the average, since an average hides outliers. It is quantified as the % of test cases passed (see the sketch after this list).
The number of requests/workload acceptable per a given period - Covered as part of load testing to assess the workload the system can sustain at a given time. It is quantified as a pass/fail (boolean) check against the target throughput.
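To make the percentile point concrete, here is a minimal sketch using only Python's standard library; the latency samples and the 300 ms p95 target are illustrative assumptions.

```python
import statistics

def latency_summary(samples_ms):
    """Median and tail percentiles resist outlier skew far better than the mean."""
    qs = statistics.quantiles(samples_ms, n=100)  # 99 percentile cut points
    return {
        "median_ms": statistics.median(samples_ms),
        "p95_ms": qs[94],  # 95th percentile
        "p99_ms": qs[98],  # 99th percentile
    }

# Two outliers drag the average up sharply, while the median stays honest;
# the p95 check still fails here because the outliers dominate the tail.
samples = [120, 135, 128, 140, 450, 131, 125, 900, 133, 127]
summary = latency_summary(samples)
print(summary, "PASS" if summary["p95_ms"] <= 300 else "FAIL")  # 300 ms: assumed target
```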
From a DevOps perspective, here are the quality checks:
Unit test coverage - A critical parameter of assurance that changes don't break existing code. Startups mostly ignore it while ramping up the product, but it must be tracked once the product scales to a wider customer base. It is quantified as the % of code covered by unit tests (see the coverage sketch after this list).
Integration tests - Helpful in validating code across components; more tests strengthen the pipeline's quality assessment. It is quantified as the % of integration tests passed.
Static code analysis - Helps surface code vulnerabilities and security issues. The reliability score is usually graded A, B, and so on, with A being the best score for code quality. It is quantified as the number of checks graded below A or B, plus the number of open issues when the reliability score falls below B.
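Here is a minimal sketch of turning unit test coverage into a single number with the coverage.py library; the package name "myapp" and the "tests" directory are assumed placeholders for a real project layout.

```python
import unittest
import coverage

# Measure only the application package; "myapp" is a placeholder name.
cov = coverage.Coverage(source=["myapp"])
cov.start()

# Discover and run the unit tests; "tests" is an assumed directory layout.
suite = unittest.defaultTestLoader.discover("tests")
result = unittest.TextTestRunner().run(suite)

cov.stop()
cov.save()

covered_pct = cov.report()  # prints a table and returns total coverage as a float
passed = result.testsRun - len(result.failures) - len(result.errors)
print(f"Unit tests passed: {passed}/{result.testsRun}")
print(f"Unit test coverage: {covered_pct:.1f}%")
```

In a CI pipeline the same number is usually produced by the test runner itself; the point is that coverage lands as a single % that a quality gate can check.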
Every organization defines quality based on its customer profile. It is up to the organization to specify the tolerance levels and measure these metrics before releasing any product or deliverable to customers.
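Pulling the metrics together, the release decision can be reduced to a simple gate. The thresholds below are illustrative assumptions, not recommendations; each organization would substitute its own tolerance levels.

```python
# Org-defined tolerance levels; every value here is an illustrative assumption.
GATE = {
    "functional_pass_pct": 100.0,    # acceptance criteria: all must pass
    "unit_coverage_pct": 80.0,       # minimum unit test coverage
    "integration_pass_pct": 95.0,    # minimum integration test pass rate
    "worst_reliability_grade": "B",  # static analysis grade must be A or B
}

def release_gate(metrics: dict) -> bool:
    """Return True only when every measured metric meets its tolerance level."""
    return (
        metrics["functional_pass_pct"] >= GATE["functional_pass_pct"]
        and metrics["unit_coverage_pct"] >= GATE["unit_coverage_pct"]
        and metrics["integration_pass_pct"] >= GATE["integration_pass_pct"]
        # letter grades compare lexically: "A" <= "B" <= "C" ...
        and metrics["reliability_grade"] <= GATE["worst_reliability_grade"]
    )

print(release_gate({
    "functional_pass_pct": 100.0,
    "unit_coverage_pct": 83.2,
    "integration_pass_pct": 97.0,
    "reliability_grade": "A",
}))  # True -> within tolerance, safe to release
```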
Here is my preferred scale to consider as a base: