Quality assurance in software development: When should you start the testing process?

“A procedure intended to establish the quality, performance, or reliability of something, especially before it is taken into widespread use” — definition of “test”, Oxford English Dictionary.

Customers don’t like to deal with defective software. They want their demands delivered with high quality and in the shortest possible timeframe. A testing phase that starts only a few days before the release of the next version of the product might not (and probably won’t!) be able to ensure product quality.

Below is an example of a typical SDLC:

  • Planning – Goals and objectives are defined, requirements are gathered, costs and resources are estimated and feasibility of alternative solutions is analyzed.
  • Analysis & Design – Features are detailed, taking user needs into account. Wireframes and business rules are defined. Other relevant documents are attached.
  • Construction – The actual code is written in this phase.
  • Testing – Pieces are put together in a test environment to verify the system. Different types of tests can be performed, but that is a conversation for another day.


This cycle is exposed to some problems. As we can see, each activity starts only at the end of the previous phase. First, let’s think about bugs found during testing: many of them have existed in the system since the design or even the planning phase, and fixing them after development is completed will probably be much more expensive than it would have been if the problems had been identified in earlier steps. Furthermore, under predictive planning, a tight deadline combined with a delay in completing the construction phase can reduce the time available for testing, which may significantly undermine the quality of the product.

It is often observed that most of the errors found in the testing phase were actually introduced during requirements gathering or design.

Why should testing start early in the software development life cycle?

The process starts with requirements, and as the project evolves through the SDLC, more effort is allocated to creating or modifying the solution, more people become involved, and the cost of the project increases. Bugs detected at the end of the process tend to require significantly more effort to fix: the sooner a bug is identified, the cheaper it is to fix the problem. In Software Testing, Ron Patton says that the cost of fixing a bug grows roughly logarithmically, and can increase by a factor of 10 or more as the project progresses through the phases of the SDLC.

For instance, a bug identified during conception costs close to nothing to fix, but when that same bug is found only after implementation or testing, the average cost of repair can be 10 to 1,000 times higher than in the earlier steps. And when customers find the bug in the production environment, the cost of the problem includes all the side effects related to it.
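To make the escalation concrete, here is a minimal sketch in Python. The tenfold-per-phase multipliers are illustrative assumptions in the spirit of the figures above, not measured data:

```python
# Illustrative only: hypothetical relative cost of fixing the same bug,
# assuming a roughly tenfold increase at each later phase of the SDLC.
PHASE_COST_MULTIPLIER = {
    "planning": 1,
    "design": 10,
    "construction": 100,
    "testing": 1000,
    "production": 10000,
}

def relative_fix_cost(phase: str, base_cost: float = 1.0) -> float:
    """Return the estimated relative cost of fixing a bug in a given phase."""
    return base_cost * PHASE_COST_MULTIPLIER[phase]

print(relative_fix_cost("design"))      # 10.0
print(relative_fix_cost("production"))  # 10000.0
```

The exact numbers vary widely between projects; what matters is the shape of the curve, not the constants.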

Main advantages of testing in earlier phases

  • Many problems are introduced into the system during planning or design. Requirements testing anticipates these future problems at a significantly lower cost.
  • Since the testing process is involved with all phases of the SDLC, Management will not feel like testing is the bottleneck to release the product.
  • Testers will be more familiar with the software, as they are more involved with the evolution of the product in earlier phases.
  • Test cases written during requirements and shared with the Dev team before the construction phase can help developers think outside the box and consider more failure scenarios in their code.
  • The test environment can be prepared in advance, anticipating risks and preventing delays.
  • The risk of having a short time for testing is greatly reduced, increasing test coverage and types of tests performed.
  • Involving quality assurance in all phases of the SDLC helps create a ‘quality culture’ inside the organization.

Defect prevention: Quality is built in, not added on

Inspection does not improve the quality, nor guarantee quality. Inspection is too late. The quality, good or bad, is already in the product. As Harold F. Dodge said, “You cannot inspect quality into a product.” — W. Edwards Deming, Out of the Crisis, p. 29

Start a test plan at the beginning of the project and identify test requirements. Test requirements are not test cases, as they do not describe the data being used in the tests; data is irrelevant at this level. These requirements should be used as input documents for generating test cases. Testing should start in the planning phase and continue throughout the analysis and design phases. By the end of the design phase, integration and unit test cases should be completed. Some examples of test requirements:

  • “Validate that you can insert an entry to the repository”
  • “Validate that you can’t insert an entry when the repository already contains one with the same unique identification”
  • “Validate that you can’t insert an entry when the repository reaches 300 entries”
  • “Validate that you can insert an entry to the repository when it is empty (initial test)”
  • “Validate that the full repository can be loaded to the screen in 2 seconds”
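As a sketch of how test requirements like these later become concrete test cases, here is a minimal Python example. The `Repository` class, its exceptions, and the 300-entry limit are hypothetical, written only to make the example self-contained:

```python
# Minimal sketch: turning test requirements into concrete test cases.
# The Repository class below is hypothetical, invented for illustration.
class DuplicateEntryError(Exception):
    pass

class RepositoryFullError(Exception):
    pass

class Repository:
    MAX_ENTRIES = 300  # limit taken from the example requirement above

    def __init__(self):
        self._entries = {}

    def insert(self, entry_id, data):
        if entry_id in self._entries:
            raise DuplicateEntryError(entry_id)
        if len(self._entries) >= self.MAX_ENTRIES:
            raise RepositoryFullError()
        self._entries[entry_id] = data

    def __len__(self):
        return len(self._entries)

# Test case for: "Validate that you can insert an entry to the repository"
def test_insert_entry():
    repo = Repository()
    repo.insert(1, "first entry")
    assert len(repo) == 1

# Test case for: "Validate that you can't insert an entry when the
# repository already contains one with the same unique identification"
def test_reject_duplicate_id():
    repo = Repository()
    repo.insert(1, "first entry")
    try:
        repo.insert(1, "duplicate")
        assert False, "expected DuplicateEntryError"
    except DuplicateEntryError:
        pass
```

Note how each test case now pins down concrete data (entry IDs, contents) that the test requirement deliberately left open.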


Verify that requirements are clear and consistent. It is important to eliminate ambiguities in interpretation caused by general terms. Customers may use terms that carry different meanings in different contexts, which compromises the analysis of the document.

Discover missing requirements. In many cases, project designers do not have a clear understanding of every module and simply assume certain requirements. Requirements should cover all aspects of the system without any assumptions.

Ask the client about requirements that are not related to the project goals. It is important that these requirements are identified and that the client is asked whether they are really necessary. A requirement can be considered irrelevant when its absence causes no significant impact on the project goals.

Some other checks include (Source: link):

  • Does the specification contain a definition of the meaning of every essential subject matter term within the specification?
  • Is every reference to a defined term consistent with its definition?
  • Is the context of the requirements wide enough to cover everything we need to understand?
  • Is every requirement in the specification relevant to this system?
  • Does the specification contain solutions posturing as requirements?
  • Is the stakeholder value defined for each requirement?
  • Is each requirement uniquely identifiable?
  • Is each requirement tagged to all parts of the system where it is used? For any change to requirements, can you identify all parts of the system where this change has an effect?

Defect detection

The tester should report the defect detection efficiency on completion of the project. This metric measures the efficiency of the process within the SDLC, and helps to understand and track which phases of the SDLC generate the most problems and compromise product quality.
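A common way to compute this metric is the share of defects caught internally before release, out of all defects eventually found. This is a sketch under that assumption; teams define and slice the metric differently (per phase, per release, etc.):

```python
def defect_detection_efficiency(found_before_release: int,
                                found_after_release: int) -> float:
    """Percentage of defects caught internally before the product shipped.

    Assumes the common definition: internal defects / total defects * 100.
    """
    total = found_before_release + found_after_release
    if total == 0:
        return 100.0  # no defects found anywhere: nothing escaped
    return 100.0 * found_before_release / total

# e.g. 90 defects caught in-house, 10 reported by customers
print(defect_detection_efficiency(90, 10))  # 90.0
```

Tracking this number per SDLC phase shows where defects are being injected and where they are being caught.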

The role of developers in defect prevention

Developers must be aligned with the expectations regarding the requirements. In many cases, in order to keep up with the schedule, developers do not invest enough time in reviewing the specification, and often ignore important documents or misunderstand some requirements. This kind of ambiguity generates more bugs to be identified at the end of the project, when the cost of repair is higher.

Developers should also write unit tests and review code (and/or have their code reviewed) before committing. Together, these small daily activities make a great contribution to defect prevention during the construction phase.
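For instance, a developer-level unit test can sit right next to the code it protects. The discount rule below is invented purely for illustration:

```python
# Hypothetical production function: the discount rule is made up
# for this example, not taken from any real requirement.
def apply_discount(price: float, percent: float) -> float:
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

# A unit test the developer runs before every commit: it pins down
# both the normal case and the no-discount boundary.
def test_apply_discount():
    assert apply_discount(100.0, 10) == 90.0
    assert apply_discount(19.99, 0) == 19.99
```

Tests this small are cheap to write during construction and catch regressions the moment they are introduced, rather than weeks later in the testing phase.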

In addition, some types of tests are certainly worth considering for automation, in which case an automation team would get involved in the process. The execution of automated tests (UI, load, performance, unit, etc.) can be tightly linked to developers’ commits during the construction phase (see Continuous Integration), but that’s a topic for another conversation.

Putting it all together

Defect prevention is an important investment with short-term returns. These joint actions not only increase product quality by anticipating issues, but also reduce the product’s maintenance cost, increase overall productivity, and shorten the project’s development time. As a consequence of this combination of factors, customer satisfaction increases, as does the reliability and reputation of the organization.