Having practiced TDD, and having also tried the "unholy" path of not unit testing my code as part of the SDLC, I have seen a range of attitudes toward TDD: a few think it is the industry de facto standard and hence we should have it, some just follow "the standards", some think the process warrants it, but very few understand the big picture.
I am trying to pen down some often overlooked aspects of decision making when advocating TDD in a project. I AM NOT PROPOSING A NEW PROCESS OR A NEW APPROACH to software development.
Let me clarify and set the context: I am not trying to be a unit testing critic here. I am trying to pen down some interesting observations and aspects which, I believe, will make our decision making more pragmatic while planning, estimating, and implementing TDD.
Conceptual understanding or lack of it
TDD == write code to pass a set of test cases, where the test cases are unit test cases.
Conceptually unit testing is simple; you test a piece of code in isolation. Assume you have a multi-layer logical architecture and you are developing unit test cases for your service layer: at the higher level, you would need to isolate your service layer from the rest of the layers; at the lower level, you would also need to isolate the method/logical unit of the service layer from the rest of the classes or dependencies.
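To make the isolation idea concrete, here is a minimal sketch (JUnit 5 + Mockito) of a service-layer unit tested with its lower-layer dependency mocked out. The `OrderService` and `OrderRepository` names are hypothetical, invented purely for illustration:

```java
import static org.junit.jupiter.api.Assertions.assertEquals;
import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.when;

import org.junit.jupiter.api.Test;

// Hypothetical service-layer class under test; the repository it
// depends on is replaced with a mock so the test exercises the
// service logic in isolation from the persistence layer.
class OrderServiceTest {

    interface OrderRepository {          // assumed lower-layer dependency
        double priceOf(String sku);
    }

    static class OrderService {          // assumed unit under test
        private final OrderRepository repo;
        OrderService(OrderRepository repo) { this.repo = repo; }
        double totalFor(String sku, int quantity) {
            return repo.priceOf(sku) * quantity;
        }
    }

    @Test
    void totalIsPriceTimesQuantity() {
        OrderRepository repo = mock(OrderRepository.class);
        when(repo.priceOf("ABC-1")).thenReturn(9.99);

        OrderService service = new OrderService(repo);

        assertEquals(29.97, service.totalFor("ABC-1", 3), 0.0001);
    }
}
```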
I think I need not explain the positives of TDD; there is enough information available on the internet. However, I want to paint the whole picture of where TDD fits into the bigger puzzle of software development.
What I found missing is a big-picture understanding of why TDD. That is, if I had a QA strategy in place and a quality goal in sight, TDD should play its part in that strategy, as opposed to the "TDD will ensure quality" thought process. Let me clarify with a little more information:
If I had a testing strategy in place for quality checks, I would want to see how I cover the entire application against the goal of that strategy. So, if I decide to follow only TDD or unit testing as my QA strategy, my questions should be:
What is the coverage I get? Say I get 30% coverage (on the higher side); how do I cover the other areas of the application?
- I might want to look at integration testing as an option.
- I might also want to look at having data-driven test cases (a sketch follows this list).
- I might also want to have navigation testing.
- I might also want to look at having an automated regression testing suite.
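On the data-driven option above, here is a minimal sketch of what such a test could look like using JUnit 5 parameterized tests; the `DiscountTest` class and its senior-discount rule are hypothetical placeholders:

```java
import static org.junit.jupiter.api.Assertions.assertEquals;

import org.junit.jupiter.params.ParameterizedTest;
import org.junit.jupiter.params.provider.CsvSource;

// Hypothetical data-driven test: the same assertion runs once per
// input row, so new scenarios are added as data, not as code.
class DiscountTest {

    static double discountFor(int age) {   // assumed logic under test
        return age >= 65 ? 0.20 : 0.0;
    }

    @ParameterizedTest
    @CsvSource({
        "64, 0.0",    // just below the senior threshold
        "65, 0.20",   // exactly at the threshold
        "80, 0.20"    // comfortably above it
    })
    void seniorDiscountByAge(int age, double expected) {
        assertEquals(expected, discountFor(age), 0.0001);
    }
}
```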
Having weighed my options above, I would want to look at the ROI of each option, or combination of options, in terms of:
- Skill availability, e.g. an automation test suite would require a specific skill set.
- The maintenance cost of each option, e.g. would a variation in the functional design require changes across the entire set of test cases (of any programmatic form)?
- The infrastructure and processes in place, e.g. how do you tie the CI engine to the automated regression suite? Do you have a build verification suite in place that will mark a build green vs. red? (A sketch follows this list.)
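On the build verification point, here is a minimal sketch assuming JUnit 5 tagging; the `ApplicationSmokeTest` class and the "bvt" tag name are my own placeholders:

```java
import static org.junit.jupiter.api.Assertions.assertTrue;

import org.junit.jupiter.api.Tag;
import org.junit.jupiter.api.Test;

// Hypothetical build-verification check: tagging lets the CI engine
// run only the fast "bvt" suite on every commit and mark the build
// green or red before the slower suites run.
class ApplicationSmokeTest {

    @Tag("bvt")
    @Test
    void applicationBootsUp() {
        // Stand-in for a real bootstrap/health check of the application.
        assertTrue(true, "replace with an actual startup assertion");
    }
}
```

With the JUnit 5/Maven Surefire integration, something like `mvn test -Dgroups=bvt` would then run only the tagged suite, which is what the CI engine can gate the build on.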
Effective unit testing
If I establish that I would definitely make use of unit testing in my development process (more often than not this is the case), I would, as a second step, want to ensure that my team understands and follows a standard method of figuring out the important test cases.
Identifying a test case:
Most of the time we only look at the programmatic aspect of writing a Java unit test case and overlook the "method" of arriving at a test case. QA is usually equipped to identify "good" test cases, but developers find this harder, or at least are not experts at it, so there is a chance that the inventory of (unit/programmatic) test cases does not represent the most effective set. It is therefore imperative to ramp developers up on the process/methodology of identifying the "effective" test cases.
Revisiting the “identifying techniques”
As I mentioned above, one of the important aspects of TDD is identifying meaningful and effective (subjective) test cases. Here is a list of some techniques which could/should be used to identify the various scenarios (a sketch applying them follows the list):
- Equivalence class partitioning
- Boundary value analysis
- Invalid inputs
- Special inputs (uncommon)

My recommendation would be to come up with a standard set of rules and guidelines to determine test cases.
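To illustrate, here is a sketch of how the techniques above might translate into concrete unit tests; the `AgeValidatorTest` class and its 18–65 eligibility rule are hypothetical:

```java
import static org.junit.jupiter.api.Assertions.assertFalse;
import static org.junit.jupiter.api.Assertions.assertTrue;

import org.junit.jupiter.api.Test;

// Hypothetical validator accepting ages 18..65; the tests below are
// derived from the named techniques rather than picked ad hoc.
class AgeValidatorTest {

    static boolean isEligible(int age) {   // assumed unit under test
        return age >= 18 && age <= 65;
    }

    @Test void validPartition()      { assertTrue(isEligible(30)); }   // equivalence class: valid
    @Test void invalidLowPartition() { assertFalse(isEligible(10)); }  // equivalence class: too young
    @Test void lowerBoundary()       { assertTrue(isEligible(18)); }   // boundary value analysis
    @Test void justBelowBoundary()   { assertFalse(isEligible(17)); }  // boundary value analysis
    @Test void upperBoundary()       { assertTrue(isEligible(65)); }   // boundary value analysis
    @Test void invalidInput()        { assertFalse(isEligible(-1)); }  // invalid input
    @Test void specialInput()        { assertFalse(isEligible(Integer.MAX_VALUE)); } // special/uncommon input
}
```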
Most often we do not differentiate between WBX and BBX unit test cases; my recommendation would be to focus on white-box test cases (WBX) along with black-box unit testing (BBX). In most scenarios where other testing methods are integrated into the overall testing strategy, the BBX scenarios overlap with those other options, such as automated regression, navigation, functional, or manual testing. This means you still get coverage of the critical pieces of code at the level you want (subjective).
When to write BBX (Data driven or input/output driven)
In this approach, the tester views the program as a black box and is not concerned with the internal behavior and structure of the program. You derive a BBX unit test case from the contract itself, which means, in the Java world, you can write all your unit tests with just an interface to work with.
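A minimal sketch of the idea, with a hypothetical `TaxCalculator` contract and an assumed 10% flat rate; note the test needs nothing but the interface:

```java
import static org.junit.jupiter.api.Assertions.assertEquals;

import org.junit.jupiter.api.Test;

// Black-box test written purely against a hypothetical contract; the
// test compiles against the interface and drives the implementation
// that follows.
class TaxCalculatorContractTest {

    interface TaxCalculator {                    // assumed public contract
        double taxOn(double amount);
    }

    // Placeholder factory: in TDD this initially returns a stub (or
    // throws) and is pointed at the real class once it is written.
    private TaxCalculator newCalculator() {
        return amount -> amount * 0.10;          // assumed 10% flat rate
    }

    @Test
    void tenPercentTaxOnPositiveAmount() {
        assertEquals(10.0, newCalculator().taxOn(100.0), 0.0001);
    }
}
```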
When to write WBX/Structural (Logic driven)
Using this strategy, the tester derives test data from an examination of the program's logic and structure. You would write WBX tests once the implementation of a given "function" is complete; the driver for "unit testing" the code is its complexity. E.g. if the code has many alternate flows, it is a candidate for WBX unit tests.
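A sketch of that, assuming a hypothetical `shippingFor` method whose alternate flows drive one test per branch:

```java
import static org.junit.jupiter.api.Assertions.assertEquals;
import static org.junit.jupiter.api.Assertions.assertThrows;

import org.junit.jupiter.api.Test;

// White-box tests written after the implementation, with one test per
// alternate flow so every branch of the logic is exercised.
class ShippingCostTest {

    static double shippingFor(double weightKg, boolean express) {  // assumed implementation
        if (weightKg <= 0) throw new IllegalArgumentException("weight must be positive");
        double base = weightKg <= 5 ? 4.0 : 9.0;    // two weight branches
        return express ? base * 2 : base;           // express branch
    }

    @Test void lightStandard() { assertEquals(4.0, shippingFor(2, false), 0.0001); }
    @Test void lightExpress()  { assertEquals(8.0, shippingFor(2, true),  0.0001); }
    @Test void heavyStandard() { assertEquals(9.0, shippingFor(10, false), 0.0001); }
    @Test void invalidWeight() {
        assertThrows(IllegalArgumentException.class, () -> shippingFor(0, false));
    }
}
```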
How does it fit in SDLC?
Assume a Java-based project implementation; the following is an ordered listing of the steps to be followed in the implementation phase of the entire Software Development Lifecycle (a little detailed, and only suggestive; e.g. not every project warrants a BBX design document, and a good Javadoc could do the job).
………………………….
6) Contract Implementation – Based on the Technical Design Document, write the public classes and interfaces representing the public contract implementation.
7) Black-Box Unit Tests Design – Using the Functional Design Specification Documents and the contract implementation, design the unit tests for the functionality of classes viewed as black boxes; this step should result in a Black-Box Design Document.
8) Black-Box Unit Tests Implementation – Implement the unit tests according to the Black-Box Design Document.
9) Black-Box Unit Tests Code Review – Assures that testing guidelines and coding standards have been followed; results in an Inspection Report.
10) Code Implementation – Implement the actual code using a test-driven development approach – code is written to pass the black-box unit tests.
11) Black-Box Unit Tests Execution – This step intermingles with the previous one in an iterative effort to implement functionality while keeping in mind the precise goal of passing all the black-box unit tests.
12) Code Review – After the functionality has been implemented, the Design Inspector reviews the code implementation and proposes, where considered necessary, which parts should be thoroughly tested using white-box testing.
13) White-Box Unit Tests Design – Design unit test cases (where required) to test the implementation from a structural perspective. This should result in a White-Box Design Document containing a list of the methods to be tested.
14) White-Box Unit Tests Implementation – Implement the white-box test cases.
15) White-Box Unit Tests Code Review – Assures that coding standards have been followed and testing goals have been achieved; results in an Inspection Report.
16) White-Box Unit Tests Execution – Run the tests to thoroughly verify the implementation.
..........................................
Infra … what’s in a role?
How is Infra or a role relevant to TDD? Why are we even talking about it?
- CI/Build engine
- Build manager
One of the common mistakes we make is to ignore the setup needed to execute builds and to integrate checks and balances for TDD (and many other such practices). There is also a need to identify a build manager (a role owning build-related practices) who would make sure builds are tied to TDD completely, not partially: from how the project is structured (probably a function acting as an input to the build setup) and how dependencies are tracked in a modular setup, to how a build is produced, released, deployed, and tested (from build certification to regression testing). This topic probably deserves its own write-up for the sheer subjectiveness it carries.
Involvement via ramp-up/training
One of the biggest roadblocks to implementing TDD is the lack of deep, detailed interest among developers in adopting TDD and its benefits. Often, "unit testing" is used as a term for any testing done by developers, whether integration testing or functional testing, which, in my mind, dilutes the whole concept and confuses new developers. There is a real need to educate developers/testers to identify new and effective tools; this will ensure they have the comfort level and expertise required to participate in whatever strategy/process we arrive at.
I would recommend that each project block some time for planning a ramp-up/training, in service of the objective of a quality deliverable through greater participation of the developers.
Case for an alternative strategy
I believe that for some projects programmatic testing, and not necessarily unit testing, is an effective tool (probably as effective as unit testing) for finding and fixing defects early in the development cycle. Consider a project that is more or less a customization of a tool/product and requires less coding than an implementation from scratch on top of a custom "tech stack". Here I would look at using BBX test cases at a very high level, at the integration/flow-testing level, to balance quality and cost. Open-source tools like WATIJ or WATIR are good candidates for such a project to follow test-based (not necessarily test-driven) development; a sketch of such a flow-level test follows.
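As an illustration of what a flow-level BBX test looks like, here is a sketch using Selenium WebDriver as a stand-in for tools like WATIJ/WATIR (it is not the same API); the URL, element ids, and page text are all assumptions, and a local browser driver is required:

```java
import static org.junit.jupiter.api.Assertions.assertTrue;

import org.junit.jupiter.api.Test;
import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeDriver;

// Flow-level black-box test driving the UI of a hypothetical app;
// no knowledge of the application internals is needed.
class LoginFlowTest {

    @Test
    void userCanLogIn() {
        WebDriver driver = new ChromeDriver();   // assumes a local ChromeDriver setup
        try {
            driver.get("http://localhost:8080/login");               // assumed URL
            driver.findElement(By.id("username")).sendKeys("demo");  // assumed element ids
            driver.findElement(By.id("password")).sendKeys("secret");
            driver.findElement(By.id("submit")).click();
            assertTrue(driver.getPageSource().contains("Welcome"));  // assumed landing text
        } finally {
            driver.quit();
        }
    }
}
```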
I think what we need to understand and realize is what would save us cost while still resulting in a quality output/deliverable. In my experience, we tend to include every possible tool and process (from the so-called laundry list of standards) to fill the gaps when coming up with a testing strategy. As I said above, a testing strategy has to be comprehensive yet cost-effective and productive, which pretty much means there is no purely objective way of arriving at one.