I’ve heard it all before, many times. “The vendor or developer will do all the testing.” That’s right, all you’ll have to do is plug it in and everything will be ok!
I wonder how many times through the ages that has actually been the case? In my experience, never.
Almost as bad is the idea that “We’ll have a look at it once it comes back.” Under this scenario, it’s very possible that what comes back isn’t what you asked for, or, perhaps more realistically, what you thought you’d asked for!
When we break it down to the bare minimum, we can forget for a moment all the fancy terms for all the different stages, levels and types of testing and just focus on the fact that there are two sets of testing that need to be performed for an IT project, and these are:
- That which is the accountability of the developer/vendor.
- That which is the accountability of the owner/client.
Testing conducted under the first includes everything concerned with validating that the system has been built right: that it works at the build level, communicates correctly at the component level, operates correctly at both the functional and non-functional levels, and integrates with every other system it needs to talk to.
Testing which is the accountability of the owner/client includes all forms of User Acceptance Testing (UAT), and it’s critical that this is done correctly. I say ‘accountability’ of the owner/client because this work is quite often outsourced to consultancies, vendors, and contractors and isn’t done by the owner/client at all. It is imperative that proper process is followed for UAT, or the whole project could be doomed from the outset.
Good UAT involves a full review of the user requirements (or, in Agile, the user stories, use cases, etc.). This drives out many of the ambiguities, omissions, inconsistencies, and errors that would otherwise be handed to the developer/vendor team to translate into a solution document. What’s left is a clear, concise, and accurate set of requirements that aligns with the capabilities the business wants, and that can be measured, solutioned, developed, and tested against when the system is delivered.
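To make the point about measurable requirements concrete, here is a minimal sketch of how a reviewed requirement can become an executable acceptance check. The requirement, the function, and all names below are purely illustrative, not drawn from any specific system: assume a hypothetical rule that “an order placed before the 2 pm cut-off is scheduled for same-day dispatch.”

```python
# Illustrative only: a hypothetical requirement expressed as a testable check.
# Requirement (assumed for this sketch): "An order placed before the 2 pm
# cut-off must be scheduled for same-day dispatch; otherwise next-day."

from datetime import time

CUTOFF = time(14, 0)  # the 2 pm cut-off stated in the (hypothetical) requirement


def dispatch_day(order_time: time) -> str:
    """Return which day an order should be scheduled to dispatch."""
    return "same-day" if order_time < CUTOFF else "next-day"


def test_order_before_cutoff_is_same_day():
    # A clear requirement gives an unambiguous pass/fail criterion.
    assert dispatch_day(time(13, 59)) == "same-day"


def test_order_at_or_after_cutoff_is_next_day():
    # Reviewing the requirement forces the boundary case to be decided
    # up front (is exactly 2 pm same-day or next-day?) rather than in UAT.
    assert dispatch_day(time(14, 0)) == "next-day"
```

A vague requirement (“orders should be dispatched promptly”) cannot be written this way; the act of making it testable is what exposes the ambiguity before the build starts.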
In good practice, testing starts at the beginning of the lifecycle with the recognition that defects do not just occur in software. Defects present in poorly written requirements can be extremely expensive to fix when found at the end of the lifecycle or in production. The developer/vendor team could have built exactly what you asked for, but if that wasn’t what you really wanted, needed, or thought you’d asked for, whose problem is that?
If you would like to talk to someone regarding how you do testing at your organisation, please get in touch with UnicornX and we’d be happy to listen.