No matter how well an application is designed or how hard developers try, in a world of complex software we need to rely on QA and software testing to ensure that applications are secure and stable. Unfortunately, many companies still treat the QA and test teams as an afterthought. Unless that changes, software will never live up to the hype or be the successful tool that most users expect it to be.

Whether it is commercial or internal software development, the days of simply writing the best application are long gone. Today, every developer has to be cognisant of the phrase “Time to Market” (TTM). In practice it means that instead of getting it right, you have to deliver on schedule, irrespective of whether the software really works or not.

Of course, many would argue that there never was a time when you could take your time to plan, design and release gold standard solutions, and that TTM has always been there. It may well have been, but I’d suggest that the continued substandard state of so much software means that, if it was always there, it is now being applied with more ruthlessness than ever before.

One of the main casualties of reducing the timescale or being inflexible has always been the QA and testing schedule. This is easier to see with in-house software than with commercial products but nonetheless, it happens. To try to overcome that, we have seen a change in the way that software developers work.

One major change has been the introduction of continuous unit testing as part of the source code repository check-in system. If code fails its unit tests, it cannot be checked in. This is a great improvement in how developers work because it will prevent some bad code from getting into the core software.

Note that I said “some bad code”. This was not a mistake. The only code that is disallowed is code rejected by the unit tests, so you are only as good as your unit tests. Remember, these unit tests are often looking at a single piece of code in isolation, not as part of the greater application.
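To make the point concrete, here is a hypothetical sketch (the function and values are invented for illustration, not taken from any real codebase) of a unit test that passes in isolation while the code it blesses misbehaves in the wider application:

```python
def apply_discount(price, pct):
    """Apply a percentage discount to a price."""
    return price * (1 - pct / 100)

# The unit test checks one known-good input in isolation, so the
# code sails through check-in:
assert apply_discount(100, 10) == 90.0

# Elsewhere in the application, pct arrives from user input with no
# validation, and the same "tested" code happily produces a negative price:
result = apply_discount(100, 150)
print(result)  # -50.0
```

The unit test is not wrong, merely narrow: nothing in it forces the question of what inputs the rest of the application can actually supply.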

This means that we need to change things up a little. Keep unit tests, but have QA and test engineers working with the developers on continuous test improvement, and if that means that code which had been accepted is suddenly kicked out of the build tree, then that’s life. One place where we should expect to see failure is in the daily test build.

Here, any failure during the build process should trigger automatic rejection of the offending code from the source tree. More importantly, it should flag the unit test that allowed that code through for re-evaluation and redesign. This means we also need a multi-layered approach to failure: it is no longer a simple pass or fail, but degrees of failure that have to be addressed each day.
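As a sketch of that multi-layered approach (the severity names and data shapes here are invented assumptions, not a description of any real build tool), a nightly-build gate might classify each failure by degree and record which unit test let the change through:

```python
# Hypothetical severity scale: higher numbers get attention first.
SEVERITY = {"crash": 3, "wrong_result": 2, "cosmetic": 1}

def triage(build_failures):
    """Turn raw build failures into a prioritised work list.

    build_failures: list of (change_id, unit_test, kind) tuples, where
    unit_test is the check-in test that originally passed the change.
    """
    report = []
    for change_id, unit_test, kind in sorted(
            build_failures, key=lambda f: SEVERITY[f[2]], reverse=True):
        report.append({
            "change": change_id,        # rejected from the source tree
            "re-evaluate": unit_test,   # the test that let it through
            "severity": SEVERITY[kind], # degrees of failure, not pass/fail
        })
    return report

worklist = triage([
    ("c1", "test_ui_colour", "cosmetic"),
    ("c2", "test_allocator", "crash"),
])
print(worklist[0]["change"])  # c2 — the big failure heads the list
```

The point of the sketch is the second field: every rejection carries a pointer back to the unit test that needs redesigning, so the test suite improves alongside the code.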

Although no failed code should ever get into the source code tree, resource allocation should always prioritise the big failures. One solution is to have a rectification group inside the development team. This group would look at the failed code each morning, allowing the developers to continue with their existing work.

Again, there are those who would be prepared to allow some degree of code failure into the source tree in order to get software out there. This is part of what is often referred to as technical debt, and it should only apply to software where the issue is minor, cosmetic and has no security impact. However, if you are going to go down this route, you need to learn from those using Agile methodologies and have a dedicated cycle in which you clean up technical debt on a regular basis. That does not mean in two years’ time when you release the next version.

Another problem with software testing is the complexity of applications. This has led test engineers and designers down the path of testing for known outcomes that would prove the software good, rather than testing for destruction. Let’s look at an example of such a failure in a non-developer environment.

To, too, two; there, their, they’re. There are many other examples of homophones, and all of them are correctly spelled: it is grammar and context that decide which one we should use. Your spell checker will not tell you when you have used the wrong one. If we only test software for known good outcomes, we make the same mistake as the spellchecker.
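In code terms, testing for destruction means feeding the software input it was never meant to like, rather than only the input that proves it good. A minimal Python sketch, with an invented parser purely for illustration:

```python
def parse_age(text):
    """Naive parser that a known-good test suite would happily bless."""
    return int(text)

# The known-good test — often the only kind a suite contains:
assert parse_age("42") == 42

# Destructive tests probe hostile input instead. Rejection (ValueError)
# is a correct outcome; silently accepting nonsense is the bug.
failures = []
for bad in ["", "forty", "-5", "2**31"]:
    try:
        age = parse_age(bad)
        if age < 0:
            failures.append(bad)   # accepted a negative age
    except ValueError:
        pass                       # correctly rejected
print(failures)  # ['-5'] — the destructive tests found a gap
```

The known-good test passed, yet the destructive loop immediately exposes that a negative age sails straight through, exactly the class of defect that “prove it good” testing never looks for.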

There is another reason to test for destruction. The complexity of software, especially as developers move to parallel and multi-threaded designs, means that we miss serious bugs such as race conditions. With users commonly having multiple applications open on the desktop and an increase in collaborative working, it is possible to reach a position where data is overwritten without the user knowing until it is too late. Even then, tracking the problem down is extremely difficult.
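A minimal Python sketch of the kind of race involved (the delay is artificial, inserted to widen a timing window that real schedulers hit only occasionally, which is precisely why these bugs slip past conventional tests):

```python
import threading
import time

counter = 0

def unsafe_increment():
    """A classic lost update: read, pause, write back a stale value."""
    global counter
    local = counter        # read the shared value
    time.sleep(0.01)       # widen the race window artificially
    counter = local + 1    # write back — clobbering other threads' updates

threads = [threading.Thread(target=unsafe_increment) for _ in range(10)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(counter)  # almost always far less than 10 — updates were silently lost
```

The fix is to make the read-modify-write atomic, for instance by holding a `threading.Lock` around it. The harder problem, which the rest of this piece is concerned with, is building tests that reliably expose such a window in the first place, since without the artificial delay the code above would pass most test runs.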

With increased virtualisation and shared resources, the risk of concurrency bugs is increasing all the time. How serious is it? The issue of concurrency bugs has been around for some time now and most of the test tool vendors can talk to you about them and their implications in detail. The problem is how to build tests and tools to specifically test for them.

Last year, two things happened which should make it easier to test for and reduce concurrency bugs. The first was the emergence of Software Testing as a Service (STaaS). This gives test teams access to very large pools of resources and high-value software, at the scale needed to provoke concurrency bugs, if they exist, into occurring.

The second has been the emergence of a new breed of test vendor that is much more aggressive about the role of software testing. I’m going to call out one company in particular here, Corensic. They came to market last year with a software product called Jinx which is designed to hunt for concurrency bugs and, to date, they appear to be the only test tool vendor with a serious emphasis on this issue.

If we are to restore faith in software and allow users to work without the constant risk of software crashes and data loss, we need to reinstate the importance of QA and testing. This means continuous use of the QA and test teams and integrating them into the entire software development process, not just bolting them on at the end if we have time.