Security is a big nightmare. You have no idea what else may be running on the computer your software is installed on. To protect your software and your users, developers must improve their secure coding skills.

We have seen a significant increase in the number of unit tests linked to the check-in process of the code repository. Some are simple bug tests; others are security focused, checking for known problems. But these are point solutions, designed to reduce the risk of errors in the software.

What is often overlooked is what happens when the software is completed and shipped. Here developers need to think about the wider picture and what they can do to prevent applications being hijacked and rogue versions being deployed. Typically, developers sign their software.

Unfortunately, this is not a foolproof solution. Not every file gets signed, and signing depends on certificates. Even the big companies fail to maintain their certificates, and you will often see installation messages saying the certificate is invalid. Call technical support and they inevitably tell you to ignore it. I've even seen that advice in FAQs shipped with products, which raises the question: what's the point of signing anything?

What indeed. If you can't be bothered to maintain it, don't use it. Telling users to ignore something designed to protect them only teaches them to ignore valid security messages.

You could ship a validation file that contains a hash value for each key file. This is good, but what happens when a user calls up and says that the hash signature appears to be invalid? Normally, you would be safe to assume that the file has been hacked or altered. But not always.
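A minimal sketch of how such a validation file might be generated, assuming SHA-256 and a simple one-line-per-file layout (both are illustrative choices on my part, not any particular standard):

```python
import hashlib

def sha256_of(path):
    """Hash a file in chunks so large binaries aren't loaded into memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def write_manifest(files, manifest_path):
    """Record one '<hex digest>  <filename>' line per shipped file."""
    with open(manifest_path, "w") as out:
        for name in files:
            out.write(f"{sha256_of(name)}  {name}\n")
```

The manifest itself then needs protecting too, of course, or an attacker simply regenerates it after tampering.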

It is not uncommon for people to make inline changes to software components and then fail to create an updated hash signature or properly increment the file version. The user is then faced with an invalid signature but the right version number, and ends up not applying the latest version of the file because the application's validation routine thinks it is invalid.
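The verification side is the mirror image: recompute each hash and flag anything that no longer matches, which is exactly what catches an inline-patched file. A sketch, assuming a hypothetical manifest of `<digest>  <filename>` lines:

```python
import hashlib

def sha256_of(path):
    """Chunked hashing, so even large files stay cheap to check."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def verify_manifest(manifest_path):
    """Return the files whose current hash no longer matches the manifest."""
    tampered = []
    with open(manifest_path) as f:
        for line in f:
            expected, name = line.rstrip("\n").split("  ", 1)
            if sha256_of(name) != expected:
                tampered.append(name)
    return tampered
```

Note that the check cannot tell a vendor's undocumented inline patch from an attacker's tampering; both simply show up as a mismatch.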

Another solution is to use whitelisting. I confess, I've become quite a fan of whitelisting, especially as the technology underpinning it goes back to the 1980s, and it can be used to create a very secure environment.

Whitelisting works by maintaining a hash value for each executable file. You can, if you want, extend it to all the files that you distribute with your application, something I would recommend and will explain later. Like a blacklist, a whitelist controls what can run on a computer. But while a blacklist has to know about every bad file, a whitelist only needs to know about the good ones. This is a critical difference, and an important one for businesses.
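The core check is tiny: hash the file and look it up in the set of approved hashes. A sketch, assuming SHA-256 and an in-memory set (a real whitelisting engine would keep a managed database of approved signatures):

```python
import hashlib

def sha256_of(path):
    """Chunked SHA-256 of a file's contents."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def is_whitelisted(path, approved_hashes):
    """Deny by default: only files with a known-good hash may run."""
    return sha256_of(path) in approved_hashes
```

The asymmetry described above lives in that last line: the set only ever holds good hashes, so anything not in it, including brand-new malware, is denied without the engine needing to have seen it before.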

In theory, all applications running inside an organisation and on its computers should be approved. This means they have value to the business, are properly licensed, and can be maintained through controlled patching and update processes. In very strict environments, where the desktop is completely locked down, there is no risk of unsupported applications. In reality, few users work in such tightly controlled environments, and they have become used to installing plenty of things on their desktops that they shouldn't.

My first introduction to whitelisting, back in 1989, was through auditing software. We went from machine to machine, created hash signatures of all the files and then compared them to the hash signatures we created from a sample machine running approved applications. Within a week we found hundreds of pirate copies of software, a lot of unapproved software and a not inconsiderable amount of rather dodgy software. Much of that dodgy software later turned out to be inline patched software from vendors.

So how could you use whitelisting to improve the user experience of your software? When you sign off the software for production, you generate a signature for every file, add the relevant application name, version details and publisher data, and then submit it to the whitelisting engines. Whether you are a commercial company or an in-house developer, your users can then install the software and have it validated as fit to run.
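At sign-off, that step could look something like the sketch below. The JSON layout, field names and `build_release_manifest` helper are all hypothetical; real whitelisting engines each define their own submission formats.

```python
import hashlib
import json
from pathlib import Path

def sha256_of(path):
    """Chunked SHA-256 of a file's contents."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def build_release_manifest(app_name, version, publisher, release_dir):
    """Hash every file in the release and attach the publisher metadata."""
    root = Path(release_dir)
    files = {
        str(p.relative_to(root)): sha256_of(p)
        for p in sorted(root.rglob("*")) if p.is_file()
    }
    return json.dumps({
        "application": app_name,
        "version": version,
        "publisher": publisher,
        "files": files,
    }, indent=2)
```

Because the manifest covers every file, not just the main executable, it also supports the impersonation defence discussed below.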

Of course, this does not mean that there are no security bugs, just that the versions of the files with those specific signatures have not been tampered with. A more important benefit is that, unlike file signing, there is no certificate to expire, and you can soon build up a list of all the valid versions of your software.

But why generate signatures for all files? The reason is application impersonation. Malware likes to hide itself to prevent anti-virus programs from deleting it. The main executable files are too obvious, so these days malware hides itself in a wide variety of file formats. If everything in your software distribution has a signature registered with a whitelisting engine, malware will have to look elsewhere, as it will be easily spotted.

The benefits of this approach go beyond protecting your users and, more importantly, your reputation. If you have signing and whitelisting controls in-house, every component you create is protected. This means that whenever you call a component or another application, you can validate its signature before passing it data. Over time, your entire software library will end up being protected.
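That call-time check could be guarded like this; the `call_component` wrapper and its `handler` callback are illustrative names of my own, and the whitelist is again just a set of approved hashes:

```python
import hashlib

def sha256_of(path):
    """Chunked SHA-256 of a file's contents."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def call_component(path, approved_hashes, payload, handler):
    """Validate the component's hash before passing it any data."""
    if sha256_of(path) not in approved_hashes:
        raise RuntimeError(f"{path} is not whitelisted; refusing to call it")
    return handler(payload)
```

The point is the ordering: the data only flows once the component on disk has proved it is the version you approved.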