On July 6, 2016, the Dutch CPB (Bureau for Economic Policy Analysis) published a report describing the economics of various aspects of cybercrime. Much of it resembles what David Rice wrote in his 2008 book Geekonomics. Rice's book is not cited as a reference, but it is interesting to see some of the same conclusions end up in government publications eight years later.
“Make software vendors liable”
One of the recommendations is to make the software industry liable for software bugs. Security.nl posted about this. Unfortunately, their article seems to link the liability recommendation to precisely those types of cybercrime that are not directly caused by software bugs but mostly by human behaviour, such as ransomware and phishing. This only adds confusion to the already muddled discussions I sometimes see. How can a supplier ever be held liable for the fact that a user clicked a dangerous link in an email? And which software supplier would be liable in this case? The one supplying the email client? The OS? Or perhaps the one that supplied the provider's mail server software?
When interpreting these texts, however, it is important to keep in mind the background against which they were written. Technology has changed, and so has the impact of software vulnerabilities on society. And when having these discussions, it is important to clearly understand the problem, not least its technical side.
Software liability only works for problems that are caused by faulty software, as the report makes clear. Rice's book discusses other directions in which policy can be changed, such as programmer education and certification (either of software vendors or of the products themselves). Unfortunately, the CPB report spends only two sentences on this. It does not discuss programmer education at all, but it does mention transparency about software quality as a means to get more control over the failing market. Certification is usually a much more industry-friendly and cheaper route than liability, because in the latter case only the lawyers are guaranteed to benefit. But that is not in the report. So let's elaborate on that and compare liability and certification a little.
We need a standard to define “secure”
In order to judge a liability case, it must be determined whether the accused party is really at fault, for example by being negligent. Did the accused party neglect to take certain security measures that could reasonably be expected to be in the software? To answer this, there needs to be a standard to compare the software against. Without such a standard, we will get endless discussions about whether or not the application should have prevented the user from choosing an easy-to-guess password.
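To make the password example concrete: a standard might spell out a baseline policy that an application is expected to enforce. Below is a minimal sketch of such a check; the minimum length and the tiny blocklist are purely illustrative assumptions, not taken from any actual standard.

```python
# Illustrative baseline password policy: reject passwords that are too short
# or that appear on a blocklist of commonly chosen passwords.
# Both the length threshold and the blocklist are hypothetical examples.

COMMON_PASSWORDS = {"password", "123456", "qwerty", "welcome", "letmein"}

def is_acceptable_password(password: str, min_length: int = 10) -> bool:
    """Return True if the password meets this (hypothetical) baseline policy."""
    if len(password) < min_length:
        return False  # too short to resist guessing
    if password.lower() in COMMON_PASSWORDS:
        return False  # trivially guessable
    return True

print(is_acceptable_password("qwerty"))                        # too short, blocklisted
print(is_acceptable_password("correct horse battery staple"))  # passes this policy
```

With a standard that names such checks explicitly, the question "should the vendor have prevented this password?" has a verifiable answer instead of fueling endless debate.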
The Secure Software Foundation (which is also mentioned in the report) has developed the Framework Secure Software, which can serve as such a standard.
Reactive or proactive?
Another problem with the liability route is that it is a reactive approach: action is only taken after things have gone wrong. Surely there will be a deterrent effect that incentivizes suppliers to make more secure software, but again, without a trusted standard this will be driven by fear alone. You may end up paying more for an equally bad product, simply because the vendor needs a bigger war chest for legal battles.
Then there is the big question of whether open source software suppliers can be held liable. If they can, the whole open source software infrastructure may collapse.
And what about third-party software components that are used in software? Does this mean that an accused party may need to accuse one of its own suppliers and spend even more on legal battles?
Becoming a software lawyer is starting to sound more and more like an excellent career choice: in high demand, guaranteed pay, and more than enough vagueness to give you all the job security you want.
A secure software certificate, on the other hand, involves a similar security assessment, but performed before the software ships. You will also pay more for your software, but you get a better product in return. Open source developers need not worry: if an open source product needs an independent assessment, it can be done by whoever is willing to spend resources on it. That sounds like a much more productive way to spend money. And remember, you can only spend your money once.
Ending the information asymmetry
Whether you prefer certification or liability as a policy, in either case you need a trusted standard that describes what security aspects can be expected for a certain piece of software.
The report describes how information asymmetry within the software industry, affecting both software users and software development organizations, is a cause of market failure. No matter how you want to incentivize the software market to produce more secure software, making security visible is the key.