
Could You Be Sued for Bugs in Your Application?

Craig Buckler

An article which recently appeared on TechRepublic will strike fear into the hearts of developers and software manufacturers: Should developers be sued for security holes?

The question was posed by University of Cambridge security researcher Dr Richard Clayton. Losses from software security failures run into billions of dollars every year, and he wants vendors to accept responsibility for damage resulting from avoidable flaws in their applications. He argues that companies should not be able to hide behind End-User License Agreements which waive liability.

While no legislation has been passed, committees in the UK and Europe have been considering such requirements for several years. Clayton wants applications to be assessed to determine whether the developer has been negligent. He argues that the threat of court action would provide an incentive to minimize security holes:

If you went down to the corner of your street and started selling hamburgers to passers-by they can sue you [in the case of food poisoning].

It’s not going to be easy. There’s going to be a lot of moaning from everybody inside [the industry] and we’re going to have to do it on a global basis and over many years.

Understandably, the software industry has fought back with several points:

  • No one purposely makes insecure software, but the complexity of code can introduce unforeseen errors.
  • When a home is burgled, the victim doesn’t usually ask the maker of the door or window to compensate them.
  • Legislation would stifle innovation, and manufacturers would lock down application interoperability to guard against unpredictable interactions.
  • Who would be liable for open source software?

Litigious Lapses

Clayton’s primary concern is security holes, but what does that mean? Bugs. It doesn’t matter whether they are caused by the coder’s inexperience, lack of testing or unforeseen circumstances owing to a combination of factors.
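
To see how easily such flaws hide in plain sight, consider one widely documented example (not Clayton's, but a matter of record): the textbook binary search shipped in the JDK's Arrays.binarySearch with a latent integer overflow for years, as Joshua Bloch described in 2006. By inspection the code looks correct; it only fails on arrays of a billion-plus elements. A minimal sketch:

    // The classic binary search midpoint bug, documented by
    // Joshua Bloch in 2006 after it surfaced in the JDK.
    public class MidpointBug {
        static int binarySearch(int[] a, int key) {
            int low = 0, high = a.length - 1;
            while (low <= high) {
                // BUG: for arrays over ~2^30 elements, low + high can
                // exceed Integer.MAX_VALUE and wrap negative, making
                // mid nonsense and a[mid] throw an exception.
                int mid = (low + high) / 2;
                // FIX: int mid = low + (high - low) / 2;
                if (a[mid] < key)      low = mid + 1;
                else if (a[mid] > key) high = mid - 1;
                else                   return mid;
            }
            return -1; // key not found
        }
    }

Experienced reviewers missed it for years because both versions behave identically on every input a tester is likely to try. That is precisely the "unforeseen circumstances" problem.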

However the legislation is worded, if someone can sue for security issues, they can sue for any bug. Did an application crash before you saved 20 hours of data entry? Did an email or Twitter message reach an unintended recipient? Did Angry Birds cause distress by failing to update your high score?

Burgers vs Browsers

Let's use Clayton's burger analogy. Preparing a burger involves sourcing good-quality (OK, acceptable-quality) meat and throwing away any that's past its best. You won't have problems if the ingredients are kept cool until required, then cooked at a high enough temperature for long enough.

I don't want to berate the fast food industry, but there are only a dozen or so variables and you deal with just two or three at a time. Nearly all are common sense: if the meat smells bad or looks green, it isn't fit for human consumption. A burger costs a couple of dollars, but eat a bad one and it could kill you.

Let's compare that to a web browser. Conservatively, a browsing application could have 10,000 variables. There's no linear path, and each variable could be used at a different time, in a different way, depending on the situation. The browser runs on an operating system which could have a million lines of code and another 100,000 variables. It could also be interacting with other software, and it runs on a processor with its own instruction set. It's complex.

However, a browser is completely free at the point of use. It may be the worst application ever written. You may lose time, money and hair. But no one will die. There are risks, but aren't they far outweighed by the commercial benefits?

Terminal Software

It is possible to limit programming flaws. Consider avionics software: a bug which causes a plane to fall out of the sky leads to deaths. Failure is unacceptable.

Aircraft software development is rigidly controlled, fully documented, optimized for safety, thoroughly tested, reviewed by other teams and governed by legislation. It takes considerable time, effort and focus. Airbus won't demand a cool new feature midway through coding. Boeing won't rearrange interface controls one week before deployment.

The software is incredibly complex, but it’s one large application running on a closed system. The development cost is astronomical — yet failures still occur. They’re rare, but it’s impossible to test an infinite variety of situations in a finite period.

Assessing Developer Negligence

There’s only one way to learn programming: do it. Learning from your mistakes is a fundamental part of that process. You never stop learning. And you still make mistakes. I cringe when I examine code I wrote last week … applications written ten years ago scare the hell out of me.

While education is a start, it takes time, patience, and real-world problem solving to become a great developer. How could you gain that experience if you weren’t being paid? If you’re being paid, it stands to reason someone is using your software.

Anyone who thinks applications can be flaw-free has never written a program. Even if your code is perfect, the framework you’re using won’t be. Nor is the compiler/interpreter. What about the database, web server, operating system or internal processor instruction set?
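
As a small illustration of that stack of layers (a hedged example of my own, not anything from Clayton's research): even a one-line program inherits quirks from the platform beneath it. In Java, Math.abs(Integer.MIN_VALUE) returns a negative number, a documented consequence of two's-complement arithmetic that no amount of care in your own code avoids:

    public class PlatformQuirk {
        public static void main(String[] args) {
            // Integer.MIN_VALUE is -2147483648. Two's-complement ints
            // have no matching positive value, so Math.abs returns its
            // input unchanged: a negative "absolute value".
            System.out.println(Math.abs(Integer.MIN_VALUE)); // -2147483648
        }
    }

Your code can be flawless and your program can still surprise you.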

But let's assume lawyers found a way to legally assess developer negligence. Who in their right mind would want to become a programmer? Fewer people would enter the profession and daily rates would increase. Those developers prepared to accept the risk would have to adhere to avionics-like standards and pay hefty insurance premiums. Software costs would soar, and applications would become an expensive luxury for the privileged few.

Clayton’s proposal may be well-meaning but it doesn’t consider the consequences. His suggested legislation would kill the software industry. Ironically, that would solve all security flaws — perhaps that would make him happy?