Developer-Led Code Security: Why False Positives Are Worse than False Negatives

Most SAST tools target security compliance auditors. Their goal is to raise an issue for anything even remotely suspicious. There's no fear of false positives for those tools because the auditors will figure it out; after all, it's the auditors' job to sort the wheat from the chaff and the signal from the noise. But the industry should rally around efforts to kill all that noise. There's little tolerance among developers for crying wolf. SAST players should listen to developers and follow the guiding principle of preferring "reasonable" false negatives over false positives.

What does that mean in practical terms? Well, let's play with some numbers. Let's say you have a codebase with 12 vulnerabilities. That's 12 things that absolutely need fixing. A typical SAST analysis might raise 500 issues in total, and then the auditors will spend X weeks sorting through them to bring you, the developer, the audit result maybe a month or so after you've moved on to other code.
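To make the signal-to-noise problem concrete, here's the arithmetic behind those illustrative numbers (12 real vulnerabilities, 500 raised issues) as a quick sketch:

```python
# Signal-to-noise math for the hypothetical numbers above.
true_vulnerabilities = 12   # issues that genuinely need fixing
issues_raised = 500         # everything the SAST tool flagged

# Best case: all 12 real vulnerabilities are among the 500 findings.
precision = true_vulnerabilities / issues_raised
noise = issues_raised - true_vulnerabilities

print(f"Precision: {precision:.1%}")  # prints "Precision: 2.4%"
print(f"Noise: {noise} findings")     # prints "Noise: 488 findings"
```

In other words, even in the best case, roughly 49 out of every 50 findings are noise someone has to triage by hand.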

For Secure Code, Maintainability Matters

Author Robert Collier said that "Success is the sum of small efforts repeated day in and day out." That's especially true when it comes to security. By now we all understand that securing your systems isn't as simple as installing a firewall and calling it a day. Instead, it's multiple actions and strategies in concert, implemented consistently over time. And believe it or not, one small but important strategy is simply writing code that's reliable (bug-free) and maintainable (easy to understand). Yes, I know that sounds too simple, and possibly even self-serving. So in this post, I'll lay out some of the evidence for how writing reliable and maintainable code means you're inherently writing more secure code.

Poor Maintainability Contributed To Heartbleed

To make the case for how maintainable code contributes to security, I'll start with the Heartbleed Bug. Remember that one? It was a serious vulnerability in OpenSSL that allowed attackers to steal sensitive information with a really trivial attack that XKCD illustrates beautifully. David A. Wheeler, who teaches a graduate course in secure development at George Mason University, wrote an extensive analysis of the vulnerability. In it, he laid part of the blame on the difficulty of simply understanding the code involved: "Many of the static techniques for countering Heartbleed-like defects, including manual review, were thwarted because the OpenSSL code is just too complex. Code that is security-sensitive needs to be 'as simple as possible'."

What Is Taint Analysis and Why Should I Care?

He covered a wet, hacking cough with his hand, then pushed through the door of the ward. I reached the same door and hesitated. The Cougher had just tainted the door with his germs. If I touched it, I'd be tainted too.

These days we all know what germs are and how they're passed from person to person, and from hand to door to hand. The fact is that, particularly in cold and flu season, you have to regard every doorknob and every elevator button as suspect. You always wash your hands afterward, because you never know which doorknob is tainted with germs. You have to assume they all are.
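Taint analysis applies the same assumption to code: anything from outside the program (a "source") is tainted until it passes through a "sanitizer," and tainted data must never reach a sensitive operation (a "sink"). Here's a minimal sketch of that idea, using hypothetical function names for illustration, with SQL injection as the sink:

```python
# Toy illustration of taint analysis terminology: source, sanitizer, sink.
# Function names are made up for this example.

def get_user_input():
    # SOURCE: data arriving from outside the program is tainted,
    # because an attacker controls it.
    return "1 OR 1=1"  # a classic SQL injection payload

def sanitize(value):
    # SANITIZER: after this step, the data is considered clean.
    # This toy version keeps only digits.
    return "".join(ch for ch in value if ch.isdigit())

def build_query(user_id):
    # SINK: interpolating data into SQL. A taint analyzer flags any
    # path where tainted data reaches this point unsanitized.
    return f"SELECT * FROM users WHERE id = {user_id}"

tainted = get_user_input()
unsafe_query = build_query(tainted)           # source -> sink: flagged
safe_query = build_query(sanitize(tainted))   # sanitizer breaks the taint
```

The analyzer doesn't need to know whether the payload is actually malicious; like the doorknob, the input is treated as tainted simply because of where it came from.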