If you are a developer with access to tools such as Ounce Labs, Fortify Software, or Veracode, would you object to metrics on the code you write being made publicly available? If you would object, what would it take for you to feel more comfortable with more transparency in this regard?

If metrics were public, do you think this would have the positive side effect of encouraging developers to spend more time learning secure coding techniques? Are there better approaches to encouraging developers to spend their own time learning secure coding, given competing concerns?

A: 

I regard any metric with a high degree of suspicion, for all the usual reasons. Security metrics are likely to be used as FUD. You also need to know whether a metric is even appropriate: for instance, defects related to untrusted code are irrelevant to software that never runs untrusted code.
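
To make the context point concrete, here is a sketch of my own in Java (it uses the untrusted-input case rather than the untrusted-code case, and the class and method names are hypothetical): the very same lookup code is a defect worth counting only if an attacker can actually reach it.

    import java.sql.*;

    public class UserLookup {
        // Only a real vulnerability if `name` can carry untrusted input:
        // concatenating it into the SQL text lets an attacker inject SQL.
        static ResultSet findUnsafe(Connection c, String name) throws SQLException {
            Statement s = c.createStatement();
            return s.executeQuery("SELECT * FROM users WHERE name = '" + name + "'");
        }

        // The fix an analysis tool would steer you toward: a parameterized
        // query, which keeps the input out of the SQL text entirely.
        static ResultSet findSafe(Connection c, String name) throws SQLException {
            PreparedStatement p = c.prepareStatement("SELECT * FROM users WHERE name = ?");
            p.setString(1, name);
            return p.executeQuery();
        }
    }

If findUnsafe only ever receives names from a trusted configuration file, a raw count of tool findings overstates the risk; if it sits behind a web form, the same finding is critical.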

There are two important questions, IMO: Does it have a vulnerability? Just how obvious is that?

(Another question would be: Why are you linking to some local group without making it clear in the link?)

Tom Hawtin - tackline
I think your response is accurate and helps with making products more secure, but it doesn't really answer how to get developers to pay more attention to writing secure code. Are there alternatives other than suboptimal tactics such as throwing developers under the bus with faulty metrics?
jm04469
+1  A: 

I don't think that access to tools is necessarily the problem. From the looks of your profile, we are in a similar position. I am responsible for architecting fairly large client-server solutions. The security problems in our codebase are the result of secure practices simply not being "on the radar": they get bumped for feature sets and other customer-facing bug fixes.

We are currently looking into how to secure our codebase, manage security roles at the enterprise level, and all that good stuff, driven by market demand. I think that having a way to gauge the level of risk in a codebase is very important, and that is where tools come into play: they offer reports and can be used to show improvement. Developers like measurable progress, and decreasing the number of security risks found by an analysis tool from one release to the next is very measurable.
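
As a sketch of how that could be tracked (my own illustration; the "category,count" export format and the FindingGate name are assumptions, not any particular tool's output): compare per-category finding counts from the previous and current release, and fail the build if any category grew.

    import java.io.IOException;
    import java.nio.file.*;
    import java.util.*;

    public class FindingGate {
        // Load "category,count" lines from an exported findings summary.
        static Map<String, Integer> load(Path p) throws IOException {
            Map<String, Integer> counts = new HashMap<>();
            for (String line : Files.readAllLines(p)) {
                String[] parts = line.split(",");
                counts.put(parts[0].trim(), Integer.parseInt(parts[1].trim()));
            }
            return counts;
        }

        public static void main(String[] args) throws IOException {
            Map<String, Integer> previous = load(Paths.get(args[0]));
            Map<String, Integer> current = load(Paths.get(args[1]));
            boolean regressed = false;
            for (Map.Entry<String, Integer> e : current.entrySet()) {
                int before = previous.getOrDefault(e.getKey(), 0);
                if (e.getValue() > before) {
                    System.out.printf("%s: %d -> %d (more findings than last release)%n",
                            e.getKey(), before, e.getValue());
                    regressed = true;
                }
            }
            System.exit(regressed ? 1 : 0); // non-zero exit fails a CI build
        }
    }

Run as "java FindingGate previous.csv current.csv"; the release-over-release trend is the measurable progress developers can point to.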

I think tools and the reporting they offer can be helpful and actually play a pretty critical role. But until someone (management, the market, et al.) places some value on closing security holes and mandates the repeated use of tools to show a decrease in assessed risks, developers aren't going to be overly interested in security-related programming. It really isn't that interesting when you look at all of the other really cool whiz-bang stuff. You might want to check out this related thread from a few weeks ago, or even this one from last October.

Even worse, security concerns only really impact a very small fraction of software today. That said, a large number of the defects reported from the field would not have occurred if we had been paying more attention to security; in other words, they are security defects, even though they get reported because they cause other problems. This is where I believe we can get the biggest bang for the buck when fixing security defects. I wonder if anyone has done a study that examines the rate of bug reports before and after going through a security assessment and repair. My gut tells me that the rate of reported defects should decrease; i.e., "more secure software" == "higher quality software". That's enough rambling...

D.Shawley