views: 406
answers: 7

We developers are in a unique position: we can not only be skeptical about the capabilities provided by open source software, but actively analyze the code, since it is freely available. In fact, one might even argue that open source developers have a social responsibility to do so as a contribution to the community.

But at what point do you as a developer say, "I better take a look at what this is doing before I trust using it" for any given thing? Is it a matter of trusting code with your personal information? Does it depend on the source you're getting it from?

What spurred this question was a post on Hacker News linking to a JavaScript bookmarklet that supposedly tells you how "exposed" your information on Facebook is, as well as recommending some fixes. I thought for a second, "I'd rather not start blindly running this code over all my (fairly locked down) Facebook information, so let me check it out first."

The bookmarklet is simple enough, but it calls another JavaScript function which at the time (but not anymore) was highly compressed and undecipherable - though understandably more for optimization than obfuscation. That's when I said, "Nope, not gonna do it." So even though I could have verified the original uncompressed JavaScript from the GitHub site, and even saved a local copy to verify and then run without hitting their server, I wasn't going to. It's several thousand lines, and I'm not a total JavaScript guru to begin with.
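One low-effort middle ground for the "save a local copy" approach is to audit the copy once and then pin it by checksum, so at least the code you reviewed is the code you run. A minimal sketch in Python (the file path and digest here are placeholders you would record after your own review, not anything from the actual bookmarklet):

```python
import hashlib

def sha256_of(path: str) -> str:
    """Return the hex SHA-256 digest of a file's contents."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

def verify_pinned(path: str, expected_hex: str) -> bool:
    """True only if the file still matches the digest you audited."""
    return sha256_of(path) == expected_hex

# Hypothetical usage: AUDITED_DIGEST is whatever you wrote down after
# reading the uncompressed source yourself.
#   if verify_pinned("bookmarklet.js", AUDITED_DIGEST):
#       ...serve/run the local copy...
```

This doesn't make the code trustworthy, of course; it only guarantees you're still running the exact bytes you (or someone you trust) looked at.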

Yet, folks are using it anyway. Even (supposedly) bright developers. What makes them trust the script? Did they all scrutinize it line by line? Do they know the guy personally and trust him not to do anything bad? Do they just take his word?

What makes you trust that a piece of open source software is not malicious?

Note: to clarify, I limit the scope of this discussion to open source software because with open source software it can often be a matter of "do I want to expend the effort to do a proper analysis, or should I just go ahead?" When the opportunity to check is there, when do you skip it and just implicitly trust the software?

+4  A: 

I'd say it's a function of the community. If a piece of open source software is freely available but the source (the developer) is a relative newbie, I'm far less likely to trust it. If s/he is a member of a larger community and has an established reputation, I might give it some credibility.

I like to use news as a good metaphor here. What makes you trust CNN or the BBC over a smaller, local outlet? It's the large community supporting the publication and the long-standing reputation they've amassed for putting out quality information. The same holds true for open source developers. I'd trust just about anything Matt Mullenweg publishes. But guest_user_002943 would be a completely different story.

EAMann
That's a good point, and I know that I definitely implicitly trust basically anything out of the Debian/Ubuntu package repositories, not just because they're open source but because they've undergone at least one fairly rigorous review, plus incremental reviews since then whenever changes come down the pipe.
Daniel DiPaolo
A friend of mine found a random OSS app that looked like an interesting password encryptor/generator/many other things. He downloaded it, installed it, and used it for a few days, then decided he wanted to extend it. Since the project was dead, he took it upon himself. While digging through the code he found a line where all usernames and passwords were emailed to a Gmail account. Fortunately the email and password were in the comments, so he logged into the Gmail account to find thousands of usernames and passwords, including his. All that to say: definitely be careful of guest_user_002943 OSS projects.
percent20
@percent20 excellent example, that was on codinghorror.com [but unfortunately it wasn't open source so not entirely relevant](http://www.codinghorror.com/blog/2008/03/a-question-of-programming-ethics.html)
Daniel DiPaolo
I'd like to add that a community which reviews and edits the source makes it even more reputable. There's a difference between "free software" published/maintained by a single developer and truly free software that's maintained/reviewed by a larger group. Community = accountability when it comes to development.
EAMann
You see, the problem is really three-fold. There are three factors that will get you screwed, OSS or not. 1. Vendor trust: do you trust that the vendor does not have secret evil plans? 2. Vendor ignorance: do you trust that the vendors themselves are not getting hacked and having malicious code pass through them without anyone noticing? 3. Path of trust: does the vendor use proper PKI to ensure that the code gets to you _untampered_? 1 is somewhat rare, 2 a little less so, and 3 is commonplace; in fact, it's almost impossible to get most programs without being vulnerable to a man in the middle.
Longpoke
@Daniel DiPaolo Thanks for letting me know that was on Coding Horror; going to have to go talk to a friend who claimed it as his own.
percent20
Accepting this because it was the earliest, and while Longpoke's long answer is very good and thought-provoking, it doesn't really answer "what makes you trust" when the answer is "I don't" :)
Daniel DiPaolo
Sony and Energizer are big companies that people trust; they have also spread backdoors.
Rook
@The Rook: FWIW, I trust no publicly held corporations, Sony in particular. Any publicly held corporation is one proxy fight away from turning into a force for evil in the world, so while I may have reason to trust a company now, it may be irrelevant next year. I'd rather trust people.
David Thornley
+6  A: 

It's a matter of relativity. After all, who's to say there isn't an NSA backdoor on your Windows machine? Or that a backdoor isn't installed during a 'silent' patch? I know that my Linux kernel is safe from such attacks, because there are people more paranoid than I who keep watch. After all, backdoors are found in closed source software too, and having the source code makes them a LOT easier to find.

Rook
A USB battery charger. I'm... impressed.
Paul Nathan
I agree that having the source code by definition makes these sorts of things easier to find, but is it enough that "someone somewhere probably catches this stuff" to make you trust it? Obviously someone has to be the judicious and probably paranoid soul that takes the time to audit things.
Daniel DiPaolo
Great articles.
Wallacoloo
@Daniel DiPaolo I do audit code before I use it, but that's mostly looking for accidental security flaws. The truth is there is a lot of code out there, and a lot of opportunity for something to go wrong. Honestly, I don't think a good rebuttal has been posted to my answer: open source code is in general more trustworthy than closed source projects, and nothing is 100%.
Rook
+12  A: 

The reputation of the source.

anon
Short, sweet, and... maybe a bit of double entendre? :)
Daniel DiPaolo
+1 reputation is far more important than any kind of code metric.
Matthew Jones
I demand this answer be made community wiki, because of the irony involved!
Kevin Panko
Reputation? What about the article on the energizer backdoor that i posted?
Rook
Sorry, I am missing the double entendre and irony?
anon
@Neil Butterworth lol i just got the double entendre. That was not intended.
Rook
@Neil two double entendres really, "reputation" as in SO rep *or* trustworthiness of where you get the software from, and "source" as in the place you get the software from *or* as in the actual code itself
Daniel DiPaolo
@Daniel Oh well, I've never believed that about SO "rep" - it's more a measure of how much time you have available to waste.
anon
I guess sony has a bad reputation after the whole rootkit debacle.
Rook
+6  A: 

Simple: I don't trust any software, and you shouldn't either. From experience, I've seen quite a few compromised open and closed source projects from time to time.

But at what point do you as a developer say, "I better take a look at what this is doing before I trust using it" for any given thing?

No. This is not good enough; one can plant extremely well hidden vulnerable code in a program, and without some hardcore static analysis you'll never find it. I mean, for small apps it's okay, and I do this sometimes myself, but for anything large scale it's pretty much impossible.

I only "trust" packages from my distribution vendor, so there is a single point of failure (well, hardware vendor too, but this is another beast). It's a lot less likely for my vendor to serve compromised code to me than if I just execute random programs off the net. Even if my distribution does serve compromised code, it's more likely that someone will notice it before I even run it.

For Java apps, I just stick 'em in the sandbox. Can do the same with SELinux for C apps. However, this is not enough. Java and the Linux kernel are both written in C and are thus subject to a constant stream of critical stack smashing related vulnerabilities. So even when I'm running all my untrusted programs in secured domains, I still can't trust them.

Once C phases out and we have a secure operating system with a decent security model based on sandboxing/mandatory access control/etc., with a kernel written entirely in some safe language supported at the hardware level, this should stop. We will be able to stick any program into an isolated domain and never have to worry about whether or not it's to be trusted, because it simply can't break out.

Android and iPhone are already doing this to some extent; however, I haven't looked into them much yet, and anyway I doubt they are ideal. For one, the iPhone doesn't even have Java or .NET CLR support - a big mistake. Meanwhile Android runs Java, but I'm not so sure Java is even ideal for this anymore because of some recent code execution vulnerabilities (these are language level, and have nothing to do with stack smashing - how did they mess this up?). I haven't looked into it enough to judge whether or not Java's security is taking a direction that is inherently flawed, but now I am starting to have doubts.

Perhaps in the future, cryptographic webs of trust will be established to determine which versions of software are safe. Still, you should be able to run most programs with only a few explicitly allowed permissions; the web of trust can help determine which privileges should be granted, to save some time or to help people who don't know how to determine the right privileges on their own. Unfortunately, in the current state of Gentoo's 3rd party overlay repositories, this is simply not the case, and most people in them don't even care about security. I'm pretty sure the same holds for every other similar distribution / operating system.
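As a rough illustration of how such a web of trust could score a piece of software: trust decays with each hop of vouching, and you accept something only if some chain of vouches keeps the score above your threshold. This is a toy model; the names, graph, and decay factor below are purely hypothetical, not any real protocol:

```python
from collections import deque

def max_trust(graph, start, target, decay=0.5):
    """Best achievable trust in `target`, starting from `start`.

    `graph` maps a party to the set of parties it directly vouches for.
    A direct vouch is worth `decay`, two hops are worth decay**2, etc.
    """
    best = {start: 1.0}
    queue = deque([start])
    while queue:
        node = queue.popleft()
        for neighbour in graph.get(node, ()):
            score = best[node] * decay
            if score > best.get(neighbour, 0.0):
                best[neighbour] = score
                queue.append(neighbour)
    return best.get(target, 0.0)

# Hypothetical web: I vouch for alice and bob, and two chains
# lead to the key that signed a release.
web = {
    "me": {"alice", "bob"},
    "alice": {"release-signer"},
    "bob": {"carol"},
    "carol": {"release-signer"},
}
```

Here `max_trust(web, "me", "release-signer")` takes the shorter chain through alice (two hops, score 0.25) rather than the longer one through bob and carol; a party with no chain at all scores 0.0, and you'd refuse it.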

Longpoke
Yeah, whenever I start getting into discussions with "trust" and people defend with "oh the community!", I start wondering well how do you know you can trust the community? Cryptographic rings/webs of trust are an interesting thing to bring up here and I wonder how they could be successfully deployed widely with high user acceptance.
Daniel DiPaolo
@Daniel: _but_, a decentralized web of trust _on top of_ a community driven o/s is surely better than trusting only one or the other. I actually haven't put much thought into the web of trust idea, but there is quite a bit of research on it, and I'm pretty sure something decent can be made of it. Nevertheless, my stress is on getting a decently secure o/s made, where for the most part it wouldn't matter whether you trust the code or not. I remember Linus himself saying something along the lines of "the O/S can never be secure", so we should have some kind of backup plan; that made me mad.
Longpoke
+4  A: 

Community, community, community. Given the fervor of most open source communities, any purposely malicious code found in an open source package would spell the end of an OSS company's distribution stream - which is the business model for most of these guys.

I feel confident that I can trust the OSS vendors I utilize in my deployments, because I'm a member of their communities and feel comfortable that those communities are active and diligent. Two great examples are SugarCRM and MindTouch; both are products I use, and both have strong communities.

Mateo Ferreira
I've seen Ubuntu communities with obvious vulnerabilities in their web apps, someone could just hack them and plant malware.
Longpoke
A: 

Community reviews and testimonials.

this.
__curious_geek
A: 

The fact that it's open source.

brian
Isn't that a bit naive/overly simplistic?
Daniel DiPaolo
absolutamutably
brian