Sorry if this is a stupid question, but sometimes I see Easter eggs and other hidden features in programs like Aptitude (the package manager for Debian).
Is it possible, or even likely, that more sinister features make their way into open-source software?
It's possible, but somewhat harder, because the source code is right there. The author would be counting on nobody bothering to read the source before running it, which is true for a lot of people, I suppose; I know I don't read the source of the open-source programs I run. In a larger project it's harder, because the code is usually reviewed, but with just one author it becomes a lot easier.
Any software can contain malicious parts (intentionally or unintentionally). The advantage of open source is that you can check it (if you like and have the time to do so).
Yes, it is possible, just as it's possible for the same thing to happen with closed-source software (a malicious developer on the team, etc.).
It's arguably less likely with open source, though: the moment anything like that is noticed, any other user can pull the offending code and it's no longer a problem.
I'd say it obviously is possible. All it requires is that code gets accepted without sufficient review, and it's not hard to come up with scenarios permitting that, since reviewers are human.
The more interesting question then becomes how likely it is that malware gets accepted into some package where it can do harm. This is far harder to answer, unfortunately. We seem to be doing okay so far (knocks on wood), at least.
I guess Linus's law ("given enough eyeballs, all bugs are shallow") holds true, but it is easy to assume that just because something is open source, people will spend a lot of time eyeballing its code. That is not generally true, as far as I know.
EDIT: Changed the wording about Linus's law above, had it wrongly attributed.
It is, but usually it's noticed and removed before it becomes an issue. With any well-maintained open-source software there are many people who check each revision for the changes that were made.
Yes, it's possible; see the Debian OpenSSL debacle for a nice example: http://www.metasploit.com/users/hdm/tools/debian-openssl/
Although this is not a virus, spyware, or malware, it clearly shows what can go wrong in open-source software.
Yes, it is possible; it depends on how carefully commit access to the source code is controlled and how carefully those commits are monitored. Some projects have a few lead developers who take patches from the community and commit them to the code base, while other projects grant commit access to many more developers. Equally, some projects have a large number of people reviewing the source code as changes are made.
It is possible; after all, it is code just like any proprietary software. However, the main difference is that you, and the community, have access to the code, and that fact is enough to stop it from happening in almost all cases. Also, the sheer variety of library and kernel versions in circulation makes malware less likely to succeed.
Do you really know what you use? Do you check? Does the typical user check or care?
For example, google for the keywords "repository compromised", or "gpg repository compromised", or something along those lines.
It is possible, but not very likely.
There's nothing special about open source code that makes it magically resistant to containing bad things, but open source which is actively developed by a group of people is very unlikely to contain malicious code, because someone would notice and blow the whistle.
In addition, in most open source projects it's possible to trace the history of any particular piece of code, by looking through the project's source repository, which means that the author of a malicious piece of code can be identified.
If in doubt, you can always review the code yourself, or hire someone to review it for you. Code review generally won't catch subtle bugs or errors, of course, but malicious code is likely to be more obvious.
Is it possible in principle? Of course. Any software can do things people don't want.
Is it possible in practice? The argument against is of course that the software is available, and that many eyes are looking at it, so it'll be discovered before it can do too much damage.
On the other hand, there is the Underhanded C Code Contest, http://brainhz.com/underhanded/, in which you submit programs that intentionally misbehave, but where the cause of the bad behavior is not apparent from an inspection of the source.
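To give a flavour of what "not apparent from an inspection of the source" can look like, here is a toy C sketch in the same spirit (my own illustration, not an actual contest entry; all names are made up): a password check that reads perfectly plausibly but quietly accepts an empty password.

```c
#include <stdio.h>
#include <string.h>

/* Toy illustration only, not a real contest entry.  The check compares
 * only strlen(attempt) bytes, so an empty attempt (length 0) always
 * "matches": strncmp with a length of 0 returns 0 for any inputs.
 * Any prefix of the stored password slips through for the same reason. */
static int check_password(const char *attempt, const char *stored)
{
    return strncmp(attempt, stored, strlen(attempt)) == 0;
}

int main(void)
{
    const char *stored = "hunter2";

    printf("correct attempt: %d\n", check_password("hunter2", stored)); /* 1 */
    printf("wrong attempt:   %d\n", check_password("letmein", stored)); /* 0 */
    printf("empty attempt:   %d\n", check_password("", stored));        /* 1 (!) */
    return 0;
}
```

At a glance the comparison looks entirely reasonable, which is exactly the kind of plausible deniability the contest rewards.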
Then there's of course the Debian SSL bug, where SSL keys generated with the OpenSSL library on Debian were quite insecure. This, apparently, was just an act of incompetence (Hanlon's razor, everybody), but it shows how security problems can sneak into open-source code. With weak keys and SSH access, you don't need a virus in the code; you just exploit the weak code when it's running on production systems.
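To illustrate the shape of that bug, here is a rough conceptual sketch (nothing like the real OpenSSL code): the offending change effectively stopped mixing real entropy into the generator, leaving little more than the process ID, so the whole key space collapsed to the range of possible PIDs.

```c
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

/* Conceptual sketch only, not the actual OpenSSL code.  If the only thing
 * that ever seeds the generator is the process ID, every "random" key it
 * can produce is one of at most a few tens of thousands of possibilities,
 * and an attacker can simply enumerate the seeds and regenerate them all. */
int main(void)
{
    unsigned seed = (unsigned)getpid();   /* the only remaining "entropy" */
    srand(seed);

    printf("seed (pid) = %u\n", seed);
    printf("\"key\" material: %d %d %d %d\n", rand(), rand(), rand(), rand());
    return 0;
}
```

That collapse is what made it feasible to precompute complete lists of the weak keys.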
Take that as a yes/no/maybe :)
Just as closed-source software can be a virus, spyware, or malware, open-source software can be as well. And there is plenty of horrible open-source software too.
So far, though, every piece of closed-source Windows software I've seen has been some sort of malware, so my bias is that closed-source apps have a higher chance of being crap overall.
Who prevents it? Even if the software is open source, only a poorly run project lets anyone touch the release repository without authorization. Usually there are maintainers who review all incoming patches.
99% of all software produced and used is poor quality and bug-ridden.
It's certainly possible, but it's more complicated. I don't know of any actual malware going around, but people have made mistakes with similar effects. (I know of mistakes that have been found; obviously, I don't know how many haven't been.)
If you put malware into closed-source software, the only way to find it is to detect the effects and analyze the binary. There are people who are very good at analyzing the binary.
In open-source software, anybody can look at the source code. Not many will, for most packages, but there's a much higher chance of being found out. Once found out, anybody can patch the software to do the good things without the bad. Moreover, most open source software has publicly available repositories, which means that anybody can track down the history of the code, and (at least to a pseudonym) who did what. There is also a tendency to produce more readable code in open source, so that changes will stand out more.
The caveat, of course, is that most of us really don't know what to look for in software security. If I run a compression program, and it compresses my file to a shorter version that looks like gibberish, and I can get the original back, I know that's working. If it changes it to gibberish that it claims is encrypted, I don't know a priori how to tell if it's well encrypted.
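As a concrete (and deliberately silly) example of that last point, here is a toy C program that "encrypts" with a single-byte XOR. The output looks like gibberish and round-trips perfectly, yet an attacker only has to try 256 keys; judging by the output alone, you cannot tell it apart from something done properly.

```c
#include <stdio.h>
#include <string.h>

/* Toy example: output that "looks like gibberish" proves nothing about
 * the strength of the encryption.  This single-byte XOR round-trips
 * perfectly, but brute-forcing it means trying only 256 possible keys. */
static void xor_buf(unsigned char *buf, size_t len, unsigned char key)
{
    for (size_t i = 0; i < len; i++)
        buf[i] ^= key;
}

int main(void)
{
    unsigned char msg[] = "attack at dawn";
    size_t len = strlen((char *)msg);

    xor_buf(msg, len, 0x5a);                 /* "encrypt" */
    printf("ciphertext:");
    for (size_t i = 0; i < len; i++)
        printf(" %02x", msg[i]);
    printf("\n");

    xor_buf(msg, len, 0x5a);                 /* "decrypt" */
    printf("recovered:  %s\n", msg);
    return 0;
}
```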
Another example where open-source security is superior to that of closed source is the Interbase backdoor.
From The Register:
A back door password has been hidden in Borland/Inprise's popular Interbase database software for at least seven years, potentially exposing tens of thousands of private databases at corporations and government agencies to unauthorized access and manipulation over the Internet, experts say.
The password was discovered when Interbase was made open source.
Which doesn't mean that security of open-source software is perfect, or even good. Who needs to insert malware into original software when there are thousands of remotely exploitable security holes all over the place?
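As an aside on how that kind of back door typically looks in code, here is a hypothetical sketch (the credentials below are invented, not the real Interbase ones): a login routine with a hardcoded master account. In a shipped binary these are just two more strings among thousands; in published source code they jump off the page, which is essentially how the Interbase back door was spotted once the code was opened.

```c
#include <stdio.h>
#include <string.h>

/* Hypothetical sketch of a hardcoded back door; the credentials here are
 * made up and are not the actual Interbase ones. */
static int check_user_database(const char *user, const char *pass)
{
    /* stand-in for the legitimate credential check */
    return strcmp(user, "alice") == 0 && strcmp(pass, "correct horse") == 0;
}

static int authenticate(const char *user, const char *pass)
{
    /* hidden master account that bypasses the real check */
    if (strcmp(user, "maintenance") == 0 && strcmp(pass, "letmein") == 0)
        return 1;

    return check_user_database(user, pass);
}

int main(void)
{
    printf("normal user: %d\n", authenticate("alice", "correct horse"));
    printf("back door:   %d\n", authenticate("maintenance", "letmein"));
    return 0;
}
```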
Is it likely that more sinister features make their way into open-source software?
Officially, no: it's relatively unlikely that malware features will get in and go unnoticed for long. But:

* servers holding distribution sources can be (and have been) compromised, so that what you're downloading doesn't correspond to the open-source development work;
* in the case of binary distributions (usually for Windows), the installer for the software can be packaged with malware. Again, officially this happens quite rarely; one example is early versions of LimeWire, which installed a ‘shopping helper’ affiliate-fee-stealing BHO to “support the project”, and lost a lot of goodwill by doing so;
* there are also some scam artists who squat on search results for well-known open-source projects (again, most commonly file-sharing software) and deliver their own tweaked installers bundled with malware. Always find the project's official site before downloading.
I can't believe nobody's mentioned Ken Thompson's compiler virus yet.
Having access to the source code offers a reasonable level of assurance that the program won't behave maliciously. However, unless you've also inspected the compiler you build it with (and the compiler that compiled that compiler, and so on down the chain),
you could end up with malicious code in the compiled binary that never appears in the source. Admittedly it's a very unlikely and extremely difficult form of attack, but it's theoretically possible to introduce malicious code into an open source project in a way that cannot be detected in the source code (of either the project or the compiler).
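For the flavour of that attack, here is a massively simplified sketch (all names hypothetical, nothing like a real compiler): a "compiler" that recognises two special inputs and quietly alters its output for both, so the back door survives even when every line of source you can read is clean.

```c
#include <stdio.h>
#include <string.h>

/* Massively simplified sketch of Thompson's idea; not a real compiler.
 * The trick has two halves:
 *   1. when compiling the login program, insert a back door;
 *   2. when compiling the compiler itself, re-insert this very logic,
 *      so rebuilding from pristine compiler source changes nothing. */
static void compile(const char *source)
{
    if (strstr(source, "int login(") != NULL) {
        puts("emit login binary ... plus a hidden master password");
    } else if (strstr(source, "void compile(") != NULL) {
        puts("emit compiler binary ... plus this self-replicating check");
    } else {
        puts("emit an ordinary, honest binary");
    }
}

int main(void)
{
    compile("int login(const char *user, const char *pass) { ... }");
    compile("void compile(const char *source) { ... }");
    compile("int add(int a, int b) { return a + b; }");
    return 0;
}
```

The unsettling half is point 2: you can audit the project's source and the compiler's source line by line and still never see the back door, because it lives only in the compiler binary.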
If you're not working for the CIA (or the equivalent agency of another government), compiler security probably isn't something you have to worry about. But it is a very cool concept to think about.
Just a reminder:
both of them happened as the result of an attack, so the malicious code wasn't planted by the original developers. I think this kind of proves that it can happen, but that it isn't very likely, and that it will get picked up quite soon if the application is popular enough.