I am a Firefox user, and I recently installed the GMail notifier add-on. When you first install the add-on, it requests your GMail address and password, which it then uses to log in to your Gmail account (presumably via SSL).

It shows a number on your taskbar indicating how many unread emails are in your inbox, and also notifies you via a little modeless popup from the taskbar. This essentially turns email into a "push" medium, which is very useful: I don't have to periodically check my email, because I know the GMail notifier will tell me.

This is not meant as any disrespect to the developer of the GMail notifier, or to the authors of other similar applications. But how do I know that the application isn't harvesting email addresses and passwords, which it can then use for malicious purposes? In this day and age, an email address/password combination is analogous to the PIN for your ATM card. Most websites allow you to reset your password simply by supplying your email address; they then email you a new password. This essentially makes an email account password a "skeleton key" to your entire online world.

I don't want to see applications like GMail notifier go away. What I would like to see is a more objective and transparent way for users to know that an application is doing nothing wrong. Open source applications are obviously less of a concern, but even then I wouldn't want to have to look through the code to determine whether the program is doing the right thing. It can't be a subjective process, where you develop trust in a particular author, as this is prone to abuse.

There was a case recently where an offline utility was discovered to be harvesting email addresses and passwords and sending them to a separate email account. I can't remember the name of the application, but this is the kind of thing we need to be able to prevent.

Does anyone know of any precedents for this sort of thing? Could a certification process (or something similar) be introduced?

A: 

As to certification: this sort of review would be incredibly expensive. Essentially, the certifying organization would have to go through the source code and make sure there are no trojan horses or back doors. Extremely large organizations could afford it, but your average Firefox plugin developer wouldn't even consider it.

Micah
+13  A: 

One of the advantages of Open Source Software code is the ability to have security professionals inspect the code for any malicious or unintentionally dangerous code.

Merely making it open source doesn't prove it is safe, but it certainly adds a level of confidence.

For example, I am willing to use Password Safe because I am confident there are many eyeballs inspecting the code. I decline to use closed source password list software.

Solutions that involve inspecting code all have a flaw that was famously (and mind-blowingly) described by Ken Thompson in Reflections on Trusting Trust.

Oddthinking
"Reflections on Trusting Trust" is a classic, but the exploit only works if the source is provided by the same party that provides the compiler.
Dour High Arch
Open source software could actually open the door for some truly malicious software, precisely because of the perception that "open" means safe. It wouldn't be hard to provide fake source code for a program and then release an exe built from modified, malicious code.
greg7gkb
You would almost have to independently compile the application and checksum it against the available exe. I don't imagine many people doing this.
Simucal
Of course, a checksum mismatch wouldn't prove malicious intent, which makes this even harder to defeat. (Different compiler version/settings, different CPU architecture, etc. could all produce different binaries from the same source.)
Dave Sherohman
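
To make the checksum idea in the comments above concrete, here is a minimal sketch in Python (the file names in the usage line are hypothetical): hash the binary you compiled yourself and the one offered for download, then compare. As noted above, a mismatch on its own proves nothing, since compiler versions and build settings alone can change the bytes.

    import hashlib
    import sys

    def sha256_of(path, chunk_size=1 << 20):
        # Hash the file in chunks so large binaries need not fit in memory.
        digest = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(chunk_size), b""):
                digest.update(chunk)
        return digest.hexdigest()

    if __name__ == "__main__":
        # Usage: python compare_builds.py ./my_own_build.exe ./downloaded.exe
        local_digest = sha256_of(sys.argv[1])
        downloaded_digest = sha256_of(sys.argv[2])
        print("locally built:", local_digest)
        print("downloaded:   ", downloaded_digest)
        if local_digest == downloaded_digest:
            print("Checksums match.")
        else:
            print("Checksums differ (not necessarily malicious).")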
+3  A: 

You could look at the certification processes that semi-open mobile platforms like Symbian use. "Trusted" on Symbian doesn't mean quite the same thing as in your example (an app knowing the user's GMail login), but the techniques are relevant.

The summary is that in order to put a "trusted" app onto Symbian, you have to put your company on the line. The certification process itself is mostly about ensuring basic usability and quality: they don't actually do a code audit. So it doesn't stop your app being malicious, but it does mean everyone has a good idea who wrote it. If it later turns out to be malicious, then one or more multinational corporations (mobile operators and/or handset manufacturers) will sue the living bejeesus out of you. In practice this has most of the effect at a fraction of the cost (although Symbian certification still isn't cheap).

But once you see the rigmarole involved in getting a trusted app onto Symbian, you start to understand why there is so much resistance in the PC market to any kind of code signing and certification. The economic problem is that code signing doesn't really benefit the people who write the code. Currently, application writers are in charge of PCs, unlike mobile platforms which are dictated by the operators and/or manufacturers. There's no obvious way for any potential code-signing authority (including Microsoft, or GNU, or whoever) to demonstrate any net benefits of certification even if they wanted to: consider how much trouble MS have had trying to enforce signed drivers, and that's just for a very small proportion of all third-party code.

Remember in any case that security doesn't give absolute guarantees of the kind you want. Or sometimes it does, but usually at very high cost. What you actually get at a price you can afford (in this case: free), is that it's unlikely that the add-on is malicious, because such maliciousness would probably be spotted reasonably quickly, and hence would not give much benefit to the perpetrator. Crooks will therefore probably aim for softer targets.

If you want better than that, stick to more mature and popular apps, where it's even more likely that malicious behaviour will have been spotted before it gets to you.

Finally, note that you don't have to trust this add-on any more than you do any other Firefox add-on: if a malicious Firefox add-on wants to capture your GMail password as you type it into the login form, then it can. Not sure whether that makes you feel more safe, or less :-)

Steve Jessop
+1  A: 

It comes down to: do you trust the person who wrote that code? If not, you shouldn't give them your password. The application could do literally whatever it wants with that information.

If it's open source, you can, of course, look at the source, compile it yourself, and be assured that what you're using won't do something strange with your personal information. Otherwise, you'll have to trust some other party besides yourself... it's up to you.

I tend to assume that it'll be OK, but I am naive that way.

Claudiu
See the "underhanded C contest" for examples of code that it's not realistically possible to review yourself, even if they are open source.
Steve Jessop
You don't have to audit those yourself; you don't even have to verify the code personally. With open source, there are other people looking at the code; if one of them finds anything malicious, word will spread.
David Thornley
A: 

If the software is not open source, you really have no way of knowing what they are doing with the information you enter into their program. At that point it is really up to you, the end user, to decide whether or not you trust a given application with your credentials.

cowgod
Agreed. And see the "underhanded C contest" for reasons why even if it is open source, you still don't have much chance of successfully auditing it yourself. Trust is everything.
Steve Jessop
A: 

Software catalogs (such as download.com or Softpedia) test software and guarantee that it isn't malicious.

The same thing goes for browser add-ons (such as Firefox add-ons); Mozilla's website is an example of a safe source of add-ons.

Lawand
A: 

I think it just boils down to end-user perception, like some have mentioned before.

I guess the right answer would be to put yourself in that perspective and think about when you trust an application.

For me, it's when I know who is behind it, when there are positive reviews and a solid user base, and when the developers have been around for some time and intend both to stay and to keep developing the business.

kRON
A: 

OAuth can be part of the answer here: give the app a specific token rather than your real username and password:

  • you don't give the untrusted app your real username and password
  • the server can potentially keep an audit trail of actions by that app
  • you can revoke a single app's access without touching anything else
  • the app can be given fine-grained permissions: in this case, perhaps just "do I have new mail?" and no more. (For instance, Launchpad's OAuth interface lets you say whether the app should be able to read or write, and whether it can see private data; and that could be split up even more.)

This is not the whole story; you also need isolation around the app to make sure it can't capture your keyboard or poke around inside the browser. But it's a start.
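
To illustrate the flow, here is a minimal sketch in Python using the requests_oauthlib library; the mail.example.com endpoints, the client credentials, and the unread-count call are hypothetical placeholders. The point is that the notifier only ever handles a revocable, scope-limited token, and the password is typed directly into the provider's own login page.

    from requests_oauthlib import OAuth1Session

    CLIENT_KEY = "notifier-app-key"        # issued to the app, not the user
    CLIENT_SECRET = "notifier-app-secret"

    REQUEST_TOKEN_URL = "https://mail.example.com/oauth/request_token"
    AUTHORIZE_URL = "https://mail.example.com/oauth/authorize"
    ACCESS_TOKEN_URL = "https://mail.example.com/oauth/access_token"

    # Step 1: the app asks the server for a temporary request token.
    oauth = OAuth1Session(CLIENT_KEY, client_secret=CLIENT_SECRET)
    request_token = oauth.fetch_request_token(REQUEST_TOKEN_URL)

    # Step 2: the user (not the app) opens this URL, logs in directly with
    # the mail provider, grants only "check for new mail", and receives a
    # verifier code.
    print("Authorize the notifier at:", oauth.authorization_url(AUTHORIZE_URL))
    verifier = input("Paste the verifier code shown by the provider: ")

    # Step 3: the app exchanges the authorized request token for an access
    # token. This token is what it stores, and it can be revoked server-side
    # at any time without affecting the account password.
    oauth = OAuth1Session(
        CLIENT_KEY,
        client_secret=CLIENT_SECRET,
        resource_owner_key=request_token["oauth_token"],
        resource_owner_secret=request_token["oauth_token_secret"],
        verifier=verifier,
    )
    oauth.fetch_access_token(ACCESS_TOKEN_URL)

    # The app can now make only the calls the token allows, e.g. an unread count.
    print(oauth.get("https://mail.example.com/api/unread_count").text)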

poolie