views: 62
answers: 3

If you have a .NET application (or any other application, for that matter), how do you safely store internal encryption keys? I'm not talking about user-entered keys, but keys that are hard-coded in the program itself, used for talking between instances of the program. For example, in a peer-to-peer program you may want to encrypt the packets so you're sure you're talking to another instance of your program, and not someone else's. My solution would be to hard-code a key into the clients and simply encrypt/decrypt everything that way.

I'm wondering, however, if this is safe to do in .NET. I haven't worked extensively with Reflector or anything of the sort, but from what I hear, it's fairly easy to decompile .NET applications from the CIL. Would finding my magic number be trivial for someone armed with one of these tools?
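To make the question concrete, what I have in mind is roughly this: a hard-coded key used to encrypt and decrypt every packet (the key, IV, and names below are just placeholders).

```csharp
using System.Security.Cryptography;

static class PacketCrypto
{
    // Hard-coded key and IV -- exactly the bytes a decompiler would expose.
    private static readonly byte[] Key = {
        0x01, 0x02, 0x03, 0x04, 0x05, 0x06, 0x07, 0x08,
        0x09, 0x0A, 0x0B, 0x0C, 0x0D, 0x0E, 0x0F, 0x10
    };
    private static readonly byte[] IV = {
        0x10, 0x0F, 0x0E, 0x0D, 0x0C, 0x0B, 0x0A, 0x09,
        0x08, 0x07, 0x06, 0x05, 0x04, 0x03, 0x02, 0x01
    };

    public static byte[] EncryptPacket(byte[] plaintext)
    {
        using (var aes = Aes.Create())
        using (var encryptor = aes.CreateEncryptor(Key, IV))
        {
            return encryptor.TransformFinalBlock(plaintext, 0, plaintext.Length);
        }
    }

    public static byte[] DecryptPacket(byte[] ciphertext)
    {
        using (var aes = Aes.Create())
        using (var decryptor = aes.CreateDecryptor(Key, IV))
        {
            return decryptor.TransformFinalBlock(ciphertext, 0, ciphertext.Length);
        }
    }
}
```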

A: 

Instead of hard-coding a key, you could generate an asymmetric key pair (per session?). You send the public key to a peer; with it, the peer can encrypt a message that only you can decrypt.

Asymmetric encryption is slow, so the peer could also send you a symmetric key, encrypted with your public key, to be used for that session only.
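A rough sketch of that exchange with the standard .NET crypto classes (a minimal illustration only; the key size, padding, and structure here are assumptions, not a hardened protocol):

```csharp
using System;
using System.Security.Cryptography;

class KeyExchangeSketch
{
    static void Main()
    {
        // Your side: generate an asymmetric key pair (could be per session).
        using (var myRsa = new RSACryptoServiceProvider(2048))
        {
            // Only the public half goes over the wire.
            string publicKeyXml = myRsa.ToXmlString(false);

            // Peer's side: generate a random symmetric session key and
            // wrap it with your public key.
            byte[] encryptedSessionKey;
            using (var peerRsa = new RSACryptoServiceProvider())
            using (var sessionAes = Aes.Create())
            {
                peerRsa.FromXmlString(publicKeyXml);
                encryptedSessionKey = peerRsa.Encrypt(sessionAes.Key, true); // OAEP padding
            }

            // Your side again: only your private key can unwrap the session key.
            byte[] sessionKey = myRsa.Decrypt(encryptedSessionKey, true);

            // Use sessionKey with Aes to encrypt the rest of the session's traffic.
            Console.WriteLine(Convert.ToBase64String(sessionKey));
        }
    }
}
```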

If you really need to store secret data, you can use the ProtectedData class. You could call it during setup to store your secret (key).
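For example, something along these lines (DPAPI via the ProtectedData class; the secret and entropy bytes are placeholders, and you need a reference to System.Security):

```csharp
using System.Security.Cryptography;

// During setup: protect the secret so it is tied to the current user account (DPAPI).
byte[] secret = { 0, 1, 2, 3, 4, 1, 2, 3, 4 };
byte[] entropy = { 42, 7, 13 };   // optional extra entropy; illustrative values
byte[] protectedSecret = ProtectedData.Protect(secret, entropy, DataProtectionScope.CurrentUser);
// ... persist protectedSecret to disk or the registry ...

// At run time: recover the secret under the same user account.
byte[] recovered = ProtectedData.Unprotect(protectedSecret, entropy, DataProtectionScope.CurrentUser);
```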

GvS
But would `byte [] secret = { 0, 1, 2, 3, 4, 1, 2, 3, 4 };` be visible simply by opening up the .exe in Reflector?
Daniel Rasmussen
Yes, if this is in your code, it will be readable.
GvS
If you put it in another DLL and embed that DLL in your assembly, it won't open in Reflector directly ... but it could still be extracted and opened. Protecting in layers, and raising the bar of decompiling to a level where most people won't bother, is about all you can do with purely managed code.
Jason Haley
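For reference, a minimal sketch of that embed-and-load approach might look like the following (the resource naming is hypothetical); note that the loaded bytes are still ordinary managed code once someone extracts them:

```csharp
using System;
using System.IO;
using System.Reflection;

static class EmbeddedAssemblyLoader
{
    // Hook this up early (e.g. in Main) so the embedded DLL resolves at run time.
    public static void Install()
    {
        AppDomain.CurrentDomain.AssemblyResolve += (sender, args) =>
        {
            // Resource name is an assumption, e.g. "MyApp.Embedded.Secrets.dll".
            string name = new AssemblyName(args.Name).Name + ".dll";
            string resource = "MyApp.Embedded." + name;
            using (Stream s = Assembly.GetExecutingAssembly().GetManifestResourceStream(resource))
            {
                if (s == null) return null;
                var bytes = new byte[s.Length];
                s.Read(bytes, 0, bytes.Length);
                return Assembly.Load(bytes);   // still just managed bytes -- extractable
            }
        };
    }
}
```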
+1  A: 

If the key is stored in your program's executable code, or in a resource (or in the Windows TCB, using a key in the executable code), someone could, with enough effort, decode it.

Because, in the end, with the executable they have all the information needed to recover it, whatever protection you add (since the decoding code is in there as well).

To start with:

  • Don't create your own mechanisms; there are too many easy ways to compromise the security (even the experts often need multiple tries to get it right).
  • Think about the value of what you are trying to protect, and the resources available to those who might attack it. If there is little value to protect, you don't need so much protection.
  • Unless you are protecting somewhere like Aldermaston, you don't need *high security*.

The KISS principle will serve you well.

Richard
+2  A: 

There is absolutely no way to verify that the executable on the other end of the connection is the one you wrote. If you encrypt a key with a key, where do you store the second key? If you do a Diffie-Hellman exchange with the server to derive a secret key, where do you store that key? (Hint: in memory, where it can be dumped and then read.) It's a recursive problem that you can never solve.
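To make the memory point concrete, here is a minimal EC Diffie-Hellman exchange with the .NET classes; both derived keys end up as plain byte arrays in process memory, where a debugger or memory dump can read them:

```csharp
using System.Security.Cryptography;

// Each side generates an ephemeral EC Diffie-Hellman key pair.
using (var alice = new ECDiffieHellmanCng())
using (var bob = new ECDiffieHellmanCng())
{
    alice.KeyDerivationFunction = ECDiffieHellmanKeyDerivationFunction.Hash;
    alice.HashAlgorithm = CngAlgorithm.Sha256;
    bob.KeyDerivationFunction = ECDiffieHellmanKeyDerivationFunction.Hash;
    bob.HashAlgorithm = CngAlgorithm.Sha256;

    // Exchange public keys and derive the same shared secret on both sides.
    byte[] aliceShared = alice.DeriveKeyMaterial(bob.PublicKey);
    byte[] bobShared = bob.DeriveKeyMaterial(alice.PublicKey);
    // aliceShared and bobShared are identical -- and both are now sitting
    // in managed memory on their respective machines.
}
```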

I've read that AOL Instant Messenger's servers would periodically poll the AIM client for the hash of specific address ranges of its code (i.e. calculate SHA1(address1 -> address2)) and then disconnect the client if the hash was not correct. This worked because it is illegal to redistribute the executable (and infeasible to precompute a table of hashes for every possible address range), so it is merely a legal barrier and not a technical one.
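A sketch of that kind of check, hashing a byte range of the client's own executable, might look like this (the path, offsets, and method name are illustrative):

```csharp
using System.IO;
using System.Security.Cryptography;

static class CodeRangeCheck
{
    // Server sends (start, length); the client hashes that slice of its own
    // image and sends the digest back for comparison.
    public static byte[] HashCodeRange(string exePath, long start, int length)
    {
        var buffer = new byte[length];
        using (var fs = File.OpenRead(exePath))
        {
            fs.Seek(start, SeekOrigin.Begin);
            fs.Read(buffer, 0, length);
        }
        using (var sha1 = SHA1.Create())
        {
            return sha1.ComputeHash(buffer);
        }
    }
}
```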

If you want software that really tries hard to defeat reverse engineering (and actually does a good job), try attaching a debugger to Skype :-)

bowenl2