views: 853

answers: 3

Hi,

I have an encryption/copy protection question.

I'm writing an application for a company that uses a dongle. Please don't tell me that software protection is useless, or that I should just let it fly free into the air, or that any time I spend doing this is a waste; this isn't a philosophical question about the validity of software protection, more like a how-to.

As I understand it, the first step in cracking a dongle-protected piece of software is to remove all the calls to the dongle from the code (ie, patch the executable). Also as I understand it, I can create 'strong names' in .NET in order to protect the application and the assembly, as explained in this MSDN article.

Is strong naming enough to ensure that my application can't be easily patched? Or do I need to use some sort of encryption library? If I need to use a library, which one, or where can I get information about setting this up?

The next step, of course, is to put important algorithms on the dongle. I realize that these are just speed bumps to the dedicated cracker, but as our market share grows, the speed bump will help us get to the point where the sting of piracy is not so keenly felt (I hope).

Thanks!

+3  A: 

Hi

Signing your assembly will make it impossible to alter it without invalidating the signature, and hence its reference. The consequence of this is that a (strong-named) reference to the assembly will fail to resolve against the altered version. And that's guaranteed against astronomical odds.

That doesn't solve your problem, though. Not completely, anyway. If you pack your dongle calls, say, into a strongly named assembly, then reference that assembly from your application, the application will not work without your unaltered assembly, and hence not without the dongle. But the application itself can be altered!
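To make the point concrete, here is a language-agnostic sketch in Python (not .NET code; a plain hash stands in for the strong-name signature, and the assembly bytes are made up) of why a signed assembly resists tampering while the unsigned caller does not:

```python
import hashlib

def sign(assembly_bytes: bytes) -> str:
    # Stand-in for the strong-name signature: a digest of the assembly bytes.
    return hashlib.sha256(assembly_bytes).hexdigest()

def resolve_reference(assembly_bytes: bytes, expected_signature: str) -> bool:
    # A strong-named reference only resolves if the signature still matches.
    return sign(assembly_bytes) == expected_signature

dongle_assembly = b"...IL that talks to the dongle..."
signature = sign(dongle_assembly)

# Untouched assembly: the reference resolves.
assert resolve_reference(dongle_assembly, signature)

# Patched assembly: the reference fails to resolve.
patched = dongle_assembly.replace(b"dongle", b"nop   ")
assert not resolve_reference(patched, signature)

# But the *application* that holds `signature` is itself unsigned from the
# attacker's point of view: he can patch the application to skip the
# resolve_reference() check entirely, which is exactly the weakness above.
```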

Another means available to you is obfuscation. There's a free version of an obfuscator shipped with Visual Studio, which can be upgraded to industrial strength. Obfuscation renders code incomprehensible without altering its behaviour, and hence presents a real barrier to reverse engineering.
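As an illustration only (real obfuscators rewrite compiled IL, not source text, and use much more than renaming), the core trick is mapping meaningful identifiers to opaque ones while preserving behaviour; this toy Python pass over a made-up source string shows the idea:

```python
# Toy renaming pass: map meaningful identifiers to opaque one-letter names.
rename_map = {
    "CheckDongle": "a",
    "licence_key": "b",
}

source = "def CheckDongle(licence_key): return licence_key == SECRET"
obfuscated = source
for original, opaque in rename_map.items():
    obfuscated = obfuscated.replace(original, opaque)

print(obfuscated)  # def a(b): return b == SECRET
```

The program still does exactly what it did before; only the hints a reverse engineer would read in the names are gone.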

I'd say the solution lies in some clever combination of these two techniques.

And that's the extent of my knowledge, I'm afraid. Someone else will have to provide the actual answer here (and it's probably embarrassingly shorter than mine ;-)

Tor Haugen
that's a big help to my understanding, but I still need to know the nitty-gritty details. Thanks!
mmr
+2  A: 

If they are patching your executable, a strong name does not help. It will, however, help you ensure that a DLL you reference is the correct version and has not been tampered with.
You might check Salamander or PreEmptive for obfuscation.
For encryption you might look at Assembly Lockbox, CodeVeil, or ThinApp.

Aaron Fischer
Why ThinApp? It looks like it's something that will make the application run on another OS; won't that cause significant slowdowns?
mmr
Because ThinApp is the line of virtualization tech I could remember off the top of my head. Speed-wise I can't say; I've only played with small apps in it. But it will provide the tamper-proofing you are looking for.
Aaron Fischer
+5  A: 

Assembly strong naming was never designed to protect against an attacker who is in control of the machine. From the MSDN entry on delay signing:

The following example turns off verification for an assembly called myAssembly.dll.

sn -Vr myAssembly.dll

The design goal of strong names is to provide name uniqueness and to protect the user (not the publisher) against an attacker. If the user wants to disable all strong name checks, or even strip out your signature and re-sign the assembly with his own key, then there is, technically speaking, nothing to prevent him from doing so.

Simply loading your assemblies from an encrypted file is also not very useful because the decryption code itself cannot be encrypted and is therefore an easy target for reverse engineering.

As mentioned by other posters, what you are looking for is obfuscation. You probably already have such a tool: Visual Studio (at least 2005 and 2008) comes with the community edition of PreEmptive Solutions’ Dotfuscator. Microsoft also has its own "Software Licensing and Protection Services" product.

Obfuscation has some technical disadvantages however:

  • it may complicate your build process. You need an unobfuscated and an obfuscated build, because the latter is not debuggable.
  • I like to have an error dialog for unexpected exceptions where the user can click "copy details" and send me a mail with some technical information including the stack trace. With obfuscation however, you can forget about getting anything useful from Exception.StackTrace.
  • if your code makes use of reflection then there is a good chance that things will break in the obfuscated build, because internal type and member names are not preserved.
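A minimal Python sketch of the reflection pitfall: code that looks members up by their original name as a string breaks once an obfuscator has renamed them, because the string in the calling code is not rewritten. The class and member names are invented for the example:

```python
class LicenceChecker:
    def check_dongle(self):
        return True

class LicenceCheckerObfuscated:
    # After obfuscation, the meaningful internal member name is gone.
    def a(self):
        return True

# Reflection by original name works against the clean build...
assert getattr(LicenceChecker(), "check_dongle")()

# ...but fails against the obfuscated one, because the string
# "check_dongle" no longer matches any member.
try:
    getattr(LicenceCheckerObfuscated(), "check_dongle")()
    reflection_broke = False
except AttributeError:
    reflection_broke = True
assert reflection_broke
```

This is why obfuscators let you exclude specific types and members from renaming, and why public/protected members are typically left alone by default.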
Wim Coenen
So basically, I could make the release build obfuscated and the debug build clean, and still be able to develop? Can obfuscation cause significant changes to the code (i.e., behaviour changes)?
mmr
Obfuscation techniques are designed to not change the code behaviour. But as I said, it becomes hard to guarantee this if the code is inspecting itself through reflection. Fortunately reflection should still work if you only use it on public/protected (i.e. not internal) types and members.
Wim Coenen
I'm pretty sure I'm not using reflection, unless it's being done without my knowledge. So this looks like the right first step.
mmr