My question is pretty straightforward: you are an executable file that outputs "Access granted" or "Access denied", and evil people try to understand your algorithm or patch your innards in order to make you say "Access granted" all the time.

After this introduction, you might be wondering what I am up to. Is he going to crack Diablo 3 once it is out? I can put your worries to rest: I am not one of those crackers. My goal is crackmes.

Crackmes can be found on - for example - www.crackmes.de. A crackme is a little executable that (most of the time) contains a small algorithm to verify a serial and outputs "Access granted" or "Access denied" depending on that serial. The goal is to make the executable output "Access granted" for every input. The methods you are allowed to use may be restricted by the author - no patching, no disassembling - or may allow anything you can do with a binary, objdump and a hex editor. Cracking crackmes is definitely one part of the fun; however, as a programmer, I am wondering how you can create crackmes that are genuinely difficult.

Basically, I think a crackme consists of two major parts: the serial verification itself and the surrounding code.

Making the serial verification hard to follow in plain assembly is quite possible; for example, I have the idea of feeding the serial into a simulated microprocessor that must end up in a certain state for the serial to be accepted. Alternatively, one could take the cheap route and learn about cryptographically strong ways to secure this part. So making this part hard enough that the attacker resorts to patching the executable should not be that hard.
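
To make that idea concrete, here is a minimal, purely illustrative sketch in C of such a simulated microprocessor: a tiny bytecode VM consumes the serial, and the serial is accepted only if the machine halts in a specific register state. The opcode set, the bytecode program and the target value are all invented for this example (with this toy program the serial "AB" happens to be accepted); a real crackme would use something far less transparent.

    #include <stdio.h>
    #include <string.h>

    /* Toy VM: four registers, a handful of opcodes, and a short bytecode
     * program that mixes the serial characters into register 0.
     * Everything here (opcodes, program, magic value) is made up. */
    enum { OP_LOADC, OP_ADDCHR, OP_XOR, OP_ROT, OP_HALT };

    static int run_vm(const unsigned char *prog, const char *serial)
    {
        unsigned int reg[4] = { 0 };        /* assumes 32-bit unsigned int */
        size_t pc = 0, si = 0, slen = strlen(serial);

        for (;;) {
            switch (prog[pc]) {
            case OP_LOADC:                  /* LOADC r, imm */
                reg[prog[pc + 1]] = prog[pc + 2];
                pc += 3; break;
            case OP_ADDCHR:                 /* ADDCHR r: add next serial byte */
                reg[prog[pc + 1]] += (si < slen) ? (unsigned char)serial[si++] : 0;
                pc += 2; break;
            case OP_XOR:                    /* XOR ra, rb */
                reg[prog[pc + 1]] ^= reg[prog[pc + 2]];
                pc += 3; break;
            case OP_ROT:                    /* ROT r: rotate left by 3 */
                reg[prog[pc + 1]] = (reg[prog[pc + 1]] << 3)
                                  | (reg[prog[pc + 1]] >> 29);
                pc += 2; break;
            case OP_HALT:
                /* the "certain state": register 0 must hold this value;
                 * for this toy program the serial "AB" reaches it */
                return reg[0] == 0x14FAu;
            }
        }
    }

    int main(void)
    {
        static const unsigned char prog[] = {
            OP_LOADC, 1, 0x5A,
            OP_ADDCHR, 0, OP_ROT, 0, OP_XOR, 0, 1,
            OP_ADDCHR, 0, OP_ROT, 0, OP_XOR, 0, 1,
            OP_HALT
        };
        char serial[64];
        if (scanf("%63s", serial) != 1)
            return 1;
        puts(run_vm(prog, serial) ? "Access granted" : "Access denied");
        return 0;
    }

The point is only that the accepted serials are defined implicitly by the bytecode and the target state rather than by an obvious string comparison.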

However, the more difficult part is securing the binary itself. Let us assume a perfectly secure serial verification that cannot be reversed (of course I know it can be reversed; if in doubt, you rip the verification routine out of the binary and throw random serials at it until one is accepted). How can we prevent an attacker from simply overwriting jumps in the binary to make it accept anything?

I have been searching on this topic a bit, but most results on binary security, self-verifying binaries and the like end up in articles about preventing attacks on an operating system via compromised binaries, by signing certain binaries and validating those signatures in the kernel.

My thoughts currently consist of:

  • checking that specific locations in the binary still contain the expected jump instructions,
  • checksumming parts of the binary and comparing checksums computed at runtime against the stored values (see the sketch after this list),
  • having both positive and negative runtime checks for your functions in the code, with side effects on the serial verification. :)
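
As a rough sketch of the checksumming idea (second bullet above), here is what that might look like in C, under invented assumptions: the program checksums a byte range of its own code at runtime and folds the result into the serial decision instead of branching on it directly. CHECK_LEN and EXPECTED_CHECKSUM are placeholders you would fill in after building the real binary (the program prints the computed value so you can record it once), and casting a function pointer to a data pointer is not strictly portable C, although it works on the flat-memory platforms crackmes usually target.

    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    static int check_serial(const char *serial)
    {
        /* trivial stand-in for the real verification */
        return strcmp(serial, "SECRET-1234") == 0;
    }

    static uint32_t checksum_region(const uint8_t *start, size_t len)
    {
        uint32_t sum = 0;
        for (size_t i = 0; i < len; i++)
            sum = (sum << 1 | sum >> 31) ^ start[i];   /* simple rotating XOR */
        return sum;
    }

    int main(void)
    {
        const size_t   CHECK_LEN         = 64;          /* placeholder length */
        const uint32_t EXPECTED_CHECKSUM = 0xDEADBEEF;  /* placeholder value  */

        /* checksum the first CHECK_LEN bytes of check_serial's machine code */
        uint32_t sum = checksum_region((const uint8_t *)(void *)check_serial,
                                       CHECK_LEN);
        printf("code checksum: 0x%08X\n", (unsigned)sum); /* record, then bake in */

        char serial[64];
        if (scanf("%63s", serial) != 1)
            return 1;

        /* Fold the checksum into the decision so a patched check_serial()
         * breaks the comparison indirectly rather than at one obvious jump. */
        if (check_serial(serial) && sum == EXPECTED_CHECKSUM)
            puts("Access granted");
        else
            puts("Access denied");
        return 0;
    }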

Can you think of more ways to annoy a possible attacker for longer? (Of course you cannot keep him away forever; at some point all checks will be broken, unless you manage to beat the checksum approach itself by embedding a program's correct checksum inside that very program, hehe.)

+2  A: 

I believe these things are generally more trouble than they're worth.

You spend a lot of effort writing code to protect your binary. The bad guys spend less effort cracking it (they're generally more experienced than you) and then release the crack so everyone can bypass your protection. The only people you'll annoy are those honest ones who are inconvenienced by your protection.

Just view piracy as a cost of business - the incremental cost of pirated software is zero if you ensure all support is done only for paying customers.

paxdiablo
Yes, for commercial software I totally agree: including layers and layers of self-validation will just make things too complicated, unmaintainable and annoying. But this is not commercial software - it is a game, and the goal is to spend lots of effort on one side.
Tetha
+1  A: 

There's TPM technology: TPM on Wikipedia

It allows you to store the cryptographic checksums of a binary on a special chip, which could act as one-way verification.

Note: TPM has a bit of a bad rap because it can be used for DRM, but to experts in the field that is rather unfair, and there is even an open-TPM group that lets Linux users control exactly how their TPM chip is used.

Purfideas
+5  A: 

You're getting into "anti-reversing techniques", and it is basically an art. Worse, even if you stump newbies, there are "anti-anti-reversing" plugins for OllyDbg and IDA Pro that they can download to bypass many of your countermeasures.

Countermeasures include detecting debuggers by trapping the debugger APIs or by detecting single-stepping. You can insert code that, after detecting a debugger break-in, keeps functioning but starts acting up at random times much later in the program. It's really a cat-and-mouse game, and the crackers have a significant upper hand.
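
As one concrete, Linux-flavoured illustration of such a countermeasure (not from the answer above), here is the classic ptrace self-trace trick: a process can have only one tracer, so if PTRACE_TRACEME fails, something like gdb or strace is probably already attached. It is easily defeated by anti-anti-debugging tools and has no Windows equivalent as written; it only shows the shape of the idea.

    #include <stdio.h>
    #include <sys/ptrace.h>   /* Linux-specific */

    static int debugger_present(void)
    {
        /* If we are already being traced, this call fails. */
        return ptrace(PTRACE_TRACEME, 0, NULL, NULL) == -1;
    }

    int main(void)
    {
        if (debugger_present()) {
            /* A real crackme would not react here, but quietly flip a flag
             * that corrupts the serial check much later. */
            puts("Access denied");
            return 1;
        }
        puts("No debugger detected, continuing...");
        return 0;
    }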

Check out http://www.openrce.org/reference_library/anti_reversing for some of what's out there.

http://www.amazon.com/Reversing-Secrets-Engineering-Eldad-Eilam/dp/0764574817/ - This book has really good anti-reversing info and steps through the techniques. A great place to start if you're getting into reversing in general.

kervin
+1  A: 

One of the strongest solutions to this problem is Trusted Computing. Basically, you would encrypt the application and transmit the decryption key to a special chip (the Trusted Platform Module). The chip would only decrypt the application once it has verified that the computer is in a "trusted" state: no memory viewers/editors, no debuggers, etc. Basically, you would need special hardware just to be able to view the decrypted program code.

Rasmus Faber
A: 

So, you want to write a program that accepts a key at the beginning and stores it in memory, subsequently retrieving it from disk. If it's the correct key, the software works; if it's the wrong key, the software crashes. The goal is to make it hard for pirates to generate a working key and hard to patch the program to work with an unlicensed key.

This can actually be achieved without special hardware. Consider our genetic code: it works based on the physics of this universe. We try to hack it, create drugs, etc., and we fail miserably, usually creating tons of undesirable side effects, because we haven't yet fully reverse-engineered the complex "world" in which the genetic "code" evolved to operate. Basically, if you're running everything on a common processor (a common "world") which everyone has access to, then it's virtually impossible to write such secure code, as demonstrated by how easily current software is cracked.

To achieve security in software, you essentially would have to write your own sufficiently complex platform, which others would have to completely and thoroughly reverse engineer in order to modify the behavior of your code without unpredictable side effects. Once your platform is reverse engineered, however, you'd be back to square one.

The catch is, your platform is probably going to run on common hardware, which makes your platform easier to reverse engineer, which in turn makes your code a bit easier to reverse engineer. Of course, that may just mean the bar is raised a bit for the level of complexity required of your platform to be sufficiently difficult to reverse engineer.

What would a sufficiently complex software platform look like? For example, perhaps after every 6 addition operations, the 7th addition returns the result multiplied by pi divided by the square root of the log of the difference, modulo 5, of the total number of subtract and multiply operations performed since system initialization. The platform would have to keep track of those numbers independently, as would the code itself, in order to decode correct results. So, your code would be written based on knowledge of the complex underlying behavior of a platform you engineered. Yes, it would eat processor cycles, but someone would have to reverse engineer that little surprise behavior and re-engineer it into any new code to have it behave properly.

Furthermore, your own code would be difficult to change once written, because it would collapse into irreducible complexity, with each line depending on everything that happened before it. Of course, there would be much more complexity in a sufficiently secure platform, but the point is that someone would have to reverse engineer your platform before they could reverse engineer and modify your code without debilitating side effects.
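
Just to illustrate what such hidden platform state could look like in code, here is a toy sketch in C of a "stateful add" along the lines described above. The formula is simply the one from the paragraph taken at face value, with a small guard so the math functions stay defined; none of this is meant as a real design. (Compile with -lm.)

    #include <math.h>
    #include <stdio.h>

    /* Hidden platform state: counters that both the platform and any code
     * written for it must track to predict results. */
    static long adds, subs, muls;

    static double platform_add(double a, double b)
    {
        adds++;
        double result = a + b;
        if (adds % 7 == 0) {                   /* every 7th add misbehaves */
            double d = (double)(subs - muls);
            double m = fmod(fabs(d), 5.0);
            /* guard: log() needs m > 1 for a positive, finite sqrt(log(m)) */
            double pi = 3.14159265358979323846;
            double scale = (m > 1.0) ? pi / sqrt(log(m)) : pi;
            result *= scale;
        }
        return result;
    }

    static double platform_sub(double a, double b) { subs++; return a - b; }
    static double platform_mul(double a, double b) { muls++; return a * b; }

    int main(void)
    {
        double x = 0.0;
        x = platform_sub(x, 0.0);       /* bump the hidden counters a little */
        x = platform_sub(x, 0.0);
        x = platform_mul(x, 1.0);
        for (int i = 0; i < 8; i++)
            x = platform_add(x, 1.0);   /* the 7th add is silently scaled */
        printf("%f\n", x);              /* not 8.0, unless you know the rule */
        return 0;
    }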

Triynko
A: 

Great article on copy protection and on protecting the protection itself: Keeping the Pirates at Bay: Implementing Crack Protection for Spyro: Year of the Dragon

The most interesting idea mentioned in there that hasn't been mentioned here yet is cascading failures: you have checksum checks that, when they fail, modify a single byte, which later causes another checksum to fail. Eventually one of the checksums causes the system to crash or do something strange. This makes a pirated copy of your program seem unstable and places the apparent cause a long way from the crash.
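
A toy sketch of the cascading-failure idea in C, with invented names and data: a failed checksum does not react immediately, but silently poisons a value that a later, unrelated-looking check depends on, so the visible failure shows up far from the patched code.

    #include <stdint.h>
    #include <stdio.h>

    /* Data a later check depends on. */
    static uint8_t critical_table[4] = { 10, 20, 30, 40 };

    /* Stand-in for the code region a cracker might patch. */
    static volatile uint8_t protected_code[8] = { 1, 2, 3, 4, 5, 6, 7, 8 };

    static uint8_t sum(const volatile uint8_t *p, size_t n)
    {
        uint8_t s = 0;
        while (n--) s += *p++;
        return s;
    }

    static void check_a(void)
    {
        /* Expected value baked in at build time (36 for the bytes above).
         * On mismatch: no message, no exit - just poison later data. */
        if (sum(protected_code, 8) != 36)
            critical_table[2] ^= 0x5A;
    }

    static void check_b(void)
    {
        /* Trips only because check_a poisoned the table, so the failure point
         * tells the cracker nothing about where the real check lives. */
        if (critical_table[2] != 30)
            fprintf(stderr, "internal error\n");   /* or crash "randomly" */
    }

    int main(void)
    {
        check_a();
        /* ... lots of unrelated program logic would run here ... */
        check_b();
        puts("still running");
        return 0;
    }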

Tom Leys