Sounds like a programming question to me.
If you have a clear idea of what the stolen and original components of these pages are, and those differences are general enough that you can write a filter to separate them, then do that, hash the 'stolen' content, and compare hashes to decide whether two pages match.
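As a minimal sketch of that idea in Python — where `extract_stolen_content()` is a hypothetical stand-in for whatever filter you end up writing:

```python
import hashlib

def content_fingerprint(text: str) -> str:
    """Hash the extracted content so pages can be compared by fingerprint."""
    return hashlib.sha256(text.encode("utf-8")).hexdigest()

# Two pages match if their fingerprints are equal:
# content_fingerprint(extract_stolen_content(page_a)) == \
#     content_fingerprint(extract_stolen_content(page_b))
```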
I guess web-page thieves might go to some further code obfuscation to mess you up, including changing whitespace, so you might want to normalise the HTML before hashing — for instance removing any redundant whitespace, making all attributes use double quotes, etc.
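Here's a rough sketch of that normalisation using Python's standard-library `html.parser`; the `Normalizer` class and the exact canonical form it produces are just one way to do it:

```python
import hashlib
import re
from html.parser import HTMLParser

class Normalizer(HTMLParser):
    """Re-serialises HTML in a canonical form: lower-cased tags,
    sorted double-quoted attributes, and collapsed whitespace in text."""

    def __init__(self):
        super().__init__()
        self.parts = []

    def handle_starttag(self, tag, attrs):
        # HTMLParser already lower-cases tag and attribute names;
        # sort attributes and force double quotes so quoting style
        # and attribute order can't change the hash.
        attr_str = "".join(
            f' {name}="{value or ""}"'
            for name, value in sorted(attrs, key=lambda a: a[0])
        )
        self.parts.append(f"<{tag}{attr_str}>")

    def handle_endtag(self, tag):
        self.parts.append(f"</{tag}>")

    def handle_data(self, data):
        # Collapse runs of whitespace so reformatting doesn't change the hash.
        collapsed = re.sub(r"\s+", " ", data).strip()
        if collapsed:
            self.parts.append(collapsed)

def normalized_hash(html: str) -> str:
    parser = Normalizer()
    parser.feed(html)
    return hashlib.sha256("".join(parser.parts).encode("utf-8")).hexdigest()
```

With this, `normalized_hash('<P class=x>hi   there</P>')` and `normalized_hash('<p class="x">hi there</p>')` come out the same.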