Ok, you see why it's taking long, right?
You have 1MB strings, and for each token, replace is iterating through the 1MB and making a new 1MB copy. Well, not an exact copy, as any token found is replaced with the new token value. But for each token you're reading 1MB, newing up 1MB of storage, and writing 1MB.
Now, can we think of a better way of doing this? How about instead of iterating the 1MB string for each token, we instead walk it once.
Before walking it, we'll create an empty output string.
As we walk the source string, if we find a token, we'll jump token.length() characters forward, and write out the obfuscated token. Otherwise we'll proceed to the next character.
Essentially, we're turning the process inside out, doing the for loop on the long string, and at each point looking for a token. To make this fast, we'll want quick look-up for the tokens, so we put them into some sort of associative array (a set).
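For concreteness, here's a rough sketch of that single-pass walk in Java. It assumes the tokens and their obfuscated values are already in a Map (the name `replacements` is just for illustration); note it still tries every token at each position, which is exactly the objection raised below, and which the regex refinement at the end of this answer addresses.

    import java.util.Map;

    // Sketch: walk the source once, writing into a single output buffer.
    static String obfuscateOnePass(String source, Map<String, String> replacements) {
        StringBuilder out = new StringBuilder(source.length()); // one pre-sized output buffer
        int i = 0;
        while (i < source.length()) {
            String matched = null;
            for (String token : replacements.keySet()) {   // naive: try each token at this position
                if (source.startsWith(token, i)) {
                    matched = token;
                    break;
                }
            }
            if (matched != null) {
                out.append(replacements.get(matched));     // write out the obfuscated token
                i += matched.length();                     // jump token.length() characters forward
            } else {
                out.append(source.charAt(i));              // otherwise copy one character
                i++;
            }
        }
        return out.toString();
    }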
I see why it is taking long, all right, but I'm not sure of the fix. For each 1MB string on which I'm performing replacements, I have 1 to 2 thousand tokens I want to replace. So walking character by character looking for any of a thousand tokens doesn't seem faster.
In general, what takes longest in programming? New'ing up memory.
Now when we create a StringBuffer, what likely happens is that some amount of space is allocated (say, 64 bytes), and whenever we append beyond its current capacity, it probably doubles its space and then copies the old character buffer to the new one. (It's possible it can use C's realloc and not have to copy.)
So if we start with 64 bytes, to get up to 1MB, we allocate and copy:
64, then 128, then 256, then 512, then 1024, then 2048 ... we do this fourteen times to get up to 1MB. And in getting there, we've allocated roughly another 1MB of intermediate buffers just to throw them away.
Pre-allocating, by using something analogous to C++'s reserve() function, will at least let us do that allocation all at once. But it's still all at once for each token. You're still producing a 1MB temporary string for each token. If you have 2000 tokens, you're allocating about 2 billion bytes of memory, all to end up with 1MB. Each 1MB throwaway contains the transformation of the previous resulting string, with the current token applied.
And that's why this is taking so long.
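(For reference, the usual Java analogue of that reserve() call is the capacity argument to the StringBuilder/StringBuffer constructor, which sizes the internal buffer up front; the 1MB figure below is just illustrative.)

    // Pre-size the buffer so appends up to ~1MB never trigger a grow-and-copy.
    StringBuilder out = new StringBuilder(1_048_576);

    // versus the default, which starts small and repeatedly reallocates and copies as it grows:
    StringBuilder slow = new StringBuilder();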
Now yes, deciding which token to apply (if any), at each character, also takes time. You may wish to use a regex, which internally builds a state machine to run through all possibilities, rather than a set lookup, as I suggested initially. But what's really killing you is the time to allocate all that memory, for 2000 copies of a 1MB string.
Dan Gibson suggests:
Sort your tokens so you don't have to look for a thousand tokens each character. The sort would take some time, but it would probably end up being faster since you don't have to search thousands of tokens each character.
That was my reasoning behind putting them into an associative array (e.g., a Java HashSet). But there is another problem: matching. If one token is "a" and another is "an" -- that is, if tokens share common prefixes -- how do we decide which one matches?
This is where Keltex's answer comes in handy: he delegates the matching to a regex, which is a great idea, as a regex already defines the matching behavior (greedy match) and implements it. Once the match is made, we can examine what was captured, then use a Java Map (also an associative array) to look up the obfuscated token for the matched, unobfuscated one.
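Here's a sketch of that combination, assuming Java's Pattern/Matcher and a Map from each plain token to its obfuscated value (the names and sample tokens are mine, not from Keltex's code). Sorting longer tokens first makes the alternation prefer "userNameHash" over "userName", since Java tries alternatives left to right:

    import java.util.*;
    import java.util.regex.*;
    import java.util.stream.Collectors;

    class Obfuscator {
        static String obfuscate(String source, Map<String, String> replacements) {
            // Build one alternation of all (quoted) tokens, longest first, so that
            // tokens sharing a prefix match the longer alternative.
            String alternation = replacements.keySet().stream()
                    .sorted(Comparator.comparingInt(String::length).reversed())
                    .map(Pattern::quote)
                    .collect(Collectors.joining("|"));
            Matcher m = Pattern.compile(alternation).matcher(source);

            StringBuffer out = new StringBuffer(source.length());    // one pre-sized output buffer
            while (m.find()) {
                String obfuscated = replacements.get(m.group());      // look up the matched token
                m.appendReplacement(out, Matcher.quoteReplacement(obfuscated));
            }
            m.appendTail(out);                                        // copy whatever follows the last match
            return out.toString();
        }

        public static void main(String[] args) {
            Map<String, String> map = new HashMap<>();
            map.put("userName", "v1");
            map.put("userNameHash", "v2");
            System.out.println(obfuscate("userNameHash = hash(userName)", map)); // v2 = hash(v1)
        }
    }

One pass over the source, one output buffer, and a Map lookup per match instead of a 1MB copy per token. If you only want whole-word matches, you can wrap the alternation in \b(...)\b.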
I wanted to concentrate my answer not just on how to fix this, but on why there was a problem in the first place.