I've inherited a piece of code that makes intensive use of String -> byte[] conversions (and vice versa) for some homegrown serialisation code. Essentially, the Java objects know how to convert their constituent parts into Strings, which then get converted into a byte[]. That byte array is passed through JNI into C++ code, which reconstitutes the byte[] into C++ std::strings and uses those to bootstrap C++ objects mirroring the Java objects. There is a little more to it, but this is a high-level view of how the code works; the communication works the same way in both directions, so the C++ -> Java transition is a mirror image of the Java -> C++ transition described above.
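To make the shape of the transmit side concrete, it looks roughly like this (the names here are invented for illustration, not our real API):

ByteArrayOutputStream stream = new ByteArrayOutputStream();
for (String part : object.stringifiedParts()) // hypothetical accessor for the stringified fields
    convertToByteArray(part, stream); // shown below
byte[] payload = stream.toByteArray();
nativeReceive(payload); // native method; the C++ side rebuilds std::strings from this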
One part of this code - the actual conversion of a String into a byte[] - is showing up in the profiler as burning a lot of CPU. Granted, there is a lot of data being transferred, but this is an unexpected bottleneck.
The basic outline of the code is as follows:
public void convertToByteArray(String convert_me, ByteArrayOutputStream stream)
    throws IOException // write(byte[]) declares IOException, so it must be propagated or caught
{
    // getBytes() encodes the String using the platform's default charset
    stream.write(convert_me.getBytes());
}
There is a little more to the function, but not much. It gets called once for every String/stringified object, and after all of the constituents have been written to it, the ByteArrayOutputStream gets converted into a byte[]. Breaking the function into a more profiler-friendly version by extracting the convert_me.getBytes() call into a local variable shows that over 90% of the time in this function is spent in the getBytes() call.
Is there a way to improve upon the performance of the getBytes() call or is there another, potentially faster way to achieve the same conversion?
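For example, if I could guarantee that the data is pure ASCII, would hand-rolling the encoding be sensible? A rough, untested sketch of what I mean (the ASCII-only assumption would of course need verifying against our data):

public void convertToByteArray(String convert_me, ByteArrayOutputStream stream)
{
    int len = convert_me.length();
    byte[] buf = new byte[len];
    for (int i = 0; i < len; i++)
        buf[i] = (byte) convert_me.charAt(i); // correct only for chars <= 0x7F
    stream.write(buf, 0, len); // this overload does not declare IOException
}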
The number of objects being converted is quite large. On profiling runs, which use only a small subset of the production data, I'm seeing upwards of 10 million calls to the above conversion function.
Because we're very close to releasing the project into production, there are a few workarounds that aren't possible at this point in time:
- Rewriting the serialisation interface to just pass String objects across the JNI layer. This is the obvious (to me) way of improving the situation, but it would require major reengineering of the serialisation layer. Given that we're going into UAT early this week, it's far too late to make this sort of complex change; it is my top to-do for the next release, so it will be done. Until then I need a workaround, and the current code at least works: it has been in use for years and has most of the kinks ironed out. Well, aside from the performance.
- Changing the JVM (currently 1.5) is also not an option. Unfortunately, this is the default JVM installed on the client's machines, and updating to 1.6 (which might or might not be faster in this case) is not possible. Anybody who has worked in large organisations probably understands why...
- In addition to this, we're already running into memory constraints, so caching at least the larger Strings and their byte array representations, while potentially an elegant solution, is likely to cause more problems than it solves.
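One change that does look feasible within these constraints is something local to the conversion itself: wrapping the ByteArrayOutputStream in a single OutputStreamWriter, which would reuse one encoder and avoid allocating a fresh byte[] for every String. A sketch of what I have in mind (assuming ISO-8859-1 is a safe encoding for our data, which I'd still have to confirm):

// created once per payload; the constructor throws UnsupportedEncodingException for unknown names
Writer writer = new OutputStreamWriter(stream, "ISO-8859-1");
// ... one writer.write(part) call per constituent String ...
writer.flush(); // flush the Writer's internal buffer before calling stream.toByteArray()

Whether this would actually beat getBytes() on a 1.5 JVM is exactly the sort of thing I'm hoping someone can confirm or refute.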