I've inherited a piece of code that makes intensive use of String -> byte[] conversions and vice versa for some homegrown serialisation code. Essentially the Java objects know how to convert their constituent parts into Strings, which then get converted into a byte[]. Said byte array is then passed through JNI into C++ code that reconstitutes the byte[] into C++ std::strings and uses those to bootstrap C++ objects which mirror the Java objects. There is a little more to it, but this is a high-level view of how this piece of code works. The communication works like this in both directions, so the C++ -> Java transition is a mirror image of the Java -> C++ transition described above.

One part of this code - the actual conversion of a String into a byte[] - is unexpectedly showing up in the profiler as burning a lot of CPU. Granted, there is a lot of data that is being transferred but this is an unexpected bottleneck.

The basic outline of the code is as follows:

public void convertToByteArray(String convert_me, ByteArrayOutputStream stream)
{
  byte[] bytes = convert_me.getBytes(); // encodes with the platform default charset
  stream.write(bytes, 0, bytes.length); // this overload avoids the checked IOException of write(byte[])
}

There is a little more to the function, but not much. The above function gets called once for every String/Stringified object, and after all of the constituents are written to the ByteArrayOutputStream, the ByteArrayOutputStream gets converted into a byte[]. Breaking the above down into a more profiler-friendly version by extracting the convert_me.getBytes() call shows that over 90% of the time in this function is spent in getBytes().

Is there a way to improve upon the performance of the getBytes() call or is there another, potentially faster way to achieve the same conversion?

The number of objects being converted is quite large. On profiling runs using only a small subset of the production data, I'm seeing upwards of 10 million calls to the above conversion function.

Because we're very close to releasing the project into production, a few workarounds aren't possible at this point in time:

  • Rewrite the serialisation interface to just pass String objects across the JNI layer. This is the obvious (to me) way of improving the situation, but it would require major reengineering of the serialisation layer, and given that we're going into UAT early this week, it's far too late to make this sort of complex change. It is my top todo for the next release, so it will be done; until then I need a workaround. The current code works, has been used for years and has most of the kinks worked out - aside from the performance.
  • Changing the JVM (currently 1.5) is also not an option. Unfortunately this is the default JVM installed on the client's machines, and updating to 1.6 (which might or might not be faster in this case) is not possible. Anybody who has worked in large organisations probably understands why...
  • In addition to this, we're already running into memory constraints, so attempting to cache at least the larger Strings and their byte array representations, while potentially elegant, is likely to cause more problems than it solves.
+2  A: 

I'm guessing part of the problem may be that a Java String is in UTF-16 format - i.e. two bytes per character - so getBytes() is doing a bunch of work to convert each UTF-16 element into one or two bytes, depending on your current character set.

Have you tried using CharsetEncoder? This should give you more control over the String encoding and allow you to skip some of the overhead of the default getBytes implementation.

Alternatively, have you tried explicitly specifying the charset to getBytes, using US-ASCII as the character set?
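
A minimal sketch of what that could look like (the reusable encoder field, class name and US-ASCII choice are assumptions for illustration, not the OP's actual code):

import java.nio.ByteBuffer;
import java.nio.CharBuffer;
import java.nio.charset.CharacterCodingException;
import java.nio.charset.Charset;
import java.nio.charset.CharsetEncoder;

public class StringEncoder
{
  // Created once and reused; getBytes() has to look the encoder up on every call.
  // Note: CharsetEncoder is not thread-safe, so use one instance per thread.
  private final CharsetEncoder encoder = Charset.forName("US-ASCII").newEncoder();

  public byte[] toBytes(String convert_me) throws CharacterCodingException
  {
    // encode() resets the encoder and encodes the whole input in one step
    ByteBuffer buffer = encoder.encode(CharBuffer.wrap(convert_me));
    byte[] bytes = new byte[buffer.remaining()];
    buffer.get(bytes);
    return bytes;
  }
}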

Dave Rigby
The OP is not specifying a charset for the getBytes() call, which therefore defaults to the platform's default charset and does a bunch of extra work to actually retrieve it.
Boris Terzic
Specifying the charset in the call to getBytes() seems to have had a beneficial effect on memory consumption at least, but unfortunately did not lead to a real improvement in the runtime. The next step is to rewrite the functions using CharsetEncoder and see if that improves matters.
Timo Geusch
A: 

If it is the same strings you are converting every time, you could cache the result in a WeakHashMap.

Also, have a look at the getBytes() method (the source is available if you install the SDK) to see what exactly it does.
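
A rough sketch of that suggestion (the class and method names are hypothetical; the synchronized wrapper assumes the cache is shared between threads):

import java.util.Collections;
import java.util.Map;
import java.util.WeakHashMap;

public class ByteCache
{
  // Weak keys let the GC evict entries once the original String
  // is no longer referenced anywhere else in the application.
  private final Map<String, byte[]> cache =
      Collections.synchronizedMap(new WeakHashMap<String, byte[]>());

  public byte[] toBytes(String convert_me)
  {
    byte[] bytes = cache.get(convert_me);
    if (bytes == null)
    {
      bytes = convert_me.getBytes();
      cache.put(convert_me, bytes);
    }
    return bytes;
  }
}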

Thorbjørn Ravn Andersen
Caching does sound like a neat idea, but unfortunately the conversion function is called millions of times even with a comparatively small data set, and the strings are mostly distinct. There is likely to be some duplication, but given that we are already running into memory constraints on 32-bit JVMs, caching the converted strings would most likely cause more problems than it solves.
Timo Geusch
Then you need to find out WHY the conversion is slow...
Thorbjørn Ravn Andersen
It seems to be related to two things: (a) the amount of data (not much I can do about that) and (b) the charset/locale I'm converting into. So far it seems that converting into UTF-8 is measurably faster, which isn't surprising, but unfortunately the C++ side doesn't currently support UTF-8.
Timo Geusch
+1  A: 

I see several options:

  • If you have Latin-1 strings, you could just strip off the high byte of each char in the string (Charset does this too, I think) - see the sketch after this list.
  • You could also split the work among multiple cores if you have more than one (the fork-join framework had a backport to 1.5 at one point).
  • You could also build the data up in a StringBuilder and only convert it to a byte array once at the end.
  • Look at your GC/memory usage. Too much memory utilization might slow your algorithms down due to frequent GC interruptions.
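
A minimal sketch of the first bullet, assuming the strings really do contain only Latin-1 characters (the method name is illustrative):

public static byte[] latin1Bytes(String convert_me)
{
  // For Latin-1 text, each char's low byte is already the encoded byte,
  // so the charset machinery can be bypassed entirely.
  byte[] bytes = new byte[convert_me.length()];
  for (int i = 0; i < convert_me.length(); i++)
  {
    bytes[i] = (byte) convert_me.charAt(i); // keeps only the low 8 bits
  }
  return bytes;
}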
kd304