views:

353

answers:

3

When writing a Java program, do I have any influence on how the CPU will utilize its cache to store my data? For example, if I have an array that is accessed a lot, does it help if it's small enough to fit into one cache line (typically 128 bytes on a 64-bit machine)? What if I keep a much-used object within that limit, can I expect the memory used by its members to be close together and to stay in cache?
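
To make this concrete, here is roughly the kind of thing I have in mind (the class and field names are made up just for illustration):

    // A small, hot lookup table: 16 longs = 128 bytes of payload, and a
    // primitive array is (as far as I know) laid out as one contiguous block
    // by every JVM.
    final long[] hotTable = new long[16];

    // A node modelled as an object: the primitive field lives inside the Node
    // itself, but the children array only holds references, so the child
    // nodes themselves could end up anywhere on the heap.
    class Node {
        long bitmap;
        Node[] children;
    }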

Background: I'm building a compressed digital tree that's heavily inspired by Judy arrays, which are written in C. While I'm mostly after its node compression techniques, Judy has CPU cache optimization as a central design goal, and the node types as well as the heuristics for switching between them are heavily influenced by that. I was wondering whether I have any chance of getting those benefits, too.

Edit: The general advice of the answers so far is: don't try to micro-optimize machine-level details when you're as far away from the machine as you are in Java. I totally agree, so I felt I had to add some (hopefully) clarifying comments to better explain why I think the question still makes sense. These are below:

There are some things that are just generally easier for computers to handle because of the way they are built. I have seen Java code run noticeably faster on compressed data (from memory), even though the decompression had to use additional CPU cycles. If the data were stored on disk, it's obvious why that is so, but of course in RAM it's the same principle.

Now, computer science has lots to say about what those things are, for example, locality of reference is great in C and I guess it's still great in Java, maybe even more so, if it helps the optimizing runtime to do more clever things. But how you accomplish it might be very different. In C, I might write code that manages larger chunks of memory itself and uses adjacent pointers for related data.

In Java, I can't (and don't want to) know much about how memory is going to be managed by a particular runtime. So I have to take optimizations to a higher level of abstraction, too. My question is basically, how do I do that? For locality of reference, what does "close together" mean at the level of abstraction I'm working on in Java? Same object? Same type? Same array?
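
For example, I could model a trie node as its own object, or flatten all node data into a few big arrays and treat a node as just an index into them; I honestly don't know which of these (if either) a given runtime rewards. Again, the names are made up:

    // Variant 1: one object per node; related nodes may or may not end up
    // near each other on the heap.
    class TrieNode {
        int prefix;
        TrieNode[] children;
    }

    // Variant 2: "structure of arrays" -- all node data lives in parallel
    // primitive arrays, and a node is identified by its index.
    class FlatTrie {
        int[] prefix;       // prefix[i] belongs to node i
        int[] firstChild;   // index of node i's first child, or -1 if none
        int[] nextSibling;  // index of node i's next sibling, or -1
    }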

In general, I don't think that abstraction layers change the "laws of physics", metaphorically speaking. Doubling your array in size every time you run out of space is a good strategy in Java, too, even though you don't call malloc() anymore.
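
(In Java that growth step just looks something like this, with buffer and size standing in for whatever backing array and fill count the structure keeps; Arrays.copyOf is from java.util:)

    // Double the backing array whenever it fills up.
    if (size == buffer.length) {
        buffer = Arrays.copyOf(buffer, buffer.length * 2);
    }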

+1  A: 

To the best of my knowledge: No. You pretty much have to be writing in machine code to get that level of optimization. With assembly you're a step away because you no longer control where things are stored. With a compiler you're two steps away because you don't even control the details of the generated code. With Java you're three steps away because there's a JVM interpreting your code on the fly.

I don't know of any constructs in Java that let you control things on that level of detail. In theory you could indirectly influence it by how you organize your program and data, but you're so far away that I don't see how you could do it reliably, or even know whether or not it was happening.

Jay
+3  A: 

If you are down to where an improvement of a few percent makes a difference, use C where you'll get an improvement of 50-100%!

If you think that the ease of use of Java makes it a better language to use, then don't screw it up with questionable optimizations.

The good news is that Java will do a lot of stuff beneath the covers to improve your code at runtime, but it almost certainly won't do the kind of optimizations you're talking about.

If you decide to go with Java, just write your code as clearly as you can and don't take minor optimizations into account at all. (Major ones, like using the right collections for the right job and not allocating/freeing objects inside a loop, are still worthwhile.)
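
For example, here's a sketch of what I mean by the major ones ("words" is just a stand-in for whatever input you're iterating over, and the collections come from java.util):

    // Right collection for the job: membership tests against a HashSet are
    // constant-time, while against a List they would be a linear scan.
    Set<String> stopWords = new HashSet<String>(Arrays.asList("a", "an", "the"));
    List<String> kept = new ArrayList<String>();
    for (String word : words) {
        if (!stopWords.contains(word)) {
            kept.add(word);
        }
    }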

Bill K
Ditto. The moral of the story is NOT "don't try to optimize". It's "optimize within the structure of the language". When you're thinking about how to trick the compiler into generating better machine code, you're at best wasting your time. I strongly suspect that the people who wrote the JVM would have difficulty figuring out exactly how to construct the sort of trick the questioner is talking about, never mind the majority of us who have only the vaguest idea of what's really going on inside the JVM.
Jay
Although I agree with you @Jay, I usually use different semantics. Optimizing is trying to find a way to pervert clean code in an attempt to make it faster; choosing the correct collection, avoiding inner loops where possible, and avoiding allocations in loops where possible are just coding correctly. (Hmm, avoiding allocating objects does fall into the category of "optimizations that corrupt your code". Hmph!)
Bill K
@Bill - in Java, you shouldn't even be too afraid to allocate short-lived objects. The garbage collector is very fast and efficient at cleaning up short-lived objects. (I.e., don't corrupt your code just to save on allocating some short-lived objects.)
Jesper
@Jesper I agree, and felt bad about saying that, but he was talking about shaving minute amounts off a tight loop, and being aware of all the things you do in a tight loop is somewhat important to being a programmer, so I left it in -- but I think you saw me waffling about it in my previous comment :)
Bill K
+4  A: 

The key to good performance in Java is not to try to outwit the JIT compiler. If you write your code to try to influence it to do things in a certain way at the native instruction level, you are more likely to shoot yourself in the foot.

First of all, you might not know the best way to do something. HotSpot and other optimizing runtimes are extremely clever about how they do things. If I were an expert machine language programmer, I'd write machine language, not Java. And if I'm not, it would be unwise to think that I could do a better job of optimizing my code than the experts.

Second, even if you do know the best way to implement something for a particular CPU, the beauty of Java is write-once-run-anywhere. Clever tricks to "optimize" Java code tend to make optimization opportunities harder for the JIT to recognize. Straightforward code that adheres to common idioms is much easier to optimize. So even when you get the best Java code for your testbed, that code might perform horribly on a different architecture, or at best fail to take advantage of enhancements in future JITs.

If you want good performance, keep it simple. Teams of really smart people are working to make it fast.
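
For instance, here's a made-up illustration of what "simple" buys you (data is assumed to be a long[]):

    // A plain counted loop over a primitive array is exactly the shape of code
    // HotSpot handles well; it can typically eliminate the bounds checks and
    // unroll the loop on its own.
    long sum = 0;
    for (int i = 0; i < data.length; i++) {
        sum += data[i];
    }

    // Hand-rolled "tricks" (reversed loops, manual unrolling, caching
    // data.length in a local because "it must be faster") mostly just make the
    // code harder to read -- and harder for the JIT to recognize.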

erickson
Still, there are certain things in code that are just generally easier for computers, regardless of how many layers of abstraction are stacked on top of each other. I have seen Java code run noticeably faster on data that was compressed in memory, although the decompression had to take additional CPU cycles. This is just because transferring data from RAM to CPU is a bottleneck that no JIT is going to overcome if you insist on keeping your data large. Also, if I put data in a small array, I doubt that any JVM will start scattering it all over the RAM, so I can improve locality of reference that way.
Hanno Fietz
That's a very dangerous line of thinking. "I doubt any JVM..." It's completely within the rights of any implementer of an abstraction to change its implementation as long as they don't change the semantics of the interface; this is why you really should try to work at a level appropriate to the abstraction.
Falaina
Yes, Hanno, I think you have some good points. But they are just broad-strokes good computer science -- improving locality of reference, minimizing I/O, etc. You aren't getting down to the level of "most L1 data caches are sized like this, so..."
erickson