I can test for the presence of a key in an NSDictionary in two ways:
BOOL containsKey = [[dictionary allKeys] containsObject:foo];
BOOL containsKey = ([dictionary objectForKey:foo] != nil);
Which method is faster? Please show your work.
I don't see how asking for the allKeys array could possibly be faster; otherwise NSDictionary would at least do the equivalent internally.
EDIT: I suppose you could construct a case where the allKeys method would be faster: by taking a long time in your key's hash method but not in its isEqual: method, for example. And you could also swap in a crazy implementation of NSDictionary in which the two are swapped, too (since NSDictionary is abstract).
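To make that constructed case concrete, here is a contrived key class (the class name and loop count are my own, not from the post) whose hash is deliberately expensive while its isEqual: stays cheap, so the linear allKeys scan could plausibly win:

```objc
#import <Foundation/Foundation.h>

// Hypothetical key with a pathologically slow -hash but a cheap -isEqual:.
// Dictionary keys must conform to NSCopying.
@interface SlowHashKey : NSObject <NSCopying>
@property (nonatomic, copy) NSString *value;
@end

@implementation SlowHashKey
- (NSUInteger)hash {
    // Simulate an expensive hash computation.
    NSUInteger h = 0;
    for (NSUInteger i = 0; i < 100000; i++) {
        h = h * 31 + [self.value hash];
    }
    return h;
}
- (BOOL)isEqual:(id)other {
    if (![other isKindOfClass:[SlowHashKey class]]) return NO;
    return [self.value isEqualToString:[(SlowHashKey *)other value]];
}
- (id)copyWithZone:(NSZone *)zone {
    SlowHashKey *copy = [[SlowHashKey allocWithZone:zone] init];
    copy.value = self.value;
    return copy;
}
@end

int main(void) {
    @autoreleasepool {
        SlowHashKey *key = [SlowHashKey new];
        key.value = @"foo";
        NSDictionary *dictionary = @{ key: @1 };

        // objectForKey: pays for -hash on every lookup; the allKeys scan
        // pays only for -isEqual: per element.
        BOOL viaLookup = ([dictionary objectForKey:key] != nil);
        BOOL viaScan = [[dictionary allKeys] containsObject:key];
        NSLog(@"%d %d", viaLookup, viaScan);
    }
    return 0;
}
```

With enough such keys in the dictionary, the relative costs invert exactly as described above.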
A hash lookup should generally be faster than enumerating all the dictionary's keys, creating an array from them (memory allocation is relatively expensive), and then searching that array linearly (it can't even be a binary search, since the array is not sorted).
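For reference, the objectForKey: check can also be written with modern subscript syntax, which for NSDictionary is equivalent to objectForKey: (this snippet is my own illustration, not from the post):

```objc
#import <Foundation/Foundation.h>

int main(void) {
    @autoreleasepool {
        NSDictionary *dictionary = @{ @"name": @"foo" };
        NSString *key = @"name";

        // Subscripting invokes objectForKeyedSubscript:, which behaves
        // like objectForKey: and returns nil for a missing key.
        BOOL containsKey = (dictionary[key] != nil);
        NSLog(@"containsKey = %d", containsKey);
    }
    return 0;
}
```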
For the sake of science, though, I made two executables that each run one of the two styles 1 million times, and timed them.
With allKeys:
real 0m4.185s
user 0m3.890s
sys 0m0.252s
With objectForKey:
real 0m0.396s
user 0m0.189s
sys 0m0.029s
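A minimal harness along these lines (my own sketch; the dictionary contents, iteration count, and timing approach are assumptions, not the original benchmark code) would look like:

```objc
#import <Foundation/Foundation.h>

int main(void) {
    @autoreleasepool {
        NSDictionary *dictionary = @{ @"a": @1, @"b": @2, @"c": @3 };
        NSString *foo = @"c";
        BOOL containsKey = NO;

        NSDate *start = [NSDate date];
        for (int i = 0; i < 1000000; i++) {
            // One executable runs this line:
            containsKey = [[dictionary allKeys] containsObject:foo];
            // The other runs this one instead:
            // containsKey = ([dictionary objectForKey:foo] != nil);
        }
        NSLog(@"containsKey = %d, elapsed = %f s",
              containsKey, -[start timeIntervalSinceNow]);
    }
    return 0;
}
```

Compiling each variant separately and timing it with `time` reproduces the style of measurement shown above.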
Obviously, various factors can influence this: the size of the dictionary, caching the allKeys return value, and so on. I wouldn't expect there to be a case in which the array search is faster than the dictionary lookup, though.
When thinking about performance questions like this, keep in mind that the Foundation data classes swap out their underlying data structures depending on how many objects you store in them. For example, I think a small NSArray actually uses a hash table for storage until it reaches a certain size.