I need a Java data structure/solution that meets these requirements. What best fits them?

1) Objects' insertion order must be kept

2) Objects must be unique (these are database objects that are uniquely identified by a UUID).

3) If a newer object with the same ID is added, the older version of the object should be overwritten/removed

4) The solution should be accessible by many threads.

5) When the first object added to the structure is read/used, it should be removed from the data structure

A: 

Sounds like you have to create your own data structure, but it sounds like a pretty easy class assignment.

Basically, you start with something like an array or a Stack, and then you extend it with the rest of the functionality.

You can look at the contains() method, as you will need that.

James Black
+1  A: 

My thought is something like the following:

 Collections.synchronizedMap(new LinkedHashMap<K, V>());

I think that takes care of everything except requirement 5, but you can do that by using the remove() method instead of get().
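For illustration, a minimal sketch of that approach (the class name, method names, and UUID-keyed types below are placeholders, not anything from the question):

import java.util.Collections;
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.UUID;

class SynchronizedLinkedMapSketch {
    private final Map<UUID, Object> map =
            Collections.synchronizedMap(new LinkedHashMap<UUID, Object>());

    // put() overwrites any existing entry with the same UUID (requirement 3)
    void add(UUID id, Object obj) {
        map.put(id, obj);
    }

    // remove() instead of get(): the entry is gone once it has been read (requirement 5)
    Object take(UUID id) {
        return map.remove(id);
    }
}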

This won't be quite as efficient as a ConcurrentMap would be: synchronization locks the entire map on every access, while ConcurrentMap implementations can use read-write locks and lock only part of the map, allowing multiple non-conflicting accesses to proceed simultaneously. If you wanted, you could probably get better performance by writing your own subclass of some existing Map implementation.

David Zaslavsky
+8  A: 

There are a couple of possibilities here. The simplest might be to start with a LinkedHashSet. That will provide you with the uniqueness and predictable ordering that you require. Then, you could wrap the resulting set to make it thread-safe:

Set<T> s = Collections.synchronizedSet(new LinkedHashSet<T>(...));

Note: Since a Set doesn't really define a method for retrieving items from it, your code would have to manually invoke Set.remove(Object).
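For example, here is a small sketch of how client code might read and remove the oldest element (the class and method names are illustrative only); note that iterating a synchronizedSet still requires manual synchronization on the set:

import java.util.Collections;
import java.util.Iterator;
import java.util.LinkedHashSet;
import java.util.Set;

class TakeFirstSketch {
    private final Set<Object> s =
            Collections.synchronizedSet(new LinkedHashSet<Object>());

    // Returns and removes the oldest element, or null if the set is empty.
    Object takeFirst() {
        synchronized (s) { // manual locking is required when iterating a synchronizedSet
            Iterator<Object> it = s.iterator();
            if (!it.hasNext()) {
                return null;
            }
            Object first = it.next();
            s.remove(first); // the manual Set.remove(Object) call mentioned above
            return first;
        }
    }
}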

Alternatively, you could wrap a LinkedHashMap, which does provide a hook for the delete-on-read semantics you require:

class DeleteOnReadMap<K, V> implements Map<K, V> {

    private final Map<K, V> m = new LinkedHashMap<K, V>();

    // implement the Map "read" methods with delete-on-read semantics
    public V get(Object key) {
        return m.remove(key); // reading an entry also removes it
    }
    // (other read methods here)

    // implement the remaining Map methods by forwarding to the inner map
    public V put(K key, V value) {
        return m.put(key, value);
    }
    // (remaining Map methods here)
}

Finally, wrap an instance of your custom Map to make it thread-safe:

Map<K, V> m = Collections.synchronizedMap(new DeleteOnReadMap<K, V>(...));
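Hypothetical usage, assuming the map above is keyed by UUID and holds some DbObject type (both placeholders):

m.put(id, newerVersion);     // replaces any older object stored under the same UUID
DbObject obj = m.get(id);    // returns the object and, per the sketch above, removes it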
harto
To make it thread safe, you could do this: Set s = Collections.synchronizedSet(new LinkedHashSet(...));
RN
@RN updated answer accordingly, thanks.
harto
To make it thread safe, the delete-on-read semantics would have to be inside the synchronised code, otherwise two reads could expose a race condition.
Peter Lawrey
@Peter Good point. However, I just realised that Set doesn't have an equivalent of get(), so the deletion would have to happen external to the Set anyway. I've added a more useful Map example to my answer. It probably makes more sense if the client is referencing objects by some UUID anyway.
harto
+1  A: 

1) Objects' insertion order must be kept

Any "normal" data structure keeps insertion order: an array, an ArrayList, a tree. So avoid self-balancing or self-sorting data structures such as heaps, hash tables, or move-to-front trees (splay trees, for example). Then again, you could use one of those structures, but then you would have to record the insertion order in each node.

2) Objects must be unique (these are database objects that are uniquely identified by a UUID).

Keep a unique identifier associated with each object. If this were a C program, the pointer to the node would be unique (I guess an object reference plays the same role in Java). If the node's pointer/reference is not sufficient to maintain uniqueness, then you need to add a field to each node which you guarantee to have a unique value.

3) If a newer object with the same ID is added, the older version of the object should be overwritten/removed

Where do you want to place the node? Do you want to replace the existing node? Or do you want to delete the old node and then add the new one to the end? This is important because it relates to your requirement #1, where the order of insertion must be preserved.

4) The solution should be accessible by many threads.

The only way I can think of to do this is to implement some sort of locking. Java lets you wrap structures and code within a synchronized block.

5) When the first object added to the structure is read/used, it should be removed from the data structure

Kinda like a "dequeue" operation.

An ArrayList seems like a pretty good option for this, mainly because of #5. The only problem is that searches are linear, but with a relatively small amount of data that isn't really much of a problem.

Otherwise, as others have said, a HashMap or even a tree of some sort would work, but that will depend on the frequency of accesses. (For example, if the "most recent" element is most likely to be accessed, I'd use a linear structure. But if accesses will be of "random" elements, I'd go with a HashMap or a tree.)
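A minimal sketch of that ArrayList idea, assuming a DbObject type with a getId() accessor that returns its UUID (both placeholders), and interpreting requirement 3 as replace-in-place; the synchronized methods cover the locking mentioned under point 4:

import java.util.ArrayList;
import java.util.List;
import java.util.UUID;

class ArrayListQueueSketch {
    private final List<DbObject> items = new ArrayList<DbObject>();

    // Requirements 2 and 3: linear search for an object with the same UUID;
    // replace it in place (one of the two interpretations discussed above)
    // or append the new object at the end.
    public synchronized void add(DbObject obj) {
        for (int i = 0; i < items.size(); i++) {
            if (items.get(i).getId().equals(obj.getId())) {
                items.set(i, obj);
                return;
            }
        }
        items.add(obj);
    }

    // Requirement 5: dequeue-like read that removes the oldest object.
    public synchronized DbObject takeFirst() {
        return items.isEmpty() ? null : items.remove(0);
    }
}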

rascher
A: 

The answers suggesting a LinkedHashSet would be a good starting point.

However, you would have to override the equals() and hashCode() methods on the objects that you are going to put in the set in order to satisfy requirement 3.
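For example, a hypothetical database object keyed on its UUID might look like this (the class and field names are illustrative only):

import java.util.UUID;

class DbObject {
    private final UUID id;

    DbObject(UUID id) {
        this.id = id;
    }

    UUID getId() {
        return id;
    }

    // Two objects count as "the same" if they carry the same UUID,
    // regardless of any other state.
    @Override
    public boolean equals(Object o) {
        if (this == o) return true;
        if (!(o instanceof DbObject)) return false;
        return id.equals(((DbObject) o).id);
    }

    @Override
    public int hashCode() {
        return id.hashCode();
    }
}

Note that Set.add() leaves the set unchanged when an equal element is already present, so replacing an older version with a newer one would still require a remove() followed by an add().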

A_M