I've got a genuinely interesting little problem to solve (at least to me; and, no, it is not homework). It is equivalent to this: you need to determine the "sessions" a user has spent in front of his computer, along with each session's start and end time.

You get the time of every user interaction and a maximum period of inactivity. If a time greater than or equal to the period of inactivity elapses between two user inputs, they belong to different sessions.

Basically, the inputs I get look like this (they aren't sorted, and I'd rather not sort them before determining the sessions):

06:38
07:12
06:17
09:00
06:49
07:37
08:45
09:51
08:29

And, say, a period of inactivity of 30 minutes.

Then I need to find three sessions:

[06:17...07:37]
[08:29...09:00]
[09:51...09:51]

If the period of inactivity is set to 12 hours, then I'd just find one big session:

[06:17...09:51]

How can I solve this simply?

There's a minimum valid period of inactivity, which will be around 15 minutes.

The reason I'd rather not sort beforehand is that I'll get a lot of data, and merely holding it all in memory would be problematic. However, most of the data will belong to the same sessions: there will be relatively few sessions compared to the number of inputs, maybe on the order of thousands to one (thousands of user inputs per session).

So far I am thinking of reading an input (say 06:38), defining an interval [data - max_inactivity ... data + max_inactivity], and then, for each new input, using a dichotomic (binary, log n) search to see whether it falls within a known interval, or else creating a new interval.

I'd repeat this for every input, making the solution O(n log n) AFAICT. The good thing is that it wouldn't use much memory, since it only stores intervals (and most inputs will fall within a known interval).

Also, every time an input falls in a known interval, I'd have to adjust the interval's lower or upper bound and then check whether I need to "merge" it with the next interval. For example (for a maximum period of inactivity of 30 minutes):

[06:00...07:00]  (because I got 06:30)
[06:00...07:00][07:45...08:45]   (because I later got 08:15)
[06:00...08:45] (because I just received 07:20)
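
In rough Java (times as minutes since midnight, names are only placeholders, and the TreeMap is just a stand-in for whatever sorted structure turns out to be the right one), the bookkeeping I have in mind would look something like this:

    import java.util.Map;
    import java.util.TreeMap;

    // Sketch of the padded-interval bookkeeping: each entry maps an interval's
    // start to its end, intervals are kept disjoint, and floorEntry() gives the
    // log n neighbour lookup. Times are minutes since midnight (06:38 -> 398).
    class PaddedIntervals {
        private final TreeMap<Integer, Integer> intervals = new TreeMap<>();
        private final int maxInactivity;

        PaddedIntervals(int maxInactivity) { this.maxInactivity = maxInactivity; }

        void add(int t) {
            int lo = t - maxInactivity;
            int hi = t + maxInactivity;
            // absorb every existing interval that overlaps [lo, hi]
            Map.Entry<Integer, Integer> left = intervals.floorEntry(hi);
            while (left != null && left.getValue() >= lo) {
                lo = Math.min(lo, left.getKey());
                hi = Math.max(hi, left.getValue());
                intervals.remove(left.getKey());
                left = intervals.floorEntry(hi);
            }
            intervals.put(lo, hi);
        }
    }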

I don't know if the description is very clear, but that is what I need to do.

Does such a problem have a name? How would you go about solving it?

EDIT

I'm very interested in knowing which kind of data structure I should use if I solve it the way I've described. I need both log-n search and the ability to insert and merge intervals.

+1  A: 

I am not aware of a name for your problem or for the solution you found, but your solution is (more or less) the one I would propose. I think it's the best approach for this kind of problem.

If your data is at least somewhat ordered, you might find a slightly better solution by taking this ordering into account. For example, your data could be ordered by date but not by time; then you could first split the data by date.

Tobias Haustein
+3  A: 

Maximum Delay
If the log entries have a "maximum delay" (e.g. with a maximum delay of 2 hours, an 8:12 event will never be listed after a 10:12 event), you can sort on the fly with a bounded look-ahead buffer.
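
For instance, a small min-heap can emit the events in sorted order on the fly, releasing a timestamp as soon as nothing still to come can be older than it. A rough sketch, with illustrative names and timestamps as minutes since midnight:

    import java.util.PriorityQueue;
    import java.util.function.IntConsumer;

    // Bounded look-ahead sort: if no event can appear in the log after an event
    // that is more than maxDelay minutes younger, a min-heap lets us emit the
    // timestamps in sorted order as we read them.
    class LookAheadSorter {
        private final PriorityQueue<Integer> buffer = new PriorityQueue<>();
        private final int maxDelay;
        private final IntConsumer sink; // receives timestamps in sorted order

        LookAheadSorter(int maxDelay, IntConsumer sink) {
            this.maxDelay = maxDelay;
            this.sink = sink;
        }

        void push(int t) {
            buffer.add(t);
            // anything at least maxDelay older than t can no longer be displaced
            while (!buffer.isEmpty() && buffer.peek() <= t - maxDelay) {
                sink.accept(buffer.poll());
            }
        }

        void finish() { // flush once the log has been read completely
            while (!buffer.isEmpty()) sink.accept(buffer.poll());
        }
    }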

Do Sort
Alternatively, I'd first try sorting, if only to confirm that it really doesn't work. A timestamp can reasonably be stored in 8 bytes (or even 4 for your purposes, so you could fit 250 million of them into a gigabyte). Quicksort might not be the best choice here as it has low locality; insertion sort is almost perfect for almost-sorted data (though it has bad locality, too). Alternatively, quick-sorting chunk-wise and then merging the chunks with a merge sort should do, even though it increases memory requirements.
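
A sketch of the chunk-wise variant (names are illustrative, and the sorted chunks are kept in memory as int[] arrays for brevity, though in practice they could be spilled to disk and merged from there):

    import java.util.ArrayList;
    import java.util.Arrays;
    import java.util.List;
    import java.util.PriorityQueue;

    // Sort fixed-size chunks independently, then k-way merge them with a heap.
    class ChunkSort {
        static List<int[]> sortChunks(int[] data, int chunkSize) {
            List<int[]> chunks = new ArrayList<>();
            for (int i = 0; i < data.length; i += chunkSize) {
                int[] chunk = Arrays.copyOfRange(data, i, Math.min(i + chunkSize, data.length));
                Arrays.sort(chunk);
                chunks.add(chunk);
            }
            return chunks;
        }

        static int[] merge(List<int[]> chunks) {
            int total = 0;
            for (int[] c : chunks) total += c.length;
            // heap entries: {value, chunk index, position within chunk}
            PriorityQueue<int[]> heap = new PriorityQueue<>((a, b) -> Integer.compare(a[0], b[0]));
            for (int c = 0; c < chunks.size(); c++) {
                if (chunks.get(c).length > 0) heap.add(new int[]{chunks.get(c)[0], c, 0});
            }
            int[] sorted = new int[total];
            int n = 0;
            while (!heap.isEmpty()) {
                int[] top = heap.poll();
                sorted[n++] = top[0];
                int c = top[1], pos = top[2] + 1;
                if (pos < chunks.get(c).length) heap.add(new int[]{chunks.get(c)[pos], c, pos});
            }
            return sorted;
        }
    }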

Squash and conquer
Alternatively, you can use the following strategy:

  1. Transform each event into a "session of duration 0".
  2. Split your list of sessions into chunks (e.g. 1K values per chunk).
  3. Within each chunk, sort by session start.
  4. Merge all sessions that can be merged (having sorted first lets you limit the look-ahead).
  5. Compact the remaining sessions into a single large list.
  6. Repeat from step 2 until the list doesn't get any shorter.
  7. Finish with a sort-and-merge over the whole list.

If your log files have the kind of "temporal locality" your question suggests, a single pass should already reduce the data enough to allow a "full" sort.
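
A rough sketch of one squash pass and the surrounding repeat loop, assuming a session is represented as an int[]{start, end} and events enter as zero-length sessions (class and method names are mine):

    import java.util.ArrayList;
    import java.util.Comparator;
    import java.util.List;

    // One squash pass: chunk the session list, sort each chunk by start time and
    // merge sessions whose gap is below maxInactivity, then concatenate survivors.
    class Squash {
        static List<int[]> pass(List<int[]> sessions, int chunkSize, int maxInactivity) {
            List<int[]> out = new ArrayList<>();
            for (int i = 0; i < sessions.size(); i += chunkSize) {
                List<int[]> chunk =
                    new ArrayList<>(sessions.subList(i, Math.min(i + chunkSize, sessions.size())));
                chunk.sort(Comparator.comparingInt((int[] s) -> s[0]));
                int[] current = chunk.get(0);
                for (int j = 1; j < chunk.size(); j++) {
                    int[] next = chunk.get(j);
                    if (next[0] - current[1] < maxInactivity) {   // close enough: same session
                        current[1] = Math.max(current[1], next[1]);
                    } else {
                        out.add(current);
                        current = next;
                    }
                }
                out.add(current);
            }
            return out;
        }

        static List<int[]> squash(List<int[]> events, int chunkSize, int maxInactivity) {
            List<int[]> sessions = events;
            int before;
            do {                               // steps 2-6: repeat until no further shrinking
                before = sessions.size();
                sessions = pass(sessions, chunkSize, maxInactivity);
            } while (sessions.size() < before);
            // step 7: a final pass with one big chunk is the "sort-and-merge over all"
            return pass(sessions, Math.max(1, sessions.size()), maxInactivity);
        }
    }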

[edit] [This site] demonstrates an "optimized quicksort with insertion sort finish" that's quite good on almost-sorted data, as is this guy's std::sort.

peterchen
+1  A: 

Your solution using an interval search tree sounds like it would be efficient enough.

You don't say whether the data you have provided (consisting solely of timestamps without date) is the actual data that you are processing. If so, consider that there are only 24 * 60 = 1440 minutes in a day. As this is a relatively small value, creating a bit-vector (packed or not---doesn't really matter) feels like it would provide both an efficient and easy solution.

The bit-vector (once filled) would be capable of either:

  • answering the query "Has the user been sighted at time T?" in O(1), if you decide to set a field of the vector to true only when the corresponding time has shown up on your input data (we can call this method "conservative add") or

  • answering the query "Was a session active at time T?" in O(1) as well, but with a larger constant, if you decide to set a field of the vector to true if a session was active at that time---by this I mean that when you add time T, you also set the following 29 fields to true.

I'd like to note that by using a conservative add, you are not limiting yourself to session-intervals of 30 minutes: indeed, you can change this value online at any time, since the structure does not extrapolate any information but is just a practical way of storing/viewing presence records.
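
For illustration (class and method names are mine, times as minutes since midnight), a conservative-add vector together with a scan that derives the sessions for any chosen inactivity threshold might look like this:

    import java.util.ArrayList;
    import java.util.BitSet;
    import java.util.List;

    // "Conservative add" bit-vector: one bit per minute of the day. It records raw
    // presence only, so the inactivity threshold can be chosen (or changed) later.
    class PresenceVector {
        private final BitSet seen = new BitSet(24 * 60);

        void add(int minuteOfDay) {             // e.g. 06:38 -> 6 * 60 + 38 = 398
            seen.set(minuteOfDay);
        }

        boolean sightedAt(int minuteOfDay) {    // the O(1) query mentioned above
            return seen.get(minuteOfDay);
        }

        // Derive the sessions for a given inactivity threshold with a single scan.
        List<int[]> sessions(int maxInactivity) {
            List<int[]> result = new ArrayList<>();
            int start = seen.nextSetBit(0);
            if (start < 0) return result;       // no presence recorded at all
            int last = start;
            for (int t = seen.nextSetBit(last + 1); t >= 0; t = seen.nextSetBit(t + 1)) {
                if (t - last >= maxInactivity) { // gap too large: close the current session
                    result.add(new int[]{start, last});
                    start = t;
                }
                last = t;
            }
            result.add(new int[]{start, last});
            return result;
        }
    }

The derivation scan touches a fixed 1440 entries, so its cost is independent of how many raw events were recorded.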

Jérémie
+2  A: 

You are asking for an online algorithm, i.e. one that can calculate a new set of sessions incrementally for each new input time.

Concerning the choice of data structure for the current set of sessions, you can use a balanced binary search tree. Each session is represented by a pair (start, end) of its start and end times. The nodes of the search tree are ordered by start time. Since your sessions are separated by at least max_inactivity, i.e. no two sessions overlap, this ensures that the end times are ordered as well. In other words, ordering by start time already orders the sessions consecutively.

Here is some pseudo-code for insertion. For notational convenience, we pretend that sessions is an array, though it's actually a binary search tree.

insert(time, sessions) = do
    i <- find index such that
         sessions[i].start <= time && time < sessions[i+1].start

    if (sessions[i].end + max_inactivity >= time)
        merge  time  into  sessions[i]
    else if (time >= sessions[i+1].start - max_inactivity)
        merge  time  into  sessions[i+1]
    else
        insert  (time, time)  into  sessions

    if (sessions[i].end + max_inactivity >= sessions[i+1].start)
        merge  sessions[i] and sessions[i+1]

The merge operation can be implemented by deleting and inserting elements into the binary search tree.

This algorithm will take time O(n log m) where m is the maximum number of sessions, which you said is rather small.

Granted, implementing a balanced binary search tree is no easy task, depending on the programming language. The key here is that you have to split the tree according to a key and not every ready-made library supports that operation. For Java, I would use the TreeSet<E> class; as said, the element type E is a single session given by start and end time. Its floor() and ceiling() methods will retrieve the sessions I've denoted with sessions[i] and sessions[i+1] in my pseudo-code.
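
For example, a rough sketch with TreeSet, where a session is an int[]{start, end} ordered by its start time (variable names are mine). Each insertion performs a constant number of tree operations, so the whole run stays within the O(n log m) bound above:

    import java.util.Comparator;
    import java.util.TreeSet;

    // Online sessionization with a TreeSet: floor()/ceiling() on a zero-length
    // probe session play the roles of sessions[i] and sessions[i+1] above.
    class OnlineSessions {
        private final TreeSet<int[]> sessions =
            new TreeSet<>(Comparator.comparingInt((int[] s) -> s[0]));
        private final int maxInactivity; // minutes; a gap >= this starts a new session

        OnlineSessions(int maxInactivity) { this.maxInactivity = maxInactivity; }

        void insert(int time) {
            int[] probe = {time, time};
            int[] prev = sessions.floor(probe);    // sessions[i]:   greatest start <= time
            int[] next = sessions.ceiling(probe);  // sessions[i+1]: smallest start >= time
            int[] home;                            // the session that ends up containing time

            if (prev != null && time - prev[1] < maxInactivity) {
                prev[1] = Math.max(prev[1], time); // extend sessions[i] to the right
                home = prev;
            } else if (next != null && next[0] - time < maxInactivity) {
                sessions.remove(next);             // the key (start) changes, so re-insert
                next[0] = time;
                sessions.add(next);
                home = next;
            } else {
                sessions.add(probe);               // brand-new session [time, time]
                home = probe;
            }

            // growing may have closed the gap to the right-hand neighbour
            int[] after = sessions.higher(home);
            if (after != null && after[0] - home[1] < maxInactivity) {
                sessions.remove(after);
                home[1] = Math.max(home[1], after[1]);
            }
        }
    }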

Heinrich Apfelmus
@Heinrich Apfelmus: thanks a lot for your answer. I didn't know about the term *"online algorithm"* which is very interesting. I'll probably implement the solution to this problem in Java :)
NoozNooz42
I've looked through the Java API docs; you'll probably want to use the `TreeSet` class. I've edited my answer.
Heinrich Apfelmus