Hi,

I am writing an application which records some 'basic' stats -- page views and unique visitors. I don't like the idea of storing every single view, so I have thought about storing totals at an hour/day resolution. For example:

Tuesday    500 views   200 unique visitors
Wednesday  400 views   210 unique visitors
Thursday   800 views   420 unique visitors

Now, I want to be able to query this data set over chosen time periods -- e.g., a week. Calculating views is easy enough: just addition. However, adding unique visitors will not give the correct answer, since a visitor may have visited on multiple days.

So my question is: how do I determine or estimate unique visitors for any time period without storing each individual hit? Is this even possible? Google Analytics reports these values -- surely they don't store every single hit and query the data set for every time period!?

I can't seem to find any useful information on the net about this. My initial instinct is that I would need to store two sets of values with different resolutions (e.g. day and half-day), and somehow interpolate these for all possible time ranges. I've been playing with the maths, but can't get anything to work. Do you think I may be on to something, or am I on the wrong track?

Thanks, Brendon.

A: 

You don't need to store every single view, just each unique session ID per hour or day depending on the resolution you need in your stats.

You can keep these session-ID log files sorted, so that unique visitors can be counted quickly by merging multiple hours/days: one file per hour/day, one unique session ID per line.

In *nix, a simple one-liner like this one will do the job:

$ sort -m sorted_sid_logs/2010-09-0[123]-??.log | uniq | wc -l

It counts the number of unique visitors during the first three days of September.
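
If you would rather do the count from application code than from the shell, the same merge-and-count takes a few lines of Python -- a minimal sketch assuming the file layout above (unique_visitors is just an illustrative name; it uses a set, so the files don't even need to be pre-sorted):

import glob

def unique_visitors(pattern):
    # Count distinct session IDs across all log files matching the
    # glob pattern (one ID per line, one file per hour/day).
    seen = set()
    for path in glob.glob(pattern):
        with open(path) as f:
            for line in f:
                seen.add(line.strip())
    return len(seen)

# Unique visitors during the first three days of September:
print(unique_visitors('sorted_sid_logs/2010-09-0[123]-??.log'))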

Sheldon L. Cooper
A: 

You can calculate the uniqueness factor (UF) for each day and use it to calculate the composite UF (for a week, for example).

Let's say that you counted:

  • 100 visits and 75 unique session IDs on Monday (you have to store the session IDs at least for a day, or for whatever period you use as the unit).
  • 200 visits and 100 unique session IDs on Tuesday.

If you want to estimate the UF for the period Mon+Tue, you can do:

UV = UV_monday + UV_tuesday = TV_monday*UF_monday + TV_tuesday*UF_tuesday

where:

UV = Unique Visitors
TV = Total Visits
UF = Uniqueness Factor

So...

UV = Sum(TVi*UFi)
UF = UV / TV
TV = Sum(TVi)
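
As a worked example with the numbers above, here is a quick sketch in Python (as the comments below point out, Sum(TVi*UFi) algebraically reduces to Sum(UVi), i.e. plain addition of the daily unique counts):

days = [(100, 75),    # Monday:  TV=100, UV=75  -> UF = 0.75
        (200, 100)]   # Tuesday: TV=200, UV=100 -> UF = 0.50

tv = sum(t for t, u in days)                   # TV = Sum(TVi)      = 300
uv = sum(t * (float(u) / t) for t, u in days)  # UV = Sum(TVi*UFi)  = 175.0
uf = uv / tv                                   # UF = UV / TV      ~= 0.583

print(tv, uv, uf)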

I hope it helps...

This math counts two visits of the same person as two unique visitors. I think it's ok if the only way you have to identify somebody is via the session ID.

helios
In summary: just add.
Sheldon L. Cooper
+1  A: 

You could store a random subsample of the data, for example, 10% of the visitor IDs, then compare these between days.

The easiest way to do this is to store a random subsample of each day's IDs for future comparisons. Then, for the current day, temporarily store all of your IDs, compare them to the subsampled historical data, and determine the fraction of repeats. (That is, you're comparing the subsampled data to a full dataset for a given day, not comparing two subsamples -- it's possible to compare two subsamples and get an estimate for the total, but the math would be a bit trickier.)
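
A minimal sketch of this idea in Python (the function names and the 10% rate are illustrative; historical_sample is the stored subsample from some earlier day, and todays_full_ids is the complete set of IDs for the current day -- both plain Python sets):

import random

def daily_sample(ids, rate=0.10):
    # Keep a random ~10% subsample of one day's visitor IDs
    # for future comparisons.
    return set(i for i in ids if random.random() < rate)

def repeat_fraction(historical_sample, todays_full_ids):
    # Fraction of the sampled historical visitors who also appear
    # in today's complete set of IDs.
    if not historical_sample:
        return 0.0
    repeats = len(historical_sample & todays_full_ids)
    return repeats / float(len(historical_sample))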

tom10
+1  A: 

If you are OK with approximations, I think tom10 is onto something, but his notion of a random subsample is not quite the right one, or at least needs a clarification. If I have a visitor that comes on day 1 and day 2 but is sampled only on day 2, that is going to introduce a bias into the estimate. What I would do instead is store full information for a random subsample of users (let's say, all users whose hash(id)%100 == 1). Then you do the full calculation on the sampled data and multiply by 100.

Yes, tom10 said just about that, but there are two differences: he suggested sampling based on the ID only "for example", whereas I say that's the only way you should sample, because you are interested in unique visitors. If you were interested in unique IPs or unique ZIP codes or whatever, you would sample accordingly. The quality of the estimate can be assessed using the normal approximation to the binomial if your sample is big enough.

Beyond this, you can try to use a model of user loyalty: for instance, you observe that over 2 days 10% of visitors visit on both days, over 3 days 11% of visitors visit twice and 5% visit once, and so forth up to some maximum number of days. Unfortunately these numbers can depend on the time of week and the season, and even if you model those, loyalty itself changes over time as the user base matures and changes in composition, and as the service changes as well, so any such model needs to be re-estimated. My guess is that in 99% of practical situations you'd be better served by the sampling technique.
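
A minimal sketch of the hash-based selection in Python (MD5 via hashlib is just one convenient choice of a stable, roughly uniform hash; note that Python's built-in hash() is not stable across runs, so it is a poor fit here):

import hashlib

def in_sample(visitor_id, buckets=100, chosen=1):
    # Deterministically keep ~1 in 100 users: the same ID always lands
    # in the same bucket, so a sampled visitor is sampled on every
    # visit, which avoids the bias described above.
    digest = hashlib.md5(visitor_id.encode('utf-8')).hexdigest()
    return int(digest, 16) % buckets == chosen

# Store full per-visit information only when in_sample(id) is true,
# run the exact unique-visitor computation on that stored sample for
# any time period, then multiply the result by 100.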

piccolbo
Thank you for your answer. I did think tom10 was on to something, but failed to recognise a way to select random visitors as opposed to visits (without storing all visitors). Using modular arithmetic on a uniformly-distributed hash is exactly what is needed, and is a very elegant solution. Thank you (piccolbo) in particular, and to everybody else who contributed ideas or suggestions...
Brendon