I need to analyze 1 TB+ of web access logs, and in particular I need to analyze statistics relating to requested URLs and subsets of the URLs (child branches). If possible, I want the queries to be fast over small subsets of the data (e.g. 10 million requests).

For example, given an access log with the following URLs being requested:

/ocp/about_us.html
/ocp/security/ed-209/patches/urgent.html
/ocp/security/rc/
/ocp/food/
/weyland-yutani/products/

I want to do queries such as:

  • Count the number of requests for everything 'below' /ocp.
  • Same as above, but only count requests for child nodes under /ocp/security
  • Return the top 5 most frequently requested URLs.
  • Same as above, except grouped by an arbitrary depth.

For example, for the last query, depth 2 over the data above would return:

2: /ocp/security/
1: /ocp/
1: /ocp/food/
1: /weyland-yutani/products/

I think the ideal approach would probably be to use a column DB and tokenize the URLs such that there is a column for each element in the URL. However, I would really like to find a way to do this with open source apps if possible. HBase is a possibility, but its query performance seems too slow to be useful for real-time queries (and I don't really want to be in the business of re-implementing SQL).

I'm aware there are commercial apps for doing this type of analytics, but for various reasons I want to implement this myself.

A: 

You might want to check out the HIERARCHYID datatype in SQL Server 2008 or its equivalent in Oracle.
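
A minimal sketch of how that could look in T-SQL on SQL Server 2008+ (the table and column names are made up for illustration, not taken from the question):

-- Illustrative schema: one row per URL node, one row per logged request.
CREATE TABLE dbo.UrlNode (
    NodeId hierarchyid PRIMARY KEY,
    Url    nvarchar(2000) NOT NULL
);

CREATE TABLE dbo.Hit (
    HitId  bigint IDENTITY PRIMARY KEY,
    NodeId hierarchyid NOT NULL
);

-- Count every request at or below /ocp.
DECLARE @ocp hierarchyid =
    (SELECT NodeId FROM dbo.UrlNode WHERE Url = '/ocp');

SELECT COUNT(*)
FROM dbo.Hit AS h
WHERE h.NodeId.IsDescendantOf(@ocp) = 1;  -- IsDescendantOf includes the node itself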

User
+2  A: 

Before investing too much time in designing a hierarchical data structure on top of a relational database, consider reading the "Naive Trees" section (starting at slide 48) of the excellent presentation SQL Anti-Patterns Strike Back by Bill Karwin. Bill outlines the following methods for developing a hierarchy (a closure-table sketch follows the list):

  1. Path enumeration (slide 55)
  2. Nested sets (slide 58)
  3. Closure table (slide 68)
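
Of these, the closure table maps nicely onto URL-subtree counting. A minimal sketch, assuming URLs are tokenized into nodes at load time and every request is logged against the node it resolved to (all names and the id value below are illustrative):

CREATE TABLE node (
    id   BIGINT PRIMARY KEY,
    name VARCHAR(255) NOT NULL          -- one path segment, e.g. 'security'
);

-- One row per (ancestor, descendant) pair, including each node paired with itself.
CREATE TABLE node_closure (
    ancestor   BIGINT NOT NULL,
    descendant BIGINT NOT NULL,
    PRIMARY KEY (ancestor, descendant)
);

CREATE TABLE hit (
    node_id BIGINT NOT NULL,            -- the node the requested URL resolved to
    ts      DATETIME NOT NULL
);

-- Count requests for everything at or below /ocp, assuming /ocp has id 1.
SELECT COUNT(*)
FROM hit AS h
JOIN node_closure AS c ON c.descendant = h.node_id
WHERE c.ancestor = 1;

The (ancestor, descendant) primary key turns the subtree lookup into an index range scan, which keeps queries over small subsets fast.
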
jakemcgraw
Good presentation, thanks!
Rob
+2  A: 

Trees are generally not very efficient in databases: if you design the tree to be truly recursive, with items pointing to their parents, you end up running many queries to find all the sub-nodes.

But you can optimize the tree, according to your needs.

Putting each part of the URL into its own column is not a bad idea. You do need to limit the depth to a certain number of sub-nodes, but you can put an index on any column, which makes queries very fast.
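
For example, with the depth capped at four segments the table could look something like this (a sketch; the names, column sizes, and depth limit are arbitrary):

CREATE TABLE hits (
    id    BIGINT AUTO_INCREMENT PRIMARY KEY,
    node1 VARCHAR(100),
    node2 VARCHAR(100),
    node3 VARCHAR(100),
    node4 VARCHAR(100),
    INDEX (node1, node2, node3, node4)  -- one composite index covers prefix-style queries;
                                        -- separate indexes per column also work
);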

Queries on such a structure are very simple:

SELECT COUNT(*) FROM hits WHERE node1 = 'ocp' AND node2 = 'security';

To build an access statistic:

SELECT node1, node2, COUNT(*) AS "number of hits"
FROM hits
GROUP BY node1, node2
ORDER BY COUNT(*) DESC;

you'll get

node1            node2        number of hits
'ocp'            NULL                  23345
'ocp'            'security'             1020
'ocp'            'food'                  234
'weyland-yutani' 'products'               22
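
The top-5 question from the original post is the same query with a limit, and grouping by fewer node columns gives the arbitrary-depth grouping (depth 2 here):

SELECT node1, node2, COUNT(*) AS cnt
FROM hits
GROUP BY node1, node2
ORDER BY cnt DESC
LIMIT 5;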

You could also store the URL as-is and filter using a regex. This is more flexible but slower, because the regex can't use an index. You then only need to limit the total length of the URL, not the number of sub-nodes.
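
For example, in MySQL that could be a REGEXP filter (a sketch, assuming a url column holding the raw request path; the pattern scans every row since it can't use an index):

SELECT COUNT(*)
FROM hits
WHERE url REGEXP '^/ocp/security/';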

I think you could do this with any database that is good at storing large amounts of data, for instance MySQL.

Stefan Steinegger
For storing trees in a database you might want to look into the Nested Set model.
Jasper Bekkers
+1  A: 

The book The Art of SQL by Stéphane Faroult has an excellent chapter (7, "Dealing with Hierarchical Data") that explains and compares three methods for storing and querying trees in relational databases.

If you are doing a serious, industrial-strength implementation, studying the chapter will be time well spent.

wire science
+1  A: 

I think the most efficient way to store this type of data is in a parts explosion (or hierarchy) table.

A parts explosion table consists of three columns: an identity, a parent, and a description. For the example data, the table would look something like this:

Identity Parent Description
0        Null   ocp
1        0      about_us.html
2        0      security
3        2      ed-209
4        3      patches
5        4      urgent.html
6        2      rc
7        0      food
8        Null   weyland-yutani
9        8      products

As the URL (explosion) table is being populated, also populate a table that records the leaf node of each requested URL. From the example data:

 Leaf ID
-------
1
5
6
7
9

I believe you can answer all your questions starting with these two tables.
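
For example, the "everything below /ocp" count could be a recursive walk over the explosion table. A sketch, assuming a dialect with recursive CTEs (MySQL 8+, PostgreSQL) and illustrative names: url_node(id, parent, description) for the explosion table and request_leaf(leaf_id) with one row per logged request:

WITH RECURSIVE subtree AS (
    -- start at the root node for 'ocp'
    SELECT id FROM url_node WHERE description = 'ocp' AND parent IS NULL
    UNION ALL
    -- add its children, grandchildren, and so on
    SELECT n.id
    FROM url_node AS n
    JOIN subtree AS s ON n.parent = s.id
)
SELECT COUNT(*) AS hits_below_ocp
FROM request_leaf AS r
JOIN subtree AS s ON r.leaf_id = s.id;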