I have a requirement to parse both Apache access logs and Tomcat logs, one after another, using MapReduce. A few fields are extracted from the Tomcat log and the rest from the Apache log. I need to merge/map the extracted fields based on timestamp and export these mapped fields into a traditional relational DB (e.g. MySQL).

I can parse and extract the information using regular expressions or Pig. The challenge I am facing is how to map the extracted information from both logs into a single aggregate format or file, and how to export this data to MySQL.
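For the extraction step, a minimal sketch of pulling fields out of an Apache access log line with a regular expression might look like the following (this assumes the common/combined log format, and the choice of fields is arbitrary; the Tomcat log would need its own pattern and date format):

    import java.text.SimpleDateFormat;
    import java.util.Date;
    import java.util.Locale;
    import java.util.regex.Matcher;
    import java.util.regex.Pattern;

    public class ApacheLogParser {
        // Assumes the Apache common/combined log format; adjust the pattern for the actual format.
        private static final Pattern LINE = Pattern.compile(
            "^(\\S+) \\S+ \\S+ \\[([^\\]]+)\\] \"(\\S+) (\\S+)[^\"]*\" (\\d{3})");
        private static final SimpleDateFormat TS =
            new SimpleDateFormat("dd/MMM/yyyy:HH:mm:ss Z", Locale.ENGLISH);

        // Returns "timestampMillis<TAB>ip<TAB>method<TAB>url<TAB>status",
        // or null if the line doesn't match the pattern.
        public static String parse(String line) throws Exception {
            Matcher m = LINE.matcher(line);
            if (!m.find()) {
                return null;
            }
            Date ts = TS.parse(m.group(2));
            return ts.getTime() + "\t" + m.group(1) + "\t" + m.group(3)
                 + "\t" + m.group(4) + "\t" + m.group(5);
        }
    }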

A few approaches I am considering:

1) Write the MapReduce output for the parsed Apache access logs and the parsed Tomcat logs into separate files, merge those into a single file (again based on timestamp), and export this data to MySQL.

2) Use HBase or Hive to store the data in table format in Hadoop and export that to MySQL.

3) Write the MapReduce output directly to MySQL using JDBC.
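(For reference, option 3 would roughly mean wiring the job's output through Hadoop's DBOutputFormat, along these lines; the connection settings, table, and column names below are placeholders, not a working configuration.)

    import java.io.DataInput;
    import java.io.DataOutput;
    import java.io.IOException;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;
    import java.sql.SQLException;

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.io.Writable;
    import org.apache.hadoop.mapreduce.Job;
    import org.apache.hadoop.mapreduce.lib.db.DBConfiguration;
    import org.apache.hadoop.mapreduce.lib.db.DBOutputFormat;
    import org.apache.hadoop.mapreduce.lib.db.DBWritable;

    public class MysqlExportJob {

        // One merged log record; the column layout here is made up.
        public static class MergedRecord implements Writable, DBWritable {
            long ts;
            String apacheFields;
            String tomcatFields;

            public void write(PreparedStatement stmt) throws SQLException {
                stmt.setLong(1, ts);
                stmt.setString(2, apacheFields);
                stmt.setString(3, tomcatFields);
            }
            public void readFields(ResultSet rs) throws SQLException {
                ts = rs.getLong(1);
                apacheFields = rs.getString(2);
                tomcatFields = rs.getString(3);
            }
            // Only needed if the record has to cross the shuffle.
            public void write(DataOutput out) throws IOException { }
            public void readFields(DataInput in) throws IOException { }
        }

        public static void main(String[] args) throws Exception {
            Configuration conf = new Configuration();
            DBConfiguration.configureDB(conf, "com.mysql.jdbc.Driver",
                    "jdbc:mysql://dbhost/logdb", "user", "password");
            Job job = Job.getInstance(conf, "export-to-mysql");
            job.setJarByClass(MysqlExportJob.class);
            // Mapper/reducer setup omitted; the reducers would emit MergedRecord keys,
            // which DBOutputFormat turns into INSERTs against the named table.
            job.setOutputFormatClass(DBOutputFormat.class);
            DBOutputFormat.setOutput(job, "merged_logs", "ts", "apache_fields", "tomcat_fields");
            System.exit(job.waitForCompletion(true) ? 0 : 1);
        }
    }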

Which approach would be the most viable? Please also suggest any other alternative solutions you know of.

+2  A: 

It's almost always preferable to have smaller, simpler MR jobs chained together than to have large, complex jobs. I think your best option is to go with something like #1. In other words:

  1. Process Apache httpd logs into a unified format.
  2. Process Tomcat logs into a unified format.
  3. Join the output of 1 and 2 using whatever logic makes sense, writing the result into the same format.
  4. Export the resulting dataset to your database.

You can probably perform the transforms (steps 1 and 2) and the join in the same step: use the map to transform and do a reduce-side join.
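A minimal sketch of that shape might look like the following, assuming steps 1 and 2 emit normalized lines of the form "epochMillis<TAB>fields" (all class names here are hypothetical). Each source gets its own tagging mapper via MultipleInputs, and the reducer merges whatever arrives under the same timestamp; if you fold the transform into this job, the per-source mappers would do the raw-log parsing themselves instead of just re-tagging.

    import java.io.IOException;

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.io.LongWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Job;
    import org.apache.hadoop.mapreduce.Mapper;
    import org.apache.hadoop.mapreduce.Reducer;
    import org.apache.hadoop.mapreduce.lib.input.MultipleInputs;
    import org.apache.hadoop.mapreduce.lib.input.TextInputFormat;
    import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

    public class LogJoinJob {

        // Tags each normalized Apache record ("epochMillis<TAB>fields") with "A", keyed by timestamp.
        public static class ApacheTagMapper extends Mapper<LongWritable, Text, LongWritable, Text> {
            @Override
            protected void map(LongWritable offset, Text line, Context ctx)
                    throws IOException, InterruptedException {
                String[] parts = line.toString().split("\t", 2);
                ctx.write(new LongWritable(Long.parseLong(parts[0])), new Text("A\t" + parts[1]));
            }
        }

        // Same for Tomcat records, tagged "T".
        public static class TomcatTagMapper extends Mapper<LongWritable, Text, LongWritable, Text> {
            @Override
            protected void map(LongWritable offset, Text line, Context ctx)
                    throws IOException, InterruptedException {
                String[] parts = line.toString().split("\t", 2);
                ctx.write(new LongWritable(Long.parseLong(parts[0])), new Text("T\t" + parts[1]));
            }
        }

        // Reduce-side join: records sharing a timestamp arrive together and are merged.
        public static class JoinReducer extends Reducer<LongWritable, Text, LongWritable, Text> {
            @Override
            protected void reduce(LongWritable ts, Iterable<Text> records, Context ctx)
                    throws IOException, InterruptedException {
                StringBuilder apache = new StringBuilder();
                StringBuilder tomcat = new StringBuilder();
                for (Text r : records) {
                    String[] parts = r.toString().split("\t", 2);
                    (parts[0].equals("A") ? apache : tomcat).append(parts[1]);
                }
                ctx.write(ts, new Text(apache + "\t" + tomcat));
            }
        }

        public static void main(String[] args) throws Exception {
            Job job = Job.getInstance(new Configuration(), "log-join");
            job.setJarByClass(LogJoinJob.class);
            MultipleInputs.addInputPath(job, new Path(args[0]), TextInputFormat.class, ApacheTagMapper.class);
            MultipleInputs.addInputPath(job, new Path(args[1]), TextInputFormat.class, TomcatTagMapper.class);
            job.setReducerClass(JoinReducer.class);
            job.setOutputKeyClass(LongWritable.class);
            job.setOutputValueClass(Text.class);
            FileOutputFormat.setOutputPath(job, new Path(args[2]));
            System.exit(job.waitForCompletion(true) ? 0 : 1);
        }
    }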

It doesn't sound like you need or want the overhead of random access, so I wouldn't look at HBase. This isn't its strong point (although you could do it in the random-access sense by looking up each record in HBase by timestamp, seeing if it exists, and merging the record in, or simply inserting it if it doesn't exist, but this is comparatively very slow). Hive could be convenient for storing the "unified" result of the two formats, but you'd still have to transform the records into that format.

You absolutely do not want to have the reducers write to MySQL directly. This effectively creates a DDoS attack on the database. Consider a cluster of 10 nodes, each running 5 reducers: you'll have 50 concurrent writers to the same table. As you grow the cluster, you'll exceed the maximum number of connections very quickly and choke the RDBMS.

All of that said, ask yourself whether it makes sense to put this much data into the database if you're considering the full log records. That amount of data is precisely the kind of case Hadoop itself is meant to store and process long term. If you're computing aggregates of this data, by all means, toss them into MySQL.

Hope this helps.

Eric Sammer
Thanks Eric. I am using the file-based approach, with the slight change of merging the data in the database rather than doing it in MapReduce. The parsed data from both logs will be stored in two separate staging tables, and these staging tables are joined to produce the final aggregated data, which will be stored in a final table. On your question of whether it makes sense to put this much data into the database: the parsed data would be aggregated, filtered, useful data, which is very small compared to the raw log records. The reason for storing the data in a relational DB is to allow traditional apps to have access to it.
Harsha Hulageri