I am trying to load a Tomcat access log (combined pattern) into a MySQL database. Currently I push each record to the database as soon as it is parsed.

First, I read a line from the InputStream,

then parse it into a LogItem, which wraps one log record,

then use the MySQLDbManager to insert the record into the database.

MySQLDbManager holds a reference to a database connection pool; each time it is asked to push a log record, it requests a connection, runs the SQL, and then releases the connection.

I am afraid this will perform poorly, since the MySQLDbManager gets and releases a connection for every single record while the log is being pushed.

I have thought about collecting all the LogItems in a List and passing that list to the MySQLDbManager once the whole file has been read. However, the log file may contain a huge number of records, which means a lot of LogItems would end up in the list - is that safe?
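For example, a bounded variant of that idea would flush the list every few thousand items instead of holding everything in memory. This is only a sketch; writeLogItemsToDb(List<LogItem>) is a hypothetical batch method that does not exist yet in MySQLDbManager, and BATCH_SIZE is an arbitrary guess:

private static final int BATCH_SIZE = 1000; // arbitrary; flush after this many items

private void pushLogToDbInChunks(BufferedReader br) throws IOException {
    List<LogItem> buffer = new ArrayList<LogItem>(BATCH_SIZE);
    String line;
    while ((line = br.readLine()) != null) {
        buffer.add(LogParser.parser(line));
        if (buffer.size() >= BATCH_SIZE) {
            dbm.writeLogItemsToDb(buffer); // hypothetical batch insert
            buffer.clear();                // keeps memory usage bounded
        }
    }
    if (!buffer.isEmpty()) {
        dbm.writeLogItemsToDb(buffer);     // flush the remaining items
    }
}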

These are the two methods:

MySQLDbManager dbm = MySQLDbManager.getInstance();

private void pushLogToDbByStream(BufferedReader br) {
    String line;
    int total = 0;
    int error = 0;
    try {
        while ((line = br.readLine()) != null) {
            total++;
            try {
                // parse one combined-pattern line into a LogItem and insert it
                LogItem li = LogParser.parser(line);
                dbm.writeLogItemToDb(li);
            } catch (IllegalStateException e) {
                // a line that fails to parse is counted but does not stop the whole load
                error++;
            }
        }
    } catch (IOException e) {
        log.error("Error occurred while reading the log file", e);
    }
    log.info(total + " items have been processed, " + error + " failed");
}

public int writeLogItemToDb(LogItem item) {
    // get a connection from the connection pool
    DBConnection dbc = pool.alloc();
    // buildSQLInsertValues() returns something like ('127.0.0.1','GET','http://www.google.com',...)
    String insertSql = "insert into log values" + item.buildSQLInsertValues();
    int res;
    try {
        res = dbc.updateSQL(insertSql);
        return res;
    } catch (SQLException e) {
        log.error("Error occurred when trying to insert the item into db:\n" + insertSql + "\n", e);
        return 0;
    } finally {
        // release the connection back to the pool
        pool.release(dbc);
    }
}
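For example, I am also wondering whether a parameterized, batched insert would be both safer and faster than concatenating the SQL string. The sketch below is the batch counterpart of writeLogItemToDb (the writeLogItemsToDb method mentioned above); it uses a plain javax.sql.DataSource and java.sql.PreparedStatement because I do not know whether DBConnection can expose a raw java.sql.Connection, and the column list and the LogItem getters are made-up placeholders:

public int[] writeLogItemsToDb(List<LogItem> items) throws SQLException {
    // Assumption: "dataSource" is a pooled javax.sql.DataSource; column names are placeholders.
    Connection con = dataSource.getConnection();
    String sql = "insert into log (ip, method, url) values (?, ?, ?)";
    PreparedStatement ps = null;
    try {
        ps = con.prepareStatement(sql);
        for (LogItem item : items) {
            ps.setString(1, item.getIp());     // hypothetical getters on LogItem
            ps.setString(2, item.getMethod());
            ps.setString(3, item.getUrl());
            ps.addBatch();
        }
        return ps.executeBatch();              // executes all queued inserts as one batch
    } finally {
        if (ps != null) ps.close();
        con.close(); // with a pooled DataSource this returns the connection to the pool
    }
}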

Is there a better idea?

BTW, if you think my question is too elementary to answer, feel free to not answer it, but please do not vote my post down; I am working hard to earn reputation. :)

+2  A: 

If you are using a connection pool, then there is no real overhead in "the MySQLDbManager will get and release connection all the times during the log pushing", because you won't actually be opening and closing physical connections for each request. That is the entire purpose of a connection pool.
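To make that concrete, a pool essentially keeps a set of already-opened connections and hands them out and takes them back. This is a grossly simplified illustration of the idea (not your actual pool class), using plain JDBC:

class SimplePool {
    private final Queue<Connection> idle = new ConcurrentLinkedQueue<Connection>();
    private final String url, user, password;

    SimplePool(String url, String user, String password) {
        this.url = url;
        this.user = user;
        this.password = password;
    }

    Connection alloc() throws SQLException {
        Connection c = idle.poll();
        // a physical connection is opened only when the pool has none left to hand out
        return (c != null) ? c : DriverManager.getConnection(url, user, password);
    }

    void release(Connection c) {
        idle.offer(c); // nothing is closed here; the connection simply goes back into the pool
    }
}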

In addition, any time you are afraid of how something will perform, the best thing you can do is to simply test it yourself. Write a quick testbench and see just how fast this code is - how many logs can you push to the database per second? How long does it take to push a 100MB log to the database?

Only by actually testing the code will you know any answers for sure.
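For instance, a testbench for this case can be as small as the following; it assumes the main() method lives in the same class as pushLogToDbByStream(), and the class and file names are just examples:

public static void main(String[] args) throws IOException {
    BufferedReader br = new BufferedReader(new FileReader("access.log")); // example file
    long start = System.nanoTime();
    new LogLoader().pushLogToDbByStream(br); // "LogLoader" stands in for whatever class holds the method
    long elapsedMillis = (System.nanoTime() - start) / 1000000L;
    br.close();
    System.out.println("took " + elapsedMillis + " ms");
}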

matt b
Yes, it is true that I can test the time taken to push the log to the db, but I have no idea how to test how many objects can be put into a list or map at most - just keep putting objects into them until the system stops working? Anyway, your point that "you won't actually be opening and closing physical connections for each request" is rather helpful for me. Thanks.
hguser