You could programmatically configure log4j when you initialize the job.

You can also set the file at runtime via a system property. From the manual:

Set the resource string variable to the value of the log4j.configuration system property. The preferred way to specify the default initialization file is through the log4j.configuration system property. In case the system property log4j.configuration is not defined, then set the string variable resource to its default value "log4j.properties".

Assuming you're running the jobs from separate java commands, this lets each one use a different configuration file, and hence a different log file name.
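For instance (the class and file names below are made up for illustration), each job's launch command can point log4j at its own configuration file:

```shell
java -Dlog4j.configuration=file:job1-log4j.properties com.example.FirstJob
java -Dlog4j.configuration=file:job2-log4j.properties com.example.SecondJob
```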

Without specific knowledge of how your jobs are run it's difficult to say!

Phill Sacre

Tom, you could specify an appender for each job. Let's say you have two jobs corresponding to two different Java packages, com.tom.firstbatch and com.tom.secondbatch. You would have something like this in log4j.xml:

   <category name="com.tom.firstbatch">
      <appender-ref ref="FIRST_APPENDER"/>
   </category>
   <category name="com.tom.secondbatch">
      <appender-ref ref="SECOND_APPENDER"/>
   </category>
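The referenced appenders would also need to be defined in the same log4j.xml. A minimal sketch for the first one (file name and pattern are illustrative, and SECOND_APPENDER would be analogous):

```xml
<appender name="FIRST_APPENDER" class="org.apache.log4j.FileAppender">
   <param name="File" value="firstbatch.log"/>
   <layout class="org.apache.log4j.PatternLayout">
      <param name="ConversionPattern" value="%d %-5p %c - %m%n"/>
   </layout>
</appender>
```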
Alexandre Victoor
+1  A: 

If the job names are known ahead of time, you could include the job name when you do the getLogger() call. You then can bind different appenders to different loggers, with separate file names (or other destinations).

If you cannot know the job name ahead of time, you could configure the logger at runtime instead of using a configuration file:

FileAppender appender = new FileAppender(new PatternLayout("%d %-5p %m%n"), jobName + ".log");  // throws IOException
Logger logger = Logger.getLogger(jobName);
logger.addAppender(appender);
Asgeir S. Nilsen
+7  A: 

Can you pass a Java system property for each job? If so, you can parameterize like this:

java -Dmy_var=somevalue my.job.Classname

And then in your log4j configuration file you can refer to the system property as ${my_var}, since log4j substitutes system properties into configuration values.
You could populate the Java system property with a value from the host's environment (for example) that would uniquely identify the instance of the job.
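For example, a log4j.properties fragment could use the property in the log file name (the appender name and paths here are illustrative):

```properties
log4j.rootLogger=INFO, FILE
log4j.appender.FILE=org.apache.log4j.FileAppender
log4j.appender.FILE.File=logs/${my_var}.log
log4j.appender.FILE.layout=org.apache.log4j.PatternLayout
log4j.appender.FILE.layout.ConversionPattern=%d %-5p %c - %m%n
```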

+2  A: 

You can have each job set an NDC or MDC value and then write an appender that varies the file name based on that value. Creating a new appender isn't too hard, and there may already be an appender that fits the bill in the log4j sandbox.

James A. N. Stauffer
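As a sketch of the MDC side of this (the "jobName" key and class names are my own, not from the answer): each job tags its thread before logging, and a custom appender could read the same key back via event.getMDC("jobName") to pick the output file.

```java
import org.apache.log4j.Logger;
import org.apache.log4j.MDC;

public class JobRunner {
    private static final Logger log = Logger.getLogger(JobRunner.class);

    public void run(String jobName) {
        MDC.put("jobName", jobName);   // tag every log event on this thread
        try {
            log.info("job started");
            // ... job work ...
        } finally {
            MDC.remove("jobName");     // clear the tag when the job finishes
        }
    }
}
```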

Hello, you could implement the following:

  • A ThreadLocal holder for the identity of your job.
  • An extension of FileAppender that keeps a Map holding a QuietWriter for every job identity. In the subAppend method, you get the identity of your job from the ThreadLocal, look up (or create) the corresponding QuietWriter, and write to it...

I may send you some code by mail if you wish...
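A log4j-free sketch of the routing idea described above (class and method names are made up for illustration): a ThreadLocal carries the job identity, and a map of writers plays the role the QuietWriter map would play inside the extended FileAppender.

```java
import java.io.StringWriter;
import java.util.HashMap;
import java.util.Map;

public class JobContext {
    // Holds the identity of the job running on the current thread.
    private static final ThreadLocal<String> CURRENT_JOB = new ThreadLocal<String>();
    // One writer per job identity, created lazily (stand-in for the QuietWriter map).
    private static final Map<String, StringWriter> WRITERS = new HashMap<String, StringWriter>();

    public static void setCurrentJob(String jobName) {
        CURRENT_JOB.set(jobName);
    }

    // Equivalent of subAppend(): route the message to the current job's writer.
    public static synchronized void append(String message) {
        String job = CURRENT_JOB.get();
        StringWriter w = WRITERS.get(job);
        if (w == null) {
            w = new StringWriter();   // a real appender would open a per-job file here
            WRITERS.put(job, w);
        }
        w.write(message + "\n");
    }

    public static synchronized String contentsFor(String jobName) {
        StringWriter w = WRITERS.get(jobName);
        return w == null ? "" : w.toString();
    }
}
```

In a real appender the StringWriter would be a QuietWriter over a per-job file, but the routing logic is the same.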

+1  A: 

We have something similar implemented in our system. We store the job-specific loggers in a map and initialize an appender for each of them as needed.

Here's an example:

import java.io.File;
import java.util.Hashtable;

import org.apache.log4j.FileAppender;
import org.apache.log4j.Layout;
import org.apache.log4j.Logger;
import org.apache.log4j.PatternLayout;

public class JobLogger {
    private static Hashtable<String, Logger> m_loggers = new Hashtable<String, Logger>();
    private static String m_filename = "...";  // Root log directory

    public static synchronized void logMessage(String jobName, String message) {
        Logger l = getJobLogger(jobName);
        l.info(message);
    }

    public static synchronized void logException(String jobName, Exception e) {
        Logger l = getJobLogger(jobName);
        l.error(e.getMessage(), e);
    }

    private static synchronized Logger getJobLogger(String jobName) {
        Logger logger = m_loggers.get(jobName);
        if (logger == null) {
            Layout layout = new PatternLayout("...");
            logger = Logger.getLogger(jobName);
            m_loggers.put(jobName, logger);
            try {
                File file = new File(m_filename + jobName + ".log");
                FileAppender appender = new FileAppender(layout, file.getAbsolutePath(), false);
                logger.addAppender(appender);
            } catch (Exception e) {
                // ...
            }
        }
        return logger;
    }
}
Then to use this in your job you just have to use a one line entry like this:

JobLogger.logMessage(jobName, logMessage);

This will create one log file for each job name and drop it in its own file with that job name in whichever directory you specify.

You can fiddle with other types of appenders and such. As written, it will keep appending to the same file until the JVM is restarted, which may not work well if you run the same job on a server that is always up, but it gives the general idea of how this can work.
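If the append-forever behavior is a concern, one option (a sketch, assuming the same variables as in getJobLogger above) is to swap the FileAppender for a DailyRollingFileAppender so each job's file rolls over by date:

```java
// Rolls the log file at midnight, producing one file per day per job.
DailyRollingFileAppender appender = new DailyRollingFileAppender(
        layout, file.getAbsolutePath(), "'.'yyyy-MM-dd");
logger.addAppender(appender);
```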