We have something similar implemented in our system. We store the job-specific loggers in a Hashtable and initialize an appender for each of them as needed.
Here's an example:
import java.io.File;
import java.util.Hashtable;

import org.apache.log4j.FileAppender;
import org.apache.log4j.Layout;
import org.apache.log4j.Level;
import org.apache.log4j.Logger;
import org.apache.log4j.PatternLayout;

public class JobLogger {
    private static Hashtable<String, Logger> m_loggers = new Hashtable<String, Logger>();
    private static String m_filename = "..."; // Root log directory

    public static synchronized void logMessage(String jobName, String message)
    {
        Logger l = getJobLogger(jobName);
        l.info(message);
    }

    public static synchronized void logException(String jobName, Exception e)
    {
        Logger l = getJobLogger(jobName);
        l.info(e.getMessage(), e);
    }

    private static synchronized Logger getJobLogger(String jobName)
    {
        Logger logger = m_loggers.get(jobName);
        if (logger == null) {
            Layout layout = new PatternLayout("...");
            logger = Logger.getLogger(jobName);
            m_loggers.put(jobName, logger);
            logger.setLevel(Level.INFO);
            try {
                // Make sure the root log directory exists, then open a file named after the job
                File file = new File(m_filename);
                file.mkdirs();
                file = new File(m_filename + jobName + ".log");
                FileAppender appender = new FileAppender(layout, file.getAbsolutePath(), false);
                logger.removeAllAppenders();
                logger.addAppender(appender);
            }
            catch (Exception e)
            { ... }
        }
        return logger;
    }
}
Then, to use this in your job, you just need a one-line entry like this:
JobLogger.logMessage(jobName, logMessage);
This creates one log file per job name, each written to its own file in whichever directory you specify.
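For instance, a hypothetical job might wrap its work like this (the job class and job name here are made up purely for illustration):

import java.lang.Runnable;

public class NightlyExportJob implements Runnable {
    private final String jobName = "NightlyExport"; // hypothetical job name

    @Override
    public void run() {
        JobLogger.logMessage(jobName, "Export started");
        try {
            // ... do the actual work of the job ...
            JobLogger.logMessage(jobName, "Export finished");
        } catch (Exception e) {
            // Failures land in the same per-job log file
            JobLogger.logException(jobName, e);
        }
    }
}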
You can fiddle with other types of appenders and such; as written, it will keep appending until the JVM is restarted, which may not work if you run the same job on a server that is always up, but it gives the general idea of how this can work.
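If the appending-forever behaviour is a problem on an always-up server, one option is to swap the plain FileAppender for log4j's DailyRollingFileAppender so each job's file rolls over by date. A minimal sketch of that variant (the class name, pattern string, and date pattern below are just placeholders, not part of the original code):

import java.io.File;

import org.apache.log4j.DailyRollingFileAppender;
import org.apache.log4j.Layout;
import org.apache.log4j.Logger;
import org.apache.log4j.PatternLayout;

public class RollingJobLoggerFactory {
    // Hypothetical variant of getJobLogger that rolls each job's file over daily
    static Logger createJobLogger(String rootDir, String jobName) throws Exception {
        Layout layout = new PatternLayout("%d{ISO8601} %-5p %m%n");
        Logger logger = Logger.getLogger(jobName);
        new File(rootDir).mkdirs();
        File file = new File(rootDir + jobName + ".log");
        // "'.'yyyy-MM-dd" tells log4j to start a fresh file each day
        DailyRollingFileAppender appender =
                new DailyRollingFileAppender(layout, file.getAbsolutePath(), "'.'yyyy-MM-dd");
        logger.removeAllAppenders();
        logger.addAppender(appender);
        return logger;
    }
}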