I have over 500 machines distributed across a WAN covering three continents. Periodically, I need to collect text files which sit on the local hard disk of each blade. Each server runs Windows Server 2003 and the files are exposed on a share which can be accessed remotely as \\server\Logs. Each machine holds many files, which can be several MB each, and their size can be reduced by zipping.

Thus far I have tried using PowerShell scripts and a simple Java application to do the copying. Both approaches take several days to collect the 500 GB or so of files. Is there a better solution which would be faster and more efficient?

+2  A: 

The first improvement that comes to mind is to not ship entire log files, but only the records from after the last shipment. This of course is assuming that the files are being accumulated over time and are not entirely new each time.

You could implement this in various ways: if the files have date/time stamps you can rely on, running them through a filter that removes the older records from consideration and dumps the remainder would be sufficient. If there is no such discriminator available, I would keep track of the last byte/line sent and advance to that location prior to shipping.

Either way, the goal is to ship only new content. In our own system, logs are shipped via a service that replicates them as they are written. That required writing a small service to handle the log files, but it reduced the latency in capturing logs and cut bandwidth use immensely.
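
A minimal PowerShell sketch of the offset-tracking approach, assuming one state file per log and placeholder paths (the staging folder is assumed to exist):

    $log        = 'C:\Logs\app.log'
    $offsetFile = 'C:\Logs\app.log.offset'
    $delta      = 'C:\Staging\app.log.delta'

    # Load the offset of the last byte shipped, if any.
    $lastOffset = 0
    if (Test-Path $offsetFile) { $lastOffset = [int64](Get-Content $offsetFile) }

    $stream = [System.IO.File]::Open($log, 'Open', 'Read', 'ReadWrite')
    if ($lastOffset -gt $stream.Length) { $lastOffset = 0 }   # log was rotated or truncated
    $null = $stream.Seek($lastOffset, 'Begin')

    # Read only the bytes appended since the last run.
    $reader   = New-Object System.IO.BinaryReader($stream)
    $newBytes = $reader.ReadBytes([int]($stream.Length - $lastOffset))
    $reader.Close()

    if ($newBytes.Length -gt 0) {
        [System.IO.File]::WriteAllBytes($delta, $newBytes)
        Set-Content $offsetFile ($lastOffset + $newBytes.Length)
    }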

Godeke
+3  A: 

I guess it depends on what you do with them... if you are going to parse them for metrics data into a database, it would be faster to have that parsing utility installed on each of those machines, so each one parses its own logs and loads the results into your central database at the same time.

Even if all you are doing is compressing and copying to a central location, set up those commands in a .cmd file and schedule it to run on each of the servers automatically. Then you will have distributed the work amongst all those servers, rather than forcing your one local system to do all the work. :-)
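
A rough sketch of such a per-server script in PowerShell, assuming 7-Zip is installed locally (the tool, paths and share name are placeholders, since no particular compressor is specified):

    # collect.ps1 - run locally on each server so the compression happens there
    # and only a small archive crosses the WAN.
    $server  = $env:COMPUTERNAME
    $stamp   = Get-Date -Format 'yyyyMMdd'
    $archive = "C:\Temp\$server-$stamp.zip"

    & 'C:\Program Files\7-Zip\7z.exe' a -tzip $archive 'C:\Logs\*.log'
    Copy-Item $archive '\\central\LogDrop\'
    Remove-Item $archive

    # Example registration with the Task Scheduler (the -File switch assumes PowerShell 2.0):
    # schtasks /create /tn CollectLogs /tr "powershell -File C:\Scripts\collect.ps1" /sc daily /st 02:00:00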

Ron

Ron Savage
A: 

We have a similar product on a smaller scale here. Our solution is to have the machines generating the log files push them to a NAS on a daily basis in a randomly staggered pattern. This solved a lot of the problems of a more pull-based method, including bunched-up read/write times that kept a server busy for days.
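
If the push scripts run from the Task Scheduler, the random stagger can be as simple as a sleep at the top of the script; a sketch (PowerShell 2.0's Get-Random; the delay window and paths are arbitrary):

    # Sleep a random 0-4 hours so hundreds of machines do not hit the collector at once.
    Start-Sleep -Seconds (Get-Random -Minimum 0 -Maximum 14400)
    Copy-Item "C:\Temp\$($env:COMPUTERNAME).zip" '\\central\LogDrop\'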

Jekke
A: 

It doesn't sound like the storage server's bandwidth would be saturated, so you could pull from several clients at different locations in parallel. The main question is: what is the bottleneck that slows the whole process down?
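
If the collector's bandwidth really isn't the limit, parallel pulls are easy to sketch with background jobs (PowerShell 2.0; the server list, share and destination are placeholders, and in practice you would throttle well below 500 simultaneous jobs):

    $servers = Get-Content 'C:\Scripts\servers.txt'
    foreach ($s in $servers) {
        Start-Job -ScriptBlock {
            param($name)
            # Pull this server's logs into its own folder on the collector.
            New-Item -ItemType Directory -Path "D:\Collected\$name" -Force | Out-Null
            Copy-Item "\\$name\Logs\*" "D:\Collected\$name\" -Recurse
        } -ArgumentList $s
    }
    Get-Job | Wait-Job | Receive-Job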

sth
A: 

I would do the following:

Write a program to run on each server, which will:

  • monitor the logs on the server
  • compress them on a defined schedule
  • pass information to the analysis server

Write another program which sits on the central server and which:

  • pulls compressed files when the network/CPU is not too busy (this can be multi-threaded)
  • uses the information passed to it from the end machines to determine which log to get next
  • uncompresses the files and uploads them to your database continuously
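
A bare-bones sketch of the pulling side, using local CPU load as the "not too busy" test (the queue file, the 70% threshold and the paths are all assumptions):

    # Central pull loop: back off while the collector is busy, then fetch the
    # next compressed log reported by the per-server agents.
    $queue = Get-Content 'D:\Collected\pending.txt'
    foreach ($server in $queue) {
        while ((Get-WmiObject Win32_Processor |
                Measure-Object -Property LoadPercentage -Average).Average -gt 70) {
            Start-Sleep -Seconds 300
        }
        New-Item -ItemType Directory -Path "D:\Collected\$server" -Force | Out-Null
        Copy-Item "\\$server\Logs\*.zip" "D:\Collected\$server\"
    }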

This should give you a solution which provides up-to-date information, with a minimum of downtime.
The downside will be relatively consistent network/computer use, but to be honest that is often a good thing.

It will also make the system easy to manage and any problems which need resolving easy to spot.

Bravax
A: 

NetBIOS copies are not as fast as, say, FTP. The problem is that you don't want an FTP server on each server. If you can't process the log files locally on each server, another solution is to have all the servers upload the log files via FTP to a central location, where you can process them. For instance:

Set up an FTP server as a central collection point. Schedule tasks on each server to zip up the log files and FTP the archives to your central FTP server. You can write a program which automates the scheduling of the tasks remotely using a tool like schtasks.exe:

KB 814596: How to use schtasks.exe to Schedule Tasks in Windows Server 2003

You'll likely want to stagger the uploads back to the FTP server.
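
For reference, a script for the built-in Windows ftp client might look like this (host, credentials and file name are placeholders); the zip step would produce C:\Temp\logs.zip beforehand, and the scheduled task runs it as ftp -s:C:\Scripts\upload.ftp:

    open ftp.example.com
    collector
    s3cret
    binary
    put C:\Temp\logs.zip
    bye

The remote scheduling is then one schtasks call per server, something like schtasks /create /s SERVER01 /tn UploadLogs /tr "ftp -s:C:\Scripts\upload.ftp" /sc daily /st 02:30:00 (add credentials and a run-as account as your environment requires).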

K. Brian Kelley
A: 

Each server should probably:

  • manage its own log files (start new logs before uploading and delete sent logs after uploading)
  • name the files (or prepend metadata) so the server knows which client sent them and what period they cover
  • compress log files before shipping (compress + FTP + uncompress is often faster than FTP alone)
  • push log files to a central location (FTP is faster than SMB; the Windows FTP command can be automated with "-s:scriptfile")
  • notify you when it cannot push its log for any reason
  • do all the above on a staggered schedule (to avoid overloading the central server)
    • Perhaps use the server's last IP octet multiplied by a constant to offset in minutes from midnight?
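
A PowerShell sketch of that last idea (the two-minutes-per-octet constant is arbitrary; it spreads octets 0-255 across roughly an eight-and-a-half-hour window):

    # Derive a per-server delay from the last octet of its IP address,
    # then push the compressed logs as described above.
    $nic = Get-WmiObject Win32_NetworkAdapterConfiguration -Filter 'IPEnabled = true' |
           Select-Object -First 1
    $lastOctet = [int]($nic.IPAddress[0].Split('.')[-1])
    Start-Sleep -Seconds ($lastOctet * 2 * 60)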

The central server should probably:

  • accept log files sent and queue them for processing
  • gracefully handle receiving the same log file twice (should it ignore or reprocess?)
  • uncompress and process the log files as necessary
  • delete/archive processed log files according to your retention policy
  • notify you when a server has not pushed its logs lately
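
That last check could itself be a small scheduled script on the collector; a sketch, assuming one drop folder per server under D:\LogDrop and a 24-hour window:

    # Flag any server whose drop folder has nothing newer than 24 hours.
    $cutoff = (Get-Date).AddHours(-24)
    Get-ChildItem 'D:\LogDrop' | Where-Object { $_.PSIsContainer } | ForEach-Object {
        $latest = Get-ChildItem $_.FullName |
                  Sort-Object LastWriteTime -Descending |
                  Select-Object -First 1
        if (-not $latest -or $latest.LastWriteTime -lt $cutoff) {
            Write-Warning "No recent logs from $($_.Name)"
        }
    }
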
Chris Nava