I am using Perl's readdir to get a file listing; however, the directory contains more than 250,000 files, so the readdir alone takes a long time (over 4 minutes) and uses more than 80 MB of RAM. Since this is intended to be a recurring job every 5 minutes, that lag is not acceptable.
More info: Another job fills the directory being scanned (once per day). This Perl script is responsible for processing the files. A file count is specified for each iteration, currently 1000 per run: the script runs every 5 minutes and processes (if applicable) up to 1000 files. The file-count limit is intended to let downstream processing keep up, since the Perl script pushes data into a database, which in turn triggers a complex workflow.
Is there another way to obtain filenames from the directory, ideally limited to 1000 (set by a variable), that would greatly increase the speed of this script?
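For context, here is a minimal sketch of the approach I was hoping for: calling readdir in scalar context inside a loop so entries are streamed one at a time and the loop can bail out after the limit, rather than slurping the entire listing with `@files = readdir($dh)`. The directory setup below is purely illustrative (a scratch directory with 5000 dummy files), and `$limit` stands in for the per-run file count mentioned above:

```perl
#!/usr/bin/perl
use strict;
use warnings;
use File::Temp qw(tempdir);

# Demo setup (hypothetical): a scratch directory with 5000 empty files.
my $dir = tempdir( CLEANUP => 1 );
for my $i ( 1 .. 5000 ) {
    open( my $fh, '>', "$dir/file$i.dat" ) or die "Cannot create: $!";
    close($fh);
}

my $limit = 1000;    # per-run file count, set by variable

# Stream entries one at a time and stop early, instead of reading
# the whole listing into memory at once.
opendir( my $dh, $dir ) or die "Cannot open $dir: $!";
my @files;
while ( defined( my $entry = readdir($dh) ) ) {
    next if $entry eq '.' || $entry eq '..';
    push @files, $entry;
    last if @files >= $limit;
}
closedir($dh);

print scalar(@files), "\n";    # prints 1000
```

Note the `defined(...)` guard: a plain `while (my $entry = readdir($dh))` would stop early on a file literally named "0". Whether stopping after 1000 entries actually avoids the 4-minute cost presumably depends on the filesystem, since some need to walk much of the directory regardless.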