tags:
views: 38
answers: 2

Hi, I have a task where I need to move a bunch of files from one directory to another. I need to move all files that share the same base name (i.e. blah.pdf, blah.txt, blah.html, etc.) at the same time, and I can move a set of these every four minutes. I had a short bash script that moved a single file at a time at these intervals, but the new grouping requirement is throwing me off.

My old script is:
find ./ -maxdepth 1 -type f | while read line; do mv "$line" ~/target_dir/; echo "$line"; sleep 240; done

For the new script, I basically just need to replace find ./ -maxdepth 1 -type f with a list of unique file names without their extensions. I can then just replace do mv "$line" ~/target_dir/; with do mv "$line*" ~/target_dir/;.

So, with all of that said: what's a good way to get a unique list of file names without their extensions in a bash script? I was thinking about using a regex to grab file names and then throwing them in a hash to get uniqueness, but I'm hoping there's an easier/better/quicker way. Ideas?

A: 

A one-liner that tolerates weirdly-named files could be:

find . -maxdepth 1 -type f -and -iname 'blah*' -print0 | xargs -0 -I {} mv {} ~/target/dir

If the files can start with multiple prefixes, you can use logic operators in find. For example, to move blah.* and foo.*, use:

find . -maxdepth 1 -type f -and \( -iname 'blah.*' -or -iname 'foo.*' \) -print0 | xargs -0 -I {} mv {} ~/target/dir
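To see why the -print0/-0 pairing matters, here is a small self-contained sketch: it builds a throwaway directory containing names with spaces (the paths and file names are placeholders, not the asker's real layout) and moves the matching prefixes without mangling them.

```shell
# Sketch: -print0 / xargs -0 keep names with spaces intact.
# src/dst are throwaway temp dirs, not the asker's real paths.
src=$(mktemp -d); dst=$(mktemp -d)
touch "$src/blah one.txt" "$src/foo two.pdf" "$src/keep.log"
find "$src" -maxdepth 1 -type f \( -iname 'blah*' -o -iname 'foo*' \) -print0 \
  | xargs -0 -I {} mv {} "$dst"
ls "$dst"
```

With plain -print and unquoted xargs, "blah one.txt" would be split into two arguments and the mv would fail.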

EDIT

Updated after comment.

Here's how I'd do it:

find ./ -type f -printf '%f\n' | sed 's/\..*//' | sort | uniq | ( while read filename ; do find . -type f -iname "$filename"'*' -exec mv {} /dest/dir \; ; sleep 240; done )

Perhaps it needs some explanation:

  • find ./ -type f -printf '%f\n': find all files and print just their name, followed by a newline. If you don't want to look in subdirectories, this can be substituted by a simple ls;
  • sed 's/\..*//': strip the file extension by removing everything after the first dot. Both foo.tar and foo.tar.gz are transformed into foo;
  • sort | uniq: sort the filenames just found and remove duplicates;
  • (: open a subshell:
    • while read filename: read a line and put it into the $filename variable;
    • find . -type f -iname "$filename"'*' -exec mv {} /dest/dir \;: find in the current directory (find .) all the files (-type f) whose name starts with the value in filename (-iname "$filename"'*', this works also for files containing whitespaces in their name) and execute the mv command on each one (-exec mv {} /dest/dir \;)
    • sleep 240: sleep
  • ): end of subshell.

Add -maxdepth 1 as argument to find as you see fit for your requirements.
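The steps above can also be sketched as a runnable script. The directory names below are stand-ins, the delay is shortened to 0 for illustration, and sort -u is used as a shorthand for sort | uniq:

```shell
# Sketch of the grouped move: strip extensions, dedupe, move each group.
# src/dst are temporary stand-in dirs; DELAY=0 replaces sleep 240.
src=$(mktemp -d); dst=$(mktemp -d); DELAY=0
touch "$src/blah.html" "$src/blah.txt" "$src/foo.jpg" "$src/foo.html"
find "$src" -maxdepth 1 -type f -printf '%f\n' | sed 's/\..*//' | sort -u \
  | while read -r name; do
      # "$name.*" moves blah.html and blah.txt together but not blahblah.txt
      find "$src" -maxdepth 1 -type f -name "$name.*" -exec mv {} "$dst" \;
      sleep "$DELAY"
    done
ls "$dst"
```

Note that matching "$name.*" (with a dot) is slightly stricter than the -iname "$filename"'*' form above: it avoids moving blahblah.txt together with the blah group.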

Giuseppe Cardone
The filename won't always be 'blah*'; that was just an example. They won't be known or hard-coded at all. I need to get a list of all unique file names. So, for a folder that has the files "blah.html, blah.txt, foo.jpg, foo.html," it should output "blah, foo," which I could then pipe to a script that moves blah* and then foo* 4 minutes later. See what I mean?
Eli
I'd edit the answer with another one-liner, but you already answered your own question :)
Giuseppe Cardone
Crud, actually, I just tried it, and it seems like <code>do mv $line*</code> doesn't actually work. Any ideas?
Eli
I edited my answer, I hope this is the solution you are looking for.
Giuseppe Cardone
A: 

Never mind, I'm dumb. There's a uniq command. Duh. The new working script is:

find ./ -maxdepth 1 -type f | sed -e 's/\.[a-zA-Z]*$//' | uniq | while read line; do mv "$line*" ~/target_dir/; echo "$line"; sleep 240; done

EDIT: Forgot close tag on code and a backslash.

Eli
I'm pretty sure you need to put the wildcard outside of the quotes (i.e. `mv "$line"* ...`) to get it to work. Also, you should probably exclude dotfiles (i.e. add `-not -name ".*"` to the find command) to prevent it from finding ".somethingorother" -> "" -> `mv ""* ...` which would be bad.
Gordon Davisson
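For what it's worth, folding both of Gordon Davisson's fixes into that script gives something like the sketch below. The directories here are throwaway demo paths and the 240-second sleep is shortened to 0 for illustration:

```shell
# Sketch: wildcard outside the quotes, dotfiles excluded.
# src/dst are throwaway demo dirs; sleep shortened from 240 to 0.
src=$(mktemp -d); dst=$(mktemp -d)
touch "$src/blah.pdf" "$src/blah.txt" "$src/.hidden"
cd "$src" || exit 1
find ./ -maxdepth 1 -type f -not -name '.*' \
  | sed -e 's/\.[a-zA-Z]*$//' | sort -u \
  | while read -r line; do
      mv "$line"* "$dst"/   # unquoted * so the glob actually expands
      echo "$line"
      sleep 0
    done
ls "$dst"
```

sort -u is also safer than bare uniq here, since uniq only collapses adjacent duplicates and find's output order is not guaranteed.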