tags:
views: 192
answers: 3

The only way I know is:

find /home -xdev -samefile file1

But it's really slow. I would like to find a tool like locate. The real problem comes when you have a lot of files; I suppose the operation is O(n).

A: 

What I'd typically do is: ls -i <file> to get the inode of that file, and then find /dir -type f -inum <inode value> -mount. (You want the -mount to avoid searching on different file systems, which is probably part of your performance issues.)
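
For example, a minimal sketch of that approach (the path and file name /home/user/file1 are hypothetical):

    # Look up the inode with ls -i, then search for other links to it
    # without crossing into other file systems (-mount).
    inode=$(ls -i /home/user/file1 | awk '{print $1}')
    find /home -type f -inum "$inode" -mount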

Other than that, I think that's about it.

John Feminella
He is already using `-xdev`, which is the same as `-mount`. So this is no better.
mark4o
+1  A: 

Here's a way:

  • Use find -printf "%i:\t%p\n" (or similar) to create a listing of all files prefixed by inode, and output it to a temporary file
  • Extract the first field - the inode with ':' appended - and sort to bring duplicates together, then restrict to duplicates using cut -f 1 | sort | uniq -d, and output that to a second temporary file
  • Use fgrep -f to load the second file as a list of strings to search for, and search the first temporary file with it (a sketch of the whole pipeline follows this list)
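
A minimal sketch of those three steps, assuming a hypothetical starting directory /home and temporary file names:

    # 1. List every file as "<inode>:<TAB><path>" into a temporary file.
    find /home -xdev -type f -printf "%i:\t%p\n" > /tmp/inode-list

    # 2. Keep only inodes that occur more than once, i.e. hard-linked files.
    cut -f 1 /tmp/inode-list | sort | uniq -d > /tmp/dup-inodes

    # 3. Print every line of the listing whose inode prefix is in that set.
    fgrep -f /tmp/dup-inodes /tmp/inode-list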

(When I wrote this, I interpreted the question as finding all files which had duplicate inodes. Of course, one could use the output of the first half of this as a kind of index, from inode to path, much like how locate works.)

On my own machine, I use these kinds of files a lot, and keep them sorted. I also have a text indexer application which can then apply binary search to quickly find all lines that have a common prefix. Such a tool ends up being quite useful for jobs like this.
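
The indexer itself isn't shown here, but the standard look(1) utility illustrates the same idea: it binary-searches a sorted file for lines beginning with a given prefix (the file names below are hypothetical, reusing the listing from the steps above):

    # Keep the inode listing sorted in plain byte order so a binary
    # search over it compares the same way (hence LC_ALL=C for sort).
    LC_ALL=C sort /tmp/inode-list > /tmp/inode-index

    # All paths sharing the inode of a given file, found by binary
    # search over the index instead of a full scan of the file system.
    look "$(ls -i /home/user/file1 | awk '{print $1}'):" /tmp/inode-index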

Barry Kelly
+1  A: 
Jörg W Mittag