The only way I know is:
find /home -xdev -samefile file1
But it's really slow. I would like to find a tool like locate.
The real problem comes when you have a lot of files; I suppose the operation is O(n).
What I'd typically do is run ls -i <file> to get the inode of that file, and then find /dir -type f -inum <inode value> -mount. (You want the -mount to avoid searching on different file systems, which is probably part of your performance issue.)
Other than that, I think that's about it.
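The two steps above can be sketched as a short script. This is a minimal demonstration run against a throwaway directory created with mktemp; the directory and file names are hypothetical, used only to show the technique:

```shell
set -e

# Set up a demo: one file plus a hardlink to it.
dir=$(mktemp -d)
touch "$dir/original"
ln "$dir/original" "$dir/copy"

# Step 1: get the inode number of the file (first field of ls -i).
inode=$(ls -i "$dir/original" | awk '{print $1}')

# Step 2: find every file with that inode on the same file system.
# -mount stops find from descending into other mounted file systems,
# where the same inode number would refer to a different file anyway.
find "$dir" -type f -inum "$inode" -mount
```

This should print both paths, since they share one inode. stat -c %i <file> is another way to read the inode if you prefer it over parsing ls -i.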
Here's a way:

1. Run find -printf "%i:\t%p\n" or similar to create a listing of all files prefixed by inode, and output it to a temporary file.
2. Run cut -f 1 | sort | uniq -d on that listing to extract the inodes that occur more than once, and output them to a second temporary file.
3. Use fgrep -f to load the second file as a list of strings to search for, and search the first temporary file with it.

(When I wrote this, I interpreted the question as finding all files which had duplicate inodes. Of course, one could use the output of the first step as a kind of index, from inode to path, much like how locate works.)
On my own machine, I use these kinds of files a lot, and keep them sorted. I also have a text indexer application which can then apply binary search to quickly find all lines that have a common prefix. Such a tool ends up being quite useful for jobs like this.
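The three-step pipeline above can be sketched as follows. Note that -printf is a GNU find extension; the demo directory and temp-file handling are my own additions for illustration:

```shell
set -e

# Demo data: two hardlinks sharing one inode, plus a singly-linked file.
dir=$(mktemp -d)
touch "$dir/a"
ln "$dir/a" "$dir/b"
touch "$dir/c"

listing=$(mktemp)
dups=$(mktemp)

# Step 1: inode-prefixed listing of all files (GNU find -printf).
find "$dir" -type f -printf "%i:\t%p\n" > "$listing"

# Step 2: inodes that appear more than once, i.e. hardlinked files.
cut -f 1 "$listing" | sort | uniq -d > "$dups"

# Step 3: pull the matching paths back out of the listing.
fgrep -f "$dups" "$listing"
```

The final fgrep should print the lines for a and b but not c. Keeping the listing from step 1 around, sorted by inode, gives you the locate-style index mentioned above: later lookups become a search of that file instead of a fresh walk of the file system.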