views: 658
answers: 13
+6  Q: 

Quick ls command

I need to get a listing of a directory that contains about 2 million files, but when I run "ls" on it nothing comes back. I've waited 3 hours. I've tried "ls | tee directory.txt", but that seems to hang forever. I assume the server is doing a lot of inode sorting. Is there any way to speed up the ls command to just get a directory listing of filenames? I don't care about size, dates, permissions or the like at this time.

Thanks, Mark

+1  A: 

How about find ./ -type f (which will find all files in the current directory and any subdirectories)? Take off the -type f to find everything.

Mark Biek
+5  A: 

Try using:

find . -maxdepth 1 -type f

This will only list the files in the top-level directory (-maxdepth 1 keeps find from descending into subdirectories); leave out the -type f argument if you want to list both files and directories.

Ryan Ahearn
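If the goal is still the directory.txt file from the question, the same command can write bare, unsorted filenames straight to a file. A minimal sketch, assuming GNU find (whose -printf option strips the leading "./"):

find . -maxdepth 1 -type f -printf '%f\n' > directory.txt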
+1  A: 

Things to try:

Check whether ls is aliased:

alias ls

Perhaps try find instead?

find . \( -type d -name . -prune \) -o \( -type f -print \)

Hope this helps.

toolkit
+2  A: 

You can redirect output and run the ls process in the background.

ls > myls.txt &

This would allow you to go on about your business while it's running. It wouldn't lock up your shell.

I'm not sure what options there are for running ls and getting less data back; you could always "man ls" to check.

Eric
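To keep an eye on the background job without disturbing it, a couple of stock commands are enough (using the myls.txt name from above):

jobs             # is the background ls still running?
wc -l myls.txt   # how many names have been written so far
                 # (ls may write nothing until its sorting finishes)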
A: 

I'm assuming you are using GNU ls. Try:

\ls

The backslash bypasses the usual alias (typically ls --color=auto) and runs plain ls.

wbkang
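For the record, a few equivalent ways to skip an alias for a single invocation:

\ls           # the backslash suppresses alias expansion for this call
command ls    # the shell's command builtin does the same
/bin/ls       # or run the binary by its full path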
+6  A: 
ls -U

will do the ls without sorting.

Paul Tomblin
Do you know if `ls -U|sort` is faster than `ls`?
User1
I don't know. I doubt it, because sort can't complete until it's seen all the records, whether it's done in a separate program or in `ls`. But the only way to find out is to test it.
Paul Tomblin
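For anyone who wants to run that test, timing each variant with the output discarded (so terminal drawing doesn't skew the numbers) should settle it:

time ls > /dev/null
time ls -U > /dev/null
time ls -U | sort > /dev/null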
A: 

What filesystem type are you using? With millions of small files in one directory, it might be a good idea to use JFS or ReiserFS, which have better performance with many small files.

Tanj
A: 

You should provide information about what operating system and the type of filesystem you are using. On certain flavours of UNIX and certain filesystems you might be able to use the commands "ff" and "ncheck" as alternatives.

tonylo
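For gathering that information, the usual commands are below; df -T is a GNU option, so other systems may need a different flag or a look at the mount table:

uname -a    # operating system and kernel
df -T .     # filesystem type holding the current directory (GNU df)
mount       # or read the filesystem type off the mount table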
+1  A: 

If a process "doesn't come back", I recommend strace to analyze how the process is interacting with the operating system.

In case of ls:

$ strace ls

You would see that it reads all directory entries (getdents(2)) before it outputs anything (the sorting, as already mentioned here).

bene
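To narrow the trace to just that call, or to get a per-syscall time summary, strace's filtering options help. Note that newer kernels use getdents64 rather than getdents, so substitute that name if the first form shows nothing; redirecting ls's own output keeps strace's report (on stderr) readable:

strace -e trace=getdents ls > /dev/null   # only the directory-reading calls
strace -c ls > /dev/null                  # summary table of time per syscall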
A: 

Lots of other good solutions here, but in the interest of completeness:

echo *
jj33
With 2 million files, that is likely to return only a "command line too long" error.
rq
A: 

You can also make use of xargs. Just pipe the output of ls through xargs.

ls | xargs

If that doesn't work, or the find examples above aren't working either, try piping them through xargs; it can help with the memory usage that might be causing your problems.

Jim
A: 

Some followup: You don't mention what OS you're running on, which would help indicate which version of ls you're using. This probably isn't a 'bash' question as much as an ls question. My guess is that you're using GNU ls, which has some features that are useful in some contexts, but kill you on big directories.

GNU ls tries to arrange its output into pretty columns, which means it has to read and lay out all the filenames before printing anything. In a huge directory, this takes time and memory.

To 'fix' this, you can try:

ls -1 # no columns at all

Find BSD ls somewhere (http://www.freebsd.org/cgi/cvsweb.cgi/src/bin/ls/) and use that on your big directories.

Use other tools, such as find

Rich Homolka
A: 

This is probably not a helpful answer, but if you don't have find you may be able to make do with tar:

$ tar cvf /dev/null .

I am told by people older than me that, "back in the day", single-user and recovery environments were a lot more limited than they are nowadays. That's where this trick comes from.

telent
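A rough sketch of turning that into the directory.txt the question asked for; GNU tar sends the -v listing to stdout when the archive itself isn't going to stdout, while some other tars use stderr, so the redirection may need adjusting:

tar cvf /dev/null . > directory.txt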