views:

219

answers:

4

A search uses a recursively defined function that easily throws exceptions. I have tried three ways to handle exceptions:

  1. ignore them with an empty try-catch
  2. add a dummy return value to stop error propagation caused by an exception
  3. throw a specific exception (this part I don't really understand: if I throw an exception, can I force execution to continue elsewhere, instead of unwinding along the path the exception took?)

Some exceptions I do not really care about, like the exception raised when a file is removed during execution (NullPointerException), but some I really do care about, like unknown ones.

Possible exceptions:

    // 1. if a temp file or some other file is removed during execution -> exception
    // 2. if no permission -> exception
    // 3. ? --> exception
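For case 1, a minimal sketch (the class and helper names are mine, not from the question) that treats a directory that vanished mid-walk as empty, instead of letting a NullPointerException escape from the loop:

```java
import java.io.File;

public class VanishTolerant {
    // File.listFiles() returns null when the directory vanished, is not a
    // directory, or cannot be read; treating null as "no children" keeps a
    // later for-each loop from throwing a NullPointerException.
    static File[] childrenOrEmpty(File dir) {
        File[] children = dir.listFiles();
        return (children == null) ? new File[0] : children;
    }
}
```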

The code is very important for the whole program. I earlier added cluttered checks and try-catches, and avoided empty try-catches, but it really blurred the logic. A solid approach here would make the code much easier to maintain later. It was annoying to track random exceptions caused by some random temp-file removal! How would you handle exceptions for the critical part?

Code

import java.io.File;
import java.util.Stack;

public class Find
{
    private Stack<File> fs = new Stack<File>();
    private Stack<File> ds = new Stack<File>();
    public Stack<File> getD() { return ds; }
    public Stack<File> getF() { return fs; }

    public Find(String path)
    {
        // special checks like this, added because of error
        // propagation, make the code cluttered
        if (path == null)
        {
            System.out.println("NULL in Find(path)");
            System.exit(9);
        }
        this.walk(path);
    }

    private void walk(String path)
    {
        File root = new File(path);
        File[] list = root.listFiles(); // null if root is not a readable directory

        //TODO: dangerous with a catch-all try-catch?!
        try {
            for (File f : list) {
                if (f.isDirectory()) {
                    walk(f.getAbsolutePath());
                    ds.push(f);
                }
                else {
                    fs.push(f);
                }
            }
        } catch (Exception e) { e.printStackTrace(); }
    }
}

Code refactored from here.

+1  A: 

Ignoring exceptions with an empty catch block is generally dangerous. You have to make sure the exceptions you catch there have no importance for the execution.

In order to keep the logic of the method clean, you can extract the error-handling code into another method. There you can put all the code necessary to identify the source of the error and record it if required.

catch (Exception e) {
    handleException(e);
}

private void handleException(Exception e) { ... }

If you care about keeping track of the exceptions in your recursion, you can carry a list along in your method parameters to collect exceptions in, and handle them all at once when the execution has completed.

private void walk(String path, List<Exception> listExceptions) {...}

This way you might be able to ignore an error on a subpath while keeping track of it and continuing execution on the rest of your tree.
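A sketch of that idea (the class and parameter names are mine): the recursion carries a problem list, so a failure on one entry is recorded and the walk continues with the remaining entries:

```java
import java.io.File;
import java.io.IOException;
import java.util.List;

public class Walker {
    // Collect problems instead of aborting: each failure is recorded
    // in `problems` and the walk goes on with the rest of the tree.
    static void walk(File dir, List<File> files, List<Exception> problems) {
        File[] children = dir.listFiles();
        if (children == null) { // removed, not a directory, or unreadable
            problems.add(new IOException("cannot list: " + dir));
            return;
        }
        for (File child : children) {
            try {
                if (child.isDirectory()) {
                    walk(child, files, problems);
                } else {
                    files.add(child);
                }
            } catch (SecurityException e) { // skip a forbidden entry, keep going
                problems.add(e);
            }
        }
    }
}
```

The caller inspects `problems` once at the end, instead of being interrupted mid-walk.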

Etienne
+1 interesting idea; yeah, that would keep the code tidier
HH
A: 

Depends on how you want to handle errors.

If you walk a directory of 100 files with your code now, and the second file causes an exception, what happens? Well, you get a stack trace on System.err, the walk method terminates, and nothing more happens. Find.getF() will contain only the first file, and the rest of your program won't know that something went wrong.

That's probably not how you want it?

If you know you don't care about some errors (like file not found), then put a try/catch block for them inside your loop. In the catch body, just log what exactly went wrong with that specific file, then go on with the loop. Often you don't want to log complete stack traces here; a single line is enough.

If you want to handle unexpected Exceptions somehow, then first decide how you want to handle them (just log and go on with next file? send an email? show user dialog? terminate program?), and then decide which class should have responsibility for handling unexpected errors.

If you find that the caller of your class should handle unexpected errors, then catching and rethrowing an exception of your own is a good way to tell the caller that something went wrong. If you decide that your Find class should handle unexpected errors, then put the handling into your catch block. If you want the loop to continue even after unexpected errors, remove your outer try/catch and do all catching inside the loop.
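The two sides of that advice can be sketched together (the class name and the `WalkFailedException` wrapper are mine, not from the answer): expected per-file problems are logged in one line inside the loop, while a directory that cannot be listed at all is reported to the caller by throwing our own exception:

```java
import java.io.File;

public class SafeWalk {
    // Hypothetical wrapper type: tells the caller that the walk itself
    // failed, as opposed to a single file merely being skipped.
    public static class WalkFailedException extends RuntimeException {
        public WalkFailedException(String msg) { super(msg); }
    }

    // Counts regular files under dir.
    public static int countFiles(File dir) {
        File[] children = dir.listFiles();
        if (children == null) {
            // the whole listing failed: rethrow to the caller
            throw new WalkFailedException("cannot list " + dir);
        }
        int n = 0;
        for (File child : children) {
            try {
                if (child.isDirectory()) {
                    n += countFiles(child);
                } else {
                    n++;
                }
            } catch (SecurityException | WalkFailedException e) {
                // expected, per-entry problem: one log line, then continue
                System.err.println("skipping " + child + ": " + e.getMessage());
            }
        }
        return n;
    }
}
```

Note the catch sits inside the loop, so one bad entry does not terminate the walk; only a failure at the root reaches the caller.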

deadsven
+4  A: 

This is the most readable I can make the code:

import java.util.*;
import java.io.*;

public class Find {
    List<File> files = new ArrayList<File>();
    List<File> dirs = new ArrayList<File>();
    List<Exception> excs = new ArrayList<Exception>();
    public Find(String path) {
        walk(new File(path));
    }
    void walk(File root) {
        for (File child : getChildren(root)) {
            if (isDirectory(child)) {
                dirs.add(child);
                walk(child);
            } else if (isFile(child)){
                files.add(child);
            }
        }
    }

    boolean isDirectory(File f) {
        try {
            return f.isDirectory();
        } catch (SecurityException e) {
            excs.add(e);
            return false;
        }
    }
    boolean isFile(File f) {
        try {
            return f.isFile();
        } catch (SecurityException e) {
            excs.add(e);
            return false;
        }
    }
    List<File> getChildren(File root) {
        File[] children;
        try {
            children = root.listFiles();
        } catch (SecurityException e) {
            excs.add(e);
            return Collections.emptyList();
        }
        if (children == null) {
            excs.add(new IOException("IOException|listFile|" + root));
            return Collections.emptyList();
        }
        return Arrays.asList(children);
    }
}

Here are some key observations:

  • No need to check if path is null
    • File(String pathname) throws NullPointerException if pathname == null
  • There is no need to go from String to File to String etc. like the original code does.
    • Just work on File instead
  • Effective Java 2nd Edition Item 25: Prefer lists to arrays
  • The potentially throwing File methods are encapsulated into non-throwing helper methods
    • The main recursive logic stays clean that way
  • File.listFiles(), File.isFile() and File.isDirectory(), each throws SecurityException
    • It turns out that instead of throwing an IOException, listFiles() returns null
      • This is manually translated into an IOException
  • If there's any exception caught, just return something that wouldn't interfere with walk
    • an empty list from getChildren()
    • false from isFile(File) and isDirectory(File)
  • catch (Exception e) is bad in general, so we only catch (SecurityException e)
  • Instead of excs.add, you could use a logging framework to log the exception
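For that last point, a minimal java.util.logging sketch (the class and method names are mine) that replaces excs.add(e) with a single warning line, keeping the full stack trace at a lower level:

```java
import java.io.File;
import java.util.logging.Level;
import java.util.logging.Logger;

public class WalkLog {
    // One logger per class is the usual java.util.logging convention.
    private static final Logger LOG = Logger.getLogger(WalkLog.class.getName());

    // Drop-in replacement for excs.add(e): one warning line with context;
    // the complete stack trace only shows up when FINE logging is enabled.
    static void report(File where, Exception e) {
        LOG.log(Level.WARNING, "skipping {0}: {1}", new Object[]{where, e});
        LOG.log(Level.FINE, "full trace", e);
    }
}
```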
polygenelubricants
You may be interested in my comparison between this code and equivalent code under the Apache license. They used a LinkedList :D
HH
A: 

Comparison between the Poly-SO mix code and Apache Commons FileUtils: iterateFiles and listFiles

Apache commons-io has similar methods, iterateFiles and listFiles, in FileUtils, as suggested by Bozho. Parameter checks are done in a variety of ways, but never with "System.exit(9)"! They compare parameters to null, check that a file exists using the methods File itself provides, and they use a static LinkedList in the listFiles implementation -- lists over arrays, as suggested in the book cited by poly.

They reuse a constant field to match all directories:

TrueFileFilter.INSTANCE -- the singleton instance of the "true" file filter (from the Apache API)

These two methods are the only ones that take an IOFileFilter as a parameter. I am uncertain of the implications, but they can surely reuse their code.

There are some very succinct -- I think nice -- evaluation points, with no blurring dummy variables. See the evaluation of (a ? b : c); it saves silly dummy variables and if-clauses.

    return listFiles(directory, filter,
        (recursive ? TrueFileFilter.INSTANCE : FalseFileFilter.INSTANCE));

The FileUtils class in which the methods reside has only 4 fields -- about 2.5 methods per field! Now I am ashamed of my own class. A striking difference is the use of exceptions: they use them, but -- apparently due to the different goals of the FileUtils class -- they leave them for the user to handle. No centralised collection in a list, no extra declarations.

Summary

  • Similarity: LinkedList and List
  • Differences: fewer initialisations, fewer declarations, a smaller ratio of fields to methods -- succinctness
  • Different goals: the SO class is an end-user case, FileUtils is more back-end
  • Difference (natural): exception handling in the SO code but not in FileUtils (perhaps that is why it is so clean)

I liked the comments and particularly reading the source -- much better for educational purposes than reading trivial API docs, I feel. Hope you do too :)

Apache Commons: FileUtils.java, listFiles, iterateFiles - code pieces

/**
 * Finds files within a given directory (and optionally its
 * subdirectories). All files found are filtered by an IOFileFilter.
 * <p>
 * If your search should recurse into subdirectories you can pass in
 * an IOFileFilter for directories. You don't need to bind a
 * DirectoryFileFilter (via logical AND) to this filter. This method does
 * that for you.
 * <p>
 * An example: If you want to search through all directories called
 * "temp" you pass in <code>FileFilterUtils.NameFileFilter("temp")</code>
 * <p>
 * Another common usage of this method is find files in a directory
 * tree but ignoring the directories generated by CVS. You can simply pass
 * in <code>FileFilterUtils.makeCVSAware(null)</code>.
 *
 * @param directory  the directory to search in
 * @param fileFilter  filter to apply when finding files.
 * @param dirFilter  optional filter to apply when finding subdirectories.
 * If this parameter is <code>null</code>, subdirectories will not be included in the
 * search. Use TrueFileFilter.INSTANCE to match all directories.
 * @return an collection of java.io.File with the matching files
 * @see org.apache.commons.io.filefilter.FileFilterUtils
 * @see org.apache.commons.io.filefilter.NameFileFilter
 */
public static Collection listFiles(
        File directory, IOFileFilter fileFilter, IOFileFilter dirFilter) {
    if (!directory.isDirectory()) {
        throw new IllegalArgumentException(
                "Parameter 'directory' is not a directory");
    }
    if (fileFilter == null) {
        throw new NullPointerException("Parameter 'fileFilter' is null");
    }

    //Setup effective file filter
    IOFileFilter effFileFilter = FileFilterUtils.andFileFilter(fileFilter,
        FileFilterUtils.notFileFilter(DirectoryFileFilter.INSTANCE));

    //Setup effective directory filter
    IOFileFilter effDirFilter;
    if (dirFilter == null) {
        effDirFilter = FalseFileFilter.INSTANCE;
    } else {
        effDirFilter = FileFilterUtils.andFileFilter(dirFilter,
            DirectoryFileFilter.INSTANCE);
    }

    //Find files
    Collection files = new java.util.LinkedList();
    innerListFiles(files, directory,
        FileFilterUtils.orFileFilter(effFileFilter, effDirFilter));
    return files;
}


/**
 * Allows iteration over the files in given directory (and optionally
 * its subdirectories).
 * <p>
 * All files found are filtered by an IOFileFilter. This method is
 * based on {@link #listFiles(File, IOFileFilter, IOFileFilter)}.
 *
 * @param directory  the directory to search in
 * @param fileFilter  filter to apply when finding files.
 * @param dirFilter  optional filter to apply when finding subdirectories.
 * If this parameter is <code>null</code>, subdirectories will not be included in the
 * search. Use TrueFileFilter.INSTANCE to match all directories.
 * @return an iterator of java.io.File for the matching files
 * @see org.apache.commons.io.filefilter.FileFilterUtils
 * @see org.apache.commons.io.filefilter.NameFileFilter
 * @since Commons IO 1.2
 */
public static Iterator iterateFiles(
        File directory, IOFileFilter fileFilter, IOFileFilter dirFilter) {
    return listFiles(directory, fileFilter, dirFilter).iterator();
}

//*Cut out a part***//

/**
 * Finds files within a given directory (and optionally its subdirectories)
 * which match an array of extensions.
 *
 * @param directory  the directory to search in
 * @param extensions  an array of extensions, ex. {"java","xml"}. If this
 * parameter is <code>null</code>, all files are returned.
 * @param recursive  if true all subdirectories are searched as well
 * @return an collection of java.io.File with the matching files
 */
public static Collection listFiles(
        File directory, String[] extensions, boolean recursive) {
    IOFileFilter filter;
    if (extensions == null) {
        filter = TrueFileFilter.INSTANCE;
    } else {
        String[] suffixes = toSuffixes(extensions);
        filter = new SuffixFileFilter(suffixes);
    }
    return listFiles(directory, filter,
        (recursive ? TrueFileFilter.INSTANCE : FalseFileFilter.INSTANCE));
}

/**
 * Allows iteration over the files in a given directory (and optionally
 * its subdirectories) which match an array of extensions. This method
 * is based on {@link #listFiles(File, String[], boolean)}.
 *
 * @param directory  the directory to search in
 * @param extensions  an array of extensions, ex. {"java","xml"}. If this
 * parameter is <code>null</code>, all files are returned.
 * @param recursive  if true all subdirectories are searched as well
 * @return an iterator of java.io.File with the matching files
 * @since Commons IO 1.2
 */
public static Iterator iterateFiles(
        File directory, String[] extensions, boolean recursive) {
    return listFiles(directory, extensions, recursive).iterator();
}
HH
Nice report! +1!
polygenelubricants