views: 135
answers: 4

What is the best/easiest way to sort a large list of words (10,000-20,000) by the number of times they occur in the list, in Java? I tried a basic implementation, but I get an out-of-memory runtime error, so I need a more efficient way. What would you suggest?

    ArrayList<String> occuringWords = new ArrayList<String>();
    ArrayList<Integer> numberOccur = new ArrayList<Integer>();
    String temp;
    int count;
    for(int i = 0; i < finalWords.size(); i++){
        temp = finalWords.get(i);
        count = 0;
        for(int j = 0; j < finalWords.size(); j++){
            if(temp.equals(finalWords.get(j))){
                count++;
                finalWords.remove(j);
                j--;
            }
        }
        if(numberOccur.size() == 0){
            numberOccur.add(count);
            occuringWords.add(temp);
        }else{
            for(int j = 0; j < numberOccur.size(); j++){
                if(count > numberOccur.get(j)){
                    numberOccur.add(j, count);
                    occuringWords.add(j, temp);
                }
            }
        }
    }

Where finalWords is the list of all of the Strings. I had to store the number of times each word occurred in a separate ArrayList because I couldn't think of a better way to keep them paired without making each word a separate object.

+9  A: 

Build a HashMap<String, Integer> mapping words to the number of occurrences. The first time you see a word, add it to the map and set its count to 1. Each time you see it thereafter, increment its count.

This will be much faster since you will only have to iterate over the list of words once. It's the difference between O(n) and O(n²), which for a large dictionary will be a tremendous difference.

At the end you can then take the list of words and sort them by count. You'll have to take them out of the map and add them to a separate data structure to do this. (Hint: you could use a TreeSet with a custom Comparator which compares words based on their frequency. Or, less elegantly, add them to a List and then sort that list, again with a custom Comparator.)
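For illustration only (this sketch is not from the answer, and the class and method names are made up), the counting pass plus the TreeSet hint could look something like this:

    import java.util.*;

    public class WordFrequency {
        // Count each word, then order the (word, count) entries by frequency.
        public static List<Map.Entry<String, Integer>> byFrequency(List<String> words) {
            Map<String, Integer> counts = new HashMap<String, Integer>();
            for (String word : words) {
                Integer c = counts.get(word);
                counts.put(word, c == null ? 1 : c + 1);   // single pass over the list: O(n)
            }

            // The TreeSet keeps entries ordered by the comparator; ties are broken
            // on the word itself so that equally frequent words are not discarded.
            TreeSet<Map.Entry<String, Integer>> sorted =
                new TreeSet<Map.Entry<String, Integer>>(new Comparator<Map.Entry<String, Integer>>() {
                    public int compare(Map.Entry<String, Integer> a, Map.Entry<String, Integer> b) {
                        int byCount = b.getValue().compareTo(a.getValue()); // most frequent first
                        return byCount != 0 ? byCount : a.getKey().compareTo(b.getKey());
                    }
                });
            sorted.addAll(counts.entrySet());
            return new ArrayList<Map.Entry<String, Integer>>(sorted);
        }
    }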

John Kugelman
And if you are still running out of memory, try and see if you can give your JVM more RAM. Use the -Xmx and -Xms options for maximum and initial memory. Just because you get an OutOfMemoryError does not mean you are out of physical memory.
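For example, the flags are passed on the java command line like this (the heap sizes and the class name WordCounter are purely illustrative):

    java -Xms128m -Xmx512m WordCounter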
phisch
@John Kugelman: how do you sort a Map<String,Integer> on its value?
Webinator
@Wizard: Iterate your Map<String,Integer> and add them to a Map<Integer,String> with the count as the key. Then iterate the resulting map by key.
Software Monkey
Build a list based on Map.entrySet() and sort it with a custom comparator.
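A minimal sketch of that idea (illustrative only; it assumes a wordCounts map built as in the answer above, plus the usual java.util imports):

    // Sort a word-count map's entries by value, highest count first.
    static List<Map.Entry<String, Integer>> sortByCount(Map<String, Integer> wordCounts) {
        List<Map.Entry<String, Integer>> entries =
                new ArrayList<Map.Entry<String, Integer>>(wordCounts.entrySet());
        Collections.sort(entries, new Comparator<Map.Entry<String, Integer>>() {
            public int compare(Map.Entry<String, Integer> a, Map.Entry<String, Integer> b) {
                return b.getValue().compareTo(a.getValue());
            }
        });
        return entries;
    }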
Konrad Garus
@John Kugelman: lol, you keep modifying your answer, making my comment invalid. My point was: your first answer was incomplete because you didn't talk about the sorting part ;)
Webinator
@Konrad Garus: or use another data structure altogether, like a Tree-bidir map. But my point was that the first answer was rather "sparse"; I suppose it's an SO trick to get upvotes faster than other answers: you start by answering with one line, then add another one, then another one, etc.
Webinator
That's why we get an edit button, so we can improve our answers in response to comments. :-)
John Kugelman
Instead of rolling your own code to add to the map you can use google collections multiset (http://google-collections.googlecode.com/svn/trunk/javadoc/index.html?com/google/common/collect/Multiset.html). Then you can just sort it. An example of that is at http://philippeadjiman.com/blog/2010/02/20/a-generic-method-for-sorting-google-collections-multiset-per-entry-count/.
Carnell
+2  A: 

Why make it so complicated? Basically, you need the following:

  1. Sort the words in place. Equal words will now be grouped together.
  2. Go through the array, counting duplicates, and store the resulting (word, number of occurrences) pairs in another array.
  3. Sort that second array by the number of occurrences.

The complexity is O(n log n).
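A rough, untested sketch of these three steps (this is not code from the answer; it reuses the asker's finalWords list, holds the pairs in AbstractMap.SimpleEntry objects, and assumes the usual java.util imports):

    Collections.sort(finalWords);                        // step 1: equal words become adjacent
    List<Map.Entry<String, Integer>> pairs = new ArrayList<Map.Entry<String, Integer>>();
    int i = 0;
    while (i < finalWords.size()) {                      // step 2: count each run of equal words
        String word = finalWords.get(i);
        int run = 0;
        while (i < finalWords.size() && finalWords.get(i).equals(word)) {
            run++;
            i++;
        }
        pairs.add(new AbstractMap.SimpleEntry<String, Integer>(word, run));
    }
    Collections.sort(pairs, new Comparator<Map.Entry<String, Integer>>() { // step 3: order by count
        public int compare(Map.Entry<String, Integer> a, Map.Entry<String, Integer> b) {
            return b.getValue().compareTo(a.getValue()); // most frequent first
        }
    });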

Vlad
Also a good answer. This could be faster or slower than mine depending on how many duplicates there are. If there are relatively few, then this will be better, as it avoids the extra data structure; if there are many, then mine will eliminate the duplicates before sorting, which will save time.
John Kugelman
A: 
public List<String> countOccurrences(ArrayList<String> list){
  // Count each word (uses java.util: HashMap, LinkedHashMap, TreeSet, ArrayList).
  HashMap<String, Integer> hm = new HashMap<String, Integer>();
  for (String s : list) {
     Integer i = hm.get(s);
     if (i == null){
        i = 0;
     }
     i++;
     hm.put(s, i);
  }

  // Collect the distinct counts in ascending order, then rebuild a map whose
  // insertion order follows those counts. Note that indexOf() finds only the
  // first word with a given count.
  List<String> mapKeys = new ArrayList<String>(hm.keySet());
  List<Integer> mapValues = new ArrayList<Integer>(hm.values());
  LinkedHashMap<String, Integer> sortedMap = new LinkedHashMap<String, Integer>();
  TreeSet<Integer> sortedSet = new TreeSet<Integer>(mapValues);
  Object[] sortedArray = sortedSet.toArray();
  int size = sortedArray.length;
  for (int i = 0; i < size; i++){
     sortedMap.put(mapKeys.get(mapValues.indexOf(sortedArray[i])),
                   (Integer) sortedArray[i]);
  }
  return new ArrayList<String>(sortedMap.keySet());
}
Paul
PS: I have not tested it... just wrote it out.
Paul
+3  A: 

The Multiset from Google Collections is what you are looking for. That data structure is built exactly to support your use case. All you need to do is populate it with your words; it will maintain the frequencies for you.
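A small sketch of how that might look (illustrative only; it assumes the Google Collections / Guava jar is on the classpath, with imports of com.google.common.collect.HashMultiset, com.google.common.collect.Multiset, and the usual java.util classes):

    Multiset<String> counts = HashMultiset.create();
    counts.addAll(finalWords);                       // counting is a single pass

    // counts.count("someWord") now returns its frequency. To order the words by
    // frequency, sort the multiset's entries with a custom Comparator.
    List<Multiset.Entry<String>> entries =
            new ArrayList<Multiset.Entry<String>>(counts.entrySet());
    Collections.sort(entries, new Comparator<Multiset.Entry<String>>() {
        public int compare(Multiset.Entry<String> a, Multiset.Entry<String> b) {
            return b.getCount() - a.getCount();      // most frequent first
        }
    });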

Pangea
+1 Agree on the simple solution.
gpampara
+1 Google Collections, although now it's included in Google Guava: http://code.google.com/p/google-collections/ and http://code.google.com/p/guava-libraries/
volothamp