What is the best way to find the total number of words in a text file in Java? I'm thinking Perl is best at finding things like this. If that's true, would calling a Perl function from within Java be the best approach? What would you do in a situation like this? Any better ideas?
I'd initialize a word_count
int to 1, then loop through each character in the file and increment word_count
at every whitespace character (a space, tab, or newline) unless the previous character was also whitespace.
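A minimal sketch of that character-loop idea (the class name and file handling here are illustrative; note it counts word *starts* rather than whitespace runs, which avoids overcounting when the file is empty or ends in whitespace):

```java
import java.io.FileReader;
import java.io.IOException;
import java.io.Reader;

public class CharLoopWordCount {
    // Counts words by detecting transitions from whitespace to non-whitespace.
    static int countWords(Reader reader) throws IOException {
        int count = 0;
        boolean prevWasSpace = true; // treat start-of-input as whitespace
        int c;
        while ((c = reader.read()) != -1) {
            boolean isSpace = Character.isWhitespace(c);
            if (prevWasSpace && !isSpace) {
                count++; // a new word starts here
            }
            prevWasSpace = isSpace;
        }
        return count;
    }

    public static void main(String[] args) throws IOException {
        try (Reader r = new FileReader(args[0])) {
            System.out.println(countWords(r));
        }
    }
}
```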
Making some assumptions about what defines a 'word', one solution would be to open the file using a text stream reader and scan it, counting runs of whitespace characters (contiguous whitespace counts once), plus one for the end, e.g.
this is some sample text
this is some more sample text
The text above would have 11 words in it, counted as 9 spaces plus 1 newline plus 1 end-of-file.
While Perl can do this, I'd consider it overkill to link it in / call it for this kind of task (unless you already have that tested out).
- My suggestion would be to look for & learn from code that does what you need on the web, e.g. here: http://schmidt.devlib.org/java/word-count.html
import java.io.File;
import java.io.FileNotFoundException;
import java.util.Scanner;

// Scanner's default delimiter is whitespace, so each token is a "word".
int count = 0;
Scanner sc = new Scanner(new File("my-text-file.txt"));
while (sc.hasNext()) {
    ++count;
    sc.next();
}
sc.close();
Congratulations, you have stumbled upon one of the biggest linguistic problems: what is a word? It has been said that "word" is the only word that actually means what it is. There is an entire field of linguistics devoted to words and units of meaning - morphology.
I assume that your question pertains to counting words in English. However, creating a language-neutral word counter/parser is next to impossible due to linguistic differences. For example, one might think that just processing the groups of characters separated by whitespace is sufficient. However, the following example in Japanese shows that this approach does not work:
これは日本語の例文です。
This example contains 3 distinct words, and none of them are separated by spaces. Typically, Japanese word boundaries are found using a dictionary-based approach, and there are a number of commercial libraries available for this. Aren't we lucky to have spaces in English! I believe that the Indic languages, Chinese, and Korean have similar problems.
If this solution is ever going to be deployed where multilingual input is possible, it will be important to be able to plug in different word-counting methods depending on the language being parsed.
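One way to sketch that kind of pluggable design (the interface and class names here are hypothetical, not from any particular library):

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical strategy interface: one word counter per language.
interface WordCounter {
    int countWords(String text);
}

// Whitespace-based counter, suitable for English and similar languages.
class WhitespaceWordCounter implements WordCounter {
    public int countWords(String text) {
        int count = 0;
        boolean prevWasSpace = true;
        for (int i = 0; i < text.length(); i++) {
            boolean isSpace = Character.isWhitespace(text.charAt(i));
            if (prevWasSpace && !isSpace) {
                count++; // a new word starts here
            }
            prevWasSpace = isSpace;
        }
        return count;
    }
}

public class WordCountRegistry {
    private final Map<String, WordCounter> countersByLanguage = new HashMap<>();
    private final WordCounter fallback = new WhitespaceWordCounter();

    public void register(String languageTag, WordCounter counter) {
        countersByLanguage.put(languageTag, counter);
    }

    public int countWords(String languageTag, String text) {
        return countersByLanguage.getOrDefault(languageTag, fallback).countWords(text);
    }
}
```

A Japanese counter backed by a dictionary-based tokenizer could then be registered under "ja" without touching any of the calling code.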
I think the first answer was a good one because it uses Java's knowledge of Unicode whitespace values as delimiters: Scanner tokenizes by matching the regex \p{javaWhitespace}+
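To see that delimiter pattern in action outside of Scanner, here is a small standalone sketch (the class and method names are mine):

```java
import java.util.regex.Pattern;

public class WhitespaceSplitDemo {
    // Scanner's default delimiter pattern is \p{javaWhitespace}+, which matches
    // any run of characters for which Character.isWhitespace returns true.
    static final Pattern WHITESPACE = Pattern.compile("\\p{javaWhitespace}+");

    static int countTokens(String text) {
        String trimmed = text.trim();
        if (trimmed.isEmpty()) {
            return 0;
        }
        return WHITESPACE.split(trimmed).length;
    }

    public static void main(String[] args) {
        System.out.println(countTokens("this is some sample text")); // 5
    }
}
```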