I just learned about Java's Scanner class and now I'm wondering how it compares with or competes against StringTokenizer and String.split(). I know that StringTokenizer and String.split() only work on Strings, so why would I want to use a Scanner for a String? Is Scanner just intended to be one-stop shopping for splitting?
They're essentially horses for courses.
Scanner is designed for cases where you need to parse a string, pulling out data of different types. It's very flexible, but arguably doesn't give you the simplest API for simply getting an array of strings delimited by a particular expression.
String.split() and Pattern.split() give you an easy syntax for doing the latter, but that's essentially all that they do. If you want to parse the resulting strings, or change the delimiter halfway through depending on a particular token, they won't help you with that.
StringTokenizer is even more restrictive than String.split(), and also a bit fiddlier to use. It is essentially designed for pulling out tokens delimited by fixed substrings. Because of this restriction, it's about twice as fast as String.split(). (See my comparison of String.split() and StringTokenizer.) It also predates the regular expressions API, of which String.split() is a part.
You'll note from my timings that String.split() can still tokenize thousands of strings in a few milliseconds on a typical machine. In addition, it has the advantage over StringTokenizer that it gives you the output as a String array, which is usually what you want. Using an Enumeration, as provided by StringTokenizer, is too "syntactically fussy" most of the time. So from this point of view, StringTokenizer is a bit of a waste of space nowadays, and you may as well just use String.split().
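To make the difference in idiom concrete, here is a small self-contained sketch (the sample string is made up) that tokenizes the same comma-separated input both ways:

import java.util.StringTokenizer;

public class TokenizeDemo {
    public static void main(String[] args) {
        String csv = "alpha,beta,gamma";

        // StringTokenizer: fixed-substring delimiters, enumeration-style iteration
        StringTokenizer st = new StringTokenizer(csv, ",");
        while (st.hasMoreTokens()) {
            System.out.println(st.nextToken());
        }

        // String.split: one call, regex delimiter, result is a String[]
        for (String part : csv.split(",")) {
            System.out.println(part);
        }
    }
}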
If you have a String object you want to tokenize, favor using String's split method over a StringTokenizer. If you're parsing text data from a source outside your program, like from a file, or from the user, that's where a Scanner comes in handy.
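For instance, a minimal sketch of the "from the user" case (the prompts and variable names are just illustrative):

import java.util.Scanner;

public class ConsoleInputDemo {
    public static void main(String[] args) {
        Scanner in = new Scanner(System.in);   // external source: the user's keyboard

        System.out.print("Enter your name: ");
        String name = in.nextLine();

        System.out.print("Enter your age: ");
        int age = in.nextInt();

        System.out.println(name + " is " + age);
        in.close();
    }
}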
StringTokenizer was always there. It is the fastest of all, but the enumeration-like idiom might not look as elegant as the others.
split() came into existence in JDK 1.4. It is slower than StringTokenizer but easier to use, since it is callable directly on the String class.
Scanner arrived in JDK 1.5. It is the most flexible of the three and fills a long-standing gap in the Java API by supporting an equivalent of C's famous scanf function family.
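For instance, a rough sketch of the scanf-style use (the input string and variable names are made up; the explicit locale just makes the decimal point parse predictably):

import java.util.Locale;
import java.util.Scanner;

public class ScanfStyleDemo {
    public static void main(String[] args) {
        // Roughly the Java counterpart of sscanf("item42 3.14 true", "%s %lf %s", ...)
        Scanner sc = new Scanner("item42 3.14 true").useLocale(Locale.US);

        String label = sc.next();        // "item42"
        double value = sc.nextDouble();  // 3.14
        boolean flag = sc.nextBoolean(); // true

        System.out.println(label + " " + value + " " + flag);
        sc.close();
    }
}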
Let's start by eliminating StringTokenizer. It is getting old and doesn't even support regular expressions. Its documentation states:

    StringTokenizer is a legacy class that is retained for compatibility reasons although its use is discouraged in new code. It is recommended that anyone seeking this functionality use the split method of String or the java.util.regex package instead.

So let's throw it out right away. That leaves split() and Scanner. What's the difference between them?

For one thing, split() simply returns an array, which makes it easy to use a foreach loop:
for (String token : input.split("\\s+")) { ... }
Scanner is built more like a stream:
while (myScanner.hasNext()) {
    String token = myScanner.next();
    ...
}
or
while (myScanner.hasNextDouble()) {
    double token = myScanner.nextDouble();
    ...
}
(It has a rather large API, so don't think that it's always restricted to such simple things.)
This stream-style interface can be useful for parsing simple text files or console input, when you don't have (or can't get) all the input before starting to parse.
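For example, a small sketch that sums whitespace-separated numbers from a hypothetical file called numbers.txt, consuming them as they are read rather than loading everything first:

import java.io.File;
import java.io.FileNotFoundException;
import java.util.Scanner;

public class SumFileDemo {
    public static void main(String[] args) throws FileNotFoundException {
        Scanner fileScanner = new Scanner(new File("numbers.txt")); // hypothetical input file
        double sum = 0;
        while (fileScanner.hasNextDouble()) {
            sum += fileScanner.nextDouble();
        }
        fileScanner.close();
        System.out.println("Sum: " + sum);
    }
}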
Personally, the only time I can remember using Scanner is for school projects, when I had to get user input from the command line. It makes that sort of operation easy. But if I have a String that I want to split up, it's almost a no-brainer to go with split().
I am currently using split() to scan through a file where each line has a number of strings delimited by ~. I read somewhere that Scanner could do a better job performance-wise with a long file, so I thought about checking it out. But my question is: would I have to create two instances of Scanner, one to read a line and one on that line to get the tokens for the delimiter? If I have to do so, I doubt I would get any advantage from using it. Maybe I am missing something here?
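Concretely, the two-Scanner setup I have in mind would look something like this (the file name is made up, and whether it actually beats split() would need to be measured):

import java.io.File;
import java.io.FileNotFoundException;
import java.util.Scanner;

public class TildeFileDemo {
    public static void main(String[] args) throws FileNotFoundException {
        Scanner lines = new Scanner(new File("records.txt")); // hypothetical input file
        while (lines.hasNextLine()) {
            // second Scanner per line, delimited on '~' instead of whitespace
            Scanner fields = new Scanner(lines.nextLine()).useDelimiter("~");
            while (fields.hasNext()) {
                System.out.println(fields.next());
            }
            fields.close();
        }
        lines.close();
    }
}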
String.split seems to be much slower than StringTokenizer. The only advantages of split are that you get an array of the tokens and that you can use any regular expression as the delimiter. org.apache.commons.lang.StringUtils has a split method which works much faster than either StringTokenizer or String.split. However, the CPU utilization for all three is nearly the same, so we also need a method that is less CPU-intensive, which I am still not able to find.
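As a rough, unscientific way to compare the first two on a given machine, something like the following crude timing loop can be used (the test string and iteration count are arbitrary, and it ignores JIT warm-up, so treat the numbers as a sketch rather than a proper benchmark):

import java.util.StringTokenizer;

public class RoughTimingDemo {
    public static void main(String[] args) {
        String line = "a~b~c~d~e~f~g~h";
        int iterations = 1_000_000;
        long tokens = 0;

        long start = System.nanoTime();
        for (int i = 0; i < iterations; i++) {
            StringTokenizer st = new StringTokenizer(line, "~");
            while (st.hasMoreTokens()) {
                st.nextToken();
                tokens++;
            }
        }
        System.out.println("StringTokenizer: " + (System.nanoTime() - start) / 1_000_000 + " ms");

        start = System.nanoTime();
        for (int i = 0; i < iterations; i++) {
            tokens += line.split("~").length;
        }
        System.out.println("String.split:    " + (System.nanoTime() - start) / 1_000_000 + " ms");

        System.out.println("(tokens counted: " + tokens + ")");
    }
}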