
Just for my own purposes, I'm trying to build a tokenizer in Java where I can define a regular grammar and have it tokenize input based on that. The StringTokenizer class is a legacy class whose use is discouraged, and I've found a couple of functions in Scanner that hint at what I want to do, but no luck yet. Does anyone know a good way of going about this?

+2  A: 

If I understand your question correctly, here are two example methods for tokenizing a string. You don't even need the Scanner class unless you want to convert the tokens as you read them, or iterate through them more flexibly than an array allows. If an array is enough, just use String.split() as shown below.

Please give more requirements to enable more precise answers.

import java.util.Scanner;

public class Main {

    public static void main(String[] args) {
        String textToTokenize = "This is a text that will be tokenized. I will use 1-2 methods.";

        // Method 1: Scanner with a regex delimiter
        Scanner scanner = new Scanner(textToTokenize);
        scanner.useDelimiter("i.");
        while (scanner.hasNext()) {
            System.out.println(scanner.next());
        }

        System.out.println(" **************** ");

        // Method 2: String.split() with the same regex delimiter
        String[] sSplit = textToTokenize.split("i.");
        for (String token : sSplit) {
            System.out.println(token);
        }
    }
}
Balint Pato
Yeah, I should have elaborated more. That's helpful for splitting a string **around** matches to a regex, but not for finding the tokens that actually match the regex.
eplawless
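To illustrate the distinction eplawless is drawing: java.util.regex can also return the substrings that match a pattern, rather than the pieces between matches. A minimal sketch (the class name FindTokens and the \w+ pattern are just placeholders, not from the question):

import java.util.ArrayList;
import java.util.List;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class FindTokens
{
  public static void main(String[] args)
  {
    String text = "This is a text that will be tokenized. I will use 1-2 methods.";
    // find() walks the input and returns the substrings that MATCH
    // the pattern, rather than the substrings BETWEEN matches as
    // split() does.
    Matcher m = Pattern.compile("\\w+").matcher(text);
    List<String> tokens = new ArrayList<String>();
    while (m.find())
    {
      tokens.add(m.group());
    }
    System.out.println(tokens);
  }
}

In other words, Matcher.find() plus group() is the mirror image of String.split().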
+2  A: 

If this is for a simple project (for learning how things work), then go with what Balint Pato said.

If this is for a larger project, consider using a scanner generator like JFlex instead. Somewhat more complicated, but faster and more powerful.
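For a flavor of what that looks like, here is a minimal JFlex specification sketched from the conventions in the JFlex manual (the class name SimpleLexer and the token strings are made up for illustration):

// User code section (empty in this sketch)
%%
%class SimpleLexer
%public
%unicode
%type String
%%
[A-Za-z]+      { return "WORD:" + yytext(); }
\"[^\"]*\"     { return "QUOTED:" + yytext(); }
"//"[^\r\n]*   { return "COMMENT:" + yytext(); }
[ \t\r\n]+     { /* skip whitespace */ }
[^]            { return "UNKNOWN:" + yytext(); }

JFlex generates a SimpleLexer class from this; calling its yylex() method repeatedly returns one token per call, and null at end of input given the %type String setting.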

Michael Myers
I would also highly recommend JFlex for anything non-trivial. Writing scanner specifications takes some practice but JFlex has good starter files and is a great skill to acquire.
Josh
+4  A: 

The name "Scanner" is a bit misleading, because the word is often used to mean a lexical analyzer, and that's not what Scanner is for. All it is is a substitute for the scanf() function you find in C, Perl, et al. Like StringTokenizer and split(), it's designed to scan ahead until it finds a match for a given pattern, and whatever it skipped over on the way is returned as a token.

A lexical analyzer, on the other hand, has to examine and classify every character, even if only to decide whether it can safely ignore it. That means that, after each match, it may have to apply several patterns until it finds one that matches starting at exactly that point. Otherwise, it may find the sequence "//" and think it's found the beginning of a comment, when it's really inside a string literal and it simply failed to notice the opening quotation mark.

It's actually much more complicated than that, of course, but I'm just illustrating why the built-in tools like StringTokenizer, split() and Scanner aren't suitable for this kind of task. It is, however, possible to use Java's regex classes for a limited form of lexical analysis. In fact, the addition of the Scanner class made it much easier, because of the new Matcher API that was added to support it, i.e., regions and the usePattern() method. Here's an example of a rudimentary scanner built on top of Java's regex classes.

import java.util.*;
import java.util.regex.*;

public class RETokenizer
{
  static List<Token> tokenize(String source, List<Rule> rules)
  {
    List<Token> tokens = new ArrayList<Token>();
    int pos = 0;
    final int end = source.length();
    // Compile a placeholder pattern just to obtain a Matcher;
    // the real patterns are swapped in below via usePattern().
    Matcher m = Pattern.compile("dummy").matcher(source);
    m.useTransparentBounds(true).useAnchoringBounds(false);
    while (pos < end)
    {
      m.region(pos, end);
      boolean matched = false;
      for (Rule r : rules)
      {
        if (m.usePattern(r.pattern).lookingAt())
        {
          tokens.add(new Token(r.name, m.start(), m.end()));
          pos = m.end();
          matched = true;
          break;
        }
      }
      if (!matched)
      {
        pos++;  // bump-along, in case no rule matched
      }
    }
    return tokens;
  }

  static class Rule
  {
    final String name;
    final Pattern pattern;

    Rule(String name, String regex)
    {
      this.name = name;
      pattern = Pattern.compile(regex);
    }
  }

  static class Token
  {
    final String name;
    final int startPos;
    final int endPos;

    Token(String name, int startPos, int endPos)
    {
      this.name = name;
      this.startPos = startPos;
      this.endPos = endPos;
    }

    @Override
    public String toString()
    {
      return String.format("Token [%2d, %2d, %s]", startPos, endPos, name);
    }
  }

  public static void main(String[] args) throws Exception
  {
    List<Rule> rules = new ArrayList<Rule>();
    rules.add(new Rule("WORD", "[A-Za-z]+"));
    rules.add(new Rule("QUOTED", "\"[^\"]*+\""));
    rules.add(new Rule("COMMENT", "//.*"));
    rules.add(new Rule("WHITESPACE", "\\s+"));

    String str = "foo //in \"comment\"\nbar \"no //comment\" end";
    List<Token> result = RETokenizer.tokenize(str, rules);
    for (Token t : result)
    {
      System.out.println(t);
    }
  }
}
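For reference, running main on that test string should print something like this (note how the "//" inside the string literal ends up inside the QUOTED token, and the quoted word inside the comment ends up inside the COMMENT token):

Token [ 0,  3, WORD]
Token [ 3,  4, WHITESPACE]
Token [ 4, 18, COMMENT]
Token [18, 19, WHITESPACE]
Token [19, 22, WORD]
Token [22, 23, WHITESPACE]
Token [23, 37, QUOTED]
Token [37, 38, WHITESPACE]
Token [38, 41, WORD]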

This, by the way, is the only good use I've ever found for the lookingAt() method. :D

Alan Moore
Your pos < end loop will need to increment pos after the 'for rules' loop in case no rules are matched, right? Otherwise, nice example, and thanks for the lookingAt() suggestion.
Graphain
Good catch. Yes, there should be a 'pos++' right after the for-loop. This may be a bare-bones example with no error checking, but I should at least have made sure it didn't have any potential infinite loops.
Alan Moore
D'oh! I just remembered I can still edit my answer. Gotta retrain my online reflexes. :D
Alan Moore
I really like this approach and used this as an example for my own code just yesterday. I did notice the order of the Rules list can affect results, however. In my solution, I attempt to match on all rules instead of breaking after the first match. Then I select the longest match.
Eric Burke
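For completeness, here is a sketch of the longest-match ("maximal munch") variant Eric describes. It reuses the Rule and Token helper classes and the imports from the answer above; it is not part of the original answer:

  static List<Token> tokenizeLongestMatch(String source, List<Rule> rules)
  {
    List<Token> tokens = new ArrayList<Token>();
    int pos = 0;
    final int end = source.length();
    Matcher m = Pattern.compile("dummy").matcher(source);
    m.useTransparentBounds(true).useAnchoringBounds(false);
    while (pos < end)
    {
      m.region(pos, end);
      Rule bestRule = null;
      int bestEnd = -1;
      // Try every rule at the current position and keep the longest
      // match, instead of stopping at the first rule that matches.
      for (Rule r : rules)
      {
        if (m.usePattern(r.pattern).lookingAt() && m.end() > bestEnd)
        {
          bestRule = r;
          bestEnd = m.end();
        }
      }
      if (bestRule != null)
      {
        tokens.add(new Token(bestRule.name, pos, bestEnd));
        pos = bestEnd;
      }
      else
      {
        pos++;  // no rule matched here; skip one character
      }
    }
    return tokens;
  }

With this variant, rule order only matters as a tie-breaker: when two rules match the same length, the one listed first wins.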
A: 

Most of the answers here are already excellent, but I would be remiss if I didn't point out ANTLR. I've created entire compilers around this excellent tool. Version 3 has some amazing features, and I'd recommend it for any project that requires you to parse input based on a well-defined grammar.
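As a rough illustration, the four token types from Alan Moore's answer might look like this as an ANTLR 3 lexer grammar (a sketch of the v3 syntax; the grammar name is hypothetical and this has not been run through the tool):

lexer grammar SimpleLexer;

WORD       : ('a'..'z' | 'A'..'Z')+ ;
QUOTED     : '"' (~'"')* '"' ;
COMMENT    : '//' ~('\r' | '\n')* ;
WHITESPACE : (' ' | '\t' | '\r' | '\n')+ { $channel = HIDDEN; } ;

ANTLR generates the lexer from this grammar, and adding parser rules turns it into a full parser.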

raiglstorfer