Parser generators don't need a scanner. But you are pretty much crazy if you don't use one.
Parsers built by parser generators don't care what you feed them, as long as you call them tokens.
To use a parser generator without a scanner, simply define your grammar down to the character level and feed individual characters to the parser as tokens.
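For example, with a yacc/bison-style generator, going scannerless amounts to having the "lexer" hand every character straight to the parser and writing the grammar with character literals ('0', '+', and so on) as its terminals. A minimal sketch in C, assuming the usual bison yylex()/token conventions:

    #include <stdio.h>

    /* Sketch: a "scanner" that does no scanning at all.  With a
       bison-style parser, each character code is itself a valid token,
       so the grammar can use 'a', '0', '+', ... directly as terminals. */
    int yylex(void)
    {
        int c = getchar();
        return (c == EOF) ? 0 : c;   /* 0 tells the parser "end of input" */
    }

The grammar then has to spell out the lexical structure itself (a digit rule with ten alternatives, an identifier rule built up character by character, and so on), which is exactly the work a lexer would normally absorb.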
The reason this is crazy is that parsing is a more complex activity than lexing. You can build lexers as finite state machines, which translate to machine code pretty much as "compare character and jump to next state". For speed, that's really hard to beat. Parser generators construct parsers that do recursive descent predictive parsing (most LL generators such as ANTLR) or table-driven parsing with lookups by hashing, binary or linear search, etc. So a parser spends a lot more energy on a token than a lexer spends on a character.
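To make "compare and jump to next state" concrete, here is a hand-rolled sketch of the kind of machine a lexer generator emits, recognizing just identifiers and integers (the names scan, TOK_IDENT, etc. are invented for illustration):

    #include <ctype.h>

    enum token { TOK_IDENT, TOK_NUMBER, TOK_ERROR };

    /* Each state is a comparison or two on the current character
       followed by a jump to the next state; the cost per character
       is a handful of instructions. */
    enum token scan(const char **pp)
    {
        const char *p = *pp;
        enum token kind;

        if (isalpha((unsigned char)*p) || *p == '_') {   /* state: in identifier */
            do { p++; } while (isalnum((unsigned char)*p) || *p == '_');
            kind = TOK_IDENT;
        } else if (isdigit((unsigned char)*p)) {         /* state: in number */
            do { p++; } while (isdigit((unsigned char)*p));
            kind = TOK_NUMBER;
        } else {
            kind = TOK_ERROR;
        }
        *pp = p;                                         /* advance past the lexeme */
        return kind;
    }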
If you feed characters to a parser as tokens, it will spend correspondingly more energy on each character than an equivalent lexer would. If you process a lot of input text, this eventually matters, whether you do it for zillions of small input streams or for a few really large ones.
The so-called scannerless GLR parsers suffer from exactly this performance problem, relative to GLR parsers designed to use tokens.
My company builds a tool, the DMS Software Reengineering Toolkit, which uses a GLR parser (and successfully parses all the common languages you know and a lot of odder ones you don't, because it has a GLR parser). We knew about scannerless parsers and chose not to implement them because of the speed differential; instead we have a classically styled (but extremely powerful) LEX-like subsystem for defining lexical tokens. In the one case where DMS went nose-to-nose against an XT-based tool (XT being a toolset with a scannerless GLR parser) processing the same input, DMS appeared to be 10x as fast as the XT package. To be fair, the experiment was ad hoc and never repeated, but since it matched my suspicions I saw no reason to repeat it. YMMV.
And of course, if we want to go scannerless, well, it's pretty easy to write a grammar with character terminals, as I have already pointed out.
Scannerless GLR parsers do have another very nice property that won't matter to most people: you can take two separate grammars for a scannerless parser, literally concatenate them, and still get a parser (often with a lot of ambiguities). That matters a lot when you are building one language embedded inside another. If that's not what you are doing, it's just an academic curiosity.
And, AFAIK, Elkhound isn't scannerless. (I could be wrong on this).
(EDIT: 2/10: Looks like I was wrong. Won't be the first time in my life :)