LR parsers IMHO can be the fastest. Basically they use the current token as an index into a lookahead set or a transition table to decide what to do next (push a state index, or pop state indexes and call a reduction routine). Converted to machine code this can be just a few instructions per step. Pennello discusses this in detail in his paper:
Thomas J. Pennello: Very fast LR parsing. SIGPLAN Symposium on Compiler Construction 1986: 145-151
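To make the table-driven dispatch concrete, here is a minimal sketch in C of an LR parse loop for the toy grammar S -> '(' S ')' | 'x'. The grammar, the hand-built tables, and all the names are my own, purely for illustration; a real parser generator would emit the tables for you:

```c
#include <stdio.h>

/* Sketch only: hand-built LR tables for the toy grammar
 *   S -> '(' S ')' | 'x'
 * Action encoding: positive = shift to that state, negative = reduce
 * by rule -n, ACC = accept, 0 = syntax error. */

enum { TOK_LP, TOK_RP, TOK_X, TOK_EOF, NTOKENS };
#define ACC 99

static const int action[6][NTOKENS] = {
    /*        (    )    x    $  */
    /* 0 */ { 2,   0,   3,   0   },
    /* 1 */ { 0,   0,   0,   ACC },
    /* 2 */ { 2,   0,   3,   0   },
    /* 3 */ { 0,  -2,   0,  -2   },
    /* 4 */ { 0,   5,   0,   0   },
    /* 5 */ { 0,  -1,   0,  -1   },
};

static const int rule_len[3] = { 0, 3, 1 };        /* states popped per rule  */
static const int goto_S[6]   = { 1, 0, 4, 0, 0, 0 }; /* GOTO on nonterminal S */

static int lex(const char **p) {                   /* trivial tokenizer */
    switch (*(*p)++) {
    case '(': return TOK_LP;
    case ')': return TOK_RP;
    case 'x': return TOK_X;
    default:  return TOK_EOF;
    }
}

int parse(const char *input) {
    int stack[64], sp = 0;
    stack[0] = 0;                                  /* start in state 0 */
    int tok = lex(&input);
    for (;;) {
        int act = action[stack[sp]][tok];          /* one lookup per step */
        if (act == ACC) return 1;                  /* accept */
        if (act > 0) {                             /* shift: push state   */
            stack[++sp] = act;
            tok = lex(&input);
        } else if (act < 0) {                      /* reduce: pop, GOTO   */
            sp -= rule_len[-act];
            stack[sp + 1] = goto_S[stack[sp]];
            ++sp;
        } else {
            return 0;                              /* syntax error */
        }
    }
}

int main(void) {
    printf("((x)) -> %s\n", parse("((x))") ? "accepted" : "rejected");
    printf("((x)  -> %s\n", parse("((x)")  ? "accepted" : "rejected");
    return 0;
}
```

The hot path is a single table lookup per step; Pennello's technique goes further and compiles the tables down to straight-line machine code.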
LL parsers involve recursive calls, which are a bit slower than just plain table lookups, but they can be pretty fast.
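For comparison, here is a sketch of the recursive-descent (LL) counterpart for the same toy grammar S -> '(' S ')' | 'x'; again the names are mine. Each nonterminal becomes a function, and the lookahead character selects the production, so the per-step cost is a function call rather than a table lookup:

```c
#include <stdio.h>

static const char *p;                 /* cursor into the input */

static int parse_S(void) {
    if (*p == '(') {                  /* S -> '(' S ')' */
        ++p;
        if (!parse_S()) return 0;     /* recursive call for nested S */
        if (*p != ')') return 0;
        ++p;
        return 1;
    }
    if (*p == 'x') {                  /* S -> 'x' */
        ++p;
        return 1;
    }
    return 0;                         /* no production matches */
}

int main(void) {
    p = "((x))";
    printf("((x)) -> %s\n", parse_S() && *p == '\0' ? "accepted" : "rejected");
    return 0;
}
```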
GLR parsers are generalizations of LR parsers, and thus have to be slower than LR parsers. A key observation is that most of the time a GLR parser is acting exactly as an LR parser would, and one can make that part run at essentially the same speed as an LR parser, so GLR parsers can be fairly fast.
Your parser is likely to spend more time breaking the input stream into tokens than executing the parsing algorithm, so these differences may not matter a lot.
In terms of getting your grammar into a usable form, the following is the order, easiest first, in which the parsing technologies "make it easy":
* GLR (really easy: if you can write grammar rules, you can parse)
* LR(k) (many grammars fit, but extremely few parser generators support it)
* LR(1) (most commonly available [YACC, Bison, Gold, ...])
* LL (usually requires significant reengineering of the grammar to remove left recursion; see the sketch below)
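To show what that reengineering looks like, here is a sketch (my own toy example) of the classic rewrite: the left-recursive rule Expr -> Expr '+' Term | Term would make a recursive-descent parser call itself immediately and loop forever, so it is rewritten as Expr -> Term { '+' Term } and implemented as iteration:

```c
#include <stdio.h>
#include <ctype.h>

/* Sketch of the left-recursion rewrite an LL parser forces on you.
 * Natural grammar:    Expr -> Expr '+' Term | Term   (left-recursive)
 * Rewritten grammar:  Expr -> Term { '+' Term }      (LL-friendly)   */

static const char *p;

static int parse_Term(void) {         /* Term -> digit, for brevity */
    if (!isdigit((unsigned char)*p)) return 0;
    ++p;
    return 1;
}

static int parse_Expr(void) {         /* Expr -> Term { '+' Term } */
    if (!parse_Term()) return 0;
    while (*p == '+') {               /* iteration replaces left recursion */
        ++p;
        if (!parse_Term()) return 0;
    }
    return 1;
}

int main(void) {
    p = "1+2+3";
    printf("1+2+3 -> %s\n", parse_Expr() && *p == '\0' ? "accepted" : "rejected");
    return 0;
}
```

An LR parser would take the left-recursive rule as written, which is why the grammar-preparation effort for LL tends to be higher.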