You ask a big question, and the philosophical answer is that no two environments are alike, and no two sets of requirements or toolkits are alike. "Fastest" here reflects development time, not runtime performance; that question has to be answered with benchmarking.
Answer: don't commit the sin of optimizing before the program is complete.
Answer #2: but kinda pay attention to performance as you go along.
Ignore the toolkit for a second and look at just PHP and the machine. The best thing you can do is optimize the server first and then pay attention to the rest. At the server level I will address only opcode caching and the memory footprint here, and show the measured before/after trends on a PHP environment.
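If you want to collect the same numbers on your own box, a quick way to get a before/after trend is to log each request's peak memory. A minimal sketch (the log path is just an example; drop the hook wherever your front controller ends):

    <?php
    // Log peak memory per request so before/after runs can be compared.
    register_shutdown_function(function () {
        $peakBytes = memory_get_peak_usage(true); // real (OS-allocated) peak for this request
        $uri = isset($_SERVER['REQUEST_URI']) ? $_SERVER['REQUEST_URI'] : 'cli';
        $line = sprintf("%s %s %.2f MB\n", date('c'), $uri, $peakBytes / 1048576);
        // Hypothetical log location - point it anywhere writable.
        file_put_contents('/tmp/php-mem.log', $line, FILE_APPEND);
    });

Run your usual page mix against the server with and without the cache, and the log gives you the same kind of per-page-type footprint comparison shown below.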
One of the interesting side effects of opcode caching is a smaller memory footprint. That, in turn, gives you room to scale upward: while the machine is being hammered, it has more memory left to handle instantaneous bursts of requests and more time to recover before it starts borrowing from swap.
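You can also ask the cache itself what it is holding. A minimal sketch, assuming PHP 5.5+ with the bundled OPcache (older setups with APC expose similar counters through apc_sma_info()):

    <?php
    // Dump the opcode cache's own memory accounting.
    if (function_exists('opcache_get_status')) {
        $status = opcache_get_status(false); // false = skip the per-script list
        $mem = $status['memory_usage'];
        printf("used: %.1f MB, free: %.1f MB, cached scripts: %d\n",
            $mem['used_memory'] / 1048576,
            $mem['free_memory'] / 1048576,
            $status['opcache_statistics']['num_cached_scripts']);
    } else {
        echo "No opcode cache extension loaded.\n";
    }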
The graph below is a bit confusing (and clipped), but what it shows is the unoptimized vs. optimized memory footprint. The lowest ledge is the post-optimization memory footprint.
The long axis is an abstract page type (home vs. post vs. page, etc.), ordered from common and simple to complex. The other axis runs from caching off (no opcode caching) to caching on (opcode caching enabled).
This illustrates that you can make a great improvement by recompiling PHP/Apache to use opcode caching alone. It's probably the biggest optimization gain for the least amount of effort, and you don't have to be aware that you're using a templating language within a toolkit that runs on an interpreted runtime substituting for C, which is itself a compilation step above machine code. (Insert more hair-splitting nerdiness here...)
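If you'd rather not recompile anything: on any reasonably recent PHP the same effect comes from enabling the bundled OPcache in php.ini. The directive names below are the standard ones; the memory figure is just a starting point to tune for your own box:

    ; php.ini - enable the bundled opcode cache (PHP 5.5+; older stacks used APC instead)
    zend_extension=opcache.so
    opcache.enable=1
    opcache.memory_consumption=128      ; MB of shared memory for cached bytecode
    opcache.max_accelerated_files=4000  ; raise if your toolkit ships more scripts
    opcache.validate_timestamps=1       ; keep rechecking files for changes during development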
After this particular optimization, the machine was able to take far more burst traffic (from 200 requests per hour to 700 easily).
Good luck.