George's slides are definitely a good basis to work from. Note that he isn't talking about a specific technique or technology; rather, he's discussing the more general architectural and design decisions that will help your application scale as a whole.
I personally think this sort of high-level thinking is much more valuable than individual optimisation techniques. Perhaps you could take a well-known web application and hack on it until it scales well across multiple machines? A cluster of cheap, low-power EC2 instances would be ideal for this. Getting an existing (or new) application to run properly across a number of machines would be a fantastic exercise.
Counter-intuitively, rather than getting as much as possible to run on a single machine, I'd say it would be much more educational to get the same application running on several machines.
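The usual first obstacle is state that lives inside a single process: in-memory sessions, files written to local disk, per-process caches. As a rough sketch of the shape of the change (the host name and the choice of memcached via pymemcache are my assumptions here, not anything from George's slides; Redis or a database table would serve the same purpose):

```python
# Sketch only: externalising session state so that any app server
# can handle any request, rather than pinning users to one machine.
from pymemcache.client.base import Client

# BAD: lives in one process's memory, so a request routed to a
# second machine never sees sessions created on the first.
local_sessions = {}

# BETTER: every app server reads and writes the same shared store.
shared_sessions = Client(("sessions.internal", 11211))

def save_session(session_id, data):
    # Values must be bytes/str unless a serializer is configured.
    shared_sessions.set(session_id, data, expire=3600)

def load_session(session_id):
    return shared_sessions.get(session_id)
```

Once nothing important lives on any single box, adding or removing machines becomes routine, and that's the lesson worth learning first.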
Once you have that, it makes sense to move on to more specific improvements: a separate static-content tier, memcached, DB sharding, batch operations and so on.
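To make the sharding idea concrete, here's a minimal sketch (the shard connection strings and the simple modulo scheme are illustrative assumptions; real deployments tend to use consistent hashing or a lookup directory so that adding a shard doesn't remap every key):

```python
import hashlib

# Hypothetical shard map: one connection string per database server.
SHARDS = [
    "postgresql://db0.internal/app",
    "postgresql://db1.internal/app",
    "postgresql://db2.internal/app",
    "postgresql://db3.internal/app",
]

def shard_for(user_id):
    """Deterministically map a user to one database shard.

    Uses a stable hash rather than Python's built-in hash(),
    which is randomised per process, so that every app server
    agrees on the mapping.
    """
    digest = hashlib.md5(str(user_id).encode()).hexdigest()
    return SHARDS[int(digest, 16) % len(SHARDS)]

# All queries about a given user then go to shard_for(user_id);
# cross-shard work (global search, analytics) becomes the hard part.
```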
In terms of specific projects to work on, how about cloning Twitter, Flickr or The Pirate Bay? They've all had performance and scaling challenges in the past.