There's been a good deal of hype about quantum computers over the last decade or two, but there are a number of problems that will need to be resolved before they'll become practical.
Some of these are "just" engineering problems, like shrinking the hardware down from room-sized 6-qbit systems to something more like the density of an integrated circuit. Or figuring out a way to keep thermal noise from scrambling the system without requiring the customer to keep large stocks of liquid nitrogen (or helium!) on hand.
On the other hand, there appear to be some more fundamental problems with constructing quantum computers with large numbers of qbits.
Primary among these is error correction. Part of the inherent nature of the entangled systems used for quantum computing is that they can lose "coherence" spontaneously. Great strides have been made in extending coherence lifetimes, but you're still very limited in the number of operations you can perform reliably.
Some techniques for error correction in quantum computations have been developed, but the last article I read on quantum EC indicated that the number of error-correcting qbits required goes up more-or-less logarithmically with the number of active qbits. Note that the constant factor may be quite large - it takes at least 5 physical qbits to represent 1 logical qbit.
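To get a feel for how these schemes work at the very simplest level: the 5-qbit figure is for the smallest code that can fix an arbitrary single-qbit error, but the grandfather of them all is the 3-qbit bit-flip repetition code, which only handles bit flips. Here's a toy Python simulation of it - plain state-vector bookkeeping, nothing to do with how real hardware actually measures syndromes - spreading one logical qbit across three physical ones and using parity checks to undo a single flip:

    import numpy as np

    # Toy 3-qbit bit-flip repetition code.
    # One logical qbit a|0> + b|1> is encoded as a|000> + b|111>.
    # A single X (bit-flip) error on any physical qbit is located by the
    # parity checks (q0 xor q1) and (q1 xor q2) and then undone.
    # Phase errors are ignored entirely, so this is only a flavor of real QEC.

    def encode(a, b):
        """Return the 8-amplitude state vector a|000> + b|111>."""
        state = np.zeros(8, dtype=complex)
        state[0b000] = a
        state[0b111] = b
        return state

    def apply_x(state, qbit):
        """Flip physical qbit 0, 1, or 2 (qbit 0 = most significant bit)."""
        new = np.zeros_like(state)
        for basis, amp in enumerate(state):
            new[basis ^ (1 << (2 - qbit))] += amp
        return new

    def syndrome(state):
        """Parity checks (q0^q1, q1^q2); identical on every occupied basis state."""
        for basis, amp in enumerate(state):
            if abs(amp) > 1e-12:
                q0, q1, q2 = (basis >> 2) & 1, (basis >> 1) & 1, basis & 1
                return (q0 ^ q1, q1 ^ q2)

    def correct(state):
        """Use the syndrome to decide which single qbit (if any) to flip back."""
        fix = {(0, 0): None, (1, 0): 0, (1, 1): 1, (0, 1): 2}[syndrome(state)]
        return state if fix is None else apply_x(state, fix)

    if __name__ == "__main__":
        a, b = 0.6, 0.8                      # arbitrary logical amplitudes
        logical = encode(a, b)
        for victim in range(3):              # try an error on each physical qbit
            damaged = apply_x(logical, victim)
            recovered = correct(damaged)
            print(victim, np.allclose(recovered, logical))   # -> True each time

The thing to notice is that the parity checks identify which qbit flipped without ever looking at (and hence disturbing) the amplitudes a and b themselves.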
To some extent (it remains to be seen how much), this error-correction overhead is going to eat into the exponential advantage in speed that quantum computation is supposed to have over conventional computation.
Okay, so you can get a 6 qbit system today, which is way too small to tackle "interesting" problems with. Something like factoring a 2048-bit number is going to require systems with millions or billions of physical qbits. Sure, you'll get the answer "instantly", but there's no clear path to get anywhere near that level of performance using current techniques. Just loading the problem into the system would probably exceed the coherence lifetime.
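To make "millions or billions" slightly less hand-wavy, here's the kind of back-of-envelope arithmetic involved. Both constants below are rough assumptions on my part - on the order of 2n logical qbits for Shor's algorithm on an n-bit modulus, and something like a thousand physical qbits per error-corrected logical qbit - so treat the output as an order-of-magnitude guess, not a figure from any particular paper or machine:

    # Back-of-envelope physical-qbit estimate for factoring an n-bit number
    # with Shor's algorithm. Both constants are rough assumptions for
    # illustration only.

    def physical_qbits(n_bits, logical_per_bit=2, overhead=1000):
        """logical_per_bit ~ 2: Shor needs on the order of 2n logical qbits.
        overhead ~ 1000: assumed physical qbits per error-corrected logical qbit."""
        logical = logical_per_bit * n_bits
        return logical, logical * overhead

    if __name__ == "__main__":
        logical, physical = physical_qbits(2048)
        print(f"~{logical} logical qbits -> ~{physical:,} physical qbits")
        # -> ~4096 logical qbits -> ~4,096,000 physical qbits

Crank the assumed overhead up by another factor of 100 and you're into the hundreds of millions, which is where the "or billions" comes from.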
Oh, to answer your other questions:
I think that most folks are working with quantum storage systems with a single pair of states. In principle, most of these systems could store multiple non-overlapping states per storage unit, but I think a lot more effort is going into making the equipment work reliably at all than into maximizing storage density.
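For what it's worth, the potential win from multi-level storage is easy to put a number on, at least on paper: a unit with d reliably distinguishable levels carries log2(d) bits' worth of state, so you need proportionally fewer units to span the same state space. The sketch below just does that arithmetic and says nothing about how much harder the extra levels are to control:

    import math

    # How many storage units does it take to span 2**n_bits basis states,
    # if each unit has d reliably distinguishable levels? (d = 2 is the
    # usual qbit; larger d is the "multiple non-overlapping states" case.)

    def units_needed(n_bits, levels):
        return math.ceil(n_bits / math.log2(levels))

    if __name__ == "__main__":
        for d in (2, 3, 4, 8):
            print(d, units_needed(300, d))
        # 2-level units: 300, 3-level: 190, 4-level: 150, 8-level: 100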
Quantum algorithms are weird in the same way that quantum physics is. Rather than try to explain how they work myself, I'll just point you at this article on Shor's algorithm for factoring integers:
http://en.wikipedia.org/wiki/Shor's_algorithm
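(If you want at least the flavor without the quantum mechanics: the quantum part of Shor's algorithm only finds the period r of a^x mod N; the rest is ordinary number theory. The Python sketch below brute-forces the period classically - which is exactly the exponentially slow step the quantum computer is supposed to replace - and then does the standard gcd trick to extract a factor. So it's the classical scaffolding only, not the interesting part.)

    from math import gcd
    from random import randrange

    # Classical skeleton of Shor's algorithm. The find_period() step is done
    # by brute force here; that is the only part a quantum computer speeds up.

    def find_period(a, n):
        """Smallest r > 0 with a**r = 1 (mod n) - exponentially slow classically."""
        x, r = a % n, 1
        while x != 1:
            x = (x * a) % n
            r += 1
        return r

    def shor_factor(n):
        """Return a nontrivial factor of an odd composite n (not a prime power)."""
        while True:
            a = randrange(2, n)
            d = gcd(a, n)
            if d > 1:
                return d                 # got lucky: a shares a factor with n
            r = find_period(a, n)
            if r % 2 == 1:
                continue                 # need an even period; pick another a
            y = pow(a, r // 2, n)
            if y == n - 1:
                continue                 # trivial square root; pick another a
            return gcd(y - 1, n)

    if __name__ == "__main__":
        print(shor_factor(15))           # -> 3 or 5
        print(shor_factor(21))           # -> 3 or 7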
And here's a reference on the error-correction issue:
http://en.wikipedia.org/wiki/Quantum_error_correction