A possible alternative is to skip testing and simply ship it. This provides you with a small army of testers with all kinds of environments, some of whom will report to you (with angry e-mails or demands for refunds, perhaps).
For some years, I was in the happy position of having an essentially captive customer base (this was in-company): my code either worked for them, or they reported the failure and I'd fix it the same day. They were *grateful* to get fixes so quickly; other departments tested their code and had lengthy release cycles but not noticeably fewer bugs, and when something did go wrong it took them ages to fix.
This may not be good advice for you, but (as my anecdote shows) whether it is depends on the circumstances. If you're not worried about annoying a few users or tarnishing your reputation, this might let you cut a corner or two.
EDIT:
Based on your feedback, you're in the (more common) situation where that kind of operation would earn you a spanking. In that case, I'd agree with the recommendation in some of the other answers: test every major release, but not the different editions.
As the programmer, you should theoretically have a feel for what distinguishes the Home/Premium/Whatever versions. These days, as far as I've seen, that tends to be stuff like:
- Is there server-type software (like IIS) included?
- Is there a limit on network connections?
- Does it ship with all language packs?
- Which multimedia gadgets and codecs does it include?
In some cases, the difference between editions came down to a single flag stored in a number on the CD... identical code, just a switch toggling which capabilities are enabled. Unless that subset of capabilities has an impact on your code, you're probably safe to ignore it (and if it does matter, you can check the edition at runtime; see the sketch after this list). A sensible approach might be to test with the lowest common denominator (i.e. the "poor man's edition") of the product, as that will
- be the least expensive to purchase for testing on; and
- have the greatest number of restrictions on what your code can do.
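If your code really does need to branch on the edition, here's a minimal sketch of how you might detect it at runtime. It only uses the standard library and assumes Python 3.8+ on Windows; the edition names in `LIMITED_EDITIONS` are my guess at typical Home-class SKU identifiers, so treat them as placeholders rather than a definitive list:

```python
import platform
import sys

def describe_windows():
    """Return (release, edition) for the running Windows install, or None elsewhere.

    platform.win32_edition() requires Python 3.8+ and only works on Windows.
    """
    if sys.platform != "win32":
        return None
    release, version, service_pack, ptype = platform.win32_ver()
    edition = platform.win32_edition()  # e.g. "Core", "Professional", "Enterprise"
    return release, edition

# Hypothetical use: disable a server-dependent feature on "Home"-class editions.
LIMITED_EDITIONS = {"Core", "CoreSingleLanguage"}  # assumption, not an exhaustive list

info = describe_windows()
if info is not None:
    release, edition = info
    print(f"Running on Windows {release}, edition {edition}")
    if edition in LIMITED_EDITIONS:
        print("Home-class edition detected; skipping server-dependent features.")
```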
But as usual, Your Mileage May Vary.