8 finds is best, because the code's simpler and clearer.
Thinking about performance is more fun, though, so I'll do that too.
As Artelius said while I was writing this answer, ignore the complexity. It's not relevant, because you know that n=100. For example, insertion sort has worse algorithmic complexity than quicksort (at least in the average case), but in almost all implementations insertion sort is faster than quicksort for small n, which is why good quicksort implementations switch to insertion sort for the small sub-ranges near the bottom of the recursion. Your n is also small, so the limit as n -> infinity isn't what matters.
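Purely to illustrate that point (not something your code needs), this is the shape of such a hybrid; the cutoff of 16 is a number I've picked for the sketch, not anything a particular library is guaranteed to use:

```cpp
#include <utility>
#include <vector>

// Sketch of a hybrid sort: quicksort recursion that hands small ranges
// (below an invented cutoff of 16) to insertion sort.
void hybrid_sort(std::vector<int>& v, int lo, int hi) {   // sorts [lo, hi)
    if (hi - lo <= 16) {
        for (int i = lo + 1; i < hi; ++i)                 // insertion sort
            for (int j = i; j > lo && v[j] < v[j - 1]; --j)
                std::swap(v[j], v[j - 1]);
        return;
    }
    int pivot = v[lo + (hi - lo) / 2];                    // middle element as pivot
    int i = lo, j = hi - 1;
    while (i <= j) {                                      // Hoare-style partition
        while (v[i] < pivot) ++i;
        while (v[j] > pivot) --j;
        if (i <= j) std::swap(v[i++], v[j--]);
    }
    hybrid_sort(v, lo, j + 1);                            // recurse on both halves
    hybrid_sort(v, i, hi);
}
```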
Since the code for both options is simple to write, I'd suggest writing both and profiling them. This will (a) tell you which is faster, and (b) prove that both are so fast that it doesn't matter which you do (unless it's the only thing your program does, and it does it a lot). Especially since you talk about switching on the key: if the key is an integral type, the limiting factor is more likely to be memory cache behaviour than any actual processing.
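If you do profile, the harness can be very small. A rough sketch, with made-up key/value types and made-up keys standing in for your real data:

```cpp
#include <chrono>
#include <cstdio>
#include <map>
#include <string>

int main() {
    // Illustrative data: a 100-element map and 8 keys to look up.
    std::map<int, std::string> m;
    for (int i = 0; i < 100; ++i) m[i * 3] = "value";
    const int keys[8] = {3, 30, 60, 90, 150, 210, 270, 299};

    using clock = std::chrono::steady_clock;

    // Option 1: eight separate map::find calls.
    auto t0 = clock::now();
    int hits_find = 0;
    for (int k : keys)
        if (m.find(k) != m.end()) ++hits_find;
    auto t1 = clock::now();

    // Option 2: one linear traversal, testing each element against the 8 keys.
    int hits_scan = 0;
    for (const auto& kv : m)
        for (int k : keys)
            if (kv.first == k) { ++hits_scan; break; }
    auto t2 = clock::now();

    auto ns = [](clock::duration d) {
        return std::chrono::duration_cast<std::chrono::nanoseconds>(d).count();
    };
    std::printf("find: %d hits, %lld ns; scan: %d hits, %lld ns\n",
                hits_find, (long long)ns(t1 - t0),
                hits_scan, (long long)ns(t2 - t1));
}
```

At this size a single run is mostly noise, so in practice you'd loop each option a few million times and compare the totals.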
Failing that, the usual way to compare search algorithms is to count comparisons, on the assumption that comparisons are much slower than traversing the structures. If nothing else, each comparison touches memory and is an unpredictable branch, which are two things CPUs are often bad at.
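If you want to count comparisons empirically rather than on paper, a comparator that counts its own calls is enough. A sketch, assuming int keys (the global counter and the types are mine, purely for illustration):

```cpp
#include <cstdio>
#include <map>

// A comparator that counts its own invocations, so you can see how many
// comparisons the map actually performs.
static long g_comparisons = 0;

struct CountingLess {
    bool operator()(int a, int b) const {
        ++g_comparisons;
        return a < b;
    }
};

int main() {
    std::map<int, int, CountingLess> m;
    for (int i = 0; i < 100; ++i) m[i] = i;    // comparisons spent on insertion

    g_comparisons = 0;                          // reset before measuring the finds
    const int keys[8] = {5, 17, 23, 42, 58, 71, 88, 99};
    for (int k : keys) m.find(k);

    std::printf("comparisons for 8 finds: %ld\n", g_comparisons);
}
```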
If you sort your 8 elements up front (which takes 24 comparisons or so), then instead of the switch you propose you can walk the map and your sorted list together, because the map is also sorted: compare the current element on each side; if they match, advance both sides; if they don't, advance the side with the smaller element. That costs one comparison per map node you traverse, plus one per item you're searching for: 8+100 in the worst case, or fewer if you find all 8 before reaching the end. The average position of the last of the 8, if they're randomly located in the map, is still something like 8/9 of the way through, so call it 8+88+24 = 120 comparisons on average, with 132 as the worst case. The best case is 8 (if everything you're searching for is at the start) + 24 (for the initial sort) = 32 comparisons, or even better if you get lucky on the sort or your 8 search items happen to be pre-sorted for some reason.
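A sketch of that merge-style scan, assuming int keys and std::map (the question doesn't spell either out); each step is one three-way comparison in spirit, written here as up to two operator< calls:

```cpp
#include <algorithm>
#include <array>
#include <cstddef>
#include <map>

// One ordered pass over the map: sort the 8 keys first, then at each step
// compare the current map key with the current search key and advance
// whichever side is smaller (both sides on a match).
int count_present(const std::map<int, int>& m, std::array<int, 8> keys) {
    std::sort(keys.begin(), keys.end());        // the ~24-comparison setup cost

    int found = 0;
    auto it = m.begin();
    std::size_t i = 0;
    while (it != m.end() && i < keys.size()) {
        if (it->first < keys[i])
            ++it;                               // map key too small: advance the map side
        else if (keys[i] < it->first)
            ++i;                                // this key isn't in the map: skip it
        else {
            ++found; ++it; ++i;                 // match: advance both sides
        }
    }
    return found;
}
```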
The average depth of a node in a Red-Black tree (which map usually is) is slightly over log2(n). Call it 7 in this case, since 2^7 is 128. So finding 8 elements should take something like 56 comparisons. IIRC the balance guarantee of a Red-Black tree is that no root-to-leaf path is more than twice as long as any other, so the worst-case depth is roughly 2*log2(n); call it 15, for a total of 120. The best case is ceil(1/2 * log2(n)), which is 4: that's 32 comparisons again.
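For comparison, the eight-finds option is essentially just this (again assuming int keys):

```cpp
#include <map>

// The competing option: eight separate lookups, each costing roughly the
// depth of the tree in comparisons.
int count_present_by_find(const std::map<int, int>& m, const int (&keys)[8]) {
    int found = 0;
    for (int k : keys)
        if (m.find(k) != m.end())
            ++found;
    return found;
}
```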
So assuming that comparisons are the only thing that affects speed, the 8 finds will be somewhere between 4 times as fast, and 4 times as slow, as the linear traversal, with a 2x better average.
The linear traversal will probably touch more memory, though, so it might be slower on that account. But ultimately for n=100 you're talking about a tiny fraction of a millisecond either way, so just do whatever makes for the simplest code (probably the 8 finds). And did I mention that if you really want to know the speed, you can't hope to predict it: you just have to profile?