In information theory there is a concept called entropy, which is a measure of the "true" amount of information in a message (in your example, the message is the SWF file). One of the common units used for this measure is the bit.
A file of 1.21 MB occupies approximately 10,150,215 bits. However, its entropy may be less than 10,150,215 bits, because there is some order, or predictability, in the data. Let's say you measured that file's entropy and came to the conclusion that it is only 9,000,000 bits. This means that you can't compress it in a lossless manner to a size less than 9,000,000 bits.
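If you want a rough feel for that number, here is a small Python sketch that estimates a file's entropy from its byte frequencies. It's a zero-order estimate, so the true entropy can be even lower, and the file name is just a placeholder:

    import math
    from collections import Counter

    def byte_entropy_bits(path):
        """Zero-order Shannon estimate of a file's entropy, in bits.

        It only looks at byte frequencies, so the real entropy (which also
        accounts for patterns between bytes) can be lower than this."""
        with open(path, "rb") as f:
            data = f.read()
        total = len(data)
        if total == 0:
            return 0.0
        counts = Counter(data)
        bits_per_byte = -sum((c / total) * math.log2(c / total)
                             for c in counts.values())
        return bits_per_byte * total

    # print(byte_entropy_bits("movie.swf"))  # "movie.swf" is a made-up name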
But compression algorithms also add some data of their own to the compressed file so that they can decompress it later: information about the kind of "abbreviations" made when compressing the data. This means the theoretical limit given by the entropy won't be reached, because of that extra algorithm-specific data.
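You can see that fixed overhead directly by compressing something tiny; the sketch below uses Python's zlib just as a stand-in for whatever compressor your tool uses:

    import zlib

    payload = bytes([7])                  # a single byte of "data"
    compressed = zlib.compress(payload)

    # The compressed stream must carry a header, block bookkeeping and a
    # checksum, so it ends up several times larger than the payload itself
    # (typically 1 byte in, 9 bytes out).
    print(len(payload), len(compressed))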
If your file is already compressed, that means its size is already close to the entropy of the original data. When you try to compress it again (and especially in your case, as you're using the same algorithm), the size reduction won't be much, and you will be adding yet another layer of the algorithm-specific extra data. If that extra data is more than the extra size reduction, your twice-compressed file will be larger than the one compressed only once.
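As a concrete illustration of the double-compression effect (again a sketch with zlib, not necessarily the algorithm your SWF tool applies):

    import zlib

    original = b"the quick brown fox jumps over the lazy dog " * 20_000

    once = zlib.compress(original)   # plenty of redundancy: shrinks a lot
    twice = zlib.compress(once)      # little redundancy left: mostly adds overhead

    # Typically `twice` is no smaller than `once`, and often a few bytes larger,
    # because the second pass pays its fixed cost without finding much to abbreviate.
    print(len(original), len(once), len(twice))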