I don't know of an existing library that does exactly what you're asking, beyond just running the data through gzip or another general-purpose lossless compressor. However, since you know the frames are highly correlated, you could XOR consecutive frames as Conspicuous Compiler suggested and then run gzip on the result. If few pixels change between frames, the XORed frame will have far less entropy than the original, which lets gzip (or any other lossless algorithm) achieve a much higher compression ratio.
You would also want to send a key (non-differential) frame every so often so the receiver can resynchronize in the event of errors.
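Here's a minimal sketch of both ideas in Python, assuming each frame arrives as a fixed-length `bytes` object (8,000 bytes for a 1-bit 320x200 frame); the function names and the 30-frame keyframe interval are my own choices, not from any particular library:

```python
import zlib

KEYFRAME_INTERVAL = 30  # hypothetical choice: resync every 30 frames

def encode_frame(frame: bytes, prev: bytes | None, index: int) -> tuple[bool, bytes]:
    """Return (is_keyframe, compressed_payload)."""
    if prev is None or index % KEYFRAME_INTERVAL == 0:
        # Key (non-differential) frame: compress the raw bits.
        return True, zlib.compress(frame)
    # Differential frame: XOR with the previous frame first.
    # Few pixel changes -> mostly zero bytes -> much better compression.
    diff = bytes(a ^ b for a, b in zip(frame, prev))
    return False, zlib.compress(diff)

def decode_frame(is_key: bool, payload: bytes, prev: bytes | None) -> bytes:
    data = zlib.decompress(payload)
    if is_key:
        return data
    # XOR the delta back onto the previous frame to recover this one.
    return bytes(a ^ b for a, b in zip(data, prev))
```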
If you are mainly interested in learning about compression, you could try implementing an RLE yourself on the XORed frames. Check out the bit-level RLE discussed here about halfway down the page. It should be easy to implement: each byte stores a 7-bit run length and a 1-bit value, so it can achieve a best-case compression ratio of 128/8 = 16 if there are no changes between frames.
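A sketch of that byte format, assuming the XORed frame has been unpacked into a list of 0/1 bits; storing the run length as length-1 lets one byte cover up to 128 identical bits, which is where the 128/8 = 16 best case comes from:

```python
def rle_encode(bits: list[int]) -> bytes:
    out = bytearray()
    i = 0
    while i < len(bits):
        value = bits[i]
        run = 1
        while i + run < len(bits) and bits[i + run] == value and run < 128:
            run += 1
        # High bit = pixel value, low 7 bits = run length - 1 (1..128).
        out.append((value << 7) | (run - 1))
        i += run
    return bytes(out)

def rle_decode(data: bytes) -> list[int]:
    bits = []
    for byte in data:
        value = byte >> 7
        run = (byte & 0x7F) + 1
        bits.extend([value] * run)
    return bits
```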
Another thought: if very few pixels change, you may want to encode just the bit positions that flipped between frames. Since 320 x 200 = 64,000 and 64,000 < 65,536, any pixel position fits in a 16-bit integer. For instance, if only 100 pixels change, you can store 100 16-bit integers for those positions (1,600 bits), whereas the RLE above would take at least 64,000/16 = 4,000 bits for the whole frame at its best-case 16:1 ratio (and in practice quite a bit more). You could even switch between this method and RLE depending on the frame content.
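A sketch of the position-list idea, assuming frames are lists of 0/1 pixels in row-major order; the names here are mine, not from any library:

```python
import struct

def encode_changes(frame: list[int], prev: list[int]) -> bytes:
    # Each flipped pixel becomes a 16-bit index (valid since 64000 < 65536).
    positions = [i for i, (a, b) in enumerate(zip(frame, prev)) if a != b]
    # Payload: a 16-bit count followed by the 16-bit positions.
    return struct.pack(f"<H{len(positions)}H", len(positions), *positions)

def apply_changes(prev: list[int], payload: bytes) -> list[int]:
    (count,) = struct.unpack_from("<H", payload, 0)
    positions = struct.unpack_from(f"<{count}H", payload, 2)
    frame = prev[:]
    for p in positions:
        frame[p] ^= 1  # flip each changed pixel
    return frame
```

To switch between this and RLE per frame, an encoder could simply build both payloads, keep whichever is smaller, and prefix a one-byte method tag so the decoder knows which representation follows.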
If you want to go beyond these simple methods, I would suggest using variable-length codes for the runs during the run-length encoding, assigning shorter codes to the most probable run lengths. This is similar to the RLE used in JPEG and MPEG after the lossy part of the compression (DCT and quantization).
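To illustrate, here's a toy Huffman construction over run lengths using only the standard library. Note that real codecs like JPEG typically use predefined code tables rather than building one per frame, so treat this purely as a sketch of the idea:

```python
import heapq
from collections import Counter

def huffman_codes(run_lengths: list[int]) -> dict[int, str]:
    """Map each distinct run length to a bit string; frequent runs get shorter codes."""
    freq = Counter(run_lengths)
    if len(freq) == 1:  # degenerate case: only one distinct run length
        return {sym: "0" for sym in freq}
    # Heap entries: (frequency, tiebreak, {symbol: code-so-far}).
    heap = [(f, i, {sym: ""}) for i, (sym, f) in enumerate(freq.items())]
    heapq.heapify(heap)
    tiebreak = len(heap)
    while len(heap) > 1:
        f1, _, c1 = heapq.heappop(heap)
        f2, _, c2 = heapq.heappop(heap)
        # Merge the two rarest subtrees, prepending a bit to each side's codes.
        merged = {s: "0" + c for s, c in c1.items()}
        merged.update({s: "1" + c for s, c in c2.items()})
        heapq.heappush(heap, (f1 + f2, tiebreak, merged))
        tiebreak += 1
    return heap[0][2]

# Example: short runs dominate, so they receive the shortest codes.
codes = huffman_codes([1, 1, 1, 1, 2, 2, 3, 128, 128])
```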