My first thought was that .NET's JPEG encoder uses chroma subsampling even at the highest quality setting, so color information is stored at half resolution; as far as I can tell, that setting is hard-coded. But that wouldn't explain why you'd get better quality in the second example. Unless the second one skipped anti-aliasing, giving a sharper (but lower-quality) image whose artifacts went unnoticed.
Edit: The dst.Save(m, format); call looks like your problem. You're encoding the image as JPEG there, at the encoder's default quality (not 100%), and then immediately decoding it back to an image, which bakes the compression artifacts in. Since Bitmap inherits from Image, dst is already an Image and you can simply return it as-is.
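A minimal sketch of what I mean, assuming your surrounding code is a typical resize helper (the method name ResizeImage and its parameters are my invention, not from your code):

static System.Drawing.Image ResizeImage(System.Drawing.Image src, int width, int height)
{
    var dst = new System.Drawing.Bitmap(width, height);
    using (var g = System.Drawing.Graphics.FromImage(dst))
    {
        // High-quality resampling; without this the resize itself adds artifacts.
        g.InterpolationMode = System.Drawing.Drawing2D.InterpolationMode.HighQualityBicubic;
        g.SmoothingMode = System.Drawing.Drawing2D.SmoothingMode.HighQuality;
        g.DrawImage(src, 0, 0, width, height);
    }
    // No Save(...)/FromStream(...) round trip: Bitmap inherits from Image,
    // so the resized bitmap can be returned directly, with no lossy re-encode.
    return dst;
}

And if you eventually do need JPEG bytes, set the quality explicitly instead of relying on the encoder's default, along these lines:

static void SaveJpeg(System.Drawing.Image img, string path, long quality)
{
    // Find the JPEG codec and pass an explicit quality (0-100) to Save.
    var codec = System.Drawing.Imaging.ImageCodecInfo.GetImageEncoders()
        .First(c => c.FormatID == System.Drawing.Imaging.ImageFormat.Jpeg.Guid);
    using (var p = new System.Drawing.Imaging.EncoderParameters(1))
    {
        p.Param[0] = new System.Drawing.Imaging.EncoderParameter(
            System.Drawing.Imaging.Encoder.Quality, quality);
        img.Save(path, codec, p);
    }
}

(That second helper assumes a using System.Linq; directive for First. Even at quality 100 you may still see the chroma subsampling issue mentioned above, so returning the Bitmap uncompressed is the safer default.)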