Since the whole point of compressed sensing is to avoid taking measurements, which, as you say, can be expensive, it should come as no surprise that the compression ratio will be worse than if the compression implementation is allowed to take all the measurements it wants and cherry-pick the ones that generate the best outcome.
As such, I very much doubt that an implementation using compressed sensing on data that is already present (in effect, where all the measurements are already available) is going to produce better compression ratios than the optimal result.
Now, having said that, compressed sensing is also about picking a subset of the measurements that, when decompressed, reproduces a result similar to the original, though it may lack some of the detail simply because you kept only that subset. So it might also be that you can indeed produce better compression ratios than the optimal result, at the expense of a greater loss of detail. Whether this is better than, say, a JPEG encoder where you simply throw away more of the coefficients, I don't know.
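To make the "subset of measurements" idea concrete, here's a minimal sketch of the usual compressed sensing pipeline: take far fewer random linear measurements than the signal length, then recover the signal by exploiting its sparsity. The sizes, the Gaussian measurement matrix, and the `omp` helper are all my own illustration, not any real codec; real implementations typically use an l1-minimization solver rather than this greedy stand-in.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup: a length-n signal that is s-sparse (sizes are arbitrary).
n, m, s = 256, 64, 8
x = np.zeros(n)
x[rng.choice(n, size=s, replace=False)] = rng.standard_normal(s)

# "Compression" step: m << n random linear measurements, y = Phi @ x.
Phi = rng.standard_normal((m, n)) / np.sqrt(m)
y = Phi @ x

def omp(Phi, y, s):
    """Orthogonal matching pursuit: greedily recover an s-sparse signal."""
    residual, support = y.copy(), []
    for _ in range(s):
        # Pick the column most correlated with what is still unexplained.
        support.append(int(np.argmax(np.abs(Phi.T @ residual))))
        # Least-squares fit on the chosen columns, then update the residual.
        coef, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
        residual = y - Phi[:, support] @ coef
    x_hat = np.zeros(Phi.shape[1])
    x_hat[support] = coef
    return x_hat

x_hat = omp(Phi, y, s)
print("relative error:", np.linalg.norm(x_hat - x) / np.linalg.norm(x))
```

With 64 measurements of a 256-sample signal that has only 8 non-zero entries, recovery is essentially exact. The point is that the compression ratio is fixed by how few measurements you take, and the detail you lose is whatever part of the signal wasn't sparse to begin with.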
Also, if an image compression implementation that uses compressed sensing can reduce the time it takes to compress the image from the raw bitmap data, that might give it some traction in scenarios where time is an expensive factor but the detail level is not, for instance real-time capture on constrained hardware.
In essence, if you're willing to trade quality of results for speed, a compressed sensing implementation might be worth looking into. I have yet to see widespread usage of this, though, so something tells me it isn't going to be worth it, but I could be wrong.
I don't know why you bring up image search, though; I don't see how the compression algorithm can help with image search, unless you somehow use the compressed data itself to search for images. That will probably not do what you want, since image search usually means finding images that contain certain visual patterns, not ones that are 100% identical.