For an Emacs extension, I'd like to retrieve data over HTTP. I'm not particularly fond of the idea of shelling out to things like wget, curl, or w3m to do that, so I'm using the url-retrieve function.
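For reference, my call looks roughly like this (the URL and the callback body are placeholders, not my actual code):

    (require 'url)

    (url-retrieve
     "http://example.com/data"  ; placeholder URL
     (lambda (status)
       ;; The current buffer now holds the raw response:
       ;; the headers, then the (still compressed) body.
       (message "response arrived, %d bytes" (buffer-size))))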
One of the HTTP servers I'm talking to happens to ignore Accept-Encoding headers and insists on always sending out its data with Content-Encoding: gzip.
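To illustrate: even when I explicitly ask for an uncompressed response, like this, the server keeps compressing (url-request-extra-headers is, as far as I can tell, the intended way to add request headers in url.el; my-callback stands in for the real handler):

    ;; Ask for an identity encoding; this particular server
    ;; ignores the header and sends gzip anyway.
    (let ((url-request-extra-headers
           '(("Accept-Encoding" . "identity"))))
      (url-retrieve "http://example.com/data" #'my-callback))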
As a result of that, and because url-retrieve doesn't automatically decode response bodies, the buffer url-retrieve presents me with will contain binary gzip data.
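That part is easy to confirm from the callback, assuming url-http-end-of-headers really marks where the body starts, which is my reading of url-http.el (my-gzip-body-p is just a name I made up):

    (defun my-gzip-body-p ()
      "Non-nil if the response body starts with the gzip magic bytes."
      ;; The response buffer is unibyte, so char-after returns raw bytes.
      (save-excursion
        (goto-char url-http-end-of-headers)
        (and (eq (char-after) #x1f)
             (eq (char-after (1+ (point))) #x8b))))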
I'm looking for a way to decode the response body, preferably chunk by chunk, as the data arrives. Is there a way to instruct url-retrieve to do this for me?
Decoding the response all at once, once it has completely arrived, would also be acceptable, but I'd rather avoid all the fubar involved in creating an asynchronous subprocess running gzip, piping parts of the response to it, and reading the decoded chunks back in; I'd be looking for some library function here.
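To make "library function" concrete: something with roughly this shape is what I'm hoping for. zlib-decompress-region seems to exist in Emacs builds with zlib support (24.4 and later, if I remember correctly), so a whole-buffer decode in the callback might look like the sketch below; whether this is the sanctioned way to do it is exactly my question:

    (defun my-handle-response (status)
      "url-retrieve callback: decompress the gzip body in place."
      ;; Assumes Emacs was built with zlib; see zlib-available-p.
      ;; Assumes url-http-end-of-headers marks the start of the body.
      (when (and (fboundp 'zlib-available-p)
                 (zlib-available-p))
        (zlib-decompress-region url-http-end-of-headers (point-max)))
      ;; The buffer now holds the headers followed by the decoded body.
      (message "decoded %d chars of body"
               (- (point-max) url-http-end-of-headers)))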