views: 1426
answers: 7

Can a (or any) proxy server cache content that is requested by a client over HTTPS? Since the proxy server can't see the query string or the HTTP headers, I reckon it can't.

I'm considering a desktop application, run by a number of people behind their company's proxy. This application may access services across the internet, and I'd like to take advantage of the built-in internet caching infrastructure for 'reads'. If caching proxy servers can't cache SSL-delivered content, would simply encrypting the content of a response be a viable option?

I am considering having all GET requests that we wish to be cacheable made over HTTP, with the body encrypted using asymmetric encryption, where each client has the decryption key. Any time we wish to perform a GET that is not cacheable, or a POST operation, it will be performed over SSL.
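To make that concrete, here is a minimal sketch of what the client side might look like (Python, using the cryptography package; the URL, key file name and choice of RSA-OAEP are illustrative assumptions only, and OAEP can only handle payloads smaller than the key size):

    import urllib.request
    from cryptography.hazmat.primitives import hashes, serialization
    from cryptography.hazmat.primitives.asymmetric import padding

    # Each client ships with the decryption (private) key; file name is made up.
    with open("client-private-key.pem", "rb") as f:
        private_key = serialization.load_pem_private_key(f.read(), password=None)

    # Plain HTTP, so any caching proxy between the client and the server can
    # store and replay the (already encrypted) response body.
    with urllib.request.urlopen("http://example.com/service/resource") as resp:
        encrypted_body = resp.read()

    # RSA-OAEP only works for small payloads; larger bodies would need a
    # hybrid scheme (see the later answer about symmetric ciphers).
    plaintext = private_key.decrypt(
        encrypted_body,
        padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                     algorithm=hashes.SHA256(), label=None),
    )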

+3  A: 

No, it's not possible to cache HTTPS directly. The whole communication between the client and the server is encrypted. A proxy sits between the server and the client; in order to cache the content, it needs to be able to read it, i.e. decrypt it.

You can do something to cache it: do the SSL on your proxy, intercepting the connection between the client and the server. The data is encrypted between the client and your proxy, where it is decrypted, read and cached, then re-encrypted and sent on to the server. The reply from the server is likewise decrypted, read, cached and re-encrypted. I'm not sure how you do this with major proxy software (like Squid), but it is possible.

The only problem with this approach is that the proxy will have to use a self-signed cert to encrypt traffic to the client. The client will be able to tell that a proxy in the middle has read the data, since the certificate will not be from the original site.
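For illustration only, a toy sketch of that idea: the proxy terminates TLS with its own certificate, fetches the resource itself over HTTPS, and serves the cached plaintext back. Real proxies (Squid etc.) handle CONNECT tunnelling and much more; the file names and the crude request parsing here are assumptions.

    import socket, ssl, urllib.request

    CACHE = {}  # url -> response body (in-memory, for the sketch only)

    def handle_client(conn):
        # Terminate TLS with the proxy's own cert (self-signed, or signed by
        # a CA the clients have been told to trust).
        ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
        ctx.load_cert_chain("proxy-cert.pem", "proxy-key.pem")  # assumed files
        tls = ctx.wrap_socket(conn, server_side=True)

        # Crude parsing of "GET <absolute-url> HTTP/1.1"; illustration only.
        request_line = tls.recv(65536).decode("latin-1").splitlines()[0]
        url = request_line.split(" ")[1]

        if url not in CACHE:
            # The proxy makes its own HTTPS request to the origin, so it sees
            # (and can cache) the decrypted response.
            with urllib.request.urlopen(url) as upstream:
                CACHE[url] = upstream.read()

        body = CACHE[url]
        tls.sendall(b"HTTP/1.1 200 OK\r\nContent-Length: %d\r\n\r\n" % len(body) + body)
        tls.close()

    def run(port=8443):
        with socket.create_server(("", port)) as srv:
            while True:
                conn, _ = srv.accept()
                handle_client(conn)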

A: 

Yes, I considered a cache-in-the-middle proxy that performs the decryption and makes the request on the client's behalf. The problem with that is that it adds complexity when deploying the application. Such a proxy cache would have to be installed on a customer's premises. For enterprise customers, it would also have to be made fault-tolerant and reliable, so that would probably mean some sort of failover or load balancing, and maybe even a shared distributed cache (memcached)...

Such deployments will also incur additional support, development and testing costs.

I would really like to utilize their own installed, load-balanced and supported infrastructure.

Damian Hickey
+5  A: 

The comment by Rory that the proxy would have to use a self-signed cert is not strictly true.

The proxy could be implemented to generate a new cert for each new SSL host it is asked to deal with and sign it with a common root cert. In the OP's scenario of a corporate environment, the common signing cert can rather easily be installed as a trusted CA on the client machines, and they will gladly accept these "faked" SSL certs for the traffic being proxied, as there will be no hostname mismatch.

In fact, this is exactly how software such as the Charles Web Debugging Proxy allows for inspection of SSL traffic without causing security errors in the browser, etc.
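For illustration, a rough sketch of the per-host cert generation (Python cryptography package; it assumes the common root cert and its private key are already loaded as ca_cert and ca_key):

    import datetime
    from cryptography import x509
    from cryptography.x509.oid import NameOID
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import rsa

    def make_host_cert(hostname, ca_cert, ca_key):
        # Fresh key pair for the "faked" leaf cert the proxy will present.
        key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
        now = datetime.datetime.utcnow()
        cert = (
            x509.CertificateBuilder()
            .subject_name(x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, hostname)]))
            .issuer_name(ca_cert.subject)                  # issued by the common root
            .public_key(key.public_key())
            .serial_number(x509.random_serial_number())
            .not_valid_before(now)
            .not_valid_after(now + datetime.timedelta(days=365))
            # Matching SAN so there is no hostname mismatch on the client.
            .add_extension(x509.SubjectAlternativeName([x509.DNSName(hostname)]),
                           critical=False)
            .sign(ca_key, hashes.SHA256())                 # signed by the trusted CA key
        )
        return cert, key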

imaginaryboy
+1  A: 

I think you should just use SSL and rely on an HTTP client library that does caching (e.g. WinInet on Windows). It's hard to imagine that the benefits of enterprise-wide caching are worth the pain of writing a custom encryption scheme or certificate fun on the proxy. Worse, with the encryption scheme you mention, doing asymmetric ciphers on the entity body sounds like a huge perf hit on the server side of your application; there is a reason that SSL uses symmetric ciphers for the actual payload of the connection.
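If you do go down the custom road anyway, the usual pattern is hybrid encryption: encrypt the bulk payload with a symmetric key and only wrap that small key asymmetrically, the same division of labour SSL makes. A rough sketch (Python cryptography package; the key objects are assumed to be loaded elsewhere):

    from cryptography.fernet import Fernet
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import padding

    def encrypt_payload(body, client_public_key):
        session_key = Fernet.generate_key()              # cheap symmetric key per message
        ciphertext = Fernet(session_key).encrypt(body)   # bulk data: symmetric (AES)
        wrapped_key = client_public_key.encrypt(         # only the small key is asymmetric
            session_key,
            padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                         algorithm=hashes.SHA256(), label=None),
        )
        return wrapped_key, ciphertext

    def decrypt_payload(wrapped_key, ciphertext, client_private_key):
        session_key = client_private_key.decrypt(
            wrapped_key,
            padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                         algorithm=hashes.SHA256(), label=None),
        )
        return Fernet(session_key).decrypt(ciphertext)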

Doubt
A: 

The application concerned is not a browser app; it's a desktop app pulling data over the internet. What is going to happen is that all instances of the app will be pulling the same piece of data at around the same time. This data needs to be secured, but I'm hoping to increase perf by having some instances of the app get a cached version from the corporate proxy server.

The data chunks are small, but they may be requested frequently. Essentially all app instances are going to request the same data as each other at the same time.

The data/message body on the server side will be pre-encrypted and cached in a distributed in-memory hash-table. Encryption will not be performed on a per-request basis.
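Roughly what I have in mind on the server side, as a sketch only: a plain dict stands in for the distributed hash-table (e.g. memcached), and a single shared symmetric key stands in for whatever scheme we settle on.

    from cryptography.fernet import Fernet

    SHARED_KEY = Fernet.generate_key()   # stand-in for the real key material
    ENCRYPTED_CACHE = {}                 # stand-in for the distributed hash-table

    def publish(resource_id, plaintext_body):
        # Encryption happens once, when the resource is (re)published,
        # not on every request.
        ENCRYPTED_CACHE[resource_id] = Fernet(SHARED_KEY).encrypt(plaintext_body)

    def handle_get(resource_id):
        # Every GET for the same resource returns identical ciphertext, so
        # intermediate HTTP caches can store and replay it.
        return ENCRYPTED_CACHE[resource_id]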

I'm also investigating using a message bus, such as NServiceBus, instead.

Damian Hickey
+1  A: 

Check out www.bluecoat.com. It's a commercial proxy that in fact CAN do HTTPS interception in order to block sites, restrict content, inspect for viruses and cache content (GETs).

A: 

How about setting up a server cache on the application server, behind the component that encrypts the HTTPS responses? This can be useful if you have a reverse-proxy setup.

I am thinking of something like this:

application server <---> Squid or Varnish (cache) <---> Apache (performs SSL encryption)
Jesse Hallett