I have multiple app processes that each connect to servers and receive data from them. Often the servers being connected to and the data being retrieved overlap between processes. So there is a lot of unnecessary duplication of data across the network, more connections than necessary (which taxes the servers), and the data ends up stored redundantly in each app's memory.
One solution would be to combine the multiple app processes into a single one -- but for the most part they really are logically distinct, and that could be years of work.
Unfortunately, latency is critically important and the volume of data is huge. Any one datum may not be big, but once a client makes a request the server sends a rapid stream of updates as the data changes, which can reach upwards of 20 MB/s, and all of it needs to reach the requesting apps with the shortest possible delay.
The solution that comes to mind is to write a local daemon process that the app processes would request data from. The daemon would check whether a connection to the appropriate server already exists and, if not, open one. It would then receive the data and hand it to the requesting apps via shared memory (due to the latency concern; otherwise I'd just use sockets).
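Concretely, what I'm picturing for the handoff is a single-writer ring buffer in POSIX shared memory that each app maps read-only and tails with its own cursor. This is only a rough sketch of the daemon side: the shm name and ring size are made up, and message framing, reader wakeup (futex/eventfd), and error recovery are all left out; it also assumes lock-free 64-bit atomics so the counter is safe to share across processes.

```c
#include <fcntl.h>
#include <stdatomic.h>
#include <stdint.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

#define RING_BYTES (1u << 24)            /* 16 MiB of payload space (placeholder) */

struct ring {
    _Atomic uint64_t write_pos;          /* total bytes ever written (never wraps) */
    char data[RING_BYTES];               /* payload, addressed modulo RING_BYTES */
};

static struct ring *ring_create(const char *name)
{
    int fd = shm_open(name, O_CREAT | O_RDWR, 0600);
    if (fd < 0)
        return NULL;
    if (ftruncate(fd, sizeof(struct ring)) < 0) {
        close(fd);
        return NULL;
    }
    void *p = mmap(NULL, sizeof(struct ring), PROT_READ | PROT_WRITE,
                   MAP_SHARED, fd, 0);
    close(fd);                           /* the mapping outlives the fd */
    return p == MAP_FAILED ? NULL : p;
}

/* Daemon: copy one server update (len <= RING_BYTES) into the ring, then
 * publish the new position with a release store so readers that observe the
 * counter also observe the payload bytes. */
static void ring_publish(struct ring *r, const void *buf, size_t len)
{
    uint64_t pos = atomic_load_explicit(&r->write_pos, memory_order_relaxed);
    size_t off = (size_t)(pos % RING_BYTES);
    size_t first = len < RING_BYTES - off ? len : RING_BYTES - off;
    memcpy(r->data + off, buf, first);
    memcpy(r->data, (const char *)buf + first, len - first);
    atomic_store_explicit(&r->write_pos, pos + len, memory_order_release);
}
```

Each app would map the same region read-only, keep its own read cursor, and treat write_pos getting more than RING_BYTES ahead of that cursor as an overrun it has to resynchronize from (on older glibc, shm_open also needs linking with -lrt).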
A simpler short-term idea, which would only solve the redundant connections, would be to use Unix domain sockets (this will run on a Unix OS, though I prefer to stick to cross-platform libs when I can) to pass the socket descriptor to all the processes so they share a single connection. The issue with this is consuming the buffer: I want every process to see everything coming over the socket, but if I understand right, a read() in one process consumes those bytes from the connection's single receive buffer, so the other processes never see that data on their next read (a stream socket has no offset to rewind; once the bytes are read they're gone).
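As I understand it, the standard way to do the descriptor sharing would be SCM_RIGHTS ancillary data over the Unix domain socket, something like this on the sending side (unix_sock and fd_to_share are placeholders; the receiver mirrors it with recvmsg() and CMSG_FIRSTHDR()):

```c
#include <string.h>
#include <sys/socket.h>
#include <sys/uio.h>

static int send_fd(int unix_sock, int fd_to_share)
{
    char byte = 0;                       /* at least one byte of real data is required */
    struct iovec iov = { .iov_base = &byte, .iov_len = 1 };

    union {                              /* correctly aligned space for one cmsg + one fd */
        char buf[CMSG_SPACE(sizeof(int))];
        struct cmsghdr align;
    } u;
    memset(&u, 0, sizeof(u));

    struct msghdr msg = {
        .msg_iov = &iov,
        .msg_iovlen = 1,
        .msg_control = u.buf,
        .msg_controllen = sizeof(u.buf),
    };

    struct cmsghdr *cmsg = CMSG_FIRSTHDR(&msg);
    cmsg->cmsg_level = SOL_SOCKET;
    cmsg->cmsg_type  = SCM_RIGHTS;       /* kernel installs a duplicate fd in the peer */
    cmsg->cmsg_len   = CMSG_LEN(sizeof(int));
    memcpy(CMSG_DATA(cmsg), &fd_to_share, sizeof(int));

    return sendmsg(unix_sock, &msg, 0) < 0 ? -1 : 0;
}
```

The kernel effectively dup()s the descriptor into the receiving process, so all the processes end up referring to the same open connection and the same receive buffer; that's exactly why the consumption problem above doesn't go away.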