As part of an experiment, I want to write an OpenGL-based UI server for applications, similar in architecture to X11 or Quartz: a core process renders objects into a single viewport, but all graphical objects are controlled by remote processes.

The idea is that the view's stability depends only on the core process. If a client process segfaults, its allocated resources should be safely freed - a requirement for that feature is being able to reliably detect whether a client process has crashed.

What is the best practice here?

+1  A: 

I think this should be detected as an event on the connection to the client, just as with any other client/server architecture.

If you use sockets, the server will eventually see that the other side has closed the connection (when the client process crashes, the kernel closes its end of the socket). You can detect that, look up the owning client in the server's records, and clean out all of its resources.
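As a minimal sketch of that idea, assuming a Unix socket server and a hypothetical free_client_resources() cleanup hook: the loop below uses poll() to wait on a client connection, treats POLLHUP/POLLERR or a zero-byte recv() as "client gone", and then releases that client's objects.

/* Sketch: detect a client crash via its socket. When the client process
 * dies, the kernel closes its end of the connection, so poll() reports
 * POLLHUP/POLLIN and recv() returns 0. */
#include <poll.h>
#include <stdio.h>
#include <sys/socket.h>
#include <unistd.h>

/* Hypothetical cleanup hook: release all graphical objects owned by this client. */
static void free_client_resources(int client_fd)
{
    printf("client on fd %d disconnected, freeing its objects\n", client_fd);
}

/* Watch one client connection; returns once the client is gone. */
static void watch_client(int client_fd)
{
    struct pollfd pfd = { .fd = client_fd, .events = POLLIN };

    for (;;) {
        if (poll(&pfd, 1, -1) < 0)
            break;                          /* poll error: treat as disconnect */

        if (pfd.revents & (POLLHUP | POLLERR))
            break;                          /* peer hung up or connection error */

        if (pfd.revents & POLLIN) {
            char buf[4096];
            ssize_t n = recv(client_fd, buf, sizeof buf, 0);
            if (n <= 0)
                break;                      /* 0 = orderly close, <0 = error */
            /* ...otherwise handle n bytes of client protocol data... */
        }
    }

    free_client_resources(client_fd);
    close(client_fd);
}

In a real server you would of course multiplex all client sockets in one poll() set rather than watching each in isolation, but the crash-detection logic is the same.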

It would be very strange for the server to look for clients directly (through process IDs or the like), and that would also needlessly limit your architecture to running locally rather than across a network.

unwind