As part of an experiment, I want to write an OpenGL-based UI server for applications, similar in architecture to X11 or Quartz: a core process renders objects into a single viewport, but all graphical objects are controlled by remote client processes.
The idea is that the view's stability depends only on the core process. If a client process segfaults, its allocated resources would be safely freed; a prerequisite for that feature is being able to reliably detect whether a client process has crashed.
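One mechanism I'm aware of is to give each client its own Unix-domain socket connection to the server: if the client exits or crashes for any reason, the kernel closes its end of the socket, and the server sees a hangup or EOF. Here's a minimal sketch of that idea (names like `client_fd` and the cleanup hook are mine, not from any real API):

```c
/* Sketch: detect client death via its per-client Unix-domain socket.
 * When the client process exits or crashes, the kernel closes its end,
 * and poll() reports POLLHUP (or read() returns 0). */
#include <poll.h>
#include <unistd.h>
#include <stdio.h>

void watch_client(int client_fd)
{
    struct pollfd pfd = { .fd = client_fd, .events = POLLIN };

    for (;;) {
        if (poll(&pfd, 1, -1) < 0) {
            perror("poll");
            break;
        }
        if (pfd.revents & (POLLHUP | POLLERR)) {
            /* Peer hung up or errored: the client is gone. */
            break;
        }
        if (pfd.revents & POLLIN) {
            char buf[256];
            ssize_t n = read(client_fd, buf, sizeof buf);
            if (n <= 0)  /* 0 = EOF (client exited), <0 = error */
                break;
            /* ... handle protocol message in buf[0..n) ... */
        }
    }

    close(client_fd);
    /* free_client_resources(client_fd);  -- hypothetical cleanup hook
     * that releases everything the server allocated for this client */
}
```

In a real server this would presumably be one fd among many in a single event loop rather than a blocking per-client call, but the detection principle is the same.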
What is the best practice here?