We are developing a Windows Forms application that will be installed on about 1,000 employee PCs. Users may run multiple instances of the application at the same time. The clients are all on a single intranet.
Actions taken in the application may change database records, and those changes in turn must be communicated to the other clients so their UIs are updated.
Our team has talked about two different approaches:
1. Multicast packets
The source client modifies the records and then sends out a multicast packet with a payload indicating that something has changed. The other clients receive this and fetch the data specified. We need to account for the case where the packet is not received, falling back to actively retrieving the data.
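To make the idea concrete, here is a minimal sketch of the multicast path as we currently picture it. The class name, the group address 239.0.0.1, and port 4567 are placeholders, not settled choices:

```csharp
using System;
using System.Net;
using System.Net.Sockets;
using System.Text;

// Hypothetical change notifier: the client that wrote the records publishes a
// small "table:recordId" message to a multicast group; every client joins the
// group and listens for those messages.
static class ChangeNotifier
{
    // Placeholder group address and port; real values would come from configuration.
    static readonly IPAddress Group = IPAddress.Parse("239.0.0.1");
    const int Port = 4567;

    public static void Publish(string table, int recordId)
    {
        using (var sender = new UdpClient())
        {
            byte[] payload = Encoding.UTF8.GetBytes(string.Format("{0}:{1}", table, recordId));
            sender.Send(payload, payload.Length, new IPEndPoint(Group, Port));
        }
    }

    public static void Listen(Action<string> onChange)
    {
        var receiver = new UdpClient();
        // Allow several instances on the same machine to bind the same port.
        receiver.Client.SetSocketOption(SocketOptionLevel.Socket, SocketOptionName.ReuseAddress, true);
        receiver.Client.Bind(new IPEndPoint(IPAddress.Any, Port));
        receiver.JoinMulticastGroup(Group);

        var remote = new IPEndPoint(IPAddress.Any, 0);
        while (true)
        {
            byte[] data = receiver.Receive(ref remote);   // blocks until a packet arrives
            onChange(Encoding.UTF8.GetString(data));      // e.g. "Orders:42" -> refetch that record
        }
    }
}
```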
My question at this point is: how does a client know it didn't receive a packet? (You don't know what you don't know.) Which brings us to some sort of event log with timestamps in the database, with UI controls tracking the last time they were updated. When they come into focus, they check their timestamps and refresh as needed.
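The fallback we are imagining looks roughly like this; the ChangeLog table and its columns are made-up names, just to show the shape of the check:

```csharp
using System;
using System.Data.SqlClient;

// Hypothetical staleness check: each UI control remembers when it last refreshed
// and, when it comes into focus, asks whether anything it displays has changed since.
static class ChangeLog
{
    public static bool HasChangedSince(string connectionString, string tableName, DateTime lastRefresh)
    {
        using (var conn = new SqlConnection(connectionString))
        using (var cmd = new SqlCommand(
            "SELECT COUNT(*) FROM ChangeLog WHERE TableName = @table AND ChangedAt > @since", conn))
        {
            cmd.Parameters.AddWithValue("@table", tableName);
            cmd.Parameters.AddWithValue("@since", lastRefresh);
            conn.Open();
            return (int)cmd.ExecuteScalar() > 0;
        }
    }
}
```

A control would call HasChangedSince from its focus/activation handler and only re-query its data when it returns true.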
Someone else suggested that the UI elements would just reload every time they come into focus (think switching modes in Outlook, or bringing controls to the front of a stacked workspace with CAB), and that the multicast is only there to tell clients that their current context has changed. If they miss the packet, they work with stale data until they change modes and come back.
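A bare-bones version of that reload-on-focus idea, using a plain WinForms Enter event in place of CAB's workspace activation, purely for illustration:

```csharp
using System;
using System.Windows.Forms;

// Hypothetical view that simply re-queries whenever it regains focus,
// rather than trying to catch up on every notification it may have missed.
class OrdersView : UserControl
{
    public OrdersView()
    {
        Enter += delegate { Reload(); };   // fires each time the control gets focus
    }

    private void Reload()
    {
        // Re-query the database and rebind the grid/list here.
    }
}
```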
2. WCF and Callbacks
Clients register with the WCF service for callbacks over a TCP binding. The primary technical concern with this is the server maintaining many open sockets. We have read that the connection isn't held open in the traditional sense; it is put to sleep for a maximum of 90 seconds and then re-established at that point. We have also read about the maximum number of open connections a Windows Server 2003 machine can handle, and how to modify that limit in the registry.
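For concreteness, this is the rough shape of the duplex contract we have been looking at; IChangeService/IChangeCallback and the rest are our working names and assumptions, not finished code:

```csharp
using System.ServiceModel;

// Hypothetical duplex contract: clients call Subscribe over net.tcp and the
// service pushes RecordChanged notifications back over the callback channel.
[ServiceContract(CallbackContract = typeof(IChangeCallback))]
public interface IChangeService
{
    [OperationContract]
    void Subscribe();
}

public interface IChangeCallback
{
    [OperationContract(IsOneWay = true)]
    void RecordChanged(string table, int recordId);
}

// Client side: implement the callback and open a duplex channel to the service.
public class ChangeCallback : IChangeCallback
{
    public void RecordChanged(string table, int recordId)
    {
        // Marshal to the UI thread and refresh the affected views here.
    }
}

public static class ChangeClient
{
    public static IChangeService Connect(string address)
    {
        var context = new InstanceContext(new ChangeCallback());
        var factory = new DuplexChannelFactory<IChangeService>(
            context, new NetTcpBinding(), new EndpointAddress(address));
        IChangeService service = factory.CreateChannel();
        service.Subscribe();
        return service;
    }
}
```

On the service side, Subscribe would capture OperationContext.Current.GetCallbackChannel<IChangeCallback>() and hold it in a subscriber list, which is exactly where the many-open-connections concern comes from.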
If we have 1,000 open socket connections to the server, is this going to fall apart?
If anyone has faced this same situation and tried or evaluated the WCF approach, we would love to hear about it.