So I was looking at the sample examples people have created for duplex communication, namely when hosted by IIS and consumed from Silverlight. There are plenty of examples of this out there (this MSDN article is great), but they all use the same paradigm:

User A connects to server A, it puts him in an in-memory list to receive future updates.
User B connects to server A, it notifies all users in list that someone "logged in".

... but what happens when

User C connects to server C, the in-memory list for server C doesn't contain User A or B.

The problem is that I'm looking to implement this in a clustered (web farm) environment. This complicates things because I cannot predict which machine will wind up fielding the WCF call, so relaying any message out to all other users is difficult.

The best scenario I can think of is to actually have the clients connect to some sort of routing service that takes the incoming request and forwards the client to a particular machine. Of course, then I'm losing the benefit of the web farm, since a single machine is effectively fielding all incoming requests.

A less effective solution is to have the service continually poll something (either a file on the file server or a table in the DB) looking for changes. Once changes are present, push them out to the clients. This seems like a very ugly baby, though.

What have I missed?

UPDATE - The routing system is impossible for my needs. My hosting company will not allow me to directly connect to a specific machine on the farm via IP. I can only connect to the generic load-balancer front end, so I cannot guarantee my users will wind up on the same server.

So far, we're down to polling the table in the db looking for changes. Still seems like an ugly baby.

A: 

You can configure your web farm with "Sticky IP".

That means that when a client connects to the web farm, it is routed to one machine, and all subsequent requests from that client go to the same machine on the farm. This works a bit like the routing service you've described in your question.

EDIT

It is probably simplest to implement a polling system where the Silverlight client asks the web server "is there anything new for me?"; the request would include the timestamp of the client's previous poll. The list of new things would be stored in a database table, so it doesn't matter which web server you hit.

Also, you need to watch out for limitations in Silverlight's WCF support; if I understand correctly, it does not implement all of WCF.

EDIT 2

In the case that you need to communicate the same data to all users, the call does not need to go all the way down to the database every time. The result can be cached in memory at the WCF service level, so subsequent clients get it from memory, giving you better performance and less load on the database.

EDIT 3

As long as you are using a Silverlight client, it is difficult for the clients to communicate directly with each other. There are two possibilities, although they require extra work/cost:

  • Use the Azure Service Bus: each client talks to an endpoint in the cloud, which is relayed into direct communication.
  • Drop Silverlight and use a client that can expose a WCF service endpoint. When the client starts up, it registers the endpoint with the server. Each client can then ask the server who is online and send a message directly to another client.
Shiraz Bhaiji
This isn't the correct solution, though, since it provides no way for me to communicate with all users at the same time, only with all users on the same machine. The examples I've seen all use in-memory objects/lists to push data to all connected clients. What about clients connected to a different machine entirely?
JustLoren
I'll clarify my question :)
JustLoren
Your database polling answer is viable, but I can't help but feel it would be CPU intensive on the server. Every 0.5s would be another DB hit asking "any new messages?" Are there any other solutions?
JustLoren
In regards to your Edit 2, that does not solve the problem of communicating between users on different machines in the farm. In fact, it explicitly ignores it, the same as your original proposition :)
JustLoren
+1  A: 

Assuming you don't need real-time notification, a typical method would be to use a backend session database or dedicated session server so that all your currently logged-in users are visible to all clustered machines. Then you could write a polling service to send change notifications, or something more advanced depending on your requirements.

In your example, you would move the 'in-memory' user list to a shared memory server or shared database. You could of course implement some sort of cluster update notification to send to all machines, but the complexity of that could be way beyond your needs.

Jeff
Could you expound upon "Shared Memory Server"?
JustLoren
In the Linux server world, you can run a memcached service. This is an in-memory session server that your application servers connect to over TCP/IP. It's basically a faster version (no SQL, in memory) of using a separate database server. It also has the advantage that if you use it on the same server as your application, you get the benefit of an 'application session' (similar to ASP/ASP.NET/Java) that lets you share global objects between user sessions.
Jeff
A: 

Can the servers communicate directly with each other? If so, you might want to set up private endpoints that only other servers in the farm can connect to. Then when server C receives a message, it sends a message to server A informing it of this fact, and then server A can forward this along to its clients.

Yuliy
Which is what Jeff proposed above with the cluster notification services. I've got a ticket in with my webhost to see if this is a viable solution, but I'm betting I'll be unable to establish connections between them. :(
JustLoren
+1  A: 

Use Memcached or MSMQ.

With Memcached, you would use it as the single source of truth for all items that need to be broadcast. So, when you get a client login, you dump some simple data into Memcached, and the other servers pick it up from there to keep their lists current. Then, when you publish the information, query Memcached.

With MSMQ, push the login info to a queue, then implement listeners on both servers, reading from the queue and updating their in-memory lists of "publishable" information. That way, both servers are kept informed of the data that needs to get published.
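A toy model of the MSMQ idea (Python stdlib queues standing in for MSMQ; note one detail the sketch makes explicit: a point-to-point queue delivers each message to only one reader, so for both servers to stay informed the publisher fans each event out to one queue per server):

```python
import queue

# One queue per server, standing in for per-server MSMQ queues.
server_queues = {"A": queue.Queue(), "B": queue.Queue()}
online = {"A": set(), "B": set()}   # each server's in-memory list

def publish(event, user):
    """Fan the event out so every server's listener sees it."""
    for q in server_queues.values():
        q.put((event, user))

def drain(server):
    """The listener each server runs: apply queued events to its list."""
    q = server_queues[server]
    while not q.empty():
        event, user = q.get()
        if event == "login":
            online[server].add(user)
        elif event == "logoff":
            online[server].discard(user)

publish("login", "User A")
drain("A")
drain("B")
# both servers' in-memory lists now contain "User A"
```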

Doanair
My main problem is lack of control over the servers (at this point), so memcached is straight out. MSMQ seems like I have to be able to connect directly from one server to another within the farm. I'm not certain how possible that is, but I'll check it out.
JustLoren
+1  A: 

Does your hosting provider use MS SQL for the database server? If you have full rights to the MS SQL database, you could implement T-SQL triggers. You could write a trigger that executes code when a database CRUD operation occurs. With the current version of MS SQL you can even execute managed (C#/VB.NET) code.

This solution would be complex but possible. I would use a central MS SQL server for your cluster and write some T-SQL trigger code. When the records you care about are modified, I would have the SQL server send HTTP web requests to the other servers in the cluster (assuming that a server in the cluster can arbitrarily talk to other/all cluster servers) to let them know about any changes. Then each server could use the global application cache to broadcast changes to each session on the server.
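The register-then-notify flow can be sketched like this; everything here is a Python stand-in, with `notify` representing the HTTP request the SQL box would send to each web server, and the IP addresses purely illustrative:

```python
registered = []   # internal addresses the SQL server has learned
received = {}     # what each web server's handler has seen

def register(addr):
    """Each web server calls this once at application start."""
    if addr not in registered:
        registered.append(addr)

def notify(addr, change):
    """Stand-in for an HTTP request from the SQL box to one server."""
    received.setdefault(addr, []).append(change)

def on_table_changed(change):
    """What the trigger would do after a relevant CRUD operation."""
    for addr in registered:
        notify(addr, change)

register("10.0.0.11")   # illustrative internal IPs
register("10.0.0.12")
on_table_changed("User A logged in")
```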

That's my off-the-top suggestion.

-Jeff

Jeff
It's a terrific idea, but is subject to the same problem of being unable to have the servers in the cluster communicate with each other.
JustLoren
Assuming your MS SQL server is in the cluster LAN, you could solve that problem by writing a web page/service on the SQL server that each clustered web server would call on application start (Global.asax) to register itself and its internal IP address with your MS SQL server. Then you would have a list of each separate cluster server. If the SQL server is outside the cluster, you're still stuck. Good luck.
Jeff
+1  A: 

Assuming you have zero control of your environment beyond what you can install on each server (i.e. no MSMQ, no ESB, etc.), then I would look into using WCF to communicate between servers. The simple problem seems to be that you have an in-memory list that needs to remain synced between the two servers, and whenever the contents of the list changes, users of both servers need to be notified.

With an internal WCF service that both servers host and use, you could use simple fire-and-forget messaging to keep the lists in sync. Imagine the following scenario:

  1. 'User A' logs in to Server A
    1. Add 'User A' to list of online users
    2. Fire message to Server B to notify it of added 'User A'
      1. Causes Server B to add 'User A' to its list of online users
      2. Causes Server B to notify all users of user login
    3. Notify all users on Server A of user login
  2. 'User B' logs in to Server B
    1. Add 'User B' to list of online users
    2. Fire message to Server A to notify it of added 'User B'
      1. Causes Server A to add 'User B' to its list of online users
      2. Causes Server A to notify all users of user login
    3. Notify all users on Server B of user login
  3. 'User A' logs off of Server A
    1. Remove 'User A' from list of online users
    2. Fire message to Server B to notify it of removed 'User A'
      1. Causes Server B to remove 'User A' from its list of online users
      2. Causes Server B to notify all users of user logoff
    3. Notify all users on Server A of user logoff
  4. Periodically, have Server A and Server B sync their lists with each other (could be implemented ping-pong style: one server sends its list to the other, which merges and sends the merged list back)

The above scenario obviously assumes that you have the ability to install WCF services on your hosted servers such that they can communicate with each other. I am not sure whether each server can internally discover the others, as you mentioned all traffic has to go through a load balancer.
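The scenario can be modeled in a few lines; this is a Python toy, with direct method calls standing in for the fire-and-forget WCF messages, just to show that the two lists converge and that the periodic sync covers a dropped message:

```python
class Server:
    """Toy stand-in for one farm node holding an online-user list."""
    def __init__(self, name):
        self.name = name
        self.peer = None
        self.online = set()

    def login(self, user):
        self.online.add(user)
        self.peer.on_peer_login(user)    # fire-and-forget in WCF terms

    def logoff(self, user):
        self.online.discard(user)
        self.peer.on_peer_logoff(user)

    def on_peer_login(self, user):
        self.online.add(user)            # would also push to clients

    def on_peer_logoff(self, user):
        self.online.discard(user)

    def sync(self):
        """Ping-pong: merge both lists and share the result."""
        merged = self.online | self.peer.online
        self.online = self.peer.online = set(merged)

a, b = Server("A"), Server("B")
a.peer, b.peer = b, a
a.login("User A")
b.login("User B")
a.logoff("User A")
# both servers now list only "User B"
```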

jrista
Your last paragraph is the crux of the issue. With the inability to communicate between servers, your solution becomes impossible. I would like to point out that it is still an *excellent* solution and very well written!
JustLoren
It may still be possible to get the IPs of each server. You have the ability to execute .NET code, and .NET provides very rich networking capabilities. You might be able to write a simple one-off web page that reports the internal (LAN) IP address or NetBIOS name of each server you are renting. That's all you would really need to know to get this going. Just hope the hosting company doesn't notice and take issue with that. ;-)
jrista
A: 

The Velocity project from Microsoft might be a faster solution for you than a database backend; it's an in-memory cache layer with all the clustering/fail-over fancy stuff. You can slide it in between the web and DB layers, and its API is quite simple and consistent with the rest of .NET too.

A: 

Looks like you need to use a peer-to-peer network (netPeerTcpBinding). I'm not sure if your hosting environment would support this.

http://msdn.microsoft.com/en-us/library/cc297274.aspx

JontyMC