I am completely new to WCF. I was pretty sure it was going to work like regular web services - and I'm also pretty sure I was doing that wrong too, but now I want to make sure I'm doing it right.

Our ASP.NET app connects to the WCF service across the internet. I have implemented basic security and am using SSL. It is working, but slower than when we had the regular web services going, even though the data being returned is fundamentally the same.

When I was using the regular web service, any time I needed data I would create a new service object and call the function for what I needed. This seemed to work OK, but I imagine it's not the best approach, especially with thousands of users connecting at the same time. So when I converted to WCF, I decided to keep one client open and use it for everyone connecting to the site. I put it in the cache, and when the cache dumped the object, a callback function disposed of it.
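
For reference, here is a rough sketch of that caching approach (not my exact code; MyServiceClient stands in for the generated WCF proxy type and the expiration is arbitrary):

    using System;
    using System.Web;
    using System.Web.Caching;

    // Rough sketch of the shared-client-in-cache approach described above.
    // MyServiceClient is a placeholder for the generated WCF proxy type.
    public static class SharedServiceClient
    {
        private const string CacheKey = "SharedWcfClient";

        public static MyServiceClient Get()
        {
            var client = HttpRuntime.Cache[CacheKey] as MyServiceClient;
            if (client == null)
            {
                client = new MyServiceClient();
                HttpRuntime.Cache.Insert(
                    CacheKey,
                    client,
                    null,                              // no cache dependencies
                    DateTime.Now.AddMinutes(20),       // arbitrary absolute expiration
                    Cache.NoSlidingExpiration,
                    CacheItemPriority.Default,
                    OnRemoved);                        // dispose when the cache evicts it
            }
            return client;
        }

        private static void OnRemoved(string key, object value, CacheItemRemovedReason reason)
        {
            var client = value as MyServiceClient;
            if (client == null) return;
            try { client.Close(); }                    // graceful close
            catch { client.Abort(); }                  // abort if the channel faulted
        }
    }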

It didn't occur to me until after I had made this change that it might pose a problem for multiple people connecting at once: if person A requests data, person B has to wait for that call to finish before their own data is fetched through the service.

So I changed it to be session-based. Either I implemented this wrong or it simply backfired, because it didn't work well at all: the client would time out, fault, or just plain not work. I changed it back to the cached client for now and it seems to be working fine (except that it's slow).

What is a "best practice" for this scenario? Do I create the client on the fly when it's needed, create one session based (and figure out what I did wrong), or keep it as is and use the one client cached method?

+1  A: 

I typically create a client on the fly as you mentioned, but make sure you dispose of it after the request is complete. I've done this without much of an issue, but to be honest I don't have 1000+ users hitting the exact same service at the same time.
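
A rough sketch of that per-call pattern (MyServiceClient and GetData are placeholder names, not your actual proxy). The explicit Close/Abort handling matters because closing or disposing a faulted channel can itself throw:

    using System;
    using System.ServiceModel;

    // Create a fresh proxy per request, close it on success, abort it on failure.
    public static class ServiceCaller
    {
        public static string FetchData(int id)
        {
            var client = new MyServiceClient();
            try
            {
                var result = client.GetData(id);
                client.Close();                 // graceful shutdown of the channel
                return result;
            }
            catch (CommunicationException)
            {
                client.Abort();                 // channel is unusable; don't call Close()
                throw;
            }
            catch (TimeoutException)
            {
                client.Abort();
                throw;
            }
        }
    }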

You can find the exact implementation details in this blog post if you are interested.

Just to clarify something you mentioned in the question - when you say "regular web service", are you talking about ASMX or something else?

Toran Billups
Yea, ASMX. The disposing was the thing I was worried about the most. I think I need to do some refactoring to clean this whole thing up.
TheCodeMonk
+4  A: 

This sort of problem is usually solved by maintaining a pool. Rather than having a single service object at one extreme or one per user at the other, the pool holds a collection of service objects sized to support the concurrent demand for their services. Hence the pool should only grow to the point of maximum demand.

You would make sure that objects drop out of the pool before any timeout inside the service object expires, and also ensure they are removed if they hit any kind of exception.

This way you don't have multiple client requests waiting for access to a single object, nor do you have idle objects hanging around in a service, likely dying of old age before they can ever be reused.
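
As a very rough sketch of the idea (illustrative only - MyServiceClient stands in for the generated proxy type, and real code would also cap the pool size and age out idle proxies before the channel's own timeouts kick in):

    using System.Collections.Generic;
    using System.ServiceModel;

    // Minimal proxy pool sketch: reuse healthy proxies, discard faulted ones,
    // and grow on demand when the pool is empty.
    public class ProxyPool
    {
        private readonly object _sync = new object();
        private readonly Stack<MyServiceClient> _pool = new Stack<MyServiceClient>();

        public MyServiceClient Take()
        {
            lock (_sync)
            {
                while (_pool.Count > 0)
                {
                    var client = _pool.Pop();
                    if (client.State == CommunicationState.Opened)
                        return client;          // reuse a healthy proxy
                    client.Abort();             // faulted or closed: discard it
                }
            }
            return new MyServiceClient();       // pool empty: grow on demand
        }

        public void Return(MyServiceClient client)
        {
            // Only healthy proxies go back into the pool; anything faulted is dropped.
            if (client.State == CommunicationState.Opened)
            {
                lock (_sync) { _pool.Push(client); }
            }
            else
            {
                client.Abort();
            }
        }
    }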

AnthonyWJones
Never thought of creating a pool of clients. This could be the best of both worlds. Not creating a ton of objects on the fly and also not slowing down when there is a lot of activity. Great idea!
TheCodeMonk
I like the technique mentioned, any way you could point to some implementation of this so I can play around?
Toran Billups
As scary as this is going to sound, I already have the makings of a good implementation of this. I am going to test it, and if it works as I expect, I will write a blog post documenting what I did.
TheCodeMonk
@Toran- have a look at this http://blogs.msdn.com/wenlong/archive/2007/10/27/performance-improvement-of-wcf-client-proxy-creation-and-best-practices.aspx and this http://blogs.msdn.com/wenlong/archive/2007/11/14/a-sample-for-wcf-client-proxy-pooling.aspx
RichardOD
+2  A: 

The general best practice for WCF services is to use the per-call model whenever possible, where each request gets its own single-use service instance. This gives you the best throughput and the simplest behavior in the service instance. So whenever possible, and unless you have a really compelling reason not to, use this model.
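
For example, per-call instancing is just an attribute on the service implementation (a minimal sketch with placeholder names, not your actual contract):

    using System.ServiceModel;

    // Minimal sketch of per-call instancing: WCF spins up a new service
    // instance for each request and tears it down afterwards.
    // IMyService and MyService are placeholder names.
    [ServiceContract]
    public interface IMyService
    {
        [OperationContract]
        string GetData(int id);
    }

    [ServiceBehavior(InstanceContextMode = InstanceContextMode.PerCall)]
    public class MyService : IMyService
    {
        public string GetData(int id)
        {
            return "data for " + id;
        }
    }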

It seems that in your case, creating the service instance is a rather expensive operation. Maybe you need to clean this up somehow: make the actual service instance very lean and lightweight so it can be created and disposed of in the blink of an eye (or less), and then have some background worker processes (or possibly a pool of them, as suggested by Anthony) which you can call from your actual service instances.

Marc

marc_s