If all you want is a thin CRUD layer exposed as a web service (to provide database access without a VPN, etc.), then you can do the same thing using WCF Data Services without all the effort, and have something that's a great deal more flexible (you can write Linq against the proxies, for example).
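For example, here is a rough sketch of that kind of client-side LINQ query. The Order type, the "Orders" entity set, and the service URI are all made up for illustration; in practice you would usually let the tooling generate a typed proxy for you:

    using System;
    using System.Linq;
    using System.Data.Services.Client;
    using System.Data.Services.Common;

    // Hypothetical entity type matching an "Orders" entity set on the service.
    [DataServiceKey("OrderId")]
    public class Order
    {
        public int OrderId { get; set; }
        public decimal Freight { get; set; }
    }

    public static class Example
    {
        public static void Main()
        {
            var context = new DataServiceContext(
                new Uri("http://example.com/Northwind.svc"));

            // The LINQ query is translated into an OData URI and executed on
            // the server; only the matching rows come back over the wire.
            var expensiveOrders = context.CreateQuery<Order>("Orders")
                                         .Where(o => o.Freight > 100m)
                                         .ToList();

            Console.WriteLine(expensiveOrders.Count);
        }
    }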
What you're calling the service layer ought to expose domain objects, so assuming you have a domain model and want to expose it using a WCF web service (REST or otherwise), the answers to your questions are:
WCF is very fast. It obviously isn't free, but in practice, if you're connecting to the services over a network connection, any "slowness" you experience will be down to the latency/bandwidth limitations of the network itself. The only exception is the setup cost of the WCF client (i.e. the channel), which is why you generally want to keep channels alive as long as possible; they are not throwaway objects like a DataContext.
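A rough sketch of the kind of reuse I mean (the IOrderStatusService contract and the address are made up; handling of faulted channels is omitted for brevity):

    using System;
    using System.ServiceModel;

    [ServiceContract]
    public interface IOrderStatusService
    {
        [OperationContract]
        string GetOrderStatus(int orderId);
    }

    public static class OrderStatusClient
    {
        // Building the ChannelFactory is the expensive part, so create it once
        // and reuse it for the lifetime of the application.
        private static readonly ChannelFactory<IOrderStatusService> Factory =
            new ChannelFactory<IOrderStatusService>(
                new BasicHttpBinding(),
                new EndpointAddress("http://example.com/OrderStatusService.svc"));

        public static string GetOrderStatus(int orderId)
        {
            IOrderStatusService channel = Factory.CreateChannel();
            try
            {
                return channel.GetOrderStatus(orderId);
            }
            finally
            {
                // In production code you would Abort() a faulted channel
                // instead of closing it; kept simple here.
                ((IClientChannel)channel).Close();
            }
        }
    }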
Method overloading is not supported over the wire. You can overload methods within the service assembly and differentiate them via the OperationContract attribute (specifically its Name property), but to an outside client they will appear to be different web methods with different names.
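A quick sketch of what that looks like (ICustomerService and its types are hypothetical):

    using System.Runtime.Serialization;
    using System.ServiceModel;

    [DataContract]
    public class Customer
    {
        [DataMember] public int Id { get; set; }
        [DataMember] public string Name { get; set; }
    }

    [ServiceContract]
    public interface ICustomerService
    {
        // Overloads are fine inside the assembly, but each one needs a unique
        // name on the wire; clients see two distinct operations.
        [OperationContract(Name = "GetCustomerById")]
        Customer GetCustomer(int id);

        [OperationContract(Name = "GetCustomerByName")]
        Customer GetCustomer(string name);
    }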
However, if you're designing web services, even REST services, the very first thing you need to do is change your perspective from an RPC-based ("function") mindset to a document-based ("message") one. In other words, instead of having 4 methods that take different combinations of 4 possible arguments, you should define a "request" class that exposes all 4 of those parameters as properties. This is often considered bad design for "local" code, but it is good design for web services.
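As a sketch of that shift, assuming a hypothetical order-search operation with four optional criteria:

    using System;
    using System.Collections.Generic;
    using System.Runtime.Serialization;
    using System.ServiceModel;

    // One request message carrying every possible parameter, rather than
    // four overloads taking different combinations of arguments.
    [DataContract]
    public class FindOrdersRequest
    {
        [DataMember] public int? CustomerId { get; set; }
        [DataMember] public DateTime? PlacedAfter { get; set; }
        [DataMember] public DateTime? PlacedBefore { get; set; }
        [DataMember] public string Status { get; set; }
    }

    [DataContract]
    public class FindOrdersResponse
    {
        [DataMember] public List<int> OrderIds { get; set; }
    }

    [ServiceContract]
    public interface IOrderQueryService
    {
        // A single operation; the client fills in whichever properties apply.
        [OperationContract]
        FindOrdersResponse FindOrders(FindOrdersRequest request);
    }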
In the same vein, using a web service to expose a "repository" is typically considered an anti-pattern (with the exception of WCF Data Services, which serves a very different purpose). The reason is that a web service is supposed to provide business logic (which I assume is what your service layer does). It should provide very coarse-grained operations: atomic transactions in which the client supplies all of the information required to perform a single complete transaction in one call, instead of invoking several methods in succession.
In other words, if you find, when trying to translate your services into web services, that it's necessary to invoke several operations on several different services in order to perform a single "unit of work", then you should think about redesigning the services to provide better abstractions over the work. The overall design should minimize "chatter" between client and service.
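To make the contrast concrete, here is a hypothetical before/after: a chatty repository-style contract versus a single coarse-grained operation that carries the whole unit of work in one message.

    using System.Collections.Generic;
    using System.Runtime.Serialization;
    using System.ServiceModel;

    // Chatty: the client has to make three calls, and the "transaction" only
    // exists in the client's head.
    [ServiceContract]
    public interface IOrderRepositoryService
    {
        [OperationContract] int CreateOrder(int customerId);
        [OperationContract] void AddOrderLine(int orderId, int productId, int quantity);
        [OperationContract] void SubmitOrder(int orderId);
    }

    // Coarse-grained: everything needed for the unit of work travels in one
    // message, and the service can commit or reject it atomically.
    [DataContract]
    public class OrderLine
    {
        [DataMember] public int ProductId { get; set; }
        [DataMember] public int Quantity { get; set; }
    }

    [DataContract]
    public class PlaceOrderRequest
    {
        [DataMember] public int CustomerId { get; set; }
        [DataMember] public List<OrderLine> Lines { get; set; }
    }

    [ServiceContract]
    public interface IOrderService
    {
        [OperationContract]
        int PlaceOrder(PlaceOrderRequest request);
    }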
So to summarize, it probably makes very little sense for you to have a "service layer" that lives on the client which talks to a "data layer" that's exposed as a web service, unless you need to solve the very specific problem of providing CRUD operations over a WAN. From an architectural perspective, what makes a lot more sense is to expose the actual services through WCF, and move toward more thin-client applications.
Keep in mind, however, that going down the "SOA" path, while it may have many long-term benefits, is likely to cause some short-term pain. You basically have another library to maintain, another library to test, another point of failure, another thing you need to document. If you don't have a large, distributed architecture, or plan to in the near future, then it may be too early to start integrating WCF services beyond the WCF Data Services framework mentioned at the top.
Also, you don't specify the domain or the kind of application you're developing, but REST as a specific service model imposes a number of trade-offs with respect to security, distributed transactions, etc. If these services are intended for internal or B2B consumption - i.e. if they are "enterprise" services - you really should consider SOAP instead, which gives you access to WS-Security, Active Directory integration and all that good stuff. REST is great for public apps and mashups but isn't appropriate for every scenario.