A few years ago, I designed and implemented a critical business system that used .NET Remoting. We had a client implemented as a Windows Forms GUI, a server implemented as a Windows Service, and a SQL Server database.
I designed for troubleshooting, debugging, and development, so one of my first design criteria was that I could trivially remove the entire .NET Remoting implementation and run the whole system on my desktop. Deactivating remoting was as simple as flipping a single boolean configuration setting to false. I could then troubleshoot, debug, and develop without the overhead or interference of .NET Remoting.
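As a rough sketch of that single switch (the key name and defaults here are my own invention, and it assumes the classic `System.Configuration` App.config mechanism):

```csharp
// App.config (same key present in both the client and the server):
// <appSettings>
//   <add key="UseRemoting" value="false" />
// </appSettings>

using System.Configuration;

public static class CommSettings
{
    // Read the one boolean switch; default to false (in-process) if absent or malformed.
    public static bool UseRemoting =>
        bool.TryParse(ConfigurationManager.AppSettings["UseRemoting"], out var on) && on;
}
```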
It seems that this would be valuable for your situation as well. As a matter of fact, I can't imagine a situation in which that is not a desirable feature, especially since it is easy to implement.
To implement it, both the client and the server code used that configuration setting to decide which implementation class to instantiate for communicating with the other side. All communication went through a custom C# interface, which had two concrete implementation classes on each side: one implemented the communication using .NET Remoting; the other implemented it as a direct in-process passthrough (direct calls).
Only that one pair of classes (one on each side) knew anything about .NET Remoting, so the isolation was total. Most of the time, the developers worked with remoting turned off, which was faster and simpler. On the rare occasions it was needed (mostly by me, or when someone connected to a test/production environment for troubleshooting), they turned it on.
By the way, I made the remoting interface dead simple:
public Response Execute(Request request);
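As a sketch of how the pieces fit together (all names, the URL, and the server entry point below are hypothetical, since only the one method signature appears in the original):

```csharp
using System;

public class Request  { /* serializable payload sent to the server */ }
public class Response { /* serializable payload returned to the client */ }

// The single communication interface both sides program against.
public interface IServerChannel
{
    Response Execute(Request request);
}

// Hypothetical server-side entry point; the real business logic lives behind it.
public static class ServerCore
{
    public static Response Handle(Request request) => new Response();
}

// Direct in-process passthrough: calls the server logic in the same process.
public class InProcessChannel : IServerChannel
{
    public Response Execute(Request request) => ServerCore.Handle(request);
}

// .NET Remoting implementation: the only class that knows remoting exists.
public class RemotingChannel : IServerChannel
{
    public Response Execute(Request request)
    {
        // Obtain a transparent proxy to the remote server object and forward the call.
        var remote = (IServerChannel)Activator.GetObject(
            typeof(IServerChannel), "tcp://server:8085/ServerChannel");
        return remote.Execute(request);
    }
}

// Selection driven by the single boolean setting.
public static class ChannelFactory
{
    public static IServerChannel Create(bool useRemoting) =>
        useRemoting ? new RemotingChannel() : (IServerChannel)new InProcessChannel();
}
```

With this shape, the calling code only ever sees `IServerChannel`, so flipping the flag swaps the transport without touching any business logic.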
Beyond that, I also used the debugger-launching tip mentioned above, and I agree that you need to be mindful of the impact on GUI threading.