I have a short program that is used exclusively over a remote desktop connection configured to run only that program and not allow any other access into the remote machine. Previously, the program simply exited and let the connection terminate on its own, but that was very slow, so I wrote the following code to log off the remote session as soon as the program finishes running:
[DllImport("wtsapi32.dll", SetLastError = true)]
static extern bool WTSLogoffSession(IntPtr hServer, int SessionId, bool bWait);
private IntPtr WTS_CURRENT_SERVER_HANDLE;
private const int WTS_CURRENT_SESSION = -1;
...
private void HardTerminalExit()
{
WTSLogoffSession(WTS_CURRENT_SERVER_HANDLE, WTS_CURRENT_SESSION, false);
}
This works fine in the production environment, where users remote in via a specific RDP connection file: the connection closes as soon as the program exits. However, when I'm testing and debugging this program on my own machine, the same call logs off my local session after every run.
I'm looking for a good way to distinguish between these cases. Should I set up some kind of debug script that remotes in and runs the program remotely? Or is there some way to programmatically detect whether the program is running under a debugger, and just skip the logoff in that case?
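Something like the following is what I have in mind for the second option. This is only a rough sketch, and SafeTerminalExit is just a hypothetical wrapper name; I'm assuming a WinForms project, since SystemInformation lives in System.Windows.Forms. Debugger.IsAttached reports whether a debugger is attached to the process, and SystemInformation.TerminalServerSession (a wrapper around GetSystemMetrics(SM_REMOTESESSION)) reports whether the process is actually running inside a remote session:

private void SafeTerminalExit()
{
    // Don't log off when a debugger is attached (e.g. running from Visual Studio).
    if (System.Diagnostics.Debugger.IsAttached)
        return;

    // Don't log off when this process isn't inside a remote session at all.
    if (!System.Windows.Forms.SystemInformation.TerminalServerSession)
        return;

    HardTerminalExit();
}

If that detection is dependable, I could call SafeTerminalExit in place of HardTerminalExit and leave the production behavior unchanged, but I don't know whether either check is reliable enough on its own.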