Traditionally, work like this is done with a BackgroundWorker.
It is a simple class that lets you run a function on a worker thread and then automatically invokes back onto the UI thread once that function completes. While the function is running, the UI remains unblocked and can display a progress message or process other user input (cancel, for example).
The result is similar to your pattern, but a separate thread is created and destroyed for each task (well, the threads are pooled, really).
UI thread ---> Show splashscreen ------------------> Show window ------
    |                                            ^
    | create background worker                   | return to UI
    '-> Process user -----> Perform query etc. --'
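For reference, here is a minimal sketch of the usual BackgroundWorker wiring; the WPF window, splash-screen methods, and RunQuery are placeholders I've made up to show the shape, not anything from your code:

    using System.ComponentModel;
    using System.Windows;

    public partial class MainWindow : Window
    {
        private readonly BackgroundWorker _worker = new BackgroundWorker();

        public MainWindow()
        {
            InitializeComponent();

            // DoWork runs on a thread-pool thread; don't touch UI elements here.
            _worker.DoWork += (s, e) => e.Result = RunQuery();

            // RunWorkerCompleted is raised back on the UI thread when DoWork finishes.
            _worker.RunWorkerCompleted += (s, e) =>
            {
                // Check e.Error before using e.Result in real code.
                HideSplashScreen();
                ShowResults(e.Result);
            };

            ShowSplashScreen();
            _worker.RunWorkerAsync();   // UI stays responsive while the query runs
        }

        // Placeholders standing in for your actual splash screen / query code.
        private object RunQuery() { return null; }
        private void ShowSplashScreen() { }
        private void HideSplashScreen() { }
        private void ShowResults(object result) { }
    }

Because the completed event is posted back to the SynchronizationContext that was current when RunWorkerAsync was called (the UI thread in WPF/WinForms), the handler can update controls directly.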
Okay, based on your comment:
You can use a pattern like that; it is a simple case of eventing. Give the UI access to your manager so that it can call a method on it and register for events that fire when the task is completed (this link shows the two major patterns for async operations in .NET). Inside the manager you'll need to maintain a list of tasks that are performed in sequence on the single thread, and ensure that the events that return the results to the UI are invoked so that they run on the main UI thread (basically recreating the BackgroundWorker pattern).
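As a rough sketch of that shape (the QueryManager name, the single worker thread, and the event wiring are all made up for illustration; it assumes the manager is constructed on the UI thread so that SynchronizationContext.Current is the UI context):

    using System;
    using System.Collections.Concurrent;
    using System.Threading;

    public class TaskCompletedEventArgs : EventArgs
    {
        public object Result { get; set; }
    }

    // Hypothetical manager: queues work onto one long-lived worker thread and
    // raises a completion event that is marshalled back onto the UI thread.
    public class QueryManager
    {
        private readonly BlockingCollection<Func<object>> _queue =
            new BlockingCollection<Func<object>>();
        private readonly SynchronizationContext _uiContext;

        public event EventHandler<TaskCompletedEventArgs> TaskCompleted;

        public QueryManager()
        {
            // Construct the manager on the UI thread so we capture its context.
            _uiContext = SynchronizationContext.Current;

            var worker = new Thread(() =>
            {
                // Single worker thread: tasks run one at a time, in order.
                foreach (var task in _queue.GetConsumingEnumerable())
                {
                    var result = task();

                    // Post marshals the handler back onto the UI thread,
                    // so subscribers can touch controls safely.
                    _uiContext.Post(_ =>
                    {
                        var handler = TaskCompleted;
                        if (handler != null)
                            handler(this, new TaskCompletedEventArgs { Result = result });
                    }, null);
                }
            });
            worker.IsBackground = true;
            worker.Start();
        }

        public void Enqueue(Func<object> task)
        {
            _queue.Add(task);
        }
    }

The UI subscribes to TaskCompleted and calls Enqueue for each query; because completions are posted through the captured context, the handlers run on the UI thread and can update controls directly.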
I'm not sure what you are hoping to gain by doing this. Is there a reason the application needs to be limited to two threads? Are you concerned about the cost of creating BackgroundWorkers? Do you need some kind of query queue? The examples in the diagram in your question don't seem to require the complexity of this kind of pattern.