I would look first at why you want to do this. Normally applications operate like that; there is no issue with two clients doing inserts at the same time (well, aside from flawed approaches in the code).
Also, the solution will vary with the scenario. One option is to have a Microsoft Message Queue (MSMQ) and move the inserts out of those services, so the load of the inserts is controlled by the process that reads from the queue.
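A minimal sketch of that pattern, using Python's standard-library queue as a stand-in for MSMQ (the real MSMQ API is a Windows-specific service; the point here is just that a single consumer serializes the inserts while the producers stay parallel):

    import queue
    import threading

    insert_queue = queue.Queue()  # stand-in for the MSMQ queue

    def do_insert(record):
        print("inserting", record)  # placeholder for the real DB call

    def producer(record):
        # Clients do their parallel work, then just enqueue the insert
        # instead of hitting the database directly.
        insert_queue.put(record)

    def consumer():
        # Single reader: inserts are applied one at a time, so the
        # database load is controlled here, not by the clients.
        while True:
            record = insert_queue.get()
            if record is None:  # sentinel to shut down
                break
            do_insert(record)
            insert_queue.task_done()

    workers = [threading.Thread(target=producer, args=(i,)) for i in range(5)]
    reader = threading.Thread(target=consumer)
    reader.start()
    for w in workers:
        w.start()
    for w in workers:
        w.join()
    insert_queue.put(None)
    reader.join()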
Update 1: I still fail to see why you want to keep the inserts from running in parallel (and from what I read in the other responses, I think others do too). I will quote two of your comments on this:
> The reason I want this is because in the case of a power surge, I lose only one transaction instead of many.
And the other:
> It's because before the insertion, there are other lengthy jobs that require parallelism. Only during the insertion time is there a need for sequential access.
If I read the first alone, I would actually think that's a reason to want them in parallel: you want to be done with them as fast as possible. Going sequential will actually increase the time the inserts take, so there is more time during which a power surge can occur.
Reading the second, it sounds like you are more concerned about the effect of these processes running in parallel and an insert not making it into the database. That again means you want to be done with the inserts as soon as possible, so it is not a reason to do them sequentially either.
For built-in support, you might have a case for a distributed transaction that also includes the file system (Microsoft was looking at transactional file system support, but I don't recall whether they ever shipped it in the newer OS versions). Distributed transactions are a pain to set up, though (MSDTC and access to the ports it uses).
A good path I have followed is adding more info to the process so you can tell where it failed. You might not code an automatic recovery process yet, but at least you know you will have the information to tell that something went wrong.
The simplest way is an insert at the beginning of the process plus a flag that signals when it completed. If it is a long-running process, you might want something more like a status you keep updating, so you can tell at which step it failed. An alternative is writing the status to the file system.
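A minimal sketch of that status-tracking idea, assuming a SQLite table (the table and column names are hypothetical):

    import sqlite3

    conn = sqlite3.connect("status.db")
    conn.execute("""CREATE TABLE IF NOT EXISTS process_status (
        run_id INTEGER PRIMARY KEY AUTOINCREMENT,
        step   TEXT,
        done   INTEGER DEFAULT 0)""")

    def start_run():
        # Insert a row at the beginning so a crash leaves evidence behind.
        cur = conn.execute("INSERT INTO process_status (step) VALUES ('started')")
        conn.commit()
        return cur.lastrowid

    def set_step(run_id, step):
        # Keep updating the status to be able to tell which step failed.
        conn.execute("UPDATE process_status SET step = ? WHERE run_id = ?",
                     (step, run_id))
        conn.commit()

    def finish_run(run_id):
        conn.execute("UPDATE process_status SET done = 1 WHERE run_id = ?",
                     (run_id,))
        conn.commit()

    run = start_run()
    set_step(run, "preprocessing")
    set_step(run, "inserting")
    finish_run(run)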
In any case, it will only tell you the last step that completed successfully, not whether the current step managed to complete. This is what makes the retry logic a bit more complex: you can't just continue where it stopped, you have to check whether the last step was done or not, and that check depends on each step.
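The retry side might then look something like this sketch (the step names and verification checks are hypothetical; the key point is that each step needs its own way to confirm whether it actually finished):

    import os

    STEPS = ["export_file", "insert_rows", "cleanup"]

    def step_completed(step):
        # Hypothetical per-step verification, e.g. "does the output file
        # exist?" for an export step. Each step's check is different.
        if step == "export_file":
            return os.path.exists("export.csv")
        return False

    def run_step(step):
        print("running", step)  # placeholder for the real work

    def resume(last_recorded_step):
        idx = STEPS.index(last_recorded_step)
        # The status row only says which step was last recorded, not
        # whether it finished, so verify before deciding where to restart.
        if step_completed(STEPS[idx]):
            idx += 1
        for step in STEPS[idx:]:
            run_step(step)

    resume("export_file")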
P.S. If the above is the case, it is hard to tell from the question. You might want to open a different question about long-running processes and/or automatic retries.