The big question is whether duplicate updates to the web service matter and, if they do, whether they can be detected. If you can detect duplicates (usually with a unique transaction number), or if duplicates don't matter, then you can build a reliable two-phase-commit-style approach.
If duplicate transactions to the web service cannot be detected and the updates are not idempotent, then you are out of luck.
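For instance, the caller can generate the unique transaction number itself and send the same one on every attempt, so the service can recognise a replay. A minimal sketch in Python (the endpoint URL and JSON payload shape are made up for illustration):

import json
import urllib.request
import uuid

def call_with_txn_id(url, payload, txn_id):
    # The same txn_id accompanies every retry, so the service can
    # detect a duplicate and ignore it instead of applying it twice.
    body = json.dumps({"txn_id": txn_id, **payload}).encode("utf-8")
    req = urllib.request.Request(url, data=body,
                                 headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:
        return 200 <= resp.status < 300

# Generate the id once, store it alongside the local work,
# and reuse the stored value on every retry.
txn_id = str(uuid.uuid4())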
This is the basic algorithm:
begin transaction;
    do local work;
    save information for external call;
    set an appropriate time for next attempt;
    mark external call as not performed;
commit work;

begin transaction;
    make external call;
    if successful
        mark external call as performed (or delete the record)
    else
        set the time for the next attempt
commit;
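As a concrete illustration, here is a sketch of those two transactions in Python with sqlite3. The outbox table, call_service(), and the 60-second retry delay are all stand-ins for your own schema, web-service client, and retry policy:

import sqlite3
import time
import uuid

def call_service(txn_id, payload):
    """Stand-in for the real web-service client; returns True on success."""
    raise NotImplementedError

conn = sqlite3.connect("app.db")
conn.execute("""CREATE TABLE IF NOT EXISTS outbox (
                    txn_id   TEXT PRIMARY KEY,
                    payload  TEXT NOT NULL,
                    done     INTEGER NOT NULL DEFAULT 0,
                    attempts INTEGER NOT NULL DEFAULT 0,
                    next_try REAL NOT NULL)""")

def do_work(payload):
    txn_id = str(uuid.uuid4())
    # Transaction 1: the local work and the saved external call
    # commit (or roll back) together.
    with conn:
        # ... do the local work here, in this same transaction ...
        conn.execute("INSERT INTO outbox (txn_id, payload, next_try) "
                     "VALUES (?, ?, ?)", (txn_id, payload, time.time() + 60))
    # Transaction 2: attempt the call now; if it fails, the recovery
    # task will retry once next_try comes due.
    try:
        ok = call_service(txn_id, payload)
    except Exception:
        ok = False
    with conn:
        if ok:
            conn.execute("UPDATE outbox SET done = 1 WHERE txn_id = ?",
                         (txn_id,))
        else:
            conn.execute("UPDATE outbox SET attempts = 1, next_try = ? "
                         "WHERE txn_id = ?", (time.time() + 60, txn_id))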
You then need a regular task, thread, or whatever that does something like this:
for each record where the time for the next attempt <= now
    begin work;
    if the remote service has not performed this transaction
        make the remote call;
        if successful
            mark as done;
        else if too many attempts
            mark the transaction as permanently failed;
            alert operator;
        else
            set the time for the next attempt;
        endif
    else
        mark as done;
    endif
    commit;
endfor
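In the same hypothetical Python/sqlite3 setup as above, the recovery task could look like this; service_has_seen() stands in for whatever duplicate check the service offers:

import sqlite3
import time

MAX_ATTEMPTS = 10

def call_service(txn_id, payload):
    """Stand-in for the real web-service client; returns True on success."""
    raise NotImplementedError

def service_has_seen(txn_id):
    """Stand-in for the service's duplicate check, e.g. a lookup by txn_id."""
    raise NotImplementedError

def recovery_pass(conn):
    due = conn.execute("SELECT txn_id, payload, attempts FROM outbox "
                       "WHERE done = 0 AND next_try <= ?",
                       (time.time(),)).fetchall()
    for txn_id, payload, attempts in due:
        with conn:  # one transaction per record
            if service_has_seen(txn_id):
                conn.execute("UPDATE outbox SET done = 1 WHERE txn_id = ?",
                             (txn_id,))
                continue
            try:
                ok = call_service(txn_id, payload)
            except Exception:
                ok = False
            if ok:
                conn.execute("UPDATE outbox SET done = 1 WHERE txn_id = ?",
                             (txn_id,))
            elif attempts + 1 >= MAX_ATTEMPTS:
                # done = -1 marks a permanent failure for an operator to inspect
                conn.execute("UPDATE outbox SET done = -1 WHERE txn_id = ?",
                             (txn_id,))
                print(f"ALERT: transaction {txn_id} permanently failed")
            else:
                conn.execute("UPDATE outbox SET attempts = attempts + 1, "
                             "next_try = ? WHERE txn_id = ?",
                             (time.time() + 60 * (attempts + 1), txn_id))

Run recovery_pass() from a scheduled job or a background thread; because each record is handled in its own transaction, a crash mid-pass just leaves the remaining records for the next pass.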
This approach handles all the failure conditions reliably, and ensures that both pieces of work are eventually done.
The basic failures:
A failure before the first commit completes: Everything rolls back.
A failure after the first commit but before the web service completes (this includes transient failures in the web service itself): The remote web service transaction is replayed by the recovery task.
A failure after the web service completes but before the second commit completes: The duplicate web service transaction is detected by the recovery task and the local record is dequeued.
Failures in the recovery task: Essentially the same as failures in the second transaction.
Other notes:
A gradual back-off approach is useful for failures: if a service is failing transiently, you want to slow down your retries rather than hammer it (a sketch follows these notes).
If you have an ordering requirement on the external service you might need some additional structure (for example, a sequence number so the recovery task replays unperformed calls in order).
Depending on how you implement the recovery task, you could just leave the web service calls to that task and not have the second transaction in the main application flow.
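The back-off mentioned in the first note could be as simple as this; the base and cap values are arbitrary:

import random

def next_retry_delay(attempts, base=5.0, cap=3600.0):
    # Double the delay after each failure (5s, 10s, 20s, ...) up to an
    # hour; the random jitter keeps many failed records from all
    # retrying at exactly the same moment.
    delay = min(cap, base * (2 ** attempts))
    return random.uniform(delay / 2, delay)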
Response to the additional requirement: "The two parts transaction must be done together, doing a cron job to synchronize the table is not desirable"
My reading of this requirement is: "the two systems should never fail."
When one (or both) of the systems fails, you need something to pick up the pieces and reconcile things. You can use a fully fledged TP monitor to do transaction co-ordination, or you can build a simple monitor like the one in my example that handles your specific case. Either way, there is something that keeps track of what was happening so that things can be resolved correctly after a failure condition.
If your requirement really is that things always happen together (and a transactional message queue or two-phase-commit approach does not work for you), you would be better off storing the data for both systems in the same database (aka "resource manager") and having a single resource manager transaction.
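In the sqlite3 flavour used above (the table names are made up), that case is simply two updates in one transaction, with nothing left to reconcile:

import sqlite3

conn = sqlite3.connect("combined.db")
conn.execute("CREATE TABLE IF NOT EXISTS system_a_events (data TEXT)")
conn.execute("CREATE TABLE IF NOT EXISTS system_b_events (data TEXT)")

with conn:  # both inserts commit together, or neither does
    conn.execute("INSERT INTO system_a_events (data) VALUES (?)", ("local work",))
    conn.execute("INSERT INTO system_b_events (data) VALUES (?)", ("remote work",))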
If you do get a solution to this problem that keeps two separate systems consistent across multiple transactions and never requires reconciliation after a failure, you should write it up and get it published in The VLDB Journal, ACM TODS, or IEEE TKDE.