defer.execute does indeed execute the function in a blocking manner, in the same thread, and you are correct that defer.execute(f, *args, **kwargs) does the same as defer.succeed(f(*args, **kwargs)), except that defer.execute returns a Deferred whose errback has been fired if the function f throws an exception. With the defer.succeed version, by contrast, the exception is raised while the argument is being evaluated, so it propagates out to the caller, which may not be what you want.
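A minimal sketch of that difference (flaky and report are made-up names for illustration):

from twisted.internet import defer

def flaky():
    raise ValueError("boom")

def report(failure):
    # The ValueError arrives here wrapped in a Failure; handle it and move on.
    failure.trap(ValueError)

# defer.execute catches the exception and fires the errback chain instead:
d = defer.execute(flaky)
d.addErrback(report)

# With defer.succeed(flaky()), flaky() raises while its argument is being
# evaluated, before succeed() ever runs, so the exception reaches the caller:
try:
    defer.succeed(flaky())
except ValueError:
    pass  # you have to catch it yourself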
For ease of understanding, I'll just paste the source of defer.execute here:
def execute(callable, *args, **kw):
    """Create a deferred from a callable and arguments.

    Call the given function with the given arguments. Return a deferred which
    has been fired with its callback as the result of that invocation or its
    errback with a Failure for the exception thrown.
    """
    try:
        result = callable(*args, **kw)
    except:
        return fail()
    else:
        return succeed(result)
In other words, defer.execute is just a shortcut for turning a blocking function's result into a Deferred that you can then add callbacks/errbacks to. The callbacks fire with the normal chaining semantics. It may seem a bit odd, but a Deferred can 'fire' before you add callbacks to it, and those callbacks will still be called.
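For instance, a minimal sketch of that already-fired behaviour:

from twisted.internet import defer

results = []

# execute() runs the lambda right away, so this Deferred has already fired...
d = defer.execute(lambda: 21 * 2)

# ...yet a callback added afterwards still runs, immediately, with the result:
d.addCallback(results.append)
assert results == [42]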
So, to answer your question of why this is useful: defer.execute helps both with testing/mocking and with plugging ordinary synchronous code into an asynchronous API.
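As a sketch of the mocking case (fake_getter and CANNED_PAGES are invented for illustration), a test double can hand back canned data through defer.execute so that a missing entry shows up as an errback, just as a failed real fetch would:

from twisted.internet import defer

CANNED_PAGES = {"http://example.com/feed": "<rss>...</rss>"}

def fake_getter(url):
    # A hit returns an already-fired Deferred with the canned body; a miss
    # raises KeyError inside execute(), which fires the errback instead.
    return defer.execute(CANNED_PAGES.__getitem__, url)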
Also useful is defer.maybeDeferred, which calls the function and, if it already returns a Deferred, simply passes that through; otherwise it behaves much like defer.execute. This is handy when you write an API that expects a callable returning a Deferred but you also want to accept normal blocking functions.
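A quick sketch of that normalization (the three functions below are made up):

from twisted.internet import defer

def returns_deferred():
    return defer.succeed("already async")

def returns_value():
    return "plain result"

def raises():
    raise RuntimeError("oops")

# All three come back as Deferreds, so callers only need one code path:
d1 = defer.maybeDeferred(returns_deferred)  # the same Deferred, passed through
d2 = defer.maybeDeferred(returns_value)     # already fired with "plain result"
d3 = defer.maybeDeferred(raises)            # already fired via its errback
d3.addErrback(lambda f: f.trap(RuntimeError))  # handle it like any async error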
For example, say you had an application which fetched pages and did things with them, and for some reason you needed to run it synchronously for a specific use case, such as a single-shot crontab script or handling a request in a WSGI application, while still keeping the same codebase. If your code looked like this, it could be done:
from twisted.internet import defer
from twisted.web.client import getPage

def process_feed(url, getter=getPage):
    d = defer.maybeDeferred(getter, url)
    d.addCallback(_process_feed)
    return d  # hand the Deferred back so callers can chain on it

def _process_feed(result):
    pass  # do something with result here
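With a reactor running, this same function works asynchronously as-is: getPage returns an unfired Deferred and maybeDeferred passes it straight through. A rough sketch, assuming the example URL:

from twisted.internet import reactor

d = process_feed("http://example.com/feed")
d.addBoth(lambda _: reactor.stop())  # stop once the fetch (or failure) finishes
reactor.run()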
To run this in a synchronous context, without the reactor, you could just pass an alternate getter function, like so:
from urllib2 import urlopen

def synchronous_getter(url):
    resp = urlopen(url)
    result = resp.read()
    resp.close()
    return result
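A usage sketch (the URL is just an example): because synchronous_getter blocks and returns a plain string, maybeDeferred wraps that result in an already-fired Deferred, so the whole callback chain runs before the call returns and no reactor is needed:

# _process_feed has already run by the time this call returns:
process_feed("http://example.com/feed", getter=synchronous_getter)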