I'm going to talk about floats, not ints, because that is where the technology seems to be.
The standard way of computing functions like this is to use some special-case logic so that you only have to represent the function within some range [a, b], approximate it there by a rational function - one polynomial divided by another - and then undo whatever you did to reduce the range. The source of exp(x) at http://www.netlib.org/fdlibm/e_exp.c appears to follow this pattern.
This gives you an approximation of exp(x) of the form a(x)/b(x). You actually want 1/(1+exp(-x)), and substituting gives 1/(1 + a(-x)/b(-x)) = b(-x)/(a(-x)+b(-x)). So you should be able to rearrange the implementation of a(x)/b(x) to compute b(-x)/(a(-x)+b(-x)) directly, leaving just one divide instruction inside the rearranged routine instead of one divide inside exp(x) and another outside it.
This will save you something, depending on how much more expensive divide is on your machine - it might be noticeable if your inner loop really is 90% calls to the logistic function. The pattern of range reduction plus rational approximation is so firmly entrenched that I doubt that you will do much better without sacrificing a good deal of accuracy, although if you are resorting to integers you may be prepared to do that.
I dare say you could transfer this into the fixed-point world if you worked hard enough. I'm afraid I'd be inclined to fall back on linear interpolation between values in a table instead - assuming, that is, that I couldn't find a way to hoist the logistic function out of the inner loop altogether.