I have an Android client that will make HTTP connections to a server.

The server requires that every HTTP request provide a monotonically increasing counter in an HTTP header, e.g.

POST /foo/server
X-count: 43

Places that will initiate HTTP connections:

  1. Inside activities, in response to user actions (e.g. button clicks)
  2. Inside a Service (started by Context#startService)

To dish out counter values, I plan to host an AtomicInteger in my Application subclass. All code will then retrieve a count from a central place.
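
Roughly what I have in mind (a minimal sketch; CounterApplication, nextCount and rollbackCount are just placeholder names):

import android.app.Application;
import java.util.concurrent.atomic.AtomicInteger;

public class CounterApplication extends Application {
    // Single source of counter values for the whole process.
    private final AtomicInteger counter = new AtomicInteger(0);

    // Called from activities and the Service before building a request.
    public int nextCount() {
        return counter.getAndIncrement();
    }

    // Called when the HTTP connection fails, to give the value back.
    public void rollbackCount() {
        counter.decrementAndGet();
    }
}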

If the HTTP connection fails (e.g. the server is down), I need to decrement the counter.

Do you think AtomicInteger is a good fit for my scenario?

+9  A: 

AtomicInteger is exactly what you want to use for that purpose.

Milan Ramaiya
However, you should watch out for the case where an HTTP request with a higher count is sent before one with a lower value (e.g. when the thread making the lower-count HTTP request ends up sleeping).
notnoop
notnoop is right. If you want to preserve ordering (i.e., happens-before) on the server side, you have to implement some sort of Lamport clock.
Miguel Ping
A: 

If the HTTP connection fails (e.g. the server is down), I need to decrement the counter.

I was going to say "hell yes", but I'm somewhat less certain after this sentence. I take it you want to do something like this:

def sendRequest(url)
    request = new request to url
    request.header["X-count"] = next serial
    if request.send() != SUCCESS
        rewind serial

In that case, I'd guess that two threads should not be allowed to send requests simultaneously, and then you want something that serializes requests rather than an AtomicInteger, which really just lets you perform a few operations atomically. If two threads were to call sendRequest simultaneously and the first one failed, this would happen:

Thread  |  What happens?
--------+-------------------------
A       |  Creates new request           
B       |  Creates new request           
A       |  Set request["X-count"] = 0    
A       |  Increment counter to 1        
A       |  Send request
B       |  Set request["X-count"] = 1    
B       |  Increment counter to 2        
B       |  Send request
A       |  Request fails                 
B       |  Request succeeds              
A       |  Rewind counter down to 1      
C       |  Creates new request
C       |  Set request["X-count"] = 1
C       |  Increment counter to 2

And now, you've sent two requests with X-count = 1. If you want to avoid this, you should use something like this (assume Request and Response are classes used to handle requests to URLs):

class SerialRequester {
    private volatile int currentSerial = 0;

    public synchronized Response sendRequest(URL url) throws SomeException {
        Request request = new Request(url);
        request.setHeader("X-count", currentSerial);
        Response response = request.send();
        if (response.isSuccess()) ++currentSerial;
        return response;
    }

}

This class guarantees that no two successful requests (made through the same SerialRequester) have the same X-count value.

Edit: Many seem concerned about the above solution not running concurrently. It doesn't. That's correct, but it needs to work this way to solve the OP's problem. Now, if the counter didn't need to be decremented when a request fails, an AtomicInteger would be perfect; in this scenario, however, it isn't enough.
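
For contrast, a minimal sketch of that simpler, no-rollback case (ConcurrentRequester is a made-up name, and it reuses the same imaginary Request/Response/SomeException classes as above):

class ConcurrentRequester {
    private final AtomicInteger currentSerial = new AtomicInteger(0);

    // No synchronization needed: each caller just takes the next value.
    // This only works because we never try to give a value back on failure.
    public Response sendRequest(URL url) throws SomeException {
        Request request = new Request(url);
        request.setHeader("X-count", currentSerial.getAndIncrement());
        return request.send();
    }
}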

Edit 2: I took a stab at writing a serial requester (like the one above) that's less prone to freezing, in that it aborts requests if they've been pending too long (i.e., queued in the worker thread but not yet started). Thus, if the pipes clog and one request hangs for a very long time, other requests will wait at most a fixed amount of time, so the queue doesn't grow indefinitely until the clog goes away.

class SerialRequester {
    private enum State { PENDING, STARTED, ABORTED }
    private final ExecutorService executor = 
        Executors.newSingleThreadExecutor();
    private int currentSerial = 0; // not volatile, used from executor thread only

    public Response sendRequest(final URL url) 
    throws SomeException, InterruptedException {
        final AtomicReference<State> state = 
            new AtomicReference<State>(State.PENDING);
        Future<Response> result = executor.submit(new Callable<Response>(){
            @Override
            public Response call() throws SomeException {
                if (!state.compareAndSet(State.PENDING, State.STARTED))
                    return null; // Aborted by calling thread
                Request request = new Request(url);
                request.setHeader("X-count", currentSerial);
                Response response = request.send();
                if (response.isSuccess()) ++currentSerial;
                return response;
            }
        });
        try {
            try {
                // Wait at most 30 secs for request to start
                return result.get(30, TimeUnit.SECONDS);
            } catch (TimeoutException e){
                // 30 secs passed; abort task if not started
                if (state.compareAndSet(State.PENDING, State.ABORTED))
                    throw new SomeException("Request queued too long", e);
                return result.get(); // Task started; wait for completion
            }
        } catch (ExecutionException e) { // Network timeout, misc I/O errors etc
            throw new SomeException("Request error", e); 
        }
    }

}
gustafc
The rule of thumb is to use locking (whether intrinsic or through the Lock interface) for as little time as possible. Your suggestion would hold the lock through an entire network send; quite apart from the cost of the synchronized invocation itself, the network delay could cause unexpected stalls.
John V.
I fail to see where two **successful** requests were sent with X-count=1 in your example. Two of them were sent, but only one of them was successful.
Miguel Ping
@John W. - as I understand the OP, the requests have to be sent in sequence. Or at least that's a logical conclusion if they all need their own, unique, serial X-count value.
gustafc
@Miguel Ping - it doesn't matter if thread C's request succeeds or not. We already have two errors here: First, the request in thread B should have X-count = 0 because the preceding request (in thread A) didn't succeed. Second, the request in thread C shouldn't have the same X-count as the *successful* request from B, regardless of whether it succeeds or not. If a request with an X-count = `n` succeeds, then no later requests should have X-count = `n` - or at least, that's how I read the OP.
gustafc
@Gustafc - I understand why you ensure mutual exclusion there, but I wouldn't suggest that implementation. If you have many threads sending to many different URLs and some of the URLs time out, then any thread trying to send will have to wait until the timeout is finished. The requests themselves are local to the method, so the waiting threads would have no clue what the problem is.
John V.
Yes, I agree it's utter crap - but that's what's asked for :)
gustafc
@gustafc thanks for the explanation
Miguel Ping
A: 

gustafc is right! The requirement

If the HTTP connection fails (e.g. the server is down), I need to decrement the counter.

kills any possibility of concurrency!

If you want a unique counter for the HTTP header, AtomicInteger is good, but you cannot send requests from multiple servers or JVMs, and you need to allow holes in the sequence.
So, since counting is futile (as it always is in a highly scalable environment), using a UUID is a more "scalable" and robust solution. Humans need to count; machines don't give a damn!
And if you want the total number of successes, increment a counter after each successful send (you can even keep track of failed/successful UUID requests).
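
Something along these lines (a rough sketch only; it reuses the imaginary Request/Response classes from the other answer, and the X-request-id header name is just illustrative, since the server protocol would have to accept UUIDs instead of a counter):

class UuidRequester {
    private final AtomicInteger successCount = new AtomicInteger(0);

    public Response sendRequest(URL url) throws SomeException {
        Request request = new Request(url);
        // Globally unique; needs no coordination between threads, JVMs or servers.
        request.setHeader("X-request-id", UUID.randomUUID().toString());
        Response response = request.send();
        if (response.isSuccess()) successCount.incrementAndGet(); // count successes only
        return response;
    }

    public int totalSuccessful() {
        return successCount.get();
    }
}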

My 2 cts on parallel counting :)

Fred Simon
I am more concerned about the concurrency issue in gustafc's solution, though. What if the server goes down? Then any time a thread invokes SerialRequester.sendRequest, the method will block all threads until the send request returns with a timeout exception. That is a killer for any kind of throughput.
John V.
@John W. - my solution is prone to freezing the app if you don't set appropriate timeouts and the server goes down or the pipes clog, I agree, but the thing about it is that it's *correct* - it does what the OP asks for, which just happens to require that requests are sent sequentially.
gustafc