I am stumped about this, so I thought I'd ask in case any of you have come across it, since HttpClient development is a little bit of an art.

The problem I am facing is this: an application uses the Apache HttpClient Java library to communicate with a server on the same company network. Most of the time it works without a problem, but occasionally we'll see a barrage of exceptions caused by incomplete responses: they're all missing the last three characters of the closing tag, so the parser in the client complains. This lasts for maybe 5 to 10 minutes and then goes away.

I haven't been able to replicate this problem locally, and I have verified that the response is written completely by the server. The client obtains the response content via the PostMethod's getResponseBodyAsStream() method, which is called only once. Maybe it needs to loop, calling this method until it gets null, for the rare occasion when the response is buffered?

I'll appreciate any input.

Edit: The server is writing the Content-Length header and flushing correctly, and at the client, the data is read into a String with:

//method is a PostMethod, client is a HttpClient
client.executeMethod(hostconfig, method); 

InputStream is = method.getResponseBodyAsStream();
String response = null;

try {
    ByteArrayOutputStream bos = new ByteArrayOutputStream();    
    byte[] buf = new byte[1024];
    int len;

    while ((len = is.read(buf)) > 0) {
        bos.write(buf, 0, len);
    }

    response = new String(bos.toByteArray(), "UTF-8");

} ... // closing try block
+1  A: 

Are the Content-Length headers from the server being set correctly? I'm not 100% sure whether Commons HttpClient respects those or not, but it easily could. I can't think of any reason why you would need to call getResponseBodyAsStream() repeatedly.

It's also conceivable that your code for reading the stream is making false assumptions. Perhaps we could see a snippet of how you read the data, to ensure you are actually reading the entire stream correctly? Some common coding mistakes there can lead to reading only up to a buffered amount (which results in seemingly random failures).
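To illustrate both points, here is a minimal sketch (all class and method names are illustrative, not from the question): drain the stream to end-of-stream and compare the byte count against the declared Content-Length. The `TrickleStream` stand-in returns at most three bytes per `read()`, the way a real socket stream can return less than a full buffer, so a single `read()` call would see only a fragment:

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.InputStream;

public class DrainCheck {
    // Simulates a network stream that hands back at most 3 bytes per read().
    static class TrickleStream extends InputStream {
        private final InputStream delegate;
        TrickleStream(byte[] data) { this.delegate = new ByteArrayInputStream(data); }
        @Override public int read() throws IOException { return delegate.read(); }
        @Override public int read(byte[] b, int off, int len) throws IOException {
            return delegate.read(b, off, Math.min(len, 3));
        }
    }

    // Reads until read() returns -1 (end of stream); a single read() call
    // against the stream above would have seen only 3 bytes.
    static byte[] drain(InputStream in) throws IOException {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        byte[] buf = new byte[1024];
        int len;
        while ((len = in.read(buf)) != -1) {
            bos.write(buf, 0, len);
        }
        return bos.toByteArray();
    }

    public static void main(String[] args) throws IOException {
        byte[] body = "<response>ok</response>".getBytes("UTF-8");
        long declared = body.length; // in the real client: the Content-Length header value
        byte[] actual = drain(new TrickleStream(body));
        System.out.println(actual.length == declared ? "complete" : "truncated");
    }
}
```

In the real client, the declared length would come from the response's Content-Length header (if I recall the Commons HttpClient 3.x API correctly, HttpMethodBase exposes getResponseContentLength()); logging a mismatch between that value and the bytes actually drained would show whether the truncation happens on the wire or in the reading code.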

Other than that, it's hard to say... we use Commons HttpClient regularly with no similar symptoms.

jsight
Hey jsight, thanks for responding. I edited the question to clarify: the server is writing the headers correctly and flushing, and I added the code that reads from the stream. I also looked into the HttpClient configuration the client uses, and the only thing that jumped out at me was the explicit setting of the linger-on-timeout parameter to 0 (disabled).
Munir
A: 

I've been facing this issue too. This problem appeared only after changing the URL from localhost to a public one.

I have found a couple of solutions...

The first "solution" I found was to execute a Thread.sleep(1000) before starting the read. I think this lets the buffer fill before we try to read. (I know this doesn't make sense, since read() is documented to block until data is available, but unfortunately the read method sometimes thinks it has reached the end earlier than expected.) This is more of an ugly patch, so I kept looking...

The second option, and the better one, is to use the readLine() method of BufferedReader. This method implements the read process correctly. I haven't read the source code of readLine(), but I think we could find the solution to our problem in there.
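A sketch of that approach (the class name is illustrative, and StringReader stands in for the response; in the real client you would wrap method.getResponseBodyAsStream() in an InputStreamReader with the response's charset). The key point is that a single readLine() call returns only one line, so it has to be looped until it returns null at end-of-stream:

```java
import java.io.BufferedReader;
import java.io.IOException;
import java.io.StringReader;

public class LineRead {
    // Collect every line until readLine() returns null at end-of-stream.
    // Note: readLine() strips the original line terminators, so they are
    // re-added here uniformly as '\n'.
    static String readAllLines(BufferedReader reader) throws IOException {
        StringBuilder sb = new StringBuilder();
        String line;
        while ((line = reader.readLine()) != null) {
            sb.append(line).append('\n');
        }
        return sb.toString();
    }

    public static void main(String[] args) throws IOException {
        String body = "<?xml version=\"1.0\"?>\n<response>ok</response>";
        BufferedReader reader = new BufferedReader(new StringReader(body));
        System.out.println(readAllLines(reader));
    }
}
```

One caveat: because readLine() discards the terminators, this does not reproduce the body byte-for-byte, which can matter if the parser is sensitive to exact whitespace.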

Greetings.

Jorge
A: 

I am using readLine() of BufferedReader to read the XML being returned. It reads the first line, which is the declaration, but doesn't read the rest of the XML.

Amit Patel