I have a home-grown web server in my app. This web server spawns a new thread for every connection accepted on the socket. I want the web server to wait until a specific point is reached in the thread it just created.

I have been through many posts on this site and examples on the web, but I can't get the web server to proceed after I tell the thread to wait. A basic code example would be great.

Is the synchronized keyword the correct way to go about this? If so, how can this be achieved? Code examples from my app are below:

Web Server

while (true) 
{
    //block here until a connection request is made
    socket = server_socket.accept();

    try 
    {  
        //create a new HTTPRequest object for every file request
        HttpRequest request = new HttpRequest(socket, this);

        //create a new thread for each request
        Thread thread = new Thread(request);

        //run the thread and have it return after complete
        thread.run();

        ///////////////////////////////
        // wait here until notified to proceed
        ///////////////////////////////
    } 
    catch (Exception e) 
    {
        e.printStackTrace(logFile); 
    }
}

Thread code

public void run()
{
      //code here

      //notify web server to continue here
}

Update - The final code is below. HttpRequest simply calls "resumeListener.resume()" whenever I send a response header (of course also adding the ResumeListener interface as a separate class and the "addResumeListener(ResumeListener rl)" method to HttpRequest):

Web Server portion

// server infinite loop
while (true) 
{
    //block here until a connection request is made
    socket = server_socket.accept();

    try 
    {
        final Object locker = new Object();

        //create a new HTTPRequest object for every file request
        HttpRequest request = new HttpRequest(socket, this);

        request.addResumeListener(new ResumeListener() {
            public void resume()
            {
                //get control of the lock and release the server
                synchronized(locker)
                {
                    locker.notify();
                }
            }
        });

        synchronized(locker)
        {
            //create a new thread for each request
            Thread thread = new Thread(request);

            //start the thread for this request
            thread.start();

            //tell this thread to wait until HttpRequest releases
            //the server
            locker.wait();
        }
    } 
    catch (Exception e) 
    {
        e.printStackTrace(Session.logFile);
    }
}
+6  A: 

You can use java.util.concurrent.CountDownLatch with a count of 1 for this. Arrange for an instance of it to be created and shared by the parent and child thread (for example, create it in HttpRequest's constructor, and have it retrievable by a member function). The server then calls await() on it, and the thread hits countDown() when it's ready to release its parent.
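
A rough sketch of that approach might look like the following; the class and method names here are illustrative assumptions, not the asker's actual code:

    import java.util.concurrent.CountDownLatch;

    //illustrative sketch - names are assumptions, not the real HttpRequest
    class HttpRequest implements Runnable {
        private final CountDownLatch headerSent = new CountDownLatch(1);

        public CountDownLatch getHeaderSentLatch() {
            return headerSent;
        }

        public void run() {
            //... build and send the response header ...
            headerSent.countDown();   //release the waiting server thread
            //... continue serving the rest of the request ...
        }
    }

    class Server {
        void serveOne() throws InterruptedException {
            HttpRequest request = new HttpRequest();
            new Thread(request).start();
            request.getHeaderSentLatch().await();  //blocks until countDown() is called
            //safe to accept() the next connection here
        }
    }
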

bdonlan
Thanks for the suggestion, I will try implementing it this way. Seems simple enough.
Ken
+2  A: 

You probably need to use a Java Condition. From the docs:

Conditions (also known as condition queues or condition variables) provide a means for one thread to suspend execution (to "wait") until notified by another thread that some state condition may now be true.
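
For what it's worth, a condition-based version might look roughly like this sketch (class, field, and method names are my own, and as the comments below point out, plain Object.wait()/notify() is simpler for this particular case):

    import java.util.concurrent.locks.Condition;
    import java.util.concurrent.locks.Lock;
    import java.util.concurrent.locks.ReentrantLock;

    //illustrative sketch only - names are assumptions
    class RequestGate {
        private final Lock lock = new ReentrantLock();
        private final Condition released = lock.newCondition();
        private boolean ready = false;

        //called by the request thread once the response header is sent
        public void release() {
            lock.lock();
            try {
                ready = true;
                released.signal();
            } finally {
                lock.unlock();
            }
        }

        //called by the server thread after starting the request thread
        public void awaitRelease() throws InterruptedException {
            lock.lock();
            try {
                while (!ready) {      //guard against spurious wakeups
                    released.await();
                }
            } finally {
                lock.unlock();
            }
        }
    }
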

Vinay Sajip
Condition variables alone are a bit hard to open-code; you also need an associated Lock and a flag to know when to stop looping around and blocking. Plus there's no need to use the Condition class directly; just use the monitor implicit in every Object...
bdonlan
Also, to clarify a bit - Condition/Lock make for a more explicit system, but one that is independent from the built-in monitors used by `synchronized` and `Object.wait()`. They would be useful if you need multiple condition variables attached to a single lock, but in this case it's way overkill.
bdonlan
Actually, `Condition` is an interface; I was only pointing it out because an implementing class will be necessary in practice. The page I linked to has an example with `ReentrantLock`s.
Vinay Sajip
A: 

Run under a debugger and set a breakpoint?

If unfeasible, then read a line from System.in?
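
If you do go the System.in route, a throwaway pause might look like this (purely a debugging aid, not a synchronization mechanism):

    import java.io.BufferedReader;
    import java.io.IOException;
    import java.io.InputStreamReader;

    public class PauseForEnter {
        public static void main(String[] args) throws IOException {
            System.out.println("Paused - press Enter to continue...");
            //blocks the current thread until a line arrives on stdin
            new BufferedReader(new InputStreamReader(System.in)).readLine();
            System.out.println("Continuing.");
        }
    }
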

Thorbjørn Ravn Andersen
No idea how this relates to the question....
Jim Barrows
+1  A: 

First of all, I echo the sentiment of others that re-inventing the wheel here will most likely lead to a variety of issues for you. Have you experimented with Jetty? However, if you want to go down this road anyway, what you are trying to do is not difficult.

Maybe something like this:

import java.io.IOException;
import java.net.ServerSocket;
import java.net.Socket;

public class MyWebServer {

  public void foo() throws IOException {
    //bind the listening socket once, outside the loop (port 8080 here is just an example)
    ServerSocket serverSocket = new ServerSocket(8080);

    while (true) {
      //block here until a connection request is made
      Socket socket = serverSocket.accept();

      try {
        final Object locker = new Object();
        //create a new request object for every connection
        MyRequest request = new MyRequest(socket);
        request.addResumeListener(new ResumeListener() {
          public void resume() {
            //must hold the monitor before calling notify()
            synchronized (locker) {
              locker.notify();
            }
          }
        });
        synchronized (locker) {

          //create a new thread for each request
          Thread thread = new Thread(request);

          //start() the thread - not run()
          thread.start();

          //this thread will block until the MyRequest run method calls resume
          locker.wait();
        }
      } catch (Exception e) {
        e.printStackTrace();
      }

    }
  }
}

public interface ResumeListener {
  public void resume();
}

import java.net.Socket;

public class MyRequest implements Runnable {
  private final Socket socket;
  private ResumeListener resumeListener;

  public MyRequest(Socket socket) {
    this.socket = socket;
  }

  public void run() {
    // do something with the socket
    resumeListener.resume(); //notify server to continue accepting the next request
  }

  public void addResumeListener(ResumeListener rl) {
    this.resumeListener = rl;
  }
}
Gary
Thanks for the example. It seems pretty straightforward and I will give it a try as well. I understand the concerns about implementing my own web server; I have looked into Jetty briefly already, but did not go forward with it. Jetty is still a possibility for the future, but right now the home-grown server is performing well.
Ken