Destroying threads is deprecated in Java (and, according to the javadoc, not even implemented), and interrupting a thread is only a request that the thread is expected to honor by quitting, but it might not do so. (Not providing any way to kill a thread inside the JVM is a disturbing design, but my question is not about the design.)
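
For illustration, a minimal sketch (class and names are mine) showing that interrupt() merely sets a flag, so a thread that never checks it just keeps running:

    public class IgnoresInterrupt {
        public static void main(String[] args) throws InterruptedException {
            Thread worker = new Thread(new Runnable() {
                public void run() {
                    // Busy loop that never checks isInterrupted() and never calls a
                    // blocking method, so it never notices the interrupt.
                    long counter = 0;
                    while (true) {
                        counter++;
                    }
                }
            });
            worker.setDaemon(true);   // daemon, so the JVM can still exit at the end
            worker.start();

            Thread.sleep(100);
            worker.interrupt();       // only sets the interrupt flag; the loop ignores it
            worker.join(1000);        // give it a second - it won't die
            System.out.println("still alive after interrupt: " + worker.isAlive()); // prints true
        }
    }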

How do Java application servers unload applications? Are they somehow able to destroy the threads of an application being unloaded? If yes, how? If not, could a single thread of a deployed app stuck in an infinite loop bring down an entire app server, with no way to intervene?

Sorry that I'm not writing test cases for this, but I would like to know what is really going on there.

+2  A: 

You're not allowed to create a thread of your own inside an EJB server.

It's not that uncommon to spawn threads in a web container (such as Tomcat), though you should think carefully before doing that, and be sure to manage the lifecycle of those threads.
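
For illustration, a sketch of what managing that lifecycle might look like, assuming the classic javax.servlet API (the listener and field names are illustrative): the listener owns the background thread and shuts it down cooperatively when the application is undeployed.

    import javax.servlet.ServletContextEvent;
    import javax.servlet.ServletContextListener;

    public class BackgroundWorkerListener implements ServletContextListener {

        private volatile boolean running = true;
        private Thread worker;

        public void contextInitialized(ServletContextEvent sce) {
            worker = new Thread(new Runnable() {
                public void run() {
                    while (running) {
                        // ... do periodic work ...
                        try {
                            Thread.sleep(5000);
                        } catch (InterruptedException e) {
                            return; // asked to stop while sleeping
                        }
                    }
                }
            }, "app-background-worker");
            worker.setDaemon(true);
            worker.start();
        }

        public void contextDestroyed(ServletContextEvent sce) {
            running = false;
            worker.interrupt();        // wake it up if it is sleeping
            try {
                worker.join(10000);    // give it a chance to finish cleanly
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        }
    }

The listener would be registered in web.xml (or with @WebListener on Servlet 3.0), so the container calls contextDestroyed() when the app is undeployed.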

nos
What can the application server do when a bean has an infinite loop "implemented" in one of its methods? Does the whole application server need to be restarted at the OS level?
sibidiba
@sibidiba - Yes. That's about it.
Stephen C
+6  A: 

Not providing any way to kill a thread inside the JVM is a disturbing design, but my question is not about the design.

Since your real question has been answered, I'm going to address the quoted sentence above.

The history is that the Java designers originally did try to address the issue of killing and suspending threads, but they ran into a fundamental problem that they could not solve in the context of the Java language.

The problem is that you simply cannot safely kill threads that may mutate shared data in a non-atomic fashion, or that may be synchronizing with other threads using a wait/notify mechanism. If you do implement thread killing in that context, you end up with partial updates to data structures, and with other threads waiting for notifies that will never arrive. In other words, killing one thread may leave the rest of the application in an uncertain and broken state.
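
A contrived sketch of that failure mode (class and field names are mine), using the deprecated Thread.stop(), which still worked on older JVMs but has since been disabled in recent JDKs. The invariant a == b can be broken because the thread may be killed between the two assignments, while the lock is released as the ThreadDeath exception unwinds:

    public class BrokenInvariant {
        static int a = 0;
        static int b = 0;

        public static void main(String[] args) throws InterruptedException {
            Thread updater = new Thread(new Runnable() {
                public void run() {
                    while (true) {
                        synchronized (BrokenInvariant.class) {
                            a++;
                            // If the thread is stopped right here, the lock is released
                            // by the unwinding ThreadDeath, but a == b no longer holds.
                            b++;
                        }
                    }
                }
            });
            updater.start();
            Thread.sleep(100);
            updater.stop();    // deprecated: throws ThreadDeath at an arbitrary point
            updater.join();
            synchronized (BrokenInvariant.class) {
                System.out.println("a=" + a + ", b=" + b); // the two may differ
            }
        }
    }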

Other languages / libraries (e.g. C, C++, C#) that do allow you to kill threads suffer from the same problems I described above, even if the relevant specifications / text books do not make this clear. While it is possible to kill threads, you have to be really careful in the design and implementation of the entire application to do this safely. Generally speaking it is too hard to get right.

So (hypothetically) what would it take to make thread killing safe in Java? Here are some ideas:

  • If your JVM implemented Isolates, you could launch the computation you might want to kill in a child Isolate. The problem is that a properly implemented Isolate can only communicate with other Isolates by message passing, and Isolates would generally be a lot more expensive to use.

  • The problem of shared mutable state could be addressed by banning mutation entirely, or by adding transactions to the Java execution model. Both of these would fundamentally change Java.

  • The problem of wait/notify could be addressed by replacing it with a rendezvous or message passing mechanism that allowed the "other" thread to be informed that the thread it was interacting with has gone away. The "other" thread would still need to be coded to recover from this.

EDIT - In response to comments.

Mutex deadlock was not an issue for Thread.destroy(), since it was designed to release (break) all mutexes owned by the thread being destroyed. The problem was that there was no guarantee that the data structure protected by the mutex would be in a sane state after the lock was broken.

If I understand the history of this topic correctly, Thread.suspend(), Thread.stop() and so on really did cause problems in real-world Java 1.0 applications. And these problems were so severe, and so hard for application writers to deal with, that the JVM designers decided the best course was to deprecate the methods. That would not have been an easy decision to make.

Now, if you are brave you can actually use these methods. And they may actually be safe in some cases. But building an application around deprecated methods is not sound software engineering practice.

Stephen C
This, times a hundred
matt b
I'm not a hardcore JVM designer, but all the arguments about deadlocks seem rather made up. Disabling this feature did not make Java any less deadlock-prone: I can simply take a lock and never release it. Allowing Thread.destroy() would indeed introduce a new possibility of deadlocks, but why on earth am I not allowed to kill a thread that does not hold _any_ resources (according to the programmer or the JVM)?
sibidiba
@sibida: back in the day, years ago, I kept using the deprecated methods to destroy threads I knew were 'safe' to destroy, and I never had any problem... *BUT* I've long since stopped doing such a thing: there's always a way to 'cancel' a thread you've got control over, either by closing some socket, by putting a poison pill on a blocking queue, by setting some 'shouldExit' boolean to true, etc. I used to think a 'destroy' was really needed and kept doing it, but now I don't anymore. There's always another way, right? What would be a case where there would be no way to 'cleanly' cancel a thread?
Webinator
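
A minimal sketch of the poison-pill approach mentioned in the comment above (the class and sentinel are illustrative): the consumer exits cleanly as soon as it takes the sentinel from the queue.

    import java.util.concurrent.BlockingQueue;
    import java.util.concurrent.LinkedBlockingQueue;

    public class PoisonPillExample {
        private static final String POISON_PILL = "__STOP__";

        public static void main(String[] args) throws InterruptedException {
            final BlockingQueue<String> queue = new LinkedBlockingQueue<String>();

            Thread consumer = new Thread(new Runnable() {
                public void run() {
                    try {
                        while (true) {
                            String item = queue.take();
                            if (item == POISON_PILL) {
                                return;           // clean, cooperative shutdown
                            }
                            System.out.println("processing " + item);
                        }
                    } catch (InterruptedException e) {
                        // interrupted while waiting: also treated as a stop request
                    }
                }
            });
            consumer.start();

            queue.put("job-1");
            queue.put("job-2");
            queue.put(POISON_PILL);   // ask the consumer to stop
            consumer.join();
        }
    }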