Exposition:
I think the Java VM is awesome. Its guarantees of bytecode safety, the standard libraries, and so on are amazing, especially the ability to load a Java class on the fly and know that it can't crash the VM (good luck with *.so files or kernel modules).
One thing I don't understand is how Java treats Thread.stop.
I've read http://java.sun.com/j2se/1.5.0/docs/guide/misc/threadPrimitiveDeprecation.html, but the rationale still seems weird to me, for the following reasons:
1) Resource Management
On a Unix OS, if a process is hogging resources, I can kill -9 it.
2) Breaking of Abstraction:
If I start a computationally expensive job and no longer need the result, I can kill -9 the process. Under Java's threading model, my computation thread instead has to periodically check some boolean flag to see whether it should quit. This seems to break abstraction layers: when I'm writing computation code, I should focus on the computation, not on where to spread out checks for whether it should terminate.
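To make the complaint concrete, here is a minimal sketch of the cooperative-cancellation pattern being described, using the thread's interrupt status in place of a hand-rolled boolean flag (the class name and loop body are hypothetical, just stand-ins for real computation):

```java
// Sketch: cooperative cancellation. The worker must poll for a stop request
// at points of its own choosing -- there is no external kill -9 equivalent.
public class CancellableWorker {
    public static void main(String[] args) throws InterruptedException {
        Thread worker = new Thread(() -> {
            long iterations = 0;
            // The "periodic check" the question objects to: the computation
            // loop itself has to test whether it should keep going.
            while (!Thread.currentThread().isInterrupted()) {
                iterations++; // stand-in for one unit of real work
            }
            System.out.println("worker stopped cooperatively");
        });
        worker.start();
        Thread.sleep(100);   // let the worker run briefly
        worker.interrupt();  // request cancellation; the worker decides when to honor it
        worker.join();       // wait for it to exit cleanly
    }
}
```

Note that interrupt() only sets a flag (or wakes blocking calls like sleep/wait); the loop still has to check it, which is exactly the abstraction leak being complained about.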
3) Safety of Lock/Monitors
The official reason is essentially: "What if a thread is holding a lock/monitor when it gets Thread.stopped? The objects it was mutating will be left in a damaged state." Yet in operating systems this is not a problem; we have interrupt handlers. Why can't Java threads have interrupt handlers that work like OS interrupt handlers?
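For illustration, here is a small sketch of the kind of damage the official rationale is worried about. The class and field names are made up; the point is the invariant balanceA + balanceB == 200 held under a lock. An asynchronous Thread.stop delivered between the two writes would release the monitor with the invariant broken, and every other thread would then see the corrupted state. Interruption, by contrast, is only observed where the code chooses to check, so the critical section always runs to completion:

```java
// Sketch: a lock-protected invariant (balanceA + balanceB == 200).
// Thread.stop could fire at ANY bytecode, including between the two
// writes below, unwinding the stack and unlocking the monitor while
// the invariant is broken. Cooperative interruption cannot do that.
public class InvariantDemo {
    static int balanceA = 100, balanceB = 100;
    static final Object lock = new Object();

    static void transfer(int amount) {
        synchronized (lock) {
            balanceA -= amount;
            // <-- an asynchronous stop delivered here would leave
            //     balanceA + balanceB != 200, visible to all other threads
            balanceB += amount;
        }
    }

    public static void main(String[] args) {
        transfer(30);
        synchronized (lock) {
            System.out.println("invariant holds: " + (balanceA + balanceB == 200));
        }
    }
}
```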
Question:
Clearly, I am thinking about Java Threads with the wrong mental model. How should I be thinking about Java threads?
Thanks!