A: 
  1. IMHO, this is a very bad idea. If I were you, I would try really, really hard to find another way to do this. You're combining two really bad ideas: creating a truckload of threads, and messing with thread priorities.

  2. You mention that these operations only need to appear to run simultaneously. So why not try to find a way to make them appear to run simultaneously, without literally running them simultaneously?

Terry Mahaffey
I did mention fibers and cooperative multitasking, so I know alternatives exist, but the ones I know of require meddling with the code of these long-running operations itself. I'm interested in finding something more pre-emptive... and for my situation, I'm able to be a bit experimental. Kind of like the "one-process-per-tab" philosophy of Google Chrome, which I'm sure some people would have been dubious of if they'd heard it proposed (many probably still are... but I'm sold)
Hostile Fork
Google Chrome's one-process-per-tab design was about tab isolation, so that a crash in one tab doesn't take down the others. The scale is also different. It has nothing at all to do with this project or your proposed idea. Your logic is deeply flawed here. I stand by my advice. But it's your project, so do what you'd like. Good luck.
Terry Mahaffey
I don't think you've done due diligence in pointing out any "deeply flawed" argument in what I suggest above. You just said "lots of threads are bad, setting thread priorities is bad" and that's a knee-jerk reaction. I was hoping for some more nuanced discourse on this question.
Hostile Fork
Clarified: I'm not going to downvote you or anything and I appreciate your input. But I don't feel you added anything that wasn't already covered in the SO threads I linked, or in my description of the question, hence I don't consider it a satisfactory answer.
Hostile Fork
A: 

It's been 6 months, so I'm going to close this.

Firstly I'll say that threads serve more than one purpose. One is speedup...and a lot of people are focusing on that in the era of multi-core machines. But another is concurrency, which can be desirable even if it slows the system down when taken as a whole. Yet concurrency can be achieved using mechanisms more lightweight than threads, although it may complicate the code.

So this is just one of those situations where the tradeoff of programmer convenience against user experience must be tuned to fit the target environment. That's why Google's process-per-tab approach in Chrome would have been ill-advised in the era of Mosaic (even though process isolation is preferable, all else being equal). And if today's OSes, memory, and CPUs couldn't deliver a good browsing experience under that model, they wouldn't do it that way now either.

Similarly, creating a lot of threads when there are independent operations you want to be concurrent saves you the trouble of sticking in your own scheduler and yield() operations. It may be the cleanest way to express the code, but if it chokes the target environment then something different needs to be done.

So I think I'll settle on the idea that in the future, when our hardware is better than it is today, we probably won't have to worry about how many threads we make. But for now I'll take it on a case-by-case basis. i.e. If I have 100 instances of concurrent task class A, 10 of concurrent task class B, and 3 of concurrent task class C... then switching A to a fiber-based solution backed by a pool of a few threads is probably worth the extra complication.

Hostile Fork
Hi Brian! It sounds like the question really is, when do you use the OS's threading facilities, and when do you "roll your own" multi-threading mechanism? And I suppose the answer depends on how well-suited the OS's threads are to the task at hand, vs how clever you are at making something better-suited. You can always "roll your own" pre-emptive multitasking using setjmp() and longjmp(), but it's tricky to get right and may not be worth the effort...
Jeremy Friesner