views:

429

answers:

5

Hi, I know how to implement multithreading in C#, but I want to understand how it actually works:

  1. Will only one thread run at a time, and when that thread is waiting, will the second thread execute?
  2. If the second thread is executing and the first thread becomes ready, what happens?
  3. Which thread is given priority?

I am confused about the concept. I want to understand why we use multithreading and when we should use it.

Thanks in advance.

+5  A: 

Multithreading is useful in environments where one action needs to not BLOCK another action.

The primary example of that is in the case of a background process that shouldn't lock up the main user interface thread.

The operating system is generally going to decide who can do what, when. If a computer has only one core, multithreading has little benefit except the one listed above. But, as more cores are added, more actions can be performed concurrently.

However, even in a single core system, multithreading can facilitate non-blocking-IO which is very important in increasing the responsiveness of your application.
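A minimal C# sketch of this idea (the names and the 200 ms sleep are illustrative; the sleep stands in for slow I/O such as a download or disk read):

```csharp
using System;
using System.Collections.Generic;
using System.Threading;

class BackgroundWorkDemo
{
    // Runs slow work on a background thread while the caller keeps going.
    // Returns the order in which the two sides recorded their events.
    public static List<string> Run()
    {
        var events = new List<string>();
        var worker = new Thread(() =>
        {
            Thread.Sleep(200); // stand-in for slow I/O
            lock (events) events.Add("worker done");
        });
        worker.Start();

        // The calling thread is NOT blocked: it records its event first.
        lock (events) events.Add("caller still responsive");

        worker.Join(); // wait for the background work before returning
        return events;
    }

    static void Main()
    {
        foreach (var e in Run()) Console.WriteLine(e);
    }
}
```

The same pattern keeps a UI thread responsive: the slow action runs on the worker while the main thread continues servicing input.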

John Gietzen
So do you mean that both processes will run at the same time?
Jebli
On a multi-core system, absolutely yes. Even on a single core system, the operating system will allow them to run fairly concurrently. For IO operations (when the CPU is idle) multithreading prevents waste.
John Gietzen
+15  A: 

Threads may or may not be running at the same time. On a single-processor machine, only one thread is running at a time. On a system with multiple logical processors (multi-processor, multi-core, or hyper-threading), multiple threads can run at the same time, one thread per processor.

The operating system's scheduler determines when a thread gets to run. Windows is a preemptive multitasking system: it runs a thread for a certain amount of time, called a time slice (10 ms or 15 ms on Windows), stops the thread, and then determines which thread to run next, which could be the same thread that was just running. The actual algorithm is complex.

Threads also have priorities, which affects this as well: all other things being equal, a higher-priority thread will get more time than a lower-priority thread. If you don't manually set a priority on a thread, it defaults to "Normal" priority. In the simple case of two threads of the same priority that are both ready to run, both threads will run an equal amount of time, probably round-robin.

On why do we do multi-threading there are two basic reasons:

  1. Speed: On a multiprocessor system, since more than one thread can run at a time, our code can perform more than one task at a time. For example, if we are processing an image, we can split the image into pieces and have different threads work on each piece.
  2. Asynchronous operations: Some task will take a while (e.g. downloading a file from the Internet), and we want to let it go on in the background while we do something else, so we create a thread to do the download while we go about our business. One of the big draws of this is in a GUI application: we don't want to block the UI thread, so the user interface still responds to the user while the processing is occurring.
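The first point can be sketched as follows. This is an illustrative example (the `Sum` method and the chunking scheme are mine, not from the answer): a range is split into one chunk per thread, each thread works on its own piece, and the results are combined.

```csharp
using System;
using System.Threading;

class ParallelSum
{
    // Sums 0..n-1 by splitting the range across several threads,
    // one chunk per thread (the "split the image into pieces" idea).
    public static long Sum(int n, int threadCount)
    {
        var partials = new long[threadCount];
        var threads = new Thread[threadCount];
        int chunk = n / threadCount;

        for (int t = 0; t < threadCount; t++)
        {
            int id = t; // copy the loop variable so each thread sees its own value
            int start = id * chunk;
            int end = (id == threadCount - 1) ? n : start + chunk;
            threads[id] = new Thread(() =>
            {
                long s = 0;
                for (int i = start; i < end; i++) s += i;
                partials[id] = s; // each thread writes only its own slot
            });
            threads[id].Start();
        }

        long total = 0;
        for (int t = 0; t < threadCount; t++)
        {
            threads[t].Join(); // wait for each piece to finish
            total += partials[t];
        }
        return total;
    }

    static void Main()
    {
        Console.WriteLine(Sum(1000, 4)); // 499500
    }
}
```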
shf301
Great, thanks for the reply. I have gained some more ideas on multithreading :)
Jebli
+3  A: 

Multithreading speeds up program execution if there are parallelizable parts of the program.

You may want to have a look at different resources for multithreading to understand more about it.

Imagine you have a problem that needs to be solved as quickly as possible. Take an easy one: count to a billion. You can do a loop: for (var i = 0; i < 1000000000; i++) {} and this will execute on one core only, taking x amount of time. Now imagine doing it on two cores instead:

// Execute action a concurrently across the range [from, to), where a takes the current index.
void Execute(Action<int> a, int from, int to)
{
    // assumes a != null, to > from, and (to - from) >= number of CPUs
    var pllItems = Environment.ProcessorCount;
    var range = to - from;
    var ranges = new int[pllItems, 2];
    var step = range / pllItems;

    // calculate the range each thread should cover
    for (var i = 0; i < pllItems; i++) {
        ranges[i, 0] = from + i * step; // where thread i starts
        ranges[i, 1] = (i == pllItems - 1)
            ? to                        // last thread picks up the remainder
            : from + (i + 1) * step;    // where thread i ends (exclusive)
    }

    var ts = new Thread[pllItems];
    for (var i = 0; i < pllItems; i++) {
        var currT = i; // copy the loop variable to avoid closure-capture problems
        ts[currT] = new Thread(() => {
            for (var x = ranges[currT, 0]; x < ranges[currT, 1]; x++) {
                a(x);

                // could also have:
                // try { a(x); } catch (Exception e) { lock (ecs) ecs.Add(e); /* stop thread */ break; }
                // and return the collected exceptions at the end of the method:
                // return ecs;
            }
        });
        ts[currT].Start();
    }
    for (var i = 0; i < pllItems; i++) ts[i].Join();
}

Thankfully, if you download the Parallel Extensions CTP (the Microsoft threading library released in 2008, now part of .NET 4) you will get this for free with

Parallel.For(0, 1000000000, i => { });
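A small runnable sketch of Parallel.For, assuming the Task Parallel Library (.NET 4+) is available; Interlocked.Increment is used so that concurrent updates to the shared counter are not lost:

```csharp
using System;
using System.Threading;
using System.Threading.Tasks;

class ParallelForDemo
{
    // Counts to n using Parallel.For; Interlocked avoids losing
    // increments when several threads update the counter at once.
    public static long Count(int n)
    {
        long hits = 0;
        Parallel.For(0, n, i => Interlocked.Increment(ref hits));
        return hits;
    }

    static void Main()
    {
        Console.WriteLine(Count(1000000)); // 1000000
    }
}
```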

There's also a new tool for VS2010 which displays in graphical form how the threads are blocking, waiting for I/O, etc.

There's a scheduler in .Net/the OS that allows threads to have different interleavings.

A few days ago, MS released documentation on how to do parallel operations in .Net 4. Have a download/read here

Henrik
Thanks for the post, the information was really helpful.
Jebli
+2  A: 

Purposes of a thread

  • Hide latency (i.e. do something else while waiting)
  • Exploit the concurrency of the hardware (in case of multiple cores, this gives better performance)
  • Discriminate importance levels (i.e. high and low priority threads)
  • Organize structure (i.e. thread per event, thread per resource, thread per process)

There are others, but I think these are the basic uses of a thread.

Henri
+4  A: 

If you look at the Processes tab in Task Manager on your Windows machine, you will see the processes that are currently active on the machine. If you add the Threads column to the view, you will see the number of threads that currently exist in each process. The operating system (OS) is the one that determines how all of these threads across all of these processes are scheduled for execution on the processor. So in effect, the OS is constantly determining which threads have work to do and scheduling those threads for execution on the processor.

Let's assume a single processor, single core machine for now.

In this example, your application is the only process that is doing anything. Say your application has two threads of equal priority (more on this below). In this case, the OS will alternate between these two threads, scheduling one for execution and then the other until the work that they are doing is complete. To accomplish this, the OS grants a timeslice to the first scheduled thread. For example purposes, let's say the timeslice is 10 milliseconds (it's actually much shorter than this). So thread A will execute for 10 milliseconds. The OS will then preempt thread A so thread B can execute for its timeslice, also 10 milliseconds.

This back-and-forth will continue uninterrupted until both threads have finished their work or until certain events occur. For example, let's say that thread A finishes its work before thread B. In this case, thread A has nothing else to do, so the OS will continue to grant timeslices to thread B since it is the only one with work to do. Another thing that can happen is that thread A can wait on an event, such as a System.Threading.ManualResetEvent, or an asynchronous read of a socket. Until that event is signaled or data is received on the socket, thread A is essentially dead in its tracks, so the OS will continue to grant timeslices to thread B until the event/socket that thread A is waiting on occurs. At that point, the OS will resume switching between thread A and thread B for execution.
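A minimal sketch of the ManualResetEvent scenario just described (the names and the 100 ms delay are illustrative): thread A blocks on the event and cannot proceed until another thread signals it.

```csharp
using System;
using System.Threading;

class WaitDemo
{
    // Thread A blocks on the event; the calling thread (playing thread B)
    // does other work, then signals the event to wake A up.
    public static string Run()
    {
        var gate = new ManualResetEvent(false);
        string result = null;

        var threadA = new Thread(() =>
        {
            gate.WaitOne();      // A is "dead in its tracks" here
            result = "A resumed";
        });
        threadA.Start();

        Thread.Sleep(100);       // B keeps getting timeslices meanwhile...
        gate.Set();              // ...then signals, unblocking A

        threadA.Join();
        return result;
    }

    static void Main() => Console.WriteLine(Run());
}
```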

A good example of this is the background printing that most applications do today. An application's main thread is dedicated to processing UI events - button clicks, keyboard presses, drag-and-drop, etc. If you print a document from your favorite word processor, what happens conceptually is that the task of sending the print instructions to the printer is delegated to a secondary thread. So at this point, your application has two threads that are running - one thread servicing the UI and the other thread handling the print job. Since this is on a single processor, single core machine, the OS swaps between the two threads, granting timeslices to each. In this case, the print job thread will end after it finishes sending the print instructions, and then only your UI thread will be left.

A question you may have at this point is this:

Doesn't it take longer to print this way on a single processor, single core machine since the OS is having to swap between the print job thread and the UI thread?

And the answer is YES. It does take longer this way. But consider the alternative. If the print job were executed on the UI thread, the user interface would be unresponsive to your input, i.e., button clicks, keyboard presses, etc., until the print job was complete. And this would frustrate you as the user because the application isn't responding to your input. So, in effect, multithreading is really an illusion of parallelism, at least on a single processor, single core machine. However, you get the satisfaction of being able to interact with your application while the print job is accomplished on another thread, even though the print job takes longer doing it this way.

Now let's move to a multicore machine. If your process has the same two threads, A and B, to execute, then each thread can be scheduled on a separate core. In this case, both threads run simultaneously without interruption. The OS doesn't have to swap between the threads because each thread has its own core to run on. Make sense?

Finally, let's consider the priority associated with threads (assume single processor, single core again). Each thread in a given application has, by default, the same priority. What this means is that the OS will consider all threads equal with regard to scheduling. If you have two threads to be executed, they will get roughly the same amount of time on the processor. You can adjust this, however, by increasing or decreasing the priority of one thread over the other. In this case, the thread with the higher priority is favored for scheduling purposes over the thread with the lower priority, meaning that it gets more timeslices than the other thread. In some limited cases, adjusting the priority of threads can improve your application's performance, but for most applications, it is not necessary. The thing to be cautious of is to not "starve" a thread, especially the UI thread. The OS helps to prevent this by not starving a thread altogether. Even so, adjusting the priorities can make your application appear sluggish, if not altogether unresponsive, if the UI thread is "put on a diet," so to speak.
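A small sketch of the default and adjusted priorities (the worker's body is just a placeholder; ThreadPriority is the real .NET enum, and the setting is a hint to the scheduler, not a guarantee):

```csharp
using System;
using System.Threading;

class PriorityDemo
{
    static void Main()
    {
        var worker = new Thread(() => Thread.Sleep(50));
        Console.WriteLine(worker.Priority); // Normal by default

        // A hint to the scheduler, not a guarantee; the OS still
        // prevents outright starvation of lower-priority threads.
        worker.Priority = ThreadPriority.BelowNormal;
        worker.Start();
        worker.Join();
    }
}
```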

You can read more about thread priorities here and here.

I hope this helps.

Matt Davis