When I ran this code using gcc,

$ cat eatup.c
#include <stdio.h>
#include <stdlib.h>

int main(void) {
    int i = 0;
    /* busy loop: keeps one core at 100% */
    while (1) {
        i++;
    }
}
$

the CPU graph looked like this:

[screenshot: System Monitor CPU history showing core2 rising to 100%, then a crossover where core1 takes over at 100%]

I am not sure why the two core-usage lines cross.

  • I started the run at the rise just to the left of the 40 mark; core2 usage initially rose to 100%, but after some time there was a switch and core1 usage went to 100%.

  • Subsequent runs have not reproduced the situation; all I get is a single rise on one core. [screenshot: a single core rising to 100% and staying there]

This might be a bit OS-dependent (how processes are scheduled onto cores), but is there anything that could explain why the switch happened (as shown in the first screenshot)? Any guesses?


It turns out these switches are not so uncommon. Here is a screenshot of System Monitor just after bootup (Ubuntu 10.04):

[screenshot: System Monitor CPU history just after bootup, showing several switches between cores]

+3  A: 

What may have happened is that the OS had two other processes that needed to run. The first was given the second core (because your program was on the first). The second caused your program to lose its core. Then the first process released its core, and your program was assigned to it.

I'm no Linux guru, but it is usually possible to tell the OS that you have a preferred core you want to run on.
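
If you do want to pin the process to a particular core on Linux, a quick way from the shell is taskset -c 0 ./eatup. Below is a minimal sketch of doing the same thing inside the program, assuming the glibc sched_setaffinity() interface; treat it as an illustration rather than something you would normally need:

#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>
#include <stdlib.h>

int main(void) {
    cpu_set_t set;
    CPU_ZERO(&set);
    CPU_SET(0, &set);                     /* ask for core 0 only */

    /* pid 0 means "the calling process" */
    if (sched_setaffinity(0, sizeof(set), &set) == -1) {
        perror("sched_setaffinity");
        return EXIT_FAILURE;
    }

    /* same busy loop as eatup.c; it should now stay on core 0 */
    while (1) {
    }
}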

jdv
That's correct. Note that most OS schedulers will try to keep a process on the same core when possible, to take advantage of caches associated with a single core. That's why in your first graph it switches and stays switched. But on the other hand, the scheduler should be willing to move a process to another core. Consider CPU-intensive processes A, B, C, running on cores 1, 2, 1. If process B ends, you definitely want the scheduler to move one of A or C onto core 2.
Russell Borogove
Outside of benchmarks, there are very few cases where limiting which cores are in use is a good idea. I'd just not do it without being very sure it was necessary. Even then, I'd benchmark the heck out of it both ways to make sure there weren't unexpected side effects.
RBerteig
@RBerteig: I never needed to do this in real life. Maybe just to make it clear to others that a process is effectively single-threaded.
jdv
A: 

This is OS-dependent, but normally no OS ever gives you guarantees that your thread will be run on the same core the whole time, unless you take specific steps to make it so.

While there are some obvious benefits in keeping a thread associated with the same core, there's nothing unusual in it being reassigned to another core from time to time. You might even see it getting thrown from core to core every time it gets to run (or almost every time). In fact, what you see in your tests looks pretty good in that respect.
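
If you want to actually watch this happening, here is a minimal sketch that prints a line whenever the scheduler moves the busy loop to a different core. It assumes the glibc-specific sched_getcpu() call, so it is Linux-only:

#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>

int main(void) {
    int last = -1;
    while (1) {
        int cpu = sched_getcpu();   /* core this thread is running on right now */
        if (cpu != -1 && cpu != last) {
            printf("now running on core %d\n", cpu);
            last = cpu;
        }
    }
}

Running it for a while typically shows occasional migrations, matching the graphs above.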

AndreyT
@AndreyT: yes, I understand that there is no guarantee; what I wanted to know is what could cause the thread to be shifted to another core.
Lazer
@Lazer: OK. I would guess that it would be reasonable for an OS to do it from time to time without any specific reason just to distribute the thermal load more evenly across the CPU die.
AndreyT
With dozens of processes running in the system, alternating between computing things and waiting on I/O, there will always be perfectly sensible reasons for the scheduler to move a thread from one core to another, long before it considers thermal load.
Russell Borogove
@Russell Borogove: And what would that reason be if the thread is *already running*? In the OP's case the thread does not go into a wait state (no I/O, obviously).
AndreyT
Perhaps a packet hit a network interface at the same time as the mouse was moved, both events waking high-priority threads, and putting the 'eatup' thread to sleep. Or any of the other dozens of things going on in a modern OS.
Russell Borogove