views:

127

answers:

5

What is meant by CPU Utilization of a process?

How to measure it?

What are the means for reducing it?

I have always been confused by this concept. I have tried to measure the CPU used by my process with the 'top' command in Linux. What I notice is that when no other user processes are running, my process spikes up to 99% CPU whenever it is not blocked on I/O. But if other processes are running, it drops to 45% or 50%. Is it acceptable for a process to take 99% of the CPU when no other process is running?

Any links or pointers in this direction will also help.

A: 

The CPU is a resource. The operating system (e.g. Linux or Windows) manages this resource and ensures that all your programs get a fair share of time to execute.

So you do not need to worry about this.

Will
+2  A: 

CPU utilization in this context is the proportion of the total available processor cycles that are consumed by each process.

If it's just computing some long-winded calculation, then what you are seeing is normal: the OS is dividing the available 100% of computing resources fairly between the processes that are asking for it.

If, instead, your program should be waiting for some event, like the user pressing a key or input arriving from the network, it sounds like your program is in an infinite loop: it never waits for anything but just churns away all the time. In that case, look for code that madly goes "Is there any work to do? No! Is there any work to do? No! Is there any work to do? No!", and find a way to make it wait for work using an appropriate operating system call or library function, whether that means waiting for the user to press a key, for a packet to arrive on a network connection, or whatever.
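To make the contrast concrete, here is a minimal Python sketch (the queue, worker names, and sentinel are made up for illustration) of the "Is there any work to do? No!" polling anti-pattern next to a version that blocks until work arrives:

```python
import queue
import threading

jobs = queue.Queue()
done = []

def spinning_worker():
    # Anti-pattern: asks "any work? No! any work? No!" without ever waiting,
    # so this thread burns ~100% of a core even while the queue is empty.
    while True:
        try:
            job = jobs.get_nowait()
        except queue.Empty:
            continue            # immediately ask again
        if job is None:
            return
        done.append(job)

def blocking_worker():
    # Better: get() blocks inside the OS until an item arrives,
    # so an idle worker consumes ~0% CPU.
    while True:
        job = jobs.get()
        if job is None:
            return
        done.append(job)

t = threading.Thread(target=blocking_worker)
t.start()
jobs.put("one job")
jobs.put(None)                  # sentinel telling the worker to exit
t.join()
print(done)                     # ['one job']
```

Both workers produce the same results; the only difference is what the thread does while the queue is empty, and that is exactly what `top` sees.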

martinwguy
+1  A: 

CPU utilization is basically ('time spent using CPU' * 100 / 'real time'), usually calculated in small intervals.
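As a rough illustration of that formula (the helper name is hypothetical), in Python you can compare the process's CPU time against wall-clock time:

```python
import time

def cpu_utilization(work, *args):
    """Run work(*args) and return ('CPU time spent' * 100 / 'real time')."""
    start_wall = time.monotonic()
    start_cpu = time.process_time()   # user + system CPU time of this process
    work(*args)
    cpu = time.process_time() - start_cpu
    wall = time.monotonic() - start_wall
    return cpu * 100.0 / wall

# A pure number-crunching loop barely waits, so utilization approaches 100%...
busy = cpu_utilization(lambda n: sum(i * i for i in range(n)), 2_000_000)

# ...while a sleeping process spends almost no CPU time, so it sits near 0%.
idle = cpu_utilization(time.sleep, 0.5)

print(f"busy loop: {busy:.0f}%  sleeping: {idle:.0f}%")
```

Tools like `top` do essentially this calculation over short, repeating intervals.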

99% means it's just crunching numbers all the time without waiting. That's quite normal, and sometimes it's actually what you want: the less the process waits, the faster it will compute its result.

When there are other processes around with the same or higher priority, it is normal for CPU utilization to go down. After all, there's only one CPU and a bunch of processes trying to use it at the same time; every now and then your process has to wait until the others have done their job, and that's why its CPU utilization drops.

But sometimes high CPU utilization is bad. When a non-critical process works for hours with high CPU usage, other, more critical processes can't get their fair share of the CPU and take longer to finish. For a web server that means fewer clients can use your server; for a file server it means files take longer to download. Real users will be unhappy in either case.

So, basically, it depends on what your server is doing and whether your process is critical to your business. If it is critical, you want it to take as much CPU as possible; otherwise, as little as possible.

vava
+1  A: 

What are the means for reducing it?

It depends on whether you want to reduce the height of the CPU usage spike, the duration of your lengthy computation, or both.

So if your app is using 100% CPU for one minute, do you want to "reduce" it to "50% over 2 minutes" or "100% over 30 seconds" or "50% over 1 minute"?

These are all different.

For example, say you wanted to make sure that an application never uses more than 50% of the computing power of a dual-core machine (I'm not saying it's a good thing to do, I'm just giving an example). There's a very easy way to do it: set the "CPU affinity" of your program to only one of the two cores. Note that I don't recommend doing this in your case; all I'm saying, for the sake of completeness, is that it exists, it has valid uses, and there's a reason why both Linux and Windows developers have written APIs and end-user commands allowing CPU affinity to be set on a per-process basis (these functionalities are fully available both to programmers and to end users).
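On Linux, one way to do this from code is `os.sched_setaffinity` (Linux-only; the `taskset` command does the same from the shell). A small sketch, assuming the machine has at least a core 0:

```python
import os

pid = 0  # 0 means "the calling process"

before = os.sched_getaffinity(pid)
print("allowed cores before:", sorted(before))

# Pin this process to core 0 only: on a dual-core machine it can now
# never use more than 50% of the machine's total computing power.
os.sched_setaffinity(pid, {0})
print("allowed cores after:", sorted(os.sched_getaffinity(pid)))

# Restore the original affinity mask.
os.sched_setaffinity(pid, before)
```

The same mask can be set for an already-running process by passing its PID instead of 0 (with the appropriate permissions).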

Now if you want to reduce the duration of a lengthy computation, you have to optimize your application so that it uses fewer resources (for example by using an algorithm that is better suited to your problem).

Is it acceptable for a process to take 99% of CPU when no other process is running?

It depends on your requirements, but usually yes: it is not only acceptable but common.

Webinator
+1  A: 

Programs are either running or waiting, 100% or 0%. When you see a utilization of some other amount, like 50%, that is an average over some time interval, like a second. So don't worry about the utilization percent. What you should do is make sure it's not doing anything it doesn't absolutely have to do. That's what performance tuning is about. That's all there is to it.
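The "50% is an average" point can be demonstrated with a small Python sketch (interval lengths are arbitrary): a process that alternates equal bursts of spinning and sleeping is, at every instant, either at 100% or 0%, yet averages out near 50% over the whole interval.

```python
import time

def half_busy(seconds):
    # Alternate ~10 ms of pure spinning (instantaneously 100% CPU)
    # with ~10 ms of sleeping (instantaneously 0% CPU).
    end = time.monotonic() + seconds
    while time.monotonic() < end:
        burst = time.monotonic() + 0.01
        while time.monotonic() < burst:
            pass
        time.sleep(0.01)

start_cpu = time.process_time()
start_wall = time.monotonic()
half_busy(0.5)
pct = (time.process_time() - start_cpu) * 100 / (time.monotonic() - start_wall)
print(f"average utilization: {pct:.0f}%")   # roughly 50%
```

A sampling tool like `top` reports exactly this kind of average, which is why the displayed percentage depends on the sampling interval.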

Mike Dunlavey