views:

110

answers:

5

Does MSVC automatically optimize computation on dual-core architectures?

void Func()
{
   Computation1();
   Computation2();
}

Given two computations with no relation to each other in a function, does the Visual Studio compiler automatically optimize the computations and allocate them to different cores?

+6  A: 

No. It is up to you to create threads (or fibers) and specify what code runs on each one. The function as defined will run sequentially. It may switch to another core during execution (thanks, Drew), but it will still be sequential. In order for two functions to run concurrently on two different cores, they must first be running in two separate threads.
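
To make the point concrete, here is a minimal sketch of the explicit version, using C++11's std::thread (which postdates this discussion; boost::thread, mentioned below, has the same shape). Computation1 and Computation2 are stand-ins for the question's functions, assumed independent:

```cpp
#include <thread>

// Stand-ins for the question's computations (assumed independent).
int result1 = 0;
int result2 = 0;

void Computation1() { result1 = 21; }
void Computation2() { result2 = 21; }

// The parallel version must be written explicitly: one thread per
// independent computation, then a join before either result is used.
void Func()
{
    std::thread t1(Computation1);
    std::thread t2(Computation2);
    t1.join();
    t2.join();
}
```

The joins are what restore the sequential guarantee at the end of Func: after them, both results are safe to read.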

As greyfade points out, the compiler is unable to detect whether it is possible. In fact, I suspect that this is in the class of NP-Complete problems. If I am wrong, I am sure one of the compiler gurus will let me know.

Mark Wilkins
I think you meant "it may switch to another _core_ during execution". It will always be the same thread, but the OS may schedule it on any available core to run (barring explicit processor affinity settings).
Drew Hall
Or he meant that the OS may interrupt his program's thread and schedule a thread from a different process.
Nick Meyer
No, he clearly says "allocate them to **different** cores". It seems pretty unambiguous to me.
Terry Mahaffey
@Drew: Yes - thanks for pointing that out.
Mark Wilkins
I would think it is actually harder than NP-complete; it sounds closer to the halting problem to me. That's not to say it wouldn't be possible for some functions; for example, it may be easy to detect that some functions are pure.
jk
+8  A: 

Don't quote me on it, but I doubt it. The OpenMP pragmas are the closest thing to what you're trying to do here, but even then you have to tell the compiler to use OpenMP and delineate the tasks.

Barring linking to libraries which are inherently multi-threaded, if you want to use both cores you have to set up threads and divide the work you want done intelligently.
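
For reference, the OpenMP route mentioned above might look like the following sketch. Note the point being made: even here, the programmer must delineate the tasks, and OpenMP must be enabled explicitly (/openmp for MSVC, -fopenmp for GCC/Clang); without that flag the pragmas are ignored and the sections run one after another:

```cpp
// Stand-ins for two independent computations.
int a = 0, b = 0;

void Computation1() { a = 1; }
void Computation2() { b = 2; }

void Func()
{
    // Each "section" may run on a different core when OpenMP is enabled.
    // Without the OpenMP compiler flag, the pragmas are ignored and the
    // code runs sequentially, which is why it is safe either way here.
    #pragma omp parallel sections
    {
        #pragma omp section
        Computation1();

        #pragma omp section
        Computation2();
    }
}
```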

Sam Post
+2  A: 

There's no reliable way for the compiler to detect that the two functions are completely independent and that they have no state. Therefore, there's no way for the compiler to know that it's safe to break them out into separate threads of execution. In fact, threads aren't even part of the C++ standard (until C++1x), and even when they will be, they won't be an intrinsic feature - you must use the feature explicitly to benefit from it.

If you want your two functions to run in independent threads, then create independent threads for them to execute in. Check out boost::thread (which is also available in the std::tr1 namespace if your compiler has it). It's easy to use and works perfectly for your use case.

greyfade
Well, just because the problem can't *always* be solved is no reason why the compiler couldn't do it in simple cases. Many compiler optimizations already work that way: They'll be applied if the code is simple enough that the compiler can determine that it's safe.
jalf
There's a compiler that automatically handles threading? News to me. You're right, though, of course. But I also think it's unreasonable to assume that it's so simple - particularly if one or both functions are in a separate compilation unit. More so when you consider the halting problem.
greyfade
+2  A: 

No. Madness would ensue if compilers did such a thing behind your back; what if Computation2 depended on side effects of Computation1?

If you're using VC10, look into the Concurrency Runtime (ConcRT, or "concert") and its partner, the Parallel Patterns Library (PPL).

Similar solutions include OpenMP (kind of old and busted IMO, but widely supported) and Intel's Threading Building Blocks (TBB).
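
With PPL, the whole question collapses to one call: concurrency::parallel_invoke(Computation1, Computation2) from <ppl.h>. PPL is Windows-only, so as a portable sketch of the same fork/join shape, here is an equivalent using std::async (this is not PPL itself, just the same pattern):

```cpp
#include <future>

// Stand-ins for the question's independent computations.
int x = 0, y = 0;

void Computation1() { x = 10; }
void Computation2() { y = 32; }

// Same shape as concurrency::parallel_invoke(Computation1, Computation2):
// run one task on a (potentially) different core, run the other on the
// calling thread, then wait for both before returning.
void Func()
{
    auto f = std::async(std::launch::async, Computation1);
    Computation2();   // runs on the calling thread meanwhile
    f.get();          // join point: both computations are done here
}
```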

Terry Mahaffey
The question already assumed "computations with no relations" i.e. no dependencies. Of course, the compiler would have to _prove_ that, one of the many reasons this doesn't exist yet.
MSalters
+1  A: 

The compiler can't tell if it's a good idea.

First, of course, the compiler must be able to prove that it would be a safe optimization: that the functions can safely be executed in parallel. In general that's an NP-complete problem, but in many simple cases the compiler can figure it out (it already does a lot of dependency analysis).

Some bigger problems are:

  • it might turn out to be slower. Creating threads is a fairly expensive operation. The cost of that may just outweigh the gain from parallelizing the code.
  • it has to work well regardless of the number of CPU cores. The compiler doesn't know how many cores will be available when you run the program. So it'd have to insert some kind of optional forking code. If a core is available, follow this code path and branch out into a separate thread; otherwise follow this other code path. And again, more code and more conditionals also have an effect on performance. Will the result still be worth it? Perhaps, but how is the compiler supposed to know that?
  • it might not be what the programmer expects. What if I already create precisely two CPU-heavy threads on a dual-core system? I expect them both to be running 99% of the time. Suddenly the compiler decides to create more threads under the hood, and suddenly I have three CPU-heavy threads, meaning that mine get less execution time than I'd expected.
  • How many times should it do this? If you run the code in a loop, should it spawn a new thread in every iteration? Sooner or later the added memory usage starts to hurt.

Overall, it's just not worth it. There are too many cases where it might backfire. Added to the fact that the compiler could only safely apply the optimization in fairly simple cases in the first place, it's just not worth the bother.
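
The "optional forking code" described above can be written out by hand, which makes the trade-off visible. A sketch using C++11 facilities (std::thread::hardware_concurrency; the cost threshold is exactly the judgment the compiler cannot make for us):

```cpp
#include <thread>

// Stand-ins for two independent computations.
int r1 = 0, r2 = 0;

void Computation1() { r1 = 1; }
void Computation2() { r2 = 2; }

// Hand-written "optional forking": pay the cost of creating a second
// thread only when more than one core is available. Whether the work
// is heavy enough to amortize that thread-creation cost is precisely
// what the compiler has no way to know at compile time.
void Func()
{
    if (std::thread::hardware_concurrency() > 1)
    {
        std::thread t(Computation1);  // fork path: use a second core
        Computation2();
        t.join();
    }
    else
    {
        Computation1();               // sequential fallback
        Computation2();
    }
}
```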

jalf