views: 1642
answers: 15

I'm building a PC with the new Intel i7 quad-core processor. With hyperthreading turned on, it reports 8 cores in Task Manager.

Some of my colleagues are saying that hyperthreading will make the system unreliable and suggest turning it off.

Can any of you good people enlighten me and the rest of the Stack Overflow users?

Follow-up: I've been using hyperthreading constantly, and it's been spot on. No instability whatsoever. I'm using:

  • Microsoft Server 2008 64 bit
  • Microsoft SQL Server 2008 64 bit
  • Microsoft Visual Studio 2008
  • Diskeeper Server
  • Lots of controls (Telerik, Dundas, Rebex, Resharper)
+1  A: 

I've had a hyperthreading PC for a couple years now. Not that many cores, but it's worked fine for me.

Wish I had test data to prove your colleagues wrong, but it sounds like it's just my opinion versus theirs at this point. ;)

Eddie Parker
+2  A: 

Unreliable? I doubt it. The only disadvantage of hyperthreading I can think of is that if the OS is not aware of it, it might schedule two threads on one physical processor while other physical processors sit idle, which degrades performance.
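On Linux you can observe and control this placement yourself. A minimal sketch using the standard library (the sibling pairing of logical CPU IDs is platform-specific and would need to be checked, e.g. in /sys/devices/system/cpu/cpu0/topology/thread_siblings_list):

```python
import os

def pin_to_cpus(pid, cpus):
    """Pin a process to the given set of logical CPU IDs (Linux only).

    Returns the resulting affinity set, or None where the API is
    unavailable (e.g. Windows, macOS).
    """
    if hasattr(os, "sched_setaffinity"):
        os.sched_setaffinity(pid, cpus)
        return sorted(os.sched_getaffinity(pid))
    return None

if __name__ == "__main__":
    # Pin the current process (pid 0) to a single logical CPU. On a
    # hyperthreaded chip, two logical CPUs map onto each physical core,
    # so a scheduler (or a user doing this by hand) that piles two busy
    # threads onto sibling logical CPUs wastes the other physical cores.
    print(pin_to_cpus(0, {0}))
```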

Mehrdad Afshari
Precisely. As explained here: http://blogs.msdn.com/oldnewthing/archive/2004/09/13/228780.aspx
Grant Wagner
"Windows XP and Windows Server 2003 are hyperthreading-aware. When faced with the above scenario, those schedulers will know that it is better to put one task on one of the A's and the other on one of the B's."
Grant Wagner
+8  A: 

Stability isn't likely to be affected, since the abstraction is very low level and the OS just sees it as another CPU to provide work to. However, performance is another matter.

In all honesty I can't say whether this is still the case, but at least when HT-enabled CPUs first came out, there were known problems with some applications. For example, MySQL and multi-threaded apps like the Java application I support for my day job were known to perform worse with HT enabled. We always recommended it be disabled, at least for our particular use case of a server-side enterprise application.

It's possible that this is no longer an issue, and in a desktop environment it is less likely to be a problem for most use cases. The ability to split work on the CPU generally leads to more responsive applications when the CPU is heavily utilized. However, the context switching and overhead can be a detriment when the app is already heavily threaded and CPU-intensive, as in the case of a database server.
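One practical way to decide for your own workload is to time the same CPU-bound job at different degrees of parallelism and compare physical-core count against logical-core count. A rough sketch (the workload and worker counts are placeholders, not a real benchmark suite):

```python
import time
from multiprocessing import Pool

def burn(n):
    """A small CPU-bound placeholder workload."""
    total = 0
    for i in range(n):
        total += i * i
    return total

def throughput(workers, jobs=8, n=200_000):
    """Seconds to finish `jobs` tasks using `workers` processes."""
    start = time.perf_counter()
    with Pool(workers) as pool:
        pool.map(burn, [n] * jobs)
    return time.perf_counter() - start

if __name__ == "__main__":
    # On a 4-core/8-thread chip: if 8 workers are no faster than 4,
    # hyperthreading is not helping this particular workload.
    for w in (1, 2, 4):
        print(w, "workers:", round(throughput(w), 3), "s")
```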

Jay
My understanding is that Intel is using a completely different implementation of hyperthreading than was used in the Pentium 4 series, and that this one works much better. I have done some experimentation, and single-threaded programs on an i7 are about the same speed whether HT is on or off, which was not the case with the Pentium 4.
Curt Sampson
The problem with hyperthreading on the P4 was the HUGE pipelines. If you stalled waiting for slow main memory, then hyperthreading was an advantage. If you were mainly operating from cache, then hyperthreading was a serious disadvantage. IMHO. Which is why it was often ruled out for databases.
Chris Kaminski
A: 

The short answer: yes.

The long answer, as with almost every question, is "it depends". Depends on the OS, the software, the CPU revision, etc. I have personally had to disable hyperthreading on two occasions to get software working properly (one, with the Synergy application, and two, with the Windows NT 4.0 installer), but your mileage may vary.

As long as Windows is installed with multiple HT cores detected from the beginning (so it loads the relevant drivers and such), you can always disable and re-enable HT after the fact. If you have bizarre stability issues with specific software that you can't resolve, it's not hard to disable HT and see whether it has any impact.

I wouldn't disable it to start with because, frankly, it will probably work fine in 99.99% of your daily use. But be aware that yes, it can occasionally cause bizarre behaviors, so don't rule it out if you happen to be troubleshooting something very odd down the road.

nezroy
+2  A: 

There was a problem with SQL Server and hyperthreading for some queries because SQL Server has its own scheduler; setting MAXDOP to 1 would solve that.
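For reference, that setting can be applied server-wide via sp_configure or per query with a hint. A configuration sketch (the value 4 and the table name are placeholders, not recommendations):

```sql
-- Server-wide: cap parallelism at the number of physical processors.
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'max degree of parallelism', 4;  -- placeholder value
RECONFIGURE;

-- Or per query, with a hint:
SELECT COUNT(*) FROM Orders OPTION (MAXDOP 1);  -- hypothetical table
```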

SQLMenace
Thanks. Found this at http://support.microsoft.com/kb/329204: "For servers that have hyper-threading enabled, the MAXDOP value should not exceed the number of physical processors."
Simon Hughes
A: 

The two threads in a hyperthreaded core share the same cache, and as such don't suffer from the cache-consistency problems that a multiple-CPU architecture can. That said, if the developer of a piece of software is programming with multiple CPUs in mind, they will (or should) be writing with read semantics (iirc, that's the term), i.e. all writes are flushed from the cache immediately.

SnOrfus
That's a very interesting remark. Having no experience with the Itanium, I'll often prefix instructions with LOCK (as in LOCK INC) or just use the OS intrinsic, but is that what you're talking about? How would I ensure that "all writes are flushed from the cache immediately"? Hmm.
pngaz
LOCK is for locking the bus, preventing a read at the exact same time as a write, not for flushing straight to RAM. Cache-snooping eliminates the need to worry about write-through vs. write-back.
Brian Knoblauch
+2  A: 

To whatever degree Windows is unstable, it's highly unlikely that hyperthreading contributes significantly (or it would have made big news by now.)

le dorfier
+3  A: 

Off the top of my head I can think of a few reasons your colleagues might say this.

  • Several articles about SQL performance suffering under hyperthreading. I believe it winds up doing too much context switching or cache thrashing; can't remember exactly.

  • Early on, going from single proc to multi-proc (or, more likely for most people, hyperthreaded procs) brought many threading issues into the open. Race conditions, deadlocks, etc., that they had never seen before. Even though it's a code problem, some people blamed the procs.
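Those latent bugs only surface when threads genuinely run in parallel or get preempted at the wrong moment. A classic deadlock shape, sketched in Python, is two threads taking two locks in opposite orders; the standard fix is a single global lock order:

```python
import threading

lock_a = threading.Lock()
lock_b = threading.Lock()

def with_both(first, second):
    """Acquire two locks safely.

    Always take locks in a fixed global order (here, by object id) so
    two threads can never each hold one lock while waiting on the other.
    """
    first, second = sorted((first, second), key=id)
    with first:
        with second:
            pass  # critical section touching both protected resources

if __name__ == "__main__":
    # Opposite argument orders would deadlock without the sorting above.
    t1 = threading.Thread(target=with_both, args=(lock_a, lock_b))
    t2 = threading.Thread(target=with_both, args=(lock_b, lock_a))
    t1.start(); t2.start()
    t1.join(); t2.join()
    print("no deadlock")
```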

Are they making the same claims about multi-core/multi-proc, or just about hyperthreading?

As for me, I've been developing on a hyperthreaded box for 4 years now; the only problem has been a UI deadlock issue of my own making.

Rob McCready
I was going to answer with your second point when I saw your answer. Early hyperthreaded procs probably revealed lots of non-thread-safe application code on the desktop and hyperthreading was blamed for the application failures, rather than the applications themselves.
Grant Wagner
+3  A: 

Hyperthreading mainly makes a difference in the scheduler's behaviour/performance when dispatching threads to the same CPU as opposed to different CPUs.

It will show up in a badly coded application that does not handle race conditions between threads.

So it is usually bad design/code that suddenly finds a failure-mode condition.

+1  A: 

As far as I know, the OS doesn't see hyperthreading as any different from having actual multiple cores. From its point of view there is no difference - it's isolated.
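You can see this abstraction directly: standard APIs simply report logical processors, with no distinction between real cores and HT siblings (a sketch; telling physical cores apart needs platform-specific sources like /proc/cpuinfo or WMI):

```python
import os

# os.cpu_count() counts logical CPUs, so HT siblings are included;
# fall back to 1 if the count is unavailable on this platform.
logical = os.cpu_count() or 1
print(f"OS reports {logical} logical processors")
```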

So, aside from the fact that hyperthreading's "extra cores" aren't "real" (in the strictly technical sense) and don't have the full performance of "real" CPU cores, I can't see that it'd be any less reliable. Slower, perhaps, in some rare instances, but not less reliable.

Of course, it depends on what you're running - I suppose some applications might get "down & dirty" with the CPU and hyperthreading might confuse them, but that's probably pretty rare.

I myself have been running a PC with hyperthreading for several years now, and I have seen no stability problems.

Sorry I don't have more concrete data!

Keithius
+1  A: 

I own an i7 system, and I haven't had any issues.

If it works with multiple cores, it works with hyperthreading.

Giovanni Galbo
A: 

Personally, I've found that hyperthreading, while not causing any problems, doesn't actually help all that much either. It might be like having an extra 0.1 of a processor. On my HT machine at work, I only very seldom see my CPU go above 50%. I don't know if HT has gotten any better with newer processors like the i7, but I'm not optimistic.

Kibbee
I think it depends on your workload.
Giovanni Galbo
A: 

Other than hearing a few reports about SQL Server, all I can report is positive. I get about 25% better performance in heavy multi-threaded apps with HT on. I've never run into a problem with it, and I'm using a first-generation HT processor...

Brian Knoblauch
A: 

Late to the party, but for future reference:

I'm currently having an issue with this with SQL Server. Basically, my understanding is that hyperthreaded siblings on the same processor share the same L1 & L2 cache, which can cause issues between the two. Citrix also appears to have this problem, from what I'm reading.

Slava Ok wrote a good blog post on it.

John MacIntyre
A: 

I'm here very late but found this page via Google. I may have discovered a very subtle problem. I have an i7 950 running Server 2003 and it's great. Initially I left hyperthreading on in the BIOS, but during some testing and pushing things hard, I ran a program called "crashme" by Carrette. This program tries to crash an OS by spawning processes and feeding them garbage to try to run. My dual Opteron setup ran it forever without a problem, but the 950 crashed within the hour. It didn't crash for anything else unless I did something stupid, so it was very surprising. On a whim I turned off HT and ran the program again. It runs all night, even with multiple instances of it. One anecdote doesn't mean much, but try it and see what happens. Also, it seems that the processor runs slightly cooler at any given load with HT turned off. YMMV.

Fred Bosick