Wednesday, February 21, 2007

Are hyperthreads good for you?

Although new architectures always sound good and are hailed for a number of advancements, computers are only as good as the people who program them. Computer science really peaked in the 60s and 70s, but unfortunately most of that knowledge has disappeared, in large part because the most experienced computer scientists were pushed into early retirement plans before they could pass on their knowledge.

In the case of hyperthreading, this corporate behavior had two consequences. The first is that many operating systems see the threads as CPUs, which they are not, because only critical resources such as the register set are duplicated while the execution units and caches remain shared. This error causes the scheduling algorithms in the OS to miscompute the available capacity, producing the so-called “missing MIPS.”
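As a rough illustration of the miscomputation (all numbers below are invented assumptions, not measurements; the 1.2 speedup factor is a hypothetical gain from hyperthreading, not a vendor figure):

```python
# Sketch: why counting hyperthreads as full CPUs inflates capacity.
# Every number here is an illustrative assumption.

PHYSICAL_CORES = 2          # real execution cores
THREADS_PER_CORE = 2        # logical "CPUs" exposed by hyperthreading
SINGLE_CORE_MIPS = 1000     # throughput of one core running one thread
HT_SPEEDUP = 1.2            # assumed gain from two threads sharing a core

# What a naive scheduler believes: every logical CPU is a full core.
assumed_capacity = PHYSICAL_CORES * THREADS_PER_CORE * SINGLE_CORE_MIPS

# What the hardware can actually deliver: a core running two
# hyperthreads is only modestly faster than a core running one.
actual_capacity = PHYSICAL_CORES * SINGLE_CORE_MIPS * HT_SPEEDUP

missing_mips = assumed_capacity - actual_capacity
print(f"assumed {assumed_capacity}, actual {actual_capacity:.0f}, "
      f"missing {missing_mips:.0f} MIPS")
```

Under these made-up numbers the scheduler believes it has 4000 MIPS when the hardware can deliver only 2400, and the 1600 MIPS gap is the “missing MIPS” a capacity planner goes hunting for.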

The second consequence is that the art of concurrent programming has been mostly lost. Although, for example, the Cedar system had beautifully implemented threads, this had been very difficult to achieve (it cost Doug Wyatt a lot of sweat and tears until the last deadlock in the threaded viewer system was squelched). Today few programmers know how to work with threads, and those who do often get it wrong.
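One classic way such deadlocks arise is two threads acquiring the same pair of locks in opposite order, each holding its first lock while waiting forever for the second. The conventional remedy is to impose a single global acquisition order. Here is a minimal sketch in Python (the example and its helper names are invented for illustration; this is not how Cedar did it):

```python
import threading

lock_a = threading.Lock()
lock_b = threading.Lock()

# Deadlock-prone pattern: thread 1 takes lock_a then lock_b, while
# thread 2 takes lock_b then lock_a. If each grabs its first lock
# before the other releases, both wait forever.

# Conventional fix: acquire locks in one fixed global order (here,
# sorted by object identity), no matter how the caller names them.
def ordered_acquire(*locks):
    for lock in sorted(locks, key=id):
        lock.acquire()

def release_all(*locks):
    for lock in locks:
        lock.release()

done = []

def worker(first, second, name):
    ordered_acquire(first, second)   # same order in every thread
    try:
        done.append(name)            # stand-in for the real work
    finally:
        release_all(first, second)

t1 = threading.Thread(target=worker, args=(lock_a, lock_b, "t1"))
t2 = threading.Thread(target=worker, args=(lock_b, lock_a, "t2"))
t1.start(); t2.start()
t1.join(); t2.join()
print(done)   # both threads finish; no deadlock
```

Had `worker` called `first.acquire()` then `second.acquire()` directly, the two threads could interleave into exactly the kind of deadlock that plagued the threaded viewer system.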

Because the software I use cannot reliably exploit hyperthreading, I have turned it off. Maybe this is also why Intel’s latest processors are not hyperthreaded.

For more details, read the following paper: Neil J. Gunther, The virtualization spectrum from hyperthreads to grids, Proceedings of the Computer Measurement Group (CMG) 32nd International Conference, Reno (NV), December 3-8, 2006.

PS: Here are the hyperlinks to the two articles mentioned in comment number 3 by reader RocketRoo: http://www.kernelthread.com/publications/virtualization/ and http://www.gotw.ca/publications/concurrency-ddj.htm.