The lesson of Prescott



Prescott is the third revision of the Pentium 4 production line, which is itself the seventh generation of Intel x86 CPUs.

What the Pentium 4 series features is the NetBurst microarchitecture, which was designed to be scalable from 1 GHz to 10 GHz. The first of the series was the premature Willamette, released in Nov. 2000, which suffered from many design inefficiencies. Then, in Jan. 2002, came the famous and highly successful Northwood. In Sep. 2003, Gallatin, also known as the P4 Extreme Edition, was marketed. I say "marketed" because it featured almost nothing new except a larger cache and a faster FSB. Prescott, released in Feb. 2004, is yet another important milestone of the P4 series, featuring a major reworking of the design. However, heat problems cut its life span short very early: generating 40% more heat than Northwood, no Prescott ever reached 4 GHz.

For the first time in the history of VLSI manufacturing, we are made aware of possible disruptions to Moore's Law.

While IC manufacturing capability is steadily progressing at an exponential rate (Prescott was the first to reach 90nm in industrial mass production), the design was still dictated by marketing needs rather than architectural considerations. It is well known that Intel relies heavily on clock frequency to sell its products. Hence it is no surprise that they (used to) prioritize one task over any other: shrink the core and raise the clock speed. This approach, while certainly workable, faces two major disadvantages: transistor leakage and power density[1].

Since May 2005, Intel has been releasing dual-core processors based on the Pentium 4 under the name Pentium D. It is basically a double Prescott, the way Intel "makes lemonade out of lemons". Huge sums were again spent on marketing the supposed advantages of dual cores. The Pentium D was first introduced at 90nm and soon moved to 65nm (the PD 9xx series). In the meantime, one crucial factor contributing to the capability of this (or any other electronic) product was constantly neglected: the design.

The introduction of the Core microarchitecture seeks to address these problems. It is better designed and, as a consequence, achieves considerably higher performance with an energy efficiency that the NetBurst series could not possibly dream of. It proves, to our surprise, how much CPU capability can benefit from better design. Naturally, Intel is now making the Core 4xxx series, which features a single core instead of two. Although Intel does want to preserve the marketing value of dual cores, this move signifies to some extent that dual core is not absolutely necessary.

The same lesson can be drawn from the development of GPUs. The CrossFire and SLI solutions do provide a means, on occasions where money and energy efficiency are not an issue, to boost performance right here, right now. In the long run, however, a better-designed single GPU will consistently outperform older generations (those of six months ago) of dual (or quad, as Nvidia is now offering) GPU setups.

If energy efficiency becomes a serious issue in the future, we might want to consider a feature that I call a "dynamic clock range": basically the ability to adjust the clock frequency according to workload. Asus motherboards have already implemented something along these lines (Non-delay Overclocking), and I hope this will become a standard feature among all motherboard vendors, with support from the OS vendors.
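As a rough sketch of what OS-level support for such a dynamic clock range might look like, here is a minimal Python script for Linux that switches the cpufreq governor based on the one-minute load average. The load threshold and polling interval are arbitrary values chosen purely for illustration, and the script assumes a kernel with cpufreq support and root privileges.

    import os
    import time

    CPUFREQ = "/sys/devices/system/cpu/cpu0/cpufreq"

    def set_governor(name):
        # Write the cpufreq governor, e.g. "performance" or "powersave".
        # Requires root and a kernel built with cpufreq support.
        with open(os.path.join(CPUFREQ, "scaling_governor"), "w") as f:
            f.write(name)

    while True:
        # Crude policy: clock up under load, clock down when idle.
        load = os.getloadavg()[0]  # one-minute load average
        set_governor("performance" if load > 0.5 else "powersave")
        time.sleep(5)

In practice the Linux "ondemand" governor already performs this kind of load-based scaling inside the kernel; the sketch above is only meant to make the idea concrete.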


[1] Transistor leakage can be measured by static power dissipation; power density is the power dissipated per unit of die area.
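In symbols (a standard first-order approximation added here for clarity, not part of the original footnote):

    P_{\text{static}} \approx I_{\text{leak}} \cdot V_{dd}, \qquad \text{power density} = \frac{P_{\text{dissipated}}}{A_{\text{die}}}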

