Archive for Sunday, January 9, 2005

Chip industry gearing up for two-headed PC processors

January 9, 2005


— For decades, computer performance has been driven largely by the increasing numbers of ever-smaller transistors squeezed into the machines' silicon brains.

With each generation, speeds jumped and prices dropped.

Though the tiny switches built in silicon are the heart of the digital revolution, they can't shrink forever. And in recent years, chip companies have struggled to keep a lid on power and heat -- the result of some transistor components shrinking to just a few atoms across.

Now, the world's leading semiconductor companies have unveiled a remarkably similar strategy for working around the problem: In 2005, microprocessors sold for personal computers will sprout what amounts to two heads each.

Instead of building processors with a single core to handle calculations, designers will place two or more computing engines on a single chip. The chips won't run as fast as single-engine models, but they won't require as much power, either, and will be able to handle more work at once.

"There are challenges certainly in this change," said Bernard Meyerson, chief technologist at IBM's Systems and Technology Group. "You're looking at a seminal shift of the industry."

Intel says it will start shipping dual-core chips for desktops, laptops and servers by midyear. AMD will start with a dual-core server chip before releasing ones for desktops and laptops, also in 2005.

Concept's background

Multicore technology is hardly new -- it's already being used in server and networking chips. The concept of using multiple, standalone microprocessors in a computer isn't new either -- Apple Computer Inc. and other vendors have sold dual- and multiprocessor machines for years.

Bernard Meyerson, IBM fellow and chief technologist for the company's Systems and Technology Group, admires a POWER5 multi-chip module. IBM is among the companies shifting away from building increasingly complicated and fast chips with a single brain to slower processors that have two engines, or cores.

IBM, which pioneered multicore chips in 2001 for servers, now says it will develop a multicore chip with Sony Corp. and Toshiba Corp. for video game consoles, high-definition TVs and home servers. The chip, dubbed Cell, should start shipping in 2006.

The shift toward dual- and multicore processors is expected to provide the industry some breathing room as it strives to continue the computing law named after Intel co-founder Gordon Moore -- that the number of transistors on a chip doubles roughly every 18 months.

New, exotic materials and more advanced manufacturing techniques are expected to help the industry maintain the pace predicted by Moore in 1965. But the bulk of the performance improvement will be the result of higher-level innovation, such as multicore chips.

That's because of the electrical leakage caused by the smaller and smaller dimensions of transistor components. Merely ratcheting up the clock speed generates dramatically more heat and provides less bang for the buck performance-wise.

The transition to multicore processors appears to have been more jarring for some companies than others.

AMD claims its latest chip architecture was developed from the ground up with the move to dual- and multicore chips in mind. In fact, AMD has been downplaying the clock speed of its chips, which trailed the frequency of Intel's offerings, since 2001.

"What's interesting to us is all of a sudden people are talking about it," said Ben Williams, vice president of AMD's server and workstation business.

Intel, whose executives once touted plans for 10 gigahertz chips, moved away from its strong focus on clock speed in 2004 -- which for many reasons will be remembered as an annus horribilis for the world's largest chip maker despite record sales and strong profits.

During the year, it canceled a next-generation Pentium 4 project, which had been code named Tejas. It also postponed plans for a 4-gigahertz Pentium under its current Prescott core before outright canceling that, too. Its top performer runs at 3.8 gigahertz.

'Sensible' move

Intel executives don't specifically blame the sudden change in plans on power consumption and heat, though the heat of Prescott-based processors suggests it might be the case. Instead, they say it just made more sense to shift toward dual- and multicore chips.

"We decided it was a better way to use our transistor budget and a better way to deliver performance and responsive computers to our users," said Stephen Smith, a vice president in Intel's Desktop Platforms Group.

Despite the slower clock speeds compared with single-core chips, the new processors are well suited for computer users who run multiple programs at once. Though operating systems have been capable of multitasking for years, it's been something of an illusion on the hardware end.

"What's going on inside of the machine is a bit of a juggling act," said Justin Rattner, an Intel senior fellow. "You've got that one engine doing a little bit of each one of those things, and it appears to users that they're all happening at once."

To take full advantage of multiple cores, programs have to be redesigned so that their work is split into independent pieces the processor's cores can run simultaneously.
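The redesign the article describes amounts to dividing one job into independent tasks. A minimal sketch of the idea, in Python (a hypothetical illustration, not code from any of the companies mentioned): a single sum is split into chunks, and a pool of worker processes lets the operating system schedule each chunk on its own core.

```python
# Sketch: splitting one computation into independent chunks so the
# operating system can place them on separate cores. Separate processes
# are used here because, in CPython, threads share one interpreter lock.
from concurrent.futures import ProcessPoolExecutor

def partial_sum(bounds):
    """Sum of squares over the half-open range [lo, hi)."""
    lo, hi = bounds
    return sum(i * i for i in range(lo, hi))

def parallel_sum_of_squares(n, workers=2):
    # Divide [0, n) into one chunk per worker; each chunk is an
    # independent task with no shared state.
    step = n // workers
    chunks = [(k * step, n if k == workers - 1 else (k + 1) * step)
              for k in range(workers)]
    with ProcessPoolExecutor(max_workers=workers) as pool:
        return sum(pool.map(partial_sum, chunks))

if __name__ == "__main__":
    # Same answer as the serial version, computed in parallel.
    print(parallel_sum_of_squares(1000))
```

The key property is that the chunks share nothing, so the result is identical whether the work runs on one core or several -- only the wall-clock time changes.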

Intel said it paved the way on that front with a technology it introduced in 2002 called Hyper-Threading.

Supported by Windows XP and most Linux distributions, Hyper-Threading tricks the operating system into thinking there is more than one processing engine in the computer, using the processor's idle moments to handle additional work.

Existing software applications will still run on a multicore chip under Windows. But many will need to be rewritten to take full advantage of dual-core chips if they haven't already been optimized for Hyper-Threading.

Still, it's not clear how quickly Intel expected the shift to dual-core chips to occur once Hyper-Threading had been introduced.

Opined Nathan Brookwood, an analyst at the research firm Insight 64: "I think there may be a little bit of revisionism here, but it sure worked out well, regardless of whether it was a plan or they were lucky."
