=================================================================
TITLE: DAC Keynote: Redefining Technology in the Nanometer Era
Published on Gabe on EDA
Bernard Meyerson, IBM Fellow, vice president, and chief
technologist of the Systems and Technology Group of IBM Corp.,
addressed the question "How does one define 'technology' now that
classical scaling is dead (and has been for years)?" in his
keynote talk at this year's DAC. The information technology
industry has relied upon classical scaling in semiconductor
technology to drive performance and product economics. The
science driving performance gains over the past decades involves
more than the economic issues surrounding the areal density of
transistors on a chip.
One result of the transitions across semiconductor process nodes
is that we have now come to the end of using frequency as a
viable metric for evaluating computer capabilities. When Gordon
Moore first proposed his now-famous law in 1965, he was really
pointing out a principle of economics and only incidentally
covering chip density. The benefits of scaling devices have been
integrated circuits that are faster, cheaper, and lower power,
and that enable increased functionality.
Now, scaling device sizes is reaching its limits. For scaling to
remain effective, all parameters, not just device area, must
scale together, so that power density stays constant. However, at
130 nanometers, supply voltages and oxide thicknesses stopped
scaling, because atoms exist only in discrete unit quantities.
The dimensions of the component atoms are becoming large compared
to the imaged feature sizes, leading to non-statistical
variations. One result of failing to keep power density constant
at the next process node is the growing need to manage and design
for power.
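To see why all parameters must scale together, consider the
first-order dynamic power-density relation. The sketch below is a
back-of-the-envelope illustration; the scaling factor and
operating points are invented for intuition, not data from the
talk.

    # Illustrative sketch of classical (Dennard) scaling: if every
    # device parameter scales by the same factor s, power density
    # stays constant. All numbers are hypothetical.

    def dynamic_power_density(c, v, f, area):
        """Dynamic power density ~ C * V^2 * f per unit area."""
        return (c * v**2 * f) / area

    s = 0.7  # classical linear shrink per process node

    # Baseline device (normalized units).
    base = dynamic_power_density(c=1.0, v=1.0, f=1.0, area=1.0)

    # Ideal scaling: capacitance and voltage scale by s, frequency
    # by 1/s, area by s^2. Power density is unchanged.
    ideal = dynamic_power_density(c=s, v=s, f=1/s, area=s**2)

    # What happened near 130 nm: area and frequency keep scaling,
    # but supply voltage (and oxide thickness) stall.
    stalled = dynamic_power_density(c=s, v=1.0, f=1/s, area=s**2)

    print(base, ideal, stalled)  # 1.0, ~1.0, ~2.04

When voltage stops scaling, power density climbs roughly as
1/s^2 per node, which is exactly the power wall Meyerson
describes.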
In previous transitions to smaller process nodes, smaller devices
resulted in faster circuits. Now, smaller devices equate to
cheaper parts but not improved performance. As a result,
processor performance is no longer the key parameter; instead,
system performance becomes the critical issue.
Moving beyond simple dimensional scaling requires innovation in
the transistor itself. Science and creativity will enable further
innovation. One example is strained silicon. By changing the
silicon lattice structure, one can effectively reduce the
electron mass, resulting in vastly increased electron mobility.
Straining the silicon lattice in the opposite direction improves
hole mobility. The 90 nanometer node's band-gap modification
through lattice strain was the first production process to use
science, rather than just optical scaling, to change performance.
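In outline, the underlying physics is the textbook drift-mobility
relation; the strain-engineering specifics are IBM's, but the
standard starting point is:

    \mu = \frac{q\,\tau}{m^{*}}

Here \mu is carrier mobility, q the carrier charge, \tau the mean
time between scattering events, and m^{*} the effective mass.
Tensile strain lowers the in-plane electron effective mass (and
reduces intervalley scattering, raising \tau), so mobility, and
with it drive current, goes up; compressive strain does the
analogous job for holes.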
The one difficulty in moving from an optical to a scientific
basis for performance improvements is the need to prove that the
change is manufacturable. Even more difficult is the need to
schedule innovation as much as 20 years before it is needed. The
transistor roadmap has multiple possible directions, and the
domination of interconnect over most physical parameters is
driving interlayer dielectrics towards lower k values.
Unfortunately, as k approaches 1, material strength goes to zero:
think of a vacuum as the ultimate dielectric.
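The pull towards lower k falls out of the first-order wire-delay
model: delay goes as RC, and wire capacitance scales linearly
with the dielectric constant of the surrounding insulator. A
minimal sketch, with made-up normalized values:

    # First-order interconnect delay ~ R * C, with C proportional
    # to the dielectric constant k. Values are illustrative only.

    def wire_delay(resistance, capacitance):
        return resistance * capacitance

    def wire_capacitance(k, c_geom=1.0):
        """Capacitance = k * (purely geometric capacitance)."""
        return k * c_geom

    r = 1.0  # normalized wire resistance

    # SiO2, early low-k, porous low-k, vacuum
    for k in (4.2, 3.0, 2.4, 1.0):
        d = wire_delay(r, wire_capacitance(k))
        print(f"k={k}: relative RC delay = {d:.2f}")

    # k=1 (vacuum) minimizes delay but has no mechanical strength,
    # which is exactly the trade-off Meyerson points out.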
Because one method for inducing strain depends on differences in
structural properties, future process generations will need to
model strain as part of the development of interlayer
dielectrics. Chemists are investigating materials that achieve
lower k values while also reducing brittleness. Other techniques
add porosity to the interlayer dielectric materials to achieve
similar results.
Nevertheless, 65 nanometer technology is robust. However, physics
and costs intrude into the deployment scenario. Lithography is
moving to immersion optics, devices are being constructed on an
atomic basis, and interconnect and dielectrics are being
reengineered. Each of these changes in silicon processing can
cost in excess of $50 million. 300 mm fabs were projected to cost
$2.5 billion; actual costs have been running between $4 billion
and $6 billion. These high costs are forcing changes in the
industry.
Companies need to pool capital and intellectual property, move
towards more open standards, and share basic research. As a
result, companies must learn to differentiate later in the cycle.
Future creation of value in semiconductors will require holistic
design: companies will need to optimize materials, chips,
software, architecture, and all other components as a system.
Technology and technology innovations by themselves are
insufficient for future success. Rethinking how components are
combined can yield benefits that device scaling alone cannot. For
example, a multiple-core die will have lower power with increased
throughput, compared to a single processor doing everything.
Compute workloads vary, but today's processors, and the process
they are built on, are designed and optimized for peak loads.
As designers move towards multiple cores, functions move towards
multiple threads, each with minimal function and computing
requirements. Ultimately, virtual functions and virtual machines
will enable increased performance and greatly reduced power
requirements. The industry will need to change its objective from
scaling up circuit speed to scaling out. Multiple cores and
subsystems will require a complex intrachip communications
system.
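The power case for multiple cores follows from the same
C * V^2 * f relation: lowering frequency lets voltage drop too,
so power falls faster than throughput. A toy comparison with
invented operating points:

    # Why two slower cores can beat one fast core on power:
    # dynamic power goes as C * V^2 * f, and a lower frequency
    # permits a lower voltage. Operating points are hypothetical.

    def dynamic_power(c, v, f):
        return c * v**2 * f

    # One core at full speed (normalized units).
    single = dynamic_power(c=1.0, v=1.0, f=1.0)

    # Two cores, each at 60% frequency and 80% voltage, give 1.2x
    # the single core's throughput, assuming parallelizable work.
    dual = 2 * dynamic_power(c=1.0, v=0.8, f=0.6)

    print(f"single core: power={single:.2f}, throughput=1.0")
    print(f"dual core:   power={dual:.2f}, throughput=1.2")
    # dual power ~ 0.77: more throughput for less power.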
Some examples of multiple cores leading to vastly improved
throughput include IBM's Blue Gene supercomputer, built with two
processors per chip and scaled across a number of chips per
board, boards per platform, and platforms integrated into a
cluster, resulting in roughly 130,000 processors and the world's
fastest supercomputer. Another example is the new Cell processor,
to be used in the next-generation Sony PlayStation; it is an
application-specific processor for extremely high-speed graphics
applications.
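The roughly 130,000-processor figure is just scale-out
multiplication across the packaging hierarchy. A sketch with
approximate per-level counts (the real Blue Gene packaging
differs in detail):

    # Scale-out arithmetic: throughput comes from multiplying
    # modest per-level counts, not from faster clocks. Counts are
    # approximations for illustration.

    hierarchy = {
        "processors per chip": 2,
        "chips per board": 32,
        "boards per rack": 32,
        "racks per cluster": 64,
    }

    total = 1
    for level, count in hierarchy.items():
        total *= count
        print(f"{level}: x{count} -> {total} processors")

    # 2 * 32 * 32 * 64 = 131,072, i.e. the ~130,000 cited.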
The bottom line of all of this is that innovation drives future
performance, but blind scaling has already hit the power wall.
The preoccupation with increased clock speed will be supplanted
by application-specific metrics. Systems solutions will dominate
the industry as the application-specific computing revolution
overcomes the issues of cost and power. The costs of innovation
and next-generation manufacturing will accelerate industry
consolidation.
=================================================================