the designers say, our design can be manufactured; all of our models
and previous experience tell us so. Right, says the fab team, but
will it yield well, and at what cost and level of process complexity?
With the advent of 90-nm device technologies and especially the looming
65- and 45-nm nodes, the two sparring groups must start to speak the
same language, communicate better, and learn from each other, because
the stakes are high. This issue's roster of Hot Button experts explores
some of the critical issues surrounding DFM.
VOLK (senior director of marketing, DesignScan, KLA-Tencor):
The driver for DFM methodologies is the growing dominance of yield
loss from systematic mechanisms. This trend leads to lower initial
yields in manufacturing and lower mature yields, which tend to worsen
with each process generation, despite ongoing advances in metrology
and yield management. Systematic, pattern-related yield loss now dominates
random yield loss, and with further investigation, we find it's all
feature-limited yield, which relates back to design. That's such a
big piece of yield loss that if we don't knock it out of the equation,
we can't solve the performance issues going back to place and route.
Resolution enhancement technology (RET) teams need to verify that each design
will yield before making reticles or silicon. Engineers can't continue
to troubleshoot design issues in manufacturing, because the cost to
cycle time is too high. We also hear interest in the electrical critical
path being fed downstream, but we're pretty far from that. First,
we must take care of the functional yield, and then solve issues causing
the chip to miss binning targets. Once we correct the features that
cause systematic yield loss and degrade the performance of the chip,
maybe we can get into optimization of critical paths.
To start, manufacturers must get a handle on manufacturing process variability in order to
create a calibrated model of the variability so that information can
be translated to the design side. In the case of lithography, this
"lithography-aware design verification" involves inspection of the
design data with advanced models to determine how the device pattern
is going to perform throughout the lithography process window. To
implement this, it takes advanced lithography knowledge, along with CD-SEM
expertise and resources, to perform lithography calibration and then
tie accurate simulation models to the design for lithography process
window qualification. Electronic design automation (EDA) tools for mask synthesis require
that RET design teams do the calibration, relying on information from
manufacturing. Our observation has been that the handoff of information
coming from manufacturing is highly questionable unless you have deep
insight into lithography. Otherwise, there are a lot of false starts
on the calibration of models for process simulations.
"Design-aware process control" will be the next focus of DFM, such as identification
of hot spots, which comes out of the optical proximity correction
(OPC) tools. Hot spots are typically areas of features that could
not be decorated because of restrictions on the OPC model due to the
physical layout. These locations are fed downstream as thousands of
areas, and if you have insight into which features are going to fail,
not based on rules but on actual simulation, you can optimize the
inspection and metrology sampling to areas that will be affected first
by lithography process variations.
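The sampling optimization described above can be sketched as a simple ranking: each hot spot carries a simulated process-window margin, and inspection effort goes first to the sites predicted to fail soonest. The hot-spot names and margin values below are invented for illustration; they do not come from any real OPC tool.

```python
# Rank simulated hot spots by process-window margin and pick inspection sites.
# All hot-spot data here is fabricated for illustration.

def pick_inspection_sites(hot_spots, budget):
    """Return the `budget` hot spots with the smallest margin.

    Each hot spot is (site_id, margin_nm): margin_nm is how much focus/dose
    drift the feature tolerates before simulation predicts failure.
    """
    ranked = sorted(hot_spots, key=lambda s: s[1])  # smallest margin first
    return [site for site, _ in ranked[:budget]]

hot_spots = [
    ("via_cluster_A", 18.0),   # nm of margin before predicted failure
    ("line_end_B",     6.5),   # tightest margin -> inspect first
    ("dense_lines_C", 11.2),
    ("iso_line_D",    25.0),
]
print(pick_inspection_sites(hot_spots, budget=2))  # ['line_end_B', 'dense_lines_C']
```

With a fixed metrology budget, this kind of simulation-driven ranking replaces rule-based sampling: the sites are chosen because the model says they fail first, not because a rule flags them.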
In summary, DFM cannot be a single-point solution. The linking
of manufacturing process with metrology and inspection results to
upstream design is the key to success. The critical first step is
creating a mechanism for lithography-aware design verification by
providing accurate and painless lithography models for design. For
design features that are still limiting a manufacturing process, the
implementation of design-aware process control will provide an economical
approach to monitoring manufacturing process variability and yield.
DRAINA (associate director, International Sematech Manufacturing Initiative):
The integration of design with manufacturing has been a staple in
many industries, but it's been something of a latecomer to the semiconductor
world. Until fairly recently, design generally drove processing, but
many new factors—such as increasing variability, mask costs,
exploding data availability, and litho hardware limitations—are
posing significant challenges for IC designers. One response has been
the emergence of DFM, which aims to address productivity early in
the design cycle, so that manufacturing issues and concerns are integrated
with design to obtain more-producible products.
My experience with DFM has been in the area of process equipment manufacturing
readiness and robustness. My recent experience as senior manager of
equipment engineering at IBM's 300-mm facility in East Fishkill, NY
(where I was responsible for fab equipment ramp-up methodology, implementation,
and process equipment productivity) has demonstrated how important
it is for R&D and design to strongly consider a process design's
manufacturing affordability. Here are some examples:
Wafer processing time and its effect on equipment throughput.
Consideration should be given to etch or deposition rates, implant
time, furnace residence time, CMP removal rates, added recipe overhead,
additional purging steps for cooldown or particle control, extra rinse
steps, and the like. These factors can add cost and complexity. Lower-than-anticipated
throughputs cause capital costs to increase because of the need for
more equipment and fab space. Accordingly, cost of ownership (COO)
must be considered before commitments are made to manufacturing implementation.
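The throughput effect can be made concrete with back-of-the-envelope arithmetic: a lower wafers-per-hour rate inflates the number of tools, and hence the capital cost, needed to meet a fab's wafer-start target. All figures below (target, uptime, tool price) are invented for illustration.

```python
import math

def tools_needed(wafer_starts_per_week, wph, uptime=0.85, hours_per_week=168):
    """Number of tools required to hit a weekly wafer-start target."""
    capacity_per_tool = wph * hours_per_week * uptime  # wafers/week per tool
    return math.ceil(wafer_starts_per_week / capacity_per_tool)

# Hypothetical scenario: added recipe overhead drops throughput from 60 to 45 wph.
target = 50_000       # wafer starts per week (assumed)
tool_price = 4.0e6    # dollars per tool (assumed)

before = tools_needed(target, wph=60)   # 6 tools
after = tools_needed(target, wph=45)    # 8 tools
extra_capital = (after - before) * tool_price
print(before, after, extra_capital)     # 6 8 8000000.0
```

The point of the exercise is the COO argument in the text: a seemingly modest recipe change ripples into two extra tools and their fab space, which is why throughput must be weighed before manufacturing commitments are made.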
Chemical costs and increased equipment complexity. The introduction
of a significant toxic, pyrophoric, or flammable gas to a process
recipe will cause total capital equipment costs to rise at least 10%,
because of the added expense of double-wall stainless pipe delivery
systems, additional scrubbers, safety interlocks, and alarm systems.
The increase in equipment complexity means that more maintenance
skills and headcount will be required to service the system. Expensive
gases and chemicals, such as NF3, silane, helium, some slurries, and
photoresists, should be used minimally or replaced with cheaper alternatives.
Chamber recovery to qualification specs after being opened to
atmosphere or wet cleaned. Attention to this area is needed for
any new vacuum process implementation. In particular, RTP gate hot
processes and some insulator processes can be very sensitive to moisture
that can getter process reactants. Defining these process recipes
in conjunction with chamber-seasoning parameters can be critical since
a poor relationship can result in seasoning times taking one to several
days. Many etch and deposition processes require seasoning after the
chamber is exposed to atmosphere or wet clean, so seasoning should
also be included in the process design here. The periodicity of chamber
opening or reactor wet cleaning must be considered. Pursuing changes
in this area while running in manufacturing can be especially troublesome,
since equipment can be sidelined for extended periods of time.
Consumable parts costs and frequency of exchange or wearout.
Any new process may drive different consumables or higher wearout
rates for elements such as electrostatic chucks, "wetted"
chamber parts, scrubber systems, and liquid-delivery or gas-flow systems.
Sometimes a seemingly minor adjustment to a reactor wall temperature
or gas flow causes a higher deposit rate on chamber sidewalls, resulting
in particle performance degradation, earlier O-ring degradation, electrostatic
chuck arcing, greater frequency of wet cleaning, and longer seasoning
times. COO must be understood relative to any parts change or increase
in usage, as well as equipment downtime associated with the frequency
of parts exchange. The above are examples of areas to consider prior to making commitments
for process design implementations in a fab. The challenge for designers
is to adequately investigate these areas before making manufacturing
commitments, in order to minimize or eliminate activity in these areas
while running a manufacturing fab.
BRUNET (LFD market development manager, design to silicon division,
Mentor Graphics): The ability to accurately print the image
intended by the designer is one of the greatest challenges at the
65-nm node. This is especially true given the extensive use of RET
at this node. But modifications to a photomask are no longer sufficient
to ensure image fidelity. That's why foundries, designers, and EDA
tool providers are turning to DFM methodologies that promise improved
flows and solutions for managing yield.
Historically, design-rule checking (DRC) has acted as one of the key communication vehicles
between manufacturing and design, informing the designer of limits
imposed on them by manufacturing. Most constraints represent true
process limitations, which, if not followed, produce nonfunctioning
silicon or considerably lower yields.
Foundries have also started delivering "DFM recommended rules,"
which indicate where a design becomes easier to manufacture by adhering
to the DFM rule, rather than the minimum-spacing DRC rule. The quest
to comply with DFM recommended rules opens the door to many questions:
for example, what tools help the designer determine if DRC rules or
DFM rules have a more positive impact on yield? One way is to gather
layout statistics on the feature in question. For instance, statistical
modeling and critical-area analysis highlight not only how often an
issue, such as antennas or vias, occurs, but also in what combination
and at what level of severity it occurs.
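Critical-area analysis of the kind mentioned here can be sketched with a toy model: for parallel wires, the area in which a particle of a given size causes a short grows with defect size, and weighting that area by a defect-size distribution gives a relative failure estimate for comparing a minimum-spacing DRC rule against a relaxed DFM rule. The geometry and the classic 1/d^3 size distribution below are simplifying assumptions.

```python
def short_critical_area(defect_d, spacing, wire_len, n_gaps):
    """Critical area (um^2) for shorts between parallel wires.

    A circular defect of diameter `defect_d` shorts two wires when its
    center lands within (defect_d - spacing) of the gap centerline.
    """
    if defect_d <= spacing:
        return 0.0
    return (defect_d - spacing) * wire_len * n_gaps

def weighted_fail_rate(spacing, wire_len, n_gaps, sizes):
    # Weight critical area by an (unnormalized) 1/d^3 defect-size distribution.
    return sum(short_critical_area(d, spacing, wire_len, n_gaps) / d**3
               for d in sizes)

sizes = [0.1 * k for k in range(1, 21)]              # defect diameters, um
tight = weighted_fail_rate(0.14, 100.0, 50, sizes)   # minimum-spacing DRC rule
relaxed = weighted_fail_rate(0.20, 100.0, 50, sizes) # DFM recommended rule
print(tight > relaxed)  # True: wider spacing shrinks the critical area
```

Run over an actual layout's extracted gap statistics, this comparison is exactly the kind of evidence that tells a designer whether the DFM rule buys real yield on a given feature.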
While DFM-recommended rules go a long way toward mitigating some yield issues,
at 65 nm there is an even greater one: Designers also need to assess
how physical layout, especially at the cell library level, can be
done so that feature fidelity is preserved across the manufacturing
process window, not just at nominal dose and focus.
Recent developments in lithography-friendly design (LFD) methodology are
making the goal of modeling process variability to improve layout
robustness a reality. With an LFD "process kit" that encompasses RET
recipes, process models, and parameterizable rules, designers can
run simulations to see how a layout will print across the lithographic
process window. Simulation results can include recommendations about
areas where modifications to the layout would most likely improve
yield, with modifications made in the native layout environment. While
the kit contains all the data pertaining to pattern transfer, what
the designer sees and works in is very much like a DRC environment.
As designers become used to working in LFD mode, they will learn what
design elements respond favorably to manufacturing processes, and,
in time, naturally achieve an "LFD clean" design.
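The process-window check that LFD performs can be caricatured with a toy printed-CD model: evaluate a feature's simulated CD at the corners of a focus-exposure window and flag it if any corner leaves the allowed tolerance. The quadratic CD model and every number below are assumptions for illustration only, not a real lithography model.

```python
import itertools

def printed_cd(target_cd, focus_um, dose_pct):
    """Toy model: CD shrinks quadratically with defocus, linearly with dose."""
    return target_cd - 120.0 * focus_um**2 - 0.8 * dose_pct

def prints_across_window(target_cd, tol_nm, focus_range, dose_range):
    """True if the feature stays within tolerance at every window corner."""
    for f, d in itertools.product(focus_range, dose_range):
        if abs(printed_cd(target_cd, f, d) - target_cd) > tol_nm:
            return False
    return True

# 65-nm line, +/-10% CD tolerance, +/-0.15 um focus, +/-3% dose (all assumed).
ok = prints_across_window(65.0, 6.5, (-0.15, 0.0, 0.15), (-3.0, 0.0, 3.0))
print(ok)  # True at this tolerance; tighten tol_nm to 4.0 and it fails
```

This is the crux of the argument above: a layout that looks fine at nominal dose and focus can still fail at a window corner, so verification has to sweep the window, not just the center.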
BALASINSKI (engineering manager, process technology R&D, Cypress
Semiconductor): If I were to look into a crystal ball, I would start with the
recent Photomask Japan conference, which provided a perspective on what solutions
may be in store for design for yield (DFY) challenges. For lithography,
there appear to be two trends: one for the immediate term and one
for the ultimate term.
The immediate trend is sweaty. It is not necessarily cheap but definitely
less expensive than what's coming next. It requires a lot of effort
from design, computer-aided design (CAD), and technology to benefit
from the low-k1 resolution factor and stepper numerical aperture (NA)
stretched to the limits. Model-based 193-nm OPC for 45-nm random logic
has every chance to skyrocket mask complexity to multiple gigabytes
and data processing time to many days. Lithography engineers will
be busy optimizing exposure models and harassing design engineers
about layout best practices to enable half- or quarter-pitch printability,
slowing down data delivery, but making EDA engineers work twice as
hard on new flavors of scatter bars. The reduction of unnecessary
layout notches, definition of forbidden pitches and orientations,
and addition of enclosures for line-end vias spiral up within the
existing methodology, but to help contain the cost, the expertise would
eventually transfer to Asia.
But this era should not last for long. The IC industry leaders are tired
of paying millions for upgrades in stepper NA that are only good for
one generation. They want to leverage their resources toward extreme
UV, direct write, or imprint. They are not ecstatic about these choices:
the throughput of the new tools may be half that of the good old 193-nm
stepper, nor is their risk low. Moore's Law is alive and kicking,
if one looks at the International Technology Roadmap for Semiconductors.
They want the new lithography ready in a few years. This is a fantastic
challenge for the stepper vendors.
My crystal ball tells me it is going to be EUV, but it does not really
matter. Any solution should trash the existing OPCs. The IC industry
would happily trade the software rigmarole for the hardware fix. The
new litho tool would be the Noah's ark, which allows the IC companies
that can afford the ticket to sail over the sea of OPC solutions.
Masks would become less complex, if not less expensive. But many of
the great EDA engineers may need to find new jobs. Perhaps they can
help the designers, who are furious about litho-oriented DFM, focus
on its other aspects (pattern density, wire spreading, or doubling
vias for reliability) that are closer to their core expertise, such
as timing analysis or leakage reduction. The resolution of these issues
does not represent a fundamental technical challenge but does require
some diligence, programming skills, and keeping an eye on one's return
on investment. What about smaller companies? They should expand existing OPC expertise
and improve its cost-effectiveness.
When they also acquire EUV, they should be able to offer a competitive
solution to the multiple ASIC and R&D centers in search of innovation.
If at least a few chosen ones have leading-edge litho tools, this
will allow others to avoid or delay investing big capital dollars.
PRAMANIK (director of DFM solutions, Synopsys) and TERRY MA (director,
product marketing for DFM solutions, Synopsys): A key requirement
for DFM is a mechanism to incorporate process information that affects
the functionality and yield of designs into the flow so that the target
yield at the start of mass production can be attained quickly. With
shrinking process tolerances, however, it is becoming more difficult
to achieve the yield goal and, at the same time, meet time-to-market
targets. At the 90-nm and subsequent technology nodes, parametric yield loss becomes
a significant part of overall yield loss because of the sensitivity
of design parameters (frequency, power, etc.) to process variations
(gate oxide thickness, gate critical dimension, halo implant dose,
etc.). To solve this problem, a bidirectional link between design
and manufacturing is needed. Design must account for manufacturing
variations in a way that does not sacrifice the performance advantages
provided by the new technology, while manufacturing must incorporate
design constraints into the overall process control structure.
Technology computer-aided design (TCAD) provides the bidirectional link between
design and manufacturing because it offers a predictive framework
for correlating design parameter sensitivity with process variability.
TCAD complements silicon data with accurate process and device simulations
based on calibrated models. Extensive TCAD simulations with calibrated
models allow the user to capture the relationships between individual
process parameters and key device or even design characteristics and
cast them into efficient and fast mathematical models called process
compact models (PCMs). PCMs provide a robust way of transferring process
and device information into both design and manufacturing.
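A process compact model can be sketched as a small response surface extracted from TCAD-style simulations: here, a first-order model mapping gate CD and halo dose to a leakage metric, built from finite differences around the nominal point. The `tcad_leakage` stand-in function, its coefficients, and all units are fabricated for illustration; real PCMs are far richer.

```python
def tcad_leakage(cd_nm, halo_dose):
    """Stand-in for a calibrated TCAD simulation (entirely fabricated).

    Smaller gate CD raises leakage; higher halo dose lowers it (assumed signs).
    """
    return 100.0 - 1.5 * (cd_nm - 45.0) - 8.0 * (halo_dose - 1.0)

def fit_pcm(center_cd, center_dose, d_cd=1.0, d_dose=0.05):
    """Fit a first-order PCM by central finite differences around nominal."""
    base = tcad_leakage(center_cd, center_dose)
    s_cd = (tcad_leakage(center_cd + d_cd, center_dose)
            - tcad_leakage(center_cd - d_cd, center_dose)) / (2 * d_cd)
    s_dose = (tcad_leakage(center_cd, center_dose + d_dose)
              - tcad_leakage(center_cd, center_dose - d_dose)) / (2 * d_dose)
    def pcm(cd_nm, halo_dose):
        # Fast analytic surrogate: no TCAD run needed at evaluation time.
        return base + s_cd * (cd_nm - center_cd) + s_dose * (halo_dose - center_dose)
    return pcm, s_cd, s_dose

pcm, s_cd, s_dose = fit_pcm(45.0, 1.0)
print(round(s_cd, 3), round(s_dose, 3))  # -1.5 -8.0 (extracted sensitivities)
print(round(pcm(44.0, 1.1), 3))          # 100.7, evaluated in microseconds
```

The expensive simulations happen once, up front; the resulting PCM is cheap enough to embed in design tools and process-control loops, which is exactly the transfer mechanism described above.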
In the design flow, PCMs can be incorporated into timing analysis and
physical verification tools to determine the sensitivity of key circuit
elements to process variations. In manufacturing, PCM enables advanced
process control methods that can reduce the overall performance variation
of a design caused by the random statistical variations of individual
process parameters. The methodology involves feeding CD, gate oxide, and other in-line metrology
measurements into the PCMs to determine the value of subsequent process
parameters. For example, if the final gate CD is at the lower end
of the specifications, with PCMs the user can calculate the change
in halo implant dose needed to keep the off-state leakage current
within limits while keeping the on currents in specification. It is
also possible to distribute the change among several different process
parameters, such as dose, implant angle, and anneal temperature. The
net result is an improvement in parametric product yield for specific
designs without the manufacturing engineer having to understand the details of the design.
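The gate-CD feed-forward example can be sketched by inverting such a first-order PCM: given a measured CD at the low end of spec, solve for the halo-dose shift that cancels the predicted leakage shift. The sensitivities, signs, and units below are invented assumptions, not real process data.

```python
# Invert a first-order PCM: delta_leak = S_CD * delta_cd + S_DOSE * delta_dose.
# Setting the total to zero gives the compensating dose move.
# All sensitivities are illustrative, not calibrated values.

S_CD = -1.5     # leakage change per nm of gate CD (assumed: smaller CD leaks more)
S_DOSE = -8.0   # leakage change per halo dose unit (assumed: more halo leaks less)

def halo_correction(measured_cd, nominal_cd):
    """Halo-dose shift that cancels the leakage shift caused by a CD error."""
    delta_leak = S_CD * (measured_cd - nominal_cd)
    return -delta_leak / S_DOSE

# Gate CD came out 2 nm below nominal -> predicted leakage rises by 3 units;
# compensate by raising the halo dose 0.375 units.
print(halo_correction(43.0, nominal_cd=45.0))  # 0.375
```

As the text notes, the same zero-sum equation can be solved for several knobs at once (dose, implant angle, anneal temperature), splitting the correction among them instead of loading it all onto the halo implant.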