
Advanced Process/Equipment Control

Implementing EDA to improve equipment performance and fab productivity

Alan Weber, Alan Weber & Associates

The adoption of the data collection standards known as Interface A will provide semiconductor manufacturers with massive amounts of detailed equipment and process data, enabling them to improve tool performance and productivity. Written for a broad audience that includes operations managers, process engineers, factory automation specialists, equipment suppliers, and software vendors, this article coincides with important advances in the implementation of Interface A. First, the Interface A specifications are complete and stable, implementations have been under way for close to 24 months, and several real customer pilot projects are now in progress. As a result, Interface A delivery requirements are appearing in the purchase specifications of leading wafer fabs. Technical training and management seminars on implementing Interface A are becoming available. In addition, commercial software packages and basic test tools are being offered so that OEMs and fab customers do not have to develop them in-house.

In short, as all the important pieces fall into place, it is time to take a serious look at the manufacturing applications and potential benefits of Interface A technology.

Highlights of EDA Interface A

The new equipment data acquisition (EDA) standards differ in various ways from the current-generation ones. First, they are based on mainstream communications technologies, including Web services, simple object access protocol (SOAP), and extensible markup language (XML). Second, in addition to providing the same kind of factory control information as the SECS/GEM interface, the new standards can support the collection of detailed sensor and operational data at the bandwidths needed for on-line monitoring and diagnostics (e.g., 1000 scalar values per tool and a total of 10,000 samples per second). Third, the standards include “self-describing” interfaces that may be queried at run time, enabling an application to discover the structure of a tool and the data it makes available. Moreover, since EDA supports multiple simultaneous client applications that may reside anywhere on the factory network (or beyond), it provides security features that determine which clients can be connected and which services and information are accessible to them.
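To make the run-time discovery idea concrete, the following sketch builds a SOAP request of the general shape an EDA client might send to a self-describing interface. The message name, namespace, and element names are invented for illustration; the actual bindings are defined in the SEMI E125 and E132 specifications.

```python
import xml.etree.ElementTree as ET

# The namespace and message names below are placeholders; the real SOAP
# bindings are defined in the SEMI E125/E132 specifications.
SOAP_NS = "http://schemas.xmlsoap.org/soap/envelope/"
EDA_NS = "urn:example:eda"  # hypothetical namespace for illustration

def build_self_description_request(equipment_id: str) -> bytes:
    """Build a hypothetical E125-style 'describe yourself' SOAP request."""
    ET.register_namespace("soap", SOAP_NS)
    envelope = ET.Element(f"{{{SOAP_NS}}}Envelope")
    body = ET.SubElement(envelope, f"{{{SOAP_NS}}}Body")
    req = ET.SubElement(body, f"{{{EDA_NS}}}GetEquipmentStructure")
    ET.SubElement(req, f"{{{EDA_NS}}}EquipmentId").text = equipment_id
    return ET.tostring(envelope, encoding="utf-8")

print(build_self_description_request("ETCH-01").decode("utf-8"))
```

The same Web-services plumbing carries the E132 session-establishment and E134 data-collection messages, which is what lets multiple clients anywhere on the network talk to one tool.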

The EDA suite consists of four distinct SEMI standards and their protocol bindings:

• E120, Common Equipment Model, defines a common representation of the physical structure of the equipment by using common attributes and terminologies.

• E125, Equipment Self-Description, defines an interface to retrieve a description of the physical equipment structure, available data items, events, and exceptions (external tool view).

• E132, Authentication and Authorization, defines a means of establishing an authenticated session between the data consumer and the equipment and defines an authorization mechanism to control access to equipment services.

• E134, Data Collection Management, defines mechanisms for collecting and reporting equipment data items, events, and exceptions.
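As a rough illustration of how E120 and E125 divide the labor, the sketch below models equipment as a tree of named nodes (the E120 idea) and flattens it into fully qualified data-item names of the kind a client could discover through self-description (the E125 idea). Class and field names are invented, not taken from the standards' schemas.

```python
from dataclasses import dataclass, field

# Toy rendering of the E120 idea: equipment described as a tree of named
# nodes, each exposing data items. Names are illustrative only.

@dataclass
class Node:
    name: str
    data_items: list[str] = field(default_factory=list)
    children: list["Node"] = field(default_factory=list)

def list_data_sources(node: Node, prefix: str = "") -> list[str]:
    """Flatten the tree into fully qualified data-item names, roughly
    what an E125 self-description query lets a client discover."""
    path = f"{prefix}/{node.name}" if prefix else node.name
    items = [f"{path}.{d}" for d in node.data_items]
    for child in node.children:
        items.extend(list_data_sources(child, path))
    return items

tool = Node("Etcher", children=[
    Node("ChamberA", ["Pressure", "RFPower"], [Node("Chuck", ["Temperature"])]),
    Node("ChamberB", ["Pressure"]),
])
print(list_data_sources(tool))
```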

As illustrated in Figure 1, the standards are used together. While this diagram shows a single client/equipment connection, multiple clients may be communicating with a tool at any given time.

Figure 1: Integrated EDA scenario shows how the SEMI standards are used together. (CHART COURTESY OF STG)

Challenges for Equipment Suppliers

The range of new technologies specified by the Interface A standards, and especially industry performance and data-quality expectations, preclude simple “bolt-on” software solutions such as a GEM → EDA message translator layered atop an existing GEM interface. Rather, as shown in Figure 2, many embedded control systems will have to be redesigned so that the EDA interface can connect directly to the internal data sources (sensors, actuators, real-time process parameter values). This move obviously will not happen without significant investment of time and resources. Hence, equipment suppliers and their customers must align their development roadmaps carefully.

Figure 2: Interface A performance and architecture requirements. With the new interface, many embedded control systems will have to be redesigned so that the EDA interface can connect directly to the internal data sources. (CHART COURTESY OF INTERNATIONAL SEMATECH MANUFACTURING INITIATIVE)

Tool suppliers will have to decide how much data to provide via Interface A, since this question raises both technical and business issues. From a technical standpoint, suppliers must monitor their tools’ behavior and notify the application client(s) when an unacceptable performance threshold has been approached. From a business standpoint, suppliers may consider some of the data that customers will want to collect as strictly proprietary and not expose them in their external tool models. Again, this issue must be negotiated early on to avoid conflicts.

Making Use of EDA Data: EES and APC Applications

Adoption of the EDA Interface A standards will provide process engineers with an order of magnitude more equipment and process data than ever before. But unless those data are converted into a form that enables engineers to initiate beneficial manufacturing action, the only people who will profit from the shift will be disk-drive suppliers. Hence, a new category of applications for dealing with the massive quantities of information has been developed: equipment engineering systems (EES).

Process and Production Monitoring. One of the first things process engineers will do with all the data is look at it. However, that is not as simple as it sounds. The granularity and volume of data will tax the capabilities of traditional data management and visualization tools to support even the simplest anticipated use cases. Moreover, the need to associate the relevant “context data” with trace data will add another level of complexity.
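The context-association problem can be stated simply: given low-rate context records (recipe step, wafer ID) and high-rate trace samples, attach to each sample the context in effect at its timestamp. The sketch below does this with a binary search; field names and data are invented.

```python
from bisect import bisect_right

# Sketch: attach the most recent context record (recipe step, wafer ID)
# to each high-rate trace sample by timestamp. Data are invented.

context = [  # (timestamp, context info), sorted by time
    (0.0, {"step": "Pump down", "wafer": "W01"}),
    (5.0, {"step": "Main etch", "wafer": "W01"}),
    (12.0, {"step": "Overetch", "wafer": "W01"}),
]
ctx_times = [t for t, _ in context]

def context_for(t: float) -> dict:
    """Return the context record in effect at trace time t."""
    i = bisect_right(ctx_times, t) - 1
    return context[max(i, 0)][1]

trace = [(6.2, 99.7), (13.1, 101.2)]  # (time, chamber pressure)
for t, value in trace:
    print(t, value, context_for(t)["step"])
```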

Equipment and process engineering use cases might include:

• Comparing the timing sequences of key chamber-level sensor values and events for supposedly identical tools running the same recipe. Differences in the relationship of these signals may point to problems in specific tool components.

• Looking for spurious signals that do not appear to affect wafer quality but that may eventually perturb the process if left unchecked. An example of this situation is depicted in Figure 3, which shows spikes in the measured chamber vacuum on either side of the processing window.

• Comparing trace data for a set of parameters in a given process from different wafers, lots, time periods, or chambers to identify sources of variations.

• Correlating any or all of the above data types with data from sources such as existing failure analysis systems, yield management systems, or manufacturing execution systems to understand their relationship to important factory metrics, including yield, process capability, and productivity.
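The first use case above, chamber matching, can be caricatured in a few lines: summarize the same parameter for each "identical" chamber and flag any chamber that sits far from the group. The data and the one-sigma threshold are purely illustrative.

```python
from statistics import mean, stdev

# Toy chamber-matching check. Data and threshold are invented; a real
# system would compare full trace shapes and event timing, not one mean.

traces = {
    "CH-A": [100.1, 100.3, 99.8, 100.0],
    "CH-B": [100.2, 99.9, 100.1, 100.0],
    "CH-C": [103.5, 103.9, 103.2, 103.6],  # drifting chamber
}

chamber_means = {ch: mean(vals) for ch, vals in traces.items()}
grand = mean(chamber_means.values())
spread = stdev(chamber_means.values())

# Flag chambers more than one standard deviation from the group average.
outliers = [ch for ch, m in chamber_means.items() if abs(m - grand) > spread]
print("chamber means:", chamber_means)
print("outliers:", outliers)
```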

Figure 3: A trace-data chart shows spikes in the measured chamber vacuum on either side of the processing window. (SCREEN CAPTURE COURTESY OF WONDERWARE)

Fortunately, because semiconductor manufacturing is not the first industry to have these requirements, commercial software systems are available to address them on a fabwide scale.

Knowledge Discovery and Analysis. The new data will undoubtedly provide a wealth of knowledge that can be applied to fault detection and classification (FDC), run-to-run (R2R) control, and other advanced manufacturing applications. However, in analyzing the data, it is important to follow a well-defined troubleshooting methodology to avoid finding only what one is looking for. In other words, engineers must be able to recognize phenomena that perhaps have been obscured by the way that problems have been posed until now.

The major steps in a data-to-action methodology are shown in Figure 4. First, data preparation is performed, which includes filtering, validation, summarization, and other basic transformations that make the data more useful in subsequent steps. In many cases, this process can be automated and directly linked to the data collection process. Second, data visualization takes place, as illustrated in Figure 5. Data visualization is supported by a set of tools that help engineers discover characteristics and relationships in the data. This procedure leads to a modeling step in which key data features are linked directly to the desired objective. If these links are sufficiently rich, genuine knowledge extraction can occur. Combining newly discovered and existing knowledge into software modules called action objects and deploying them in an execution environment is the final step in the process.
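The data-preparation step can be sketched as a small pipeline: filter out physically implausible readings, then reduce the trace to the summary features used by the later modeling steps. Stage names and limits here are invented.

```python
# Minimal sketch of the "data preparation" front end of the
# data-to-action flow. The stages and limits are illustrative.

RAW = [100.2, 100.1, -1.0, 100.4, 999.9, 100.0]  # -1.0, 999.9: glitches

def validate(samples, lo=0.0, hi=500.0):
    """Drop readings outside the physically plausible range."""
    return [s for s in samples if lo <= s <= hi]

def summarize(samples):
    """Reduce a trace segment to features for later modeling steps."""
    return {"n": len(samples),
            "mean": sum(samples) / len(samples),
            "min": min(samples), "max": max(samples)}

features = summarize(validate(RAW))
print(features)
```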

Figure 4: The major steps that are required in a data-to-action methodology. (CHART COURTESY OF CSENSE)

FDC. EDA data will initially have the greatest impact on the industry’s FDC applications, since these applications thrive on detailed equipment data regardless of the approach used. Moreover, since most tools’ embedded control systems cannot support high-speed trace-data collection reliably while performing their primary function of processing wafers, most FDC until now has been performed on a per-lot basis after the process has been completed. Interface A promises to change all that.

Figure 5: Condition-based parameter comparison. Data visualization is supported by a set of tools that help engineers discover characteristics and relationships in the data. (SCREEN CAPTURE COURTESY OF CSENSE)

While pattern-matching algorithms have been used in fault detection for some time, they rely on large amounts of trace data to be sufficiently robust. In fact, some of the most novel process-specific FDC systems use a sensor to fingerprint a tool and compare real-time traces with libraries of stored patterns to determine tool health. However, such signature-analysis techniques have not come into broad use. The increased scope and availability of both historical and real-time EDA data will breathe new life into these approaches.
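A toy version of the signature-analysis idea: compare an incoming trace against a small library of stored fingerprints and report the closest match. Production systems use far longer traces and more robust distance measures; the library and data here are invented.

```python
import math

# Sketch of signature analysis via nearest-fingerprint matching.
# Library contents and trace data are invented for illustration.

LIBRARY = {
    "healthy":        [1.0, 2.0, 3.0, 2.0, 1.0],
    "worn electrode": [1.0, 2.4, 3.6, 2.4, 1.2],
    "leaky valve":    [1.5, 2.0, 2.5, 2.0, 1.5],
}

def distance(a, b):
    """Euclidean distance between two equal-length traces."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def classify(trace):
    """Return the library signature closest to the observed trace."""
    return min(LIBRARY, key=lambda name: distance(trace, LIBRARY[name]))

print(classify([1.05, 2.1, 2.9, 2.05, 1.0]))
```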

Algorithms based on the use of multivariate statistics will also get a boost from the implementation of EDA Interface A. While model-based algorithms until now have been constrained by the limits placed on the collection of production trace data, they have another problem: Since they are purely statistical and do not map directly to the physical phenomena being observed, they are very sensitive to a tool’s specific operating point and production context. This means that in a multiproduct fab, thousands of parameter sets may need to be managed by a production FDC system. By analyzing large volumes of detailed process data collected via EDA, it may be possible to add first-principles-based components to these model-based algorithms, reducing their number, simplifying their management, and tightening their spec limits while improving the false-alarm rate.
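As a crude stand-in for the multivariate approach, the sketch below scores each run by the sum of squared z-scores of its parameters against known-good history: a simplification of Hotelling's T-squared that ignores parameter correlations. All data and the alarm limit are invented.

```python
from statistics import mean, stdev

# Simplified multivariate fault score: sum of squared z-scores against
# a "golden" reference set. Ignores cross-parameter correlation, which
# a real T-squared statistic would capture. Data are invented.

REFERENCE = {  # per-parameter history from known-good runs
    "pressure": [100.0, 100.2, 99.8, 100.1, 99.9],
    "rf_power": [500.0, 501.0, 499.5, 500.5, 499.0],
}
STATS = {p: (mean(v), stdev(v)) for p, v in REFERENCE.items()}

def fault_score(run: dict) -> float:
    """Higher scores mean the run sits farther from known-good behavior."""
    return sum(((run[p] - m) / s) ** 2 for p, (m, s) in STATS.items())

good_run = {"pressure": 100.1, "rf_power": 500.2}
bad_run = {"pressure": 102.5, "rf_power": 504.0}
print(fault_score(good_run), fault_score(bad_run))
```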

Enhanced R2R Control. Feedforward and feedback control systems at all levels (wafer, lot, batch) can be enhanced with EDA data. While the industry has come a long way since the first advanced process control (APC) production applications in the early 1990s, the basic control scheme has not changed much. Most APC systems are based on linear models that relate product (or wafer) target characteristics (film thickness, CD, registration error, etc.) to the process tool’s recipe parameter settings (time, temperature, pressure, radio-frequency power, stage settings, etc.). These models are applied by running the process, measuring the result, calculating the error, and adjusting the input parameters to compensate for the error—a version of “steering by the wake.”
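The measure-the-result, calculate-the-error, adjust-the-inputs loop just described can be sketched as a simple EWMA feedback controller for a hypothetical deposition step whose model is thickness = rate × time. All numbers are illustrative.

```python
# EWMA run-to-run controller sketch for a linear process model
# thickness = rate * time. Targets and rates are invented.

TARGET = 1000.0       # desired film thickness (angstroms)
LAMBDA = 0.3          # EWMA smoothing weight
rate_estimate = 10.0  # current belief: angstroms per second

def next_recipe_time() -> float:
    """Feedforward: pick deposition time from the current rate estimate."""
    return TARGET / rate_estimate

def update(measured_thickness: float, used_time: float) -> None:
    """Feedback: blend the observed rate into the running estimate."""
    global rate_estimate
    observed_rate = measured_thickness / used_time
    rate_estimate = LAMBDA * observed_rate + (1 - LAMBDA) * rate_estimate

# One control cycle: the tool's true rate has drifted to 10.5 A/s.
t = next_recipe_time()  # 100.0 s under the old estimate
update(measured_thickness=10.5 * t, used_time=t)
print(round(rate_estimate, 3), round(next_recipe_time(), 2))
```

This is exactly the "steering by the wake" scheme: the correction always lags the drift by one run, which is why the article argues for models grounded in process knowledge.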

This approach is running out of steam for the most advanced processes and is also susceptible to the same model-data management challenges faced by FDC. Both of these issues can be addressed by including real knowledge of the process in the control algorithms, as is done in many other industries in which the useful lifetime of a given process is much longer than in the IC industry. For example, multiple-input, multiple-output (MIMO) control systems incorporate more than just metrology results in feedback control calculations, but the integration of such systems with the necessary data sources is still a significant, and very site-specific, challenge.

EDA can support much-more-robust tool/process characterization methods than current methods, which provide fewer data. The result is a deeper understanding of real process behavior, sources and types of variation, and opportunities for tighter control. Moreover, the ability to exercise greater control while running production lots, rather than having to wait for the results of engineering runs, will accelerate learning cycles and shorten the time to market of new control applications.

New APC Concepts. Advanced Equipment Control/Advanced Process Control conferences have referred to “virtual metrology,” a technique that correlates metrology results for a given process step (i.e., measured features of a processed wafer) with detailed information collected from the process tool to construct a predictive model. If the prediction accuracy is sufficiently high, say 95% or better, the actual process variables are then used to calculate an in situ virtual metrology value without performing an actual measurement. This value is then used in feedforward/feedback APC calculations to adjust recipe parameters for subsequent wafers or lots. Actual wafer measurements using metrology tools are performed only to keep the predictive model calibrated.

It is not clear how broadly this technique can be applied across the fab. Nevertheless, it will be a significant consumer of detailed process information for several years to come as process engineers evaluate the possibilities.
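In miniature, virtual metrology amounts to fitting a predictive model from tool signals to metrology results and then using the model in place of a physical measurement. The sketch below fits a one-variable least-squares model by hand; the data and variable names are invented.

```python
# Virtual metrology in miniature: fit thickness/CD from an in situ tool
# signal, then predict instead of measuring. Data are invented.

tool_signal = [0.50, 0.55, 0.60, 0.65, 0.70]  # e.g., endpoint intensity
measured_cd = [45.1, 45.6, 46.0, 46.4, 47.0]  # metrology results (nm)

# Ordinary least squares, computed by hand (no external libraries).
n = len(tool_signal)
mx = sum(tool_signal) / n
my = sum(measured_cd) / n
slope = (sum((x - mx) * (y - my) for x, y in zip(tool_signal, measured_cd))
         / sum((x - mx) ** 2 for x in tool_signal))
intercept = my - slope * mx

def virtual_cd(signal: float) -> float:
    """Predict CD from the tool signal instead of measuring the wafer."""
    return intercept + slope * signal

print(round(virtual_cd(0.62), 2))
```

Occasional real measurements would be compared with these predictions to recalibrate the model, as the article describes.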

Tool Diagnostics and Maintenance. The availability of detailed equipment data will likely revolutionize the industry’s approach to tool diagnostics and maintenance. Prevalent time-based just-in-case maintenance scheduling schemes will be replaced with better algorithms derived from a real understanding of equipment behavior, both normal and abnormal. Since this application area is a natural extension of FDC, many of the same basic technologies will be directly applied. However, the availability of detailed tool subsystem, event, and sensor data opens the door to radically different techniques, such as those based on physical (failure modes and effects analysis) models. This, in turn, could increase the viability of solutions from other industries, benefiting semiconductor manufacturers.
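The shift from calendar-based to condition-based maintenance can be sketched as extrapolating a degrading health indicator toward its maintenance limit. The indicator, limit, and trend below are invented, and a real system would fit all observations and report confidence bounds.

```python
# Condition-based maintenance sketch: extrapolate a degrading health
# indicator (e.g., an RF-match drive level) to estimate when it will
# cross its maintenance limit. All numbers are illustrative.

LIMIT = 80.0
history = [(0, 60.0), (100, 64.0), (200, 68.0)]  # (wafer count, indicator)

# Linear trend from the first and last observations; a real system
# would fit every point and attach confidence bounds.
(x0, y0), (x1, y1) = history[0], history[-1]
rate = (y1 - y0) / (x1 - x0)  # indicator units per wafer

wafers_remaining = (LIMIT - y1) / rate
print(f"schedule maintenance within ~{wafers_remaining:.0f} wafers")
```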

The security provisions of the EDA standards enable much of this work to be done remotely (e.g., at an equipment supplier’s support center). However, operational and IP ownership hurdles will more likely dictate the rate at which these capabilities are deployed.

Preparation for EDA Implementation

Now that most semiconductor equipment suppliers have at least begun the process of implementing Interface A, fabs will soon be able to collect massive amounts of detailed tool/process data. So what’s next?

To date, Interface A has mostly been viewed from the equipment perspective. Debates over performance requirements, remote access and other security considerations, XML verbosity, SOAP binding incompatibilities, the complexity of certificate management, and other topics have been resolved for the most part.

At this point, the industry must take a serious look at “the other end of the wire”—real fab-level requirements and implementation challenges. Wafer fabs are dynamic entities, and their manufacturing systems reflect that fact. A new fab’s system architecture is a snapshot of its owner’s information technology (IT) evolution; it contains multiple generations of applications and systems technologies, and its overall control system includes various levels of software from many different sources. The introduction of any major new system technology—including EDA standards—creates a “chicken and egg” situation on a grand scale. Consequently, companies’ adoption of Interface A will involve migration steps away from their current environment.

The first step in scoping the overall fab problem is to understand the specific requirements for each process area. Since every process is unique, no single data collection approach will be applicable across the fab. Rather, implementation will depend on the complexity, rate of change, controllability, criticality, and level of understanding of each process. These factors will then determine the number of parameters to collect, the sampling rates, retention periods, and access requirements for each type of tool. In aggregate, they will determine the total bandwidth needed, the network topology, the computing power, and on-line storage requirements. A wafer fab tool breakdown and EDA data collection requirements are presented in Table I. Expected response times and access privileges for the likely consumers of equipment data and process information must also be considered, because collecting and storing the data is only half of the solution.

Table I: Wafer-fab tool breakdown and EDA data collection requirements for a fab with 32,000 wafer starts per month.
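A back-of-the-envelope sizing of the kind such a scoping exercise involves can be sketched as follows. Every input figure below is an assumption chosen for illustration, not data taken from Table I.

```python
# Rough sizing of aggregate EDA bandwidth and on-line storage for one
# process area. All input figures are assumptions for illustration.

tools = 40              # process tools in the area
params_per_tool = 1000  # scalar values collected per tool
samples_per_sec = 10    # samples per parameter per second
bytes_per_sample = 8    # timestamped scalar, rough average
retention_days = 30

bandwidth_mb_s = (tools * params_per_tool * samples_per_sec
                  * bytes_per_sample / 1e6)
storage_tb = bandwidth_mb_s * 86_400 * retention_days / 1e6

print(f"{bandwidth_mb_s:.1f} MB/s sustained, "
      f"{storage_tb:.1f} TB for {retention_days} days")
```

Even these modest assumptions yield several terabytes per process area, consistent with the article's later observation that a single area may require up to 15 terabytes of on-line storage.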

The next step will be understanding how the new data capabilities will affect existing fab infrastructures and applications. For example, how will existing GEM-based equipment integration tools have to change to accommodate Interface A, or will a complementary system fill that need? If the latter alternative is pursued, how will the two systems interact to achieve a coordinated fab-level data collection strategy? Similarly, how will the data from external sensors be collected and merged? Can the existing process databases be expanded to hold that information, or are the requirements for real-time data management fundamentally different? How will R2R and FDC applications be affected by the availability of more-detailed process data? Many of these questions cannot be addressed reliably by studies alone; pilot projects will be needed to explore alternatives and select the best approach.

Productivity of Manufacturing Information Technology

The semiconductor EES market segment will probably evolve much like the APC segment, since it uses most of the same data, but it will consist of a broader set of applications. Accordingly, because fab IT departments will be creating a great deal of custom in-house software, IT resource productivity will become paramount. The ability to quickly develop, deploy, and change EES applications is the key to deriving competitive advantage from equipment and process data as manufacturing operations evolve. For this reason, the growing use of application-platform technology will give EES developers an opportunity to build custom applications on a multi-industry software foundation. The benefits of this approach are numerous. Figure 6 shows the kind of productivity improvement that can be expected at various phases of an application’s life cycle.

Figure 6: Chart showing the type of productivity improvement that can be achieved using a platform-based approach. This example shows a 65% reduction in overall project time.

The basic requirements of EES applications are reflected in the sample EES/APC architecture shown in Figure 7. These requirements include connectivity, real-time data management, and the security services and reliability needed to support mission-critical use.

Figure 7: EES/APC application platform. The requirements include connectivity, real-time data management, and the security services and reliability needed to support mission-critical use.

To be effective, EES applications must connect to process-data sources, the people responsible for managing those processes, and a fab’s complementary manufacturing systems. They require detailed, real-time data from many different types of devices that are spread across a variety of locations and time frames, and the data should be provided using well-established industry standards and commercial products. Since EES applications play an important role in a fab’s evolution, connectivity to people who perform many different job functions—including process engineers, production managers, and application developers—is vital. Finally, because systems exist that benefit from close interaction with EES applications, an application platform should provide a wide range of standard integration options. This approach will allow manufacturers to decide whether to build or buy individual applications, enabling internal development resource personnel to focus on core areas of value creation.

The ability to store and manage massive amounts of real-time information is as important as connectivity. For example, data collection rates and retention periods for a typical 300-mm fab are expected to result in on-line storage requirements of up to 15 terabytes for a single process area. A viable EES database system must support multiple mechanisms for high-speed storage and retrieval of manufacturing information, combining relational-database features with high-performance time-series extensions to fully integrate process data with event, summary, production, and configuration information.

IP, the lifeblood of semiconductor companies, resides in the manufacturing equipment and systems that populate the fab. Protecting IP with secure systems has become critical in recent years as the number, type, and severity of security threats have grown. Security for EES applications is especially important, because EES deals with data that are simultaneously sensitive to both customers and equipment suppliers. Also, remote access for e-diagnostics applications further heightens the need for fine-grained security.

The techniques used to distribute and coordinate applications across a scalable, fab-level network must deliver high system reliability: the failure of a single node must not bring down the entire system.


Conclusion

The standards, technologies, and applications discussed in this article are just the starting point of Interface A implementation. Its adoption will likely spawn an entirely new market segment, since process engineers’ technology needs will probably increase after they gain unprecedented access to tool data. The purpose of this article is to provide an idea of the concrete steps that can be taken as Interface A comes into use and to highlight the performance and productivity benefits that the new technology will bring.


Acknowledgments

The author would like to thank partners Jim Hollister and Paul McGuire for their help in preparing this article. Thanks also go to the company’s customers and colleagues across the industry who have worked to create the standards and technologies discussed in this article.

Alan Weber is president of Alan Weber & Associates (Austin, TX), a consulting firm that specializes in semiconductor advanced process control, e-diagnostics, and related manufacturing systems technologies. He is on the board of directors of Cimetrix, a Utah-based supplier of equipment integration software products for semiconductor equipment and motion control products. Prior to forming the consulting company, Weber was VP/general manager of KLA-Tencor’s control solutions division after the division was acquired from ObjectSpace. While at ObjectSpace, he created the fab solutions division and was responsible for all aspects of the company’s semiconductor manufacturing systems business, including development of the APC framework and its eventual commercialization and deployment. Before joining ObjectSpace, he spent eight years at Sematech and was responsible for advanced manufacturing systems and related standards R&D, including the CIM Framework. Before that, Weber spent 16 years at Texas Instruments, managing a variety of technology programs in the semiconductor CAD and industrial automation/control area. He received BS and MS degrees in electrical engineering from Rice University in Houston. (Weber can be reached at 512/494-0700.)


© 2007 Tom Cheyney
All rights reserved.