
Enhancing decision-making in real-time scheduling: leveraging data and AI technology (Part 4/4)

Diving into digital twins: debunking misconceptions and unveiling the digital twin framework

Renowned influencer James Moyne joins Samantha Duchscherer in an engaging discussion exploring the importance of integrating additional information and advanced technologies like artificial intelligence (AI) into the semiconductor industry’s scheduling and dispatching process. The four-part series covers the importance and advantages of data, AI, and human involvement; the obstacles faced along the way; and the role of digital twin technology.

In this fourth and final article of the series, they discuss two key aspects of digital twins—what they are versus common misconceptions and the concept of a digital twin framework.

Definition

As we started our discussion, we laughed that there’s the Hollywood impression of what a digital twin is (the human replica portrayed in movies like “I, Robot”) and then there’s the digital twin that is a powerful tool in the real world of semiconductor manufacturing. James provided a clear definition of the type of digital twin we’re focusing on:
[Video: James Moyne explains what a digital twin is in semiconductor manufacturing]

Example of Digital Twin Technology

Sam: If a digital twin is not like what is commonly portrayed in Hollywood, could you provide a more realistic example?

James: Sure. Let’s consider an example: replicating the degradation of a lightbulb filament. If you remember, this type of bulb gets brighter just before it burns out. If I’m monitoring the temperature of that light bulb, or maybe the brightness of that filament, I can use a model in a digital twin to predict when the bulb is going to fail. In this example, we wouldn’t just model the theoretical failure of the light bulb; we’d be synchronizing that model with an actual light bulb.

Sam: So, it seems like a key aspect of digital twins is the synchronization between them and their real counterparts?

James: Yes, digital twins are synchronized with their real counterparts in a time-critical fashion. And this is important to note: it’s not necessarily real-time, but rather time-critical. For this light bulb example, maybe I need to synchronize my model with this bulb every second because I need to predict within 60 seconds of the light bulb going out.

However, moving to the semiconductor manufacturing environment, in the case of dispatching and scheduling, I just need to synchronize every time a new wafer shows up.
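
To make the lightbulb example concrete, here is a minimal Python sketch of such a twin. The FilamentTwin class, its linear degradation model, and the read_temperature stub are illustrative assumptions for this article, not a real implementation:

```python
import random  # stands in for a real sensor feed in this sketch

class FilamentTwin:
    """Minimal digital twin of a lightbulb filament (illustrative only)."""

    def __init__(self, fail_temp: float, degradation_rate: float):
        self.fail_temp = fail_temp                # temperature at which the filament breaks
        self.degradation_rate = degradation_rate  # assumed temperature rise per second
        self.temp = 0.0                           # last synchronized reading

    def sync(self, measured_temp: float) -> None:
        """Synchronize the model with the real bulb; here, once per second."""
        self.temp = measured_temp

    def seconds_to_failure(self) -> float:
        """Predict time until burnout under the assumed linear degradation model."""
        return max(0.0, (self.fail_temp - self.temp) / self.degradation_rate)

def read_temperature() -> float:
    """Hypothetical sensor read; a real twin would query actual instrumentation."""
    return 3300.0 + random.uniform(-5.0, 5.0)

twin = FilamentTwin(fail_temp=3400.0, degradation_rate=2.0)
twin.sync(read_temperature())   # time-critical, not real-time: once per second suffices
print(f"predicted burnout in ~{twin.seconds_to_failure():.0f} s")
```

The same structure applies to the fab case: only the synchronization cadence changes, from once per second to once per wafer arrival.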

Sam: What about the level of confidence in predictions? In our last blog we talked about the importance of this; how does that come into play when talking about digital twins?

James: The key outputs of the digital twin are along the lines of a prediction or detection (such as something is broken or is going to break). But, yes, just like we talked about in the last segment, it also must provide information about its accuracy. If a digital twin tells you that a light bulb filament will fail, it must indicate when. If it says 60 seconds, ±5 seconds, with a probability of 95%, you can use that information to order a replacement bulb.
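
As a sketch of what such an accuracy-bearing output might look like in code (the field names and the decision threshold below are assumptions, not a standard):

```python
from dataclasses import dataclass

@dataclass
class FailurePrediction:
    """A twin output that carries the estimate and its accuracy (illustrative)."""
    seconds_to_failure: float   # point estimate, e.g. 60 s
    tolerance_s: float          # half-width of the interval, e.g. 5 s
    confidence: float           # probability the failure lands in that interval

pred = FailurePrediction(seconds_to_failure=60.0, tolerance_s=5.0, confidence=0.95)
if pred.confidence >= 0.90 and pred.seconds_to_failure <= 120.0:
    print("order a replacement bulb now")
```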

Framework

Sam: Based on your definition of digital twins, it seems like they have been around for a long time in semiconductor manufacturing?

James: Right! For example, people don’t realize we’ve been using digital twins in run-to-run control in semiconductor manufacturing since the early 1990s, and now it’s pervasive. Run-to-run control takes a model of a particular piece of equipment and then tries to predict a recipe to improve the quality or throughput of that equipment. It is a form of model-based process control, which also uses a digital twin of the process.
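
As an illustration of the idea, here is a generic, textbook-style EWMA run-to-run controller in Python. It is not any particular vendor’s implementation, and the process model, noise, and weights are all assumed; the point is that the process model (the twin) is re-synchronized after every run:

```python
import random

def run_process(u: float, true_offset: float, slope: float = 1.0) -> float:
    """Hypothetical tool: responds to recipe u with an unknown drift plus noise."""
    return slope * u + true_offset + random.gauss(0.0, 0.1)

def ewma_r2r(target: float = 10.0, slope: float = 1.0,
             weight: float = 0.3, runs: int = 5) -> None:
    offset_est = 0.0      # the twin's current estimate of the tool disturbance
    true_offset = 0.5     # drift the controller must discover and compensate
    for k in range(runs):
        u = (target - offset_est) / slope        # recipe chosen from the twin
        y = run_process(u, true_offset, slope)   # actual run on the tool
        # synchronization step: blend the observed disturbance into the model
        offset_est = weight * (y - slope * u) + (1 - weight) * offset_est
        print(f"run {k}: recipe={u:.3f}  measured={y:.3f}")

ewma_r2r()
```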

Predictive maintenance, which has now been around for about 10 years, predicts some aspect of degradation or a failure mechanism, as in our earlier light bulb example. So predictive maintenance also uses digital twin technology. Virtual metrology is another type of digital twin: a virtual metrology twin takes measurements from the tool, tries to predict metrology values, and synchronizes with the real metrology tool to update the model.
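
A virtual metrology twin can be sketched in the same spirit. The linear model and the simple gradient update below are assumptions chosen for brevity; the synchronization step fires whenever a real metrology measurement arrives:

```python
import numpy as np

class VirtualMetrologyTwin:
    """Predict a metrology value from tool sensor traces; re-synchronize
    whenever the real metrology tool measures a wafer (illustrative)."""

    def __init__(self, n_sensors: int):
        self.w = np.zeros(n_sensors)   # linear model weights (assumed form)
        self.b = 0.0

    def predict(self, sensors: np.ndarray) -> float:
        return float(self.w @ sensors + self.b)

    def sync(self, sensors: np.ndarray, measured: float, lr: float = 0.01) -> None:
        """One gradient step pulling the model toward the real measurement."""
        err = self.predict(sensors) - measured
        self.w -= lr * err * sensors
        self.b -= lr * err

vm = VirtualMetrologyTwin(n_sensors=3)
trace = np.array([1.2, 0.7, 3.4])   # made-up sensor summary for one wafer
vm.sync(trace, measured=2.0)        # real metrology arrived: update the twin
print(vm.predict(trace))            # prediction for the next similar wafer
```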

So, as you point out, it’s important to emphasize that digital twins have been around for a very long time, and if we’re going to develop a digital twin framework for the industry, that framework has to accommodate all these existing applications.

Sam: How do you build this integrated framework?

James: Essentially, we need to be able to both reuse digital twins and combine them.

First, I’ll discuss their reuse. Let’s say I’ve got a digital twin of an Applied etch tool that’s going to predict the machine’s throughput. I could develop a model that’s applicable to all etch tools and tells me what inputs are needed, but it won’t have high accuracy. If I refine that model for one brand of Applied etch tool, or for a particular etch tool instance, the digital twin model will have more specificity. For example, it could determine what additional sensors are needed or how the equation must be modified for the specific etch tool. This is what we call a generalization hierarchy.
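
A class hierarchy is one natural way to picture this generalization hierarchy. In the hypothetical sketch below, the generic twin is broadly reusable but crude, while the subclass for one tool instance declares the extra sensor it needs and modifies the equation; all names and numbers are made up:

```python
class EtchThroughputTwin:
    """Generic etch-tool throughput model: broadly reusable, lower accuracy."""

    def predict_throughput(self, wafers_queued: int) -> float:
        return wafers_queued / 2.0   # crude generic rate; numbers are made up

class Tool123ThroughputTwin(EtchThroughputTwin):
    """Refinement for one specific tool instance: it names the extra sensor
    it needs and adjusts the generic equation (all values illustrative)."""

    def __init__(self, read_chamber_temp):
        self.read_chamber_temp = read_chamber_temp   # additional required input

    def predict_throughput(self, wafers_queued: int) -> float:
        generic = super().predict_throughput(wafers_queued)
        correction = 1.0 - 0.001 * (self.read_chamber_temp() - 65.0)
        return generic * correction

specific = Tool123ThroughputTwin(read_chamber_temp=lambda: 70.0)
print(specific.predict_throughput(8))
```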

Combining digital twins is the other piece of the puzzle, and it is particularly beneficial for scheduling and dispatch. Let’s say we have several existing digital twins: a rule-based scheduling and dispatch digital twin that tells us when to schedule different wafers at different tools, run-to-run control digital twins indicating the quality of each tool that I’m sending wafers to, and a predictive maintenance digital twin that tells me when particular tools might fail in the future and with what probability. If I then had a scheduling and dispatch digital twin that aggregates information from these twins, such as the quality of production of all my tools and when and with what probability they are going to fail, I could build a better scheduling and dispatch solution. That’s called the aggregation of digital twins.
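
A toy sketch of this aggregation might look like the following; the twin interfaces, the stub values, and the scoring rule are all illustrative assumptions, not a real dispatch policy:

```python
class QualityTwinStub:
    """Stand-in for a run-to-run control twin (values are made up)."""
    def predict_quality(self, tool_id: str) -> float:
        return {"etch_A": 0.95, "etch_B": 0.90}[tool_id]

class PdMTwinStub:
    """Stand-in for a predictive-maintenance twin (values are made up)."""
    def failure_probability(self, tool_id: str) -> float:
        return {"etch_A": 0.30, "etch_B": 0.02}[tool_id]

def dispatch_score(tool_id, quality_twin, pm_twin) -> float:
    """Aggregate the twins' outputs into one scheduling score (illustrative)."""
    quality = quality_twin.predict_quality(tool_id)
    p_fail = pm_twin.failure_probability(tool_id)
    return quality * (1.0 - p_fail)   # penalize tools likely to fail soon

tools = ["etch_A", "etch_B"]
best = max(tools, key=lambda t: dispatch_score(t, QualityTwinStub(), PdMTwinStub()))
print(f"dispatch next wafer to {best}")   # etch_B wins despite lower quality
```

Even this crude score captures the point: the tool with slightly lower quality wins the dispatch decision because it is far less likely to fail mid-production.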

Sam: Machine learning and AI must benefit from digital twins, right?

James: Yes! Developing common interfaces, common ways in which these models can talk to each other, creates a playground for building so many applications.

Digital twins can take machine learning models from different components and bring them together to create a much better system.
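
One way to picture such a common interface is as a Python Protocol; the method names below are assumptions for illustration, not an industry standard:

```python
from typing import Any, Protocol

class DigitalTwin(Protocol):
    """A hypothetical common interface: any model that can synchronize with
    its real counterpart and report a prediction plus its accuracy can be
    reused and aggregated with other twins."""

    def sync(self, observation: Any) -> None: ...
    def predict(self) -> tuple[float, float]: ...   # (estimate, confidence)
```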

Conclusion

Digital twins are purpose-driven replicas of aspects of a system, such as processes, equipment, or products, that are synchronized with their real counterparts. They have been used for a long time in a variety of applications. To develop a digital twin framework, it is essential to accommodate these existing applications while establishing clear technical definitions for digital twins and the framework itself. We can then leverage the power of digital twins to drive innovation and improve many applications, even AI.

Back to Part 1, Part 2 and Part 3

About Dr. Moyne

Dr. James Moyne is an Associate Research Scientist at the University of Michigan. He specializes in improving decision-making by bringing more information into the scheduling and dispatching area. Dr. Moyne is experienced in prediction technologies such as predictive maintenance, model-based process control, virtual metrology, and yield prediction. He also focuses on smart manufacturing concepts like digital twin and analytics, working towards the implementation of smart manufacturing in the microelectronics industry.

Dr. Moyne actively supports advanced process control through his co-chairing efforts in industry associations and leadership roles, including the IMA-APC Council, the International Roadmap for Devices and Systems (IRDS) Factory Integration focus group, the SEMI Information and Control Standards committee, and the annual APC-SM Conference in the United States.

With his extensive experience and expertise, Dr. Moyne is highly regarded as a consultant for standards and technology. He has made significant contributions to the field of smart manufacturing, prediction, and big data technologies.

About the Author

Samantha Duchscherer, Global Product Manager
Samantha is the Global Product Manager overseeing SmartFactory AI Productivity, Simulation AutoSched®, and Simulation AutoMod®. Prior to joining the Applied Materials Automation Product Group, Samantha was Manager of Industry 4.0 at Bosch, where she was previously a Data Scientist. She also has experience as a Research Associate for the Geographic Information Science and Technology Group at Oak Ridge National Laboratory. She holds an M.S. in Mathematics from the University of Tennessee, Knoxville, and a B.S. in Mathematics from the University of North Georgia, Dahlonega.