
Enhancing decision-making in real-time scheduling: leveraging data and AI technology (Part 1/4)

This blog series focuses on the role of data in helping semiconductor manufacturers achieve greater benefits in productivity and quality.

Renowned influencer James Moyne joins Samantha Duchscherer in an engaging discussion exploring the importance of integrating additional information and advanced technologies like artificial intelligence (AI) into the semiconductor industry’s scheduling and dispatching process. The four-part series covers the importance and advantages of data, AI, and human involvement; the obstacles faced along the way; and the role of digital twin technology.

In this first article of the series, they focus on the current situation for scheduling and dispatching.

Sam: Why is Industry 4.0 so important to the semiconductor manufacturing industry, and how relevant is data for scheduling and dispatching?

James: The semiconductor industry leads all other industries in scheduling and dispatching because semiconductor manufacturing is a highly automated, fast-moving, flexible system. Manufacturers need to make decisions quickly and in an automated fashion.

So far, the real-time aspect of scheduling and dispatching is almost exclusive to semiconductor manufacturing. Over the past five years, it has been moving into industries like pharmaceutical and aerospace, but these industries are still behind. In fact, they are looking to semiconductor manufacturing to learn what it did so they can replicate it.

Sam: Which schedule and dispatch interventions are commonly used in the semiconductor industry today, and how do they work?

James: Right now, schedule and dispatch are rules-based systems. Using experts and data, they try to control actions such as, “When this happens along with A, B, and C, I want to make a decision and route something to a particular place.”

There are both push and pull effects. In push, the focus is on how wafers get routed from a higher-level system to resources down the line. This is essentially, ‘I have wafers and I know which piece of equipment to send them to.’

In the case of pull, the order of the processing is decided. For instance, a piece of equipment has been given four wafers to process and needs to know in what order to process them. Push and pull work together, using rules-based solutions like APF RTD® to do it.
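The push/pull split described above can be sketched in a few lines of code. This is an illustrative toy only, not how APF RTD® works; all names (`Wafer`, `push_route`, `pull_order`, the recipe tables) are hypothetical:

```python
# Toy rules-based dispatcher illustrating push (route a wafer to equipment)
# and pull (a tool orders its own queue). Purely hypothetical names.
from dataclasses import dataclass

@dataclass
class Wafer:
    lot_id: str
    priority: int   # higher = more urgent
    recipe: str

# Which recipes each (hypothetical) tool can run.
eq_recipes = {"etch_A": {"etch"}, "etch_B": {"etch"}, "litho_A": {"litho"}}
queues = {"etch_A": [], "etch_B": [], "litho_A": []}

def push_route(wafer, equipment_queues):
    """Push: a higher-level rule routes the wafer to the shortest compatible queue."""
    compatible = {eq: q for eq, q in equipment_queues.items()
                  if wafer.recipe in eq_recipes[eq]}
    return min(compatible, key=lambda eq: len(compatible[eq]))

def pull_order(queue):
    """Pull: a tool decides its own processing order, e.g. highest priority first."""
    return sorted(queue, key=lambda w: -w.priority)

w1 = Wafer("L1", priority=2, recipe="etch")
w2 = Wafer("L2", priority=5, recipe="etch")
queues[push_route(w1, queues)].append(w1)   # lands on etch_A (shortest queue)
queues[push_route(w2, queues)].append(w2)   # lands on etch_B (now shortest)
ordered = pull_order(queues["etch_A"] + queues["etch_B"])
```

Real systems encode many more conditions per rule ("when this happens along with A, B, and C..."), but the shape is the same: push rules select a destination, pull rules select an order.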

Sam: There are many unexpected events that can occur in manufacturing. How are these events currently handled by these rule-based systems?

James: The current systems are both reactive and predictive. They are reactive in the sense that the system doesn’t have a predetermined or automated next step for events it didn’t predict. It sort of throws its hands up in the face of the unexpected. This is where a manual intervention by a subject matter expert is needed. That person comes in and determines either that a new rule needs to be added, because this is the type of thing that could happen again, or, if they deem it a one-time thing, how to resolve it in that instance.

They are predictive in that they take into consideration the types of rules you want to have, the way you want the equipment to operate, the throughput you want to get, and the quality, and then make decisions such as, ‘I want to route some of the more expensive wafers to a higher producing piece of equipment.’

Sam: What is needed to make these systems more predictive and advance these systems to the next level?

James: We have started adding this predictive aspect into scheduling and dispatching, and this is where data comes in. You first need to collect a lot of data not just about scheduling and dispatch, but about things that may impact them. A good example of this is maintenance. If you collect a lot of data, you may realize there are maintenance trends associated with a piece of equipment. You start to understand when it’s going to go down, how often, and more importantly, the variability of those numbers. You can then start to incorporate that into your scheduling and dispatch.

How quickly you can feel confident in your predictive maintenance depends on how much historical data you have. If you have a lot, you can see the patterns in history. If, for example, you have two years of maintenance logs for a piece of equipment, you can determine the behavior of the maintenance for this tool and use that moving forward. Without this type of historical data, you need to start from scratch and develop confidence in your predictive maintenance as you go. So, it’s kind of a function of how long your data archives are, how reliable they are, how good the data quality is, how dynamic or changing the behavior patterns are over time, and things like that.
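A minimal sketch of the idea above: mining maintenance logs for a tool's failure behavior and its variability, then flagging when the tool enters a high-risk window. The log values, threshold rule, and all names are illustrative assumptions, not a real predictive maintenance model:

```python
# Hedged sketch: estimate maintenance behavior from historical logs.
# The uptime figures and the one-sided threshold rule are invented for illustration.
import statistics

# Hypothetical log: hours of uptime between unscheduled down events for one tool.
uptime_hours = [310, 295, 330, 288, 342, 305, 298, 321]

mean_uptime = statistics.mean(uptime_hours)    # how often it tends to go down
stdev_uptime = statistics.stdev(uptime_hours)  # the variability of that number

def high_risk(hours_since_maintenance, k=1.0):
    """Flag the tool once its current run approaches the mean minus k std devs."""
    return hours_since_maintenance >= mean_uptime - k * stdev_uptime
```

A scheduler consuming this could then avoid routing long, high-value lots to a tool flagged as high-risk. With two years of logs the mean and variance are trustworthy from day one; without history, confidence has to be built up as data accumulates, exactly as described above.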

Sam: Where are you finding data is having the most impact in predicting scheduling and dispatching events?

James: What we’re finding in some of the research is that you can break this predictive aspect into two pieces, the first dealing with minor glitches. An example of this is when everything is going great, but maybe the processing time of a piece of equipment varies from one minute to one minute ten seconds; some decisions can be made based on that prediction and variability.

The second piece is in predicting more show-stopper types of events. This is where, for instance, unscheduled downtime is predicted and that is conveyed to schedule and dispatch. This enables more important decisions, such as rerouting, to be made. You can see where the data-driven aspect steers that second piece, the catastrophic or large-scale changes you must make to your schedule and dispatch decisions.
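To make the show-stopper case concrete, here is a hedged sketch of how a downtime prediction might feed a reroute decision. The function names, queue layout, and least-loaded policy are all hypothetical:

```python
# Illustrative only: when predictive maintenance flags tools as likely to go
# down, move their queued lots onto healthy peers. All names are hypothetical.
def reroute(queues, predicted_down):
    """Drain queues of tools predicted to fail, spreading lots to healthy tools."""
    healthy = [t for t in queues if t not in predicted_down]
    for tool in predicted_down:
        while queues[tool]:
            lot = queues[tool].pop()
            # Send each displaced lot to the currently least-loaded healthy tool.
            target = min(healthy, key=lambda t: len(queues[t]))
            queues[target].append(lot)
    return queues

queues = {"etch_A": ["lot1", "lot2"], "etch_B": ["lot3"], "etch_C": []}
reroute(queues, predicted_down={"etch_A"})  # etch_A's lots move to B and C
```

The minor-glitch case works the same way at a smaller scale: a predicted ten-second swing in processing time adjusts ordering rather than triggering a wholesale reroute.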

It’s important to note that data isn’t just driving schedule dispatch directly. It’s affecting applications whose outputs will affect schedule dispatch, such as predictive maintenance and run-to-run control, which is trying to make your process run more efficiently and produce better quality wafers. If you know that the run-to-run controller is producing better quality wafers on tool A versus tool B, that’s important for scheduling.

What we’re just starting to do now is roll that type of information into schedule dispatch. There are a lot of opportunities for data-driven things to impact schedule dispatch in the future, such as integrating AI, ML, and digital twin.


The semiconductor manufacturing industry is a leader in real-time scheduling and dispatching. As other industries turn to it as a benchmark for implementation, the semiconductor industry is beginning to take advantage of the opportunities data provides to build more predictive scheduling and dispatch systems that improve productivity and quality.

In our next article, we’ll define AI and ML.

About Dr. Moyne

Dr. James Moyne is an Associate Research Scientist at the University of Michigan. He specializes in improving decision-making by bringing more information into the scheduling and dispatching area. Dr. Moyne is experienced in prediction technologies such as predictive maintenance, model-based process control, virtual metrology, and yield prediction. He also focuses on smart manufacturing concepts like digital twin and analytics, working towards the implementation of smart manufacturing in the microelectronics industry.

Dr. Moyne actively supports advanced process control through his co-chairing efforts in industry associations and leadership roles, including the IMA-APC Council, the International Roadmap for Devices and Systems (IRDS) Factory Integration focus group, the SEMI Information and Control Standards committee, and the annual APC-SM Conference in the United States.

With his extensive experience and expertise, Dr. Moyne is highly regarded as a consultant for standards and technology. He has made significant contributions to the field of smart manufacturing, prediction, and big data technologies.

About the Author

Samantha Duchscherer, Global Product Manager
Samantha is the Global Product Manager overseeing SmartFactory AI Productivity, Simulation AutoSched®, and Simulation AutoMod®. Prior to joining the Applied Materials Automation Product Group, Samantha was Manager of Industry 4.0 at Bosch, where she was previously a Data Scientist. She also has experience as a Research Associate for the Geographic Information Science and Technology Group of Oak Ridge National Laboratory. She holds an M.S. in Mathematics from the University of Tennessee, Knoxville, and a B.S. in Mathematics from the University of North Georgia, Dahlonega.