
Using a framework to validate factory scheduling solution systems (Part 2/2)

Make more advanced validations and fine tune the schedule generated by a factory schedule solution system to get the most from your semiconductor factory

In Part 1 of “Using a framework to validate factory scheduling solution systems,” we looked at how a validation framework lets manufacturers determine the validity of a schedule that has been produced by a factory scheduling solution system. Methods for basic and secondary level validations, as well as evaluation of the input data, were discussed. We’ll now look at the next step, conducting advanced validations and fine tuning the schedule.

Schedule debugging and traceability

To validate, understand and fine tune a scheduling solution, you need to be able to explain the decision making that went into it. When an end user configures a scheduler, they do so with expectations and assumptions about how it should behave. When the scheduler doesn't behave as expected, manufacturers need to be able to find out why by testing and validating those assumptions. (This is also part of the schedule fine tuning process.) The following information will help you understand that decision making (a minimal data sketch follows the list):

  • Lot Assignment: Why is a particular lot assigned to a particular tool in a schedule? This explanation can be encoded visually in the form of a score, weightage, criteria, percentage, and/or list of tool and lot attributes.
  • Lot sequence position: Why is a particular lot scheduled ahead of or behind another lot, or placed at a particular position in a tool's sequence? This explanation can be encoded visually in the form of a score, weightage, criteria, percentage, and/or list of tool and lot attributes.
  • Winners and losers: The details of the lots that competed with any assigned or selected lot and lost out. In a factory with a lot of active work in process (WIP), and when the schedule horizon is not able to accommodate all the active lots in the schedule, you should be able to find details of unselected lots which lost out to the winning lots.
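
As one way to picture this traceability data, the sketch below shows a hypothetical record a scheduler could emit for each assignment decision. The field names (composite score, criteria weights, winner and losing candidates) are illustrative assumptions for this article, not any vendor's actual schema.

```python
# Hypothetical traceability record for one assignment decision.
# Field names and structure are illustrative assumptions only.
from dataclasses import dataclass, field
from typing import Dict, List


@dataclass
class CandidateScore:
    lot_id: str
    score: float                                               # composite ranking score for the lot
    criteria: Dict[str, float] = field(default_factory=dict)   # weighted criteria, e.g. {"due_date": 0.5}


@dataclass
class AssignmentExplanation:
    tool_id: str
    winner: CandidateScore            # the lot that was assigned
    losers: List[CandidateScore]      # competing lots that lost out, with their scores

    def why(self) -> str:
        """Human-readable summary of why the winning lot was chosen."""
        margin = self.winner.score - max((l.score for l in self.losers), default=0.0)
        return (f"Lot {self.winner.lot_id} assigned to {self.tool_id} "
                f"with score {self.winner.score:.2f} (margin {margin:.2f} over best competing lot).")


# Example usage with made-up lots and scores.
exp = AssignmentExplanation(
    tool_id="TOOL_A",
    winner=CandidateScore("LOT01", 0.92, {"due_date": 0.50, "setup_match": 0.42}),
    losers=[CandidateScore("LOT07", 0.80), CandidateScore("LOT11", 0.64)],
)
print(exp.why())
```

A record like this answers both the "lot assignment" and the "winners and losers" questions in one place, which makes it easier to audit individual decisions when fine tuning the configuration.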

Advanced validation: quality of the schedule

Depending on the factory, the schedule is set up to automatically refresh anywhere from every five minutes to every hour. The stability and reliability of the schedule, run to run and over a rolling period, need to be tracked as part of solution validation, as this mimics the actual production environment at the customer factory. What constitutes a good schedule can be based on quantitative criteria and KPIs, but there is also a subjective component to the evaluation based on factory physics tradeoffs, which again can be linked back to KPIs.
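
One simple way to quantify run-to-run stability is to measure how many lot-to-tool assignments change between consecutive refreshes. The sketch below assumes a schedule can be reduced to a lot-to-tool mapping; the data format and the metric itself are illustrative, not part of any specific product.

```python
# Rough sketch of a run-to-run stability check: the share of lot-to-tool
# assignments that change between consecutive schedule refreshes.
# The lot_id -> tool_id mapping is an assumed, simplified schedule format.
from typing import Dict


def assignment_churn(previous: Dict[str, str], current: Dict[str, str]) -> float:
    """Fraction of lots present in both runs whose tool assignment changed."""
    common = set(previous) & set(current)
    if not common:
        return 0.0
    changed = sum(1 for lot in common if previous[lot] != current[lot])
    return changed / len(common)


# Example: 1 of 3 overlapping lots moved to a different tool between refreshes.
prev_run = {"LOT01": "TOOL_A", "LOT02": "TOOL_B", "LOT03": "TOOL_C"}
curr_run = {"LOT01": "TOOL_A", "LOT02": "TOOL_C", "LOT03": "TOOL_C", "LOT04": "TOOL_A"}
print(f"Assignment churn: {assignment_churn(prev_run, curr_run):.0%}")  # 33%
```

Tracking a churn figure like this over a rolling window gives a concrete way to flag schedules that reshuffle assignments excessively between refreshes.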

Based on the use case, some of the KPIs that, when tracked, would indicate the efficacy of a scheduling solution are listed below (an illustrative calculation sketch follows the list):

  • Schedule compliance: Lagging indicator of assignment compliance to measure if, in actual production, lots ran on the tools to which they were assigned at the time of dispatch.
  • Projected moves and outs: The moves and outs projected by the scheduler by end of shift or end of day are a leading indicator of factory throughput.
  • Tool utilization: Standby time (SBY) with WIP (leading to avoidable white space in a schedule) and SBY without WIP (which can lead to bottleneck tools being starved of work). Standby time in a tool is a measure of the unutilized capacity available in the tool and, in typical use cases, the scheduler is designed to minimize this component of total tool time.
  • Finished lot cycle time: Tracking finished lot cycle time over a period is a lagging indicator of the effectiveness of the schedule. If the product mix, starts, and equipment qualification matrix did not dramatically change, finished lot cycle time should trend downward after the deployment of the scheduler compared with before.
  • Factory X factor, DPML, CT pace or WIP turns: Tracking the trend of leading cycle time measures such as Days Per Masking Layer (DPML), X factor, or WIP turns is an indicator of the effectiveness of the schedule. If the product mix, starts, and equipment qualification matrix did not dramatically change, these measures should move in the favorable direction after deployment of the scheduler (X factor and DPML down, WIP turns up).
  • On-time delivery percentage or lagging lots percentage: In factories where lot due dates matter and on-time delivery (OTD) is a core objective, the scheduler is expected to improve OTD metrics.
  • Average train size: In factories where setup minimization is a core objective of the scheduling solution, average train size is a measure that can evaluate the quality of the schedule.
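
To make two of these KPIs concrete, the sketch below computes schedule compliance and standby-with-WIP time from simple, assumed record layouts. It is illustrative only and not tied to any particular scheduling product's data model.

```python
# Illustrative calculations for two of the KPIs above.
# The record layouts (dicts and state entries) are assumptions for the example.
from typing import Dict, List


def schedule_compliance(assigned: Dict[str, str], actual: Dict[str, str]) -> float:
    """Lagging indicator: share of dispatched lots that actually ran on the assigned tool."""
    if not assigned:
        return 0.0
    hits = sum(1 for lot, tool in assigned.items() if actual.get(lot) == tool)
    return hits / len(assigned)


def standby_with_wip_hours(tool_states: List[Dict]) -> float:
    """Tool-utilization indicator: hours a tool sat in standby while qualified WIP was waiting."""
    return sum(s["hours"] for s in tool_states
               if s["state"] == "SBY" and s["wip_waiting"] > 0)


# Example inputs (made up for illustration).
assigned = {"LOT01": "TOOL_A", "LOT02": "TOOL_B"}
actual = {"LOT01": "TOOL_A", "LOT02": "TOOL_C"}          # LOT02 ran on a different tool
states = [{"state": "SBY", "hours": 1.5, "wip_waiting": 4},
          {"state": "RUN", "hours": 6.0, "wip_waiting": 2},
          {"state": "SBY", "hours": 0.5, "wip_waiting": 0}]

print(f"Schedule compliance: {schedule_compliance(assigned, actual):.0%}")   # 50%
print(f"Standby with WIP: {standby_with_wip_hours(states):.1f} h")           # 1.5 h
```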

These KPIs also need to be generated on a forward-looking basis as part of the scheduling output. This allows end users to understand the outcomes the scheduler predicts and to validate the schedule.

Role in successful implementation

A validation framework enables a structured way of conducting a thorough evaluation of the schedules generated by the solution. Both the software vendor and the customer need to be aligned on this to manage expectations, as well as for the successful deployment and management of the solution. There is no such thing as a ‘perfect schedule.’ In the absence of AI/ML based methods that can automate the process, fine tuning is by default part of making a scheduling solution work for an end user and realizing the theoretically achievable productivity gains.

About the Author

Ravi Jaikumar, Global Product Manager, Real Time Dispatching and Scheduling
Ravi is a Global Product Manager for Real Time Dispatching and Scheduling software solutions for semiconductor front end fabs and Assembly, Test and Packaging factories. Prior to joining the Applied Materials Automation Products Group almost two years ago, he was a senior industrial engineer with Qorvo, Inc. He also served as an industrial engineer for ON Semiconductor and was a supply chain consultant with Hyster-Yale Group. He earned a bachelor’s degree in mechanical engineering from Anna University Chennai, and a master’s degree in industrial engineering from North Carolina State University.