Deep reinforcement learning (RL) for queue-time management in semiconductor manufacturing

Using deep RL to minimize yield loss through automated queue-time management

By Jeong Cheol Seo
Queue-time constraints (QTCs) define a limit on how long a lot can wait between two process steps in its flow. In semiconductor manufacturing, lots that exceed this time limit suffer yield loss, require rework, or are scrapped. QTCs are difficult to schedule around, since a lot must be held before the first process step until there is capacity available to process the final step. However, calculating exactly whether enough capacity exists is computationally expensive.
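To make the constraint concrete, here is a minimal sketch (not the paper's method) of a queue-time violation check and a simple heuristic release rule of the kind a scheduler might use as a cheap stand-in for an exact capacity calculation; all names and parameters below are illustrative assumptions:

```python
from dataclasses import dataclass

@dataclass
class Lot:
    """A lot inside a queue-time-constrained segment (illustrative)."""
    lot_id: str
    entered_qtc_at: float  # time the lot completed the first process step
    qt_limit: float        # maximum allowed wait before the final step starts

def violates_qtc(lot: Lot, final_step_start: float) -> bool:
    """True if the lot waited longer than its queue-time limit."""
    return (final_step_start - lot.entered_qtc_at) > lot.qt_limit

def naive_release_ok(queued_wip: int, capacity_per_hour: float,
                     qt_limit: float) -> bool:
    """Conservative heuristic: release a new lot into the constraint only
    if the work already queued at the final step can be drained within
    the queue-time limit. An exact check would need to account for tool
    availability, dispatching, and competing flows, which is expensive."""
    hours_to_drain = queued_wip / capacity_per_hour
    return hours_to_drain < qt_limit
```

For example, a lot that entered the constraint at time 0 with a 4-hour limit violates it if the final step only starts at hour 5; the RL approach in the paper replaces such hand-tuned release rules with a learned release policy.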
In this work we propose a deep reinforcement learning (RL) method to manage the release of lots into queue-time constraints. We analyze the performance of our RL method and compare it against seven baseline solutions. Our empirical evaluation shows that the RL method outperforms the baselines on five performance metrics, including the number of queue-time violations and makespan, while requiring negligible online compute time.
