
Enhancing decision-making in real-time scheduling: leveraging data and AI technology (Part 3/4)

Realizing the challenges and opportunities AI brings to semiconductor manufacturing

Renowned industry expert James Moyne joins Samantha Duchscherer in an engaging discussion exploring the importance of integrating additional information and advanced technologies like artificial intelligence (AI) into the semiconductor industry’s scheduling and dispatching process. The series consists of four parts covering the importance and advantages of data, AI, and human involvement; the obstacles faced along the way; and the role of digital twin technology.

In this third article of the series, they discuss the challenges when implementing AI solutions, along with opportunities presented when these intelligent systems learn to ask for help.

Walk before you run with AI

AI and machine learning (ML) present many opportunities for semiconductor manufacturers to improve KPIs such as quality and yield. It’s important to understand, however, that integrating AI into your semiconductor manufacturing operation will at first require humans to create a system that is intelligent enough to ask the right questions of the right sources.

As James highlights, AI is prone to making mistakes because of its lack of context and its dependence on the quality of the data it operates on.

On data quality, he has a saying: “One piece of bad data destroys 10 pieces of good data.” In terms of context, he notes that we can’t feed data blindly into these systems; AI needs context for why it thinks a certain way. For example, it may know a light bulb is going to burn out, but the additional context of whether the bulb is in a residential building or an office complex, outside or inside, adds important information to help it predict when it will burn out.

“Oftentimes, when you look at data, it clusters really nicely around different context values like day versus night or outside versus inside,” he says. “A lot is dependent on how you treat these systems in terms of whether you give them enough information.”
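James’s clustering point can be made concrete with a small experiment. Below is a minimal sketch (scikit-learn, entirely fabricated light-bulb data, all variable names hypothetical) in which the same simple model is fit with and without a single context feature, inside versus outside; the contextualized model explains noticeably more of the variance:

```python
# A minimal sketch with made-up bulb data: adding one context feature
# ("outside" vs. "inside") improves the fit of a simple lifetime model.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
n = 200
hours_per_day = rng.uniform(2, 12, n)     # usage intensity
outside = rng.integers(0, 2, n)           # context: 0 = inside, 1 = outside
# In this fabricated data, outdoor bulbs age noticeably faster.
lifetime = 10_000 / hours_per_day * (1 - 0.4 * outside) + rng.normal(0, 50, n)

X_no_ctx = hours_per_day.reshape(-1, 1)
X_ctx = np.column_stack([hours_per_day, outside])

print("R^2 without context:", LinearRegression().fit(X_no_ctx, lifetime).score(X_no_ctx, lifetime))
print("R^2 with context:   ", LinearRegression().fit(X_ctx, lifetime).score(X_ctx, lifetime))
```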

Integrated systems

Sam: Can you touch on why it’s important to have well-integrated systems along with high-quality data?

James: To enable these systems to make accurate predictions, you want to make sure you have high-quality data in terms of accuracy, precision, availability and timeliness. Not only that, but the data needs to be well integrated because, if the systems can’t talk to each other and collaborate with their data, it becomes very difficult; this is where you get issues such as time synchronization of data.

With scheduling, for example, maybe we’re thinking about an event, such as when the wafer arrives at a machine, and that’s how we’re making our decisions. Every time that action occurs, we make a decision. But at the fault detection level, maybe they’re not looking at actions; they’re looking at time intervals like seconds or milliseconds. How do we synchronize that kind of data?
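As one illustration of the synchronization problem James raises, the sketch below (hypothetical data, column names and sampling rates) uses a pandas “as of” join to attach the most recent fault-detection sample to each event-based scheduling record:

```python
# Minimal sketch: aligning event-based scheduling records with a
# high-rate fault-detection trace via an "as of" (backward) join.
import pandas as pd

# Scheduling decisions recorded per event (wafer arrival at a machine).
events = pd.DataFrame({
    "timestamp": pd.to_datetime(["2024-01-01 08:00:03", "2024-01-01 08:02:47"]),
    "wafer_id": ["W-101", "W-102"],
})

# Fault-detection trace sampled every 500 ms (stand-in sensor values).
trace = pd.DataFrame({
    "timestamp": pd.date_range("2024-01-01 08:00:00", periods=400, freq="500ms"),
})
trace["chamber_pressure"] = 1.0 + 0.001 * trace.index

# For each event, pick up the most recent sensor reading at or before it.
aligned = pd.merge_asof(
    events.sort_values("timestamp"),
    trace.sort_values("timestamp"),
    on="timestamp",
    direction="backward",
)
print(aligned)
```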

Sam: When you start to consider the integration of data, does sharing data become a new hurdle?

James: Yes, that’s another big challenge, especially when you start to go outside the factory’s four walls. We start to worry about things like ordering parts from suppliers in anticipation of equipment going down. While they won’t share their data with us and we wouldn’t share ours with them, we do need to share some form of information to understand each other. We also need to use that information to improve our predictions. How do you secure your IP but still allow enough information to pass so that important things like analytics, prediction and detection can take place?
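One common pattern here, sketched below with entirely hypothetical names and numbers, is for each party to run its proprietary models in-house and exchange only derived information, such as a forecast with a confidence level, rather than the underlying data:

```python
# Minimal sketch: the supplier keeps raw telemetry in-house and shares
# only a derived forecast; the fab acts on the forecast, not the data.
from dataclasses import dataclass

@dataclass
class PartForecast:
    part_number: str
    days_to_failure: float   # derived prediction, not raw telemetry
    confidence: float        # 0..1

def supplier_forecast(raw_telemetry: list[float]) -> PartForecast:
    # Proprietary model runs on the supplier's side; only the result leaves.
    mean_load = sum(raw_telemetry) / max(len(raw_telemetry), 1)
    return PartForecast("PUMP-7", days_to_failure=90.0 - 5.0 * mean_load,
                        confidence=0.7)

# The fab orders a spare when the shared forecast warrants it.
forecast = supplier_forecast([12.0, 14.5, 13.2])
if forecast.days_to_failure < 60 and forecast.confidence >= 0.6:
    print(f"Order {forecast.part_number} within {forecast.days_to_failure:.0f} days")
```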

Sam: Is there any other integration that needs to happen to give you a robust solution? If AI can make mistakes, how do we get to the point where it makes fewer mistakes?

James: What we’ve come to understand is that large language models are great for things like mining maintenance data coming from a human and trying to interpret what that human is saying. But that needs to go to an actual (human) subject matter expert who can take this information and fact-check it to filter out what isn’t correct. Then it can go on to something like scheduling and dispatch that will use this data to make important decisions.

A robust solution, then, would be an interface that always asks for help or for a subject matter expert’s input. To me, the most intelligent system is the one that knows when to ask for help, not the one that just knows things. That’s the problem with AI systems; they aren’t yet like humans who say, “This is an area where I don’t know. Help me learn this piece.” They have to become more like students who are taught and who ask questions while they’re being taught.
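A simple way to picture a system that knows when to ask for help is a confidence gate: interpretations below a threshold are escalated to a subject matter expert instead of flowing straight into scheduling. The sketch below uses made-up labels, confidences and thresholds:

```python
# Minimal sketch: route low-confidence interpretations of free-text
# maintenance notes to a human expert instead of straight to scheduling.
CONFIDENCE_FLOOR = 0.85

def interpret_note(note: str) -> tuple[str, float]:
    # Stand-in for an LLM that labels maintenance notes; returns
    # (label, confidence). A real system would call a model here.
    if "vibration" in note.lower():
        return "bearing_wear", 0.92
    return "unknown", 0.40

def triage(note: str) -> str:
    label, confidence = interpret_note(note)
    if confidence < CONFIDENCE_FLOOR:
        # The "intelligent" step: the system knows when to ask for help.
        return f"Escalate to subject matter expert: {note!r}"
    return f"Pass to scheduling and dispatch as: {label}"

print(triage("Heavy vibration on spindle 3"))
print(triage("Intermittent alarm, cause unclear"))
```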

Quantification of trust

Sam: While having accurate AI or ML predictions is important, how do we go about not only understanding these systems but trusting the output?

James: Yes, that’s an important question, because people can’t effectively use analytics and prediction, and even detection, without having trust in the recommendations. And this trust is not necessarily about the accuracy of the recommendations so much as about the ability to understand the accuracy level. This is called the quantification of trust. While it’s morbid, a good example is this: if a doctor tells me I’m going to die, I’m going to take that seriously. But if the doctor doesn’t give me any indication of when, he’s 100 percent correct but basically telling me something I already know, which doesn’t help me. Even if the doctor says, “I am 100 percent confident you’re going to die,” that’s still not helping me. However, if he says, “I have 62 percent confidence that you’re going to die in the next two years,” I can use that information.

With predictive systems, it’s not as important that they’re great predictors. It’s that we completely understand the quality of the predictions they’re giving in terms of the start and stop time of the prediction window and what the confidence level is. If I know that, I can use it with my analytics and do a lot with my scheduling and dispatch to make them work better.
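To make that concrete, here is a minimal sketch (hypothetical structure, names and numbers) of a prediction that carries a time window and a confidence level, plus a scheduling check that acts on it only when the confidence is high enough:

```python
# Minimal sketch: a prediction is only actionable for scheduling when it
# carries a time window and a confidence level, not just a verdict.
from dataclasses import dataclass

@dataclass
class FailurePrediction:
    tool_id: str
    window_start_h: float   # earliest expected failure, hours from now
    window_end_h: float     # latest expected failure, hours from now
    confidence: float       # 0..1

def should_reroute(pred: FailurePrediction, lot_hours_on_tool: float,
                   min_confidence: float = 0.6) -> bool:
    # Act only if the lot would still be on the tool when the risk
    # window opens, and the prediction is confident enough to trust.
    overlaps_risk_window = lot_hours_on_tool > pred.window_start_h
    return overlaps_risk_window and pred.confidence >= min_confidence

pred = FailurePrediction("ETCH-04", window_start_h=24.0,
                         window_end_h=48.0, confidence=0.62)
print(should_reroute(pred, lot_hours_on_tool=30.0))  # True: reroute the lot
print(should_reroute(pred, lot_hours_on_tool=10.0))  # False: lot finishes first
```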

Discussion — Are we there yet?

In our discussion, James and I explored the various ways in which AI can enhance our processes and products. However, before delving into the benefits, we first examined how far manufacturing still has to go.
Video: Sam and James discuss the spectrum along which AI will develop

Conclusion

AI/ML offers semiconductor manufacturers many opportunities in terms of quality, productivity and the supply chain, including improvements in fault detection, predictive maintenance, virtual metrology, scheduling, dispatching and capacity planning, to name a few. Realizing these opportunities to their fullest potential will entail figuring out a better way to integrate the subject matter expertise of humans with AI.

In the final part of the series, James will discuss the digital twin angle and how it further empowers the AI movement.

Back to Part 1 and Part 2

About Dr. Moyne

Dr. James Moyne is an Associate Research Scientist at the University of Michigan. He specializes in improving decision-making by bringing more information into the scheduling and dispatching area. Dr. Moyne is experienced in prediction technologies such as predictive maintenance, model-based process control, virtual metrology, and yield prediction. He also focuses on smart manufacturing concepts like digital twin and analytics, working towards the implementation of smart manufacturing in the microelectronics industry.

Dr. Moyne actively supports advanced process control through his co-chairing efforts in industry associations and leadership roles, including the IMA-APC Council, the International Roadmap for Devices and Systems (IRDS) Factory Integration focus group, the SEMI Information and Control Standards committee, and the annual APC-SM Conference in the United States.

With his extensive experience and expertise, Dr. Moyne is highly regarded as a consultant for standards and technology. He has made significant contributions to the field of smart manufacturing, prediction, and big data technologies.

About the Author

Samantha Duchscherer, Global Product Manager
Samantha is the Global Product Manager overseeing SmartFactory AI Productivity, Simulation AutoSched® and Simulation AutoMod®. Prior to joining the Applied Materials Automation Product Group, Samantha was Manager of Industry 4.0 at Bosch, where she was previously a Data Scientist. She also has experience as a Research Associate for the Geographic Information Science and Technology Group at Oak Ridge National Laboratory. She holds an M.S. in Mathematics from the University of Tennessee, Knoxville, and a B.S. in Mathematics from the University of North Georgia, Dahlonega.