SmartFactory Podcast | Episode 2
Semi‑Insightful

A practical look at where AI is and where it may go with Selim Nahas

Transcript

Sam: Welcome to our podcast. I’m your host, Sam Duchscherer, and today I’m joined by Selim Nahas. Selim will offer a practical lens on the AI world: what’s real, what’s still aspirational, and what’s already delivering value. Selim, welcome.

Selim: Thank you.

Sam: As always, let’s start with your story. Can you tell our listeners a bit about yourself and your journey into the AI world? Mainly, I’m looking for: What sparked your interest? How does your current role involve AI today?

Selim: OK, sure. So let me begin by introducing myself as to what I do and what my role at Applied Materials is today. Today, I am the Director of Global Strategic Marketing. So, that’s across the three divisions of factory automation. We’re talking about productivity; we’re talking about MES; and we’re talking about process control, the process quality group. My journey to this point is, I’ve been in semiconductor manufacturing for 30 to 31 years (so I’m effectively old) and I think that I can definitively say that I have seen a number of different, sort of thought processes that have encompassed the semiconductor industry.

Once upon a time, it wasn’t as automated as it is today, so just the sheer concept of going from 200mm to 300mm and seeing how fully automated that became was a remarkable step in saying, “Yes, I work with lights out facilities” (or quote-unquote lights out; they are fully automated). But then when the concept of automation came along and it was coupled with AI, that was really exciting. For me, I have one wish: I wish it had come earlier in my lifetime, because then I would have more time with it. But I think when I saw that, I was enamored with it, and it’s something that drew me in very early on to sort of experiment and understand just where the boundaries are with it. And so today, I find myself using it for process control sort of paradigms as well as business paradigms, as well as overall factory automation paradigms. So, in a nutshell, that’s the journey that I took, and I’ve been on the customer side as well as on the vendor side (I’ve sort of lived the two sides of the coin). And so, I have an appreciation for the challenges that both the vendor side and the customer side have.

So in a nutshell, that’s me. And I am in the northeast side of the United States. I’m in Boston.

Sam: Great. So, I think the new craze is this term agentic AI, right? (And I know that you very much are aware of this term.) So can you, from your perspective, talk about agentic AI in the context of the semiconductor industry? What does it mean? Maybe what does it not mean?

Selim: OK, sure. So, the way I envision it—the way I work with it—is the concept of the building blocks of AI. If you take LLMs, if you take libraries of specialized work—tools, if you will—if you think about how we’re going to address having a conversation that is remembered, if you think about how we intend to resolve problems of searching information that is relevant for a given application, those are the building blocks that you need to sort of consider when you build an agentic solution.

Putting them all together through the concept of an agent is essentially how you convert those tools into an agentic solution, right? So, I think that’s the part that is eluding a lot of people because it’s moving so quickly. I think it’s fair to say that in the last two to three years, we’ve seen a lot of technological change within that realm, and so the definition of agentic is essentially an application: a combination of all those parts for a specific purpose.

And the last thing I’ll say about it is I think of it more as it’s not one-size-fits-all; there are highly specialized agents that I believe in for specific tasks so that they’re not made to be gigantic. They’re made to be relatively small and lightweight and highly specialized, right? So, think of it as agents are literally experts, but they are effectively AI experts for specific tasks and they’re put together into a larger community, if you will. And that’s what essentially creates, I think, agents or the agentic world and a community of agents, for lack of a better term.
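To make the “community of small, specialized agents” idea concrete, here is a minimal sketch in Python. The Agent and Orchestrator names, the skill-based routing, and the toy specialists are hypothetical illustrations of the pattern Selim describes, not any particular platform’s API.

```python
# A minimal sketch of a "community of agents": each agent is small and
# highly specialized, and an orchestrator routes work between them and
# keeps the shared "remembered" conversation. All names are hypothetical.
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class Agent:
    name: str
    skill: str                  # the one narrow task this agent handles
    run: Callable[[str], str]   # specialized logic, e.g. a small fine-tuned model

@dataclass
class Orchestrator:
    agents: list[Agent] = field(default_factory=list)
    memory: list[str] = field(default_factory=list)  # conversation that is remembered

    def route(self, task: str, payload: str) -> str:
        # Pick the specialist whose skill matches the task, not one giant model.
        agent = next(a for a in self.agents if a.skill == task)
        result = agent.run(payload)
        self.memory.append(f"{agent.name}: {result}")
        return result

# Usage: two tiny specialists working together instead of one monolith.
community = Orchestrator(agents=[
    Agent("retriever", "search", lambda q: f"top documents for '{q}'"),
    Agent("summarizer", "summarize", lambda t: f"summary of {len(t)} chars"),
])
docs = community.route("search", "etch chamber drift")
print(community.route("summarize", docs))
```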

Sam: So, you hit on a lot of things I want to kind of dive into there. Agreed with you on not one-size-fits-all, highly specialized. They’re the building blocks. I guess, then, what are some other, maybe, misconceptions you hear when people talk about agents?

Selim: OK, there are two major misconceptions that I see, maybe three. The first one is: what size agent should I be considering? What size LLM should I be considering for what I’m doing? How do I decide what size fits my needs, right? And there has been this extreme sort of emphasis on huge, huge models and things of that sort. I am not convinced that huge models are required for everything. In fact, I think quite the opposite. I believe that small surgical models that are connected together are far more effective, far more cost effective, and much easier to train for a specific purpose. So that’s the first one.

I think the second one is the strategy on how you’re going to connect these things, how you’re going to actually physically build them. And that begs the question of, you know, how are you going to, from a computing perspective, architect this thing? How is it distributed? How do I talk to it? When do I talk to it? Do I do it in the cloud, commercial cloud, on-prem cloud or not? And I think that’s kind of key. So that’s one misconception, if you will, categorically.
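One hedged way to picture the sizing and placement decision Selim raises is a router that tries a small, surgical on-prem model first and escalates to a larger (possibly cloud) model only when confidence is low. The model functions and the confidence field below are invented stand-ins under that assumption, not a real service’s API.

```python
# Sketch: keep most traffic on a small local model; escalate only on doubt.
from dataclasses import dataclass

@dataclass
class Answer:
    text: str
    confidence: float  # assumed self-reported score in [0, 1]

def small_local_model(prompt: str) -> Answer:
    # Stand-in for a small specialized model running on-prem.
    return Answer(text=f"[local] {prompt[:40]}", confidence=0.62)

def large_cloud_model(prompt: str) -> Answer:
    # Stand-in for a larger general model in a commercial cloud.
    return Answer(text=f"[cloud] {prompt[:40]}", confidence=0.95)

def route(prompt: str, threshold: float = 0.8) -> Answer:
    first = small_local_model(prompt)
    # Escalate only when the specialist is unsure; most requests stay local.
    return first if first.confidence >= threshold else large_cloud_model(prompt)

print(route("classify this chamber alarm").text)
```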

There’s one other that I want to bring up that’s actually the elephant in the room. What AI does, from my perspective, that nothing else to date has done is that it enables the digital twin modeling side that allows you to learn in a way that you previously could not. So let me explain what I’m trying to say, and then let me pause to see if it made sense to you.

The concept of the digital twin here is: I have an ability to model something that I want to understand, whereas historically I would hire a bunch of smart guys and girls that would basically go off and do a whole bunch of correlation work; you know, they would try to characterize the system. And what’s different here is, despite the fact that these folks could be brilliant and could be very experienced, they have no ability to compete with a concept like this, where the digital twin is essentially a model that is so effective that it begins to highlight relationships we didn’t even know existed. And it allows us to basically steer the conversation in the direction of, “Here’s a list of things that you want to learn to understand. Here’s a set of correlations you may not have anticipated or considered.” And the design aspects, therefore, of AI will come more from investing in and building these types of digital twins, these types of models of systems that we want to build AI solutions for.

So let me pause there, because I know that’s conceptually abstract, but I believe that is actually how the greatest work is being achieved today.
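As a rough illustration of how a twin can “highlight relationships we didn’t even know existed,” the sketch below generates synthetic twin telemetry with a hidden coupling and flags strong correlations nobody asked about. The column names, the hidden relationship, and the 0.7 threshold are all invented for the example.

```python
# Sketch: mine twin telemetry for strong, unanticipated correlations.
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
n = 500
chamber_temp = rng.normal(70, 2, n)
rf_power = rng.normal(300, 10, n)
# A hidden coupling the human experts did not anticipate:
etch_rate = 0.8 * chamber_temp + 0.1 * rf_power + rng.normal(0, 1, n)

df = pd.DataFrame({"chamber_temp": chamber_temp,
                   "rf_power": rf_power,
                   "etch_rate": etch_rate})

corr = df.corr().abs()
# Report only strong off-diagonal correlations as candidate relationships.
pairs = [(a, b, corr.loc[a, b])
         for i, a in enumerate(corr.columns)
         for b in corr.columns[i + 1:]
         if corr.loc[a, b] > 0.7]
for a, b, c in pairs:
    print(f"investigate: {a} <-> {b} (|r| = {c:.2f})")
```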

Sam: So earlier you mentioned the small surgical models and then you moved to digital twin, which is kind of this overarching architecture. So, I mean I guess my question or follow up— just to clear confusion—is how many small models do you see in that digital twin example that you listed.

Selim: OK, that’s a good question. So, I have modeled, for example, a semiconductor process tool, and I did it purely to understand it. I used a real AI platform, a real set of agents and automation platforms, to do it. And I can tell you that by the time I got my first model cut out, there were probably about 35 to 40 agents working together, and each agent was highly specialized at a specific task within the tool. So at the end, what I had was a model that basically just represented a singular tool, but it had 30 to 40 agents in it. And I believe that if I had done more diligence on it, it might grow by maybe, you know, 30 or 40 percent more than that, right?

These are relatively small agents that have very specific specialized tasks. And they are easily managed because they are modular and their objectives are very specific so if, for whatever reason, any of these models are not meeting my needs or I’m getting ambiguous results, then I have the ability to either dumb them down further or make them smaller (keep them more surgical). Or, I can just change the paradigm entirely if I find that it’s maybe better off being more of a hardwired sort of a software concept embedded within a tool set for the AI.

To answer your question, I see the paradigm changing from one sort of monolithic big AI that can answer any question to that community of agents just for that one tool. So, 30 to 40 is not unusual, and that answer may be significantly refined. It could triple; it could quadruple; and it could go the other way. But right now, based on what I’m seeing, I think the most effective way to get this done is through this multitude of small agents.

Sam: Then there’s always that question on, you know, where do you draw the line between when it recommends actions (when an agent recommends actions) and when it actually takes actions. Do you see that at the agent level, at the tool level, or at the digital twin level—or maybe all three?

Selim: Great question. The digital twin level is my learning cycle. I use it to design and build. At runtime, I believe that there’s a hierarchy that’s established. This is the reason there are so many agents. So, let me explain what I mean by “at the tool level.”

In my particular case, I applied agents for the actual physical parts of the tool. What that means is: each component within the tool had a set of agents governing it, and so there was an overarching set of agents that had different responsibilities that were consulting these parts agents. For example, if I have a learning agent, the learning agent might compare what the data acquisition agent is doing, and the data acquisition agent is talking to all the parts of the tool and saying, “What data am I getting from you?” and also comparing “What am I expecting you to do with the request that I’ve asked you to run?” (so with the recipe that I’ve invoked on the tool and so on). I think that there’s a hierarchy there.

So the digital twin is designed to help me figure out how to build the solution for the tool, and at runtime, I think it’s more of a movement beyond the sense-decide-respond world that automation has historically lived in. Something would happen (sense), we would decide what to do, analyze, and then we’d respond; we’d take some corrective action. What’s new here is there’s a set of agents beyond that that monitor what we did and how effective it was. And, whether it was effective or not, the question is: Can you make it understand how the next go-around should be different based on the learning of this one? The concept there is that every time you run something, you progressively get better at what you do because you’re learning from every single instance.
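A minimal sketch of that loop: the classic sense-decide-respond cycle, plus a learning agent that scores each correction and adjusts itself for the next go-around. The gain-adjustment rule and all names here are an invented simplification of the idea, not Selim’s actual implementation.

```python
# Sketch: sense -> decide -> respond, extended with a learn step that
# evaluates how effective the correction was and tunes the next cycle.
from dataclasses import dataclass

@dataclass
class LearningAgent:
    gain: float = 0.5  # how aggressively we correct; tuned by experience

    def learn(self, error_before: float, error_after: float) -> None:
        if abs(error_after) >= abs(error_before):
            self.gain *= 0.8   # correction made things worse: be gentler
        else:
            self.gain *= 1.05  # correction helped: trust it slightly more

def run_cycle(agent: LearningAgent, reading: float, setpoint: float) -> float:
    error = reading - setpoint                  # sense
    correction = -agent.gain * error            # decide
    new_reading = reading + correction          # respond
    agent.learn(error, new_reading - setpoint)  # the new part: learn from this go
    return new_reading

agent = LearningAgent()
reading = 12.0
for _ in range(5):
    reading = run_cycle(agent, reading, setpoint=10.0)
    print(f"reading={reading:.2f} gain={agent.gain:.2f}")
```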

So let me make sure I’ve answered your question, because there was the concept of the digital twin there, and there was the concept of the agents on the process control side—just to sort of use this as one singular example. Did that make sense?

Sam: Yeah, that made a lot of sense. I think it cleared it up for me. In this tool example, when you were working on it, what was the most surprising?

Selim: You know, it’s funny. The most surprising part was something that literally went back to my childhood, when I was taught the scientific method as a kid. You know, when you’re a kid, you think that everything comes about because, you know, you’re working with people that are just so smart. They know everything. That’s not the case. You follow a method and you discover things as you follow the method, right? And so, in the process of doing that, there was nothing different about this. This behaved exactly the same way. But what was super exciting about it is, it brought up things that I had not anticipated. And I love that because I felt like I had a partner in crime, almost somebody that was sitting over my shoulder going, “Oh, don’t forget about this,” “By the way, did you realize that these two are also tracking things like that?” And that was the most exciting part. So, on the one hand, it was a method that was certainly familiar to me from, you know, my very early education, but now I felt like I had an extension of multiple people, with assets and skills that I don’t possess, helping me all along the way.

So that was really exciting because it almost felt like a seamless department of other cohorts working with me to figure out things and the insights I was learning were significant because, if I hadn’t gone through that process, I would have missed at least two or three major concepts that went into the ultimate design. That basically made me realize, unless you go through this exercise, you’re still talking PowerPoint, you’re still talking, you know, conceptual. This helps you sort of take it one level deeper to the point where you say, “Well, that could work.” This model may not be everything we want. It may not solve everything, and it may do very poorly if we put it into production. But it certainly helped me walk through the thought process that I really needed to understand in order to start—discussing this and putting pen to paper, if you will, for requirements, for considerations, for chronological approaches to actually building something with it.

Sam: How do you think leaders should think about these different types of agents in a real system?

Selim: I think that people need to take a step back and be prepared to re-evaluate everything about how they thought of the world. And I mean that in the sense of you can approach it in two ways: You can either take the tack which most people are taking of, “I have this task that I do every day. How can AI take over my task?” That’s one way to think of it.

The other way to think of it is at the root of everything that we do, I can now unleash a set of agents that basically address specifics that are the root of where all of this is coming from in the first place. And if I did that and I had them all sort of working in unison, how would that change this task that I have? Maybe the task doesn’t even exist anymore at the end of it. So, it’s a very big difference between somebody saying I have a task that I want to automate versus somebody saying I’ve designed something that makes this task not even exist anymore because the way the system now works, it’s superfluous. It just works this way, and it accommodates it in the design in the first place. So now the things we focus on are entirely different. They are new decision levels, if you will, that we previously only aspired to get to, and now we can get to them, right?

That’s also very abstract, I realize, so let me pause and see if that made any sense or not.

Sam: It did for sure. I guess, looking ahead 12 to 24 months, what’s one capability you think will feel normal that might sound ambitious today?

Selim: I believe that 2027-2028 are going to be very significant years where we’re going to see the AI world on the stage start to really make ripple effects financially across the universe (for lack of a better term). And I think the reason for that is—I think what’s going to change—is the community of people that are going to understand how to use it is going to grow significantly. I think right now I’m watching people learn how to use Copilot and basic use cases and how to, you know, address little sorts of key things.

But I think as we move forward between now and, say, ’26 and ’27 (your time horizon is in there for sure), I think what we’re going to see is, with a community of people changing like that and growing, we’re going to start seeing new applications and new things emerge out of it. And I think when that happens, that by definition will steer the world of new budding students, and adults that have an open mind, and so on, to start thinking about the world in a new way. And I think that you’re going to see people sort of schism on this one. Some people will not accept it and sort of fall behind. Others will embrace it and will be doing remarkable things. But I think that it really does boil down to people learning how to use the craft.

To me, it’s no different than an artist’s paint set, right? Everybody can be given a canvas and a set of paints, but not everyone can paint. And I believe the number of people that will learn to paint is going to grow significantly and, as a result, that’s going to drive a lot more innovation, a lot more use cases, a lot more adoption, and we’ll start to see things emerge where previously we didn’t. We’ll start to see new businesses, new thought processes. The community as a whole will begin to accept them a whole lot more—kind of like the autonomous driving vehicle, right? Fifteen to twenty years ago, nobody had them. Today you can call a taxi and it’s got no driver. So I think, you know, 10 years from now, I think we’re going to see a whole bunch of other things that are like this.

I don’t know of a single industry that is not going to be heavily impacted, and I mean this across the board: law, accounting, factory automation, manufacturing, logistics. There’s nothing immune to this, but it doesn’t mean that people get replaced. I don’t look at it that way. I look at it as what we can achieve is on a new level. So, I think Jensen Huang said something that really stuck with me (and I think Scott brought it up again) and I love this: You’re not going to lose your job to AI. You’re going to lose your job to somebody that knows how to use AI. And I do agree with that. I think what that means is, you know, if you go back to what you asked me earlier, what did I learn, what did I glean from doing this? I learned that I felt like I had AI standing behind me, looking over my shoulder the whole time, helping me. I had a team of developers. If I needed a piece of code written, I would have the AI write it for me, and it did it in a matter of minutes, things like that. I think that is an amplification of the world around us that did not exist before AI, and it was very expensive and very slow before AI. And now, you’re not shipping things offshore. You’re not doing anything. You can do everything you want right here. You can take your office with you no matter where you travel, and you can essentially have access to all these resources anytime you wish.

So I think that’s entirely new.

Sam: I know, it’s such an exciting time. I genuinely go to work every day thinking, “Man, what am I gonna learn today?” Every day it’s a roller coaster.

Selim: I’m enjoying it to no end. And you know what? I have days where at the end of the day, I literally say to myself, “Oh, you broke something new today. That was new ground. You achieved something new today,” and it could be anything. It could be I learned to run something locally native in my office and I previously didn’t know how to do it. Or, I was having technical problems making it work or you know, the results were subpar, and I got my first really good results. This could be anything from graphics to concepts, to ideas, to audio, things of that sort. Anything goes and every time I open up a concept and I look it up, I’ve not run into any instance where the world is saying, “No, the AI world doesn’t do that.” There’s literally no boundary, and I think that that’s amazing. That’s incredible, so I share your enthusiasm. I’m incredibly excited about it.

Sam: All right, let’s end this podcast with what I hope is a popular round that I’m calling the lightning round. I did it in Episode One, where I just ask quick questions and see where you land on them. As I told Rich in Episode One, there’s no wrong answer, and some of them might be a little bit funny, so I’ll see where your answers lie. So the first one, which I think might resonate with you very well based on your background, is: if agentic AI were a book, what’s the title of chapter one?

Selim: Oh, this one. If it were a book, agentic AI reminds me of De Revolutionibus by Copernicus. He knew something no one else knew. And I think that there’s a bunch of people out there that are starting to realize what’s possible with it. Now, that’s a Latin title, so an English title would be “Bet You Didn’t Know.”

Sam: Great. I knew you would do really well on that question. I’m glad I asked it. OK, then second question: What’s the fastest way to get AI into trouble?

Selim: Oh, I don’t think it’s AI getting into trouble. AI seems to say, “I’m perfectly OK getting into trouble. You’re the one in trouble because you’re the one listening to me.” So, I think it’s more that I get into trouble when the AI messes up. I think it behooves me to say, “I know you can do anything. You’re like a super smart toddler. You have no concept of what you should and should not do.”

Now we are getting to a point where they are getting smart enough to know what is not appropriate and what is appropriate and so on, but I think that it’s really more on me working with the AI. In terms of getting into trouble, don’t intentionally throw things at it that it will fumble because if you ask yourself the same question over and over again (to a human being), we will fumble it as well, and so will an AI.

I think those are quintessential ways to get into trouble. Hence the answer earlier about smaller, surgical, highly specialized AIs. That’s to avoid that whole hallucination problem: you know, bombarding it with the same concept and expecting a different result. It starts to search in places where it starts to make things up if you are not giving it any guidance and no rails. And by the way, humans are the same, so that’s how I think you should stay out of trouble.

Sam: That’s a great answer. And then the last one is: When you add agents, do your best practices win or do your worst ones get exposed?

Selim: Yes. Yes is the answer. They both do. Both things happen. So that learning process you go through makes you realize which of your best practices do work, because they persist, and the instances where you thought you were doing something really great and it sounds like somebody threw a cymbal down the stairs, that’s also made apparent; it highlights stuff like that. So, I think that you will learn both. And I think that we all live with a lot of misconceptions.

I pride myself on one thing, and I promised myself I would do this, you know, to my last breath: I hope that I never become one of those people that does something by template over and over again and just sticks to it. I like to be the person that says, “Last time I did this I did X, and today I’m going to try something different because I realized there were some things that didn’t work out really well.” Now, of course, I’m applying this abstractly to the broader sense of life, if you will, but I try not to fall into that cliche of just gravitating to something you get comfortable with.

I think that’s something that’s really exciting about AI, and the best person I’ve ever been in my life is when I’m a little uncomfortable. It pushes me to do things that I previously didn’t know I could do. And I encourage anybody who has trepidation about this stuff: go out there and take a risk, see what happens. AI will help you figure it out. In some cases, you can test a theory out before you build something, and I think AI is giving us a lot of leeway that way. It’s giving us the opportunity to fail in private, if you will, before you go out there and fail in public. I have had many public failures, but I believe that’s what made me a more effective person in life.

Sam: Love that. With that, though, we have to come to an end. So, Selim, thank you so much for joining me today. I think this was more than semi-insightful. It was truly insightful.

And for the audience, if you enjoyed this podcast, be sure to follow this series for more conversations. Thanks for listening. Thanks, Selim.

Selim: Thanks for having me.


About the Author

Samantha Duchscherer, Global Product Manager
Samantha is the Global Product Manager overseeing SmartFactory AI™ Productivity, Simulation AutoSched®, and Simulation AutoMod®. Prior to joining the Applied Materials Automation Product Group, Samantha was Manager of Industry 4.0 at Bosch, where she was previously a Data Scientist. She also has experience as a Research Associate for the Geographic Information Science and Technology Group at Oak Ridge National Laboratory. She holds an M.S. in Mathematics from the University of Tennessee, Knoxville, and a B.S. in Mathematics from the University of North Georgia, Dahlonega.