Samantha Duchscherer, Global Product Manager, and Emrah Zarifoglu, Head of R&D for Cloud and AI/ML, are among the automation experts working to bring AI-powered technologies to SmartFactory's solutions. In the first article of this two-part series, they defined the cloud and discussed the decision-making process companies use when choosing to move to the cloud. As AI transforms industries, understanding its relationship with cloud technology is essential; in this second discussion, they explore how transitioning to the cloud can affect deploying AI models.
Sam: Now I can begin incorporating AI into our discussions, which is really exciting for me. Let’s start by discussing the different phases of AI deployment.
When deploying AI, I would say there are three main stages: data preparation, model development, and model deployment. Thinking of the terms we discussed previously in Part 1, how do they relate to these different phases of AI deployment?
Emrah: Essentially, all phases of AI deployment can be run in Docker containers. A production-level deployment should run on Kubernetes and, ideally, be configured with Helm charts. This is the methodology we follow with our products and teams, as it represents a widely recognized industry best practice. That said, these methods are not the only ones available.
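To make this concrete, a model-serving container deployed this way would typically be described by a Helm chart's values file. The sketch below is purely illustrative; the image name, port, and resource figures are hypothetical, not defaults from any SmartFactory product.

```yaml
# Hypothetical Helm values for a model-serving deployment on Kubernetes.
# All names and numbers here are placeholders for illustration.
image:
  repository: registry.example.com/model-server
  tag: "1.0.0"
replicaCount: 2            # scale out serving pods as inference load grows
service:
  type: ClusterIP
  port: 8080               # port the model server listens on
resources:
  requests:                # guaranteed baseline per pod
    cpu: "500m"
    memory: 1Gi
  limits:                  # hard ceiling per pod
    cpu: "2"
    memory: 4Gi
```

With a chart scaffolded by `helm create model-server`, values like these would be applied with `helm install model-server ./model-server -f values.yaml`, and the replica count or resource limits can be raised later with `helm upgrade` as demand changes.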
When you’re deciding which components or phases of AI to move to the cloud, consider where scalability, flexibility, and heavy resource use are most beneficial. For instance, areas with significant data flow and preparation benefit greatly from the cloud’s scalability, as does the process of repeatedly training and evaluating models. During deployment, the cloud provides the visibility you need to manage the process.
Ultimately, cloud has the most advantages for AI where high levels of scalability and flexibility are needed.
Emrah: For data as well as applications, most companies already operate in a hybrid cloud model, where part of their data and applications are in the cloud and the rest is on-premises. Depending on the technology and location, it’s always a challenge to synchronize the data and bring it together to create a pipeline. It’s easy to suggest moving everything to one cloud, but that’s not always feasible. Realistically, we need to be prepared to operate in a hybrid setting and to address its challenges.
One of the biggest challenges in that environment is securing data and preventing malicious access. In this sense, it’s not the infrastructure but the network that represents the larger vulnerability and area of concern. Even if your network is secure, another challenge is synchronization across your various technologies.
While having everything in one place is ideal, we don’t impose this solution on our customers but rather help them to feel comfortable and prepared to manage whichever environment (cloud or hybrid) they rely on.
Sam: Shifting the conversation from AI phases to AI resources, I assume there is a transition in resources as well. How does moving to the cloud impact the roles of data engineers, data scientists, and industrial engineers for a customer already using AI in production?
Emrah: The level of change depends on how closely a role is connected to the infrastructure. Data engineers are closest to it; the tools they use are directly affected by technology choices around infrastructure and data. They need to be able to operate anywhere, using whatever technology is chosen for them.
Data scientists, on the other hand, may or may not be impacted. Their proximity to the infrastructure depends on the tool they use and how they choose to use it.
Industrial engineers are the end users, the furthest removed from the infrastructure. Changes to the underlying infrastructure of the applications they use are likely to go unnoticed.
About the Authors

Samantha is the Global Product Manager overseeing SmartFactory AI™ Productivity, Simulation AutoSched®, and Simulation AutoMod®. Prior to joining the Applied Materials Automation Product Group, Samantha was Manager of Industry 4.0 at Bosch, where she previously worked as a Data Scientist. She also has experience as a Research Associate for the Geographic Information Science and Technology Group at Oak Ridge National Laboratory. She holds an M.S. in Mathematics from the University of Tennessee, Knoxville, and a B.S. in Mathematics from the University of North Georgia, Dahlonega.

Emrah leads the team delivering AI/ML solutions and the cloud transformation of APG software for semiconductor manufacturers. He is a pioneer in developing SaaS applications, leading cloud transformation efforts, and building optimization and analytics frameworks for cloud computing. He holds patents in semiconductor manufacturing, cloud analytics, and retail science, and has a well-established research record in planning and scheduling for semiconductor manufacturing. His work has been published in IEEE and INFORMS journals and presented at IERC and INFORMS conferences. He earned his Ph.D. in Operations Research and Industrial Engineering from the University of Texas at Austin and holds B.S. and M.S. degrees in Industrial Engineering from Bilkent University, Turkey.