Inductive Automation Blog

Connecting you to ideas, tips, updates and thought-leadership
from Inductive Automation

Getting Ignition Off the Ground Chris Fischer Mon, 06/16/2025 - 13:37

Join us for the next installment of our new series of webinars exclusively for integrators: Ignition Power Hour! The Power Hour is a webinar series covering a range of topics to provide useful Ignition knowledge and insight to the integrator community. Power Hour webinars can include tips and tricks of the trade, educational topics, technology trends, new Ignition features, and more — and are all led by IA engineers and specialists who are experts on the subject at hand.
 
In this next Power Hour, we will discuss Ignition Cloud Edition, and do a deep dive into Pikaview, an Exchange resource that provides a new way to interface with databases in Perspective. We'll also discuss the value of Ignition support plans and upgrade protection.
 
Learn About:
 
- Ignition Cloud Edition
- A Deep Dive into a New Database Interface Resource, Pikaview
- The Benefits of Ignition Support Plans & Upgrade Protection

Snowflake Exhibitor Demo: Unlocking Smart Manufacturing with IT/OT Convergence on the Snowflake AI Data Cloud Rachel Bano Thu, 12/05/2024 - 11:40

Modern manufacturing generates vast amounts of data from diverse sources, creating challenges in data integration and utilization. Traditionally, data silos have hindered the scalability of analytics across manufacturing and supply chains. The Snowflake AI Data Cloud breaks down these barriers by seamlessly converging IT and OT data, accelerating smart manufacturing initiatives. Join us to explore how Snowflake empowers manufacturers to harness the full potential of their data, driving innovation and operational excellence in the era of AI and Industry 4.0.

Transcript:

00:05
Greg Sloyer: Well, thank you for coming. Sort of during lunch, before one of the keynotes, I'd like to thank Inductive Automation for having us present again. This is our second year presenting at the conference. My name is Greg Sloyer. I'm from Snowflake. I am the Manufacturing Industry Principal, so I look at the business side of things from Snowflake. All the usual: do not buy or sell stocks based on what I'm talking about; don't plan your 401(k)s and retirement. I've been doing data and analytics for manufacturing, supply chain, operations, logistics, all that for about 17 years now, not all of which was at Snowflake. Prior to that, I had 20 years in the chemical industry, DuPont, BASF, and I ran global supply chains and logistics and all sorts of things like that in the chemical industry. So, why is Snowflake at the Inductive Automation ICC conference? I will set this up by asking: how many people are familiar with Snowflake today? Okay, so about half. So, Snowflake started out as a data warehouse, data lake kind of thing in the cloud.

01:17
Greg Sloyer: It's been about 12 years now, and in 2014, the big thing here is we operate across AWS, Azure, and GCP, so across all three major clouds. Our big thing, especially in the 2018 timeframe, when you see this disrupt collaboration and this cool-looking thing in the middle, which is maybe a little hard to see, but there's a lot of starbursts and fireworks-looking things, that is data sharing in Snowflake. This is between customers and suppliers, between partners and OEMs, between logistics groups and manufacturers, between what we call our marketplace providers, so data providers in Snowflake, providing things like weather data, commodity pricing, freight rates, logistics, things like that. There's about 2,600 data sets or so in Snowflake that are available. The really cool thing is we do this all without moving data. We're not moving data in Snowflake. It is pointers. We've gotten rid of the ETLs and FTPs and emails, and heaven forbid you put stuff in CSV files and ship them over to a friend of yours. This is all essentially permissions.

02:39
Greg Sloyer: You give permission for somebody to see a table or set of data, or they give you permission to see a table or a set of data or a set of tables. Once that permission is granted, that data shows up in your database like it's one of your tables. So now, to incorporate that data in analytics and reporting, you extend your SQL with a join statement. That's what it comes down to. That was in 2018. We've been exploiting that, and more so we have now been building applications. So you're seeing major applications like Blue Yonder for supply chain and others replatforming to Snowflake. And this has really been the progression, and we continue to add on to this. A lot of AI, Gen AI, ML types of capabilities, and I'm gonna talk about a couple of them today, being brought to the data. So what we didn't want you to do is spend a lot of time bringing all the data (and we'll talk about IT data and OT data today), bringing all that to Snowflake just to then pull it out and have to do something else with it somewhere else.
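The "once permission is granted, it's just a join" idea can be sketched locally. This is a hypothetical illustration only: sqlite3 stands in for Snowflake (where a share appears as a read-only database without any data being copied), and all table and column names are made up.

```python
import sqlite3

# Two "databases": your own orders table, plus a table a partner has
# shared with you. In Snowflake the shared data appears in place without
# being copied; sqlite3 and these hypothetical tables stand in here.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE my_orders (order_id TEXT, sku TEXT, qty INTEGER)")
conn.execute("CREATE TABLE partner_freight (sku TEXT, rate_per_unit REAL)")
conn.executemany("INSERT INTO my_orders VALUES (?, ?, ?)",
                 [("A1", "WIDGET", 100), ("A2", "GADGET", 40)])
conn.executemany("INSERT INTO partner_freight VALUES (?, ?)",
                 [("WIDGET", 0.25), ("GADGET", 1.25)])

# Once the share is granted, incorporating the partner's data really is
# just a join in your existing SQL:
rows = conn.execute("""
    SELECT o.order_id, o.qty * f.rate_per_unit AS freight_cost
    FROM my_orders o
    JOIN partner_freight f ON o.sku = f.sku
    ORDER BY o.order_id
""").fetchall()
print(rows)  # [('A1', 25.0), ('A2', 50.0)]
```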

03:45
Greg Sloyer: The idea is to leave the data there; let's bring all those capabilities to the data so you can operate all within Snowflake. We launched what's called a manufacturing data cloud at Hannover Messe about a year and a half ago, April of last year. And we looked at what was needed in the industry, manufacturing in general, and what were a lot of these opportunities people are struggling with, things like that. Hopefully, a number of these resonate. So one was IT and OT convergence. Okay. This has been a big topic now for a number of years, and Snowflake had been great at bringing in typical ERP data, especially SAP data, into Snowflake. We've been doing that for a number of years. Lots of big customers are doing that today with not just one SAP or ERP system but 10s, 20s, 30s. And all of this is public when I provide a name; Carrier, for example, has 140 ERPs that they consolidate their data from into Snowflake. Where we weren't as strong was on the OT side, bringing in the shop floor data.

05:15
Greg Sloyer: This is where we really pivoted about 18 months, two years ago, working very closely with Inductive Automation, Cirrus Link, and a number of other partners to provide different architectural ways to bring the shop floor data into Snowflake and take advantage of the time series capabilities and a number of those other capabilities we'll talk about in terms of AI, ML, Gen AI, things like that, to the data, which is sort of that third point, which is bringing and really deploying advanced analytics to the data. The middle one is taking advantage of that data sharing. So this is broadening the visibility outside of IT, outside of OT, so the enterprise, and really extending that to that partner network, broadening the view of the supply chain, incorporating that visibility into the decision and analytics process. Really taking advantage of a lot of these different Snowflake capabilities. The difficulties, and I'm sure many of you have experienced this, is that for years, decades, the shop floor manufacturing sites have generally been an island. Different organizations, different functional reporting roles from that systems sort of standpoint.

06:18
Greg Sloyer: OT sometimes reported into the CIO, but generally not. It reported into VP manufacturing. This created a lot of separations from a systems standpoint, made it not technically difficult but more organizationally difficult to sort of integrate and bring that data in, integrate it with sort of the rest of the data. There's some architectural discussions that happen, things like that. So different opportunities. And for those of you who have multiple plants, what I always say is if you have 50 plants, you probably have 48 different MES, MRO, LIMS, or QM, and all those systems because a lot of those plants came up from acquisition. As I said, it's an island. Investments weren't made, or if it wasn't broken, we're not gonna fix it. That kind of thing. So a lot of that is changing, but also the architectural patterns that you have to utilize that data, especially to bring it to the cloud, need to account for the fact that all these plants are different. They may have different protocols, different configurations, all that kind of stuff. So the solutions have to be adaptable.

07:38
Greg Sloyer: And that's really where, in partnering with Inductive Automation, we've helped simplify the environment of bringing that data into the cloud, into Snowflake. So let's first take a very broad view as we look at the supply chain. As I mentioned, we have everything from marketplace providers with commodity data, pricing, availability, geopolitical kinds of things that impact supply, the logistics areas, really bringing that sort of view to the plant, as well as, when you look outside from the customer standpoint, especially if you're building connected products, products out in the field that are generating their own data after they've left manufacturing. How do I incorporate and create that visibility up and through the supply chain, through the ecosystem, to be able to make decisions more holistically, to not just help manufacturing but to help that whole enterprise environment? And then this is the thing we were really putting in place 18 months or so, two years ago, which is that ability to get much more fine-grained, granular data at speed.

09:04
Greg Sloyer: I won't call it real-time. Let's call it near real-time into the cloud. This is not replacing shop floor systems. If you have a safety system, the cloud is not the best spot to put it. That's gonna be edge-driven kinds of stuff. But as we look at how do I take advantage of that data, how do I bring and broaden access to it, how do I look across those 50 plants, how do I run much more advanced mathematics on that data to do root cause, cycle time, predictive quality, all those kinds of things? That's really why we're pulling that data into the cloud and combining it with a number of other components of the data. For example, let's say you are great at doing quality control and things like that, even looking at that from the shop floor, the isolated plant level. What we wanna do is show that vision extending that supply chain to say, okay, those aren't the only variables going into your manufacturing facility. Your supplier quality, that delivery variability, all of those things come into play when you start looking at quality or predictive maintenance, those aspects. And then how do returns, how does warranty quality, for those of you more on the discrete side, how does that impact, and how can I utilize that information from customer service, from field maintenance, things of that nature, to see what those potential root causes were that started in manufacturing and started in supply?

10:37
Greg Sloyer: So again, being able to broaden that view for those organizations that have started moving beyond doing really cool and fancy analytics on their shop floor data, how do I paint that vision of the future? And this is where we really see the extending of that data and incorporating more of the IT types of data into those decision processes. So Snowflake has many, many more partners than this. These were the partners as part of our manufacturing cloud launch. Marketplace partners, there's 2,600, 2,700 different data sets in the marketplace, but there's everything: as I mentioned, financial data, there's ESG data. If you type ESG into the Snowflake marketplace search, you're gonna come up with 40 or 50 different data sets that are available, freight rates for kites and dart, things of that nature. From that perspective, again, to help provide that greater visibility. As I mentioned, Snowflake has really been doubling down in terms of applications and the capabilities there, building those on Snowflake.

12:00
Greg Sloyer: So Blue Yonder and a number of others are replatforming. In a lot of cases, there's ones that were built specifically by companies on top of Snowflake, taking advantage of that power of the cloud and cross-cloud. So as they build something, it's not just in AWS or in Azure; it goes across all three. And then system integrators, the SIs there. And again, many, many more are partners. But these were ones that stood up and said, I have built in Snowflake a supply chain or manufacturing or operations type of solution with a customer. And they're raising their hand and going, "Yep, they did a great job. We did it all in Snowflake." And that's how they were on this list. And this continues to expand as we work through. The main areas I mentioned, so supply chain optimization, smart manufacturing, and connected products, are the three areas where we start utilizing the manufacturing data, the supply chain data, and that sensor data, whether it's coming from the shop floor or from connected devices out in the field, to be able to really provide that visibility, take advantage of the cloud infrastructure.

13:23
Greg Sloyer: And depending on which booths you go to behind you here, you'll see slightly different versions of this. This is my extended version. And sometimes today if I get surprised by a slide, it's because somebody's legal, their legal, our legal, or somebody's marketing department got a hold of them. So I always enjoy this 'cause then the slides are as much a surprise to me as they are to you. So with Ignition, we've had this in place with them a little over a year, I wanna say closer to 18 months, that we've been working with Ignition or Inductive Automation and Cirrus Link. But this is the easy button for getting data into Snowflake. This is zero code. Part of the secret sauce is the IoT Bridge for Snowflake that is available via Cirrus Link. And this drops the data in from Ignition. So not just the tag data, but the metadata all around that, that structured data. So all of that lands in Snowflake. And if you have a chance to see a demo, Arlen Nipper, I don't know if Arlen's in the audience today, Arlen and others have done this many, many times. I'm probably on many calls per month with him with different customers and prospects.

14:55
Greg Sloyer: And within that demo, Snowflake goes from knowing nothing about your shop floor to knowing everything about it that's coming through Ignition. So very, very fast, very, very easy method for getting the data into Snowflake. And these are some of the reasons why we're one of the partners of this versus some of the other ways you can land that data within the cloud: we really looked at this with them from the process engineer's perspective. So the plant is driving the configuration in Snowflake. Snowflake is not defining a structure where you've got to be so many levels deep and it has to have certain kinds of attributes and all that. It is driven from the edge. So the plant defines how you look in Snowflake. Snowflake does not define that. That's one of the keys. And then some of these other nuances, for those of you who get into the much more excruciating detail about data types and things like that and what you can land. But we're landing this all with MQTT. The cool part of the demo for me in terms of the processes within Snowflake is that MQTT is great for transmitting data and for storing data.
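The "not just the tag data, but the metadata all around that" point can be made concrete with a small sketch. This is not the actual IoT Bridge logic; the payload below is only loosely modeled on a Sparkplug-style MQTT message, and every field name in it is an assumption for illustration.

```python
# Hypothetical Sparkplug-style payload: tag values plus their metadata
# (engineering units, data types) arrive together over MQTT.
payload = {
    "timestamp": 1718000000,
    "metrics": [
        {"name": "Line1/Temp",  "value": 72.5, "datatype": "Float",
         "properties": {"engUnit": "degF"}},
        {"name": "Line1/Speed", "value": 120,  "datatype": "Int32",
         "properties": {"engUnit": "rpm"}},
    ],
}

def flatten(payload):
    """Turn one payload into flat (tag, value, unit, timestamp) rows,
    the kind of structured landing the talk describes; the real bridge's
    behavior is richer than this sketch."""
    ts = payload["timestamp"]
    return [
        (m["name"], m["value"], m.get("properties", {}).get("engUnit"), ts)
        for m in payload["metrics"]
    ]

rows = flatten(payload)
```

Because the structure is carried in the message itself, the edge defines what lands; nothing on the warehouse side dictates the hierarchy.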

16:22
Greg Sloyer: Really, really small footprints for both. Allows you to go very quick, allows you to get the data up because of that event-driven change control that they have. Not so great for BI and for analytics. It has lots of nulls. Mathematics tends not to like nulls, and BI tends to not like a lot of zeros with a spike and a lot of zeros with a spike. We use views in Snowflake, so you're not gonna repopulate and have to store all that data. But from the view perspective, it's hydrating those nulls with the previously good value. So now you have an analytic data set that your data scientists tend to like without having to code anything. This is out of the box. It's driven the moment you've set up this connection. And your BI tools like it because it's not that flatline spike, flatline spike. So, great. You got the data in Snowflake. Now what can I do with it? This is where, over the past year or two, Snowflake has been bringing a lot of AI, ML, Gen AI capabilities into Snowflake. We continue to release new stuff. This happens to be one of our, I'll call it, easy buttons.
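The null-hydration idea (filling each null with the previously good value so report-by-exception data becomes an analysis-ready series) can be sketched in a few lines. In Snowflake this would be done in a SQL view with a window function; this pure-Python version is just an illustration of the transformation.

```python
def hydrate(samples):
    """Forward-fill report-by-exception data: replace each None with the
    most recent non-null value, the way the talk describes the views
    hydrating MQTT's sparse change records for BI and analytics."""
    last = None
    out = []
    for ts, value in samples:
        if value is not None:
            last = value
        out.append((ts, last))
    return out

# MQTT only publishes on change, so most rows are null between events:
raw = [(0, 21.5), (1, None), (2, None), (3, 22.0), (4, None)]
print(hydrate(raw))
# [(0, 21.5), (1, 21.5), (2, 21.5), (3, 22.0), (4, 22.0)]
```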

18:01
Greg Sloyer: 'Cause different levels, different organizations have different capabilities around data science, around analytics use, things like that. And for the data scientists in the room, Python, Java, Scala, all that can be done in Snowflake. It's not just a SQL house. So you can be writing all the cool data science stuff. I don't think I have it on here, but we're in the booth right across the hall. For those of you into optimization, you can be running mathematical optimization in Snowflake through the Python libraries. It's really cool. Back in the '90s, I watched optimization fail. And I'll talk a little bit on why I think it failed. But the ways we're getting there, and the capabilities that now are being brought to this data, are really, to me, driving a lot of really cool stuff happening in manufacturing. But these two lines of code here with anomaly detection, this is using an ML function. I think of this as the trend function in Excel. So for all of you who are familiar with Excel and use the trend function, all you had to know to generate a forecast or see the trend of data was the trend function and what those two, three, four parameters were that I had to put in it.

19:21
Greg Sloyer: You did not have to know that the mathematics behind it was least squares at the time. You didn't have to care. You could just write a trend function. This anomaly detection, and there's about a dozen of these what we call Cortex ML functions that are available, is something similar. You have to know the parameters, just like I had to know with the trend function. But now I can run an ML-based, I think it's gradient boost, anomaly detection on the data as it lands. I don't have to be a data scientist to apply that mathematics to the data. So there are functions like forecasting, and there's a couple of others that are out there that are more manufacturing-based. Like I said, there's about a dozen overall. But the ones I tend to see for manufacturing and supply chain are anomaly detection, forecasting, and some contribution factors, things like that, that really get exciting, 'cause you're applying ML techniques without having to be a data scientist. So simplifying this approach.
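The Excel-trend-function analogy can be made concrete. The sketch below is not Snowflake's Cortex ML anomaly detection (which, per the talk, uses a real ML model such as gradient boosting); it is a toy detector in the same spirit: call one function with a couple of parameters, get anomaly flags back, with the mathematics (here, a least-squares trend plus a residual threshold) hidden inside.

```python
def detect_anomalies(ys, threshold=3.0):
    """Toy anomaly detector in the spirit of Excel's TREND function:
    fit a least-squares line, then flag points whose residual exceeds
    `threshold` standard deviations. The caller only supplies the data
    and one parameter, not the math."""
    n = len(ys)
    xs = range(n)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) \
            / sum((x - mean_x) ** 2 for x in xs)
    intercept = mean_y - slope * mean_x
    residuals = [y - (slope * x + intercept) for x, y in zip(xs, ys)]
    sd = (sum(r * r for r in residuals) / n) ** 0.5
    return [abs(r) > threshold * sd for r in residuals]

readings = [10.0, 10.2, 10.1, 10.3, 18.0, 10.4, 10.5]  # one spike
flags = detect_anomalies(readings, threshold=2.0)
print(flags)  # only the spike at index 4 is flagged
```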

20:22
Greg Sloyer: The screens here that I'm showing are actually built in what we call Streamlit. This is a Python-based graphics package that is in Snowflake. So the data does not have to go out. It will not compete with a Tableau or a Power BI. We can also connect to those. So if you wanna do really cool and fancy dashboards, super. We operate with those. But for folks, data scientists especially, who wanna just show quick, easy visualizations of the data, of the results of their very cool mathematics, this is available for them as well. So why do I think optimization failed? And why do I get nervous about being able to use advanced analytics broadly across your organization? I'll point to this and go, there's different reasons. But the biggest one, and why I think in the '90s optimization failed, for example, and what I don't wanna see with things like Gen AI and AI and ML, which are really cool tools, is that we had organizations wanting to go from here to here without doing the groundwork in between. There was too much change management.

21:33
Greg Sloyer: There was not enough data governance or data quality in those processes. Optimization is great if you've got really good data quality, especially pricing, timing, things like that. The mathematical models rely on that. That is no different for Gen AI, AI, and ML. The square root of a bad number is still a bad number. It doesn't get better because I threw cooler mathematics at it. So this is where we're working with the partners, working with you folks. It's not that we're gonna say, "No, don't ever do this." What I'm saying, as a warning, is keep in mind that those structures need to be in place, where there's governance, and this is really where the IT and the OT coming back together help, through this process, really create the environments where AI and ML are gonna be a lot more successful. Make sense so far? All right.

22:45
Greg Sloyer: So data foundation is necessary. We build these out. We work with the customers and the partners to deploy these things. Like I said, we've been great at IT data, really excited about all the partnerships we have to bring in the OT data, take advantage of our time series, geospatial capabilities, things of that nature. So you can do all sorts of cool math with that. And then extending those with the partner data or sharing that data with your partners, customers, suppliers, logistics, for example. So what's that mean? So from the Unified Namespace, this is what we are continuing to develop: bringing IT, OT, connected products, getting all that within Snowflake, improving that visibility, and allowing you then to run greater AI, ML models, Gen AI, at the data, not, again, separating it out. So that you can take advantage of not just the ingestion of that kind of data, but what do I do with it after I've got it somewhere? So with that, any questions before I send you across the hall to the 1:00 o'clock keynote?

24:15
Audience Member 1: For a lot of us, the issue is not just the data. It's also the application. So you saw, basically, a lot of the applications there. What's the result look like if you wanna get the data out of Snowflake and give it to an individual in order for someone on the shop floor to be able to use it? Does it have to live alongside each other? And should we not think about it like it's a replacement for a data broker? It's just something that lets you do higher-level data?

24:37
Greg Sloyer: Generally, the question is, is there a path to go from Snowflake, let's say, back to Ignition as well? There are organizations that have gone down that route. I would say that the Ignition group, Inductive Automation, are the best ones to talk to. There's always the security and protocols and things like that that you have to work through on that. Technically, I do not believe it's an issue. But generally, it's been a one-way path up into Snowflake, because then you're looking, like I said, if you have 50 sites, you may have 50 Ignition brokers or whatever, and they're coming up into Snowflake. So you're looking more holistically at that data. I've not seen SAP data go down to Ignition or anything like that. That's usually staying up within Snowflake. Sure.

25:25
Audience Member 1: Oh, somebody else. So at the beginning of the presentation, you talked about how it's kind of a big permission space, rather than storage space. But then later on.

25:32
Greg Sloyer: For the, for data sharing.

25:41
Audience Member 1: Okay, 'cause when we saw the architecture diagram, if you define it in the namespace for Cirrus Link, it moves up. Where is the storage part in that situation?

25:50
Greg Sloyer: So it's in Snowflake. The data is coming into Snowflake. It's stored there. You have chosen, as a customer organization, AWS or Azure or both, let's say, for different reasons. And Snowflake sits on top of that. So physically, they can talk to you about where it makes most sense. But generally, it's in Snowflake. One last question real quick. Yes.

26:11
Audience Member 2: No. Yeah, that was it.

26:14
Greg Sloyer: No. Oh, okay. All right. Super. So I've already been shown the hook kind of thing 'cause they want you to get across the hall for the 1:00 o'clock. But thank you. Appreciate your time. And we are across the hall for any more detailed questions.

Sepasoft Exhibitor Demo: Sepasoft’s Workflow Solution: Building Bobbles With Batch Rachel Bano Thu, 12/05/2024 - 11:06

Sepasoft’s workflow solution can map out and execute the production process for almost anything – including made-to-order bobbleheads! Our demo will showcase how simple it is to manage production workflows, collect real-time data, and utilize document management with 3D models and form entry. We’ll also highlight how to authenticate and verify every action during production for compliance and accountability using Electronic Batch Records (EBR) and electronic signatures. Join us to see the latest Batch Procedure technology in action.

Transcript:

00:00
Tony Nevshemal: Hey everybody. Welcome and thank you for coming to our session today. I'm really excited to be here at ICC. It's actually my first ICC. Today, my colleague Doug and I are gonna be presenting "Sepasoft's Workflow Solution: Building Bobbles with Batch." We're gonna be building these really cool bobbleheads today using Sepasoft's Batch [Procedure] Module. And within Sepasoft, there's often been some controversy about how we named our module "Batch," because some people think it's a misnomer, that it only applies to batch manufacturing. However, it truly is a workflow solution. It'll handle any workflow that's incorporated or associated with your manufacturing, and we intend to show you some of that today.

00:55
Tony Nevshemal: My name is Tony Nevshemal. I'm the CEO of Sepasoft, and I'm also the new guy, having joined just recently. Many of you know Tom; Tom Hechtman was the prior CEO of Sepasoft, and he has transitioned to the CTO role, where he's in charge of the product roadmap, product innovation, and thought leadership. Prior to joining Sepasoft, I was actually CEO of a manufacturing ERP company. And prior to that, I was an operations director at a large manufacturer. I'm very happy today to come down the Purdue pyramid to level three, where all the cool kids are, and one of them is Doug. So Doug, introduce yourself.

01:38
Doug Brandl: Yeah, thank you. My name is Doug Brandl. I'm an MES Solutions Engineer with Sepasoft. My background is, I've got 10 years of experience in pharma as an automation engineer and consultant, and then application development before then. But I grew up around the MES space, I grew up around the standards. My father was really involved in them, and our dinner table conversations with me and my brothers and my family often involved talking about operations, responses, and all the different object models. It was a bit nerdy, a bit geeky, push the glasses right up your face. But I've got an ingrained, internalized understanding of the space and I've been with Sepasoft for a little over a year and thank you to everybody who went to our session last year, and thank you for coming to this one today.

02:36
Tony Nevshemal: Well, before I joined, I endeavored to take all the training classes at Sepasoft for all of our modules. But one of the training classes I have not taken yet is our Batch [Procedure] Module. So Doug is in the unenviable position of walking me through our Batch [Procedure] Module, the unit procedures, changing up a recipe, and you guys get to see it all in real time today. A quick word about Sepasoft before we proceed. Sepasoft is of course an Inductive [Automation] Solutions Partner. We have the broadest and deepest MES solution on the platform. We have batch processing and production workflows; we'll be showing some of that today. We have genealogy and WIP inventory with our Track & Trace Module. ERP connectivity: we can hook up to pretty much any ERP, and we have a direct connector with SAP.

03:31
Tony Nevshemal: We're well known for our production efficiency and scheduling with our OEE and Downtime modules, and quality tracking is handled with SPC. We have a bunch of ancillary modules such as Settings and Changeover, Document Management, Barcode, those types of things. And you can control it all at the enterprise level with our multi-site management, not multi-sync. I'm very happy to tell you that this week we're announcing another bullet point added to this list, and that's SepaIQ. So please come to our session on Thursday. SepaIQ is really an exciting breakthrough that we've made, that Tom's made, and it relates to manufacturing, machine learning, AI, data contextualization, all of those topics. So please come to our session on Thursday to learn more about that.

04:21
Tony Nevshemal: And finally, a quick word about a change we've made regarding our Quick Start program at Sepasoft. Our Quick Start program is effectively access to our design consultation engineers. We've opened up that access to be universal to any and all Sepasoft customers. So to the extent that you need expertise with your MES project, whether that's architecture, design, implementation, or rollout, consider us part of the team, because when you succeed, we succeed. So I think that's enough of that. Let's get into the presentation.

04:55
Doug Brandl: Yeah. To give everybody some context on what we're doing, we are receiving orders from our ERP system for made-to-order bobbleheads. And we're going to run through to assembly, and we're going to try and highlight, and I challenge you to think of it this way, the procedural control and workflow of what it takes to go from order to execution of making these bobbleheads. And Tony will have to put them together for us. We're gonna leverage our Batch Procedure tool, we're gonna use our Track & Trace Module. We'll, hopefully, if we have time, be able to see some of the genealogy of lot consumption, and you'll see a handful of the components that we use to do all this, and our recipe editor.

05:43
Tony Nevshemal: Yep.

05:45
Doug Brandl: Alright.

05:45
Tony Nevshemal: Alright.

05:46
Doug Brandl: So first things first, you guys are gonna have to excuse me, I've got to turn around to do this. We're gonna refresh our orders off of our ERP system, and I like this bobblehead for the Sepasoft company logo, that's awfully convenient that one's right at the beginning. So we're gonna go ahead and start a batch, and as you can see, we've got our batch ID, proceed to the review page before we can assemble. So what we've got here is, this is just a standard Perspective page, we've got our document viewer, which is an HTML5 WYSIWYG. You can do a lot of things in it, a lot of really cool things. In this case, we're embedding a WebGL model, this we do with the help of the Web Dev Module. And over here on the right side, we've embedded some form entry fields and all of this gets tracked to the batch, this gets tracked to the electronic batch record, the EBR, and I'll show you what all of that looks like here in a minute. But I guess probably before we go, I should give you a quick overview of the recipe so that we can...

07:00
Tony Nevshemal: Yeah. Is there a way to graphically view that?

07:01
Doug Brandl: Yeah. I put a little slide out here. Right over here is a visual representation, and this is also very similar to... Sorry. This is our recipe that we're gonna be executing and we here have "Review Station" which in this case is gonna be my computer where I'm going to do some 3D model review. We're going to do some authentication challenges. This links into the identity provider provided by Inductive [Automation].

07:29
Doug Brandl: And we'll challenge for some electronic signatures. We've got some logic we can apply there where you can require double signatures, and you can set up which roles are needed to gate certain steps. And then after our review, if we're happy with our model, we go through the assembly, so I have an equipment phase here. If you're not familiar with the standards, think of a phase as like a step. In this case, this equipment phase is a simulated PLC where I'm going to send to our printer, our beautiful Amazon 3D printer here, the 3D models that we're going to print. We're going to e-sign to make sure it didn't turn to spaghetti, and then we're going to measure, record the values to our SPC module, and then assemble our little 3D bobblehead. Alright, so Tony.

08:26
Tony Nevshemal: Yes.

08:27
Doug Brandl: Well, I guess this is all me, I'm the reviewer. As far as... This looks appropriate to me. I'm not really seeing any mesh errors.

08:36
Tony Nevshemal: And all components, all three are present.

08:38
Doug Brandl: Yes, all of this is present. So I'm gonna go ahead and click through these and I'm gonna say this is all good, and I'm going to... You can't see it in the bottom right because it's covered by my shadow, but down here, we've got our button to finish this document. Now, when I do this, I'm gonna slide this back out. You can see where you've been and where you're going with our batch monitor. And when I click on this and expand it, I can see all of the relevant metrics that we're capturing as part of this step. I can see, right up here, I can see the model is appropriate. So this is really good for auditing and figuring out what really happened during the execution of a batch. Slide this guy back out, and I can see I've got an e-signature required to complete the review step.

09:28
Doug Brandl: I will go ahead as a reviewer and do this challenge, so here I am, Doug, and my password. Alright, I accepted that. I could also reject it, which in our batch, in the recipe that you saw, has branches; you can get pretty complex in the conditions that you put in there to do really whatever it is that you need. Next up, I guess we go to our assemble stage. Here, this is just a simple Perspective page that I put up, tied to our fake little PLC. You can see I say that the state is running. Our PLC is saying that it is running, but in reality, it is waiting for some filament. So Tony, if you don't mind, could you scan some...

10:21
Tony Nevshemal: Sure. Beep.

10:24
Doug Brandl: Perfect. Alright, there we go. Okay, now we're off to the races. So, while this is running, I'm just capturing a handful of metrics, we're looking at filament consumed, layers printed, extruder speed, etc.

10:35
Tony Nevshemal: How did you build these screens?

10:37
Doug Brandl: Yeah, this is just standard Perspective. All of these are tag-driven, so when you install our modules, you get an MES tag provider. As you configure the batch module, for each step that executes on a particular unit, you can expose all of those values as tags. So all of these are just tags; it's very simple, plain old Ignition Perspective. And then, again, while this executes, I didn't pull it up fast enough, but we are tracking, you see Base_Out at the top, we see filament. These are material transfers, so this is actually piggybacking our Track and Trace Module. It allows us to consume material and track lot usage, and we'll see that hopefully at the end with our trace graph. And then you also get a file name, you get the extruder speed; all of that gets tracked live, and you can store those values as they change, or store the last value, and you can see all of this in your EBR at the end after execution.
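To picture the tag-driven screens described here: the batch module exposes step values as tags, and Perspective bindings simply read them. The tag paths below are invented for illustration (real MES tag-provider paths depend on your project layout), and a dict stands in for Ignition's `system.tag.readBlocking` so the sketch runs anywhere:

```python
# Sketch of how Perspective bindings consume values that the batch
# module exposes as tags. Paths are hypothetical examples, not actual
# Sepasoft MES tag-provider paths. In Ignition you would call
# system.tag.readBlocking(paths); here a dict stands in for the
# tag provider so the idea is runnable outside a gateway.

mes_tags = {
    "[MES]Line1/Assemble/Filament_Consumed": 42.5,   # grams
    "[MES]Line1/Assemble/Layers_Printed": 118,
    "[MES]Line1/Assemble/Extruder_Speed": 50.0,      # layers/sec
}

def read_tags(paths):
    """Stand-in for system.tag.readBlocking(paths)."""
    return [mes_tags[p] for p in paths]

values = read_tags([
    "[MES]Line1/Assemble/Filament_Consumed",
    "[MES]Line1/Assemble/Layers_Printed",
])
print(values)  # [42.5, 118]
```

In a live project, each screen component would bind directly to one of these tag paths, so the display updates as the step executes.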

11:56
Tony Nevshemal: And for those that don't know, what's an EBR?

11:58
Doug Brandl: Electronic batch record. Alright, so we'll go over to our measure. I forgot I have an e-signature here. Alright.

12:07
Tony Nevshemal: Well, it looks like they printed.

12:09
Doug Brandl: Okay, they didn't turn to spaghetti.

12:11
Tony Nevshemal: No.

12:11
Doug Brandl: Alright.

12:12
Tony Nevshemal: We got the parts.

12:13
Doug Brandl: So I'll go ahead and sign off. Or would you like to sign off?

12:16
Tony Nevshemal: Sure.

12:17
Doug Brandl: Yeah. And again, this is any identity provider in Ignition that you set up, so you don't need to do anything crazy, it's just part of the platform. Alright. Now we're good, hopefully. Well, I hit the login button. Now we're good to go to our measure. Alright, so we've got some annotations now here on our 3D model. Tony, I need you to take some measurements here.

13:00
Tony Nevshemal: Okay.

13:02
Doug Brandl: So let's look at the head first.

13:04
Tony Nevshemal: Which one?

13:06
Doug Brandl: And I want you to get the diameter of that section on the 3D model.

13:15
Tony Nevshemal: So that is 6.12.

13:17
Doug Brandl: Alright, and then let's go to the base. If I can put that. There we go. Now we're gonna grab that right there, the diameter.

13:32
Tony Nevshemal: Alright, 6.16.

13:37
Doug Brandl: And then finally, let's go for the spring diameter.

13:43
Tony Nevshemal: 6.02.

13:47
Doug Brandl: Perfect. So I'll go ahead and complete this step. Now, I don't know if you guys noticed, but part of our process, we measure, we record the values to SPC, which it popped up while I was looking away, but we record the values to SPC and then we go to assembly. But we may run into a problem in the future, so I think there's an opportunity for us to modify this recipe and for Tony to dabble in the batch recipe editor, so we are good there. Now it's just assemble.

14:19
Tony Nevshemal: Alright.

14:19
Doug Brandl: If you don't mind.

14:22
Tony Nevshemal: So how do I assemble?

14:23
Doug Brandl: No, that's...

14:24
Tony Nevshemal: Okay. So you take...

14:25
Doug Brandl: Yeah. Take the spring, put it in the hole. Now, obviously, use your imagination; in your own projects, this could be significantly more complex. You don't have to use a 3D model like we are here, you could use documents. We can retrieve these out of controlled document management systems. The world is your oyster when it comes to this. Alright, cool. It is assembled. I'm gonna go ahead and complete the step. Alright, so we've completed our assembly, and now we're gonna send the label to the printer, and that's that. But we did notice that there are some opportunities. So Tony, if you don't mind, I'd like for you to go ahead and go into the recipe editor and modify the recipe, and let's see if we can account for times where... Let's go with: the spring is not gonna fit in the hole. We're not gonna be able to assemble this. So we've got our happy path, we've got our green path through this workflow, but we don't have a red path; we're not handling exceptions appropriately, so this is a great opportunity to show you how easy it is. So Tony, can you open up the assembly unit procedure on the bottom left?

15:39
Tony Nevshemal: Sure.

15:41
Doug Brandl: And scroll on down, and after the "Record Values" and the "Record Transition," we're going to insert a branch into this workflow, so you can delete that line right there. And then I want you on our logic controls here in the editor to drag on "Or Begin." What this is gonna let us do is this is gonna let us say, "When this condition is met, you go down this path. When a different condition is met, you go down another path," etc., etc. And you can change these. So connect that, and then we're going to put in those conditions.

16:16
Tony Nevshemal: Okay.

16:16
Doug Brandl: So if you could drag two transitions in, the transition is where you're going to be able to put in that expression, and we'll have one for our green path and one for our red path. Or happy and sad path. And go ahead and connect those guys. Perfect. And then let's edit. You can connect them to the next one as well.

16:41
Tony Nevshemal: Sure.

16:42
Doug Brandl: And then let's go ahead and edit that transition. Let's give it a name.

16:47
Tony Nevshemal: So this is good measurements, right?

16:49
Doug Brandl: Yes. And then this transition expression, so this transition expression, what we can do is we can look up through the recipe, through what's been executed, and we can pull out some of those metrics. So we had our operator record on that document, we had them record the diameters of the spring and of the head and the base, so what we're gonna do is we can grab those values and apply some rudimentary logic. So Tony, we called it "measure," is the name of that step, of that phase. "Measure" and then you're gonna say ".diameter" and let's go. So in this case, our good one is when the spring is smaller than the head and the spring is smaller than the base.

17:35
Tony Nevshemal: Right, so when the spring...

17:37
Doug Brandl: And, nope, we don't need to...

17:44
Tony Nevshemal: Oh yeah. Just less than...

17:46
Doug Brandl: Yeah, maybe too tight.

17:47
Tony Nevshemal: "Measure.Diameter_Spring" is... "Measure.Diameter_Base" right?

18:19
Doug Brandl: Yes.

18:20
Tony Nevshemal: Okay.

18:21
Doug Brandl: Go ahead and save that. And then let's do the same for... Let's do the inverse, the logical inverse of that for this red path, so let's just call this "rejects."

18:31
Tony Nevshemal: Reject.

18:31
Doug Brandl: Reject measurement. And then our transition expression is going to be when the spring is greater than or equal to the base, or the spring is greater than or equal to the head.

19:00
Tony Nevshemal: Spring, is greater than or equal to. What did I do first?

19:11
Doug Brandl: You did the head first.

19:12
Tony Nevshemal: Alright, so this is base. Okay.

19:13
Doug Brandl: Perfect. Save. And then what do we... What do you think we should do?
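The two transition expressions just entered boil down to comparisons over the recorded diameters. A plain-Python sketch of the same logic (Sepasoft's actual transition-expression syntax differs, and the function names here are illustrative):

```python
# The green/red transition logic from the demo, in plain Python.
# The parameters mirror the Measure.Diameter_* values recorded on
# the review document; names are illustrative only.

def good_measurements(spring, head, base):
    # Green path: the spring fits inside both the head and the base.
    return spring < head and spring < base

def reject_measurements(spring, head, base):
    # Red path: the logical inverse of the green path.
    return spring >= head or spring >= base

# First run: 6.02 spring vs. 6.12 head and 6.16 base -> assembles.
print(good_measurements(6.02, 6.12, 6.16))   # True
# Second run: 6.02 spring vs. 6.2 head and 5.9 base -> rejected.
print(reject_measurements(6.02, 6.2, 5.9))   # True
```

Writing the reject branch as the strict inverse of the accept branch, as done in the demo, guarantees exactly one of the two transitions fires for any set of measurements.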

19:19
Tony Nevshemal: Well, let's say... So if it fails its measurements, that means you're not able to assemble. So we should probably tell the assemblers.

19:27
Doug Brandl: Yeah, probably don't wanna waste their time.

19:28
Tony Nevshemal: Right.

19:28
Doug Brandl: Yeah. So let's throw in a user message. So we have some built in... You have like a whole standard library of phases that you can drop in. And in this case I've configured it so that our assembly station can have a user message. So if you can just click that, drag it over into that unit procedure and connect it. And let's go ahead and configure it.

20:00
Tony Nevshemal: So we'll call this "notify"?

20:02
Doug Brandl: Yeah, like "notify operator" or something.

20:04
Tony Nevshemal: Yeah. Okay.

20:14
Doug Brandl: And then let's just give them a message down at the bottom where it says "parameter value."

20:21
Tony Nevshemal: Yeah. What do we wanna say here?

20:24
Doug Brandl: Let's just say "assembly not possible."

20:25
Tony Nevshemal: Okay.

20:26
Doug Brandl: We'll keep it simple. In your own projects, I'm sure that you'd probably wanna put more in there. And then go ahead and save that.

20:33
Tony Nevshemal: Yep.

20:33
Doug Brandl: So I'm not covering it. But you can also do calculations where you can pull in values. So a lot of our phases have that. Yeah, let's go ahead and require acknowledgement on it.

20:43
Tony Nevshemal: Yeah.

20:44
Doug Brandl: There's a lot of ability to make it dynamic so it's not all static. It's not like you're always gonna say the same thing. Sometimes you want to include values from previous steps or maybe include batch parameters as part of the message or part of any other phase. So we do have also the ability to include that as part of like a calculation. But we're not doing that here. So let's go ahead and hit save.

21:05
Tony Nevshemal: Alright.

21:08
Doug Brandl: And then we're gonna put a transition on this. So every phase needs to have a transition after it's done. And in this case, we're just gonna say "complete." Once the notification has been sent and this phase is... The execution of it is complete, we'll continue on and we'll terminate the batch. So you can go ahead and insert suggested here. And what this does is it's gonna look at the link up and just say whenever that step is complete. And this is good. We'll go ahead and save it, and then put on a terminator in the logic controls on the...

21:39
Tony Nevshemal: Let's try it without a terminator.

21:41
Doug Brandl: We can't do it.

21:42
Tony Nevshemal: Can we validate it?

21:42
Doug Brandl: Yeah, you wanna validate it? So if you don't do this, we do have some validation of our recipes where it'll look at it and it'll tell you what's wrong. And in this case, it's saying the assembly unit procedure, UP5 transition needs to be followed by something.

22:00
Tony Nevshemal: Okay, cool.

22:00
Doug Brandl: Let's go ahead and drag the terminator on and connect it. And then let's validate. Again, make sure that that resolved that issue. Recipe is valid. Cool beans. Let's save it.
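The validation step can be pictured as a structural walk over the chart: every phase must be followed by a transition, and anything with no successor must be a terminator. A toy sketch of that kind of check, using an invented chart model rather than Sepasoft's real one:

```python
# Toy version of the recipe validation shown in the demo. The chart
# model (a dict of named nodes with "type" and "next") is invented
# for illustration; Sepasoft's internal model differs.

def validate(chart):
    """Return a list of validation errors for a recipe chart."""
    errors = []
    for name, node in chart.items():
        if node["type"] == "phase":
            # Every phase must be followed by a transition.
            for nxt in node["next"]:
                if chart[nxt]["type"] != "transition":
                    errors.append("%s must be followed by a transition" % name)
        # Every dead end must be a terminator.
        if not node["next"] and node["type"] != "terminator":
            errors.append("%s needs to be followed by something" % name)
    return errors

chart = {
    "NotifyOperator": {"type": "phase", "next": ["Complete"]},
    "Complete": {"type": "transition", "next": []},  # no terminator yet
}
print(validate(chart))  # ['Complete needs to be followed by something']

# Dragging on the terminator, as in the demo, resolves the issue.
chart["Complete"]["next"] = ["End"]
chart["End"] = {"type": "terminator", "next": []}
print(validate(chart))  # []
```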

22:16
Tony Nevshemal: Alright.

22:21
Doug Brandl: Alright.

22:22
Tony Nevshemal: Right. Let's run it again.

22:23
Doug Brandl: Yeah, so we'll fly through this for the second time so that we can get to questions, since we've got four minutes to go. So, alright. This is gonna be the world's fastest 3D printer here. I'm gonna go ahead and kill all of these old orders. These are on the old recipes. We do version our recipes, so these are using version 61 of that recipe. We're going to reset this, and I'm gonna go retrieve some more orders from our ERP system, and that'll be version 62. So refresh orders right here. Alright. These are the same steps. I'm gonna go fast for the sake of brevity.

23:04
Tony Nevshemal: Let's quickly review them.

23:05
Doug Brandl: Yep. Oh, this looks great. We've seen this one before. Check, check, check. Check. E-sign. I'll go in as an admin. Password.

23:21
Tony Nevshemal: Cool.

23:22
Doug Brandl: Cool, cool, cool. Close those.

23:24
Tony Nevshemal: It's printing.

23:25
Doug Brandl: Yeah, let's go over to our print. Beep boop, scan the lot. We're printing. We are printing at 50 layers a second.

23:36
Tony Nevshemal: Yeah. It's screaming.

23:37
Doug Brandl: This is a fast printer. I can tell who has a 3D printer in here and knows how frustratingly slow that they are. Alright. We're gonna have an e-signature.

23:50
Tony Nevshemal: Okay.

23:51
Doug Brandl: Verify it didn't turn to spaghetti. So I'm gonna go ahead and sign that one as well. Tony, it didn't turn to spaghetti, did it?

24:00
Tony Nevshemal: It did not. We have something.

24:03
Doug Brandl: Alright. So now we're on our measure step. So this is after this step is where we added our transition. So let's go ahead and measure the head outer diameter.

24:16
Tony Nevshemal: Okay, that is 6.2.

24:20
Doug Brandl: 6.2. Let's measure the base.

24:25
Tony Nevshemal: That is 5.9.

24:26
Doug Brandl: Whoa. Now let's do the spring.

24:33
Tony Nevshemal: That is 6.02.

24:35
Doug Brandl: 6.02. Alright. So clearly we are gonna violate our recipe. So when I do that, let's go ahead and take a look and see what happened. So right here, I expand this. Sorry, let me make this a little bit bigger here. I just like watching him walk back and forth with the shadow. So here you can see this transition. So we proceeded down this route here and you can look at this transition and you can see what specifically caused us to go down whatever path it was. And in this case, it was our spring is greater than or equal to our base. Our base was too tiny or our spring is too big. And then we have our notification. So that notification's up on the top right here. And we did require acknowledgement. So I'm gonna go ahead and sign in as an admin. Password.

25:35
Doug Brandl: And here we have our... Just a standard batch message list. This is, again, one of our components where I can click on it. Assembly not possible. I'm gonna acknowledge that. And again, all of this is tracked to the EBR. There's an awful lot that I wanna show you guys as it relates to our EBR, as it relates to our trace graph. I'll hit the trace graph really fast and then I think we're gonna have to go move on to Q&A. And if you want more you can come over to our booth and I would be happy to show this to you. Alright. So here I'm looking at all of the different types of filament, all the different batches. So here what I'll do is I'll slide that over. So right here I can see we have a completed bobblehead. This right here is the assembly unit procedure for that particular batch.

26:24
Doug Brandl: Looks like it was one that I had done on the fourth, I guess. I could see which filament I consumed. I can get the lot number for the base. So I create... As part of this step, I'm also creating that lot. I can see everything in and I can see all of the material that is created as part of it. And then if I click here, I can see all of the five other batches that use this same material. So this is really useful if you're looking, if you're doing any investigations for quality, for recall, any of that stuff. So this is a really good way to visualize, what did I use? I received green filament and I have it on this particular assembly, this batch right here. So I know all of the bobbleheads that came out that used that specific green filament. And this trace, there's not a realistic limit on this. So it does run back. You can chain all of your material transfers back and forth. I think that's all I've got time to show. Does anybody have any questions? I think it's the Q&A time. Yeah, go for it. Oh, she's going to give you a mic. Yeah.
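The trace graph described here behaves like a query over recorded material transfers: pick an input lot, and find every batch that consumed it. A minimal sketch of that genealogy idea, with invented lot and batch names and a flat record structure standing in for the Track and Trace Module's data:

```python
# Sketch of lot genealogy behind the trace graph. Each transfer
# records which lot a batch consumed and which lot it produced.
# Data and field names are illustrative only.

transfers = [
    {"batch": "BH-0601", "consumed": "GreenFilament-L7", "produced": "Bobblehead-0601"},
    {"batch": "BH-0602", "consumed": "GreenFilament-L7", "produced": "Bobblehead-0602"},
    {"batch": "BH-0603", "consumed": "RedFilament-L2",  "produced": "Bobblehead-0603"},
]

def batches_using(lot):
    """All batches that consumed the given input lot (recall scope)."""
    return sorted(t["batch"] for t in transfers if t["consumed"] == lot)

# Which bobbleheads used that specific green filament?
print(batches_using("GreenFilament-L7"))  # ['BH-0601', 'BH-0602']
```

Chaining the same lookup on each produced lot, in either direction, gives the unbounded forward/backward trace mentioned in the talk.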

27:39
Audience Member 1: The object model that you have, the recipe, like how accessible is that? Let's say that I've got basically something that's dynamically generating parts from like a pick-and-place machine, right? And I'm not gonna have all that data until it hits the end of the line as a transaction. Can I write all of that at once? Can I then query essentially every transaction I've had for these measurements and get something like capability? Or am I gonna need to layer in other modules like traceability and SPC to do that kind of stuff?

28:07
Doug Brandl: So if you're doing anything with material tracking, you're gonna need the Track and Trace Module. So material transfers as part of the batch. So you could do all the built-in phases, but when it comes to material in and material out and tracking any of that, and suppose you've got 100 different types of dynamic materials, you can set those for the material in property on the phase. So if you want, I can show that to you probably over at our booth. I can show you what that looks like. But yes, you can do that. But it does require the Track and Trace Module.

28:41
Audience Member 1: Okay.

28:42
Doug Brandl: Yeah.

28:43
Audience Member 2: Hi. Is there an array-based entry? I see the graphical method to put all these routes in, essentially, but is there an array-based or some other way that you could do it in bulk, without all the clicking and dragging?

28:57
Doug Brandl: Yeah, you can script this too. You can script the creation of recipes, of batches. You could pull it, some people even pull it out of their ERP system and dynamically create recipes. So all of this is backed. So we have this frontend here, we have these components. If you don't want to click and drag and you've got some more complicated system, you can script the creation of all of these recipes. And the execution. Yeah.
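As a rough sketch of what scripted bulk creation could look like: `build_recipe` below is a hypothetical helper, not a Sepasoft API call, that assembles a recipe structure as plain data from rows pulled out of an ERP system. In a real project, the equivalent Sepasoft scripting functions would create the actual recipe objects.

```python
# Hypothetical sketch of bulk recipe creation without the
# drag-and-drop editor. build_recipe and the element structure are
# invented for illustration; Sepasoft's scripting API differs.

def build_recipe(name, steps):
    """Alternate each phase with a transition, then terminate."""
    elements = []
    for step in steps:
        elements.append({"type": "phase", "name": step})
        elements.append({"type": "transition", "name": step + "_done"})
    elements.append({"type": "terminator", "name": "end"})
    return {"recipe": name, "elements": elements}

# Rows that might have been pulled from an ERP system.
erp_rows = ["Review", "Assemble", "Measure"]
recipe = build_recipe("Bobblehead_v62", erp_rows)
print(len(recipe["elements"]))  # 7: 3 phases + 3 transitions + 1 terminator
```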

29:27
Audience Member 3: Does the system have functionality to do order maintenance, to modify existing batches mid-run to reflect the new recipe?

29:36
Doug Brandl: At the moment, I don't believe we do. Yeah. I'll let Tom answer that.

29:41
Tom Hechtman: To start a recipe, that's an ISA-88 model. So you have your master recipe and you create a control recipe. So once that... Sorry. Once you create that control recipe and you're executing it, it's isolated from the master recipe at that point. Now, if you modify phases or templates, we have templates and different things like that, you do have ways to push those changes down into your recipe and such.

30:13
Audience Member 4: And you can create something... Are there already existing scripts to help facilitate that that you need to customize for your use case?

30:20
Doug Brandl: Yes. So I definitely encourage you to reach out for the Quick Start program, reach out to our design consultation team. They've got a lot of experience doing that.

30:29
Audience Member 4: Awesome, thank you.

30:31
Doug Brandl: Yeah. Any more questions you guys have, please come visit us over at our booth and I really, really, really encourage you come on Thursday to Tom and Mark's presentation. It is very exciting what they're doing. So show up if you can. Alright, thank you guys.

30:47
Tony Nevshemal: Thank you.

Wistia ID
ags36basrq
Hero
Thumbnail
Video Duration
1853
ICC Year
2024