Inductive Automation Blog
Connecting you to ideas, tips, updates and thought-leadership
from Inductive Automation
Join us for the next installment of our new series of webinars exclusively for integrators: Ignition Power Hour! The Power Hour is a webinar series covering a range of topics to provide useful Ignition knowledge and insight to the integrator community. Power Hour webinars can include tips and tricks of the trade, educational topics, technology trends, new Ignition features, and more — and are all led by IA engineers and specialists who are experts on the subject at hand.
In this next Power Hour, we will discuss Ignition Cloud Edition, and do a deep dive into Pikaview, an Exchange resource that provides a new way to interface with databases in Perspective. We'll also discuss the value of Ignition support plans and upgrade protection.
Learn About:
- Ignition Cloud Edition
- A Deep Dive into a New Database Interface Resource, Pikaview
- The Benefits of Ignition Support Plans & Upgrade Protection


Kanoa MES is a modern Smart Manufacturing solution designed in and for Ignition. Learn about the Kanoa MES Modules, Kanoa MES Database, and Kanoa APP Ignition project you'll use to get started with Kanoa MES. Check out a live demo of Kanoa Ops and Kanoa Quality to see how you can configure your MES in days and get insights into your manufacturing data with ease.
Transcript:
00:01
Jason: I'd like to start by thanking you all for coming today to hear what it is we're doing at Kanoa, and thank Inductive for creating Ignition, for creating this incredible platform that has allowed us all to do the amazing things that we're doing today. In 2018, we formed Kanoa to help companies implement Ignition-based MES solutions with a bent on project management, Lean Six Sigma solutions, and change management to help them drive continuous improvement. We'd seen way too many projects fail, and not because of software, but because of people's failure to transition. And it seemed that most companies were so focused on the digital transformation part and the implementation of software that they really hadn't spent any time on the people side of making sure that these projects were successful. So for us, selling MES software and solutions is a really poor business model if companies do not get value out of the solutions that we're implementing. So we really do focus on meeting companies where they are. If it's your first time implementing an MES solution, we're going to work with you. Once you've got a proven track record, you've rolled this one out, you've done your pilots, you've got production lines, and people are actually deriving value from it, then knock yourselves out; you can carry on, and you can use us as much or as little as you want.
01:33
Jason: MES applications are not trivial, and there's a fair amount of customization that has to take place. What we found over the years was really the difficulty in keeping up with the constant pace of a release train. I mean, every few months or every five weeks, they keep changing, they keep adding to it, and we have gone through so many refactors; we all have. We started on 7.5, then 7.9 to 8.1 was a refactor from Vision to Perspective, a huge one. Then we started changing expression tags to reference tags to take advantage of MQTT. And none of this is a bad thing. It's a constant evolution. But we have constantly had to keep reinventing ourselves to remain relevant. And because of that customization and the constant change, we found that some customers ended up potentially throwing away their solution and starting again with every big change. So in 2020, we decided to take a fresh look at what an MES solution or platform should be, based on what we've learned over the years, keeping the good and replacing the bad. So for a while now, we've been touting MES for the masses. This is not a communist manifesto; it's more of a guiding principle that really drives the products that we develop.
03:03
Jason: You've got amazing technology; it should be affordable. And because of this, we follow the same licensing model as Ignition. It works for them; nobody balks at the cost of Ignition. And the Ignition platform has been so flexible, in the sense that you could throw everything on a single server, every single one of your sites, your enterprises, your assets, run it up in the cloud, and you could have Edge devices pushing data up, but you could also distribute it. At the end of the day, the architecture that you're going to use is going to be driven by the constraints and requirements of the applications you're building. In that vein, we said, let's follow what Ignition does. If you truly want to have an MES cloud server, we think that's a great idea. Everything it has to connect to, the ERP systems, is up in the cloud. Why not have your connectivity up there and use MQTT and Edge devices to push data up? It should be accessible. And that's a fairly easy thing to do because we are building modules exclusively for Ignition, and their licensing of unlimited users and unlimited tags has been a game changer since 2010 when I first started using it.
04:17
Jason: If you're going to drive continuous improvement, you want everybody inside your corporation, your company, to have access to the information that's going to allow them to drive continuous change. It should be intuitive. Moving to Perspective, we absolutely love this because we can really make the user interface intuitive. And quite frankly, if you look at Amazon or Google, any of those companies, we use them every day; nobody has ever read a user manual to be able to buy something on there. We feel the same. Yes, there are aspects of MES that might be a little bit more specific, but if you use the same interfaces that people are using on their phones, if you give it to them on the same devices, whether it's a computer, a tablet, or a phone, then we can make it intuitive. And if it's intuitive, people will use it. And it has to have value. Value is in turning data into information. So when we built Kanoa MES, we started from the ground up. We started with data. Data is the most essential part of it. So we built a third-normal-form database schema that stores the data, and it's open, and it's accessible.
05:34
Jason: So we build our APIs, our system functions, to interact with that database. They give you the analysis, and it's lightning fast, with the smallest footprint, and the database has data integrity and constraints. But you can also call those same stored procedures if you want to share the data with Power BI, or Tableau, or an ERP system, or SSRS reports; it really doesn't matter. When you build from the data, you get value.
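As an illustration of that openness, here is a minimal sketch of an external Python client calling such a stored procedure directly, the same way a Power BI or SSRS report might. The connection string, procedure name, and parameters are hypothetical stand-ins, not Kanoa's published API:

```python
# Minimal sketch: an external client calling the same kind of stored
# procedure a BI tool would use. The driver string, procedure name, and
# parameters are hypothetical, not Kanoa's actual schema or API.
import pyodbc

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=mes-db;DATABASE=KanoaMES;Trusted_Connection=yes;"
)
cursor = conn.cursor()

# Hypothetical procedure: OEE summary for one asset over a date range.
cursor.execute(
    "EXEC dbo.GetAssetOEE ?, ?, ?",
    ("Bake Line 2", "2024-09-01", "2024-09-30"),
)
for row in cursor.fetchall():
    print(row)
conn.close()
```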
06:01
Jason: And then finally, it has to remain relevant. Keeping up with the Ignition release train is like trying to board a train that's got no doors. You're never going to do it, and that may sound like a bad thing, but consider what Ignition gives us with this release train. They keep us relevant. They keep us on top of the newest technologies. They ensure that security matters are handled. They've given us MQTT, they've given us Kafka. I still don't know what Kafka is, but they've given it to us. And that's what Ignition does. So in this journey, we've got to keep abreast of the train. So whatever solution you're building here, it's got to be relevant, and it's got to be upgradable. So we, in our design, have ensured that our modules and our implementation have the lowest coupling with Ignition, because they're going to make changes anyway. We want you guys to be able to update with impunity and not fear that you're going to be held back by using our solution. Now, having said that, give us a chance to check under the hood of 8.3 before you upgrade. But with that, that's enough words. I'm going to hand it over to Sam. He'll show you around. Thank you.
07:12
Sam: Yeah, thanks, Jason. So, really, all of those design principles that Jason was talking about have culminated in the Kanoa MES platform that we have built: this configurable, flexible, intuitive MES software that is really meant to empower teams and drive continuous improvement. Because we are not doing MES just because it's fun, even though it is for some of us; we are doing it to really improve processes and make plants run better. So when you get Kanoa MES, there are three components that you get every single time to make sure that you are starting from a strong foundation. You get the Kanoa MES database, that third-normal-form database schema that Jason was just talking about. That is where all of that core MES data is stored, as well as all of your configurations. You get our Kanoa MES modules that plug into Ignition and give you almost 400 new system functions to go and call the data that you need from that database. And very importantly, you get the Kanoa App Ignition project. This project is designed to give you a starting point with all of the configuration, analysis, and daily operation tools that you need to get started with an MES from day one, and to continue to expand, customize, and tweak that application using the power of Ignition to make sure it can fit your application.
08:30
Sam: There are three modules that we sell over at Kanoa, actually, I guess two that we sell, but three that we make. Kanoa Core comes with any other module that you get because that has a lot of those core functionalities that you're just going to need for any smart manufacturing system. Theming, languages, security, all of that is in our core level and is shared across all other modules in Kanoa. But really, the two things that we're here to look at today are Kanoa Ops and Kanoa Quality. Kanoa Ops is going to be your system for OEE, work order management, asset management, scheduling, and shifts, and all the analysis that comes along with that. And then Kanoa Quality is a pretty unique offering in that this is a form design and dispatching tool that also gives you the tools that you need to analyze the data that you got from those quality forms. All again designed within the Ignition application. So, I am going to try to do the fastest demo I have ever done in 15 minutes and try to give you all enough time for questions at the end. But I do plan to do a webinar within the next two weeks after ICC to do a more thorough one-hour demo.
09:32
Sam: So if you like what you see here, definitely keep track of our LinkedIn page and our website to get more information on that. But without any further ado, here is our Kanoa Ops system. So as I mentioned, we do have two modules, Kanoa Ops and Kanoa Quality. You can get them together, or you can get them totally separately. I'm going to start with Ops and then do Quality second. So let's go through the day in the life of a production operator and the way that you could be using our Kanoa Ops platform. We'll start with looking at our work orders, scheduling some work, running that work in production on a line, and then getting some of the data afterwards. Then we'll actually peek into the configuration as well. So if I'm going to go ahead and manage my work orders, I need some interface for actually downloading all of those production orders. They can be downloaded from an ERP, or they could be made right here in Kanoa MES. You are just picking the work order name, the material that you need to run, and how much of it you need to produce.
10:31
Sam: Once you have all that, you need to actually schedule that work on a line. So we have our operations schedule here, where you can see we're taking advantage of the BIJC calendar component that we include with any Kanoa purchase. And this lets us do all sorts of things, like create non-production events with certain recurring rules. It's a really fantastic tool to help manage all of these schedules. We have our normal production schedule here, but I can also do things like pop open our work order list and drag and drop a new work order into our timeline. The system's going to go see what material you're running, see the appropriate rate that it runs at on that line, and schedule it for the proper amount of time, which I'm then going to delete before it tries to run two work orders at once. The other thing that we have in here is our shift scheduling. Our shift scheduling is really cool. What it gives you the ability to do is define shifts at any level in the hierarchy, and an asset will look for its closest parent with a shift. So if your whole plant runs on a complex four-shift rotation pattern, except for the packaging area that runs on a different shift schedule, you can manage that very easily within Kanoa.
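That "closest parent with a shift" rule is easy to picture as a walk up the asset hierarchy. A minimal sketch in plain Python, with a made-up hierarchy and shift table rather than Kanoa's actual data model:

```python
# Sketch of the "closest parent with a shift" lookup: walk up the asset
# path until a level with a shift schedule is found. The hierarchy and
# schedules here are illustrative only.
shift_schedules = {
    "Plant": "4-shift rotation",
    "Plant/Packaging": "2-shift schedule",
}
parents = {
    "Plant/Packaging/Pack Line 1": "Plant/Packaging",
    "Plant/Packaging": "Plant",
    "Plant/Bakery/Bake Line 2": "Plant/Bakery",
    "Plant/Bakery": "Plant",
}

def find_shift(asset):
    """Return the shift schedule of the asset's closest ancestor (or itself)."""
    node = asset
    while node is not None:
        if node in shift_schedules:
            return shift_schedules[node]
        node = parents.get(node)
    return None

print(find_shift("Plant/Packaging/Pack Line 1"))  # 2-shift schedule
print(find_shift("Plant/Bakery/Bake Line 2"))     # 4-shift rotation
```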
11:38
Sam: So we have our work orders, we've scheduled that work on a line, we have all of our shift data, and we're going to track our data within the context of those shifts. Now it's time to actually open up one of these lines and get some work done. So from here, you can see our main enterprise overview page. You'll notice a couple of things here. We're following an ISA-95-style hierarchy, with our enterprises at the top, a number of sites with areas, and then OEE-enabled assets underneath them. We like to say we're ISA-95 inspired but not restricted. So if you wanted to have, say, a business unit layer and organize all of your sites into business units between your enterprise and your site, go for it. We totally enable all of that. We do want you to have a site and an enterprise, but besides that, we're really flexible. So I can click into my production area here and get a summary of how all of my production lines in this area are currently running. We can see we've got a little bit of an issue over here on Pac Line 1, and our other lines are running to various degrees.
12:35
Sam: I can go ahead and click into Bake Line 2 here and get to what we call our asset operations screen. The idea of this screen is that, for the operator who is responsible for this piece of equipment, everything they need to run it is right here within this interface. I can see my current production modes and states. I can go into my run control and manually override my mode to say we need to go into a changeover.
13:00
Sam: I could manually select another work order or another product that I need to run from here. I can also go ahead and check things like the schedule right here from this interface. And then one of the very common things is, of course, to go and check on all of my downtimes. So I'm going to go and say, what were all of my downtime codes over the last seven days? And then from an interface like this, I can always double-click into one. I can recode things, I can add comments, I can add, delete, or change downtimes that we have recorded. Again, we like to collect all this data automatically and perfectly whenever we can, but there are plenty of times you need to do some manual work afterward too.
13:38
Sam: One other report that I'll show really quickly is our run review. This is really critical in letting you see all of the production events that have gone through a certain asset. So what I'm pulling up here is: we can see I've done three production runs on this line. It's breaking them up by shift, and I'm getting certain metrics like their total runtime minutes and their OEE downtime minutes, all here from this screen. We also have some more complex analyses. I'll pull up our downtime report as one example, taking advantage of some of the Apex charts here. Thanks again, Travis and the Ignition team, for helping prepare all that. We can see all of our downtime by category, by state, and by reason, broken out, and see how it distributes by shift. I can do a stacked bar chart of my total downtime by reason, by day, and down here at the bottom, I can put it all into a table with a handy little export-to-Excel button. 'Cause I can make you the greatest dashboard in the world, and what's the first thing that you're going to ask me? How do I download it to Excel?
14:44
Sam: I'll take it. So again, in the fastest demo ever, I also want to quickly show you some of the configuration behind this, because one of the coolest things about Kanoa is, again, everything I'm showing you here you just get in that starter project that we are going to give you, including all of the configuration tools that you need to get a significant amount of the way into your MES implementation. So you can see over here, I have my asset hierarchy. I can drill into a site and an area. I can click into a specific line and see I have OEE enabled. Once something's OEE-enabled, we drop a UDT into the Ignition Designer, and that's where you're wiring up your points. Another interesting thing to note is that everything I've shown you here runs off of three tags per piece of equipment: give me your infeed count, your outfeed count, and your state. Everything else is configured over here in the Kanoa App. And granted, I know it can get more complicated than that. There are a lot of ways that you can make it more complicated than that.
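To make the three-tags-per-asset idea concrete, here is a sketch using Ignition's scripting API to stand up those points. The tag provider path and names are illustrative, not the actual Kanoa UDT definition:

```python
# Ignition Designer script sketch: stand up the three points for one
# asset as memory tags. In a real deployment these would be OPC tags
# inside Kanoa's UDT; the paths and names here are illustrative only.
tags = [
    {"name": "InfeedCount",  "tagType": "AtomicTag", "valueSource": "memory", "dataType": "Int4"},
    {"name": "OutfeedCount", "tagType": "AtomicTag", "valueSource": "memory", "dataType": "Int4"},
    {"name": "State",        "tagType": "AtomicTag", "valueSource": "memory", "dataType": "Int4"},
]

# "o" = overwrite if tags already exist at this path.
system.tag.configure("[default]Site/Bakery/BakeLine2", tags, "o")
```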
15:41
Sam: But you can get all of this with just three pieces of data per piece of equipment. We have things like modes and states, where I'm designating all of the modes that are appropriate for this asset, and our state list, where I am associating specific states with an asset and giving each a PLC code. That's how we're tracking all of your downtime. But it's really great that all of this is right here, configurable in the app with handy, intuitive tools. I can come in here, and we can drag and drop this mix line into Jacksonville Juices if you want. It'll let us do all of that on the fly. So drag and drop assets, rename things; all of your data goes along with you, and it all happens live. So that is a very quick preview of Kanoa Ops. Let's totally switch gears here and talk a little bit more about Kanoa Quality. So Kanoa Quality is all about paper on glass, right? You're running around with a bunch of check sheets today. You need to move that into a digital system, not only to get that paper off the shop floor and make that data more real-time, but also because, as we move these systems onto digital platforms, we can establish more accountability.
16:46
Sam: We have this sense of a state for each of your check sheets. We're tracking the state of these as they go through. So check sheets can become overdue or missed, and we can flag operations and management teams when the sheets aren't getting done the way they need to get done. And that starts with our main overview schedule. Here you can see I have one approved test in my queue. I have four missed tests. Let's go and just do one of those missed tests, a little bit late. I'll double-click into this. I can even make it a little bit bigger because, again, we're just using Perspective for all of this. An important point I'll mention is that all of this is built in Perspective, and none of this is using custom components. We are just using regular Perspective components that we are providing to you in that open starter project. So we're going to take advantage of Ignition's inheritance features. You're going to make new projects that inherit our project, where you can then override screens and make your own screens, all with our examples that you can build from. So I'm going to come in here, and we're going to do a couple of checks to make sure that we can switch over this packaging.
17:49
Sam: Our area is clear of debris. Our machine is shut off. I'm going to take out my rye bread packaging, and it's going to weigh 566 pounds. I'm going to put in our next wheat bread packaging. Notice this control limit up here as I put in something that's 625 pounds, and that gets flagged as orange in our little progress bar and in our control limits. I do a final checklist to say yes, my tooling is out of there, and yes, my machine is turned back on. I do a final check to make sure that all of this data is the way that I want it, and I go ahead and submit. So that was a very manual test. It doesn't always need to be that way. We can get data automatically from PLCs. We can run quality checks that don't have any manual data at all, where it's more like an event-based historian. The advantage of doing that is that we get all of that data into Kanoa Quality, and then we can run our analysis on it. So I'm going to come into something like our fermentation temperature check, where I believe every 20 minutes this goes and collects three points out of our simulator and spits them back out here into this report.
18:52
Sam: Notice how quickly that just happened, right? Let's actually ask for all of the data for this month, September: grab those three data points collected every 20 minutes, go get the data. It's done. That's the power of this database that we have in the background storing all this information. I can click into one of these zone temperatures, and I can chart it. This is where all of our SPC comes in. I can pick our Nelson rules. I can apply all of those. I can see my rule-two violations, my rule threes. I can put it all in a histogram too.
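Nelson rules are standard SPC tests, so they can be sketched independently of any product. A plain-Python illustration of two of them, under their usual definitions (rule 1: a point beyond three sigma; rule 2: nine consecutive points on one side of the mean):

```python
# Plain-Python sketch of two standard Nelson SPC rules, independent of
# Kanoa: rule 1 flags a point beyond 3 sigma; rule 2 flags nine
# consecutive points on the same side of the mean.
from statistics import mean, stdev

def nelson_rule_1(data):
    m, s = mean(data), stdev(data)
    return [i for i, x in enumerate(data) if abs(x - m) > 3 * s]

def nelson_rule_2(data, run=9):
    m = mean(data)
    hits, streak, side = [], 0, 0
    for i, x in enumerate(data):
        cur = 1 if x > m else -1
        streak = streak + 1 if cur == side else 1
        side = cur
        if streak >= run:
            hits.append(i)
    return hits

# Illustrative zone-temperature readings with one excursion.
temps = [71.2, 70.8, 71.0, 74.9, 71.1, 71.3, 71.2, 71.4, 71.2, 71.3,
         71.5, 71.2, 71.6, 71.4]
print(nelson_rule_1(temps))  # indexes of any 3-sigma excursions
print(nelson_rule_2(temps))  # indexes ending a 9-point one-sided run
```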
19:19
Sam: Now, like Ops, one of the most powerful things about this is that you don't need to go into the Ignition Designer to do almost anything that I've shown you here. The only thing that you would need to do is make certain tags available to the quality system so that you can tie them in and get automated data. But the rest of this form design is done here in the app. If I come into our Kanoa Quality configuration and look at our check sheets, I can take a look at that packaging changeover that we were doing earlier.
19:48
Sam: We can see things like whether it's enabled, whether it requires a sign-off, and whether it's only appropriate for certain assets in my hierarchy, and I can go into the checks themselves. And here is my machine-shutoff check, where you can see it's a string, where I can add in specific instructions for my operators, and where we can create a pick list of what shows up for them to enter. The whole idea here being that your quality managers are the ones making these forms, and they're not necessarily the people that you want in your Ignition projects every day; they need a different interface to go in, add more instructions, and tweak checks as things change. And that is why we give them this interface here. In addition to that SPC data and the configuration here, we did also talk about the efficacy of the checks as well. So I can also do my check summary, and by check sheet I can see how many are getting missed and how many are getting approved. I could put this on a shift heat map to see if there are certain shifts that are not doing the tests they need to on time, again driving that continuous improvement and really trying to drive accountability around a lot of this data.
20:53
Sam: So I did it. That was a very quick demo. The one other thing that I will show really quickly, 'cause I actually even have a little bit of extra time, is some of those Kanoa Core functions that you get within every application, which I didn't really get to talk about. There are three main things that you really get. One is over here: we do have multi-language support. We are just using the embedded Ignition translation engine that you have in there. So we do have a couple of languages out of the box, though I've heard our Korean is terrible. We also have all of our themes in here. Jason would not let me do this presentation in grape, despite how badly I wanted to. These are also totally configurable, so you are totally welcome to go ahead and brand this for what you need for your specific company. And I will shift this back to blue before I go and show you the other main thing that you get out of the Core module, which is our security. So we're still using Ignition for all of your authentication, but we do add an extra layer of security here in Kanoa, just because the roles and permissions that you need in MES are a little bit different.
21:56
Sam: But we're doing it using things that you're all used to. We have our individual users that you put into groups. You give certain permissions to people in those groups, and you can do all of this by asset too. So I could be a manager for the packaging area, but just an operator somewhere else if I want to. So there are a lot of other exciting things that we have built or are building in the Kanoa Ops and Quality platforms. We do have a mobile solution for Kanoa Quality, if you want to run all of those checks on your phone with a slightly different interface. We have a new dashboard editor, and we are making new widgets to give people the capability to design their own MES dashboards. And we are also introducing lot tracking as a free upgrade in Kanoa Ops very, very soon; we just need to upgrade some of the UX for it. The bones of it are all there and working, but it's really exciting that we can now have lot tracking and traceability within our OEE solution, so that all of our counts are going to match up and all of those production orders and the tracking are synced with a single source of truth.
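The asset-scoped permission idea mentioned above (manager for packaging, operator everywhere else) can be pictured as a most-specific-prefix lookup. A sketch with an invented data model, not Kanoa's actual security tables:

```python
# Sketch of asset-scoped permissions: a user's role can vary by asset
# path, with "*" as the fallback default. Purely illustrative data model.
user_roles = {
    "sam": {"Plant/Packaging": "manager", "*": "operator"},
}

def role_for(user, asset_path):
    roles = user_roles.get(user, {})
    # Prefer the most specific asset prefix the user has a role for.
    best = "*"
    for prefix in roles:
        if prefix != "*" and asset_path.startswith(prefix) and len(prefix) > len(best):
            best = prefix
    return roles.get(best)

print(role_for("sam", "Plant/Packaging/Pack Line 1"))  # manager
print(role_for("sam", "Plant/Bakery/Bake Line 2"))     # operator
```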
23:02
Sam: So, again, that was a very fast demo. Keep an eye on our website and our LinkedIn if you wanna get more information on a webinar coming up soon. We do have a booth upstairs, but now is a great time for questions if anybody has anything they wanna ask us.
23:17
Audience Member 1: Can you talk about ERP integration?
23:20
Sam: The question is... sorry, I'm gonna repeat it just 'cause I know there are some mics going around and we're on the live stream. The question is, can we talk more about ERP integration? So, yes, we do a lot of ERP integration into these systems, very frequently. Two of the most common points would be downloading all of those work orders that you have from an ERP into your MES. We can download them into the work order table and then have you schedule them manually, or we can fully schedule all of that work as well. The other one would be around material, something I didn't go through in the demo today, where we can download all of the materials that you run on your lines and then associate specific materials to specific assets with the rates that they are expected to run at. Jason, you wanna talk more about that?
24:01
Jason: Yeah, just to add, in terms of the interfaces, we can use all the tools that Ignition provides. So we can use the WebDev module for web services, you can use the Sepasoft one, or if you wanna use the SAP business connector, it's really entirely up to you. Generally, we will do a RESTful API and then just have the ERP system pushing production orders down. If they push down a production order with an item that doesn't exist in our system, we will create it. If they then wanna put in information about a start and end date for an item or an asset, we will create the association that this item can run on this asset, and we'll give it default information. Every ERP integration that we've done is different. There are different business rules, so you've got to have that flexibility. But certainly, yes, web services are favored. We've done flat files; hate doing flat files. Done middleware tables as well; not really happy with those either. It's always funny when you have these digital transformation projects: they talk about everything they're going to do, and then they're saying, yes, you can open this flat file and get the data out of it.
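As a rough picture of the RESTful pattern Jason describes, here is a sketch of an Ignition WebDev endpoint receiving a production order. The table and column names are hypothetical stand-ins; in practice you would call the Kanoa work-order system functions rather than raw SQL:

```python
# Ignition WebDev resource sketch (Jython): the ERP POSTs a production
# order as JSON. Table and column names are hypothetical stand-ins for
# the Kanoa MES schema.
def doPost(request, session):
    order = request["data"]  # WebDev parses an application/json body

    # Create the material on the fly if the ERP sends an unknown item.
    exists = system.db.runScalarPrepQuery(
        "SELECT COUNT(*) FROM materials WHERE item_code = ?", [order["item"]]
    )
    if not exists:
        system.db.runPrepUpdate(
            "INSERT INTO materials (item_code) VALUES (?)", [order["item"]]
        )

    # Record the work order itself.
    system.db.runPrepUpdate(
        "INSERT INTO work_orders (name, item_code, quantity) VALUES (?, ?, ?)",
        [order["orderNumber"], order["item"], order["quantity"]],
    )
    return {"json": {"status": "accepted"}}
```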
25:00
Sam: Great question. Any other questions? Yeah, right there.
25:06
Audience Member 2: So does the Kanoa Quality Module provide mechanisms to have a, say, PDF or image or something like that that is a helping guide in addition to the instructions and text?
25:16
Jason: Yes.
25:20
Sam: Yeah, sure.
25:20
Jason: Yeah. And again, with everything that we've shown you here, we made a conscious choice: these are just Ignition Perspective components. We've seen too many times where you'll get a really complex component which doesn't allow for customization. So you can look at our views in here. If you're going to start a production order or take a quality check, but you wanna have the operator do an additional step, you can go in, you can add it, and you can see how we're doing it in the background. Ignition's got the PDF viewer; it's got the iFrame. So again, every company you go into is saying, sure, it'd be great to give them work instructions. Where do you store them? Is it in SharePoint? Is it on a network drive? How do you want to do it? We also add support for images. So particularly on the phone, we've now got it where people can take a quality check and it says, take a picture of a weld. From there we can use the phone, and it will capture it. We store it as a blob in a database, or we can push it out. All of that is the customization.
26:13
Jason: What we're giving you here is not going to be a 100% solution. It never is, but it gets you 80% of the way there. It's a fully functioning application. It's on you guys now to extend it as you see fit. And as you're doing that, if you find that there's stuff that you want, that you see you need, you can talk to us. Absolutely. If it's out of left field, we'll say, that's all on you. But if we look at it and say, that's actually really good for the product, that makes sense, absolutely. The more we can get into the product, the better it is for us and for you. Because ultimately, what we're focusing on here is building a product that we can support for the long term. We have documentation, we have training, and we're going to make sure it's a supported product so that you guys don't have to support it yourselves.
26:58
Sam: Yeah, great question. I see one in the back over there. Yeah.
27:02
Audience Member 3: Is there like an API library for scheduling something like automatic work order stops and starts, or doing like, basically, you know, automated sample collection on the machine?
27:11
Jason: Yes.
27:11
Sam: Yeah, so the question was about the API hooks that you have and how you can build your own things with the API. Jason, yes.
27:18
Jason: Yes. As I said, we've got 380 functions there, so absolutely, you can build your own solutions with them. Everything that we do through here is going to be calling one of those system functions. They can be called from a tag. You could stick one on the end of a web service call if you wanna do it from another system. However you want to get at that data, it's in there. But yeah, everything's through an API.
27:37
Sam: Yeah. For example, for that downtime report that we do, there is a system.kanoa.events.getDowntimeEvents for this asset, with this start date and this end date. And yes, there are a lot of other variables you can put in there. But yeah, we're giving you 400 system functions like that to put data in and retrieve data from that database.
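As a sketch of what such a call might look like in an Ignition script; the exact function name and signature here are reconstructed from the talk and may differ from the shipping module:

```python
# Ignition script sketch: pull the last seven days of downtime for one
# asset via a Kanoa system function. Function name and signature are
# illustrative; consult the Kanoa module docs for the real API.
from java.util import Calendar

cal = Calendar.getInstance()
end = cal.getTime()
cal.add(Calendar.DAY_OF_MONTH, -7)
start = cal.getTime()

events = system.kanoa.events.getDowntimeEvents("Site/Bakery/BakeLine2", start, end)
for event in events:
    print(event)
```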
27:53
Jason: Let's have Dan. And we were promised a mic runner. Where's our mic runner? All right. Okay.
28:00
Audience Member 4: Hey, Jason, is this available as a trial?
28:03
Jason: Yes. Yeah, I mean, it's just modules, so it works exactly the same as Ignition: you can do a trial license. We actually think we do one better. One of the things here is that there's work and effort involved in getting the modules and getting everything up and running. We'll just give you a container. So we've got a bunch of Linux cloud containers; we can run eight Docker containers all at the same time. So we'll actually get it to where it's configured and set up for you. You can get in there and design it, play with it, and try it out. And if you do want an extended period, we can give you an extended trial license.
28:38
Sam: Yep.
28:39
Audience Member 4: Please.
28:41
Sam: But definitely, for the integrators and the people that just wanna try this stuff out, those containers are a really fast way to get onboarded. You meet with us, I show you some of the basics and setup, we go through a basic configuration, and then you usually have it for two weeks to show it to your teams and start to play around and see if it's going to be the right thing for you. So, if you are interested in something like that, again, you can reach out to us: booth, website, however you want to. And we're happy to schedule some time to get you connected with one of those. Any other questions?
29:13
Audience Member 5: I saw that the quality module looked really good. We do a lot of that with our shop orders. So would I need both modules to essentially execute an order that collects a lot of data?
29:24
Jason: Yeah, actually, I can take this one. We built them separately because there are a lot of people who already have their own MES solution and just want a quality one. One of the things that we can do, since everything is through our APIs... Ethan, I didn't recognize you there. Nice to see you, man. So you can create a view table of assets. You could do it for work orders. You could pump that data in if you didn't wanna use both at the same time. If you're going to use Quality in here but wanna configure assets and stuff, yeah, we can absolutely figure that out.
29:54
Sam: Yeah, in the front.
29:55
Audience Member 6: So one of the things you guys started out with was providing a full solution. Not that this isn't. But to go from a great piece of software to return on investment. How are you guys tackling that?
30:09
Sam: Yeah. So I think this is our last question; I've just been flagged. But it's actually a really great one, because, as you said, that was a big part of our philosophy: we're not doing this for fun, as much fun as we find it, oddly. We wanna drive continuous improvement. The software only gets you so far. A lot of it is then around adoption and change management and actually intentionally doing continuous improvement. So a lot of what we're trying to provide in this software is something that is intuitive, that with minimal training people can go in and actually be using, which we know is a huge adoption hurdle for a lot of these systems. That's why we really wanted to embrace things like language support, which I also think is a hugely important hurdle that we wanna be able to cross. But then really a lot of it is also, whether it be through Kanoa or the teams managing the projects or a trusted integrator or consultant, working with that end user to talk about their continuous improvement goals, how they're going to achieve them, and having an intentional plan to do so.
31:08
Jason: Yeah, and to add to that, still on the same last question: it's the nature of the beast with MES. Every implementation is going to have different challenges. You can go into a company where they've really got their stuff together and they don't need any of that; they've got it figured out. But you've got other companies where the connections to PLCs and the manual lines are a real part of the data collection, and that's going to be a challenge. When we go in, we always talk about an engineering study, which is a collection of meetings over the first week where, first off, we'll do education. We're PMPs; we're Lean Six Sigma certified. We've been doing this for a really long time. We know the pitfalls and we know the risks of MES projects. We'll start off with half a day of education with all the stakeholders, everyone from operations, maintenance, quality, IT, finance, and planning, to basically discuss readiness. And we've done this with varying degrees of success, in that some companies, after that training, have actually just stopped, because they said, we realized we weren't ready as an organization, and it would be a waste of time.
32:17
Jason: You have the other ones who say, "I hear what you're saying, Jason, just write the software." It's like, seriously. So we'll provide whatever is needed there. We'll do change management; we will help. We say, you need a project charter. You certainly need a vision for what it is you're doing. You need a cross-functional team. You need stakeholder agreement and buy-in. And let's figure out who's being affected by this one. Let's create a process map of what your existing systems are, because we're going to be deprecating some of those, and by the very nature of that act, that's where you start to actually uncover areas for continuous improvement, just in implementing.
32:53
Sam: That's a great question to end on. Thank you all so much.
32:55
Jason: Thank you.


Modern manufacturing generates vast amounts of data from diverse sources, creating challenges in data integration and utilization. Traditionally, data silos have hindered the scalability of analytics across manufacturing and supply chains. The Snowflake AI Data Cloud breaks down these barriers by seamlessly converging IT and OT data, accelerating smart manufacturing initiatives. Join us to explore how Snowflake empowers manufacturers to harness the full potential of their data, driving innovation and operational excellence in the era of AI and Industry 4.0.
Transcript:
00:05
Greg Sloyer: Well, thank you for coming, sort of during lunch, before one of the keynotes. I'd like to thank Inductive Automation for having us present again. This is our second year presenting at the conference. My name is Greg Sloyer. I'm from Snowflake. I am the Manufacturing Industry Principal, so I look at the business side of things from Snowflake. All the usual disclaimers: do not buy or sell stocks based on what I'm talking about; don't plan your 401(k)s and retirement on it. I've been doing data and analytics for manufacturing, supply chain, operations, logistics, all of that, for about 17 years now, not all of which was at Snowflake. Prior to that, I had 20 years in the chemical industry, DuPont, BASF, and I ran global supply chains and logistics and all sorts of things like that. So, why is Snowflake at the Inductive Automation ICC conference? I will set this up by asking: how many people are familiar with Snowflake today? Okay, so about half. So, Snowflake started out as a data warehouse, data lake kind of thing in the cloud.
01:17
Greg Sloyer: It's been about 12 years now; we came out in 2014. The big thing here is we operate across AWS, Azure, and GCP, so across all three of the major clouds. Our big thing, especially in the 2018 timeframe, when you see "disrupt collaboration" and this cool-looking thing in the middle of the slide, which is maybe a little hard to see, with a lot of starbursts and fireworks-looking things, is data sharing in Snowflake. This is between customers and suppliers, between partners and OEMs, between logistics groups and manufacturers, and between what we call our marketplace providers, data providers in Snowflake, providing things like weather data, commodity pricing, freight rates, logistics, things like that. There are about 2,600 data sets or so available in Snowflake. The really cool thing is we do this all without moving data. We're not moving data in Snowflake; it is pointers. We've gotten rid of the ETLs and FTPs and emails, and heaven forbid you put stuff in CSV files and ship them over to a friend of yours. This is all essentially permissions.
02:39
Greg Sloyer: You give permission for somebody to see a table or set of data, or they give you permission to see a table or a set of tables. Once that permission is granted, that data shows up in your database like it's one of your own tables. So now, to incorporate that data in analytics and reporting, you just extend your SQL with a join statement. That's what it comes down to. That was in 2018. We've been exploiting that, and more recently we have been building applications. So you're seeing major applications like Blue Yonder for supply chain, and others, replatforming to Snowflake. And this has really been the progression, and we continue to add on to it: a lot of AI, Gen AI, and ML types of capabilities, and I'm gonna talk about a couple of them today, being brought to the data. What we didn't want is for you to spend a lot of time bringing all the data, and we'll talk about IT data and OT data today, into Snowflake just to then pull it out and have to do something else with it somewhere else.
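In rough terms, the consumer side of a data share looks like an ordinary query. A sketch using the Snowflake Python connector, with all account, database, and table names invented for illustration:

```python
# Sketch of consuming a Snowflake data share: once the provider grants
# access, the shared table is queried like a local one -- no copies, no
# ETL. Account, database, and table names are all illustrative.
import snowflake.connector

conn = snowflake.connector.connect(
    account="myorg-myaccount",
    user="analyst",
    password="...",          # or key-pair / SSO in practice
    warehouse="ANALYTICS_WH",
)
cur = conn.cursor()

# Join our own work orders against a supplier's shared shipment table.
cur.execute("""
    SELECT o.order_id, o.quantity, s.eta
    FROM mes_db.public.work_orders AS o
    JOIN supplier_share_db.logistics.shipments AS s
      ON o.order_id = s.order_id
""")
for row in cur.fetchall():
    print(row)
cur.close()
conn.close()
```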
03:45
Greg Sloyer: The idea is to leave the data there and bring all those capabilities to the data, so you can operate entirely within Snowflake. We launched what's called the manufacturing data cloud at Hannover Messe about a year and a half ago, in April of last year. And we looked at what was needed in the industry, manufacturing in general, and what a lot of the opportunities people are struggling with were. Hopefully, a number of these resonate. So one was IT and OT convergence. Okay, this has been a big topic now for a number of years, and Snowflake had been great at bringing typical ERP data, especially SAP data, into Snowflake. We've been doing that for a number of years, with lots of big customers who are doing that today with not just one SAP or ERP system but tens, twenties, thirties. And all of this is published whenever I provide a name; Carrier, for example, has 140 ERPs whose data they consolidate into Snowflake. Where we weren't as strong was on the OT side, bringing in the shop floor data.
05:15
Greg Sloyer: This is where we really pivoted about 18 months to two years ago, working very closely with Inductive Automation, Cirrus Link, and a number of other partners to provide different architectural ways to bring the shop floor data into Snowflake and take advantage of the time-series capabilities, along with a number of those other capabilities we'll talk about in terms of AI, ML, and Gen AI, brought to the data. That's the third point: really deploying advanced analytics at the data. The middle one is taking advantage of that data sharing. This is broadening the visibility outside of IT, outside of OT, to the enterprise, and really extending that to the partner network: broadening the view of the supply chain and incorporating that visibility into the decision and analytics process, really taking advantage of a lot of these different Snowflake capabilities. The difficulty, and I'm sure many of you have experienced this, is that for years, decades, shop floor manufacturing sites have generally been islands: different organizations, different functional reporting roles from a systems standpoint.
06:18
Greg Sloyer: OT sometimes reported to the CIO, but generally not; it reported to the VP of Manufacturing. This created a lot of separation from a systems standpoint and made it not so much technically difficult but more organizationally difficult to integrate and bring that data in, to integrate it with the rest of the data. There are architectural discussions that happen, things like that. So, different opportunities. And for those of you who have multiple plants, what I always say is if you have 50 plants, you probably have 48 different MES, MRO, LIMS, and QM systems, because a lot of those plants came in through acquisition. As I said, it's an island. Investments weren't made, or if it wasn't broken, we're not gonna fix it, that kind of thing. So a lot of that is changing, but the architectural patterns that you use to utilize that data, especially to bring it to the cloud, need to account for the fact that all these plants are different. They may have different protocols, different configurations, all that kind of stuff. So the solutions have to be adaptable.
07:38
Greg Sloyer: And that's really where, in partnering with Inductive Automation, we've helped simplify the environment for bringing that data into the cloud, into Snowflake. So let's first take a very broad view as we look at the supply chain. As I mentioned, we have everything from marketplace providers with commodity data, pricing, availability, and geopolitical kinds of things that impact supply, to the logistics areas, really bringing that sort of view to the plant. And then, when you look outside from the customer standpoint, especially if you're building connected products, products out in the field that are generating their own data after they've left manufacturing: how do I incorporate that and create visibility up and through the supply chain, through the ecosystem, to be able to make decisions more holistically, to not just help manufacturing but help that whole enterprise environment? And then this is the thing we were really putting in place 18 months or so, two years ago: the ability to get much more fine-grained, granular data at speed.
09:04
Greg Sloyer: I won't call it real-time; let's call it near real-time into the cloud. This is not replacing shop floor systems. If you have a safety system, the cloud is not the best spot to put it; that's gonna be edge-driven kinds of stuff. But as we look at how do I take advantage of that data, how do I broaden access to it, how do I look across those 50 plants, how do I run much more advanced mathematics on that data to do root cause, cycle time, predictive quality, all those kinds of things? That's really why we're pulling that data into the cloud and combining it with a number of other components of the data. For example, let's say you are great at doing quality control and things like that, even looking at it from the shop floor, the isolated plant level. What we wanna do is show that vision of extending into the supply chain, to say, okay, those aren't the only variables going into your manufacturing facility. Your supplier quality, that delivery variability, all of those things come into play when you start looking at quality or predictive maintenance. And then how do returns, how does warranty quality, for those of you more on the discrete side, how does that impact things, and how can I utilize that information from customer service, from field maintenance, things of that nature, to see what the potential root causes were that started in manufacturing and started in supply?
10:37
Greg Sloyer: So again, it's being able to broaden that view for those organizations that have started moving beyond doing really cool and fancy analytics on their shop floor data: how do I paint that vision of the future? And this is where we really see the extending of that data and incorporating more of the IT types of data into those decision processes. Now, Snowflake has many, many more partners than this; these were the partners that were part of our manufacturing data cloud launch. Marketplace partners: there are 2,600, 2,700 different data sets in the marketplace, everything from financial data to ESG data, as I mentioned. If you type ESG into the Snowflake marketplace search, you're gonna come up with 40 or 50 different data sets that are available; freight rates, things of that nature. From that perspective, again, it's to help provide that greater visibility. As I mentioned, Snowflake has really been doubling down on applications and the capabilities there, building those on Snowflake.
12:00
Greg Sloyer: So Blue Yonder and a number of others are replatforming. In a lot of cases, there are ones that were built by companies specifically on top of Snowflake, taking advantage of that power of the cloud and cross-cloud, so as they build something, it's not just in AWS or in Azure; it goes across all three. And then the system integrators, the SIs, are there. And again, many, many more are partners, but these were the ones that stood up and said, I have built in Snowflake a supply chain or manufacturing or operations type of solution with a customer, and the customer is raising their hand and going, "Yep, they did a great job. We did it all in Snowflake." That's how they got on this list, and it continues to expand as we work through. The main areas, as I mentioned, supply chain optimization, smart manufacturing, and connected products, are the three areas where we start utilizing the manufacturing data, the supply chain data, and that sensor data, whether it's coming from the shop floor or from connected devices out in the field, to really provide that visibility and take advantage of the cloud infrastructure.
13:23
Greg Sloyer: And depending on which booths you go to behind you here, you'll see slightly different versions of this. This is my extended version. And sometimes, if I get surprised by a slide today, it's because somebody's legal, their legal, our legal, or somebody's marketing department got a hold of them. So I always enjoy this, 'cause then the slides are as much a surprise to me as they are to you. So with Ignition, and we've had this in place with them a little over a year now, I wanna say closer to 18 months that we've been working with Ignition, or Inductive Automation, and Cirrus Link: this is the easy button for getting data into Snowflake. This is zero code. Part of the secret sauce is the IoT Bridge for Snowflake that is available via Cirrus Link. And this drops the data in from Ignition, so not just the tag data, but the metadata all around it, that structured data. All of that lands in Snowflake. And if you have a chance to see a demo, Arlen Nipper, I don't know if Arlen's in the audience today, Arlen and others have done this many, many times. I'm probably on many calls per month with him with different customers and prospects.
14:55
Greg Sloyer: And within that demo, Snowflake goes from knowing nothing about your shop floor to knowing everything about it that's coming through Ignition. So it's a very, very fast, very, very easy method for getting the data into Snowflake. And one of the reasons we're a partner on this, versus some of the other ways you can land that data in the cloud, is that we really looked at this with them from the process engineer's standpoint. So the plant is driving the configuration in Snowflake. Snowflake is not defining a structure where you've got to be so many levels deep and have certain kinds of attributes and all that. It is driven from the edge: the plant defines how you look in Snowflake, not the other way around. That's one of the keys. And then there are some other nuances, for those of you who get into the much more excruciating detail about data types and what you can land. But we're landing this all with MQTT. The cool part of the demo for me, in terms of the processes within Snowflake, is that MQTT is great for transmitting data and for storing data.
16:22
Greg Sloyer: Really, really small footprints for both. It allows you to go very quick, allows you to get the data up, because of that event-driven change control that it has. Not so great for BI and for analytics, though: it has lots of nulls. Mathematics tends not to like nulls, and BI tends not to like a flat line with a spike, then another flat line with a spike. We use views in Snowflake, so you're not gonna repopulate and have to store all that data again; from the view's perspective, it's hydrating those nulls with the previous good value. So now you have an analytic data set that your data scientists tend to like, without having to code anything. This is out of the box; it's driven from the moment you've set up this connection. And your BI tools like it because it's not that flatline-spike, flatline-spike. So, great. You've got the data in Snowflake. Now what can I do with it? This is where, over the past year or two, Snowflake has been bringing a lot of AI, ML, and Gen AI capabilities into Snowflake. We continue to release new stuff. This happens to be one of our, I'll call it, easy buttons.
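Before moving on: the null "hydration" just described maps naturally onto a window function that carries each tag's last non-null value forward. A sketch, with illustrative table and column names:

```python
# Sketch of the "hydrate the nulls" view Greg describes: carry each
# tag's last non-null reading forward with a window function, so sparse
# MQTT data becomes an analytics-friendly series. Table and column
# names are illustrative.
import snowflake.connector

conn = snowflake.connector.connect(
    account="myorg-myaccount", user="eng", password="..."
)
conn.cursor().execute("""
    CREATE OR REPLACE VIEW hydrated_readings AS
    SELECT
        ts,
        tag_id,
        LAST_VALUE(value) IGNORE NULLS OVER (
            PARTITION BY tag_id
            ORDER BY ts
            ROWS BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW
        ) AS value
    FROM raw_tag_readings
""")
conn.close()
```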
18:01
Greg Sloyer: 'Cause different levels, different organizations have different capabilities around data science, around analytics use, things like that. And for the data scientists in the room: Python, Java, Scala, all of that can be done in Snowflake. It's not just a SQL house. So you can be writing all the cool data science stuff. I don't think I have it on here, but we're in the booth right across the hall. For those of you into optimization, you can be running mathematical optimization in Snowflake through the Python libraries. It's really cool. Back in the '90s, I watched optimization fail, and I'll talk a little bit about why I think it failed. But the ways we're getting there, and the capabilities now being brought to this data, are, to me, driving a lot of really cool stuff happening in manufacturing. These two lines of code here are anomaly detection, using an ML function. I think of this as the trend function in Excel. For all of you who are familiar with Excel and use the trend function, all you had to know to generate a forecast or see the trend of data was the trend function and the two, three, four parameters you had to put into it.
19:21
Greg Sloyer: You did not have to know that the mathematics behind it was least squares at the time. You didn't have to care; you could just write a trend function. This anomaly detection is something similar; there are about a dozen of these, what we call Cortex ML functions, available. You have to know the parameters, just like I had to know with the trend function. But now I can run an ML-based, I think it's gradient boost, anomaly detection on the data as it lands. I don't have to be a data scientist to apply that mathematics to the data. So there are functions like forecasting, and there are a couple of others out there that are more manufacturing-based. Like I said, there are about a dozen overall. But the ones I tend to see for manufacturing and supply chain are anomaly detection, forecasting, and some contribution-factor functions, things like that, that really get exciting, 'cause you're applying ML techniques without having to be a data scientist. So it's simplifying the approach.
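The "two lines of code" pattern looks roughly like Snowflake's Cortex ML anomaly-detection functions: create a model from historical data, then call it on new rows. A sketch with invented object names; check current Snowflake documentation for exact syntax:

```python
# Sketch of a Snowflake Cortex ML anomaly-detection call: train a model
# on historical readings, then flag anomalies in new rows. Object names
# are illustrative.
import snowflake.connector

cur = snowflake.connector.connect(
    account="myorg-myaccount", user="eng", password="..."
).cursor()

cur.execute("""
    CREATE OR REPLACE SNOWFLAKE.ML.ANOMALY_DETECTION line_temp_model(
        INPUT_DATA => SYSTEM$REFERENCE('VIEW', 'historical_temps'),
        TIMESTAMP_COLNAME => 'ts',
        TARGET_COLNAME => 'temp',
        LABEL_COLNAME => ''
    )
""")
cur.execute("""
    CALL line_temp_model!DETECT_ANOMALIES(
        INPUT_DATA => SYSTEM$REFERENCE('VIEW', 'latest_temps'),
        TIMESTAMP_COLNAME => 'ts',
        TARGET_COLNAME => 'temp'
    )
""")
print(cur.fetchall())  # each row includes an anomaly flag
```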
20:22
Greg Sloyer: The screens here that I'm showing are actually built in what we call Streamlit. This is a Python-based graphics package that is in Snowflake, so the data does not have to go out. It will not compete with a Tableau or a Power BI; we can also connect to those. So if you wanna do really cool and fancy dashboards, super, we operate with those. But for folks, data scientists especially, who wanna just show quick, easy visualizations of the data, of the results of their very cool mathematics, this is available for them as well. So why do I think optimization failed, and why do I get nervous about being able to use advanced analytics broadly across your organization? I'll point to this and say there are different reasons. But the biggest one, and why I think optimization failed in the '90s, for example, and what I don't wanna see with things like Gen AI, AI, and ML, which are really cool tools, is that we had organizations wanting to go from here to here without doing the groundwork in between. There was too much change management.
21:33
Greg Sloyer: There was not enough data governance or data quality in those processes. Optimization is great if you've got really good data quality, especially pricing, timing, things like that; the mathematical models rely on it. That is no different for Gen AI, AI, and ML. The square root of a bad number is still a bad number. It doesn't get better because I threw cooler mathematics at it. So this is where, working with the partners, working with you folks, it's not that we're gonna say, "No, don't ever do this." What I'm saying, as a warning, is: keep those structures in place where there's governance. And this is really where IT and OT coming back together through this process helps create the environments where AI and ML are gonna be a lot more successful. Make sense so far? All right.
22:45
Greg Sloyer: So a data foundation is necessary. We build these out. We work with the customers and the partners to deploy these things. Like I said, we've been great at IT data, and we're really excited about all the partnerships we have to bring in the OT data, taking advantage of our time-series and geospatial capabilities, things of that nature, so you can do all sorts of cool math with it. And then extend those with the partner data, or share that data with your partners: customers, suppliers, logistics, for example. So what does that mean? From the Unified Namespace, this is what we are continuing to develop: bringing IT, OT, and connected products, getting all of that within Snowflake, improving that visibility, and allowing you then to run greater AI and ML models, Gen AI, at the data, not, again, separating it out, so that you can take advantage of not just the ingestion of that kind of data, but what you do with it after you've got it somewhere. So with that, any questions before I send you across the hall to the 1 o'clock keynote?
24:15
Audience Member 1: For a lot of us, the issue is not just the data. It's also the application. So you saw, basically, a lot of the applications there. What does it look like if you wanna get the data out of Snowflake and give it to an individual on the shop floor to be able to use? Does it have to live alongside each other? And should we not think about it like it's a replacement for a data broker? It's just something that lets you do higher-level analytics?
24:37
Greg Sloyer: Generally, the question is, is there a path to go from Snowflake, let's say, back to Ignition as well? There are organizations that have gone down that route. I would say that the Ignition group, Inductive Automation, are the best ones to talk to. There's always the security and protocols and things like that that you have to work through. Technically, I do not believe it's an issue. But generally, it's been a one-way path up into Snowflake, because then, like I said, if you have 50 sites, you may have 50 Ignition brokers or whatever, and they're coming up into Snowflake, so you're looking more holistically at that data. I've not seen SAP data go down to Ignition or anything like that. That's usually staying up within Snowflake. Sure.
25:25
Audience Member 1: Oh, somebody else. So at the beginning of the presentation, you talked about how it's kind of a big permission space rather than a storage space. But then later on...
25:32
Greg Sloyer: For the, for data sharing.
25:41
Audience Member 1: Okay, 'cause when we saw the architecture diagram, if you define it in the namespace for Cirrus Link, it moves up. Where is the storage part in that situation?
25:50
Greg Sloyer: So it's in Snowflake. The data is coming into Snowflake. It's stored there. You have chosen, as a customer organization, AWS or Azure or both, let's say, for different reasons. And Snowflake sits on top of that. So physically, they can talk to you about where it makes most sense. But generally, it's in Snowflake. One last question real quick. Yes.
26:11
Audience Member 2: No. Yeah, that was it.
26:14
Greg Sloyer: No. Oh, okay. All right. Super. So I've already been shown the hook kind of thing 'cause they want you to get across the hall for the 1:00 o'clock. But thank you. Appreciate your time. And we are across the hall for any more detailed questions.


Sepasoft’s workflow solution can map out and execute the production process for almost anything – including made-to-order bobbleheads! Our demo will showcase how simple it is to manage production workflows, collect real-time data, and utilize document management with 3D models and form entry. We’ll also highlight how to authenticate and verify every action during production for compliance and accountability using Electronic Batch Records (EBR) and electronic signatures. Join us to see the latest Batch Procedure technology in action.
Transcript:
00:00
Tony Nevshemal: Hey everybody. Welcome and thank you for coming to our session today. I'm really excited to be here at ICC. It's actually my first ICC. Today, my colleague Doug and I are gonna be presenting "Sepasoft's Workflow Solution: Building Bobbles with Batch." We're gonna be building these really cool bobbleheads today using Sepasoft's Batch [Procedure] Module. Within Sepasoft, there's often been some controversy about how we named our module "Batch," because some people think it's a misnomer, that it only applies to batch manufacturing. However, it truly is a workflow solution. It'll handle any workflow associated with your manufacturing, and we intend to show you something of that today.
00:55
Tony Nevshemal: My name is Tony Nevshemal. I'm the CEO of Sepasoft, and I'm also the new guy, having joined just recently. Many of you know Tom; Tom Hechtman was the prior CEO of Sepasoft, and he has transitioned to the CTO role, where he's in charge of the product roadmap, product innovation, and thought leadership. Prior to joining Sepasoft, I was the CEO of a manufacturing ERP company. And prior to that, I was an operations director at a large manufacturer. I'm very happy today to come down the Purdue pyramid to level three, where all the cool kids are, and one of them is Doug. So Doug, introduce yourself.
01:38
Doug Brandl: Yeah, thank you. My name is Doug Brandl. I'm an MES Solutions Engineer with Sepasoft. My background is 10 years of experience in pharma as an automation engineer and consultant, and application development before that. But I grew up around the MES space, I grew up around the standards. My father was really involved in them, and our dinner table conversations often involved talking about operations, responses, and all the different object models. It was a bit nerdy, a bit geeky, push-the-glasses-up-your-face stuff. But I've got an ingrained, internalized understanding of the space. I've been with Sepasoft for a little over a year, and thank you to everybody who went to our session last year, and thank you for coming to this one today.
02:36
Tony Nevshemal: Well, before I joined, I endeavored to take all the training classes at Sepasoft for all of our modules. But one of the training classes I have not taken yet is for our Batch [Procedure] Module. So Doug is in the unenviable position of walking me through our Batch [Procedure] Module, the unit procedures, changing up a recipe, and you guys get to see it all in real time today. A quick word about Sepasoft before we proceed. Sepasoft is of course an Inductive [Automation] Solutions Partner. We have the broadest and deepest MES solution on the platform. We have batch processing and production workflows; we'll be showing some of that today. We have genealogy and WIP inventory with our Track & Trace Module. For ERP connectivity, we can hook up to pretty much any ERP, and we have a direct connector for SAP.
03:31
Tony Nevshemal: We're well known for production efficiency and scheduling with our OEE and downtime modules, and quality tracking is handled with SPC. We have a bunch of ancillary modules such as settings and changeover, document management, barcode, those types of things. And you can control it all at the enterprise level with our multi-site management. Multi-site, not multi-sync. I'm very happy to tell you that this week we're announcing another bullet point added to this list, and that's SepaIQ. So please come to our session on Thursday. SepaIQ is a really exciting breakthrough that we've made, that Tom's made, and it relates to manufacturing, machine learning, AI, data contextualization, all of those topics. So please come to our session on Thursday to learn more about that.
04:21
Tony Nevshemal: And finally, a quick word about a change we've made regarding our Quick Start program at Sepasoft. Our Quick Start program is effectively access to our design consultation engineers. We've opened up that access to be universal to any and all Sepasoft customers. So to the extent that you need expertise with your MES project, whether that's at architecture, design, implementation, rollout, consider us part of the team because when you succeed, we succeed. So I think that's enough of that. Let's get into the presentation.
04:55
Doug Brandl: Yeah. To give everybody some context on what we're doing, we are receiving orders from our ERP system for made-to-order bobbleheads. And we're going to run through to assembly, and I challenge you to think of it this way: the procedural control and workflow of what it takes to go from order to execution of making these bobbleheads. And Tony will have to put them together for us. We're gonna leverage our Batch Procedure tool, we're gonna use our Track & Trace modules. Hopefully, if we have time, we'll be able to see some of the genealogy of lot consumption, and you'll see a handful of the components that we use to do all this, and our recipe editor.
05:43
Tony Nevshemal: Yep.
05:45
Doug Brandl: Alright.
05:45
Tony Nevshemal: Alright.
05:46
Doug Brandl: So first things first, you guys are gonna have to excuse me, I've got to turn around to do this. We're gonna refresh our orders off of our ERP system, and I like this bobblehead with the Sepasoft company logo; it's awfully convenient that one's right at the beginning. So we're gonna go ahead and start a batch, and as you can see, we've got our batch ID, and we proceed to the review page before we can assemble. What we've got here is just a standard Perspective page. We've got our document viewer, which is an HTML5 WYSIWYG; you can do a lot of really cool things in it. In this case, we're embedding a WebGL model, which we do with the help of the Web Dev Module. And over here on the right side, we've embedded some form entry fields, and all of this gets tracked to the batch, to the electronic batch record, the EBR, and I'll show you what all of that looks like here in a minute. But I guess probably before we go, I should give you a quick overview of the recipe so that we can...
07:00
Tony Nevshemal: Yeah. Is there a way to graphically view that?
07:01
Doug Brandl: Yeah. I put a little slide out here. Right over here is a visual representation, and this is also very similar to... Sorry. This is our recipe that we're gonna be executing and we here have "Review Station" which in this case is gonna be my computer where I'm going to do some 3D model review. We're going to do some authentication challenges. This links into the identity provider provided by Inductive [Automation].
07:29
Doug Brandl: And we'll challenge for some electronic signatures. We've got some logic that we can apply there, where you can require double signatures and set up which roles are needed to gate certain steps. And then after our review, if we're happy with our model, we go through the assembly, so I have an equipment phase here. If you're not familiar with the standards, think of a phase as like a step. In this case, this equipment phase is a simulated PLC, where I'm going to send the 3D models we're going to print to our printer, our beautiful Amazon printer here. We're going to e-sign to make sure it didn't turn to spaghetti, and then we're going to measure, record the values to our SPC module, and then assemble our little 3D bobblehead. Alright, so Tony.
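To make the flow Doug describes easier to follow, here is a toy Python outline of the recipe's steps. This is purely illustrative; it is not Sepasoft's actual recipe object model:

```python
# Toy outline of the demo recipe; not Sepasoft's object model.
recipe = [
    {"phase": "Review",   "station": "Review Station", "esign": True},
    {"phase": "Print",    "station": "3D Printer (simulated PLC)", "esign": True},
    {"phase": "Measure",  "station": "Review Station", "records_to": "SPC"},
    {"phase": "Assemble", "station": "Assembly"},
]

# In ISA-88 terms, each phase is roughly "a step" in the unit procedure.
for step in recipe:
    print(step["phase"], "at", step["station"])
```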
08:26
Tony Nevshemal: Yes.
08:27
Doug Brandl: Well, I guess this is all me, I'm the reviewer. As far as... This looks appropriate to me. I'm not really seeing any mesh errors.
08:36
Tony Nevshemal: And all components, all three are present.
08:38
Doug Brandl: Yes, all of this is present. So I'm gonna go ahead and click through these and I'm gonna say this is all good, and I'm going to... You can't see it in the bottom right because it's covered by my shadow, but down here, we've got our button to finish this document. Now, when I do this, I'm gonna slide this back out. You can see where you've been and where you're going with our batch monitor. And when I click on this and expand it, I can see all of the relevant metrics that we're capturing as part of this step. I can see, right up here, I can see the model is appropriate. So this is really good for auditing and figuring out what really happened during the execution of a batch. Slide this guy back out, and I can see I've got an e-signature required to complete the review step.
09:28
Doug Brandl: I will go ahead as a reviewer and do this challenge, so here I am, Doug, and my password. Alright, I accepted that. I could also reject it, and in the recipe that you saw, with branches, you can get pretty complex in the conditions you put in there to do really whatever it is that you need. Next up, I guess we go to our assemble stage. This is just a simple Perspective page that I put up, tied to our fake little PLC. You can see I say that the state is running. Our PLC is saying that it is running, but in reality, it is waiting for some filament. So Tony, if you don't mind, could you scan some...
10:21
Tony Nevshemal: Sure. Beep.
10:24
Doug Brandl: Perfect. Alright, there we go. Okay, now we're off to the races. So, while this is running, I'm just capturing a handful of metrics, we're looking at filament consumed, layers printed, extruder speed, etc.
10:35
Tony Nevshemal: How did you build these screens?
10:37
Doug Brandl: Yeah, this is just standard Perspective. All of these are tag-driven, so when you install our modules, you get an MES tag provider. As you configure the Batch Module, you can expose each step when it executes for a particular unit; you can expose all of those values as tags. So all of these are just tags. It's very simple, plain old Ignition Perspective. And then, again, while this executes, I didn't pull it up fast enough, but we are tracking, you see Base_Out at the top, we see filament. These are material transfers, so this is actually piggybacking on our Track and Trace Module. It allows us to consume material and track lot usage, and we'll hopefully see that at the end with our trace graph. You also get a file name, you get the extruder speed; all of that gets tracked live, and you can store those values as they change, or store the last value. And you can see all of this in your EBR at the end, after execution.
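Because every exposed value is an ordinary Ignition tag, reading them from a script is nothing special. A minimal sketch, meant to run inside an Ignition (Jython) context where the system namespace is built in; the tag paths are hypothetical and depend on your equipment configuration:

```python
# Minimal sketch: reading Batch Module values exposed via the MES tag provider.
# Runs inside Ignition, where the `system` namespace is built in.
# The tag paths below are hypothetical placeholders.
paths = [
    "[MES]Site/Area/Printer1/Print/FilamentConsumed",
    "[MES]Site/Area/Printer1/Print/LayersPrinted",
    "[MES]Site/Area/Printer1/Print/ExtruderSpeed",
]
values = system.tag.readBlocking(paths)
for path, qv in zip(paths, values):
    print(path, "=", qv.value, qv.quality)  # each result is a QualifiedValue
```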
11:56
Tony Nevshemal: And for those that don't know, what's an EBR?
11:58
Doug Brandl: Electronic batch record. Alright, so we'll go over to our measure. I forgot I have an e-signature here. Alright.
12:07
Tony Nevshemal: Well, it looks like they printed.
12:09
Doug Brandl: Okay, they didn't turn to spaghetti.
12:11
Tony Nevshemal: No.
12:11
Doug Brandl: Alright.
12:12
Tony Nevshemal: We got the parts.
12:13
Doug Brandl: So I'll go ahead and sign off. Or would you like to sign off?
12:16
Tony Nevshemal: Sure.
12:17
Doug Brandl: Yeah. And again, this is any identity provider in Ignition that you set up, so you don't need to do anything crazy, it's just part of the platform. Alright. Now we're good, hopefully. Well, I hit the login button. Now we're good to go to our measure. Alright, so we've got some annotations now here on our 3D model. Tony, I need you to take some measurements here.
13:00
Tony Nevshemal: Okay.
13:02
Doug Brandl: So let's look at the head first.
13:04
Tony Nevshemal: Which one?
13:06
Doug Brandl: And I want you to get the diameter of that section on the 3D model.
13:15
Tony Nevshemal: So that is 6.12.
13:17
Doug Brandl: Alright, and then let's go to the base. If I can put that. There we go. Now we're gonna grab that right there, the diameter.
13:32
Tony Nevshemal: Alright, 6.16.
13:37
Doug Brandl: And then finally, let's go for the spring diameter.
13:43
Tony Nevshemal: 6.02.
13:47
Doug Brandl: Perfect. So I'll go ahead and complete this step. Now, I don't know if you guys noticed, but part of our process, we measure, we record the values to SPC, which it popped up while I was looking away, but we record the values to SPC and then we go to assembly. But we may run into a problem in the future, so I think there's an opportunity for us to modify this recipe and for Tony to dabble in the batch recipe editor, so we are good there. Now it's just assemble.
14:19
Tony Nevshemal: Alright.
14:19
Doug Brandl: If you don't mind.
14:22
Tony Nevshemal: So how do I assemble?
14:23
Doug Brandl: No, that's...
14:24
Tony Nevshemal: Okay. So you take...
14:25
Doug Brandl: Yeah. Take the spring, put it in the hole. Now, obviously you use your imagination and your projects, this could obviously be significantly more complex. You don't have to use a 3D model like we are here, you could use documents. We can retrieve these out of controlled document management systems. The world is your oyster when it comes to this. Alright, cool. It is assembled. I'm gonna go ahead and complete the step. Alright, so we have, we've completed our assembly and now we're gonna send the label to the printer and that's that. But we did notice that there are some opportunities. So Tony, if you don't mind, I'd like for you to go ahead and go into the recipe editor and modify the recipe, and let's see if we can account for times where... Let's go with the spring is not gonna fit in the hole. We're not gonna be able to assemble this. So we've got our happy path, we've got our green path through this workflow, but we don't have a red path, we're not handling exceptions appropriately, so this is a great opportunity to show you how easy it is. So Tony, can you open up the assembly unit procedure on the bottom left?
15:39
Tony Nevshemal: Sure.
15:41
Doug Brandl: And scroll on down, and after the "Record Values" and the "Record Transition," we're going to insert a branch into this workflow, so you can delete that line right there. And then I want you on our logic controls here in the editor to drag on "Or Begin." What this is gonna let us do is this is gonna let us say, "When this condition is met, you go down this path. When a different condition is met, you go down another path," etc., etc. And you can change these. So connect that, and then we're going to put in those conditions.
16:16
Tony Nevshemal: Okay.
16:16
Doug Brandl: So if you could drag two transitions in, the transition is where you're going to be able to put in that expression, and we'll have one for our green path and one for our red path. Or happy and sad path. And go ahead and connect those guys. Perfect. And then let's edit. You can connect them to the next one as well.
16:41
Tony Nevshemal: Sure.
16:42
Doug Brandl: And then let's go ahead and edit that transition. Let's give it a name.
16:47
Tony Nevshemal: So this is good measurements, right?
16:49
Doug Brandl: Yes. And then this transition expression, what we can do is look up through the recipe, through what's been executed, and pull out some of those metrics. We had our operator record on that document the diameters of the spring and of the head and the base, so what we're gonna do is grab those values and apply some rudimentary logic. So Tony, "measure" is what we called that step, that phase. "Measure" and then you're gonna say ".diameter" and let's go. So in this case, our good path is when the spring is smaller than the head and the spring is smaller than the base.
17:35
Tony Nevshemal: Right, so when the spring...
17:37
Doug Brandl: And, nope, we don't need to...
17:44
Tony Nevshemal: Oh yeah. Just less than...
17:46
Doug Brandl: Yeah, maybe too tight.
17:47
Tony Nevshemal: "Measure.Diameter_Spring" is... "Measure.Diameter_Base" right?
18:19
Doug Brandl: Yes.
18:20
Tony Nevshemal: Okay.
18:21
Doug Brandl: Go ahead and save that. And then let's do the same for... Let's do the inverse, the logic inverse of that for this red path, so let's just call this "rejects."
18:31
Tony Nevshemal: Reject.
18:31
Doug Brandl: Reject measurement. And then our transition expression is going to be when the spring is greater than or equal to the base, or the spring is greater than or, and... Is greater than or equal to the head.
19:00
Tony Nevshemal: Spring, is greater than or equal to. What did I do first?
19:11
Doug Brandl: You did the head first.
19:12
Tony Nevshemal: Alright, so this is base. Okay.
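For readers following along, the two transition expressions entered here boil down to the following gate logic, restated as a plain Python sketch (the recipe editor uses its own expression syntax, not Python):

```python
# Plain-Python restatement of the demo's green-path and red-path transitions.
def good_measurements(spring, head, base):
    # Green path: the spring fits inside both the head and the base.
    return spring < head and spring < base

def reject_measurements(spring, head, base):
    # Red path: the logical inverse of the green path.
    return spring >= head or spring >= base

print(good_measurements(6.02, 6.12, 6.16))   # True -> proceed to assembly
print(reject_measurements(6.02, 6.20, 5.90)) # True -> notify the operator
```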
19:13
Doug Brandl: Perfect. Save. And then what do we... What do you think we should do?
19:19
Tony Nevshemal: Well, let's say... So if it fails its measurements, that means you're not able to assemble. So we should probably tell the assemblers.
19:27
Doug Brandl: Yeah, probably don't wanna waste their time.
19:28
Tony Nevshemal: Right.
19:28
Doug Brandl: Yeah. So let's throw in a user message. So we have some built in... You have like a whole standard library of phases that you can drop in. And in this case I've configured it so that our assembly station can have a user message. So if you can just click that, drag it over into that unit procedure and connect it. And let's go ahead and configure it.
20:00
Tony Nevshemal: So we'll call this "notify"?
20:02
Doug Brandl: Yeah, like "notify operator" or something.
20:04
Tony Nevshemal: Yeah. Okay.
20:14
Doug Brandl: And then let's just give them a message down at the bottom where it says "parameter value."
20:21
Tony Nevshemal: Yeah. What do we wanna say here?
20:24
Doug Brandl: Let's just say "assembly not possible."
20:25
Tony Nevshemal: Okay.
20:26
Doug Brandl: We'll keep it simple. In your own projects, I'm sure that you'd probably wanna put more in there. And then go ahead and save that.
20:33
Tony Nevshemal: Yep.
20:33
Doug Brandl: So I'm not covering it. But you can also do calculations where you can pull in values. So a lot of our phases have that. Yeah, let's go ahead and require acknowledgement on it.
20:43
Tony Nevshemal: Yeah.
20:44
Doug Brandl: There's a lot of ability to make it dynamic so it's not all static. It's not like you're always gonna say the same thing. Sometimes you want to include values from previous steps or maybe include batch parameters as part of the message or part of any other phase. So we do have also the ability to include that as part of like a calculation. But we're not doing that here. So let's go ahead and hit save.
21:05
Tony Nevshemal: Alright.
21:08
Doug Brandl: And then we're gonna put a transition on this. So every phase needs to have a transition after it's done. And in this case, we're just gonna say "complete." Once the notification has been sent and the execution of this phase is complete, we'll continue on and we'll terminate the batch. So you can go ahead and insert the suggested transition here. What this does is look at the linked phase above and just say: whenever that step is complete. And this is good. We'll go ahead and save it, and then put on a terminator from the logic controls on the...
21:39
Tony Nevshemal: Let's try it without a terminator.
21:41
Doug Brandl: We can't do it.
21:42
Tony Nevshemal: Can we validate it?
21:42
Doug Brandl: Yeah, you wanna validate it? So if you don't do this, we do have some validation of our recipes where it'll look at it and it'll tell you what's wrong. And in this case, it's saying the assembly unit procedure, UP5 transition needs to be followed by something.
22:00
Tony Nevshemal: Okay, cool.
22:00
Doug Brandl: Let's go ahead and drag the terminator on and connect it. And then let's validate. Again, make sure that that resolved that issue. Recipe is valid. Cool beans. Let's save it.
22:16
Tony Nevshemal: Alright.
22:21
Doug Brandl: Alright.
22:22
Tony Nevshemal: Right. Let's run it again.
22:23
Doug Brandl: Yeah, so we'll fly through this for the second time so that we can get to questions, since we've got four minutes to go. So, alright. This is gonna be the world's fastest 3D printer here. I'm gonna go ahead and kill all of these old orders; these are on the old recipes. We do version our recipes, so these are using version 61 of that recipe. We're going to reset this, and I'm gonna go retrieve some more orders from our ERP system, and that'll be version 62. So, refresh orders right here. Alright. This is the same steps; I'm gonna go fast for the sake of brevity.
23:04
Tony Nevshemal: Let's quickly review them.
23:05
Doug Brandl: Yep. Oh, this looks great. We've seen this one before. Check, check, check. Check. E-sign. I'll go in as an admin. Password.
23:21
Tony Nevshemal: Cool.
23:22
Doug Brandl: Cool, cool, cool. Close those.
23:24
Tony Nevshemal: It's printing.
23:25
Doug Brandl: Yeah, let's go over to our print. Beep boop, scan the lot. We're printing. We are printing at 50 layers a second.
23:36
Tony Nevshemal: Yeah. It's screaming.
23:37
Doug Brandl: This is a fast printer. I can tell who has a 3D printer in here and knows how frustratingly slow that they are. Alright. We're gonna have an e-signature.
23:50
Tony Nevshemal: Okay.
23:51
Doug Brandl: Verify it didn't turn to spaghetti. So I'm gonna go ahead and sign that one as well. Tony, it didn't turn to spaghetti, did it?
24:00
Tony Nevshemal: It did not. We have something.
24:03
Doug Brandl: Alright. So now we're on our measure step. So this is after this step is where we added our transition. So let's go ahead and measure the head outer diameter.
24:16
Tony Nevshemal: Okay, that is 6.2.
24:20
Doug Brandl: 6.2. Let's measure the base.
24:25
Tony Nevshemal: That is 5.9.
24:26
Doug Brandl: Whoa. Now let's do the spring.
24:33
Tony Nevshemal: That is 6.02.
24:35
Doug Brandl: 6.02. Alright. So clearly we are gonna violate our recipe. So when I do that, let's go ahead and take a look and see what happened. So right here, I expand this. Sorry, let me make this a little bit bigger here. I just like watching him walk back and forth with the shadow. So here you can see this transition. So we proceeded down this route here and you can look at this transition and you can see what specifically caused us to go down whatever path it was. And in this case, it was our spring is greater than or equal to our base. Our base was too tiny or our spring is too big. And then we have our notification. So that notification's up on the top right here. And we did require acknowledgement. So I'm gonna go ahead and sign in as an admin. Password.
25:35
Doug Brandl: And here we have our... Just a standard batch message list. This is, again, one of our components where I can click on it. Assembly not possible. I'm gonna acknowledge that. And again, all of this is tracked to the EBR. There's an awful lot that I wanna show you guys as it relates to our EBR, as it relates to our trace graph. I'll hit the trace graph really fast and then I think we're gonna have to go move on to Q&A. And if you want more you can come over to our booth and I would be happy to show this to you. Alright. So here I'm looking at all of the different types of filament, all the different batches. So here what I'll do is I'll slide that over. So right here I can see we have a completed bobblehead. This right here is the assembly unit procedure for that particular batch.
26:24
Doug Brandl: Looks like it was one that I had done on the fourth, I guess. I can see which filament I consumed. I can get the lot number for the base. As part of this step, I'm also creating that lot. I can see everything in, and I can see all of the material that is created as part of it. And then if I click here, I can see the five other batches that used this same material. So this is really useful if you're doing any investigations for quality, for recalls, any of that stuff. This is a really good way to visualize: what did I use? I received green filament and I have it on this particular assembly, this batch right here. So I know all of the bobbleheads that came out using that specific green filament. And with this trace, there's no practical limit; it runs all the way back. You can chain all of your material transfers back and forth. I think that's all I've got time to show. Does anybody have any questions? I think it's the Q&A time. Yeah, go for it. Oh, she's going to give you a mic. Yeah.
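The trace graph Doug shows amounts to walking a lot-genealogy graph in both directions. Here is a toy sketch of the downstream walk used in a recall investigation; the lot numbers are hypothetical, and this is the idea behind the trace graph, not the Track & Trace API:

```python
# Toy lot-genealogy walk; illustrative only, not the Track & Trace API.
from collections import deque

# Which lots/batches consumed a given material lot (hypothetical data).
consumed_by = {
    "FILAMENT-GREEN-001": ["BATCH-101", "BATCH-102", "BATCH-103"],
    "BATCH-101": ["BOBBLEHEAD-LOT-7"],
}

def affected_lots(lot, graph):
    """Breadth-first walk downstream: everything touched by this material."""
    seen, queue = set(), deque([lot])
    while queue:
        for child in graph.get(queue.popleft(), []):
            if child not in seen:
                seen.add(child)
                queue.append(child)
    return seen

print(affected_lots("FILAMENT-GREEN-001", consumed_by))
```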
27:39
Audience Member 1: The object model that you have, the recipe, like how accessible is that? Let's say that I've got basically something that's dynamically generating parts from like a pick-and-place machine, right? And I'm not gonna have all that data until it hits the end of the line as a transaction. Can I write all of that at once? Can I then query essentially every transaction I've had for these measurements and get something like capability? Or am I gonna need to layer in other modules like traceability and SPC to do that kind of stuff?
28:07
Doug Brandl: So if you're doing anything with material tracking, you're gonna need the Track and Trace Module. So material transfers as part of the batch. So you could do all the built-in phases, but when it comes to material in and material out and tracking any of that, and suppose you've got 100 different types of dynamic materials, you can set those for the material in property on the phase. So if you want, I can show that to you probably over at our booth. I can show you what that looks like. But yes, you can do that. But it does require the Track and Trace Module.
28:41
Audience Member 1: Okay.
28:42
Doug Brandl: Yeah.
28:43
Audience Member 2: Hi. Is there an array-based entry? I see the graphical method to put all of these routes in, but is there an array-based or some other way that you could do it in bulk, without all the clicking and dragging?
28:57
Doug Brandl: Yeah, you can script this too. You can script the creation of recipes and of batches. Some people even pull orders out of their ERP system and dynamically create recipes. So all of this is backed by scripting. We have this frontend here, we have these components, but if you don't want to click and drag and you've got some more complicated system, you can script the creation of all of these recipes, and the execution. Yeah.
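As a sketch of what bulk, ERP-driven creation could look like from a script: the helper names below (fetch_orders, create_recipe_from_order) are hypothetical stand-ins, not Sepasoft's scripting API; see the Batch Module documentation for the real calls:

```python
# Hypothetical sketch of scripted, ERP-driven recipe creation. The helpers
# below are placeholders, not Sepasoft's actual scripting functions.
def fetch_orders():
    # Placeholder: in practice, a REST call or database query against the ERP.
    return [
        {"order_id": "SO-1001", "product": "bobblehead-logo", "qty": 50},
        {"order_id": "SO-1002", "product": "bobblehead-custom", "qty": 10},
    ]

def create_recipe_from_order(order):
    # Placeholder: the Batch Module exposes scripting for creating recipes
    # and batches; consult its documentation for the actual functions.
    print("creating recipe for", order["order_id"], "->", order["product"])

for order in fetch_orders():
    create_recipe_from_order(order)
```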
29:27
Audience Member 3: Does the system have functionality to do order maintenance, to modify existing batches mid-run to reflect a new recipe?
29:36
Doug Brandl: At the moment, I don't believe we do. Yeah. I'll let Tom answer that.
29:41
Tom Hechtman: To start, a recipe follows the ISA-88 model. So you have your master recipe and you create a control recipe. Once you create that control recipe and you're executing it, it's isolated from the master recipe at that point. Now, if you modify phases or templates, and we have templates and different things like that, you do have ways to push those changes down into your recipe and such.
30:13
Audience Member 4: And you can create something... Are there already existing scripts to help facilitate that, which you'd customize for your use case?
30:20
Doug Brandl: Yes. So I definitely encourage you to reach out for the Quick Start program, reach out to our design consultation team. They've got a lot of experience doing that.
30:29
Audience Member 4: Awesome, thank you.
30:31
Doug Brandl: Yeah. Any more questions you guys have, please come visit us over at our booth, and I really, really, really encourage you to come on Thursday to Tom and Mark's presentation. It is very exciting what they're doing. So show up if you can. Alright, thank you guys.
30:47
Tony Nevshemal: Thank you.

