I4.0 Accelerator for Driving Edge to Cloud Business Outcomes
49 min video / 43 minute read

Speakers:
Arlen Nipper
President & CTO
Cirrus Link Solutions
Travis Cox
Chief Technology Evangelist
Inductive Automation
Pugal Janakiraman
Industry Field CTO - Manufacturing
Snowflake
Come and learn with Cirrus Link and Snowflake what your data has to say. Snowflake, Inductive Automation & Cirrus Link have partnered to provide Data Cloud Solutions. With Ignition UDTs, MQTT, and Sparkplug, discover how easy it is to leverage Snowflake’s platform to gain derived data insights immediately through native AI tooling. Learn about the impact of the recent partnership of NVIDIA and Snowflake. See how this disruptive technology, in conjunction with Ignition, will elevate and simplify your journey to data insights.
Transcript:
00:00
Travis Cox: Let's do it. Hello, everybody. Welcome. Hope you guys had some fun here today, so far. I know the session's been pretty amazing so far, yeah? We definitely have another great session for you now. Hope you guys are excited about this one, Accelerator for Driving Edge to Cloud Business Outcomes, and we're gonna show a complete edge-to-cloud solution today using data models, and we're gonna actually bring in the Data Dash and kinda show you how all that comes into play.
00:30
TC: Got three amazing speakers, really two, besides myself. We got Arlen Nipper, who is the CTO for Cirrus Link Solutions. He's the man, the myth, the legend behind MQTT. I'm sure a lot of you know him. Excited to have him here today. We also have Pugal Janakiraman. He's the Industry Field CTO for Manufacturing for Snowflake, and he's responsible for building higher-level solutions to kinda drive business outcomes for manufacturing. And we're really excited about this particular session. We're gonna kick it off with Arlen. He's gonna show, we're gonna show Ignition Edge and Ignition, how we can bring that in through MQTT to the cloud, bringing that from IoT Bridge over to Snowflake. We're gonna show you that whole journey here this morning. So Arlen, without further ado.
01:16
Arlen Nipper: Thank you. Thanks, guys. Thanks, everybody. Everybody enjoying it? This has been awesome so far. So real quick, Cirrus Link Solutions, we've been around... This is our 11th year now. We've been growing year on year. This has been a fantastic journey for us. And we started eight years ago. I was over on stage two. And I did the first ever MQTT Engine demo. That was our first Ignition module. From there, we've developed a whole line of Ignition modules, as well as products that we support, including the Chariot standalone MQTT broker, and all of the IoT Bridge products that we've developed for getting data out of Ignition into the cloud. So where I'd like to start is largely due to the community and all of the feedback and the involvement of all of you.
02:13
AN: We started with MQTT and the first demo that we did was just Arlen and one of the engineers I worked with. And we had a little binary way that we published MQTT. It was great. As we started going to conferences and all of that, everybody goes, oh, we do MQTT, and we do MQTT, and we do MQTT. But if we would've plugged it all together, nothing would've worked, because the topic namespace would've been different, the payload would've been different. So we started on a journey for our own sanity five years ago. We said, mm, let's invent a spec. And since we have Engine and we're running on Ignition, let's call it Sparkplug. And so we started the Sparkplug specification. And again, it was internal. People started looking at it, Ignition users. I still remember Chevron going, "Well, Arlen, who owns that?" And we said, "Well, it's up on our public GitHub site. You can download it, it's open source." "No, really, who owns it?"
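The incompatibility Arlen describes is exactly what the Sparkplug topic namespace later standardized: every conforming client publishes on a topic of the form `spBv1.0/<group_id>/<message_type>/<edge_node_id>[/<device_id>]`, drawn from a fixed set of message types. A minimal sketch in Python (the group, node, and device names are illustrative, taken from the demo later in this talk):

```python
# Sparkplug B fixes the MQTT topic namespace so any conforming client
# can be discovered without prior coordination:
#   spBv1.0/<group_id>/<message_type>/<edge_node_id>[/<device_id>]
NAMESPACE = "spBv1.0"
MESSAGE_TYPES = {
    "NBIRTH", "NDEATH",  # edge node birth/death certificates
    "DBIRTH", "DDEATH",  # device birth/death certificates
    "NDATA", "DDATA",    # node/device data
    "NCMD", "DCMD",      # node/device commands
}

def sparkplug_topic(group_id, message_type, edge_node_id, device_id=None):
    """Build a Sparkplug B topic string; reject unknown message types."""
    if message_type not in MESSAGE_TYPES:
        raise ValueError(f"not a Sparkplug B message type: {message_type}")
    parts = [NAMESPACE, group_id, message_type, edge_node_id]
    if device_id is not None:
        parts.append(device_id)
    return "/".join(parts)

# e.g. a DBIRTH announcing an extruder in a hypothetical factory group:
topic = sparkplug_topic("Smart Factory 1", "DBIRTH", "Line 7", "Extruder 7")
```

Because the topic shape is fixed, a subscriber that sees a `DBIRTH` knows immediately which group, edge node, and device it came from, which is the plug-and-play auto-discovery Arlen gets to below.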
03:12
AN: So at that point, we kinda went on this journey of taking the Sparkplug spec to the Eclipse Foundation, which is a standards body, and we worked for three, almost four years, on getting the spec cleaned up and getting it ratified. And at the end of last year, Sparkplug 3.0 was officially released. And from that, what you see up here, is that resulted in the release of a Technology Compatibility Kit. So that means that if you're doing MQTT Sparkplug, whoever wants to do it, you can download the conformance kit and you can run your client against it and get conformance-tested and get listed up onto the Eclipse website, so that we have interoperability. So when Todd Anslinger at Chevron orders your module or buys your product, he can be assured that it is Sparkplug B compliant going forward. And the other interesting thing from that is that because of Eclipse and their relationship with the ISO and IEC standards bodies, Sparkplug is pending right now, but it'll be an international standard, ISO/IEC 20237. So now Sparkplug will be an international standard.
04:29
AN: And then the last thing I wanted to mention is that I know a lot of you, especially in manufacturing, you deal with a protocol called MTConnect. MTConnect's been around for about 15 years. There's probably over a million CNCs and lathes and autoclaves that talk MTConnect. And the cool thing about MTConnect is they already do data models, but they do them with XML. So if you want to get the spindle speed from a current MTConnect agent, you do a GET and it sends you back a 300K XML file that you can parse down and find the spindle speed. And what they've realized is they wanna be able to publish those MTConnect models using MQTT Sparkplug. So we are working with the MTConnect Institute to natively have MTConnect agents running on CNC machines and autoclaves and all this other equipment, be able to publish that information natively. And you can imagine, that means you could have a whole factory with all of this machinery. You turn it on, it publishes into Ignition, you automatically learn everything about those machines, which would be pretty cool. That's our end goal, if you will.
05:46
AN: So the other interesting thing, we hadn't even thought about it, so I had Chris run a report and say, well actually, how many people are using MQTT Sparkplug? And at this point in time, there are over 1,300 separate companies that are using MQTT Sparkplug. And six years, seven years ago, if I were to put this pie chart up, it would have been 95% oil and gas. And over the last four or five years, you can see, we've expanded pretty much across this technology, across all of the verticals that Inductive Automation is in. So the adoption for MQTT Sparkplug across all of the industry sectors has been huge going forward. So real quick, I just wanted to review this. What does Sparkplug do? Well, it does four important things. Number one is it gives you plug-and-play auto-discovery. So with Sparkplug's well-known topic namespace, you know what the topic is, you go subscribe to it, it publishes a message, you get the message, and you go, oh, I know where you came from and I know what you wanna do.
06:58
AN: So, high level, gives you plug-and-play auto-discovery. Number two, very important, as we're finding out, as Colby and Carl talked this morning, this is digital transformation. And to do that, you can't have data in the data swamp, you have to have contextualized data that you can actually see from a business-level standpoint of what that data is. So with Sparkplug, we can publish a model, or the definition of that. Now, you instantiate that and create the asset, and I hate the word, but we'll call it that, you create your digital twin. Now, everybody's notion of a digital twin is different. I think ours is the best and we'll see that in the demo here in a little bit.
07:43
AN: The third thing that Sparkplug does is that we have been wrestling with registers from PLCs and our sensors and our flow computers for the last 47 years that I've been doing this. Modbus register 40002, and it's got a value of 17. 17 what? Degrees, gallons, we have no idea, so what do we do? We sat a human being in a chair, and we said, "Okay, Arlen, engineering high is this, engineering low is this, engineering units is that, and I hope I typed it all in correctly because you're gonna run your plant with all of that information that I just typed in."
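Mechanically, what that human typed in boils down to a linear scaling from the raw register range onto the engineering range. A sketch, with made-up example ranges (the 0-225 degrees C span echoes the extruder demo later in the talk):

```python
def scale_register(raw, raw_low, raw_high, eng_low, eng_high):
    """Linearly map a raw PLC register reading onto its engineering range."""
    span = raw_high - raw_low
    if span == 0:
        raise ValueError("raw range cannot be empty")
    return eng_low + (raw - raw_low) / span * (eng_high - eng_low)

# Modbus holding register 40002 reads 17 counts. Only the metadata a human
# typed in tells us whether that is degrees or gallons; here we assume a
# 0-100 count range mapped to 0-225 degrees C:
value = scale_register(17, 0, 100, 0.0, 225.0)  # 38.25 degrees C
```

The point of the anecdote is that none of these four range numbers travel with the raw value; they live in someone's spreadsheet or head, which is the gap the Sparkplug metric properties close.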
08:21
AN: But with Sparkplug, we create a digital object that I can go back five years from now from this Snowflake demo that I'm gonna do, find that tag, and I can tell you the name, the value, the timestamp, the engineering high, the engineering low, the quality, and any other custom property you wanna decorate that measurement with and get it into Snowflake, we can do that now with Ignition. And then the last thing Sparkplug does is it gives us that state management. Because if I can't guarantee that I know the state of all your process variables, if you're doing command and control, or you're going to the cloud, then you're not gonna trust that, you're not gonna use Sparkplug. So, Sparkplug tells you that you are online, that value is last known good, and then if your network goes down, you're gonna know about it, all the tags will go stale in Ignition, but when it comes back up, we know at the edge, at the Ignition Edge, everything we would have published goes into a store and forward queue, and now we can do store and forward.
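A rough way to picture that self-describing digital object is a value bundled with its timestamp, quality, and a property bag. The sketch below is illustrative Python, not the actual Sparkplug B protobuf schema; the field names are assumptions, and 192 as the GOOD quality code follows the common Sparkplug/OPC convention:

```python
from dataclasses import dataclass, field

@dataclass
class Metric:
    """A self-describing measurement in the spirit of a Sparkplug B metric."""
    name: str
    value: float
    timestamp_ms: int
    quality: int = 192           # 192 = GOOD in the OPC-style convention
    properties: dict = field(default_factory=dict)

# The melt temperature from the demo, carrying its own context with it:
melt_temp = Metric(
    name="Line 7/Extruder 7/Melt Temperature",
    value=148.85,
    timestamp_ms=1695826440000,
    properties={
        "engUnit": "deg C",
        "engLow": 0.0,
        "engHigh": 225.0,
        "deadbandMode": "Absolute",
    },
)
```

Because the engineering range, units, and quality travel with the value, a query five years later can still answer "148.85 what, from which machine" without consulting a separate database.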
09:24
AN: So with Ignition on the left side, we've got that brownfield connectivity that we need to connect to all those different protocols, all those machines, and bring that into the Ignition platform. From the platform, we've got a really cool tool called UDT, and with that UDT, we can organize that data, we can give it context, we can give it engineering units, give it engineering high, we can give it asset properties because it's very important. Think of like PI Asset Framework, you've got all your asset information over here, which is different from your historical data over here, but we're gonna be able to put that together in one single database, and then we can take MQTT Transmission and publish that to an MQTT infrastructure, where it can be consumed by what? Well, it can be consumed by Ignition, for sure, but we're introducing IoT Bridge for Snowflake. So those Sparkplug messages coming from our MQTT Transmission module into a server, well, IoT Bridge sits there, it's an MQTT client, it knows how to receive those messages coming in, and then using Snowpipe Streaming, we can do sub-millisecond inserts of rows into Snowflake data tables.
10:45
AN: So that means that we can take all of that contextual data we have in Ignition, and by a click of a button, get all of that natively into Snowflake, the data cloud platform. But wait, what is Snowflake, right? So I'll bring Pugal out, Pugal will tell us. Now, Pugal and I have a bit of a history. We've been working together since AWS IoT, and right before Christmas last year, Pugal called me, he said, "Hey, Arlen, I'm the manufacturing CTO for Snowflake," and I said, "Great, Pugal, that's fantastic. What's Snowflake?" And so here it is, it's incredible technology, and here's Pugal to tell you about it.
11:31
Pugal Janakiraman: Thanks, Arlen. Okay. So what is Snowflake? There is a reason why we sat together and picked Snowflake as the platform to build this out, because this is an Industry 4.0 journey. There is a whole bunch of requirements around Industry 4.0. One is that to deliver the Industry 4.0 value proposition, you need a very high level of compute and an extremely performant database out there, because this is a big data problem. You're bringing a huge volume of data, spanning IT and OT data sources, into one location, whether you call it a unified namespace or a centralized location where you can facilitate IT and OT convergence, so you need a high-performance database out there. The challenge I have seen, having been in the middle of a few hundred of these Industry 4.0 initiatives, is that today, if customers want to go build an Industry 4.0 solution and they pick a cloud vendor, you have to learn around 200 elemental services, stitch them together to build a solution, govern all of it, go through the whole journey of learning that, and go from there.
12:45
PJ: That is hugely challenging for most of the customers we work with. So what do we do here? Snowflake is a globally connected, cloud-vendor-agnostic data platform. So what does that mean? You don't have to go learn hundreds of services from multiple cloud vendors and build an Industry 4.0 solution. We got that covered. It's one single managed service from Snowflake. We take care of security, we take care of governance, we take care of scalability. Every one of those is taken care of by us. And after that, even cooler, your API of choice is still SQL. You don't have to learn hundreds of new services. You continue to use SQL as the mechanism to leverage data which is present in Snowflake, whether it is around building dashboards, or you want to build an AI and ML model, or build inference around those models, you still use SQL as the API for doing that.
13:38
PJ: So this is extremely powerful: a one-stop shop, an easy button to adopt the cloud. And that's what we bring to the table as Snowflake, as a company. The other one, as I said, you need a highly performant database to do that. So Snowflake is a cloud-native database built 100% on the cloud, and it is one of the most performant databases in the market today. Again, this is not a marketing statement. If I had to pick a number, I just brought up a number on what really is the kind of transaction volume which happens in Snowflake today. In April of this year, 2.9 billion queries were run on the Snowflake data platform. And for just one single customer, in one single table, there are around 50 trillion rows out there. For us to go operate and pull up millions of rows and visualize that, it's no big deal. We do that on a daily basis.
14:33
PJ: And the largest number of queries a customer is executing within a one-minute interval is around 160,000. There are 177 petabytes of data maintained across just five customers' databases. So big data handling, we do it on a daily basis. That is our lineage. We started as a data warehousing company and built a data platform around it. So handling this volume of data is pretty much a daily affair for us. The other one is around collaboration. There is a whole customer ecosystem built around Snowflake. For data sharing between different customers, you don't copy the data over, you can just refer to the data and still run analytics. Why is it important? You've got a whole bunch of OEMs and a whole bunch of suppliers out there. If you want to share quality records or connected product performance data with your supply chain, you don't need to copy the data over.
15:33
PJ: Data can still reside on-premise or it can reside with whatever is your cloud vendor of choice. You can run analytics without the data movement out there. So we provide those kinds of collaboration mechanisms. Another cool thing: with the volume of data, just visualizing billions or millions of records, the human mind cannot comprehend that and derive inferences out of it. We provide AI- and ML-based analytics. In fact, yesterday we demonstrated how you can just provide the data set to our pre-built anomaly detection algorithm. It is going to tell you that there is an anomaly going to happen and you might want to take a look, instead of getting into an unplanned-downtime kind of situation. So we do that as well. We provide all this reference architecture as part of the Snowflake data platform. And obviously, with all these capabilities, it accelerates analytics adoption, whether it is on IT or OT data or a mix of both.
16:31
PJ: So that's what Snowflake brings to the table from a manufacturing perspective. There's a lot of technical detail behind this. Feel free to stop by at our booth. We can go through this at any level of detail on what you would like to understand around what Snowflake brings to the table, technically speaking. Just to summarize, what does it mean for customers and partners? So we got it covered: whether the data is sitting in silos of databases and on-prem systems, or distributed across different organizational boundaries, or across multiple cloud vendors and multiple regions, we can run analytics seamlessly. So I think that is one of the major value propositions we bring to the table. Any data products you build and offer to your customers are global in nature. They can scale. We got the security covered. There is seamless collaboration possible between you and your customers, and your suppliers.
17:31
PJ: It's not an issue at all, okay? Performance, as I said earlier, we got the performance factor covered as well, okay? Added to that, we got thousands of customers today using Snowflake for various analytical needs, with pre-built integrations with popular systems like SAP, in addition to the OT systems which Arlen talks about and which he's going to demonstrate as well. And we provide Snowflake Marketplace, where you can not only take the products you've already built today on Ignition, you can monetize those data products and offer them through our marketplace to thousands of customers we got around the world. So that's what Snowflake brings to the table. Instantly scalable. You can build global data products which you can take to your customers. So pretty much that's the Snowflake value proposition.
18:25
PJ: So again, quickly before I hand it over to Travis, this is how the journey started for us. Ignition on the edge, with zero coding, using the Snowpipe Streaming API, sends the data to Snowflake. So again, this is one of the best integrations built by any cloud vendor as of today, from a cost point of view and a fidelity-of-data point of view. To accurately represent every possible kind of manufacturing data in the cloud, you need to support around 13 data types. No other cloud vendor does that today. At maximum they support four data types, which means all the other data types you slam onto the existing data types you support. And there are always loss-in-translation issues associated with that.
19:10
PJ: In our case, we support all 13 data types that Sparkplug B has associated with it, and this is the lowest possible cost integration, with high-performance, near real-time analytics we can perform as well. That's what we built and launched as part of the manufacturing cloud between Inductive Automation, Cirrus Link, and Opto 22 as a joint solution offering. Okay. We have made that much better now with Snowflake, with Ignition Cloud Edition as a connected application available in Snowflake, and along with that, in addition to OT data, you got IT data, you got third-party data like weather, traffic information, and supply chain information already being managed in Snowflake, so you have an opportunity to build applications on top of Cloud Edition and take them to your customers. And every application you have built and launched at the edge will seamlessly work in the cloud with this edition. I think, again, this is a cloud vendor perspective. With that, I'm going to give it to Travis to talk about it from the Ignition point of view.
20:11
TC: Alright. Thank you.
20:19
TC: Alright. So everything that we are showing on this slide here is something that's available today. And we're gonna show a full example of how, with a demo with Arlen and myself, how we go from Edge to Cloud going into Snowflake, back into Ignition Cloud Edition so we can show some dashboards, get information out there. And what we're talking about is what Snowflake's calling Connected Apps, right? We're simply gonna be deploying Ignition Cloud Edition to our Azure or AWS account, and we're gonna connect to Snowflake through JDBC, and be able to get that data from there and put it onto dashboards. So we're gonna show you what that looks like. However, we're thinking future and how this can even grow and get even bigger as we go forward.
21:01
TC: And there is a potential future landscape where... Whoops. All of that can be simply running all within Snowflake's cloud environment, so that you could spin it up really, really fast and get these solutions going quickly. So, but the idea is really simple, right? The focus of this is being able to get data that is modeled, customers need to... Basically it's a culture shift, right? Where they have to think about how they're gonna standardize on data and their data models across their entire organization, and the idea of this is to get it into a storage where that data is stored with its context, so we can go a lot further. So, what's really funny about this whole thing, when we got introduced to Snowflake is, at the end of the day, it's a database and we can connect to it just like we connect to every other database within Ignition through JDBC. And you can install that JDBC driver really easily in Ignition and you can issue queries just like we do with any other database.
21:54
TC: And so, we're gonna show that here today. It's very, very easy to get connected, very easy to issue those queries. We can issue them anywhere within Ignition, and they also provide a REST API so you can actually go a little bit further as well with that. There was nothing we had to do on day one. We just had to install the JDBC driver and get started. And from the very beginning of our company, we've been centered around SQL databases. This is just now a database that's highly scalable, it's in the cloud, and that allows for a lot more opportunity for what we can do with that data. And a lot of that is around AI and ML, as Pugal was saying, there's anomaly detection and forecasting services that are built into Snowflake, and you basically train models and you can do the detection on those just by running simple SQL queries against Snowflake.
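Snowflake's managed anomaly detection is driven entirely from SQL (you train a model object, then call it from a query), but the underlying idea can be sketched with a plain z-score test. This stand-in Python detector is a deliberate simplification, not Snowflake's actual algorithm; the temperature values are invented:

```python
import statistics

def detect_anomalies(history, current, threshold=3.0):
    """Flag readings more than `threshold` sample std devs from the
    historical mean; a crude stand-in for a trained anomaly model."""
    mean = statistics.fmean(history)
    stdev = statistics.stdev(history)
    return [x for x in current if abs(x - mean) > threshold * stdev]

# A week of normal melt temperatures, then two fresh readings:
history = [148.2, 148.9, 149.1, 148.6, 148.8, 149.0, 148.5, 148.7]
flagged = detect_anomalies(history, [148.9, 163.4])  # only 163.4 is flagged
```

In the real service the training data and the scoring both arrive via SQL statements over the same tables the bridge is streaming into, which is why no separate ML pipeline is needed.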
22:45
TC: So it's very easy to work with this. However, it doesn't have to be within that. Any other service or tool that's out there that wants to be able to do that same thing, you can connect to the database the same way and you have all that data, you have all the context, you can go and learn everything that's there and go a lot further, right? And with this, what we're talking about too is not only you get the storage, you get these kind of services, but you get those results back into Ignition so that we can provide that information back to our operators, can provide alarms, whatever it might be. So it's kinda that full circle kind of integrated solution. So that's all I wanted to say really, in terms of Ignition and Snowflake. We're gonna get into the demo a lot more, but I did wanna bring up the Community-Powered Sparkplug Data Dash, because we thought for the conference here, we wanted to show this whole thing in action.
23:31
TC: And well, we got all the community to participate, where they're basically leveraging Ignition or Ignition Edge or potentially have a smart device that speaks MQTT Sparkplug and they're gonna build a data model, publish that up to a Chariot broker that's in the cloud. Real simple. Then we can use the IoT bridge for Snowflake by Cirrus Link and all that data from Sparkplug goes directly into the Snowflake database. We're showing it on a dashboard within Ignition, but it's going to Snowflake database as well. And we can easily go and query that data. And we went one step further and we're actually showing the anomaly detection within the Data Dash. So we'll do a demonstration of this in just a moment, but wanna show you just how easy it is for this solution. And it's all something we could do right now. It's very, very simple to get started with this whole thing. So with that, Arlen, I'll bring it over to you for the demo... Start at the demo here.
24:23
AN: Alright. Cool. Thank you. All right. Real quick, the topology is, I've got some simulated devices. Some of the devices are in Stillwater, Oklahoma, that I'm actually talking to, publishing those up to MQTT Distributor running on Ignition on an EC2 instance in the cloud. And so what we're gonna do is we're gonna go into Ignition, we're gonna build our "digital twins," but they're much more than digital twins. We're gonna show all that context and then we're gonna say, "Okay. Well now we've got this single source of truth. How much code are we gonna have to write to get it into a highly scalable Ignition or into a highly scalable cloud database?" And then from there, Travis is gonna go, "Oh. Well I've got that data in there. Let's see what I can do with Ignition Cloud Edition."
25:13
AN: So we're going to do the live demo, which we always love doing. All right. So, I know it's a bit of an eye chart, but it's hard to zoom in on the Tag provider. But I've got a Tag provider, Smart Factory and Smart Factory, underneath that I've kind of got the whole unified namespace of, I've got Smart Factory one and under Smart Factory one, I might have some building management systems because we've got BACnet/IP with Ignition now, I might have some Opto 22 KYZ meters and I've got my equipment in the factory, right? I've got CNC, a lathe, haul-off machine. And then down here you can see I've got the notion of an extruder. And this extruder has some process variables, some temperatures and some pressures and things like that. And had we... The way that we've been doing this going forward is that executives came to operations, they go, "Hey guys, we heard there's digital transformation. We gotta get all of our data in the cloud."
26:15
AN: "Okay. Well let's put all of our data in the cloud." So they go out and they write a bunch of code and they go in here and they go, "Okay. Let's do this and then let's pretend this is the cloud over here. And boom. Okay. We're done." We've got all of our data going into the cloud. It's all going into a data lake. But wait a minute, without some context, how can I use this? So I come into my data lake and I wanna look at something, and I've got 148 degrees, 148.85 degrees, where'd that come from? What machine was it attached to? What plant did it come from? I don't know. Oh. That's over another database. So I need to write some code. And then maybe there was some other asset information, now I've gotta get some code. And what happens is we've got terabytes of data hitting data lakes in the cloud and nobody's doing anything with it because it's too hard and you can't get any context from the data. So, let's drain the swamp. And before we do that, let's go into that extruder and actually give it some context.
27:34
AN: So I wanna build a UDT of an extruder model. And every time that extruder shows up, the first thing that I want to do is I probably want to give it some asset information. Asset ID, asset serial number, location, anything else that you want to be available to you on each instance of that extruder in Snowflake that you want to be up there, you can define in your UDT and it'll be automatically published up there. And now that I've got my asset information, I can go back to that melt temperature and say, "Look, for that machine when melt temperature shows up, I don't care if it came from Allen-Bradley PLC or a Modbus or Rockwell, I want to know that it represents melt temperature, it's 0 to 225 somethings. Those are in degrees C, it's using absolute deadband.
28:22
AN: There's my deadband percentage and my scale mode and anything else again that I want available to me in Snowflake when I'm done with this demo, I can define in this UDT. So now that I've defined my machine, very, very simply using tools on the platform, I can go in and define a dryer and a bunker, and now I can come back and take those nebulous tags and look at the fact that this extruder actually was, extruder seven, was a model of an extruder. And you can see here I've got my asset ID Wile E. Coyote, asset serial number B549 courtesy of Hee Haw, location in Oklahoma and all my process variables. And since it is a UDT, I can use the power of Perspective or Vision to be able to start taking that and maybe when the extruder feeds into a bunker, and the bunker feeds parts when it comes out into a CO2 dryer, and maybe I've got an Opto 22 EMU and it's measuring the three-phase power on that extruder. But my point is that at 3:14 on September 27th, this is the single source of truth of my factory.
29:48
AN: This is the single source of truth. I didn't define it in the cloud and then try to bring it back down and iterate back and forth, I know this is my factory. So I just came off of a really cool demo from Snowflake and I go, "Wow. What if I could get that single source of truth into Snowflake? How hard would that be?" So what I'm gonna do is I'm gonna go to the Azure or AWS marketplace and I'm gonna download the IoT Bridge for Snowflake. I'm gonna install it. And when I install it, it's going to go into my Snowflake console here and it's gonna create two very simple databases, a node database and a staging database. And in here, I have a very simple Sparkplug device message table that you can see right now is empty. And when we installed it, we also added some convenience views, and since it's all going up from UDTs, I've got a view that says, "Hey, tell me about all the UDTs that are in that factory or all the factories." Oh, well, I don't have any factories yet. So I need to fix that. Let's go back into our Ignition configuration. And you can see here that I demo a lot. I've got a lot of tag providers and if you look at Smart Factory, it's pointing to the Snowflake MQTT server. So that's great. I'm gonna come over here and I'm gonna enable my MQTT Transmission. Okay? And when I did that, what happened?
31:36
AN: When I did that, MQTT transmission looked into the Smart Factory Tag provider and it says, "Hey Arlie. You've got all these models, you got dryers and extruders and conveyors." And so we're gonna publish those using Sparkplug. And the Snowflake Bridge was sitting there listening to an MQTT server. It was a very... It wasn't doing anything. All of a sudden, messages started showing up. Remember that advantage, auto discovery. "Oh. We got an extruder." Now I'm gonna put that into Snowflake using Snowpipe Streaming. So 15 seconds ago, I didn't know anything. Let's go back to our Snowflake console and let's hit Refresh. And lo and behold, we now have a Smart Factory 1 with views of every machine that we've got in that factory.
32:32
AN: Before I go look at one of those, let's ask the SQL database, what models do I have? Let's ask it again. "Oh. Arlen, you've got an extruder, a chiller, a dryer." So now I literally know everything that was in that UDT on Ignition. Now that I know all of the models, I can go back over here and say, "Well, now that I know that, let's go to that extruder and let's do a SQL query, which everybody knows SQL, and see this unified namespace: Smart Factory, Smart Factory 1, line seven, extruder seven, when did the message arrive, what was its sequence number, and all of my process variables in real-time, all hydrated, no holes in the database. I literally could start using this today. So if I know SQL, it took me five minutes to get all my machines defined, get everything up there in real-time. And now for every machine I had in that Smart Factory, I have a single source of truth, with all the real-time data showing up in Snowflake. Pretty cool. Now, once it's in Snowflake, what can we do with it from there? And with that, I'll turn it back over to Travis.
33:55
TC: Sweet. Alright. So, again, once it's in the Snowflake database, it's just a matter of going and doing, issuing queries against that. So, I'm going to switch over and show you the Sparkplug Data Dash here. And so this is our server that we have that's running in the cloud. And you can see that we've got a Snowflake database connection here that is connected and valid. So what we did first though is we went to the driver's part here in Ignition and then JDBC drivers, we had a bunch of pre-built ones that come with it. Now we're working on getting the Snowflake one built into Ignition, in a new build. But for now, you can go download the JDBC driver and simply just go ahead and install it.
34:37
TC: And we have some instructions on that, a little Read Me on how to do that. Real simple. Get that installed. Once we have that installed, we can go and make a connection like we have here. And just like any other database, once I have that valid connection, I can use it anywhere in Ignition. So I'm gonna open up the designer here and show you what we've done for the Data Dash, and I'll show you the application in a minute. Basically, if I go to the Snowflake section, we have a bunch of predefined named queries that query certain tables. So, he was showing that Sparkplug device messages table, and if I go and look at this, you can see that we're just doing a standard select query against that Sparkplug device messages table.
35:21
TC: And in this one I'm filtering for a specific group ID, edge node ID, and a specific data model that I wanna look for, which we're using for the actual dashboard itself. So it's incredibly easy for me to go into Ignition. In fact, we can go into the database query browser against the Snowflake database and easily start saying, "SELECT * FROM stage DB, Sparkplug device messages," and bring that data back anywhere in Ignition. And in those queries, there could be millions of rows. In fact, with the Data Dash, we've got over 120 million rows at this point that we've been logging, and it's very, very high performance to get that information back.
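A named query of the kind described here might look like the following. This is a hedged sketch: the talk mentions a Sparkplug device messages table, but the exact table, column, and identifier values below are illustrative assumptions.

```sql
-- Illustrative names; the dashboard's real named query binds these
-- identifiers as parameters rather than hard-coding them.
SELECT *
FROM STAGE_DB.PUBLIC.SPARKPLUG_DEVICE_MESSAGES
WHERE group_id     = 'Smart Factory 1'
  AND edge_node_id = 'Line 7'
  AND device_id    = 'Extruder 7'
ORDER BY msg_ts DESC;
```

In Ignition, the literal values would typically be replaced with named-query parameters so the same query serves every data model instance.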
36:12
TC: So as you can see, that's how we have developed it with the Data Dash. Let's actually go and show the outcome of what we built. So we're gonna go to tryignitioniot.com. If you haven't checked out Data Dash, simply go to tryignitioniot.com on your phone. In the tech lounge, there's a TV up there that has this application open. So here's what we did. We asked participants to go and do exactly what Arlen just showed. He built an extruder machine, a data model. Build any kind of data model that you want, right? Provide that context, provide those parameters that you wanna associate, provide the engineering units and the engineering ranges of the values. Basically, create a UDT within Ignition or any other device that speaks Sparkplug, and have that published up to a cloud MQTT broker. With IoT Bridge, everything he showed, that all came into Snowflake and it's all ready to be discovered. So with this dashboard, you can actually go and see these data models. So if I go look at, for example, I'll use Opto 22's EPIC c-store. We're just showing a visualization of this. Let's go to a different c-store.
37:20
TC: So, we're just showing a visualization of that data model. You can see the information up here. There's a perspective template that corresponds to that data model, so that we can easily look at that live data. But again, that history is all going into Snowflake and it's accessible so that we can query it. So let's go over here to the Snowflake tab. The first overview of this is basically just a discovery of all the data models that happen to exist within Snowflake. So much like he just showed how all those views got created, now we can actually go and query those, and we can discover information about this. For example, since I was using the Opto 22 c-store, I'll go into the Stillwater one and look at that particular data model. There, on the right-hand side, we can see all of the parameters that are part of this, like the UDT definition: all the parameters that are there, what the data model is, and here are all of the process variables that are in there.
38:17
TC: For the process variables, like, for example, if I look at this freezer compressor, I'm gonna get, of course, that it's kW and I get the range, 0 to 1500. So I can have Ignition completely independent, not even connected to the MQTT broker, and I can see all the data models that happen to exist within Snowflake, because again, using Sparkplug, those templates were sent to a broker and into Snowflake, so it's that same exact context. Very, very easy to see that. This overview is kinda just showing all the data models that are in there. Let's see if I can clear this out... there's no exit on that, but we have a whole slew of different data models in there. At the end of the day, we can then query the history very, very easily, build dashboards, and go a lot further.
39:06
TC: So I'm gonna show you two kinds of demos. One is we're just gonna query the history and bring it back into trends. So we're gonna go and select... I'll need to go down to one of those instances, those data models that we have. I don't wanna look at that data, so again, we'll look at the Opto 22, since we're on there, we'll go to Stillwater, look at the EPIC c-store. And because we have the data model stored, you can see here are all the tags, all the process variables associated with it. We already know what those are, and I'll go and select a particular instance. So here's our c-store 405, here's my date range that I wanna query the history on, and we'll just select some process variables. I'm not gonna select all of them; we'll just do, let's say, the compressor, all the freezer system, and bring those back. I'll apply. And basically, at this point, for that time period that we have up here, we're gonna issue a query to get back that history. The idea is that we can simply query all that data and bring it back on trends... Hey, there we go, just took a moment for that information to come back.
40:03
TC: So, not only is all that data stored there, we can discover it, we can understand what it is, we can query it, and we can put it back onto a dashboard very, very easily. That's one demonstration of what we're doing with Snowflake. The other, of course, is the ML/AI side. We're talking about anomaly detection. So if I go back over here to the map and we look at a particular location, let me go back to that Stillwater one, on that freezer, where we have that Compressor kW, we do have Anomaly Detection turned on in Snowflake. We trained the model on good data already, and basically just ran a SQL query to train the model. And once it's trained, since that data is piping through the bridge into Snowflake all the time, on the Snowflake side there's a task running, very, very quickly, that is basically looking at the last bit of data we brought in and running it through that model to see if it detects any anomalies. Now, we're kind of manufacturing this by clicking a button that says Trigger Anomaly, but it is going through that whole system and coming back, where we're getting that feedback back in Ignition. So if I go ahead and do that, what we're doing is gonna...
41:08
TC: We're gonna spike that Compressor kW, which of course is gonna cause that anomaly to happen. But as you can see, that came back extremely fast, running that model very, very quickly on the Snowflake side. We got the anomaly as an alarm within Ignition, so we could do something about that, and those can be running all the time. And because we trained the model off of that UDT, any new site that has that same data model can take advantage of the same thing that we've built, so we can easily do anomaly detection across the entire enterprise on those data models.
41:41
TC: So it's very, very easy to get these things going, to go further with all of this, not only are we showing how we can get the data into... Get it into Snowflake and how we can leverage those UDT models, we can easily bring it back into dashboards and show that data very effectively. So with that, I think we'll just be opening up to questions.
42:11
TC: So anybody have questions out there? Yes? We have one down here...
42:14
Speaker 4: I know it's hard to say, but what's the rough startup cost of getting the MQTT,
42:22
And then the Snowflake?
42:26
AN: Free. That's the rough startup cost. Everything that you're seeing there, you can run in trial mode, right? So you'd probably have to get a test account, and you can get a test account from Snowflake. For the IoT Bridge, that's 30 days free. So you can do it for 30 days, basically for free.
42:47
TC: The whole thing would be, you've got Ignition, which you could do in trial period, no problem, and we can also provide longer trial licenses if required. The IoT Bridge is 30 days free, easy to work with. And with Ignition Cloud Edition, that would be the broker; you'd wanna have some broker up there in the cloud. It could be that, it could be something else, and you can run that for a couple of hours or a few hours. It's pretty low cost, maybe a dollar per hour. And then with Snowflake, I believe, when you create the account, I think there are credits you already get.
43:17
PJ: Yeah, there are some credit options; we can work with you on that. I would say pretty much everything, the compute, the reporting, is pay-as-you-go. It's like an electricity bill: when you use it, you get the bill; otherwise, we're not going to charge you. So, a pay-as-you-go model. That's what it does. And again, having done those kinds of Industry 4.0 initiatives,
43:38
PJ: Multiple times, I would say this is the lowest possible startup cost around Industry 4.0, because even four years back, the pitch for initiatives like these was: for a few hundred thousand dollars, we can connect three machines and deliver a business outcome. That's no longer the case. It'll be hardly a few thousand dollars to get started. At pilot level, I don't see cost as a challenge.
44:06
TC: And yeah, and one thing to mention is that... Oh, I lost my train of thought... Oh, well, we'll come back to that.
44:13
AN: Well, no, I think... What I was gonna mention is that the other thing that's really different here, and it was an advantage, is that Snowflake didn't have an IoT service when we started this project, so they had no notion of charging by the measurement. So it doesn't matter if you're publishing 1,000 tags or 50,000 tags; you're running in a compute warehouse, so you're not charged by the measurement like you are on all the other data services. You're just running in a compute warehouse, and as long as you stay within that warehouse, you know your cost.
44:47
PJ: In fact, there are two advantages which came with that. As Arlen mentioned, there is no IoT service, [0:44:53.8] ____ but last year when I took this role, I told Arlen that this time, when we do the integration between Snowflake and the edge, for edge-to-cloud business outcomes through Inductive Automation, it should be the best-in-class integration ever built on this planet so far. Again, I think we had an advantage there because we didn't have an IoT service. There are two major advantages which came with it. One, there is no additional cost factor; we are not gonna charge you for an IoT service, which other cloud vendors are going to do.
45:26
PJ: The other one: pretty much every IoT service has a sub-optimal view of the manufacturing asset world, and the way they have done the modeling always becomes the challenge when you try to move that edge data to the cloud; there is always a compromise made on the data model. And when you try to change the data model, you've got a bigger problem associated with it. These are all challenges we never had, so we made sure that we can handle every possible data type. And data ingestion, in our viewpoint, should be a commodity, because either way, we don't make a lot of money on data ingestion; it's pretty much nickels and dimes to move the data from edge to cloud. It's really around compute; that's how we charge you. So we are trying to keep it as easy as possible to move the data into the cloud.
46:09
TC: I remembered my train of thought real quick, which is that for existing customers who already have Ignition, it's incredibly easy to take advantage of this. We're talking about simply getting MQTT Transmission and just plopping it in; if you have models already built, it'll be that quick to get integrated.
46:24
AN: Exactly. If you already have Ignition, we're probably talking less than a day.
46:27
TC: For new customers though, for people that maybe have a new site or a new facility, or haven't had Ignition at all, it's going with Ignition Edge or full Ignition, putting it in to connect to PLCs, and building the models, which is super easy. In fact, we've also built a kit with Opto 22, where they have their EPIC controller with Ignition Edge on it, already ready to go; especially for energy, with the energy monitoring units, to basically pump those energy UDTs into the cloud. So there's a lot of easy ways to get started. Other questions? There's one in the back up there.
47:07
Speaker 5: So, for the piece that you were speaking about, in terms of ML or the pre-trained models, can you go into a little more detail about A, the training that goes into those pre-built models and B, the explainability behind those models?
47:21
TC: Yeah, so for the Anomaly Detection Service, the way that works is, you're basically almost calling a stored procedure. You're doing a train-model call and you're specifying the data set that you wanna train it on. And so in our particular case, we're doing it on one of those [0:47:37.1] ____ views that Arlen showed, for a particular...
47:39
TC: So we did it for this, the c-store, on that freezer compressor. We basically brought back the data from the time period that we wanted to train on; we trained it on, I think, a few thousand rows of data that was good. So we call that function once and it creates an object in Snowflake, the anomaly detection object. And much like you're creating a table or a view or a task, something like that, you're creating one that you can then run again later. So then, next time, when you want to detect an anomaly, you just run another SQL query: basically, you call this anomaly detection object by name, you say detect anomaly, you give it a new query or a new set of data you wanna run through, and it will give you back a result, a table that's gonna show you, for all the data, whether there are anomalies or not, what the variation is, all of that. And so we just take that result, and if we see anomalies, we trigger that alarm to come back to Ignition. As simple as that, two queries: one to train and one to detect.
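The two-query flow described here can be sketched with Snowflake's built-in ML anomaly detection object. This is a hedged sketch, not the demo's actual code: the object, view, and column names below are illustrative assumptions.

```sql
-- 1) Train once on a window of known-good data (illustrative names):
CREATE OR REPLACE SNOWFLAKE.ML.ANOMALY_DETECTION compressor_kw_model(
    INPUT_DATA        => SYSTEM$REFERENCE('VIEW', 'CSTORE_FREEZER_TRAINING'),
    TIMESTAMP_COLNAME => 'MSG_TS',
    TARGET_COLNAME    => 'COMPRESSOR_KW',
    LABEL_COLNAME     => ''   -- unsupervised: no labeled anomalies
);

-- 2) Detect on newly arrived data, e.g. from a scheduled task:
CALL compressor_kw_model!DETECT_ANOMALIES(
    INPUT_DATA        => SYSTEM$REFERENCE('VIEW', 'CSTORE_FREEZER_LATEST'),
    TIMESTAMP_COLNAME => 'MSG_TS',
    TARGET_COLNAME    => 'COMPRESSOR_KW'
);
```

The detect call returns a result table with an is-anomaly flag and bounds per row, which is what the demo inspects to raise the alarm back in Ignition.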
48:40
Speaker 6: Okay. Is there any plans to add discovery tools for engineers who like to look at trends initially to build out some ideas before they run it through the model?
48:54
PJ: If you can swing by the Snowflake booth, we can go deeper into that. That's a longer conversation, if you don't mind.
49:02
AN: Alright.
49:02
TC: Alright. Thanks, everybody. Awesome.
49:03
AN: Thanks, everybody, appreciate it.