Resources

Browse our ever-expanding library of useful articles, case studies, videos, webinars, and more.

Featured

View All Resources
case study Manufacturing

Iron Foundry Gains Competitive Edge & Increases Efficiency with Innovative Technology

With help from Artek, Ferroloy implemented Ignition to digitally transform their disconnected foundry through efficient data collection and analysis while integrating the new system with existing software and specialized machinery.

11 min video

Watch the case study
webinar

Accelerating The Journey From Edge To Cloud To Results

In this webinar, find out how an integrated and proven set of technologies can make the edge-to-cloud journey much faster and easier. Industry experts will explain how to drive successful business outcomes through tools like unified namespace (UNS), digital twins, data lakes, KPI visualization frameworks for OEE and other metrics, and a lot more.

60 min video

Watch the webinar
Boost PLC & Device Interoperability With New Drivers
On March 28, learn how to remove the limits of what you can connect your system to. You’ll discover how Ignition makes it a breeze to improve PLC connectivity. You’ll get up to speed on the new DNP3, IEC 61850, Mitsubishi, and Micro800 drivers for Ignition. Plus, you’ll have the opportunity to ask any questions related to the drivers or interoperability at large.
article Guide

What Is MES?

Simply put, a manufacturing execution system (MES) is used to monitor and manage work-in-process on the factory floor, covering resource scheduling, production workflows, recipe management, traceability, inventory, quality assurance, document control, and more. A successful MES implementation will not only improve efficiency but also make manufacturers better businesses.

3 min read

Read the guide
webinar

How Digital Transformation Starts With SCADA

In today's rapidly evolving industrial landscape, we cannot overstate the significance of a robust and modern Supervisory Control and Data Acquisition (SCADA) system. SCADA systems are at the heart of operational technology (OT), where we find most of the data needed for Digital Transformation. Ignition SCADA bridges the gap between OT and information technology (IT), facilitating the seamless flow of data essential for monitoring, control, decision-making, and much more.

57 min video

Watch the webinar
case study Water/Wastewater

Water Utility Implements Ignition System to Improve Efficiency, Compliance, and Reporting

California American Water found that the SCADA system at its Monterey facility was struggling to maintain the high standards required of a water utility in a “hydraulically challenged” area and chose Flexware to replace its legacy SCADA system with Ignition.

10 min video

Watch the case study
article Guide  |  Data Center

NERC CIP Best Practices with Inductive Automation and Ignition

This guide addresses best practices for using Ignition and working with software vendors in a CIP-compliant way, with recommendations based on specific CIP standards.

1 min read

Read the guide
Empower Innovation With Unlimited Licensing
Discover the pure magic of an unlimited licensing model in this webinar, with examples of real-world projects that benefited tremendously from having unlimited tags, clients, screens, devices, and more, for one single, astonishingly affordable price.
icc | 2023 IA Session

Build-A-Thon

The conference is guaranteed to go out with a bang as the Build-a-Thon closes out ICC once again. Join us for the conclusion of the ultimate Ignition challenge, where the final two teams compete for the glory of developing the most elevated Ignition solutions and being crowned Build-a-Thon champions. Who will wear the orange winner’s blazer after the votes are all counted? There’s only one way to find out, so stick around to catch the competitive spirit and enjoy an unforgettable music performance from IA’s Department of Funk that you’ll be humming for weeks!

76 min video

Watch the video
icc | 2023 IA Session

Technical Keynote

Developing industry-defining software is no easy task, but someone has to do it. Join our Development team as they highlight recent improvements and upgrades, current developments, and a behind-the-scenes peek at the future of Ignition before answering questions directly from the Ignition community.

60 min video

Watch the video
icc | 2023 Community Session

From LinkedIn Connections to Community Leaders: The Automation Ladies Experience

What happens when two passionate ladies in industrial automation meet on LinkedIn and decide to create a podcast? Magic. And growth, lots of growth. Dive into the journey of the Automation Ladies podcast and how it has become an engine for both business growth and network expansion. Nikki and Ali will unpack how amplifying your voice online can have real-world business benefits. If you want to grow your customer base, attract top-tier suppliers, or strengthen your community, this talk should have some actionable takeaways on the power of creating an authentic personal brand by sharing your journey with the world.

46 min video

Watch the video
icc | 2023 IA Session

An Overview of Ignition’s MongoDB Connector Module

Earlier this year, we introduced a connector module that allows an Ignition Gateway to integrate with MongoDB. This session provides an overview of MongoDB, outlines the connector module's capabilities, and demonstrates how you can most effectively leverage it to elevate the functionality of your existing deployments.

42 min video

Watch the video
icc | 2023 Community Session

Hitting a Home Run with Ignition

Ignition is not limited to industrial applications alone; its powerful features extend to use cases of all kinds. From its intuitive design features to its robust scripting capabilities, you can harness the full potential of its flexible architecture and rich toolset to create innovative solutions in non-industrial automation development. Witness this potential firsthand through a baseball scoring and statistics app developed entirely in Perspective, with examples of how tags, persistence, scripting, and views can be utilized in a non-industrial setting. Our goal is to inspire others to elevate their lives and hobbies in new, creative ways with Ignition.

45 min video

Watch the video
icc | 2023 Community Session

The OG Perspective: 10+ Years of Ignition Wisdom and Beyond

In this session, we'll explore more than a decade of experience with Ignition, sharing valuable insights as a long-time member of the Ignition community. We'll take a practical look at how Ignition has evolved and its role in modern manufacturing, including topics like MES, OEE, AI, and more. It's an opportunity to gain practical knowledge and understand the journey from the early days to today's automation landscape.

42 min video

Watch the video
icc | 2023 Community Session

Rising to the Challenge - Adventures in System Conversion

The folks at Flexware are no strangers to a challenge. When the opportunity to convert a large system over to Ignition arose, they took it head-on. Join them in this session as they discuss the project, share their lessons learned, describe the custom tools they built, and walk through their thought process.

41 min video

Watch the video
icc | 2023 IA Session

Learning Ignition Fundamentals

Whether you're new to Ignition or just want a refresher, this session is made for all. The Inductive Automation Training team covers all the basic knowledge and fundamental features you need to get started with Ignition.

45 min video

Watch the video
icc | 2023 Community Session

Integrator Panel

Which new innovations will prove vital for future success and which flash-in-the-pan trends are destined to be forgotten by ICC 2024? During this panel discussion, some of the Ignition community's most successful integration professionals share how they are responding to emerging technologies and techniques that are driving the evolution of the automation landscape.

44 min video

Watch the video
article Guide

Using Keycloak with Ignition

Keycloak is an open-source Identity and Access Management solution for adding authentication to applications or services. With Ignition, Keycloak functions as an Identity Provider to authenticate users and define roles to access client/session views.

10 min read

Read the guide
icc | 2023 Community Session

Tyson’s Smart Factory Journey

This session provides an overview of how Tyson has standardized operations with Ignition as a SCADA platform, highlighting and detailing how consistent data and dashboards allow for faster implementations. The talk will also include best practices that Tyson has developed, and will identify some of the key integrations that have helped simplify and streamline data collection processes.

28 min video

Watch the video
Don’t Get Lost in the Cloud: Tips & Tricks for Successful Ignition Deployment and Management

With the release of Cloud Edition, it's never been easier to get Ignition running in the cloud. But are you ready for it? From security concerns to misconfigurations, there are plenty of pitfalls to stumble upon when managing applications in the cloud. But fear not, as help is on the way. Join the experts from 4IR in this session where they'll provide helpful tips and tricks for deploying and managing Ignition in the cloud.

Transcript:

00:04
Susan Shamgar: Hi. So my name is Susan Shamgar. I'm a Technical Writer at Inductive Automation, and I'll be your moderator for today's session, "Don't Get Lost in the Cloud: Tips & Tricks for Successful Ignition Deployment and Management." To start things off, I'd like to introduce our speakers for today. First up, a longtime member of the Ignition community, Joseph Dolivo, who currently serves as the CTO of 4IR Solutions, an Inductive Automation Solution Partner focused on cloud, Digital Transformation, and life sciences. For more than a decade, Joseph has focused on modernizing manufacturing by intelligently adopting state-of-the-art technologies and principles from the software industry. James Burnand is a 20+ year veteran of the industrial automation ecosphere, who has now turned his focus toward providing the infrastructure for manufacturers to reap the benefits of the cloud for their plant floor applications. He weaves cybersecurity, operational requirements, and management into 4IR Solutions' offerings and provides education and consulting for companies looking to begin their journey into a cloud-enabled and highly automated OT infrastructure. Please help me welcome James and Joseph.

01:20
James Burnand: Thank you, Susan. Your payment will be after the session. We really appreciate that. Hi, everybody. Welcome to the session. Hello, people livestreaming. So Joe and I are here to talk to you about the cloud today. So we've talked all week about what we do in the cloud, but what we really want to do today is help you understand what are some of the considerations, what are some of the tools, and what are some of the methodologies that you should consider if you're going to be doing deployments in the cloud. So to start off, I'm going to review a little bit about that and go into a little bit about why the cloud is in use today, what are some of the benefits, and where we are seeing adoption taking off. And then from there, Joe is going to go into the real deep technical details about what things you can do, what tools you can use, and how to actually go about doing that.

02:08
Joseph Dolivo: Yep. We're excited. We'll get as deep as we can with the time that we have, but definitely save your tomatoes and everything else for the Q&A session afterwards. As long as my voice holds out, I will answer as many as we can, and we'll have contact info provided for future questions.

02:22
James Burnand: Alright. So let's get started. So why do people care about the cloud? I know we've been talking about it. It's become this huge discussion point. There's a lot of attention around different opportunities that are opened up, be they AI, be they flexibility, but ultimately one of the most basic things that's important about using the cloud is you only pay for what you use. So you're not buying a set of servers and computing resources that will have the capacity you need for the lifecycle of those assets, you're not buying five years worth of storage that you're eventually hopefully going to use five years from now plus your safety factor. You're literally paying for just what you're using and as you consume it, that price and that cost goes up. So controlling cost is really... If you think about why people are using the cloud in the first place, that's the biggest reason.

03:15
James Burnand: But the other benefit you get is that you are able to scale things. So not only do you get to only pay for what you use, but you have the ability now to theoretically endlessly scale those resources based on what the growth of a system is or the growth of the amount of data that you collect or the collection of different applications that you deploy. It also opens up opportunities with capability. So there are things that are just hard to do yourself, where you can go and install a service from a cloud provider and they do it for you. There's managed services, there's application functions, there's third-party plugins. There's all sorts of things that become remarkably easier to do when you take advantage of those precompiled and prebuilt resources that you can buy from a public cloud provider.

04:03
James Burnand: So what do we see people using it for and what are good use cases? So a lot of the organizations that use the cloud, and the folks we've seen at this conference quite a bit, are people who have very distributed systems. So telemetry-type systems, places where it doesn't matter where my server is, everything that I'm collecting from is remote, that's a really great use case for the cloud. Or where there's a lot of focus on data and processing, and I need to be able to use more advanced functions and features to be able to provide the insights that I need. The other thing is that when you look at some of those services I described in the last slide, things like time series databases, AI applications, data warehouses, Snowflake, these are all things that become very easy to integrate with and use and take advantage of when you have the cloud.

04:48
James Burnand: So those data-centric applications just make a lot of sense to be able to use those resources for them. And then one of the things we... One of the most basic things we love using the cloud for is backing things up 'cause it's really hard to back things up in a way that's easily recoverable and testable, where you can be sure that when it's time to go and restore those backups, they're available. The cloud is a fantastic and very cheap way to store long-term backups of systems that you're running on the factory floor. So what I will say though is just like playing soccer in scuba gear, it's not a... Just because you can, doesn't mean you should. You don't use the cloud for everything. And so what we found is that one of the really great opportunities, one of the really great options that people are starting to explore a lot more now is hybrid cloud.

05:38
James Burnand: So I grabbed a definition off of... I forget, I Googled it, but a hybrid cloud is a computing environment that combines on-premises data centers, also called a private cloud, with a public cloud, allowing data and applications to be shared between them. Really what it means is you install a piece of cloud in your building. So you put hardware in that provides a conduit, access, and ability to deploy those really cool applications that are precompiled, those services that the cloud providers give you, into a piece of hardware that happens to live inside of a building. So a factory or a transfer station or wherever the local needs might be. So you get that low-latency, high-capability system that's running locally on site. You have the ability to cut the cord to the Internet and it still runs, but you get the benefit of running those cloud services down inside of the building.

06:35
James Burnand: I see it as being fairly revolutionary. I think it's still really new for a lot of folks. It's a concept and a way of thinking about deployment that not a lot of people are really that deep into yet, but I personally see that it's... I think it's going to be the future for a lot of the bigger systems. So who's using it today and what are they using it for? SCADA systems for distributed telemetry systems. We're seeing a lot of MES systems being cloud-deployed, especially things like OEE. We're working with our friends at Sepasoft on a number of different opportunities right now where there's, I want to be able to deploy across this fleet of facilities, I want to be able to create a consistent fabric of OEE application access with Ignition and databases.

07:22
James Burnand: And to be able to do that in some plants, it's super easy 'cause hey, they got great resources, engineers that understand what's going on, but it's really difficult to do in facilities where there's maybe not any sort of local support or they don't have people that are really understanding exactly how to build and maintain those systems. Using cloud or hybrid cloud for those sorts of solutions really makes it an equal playing field for all the users and all the locations that are going to have access to that application. The other piece that we're seeing is a lot of ingestion. So we saw some Snowflake stuff this week, which was really, really cool. We're seeing that there's this pull of all this information up to these data warehouses. Analytics tying together sales data and financial data in with production information in new and innovative ways that lets you make better business decisions and it's only being unlocked by the type of solutions that people in this room are putting together to ingest that information in. The other kind of piece to this is tying together with existing cloud services, things like ERP systems, cloud-based databases. There's just a ton of opportunity in pulling those things together. So that's what we're seeing today.

08:35
James Burnand: So challenges and risks, I would say the one thing to remember is the cloud is public. So when you go and you do a deployment, yes, you get access to all this really great technology, all of these applications, all of these things that you're able to do. But ultimately, if you're not careful, you are deploying those things in a publicly accessible location. There's lots of ways to remediate that, lots of ways to manage that. Really, what we find is the most critical part of that is making sure that you have a plan for how you're going to manage those assets. There's ways to be able to deploy in public clouds and have no external access to them, only internal to your facilities, but you have to plan all that stuff up front. So Joe's going to walk through all kinds of technology pieces around that.

09:21
James Burnand: I'm throwing the warning flags up and saying, just remember that it's public and that it's something that, yes, there's a policy in place for most major organizations to be cloud-first because of that first slide around cost savings, but it's not as simple as deploy and forget because if you do that, you're potentially opening yourselves up to all kinds of new risks and challenges that will unfortunately be potentially costly. I would also say that it's difficult to dabble in this space. So there's a big difference from what we've seen in being able to get something working versus having something sustainable and maintainable over time. So tools like CloudFormation templates, which I know Joe is going to talk about, these are things that make it real easy for us to be able to build up an infrastructure in the cloud very quickly.

10:12
James Burnand: Even Ignition Cloud Edition lets you just start a virtual machine and run Cloud Edition and it's there and it's going, but you really do need to make sure that you're following best practices, hardening guide, best practices from the cloud vendors to ensure that you are putting in security as a consideration even for systems that you're testing, even for systems that you're just trying to figure out. Because what tends to happen, as I think many people in this room have seen, is I'm just going to start off small. I'll install Ignition here, and that's all it'll ever be used for. Six months later, it's like, "Well, I can use it for that. Well, I can use it for this. Well, I can use it for that." So you end up creating this burgeoning and growing set of applications. And when it's on-prem, the risk is a little bit... Well, it's a lot less because you don't have this public access. When you're doing that in the cloud, unfortunately, you have to be more careful. I believe Joe is going to take over talking now.

11:05
Joseph Dolivo: Well said. I think we're trying to differentiate between the ease of getting started, which is great for demos and learning and testing, and then production-grade systems. So we know a thing or two about production-grade systems. If you guys have seen the Data Dash that's going on right now, all that Ignition infrastructure is part of one of our managed service platforms called FactoryStack. What we're going to try to do is to take you through some of the lessons learned that we've had in working in this space for a long time before Cloud Edition was a thing, but then to give you some very practical takeaways that you can implement in your own systems, and also give you a little bit of insight behind what we've done and productized. And I will just say, coming out of the Technical Keynote, there are a ton of things that are coming in Ignition 8.3 that we are super excited for because it's going to make a lot of the stuff that we have to do now manually a lot easier for all of us.

11:52
Joseph Dolivo: So very, very exciting. Tried to categorize this into five different categories. Again, we could spend days talking about all of this, but they're largely broken down into networking, security, access management, data management, and cost management. And of course, especially with regards to network and security and access management, there's some overlap. So we've come up with a couple of different examples from each of these that we'll talk through. And again, as you have deep questions, please let us know and we'll go down into the weeds during the Q&A if we can. So I'll start with networking. So encrypt all the things. You hear a ton about encryption really in two different categories. There's encrypting things at rest. That's obviously important for data storage, making sure things aren't getting changed after the fact.

12:37
Joseph Dolivo: But also when it comes to networking, we're talking about in transit. So Ignition as a tool has great support for SSL certificates so that any traffic that's going into or out of your Ignition system will be encrypted, but it's not just Ignition. When you're deploying these production systems, you don't just have one Ignition gateway. Typically, you're going to have multiple Ignition gateways in a gateway network. The Ignition Gateway Network uses something called gateway network certificates that you can use to basically encrypt communication between Ignition gateways using the same principles that you use to encrypt your web traffic and all of that. So that's really key. And again, Ignition isn't just talking to other Ignition systems. It's also talking to databases, for example. So when you're configuring your databases, very important to enforce SSL encryption. There's a setting in the Ignition gateway configuration to do that.

13:27
Joseph Dolivo: And even more so, you can go down to the level of basically restricting access to certain ciphers. So I'm going to use certain cryptographic ciphers, I'm going to require TLS 1.3, for example. So focusing on encryption as a key part of everything that you're doing is really, really critical. The other thing that you'll tend to hear about, which is still very important and a good step one, is to use a VPN. VPNs have been popular for a long time for good reason. They're a really nice, easy way to extend, let's say, an on-premise network into the cloud. Cloud providers have really good tools to make that easy, but if you just rely on a VPN, then you're doing what's called perimeter security, and we'll touch on security more in a minute, where you're securing the outside, and then as soon as somebody gets in the door, you now have... It's kind of free rein.

14:16
Joseph Dolivo: So a VPN is a tool, but it's a tool in defense in depth. So don't rely on a VPN by itself. Encrypting traffic, whether or not it goes through a VPN, is important. So that's encryption. Limiting external connectivity. So we've got Ignition running in the cloud. Again, you probably have a database, for example. Best practices would suggest that you don't provide external access to the database unless you need to, and typically you won't. So your Ignition system can be publicly accessible via web browser, mobile device, designer access, things like that. The database, you would probably want to be locked down inside of a virtual private network or a VPC depending on your cloud provider. I'll use both terms interchangeably.

15:00
Joseph Dolivo: And then there's a bunch of these cloud-native services that James had alluded to that are things like data lakes, digital twin services. And again, depending on if you're going to funnel all that data through Ignition, you don't want to have outside access to those systems. And the cloud providers provide really good tools, private endpoints, private link. Those are things you can use to basically expose even some of those managed services into your private network without having to go out through the public Internet which is the default. So highly, highly recommend that for anything that you're going to be doing which requires access from the outside. And the last one here is about minimizing hops. So especially for production-critical systems, getting data in a timely manner is very important.

15:44
Joseph Dolivo: And now we're not just talking about, oh, I'm sitting across from my server in my plant. I'm talking about having to go up to a cloud system and back in order to communicate. And the cloud is global so you can pick regions and then you can deploy things. I could be sitting here in California connected to a cloud server in Arkansas, which is actually what we're doing for the Data Dash here. And so by default, when you're starting to add these different layers of networking complexity into your systems, you risk introducing a whole bunch more latency to applications like Ignition. So one of the recommendations that we have if you're going to be deploying this inside of, let's say, an orchestrator like Kubernetes, which has been talked about a couple times, would be to look at the network interface that you're using to expose those workloads.

16:29
Joseph Dolivo: So for example, if you're using Kubernetes, by default, it deploys an overlay network called Kubenet, and it's got this virtual address space that's disconnected from everything else. It's introducing another network hop. The cloud providers provide integrations with something called the Container Network Interface that lets you expose the same IP addresses, same address space you're going to use for your virtual machines or for other kind of workloads, also for the containers that are going to be running Ignition. That reduces the network hop, makes your application more performant. Same thing when it comes to these complex architectures where you have load balancers in place. Every hop, every proxy you put in place is going to slow that down. So be very careful and selective about where you're introducing those kind of latencies. So we could have a whole session on networking.

17:14
Joseph Dolivo: That's a couple of highlights. Security, natural progression from talking about networking. Keep your systems up-to-date, and you're saying, "Well, of course, that's obvious." But when you actually look at the scope of systems we're talking about, let's take Ignition as an example. You've got your application, so you're going to be making changes to your application to fix bugs, to implement features and all of that. That application resides on Ignition, so keeping Ignition up-to-date, for sure. Doing that in a production system where... I love IT people, but you can't just push down security patches at any point in time. You've got a production system. You can't do that. So Ignition is a component of that and most applications are also built on a database. You're using the Sepasoft MES modules. It's built on a database.

17:57
Joseph Dolivo: Now, you've got to do those updates in tandem. So I need my database and my Ignition system to be in lockstep and if one of those is not in step, you think you're taking backups. We'll get to backups in a bit. Are they in sync? Are they cohesive? And now you're going down to a level below Ignition that's running in an operating system. Whether it's containerized or not, I got to patch that operating system. Maybe I've got an orchestrator like Kubernetes, maybe I've got add-on modules for providing other functionality. So looking at these systems as something that is living and breathing and you don't just set it and forget it is incredibly important. And to James's point, it's so easy to set something up once and then you forget about it and say it's good enough.

18:38
Joseph Dolivo: These air-gap networks don't really exist anymore. Maybe they never did, but nowadays it's not something to count on, especially when you're talking about the cloud. So reducing attack surfaces, the more stuff that's available on the public cloud, the more targets there are for attack. You go to shodan.io, you can see all the industrial OT network traffic that's available. It's terrifying, but you should check that out if you haven't heard of it before. So we want to do everything that we can to minimize the exposure of applications and data to the outside, looking at limiting external connectivity like we talked about as part of that. One thing I want to highlight within the Ignition ecosystem, Ignition has first-class support for containers. Containers are great because when you distribute a container, there's a couple of sessions on that at the conference. You're basically just distributing the minimum set of files that you need to run an application, and that's it. And you're decoupling it from everything else that's required like a kernel and everything else to run an operating system, Windows updates, all that kind of stuff.

19:39
Joseph Dolivo: So if your kind of target that you're deploying is basically these containers that have minimal packages installed, you're not having everything out of the box that you might get with Windows updates, WordPad, calc. So that really, really helps you to minimize that attack surface and it's, again, one less set of targets that attackers are gonna be able to go after. And then, of course, there's monitoring for breaches, and I can't tell you how many times two years down the road, somebody will find out that, oh yeah, somebody has been in our systems and they may have modified our data. We don't know what happened. We're gonna have to do a product recall or put out an announcement. So doing active monitoring is really, really important. It's something that there's a number of tools available to do that.

20:20
Joseph Dolivo: There's some that are kind of OT-specific, and you'll see 'em inside of OT networks from companies like Claroty and Nozomi and things like that. But there's also a lot of IT-centric tools that really work well in the cloud environment. A lot of them are based on machine learning to do like anomaly detection. So I'm gonna kind of pick... These are the sort of typical traffic patterns that I might be seeing in a cloud environment. If all of a sudden I see a huge spike in network traffic, or if I see access logs from users or accounts that I don't tend to see, maybe I raise a flag, I send a notification, I require manual intervention. And then tuning that in a way that you're not getting so many false positives, that is the same problem we talk about with alarms all the time.

20:58
Joseph Dolivo: It's, "Oh I've got so many alarms, I'm just gonna ignore 'em all." So there's a balance there, but the fact that you don't just kind of set this up and ignore it, you have to be actively monitoring for breaches. So super, super important. Again, we could have a whole session on security alarm. Let's talk about access management. So there was a question that came up in the Technical Keynote talking about using YubiKeys for authentication with Ignition and things like that. Access management is hugely important. And another universal principle that you'll hear, and it ties in really, really nicely I think with Ignition is to practice the principle of least privilege. So in terms of user accounts, that means if I'm gonna be authenticated and authorized to use a service, I wanna be provided with the least amount of access that I need to be able to do my job.

21:44
Joseph Dolivo: And that's for two reasons. One, in the case of kind of a malicious actor, that reduces the damage that can be caused if that account is compromised. And it also just helps people from kind of shooting themselves in the foot or doing something by mistake that they wouldn't ordinarily try to do. So for example, in the kind of Ignition roles, you may say, well, I'm only gonna give an operator certain roles so they can't accidentally change the configuration of the system. If I'm an administrator, I may have elevated roles, but we also tend to just say, you know what, I'm just going to use an administrative account that has access to do everything because it's too much work to go through a process and then you end up getting in trouble when that happens.

22:24
Joseph Dolivo: So enforcing roles in a way that is consistent and clear is really important and there are tools that you can use to do that especially if you are taking the management of that outside of, let's say, just Ignition. You can use something like... Entrada [Entra] ID is what it's called now, but I never get it right. It used to be Azure AD, so basically the cloud extension of Active Directory, and you can have all of your groups and roles centrally managed across your organization. And then you can have the concept of, let's say, a supervisor and a supervisor can have certain access granted in Ignition, certain access granted in other applications, your ERP systems, your CRM systems and things like that, and you have that all managed in a single place.

23:03
Joseph Dolivo: The last part on principle of least privilege is that it doesn't just apply to named user accounts. It also applies to, let's say, service accounts. And so this is an example. We'll talk about databases more in a minute, but when you're configuring access to a database, that database may not need, or that database user account may not need the ability to delete records. Maybe I can only do inserts, especially for audit trails. I'm gonna be able to insert into the audit log. I don't wanna have somebody that can update or delete from those. So think about the principle of least privilege in terms of the system accounts as well in addition to named users.
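As a concrete illustration of the insert-only service account Joseph describes, here is a minimal sketch assuming a hypothetical PostgreSQL database with an audit_events table; the host, role, and object names are placeholders, not part of any real deployment.

```python
import psycopg2

# Hypothetical one-time setup, run once over an administrative connection.
# The role Ignition uses for audit logging can add rows but never change or
# remove them, so a compromised or misconfigured gateway can't rewrite history.
conn = psycopg2.connect(host="db.internal.example", dbname="ignition", user="dba")
with conn, conn.cursor() as cur:
    cur.execute("CREATE ROLE ignition_audit LOGIN PASSWORD 'change-me'")
    cur.execute("GRANT INSERT ON TABLE audit_events TO ignition_audit")
    cur.execute("GRANT USAGE ON SEQUENCE audit_events_id_seq TO ignition_audit")
conn.close()
```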

23:37
Joseph Dolivo: Password management. I'm super excited. Again, Technical Keynote talking about using a system like HashiCorp Vault, where you can have the dynamic password authentication. Right now, there are certain accounts like the database connection in Ignition, which is more or less kind of hardcoded. It's sort of encrypted in the configuration, but some of those things are kind of hardcoded. But for other things like logging into Ignition, the safest way to manage passwords is to not manage them, and again if you're using a system like Entrada [Entra] ID, or AWS IAM, or Okta, or Duo, or some other system, you've got an enterprise security company whose stock price and revenue is based on them doing a good job with all of that. So we recommend not having to manage it yourself. It's one less thing you have to deal with. So for our platform, we don't see any passwords at all from users. We say, nope, we don't wanna deal with it.

24:30
Joseph Dolivo: And then of course, monitoring and auditing access. So Ignition by itself, you configure an audit log. It logs a whole bunch of different events that are occurring by default, which is great. You also have a script function that you can use to add additional logs manually based on things happening in your application. And depending, again, on the system you're using for identity and access management, you could also have sort of a central audit log in the cloud that you can use to monitor. So every time somebody logs in, every time somebody asks for elevated privileges, so there's tools like PIM, Privileged Identity Management, where maybe I'm gonna be given read-only access to a service, and I have to go through an approval process to give me temporarily elevated access rights to some other system. Well, that's gonna be audited and logged and it's maintained for a certain duration of time and then that'll be it. So again, active monitoring, similar to threat management when it comes to security. Really important for access management.

25:23
Joseph Dolivo: A couple more here, data management. So take backups and again, that sounds great in theory. Backups include a lot of different systems. And Ignition's actually really, really great in the fact that you can go in the gateway configuration page, you can schedule backups to be taken on a schedule, and if the volume to which you are storing those backups is, let's say, cloud-replicated, that's great. You can get cloud-based encrypted backups, multiple availability zones and multiple regions out of the box really, really easily. Again, most systems aren't just Ignition. There's gonna be a database component, there's gonna be other systems that you have to take and some systems are not as nice to... They're not as kind of allowing for doing live backups like Ignition has and the official kind of application process for doing backups is I'm gonna spin down a workload, and then I'm going to copy a volume somewhere else and I'm gonna spin it back up.

26:16
Joseph Dolivo: So we have to do some of that with manual pipelines and things like that. But if you have the ability to kind of coordinate the backups of all your systems together, really, really important. And then the backup's no good if you take it and then two years later you need to get it and you realize that the backup failed, or the backup was incomplete. So it's really, really important, especially for production systems, that you are doing regular verification of those backups. A really easy way to do that, especially if you're using Ignition in containers, take a backup of a database, take a backup of the Ignition gateway and other stuff, and then spin up a brand new environment. I'm gonna say, okay, this is now my dev environment. I'm gonna restore a gateway backup, I'm gonna restore databases, and I'm gonna do some spot checks or automated testing to confirm that those are all still working.
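One rough way to automate the restore check described above, assuming the official inductiveautomation/ignition container image, a nightly .gwbk at a made-up path, and the gateway's StatusPing endpoint; verify the image tag, ports, and endpoint against your own environment before relying on it.

```python
import subprocess
import time
import urllib.request

# Hypothetical smoke test: restore last night's gateway backup into a throwaway
# container and confirm the gateway actually comes back up.
subprocess.run(
    [
        "docker", "run", "-d", "--rm", "--name", "gwbk-verify",
        "-p", "9088:8088",
        "-e", "ACCEPT_IGNITION_EULA=Y",
        "-v", "/backups/nightly.gwbk:/restore.gwbk:ro",
        "inductiveautomation/ignition:8.1",
        "-r", "/restore.gwbk",
    ],
    check=True,
)

# Poll the gateway's status endpoint until it reports RUNNING, or give up.
for _ in range(60):
    try:
        body = urllib.request.urlopen("http://localhost:9088/StatusPing", timeout=2).read()
        if b"RUNNING" in body:
            print("Restored gateway is up; the backup looks usable.")
            break
    except OSError:
        pass
    time.sleep(5)
else:
    raise RuntimeError("Gateway never came up; investigate the backup.")

subprocess.run(["docker", "stop", "gwbk-verify"], check=True)
```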

26:53
Joseph Dolivo: So we do that regularly for all customer instances. It's something you should do as well. Really, really important. Data residency requirements, so especially when you're talking about production systems, again, in the cloud, you've got all these different regions you can deploy into. Certain cloud services you'll find are only available in certain regions as well and certain regions have availability zones or don't have availability zones. It's really important to know where your data is going and where your data is being stored at all times. And there are a lot of industries, a lot of companies that have very specific regulations to say, my data cannot leave the United States. For example, my data cannot leave Canada, my data cannot leave this particular geographic region. So keeping that in mind is really important 'cause you may say, well, yeah, my workloads are running inside of US-East-2, but to get there, it has to go up through this other system running somewhere else.

27:45
Joseph Dolivo: And now if the data's being... Even if it's encrypted, my data's going somewhere where it's not supposed to be. That's a big no-no. Same thing with storage. The cloud providers have the concept of paired regions, where you could say, you know what, I'm gonna store most of my data in US-East-2, but it's paired to something in Canada-West-1. So for disaster recovery purposes, that may or may not be okay depending on what your team's kind of requirements are due to regulations or company policy or anything else like that.

28:17
James Burnand: And maybe I can just quickly add to that if my mic comes on. When you're also architecting your solution, availability zones and regions become a huge important consideration. So for example, you can buy storage that's mirrored across three of those. So availability zone for everyone's benefit is a completely separate data center that has a separate power feed, it has separate network connections, but it's inside of a region. So US East, for example, for Azure has three availability zones that you can buy services from as US East. So depending on the reliability requirements of the application that you're deploying, you need to choose the services that have the right level of reliability. So by default for us, for example, when we do storage, we'll actually have storage that's mirrored across three availability zones in a single region so that way we can tolerate two buildings burning down before your system will stop. So just to kind of put a little perspective around that is that there is also a cost consideration as a part of that. So if you're going to buy something that is available across regions, for example, it's going to be more expensive than if you're getting something that's dedicated to a single availability zone in a single region. So your application architecture matters from a cost perspective.
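To make James's point about redundancy and cost tangible, here is a minimal, hypothetical sketch in Pulumi's Python SDK (the infrastructure-as-code tool mentioned later in the talk) of a zone-redundant Azure storage account; the names, region, and SKU are illustrative choices, not recommendations.

```python
import pulumi
import pulumi_azure_native as azure_native

# Hypothetical names and region; choose these to match your own requirements.
rg = azure_native.resources.ResourceGroup("ignition-prod", location="eastus2")

backups = azure_native.storage.StorageAccount(
    "ignitionbackups",
    resource_group_name=rg.name,
    location=rg.location,
    kind="StorageV2",
    # Standard_ZRS mirrors data across three availability zones in one region;
    # Standard_LRS (one zone) is cheaper, Standard_GZRS (paired region) costs more.
    sku=azure_native.storage.SkuArgs(name="Standard_ZRS"),
)

pulumi.export("backup_account_name", backups.name)
```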

29:29
Joseph Dolivo: We are definitely getting to cost as the next big pillar here as well. So well said, James. And the last point on here is just data integrity and retention. So I need to maintain data for seven years, 10 years due to regulatory purposes. The storage providers inside of the cloud, or the storage accounts inside of the cloud providers, allow you to do, for example, immutable data. So I'm gonna push data into an archive storage tier. AWS Glacier is an example, Azure Storage account has an equivalent, where nobody's gonna be able to touch it, and it's gonna reside for some extended period of time. So that's really, really important for compliance purposes and it doesn't even necessarily have to be data in your live system. You may say, you know what, having a 10 terabyte drive on this managed database service is really expensive.

30:17
Joseph Dolivo: But I need to maintain the data, but I'm not actually gonna query it unless an auditor comes and starts knocking on my door and says, "Show me the data." So you could store all of that older data in kind of much cheaper archive storage and then if you need to restore it to say, "Hey, look, I've got it," then you can go through a process to do that when you need it. A really good way to save cost, which is our final category for today. So cloud makes it so easy to get up and running, and the cloud providers wanna incentivize you to just pump all the data up. We're not even gonna charge you. If you're not going over an encrypted connection, we'll ingest all your data for free. That's become pretty much a standard. But once it's up there, they're gonna charge you for using it.
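As a small sketch of the archive-tier idea, the boto3 snippet below adds an S3 lifecycle rule that moves year-old objects to Glacier; the bucket name and prefix are hypothetical, and the retention schedule should come from your own compliance requirements.

```python
import boto3

s3 = boto3.client("s3")

# Hypothetical bucket and prefix: long-term history exports that an auditor
# might someday request, but that nobody queries day to day.
s3.put_bucket_lifecycle_configuration(
    Bucket="ignition-history-archive",
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "archive-old-history",
                "Status": "Enabled",
                "Filter": {"Prefix": "history/"},
                # After a year, shift objects to the much cheaper Glacier tier.
                "Transitions": [{"Days": 365, "StorageClass": "GLACIER"}],
            }
        ]
    },
)
```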

30:52
Joseph Dolivo: And there's a lot of stuff in the news recently. Hey.com recently talked about how much money they're saving by going out of the cloud and there's a lot of... So we talked about some of the reasons you may or may not want to use the cloud, but once you... You're really paying for sort of the flexibility and scalability that you get. So for the Data Dash, we said we're gonna spin up five servers. Give me five servers, Azure, and boom, we have five servers up and running. But you're paying for that dynamism and flexibility. So if you know, for example, I'm gonna run Ignition Cloud Edition for a year at least, you go to the AWS marketplace, you go to provision Ignition Cloud Edition, it'll tell you if I know I'm gonna run this workload for a certain amount of time, I can basically commit to paying for a year and I'm gonna get a pretty sizable discount on the infrastructure cost.

31:36
Joseph Dolivo: 30%, 35%, something like that, that's huge, especially when you're talking at scale. And it's not just Ignition systems that can do that. You can do that with databases typically, you can do that with storage. So trying to estimate the workload that you have and then being able to kind of predict what you're gonna need is really, really useful as you've been running. Again, not so much for experimenting. When you're in a production system, that's important to consider, and it's something we do as well. So we actually will forecast out based on our customers. We're gonna commit to using this amount of resources and we get a cost savings from that. So that's reserving capacity up front. Another thing is what different cloud providers have different terms for, but it's basically spot instances. So this is where maybe I don't need a workload running all the time.

32:18
Joseph Dolivo: Maybe I need to do like a... I was gonna say batch job, but batch means something else in our automation industry, but I'm gonna run a report at 2:00 AM every week, for example. And it's something that's gonna run for a while and then it's gonna shut down. I don't need it running all the time. Or maybe I'm gonna just spin up a temporary dev system. I don't need it for a long period of time. If it goes down, it's not a big deal. You can leverage these cheaper spot instances where you basically will say, well, I only want to pay for a compute between this price and this price and if it becomes available, great. If not, shut it down. Or if somebody else is willing to pay a higher price for it, they're gonna steal my VM out from under me.

32:55
Joseph Dolivo: You can have incredible cost savings when you do that. It's also good for like a lot of GPU-based workloads like ML and AI training. So that's, again, not so much for Ignition production systems, but certainly for either dev and test systems or if you need some kind of temporary scalability like, hey, I need to add another frontend node to my Ignition server 'cause I'm anticipating more load during shift one, or something like that. So that's something else to consider. Huge, huge implications on cost if you do it right. And then I can't tell you how many times I've heard from customers saying, "Well, I got the bill at the end of the month and it was 10 times higher than I expected." So making sure that you're putting monitoring in place and alerting in place so that if you're starting to exceed your typical usage trends, you're able to identify that quickly and early.

33:39
Joseph Dolivo: So this has saved us a number of times. I talked to a couple of folks in the room about this where we had logs that we were aggregating that basically hit a trip wire and our system alerted us. We were able to make a change so that we didn't get $3,000 cost after that. And the cloud providers themselves and a lot of the cloud-native tools have ways of doing that. We'll talk about our tools in a minute. We use Grafana Cloud as an example for aggregating all of our metrics and logs across all of our systems. So you can set up alerts and notifications. You can do it in Azure, AWS, and GCP so that way you won't be surprised when the bill at the end of the month comes. So super important.
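To make the advice about billing alerts concrete, here is one minimal way to do it on AWS with boto3, using a hypothetical SNS topic and threshold; Azure and GCP have equivalent budget and alerting services.

```python
import boto3

# AWS publishes billing metrics only in us-east-1.
cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")

cloudwatch.put_metric_alarm(
    AlarmName="estimated-charges-over-500-usd",
    Namespace="AWS/Billing",
    MetricName="EstimatedCharges",
    Dimensions=[{"Name": "Currency", "Value": "USD"}],
    Statistic="Maximum",
    Period=21600,            # evaluate every six hours
    EvaluationPeriods=1,
    Threshold=500.0,         # hypothetical monthly spend threshold
    ComparisonOperator="GreaterThanThreshold",
    # Hypothetical SNS topic that emails or pages the team.
    AlarmActions=["arn:aws:sns:us-east-1:123456789012:billing-alerts"],
)
```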

34:19
Joseph Dolivo: Just to kind of give you some insight, if you're kind of looking like, "Well, where do I kind of get started with this?" These are tools that we use. There's a whole bunch of them. It's really hard to pick, but I'll just kind of go through some of the icons so you're aware of them. Obviously, you know Ignition right in the center. Everything that we do and most of everything that you do is built all around Ignition. If I start at the top left, there we go. There's a laser pointer. So that is Kubernetes. We don't recommend that for most folks. It's one of those things if you have to ask, you probably don't need it. Something that we use internally, and there's a really great session that Kevin Collins did earlier today talking about kind of the nuts and bolts of that.

34:56
Joseph Dolivo: We use that because we're orchestrating Ignition across tons and tons and tons of customers. So if you're a bigger customer, you have a lot of Ignition instances to deploy, a lot of other workloads alongside a single gateway you need to deploy, a really good tool to consider. If you need to run one Ignition server, it probably doesn't make sense. Going clockwise I guess, Grafana is the next one. So this is what we use. I kind of hinted at it for metrics and log aggregation. It gives us really good deep insight into our containerized workloads as well as all of the kind of cloud provider-native services. So we can see how we're doing on cost, we can look at our CPU and RAM performance, all that kind of stuff. It's really nice to have a single pane of glass. And there's other systems out there that can do that.

35:37
Joseph Dolivo: We like Grafana. Great visualizations as well. Git, so when you're making changes, especially in kind of an enterprise space, it's not a cloud-native technology. I call it a cloud-adjacent technology. It's kind of in the same realm doing version control. Again, super excited for the changes coming at Ignition 8.3 that will make this more comprehensive beyond projects. We did a whole session on it last year. We're doing a workshop on it in a couple of weeks. But we basically run Git inside of the cloud to maintain backups of our project configuration, both for Ignition as well as other services. And then currently we support AWS and Azure. I love GCP as well. That's a great one. And then finally, the one that you may not recognize this logo here, this is called Pulumi.

36:17
Joseph Dolivo: So there's this whole suite of tools; Infrastructure as Code is the buzzword. Terraform is kind of the market-leading, most popular one. They've been in the news recently due to some licensing changes that they've made around their open source offering, but we've been using Pulumi, which just lets us use our programming expertise that you'll have from Ignition Python, for example. You can use that to provision all of your infrastructure. So we never manually go and download a VM and download Ignition and go do the installer, even though it's only three minutes. We never do it. We use everything as containers and it's all provisioned using this tool called Pulumi. So there's a ton of good tools out there. We highly recommend, being in automation as we are, that you leverage some of these where it makes sense for you. I think...
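For a flavor of what "everything provisioned from code" can look like, here is a minimal, hypothetical Pulumi (Python) sketch that pulls a pinned Ignition image and runs it as a container; a real deployment layers far more on top (networking, secrets, backups, redundancy), so treat this strictly as a starting point.

```python
import pulumi
import pulumi_docker as docker

# Pull a pinned Ignition image instead of installing the gateway by hand.
image = docker.RemoteImage("ignition-image", name="inductiveautomation/ignition:8.1.33")

gateway = docker.Container(
    "ignition-gateway",
    image=image.image_id,
    ports=[docker.ContainerPortArgs(internal=8088, external=8088)],
    # Hypothetical minimal environment; real secrets belong in a secret store.
    envs=["ACCEPT_IGNITION_EULA=Y", "IGNITION_EDITION=standard"],
    restart="unless-stopped",
)

pulumi.export("gateway_container", gateway.name)
```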

36:58
Joseph Dolivo: So we've got additional resources. We made reference to some of these. So there's best practices, obviously the Ignition Security Hardening Guide, concepts for Kubernetes. AWS and Azure have their own. GCP also has some. These are links inside of the PowerPoint, which will be sent out. Definitely take a look at all of these. And then the two sessions, there was a good higher-level one on Ignition in the cloud. If you didn't get a chance to see it, watch it on the Livestream or the recording afterwards. And then the "Deployment Patterns for Ignition on Kubernetes" that Kevin Collins did. So really, really good sessions with really, really good, good info. And question mark means questions. We're ready for the tomatoes.

37:41
Audience Member 1: I didn't bring my tomatoes today, but one question I have, you guys alluded to it earlier that this is a space that's difficult to dabble in. So many of us being integrators or service providers here, what offerings do you guys have for providing a sandbox environment for people to get familiarized with your platform and potentially show it off marketing material style for potential clients?

38:08
James Burnand: I'll take that one. So we're in the process of hopefully soon announcing some really cool local versions of what we offer that you'll be able to actually run locally on your machine as a test environment. As it stands right now, we set up demos for integrators all the time with their own separate subdomain on our development system. So then you get gateway and database access. You can throw your projects up there, you can test playing with them, and you can make sure that they work. But one of the cool things about how the products that we built work is all of this complexity is kind of encapsulated in those. So you get designer access and you can get database access, and it looks just like a normal Ignition project. So from our perspective, we're trying to help make this technology easier to be able to adopt and that's kind of been our business model from the beginning.

39:02
Audience Member 2: Getting into cloud and cloud infrastructure and tools is... Can be a scary thing. And I think I've seen that with a lot of customers and even with myself thinking about how do I even get started? Can you guys talk to what you would say to somebody who wants to get over that fear and even just get their feet wet with cloud infrastructure and how they can start seeing those benefits and how do you overcome that first step?

39:31
Joseph Dolivo: Something that I would say is cloud is a spectrum. You don't either adopt it or not adopt it. There's kind of a spectrum of adoption. And so the easiest way that we've seen to kind of justify the use of cloud is just use it for offsite backups. Are you really, as I like to say, taking the tape drive down to the bank vault every day? Is anybody doing that? Some people are doing that. Use it for encrypted multi-site offsite backups. That's kind of the Trojan horse, if you will, to kind of cloud adoption. And then use it for the things that it's really, really well suited for and tailored for like scalability. You know what, I'm gonna spin up a dev system, for example. I'm gonna play around with it. That's a really nice way to get companies more comfortable with it.

40:09
Joseph Dolivo: We spend a lot of our time with heads of IT and security folks kind of talking about why this is okay, how this can fit within their kind of existing IT landscape. It's actually kind of interesting because I'll say prior to maybe three or four years ago, the cloud was a scary thing for almost everybody. And we've really had this excitement that we've seen from a lot of customers I think driven by let's say Ignition's use of a lot of IT technologies, for example, where all of a sudden you talk to a chief security officer and they're like, "Oh, you're using containers, you're using this. I get it. You're speaking my language now." So that's actually helped I think to make it a little bit more palatable. But yeah, start from offsite backups. Super, super simple would be my point around that.

40:49
James Burnand: Yeah, I would only add to that that I actually think probably the best place to kind of focus learning attention if you're a traditional automation person and you're looking to figure out kind of how does all this work is I would focus on containers, learning the different container architectures, how networking works, how you actually set up those systems. And Kevin Collins' GitHub page is fantastic for anybody that hasn't been to it. Absolutely you need to go to it. I don't have the URL handy, but certainly it has so many resources that will help you learn about how to work with these architectures. And then really what you're doing is you're taking that Docker-centric architecture and you're using these prebuilt functions and tools to make it easier to actually do a more coordinated deployment.

41:34
James Burnand: One of the things Joe didn't mention is our Grafana system that's providing us all that alerting and monitoring; what it often is telling us is that it fixed something. So Kubernetes had a problem and it took care of it, and we get a Teams message that says, "Yeah, the problem happened and the problem is taken care of." So like that part of kind of the progression and the ability to automate and take advantage of these tools at scale is the ultimate goal, but none of that happens if you don't first start focusing on things like containers.

42:04
Joseph Dolivo: Yep. The last part I'll add is looking at containers, it's another one of those kind of cloud-adjacent technologies. You can run containers on-prem and you can run 'em in the cloud. So start doing the things that will work well in the cloud, but just do 'em on-premise. So we've seen a lot of that's kind of hybrid cloud is kind of a similar idea with that. Thanks.

42:24
Audience Member 3: Do you have customers that are ingesting or exgesting? What's the opposite of ingest? I don't know. Doing that thing...

42:37
Joseph Dolivo: Expulsion?

42:39
Audience Member 3: In other cloud technologies. Like IoT Core, for instance. Are people using IoT Core to get data into your systems or then beyond just a normal database thing? Are there other places where data's going out of your environment?

42:56
Joseph Dolivo: For sure. So there's... And it is funny 'cause IoT Core is something, a service that AWS and other services have had. GCP made a lot of news recently where they actually sunsetted one of their IoT products. And so Cirrus Link is here. They have a great broker. HiveMQ is here. There's a number of kind of broker technologies, I'll say, for getting data up into the system and then also kind of pushing it back out. So Ignition is a good fit for integrating with all of those, a lot of those kind of event-based systems. Again 8.3 is coming and it's gonna make this easier. But you can ingest into Azure Event Hubs, you can ingest into AWS IoT hub, IoT Core. So those all work. The one thing to keep in mind too is that not all of those services, they may support MQTT, but they may not be fully compliant with things.

43:44
Joseph Dolivo: So for example, we went down a whole road with like store and forward and avoiding data loss. Going up into MQTT, there's some nuances to the TCP Keepalive Timer and all these kinds of things that could result in data loss. A lot of systems that are sort of compliant, somewhat compliant outside the ecosystem don't support all of those. So that's something to keep in mind for sure. Once you get data up into Ignition in the cloud, then you can kind of push it out, but we found... We've seen a lot of benefit. If you're gonna push data into Ignition running in the cloud, whether it's [Ignition] Cloud Edition or whatever, keep it in there to do all of your visualizations and stuff like that if you're gonna use Ignition, and then push it out after that. So I hope that helps.
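The keepalive nuance is easy to see in a plain MQTT client. The sketch below (Python, paho-mqtt 1.x API; broker host and topic are placeholders) shows where the keepalive interval is set and where an edge node would begin queueing values locally when the connection drops:

```python
# Sketch of the keepalive / store-and-forward concern (paho-mqtt 1.x API).
import paho.mqtt.client as mqtt

client = mqtt.Client(client_id="edge-node-demo")

def on_disconnect(client, userdata, rc):
    # A real edge node would start buffering values here (store and forward)
    # until the broker connection is re-established.
    print("Disconnected, rc =", rc)

client.on_disconnect = on_disconnect
# A short keepalive bounds how long a dead TCP session can go unnoticed.
client.connect("broker.example.com", 1883, keepalive=30)
client.loop_start()
client.publish("demo/values", payload="42", qos=1)
```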

44:25
Susan Shamgar: Alright. Thank you, everyone. I believe that is all the time that we have for today. So can we get one more round of applause for James and Joe?

44:40
James Burnand: Thank you. Thank you, everybody.

44:40

Joseph Dolivo: Thanks, everybody.

Wistia ID
d3abebnje3
Hero
Thumbnail
Video Duration
2684

Speakers

James Burnand

Chief Executive Officer

4IR Solutions

Joseph Dolivo

Chief Technology Officer

4IR Solutions

ICC Year
2023
icc | 2023 Community Session

Elevate Your OT Data Securely to the Cloud

Ignition Cloud Edition! Awesome! But wait… How can I possibly connect my PLCs or I/O systems to the cloud? Won’t that jeopardize them? And require heavy IT involvement? What’s the payoff? In this session, we’ll discuss how to use Ignition Edge and Ignition Cloud Edition together to quickly create scalable, high-performance, cybersecure architectures for democratizing your OT system’s data. Whether in brownfield or greenfield environments, you’ll unlock the power of edge-to-cloud hybrid architectures that are cost-effective, easy to manage, cybersecure, and deliver more value to your organization.

45 min video

Watch the video
video The Ignition Effect

What Is The Ignition Effect?

"The Ignition Effect” is not just about technology, but how Ignition creates a ripple effect that reshapes systems and sparks solutions. This series offers a panoramic view of the transformative power of Ignition told by the people who use it every day. Watch these videos to witness the impact Ignition has on its community and explore what it can do for you! 

7 min video

Watch The Ignition Effect
icc | 2023 IA Session

We Love Ignition. But Can it REALLY Scale?

Can it REALLY scale? This is a question we have received for the last 10 years. Delve into the realm of enterprise Ignition rollouts with industry insights from the lens of an enterprise integrator. Uncover the strategies and best practices that accelerate the implementation and ensure the long-term sustainability of Ignition. Don’t just believe us – hear it firsthand from a guest appearance with one of our enterprise end users.

42 min video

Watch the video
icc | 2023 IA Session

Deployment Patterns for Ignition on Kubernetes

Kevin Collins returns to ICC for a demonstration of how to harness the combined power of Ignition and Kubernetes. This session offers an in-depth look at methods for effectively automating deployment, scaling, and managing containerized Ignition applications.

59 min video

Watch the video
webinar

Data Centers: How DCIM Improves Your Daily Operations

In this webinar, experts from Inductive Automation and ATS Global will look at those common requirements and present how an open data center infrastructure management (DCIM) solution based on Ignition can help you to comply, and maybe even change the public opinion about Data Centers in the long term. We’ll also present a new Ignition demo for data centers.

46 min video

Watch the webinars
icc | 2023 Community Session

Separating Design From Development - Using Design Tools with Ignition

Building screens in Ignition is a breeze, but did you know you can design screens even faster by mocking them up using a design tool? Join us for this session as we talk about the benefits of moving the design process outside of a development platform. We'll cover topics such as design vs. development, UI vs. UX, benefits of using design tools, and an introduction to the design tool Figma.

43 min video

Watch the video
icc | 2023 IA Session

Ignition Exchange Resource Showcase

Since the Ignition Exchange’s introduction in 2019, members of the Ignition community have contributed hundreds of resources ranging from pre-built templates, tools, and scripts to Ignition-powered retro arcade games — all available for free. Discover the full potential of the Ignition Exchange as we highlight some of its most valuable assets, including a handpicked sampling of the top Exchange resources developed by IA engineers.

41 min video

Watch the video
icc | 2023 IA Session

Ignition Diagnostics and Troubleshooting Basics

Ignition offers numerous built-in tools for gathering diagnostic information about the health of your system. This session offers an overview of these tools and explains how our Support Division leverages this information during the troubleshooting process. By the end of this session, fixing problems will feel like shooting code in a barrel.

46 min video

Watch the video
article Guide

Hosted/Multi-tenant Ignition Cloud Edition Guidance

Inductive Automation is proud to permit the use of hosted and multi-tenant applications at no additional licensing cost to the licensee when using Ignition Cloud Edition. Hosting enables flexible resource sharing and “pay as you go” service models. Multi-tenancy can enable the broad delivery of custom Ignition applications as a service. This change enables new service provider roles with the potential to benefit the greater Ignition community. However, these models inherently introduce risk to stakeholders.

13 min read

Read the guide
icc | 2023 IA Session

Introduction to Automated Testing of Perspective Projects

Learn the most effective ways for leveraging automated testing to safeguard your development-to-production process. This session will start by outlining how the core tenets of testing apply to automated testing, leading directly into best practices for verifying that your Perspective project development is production-ready.

38 min video

Watch the video
icc | 2023 Panel

Industry Panel: ICC 2023

61 min video

Watch the video
I4.0 Accelerator for Driving Edge to Cloud Business Outcomes Emily Batiste Tue, 10/31/2023 - 15:19

Come and learn with Cirrus Link and Snowflake what your data has to say. Snowflake, Inductive Automation & Cirrus Link have partnered to provide Data Cloud Solutions. With Ignition UDTs, MQTT, and Sparkplug, discover how easy it is to leverage Snowflake’s platform to gain derived data insights immediately through native AI tooling. Learn about the impact of the recent partnership of NVIDIA and Snowflake. See how this disruptive technology, in conjunction with Ignition, will elevate and simplify your journey to data insights.

Transcript:

00:00
Travis Cox: Let's do it. Hello, everybody. Welcome. Hope you guys had some fun here today, so far. I know the session's been pretty amazing so far, yeah? We definitely have another great session for you now. Hope you guys are excited about this one, Accelerator for Driving Edge to Cloud Business Outcomes, and we're gonna show a complete edge-to-cloud solution today using data models, and we're gonna actually bring in the Data Dash and kinda show you how all that comes into play.

00:30
TC: Got three amazing speakers, really two, besides myself. We got Arlen Nipper, who is the CTO for Cirrus Link Solutions. He's the man, the myth, the legend behind MQTT. I'm sure a lot of you know him. Excited to have him here today. We also have Pugal Janakiraman. He's the Industry Field CTO for Manufacturing for Snowflake, and he's responsible for building higher level solutions to kinda drive business outcomes for manufacturing. And we're really excited about this particular session. We're gonna kick it off with Arlen. He's gonna show, we're gonna show Ignition Edge and Ignition, how we can bring that in through MQTT to the cloud, bringing that from IoT Bridge over to Snowflake. We're gonna show you that whole journey here this morning. So Arlen, without further ado.

01:16
Arlen Nipper: Thank you. Thanks, guys. Thanks, everybody. Everybody enjoying it? This has been awesome so far. So real quick, Cirrus Link Solution, we've been around... This is our 11th year now. We've been growing year on year. This has been a fantastic journey for us. And we started eight years ago. I was over in stage two. And I did the first ever MQTT engine demo. That was our first Ignition module. From there, we've developed a whole line of Ignition modules, as well as products that we support, including the Chariot standalone MQTT broker, and all of the IoT Bridge products that we've developed for getting data out of Ignition into the cloud. So where I'd like to start is largely due to the community and all of the feedback and the involvement of all of you.

02:13
AN: We started with MQTT and the first demo that we did was just Arlen and one of the engineers I worked with. And we had a little binary way that we published MQTT. It was great. As we started going to conferences and all of that, everybody goes, oh, we do MQTT, and we do MQTT, and we do MQTT. But if we would've plugged it all together, nothing would've worked, because the topic namespace would've been different, the payload would've been different. So we started on a journey for our own sanity five years ago. We said, mm, let's invent a spec. And since we have Engine and we're running on Ignition, let's call it Sparkplug. And so we started the Sparkplug specification. And again, it was internal. People started looking at it, Ignition users. I still remember Chevron going, "Well, Arlen, who owns that?" And we said, "Well, it's up on our public GitHub site. You can download it, it's open source." "No, really, who owns it?"

03:12
AN: So at that point, we kinda went on this journey of taking the Sparkplug spec to the Eclipse Foundation, which is a standards body, and we worked for three, almost four years, in getting the spec cleaned up and getting it ratified. And at the end of last year, Sparkplug 3.0 was officially released. And from that, what you see up here is that it resulted in the release of a Technology Compatibility Kit. So that means that if you're doing MQTT Sparkplug, whoever wants to do it, you can download the conformance kit, run your client against it, get conformance-tested, and get listed up onto the Eclipse website, so that we have interoperability. So when Todd Anslinger at Chevron orders your module or buys your product, he can be assured that it is Sparkplug B compliant going forward. And the other interesting thing from that is that because of Eclipse and their relationship with the ISO/IEC standards body, Sparkplug is pending right now, but it'll be an international standard, ISO/IEC 20237. So now Sparkplug will be an international standard.

04:29
AN: And then the last thing I wanted to mention is that I know a lot of you, especially in manufacturing, you deal with a protocol called MTConnect. MTConnect's been around for about 15 years. There's probably over a million CNCs and lathes and autoclaves that talk MTConnect. And the cool thing about MTConnect is they already do data models, but they do them with XML. So if you want to get the spindle speed from a current MTConnect agent, you do a GET and it sends you back a 300K XML file that you can parse down and find the spindle speed. And what they've realized is they wanna be able to publish those MTConnect models using MQTT Sparkplug. So we are working with the MTConnect Foundation to natively have MTConnect agents running on CNC machines and autoclaves and all this other equipment, be able to publish that information natively. And you can imagine, that means you could have a whole factory with all of this machinery. You turn it on, it publishes into Ignition, you automatically learn everything about those machines, which would be pretty cool. That's our end goal, if you will.
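For contrast with the Sparkplug approach described next, this is roughly what the classic "GET a big XML document and dig through it" pattern looks like against an MTConnect agent. It is only a sketch: the agent URL is a placeholder, and the name matching is a naive illustration rather than a proper MTConnectStreams parser:

```python
# Naive sketch of polling an MTConnect agent's /current endpoint and hunting
# for a spindle-speed sample in the returned XML.
import requests
import xml.etree.ElementTree as ET

resp = requests.get("http://mtconnect-agent.example.com:5000/current", timeout=5)
root = ET.fromstring(resp.content)

for elem in root.iter():
    name = elem.get("name", "")
    if name.lower().startswith("spindle") and elem.text:
        print(elem.tag, name, elem.text)
```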

05:46
AN: So the other interesting thing, we hadn't even thought about it, so I had Chris run a report and say, well actually, how many people are using MQTT Sparkplug? And at this point in time, there are over 1,300 separate companies that are using MQTT Sparkplug. And six years, seven years ago, if I were to put this pie chart up, it would have been 95% oil and gas. And over the last four or five years, you can see, we've expanded pretty much across this technology, across all of the verticals that Inductive Automation is in. So the adoption for MQTT Sparkplug across all of the industry sectors has been huge going forward. So real quick, I just wanted to review this. What does Sparkplug do? Well, it does four important things. Number one is it gives you plug-and-play auto-discovery. So with a well-known topic namespace, with Sparkplug, you know what the topic is, you go subscribe to it, it publishes a message, you get the message, and you go, oh, I know where you came from and I know what you wanna do.
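That well-known topic namespace is what makes the auto-discovery possible. As a sketch (Python, paho-mqtt 1.x API; the broker host is a placeholder, and real Sparkplug payloads are protobuf-encoded, which this example does not decode), a subscriber can learn the group, message type, edge node, and device just by splitting the topic:

```python
# Subscribe to the Sparkplug B namespace and print the topic structure:
# spBv1.0/<group_id>/<message_type>/<edge_node_id>[/<device_id>]
import paho.mqtt.client as mqtt  # paho-mqtt 1.x API

def on_message(client, userdata, msg):
    parts = msg.topic.split("/")
    if len(parts) >= 4 and parts[0] == "spBv1.0":
        group, msg_type, node = parts[1], parts[2], parts[3]
        device = parts[4] if len(parts) > 4 else None
        print("group=%s type=%s node=%s device=%s" % (group, msg_type, node, device))

client = mqtt.Client()
client.on_message = on_message
client.connect("broker.example.com", 1883, keepalive=60)
client.subscribe("spBv1.0/#")
client.loop_forever()
```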

06:58
AN: So, high level, gives you plug-and-play auto-discovery. Number two, very important, as we're finding out, as Colby and Carl talked this morning, this is digital transformation. And to do that, you can't have data in the data swamp, you have to have contextualized data that you can actually see from a business-level standpoint of what that data is. So with Sparkplug, we can publish a model, or the definition of that. Now, you instantiate that and create the asset, and I hate the word, but we'll call it that, you create your digital twin. Now, everybody's notion of a digital twin is different. I think ours is the best and we'll see that in the demo here in a little bit.

07:43
AN: The third thing that Sparkplug does is that we have been wrestling with registers from PLCs and our sensors and our flow computers for the last 47 years that I've been doing this. Modbus register 40002, and it's got a value of 17. 17 what? Degrees, gallons, we have no idea, so what do we do? We sat a human being in a chair, and we said, "Okay, Arlen, engineering high is this, engineering low is this, engineering units is that, and I hope I typed it all in correctly because you're gonna run your plant with all of that information that I just typed in."

08:21
AN: But with Sparkplug, we create a digital object that I can go back five years from now from this Snowflake demo that I'm gonna do, find that tag, and I can tell you the name, the value, the timestamp, the engineering high, the engineering low, the quality, and any other custom property you wanna decorate that measurement with and get it into Snowflake, we can do that now with Ignition. And then the last thing Sparkplug does is it gives us that state management. Because if I can't guarantee that I know the state of all your process variables, if you're doing command and control, or you're going to the cloud, then you're not gonna trust that, you're not gonna use Sparkplug. So, Sparkplug tells you that you are online, that value is last known good, and then if your network goes down, you're gonna know about it, all the tags will go stale in Ignition, but when it comes back up, we know at the edge, at the Ignition Edge, everything we would have published goes into a store and forward queue, and now we can do store and forward.
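A purely conceptual way to picture that digital object is a value that carries its context with it, instead of a bare register number. The structure below is illustrative only; the real Sparkplug wire format is protobuf, and the names here are made up for the example:

```python
# Conceptual illustration, not the Sparkplug wire format: the published metric
# carries name, value, timestamp, quality, and engineering properties together,
# rather than a bare "register 40002 = 17".
melt_temperature = {
    "name": "Line 7/Extruder 7/Melt Temperature",
    "value": 148.85,
    "timestamp": "2023-09-27T15:14:00Z",
    "quality": "GOOD",
    "properties": {
        "engUnit": "deg C",
        "engHigh": 225.0,
        "engLow": 0.0,
        "deadbandMode": "Absolute",
    },
}
```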

09:24
AN: So with Ignition on the left side, we've got that brownfield connectivity that we need to connect to all those different protocols, all those machines, and bring that into the Ignition platform. From the platform, we've got a really cool tool called UDT, and with that UDT, we can organize that data, we can give it context, we can give it engineering units, give it engineering high, we can give it asset properties because it's very important. Think of like PI Asset Framework, you've got all your asset information over here, which is different from your historical data over here, but we're gonna be able to put that together in one single database, and then we can take MQTT Transmission and publish that to an MQTT infrastructure, where it can be consumed by what? Well, it can be consumed by Ignition, for sure, but we're introducing IoT Bridge for Snowflake. So those Sparkplug messages coming from our MQTT Transmission module into a server, well, IoT Bridge sits there, it's an MQTT client, it knows how to receive those messages coming in, and then using Snowpipe Streaming, we can do sub-millisecond inserts of rows into Snowflake data tables.

10:45
AN: So that means that we can take all of that contextual data we have in Ignition, and by a click of a button, get all of that natively into Snowflake, the data cloud platform. But wait, what is Snowflake, right? So I'll bring Pugal out, Pugal will tell us. Now, Pugal and I have a bit of a history. We've been working together since AWS IoT, and right before Christmas last year, Pugal called me, he said, "Hey, Arlen, I'm the manufacturing CTO for Snowflake," and I said, "Great, Pugal, that's fantastic. What's Snowflake?" And so here it is, it's incredible technology, and here's Pugal to tell you about it.

11:31
Pugal Janakiraman: Thanks, Arlen. Okay. So what is Snowflake? There is a reason why we sat together and picked Snowflake as a platform to build this out, because this is an Industry 4.0 journey. There is a whole bunch of requirements around Industry 4.0. One is that the attractive thing around Industry 4.0 and value proposition is you need very high level of compute, you need an extremely performant database out there, because this is a big data problem. You're bringing in huge volume of data, spanning IT and OT data sources into one location, whether you call it as unified namespace or a centralized location where you can facilitate IT and OT convergence, you need a high-performance database out there. So, the challenges I have seen, been in the middle of a few hundred of these Industry 4.0 initiatives, is today if customers want to go build an Industry 4.0 solution, if they pick a cloud vendor, you have to learn around 200, close to that amount of services, elemental services, stitch it together to build a solution, govern all of it, go through the whole journey of learning that and go from there.

12:45
PJ: That is hugely challenging for most of the customers we work with. So what do we do here? Snowflake is a globally connected, cloud-vendor-agnostic data platform. So what does it mean? You don't have to go learn hundreds of services from multiple cloud vendors and build an Industry 4.0 solution. We got that covered. It's one single managed service from Snowflake. We take care of security, we take care of governance, we take care of scalability. Every one of those is taken care of by us. And on top of that, even cooler, your API of choice is still SQL. You don't have to learn hundreds of new services. You continue to use SQL as a mechanism to leverage data which is present in Snowflake, whether it is around building dashboards or you want to build an AI and ML model or build inference around those models, you still use SQL as an API for doing that.

13:38
PJ: So this is extremely powerful, one-stop shop, easy button to adapt to the cloud. And that's what we bring to the table, Snowflake as a company. The other one, as I said, you need a highly performant database to do that. So Snowflake is a cloud-native database built 100% on cloud, and it is one of the most performant databases in the market today. Again, this is not a marketing statement. If I had to pick a number, I just brought up a number on what really is the kind of transactions which happen in Snowflake today. So in April of this year, 2.9 billion queries were run on the Snowflake data platform. And in just one single customer, one single table, there are around 50 trillion rows out there. For us to go operate and pull up millions of rows and visualize that, it's no big deal. We do that on a daily basis.

14:33
PJ: And the largest number of queries a customer is executing within a one-minute interval is around 160,000. 177 petabytes of data is being maintained within the databases of just five customers. So big data handling, we do it on a daily basis. That is our lineage. We started as a data warehousing company and built a data platform around it. So handling this volume of data is pretty much a daily affair for us. The other one is around collaboration. There is a whole bunch of customer ecosystem built around Snowflake. Data sharing between different customers, it's a matter of you don't copy the data over, you can just refer to the data and still run analytics. Why is it important? You got a whole bunch of OEMs and you got a whole bunch of suppliers out there. If you want to share quality records or you want to share connected product performance data to your supply chain, you don't need to copy the data over.

15:33
PJ: Data can still reside on-premise or it can reside in whatever is your cloud vendor of choice. You can run analytics without the data movement out there. So we provide that kind of collaboration mechanisms. Another cool thing, with the volume of data, just visualizing billions of records or millions of records, human mind cannot comprehend that and derive inferences out of it. We provide AI and ML-based analytics. In fact, yesterday we demonstrated how you can just provide the data set to our pre-built anomaly detection algorithm. It is going to tell you that there is an anomaly going to happen and you might want to take a look instead of getting into an unplanned downtime kind of situation. So we do that as well. We provide all this reference architecture as part of Snowflake data platform. And obviously, with all these capabilities, it accelerates the analytics adoption, whether it is on IT or OT data or a mix of both.

16:31
PJ: So that's what Snowflake brings to the table from a manufacturing perspective. There's a lot of technical detail behind this. Feel free to stop by at our booth. We can go through this in any level of detail on what you would like to understand around what Snowflake brings to the table, technically speaking. Just to summarize, so what does it mean for customers and partners? We got it covered: whether the data is sitting in silos of databases and on-prem systems, or is distributed across different organizational boundaries, or across multiple cloud vendors and multiple regions, we can run analytics seamlessly. So I think that is one of the major value propositions we bring to the table. So any data products you build and offer to your customers, it's global in nature. It can scale. We got the security covered. There is seamless collaboration which is possible between you and your customers, and your suppliers.

17:31
PJ: It's not an issue at all, okay? Performance, as I said earlier, we got the performance factor covered as well, okay? Added to that, we've got thousands of customers using Snowflake for various analytical needs today, with pre-built integrations with popular systems like SAP, in addition to OT systems which Arlen talks about and which he's going to demonstrate as well. And we provide Snowflake Marketplace, where you not only can take the products you've already built today on Ignition, you can monetize those data products and offer them through our marketplace to thousands of customers we've got around the world. So that's what Snowflake brings to the table. Instantly scalable. You can build global data products which you can take to your customers. So pretty much that's the Snowflake value proposition.

18:25
PJ: So again, quickly before I hand it over to Travis, this is how the journey started for us. Ignition at the edge, with zero coding, using the Snowpipe Streaming API to send the data to Snowflake. So again, this is one of the best integrations built by any cloud vendor as of today, from a cost point of view and a fidelity of data point of view. To accurately represent every possible manufacturing data type in the cloud, you need to support around 13 data types. No other cloud vendor does that today. The maximum they support is four data types, which means all the other data types get slammed onto the existing data types they support. And there are always loss-in-translation issues associated with that.

19:10
PJ: In our case, we support all 13 data types that Sparkplug B carries. We support all 13 of them, and this is the lowest possible cost integration, with high-performance, near real-time analytics we can perform as well. That's what we built and launched as part of the manufacturing cloud between Inductive Automation, Cirrus Link, and Opto 22 as a joint solution offering. Okay. We have made that much better now with Snowflake, with Ignition Cloud Edition as a connected application available in Snowflake, and along with that, in addition to OT data, you got IT data, you got third party data like weather, traffic information, supply chain information already being managed in Snowflake, you have an opportunity to build applications on top of Cloud Edition and take them to your customers. And every application you have built and launched at the edge will seamlessly work in the cloud with this edition. I think again, this is a cloud vendor perspective. With that, I'm going to give it to Travis to talk about it from the Ignition point of view.

20:11
TC: Alright. Thank you.

20:19
TC: Alright. So everything that we are showing on this slide here is something that's available today. And we're gonna show a full example of how, with a demo with Arlen and myself, how we go from Edge to Cloud going into Snowflake, back into Ignition Cloud Edition so we can show some dashboards, get information out there. And what we're talking about is what Snowflake's calling Connected Apps, right? We're simply gonna be deploying Ignition Cloud Edition to our Azure or AWS account, and we're gonna connect to Snowflake through JDBC, and be able to get that data from there and put it onto dashboards. So we're gonna show you what that looks like. However, we're thinking future and how this can even grow and get even bigger as we go forward.

21:01
TC: And there is a potential future landscape where... Whoops. All of that can be simply running all within Snowflake's cloud environment, so that you could spin it up really, really fast and get these solutions going quickly. So, but the idea is really simple, right? The focus of this is being able to get data that is modeled, customers need to... Basically it's a culture shift, right? Where they have to think about how they're gonna standardize on data and their data models across their entire organization, and the idea of this is to get it into a storage where that data is stored with its context, so we can go a lot further. So, what's really funny about this whole thing, when we got introduced to Snowflake is, at the end of the day, it's a database and we can connect to it just like we connect to every other database within Ignition through JDBC. And you can install that JDBC driver really easily in Ignition and you can issue queries just like we do with any other database.

21:54
TC: And so, we're gonna show that here today. It's very, very easy to get connected, very easy to issue those queries. We can issue them anywhere within Ignition, and they also provide a REST API so you can actually go a little bit further as well with that. There was nothing we had to do in day one. We just had to install the JDBC driver and get started. And from the very beginning of our company, we've been centered around SQL databases. This is just now a database that's highly scalable, it's in the cloud, and it allows for a lot more opportunity for what we can do with that data. And a lot of that is around AI and ML, as Pugal was saying, there's anomaly detection and forecasting services that are built into Snowflake, and you basically train models and you can do the detection on those just by running simple SQL queries against Snowflake.
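Once the driver is installed and a database connection is defined on the Gateway, a one-line script is enough to confirm that Snowflake behaves like any other SQL source from Ignition. In this sketch the connection name "Snowflake" is an assumption for illustration:

```python
# Ignition script-console sketch: verify the Snowflake connection works.
# "Snowflake" is whatever name was given to the Gateway database connection.
version = system.db.runScalarPrepQuery("SELECT CURRENT_VERSION()", [], "Snowflake")
print("Connected to Snowflake version:", version)
```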

22:45
TC: So it's very easy to work with this. However, it doesn't have to be within that. Any other service or tool that's out there that wants to be able to do that same thing, you can connect to the database the same way and you have all that data, you have all the context, you can go and learn everything that's there and go a lot further, right? And with this, what we're talking about too is not only you get the storage, you get these kind of services, but you get those results back into Ignition so that we can provide that information back to our operators, can provide alarms, whatever it might be. So it's kinda that full circle kind of integrated solution. So that's all I wanted to say really, in terms of Ignition and Snowflake. We're gonna get into the demo a lot more, but I did wanna bring up the Community-Powered Sparkplug Data Dash, because we thought for the conference here, we wanted to show this whole thing in action.

23:31
TC: And well, we got all the community to participate, where they're basically leveraging Ignition or Ignition Edge or potentially have a smart device that speaks MQTT Sparkplug and they're gonna build a data model, publish that up to a Chariot broker that's in the cloud. Real simple. Then we can use the IoT bridge for Snowflake by Cirrus Link and all that data from Sparkplug goes directly into the Snowflake database. We're showing it on a dashboard within Ignition, but it's going to Snowflake database as well. And we can easily go and query that data. And we went one step further and we're actually showing the anomaly detection within the Data Dash. So we'll do a demonstration of this in just a moment, but wanna show you just how easy it is for this solution. And it's all something we could do right now. It's very, very simple to get started with this whole thing. So with that, Arlen, I'll bring it over to you for the demo... Start at the demo here.

24:23
AN: Alright. Cool. Thank you. All right. Real quick, the topology is, I've got some simulated devices. Some of the devices are in Stillwater, Oklahoma, that I'm actually talking to, publishing those up to MQTT Distributor running on Ignition on an EC2 instance in the cloud. And so what we're gonna do is we're gonna go into Ignition, we're gonna build our "digital twins," but they're much more than digital twins. We're gonna show all that context and then we're gonna say, "Okay. Well now we've got this single source of truth. How much code are we gonna have to write to get it into a highly scalable Ignition or into a highly scalable cloud database?" And then from there, Travis is gonna go, "Oh. Well I've got that data in there. Let's see what I can do with Ignition Cloud Edition."

25:13
AN: So we're going to do the live demo, which we always love doing. All right. So, I know it's a bit of an eye chart, but it's hard to zoom in on the Tag provider. But I've got a Tag provider, Smart Factory, and underneath that I've kind of got the whole unified namespace: I've got Smart Factory 1, and under Smart Factory 1, I might have some building management systems, because we've got BACnet/IP with Ignition now, I might have some Opto 22 KYZ meters, and I've got my equipment in the factory, right? I've got a CNC, a lathe, a haul-off machine. And then down here you can see I've got the notion of an extruder. And this extruder has some process variables, some temperatures and some pressures and things like that. And had we... The way that we've been doing this going forward is that executives came to operations, they go, "Hey guys, we heard there's digital transformation. We gotta get all of our data in the cloud."

26:15
AN: "Okay. Well let's put all of our data in the cloud." So they go out and they write a bunch of code and they go in here and they go, "Okay. Let's do this and then let's pretend this is the cloud over here. And boom. Okay. We're done." We've got all of our data going into the cloud. It's all going into a data lake. But wait a minute, without some context, how can I use this? So I come into my data lake and I wanna look at something, and I've got 148 degrees, 148.85 degrees, where'd that come from? What machine was it attached to? What plant did it come from? I don't know. Oh. That's over another database. So I need to write some code. And then maybe there was some other asset information, now I've gotta get some code. And what happens is we've got terabytes of data hitting data lakes in the cloud and nobody's doing anything with it because it's too hard and you can't get any context from the data. So, let's drain the swamp. And before we do that, let's go into that extruder and actually give it some context.

27:34
AN: So I wanna build a UDT of an extruder model. And every time that extruder shows up, the first thing that I want to do is I probably want to give it some asset information. Asset ID, asset serial number, location, anything else that you want to be available to you on each instance of that extruder in Snowflake that you want to be up there, you can define in your UDT and it'll be automatically published up there. And now that I've got my asset information, I can go back to that melt temperature and say, "Look, for that machine when melt temperature shows up, I don't care if it came from Allen-Bradley PLC or a Modbus or Rockwell, I want to know that it represents melt temperature, it's 0 to 225 somethings. Those are in degrees C, it's using absolute deadband.

28:22
AN: There's my deadband percentage and my scale mode and anything else again that I want available to me in Snowflake when I'm done with this demo, I can define in this UDT. So now that I've defined my machine, very, very simply using tools on the platform, I can go in and define a dryer and a bunker, and now I can come back and take those nebulous tags and look at the fact that this extruder actually was, extruder seven, was a model of an extruder. And you can see here I've got my asset ID Wile E. Coyote, asset serial number B549 courtesy of Hee Haw, location in Oklahoma and all my process variables. And since it is a UDT, I can use the power of Perspective or Vision to be able to start taking that and maybe when the extruder feeds into a bunker, and the bunker feeds parts when it comes out into a CO2 dryer, and maybe I've got an Opto 22 EMU and it's measuring the three-phase power on that extruder. But my point is that at 3:14 on September 27th, this is the single source of truth of my factory.
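Because the instance is just tags in Ignition, the same context is available to scripting as well. A small sketch using the Ignition system.tag API; the provider and tag paths mirror the structure described here, but the exact names are assumptions:

```python
# Read a few members of the extruder UDT instance from Ignition scripting.
paths = [
    "[Smart Factory]Smart Factory 1/Line 7/Extruder 7/Melt Temperature",
    "[Smart Factory]Smart Factory 1/Line 7/Extruder 7/Asset ID",
]
results = system.tag.readBlocking(paths)
for path, qv in zip(paths, results):
    # Each QualifiedValue carries value, quality, and timestamp together.
    print(path, qv.value, qv.quality, qv.timestamp)
```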

29:48
AN: This is the single source of truth. I didn't define it in the cloud and then try to bring it back down and iterate back and forth, I know this is my factory. So I just came off of a really cool demo from Snowflake and I go, "Wow. What if I could get that single source of truth into Snowflake? How hard would that be?" So what I'm gonna do is I'm gonna go to the Azure or AWS marketplace and I'm gonna download the IoT Bridge for Snowflake. I'm gonna install it. And when I install it, it's going to go into my Snowflake console here and it's gonna create two very simple databases, a node database and a staging database. And in here, I have a very simple Sparkplug device message table that you can see right now is empty. And when we installed it, we also added some convenience views, and since it's all going up from UDTs, I've got a view that says, "Hey, tell me about all the UDTs that are in that factory or all the factories." Oh, well, I don't have any factories yet. So I need to fix that. Let's go back into our Ignition configuration. And you can see here that I demo a lot. I've got a lot of tag providers and if you look at Smart Factory, it's pointing to the Snowflake MQTT server. So that's great. I'm gonna come over here and I'm gonna enable my MQTT Transmission. Okay? And when I did that, what happened?

31:36
AN: When I did that, MQTT transmission looked into the Smart Factory Tag provider and it says, "Hey Arlie. You've got all these models, you got dryers and extruders and conveyors." And so we're gonna publish those using Sparkplug. And the Snowflake Bridge was sitting there listening to an MQTT server. It was a very... It wasn't doing anything. All of a sudden, messages started showing up. Remember that advantage, auto discovery. "Oh. We got an extruder." Now I'm gonna put that into Snowflake using Snowpipe Streaming. So 15 seconds ago, I didn't know anything. Let's go back to our Snowflake console and let's hit Refresh. And lo and behold, we now have a Smart Factory 1 with views of every machine that we've got in that factory.

32:32
AN: Before I go look at one of those, let's ask the SQL database, what models do I have? Let's ask it again. "Oh. Arlen, you've got an extruder, a chiller, a dryer." So now I literally know everything that was in that UDT on Ignition. Now that I know all of the models, I can go back over here and say, "Well, now that I know that, let's go to that extruder and let's do an SQL query, which everybody knows SQL and single, this unified namespace, Smart Factory, Smart Factory 1, line seven, extruder seven, when did the message arrive? What was its sequence number, and all of my process variables in real-time, all hydrated, no holes in the database. I literally could start using this today. So if I know SQL, it took me five minutes to get all my machines defined, get everything up there in real-time. And now for every machine I had in that Smart Factory, I now have a single source of truth of all the real-time data is showing up in Snowflake. Pretty cool. Now, once it's in Snowflake, what can we do with it from there? And with that, I'll turn it back over to Travis.

33:55
TC: Sweet. Alright. So, again, once it's in the Snowflake database, it's just a matter of going and doing, issuing queries against that. So, I'm going to switch over and show you the Sparkplug Data Dash here. And so this is our server that we have that's running in the cloud. And you can see that we've got a Snowflake database connection here that is connected and valid. So what we did first though is we went to the driver's part here in Ignition and then JDBC drivers, we had a bunch of pre-built ones that come with it. Now we're working on getting the Snowflake one built into Ignition, in a new build. But for now, you can go download the JDBC driver and simply just go ahead and install it.

34:37
TC: And we have some instructions on that, a little readme on how to do that. Real simple. Get that installed. Once we have that installed, we can go and make a connection like we have here. And so just like any other database, of course, once I have that valid connection, I can go anywhere in Ignition, and I can use it. So I'm gonna open up the designer here and what we've done for the Data Dash, and I'll go and show you the application in a minute. But we just basically, if I go to the Snowflake, we have a bunch of predefined named queries that basically go and query certain tables. So, he was showing that, that Sparkplug device messages table, and so if I go and look at this, you can see that we're just doing a standard select query against that Sparkplug device messages table.

35:21
TC: And we're looking for... And this one I'm filtering for specific group ID, Edge node ID, and a specific data model that I wanna look for, that we're using for the actual dashboard itself. So it's incredibly easy for me to go into Ignition. In fact, we can go into the database query browser against the Snowflake database and we can easily start saying, "Select star from stage DB, sparkplug device messages." And so we can just bring that data back and anywhere in Ignition within that. And in those queries, we can have... There could be millions of rows. In fact, with the Data Dash, we've got over 120 million rows at this point that we've been logging with that and it's very, very high performance to get that information back.
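The named queries behind the Data Dash boil down to filtered SELECTs like the sketch below. The connection name and the exact table and column names are assumptions based on how they are described in the talk, so treat them as illustrative:

```python
# Sketch of a Data Dash-style filtered query against the Sparkplug device
# message table, run through Ignition's scripting API.
query = """
    SELECT *
    FROM STAGE_DB.SPARKPLUG_DEVICE_MESSAGES
    WHERE GROUP_ID = ? AND EDGE_NODE_ID = ?
    LIMIT 500
"""
rows = system.db.runPrepQuery(query, ["Smart Factory", "Smart Factory 1"], "Snowflake")
print(len(rows), "rows returned")
```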

36:12
TC: So as you can see, that's how we have developed it with the Data Dash. Let's actually go and show the outcome of what we built. So we're gonna go to tryignitioniot.com. So if you haven't checked out Data Dash, simply go to tryignitioniot.com on your phone. You can go... There's the... On the tech lounge, there's a TV up there that has this application open. So here's what we did. We asked participants to go and do exactly what Arlen just showed. He built an extruder machine, a data model. Build any kind of data model that you want, right? Provide that context, provide those parameters that you wanna associate, provide the engineering units and the engineering ranges of the values. Basically create a UDT within Ignition or any other device that speaks Sparkplug, and have that published up to a cloud MQTT broker. With IoT Bridge, everything he showed, that all came into Snowflake and it's all ready to be discovered. So, this dashboard, you can go and you can actually go and see these data models. So if I go look at, for example, I'll use Opto 22's EPIC c-store. We're just showing a visualization of this. Let's go to a different c-store.

37:20
TC: So, we're just showing a visualization of that data model. So you can see the information up here. So there's a perspective template that corresponds to that data model, so that we can easily look at that live data. But again, that history is all going into Snowflake and it's accessible so that we can query that. So let's go over here to the Snowflake tab. And the first overview of this is basically just a discovery of all the data models that happen to exist within Snowflake. So much like he just showed how all those views got created, well now we can actually go and query those, and we can discover information about this. So for example, let's go in. Since I was using the Opto 22 c-store, I'll go into the Stillwater and look at that particular data model. So there, on the right-hand side, we can see all of the parameters that are gonna be... That are part of this is like the UDT definition. All the parameters that are there, what the data model is, here are all of the process variables that are in there.

38:17
TC: For the process variables, like, for example, if I look at this freezer compressor, I'm gonna get, of course, that it's KW and I get the range, 0 to 1500. So this is all... I can have Ignition completely independent from all of the... Not even connected to the MQTT broker, and I can see all the data models that happen to exist within Snowflake, because again, using Sparkplug, those templates were sent to a broker and into Snowflake, again, it's that same exact context. So very, very easy to see that. So this overview is kinda just showing all the data models that are in there, and we've got a whole slew of them with this, so let's see if I can clear this out or there's no exit on that, but we have a whole slew of different data models that are there. At the end of the day, then we can go and query the history very, very easily, and build dashboards and we can go a lot further.

39:06
TC: So I'm gonna show you two kind of demos, one is we're just gonna go and query the history, bring it back into trends, so we're gonna go and select... I'll need to go down to one of those instances, those data models that we have, I don't wanna look at that data, so we'll go... Again, we'll look at the Opto 22, since we're on there, we'll go to Stillwater, look at the EPIC c-store, and because we have the data model stored, you can see here's all the tags, all the process variables associated with it. We already know what those are, and I'll go and select a particular instance. So here's our c-store 405, here's my date range that I'd wanna query the history on, and we'll just select some process variables. I'm not gonna select all of them, we'll just do, let's say, the compressor, all the freezer system, we'll bring those back. I'll apply. And basically, at this point, we're gonna go and issue the... For that time period that we have up here, we're gonna issue a query to get back that history. The idea is that we can simply just go and query all that data. We can bring it back on trends... Hey, there we go, just took a few for that information to come back.

40:03
TC: So, not only is all that data stored there, we can discover that, we can understand what it is, we can query it, put it back onto a dashboard very, very easily. So that's kind of one demonstration of what we're using with Snowflake. The other, of course, is going to the ML/AI side. We're talking about anomaly detection. And so if I go back over here to the map and we look at a particular location, let me go back to that, that Stillwater one, on that freezer, where we have that Compressor KW, we do have the Anomaly Detection turned on in Snowflake. We trained the model based on good data already and just basically ran a SQL query to train the model. And once it's trained, then, continuously, since that data is piping through the bridge into Snowflake all the time, there's a task running on the Snowflake side, very, very quickly, that is basically looking at the last bit of data we brought in, and we're gonna run it through that model to see if it detects any anomalies. Now we're kind of manufacturing this by clicking a button that says Trigger Anomaly, but it is going through that whole system, kinda coming back, where we're getting that feedback back in Ignition. So if I go ahead and do that, what we're doing is gonna...

41:08
TC: We're gonna spike that Compressor KW, which of course, is gonna cause that anomaly to happen, but as you can see, that came back extremely fast, running that model very, very quickly on the Snowflake side. We got the anomaly that's an alarm within Ignition, we could do something about that, but those can be running all the time. And because we trained the model off of that UDT, any new site that has that same data model can take advantage of that same... The same thing that we've built, so we can easily do anomaly detection across the entire enterprise on those data models.

41:41
TC: So it's very, very easy to get these things going, to go further with all of this, not only are we showing how we can get the data into... Get it into Snowflake and how we can leverage those UDT models, we can easily bring it back into dashboards and show that data very effectively. So with that, I think we'll just be opening up to questions.

42:11
TC: So anybody have questions out there? Yes? We have one down here...

42:14
Speaker 4: I know it's hard to say, but what's the rough startup cost of getting the MQTT,

42:22
And then the Snowflake? 

42:26
AN: Free. That's the rough startup cost... Everything that you're seeing there, you can run in trial mode, right? So you'd probably have to get a test account, and you can get a test account from Snowflake. For the IoT Bridge, that's 30 days free. So you can do it for 30 days, basically for free.

42:47
TC: The whole thing would be, so you got... You've got Ignition, which you could do in trial period, no problem; in trial period, we can also provide longer trial licenses if required. The IoT Bridge is 30 days free, easy to work with, and with Ignition Cloud Edition, that would be the broker, that would be in the cloud, you'd wanna have some broker up there, it could be that, it could be something else, so you can run that for a couple of hours or a few hours. It's pretty low cost, maybe a dollar per hour. And then with Snowflake, I believe, when you create the account, there's a... I think there are credits you already get.

43:17
PJ: Yeah, there are some credit options, we can work with you on that. I would say it's pretty much everything is... When you do the compute, you do the reporting, it's pay-as-you-go... It's like an electricity bill. When you use it, you get the bill; otherwise, we're not going to charge you. So, pay-as-you-go model. That's what it does. And again, I think having done those kinds of Industry 4.0 initiatives,

43:38
PJ: Multiple of them, I would say this is the lowest possible startup cost around Industry 4.0, because even four years back, with the initiatives which used to happen, for a few hundred thousand dollars we could connect three machines and we could do a business outcome. That was the pitch. It's no longer there. It'll be hardly a few thousand dollars to get it started. At pilot level, I don't see that as a challenge.

44:06
TC: And yeah, and one thing to mention is that... Oh, I lost my train of thought... Oh, well, we'll come back to that.

44:13
AN: Well, no, I think... What I was gonna mention is that, the other thing that's really different here, it was an advantage, Snowflake didn't have an IoT service when we started this project, so they had no notion of charging by the measurement. So it doesn't matter if you're publishing a 1000 tags or 50,000 tags, you're running in a compute warehouse, so you're not charged by the measurement like you are on all the other data services, you're just running in a compute warehouse; as long as you stay within that warehouse, you know your cost.

44:47
PJ: In fact, there are two advantages which came with that. When Arlen mentioned there is no IoT service, [0:44:53.8] ____ but last year when I took this role, I told Arlen that this time, when we do the integration between Snowflake and the edge, for edge-to-cloud business outcomes through Inductive Automation, it should be the best-in-class integration ever built on this planet, so far. Again, I think there, we had an advantage because we didn't have an IoT service. There are two major advantages which came with it; one, there is no additional cost factor. We are not gonna charge you for an IoT service, which other cloud vendors are going to do.

45:26
PJ: The other one: pretty much every IoT service has a sub-optimal view of the manufacturing asset world, and the way they have done the modeling always becomes a challenge when you try to move that edge data to the cloud; there is always a compromise made on the data model. When you try to change the data model, you've got a bigger problem associated with it. So these are all challenges we never had, so we made sure that we can handle every possible data type. And data ingestion, in our viewpoint, should be a commodity, because either way, we don't make a lot of money in data ingestion, it's pretty much nickel and dime to move the data from the edge to the cloud, it's really around compute, that's how we charge you. So we are trying to keep it as easy as possible to move the data into the cloud.

46:09
TC: I remembered my train of thought real quick, which is for existing customers who already have Ignition, it's incredibly easy to take advantage of this. We're talking about simply just getting MQTT transmission, just plopping it in, if you have models already built, it'll be that quick to get integrated again.

46:24
AN: Exactly. If you already have Ignition, we're probably talking less than a day.

46:27
TC: We're talking, for new customers though, for people that maybe have a new site or a new facility or something, or they haven't had Ignition at all, it's going with Ignition Edge or your full Ignition, putting it in to connect to PLCs, bringing those... Building the models is super easy. In fact, we've also built a kit with Opto 22, where they have their EPIC controller with Ignition Edge on it already ready to go; especially for energy, with the energy monitoring units to basically pump those energy UDTs in the cloud, so there's a lot of easy ways to get started. Other questions? There's one in the back up there.

47:07
Speaker 5: So, for the piece that you were speaking about, in terms of ML or the pre-trained models, can you go into a little more detail about A, the training that goes into those pre-built models and B, the explainability behind those models? 

47:21
TC: Yeah, so for the Anomaly Detection Service, the way that that works is, you're basically kinda like calling a stored procedure almost. You're specifying, you're doing a train model call and you're specifying the data set that you'd wanna train it on. And so in our particular case, we're doing one of those [0:47:37.1] ____ as of use that Arlen showed, for a particular...

47:39
TC: So we did it for this, the c-store, we did it on that, on that freezer compressor, we basically brought back the data from the time period that we'd wanna train in... We trained it on, I think, a few thousand rows of data that was good. So we call that function once and it creates an object in Snowflake, that is the anomaly detection object. And much like you're creating a table or a view or a task, things like that, you're creating one that you can then run again later. So then next time, when you want to do a detect anomaly, you just run another SQL query that is saying... Basically, call this anomaly detection name, you say detect anomaly, so you give it a new query or a new set of data you'd wanna run through, and it will give you back a result, a table that's gonna show you, for all the data, if there are anomalies or not, what the variation is, all of that. And so we just basically take that, that result and if we see anomalies, we then trigger that alarm to come back to Ignition. So as simple as that, two queries: One to train and one to detect. It's as simple as that.
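That two-query pattern maps onto Snowflake's built-in anomaly detection objects. The sketch below shows the general shape; the model, view, and column names are assumptions for illustration, and the exact syntax and output columns should be verified against Snowflake's current ML documentation:

```python
# Train once on known-good data, then detect on each new batch.
# Run against the Snowflake connection from Ignition (all names are placeholders).

train_sql = """
CREATE OR REPLACE SNOWFLAKE.ML.ANOMALY_DETECTION FREEZER_KW_MODEL(
    INPUT_DATA => SYSTEM$REFERENCE('VIEW', 'FREEZER_KW_TRAINING'),
    TIMESTAMP_COLNAME => 'TS',
    TARGET_COLNAME => 'COMPRESSOR_KW',
    LABEL_COLNAME => ''
)
"""

detect_sql = """
CALL FREEZER_KW_MODEL!DETECT_ANOMALIES(
    INPUT_DATA => SYSTEM$REFERENCE('VIEW', 'FREEZER_KW_LATEST'),
    TIMESTAMP_COLNAME => 'TS',
    TARGET_COLNAME => 'COMPRESSOR_KW'
)
"""

system.db.runUpdateQuery(train_sql, "Snowflake")       # one query to train
results = system.db.runQuery(detect_sql, "Snowflake")  # one query to detect
for row in results:
    if row["IS_ANOMALY"]:  # output column name per Snowflake docs; verify
        print("Anomaly at", row["TS"])
```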

48:40
Speaker 6: Okay. Are there any plans to add discovery tools for engineers who like to look at trends initially to build out some ideas before they run it through the model?

48:54
PJ: If you can swing by the Snowflake booth, we can go deeper into that. That's a longer conversation, if you don't mind.

49:02
AN: Alright.

49:02
TC: Alright. Thanks, everybody. Awesome.

49:03
AN: Thanks, everybody, appreciate it.

Wistia ID
n4vjppa7mj
Hero
Thumbnail
Video Duration
2953

Speakers

Arlen Nipper

President & CTO

Cirrus Link Solutions

Travis Cox

Chief Technology Evangelist

Inductive Automation

Pugal Janakiraman

Industry Field CTO - Manufacturing

Snowflake

ICC Year
2023
icc | 2023 Community Session

Sepasoft MES Orchestration for Digital Transformation

Manufacturing workflows are required to execute critical processes the right way – every time. The correct tasks must be carried out in the correct order, with the correct materials, approvals, quality checks, and accurate records, especially in regulated industries (e.g., 21 CFR Part 11). This objective, and true Digital Transformation, can only be accomplished with a platform that is integrated, agile, low-code, and feature-rich. Join us for a demonstration of our various MES offerings to showcase Sepasoft’s orchestrated workflow solution.

43 min video

Watch the video
icc | 2023 IA Session

What's That in the Sky? An Intro to Ignition in the Cloud

Is it a bird? A plane? No, it’s Ignition! There’s enough buzz around deploying Ignition in the cloud, you’d think it would give your system super powers. But does a cloud deployment align with your organization’s grounded, realistic objectives? In this session, we’ll introduce cloud deployment concepts, discuss which architectures and scenarios benefit the most from cloud-based integration, and share real-world Ignition use cases.

46 min video

Watch the video
icc | 2023 Keynote

Main Keynote: Elevating Automation

Let's kick off the 2023 Ignition Community Conference on a high note. Join Inductive Automation's leadership team as they reflect on the past year, look toward the future, and give you a bird's-eye view of our growing company, ever-evolving industry, and thriving Ignition community. This is ICC, elevated!

98 min video

Watch the video
New Possibilities at the Edge Esther Fawson Mon, 10/30/2023 - 12:35
As industrial organizations do more at the edge of the network, important new questions are arising. What is the relationship between edge systems and centralized systems? What can you do at the edge that you couldn’t do before? How can you use the edge with the cloud effectively?