ICC
Explore past sessions from the annual Ignition® Community Conference.
Integrating Ignition with Exciting Peripherals
Ignition is based on open standards, is deployable anywhere, provides data to anyone, and can integrate with virtually any system or device. This allows you to leverage best-in-class technology with seamless integration into Ignition. Perspective and the native iOS and Android applications are a perfect example of this. Ignition enables people to extend their applications to a phone or tablet by leveraging the camera, GPS, NFC, Bluetooth LE, and other mobile tools. In this session, you’ll see exciting use cases and live demos featuring one impressive OT peripheral and one very cool guest appearance you won’t want to miss!
45 min video
Build-A-Thon
Behold, another Build-a-Thon is upon us, complete with all the intrigue, feats of daring design, unexpected surprises, and singing that usually accompany such a monumental event. This year, teams from two top integration companies will battle to see who can design the best Ignition project. Don't miss all the excitement of witnessing the crowning of a new Build-a-Thon champion live at this educational, one-of-a-kind competitive SCADA event!
100 min video
Technical Keynote: What's New in Ignition 8.3
Traditionally, we've held the Technical Keynote or Development Panel on Day Three of the conference, but this year we've got something big to discuss, so we've moved it up to Day One of our conference content schedule. It's no secret that we've been working on the newest version of Ignition for several years, and we're finally able to dive deep into what's coming in Ignition 8.3 and how its powerful new features can lead users to their next big breakthrough idea!
69 min video
Main Keynote: Exploring the Impact of the Ignition Community
The global community of Ignition users includes large multinational enterprises, government and professional organizations, small companies, and individuals. While each uses the software differently, they all use Ignition to harness the power of automation to accomplish their own mission of making something better. In this keynote, we'll explore how Inductive Automation is supporting the efforts of the Ignition Community and the incredible impact their work has on the future, improving people's lives at the regional and local level.
56 min video
Build-A-Thon
The conference is guaranteed to go out with a bang as the Build-a-Thon closes out ICC once again. Join us for the conclusion of the ultimate Ignition challenge, where the final two teams compete for the glory of developing the most elevated Ignition solutions and being crowned Build-a-Thon champions. Who will wear the orange winner’s blazer after the votes are all counted? There’s only one way to find out, so stick around to catch the competitive spirit and enjoy an unforgettable music performance from IA’s Department of Funk that you’ll be humming for weeks!
76 min video
Technical Keynote
Developing industry-defining software is no easy task, but someone has to do it. Join our Development team as they highlight recent improvements and upgrades, current developments, and a behind-the-scenes peek at the future of Ignition before answering questions directly from the Ignition community.
60 min video
From LinkedIn Connections to Community Leaders: The Automation Ladies Experience
What happens when two passionate ladies in industrial automation meet on LinkedIn and decide to create a podcast? Magic. And growth, lots of growth. Dive into the journey of the Automation Ladies podcast and how it has become an engine for both business growth and network expansion. Nikki and Ali will unpack how amplifying your voice online can have real-world business benefits. Whether you want to grow your customer base, attract top-tier suppliers, or strengthen your community, this talk offers actionable takeaways on the power of creating an authentic personal brand by sharing your journey with the world.
46 min video
An Overview of Ignition’s MongoDB Connector Module
Earlier this year, we introduced a connector module that allows an Ignition Gateway to integrate with MongoDB. This session provides an overview of MongoDB, outlines the connector module's capabilities, and demonstrates how you can most effectively leverage it to elevate the functionality of your existing deployments.
42 min video
Hitting a Home Run with Ignition
Ignition is not limited to industrial applications alone; its powerful features extend to use cases of all kinds. From its intuitive design features to its robust scripting capabilities, you can harness the full potential of its flexible architecture and rich toolset to create innovative solutions in non-industrial automation development. Witness this potential firsthand through a baseball scoring and statistics app developed entirely in Perspective, with examples of how tags, persistence, scripting, and views can be utilized in a non-industrial setting. Our goal is to inspire others to elevate their lives and hobbies in new creative ways with Ignition.
45 min video
The OG Perspective: 10+ Years of Ignition Wisdom and Beyond
In this session, we'll explore more than a decade of experience with Ignition, sharing valuable insights as a long-time member of the Ignition community. We'll take a practical look at how Ignition has evolved and its role in modern manufacturing, including topics like MES, OEE, AI, and more. It's an opportunity to gain practical knowledge and understand the journey from the early days to today's automation landscape.
42 min video
Rising to the Challenge - Adventures in System Conversion
The folks at Flexware are no strangers to a challenge. When the opportunity to convert a large system over to Ignition arose, they tackled it head-on. Join them in this session where they'll talk about the project, share their lessons learned, discuss custom tools, and describe their thought process.
41 min video
Learning Ignition Fundamentals
Whether you're new to Ignition or just want a refresher, this session is made for all. The Inductive Automation Training team covers all the basic knowledge and fundamental features you need to get started with Ignition.
45 min video
Integrator Panel
Which new innovations will prove vital for future success and which flash-in-the-pan trends are destined to be forgotten by ICC 2024? During this panel discussion, some of the Ignition community's most successful integration professionals share how they are responding to emerging technologies and techniques that are driving the evolution of the automation landscape.
44 min video
Tyson’s Smart Factory Journey
This session provides an overview of how Tyson has standardized operations with Ignition as a SCADA platform, highlighting and detailing how consistent data and dashboards allow for faster implementations. The talk will also include best practices that Tyson has developed, and will identify some of the key integrations that have helped simplify and streamline data collection processes.
28 min video
Don’t Get Lost in the Cloud: Tips & Tricks for Successful Ignition Deployment and Management
With the release of Cloud Edition, it's never been easier to get Ignition running in the cloud. But are you ready for it? From security concerns to misconfigurations, there are plenty of pitfalls to stumble upon when managing applications in the cloud. But fear not, as help is on the way. Join the experts from 4IR in this session where they'll provide helpful tips and tricks for deploying and managing Ignition in the cloud.
45 min video
Elevate Your OT Data Securely to the Cloud
Ignition Cloud Edition! Awesome! But wait… How can I possibly connect my PLCs or I/O systems to the cloud? Won’t that jeopardize them? And require heavy IT involvement? What’s the payoff? In this session, we’ll discuss how to use Ignition Edge and Ignition Cloud Edition together to quickly create scalable, high-performance, cybersecure architectures for democratizing your OT system’s data. Whether in brownfield or greenfield environments, you’ll unlock the power of edge-to-cloud hybrid architectures that are cost-effective, easy to manage, cybersecure, and deliver more value to your organization.
45 min video
We Love Ignition. But Can it REALLY Scale?
Can it REALLY scale? This is a question we have received for the last 10 years. Delve into the realm of enterprise Ignition rollouts with industry insights through the lens of an enterprise integrator. Uncover the strategies and best practices that accelerate implementation and ensure the long-term sustainability of Ignition. Don’t just take our word for it – hear it firsthand during a guest appearance by one of our enterprise end users.
42 min video
Deployment Patterns for Ignition on Kubernetes
Kevin Collins returns to ICC for a demonstration of how to harness the combined power of Ignition and Kubernetes. This session offers an in-depth look at methods for effectively automating deployment, scaling, and managing containerized Ignition applications.
59 min video
Separating Design From Development - Using Design Tools with Ignition
Building screens in Ignition is a breeze, but did you know you can design screens even faster by mocking them up using a design tool? Join us for this session as we talk about the benefits of moving the design process outside of a development platform. We'll cover topics such as design vs. development, UI vs. UX, benefits of using design tools, and an introduction to the design tool Figma.
43 min video
Ignition Exchange Resource Showcase
Since the Ignition Exchange’s introduction in 2019, members of the Ignition community have contributed hundreds of resources ranging from pre-built templates, tools, and scripts to Ignition-powered retro arcade games — all available for free. Discover the full potential of the Ignition Exchange as we highlight some of its most valuable assets, including a handpicked sampling of the top Exchange resources developed by IA engineers.
41 min video
Ignition Diagnostics and Troubleshooting Basics
Ignition offers numerous built-in tools for gathering diagnostic information about the health of your system. This session offers an overview of these tools and explains how our Support Division leverages this information during the troubleshooting process. By the end of this session, fixing problems will feel like shooting code in a barrel.
46 min video
Introduction to Automated Testing of Perspective Projects
Learn the most effective ways to leverage automated testing to safeguard your development-to-production process. This session will start by outlining how the core tenets of testing apply to automated testing, leading directly into best practices for verifying that your Perspective project development is production-ready.
38 min video
Industry Panel: ICC 2023
61 min video
I4.0 Accelerator for Driving Edge to Cloud Business Outcomes
Come and learn with Cirrus Link and Snowflake what your data has to say. Snowflake, Inductive Automation & Cirrus Link have partnered to provide Data Cloud Solutions. With Ignition UDTs, MQTT, and Sparkplug, discover how easy it is to leverage Snowflake’s platform to gain derived data insights immediately through native AI tooling. Learn about the impact of the recent partnership of NVIDIA and Snowflake. See how this disruptive technology, in conjunction with Ignition, will elevate and simplify your journey to data insights.
49 min video
Sepasoft MES Orchestration for Digital Transformation
Manufacturing workflows are required to execute critical processes the right way – every time. The correct tasks must be carried out in the correct order, with the correct materials, approvals, quality checks, and accurate records, especially in regulated industries (e.g., 21 CFR Part 11). This objective, and true Digital Transformation, can only be accomplished with a platform that is integrated, agile, low-code, and feature-rich. Join us for a demonstration of our various MES offerings to showcase Sepasoft’s orchestrated workflow solution.
43 min video
What's That in the Sky? An Intro to Ignition in the Cloud
Is it a bird? A plane? No, it’s Ignition! There’s enough buzz around deploying Ignition in the cloud that you’d think it would give your system superpowers. But does a cloud deployment align with your organization’s grounded, realistic objectives? In this session, we’ll introduce cloud deployment concepts, discuss which architectures and scenarios benefit the most from cloud-based integration, and share real-world Ignition use cases.
46 min video
Main Keynote: Elevating Automation
Let's kick off the 2023 Ignition Community Conference on a high note. Join Inductive Automation's leadership team as they reflect on the past year, look toward the future, and give you a bird's-eye view of our growing company, ever-evolving industry, and thriving Ignition community. This is ICC, elevated!
98 min video
Build-A-Thon
The 2021 Build-a-Thon was the first ever to feature members of the Ignition community battling it out head-to-head. This year, we invited all of Inductive Automation’s Premier Integrators to apply for a chance to wear the Build-a-Thon blazer, and after three rounds of challenges, the final two integrators, DMC and Roeslein & Associates, will face off at the conference.
61 min video
Running Ignition in a Container Environment
Docker can be a powerful technology for rolling out large systems and setting up flexible development environments. In this session, you'll hear practical tips for running Ignition in a container environment from Inductive Automation's Docker expert.
45 min video
How Far We've Come - Ignition Across the Enterprise
Ignition was always great for solving problems and beloved by Operations. But could it scale? Could it be deployed across an enterprise? Could it stand up to scrutiny in the boardroom while execs are aligning on their digital strategy? Absolutely. Over the past several years, Brock Solutions has been deploying Ignition across enterprises, helping customers accelerate their digital transformations. But don't take it from Brock; hear straight from our customers how and why Ignition has become the real deal in their enterprise landscape.
42 min video
Performance Tips & Tricks for Optimizing Gateway Networks
Getting the most out of your Ignition gateway network is important to your system’s performance, especially for large implementations. In this session, you’ll get expert tips about how to optimize the performance of your gateway network for heavy workloads.
60 min video
Stone Brewing Successfully Implements Modern Batch System
In this session, Stone Brewing and Wunderlich-Malec Engineering will showcase the first successful implementation of Sepasoft’s Batch Procedure Module. Going into the project, Stone Brewing hoped to upgrade to a flexible and modern batch system that could handle complex recipes. With the support of Wunderlich-Malec, Stone Brewing easily configured the module to replicate existing processes. Attend this session to learn about Stone Brewing’s quick adoption of Batch Procedure and more project highlights.
47 min video
Ignition: The New Enterprise Connection Platform
The quest for greater productivity and reduced costs is driving market forces and investments into new projects trying to combat today’s challenges from the supply chain, labor, and inflation. Learn how Ignition has advanced from the “New SCADA Platform” to become the standard tool for OT-to-IT Enterprise Digital Transformation. The session will discuss and demonstrate how Ignition with MQTT/Sparkplug is the “Swiss Army knife” Digital Transformation platform from the edge to the cloud to achieve these goals. Get your Enterprise ready to Xperience and Xplore the serendipitous nature of your OT data!
48 min video
Modern Cloud Deployment Strategies
With systems getting larger and the need for flexibility increasing, effectively running Ignition in the cloud can be a powerful deployment strategy. In this session, Inductive Automation’s architecture experts will talk about how to utilize the cloud for modern deployment strategies.
48 min video
Drain The Data Lake - Model And Contextualize Your OT Data at the Edge
Join a panel of Ignition community experts who helped the State of Indiana launch a Digital Transformation program for manufacturers quickly and simply. Energy data, manufacturing output, and other OT data can be collected and modeled in-plant, and efficiently published into cloud infrastructure and unsupervised AI for actionable insights with a pre-built “I4.0 in a Box” solution.
48 min video
Integrator Panel: How Integration Has Changed & Where It's Going
This panel will bring together some of the Ignition community's most accomplished integrators to discuss how the industry has shifted over the past decade and what technologies and practices will be vital in the future. From IIoT-enabled hardware and cutting-edge security tools to eliminating paper from the plant floor, changes in the last 10 years have altered how integrators approach business and opened up new opportunities. But which areas still have room for refinement and innovation? Hear experienced professionals give their insight and answer your questions about the industry's past, present, and future.
45 min video
Unlocking Innovation & Delivering New Services Through Digital Transformation
Digital Transformation has accelerated as a result of the pandemic, as nearly every industry and every company has had to adapt to changing work conditions, market conditions, and environmental conditions. Those companies that are thriving in this new normal have uncovered new value in leveraging technology to accelerate innovation cycles and deliver entirely new products, services, and even business models. Imagine fully recovering from this pandemic better off than before it started, with entirely new revenue streams that fill revenue gaps and deliver even greater profitability through new channels. Learn how this can be done and hear the stories of companies who have succeeded.
45 min video
Industry Panel: Exploring Digital Transformation
It takes coordination to revamp processes or upgrade machinery, but it’s a far more complicated task to establish change all the way from the plant floor to the C-suite. While the necessary Digital Transformation of manual operations may look different across a variety of industries, the critical benefits of increased stability, flexibility, and security remain consistent. Hear from a panel of industry thought leaders and experts as they explore how enterprise-wide solutions have led their companies to a new level of growth and answer your questions about large-scale Digital Transformation.
64 min video
Technical Keynote & Developer Panel
This year, the co-creators of Ignition, Colby Clegg and Carl Gould, will be expanding the traditional developer panel into a new format. In this new Technical Keynote, Colby and Carl will cover the recent progress of Ignition and look at the roadmap for the near future of the platform. They will also get some help from a few Software Engineering Division all-stars to give further insight into specific aspects of the platform like security, advanced analytics, and design tools.
65 min video
Kanoa MES is a modern Smart Manufacturing solution designed in and for Ignition. Learn about the Kanoa MES Modules, Kanoa MES Database, and Kanoa APP Ignition project you'll use to get started with Kanoa MES. Check out a live demo of Kanoa Ops and Kanoa Quality to see how you can configure your MES in days and get insights into your manufacturing data with ease.
Transcript:
00:01
Jason: I'd like to start by thanking you all for coming today to hear what it is we're doing at Kanoa, and thank Inductive for creating Ignition, for creating this incredible platform that has allowed us all to do the amazing things that we're doing today. In 2018, we formed Kanoa to help companies implement Ignition-based MES solutions with a bent on project management, lean Six Sigma solutions, and change management to help them drive continuous improvement. We'd seen way too many projects fail, and not because of software, but because of people's failure to transition. And it seemed that most companies were so focused on the digital transformation part and the implementation of software that they really hadn't spent any time on the people side of making sure that these projects were successful. So for us, selling MES software and solutions is a really poor business model if companies do not get value out of the solutions that we're implementing. So we really do focus, we come in for companies here. If it's your first time implementing an MES solution, we're going to work with you. Once you've got a proven track record, you've rolled this one out, you've done your pilots, you've got production lines, and people are actually deriving value from it, then knock yourselves out, you can carry on, you can use this as much or as little as you want.
01:33
Jason: MES applications are not trivial, and there's a fair amount of customization that has to take place. What we found over the years was really the difficulty in keeping up with the constant pace of a release train. I mean, every few months or every five weeks, they keep changing, they keep adding to it, and we have gone through so many refactors, we all have. We went from... Well, we started on 7.5, then 7.9 to 8.1 was a refactor from Vision to Perspective, a huge one. Then we started changing expression tags to reference tags to take advantage of MQTT. And none of this is a bad thing. It's a constant evolution. But we have constantly had to keep reinventing ourselves to remain relevant. And because of that customization and the constant change, we found that some customers ended up potentially throwing away their solution and starting again every time with the big changes. So in 2020, we decided to take a fresh look at what an MES solution or platform should be, and from what we've learned over the years, keeping the good and replacing the bad. So for a while now, we've been touting MES for the masses. This is not a communist manifesto, but it's more of a guiding principle that really drives the products that we develop, in a sense, do I press or do you?
03:03
Jason: You've got amazing technology; it should be affordable. Because of this, we follow the same licensing model as Ignition. It works for them; nobody balks at the cost of Ignition. And the Ignition platform has been so flexible, in the sense that you could throw everything on a single server, every single one of your sites, your enterprises, your assets, run it up in the cloud, and you could have Edge devices pushing it up, but you could also distribute it. At the end of the day, the architecture that you're going to use is going to be driven by whatever constraints, whatever requirements of the applications you're building. In that vein, we said, let's follow what Ignition does, if you truly want to have an MES cloud server. And we think that's a great idea. Everything it has to connect to, ERP systems, is up in the cloud. Why not have connectivity up there and use MQTT and Edge devices to push it up? It should be accessible. And that's a fairly easy thing to do because we are building modules exclusively for Ignition, and their licensing of unlimited users, unlimited tags, has been a game changer since 2010 when I first started using it.
04:17
Jason: If you're going to drive continuous improvement, you want everybody inside your corporation, your company, to have access to the information that's going to allow them to drive continuous change. It should be intuitive. Moving to Perspective, we absolutely love this because we can really make the user interface intuitive. And quite frankly, if you look at Amazon or Google, any of those companies, we use them every day. Nobody has ever read the user manual to be able to buy something on there. We kind of feel the same. Yes, there are aspects of MES that might be a little bit more specific, but if you use the same interfaces that people are using on their phones, if you give it to them on the same devices, it can be on a computer, a tablet, or a phone, then we can make it intuitive. And if it's intuitive, people will use it. And it has to have value. Value is in turning data into information. So when we built Kanoa MES, we started from the ground up. We started with data. Data is the most essential part of it. So we built a third normal form database schema that stores the data, and it's open, and it's accessible.
05:34
Jason: So we build our APIs, our system functions that will interact with that database. They will give you the analysis, and it's lightning fast, and it's the smallest footprint that has data integrity and constraints. But you can also call those same stored procedures if you want to share it with Power BI, or Tableau, or an ERP system, or SSRS reports, it really doesn't matter. But you build from the data, you get value.
06:01
Jason: And then finally, it has to remain relevant. Keeping up with Ignition release train is like trying to board a train that's got no doors. You're never going to do it, and that may sound like a bad thing, but consider what Ignition gives us with this release train. They keep us relevant. They keep us on top with the newest technologies. They ensure that security matters are handled. They've given us MQTT, they've given us Kafka. I still don't know what Kafka is, but they've given it to us. And that's what Ignition does. So in this journey, we've got to keep abreast of the train. So whatever solution you're building here, it's got to be relevant, and it's got to be upgradable. So we, in our design, have ensured that our modules and our implementation have the lowest coupling with Ignition, because they're going to make changes anyway. We want you guys to be able to update with impunity and not fear that you're going to be held back by using our solution. Now, having said that, give us a chance to check under the hood of 8.3 before you upgrade. But with that, that's enough about words. I'm going to hand it over to Sam. He'll show you around. Thank you.
07:12
Sam: Yeah, thanks, Jason. So, really, all of those design principles that Jason was talking about have culminated in the Kanoa MES platform that we have built: configurable, flexible, intuitive MES software that is really meant to empower teams and drive continuous improvement. Because we are not doing MES just because it's fun, even though it is for some of us, but we are doing it to really improve processes and make plants run better. So when you get Kanoa MES, there are three components that you get every single time to make sure that you are starting from a strong foundation. You get the Kanoa MES database, that third normal form database schema that Jason was just talking about. That is where all of that core key MES data is stored, as well as all of your configurations. You get our Kanoa MES modules that plug into Ignition and give you almost 400 new system functions to go and call the data that you need from that database. And very importantly, you get the Kanoa APP Ignition project. This project is designed to give you a starting point with all of the configuration, analysis, and daily operation tools that you need to get started with an MES from day one, and to continue to expand, customize, and tweak that application using the power of Ignition to make sure it can fit your application.
08:30
Sam: There are three modules that we sell over at Kanoa, actually, I guess two that we sell, but three that we make. Kanoa Core comes with any other module that you get because that has a lot of those core functionalities that you're just going to need for any smart manufacturing system. Theming, languages, security, all of that is in our core level and is shared across all other modules in Kanoa. But really, the two things that we're here to look at today are Kanoa Ops and Kanoa Quality. Kanoa Ops is going to be your system for OEE, work order management, asset management, scheduling, and shifts, and all the analysis that comes along with that. And then Kanoa Quality is a pretty unique offering in that this is a form design and dispatching tool that also gives you the tools that you need to analyze the data that you got from those quality forms. All again designed within the Ignition application. So, I am going to try to do the fastest demo I have ever done in 15 minutes and try to give you all enough time for questions at the end. But I do plan to do a webinar within the next two weeks after ICC to do a more thorough one-hour demo.
09:32
Sam: So if you like what you see here, definitely come and keep track of our LinkedIn page and our website to get more information on that. But without any further ado, here is our Kanoa Ops system. So as I mentioned, we do have two modules, Kanoa Ops and Kanoa Quality. You can get them together, you can get them totally separate. I'm going to start with Ops and then do Quality second. So let's kind of go through the day in the life of a production operator and the way that you could be using our Kanoa Ops platform. We'll start with looking at our work orders, scheduling some work, running that work in production on a line, and then getting some of the data afterwards. Then we'll actually peek into the configuration as well. So if I'm going to go ahead and manage my work orders, I need some interface for actually downloading all of those production orders. This can be downloaded from an ERP. They could be made right here in Kanoa MES. You are just picking the work order name the material that you need to run and how much of it you need to produce.
10:31
Sam: Once you have all that, you need to actually schedule that work on a line. So we have our operations schedule here, where you can actually see we're taking advantage of the BIJC calendar component that we do include with any Kanoa purchase. And this lets us do all sorts of things like create non-production events with certain recurring rules and things like that. Really fantastic tool to help manage all of these schedules. We have our normal production schedule here, but I can also do things like pop open our work order list, drag and drop a new work order into our timeline. The system's going to go, see what material you're running, see the appropriate rate that it runs on that line, and schedule it for the proper amount of time, which I'm then going to delete before it tries to run two work orders at once. The other thing that we have in here is our shift scheduling. So our shift scheduling is really cool. What it gives you the ability to do is to define shifts at any level in the hierarchy, and an asset will look for its closest parent with a shift. So if your whole plant runs on a four shift complex rotation pattern, except for the packaging area that runs in a different shift schedule, you can manage that very easily within Kanoa.
11:38
Sam: So we have our work orders, we've scheduled that work on a line, we have all of our shift data, we're going to track our data within the context of those shifts. Now it's time to actually open up one of these lines and get some work done. So from here, you can see our main enterprise overview page. You'll notice a couple of things here. So we're kind of following an ISA-95 style hierarchy with our enterprises at the top, a number of sites with areas, and then OEE enabled assets underneath them. We like to say we're ISA 95 inspired but not restrictive. So if you wanted to have, say, a business unit layer and organize all of your sites into business units between your enterprise and your site, go for it. We totally enable all of that. We want to have a site in an enterprise, but besides that, we're really flexible. So I can click into my production area here and get a summary of how all of my production lines in this area are currently running. We can see we got a little bit of an issue over here on Pac Line 1, and our other lines are running at various degrees.
12:35
Sam: I can go ahead and click into Bake Line 2 here and get to what we call our asset Operations screen. The idea of this screen is that for any operator responsible for this piece of equipment, everything they need to run it is right here within this interface. I can see my current production modes and states. I can go into my run control and manually override my mode to say we need to go into a changeover.
13:00
Sam: I could manually select another work order or another product that I need to run from here. I can also go ahead and check things like the schedule right here from this interface. And then one of the very common things is, of course, to go and check on all of my downtimes. So I'm going to go and say, what were all of my downtime codes over the last seven days? And then from an interface like this, I can always double-click into one. I can recode things, I can add comments, I can add, delete, or change downtimes that we have recorded. Again, we like to collect all this data automatically and perfectly whenever we can, but there are plenty of times you need to do some manual work afterward too.
13:38
Sam: One other report that I'll show really quickly is our run review. This is really critical in letting you see all of those production events that have gone through a certain asset. So what I'm pulling up here is we can see I've done three production runs on this line. It's breaking them up by shift and I'm getting certain metrics like their total runtime minutes and their OEE downtime minutes, all here from this screen. We also have some more complex analyses. I'll pull up our downtime report as one example, taking advantage of some of the ApexCharts here. Thanks again, Travis and the Ignition team, for helping prepare all that. We can see all of our downtime by category, by state, and by reason, broken out and seeing how it distributes by shift. I can do a stacked bar chart of my total downtime by reason, by day, and down here at the bottom, I can put it all into a table with a handy little export-to-Excel button. 'Cause I can make you the greatest dashboard in the world. And what's the first thing that you're going to ask me? How do I download it to Excel? I'll take it.
14:44
Sam: So again, in the fastest demo ever, I also want to quickly show you some of the configuration behind this, because one of the coolest things about Kanoa is that everything I'm showing you here, you just get in that starter project that we are going to give you, including all of these configuration tools that you need to get you a significant amount of the way into your MES implementation. So you can see over here, I have my asset hierarchy. I can drill into a site and an area. I can click into a specific line and see I have OEE enabled. If something's OEE enabled, we drop a UDT into the Ignition Designer, and that's where you're wiring up your points. Another interesting thing to note is that everything I've shown you here runs off of three tags per piece of equipment. Give me your infeed count, your outfeed count, and your state. Everything else is configured over here in the Kanoa app. And granted, I know it can get more complicated than that. There's a lot of ways that you can make it more complicated than that.
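The three signals Sam names per asset (infeed count, outfeed count, state-derived runtime) are exactly the inputs the classic three-factor OEE calculation needs. As a rough sketch of the standard formula — not Kanoa's actual implementation, and with illustrative parameter names — it might look like:

```python
def oee(run_minutes, planned_minutes, infeed, outfeed, ideal_rate):
    """Standard three-factor OEE: availability x performance x quality.

    run_minutes     -- minutes the asset was actually running (from the state tag)
    planned_minutes -- scheduled production time
    infeed          -- total units started (infeed count)
    outfeed         -- good units produced (outfeed count)
    ideal_rate      -- ideal units per minute for this material on this line
    """
    availability = run_minutes / planned_minutes if planned_minutes else 0.0
    performance = infeed / (run_minutes * ideal_rate) if run_minutes and ideal_rate else 0.0
    quality = outfeed / infeed if infeed else 0.0
    return availability * performance * quality
```

For example, 60 run minutes out of 80 planned (availability 0.75), 540 units in at an ideal 10/min (performance 0.9), and 486 good units out (quality 0.9) gives an OEE of about 0.61.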
15:41
Sam: But you can get all of this with just three pieces of data per piece of equipment. We have things like modes and states, where I'm designating all of the modes that are appropriate for this, and our state list where I am associating specific states with an asset, giving it a PLC code. That's how we're tracking all of your downtime. But it's really great that all of this is right here, configurable in the app with handy, intuitive tools. I can come in here, we can drag and drop this mix line into Jacksonville Juices if you want. It'll let us do all of that on the fly. So drag and drop assets, rename things. All of your data goes along with you. It happens all live. So that is a very quick preview of Kanoa Ops. Let's totally switch gears here and talk a little bit more about Kanoa Quality. So Kanoa Quality is all about paper on glass, right? You're running around with a bunch of check sheets today. You need to move that into a digital system to not only just get that paper off the shop floor, make that data more real-time, but also as we're moving these systems into digital platforms, we can establish more accountability.
16:46
Sam: We have this sense of a state of each of your check sheets. We're tracking the state of these as they go through. So check sheets can become overdue or missed, and we can flag operations and management teams when the sheets aren't getting done the way they need to get done. And that starts with our main overview schedule. Here you can see I have one approved test in my queue. I have four missed tests. Let's go and just do one of those missed tests, a little bit late. I'll double-click into this. I can even make it a little bit bigger because again, we're just using Perspective for all of this. An important point I'll mention is that all of this is built in Perspective, and none of it is using custom components. We are just using regular Perspective components that we are providing to you in that open starter project. So we're going to take advantage of Ignition's inheritance features. You're going to make new projects that inherit our projects, where you can then override screens and make your own screens, all with our examples that you can build from. So I'm going to come in here, and we're going to do a couple of checks to make sure that we can switch over this packaging.
17:49
Sam: Our area is clear of debris. Our machine is shut off. I'm going to take out my rye bread packaging and it's going to weigh 566 pounds. I'm going to put in our next wheat bread packaging. Notice this control limit up here as I put in something that's 625 pounds, and that gets flagged as orange in our little progress bar and in our control limits. I do a final checklist to say yes, my tooling is out of there and yes, my machine is turned back on. I do a final check to make sure that all of this data is the way that I want it and I go ahead and submit. So that was a very manual test. It doesn't always need to be that way. We can get data automatically from PLCs. We can get data and do run quality checks that don't have any manual data. And it's more like an event-based historian. The advantage of doing that is that we get all of that data into Kanoa Quality and then we can run our analysis on it. So I'm going to come into something like our fermentation temperature check, where I believe every 20 minutes this goes and collects three points out of our simulator and spits it back out here into this report.
18:52
Sam: Notice how quick that just happened. Right? Let's actually do for all of the data for this month in September, grab those three data points collected every 20 minutes, go get the data. It's done. That's the power of this database that we have in the background that's storing all this information. I can click into one of these zone temperatures, and I can chart that. This is where all of our SPC comes in. I can pick our Nelson rules. I can apply all of those. I can see my rule two violations, my threes. I can put it all in a histogram too.
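The Nelson rules Sam applies here are standard SPC run tests. Rule 2, for example, flags nine consecutive points falling on the same side of the mean. A minimal sketch of that one check — illustrative only, not Kanoa's code:

```python
def nelson_rule2(values, mean, run=9):
    """Return the indices at which a run of `run` consecutive points
    on the same side of the mean completes (Nelson rule 2)."""
    violations = []
    streak = 0
    last_side = 0
    for i, v in enumerate(values):
        side = 1 if v > mean else (-1 if v < mean else 0)
        if side != 0 and side == last_side:
            streak += 1
        else:
            streak = 1 if side != 0 else 0
        last_side = side
        if streak >= run:
            violations.append(i)
    return violations
```

Nine zone temperatures all above the center line would trip the rule at the ninth point; readings alternating around the mean never would.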
19:19
Sam: Now, like Ops, one of the most powerful things about this is that you don't need to go into the Ignition Designer to do almost anything that I've shown you here. The only thing that you would need to do is make certain tags available to the quality system so that you can just tie them in and get automated data. But the rest of this form design is done here in the app. If I come into our Kanoa Quality configuration and look at our check sheets, I can take a look at that packaging changeover that we were doing earlier.
19:48
Sam: We can see things like if it's enabled, if it requires a sign-off, if it's only appropriate for certain assets in my hierarchy. I can go into the checks themselves. And here is my machine shutoff check, where you can see it's a string where I can add in specific instructions for my operators, and where we can create a pick list of what shows up for them to be able to enter. The whole idea here is that your quality managers are the ones making these forms, and they're not necessarily the people that you want in your Ignition projects every day. They need a different interface to go in, add more instructions, and tweak checks as things change. And that is why we give them this interface here. In addition to that SPC data and the configuration here, we did also talk about the efficacy of the checks as well. So I can also do my check summary, and by check sheet I can see how many are getting missed and how many are getting approved. I could put this on a shift heat map to see if there are certain shifts that are not doing the tests they need to on time, again driving that continuous improvement and really trying to drive accountability around a lot of this data.
20:53
Sam: So I did it. That was a very quick demo. The one other thing that I will show really quick, 'cause I actually even have a little bit of extra time, is I didn't really get to talk too much about some of those Kanoa core functions that you get within every application. And there's three main things that you really get. One is over here and that we do have multi-language support. We are just using the embedded Ignition translation engines that you have in there. So we do have a couple of languages out of the box, though I've heard our Korean is terrible. We also have all of our themes in here. Jason would not let me do this presentation in grape, despite how bad I wanted to. These are also totally configurable. So you are totally welcome to go ahead and brand this for what you need for your specific company. And I will shift this back to blue before I go and show you the other main thing that you get out of the core modules, which is our security. So we're still using Ignition for all of your authentication, but we do add an extra layer of security here in Kanoa, just because the roles and permissions that you need in MES are a little bit different.
21:56
Sam: But we're doing it using things that you're all used to. We have our individual users that you put into groups. You give certain permissions to people in those groups, and you could do all of this by asset too. So I could be a manager for the packaging area, but just an operator somewhere else if I want to. So there's a lot of other exciting things that we have built or are building in the Kanoa Ops and quality platforms. We do have a mobile solution for Kanoa Quality if you wanted to run all of those checks on your phone with a slightly different interface. We do have a new dashboard editor as well as we are making new widgets to give people the capability to design their own MES dashboards. And we are also introducing lot tracking as a free upgrade in Kanoa Ops very, very soon. So we just need to upgrade some of the UX for it. The bones of it are all there and working, but it's really exciting to see that we can now have lot tracking and track traceability within our OEE solution, so that all of our counts are going to match up and all of those production orders and the tracking is all synced with a single source of truth.
23:02
Sam: So, again, that was a very fast demo. Keep an eye on our website and our LinkedIn if you wanna get more information on a webinar coming up soon. We do have a booth upstairs, but now is a great time for questions if anybody has anything they wanna ask us.
23:17
Audience Member 1: Can you talk about ERP integration?
23:20
Sam: Can we talk... The question is... Sorry, I'm gonna repeat it just 'cause I know there are some mics going around in the live stream. The question is, can we talk more about ERP integration? So, yes, we do ERP integration into these systems very frequently. Two of the most common points would be downloading all of those work orders that you have from an ERP into your MES. We can download them into the work order table and then have you schedule them manually, or we can fully schedule all of that work as well. The other one would be around material, something I didn't go through in the demo today, where we can download all of the materials that you run on your lines and then associate specific materials to specific assets with the rates that they are expected to run at. Jason, you wanna talk more about that?
24:01
Jason: Yeah. Just to add, in terms of the interfaces, we can use all the tools that Ignition provides. So we can use the WebDev module for web services, you can use the Sepasoft one. If you wanna use the SAP business connector, it's really entirely up to you. Generally, we will do a RESTful API and then just have the ERP system pushing production orders down. If they push down a production order that's got an item, and the item doesn't exist in our system, we will create it. If they then wanna put information about a start and end date for an item or an asset, we will create the association that this item can run on this asset. We'll give it default information. Every ERP integration that we've done is different. There are different business rules, so you've got to have that flexibility. But certainly, yes, web services are favored. We've done flat files. Hate doing flat files. Done middleware tables as well. Not really happy with those either. It's always funny when you have these digital transformation projects: they talk about everything they're going to do, and then they say, yes, you can open this flat file and get the data out of it.
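Jason's "create the item if it doesn't exist" rule for incoming production orders can be sketched as a small upsert handler. The store layout and field names below are hypothetical — they only show the shape of the logic, not Kanoa's schema:

```python
def handle_production_order(store, order):
    """Process one production order pushed down from an ERP.

    store -- a dict standing in for the MES database (hypothetical layout)
    order -- a dict with at least "id" and "item"; "asset" is optional
    """
    item_id = order["item"]
    if item_id not in store["items"]:
        # Item is unknown to the MES: create it with default information
        store["items"][item_id] = {"name": item_id, "default_rate": None}
    asset = order.get("asset")
    if asset:
        # Record the association that this item can run on this asset
        store["item_assets"].setdefault(item_id, set()).add(asset)
    store["work_orders"][order["id"]] = order
    return store
```

In practice this handler would sit behind a REST endpoint (e.g. via the WebDev module) that the ERP posts orders to; the business rules around dates and rates differ per integration, as Jason notes.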
25:00
Sam: Great question. Any other questions? Yeah, right there.
25:06
Audience Member 2: So does the Kanoa Quality Module provide mechanisms to have a, say, PDF or image or something like that that is a helping guide in addition to the instructions and text?
25:16
Jason: Yes.
25:20
Sam: Yeah, sure.
25:20
Jason: Yeah. And again, with everything that we've shown you here, we made a conscious choice. These are just Ignition Perspective components. We've seen too many times where you'll get a really complex component which doesn't allow for customization. So you can look at our views in here. If you're going to start a production order or take a quality check, but you wanna have the operator do an additional step, you can go in, you can add it, you can see how we're doing it in the background. Perspective's got the PDF viewer, it's got the iFrame. So again, every company you go into is saying, sure, it'd be great to give them work instructions. Where do you store them? Is it in SharePoint? Is it on a network drive? How do you want to do it? We also add support for images. So particularly on the phone, we've now got it where people can take a quality check and it's saying, take a picture of a weld. From there we can use the phone, and it will capture it. We store it as a blob in a database or we can push it out. All of that stuff is the customization.
26:13
Jason: What we're giving you here is not going to be 100% solution. It never is, but it gives you 80% of the way there. It's a fully functioning application. It's on you guys now to extend it as you see fit. And as you're doing that, if you find that there's stuff that you want, you see that you need, you can talk to us. Absolutely. If it's out of left field, we'll say, that's all on you. But if we look at it and say, that's actually really good for the product, that makes sense. Absolutely. The more we can get into the product, the better it is for us and for you. Because ultimately, what we're focusing on here is building a product that we can support for the long term. We have documentation, we have training, and we're going to make sure it's a supported product so that you guys don't have to.
26:58
Sam: Yeah, great question. I see one in the back over there. Yeah.
27:02
Audience Member 3: Is there like an API library for scheduling something like automatic work order stops and starts, or doing like, basically, you know, automated sample collection on the machine?
27:11
Jason: Yes.
27:11
Sam: Yeah, so the question was. Sorry, the question was about the API hooks that you have and kind of how you can build your own things with the API. Jason, yes.
27:18
Jason: Yes, I said we got 380 functions there, so absolutely, you can build your solutions in there. Everything that we do through here is going to be calling one of those system functions. So we can be called from a tag. You could stick it on the end of a web service call if you wanna do it from another system. However, that data is in there. But yeah, everything's through an API.
27:37
Sam: Yeah, but for example, for that downtime report that we do, there is a system.kanoa.events.getDowntimeEvents function for this asset, with this start date and this end date. And yes, there's a lot of other variables you can put in there. But yeah, we're giving you 400 system functions like that to put in and retrieve data from that database.
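Once an API call like the getDowntimeEvents function Sam paraphrases returns a list of events, a downtime-by-reason report is just an aggregation over them. A minimal sketch, with a hypothetical event schema (real Kanoa events would carry more fields):

```python
from collections import defaultdict

def summarize_downtime(events):
    """Roll up downtime events into total minutes per reason.

    events -- list of dicts with "reason" and "minutes" keys
              (hypothetical shape for an API result set)
    """
    totals = defaultdict(float)
    for event in events:
        totals[event["reason"]] += event["minutes"]
    return dict(totals)
```

This is essentially what feeds a stacked-bar or Pareto chart of downtime by reason in the report Sam demoed.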
27:53
Jason: Let's have Dan. And we were promised a mic runner. Where's our mic runner? All right. Okay.
28:00
Audience Member 4: Hey, Jason, is this available as a trial?
28:03
Jason: Yes. Yeah, I mean it's just modules, so it works exactly the same as Ignition, and you can do a trial license. We actually think we do one better. One of the things here is that there's work and effort involved in getting the modules and getting it up and running. We'll just give you a container. So we've got a bunch of Linux cloud containers; we can have eight Docker containers all running at the same time. So we'll actually get it where it's configured, it's set up for you. You can get in there and design it, you can play with it and try it out. And if you do want an extended period, we can give you an extended-period trial license.
28:38
Sam: Yep.
28:39
Audience Member 4: Please.
28:41
Sam: But definitely, for the integrators and the people that just kind of wanna try this stuff out, those containers are a really fast way to get onboarded. You meet with us, I show you some of the basics and setup, we go through a basic configuration, and then you usually have it for two weeks to show it to your teams and start to play around and see if it's going to be the right thing for you. So if you are interested in something like that, again, you can reach out to us — booth, website, however you want to — and we're happy to schedule some time to get you connected with one of those. Any other questions?
29:13
Audience Member 5: I saw that the quality module looked really good. We do a lot of that with our shop orders. So would I need both modules to essentially execute an order that collects a lot of data?
29:24
Jason: Yeah, actually, I can take this one. So we built them separately because there are a lot of people who already have their own MES solution and they want a quality one. One of the things that we can do is everything through our APIs... Ethan, I didn't recognize you there. Nice to see you, man. So you can create a view table of assets. You could do it for work orders. You could pump it in if you didn't wanna use both at the same time here. If you're going to use Quality in here but you wanna configure assets and stuff, yeah, we can absolutely figure that out.
29:54
Sam: Yeah, in the front.
29:55
Audience Member 6: So one of the things you guys started out with was providing a full solution. Not that this isn't. But to go from a great piece of software to return on investment. How are you guys tackling that?
30:09
Sam: Yeah, so we do think that... So I think this is our last question; I just got flagged. But it's actually a really great one, because, as you said, that was a big part of our philosophy: we're not doing this for fun, as much fun as we find it, oddly. We wanna drive continuous improvement. The software only gets you so far. A lot of it is then around adoption and change management and actually intentionally doing continuous improvement. So a lot of what we're trying to provide in this software is something that's intuitive, that with minimal training people can go in and actually be using, which we know is a huge adoption hurdle for a lot of these systems. That's why we really wanted to embrace things like language support, which I also think is a hugely important hurdle that we wanna be able to cross over. But then really a lot of it is also, whether it be through Kanoa or the teams managing the projects or a trusted integrator or consultant, really working with that end user to talk about their continuous improvement goals and how they're going to achieve them, and having an intentional plan to do so.
31:08
Jason: Yeah. And to add to that, still the same last question. It's the nature of the beast with MES. Every implementation is going to have different challenges. So you can go into a company where they've really got their stuff together and they don't need any of it. They've got it figured out. But you've got the other companies where the connections to PLCs and the manual lines are a real part of the data collection, which is going to be a challenge. We go in, we always talk about an engineering study, but it's a collection of meetings over the first week in there where, first off, we'll do education. So we're PMPs, we're Lean Six Sigma certified. We've been doing this for a really long time. We know the pitfalls and we know the risks of MES projects. We'll start off with half a day of education with all the stakeholders, from operations, maintenance, quality, IT, finance, and planning, to basically discuss. And we've done this to various degrees of success, in that some companies have actually, after that training, just stopped, because they said, we realized we weren't ready as an organization, and it's a waste of time.
32:17
Jason: You have the other ones who say, "I hear what you're saying, Jason, just write the software." It's like, seriously. So we'll provide whatever's needed there. We'll do change management, we will help. We say, you need a project charter. You certainly need a vision for what it is you're doing. You need a cross-functional team. You need stakeholder agreement and buy-in. And let's figure out who's being affected by this one. Let's create a process map of what your existing systems are, because we're going to be deprecating some of those in here. By the very nature of that act, that's where you start to actually uncover areas for continuous improvement, just in implementing.
32:53
Sam: That's a great question to end on. Thank you all so much.
32:55
Jason: Thank you.


Unified Namespaces (UNS) have the power to streamline OT data by breaking through communication barriers between devices and applications. By leveraging the Ignition platform and MQTT, UNS can open the door to transformative potential for operational and enterprise applications. But what even is a UNS? Join Cirrus Link as they leverage Ignition and MQTT to implement a UNS, explore its transformative potential for applications, and share details about the core functionalities of UNS. By the end of the session you'll be equipped with the knowledge to harness the power of unified data and unlock new possibilities for your industrial operations.
Transcript:
00:04
Susan Shamgar: Hello, welcome to today's session, "Demystifying the Unified Namespace with Ignition." My name is Susan Shamgar. I'm a member of the technical writing team here at Inductive Automation, and I'll be your moderator for today. To start things off, I'd like to introduce our speakers. Arlen Nipper has been designing embedded computer hardware, software, and SCADA infrastructure solutions for 47 years. He was one of the early architects of pervasive computing and the Internet of Things, and in 1998 co-invented MQTT, a publish-subscribe network protocol that has become the dominant messaging standard in IoT, designed to optimize and make use of data for both OT and IT. Throughout his career, he's been an industry leader advancing SCADA technology. Nathan Davenport has more than 18 years of experience in the software industry. He graduated from Portland State University with a Bachelor of Science in Computer Engineering and worked for the first 13 years of his career at Microsoft. In 2019, he joined Cirrus Link Solutions as a Senior Software Engineer to focus on IIoT and the challenges and opportunities it presents. In 2023, he took on the role of Director of Sales Engineering at Cirrus Link as a technical leader and strategist for the company's sales engineering team. Please help me welcome Arlen and Nathan.
01:29
Arlen Nipper: Thank you very much. Appreciate it. Hello, everybody. Welcome to ICC. This will be my ninth presentation here at ICC. And first of all, thank Inductive Automation for all the cool stuff that you guys do. It's awesome. I mean, this show, we get to meet all of our customers, talk to them about new ideas. And a lot of what we've got in the Cirrus Link product line actually came from talking to customers here at ICC. So, really enjoy it. So today we're going to be talking about demystifying the Unified Namespace with Ignition. So, before that, real quick introduction to myself, Arlen Nipper, and Nathan Davenport. Nathan will be out here in a little bit helping me with the demo. So, Cirrus Link, we were founded in 2012. Right after that, we became a Strategic Partner with Inductive Automation. So, that has been great. So, really, when you think about it, when we started Cirrus Link, we needed a platform. We had a good idea. We wanted to leverage MQTT for industrial computing. All we needed was a platform. So, with that, really what we do is our development team works on the Ignition platform, developing MQTT technology modules to run on the Ignition platform.
03:03
Arlen Nipper: Now with that, we have a few standalone products that we do. One is the Chariot Sparkplug-aware MQTT broker, and the other is the new product we're going to introduce today, a free MQTT client. So, we work with this all the time, and I'll go through this in a little more detail, but we finally have a first-class MQTT client that we can all use as these MQTT networks get bigger and more complex going forward. Also, this is the 25th anniversary of MQTT being published into open source. So, I was thinking about that, I was...
04:00
Arlen Nipper: Yeah, it's pretty wild when I think about it. I mean, Andy and I were trying to solve a problem for Phillips 66. There was no cloud. There was no security; security was security by obscurity. But as I was thinking through this now, 25 years later: if Andy and I had built a kill switch into MQTT, I figure right now there's anywhere from three to five billion clients running as we're talking. So, if we turned it off, that means a billion Facebook Messenger applications would quit working, car telemetry would quit working.
04:44
Arlen Nipper: You wouldn't be able to talk to your Alexa to turn your lights on and off. You wouldn't be able to open and close your Genie garage door. So, it's pretty incredible when you think about an open technology, very understandable, very simple, and how it scaled to all of the different applications that it's in today. So, we... Kind of who's using the MQTT Sparkplug technology today. So, at this point, we, Cirrus Link, have over 2,000 discrete companies that are using MQTT Sparkplug. And if I were to put this slide up six years ago, that would have probably been 60%, 70% oil and gas. That's where we came from, that's where our customer base was at the time. But if you'll notice, where the growth is, is in manufacturing, and manufacturing really, with SCADA in oil and gas, where we have remote telemetry, we almost had to have a UNS. We had to name at least our stations and our polling where we could have everything at least on a network when we're polling down that network to have it organized in somewhat of a hierarchy. But in manufacturing, basically, it was the wild west. I mean, they were everywhere with that. And really, I think that's one of the driving factors around the whole notion of the UNS and the popularity that it's gaining today.
06:22
Arlen Nipper: So, with MQTT as an enabling technology, it's interesting that something so simple, customers start using it immediately. So, I demoed, I think in 2015, the first MQTT module, and immediately customers started using it. But more interesting to me is what they were doing with it and how they could innovate with it. And so as we proceeded, we went from, again, the first demo that we ever did, where we kind of proposed the notion of, "What's the problem?" Well, the problem is that we have devices connected to applications, i.e., protocols are evil. If I have a protocol, I have to have a driver. The driver talks to the device, brings back the data. Well, the data sits within that application. Now I want to use it somewhere else, so I have to write something else to get it out of that polling engine into something else. So, that was the problem. And what we said is that really what we were proposing, at least in year one of this journey, was, hey, we should look at connecting devices to infrastructure, not to applications.
07:42
Arlen Nipper: And with that, we could start unlocking some of the stranded data that was out in the field. Because, you know, in 2015, we figured that 80% to 90% of the valuable data that the company could use was probably being left stranded out in the field. Now, as we progressed the next year, we said, well, let's put some tenets out there. The first tenet is let's decouple and connect to infrastructure.
08:10
Arlen Nipper: And the second tenet was, well, if this is going to expand, if we really are going to explode to the Industrial Internet of Things, we'd better be able to show that this is a superior OT system to begin with. Because if we couldn't replace conventional legacy poll-response systems, there would be no adoption. So, as we moved forward, we had also just started, in 2016, the Sparkplug specification, where we could start saying, okay, well, MQTT is pretty cool. You can publish anything you want on any topic. It's got a problem, though: you can publish anything you want on any topic. So, Sparkplug came out. We started trying to define what would be the best way to use MQTT in industrial automation architectures. So, by 2017 ICC, I think we had all the ingredients that we needed for the emergence, if you will, of a UNS.
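The structure Sparkplug imposes on "publish anything on any topic" starts with a fixed topic namespace. Per the Sparkplug B specification, topics follow spBv1.0/&lt;group_id&gt;/&lt;message_type&gt;/&lt;edge_node_id&gt;[/&lt;device_id&gt;], which a tiny helper can illustrate:

```python
def sparkplug_topic(group_id, message_type, edge_node_id, device_id=None):
    """Build a Sparkplug B topic string.

    The "spBv1.0" prefix and segment order come from the Sparkplug
    specification; message_type is one of the defined verbs such as
    NBIRTH, NDEATH, NDATA, DBIRTH, DDEATH, or DDATA.
    """
    parts = ["spBv1.0", group_id, message_type, edge_node_id]
    if device_id:
        parts.append(device_id)  # device-level messages add a fifth segment
    return "/".join(parts)
```

Because every publisher follows the same layout (and announces its metrics in a BIRTH message), consuming applications can discover and organize data without per-device configuration — the ingredient Arlen describes for a UNS.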
09:21
Arlen Nipper: Unified Namespace. So, we have the three tenets now. Connect your devices to infrastructure, not to applications. Be able to demonstrate a superior OT solution, and with that, be able to provide a single source of truth. And we came out with this tenet, a single source of truth. And now it's being used everywhere. Oh, we're going to have a single source of truth. We're going to have the single source of truth from the edge, and that's the only way it's going to expand. And here we said the transformation to IIoT won't be IT down to OT. It will be enabling OT to put together an infrastructure that's ready for cloud enablement. And I think that's where we're going. That was kind of the evolution, if you will, of UNS. So, we'll go back to the problem that UNS solves. Well, the UNS solves the problem of trying to get rid of data silos and stranded data. Now, if we go even deeper into the origin of the problem: we had this automation pyramid — call it the Purdue model, you can call it the ISA-95 model — where basically you started with your sensors and your PLCs, you moved up to SCADA, and from SCADA, you moved up to your manufacturing operations.
10:55
Arlen Nipper: Get the data up to there. Well, we kind of stopped at level three for a little bit, and, well, we need to get it up to the enterprise, so we must define a DMZ level. And then we can finally get it to the business networks, and they can start seeing that data. And ultimately now we'd like to get it to the cloud. Now, the problem with this, as you can imagine, and we'll start at the SCADA level, is that first, from an operational standpoint, we're only getting the data that operations need. Well, the business looks at it and goes, well, wait a minute, you're not pulling in this other value, say, from a tank level, or this other value of how long a pump's been running. The pump is there, I'm controlling it, but you're not telling me how long it's running. And operations go, well, we like the way we're polling. We told you when we put in the SCADA system that we would change our polling algorithm.
11:57
Arlen Nipper: But now that you want to, we really don't wanna do that. And then you go up to the next level and you go, well, we finally got our data up to this level, now we need to go down and change some other applications. And now we've got it up to the next level. So, as you can imagine, as Travis said this morning, "Dream it, do it." But with this model, it's like, "Dream it, forget about it." It was never going to happen. And then we finally get to the cloud level, and I think we started seeing this about five or six years ago.
12:34
Arlen Nipper: This notion of, well, I've got all my process variables down here. If I wanna get them up through this pyramid, I'll finally get them up. And then I'm going to take all the process variables that I've got and put them in a data lake. And this is going to be the most beautiful data lake, and we're going to put all of our data in there, and everything will be great. And over time we'll be able to use that data and take advantage of it. And then suddenly we figure out after a couple of years that it's turned into a data swamp and nobody uses it. And we have terabytes of data landing in data swamps, because think about it: I've got a discrete register value, I wanna go look at it, now I've got to do a query of where did it come from? I've got to do another query of, well, what are the engineering units? I've got to do another query of what the deadband was on that measurement? And by the time they try to get this data, without context, without a UNS, a data lake literally turns into a data swamp.
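The contrast Arlen is drawing can be sketched in a few lines of Python; the register numbers, paths, and lookup tables below are entirely hypothetical, purely to illustrate why a context-free value forces extra queries while a contextualized record does not.

```python
# Hypothetical sketch: a bare register value in a data lake needs extra
# queries to be usable, while a UNS-style record is self-describing.

# A raw value as it might land in a data lake: no context attached.
raw_row = {"register": 40012, "value": 73.4}

# To use it, you'd need separate lookups (simulated here as dicts):
source_lookup = {40012: "Site1/Booster1/PLC1/TankLevel"}
units_lookup = {"Site1/Booster1/PLC1/TankLevel": "ft"}
deadband_lookup = {"Site1/Booster1/PLC1/TankLevel": 0.5}

path = source_lookup[raw_row["register"]]   # query 1: where did it come from?
units = units_lookup[path]                  # query 2: what are the units?
deadband = deadband_lookup[path]            # query 3: what was the deadband?

# The same measurement carried with its context from the edge:
contextual_record = {
    "path": "Site1/Booster1/PLC1/TankLevel",
    "value": 73.4,
    "units": "ft",
    "deadband": 0.5,
}
assert contextual_record["units"] == units  # no extra queries needed
```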
13:51
Arlen Nipper: So, we get to the definition, if you will, of a Unified Namespace. And really, it's all about taking your information and getting it organized in a way that you can use it within your company, from the very edge where your single source of truth is, up through all of your business systems and your manufacturing systems, so that ultimately, when you get to the enterprise, everything is organized in a way that gives you contextual data. And of course, we're doing a lot of that with Ignition. Originally it was just the tag definitions: we could organize the tags, we could give them properties, we could give them engineering high, engineering low, dampening, or deadbands.
14:40
Arlen Nipper: Then we came along with UDTs, so we could take user data types and define a model, and that model consisted of the measurements, and the measurements had the contextual information along with them. So, imagine if we could have that context all the way at the edge, where the engineer defined the equipment, and get it all the way up to the enterprise, all the way through the SCADA. Then we literally would have a UNS. So, as I think about it, really kind of how this all evolved, if you will, is that with protocols we got registers and we could define them, but they were always kind of standalone. When MQTT came along, by its very nature we had a topic and a payload, and the topic, again by its very nature, became hierarchical. You could use slashes, and so you could have... I still remember Phillips 66, 1999. We probably had the first properly named UNS, if you will, if you go by the ISA-95 hierarchy. We had a pipeline, you know, Pipeline 1. In Pipeline 1, we had booster stations, Booster Station 1, and in booster stations we had PLCs.
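As a rough illustration (this is not Ignition's actual UDT machinery, and all names here are made up), a UDT-style model is essentially a set of measurements bundled with the engineering context the edge engineer defined:

```python
from dataclasses import dataclass, field

@dataclass
class Measurement:
    """One metric plus the context defined with it at the edge."""
    name: str
    value: float
    eng_units: str
    eng_high: float
    eng_low: float
    deadband: float = 0.0

@dataclass
class UDTInstance:
    """A UDT-style model grouping measurements under one piece of equipment."""
    equipment: str
    measurements: dict = field(default_factory=dict)

    def add(self, m: Measurement) -> None:
        self.measurements[m.name] = m

# Instantiate the model for a hypothetical pump; the context travels with it.
pump = UDTInstance("Pipeline1/Booster1/Pump1")
pump.add(Measurement("RunHours", 1234.5, "h", 100000.0, 0.0))
pump.add(Measurement("Flow", 42.0, "bbl/min", 60.0, 0.0, deadband=0.25))
print(pump.measurements["Flow"].eng_units)  # units arrive with the value
```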
16:08
Arlen Nipper: And then under the PLCs we had flow computers. So, we had this hierarchy by the very nature of how MQTT worked and how we were getting that into an organization. So, from Walker Reynolds, to David Schultz, to Matthew Paris, everybody's had really good ideas now on leveraging what you can do with MQTT, but doing it in a way that we can have that UNS go all the way up to the enterprise level. So, here's my Phillips 66, 1999 UNS. Now, one thing that kind of comes up in conversations when I'm on with a customer is, "Well, Arlen, we've got our broker here. So, that's our UNS database." And remember that an MQTT broker is just for routing messages through your infrastructure. It is not where all of your historical data is. So, I just wanna point out here that if you put all your tags into an MQTT broker and you set the retain flag on all the tags, all you're going to get is last known good. So, within a UNS architecture, we still have to think about where our UNS database is going to be in that whole architecture.
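The retain-flag point can be sketched like this. It's a toy simulation, not a real broker, but it shows why retained messages only ever give you last known good, while a historian keeps every sample:

```python
# Minimal sketch of why a broker's retained messages are not a historian:
# a broker keeps at most ONE retained message per topic (last known good).

retained = {}   # topic -> most recent retained payload (the broker's view)
history = []    # what a real historian / UNS database would keep

def publish(topic, payload, retain=False):
    history.append((topic, payload))   # a historian stores every sample
    if retain:
        retained[topic] = payload      # the broker overwrites the old one

for v in ("10.1", "10.7", "11.2"):
    publish("ACME/Stillwater/Tank1/Level", v, retain=True)

print(retained["ACME/Stillwater/Tank1/Level"])  # "11.2" -- last known good only
print(len(history))                             # 3 -- all samples need a database
```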
17:35
Arlen Nipper: So, a lot of you... I think I've done about 200 Snowflake demos since we introduced Snowflake last year at ICC. And this is kind of the architecture drawing that I've used in all 200 of those demonstrations. You can see where we've got this notion of Ignition, Ignition Edge, and native devices like Opto 22 and Phoenix Contact that can do Sparkplug B, being able to take the edge of the network and publish that up to an MQTT broker. MQTT Engine, on an Ignition gateway, subscribes to that. And from there we really start using Ignition as the enterprise connectivity platform to create a data model, i.e., a UNS, and instantiate that, creating our digital twin. Now, I hate that word, "digital twin." It has so many connotations you can't...
18:36
Arlen Nipper: But the difference here is we try to do... If you look out there, AWS has digital twin technology. Microsoft Azure has digital twin technology. Google has digital twin technology. But the problem is those digital twins are the way that they thought you should do a digital twin, and they probably don't know what you do. And the interesting thing with what we can do with UDTs and Ignition is those are digital twins the way that you, the customers, use it. And that's the way that you can start leveraging really a digital twin, using Ignition UDTs, and then of course still keeping track of our real-time data. So, the digital twin that I create on Ignition Edge out in the field can be published to an Ignition gateway automatically. That UDT is published up automatically. We discover that, and now I've got the UDT there. Now I can point that at a transmitter that goes to a broker that's connected to the IoT Bridge for Snowflake. And now, going into Snowflake, we literally have that UDT recreated where we can take advantage of it.
19:56
Arlen Nipper: So, that becomes really... Snowflake is really kind of the ultimate UNS database, so that you can go back and get all the historical data, but you still have all the real-time contextual data there as well. Now, just to go into a little more, we are listening. So, over the last year we've had a lot of requests, and we've added some features to Engine, Transmission, and Chariot that customers have been asking for. So, the first thing is the UNS enablement of MQTT Engine. As you know, you can have edge devices, Ignition Edge and all of the native devices again, the Advantechs, the Opto 22s, the Eurotechs, the Phoenix Contacts, publishing that information into Ignition.
20:56
Arlen Nipper: Now we've got to keep track of that. That's an individual edge node, and we've got to keep track of the metrics, and that's the namespace that came in. And then you've got another edge node, and this could be in a factory, another cell. And when you display that in MQTT Engine, you've got all that information, but it's not really the UNS that you wanted. You would like to collapse that so that you could truly define it and have it all together by the time you get it to Ignition. So, what we have here is Edge Node 1 and Edge Node 2, and they're both going to connect to a broker and start publishing information, and those are going to flow into Ignition in MQTT Engine. Of course, you're going to have your Sparkplug view of G1, D1, E1: ACME Inc., Stillwater, the Coyote Property for Wile E. Coyote. And then we have our Anvil Production Plant, and then down to the actual UDTs. And that's all good. But what I would like to do is have it in a new UNS view. So, now in Engine you can set it up and say, "Okay, I know you're going to have all the Sparkplug metadata there, but what I wanna deal with is a UNS view."
22:21
Arlen Nipper: So, now we can go in and start making a really clean view right in Engine. You don't have to do tag copies or anything like that. That will show up in Engine exactly that way. So, think about it: what we're doing is obfuscating the group and Edge Node ID, and we're presenting a hierarchical namespace just the way that you named it. So you can start having all of these edge nodes contributing to a single UNS.
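Conceptually, the UNS view does something like the following sketch. This is illustrative Python, not MQTT Engine's actual implementation, and the topic layout is a hypothetical example: the Sparkplug group/edge-node/device overlay is stripped from the tag path but preserved as properties, so context is never lost.

```python
# Illustrative sketch (not MQTT Engine's real code) of collapsing the
# Sparkplug overlay into a clean UNS path while keeping the IDs as properties.

def to_uns_view(sparkplug_path):
    """'G1/E1/D1/<UNS path...>' -> (UNS path, Sparkplug ID properties)."""
    group, edge_node, device, *metric = sparkplug_path.split("/")
    props = {"groupId": group, "edgeNodeId": edge_node, "deviceId": device}
    return "/".join(metric), props

path, props = to_uns_view(
    "G1/E1/D1/ACME Inc/Stillwater/Anvil Production Plant/Grinder/Load"
)
print(path)   # ACME Inc/Stillwater/Anvil Production Plant/Grinder/Load
print(props)  # {'groupId': 'G1', 'edgeNodeId': 'E1', 'deviceId': 'D1'}
```

This mirrors what Nathan confirms later in the Q&A: the overlay is removed for flexibility, but custom properties on the tags still tell you the group, edge node, and device IDs.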
23:03
Arlen Nipper: Okay. The second thing is a lot of customers, especially in manufacturing, have it set up where they're familiar with MQTT, but they have small devices, Raspberry Pis, with very simple MQTT clients, and they would like to connect their clients to a broker just to get some of the information that we can publish out of Ignition. So, with the Transmission module now, you've got the ability to point to tag providers, folders, or individual tags, and literally publish those one tag per MQTT message. So, in here I'm showing that we've got ACME Inc., Stillwater, Coyote, Anvil Angle, a JSON payload of a value and a timestamp, and those can be published on retain, and you can choose to publish the properties as well.
24:02
Arlen Nipper: So, you get the engineering units, engineering high, engineering low; those will be sitting there in your broker so that as other MQTT agents or clients come around and connect to the broker, they're going to immediately get last known good. And then they can subscribe just to the data that they want. Maybe they just wanna subscribe to the Anvil Angle, and that's all they wanna subscribe to. So, they don't have to be aware of Sparkplug, and they can get very granular in how they actually use MQTT at that level. Now, we've given you a double-barrel shotgun and you can shoot both feet off. This is going to create a lot of traffic. If you can imagine, if you had a million tags and you're going to retain all of those in a broker, that's probably going to blow your broker up. But we do have a lot of customers wanting this feature, to be able to start really going out and getting very granular in the way that they can subscribe to topics coming out of Ignition. Number four. Oops, I think I skipped over one.
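A rough sketch of what that one-message-per-tag output might look like. The payload shape, topic names, and helper function here are assumptions for illustration, not the module's exact wire format:

```python
import json, time

# Hypothetical sketch of one-message-per-tag publishing: a JSON payload of
# value + timestamp on a topic that is the full UNS path, optionally retained,
# with tag properties published alongside so late joiners get context too.

def per_tag_messages(uns_path, value, properties=None):
    msgs = []
    payload = json.dumps({"value": value, "timestamp": int(time.time() * 1000)})
    msgs.append((uns_path, payload, True))  # (topic, payload, retain flag)
    if properties:
        # Retaining properties means a simple client connecting later still
        # sees engineering units/high/low without knowing any Sparkplug.
        msgs.append((uns_path + "/properties", json.dumps(properties), True))
    return msgs

msgs = per_tag_messages(
    "ACME Inc/Stillwater/Coyote/Anvil/Angle",
    47.5,
    {"engUnits": "deg", "engHigh": 90, "engLow": 0},
)
for topic, payload, retain in msgs:
    print(retain, topic, payload)
```

Note Arlen's own caveat applies directly here: retaining one message per tag for a million tags multiplies broker load dramatically.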
25:19
Arlen Nipper: Got them out of order here. Number four. For the last seven years, people have been wanting to do alarms over MQTT. So, now with... And we had to have some infrastructure help from Inductive Automation, and Inductive, thank you very much. But now we can generate an alarm at the edge, publish that into Engine, and then at Engine we can acknowledge that alarm and take it all the way back.
25:54
Arlen Nipper: So, now you can. Thank you. So, now you can use MQTT as your single secure connection into a hub-and-spoke architecture. And finally, I probably should bring Nathan out about this time. Nathan basically gets a lot of the customer calls where we're starting to design something or get into a debug situation where we've got different kinds of brokers and different kinds of clients out there, and we're having to debug them. So, we are announcing here at ICC that we now have the Chariot MQTT Client.
26:37
Arlen Nipper: So, it's free. Uniquely, you can subscribe to multiple MQTT servers. I think all of the other very simple MQTT clients, you can just subscribe to one broker. But when you're trying to solve a complex problem, you wanna subscribe to multiple brokers.
27:00
Arlen Nipper: Of course, it's got a built-in Sparkplug decoder, we can publish messages from it, and again it's free. So, if you download Chariot, which is our MQTT broker, the client is part of that, and you don't have to use Chariot broker at all, but you can run the client, and you can use that for free. You'll be able to download it as of today, I think. Right, Nathan?
27:26
Nathan Davenport: Yes.
27:27
Arlen Nipper: All right. So, with that, we are coming to the demo. So, I figure after eight years of doing live demos and never having a problem, I was probably pushing my luck. So, if we have a problem, I'm going to let Nathan demo it today.
27:57
Arlen Nipper: And I know a lot of you know Nathan; the guy is just incredible. Man, he can debug anything. So, really, what we're showing here is that we've got... It doesn't show up on the screen, it's being cut off. Actually, hit escape on there. It'll show up. There we go. We've got three edge nodes publishing data. They will be publishing data. They're set up in a Unified Namespace that'll be going to one broker, up into Engine, into Ignition. From Ignition, we've got Transmission, where we'll be using the new feature of the UNS transmitter to transmit some of these tags out, where you can subscribe just to individual tags. And then we've also got another transmitter going to another Chariot broker, going to IoT Bridge for Snowflake. And then ultimately, we'll get the entire UNS into our Snowflake data cloud platform. So, Nathan.
29:00
Nathan Davenport: Thank you, Arlen. Hi, everybody. I feel like every time we do these demos, Arlen makes them more and more complex, as you can see by the topology diagram. So, hopefully, the demo gods will cooperate today. We'll see. We'll see who's pushing their luck. All right, so as Arlen said, you see, we have three Edge gateways here. They're all hot. They're all connected to Chariot MQTT Server 1, and they're waiting for MQTT Engine here on the central gateway. Let me select it here so you can see, to come online, publish a state message on the appropriate topic. And that's the trigger that's gonna tell these gateways and the Sparkplug transmission clients, now is the time to go ahead and publish your birth messages and subsequent data messages. So, let's pivot over here to the Engine gateway. You can see it's disabled right now. And I'm gonna pivot back over to... And somewhere in here in the central gateway is MQTT Engine. Now, I have no UDT definitions here at all. I have no tags in the traditional Edge Nodes folder, and I have no UNS namespace. So, I'm gonna go turn on MQTT Engine, and you're gonna watch all the data flow.
30:23
Nathan Davenport: Let me get back to the topo diagram. From the three Edge gateways, a single source of truth all the way up to Engine. And we will take that single source of truth, that single stream of data, give you your standard edge nodes view with the Sparkplug overlay, and then we're also gonna give you the UNS hierarchy and topic namespace with that overlay removed. So, we go turn on MQTT Engine. Give it a second or so. That's about all it takes. And here, you can see your traditional... Oh, sorry. Trying to full-screen, but that did not work out, so let's try it one more time. And here you can see your traditional Sparkplug nodes laid out much as you would expect them: your group, your edge node, your device, and then your UNS namespace, right? So, as we drill down through your UNS namespace here, we're gonna go just a little bit deeper so that you can see that these tags here are in fact under the standard Sparkplug edge nodes folder with the Sparkplug overlay, and they're updating now in real time. And within your UNS namespace, you have that Sparkplug overlay removed.
32:01
Nathan Davenport: All right. We start at the enterprise, the site, the area, and so on. So, we give you both copies of these tags, with the Sparkplug overlay and without the Sparkplug overlay. And these tags very much behave just like you would expect them to. If you were to write to this load tag, for example, that's going to send a Sparkplug command message down to the edge. Transmission's going to unpack that and write to the tag, that's gonna create an OPC UA write down to the PLC, and that write is going to bounce back all the way to Engine, and this Engine tag is gonna update. So, let me prove to you that this is actually working. The load's gonna go from 44 to 46, let's say. You see it go all the way down and come all the way back. Now, this is the perfect time, I think, to go ahead and demo alarms across MQTT. So, I'm not gonna show you the alarm at the edge, but Arlen went ahead and put an alarm on this load tag at the edge, and once we cross a set point of 50, I believe.
33:11
Arlen Nipper: 50.
33:13
Nathan Davenport: That alarm's gonna fire. So, let's say all of a sudden our load jumps to 51. This alarm that you see here came all the way from the edge down to Engine, and then we basically, via Engine, wrote it into the alarm status table. So, from here, from the Engine side, we can go ahead and acknowledge this alarm, which is then gonna fire another command back down to the edge to acknowledge it at the edge. So, let's do that. So, the alarm's been acknowledged. Now let's say that the load drops back down below our threshold. You'd very much expect this to be cleared. So, we drop down to 40, let's say. And that alarm has been cleared. You can see it here in the table. We've cleared that alarm. We have alarms across MQTT. You guys have been asking. We finally brought it to you. Thank you.
34:25
Nathan Davenport: Like Arlen said, it took about seven years and a little bit of help from Inductive, but we finally got it. All right. Oh, and I probably should have shown you as well that of course we have your UDT definitions over here. We can't create the instances without the definitions. You get all of that single source of truth. Let me pop back over to the topology diagram. So now, what I'd like to show you guys next is, we're gonna turn on Transmission here on this gateway where we also have Engine. We have a traditional Sparkplug transmitter. It's gonna push your data up through MQTT server number three, through the IoT Bridge for Snowflake, into your Snowflake backend in that Sparkplug Protobuf. And then we also have another transmitter. I'll show you the transmitters here in a second so we can anchor you in reality, in the product itself. We have another transmitter, the UNS transmitter, specifically configured to consume the Engine UNS tags. So, let me show you what I mean. Pivot back over to Transmission. We're not enabled yet. You guys know what a traditional transmitter looks like. This is the transmitter right here going to Snowflake.
35:44
Nathan Davenport: It too is picking up the UNS tags, but it's Protobuf-encoding them, while the UNS transmitter is also pointing to exactly the same folder. But what we're gonna do is shred these tags. We're gonna publish out a message per tag on a topic that is the full UNS path, all the way down to the leaf tag. So, now you have full control for consumers of your data. Maybe one MQTT client wants to simply consume the load tag, maybe another wants to consume the angle tag. And let me also show you that there's no data here in our Snowflake backend as well. I'm gonna refresh it just so you believe me. These three schemas, or folders, here are produced by the bridge, but we should be seeing our ACME INC. schema folder, all of its views, all of its UDTs, all of that. We don't see it yet. We're gonna go here, we're gonna turn on Transmission, and we are going to publish this data out. So again, let me pop back over to the topo diagram. So, how can I convince you guys that the UNS transmitter's publisher has in fact published all of those tags, one topic per tag? Well, we're going to use the free client that we built for you guys, the free Chariot MQTT Client, to basically view this data at the server.
37:24
Nathan Davenport: Okay. Let me see whether my session has expired yet. So, we're gonna refresh just to be sure. I'm gonna turn this server on. It's not actually accepting connections yet. Now it is. And here is the MQTT Client built into the Chariot server. Looks like we need to connect to localhost, and we do. So, I have the Chariot Client here connected to localhost, this server right here, MQTT Server 2, and to MQTT Server 3, the server that's serving up the tags to Snowflake. Sparkplug data. So let's go back. Real quick, we'll pivot through this view. Looks like we have an active QoS 1 subscription on "#". If I go over here into the topic tree viewer, you can see we're connected to both servers. We have the top server and its topics. I'll expand them here in a second, showing you the data coming into Chariot server number 3, the one all the Snowflake data is flowing through. And then all of your UNS data is here on the local box. And you see your UNS reconstructed here. And as we drill down through this topic namespace, you're gonna find exactly the same data that you would be seeing in Ignition right now for these tags.
38:56
Nathan Davenport: So I turn on "most recent," just so we can stick on the angle tag. And every time the angle tag updates and the UNS transmitter publishes out a message, we're gonna auto-update the JSON payload down here. So what are we giving you? This payload at the very bottom is updating about once a second. We give you the qualified value: timestamp, quality, and the actual value. We give you the data type, and then we give you the full UNS path all the way down to the tag. You can retain it, publish on QoS 0, 1, 2, whatever you guys want to do. It's completely up to you. Like Arlen said, be a tad bit careful. If you've got 20 million messages, it could be a little bit heavy. And now imagine if you also wanted the properties for that tag. So you guys possibly remember that Sparkplug is going to push all of the additional tag metadata. Things like engineering units, engineering high, engineering low, tooltip, documentation, all that stuff comes in a birth message. But that is in Sparkplug. So we had to give you a way of retaining these properties. So as you can see here, we will also give you the ability to go cherry-pick out the metadata around a tag as well.
40:06
Nathan Davenport: Here in this example you've got engineering high and you have engineering units. So at this point now you can lock consumers down to topics, and they can consume one tag only if you want, or however many tags that you may want them to consume. Okay, and then as you can see here, the rest of your UNS is here as well. We got a little bit more time here, so I'm gonna show you a couple more things before we pivot over to Snowflake. Along with the connections view and the topic tree viewer, we give you the ability to subscribe to any topic you want and maybe publish out a message. So let's say, hey, we want to publish out a message on, I don't know, how about Arlen/Nipper with "Hello ICC 2024!" Bang. So I'm gonna publish that out there. I think I saw it, but things are streaming so fast. I can't quite tell. There it is, there it is. So we're gonna give you the ability to connect to multiple servers all at once, stream that data through, give you the topic tree viewer, give you the ability to publish out raw messages when you want to.
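The "lock consumers down to topics" idea rests on MQTT's standard `+` and `#` wildcard matching, which can be sketched as follows (the tag paths are the hypothetical ACME ones from the demo; the matcher itself follows the MQTT specification's level-by-level rules):

```python
# Sketch of MQTT topic-filter matching with the spec's '+' and '#' wildcards,
# showing how a consumer can be scoped to a single tag or a whole subtree.

def topic_matches(filter_, topic):
    f_parts, t_parts = filter_.split("/"), topic.split("/")
    for i, fp in enumerate(f_parts):
        if fp == "#":                       # '#' matches the remainder of the topic
            return True
        if i >= len(t_parts):
            return False
        if fp != "+" and fp != t_parts[i]:  # '+' matches exactly one level
            return False
    return len(f_parts) == len(t_parts)

tag = "ACME Inc/Stillwater/Coyote/Anvil/Angle"
print(topic_matches(tag, tag))                                # True: one tag only
print(topic_matches("ACME Inc/Stillwater/#", tag))            # True: whole site
print(topic_matches("ACME Inc/+/Coyote/Anvil/Angle", tag))    # True: any area
print(topic_matches("ACME Inc/Stillwater/Roadrunner/#", tag)) # False
```

Combined with broker-side ACLs on those same filters, this is how a simple client can be restricted to exactly the tags it should consume.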
41:15
Nathan Davenport: I'm hoping here in the near future we'll get you a Sparkplug simulator on the publish side of this as well, so you can kind of simulate your own Sparkplug devices. Not here yet. Coming soon. Now we've got to pivot over to Snowflake. I'm gonna refresh this view. Now here is the schema, or the folder, that contains all the views for ACME Inc. But before I go show you those views, let me show you where you find your UDT definitions, your models, within Snowflake. The models are what we use to dynamically create the views such that we can hydrate them properly. So, if you scroll down into the views here, into the Stage_DB schema, there is a node machine registry. And if we preview this data, you'll find every UDT that was on that Engine gateway now up here in Snowflake on the cloud. You have your full context, your full UNS path, and there are all of your machines: your palletizer, your wrapper, your stamper, your drop test, your paint booth, your grinder, and a few other models there that are injected by the bridge itself. Now, if I drill into the ACME Inc. view here, and let's say we drop into... Oh, how about the grinder, I think, and we preview this data.
42:36
Nathan Davenport: Here is all of your live tag data in Snowflake. Full UNS. It's still here. Yeah, we've stitched through the Sparkplug IDs, but your full UNS has been rehydrated here on the Snowflake side. And if I scroll all the way to the right, you'll find all of your UDT member tags as columns in this particular view. Let's pop back to the topo diagram. Again, a single source of truth. Three Edge gateways. In a couple of seconds, we can go all the way up through Engine, build you out your UNS, unroll that back out into another server via the UNS publisher, send data all the way up into your Snowflake Historian backend database in a couple of seconds.
43:28
Arlen Nipper: Awesome. Thank you, Nathan.
43:30
Nathan Davenport: Thank you.
43:36
Arlen Nipper: You made it through one.
43:37
Nathan Davenport: Woo. Nine more to go. And you can bet they're gonna get more and more complex too.
43:43
Arlen Nipper: That's right. That's right. So, I guess we're just about out of time, so if there are any questions, I think we could take a few and then we can call it a wrap.
43:54
Nathan Davenport: I think so.
43:55
Audience Member 1: I have a question on the Ignition Edge. It looked like you're just publishing the Sparkplug metric at that point. So, I noticed that your group edge node and device were G1, E1, D1.
44:05
Nathan Davenport: Correct.
44:06
Audience Member 1: And that's just getting stripped. So, when I bring it into the Engine in that middle one.
44:10
Nathan Davenport: Yes.
44:11
Audience Member 1: Okay. So, that's what's happening. So, that becomes the UNS piece.
44:14
Nathan Davenport: Exactly.
44:14
Audience Member 1: Okay, perfect.
44:15
Nathan Davenport: We still keep the context of where these particular tags in the UNS layout come from. We actually will have custom properties on those tags that'll tell you what your group ID, your edge node ID, and your device ID are so you don't lose context. But we remove that overlay to give you full UNS flexibility.
44:35
Audience Member 1: Okay, perfect. Just want to make sure I saw what I thought I did.
44:39
Nathan Davenport: You did.
44:40
Audience Member 1: Second one is that UNS transmitter. It looks like that's just using what used to be like the system.transmission component. So, it's flattening things. It's publishing a flat.
44:49
Nathan Davenport: Yes, it is flat, and it is similar to the Transmission, but we leverage the same code, our underlying agent code, that we use in the Sparkplug transmitter within the UNS transmitter.
44:58
Audience Member 1: Okay, perfect. Just want to make sure I understand that. The third thing is a comment. Does Rick Bullotta know what you just did?
45:03
Arlen Nipper: I hope so.
45:04
Audience Member 1: Yeah. There you go.
45:05
Arlen Nipper: Rick if you're watching.
45:07
Audience Member 2: Great products. You guys are awesome. But I have a question. So, your Unified Namespace. I mean, correct me if I'm wrong. It can filter out a lot of the stuff that's higher up in the tree so users only see the important parts that they want to see. What if I change my structure that I'm transmitting? I change a folder name or whatever, does that break everything or does it still work? How do you handle that?
45:35
Nathan Davenport: If there are no collisions, if you make a tag a folder, or a folder a tag, we have a hard time kind of resolving those types of collisions, if you will. But if you're simply adding other folders and so on, they should just pop in right in the middle of this UNS structure and it should function much like you would expect it to.
45:52
Audience Member 2: What if you delete, though? You change something.
45:55
Nathan Davenport: Worst-case scenario, what you do is you probably come back over to the Engine side, you'd blow all those tags away, you'd refresh from the edge, and we'd recreate that UNS in milliseconds from the ground up.
46:05
Arlen Nipper: And we've talked about this back and forth. 50% of the people we talked to said, "Oh, you should delete it." And the other 50% said, "Hey, just stale it," so the customer would know it went away, and then he can go and delete it. So, we went with the staling.
46:21
Arlen Nipper: And that's all the time that we have.
46:22
Susan Shamgar: I think we have time for one more question.
46:24
Arlen Nipper: Okay, one more.
46:26
Audience Member 3: So I'm just trying to wrap my head around it a little bit more. Is SPB-V1 still there in the background, or is it like...
46:33
Arlen Nipper: It can be. But once we... In Engine, if you select that you only want a UNS view, then you'll have metrics on G1, E1, D1, but those will only be for debugging. The actual namespace that we created starts at ACME Inc. And it's gonna stay that way.
47:01
Audience Member 3: So, then bridging that namespace should be a cakewalk.
47:03
Nathan Davenport: Yep. Should be. And as you can see here, just as Arlen said, for the UNS publisher, we strip off those pieces and we republish on the full UNS topic. So this is the topic that the UNS transmitter's publisher is publishing on.
47:14
Arlen Nipper: Well, I think he means the, in Engine, your UNS, the actual UNS tags.
47:20
Nathan Davenport: Ah, got it.
47:22
Arlen Nipper: Yep.
47:23
Audience Member 3: Thanks.
47:25
Nathan Davenport: Good question.
47:26
Susan Shamgar: All right, great. Thank you so much. And can we have another round of applause for Arlen and Nathan?
47:30
Arlen Nipper: Thank you, everybody. Good job.
47:37
Nathan Davenport: Thank you, sir.


Speakers

Arlen Nipper
President & CTO
Cirrus Link Solutions

Nathan Davenport
Director of Sales Engineering
Cirrus Link Solutions
Learn more about our networking and automation portfolio as a complement to Ignition. We will showcase our PLCnext technology with Modular I/O, Ethernet switches, and new MQTT / MODBUS protocol converter products. We will introduce you to new upcoming technologies based on Single Pair Ethernet and APL.
Transcript:
00:00
Arnold Offner: Hi there. My name is Arnold Offner, and this will be my very first presentation to you Ignition system integrators, end users, and partners. It's just recently that Phoenix Contact became a member of the Inductive Alliance Partner Program, and so my presentation today will not include a true demonstration of product as such, but I wanted to talk to you about some of the examples of some of the work we've already been doing with the Ignition software package. I wanted to thank you all for coming to this presentation. At least I was not holding you up from lunch, and I'm glad you took the time after lunch to be here this afternoon. At Phoenix Contact we're actually a German-headquartered company, but there are a lot of similarities between ourselves and Inductive Automation. If you look at the idea that they're family-owned, privately held, and they essentially have grown organically just like we have. Here in the US, we're about 1000 people. You probably know us for terminal blocks, those green terminal blocks that you find on a lot of electronics out there. You probably know us for power supplies and relays, but our real connection with Inductive Automation and the Ignition software package is related to the hardware that I would call networking and automation.
01:19
Arnold Offner: And so in my presentation today, I wanted to show you some of those examples, but more importantly, I wanted to give you an outlook into where our partnership is gonna go and how that might benefit applications of yours in the future because I've heard people talk about the IIoT, I've heard people talk about applications that might involve the edge, and what I wanted to share with you today is some of the work that I'm doing at Phoenix Contact that involves, I think, a technology that will be very interesting for you towards the end of my presentation. And then Marcus will run around with the microphone towards the end, and he'll see if there are any other questions you have at the end of my presentation. So we're good to start. You signed up for 30 minutes, so I'm gonna try and respect your time and make sure that this works for both of us. So at Phoenix Contact we talk about enabling the Digital Transformation. This actually is a campus picture of our headquarters in Germany. And as for my accent, which you'll note didn't come from the US, it came from way south of here.
02:21
Arnold Offner: We actually have a campus; this is our main campus in Blomberg, Germany. It essentially services our global operations, but at the same time it allows us here in the US to, as we would say, think global but act local. In other words, the colleagues you'll get to meet here in the US market, or whichever market you're in, are essentially local people who understand the local markets. I'm involved in development and manufacturing, and like I say, it's about this idea of the Digital Transformation; it's about automation; it's about networking. So now I'm gonna make sure that the clicker works. And it doesn't. Okay. I'll tell you what we'll do.
03:05
Arnold Offner: We'll do it the other way. Alright. So this, just to give you an idea, is the web interface that we've set up at Phoenix Contact, and if you come to our small booth outside in the hallway, you'll get to see two things that we're actually demonstrating. We've used the Ignition software package to show essentially our customers, yourselves, but to allow our salespeople just to tell a customer, "Hey, if you type in iiot.phoenixcontact.com, we will actually give you a sampling of some of the things we're working on right now." On this lower purple line (and this is just a screenshot, so it's not live), you'll notice our current solar installation at our facility in Harrisburg, Pennsylvania. And then we also have about 18 electric vehicle employee charging stations.
04:00
Arnold Offner: And so we actually just run a little interesting number each day. We basically think that about four miles is possible with one kilowatt-hour of electric energy in an electric vehicle. And so that's where we come up with the 550 miles of charge we've delivered to employees with electric vehicles in the Harrisburg fleet. We have about 800 people in our facility in Harrisburg, so product marketing, sales coordination, design and manufacturing, and logistics. And then scattered around the country another 250 salespeople who are responsible for either customers or particular industries around the contiguous US. So one of the exciting reference projects that I did wanna mention, and you'll find more of this on YouTube if you go looking for it, involves a project we did last year, and actually last year and the year before, but it was successfully deployed during the course of early last year. It's with a hydro plant up close to Boise, Idaho, known as the Lucky Peak Hydro Power Plant. Derek Stone is one of the gentlemen who is actually very closely involved with this project together with our team, and we actually have two Gold Certified Ignition Software Engineers at our facility in Pennsylvania, which means they actually wrote a lot of the code that then helped the folks at Lucky Peak actually deploy Ignition together with our hardware in the upgrade that they were deploying in their facility near Boise, Idaho.
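The "four miles per kilowatt-hour" rule of thumb above can be sanity-checked with a quick back-of-the-envelope calculation. This sketch (all figures taken from the talk, not from Phoenix Contact documentation) shows the implied energy behind the quoted 550-mile figure:

```python
# Assumption from the talk: an EV gets ~4 miles of range per kWh delivered.
MILES_PER_KWH = 4.0

def miles_from_energy(kwh: float) -> float:
    """Estimated EV miles enabled by a given amount of charging energy."""
    return kwh * MILES_PER_KWH

def energy_for_miles(miles: float) -> float:
    """Charging energy (kWh) implied by a quoted mileage figure."""
    return miles / MILES_PER_KWH

# The quoted ~550 miles of daily charging implies roughly:
print(energy_for_miles(550))  # 137.5 kWh delivered across the stations
```

So the 18 charging stations together would be dispensing on the order of 140 kWh on a day that yields the quoted mileage.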
05:29
Arnold Offner: What I will tell you is there's two videos, and then there's also a set of articles that were written that actually discuss how the Phoenix Contact hardware together with the Inductive Automation software came to be, how this upgrade took place. So as part of my agenda for today, like I said, I really wanted to just talk to you about the things that we're displaying here to add to products you might already know from us, and I'm gonna save that one right at the end there for last. Like I said, Phoenix Contact back in 2017 launched a controller we know as PLCnext. It is a very interesting PLC because it's not your custom PLC. It's based and predicated around a Linux controller. In other words, it's based entirely on the Yocto platform. And what it allows customers to do is we develop software that allows you to use the typical IEC 61131 software programs, but it also allows you to write your own code. And so if you're best in class or you have a certain skill set, either as a programmer or as a company, it's also possible for you then to develop software code that you could then sell through a PLCnext store that we've created to go with it. I'm gonna cover a little bit of the Ethernet switch technology and then talk about protocol converters, but really my key discussion for today is to tell you about how we're very soon gonna be able to get into this space as well.
06:53
Arnold Offner: And that technology is gonna be based on what I wanted to share with you in a moment called Ethernet APL and SPE. Ethernet APL is predicated around its usage in the process industry, the heavy process industry, and SPE, if you've not heard about it so far, is single-pair Ethernet. And single-pair Ethernet is gonna allow you to go 1000 meters, so 3280 feet. It's gonna allow you to essentially take 10 megabits per second of data all the way to devices. And some of the technology that I'll show you in a moment actually just stands to benefit from this capability. And then towards the end, I might still have a chance to then tell you some of the raffle prize winners of people who may have stopped by our booth already and some of the raffle prizes we have to give away. So the area of application where we get involved could be considered to be these five. Phoenix Contact is very well known in the factory automation space. So what we would consider production logistics, everything there, machinery, network machinery. If you then take a look further up to the side here, you'll see infrastructure. So we've done a lot of projects with customers that involve ports, harbors, pipelines.
08:07
Arnold Offner: We've been very much involved in the area of power plants, so the IEC 61850. And when I talk about networking, also realize that part of our portfolio also includes cybersecurity. So we also have capabilities to allow people to log in remotely to equipment, plant, machinery, and basically extract data without having to physically send a person there. And this is very important too if you consider that Germany as such is an export-led country, so a lot of the technology our customers in Germany buy from us gets exported to other countries. It's a very expensive proposition to send a technician out to find out that the power cord was unplugged, and that's the reason why the machine's not working. It'd be good to know that ahead of time, and then also know what parts you need to take with you if you get called away to a site somewhere else across the globe. Phoenix Contact is probably best known for its products that were originally defined when electricity became a big thing in the '20s and '30s. And so our terminal blocks actually go back that far, in other words, to the time of electricity and electric trams, and this is even before the motor car became an essential, I think, part of everyone's life globally.
09:20
Arnold Offner: Then of course we're talking machines and systems, so whether it's logistics, whether it's going to be containerized packages, and the part that is not quite clearly shown here that I'd refer to in area number four is we also do a lot of work in the water, wastewater. And slowly but surely we're also starting to invest more and more effort right now into the oil and gas, the chemical industry, and that's the team that I'm involved with here in the US. So we've actually been set up in Harrisburg, Pennsylvania, as a center of competence for the process industry, and that is why a lot of the process-related products that I'm about to talk to you about today come from the work of the development and manufacturing teams we have in Harrisburg. So with that, I wanted to first show you a product portfolio that we started in about the 2016-2017 timeframe. This is also a pointer, right? So I wanted to let you know we've developed an IO-Link Master, in other words, a device that allows us to connect independently using an Ethernet port to an IO-Link infrastructure. Somebody has already come to talk to us about a module we created: not actually a serial gateway, but an Ethernet-based gateway that allows us to extract HART data from a HART installation.
10:41
Arnold Offner: So customers are revisiting their HART devices in the field, and those critically important ones we can actually extract that data off the side without compromising the performance of the plant. And then over to the very left is a set of four different families of products that we have, and I'm just gonna basically not belabor all the different types of protocols. Suffice it to say that this is where we take a serial communication and we convert it to Ethernet, and the moment we can convert it to Ethernet, there's a lot of creativity that all of you could come up with to then deploy this into an Ignition-based package or a solution for your customers. The one important one that's exciting for us this week is we are doing a little bit of a soft launch this week about an MQTT protocol converter that we've created, which essentially is available in a form factor of one of those four. Two of those are actually raffle prizes, and essentially what we're doing is we are taking Modbus TCP or RTU and then either through a single port or through a dual port, which allows you to daisy chain things. You can then essentially take Modbus data and then convert that to MQTT along an Ethernet infrastructure. So those are the part numbers, and I know sometimes people grab a cell phone and want to see part numbers, but that's just to give you an idea of the Ethernet port count and the part numbers.
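The Modbus-to-MQTT conversion described above can be pictured as a simple mapping step: read raw holding-register words on the Modbus side, scale them into engineering values, and publish them as a JSON payload on an MQTT topic. The sketch below is illustrative only, not the Phoenix Contact converter's actual firmware or configuration; the register map, tag names, and topic scheme are hypothetical, and a real gateway would obtain the register words with a Modbus library such as pymodbus and publish with an MQTT client such as paho-mqtt:

```python
import json

# Hypothetical register map: each entry names a Modbus holding-register
# address, a scale factor, and an engineering unit.
REGISTER_MAP = {
    "flow_rate":   {"address": 0, "scale": 0.1,  "unit": "l/min"},
    "temperature": {"address": 1, "scale": 0.01, "unit": "degC"},
}

def registers_to_mqtt(device_id, registers):
    """Map raw holding-register words to an MQTT topic and JSON payload.

    `registers` stands in for the word list a Modbus TCP/RTU read would
    return; the topic scheme here is an invented example.
    """
    payload = {
        name: {"value": registers[spec["address"]] * spec["scale"],
               "unit": spec["unit"]}
        for name, spec in REGISTER_MAP.items()
    }
    topic = "plant/%s/telemetry" % device_id
    return topic, json.dumps(payload)

topic, payload = registers_to_mqtt("pump-01", [1234, 2150])
print(topic)    # plant/pump-01/telemetry
print(payload)  # flow_rate 123.4 l/min, temperature 21.5 degC
```

Once data is on an MQTT topic like this, any MQTT-capable consumer, Ignition's MQTT Engine included, can subscribe to it without knowing anything about the underlying Modbus wiring.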
12:10
Arnold Offner: And the ones I marked in bold are actually the two protocol converters we've actually set up as raffle prizes for today. Alright, so the most important thing that I did want to talk to you about revolves around this topic of the field level today. The slide, as you'll notice bottom left, refers to something called ACHEMA. ACHEMA is the world's largest process control and process automation trade show anywhere in the world. It's held every three years. And back in 2021 is when we first, during the lockdown, actually conducted a virtual event where we talked about the space in the process automation space that is still not Ethernet compliant. And this is where a currently serial bus or analog connectivity currently exists. You're all aware of 4-20 milliamp loops. You're probably all aware of things like HART. You're probably also aware of things like PROFIBUS PA or Foundation Fieldbus. And until now, this little area has never really been a domain that you could actually reach with Ethernet. And now we have a technology known as Ethernet APL.
13:20
Arnold Offner: So it is a derivative of the IEEE single-pair Ethernet specification. And what we've actually gone and done is we have created an intrinsically safe connection, which allows us to go 200 meters from a switch over two wires down to a field device. And what we're able to do now is essentially tell you that Ethernet is now possible all the way to the very edge in a process automation application, probably the most difficult ones to engineer. And using this technique, this is the very same technique we do with single-pair Ethernet. So everybody who's ever worked with 4-20 or with HART or with any of the field bus systems knows it's always two wires. It's not four pairs. So what we've actually gone and done is we've taken the IEEE 802.3cg standard, which essentially allows 10 megabits a second over a distance of 1000 meters. I always have to be careful how I get this right, but that's 3280 feet. And what that would allow you to then do is essentially have a smart device in the field that now has an IP address, which essentially can become part of an Ethernet network. So single-pair Ethernet is something that you're gonna hear more and more about. And I think at Phoenix Contact, because we've developed both things in the SPE space or the APL space, APL is nothing more than an advanced physical layer for process applications. So you can put whatever Ethernet protocol on top of that that you want.
14:49
Arnold Offner: We're essentially now going to be able to tell you that going to the edge means going all the way to the field instrument. What this also means, of course, is that this technology is still gonna live side by side with 4-20 and HART. It's still gonna live side by side with Fieldbus techniques, but it is gonna allow you now to get very advanced instrumentation or actuating devices to be part of your infrastructure at far higher speeds. And 10 megabits a second, just so you know what we're talking about, 10 megabits per second means we are 10,000 times faster than HART. And it also means that we are 300 times faster than any Fieldbus system out there. So sometimes when I do this presentation for my colleagues, I say, "This is an industry that no longer is gonna have time for coffee breaks." So the APL project is based around work that was done by four very notable standards development organizations. You'll probably recognize those four logos across the top. The FieldComm Group is responsible, of course, for FOUNDATION Fieldbus and the HART protocol. The ODVA, as you know, is responsible for EtherNet/IP. We're a member there. We're also a member of the OPC Foundation because what you now have to realize, if you start talking things in a digital space, OPC UA is probably gonna become the future platform on how this data transfer occurs.
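The "10,000 times faster than HART" and "300 times faster than any Fieldbus" figures check out as round numbers if you assume the commonly quoted physical-layer rates: HART's FSK layer at 1200 bit/s and FOUNDATION Fieldbus H1 / PROFIBUS PA at 31.25 kbit/s (both assumptions here, not stated in the talk):

```python
# Ethernet-APL rides on 10BASE-T1L single-pair Ethernet: 10 Mbit/s.
APL_BPS = 10_000_000
HART_BPS = 1_200          # assumed: HART FSK physical layer
FIELDBUS_BPS = 31_250     # assumed: FOUNDATION Fieldbus H1 / PROFIBUS PA

print(APL_BPS / HART_BPS)      # ~8333x, quoted as "10,000 times" rounded
print(APL_BPS / FIELDBUS_BPS)  # 320x,   quoted as "300 times" rounded
```

So both speedup claims are consistent with the raw bit rates, with the usual rounding up that a keynote allows itself.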
16:04
Arnold Offner: And then, of course, the PROFIBUS, PROFINET organization, which also has its strength in pretty much the rest of the world. But there are applications here too in the US that we're aware of. So those four standards organizations got together, together with 12 well-known companies in the space. And essentially what we did was we created a standard, a physical layer, which is protocol agnostic. And then what happened is at the end of August of 2022, the project, as it started in 2018, four years later, was dissolved. And then these standards development organizations went back to their particular members to then develop this technology further so that each of them can now create an Ethernet-based protocol on top of this advanced physical layer. So some of these companies you will recognize too, either because they are producing DCS systems or because they are producing sensors and actuators commonly already used in the process space, or companies like Phoenix Contact, who I would consider to be in the infrastructure space. In other words, we are using a Layer 2 managed switch technology to actually create an IT-to-OT combination, which allows us to use regular Ethernet on the north side and then use the OT capabilities of APL into the field.
17:23
Arnold Offner: You will see that this is also a group of competitors that collaborate very well, just like I would say Alliance partners do within ICC each year when we meet for this week. And then the other thing I wanted to share with you is just some of the standards that we've also created and some of the documentation we've created, which allows us to essentially also basically create the foundation for now other companies to get involved in the technology too. And so I think I'm speaking to a lot of people who are gonna deploy the technology, and so the one thing I would mention is there is also an APL engineering guideline, which would probably become very interesting for you in the future. It's about 100 pages long, but it essentially takes the technology and the technique we've used from 4-20 and HART, and essentially now using APL allows people to actually start defining, to start specifying an APL infrastructure moving forward. We are also currently, this is the part we're wrapping up right now, we're currently in the process of doing APL conformance testing, so interoperability, but all of these manufacturers are currently either getting final certification or still making sure that they conform to Class I, Div 2 standards, IECEx, ATEX, or any kind of national standards.
18:42
Arnold Offner: There are some countries that still insist on doing things a little differently. I'm thinking of countries like the UK. I'm thinking of countries like China, Korea, Japan. Those standards for those particular markets are still basically the steps that these manufacturers, including us, still have to work on. But the technology is now here, and what I wanted to share with you is that the technology is gonna allow us to actually develop some very, very interesting new concepts moving forward. And so in this Digital Transformation, we finally now have Ethernet all the way to field devices. I just wanted to give you a picture that shows the collaboration that occurred at the Phoenix Contact booth in June of this year in Germany. So together with the companies ABB, together with Endress+Hauser, together with KROHNE, and then a valve controller company called SAMSON, we actually physically showed how this is possible. And I always like to point out that if you look at this ring, you're looking at a layer 2 managed ring, all right, with redundancy integrated. It can either be done with copper, or we've also got two SFP ports, which allow us to do it over fiber. And then we're using our PLCnext actually as an edge gateway. So in other words, it's not actually part of the process. We're using the graphics here of any DCS manufacturer.
19:56
Arnold Offner: Our success so far has been with ABB and Honeywell. We're currently working with companies like Yokogawa and with Siemens on Next. And essentially our PLCnext is really doing what we call the NOA, the NAMUR Open Architecture. So it actually is extracting all the other data that can be accessed while this process is running. And just to give you an idea, all of these devices are essentially then also push buttons that we have on a demo. I don't have that demo here this week. But essentially it's to give you an idea that all of these devices are essentially IP-driven devices, IP-connected devices. And so we can actually get into all the other information that this device actually has that would otherwise be hidden or not be available in real time in a HART or a Fieldbus type application. So what I was gonna do is then just show you where I see this happening. And so I do have an example just to give an idea of where this goes. If you were to take a Coriolis mass flow meter, you are probably looking at over 480 seconds in HART to get a downloaded piece of data back from the device. So you send out the command, and now that information comes back. It's gonna chew up a lot of time.
21:11
Arnold Offner: So close to eight minutes. With PROFIBUS, it's gonna take just about three. So about 180 seconds. And then if you're looking at Ethernet APL, this can be done in 10 seconds. And to give an example of how this really works, I wanted to show you an example of how Vega does it. So Vega, and this is a very interesting comparison. You'll notice the cursor moving around here. We're actually trying to connect to a Vega device using HART, while at the same time we're watching an envelope curve here continually being updated every two seconds using APL. So you can imagine the new kinds of business cases and the new kinds of performance categories that could be created in here. And I'm sorry that I dragged this out for two minutes, but you'll notice this bar is still filling up. And so who knows whether the HART information you're ultimately gonna get when that bar closes is actually correct. Because this Vega device on this side using APL has been able to keep tabs on it every two seconds and give you data that is very, very current and very, very accurate. And so, like I said, my comment to my colleagues when I talk about the speed is there's no more time for coffee breaks. People who use HART, I think a lot of them know that whenever you send a HART command, it's always a good time to head to the coffee machine or to go take a bathroom break because you're never sure if it's gonna be done by the time you get back.
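The download times quoted for that one Coriolis-meter example translate into the following speedups (figures from the talk; the exact numbers will vary by device and parameter set):

```python
# Parameter-download times quoted in the talk for one Coriolis meter.
DOWNLOAD_TIME_S = {"HART": 480, "PROFIBUS PA": 180, "Ethernet-APL": 10}

for proto, t in DOWNLOAD_TIME_S.items():
    speedup = DOWNLOAD_TIME_S["HART"] / t
    print(f"{proto}: {t} s ({speedup:g}x relative to HART)")
```

That is a 48x improvement over HART and 18x over PROFIBUS PA for this one operation, which is what makes the two-second live envelope-curve refresh in the Vega demo possible at all.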
22:29
Arnold Offner: All right. And with that, I wanna leave you with a topology that is actually what I see us doing in the future. Phoenix Contact has actually developed an SPE device as well. Some of those first SPE-compatible field devices are now out there. There is a company in Germany called Jumo that's already created three different types of devices, which they demonstrated last year at a microbrewery. So they have an environmental sensor, which in one package over two wires and 1000 meters can actually give us the air quality in here, would give us a CO2 level, and would give us the humidity and temperature. They have another device that does pressure. They have another device that does flow. And then what I wanted to show you here is just the use of Ethernet APL again with a Phoenix Contact product. And all of these devices now could run out 200 meters to APL devices in the field. And so that's why I just want to show you is that we are gonna be part of this discussion moving forward as we essentially take this kind of technology to the edge. What you can also notice, and that's the beauty of a network, is you'll notice we are running a ring structure. We are running another series of Phoenix Contact PROFINET, Modbus TCP, OPC UA, or Ethernet IP type devices.
23:45
Arnold Offner: We can then run them to redundant control systems. And essentially that network now allows us to do all kinds of things. And if you would for a moment just imagine that SCADA could also be Ignition. But I feel Ignition could be also used in other parts of the plant because asset management now becomes very interesting too. In other words, it's up to now the creativity of yourselves as to how you would use Ignition on this backplane here to basically do the things related to the instrumentation here in the field. With that I was gonna just mention, I don't know, is Mark there?
24:22
Mark: Yeah.
24:23
Arnold Offner: Are you ready with the list, Mark?
24:25
Mark: I am.
24:30
Arnold Offner: Okay. Did you bring them with you as well, or are they at the table?
24:30
Mark: I brought them with me.
24:33
Arnold Offner: Alright. Five minutes, perfect. We've got five more minutes. So Mark is our local sales guy here. Thank you, Mark. Alright. So is Chris Bomarito in the audience? Nope. We'll catch up with you later. Sabrina Rodriguez in the audience?
24:54
Sabrina Rodriguez: Here.
24:54
Arnold Offner: Okay. We have a switch for you, an unmanaged switch. Is Bram Fenter here from Element 8? Nope. Justin Davies from DCI?
25:05
Justin Davies: Right here.
25:05
Arnold Offner: Okay. We have a product for you too. We're getting you an MQTT Modbus protocol converter. Is Dallas Ward from Sierra Controls here? Nope. We'll catch up with him in a moment. And then I'm looking for Ryan Birch from California Resources Corporation. Alright. We'll catch up with him later as well. Thank you. Alright. We'll catch up with you and provide you with all your samples in a moment. And with that, I was gonna let Marcus hand the microphone out to anybody who had any questions. And that leaves us with three minutes. Go ahead. Anybody have any questions? So there, I can go back to the slide. The current companies that produce the APL devices that I showed in the table, and I think there's maybe three or four that have been added to that list, have created an IP-based control system that runs on the two wires.
25:54
Mark: Okay. So...
25:54
Arnold Offner: In this current configuration (because there is a project out there right now that everybody's scrambling to get their hands on), the switch that we have also has proxy functionality. So if the APL device is not yet ready, we can substitute a PROFIBUS PA device of the same type. But if you think about 4-20, if it's a 4-20 device, it probably didn't do much more. If it was a HART device or a Fieldbus device, it probably does more. And so to your question, I would say it's not as much a 4-20 as much as it is more of the sophisticated devices that those field device manufacturers have never really been able to get into the marketplace.
26:32
Arnold Offner: Hope that answers your question. Yeah. If you look at IO-Link, I would say complementary, only because IO-Link is predicated more towards the factory automation space. And IO-Link is actually working in two other spaces still. They are starting to do what they call IO-Link Wireless, and they're also creating what they call IO-Link Safety. Think about IO-Link as truly being something that gets used in the factory automation space, whereas APL and SPE are gonna be covering essentially that entire market space, so both the EX and the non-EX market. So, complementary. Yeah. The only challenge is with three wires, it doesn't tie in very nicely to the two. And I have seen companies already start working on media converters that will convert IO-Link to SPE. But then your challenge still is you're talking about a gateway, and we're trying to eliminate gateways. Because IO-Link is a master-slave type configuration, and so it doesn't really have a great way to connect into Ethernet networks above.
27:38
Audience Member 1: For device manufacturers, do you see a big adoption happening for this on the end devices?
27:43
Arnold Offner: I would say end devices, the adoption is probably gonna come from best-in-class device manufacturers who have always had a lot more intelligence inside the device. So if you looked at that name, you're probably looking at really high-end devices that these companies produce. I think the challenge in the SPE space is think about it more of being a combination of devices just over two wires. Because what I probably forgot to mention is that over my 1000 meters at 10 megabits a second, we're also pushing out power. So we're doing the same thing we do with 4-20, but we are actually sending out power. And those power categories are then also defined. I don't want to go into too much of the weeds on that, but essentially it's a powered two-wire system.
28:23
Audience Member 1: So outside the working group, obviously they're adopting, they're part of it, but do you see other companies looking at it, asking about it, trying to get involved?
28:32
Arnold Offner: Yes, yes. And like I said, they are then working through their standards development organizations. So in the PROFINET space, currently in the Ethernet IP space, they've already created something called a constrained in-cabinet type technique, also using single-pair Ethernet. And so you're slowly but surely gonna see more of those companies step up and develop that kind of thing. Yes. Well, thank you very much for taking the time. I really appreciate it. I want to wish you a wonderful rest of the day, safe travels back home. And it's been nice meeting all of you. Thank you.


Learn how process manufacturers are leveraging the power of SafetyChain & Ignition to drive meaningful value in their production environments. We’ll cover how manufacturers benefit from seamlessly connected systems and the broader impact that has on various segments of their operations. You’ll hear about a case study where thousands of data points derived from a complex manufacturing process were leveraged to drastically improve quality and production metrics. Finally, we will showcase how easy it is for manufacturers to connect SafetyChain and Ignition with a live demo.
Transcript:
00:01
Geoff Nelson: Glad you're here. Appreciate your time here to talk to us. We are here to talk to you about SafetyChain and our Ignition Module to help capture real-time data for a digital plant management system. My name is Geoff Nelson, I am the VP of Technical Solutions for SafetyChain Software. This is Jonathan.
00:21
Jonathan: Hello, everyone. Welcome. Thank you for joining us today. I've been with SafetyChain a little over three years working as a Solution Engineer. But I primarily come from the food and beverage manufacturing industry. I spent over a decade working in a plant and SafetyChain is a plant management platform. So, I'm really excited to talk to you guys about the value that SafetyChain can bring and how we can leverage the Ignition Module.
00:49
Geoff Nelson: So, let's get into it. So, we will talk about SafetyChain, we'll give you some of our key applications where we kinda hit the plant management, plant floor, talk about digital transformation which I'm sure you're all pretty familiar with, give you a customer success story and then go into a demo. So, we'll try to get through these kinda quick so we can show you a demo and leave time for questions at the end. So, we are a digital plant management platform. We're a SaaS solution hosted in Microsoft Azure, we have native applications for Android, iOS and Windows and we help kinda pull everything together, the glue for the digital plant management platform. We are an alliance partner here with Inductive Automation and we will show you our module that we have.
01:35
Jonathan: All right. So, yeah, as Geoff's kinda said, we bring everything together. I like to think of us as kinda like a one-stop shop when it comes to plant management. And as you can see, everything here listed, these are some of the key use cases and applications that a lot of our customers use and leverage in manufacturing. I'm not gonna go through the whole list, but we touch your whole process from shipping and receiving all the way to getting your product out the door. And primarily, we've come up in the food and beverage space, but we have had other applications outside of that. Now, as it relates to the Ignition Module, that specifically focuses on how we're capturing data for our customers.
02:13
Jonathan: So, with the Ignition Module, we can capture in SafetyChain any data that's already mapped to their Ignition Gateway. And SafetyChain is already a pre-built solution for you to work with your data. You can aggregate that, graph it, trend it as you need. We could take, for example, the temperature logs listed right here. Instead of having a maintenance tech or operator go in, take an hourly required check, write that down, and put it away in a folder or binder, SafetyChain helps you digitize that process and take it a step further by automating it. So, with the module, we could set off a trigger where we're capturing that specific data tag on a routine basis, or if there's a condition that needs to be met, we can trigger it as well. We have multiple ways of triggering that data collection point.
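The two trigger modes described here, a routine schedule and a condition on the tag value, boil down to a small decision rule. The sketch below is purely illustrative; the class name, fields, and thresholds are invented for this example and are not the SafetyChain module's actual API:

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class CaptureTrigger:
    """Illustrative trigger: capture on a routine interval, or
    immediately whenever the value exceeds an optional threshold."""
    interval_s: float                     # routine capture period
    threshold: Optional[float] = None     # fire at once if exceeded
    _last_capture: float = field(default=float("-inf"))

    def should_capture(self, now_s: float, value: float) -> bool:
        due = (now_s - self._last_capture) >= self.interval_s
        out_of_spec = self.threshold is not None and value > self.threshold
        if due or out_of_spec:
            self._last_capture = now_s
            return True
        return False

# Hourly temperature log that also fires the moment temp exceeds 40.0:
trig = CaptureTrigger(interval_s=3600, threshold=40.0)
print(trig.should_capture(0, 25.0))     # True  (first routine capture)
print(trig.should_capture(600, 25.0))   # False (not due, in spec)
print(trig.should_capture(1200, 41.5))  # True  (condition met)
```

In an Ignition deployment, logic like this would typically sit behind a tag-change or timer script on the Gateway; the point here is only that "routine or conditional" capture is a single, simple rule.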
03:11
Geoff Nelson: So, like Jonathan said, these are the key areas of impact, what we do, it's pretty customizable. And so, you can build really your own process out within SafetyChain. And then he's highlighted here these blue ones as real-time, maybe ones that resonate more through Ignition. But then you can bring this data in and it can live next to all of these other impact areas as well.
03:36
Jonathan: So, again, I talked about my time in the plant. This slide kind of really illustrates the process around capturing data and using that data to further your continuous improvement efforts. SafetyChain is basically gonna help you do that and then the module's gonna help you also automate that. So, in SafetyChain, primarily you could collect data via a workstation or tablet. So, we're device-agnostic. You can use a Windows workstation, you can use an Android or iOS tablet. Typically, it's operators entering checks or maintenance guys entering their work orders, things of that nature. But you can also trend and track that data and find your opportunities for improvement, find your opportunities to save time and waste and then you gather insights and you act on that data. And as you're acting on that data, as I said, you're pushing your continuous improvement and you're pushing the bottom line so that ultimately you're growing as a company. That's what we try to help our customers and try to generate those success stories from helping them leverage that data.
04:41
Geoff Nelson: So, whether you're coming from paper and we're helping you create a digital process, or you already have data being collected and you're just going to a multi-site, networked cloud solution, we help you come to ask questions that you didn't even realize you had, because you're doing that CI, that continuous improvement, on your processes and on your data. So, I'm gonna tell you a little story about Egglife. I don't know who here is familiar with Egglife. They make a tortilla alternative out of egg, so you can have tortillas that are made from egg. They took the Ignition Module. They were already using SafetyChain; they had gone from a paper process to a digital process on their tablets. So, they're using tablets to gather information, they're gathering downtime, dwell time, temperatures, all sorts of information within SafetyChain and already performing analytics. Then, they moved to the Ignition Module because all of that data was available within Ignition, but they still needed to collect the data for auditability, for compliance and for audits that come in. Now, they had about 12 manual processes. They had people going up to the machines or going up to the HMIs and collecting this data in a tablet.
05:55
Geoff Nelson: They moved it over to the Ignition Module and all this data's still being captured but in an automated way. So, they took 12 processes and automated them. So, that's people, that's time now that people aren't having to go walk up with a tablet and it's all within SafetyChain and they can still perform their analytics. It's all in the cloud and it's all stored long-term. So, an opportunity there for them to save time, save money. And then now, they can use those operators really to do something else more valuable than looking at a screen and collecting data. All right, now we're gonna jump into the demo. So, bear with me just for a second here. Okay. So, I am bringing up the module first and then we'll jump into SafetyChain. So, this is just an Ignition Gateway here which I'm sure you guys are all familiar with. We have a module that can be installed and it puts this SafetyChain piece here at the bottom. It's really easy, I mean, a few clicks to install a module and then you get your connections, your Form Collectors, your OEE Collectors and your Tag Collectors.
07:01
Geoff Nelson: This allows you to really grab any data that Ignition has access to and then put it into SafetyChain in different ways. So, I'm showing a Form Collector here and it'll make more sense when we start showing SafetyChain in a moment, because we'll show you our Demo 1 tenant. I'm just gonna go into the kettle temps here and show it to you real quick. And I am using this connection here at this site and connecting to a VM, so it might be a little bit slow here. When this comes up, it gives you a user interface that allows for configuration to create an integration between Ignition and SafetyChain. So, Ignition has a view into what SafetyChain data exists. We call these Forms. So, a person might be writing on a piece of paper what their kettle temps are, and we digitize that into a digital form within SafetyChain, and then Ignition gets access to that. So, a user could come in here and access all of the forms that exist within SafetyChain, pull it into Ignition, set up a trigger: when do I wanna send this data to SafetyChain? Do I want to do it time-based, every five minutes, every one minute? Do I wanna do it tag-based? So, if a temperature exceeds a certain value, if the dwell time exceeds a certain value?
08:20
Geoff Nelson: Send this information to SafetyChain; there are different ways to do it. Do I wanna do it manually, so you can actually script this execution? You can put it into a button in a Perspective or Vision screen. And then you map each of the fields. So, I can have my Kettle Temp 1, 2 and 3. I have which line it's coming off of. Average is a calculated value within SafetyChain, so I don't have to send it. The temps, the tags, the fields within SafetyChain have different sources. So, you can choose a tag, a static value, an expression. You can choose a data source, which allows you to go to any data source that Ignition has access to. So, if you have a SQL connection or something, you can then pump that data directly here through configuration. So, it allows you to really take, like we said, any of the data that Ignition has access to, package it and send it to SafetyChain. I'll show you one more. So, what this does is it pulls the SafetyChain context into Ignition. So, Ignition knows how to talk SafetyChain language. The other option that we have are Tag and OEE Collectors, which really just say, "Hey, give SafetyChain the tag," and then it will deal with it.
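The field-mapping step just described (tag, static, and server-calculated sources) can be pictured with a small sketch. The `field_map` shape, field names, and tag paths are hypothetical, invented for this example; in a real Ignition script you would read tags with `system.tag.readBlocking`, and the module's own configuration UI handles all of this without code.

```python
def build_form_record(field_map, read_tag):
    """Assemble a SafetyChain-style form record from mapped sources.

    `field_map` maps form field names to an illustrative source spec:
      ("tag", tag_path)  -> read the value via the supplied reader
      ("static", value)  -> send a fixed value
    Calculated fields (like the average in the demo) are omitted,
    since SafetyChain computes those server-side.
    """
    record = {}
    for field, (kind, src) in field_map.items():
        if kind == "tag":
            record[field] = read_tag(src)
        elif kind == "static":
            record[field] = src
        else:
            raise ValueError("unknown source kind: %s" % kind)
    return record

# Fake tag reads standing in for Ignition's tag provider
tags = {"[default]Kettle/Temp1": 182.4, "[default]Kettle/Temp2": 179.9}
mapping = {
    "Kettle Temp 1": ("tag", "[default]Kettle/Temp1"),
    "Kettle Temp 2": ("tag", "[default]Kettle/Temp2"),
    "Line": ("static", "Line 3"),
}
print(build_form_record(mapping, tags.get))
```

The point of the sketch is the shape of the mapping, not the mechanics: each form field gets its value from whichever source kind was configured.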
09:34
Geoff Nelson: So, here, it's not talking SafetyChain language, we're just sending, okay, this tag goes to SafetyChain and it is your in count or it is your out count so that SafetyChain can then track your downtime, it can track your throughput, your productivity. All we need to know is the tag, just send us the tag and we'll do the rest. So, in this context, all the business logic is in SafetyChain. In the Form Collector, the business logic is basically in Ignition but it allows you to really do the integration in whichever way is needed.
10:02
Jonathan: Yeah. And then we can visualize that. Once we have that data and we've set up all the configuration on the back end on the SafetyChain side, you can then see that visualization and reporting of your OEE and we're going to demonstrate that a little later as well.
10:15
Geoff Nelson: So, that's what we'll jump to now. So, that was the Ignition Module. So now, what does that look like in SafetyChain? So, this is what we call our reporting or our grid screen. It's just one click from the homepage to get here. And now, we can quickly start to see and visualize the data. So, all the data that comes in, whether it's from Ignition, from a user on their tablet or on their phone, on the PC app or even a web-based browser interface, they can put their data within SafetyChain and it all lives together here. So, if you wanna perform analytics and deep dive into the data, you can have your automated data right next to your manually collected data and start to make decisions. Once you come in and start to look at your data... so it looks like I'm really looking at our kettle temps here. You can also start to then perform actions. So, you can create tasks to assign to users for follow-up, you can trigger notifications if something is out of compliance, and you can also do verifications. So, you can create your own sign-offs and verifications within the system.
11:23
Geoff Nelson: So, if you're doing data verification, pre-shipment review, that all is done within SafetyChain. So, you can have your processes built out and start to see the digital plant management part of it. So, I'll just show here the verify and I don't know, Jonathan, this is your site more than mine. Is this data verifiable?
11:44
Jonathan: Yeah, for sure.
11:45
Geoff Nelson: Well, you see here, so we had Sign-off, Record Review, FS Coordinator and Pre-shipment Review. So, those are verifications that have been built into the system, it's just all configurable and this data then can be verified and it's all tracked historically, so who did what when is not alterable. So, a person would come in here, so I would be here with my user, I would go verify these. So, I'd select the ones that I wanna verify, I'd sign them and I would put my note and then all that's tracked historically and so all these... So, those records have been verified for that verification that I picked. Do you want to jump in here a little more...
12:26
Jonathan: Sure, yeah. While I'm doing that, as I said, I worked in the food and beverage space prior to SafetyChain. How many people here have been a part of an audit before? Raise your hand if you've been a part of an audit. Was that fun, as far as getting prepared and...
12:49
Geoff Nelson: No.
12:49
Jonathan: No? Yes?
12:51
Geoff Nelson: Audits and fun.
12:52
Jonathan: I like to do audits on the weekends, maybe. Yeah. So, one of SafetyChain's big claims to fame would be having our customers become 24/7 audit-ready. And I'm gonna show you our Programs feature as soon as I remember where the link is. Here we are. And yeah, I've been a part of a few audits myself prior to going through digital transformation. So, we're talking filing cabinets, we're talking binders, we're talking going through old emails and work orders. It can be a headache, and it's usually across a few days. So, when I came to SafetyChain, I learned about our Programs feature. It really resonated with me coming from industry, and I'm like, "I wish I had this back when I was on the other side of the desk." So, what SafetyChain can do, as Geoff said, is a lot of customization of our forms, so that you can capture those and have a record and documentation of that for future purposes. You can link those back to your food safety or internal program and be audit-ready at any notice. So, if you have multiple clauses, we'll look at our HACCP one right here.
14:18
Jonathan: I like to think of these as kinda like those binders but in digital form. You can see all of your forms that have been linked to that specific program so that if the auditor was to come in and say, "Hey, I need to see records from this date to this date for this specific clause," it's right there at a few clicks of your fingertips. So, you can see all the records, you can see all of your documentation, so that means any SOPs or work orders and instructions that you have already listed specific to those clauses and that program, you can put that in there and it's customizable. So, we're not just doing food safety, we're not just doing SQF or HACCP in this case, but we can do an internal program that's specific to your specific company guidelines. So, if you have an EHS program or safety program, if you have maybe a GMP audit that you do internally, you can set that all up and have that traceability in there as well.
15:14
Geoff Nelson: You could do, like he said, internal binders or maybe customer or auditor ones. So, if you build one specific to what an auditor might look for, you can do it that way. And instead of having a piece of paper or a single physical drawer or location, these digital binders or programs, you can have multiple assignments. So, that form that we looked at like the kettle temps or any of those ones that we build out can belong to multiple programs and it's just at that form level so then any of the data that comes in will go to all these. So, if you filter for... If it's in Master Sanitation and it's in Food Standard 9, the data will all exist there without further mapping. So, there's nothing you have to do after the initial configuration.
15:54
Jonathan: Right. So, as far as all of the prep work, you do most of the lift up front when you're doing your configuration, setting up your forms, storing your documents and then from there, as you're collecting your data, as you're going through your typical everyday work processes, it's automatically going to its right place as related to your program. So, these are just a couple examples in our demo environment. As you can see, you got your docs, your forms and then your records for a given time. You can go out longer. So, this is the date and time filter at the top. So, if you wanted to go back three months, you could do that and you'd have even more records there. Yeah. So, here we go with the same HACCP one that we were looking at. It's got 492 records and so if an auditor was to come here, walk in today and say, "I need to see your HACCP records," I can pull that up pretty easily. Should I go into OEE now?
17:00
Geoff Nelson: Yeah, let's do it.
17:00
Jonathan: All right. So, another one that's near and dear to my heart would be around line efficiency and OEE tracking. I spent some time as a production manager, so knowing how the lines are running on a day-to-day basis, knowing where your opportunity is to reduce downtime, is a very big deal, especially in the manufacturing space. And that's where our OEE module comes into play. So, with the Ignition Module, as Geoff said, we have the OEE Collector and we can map those tags back to our OEE solution so that we can generate this right here. This is one of our main screens here, this is live monitoring. So, in real time, we're capturing those tags and the counts from the machines on the line, and we can tell whether the line is up or down, we can tell how fast it's running and we can tell if we've made our plan for today, or where we're at in regards to what we were scheduled to produce. So, very impactful.
18:02
Jonathan: We have different screens, we could look at a more abbreviated version as well where you don't see the graph and you just can see at a high level. It's green, that means it's running. If it's red, it's down. You can see what your current rate is, obviously, what your total downtime is right there and what your current OEE is for that specific run.
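The rate, downtime, and OEE numbers on these screens follow the standard OEE decomposition. As a rough illustration (the textbook formula of availability times performance times quality, not necessarily SafetyChain's exact internal calculation), driven by the in counts and out counts the tag collector sends:

```python
def oee(planned_time_min, downtime_min, ideal_rate_upm, in_count, out_count):
    """Textbook OEE = availability * performance * quality.

    - availability: run time / planned production time
    - performance: actual throughput vs. ideal rate over run time
    - quality: good units out / total units in
    """
    run_time = planned_time_min - downtime_min
    availability = run_time / planned_time_min
    performance = in_count / (ideal_rate_upm * run_time)
    quality = out_count / in_count
    return availability * performance * quality

# 480 min shift, 60 min down, ideal 100 units/min, 37,800 in, 36,000 good out
print(round(oee(480, 60, 100, 37800, 36000), 3))  # 0.75
```

With just an in count and an out count per line plus a downtime signal, all three factors can be derived, which is why Geoff says a couple of tags per line drive most of the screens.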
18:21
Geoff Nelson: And here, we're looking at a single line but this is made for a multi-line, even multi-site so you can look across locations, built for scalability to look. Yep, here we go. So, he's showing two lines now. And so in the slim view, you might look at multiple... There's different views even to make them even smaller. But you might put it up on a screen down at the plant floor, pull it up in your office, put it up on your phone at a different site to get you visibility into how you're performing. Are you currently down? Why were you down? Why are you not meeting your goals?
18:54
Jonathan: Right. Yeah, this is a lot more lines here now. So, yeah, this view is kinda, I'd say, the supervisor, manager/operator view, so you can see specifically what's going on on your production floor from an OEE standpoint. And then from a reporting standpoint, we have some out-of-the-box reporting as well, where you can focus in on how you're doing as a plant, but also, if you were looking at an enterprise view, how you're doing across all of your plants. As Geoff said, we're very scalable. So, if you had multiple plants, you could see how a specific SKU was running across your different plants if they shared SKUs. This is the enterprise view right here. I need to go further out in time to see data.
19:56
Jonathan: Here we are. So, you can see how each location is behaving, you can see how your shifts are trending, if you have one shift that's doing a little better than others, graveyard shift might be taking breaks when they're not supposed to, get your OEE by line and then also by SKU down here. And then another good one is our top five reasons, like a Pareto: figuring out where your biggest opportunities are for downtime at the source and reason level. And then, we also have some customizable reporting as well in our report builder. So, even if you don't see exactly what you want here, odds are you can use the raw data to build that in SafetyChain as well. So, we have customers that wanna see something specific, or specific tables, and they're able to build that with the raw data that's being collected via the tags. All right, we are about 20 minutes in. Do you think we should take questions now, or is there something else you wanted to show?
21:08
Geoff Nelson: Let's show... Can you show SPC real quick?
21:11
Jonathan: Oh, yeah... I'll let you do it.
21:13
Geoff Nelson: You know the data though, right?
21:15
Jonathan: You talking about the Ignition?
21:17
Geoff Nelson: Yeah.
21:18
Jonathan: Okay. Let me...
21:20
Geoff Nelson: So, like he's saying, this is all out-of-the-box functionality and we do have report builders, so you can make customized reports and dashboards. Almost everything in our system can also be done through APIs, so you can pull your data out into other systems. The Ignition Module allows those screens that we were just showing to be set up within minutes, with a couple of tags per line to get your in count and your out count; even just your in count will drive a lot of those screens. What he's pulling up now is an SPC dashboard, just to show a little bit more analytics on the data that we pull in. Was there not one on the main screen?
22:03
Jonathan: I'm not seeing the ones that I built, but I'll probably use one of these other ones.
22:07
Geoff Nelson: This is our demo site like Jonathan said. So, we use... A lot of people use this site for a lot of different reasons and it may look a little different from time to time depending on maybe who we're demoing to. So, he's searching a little bit. I typically don't go in here, Jonathan does a little bit more than I do.
22:23
Jonathan: With my login, though.
22:25
Geoff Nelson: Oh, right, yeah. This is my login.
22:28
Jonathan: Let me change the secure profile.
22:30
Geoff Nelson: I don't know why my browser's doing that, that's weird. So, as you can tell, this is a part we didn't rehearse, guys. But you saw that chart before. We have a lot of out-of-the-box charts for your data to start populating. Is there one here? Hold on a second, this mouse works. Does this one work?
23:01
Jonathan: I haven't seen that one...
23:05
Geoff Nelson: Well, this is what I get for doing this. But you can build SPC charts that will show up on a dashboard. You can also have ones that will show up on the tablets. So, when a user's entering their data, as soon as they hit submit on the form, it'll pop up the SPC chart to show you how data has been doing across the line or the shift or the day and it could be paired with the Ignition data too. So, if you wanted them to be paired, they could be paired. And then what we're trying to show here too is kinda the plant management piece. We showed you verifications on your data, we show all the data that exists, the forms you can create. Those are all customizable. We also have ones that are kind of what you see over and over again like our OEE and production ones. Those are pretty basic. I mean, they're kind of a template or standard that we do over and over again. We showed you programs so you can collect your data and be audit-ready. We showed you the OEE and productivity screens and... Oh, here we go.
24:09
Jonathan: This is what I was looking for. Okay.
24:10
Geoff Nelson: This is what he was looking for and some SPC charts so the data can come in, you can be looking at your control limits. So, we have compliance, whether a value is in or out of compliance, so pass or fail and then we also have SPC control limits which are different. Are you in control for your process? We have alarming and alerting for different rule violations. So, you can see here the different rules you might violate. This data looks pretty good, so it's not really violating anything. A little bit crazy stuff. But a lot of out-of-the-box SPC charts, some reports and then you can build your own, customize like Jonathan said. So, with that, I think we will turn over for any questions. Yeah, back there.
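The distinction Geoff draws between compliance (pass/fail against a spec) and SPC control limits (is the process in control?) can be illustrated with a classic individuals (I-MR) chart, where sigma is estimated from the average moving range divided by the constant d2 = 1.128. This is a generic textbook sketch, not SafetyChain's implementation, and the kettle-temp data is made up:

```python
def individuals_limits(samples):
    """3-sigma limits for an individuals (I-MR) chart.

    Sigma is estimated from the average moving range / d2 (d2 = 1.128),
    the standard Shewhart approach for individual measurements.
    """
    mean = sum(samples) / len(samples)
    moving_ranges = [abs(b - a) for a, b in zip(samples, samples[1:])]
    sigma = (sum(moving_ranges) / len(moving_ranges)) / 1.128
    return mean - 3 * sigma, mean, mean + 3 * sigma

def rule1_violations(samples):
    """Western Electric rule 1: any point beyond the 3-sigma limits."""
    lcl, _, ucl = individuals_limits(samples)
    return [x for x in samples if x < lcl or x > ucl]

kettle_temps = [180.1, 180.4, 179.8, 180.2, 180.0, 179.9, 185.0]
print(rule1_violations(kettle_temps))  # [185.0]
```

A point can be within compliance limits yet still out of statistical control (or vice versa), which is why the dashboards track both, along with alerts for the other rule violations Geoff mentions.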
24:51
Audience Member 1: First question, how does the licensing work on the module?
25:00
Geoff Nelson: Good question. Adam, do you wanna answer that? No. The module itself in the showcase is free. So, the module itself is free. And then in SafetyChain, we build it into the licensing. We license by location, not by user. So, a specific location can have as many users as it needs. And then whatever you're purchasing is kinda how we do the cost. So, integrations typically have a cost. I think right now we're doing a deal on the IoT piece. Is that right, Adam?
25:27
Adam: Yep. Correct.
25:28
Geoff Nelson: So, if you... Till the end of the year, for now, I think it's a year free on the IoT piece. So, you would just be paying for SafetyChain on a per-location basis.
25:38
Geoff Nelson: Right now, our Ignition Module is sending data from Ignition to SafetyChain. We are frequently enhancing it, but we do not do bidirectional in the module. However, we do have APIs that can do just about anything. So, if you wanted to customize some work, you could pull any data down, record data, you could do tasks, really just anything. There's no native embedding of a specific chart, but this is all web-based. So, if you could embed a web page or, like, a browser frame, you could do that. But otherwise, the charts won't embed themselves. No. Was there a question over here? Yeah.
26:15
Geoff Nelson: So, your question was, basically, can another system trigger events within SafetyChain? Is that right? The answer is yes, it can. And there are two ways, really, to do that. One, like I said, we have APIs that can do just about anything, and you could call either our Task or Record API to create actions in SafetyChain. So, you could assign a task to a person to go do some work based on whatever the other system was doing. The second one is, through Ignition, you could create a record in SafetyChain, and that record could be tied to what we call a dynamic flow that says, "Hey, go do another thing." So, based on whatever you send us, we configure in SafetyChain somebody else to do whatever the work is. So, it depends on what your use case is. So, yeah, you could definitely do that.
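As a rough picture of what a task-creating API call might involve, here is a payload-assembly sketch. The field names and schema below are placeholders invented for illustration; SafetyChain's real Task and Record APIs define their own contracts, so consult their documentation before writing anything like this.

```python
import json

def build_task_payload(title, assignee, due_iso, source_record=None):
    """Assemble a task-creation payload of the kind described in the talk.

    All key names here are hypothetical stand-ins, not SafetyChain's
    actual schema.
    """
    payload = {
        "title": title,
        "assignedTo": assignee,
        "dueDate": due_iso,
    }
    if source_record:
        # Illustrative link back to the record that triggered the task
        payload["linkedRecordId"] = source_record
    return json.dumps(payload)

body = build_task_payload("Recheck kettle 2 temperature", "j.smith",
                          "2024-10-01T14:00:00Z", source_record="rec-123")
print(body)
# The actual call would be an authenticated HTTP POST (e.g. via
# urllib.request or the requests library) against whatever endpoint
# SafetyChain publishes.
```

The takeaway from the answer above is the pattern, not the schema: an external system posts a small JSON body, and SafetyChain's configuration (or a dynamic flow) decides what happens next.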
26:57
Geoff Nelson: Are the forms dynamic? Absolutely. Yeah, they're dynamic, customizable. So, you create your own from scratch. You can have fields that are hidden and dependent on values of previous fields. So, it's very customizable and dynamic. Yeah. Are the forms developed by a developer? No. So, it is a user interface where a power user would go in and create their form. Really the requirement there would be on change management, so deciding who has the power to go create forms. Because once you release it, that's it, everybody gets it. So, if you cause a problem or something that users didn't expect, then you'll hear about it. But yeah, it's a user interface for you to drag and drop fields and configure them. Yeah.
27:36
Geoff Nelson: Is there an enterprise piece? Yeah. So, that goes with how you wanna do your roles and permissions. So, you can choose a certain level of user that has access to edit and create forms and then nobody else has access. So, at an enterprise level, you could say, yeah, this is a group that we've created, maybe you pick a person per site or a region or however you wanna manage it and then... Yes, then they would be able to edit. And then you could even have a different set that can release them which means now that people can go use them.
28:02
Geoff Nelson: Good question. So, in the downtime, reasons are configurable. So, you would configure them in SafetyChain, so which options you have available to you and then users can come into that screen that we showed and just select the reason for it. So, you can do a category, you can do a source then you can do a reason.
28:17
Jonathan: Right. And there's a free text portion as well if you need to add more information and detail.
28:22
Audience Member 2: You said you were getting that... Does your tag fill that out?
28:26
Geoff Nelson: You can. With our automated downtime tracking, you can do that. Yeah. And I think we're just about out of time here. Was there one more back there? Yeah.
28:35
Audience Member 3: Does it have access to user enrollment info from, like, Azure Active Directory or Ignition?
28:44
Geoff Nelson: We do have SSO available. We support SAML 2.0 and OpenID Connect. So, yeah, Azure, Okta, I mean, just about any of them. Yeah. Last one here. Last one.
28:55
Audience Member 4: So, you said you have these apps, mobile app as well. So, if we are performing audits and on the shop floor, there's no internet connectivity, do we have like saving in offline...
29:04
Geoff Nelson: We do have offline mode and then when connection's restored, it will push back up. Yeah. So, on those native apps, it's offline mode. So, when you go down, it'll store it. Okay. I think that's all the time we have here. Thanks, everybody.
29:14
Jonathan: Thank you, everyone.


Eurotech will showcase the benefits of running Ignition on an ISA/IEC 62443-4-2 certified device. This demonstration will highlight how Eurotech's advanced device management capabilities can simplify the process for OT systems integrators to securely manage applications remotely. Attendees will gain insights into how the integration of Eurotech's ReliaCOR 40-13 Industrial PC with Ignition software provides a robust and cybersecure foundation for industrial applications. This collaboration not only meets stringent cybersecurity standards but also enhances efficiency and scalability.
Transcript:
00:06
David Bader: My name is David Bader. I lead business development for a company called Eurotech. Has anybody heard of Eurotech before? A few? Yep. So I've been with Eurotech for about a year; I have the least longevity with Eurotech of anyone here. There are a lot of Eurotech employees, and people from other companies, here as well. Dave.
00:21
David Woodard: So I'm David Woodard. I'm a Solutions Architect with Eurotech. I've been here a bit longer. I've been with the company for 11 years and have been in IoT and industrial automation for probably closer to 15. So pleasure to be here.
00:36
David Bader: Yeah, I beat him out in longevity in the business, for sure. I've been doing automation for 40-plus years now. So I've been involved in systems integration and distribution. I worked for AWS for a short time and led robotics for AWS for a while. The idea of coming to Eurotech was to be able to bring that kind of security level that AWS and the other cloud providers offer in the cloud down to the edge. So if there's one thing that you take from this, it's that Eurotech is a company that provides enablement at the edge, right? We provide a secure way to orchestrate and maintain your systems at the edge. And we're gonna talk a lot about cybersecurity today. There are two overall themes that we kind of talk about from Eurotech. One is cybersecurity, and the ability for systems integrators and OT providers to uplift your cybersecurity posture in an area where we normally would turn that over to IT, right? So the concept is, if we can provide the IT level of cybersecurity down to the OT space, that's kind of gonna be the theme that we're gonna talk a little bit about.
01:55
David Bader: And then the other one is maintaining secure remote access and remote device connectivity, right? So being able to do the things that you normally would do by plugging into the device, remotely, but in a secure manner, right? So being able to do VPNs that have the kind of functionality that you would expect from an IT perspective. So I'm pretty informal. If people have something that's genuinely on your mind, say it, but we are gonna have some questions and answers at the end. So I'm gonna talk a little bit about what Eurotech does and why cybersecurity is super important. And I'm gonna hand it over to Dave at the end, and he's going to talk about this brand new cybersecurity wizard that we're introducing here for the very first time. So you guys are the very first ones hearing about this. Dave's going to do a demo on that.
02:49
David Bader: So Eurotech's been around for 30-plus years. We're headquartered in the small town of Amaro, Italy, which is in the northeast part of Italy, all the way up near...
03:00
David Woodard: Austria.
03:00
David Bader: Austria. Thank you.
03:01
David Woodard: Never think of it.
03:05
David Bader: We have operations in the U.S., we have a bunch of people in the U.S., we have people in Canada and all. So, some of the things that we've kind of been known for over time: way back in the beginning of Eurotech's history, some people that worked for Eurotech at the time helped develop a small protocol called MQTT. Anybody heard of MQTT, right? Yeah. There's a few people that certainly know MQTT, right. So over time we've kind of evolved from a board manufacturer into an industrial automation solution provider for hardware and software. So we're very excited to now be part of the Alliance Program, and out at table 11 right across the hall here is the very first Ignition Edge piece of hardware that is certified to the ISA/IEC 62443 cybersecurity standards. And I'm gonna talk a little bit about what that means as we get through some of this stuff. Our portfolio is pretty big, right? We build hardware from gateways for different applications, for transportation, for industrial automation, for medical, all the way up to pretty beefy GPU-based processors that run AI and those kinds of things.
04:25
David Bader: So for almost any kind of application, including running Ignition on any of these devices, we can meet your needs. But again, the differentiator is that cybersecurity and that remote device access in a secure manner, right? So I would guess, right? I've asked a few questions already. How many people talk about cybersecurity with their customers on a regular basis? So a good portion, which I would guess, right? Because you're in this room and you wanna learn a little bit more about it. What I find really interesting is there's also an equal number of people that don't talk to customers about cybersecurity, right? They say, "Hey, that's an IT function." And I think we've passed that threshold in the space where we have to talk about cybersecurity from an OT perspective, because a large percentage of cybersecurity attacks are stemming from the OT space. So does that resonate, from an OT space? 20% of all of the attacks are happening from OT, from the factory floor, which, when I put this deck together, I was knocked out by that number. I would have thought it was 2% or 3%. It's more than 20% now. It's pretty amazing.
05:41
David Bader: So when you think about that, what does that kind of mean in terms of dollars? It's significant, right? So the average financial impact from data breaches way back in 2018 was seven and a half million dollars. It's significantly more now. But why, right? The concept is we have more connectivity on the factory floor now. It's not that the person on the floor is missing anything; he just hasn't necessarily been trained in cybersecurity. It goes back to that conversation earlier where I said most people aren't even talking about it. We've got PLC systems out on the floor. Anybody got a GE 90-30 still running back in their plants? Right. So these things were not designed with cybersecurity in mind. Right. And now we're asking to put more SCADA, more capability, more MES on the plant floor, and that's opening up all kinds of vulnerabilities if we don't think about it. Yeah, I messed that one up. So this is just a brief slide that you can't read, right?
06:46
David Bader: That's too small. But it's a timeline of things that have happened in the world in the last 10 years or so, maybe 20 years. Stuxnet, that resonates. Everybody's heard of Stuxnet, right? That was a cyber attack that came through an HMI on a machining center. All the way up through Target got hit, Jeep got hit, BMW got hit, the Ukraine power grid, right? That's a big deal. These things are significant impacts to the world. So I picked two unique incidents that happened in the OT space. Brunswick Corporation, a billion-dollar company that makes boats, got attacked in June of '23. So that's pretty relevant, right? Really close, only a year ago. It disrupted their entire facility and cost them 85 million dollars. It's a big company, but it hit their quarter-two financials; they saw a big impact in their quarter-two financials. So if I'm the CEO, I'm pretty upset about the fact that somebody was able to breach my system through the factory floor. And then on another level, there's a company called Applied Materials, a big, multi-billion-dollar worldwide company. February '23, still relevant.
08:09
David Bader: They got hit through one of their suppliers. One of their major suppliers was attacked, and that supplier had access to Applied Materials' systems, so the attack came through the vendor. That one cost them 250 million dollars. So we're not talking about peanuts when we're talking about OT attacks, right? This is significant dollars and significant impact to the business. And if you're a systems integrator and you're not talking about cybersecurity, in my opinion, this is a line of business, right? This is a way for integrators to spin up a new piece of business. Talk to your customers about cybersecurity and how you can elevate it. So I'm gonna go fast. Everything that we do, everything that we're talking about today, is about certified cybersecurity. There's a lot of ways that people address cybersecurity in the OT space.
09:08
David Bader: We think that being certified, building to standards, designing from secure by design, is a significant piece, right? What we do is we have the ability, from a remote access perspective, to use VPNs. Everybody probably has a way to use a VPN, but we have an on-demand VPN capability that allows for automatic teardown. So if you're an IT guy, automatic teardown makes a big difference: being able to connect, remotely access, do what you need to do, and then have the VPN shut down automatically, so you don't inadvertently leave it open and leave that attack vector available. And please, Dave, jump in if there's anything that... So nobody wants to be this guy, right? Like, it's not a good thing. So secure by design, right? The whole idea is, if we do this right, everybody benefits: suppliers of the hardware, the customer. And then we maintain that kind of security from the start. So again, I'm talking about secure by design. You have to build it in from the beginning. It's not something you walk in and say, yeah, I'm gonna create this really high level of cybersecurity in your plant, without looking at the overall architecture.
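The on-demand teardown idea described above can be sketched in a few lines: a session that schedules its own shutdown, so an operator can never leave the tunnel open by accident. This is an illustrative sketch only; the class and parameter names are hypothetical and not Eurotech's actual API.

```python
import threading
import time

class OnDemandTunnel:
    """Sketch of an on-demand VPN session that tears itself down.

    Hypothetical API for illustration: a real implementation would
    establish and destroy an actual tunnel, not just flip a flag.
    """

    def __init__(self, max_session_seconds):
        self.max_session_seconds = max_session_seconds
        self.open = False
        self._timer = None

    def connect(self):
        self.open = True
        # Schedule automatic teardown so the tunnel is never left open.
        self._timer = threading.Timer(self.max_session_seconds, self.teardown)
        self._timer.daemon = True
        self._timer.start()

    def teardown(self):
        if self._timer:
            self._timer.cancel()
        self.open = False

tunnel = OnDemandTunnel(max_session_seconds=0.2)
tunnel.connect()
assert tunnel.open
time.sleep(0.5)          # operator walks away without disconnecting...
assert not tunnel.open   # ...but the session closed itself
```

The point of the pattern is that closing the attack vector does not depend on anyone remembering to disconnect.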
10:31
David Bader: So you do a risk assessment, right? We don't do that; that's not what we do. I mentioned before, we provide enablement into the mitigation process. But you reach out to companies that provide cybersecurity risk assessments, and they tell you where the vulnerabilities are. If you guys have that capability, that's fantastic. I think that's a good way to do it. And then you buy these features and capabilities into the solutions that you have. So we provide hardware for running Ignition, right? But wouldn't it be great if you could buy the hardware that runs Ignition that also has this high level of cybersecurity and gives you this secure remote access capability? That's the method we're talking about. So we're doing it at 62443-4-2. You're not gonna remember that specification. Excuse me, -4-2 at security level two, right? Inductive Automation is already certified to 4-1. We're certified with our hardware to 4-2. And then the customer can quickly and easily certify their entire system to 4-3. That's really the enablement that we're offering: being able to have that customer get to a certified cyber solution in the field very quickly.
11:49
David Bader: In the past, what you'd have to do is buy a piece of hardware that was hardened to a certain level, and it limited some of the functionality that you were able to load onto a server, a gateway, or other hardware. Now, with this wizard that we're going to talk about in a minute, you can make these decisions in the field and work with the IT department to say, we wanna certify, or we wanna harden, to this level. We wanna harden to 62443, right? Or, we don't need to harden to that level, but we're not just going to leave everything open. And we'll walk you through a video that shows how that works. So what does secure by design mean? You wanna follow zero trust principles, and they're very standard and very well defined: we trust no one and nothing, right? If you start to pass along keys and email certificates and all of that stuff, all of a sudden that becomes a real problem. That's not secure. If you have to email a certificate to someone standing in front of a machine, inherently that's gonna be a problem, because who knows who could get into his email, right?
13:03
David Bader: And then continuous auditing and monitoring. So when we talk about zero trust, we talk about it from an entire-ecosystem perspective. We manage the security certificates from the TPM level, from the chip that's on our device, all the way through to the IoT devices that are connected. We manage all of that, we maintain them, we keep them current, and you don't have to worry about it as an integrator or as a customer in any way. I think that's a really important piece. If there's anything you take from this presentation, it's the fact that Eurotech does this for you from an IoT perspective, and can also do it all the way to the cloud if your application calls for connecting to the cloud. That's super important, right? How am I doing? Okay. So I talked a little bit about 62443. I mentioned that maybe some have heard about it. There's a lot here, and you see that I'm talking about ISA and IEC 62443. Excuse me.
14:20
David Bader: One of the key things is, if you build to a standard, it's no longer subjective, right? Eurotech, many years ago, decided that this standard was going to be kind of the worldwide bar that people should meet. Turns out we were right; we made a good bet. It took us about two and a half years to become IEC 62443-4-2 SL2 compliant, and now we are, quite frankly, the first and only company that builds IoT hardware and enablement to that level. There's a lot of people that build to those standards but have not yet gotten certified. We don't think we'll be the only ones, but we were the first, which is good. We use independent testing to validate that we're built to those standards. Again, what that does is it allows you to talk to your customers about building a secure system. Maybe your customer doesn't want to certify, maybe they don't wanna get to a 4-3, but you can say to them, "Hey, these are all of the components that you would need if you wanted to certify to a standard." Now, is anybody here from Europe?
15:39
David Bader: Yeah, there's a few, right? There, it's not a guessing game anymore. It's required: you have to deliver 62443-standard products just to meet the law, just to meet the requirements. We're an Italian company. We build in Italy and in Germany, all over the world, quite frankly, but we know that this standard is going to permeate not just Europe but beyond, right? So what does that mean to us in the U.S. or in Canada? It's not mandated; it's not something they're saying we have to do. But quite frankly, it's an ROI conversation, right? When we talk to customers about this, we can put dollars and cents to it. I just showed you 250 million dollars, right? It's pretty hard not to show the ROI on an investment in a piece of software that has a little bit more cost to it to get to that standard, but helps prevent that 250-million-dollar hit.
16:37
David Bader: So even if the U.S. isn't mandating it, although we do mandate cybersecurity now in a lot of ways, it's suggested in a lot of ways. I think this ROI discussion, this line-of-business discussion, for the integrators in the room is super important. We can now talk to customers about a higher level of cybersecurity at their OT level, on their OT floor. Make sense? How am I doing?
17:00
David Woodard: Fantastic.
17:05
David Bader: Okay, good. I like constant feedback. How am I doing? You guys feel pretty good so far? Okay. Nobody's left the room yet, which is very unusual, actually. So I mentioned how long it's taken, and we like to show this slide in, like, every presentation we do, because it's an actual physical document that you get when you achieve certification. It's not, "Oh, I built to this standard but didn't get certified." No, we've actually gone through the certification. It is a physical document that we can send to you, and you can say to the IT team, look, we're buying product that is built to these standards. So how does this resonate worldwide? There's a bunch of people from Europe here; obviously, we talked about that. And then in vertical industries, the 62443 standard kind of travels to different areas, right?
18:00
David Bader: So if you're in industrial automation it's 62443, if you're in rail, it's Shift2Rail, energy is 62351, and so on and so on. Right. So there's, TSA is involved. So there's a lot of different almost, every standard is actually adopting 62443 as the core to the standard and then put it, putting it into individual, their individual requirements for their particular vertical industry. So I would say that in this slide, there, we'd be hard pressed not to touch every person in this room at some point in one of these verticals, right? Everybody's touching something in these verticals, right? And if we can meet the 62443 requirement, then these are all reciprocal standards that view 62443 as a, as kind of a guide, right? So if you've got a customer that's in a process automation and they're saying we need to meet TR 63069, then we can go in and have a conversation about 62443 and how that is actually 63069 at the core.
19:10
David Bader: For medical, up there, we have medical 60601, 100% copied and pasted from the 62443 standard. So if you're in the medical space and customers are saying, "Ah, you gotta build something to the 60601 standard," we can do it. We can help you. Make sense? Okay. So I mentioned that we're certified to SL2. What does that mean? I thought it was important to make sure people understood what that means. The idea is that SL1 protects the components from casual access, casual mistakes, and things like that. One of the things the standard does is also include tamper resistance, right? So if somebody goes in and messes with the server, there's a switch inside the server or inside the gateway. That's a bit that ties back to our software, and you can annunciate it in Ignition, or send an email from our software, or any of those kinds of things.
20:08
David Bader: So if a maintenance guy comes in and says, "Hey, I gotta upgrade the firmware or something on this," you can immediately get a response. You can literally shut the computer down if it's an onsite breach. So there are lots of ways you can use that tamper-resistance piece. And then SL2 is designed to mitigate and generally prevent commonly recognized attack vectors. Eurotech, again, felt it was important enough for us to get certified to the SL2 standard. Not too many people have done that for the standards we meet; not all of them, but many of them. So today, I thought it was important that we talk about how you get there. How do you take a computer and put it on a shop floor? What do you have to do to get to, and maintain, that 62443?
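The tamper-switch behavior just described comes down to a small policy decision: when the chassis-intrusion bit goes high, annunciate an alarm, notify someone, and optionally shut the machine down. A minimal sketch with hypothetical names follows; a real system would read the bit from the gateway's hardware and alarm it through Ignition rather than a plain function call.

```python
# Illustrative sketch: reacting to a chassis-intrusion ("tamper") bit.
# Function and action names are invented for this example.

def handle_tamper_bit(tamper_bit, policy):
    """Return the actions to take when the tamper bit is read.

    policy: "annunciate" -> raise an alarm and send an email
            "shutdown"   -> treat it as an onsite breach and power down
    """
    actions = []
    if tamper_bit:
        actions.append("alarm")          # annunciate in the SCADA client
        actions.append("email")          # notify maintenance/security
        if policy == "shutdown":
            actions.append("shutdown")   # hard response for a breach
    return actions

print(handle_tamper_bit(True, "shutdown"))     # ['alarm', 'email', 'shutdown']
print(handle_tamper_bit(True, "annunciate"))   # ['alarm', 'email']
print(handle_tamper_bit(False, "shutdown"))    # []
```

The same event can drive either a soft response (a technician opening the case) or a hard one (an actual breach), which is the flexibility the speaker describes.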
21:08
David Bader: And these are all the steps. I'm not gonna go through every one of them, but there are at least 10, maybe more, steps that you have to take to build and harden an industrial IoT device to this standard. So what we've done is we've said, "Okay, let's build something. What are these capabilities? What are the advantages of having this?" There's a lot of words here, but the bottom line is that it's maintaining and monitoring the integrity of the environment to a rigorous standard. And then, can I ensure that it stays that way? Yes, right? The idea is that when you certify, all of this gets continuously updated, and as you keep your hardware and software current, it gets updated. So again, I mentioned this wizard that Dave's gonna demo.
22:00
David Bader: But the idea is, how does this work during deployment, right? So you can load all the software, whether it's Ignition or other software that you want. Then you walk through this wizard and it guides you to the level of security that you wanna provide at this OT space. Unheard of. Literally takes all of those steps that we talked about before. Excuse me. Now we can do it in the field and then can we be... Can we maintain that security with Ignition? Make sense? Lot of words, but pretty important. So Dave's gonna go through the video, he's gonna talk to that, and then we're gonna do questions and answers. So we went pretty quickly. Hopefully this touched a little bit. It wasn't just commercial. It was about providing some relevance to the market and where secure by design and standards really matter.
23:00
David Woodard: Great. So now that Dave's finished, we can come back to the reality of doing all this, right? Because if you wanna do this for a new customer or an existing customer, implementing all that security is really difficult. I think it's one of the most challenging parts of what we do, because we have to understand the IT side and the OT side, and understanding that bridge is incredibly complicated and very hard to do well. So that's why we came up with this. What I see a lot when I do integrations or deployments or installations is that you don't do it. You say, "Okay, we need to get the POC working. We need to get this application working. We'll do security once they buy in." Right? So once they say, "Yes, we wanna do it," then we do security. And then you realize that security is breaking what you did. So here's what the wizard will do for you. I'm just gonna play this video.
23:49
David Woodard: And I'll just talk while it's playing. So basically, it's a web application that runs on these gateways. All of our gateways, from the edge devices up to our more server-class boxes, provide this walkthrough interface for setting up networking and enabling the secure elements that you need, and you can do it while you're doing the deployment. Or if you say, "Okay, we just wanna get it working, but then we wanna see what happens if we enable this SSH policy, or if we try to do this other thing," you can come back to this wizard, enable that feature, and see if it still works. So there's nobody on the command line, nobody hacking at your Linux file system, nobody doing that crazy stuff in a working factory or in a pilot.
24:38
David Woodard: You do it in this wizard. Let the... Let our software manage it. And so here you can see it's doing the... All these things are relevant for the certification Dave was mentioning. So if you want to come here and just say, "Hey, I just wanna be IEC certified," you can click one button. It enables all those features and you're done. This video I think is three or four minutes, but you can do it in less than a minute. And I think even more importantly than that is this is not even necessary, right? So once you do this, once you say, okay, this is the standard that we want, these are the settings that we need, this is just a configuration for us, right? So you can say, okay, now I need to order 10 of these boxes, or a hundred, or a thousand. They can come preinstalled with all these settings already on there.
25:19
David Woodard: You don't have to worry about it anymore. I think the other cool thing about this, so you'll see now they're actually going through some provisioning with AWS. We can do the same thing for Azure. We can do the same thing for custom cloud endpoints. It is an extensible interface, right? So if you say, "Oh, but you know, we need this custom thing for our cloud services or for our customer," it is an extensible platform that you can add on to. So think that's it. We do have it running on this box I have here in front of me. So if you wanna come by our booth, I can plug this in and show you live what it does. We didn't wanna do that here just for timing, but, I'm happy to show that to you if you wanna come by the booth. So I think leave time for questions.
26:01
David Bader: Yeah. We have plenty of time for questions actually. Yeah. I think we have a half an hour for questions.
26:06
David Woodard: No.
26:08
David Bader: I'm just kidding.
26:10
David Woodard: Okay.
26:10
David Bader: Yeah. I'm making the guys in the booth nervous. Go ahead. Yes, sir.
26:15
Audience Member 1: So are you saying you're providing a software stack only for the remote device, the edge device, or is there also some kind of cloud platform?
26:23
David Woodard: It's both. So there is the edge and there's a cloud platform.
26:25
David Bader: Just to repeat the question: are we providing a software stack just for the device, or is there a cloud platform? And Dave's answer is yes, it's both. There's a component that loads onto the device itself, and then there's a cloud-based product that gives you the remote access and the remote device connectivity. Yep. And we can shape that depending on the scale. The whole concept, going back to that first slide, is we wanna be able to do this at scale. If it's one piece, two pieces, that's great; we're all for helping with that. But if it's 100 or 200, we wanna make it super easy. See, I told you there's always somebody that leaves early. I'm just kidding. Any other questions? Come on, there's gotta be. Yes, sir.
27:13
Audience Member 2: For the network that's used to update the units themselves, can you use your own private cloud network for that?
27:25
David Woodard: Yes. Yes.
27:25
David Bader: Let me repeat the question: can we load our remote access capability onto a private cloud, or onto a server, or something like that? Yes. It's Dockerable, it's containerized, and you can load it almost anywhere. Yeah.
27:41
David Woodard: Yeah. We have use cases where the cloud's actually running just in the factory. None of the data leaves that factory; it's all isolated there. So we just run the cloud directly in that factory.
27:49
David Bader: Inherently, out of the box, it's running in the cloud, in AWS, but we can work with you to do it anywhere. And in fact, we have customers that build other components, that buy that capability, load it onto their own devices, and run it in their private cloud. So it doesn't have to be on our device. Yes, sir.
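Since the stack is containerized, a private-cloud or on-premises deployment reduces to running the same images on your own host. The compose file below is a hypothetical sketch of that idea: the remote-access image name, port, and environment variable are invented for illustration (the Ignition image shown is the one published on Docker Hub), so treat it as a shape, not a recipe.

```yaml
# Hypothetical docker-compose sketch: a containerized remote-access
# service running beside Ignition on a private, on-premises host.
services:
  remote-access:
    image: example/remote-access-service:latest   # placeholder image name
    restart: unless-stopped
    ports:
      - "8443:8443"             # management UI, TLS only
    environment:
      - SESSION_TIMEOUT=900     # auto-teardown idle sessions (seconds)
  ignition:
    image: inductiveautomation/ignition:latest    # published Ignition image
    restart: unless-stopped
    ports:
      - "8088:8088"             # Ignition gateway web port
```

The point is that "run it in their private cloud" is an ordinary container deployment, not a separate product.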
28:15
Audience Member 3: This is probably a dumb question, but does this only apply to your end devices?
28:22
David Bader: No.
28:24
David Bader: No. So that's what I just said.
28:25
Audience Member 3: Some of yours, but I might already have an installed base of a thousand of some other manufacturer's Linux-based, whatever, controllers. Can your software widget be configured so that I can use your software to just secure everything?
28:42
David Woodard: It's my favorite answer in the world, and I'll say it: it depends. If you want that level of security, it would require recertifying those devices. So at least taking one and saying, "Okay, we put Eurotech software on here, we've done all these same steps, but you have to have it re-certified."
28:56
David Bader: That's a 62443 requirement.
28:57
David Woodard: That's not... That's just a requirement. But what I'll say is, as long as your box is running Linux, preferably, and we have the ability to understand how your operating system works so that we can tie into it and make these changes, then yes, we can run on other people's hardware relatively easily.
29:14
David Bader: Yep. Not a dumb question at all, quite frankly.
29:15
David Woodard: No, it's a great question.
29:16
David Bader: It's a really good question.
29:17
Audience Member 3: I might not want to replace everything.
29:19
David Woodard: No...
29:19
David Bader: Exactly.
29:19
David Woodard: Yes. And if you don't need that level of security, we can also run in Docker, right? So if you just wanna deploy it and use it for remote management and for some security features, but not all, that's a really easy way to deploy it and at least try it, test it, and see if it works.
29:34
Audience Member 3: You sell your software...
29:36
David Woodard: So that would be a service we don't do, right? That's not what we get into. We actually use two external companies. I'm blanking; Nord is one, but I can't remember the other. But we do our own audits, so we have to send products to them. It's periodic, I think three or four times a year. We send them new devices, and they test them to say, "Yes, you're still certified." So you would have to do something similar. And that's how those companies make their money, right? So it's...
30:01
David Bader: Yeah, I thought it was in here somewhere, but it's not. So yeah, that's a normal thing that customers do on a regular basis.
30:08
David Woodard: But we do have contacts at these companies, so we can at least help you make those contacts and have that discussion.
30:15
David Bader: Yeah. There's one down here.
30:18
Audience Member 4: So that's the box that you... It's also running Ignition?
30:21
David Bader: No. No, go ahead. You go.
30:23
David Woodard: So this is more... This was the first device that we certified, but it's definitely more of what you'd call a gateway. I say "gateway" to an Ignition audience and you all think about the software, but this is a hardware gateway. It would be running very close to your PLCs and such. The box we have that's running Ignition is more server-class, and if you come to our booth, you'll see it. It's got an Intel processor and more resources. You can run Ignition on this one, but it's probably more for the Ignition Edge product. Yeah.
30:56
David Bader: Yeah. Okay. We have time for one more question. Alright. Looks like it.
31:02
Audience Member 3: Sorry to ask a second question. I'm in the utility grid sector, and you have IEC 62351 and TC57 up there. Have you ever heard of NERC CIP?
31:13
David Bader: NERC CIP? I have not. NERC, but not NERC CIP. No, but I'll tell you what: if you take a minute when we're done, we'll go back out there, I'll write that down, and we'll get you some answers on that, because it just may not be on this slide and I may not have run into it. Yep.
31:34
David Woodard: There are so many certifications and regulations. Especially in energy, it's...
31:37
David Bader: And that's why I put this slide into the deck, to tie it all together around that core. But I think we hit the time almost perfectly, so we're good. Again, the booth is right across from this door. If you guys have any other questions, or if you want people to send you a lot of emails, come see us. Yeah. Thank you.
31:55
David Woodard: Thank you.
31:56
David Bader: Great job. Great job.


Modern manufacturing generates vast amounts of data from diverse sources, creating challenges in data integration and utilization. Traditionally, data silos have hindered the scalability of analytics across manufacturing and supply chains. The Snowflake AI Data Cloud breaks down these barriers by seamlessly converging IT and OT data, accelerating smart manufacturing initiatives. Join us to explore how Snowflake empowers manufacturers to harness the full potential of their data, driving innovation and operational excellence in the era of AI and Industry 4.0.
Transcript:
00:05
Greg Sloyer: Well, thank you for coming, sort of during lunch, before one of the keynotes. I'd like to thank Inductive Automation for having us present again; this is our second year presenting at the conference. My name is Greg Sloyer. I'm from Snowflake. I am the Manufacturing Industry Principal, so I look at the business side of things from Snowflake. All the usual disclaimers: do not buy or sell stocks based on what I'm talking about, and don't plan your 401(k)s and retirement around it. I've been doing data and analytics for manufacturing, supply chain, operations, logistics, all that, for about 17 years now, not all of it at Snowflake. Prior to that, I had 20 years in the chemical industry, DuPont, BASF, where I ran global supply chains and logistics and all sorts of things like that. So, why is Snowflake at the Inductive Automation ICC conference? I'll set this up by asking: how many people are familiar with Snowflake today? Okay, about half. So, Snowflake started out as a data warehouse, data lake kind of thing in the cloud.
01:17
Greg Sloyer: It's been about 12 years now, since around 2014 for this part, and the big thing here is we operate across AWS, Azure, and GCP, all three major clouds. Our big thing, especially in the 2018 timeframe, when you see "disrupt collaboration" and this cool-looking thing in the middle, which is maybe a little hard to see, with a lot of starbursts and fireworks-looking things: that is data sharing in Snowflake. This is between customers and suppliers, between partners and OEMs, between logistics groups and manufacturers, and between what we call our marketplace providers, data providers in Snowflake offering things like weather data, commodity pricing, freight rates, logistics, things like that. There are about 2,600 data sets or so available in Snowflake. The really cool thing is we do all this without moving data. We're not moving data in Snowflake; it is pointers. We've gotten rid of the ETLs and FTPs and emails, and heaven forbid you put stuff in CSV files and ship them over to a friend of yours. This is all essentially permissions.
02:39
Greg Sloyer: You give permission for somebody to see a table or set of data, or they give you permission to see a table or a set of data or a set of tables. Once that permission is granted, that data shows up in your database like it's one of your tables. So now you are extending your, to incorporate that data in analytics and reporting; you extend your SQL with a join statement. That's what it comes down to. That was in 2018. We've been exploiting that, and more so we have been now building applications. So you're seeing major applications like Blue Yonder for supply chain and others replatforming to Snowflake. And this has really been the progression, and we continue to add on to this. A lot of AI, Gen AI, ML types of capabilities, I'm gonna talk about a couple of them today, being brought to the data. So what we didn't want you to do is spend a lot of time bringing all the data and talk about IT data and OT data today, bringing all that to Snowflake just to then pull it out and have to do something else with it somewhere else.
03:45
Greg Sloyer: The idea is let the data there; let's bring all those capabilities to the data so you can operate all within Snowflake. We launched what's called a manufacturing data cloud at Hannover Messe about a year and a half ago, April of last year. And we looked at what was needed in the industry, manufacturing in general, and what were a lot of these opportunities people are struggling with, things like that. Hopefully, a number of these resonate. So one was IT and OT convergence. Okay. This has been a big topic now for a number of years, and Snowflake had been great at bringing in typical ERP, especially like SAP data, into Snowflake. We've been doing that for a number of years. Lots of big customers who are doing that today with not just one SAP or ERP system but 10s, 20s, 30s. And all of this is published when I provide a name, but Carrier, for example, has 140 ERPs that they consolidate their data into Snowflake on. What we weren't as strong was on the OT side. So bringing in the shop floor data.
05:15
Greg Sloyer: This is where we really pivoted about 18 months, two years ago, working very closely with Inductive Automation, Cirrus Link, and a number of other partners to provide different architectural ways to bring the shop floor data into Snowflake and take advantage of the time series capabilities and a number of those other capabilities we'll talk about in terms of AI, ML, Gen AI, things like that, to the data, which is sort of that third point, which is bringing and really deploying advanced analytics to the data. The middle one is taking advantage of that data sharing. So this is broadening the visibility outside of IT, outside of OT, so the enterprise, and really extending that to that partner network, broadening the view of the supply chain, incorporating that visibility into the decision and analytics process. Really taking advantage of a lot of these different Snowflake capabilities. The difficulties, and I'm sure many of you have experienced this, is that for years, decades, the shop floor manufacturing sites have generally been an island. Different organizations, different functional reporting roles from that systems sort of standpoint.
06:18
Greg Sloyer: OT sometimes reported into the CIO, but generally not. It reported into VP manufacturing. This created a lot of separations from a systems standpoint, made it not technically difficult but more organizationally difficult to sort of integrate and bring that data in, integrate it with sort of the rest of the data. There's some architectural discussions that happen, things like that. So different opportunities. And for those of you who have multiple plants, what I always say is if you have 50 plants, you probably have 48 different MES, MRO, LIMS, or QM, and all those systems because a lot of those plants came up from acquisition. As I said, it's an island. Investments weren't made, or if it wasn't broken, we're not gonna fix it. That kind of thing. So a lot of that is changing, but also the architectural patterns that you have to utilize that data, especially to bring it to the cloud, need to account for the fact that all these plants are different. They may have different protocols, different configurations, all that kind of stuff. So the solutions have to be adaptable.
07:38
Greg Sloyer: And that's really where, in partnering with Inductive Automation, all that, we've helped simplify the environment of bringing that data into the cloud, into Snowflake. So let's first take a very broad view as we look at the supply chain. So as I mentioned, we have everywhere from marketplace providers with commodity data, pricing, availability, geopolitical kinds of things that impact supply, the logistics areas, really bringing that sort of view to the plant, as well as then when you look outside from the customer standpoint, especially if you're building connected products, so products out in the field that are generating their own data after they've left manufacturing. How do I incorporate and create that visibility up and through the supply chain, through the ecosystem, to be able to make decisions more holistically to not just help manufacturing but to help that whole enterprise environment? And then this is the thing we were really putting in 18 months or so, two years ago, which is that ability to get much more fine-grained, granular data at speed.
09:04
Greg Sloyer: I won't call it real-time; let's call it near real-time into the cloud. This is not replacing shop-floor systems. If you have a safety system, the cloud is not the best spot to put it; that's gonna be edge-driven kinds of stuff. But as we look at how do I take advantage of that data, how do I broaden access to it, how do I look across those 50 plants, how do I run much more advanced mathematics on that data to do root cause, cycle time, predictive quality, all those kinds of things? That's really why we're pulling that data into the cloud and combining it with a number of other components of the data. For example, let's say you are great at doing quality control, even looking at that from the shop floor, the isolated plant level. What we wanna do is show that vision extending to the supply chain, to say, okay, those aren't the only variables going into your manufacturing facility. Your supplier quality, that delivery variability, all of those things come into play when you start looking at quality or predictive maintenance, those aspects. And then how do returns, how does warranty quality, for those of you more on the discrete side, how do those impact, and how can I utilize that information from customer service, from field maintenance, things of that nature, to see what the potential root causes were that started in manufacturing and started in supply?
10:37
Greg Sloyer: So again, being able to broaden that view for those organizations that have started moving beyond doing really cool and fancy analytics on their shop-floor data: how do I paint that vision of the future? And this is where we really see the extending of that data and incorporating more of the IT types of data into those decision processes. So Snowflake has many, many more partners than this; these were the partners that were part of our manufacturing cloud launch. On the marketplace side, there are 2,600, 2,700 different data sets in the marketplace. As I mentioned, there's financial data, there's ESG data. If you type ESG into the Snowflake Marketplace search, you're gonna come up with 40 or 50 different data sets that are available, freight rates, things of that nature. From that perspective, again, to help provide that greater visibility. As I mentioned, Snowflake has really been doubling down in terms of applications and the capabilities there, building those on Snowflake.
12:00
Greg Sloyer: So Blue Yonder and a number of others are replatforming. In a lot of cases, there are ones that were built by companies specifically on top of Snowflake, taking advantage of that power of the cloud and cross-cloud. So as they build something, it's not just in AWS or in Azure; it goes across all three. And then the system integrators, the SIs there. And again, many, many more are partners, but these were the ones that stood up and said, I have built in Snowflake a supply chain or manufacturing or operations type of solution with a customer. And they're raising their hand and going, "Yep, they did a great job. We did it all in Snowflake." That's how they got on this list, and it continues to expand as we work through. The main areas I mentioned, supply chain optimization, smart manufacturing, and connected products, are the three areas where we start utilizing the manufacturing data, the supply chain data, and that sensor data, whether it's coming from the shop floor or from connected devices out in the field, to really provide that visibility and take advantage of the cloud infrastructure.
13:23
Greg Sloyer: And depending on which booths you go to behind you here, you'll see slightly different versions of this. This is my extended version. And sometimes today if I get surprised by a slide, it's because somebody's legal, their legal, our legal, or somebody's marketing department got a hold of them. So I always enjoy this, 'cause then the slides are as much a surprise to me as they are to you. So with Ignition, and we've had this in place a little over a year now, I wanna say closer to 18 months that we've been working with Inductive Automation and Cirrus Link, this is the easy button for getting data into Snowflake. This is zero code. Part of the secret sauce is the IoT Bridge for Snowflake that is available via Cirrus Link. And this drops the data in from Ignition, so not just the tag data, but all the metadata around it, that structured data. All of that lands in Snowflake. And if you have a chance to see a demo, Arlen Nipper, I don't know if Arlen's in the audience today, Arlen and others have done this many, many times. I'm probably on many, many calls per month with him with different customers and prospects.
14:55
Greg Sloyer: And within that demo, Snowflake goes from knowing nothing about your shop floor to knowing everything about it that's coming through Ignition. So it's a very, very fast, very, very easy method for getting the data into Snowflake. And one of the reasons we're a partner on this, versus some of the other ways you can land that data within the cloud, is that we really designed this with them around the process engineer. So the plant is driving the configuration in Snowflake. Snowflake is not defining a structure where you've got to be so many levels deep and have certain kinds of attributes and all that. It is driven from the edge: the plant defines how you look in Snowflake, Snowflake does not define that. That's one of the keys. And then there are some other nuances, for those of you who get into the much more excruciating detail about data types and what you can land. But we're landing this all with MQTT. The cool part of the demo for me, in terms of the processes within Snowflake, is that MQTT is great for transmitting data and for storing data.
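The edge-defines-the-structure idea can be pictured with a small sketch: the node publishes a birth-style message declaring its own tag hierarchy and metadata, and that declared structure is what the cloud learns. This is a simplified, hypothetical payload shape in Python, not the exact Sparkplug B or IoT Bridge format:

```python
# Illustrative Sparkplug-style birth payload: the edge node declares its
# own tag names, types, and engineering metadata. The plant-defined
# hierarchy in the metric names — not a schema imposed by the cloud —
# is what ends up shaping the data in Snowflake. Field names here are
# a simplified sketch, not the exact wire format.
import json

birth_payload = {
    "timestamp": 1718000000000,  # epoch milliseconds
    "metrics": [
        {
            "name": "Site1/Line2/Filler/Temperature",  # plant-defined hierarchy
            "datatype": "Double",
            "value": 72.4,
            "properties": {"engUnit": "degF", "engLow": 0.0, "engHigh": 250.0},
        },
        {
            "name": "Site1/Line2/Filler/State",
            "datatype": "String",
            "value": "Running",
        },
    ],
}

# The cloud side only has to parse what the edge declared
print(json.dumps(birth_payload, indent=2))
```

Because the metric names carry the site/line/device hierarchy, nothing on the cloud side has to be pre-configured to a fixed depth, which is the point Greg is making about the plant driving the configuration.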
16:22
Greg Sloyer: Really, really small footprints for both. It allows you to go very quick, allows you to get the data up, because of that event-driven change control that it has. Not so great for BI and for analytics, though. It has lots of nulls. Mathematics tends not to like nulls, and BI tends not to like a lot of zeros with a spike, then a lot of zeros with a spike. So we use views in Snowflake, meaning you're not gonna repopulate and have to store all that data again. But from the view perspective, it's hydrating those nulls with the previous good value. So now you have an analytic data set that your data scientists tend to like, without having to code anything. This is out of the box; it's driven the moment you've set up this connection. And your BI tools like it because it's not that flatline spike, flatline spike. So, great, you've got the data in Snowflake. Now what can I do with it? This is where, over the past year or two, Snowflake has been bringing a lot of AI, ML, and Gen AI capabilities into Snowflake. We continue to release new stuff. This happens to be one of our, I'll call it, easy buttons.
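The null-hydration idea Greg describes, filling each gap with the last known good value so report-by-exception MQTT data becomes analytics-friendly, amounts to a forward fill. Here is a minimal Python stand-in for what the Snowflake views do (function and variable names are illustrative, not Snowflake APIs):

```python
from typing import Optional

def hydrate_nulls(readings: list[Optional[float]]) -> list[Optional[float]]:
    """Forward-fill None gaps with the last known good value.

    MQTT report-by-exception only transmits changes, so a raw column of
    readings is mostly nulls; carrying the previous value forward yields
    the continuous series that BI tools and data scientists expect.
    """
    hydrated: list[Optional[float]] = []
    last_good: Optional[float] = None
    for value in readings:
        if value is not None:
            last_good = value
        hydrated.append(last_good)
    return hydrated

# Sparse report-by-exception samples: a value only arrives when the tag changed
raw = [72.4, None, None, 73.1, None, 72.9, None]
print(hydrate_nulls(raw))  # [72.4, 72.4, 72.4, 73.1, 73.1, 72.9, 72.9]
```

Doing this in a view, as Greg notes, means the sparse data is stored once and the hydrated version is computed on read, so nothing is duplicated.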
18:01
Greg Sloyer: 'Cause different organizations have different levels of capability around data science, around analytics use, things like that. And for the data scientists in the room, Python, Java, Scala, all of that can be done in Snowflake. It's not just a SQL house, so you can be writing all the cool data science stuff. I don't think I have it on here, but we're in the booth right across the hall. For those of you into optimization, you can be running mathematical optimization in Snowflake through the Python libraries. It's really cool. Back in the '90s, I watched optimization fail, and I'll talk a little bit about why I think it failed. But the capabilities that are now being brought to this data are really, to me, driving a lot of really cool stuff happening in manufacturing. But these two lines of code here with anomaly detection, this is using an ML function. I think of this as the trend function in Excel. For all of you who are familiar with Excel and use the trend function, all you had to know to generate a forecast or see the trend of data was the trend parameters: what are those two, three, four parameters I had to put in the trend function?
19:21
Greg Sloyer: You did not have to know that the mathematics behind it was least squares at the time. You didn't have to care; you could just write a trend function. This anomaly detection, and there are about a dozen of these, what we call Cortex ML functions, available, is something similar. You have to know the parameters, just like I had to with the trend function. But now I can run an ML-based, I think it's gradient boost, anomaly detection on the data as it lands. I don't have to be a data scientist to apply that mathematics to the data. So there are functions like forecasting, and there are a couple of others out there that are more manufacturing-based. Like I said, there are about a dozen overall. But the ones I tend to see for manufacturing and supply chain are anomaly detection, forecasting, and some contribution-factor functions, things like that, that really get exciting, 'cause you're applying ML techniques without having to be a data scientist. So it simplifies the approach.
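The "trend function" analogy, supply a few parameters, get results, the math stays hidden, can be illustrated with a tiny detector in Python. Snowflake's Cortex anomaly detection uses a trained model (gradient boosting, per Greg); this z-score version is only a simplified stand-in to show the call-it-with-parameters idea:

```python
import statistics

def detect_anomalies(values: list[float], threshold: float = 3.0) -> list[int]:
    """Return indices of points more than `threshold` standard deviations
    from the mean. The caller only supplies parameters; the math is
    hidden inside, much like Excel's TREND or a Cortex ML function."""
    mean = statistics.fmean(values)
    stdev = statistics.stdev(values)
    if stdev == 0:
        return []  # perfectly flat series has no outliers
    return [i for i, v in enumerate(values) if abs(v - mean) / stdev > threshold]

# Steady sensor readings with one spike at index 5
readings = [10.1, 10.0, 9.9, 10.2, 10.0, 25.0, 10.1, 9.8]
print(detect_anomalies(readings, threshold=2.0))  # [5]
```

The point of the analogy holds: a process engineer chooses a threshold the way they once chose TREND's parameters, without needing to know whether the machinery underneath is z-scores, least squares, or gradient boosting.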
20:22
Greg Sloyer: The screens here that I'm showing are actually built in what we call Streamlit. This is a Python-based graphics package that is in Snowflake, so the data does not have to go out. It's not going to compete with a Tableau or a Power BI; we can also connect to those. So if you wanna do really cool and fancy dashboards, super, we interoperate with those. But for folks, data scientists especially, who wanna just show quick, easy visualizations of the data, of the results of their very cool mathematics, this is available for them as well. So why do I think optimization failed, and why do I get nervous about being able to use advanced analytics broadly across your organization? I'll point to this and say there are different reasons. But the biggest one, why I think optimization failed in the '90s, for example, and what I don't wanna see with things like Gen AI and AI and ML, which are really cool tools, is that we had organizations wanting to go from here to here without doing the groundwork in between. There was too much change management.
21:33
Greg Sloyer: There was too much of that, and not enough data governance and data quality in those processes. Optimization is great if you've got really good data quality, especially pricing, timing, things like that; the mathematical models rely on that. That is no different for Gen AI, AI, and ML. The square root of a bad number is still a bad number. It doesn't get better because I threw cooler mathematics at it. So this is where, working with the partners, working with you folks, it's not that we're gonna say, "No, don't ever do this." What I'm saying, as a warning, is to keep those structures in place, with governance. And this is really where IT and OT coming back together through this process helps create the environments where AI and ML are gonna be a lot more successful. Make sense so far? All right.
22:45
Greg Sloyer: So a data foundation is necessary. We build these out; we work with the customers and the partners to deploy these things. Like I said, we've been great at IT data, and we're really excited about all the partnerships we have to bring in the OT data and take advantage of our time-series and geospatial capabilities, things of that nature, so you can do all sorts of cool math with that. And then extending those with the partner data, or sharing that data with your partners, customers, suppliers, logistics, for example. So what's that mean? From the Unified Namespace, this is what we are continuing to develop: bringing IT, OT, and connected products all within Snowflake, improving that visibility, and allowing you then to run greater AI, ML, and Gen AI models at the data, not, again, separating it out. So you can take advantage of not just the ingestion of that kind of data, but what do I do with it after I've got it somewhere? So with that, any questions before I send you across the hall to the 1 o'clock keynote?
24:15
Audience Member 1: For a lot of us, the issue is not just the data, it's also the application. So you showed, basically, a lot of the applications there. What does it look like if you wanna get the data out of Snowflake and give it to an individual on the shop floor to be able to use? Does it have to live alongside each other? And should we not think about it as a replacement for a data broker, just something that lets you do higher-level data?
24:37
Greg Sloyer: Generally, the question is, is there a path to go from Snowflake, let's say, back to Ignition as well? There are organizations that have gone down that route. I would say that the Ignition group, Inductive Automation, are the best ones to talk to. There are always the security and protocols and things like that that you have to work through on that. Technically, I do not believe it's an issue. But generally, it's been a one-way path up into Snowflake, because, like I said, if you have 50 sites, you may have 50 Ignition brokers or whatever, and they're coming up into Snowflake, so you're looking more holistically at that data. I've not seen SAP data go down to Ignition or anything like that; that's usually staying up within Snowflake. Sure.
25:25
Audience Member 1: Oh, somebody else. So at the beginning of the presentation, you talked about how it's kind of a big permission space, rather than storage space. But then later on.
25:32
Greg Sloyer: For the, for data sharing.
25:41
Audience Member 1: Okay, 'cause when we saw the architecture diagram, if you define it in the namespace for Cirrus Link, it moves up. Where is the storage part in that situation?
25:50
Greg Sloyer: So it's in Snowflake. The data is coming into Snowflake. It's stored there. You have chosen, as a customer organization, AWS or Azure or both, let's say, for different reasons. And Snowflake sits on top of that. So physically, they can talk to you about where it makes most sense. But generally, it's in Snowflake. One last question real quick. Yes.
26:11
Audience Member 2: No. Yeah, that was it.
26:14
Greg Sloyer: No. Oh, okay. All right. Super. So I've already been shown the hook kind of thing 'cause they want you to get across the hall for the 1:00 o'clock. But thank you. Appreciate your time. And we are across the hall for any more detailed questions.


4IR Solutions will demonstrate how their platforms can deliver OT-as-a-Service in the cloud or on premises, making it easier, faster, and cheaper to build and manage your Ignition infrastructure.
Transcript:
00:01
James Burnand: And we'll get some stragglers in here, from what I understand, so all of them will be pointed out and embarrassed as they come and sit down late. I'd like to be the first to welcome you to ICC 2024. I didn't realize I was gonna get that honor when we signed up for this time slot, but it just worked out that way, so I hope you guys had safe travels in. I'm looking forward to walking you through a little bit about what 4IR does and sharing with you some of what we think is some pretty cool stuff. So to get started: why do we exist?
00:29
James Burnand: Well, OT systems can be a little bit of a challenge to manage. When you don't manage your OT systems, the risks you face are unexpected downtime, security issues, and risks to your data fidelity. These are problems that are fairly common across our industry, things that we run into on a fairly regular basis, and something that unfortunately is somewhat ignored in some cases inside of the manufacturing and industrial marketplaces. So to understand maybe a little bit about how that happens, I'd first like to do a little classification exercise. And all your lights turned yellow, which is kinda cool.
01:05
James Burnand: So first of all, how many, show of hands... How many folks in here are end users? Okay, we got maybe about a half. And integrators? We got a lot of integrators. Cool. And everybody else? There we go. Perfect. So what I've done is taken the opportunity to classify what we see as the different types of end users. Hopefully this doesn't offend anyone; it may ring true. What we're gonna do is lay out who we think are the folks out there that we run into.
01:33
James Burnand: So the first kind of end user we run into are what we call Yodas. Yoda is an exceedingly rare species; there are very few of Yoda's species in the universe, and they are masters of their trade. They are so totally in control and capable of everything that is necessary for them. Jedi Masters. We find these folks to be exceedingly rare, but they do exist: end users that have totally figured out how to manage, operate, and handle all of the different pulls and pushes of OT as well as all of the rest of the responsibilities that they have.
02:06
James Burnand: The next type of end user we run into, and this is very, very common, is what we call super heroes. So these end users wear a cape, they often have many responsibilities of which managing OT and doing things like updates and patching and security is just one of many, many things that they have as their responsibilities. We find that these folks have a strong desire to be better at managing their OT environments, but often face the issue that it's an important but not urgent issue until it becomes an urgent issue. I'd say these are the most common folks that we run across.
02:41
James Burnand: And the final type of end user we have are what we call Bon Jovis. These folks live on a prayer, and they don't realize the risk that they run until they unfortunately have something that happens. We tend to meet these Bon Jovis after they've had a security incident or they've lost a computer, or they've lost an application for a long period of time and dealt with a significant downtime or cost issue, that's when we usually meet the Bon Jovis.
03:08
James Burnand: So what we have done is we have created a solution that hopefully appeals to all of those folks, although I will say that the Yodas are far less likely to be interested. We offer OT as a service. We call that FactoryStack and PharmaStack; we'll talk a little bit about the difference in a second. What that means is that we offer, as a service, a delivered platform that provides you all of the best practices from Inductive Automation, from the security hardening guide to database management, as well as the best practices from the IT providers, folks like AWS and Azure. And we manage that in a very straightforward way, so that you can focus on applications, you can focus on process, you can focus on the things that matter toward the end goal of improving manufacturing, and someone else is taking care of your OT systems for you.
04:03
James Burnand: So I did mention PharmaStack briefly. PharmaStack is essentially an extension of FactoryStack, and really what it does is add some additional capabilities around data retention, data integrity, and 21 CFR Part 11 compliance, so that companies in the pharmaceutical space can use PharmaStack to make things like change control, operation, and validation of their systems faster and easier. Fundamentally, they do the same thing. I'm going to talk about them interchangeably, so if I say FactoryStack and you're thinking PharmaStack, don't worry; they do fundamentally the same things under the hood, with the additions to PharmaStack being specific to that industry.
04:47
James Burnand: So what are we actually trying to do? We're trying to make it simple to deploy OT infrastructure. We're trying to make it easier, faster, cheaper, and more secure for you to be able to have these architectures and these capabilities deployed both in the Cloud and on-premise for you to be able to take advantage of those. And that sounds very wide and that sounds very kind of vaporous, so if you think about it from a... What is our mission is we're trying to simplify and give you access to these transformative technologies without you necessarily needing to learn them, so you can focus on what's most important for you, which is solving problems for end users or solving problems as end users.
05:27
James Burnand: So how does that work in the ecosystem? That's really an interesting thing. What we've done is we've laid out a little bit of the Ignition ecosystem. You start off with Ignition itself: there are the Ignition standard, Ignition Edge, and Ignition Cloud Edition platforms. They're all somewhat similar, in that they share a lot of commonality between them, if you've used them you've probably noticed that, and they provide a basis for a lot of other things to happen.
05:42
James Burnand: On top of that, there are the modules. We're showing the partner modules here from Cirrus Link and Sepasoft, who are strategic and solution partners for Inductive Automation, similar to us as solution partners. These extend the capability of Ignition, so you can do things like communication to MQTT and to the cloud, and MES capabilities, and Sepasoft has got some neat stuff this week.
06:19
James Burnand: That extends the capability beyond, but it still doesn't solve any problems for end users. That's where the integration community comes in. The applications are truly the thing that solves the problems for the end user. This is where you build out a batch system or an EBR system or whatever may be the end application that ends up providing that value to the end customer. And if the stack were this simple, it would be very easy to do. It's never that simple. What ends up happening is, at a minimum you need a database, you probably need time-series data, you probably need source control, authentication, MQTT brokers, external applications. All of these complexities are part of the systems that are deployed; whether they're directly a part of it or integrated into it, they're important pieces, and our goal is to deliver that as a service.
07:10
James Burnand: A different view on what that looks like is this next diagram, and I apologize about the glare moving around on there. What you'll see is we have a couple of things shown on here where... Down at the bottom, we're showing a couple of different deployment locations, so on the left, this is essentially if we offer this as a SaaS, so that's where we deploy it inside of our Tenant, and it becomes a service that you just use.
07:42
James Burnand: The next is inside of your Tenant, so this is for bigger enterprise customers, typically where they already have a big, strong relationship with an AWS or an Azure or some big Cloud company, and they have a basis where they would like to control their data inside of their environment, we are capable of deploying and operating those workloads inside of that space for them really as a platform as a service or PaaS.
07:58
James Burnand: And the final option is on premise. And we'll talk about a couple of options that we offer there, but the ability to have the advantages of operating something in Cloud while it happens to live on-prem, so you can still have that low latency localized capability, but somebody else is taking care of it for you. In the middle, this is really what 4IR does. So managing, supporting, monitoring, providing all of the capabilities for disaster recovery, updates, and ensuring that there's 24x7 support in place for all of these systems is a key component of us ensuring that this is an available and operational system for you at all times.
08:42
James Burnand: And this layer of glue in the middle is really what we are best at. When it comes to the applications, that's your choice: if you want one Ignition Gateway or 12 Ignition Gateways or 200 Ignition Gateways, if you want an Azure SQL database or you want a Postgres database, for us, we're able to flex and provide what makes sense for your use case. So we work a lot with system integrators and end users to help them decide what goes in that application space, but fundamentally, it's up to you as to what you need to solve your problems.
09:15
James Burnand: The way that works is we essentially sell by instances. So just as there's an Edge version of Ignition, there's an Edge version of FactoryStack and a Cloud version of FactoryStack. They come with the core services that you can see at the top of those boxes, and on the right, you can pick from our application catalog what you want available inside of those different locations. And from then on, it's operated and managed as a service.
09:43
James Burnand: So I think it's important to talk about how people are actually using this. The very first use case we'll talk about is: hey, I've got a couple of plants, or I've got a plant, and my executives really wanna see a report or a visualization or some information from it, and it's hard for them to look at on their cell phone, or it's hard for them to get access to that information. In this scenario, which is a fairly common use case, you have existing Ignition gateways and you simply publish data from those gateways up to an instance in the cloud of choice, again, whether it's our cloud or your cloud, and you build applications up there that take advantage of security principles like multi-factor authentication and DDoS attack protection, and the use of a modern suite of security tools, so that you can provide a secure way for those end users to access information that used to be trapped inside of the facility.
10:42
James Burnand: The next use case is around enterprise application, so this is often... And we have a talk on this tomorrow. This is really where there's a single application where I want to go and provide this capability to a fleet of facilities or a fleet of assets. And OEE is a great example of that where, hey, I really wanna have a consistent OEE deployment across my X number of facilities or my fleet of facilities that I have, that can be a really challenging thing to do when you have different IT folks in different buildings and when you have different infrastructure in different buildings, and what we find is for certain types of applications, it makes a lot of sense to use an edge-to-cloud architecture where your edge is provided as a data pump, it's buffering information, it's doing all of the connectivity to those local applications, and you're actually hosting the applications themselves using the Cloud.
11:32
James Burnand: That doesn't mean it all has to be hosted in one gateway. Some of our customers will actually dedicate a gateway per site, so there's a one-to-one relationship between a Cloud application as well as an Edge data pump. We see that as being a very common use case, because what it allows for you to do is to deploy very quickly without having to stand up a bunch of complex infrastructure in the buildings and to be able to take advantage of consistency in the application itself, so using things like the EAM module or DevOps capabilities with Git to be able to manage and operate those projects that are located up in that Cloud position.
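The "edge as a data pump" pattern James describes, buffer locally, forward in order when the cloud link is up, can be sketched as a small store-and-forward queue. This is illustrative Python only: real deployments would rely on Ignition Edge's built-in store-and-forward, and the `publish` callback here is a hypothetical stand-in for the uplink:

```python
from collections import deque

class EdgeBuffer:
    """Store-and-forward buffer: hold readings while the cloud link
    is down, then flush them in arrival order once it comes back."""

    def __init__(self, maxlen: int = 10_000):
        # Bounded queue: when full, the oldest readings are dropped first
        self.queue: deque = deque(maxlen=maxlen)

    def record(self, reading: dict) -> None:
        """Buffer one reading locally (e.g. while offline)."""
        self.queue.append(reading)

    def flush(self, publish) -> int:
        """Send everything buffered via `publish`, oldest first.
        Returns the number of readings sent."""
        sent = 0
        while self.queue:
            publish(self.queue.popleft())
            sent += 1
        return sent

buf = EdgeBuffer()
for t, v in enumerate([20.1, 20.3, 20.2]):   # link down: buffer locally
    buf.record({"tag": "Line1/Temp", "t": t, "value": v})

uplink = []                                   # link restored: flush to cloud
print(buf.flush(uplink.append))               # 3
```

The one-gateway-per-site variant James mentions just means each site runs its own buffer and publishes to its own dedicated cloud gateway.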
12:15
James Burnand: The next piece is an OEM Edge. Where we see this most often is machine builders, or folks that are delivering a piece of equipment to a lot of locations. They would put a small IoT Edge instance inside of that machine and use it for capturing statistics, creating reports, creating a user portal. If you're using Ignition Cloud Edition, one of the things you're capable of doing is having multiple tenants connect to that instance in the cloud. So you can imagine, if you're a machine builder and you deliver a hundred of a piece of equipment into these locations, the ability to then have some sort of dial-home statistics gathering allows you to do things like, number one, monitor the equipment, but also find common failure modes and use things like AI to generate insights and inference on how those systems are performing. And most importantly, you can actually create upgrade packages for those pieces of equipment based on improvements that you've seen on other pieces of equipment. So this allows you to use that kind of spread-out architecture that Ignition enables to provide an additional service, which is often a paid-for service, to your users or to your customers.
13:33
James Burnand: The last one, which is, I'll say, newer in this space, is hybrid. So, is anyone familiar with hybrid? Does that term make any sense to anyone? Alright, no hands are going up. So what hybrid is, is a little bit of cloud in your building. Rather than using an Edge device that is essentially there to operate maybe some Docker containers, or maybe there to just provide some function, maybe a database or an Ignition gateway, hybrid cloud is literally taking a piece of cloud and deploying it inside of your building, and you don't operate it by logging into the server. It looks like a server: Azure Stack HCI is offered by a bunch of the common vendors you would know, Dell is a good example. It looks like a Dell 750 chassis server, but you can't log into it.
14:26
James Burnand: What it is, is a thin operating system that connects up to the cloud, and then you operate and deploy all of the workloads on that server sitting in your building through the Azure portal. The nice part about that is you get access to certain services that are available inside of Azure. So now, all of a sudden, I have access to hyperscale databases and VDI and Kubernetes clusters, which lets me put not just FactoryStack but a variety of different services there that live locally, can tolerate the internet going out, and still operate. But I get the benefit of being able to manage them as if they were deployed in the cloud, because they're being deployed using that same common methodology.
15:11
James Burnand: I see this being a really important step in the next several years for manufacturing, moving from a completely on-prem sort of setup to somewhere where there's an on-prem and in-cloud hybrid. Yes, the word means that, I guess. Where we see this is traditional SCADA and alarming applications, and common places are things like regulated environments that want something physically on-prem, or that have data residency requirements where data can't leave a building or a geography. These are common use cases for this. And again, we see this as being a very interesting, but also very useful, set of tools that not a lot of folks in the manufacturing space are using as of yet today.
15:54
James Burnand: Interestingly enough, there are several different ones out there. In this case, Azure Stack HCI is the one we're highlighting, 'cause we think it's the furthest ahead. Amazon has their Snow series and their Outposts series, there's Anthos from Google, and then Azure Stack, as it's called, from Microsoft. There are others as well. Those are kind of the leading folks in this space, and it is a growing space.
16:26
James Burnand: Oops, so where do people start? So I talked about a couple of use cases and different ways of thinking about or looking at different types of applications, but most often this is where people start. They set up an Ignition system, a database, a Git repository, everything with integrated Entra ID and multi-factor authentication, everything monitored and secured, and they look for a place or an application to use it for. Most often it's focused around statistics or information gathering, a unified namespace, or integration with AI systems. These are all different use cases that kind of use the same architecture.
17:09
James Burnand: The nice part about this is you can start with a single gateway and a single database, and you can grow this to whatever meets the needs of your use case and your customer. There are limits, but they're very, very high, and I haven't seen anyone get close to them yet. You can start with a single gateway, and you can run hundreds inside of the same infrastructure without making any real fundamental changes to the way it's built.
17:40
James Burnand: So part of what I think is important to understand is what 4IR does in all this: we are operating it, managing it, and making it simple for people to use, so your interface as an integrator or an end user is the Ignition Gateway and the Ignition Designer. You don't really need to know or understand all of the inner workings behind this. What you need to know is that someone who understands OT is taking care of it for you, and that we are ensuring a simple interface for you to use that takes care of some of the complexities you may run into.
18:15
James Burnand: A good example. So one of the complexities that a lot of folks run into when they're putting stuff in the Cloud is SSL certificates. So has anyone had that problem where their system goes down because of an SSL certificate?
18:29
Audience Member 1: Microsoft Azure.
18:35
Audience Member 1: Special server crashing and yeah, not a problem at all.
18:39
James Burnand: So in our case, we have automated a lot of what you see on the screen. We use a tool called Pulumi that allows us to automate the deployment, management, and updating of all of the infrastructure. That also includes certificates. So we don't just deploy a certificate, set it to expire in 2029, and hope no one forgets about it in a few years. We rotate our certificates every thirty days, and there are some changes coming from the browser providers that are probably gonna make that a necessity in the next few months, maybe a year. But that automation allows for one of those potential downtime reasons to just sort of go away. It becomes something that you no longer need to keep in your mind or in your maintenance plans, because it's taken care of as part of the platform that's deployed.
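To make the thirty-day rotation policy concrete, here is a minimal sketch, in plain Python rather than 4IR's actual Pulumi code, of the kind of policy check such automation performs: given when a certificate was last issued, decide whether it is due for rotation. The function name and dates are illustrative assumptions, not anything from the talk.

```python
from datetime import datetime, timedelta

# Illustrative policy check (not 4IR's actual implementation): rotate any
# certificate that has been in service for thirty days or more.
ROTATION_PERIOD = timedelta(days=30)

def rotation_due(issued_at: datetime, now: datetime) -> bool:
    """Return True when the certificate has reached the rotation period."""
    return now - issued_at >= ROTATION_PERIOD

# Example: a cert issued 31 days ago is due; one issued yesterday is not.
now = datetime(2024, 9, 18)
print(rotation_due(datetime(2024, 8, 18), now))  # True
print(rotation_due(datetime(2024, 9, 17), now))  # False
```

In a real deployment, a check like this would run on a schedule inside the infrastructure-as-code pipeline, so an expiring certificate is replaced long before it can cause downtime.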
19:28
James Burnand: Maybe we'll leave certificates there. We do have a presentation tomorrow that I'll talk about in a second, where we'll go through in a little more detail what that is, and we talk quite a bit about security certificates and scale as part of that. So it's important to know how you price stuff, and there's a really interesting part of this discussion around how you look at pricing when you're doing a deployment. It's very similar to going out to buy a server, right?
20:02
James Burnand: So if you're setting up an Ignition system in a plant, you're probably going to Dell, maybe buying a VMware 3-2-1 stack or OpenStack or Nutanix, whatever the case may be. But you're buying something with the intention of having enough capacity in that thing for the next six, seven years, depending on what your lease or refresh cycle is on your hardware. It's a little different in the Cloud. When you're in the Cloud, you're trying to figure out what you need today and making sure that, when you've created this, you have a flexible architecture, so as you consume more, you have the ability to expand your capability. What becomes important as part of this is understanding that the Cloud and on-prem have different ways of handling reliability. By default, our systems take advantage of multiple availability zones.
20:51
James Burnand: So we have things like mirrored storage across three completely separate physical buildings, which protects you not just if some hard drives fail, but if a literal building blows up; the system won't have any downtime, or it'll have minimal downtime while some of the workloads move across automatically. So the level of availability and reliability that we offer out of the box is actually higher than what most people are capable of doing inside the four walls of their building. And we can still go up from there. The challenge is cost. A lot of folks say it needs to be this, it needs to be that, without actually going through and understanding what level of downtime their business can tolerate. As my cohort Randy says, can you tolerate somewhere between 100 milliseconds and 100 days?
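A quick back-of-the-envelope sketch shows why mirroring across availability zones raises availability so sharply. The 99.9% per-zone figure below is an illustrative assumption, not a quoted SLA; the point is the independence math, where a mirrored workload is down only when every zone is down at once.

```python
# Illustrative availability math (made-up per-zone figure, not an actual SLA):
# if each zone is independently up with probability a, a workload mirrored
# across n zones is unavailable only when all n zones fail simultaneously.
def combined_availability(a: float, n: int) -> float:
    return 1.0 - (1.0 - a) ** n

single = combined_availability(0.999, 1)    # one building: ~8.8 hours down/year
mirrored = combined_availability(0.999, 3)  # three zones: a fraction of a second/year
print(f"{single:.6f} vs {mirrored:.9f}")
```

The same arithmetic explains the cost discussion that follows: each additional "nine" of availability multiplies the infrastructure you pay for, so the business question is how much downtime you can actually tolerate.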
21:39
James Burnand: And the reality is that, you know, depending on what your lead time is for different hardware components, what the criticality of your application is, and how much data you can tolerate losing, those are the decisions that help you choose what level of availability you need as part of your deployed application. That correlates directly with what it costs from those Cloud hosting services. So we try to guide people through what they need, based on what their application is and what their use case is, and try to create something that makes sense for those users, taking advantage of the different cloud services and capabilities available.
22:19
James Burnand: The other thing that drives cost is how complex the application is. How many gateways? What type of databases do I need? Do I need a VPN or no VPN? How long do I need to retain backups? These are all, again, considerations that have a direct correlation to what I get charged by Azure or AWS.
22:43
James Burnand: Important to highlight: we do have a few partnerships in the industry, a lot of logos I think are here this week. We work very closely with these partners on trying to create cohesive offerings, as well as working with Microsoft and Amazon to ensure that our solution is qualified and follows all the best practices that they publish. We work with a lot of systems integrators as well. I'm not gonna put logos up here, but it's very much a collaborative engagement with integrators, because we don't build applications. That is not part of our business model. We are here to provide enablement and infrastructure and make sure that it's easy for system integrators or end users to deploy these kinds of systems. But we do not build applications.
23:29
James Burnand: We do offer consulting. So if you are trying to figure out how am I going to do this? How does IT and OT talk together? How do I meet these security requirements? Or you get one of those big long checklists that says, do you have this? Do you have that? What's your policy for this? That's what we do. So if you're trying to go through that and figure out a way to create an offering for a customer that meets those obligations, we probably have an answer for that because that's our business.
24:04
James Burnand: So we talked a little bit about the ICC session. It's tomorrow, just after lunch, on Stage 2, and I encourage you all to attend. I'll be back up here with my cohort Randy to talk a little bit about Enterprise Ignition specifically. We're gonna cover details around what makes an enterprise deployment unique, and we're gonna do a live demonstration of FactoryStack. That demonstration is gonna have a number of Ignition gateways running; we're gonna add a whole bunch, upgrade a bunch, downgrade a bunch, and kill a bunch. It's a really neat demonstration of the technology in action, and we're looking forward to sharing that with you guys. That's all I have for the presentation. Any questions?
24:54
James Burnand: So there are some that are deploying hybrid because of that concern and they need to have it in the building. But there are others, and it's not typically like consumer packaged goods or pharmaceuticals. It's like oil and gas is a better example where they have distributed fleets of assets and they're actually doing monitoring and SCADA control of those distributed assets using a central platform, which for them, isn't really that different as to what it would look like if they had a bunch of leased lines going to a building that has a dedicated server. So for them, this is a cost savings and risk reduction piece. So now rather than having no one updating their servers and being a little bit of a Bon Jovi, now they have someone caring for and monitoring their systems 24/7 and providing updates and providing kind of that surety of availability. The biggest downtime reason is often the internet connection, not either side of it.
25:58
James Burnand: Yeah. Yeah. It's all pipeline in that particular case I'm talking about. But for direct control of process and equipment, we don't recommend using the Cloud, and to be honest, there is not a great set of reasons to take that risk on unless you need to. I personally think that in my professional career, we're going to see a time when the reliability of networks between factories and public clouds is at the point where people will start to do that. We're already seeing it. We have a couple of really big enterprise customers that are forcing all of their onsite SQL servers to be moved to a managed service in Azure by default, so you have to provide a set of reasons why they're not going to be moved. They don't actually care what the application is; they're just trying to get rid of the cost of having to operate and maintain those SQL servers.
26:57
James Burnand: And their reasoning behind it is that they've invested in redundant WAN connections and a level of latency and availability between their buildings and their public cloud instance that is as good as it could possibly be, so they feel comfortable with that risk level. I think we're gonna get there in the industrial space, but not for a while. That's why I think hybrid cloud is so important: it allows you to bridge that timeline. You can run Stack HCI offline for weeks, and it's still local, still running virtual machines and clusters in the building, which allows your SCADA system to operate as if everything was there. What you lose is visibility and the ability to push backups up to the Cloud.
27:41
Audience Member 2: All the software, all that's managed in the Cloud. What about hardware upgrades to the on-prem?
27:47
James Burnand: So that's actually managed from the Cloud as well. The way that works is there's a Stack HCI OS, and again, I'm just talking about that particular instance, which is like a really cut-down version of Windows Server, and it's upgraded kind of like firmware on a PLC. Those upgrades become available in Azure, and you push them down to the system. So they're more like unit upgrades than doing Patch Tuesday; it's more akin to a firmware-based device than to an operating system.
28:19
Audience Member 2: So no real concerns about hardware obfuscating the software?
28:25
James Burnand: So not really. The nice part about it is, just like if you're running a VMware setup, you're obfuscating the hardware from the workloads that are running on top of it. So migrating that cluster, migrating those virtual machines to another piece of hardware, even if it's dissimilar, is not an issue. The level of availability of those systems is variable, depending on how much hardware you buy. You can do as little as one Stack HCI server, which gives you something like a RAID 5 array, two power supplies, and single-server-level reliability. You can do two of them running as a pair, and you can do 10 of them running as a cluster.
29:06
James Burnand: Yeah. So the question was, how difficult is it to get an estimated price (Azure was the question, but it's similar for AWS), and how accurate is that? The cloud companies actually do a really good job of laying out what their service costs are, and they also have some fairly built-in discounting models. One of the things you can do is reserve for a certain amount of time, and then you get a percentage off of that service cost. For example, if I have a database and I reserve it for a year, I get thirty percent off that price, and that's basically a fixed quantity based on what the calculators say. So what we do to try to simplify it for the end users is we'll create a set of boundaries and say, okay, for this subscription you get a terabyte of storage, you get this much ingress, this much egress, and these services, and we'll handle some of the risk of that minutia.
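The reservation discount described above is simple arithmetic; here is a tiny sketch with made-up numbers. The thirty percent figure echoes the database example from the talk, but the list price is purely hypothetical; real rates come from the provider's pricing calculator.

```python
# Hypothetical numbers for illustration only; real rates come from the
# Azure/AWS pricing calculators. A one-year reservation at a thirty percent
# discount, mirroring the database example above.
def reserved_monthly_cost(pay_as_you_go: float, discount: float = 0.30) -> float:
    return pay_as_you_go * (1.0 - discount)

db_list_price = 1000.0  # $/month, made-up figure
print(reserved_monthly_cost(db_list_price))  # 700.0
```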
30:03
James Burnand: When it's in a customer's tenant, or when you're trying to estimate whether this is a hundred bucks a month or 10,000 a month, the calculators are really easy to use, provided you know, to an approximation, the services that you're going to consume. The data's not the most expensive part; it's the services that cost more. So, for example, the storage for keeping backups is a rounding error compared to what it costs to put in something like a SQL Server database service.
30:32
James Burnand: Yeah. So the question was, how does Ignition licensing work? 4IR can only buy Cloud Edition, because we purchase Cloud Edition through the marketplace, just like anyone else would. Any other Ignition purchases are perpetual licenses. We need the eight-digit key so we're able to re-up them whenever we need to, or, if we kill a gateway and bring one back up, to reactivate it. But those are purchased either by the system integrator or by the end user directly. So the licensing that we'll provide is for the managed services or anything that we purchase through Azure, things like an MQTT broker if I need one. If I need a Flow license or an Ignition license, that's typically purchased by the integrator as part of the project that's being deployed. Our requirement is that support is maintained on it, so upgrade protection is available and we can upgrade things. But that's really all we're looking for.
31:39
James Burnand: Okay. I think... I'm starting to feel like we might be out of time. So I wanted to say thank you all for your time today and I hope you guys enjoy ICC. Have a great one.


Sepasoft’s workflow solution can map out and execute the production process for almost anything – including made-to-order bobbleheads! Our demo will showcase how simple it is to manage production workflows, collect real-time data, and utilize document management with 3D models and form entry. We’ll also highlight how to authenticate and verify every action during production for compliance and accountability using Electronic Batch Records (EBR) and electronic signatures. Join us to see the latest Batch Procedure technology in action.
Transcript:
00:00
Tony Nevshemal: Hey everybody. Welcome, and thank you for coming to our session today. I'm really excited to be here at ICC; it's actually my first ICC. Today, my colleague Doug and I are gonna be presenting "Sepasoft's Workflow Solution: Building Bobbles with Batch." We're gonna be building these really cool bobbleheads today using Sepasoft's Batch [Procedure] Module. Within Sepasoft, there's been some controversy about how we named our module "Batch," because some people think it's a misnomer, that it only applies to batch manufacturing. However, it truly is a workflow solution. It'll handle any workflow that's associated with your manufacturing, and we intend to show you some of that today.
00:55
Tony Nevshemal: My name is Tony Nevshemal. I'm the CEO of Sepasoft, and I'm also the new guy, having joined just recently. Many of you know Tom; Tom Hechtman was the prior CEO of Sepasoft, and he has transitioned to the CTO role, where he's in charge of the product roadmap, product innovation, and thought leadership. Prior to joining Sepasoft, I was actually CEO of a manufacturing ERP company, and prior to that, I was an operations director at a large manufacturer. I'm very happy today to come down the Purdue pyramid to Level 3, where all the cool kids are, and one of them is Doug. So Doug, introduce yourself.
01:38
Doug Brandl: Yeah, thank you. My name is Doug Brandl. I'm an MES Solutions Engineer with Sepasoft. My background is, I've got 10 years of experience in pharma as an automation engineer and consultant, and then application development before then. But I grew up around the MES space, I grew up around the standards. My father was really involved in them, and our dinner table conversations with me and my brothers and my family often involved talking about operations, responses, and all the different object models. It was a bit nerdy, a bit geeky, push the glasses right up your face. But I've got an ingrained, internalized understanding of the space and I've been with Sepasoft for a little over a year and thank you to everybody who went to our session last year, and thank you for coming to this one today.
02:36
Tony Nevshemal: Well, before I joined, I endeavored to take all the training classes at Sepasoft for all of our modules. But one of the training classes I have not taken yet is for our Batch [Procedure] Module. So Doug is in the unenviable position of walking me through our Batch [Procedure] Module, the unit procedures, changing up a recipe, and you guys get to see it all in real time today. A quick word about Sepasoft before we proceed. Sepasoft is of course an Inductive [Automation] Solutions Partner. We have the broadest and deepest MES solution on the platform. We have batch processing and production workflows, and we'll be showing some of that today. We have genealogy and WIP inventory with our Track & Trace Module, and ERP connectivity; we can hook up to pretty much any ERP, and we have a direct connector for SAP.
03:31
Tony Nevshemal: We're well known for our production efficiency and scheduling with our OEE and downtime, quality tracking is handled with SPC. We have a bunch of ancillary modules such as settings and changeover, document management, barcode, those types of things. And you can control it all at the enterprise level with our multi-sync management, multi-site management, not sync. I'm very happy to tell you that this week we're announcing another bullet point added to this list, and that's SepaIQ. So please come to our session on Thursday. SepaIQ is really an exciting breakthrough that we've made, that Tom's made, and it relates to our manufacturing, machine learning, AI, data contextualization, all of those topics. So please come to our session on Thursday to learn more about that.
04:21
Tony Nevshemal: And finally, a quick word about a change we've made regarding our Quick Start program at Sepasoft. Our Quick Start program is effectively access to our design consultation engineers. We've opened up that access to be universal to any and all Sepasoft customers. So to the extent that you need expertise with your MES project, whether that's at architecture, design, implementation, rollout, consider us part of the team because when you succeed, we succeed. So I think that's enough of that. Let's get into the presentation.
04:55
Doug Brandl: Yeah. To give everybody some context on what we're doing, we are receiving orders from our ERP system for made-to-order bobbleheads, and we're going to run through to assembly. I challenge you to think of it in terms of the procedural control and workflow of what it takes to go from order to execution of making these bobbleheads. And Tony will have to put them together for us. We're gonna leverage our Batch Procedure tool and our Track & Trace Module. Hopefully, if we have time, we'll be able to see some of the genealogy of lot consumption, and you'll see a handful of the components we use to do all this, and our recipe editor.
05:43
Tony Nevshemal: Yep.
05:45
Doug Brandl: Alright.
05:45
Tony Nevshemal: Alright.
05:46
Doug Brandl: So first things first, you guys are gonna have to excuse me, I've got to turn around to do this. We're gonna refresh our orders off of our ERP system, and I like this bobblehead with the Sepasoft company logo; awfully convenient that one's right at the beginning. So we're gonna go ahead and start a batch, and as you can see, we've got our batch ID, and we proceed to the review page before we can assemble. What we've got here is just a standard Perspective page. We've got our document viewer, which is an HTML5 WYSIWYG; you can do a lot of really cool things in it. In this case, we're embedding a WebGL model, which we do with the help of the WebDev Module. And over here on the right side, we've embedded some form entry fields, and all of this gets tracked to the batch, to the electronic batch record, the EBR, and I'll show you what all of that looks like here in a minute. But before we go, I should probably give you a quick overview of the recipe so that we can...
07:00
Tony Nevshemal: Yeah. Is there a way to graphically view that?
07:01
Doug Brandl: Yeah. I put a little slide out here. Right over here is a visual representation, and this is also very similar to... Sorry. This is our recipe that we're gonna be executing and we here have "Review Station" which in this case is gonna be my computer where I'm going to do some 3D model review. We're going to do some authentication challenges. This links into the identity provider provided by Inductive [Automation].
07:29
Doug Brandl: And we'll challenge for some electronic signatures. We've got some logic we can apply there: you can require double signatures, and you can set up which roles are needed to gate certain steps. Then after our review, if we're happy with our model, we go through the assembly, so I have an equipment phase here. If you're not familiar with the standards, think of a phase as a step. In this case, this equipment phase is a simulated PLC where I'm going to send to our printer, our beautiful Amazon 3D printer here, the 3D models that we're going to print. We're going to e-sign to make sure it didn't turn to spaghetti, and then we're going to measure, record the values to our SPC Module, and then assemble our little 3D bobblehead. Alright, so Tony.
08:26
Tony Nevshemal: Yes.
08:27
Doug Brandl: Well, I guess this is all me, I'm the reviewer. As far as... This looks appropriate to me. I'm not really seeing any mesh errors.
08:36
Tony Nevshemal: And all components, all three are present.
08:38
Doug Brandl: Yes, all of this is present. So I'm gonna go ahead and click through these and I'm gonna say this is all good, and I'm going to... You can't see it in the bottom right because it's covered by my shadow, but down here, we've got our button to finish this document. Now, when I do this, I'm gonna slide this back out. You can see where you've been and where you're going with our batch monitor. And when I click on this and expand it, I can see all of the relevant metrics that we're capturing as part of this step. I can see, right up here, I can see the model is appropriate. So this is really good for auditing and figuring out what really happened during the execution of a batch. Slide this guy back out, and I can see I've got an e-signature required to complete the review step.
09:28
Doug Brandl: I will go ahead as a reviewer and do this challenge; here I am, Doug, with my password. Alright, I accepted that. I could also reject it, and in the recipe that you saw, with branches, you can get pretty complex in the conditions that you put in there to do whatever it is that you need. Next up, I guess we go to our assemble stage. This is just a simple Perspective page that I put up, tied to our fake little PLC. You can see I say that the state is running; our PLC is saying that it is running, but in reality it is waiting for some filament. So Tony, if you don't mind, could you scan some...
10:21
Tony Nevshemal: Sure. Beep.
10:24
Doug Brandl: Perfect. Alright, there we go. Okay, now we're off to the races. So, while this is running, I'm just capturing a handful of metrics, we're looking at filament consumed, layers printed, extruder speed, etc.
10:35
Tony Nevshemal: How did you build these screens?
10:37
Doug Brandl: Yeah, this is just standard Perspective. All of these are tag-driven, so when you install our modules, you get an MES tag provider. As you configure the Batch Module, you can expose each step as it executes for a particular unit; you can expose all of those values as tags. So all of these are just tags; it's very simple, plain old Ignition Perspective. And then on this, while it executes, I didn't pull it up fast enough, but we are tracking, you see Base_Out at the top, we see filament. These are material transfers, so this is actually piggybacking on our Track & Trace Module. It allows us to consume material and track lot usage, and we'll see that hopefully at the end with our trace graph. Then you get a file name, you get the extruder speed, all of that gets tracked live, and you can store those values as they change, or store the last value, so you can see all of this in your EBR at the end after execution.
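The tag-driven pattern described here can be sketched in plain Python as a mock, not the actual Sepasoft MES tag provider API: an executing phase publishes its live metrics under tag-like paths, and a screen simply reads (binds to) those paths. The tag paths, values, and units below are all hypothetical.

```python
# Plain-Python mock (not the Sepasoft/Ignition API) of the idea above: an
# executing phase publishes live values under tag-like paths that a
# Perspective screen could bind to.
phase_tags: dict = {}

def write_tag(path: str, value):
    """Simulate the phase publishing a value to the tag provider."""
    phase_tags[path] = value

def read_tag(path: str):
    """Simulate a screen binding reading the current value (None if absent)."""
    return phase_tags.get(path)

# A simulated "print" phase publishing its metrics as it executes
# (paths and numbers are made up for illustration):
write_tag("[MES]Assembly/Print/FilamentConsumed", 12.4)  # grams
write_tag("[MES]Assembly/Print/LayersPrinted", 87)
write_tag("[MES]Assembly/Print/ExtruderSpeed", 60.0)     # mm/s

print(read_tag("[MES]Assembly/Print/LayersPrinted"))  # 87
```

In the real system, the provider also handles history, so values stored as they change (or last-value-only) end up in the electronic batch record after execution.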
11:56
Tony Nevshemal: And for those that don't know, what's an EBR?
11:58
Doug Brandl: Electronic batch record. Alright, so we'll go over to our measure. I forgot I have a e-signature here. Alright.
12:07
Tony Nevshemal: Well, it looks like they printed.
12:09
Doug Brandl: Okay, they didn't turn to spaghetti.
12:11
Tony Nevshemal: No.
12:11
Doug Brandl: Alright.
12:12
Tony Nevshemal: We got the parts.
12:13
Doug Brandl: So I'll go ahead and sign off. Or would you like to sign off?
12:16
Tony Nevshemal: Sure.
12:17
Doug Brandl: Yeah. And again, this is any identity provider in Ignition that you set up, so you don't need to do anything crazy, it's just part of the platform. Alright. Now we're good, hopefully. Well, I hit the login button. Now we're good to go to our measure. Alright, so we've got some annotations now here on our 3D model. Tony, I need you to take some measurements here.
13:00
Tony Nevshemal: Okay.
13:02
Doug Brandl: So let's look at the head first.
13:04
Tony Nevshemal: Which one?
13:06
Doug Brandl: And I want you to get the diameter of that section on the 3D model.
13:15
Tony Nevshemal: So that is 6.12.
13:17
Doug Brandl: Alright, and then let's go to the base. If I can put that. There we go. Now we're gonna grab that right there, the diameter.
13:32
Tony Nevshemal: Alright, 6.16.
13:37
Doug Brandl: And then finally, let's go for the spring diameter.
13:43
Tony Nevshemal: 6.02.
13:47
Doug Brandl: Perfect. So I'll go ahead and complete this step. Now, I don't know if you guys noticed, but part of our process, we measure, we record the values to SPC, which it popped up while I was looking away, but we record the values to SPC and then we go to assembly. But we may run into a problem in the future, so I think there's an opportunity for us to modify this recipe and for Tony to dabble in the batch recipe editor, so we are good there. Now it's just assemble.
14:19
Tony Nevshemal: Alright.
14:19
Doug Brandl: If you don't mind.
14:22
Tony Nevshemal: So how do I assemble?
14:23
Doug Brandl: No, that's...
14:24
Tony Nevshemal: Okay. So you take...
14:25
Doug Brandl: Yeah. Take the spring, put it in the hole. Now, obviously, use your imagination: in your projects, this could be significantly more complex. You don't have to use a 3D model like we are here; you could use documents. We can retrieve these out of controlled document management systems. The world is your oyster when it comes to this. Alright, cool. It is assembled. I'm gonna go ahead and complete the step. So we've completed our assembly, and now we're gonna send the label to the printer, and that's that. But we did notice that there are some opportunities. So Tony, if you don't mind, I'd like for you to go into the recipe editor and modify the recipe, and let's see if we can account for times where, let's say, the spring is not gonna fit in the hole and we're not gonna be able to assemble this. We've got our happy path, our green path, through this workflow, but we don't have a red path; we're not handling exceptions appropriately. So this is a great opportunity to show you how easy it is. So Tony, can you open up the assembly unit procedure on the bottom left?
15:39
Tony Nevshemal: Sure.
15:41
Doug Brandl: And scroll on down, and after the "Record Values" and the "Record Transition," we're going to insert a branch into this workflow, so you can delete that line right there. Then, from our logic controls here in the editor, I want you to drag on "Or Begin." What this is gonna let us do is say, "When this condition is met, you go down this path; when a different condition is met, you go down another path," etc. And you can change these. So connect that, and then we're going to put in those conditions.
16:16
Tony Nevshemal: Okay.
16:16
Doug Brandl: So if you could drag two transitions in, the transition is where you're going to be able to put in that expression, and we'll have one for our green path and one for our red path. Or happy and sad path. And go ahead and connect those guys. Perfect. And then let's edit. You can connect them to the next one as well.
16:41
Tony Nevshemal: Sure.
16:42
Doug Brandl: And then let's go ahead and edit that transition. Let's give it a name.
16:47
Tony Nevshemal: So this is good measurements, right?
16:49
Doug Brandl: Yes. And then this transition expression: what we can do is look up through the recipe, through what's been executed, and pull out some of those metrics. We had our operator record on that document the diameters of the spring, the head, and the base, so what we're gonna do is grab those values and apply some rudimentary logic. So Tony, "Measure" is the name of that step, of that phase, so you're gonna type "Measure" and then ".diameter" and off we go. In this case, our good path is when the spring is smaller than the head and the spring is smaller than the base.
17:35
Tony Nevshemal: Right, so when the spring...
17:37
Doug Brandl: And, nope, we don't need to...
17:44
Tony Nevshemal: Oh yeah. Just less than...
17:46
Doug Brandl: Yeah, maybe too tight.
17:47
Tony Nevshemal: "Measure.Diameter_Spring" is... "Measure.Diameter_Base" right?
18:19
Doug Brandl: Yes.
18:20
Tony Nevshemal: Okay.
18:21
Doug Brandl: Go ahead and save that. And then let's do the same for... Let's do the inverse, the logic inverse of that for this red path, so let's just call this "rejects."
18:31
Tony Nevshemal: Reject.
18:31
Doug Brandl: Reject measurement. And then our transition expression is going to be when the spring is greater than or equal to the base, or the spring is greater than or equal to the head.
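The two transition expressions are logical inverses of each other, keyed off the diameters recorded during the Measure phase. A minimal Python sketch of the same predicates (function and parameter names are illustrative; the real recipe uses the batch editor's transition-expression syntax):

```python
def good_measurements(diameter_spring, diameter_head, diameter_base):
    # Happy path: the spring must fit inside both the head and the base.
    return diameter_spring < diameter_head and diameter_spring < diameter_base

def reject_measurements(diameter_spring, diameter_head, diameter_base):
    # Reject path: the logical inverse of the happy path.
    return diameter_spring >= diameter_head or diameter_spring >= diameter_base
```

In the demo run later in the session (spring 6.02, head 6.2, base 5.9), the reject path fires because the spring is not smaller than the base.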
19:00
Tony Nevshemal: Spring, is greater than or equal to. What did I do first?
19:11
Doug Brandl: You did the head first.
19:12
Tony Nevshemal: Alright, so this is base. Okay.
19:13
Doug Brandl: Perfect. Save. And then what do we... What do you think we should do?
19:19
Tony Nevshemal: Well, let's say... So if it fails its measurements, that means you're not able to assemble. So we should probably tell the assemblers.
19:27
Doug Brandl: Yeah, probably don't wanna waste their time.
19:28
Tony Nevshemal: Right.
19:28
Doug Brandl: Yeah. So let's throw in a user message. So we have some built in... You have like a whole standard library of phases that you can drop in. And in this case I've configured it so that our assembly station can have a user message. So if you can just click that, drag it over into that unit procedure and connect it. And let's go ahead and configure it.
20:00
Tony Nevshemal: So we'll call this "notify"?
20:02
Doug Brandl: Yeah, like "notify operator" or something.
20:04
Tony Nevshemal: Yeah. Okay.
20:14
Doug Brandl: And then let's just give them a message down at the bottom where it says "parameter value."
20:21
Tony Nevshemal: Yeah. What do we wanna say here?
20:24
Doug Brandl: Let's just say "assembly not possible."
20:25
Tony Nevshemal: Okay.
20:26
Doug Brandl: We'll keep it simple. In your own projects, I'm sure that you'd probably wanna put more in there. And then go ahead and save that.
20:33
Tony Nevshemal: Yep.
20:33
Doug Brandl: So I'm not covering it. But you can also do calculations where you can pull in values. So a lot of our phases have that. Yeah, let's go ahead and require acknowledgement on it.
20:43
Tony Nevshemal: Yeah.
20:44
Doug Brandl: There's a lot of ability to make it dynamic so it's not all static. It's not like you're always gonna say the same thing. Sometimes you want to include values from previous steps or maybe include batch parameters as part of the message or part of any other phase. So we do have also the ability to include that as part of like a calculation. But we're not doing that here. So let's go ahead and hit save.
21:05
Tony Nevshemal: Alright.
21:08
Doug Brandl: And then we're gonna put a transition on this. So every phase needs to have a transition after it's done. And in this case, we're just gonna say "complete." Once the notification has been sent and this phase is... The execution of it is complete, we'll continue on and we'll terminate the batch. So you can go ahead and insert suggested here. And what this does is it's gonna look at the link up and just say whenever that step is complete. And this is good. We'll go ahead and save it, and then put on a terminator in the logic controls on the...
21:39
Tony Nevshemal: Let's try it without a terminator.
21:41
Doug Brandl: We can't do it.
21:42
Tony Nevshemal: Can we validate it?
21:42
Doug Brandl: Yeah, you wanna validate it? So if you don't do this, we do have some validation of our recipes where it'll look at it and it'll tell you what's wrong. And in this case, it's saying the assembly unit procedure, UP5 transition needs to be followed by something.
22:00
Tony Nevshemal: Okay, cool.
22:00
Doug Brandl: Let's go ahead and drag the terminator on and connect it. And then let's validate. Again, make sure that that resolved that issue. Recipe is valid. Cool beans. Let's save it.
22:16
Tony Nevshemal: Alright.
22:21
Doug Brandl: Alright.
22:22
Tony Nevshemal: Right. Let's run it again.
22:23
Doug Brandl: Yeah, so we'll fly through this for the second time so that we can get to questions since we've got four minutes to go. So, alright. This is gonna be the world's fastest 3D printer here. I'm gonna go ahead and kill all of these old orders. These are on the old recipes. So we do version our recipes. So these are using version 61 of that recipe. We're going to reset this and I'm gonna go retrieve some more orders from our ERP system and that'll be version 62. So refresh orders right here. Alright. So these are the same steps. I'm gonna go fast for the sake of brevity.
23:04
Tony Nevshemal: Let's quickly review them.
23:05
Doug Brandl: Yep. Oh, this looks great. We've seen this one before. Check, check, check. Check. E-sign. I'll go in as an admin. Password.
23:21
Tony Nevshemal: Cool.
23:22
Doug Brandl: Cool, cool, cool. Close those.
23:24
Tony Nevshemal: It's printing.
23:25
Doug Brandl: Yeah, let's go over to our print. Beep boop, scan the lot. We're printing. We are printing at 50 layers a second.
23:36
Tony Nevshemal: Yeah. It's screaming.
23:37
Doug Brandl: This is a fast printer. I can tell who has a 3D printer in here and knows how frustratingly slow that they are. Alright. We're gonna have an e-signature.
23:50
Tony Nevshemal: Okay.
23:51
Doug Brandl: Verify it didn't turn to spaghetti. So I'm gonna go ahead and sign that one as well. Tony, it didn't turn to spaghetti, did it?
24:00
Tony Nevshemal: It did not. We have something.
24:03
Doug Brandl: Alright. So now we're on our measure step. So this is after this step is where we added our transition. So let's go ahead and measure the head outer diameter.
24:16
Tony Nevshemal: Okay, that is 6.2.
24:20
Doug Brandl: 6.2. Let's measure the base.
24:25
Tony Nevshemal: That is 5.9.
24:26
Doug Brandl: Whoa. Now let's do the spring.
24:33
Tony Nevshemal: That is 6.02.
24:35
Doug Brandl: 6.02. Alright. So clearly we are gonna violate our recipe. So when I do that, let's go ahead and take a look and see what happened. So right here, I expand this. Sorry, let me make this a little bit bigger here. I just like watching him walk back and forth with the shadow. So here you can see this transition. So we proceeded down this route here and you can look at this transition and you can see what specifically caused us to go down whatever path it was. And in this case, it was our spring is greater than or equal to our base. Our base was too tiny or our spring is too big. And then we have our notification. So that notification's up on the top right here. And we did require acknowledgement. So I'm gonna go ahead and sign in as an admin. Password.
25:35
Doug Brandl: And here we have our... Just a standard batch message list. This is, again, one of our components where I can click on it. Assembly not possible. I'm gonna acknowledge that. And again, all of this is tracked to the EBR. There's an awful lot that I wanna show you guys as it relates to our EBR, as it relates to our trace graph. I'll hit the trace graph really fast and then I think we're gonna have to go move on to Q&A. And if you want more you can come over to our booth and I would be happy to show this to you. Alright. So here I'm looking at all of the different types of filament, all the different batches. So here what I'll do is I'll slide that over. So right here I can see we have a completed bobblehead. This right here is the assembly unit procedure for that particular batch.
26:24
Doug Brandl: Looks like it was one that I had done on the fourth, I guess. I could see which filament I consumed. I can get the lot number for the base. So I create... As part of this step, I'm also creating that lot. I can see everything in and I can see all of the material that is created as part of it. And then if I click here, I can see all of the five other batches that use this same material. So this is really useful if you're looking, if you're doing any investigations for quality, for recall, any of that stuff. So this is a really good way to visualize, what did I use? I received green filament and I have it on this particular assembly, this batch right here. So I know all of the bobbleheads that came out that used that specific green filament. And this trace, there's not a realistic limit on this. So it does run back. You can chain all of your material transfers back and forth. I think that's all I've got time to show. Does anybody have any questions? I think it's the Q&A time. Yeah, go for it. Oh, she's going to give you a mic. Yeah.
27:39
Audience Member 1: The object model that you have, the recipe, like how accessible is that? Let's say that I've got basically something that's dynamically generating parts from like a pick-and-place machine, right? And I'm not gonna have all that data until it hits the end of the line as a transaction. Can I write all of that at once? Can I then query essentially every transaction I've had for these measurements and get something like capability? Or am I gonna need to layer in other modules like traceability and SPC to do that kind of stuff?
28:07
Doug Brandl: So if you're doing anything with material tracking, you're gonna need the Track and Trace Module. So material transfers as part of the batch. So you could do all the built-in phases, but when it comes to material in and material out and tracking any of that, and suppose you've got 100 different types of dynamic materials, you can set those for the material in property on the phase. So if you want, I can show that to you probably over at our booth. I can show you what that looks like. But yes, you can do that. But it does require the Track and Trace Module.
28:41
Audience Member 1: Okay.
28:42
Doug Brandl: Yeah.
28:43
Audience Member 2: Hi. Is there an array-based entry? I see the graphical method to put all these essentially routes in, but is there an array base or some other way that you could do it in bulk and not all the clicking and dragging?
28:57
Doug Brandl: Yeah, you can script this too. You can script the creation of recipes, of batches. You could pull it, some people even pull it out of their ERP system and dynamically create recipes. So all of this is backed. So we have this frontend here, we have these components. If you don't want to click and drag and you've got some more complicated system, you can script the creation of all of these recipes. And the execution. Yeah.
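As a rough sketch of the scripted route, a recipe chart can be assembled as plain data, for example from rows pulled out of an ERP system, before being handed to the batch module's scripting API. Every helper and field name below is hypothetical; the module exposes its own scripting functions, so treat this as the shape of the idea, not the actual API:

```python
def build_recipe(name, phases):
    """Assemble a recipe definition as plain data: each phase is followed
    by a transition, and the chart ends in a terminator (hypothetical
    structure, for illustration only)."""
    chart = []
    for phase in phases:
        chart.append({"type": "phase", "name": phase["name"],
                      "params": phase.get("params", {})})
        # Default transition: fire when the phase reports completion.
        chart.append({"type": "transition",
                      "expression": phase.get("done", phase["name"] + ".complete")})
    chart.append({"type": "terminator"})  # validation requires a terminator
    return {"name": name, "chart": chart}

# Phases as they might arrive from an ERP query.
erp_phases = [
    {"name": "Print", "params": {"layers_per_sec": 50}},
    {"name": "Measure"},
]
recipe = build_recipe("Bobblehead-v62", erp_phases)
```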
29:27
Audience Member 3: Does the system have a functionality to do order maintenance to modify existing batches in run to reflect the new recipe?
29:36
Doug Brandl: At the moment, I don't believe we do. Yeah. I'll let Tom answer that.
29:41
Tom Hechtman: To start a recipe, that's the ISA-88 model. So you have your master recipe and you create a control recipe. So once that... Sorry. Once you create that control recipe and you're executing it, it's isolated from the master recipe at that point. Now, if you modify phases or templates, we have templates and different things like that, you do have ways to push those changes down into your recipe and such.
30:13
Audience Member 4: And you can create something... Are there already existing scripts to help facilitate that that you need to customize for your use case?
30:20
Doug Brandl: Yes. So I definitely encourage you to reach out for the Quick Start program, reach out to our design consultation team. They've got a lot of experience doing that.
30:29
Audience Member 4: Awesome, thank you.
30:31
Doug Brandl: Yeah. Any more questions you guys have, please come visit us over at our booth and I really, really, really encourage you come on Thursday to Tom and Mark's presentation. It is very exciting what they're doing. So show up if you can. Alright, thank you guys.
30:47
Tony Nevshemal: Thank you.


This session provides an overview of Cirrus Link, including MQTT architectures, the MQTT modules, and their use cases. It also touches on MQTT Sparkplug B and the Unified Namespace, as well as cloud connectivity through the Cloud Injector modules and IoT Bridge products.
Transcript:
00:00
Nathan Davenport: My name's Nathan Davenport. I'm the Director of Sales Engineering here at Cirrus Link. What do we do? What do we build? How do we integrate into the Ignition platform? And where did we come from? So, Cirrus Link provides MQTT-centric software for industrial automation solutions. We've been doing this stuff a long time. We have 85-plus combined years of MQTT experience on staff. We have the co-inventor. You guys know who Arlen Nipper is, the co-inventor of MQTT. He is our president and CTO, and he himself has tons of experience. We were founded in 2012. As you guys know, we are strategic partners with Inductive Automation. We build a lot of different modules, MQTT- and Sparkplug-centric, for the platform. And we also created the open-source Sparkplug specification. So, we created it, and then we basically gave it away to the Eclipse Foundation, and we helped kind of shepherd that stuff through. What is MQTT? So, the spec itself describes how to implement a message-oriented middleware Pub/Sub infrastructure. What in the world does that mean? I'll talk about that a little bit more here in a second.
01:05
Nathan Davenport: Where is it used? It's used all over the place. Every major cloud provider has some type of MQTT endpoint, right? You have AWS IoT Core, you have Azure IoT Hub, and you have Google, well, you used to have the Google endpoint, but I think they retired that thing. But we do have IBM Watson IoT, right? They all use MQTT. What is so special about MQTT? Well, it was originally designed for real-time, mission-critical SCADA systems. It's simple, it's efficient, it's stateful, it's open. What does the basic MQTT architecture look like? So, in this particular example, I have two clients and a server in the middle. So, the first client connects. The other client's connected as well. Client number one subscribes on #. # is the wildcard topic, right, within the MQTT topic namespace. So, any message published on any topic should be received by this client, MQTT client one. So, client two publishes on a topic of hello/ICC with a payload of "hello ICC 2024!" The server then gets that message, looks up in its table to figure out who has the appropriate subscription such that they should be getting that message, delivers said message to MQTT client one, and it gets the message on the topic hello/ICC, with "hello ICC 2024!" as the payload.
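The flow just described can be sketched with a tiny in-memory stand-in for the MQTT server. This is not a real MQTT implementation, just the topic-matching and routing idea, including the # and + wildcards:

```python
class TinyBroker:
    """Minimal in-memory stand-in for an MQTT server: clients subscribe
    on topic filters, and published messages are routed to every client
    whose filter matches the topic."""

    def __init__(self):
        self.subscriptions = []  # (client_name, topic_filter)
        self.delivered = []      # (client_name, topic, payload)

    def subscribe(self, client, topic_filter):
        self.subscriptions.append((client, topic_filter))

    def publish(self, topic, payload):
        # Look up who has a matching subscription and deliver the message.
        for client, topic_filter in self.subscriptions:
            if self._matches(topic_filter, topic):
                self.delivered.append((client, topic, payload))

    @staticmethod
    def _matches(filt, topic):
        # MQTT filter matching: '#' matches the remainder of the topic,
        # '+' matches exactly one level.
        f_parts, t_parts = filt.split("/"), topic.split("/")
        for i, f in enumerate(f_parts):
            if f == "#":
                return True
            if i >= len(t_parts):
                return False
            if f != "+" and f != t_parts[i]:
                return False
        return len(f_parts) == len(t_parts)

broker = TinyBroker()
broker.subscribe("client1", "#")                  # client one subscribes on the wildcard
broker.publish("hello/ICC", "hello ICC 2024!")    # client two publishes
# client1 receives the message on hello/ICC
```

Matching `#` against every topic is exactly why client one receives the hello/ICC message.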
02:29
Nathan Davenport: So, we have three core MQTT modules for the Ignition platform. I will give you a summary here. We'll look at a little bit of a topology diagram next, and then I'm gonna pivot over to the Ignition web portal so you guys can get a little bit of context, and I can show you some of the features within Engine, Transmission, and Distributor. So, what does Transmission do? Transmission is the MQTT client that runs on the Ignition platform. Its job is to basically consume tags. It consumes tags, converts that tag data into Sparkplug messages, puts that stuff on the wire, sends it off to MQTT Distributor. What is Distributor? Distributor is your on-platform MQTT server, running right on the Ignition gateway. It can handle up to 250 clients simultaneously. Engine is primarily a consumer of data. It subscribes on the appropriate topics, gets Sparkplug messages in, converts that data basically into tags, and renders those process variables as tags in the Engine tag provider within the Ignition tag subsystem. So, what does an Ignition-based MQTT architecture actually look like? Well, you have to start with a server. So, we have the MQTT server here in the middle, and we have a host client as well.
03:47
Nathan Davenport: That's a Sparkplug host client. Those two, the server and the client, are both on the same gateway here in this particular architecture. And then for this topology diagram, I have two edge clients. One's running the Edge SKU, one's running full-blown, talking to your PLCs, RTUs, and so forth. What does that mean? Well, Engine's the host application. Distributor is the MQTT server. Transmission is the Sparkplug edge client. So now, I'm gonna pivot over here so I can talk a little bit about the features within Engine. So what I wanna show you guys first are the default namespaces within Engine. So, the namespaces really define which MQTT topics the Engine client subscribes on. So, Sparkplug B, for example, means if you have this namespace enabled, the Engine client is going to connect. It's going to subscribe on spBv1.0/#. It's gonna get all the Sparkplug messages in from all of your edge clients across all of your plant floors out in the field, and so forth. These others are a little bit less interesting, but really, all you need to know is that the namespace defines the subscriptions that the MQTT client makes.
05:04
Nathan Davenport: That's how we get data into the platform. I don't think we talk enough about custom namespaces, really, within Engine. Custom namespaces, however, are specific to taking in generic MQTT data on any topic. So, this is non-Sparkplug data, right? So, let's say you have a bunch of MQTT devices in the field. None of them speak Sparkplug at all. They're all publishing maybe JSON payloads, right? So, custom namespaces allow you to define the topics that you want to subscribe on in order to get this data from your edge clients and route it to the appropriate location within the Engine tag provider. So, this one happens to be subscribing on B/#. Not super interesting. And within custom namespaces, we have the ability, if it is a JSON payload, we can basically parse the object, take each individual object property, and make that a separate tag within the subsystem itself. If you don't have JSON parsing turned on, you're basically gonna get the entire string payload represented as a string within the tag, right? So, if you're publishing JSON, take advantage of the auto JSON parsing. Know that it's pretty strict. If you don't have JSON that is exactly formed properly, we're gonna choke, we're probably gonna throw some errors, and you're not gonna get your tags into your Engine provider.
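The auto-JSON-parsing behavior can be illustrated in Python. The tag-path layout here is an assumption for the sketch; the module's actual mapping rules may differ:

```python
import json

def payload_to_tags(topic, payload, parse_json=True):
    """Turn an MQTT payload into {tag_path: value} entries, roughly the
    way a custom namespace might: with JSON parsing on, each object
    property becomes its own tag; with it off, the raw string lands in
    one string tag."""
    base = topic  # assume the tag folder mirrors the topic
    if not parse_json:
        return {base: payload}
    try:
        obj = json.loads(payload)
    except json.JSONDecodeError:
        # Strict: malformed JSON means no tags are created.
        raise ValueError("malformed JSON payload; no tags created")

    def flatten(prefix, value):
        if isinstance(value, dict):
            for key, val in value.items():
                yield from flatten(prefix + "/" + key, val)
        else:
            yield prefix, value

    return dict(flatten(base, obj))

tags = payload_to_tags("B/pump1", '{"flow": 12.5, "status": {"run": true}}')
# → {"B/pump1/flow": 12.5, "B/pump1/status/run": True}
```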
06:26
Nathan Davenport: All right. So, I'm gonna talk about sets and servers briefly, and then we'll move on to Transmission. So, what is a set? We use server sets in order to define redundant sets of MQTT servers. In this particular case, Engine's connected to two MQTT servers simultaneously; they are in two different server sets, and therefore, we connect to both of them. But if I were to put these two servers into a single server set, so for example, if I swapped out the Chariot server set and I put default in, we would connect to the first server available within that set. If that guy dies, we're gonna walk to the next server. We're not necessarily gonna walk back; we're gonna move to the server that's available, we're gonna stay there until something changes, and then we will, if that server dies, walk to the next server. This can be two, this can be three, this can be N. Doesn't matter. All right. Transmission. So, Engine, like I said, is primarily a consumer of data, right? Transmission, on the other hand, is publishing tag data. So, it has servers too, just like Engine does. It has server sets as well, just like Engine does.
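The walk-forward failover behavior within a single server set can be sketched like this (names and structure illustrative, not the module's actual internals):

```python
class ServerSet:
    """Failover within one server set: connect to the first available
    server; on failure, walk forward (wrapping around) to the next
    available one and stay there -- there is no automatic walk back."""

    def __init__(self, servers):
        self.servers = servers
        self.index = 0  # start on the first server in the set

    def current(self):
        return self.servers[self.index]

    def fail_over(self, available):
        # Walk forward through the set until an available server is found.
        n = len(self.servers)
        for step in range(1, n + 1):
            candidate = (self.index + step) % n
            if self.servers[candidate] in available:
                self.index = candidate
                return self.current()
        raise ConnectionError("no server in the set is available")

s = ServerSet(["default", "chariot", "backup"])
s.fail_over(available={"chariot", "backup"})  # "default" died → walk to "chariot"
```

Even when "default" comes back, the client stays on "chariot" until that server dies in turn, matching the stay-put behavior described above.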
07:41
Nathan Davenport: And it uses those exactly in the same way. You define your redundant pairs on Transmission and on Engine in exactly the same way. The Engine server sets came along, I don't know, one or two versions back. And it wasn't really a use case we were thinking of. Logs were pretty spammy if you had Engine pointing to multiple servers, and we weren't able to connect to all of them simultaneously. So, we thought, well, we already have a mechanism for that. Let's port server sets to Engine. So, we did. Like I said, the server set really is kind of the glue between the server definition and the thing you're trying to consume or publish. So, in this case, the server set binds a transmitter, we call it, to a server. So, in this particular case, I have two transmitters. I'm kind of cheating. I'm making it look like I have two edge gateways, two edge nodes on two different boxes, when in fact, I really have two edge nodes on the same gateway. As you can see, you point to a tag provider, you point to a tag path. Tag pacing period is kind of a cool thing. That is the amount of time that we wait in order to aggregate all tag changes into a single message.
08:53
Nathan Davenport: So, imagine if you have 10,000 tag changes within a second. We're gonna package up all 10,000 tag changes into one message, put it on the wire, on the appropriate Sparkplug topics, publish it out to your consumer. We have some optimization settings in here in order to do things like maybe you wanna compress your data. Maybe you wanna use aliases instead. You have very long tag names. We can swap in an integer representation for a particular tag name and just send that integer ID, if you will, across the wire. And so, we'll publish the birth, full context along with your alias once it hits MQTT Engine. Engine caches all of that stuff in a lookup table, and it knows exactly which alias corresponds to exactly which tag. There's a few other things in here, too. We do natively support UDTs. So, if you wanna send UDTs as true UDTs, you have a UDT defined on the edge. You want that UDT at Engine, you would turn convert off and let those UDTs be published. Very valuable for cases where the models, the UDT definitions need to flow along with the data so that the consuming side can rebuild those models, right?
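The alias optimization works roughly like this sketch: the birth message publishes the name/alias pairs once, and steady-state data messages carry only the integer alias, which the consumer resolves from its cached birth. Payload shapes here are simplified dictionaries, not the actual Sparkplug encoding:

```python
class AliasPublisher:
    """Edge side: assign each tag name an integer alias, publish the full
    mapping once in the birth, then send only aliases in data messages."""

    def __init__(self, tag_names):
        self.aliases = {name: i for i, name in enumerate(tag_names)}

    def birth(self):
        # Full context: name + alias pairs, published once on connect.
        return [{"name": n, "alias": a} for n, a in self.aliases.items()]

    def data(self, changes):
        # Steady state: just alias + value, saving the long tag names.
        return [{"alias": self.aliases[n], "value": v} for n, v in changes.items()]

class AliasConsumer:
    """Host side: cache the birth in a lookup table, then resolve each
    alias back to its tag name."""

    def __init__(self, birth_metrics):
        self.lookup = {m["alias"]: m["name"] for m in birth_metrics}

    def resolve(self, data_metrics):
        return {self.lookup[m["alias"]]: m["value"] for m in data_metrics}

pub = AliasPublisher(["Plant/Line1/Pump/Speed", "Plant/Line1/Pump/Running"])
con = AliasConsumer(pub.birth())
tags = con.resolve(pub.data({"Plant/Line1/Pump/Speed": 42}))
```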
10:08
Nathan Davenport: Rebuild instances of those models. All right. And then a few other settings around history, right? Of course, we support history. If the edge client has a disconnect at the edge, so your network drops, we'll start storing every tag change either in memory or on disk so that when that connection is resumed, we can package that data up, put it on the wire, get it over to your consuming client. Also, even if your connection to the MQTT server is good, but your consuming client, Engine, is not there, we use what we call a state message between Engine and Transmission so that Transmission can basically realize when Engine has gone away. So the server's there, but maybe Engine has died. Gateway failover, somebody pulled the power cord, right? Whatever. Transmission will still basically take itself offline and store that data until the primary consumer comes back. Because if you're publishing live data, but there's nobody there to consume it, it's gone, right? It's gone forever. Hopefully you have history at the edge. Okay. All right, you guys, let's get back to the presentation at hand. So we went through this already. So which markets are we in? We're in about every market that you can think of.
11:28
Nathan Davenport: This is just the top markets for the last 12 months. We've seen a significant uptick in manufacturing, which is great news for us, right? Manufacturing is really starting to pick up this technology. Use it broadly. Deploy it broadly. That's good news. We have what we call Cloud Injector Modules as well. These do not rely on MQTT infrastructure in the slightest. You don't need an MQTT server. You don't need Engine. You don't need Transmission. You simply need the injectors to point to your tags and consume tag change events, convert that to basically Sparkplug JSON, and send it off to whichever cloud endpoint you wanna push it to. So we support Google, Azure, AWS. Then all the endpoints there, I'll tick off a few: Kinesis on the AWS side. That's probably the most popular endpoint to publish data to. We support DynamoDB, but I think that's used far, far less frequently than Kinesis Streams. We support the Firehose configuration within Kinesis as well. On the Azure side, it is IoT Hub, Azure IoT Hub and Azure Event Hub, probably the two most popular endpoints to publish data to. We also support Azure Edge and IoT Central, but those are far less popular.
12:54
Nathan Davenport: So what's so great about these things? You can deploy 'em anywhere you want to. You wanna put them on the edge and push data to the cloud? Go ahead. You wanna put it on the central gateway where we aggregate all of your data and push it up from the central gateway? You can do that as well. There are some benefits to doing that. We can more tightly pack the data on the central gateway and ensure that those messages are as large as they can be so that when we push it, we're right up against the max message size without having to start splitting messages. That's saving you money, right? Every time you push a message, the little cash register turns over. So better to put it on the central gateway. We can better pack your data. We can better optimize those payloads, and we can ensure you're paying the smallest amount of money for the data that you're pushing. Same efficient tag reporting scheme as transmission. It's just that we basically put it into a JSON payload for you. The Chariot MQTT server.
13:50
Nathan Davenport: So in the case where maybe you need to split your MQTT server off from your Ignition gateway, maybe the load is too high. Remember, Distributor has a max client count of 250 clients. So if you're talking thousands of clients, chances are you are gonna need Chariot. So what's so great about Chariot? Well, we've had previous versions of Chariot. This is Chariot V2, we call it. Chariot V1 was kind of using some libraries that we didn't write from scratch. So what did we do? We clean-room wrote an MQTT server from the ground up. Purpose-built for OT and industrial applications. Built-in MQTT and network debugging tools. This is probably the coolest thing in Chariot, in my opinion: it is Sparkplug aware, right? And every time we find something crazy in the field, we're debugging production outages, we're debugging network connectivity issues, configuration issues, we go build in what we call alerts so that we have basically monitors that can run against your data flow and identify issues like colliding client IDs at the pure MQTT level, right? When an MQTT client connects, it has to have a unique ID.
15:00
Nathan Davenport: If it does not, and you come in, the old client's getting bounced. The new one wins; the currently connected client loses. So we can identify those things for you. In Sparkplug land, we have what are called group Sparkplug ID collisions, or group plus edge node collisions. Your unique identifier within the Sparkplug namespace is your group plus your edge node ID. And so we can run into cases where either your static configuration or your dynamic configuration basically stands up two Sparkplug edge clients with exactly those same IDs, group plus edge node. And when that happens, consuming applications like Engine, host applications, get confused because they're getting two duplicated data streams from two different clients, and the sequence numbers, which should be flowing in order, eventually get out of order across the two duplicated data streams, and Engine doesn't know what to do. And so it says, hey, I got a sequence number out of order. Actually, Engine has the ability to reorder messages too, for broker implementations that won't deliver messages in order, but it's never gonna make it out of this hole. And so what you get stuck in is what we call a rebirth storm. Sequence numbers come in and out of order, Engine squawks, requests a rebirth, those clients spin up again, publish messages again, we're back in the same state, it's a nightmare.
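The sequence-number behavior can be sketched as a simplified host-side check. A real Sparkplug host, as noted, also buffers briefly to reorder; this just shows why a persistent mismatch, such as two colliding edge nodes interleaving their streams, ends in a rebirth request:

```python
class SparkplugSequenceChecker:
    """Simplified host-side check: Sparkplug data messages carry a seq
    number that increments 0..255 and wraps. A mismatch means messages
    were lost or duplicated streams are interleaving, so the host
    requests a rebirth, which resets the sequence."""

    def __init__(self):
        self.expected = 0

    def on_message(self, seq):
        if seq == self.expected:
            self.expected = (self.expected + 1) % 256  # wrap at 256
            return "ok"
        # Persistent mismatch: request a rebirth and start over.
        self.expected = 0
        return "request rebirth"
```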
16:24
Nathan Davenport: So anytime we find this stuff, we try to go build it into the product. I'll show you that here in a second. Easy to install, easy to configure, built on top of Java just like Ignition. So it's cross-platform, any Windows version, any Linux distro. Excuse me. We have either one-time perpetual licenses where you can buy the license up front and do the install yourself, or we offer Chariot as a marketplace offering in a cloud environment like Azure, a cloud environment like AWS. So in that case, you just go to the marketplace, you click, you deploy, you don't install anything, the software's already there. Licensing is basically a no-op because you're already licensed and you are getting billed via runtime by your cloud provider. Web-based administration, we've got a full web front end. It's fully backed by REST. Anything you can do in the Chariot UI, you can do via REST. Great for maybe we have a UI feature you want that we don't have. Hit the REST endpoint. Maybe you wanna spin up MQTT credentials on the fly after you've deployed. Let's say you have a thousand of 'em. You don't wanna go type that stuff into the UI? Hit the REST endpoint.
17:34
Nathan Davenport: Very, very, very flexible and valuable at deployment time, in my opinion. We support LDAP and Microsoft Active Directory integration. Highly secure. Of course, we have the ability to use TLS, right? Both for HTTP and MQTT. We're always making sure that we have all of your data being encrypted, whether it's web traffic, whether it's MQTT traffic. Now, before we get to that slide, let me show you Chariot here. So this is the Chariot front end. I'm purposely starting on the Sparkplug page because I kinda think this is one of the more interesting features within Chariot itself. It's Sparkplug aware, guys, right? So when your Sparkplug assets come online and publish messages, we know about them. We can discover exactly how many edge nodes you have. You have three. Only two happen to be online. They belong to one group. You have three host clients for some reason. We'll talk about that here in a second. And you've got three devices. Two of which are online. So you can get some information here, but it's far more interesting if we pivot over to... Oh, hold on. I think I lost. My session has been timed out, so let me log back in. There we go. Back to Sparkplug.
18:57
Nathan Davenport: Same data, just in list form. And I'm going to have to speed up because I'd like to give you some time for questions. I've got six minutes left. So I'm gonna try to blast through this stuff if I can. Cool stuff within the edge node space. You can see the edge node, exactly what its Sparkplug IDs are, whether it's using a primary host ID. This guy's not. That's probably a bad thing. What its client ID is, which IP it connected from. When was it last online and offline? How many metrics? What is a metric? A Sparkplug metric is an Ignition tag. This particular edge node has four tags associated with it and a single device. We also cache your birth messages and have some other interesting stuff in here. So if you have somebody complaining that "I'm not getting a tag value at Engine," you must not be publishing it. Well, go check the birth message. Is it in the birth message? If it's in the birth message, I published it. Let's get to the Engine side. Let's go figure out what's going on. Maybe the message didn't make it. Maybe Engine didn't like it. But clearly you can see exactly what we published here. If you were publishing UDT definitions, they would be in this node birth as well. You can copy it out. You can hit that via REST as well. I'm running out of time, so I can't tell you about some of those other cool features there.
20:08
Nathan Davenport: Raw MQTT view, this is kind of your underlying clients, right? For every Sparkplug client, there really is an underlying MQTT client. This is the basic MQTT data per client. So MQTT Engine's connected from this IP, subscribed on these topics. Here's the alerting that I was telling you guys about. It's about half MQTT specific. The other half is Sparkplug specific. Diagnostics is not all that interesting. Those are threads and so forth, so I'm not gonna show you that. Accounts. These are your admin accounts. Log in via REST. Log in via the UI. Manage your Chariot server, right? MQTT credentials. These are the credentials that you use to define exactly which topics your clients can publish and subscribe on. If it's # and #, that guy has root privileges. They can basically do anything they want, get all messages published on any topic, right? Server config. Just pure server stuff. Are we doing MQTT over TLS? That's typically 8883. Do we wanna use WebSockets, secure WebSockets, and so forth? Do you wanna allow anonymous? Please don't do that in production, folks. Let's not do that. The MQTT client. I'm gonna tease this thing I can't show you now. We've added an MQTT client to the Chariot server.
21:26
Nathan Davenport: Arlen and I will talk about this tomorrow at 2:45. Come check it out; it's gonna be awesome. These are kinda like your MQTT-spy equivalent within the Chariot server. Licensing: kinda boring. You can add your license, you can deactivate it, we can handle offline and online activations. System config: you need to upload some certificates, right? Do backups, do restores. All of that stuff is done right here. All right, let's keep going 'cause I'm running out of time. So just like we have the cloud injectors for the Ignition platform, we have what we call IoT bridges for different cloud solutions. We have three of them: one for Azure (specifically, it hits the Azure Digital Twins endpoint), one for AWS SiteWise, and one for Snowflake. And these are the solutions that require the UDT definitions to be published from the Transmission side, because if we don't have those models, we can't create your digital twin in ADT, we can't create your instances of said models in SiteWise, and we surely can't build the right views dynamically in Snowflake without those models. These bridges allow you to basically consume Sparkplug messages natively and then forward them off to the cloud endpoint of your choice.
22:41
Nathan Davenport: These are all deployable through cloud marketplaces. We don't yet offer an installable package that you can drop anywhere you want to. What are the new cool features that we're going to announce, that I keep teasing but not showing you? Guys, we keep getting a ton of requests around UNS. UNS is super popular, right? How in the world does it work with Sparkplug? What do I do with my Sparkplug IDs? Why is it that I can't get my metric path or my tag path into the topic? We've tried to solve all of these problems for you guys. Engine now has a UNS namespace configuration piece within the Sparkplug B namespace. It will allow you to lay your tags out exactly as you've always wanted to, without the Sparkplug overlay. We're gonna demo that tomorrow; come check it out. We have a UNS publisher for Transmission. This thing will take each one of your tags and publish one message per tag change, where the MQTT topic is the full tag path. So now you can say, well, what if I wanted to cherry-pick out five tags, push 'em out to my server and have them retained, and have the entire tag path as the MQTT topic? Because maybe I want one guy in IT to be able to consume these 10 tags and another guy in IT to be able to consume those 10 tags.
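The UNS publisher idea, one retained message per tag change with the full tag path as the MQTT topic, can be sketched as a mapping function. This is an illustration of the concept only, not Transmission's actual implementation; the provider-stripping rule and the `prefix` parameter are assumptions invented for the example.

```python
def tag_path_to_topic(tag_path: str, prefix: str = "") -> str:
    """Turn a full Ignition tag path into an MQTT topic: strip the
    "[provider]" prefix and optionally nest under a namespace prefix.
    (Hypothetical mapping; invented for illustration.)"""
    path = tag_path
    if path.startswith("["):                 # e.g. "[default]Site/Area/Temp"
        path = path.split("]", 1)[1]
    topic = path.strip("/")
    return f"{prefix}/{topic}".strip("/") if prefix else topic

# One message per tag change, topic == tag path:
print(tag_path_to_topic("[default]Dallas/Line1/Furnace/Temp"))
# Cherry-picked tags can be nested under a UNS root for per-branch ACLs:
print(tag_path_to_topic("Site/Tank/Level", prefix="acme/uns"))
```

Publishing each change retained means an IT consumer subscribing to just `acme/uns/Site/#` immediately gets the last known value for its branch and nothing else.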
24:07
Nathan Davenport: Now, via MQTT credentials and access control lists, you can lock that stuff down and you can publish exactly what you want: one tag per topic, one message. We have alarms over MQTT now. People have been asking for this for years, and we finally got to the point with platform support. Thank you, guys at IA. So we can get active alarms from Transmission to Engine, we can ack them back and forth, we can clear them, and so on. Then, as I teased, we have the best-in-class Chariot MQTT client available for free. So you guys can download the Chariot MQTT server, install that thing, and run it without paying us any money. We typically have a two-hour trial timer under which almost all of our features run, but the thin client, the MQTT client in Chariot, does not adhere to that timer. So you can run that thing 24 hours a day for the rest of your life, and you don't have to pay us any money. Thank you guys. And I don't think we have time for questions, but I am at booth number two, right down the hallway. Come talk to me. I would be happy to talk to you guys about this more. Thank you.


SiteSync leverages LoRaWAN sensor connectivity to allow industrial users to bring stranded assets and manual measurements into a central source of truth for data visualization, alarming, and advanced AI analysis, all powered by the Ignition platform. SiteSync enables field users to deploy IIoT sensors with the same ease as commercial IoT systems via preconfigured devices and QR codes, so that these digital transformation initiatives can be implemented at scale. In addition to LoRaWAN sensors, SiteSync recognizes that many end users have thousands of HART-compatible sensors, and the additional HART data is another stranded asset that can be used for digital transformation. SiteSync will introduce a new asset management tool focused on HART sensors, all powered through the Ignition platform.
Transcript:
00:00
Sarah Sonnier: Hi everyone. My name is Sarah. I'm here with SiteSync, and today we're gonna be talking about bringing stranded data into Ignition. Gimme a sec; my clicker's not working. There we go. Wrong way. So I'm Sarah; I'm a data scientist. I am also the lead developer of SiteSync. SiteSync is an Ignition... An easy way to get IIoT data into Ignition. And I don't meet a lot of data scientists in this field, so I'm gonna tell you a little bit about what I do.
00:36
Sarah Sonnier: Data scientists bring data in from different sources. They bring it together; they model it so that it is clean and usable to make insights out of. So I can create reports, dashboards, do machine learning, and send it off to my end user, who is gonna make actionable decisions off of it. But you can see from this donut chart, a lot of the time I'm spending is not doing the fun stuff of data science. It's not doing that modeling or predictive modeling; it's not doing machine learning or making those reports. A lot of the time I'm spending is collecting data and cleaning it, which is less than glamorous, especially in the IIoT field. I specifically work with IIoT and industrial data. This data is very disparate. It is everywhere. It can come from multiple different systems; it can come in many different formats. So a lot of my time is spent wrangling this data so that my end users can get value out of it.
01:34
Sarah Sonnier: And so this is a data science hierarchy of needs. If you're familiar with Maslow's hierarchy of needs, you can't reach self-actualization or be your best self unless you have a strong foundation. The same thing applies in data. If you don't have a strong foundation: where is this data coming from, does it have context, am I reliably getting it, is it the same measurement every time? If you don't have secure pipelines or trusted ways of getting that data from one platform to another, and if it's not easy to model, clean, and normalize, you're gonna have a hard time doing machine learning, doing reporting, and getting the value out of your data.
02:16
Sarah Sonnier: The whole reason we collect data is to be able to tell what's going on in a process and to be able to make our processes better. So if you don't have a really nice and strong base of your platform collecting that data, the value is diminished. This is what I look like often as I'm struggling with the bottom of my pyramid, because I'm having to go out there and actually do that collection process. As someone who is in a predictive field, I would never have been able to guess how many times I would have to wire a terminal block, mount something on a DIN rail, or assemble an edge computer to be able to get the data that I need to do this analysis for my end users. It's shocking to me, because this data can be tricky to get, especially stuff on the edge.
03:02
Sarah Sonnier: So, you need somewhere where this data is easy to process. Ignition is my data platform of choice. Whenever I have a request, I have someone come in and say, Hey, I have a problem, how do we tackle it? My answer is always, Can we do it in Ignition? And usually the answer is yes. The reason I like to do my data projects in Ignition specifically is because it helps me deal with the tough pieces of this pyramid: the collection, the modeling, the layers we'll go down through. It takes care of a lot of it so that I can handle the stuff at the top. So Ignition Perspective is the top layer. That is where we can do reporting and visualization really flexibly, where I can come in and show my end user, Hey, this is going on in your process right now, and this is what it looked like three weeks ago. It's a really easy way for me to quickly take the data from the source and show it to my end user.
04:04
Sarah Sonnier: Then we have modeling. In the data science world, you make models, or objects, of what you're trying to show, report on, and do machine learning on. The same thing comes natively in Ignition through UDTs. UDTs let you model the process, the instrument, the asset that you are tracking. The fact that it is built in here, and that I can do transforms and many other things at that level where the data's coming from, is huge. I can add context to that data, where it comes from, and as it gets sent off to other systems, that context is priceless.
04:35
Sarah Sonnier: Ignition is flexible. I'm able to do it in a bunch of different ways; and by it, I mean host it and move data around in different ways. I can pull in data in different ways. Flexibility is priceless to me when I have a bunch of different requests from different end users, and they're all trying to do different things, but the end goal they're trying to reach is getting value out of their data. And finally, Ignition is open, meaning I can pull data in from anywhere, really. I can pull it in from a SQL Server database; I can pull it in over OPC UA, MQTT, IoT devices. The fact that I can have one place where I can pull all my data into, deal with it flexibly, model it, visualize it, and export it for my end user is huge. And that makes my job as a data scientist so much easier. When my job as a data scientist is easier, that makes me a happy data scientist. I can spend less time down here trying to figure out how I'm getting my data in and when it's being measured, and spend more time doing analysis and delivering value for my end users.
05:39
Sarah Sonnier: So we're moving into Industry 4.0, and the promise of Industry 4.0 is that we can capture more data than ever. We can store it cheaply, and we can do that analysis, but we need to have the tools in place to be able to capture it. Something that is driving Industry 4.0 is that we can measure more things than ever, for cheaper than ever, which is really cool. Ignition is a great platform for Industry 4.0. You can come in and do your analysis; your countermeasure would be alarming and alerting; you can do responses in Ignition; and because it is so open and flexible, you're able to capture as many events as possible. It's scalable and structurable. SiteSync comes in and helps you capture more events and more insights than ever, through IIoT and gathering stranded assets into Ignition. So we're bringing that data in, and we expose it to you in your Ignition platform. Once it's in Ignition, you can do whatever you like with it, which is a beautiful thing. The speed that this is increasing at is crazy. The amount of data being generated is hard to fathom.
06:53
Sarah Sonnier: So who is SiteSync? SiteSync is an IIoT Ignition module that helps bring stranded data into Ignition. We got our start, as many good Inductive Automation stories do, through Arlen Nipper. Arlen brought us a Yokogawa Sushi Sensor, and he was asking for help with how to deploy it at an end site. As we were helping Arlen, we realized this really wasn't gonna be scalable. It was really tough to get this data into a platform, and one of the things we found was that there were a bunch of different platforms and a bunch of different places this data could go. So, for example, some vendors have clouds that they wanna do the analysis on. That's fine and good if you're doing residential IoT or commercial IoT. But if you're dealing with data at the control layer, cloud is kind of a no-go.
07:47
Sarah Sonnier: And if you're gonna send data up to the cloud, it's probably not gonna come back down to the person at the plant that actually needs that data. It's gonna come to people like me doing analysis, but it's not gonna be actionable for that person in the field. The other thing is that this data would traditionally go to a traditional system like a DCS. But this IIoT data doesn't behave like traditional instruments. You've got a lot of data; it comes up in a JSON format; there are a lot of attributes; and it doesn't check in at the rate that a traditional instrument would. These aren't continuous readings. So storing it in a DCS rarely makes sense, because it's not the same kind of data. This data is stuff about your process, where the stuff in the DCS is the process; it's telling you that this temperature is what's happening. But you can do supplemental measurements, and that's where you can get that value out of IoT. So this data needed a home. Where are you gonna put this data, especially on the industrial side?
08:51
Sarah Sonnier: So, SiteSync and Ignition is the home for your industrial data. It's a good place where you can bring it in and marry it with other parts of your process. Ignition is a great end platform for your data to come through. So we wanna create a nice landing space for the stranded data, the dark data, to have a nice place where it can be modeled. It's flexible, it's open, we can pull in anything we want, and we can realize that value, whether that's through sending it to another platform through Sparkplug, or through visualizations and dashboards. Ignition is a great place for this third kind of data.
09:30
Sarah Sonnier: I'm gonna talk a little bit about IoT trends; as you can see, it is steadily going up. There are 18 billion IoT devices installed today. It's a crazy number, and the number's crazy because it's really cheap and really easy to get these measurements. These are way easier to install than a traditional instrument. A traditional instrument is probably gonna be around a hundred thousand dollars, from speccing it out to the actual install to bringing it into your historian, where this is probably 1% of that cost to be able to install at one point and bring it somewhere, which is attractive. But as this velocity increases, you need to have a place where you can capture this data and get the value of the data. If it's just going into a data lake, that's nice, but how can we marry that data with other things about your process? Get that context to be able to deliver the value. We can see here that cellular is one of the biggest players in this; we're seeing a lot of cellular-enabled sensors. Another one is this LPWAN group of sensors, that's NB-IoT and LoRaWAN, both great for industrial applications.
10:44
Sarah Sonnier: Because we're measuring so much data about these processes, we need a good place to hold it, store it, and analyze it. Otherwise, what's the point of gathering it? Data is valuable, and we're able to measure things we were never able to measure before. It's just the learning curve of how we bring it all together. As a data scientist, this is very exciting to me, that I can get more data about my process and deliver more insights. I can say, Hey, something's going wrong here, where previously it was kind of a black box.
11:21
Sarah Sonnier: So I wanna go over four different use cases from end users who are deploying LoRaWAN and IoT devices, and how Ignition is helping them with their use cases. So back to our pyramid; we're gonna start at the bottom. The core thing is that Ignition is open, meaning I can pull any kind of data that I want into my Ignition environment. What we're looking at right here is a corrosion monitoring sensor. This corrosion monitoring sensor takes a measurement once, maybe twice, a day, and it just measures the thickness of a pipe. It's pretty cool. Traditionally, you would measure corrosion by going around doing operator rounds, taking a measurement to go off to a system. We had a customer install these on their pipes, and they were able to consistently get trendable data, meaning they were able to take a sample at the same time every day at the same exact location.
12:15
Sarah Sonnier: Which is huge in the world of data science, because if I don't know exactly how that measurement was taken, can I trust it? If I see one is significantly different than another, was it a different operator? Was it a different day? Was it a different time of day? How can I tell? By standardizing the measurements that are being taken, you're able to trend it, and being able to trend it is huge. This customer found that they had an erosion problem happening. It was slight, but they were able to see that after a cleaning happened on the pipe, the pipe got thinner. So they were able to come in and see, Hey, something happened between Monday and Tuesday. What happened? They brought data from their other processes into Ignition, and they were able to easily see, Hey, I know exactly what happened between Monday and Tuesday: we had a pipe cleaning. They would never have been able to put all of that together without something like this, an IoT sensor. We have so many use cases like this, where just starting to do monitoring, even just a little bit of monitoring, is so much more consistent than traditional polling, and being able to consistently take those measurements means that we can get better insights off of it.
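The Monday-to-Tuesday step change this customer caught is easy to express as a day-over-day comparison on the trendable readings. A minimal sketch, with invented thickness values and an assumed loss threshold:

```python
def thickness_alerts(readings, max_daily_loss=0.05):
    """readings: list of (day, thickness_mm), sampled at the same time and
    location each day. Flags any day-over-day wall loss exceeding the
    threshold (e.g. the step after a pipe cleaning). Threshold is invented."""
    alerts = []
    for (d1, t1), (d2, t2) in zip(readings, readings[1:]):
        if t1 - t2 > max_daily_loss:
            alerts.append((d2, round(t1 - t2, 3)))
    return alerts

# Cleaning ran Monday night; Tuesday's reading shows the step.
readings = [("Mon", 9.80), ("Tue", 9.60), ("Wed", 9.59)]
print(thickness_alerts(readings))  # [('Tue', 0.2)]
```

The whole point of sampling at the same time and place every day is that a simple difference like this becomes meaningful; with operator rounds, the operator-to-operator and time-of-day variation would swamp a 0.2 mm step.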
13:33
Sarah Sonnier: Ignition is flexible, really flexible, which is awesome. We had an end user here, and he was trying to monitor the power usage of different buildings on his campus. So he came to us and he's like, Hey, can we do this? Sure, absolutely. Working with an internal team with him to get this deployed, he wanted to do it all on-prem, all on the edge. He said, "Okay, great." So we started building his application to be able to do an analysis to say, "Hey, how much power am I using every 15 minutes," with a delta calculation on it. And he comes to us later and says, "Hey, actually, the team that I was working with, I've lost them. They've been reallocated to another project. I don't think I'm gonna do the project anymore." And we were able to flex, take all of the project, the logic, everything that we had built for him, and put it into a cloud application, which let him continue to gather his data. It's a little bit different 'cause this is not industrial data, but the flexibility of Ignition is huge for me, because I don't like doing double work. I was able to just bring that data straight into another platform, another Ignition one, and he was ready to go within 30 minutes, which was awesome.
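The 15-minute delta calculation she describes amounts to differencing a cumulative meter reading. A minimal sketch with invented sample values:

```python
def interval_usage(samples):
    """samples: list of (timestamp, cumulative_kwh) taken every 15 minutes.
    Returns the kWh consumed in each interval (the 'delta' calculation)."""
    return [round(b - a, 3) for (_, a), (_, b) in zip(samples, samples[1:])]

samples = [("08:00", 120.0), ("08:15", 120.8), ("08:30", 121.9), ("08:45", 122.1)]
print(interval_usage(samples))  # [0.8, 1.1, 0.2]
```

Because the logic only depends on the tag values, not on where the gateway runs, the same calculation moves unchanged from an edge deployment to a cloud-hosted one, which is the portability point being made here.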
14:42
Sarah Sonnier: Because he's now able to get his data, he was able to see as they closed a building on his campus, the power usage goes significantly down, which was really cool. He was able to see it in real time. He was also able to see, like as people came into a building, their power usage throughout the day; you could see it drop off exactly at 4:30. It was crazy. The flexibility lets me deliver to my end users the request and what they're trying to do. So they just wanna know what's going on in my process, and I can say yes with Ignition, which is awesome.
15:18
Sarah Sonnier: The next one is modeling. So in data science, modeling is very important. It means that I have a repeatable object that I can use every single time. I can also make changes to my object and apply them to everybody. It's object-oriented; as a programmer, I love this. So this is an MCC cabinet, and I'm able to pull in data from multiple different sources. Let me back up. This is the MCC cabinet over here, and there's a little sensor inside of it that's able to measure the temperature, humidity, and light within it, which is great. But what happens if the room gets hotter? We could say, "Oh, we can alarm when it gets hot inside," but if the AC goes out in the building, we're gonna get a lot of alerts. So what we ended up doing for this customer was a temperature delta. We were able to measure the ambient temperature within the room and do a calculation to say, Hey, is my cabinet significantly hot, or is my room significantly hot? Being able to use UDTs to model what this cabinet looks like, to alert and alarm right there, and to apply it to hundreds of MCCs is huge. It's a great time saver, but it also gives me a consistent format that I can do my analysis on and do automation on. I'm a huge fan of UDTs.
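The temperature-delta idea can be sketched as alarming on the cabinet's temperature relative to the room rather than against a fixed setpoint, which is what keeps a failed building AC from flooding you with alerts. The thresholds below are invented for illustration:

```python
def cabinet_alarm(cabinet_temp_c, ambient_temp_c, max_delta=15.0, max_abs=60.0):
    """Alarm only when the cabinet is hot *relative to the room*, or past an
    absolute safety limit. Both thresholds are assumed example values."""
    delta = cabinet_temp_c - ambient_temp_c
    return delta > max_delta or cabinet_temp_c > max_abs

print(cabinet_alarm(45.0, 40.0))  # hot room, normal delta -> no alarm
print(cabinet_alarm(45.0, 22.0))  # hot cabinet in a cool room -> alarm
```

Putting this expression on a UDT member means every one of the hundreds of MCC instances inherits the same alarm logic, and a change to the threshold propagates everywhere at once.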
16:35
Sarah Sonnier: Modeling makes data science possible. This one's a fun one. So SiteSync has a Perspective project where you can look at your asset health, deploy devices, and get a little diagnostics, and I had a customer that was deploying these. These are manual valve position sensors, and you have to calibrate them. There are a couple of different ways to do it, and all of them are a little bit tricky. It's not like installing a Ring doorbell; it's a little bit more complicated. So I had a customer, and they were deploying 300 of these at a site, and they called me up and said, "Hey, we're kind of having a hard time with this calibration process. Do you think we could add this to where we're doing this onboarding?" So in SiteSync, you can onboard these devices into your Ignition environment.
17:28
Sarah Sonnier: And I thought about it for a second, and I was like, "Sure, I think we could do that." They said, "Okay, well, we're gonna go to lunch. Let me know how it's going after lunch." And I was able to pull it together pretty quickly, and I was able to allow these users to calibrate in the field as they were going. And so I tested it out on my side; it all looked good. And then I get a call; I sit in the cube, and I get a call from the front desk, and they're like, "Someone from the field is calling." And I was like, "Okay." And it was an instrument tech, and they said, "Hey, I see a new button on the interface in Perspective; can I click it?" I was like, "Sure." And so together we were able to calibrate this valve within an hour or so of that request coming in, which is crazy.
18:11
Sarah Sonnier: And the instrument tech was so excited. He said, "This makes my life so much easier. I don't have to fuss with another app. I don't have to do this calibration process. You're able to just push this right to my Ignition project, right to my app. No crazy update process, just ready to go. That is huge." This end user was able to install all 300 of these by themselves without hiring a third-party contractor. It saved them something like $30,000 and gave them the confidence to go out and deploy their own IoT sensors to monitor their processes. I was able to flex and quickly apply changes to my interfaces to give updates. And then they say, Hey, can I see the calibration status on the same page? Absolutely. So this is what we ended up building.
19:02
Sarah Sonnier: They're able to come in and see, Hey, what's my current configuration? and very easily configure these in the field. Being able to flex with my customer and being able to meet their needs makes me a happy data scientist because I can help them, and that makes me happy.
19:20
Sarah Sonnier: So we've been talking a little bit about LoRaWAN and the Perspective side. I wanna show you... And we talked about how many devices are out there; something like 40 billion IoT devices are projected to be installed by 2030. To get to a scale like that, to be able to capture your data, you need to be able to easily onboard in a normalized fashion, so you know exactly: this is what my device is, this is what it's measuring, here's how it's modeled. If you're gonna deploy a large fleet of these, of anything, it needs to be standardized. I don't know if you've ever inherited a project where someone started Modbus mapping one way and then started Modbus mapping another way. We don't want that at the IoT scale, because there are so many devices and so much data. We need a strong foundation to be able to capture that. So this is... Oh. This is a video. I'm gonna get it.
20:28
Sarah Sonnier: This is a video of someone provisioning a device in SiteSync. So this is a Perspective-based project; it's using the native Perspective app. I'm able to quickly get in all of these device keys by scanning a QR code. We can talk about how complicated this is at the booth; it's very complicated. But I'm able to quickly onboard a sensor into your Ignition system, add context by giving information about where it goes and where it's installed, and over here, it flashed and showed that that device was instantly added to your tag provider as a UDT. It's that fast to bring a sensor on, have it contextualized in its correct format, and then we can quickly see data come in. It's about a minute from launching this to getting data in, and that's how fast you can add devices and add measurements to your Ignition system.
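The QR-code onboarding step boils down to parsing the device keys out of the scanned string. The field layout below is an assumption, loosely modeled on the LoRa Alliance device QR-code scheme, and is not SiteSync's actual format; the keys themselves are invented:

```python
def parse_lorawan_qr(qr: str) -> dict:
    """Parse an illustrative LoRaWAN onboarding QR code. Assumed layout
    (hypothetical, loosely based on the LoRa Alliance TR005 scheme):
        LW:D0:<JoinEUI>:<DevEUI>:<ProfileID>"""
    parts = qr.split(":")
    if parts[:2] != ["LW", "D0"] or len(parts) < 5:
        raise ValueError("not a LoRaWAN device QR code")
    return {"join_eui": parts[2], "dev_eui": parts[3], "profile_id": parts[4]}

qr = "LW:D0:1122334455667788:AABBCCDDEEFF0011:AABB1122"
device = parse_lorawan_qr(qr)
print(device["dev_eui"])  # AABBCCDDEEFF0011
```

With the keys parsed, the onboarding flow can register the device with the LoRaWAN network server and create the matching UDT instance in the tag provider in one step, which is why the whole process takes about a minute.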
21:21
Sarah Sonnier: There we go. We've done a lot of different LoRaWAN projects. We've worked with a lot of different companies that had different configurations. Because Ignition is so flexible, we're able to do it at any scale. Whatever you're looking for, whether it's an all-in-one edge gateway where you can come in and jam everything on one machine, a traditional Ignition server, or something like an enterprise deployment, we are able to help you bring value to your customers by bringing that IIoT data into Ignition. Once it's in Ignition, that's where it becomes fun. So I've been talking about IIoT data; I've been talking about LoRaWAN data. I'm gonna shift gears for a second.
22:11
Sarah Sonnier: I'm gonna talk about another kind of data. It's stranded data, more or less, but it's not IIoT; it's actually kind of older, but it has a lot of value. Data is data. So I wanna talk about HART. HART is Highway Addressable Remote Transducer, which doesn't mean a whole lot to me, but what I do know about it is that it runs on a 4-20 mA current loop. It is the largest industrial protocol, period. It is huge. It has an install base of 40 million devices, and those devices are critical instruments in the field.
22:54
Sarah Sonnier: That 40 million is significantly smaller than the 18 billion or 40 billion IIoT devices, and there's a reason for that: these are critical measurements that exist already in your process. With IIoT, it's easy to pop a couple of temperature sensors out there and figure out what's going on. This is the temperature transmitter; this is the valve position sensor. Something about HART, though, is that these are smart instruments, meaning you're pulling a measurement out of it, a primary variable, if you will. This is measuring how open or closed my valve is. But these devices have up to 240 variables within them that are able to tell you about your process: what's going on, maintenance, when it was calibrated. And it's all out there in the field, but because of existing data infrastructure, the data's not really being polled. It's kind of the same scenario as IIoT. Like, where does this data go? It doesn't really go into a DCS; it's not a primary variable. But it is interesting information about your process, and it really isn't pulled into a layer that can be analyzed easily. Well, in legacy systems; I'm sure there are newer systems that are much easier to pull this out of.
24:12
Sarah Sonnier: So we had an end user come to us. The end user was using our LoRaWAN Ignition module, and he asked, "Hey, I can see what the value is in this. I have a problem. Could we take a look and see if we could eliminate this process?" This process was: he, or his team, had to go into an asset management system and pull a CSV of every single valve. When you have hundreds of valves, that is a huge, monotonous, tedious task, pulling the status of everything in your process. And he said, "There's gotta be a better way to do it." And I agree. If your process is tedious, if it means a human has to go out and do that download, it's likely something will get missed or it will get pushed off for a more pressing task. He asked, "Could we bring this data into Ignition so we could do that alerting and alarming? We can pull it easily; we can send it off to other systems easily." And I said, "Sure," because Ignition is flexible, it is open, and it's easily modelable. We can absolutely do it.
25:19
Sarah Sonnier: So currently, as I mentioned earlier, you're typically bringing one or two variables into your control system. That's just because you don't wanna clog up your DCS, you don't have the resources to pull it all in, and honestly, PV, the primary variable, is what you're trying to bring in. In my case, this is a valve position sensor, and that PV is how open or closed it is. But because there are 240 HART variables, you're leaving 90% of the data about your process in the field.
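The "one or two variables out of many" point can be made concrete with a toy device map. Every variable name and value here is invented for illustration; it is not HartSync's data model:

```python
# Hypothetical HART device: a DCS typically polls only the primary variable
# (PV), while the smart instrument holds many more. All names/values invented.

hart_device = {
    "tag": "FV-101",
    "variables": {
        "PV": 62.5,                    # valve position, % open: what the DCS gets
        "SV": 21.3,                    # secondary variable, e.g. electronics temp
        "loop_current_ma": 14.0,
        "cycles_since_service": 18240,
        "last_calibration": "2023-04-12",
        "friction_trend": "rising",
    },
}

dcs_view = {"PV": hart_device["variables"]["PV"]}
stranded = {k: v for k, v in hart_device["variables"].items() if k not in dcs_view}

print(len(stranded), "of", len(hart_device["variables"]),
      "variables never leave the field")
```

The `stranded` dict is exactly the maintenance-and-diagnostics data she goes on to describe: it already exists in the instrument, it just never reaches a layer where it can be trended or alarmed on.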
25:54
Sarah Sonnier: This is data that is huge for preventative maintenance. This is data that you already have; you're generating it; it's in assets that you already own. We're just not pulling it into a system that makes predictive maintenance easy. As a data scientist, being able to get values like this, and being able to quickly alert and alarm and say, "Hey, I think this might need attention; we might need to order something." Being able to give that insight to my end user is huge. This is a gold mine for me, being able to deliver those insights. IIoT is like that for me because I can quickly get new measurements; these are measurements that already exist that I just can't get.
26:37
Sarah Sonnier: So we ended up building HartSync. This is something in beta; we're in active development, and it's a way to easily get that data stranded out in the field into your Ignition system. We're modeling it, getting it into UDTs based on what kind of device it is. So we're able to speak HART. We're able to come in and see exactly what's happening in your loop. If you're interested, we would love to talk to you about the beta group or what features you would like to see within this. The other thing is, I would love to talk about different hardware architectures, 'cause I'm seeing a lot of end users with different hardware architectures. So what does that look like for you today? If you're not pulling in HART data, I would love to talk about what that would look like. Would you be interested in something like that? So please come see me; I'm in a booth out there. We can talk about this. This is huge for me, because you're able to come in, you can get status, you can do requests, and you can talk to assets you already have.
27:40
Sarah Sonnier: And it's like that old commercial: "It's my money, and I need it now." This is your data; let's go get it. So bringing it into that home where you can do your predictive maintenance and everything like that, that's the value. It could be amazing.
28:00
Sarah Sonnier: In summary, Ignition plus SiteSync equals: your data has a home. We are able to bring in stranded assets, data about your process, data that doesn't belong anywhere else but can be easily and effectively married to other pieces of your process. And you can easily make insightful reports and make decisions off of it. If we go back down that pyramid, I'm able to collect it, I'm able to flexibly collect it from multiple different places, I'm able to model it, and I'm able to visualize it easily. When all of those are taken care of in Ignition for me, I'm able to do the fun stuff: machine learning, reporting, and whatever crazy dashboard request my boss comes up with, because he's always got one. But yeah, this is super impactful. This is gonna take your end user from having to do all of that to being able to just get the value out of the data. And yeah, thank you so much, and if we have any questions, I'll just take them.
29:09
Audience Member 1: I guess we'll just bark out the questions.
29:09
Sarah Sonnier: Sure.
29:10
Audience Member 1: Do you look at an IO-Link master or anything with, basically, I/O?
29:21
Sarah Sonnier: So I'm a data scientist. I am accidentally in this hardware space. Please do come talk at the booth with someone who can answer that question. But I can answer your data questions and your data accessory questions. I can hear you.
29:39
Audience Member 1: Well, yeah. I'm sure everyone else can hear me, probably.
29:48
Audience Member 2: I have a question.
29:49
Sarah Sonnier: Yeah.
29:54
Sarah Sonnier: The Things Network?
29:54
Audience Member 2: Yeah. With your product? Sorry.
29:55
Sarah Sonnier: Yes, absolutely.
29:57
Audience Member 2: And how does that work?
30:00
Sarah Sonnier: We have API integrations into all of the major LoRaWAN network servers. So we're able to quickly sync devices both to Ignition and your LoRaWAN network server.
30:07
Audience Member 2: Thanks.
30:10
Sarah Sonnier: Yeah.
30:23
Audience Member 3: Is HartSync ready to use?
30:32
Sarah Sonnier: So HartSync is a new product. We are in beta with it. We're in active development, so it's still being worked on. Do you have any, like, questions, comments, concerns?
30:39
Audience Member 3: Is it related to Y HARTs for pH?
30:44
Sarah Sonnier: It could be. We are getting requests for Y HART, and I would love to talk more about those use cases. Right now it is for traditional loops. So we're going through a mux, maybe a modem on the control loop, and being able to forward that data off.
31:00
Audience Member 3: Okay.
31:00
Sarah Sonnier: And primarily what I'm offering is a way to pull that into Ignition. Not that I don't understand, but I don't know all the configurations that could happen to get that data there.
31:08
Audience Member 3: Ah, okay. Cool. Thank you.
31:12
Sarah Sonnier: Thanks.
31:18
Audience Member 4: To follow up on the HART stuff: it's in beta right now, but is this a separate module, similar to SiteSync functionally?
31:26
Sarah Sonnier: Yes. Functionally, very similar, where the goal is to get that stranded data into Ignition as modeled. It's different in that it's a totally different protocol, but yes, same idea. It's a module you can install wherever you wanna run it: Edge, standard, wherever.
31:42
Audience Member 5: Thank you.
31:46
Sarah Sonnier: One more question.
31:48
Audience Member 6: How are you bridging the hardware gap on the analog interface that's going to the HART device to capture multiple devices? Because a lot of controllers will be able to integrate that and then provide it up to whatever DCS or SCADA system you have. How are you guys bridging that hardware gap?
32:08
Sarah Sonnier: I'm using a mux at this point, but I do wanna talk about what that looks like for other pieces. Essentially, if I can get access to that HART data, that's what I care about. Getting that data to me, that's another person's job.
32:22
Audience Member 7: So, I guess good job on the HART module. This is Karthik here, so, but...
32:31
Sarah Sonnier: Hi Karthik.
32:32
Audience Member 7: Hello. So I wanted to ask you: I know we are gonna capture the data here, but have you thought about how you're gonna integrate the data, like you do for your LoRa network data, right?
32:47
Sarah Sonnier: Integrate, meaning sending it off to other places? Yeah, that's a built-in function of Ignition, which is awesome. I really focused on how open Ignition is to bringing data in, but it's just as open to sending data out. You can use Cirrus Link's Sparkplug transmission to send data out, you could do API integrations out, you could sync it to a historian or your own database. The possibilities are pretty much limitless, which is what makes Ignition a great data platform. I'm really flexible and able to meet my clients' requests 'cause they're always changing. Thank you.
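The "data out" options mentioned above can be sketched generically. The snippet below is plain Python, not Ignition's built-in scripting API, and the function and device names are hypothetical: it shapes a modeled device reading into a simplified Sparkplug-B-style metrics dictionary of the kind a transmission module or REST integration might carry (the real Sparkplug wire format is protobuf-encoded, which is omitted here).

```python
import json
import time

def to_sparkplug_metrics(device_name, readings):
    """Shape modeled device readings into a simplified
    Sparkplug-B-style metrics payload (dict form only,
    not the actual protobuf encoding)."""
    ts = int(time.time() * 1000)  # Sparkplug timestamps are epoch millis
    return {
        "timestamp": ts,
        "metrics": [
            {
                "name": f"{device_name}/{tag}",
                "timestamp": ts,
                "dataType": "Double",
                "value": float(value),
            }
            for tag, value in sorted(readings.items())
        ],
    }

# Example: a HART transmitter's primary/secondary variables
payload = to_sparkplug_metrics("FT-101", {"PV": 42.7, "SV": 21.3})
print(json.dumps(payload, indent=2))
```

The same dictionary could just as easily be posted to a REST endpoint or written to a database row, which is the flexibility being described.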


HiveMQ Exhibitor Demo: Comprehensive Data Management Solution with MQTT, Sparkplug and UNS
In today’s data-driven world, effective data management is crucial for manufacturers seeking to harness the full potential of their production assets. As industrial environments become increasingly connected, the need for a comprehensive data management solution that ensures real-time, reliable, and scalable communication is more critical than ever. HiveMQ, with its highly reliable, scalable, and secure enterprise MQTT platform, provides that ideal foundation for working with the Ignition ecosystem. We will showcase some of our new product offerings, like our Sparkplug module for DataHub enabling metrics fan-out, and other offerings that complement the Ignition Edge platform, building the UNS framework to streamline data collection, integration, and dissemination, ultimately driving smarter decisions, greater operational efficiency, and supporting advanced use cases like AI.
28 min video
Opto 22 Exhibitor Demo: Break Through the Status Quo in Industrial Automation
Tired of closed PLC platforms with proprietary protocols and high licensing costs? This presentation shows you how Opto 22's groov products can help you break through the status quo in industrial automation. With groov EPIC and RIO systems running Ignition Edge out-of-the-box, you can control edge operations and securely democratize production data from the plant floor to IT systems—even to the cloud. Discover the open, cybersecure architecture and free support and training resources that make Opto 22 groov hardware ideal for your next Ignition project.
32 min video
Flow Exhibitor Demo: Stop Coding, Start Scaling: Optimize Data Transformation for KPIs, Batch Reporting, OEE, and Beyond
Using our OEE template as an example, we'll demonstrate how you can streamline your Ignition projects by avoiding complex coding and scripting. This is all about scaling your data processing while adding centralized data and engineering governance. Every new KPI we calculate, event we detect, and batch we process, will be served back to Ignition, an MQTT broker, and to the enterprise data warehouse.
33 min video
How Ignition Saves Time, Money & Lives for Medical Charity
See firsthand how UK charity SERV Kent uses Ignition to create an AWS cloud-based volunteer management system to revolutionize its medical transport operations. Driven by Ignition Perspective, this application replaces archaic manual processes with intuitive interfaces featuring real-time geolocation data transfers, GDPR-compliant security, and optimized volunteer, vehicle, and product management. Hear Chris Taylor discuss the project’s challenges, solutions, progression, and future enhancements and breakthroughs that will bolster SERV Kent's mission-critical endeavors.
32 min video
Ericsson Exhibitor Demo: Edge Computing and Private Cellular Networks for Smart Manufacturing (formerly Cradlepoint)
Ericsson’s 5G-focused solutions turn connectivity into productivity by delivering intelligent communications at the edge that are more secure, versatile, and easier to manage than WiFi. See real-world business-critical use cases that exemplify how private 5G solutions accelerate operations, improve reliability, and enhance working conditions, all while reducing cost and latency.
26 min video
How Ignition Is Boosting SCADA in the Biotech Industry
With a demand for flexibility and a strong focus on quality, SCADA systems play a crucial role in ensuring smooth operation of processes within the highly regulated Biotech industry. As a leader in the field, Cytiva is accustomed to developing solutions designed for the lab environment. Attend this session to get a peek into the technical aspects where Ignition has been leveraged to help meet customer demands, including dynamic OPC connections and integrated eLearning.
40 min video
Learning Ignition Fundamentals
Whether you're new to Ignition or just want a refresher, this session is made for all. The Inductive Automation Training team covers all the basic knowledge and fundamental features you need to get started with Ignition.
44 min video
Breaking Through Limits: Igniting Transformation in Manufacturing
Follow Entegris and NeoMatrix's joint journey to digital transformation. Beginning in 2008, the two organizations recognized the need to upgrade the SCADA platforms of multiple machines, and they chose Inductive Automation's solutions. From Ignition's precursors FactoryPMI and FactorySQL to today's Ignition 8.1 with Perspective, this session will take you on a tour of how these partners established Ignition as their standard OT platform for increasing scalability and cost savings as they continue to grow globally and expand to multiple manufacturing industries.
47 min video
Breakthrough to the Other Gateways: A Deep Dive Into the Gateway Network
Multi-gateway deployments are becoming more commonplace, and Ignition's gateway network provides the backbone for redundancy, enterprise management, and sharing data between gateways. Join us for this session and take a look at various Gateway Network parameters and settings that drive customer solutions.
45 min video
Breaking Through Manufacturing Challenges with DxOps Transformation
Learn how to combine consistent processes with novel concepts to break through challenges in downtime tracking and OEE visibility. In this session, RoviSys will share how their DxOps Transformation approach, used at Nice Pak, helped overcome high variation between production lines and facilities, lack of data connectivity, extensive turnover, and data integrity gaps. Learn how to standardize integration methods for better scalability and real-time tracking and see how these solutions can enhance efficiency in your facilities. Don’t miss this "how to" guide on transforming challenges into opportunities for breakthrough success!
44 min video
Scaling to New Heights: Enterprise Ignition with Ease
In this session, 4IR Solutions will showcase best practices and technologies to rapidly deploy and remotely manage large-scale Ignition systems in the cloud and on-prem across hundreds of sites. We'll demonstrate zero-touch provisioning and real-time updates to a fleet of Ignition installations.
42 min video
Optimizing Load Time in Ignition Perspective
How can you ensure that screens load fast and actions are snappy when using Ignition Perspective to create bigger and better projects? Learn how in this presentation, which will discuss strategies for optimizing screen development, organizing nested views, and analyzing Perspective execution. You’ll also get a look at simple rules of thumb for bindings, complex custom SVG components, and where to strike the balance between performance and maintainability.
37 min video
Creating Predictive Maintenance Alerts Using Ignition + Canary DB
This session provides an in-depth walkthrough of how Shamrock Foods Company is able to collect motor data and use it to alert maintenance personnel to a potentially failing asset. This tutorial will walk you through the steps from PLC amp data to Ignition, Ignition data sent to Canary DB, Canary DB calculations of the average plus standard deviation of the data, and back to Ignition to generate alarms.
37 min video
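The average-plus-standard-deviation alarming described above can be illustrated with a short, generic sketch. This is plain Python, not the Canary DB calculation engine; the function name, the `k` multiplier, and the sample amp values are illustrative assumptions, not Shamrock Foods' actual configuration.

```python
import statistics

def amp_alarm(history, latest, k=3.0):
    """Flag a motor-amp reading that exceeds the historical
    mean by more than k standard deviations.

    Returns (alarm_active, threshold)."""
    mean = statistics.mean(history)
    stdev = statistics.stdev(history)  # sample standard deviation
    threshold = mean + k * stdev
    return latest > threshold, threshold

# Baseline amp draw for a healthy motor, then a suspicious spike
history = [10.1, 10.3, 9.9, 10.0, 10.2, 10.1, 9.8, 10.0]
alarm, limit = amp_alarm(history, latest=12.5)
```

In the Ignition setup described, the alarm boolean would drive a tag whose alarm configuration notifies maintenance personnel.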
How Ignition is Enabling the Future of Oil & Gas
The oil & gas industry relies on SCADA for all its major production activities. But oil & gas companies often have large-scale, complex requirements that require unique solutions to not only monitor the field, but also integrate that data throughout the enterprise. Attend this session to learn how Ignition is meeting the unique requirements of oil & gas companies with Techneaux and Bifrost.
41 min video
How To Harness Modern MES for AI and Innovation
Learn from MES experts Sepasoft how MES fuels the success of AI and BI initiatives, driving organizations toward actionable insights and a competitive edge. In the Industry 4.0 era, the success of AI and BI technologies in manufacturing hinges on high-quality data. Manufacturing Execution Systems (MES) play a crucial role in integrating with the plant floor and enriching production data with essential metadata, plus adding valuable context for machine learning and advanced analytics. MES provides real-time visibility for informed decision-making and cuts the typical 80% time investment data scientists devote to becoming subject matter experts and preprocessing data.
52 min video
Ignition to ERP: Best Practices and Lessons Learned
Looking to leverage Ignition to seamlessly connect with Microsoft Dynamics 365 Supply Chain (D365)? This session will cover best practices and lessons learned from two perspectives: an Ignition developer, and an enterprise solutions architect. Flexware Innovation’s Ignition Team and Enterprise Solutions Team work together to merge IT with OT for true digital transformation. From this collaboration emerged a set of best practices (and lessons learned) that will be shared with the Ignition community. Presentation examples will center on D365, but the foundational architecture principles can apply to your ERP system, too.
40 min video
Standardizing the Unstandardized: Strategies for SCADA Systems
SCADA systems can become complex and unwieldy when managed by numerous engineers or when ownership changes through acquisitions. In this session we will focus on strategies and implementation methods for using Ignition to transform disorganized systems into standardized, efficient operations. This presentation will cover best practices from small, unique projects to large-scale projects with multimillion-tag counts. Highlighting the similarities and differences between these types of projects, this session emphasizes the importance of standards in data modeling and a robust validation and verification process. Implementing these techniques enhances system performance, reduces costs, and increases user confidence — all of which are critical for the successful delivery of projects of any size to clients and stakeholders.
45 min video
Level Up Your Python: Best Practices for Clean and Consistent Code
Gain valuable insights into writing clean and maintainable Python code, whether you're a Python beginner or a seasoned developer. In this session, you’ll get practical knowledge of PEP 8, explore best practices for code formatting and style, and discover tools to streamline your workflow.
45 min video
Break Through Power & Energy Barriers with Ignition
What’s the power of tracking your organization’s energy use? Understanding your energy data reduces your operational costs, and helps you assess equipment health and meet regulatory or ESG guidelines. It’s hard to manage what you can’t measure. In this session, you’ll see how to quickly incorporate energy monitoring into your Ignition projects using free Ignition Exchange resources. Plus, you’ll hear from a State of Indiana representative who created the Energy INsights program that helps Indiana-based manufacturers address energy use while taking steps toward digitally transforming their business operations.
44 min video
Deeper Dive into 8.3: New Features
We have even more exciting Ignition 8.3 features to show you! Join us in the second of two sessions as we continue to share what’s new with 8.3. This time, we’re looking at some project-level resources and other features available through the designer, including new Perspective features, changes to the Tag Historian Module, and the brand-new Event Streams resource.
52 min video
Industry Panel: Driving Innovation and Transformation in Industrial Organizations
Hear from a panel of industry thought leaders and experts as they explore how utilizing data and technology can inspire new ideas, open new opportunities, and drive digital transformation efforts in industrial organizations.
49 min video
Closing Keynote: Where Do We Go From Here?
In this final session of the conference, we'll look forward to what's next. Join Inductive Automation speakers for exciting presentations and an engaging Q&A panel about the road ahead for Ignition's development, the expansion of technical support, and the evolution of Inductive Automation's customer experience.
79 min video
Integrator Panel: What Tech and Trends Are Breaking Through?
Discover the pivotal new technologies and trends that are reshaping the future of automation for industrial organizations. In this engaging panel discussion, some of the Ignition community's most successful integration professionals will share their strategies in response to these evolving technologies.
45 min video