Inductive Automation Blog
Connecting you to ideas, tips, updates and thought-leadership
from Inductive Automation
4IR Solutions will demonstrate how their platforms can deliver OT as a service, in the cloud or on premises, making it easier, faster, and cheaper to build and manage your Ignition infrastructure.
Transcript:
00:01
James Burnand: And we'll get some late stragglers in here, from what I understand, so all of them will be pointed out and embarrassed as they come and sit down late. I'd like to be the first to welcome you to ICC 2024. I didn't realize I was gonna get that honor when we signed up for this time slot, but it just worked out that way, so I hope you had safe travels in. I'm looking forward to walking you through a little bit of what 4IR does and sharing some of what we think is pretty cool stuff. So to get started: why do we exist?
00:29
James Burnand: Well, OT systems can be a little bit of a challenge to manage. When you don't manage your OT systems, the risks you face are unexpected downtime, security issues, and risks to your data fidelity. These are problems that are fairly common across our industry, things that we run into on a regular basis, and something that unfortunately gets ignored in some cases inside the manufacturing and industrial marketplaces. So, to understand a little bit about how that happens, I'd first like to do a little classification exercise. And all your lights turned yellow, which is kinda cool.
01:05
James Burnand: So first of all, show of hands... How many folks in here are end users? Okay, we got maybe about half. And integrators? We got a lot of integrators. Cool. And everybody else? There we go. Perfect. So what I've done is taken the opportunity to classify what we see as the different types of end users. Hopefully this doesn't offend anyone, though it may ring true, but we're gonna lay out who we think are the folks out there that we run into.
01:33
James Burnand: So the first kind of end user we run into are what we call Yodas. Yoda is an exceedingly rare species; there are very few of Yoda's species in the universe, and they are masters of their trade. They are so totally in control and capable of everything that is necessary for them. Jedi Masters. We find these folks to be exceedingly rare, but they do exist: end users that have totally figured out how to manage, operate, and handle all of the different pushes and pulls of OT, as well as all of the rest of the responsibilities that they have.
02:06
James Burnand: The next type of end user we run into, and this is very, very common, is what we call superheroes. These end users wear a cape; they often have many responsibilities, of which managing OT and doing things like updates and patching and security is just one of many, many. We find that these folks have a strong desire to be better at managing their OT environments, but they often face the problem that it's an important-but-not-urgent issue, right up until it becomes an urgent issue. I'd say these are the most common folks that we run across.
02:41
James Burnand: And the final type of end user we have are what we call Bon Jovis. These folks live on a prayer, and they don't realize the risk that they run until, unfortunately, something happens. We tend to meet these Bon Jovis after they've had a security incident, or they've lost a computer, or they've lost an application for a long period of time and dealt with a significant downtime or cost issue. That's when we usually meet the Bon Jovis.
03:08
James Burnand: So what we have done is create a solution that hopefully appeals to all of those folks, although I will say the Yodas are far less likely to be interested. We offer OT as a service. We call that FactoryStack and PharmaStack. We'll talk a little bit about the difference in a second, but what that means is that we offer, as a service, a delivered platform that gives you all of the best practices from Inductive Automation — the security hardening guide, database management — as well as the best practices from the IT providers, folks like AWS and Azure. We put that together and manage it in a very straightforward way, so that you can focus on applications, on process, on the things that matter toward the end goal of improving manufacturing, while someone else takes care of your OT systems for you.
04:03
James Burnand: So I did mention PharmaStack briefly. PharmaStack is essentially an extension of FactoryStack, and what it really does is add some additional capabilities around data retention, data integrity, and 21 CFR Part 11 compliance, so that companies in the pharmaceutical space can use PharmaStack to make things like change control, operation, and validation of their systems faster and easier. Fundamentally, they do the same thing. I'm going to talk about them interchangeably, so if I say FactoryStack and you're thinking PharmaStack, don't worry; they do fundamentally the same things under the hood, with the additions to PharmaStack being specific to that industry.
04:47
James Burnand: So what are we actually trying to do? We're trying to make it simple to deploy OT infrastructure. We're trying to make it easier, faster, cheaper, and more secure for you to have these architectures and capabilities deployed, both in the cloud and on-premises, and for you to take advantage of them. That sounds very broad and kind of vaporous, so think of it this way: our mission is to simplify and give you access to these transformative technologies without you necessarily needing to learn them, so you can focus on what's most important for you, which is solving problems for end users, or solving problems as end users.
05:27
James Burnand: So how does that work in the ecosystem? It's really an interesting thing. What we've done is lay out a little bit of what the Ignition ecosystem is. You start off with Ignition itself: there are the standard Ignition, Ignition Edge, and Ignition Cloud Edition platforms. They're all somewhat similar in that they share a lot of commonality between them, as you've probably noticed if you've used them, and they provide a basis for a lot of other things to happen.
05:42
James Burnand: On top of that, there are the modules. We're showing the partner modules here from Cirrus Link and Sepasoft, who are strategic and solution partners for Inductive Automation, similar to us as solution partners. These extend the capability of Ignition, so you can do things like communication over MQTT and to the cloud, and MES capabilities — Sepasoft has some neat stuff this week.
06:19
James Burnand: That extends the capability further, but it still doesn't solve any problems for end users. That's where the integration community comes in. The applications are truly the thing that solves the problems for the end user. This is where you build out a batch system or an EBR system, or whatever the end application may be that ends up providing that value to the end customer. And if the stack were this simple, it would be very easy to do. It's never that simple. What ends up happening is that, at a minimum, you need a database; you probably need time series data, source control, authentication, MQTT brokers, external applications. All of these complexities are part of these deployed systems — whether they're directly a part of them or integrated into them, they're important pieces — and our goal is to deliver that as a service.
07:10
James Burnand: A different view of what that looks like is this next diagram, and I apologize for the glare moving around on there. What you'll see is we have a couple of things shown here. Down at the bottom, we're showing a couple of different deployment locations. On the left is essentially us offering this as SaaS: we deploy it inside of our tenant, and it becomes a service that you just use.
07:42
James Burnand: The next is inside of your tenant. This is typically for bigger enterprise customers, where they already have a big, strong relationship with AWS or Azure or some big cloud company, and they want to control their data inside of their own environment. We are capable of deploying and operating those workloads inside of that space for them, really as a platform as a service, or PaaS.
07:58
James Burnand: And the final option is on premises. We'll talk about a couple of options that we offer there, but it's the ability to have the advantages of operating something in the cloud while it happens to live on-prem, so you can still have that low-latency, localized capability, but somebody else is taking care of it for you. In the middle is really what 4IR does: managing, supporting, monitoring, providing all of the capabilities for disaster recovery and updates, and ensuring that there's 24x7 support in place for all of these systems. That's a key component of us ensuring that this is an available and operational system for you at all times.
08:42
James Burnand: And this layer of glue in the middle is really what we are best at. When it comes to the applications, that's your choice: whether you want one Ignition gateway, or 12 Ignition gateways, or 200 Ignition gateways; whether you want an Azure SQL database or a Postgres database. We're able to flex and provide what makes sense for your use case. We work a lot with system integrators and end users to help them decide what goes in that application space, but fundamentally, it's up to you as to what you need to solve your problems.
09:15
James Burnand: The way that works is we essentially sell by instances. Just as there's an Edge version of Ignition, there's an Edge version of FactoryStack and a Cloud version of FactoryStack. They come with the core services that you can see at the top of those boxes, and on the right, you can pick from our application catalog what you want available inside of those different locations. And from then on, it's operated and managed as a service.
09:43
James Burnand: So I think it's important to talk about how people are actually using this. The very first use case we'll talk about is: hey, I've got a couple of plants, or I've got a plant, and my executives really wanna see a report or a visualization or some information from it, and it's hard for them to look at on their cell phones, or hard for them to get access to that information. In this scenario, which is a fairly common use case, you have existing Ignition gateways, and you simply publish data from those gateways up to an instance in the cloud of choice — again, whether it's our cloud or your cloud — and you build applications up there that take advantage of security principles like multi-factor authentication and DDoS protection, and the use of a modern suite of security tools, so that you can provide a secure way for those end users to access information that used to be trapped inside of the facility.
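For a sense of what that publish step can look like, here is a minimal sketch using the paho-mqtt client. The broker hostname, topic, and credentials are placeholders, and in a real Ignition deployment a module such as MQTT Transmission would typically handle this for you; this just illustrates the edge-to-cloud data flow.

```python
# Minimal sketch: publish plant data to a cloud MQTT broker over TLS.
# Broker hostname, topic, and credentials are hypothetical placeholders.
import json
import ssl
import time

import paho.mqtt.client as mqtt

client = mqtt.Client("plant-1-edge")   # paho-mqtt 1.x style constructor
client.tls_set()                       # encrypt in transit with default CA certs
client.username_pw_set("edge-publisher", "use-a-secret-store-not-a-literal")
client.connect("broker.example.cloud", 8883)
client.loop_start()

while True:
    # Sample tag value; a real gateway would publish live historian data.
    payload = json.dumps({"ts": time.time(), "line1_oee": 0.87})
    client.publish("site/plant-1/line-1/metrics", payload, qos=1)
    time.sleep(5)
```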
10:42
James Burnand: The next use case is around enterprise applications — and we have a talk on this tomorrow. This is really where there's a single application that I want to provide to a fleet of facilities or a fleet of assets. OEE is a great example: hey, I really wanna have a consistent OEE deployment across my X number of facilities, or my whole fleet of facilities. That can be a really challenging thing to do when you have different IT folks in different buildings and different infrastructure in different buildings. What we find is that for certain types of applications, it makes a lot of sense to use an edge-to-cloud architecture, where your edge acts as a data pump — it's buffering information, it's doing all of the connectivity to those local applications — and you're actually hosting the applications themselves in the cloud.
11:32
James Burnand: That doesn't mean it all has to be hosted in one gateway. Some of our customers will actually dedicate a gateway per site, so there's a one-to-one relationship between a cloud application and an Edge data pump. We see that as a very common use case, because it allows you to deploy very quickly, without having to stand up a bunch of complex infrastructure in the buildings, and to take advantage of consistency in the application itself — using things like the EAM module, or DevOps capabilities with Git, to manage and operate those projects that live up in the cloud.
12:15
James Burnand: The next piece is OEM Edge. Where we see this most often is machine builders, or folks that are delivering a piece of equipment to a lot of locations. They put a small IoT Edge instance inside of that machine and use it for capturing statistics, creating reports, creating a user portal. If you're using Ignition Cloud Edition, one of the things you're capable of doing is having multiple tenants connect to that instance in the cloud. So you can imagine, if you're a machine builder and you deliver a hundred of a piece of equipment into these locations, having some sort of dialed-home statistics gathering allows you to, number one, monitor the equipment, but also find common failure modes, use things like AI to generate insights and inference on how those systems are performing, and, most importantly, actually create upgrade packages for those pieces of equipment based on improvements you've made on other pieces of equipment. So this allows you to use that spread-out architecture that Ignition enables to provide an additional service — often a paid-for service — to your users or your customers.
13:33
James Burnand: The last one, which I'll say is newer in this space, is hybrid. So, is anyone familiar with hybrid? Does that term make any sense to anyone? Alright, no hands are going up. What hybrid is, is a little bit of cloud in your building. Rather than using an Edge device that is essentially there to operate maybe some Docker containers, or to just provide some function — maybe a database or an Ignition gateway — hybrid cloud is literally taking a piece of cloud and deploying it inside of your building, and you don't operate it by logging into the server. It looks like a server — Stack HCI hardware is offered by a bunch of the common vendors you would know; Dell is a good example, and it looks like a Dell 750 chassis server — but you can't log into it.
14:26
James Burnand: What it is, is a thin operating system that connects up to the cloud, and then you operate and deploy all of the workloads on that server sitting in your building through the Azure portal. The nice part is you get access to certain services that are available inside of Azure. So now, all of a sudden, I have access to hyperscale databases and VDI and Kubernetes clusters, which lets me run not just FactoryStack but a variety of different services that live locally, can tolerate the internet going out, and keep operating — but I get the benefit of being able to manage them as if they were deployed in the cloud, because they're being deployed using that same common methodology.
15:11
James Burnand: I see this being a really important step in the next several years for manufacturing: moving from a completely on-prem sort of setup to an on-prem-and-in-cloud hybrid. Yes, that is what the word means, I guess. Where we see this is traditional SCADA and alarming applications, and commonly in places like regulated environments that want something physically on-prem, or that have data residency requirements where data can't leave a building or a geography. These are common use cases for this. And again, we see this as a very interesting, and also very useful, set of tools that not a lot of folks in the manufacturing space are using as of yet today.
15:54
James Burnand: Interestingly enough, there are several different ones out there. Stack HCI is the one we're advertising in this case, because we think it's the furthest ahead. Amazon has their Snow series, if you take a look at that, and their Outposts series; there's Anthos from Google; and then Azure Stack, as it's called, from Microsoft. There are others as well, but those are the leading folks in this space, and it is a growing space.
16:26
James Burnand: Oops. So where do people start? I talked about a couple of use cases, and about different ways of thinking about or looking at different types of applications, but most often this is where people start: they set up an Ignition system, a database, a Git repository, everything with integrated Entra ID and multi-factor authentication, everything monitored and secured, and they look for a place or an application to use it for. Most often, it's focused around statistics or information gathering, or a unified namespace, or integration with AI systems. These are all different use cases that use basically the same architecture.
17:09
James Burnand: The nice part about this is you can start with a single gateway and a single database, and you can grow it to whatever meets the needs of your use case and your customer. There are limits, but they're very, very high, and I haven't seen anyone get close to them yet. You can start with a single gateway, and you can run hundreds inside of the same infrastructure without making any real fundamental changes to the way it's built.
17:40
James Burnand: So part of what I think is important to understand is what 4IR does in all this: we are operating it, managing it, and making it simple for people to use. Your interface as an integrator or an end user is the Ignition Gateway and the Ignition Designer. You don't really need to know or understand all of the inner workings behind this. What you need to know is that someone who understands OT is taking care of it for you, and that we are ensuring a simple interface for you to use that takes care of some of the complexities you may run into.
18:15
James Burnand: A good example. So one of the complexities that a lot of folks run into when they're putting stuff in the Cloud is SSL certificates. So anyone had that problem where their system goes down because of an SSL certificate?
18:29
Audience Member 1: Microsoft Azure.
18:35
Audience Member 1: Special server crashing and yeah, not a problem at all.
18:39
James Burnand: So in our case, we have automated a lot of what you see on the screen. We use a tool called Pulumi that allows us to automate the deployment, management, and updating of all of the infrastructure. That also includes certificates. So we don't just deploy a certificate, set it to expire in 2029, and hope no one forgets about it in a few years. We rotate our certificates every thirty days — and there are some changes coming from the browser providers that will probably make that a necessity in the next few months, maybe a year. That automation allows one of those potential downtime causes to just sort of go away. It becomes something that you no longer need to keep in your mind or in your maintenance plans, because it's taken care of as a part of the platform that's deployed.
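4IR's actual tooling is Pulumi-based; purely as an illustration of the kind of expiry check that rotation automation runs on a schedule, here's a minimal Python sketch using the cryptography package. The renew() call is a hypothetical stand-in, not 4IR's code.

```python
# Illustrative only: check a TLS certificate's remaining lifetime and
# trigger renewal when it drops below a threshold. The renew() call is
# a hypothetical stand-in for your ACME client or cloud tooling.
from datetime import datetime, timedelta

from cryptography import x509

ROTATE_BEFORE = timedelta(days=30)

def needs_rotation(pem_path: str) -> bool:
    with open(pem_path, "rb") as f:
        cert = x509.load_pem_x509_certificate(f.read())
    remaining = cert.not_valid_after - datetime.utcnow()  # naive UTC compare
    return remaining < ROTATE_BEFORE

if needs_rotation("/etc/ssl/gateway.pem"):
    print("Certificate expiring soon; kicking off automated renewal...")
    # renew()  # placeholder: ACME client, Azure Key Vault, Pulumi update, etc.
```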
19:28
James Burnand: Maybe we'll leave certificates there. We do have a presentation tomorrow, which I'll talk about in a second, where we'll go through in a little more detail what that is; we talk quite a bit about security, certificates, and scale as a part of that. So, it's important to know how we price this, and there's a really interesting part of this discussion around how you look at pricing when you're doing a deployment. It's very similar to going to buy a server, right?
20:02
James Burnand: So if you're setting up an Ignition system in a plant, you're probably going to Dell, maybe buying a VMware 321 stack or OpenStack or Nutanix, whatever the case may be. But you're buying something, and you're buying it with the intention of having enough capacity in that thing for the next six or seven years, depending on what your lease or refresh cycle is on your hardware. It's a little different in the cloud. When you're in the cloud, you're trying to figure out what you need today, while making sure that what you've created is a flexible architecture, so that as you consume more, you have the ability to expand. What becomes important as a part of this is understanding that the cloud and on-prem have different ways of handling reliability. By default, our systems take advantage of multiple availability zones.
20:51
James Burnand: So we have things like mirrored storage across three completely separate physical buildings, which protects you not just if some hard drives fail, but if a literal building blows up — the system won't have any downtime, or it'll have minimal downtime while some of the workloads move across automatically. So the level of availability and reliability that we offer out of the box is actually higher than what most people are capable of achieving inside the four walls of their building. And we can still go up from there. The challenge is cost. A lot of folks say it needs to be this, it needs to be that, without actually going through and understanding: what level of downtime can I tolerate as a business? As my cohort Randy says, can you tolerate somewhere between a hundred milliseconds and a hundred days?
21:39
James Burnand: And the reality is that, depending on what your lead time is for different hardware components, what the criticality of your application is, and how much data loss you can tolerate, those are the decisions that help you choose what level of availability you need to have as part of your deployed application. That correlates directly to what those hosting services cost in the cloud. So we try to guide people through what they need, based on what their application and use case are, and try to create something that makes sense for those users, taking advantage of the technologies available across different cloud services and capabilities.
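To make that trade-off concrete, here is a quick back-of-the-envelope calculation — generic availability figures, not any vendor's committed SLA — showing what common availability levels mean in downtime per year:

```python
# Back-of-the-envelope: what an availability level means in downtime per
# year. Figures are generic illustrations, not a committed SLA.
HOURS_PER_YEAR = 24 * 365

for sla in (0.99, 0.999, 0.9999):
    downtime_h = HOURS_PER_YEAR * (1 - sla)
    print(f"{sla:.2%} available -> {downtime_h:.1f} hours down per year")

# 99.00% available -> 87.6 hours down per year
# 99.90% available -> 8.8 hours down per year
# 99.99% available -> 0.9 hours down per year
```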
22:19
James Burnand: The other thing that drives cost is how complex the application is. How many gateways? What type of databases do I need? Do I need a VPN or no VPN? How long do I need to retain backups? These are all, again, considerations that have a direct correlation to what I get charged by Azure or AWS.
22:43
James Burnand: Important to highlight: we do have a few partnerships in the industry — a lot of the logos, I think, are here this week. We work very closely with these partners on trying to create cohesive offerings, as well as working with Microsoft and Amazon to ensure that our solution is qualified and follows all the best practices that they publish. We work with a lot of systems integrators as well. I'm not gonna put logos up here, but I think it's very much a collaborative engagement with integrators, because we don't build applications. That is not part of our business model. We are here to provide enablement and infrastructure, and to make sure that it's easy for system integrators or end users to deploy these kinds of systems. But we do not build applications.
23:29
James Burnand: We do offer consulting. So if you are trying to figure out: how am I going to do this? How do IT and OT talk together? How do I meet these security requirements? Or you get one of those big, long checklists that says, do you have this, do you have that, what's your policy for this — that's what we do. If you're trying to work through that and figure out a way to create an offering for a customer that meets those obligations, we probably have an answer for it, because that's our business.
24:04
James Burnand: So we talked a little bit about the ICC session. It's tomorrow, just after lunch, in Stage 2. I encourage you all to attend. I will be back up here, with my cohort Randy, to talk a little bit about enterprise Ignition specifically. We're gonna cover details around what makes an enterprise deployment unique, and we're gonna do a live demonstration of FactoryStack. That demonstration is gonna have a number of Ignition gateways running; we're gonna add a whole bunch, upgrade a bunch, downgrade a bunch, and kill a bunch. A really neat demonstration of the technology in action, and we're looking forward to sharing it with you. That's all I have for the presentation. Any questions?
24:54
James Burnand: So there are some that are deploying hybrid because of that concern — they need to have it in the building. But there are others, and it's not typically consumer packaged goods or pharmaceuticals; oil and gas is a better example, where they have distributed fleets of assets, and they're actually doing monitoring and SCADA control of those distributed assets using a central platform. For them, that isn't really very different from what it would look like if they had a bunch of leased lines going to a building with a dedicated server. So for them, this is a cost-savings and risk-reduction piece. Rather than having no one updating their servers and being a little bit of a Bon Jovi, now they have someone caring for and monitoring their systems 24/7, providing updates and providing that surety of availability. The biggest downtime cause is often the internet connection, not either side of it.
25:58
James Burnand: Yeah. Yeah. It's all pipeline in that particular case I'm talking about. But for direct control of process and equipment, we don't recommend using the cloud, and to be honest, there is not a great set of reasons to take that risk on unless you need to. I personally think that within my professional career, we're going to see a time when the reliability of networks between factories and public clouds is at the point where people will start to do that. We're already seeing it: we have a couple of really big enterprise customers that are forcing all of their onsite SQL servers to be moved to a managed service in Azure by default. You have to provide, basically, a set of reasons why they're not going to be moved. They don't actually care what the application is; they're just trying to get rid of the cost of having to operate and maintain those SQL servers.
26:57
James Burnand: And their reasoning is that they've invested in redundant WAN connections and a level of latency and availability between their buildings and their public cloud instance that is as good as it could possibly be, so they feel comfortable with that risk level. I think we're gonna get there in the industrial space, but not for a while. That's why I think hybrid cloud is so important: it allows you to bridge that timeline. You can run Stack HCI offline for weeks, and it's still local — it's still running virtual machines and clusters in the building, which allows your SCADA system to operate as if it were there. What you lose is visibility and the ability to pump backups up to the cloud.
27:41
Audience Member 2: All the software, all that's managed in the Cloud. What about hardware upgrades to the on-prem?
27:47
James Burnand: So that's actually managed from the cloud as well. The way that works is there's a Stack HCI OS — and again, I'm just talking about that particular instance — that's a really cut-down version of Windows Server, and it's upgraded kind of like firmware on a PLC. Those upgrades become available in Azure, and you push them down to the system. So they're more like unit upgrades than doing Patch Tuesday; it's more akin to a firmware-based device than to an operating system.
28:19
Audience Member 2: So no real concerns about hardware obfuscating the software?
28:25
James Burnand: So not really. The nice part about it is that, just like if you're running a VMware setup, you're obfuscating the hardware from the workloads running on top of it. So migrating that cluster — migrating those virtual machines to another piece of hardware, even dissimilar hardware — is not an issue. The level of availability of those systems varies depending on how much hardware you buy. You can do as little as one Stack HCI server, which gives you something like a RAID 5 array, two power supplies, and a single-server level of reliability; you can do two of them running as a pair; and you can do ten of them running as a cluster.
29:06
James Burnand: Yeah. So the question was how difficult it is to get an estimated price — Azure was the question, but it's similar for AWS — and how accurate that is. The cloud companies actually do a really good job of laying out what their service costs are, and they also have some fairly built-in discounting models. One of the things you can do is reserve for a certain amount of time, and then you get a percentage off of that service cost. For example, if I have a database and I reserve it for a year, I get thirty percent off that price. And that's basically a fixed quantity, based on what the calculators say. So what we do, to try to simplify it for end users, is create a set of boundaries and say: okay, for this subscription you get a terabyte of storage, you get this much ingress, this much egress, and these services, and we'll handle some of the risk of that minutiae.
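Using the example from the talk — a database reserved for a year at thirty percent off — the arithmetic is simple. The pay-as-you-go rate below is a made-up placeholder, not a real Azure price:

```python
# Reserved-capacity discount math from the talk's example.
# The pay-as-you-go rate is a made-up placeholder, not a real Azure price.
payg_per_month = 500.00          # hypothetical pay-as-you-go database cost
reservation_discount = 0.30      # "reserve for a year, get thirty percent off"

reserved_per_month = payg_per_month * (1 - reservation_discount)
annual_savings = (payg_per_month - reserved_per_month) * 12
print(f"Reserved: ${reserved_per_month:.2f}/month, saving ${annual_savings:.2f}/year")
# Reserved: $350.00/month, saving $1800.00/year
```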
30:03
James Burnand: When it's in a customer's tenant, or when you're trying to estimate whether this is a hundred bucks a month or ten thousand a month, the calculators are really easy to use, provided you know, at least approximately, the services that you're going to consume. The data is not the most expensive part; it's the services that cost more. For example, the storage for storing backups is a rounding error compared to what it costs to put in something like a SQL Server database service.
30:32
James Burnand: Yeah. So the question was how Ignition licensing works. 4IR can only buy Cloud Edition, because we purchase Cloud Edition through the marketplace, just like anyone else would. Any other Ignition purchases are perpetual licenses. We need the eight-digit key so we can re-up them whenever we need to, or, if we kill a gateway and bring one back up, so we can reactivate it. But those are purchased either by the system integrator or by the end user directly. So the licensing that we provide is for the managed services we purchase, or anything we purchase through Azure — things like an MQTT broker if I need one. If I need a Flow license or an Ignition license, that's typically purchased by the integrator as a part of the project being deployed. Our requirement is that support is maintained on it, so upgrade protection is available and we can upgrade things. But that's about all we're really looking for.
31:39
James Burnand: Okay. I think... I'm starting to feel like we might be out of time. So I wanted to say thank you all for your time today and I hope you guys enjoy ICC. Have a great one.


Using our OEE template as an example, we'll demonstrate how you can streamline your Ignition projects by avoiding complex coding and scripting. This is all about scaling your data processing while adding centralized data and engineering governance. Every new KPI we calculate, event we detect, and batch we process will be served back to Ignition, to an MQTT broker, and to the enterprise data warehouse.
Transcript:
00:01
Jeff Knepper: Data is the new oil. If we're digging in our backyard and a bunch of crude comes up out of the ground, we are not rich; we have a huge liability on our hands. Oil is valuable only after it's been captured, processed, cleansed, transported, distributed, and value has been obtained from it. Data is no different. Let's talk about data a little bit. Engineers — anybody in the room an engineer? You've seen a trend before. That's actually not data, by the way. If we took just one point off of that trend and took away all of the context of what happened before and what happened after, that's data. This is information. But there are other types of data that we work with. We have transactional records. And what do we keep doing in our goal of obtaining information? We look up at our trends. We say, "Oh, I see I've moved from a state of zero up to a state of 10, to a state of 20."
01:12
Jeff Knepper: And I want to know what's going on with this other data when these state changes are happening. So we draw little lines down our trend charts, and we write a bunch of code to try to figure out how that value changed during that state. Or we write a bunch of code to try to figure out what's been happening across a couple of shifts. Or we write a bunch of code to figure out what's been happening every single hour, or every single minute, or every single quarter, or how this period compares to that period, or what it was like when that operator was running it compared to when this operator was running it.
01:49
Jeff Knepper: The point is, in order to turn data into information, there is a common factor, and it is: we write a bunch of code. It is not easy to turn raw data into information. Our goal is to help you model, transform, and distribute information. My name is Jeff Knepper. This is my partner, Leonard Smit. And today we wanna spend a little bit of time introducing you to the problems that Flow Software helps you solve. So what is Flow? Flow is a pre-built solution to help you connect to the data sources that matter most to you and your operation. So what are those? Well, in this case, I think we can agree: Ignition. And what is Ignition? Ignition has time series data — the Historian. It has transactional data — SQL Bridge. It has manually entered data — web forms. And then there are those other sources that Ignition connects to: my quality system, which maybe hasn't been built in Ignition, my ERP system, my MES, my CMMS, my enterprise asset management system.
03:06
Jeff Knepper: These are all sources of data that your engineers, your team, and your customers really would like to have access to and be able to work with, without writing a bunch of code. So we connect to those systems, and we do a really neat thing: we normalize those various data formats into one easy-to-work-with format, and do a point-in-time normalization. Now, that's a bit of a mouthful. But what is a point-in-time normalization? Well, think about those trend charts. We've got pressure that is just moving all over the place. And I've got a transactional record for a batch.
03:48
Jeff Knepper: So the batch changed here, and pressure has been moving all over the place. How do I link a point in time on the pressure trend to a batch record when the timestamps don't match? What Flow does is carry the change from the transactional record forward and assign a virtual timestamp to it, every time another timestamp changes in my system. So I get a wide table format, perfectly set up for row-based calculations, so that I have no fear of losing data when I start doing my transformations. With the data in this format, ready to work with, I then apply my process events, my calendar events — anything that my operations team wants to identify as important to them. That's shift patterns, financial quarters, state changes, customer changes, lot runs, batch changes, whatever you need. Maybe safety incidents. It's not limited to machines. And then I aggregate in my model; I write rules in my model on how I'm gonna bring that data together. At that point, Flow's engines execute all of the rules of this model and output the results to a database. Now you have pre-processed, contextualized data ready to be shipped elsewhere. So here's an example of the information that was on the screen before — the trend with the states, the energy usage — now contextualized across a batch run. In our little mock simulation here, we're producing water for the fine prisons in the surrounding area.
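As an aside, here's what that carry-forward looks like in a minimal pandas illustration — not Flow's internal code — where the latest batch record is joined onto every time-series timestamp, yielding the wide, row-aligned table just described:

```python
# Illustration of point-in-time normalization (not Flow's internals):
# carry the most recent transactional record forward onto every
# time-series timestamp, producing one wide, row-aligned table.
import pandas as pd

pressure = pd.DataFrame({
    "ts": pd.to_datetime(["10:00", "10:05", "10:10", "10:15"]),
    "pressure": [101.2, 103.8, 99.4, 102.1],
})
batches = pd.DataFrame({
    "ts": pd.to_datetime(["09:58", "10:09"]),
    "batch_id": ["LOT-001", "LOT-002"],
})

# merge_asof picks, for each pressure row, the latest batch record at or
# before that timestamp -- the "virtual timestamp" carry-forward.
wide = pd.merge_asof(pressure, batches, on="ts", direction="backward")
print(wide)   # every pressure row now carries its batch context
```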
05:42
Jeff Knepper: Johnny Cash said the water in Folsom wasn't very good, so we're trying to help fix that. We can see that we had a lot: we had a batch start, a batch end, a duration, a customer ID, a SKU, and a quality check. So now that I've sliced the data by batch runs, I can slice it further. I can say, well, give me lot number one, and break lot #1 down into every single stage of that process. I wanna see how much energy we used during startup. I wanna see every single downtime. I wanna know why we were down. All of this can be done inside of an information model, once we've abstracted the underlying data sources and their namespaces, giving your people a really easy namespace that they can interact with. They can code information, they can add comments, they can manually enter data that's missing because it hasn't been automated yet. All of this so that you now have information that's ready to send to any other application you wanna send it to. So Lenny, why would I do this in Flow? Why wouldn't I just build it myself?
07:00
Leonard Smit: Well, let's think a little bit about what you'd need to do to actually get all of those slices processed and captured. Obviously, I need to be able to collect a whole bunch of different data sources — time series data sources, relational data sources. I need a way to ingest that into a solution, wherever I'm gonna build it. I need to do this point normalization that Jeff has spoken about, to get me the boundary values for each and every timestamp whenever the data changes, whether in the time series source or the relational source. I need to be able to detect my events. When do I detect an event? When does a batch start? When does a downtime occur? So I need to be able to write all of those rules on when to go and slice it, based on my event information.
07:43
Leonard Smit: I need to be able to contextualize that information with data coming from all of these different data sources — and don't forget about the human, because the human can also give context to that data. I need to figure out — well, we spoke about contextualizing or slicing it per shift, per day, per hour — what is a shift? What is a day? I know it sounds silly to ask what a day is, but a day means different things to different people: 6:00 to 6:00 in the morning, or 7:00 to 7:00, or midnight to midnight. How do you cater for all of that? And how do you then take that information and aggregate it up? How do I do a shift-to-date or a day-to-date total, or roll up a whole bunch of production to give me a line production KPI?
08:23
Leonard Smit: I need to be able to version this: if something changed at the source and I'm rerunning the calculations, I need to be able to version it. I need to be able to timestamp it for auditing and governance compliance as well. And then, well, I need to show it somehow. I need to trend it. I need to put it on a dashboard, on a visualization. And I need to send it out, potentially to the hyperscalers — I wanna push the result of this KPI to Snowflake, or push it to a BI report. So these are just some of the steps you need to be able to do, just to get that contextualized data going. I don't know about you, but when I see this list, I just see fingers going crazy and people coding their lives away.
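Taking just the "what is a day?" item from that list, here is a minimal pandas sketch of the kind of rule involved, assuming a site that defines its production day as 06:00 to 06:00 — the start hour is a site-specific assumption, not a Flow default:

```python
# Sketch of the "what is a day?" problem: bucket timestamps into a
# production day that runs 06:00-to-06:00 instead of midnight-to-midnight.
import pandas as pd

events = pd.DataFrame({
    "ts": pd.to_datetime(["2024-09-17 05:30", "2024-09-17 06:15",
                          "2024-09-18 02:00"]),
    "units": [40, 55, 62],
})

DAY_START_HOUR = 6  # site-specific rule; every plant may define this differently
events["production_day"] = (events["ts"] - pd.Timedelta(hours=DAY_START_HOUR)).dt.date

print(events.groupby("production_day")["units"].sum())
# 05:30 lands on the previous production day; 06:15 and 02:00 (the next
# calendar day) both land on the Sep 17 production day.
```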
09:04
Jeff Knepper: I could hear the keyboard clicks, and I could see the lines of code starting to develop. And what's the issue with writing code? Maintaining it. The more you write, the more you maintain. The more you write, the less it scales. How do I take the current data project that I'm working on, when I'm done with it, pick it up, bring it over to this other site, and reuse it without editing a bunch of code? And if I have to edit a bunch of code, how do I then take it and put it on 20 other sites? You won't. And if you do, congratulations, you've built a product, and now you get to maintain it for the rest of your career. So what does Flow do? Flow starts right after data collection, and helps you templatize and model all of these steps, while executing the transformations and the publishing of data according to the rules of your model — stopping just before you put the information in front of your people, or back into other applications, to take action on your decisions.
10:13
Jeff Knepper: And you might look at this and say, well, yeah, Jeff, but that's what I do with Ignition. And I'd say, yes, that's what you do with Ignition. That's what you've been doing with Ignition. And we absolutely love Ignition. Our goal is to take your Ignition project and give you a dedicated resource to templatize and execute all of these data transformations, and then feed them right back into Ignition, so that you continue to build on your Ignition work. And if you're an integrator, to take that work that you've just done, put it into your template library, and bring it with you to your next job — standing on the shoulders of previous work.
10:55
Jeff Knepper: So we know that coding affects us negatively, yet we keep doing it. Maybe it's just time that we say: no more custom code, no more scripting, no more burying this work in applications where I don't have access to it again, no more Excel worksheets that I'm afraid to open because they're so fragile. And most importantly, centralize all of this so that I can add governance to my transformations. I wanna make sure that when I define a KPI, that KPI does not change as I move from one application to another. I wanna make sure the way I cleanse data is universal across all of my sites, not just based on how one engineer wrote one program over here. Last point: doing all of this work in code and scripting requires a team. It requires people, and people who cost a lot, because the more code you write, the bigger the skill set it demands. Okay, so if you wanna do this, and you wanna do it well, and you wanna get away from code, there are three things that we've identified and built into Flow that will help you do it.
12:05
Jeff Knepper: The very first thing is the ability to write templatized information models. Information models are not — are not — master data models. You're not trying to represent every single point inside of your process. An information model is more use-case driven. It is the key information that your organization cares about, and it's your ability to govern how that information is brought together, over and over and over again, before getting shipped out to other applications. Intelligent execution engines — these are simply the bots that do the work that you write into the model. But manufacturing has a ton of challenges. Lenny already touched on it. Data comes in late.
12:51
Jeff Knepper: Data values change. I have to rerun KPIs. If I do secondary aggregation — like the rate of change of one KPI over another — and I change an underlying KPI, now I need to rerun those KPIs. That's why it's really important to focus on having complete relational dependencies inside of your model, which of course we've accounted for, and on always monitoring the underlying databases. So if data arrives late, or if underlying values change, it triggers the rerunning of all of these calculations, even the republishing of them. So if you're publishing this data back to a SQL database, or up to a cloud infrastructure like Snowflake or AWS, or into a Kafka stream, our engines are smart enough to know something's changed — republish. And of course, we're versioning all of the results internally, so you always have access to what it was and what it is now. Finally, the last piece is universal information access. Nothing that you build inside of this model, none of the results that get published, can stay locked away. I get asked all the time, how are you different from this analytics tool or that analytics tool? And my question is always: well, what you've built in that analytics tool, can you ship it anywhere you want?
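As a toy sketch of that idea — not Flow's engine — a model can track which KPIs derive from which inputs, and walk everything downstream for recomputation and republishing when a value changes or arrives late:

```python
# Toy illustration of dependency-driven recalculation (not Flow's engine):
# when an underlying value changes or arrives late, every downstream KPI
# is recomputed and republished. Assumes `deps` is listed in dependency
# order, so one forward pass visits parents before children.
deps = {                      # kpi -> the KPIs/inputs it is derived from
    "good_units": [],
    "total_units": [],
    "quality": ["good_units", "total_units"],
    "oee": ["quality"],       # secondary aggregation over another KPI
}

def downstream(changed: str) -> list[str]:
    """Return every KPI that (transitively) depends on `changed`."""
    out = []
    for kpi, inputs in deps.items():
        if changed in inputs or any(d in inputs for d in out):
            out.append(kpi)
    return out

# Late-arriving data changed good_units -> quality and oee must be redone.
for kpi in downstream("good_units"):
    print(f"recompute + republish {kpi}")   # e.g., to SQL, Snowflake, Kafka
```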
14:14
Jeff Knepper: Can you search the model that you've built? Can you move the data freely out of it? And the answer is almost always: no, I can't. And that is the missing piece in almost every single information model and strategy that we come across. So in order for it to scale, we know it needs to stay platform agnostic, be templatized, be API driven, stay away from code as much as possible, be flexible, run on different OSs, and give you universal governance. With that, how about we watch Lenny build a model in real time? You wanna do it in real time?
14:50
Leonard Smit: Let's do it.
14:51
Jeff Knepper: Of course.
14:55
Leonard Smit: So we're gonna cover a couple of use cases today. Obviously, we're gonna cover a bit of a batching use case, and then we'll likely touch on OEE as well. But let's get cracking on building out a little bit of a model. Again, in the visualization that we started off with today, I've got my state engine, I've got my quality samples, and I've got my energy usage, and I wanna try to make sense of all of this data by contextualizing all of these different data points with one another. So what I'm gonna do is go to our configuration environment. This is the environment where I'm gonna go and create my information model with all of my KPIs. I'm also gonna use the same tool to hook up to all of my different data sources. Now, those data sources can be time series data as well as the relational databases that we spoke about. Let's quickly connect to a SQL database.
15:44
Leonard Smit: What I'm gonna do is go to my data sources on the right-hand side here, and create a new connection to a Microsoft SQL database. Inside of the SQL database is a lot of MES transactional-type data that I would like to have in context, sliced with my energy information that normally sits in my time series historian. So let's quickly create a connection there. I'm gonna populate this — this is gonna be the MES data that I'm gonna connect to — and I'm just gonna populate the server name of where that database resides and give the name of the database. My database is called ACME MES, so that will now create a connection to that SQL database.
16:28
Leonard Smit: Test the connection, make sure that it can connect to that database — yes, it can — hit the save button, and that will now create a new database connection. Flow will go and deploy that so that my engines will be able to get the data out of that transactional database. Now, when I select this database out of my data sources, it's actually gonna go and browse that SQL database and see the tables and the structure that live inside of it. So there we go: I've got all of my different tables. I've got my order history, the SKU that I'm making, my work order number, the customer — all of that data is now available for me. Jeff, can you please help me write a join statement on all of these tables, to get it in context with my time series data?
17:18
Jeff Knepper: And this is where it falls apart for me, because no, that's not my job. I'm an engineer. I've got knowledge of my process, but I don't know how to do all this database work — and yet I need access to this data. So absolutely not. No, I cannot. No, I will not.
17:31
Leonard Smit: Okay, cool. So we've got you covered. We can write a definition file for a SQL schema. What that means is that I can work either with the vendor or with a champion in my organization who understands that database, to make it easier for us as engineers to actually interact with that data in relation to my time series data. So I've done this before: I've created another connection to this database, but in this case, I've loaded what we call a definition file against it. Now when I browse the namespace, I get a completely different view. I get a tag-like structure that will tell me the customer name and the SKU being run for all of the different work orders loaded in that database. I don't have to write a single piece of SQL code to do all of the joins. And literally, I can now take this, drag it into my data preview here, and that will do the point normalization for me and give me the work order I'm running, in context with all the other relational data that I have. So I've got my energy usage, I know what the state of the machine is, I know what my sample is, and look at that — I've got the work order number now in context with that data as well.
18:47
Jeff Knepper: And this is where all of the timestamps have already been prepared and aligned perfectly, so that I don't have to worry about looking backwards to figure out what the work order change was when I'm looking at this period of time. We've essentially taken all of the timestamps and aligned them, ready to go. Now I can work with this information.
19:09
Leonard Smit: Correct. All right, so job number one done: connecting to data sources, doing the point normalization for me, making it easy to use the data that I've got on my production floor. Let's quickly extend my model now. I wanna go and be able to track this batch — all the different states that my batch goes through — and I wanna do that within the model environment within Flow. So let's go and create a new folder here; I'm just gonna extend my model. This is my ICC live demo. And within that, I'm gonna go and create a metric where I'm gonna add all of these different inputs that I'm gonna use to add context to my batch event. So this is gonna be all my inputs, and literally all that I have to do is go and create tags that represent all of this data for me inside of my model.
19:56
Leonard Smit: So I would go and create a tag that tells me the work order, and literally drag it across from the namespace into the model. What is the SKU that I'm running? And let's go and marry that with some data from my time series historian. I've got a little simulator running where I actually have the state of this machine, so I'm gonna add that to my namespace as well. There we go: this is the actual state of the machine. Let's add that to the inputs, and we can also add the actual energy that I'm using as well. And why not marry that with the data from my LIMS system that does all the QC checks for me? Click on that, drag it across, and there's my QC check as well.
20:44
Jeff Knepper: So three different systems?
20:47
Leonard Smit: Three different systems.
20:49
Jeff Knepper: Represented in one model?
20:49
Leonard Smit: Represented in one model, normalized. And what's the best part about this? I don't have to keep this tag name as whatever it is in the time series historian. I can normalize these names, and I can simply say: this is my energy usage. So I give things nice normalized names within my model. Okay. I can now use these tags to go and slice the data a whole bunch of different ways. And the first way that we're gonna do it is slice it per batch. So I'm gonna use what we call an event within Flow — drag that event across, this is gonna be my ICC batch tracker — and now I've got rules on when to start the event, when to stop the event, and what different context I need to add to that event.
21:29
Leonard Smit: Let's open that up. And it's simply a drag-and-drop environment. I say, "Oh, I wanna go and trigger this batch every time my state value changes from the control load." And that's it — there's my rule for starting a batch. Okay, what context do I wanna add to this batch? Well, let's add some attributes. I wanna know the context of the SKU, and literally I go and drag the tag onto that attribute to add the context to it. What is the context of my work order number? Again, drag the work order from my tags, or my inputs, onto it as context. So I keep adding more and more attributes, adding more and more context to the data that I have available. And I can go on to add the quality check as well. Now, the model needs to be able to execute live; there's no point in just building the model without the capability to execute these rules at runtime. So, deploy it out. What that means is Flow's engines will now start to execute the model. We will backfill on the historical data, if there is any in the database, and we already start to create these batch events. There we go: sliced batches with information, and that gives me the context of my different work orders, now available for you.
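For readers who want the logic behind such an event rule spelled out, here is a rough pandas equivalent — an illustration only, with a made-up state code standing in for the demo's trigger value:

```python
# Equivalent logic to the drag-and-drop event rule, shown in pandas:
# start a batch event when the machine state changes to RUNNING, and
# end it when the state leaves RUNNING. (Illustration, not Flow code.)
import pandas as pd

RUNNING = 20   # hypothetical state code for "running"
states = pd.DataFrame({
    "ts": pd.to_datetime(["10:00", "10:04", "10:30", "10:41"]),
    "state": [0, RUNNING, 0, RUNNING],
})

started = (states["state"] == RUNNING) & (states["state"].shift() != RUNNING)
ended = (states["state"] != RUNNING) & (states["state"].shift() == RUNNING)

for start_ts in states.loc[started, "ts"]:
    print(f"batch event starts at {start_ts.time()}")
for end_ts in states.loc[ended, "ts"]:
    print(f"batch event ends at   {end_ts.time()}")
```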
22:51
Jeff Knepper: So now written into Flow's database are each of these events with the context from three different databases stored with each of them.
23:02
Leonard Smit: So let's slice it even more. Let's slice it per hour. I'm gonna take my energy usage and slice it per hour, and I'm also gonna slice it per batch, so I'm adding more and more context as we go along. I'm gonna slice it per batch and give it the context of the work order. All of those lines that you saw Jeff draw on the PowerPoint, slicing all of this data — that's exactly what I've done here: slicing it per my calendar, which gives me the context of time (batches, hours, days, weeks), and slicing it by my batch information as well. And again, deploy this out, and the Flow engine will do all of this work in the backend — adding that context, saving it in the database, and having it available for everybody to share.
23:46
Leonard Smit: Okay, now that's just a very simple little batch example: how we slice it, how we add the context to it, and how we can now get that information sliced not only per work order but also per hour, with that work order number available as well. Cool. We have extended the model a little bit as well, just to show you what we can do with, potentially, an OEE kind of example. So here I can see I've got an OEE example built out, with all the typical KPIs that I'd track from an OEE perspective: my schedule, my available time, all the production information. And you'll notice something a little bit different in the model: those guys are blue, while the one I've built is this grey kind, because these all come from a template. All right, so we've templatized exactly how a line OEE should look, and we utilize that inside of our model here.
24:43
Jeff Knepper: And when we say we've templatized it, what we mean is that as engineers, we've templatized it. Flow has not told you how to define OEE. In fact, I promised Rick Pallotta that I wouldn't even say OEE today. But you get to define how you do calculations and KPIs according to how your organization has structured those rules and expressions; you're not forced into our definitions of them.
25:11
Leonard Smit: Now, how can I use this model to extend my KPI definitions even further? Well, I've got four lines — line one, two, three, and four. I know the total production per line for the hour, but now I wanna do a line roll-up: what is the total production for all four of the lines added together? Okay. Now, the nice thing about having a nice normalized model is that Flow can use the model to automatically discover all the measures — the KPIs — that I need to add together to get me to that point. So I can very simply go and say: okay, underneath production I would like a metric for total production, and inside of my total production I'm gonna go and create a new hourly KPI and a calculation for it. This is gonna be my total. And inside here, I can now define rules based on what is happening within my model definition.
26:06
Leonard Smit: So I can go and create a new collection of measures that I'm gonna filter within my model. And I'm gonna say: you know what, I would like all the measures in my model that have a UOM, and that UOM is gonna be units. Okay, so what am I doing? I'm searching the model for anything that's got that unit of measure. It will go and search the model — and in the definition you'll see that guy's got units — and it's gonna bring back all the measures that have that associated with them. I can obviously extend this to also search for anything in the model by its name, and again, it will filter the model and bring back all of those definitions for me. So I can easily do those types of roll-ups within the solution as well. I made a mistake with my spelling there; I'll fix that now.
27:00
Jeff Knepper: So the beauty of this is that as you move to different pieces of equipment, or different processes that have different counts — maybe one site has four lines, the other site has seven — you're not hard-coding these numbers in. Flow is finding them automatically for you, based on the logic that you've built into Flow, identifying the correct number of measures or metrics and then aggregating them according to the rules of the collection. Lenny, for time...
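In plain Python, that collection idea amounts to something like the following sketch — a hedged illustration, not Flow internals — where the measures to aggregate are discovered by filtering metadata rather than being hard-coded:

```python
# Illustration of a model-driven roll-up (not Flow internals): discover
# the measures to aggregate by filtering metadata, so adding a fifth
# line requires no change to the roll-up itself.
measures = [
    {"name": "Line 1 Production", "uom": "units", "hourly": 410},
    {"name": "Line 2 Production", "uom": "units", "hourly": 388},
    {"name": "Line 3 Production", "uom": "units", "hourly": 402},
    {"name": "Line 4 Production", "uom": "units", "hourly": 395},
    {"name": "Line 1 Energy",     "uom": "kWh",   "hourly": 120},
]

# Filter by unit of measure and name, like the collection rule in the demo.
collection = [m for m in measures
              if m["uom"] == "units" and "Production" in m["name"]]
total_production = sum(m["hourly"] for m in collection)
print(f"Total production: {total_production} units")  # 1595
```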
27:27
Leonard Smit: Yes.
27:28
Jeff Knepper: I think we should talk about how we get information out of Flow.
27:32
Leonard Smit: 100%. So there are a few ways. Obviously, we like Ignition, we play well with it, and we like Perspective — it's a very good tool for creating very rich dashboarding capability. So I just played around a little bit, and we are introducing the first search engine for industrial data. We call it Floogle.
27:52
Jeff Knepper: It's Floogle.
27:54
Leonard Smit: Yes.
27:55
Jeff Knepper: Floogle.
27:57
Leonard Smit: I haven't heard about Floogle before. It is an information search site, and this is a little Floogle doodle.
28:01
Jeff Knepper: Sure.
28:02
Leonard Smit: And if you see Travis — that's Travis, by the way. He's celebrating his twentieth year at Inductive Automation, so if you do see him in the passage, give him a pat on the back and congratulate him on twenty years at IA. But the point is, I can now search my model. I can search that Flow model for anything with "bad" in it, as an example. So there we go: I've got my actual bad inputs, I've got my five-minutely bad production figure. I'm not searching by tag name in the historian — who can tell me what the historian tag name was for that point when we did the demo? Nobody knows. Nobody cares.
28:39
Leonard Smit: Of course, we've now normalized this into a nice model where I can search things by nice, proper names. So click on that, and it will pull the data from Flow's API, and I can use the trending capability within Perspective to get my information available as well. I can also embed an entire Flow dashboard using iframe technology, so I can have a complete dashboard that's been built within Flow, with all my states and all my downtimes already populated as well. One thing that we wanna do is also get human interaction into this. We talked about getting data from transactional data and time series data, but what about human factors? Now, we do have the capability to go and do calculation or capturing of frames based on our batches.
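As a purely hypothetical sketch of that pull — the host, route, and auth scheme below are placeholders, not Flow's documented API — client code fetching published results over REST might look like this:

```python
# Hypothetical sketch of pulling published results over a REST API for
# use in a Perspective trend. The URL, route, and auth shown here are
# placeholders, not Flow's documented API.
import requests

BASE = "https://flow.example.com/api"   # placeholder host
resp = requests.get(
    f"{BASE}/measures/bad-production/data",          # placeholder route
    params={"from": "2024-09-17T06:00:00Z", "to": "2024-09-18T06:00:00Z"},
    headers={"Authorization": "Bearer <token>"},     # placeholder auth
    timeout=10,
)
resp.raise_for_status()
for point in resp.json():
    print(point)   # e.g. {"ts": "...", "value": ...}
```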
29:26
Leonard Smit: In this case, I wanna know about this downtime: what was the cause of the downtime? It was an electrical fault, according to the electrical guys. So again, humans can go and enter the state classifications for us. But we can also go and add data just based on time. You've got a meter out in the field; it's not historized — I don't know why it's not historized yet. But the point is, I can go and manually add that data in there, and if I do change the value, it will keep a full audit trail on it. Flow will store that data point, and we'll have a full history of who changed it and when it was changed.
30:08
Jeff Knepper: Thank you. That was a lot to try to show, but I appreciate it. Hopefully you were following along and seeing, if nothing else, the fact that this is a drag-and-drop tool that allows anyone with a little bit of experience to build models. Lenny touched on a point, and it's something that's really hit home with our team: in an age where I'm building art using Midjourney, there are still operations and processes that do not have data historians. Worse yet, there are operations and processes that have data historians that have locked data away and made it almost impossible to get data out, or that have licensing models that are frankly a bit abusive. We do not like that. It actually keeps us from being able to do our job of helping build an information model. And so today, on behalf of our founder Graham Walton and the entire Flow team, I'm really pleased to present a new product. I'd like to introduce you to Timebase.
31:07
Jeff Knepper: Timebase is a completely free, purpose-built industrial historian. We are opening up beta testing for it in two weeks, and we are releasing Timebase on December the second. It is performant, it is secure, and it is easy to get data out of. We're currently developing a trending tool that will, I think, change the way you trend information. And of course, we have ensured that there is a full REST API to pull the data out with, and a licensing model that frankly lets you grab Timebase and install it — whether in Docker or on Windows — and use it across an organization however you see fit. So if you could help us out and be beta testers, that would be wonderful. We'd love to get your experience with the UI; we'd love to be able to tweak the product. And with a December 2nd release date, it is coming up on us fast. So thank you, everybody, for coming in today. Our next action would be, if you'd like to book a consult or see a live demo for yourself, we'd be happy to do so — you can scan the QR code to get more information about our solution. Yeah, I think we probably wanna end there, but our booth is right outside if you'd like to come ask us some questions. But please do not be late for the technical keynote that kicks off at 1 o'clock.

