Inductive Automation Blog

Connecting you to ideas, tips, updates and thought-leadership
from Inductive Automation

Tyson’s Smart Factory Journey Emily Batiste Mon, 12/04/2023 - 08:28

This session provides an overview of how Tyson has standardized operations on Ignition as its SCADA platform, detailing how consistent data and dashboards allow for faster implementations. The talk also covers best practices that Tyson has developed and identifies some of the key integrations that have helped simplify and streamline data collection processes.

Transcript:

00:05
David Grussenmeyer: Hello, everybody. I'm David Grussenmeyer, the Industry and Education Engagement Manager at Inductive Automation. Welcome to today's session, "Tyson's Smart Factory Journey." I'll be your moderator today. To start things off, I'd like to introduce our speakers. We have Chris Windmeyer, Lead Controls Engineer for Tyson Foods. Chris has over 20 years of experience in the CPG space working directly at production sites, with more than 10 years of that in controls and SCADA. He's designed, deployed, and supported multiple MES solutions over the last 15 years.

00:44
David Grussenmeyer: I also have Geoff Nelson, VP of Technical Solutions with SafetyChain. Geoff has always been interested in how things work, which led him to a degree in mechatronics from CSU Chico. After college, he worked as an intern at Csys Labs for a few months before landing his first job as an automation engineer at USS-POSCO Industries. Geoff worked there for five and a half years before moving on to SafetyChain Software, where he is currently the Senior Director of Customer Engineering. Geoff loves his work and enjoys being able to make people's jobs easier and safer. So please help me welcome Chris and Geoff.

01:32
Chris Windmeyer: Alright, thanks everybody for showing up today. I'm normally the man behind the curtain, so public speaking isn't my forte; if you all hang in there, I'll get through this. Today I'd like to walk you through an overview of Tyson's journey from disconnected systems to a standardized, scalable data collection and SCADA dashboard system. When we started our journey, every site had its own SCADA solution. We had installations of Rockwell, Wonderware, and even some Ignition out there. These systems were designed and developed by integrators, OEMs, and even some in-house developers. The data that was collected was important to the plant at that time, but there were no standards, so reporting at an enterprise level was complex at best. The decision was made to create a team that would define standards and build a solid foundation that all other platforms could utilize. The platform we chose: Ignition.

02:31
Chris Windmeyer: Currently, as with many sites, there are a number of challenges, the first one being outdated infrastructure, including network equipment, servers, and even the control hardware. The next challenge we had to overcome was the multitude of different equipment; you might have two plants making the exact same product, but using two completely different pieces of equipment. Next, the issue of inconsistent data, collected in a format that was not standard or normalized. It was collected manually and either entered into a homegrown system, a database, or an Excel sheet, or it was data that just wasn't collected at all. And finally, there was no overall vision. There were multiple projects and multiple teams all working to achieve the same goal, but they were operating in silos; there was no communication between them. That's where the Smart Factory Foundation team was formed: to create a programmatic, collaborative approach across the entire enterprise with a unified vision to create a standard, scalable solution for enterprise reporting.

03:41
Chris Windmeyer: A solution is only sustainable if the site is ready for it, so we did surveys and assessments to validate that the site was ready for our solution. If a site was not ready, we defined a plan to get it ready, whether that involved hardware upgrades, infrastructure upgrades, network upgrades, or even training for the on-site technicians who would ultimately support that solution. One thing we learned early on was that if the site did not have ownership of the solution, it would not be used or maintained, so from the start, we made sure that every team and every group saw the benefit of this solution. To ensure the on-site support teams would have ownership, we made them part of the development team. Our team rolled out the configs and the data sets and set the base configs.

04:31
Chris Windmeyer: After that, we invited the on-site techs to configure the tags and actually develop the screens. That way, they had ownership in it; they built something that was visible to the entire corporation. So, as you can see, there's more to creating a solution than just creating a project and walking away. A solution is only as good as the people supporting it, and if there's no ownership, it will not be supported or used. So enough with the philosophical backstory; let's get on to why you all are here. The first thing we did was create a tag naming standard and a standard folder structure. We started out, as you can see, with the plant code, which is our ERP system's code. Underneath that, we defined an area; this has changed a little bit, so we've decided to use our ERP resource as the area and then add the machine, the equipment, underneath that. This way, when we pull in data, which we now have coming in from our ERP system, we can tie that resource back to process orders, material specifications, and runtime thresholds; all that information can be displayed on the SCADA screens.
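The plant-code / area (ERP resource) / equipment hierarchy described above can be sketched as a small path builder. This is an illustrative sketch, not Tyson's actual implementation; the segment names and the `build_tag_path` helper are hypothetical:

```python
def build_tag_path(plant_code, resource, equipment, folder, tag):
    """Assemble one standardized tag path:
    plant code / area (ERP resource) / equipment / folder / tag."""
    segments = [plant_code, resource, equipment, folder, tag]
    for part in segments:
        if not part or "/" in part:
            raise ValueError("tag path segments must be non-empty and contain no '/'")
    return "/".join(segments)
```

With hypothetical codes, `build_tag_path("P123", "RES-0042", "Oven_1", "Standard Data", "Temperature")` yields a path whose resource segment can be joined back to ERP process orders and material specs.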

05:48
Chris Windmeyer: Underneath each equipment layer, we define three separate folders. We've got a machine info folder, which is basically just some static data with a machine description, a serial number, things of that sort. The raw data folder is where we actually pull in the tag data from the PLCs themselves. Within the standard data folder, we take the data that we got from the raw folder and we normalize it. Let's say equipment vendor one is reporting temperature in Celsius and vendor two in Fahrenheit; within the standard data folder, we'll show Fahrenheit data. The same is true for line speed, if it's feet per minute versus meters per hour: we pick a standardized value and that is what we go forward with.
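The raw-to-standard normalization described here (Celsius to Fahrenheit, meters per hour to feet per minute) amounts to a lookup of unit converters. A minimal sketch; the metric/unit keys and the `normalize` helper are hypothetical, not Tyson's actual tag configuration:

```python
def c_to_f(celsius):
    """Convert Celsius to the standardized Fahrenheit value."""
    return celsius * 9.0 / 5.0 + 32.0

def mph_to_fpm(meters_per_hour):
    """Convert meters per hour to the standardized feet per minute."""
    return meters_per_hour * 3.28084 / 60.0

# (metric, source unit) -> converter into the standardized unit
NORMALIZERS = {
    ("temperature", "degC"): c_to_f,
    ("temperature", "degF"): lambda v: v,   # already standard
    ("line_speed", "m/h"): mph_to_fpm,
    ("line_speed", "ft/min"): lambda v: v,  # already standard
}

def normalize(metric, unit, value):
    """Map a raw-folder value into the standard-data folder's unit."""
    try:
        return NORMALIZERS[(metric, unit)](value)
    except KeyError:
        raise ValueError("no normalizer defined for %s in %s" % (metric, unit))
```

A vendor reporting 100 degC would land in the standard folder as 212 degF, so enterprise reports compare like with like.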

06:38
Chris Windmeyer: Along with this, we also partnered with... Tyson just joined OMAC, which helps develop standards and best practices across the industry. Tyson does sit on the advisory board, and we're trying to promote that and move it forward as we go. Currently, the PackML model that we're using only covers batch processes; we've actually been able to get them to agree to start working on a technical spec for continuous processes as well. So this is the machine state model defined by PackML. We use these... not every single one of these, but we use these to calculate machine uptime and throughput. Most of the ones you use are Execute, which is like a running state; you've got a held or an idle state, a stop state. You use those within the templates to define your machine parameters and running stats. This is one of our screens that we developed, and we chose a four-quadrant grid for all of our screens across the enterprise. The upper left-hand quadrant always shows equipment metrics such as uptime, quality, and performance for the current hour.
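The PackML-state-based uptime calculation mentioned above can be sketched as classifying each state sample as productive or not and accumulating time. The state names follow the PackML model (Execute as the running state); the `uptime_percent` helper and its input shape are hypothetical:

```python
# PackML states counted as productive; everything else counts against uptime.
RUNNING_STATES = {"EXECUTE"}

def uptime_percent(state_log):
    """state_log: (state_name, seconds_in_state) samples over the window."""
    total = sum(seconds for _, seconds in state_log)
    if total == 0:
        return 0.0
    running = sum(seconds for state, seconds in state_log
                  if state in RUNNING_STATES)
    return 100.0 * running / total
```

An hour split into 50 minutes of Execute and 10 minutes of Held would report roughly 83% uptime for that hour's quadrant.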

07:55
Chris Windmeyer: In the second quadrant, we chose the critical statistics for that piece of equipment, also over that same hour. In the lower left-hand corner, we take those same critical points, but we map them for the entire shift, so you can see how that shift has been running with those critical points. In the lower right-hand quadrant, we chose to show alarms and downtime for that same shift. Here's another example of one of our screens showing the same four quadrants for a different piece of equipment, still the same format for watching. It looks a little different, but we were able to use our EAM server. We defined a centralized development server, so we define all of these components on the development server and then we can push updates out to all the projects if a change is needed, because every component is inherited from a global project that gets shared down through the EAM.

08:57
Chris Windmeyer: We've defined this simplicity of scale; we were looking to build a solution that was scalable. So when we went from that first screen to the second screen, we changed two things: the tag path and the name of that piece of equipment, referencing back to that standard data folder, as long as it is one of the equipment types we've defined as our base. So if you go from one oven to another oven, you pass in that tag path and that screen comes alive. That's all the change that's needed. So, one of the main highlights I'd like to call out that our team was able to develop: in the plants, there's a lot of human interaction with a piece of paper, and people are taking metrics out on the floor: temperatures, speeds of conveyors. It's all on a piece of paper; somebody either goes and enters it into an Excel sheet or, even worse, it just gets thrown in a drawer and nobody even knows it exists.
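The "change the tag path and the screen comes alive" idea is essentially deriving every binding on the template from one equipment base path. A minimal sketch; the `screen_bindings` helper, folder names, and tag names are hypothetical:

```python
def screen_bindings(equipment_path):
    """Derive every tag binding a template screen needs from one base path."""
    std = equipment_path + "/Standard Data"
    return {
        "uptime": std + "/Uptime",
        "throughput": std + "/Throughput",
        "temperature": std + "/Temperature",
        "description": equipment_path + "/Machine Info/Description",
    }
```

Swapping `"P123/RES-0042/Oven_1"` for `"P123/RES-0043/Oven_2"` retargets every quadrant at once, which is why moving the template between ovens is a two-value change.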

10:02
Chris Windmeyer: So what we developed was an actual process controls and monitoring form that follows the TPS 2.0 methodology. Every form, every cell on here can be defined by the plant at runtime, so they can totally define what checks they wanna see. Every check has upper and lower limits and alarm values; the cells that you see here in gray are actually pulling in directly from the PLCs. So an operator doesn't even have to go look at an HMI; there's the value. They fill in the information they want and hit the submit button. The other thing we've been able to do, as you can see, is tie in the SKU data, which is pulled in from our ERP system, so these metrics actually get tied back to the order that was running at that time. When this gets ingested into our cloud infrastructure, we can report on it across the corporation to see what kind of issues we had. And like I said before, these are entirely customizable. We built them so that we can actually have recurring times, so let's say we need one of these checks to happen every hour; if that hour passed and the check was not done, there's actually an alarm that gets sent out that says, "Hey, this check is past due." The same goes for a setpoint that's out of spec: when they submit that and it's out of spec, it can actually send an alarm on to the supervisor or FSQA.
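The recurring-check and out-of-spec alarms described here boil down to two small predicates. An illustrative sketch under stated assumptions; the function names and the alarm policy are hypothetical, not the actual form implementation:

```python
from datetime import datetime, timedelta

def check_is_past_due(last_completed, interval_minutes, now=None):
    """True when a recurring check was not submitted within its interval."""
    now = now or datetime.now()
    return now - last_completed > timedelta(minutes=interval_minutes)

def value_out_of_spec(value, low, high):
    """True when a submitted check value falls outside its limits,
    which would route an alarm to the supervisor or FSQA."""
    return value < low or value > high
```

A scheduler evaluating `check_is_past_due` each minute against every hourly check is one simple way to drive the "this check is past due" alarm.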

11:39
Chris Windmeyer: So one of our next, and one of our really good, integrations: SafetyChain is the quality control piece that we currently use at the plants, outside of Ignition. We had a really highly complex system. We were pulling data from the local Ignition database up into the cloud, and we were going to Monarch, which was pulling metadata from SafetyChain, overlaying those, building that data, submitting that SQL query back to SafetyChain, and submitting that data. What we've been able to do with the new Ignition module that's just come out is go straight from Ignition to SafetyChain. It will pull the form in directly from SafetyChain; all the fields at the bottom are coming in from the form in SafetyChain. It can pull data in from any data source that is available to Ignition. You've got the full expression language and scripting available to you as well, so this was a huge time saver for scale and rollout. If we have this template, I can take it, copy, paste, change my data source, done. Five minutes versus days in the old format. So with that being said, I'd like to turn it over to Geoff.

13:05
Geoff Nelson: Thanks, Chris. So I've been working with Tyson for about five years. I've been with SafetyChain for 10, and we've been partnering with Tyson to deliver this quality module. Usually we come into a place and we're replacing paper clipboards; Chris mentioned that a little bit. Tyson's case is a little more unique. They had a homegrown system called Plant View. Plant View, right?

13:32
Chris Windmeyer: Yep.

13:32
Geoff Nelson: And it did digital collection already, so there were high expectations for what SafetyChain would come in and do to replace Plant View. So as a quality system, we came in and became part of the Tyson tech stack to make sure that they're within regulatory compliance for quality. And we set up a lot of integrations, because their previous system had them already, so they were pumping data into SafetyChain to make sure their temperatures were within compliance, or weights, or what have you, through production, through quality. But it was difficult; like Chris said, they had a system called Monarch, which is a homegrown system.

14:09
Chris Windmeyer: It is. That's correct.

14:09
Geoff Nelson: Home-built system. So they built the system in Monarch to do these queries, pull this data, create that JSON payload, and send it to SafetyChain, very complex. Took a lot of time, and there were some issues with some of the queries.

14:19
Chris Windmeyer: Yeah, very unstable.

14:21
Geoff Nelson: Very unstable. And so what we did is we said, "There's gotta be a better way for us to accomplish this." So we sat down; I went and met with Tyson: Tyson architects, engineering, IT. Travis Cox also came and joined us to discuss what we could do. We also had Cam Bergen, CEO of mode40, come to help. We just sat in a room to say, "What's the right approach here? How can we make a scalable solution?" And we came up with this Ignition module. So, given the plant level, the enterprise level, and the way the Ignition deployment had been done and was going to be done at Tyson, we decided we would leverage the gateway and the data that was available there and create a module that made it a lot easier to pass the data: easier, more scalable, more reliable. It's configurable now. Compare that to how long it would take to create something like this in Monarch?

15:19
Chris Windmeyer: Probably days.

15:21
Geoff Nelson: Days, yeah. Now it's minutes, right? Potentially.

15:22
Chris Windmeyer: Exactly.

15:26
Geoff Nelson: So much easier. And then it leverages Ignition's framework. So we have store and forward, there's queuing. If anything fails, you can see why it failed; if the connection is lost for whatever reason, it will queue and send back up when the connection is restored. So a lot more reliability and a lot of scalability moving forward with the module. It just shows the partnership piece of the equation. SafetyChain's a digital plant management platform; we're there as that quality system with Tyson. But now we've been able to develop this module to allow any of the system data to come in. The main point was the partnership: Inductive Automation, along with Tyson, who's a great partner, coming together to create this module and extend the scalability and reliability of the data. That's all I have.

16:23
David Grussenmeyer: Okay. So now we're gonna open it up for Q&A. We have a mic runner. Please, if you have a question, you can raise your hand. If you're in the front rows, you can come down to one of these mics down here or just one. And make sure to state your question into the mic so that we get it for the Livestream audience as well. So, any questions?

16:42
Audience Member 1: Hi, so I just had a question with regard to how SafetyChain is able to streamline the integration from when it was done from Monarch. Can you maybe elaborate a little bit more on the details of how it streamlined that?

16:58
Geoff Nelson: Yeah, so Tyson has Ignition. Tyson had decided on Ignition as a standard for their enterprise, and so the data could become available through Ignition. Monarch was a homegrown tool; it was essentially a way to run queries on data. So very complex queries to pull data and create a JSON payload to call an API. Through the module, we've made it configurable to actually download form configuration from SafetyChain so that Ignition knows: what's my data set from SafetyChain, what is the thing I'm looking at, what are its temperatures, where is it, what is it about. And it can pull from any of the data sources in Ignition. So it can look at tags, the historian, named queries, SQL queries, and perform expressions to manipulate the data if needed and pump it directly to SafetyChain. So it's a bidirectional integration, essentially: the module knows what SafetyChain needs, downloads it, and then sends data back up to SafetyChain. So you could trigger, like, time-based temperature checks in an oven, for example. You have some really complicated triggers, right?
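The Monarch-era flow of assembling a JSON payload and posting it to an API can be illustrated with a minimal payload builder. The field names and structure here are hypothetical for illustration only, not SafetyChain's actual API schema:

```python
import json

def build_payload(form_id, resource, readings):
    """Build a JSON payload of field values for a quality-form submission.

    readings: iterable of (field_name, value) pairs pulled from tags,
    the historian, or named queries."""
    return json.dumps({
        "formId": form_id,
        "resource": resource,
        "fields": [{"name": name, "value": value} for name, value in readings],
    })
```

The module's improvement is that this assembly, plus store-and-forward on failure, is handled by configuration inside the gateway rather than hand-written queries in a separate tool.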

18:06
Chris Windmeyer: Yeah. So we've got queries we've defined where, in an oven, for example, a temperature has to be at 128 degrees for 60 minutes, and then what we send up is the previous 60 minutes of every-minute temperatures from four different probes in each oven. So we've really been able to define that trigger within the form itself. It can use, like I said, any data source that's available to Ignition. And you can define those triggers in the tags, or you can define them in the form itself. You can have it time-based, or you could say, let's do a temperature check every 60 minutes, or every time this tag goes true. I mean, it's very, very user-friendly and easily defined.
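A trigger like "at 128 degrees for 60 minutes" can be evaluated as a rolling-window check over per-minute samples. An illustrative sketch; the `hold_trigger` name and window handling are hypothetical, not the actual form configuration:

```python
def hold_trigger(samples, setpoint=128.0, hold_minutes=60):
    """samples: per-minute temperatures for one probe, newest last.
    Fires once the most recent hold_minutes samples all meet the setpoint."""
    if len(samples) < hold_minutes:
        return False
    return all(t >= setpoint for t in samples[-hold_minutes:])
```

When the trigger fires, the submission would carry that same 60-sample window (per probe) up with the form, tying the hold evidence to the order being run.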

18:55
Audience Member 2: So, Chris, I like the story you told about the way that you would take the application to plants and then get individuals at those plants to learn how to build screens and things like that. Can you kind of go more into that process and kinda like what was involved to get their buy-in? Did you have to get them Ignition certified? Did you walk them through it? Just how did that process work out?

19:12
Chris Windmeyer: Sure. So, like I said, from the start, you have to get buy-in from ELT teams. You've gotta make sure you've got buy-in from the business unit as well: plant management all the way down to the people who are actually gonna be doing the support. Once we got that and we got approval to go to a site, we would do our validations. Okay, is the network up to speed? Is the virtual infrastructure okay? How about the automation techs who are gonna be supporting it: do they know Ignition? They do? Okay, they're certified, great, we go. If they're not, we've actually cherry-picked classes from Inductive University and said, "Okay, you guys need to go through this." We built a tier-based solution, four different tiers, and before we get to the site, you need to be through tier three.

20:01
Chris Windmeyer: And then once they're through that, we get everything set up: we get the server spun up, we get the base project created, and then we build, say, the first line 100%. On line two, we work hand in hand with the onsite techs and say, "Okay, you're gonna build this solution." So they create the tags, they map the tags from the raw data into the standard data, they change the tags in the views, and they're rocking and rolling. And that really gets buy-in from the site. They've now got ownership; they developed something on their own that's showing real-time, live data that everybody can see. And that is a key point: you've gotta have buy-in from every level, or it won't be sustained.

20:46
Audience Member 3: Was there anything that was like a catalyst within Tyson to commit to this sort of big strategic project? Or did you have to champion that? And how did you get traction for a big automation strategy instead of each site being sort of content with the status quo and just getting by?

21:06
Chris Windmeyer: Well, I mean, that was the big piece, is the plants were fine with it. They had the data they wanted, but at a corporate level, there was no visibility into that data. Yeah, you could get the ERP data, but that's really, really high level. We wanted to be able to see, from a corporate level, that granular level. So we get the ERP data up, now we've got the Ignition data tagged to it, and you can really see it at that level. So it was pushed down from the higher management, said, "Hey, we need a solution for this." And the solution was create a team and go out and figure out how to do it.

21:44
Audience Member 4: The question from me is, you guys standardized the screens so you could, let's say, compare plant to plant. Did you also enforce standardization for projects where the guys you cherry-picked might say, "You know what? It's probably good if I wanna visualize this other process I have here that's not tied to those standard screens"? Did you standardize that as well?

22:08
Chris Windmeyer: So what we've done is we've given them templates. If they do come up with a new piece of equipment, they've got our standards, and we've told them, "Okay, go ahead, build it on your own. Once it's complete, come back to us. We'll make sure it fits our standards and rolls with everything else." We give them the feedback, and we allow them to make the changes. So they develop that component, which would then get rolled out, potentially to multiple sites. That really builds buy-in. They're now not just building something for their plant; they've built something that spreads across the entire corporation.

22:48
Audience Member 5: How's it going? Specifically, if you don't mind, what's your experience been with SafetyChain specifically? My company is a food manufacturer as well. And there is a lot of paperwork with USDA and all the other... And I'm just curious, can you speak to like your experience with SafetyChain specifically? Are you extremely happy, moderately happy? Is there a range?

23:12
Chris Windmeyer: That's kind of a loaded question.

23:13
Chris Windmeyer: So I don't... I don't really use SafetyChain myself. I've learned enough about it, just so I can see that when I submit data, it showed up over here. Okay, great. I have worked with the team that does some of the report building. They seem to be really impressed with the reporting capabilities and how they can see the data they need to be able to see.

23:32
Audience Member 5: Alright. Thank you.

23:39
Audience Member 6: Hey, Chris.

23:40
Chris Windmeyer: Hello.

23:40
Audience Member 6: Your tag paths are pretty cryptic for the average person. And so I wanted to ask what the process was like for your design team to choose between easy to read, something anybody would understand, versus the engineering or ERP tag structure that you came up with?

23:54
Chris Windmeyer: So we went through a couple of renditions, but since we did ultimately want to get that ERP data down in an easy way, that's why we chose to go with that more cryptic kind of definition. Yeah, it's a little more complex for the plant engineers who have to support it, but it really is defined. I mean, step by step, we've got documents that define it: okay, this piece of equipment goes in this spot. It's very easy to follow, and once you've worked with it for a few days, you kind of start to understand it.

24:28
Audience Member 7: Yeah. I saw that you were bringing in raw data, right? And then you're normalizing it in your tag structure. Are y'all historizing and operating off of both of those data sets, or are you strictly operating on and historizing the normalized data?

24:38
Chris Windmeyer: So it depends. So if the data coming from the PLC is already normalized, it stays in that raw data folder. If we need to make some changes to it and normalize it, then it'll go into the standard data folder. So we are using both, depending on the case.

24:54
Audience Member 7: Gotcha. And a follow-up question: would there be value to you in consolidating your tag counts by having some kind of tag type that would allow you to transform that data before it entered the system, instead of having to, say, reference it and do calculations on it?

25:10
Chris Windmeyer: Yeah, I mean, a lot of times we don't have access to the PLCs themselves, so we have to basically take the data we get as it is. There's a lot of options we're still pursuing. I mean, this team is not even a year old yet, so we're still kind of getting our feet wet and still defining those standards and redefining them. So anything's possible and under consideration at this point.

25:43
Audience Member 8: Hey, Chris, you talked a little bit about the start of your Digital Transformation journey. I'm curious, if you were to go back now and you had all the stakeholders in the room, is there any one thing that you would do differently, or that you'd want to emphasize for people who might be kicking off smart factory journeys themselves in the coming weeks?

26:07
Chris Windmeyer: I kind of came into this team about six months after it was spun up, so I don't really 100% know where it totally started from. But I think that was a big thing that we learned after the fact. In the beginning, we did kind of try to show this to a plant and say, "Here's what you're getting." That wasn't received so well, so we stepped back a little bit, made sure we had buy-in, made sure that that support was gonna be there. So I think that really is the key to this whole thing: getting that buy-in and making sure everybody's aware of what's coming.

26:41
David Grussenmeyer: Alright. I think we got time for one more question.

26:43
Audience Member 2: Oh, okay. Well, I get a second one then. So now you have Ignition at all these plants. You have all these technicians trained in how to use Ignition. It's piping all this data up to the enterprise, but do they have the leeway to do other stuff with the system? And are they playing with their new toy and what are they doing?

27:01
Chris Windmeyer: Yeah, so we've given onsite techs a lot of leeway. Basically, it's theirs; they're free to do what they want. I mean, we put our solution out there, our standard data is there. As long as they don't mess that project up, they're free to go do as they want. Maybe they wanna make a one-off for their wastewater solution. Well, we're not monitoring wastewater in our enterprise reporting, so go have fun, play.

27:28
David Grussenmeyer: Awesome. Well, let's give a round of applause for Chris and Geoff.


Speakers

Chris Windmeyer

Lead Controls Engineer

Tyson Foods

Geoff Nelson

VP of Technical Solutions

SafetyChain

ICC Year
2023
Don’t Get Lost in the Cloud: Tips & Tricks for Successful Ignition Deployment and Management Emily Batiste Fri, 12/01/2023 - 12:34

With the release of Cloud Edition, it's never been easier to get Ignition running in the cloud. But are you ready for it? From security concerns to misconfigurations, there are plenty of pitfalls to stumble upon when managing applications in the cloud. But fear not, as help is on the way. Join the experts from 4IR in this session where they'll provide helpful tips and tricks for deploying and managing Ignition in the cloud.

Transcript:

00:04
Susan Shamgar: Hi. So my name is Susan Shamgar. I'm a Technical Writer at Inductive Automation, and I'll be your moderator for today's session, "Don't Get Lost in the Cloud: Tips & Tricks for Successful Ignition Deployment and Management." To start things off, I'd like to introduce our speakers for today. First up, a longtime member of the Ignition community, Joseph Dolivo. He currently serves as the CTO of 4IR Solutions, an Inductive Automation Solution Partner focused on cloud, Digital Transformation, and life sciences. For more than a decade, Joseph has focused on modernizing manufacturing by intelligently adopting state-of-the-art technologies and principles from the software industry. James Burnand is a 20+ year veteran of the industrial automation ecosphere, who has now turned his focus toward providing the infrastructure for manufacturers to reap the benefits of the cloud for their plant floor applications. He weaves cybersecurity, operational requirements, and management into 4IR Solutions' offerings and provides education and consulting for companies looking to begin their journey into a cloud-enabled and highly automated OT infrastructure. Please help me welcome James and Joseph.

01:20
James Burnand: Thank you, Susan. Your payment will be after the session. We really appreciate that. Hi, everybody. Welcome to the session. Hello, people live streaming. So Joe and I are here to talk to you about the cloud today. We've talked all week about what we do in the cloud, but what we really want to do today is help you understand some of the considerations, some of the tools, and some of the methodologies that you should consider if you're going to be doing deployments in the cloud. To start off, I'm going to go a little into why the cloud is in use today, what some of the benefits are, and where we're seeing adoption take off. Then Joe is going to go into the real deep technical details about what things you can do, what tools you can use, and how to actually go about doing that.

02:08
Joseph Dolivo: Yep. We're excited. We'll get as deep as we can with the time that we have, but definitely save your tomatoes and everything else for the Q&A session afterwards. As long as my voice holds out, I will answer as many as we can, and we'll have contact info provided for future questions.

02:22
James Burnand: Alright, so let's get started. Why do people care about the cloud? I know we've been talking about it; it's become this huge discussion point. There's a lot of attention around the different opportunities that are opened up, be they AI, be they flexibility, but ultimately one of the most basic things that's important about using the cloud is that you only pay for what you use. You're not buying a set of servers and computing resources sized for the capacity you'll need over the lifecycle of those assets; you're not buying five years' worth of storage that you're eventually, hopefully, going to use five years from now, plus your safety factor. You're literally paying for just what you're using, and as you consume more, that cost goes up. So controlling cost, if you think about why people are using the cloud in the first place, is the biggest reason.

03:15
James Burnand: But the other benefit you get is that you are able to scale things. Not only do you get to pay only for what you use, but you now have the ability to theoretically endlessly scale those resources based on the growth of a system, the growth of the amount of data that you collect, or the collection of different applications that you deploy. It also opens up opportunities with capability. There are things that are just hard to do where you can go and install a service from a cloud provider and they do it for you. There are managed services, application functions, third-party plugins. There are all sorts of things that become remarkably easier to do when you take advantage of those precompiled and prebuilt resources that you can buy from a public cloud provider.

04:03
James Burnand: So what do we see people using it for, and what are good use cases? A lot of the organizations that use the cloud, and a lot of the folks we've seen at this conference, are people who have very distributed systems. Telemetry-type systems, places where it doesn't matter where my server is because everything I'm collecting from is remote, are a really great use case for the cloud. Or where there's a lot of focus on data and processing, and I need to be able to use more advanced functions and features to provide the insights that I need. The other thing is that when you look at some of those services I described on the last slide, things like time series databases, AI applications, data warehouses, Snowflake, these are all things that become very easy to integrate with, use, and take advantage of when you have the cloud.

04:48
James Burnand: So those data-centric applications just make a lot of sense to be able to use those resources for them. And then one of the things we... One of the most basic things we love using the cloud for is backing things up 'cause it's really hard to back things up in a way that's easily recoverable, testable, and you can be sure that when it's time to go and restore those backups that they're available. The cloud is a fantastic and very cheap way to store long-term backups of systems that you're running on the factory floor. So what I will say though is, just like playing soccer in scuba gear, just because you can, doesn't mean you should. You don't use the cloud for everything. And so what we found is that one of the really great opportunities, one of the really great options that people are starting to explore a lot more now is hybrid cloud.

05:38
James Burnand: So I grabbed a definition off of... I forget where, I Googled it, but a hybrid cloud is a computing environment that combines on-premises data centers, also called a private cloud, with a public cloud, allowing data and applications to be shared between them. Really what it means is you install a piece of cloud in your building. So you put hardware in that provides a conduit, access, and ability to deploy those really cool applications that are precompiled, those services that the cloud providers give you, into a piece of hardware that happens to live inside of a building. So a factory or a transfer station or wherever the local needs might be. So you get that low-latency, high-capability system that's running locally on site. You have the ability to cut the cord to the Internet and it still runs, but you get the benefit of running those cloud services down inside of the building.

06:35
James Burnand: I see it as being fairly revolutionary. I think it's still really new for a lot of folks. It's a concept and a way of thinking about deployment that not a lot of people are really that deep into yet, but I personally see that it's... I think it's going to be the future for a lot of the bigger systems. So who's using it today and what are they using it for? SCADA systems for distributed telemetry systems. We're seeing a lot of MES systems being cloud-deployed, especially things like OEE. We're working with our friends at Sepasoft on a number of different opportunities right now where it's, I want to be able to deploy across this fleet of facilities, I want to be able to create a consistent fabric of OEE application access and Ignition and databases.

07:22
James Burnand: And to be able to do that in some plants, it's super easy 'cause hey, they got great resources, engineers that understand what's going on, but it's really difficult to do in facilities where there's maybe not any sort of local support or they don't have people that really understand exactly how to build and maintain those systems. Using cloud or hybrid cloud for those sorts of solutions really makes it an equal playing field for all the users and all the locations that are going to have access to that application. The other piece that we're seeing is a lot of ingestion. So we saw some Snowflake stuff this week, which was really, really cool. We're seeing that there's this pull of all this information up to these data warehouses. Analytics tying together sales data and financial data in with production information in new and innovative ways that let you make better business decisions, and it's only being unlocked by the type of solutions that people in this room are putting together to ingest that information. The other kind of piece to this is tying together with existing cloud services, things like ERP systems, cloud-based databases. There's just a ton of opportunity in pulling those things together. So that's what we're seeing today.

08:35
James Burnand: So challenges and risks, I would say the one thing to remember is the cloud is public. So when you go and you do a deployment, yes, you get access to all this really great technology, all of these applications, all of these things that you're able to do. But ultimately, if you're not careful, you are deploying those things in a publicly accessible location. There's lots of ways to remediate that, lots of ways to manage that. Really, what we find is the most critical part of that is making sure that you have a plan for how you're going to manage those assets. There's ways to be able to deploy in public clouds and have no external access to them, only internal to your facilities, but you have to plan all that stuff up front. So Joe's going to walk through all kinds of technology pieces around that.

09:21
James Burnand: I'm throwing the warning flags up and saying, just remember that it's public and that it's something that, yes, there's a policy in place for most major organizations to be cloud-first because of that first slide around cost savings, but it's not as simple as deploy and forget because if you do that, you're potentially opening yourselves up to all kinds of new risks and challenges that will unfortunately be potentially costly. I would also say that it's difficult to dabble in this space. So there's a big difference from what we've seen in being able to get something working versus having something sustainable and maintainable over time. So tools like CloudFormation templates, which I know Joe is going to talk about, these are things that make it real easy for us to be able to build up an infrastructure in the cloud very quickly.
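To give a flavor of the templates James mentions, here is the smallest useful shape of a CloudFormation document, expressed as a Python dict; the resource name, instance type, and AMI ID are placeholders for illustration, not recommendations:

```python
import json

# Minimal CloudFormation template declaring a single EC2 instance.
# "IgnitionServer" and "ami-EXAMPLE" are invented placeholder values.
template = {
    "AWSTemplateFormatVersion": "2010-09-09",
    "Resources": {
        "IgnitionServer": {
            "Type": "AWS::EC2::Instance",
            "Properties": {
                "InstanceType": "t3.large",
                "ImageId": "ami-EXAMPLE",
            },
        }
    },
}

# CloudFormation consumes this as JSON (or YAML) via the AWS console or CLI.
document = json.dumps(template, indent=2)
```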

10:12
James Burnand: Even Ignition Cloud Edition lets you just start a virtual machine and run Cloud Edition and it's there and it's going, but you really do need to make sure that you're following best practices, the hardening guide, and best practices from the cloud vendors to ensure that you are putting in security as a consideration even for systems that you're testing, even for systems that you're just trying to figure out. Because what tends to happen, as I think many people in this room have seen, is I'm just going to start off small. I'll install Ignition here, and that's all it'll ever be used for. Six months later, it's like, "Well, I can use it for that. Well, I can use it for this. Well, I can use it for that." So you end up creating this burgeoning and growing set of applications. And when it's on-prem, the risk is a little bit... Well, it's a lot less because you don't have this public access. When you're doing that in the cloud, unfortunately, you have to be more careful. I believe Joe is going to take over talking now.

11:05
Joseph Dolivo: Well said. I think we're trying to differentiate between the ease of getting started, which is great for demos and learning and testing, and then production-grade systems. So we know a thing or two about production-grade systems. If you guys have seen the Data Dash that's going on right now, all that Ignition infrastructure is part of one of our managed service platforms called FactoryStack. What we're going to try to do is to take you through some of the lessons learned that we've had in working in this space for a long time before Cloud Edition was a thing, but then to give you some very practical takeaways that you can implement in your own systems, and also give you a little bit of insight behind what we've done and productized. And I will just say, coming out of the Technical Keynote, there are a ton of things that are coming in Ignition 8.3 that we are super excited for because it's going to make a lot of the stuff that we have to do now manually a lot easier for all of us.

11:52
Joseph Dolivo: So very, very exciting. We tried to categorize this into five different categories. Again, we could spend days talking about all of this, but they're largely broken down into networking, security, access management, data management, and cost management. And of course, especially with regards to networking, security, and access management, there's some overlap. So we've come up with a couple of different examples from each of these that we'll talk through. And again, as you have deep questions, please let us know and we'll go down into the weeds during the Q&A if we can. So I'll start with networking. So encrypt all the things. You hear a ton about encryption really in two different categories. There's encrypting things at rest. That's obviously important for data storage, making sure things aren't getting changed after the fact.

12:37
Joseph Dolivo: But also when it comes to networking, we're talking about in transit. So Ignition as a tool has great support for SSL certificates so that any traffic that's going into or out of your Ignition system will be encrypted, but it's not just Ignition. When you're deploying these production systems, you don't just have one Ignition gateway. Typically, you're going to have multiple Ignition gateways in a gateway network. The Ignition Gateway Network uses something called gateway network certificates that you can use to basically encrypt communication between Ignition gateways using the same principles that you use to encrypt your web traffic and all of that. So that's really key. And again, Ignition isn't just talking to other Ignition systems. It's also talking to databases, for example. So when you're configuring your databases, very important to enforce SSL encryption. There's a setting in the Ignition gateway configuration to do that.

13:27
Joseph Dolivo: And even more so, you can go down to the level of basically restricting access to certain ciphers. So I'm going to use certain cryptographic ciphers, I'm going to require TLS 1.3, for example. So focusing on encryption as a key part of everything that you're doing is really, really critical. The other thing that you'll tend to hear about, which is still very important and a good step one, is to use a VPN. VPNs have been popular for a long time for good reason. They're a really nice, easy way to extend, let's say, an on-premises network into the cloud. Cloud providers have really good tools to make that easy, but if you just rely on a VPN, then you're doing what you call perimeter security, and we'll touch on security more in a minute, where you're securing the outside, and then as soon as somebody gets in the door, it's kind of free rein.
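The "require TLS 1.3" idea Joe describes can be sketched in a few lines with Python's standard ssl module; this illustrates the principle generically and is not Ignition's own configuration, which lives in the gateway web interface:

```python
import ssl

# Build a client-side TLS context that refuses anything older than TLS 1.3.
ctx = ssl.create_default_context()
ctx.minimum_version = ssl.TLSVersion.TLSv1_3  # reject TLS 1.2 and below

# Any socket wrapped with this context will now fail the handshake against
# servers that only offer older protocol versions or weaker cipher suites.
```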

14:16
Joseph Dolivo: So a VPN is a tool, but it's a tool in defense in depth. So don't rely on a VPN by itself. Encrypting traffic, whether or not it goes through a VPN, is important. So that's encryption. Limiting external connectivity. So we've got Ignition running in the cloud. Again, you probably have a database, for example. Best practices would suggest that you don't provide external access to the database unless you need to, and typically you won't. So your Ignition system can be publicly accessible via web browser, mobile device, designer access, things like that. The database, you would probably want to be locked down inside of a virtual private network or a VPC depending on your cloud provider. I'll use both terms interchangeably.
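As a toy illustration of "lock the database inside the VPC," the ingress rules below expose only HTTPS to the Internet while keeping the database port private; the ports and CIDR ranges are invented for the example:

```python
# Hypothetical security-group ingress rules: the Ignition gateway is
# reachable over HTTPS from anywhere, but the Postgres port (5432) is only
# open to the private VPC address range.
ingress_rules = [
    {"port": 443,  "source": "0.0.0.0/0"},    # Ignition web / Perspective clients
    {"port": 5432, "source": "10.0.0.0/16"},  # database, VPC-internal only
]

def publicly_exposed_ports(rules):
    """Return the ports that any rule opens to the entire Internet."""
    return [r["port"] for r in rules if r["source"] == "0.0.0.0/0"]
```

A quick audit like this, run against your real rules, catches the common mistake of a database port left open to `0.0.0.0/0`.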

15:00
Joseph Dolivo: And then there's a bunch of these cloud-native services that James had alluded to that are things like data lakes, digital twin services. And again, depending on if you're going to funnel all that data through Ignition, you don't want to have outside access to those systems. And the cloud providers provide really good tools, private endpoints, private link. Those are things you can use to basically expose even some of those managed services into your private network without having to go out through the public Internet which is the default. So highly, highly recommend that for anything that you're going to be doing which requires access from the outside. And the last one here is about minimizing hops. So especially for production-critical systems, getting data in a timely manner is very important.

15:44
Joseph Dolivo: And now we're not just talking about, oh, I'm sitting across from my server in my plant. I'm talking about having to go up to a cloud system and back in order to communicate. And the cloud is global so you can pick regions and then you can deploy things. I could be sitting here in California connected to a cloud server in Arkansas, which is actually what we're doing for the Data Dash here. And so by default, when you're starting to add these different layers of networking complexity into your systems, you risk introducing a whole bunch more latency to applications like Ignition. So one of the recommendations that we have if you're going to be deploying this inside of, let's say, an orchestrator like Kubernetes, which has been talked about a couple times, would be to look at the network interface that you're using to expose those workloads.

16:29
Joseph Dolivo: So for example, if you're using Kubernetes, by default, it deploys an overlay network called Kubenet, and it's got this virtual address space that's disconnected from everything else. It's introducing another network hop. The cloud providers provide integrations with something called the Container Network Interface that lets you expose the same IP addresses, same address space you're going to use for your virtual machines or for other kind of workloads, also for the containers that are going to be running Ignition. That reduces the network hop, makes your application more performant. Same thing when it comes to these complex architectures where you have load balancers in place. Every hop, every proxy you put in place is going to slow that down. So be very careful and selective about where you're introducing those kind of latencies. So we could have a whole session on networking.

17:14
Joseph Dolivo: That's a couple of highlights. Security, natural progression from talking about networking. Keep your systems up-to-date, and you're saying, "Well, of course, that's obvious." But when you actually look at the scope of systems we're talking about, let's take Ignition as an example. You've got your application, so you're going to be making changes to your application to fix bugs, to implement features and all of that. That application resides on Ignition, so keeping Ignition up-to-date, for sure. Doing that in a production system where... I love IT people, but you can't just push down security patches at any point in time. You've got a production system. You can't do that. So Ignition is a component of that and most applications are also built on a database. You're using the Sepasoft MES modules. It's built on a database.

17:57
Joseph Dolivo: Now, you've got to do those updates in tandem. So I need my database and my Ignition system to be in lockstep, and if one of those is out of step... You think you're taking backups. We'll get to backups in a bit. Are they in sync? Are they cohesive? And now you're going down to a level below Ignition that's running in an operating system. Whether it's containerized or not, I got to patch that operating system. Maybe I've got an orchestrator like Kubernetes, maybe I've got add-on modules for providing other functionality. So looking at these systems as something that is living and breathing and you don't just set it and forget it is incredibly important. And to James's point, it's so easy to set something up once and then you forget about it and say it's good enough.

18:38
Joseph Dolivo: These air-gapped networks don't really exist anymore. Maybe they never did, but nowadays it's not something to count on, especially when you're talking about the cloud. So reducing attack surfaces: the more stuff that's available on the public cloud, the more targets there are for attack. You go to shodan.io, you can see all the industrial OT devices that are available. It's terrifying, but you should check that out if you haven't heard of it before. So we want to do everything that we can to minimize the exposure of applications and data from the outside, looking at limiting external connectivity like we talked about as part of that. One thing I want to highlight within the Ignition ecosystem: Ignition has first-class support for containers. Containers are great because when you distribute a container, and there's a couple of sessions on that at the conference, you're basically just distributing the minimum set of files that you need to run an application, and that's it. And you're decoupling it from everything else that's required, like a kernel and everything else to run an operating system, Windows updates, all that kind of stuff.

19:39
Joseph Dolivo: So if the kind of target that you're deploying is basically these containers that have minimal packages installed, you're not getting everything out of the box that you might get with Windows: updates, WordPad, calc. So that really, really helps you to minimize that attack surface and it's, again, one less set of targets that attackers are gonna be able to go after. And then, of course, there's monitoring for breaches, and I can't tell you how many times two years down the road, somebody will find out that, oh yeah, somebody has been in our systems and they may have modified our data. We don't know what happened. We're gonna have to do a product recall or put out an announcement. So doing active monitoring is really, really important. It's something that there's a number of tools available to do that.

20:20
Joseph Dolivo: There's some that are kind of OT-specific, and you'll see 'em inside of OT networks from companies like Claroty and Nozomi and things like that. But there's also a lot of IT-centric tools that really work well in the cloud environment. A lot of them are based on machine learning to do like anomaly detection. So I'm gonna kind of pick... These are the sort of typical traffic patterns that I might be seeing in a cloud environment. If all of a sudden I see a huge spike in network traffic, or if I see access logs from users or accounts that I don't tend to see, maybe I raise a flag, I send a notification, I require manual intervention. And then tuning that in a way that you're not getting so many false positives, that is the same problem we talk about with alarms all the time.

20:58
Joseph Dolivo: It's, "Oh I've got so many alarms, I'm just gonna ignore 'em all." So there's a balance there, but the fact that you don't just kind of set this up and ignore it, you have to be actively monitoring for breaches. So super, super important. Again, we could have a whole session on security alarm. Let's talk about access management. So there was a question that came up in the Technical Keynote talking about using YubiKeys for authentication with Ignition and things like that. Access management is hugely important. And another universal principle that you'll hear, and it ties in really, really nicely I think with Ignition is to practice the principle of least privilege. So in terms of user accounts, that means if I'm gonna be authenticated and authorized to use a service, I wanna be provided with the least amount of access that I need to be able to do my job.

21:44
Joseph Dolivo: And that's for two reasons. One, in the case of kind of a malicious actor, that reduces the damage that can be caused if that account is compromised. And it also just helps people from kind of shooting themselves in the foot or doing something by mistake that they wouldn't ordinarily try to do. So for example, in the kind of Ignition roles, you may say, well, I'm only gonna give an operator certain roles so they can't accidentally change the configuration of the system. If I'm an administrator, I may have elevated roles, but we also tend to just say, you know what, I'm just going to use an administrative account that has access to do everything because it's too much work to go through a process and then you end up getting in trouble when that happens.
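The operator-versus-administrator example can be sketched as a simple allow-list check; the role and action names here are made up for illustration, and in Ignition itself this maps onto roles and security levels rather than a dict:

```python
# Least-privilege sketch: each role gets only the actions it needs, and
# anything not explicitly granted is denied.
PERMISSIONS = {
    "operator": {"view", "acknowledge_alarm"},
    "admin": {"view", "acknowledge_alarm", "edit_config"},
}

def allowed(role, action):
    """Deny by default; permit only explicitly granted actions."""
    return action in PERMISSIONS.get(role, set())
```

The deny-by-default shape is the point: an unknown role, or a typo in a role name, results in no access rather than accidental full access.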

22:24
Joseph Dolivo: So enforcing roles in a way that is consistent and clear is really important and there are tools that you can use to do that especially if you are taking the management of that outside of, let's say, just Ignition. You can use something like... Entrada [Entra] ID is what it's called now, but I never get it right. It used to be Azure AD, so basically the cloud extension of Active Directory, and you can have all of your groups and roles centrally managed across your organization. And then you can have the concept of, let's say, a supervisor and a supervisor can have certain access granted in Ignition, certain access granted in other applications, your ERP systems, your CRM systems and things like that, and you have that all managed in a single place.

23:03
Joseph Dolivo: The last part on principle of least privilege is that it doesn't just apply to named user accounts. It also applies to, let's say, service accounts. And so this is an example. We'll talk about databases more in a minute, but when you're configuring access to a database, that database may not need, or that database user account may not need the ability to delete records. Maybe I can only do inserts, especially for audit trails. I'm gonna be able to insert into the audit log. I don't wanna have somebody that can update or delete from those. So think about the principle of least privilege in terms of the system accounts as well in addition to named users.
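A hedged sketch of what that insert-only audit account could look like, with Postgres-style GRANT statements assembled in Python; the role and table names are hypothetical:

```python
# Least-privilege grants for an audit-trail service account: it may insert
# and read audit rows, but can never update or delete them.
def audit_grants(role, table):
    """Build Postgres-style SQL for an insert-only audit account."""
    return [
        f"REVOKE ALL ON {table} FROM {role};",
        f"GRANT INSERT, SELECT ON {table} TO {role};",  # no UPDATE, no DELETE
    ]

stmts = audit_grants("ignition_audit", "audit_events")
```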

23:37
Joseph Dolivo: Password management. I'm super excited. Again, Technical Keynote talking about using a system like HashiCorp Vault, where you can have the dynamic password authentication. Right now, there are certain accounts like the database connection in Ignition, which is more or less kind of hardcoded. It's sort of encrypted in the configuration, but some of those things are kind of hardcoded. But for other things like logging into Ignition, the safest way to manage passwords is to not manage them, and again if you're using a system like Entrada [Entra] ID, or AWS IAM, or Okta, or Duo, or some other system, you've got an enterprise security company whose stock price and revenue is based on them doing a good job with all of that. So we recommend not having to manage it yourself. It's one less thing you have to deal with. So for our platform, we don't see any passwords at all from users. We say, nope, we don't wanna deal with it.

24:30
Joseph Dolivo: And then of course, monitoring and auditing access. So Ignition by itself, you configure an audit log. It logs a whole bunch of different events that are occurring by default, which is great. You also have a script function that you can use to add additional logs manually based on things happening in your application. And depending on, again, the system you're using for identity and access management, you could also have sort of a central audit log in the cloud that you can use to monitor. So every time somebody logs in, every time somebody asks for elevated privileges, so there's tools like PIM, Privileged Identity Management, where maybe I'm gonna be given read-only access to a service, and I have to go through an approval process to give me temporarily elevated access rights to some other system. Well, that's gonna be audited and logged and it's maintained for a certain duration of time and then that'll be it. So again, active monitoring, similar to threat management when it comes to security. Really important for access management.

25:23
Joseph Dolivo: A couple more here, data management. So take backups, and again, that sounds great in theory. Backups include a lot of different systems. And Ignition's actually really, really great in the fact that you can go in the gateway configuration page, you can schedule backups to be taken on a schedule, and if the volume to which you are storing those backups is, let's say, cloud-replicated, that's great. You can get cloud-based encrypted backups, multiple availability zones and multiple regions out of the box really, really easily. Again, most systems aren't just Ignition. There's gonna be a database component, there's gonna be other systems that you have to take, and some systems are not as nice to deal with... They don't allow for live backups the way Ignition does, and the official kind of application process for doing backups is, I'm gonna spin down a workload, and then I'm going to copy a volume somewhere else and I'm gonna spin it back up.

26:16
Joseph Dolivo: So we have to do some of that with manual pipelines and things like that. But if you have the ability to kind of coordinate the backups of all your systems together, really, really important. And then the backup's no good if you take it and then two years later you need it and you realize that the backup failed, or the backup was incomplete. So it's really, really important, especially for production systems, that you are doing regular verification of those backups. A really easy way to do that, especially if you're using Ignition in containers: take a backup of a database, take a backup of the Ignition gateway and other stuff, and then spin up a brand new environment. I'm gonna say, okay, this is now my dev environment. I'm gonna restore a gateway backup, I'm gonna restore databases, and I'm gonna do some spot checks or automated testing to confirm that those are all still working.
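The full verification Joe describes is restoring into a scratch environment and spot-checking it; the smallest flavor of the same idea, confirming an archived backup still decompresses to exactly the bytes you stored, can be sketched with the Python standard library (the payload here is toy data):

```python
import gzip
import hashlib

# Take a "backup": compress the payload and record its checksum at backup time.
payload = b"gateway-backup-contents"
archive = gzip.compress(payload)
expected = hashlib.sha256(payload).hexdigest()

# Verification pass, run later: decompressing raises on a corrupt archive,
# and the checksum comparison catches an incomplete or altered backup.
restored = gzip.decompress(archive)
ok = hashlib.sha256(restored).hexdigest() == expected
```

The same two checks, integrity of the archive and a known-good checksum, are what you would automate against every scheduled gateway and database backup.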

26:53
Joseph Dolivo: So we do that regularly for all customer instances. It's something you should do as well. Really, really important. Data residency requirements, so especially when you're talking about production systems, again, in the cloud, you've got all these different regions you can deploy into. Certain cloud services you'll find are only available in certain regions as well, and certain regions have availability zones or don't have availability zones. It's really important to know where your data is going and where your data is being stored at all times. And there are a lot of industries, a lot of companies that have very specific regulations to say, my data cannot leave the United States, for example, or my data cannot leave Canada, or my data cannot leave this particular geographic region. So keeping that in mind is really important 'cause you may say, well, yeah, my workloads are running inside of US-East-2, but to get there, it has to go up through this other system running somewhere else.

27:45
Joseph Dolivo: And now the data's being... Even if it's encrypted, my data's going somewhere where it's not supposed to be. That's a big no-no. Same thing with storage. The cloud providers have the concept of paired regions, where you could say, you know what, I'm gonna store most of my data in US-East-2, but it's paired to something in Canada-West-1. So for disaster recovery purposes, that may or may not be okay depending on what your team's kind of requirements are due to regulations or company policy or anything else like that.

28:17
James Burnand: And maybe I can just quickly add to that if my mic comes on. When you're also architecting your solution, availability zones and regions become a huge important consideration. So for example, you can buy storage that's mirrored across three of those. So availability zone for everyone's benefit is a completely separate data center that has a separate power feed, it has separate network connections, but it's inside of a region. So US East, for example, for Azure has three availability zones that you can buy services from as US East. So depending on the reliability requirements of the application that you're deploying, you need to choose the services that have the right level of reliability. So by default for us, for example, when we do storage, we'll actually have storage that's mirrored across three availability zones in a single region so that way we can tolerate two buildings burning down before your system will stop. So just to kind of put a little perspective around that is that there is also a cost consideration as a part of that. So if you're going to buy something that is available across regions, for example, it's going to be more expensive than if you're getting something that's dedicated to a single availability zone in a single region. So your application architecture matters from a cost perspective.

29:29
Joseph Dolivo: We are definitely getting to cost as the next big pillar here as well. So well said, James. And the last point on here is just data integrity and retention. So I need to maintain data for seven years, 10 years due to regulatory purposes. The storage accounts inside of the cloud providers allow you to do, for example, immutable data. So I'm gonna push data into an archive storage tier. AWS Glacier is an example, and an Azure Storage account has an equivalent, where nobody's gonna be able to touch it, and it's gonna reside for some extended period of time. So that's really, really important for compliance purposes, and it doesn't even necessarily have to be data in your live system. You may say, you know what, having a 10 terabyte drive on this managed database service is really expensive.

30:17
Joseph Dolivo: But I need to maintain the data, but I'm not actually gonna query it unless an auditor comes and starts knocking on my door and says, "Show me the data." So you could store all of that older data in kind of much cheaper archive storage and then if you need to restore it to say, "Hey, look, I've got it," then you can go through a process to do that when you need it. A really good way to save cost, which is our final category for today. So cloud makes it so easy to get up and running, and the cloud providers wanna incentivize you to just pump all the data up. We're not even gonna charge you. If you're not going over an encrypted connection, we'll ingest all your data for free. That's become pretty much a standard. But once it's up there, they're gonna charge you for using it.
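As one concrete shape of that archive tiering, here is an S3 lifecycle rule of the kind you would pass to boto3's put_bucket_lifecycle_configuration; the rule ID, prefix, and retention numbers are invented for the example:

```python
# Hypothetical S3 lifecycle rule: move objects under "tag-history/" to
# Glacier after 90 days, and expire them after roughly seven years.
lifecycle_rule = {
    "ID": "archive-old-history",                       # made-up rule name
    "Filter": {"Prefix": "tag-history/"},              # made-up prefix
    "Status": "Enabled",
    "Transitions": [{"Days": 90, "StorageClass": "GLACIER"}],
    "Expiration": {"Days": 7 * 365},
}

# In real use this goes inside {"Rules": [lifecycle_rule]} on the bucket.
```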

30:52
Joseph Dolivo: And there's a lot of stuff in the news recently. Hey.com recently talked about how much money they're saving by going out of the cloud and there's a lot of... So we talked about some of the reasons you may or may not want to use the cloud, but once you... You're really paying for sort of the flexibility and scalability that you get. So for the Data Dash, we said we're gonna spin up five servers. Give me five servers, Azure, and boom, we have five servers up and running. But you're paying for that dynamism and flexibility. So if you know, for example, I'm gonna run Ignition Cloud Edition for a year at least, you go to the AWS Marketplace, you go to provision Ignition Cloud Edition, it'll tell you, if I know I'm gonna run this workload for a certain amount of time, I can basically commit to paying for a year and I'm gonna get a pretty sizable discount on the infrastructure cost.

31:36
Joseph Dolivo: 30%, 35%, something like that, that's huge, especially when you're talking at scale. And it's not just Ignition systems that can do that. You can do that with databases typically, you can do that with storage. So trying to estimate the workload that you have and then being able to kind of predict what you're gonna need is really, really useful as you've been running. Again, not so much for experimenting. When you're in a production system, that's important to consider, and it's something we do as well. So we actually will forecast out based on our customers. We're gonna commit to using this amount of resources and we get a cost savings from that. So that's reserving capacity up front. Another thing, and different cloud providers have different terms for it, is basically spot instances. So this is where maybe I don't need a workload running all the time.

32:18
Joseph Dolivo: Maybe I need to do like a... I was gonna say batch job, but batch means something else in our automation industry, but I'm gonna run a report at 2:00 AM every week, for example. And it's something that's gonna run for a while and then it's gonna shut down. I don't need it running all the time. Or maybe I'm gonna just spin up a temporary dev system. I don't need it for a long period of time. If it goes down, it's not a big deal. You can leverage these cheaper spot instances where you basically will say, well, I only want to pay for a compute between this price and this price and if it becomes available, great. If not, shut it down. Or if somebody else is willing to pay a higher price for it, they're gonna steal my VM out from under me.

32:55
Joseph Dolivo: You can have incredible cost savings when you do that. It's also good for like a lot of GPU-based workloads like ML and AI training. So that's, again, not so much for Ignition production systems, but certainly for either dev and test systems or if you need some kind of temporary scalability like, hey, I need to add another frontend node to my Ignition server 'cause I'm anticipating more load during shift one, or something like that. So that's something else to consider. Huge, huge implications on cost if you do it right. And then I can't tell you how many times I've heard from customers saying, "Well, I got the bill at the end of the month and it was 10 times higher than I expected." So making sure that you're putting monitoring in place and alerting in place so that if you're starting to exceed your typical usage trends, you're able to identify that quickly and early.
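
The billing guardrail described here can be as simple as projecting month-end spend from the current run rate. A minimal sketch, with a hypothetical `spend_alert` helper and made-up dollar amounts:

```python
def spend_alert(month_to_date, day_of_month, monthly_budget, threshold=1.25):
    """Project month-end spend from the run rate so far and flag it if the
    projection exceeds the monthly budget by more than `threshold`."""
    projected = (month_to_date / day_of_month) * 30  # crude 30-day month
    return projected > monthly_budget * threshold, projected

# Hypothetical numbers: $900 spent by day 10 against a $2,000 budget.
alerted, projected = spend_alert(month_to_date=900.0, day_of_month=10,
                                 monthly_budget=2000.0)
print(alerted, projected)  # → True 2700.0
```

In practice you'd wire the same check into the cloud provider's native budget alerts or a tool like Grafana rather than roll your own, but the trip-wire logic is this simple.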

33:39
Joseph Dolivo: So this has saved us a number of times. I talked to a couple of folks in the room about this where we had logs that we were aggregating that basically hit a trip wire and our system alerted us. We were able to make a change so that we didn't get a $3,000 cost after that. And the cloud providers themselves and a lot of the cloud-native tools have ways of doing that. We'll talk about our tools in a minute. We use Grafana Cloud as an example for aggregating all of our metrics and logs across all of our systems. So you can set up alerts and notifications. You can do it in Azure, AWS, and GCP so that way you won't be surprised when the bill at the end of the month comes. So super important.

34:19
Joseph Dolivo: Just to kind of give you some insight, if you're kind of looking like, "Well, where do I kind of get started with this?" These are tools that we use. There's a whole bunch of them. It's really hard to pick, but I'll just kind of go through some of the icons so you're aware of them. Obviously, you know Ignition right in the center. Everything that we do and most of everything that you do is built all around Ignition. If I start at the top left, there we go. There's a laser pointer. So that is Kubernetes. We don't recommend that for most folks. It's one of those things if you have to ask, you probably don't need it. Something that we use internally, and there's a really great session that Kevin Collins did earlier today talking about kind of the nuts and bolts of that.

34:56
Joseph Dolivo: We use that because we're orchestrating Ignition across tons and tons and tons of customers. So if you're a bigger customer, you have a lot of Ignition instances to deploy, a lot of other workloads alongside a single gateway you need to deploy, a really good tool to consider. If you need to run one Ignition server, it probably doesn't make sense. Going clockwise I guess, Grafana is the next one. So this is what we use. I kind of hinted at it for metrics and log aggregation. It gives us really good deep insight into our containerized workloads as well as all of the kind of cloud provider-native services. So we can see how we're doing on cost, we can look at our CPU and RAM performance, all that kind of stuff. It's really nice to have a single pane of glass. And there's other systems out there that can do that.

35:37
Joseph Dolivo: We like Grafana. Great visualizations as well. Git, so when you're making changes, especially in kind of an enterprise space, it's not a cloud-native technology. I call it a cloud-adjacent technology. It's kind of in the same realm doing version control. Again, super excited for the changes coming in Ignition 8.3 that will make this more comprehensive beyond projects. We did a whole session on it last year. We're doing a workshop on it in a couple of weeks. But we basically run Git inside of the cloud to maintain backups of our project configuration, both for Ignition as well as other services. And then currently we support AWS and Azure. I love GCP as well. That's a great one. And then finally, the one whose logo you may not recognize here, this is called Pulumi.

36:17
Joseph Dolivo: So there's this whole suite of tools, Infrastructure as Code is the buzzword. Terraform is kind of the market-leading, most popular one. They've been in the news recently due to some licensing changes that they've made around their open source offering, but we've been using Pulumi, which just lets us use the programming expertise that you'll have from Ignition Python, for example. You can use that to provision all of your infrastructure. So we never manually go and download a VM and download Ignition and go do the installer, even though it's only three minutes. We never do it. We use everything as containers and it's all provisioned using this tool called Pulumi. So there's a ton of good tools out there. We highly recommend, being in automation as we are, that you leverage some of these where it makes sense for you. I think...
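
To give a feel for the Infrastructure as Code idea without tying it to Pulumi's or Terraform's actual APIs, here's a toy diff of desired versus current resources. The resource names and the `plan` function are purely illustrative, not any tool's real interface:

```python
# Not Pulumi's real API -- just a toy illustration of the declarative idea
# behind Infrastructure as Code: you describe desired resources in ordinary
# code, and an engine works out what to create, update, or delete.
desired = {
    "vm/ignition-gateway": {"size": "t3.large"},
    "db/historian":        {"engine": "postgres"},
}
current = {
    "vm/ignition-gateway": {"size": "t3.medium"},  # drifted from desired
}

def plan(desired, current):
    """Diff desired state against current state, like a `preview` step."""
    create = [r for r in desired if r not in current]
    update = [r for r in desired if r in current and desired[r] != current[r]]
    delete = [r for r in current if r not in desired]
    return {"create": create, "update": update, "delete": delete}

print(plan(desired, current))
```

The real tools do this diff against the live cloud account, so "apply" only touches what actually changed, which is what makes repeatable, hands-off provisioning possible.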

36:58
Joseph Dolivo: So we've got additional resources. We made reference to some of these. So there's best practices, obviously the Ignition Security Hardening Guide, concepts for Kubernetes. AWS and Azure have their own. GCP also has some. These are links inside of the PowerPoint, which will be sent out. Definitely take a look at all of these. And then the two sessions, there was a good higher-level one on Ignition in the cloud. If you didn't get a chance to see it, watch it on the Livestream or the recording afterwards. And then the "Deployment Patterns for Ignition on Kubernetes" that Kevin Collins did. So really, really good sessions with really, really good, good info. And question mark means questions. We're ready for the tomatoes.

37:41
Audience Member 1: I didn't bring my tomatoes today, but one question I have, you guys alluded to it earlier that this is a space that's difficult to dabble in. So many of us being integrators or service providers here, what offerings do you guys have for providing a sandbox environment for people to get familiarized with your platform and potentially show it off marketing material style for potential clients?

38:08
James Burnand: I'll take that one. So we're in the process of hopefully soon announcing some really cool local versions of what we offer that you'll be able to actually run locally on your machine as a test environment. As it stands right now, we set up demos for integrators all the time with their own separate subdomain on our development system. So then you get gateway and database access. You can throw your projects up there, you can test playing with them, and you can make sure that they work. But one of the cool things about how the products that we built work is all of this complexity is kind of encapsulated in those. So you get designer access and you can get database access, and it looks just like a normal Ignition project. So from our perspective, we're trying to help make this technology easier to be able to adopt and that's kind of been our business model from the beginning.

39:02
Audience Member 2: Getting into cloud and cloud infrastructure and tools is... Can be a scary thing. And I think I've seen that with a lot of customers and even with myself thinking about how do I even get started? Can you guys talk to what you would say to somebody who wants to get over that fear and even just get their feet wet with cloud infrastructure and how they can start seeing those benefits and how do you overcome that first step?

39:31
Joseph Dolivo: Something that I would say is cloud is a spectrum. You don't either adopt it or not adopt it. There's kind of a spectrum of adoption. And so the easiest way that we've seen to kind of justify the use of cloud is just use it for offsite backups. Do you really, as I like to say, take the tape drive down to the bank vault every day? Is anybody doing that? Some people are doing that. Use it for encrypted multi-site offsite backups. That's kind of the Trojan horse, if you will, to kind of cloud adoption. And then use it for the things that it's really, really well suited for and tailored for like scalability. You know what, I'm gonna spin up a dev system, for example. I'm gonna play around with it. That's a really nice way to get companies more comfortable with it.
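
As a sketch of that offsite-backup starting point, here's a minimal Python example that compresses a gateway backup and records a SHA-256 digest so the offsite copy can be verified later. The `.gwbk` file is a throwaway stand-in, the upload step is stubbed out, and note that gzip is not encryption; per the point above, a real offsite copy should be encrypted before it leaves the site.

```python
import gzip, hashlib, pathlib, tempfile

def package_backup(gwbk_path):
    """Compress a gateway backup and record its SHA-256 so the offsite
    copy can be verified after upload. The actual upload (S3, Blob
    Storage, etc.) and client-side encryption are left as stubs here."""
    data = pathlib.Path(gwbk_path).read_bytes()
    compressed = gzip.compress(data)
    digest = hashlib.sha256(compressed).hexdigest()
    out = pathlib.Path(gwbk_path + ".gz")
    out.write_bytes(compressed)
    return out, digest

# Demo with a throwaway file standing in for a real .gwbk backup:
tmp = tempfile.NamedTemporaryFile(suffix=".gwbk", delete=False)
tmp.write(b"pretend gateway backup contents")
tmp.close()
archive, sha = package_backup(tmp.name)
print(archive.name, sha[:12])
```

Run that on a schedule against your nightly gateway backups and ship the result to object storage, and you've replaced the tape-drive trip with a few lines of automation.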

40:09
Joseph Dolivo: We spend a lot of our time with heads of IT and security folks kind of talking about why this is okay, how this can fit within their kind of existing IT landscape. It's actually kind of interesting because I'll say prior to maybe three or four years ago, the cloud was a scary thing for almost everybody. And we've really had this excitement that we've seen from a lot of customers I think driven by let's say Ignition's use of a lot of IT technologies, for example, where all of a sudden you talk to a chief security officer and they're like, "Oh, you're using containers, you're using this. I get it. You're speaking my language now." So that's actually helped I think to make it a little bit more palatable. But yeah, start from offsite backups. Super, super simple would be my point around that.

40:49
James Burnand: Yeah, I would only add to that that I actually think probably the best place to kind of focus learning attention if you're a traditional automation person and you're looking to figure out kind of how does all this work is I would focus on containers, learning the different container architectures, how networking works, how you actually set up those systems. And Kevin Collins' GitHub page is fantastic for anybody that hasn't been to it. Absolutely you need to go to it. I don't have the URL handy, but certainly it has so many resources that will help you learn about how to work with these architectures. And then really what you're doing is you're taking that Docker-centric architecture and you're using these prebuilt functions and tools to make it easier to actually do a more coordinated deployment.

41:34
James Burnand: One of the things Joe didn't mention is our Grafana system that's providing us all that alerting and monitoring, what it often is telling us is that it fixed something. So Kubernetes had a problem and it took care of it, and we get a Teams message that says, "Yeah, the problem happened and the problem is taken care of." So like that part of kind of the progression and the ability to automate and take advantage of these tools at scale is the ultimate goal, but none of that happens if you don't first start focusing on things like containers.

42:04
Joseph Dolivo: Yep. The last part I'll add is looking at containers, it's another one of those kind of cloud-adjacent technologies. You can run containers on-prem and you can run 'em in the cloud. So start doing the things that will work well in the cloud, but just do 'em on-premise. So we've seen a lot of that's kind of hybrid cloud is kind of a similar idea with that. Thanks.

42:24
Audience Member 3: Do you have customers that are ingesting or exgesting? What's the opposite of ingest? I don't know. Doing that thing...

42:37
Joseph Dolivo: Expulsion?

42:39
Audience Member 3: In other cloud technologies. Like IoT Core, for instance. Are people using IoT Core to get data into your systems or then beyond just a normal database thing? Are there other places where data's going out of your environment?

42:56
Joseph Dolivo: For sure. So there's... And it is funny 'cause IoT Core is a service that AWS and other providers have had. GCP made a lot of news recently where they actually sunsetted one of their IoT products. And so Cirrus Link is here. They have a great broker. HiveMQ is here. There's a number of kind of broker technologies, I'll say, for getting data up into the system and then also kind of pushing it back out. So Ignition is a good fit for integrating with all of those, a lot of those kind of event-based systems. Again 8.3 is coming and it's gonna make this easier. But you can ingest into Azure Event Hubs, you can ingest into AWS IoT Core. So those all work. The one thing to keep in mind too is that not all of those services, they may support MQTT, but they may not be fully compliant with things.

43:44
Joseph Dolivo: So for example, we went down a whole road with like store and forward and avoiding data loss. Going up into MQTT, there's some nuances to the TCP Keepalive Timer and all these kind of things that could result in data loss. A lot of systems that are sort of compliant, somewhat compliant outside the ecosystem don't support all of those. So that's something to keep in mind for sure. Once you get data up into Ignition in the cloud, then you can kind of push it out, but we found... We've seen a lot of benefit. If you're gonna push data into Ignition running in the cloud, whether it's [Ignition] Cloud Edition or whatever, keep it in there to do all of your visualizations and stuff like that if you're gonna use an Ignition and then push it out after that. So I hope that helps.
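
The store-and-forward behavior being described can be sketched in a few lines. This toy `StoreAndForward` class is not Cirrus Link's implementation, which also persists the backlog to disk and handles Sparkplug sequence numbers; it just shows the core buffering idea:

```python
from collections import deque

class StoreAndForward:
    """Toy sketch of store-and-forward: while the broker is unreachable,
    values queue locally; on reconnect the backlog flushes in order
    before any new values are sent."""
    def __init__(self, publish):
        self.publish = publish      # callable that sends one message
        self.connected = False
        self.backlog = deque()

    def send(self, msg):
        if self.connected:
            self.publish(msg)
        else:
            self.backlog.append(msg)

    def on_connect(self):
        self.connected = True
        while self.backlog:
            self.publish(self.backlog.popleft())

sent = []
sf = StoreAndForward(sent.append)
sf.send("t=1 temp=72")   # broker down: buffered locally
sf.send("t=2 temp=73")
sf.on_connect()          # reconnect: backlog flushes first, in order
sf.send("t=3 temp=74")
print(sent)  # → ['t=1 temp=72', 't=2 temp=73', 't=3 temp=74']
```

The keepalive nuance in the talk matters precisely here: if the client doesn't detect the dead connection promptly, values sent during that window are neither delivered nor buffered, which is where partially compliant brokers can lose data.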

44:25
Susan Shamgar: Alright. Thank you, everyone. I believe that is all the time that we have for today. So can we get one more round of applause for James and Joe?

44:40
James Burnand: Thank you. Thank you, everybody.

44:40
Joseph Dolivo: Thanks everybody.


Speakers

James Burnand, Chief Executive Officer, 4IR Solutions
Joseph Dolivo, Chief Technology Officer, 4IR Solutions

ICC Year
2023
Elevate Your OT Data Securely to the Cloud Emily Batiste Mon, 11/20/2023 - 15:28

Ignition Cloud Edition! Awesome! But wait… How can I possibly connect my PLCs or I/O systems to the cloud? Won’t that jeopardize them? And require heavy IT involvement? What’s the payoff? In this session, we’ll discuss how to use Ignition Edge and Ignition Cloud Edition together to quickly create scalable, high-performance, cybersecure architectures for democratizing your OT system’s data. Whether in brownfield or greenfield environments, you’ll unlock the power of edge-to-cloud hybrid architectures that are cost-effective, easy to manage, cybersecure, and deliver more value to your organization. 

Transcript:

00:05
Bryson Prince: Awesome. Alright. Hey guys. I'm Bryson Prince. I'm a Software Support Engineer for the Inductive Automation Support Department, and welcome to "Elevate Your OT Data Securely to the Cloud." I'll be your moderator today. So basically I'm just here to introduce our lovely speaker, Benson, and then afterwards with the Q&A, I'll be helping out with the microphones. Just for the Q&A portion, please remember if you've got a question, you either need to come down to one of these mics on the stand or we'll have a mic runner run up to you. Okay? So to introduce Benson: he is the Vice President of Product Strategy at Opto 22, with 30 years of experience in information technology and industrial automation. Benson Huegland?

01:00
Benson Hougland: Hougland.

01:02
Bryson Prince: Hougland? Sorry.

01:02
Benson Hougland: No problem.

01:03
Bryson Prince: Hougland drives product strategy for Opto 22 automation and control systems, which connect and secure the real world of OT with the systems and networks of IT and cloud. Benson speaks at trade shows and conferences including IBM Think, ARC Forum, and ISA. His 2014 TED Talk introduces non-technical people to the IoT. So please help me in welcoming Benson.

01:28
Benson Hougland: Thank you, Bryson. Okay, welcome everyone, and welcome to this, what will be an action-packed session. So fasten your seat belts, we're about to get started here. Special shout out to all you Livestream attendees as well. Thanks for joining this session, mom, dad...

01:53
Benson Hougland: Appreciate that. So let's jump in. The title of this session, of course, "Elevate Your OT Data Securely to the Cloud." My name is Benson and I'll be your host for this journey from the edge to the cloud. I decided to forego the obligatory about Opto 22 slides here, get straight into the session. But for those of you who don't know much about Opto, real quickly, we're a California-based manufacturer of industrial automation hardware and software. Been in business for 50 years. And we have applications all over the world in a myriad of industries. Here's a drone shot of our headquarters, based in lovely Temecula, California, about an hour north of San Diego. Here is where we design, manufacture, support everything we make, 100% made in the USA. So, there you go. Go USA, right?

02:46
Benson Hougland: So this is the agenda in your programs. You've all already read this and hopefully that's why you're here. But in short, we're gonna cover how to use Ignition Edge at the edge, along with Ignition's new Cloud Edition, to create a scalable, high-performance, and cyber-secure automation architecture to pull data from both greenfield applications and brownfield applications, democratize that OT data, and of course deliver new value to your organization. Now's a good time to mention, as you probably know, this session will be recorded. We are gonna cover a lot of materials, so don't feel like you gotta remember everything I do up here. Getting to this session's agenda, we'll start off with, why? Why should we do this? Followed by some architecture diagrams. Then we're gonna roll up our sleeves, well, not gonna roll up my sleeves, they're already rolled up, and we're gonna actually build this thing literally out of the box to the cloud with OT data in 35 minutes.

03:43
Benson Hougland: So, wish me luck. We'll also cover the important question, of course, which is what if you lose the connection to the cloud, what happens next? Finally, we'll have some time at the end to answer some questions. Okay, so why? Well, our industry is still somewhat in the Industry 3.0-type world. And that simply means where you have devices like this that are tightly coupled to software applications, could be Ignition, could be other software applications, but generally they're very tightly coupled. And this rigid architecture does impose some limits on how our automation systems can grow. And it also limits our abilities to start taking some advantages of some of the massive resources that are available to us in the cloud. So these new cloud smart architectures leverage something called publish-subscribe data methods. Okay? So this runs counter to what you're probably very used to in terms of command-response models where the software asks the device for something and it responds.

04:44
Benson Hougland: But modern IIoT and Industry 4.0 architectures employ this notion of edge data producers pushing their data up into infrastructure for anybody to access. So a couple other things real quick, simple manageable access by any authorized user. Of course, the ability to scale your applications up in the cloud as those systems grow, whether it's compute or users. And there was just a great session put on by Brad Fischer about Ignition Cloud Edition. Check out that recording. He covers some more of the whys. System-wide resiliency, we'll talk about store and forward at the edge. And local control is always, always available. And one thing I wanna make a point of, this isn't a session about Ignition Edge or the edge device to the cloud, but note that this same exact architecture works on a standard gateway on-prem just exactly the same way. So, keep that in mind as we move forward.
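
The publish-subscribe model being contrasted with command-response can be shown in miniature. The `Broker` class and topic names here are purely illustrative, a toy stand-in for a real MQTT broker:

```python
class Broker:
    """Toy publish-subscribe hub: an edge device pushes a value once, and
    every authorized consumer receives it, instead of each consumer
    polling the device command-response style."""
    def __init__(self):
        self.subscribers = {}

    def subscribe(self, topic, callback):
        self.subscribers.setdefault(topic, []).append(callback)

    def publish(self, topic, value):
        for cb in self.subscribers.get(topic, []):
            cb(value)

broker = Broker()
scada, historian = [], []
broker.subscribe("cstore1/cooler/temp", scada.append)
broker.subscribe("cstore1/cooler/temp", historian.append)
broker.publish("cstore1/cooler/temp", 38.5)  # one push, both consumers get it
```

The key property: the edge device never knows or cares how many consumers exist, which is what lets these architectures scale without re-engineering the device side.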

05:45
Benson Hougland: So we've seen this graphic over the past year or so. We saw it again in the Keynote and of course, this was first presented last year when they introduced Ignition Cloud Edition. What we're gonna do is we're gonna actually do a little circuitous route around the gateway, not that the gateway's not important, the standard gateway, we're just showing you another architecture that could work. Again, this... What I'm gonna show you works well with the standard gateway, as well. So what do we got in front of us? Some hardware. Yeah, I'm a hardware vendor and I'm at a software show, but I love this stuff. I take it everywhere I go. So what do we got here? First, we're gonna start with the brownfield PLC, an AB CompactLogix. I think it's a pretty old one. Found it somewhere. Put a power supply on it, put a stack light on it.

06:30
Benson Hougland: I've got here a groov EPIC. An EPIC is an Edge Programmable Industrial Controller. So I've got that with some I/O. This will represent my greenfield application, in which case it's going to be simulating a convenience store. But I actually have real I/O, all that connected to it as well. Then I'll be using Ignition Edge running on that platform and Ignition Cloud Edition where I'm gonna push this data up to. Now, this session doesn't cover bringing up or tilting up Ignition Cloud Edition. There's a lot of sessions here that cover that. I'm just gonna cover the Edge portion and I will pop up into Ignition Cloud Edition to get this whole thing going. We're gonna talk about technologies like OPC, OPC UA. We're gonna talk about MQTT, Sparkplug B, and of course we're also gonna talk about VPN.

07:18
Benson Hougland: That's kind of a bonus. Alright, couple things on the network architecture. Anybody in here have a little idea about how IT networks work, IP addresses and so on? Maybe so. Here's my OT network. It's represented by this side of the table. So that's my OT network, traditionally a fixed IP network, non-routable IP address space. And here you can see the PLC has an IP address, a fixed IP address, 'cause that PLC doesn't have DHCP, it has no security, it has nothing. Well, it's a good PLC, but other than that, we're gonna connect that up to the EPIC on its own network segment. So we're gonna configure Ethernet zero to be on that OT network. And on the other network interface, which is on the EPIC, we'll use that to connect to northbound type of networks. That could be the IT network, it could be a cellular router, which indeed I have here.

08:12
Benson Hougland: So I've got a cellular router that represents my northbound network. It could be any network as long as it has a valid gateway to where? Ideally the Internet at some point, could go through your corporate network, through all its firewalls. But as long as I can get out to the cloud, I'm good to go. So that's kind of the architecture we'll be looking at. And then we're gonna work with all the software that's pre-installed, ready to go on the EPIC to get this going. Okay, here we go. Build time. Alright, so as I said, fasten your seat belts. We got a lot to unpack here. I'm gonna move fast. So, another reminder, it is recorded, so don't feel like you gotta remember everything. First step first. Let's configure the EPIC. We're gonna start at the edge. We've got a processor, we've got a power supply, we've got various I/O modules that represent signals we need within the convenience store.

09:01
Benson Hougland: We've got multiple network interfaces. The device is a web server, so all of my configurations, almost all of them, are done through a web browser and we will be configuring Ignition Edge IIoT. First thing first, let's assemble it. I just pulled it outta the box. This entire session is actually the steps I took for setting this up and all of the CStores that are in the Data Dash. So I've got my chassis power supply I/O modules. I put on the processor, I connect my two network connections, remember, the OT network to the PLC and the other network to my upstream valid gateway network. And I apply power. Once I've done so, I go to my favorite browser, I look there, I got my browser. I'm gonna open it up and I'm going to enter the default host name. The default host name is printed on the inside label of the EPIC. Terrific.

09:52
Benson Hougland: So I enter that in and the first thing it does is say, create administrator account. Let me be clear, there are no default accounts on this system. There are no back doors. If you lose your credentials, you have to reset to factory default. We can't help you. This device is meant to be secure, zero trust out of the box. Okay? So remember your credentials. Once I've done that, you'll come up with a screen that looks just like this that says, great, you're ready to go. Let's start configuring things. So we're gonna jump into groov Manage, groov Manage is the application to manage the device. It is web-based, it is a web server. So I'm just using my browser. It is responsive. I could be doing this from my phone. It manages all of EPIC's features, of which I'm gonna go through four of those right there.

10:39
Benson Hougland: And it also manages all the pre-installed applications, which of course you can see, well I got a laser pointer, Ignition right there. So we're gonna get to that soon enough. But first we gotta get this thing going. So I'm gonna jump up to users. I'm gonna click on there and I can see there's my administrator account I created earlier. I can see what my permissions are, my API key if I want to use that, but I can also easily create new users on the device. So these new users, I just click add, I give it a name, I can give it some permissions. Maybe they just don't get to see the local operator interface, whatever. That's fine. But I've also got LDAP in here. How cool is that? So now I can connect to an LDAP server and use Active Directory to manage the users on this edge device. It's meant to be enterprise-ready in that regard. So you just work with the IT group, put the LDAP settings in, you're good to go.

11:31
Benson Hougland: Okay, moving on to networking. I'll click on the network tab, I'll click on status. There I can see that I have my two Ethernet connections connected in, but they're not configured yet. I'm gonna go ahead and configure those coming up right now where I click on configure. I go into the dialogue box and of course the first thing I wanna do is change that default host name to something I can remember. I can't remember that. So I'm gonna call it EPIC-LC2-Showdemo. Then on Ethernet zero, remember that's the static network, so I need to give it a static IP address on the same network as that PLC. Okay? So I entered that in, I put in a subnet mask.

12:09
Benson Hougland: Now Ethernet one, the upstream network, I'm just using DHCP services so I'm not gonna change anything there. I'll just let myself get an IP address, DHCP and for... Because it's fun, I'm gonna go ahead and put in VPN. I go to my VPN administrator, they gimme an OVPN configuration file. I plug it in and I click save and boom. And just a few minutes here or a few seconds actually, I'll see that the network is restarting with all my new settings and that guy should go to connected in just a moment. There it goes. I've even got an IP address now on the tunneled interface. So now I have, if you're counting, I've got three interfaces now. I'm gonna go back and just look at the status and I'll see I'm connected on a static network, DHCP network, and VPN. Networking is done, dude, let's move on.
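
One detail worth double-checking when you set up the static side is that the EPIC's Ethernet zero address and the PLC actually land on the same subnet, or the driver will never reach it. The specific 172.16.10.x addresses below are hypothetical stand-ins for the demo's OT segment; Python's standard `ipaddress` module makes the check trivial:

```python
import ipaddress

# Hypothetical stand-ins for the OT segment in the demo: the PLC's fixed
# IP and the EPIC's static Ethernet zero IP must share a subnet.
ot_net = ipaddress.ip_network("172.16.10.0/24")
plc = ipaddress.ip_address("172.16.10.20")
epic_eth0 = ipaddress.ip_address("172.16.10.1")

for host in (plc, epic_eth0):
    assert host in ot_net, f"{host} is not on {ot_net}"
print("PLC and EPIC Ethernet zero share the OT subnet", ot_net)
```

The same membership test is handy when you're juggling three interfaces, static OT, DHCP upstream, and the VPN tunnel, and need to confirm which network a given address actually belongs to.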

13:01
Benson Hougland: System time. This device does connect to NTP servers so we can keep the time updated. So it's easy to do. You go in there and you set your zone. Again, this is out of the box, so it's set to universal time zone. I'm gonna set it for my region and locality. So we're gonna go to Los Angeles here, there we go. And I click set zone, boom, I'm done. I can also change my time server. So I want to go to an on-prem time server, I can go, whatever you want. So in this case, I am using standard time servers. System time's done. Let's move on.

13:33
Benson Hougland: Certificates. This is a secure device and if you've ever worked with certificates before, you know they're a pain in the butt. But what we're gonna do is I've actually, did I go one ahead? I may have. There we go. I'll click on the web server certificate button. I'm gonna go into the certificate and I'll see that it's self-signed, in other words that's the certificate that shipped with the product, but it's tied to the old host name. So I wanna update that cert. So here I go, I click on it, I create a certificate, the certificate's being generated on the EPIC. And then once I've done, again, this is just standard forms you fill out to create a certificate, nothing too different than what you're used to except it's all form-based. There are no SSL tools you need to use, no command line, just click and go.

14:18
Benson Hougland: Now I will go actually to the next thing where I'm going to download all those certificates for safekeeping, but I can also download a CSR. What's a CSR? It's a Certificate Signing Request. Thank you. Certificate... Need to get you a Tootsie Roll or something. Certificate signing request. I take that file and I send it to my IT administrator. He signs the certificate, gives it back to me, I put it back in, upload it, and I'm good to go. Again, no open SSL tools, nothing like that. Just go ahead and put the new cert in, the new signed cert from your IT department and you're good to go. What's nice about that is once it's done, it will reload and then I will get this really nice little browser lock padlock. So if you're ever doing banking or anything like that on the Internet, you wanna see that to know that you have an encrypted authenticated connection. So there we go, we're all done. That is the commissioning process for the EPIC. We're ready to move on.

15:17
Benson Hougland: The next thing I'm gonna do is configure the controller that, you know, it is a PLC too, so it does all kinds of stuff. But first we're gonna do the controller. So I'm gonna confirm the control engine that I want to use. We give you choices, you can use CODESYS, you can use our own PAC Control. We're gonna actually go into PAC Control and you can see that I have a control engine running. However, there's no application in there yet. So we're gonna take care of that next. And to do that, remember I'm on the IT network when I did all this work, the upstream DHCP network. So if I try to go in here and download a control program, I won't be able to because the firewall port is blocked. So I'm gonna go in to PAC Controller and I'm gonna open the firewall port for Ethernet one, the network that my PC is on. And that's super simple to do. I click there, I have administrator access, so I can do this and I confirm that ETH1 is indeed open and now I can download my control program.

16:14
Benson Hougland: I'm only doing this on a temporary basis and I could have done this from the OT network. But anyway, I'm all set there. I'm gonna go into my PAC Control IDE, PAC Control program, and I'm simply gonna download the strategy into the device. And of course it says its memory's cleared. Yep, this is out of the box, we'll put it in there, click run, boom. I now have a control program running in there. This is not a programming class, so I'm not gonna explain the control program, but I have it done. It's in there now. So I come back to my groov Manage screen and I can see I have the six running charts, good to go.

16:51
Benson Hougland: Next, OPC. Well, we have all those controller tags, how do I expose them to other applications? OPC, I'll go ahead and add the OPC server. But what's unique here is I'm setting up the OPC UA server here for Ignition Edge to get to the data, 'cause I want Ignition Edge to get all these tags. So I'm just using anonymous access, allow reads/writes, boom. Done. Oh, one thing I do wanna mention on that one, we do wanna make sure we're taking note of that discovery endpoint. We'll use that in Ignition Edge. Next, what is the OPC server gonna serve up? Well, it's gonna serve up all my control strategy tags. It's also gonna serve up all of my I/O tags. So I just give it a name and I confirm that I have OPC UA server. You probably see MQTT there too. Ignore that. That's our native MQTT. We're gonna do MQTT in Ignition Edge.

17:42
Benson Hougland: So I do the same thing for the I/O system. I can get access just to the I/O and not to the control program if I want. I go ahead and put that in there. And finally, I'm running a PID loop. It's on this little guy right here and I wanna get that data as public access as well. So there we go. Last step, public access, read and writable. And we are done. Now we have a control program running in there. OPC server set up, EPIC is set up. Let's go in into what you probably all have been waiting for. Ignition Edge. Let's do it. Okay, so starting Ignition Edge, it's pretty difficult. You gotta click a button and you gotta choose the platform you want and then you enable it. Everybody got caught up there. Any problems?

18:28
Benson Hougland: Pretty simple. Once it gets going, same thing. We start, just like if you downloaded Ignition Edge to your computer, you're gonna actually go through the end user license agreement. You click next, you're gonna create a username and password, and you click next. And then it's gonna ask you if you want to start the gateway. Also, check your ports. We'll open up all those firewall ports for you when you spin up Ignition. The gateway is starting. Boom, we're ready to roll.

18:57
Benson Hougland: We are now in Ignition Edge, that's how easy it is. So, now that we're in Ignition Edge, we've got the gateway started, the next thing is to install the MQTT module. But wait a minute, Benson, you said everything was pre-installed. Aha, it is, however, the Ignition Edge MQTT Transmission Module, which I need is developed by Cirrus Link Solutions, and that means it's simply quarantined. So I go down to the quarantine area and I click install, accept the certificate or actually the end user license agreement for that. I come down, I accept the terms, I accept the certificate, get the module installed and I'm done. Still haven't downloaded a single thing from anywhere, it's all built in. Okay, now that I've made that step, we're gonna take a quick look at the status page on Ignition Edge. There it is. You can see my host name up there in the top square, and look, Ignition Edge automatically has visibility on all those configured NICs that I configured back on the networking page, which is a good thing because I need to get to that PLC on that 172 network. So let's do it now. We're gonna go to Device Connections, same stuff we see in Ignition, create a new device, we're gonna click the proper driver. We'll go ahead and do that, click next, again, pretty difficult part here. Gotta give it a name. AB-PLC, that's gonna come up later.

20:20
Benson Hougland: The IP address of the PLC, next, create a new device, done, that's it. The Allen-Bradley Driver in Ignition is pretty slick in that regard. So that's all set. Next, I'm gonna create the OPC connection between Ignition Edge and the OPC UA server running on EPIC where all my control strategy tags are. Go in here, put in that endpoint I mentioned earlier, it's all local host 'cause everything is on the same device. So I put that in and I just start going through the motions. Click next, click next, accept the certificate, yes, check my settings, looking good, click finish. Give it a name because I'm gonna reference this name later in a UDT, more on that in a moment. We're just gonna call it the CStore OPC UA Server because it represents a convenience store and I'm connected. Everybody still caught up? We're good?

21:15
Benson Hougland: Okay, let's go on. Now we're gonna start doing MQTT. First thing, we gotta set up the memory store and this is important. I come in here, I'm just gonna edit the existing memory store, I give it a name, I give everything a name, drives my colleagues crazy. Accept the defaults, done, dude. All set, I've got my memory store and this is important, because if my connection to Ignition Cloud Edition, or any upstream broker, fails, I will start storing that data in that memory store and on the resumption of that connection, I'll start forwarding that data up. We're gonna actually check that out later. Next is the server sets, super simple configuration here, again, give it a name, and I'll just go ahead and edit the default server set. Again, give it a name, description if I like, and finally the primary host ID. This is important in MQTT, but it's not important to this session. If you wanna know more about primary host, I'm happy to tell you, but I'm gonna put that in, and that host is my Ignition Cloud Edition, okay? So that's Engine, MQTT Engine, running up on the cloud. Boom, done. Move on. Next, the transmitter, this is where the heavy lifting occurs, okay? So here I'm gonna go in, give it a name, getting tired of saying that.
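The store-and-forward behavior described here (buffer tag data locally while the upstream broker is unreachable, then forward it on reconnect) can be sketched in a few lines of Python. This is only an illustration of the concept, not the Cirrus Link implementation; the `MemoryStore` class and its method names are hypothetical.

```python
from collections import deque

class MemoryStore:
    """Illustrative store-and-forward buffer: holds tag changes while the
    upstream MQTT connection is down and flushes them on reconnect."""

    def __init__(self, publish, maxlen=10_000_000):  # Edge buffers up to ~10M tag values
        self.publish = publish          # callable that sends one record upstream
        self.buffer = deque(maxlen=maxlen)
        self.connected = False

    def record(self, topic, value):
        if self.connected:
            self.publish(topic, value)  # online: send immediately
        else:
            self.buffer.append((topic, value))  # offline: store locally

    def on_reconnect(self):
        self.connected = True
        while self.buffer:              # forward buffered data in original order
            self.publish(*self.buffer.popleft())

    def on_disconnect(self):
        self.connected = False
```

The key point of the demo is exactly this flow: data recorded while disconnected is not lost, it is replayed upstream once the connection resumes.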

22:32
Benson Hougland: The tag provider, always Edge on Edge, the tag path, where do I wanna send my... Where do I want all my tags to live? The server set I just created and I'm going to use UDTs, so I check that. The memory store I just configured, click that, and then this is where the rubber meets the road, this is your MQTT namespace. So I put in group ID, ICC session, remember that, it's gonna come up again later. The Edge Node is Opto 22-Harris Center and the store name is EPIC-CStore-520. That is my MQTT namespace already set up. That's pretty much it, but I haven't connected to the server yet and if I need to connect to the server, guess what I need? Because I said I'm gonna send data up there securely, I need some credentials, I need a way to connect to that server. So I'm gonna switch over to Ignition Cloud Edition in the designer gateway, you'll see that up at the top, it's Ignition Cloud Edition. I'm gonna go into config and the beauty of Ignition Cloud Edition, it includes all the modules you need, including MQTT Distributor, there it is. What is Distributor? It's an MQTT broker built right in. So I go in there, I'm gonna create a new user. Now I'm calling it ICC Session, on second thought I probably shouldn't have, but we're just gonna call the user ICC Session, give it a password, and give it the rights that I can read and write to that broker in the cloud sitting in my Ignition Cloud Edition and I'm done.
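The group ID, edge node, and device name entered here map directly onto the Sparkplug B topic namespace that MQTT Transmission publishes under. As a rough sketch (the helper function is hypothetical; `spBv1.0` is the standard Sparkplug B namespace prefix, and `DDATA` is the device-data message type):

```python
def sparkplug_topic(group_id, message_type, edge_node, device_id=None):
    """Build a Sparkplug B topic: spBv1.0/<group>/<type>/<edge node>[/<device>]."""
    parts = ["spBv1.0", group_id, message_type, edge_node]
    if device_id:
        parts.append(device_id)  # device-level messages add a fifth segment
    return "/".join(parts)

# The namespace configured in this session:
topic = sparkplug_topic("ICC Session", "DDATA", "Opto 22-Harris Center", "EPIC-CStore-520")
```

Every publish from this transmitter lands under that group/edge-node/device hierarchy, which is why MQTT Engine in the cloud can rebuild the same folder structure automatically.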

24:05
Benson Hougland: So now I'm gonna go back to Ignition Edge and enter that data in. So let's go back to Ignition Edge, go to my servers tab and here is where we actually make the connection. I'm gonna delete the existing sample broker server in there and create a new one. Give it a name, give it a URL, that is the URL for my Ignition Cloud instance :8883, a secure and encrypted port. I put in the server set that I've already configured and I put in the brand new credentials that I just created. We good? Create new server, once I do so, now Ignition Edge is reaching out through this cellular modem up to the cloud and establishing a connection, of which I see I'm indeed connected. We're good. Now what tags do we wanna set up in there? That comes next, that's of course designer. So the beauty of designer, it's already built into the device. I just click, it downloads it from the EPIC onto your PC so you can install designer. First, you install the launcher, you put in your manual configurations, point to the host name of my EPIC, accept the now valid certificate that I have in there, click add designer, open the designer, and log in. Remember, we created a username and password for Ignition, I put that in. Voila, I'm now in Ignition designer, which I'm sure most of you who use Ignition are very familiar with this interface.

25:39
Benson Hougland: So there it is. Now you see I changed the panes because all the work I'm gonna do in Ignition Edge for this session is all in the tag browser. Yes, you can build Perspective screens, yes, you can do all kinds of other stuff, but we're just gonna focus on getting the data to the cloud. First thing I'm gonna do is delete the default folder, get rid of that, it's gone. Remember ICE tags, I put in the transmitter settings, that folder's there. First things first, let's import UDTs. Now I could have done all this just with tags, but I thought it'd be kind of fun to have a UDT for my Allen-Bradley PLC and a UDT for all my CStore strategy tags. So I simply import those UDTs I already created. The good news? Those UDTs and all that work I did, I've already put up on the [Ignition] Exchange 'cause it was required for the Data Dash and I got the socks to prove it.

26:31
Benson Hougland: So there you go. All my UDT definitions are now in here, all folderized, everything is ready to go. Now I need to instantiate those, instantiate into the tags folder, ICE tags. Go here, new tag, new tag from instance, there it is, AB PLC. Come in and fill out my parameters, give it a name. Two parameters, we'll click on there, that device connection name, AB-PLC, I configured earlier. Put that in there, click okay, and let's see if we have live data. Well, of course we do, right? There it is, AB ControlLogix, which I named it, there's my parameters, there's my Allen-Bradley data in my designer. Let's do the EPIC CStore, that's about the same, we just go in, we're gonna click new tag from instance, choose the UDT, pull it in, give it a name, go to parameters. My parameters for this UDT are a little bit more, I've got different... My OPC server name, my MMP name, all the stuff that I need to make that connection work is all built in there, so you can use this UDT anywhere you like. So I plug all that in, click apply, take a look at my tags, boom, there they are, all in nice folders. So I've got my car wash, my freezer system, my fuel system, everything is in there, all ready to go. Okay, all my tags are in my designer. Now what? We need to get them up to the cloud. Well, that's gonna be a lot of work, so let's stand by.
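What makes one UDT definition reusable across devices is parameter substitution: Ignition replaces `{ParameterName}` references inside the definition (for example, in OPC item paths) with each instance's parameter values. A simplified, hypothetical sketch of that mechanism in Python (the tag names and paths are made up for illustration):

```python
import re

def instantiate_udt(definition, params):
    """Expand {Param} placeholders in a UDT definition's item paths
    with the values supplied for one instance."""
    def expand(path):
        return re.sub(r"\{(\w+)\}", lambda m: str(params[m.group(1)]), path)
    return {tag: expand(path) for tag, path in definition.items()}

# Hypothetical AB PLC UDT: one definition, many instances.
ab_plc_udt = {
    "Waveform": "[{DeviceName}]Waveform",
    "StackLight": "[{DeviceName}]StackLight_Red",
}
instance = instantiate_udt(ab_plc_udt, {"DeviceName": "AB-PLC"})
```

Creating a second instance just means supplying a different `DeviceName`, which is why scaling to a dozen or a hundred CStores repeats the same steps with different parameters.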

28:07
Benson Hougland: We're gonna take this slowly. First, I'm gonna open up Ignition designer up on the cloud, but that's just to show you the data tags coming in. I don't have ICC Session in my Edge nodes yet, I've got some other projects in there. So I open up Edge designer, overlaid it over Cloud Edition designer, and I'm gonna go back up to read/write, go to my MQTT Transmission folder, there it is. Come to Transmission control, and it's just one checkbox, click, refresh, hold on, there it is. ICC Session, all my tags are now in the cloud.

28:47
Benson Hougland: Thank you, thank you. That is pretty damn cool, right? I didn't do anything else up in Cloud Edition except get it spun up and set up some credentials in that primary host. Now all my data's up there. Woohoo, we got data in the cloud, how cool is that? Well, let's do something with that. So I'm still in designer up on the cloud. I'm gonna go up into my standard template here, I'm just using the standard Flex Container template. I'm gonna make two containers, I'm gonna first in the top container do this the old-fashioned way. I'm gonna drag tags from my PLC folder, and I'm gonna drop it into the container. First one I'll do is a PLC waveform. Pretty simple, just kinda cool little gadget there, put that in. Second one is a stack light, that stack light, drop that in, I'm gonna give it a name, red stack light. Now that was the old-fashioned way. The new way is this way. For my CStore, I've got a Perspective template tied to that UDT, I drag the UDT on the canvas and boom, all my data's there. The entire template, all of the different tabs for all the car wash system, it's all in there, I'll switch over and just like that, I have a complete application for this particular EPIC, all built in with the tools that are available in Ignition, very, very cool. So when you start looking at a dozen CStores or hundreds of CStores, all the steps are the same.

30:19
Benson Hougland: Okay, so I can actually... It looks like I can actually control this thing. Who wants to see this live?

30:25
Audience Member 1: Yeah.

30:25
Benson Hougland: Live? Live? Okay.

30:26
Audience Member 1: Live.

30:27
Benson Hougland: Good, you guys are a great audience.

30:31
Benson Hougland: So I'm gonna actually click over first. That is the... That's what it looks like, all I did is go to dark mode, I added some other CStores in Germany, I've got Spain, I've got Australia, I've got them all over the world. But this is the one we're working with, EPIC CStore 520. And it's just a standard template, fully mobile-responsive. Let's take a look at it, I've already opened it up here. This is... Anybody want to guess the word I'm gonna use? Live. This is live. My PC is connected to the ICC WiFi network, it's not connected to this, this system is all going through my router. So what I'm gonna do, guys, is I'm... And gals, I'm gonna actually click on that browser right up there at the top. Oh, somebody's gotten ahead of me, somebody just turned on the stack light, I'm gonna go up and click on that red stack light, that means from this PC through the ICC WiFi network up to the cloud, I'm gonna send a command. This guy is connected to the cloud on a persistent, secure, authenticated connection, when I send that command, it's gonna send it back down to this guy, 'cause it's bi-directional. But let's hang on a second, you're going through the cloud and all that, it's gonna take forever. So I hope that I still have time, 'cause it'll take a while for this to work.

31:47
Benson Hougland: But that's okay, we're good. Okay, ready? Three, two, one. Huh? Did you see it? Let's do it again, let's turn it on. Three... I didn't even count that time. That's how fast it is because if you put these systems together and they don't operate at high performance, what's the point, right? It's gotta be secure, it's gotta be easy, but it has to be high performance and that's pretty... And I'm not suggesting you're gonna operate your AB PLC stack light from the cloud, that's totally your call, I just wanna show you that it can happen. Okay, so real quickly about the app, I've got HVAC here, this is my store temperature, this is my PID loop. I put a disturbance on my PID loop and we'll see the process variable go down, all this is being published up. And we'll start to see that come in, there it goes, I'm... I should have a shirt that says, no SIM tags, I love working with real data. So there you go, we've got all my tags coming in, I've got a bunch of other stuff, this is all available to you guys to see as well. My fuel system, my freezer. And while I'm on the freezer, I can actually trigger anomalies that go where? Snowflake. This system is connected to what you guys have been hearing about this conference, the Snowflake system. So that's pretty cool as well.

33:05
Benson Hougland: And you can see in Germany, there we are, weather in Germany right now. Oh, somebody else just started the car wash.

33:15
Benson Hougland: There's Las Vegas, there's San Diego, there's Boynton Beach, Florida, there's Madrid, Spain, there's Melbourne. All of this data was built exactly the way I just showed you. So pretty cool there. Alright, so I am getting close on time. Thanks for playing.

33:32
Benson Hougland: I do appreciate it. Alright, so we do have a URL for this and if you wanna play from your own phone, some of you already got started. There's the QR code, have a ball. And I love hearing the beeps, I don't think you're bothering me. Alright, a couple closing slides.

33:53
Benson Hougland: A couple of closing slides. I've got my OT network, I've got my IT network, I'm moving all the data and I've got my workstation. What's cool is because I set up VPN, I can access this system from anywhere in the world with a valid set of credentials, multi-factor authentication, and I can tunnel right in. What's more, I can use that to tunnel right to the Allen-Bradley PLC, more on that in another session. And finally, speaking of the VPN: this week, one of our good friends, Alex Marcy of Corso Systems, posted on LinkedIn that he was on an airplane, a 737 MAX 8, and he was indeed connected from airplane WiFi to his EPIC and to Ignition. I thought that was pretty cool, so I just threw that in this morning. Finally, if you're like, "Oh, cloud, this makes my head explode," I highly recommend the guys over at 4IR Solutions, these guys know cloud, they know it better than anybody. But what's more, they know these: part of their business is to put these on a plant floor, collect the data and get it up to PharmaStack or up to FactoryStack. So, huge shout out to these guys, see their session tomorrow at 2:45 on one of these stages. Finally, the question, what happens when it goes offline? When we lose a connection up to the cloud, no problem, we'll start storing data.

35:17
Benson Hougland: But more importantly, I still have local control, there's a built-in HMI in here, or you could put Perspective on here, I have complete control over the system while it's disconnected. When it reconnects, I'll then take all that stored data up to a week buffer or several million tags, can't remember how many.

35:35
Audience Member 2: 10 million.

35:36
Benson Hougland: Thank you, thank you. 10 million tags, and we'll send that back up too. So you're not gonna lose data by connecting the cloud, in fact, it's an arguably more secure way and a better buffering system than anything you could do before. How'd I do?

35:53
Benson Hougland: Alright.

35:55
Benson Hougland: Thanks.

35:57
Benson Hougland: Thank you.

35:58
Benson Hougland: Thank you very much, I appreciate that. So I'd like to open it up to some questions. Anybody have any, any at all? I'd love to hear them.

36:08
Bryson Prince: Up top there. Oh, sorry, there first.

36:10
Benson Hougland: Oh, it's the press, I feel like I'm in Ted Lasso.

36:14
Audience Member 3: Independent.

36:16
Benson Hougland: Yeah, The Independent, thank you.

36:18
Audience Member 4: Can you do this from like multiple devices or fleets of devices, if you do this, does it... You have to do this for each device or can you populate to multiple devices?

36:31
Benson Hougland: Yeah, each Edge device gets configured very similarly to this. This is just a simple example that we're using to illustrate this, but we have other customers, some of whom just got Firebrand Awards, that are using the same concept of an EPIC being deployed and based on the application, they connect to other devices, however many devices you need pulled in through here, modeled, and securely pumped to the cloud. If you're asking if we're doing like cloud deployments out to edge devices, no, we're not doing that yet. Stay tuned. Good question, though.

37:04
Bryson Prince: Up here.

37:05
Audience Member 5: Just... Excuse me. Just curious, if you wanted to use the native MQTT right from the groov EPIC to Ignition Cloud, is the MQTT payload configured in a way that if you were using the Distributor Module and MQTT Engine Module in cloud, would it recognize the tag structures, the folder structures, similarly to how MQTT Transmission Module allows for?

37:33
Benson Hougland: The answer is almost yes.

37:37
Benson Hougland: This is an Ignition conference, naturally, I'm gonna use Ignition Edge, but yes, the MQTT native client that's built into EPIC will publish all the data, will do store and forward. Everything I described except one thing: that is the UDTs. We have already pre-templatized the native client to send the data up; then you just put the UDTs in the cloud, easy enough to do, but in this case, I wanted to use UDTs at the Edge. So Ignition Edge, with its UDT capabilities and the ability to communicate to other systems with those built-in drivers, made Ignition Edge perfect for this type of application. But to answer your question, MQTT native in EPIC and in RIO supports everything I just showed with the exception of creating UDTs.

38:26
Audience Member 6: Yeah. Not Ignition related, but...

38:30
Benson Hougland: Okay.

38:30
Audience Member 6: Does the groov EPIC have IO-Link drivers?

38:35
Benson Hougland: We don't have IO-Link drivers today, we've been discussing that quite a bit, but that would be an IO-Link, essentially an IO-Link master. Our customers have been doing this, they're simply using an IO-Link gateway. In fact, we have a pretty large OEM that's doing just that. So good question, though, thanks. Most of our drivers are gonna be your standard stuff, Ethernet-based. Great, that's a good question, too. After this session, over on stage one, my dear friends Arlen, Pugal, and Travis are gonna talk about Snowflake. And when they do, they're gonna talk about an accelerator kit and that accelerator kit, guess what it includes? That guy. So stay tuned for that, definitely attend that session, they're gonna talk about Snowflake, about all this stuff, but the same concept that I just went through here. Another question?

39:26
Audience Member 7: Yes. So for the... In your example, you had one UDT that had all of your tags. Is there, I guess, more basic UDTs you could have that...

39:37
Benson Hougland: Oh, yeah.

39:38
Audience Member 7: I guess if you built as a new function block or what have you, it could just add that in rather than one giant UDT.

39:45
Benson Hougland: Yep, yeah, you're... Good catch. I thought, you know, I got all these tags, and it's all based on different things in a CStore, the car wash, the freezer, the fuel system. I was like, yep, I could... I actually started doing that with separate UDTs. I was like, well, hang on a second, I wanna be able to drag that UDT up in Cloud Edition right on the canvas and not build a bunch of pages and then figure out how it works. So I put it all in one UDT so that when I created the template, I could drop that on, and everything was all tabbed, everything was done. That's why I did it that way. But yes, you can do multiple UDTs, whatever you like, for sure. Let's cool that guy down again. Whoa.

40:29
Audience Member 8: I was wondering about the number of device connections you can have to the Opto 22. So if I'm not mistaken, Edge comes with two device connections right now, but you can add more?

40:39
Benson Hougland: Yes, you can.

40:40
Audience Member 8: What is the limitation, from a performance perspective, of adding 100 more CompactLogix to your Opto 22?

40:50
Benson Hougland: Yeah, you're gonna run into a point to where CPU and RAM start to play a role, just as it happens in Ignition server sizing, right? You wanna figure out how many tags you got, how many you can... So this guy is a Linux computer, it is a PLC, but it's a gateway, it's an HMI... It's everything, it's the smartphone of PLCs. And it is running a four-core ARM processor with four gigs of RAM, one and a half gigs of that RAM is allocated to Ignition Edge.

41:21
Audience Member 8: Will that start to affect the scan time on the PLC side?

41:25
Benson Hougland: Nope, that's got its own real-time thread.

41:27
Audience Member 8: Okay.

41:27
Benson Hougland: Yep, that guy is, he's guaranteed to do what he's supposed to do, and then Ignition Edge, Node-RED, groov View, your C application, your Python, whatever, takes the rest of the threads. It is a multi-threaded application, so we can use all four cores.

41:42
Audience Member 8: Thank you.

41:42
Benson Hougland: You're welcome. Keep them coming, keep them coming.

41:47
Audience Member 9: So as an Internet-connected device, I assume there's semi-frequent security updates.

41:53
Benson Hougland: Thank you.

41:53
Audience Member 9: What's that process like, and what's downtime and PLC impacts?

41:58
Benson Hougland: Yes, indeed, and that's super important. We do frequent updates for our firmware to address anything that might have happened in Linux security things, updating any of the other software on that. It is a monolithic firmware update, so you're not having to figure out, well, this piece of software has this firmware, none of that. One big firmware download addresses all the security updates and they're all in there. And since you brought up security, one thing to know, I think I can do this. I'll pay anybody in this room a million bucks if you can crack that PLC right now. Huh, a million bucks, it's all...

42:37
Benson Hougland: I don't have it on me, so, yeah.

42:41
Benson Hougland: No, and the point is this: we designed these systems so that all this connection to the cloud is 100% outbound. There are no firewall ports that need to be opened either at the corporate level, at this device level, level two, level three, or DMZ, it doesn't matter. It's all outbound communications, persisted, encrypted, authenticated, and we keep that persisted so we can receive traffic back if we need to, but everything is meant to make this thing secure. This guy is not secure, if I get on that 172 network, I get to go crazy, but when it's behind that guy, there's no chance to get to this guy unless I explicitly configure that, and again, that's a session for another day, we've done some really cool stuff with being able to do remote access to unsecure PLCs on the other side. I can't wait for next year to do that one.

43:34
Bryson Prince: This will be the last one, sorry everyone.

43:36
Audience Member 10: What version of Ignition do you put on the groovs and as I buy them throughout the year, are they gonna come in with different versions?

43:44
Benson Hougland: Exactly.

43:44
Audience Member 10: Or do you maintain that?

43:45
Benson Hougland: And that's a good point, because we actually take the Ignition Edge that is available on the website. So all we do is take that one, put it into our firmware wrapper, so as those new Ignition Edge editions come out, we update the firmware, then you get the new edition. We have had some people go in through shell access, which this does allow as an option, to update their Ignition instance themselves. We don't necessarily recommend that, but it's possible. Otherwise, the reason we do it that way is so we can test it, we can put it through our whole QA suite and test everything is working properly and then we release it. So we will tend to lag a bit, but yeah, that's exactly how we do it.

44:28
Bryson Prince: Can we thank Benson one more time?

44:29
Benson Hougland: Oh, thank you guys.

44:31
Benson Hougland: Thank you guys. Thank you very much, I appreciate it.

44:35
Benson Hougland: We do have a booth, thank you. We do have a booth outside, right at the entrance of stage one, we've got a bunch of engineers here to answer more of your questions. Thank you so much, have a great conference.


Speakers

Benson Hougland

Vice President of Product Strategy

Opto 22

ICC Year
2023
Separating Design From Development - Using Design Tools with Ignition Emily Batiste Tue, 11/14/2023 - 14:34

Building screens in Ignition is a breeze, but did you know you can design screens even faster by mocking them up using a design tool? Join us for this session as we talk about the benefits of moving the design process outside of a development platform. We'll cover topics such as design vs. development, UI vs. UX, benefits of using design tools, and an introduction to the design tool Figma.

Transcript:

00:09
Rob Lapkass: Alright, welcome back from lunch, everyone. Let's get started on this afternoon session, shall we? My name is Rob Lapkass. I'm a Training Content Creator here at Inductive Automation, working on Inductive University. I'd like to welcome you to today's session, "Separating Design From Development - Using Design Tools With Ignition." I'll be your moderator for today. To start things off, I'd like to introduce our speaker today. Doug Yerger is a Principal Engineer at Grantek Systems Integration. Doug's 30-plus years of experience includes the architecting, design, implementation, commissioning, and support of PLC control systems, robotic applications, Vision applications, database applications, MES implementations, warehouse management systems, SCADA systems, and HMIs. Doug serves as a leader within Grantek, providing governance, technical direction, and facilitating knowledge propagation. Please join me in giving a warm ICC welcome to our next presenter, Doug Yerger.

01:27
Doug Yerger: Thanks, Rob. As Rob mentioned, my name's Doug Yerger. I'm with Grantek. And before we get started, I just wanna cover a couple terms we're gonna go over today: design and development. But what actually are those terms referring to? Design is the look and feel of your user interface. In the web industry, they usually call these UI and UX, and that's the user interface and the user experience. So these are gonna cover things like your theming, your screen layouts, navigation, also including things such as what's going on each screen, what's the function of each screen, what type of numeric indicator you're gonna use. You want analog gauges? Do you just want a numeric display, sparklines? Anything like that. But also that user experience is all your user workflows. So, what is the purpose of each screen? What are you trying to do and what is the user trying to accomplish on each screen?

02:37
Doug Yerger: And development is building the application itself, so constructing the screens, tag creation, device connections, scripting, setting up databases. All of that is the development. So that's the building of the application, and the designing is actually creating the ideas of what you're gonna want to do in there. Some of you might be asking, "Well, why are we going to split them?" We had similar thoughts at Grantek, but at some point, we had had enough pain points that we decided we really needed to split them. I'm gonna talk a little bit about Grantek's journey, and you can see how well it matches up with yours. Prior to adopting Figma, which we chose as our design tool, we did our designing, excuse me, within our development environments like Ignition.

03:24
Doug Yerger: We found this led to a number of inefficiencies, headaches, overruns, and missed opportunities. One of the first things we identified was rework that we had to do from undocumented or late requests. In some industries, especially those that work without detailed functional requirement specs, or FRS, or design specifications, or DS, we would reach the 60%, 90% review cycle, and someone would bring up a major item about the user workflows that was not documented anywhere and hadn't been talked about. This often required major rework of the screens, often breaking bindings. I mean, we've all moved things from one container to another in Perspective and had to rework them all. So that breaks those bindings, and there's a lot of rework from those.

04:14
Doug Yerger: And of course, the project manager's inevitable discussion on whether we need a change order or not for those changes. One project in particular comes to mind for this. We had a customer that was migrating a process. They had a very manual process. It had been a product development. They had developed it, but it was very manual, and basically, the initial project was migrating an Access database that was tracking their product into SQL Server with a frontend Ignition project. Well, we were very careful to make sure everyone was in the meetings by asking the customer who should be in the meetings, and the customer even canceled some meetings just because people weren't available. So we thought we were doing all that due diligence. We went through there, the project grew through those discussions, adding in things like recording test measurements, product validation, and even the shipment processing and interfacing to their ASRS systems.

05:18
Doug Yerger: Well, we're going through all that, all those... Quite a bit of rework during those, but they were all covered by change orders because they changed the function of what was going on, and grew it. We get to the 90% review, a pre-SAT review, and they brought in someone from their quality group that was gonna be responsible for setting up and qualifying each production run. Well, this person went through and said, "Well, these are the steps we have to follow to qualify this." And it meant they had to put things on three different screens, and it was all over the place in our application because those were the user workflows we had already discussed and had worked toward. It made for a lot of late work in our process. So I put this at the top of the list because of that. Next point is, if you're doing GAMP projects, and I'm sure you've all experienced this, part of that is doing your design specs.

06:21
Doug Yerger: Well, when you're doing your design spec, you often need screenshots of what things are gonna look like, so you can talk to those points. Usually, our engineers, sometimes they use Paint, sometimes they use other applications, but 90% of the time, they would just go into the development environment and mock up the screens. And that always means creating dummy tags and things like that to keep overlays, wireframes, or anything else from showing, so you can get a nice, clean screenshot. This led to two potential issues. First, we have to rework everything now because they're on dummy tags and not production tags, because those are all defined in the DS and have to be approved and worked through. And second, technically, it's violating the GAMP process 'cause you're not supposed to be starting development until the design's approved. One other benefit we've experienced is that since we're doing those designs in a very nimble editor, when the DS does go through revisions and there are changes, it's very quick to update: it only takes a matter of a few minutes versus going into the development environment, moving things around, and getting things working again.

07:41
Doug Yerger: Another pain point is when customers acknowledge they still need to work through all of their workflows and basically have a discussion saying, "Well, what are we gonna do? We don't really know exactly what we want the user to do in here. We're gonna develop those as we're doing the project." Well, since you don't necessarily know what needs to be on each screen, how do you start developing? A good example of this was an internal solution we had. We were migrating an IIS web-based solution to Ignition Perspective. One of the things we identified in that IIS solution was that the user workflows often started at the right-hand tab, working your way across to the left-hand side to put in new products. The workflows weren't very intuitive. You had to know what you were doing to even use the product whatsoever. Well, we said we wanna fix that, because the way it was designed, the way you had to use it, you had to build from the bottom up for all your products.

08:45
Doug Yerger: Instead, the customers always wanted to build from the top down. They have a finished queue, which is at the top of that architecture, and they wanted to always build down, but our system was forcing them to go the other way. So we said, "Okay, let's go ahead and design a whole new user workflow." Well, the way we went through that is we started... In our design tool, we just literally sat down and typed out what these workflows were gonna be, just typing notes as we went, right in our design tool. As we worked through those, we started coming up with a bit of a theme, saying, "Well, what do we think about this idea of structuring them this way or that way?" And there were several slides like this, of different ideas of what to do. That grew into a more formal one as we went, and then the final design of what the end project looks like. This project is still in our design tool. This is not in Perspective. And just a quick side note: if you can actually read the chart up here, it's actually showing Oreos. We just chose that 'cause everyone knows Oreos and likes Oreos, so we use that as our demo product as we're going through this.

10:12
Doug Yerger: The final pain point we have is true Agile projects, where basically there's not a final goal set, 'cause the goal is whatever you're targeting for each sprint. At Grantek, we've often done what I call hybrid Agile projects. And what I mean by that is, there is actually a functional spec saying what our goal is, but we're using Agile to define our user interface and user workflows to achieve that functional goal. It still falls under this category for me because there's still a lot of changes happening. Each sprint, you're bringing in new features, which cause a lot of rework and things like that. Things to point out on here: in the middle of the screen, you can see what's called the queue. It's the water queue. That started as an actual vertical list in our original designs. The issue there is, one, it looked out of place on the screen, but it also grew and shrank in size, making things move around the screen, and it was kind of just an eyesore on the screen.

11:12
Doug Yerger: So we ended up deciding on this horizontal list of the items in the queue. All those cards are clickable, which brings up the queue management screen separately to do any reordering or rescheduling that you wanted to do. Another item would be that point-of-use section on the bottom. Those are the point-of-use valves that are in the area this screen would be accessing. And for each of those cards, when you click on them, this lower-right section is actually a slide-out that slides out or retracts when you click on the card, letting you put in a manual control and giving a little more status than the card itself. So those were our pain points, and we acknowledged that these were getting very costly and causing a lot of extra time on projects and things like that. So we said, "Well, what can we do to fix this?"

12:09
Doug Yerger: Well, Perspective is effectively developing a webpage. So we said, "Well, what do they do in the web world?" We looked at that, and that's where we got the idea of using a design tool to actually help. We evaluated several design tools, narrowed it down to two, then did a thorough cost analysis and workflow analysis with our salespeople and engineers, ran a little internal survey on which one we thought would be better, and we ended up settling on Figma as our design tool. Figma is a web-based, fully collaborative design tool, and whether working in a browser or their stand-alone application, users are provided the exact same working environment.

12:55
Doug Yerger: On the left is the desktop application, which I keep in dark mode. On the right is the exact same design, open in a Chrome browser. As you can see, everything shown is identical between the two, including the two sidebars. The left-hand sidebar is very similar to the project browser in the Ignition Designer. It's gonna show your object hierarchy. And it's context-sensitive on most pages, so it'll show file structures if you're up at the file level versus inside one of the projects. The right-hand pane is like your properties panel. It's going to change very drastically depending on what type of object you've selected. Currently, what you're seeing there is for that one grouping that's selected. As mentioned, Figma is fully collaborative, meaning users work on the exact same design file concurrently. You can actually see each other's cursors and changes in real time.

14:03
Doug Yerger: In this screenshot, you can see at the top there's my avatar picture, as well as a "D" next to it. That's telling me Dylan's in the same file that I'm in. So I know he's working in the file, and since he happens to be in the field of view I have in the file, I can see his cursor, and it has his name there. I don't know if you can notice it there; right here, it's showing his cursor. If this was live and not a screenshot, you would actually see that cursor moving around as he works. For demonstration or training, you can actually click on one of the avatars at the top, and your screen will lock to what they're seeing on their screen, so you can follow exactly what they're doing as they work through whatever they're demonstrating or showing. Figma allows commenting directly in the designs. So in the screenshot, you can see the highlighted row in that table has a little comment balloon on it. That's obviously saying there's a comment on that item. Up in the taskbar, you see a little red dot on that balloon icon. That's telling me there's an unread comment in this file. It is file-sensitive, so if I switch to another file, that dot will go away. So it's telling you where you're at on that.

15:30
Doug Yerger: If we click on that comment icon, the right-hand sidebar is gonna switch. If you notice, on the left there it's showing those properties, but it's gonna switch to the comments over here, and it's gonna show all the comments in the file in the right-hand bar. If you click on that comment or click on the bubble itself, it'll bring up the comment on the screen. A great feature of it: you can tag users in your organization. When you tag them, it will actually send them an email notifying them that there's been a comment in the file that needs their attention.

16:09
Doug Yerger: Another great feature on the commenting is you can share these designs with anyone inside your organization, but also anyone outside your organization. Sharing can be done at the team level, project level, file level, or object level. And you can share it for full editing access, so if you have an extended workforce, you can share it there. Or, if you're sending it to a customer, you can send it with read-only access. Read-only users still have the ability to comment directly on the file. It's a great way to share your designs with customers. No more taking screenshots, sending them over, and having them work through what's going on. The layout tools they have are very simple and align with standard web terminologies, so they align with Perspective very well. That makes the transition very simple. In this screenshot of the property panel that shows up on the right-hand toolbar, you can see, at the top, some alignment tools. You have the sizing, the rotation, corner radiuses, positional constraints as you come down, and then you're actually gonna have text and fonts, as well as the colors that are used within your selection.

17:29
Doug Yerger: Figma has a feature called Variants, which allows you to have one item that you create instances of. You can kind of think of it as a UDT. Here, you can see a very large palette of buttons that one of our customers uses. All of those buttons are the same button in Figma. You'd put an instance on the page, and changing anything on there is simply a property change within that variant. It makes reusability great, makes templatizing great, and gets you that consistency across all those applications that you're looking for. In addition, utilizing variables and styles allows you to create all those same themes in Figma as in Ignition Perspective: create light and dark modes, even have customer palettes versus your own standard palettes in there. And taking that idea along to show you, the upper-left one there is showing our standard color palette in Figma. The lower right is a screen we have in Perspective just to confirm they're all loaded properly. And if you notice, all the colors are identical between the two. A little side note: you see the repeating colors at the top; there's two rows that repeat. That's because the top rows are our Grantek standard colors, and below that are primary and neutral, which would be used in the projects.

19:07
Doug Yerger: If you have a customer that has their own color scheme they wanna use, you change those bottom two rows to whatever colors you want, and everything in our designs will automatically change to their color palette, with no reworking of each component individually, because they're all done by name. Figma's very easy to get started with. As you can see in the top left, there's only a few tools that you're gonna use. It's very simple to get started with, but it's also very powerful, and as you use it, you learn more and more advanced features, so you can get started very easily and grow from there. And like Inductive, Figma has a lot of training material online. Inductive has Inductive University; Figma puts most of their stuff on YouTube, all out there, publicly available, free of charge. And in addition to Figma's own published material, there's a great design community around Figma. I know I follow at least a half dozen additional design creators who are constantly talking about different uses of Figma and different features.
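The named-palette indirection Doug describes here can be sketched in a few lines: components reference color names rather than hex values, so restyling for a customer is a single palette swap. A minimal Python sketch, with all names and hex values made up for illustration:

```python
# All palette names and hex values here are made up for illustration.
# Components reference colors by name; only the palette maps names to hex,
# so swapping the customer palette restyles everything in one place.

GRANTEK_PALETTE = {"primary": "#1B5E9E", "neutral": "#9E9E9E", "alarm": "#D32F2F"}

def render_button(label, palette):
    """Return a style dict for a button, resolving colors by name."""
    return {
        "text": label,
        "backgroundColor": palette["primary"],
        "borderColor": palette["neutral"],
    }

# A customer brand: override only the named entries that differ.
customer_palette = {**GRANTEK_PALETTE, "primary": "#006341"}

print(render_button("Start", GRANTEK_PALETTE)["backgroundColor"])  # #1B5E9E
print(render_button("Start", customer_palette)["backgroundColor"])  # #006341
```

The same indirection is what Perspective themes and Figma variables each provide natively; the sketch just shows why nothing per-component has to change.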

20:19
Doug Yerger: Figma, just like Inductive, is very, very user-focused. They have a user conference toward the end of the summer, so it just happened a few months ago. And like Inductive, like we saw this morning, they announced a lot of new features, both upcoming ones and ones that are already there. And I saved what I think is the best feature for last: what they call prototyping. What you're gonna see here is all within Figma, but prototyping is linking those design pages you've created into a working model. So as you're clicking through here, this is a recording of me clicking through different areas, showing different features, but it gives you the ability to have discussions with the customers and let them see your user workflows. They get to experience it, and they're not having to envision it from screenshots and descriptions. They can actually see it, play with it, and work through it. It's been great. Just like the designs themselves, you can share the prototypes with your customers so they can actually work through them, play with them themselves, and give you direct feedback.

21:38
Doug Yerger: So when should you use a design tool instead of jumping headfirst into development? Hopefully, some of you have already seen times where you could see a benefit of the design tool, but I'm gonna cover the things that I put at the top of our list. First recommendation: if it's a Perspective project, use a design tool. Perspective, as we've mentioned before, is as much web design as it is HMI design, so it benefits greatly from using web design philosophies. That doesn't mean you can't use a design tool for Vision. We've done it numerous times. But usually, those Vision projects are gonna hit one of the other points we're gonna talk about, so it makes sense to use a design tool there too. The next one is when you have complicated workflows. We always wanna keep that user experience as simple as possible, as enjoyable as it can be for the user; at the very least, you don't wanna make it hard work for them to use your application. 'Cause you all know, if it's difficult to use, they won't use it. So, excuse me.

23:00
Doug Yerger: So, that means grouping common entries together on the same page, getting everything together as much as you can, getting that workflow nailed down upfront, so everyone knows what's going on. Especially the designers themselves really get a feel for what needs to be where and what's going on with that user workflow. Another one is large projects with lots of screens. These projects have a lot going on, and sometimes things that seem unrelated turn out to be related. And once again, those user workflows are gonna run through there. Another benefit on those large projects comes when the developers go in to start on it. Your development team's probably not one person. You're gonna have a team of developers working on different screens. If they have a design to look at, they're all gonna end up with that same look and feel. I'm sure we've all seen SCADA applications out there where all the user workflows are great, everything's there, but some screens just seem a little different than others as you're working through them.

24:08
Doug Yerger: By having a design up front, everyone's working to that same goal and that same look and feel. The next time I would recommend using a design tool is whenever there are unclear user requirements, or part of the project is to develop them during the project. We saw that example earlier where we discussed how we did that with our internal solution, because basically, if you don't have user requirements defined, how can you start development? You're shooting in the dark; you're just throwing things out there. And by doing it in a design tool, you can create four, five, half a dozen different ideas to discuss with the customer, and you're not spending all that time doing it in your development environment. You're doing it much more quickly, much more nimbly, and you can make changes very quickly. One example: we went to a customer site, did a review, and they came back and said, "Well, I really don't like how we're seeing the status of all the building layouts." And on the whiteboard, he drew what he wanted.

25:15
Doug Yerger: The following afternoon, we sent him screenshots from the design tool of the finished, updated design, and he was like, "That was incredible, getting them turned around that quickly." We mentioned this before, but if your actual project is to develop the design spec, you're gonna need that documentation, so doing it in a design tool makes those screenshots and everything easy to create and gets rid of a lot of extra work upfront, 'cause you know there are gonna be changes coming when you're having discussions with those customers on getting that design spec approved. Last one: if we consider HMIs as level one and visualization and SCADA as level two, then you have MES, management, dashboards, and all that beyond that. I would say that SCADA at level two and above is always gonna benefit from using a design tool. I'm not saying we haven't used it for HMIs, but often with HMIs, we'll use it for maybe a simple overview screen, to discuss how we can get it clear and to bounce ideas around. But if it's systems that you've done many times, you probably won't need the design tool and can jump right into development. But that's gonna lead us to our final topic.

26:42
Doug Yerger: Oh, I shouldn't say that, but yeah. So when should we go from design to development? When are we gonna start? Basically, as soon as you have a comfort level that your design's not changing drastically anymore. What that means could be that you officially have your sign-off from your customer. That's the gold standard, but it often doesn't happen. A lot of times, you're gonna say, "Okay, well, they've tentatively approved these sections of the design and we're still working on other ones," and you can take those over to development. You don't need to move over the full design; you can move over individual parts as you need to. Figma just released a new feature where you can actually flag components in Figma itself, saying, "These are ready for development." A nice feature about that Dev Mode is, once you flag a component, if you make any changes to it, it keeps a change history right in the Dev Mode interface. So, say you tweak the color from your very darkest color to something slightly lighter, the developers don't have to dig through to find what changed. They can actually look at that change history and go, "Oh, the color changed here." They know the one thing they need to update on the other side instead of checking everything.
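The value of that change history is essentially a property diff between two revisions of a component. A toy Python illustration of the idea (not Figma's actual data model or API; the revision dicts are made up):

```python
# Toy property diff: given two revisions of a component's properties,
# report only what changed, so a developer knows exactly what to update.
def diff_props(old, new):
    """Return {property: (old_value, new_value)} for changed properties."""
    return {k: (old.get(k), new.get(k))
            for k in set(old) | set(new)
            if old.get(k) != new.get(k)}

# Hypothetical revisions: only the fill color was tweaked slightly lighter.
rev1 = {"fill": "#1A1A1A", "cornerRadius": 6, "width": 120}
rev2 = {"fill": "#2E2E2E", "cornerRadius": 6, "width": 120}

print(diff_props(rev1, rev2))  # {'fill': ('#1A1A1A', '#2E2E2E')}
```

The developer reads one changed entry instead of re-checking every property, which is exactly the Dev Mode win Doug describes.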

28:07
Doug Yerger: Is it a one-way transfer? That is, do we only go from design to development once, and never go back? I wish I could say yes, but, in truth, no. If you get a new feature request, or you wanna discuss a new workflow, anything like that, you're gonna wanna go back into that design environment. The rule of thumb I like to use is: if you have any notion of the question, "Well, what's that gonna look like?" do it in the design tool. You might just be mocking up one section of the screen. You might be revamping the design completely. But as you use your design tool, it's gonna become a lot faster for you to do that design in the design tool versus doing it in your development environment. One final note I wanna make on this: I keep talking about design and development like there are gonna be separate teams in your organization. For Grantek, and I think most of the controls industry, design and development are actually the same teams. We don't have separate sections yet. We may one day, as these keep advancing, but currently, it's the same team of people; it's still design and development as two separate roles that they're working in.

29:30
Doug Yerger: So the same people could be designing in the morning, developing in the afternoon, jumping back and forth all day long, but that's what you're gonna go through. Basically, you don't need that 100% design to jump over, and you can always jump back and forth any time you want. Now, I guess I'll invite Rob back out here to answer any questions you guys all might have.

30:00
Rob Lapkass: Well, thank you for that informative presentation, Doug. And this brings us to our Q&A part of our session. We ask that you direct your questions to one of the mic runners. We've got two people down here on the floor. We've got a couple of folks up in the balcony, so any questions you have for Doug, please fire away. Right here.

30:25
Audience Member 1: Hey. Okay, so we have your design team doing the Figma mock-ups. Did you have any success with exporting that design, whatever was done in Figma, as a JSON object you could import, so you can reduce the Perspective development?

30:41
Doug Yerger: They do have an export ability. It's gonna bring in your basics of size, corner radiuses, colors, things like that, so you could write some scripting and bring those in. But it's not gonna be able to map that a button is a button, because in Figma, it's a rectangle with colors and corner radiuses, so it doesn't know that it's gonna map to a button component in Perspective. You probably could add some additional variables onto that item, identify it as a button, and then build some additional scripting that reads it: "I see this additional JSON that's tagged with it; oh, it's a button, so I'm gonna create a button component and map it that way." So effectively, you could create those mappings. We haven't evolved that far yet. We're actually looking at it ourselves, 'cause some of the export features are fairly new as well.
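The tagging-and-translation approach Doug sketches verbally could look roughly like this. The JSON shapes below are illustrative only, not Figma's real export format, and the "componentHint" variable is a made-up convention; the Perspective component type names (`ia.input.button`, `ia.display.label`) follow the standard id pattern, but the whole mapping is a hypothetical sketch:

```python
import json

# Hypothetical exported node. Figma only knows this is a rectangle; the
# "componentHint" entry is our own added variable, not native Figma data.
figma_node = {
    "type": "RECTANGLE",
    "name": "Start Button",
    "width": 120,
    "height": 40,
    "cornerRadius": 6,
    "fill": "#1B5E9E",
    "componentHint": "button",
}

def to_perspective(node):
    """Translate a tagged design node into an illustrative component dict."""
    if node.get("componentHint") == "button":
        return {
            "type": "ia.input.button",
            "props": {
                "text": node["name"],
                "style": {
                    "backgroundColor": node["fill"],
                    "borderRadius": node["cornerRadius"],
                },
            },
        }
    # Untagged nodes fall back to a plain label so nothing is silently lost.
    return {"type": "ia.display.label", "props": {"text": node["name"]}}

print(json.dumps(to_perspective(figma_node), indent=2))
```

With a scheme like this, the export carries enough intent that a script can emit real components instead of anonymous rectangles, which is the gap Doug describes.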

31:39
Audience Member 1: Thank you.

31:42
Audience Member 2: I'm curious about how you manage the design when there are collaborators, and the options when you're presenting to customers. Are there different versions of the mock-up running at the same time? What's the practical way that you manage that?

32:03
Doug Yerger: To be honest, we often just go on a team share and show them what we want them to see. But there is full version control. You can actually create branches of your design. And then, once a branch is finished, it can be flagged for review, and the owners of that design file can approve it, and it'll merge that branch back in. So it does have that full control as well.

32:31
Audience Member 2: Can you do some sort of repo integration with it also? For collaboration.

32:40
Doug Yerger: I don't know if it supports things like Git. The version control is mostly done within Figma itself, that I know of. We haven't really tried that with a DevOps environment. Typically, what we'll do is we'll say, "Okay, this was rev zero," copy that design within the design file itself, and say, "Here's gonna be rev one that we're working on," and we'll go from there. And then we'll have branches within that rev until we're happy with it. And then, when it's released to the customer, that becomes fixed, and we'll copy it down for the new work in progress. It's all kept within Figma itself, not a repository.

33:23
Rob Lapkass: We had a couple questions down here.

33:26
Audience Member 3: There's quite a few add-ins for Figma. Are there any that you found especially helpful?

33:35
Doug Yerger: I'm trying to remember which ones I have applied 'cause I put them on so long ago. Hit me up after this, and I'll look at my Figma file and tell you which ones I'm using 'cause I actually don't remember which ones I'm using. I know I have a couple of them in there that are used regularly, but I don't remember their names.

33:56
Audience Member 4: Hi, we're heavy users of Figma for our design, and I'm just curious to learn from you. One barrier, or one thing that could be more efficient in our workflow when we're working with other designers or developers who are taking off pieces, is that we seem to have a lot of things in comments in Figma and then a lot of things out on backlogs, or scrum boards, or whatever else. Is there any way you've found to integrate Figma with some sort of other workflows and development patterns like that?

34:24
Doug Yerger: You're saying like an Agile tool...

34:26
Audience Member 4: Yeah.

34:27
Doug Yerger: Yeah, there's actually several integrations. Figma integrates with several of the common scrum tools already, so you can pull things off the board, say, "These are what we're working on," and publish them that way.

34:39
Audience Member 4: Sounds good, thanks.

34:42
Audience Member 5: So early on, you mentioned doing a fairly robust analysis of various tools in the space. What are some of the other ones that you might have evaluated?

34:50
Doug Yerger: We did a high-level evaluation of probably five of them. We had narrowed it down, at that time, to Balsamiq, Adobe XD, and Figma. We ended up choosing Figma, one, a little bit around costing; we like their license model. It ties into our SSO, so we actually log in with our domain credentials. We control access, even internally in our organization, through AD groups, through our IT department. So it had a lot of those nice-to-have features that we really liked.

35:31
Audience Member 6: I saw in your development that you're using a lot of test data. And I'm wondering, did you use test data from the customer's site or like autogenerated test data? And how did you go about collecting that if it was customer's data that you were presenting to them?

35:48
Doug Yerger: In Figma itself, that's all just pure text. It's just data we made up and typed in for the design, to give them an idea of what it is. It's not dynamic at all. It's not tied to anything. And that's part of what makes it stronger: we're not having to tie it to anything. We're just showing them what it is. We did know what the typical data was gonna be, like in the one I believe is from an alarming tool. And in that Oreo example, it's like, okay, we know there's a 24-count package, so we're gonna have a tray, we're gonna have an outer wrapper. We made up numbers and put in placeholders that we knew would make sense to a customer looking at it. We have done ones for a specific customer where we'll take the real data and put it in place, so they see their data in there. But it's still not dynamic. It's for a specific example, 'cause it's static in Figma.

36:47
Audience Member 6: Thank you.

36:48
Audience Member 7: Hi, Doug.

36:49
Doug Yerger: Hello.

36:50
Audience Member 7: Thank you for the presentation. That was good. This is just more of a comment. We now use Figma. We have Sam, and I think the biggest benefit that we found is, as developers, we're engineers. We may not be the most artistically minded people. So despite the fact that Perspective allows us to create beautiful-looking UIs and UXs, it still looks like something an engineer has created. It basically allows the more artistically minded people within our company to truly help us develop beautiful-looking UX/UI experiences without us actually having to let them into the Designer. And that's been huge on both sides. So we end up with better-looking applications, not messed up by people like this.

37:37
Doug Yerger: Yeah, and that's one of those things we took from that web design world where you have all those wonderful designers that are saying, "This is gonna be a great-looking web page," but they have no idea what HTML and CSS even are. And they don't need to know; that's not their job. And that's what we're trying to separate there. So yeah, you can bring in your marketing department, let them decide what makes sense for it.

38:01
Audience Member 8: Hi. I've suffered most of the pain points that you showed there, so I relate to all that. My question is, in addition to the user interface design, do you use any other tool for designing the relations between tables and databases and such?

38:27
Doug Yerger: We do use a few tools on that. Usually, it's kind of for visualization. To be honest, if I'm doing them, I will actually draw them in Figma, 'cause it's so easy to draw in Figma that I'll just throw them in there and design. Some people like using Visio, but I hate trying to draw connecting lines in Visio, because you know they're going to constantly change their path and change how they look whenever you touch anything. So I'll usually just draw them in Figma myself. I know a lot of people use Paint to mark up screenshots and all that; I usually just bring up Figma and draw things in there, 'cause I can do it so quickly now.

39:07
Audience Member 8: Alright, thanks.

39:07
Doug Yerger: Thank you.

39:10
Rob Lapkass: Well, I'll jump in with one. I've done a little bit of work with Figma, but if someone in our audience was interested in getting started with a design tool like Figma, what might be the top three or five capabilities or design capabilities, the biggest bang for the buck, the quickest impact kind of things to prioritize on?

39:31
Doug Yerger: Well, I'll start with the first thing that's great about Figma: it's free to start with. You can have two projects in it, and you do have a limited number of people you can invite to your file, but if you're just wanting to evaluate it, it's free to evaluate; you don't even need to contact them. You just create a login on their site and start using it. Big bang for the buck: we went over those pain points. Really, where I saw its value right upfront was when we had that IIS application, where we knew we weren't real happy with the one we had and wanted to rediscuss those workflows. How do we want it to work? We actually sat down with a consultant and worked through typing out what we wanted, making notes, and just discussing things. We had IIS up on a different screen, discussed what we wanted, and that's how we came up with that list of the workflows, and then we could just slowly grow it from there. So I think that's one of the big ones: a lot of times, you can just use it as a collaborative tool. They even have a whiteboarding section inside Figma where you can put stickies and notes and everything else, if you're into that phase of it. I usually stick more to the design side of Figma.

40:57
Rob Lapkass: Sort of a follow-on to that, what kind of design capabilities would you like to see built or added on to Perspective? 8.3, of course.

41:09
Doug Yerger: Being able to integrate with the design tools, as the gentleman out there mentioned, would be great, where we could actually bring things natively into it. Obviously, you'd have to set them up and say it is a button or whatever, setting up the right class, but that would be a wonderful feature.

41:23
Audience Member 9: So while we're all waiting for 8.3 and those drawing tools, I can see that Figma might be helpful for like SVG creation and sort of standardization that you could then use to import.

41:37
Doug Yerger: Yes, and when you're drawing in there, you can draw basic shapes and things like that, and you can export straight to SVG.

41:46
Audience Member 10: Did you investigate Lucidchart? Did you do something like that? Did you look at that at all?

41:51
Doug Yerger: We had not looked at that one very much. Our analysis started with asking, "What's really big in the web industry?" So we went with the top hitters there as our starting point, because we figured that would give us the best bang for the buck and the most knowledge out there for learning it ourselves.

42:14
Rob Lapkass: Oh, looks like we got one over there.

42:18
Audience Member 11: Hi, Doug. You mentioned that oftentimes, the designer and the developer are the same person. What's your opinion on whether that should actually be the same person or whether there should be a separate job for that?

42:32
Doug Yerger: I think that in our industry currently, you're gonna see that developer staying as part of the design team. But as the gentleman from Kanoa mentioned, getting in people who are graphic designers might really help with getting snazzier, more eye-catching layouts, especially when you're dealing with dashboards and management tools, where they want all those things to be flashy and really eye-catching, versus an HMI, where we're bound by ISA-101 for what it's supposed to look like.

43:07
Rob Lapkass: Alright. Well, it looks like we're drawing near to the end of our scheduled time. Thank you for all the questions, and how about another round of applause for Doug?

43:21
Doug Yerger: Thanks, Rob.


Speakers

Doug Yerger

Principal Engineer

Grantek Systems Integration

ICC Year
2023