Ignition + Docker: How to Use Containers for Faster Development

56 min video  /  45 minute read

Speakers

Don Pearson

Chief Strategy Officer

Inductive Automation

Kevin Collins

Lead Software Engineer

Inductive Automation

Keith Gamble

Information Solutions Engineering Manager

Barry-Wehmiller Design Group

Joseph Dolivo

Chief Technology Officer

4IR Solutions

In the never-ending quest to develop and deploy automation projects more quickly, containers represent a powerful leap forward — especially when paired with Ignition. In this webinar, thought leaders from Inductive Automation and the Ignition community will discuss effective ways to use Ignition with the Docker platform, which is widely regarded as the de facto standard for building and sharing containerized apps.

  • Learn about the basics of Docker and why you should use it
  • See how to use containers in development, testing, and production
  • Find out how to manage multiple projects with Docker Compose
  • And more!

Webinar Transcript

00:00
Don Pearson: Hello, everyone, and welcome to today's webinar, “Ignition + Docker: How to Use Containers for Faster Development.” My name is Don Pearson. I serve as Chief Strategy Officer with Inductive Automation, and I'll be the moderator for today's webinar. Joining me today are Kevin Collins, Joe Dolivo, and Keith Gamble. Kevin Collins is Lead Software Engineer here at Inductive Automation, and some even call him the Duke of Docker, so we're happy to have him with us here today. Joe Dolivo is the Chief Technology Officer at 4IR Solutions. 4IR Solutions is an official Inductive Automation Solution Partner that provides an easy way to deploy Ignition and its partner ecosystem into the cloud via a fully managed solution. Keith Gamble is Information Solutions Engineering Manager at Barry-Wehmiller Design Group, or BWDG. BWDG is an Ignition Enterprise Integrator that provides a broad range of services including packaging automation, process engineering, facility design services, utility system engineering, software engineering, and a whole bunch more. But I'd like them to introduce themselves. So could each of you tell us a little bit more about yourselves and what you do? I guess we'll start with Kevin, then go to Joe, and then to Keith. Kevin.

01:21
Kevin Collins: Thanks, Don. Yeah, I'm glad to be here today. I've been with Inductive Automation since late 2020 and since then one of my big focus areas has been our Ignition container image and we've come a long way over the past couple years and I'm pretty excited to share with everyone kind of some good workflows for how to leverage this technology and get folks developing faster in Ignition.

01:53
Don Pearson: Thanks, Kevin. How about you, Joe?

01:56
Joseph Dolivo: Thanks Don, and thanks Kevin. I'm Joe Dolivo with 4IR Solutions. I've actually been using Ignition as a container since before it was officially available. And really, 4IR Solutions has been around for about six years now, and it was back in 2021, around the time when the official Ignition container became available, that we kind of pivoted our whole company around providing this managed Ignition infrastructure, and it's all built on containers. So I'll be talking a little bit about how we're using containers in production today.

02:26
Don Pearson: Great, Joe, thanks, and Keith.

02:29
Keith Gamble: Thanks, Don. I'm Keith Gamble. I have been working with Ignition in containers as well since before it was officially supported. I've been at Barry-Wehmiller Design Group for approximately three years. We're focused on anything from SCADA through software engineering applications. We've been using Ignition in Docker, more specifically, since it was officially supported, and really pushing to greatly enhance developer efficiency and optimize our production environments.

02:57
Don Pearson: Thanks, Keith. And thanks to all three of you guys for being with us today. Since we're gonna be talking about our software platform, Ignition, I might wanna just say something at the beginning of the webinar about that to those who may be new to Ignition. So let me take a quick moment and introduce it. Ignition is a universal industrial application platform for HMI, SCADA, MES, and IIoT. Inductive Automation has been around for 20 years, and Ignition for 12 of those. And it's expanded to the point where 57% of Fortune 100 companies are now using Ignition, and I think it's 45% plus of the Fortune 500. It has many features like an unlimited licensing model, modular configurability, and a scalable server-client architecture. But we'll be talking a little bit more about that as we get into the webinar a little bit later on.

03:49
Don Pearson: Look at the agenda. Here's the agenda for the webinar. First off, Kevin will give us a short overview of containers, then we'll move into a discussion about using containers for development and we'll look at a live shared environment. Next, we'll have a quick discussion about configuring an environment for testing. We'll follow that with a quick discussion about using Docker in production. And we'll wrap up by giving you helpful hints to Docker resources and of course answer any questions that you may have. If you do have questions during the presentation, just type them into the questions area of the GoToWebinar control panel. We'll answer as many questions as we can with the time we have at the end, but if by chance we just don't get to your question, I'm sure there'll be plenty of questions today, we do encourage you to reach out to one of our talented account representatives who can help you get an answer to your question. So with that as a quick intro, we've got a lot to cover today. So Kevin, I am going to hand it over to you to get us started.

04:49
Kevin Collins: Thanks, Don. Yeah, I want to just take at least a few brief moments to talk about what makes containers unique. Set the stage, if you will. So this technology defines a methodology for bundling applications and their dependencies, the things that you would typically install to run an application, into portable packages that you can easily move from one system to another. That's the packaging side of containers, but we're also talking about runtime, how to run those applications with segregated execution of your applications. Container technology leverages sandboxing to give your application its own isolated view of the world while sharing the resources of a host system. As such, we get a minimal runtime footprint with containers that allows us to achieve better density and better resource utilization than traditional virtual machines. And there'll be more on that later on. Now, we could spend a ton of time geeking out over container technology.

05:58
Kevin Collins: Its various fascinating aspects, but this is a one-hour webinar. So let's get to task. What does container technology mean to you, the Ignition developer, and how can you leverage it to empower and expedite your development tasks? Building on that previous slide, we're able to apply containers to make what was already a pretty quick and painless Ignition installation, that famous five-minute install. We're able to make that even quicker with containers. We're able to model complex architectures with multiple gateways, databases, MQTT brokers, a variety of different services in a way that was much harder to provision, configure, and manage before in the example of segregating workloads via multiple virtual machines that you have to manage. And then finally, we're able to leverage this technology to organize our multiple projects. Many of us definitely in the systems integration community are balancing different client projects.

07:11
Kevin Collins: Those end users out there might have a couple different test environments that they wanna manage. And what we're gonna show you today is going to help with getting that done. Now, what we're gonna use here in the demo is Docker Desktop. That's gonna be what we're using for the container runtime and backing container engine. But this could be Podman. There's other, even Docker Engine on a Linux host directly can be used to achieve the exact same thing that we're doing today. Docker Compose, that's going to be the tool that we use to manage those sets of multiple containers. And Traefik is gonna be highlighted as a container-aware reverse proxy that's going to really enhance how we connect into these different environments that we're bringing up. So without further ado, let's jump into the live action. And Keith's gonna get us started here.

08:21
Keith Gamble: Yep. Thanks, Kevin. Getting started with Docker, we wanna talk about Docker Compose, as it's often the very first part of any project you're gonna be working on with Docker. What Docker Compose does is it allows you to define multiple different containers, whether it be Ignition, a database, or a web server, and define them all in a text format that is easily creatable, copyable, and editable. The Docker Compose specification defaults to a file named docker-compose.yaml, as you'll see here. It could be named whatever you like, but for this example, we've gone with the default. And it defines an array of services in the YAML syntax. Each service is going to be your container. In this example, we've created a container called gateway that's going to be leveraging the Inductive Automation Ignition image, version 8.1.26. We also can define the ports on the container. Docker maps the container port onto your host.

09:24
Keith Gamble: So in this case, when we access our laptop's port 8088, it's going to be mapping to the container's port 8088. This is how you can run many different Ignition gateways in one environment without having that port overlap and crossover. The volumes key is where you can define a place to put persistent data. In this case, we're creating a named volume called gateway data and mapping that into the container's Ignition data folder. This means that if we delete this container and recreate it, we can persist that data as we develop. The environment variables are attributes that you might want to give your container on startup. In this case, we've given it the username “admin” and an extremely secure “not password.” We've defined our Ignition edition; in this case you can do standard, Edge, and, in the future, Cloud. And we've auto-accepted the Ignition EULA.

10:20
Keith Gamble: Then, we pass in a command argument to the JVM Start for Ignition. In this case, we're just naming our gateway, webinar demo gateway. Each of these different environment variables and commands are available in the documentation to show you all your different options. There's a wide array of them and they're a great way to customize the container you're defining. Each of our volumes we need to define as well in here so we can give them specific attributes. In this case, it's a simple volume with no details, but it could be all kinds of things. So each of these different keys in the YAML file are, in this case, the ones we've pre-picked to show, but there's a wide array of them and they can do all kinds of things from networking to adding extra functionality, custom health checks, all kinds of things into your environment.

11:10
Keith Gamble: However, in this example we're using it to create just our one gateway. So if we wanna start this gateway, we can go ahead and run the Docker Compose up command. We like to add the -d flag so the logs don't take over your terminal. And when we run that, it's going to automatically look for and pull that Inductive Automation Ignition image. In this case, this is on Kevin's computer and he already has that image, so it doesn't need to pull it. So it goes ahead and it creates that network. The network in this case was automatically named simple-example_default, as you can see, because that's the folder name that we're working out of. And the gateway we created was simple-example-gateway-1. Now, there's a lot of tools you can use to customize the naming of all these things, but that's the layout it uses automatically. So now that we have this container created, with our port 8088 mapped to it, we could go to our browser and access localhost:8088 to hit this Ignition gateway.

12:14
Keith Gamble: So it may still be starting up. Oh, there we go. It already is starting up and running, and now we can go ahead and do whatever we want to on this example. So, that's awesome. Great. We can work on this, but what if we want multiple gateways? So, this is where we go back to our Compose stack. In this case, I wanna create a second gateway. So I'm just gonna go ahead and copy this definition. I need to change a few things so it's unique. This will be gateway 2. I'll use the same image. I need to give it a new port because 8088 is already taken. Remember, the left port is going to be our laptop host's port, and the right port will be our container's port. I'm going to give it a different volume so the data doesn't overlap.

12:58
Keith Gamble: And then, I'm going to change the name of the container that's created and then add this second volume here as well so that it'll create. So now, if we go back and we do another Docker Compose up on our file, we'll see that it went through and it created that second gateway, but it said the first one was already running. This is because when you make changes to a Docker Compose file, it doesn't automatically recreate everything every time you do a new up, it only recreates what you've changed. This means that you can work on one system with another in parallel without having to stop and rebuild that second system all the time. Now that we've created this second system, if we go to our browser again, we can go to that port 9088 and we can see our second Ignition gateway running. Perfect, two down. This is great and this works really well when you need to run a lot of containers. But what happens when you have many, many containers and now you've gotta remember ports, this project is 8088, that project is 9088. Those projects are 7088 and 6088. It can become really annoying to have to deal with those multiple ports. One of the tools that we can leverage to resolve that locally is a reverse proxy. Kevin, would you like to talk more about that?

14:23
Kevin Collins: You bet, Keith. So yeah, as you can imagine when you expand this out, as with most things, things start to get harder when you do more of any given thing. So we've got to leverage tools to make our lives easier. Keith mentioned our simple example here, which exists as a Compose project. And you can see over on the left I've got a few different other folders, one of those being proxy. So let's open up that Docker Compose definition. Now, there's a lot of settings here. Each containerized application has its own particular methodology for configuration management and setup. There are of course a few common themes like publishing the port so we can access it from our host. We're not gonna dive into all of the details here because part of what this demo's about is as much as informing you of how to do some of these things, it's also to open your eyes and really showcase the power of this technology and how you can apply it. So we don't want to get too lost in the weeds. After all, we've only got an hour to show you all this good stuff that we've prepared. So ultimately, what we're gonna do next is change to our other Compose project in the working directory proxy here. And guess what? We're gonna do a very similar Docker Compose up with our -d.

15:58
Kevin Collins: This starts the various services in our definition. Here we just have one service, and we're also attaching a shared network that we're going to use for our other solutions. Now, the default network from our simple example attaches to the built-in bridge network that's part of any Docker installation. But Compose lets you easily create other subnets that you can attach your containers to. And that gives you some different ways to organize your connectivity for your different containers. So here, if we take our example, we can see you can apply labels to your container, and Traefik, which I mentioned in the earlier slide, is a container-aware reverse proxy. That means you can tie it in to your Docker engine. Through this bind mount here, we're actually attaching this container to let it read what's going on in Docker, so it can actually see the configuration of the different containers you're launching.

17:06
Kevin Collins: And the way Traefik works is it lets you apply labels to automatically create routes in your reverse proxy. So what does this mean for us? If we come back over to Safari, we can bring up a new tab and we can now use this proxy.localtest.me. And suddenly, now we can access our Traefik dashboard. And here we can see some routes that were created for Traefik itself. But let's extend our original simple example and free ourselves from having to work with those ports. So if we come back over to our simple example, instead of ports 8088 and ports 9088 for these to connect to, we're simply going to add some labels such as traefik.enable=true. And we're gonna give it a host name of gateway 2. And so what this is going to set up for us is a new way to connect to our gateways. But there's one other step that we need to do and that is change our network, sorry, networks, to be our default network. And it's going to be that same proxy network that we created with our reverse proxy. And we're going to give it an external flag of true that will help tell Compose to not worry about creating this network as it did up here...

18:49
Kevin Collins: When we originally ran the solution. Instead, it's going to simply attach to that other network. So now if we come back to our simple example, we'll do a Docker Compose down to stop our two gateways and bring the solution down. And now we'll bring it back up with the new configuration. At this point now, if we come back to our Traefik dashboard, we can already see a couple new routes for our new gateway. And now instead of port 8088, we should be able to refer to it by name. And perhaps as with all things, I've got to actually show you what is going on here. So, let me take this down all the way and restart it here. One second. Some of these things are easier to do if you've done them from the start, so let's bring them back up and I'll take you over to here. Yes. Okay, so here is the Traefik dashboard. Now that you can see it, we now have a couple new routes for our gateway 1 service, and I think we have an issue. What live demo would be complete without at least some issue? Oh, that's right. We've got to have it actually bring down that original network we created and now change it. Compose thinks that we didn't actually change anything since we still had a default network. So sorry for that small misroute there.

20:52
Kevin Collins: So, one of the things that I'll show you here, because we might as well take this opportunity to learn, is we can use other commands to inspect our containers. So what I'm going to do is actually inspect this gateway 1 container and determine what network it's attached to, to make sure it's attached to our proxy network. So we have that proxy network connection. Okay, so I think it should be functioning. Ah, I maybe just didn't wait long enough. So here we are now, gateway 1, we have to wait for those services to start up. Gateway 2, localtest.me, and we're back on track. So now you can see with our reverse proxy, we've been able to set up an easier path to connect to our containers. So now that we're back on track, I've got a couple little slides to show. So basically, originally we were connecting to just our one gateway. And we used localhost since we're running this on our local system. We expand the solution, we add another gateway, we get into those ports, and things start to get more difficult to manage. Then finally, we introduce Traefik, which then lets us refer to those under the names gateway 1 and gateway 2.

22:29
Kevin Collins: So that's really where we're at when we talk about linking up Traefik in our Compose projects, but we've got more in-depth examples to show you. And for that, I'm gonna hand it over to Joe to take us through our first project.

22:50
Joseph Dolivo: Awesome. Thanks, Kevin. So inside of here we have three different project folders that are essentially the reference architectures that Inductive has on the Ignition website inside the documentation, as well as on some of the other pages for architectures. But instead of being something that you're manually downloading and installing and configuring, these are essentially configured out of the box and defined as text, as code, here inside these Docker Compose files. So this is an example of an architecture where you want more than one Ignition server. Let's say you wanna split Ignition into a frontend and a backend; the frontend would be handling the actual load from the Perspective clients that you're joining with your web browser. Then the backend would typically be connecting to actual devices like PLCs and things like that.

23:42
Joseph Dolivo: So we have a couple of different services that are defined in here. So we have frontend 1. So this is the service name for the frontend Ignition service. You can see a lot of things that are the same in here as the other examples. We're using a couple more configuration keys in here, like environment files to reference environment variables that are defined in a file. We're specifying particular modules that we wanna be starting up. You can see here we have the labels here for Traefik, particularly a frontend 1 host name. This is how we're gonna be able to access this by name instead of worrying about port mapping. And then otherwise, that's pretty similar to the other examples we were looking at. If we scroll down to the second one here, you see we have backend 1.

24:23
Joseph Dolivo: Again, this operates very, very similarly. We've got a different set of modules that are defined down here on this line, because now we're connecting to actual devices. We also have, if you look at the volumes here, and the frontend example had this as well, a separate folder that we're using to map backups as well as the actual data directory, which is gonna give us the statefulness, if you will. So every time we run this, if we tear the container down and we spin it back up, we're gonna maintain our state; we're still gonna have all that data restored. And then down here we can see now that we have a host name of backend 1. And then finally, to make this a little bit more interesting, we also have a database service. So in this case, we've got a version of Postgres 15 that's running on here.

25:05
Joseph Dolivo: There's a concept in Docker and Docker Compose called aliases for the network. So basically, over the network defined above, in this case this database network, these are hostname aliases that we're gonna be able to use to reference this database service. So from within an Ignition configuration, when we're setting up our database connection string, we could use database 1, we could use more.useful.alias, or something similar to be able to reference that. Again, not having to worry about internal host names. And then you can see we also have another volume mapping here for the data directory for the database, as well as some environment variables. And otherwise, this is pretty much the same with the network and the volume keys that we already had showcased. So with all of this defined, we have three services. We can, again, go ahead and CD into the directory and then do a Docker Compose up -d, and we should have all three of these services up and running and available for us to take a look at. Kevin, was there anything on this one that you wanted to talk through as these are coming up?
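The database service Joe describes, with its network aliases, might be sketched like this. The alias names mirror the ones used in the demo; the volume path and environment keys follow the official Postgres image's documented conventions, and the credentials are placeholders:

```yaml
services:
  db:
    image: postgres:15
    networks:
      database:
        aliases:
          - database-1
          - more.useful.alias   # either name resolves to this container
    volumes:
      - db-data:/var/lib/postgresql/data   # persist the database between runs
    environment:
      POSTGRES_USER: ignition
      POSTGRES_PASSWORD: example-password
      POSTGRES_DB: ignition

networks:
  database: {}

volumes:
  db-data: {}
```

Inside Ignition's database connection settings, the JDBC host can then simply be `more.useful.alias`, as Kevin demonstrates next.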

26:05
Kevin Collins: No, I think you've got it. While we wait for these to spin up, we'll do a simple illustration of connecting these to that database. You can also notice one of the things we glossed over was that environment file. We used this environment file to store some of those common provisioning environment variables to get us all the way through the manual commissioning process. So that's helpful, and using the env_file directive here lets us get some code reuse by not having to copy and paste all those shared environment variables on each one. Coming back to our back-end gateway, and again we're referencing all of these by just port 80 on our host. Our Traefik config is actually configured with TLS certs. So all of our TLS termination is actually being done by our reverse proxy, and it's automatically forwarding to port 443.
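The env_file directive Kevin mentions lets every gateway service pull its shared provisioning variables from one file instead of repeating them. A sketch, with an illustrative file name (`gateway.env`) shown as comments at the top:

```yaml
# Contents of gateway.env (shared provisioning variables):
#   ACCEPT_IGNITION_EULA=Y
#   GATEWAY_ADMIN_USERNAME=admin
#   GATEWAY_ADMIN_PASSWORD=not-password

services:
  frontend1:
    image: inductiveautomation/ignition:8.1.26
    env_file:
      - gateway.env       # reused by every gateway service
  backend1:
    image: inductiveautomation/ignition:8.1.26
    env_file:
      - gateway.env
```

Values set under a service's own `environment` key take precedence over the file, so per-gateway settings can still be overridden inline.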

27:15
Kevin Collins: You can see we've got the certificate validity up there as well. The last thing that I think is probably useful to show is just how quickly we can connect a database now. So this is our back-end server. We're gonna just create a database connection named Postgres. We're gonna use that more.useful.alias. This could be whatever it needs to be. And you can define those additional aliases like Joe mentioned, in your Compose definition. So now I've created a database connection, it's valid. I'm already ready to start collecting tag history, doing whatever I need to do. And this is all encapsulated within this Compose project that we set up, project 1. So yeah, I think hopefully already we can see some of the utility of being able to manage these multiple projects.

28:19
Kevin Collins: If we look at something like the Docker extension here in VS Code, we can see the different projects here that we have running, and each of these can be taken down rather easily and then brought back up in the future. So, to continue the demo, the other thing that we want to show you is how to add some additional supplementary components to your solution to add even more capability. So one of the tools that you see in our demo folder here is called MailHog. This is an SMTP test utility. So again, we're defining just a simple Docker Compose stack with a port publish that we're going to use to loop back from our containers. This is an example of a shared service that we're gonna use across all of our different solutions, that is, project 1, project 2, and project 3. So again, we can do a Docker Compose up, and if we come back over to Safari, now we have, under MailHog, an SMTP dashboard here. And if we create, for example, an email profile, let's just create a default email profile. We're gonna loop back to our Docker host through this special host name called host.docker.internal, and we're gonna connect to that port 1025 that we're using for our MailHog utility. We have that now, and now we can test from admin@example.com to someoneelse@another.com.
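A MailHog add-on stack like the one shown is only a few lines of Compose. In this sketch, the image name and ports are MailHog's published defaults (SMTP on 1025, web UI on 8025); the Traefik labels and the shared proxy network assume the same reverse proxy setup from earlier and are this demo's conventions:

```yaml
services:
  mailhog:
    image: mailhog/mailhog
    ports:
      - "1025:1025"      # SMTP, looped back to from the gateways
    labels:
      traefik.enable: "true"
      traefik.http.routers.mailhog.rule: Host(`mailhog.localtest.me`)
      traefik.http.services.mailhog.loadbalancer.server.port: "8025"  # web UI

networks:
  default:
    name: proxy
    external: true
```

An Ignition email profile pointed at `host.docker.internal:1025` then delivers into the MailHog dashboard instead of a real mail server.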

30:14
Kevin Collins: And now suddenly we can test our email. We can inspect those payloads. This is a helpful test tool and it's something that you can integrate into your Docker stack. So that's one that I rather enjoy. Another one that you might find helpful if you extend this approach to perhaps a shared server in your environment where you're deploying these containers and you want to give some access to folks who might not be comfortable with the terminal in managing these containers manually. We have another add-on called Portainer. So for our Portainer solution, again, I'm trying to emphasize that a lot of these add-ons are simply a few lines of YAML, and then suddenly you can take advantage of these advanced applications to better organize your development stack. So here we're going to have portainer.localtest.me.

31:16
Kevin Collins: So let's come back over to our browser here. And on first launch we enter some credentials to access the system. In a more permanent installation you could add multiple authentication methods to this. But this is really quick to give you visualization into the different containers that you have running. Suddenly, now you can use this interface to manage your containers on a shared development host. You can do things like inspect the containers, you can look at their logs, all kinds of different things. So it's just really easy adding these complementary services to really empower your development stack. Now finally, as we mentioned earlier on, you can also prepackage configuration into some of these solutions, as we've done with project 2, our scale-out architecture. If you've ever visited our website, you can find these architectures on there.

32:25
Kevin Collins: This is our classic scale-out architecture. We have a front-end gateway and multiple back-end tag gateways connected over the gateway network. So again, we can do a Docker Compose up, and all of these solutions are running concurrently on my system, and they're independent. The only common resource that they have is being shared through that reverse proxy for easy connectivity. So if we look at this one, we're just going to look maybe at our front-end gateway. So gateway-fe is our target name. So let's come over and take a look, gateway-fe. And we're starting up, and the goal for this one is that from a clean start, we can actually have multiple containers representing multiple Ignition gateways, and they will come up connected with the gateway network out of the box. So this is great for demo purposes. If you need to quickly spool up a common architecture, you can create these baselines and then start from here, versus all of the work you would normally have to do to get a gateway online, multiple gateways online, set up the gateway network connections, set up the database connections to the MariaDB database that you had to install.

33:57
Kevin Collins: Again, hopefully you can see the power once you really start to embrace these container technologies, how quickly you can get started and then move on to the task at hand, which is building the incredible applications that all of you out there do. Okay, so that is most of the live demo aspects. I want to come back to the slide deck, and talk a little bit more about test. And for that, I'll hand it back over to Keith.

34:33
Keith Gamble: Thanks, Kevin. One of the things that is really valuable with Docker is that it makes it much easier to quickly spin up staging and test environments, create digital twins of our production environment, and it gives each developer a way to have their own fully integrated test environment. A couple of different tools we can use to manage that: for one, we talked about aliases in networks, but there's also the hostname key, which really just allows us to give each of our containers a specific name. That way, if our production gateway is production frontend and we have a production backend, well, we could simulate it in our test environment by calling them the same thing. That way when we're in test, we don't have to spin everything up, change some connections behind the scenes, and then run our tests.

35:24
Keith Gamble: We could just spin it up, run all our tests, and wait to see how it succeeds. One of the other benefits is creating multiple environments; having development, test, and production Docker Compose stacks makes it a bit easier to manage having a suite of tools that you need to spin up and bring down for every project. Kevin mentioned that you may have two gateways and a database, and so on and so forth. Well, you could do the same thing and just have a development copy, a test copy, and a production copy that gives each of your developers a way to spin this up locally and get through the entire process on their own before we move things along the dev-test-production path. A third tool that can really help with testing and validation: we mentioned Portainer, but you could do the same thing leveraging a container server somewhere on your enterprise network that you use to spin up and break down containers for users who aren't working with containers themselves.
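The hostname key Keith mentions is a one-line addition per service. A sketch where test containers reuse the production names, so connection settings carry over between environments unchanged (service and host names here are illustrative):

```yaml
services:
  frontend:
    image: inductiveautomation/ignition:8.1.26
    hostname: production-frontend   # same name the production gateway uses
  backend:
    image: inductiveautomation/ignition:8.1.26
    hostname: production-backend    # gateway network configs resolve as in production
```

Combined with network aliases, this lets a test stack mimic production addressing without touching the gateway configuration itself.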

36:19
Keith Gamble: Maybe all of your developers are working on a project together, but you've got a project manager who doesn't need to dig into the weeds of Docker every time they wanna see one of your development branches. Well, you can go ahead, spin that container up on the container server, give the project manager a URL that they can go directly to, and now they can see the application that's being worked on while the other engineers are working in their own local environments. Testing really becomes a lot easier when you start spinning up and breaking down containers, turning Ignition into a tool you can use ad hoc throughout your development, testing, and production lifecycle. One comment on something Kevin was doing before that didn't get said, which I wanna highlight: not exactly a testing thing, but the efficiency of all these different gateways running together. I think Kevin has seven Ignition gateways running on his laptop after all that effort, and they are probably using very little memory and CPU. That's one of the values of Docker: ad hoc containers just coming up and down, and using them for whatever you need. So after the testing phase, one of the things that we start moving into is our production environment. So I wanna pass it over to Joe to talk more about that.

37:32
Joseph Dolivo: Awesome, thanks Keith. So as I mentioned before, we're actually taking container technologies like Docker and we're using these in production and at scale. Certainly the things that we're gonna talk about can also apply in development and testing environments, but there's a couple things that are really, really important to make sure you're considering for production, and we'll talk through a couple of those right now. Mainly they are resource optimization, including things like costs, orchestration, and monitoring. So let's dive into the first one of those. Let's say, in a simple example, you've got a single host, and that host can be a VM, it could be a laptop like Kevin's laptop, it could be a shared server that you're running at your organization. Whatever that host machine is, you've got a workload that you wanna run on there. And let's say that the size of the workload and the size of the host are basically meant to represent the resources available to you.

38:25
Joseph Dolivo: You could see that this one is not really optimally sized. We've got a small workload that's using up some percentage of the host but not the whole thing. There are really three different things that we can do if we wanna be a little bit more efficient with our resource utilization. The first thing we could do is allocate more resources to that actual workload. Something we're not showing, but that Docker supports, is that you can basically set requests and limits on those workloads. So I can request a certain allocation of CPU or a certain allocation of memory, and then of course, Ignition has arguments that you can pass to the JVM to allow it to use a certain amount of memory, so we can be a little bit smarter about using the resources that we have allocated to us. Another thing we could do is say, "Well, we've got that available capacity, can we use that for other services?" Like Keith mentioned, there are a number of other smaller services that we can run; you could have a whole bunch of these gateways, and if they're not really doing a whole lot, especially for development and testing, you can probably cram a few more on there.
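As a sketch of the limits Joe describes, Compose can cap container resources while the Ignition image's documented `GATEWAY_INIT_MEMORY` / `GATEWAY_MAX_MEMORY` environment variables size the JVM heap underneath that cap. The specific numbers and tag below are illustrative assumptions:

```yaml
services:
  gateway:
    image: inductiveautomation/ignition:8.1.26
    environment:
      GATEWAY_INIT_MEMORY: "512"    # initial JVM heap, in MB
      GATEWAY_MAX_MEMORY: "2048"    # max JVM heap, in MB
    deploy:
      resources:
        limits:
          cpus: "2.0"    # container CPU cap
          memory: 3g     # container memory cap, leaving headroom above the heap
```

Keeping the container memory limit comfortably above the JVM max heap is the usual practice, since the gateway process needs non-heap memory as well.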

39:25
Joseph Dolivo: For us, we have a Git frontend, which doesn't use the same quantity of resources as Ignition does, and so we're able to cram multiple services onto that same host to be, again, more efficient with our usage. And then the final option, which depending on your environment you may or may not have, is you can actually shrink the host down. So if you're running this in, let's say, a cloud provider where they have different SKUs available to you, you can't necessarily pick the exact RAM, CPU, and storage that you wanna allocate, but you may have a different family of VMs, let's say, that you could use. So those are three options you would wanna consider for optimizing your resources, and in all of these cases, especially when you're looking at changing SKUs of a VM, there can be some cost implications as well.

40:11
Joseph Dolivo: So the next topic to talk through is orchestration. Picture a conductor who is orchestrating an orchestra. The examples that we're showing are all basically running on a single host. So in this example, my workloads are, let's say, a single Ignition gateway and a database. Well, I'm just gonna put those on a single host running Docker, and I've got a gateway and a database that are each in a container. That works really, really well for development and testing.

40:38
Joseph Dolivo: Now, let's say we want a more complicated example where we're actually gonna be looking at having those same two services, but maybe we wanna add high availability and redundancy. So if you look at the... Oops, there's the next example. Now you can see, well, we've got three different nodes, which is what orchestrators tend to call the hosts, and I wanna have a primary and a secondary Ignition server, and I wanna make sure that those two gateways are spread across different VMs, different nodes, in case there's a connectivity issue. Same thing with the database: I wanna have a replica that's set up, and I want the backup replica of that database to be on a different VM than the primary in case there's a problem. Maybe I wanna have some available space open on one of those VMs, so that if I do have a problem, I can move that workload over. This is a really complicated scenario to have to manage yourself, and so this is where orchestration tools like Kubernetes or Nomad or even Docker Swarm come in handy.
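The "spread gateways across different nodes" constraint Joe describes is expressed in Kubernetes with pod anti-affinity. The fragment below is a hypothetical simplification (real Ignition redundancy pairs are configured distinctly, not as identical replicas); the names and label are assumptions:

```yaml
# Hypothetical Deployment fragment: schedule the two gateway pods on
# different nodes, so one node failure cannot take out both.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: ignition-gateway
spec:
  replicas: 2
  selector:
    matchLabels:
      app: ignition-gateway
  template:
    metadata:
      labels:
        app: ignition-gateway
    spec:
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            - labelSelector:
                matchLabels:
                  app: ignition-gateway
              topologyKey: kubernetes.io/hostname  # "one per node"
      containers:
        - name: gateway
          image: inductiveautomation/ignition:8.1.26
```

The same pattern applies to the database primary and its replica, with a different label selector.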

41:36
Joseph Dolivo: So this is what we're using in production, and this is what a lot of customers are already starting to use. And what's nice about this, too, is that under the hood it's all Docker, or a comparable or compatible runtime like containerd, that's actually running those containers on the node, but you have this higher level that's basically scheduling these workloads to the different nodes and handling some of those other common services like networks and volumes, and exposing those out to be accessed externally. So that's orchestration.

42:08
Joseph Dolivo: And then the final point here is really around monitoring. Folks in the industry may hear this called observability, but this is where you wanna have health checks to make sure that your services, like the Ignition gateways and your databases, are actually available. You can collect metrics on those services: let's say I wanna monitor the CPU, RAM, and storage, and I wanna know ahead of time if they're approaching some limit that I have, or if they're gonna bog down other services that are on my machine. I'm gonna collect logs, so I wanna know if there are certain error messages from Ignition, and I wanna be able to respond to those. If I'm filling up my database storage and I'm no longer able to write my write-ahead logs and I'm losing records, that's something I wanna know about. And then for all of those cases, I probably wanna set up alerts, so if I'm approaching my storage limit, I wanna get an alert so that I can react proactively before I've got a problem, like my storage already being full. Or for a health check, if I'm failing a health check, I wanna know about that as soon as possible so that I can go in and start to investigate and troubleshoot and do something about it.
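A minimal version of the health checks Joe mentions can live right in the Compose file, polling the gateway's StatusPing endpoint. This sketch assumes `curl` is available inside the image and uses an example tag:

```yaml
services:
  gateway:
    image: inductiveautomation/ignition:8.1.26
    healthcheck:
      # StatusPing reports the gateway's running state over HTTP
      test: ["CMD", "curl", "-f", "http://localhost:8088/StatusPing"]
      interval: 30s    # how often to probe
      timeout: 10s     # fail the probe if no response in time
      retries: 3       # consecutive failures before "unhealthy"
```

`docker ps` then shows the container as healthy or unhealthy, which orchestrators and monitoring tools can act on.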

43:11
Joseph Dolivo: And then ultimately, if I'm gonna be making the service available to others, like let's say folks internal to my organization, I might have an SLO or a service-level objective, so I'm gonna try to provide 99.9% availability of the service, so when I'm scheduling maintenance windows or I'm doing other activities, I wanna make sure that I'm meeting that target or be able to track how closely I'm getting to that target. And the screenshot on the right is just a very simple example of what we're doing for production where we're showcasing the CPU load on the Ignition gateway, as well as actually pulling in the logs that we're formatting from that Ignition container. So those are some major production use cases to think about and some examples of how we're doing things in production.

43:55
Kevin Collins: Alright, so the next thing that we're gonna share with you is some of those resources. We wanna make sure that we get some time for questions; hopefully that QR code will work for you and take you to a GitHub Gist with the links that you see on the screen, so you don't have to type them out. I'll just quickly run through these here. We have our container image documentation, which will teach you about all of the different configurable aspects of our Docker image. We have a new elective studies course on Inductive University that actually walks you through creating a Docker Compose development stack, one that's maybe more basic than the ones that you might have seen here in the demo today, but that course helps you get the fundamentals, so that you can then further your journey and create some of these more advanced examples as you go forward. Keith has quite a few different solutions out on the web that you can look at and benefit from, the links are there for those, and Joe has some great information on running containers in production with 4IR's FactoryStack.

45:19
Kevin Collins: There is also the Barry-Wehmiller Design Group project templates; these are some of the resources that you'll find in those links. These give you pre-packaged templates that you can start from, so that you don't have to build these out yourself. If you know that you've got an IIoT project and you want a few gateways for MQTT development, that's there, plus a basic project template and a SQL Server and Ignition template. Those and a lot more you can find in the links, many of which are open-source repos, so I'm sure Keith would also appreciate any pull requests or feature enhancements to those as well. And with that, I'm gonna hand it back over to Don; that's it for our demo.

46:19
Don Pearson: That's plenty. Thank you very much, Kevin, Keith, Joe, excellent. Now, you generated a lot of questions with this session, so I'm gonna say a couple of things on a couple of slides here; I've highlighted a few questions I'll be asking, but you guys can look through them and grab any that you wanna answer, and we'll have a little Q&A time here. A couple of things: if you are new to Ignition, it only takes about three minutes to download, and you can use it in trial mode for as long as you want, absolutely free. So the door is open for anyone who wants to really dig in and learn Ignition better. I also wanna make a couple of comments about our Discover Gallery, which is coming up. A quick reminder about the Discover Gallery at the Ignition Community Conference: ICC, if you don't know about it, is coming up in September, and we have a Discover Gallery every year where we showcase the most exceptional Ignition projects from around the world. The deadline to submit your project is April 28; that's this Friday.

47:23
Don Pearson: To get the submission form, go to the conference website at icc.inductiveautomation.com and look for the Discover Gallery page. If you have any questions about it, please email us at icc@inductiveautomation.com. So definitely looking forward to seeing your entry. On the international level, just briefly: if you're outside of North America, we have a network of international distributors who provide sales and technical support in your language and time zone, covering all areas. I saw, Gilles, that you were on the call today from France with AXONE-IO, but you can see the countries covered there. They provide sales and technical support in your language and time zone, and to learn about the distributor in your region, just please visit their website or contact our international distribution manager, Yegor Karnaukhov.

48:13
Don Pearson: So that's a little bit on the international side. Let's move over to Q&A. If you wanna speak with someone, here are some extensions for the folks at our office, sales representatives at headquarters in California; 800-266-7798 is where you can call in. Now it's time for the Q&A, and I'm gonna let you guys jump in, but there are a couple of things that I wanted to mention first. I'll confirm, yes, the webinar recording will be available after the event; give our team maybe a day and it'll be up there. And I see quite a few questions from Johnny, so just so people know, the presentation slides will also be available. First question: "What hardware are you running all of this demo on?" That's a question from Joe. Can you answer that, Kevin?

49:08
Kevin Collins: Yeah, sure. Sure. Running this just on a 2019 MacBook Pro, the last of the Intel ones as it happens, but I will mention that the Ignition Docker image is multi-arch, so you can even run it on ARM64 machines. But yeah, just a MacBook running Docker Desktop.

49:32
Don Pearson: And Sam has a question just... And thanks, Kevin. “Will these Compose files be available after the session?”

49:39
Kevin Collins: Yeah, I think it makes sense to release these. What I'll probably do is post these with a link on the forum, and I will probably also update that Gist that I linked in the presentation with a link to that forum post. We'll go ahead and put them there so that folks can download this and try it out, or use it to extend their own solutions.

50:07
Don Pearson: That sounds great. Then this was a question that came earlier on, Maddy says “I was a bit late, is there a GitHub URL for the project source I can get?”

50:17
Kevin Collins: Yeah, that's what we're gonna probably publish just through the forums, whether it ends up as a GitHub repo or just a package that we upload there, TBD.

50:30
Don Pearson: Okay. Another question from Johnny here, “How do you handle Ignition updates?”

50:35
Kevin Collins: So Ignition updates, that is really as easy as... And I'll just rapidly show you in the simple example: you take that tag, you change it to 27, and then you re-run docker compose up, and you're upgraded.
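In Compose-file terms, the tag change Kevin demonstrates looks roughly like this (the version numbers and volume name are illustrative; the `/usr/local/bin/ignition/data` path is the image's documented data location):

```yaml
services:
  gateway:
    # was: inductiveautomation/ignition:8.1.26
    image: inductiveautomation/ignition:8.1.27   # bump the tag to upgrade
    volumes:
      - gateway-data:/usr/local/bin/ignition/data  # gateway state survives recreation
volumes:
  gateway-data:
```

Running `docker compose up -d` again recreates the container from the new image, and the named volume carries the existing gateway data through the upgrade. As Joe notes later, taking a gateway backup first is still prudent.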

50:56
Don Pearson: Cool. Now, as you guys look over the questions yourselves, some of them are very long, so I'm not gonna read a long question, but Kevin, Keith, Joe, grab a question that you think you wanna answer. We just have a couple of minutes left, and as you do that, I wanna reiterate that you can certainly follow up with us and contact one of these reps, and they will get the answer for you if they don't have it themselves. So I don't wanna discourage anybody from getting their question answered, but... Anybody wanna grab one here?

51:24
Kevin Collins: Yeah, I'll go ahead and do a bit of a rapid fire. I do wanna let everyone know that we are gonna collect these, and some of these questions have some depth we can't cover here, but we'll get to them. So, a couple of them: "How much disk space does the container image use?" The container image, I think, comes in around a gigabyte and a half, so that's your starting point, and then it's just whatever data you create in your session. That's what's gonna be stored and grow in your volume. The other one is...

52:02
Keith Gamble: "Where are the images in the YAML files located?" So those images that we have called out, like inductiveautomation/ignition, are by default pulled from Docker Hub, which is a public container registry hosted by Docker. Inductive Automation has their images published there, and I think the development image we mentioned is published there as well; it's a pretty common place for them. However, internally you can also create your own images ad hoc as you build them out and define them and use those, or you can have your own company-internal container registry as well.

52:37
Kevin Collins: Yeah, and that's probably a good opportunity to also plug, if you go look at last year's ICC, I did a session on the common things for building a derived image, that is start with our official image, add your own custom functionality, that's a good resource there as well. Joe, did you have any questions that you wanted to address? Or I can go through a couple of others.

53:00
Joseph Dolivo: Yeah. There's a bunch of stuff in here. There's a couple about: if an update fails, what do you do? So it is always, always, always important to take a backup before you do that. You could take a gateway backup, which you can provision from the container itself or from the Gateway Configuration page. So have a backup in place in case the update fails, and then you can just change the version number back, bring the container up, and restore from that backup. Some of the questions around TLS certs, that's another one. Again, for production use cases, you can use ACME clients with services like Let's Encrypt or ZeroSSL to dynamically get new certificates. You could also manually provision certificates, which I think is maybe what you did for the demo. Kevin, is that right?

53:46
Kevin Collins: Yeah, I used... I run something called Smallstep, a kind of roll-your-own certificate authority that I use for local development, and I basically just created a wildcard certificate for that localtest.me subdomain, which is just a loopback to localhost. You apply that one certificate and key to the Traefik reverse proxy, and then suddenly you've got TLS without having to configure each and every service, and then it's just going through standard HTTP over 8088 for the backend between the reverse proxy and Ignition. Yeah, I see...
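The pattern Kevin describes, TLS terminated at the reverse proxy with plain HTTP to the gateway behind it, can be sketched with Traefik's Docker labels. This is an assumed minimal example, not Kevin's actual stack (the router name, host rule, and versions are illustrative):

```yaml
services:
  proxy:
    image: traefik:v2.10
    command:
      - --providers.docker=true
      - --entrypoints.websecure.address=:443
    ports:
      - "443:443"
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock:ro  # lets Traefik discover containers
  gateway:
    image: inductiveautomation/ignition:8.1.26
    labels:
      traefik.enable: "true"
      traefik.http.routers.ignition.rule: Host(`ignition.localtest.me`)
      traefik.http.routers.ignition.tls: "true"
      # backend stays plain HTTP on the gateway's 8088 port
      traefik.http.services.ignition.loadbalancer.server.port: "8088"
```

The wildcard certificate and key Kevin mentions would be supplied to Traefik through its dynamic configuration, so each gateway behind the proxy gets HTTPS for free.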

54:32
Don Pearson: Go ahead and take one more question maybe Kevin, and then pass it back to me. We'll wrap up.

54:39
Kevin Collins: Yeah, I'll take one on licensing. So if you are wanting to run containers in production, you do use our leased activation, that's the eight-digit license key. Once you put that into your Ignition gateway, it will contact the licensing server and license that gateway automatically, and all of that licensing is session-based. That means that if you tear down the container and rebuild it, if there's an existing session, it'll pick that up off of your named volume, or it will lease a new session from that license to keep you up and running, and that's how you can get multiple gateways licensed within containers. But yeah, past that, we'll certainly follow up on the other questions and get you all the answers that you're looking for. I hope that you all engage with this technology and let us know how you're benefiting from it.
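The leased activation Kevin describes can be supplied to the container through the image's environment variables. The variable names below match the container image documentation as I understand it, but treat them as an assumption to verify there; the key and token values are obviously placeholders:

```yaml
services:
  gateway:
    image: inductiveautomation/ignition:8.1.26
    environment:
      IGNITION_LICENSE_KEY: "ABCD-1234"                 # placeholder 8-digit leased key
      IGNITION_ACTIVATION_TOKEN: "${ACTIVATION_TOKEN}"  # injected from the shell/.env,
                                                        # keeping the secret out of the file
```

On startup the gateway contacts the licensing server with these credentials, and a recreated container re-leases a session automatically, which is what makes this model fit ephemeral containers.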

55:43
Don Pearson: Listen, we really appreciate the work that you did today, and Joe and Keith, before we move to the last slide, you can see you have contact information for Joe from 4IR and Keith from Barry-Wehmiller Design Group, so please, if you wanna go directly and ask them questions, I'm volunteering your time, Joe and Keith. So you may get some there, really appreciate everybody. Moving to the last slide, it's just a thank you for joining us. We'll be back soon with webinars about unified namespace and Ignition Cloud Edition, so follow us on social media, sign up for our weekly news feed, and you can keep up on all the details. With that, we are complete. Bye for now, and have a great day.

Posted on April 11, 2023