Ignition Community Live with Inductive Automation's Dev Team

Containerization & Ignition: Unpacking the Possibilities

56-minute video  /  42-minute read

Containerization is becoming an increasingly common way to develop, run, and share applications more efficiently. In this informative episode of Ignition Community Live, you’ll get a closer look at the benefits of containerization and of using Ignition together with Docker, the leading containerization solution. Join us to learn more about how containers stack up against virtual machines, reasons to use Docker as well as reasons not to, the official Ignition Docker image for 8.1, use cases, future plans, and more!


Webinar Transcript

Kent: Hello, welcome to another episode of Ignition Community Live. Today, we're gonna be talking about Containerization & Ignition: Unpacking the Possibilities, as you can see on there. Really, today we're talking about Docker and all the stuff that that brings in. And really, I love the little subtitle here, "AKA, using Ignition for containers, and containers for fun and profit." So it's gonna be an interesting episode today. We have a lot of special speakers that you're gonna be able to hear. First off, though, is myself, Kent Melville, Sales Engineering Manager, here at Inductive Automation. I'll just be acting as moderator today, but we brought in the heavy guns to talk to you about all of these containerization options.

Kent: Let me introduce you to some members of our software development team. So first off, you can see Paul Griffith, he's a Senior Software Engineer. Next, Perry Arellano Jones, Senior Software Engineer. And last, we got Jonathan Kaufman, also a Senior Software Engineer. And so these guys have really been working hard to bring the first proof of concept out there for our Docker image as a company. Since we've released that, we've taken some big steps forward as a company. And we'll have one more person come in who will be presenting today. But before we jump to that, Paul, Perry and Jonathan, if you could each take a minute real quick and just introduce yourself, tell us a little bit about what you do for Inductive Automation.

Paul: Hi, thanks, Kent. This is Paul, I am a Senior Software Engineer. So my day-to-day is a lot of activity on the forums, collecting bug reports from users there, trying to help people through issues, and then obviously working on the software directly. The three of us, we kind of all work together on build, and CI and CD responsibilities. So we kind of share that aspect of everything together. And so that's why we're in charge of this whole Docker effort that we're doing now.

Perry: Yep, this is Perry, and pretty much just to echo Paul, all of us tend to kind of work a little bit everywhere. But as we continue to build out our Docker and container capabilities or plan on building those out, that's become more of a role, I think, for the three of us, and we definitely have been using it more and more internally for development, which you'll hear a little bit more about today, I guess. But in general, yeah, we're just focused on making the product useful and usable for everybody, whether that's fixing bugs or adding features. We all kinda share all the load there.

Jonathan: And my name is Jonathan, I've been with the company for about three-and-a-half years. And again, to echo what they said, same situation, and then also I helped build the binary installers, Perspective Workstation, some of the launchers, stuff like that. So we're all pretty excited about what Docker unlocks for us and what the future holds for it.

Kent: Perfect. Thanks, guys. And so as many of you on the line may know, we released a Docker image on Docker Hub, and we'll be talking about that today. And there was some interest there because we had already had a member of the Ignition community who had been posting an unofficial Docker image of Ignition of... Well, a bunch of images of different versions and whatnot. And people said, "Well, what does this mean for Kevin Collins and his images?" And we said, "Well, [chuckle] we have a long way to catch up with all the great work Kevin has done."

Kent: And actually, since we released those, we actually have a surprise appearance today. We have none other than Kevin Collins joining us. And when I say joining us, I actually mean that more than just in today's Ignition Community Live, but Kevin Collins has actually joined Inductive Automation and is now a Lead Software Engineer for us. And so Kevin, I'll go ahead and turn it over to you to tell us a little bit more about yourself and to get the presentation started.

Kevin: Thanks, Kent. Like Kent mentioned, some of you already know me from my contributions toward Ignition on Docker over the past few years. In fact, I was just looking at the commit history of that GitHub repo this month, and as of December, it's been three years since my first commit back in the Ignition 7.9 days, getting that project off the ground and starting to explore Ignition on Docker. I've come to the development group here at Inductive via a slightly different path than most. I've been a systems integrator for the past 13 years. So I came into the industry with a computer science education and a passion for technology, and I've always tried to bring forward and apply new technologies, processes, and techniques wherever they're applicable and wherever they can bring benefit. And speaking of new technologies, we're here today to talk to everybody about a deployment methodology for your Ignition development and installation needs.

Kevin: Let's take a look at our agenda. So we're gonna be covering a few topics here today. The first of them is what is Docker? What is it? Where does it fit within the broader container ecosystem? We'll talk about why you might wanna leverage it, where you can use it and some of the advantages that it can bring. Next, we'll dig a little deeper into using Docker and how some of the different constructs fit together. This will be where we lift the hood and start talking through some of the parts and pieces that make this technology work. And then once we've got a good foundation for our conversation today, we will dive into a live demo of using Ignition in Docker and show you exactly how easy it is to get things up and running using containers. And finally, we'll have some time at the end to talk about current abilities, known limitations and plans for the future.

Kevin: Alright, so let's get started. What is Docker? It's probably good to start with a quick review of why Docker, and container technology in general, is different from the virtual machines that we're used to deploying on today. Virtual machines are great, and they still have a definite place in our systems ecosystem: they allow us to multiplex our physical machines and run multiple isolated systems in parallel using less hardware.

Kevin: It's commonplace to deploy multiple virtual machines on a single physical host. However, they end up duplicating a lot of functionality unnecessarily. When you spin up multiple VMs, you're bringing up multiple copies of operating system kernels, multiple copies of device drivers for simulated hardware, and independent memory and storage allocations, and all of this adds up to resource use that's maybe not as efficient as it could be. Containers, as you can see in this image down below, allow us to achieve some of the same isolation benefits that we get from virtual machines with greater efficiency. We're not duplicating as much, so we're able to make better use of the resources of that physical host machine. We'll talk about more of the reasons to explore and use containers in some of the follow-up slides.

Kevin: Now, another note here that I want you to keep in mind: Docker did do a lot to pave the way toward adoption of container technology at scale, but it is one of many container runtimes out there. Docker itself, for example, runs containers on a project called containerd. CRI-O is another popular container runtime for Kubernetes. It's common to bind terms together, like Kleenex and tissues, or Google and search, so I just wanna make it clear that just because we're running a container doesn't necessarily mean it is a Docker container, but Docker is certainly one of the most prevalent systems out there for containers. One of the great things about Docker's tooling is that it's really great at building images, not just running them. That's an important aspect, and one of the main reasons why using Docker is so convenient, because it gives us a lot on both sides. Docker's made it easy to build OCI-compliant images that can run across all of those popular container runtimes, including, of course, Docker Engine. We'll get into more of these details later this morning as well. Now, the last note on the slide is that everything we're talking about here today serves as a foundation for more advanced concepts such as container orchestration, where we add another layer in the stack to manage the life cycle of these containers.

Kevin: Tools like Kubernetes and Docker Swarm are important, and really no presentation about containers would be complete without at least mentioning them. But we're not gonna get too deep into those today; look for more on this in future sessions, maybe. Next, I want to share a high-level view of the Docker ecosystem. This simple graphic depicts some of the high-level components that we're gonna be looking at today. Ultimately, we can view this as three major pieces. There's the client, where we're going to issue commands, and those are gonna be API calls to the Docker daemon that's running on our host computer. Now, maybe this client is also right there on the host; there are some options for having a remote Docker host as well, and that's why this client can certainly sit outside of the main server. Ultimately, the applications that we run are going to be comprised of containers, and then the templates for those containers, which are images. Those images are hosted on a remote registry (Docker Hub being a principal example of one of those), but you can also have private registries as well. So at this point, we've established what some of the high-level components are, and it's time to dig a little deeper. For that, I'm gonna hand it over to Jonathan to talk about why we might want to use Docker.

Jonathan: Thanks, Kevin. Yeah, so there are a couple of different reasons why. So why use Docker? I think one of the big ones is isolation from the OS. Containers leverage and share the host kernel, so that makes them much more efficient in terms of system resources than virtual machines. Containers also apply aggressive isolation constraints to processes without any configuration required on the part of the user, so it really just ensures that those containers are separate from the OS. It also gives you easily repeatable and reusable configuration: Docker supports Dockerfiles and Compose files, which are configuration as code. They're portable and scalable, so you can build locally, deploy to the cloud, and run it anywhere. You can also increase and automatically distribute container replicas across a site or multiple sites. There's also a large ecosystem of existing images that you can couple or compose with your own images.

Jonathan: It's loosely coupled, so containers are highly self-sufficient and encapsulated, allowing you to replace or upgrade one without disrupting others. Some examples of those existing images are databases like MariaDB, Postgres, and MS SQL; load balancers and proxy tools like NGINX; your own Docker registry, which you can spin up as a container; VCS tools for version control; and even other custom images. A great example is Kevin's unofficial Ignition images there. Docker also comes with a complete set of tools to build, run, and share those images. So that repeatability is pretty much baked into Docker, and you can use built-in commands to not only build and run those images, but publish them to public registries, private registries, et cetera. So when may Docker not be ideal to use? Docker is great; Docker unlocks a lot of possibilities. However, it is a new layer of abstraction.
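
As a rough sketch of that build-run-share cycle, it looks something like the following; the image name and registry address here are hypothetical placeholders, not anything from the session:

```shell
# Build an image from the Dockerfile in the current directory
docker build -t registry.example.com/my-ignition:1.0 .

# Run a container from that image locally
docker run -d --name my-ignition -p 8088:8088 registry.example.com/my-ignition:1.0

# Share it by publishing to a registry (public or private)
docker push registry.example.com/my-ignition:1.0
```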

Jonathan: The onboarding costs of this are something that you need to think about as well, so getting your team up-to-date on not only what Docker is, but how to manage it, how to deploy containers from images, updates, etcetera. Docker does make a lot of that pretty easily, but it is a new layer of that abstraction again. Ignition runs on x86-64 or AMD64 architecture only. So our... Currently. So our official images don't support additional architectures like arm hard float or arm64 at this point. You also might not wanna run it on resource-constrained devices, so some edge-of-network devices that have limited resources. An additional toolset might not be desired or really even necessary there. Also, if you have any operating system requirements. So a really big example of this is Ignition with the OPC-COM Module. These are Linux-based containers, so something to be aware of. And IT may impose additional requirements which restrict the usage of Docker, so consult your IT department there as well.

Jonathan: Where can we use Docker? Docker itself is supported on Windows, macOS, and Linux. For Linux, that's most of the popular distributions, like Ubuntu, Debian, Fedora, CentOS, and Raspbian. It is supported on x86-64 (amd64) architectures, and there is roadmap support to run Docker on arm64 for Windows eventually, as well as on Apple Silicon for macOS.

Jonathan: Current Ignition images are usable as Linux containers. This is one big thing to be aware of if you are running Docker on Windows: these are not currently Windows containers. So when using Docker for Windows, make sure you're set up to run Linux containers, which is the default, but if you've ever modified that, it's just something to check. And it's important to think about the install of Ignition, at least currently, as a Linux install, no matter where you're running the container and no matter what OS your host is. Native Windows images are definitely a future possibility. So I'm gonna go ahead and hand it off to Paul to pick it up from here and talk about using Docker.

Paul: Thanks, Jonathan. So when we talk about using Docker, we're really getting into a bit more of the nitty-gritty details here. When you're building images, you're essentially running a series of instructions that come out of a Dockerfile. You copy files in or you download files from the Internet, and you build those layers together into an image, and that image is essentially just a static set of files. You can hold those images locally and never push them up to any other registry, you can work off of existing images from pre-built repositories like Docker Hub, or you can start your own registry internally inside your company. There are lots of possibilities there because of the repeatable, composable nature of Docker images.

Paul: So we'll go into a little bit of detail in the next couple of slides about the difference between images and containers. An image is, as mentioned, essentially just a set of files. It's built up in layers from a base image, which can be a core operating system image or a totally from-scratch image. On top of that base image, you add whatever minimal dependencies you have. So if you need certain locales installed, or fonts, or configuration in any other respect, that's what you add as a dependency.

Paul: Then you install your application itself. In our case, we're just dropping an Ignition zip install into the container's file system or the image's file system. And then to better support running as a container, you can add certain supplemental tooling. So you'll hear the terms, entry point or health check. We'll go into that a little bit more on the container slide, but that's just extra tooling that you can add specifically to make your image run better as a Docker container, rather than just your base application by itself.

Paul: So what is a Dockerfile? Well, as I sort of alluded to, it's just a list of instructions that says, "Here's how I want to define my requirements." So it's a sequence of instructions that says, "Here, I want to download a file. Here, I wanna copy files from my local file system into this image. Here, I want to run a command inside of the image." And the really valuable thing about Dockerfiles and this whole notion of building images is that they're plain text files. So you can store them in a version control system, and you can share them around very easily. And once you've built an image, it is totally static. There's no further configuration to do, and it is utterly repeatable. You can download that same image 100 or 1,000 times on any number of different machines, and the core promise of Docker is that that image will just run. You don't have to keep re-building and re-installing and doing all these extra steps each time.
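
A minimal Dockerfile following the layering Paul describes might look something like this. To be clear, this is a hypothetical sketch, not the official image's actual build instructions; the base image, download URL, and paths are placeholders:

```dockerfile
# Hypothetical sketch; not the official Ignition image's real Dockerfile.
FROM ubuntu:20.04

# Minimal dependencies: here, unzip plus a Java runtime
RUN apt-get update && apt-get install -y unzip openjdk-11-jre-headless

# Drop the application itself into the image (placeholder URL)
ADD https://example.com/downloads/ignition.zip /tmp/ignition.zip
RUN unzip /tmp/ignition.zip -d /usr/local/ignition

# Supplemental tooling: what to run when a container starts from this image
CMD ["/usr/local/ignition/gateway"]
```

Each instruction produces a layer, and the finished stack of layers is the static image Paul mentions.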

Paul: So why would you want to create your own? Because that's, as we mentioned, a possibility and something that you may absolutely need to do for your own purposes. You might just want to package up your own application. I hear a lot of talk from system integrators about dealing with PLC programming software. In theory, you could put your PLC programming software into a Docker image and have versioned Dockerfiles for each version of your PLC programming software.

Paul: There's any number of things that you can do; you don't have to use an existing image that's published on the public registries. You could build one entirely for your own consumption. You might also just want to take one of those existing public registry images and add your own configuration to it: maybe default it to a certain locale, add localization tools, or just add debugging tools that you might wanna use. If you're doing runtime inspection of containers, you might just wanna add a nice text editor for when you have to look at configuration files inside of the container.

Paul: So that's images. Next, I'm gonna talk about containers, because containers go hand-in-hand with images, but it's important to recognize the distinction. Where an image is essentially the static configuration (think of it as a UDT, to use an Ignition example), the container is the runtime instance. So you take an image and you actually run it. You can use the docker run command directly, or you can use a higher-level tool, and that's where the orchestration that we talked about earlier comes into play. When you run an image, you're going to pass some amount of runtime configuration. You can configure general Docker parameters, like port configuration and container memory settings, and if your container has an entry point script, you can also specify custom configuration for whatever application you're running.

Paul: So in the case of the Ignition container, you can specify the gateway name, for instance, at container runtime. That's not part of the image, because it's a piece of configuration; it's something that goes into the container. Generally speaking, your application and your container are going to be tied together: as long as the application is running, the container is running, and if something goes wrong with the application, then your container has something wrong with it. But sometimes that granularity is not as obvious as just "the process that is running your application dies, and therefore your container dies."
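
As an illustration of that split between image and runtime configuration, a docker run invocation might look like the following; the container and gateway names are made up, and the trailing -n flag follows the runtime arguments discussed later in this session:

```shell
# Runtime configuration is supplied when the container is created, not baked into the image:
# -p publishes a host port to a container port, -m caps the container's memory,
# and anything after the image name is handed to the image's entry point script.
docker run -d -p 8088:8088 -m 2g --name my-gateway \
  inductiveautomation/ignition -n MyGateway
```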

Paul: So for instance, in our Ignition image, we have what's called a health check. This is another layer of image customization that you can provide, that reports the status of whatever your application is, back to the Docker daemon. That status check then allows you or your orchestration system to take a certain action. You could automatically remove any misbehaving container, just delete it completely and recreate it because the entire point of containers is that they're somewhat ephemeral. Once you apply configuration, that's all you need to do, or you could just run an alert if something goes wrong with your health check. There's many different layers to containers, but this, the entry point, the application and the health check are probably the three most important aspects of it.

Paul: So then it really launches into, "What about using Ignition in Docker? How do you actually do that?" So we have two options available right now. As we mentioned, with 8.1 we started publishing an official Inductive Automation Ignition image, and this is now available for everybody to use. You just docker pull inductiveautomation/ignition, and the latest tag is automatically going to get you the latest 8.1 stable release. But this first-party image is very much a first draft. It has some configuration options: you can specify gateway names and public HTTP settings, and you can tweak your Java memory, but that's about it. So there's a pretty obvious next question, and that's part of the reason Kevin is here now to talk about it: "What about Kevin's image? And what's the future gonna be for that?" So I'll let the man himself talk about that here, because there's no better person.

Kevin: Thanks, Paul. Yeah, so for my unofficial Docker image, we've got a few links here for you to explore. Like was mentioned, there's a lot more configurability in the image that I've been maintaining on the side: things like automatic gateway provisioning (commissioning, rather), being able to configure aspects out of the box like gateway network configurations, and support for third-party modules. Take a look at what's out there. The good news on this image is that all of the things we've talked about today, the Dockerfiles and entry point scripts, you can actually explore yourself on my GitHub repo. Everything there about how that image is constructed and how it's all bolted together is available, if you want to learn about Docker. And to be quite honest, much in the same way that I did three years ago, I learned by exploring, "How can we get Ignition running in Docker?" So it's a great, great way to learn.

Kevin: But the story for today doesn't end here, we're gonna get into some demos. So I'm gonna guide you through what we've got prepared today. The first example is really just, "How do you get Ignition up and running in its most basic form in Docker?" You can see here in the little video preview here, a couple of simple commands, a Docker pull and a Docker run. So these are commands that we run from our Docker client, and it interfaces with the Docker daemon through that API and ultimately starts that container as a process on our host system. So just running these two commands here is really all you need to get started with a simple Ignition Gateway. And there's nothing to install as it relates to Ignition because all of that work is done already, packaged into that image file that you pull from the Docker Hub registry.
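
The two commands from the video preview are simply the following (the container name here is just an example):

```shell
# Pull the official image; the "latest" tag tracks the latest stable 8.1 release...
docker pull inductiveautomation/ignition

# ...then run a gateway, publishing the web port to the host. Nothing else to install.
docker run -d -p 8088:8088 --name my-ignition inductiveautomation/ignition
```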

Kevin: After that simple example, you may start to think, "Well, what more can I do with this?" And this is where you start to expand out the use of that image, by customizing it with a few extra configuration elements that we pass in through that entry point script that we talked about earlier. So this is where you might be able to run a few different containers, and perhaps map in some different ports from your host machine into those containers, and this allows you to run multiple containers at the same time. So this example here shows how you can start to extend that Docker run command with some additional customizability. And all of these... We'll see this later. I'll show you the Docker Hub web page, but all of these commands are documented, so you can know more about how these bolt together.
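
For example, two gateways can run side by side by publishing each container's internal port 8088 to a different port on the host; the container names here are illustrative:

```shell
# Both containers listen on 8088 internally; the host maps them to 8088 and 9088.
docker run -d -p 8088:8088 --name gateway1 inductiveautomation/ignition
docker run -d -p 9088:8088 --name gateway2 inductiveautomation/ignition
```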

Kevin: Next, we may think about, "Well, how can we pair one of these available applications, such as a database, with our Ignition Gateway?" And this is where we're going to touch on a tool called Docker Compose. Docker Compose allows us to coordinate launching multiple different containers, and it does that with a definition file that prescribes exactly the different run commands, volume commands, and all the things we'd otherwise have to execute manually. It packages all of that into an easily readable configuration file. So much in the same way that a Dockerfile is a plain text representation of how an image is constructed, with the various steps and actions that are taken to achieve that, Docker Compose and its YAML file allows us to do the same thing to describe a set of multiple containers and how to bring those up in tandem. And for that, here's an example of such a Docker Compose file. But our last demo here today, we decided to do not as just a PowerPoint slide, but instead as a live demo. After all, this is Ignition Community Live; what would it be without some live action, right? So we're gonna move to a demo context here, along with another friendly surprise. Hey there.

Kevin: Let's get into the demo. So the screen that you see here is Microsoft Windows 10, and I've installed Docker Desktop for Windows. Additionally, we have Visual Studio Code, which is the editor that I'm gonna use today to work through the compose file. It also happens to have a nice extension that lets you visualize the different images that you have in your Docker installation as well as your running containers, so we'll see that during the demo here today. We saw some of those docker run commands earlier; now let's talk through this compose definition, so we can really peel back the detail on what's represented here. First, we have this list of services. Docker Compose does blend a little bit between local use of containers and a container orchestration tool such as Docker Swarm, and in Swarm, a container is part of a service; that's why this says services instead of just containers, but for our purposes here, these are containers. So we define one container called Ignition Gateway 1. We can declare the image that we want that container to derive from, and we can specify the ports, in terms of the port on our host computer that we want to access and the port on the container that we're running in isolation on the inside of that container. So we have a few port definitions: the standard HTTP port, the SSL port, and the gateway network port.

Kevin: We also specify a command. As I mentioned earlier, if you go to the official image on Docker Hub, you can see some of the runtime arguments that you can specify. Well, we can model the arguments that we would normally place at the end of our docker run statement within our Docker Compose file with this command node. So we're gonna do a -n to set our gateway name, and then we're going to set the public address to just be localhost, since we're running this locally on our system here for development. And then finally, there are ways to persist your data for the gateway to a volume. This is a way that you can preserve the state of a given container, so that in the future, upgrades to your gateway might be as simple as changing this image definition and bringing the container back up, at which point it will inherit the new assets from that updated image. And that's why you want to persist the state of that container: so that when you destroy the old 8.1.0 container and bring up an 8.1.1 container, it will presumably come back with your development assets and everything. Next, we describe another container; this is how we're going to actually bring up more than one Ignition Gateway on our system. So, Ignition Gateway 2, and here we have the image and the same types of settings: ports, command, and volume. And finally we've got… go ahead, Jonathan.
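
Putting those pieces together, a compose file along these lines matches what's described in the demo. Treat it as an approximation rather than a verbatim copy of the on-screen file: the service names and image tags are illustrative, and the volume mount path depends on the image, so check the image's documentation for the exact data directory:

```yaml
version: "3"
services:
  ignition-gateway-1:
    image: inductiveautomation/ignition:8.1.0
    ports:
      - "8088:8088"   # HTTP
      - "8043:8043"   # SSL
      - "8060:8060"   # gateway network
    command: -n Gateway1 -a localhost
    volumes:
      - gateway1-data:/var/lib/ignition/data   # data path varies by image; check its docs
  ignition-gateway-2:
    image: inductiveautomation/ignition:8.1.0
    ports:
      - "9088:8088"   # host 9088 maps to container 8088
      - "9043:8043"
    command: -n Gateway2 -a localhost -h 9088 -s 9043
    volumes:
      - gateway2-data:/var/lib/ignition/data
  ignition-db:
    image: mariadb:10.5
    environment:
      MYSQL_ROOT_PASSWORD: example
      MYSQL_DATABASE: ignition
volumes:
  gateway1-data:
  gateway2-data:
```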

Jonathan: Yeah, so I just wanted to call a little attention here to a detail in the command: we have specified the HTTP and SSL ports that we're exposing on the host, and this ensures that Designers and Vision Clients can access that gateway from the host into that container, even though we're mapping them differently inside that container.

Kevin: Yeah, it's a good point. These are typically going to match the left-hand side of your port publish. That's a good thing to remember. Finally, we've got a database container. For our example here today, we're using MariaDB, and there are a few environment variables that we're using to pass configuration into that container. For these, again, just look at the MariaDB entry on the Docker Hub registry for more information on how to customize your container. So without further ado, all we need to run this is one command to bring up our compose stack, so I'm going to just go ahead and run that, and you'll see it do a few different actions. One of the things it does is create a special network on our system for these containers to talk to each other, and Docker takes care of the name-to-IP resolution for these containers, so that Ignition Gateway 1 can talk to the database by the name we've given that service, ignition-db. We also see it create a couple of volumes for our containers, for Gateway 1 and Gateway 2, and finally it creates the containers themselves; this is equivalent to your docker run command.

Kevin: So we've done all of that just by bringing up our compose stack with the docker-compose up command. At this point, we're ready to go to our host computer; we're gonna browse to localhost:8088, and remember that's going to be hitting our host machine on the left-hand side and piping into the container on the right, and so that ought to get us to Gateway 1. So let's take a look. At this point, we already have our Ignition Gateway, and remember, on this machine I've not installed Ignition; all I've done is install Docker Desktop for Windows. So that already easy and quick experience for installing Ignition that we've grown to love, this is one way that it gets even faster, which is pretty fun. So I'm gonna go through the commissioning process here, and it's important for these ports to stay as the defaults, because remember, these are gonna be the ports on the other side, from the container's perspective. So we'll finish our setup here. Now our gateway is starting up, and we'll go ahead and commission the other gateway; here we're going to use the other port to get to Ignition Gateway 2. Oops.

Kevin: 9088. This time, let's provision this one as an Ignition Edge Gateway. Like I mentioned before, just keep these ports the same, because even though we're publishing these out to our host at 9088 and 9043, inside the container we're going to keep those same default ports. Okay, so that one's off. So now, here we are: we've got our Ignition Gateways up, so let's finish out the database connection example. As many of you have probably experienced, installing a database is usually a bit of a time-consuming exercise, but containers can really make that simple. All we've done is define our MariaDB container through this ignition-db service definition, and we've configured some behavior within this MariaDB image: for example, upon first launch it will create an ignition database, and it will set the root password to "example," as we've defined here. There are of course other ways to better protect credentials of this nature, but for this development context, this is a good example, 'cause it's nice and simple. We're gonna configure our MariaDB connection, so let me go ahead and log in here. We'll just call this connection Ignition. Now, instead of localhost: since Ignition Gateway 1 in our compose definition can talk directly to the other container, localhost here doesn't mean localhost of our host machine; localhost here is from the perspective of the container. All we need to do is put in ignition-db, which matches up with the name of the service.

Kevin: Our database is called Ignition, and our username is root, with the password, example. And with that, we've already configured the database connection; we're ready to start adding tags, historizing tags, creating database tables, stored procedures, anything we need. Our database is online and running. The final component of today's demo that I wanna show you is how to leverage the Gateway Network. We've shown that we can bring up these two gateways together; now let's connect them with a Gateway Network connection. So from here, I'm gonna log back into my Edge Gateway, which you can see has come up for us, so let's go ahead and log into it.

Kevin: And let's configure a Gateway Network connection. We're gonna do an outgoing connection to reach up into our Ignition Gateway 1. And much in the same way that we did with our database connection, Ignition Gateway 2, up here, can talk to Ignition Gateway 1 by name, and Docker takes care of that DNS resolution for us. So at this point, we're connecting, and the last step is of course to acknowledge that connection at our Ignition Gateway 1 container.

Kevin: Now, you may have already noticed, one of the things that happens if you're using localhost to host multiple container instances is that as you switch back and forth, you'll have to log in to the gateway web page again. That's just kind of how this works. There are some easy workarounds, though. You can modify your hosts file and add in some custom names that point to localhost, and that'll keep your browser from getting tripped up and forcing you to re-authenticate. There are also some great examples of using reverse proxies in tandem with your containers to make it easy to reference them by name. There's a good ICC 2020 presentation that you can check out with some more details on that.
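As a concrete example of that hosts-file workaround, entries like these (the names are arbitrary) give each gateway its own hostname, so the browser keeps a separate session per gateway instead of clobbering one login with the other:

```text
# /etc/hosts on Linux/macOS, or C:\Windows\System32\drivers\etc\hosts on Windows
127.0.0.1   gateway1.local
127.0.0.1   gateway2.local
```

You'd then browse to http://gateway1.local:8088 for the first gateway and http://gateway2.local:9088 for the second; both names resolve to the same machine, but the browser treats them as distinct sites.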

Kevin: So finally, the last step: we see our incoming connection, and we're gonna go ahead and approve that. And at this point, from scratch with no other installation, we've brought up two Ignition gateways and a database and connected them together, in something that we can bring up and bring down as needed. So think of this as just one of many different compose stacks that you might have on your system that you can launch and tear down as needed, to really expedite your installation speed and show just how quick it can be to get up and running. The final thing that we can do here is a `docker-compose stop`, and that's just going to bring down the containers that we've launched. So hopefully that's a pretty good representation of some of the power you can unlock when you start leveraging Ignition on Docker for your development and testing needs.
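The day-to-day lifecycle being described boils down to a handful of standard Compose subcommands, run from the directory containing the Compose file:

```shell
docker-compose up -d   # create and start the whole stack in the background
docker-compose ps      # list the stack's containers and their port mappings
docker-compose stop    # stop the containers but keep their state
docker-compose start   # bring the same stopped containers back up
docker-compose down    # stop and remove the containers entirely
```

The distinction between `stop` and `down` is the one to watch: `stop` preserves the containers for a later `start`, while `down` removes them (and, with `-v`, their volumes), which is what makes these stacks so cheap to tear down and recreate.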

Kevin: So with that, I just wanna take you through a couple other links here. We visited the official Docker image. This is also a place where you can search for other images, for example that MariaDB image that we used. This page has all the information you might need to configure and customize the container. If you find that something's missing from the official image, take a look at my unofficial image. Its Docker Hub page has a lot of content for further customizing your container instance. Take a look at that.

Kevin: And then finally, I mentioned before, the GitHub repo, which is a great place to learn about things like the Dockerfile and what goes into creating it. There's no better way to learn, if you're already familiar with Ignition, than by looking at some of these examples and relating them to things you're familiar with. So with that, we're gonna jump back into the slide deck here, and I'm gonna hand it off to Perry to take us through this last bit and on toward Q&A.

Perry: Right on. Thanks, Kevin. That was a great presentation. I always love the live demos. The live demo, though, does two things. One, it shows the power and ease of using Docker. But I think the other thing it does, for those that have used Docker or use Ignition with any regularity, is highlight some of the challenges that using Ignition in Docker right now presents.

Perry: First and foremost, from the walk-through, obviously there was quite a bit of configuration that you had to do. You had to go and log into gateways and get everything connected once the containers were up and live. But more important even than that is the issue of licensing, which is something that we've recognized in the past as challenging. But if you attended ICC, or you've followed what's going on with Ignition 8.1 and our Maker Edition, support for more flexible licensing models is absolutely something that we're working on, and it's going to be coming to Ignition in the not-so-distant future. And so we're pretty hopeful that that's gonna unlock some very good capabilities in these container environments.

Perry: Beyond that, there's also the issue of commissioning. So Kevin had to log in and actually commission, set ports, create his accounts, and things like that. That's another thing where I can't commit to some magic commissioning system, but we're certainly aware that that's a bit of a shortcoming right now. And so we're looking at how we can make that better, and that really goes for all gateway configuration. We're very much in the planning stages of essentially the next big iteration of what our Docker image's capabilities are gonna be. Carl and Colby famously like to say, "First we make things possible, and then we make things good." And so we've got the bare bones of some possibilities there, and now the focus moving forward is really gonna be on designing something that is actually good, and not just fun and useful for development, but also in more live production environments.

Perry: And what good means for you might not be the same as what we're currently thinking of, so we definitely wanna hear from people. What kind of support? What kind of feature set do you think you need in order to make Docker containers more useful? What kind of orchestration support are you interested in? That sort of thing. And to maybe get started on getting some feedback, I'm gonna go ahead and turn it over to Kent here in just a second. But I did wanna say, if you do have ideas right now, please feel free to head over to our forums at inductiveautomation.com and start posting there, and we can actually get some dialogue going. But with that, I'm gonna turn it over to Kent to bring us into Q&A.

Kent: Perfect, no, I thought that was a great presentation. Thanks guys for everything you just covered. And so we are jumping into the Q&A portion now. First set of questions here relate to licensing. I know Perry, you had just talked about this a little bit, but for those who may not have been following it closely with ICC and other things, specifically what are the limitations of trying to use our existing licenses within a Docker environment?

Perry: Sure, yeah. So currently, Ignition licensing... Well, let me back up a little bit. Prior to 8.1, Ignition largely only supported a single style of licensing that was highly dependent on the system that was executing it, more specifically the actual hardware. And in a Docker environment, that hardware is virtualized and it's also transient. We keep talking about how these containers can get brought up and spun down at will. Well, each time you create a new container from an image, the values that you retrieve from this virtualized hardware change. And what that means for your license is that if you try to reuse the same license in the same container environment, on a different container with the same exact image and same exact configuration and everything, yeah, it starts to cause lots of issues, because the underlying hardware is reporting different information than the previous container did. And so when the licensing tries to phone home, it says, "Hey, you don't have any more grants available."

Perry: So right now, there's a little bit of calisthenics you have to do to get the license into a certain place, and then make sure you un-license a gateway before you pull a container down. So there's kind of a headache there. With the release of Ignition 8.1, in support of Maker but also just in support of our longer-term plans, we introduced some new licensing models that allow for much greater flexibility and break that dependence on the underlying hardware. And so as we move toward the future and look at 8.2, supporting that license model in versions of Ignition beyond just Maker is something that is definitely in the plans, and having that support is going to make licensing these containers much, much easier, through subscription models or something else like that. I don't know the details, so don't quote me on exactly how that's going to work. I just know that the plans are being worked on so we can bring that functionality to the broader Ignition environment in a coming release.

Kent: Perfect, thank you, Perry. And if any of you who are on the line today have further questions about specifically how to manage licensing within your particular environment, certainly you can reach out to us on the forum or contact support and we can talk you through what your options are today. But next question, Kevin, for you, since you kinda have been using Docker for the last three years, can you give any practical examples of when an integrator might choose a Docker-based deployment of Ignition, rather than a bare-metal or a VM-based deployment?

Kevin: Well, certainly the biggest use case right now, I think, is in bringing up test and staging environments. Being able to rapidly create a development environment for your system is a great complement to your production installation. Past that, as we move forward, deployments of Ignition in containers are going to really allow us to leverage some of the available software-as-a-service models from the different cloud providers, where you might not have to manage your VMs anymore; the underlying infrastructure that your applications run on is managed and out of your field of concern. So right now, I think development environments are probably the best use, but I can see a future where it'll be much easier to get Ignition up and running as an upstream gateway in a cloud service.

Kent: Perfect. Kevin, while we're on you, a lot of people are asking, "What specifically is the future of the unofficial image? What are your plans as far as which things will be pulled out of that and brought into the official image? And do you continue to maintain your unofficial image even as now you're working on the official image with us?"

Kevin: I think the unofficial image that I have out there still probably has a life to it. Like Perry mentioned, we're in the process of kind of planning and integrating some of the considerations for those capabilities into the broader roadmap and seeing where those pieces fit together properly. I think, though, that the unofficial image definitely still has a use for things like Ignition 8.0, Ignition 7.9 even, because I do maintain images for those other major versions where there isn't one for the official build. So I think we'll continue to see some updates in my spare time to that image. I don't think it's gonna go away altogether.

Kent: We're excited to hear that. We love all the work that you've done, so we're excited to hear you're not abandoning that project. But this next question isn't just for you, Kevin, it's for the whole team: are we expecting, in the near future, parity between what's in the unofficial image and what's in the official image, or will the official image follow its own track and not try to replicate everything that's in the unofficial image?

Perry: Yeah, we're not shooting strictly for parity as a goal. I think there's certainly going to be a lot of functionality that's the same; maybe the flags or something like that will change, but clearly there's a need for a lot of the functionality that Kevin's been maintaining in his image. And so rolling forward, I think it's highly likely that much of that functionality will end up in the official one. Maybe not in the exact same form, but probably close. Like Kevin said, we're kind of in the planning stages, so we'll have to see how that all shakes out.

Kent: Perfect. And this question may be for Jonathan to answer, somebody asked, "Will Ignition be releasing a Windows Docker container, so the image can more easily be run on a Windows host?"

Jonathan: Yeah, so that's something that we've talked about and are kicking around right now. Again, this is very much an initial proof of concept for us, and it's exposing us to some of what those opportunities might look like for different architectures and such. One thing I do wanna mention as part of that, though, is that it's still extremely easy to run these containers on Windows; Docker's default configuration on Windows is to run Linux containers. And so that's very easy now, but it does limit some support. Like I mentioned, one of the most obvious examples is the usage of the OPC-COM module. So that is something that we are aware of. I'm not gonna make any commitments as of this moment to what those architectures might look like, but I just wanna say we are extremely aware of that request and that demand.

Kent: Perfect. Today, we've talked a little bit about Kubernetes and similar systems that add an additional layer of orchestration for these containers. What are our plans as far as supporting Kubernetes in some more specific ways? Maybe adaptation of current master/standby capabilities, or being more cloud-oriented. Any thoughts on Kubernetes and direct compatibility with those systems?

Kevin: I think I'll take a stab at this one. Certainly, we want to make sure that the Ignition image can play in those environments. So initially, we're gonna be looking toward that same story of, "First, let's make it possible, and then let's make it easy." Up front, we have to solve a few additional things within the image: making sure that the health checks are robust and perhaps even more granular, and taking a fresh look at volume persistence and some of those related aspects. Ultimately, the goal is for the image to enable use in these different environments, but first we've gotta set the foundation. So I'm sure that will come, and certainly we'll be looking at that and identifying where the gaps are so we can work on those.
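To give one hedged example of what more robust health checks could look like at the Compose level: Compose supports a per-service `healthcheck` stanza, and Ignition gateways expose a StatusPing endpoint. The probe below is an illustrative assumption (it presumes `curl` is available inside the container, and the timings are arbitrary), not a description of what the official image does today:

```yaml
  gateway1:
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:8088/StatusPing"]
      interval: 10s    # how often to probe
      timeout: 3s      # how long a single probe may take
      retries: 5       # consecutive failures before marking unhealthy
```

Orchestrators like Kubernetes have their own equivalents (liveness and readiness probes), which is why getting health checks right in the image itself matters for those environments.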

Kent: Perfect. Well, we are at the end of our hour today, but we appreciate everybody's involvement. Please feel free to share this with co-workers or other people who you think might be interested. Thank you for your time; this has been Episode 22 of Ignition Community Live. We've shown lots of links today, and all those links will be available with this recording on our website, so feel free to check that all out. Thank you, everybody, and have a great day.

Posted on December 9, 2020