Running Ignition in a Container Environment

45-minute video  /  32-minute read

Speakers

Kevin Collins

Lead Software Engineer

Inductive Automation

Leveraging Docker can be a powerful way to roll out large systems and set up flexible development environments. In this session, you'll hear practical tips for running Ignition in a container environment from Inductive Automation's Docker expert.

Transcript

00:09
Kevin Collins: So, welcome to the session. This is “Running Ignition in a Containerized Environment.” My name is Kevin Collins, I'm the Lead Software Engineer here at Inductive. And on the development team, one of my focus areas is enhancing the deployment experience of Ignition, and I think there's a lot of potential for improvement and development in the container space, which is what we're here to talk about today. So first, let's hit the session agenda: we're gonna recap what Containerized Ignition is all about. We'll then move to an overview of the Ignition container image and show some details on how it's laid out and how it functions differently from a traditional installation. Next, we'll do a deep dive on how to extend our official container image with your own custom functionality. And then finally, we'll touch on some recent updates to the platform that will give you some further guidance on getting your containerized deployments of Ignition licensed. And I'm also leaving some time at the end for some questions, so let's continue on.

01:26
Kevin Collins: Now, before we get too far into it, I do want to throw up some goals here. These are the things that I hope we can translate that session agenda into, things that you can focus on. Ultimately, I'm hoping that in this session, you'll get a good foundation for success, no matter which platform or managed container solution on whichever cloud provider you choose. The goal here is to give you some of that baseline knowledge of how things work so you can be successful in the future.

02:04
Kevin Collins: So, what is Containerized Ignition? We build an OCI image, that's the Open Container Initiative. This is a standard for container images that helps unify how container images are created. Now, most folks have heard of Docker and Docker Engine, which is one container stack that you can use to run containers, but there are other options out there. We're not here to dive into the nuts and bolts of container engines, daemons, and OCI runtimes; all I'm trying to say is that when we're talking about running Ignition in a container, Docker is just one option. It's a much larger ecosystem, and it's pretty exciting that the standards are the way they are, so that way we can enable Ignition in a lot more possible areas as container technology evolves. So what do you get when you deploy Ignition in a container? You get some similar isolation, but ultimately, launching Ignition in a container is much like starting any other given application on your system.

03:22
Kevin Collins: The principal differences are some of the isolation, the view of the world that that process has, and that's what containers help you with. They give you a nice compact single view of the world for your application to run in, and that helps with keeping things isolated and tidy when you're running multiple containers, which is another great opportunity that containers kind of help you unlock.

03:49
Kevin Collins: So as I said before, one of the goals is to shed light on how the image is put together, and I think you'll find some intriguing aspects as we explore this. So step one is an overview of the Ignition container image. We're gonna look at how the image is constructed, we're gonna look at the file system of our official image, then we're gonna talk about persisting gateway state: how to properly manage volumes, and the behaviors of some of those volume types, so that way, again, you're prepared for whichever container platform you choose to use. So, image layers. How is a container image created? We won't spend a lot of time here, but I think it's important and it leads into the story for extending the image later. Container images ultimately are composed of layers, and each layer adds one of two things: it either adds something to the file system or it adds metadata to the image, things such as environment variables, or which command's gonna be used to launch the container when you run it. Ultimately, the composition of an image is all of those layers put together, so at the end, you've got a root file system and then a package of metadata, and that's really what a container image is made up of.

05:21
Kevin Collins: Now, for the file system structure, this will be familiar to a lot of folks who've deployed Ignition on Linux in the past. We have our Ignition install location: /usr/local/bin/ignition. We have a data volume; this is gonna be where we're focusing on data persistence shortly. We've got our modules folder there under user-lib/modules; this is where third-party modules and our base platform modules reside. In Ignition 8.1, the starting point for installing a module is just placing that module file in that folder. We've got JDBC drivers there for connecting to your databases, and we've got the Jython base library for your Python packages. If you're gonna extend and add third-party Python packages, you'll be visiting that path there.

06:17
Kevin Collins: Now, so far, we're in pretty decent parity with the traditional install of Ignition. This slide shows some of the deviations from that that are specific to the Ignition Docker image. One thing that you'll notice when you run Ignition in a container is that the wrapper log isn't there. We create a symlink right there to redirect that to standard out. Long story short, that's just a way to funnel that text-based log stream back to your container engine so you can use things like docker logs and other commands to view the logs from that container. There's no need to duplicate that storage, so we just direct that out and let it be handled by the container runtime. There are a few other symlinks in the web server folder, right there. These are for retaining the keystores that we use for TLS on the Gateway Web Interface, and also the keystore for our Gateway Network.
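As a quick illustration of that log redirection, tailing the gateway's wrapper log from the container engine is just a matter of (container name here is a placeholder):

```shell
# Follow the gateway's log output, which the image redirects to standard out
docker logs -f my-ignition-gateway
```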

07:28
Kevin Collins: The reason we do some of that is because the nature of a container deployment is such that you kind of want that to stick with your persisted data volume. Compare that to a traditional installation, where you might install Ignition once and then use restored gateway backups to change your context. Here, we want that to carry on with the container, so we redirect that to the data folder, so that way when we enable persistence, that stuff carries right along with it. So you can use the Docker CLI like we're doing up top here to learn about image layers. If you want to learn more about how the layers are put together, you can use this tool called Dive. This lets you interactively explore an image, and you can use it against any container image; it's kind of helpful to see which layers add which files and which layers make up the majority of the space of your image. So this tool lets you kind of explore that and see for yourself how the image is put together.
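A rough sketch of both approaches, assuming the image tag shown in the session and a local install of Dive:

```shell
# List the layers that make up the image, with their sizes and creating commands
docker image history inductiveautomation/ignition:8.1.19

# Interactively explore which files each layer contributes (requires the Dive tool)
dive inductiveautomation/ignition:8.1.19
```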

08:40
Kevin Collins: And there's even some details for optimization that it provides to you. Now, links to this tool and a lot of other resources are gonna be available at the end. It's got a nice little QR code that hopefully works to share those with you, so don't worry too much about capturing everything here, it'll be available later.

09:02
Kevin Collins: So now, persisting the gateway state, that's our next stop. So when you deploy Ignition, it's a stateful application, similar to a database. We have our tags, the tag values; there are a lot of things that we need to retain for a given gateway deployment. Now, we preserve all of the gateway state in that data folder there, so that's gonna be what we target for our volumes, which is what we're gonna use to persist the gateway state across container lifecycle events like upgrading and things of that nature. Now, we're gonna look at a simple Docker Compose example. Just a quick show of hands, how many people are already using Docker Compose in their workflows? Okay. So a decent amount. Definitely add that to your tool stack; it's a great way to manage multiple containers. So let's look at the behaviors of volume types. So this is step two in the persistence story. When you create a container, what you do to persist data is you attach a volume to it. And there are multiple types of volumes, there's a ton of different backing storage that might be associated with the volumes, and different container engines behave differently with respect to how volumes are managed.

10:35
Kevin Collins: In this section here, we've got a single gateway, just using the Ignition image. We're using a named volume right here, just called “gateway-data” with no special configuration. And one thing I wanna highlight is named volumes behave in a unique way in that they seed the original contents that are inside your container image into that freshly provisioned volume for your container. What that means is that when you bind this volume to this path, if the image already has files and folders at that path from the get-go, as is the case with our data folder, those will be seamlessly copied and ready for when you run your container. And from that point on, everything's in that volume, your gateway state is safe, and you can do whatever you want: throw away the container, create a new one, any of that, and you're good because everything's in your volume. And these provide good performance and they're managed by the container engine, so they're a great option.
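A minimal Compose file along the lines of what's being described might look like the following sketch; the image tag and published port are illustrative choices, not taken from the session slides:

```yaml
services:
  gateway:
    image: inductiveautomation/ignition:8.1.19
    ports:
      - "9088:8088"   # publish the gateway web port on the host
    volumes:
      - gateway-data:/usr/local/bin/ignition/data   # named volume, seeded from the image on first use

volumes:
  gateway-data:
```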

11:46
Kevin Collins: And when we see this start up, we just run one simple command to bring up our stack. You can see there's a volume created as well as a container. And if we look at our process listing, we will see, yeah, our container's running, good to go. And at this point, we could go back and change the image tag to 8.1.20, re-run that Docker Compose command, and suddenly it's out with that old container, in with the new, but all of our gateway state is there and we're seamlessly upgraded. So this is a good story, we're in good shape here with named volumes.
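The workflow he's describing might look roughly like this, assuming the Compose file sketched above:

```shell
docker compose up -d    # create the named volume and start the gateway container
docker compose ps       # confirm the container is running

# Later: edit the compose file to point at the 8.1.20 tag, then re-run
docker compose up -d    # old container is replaced; gateway state stays in the volume
```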

12:28
Kevin Collins: Now, as you kind of look into some of the different offerings out there, bind mount is the second volume type that we're gonna explore. And I'll mention upfront that many of the managed container platforms that you may encounter out there, when you map a volume to a given path like we did with the named volume, behave like a bind mount. So what does a bind mount do? It explicitly replaces that target path. So here, we're using the long form of a volume definition to create a bind mount between a folder, maybe it's a folder on our desktop, and that destination inside the container. And so this bridges the host-to-container boundary. Sometimes it can also limit your performance if you're on Docker Desktop for macOS or Windows; anytime you have to cross that file system boundary, there's a potential for limited performance. But really, in a production setting, it's mostly deployed straight on a container engine on Linux, so it's not a concern.
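The long-form volume definition he's referring to might look like this sketch (the host path is just an example):

```yaml
services:
  gateway:
    image: inductiveautomation/ignition:8.1.19
    volumes:
      - type: bind
        source: ./gw-data                        # folder on the host
        target: /usr/local/bin/ignition/data     # this path in the container is replaced by the host folder
```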

13:44
Kevin Collins: So let's see what happens if we attach a bind mount to that path and bring up our container. So we see there's just a container 'cause there's no actual volume, it's just a mapping to that folder we have on our computer. And we can see a different status than we were hoping for: exited with a non-zero return code. We're not in good shape here. And if you look at the logs, we can see an error here, and it's looking for a file that doesn't exist. And the reason that happens is because when you associated that empty folder on your desktop with that location in the container, that location in the container is now an empty folder, and the files that Ignition expects to be there are no longer there. So how do we get around this behavior? Thankfully, there's a good answer, and that is Init containers. So Init containers are another way that you can have something run prior to your main container launching. So let's see an example. We have our Compose solution again, and we're using a bind mount volume using the short syntax here. But then you can see there's a whole other Init container defined up there. Now that Init container, if we look, kinda zero in on it, we see we're using the same Ignition image, we're using a volume mapping that lets us basically...

15:24
Kevin Collins: And this will, again, all be in the resources that you have available to you at the end. What this does, in essence, is one simple operation: it looks for a marker file in that data volume location. If it doesn't find it, it performs that seeding for you. Okay? And that's the only job that this Init container has. When we look at our main service, the gateway that we have here, at least in Compose, we can add a “depends_on” to that, so that way, this container will only launch if that Init container completes and exits successfully. Now, Init containers are a first-class concept in Kubernetes, so you can define an Init container as part of your pod spec, but you can also do it in Compose, which is kinda handy. So now, let's look at how that looks. Here we've got two containers, one exited and another one started. So right away, we kinda know what's to come. We've got a “0” exit code on our Init container, which means it did its job, and even better news, we've got “running” and “healthy” status on our main container. If we look at a listing of the files in that location, again, this is on our host here, we can see that Ignition is populating all of those files back to our host.
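The pattern he's describing might look roughly like the sketch below; the marker file name, host folder, and seeding command are assumptions rather than the session's actual resources, and depending on host-folder ownership you may need to adjust permissions (or run the seeding step as root):

```yaml
services:
  gateway-init:
    image: inductiveautomation/ignition:8.1.19
    # Seed the bind-mounted folder from the image's stock data directory, but only
    # if a (hypothetical) marker file isn't already present.
    entrypoint: [ "sh", "-c", "if [ ! -f /data/.seeded ]; then cp -a /usr/local/bin/ignition/data/. /data/ && touch /data/.seeded; fi" ]
    volumes:
      - ./gw-data:/data    # mount the host folder at a different path so the stock data dir stays visible

  gateway:
    image: inductiveautomation/ignition:8.1.19
    ports:
      - "9088:8088"
    volumes:
      - ./gw-data:/usr/local/bin/ignition/data
    depends_on:
      gateway-init:
        condition: service_completed_successfully   # only start once seeding exits cleanly
```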

16:51
Kevin Collins: So Init containers are basically how you approach that challenge: if your volume is behaving like a bind mount, that's how you get Ignition to behave itself. Okay. The next big topic, now that we've kind of set some of those fundamentals, is extending the Ignition Docker image. Creating a derived image is one thing we're gonna show you. Leveraging a multi-stage build, integrating third-party modules, seeding a gateway backup, updating and augmenting your underlying OS image, and customizing the startup behavior: these are just a few of the things that you can do by extending that image, and we're gonna cover these today. So creating a derived image starts with setting up what's called a “build context.” This is the type of thing that you do on your side when you start to create your own image. You'd create a folder, create a Dockerfile, and the starting point is this FROM statement. Now, there's good reference material on the Dockerfile syntax; you can find all of that on the web. But in essence, that's all you need to start building your own image, and then in your container run statement, instead of sourcing the inductiveautomation/ignition image, you're instead going to launch whatever image tag you give your own image. So at this point, it's yours, it's your thing that you're creating.
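At its simplest, that Dockerfile is just a pinned FROM line (the tag here is an example):

```dockerfile
# Dockerfile in your build context folder
FROM inductiveautomation/ignition:8.1.19

# ...your customizations go here...
```

You'd then build and run your own tag in place of the stock image, for example with `docker build -t my-ignition:1.0 .` followed by `docker run -d -p 9088:8088 my-ignition:1.0`.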

18:31
Kevin Collins: Next is leveraging a multi-stage build. As we start to kind of explore what it looks like to create our image and add our own custom functionality, this is a good technique to be able to perform some actions as part of that build, but perhaps do it in a way where some of the activities, dependent packages, helper scripts, those types of things can remain outside of the final image. You do this by defining these different FROM statements.

19:05
Kevin Collins: Every FROM statement gives you a stage of your build. And the easy thing to remember is that the last FROM statement is gonna be your final image. Okay. Now, a lot of this syntax is probably new to a lot of folks. In our resources at the end, I have an aggressively commented version of all of this to kind of help unlock some of that and give you a lot more of the story here. Here, we're kind of focusing on the higher levels, and then the homework assignment is to try out some of this with the GitHub repo that we're gonna link you to and see how that works for yourself. In this one in particular, what we're doing is using this first prep stage to retrieve third-party modules. And then ultimately, all we're gonna do is copy from that prep stage the module files that we downloaded into our modules folder. If you remember, step one to installing a module is just placing the file there. Next, we move right into how to integrate those third-party modules so they come out of the gate ready and in a running state, and we're gonna do that by seeding a gateway backup into your derived image.
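A hedged sketch of that two-stage idea is shown below; the module URL and filename are placeholders, and it is not the session's exact Dockerfile:

```dockerfile
# Prep stage: download a third-party module (URL and filename are placeholders).
FROM ubuntu:22.04 AS prep
RUN apt-get update \
    && apt-get install -y --no-install-recommends curl ca-certificates \
    && rm -rf /var/lib/apt/lists/*
RUN curl -fsSL -o /tmp/ThirdPartyModule.modl \
    https://example.com/downloads/ThirdPartyModule.modl

# Final stage: only the downloaded module file is carried into the Ignition image.
# Depending on the base image's user and permissions, you may also want COPY --chown.
FROM inductiveautomation/ignition:8.1.19
COPY --from=prep /tmp/ThirdPartyModule.modl /usr/local/bin/ignition/user-lib/modules/
```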

20:29
Kevin Collins: This also has the benefit of letting you package your own custom projects, tags, and other baseline functionality into your image directly, so when you launch it, it inflates to that target state that you're looking for. Now here, this is the one that's kind of a mess, but in essence, what it's doing is taking that base gateway backup and automating the steps of accepting your certificate and accepting the end-user license agreement, two of the normally interactive steps that occur when you install a module through the web UI. Our goal is to kind of automate that so these things aren't waiting for manual intervention when you launch your container. And again, much like the other one, we have one extra layer at the end to copy that base gateway backup that we've prepared into our final image. There's one other thing here, which is kind of a nice little tip, and you can use this for a multitude of purposes. What we're using it for here is the entrypoint, which defines what's gonna be launched when you start your container. Normally, our default image is gonna just launch our entrypoint script, but here, we're actually augmenting that with the restore of a gateway backup, using the arguments that you'll find on our documentation page, the -r argument to restore a gateway backup out of the box.
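The final layers he mentions might look something like this sketch; the backup filename is a placeholder, and the stock entrypoint script name should be verified against the base image you're deriving from:

```dockerfile
FROM inductiveautomation/ignition:8.1.19

# Copy a prepared gateway backup (projects, tags, module certificates/EULAs
# already accepted) into the image.
COPY base.gwbk /base.gwbk

# Launch through the stock entrypoint script, adding the documented -r runtime
# argument so the backup is restored when the gateway first starts.
ENTRYPOINT [ "docker-entrypoint.sh", "-r", "/base.gwbk" ]
```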

22:07
Kevin Collins: So there's also updating and augmenting the underlying OS image. This is something that you might do to add other packages to the container image that you might need. If you're doing source control, maybe you need the Git CLI to be in your container and accessible within the gateway context; this is what you'd use. You'd run those commands to install those Ubuntu packages into your derived image, and then suddenly those tools are there for use in your container. Then finally, we've got customizing the startup behavior. This is a way that you can inject your own custom functionality that's gonna run right in sequence as the container launches. So I've put in just an example script here with a placeholder where you might run some of your own commands to do whatever it is that you need to do; this is kind of that last step of customization. If there are some things that you need to do to bootstrap your image when you launch it, you can put those right in there. And then you'll notice we've changed the entrypoint to point to that entrypoint shim to launch.
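For the OS-package part of that step, a minimal sketch might look like the following (the shim itself is sketched after the next paragraph); the name of the image's unprivileged user is an assumption here, so confirm it with `docker image inspect` on the base image:

```dockerfile
FROM inductiveautomation/ignition:8.1.19

# Switch to root just long enough to install extra OS packages.
USER root
RUN apt-get update \
    && apt-get install -y --no-install-recommends git \
    && rm -rf /var/lib/apt/lists/*

# Drop back to the image's unprivileged user (username assumed; verify it first).
USER ignition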

23:30
Kevin Collins: And then what we do in that shim is hand off to the built-in entrypoint script. So this is a convenient way to add any kind of custom bootstrapping that you need; this is how you get it done. There are other possibilities for doing things with derived images that I wasn't able to bundle into the timeframe today. Some examples are supplementing third-party Python packages, so you could do some of the pip-install-style retrieval of those and then bake those into your site-packages folder within your Jython library, right into the image so they're ready to go in your application. This can also be used to integrate a custom certificate authority to enable universal gateway network approval and make that story a lot easier. You can add additional JDBC drivers; this would be what you do to install the MySQL Connector, the one that isn't distributed with Ignition. You can preload additional environment variables. So if you look at our documentation page, there are quite a number of environment variables that you can use to customize the configuration of the container when you launch it, or you can use your derived image to prebake those so that when you run it, those are the defaults now.
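A hypothetical shim along those lines could look like this; the stock entrypoint script name is an assumption, so confirm it against the base image before relying on it:

```bash
#!/usr/bin/env bash
set -euo pipefail

# Placeholder for your custom bootstrapping: fetch secrets, template config
# files, wait on external dependencies, and so on.
echo "Running custom bootstrap steps..."

# Hand off to the built-in entrypoint so normal gateway startup proceeds,
# passing along any runtime arguments given to the container.
exec docker-entrypoint.sh "$@"
```

In the derived image you'd COPY this shim somewhere on the PATH, mark it executable, and change ENTRYPOINT to point at it.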

25:05
Kevin Collins: So things like your GATEWAY_MODULES_ENABLED environment variable, which you might use to declare which modules are gonna be active in your container, or any of those others that are on our docs page, you can preload those in your derived image.
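Prebaking a default could be as small as one ENV layer; the module list here is illustrative, so check the image documentation for the accepted values:

```dockerfile
FROM inductiveautomation/ignition:8.1.19

# Pre-bake a default for a documented runtime environment variable.
ENV GATEWAY_MODULES_ENABLED="perspective,opc-ua,tag-historian"
```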

25:23
Kevin Collins: And you can also preset your own TLS configuration for the gateway web UI as well. I've got some of these examples in the repo at the end, so some of this is available for consumption later too as you go to experiment on your own. Okay, so let's move to just some of the recent updates to the platform. We're going to talk licensing, 'cause that's a big aspect of production deployments using containers. We've had the one-time, six-digit license key since the beginning; this is a one-time exchange with Inductive to activate a given system. With it, you can upgrade the OS, you can upgrade Ignition, and that license validity is maintained. And for a traditional install, that works quite well. Unfortunately, the container story changes some of how that functions. There are challenges to licensing Ignition in containers because those key system identifiers can change. Those things that we normally consider immutable can change out from under us in a container. I kind of equate it to when you launch a container and then you stop and remove it in order to create another one. That cycle, which is just a few commands at the Docker level, start, stop, remove, is akin to uninstalling or just deleting the base OS of a traditional install, provisioning a new virtual machine, installing the OS, and installing Ignition.

27:14
Kevin Collins: So that's why it's a bit different. In a traditional install, if you do all of that, your license is gonna be invalidated. In the container world, that cycle is so much easier, and the story is no different: you can end up tripping your license. So what does it mean? It means that you can do it, and in fact, if you restart the container, it'll be fine. If you restart the host that the container is on, you'll be fine. But if you're deploying on an orchestration system like Kubernetes, it will happily delete your container and create a new one (that's why volumes are important). In those situations and a few others, you end up with your license invalidated on the six-digit system. Now, we do have leased activation with our eight-digit license key and activation token; this helps to solve that, but there are some things to understand about it. It does require Internet access to the IA licensing service. Now, that can be configured in a fairly safe way with a single connection to licensing.inductiveautomation.com on a single outbound port, 443; it's just an HTTPS interaction. But it does require Internet capability. And it functions by leasing a session, and there are renewals of that as time goes on. You may be familiar with our Maker Edition; this is the same under-the-hood mechanism that it uses to license your Maker Edition gateways, but it's not bound to that.

29:02
Kevin Collins: It's now able to be used in Edge and Standard editions for production deployments. And with that, your license is maintained regardless of those underlying system identifiers, so as long as that session is still fresh, you're in good shape. And at the moment, existing sessions are also cached. Let's say the system has just refreshed its licensing session; that's persisted to disk, so if you restart that container or the host that that container's on, when everything loads back up, it loads that session and kind of resumes from that point. So just a brief walk-through of the license config: we can do configurable durations for these sessions and for how often they refresh. In this example, a refresh is attempted every hour; it'll attempt to get a new session, and if it fails, it will keep retrying every minute. So it does try to be as aggressive as possible with respect to renewing its license state to keep things working. But the reality is, if that session duration expires and it was unable to refresh the license, it will revert to a trial mode. So that is the reality of it.
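As a sketch of wiring leased activation into a container deployment, assuming the leased-activation environment variables documented for the official image (the variable names should be verified against the image docs, and the values below are placeholders):

```yaml
services:
  gateway:
    image: inductiveautomation/ignition:8.1.19
    ports:
      - "9088:8088"
    environment:
      IGNITION_LICENSE_KEY: "XXXX-XXXX"                 # eight-digit leased license key (placeholder)
      IGNITION_ACTIVATION_TOKEN: "${ACTIVATION_TOKEN}"  # supplied via the host environment or an .env file
    volumes:
      - gateway-data:/usr/local/bin/ignition/data

volumes:
  gateway-data:
```

For production, check the image documentation for file-based secret variants of these variables rather than passing values as plain environment entries.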

30:31
Kevin Collins: But the good news is, this is a story that we continue to look at, and I think things will continue to evolve over time, but leased activation is a reality for deploying your containerized instances right now. And for cloud-based applications where you're already on the Internet, it's a great solution. Okay, so Q&A, and I hope the little QR code works. There are mics up front, so if you have any questions pertaining to containers, hopefully related to Ignition, we can field them now. And I didn't even plant a question with him. So.

31:26
Audience Member 1: So you say. Oh, it doesn't work. Whatever, I can be loud. How'd you get started in Docker and kind of working it into Ignition and whatnot? Thank you for the seminar by the way.

31:40
Kevin Collins: Yeah.

31:41
Kevin Collins: So the question was, where did I get started with Ignition in Docker? The story goes back to about 2017, which is when I started to learn Docker through my goal of containerizing Ignition. At that point, I was an end user, and I thought, okay, well, Ignition's the one SCADA platform that's multi-OS and multi-architecture, and there's this Docker thing I kinda wanna learn. Okay, let's turn this Linux application into a container and see what happens, and that's kind of what resulted in what I call the unofficial Ignition image that's up on GitHub; it's up on Docker Hub and you can pull it. And I still kind of maintain that today, but over the last couple of years, we've done a lot with the official image, and there's a lot better parity now. Now that I've been able to do some of those things on the other side of the fence, the story is getting a lot better because there's a lot better synergy between what we can do at the platform level plus the container aspects. Yeah, thanks for the seed question. Anything else out there? Now, we've got kinda... If you could... Are you able to... Okay good, just go ahead. Yeah, I'll...

33:07
Audience Member 2: The question is, there are a lot of configuration items that we change from site to site: which database it's pointed to, what the credentials are. There are a lot of things that change when you go through and set up a gateway. Are those configurable through this process, or how do you do that?

33:26
Kevin Collins: So the question was, there's a lot of configuration that folks typically do in a container deployment, or really any deployment of Ignition, right? Site to site, there are different things to do. Does this process help answer that challenge? I think yes, in a way, although at the moment, some of it you have to kind of bake in yourself. So step away from the container side of things for a moment and think, how would I do this in a traditional install? One of the answers might be that you have a base gateway backup that consumes resources that you might plant in a certain location on the file system, so when you start it up for the first time, it consumes those and performs those actions; we can add database connections through scripting, and device connections to some degree as well. So some of that bootstrapping you could conceivably integrate into a base gateway backup that you then integrate into your base image, and then you would augment that approach with perhaps an entrypoint shim. We showed that; it was one of the last ones in customizing startup behavior. You could use an entrypoint shim to... I don't know why that did that.

34:50
Kevin Collins: I don't think I touched anything, but it's okay. Hopefully, you got the QR code. Ultimately, that entrypoint shim might consume some environment variables that you use to pass that dynamic configuration into any given container, and then the generic logic in that base gateway backup that you seeded is gonna be what handles the late-stage provisioning of your gateway. So in a way, yes, I think this does help you create that solution, and I think that story will get a lot better as time goes on, so stay tuned for that. But at this time, this is how you would get it done. That's kind of the general approach. Other questions? Yeah, try that one.
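As a very rough sketch of that late-stage provisioning idea, a startup script baked into the base gateway backup could read site-specific environment variables and provision a database connection from them. Everything here is hypothetical: the variable names, the driver name, and the exact parameters should all be checked against the system.db.addDatasource scripting documentation.

```python
# Hypothetical gateway startup script inside a seeded base gateway backup (Jython).
from java.lang import System

db_host = System.getenv("SITE_DB_HOST") or "localhost"
db_user = System.getenv("SITE_DB_USER") or "ignition"
db_pass = System.getenv("SITE_DB_PASSWORD") or ""

# Provision a database connection from those values; parameter names follow the
# system.db.addDatasource documentation and should be verified against it.
system.db.addDatasource(
    jdbcDriver="MariaDB",  # must match a JDBC driver name registered on the gateway
    name="SiteDB",
    connectUrl="jdbc:mariadb://%s:3306/ignition" % db_host,
    username=db_user,
    password=db_pass,
)
```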

35:39
Audience Member 3: Doesn't work. No. I got two questions. The first one: I know that when setting up an image with the bind mount so you can access the files locally for Git or whatever you're doing, you mentioned the issue where the data folder is now empty inside the gateway, so you use an Init container to actually preseed that folder so when everything starts, it's happy. My question is, is there a reason why the preseeding of that folder wouldn't just exist in the base image? Because all it's doing is putting the same kind of default file structure in place based off of the actual installation.

36:17
Kevin Collins: Yeah, that's a good question. So to boil it down, why do we even need an Init container to perform that seeding; why can't the image itself take care of that? Certainly, that is something that can be done. In fact, that unofficial image I mentioned has that exact type of functionality that I had put in it. Ultimately, we hope that soon that story can be different, to where those files that are needed for bootstrapping the gateway startup are simply in the base image elsewhere, outside of the data volume. That's the real answer to it, and that's where we'd like to head, but we're not quite there yet. There's no reason not to have that be done, and certainly you could introduce and inject that functionality if you wanted; you'd just drop it in that entrypoint shim, so whatever functionality you want... Actually, I just lied to you. You can't do it with an entrypoint shim; that's why the Init container is necessary. But yeah, it would require a bit of additional tweaking to do that.

37:34
Audience Member 3: And then I got a separate question in regards to dev containers, I know you and I have talked about them. I guess, what are your thoughts on where dev containers stand with Ignition and kind of trying to open up more opportunities to have rapidly accessible development environments with the tool?

37:53
Kevin Collins: Yeah, so the question was, where does the concept of dev containers fit in and what does that open up for us in terms of Ignition development? For those that aren't aware, dev containers are kind of a standard. I think Microsoft kind of championed it, but it's a way to define a structure that lets you launch an editor, let's say it's VS Code, in the context of a container where maybe your development environment and all of its required dependencies are packaged into a container image. I think there's a good future there, especially with respect to module development. Using something like a dev container to automate a lot of the fundamental steps of getting an environment set up, and all of the trouble that can occur from that, I think that could be solved nicely through a dev container. And that spec also allows us to use definitions like Docker Compose, where we could have a Java development environment with all of the required functionality in there and a sidecar Ignition gateway that we could use to test something like a custom Ignition module. So yeah, I think dev containers are a really, really neat concept and I've been exploring them, they're pretty cool. Hey.
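A minimal devcontainer.json along those lines might look like this sketch, assuming a docker-compose.yml in the same folder that defines a “dev” build service alongside a sidecar Ignition gateway service (the names and workspace path are placeholders):

```json
{
  "name": "ignition-module-dev",
  "dockerComposeFile": "docker-compose.yml",
  "service": "dev",
  "workspaceFolder": "/workspace"
}
```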

39:31
Audience Member 4: Hey. Have you guys thought about testing it with Podman?

39:36
Kevin Collins: Yes, yes, we have. I've done only preliminary testing, but the good news is that it seems to work okay. So Podman is one of those alternate container runtimes. It is also supported by its own kind of partner tools for building images as well. We talk about Docker, but there's Podman and Buildah and Skopeo; these are some of the tools that comprise another container ecosystem, and certainly I think Ignition works fine in that. That's one of the joys of the OCI standard: we get to play in that scenario. I think right now, our image is still based on an Ubuntu base image, but even if you run Ubuntu-based images on a Fedora Linux system with Podman, it's okay, it works. So we don't do any formal testing yet, but we are exploring it because I think Podman does introduce some interesting potential.
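A quick illustration of that, using Podman's Docker-compatible CLI with a fully qualified image reference (container name and ports are just examples):

```shell
podman run -d --name ignition-test -p 9088:8088 \
  docker.io/inductiveautomation/ignition:8.1.19
```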

40:47
Audience Member 4: Thank you.

40:47
Kevin Collins: Yeah.

40:49
Audience Member 5: On the licensing renewal config, where you said you can control the rate and duration, is that controlled on the customer side or IA side?

40:58
Kevin Collins: Okay, so the question was, what about that session refresh and session duration, and I mentioned it was configurable. So that is configurable when you work with sales to get a license, that's kind of when it's stamped out, and then at that point, it's not configurable like at the gateway level.

41:20
Audience Member 5: And if the licensing renewal fails, is there some sort of alarming or notification you can do from the gateway? Since I would imagine that in an unlicensed state, there aren't that many services available.

41:30
Kevin Collins: No, but I created a ticket for just that, so that way we could indeed do alarming. So that should follow hopefully in the not-too-distant future. Because absolutely, you need to know. And then if it's an alarm, or something we can attach to a system tag, you can use your standard alarm notification pipelines and all of that stuff, while it's still licensed, to distribute that out and get the appropriate engagement. Any other questions? I got a few minutes left.

42:00
Audience Member 6: I got a question. In a nutshell, what's the motivation for a lot of people to start using Docker containers versus traditional virtual machines?

42:12
Kevin Collins: Nice. So the question is, what's the motivation for using containers instead of virtual machines? I'd love to be able to do a demo of that, because I think in the three minutes that I have left, I could achieve it if we actually had it on the laptop. But I think the most compelling story is still in development, where you're using it to augment your development workflows: in a matter of one to two minutes, assuming you're on a decently fast Internet connection, you can bring up a solution with three Ignition gateways, an MQTT broker, and an installation of SQL Server in a container, all configured in a composed stack that can talk to each other over the dedicated container network. And again, remember I said two minutes, right? You can bring up all of that and start working immediately. And you can also manage multiple environments, so maybe you have a few of those. As a systems integrator, it's a no-brainer. When I was an integrator, I typically worked with multiple versions of different software. Well, that's another story that Docker helps solve: we can launch a container running Ignition 8.1.19, we can launch another one using 8.1.20 or the nightly build, or that K. Collins guy’s 7.9 images.

43:51
Kevin Collins: You can use those too. So that's part of the story that we're definitely trying to get out here. This session, I know, was a bit lower level; it is, after all, the advanced track. But a lot of what we covered here today is a compilation of the common questions that I've been getting asked when you go past that surface of just, “How do I start and run a container?” So hopefully this has painted a good picture and given you a solid foundation to build from, to solve some of the problems that you might otherwise inevitably run into and then have no answers for. But it took me longer to explain why Docker is compelling than it would have taken to just spool it up, but hopefully that helped. Okay, I think we are at time, so I appreciate everybody coming to the session today. Enjoy the rest of your conference.

Posted on October 18, 2022