Deployment Patterns for Ignition on Kubernetes

59-minute video  /  42-minute read


Kevin Collins

Lead Software Engineer

Inductive Automation

Kevin Collins returns to ICC for a demonstration of how to harness the combined power of Ignition and Kubernetes. This session offers an in-depth look at methods for effectively automating deployment, scaling, and managing containerized Ignition applications.


Kevin Collins: So you know you're in the right place. This is "Deployment Patterns for Ignition on Kubernetes." My name is Kevin Collins. I'm a Lead Software Engineer here at Inductive Automation. My focus is container technology; been doing a lot with our official Docker image. Interested in the development workflows that that enables. I think it's a really powerful technology. Today, we're gonna talk about what the story is for production deployments and one methodology for leveraging containers in that context.

Kevin Collins: So to get started, let's cover a little bit of the session agenda to just kind of set the stage here. We're gonna talk a little bit about what Kubernetes is. We're gonna try to give a high-level overview of what that is. We're gonna touch on the question of why Ignition on Kubernetes? Why are we interested in this technology? We're gonna cover some typical Kubernetes resources. So we're actually going to dive into some of the constructs that we're gonna leverage within Kubernetes to build our Ignition solution. Then we'll look at how those are assembled into an example architecture. We're going to do the scale-out architecture. We're going to take a look at what that looks like. There's a brief demo that we'll see. And we'll recap on what we saw and try to kind of circle back on exactly what we put into motion there. Then we'll talk a little bit about the road ahead. I think we're just at the beginning of this journey. There's a lot more we can do. So we're gonna touch on some of those aspects in that section there. I've got some helpful resources to share with you, so you'll have something tangible that you can take away from this session. Start doing some experimentation if you're so apt. And then we'll have some room for Q&A at the end.

Kevin Collins: Now, to kind of coalesce some of that into a set of goals to kind of think about as you watch the session, we're not going to do a comprehensive instruction on everything Kubernetes. We only have an hour, and we do want to get to dinner, so we don't have all day. But we do want to come away with at least an understanding of what this technology is, what it enables, and some of the things that you can do with it. Through that, I also want to provide you with a working foundation. So if you need to deploy Ignition on Kubernetes, I want you to come away from this session with at least some support to start doing that in a good pattern and in a way that's going to provide you with some success. Along that journey, we'll showcase some useful tools and applications. Again, we'll only be scratching a very small part of the large surface that is the Kubernetes ecosystem. There's a lot of powerful tools and technologies in that space. We'll touch on a few of them. But more than anything, we want to elevate your awareness on what's possible. So let's just jump right in. We've got a lot to cover.

Kevin Collins: So what is Kubernetes? So Kubernetes, at a high level, it's an open-source container orchestration platform, which are again more words. But to dive into it, Kubernetes joins multiple computers together to form a cluster, and it's organized into a control plane for managing everything and a data plane for running your workloads. It handles deployment, scaling, and management of containerized applications. That's what its intended purpose is. It gives us a standardized set of resources, a common API for most of the typical things that we do in application deployment. So we'll see what these resources are, at least some of the ones that we're gonna use to assemble our Ignition solution.

Kevin Collins: Now, Kubernetes is very modular. It's more akin... I like to think of it almost like you'd think of Linux in general. Linux, maybe it's Ubuntu, maybe it's Red Hat, maybe it's Arch, there's so many different distributions that, put together, can make Linux behave in different ways and make it solve different problems. Kubernetes is similar. It has a modular construction. Everything from how your containers are executing to how the networking is configured, to like the storage backends for your persistent volumes that you're gonna use to hold things like your data volume in Ignition. All of those things can have different implementations for those interfaces, so you have a wide variety of potential Kubernetes deployments, everything from the edge all the way up to massive clustered cloud deployments.

Kevin Collins: So Kubernetes addresses some of those common challenges for deploying applications. Now, I don't want to sell it as something that requires no learning curve. It is yet another abstraction over containers. So there is some to learn, but there's a lot to gain, I think. Workload resiliency, so that multicomputer cluster gives us some potential to have recoverable workloads that can fail over to a healthy node when you have a hardware issue, for example. Scalability and load balancing is something it enables. You can easily scale your applications out to multiple containers. And Kubernetes again gives us that API to define those things in a way that's approachable. Service discovery and networking, so how your applications communicate within the cluster. There's a lot of helpful things that it does there for us, so we don't have to, for example, worry about the IP addresses of different containers. And there's some helpful constructs within Kubernetes to make that as easy as, for example, if you've used Docker Compose and your services within your Compose Stack can refer to each other by name. Service discovery enables that similar concept in Kubernetes.

Kevin Collins: And it's a consistent platform for on-premise, hybrid, cloud, and even multi-cloud deployments. Since it's that common base API, running it in AWS, in their Elastic Kubernetes Service, or in Azure Kubernetes Service, there's less to change if you want to deploy a similar application to multiple environments. And that's just to name a few. Again, we could spend much more time than we have on what Kubernetes is, so we've got to keep moving.

Kevin Collins: Now, I talked before about the organization of Kubernetes into a control plane and a data plane. The control plane is what is going to handle all of the configuration that we put into Kubernetes. There's an API server as the entry point for developers putting resources into Kubernetes. So we're gonna talk to that API server in the control plane. There are controller managers for tracking the desired state. So Kubernetes is also declarative. The goal is to declare your target state, and the job of the control plane is to make that target state the reality. So your controller managers, locally, are going to work to coordinate that. In the cloud, there's cloud controller managers that help translate some of the Kubernetes API constructs into things like your AWS load balancers and your EBS volumes for storage. All of those things are handled for us by the control plane.

Kevin Collins: But the good news is that these are there, so that way we don't have to worry about them as the user as much. There's a scheduler for assigning workloads to a given node. So part of the control plane's duty is also to keep track of how much CPU, memory, and other resources are available on the given computers within that cluster. And the scheduler takes care of making sure that you don't overprovision a node to where your workloads suffer. And then finally, there's persistent state storage that's distributed across those nodes in your control plane for holding on to all of that declared configuration and cluster state.

Kevin Collins: Now, we've talked about the control plane; the data plane is where, for example, we're going to run Ignition. Multiple worker nodes work together for scheduling those containers, starting those containers. There are a few things going on in the background to help coordinate the launching of those containers based on instructions from that control plane and also for managing the network on those Linux hosts that are part of your data plane. So when it all goes together, you end up with both of these. As a developer talking to Kubernetes, you're coming in through that API server. This is where your command-line tools and other graphical tools that interface with Kubernetes, that's what they're gonna talk to. And then, as a user, you connect to running workloads just like you connect to the Ignition web page, you launch Perspective projects. That's you, as this user connecting to those applications in your data plane.

Kevin Collins: Okay, so why Ignition on Kubernetes? We want to be able to leverage that shared pool of resources to run our applications. Ignition can combine with other applications, like your MQTT brokers or databases. Other applications in your Kubernetes cluster can share that pool of resources. And it's able to leverage that expansive set of companion applications and tools. We're gonna see at least one of those that, for example, automates the handling of certificates. We're gonna see that, and then there's a whole lot more. Ultimately, I think this is a way to again provide better portability of the application bundles that you assemble. So we can talk that common language, and then if you go to deploy, there's only minimal adjustments then for the different Kubernetes implementations, be it Azure, EKS, or your own local cluster.

Kevin Collins: So that's the high-level overview. Like I said, we've got a lot to cover. So let's dive into some of the resources in Kubernetes that you're gonna use to assemble a typical Ignition solution. These are broken into categories. The first one is configuration. The first resource that we're gonna talk about is called a ConfigMap. So this stores configuration files, environment variable definitions. We can even put scripts in there to use within our container definition. And each ConfigMap resource is limited in size, 1 MiB. That's because it's stored within that control plane. We want that control plane to continue to be quick, so we can't bog it down with huge files. So there is a limit on the size of these ConfigMaps. Now, as to how we use this with Ignition, this is where we're gonna put, like, our environment variables, things that you would deploy your Ignition Docker containers with today to do some of that pre-configuration. That's where we're gonna put them, is in a ConfigMap. Other initialization files, we'll store those in a ConfigMap. And then I mentioned scripts. We can also package in helper scripts for bootstrapping our solution. In the demo today, we're gonna use those for gateway network connectivity, volume management, and more. So to reiterate, ConfigMaps are not for gateway backups; they're too big.
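As a rough sketch (not the session's actual files), an Ignition-oriented ConfigMap might look like this; the resource name is a placeholder, and the environment variables shown are ones documented for the official Ignition Docker image:

```yaml
# Hypothetical ConfigMap of environment variables for the gateway
# container; the name and values here are illustrative placeholders.
apiVersion: v1
kind: ConfigMap
metadata:
  name: gateway-env
data:
  ACCEPT_IGNITION_EULA: "Y"      # same variables you'd pass to `docker run`
  IGNITION_EDITION: "standard"
  TZ: "America/Los_Angeles"
```

A Pod can then pull these in wholesale with `envFrom`, or mount the ConfigMap as files, which is the typical route for the helper scripts mentioned above.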

Kevin Collins: Also in the config realm, we've got Secrets. So Secrets are, as you might imagine, sensitive information that you're gonna store separate from your other configuration. These are passwords, private keys for your certificates, API tokens, and things of that nature. Now, I'll note, additional action is necessary if you want a default Kubernetes solution to have your Secrets encrypted at rest. That's not the default, but there's plenty of solutions that help you get that done. Just kind of a reminder to make sure to apply security practices with regards to access to your cluster. Because in a default setup, if someone has access to that persistent state storage that I mentioned from the control plane, well, they have access to your secrets. But the good news is we separate those out from config, so it's kind of known where those live. In terms of Ignition, we're gonna use that for bootstrapping our initial gateway admin password, and we're gonna leverage Secrets as the holders for certificate key pairs. So private keys for our gateway network certificates, we're going to see that in action.
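A minimal sketch of such a Secret, assuming the admin-credential bootstrap variables from the official Ignition image; the resource name and values are placeholders:

```yaml
# Values under stringData are written as plain text here; the API
# server stores them base64-encoded (encoded, not encrypted) at rest.
apiVersion: v1
kind: Secret
metadata:
  name: gateway-admin
type: Opaque
stringData:
  GATEWAY_ADMIN_USERNAME: admin
  GATEWAY_ADMIN_PASSWORD: change-me   # placeholder; never commit real values
```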

Kevin Collins: So that's the configuration side. We have a few more resources to talk through. Once we're able to kind of define that configuration, it's time to define how we're gonna run our workloads. So for that, we have a Pod, and that is actually supposed to be like a little icon of peas in a pod, but it didn't come through. But containers in Kubernetes are members of a Pod. So you think of them like peas in a pod. Now, this is a little different from Docker, where we just talk about a running container. For the most part, a container is pretty synonymous with a Pod, but a Pod can actually have multiple containers. Because of this, all containers in a Pod are always going to be colocated on a single node. And if you do have other containers in your Pod, things that you might call sidecar containers, keep in mind that those share that pool of resources: CPU, memory, and storage. So keep that in mind when you're setting things like CPU and memory limits for your workloads. All of the containers in the Pod total up to that usage.

Kevin Collins: How do we use this with Ignition? Hey, this is where we get to run our Ignition container image. This is where we're also gonna define an init container as part of that Pod to perform some initial bootstrapping and configuration of our workload. This is also where we'll define the readiness. There's different conventions in Kubernetes to describe your application's health. One of those probes, as they call them, is a readiness probe. This is similar to if you've worked with Docker Compose, you might have seen the status of healthy alongside a container that corresponds to the Docker health check. In Kubernetes, it doesn't pay attention to that health check that may be defined in your image. I think the health check might not even be technically a portion of the OCI spec for a container image. So in Kubernetes, in a Pod spec, we have to define that readiness probe as a way to indicate that that container is ready to receive traffic from the outside.
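As a sketch, a readiness probe for an Ignition container could poll the gateway's `/StatusPing` endpoint on the default 8088 HTTP port; the timing values here are placeholder guesses, not tuned recommendations:

```yaml
# Probe fragment from a container spec; Kubernetes marks the Pod
# ready to receive traffic only once this endpoint responds.
readinessProbe:
  httpGet:
    path: /StatusPing
    port: 8088
  initialDelaySeconds: 60   # give the gateway time to start up
  periodSeconds: 10
  failureThreshold: 3
```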

Kevin Collins: Now, once we have a Pod specification, we can build on top of that, and the next workload resource we're gonna cover is a StatefulSet. So creating this resource ultimately creates Pods, but it's a higher level of management for those Pods. What it does is it gives us a consistent identity for a replicated set of Pods, where you might deploy a Pod and get a dynamic name. I know those of you that have used Docker have run a container, and then you look at the listing of containers, and you see that dynamically generated name. There's similar setups in Kubernetes. StatefulSet gives us a reliable and consistent identity for those Pods. So you might have a back-end workload. The first Pod is gonna be backend-0, and it always will be that. In addition to just that consistent identity, we also get a guarantee on the association for the backing persistent storage for that Pod because we don't wanna end up in a scenario where we have a Pod attached to the wrong storage volume. We want to make sure that that association is guaranteed. That's what StatefulSet will do for us. This is the go-to resource for stateful applications, as you might imagine from its name, things like databases and Ignition.
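A stripped-down StatefulSet along these lines (names, image tag, and sizes are illustrative, not the demo's exact manifest):

```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: backend
spec:
  serviceName: backend            # headless Service providing stable DNS identity
  replicas: 2                     # Pods are named backend-0 and backend-1, always
  selector:
    matchLabels:
      app: backend
  template:
    metadata:
      labels:
        app: backend
    spec:
      containers:
        - name: ignition
          image: inductiveautomation/ignition:8.1.33   # tag is illustrative
          ports:
            - containerPort: 8088
          volumeMounts:
            - name: data
              mountPath: /usr/local/bin/ignition/data  # gateway data volume
  volumeClaimTemplates:           # each Pod keeps its own guaranteed volume
    - metadata:
        name: data
      spec:
        accessModes: ["ReadWriteOnce"]
        resources:
          requests:
            storage: 10Gi
```

The `volumeClaimTemplates` section is what delivers the storage-association guarantee described above: backend-0 always reattaches to data-backend-0.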

Kevin Collins: How we're gonna use it. This is gonna be the wrapper for that Pod specification that defines how we actually launch Ignition. This is also gonna be where we match up that persistent storage to support our Ignition container, so the data volume. We've gotta have storage that remains outside of the container lifecycle events, to where if we have to recreate a Pod, we want that storage, we want all of our projects, our tags, our gateway state to remain; we define that here. So, so far, we've defined how we're gonna configure our running workloads, our Pods, and we've talked about how we're gonna actually deploy them. What about storage? So for that, we have a resource called PersistentVolumeClaim. This provides us a facility for allocating storage for our workloads. And within Kubernetes, I mentioned early on, there's container storage interfaces. You can define what are called storage classes to delineate a number of different types of storage resources that are available to your workloads, and this gives you some flexibility on the applications that you deploy. For example, you might have some storage in your cluster that's very performant and very high-speed that you wanna use for your Ignition container. But maybe you also have some slower storage that you wanna use for archival purposes or something like that. Different storage classes can be mapped to your workloads to keep that assigned and organized effectively.

Kevin Collins: There's access modes within Kubernetes that define a set of guarantees for how that storage is gonna be accessed. So we have things like ReadWriteOnce, which guarantees that a storage volume is only going to have a single entity attached to it. If we have multiple containers attached to an Ignition data volume, bad things are gonna happen. It's not gonna work right. So we use some of these guarantees, lean on those in Kubernetes to make sure our workloads run the way that we need them to. I talked about storage class already. In terms of Ignition, again, this is where we're going to provision and manage our data volume. This is gonna be also where we can leverage those other access modes such as ReadWriteMany, to provide a shared volume across multiple Pods in our application. So this might be where you could have some shared files like gateway backups or other things, a lot of uses for that. Again, we get to lean on the definitions in PersistentVolumeClaim. We're almost through all of these resources, but it's important to understand some of these and how they're gonna be assembled later. We're now ready to talk about networking. How do we get traffic into those running Pods? First resource, Service.
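A standalone PersistentVolumeClaim sketch for the shared-files case just described; the storage class name is a placeholder for whatever your cluster's CSI driver actually defines:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: shared-files
spec:
  accessModes:
    - ReadWriteMany          # shareable across Pods, e.g. for gateway backups
  storageClassName: standard # placeholder; depends on your cluster's classes
  resources:
    requests:
      storage: 20Gi
```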

Kevin Collins: So this is gonna be what provides access to those ports, what exposes the ports from our Pod. This is gonna be based on that readiness probe. So the Service is in charge of matching with a certain number of Pods based on a selector, and it'll be watching those readiness probes; any traffic that wants to talk to that Service gets routed to the set of available back-end Pods, based on that readiness. This also sets up in-cluster DNS resolution, so we can have name-based resolution, get away from those dynamically allocated IP addresses, and kind of raise ourselves up in the abstraction layer. There's various types of Services, everything from a plain in-cluster headless Service, which really just performs readiness checks and in-cluster DNS resolution, all the way up to load balancers: when you're in a cloud Kubernetes environment, you define a Service, and an AWS load balancer automatically materializes for you. You don't even have to do any work in the EC2 console for that. So wide variety of ways to get traffic into our back-end Pods. For Ignition, we're going to use this as a way to establish gateway-to-gateway communication for the gateway network.
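A headless Service sketch matching the hypothetical back-end StatefulSet; names and port list are illustrative, though 8060 is the default Ignition gateway network port:

```yaml
# Headless Service (clusterIP: None): no load-balanced virtual IP, just
# readiness tracking plus stable per-Pod DNS names of the form
# backend-0.backend.<namespace>.svc.cluster.local.
apiVersion: v1
kind: Service
metadata:
  name: backend
spec:
  clusterIP: None
  selector:
    app: backend
  ports:
    - name: http
      port: 8088
    - name: gan
      port: 8060        # gateway network traffic between gateways
```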

Kevin Collins: We're also gonna use this for our next resource, called an Ingress resource, for flowing traffic in from the outside into our front-end Ignition gateways of that scale-out architecture. So Ingress, this defines a route to a Service. So we've got our configuration, we've got our Pod in our StatefulSet that's managing the creation of that Pod, we've got a Service that's watching those readiness probes. The final step before we get traffic from the outside in is typically an Ingress. This defines that route. This resource, when we create it, pairs with an Ingress controller that's been deployed to the cluster. Again, back to that modular construction, your Kubernetes cluster might have Traefik as a reverse proxy, or it could have NGINX or any other Ingress controller. This resource gives us a common starting point to define a path to that running workload. We can also, in this resource, define TLS configurations to do TLS termination. This will help us set up encrypted communication to our Ignition gateways, and it's gonna use a Secret to store the private key and public certificate for that TLS operation. For Ignition and in our demo today, we're gonna use this to configure a reverse proxy to load balance to our replicated set of front-end gateways, and we're gonna do that TLS termination.
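A sketch of what such an Ingress might look like; the hostname, Secret, and Service names are placeholders:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: frontend
spec:
  tls:
    - hosts:
        - ignition.example.com
      secretName: frontend-tls    # Secret holding the key + certificate
  rules:
    - host: ignition.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: frontend    # Service in front of the front-end gateways
                port:
                  number: 8088
```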

Kevin Collins: Okay, so we've got a bit of a foundation. As you start to explore all the other resources, you'll see just how little we've covered so far. It seems like a lot, but there's so much more. But in the interest of keeping things moving, let's talk about our scale-out architecture. So this is going to involve a redundant set of back-end gateways, multiple front-end gateways, gateway network connectivity, and a TLS-terminated Ingress. So once you know about these resources, how do you put them into action? How do you actually go about defining these, talking to that API server in our control plane?

Kevin Collins: How do we assemble that? That's the next section that we're gonna cover. Now, there are two big tools that are typically used to build or to define and organize your Kubernetes applications. One that you may or may not have heard of is called Helm, and it's a great tool. We'll actually be using Helm in this demo as well. The other typical one is called Kustomize, and since it's Kubernetes, everything starts with a K because that's just how it is. We're gonna use Kustomize because I think it's a simpler way to kind of get started with defining your application. Ultimately, look at Helm, look at Kustomize as you're gonna build out your application. Choose is the one that you think works best for you. They both can achieve very similar goals. This is a tool for organizing this application that we're gonna build. So in our context here today, that application is a scale-out architecture of Ignition.

Kevin Collins: This lets us separate files into a directory, so that way we can keep things organized. And ultimately, what it'll do when we run that single command to build the solution is it will compile all of those into a set of final resources. These are gonna be the ConfigMaps, the secrets, the StatefulSets, the Services, the Ingress, all those resources that we've talked about. Kustomize is gonna build them all, and then we can finally send those to the API server, declare that target state and let Kubernetes do its thing to make that target state the reality that we wanna see.

Kevin Collins: One of the other things: once we define a base application, Kustomize is really good about providing the ability to overlay that base configuration for different environments. The example that you're gonna have to look at today has a base solution and then an overlay for AWS EKS, the Elastic Kubernetes Service in AWS. But you can have multiple overlays for maybe your dev solution. You can customize those with... Hey, Kustomize. I promise, I didn't think about that until now.

Kevin Collins: Okay, so building the solution, how do we actually construct these things now that we know a little bit about the resources that are gonna be involved? So the first one is ConfigMaps, right? Kustomize lets us define a ConfigMap generator that, again, I mentioned, lets you break things out into multiple files, so it has a mechanism to define these. And at the end of this, what we'll end up with are three ConfigMaps that are a combination of bootstrap scripts, files that we're gonna use, and environment variables for our Pod to launch our gateway. Secrets work a similar way, as a generator. That way you can have a file with your secrets. In the example that you're gonna come away with, there's a .example file, so that way you can create your own environment variable file and put what you need in there. It's generated when you build the solution, so Secrets are similar to ConfigMaps in that way. And then we have to build out our other resources.
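A sketch of a `kustomization.yaml` tying these generators and resource files together; every file name here is illustrative, not the session's actual repository layout:

```yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: ignition
resources:                 # plain resource manifests, one file per concern
  - backend.yaml
  - frontend.yaml
  - ingress.yaml
  - certificates.yaml
configMapGenerator:
  - name: gateway-env
    envs:
      - gateway.env        # KEY=value lines become ConfigMap data
  - name: bootstrap-scripts
    files:
      - scripts/prepare-data-volume.sh
secretGenerator:
  - name: gateway-secrets
    envs:
      - secrets.env        # created locally from a committed .example file
```

Running `kubectl apply -k .` (or `kustomize build`) against this directory compiles everything into the final set of resources described below.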

Kevin Collins: So these are going to be a set of YAML files. For example, we'll have a back-end StatefulSet and Service for a redundant gateway pair, we'll have a front-end StatefulSet that is for our N-count of replicas on the front end. We're gonna have that Service and associated Ingress for our TLS-terminated application load balancer, and then we're gonna have gateway network certificates that we'll use to automate gateway network connections that are fully secured, so that's pretty cool. Okay. So you're just referencing, in your Kustomize solution, all of the different YAML files that you have. So that way, again, you can stay organized.

Kevin Collins: This is a typical convention that I've seen that I like, but they can be named whatever you want them. And there's no restriction on having each of these files contain a single resource. You can have one file with all of your Services and one file with all of your StatefulSets. However you wanna organize it, it's up to you. Now, building the solution, I wanted to talk a little bit about gateway network connectivity.

Kevin Collins: One of the questions that I see quite often is, how can we better automate gateway network configuration and have it also be secure in a best-practice implementation? This solution demos the use of one of those pieces of the Kubernetes ecosystem called cert-manager. It's an awesome tool for automating the provisioning of certificates in your cluster, and this is what we're gonna use to provision our gateway network certificate authority that we're going to take and have each of our gateways within our solution trust, and then we're gonna actually issue certificates for the gateway network client connectivity. And we're not going to have to run any OpenSSL commands, so you're welcome.

Kevin Collins: First, we're gonna define a certificate that'll be used for signing, and that's this up here. So this is like our root certificate. And then we're gonna define an issuer that's gonna issue the certificates for our gateway network front-end and back-end systems. That's this one right down here. In terms of building it out, we define a Secret that we're gonna use to package up the keystore that we're gonna load into our Ignition container using our init container.

Kevin Collins: We're gonna define certificates for our different gateways. And cert-manager, what it does is it extends the Kubernetes API, and it creates these resources that we can use, so certificate is not one of the core Kubernetes resources, the ones that we talked about, ConfigMaps, etc. Cert-manager is installed in the cluster and then provides its own custom resources that it looks at, its own desired state that cert-manager controllers turn into reality. So you can see kind of the same pattern. Ultimately, this produces a keystore that we're gonna use to create a certificate, bundle it up, and then inject that into our gateway. So when you kind of assemble everything, it ends up looking... This is just a snippet of the StatefulSet resource. You can see our init container up here. We're gonna use some of those helper scripts to factor things in a way that's ready for our Ignition gateway. You can see those invoked right here.

Kevin Collins: In our StatefulSet, we also define those volumes. We talked about persistent volumes. We can also map in those ConfigMaps and Secrets as folders within our Pod, and we can define those in a volume section here. And then we can mount those volumes at locations within our containers in our Pod. A lot of new terminology, but once you do it enough, you start to get the hang of it. So yeah, we'll use those volume mounts to define where those resources get placed in our running containers. That helper script, what it does in the end, finalizes some of the placement of that keystore that's generated for us. And the result is a fully secured and automatically approved gateway network connection between all of our front-end and back-end redundant pairs out of the box, no additional configuration necessary.

Kevin Collins: And it's secure, mutual two-way authentication and encryption with TLS. This is just one example of some of the flexibility in these application definitions in Kubernetes that you can enable. So once you build that base application with Kustomize, we talked about extending that. A lot of the way that you do that are through something called annotation. These Kubernetes resources that we talked about, they're not meant to encompass every potential configuration aspect across all conceivable Kubernetes cluster implementations across space and time.

Kevin Collins: So it has a construct called annotations, and that's a common pattern to augment a resource with environment-specific configuration. For us, we're gonna use annotations to customize our base definition for use with EKS. So let's create an overlay. That's what they call it. It looks like this. This is actually the entirety of the overlay, so this is the only change that we have to do to deploy this and get this running in EKS. And you'll notice we use a concept called patches. And ultimately, what we're doing is we're adding some Amazon-specific annotations to our solution. And what these are specifically doing is these are the connection to tell AWS that for this particular Ingress resource, we want to spin up a proper EC2 load balancer, application load balancer. So this is how we establish that connection. We can configure it; we're doing things like session stickiness.

Kevin Collins: Let's see. I think I have all that there. Yeah, session stickiness with the load balancer cookie; we're doing things like SSL redirect, and we can also integrate with things like Amazon Certificate Manager. So we can procure a certificate in AWS. Now granted, you can also configure cert-manager to do this automatically with Let's Encrypt as pluggable things for all of those different certificate engines. In the demo, I'm using AWS Certificate Manager, but this is where we configure it and SSL redirect.

Kevin Collins: So, all right, let's get to a demo. This is just a command-line-based demo, but the first thing that we're going to do is actually create an entire Kubernetes cluster. We're going to use a tool called eksctl. What this is going to do is it's going to spin up a Kubernetes control plane in EKS. It's going to automatically create a set of EC2 virtual machines to serve as your data plane. And it's going to do all that through CloudFormation. So here, after just a few minutes, it's sped up slightly. We have a full cluster of two worker nodes. Here, you're seeing me use the Helm tool to install cert-manager and also the AWS Load Balancer Controller. This is an example of using Helm. A lot of these, once these are available, it's just a single line to install them into your cluster.
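The "small configuration file" eksctl consumes is a ClusterConfig; a minimal sketch (region, names, and sizes are illustrative) that `eksctl create cluster -f cluster.yaml` would turn into the CloudFormation stacks described here:

```yaml
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  name: ignition-demo
  region: us-west-2
nodeGroups:
  - name: workers          # becomes an EC2 auto-scaling group
    instanceType: m5.large
    desiredCapacity: 2     # the two worker nodes seen in the demo
```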

Kevin Collins: Here, we're going to create a namespace to organize all of the resources that we're going to create today. This here is us bootstrapping that gateway network certificate authority. And then finally, this is how we apply our Kustomize solution. It builds, compiles, and then it's communicating with our API server to create all of these resources. So there's some familiar faces in here, our ConfigMaps, Secrets, StatefulSet. Here's some of those certificates that we created or defined with cert-manager, and ultimately, our Ingress. And then, in terms of that, you wait a few minutes. Again, Kubernetes is all about declarative state. So it's going to see that you want to run this workload, it's going to pull the image after it schedules that workload to one of your worker nodes. And ultimately, it's going to let you put in your URL and get you a nice TLS-encrypted channel to your front-end gateways. Log in. Out of the box, you've got gateway network connections.

Kevin Collins: They're all approved, and they're all using TLS. And you've got that between your front-ends and your redundant back-ends. What you can't see in this picture is the other front-end gateway that's also been replicated out. So that can be how easy it is, once you've built out an architecture, to deploy it. You can quickly deploy these and get them online very quickly. So as a recap, what we did: we created a CloudFormation-backed EKS cluster using eksctl, a single command and a small configuration file, and we've got an entire system that we can deploy applications to. We deployed an overlay of that base Kustomize solution with a few tweaks for AWS, gave it what it needs to create that load balancer, and we used AWS Certificate Manager. After you've deployed this, if you go poking around in your EC2 console, you'll see all of these things that maybe you're even familiar with from working in AWS, things like your auto-scaling groups. These are your worker nodes, your volumes for persistent storage for both your nodes and your Ignition gateways.

Kevin Collins: You have a load balancer that was created from that one Ingress definition. Target groups that are tracking the health of that Service and feeding that load balancer with the instructions it needs to route traffic to healthy nodes. At the end of the day, you know, if you want to clean up against this, you should wipe clean the resources you've created before you delete the cluster, so there's some helpful hints in here. You don't want to end up with orphaned resources that you have to then go manually clean up. But doing that can be as simple as deleting the namespace that you created. All those resources that we deployed, if you delete the namespace, it cascades through and removes all of that, which will then translate to the removal of those associated resources in AWS. So you don't have to even touch AWS console as great, as it may be. So here is EKS CTL, deleting our cluster, scaling down those target groups to terminate our worker nodes, and then ultimately clean up the cluster. And now we're back down to $0 on that billing meter.

Kevin Collins: Okay. So the road ahead, what's next? All right. So if you haven't been able to tell, I'm excited about this technology and I'm excited about that road ahead. So things that we're thinking of: a Helm chart for easier customization of an Ignition deployment. Like I said, you're gonna walk out today with access to that very Kustomize solution that you saw me deploy. Helm is, I think, a better way to package an application for us as a vendor to deliver to you, so we're looking at that. We're also looking at what a Kubernetes operator might look like. We talked about cert-manager and how it creates those custom resource definitions. We're not blind to the fact that there may be room for an Ignition operator, where we have custom resources for some of the things that you might be familiar with, like an Ignition gateway, or a database connection, or device connections, things like that. We're looking at what that might look like in Kubernetes. And then ultimately, we do want to provide some more examples of Kubernetes deployments, again, to give you the foundation that you need so that when you need to deploy in Kubernetes, you're not struggling for some of those core foundations to have a successful run right out of the gate.

Kevin Collins: If you have some ideas, if you've already messed around with Kubernetes, let me know. I'd be interested in potential things that you have in mind. Okay, helpful resources. I have a few links in here, but the bulk of the resources is actually in a GitHub repository that I'll share with you as well. Let's just pop through here. So there are some command-line utilities that you might have seen in the demo that augment the standard kubectl that we use for interfacing with the Kubernetes API. There's shell completion. I do recommend at least spending a little time with Kubernetes on the command line, but doing it without shell auto-completion is just an exercise in pain. So turn that on. Do yourself a favor. And then, once you get the hang of it, there are plenty of visualization tools, graphical interfaces you can use to see what resources are deployed, interact with them, patch them, edit them, do a number of things. Some of those are here. There's a lot more. Rancher is a popular one. I like that one. Cert-manager, I've got a link for you to take a look at that.

Kevin Collins: That's how you can quickly integrate with things like Let's Encrypt to automatically procure certificates. And then, if some of this technology sounds exciting, but you maybe don't want to manage it yourself, if you see the presentation today, go talk to 4IR Solutions; that's one of our solution partners. They leverage this technology to provide solutions, and it's not always about using a technology because it's the hot thing. It's about: does that technology help you solve problems? They're great at leveraging it to solve problems. So go talk to them. They've got a booth. I'm sure they'd be happy to discuss your interest, as would I. Q&A, okay, so a few things while you kind of scan that QR code if you're interested. That way, you don't have to wait for the slide deck to get some of it. We're gonna be doing Q&A up front here. There are a couple of mics. So if you are able to come down and ask questions down here, feel free to pop in. If that is problematic, Leeda will try her best to run a mic to you. So I'll open up the floor to questions. I'm sure there are some. Looks like we've got one to start here in the middle.

Audience Member 1: Hello.

Kevin Collins: There we go.

Audience Member 1: In your opinion, is this the best way to achieve high availability?

Kevin Collins: It is a potential way to achieve it. I think I had mentioned at some point early on in the presentation that Kubernetes is not without a significant learning curve. So there are a lot of things to learn. I think that once you get over that hump, you start to recognize some of the power. So it is a way to deploy a more highly available solution. The Ignition product itself, we still have some dependencies on session management. So it's not necessarily the same experience as a full cloud-native app that's fully distributed. You still have some of the same constraints as a regular deployment of Ignition. Again, it's a stateful application. Just running it in Kubernetes doesn't automatically give you magical capabilities. But it is a way to achieve that. And I think it does provide a lot of abstractions to minimize some of the minutiae that you'd otherwise have to deal with in doing that.

Audience Member 2: I don't actually know if this is an issue or what's being done about license management within Kubernetes.

Kevin Collins: Say that one more time.

Audience Member 2: I just, I've been playing with Ignition and Docker a lot and everything like that. But issuing licenses, like the actual Ignition license.

Kevin Collins: Oh, very good question. Okay, yes. So like other containerized deployments, Kubernetes is another way to deploy containerized Ignition. It does require our leased activation. Now, there is a feature coming soon where we're going to have Cloud Edition as a container image that'll have some automatic integration with some of the metering services within the cloud providers, which will provide a licensing experience when you're running in those environments. But you still do have to deal with the eight-digit license key and the leased activation to leverage this method. And your mention of Docker, I do want to remind you that this isn't like Docker 2.0 or anything like that. All of the workflows that you have been learning about Docker and Docker Compose are still very much relevant. And you want to continue leveraging those in your local development, because there's much less overhead than spinning up Kubernetes clusters on your local laptop and doing development. So you don't have to throw away what you've learned. This is another exciting technology that's more geared towards those production deployments of Ignition.
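As a hedged illustration of wiring leased activation into a Kubernetes deployment, the license key and activation token can be held in a Secret and surfaced to the container as environment variables. The `IGNITION_LICENSE_KEY` and `IGNITION_ACTIVATION_TOKEN` names follow the official image's documented conventions, but verify them against your image version; all names and values below are placeholders:

```yaml
# Placeholder Secret holding leased activation credentials
apiVersion: v1
kind: Secret
metadata:
  name: ignition-license
stringData:
  license-key: ABCD-1234      # eight-digit leased license key (placeholder)
  activation-token: changeme  # activation token (placeholder)

# Container env excerpt, as it might appear inside the StatefulSet Pod template:
#   env:
#     - name: IGNITION_LICENSE_KEY
#       valueFrom:
#         secretKeyRef: {name: ignition-license, key: license-key}
#     - name: IGNITION_ACTIVATION_TOKEN
#       valueFrom:
#         secretKeyRef: {name: ignition-license, key: activation-token}
```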

Audience Member 3: In terms of production deployments, I guess, what are your thoughts on leveraging Swarm or Compose stacks versus Kubernetes, and which one would make more sense for on-prem-focused architectures?

Kevin Collins: So yeah, good mention of the fact that there is this thing called Docker Swarm, which is a cluster-based implementation of a similar concept to Kubernetes. In fact, it might have even predated Kubernetes; I'd have to research that. But Docker Swarm's been around for a long time. It's already in your installation of Docker Engine, on Docker Desktop or wherever. And it is also a clustered method to deploy containers. It happens to leverage something very similar to the Compose spec for deploying those stacks and services into a clustered form. One thing that I think is a challenge with Swarm deployments, and it's always the challenge, is storage. How do you do distributed storage? Within Kubernetes, for running a local cluster, there are things like Longhorn. That's what I'm using at home, because I'm a huge nerd and I run Kubernetes at home. But that's a way to provide distributed block storage in cluster. And it's super easy to set up.

Kevin Collins: It's another Helm install. And you're off and running with distributed storage across a cluster of nodes. So that's a challenge in Swarm. There is some rejuvenated energy around Docker Swarm, which was kind of... Docker Swarm was kind of neglected for quite a while. But there is some renewed energy. So maybe we'll see more of that Swarm versus Kubernetes. Competition's good, right? So I think it's still viable. But just keep in mind some of those challenges, like storage, is usually the big one that you hit first. So I happen to like Swarm. But it has some shortcomings, pros and cons. Other questions?

Audience Member 4: I'll ask this one again, just since we have time. It seems like it would be way more helpful if there was like a licensing place where the containers could go acquire, check out, and check in licenses on demand. Is that being thought about?

Kevin Collins: It's being asked for more and more, which always helps drive discussions. So right now, there are no other plans for that specifically. We do have some actual in-progress things on the roadmap for bridging the gap a little bit. But it's not yet an on-premise license server that you can deploy. But keep asking. It's not an uncommon question. It's probably one that we'll revisit and have some more discussions on. Yeah, go ahead.

Audience Member 5: So I know you mentioned StatefulSets here. But if you're using something like a Deployment for two front-ends, so if one fails, the other one's immediately there to take over, do you need to actively license both of the separate front-ends if one is purely a true redundant backup for the primary?

Kevin Collins: You do have to have licenses for both of those at the moment. There is one ticket in our backlog that I'm aware of that involves enabling a setting that will automatically deactivate a leased activation license on a clean shutdown. And that will allow you to have better usage of those licenses if you scale up and scale down. But the reality is that, still today, you have to have that count of sessions in your license. There's some movement in that area. I don't have anything really concrete to share that might make that, again, a little better experience and enable some more differentiated deployments without having to pre-purchase all of those seats. We'll see as that gets developed more.

Audience Member 6: Yeah, you described how to build an architecture from scratch. Can you describe what a user might do? And I know you mentioned earlier this target state situation. What a user might do if they wanted to, say, add another front-end gateway that was also TLS-certified?

Kevin Collins: Yeah, so in terms of extending one of these architectures, I should mention that what you saw here today solves the replication of how we launch a container, but it doesn't solve the replication of the application content within that container. So you're still going to want to lean on tools like the Enterprise Administration Module, or customized derived images that prepackage your application state into a gateway backup that is then the definition for what's launched. So you have to keep that in mind. But in terms of scaling out, it can be as simple as a command through kubectl: scale that StatefulSet to three replicas, or scale it back down to two. So hopefully, that touches on your question. There's one more question up here before we come back down.
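That scale-out command, with a placeholder StatefulSet name and namespace, would look something like:

```shell
# Scale the front-end StatefulSet out to three replicas
kubectl scale statefulset ignition-frontend --replicas=3 --namespace ignition

# And back down to two
kubectl scale statefulset ignition-frontend --replicas=2 --namespace ignition
```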

Audience Member 7: So are you thinking about moving all of the gateway configurations into the arguments that you can put in the YAML file?

Kevin Collins: Yeah.

Audience Member 7: Great.

Kevin Collins: One more down here. We've got a mic coming for you. There we go.

Audience Member 8: I swear, I'm not picking on you on licensing stuff or whatever. But I remember last year, when Cloud Edition was announced, there were conversations about being able to tag certain instances as production, dev, QA, or whatever. Is that conversation still happening? And tying it back to licensing, because there was conversation about maybe how a dev license might be treated differently from a production license.

Kevin Collins: I'm actually unfamiliar with what you're describing there. I'm aware of the discussion surrounding our different deployment modes that we're targeting for 8.3. And that's still something that we're wanting to do. But I'm not 100% sure what you're referencing there. So we may have to talk later.

Audience Member 8: Yeah, it might have been on a side conversation, too. I can't remember.

Kevin Collins: Yeah. Yeah. Nothing that I can recall on that note. And any other questions? We've got a couple minutes left before... One here, and we've got one. Okay. So we'll get both of those.

Audience Member 9: So what is the reason for scaling on the front-end and then doing redundancy on the back-end?

Kevin Collins: It's more about that high availability story, so being able to have some of that resiliency against a failure. So one of the things that you'll see in these resources, in our StatefulSet definition, you can prescribe what's called Pod anti-affinity. The scheduler, that's part of our control plane, can look at that definition. And in this case, it says, when we schedule that back-end, if we have the redundant primary Pod on worker node one, the redundant backup Pod has to be scheduled on another physical node in our data plane. And that way, if we have a failure, the redundancy takes over. Same on the front end. We can prescribe those guarantees, so that Kubernetes is going to schedule our workloads such that we have better resiliency against failure. So in terms of the example, I just picked the scale-out. But I think with any architecture, you're going to want to have those considerations in mind. I had one question over here.
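A minimal sketch of that anti-affinity rule, as it might appear inside a StatefulSet's Pod template (the label values are placeholders):

```yaml
# Pod template excerpt: never co-locate two back-end Pods on the same node
affinity:
  podAntiAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchLabels:
            app: ignition-backend
        # Spread matching Pods across distinct worker nodes
        topologyKey: kubernetes.io/hostname
```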

Audience Member 10: I don't really have a question per se, but I'm really excited about this work. I think it's very powerful, and I appreciate the work that has gone into making these tools available, because this has so much potential to be very impactful.

Kevin Collins: Me too. Yeah, I see a huge story in the future in this realm. And I think the technology in general, Kubernetes, is not brand new. It's been around for quite a while. And I think we're at that point now where we're ready for the industrial space to start getting our hands into it. Virtualization lagged in our space; we had a table talk at some point where we were talking about this. Containers were used prevalently in the IT sphere first, and then we started doing it here at Inductive. And then, I think, distributed deployment with Kubernetes, cloud-native applications, it's time for us to start looking at that too. So I share your energy there. Okay. So that's going to be it. I do have one more slide for you, which is this one. Thank you for coming. And enjoy dinner.

Posted on November 20, 2023