Git Serious: Hybrid Cloud Deployment with DevOps
49 min video / 46 minute read
Joe Dolivo, CTO, 4IR Solutions Corp
Jean-Paul Moniz, Technical Services Coordinator, Cameco Fuel Manufacturing
With Digital Transformation becoming more mainstream, we continue to see an increased adoption of enabling technologies like the cloud. But not all companies are willing or able to go "all-in" on cloud just yet. In this session, 4IR Solutions’ CTO Joe Dolivo will walk you through how to use Ignition to track and promote changes across multiple environments, no matter where they're hosted. Operational Technology leadership at Cameco Fuel Manufacturing will also walk you through the plans for their own hybrid cloud deployment, intended to run heavy production workloads on site while leveraging the cloud for remote site workloads, testing instances, backups, and monitoring.
Joseph Dolivo: Thank you guys. Nice to be here. There we go. I heard the volume come up. So let me get my speaker working here. There we go. So this is the title of our session. And hopefully the humor resonates with you folks. But we do wanna “Git Serious,” and there's been a lot of discussions, especially in today's and yesterday's sessions from Kevin Collins, from Kevin [McClusky] and Travis [Cox], talking about cloud technologies, talking about enterprise deployments. And so this is really where we have the opportunity to take some of these fundamental principles enabled by and supported by Ignition and really bring them to reality for production-grade workloads. So we already kind of did introductions, so we're gonna dive in and go through our presentation, which is broken up into a couple of different sections.
Joseph Dolivo: JP is gonna start out talking about basically why DevOps, so why should you care and what's important? Giving some information about the company, some of the kind of the business justification and value proposition. And then we're gonna jump into how to do it. So we're gonna be letting you in under the hood a little bit as to the products that we've developed so you can kind of do it yourself, understand what's involved in that. We will have time, we should have time for questions and things like that. There are some areas where, again, we can get very, very technical, so we're gonna try to stay at a certain high enough level unless the questions kind of bring us to that point. So without further ado, there's a bunch of content. So I will change the slide and give it over to Mr. JP.
Jean-Paul Moniz: Thanks, Joe. So Jean-Paul Moniz, Cameco Fuel Manufacturing, and I'm the Technical Services Coordinator there. So what is Cameco Fuel Manufacturing? We're a business unit of Cameco, one of the largest uranium producers in the world. At Cameco Fuel Manufacturing, we supply nuclear fuel for the Canadian reactor fleet. Canadian CANDU reactors fuel online. So the fuel doesn't just spend and then you shut down the reactor and replace it; it's actually fueled online constantly, daily actually. So we manufacture the fuel bundle for those reactors. And we also manufacture the reactor components for the CANDU market and other reactor markets and whatnot. So in that business, we have two manufacturing facilities. We have a facility in Cobourg, Ontario, that's a metal fab shop that produces all the components that go into a fuel bundle assembly. And then in our Port Hope facility, that's where we actually take uranium powder and pelletize it, put it in the assembled fuel bundle, and then send the fuel bundles to our customers.
Jean-Paul Moniz: So there's actually some how-it's-made videos on our full process, from mining uranium all the way to packaging up the fuel. And so I just thought we would take a second and show you specifically just the fuel manufacturing portion of it.
Video Narrator: The chemical processing has also caused the uranium to change color again. It's now a fine black powder. Using several tons of pressure, tools shape the uranium dioxide into pellets. A revolving wheel with protrusions guides the pellets into a channel conveyor. The conveyor takes the pellets into a furnace. Over 24 hours, the heat removes pores in the pellets. The pellets shrink, increasing the density of the uranium. The particles fuse together and harden into a ceramic. Here is a pellet before and after baking. A robot arm now loads the pellets onto a tray and levels them. A conveyor moves the tray forward. Ahead, another robot places zirconium fuel tubes on a rack.
Video Narrator: Zirconium is a metal that's highly resistant to both heat and corrosion, but neutrons will pass freely through it during the fission reaction. The rack of tubes meets up with the tray of incoming pellets. A robotic loader pushes the stack of 30 pellets into the tube. Another robot delivers the rods one at a time to an automated welder that caps the ends. The next robot retrieves the completed uranium fuel rod and transfers it to an assembly fixture. It arranges a total of 37 rods in an upright position within the fixture. After the rods have been welded and the bundle capped at both ends, a robot transfers it to a scale. This weigh-in confirms that there is the correct amount of uranium in the bundle. Prior to burnup in a reactor, the amount of radioactivity emitted by nuclear fuel bundles is very low, and they're safe to handle as workers prepare them for shipping. They're now on their way to the power plant where they'll be sure to generate a reaction.
Jean-Paul Moniz: So just some quick stats. We manufacture about 35,000 to 50,000 fuel bundles a year, and that's the top-level assembly. In order to do that, we use about 1.6 million individual elements, lots of closure plugs, we've got three million of those, and all the uranium pellets that go inside, we've got 50 million of those. And we do that with about 220 employees. And what we like to say is that we're all focused on safely energizing a clean-air world, and we're all supporting the climate change initiatives. So, Ignition at CFM: we've been a longtime user of Ignition, and we adopted it right in 2010 for OEE applications. It came in with the [Sepasoft] OEE [Downtime] Module. We quickly adopted the OEE Module and then matured through the module's lifecycle, from version one to version two, and now we're on version three, full enterprise. 2019 was the first time we deployed the MES Track and Trace [Module] on one of our manufacturing processes. And that was fully integrated from the equipment all the way up through the system. So as materials are produced on the machine, genealogy is automatically collected and tracked, and the human interventions have largely been automated.
Jean-Paul Moniz: And then right now, in 2022, we're just completing full implementation of MES Track and Trace across both of our plants and at the enterprise level. So this is what our current architecture looks like, and most Ignition users will probably be very familiar with this architecture diagram. It's right off of IA’s website, but this is literally our exact architecture right now. So why are we even here talking about DevOps in the cloud? Well, fundamentally, in regulated industries, whether it's food and drug or nuclear or any other highly regulated industry, things like management of change and change control are pretty important to make sure that we're operating normally and we're not inducing problems into the system. And then also, as you scale up, implementing a system in one plant is one thing, but when you're implementing it in another plant and then you're integrating to the enterprise, at that scale it becomes pretty complex and complicated to manage.
Jean-Paul Moniz: And then when you're sitting there looking at updating module updates, OS versioning, operating hardware versioning, all that kind of stuff, it becomes pretty complicated to sit there and manage. And then the infrastructure to sit there and support all of this stuff... We're a manufacturing company and it is a little bit different in the processing industry, but if you sit there and take a look at a lot of our infrastructure, a lot of our infrastructure is only really there to sit there and support the MES-type activities. So things like Active Directory servers, update servers, antivirus servers, all that kind of stuff. So what we're looking to do is sit there and say, how can we use cloud-based technologies, containerization, all that kind of stuff to eliminate the need for all of that infrastructure? Because that infrastructure is really just there to support the use case, right? And from an OT perspective, we want to be focused on providing value down on the plant floor. We don't want to sit there and worry about “I gotta update my operating system or I gotta apply a patch or I have to sit there and update the Ignition system and whatnot.”
Jean-Paul Moniz: And so what we're talking about today lends really well to managing those issues. And then the last thing, I mean, in what we're talking about today really is helping lay the foundation for a very popular topic right now with Digital Transformation and whatnot, and that's IT/OT convergence, right? And so getting the IT groups and getting the OT groups talking the same language, using the same tool set, trying to work towards the same goals and whatnot. So what problems are we actually solving by doing this? Well, using a technology like it, it has been around in the software industry forever. And using that style of version control and applying it in the OT space is something that hasn't commonly been done in a widely adopted manner. What 4IR is bringing to the table with the service that they're managing is helping bring that in. And I think you're probably familiar with other things like… Trying to do that with PLC version control. So using those IT-grade tools and practices and bringing them into the OT space is helping align OT more with IT. The ability to duplicate environments in the OT space, sit there and take your production environment and almost try to copy it one for one, like into a QA environment is very difficult, right?
Jean-Paul Moniz: This technology and looking at how you can sit there and architect your systems with current technology allows us to get very, very close to that one-to-one relationship, right? And bringing in ideas of unit testing and whatnot, again, brings OT closer to IT, right? And then cybersecurity. I mean, everybody is talking about cybersecurity these days, right? And you sit there and you say, what is a lot of the focus on the OT side when it comes to cybersecurity? And I'm going to sit there and say, one of the biggest things is OT assets that can operate the same way as IT-grade assets that aren't getting updated, that aren't getting patched or can't run at the latest version. So using these technologies allows us to sit there, test updates quickly, and then talk about the idea of promoting production in a much more rapid manner than traditional OT groups have been doing in the past. And then also with using technologies like Ignition and then using containerization and then whatnot, we're also helping reduce our attack surface from a cybersecurity standpoint. So now we're removing operating systems, right? We don't have those operating systems in the environment anymore.
Jean-Paul Moniz: We are using... Through Vision and through Perspective, we can now sit there and reduce back attack vectors from our HMIs because now we can just sit there and run our HMIs on a plain Jane Windows 10 environment that's fully up-to-date, fully patched all the time, right? So this helps reduce the burden from a cybersecurity standpoint. And then mainly just maintaining high availability through using these hybrid and edge architectures where you could sit there and run your mission-critical workloads down at the edge on-prem, right, in a sort of a containerized cloud-based environment where you're using the same control plane that the cloud is using.
Joseph Dolivo: I think it's amazing to think about... Oh, sorry. I cut you off. Go ahead. I was just going to say it's amazing to think about: historically you would have these outdated systems, and you could get away with it for a while, having a Windows XP box running your HMI software, because all of these systems were siloed. They weren't connected to anything else. And nowadays, especially when you want to start removing those data silos and aggregating data up into the cloud, the security posture of all of that becomes all the more important. And so some of the solutions that JP was talking about, or that we are going to talk about, become ever more important and necessary.
Jean-Paul Moniz: So in our future architecture, what we're looking at kind of looks the same, but it's not at the same time. We're still looking at having on-site Ignition instances so we can keep that mission-critical piece, because quite frankly, in our systems there's lots of stuff going on, and there will be that low-latency, high-availability type offering down at the plant, dealing with connectivity issues and all that kind of stuff. But what we're going to end up doing is taking that infrastructure that's normally running, you know, virtualized, but on metal right now, containerize it, and put it on an edge device. All the cloud vendors now offer edge devices that can run these workloads. So we'll run those workloads at the edge, and then up in the cloud, build out our development environment, our QA environment, and then also the production environment for the enterprise piece, and be able to get that full end-to-end development/QA/production-style environment. So we can do those updates, test them in QA, make sure everything is working okay, and then load them into production in a much more rapid manner than we were able to before.
Joseph Dolivo: Okay. So that's kind of going through the why. And I think JP has been incredibly, you know, kind of on the cutting edge, let's say, with some of these technologies and how we can start to use them and adapt them and adopt them. And it's something that is really aided by the architecture, the flexibility, and the improvements that Inductive has been making to the Ignition platform. You've heard a lot about it this week. You've heard about Cloud Edition. You've heard about the licensing changes. So that's enabling a lot of what we're going to be talking about. And we've done some of this; we've product-tested some of this. The goal, of course, is to tell you how you can do some of this yourself. And these kinds of enterprise deployments are really going to require it when you want to “git serious.”
Joseph Dolivo: So we're going to talk a little bit about the why. And this is kind of the objective of this session, which is to really understand the best practices. So some of this is stuff that you could find. And we've got some references later on that I can reference, referring to, but I'm just going to talk about some of the best practices guides that Inductive has put out. So we're going to kind of summarize some of that and also augment this with what we've seen actually doing this at scale in the cloud in production with a number of different customers, including the monitoring piece.
Joseph Dolivo: So that's the goal that we want you to get out of this session. And this is a breakdown of the Ignition application. And so if you think about Ignition... I'm going to keep turning this way. So there's kind of three different layers that you can think of. This is a little bit simplified. A couple of these, I've got some slides that are on where we're going to add additional detail to it. But if you think about the core, so this is basically Ignition itself. This is the installation files, for example. This is going to include the modules that you're plugging in there, like from Sepasoft or Cirrus Link, or if you have got custom modules. Also the licensing. So that's if you think of that as kind of the core, we'll talk about why that's important, but that's basically going to be common across all your environments. It's going to be pretty much the same. If you go a little up, now we're talking about gateway scope. So the Ignition gateway has certain settings, configurations, things like tags, and which are shared across all projects.
Joseph Dolivo: Tags is a key one. And again, that's the one in particular we'll come back to. There's nuances to that. Images as well. Images that are stored inside of the gateway, inside the image management tool, or you can access from the designer. And then just the general gateway configuration. So most of the stuff you would configure from the gateway webpage, database connections, and things like that, those are all kind of shared. And then one step up, you have project scope. So typically a gateway will have one or more projects. And so unique to those will be things like views, project-level scripts, all the other files that you have in there that you'll tend to see and in finishing projects. And so what we're trying to do is to sort of break down these different levels and then talk about, well, how do we encapsulate these scope items? How do we figure out how these are contained? And then based on how they're contained and how they're stored, we can talk about, well, how do you manage change for each of those separately? So for kind of the core Ignition, we're looking at using Docker. Kevin and Travis talked about other ways of doing that.
Joseph Dolivo: You could have sort of standard infrastructure as code, images that you deploy, and things like that. But you basically want to have a common installation file, if you will. Containers are a great way to do that. At the gateway level, you have of course your gateway backup file that you can generate from the gateway configuration page, or from the Ignition designer. And then at the project level, what this is, again our recommendation is that you use Git. And so there's a number of different ways to do this. We'll talk about this in a little bit, but basically having a branch to represent a particular environment that's going to be scoped to one or more particular projects. So, that's all core Ignition stuff. Of course, real applications have things outside of Ignition. JP mentioned software for doing PLC version control like Copia databases. Any real project that is probably gonna have a database, especially if you're doing historical data access, or you're doing any of the other, say Sepasoft modules that require it. And so there's techniques we could talk about maybe in the Q&A section or afterwards about how you can version control some of those things.
Joseph Dolivo: A theme here is breaking out, let's say the schema, which is gonna be the definition of tables and things like that, which would probably be the same across all environments from the data itself. So you're not gonna wanna have production data in your test system necessarily and vice versa. So that's a bit of a kind of an asterisk, if you will, we can dive into. But when you're looking at a real holistic system of which Ignition is a part, there is certainly more to consider. So why do we even talk about this? The goal is of course to minimize differences across environments, and then identify how those differences can be captured, and then how can we track all of those? So, we wanna basically figure out, well what should be different between environments? If I'm talking about the dev system, I may not have access to a PLC, or sensors, for example that's generating that data, especially if I happen to dynamically spin up a dev system in the cloud, for example.
Joseph Dolivo: So I may have PLC simulators, tag simulators, things like that from my dev system. My database connections may be different, they should be different. I'll have development database, I'll have test and production database. Other systems that I'm working with. So we do a lot of work with SAP integration. SAP folks will typically have dev QA as production SAP systems as well. So those are things that are gonna differ for every environment that you deploy. And then identity providers as well. So you may... Again, if you're running something on premise, you may not have access to Azure AD or Okta or something like that. You may be using a local LDAP server for authentication, or if you wanna also differentiate between the roles, and the rights you're gonna give individuals.
Joseph Dolivo: 'Cause maybe somebody's got admin access when they're doing development work, but you don't want them to have admin access to the production system, for example. So the idea is to kind of identify all these, keep those differences encapsulated and minimized, and then use that as a foundation for the rest of the discussion. So for the core, we really wanna look at using kind of a common installation that's going to be the same across all environments. And there can be some differences that we can capture in things like environment variables, which again, is a very nice tool that Ignition makes available. With the Docker containers, you can provision things like gateway network connections, you can provision even licensing. Those are basically variables that you can set when you're spinning up or spinning down an instance.
Joseph Dolivo: So we can capture those differences there when we provision system. For gateway scope, that is ultimately now some of this will change, of course with Cloud Edition, and some of the refactoring that was talked about for 8.3. But basically all that scope is captured in the gateway backup file. Of course gateway backups are used for more than just capturing the gateway scope. They're used for backup, and disaster recovery scenarios. So you generate a gateway backup file, it's gonna include all of your projects. Our recommendation, which also is supported by Inductive, is to actually, when you're restoring one of those gateways, if you're gonna take a backup of production environments, and you wanna replicate that in other environments, you can delete those projects out of it. So we're basically separating off project scope from gateway scope in doing that. And then for project scope, we really want you to treat these as Git repositories.
Joseph Dolivo: So, there's still some files in projects that are binary files if you will, you don't get the nice friendly text diff, if you will, but you can still track them in version control. So that's the recommendation that we found a lot of success with doing that, with the project-level scope items. So, what does this actually look like to maintain these differences? So ultimately, when you're making changes that are gateway scope, things like tags for example, they're gonna use your Ignition designer, and your gateway configuration page to log on to each individual environments. So here in dev test stage prod to make those changes. So that's the best way to do this until we have a programmable REST API that again is coming down the road.
Joseph Dolivo: But you can basically make these changes and then maintain separate backups for each individual environment, which captures those different differences. Compare this of course to what we can do with the project level where now you'll notice we have the Ignition designer connected only to the dev instance. And so the goal here is to not, for example, give you access to product, I can make live changes on product. There's a lot of reasons why you wouldn't want to do that. Where if you do do it, it should be in sort of a privileged context that's only limited to emergencies or something like that. But the way that you migrate now is you make your changes to one single dev system and then you use a concept in version control called the pull request, or the merge request to basically identify those changes, and then have a stage gate, if you will, for reviewing and approving those changes.
Joseph Dolivo: And then as those changes are approved to the next stage, they can be deployed into that environment and so on and so forth until they get to dev. So that's really the difference is that instead of making changes to every environment individually, you're gonna make them only in dev, and then really your projects themselves should be identical between all environments once they're all promoted all the way through. So a number of different options for doing deployments. And so again, we're trying to recommend what we tend to see, and what we've had success with.
Joseph Dolivo: But if you look at where this can get deployed, this is where hybrid cloud comes in. So a lot of cases, like what JP was talking about, production systems, a lot of times you're gonna wanna live there on premise. But other systems especially if they don't need to be long-lived, you can kind of spin them up, spin them down. That's really taking advantage of the value that the cloud enables with scalability, elasticity, and flexibility. So, that will be the case whether or not you're deploying them at an edge device on your laptop for example, if you wanna just test something or all the way up in production. So when you do the actual provisioning themselves, there's also a couple of different options for that. So if you've got short-lived branches, let's say you're doing your dev work and you basically want to create a new branch for a particular feature or a particular hotfix, and then when you're done with that, you can spin it down. That's an option. You may also work in a more traditional model where I'm gonna have a long-lived instance, so I'm gonna have my dev server, my integration server, and I'm gonna continue to connect to that and use that. And that's gonna be essentially my single source of truth for the development files.
Joseph Dolivo: So you can have kind of a new environment that you're gonna spin up, or you can be basically deploying into an existing environment. And even for an existing environment, again, there's sort of, you can go four levels deeper with how you do that. So you could basically have a push of changes. You could say, well, I've now completed, let's say a merger task. So if I take those changes and push them down to a server. Or I can do a pull where I'm going to notify the server and say, "Hey, I pushed an update. Go get it. Go do a pull," or a lazy way to do it, which we do sometimes as PLCs is to just have like a timer script. So I'm gonna do a get pull on a cycle and I'm gonna pull those files in. What's really cool about that is if you do that with a production system, or maybe a staging system, because you can make updates to these systems with zero downtime.
Joseph Dolivo: Now of course there are gonna be business requirements in certain industries, you may not want to do that. Nuclear, maybe pharmaceuticals, but it enables you to have zero downtime deployments of updates, which is completely unique and something that's really exciting that Ignition makes very possible. We talked about the short-lived and long-lived environments as well. So here's putting it all together, what this can kind of look like. But let's say that we want to have a full workflow going all the way from development system all the way through to production. How does this look? What does this... How does this work? So I'm gonna be my developer over on the left. And again, I'm gonna be logging into my Ignition designer. I've got my development system. I'm gonna be making some changes too. When I deploy that for the first time, I'm gonna have a gateway backup that I'm going to restore, provision, load it when I'm spinning up a new Docker container, for example, as part of the backup.
Joseph Dolivo: And then I'm gonna start doing all of my changes inside of there. I'll have some scripts which I can build into Ignition to do things like autocommits, get pushes to essential Git repository where all of those changes are tracked. And then I have a branch that those get pushed into. And now I can, as let's say a developer, go in and do a pull request to say, "Hey, I want to take this collection of changes that I've committed over some period of time and I'm ready to move those on to the next stage and I've completed my units of work, I'm ready to move to higher instances in the test." So when you do the pull request in the test, now you can have some CI/CD pipelines that will basically trigger some kind of event. And for us that may be like we kind of talked about on the last slide, we're gonna take those changes.
Joseph Dolivo: We're gonna push them to the test environment. The test environment, of course, has its gateway scoped, captured, and encapsulated by a separate gateway backup file, which is gonna maybe point to different database connections to different identity providers and then the cycle kind of continues. So you can do this for as many or as few environments as you want. Ultimately, we get all the way over to the right where we've got maybe an approval gate for a quality person, an operations manager or somebody else responsible for that production system. They're the one who's gonna approve the final stage. And what's really cool about this is that, we kind of have a bar with automation in there, but the nature of those and how you do the deployments can actually differ between environments in each one of those different cases.
Joseph Dolivo: So if you want, for example, to automatically pull in those changes to test you could do that for prod. If you need to have more of a manual process that's gotta be scheduled during a maintenance window, we gotta pull it back up, shut the server down and start it back up. We can do all of that. So and same thing for those environments being short-lived or long-lived. So maybe I only want my test server to exist at certain periods of time and my staging server exists. I can shut it down and then I can provision a new one from that gateway backup, which is where all that data is stored. So hopefully this gives you kind of an end-to-end example. Again, not the only way to do this, but this is what we've basically built out and what we've seen a lot of success in our customers have been using, especially for these more complicated enterprise multi-environment deployments.
Joseph Dolivo: So some considerations for this. So there's a couple of different approaches. We basically talked about long-lived and short-lived instances. Some reasons why you may want to consider one or the other. Long-lived instances are kind of what you get out of the box with Ignition, install Ignition where you can spin up a new container, it's gonna run as long as you allow it to run. It's easier to adapt 'cause it's more similar to what you're already used to. For folks who don't maybe have a background in software engineering, in version control, you can use Ignition's built-in conflict resolution for managing changes. So you get nice little diffs when you make changes to screens, it'll tell you if somebody else has it open so you can kind of rely on Ignition to do that.
Joseph Dolivo: The more traditional software-oriented approach, of course, is to have these short-lived instances where I'm gonna create a new feature branch. I'm gonna create a new bug fix. I want to launch a brand new Ignition instance. I'm gonna do my work. I'm gonna version integration through my dev branch and then I'm gonna shut it down. It's more complicated to do that. It introduces some additional complexities as well. 'Cause let's say for example, I'm gonna create these branches that I'm gonna run on my own local laptop. Now I've gotta make sure that my environments are identical. Folks here who have done this before or probably have seen the forums around making sure your Git settings are the same, your line endings are the same. You get resource JSON files that get kind of clobbered over each other sometimes.
Joseph Dolivo: So there's some nuances to doing that. When you have everything in either a centralized environment like a dev integration server, or you have standard builds that you're using for every dev instance that you spin up, you can avoid some of these problems. But this is more aligned with software development practices, so this is what you'd typically do, and it gives you more granularity with regards to changes. For example, if you've got autocommit set up on your shared Ignition instance, where you autocommit every time you save, all those changes are gonna get pushed together. And now somebody's gotta maybe go through and sort and say, "Well, I made these changes and this person made these changes," or I gotta cherry-pick my commits when I'm doing my pull request. So it is definitely a series of trade-offs. We found that the long-lived environment is a great starting point for folks who are kind of new to version control. And then if you require the granularity you can get from some of the short-lived environments, we can support that as well, and that's a good direction to move into. There's also added complexity if you now have a database, for example; now I've got test databases I have to manage too. So it does make the story more complicated.
Joseph Dolivo: Tags. I talked about tags; they are gateway scoped. Typically, though, when you're changing tags, they're somehow affiliated with a project. So I'm gonna add a new screen and I'm gonna add new tags behind that screen. So how do you kind of manage those? The recommendation that comes from Inductive, and what we basically implemented, is when you're doing that designer save, you can do an export of tags into a JSON file, and then you can track that with your version control system. So the easy part is having it there in the repository. If I needed to look at, say, three commits ago, what did my system look like? I'll know what the state of the tags was from that JSON file that I can roll back to.
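A minimal sketch of that export step, assuming Ignition 8.x's `system.tag.exportTags` scripting function and a hypothetical repository layout where tag exports land under the tracked `data/projects/tags/` folder (the paths and naming convention are our assumptions, not an Ignition standard):

```python
import os

def tag_export_path(projects_dir, provider):
    """Pure helper: build the JSON export path for a tag provider inside
    the version-controlled projects folder, e.g. .../tags/default.json."""
    return os.path.join(projects_dir, "tags", provider + ".json")

def export_tags(projects_dir="/usr/local/bin/ignition/data/projects",
                provider="default"):
    # Ignition-only: system.tag.exportTags is available in gateway and
    # designer scripting scope; the install path above is an assumption.
    path = tag_export_path(projects_dir, provider)
    system.tag.exportTags(filePath=path, tagPaths=["[%s]" % provider])
    return path
```

Because the export lands inside the projects folder, the same commit that captures a screen change can capture the tag state it depends on.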
Joseph Dolivo: Launching into a new environment is where we have a little bit more choice, and it's a little bit more complex because there's no automatic tag import, right? So you can do an automated tag import through scripting, where you use the tag import scripting function to basically import a file that you can reference, that exists, you know, as a file on the Docker container, for example. You can have that be a trigger point where I'm going to use the Web Dev [Module] or the Sepasoft Web Services Module, and I can basically notify the server, "Hey, you need to go pull in a new tag file from somewhere." So that's a way that we can try to automate that a little bit, or you just do it manually.
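A sketch of that trigger pattern, assuming the WebDev Module's `doPost(request, session)` handler shape and the 8.x `system.tag.importTags` signature; the endpoint payload, the path check, and the collision policy choice are all our own illustrative assumptions:

```python
def validate_request(params):
    """Pure helper: sanity-check the notification payload before importing.
    This is a toy check, not a complete path-traversal defense."""
    path = params.get("path", "")
    return path.endswith(".json") and not path.startswith("..")

# WebDev Module python resource, gateway scope. Hit after a pipeline
# drops a fresh tag export onto the container's filesystem, e.g.:
#   POST /system/webdev/MyProject/importTags  {"path": "/data/tags/default.json"}
def doPost(request, session):
    params = request.get("data") or {}   # "data" = parsed JSON body in WebDev
    if not validate_request(params):
        return {"json": {"ok": False, "error": "bad path"}}
    # Ignition-only call; collision policy "o" overwrites existing tags.
    system.tag.importTags(filePath=params["path"],
                          basePath="[default]",
                          collisionPolicy="o")
    return {"json": {"ok": True}}
```

The point is simply that the gateway pulls the file in on demand, so a CI step can promote tags without anyone opening the designer.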
Joseph Dolivo: You're gonna have an SOP documented for approving a pull request for a particular environment. I'm gonna have to go in there and either import it using the designer, and then choose to override or skip or any of that. Or I can push them from EAM if I've got my environment already set up as part of the gateway network; with the EAM modules, you can do it that way. Either way, it's a manual step that you're doing. So a couple different ways to do it, depending on the complexity and the ability you have to automate some of that. Licensing: this is a no-brainer, especially if you're using containers or short-lived instances. Use the eight-character license key and activation token. I think it's pretty recent now that you can actually pass those in as environment variables when you're spinning up a container, and they don't just take effect the first time. So, for example, if you need to change your license key, you can do that without having to start one from scratch.
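As a sketch of that licensing setup with the official `inductiveautomation/ignition` image: the leased-activation variables can be supplied at container creation. The variable names below reflect our reading of the image documentation, so verify them against the docs for your image version:

```yaml
services:
  gateway:
    image: inductiveautomation/ignition:8.1.33
    ports:
      - "8088:8088"
    environment:
      ACCEPT_IGNITION_EULA: "Y"
      # Eight-character leased license key plus its activation token.
      # Changing these and recreating the container re-licenses the
      # gateway without rebuilding it from scratch.
      IGNITION_LICENSE_KEY: ${LICENSE_KEY}
      IGNITION_ACTIVATION_TOKEN: ${ACTIVATION_TOKEN}
```

Keeping the actual secrets in `${...}` substitutions (an `.env` file or your secret manager) keeps them out of version control.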
Joseph Dolivo: So that's a relatively recent addition, I think, to the platform. External systems. Unfortunately, not everything, especially in the industrial automation world, supports version control as well as Ignition does. We talked about PLCs, where there are tools that are getting there. Databases are another one. So it's important that you have manual SOPs for the things that you can't automate. This is where I'm gonna document the process for doing a pull request. The Ignition part might be really, really easy, but now we've got these other steps to do, either with tags or other systems. You have to do this as well for, like, Sepasoft configuration: you'll have to do an export of your MES objects, and you have to do an import. Right now there's not really an automated, easy way to do that, but if you have an SOP that's documented, that's how you can ensure consistency every time you do it. And then we talked about some of the other tools. So if you can automate it, you can leverage other tools for doing this; again, it's kind of outside the scope, but it'll make your job easier for these production environments. Some final thoughts around this: prefer immutability. That's a word we haven't used yet, but that's a big theme that you'll see in terms of infrastructure.
Joseph Dolivo: So say you get into a scenario where you've got a VM that you've been patching and making updates to, and it breaks, and you're trying to figure out: what was the change that I made that I've gotta figure out how to undo? It's really, really ugly when you treat your infrastructure like that. It's much better if you can basically tear it down and spin it up again from a good known state. So immutability is gonna apply certainly with a Docker container. It'll apply with your files, like the gateway backup that you're keeping track of. You should always be able to spin that up from a good known state. That's just good development practice in general. Consider having automated provisioning of that infrastructure.
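As an illustration of rebuilding from a known-good state rather than patching in place: as we understand the official `inductiveautomation/ignition` image docs, a gateway backup can be restored at container start with a runtime flag, so the container is disposable and the `.gwbk` is the source of truth. A sketch (image tag, flag, and paths should be checked against the docs for your version):

```yaml
services:
  gateway:
    image: inductiveautomation/ignition:8.1.33
    # Restore a known-good gateway backup on start, so "fixing" a broken
    # gateway means recreating the container, not untangling live changes.
    command: -r /restore.gwbk
    volumes:
      - ./backups/known-good.gwbk:/restore.gwbk:ro
    environment:
      ACCEPT_IGNITION_EULA: "Y"
```

The backup file itself then belongs in whatever artifact store or repository you use for environment state.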
Joseph Dolivo: It was actually really exciting, if anybody went to Kevin and Travis' session yesterday, where Travis has been working on these CloudFormation templates, for example, if you wanna deploy into AWS. CloudFormation is an AWS concept, but there are equivalents, Terraform, Pulumi, Ansible, other tools for doing similar things, where you run a script and it'll provision that the same way every time. So it's linked to, but a little bit distinct from, immutability. Really good practice to have. And then you have this concept of infrastructure as code. So if I can define my virtual machine or my container runtime, my orchestration engine, my storage accounts, if I can define all this as code, same thing: I can spin it up multiple times, make sure that it's identical, and I can also track those changes to my environments in version control the same as I do my Ignition project. So it lends itself really, really nicely to doing that. And then again, this is kind of our recommendation, or recommendations.
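To give a flavor of what "infrastructure as code" looks like concretely, here is a hypothetical Terraform sketch (Terraform is one of the tools named above; the resource names, region, instance size, and AMI are placeholders) that declares a gateway host once and provisions it identically on every `terraform apply`:

```hcl
terraform {
  required_providers {
    aws = { source = "hashicorp/aws" }
  }
}

provider "aws" {
  region = "us-east-1"
}

# One declarative definition of the gateway host. Running `terraform apply`
# converges real infrastructure to match this file, every time, and the
# file itself is tracked in Git alongside the Ignition projects.
resource "aws_instance" "ignition_gateway" {
  ami           = "ami-0123456789abcdef0" # placeholder AMI ID
  instance_type = "t3.large"

  tags = {
    Name        = "ignition-dev"
    environment = "dev"
  }
}
```

Because the definition lives in version control, a change to the environment gets reviewed through the same pull-request flow as a change to a screen.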
Joseph Dolivo: There's other ways to do this. They each have their own pros and cons; I'm happy to talk about some of those. We've explored a whole bunch, and again, this is what we found to be kind of the easiest, given some of the caveats that exist. So these are some additional resources that are out there. You guys are probably familiar with the Ignition 8 Deployment Best Practices guide. There's also the documentation on the Docker image. We keep seeing new environment variables added to that, which is really exciting because it makes a lot of our configuration easier; we have to do less injecting of things into the internal database, which is a little bit messy. And then if you're interested in branching strategies as well, this is kind of one of the de facto articles out there on branching strategies. It's a very, very hot topic in the software world.
Joseph Dolivo: Some people say you shouldn't have branches per environment, and there are good reasons for that, but that's something we can maybe talk about offline if there's interest. So, highly recommend you take a look at these as you go forward with your environment management deployment. So that's it; we kind of moved through some of the scripted stuff. Hopefully that gives you some ideas or things to talk about. We're open to questions, either around what CFM has been doing, hybrid cloud architecture, Git, DevOps, whatever you have. So thanks for your time and attention.
Yousuf Nejati: Thank you, guys. That was pretty awesome. You were speaking my language. I'm all into software engineering, and seeing you guys use that with Ignition is pretty inspiring. So we're going to open up the floor to questions. I'd ask that if you have a question, you please come up to the mic. If you don't want to come up to the mic, just give me a moment and I'll repeat the question, just for the recording. So do we have any questions from the audience? Okay.
Audience Member 1: I got one.
Yousuf Nejati: Okay.
Audience Member 1: Can you describe in more detail the Git push part of your big overall diagram, exactly what's involved there in that script?
Joseph Dolivo: Yeah, for sure. So if you're talking about Git, there's a couple different stages of moving code from one place to another, from one Git repo to another. There's a commit stage, and then there's the actual push. What we do, and what Inductive recommends in the best practices guide, is anytime you hit the save button in the designer, you can do an autocommit, essentially. So it'll look at the changes that were made and it'll create a commit for that. Those commits exist on your local server, which is running Ignition. Then there's a separate stage for doing the push of those files. That will take all those commits and move them from one repository to another repository. So Git is somewhat unique, if you're familiar with SVN or other version control systems, in that it's not centralized, it's decentralized. You have the full history of all those files located in every single Git server. So the changes that you make, the commits, are all done locally on the Ignition server, and then you're pushing those to another server, which is really the primary repository for those. So does that make sense?
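However the autocommit gets triggered, the commit-then-push step itself boils down to a few Git operations. A sketch in Ignition-style Python using `subprocess` (the repo path, commit-message format, and branch name are our own conventions, not an Ignition or 4IR API):

```python
import subprocess

def commit_message(actor, project):
    """Pure helper: a consistent autocommit message per designer save."""
    return "autocommit: %s saved by %s" % (project, actor)

def autocommit_and_push(repo_dir, actor, project,
                        remote="origin", branch="dev"):
    # Stage everything under the projects repo, commit locally (commits
    # live on the Ignition server first), then push the local history
    # up to the central repository.
    subprocess.check_call(["git", "-C", repo_dir, "add", "-A"])
    subprocess.check_call(["git", "-C", repo_dir, "commit", "-m",
                           commit_message(actor, project)])
    subprocess.check_call(["git", "-C", repo_dir, "push", remote, branch])
```

Separating the commit from the push mirrors Git's decentralized model described above: the local history is complete even before anything is pushed.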
Audience Member 1: Well, I was more interested in the resources you were actually committing.
Joseph Dolivo: Oh, yeah. So what we do is you can create a Git repository in the projects folder. Inside of the Ignition installation directory, you've got the data folder, and under that, the projects folder. If you make the projects folder itself a Git repository, anything that's in there gets committed. And you can use that technique also for when you're exporting your tags file, for example: you can export that into the same projects folder so that it'll automatically get picked up when there's changes, so that you can commit those. So that's how that happens. Actually, a related point to that, which is worth mentioning, is that if you don't want to autocommit, you also have the ability to manually commit. That's more what you would do in traditional software; you kind of choose when to do it. But now you have to have an interface to that, and you have to provide that either via, like, an Ignition module plugin, or command-line access, or something like that. So that's another way to do it if you just don't want to autocommit and kind of wanna be safe. So, great question.
Yousuf Nejati: Another question over here.
Audience Member 2: So I read that recently you integrated Git and Ignition, and you mentioned the resource-file issues, merge conflicts and stuff like that. Do you have a recommendation for how best to manage that when there is a conflict? What's your best way?
Joseph Dolivo: Did you want to read the question? I missed that...
Yousuf Nejati: No, I just... Go ahead.
Joseph Dolivo: Oh, okay. Yeah, so it's complicated. I would say that we've definitely found a lot of success when you're making sure that the environments are identical. So for everybody, for example, if they're accessing it from their own laptop, the Ignition version that's running... we basically spin up Docker containers using the same image. And so we make sure that every environment is identical, so we don't have some of those alignment differences. And also, if we look at how we were showing the migration between different environments, we're only making changes to that dev instance, which then get pushed forward. So the versions of things that are on the other systems kind of get wiped away. That reduces the presence of some of those merge conflicts that you may have if you're trying to take a commit from one environment and migrate it back down to a different one.
Joseph Dolivo: So that's where we've seen things get a little bit messier. But yeah, there are still some rough edge cases, I would say, and I think the forums have discussed it kind of at length. I think that the real solution is gonna be for some of the content in that resource JSON file to basically be broken out into something else. There's parts of it that make sense, for knowing when a file was changed, which Ignition needs to know, versus the timestamp and some of those other hashes that could probably go away. So hopefully that helps.
Jean-Paul Moniz: That's a great question. There's also some architectural things that you can do to kind of help eliminate that, right? Like the whole conceptual idea of being able to have a development environment, a QA environment, and a production environment that are as close to one-to-one as possible really is sort of a new concept, or at least the technology is now getting to a point where you're able to get closer to it. Some of the announcements that were made this week at ICC relative to that cloud tooling and whatnot help facilitate that.
Jean-Paul Moniz: And then there was one good point that I think Kevin brought up when there was a question about separating backend from frontend, right? And how do you do that? And the question came up about OPC tags. So when you sit there and think about OPC tags and how do I architect around that in this environment, deploying broker technology is probably going to be sort of a key enabler for that. If you think about having my dev, QA, and production environments running off of the same broker, or looking at the same broker for sources of information, now we're starting to get rid of those differences between those different environments. So those are some things to think about.
Joseph Dolivo: Using server security, for example: if you do have multiple systems, dev, test, prod, maybe not prod, but dev and test, let's say, working off the same source of data, you can use security on the tag providers or the broker to do that. Then you can ensure, for example, that from my dev server I don't change data on my test server, and the Ignition security model can account for that too.
Audience Member 3: [Question about pull review process].
Joseph Dolivo: Yeah, you certainly can. So as part of the pull review process, it's going to basically do a diff of the files in those environments. And so ideally, if you're not providing access, let's say, to prod itself, there won't be any changes. But let's say that you needed an emergency break-fix and you had to make a change to production. If you're using a similar setup for how you're doing autocommits or whatever, you'll see that commit exists when you're logged into your Git server. We use Git Teams; you can use GitHub or GitLab or something. And so you can actually do something called cherry-picking, where I can take that commit, or those commits, and move those back to the dev systems to kind of get you back in sync. So before you approve that, whatever your manual approval step is, you can do a check and say, "Okay, I see that my commit hash is the same as what's on my other system," so you know whether it's been changed. It's actually a really good point as well.
Joseph Dolivo: And when we look at pharmaceutical and life science customers, they want to have some assurance that systems aren't changing under them, especially a validated system. So you can use a commit hash, for example, to provide some level of assurance that it's not changing out from under you. So it is a good way to do that. Great question.
Yousuf Nejati: Yeah. Excellent question. I think we have time for one more question. Over there.
Audience Member 4: You said in the designer … those JSON files are gonna get committed on that save as if they were modified. How do you sort that out? I mean, do you just push those changes through to dev to run and test even though they're not really functional changes?
Joseph Dolivo: Yeah, that's a great question. So one of the things that we recommend doing, what you can do, for example: let's say you're gonna open up a screen 'cause you want to look at it. If you're not gonna make any changes, you can close it without saving instead of just saving it down. So that's one thing: you can be a little bit more diligent when you're developing to make sure that you're not gonna have that change reflected when you commit. That's also another case, if you really care about that granularity, where you may want to not have an autocommit process. You might have a manual process where you're gonna actually stage the individual files. So you can choose to ignore some of those files if you know that they're not gonna be relevant.
Joseph Dolivo: Or in some cases you can just accept the fact that I'm gonna have a resource JSON file change, but the actual screen contents themselves, if I were to compare the two most recent versions, are not gonna change. Maybe the bindings will change, but I know that that's not something that I care about. So you can just kind of accept them as is, or you can choose to not stage those files if you have a manual process. And again, all this is gonna get easier with future versions, but that's kind of the recommended workaround that we have, if that makes sense.
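That "is this change real or just bookkeeping?" decision can be sketched in plain Python: compare two versions of a resource manifest while ignoring the volatile keys, and only stage the file when something meaningful changed. The key names below (`attributes`, `lastModification`, `lastModificationSignature`) are our assumptions about the resource.json layout, so check them against your Ignition version:

```python
import json

# Keys that change on every designer save even when the content didn't.
VOLATILE_KEYS = ("lastModification", "lastModificationSignature")

def strip_volatile(manifest):
    """Return a copy of a parsed resource.json with bookkeeping removed."""
    cleaned = dict(manifest)
    attrs = dict(cleaned.get("attributes", {}))
    for key in VOLATILE_KEYS:
        attrs.pop(key, None)
    cleaned["attributes"] = attrs
    return cleaned

def is_noise_only_change(old_text, new_text):
    """True if two resource.json versions differ only in volatile keys,
    i.e. the change is save-noise rather than a functional edit."""
    return strip_volatile(json.loads(old_text)) == strip_volatile(json.loads(new_text))
```

A manual staging workflow (or a pre-commit hook) could use a check like this to skip files where only the timestamps moved.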
Audience Member 4: Cool.
Yousuf Nejati: Alright. I think that's all the time we have.
Jean-Paul Moniz: There's one there. Sorry. Okay. So, go on.
Audience Member 5: I'm pretty glad that you brought up the Pulumi. For us that was a life changer…
Joseph Dolivo: Yeah. They don't pay me, but I'm a big fan, my team's a big fan of using it, so we use that...
Audience Member 5: I know that…
Joseph Dolivo: Yeah. What's important too is when you look at something like container technology in general: by its nature, it's multi-cloud, right? So we deploy into Azure and AWS; you can deploy it into GCP, you can deploy it into hybrid environments, anywhere you have a cluster or you've got VMs. These same technologies apply: running pipelines, infrastructure as code, whether you're using the cloud or not, these are cloud-enabled, cloud-native technologies that can just make your life easier for managing environments. Same thing with Git. Git is not tied to the cloud; it works really well with it. Just like Ignition: you can use Ignition in kind of the old way, installed on bare metal or a VM, but the power of it comes from starting to combine these technologies to really accelerate your business.
Jean-Paul Moniz: You say that probably from an integrator role or a support role or whatever, right? And you find that very useful. Think about it from an end-user perspective in terms of how much I get to clean up my environment, and just focus on what we need to focus on and forget about all the rest. So it becomes super, super powerful.
Yousuf Nejati: Okay, we're done. I think we have to stop here because of the next group that's gonna be coming in, but I'm sure you guys are gonna be available for questions after. Thank you so much. Another round of applause.