Defying Ordinary: A Deep Dive Into Unique Automation Projects
Brad Fischer | Sales Engineer II, Inductive Automation
Dylan Powers | Senior Systems Engineer, Grantek
Cédric Groc | Director of Activities, 2Gi Technologie
Olivier Marin | Project Manager, 2Gi Technologie
Eric Reisz | Independent Automation Consultant
Andre Zeibari | Founder & Principal Executive Consultant, Zeibari & Co.
Every year, Inductive Automation shines a spotlight on modern marvels in industrial automation at the Discover Gallery, but there’s a whole lot more to these projects than we could ever capture in the showcase. In light of that, we’re diving deeper into some of this year’s most novel Ignition projects.
Join us in this webinar to see what the best of the best are doing with Ignition these days and spark some inspiration for your own solutions. You’ll hear from the talented integrators and companies whose projects push the boundaries of what’s possible with Ignition, and get a closer look at how their cutting-edge solutions work.
- Gain a deeper understanding of large-scale Ignition projects
- Get inspired by creative real-world solutions in a variety of industries
- Learn about the Discover Gallery showcase
- Ask automation experts about their projects
Brad Fischer: Hello. Welcome everybody to today's webinar, "Defying Ordinary: A Deep Dive Into Unique Automation Projects." Today we'll be getting a closer look at some truly novel projects from the past year, so you can see what the best of the best are doing with Ignition lately. I'm Brad Fischer, a Sales Engineer II at Inductive Automation, and I'll be your moderator today. As a sales engineer, I get to help customers all over the world implement Ignition, talk about architectures, modules, best practices, and security. And prior to joining IA about two years ago, I actually worked as a system integrator, so I've been boots on the ground and I have been using Ignition for over a decade now. Joining me today is Dylan Powers, Systems Engineer at Grantek, Cédric Groc, Director of Activities at 2Gi Technologie, Olivier Marin, Project Manager at 2Gi Technologie, Independent Automation Consultant Eric Reisz, and Andre Zeibari, Founder and Principal Executive Consultant at Zeibari & Co. Dylan, let's start with you. Can you share a little bit about Grantek and what you do there?
Dylan Powers: Hi, I'm a Senior Systems Engineer with Grantek Systems Integration. We're a medium-sized systems integrator, and we do work in both the life sciences and food and bev silos. We support a multitude of different types of systems and projects. Right now, we're very focused on Ignition and developing on that platform.
Brad Fischer: And Cédric, why don't you tell us a little bit about yourself and what you do?
Cédric Groc: Sure, hello. I work for 2Gi Technologie, which is an Ignition Premier Integrator. We have been experts in implementing software solutions for industry for over 21 years. We help companies leverage industrial data and implement solutions such as SCADA, MES, and IoT. We can also offer complete turnkey solutions, including hosting and secure telecommunications. Over the years, 2Gi Technologie has successfully implemented solutions in the manufacturing, chemical, and water and wastewater management industries. Our DNA is really expertise. All our engineers are certified. As you can probably hear, the company is located in France. And thanks to its recognized expertise, including with major groups, 2Gi Technologie occasionally operates in Europe, French-speaking Africa, and the United Kingdom. So I'm Cédric, Cédric Groc, and at 2Gi, I'm Director of Activities. This means that I'm in charge of the company's growth, making sure that we constantly deliver a high level of quality and that we remain aligned with our DNA and our strategy. I therefore oversee all customer relations, project management, marketing, and sales.
Brad Fischer: Thank you. Olivier, can you please introduce yourself and tell us a little bit about your role at 2Gi Technologie?
Olivier Marin: Yes, of course. Good evening. I'm Olivier Marin. I'm a Project Manager at 2Gi Technologie, and I'm in charge of defining the best solution architecture to fit the client's needs and managing the project so it can be delivered on time. That's it.
Brad Fischer: Fantastic. And Eric, can you give us a little introduction to yourself?
Eric Reisz: Hi, everyone. I'm Eric, and I am an Automation Consultant. I primarily work with pharma corporations, whether it be big pharma, small pharma, or startups. And I try to understand their business and help them develop technological and automation approaches, specifically centered around data management and Industry 4.0 approaches to help them operate more effectively and efficiently.
Brad Fischer: Great. And last but not least, Andre, can you please talk a little bit about your role at Zeibari & Co.?
Andre Zeibari: Great, thanks Brad. Zeibari & Co. is a boutique consulting firm. We serve the biotech and pharmaceutical industries, specializing in Digital Transformation and operational technology strategy development and deployment. Our main focus is to get real value out of the cross-section of technology and people, from the shop floor of the manufacturing suite all the way to the top floor of the enterprise.
Brad Fischer: Fantastic. Thanks, Andre. I'm glad to have all of you here with me today. Before we get started, I'll quickly tell you a little bit about Ignition, which is the software platform that was used to build the amazing projects you'll be seeing shortly. Ignition is a universal industrial application platform for HMI, SCADA, MES, and IIoT. It's used by 57% of the Fortune 100. It has an unlimited licensing model, cross-platform compatibility, IT-standard technologies, and a scalable server-client architecture. It's web-based and web-managed with a web-deployed designer and clients, rapid development and deployment tools, and modular configurability. The Ignition platform removes barriers to project development so that integrators can build virtually anything they can dream up. Around here, we like to say that your only limit is your imagination. Alright, with that, let's start things off. So here's the agenda for today's webinar.
Brad Fischer: We'll start off with some background on what we mean when we say "defying ordinary" in automation. Then we'll get right into a deep dive into some truly unique automation projects from the past year. At the end, we'll finish up with some audience Q&A so you can get your questions answered. If you have questions during the presentation, go ahead and type them into the questions area of the GoToWebinar control panel, and we'll answer as many as we can. If we can't get to your question today, we encourage you to reach out to one of our account representatives who will be happy to answer it. Also, just so you know, a recording of this webinar and the webinar slides will be available within the next couple of days. Ignition is designed to address Industry 4.0's demand for increased connectivity, data collection, and advanced analytics. With the ability to communicate with all sorts of devices and databases, Ignition provides all of your data from one central location.
Brad Fischer: A single designer enables you to rapidly configure data collection, alarming, and mobile-responsive screens, while an unlimited licensing model means anyone and everyone can launch a client session to view trends, alarms, and entry forms, and even personalize their interface in real time. Ignition is versatile enough to not only operate in a variety of industries, but also allow each company to take their Digital Transformation journey to new heights. With vertical and horizontal scalability, Ignition can grow with your organization, allowing you to rapidly deploy new lines and develop new solutions, even leveraging artificial intelligence and machine learning. The projects we are featuring today will demonstrate how you can use Ignition to connect disparate systems into a holistic enterprise experience. These examples will demonstrate how increased connectivity, rapid configuration, operational versatility, and scalability allow teams to meet business needs.
Brad Fischer: Ignition makes it possible for organizations to exceed expectations when creating projects that defy the ordinary. And with that, let's start by taking a look at a project from Grantek. Dylan, over to you.
Dylan Powers: Hey, thanks Brad. So this is a project that Grantek designed and built for Merck. We were contracted with Merck to build an alarm management system for their West Point, PA facility to help improve their compliance and workflow. In the early conversations with the customer, we realized that we were going to have to incorporate a lot of features that pushed us outside the standard scope of alarm management and alarming systems, such as advanced point management, advanced metadata management, reporting for alarm rationalization, and things of that nature. So we developed some very custom solutions that were pretty fun to build, honestly. The alarm management system connects to 11,000 different points and is built using the Ignition Perspective Module. The architecture uses redundant gateways connected to an OPC UA data aggregator. We also have a secondary independent Ignition gateway for reporting from web clients.
Dylan Powers: And then additionally, there's a development and engineering workstation for development work on future projects and testing of that nature. The system was built using a Postgres cluster: two different Linux boxes, with replication between the two databases provided via Pgpool. So some of the features we built in: obviously, we had alarm status monitoring and historization. We use the native alarm journal within Ignition as the engine to historize our alarms, and the frontend is all custom-built tables, using scripting and queries to put all of that together and show it on the frontend. The management interface allows users to specify any number of pieces of metadata associated with a point, from persons responsible for the point, to the area, location, and comments; anything that you want to attach to the point is built into the system and stored as metadata within the database. We also built an ad hoc and scheduled reporting tool.
Dylan Powers: So all of the reports are highly filterable to help with the rationalization of the alarms. The report scheduler allows users to schedule themselves or others to receive a report on a defined frequency, and they can specify any number of parameters associated with the report. Once that scheduling is complete, they'll start receiving the reports on the frequency they defined. Some of the other pieces: we built some interfaces that allow users to request changes to points, submit those requests, and track the progress of each request as it goes through its life cycle. Ultimately, the system administrators are responsible for closing and completing the request and making all of the necessary changes to the point. Again, some of the features are real-time alarm management workflows. We added some custom features to the workflow that allow users to do a two-point acknowledgement on each alarm as they assign the alarms to different technical groups for resolution. And then you can add ad hoc annotation to the alarm event, which will be associated with that alarm event for future reporting.
Dylan Powers: So as the alarm is going through its life cycle, users can track in real time what's happening with the alarm event. We did have to build some custom data archiving features into this. We built some Python scripts that run through the alarm_events and alarm_event_data tables, pull out anything older than two years, and move those rows over to archive tables within the database for future reporting. And then we covered the report scheduling builder on the previous slide; so this is the report scheduling builder. You can see that on the left-hand side, you have the 10 different reports that we built for the project. The workflow is that you select a report on the left-hand pane. On the right-hand pane of the view, behind the overlay, are the different schedules associated with that report. You can add, delete, edit, or duplicate schedules with the buttons on the right-hand side. Once the report schedule pop-up builder is opened, either by selecting modify point, modify schedule, or add schedule, this is the pop-up that you receive. Here, you can add any number of filters to the report, you can specify the recipient in the recipient section, and then you can specify the report timeframe, whether it's daily, weekly, monthly, or so forth.
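The archiving job Dylan describes could be sketched roughly as follows. This is a plain-Python illustration of the retention logic only: in the real system the rows live in Ignition's alarm journal tables and would be read and written with Ignition's system.db scripting calls, and the `eventtime` key and 730-day window here are assumptions for the example.

```python
from datetime import datetime, timedelta

def partition_for_archive(events, now=None, retention_days=730):
    """Split alarm event rows into rows to archive (older than the
    retention window) and rows to keep in the live tables.

    `events` is assumed to be a list of dicts with an `eventtime`
    datetime key, mirroring rows from the alarm journal tables.
    """
    now = now or datetime.now()
    cutoff = now - timedelta(days=retention_days)
    to_archive = [e for e in events if e["eventtime"] < cutoff]
    to_keep = [e for e in events if e["eventtime"] >= cutoff]
    return to_archive, to_keep
```

In a scheduled gateway script, the `to_archive` rows would then be inserted into the archive tables and deleted from the live ones, ideally inside a single transaction.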
Dylan Powers: And report type is you're selecting the format, so PDF, CSV, or HTML. This is the point management view that we developed for the system. So the idea of this view is that as you select on points within the points table, the left-hand pane, the alarm info pane will be populated with metadata associated with those points. So all of the information, any comments, alarm owners, and so forth, would be shown in the alarm info pane. On the right-hand pane, you can add points, inactivate points, or modify points. The modify point workflow is an elaborate carousel of embedded views that allow the users to specify all of the metadata associated with this point. One neat feature that we developed was the import and export feature associated with this view. So some of the tag data that goes into the alarm event is stored within the alarm UDT and part of the Ignition Tag Provider, other pieces of the alarm metadata are stored in the database.
Dylan Powers: So with the import and export feature, you can select any number of alarms, export them to a CSV, make modifications to the points or add additional points, and then re-import that same CSV file back into the application. It goes through a series of validation checks to make sure that the information in the respective columns is appropriately formatted, and then we either insert or update that data into the database or the tag provider, depending on where that data lives. This is the alarm management screen, the alarm summary view. You have the active alarm table in the center of the view, and you can click on individual rows within the alarm table; that will populate the alarm info pane on the left-hand side. In the alarm info pane, all of the annotation associated with the alarm event comes up in real time. So if you have multiple users annotating the same alarm at the same time, each user will be able to see the annotation from the other as that alarm goes through its life cycle.
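The CSV re-import validation Dylan mentions could look something like this sketch. The column names and the split between database-resident and tag-provider-resident fields are hypothetical; the point is the validate-then-route pattern, where each row is checked and its fields are bucketed by destination before any write happens.

```python
import csv
import io

REQUIRED = ("tag_path", "priority", "owner")
DB_FIELDS = ("owner", "area", "comments")   # metadata stored in the database (illustrative)
TAG_FIELDS = ("priority",)                  # stored on the alarm UDT / tag provider (illustrative)

def validate_and_route(csv_text):
    """Validate an exported alarm CSV and split each valid row's
    fields by destination. Returns (routed_rows, errors), where each
    error is (csv_line_number, message)."""
    routed, errors = [], []
    # Data rows start at physical line 2 (line 1 is the header).
    for i, row in enumerate(csv.DictReader(io.StringIO(csv_text)), start=2):
        missing = [c for c in REQUIRED if not row.get(c)]
        if missing:
            errors.append((i, "missing: %s" % ", ".join(missing)))
            continue
        routed.append({
            "tag_path": row["tag_path"],
            "db": {k: row[k] for k in DB_FIELDS if k in row},
            "tag": {k: row[k] for k in TAG_FIELDS if k in row},
        })
    return routed, errors
```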
Dylan Powers: By double-clicking rows within the active alarm table, you get the alarm management overlay, which allows users to go through the two-part acknowledgement piece of assigning and answering, and additionally, you can add response annotation ad hoc. The accept-annotation feature is there for shift changes: as a shift change happens, the incoming users go through and either accept the existing annotation or add new annotation on all the alarms they're walking into. You can also shelve alarms, view the alarm history, and trend the alarms from this view. So one of the results of this system was really regulatory compliance: having a record of the steps that were taken on the alarm event helps with the regulatory compliance of these systems. For instance, you can pull a report showing the annotation on an alarm and tell the story of that alarm a little bit more clearly. We also put a lot of effort into designing user-friendly interfaces that are intuitive and simple to learn. The biggest thing was convenient access to the system, or convenient access to the data, I'm sorry.
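A minimal sketch of the two-point (assign, then answer) acknowledgement workflow described above. The class, state, and method names are illustrative, not Grantek's actual implementation; the idea is that an alarm cannot be answered until it has been assigned to a technical group, and every step leaves an annotation behind for future reporting.

```python
class AlarmEvent:
    """Toy model of a two-point acknowledgement life cycle."""

    def __init__(self, name):
        self.name = name
        self.state = "active"
        self.annotations = []

    def assign(self, group, user):
        # First point: route the alarm to a technical group.
        if self.state != "active":
            raise ValueError("can only assign an active alarm")
        self.state = "assigned"
        self.annotations.append("%s assigned to %s" % (user, group))

    def answer(self, user, note):
        # Second point: acknowledge with a response annotation.
        if self.state != "assigned":
            raise ValueError("alarm must be assigned before answering")
        self.state = "answered"
        self.annotations.append("%s: %s" % (user, note))
```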
Brad Fischer: Yeah, thank you very much for sharing that with us, Dylan. So I have a couple of questions for you. How were you able to deliver so much functionality beyond a normal SCADA alarming system?
Dylan Powers: So that's one of the aspects of the Ignition platform: it really is extensible by nature and allows you to build out, I mean, your imagination is the only limit. So you can build scripting and queries to pull data and join data together in a way that we've never really had available to us prior.
Brad Fischer: Great, and I assume adding functionality like this can result in a fairly complex solution. Is this a bespoke solution or were you able to design something that was reusable and extensible, something you could use in the future?
Dylan Powers: Yeah, we designed something that's reusable for future use. We're actually deploying the same solution for another customer. As we're going through that process, I think every customer has their own workflow. So we have to tailor the solution to the customer, but the backbone of the alarming events is really staying the same. So that's been exciting for us to have developed this solution and be able to deploy that for other customers.
Brad Fischer: Fantastic. Now for the next project, I'll pass things over to Cédric and Olivier to talk about 2Gi's project.
Cédric Groc: Thank you, Brad. And this is a project we did for Saint-Gobain PAM, a leader in water supply pipelines. It was created for their 150-year-old reference plant so that they could remain the leader in their market. So we created for them an enterprise solution and many modules were used for this project. You can see them on the next slide. You have, first of all, Vision for the HMIs, Perspective for visualization and also mobility, the Web Dev Module to connect to external sources, the Reporting Module to report all merged data, and also the Historian Module. For databases, we have used Microsoft SQL Server and it was mirrored.
Cédric Groc: And the Ignition architecture that we've chosen is hub-and-spoke. One of the very exciting features of that project was pipe tracking. We also improved quality control by using the HMIs. Another important element is that we used Ignition's interfacing capabilities to connect to SAP's ERP and also to the data stored in the VAX. And thanks to Perspective's mobile capabilities, we were able to provide geolocalization. I will let Olivier give you more details about each of the features.
Olivier Marin: Yes, so on the next slide, here's a view of the pipe tracking feature. As you can see on the left, an operator adds a tracker to the pipe, and RFID readers are installed at strategic positions on the line. The information gathered in real time throughout the production of the pipe is attached to the pipe's unique identifier in the database. At the end, all the information is tracked and traced, and you get a single view of truth for each pipe during production.
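Conceptually, the tracking Olivier describes boils down to appending every RFID reader event to a record keyed by the pipe's unique identifier. A toy sketch of that pattern (in the real project this lives in the SQL Server database, not an in-memory dict, and the station names are made up):

```python
def record_reading(track, uid, station, data):
    """Append one RFID reader event to the pipe's production record."""
    track.setdefault(uid, []).append({"station": station, "data": data})

def pipe_history(track, uid):
    """Single view of truth for one pipe: its readings, in order."""
    return track.get(uid, [])
```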
Olivier Marin: On the next slide, you can see on the left that there is an operator in front of the pipe, and the screen shows him a real-time digital replica of the line. On the screen, each pipe corresponds to one of the real ones you can see on the line. The quality control is done by the operator, who can then report the control result to the application. What's great there is the natural integration with the production line and the smooth user experience. And last but not least, there is no more paper.
Olivier Marin: On the next slide, you will see on the left an image of the architecture used to move data from the PLCs to the ERP. So we are interfacing with SAP and the VAX. And on the right is how you search the database and the reporting display in the project, so each person can access the same data everywhere. What's interesting there is that connecting to SAP was not initially planned, but it was really easy to add the right module, which is the Web Dev Module, to connect to SAP via web services.
Olivier Marin: Next, you can see a view of the geolocalization feature in the system. On the left, the forklift driver can set the position of the products in the warehouse and also retrieve any product directly from a mobile device application, and we are using Perspective for that. On the right, you can see the desktop version for people in the offices, using the same data and geolocalization. Cédric.
Cédric Groc: Thank you, Olivier. What's really incredible about this project is that we were able to create an enterprise solution that catered to all the needs of this very old plant with a team of only six people. With Ignition, we were able to manage the entire transformation, including SCADA, MES, track and trace, quality control, and many other functions. And we were able to accomplish all this in just a few months at an optimized cost. What we really learned is how key change management is in such projects, and for that, the Saint-Gobain team was really helpful. And also, we could go very fast by using Ignition features for what they are and not doing too much scripting or code around those features.
Brad Fischer: Fantastic. Thank you so much for sharing that with all of us. You mentioned the ease of integrating with SAP using Web Dev. What specifically made that easy? I feel like most people would think that an integration with SAP on the fly like that could be really difficult, kind of a huge shift in the project scope.
Cédric Groc: Well, there are many possibilities for integration with SAP. It really depends on your needs. In that particular case, we wanted to exchange up-to-date data almost in real time and with the highest level of security. The Web Dev Module enables data to be exposed from Ignition through web services, and Ignition can also call external web services. This module had not yet been installed, as no previous functional requirement had called for web services before we had to connect to SAP. And the module was really quick to add, even on production servers, and without any need to stop or restart them. So Ignition is a real game changer in this area. I mean, when it comes to patches or new features to be deployed, there's no need to shut down systems, which is essential for 24/7 plants.
Cédric Groc: On the SAP side, the Saint-Gobain team had a way to configure web services to share data. So we agreed on the data to be sent and received in a JSON format. From Ignition, a few Python scripts trigger the web service calls. The data is sent to SAP. The reverse is also possible, of course. And we use this technical capability to automatically report production quantities to SAP. And the huge improvement for Saint-Gobain is that all KPIs and production figures are now reliable and shared between the different Saint-Gobain teams for better decision-making.
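As a rough illustration of the Python-triggered web service calls Cédric describes, here is a sketch that just assembles the agreed JSON body; every field name here is made up for the example, since the real contract is whatever the two teams agreed on. In an Ignition gateway script, the resulting string could then be sent with a call such as system.net.httpPost.

```python
import json
from datetime import datetime

def build_production_payload(order_id, material, quantity, now=None):
    """Assemble the JSON body reporting production quantities to SAP.
    Field names are illustrative, not the actual Saint-Gobain schema."""
    ts = (now or datetime.utcnow()).isoformat()
    return json.dumps({
        "order": order_id,
        "material": material,
        "quantity": quantity,
        "reportedAt": ts,
    })
```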
Brad Fischer: Great. I know this project also used RFID and leveraged that to track those pipes as they moved not only down the line but into the warehouse and other facilities. Can you talk a little bit more about the hardware that you used and how you integrated that with Ignition?
Olivier Marin: Yes. So in this case, the RFID is specific hardware managed by Saint-Gobain. All the RFID data is collected by the many PLCs on the plant floor and then sent to Ignition, where it is used to feed the track and trace system. With that data, it's possible for production to know where the pipes are on the line at any moment of the production. And in the reporting, it is also possible to know what kind of operation has been done at each step of the production of the pipe; you can also find some quality results in this information. At the end, when the pipes are finished, we use mobile devices with a Perspective application to record the position of the pipes in the storage park and find them easily when it's time to prepare the orders. So that's it.
Brad Fischer: Yeah. Yeah. Thank you. So up next, we have another fantastic project. This is a data management system for the pharmaceutical industry. Eric and André, you've got the floor.
Andre Zeibari: Great. Thanks, Brad. And I just want to say thanks to Dylan, Cédric, and Olivier for the great presentations of their projects. Those are fantastic, guys. So this project is the creation of a data management system and the automation architecture for the Center for Breakthrough Medicines. It was designed to provide robust data integrity throughout the enterprise and to allow for the development of data contextualization from the shop floor all the way up to the top floor of the enterprise. It was also developed and built with our automation integrator friends, Skellig Automation.
Andre Zeibari: So a little background about it. The Center for Breakthrough Medicines is a contract development and manufacturing organization for the cell and gene therapy industry located just outside of Philadelphia. It was established in 2019 with the goal of being the first and only CGT CDMO to provide end-to-end capability from development, as you see listed in the process development and analytical development labs, through the manufacturing of viral vectors, plasmids, and cell therapies. The requirements for this project were really to have seamless, agile, scalable, adaptable monitoring of the manufacturing suites and to be able to contextualize all of the data from the shop floor, from the equipment, process, and room data. And it was critical to develop the architecture that way because we are working with a CDMO.
Andre Zeibari: The other part of this was to make sure that it was cost effective and scalable. It needed to be flexible. It needed to be extensible for the ongoing development of automated capabilities. One of the things about CBM is that it was a startup, as I said, founded in 2019, really got started in 2020, and the capabilities developed over time. So we couldn't come out of the gate with everything built in. It needed to be scalable so that we could add capabilities as we went forward. The other key aspect of a CDMO is that we don't have the luxury of having a clearly defined end process state. We take the process from our clients and adapt it into our functionality in our operations floor. So we needed a system to be able to manage all of that.
Andre Zeibari: And finally, but certainly not least, the architecture needed to be resilient and redundant. We needed to minimize the risk of any data integrity issues, so we focused on having systems that were redundant where necessary and as resilient as possible. We pushed contextualization to the edge, which allowed us to have a more modular infrastructure that we could develop once and reuse on different floors, different buildings, different locations, and new future sites. Pushing the contextualization and all of the capabilities to the edge, as close as possible to the process equipment, also minimized the risk of any data drop between the floor or building we were in and the centralized data center.
Eric Reisz: In addition to those business requirements, we needed to think about what our data needs were and what we needed our data to do. As a CDMO, data is the lifeblood of our business, and we needed a well-developed data model to make sure we were operating effectively and efficiently. The way we approach this is by asking ourselves these five questions. What data do we have? What does it mean? How is it related? What can it tell us? And how do we take our data and put it back to work for us? As a startup, we didn't really have any end user asks yet. So one of the things we did is we sat down and thought through this process, and we developed a set of user requirements based on capability.
Eric Reisz: So first and foremost, this being a data management system, we needed to focus on data collection, data contextualization, and effective dissemination to end users and decision makers. One of the other things that's key there is being able to identify which data belongs to which client. As a CDMO with potentially hundreds of clients, we can't share data between clients. So we were able to use a UNS structure and assign metadata via MQTT to identify which data belongs to which client and prevent sharing of data between those clients through logical controls, which was very effective and very efficient.
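One way the client-segregation idea Eric describes can work is to carry client ownership in the UNS topic itself, so that logical controls key off the topic path. This sketch is an assumption-laden illustration (segment order and names are invented, not CBM's actual namespace):

```python
def uns_topic(client_id, site, area, line, metric):
    """Build a UNS-style topic path with the owning client embedded
    as the first segment, so authorization can key off the topic."""
    return "/".join([client_id, site, area, line, metric])

def can_read(user_client_id, topic):
    """Logical control: a user may only read topics whose first
    segment matches their own client id."""
    return topic.split("/", 1)[0] == user_client_id
```

In practice the same idea can be enforced at the MQTT broker with per-client topic ACLs rather than in application code.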
Eric Reisz: The other thing that's huge in the pharma industry is that we're a regulated industry. So our validation approach and compliance is key to how we operate our business effectively and efficiently. We needed to make sure that we accounted for all this in our user requirements and made sure we had capabilities to fulfill all of these in our tech stack.
Andre Zeibari: Sorry guys, I was talking on mute. So the DMS, to give a little bit more information about exactly how it was built and what it was: it was an overarching system that encompassed multiple technologies to manage all of the operational data in the facility. It had the primary repository and the systems for gathering all of the process data from the shop floor and laboratories, both time-series and non-time-series data. A separate subsystem existed that was dedicated entirely to equipment and environmental variables; that was our environmental monitoring system. This gave us the best of both worlds: we had all of our data in one place, yet we had two separate interfaces dedicated to different sets of users. Now, although the DMS is the overarching system, it was actually the second phase of the development of our data gathering and historization.
Andre Zeibari: We initially started on a much smaller scale, with just the equipment and environmental monitoring on one floor for one lab, and we relied entirely at that time on Ignition coupled with some Opto 22 remote I/O cards. The great advantage we had here, with using it this way and with using Ignition specifically, was our ability to come back and re-architect a larger system, which then enveloped the existing laboratory floor. We were able to migrate all of the data points to the newer system with almost no data loss due to downtime. So in order to choose the architecture, we went through a comprehensive selection process. We focused on open architecture as much as possible, Industry 4.0. We wanted to build an architecture that not only took advantage of all these modern technologies, but was somewhat future-proofed so that we could use it for years to come. Among the several technologies that we chose, the first was obviously Ignition Perspective. For me, this was my second major deployment of Ignition in the cell and gene therapy industry.
Andre Zeibari: It was actually an easy decision to turn to Ignition again, as it's the most flexible platform on which to develop the automation backbone. We chose Opto 22 hardware, which, as we discovered more about it, we saw as the perfect intersection of ideal integration with Ignition and a more standard programmable-control aspect with excellent physical I/O. Canary Historian was used for the tens of thousands of points we historized, and we chose MQTT and Sparkplug B as the overall communications protocol between all edge components and our core systems.
Eric Reisz: So the resulting functionality we got from this tech stack gave us a group of open source technologies which enabled us to build whatever user interfaces were required and answer any user asks that would come in. We had a system which was developed on top-of-the-line cutting-edge technology, and because of the way that all of these different technologies are designed and the modularity of them, it's infinitely expandable. As a cell and gene therapy CDMO, we are limited to geographical areas for treating our patients, so we would eventually have to expand to a global organization. Because of the tech stack we use and because of the way Ignition and Opto and Canary and MQTT operate, we didn't really have to think about that from a geographical perspective. From our perspective as automation administrators, it's one system, one global system, making it easy to deploy and infinitely expandable, which was a huge win for us. One of the other major things that we were able to do is develop a client portal.
Eric Reisz: To my knowledge, no other CDMO has a client portal with the capabilities we had, enabling clients to see reports and access their data, and enabling us to work quickly, effectively, and efficiently with end users and get them any of the deliverables we have. To go through some of the displays we developed a little bit: one of the major asks we had right out of the gate was enabling our end users to go paperless. There's always that desire to be paperless, so to that end, we developed a set of interface displays which enabled us, as asks came in, to take templates or forms, things that were manually being filled out on paper, and develop them within Ignition. Now, related to our validation strategy, everything within Ignition is modular, enabling us to define everything very well and qualify everything individually and then as part of a whole.
Eric Reisz: So what this enabled us to do is take things like cleaning logs and equipment log books and develop an interface which lets the end users fill those out and have them stored in a database, all within Ignition. When we have a new ask come in, all we do is develop a new template to fill out and add a new tab. One of the other big asks we had was from the engineering department. Engineering was responsible for monitoring all the equipment out on the shop floor, ensuring everything was operating the way it should for processes and making changes based on whatever those processes would be. What we're looking at here is our environmental monitoring system, a display specifically designed so that engineering could adjust set points for alarms, adjust criticality, and enable and disable those alarms as they were doing work, as systems went out of use and then came back into use with a new client. This is the pop-up which would enable engineering to change set points. We had an e-sign associated with it and there was an audit trail behind everything, making sure that we stayed in 21 CFR Part 11 compliance.
Andre Zeibari: Great, so the benefits we received from doing it this way were multiple. Ignition wasn't just a win from a technical perspective; it wasn't just a way for us to deploy rapidly and do things in an open manner that made life easier for our engineers and our automation engineers. It was actually a much greater win from the business perspective, because it really allowed for the democratization of process data. It gave us a streamlined approach to managing the data. It gave us the ability to expand the system architecture as the company grew and as new areas and locations were added on. It allowed for faster and more effective data analysis because it made all of that data available in one place and instantly available to the business. That, in the end, results in quicker decision-making and allows a CDMO to be more agile and move much more quickly, which is really the primary purpose, from a client's perspective, of going with a contract organization like that.
Brad Fischer: Thank you, Eric and Andre. That was great. I do have a couple questions about your project. You just mentioned kind of scaling this solution up in the future. Can you tell us a little more about how you built the project to make that possible?
Andre Zeibari: Yeah, great. Thanks for asking, Brad. It's primarily through the modular approach that we took, not only with the hardware layer, but also with the UNS architecture and the deployment of our assets throughout the shop floor and throughout the buildings. We modeled everything as close as possible to the shop floor, which allowed us to develop and deploy each building, each floor, each business area separately. In the future, that will allow us to take the very same approach when deploying new process control panels to new business areas, as well as additional projects within Ignition to new buildings and new sites. And additionally, the rapid deployment capability that Ignition afforded us was key to the success of that strategy.
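The shop-floor-aligned modeling Andre describes maps naturally onto a hierarchical namespace where each building, floor, and business area is its own branch, so new areas can be deployed without touching existing ones. A minimal sketch of that idea in plain Python (all level names below are hypothetical placeholders, not taken from the actual system):

```python
# Minimal sketch of an ISA-95-style unified-namespace path builder.
# Every level name here is a hypothetical placeholder.

def uns_path(enterprise, site, building, floor, area, asset):
    """Join hierarchy levels into a single tag/topic path, validating each level."""
    levels = [enterprise, site, building, floor, area, asset]
    for level in levels:
        if not level or "/" in level:
            raise ValueError("each level must be non-empty and slash-free: %r" % level)
    return "/".join(levels)

# Deploying a new building or floor just means publishing under a new branch;
# consumers subscribed higher in the tree pick it up without redesign.
path = uns_path("CDMO", "Site1", "BuildingA", "Floor2", "LabSuite3", "Incubator01")
```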
Brad Fischer: Great. Thanks. You also mentioned using SQL as well as the new MongoDB Connector. How were those both being used in the application?
Eric Reisz: So within the pharma industry, only roughly 40% of our data is time-series data, which we have a historian for. Relational data makes up the other 60%. That's sample data, results, run files, whatever it might be that's coming from various pieces of equipment being used on the shop floor. That data doesn't always come in in the nicest fashion, and sometimes we have to do a little bit of work to get it into our database in a way that we can then disseminate back to the end users. The connector for MongoDB, and how easy it is to build SQL queries within Ignition, made all of that so much easier, more effective, and more efficient. So being able to take a display, have your batch trends for all your critical process parameters, and then display the relational data associated with that batch right on the same display was a major win for us, especially for the operators and the people, again, who are making the decisions to develop the product as it's being made.
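The pattern Eric describes, relational batch context shown next to the historian trends, comes down to a join keyed on the batch identifier. A minimal sketch with an illustrative schema (table and column names are hypothetical, not the actual system's), using SQLite in place of the production database:

```python
import sqlite3

# Illustrative only: join relational sample results to their batch so both can
# appear on one display next to the historian trend. Schema is hypothetical.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE batches (batch_id TEXT PRIMARY KEY, product TEXT);
    CREATE TABLE samples (batch_id TEXT, parameter TEXT, value REAL);
    INSERT INTO batches VALUES ('B-1001', 'CellTherapyA');
    INSERT INTO samples VALUES ('B-1001', 'viability_pct', 97.2),
                               ('B-1001', 'cell_density', 1.4e6);
""")

# One query drives the relational half of the batch display.
rows = conn.execute("""
    SELECT b.batch_id, b.product, s.parameter, s.value
    FROM batches b JOIN samples s ON s.batch_id = b.batch_id
    WHERE b.batch_id = ?
    ORDER BY s.parameter
""", ("B-1001",)).fetchall()
```

In Ignition this kind of query would typically live in a named query bound to the display, with the batch ID passed in as a parameter.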
Brad Fischer: Great. Thanks, Eric. All of the projects that we spotlighted today were featured in the 2023 Discover Gallery. And not only that, each of them won a Firebrand Award, which is reserved for the best of the best. For anyone unfamiliar, the Discover Gallery is a video showcase that shines a spotlight on ingenious industrial automation projects that were created with Ignition. Essentially, the Discover Gallery is our way of honoring and encouraging all the brilliant minds that create solutions with Ignition. You can follow that URL at the bottom of the slide to explore more of the projects. There are some truly amazing projects there for you to explore and get inspired by, so I highly suggest you check it out. And if you're interested in submitting a project for 2024's Discover Gallery, I encourage you to get permission from your customers now. And really, if you're an integrator who wants a high-quality case study video to tell your project's success story, getting into the Discover Gallery is a great way to do that.
Brad Fischer: We call Ignition users the Ignition community because there's a real back and forth between its members. They see what other people are doing with Ignition and get inspired to build even bigger and better projects. And each of the three projects we've seen today really emphasizes just how much can be done with the platform. So if you've never tried Ignition, you can download a free trial of the most recent full version, Ignition 8.1.33, which features the brand new Google Map Component. It's quick to download, takes about three minutes, and you can use it in that trial mode for as long as you want. So you can dive right in and explore the platform that made it possible to build all the unique projects we saw today. And we have a ton of learning resources to help you learn all about how to use Ignition. I'll mention just a couple. There's Inductive University, which is a free online training website with hundreds of training videos so that you can learn Ignition step-by-step at your own pace.
Brad Fischer: And there's also a comprehensive online user manual that you can refer to at any time. For those of you outside of North America, we want you to know that we have a network of international Ignition distributors who provide business development opportunities and sales and technical support in your language and time zone. If you want to learn about the distributor in your region, please visit their website listed here on the screen, or you can contact our International Distribution Manager, Yegor Karnaukhov. And for those of you in Australia, we're acquiring the assets of Ignition distributor iControls and are excited to be opening an office in Brisbane, Australia. So very soon, you'll be able to get support from IA directly. If you'd like to speak with one of our account representatives here at our headquarters in California, please call 800-266-7798.
Brad Fischer: Now let's get to the Q&A. So to start out with, Dylan, I've got a question for you. We had a question come in about using Excel import/export instead of a CSV for the alarm configuration. Was there any research or investigation that you had done into that or any particular reason you had chosen CSV?
Dylan Powers: So we chose the CSV because there are some native scripting functions and methods that support the CSV directly as opposed to trying to go through and build something custom for a solution in which there's already a native scripting method within Ignition. So that's what led me down that path.
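Ignition does ship CSV-aware scripting helpers natively (for example, `system.dataset.fromCSV`), which is the point Dylan makes. As a rough illustration of the parsing step in portable plain Python, with hypothetical column names:

```python
import csv
import io

# Sketch of reading alarm-configuration rows from CSV text.
# Column names are hypothetical; inside Ignition, native helpers such as
# system.dataset.fromCSV would typically handle this step directly.
csv_text = """tag_path,alarm_name,setpoint,priority
Plant/Area1/Temp,HighTemp,85.0,High
Plant/Area1/Pressure,HighPress,120.0,Critical
"""

def parse_alarm_csv(text):
    """Return a list of alarm-config dicts, coercing numeric columns."""
    reader = csv.DictReader(io.StringIO(text))
    configs = []
    for row in reader:
        row["setpoint"] = float(row["setpoint"])  # numeric set point
        configs.append(row)
    return configs

configs = parse_alarm_csv(csv_text)
```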
Brad Fischer: Sure, sure. That makes a lot of sense. You also mentioned, Dylan, that you were using some Python scripts to move the older alarm records to different tables. Is there a particular reason why you used Python versus trying to use something like a stored procedure on the SQL side?
Dylan Powers: Well, ultimately, they are stored procedures on the SQL side, but we're initiating those via Python scripts.
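The archival step Dylan describes lives in SQL stored procedures invoked from Ignition scripts (for example, via `system.db.createSProcCall`). As a self-contained sketch of the equivalent logic, shown inline with SQLite rather than a stored procedure, and with a hypothetical schema:

```python
import sqlite3

# Sketch only: move alarm records older than a cutoff from the live table to
# an archive table. Table and column names are hypothetical; the real system
# does this inside a SQL stored procedure triggered from an Ignition script.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE alarm_events (id INTEGER, eventtime INTEGER, msg TEXT);
    CREATE TABLE alarm_events_archive (id INTEGER, eventtime INTEGER, msg TEXT);
    INSERT INTO alarm_events VALUES (1, 100, 'old'), (2, 500, 'recent');
""")

def archive_before(conn, cutoff):
    """Copy rows older than cutoff into the archive, then delete them."""
    with conn:  # one transaction, so the copy-then-delete move is atomic
        conn.execute(
            "INSERT INTO alarm_events_archive "
            "SELECT * FROM alarm_events WHERE eventtime < ?", (cutoff,))
        cur = conn.execute(
            "DELETE FROM alarm_events WHERE eventtime < ?", (cutoff,))
    return cur.rowcount

moved = archive_before(conn, 200)
```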
Brad Fischer: Okay, that makes plenty of sense. So I do have another question here. Is it possible to get an estimate of how many development hours these solutions took to build? So if all of you want to just kind of be rolling that around in the back of your mind, we'll circle back to that in a minute. But Eric, I have a question for you. How do you compare the Canary Historian against Ignition? What do you see as kind of the pros and cons there?
Eric Reisz: As far as a Historian system or just working together?
Brad Fischer: Yeah, let's talk about how those two work together and why one might pick Canary over the Ignition Historian.
Eric Reisz: Okay. We actually did start out with the Ignition Historian when we first got our initial setup running. It works very well for small-scale stuff. It's SQL-based, which makes it easy to get in and out of the data. We wanted to go with Canary because they have a much larger tool set, especially with the calculations and events you can frame out. It's also a very robust, large-scale data historian, something we could use for our entire organization as we scaled up.
Brad Fischer: Right. And I'm assuming you used that third-party Canary Module that's available in the third-party module showcase, correct?
Eric Reisz: That is correct. Yeah. That actually made it extremely easy to interface between Ignition and Canary. Canary obviously also talks MQTT. So essentially what we did is we had Ignition on our edge layer, which contextualized the data in a UDT. We communicated the information via Sparkplug B up to our broker at the data center level, and then Canary and Ignition could both read it simultaneously. It cut down our validation of point additions significantly, which was a huge help, a huge time-saver, especially since we were very understaffed. Ignition and Canary work extremely well together, so I very much enjoyed building that connection and being able to operate with both systems.
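In the edge-to-broker flow Eric describes, the edge gateways and the central broker exchange data over the Sparkplug B topic namespace, which the spec fixes as `spBv1.0/<group>/<message_type>/<edge_node>[/<device>]`. A minimal sketch of building those topics (the group, node, and device names here are hypothetical):

```python
# Sketch of the Sparkplug B topic layout per the Sparkplug B specification:
#   spBv1.0/<group_id>/<message_type>/<edge_node_id>[/<device_id>]
# The specific group/node/device names below are hypothetical.
SP_NAMESPACE = "spBv1.0"
SP_MESSAGE_TYPES = {
    "NBIRTH", "NDEATH", "DBIRTH", "DDEATH",  # node/device lifecycle
    "NDATA", "DDATA",                        # node/device data
    "NCMD", "DCMD",                          # node/device commands
}

def sparkplug_topic(group_id, message_type, edge_node_id, device_id=None):
    """Build a Sparkplug B topic string, rejecting unknown message types."""
    if message_type not in SP_MESSAGE_TYPES:
        raise ValueError("unknown Sparkplug message type: %s" % message_type)
    parts = [SP_NAMESPACE, group_id, message_type, edge_node_id]
    if device_id:
        parts.append(device_id)
    return "/".join(parts)

topic = sparkplug_topic("Site1", "DDATA", "EdgeGateway01", "Incubator01")
```

Because Canary and Ignition both subscribe to the same broker under this namespace, a new point published by the edge appears to both consumers at once, which is what cut down the validation effort.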
Brad Fischer: Fantastic. Yeah, thank you. So before we wrap things up here, I also wanted to mention our exciting new video series called The Ignition Effect. This series showcases the ripple effects Ignition has beyond its immediate benefits, featuring interviews with people who use it every day. There are several episodes up on our site right now at the URL listed on the screen, and we'll be releasing new episodes regularly into the next year. We'll be back on November 21st with a webinar about Ignition Edge, so be sure to keep an eye out for that so that you can register. Until then, stay connected with us on social media and subscribe to our weekly news feed email. You can also stay up to date on Ignition and Industrial Automation through our blog, articles, case studies, podcasts, and more. There's a ton of helpful content for you to explore on our website, so be sure to check it out. Thank you so much for joining us, take care, and have a great day.