Event Streams Module: Unlocking Next-Level DataOps in Ignition

57 min video  /  55 minute read
 

Speakers

Travis Cox

Chief Technology Evangelist

Inductive Automation

The arrival of the Event Streams Module has brought Ignition’s DataOps capabilities to a new level of ease and organization. By enabling you to build a communication pipeline between various sources and handlers in a low-code/no-code environment, Event Streams unlocks vast possibilities for managing event-driven data. If you’re trying to wrap your mind around all that you can do with this module, or where to start, then this webinar is for you.

Join us for the deepest dive yet into the versatile Event Streams Module, led by Inductive Automation’s very own Chief Technology Evangelist. It’s sure to be the year’s biggest event about event-driven data!

  • See a comprehensive demo of Event Streams in Ignition 8.3
  • Explore an exciting array of sources and handlers
  • Learn about extending Event Streams via the SDK
  • Discover connectors, data buffering, and more

Transcript: 

00:00
Travis Cox:
Hello, everybody. Thanks for joining us for our webinar here today on "Event Streams Module for Ignition 8.3, Unlocking Next-Level DataOps in Ignition." I'm so excited that you guys are joining me here today. My name is Travis Cox. I'm the Chief Technology Evangelist here at Inductive Automation. And over the years, I've worked very closely with integrators and end users to help them build their projects and to solve challenges. And a lot of that is how to work with data and how to move data around between different systems and to make the most out of the data. And so I'm really excited to share about how we're expanding that with Ignition. And today, I want to share some of the expertise that we have around this new Ignition 8.3 module by exploring kind of the art of the possible and give you a sense of really what it can do. So we've got a jam-packed agenda here today. I'm going to give you a quick introduction to our software Ignition and our company, Inductive Automation, for those of you who are new. Then I'm going to share some important background on the Event Streams Module, which we released last month alongside Ignition 8.3.

 

01:01
Travis Cox:
I'm going to take you through an extensive demo of the Event Streams Module and show all the various ways that you can use it and how it works. And then I'm going to tell you about a few other resources, and we'll conclude with questions. So in case you're not familiar with Inductive Automation, here are a few facts about us. We are a software company. We focus on our software Ignition. That's all we do. We don't do any hardware or implementation. We want to make the best software platform for industrial integration. And our software Ignition is used by 65% of the Fortune 100, which means that it's being used every single day inside some of the world's biggest companies. We have over 4,000 integrators worldwide in our integrator program. We have a diversified customer base across all industries with thousands of Ignition installations in over 140 countries. And we've been in the industry for over 22 years now. And we have just about 400 employees in our two offices, in our headquarters in Folsom, California, and in our new office that we opened a couple years ago in Brisbane, Australia. Our software is called Ignition, a universal industrial application platform for SCADA, MES, IIoT, and much more.

 

02:10
Travis Cox:
It acts as a central hub for everything on the plant floor and beyond. You can build any kind of industrial application with it. It's web-based, it's web-managed, it's web-deployable to desktops, industrial displays, TVs, mobile devices, phones and tablets, anywhere you want to get that data. And it has an unlimited licensing model, it's cross-platform, and it offers industrial-strength security and stability. Okay, so with Ignition over the years, one of the things that people have always been trying to do with Ignition is to move data in and out of the platform and get it around from different sources. The first thing that we did at the very beginning was move data from PLCs and OPC servers to SQL databases. We have a strong set of PLC drivers, and we continue to expand that. This allows us to move data between PLCs and OPC servers and leverage the power of SQL to work with any database that's out there. Now, over the years, we've added many more connectors, such as the ability to connect to REST APIs. We've added MQTT, and that's been really crucial for Ignition. And now we've added Kafka with Ignition 8.3. And as we go forward, we plan on building more integration with more cloud-native services, such as Google PubSub, Azure Service Bus and Fabric, AWS SQS and SNS, and much more.

 

03:28
Travis Cox:
Of course, this multiplies the sources and the amount of data coming through Ignition. With the need for more data sources, we realized something, that we don't want to be prescriptive for how you should move data around. We also don't want you to write a lot of code for how to move data around as well. We need a simple and powerful framework to move data around in a low-code, no-code environment that is centrally managed, easy to troubleshoot, and provides a lot of flexibility. So that's why we created a solution called the Event Streams Module for Ignition 8.3. If you look prior to Event Streams, as people were moving data around, they did that in a lot of different unique ways. But it required a lot of expertise in Ignition to know all the different parts of Ignition and to put configuration in multiple places. So if you look at a couple of these examples that we're showing here, one of the things people want to do is send tag data to external REST APIs. Well, that would mean that they have to write tag change scripts or they might write timer scripts that use Python code to grab the data from the tags and then to go and send that, post that to a REST API that's out there.
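As a rough sketch of that pre-Event-Streams approach, a gateway tag change script might build and post a payload along these lines. The endpoint URL, tag path, and values here are invented for illustration, and the standard library's urllib stands in for Ignition's own networking functions:

```python
import json
import urllib.request

# Hypothetical tag-change values; in a real Ignition tag change script these
# would come from the event object, not hard-coded literals.
tag_path = "[default]Motors/Motor1/Amps"
new_value = 12.7
timestamp = "2025-01-15T10:30:00Z"

# Build the JSON body the script would POST to the external REST API.
payload = json.dumps({
    "tag": tag_path,
    "value": new_value,
    "timestamp": timestamp,
}).encode("utf-8")

# Prepare the request; calling urllib.request.urlopen(req) would send it.
req = urllib.request.Request(
    "http://example.com/api/tag-data",   # placeholder endpoint
    data=payload,
    headers={"Content-Type": "application/json"},
    method="POST",
)
```

The point of the example is the overhead: every piece of this (payload shape, endpoint, trigger timing) had to be hand-coded and maintained in scripts scattered through the project.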

 

04:37
Travis Cox:
So that involved writing code for that to happen. And you've got to kind of figure out when you want to send that data. Another example was that we want to receive JSON messages from MQTT. I want to subscribe to a topic that's just pure JSON. And in order to do that, we have to create a custom namespace in Ignition. And we have to write the JSON message to a tag or a set of tags. And with MQTT, we've always been pretty prescriptive about how we work with that in that the data coming in and out is going through our tag system. But what if you didn't want to go to the tag system? What if you wanted to go direct out, publish data directly out to MQTT? Well, you could do it, but it required scripting and again, possibly timer scripts or tag change scripts to make it all automatic. So, as you can see, we've had these connectors. There's different ways that we can move data around, but we've been pretty prescriptive for how that works. Another one that we've seen that's been pretty predominant lately is to be able to receive data from a REST API at Ignition. We want to add an endpoint to Ignition that a third-party solution can post data to, that we can bring in and handle and do something with that data.

 

05:44
Travis Cox:
Well, that required the Web Dev Module, it required writing Python code to handle it. So you can kind of see a theme here, right, is that in order to really do the more advanced ways of moving data around between the different sources, we had to kind of figure out where that's going to be configured in Ignition, and often that meant we're going to write some code. Now, all of these examples and many more are now extremely easy to do with Event Streams, and we'll show how that works here today. So there are many high-level reasons to use the Event Streams Module in your Ignition system. First of all, it really levels up Ignition's DataOps capabilities. It makes it quick and easy to set up data pipelines and to coordinate with other teams, which is critical for data ops. It's highly extensible because its capabilities will keep expanding as new modules are introduced, and you can extend it through the SDK, as I'll demonstrate here today. And it promotes highly decoupled architectures, which is great for scalability. Event Streams let you connect your data and systems intuitively. It gives you a centralized way to map event data with greater ease and speed.

 

06:47
Travis Cox:
It's ready out of the box with an intuitive user interface, and you can quickly connect subsystems through a pipeline where you can filter, transform, and batch data however you want in a low-code, no-code environment. It helps you save time and effort connecting and managing data from external sources, and it allows you to create data endpoints from a range of sources like tags, SQL databases, REST APIs, IoT services, cloud providers, and much more. With its centralized event management, you can connect sources and scale systems quickly, all from inside Ignition. With Event Streams, you can easily see where your data is coming from and where it's going to go, and you can handle tag changes and manage database events and alarms all in one place. And last but not least, Event Streams allows you to integrate all of your systems. Because Event Streams can handle data from APIs, Kafka, MQTT, and other message queues, and push it to any internal or external source, it doesn't have to be a tag. And this means that you can subscribe to and publish data with additional contexts like metadata to now more systems. So let's talk about how Event Streams work before we get into demonstration.

 

07:50
Travis Cox:
An event stream is essentially a pipeline, and it maps data from a source to multiple handlers. In the pipeline, you can coerce, you can transform, filter, and you can batch data however you would like. Both the sources and handlers are extensible and decoupled. So extensible means that we can add new sources and handlers through our SDK, whether Ignition does that or whether you would want to create or introduce your own module. And being decoupled means that sources don't have to know about handlers and handlers don't have to know about sources. And that means you can easily connect disparate systems together. So let me give you an example here. Obviously, we have one source. So we set up a pipeline where the data can come from, let's say, an MQTT subscription. So data is coming in from MQTT. Once the data is published, that comes into the source, into the pipeline in Ignition, and Ignition is going to handle that. That gives us the opportunity to filter that data. We can then transform or manipulate that message. We can batch it to make sure that it's going to get to our final destination. And then we can handle that data like writing it to a SQL database or sending out to a web service or publishing it to a different PubSub system like Kafka.
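The pipeline shape described here (one source feeding filter, then transform, then one or more handlers) can be sketched as a few lines of plain Python. All names here are invented for illustration; the real module wires this up through designer configuration, not code:

```python
# Toy sketch of the Event Streams pipeline shape: source event -> filter ->
# transform -> fan-out to decoupled handlers.

def run_pipeline(event, filter_fn, transform_fn, handlers):
    """Pass one event through the filter, transform it, then fan out."""
    if not filter_fn(event):
        return None              # event filtered out; no handler sees it
    event = transform_fn(event)
    for handler in handlers:
        handler(event)           # handlers are decoupled: each just takes an event
    return event

# Example wiring: drop test events, stamp the rest, collect the results.
results = []
run_pipeline(
    {"machine": "filler", "status": "running"},
    filter_fn=lambda e: e["status"] != "test",
    transform_fn=lambda e: {**e, "handled": True},
    handlers=[results.append],
)
```

Because the handler is just "something that takes an event," adding a second destination is one more entry in the list, which is the decoupling the transcript is describing.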

 

09:05
Travis Cox:
We can do anything that we want with this. So let's look at a couple examples of that. One would be that, just as I was mentioning, I'm going to get data from MQTT, bring it in, and we're going to write that record to a SQL database, be able to store that record or update a record in a database very easily. And this now, as you can see, would avoid having to go to any tag. I can subscribe directly to that PubSub system and write to a database without having to go through the tag system. Now, of course, tags are a big part of this, and you may want to, when a tag changes, you want to take that data and write it somewhere, like push it out to web service. Or you may want, of course, to bring data in and write it to a tag. There are many possibilities for what you can do because of this framework that allows us to uniquely connect these different sources and handlers. Now, a couple of important things about the Event Streams Module. One is that you can define one or more streams. So you can have many, many pipelines that are running in parallel in Ignition.

 

10:07
Travis Cox:
It is designed to be very scalable and highly performant to handle a large amount of data moving through this system. All the streams are defined in one place, and you do that in the designer. There's a new area called Event Streams, and I'll show you this. And you can easily see all the streams you have, and you can name them what you want, so that when people are trying to figure out how we're moving data around Ignition, we have a pretty good idea. We can go through those and see exactly what's going on. Every stream that you create is, or a pipeline you create, it's going to have one source. So that's where is the data coming from. Then you can have one or more handlers, and that's where is the data going to go. So once the data comes in from that source, we can send it to multiple destinations at the same time. So it's really very easy to work with and very powerful with that. Now let's talk really quickly about the sources and the handlers that we have right now. We've got quite a few different sources that we've come out with. This is all from the initial release.

 

11:07
Travis Cox:
We'll be adding more as we go forward. But right now you can bring data in from tags inside of Ignition. So when tags change, that could be a scalar tag, it could be a UDT tag that's changing, it could be a document tag, anything that's changing, we can bring that into a pipeline. Event Listener is listening for events, whether that is somebody from a Perspective application pressing a button and sending an event into a stream, or whether that's an alarm event that we want to publish to a pipeline and handle that alarm event. Or if we're sending a message through our gateway network, if we have multiple Ignition servers connected together, and we want to send data and send events through that gateway network and have it handled on a pipeline somewhere else, that is what the Event Listener is for. We have HTTP, and HTTP is the example I was saying before where we had to use Web Dev. Now, with Event Streams, it's built in. It still uses the Web Dev Module, but it's built in where it'll add the endpoint to Ignition automatically for you. Again, I'll show you an example of this here today where then I can post data in and we can then handle that information. So a third-party system can post data to Ignition.

 

12:19
Travis Cox:
MQTT is now the ability to work with vanilla MQTT. I can subscribe to any topic that I want. It could be any payload format that we want. It could be a pure JSON message, a string message, it could be XML, it could be whatever you would like can now come into a source. And this is really unique because prior to having Event Streams with MQTT, we could handle JSON data, but it had to go into tags. Outside of JSON data, there wasn't a lot of support for other formats. So this is really important that we can now handle anything with MQTT. Of course, Sparkplug is still an important payload specification for MQTT. We can handle that here where we can listen for a particular message coming in from Sparkplug and handle that data. And then last but not least is a new module we added in 8.3 as well for Kafka. We've had a lot of customers ask for Kafka because they want to receive data in from their business side through a Kafka message and handle that in Ignition. So those are all the streams that we have, or the sources that we have here on the initial release.

 

13:28
Travis Cox:
As I said, this is extensible. We will be adding more sources as we go forward. For example, Google PubSub and AWS and Azure, their service buses will be able to connect to those. OPC UA has events. We'll be able to tap into the event streams and bring those sources in. So this will continue to grow as we go forward, but this is what we have right now. And then lastly here, let's talk about the handlers before we get to the demo. The handlers are the destination. Where do we want that data to go? And you'll notice that some of the handlers are also sources, right? And that I want to subscribe from them, and I also want to send it to them. So there's some overlap because the sources are bringing data in, and the handlers are sending data out. So we've got right now the ability to write to a database where we can insert or update a record in a database. We can send a gateway event. So I can actually, at the end of the pipeline, we can send to another event stream. We can kind of chain event streams together by sending out the event to somewhere else on the same server or different servers.

 

14:31
Travis Cox:
We can use the gateway messaging if we wanted to send a message to the Ignition gateway and have it handled through the message handler so it can get data out. Maybe we bring that into a Perspective client or something like that. We have the ability, of course, to write to HTTP, to send it out to a REST API. We can publish to Kafka. We can write to our logging system, so you can actually see in the logs, you can see a message from the handler of a pipeline. We can, of course, publish to MQTT, and that could be any publishing on any format that you want, any topic payload there. And we also, of course, can write to a tag. With that, you can write to a document tag or any kind of tag in Ignition. And then last but not least is we still have the ability to use scripting. It's still powerful. Some people want to use more advanced features, and you could write Python code to handle the data and do whatever you want using our standard system functions or Python libraries that are there that we can leverage. So there's quite a few different sources and handlers that we have right now, and this offers a lot of flexibility with being able to move data around.

 

15:43
Travis Cox:
We'll show some of these examples here today. So this is kind of what I want to do for the demo. We're going to explore kind of all that Event Streams has to offer. First, we're going to show just setting up our first Event Stream. What does it look like? How do we configure it? And when I run it, how does that all work? Then we're going to show some of the features of the pipeline, like filtering and transforming and buffering the data, as well as doing error handling. And then we're going to look at a lot of different sources and handlers in action. So we'll show you different ones that are there and the power behind them. And then we'll show some scalability by sending in 100,000 values through the Event Stream to kind of see that being handled and how that works. And then lastly, I'll show an example of extending the Event Streams Module by adding a source and a handler that we have on our GitHub page. So as a place to start, if you wanted to explore building your own module, you could use our samples and start from there. So hopefully with the demo you get a good sense of all that Event Streams has to offer here today.

 

16:54
Travis Cox:
Alright, so let's get to our demo. So what we're going to do is go over here to my designer. And as I mentioned, let me minimize this. As I mentioned, when you add the Event Streams Module to Ignition, where of course you install Ignition 8.3, you're going to see a new area in the designer for event streams. And this is where you can configure one or more streams. Now you can organize those streams just like you'd organize views and scripts and all that into folders so that you can easily kind of see the different kind of ways you want to name and provide a hierarchy for that. It's all up to you. I'm going to create a new stream here. With every stream, as I mentioned, you give it a name and you give it a source. So let's start with a simple one. And that is a tag source. So I want to bring data in from a tag. And so we're going to use our tag event here as a source. And so you just select the source you're interested in. You create the stream. And then now you can see kind of this pipeline action in that when the tag event comes in, so the configuration here is which tag or tags are you interested in looking at.

 

18:02
Travis Cox:
Then once that tag comes in, we can encode that data so we can specify what the format of that data is supposed to be. We can then filter that data. We can transform it. We can buffer it. We can then go in and provide some handlers so we can do something with that data. Let's do something really, really simple. Let's take a UDT. So I'm actually going to go here to a motor UDT. I'm actually going to just copy that whole motor path because if I look at subscribing to the UDT, it's just a full JSON object. So I'm going to bring in motor one. When that changes, it's going to be a JSON object. And I want to do something with it. So I've got to go over here to a handler and I've got to go and write that somewhere. So for that, there's lots of different options. Let's say that we want to just send this out as a pure MQTT message. So we're going to say that this is motors, motor one here is going to be the topic namespace for that. And we're going to send it out to, I have the MQTT Engine and Transmission Modules installed here and I have it connected to a broker on my local machine. And I've also got MQTT Explorer here.

 

19:16
Travis Cox:
It's where I can actually see that publish being sent out to that broker. So it's real simple, I'm just taking a tag and writing that as a pure JSON message to MQTT. Of course, this was possible with the Transmission Module, but just kind of starting with one example here. I'm also going to do one more handler, which is write to a log, a log message. So we're going to do, this is my tag handler, and we're going to do an info message to our logging system. And we're going to write the entirety of the data. That's the whole JSON message that's coming in there. So this is just an expression as to what do I want to put? What's the message that I want to put in the log? So I've done two things. I have two places that it's going and it's one source for when that motor changes, it's going to then write it to those handlers. So once we go ahead and save, then you can see up here that you have some statistics that are going to be there. So it's going to show us what's being sent through that. And if I look at the status, I can actually see the events received and what's been going on.

 

20:25
Travis Cox:
So I actually do have one event that was the initial event that we had come through, which is the initial value here when this started up. And you can kind of see that that has then got sent to both of those handlers. I don't have any error messages here. So everything was successful. So if we go and look at our logs, we're going to see there's our tag handler and there's the JSON message for what got sent to the logs. And if I come over here to MQTT Explorer, there's my motors/motor one topic, and there's that straight JSON value for that UDT, which has all the tags in it and, of course, the quality and timestamp of that. So very, very simple to kind of set up our first stream. So there's lots of different possibilities that we can do here. But what I want to do is kind of show you some of the things in between here in terms of the filtering, the transforming, and, of course, what that means as well as potentially doing error handling. So let's take a look at a couple of those. I'm going to create another stream here. We're going to show actually bringing data in from a REST API. So I'm going to use HTTP here. I'm going to call it HTTP.

 

21:36
Travis Cox:
And so this is going to add an endpoint to Ignition. So this is actually really cool in that I can post. Once we save this stream, there's a URL to get to this, and I can post data to that URL, and data will come in. And with that, I can then handle that data and do whatever I want with it. So the configuration here is I'm not going to require HTTPS. I don't have TLS turned on here or authentication, but sometimes you want to have authentication, so you don't just let anybody post this endpoint, or you want to require it to be a secure connection. You can certainly do that. Now, once that comes in, I'm going to have it be a JSON, but you can have it be if it's a string, a byte array, a JSON object, you can encode it how you want. And then now you have the ability to then filter and transform. So before I do the filter and transform, let's just go over here and handle this data by writing it to a database. Let's show the database handler where I'm going to write to my local database. I have a MariaDB database to a table that's called Kafka messages.

 

22:41
Travis Cox:
And I know it's not Kafka here, but I'm going to use that table. And so I'm going to write this. I'm going to write the timestamp in there. And I'm going to map a couple of columns. Machine is going to be a string and it's going to go to the event data machine. So part of the JSON message that I'll post is machine, and then we have temperature and we have status. So those are the three things that we'll bring in. Temperature is a float and status is a string. And so we'll put all three of these in here. Temperature and then status. Okay, so now I've got this from the insert new record. So when I receive that post message, I'm going to then handle it by inserting a record into the database, into this table here. And it will go through the store and forward system, which is great so that if the database was down, we would still make sure that it's going to get stored eventually. So now that we've got that configured, let's go ahead and save it. And so I've got nothing, as you can see, it's all zeros here, nothing's coming through yet on this particular pipeline. But again, now we have both of them running.
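The column mapping described here boils down to pulling fields out of the posted JSON and inserting them as a row. A minimal sketch of that effect, using Python's built-in sqlite3 in place of the MariaDB connection Ignition manages (table and column names mirror the demo, but the schema here is assumed):

```python
import sqlite3

# The kind of JSON event the demo posts to the HTTP stream.
event = {"machine": "extruder", "temperature": 72.5, "status": "stopped"}

# In-memory database standing in for the gateway's MariaDB connection.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE kafka_messages ("
    "  ts TEXT DEFAULT CURRENT_TIMESTAMP,"
    "  machine TEXT, temperature REAL, status TEXT)"
)

# What the database handler effectively does: map event fields to columns.
conn.execute(
    "INSERT INTO kafka_messages (machine, temperature, status) VALUES (?, ?, ?)",
    (event["machine"], event["temperature"], event["status"]),
)

row = conn.execute(
    "SELECT machine, temperature, status FROM kafka_messages"
).fetchone()
```

In Ignition this mapping is configured in the handler UI rather than written as SQL, and the insert is routed through store and forward so a database outage doesn't lose the record.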

 

23:51
Travis Cox:
We have two pipelines running in parallel. And as data comes in, they're going to run. And they could be running lots of times as data is coming in. Let's go ahead and send that. I'm going to use a tool like Postman here, it's called Insomnia, where I'm going to send that JSON message. There's my extruder machine, my temperature, and my status. And you'll notice that I'm going here. This is the URL to get to that endpoint. It's on my local machine. So localhost 8088, it's slash system slash event stream for that module. And then this is the name of my project, called demo. And the name of my stream is called HTTP. So I'm going to send it to that. So let's go ahead. I'm going to move this down a little bit. And you can see if I press send, then I now have one that came in. And it has now then gone to the handler that got written to the database. So if I go and take a look at my database and refresh, I can see there's my extruder that I have here. Very simple. Now that record got written to the database. That's pretty awesome, right? So how quick and easy that is to go.
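You don't need a GUI tool for this; any HTTP client can hit the stream's endpoint. Here's a sketch that builds the same POST (the URL follows the pattern spoken in the demo, localhost:8088 plus /system/event-stream/, then project and stream name; check your own gateway for the exact path). The request is constructed but not sent, since there's no gateway here:

```python
import json
import urllib.request

# Endpoint pattern as described in the demo: gateway host/port, the event
# stream module's path, then the project name and stream name.
project, stream = "demo", "HTTP"
url = f"http://localhost:8088/system/event-stream/{project}/{stream}"

# The JSON body matching the demo's three mapped fields.
body = json.dumps({
    "machine": "extruder",
    "temperature": 72.5,
    "status": "stopped",
}).encode("utf-8")

# Calling urllib.request.urlopen(req) would deliver this to the stream.
req = urllib.request.Request(
    url,
    data=body,
    headers={"Content-Type": "application/json"},
    method="POST",
)
```

If you enabled authentication or HTTPS on the source, the request would also need credentials and a TLS endpoint, per the source's configuration.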

 

24:56
Travis Cox:
Now let's talk about the filtering and transforming here. So the filtering is just the ability, if we want to, for a given event, if we want to ignore that event, maybe there's something, a part of it that we don't care about and you just don't want that to be brought to the handler. You want to ignore it. This is your opportunity to do that. And it is right now writing code, Python code, it's simple, but hopefully in the future we can add some more flexibility here. But let's say for example, that if my status is test, I do not want to handle that. I don't want that to go through. So what we can do is simply return whether event data status does not equal test. So if we filter it, sorry, we filter it if it equals test. If it is test, we don't want that data. So it's true if we want to keep it, and if we want to ignore it, we want to filter it out. And so then now let's go ahead and save that. So let's go and let's put "running" here. Oh, I had the wrong way there. So let me go, it does not equal test here.
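As a sketch of the filter script just described, here's the logic in plain Python. This assumes (as the corrected expression in the demo suggests) that a truthy return keeps the event and a falsy return drops it; the event/field names are modeled on the demo's payload:

```python
# Filter sketch matching the demo: drop any event whose status is "test".
# Assumption: returning True keeps the event, False filters it out.
def keep_event(event):
    return event["data"]["status"] != "test"

# A normal event passes through; a test event is ignored.
kept = keep_event({"data": {"machine": "extruder", "status": "stopped"}})
dropped = keep_event({"data": {"machine": "extruder", "status": "test"}})
```

Getting the polarity backwards, as happens momentarily in the demo, silently drops the events you wanted, so it's worth sending a known-good test message after editing a filter.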

 

26:08
Travis Cox:
So if I send that through, then now you can see that went to my handler and that got written to the database. So if we go to my database, I can refresh, there's my extruder at stopped. Okay, now if I go and set this as test, then I'm gonna ignore it because you can see I've got two, but one has been ignored and that did not go to the handler because I don't want to use that kind of message. So that ability to filter is really important to get rid of things that you may not want to move through the stream because you can have lots of data coming in depending on the source that you're dealing with. The other ability here is for us to transform the message where we can add new parts to that message. So for example, here, let's just do something simple, which is I want to add something new to it. I'm going to add a timestamp that's new, which is going to get it from system.date.now. So I'm going to add, this is again using Python code, I'm going to add a new field to that. And I can also call like event.data of, hello world, right?

 

27:14
Travis Cox:
So we could add something like that random and we can have that being sent out. So what I'm going to do, since it's not going to get written to the database, let's create a tag over here. Let's show another way that we can bring data in and write data. I'm going to create a memory tag that is going to be my event message. And it's going to be a document style tag where I'm going to handle that and write the full message to this event, to this tag. Let's go over here to our handler. Let's add a tag in. The tag path is that; I'm going to hard-code that, but it's an expression. So it could come from the message itself as to which tag to write to. I'm going to write the entirety of the data message there and the quality is going to be good, 192, and the timestamp is now. So we'll write that fully qualified value. So let's go ahead and save this. Okay, and so now we've got the transform turned on. So let's go over here. Let's put a message that's, let's do a different machine. Let's do a filler machine. It's got a different temperature. It's running.

 

28:26
Travis Cox:
I'm going to send that through. That went through the filter. It got transformed and then it got received by two handlers. And of course, if I look at this event message here, you can see, let's go open up the actual value. Let me just kind of copy it out here so we can see it. But there you've got my timestamp in there and the hello world that was added to that message. So the transform is the ability to manipulate that payload or change it or do something with it before it actually gets to the handler. Now the last thing I want to point out is once you transform, we can then encode that object again. So I'm going JSON to JSON. But you have the buffering. And the buffering here is to ensure that the data is going to actually get to the handlers, right? We want it to get to the handler. So we have a lot of data coming in. Let's say it was hundreds of thousands of messages coming in. And the handlers are actively busy, like writing it to a database or to another message system, whatever. We don't want to lose any of that data. So the buffer is a way to ensure that we queue it all up so that it does finally get to that handler.
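The transform step from the demo can be sketched as a small function that enriches the event before the handlers see it. The field names here ("timestamp", the hello-world field) follow the demo loosely, and the standard datetime module stands in for Ignition's system.date.now():

```python
import datetime

# Transform sketch: add a timestamp and an extra field to the event payload
# before it reaches the handlers. Field names are illustrative.
def transform(event):
    event["data"]["timestamp"] = datetime.datetime.now().isoformat()
    event["data"]["greeting"] = "hello world"
    return event

event = transform({"data": {"machine": "filler", "status": "running"}})
```

Because the transform runs after the filter and before the handlers, every handler (the database insert, the tag write, and so on) sees the same enriched payload.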

 

29:35
Travis Cox:
And you can look at the configuration for how that works. So the last thing I want to point out here, and I'll show the buffering in just a moment. Look at Kafka. What I want to point out is our error handler. If you did have some error message, especially when handling that data, if you want to handle that, maybe you want to write a message to a tag or have an alarm that goes out, or you want to write to the log system, or you want to then write to a database if there was a failure. You have that ability on the errors to do something with that. So you have complete control for how this pipeline works, but it's all defined in this one place. And we can easily see all the streams that we're dealing with in this one view. So that's very exciting. All right, so let's kind of show a couple of other streams that we can do. A new one that we've added to Ignition 8.3 is Kafka. So let's talk about that here for a moment. So I'm going to bring a Kafka source. And Kafka, it's a PubSub system. We connect to it. We can specify topics that we're interested in subscribing to.
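The idea behind the buffer can be shown with a toy queue: events pile up while a handler is busy, then drain in order so nothing is lost. The real module's buffer behavior (capacity, overflow policy, persistence) is whatever its configuration exposes; this only illustrates the concept:

```python
from collections import deque

# Toy illustration of handler buffering: a burst of events arrives faster
# than the handler can process, so they queue up and drain in order.
buffer = deque()
handled = []

def handler(event):
    # Stands in for a slow destination like a database write.
    handled.append(event)

# Burst of five events arrives while the handler is busy: buffer them.
for i in range(5):
    buffer.append({"seq": i})

# Handler catches up: drain the buffer in arrival order, losing nothing.
while buffer:
    handler(buffer.popleft())
```

The error-handling hooks mentioned above complement this: the buffer covers a slow handler, while the error path covers a failing one (logging, alarming, or writing the failure somewhere you can see it).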

 

30:45
Travis Cox:
This is the source: we're subscribing for messages, and when they come in, we're going to handle them, do something with them. With this, we have to configure a connection to Kafka. So let's go back to our gateway. If I go to Connections and then to this new area we have in Ignition 8.3 called Service Connectors: this is where we can add lots of different service connectors to different systems as we go forward. Google Pub/Sub, Azure services, AWS services, all of that will live in the Service Connectors area, with more to be introduced. Right now we just have the one, a Kafka connector, and I have it already configured. It's going to a Kafka cluster that I'm running locally in Docker, so I'm connected to that broker and ready to go. Let's go back to our designer. So that's the local Kafka connector. For the topic, I want to subscribe to a topic called Ignition messages, wait for a message to come in, and then handle it. And I need to give myself a group ID.

 

31:51
Travis Cox:
So I'm going to call this Ignition group 10 as my group ID, and it's a unique group ID. When that data comes in, it's going to be JSON data, so I'm going to handle it that way; a lot of what we deal with in these pieces is going to be JSON. And then what we want to do is handle that data. For the sake of this, let's write it to the database as well. Going to that local database, into the Kafka messages table: we'll store the timestamp into the timestamp column, and we'll add the three fields from the message, machine, temperature, and status. So we'll do those three again here: string, float, string. I want to bring in the machine part of the message, the temperature, and lastly the status. All right, so now I've got this stream set up. And this is going to be a good place for us to show the scalability, the buffering that's set up with event streams, and just how performant the system is. So let's show bringing a message in. For that, I have some Python code here to produce a message.
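The database handler described above is essentially doing a field-to-column insert for every event. As a rough stand-in for what that mapping looks like, here is a sketch using SQLite in memory; the table and column names are assumptions based on the demo, not the handler's actual implementation:

```python
import sqlite3

# Illustrative stand-in for the database handler: one row per event,
# mapping JSON fields to columns (names assumed from the demo).
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE kafka_messages (
        t_stamp TEXT DEFAULT CURRENT_TIMESTAMP,
        machine TEXT,
        temperature REAL,
        status TEXT
    )
""")

def handle(event):
    """Insert one decoded JSON event into the table."""
    conn.execute(
        "INSERT INTO kafka_messages (machine, temperature, status) VALUES (?, ?, ?)",
        (event["machine"], event["temperature"], event["status"]),
    )

handle({"machine": "conveyor", "temperature": 72.5, "status": "running"})
count = conn.execute("SELECT COUNT(*) FROM kafka_messages").fetchone()[0]
```

In Ignition itself this mapping is configured in the handler UI rather than written by hand; the sketch just shows the shape of the work being done per event.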

 

33:08
Travis Cox:
So it's just going to create a random message, again with the machine, temperature, and status, and publish it to the cluster, and I'll be able to receive it here on this stream. So let's do that. You can see there's my value that was sent to Ignition messages, and this was, again, outside of Ignition. So it published the message there, and I can see that one came in. So on the Kafka side, we got one, and it got written to the database. If I take a look at my database here, let me bring that up and refresh: you can see there's that conveyor, faulted, with the temperature. Let's do another one. Right, so that got handled; there are now two. Let's refresh this, and there's conveyor, faulted, with a different temperature. So it's really easy for us to handle that message. Now again, if I wanted to write a message out to Kafka, say I receive one message and want to write a different Kafka message, I can create another handler. I could write that to Kafka, or MQTT, or to a tag in Ignition.
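The external producer script used in the demo isn't shown on screen, but a minimal version of it might look like the sketch below, assuming the kafka-python package, a local broker, and a topic name of "ignition-messages" (all assumptions, not the demo's actual code):

```python
import json
import random

def make_message():
    """Build a random machine event like the ones used in the demo."""
    return {
        "machine": random.choice(["conveyor", "press", "mixer"]),
        "temperature": round(random.uniform(60.0, 90.0), 1),
        "status": random.choice(["running", "faulted", "idle"]),
    }

def publish(topic="ignition-messages", bootstrap="localhost:9092"):
    # Requires the kafka-python package and a reachable broker,
    # so the import is kept local to this function.
    from kafka import KafkaProducer
    producer = KafkaProducer(
        bootstrap_servers=bootstrap,
        value_serializer=lambda v: json.dumps(v).encode("utf-8"),
    )
    producer.send(topic, make_message())
    producer.flush()
```

With a broker running, calling `publish()` in a loop is all it takes to feed the event stream from outside Ignition.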

 

34:14
Travis Cox:
And if I want to write that to a tag, we can do the same thing: write this event message to a tag. It's really easy to decide where we want this data to go. So I'll write the whole event message, with good quality. And now if I do another one, it's going to write to two handlers, and there, it just updated my tag. These handlers, by the way, run in sequential order. You can see there's an order over here, and I can move them up or down. The reason for that is if one of those handlers fails, you may not want the event to continue on. There's failure handling where, if a handler fails, you can abort, you can retry, or you can ignore. If you abort, the event doesn't continue; if you ignore, the message goes on to the next handler in the list. So it does allow you to have multiple handlers, and they run in the order you want them to. Okay, so let's show the scalability, because the cool part about event streams is that we can send lots and lots of records through.
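The sequential handler chain with its abort, retry, and ignore failure modes can be sketched as a simple loop. This is only an illustration of the semantics described above, not the module's internals:

```python
def run_handlers(event, handlers, on_failure="abort", max_retries=2):
    """Run handlers in order. On failure: abort the rest of the chain,
    retry the failing handler, or ignore it and continue."""
    for handler in handlers:
        attempts = 0
        while True:
            try:
                handler(event)
                break  # this handler succeeded; move on
            except Exception:
                if on_failure == "retry" and attempts < max_retries:
                    attempts += 1
                    continue
                if on_failure == "ignore":
                    break        # skip this handler, go to the next one
                return False     # abort: stop the whole chain
    return True

log = []
def write_tag(e):
    log.append(("tag", e["machine"]))
def write_db(e):
    raise RuntimeError("database offline")
def write_mqtt(e):
    log.append(("mqtt", e["machine"]))

# With "ignore", the failing database handler is skipped
# and the MQTT handler still runs.
ok = run_handlers({"machine": "conveyor"}, [write_tag, write_db, write_mqtt],
                  on_failure="ignore")
```

The ordering matters exactly as described: with "abort", a failure in the database handler would stop the MQTT handler from ever seeing the event.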

 

35:25
Travis Cox:
So I have another script I'm going to run here called load, which is going to push 100,000 records through, all getting written into the database. Let's run that. You're going to see all the messages that got sent to the broker, and now look at all of them coming in. What's cool is that this is being handled, but we can't handle them as fast as they arrive, so they sit in the buffer. The buffer is holding around 500 records at a time as we go forward (it's an infinite buffer that we have set up), keeping those records and making sure they get to the handlers so that ultimately we handle all of these data points. And there we go: we now have 100,000 records that we brought in, and we handled them 200,000 times, right? The tag write as well as the database write. If I go look at my database and do a SELECT COUNT(*) FROM kafka_messages (let me connect here and get the right table name in there), you can see I've now got 100,000 records.
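The buffering behavior in this load test is the classic producer/consumer pattern: events arrive faster than the handler drains them, and an unbounded queue holds the backlog so nothing is dropped. A minimal sketch of that idea (not the module's implementation) using the standard library:

```python
import queue
import threading

buf = queue.Queue()   # unbounded, like the "infinite" buffer in the demo
handled = []

def consumer():
    """Stand-in for the handler side: drain the buffer one event at a time."""
    while True:
        item = buf.get()
        if item is None:       # sentinel value used to stop the consumer
            break
        handled.append(item)   # e.g. write to database / tag
        buf.task_done()

t = threading.Thread(target=consumer)
t.start()

# Burst of events arrives much faster than the handler drains them;
# the queue holds the backlog so no event is lost.
for i in range(100_000):
    buf.put({"seq": i})
buf.put(None)  # signal the consumer to finish
t.join()
```

In the real module the buffer is configurable (size, persistence behavior), but the guarantee being demonstrated is the same: a burst of 100,000 events ends with 100,000 events handled, in order.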

 

36:40
Travis Cox:
And we can send a tremendous amount of data through these streams very easily. Okay, so let's talk about some other handlers. We talked about the database handler. The gateway event handler is where we can send from one pipeline to another: I could send a payload to another event stream on this server, or even on a remote server over the gateway network. Really cool to be able to do that. Gateway message sends a message to a message handler within Ignition. HTTP writes to a third-party API: we can POST or PUT to a third-party API very easily, so we can send the data on to another system. We can send out a Kafka message. We can do a logger, which we showed. And of course we can do MQTT. That's a cool one; I did that at the very first. Let me get rid of these. This could be just like the Kafka message, so I can go straight to MQTT, which is great.

 

37:54
Travis Cox:
I can go from Kafka to MQTT as data comes in. So let's publish a record here, and now it's going to be handled in three different places. It wrote to our event streams Kafka topic, and there's my message in MQTT. Real easy to get that in. So being able to read and write any kind of payload format with MQTT is big; it gives us a lot of flexibility in what we can do. Of course, we showed write-to-tag. And the last one is the script handler. With this one, again, you handle the data, you get the event message and the state, and then you can literally use any of our system functions or Python functions to work with that data. It's kind of the catch-all: if we don't have a built-in handler for something, you might be able to do it through scripting. And ultimately there's the SDK, which we'll talk about next, and we're going to be coming out with lots of additional handlers and sources as we go forward. All right. Before we go on to the next piece, I want to show one more listener, because I think it's really cool that we have this: the event listener.
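To make the script handler idea concrete, here is a sketch of what a handler body might do with the event it receives. The argument names and the shape of the event object are assumptions for illustration; the real handler's signature is defined by the module, so check the 8.3 documentation:

```python
import json
import logging

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger("EventStreamDemo")

def handle_event(event, state):
    """Illustrative script-handler body: inspect the payload and act on it.
    The 'event'/'state' names are hypothetical, not the module's actual API."""
    payload = event["data"]
    if payload.get("status") == "faulted":
        # In Ignition this could just as well be a system.* call
        # (tag write, alarm, database insert, etc.).
        logger.warning("machine %s faulted at %.1f degrees",
                       payload["machine"], payload["temperature"])
    return json.dumps(payload)

result = handle_event(
    {"data": {"machine": "press", "temperature": 80.0, "status": "faulted"}},
    state={},
)
```

The point of the catch-all handler is exactly this freedom: anything you can express in a script, you can do with the event.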

 

39:08
Travis Cox:
So that's this one here. The event listener is where we can listen for an event coming from scripting or, in particular, from our alarm system. I'll show you how that works. There's nothing to configure for the event source; it's just listening for an event to come in, and when it does, we can go do something with it. In this particular case, let's just send it out to our log: we'll put the whole event message in the log so we can see it. Okay, so this is a pretty simple one. Now, there are two different ways we can use this: from Perspective, or in scripting. I've got a button here that is going to call system.eventstream.publishEvent. I'm going to publish the same kind of JSON message I've been using everywhere and publish it to the event pipeline here in our demo project, and that will go to the event listener. So let's go ahead and run that. We come back to our events, you can see I've got one, and of course I handled that particular data.
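The button script described above amounts to building a JSON payload and handing it to the event listener source. A sketch of what that script might look like follows; the function name comes from the demo narration, but the exact parameters and stream path here are assumptions, so verify against the 8.3 scripting documentation before copying:

```python
import json

# Payload like the one published from the Perspective button in the demo
payload = json.dumps({
    "machine": "conveyor",
    "temperature": 72.5,
    "status": "running",
})

def publish_to_stream(payload):
    # Only runnable inside an Ignition script; 'system' is Ignition's
    # built-in scripting namespace. The project name and stream path
    # below are hypothetical placeholders.
    system.eventstream.publishEvent(
        "demo",     # project containing the event stream
        "events",   # event stream path
        payload,    # value delivered to the event listener source
    )
```

This is what makes the listener useful for things like form submissions: any script in the system can kick off the pipeline on demand.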

 

40:23
Travis Cox:
Now here I had an error message on that. Let's see. I have an error message on there, so if there are any errors, you can see what's going on. Let's go back to our logs. I did have an error message in the log there. Not sure what happened, but let me do this: let me do MQTT and publish it to a different one here. Let's go back to Perspective, and there we go, that got sent out. Let's go to MQTT. There is our MQTT stream, and you can see that message came through. That's cool. You have the ability from scripting to kick off one of these pipelines and send data into it; maybe somebody's entering data in a form and they want that pipeline to be kicked off. You can certainly do that. Now, another, more powerful way is with our alarming system. We can actually create an alarm pipeline (I'll just call it "pipeline"), and in the pipeline we have an event stream block. So from an alarm coming in, I can have it go to one of those event streams.

 

41:40
Travis Cox:
So I'm going to save this. What I'm going to do is go to a tag, my writable Boolean tag, which has an alarm on it, and have it go into that pipeline when the alarm becomes active. All right, let's go back to our event stream. Let's make this tag go true. And now we have two: it just came in, and it got written here. As you can see, the whole event message, the alarm message, got written to MQTT. So I can have all my alarms published through one of these pipelines, going wherever I want them to go, easily configurable. All right, so that shows a lot of different examples of the sources and handlers. The last thing I want to point out is the ability to define your own sources and handlers. I'm not going to go into crazy detail on this, but if you go to our GitHub page (I have links in the PowerPoint, which I'll share in a moment), under SDK examples we've got an example of an event stream source and an event stream handler, and it shows how to do that.

 

42:47
Travis Cox:
I'm not going to go into a lot of detail. The event stream source example is really simple: it takes a comma-separated string and just sends the individual strings through the pipeline. The handler example writes to a file on the Ignition server; it's a file handler. So there are the two examples there. You can download them, compile them, and see how they work. Basically, what we're doing is getting the event stream manager and registering a new source or a new handler with it. And because there's a designer portion to this, you're also creating an editor for the designer, which is the UI for all the configuration you want for that particular source or handler. So again, I'm not going to go into a lot of detail; the two examples are on the GitHub page, and you can download those and play with them. What I want to show you is installing one and getting it to work. So I'm going to go to modules and install a new module.

 

43:55
Travis Cox:
We're going to use the event stream source example. I've downloaded it and compiled it to a signed .modl file, and I'm going to install that. One thing that's really important to note in Ignition 8.3 is that modules no longer install on the fly: you now have to restart the gateway. So let me close my designer and restart the gateway. That's the only way to get these modules to start up. If you install a new one, remove one, any of that, you've got to restart the gateway. This is a big change in Ignition 8.3 from before, but hot-swapping modules added a lot of complexity to Ignition, so this is now very clean and simple. All right, so this is starting back up. What we're going to see is that the new module added a new source to our event streams. So if I go to it now, I should see this new one that has just come in. We'll give it a moment to start up. Perfect. Let's now launch our designer, go to our event streams, and create a new stream.

 

45:07
Travis Cox:
And you'll notice that in the sources, I have an example source, which is the one I just installed. I'll call this one "example." It's a really simple one, but here's the UI to put in a list of items: item one, item two, item three. Again, this is a string. And we can write that to a logger called "example," and it'll be just the string itself being written. You can see that it's now running and writing those items directly. Let's log in and go to the logs, and you can see it's just writing all of those items. So it's pretty simple (I'm going to disable it now), but it shows how that works, how you can create a source and handler for event streams. So hopefully you got a good sense of all that's possible here with event streams, and there's more to come as we go forward. Real quick, in the PowerPoint there are QR codes for the GitHub SDK examples if you're interested in learning more. And before we get to the Q&A, a couple of quick things.

 

46:16
Travis Cox:
I hope you enjoyed what we did here today. But in order to play with this, you've got to download the latest version of Ignition, which is 8.3, and you can download that from the website free anytime. Check it out yourself. It takes a few minutes to get up and running, and everything we did here today, you can try out in the trial period, which you can reset as many times as you want. Really simple. Along with Ignition 8.3, I wanted to mention that we recently released Ignition Solution Suites, which are simpler ways to buy and deploy Ignition. They're basically pre-built combinations of modules designed to deliver complete, future-ready solutions. The Event Streams Module is part of the DataOps Solution Suite, so that will help you get going very quickly with DataOps. What's cool about Solution Suites is that when you purchase one, you get all the modules in it, and if you have upgrade protection, any new module we add to that suite in the future, you get for free. That's the big long-term value Solution Suites bring. Of course, to learn how to do all this, you can go to Inductive University, our free online training library, to learn how to build in Ignition.

 

47:23
Travis Cox:
And a lot of the 8.3 videos are starting to get put out there, so there's more and more educational material at your fingertips to learn how to do all of these pieces. And of course, you can contact any one of our international distributors if you're outside the US, or contact our sales team here at our headquarters in California, as well as our Inductive Automation Australia team. So let's get to questions. One question here: do I need to update Java on my clients when I upgrade from 8.1.32 to 8.3? No, you will not have to do that. Java is embedded in Ignition and will upgrade automatically for you. You don't have to install anything outside; it just comes with Ignition. So that is no problem whatsoever. There's a question about the event listeners: does that work with Vision? Event streams run on the gateway, and they're running all the time, so the event listener is waiting for those events to come in. You can trigger that event from Vision: if you add a button there and you want to run that event through, you can trigger it there.

 

48:30
Travis Cox:
You would probably have to send a message to the gateway to do that, but you certainly can trigger it from Vision. Let's see, another question here: is it possible to have the handler do an HTTP GET, to grab messages first-in-first-out for an API client requesting messages, instead of a POST to an HTTP API server? That is a good question. There isn't a handler that does an HTTP GET right now; it's PUT and POST. I'm sure we can expand that to include other methods. You could also use the script handler first: the script will allow you to do a GET, and then you can have another handler after that. So there's at least a way to do it for right now, but I think it's a good feature request. You can post it in our forum and let the teams know. If you have ideas of things you want to do with event streams, especially new sources and handlers, go ahead and let us know; we would love to hear all about that. A good question here: do we have any data on performance or workload differences on the gateway between using a simple on-tag-change script to a database versus doing it with event streams?

 

49:38
Travis Cox:
I do not have any metrics or tests ready to share publicly. We have done quite a few tests of our own, and I'll tell you, from a threading perspective, if you have lots of events happening through tag changes, there are some significant threading problems with that. You have to increase thread pools to handle it, and it is possible to drop events with tag change scripts if you're not careful about how many are happening in parallel. Whereas with event streams and the buffering side of it, that is not a problem: it is designed with this in mind, threading is not an issue, and we can handle a large amount of data. So event streams would be my preferred way of doing this now, rather than tag change scripts. There might be some use cases for tag change scripts, but event streams are, I think, a better way to handle that data, especially when it's going to a database. Next question: I assume event streams live on the gateway, so with a redundant gateway, does a failover continue to process the event streams?

 

50:44
Travis Cox:
Correct, those event streams do live on the gateway. With redundancy, if the master fails, the backup takes over and all the streams run on the backup. Now, state is important with that: if a stream is right in the middle of running and something happens, it depends. It can be a little tricky with redundancy, and we have to test out some of those different scenarios, but it does work with redundancy. Next question: can you listen to all tags of a UDT type without listing all of them? Not right now; the source just takes tag paths. But we will be adding new tag sources, such as subscribing to all the UDT instances of a definition, or wildcard subscriptions. There are more sources we want to do; we just didn't get them ready for the initial release of Ignition 8.3. We have lots of cool ideas for more types of sources and handlers. If there's something you really need, again, let us know so we can better prioritize what we're doing with this module in the future.

 

51:51
Travis Cox:
How will the system manage it if you have more than one event stream writing to the same database, say, data coming from five different sensors going into the same database table? That's a great question, and luckily it's something databases handle well. I will mention that a database connection in Ignition has a number of concurrent connections to the database, and by default that's eight. So it actually opens up eight connections, and writes can go across all eight. Depending on how much you're trying to write, you may want to increase that connection pool to a higher number. In fact, in my example, I increased mine to 50, since I was writing all those Kafka messages to my local MariaDB database. So whether it's one stream or many streams, it's going to go through the same store-and-forward system and ultimately batch into the same database. You just want to have enough connections available to handle that gracefully. Good question there. All right, here's a good one: are event streams the new transaction groups? That's a very good question, and kind of, in some ways. In terms of being able to insert or update a record, event streams offer a lot.

 

53:05
Travis Cox:
And that database handler is actually part of the SQL Bridge Module, and we plan on adding more integration between SQL Bridge and event streams to make sure there's real power there. So in a way, yes, it kind of is the new transaction groups, for sure. Next question: why can't event streams be included in Maker? I don't see why we would exclude it from Maker Edition. If it is excluded right now, it's probably just an oversight; we didn't necessarily think about that, and we'd have to talk to the development teams. I'm not 100% sure, but I don't see why we wouldn't include it in Maker Edition. It'd be perfect for home automation; I certainly would use it with my home automation system. So Andrew, why don't you email me directly, and we'll get you a definitive answer. There are a lot of questions about Kafka, in terms of things like schemas and playback. There are more features on the roadmap for that; this was the initial release, and we haven't made the roadmap public. But if you're interested in certain functionality, again, please reach out to your rep or your sales engineer and make sure we get that feedback.

 

54:20
Travis Cox:
We really want to know where this module needs to go in the future, and for that we need to know where you want it to go, so we can prioritize properly. So definitely don't feel shy about providing that feedback. Next: do I need Sepasoft, or I think they meant Cirrus Link, MQTT modules to run the handlers? Yes, the MQTT handler for event streams is provided by the MQTT Transmission Module, and the source is provided by the MQTT Engine Module, so you do need those right now. And that's why they're all part of the Enterprise Connectivity Solution Suite: all the different connectors are in there, so you have them all and don't have to worry about it. That's why Solution Suites are really powerful; we want everybody to have all the functionality. DataOps includes things like Event Streams, SQL Bridge, and WebDev, while Enterprise Connectivity provides MQTT, Kafka, and the connectors. So you do need those different modules, because they provide the sources and handlers, and right now MQTT is provided by the Cirrus Link modules for Ignition. Good question. Is this module available for 8.1?

 

55:34
Travis Cox:
No, this module is only available for Ignition 8.3. You will have to upgrade from 8.1 to 8.3 to take advantage of event streams. That said, a lot of people want to take advantage of this but aren't ready to upgrade their 8.1 production environment. They can stand up a separate Ignition 8.3 server with event streams and connect it to the 8.1 server over the gateway network, and then kick off streams and send data back and forth, with those event streams running in parallel with the 8.1 environment. I expect to see a lot of people doing this, or having dedicated servers that handle particular streams of data, spinning up new VMs or containers to handle certain streams. This is going to be a pretty common paradigm, I believe. I appreciate all the questions here today; it was amazing. I'm sorry I couldn't get to all of them. Please feel free to reach out to us to get those questions answered, and to help us know where we need to go with this module, because we're really excited about its potential.

 

56:39
Travis Cox:
So thank you very much. We're going to be back here on November 20th with another webinar. Until then, connect with us on social media and subscribe to our weekly news feed email. You can see our latest blogs, case studies, and videos on our website. Thank you, everybody, for joining. Have a great rest of your day and a great rest of your week. Thank you.

Last Updated on: November 21, 2025