Introduction to Automated Testing of Perspective Projects

38 min video  /  31 minute read
 

Speakers

Cody Mallonee

Senior Quality Assurance Engineer II

Inductive Automation

Learn the most effective ways for leveraging automated testing to safeguard your development-to-production process. This session will start by outlining how the core tenets of testing apply to automated testing, leading directly into best practices for verifying that your Perspective project development is production-ready.

Transcript: 

00:02
Cody Mallonee: Before I really begin in earnest, I'd like to bring up three points that we're probably gonna be coming back to over and over again. The first being that your automated testing is only gonna catch issues after they've been put into place. So if you have developers actively contributing to your projects, they're gonna put those changes in place, and then the testing will find the issue. It can't prevent them from putting them in place in the first place. Secondly, automated testing is ignorant of anything that you do not very specifically test for. We'll see a couple of examples of this later on. I like to think of this as "machine ignorance." And finally, the automated testing that you put in place is only going to be as good as the test cases that you yourself have written. If you have logical fallacies in your testing, it's not gonna do you any good when the tests run, because you're gonna get false positives, you're gonna miss things, you're gonna have test cases that aren't accounted for.

00:58
Cody Mallonee: Now, with those points in mind that we're gonna come back to as we go through this presentation, let's get into why you're in here listening to somebody that most of you have probably never heard of. My name is Cody Mallonee. I am a Senior Quality Assurance Engineer for Inductive Automation. Some of you might have interacted with me on the forums. For the last five years, I have worked almost exclusively on automation of Perspective. I also do automation of gateway testing. I've been on that team since the 8.0 alpha phase, so really since the inception of Perspective. I have done all the things, and I've done them all wrong. I'm here to tell you what not to do, the best way to do it, what has worked for us, what did not. Hopefully, you can learn from our failures and use our examples to make yours better.

01:50
Cody Mallonee: So, what are we gonna be covering today? We're gonna be going over the pros and cons of automated testing, what it can do for you, what it can't, things like cost of actually putting this into play. We're gonna talk about what makes a good test, what makes a bad test. We're gonna talk about things to think about while you're crafting your own automated testing, what you should do and what you shouldn't do. We're gonna talk about a recommended structure for what your automated test framework should have. This is not only at the directory level, but what should go in those directories. And finally, we're gonna come to some Perspective tips and tricks towards the end, important points where people will make their first mistake, and their second, and their third, and how you should avoid those and plan ahead. What will we not be covering? We won't be covering languages. The actual language that you use for your automated framework doesn't matter; use whatever you're comfortable with. This might dictate what framework you end up using. For example, here at Inductive Automation, our Perspective and gateway testing is done with Pytest as our framework. If the name doesn't give it away, that means we've gotta use Python as the primary language.

03:00
Cody Mallonee: But if you have a team that specializes in Java, use a Java-based framework. It's up to you. We're not going to talk about environmental configurations. These are unique to every machine in every area. It depends on OS. I'm not gonna get into that. We're not going to talk about integrated development environments. These are all personal preference. Some of your developers probably like Eclipse. Some of them probably like IntelliJ. If you have Python developers, they may be using PyCharm or something else. Some people prefer Visual Basic. We're not gonna talk about record and playback automation, although I am going to take an actual minute to discuss why. Record and playback automation is great for static, unchanging websites. The dynamic nature of Perspective doesn't lend itself well to record and playback. There are a few exceptions that can work for you, but it's been my experience that the dynamic structure and the flexibility of a code-based approach has been far superior for us.

04:05
Cody Mallonee: Let's get into the benefits of automated testing and what it can actually do for you. You can prevent deployments of product regressions to a production environment. Now the key term here is regression. Notice I didn't say bug. Bugs are gonna make it into your product, no matter what. You're using Ignition. I'm sure you found a bug somewhere. We do our best, but they get in. That doesn't make them a regression. A regression is a change in what was documented as accepted behavior from one point to another. Now your automated testing can catch these, as long as you have tests in place. They're executing in some environment. They have some expected input. They have an expected output. As time goes on and you make changes to your code, or we make updates to Perspective, perhaps that behavior changes; your automated testing will catch that. If you're adding new parts to your project and there's a bug that you've put in place, but you don't have a test case that's actually testing for that behavior, with an already accepted behavior and an already expected outcome, your automated testing won't catch that.

05:13
Cody Mallonee: For another benefit of automated testing, it can identify bad project development practices. If you have testing in place and it fails between your development environment and production, was it Perspective that changed or was it your project? If it was your project, could better documentation have actually prevented that issue? Maybe it's something where your developer didn't know that something was being used in a certain way. Maybe you need better inter-team communication. Examination of your failure results can actually lend itself to identifying those bad practices, and rectifying those bad practices can prevent those regressions in the future. Automated testing should replace manual testing. Now, I do mean replace in the sense that a person is not interacting with it. I've spoken to at least one or two of you where you have an automated test framework in place, but it still requires human input during the execution. That's not the point of automated testing. Remove the humans from the equation, free up the time.

06:22
Cody Mallonee: Your goal should be to kick off your automated testing, go do something else, use your time somewhere in a more important area while it's running, come back when it's done, and look at the results. You don't want somebody watching this execution going, waiting for some point in time to type in some field or interact with some prompt in the IDE. That's a waste of that person's time. Take them out of it. The biggest benefit of automated testing in my eyes, besides the number of tests that you can run in an hour as opposed to a human, is that the automated testing removes the chance of human error. Did anybody ever play the game telephone in school? You tell somebody a message, they tell somebody else, you do this five or six times until somebody tells you something that's completely unrelated to the original message. That's an issue even when it comes to actually doing manual testing. Somebody writes down a manual test case; they have this idea in their head of what the steps are supposed to be, what the expected outcome is.

07:29
Cody Mallonee: And then they go on, and the next time you run this test case, somebody else does it. They have a different idea of what each of these lines means; they interpret them differently. This is an absolute problem when it comes to the expected results of your test cases. We'll get into an example a little later on of what this is. But this human error should be removed by your automated testing. So let's take a look at an example of what I'm talking about. Here's an example test case that is very prone to human error. This seems incredibly simplistic, right? Go to the testing page, click the button, verify the date is today's date. The three simplest statements you're gonna hear today. But what is the testing page? This page has two dozen buttons. Which one am I supposed to be clicking? Today's date. Is that today? Is that the day the test was written? Does the format matter? These are all things not listed out here. And as this moves from one person to another, they're going to have different ideas as to the expectations and what each of these statements actually means.
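
To show how automation forces that ambiguity to be resolved, here is a minimal pytest-style sketch of the same test case with every detail pinned down. The testing_page fixture, the element IDs, and the date format are hypothetical stand-ins, not from the session's actual project.

```python
from datetime import date

def test_button_populates_label_with_todays_date(testing_page):
    # testing_page is a hypothetical page-object fixture wrapping the browser driver.
    testing_page.navigate_to()                          # "go to the testing page" -- which page, exactly
    testing_page.click_button(dom_id="populate-date")   # which of the two dozen buttons, by explicit ID
    expected = date.today().strftime("%Y-%m-%d")        # whose "today", and in which format
    actual = testing_page.get_label_text(dom_id="date-label")
    assert actual == expected, f"Date label read '{actual}', expected today's date '{expected}'."
```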

08:42
Cody Mallonee: So I'm gonna circle back around to one of those points that I brought up at the very beginning. Automated testing is done after the development phase has ended, or at least some progress has been made, and you're checking against those results or those changes. And so this can kind of be a shortcoming of automated testing. It's not live action. You don't have it running as you're making the changes and seeing, oh, you know, I moved this button or I changed this script; did everything break? You have to actually make those changes, save them, commit them to the project, put that onto your testing environment, kick off the automation. And so it does happen after the development process. Some more shortcomings of automated testing. Automated testing generally does not grow with your project. If you want more tests, you have to write those tests. Automated testing can convey what is happening, where it is happening, and when, but it will not convey why, and it will not convey how that change actually got put into place. That's on you to document on your end.

09:52
Cody Mallonee: Finally, automated testing can require dedicated physical resources. Now, some of your use cases may be very simple. Maybe you have two dozen tests. You can run those in, you know, ten minutes on a singular laptop, single browser, no issue. Here at Inductive Automation, we have far higher standards. Kathy [Applebaum] actually low-balled us during the keynote earlier today. Nightly, we have 7,300 tests. That is simply the web-based UI tests: Perspective, gateway, Docker, Workstation. That does not include our Java-based tests, which we do with QF-Test. I don't have numbers for you on that one; I'd have to get back to you on any questions about that. Now these numbers are only nightly. When we do a release cycle, that number more than doubles, because we still have our nightly running. We also have our release branch running, which has the same tests, but we also have performance testing that's done during that time. And so during an actual release cycle, we have upwards of 15,000 tests per night. Now I talked about removing yourself, and running those, and not having human interaction. We kick those tests off at 11 p.m. They run through the night with no oversight so that we can look at them in the morning and immediately take action on the results.

11:14
Cody Mallonee: Now, that requires a lot of dedicated physical resources. We have Jenkins, virtual machines, isolated networks, development databases, all things that are intended to replicate environments and allow us to spin up multiple nodes at a time, but it's costly. Some more shortcomings. Automated testing does require long-term maintenance. Now this isn't unique to automated testing. Your manual test cases still require long-term maintenance, but I wanted to include it because people tend to think they write an automated test and it's done and they don't have to do anything to it in the future. That's not true. Finally, automated testing does not provide any more information than you provide within the code. I'm gonna throw up an example here in a moment, but think about the person that's gonna be looking at the failure, and if they reach a point of failure in the code and they're looking at a stack trace or they're looking at your report, can they digest that? Can they take any meaningful action on that result? In this first example, at the very top, we have an assert that some page object's checkbox is checked.
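
A minimal pytest-style sketch of the two forms being contrasted here; the settings_page fixture and its checkbox accessor are hypothetical.

```python
def test_user_has_sufficient_permissions(settings_page):
    # Bare form: a failure surfaces as nothing more than "assert False" in the report.
    # assert settings_page.permissions_checkbox.is_checked()

    # Messaged form: the failure itself tells the reviewer where to look and what it means.
    assert settings_page.permissions_checkbox.is_checked(), (
        "The checkbox which conveys sufficient permissions was not checked."
    )
```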

12:31
Cody Mallonee: In the event of a failure, all you are going to see in the stack trace is assert false. That helps no one. That doesn't help the person reviewing the results know where it happened, it doesn't help them take any action on it, it doesn't help them ask any questions about what it actually means, but you can easily provide your own message to go along with that that allows them to actually take action and know where the failure occurred, know what to do about it. Simply providing a string message like "the checkbox which conveys sufficient permissions was not checked" will save everybody time when it comes to actually diagnosing failures. Now, I also talked earlier about a term that I called "machine ignorance." This is a view from one of our actual testing resources. This view is designed to test the way that a file upload component displays its content at various dimensions. For those of you that don't know, the file upload has three different layouts, depending on a whole bunch of different dimensions. It could have a layout like this, which is considered the large layout. It might have just a button and a little helper icon or an information icon in the upper right, or the entire component itself might be nothing more than a singular icon at the smaller sizes.

13:43
Cody Mallonee: This page changes the dimensions of the file upload to make sure that it has the expected appearance or the expected layout or structure at various dimensions, but notice that while the testing here might work and succeed and pass and give excellent results, during the testing of all that, we never actually check for the color of anything. Now, if the button's color actually changes, we're just looking at the dimensions. It still looks right as far as the test is concerned. If you don't specifically test for the color of the button, you will never know during automated testing that it's changed. If you want to know about the color of the button, you have to actually explicitly assert that the color of the button is what you expect. Now, this is going to add some overhead to your tests. You're going to have to be very fine-grained with what you want. Maybe the color of the button isn't terribly important. For some of you, maybe you have requirements from government entities that your alarms result in states that are a very specific color. Those are things you should test for. The actual changing of the button colors is actually something that we should be testing for on our end at Inductive Automation.
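
If the color does matter, it has to be asserted explicitly. Here is a Selenium sketch of what that might look like; the session URL, the locator, and the expected color value are all illustrative assumptions, not Perspective's actual markup or theme.

```python
from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Chrome()
# Illustrative Perspective session URL and page path.
driver.get("http://localhost:8088/data/perspective/client/MyProject/file-upload-test")

button = driver.find_element(By.CSS_SELECTOR, "#file-upload-button")
# Dimension checks alone would pass even if the theme color regressed, so check it explicitly.
actual_color = button.value_of_css_property("background-color")
assert actual_color == "rgba(34, 118, 177, 1)", (
    f"File upload button background was {actual_color}, not the expected themed blue."
)
driver.quit()
```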

15:02
Cody Mallonee: Now, the biggest shortcoming of automated testing, or the biggest hindrance, the biggest reason that people stay away from it, is the actual time it takes to automate the tests. In this scenario here, this light blue line that doesn't change; that's how long it takes to automate some given test. We're gonna say it takes eight hours, a full day. Probably not very likely. Some of your tests are gonna take a bit longer. But over time, how much could that potentially save you? It depends on the test cases that are being automated. If a test case takes, let's say, 20 minutes, like this purple line, case B, it's gonna take you 24 executions of that automated test before you break even in the cost of manpower. How many times will this test be run manually if you didn't automate it during the lifecycle of your project? If you're gonna run this test once a week, you're gonna be making money within the next half year. If you run this test twice a year, you're probably not gonna run it for more than six years. Maybe automated testing isn't the way to go for this test. With all the shortcomings and benefits out of the way, let's talk about what actually constitutes a good test. Solid, sound, applied logic, which sounds simple to everyone, but it's actually surprisingly difficult to find people who can write good tests.
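
Backing up to the break-even chart for a second, here is the arithmetic spelled out with the figures quoted above (eight hours to automate, 20 minutes per manual run):

```python
AUTOMATION_COST_MINUTES = 8 * 60   # one-time cost to automate the test
MANUAL_RUN_MINUTES = 20            # recurring cost of each manual execution ("case B")

break_even_runs = AUTOMATION_COST_MINUTES / MANUAL_RUN_MINUTES
print(break_even_runs)        # 24.0 executions before the automation pays for itself
print(break_even_runs / 52)   # run weekly: ~0.46 years, i.e. about half a year
print(break_even_runs / 2)    # run twice a year: 12 years to break even
```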

16:31
Cody Mallonee: If any of you took Logic 101 in college, great; brush up on it. If you didn't take Logic 101 in college, find an online resource, find some kind of certification. You're gonna want it. So many people get caught up in logical fallacies, or they will ignore content that they need to be worried about. Let's take a look back at that earlier example where we were navigating to a page. These seem like very simple test cases, and sure, maybe there's some ambiguity in the expected result. But notice that we get all the way to step three, and we're checking the label's date. Nowhere before then did we actually check to see if the label already had the date. This is affirming the consequent. This is gonna be a problem for automated testing, because automated testing has machine ignorance. It doesn't know that it was not supposed to already check the label for the date before it got to this point. Ideally, you would have a step before any of these in your automated testing that verifies that the label does not have the date already in place, because the goal of this entire scenario is to verify that clicking the button populates that label with the expected date.
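
Extending the earlier sketch with that missing precondition, so the test cannot pass simply because the label already happened to hold today's date (same hypothetical fixture and IDs as before):

```python
from datetime import date

def test_button_populates_label_with_todays_date(testing_page):
    testing_page.navigate_to()
    expected = date.today().strftime("%Y-%m-%d")

    # Guard against affirming the consequent: the label must not already show the date.
    before = testing_page.get_label_text(dom_id="date-label")
    assert before != expected, "Label already displayed today's date before the button was clicked."

    testing_page.click_button(dom_id="populate-date")
    after = testing_page.get_label_text(dom_id="date-label")
    assert after == expected, f"Expected '{expected}' after the click, got '{after}'."
```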

17:48
Cody Mallonee: If it's already there, then how are you actually testing that the button has done what it's supposed to do? What else constitutes a good test? It has to be readable, not just by you, but by whoever's gonna be diagnosing the failures. Beyond diagnosing the failures, even the code has to be readable. Are your variables clearly named? Do the names actually convey the meaning or the content of what's stored in that variable? Are your function names created in a language that the person will understand? Do they follow patterns that are easy to recognize? Are your tests reliable and repeatable? Any test that you write that fails every six or seven runs is gonna cost you time, because every time you see that failure, you're gonna ask yourself, "Is this a valid failure? Does it matter this time?" Anytime you find a test that's flaky or fails after so many runs and then has to be reset for some reason, that test is gonna cost you so much time, so many headaches. Either bulletproof your test or take it out, because that time spent investigating the failure that isn't actually a failure is just wasted manpower.

19:05
Cody Mallonee: Some things that you should consider before doing any automated testing. When you start looking at your test cases to see if they should be automated, you should identify destructive testing. This is beyond, you know, destroying your gateway. That's not what we're talking about. This is tests that could contend with one another or talk over one another. The easiest places to find these are when you have tests that are dependent on interactions with tags, where you could, you know, modify a tag in one session, and another session happens to also be looking at that tag and now sees a different value. Anything around alarms. We've had so many headaches inside of Inductive Automation because of alarm contention in our tests. Anywhere that you have update queries, so you're actually modifying data: some tests, if they're poorly written, will expect a very specific value, and then they'll perform some actions and expect another very specific value. If that data is changed after that test has been written, those values are now no longer valid, and your test is gonna fail. Finally, are you testing data in this test case? Are you testing the appearance of the session, or are you testing behaviors that a user would, you know, go through, their interactions with this page?

20:22
Cody Mallonee: If you are testing data, never modify data on a production environment; that's just rule number one. You shouldn't be doing this testing on a production environment. Get yourself a dedicated testing environment. If you're testing appearance, ask yourself if it's absolutely mission critical. That example that I had earlier with the change of the button color inside the file upload: is that a deal-breaker for you or for your customers? If it's something like alarming having to be a certain color, perhaps it is; maybe it's important you test that. But testing of appearance is always gonna be very brittle. Anytime you change a CSS setting, or if you're requiring a certain layout or spacing and you add new components, does it affect that? Testing behaviors, in my experience, has always been the easiest, most reliable way to automate testing. You have a set of steps that a user will go through. You have some endpoint in mind that they will arrive at. It's very easy to click a button and make some selections in a dropdown, and then verify that your table has the expected data. You're not actually testing the underlying data for values. You're testing that the steps the user takes get to a prescribed spot. And that is the easiest thing to do with automated testing.
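
A sketch of what a behavioral test like that might look like with a page-object fixture; the report_page object, component IDs, and column names are hypothetical.

```python
def test_running_report_shows_only_selected_line(report_page):
    # Drive the UI the way an operator would and verify the endpoint they arrive at,
    # rather than asserting against raw values in the underlying database.
    report_page.navigate_to()
    report_page.select_dropdown_option(dom_id="line-selector", label="Line 3")
    report_page.click_button(dom_id="run-report")

    rows = report_page.get_table_rows(dom_id="downtime-table")
    assert rows, "Running the report for Line 3 produced no table rows."
    assert all(row["line"] == "Line 3" for row in rows), (
        "The downtime table contained rows for lines other than the one selected."
    )
```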

21:42
Cody Mallonee: So, with that in mind, where should you focus your efforts? You should probably look at mission-critical areas of your application or your project. If this area were to break, it might be inconvenient, but if no one really goes to that area anyway, or maybe they aren't gonna need it during this time of year, maybe you wait on that one. An easy win for you is going to be looking at all of your page resources and looking for the absence of quality overlays. Now, granted, this should be seen at design time by your developers. These quality overlays are typically the result of broken bindings, which you do see in the designer. It's an easy win to visit these pages and just do a quick check for quality overlays and verify there's none of them there. The most important point that I wanna make today, though, is that you should be testing your project. I should be testing the product. I should be testing Perspective and Ignition. That's my job. Your job is just testing that your project does what you want your project to do. So which projects are those? The ones that you have long-term development plans for.

22:58
Cody Mallonee: Remember, automated testing catches changes that your development team has made. If you're not making changes to these projects, what is it that you expect the automated testing to catch? Potentially, you could encounter issues if you update Perspective, you know, 10 versions; maybe something's changed in those 10 versions that could affect your project. You might catch something like that. But really, active development of the project is where you'll find most of the issues. Now, I want you to notice that this is in all caps. Areas with extensive scripting are probably going to be the number one area where you encounter failures or regressions as you develop your project and move it to production from version to version. Now here's why, and this next image should terrify you if you've done any automated testing. Any of these pink strings are hard-coded names of components that could easily be changed by somebody who's not familiar with them being used in a script. Also, they are all pathed to just be siblings of one another. If any of these gets put into an actual container, just, you know, a quick wrap of a container to maybe modify some spacing, the script would break. You would not see this in the designer. It's gonna look perfectly fine and throw no errors until runtime. So what should the structure of your automated testing project actually look like?
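
Before getting into structure, here is a sketch of the kind of brittle script that slide is warning about. The component names, the named query path, and the property paths are illustrative, not taken from the session's actual view.

```python
# A Perspective onActionPerformed-style script with hard-coded sibling references.
# Rename any of these components, or wrap one in a container to adjust spacing,
# and this breaks silently: no error in the designer, no error at all until runtime.
# (The `system` scripting API is provided by the Ignition environment.)
def runAction(self, event):
    start = self.getSibling("StartDateField").props.value
    end = self.getSibling("EndDateField").props.value
    self.getSibling("ResultsTable").props.data = system.db.runNamedQuery(
        "Reports/DowntimeBetween", {"start": start, "end": end}
    )
```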

24:36
Cody Mallonee: There's sort of an accepted or adopted pattern when it comes to actually creating a structure for your project. You should have essentially a directory that has all of your testing files that contain all of your test cases. You should have something that contains all of your page objects, separate from your test cases. You don't wanna build your page objects in your test cases. Trust me, for long-term maintenance, that will be a nightmare. You should have helpers and supporting classes. This will probably consist primarily of components as you build them. And finally, you should have resources and/or static data. I don't recommend it, but this can include testing creds. Maybe you have desired configurations that you need to have stored. This is where those would live. Now, once these are in place, you still have to communicate with the gateway. So you have to have this sort of middleman layer where your framework will actually communicate with the browser. And then, based on whether you're trying to spin up multiple nodes or perhaps you have just one, it's still then connecting to the gateway. The gateway is making changes, speaking back to the browser, and the browser is speaking back to your testing framework.
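
A sketch of how those four buckets might be laid out on disk for a pytest-based framework; the file names are illustrative.

```
perspective-tests/
    tests/              # test files containing the test cases
        test_navigation.py
    pages/              # page objects, kept separate from the test cases
        testing_page.py
    components/         # helpers and supporting classes (tables, dropdowns, ...)
        table.py
    resources/          # static data and desired configurations
        expected_layouts.json
    conftest.py         # shared pytest fixtures, e.g. browser and session setup
```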

26:00
Cody Mallonee: Now this is all well and good, but supposing your test runs and this framework is set up perfectly well, what do you do with the results? Are they just pumped out into a log file? That's gonna be hard to read. I've done it. I don't recommend it. Most frameworks that you would be looking to adopt either have reporting built in or they have free third-party packages that you can sort of tack onto them after the fact. I highly recommend that when you're looking at a framework, you find a reporting package that works well for you, works well for your team, has results that can easily be digested.

26:40
Cody Mallonee: Okay, now the reason that you're all here: trying to figure out tips to actually test in Perspective. The number one tip that I have for you, that some people still don't even know about because we hid it so well, is that there is actually a DOM ID property available for every Perspective component, hidden in the meta category. This setting will actually cause your components to emit an ID attribute on the HTML elements, which is extremely useful for targeting and querying for those components inside of the browser environment. Now this is great, but it doesn't really work well when you have multiple instances of a component, like if you have a Flex Repeater or an embedded view, pop-ups, docked views, anything that could actually have an instance of this view and therefore the component. In those instances, you should probably provide your own prefix or suffix for the ID. I've seen some people use UUIDs. I highly recommend against it. While it can be done, you then have to have a registry of the UUIDs in order to keep everything straight. It becomes kind of a headache. It's a little bit easier, scratch that, a lot easier to provide the index of the Flex Repeater when you're using that sort of layout. And so use bindings on this DOM ID property to actually provide your own unique prefix or suffix based on your scenario.
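
For example, if each repeated view instance is passed an index parameter, the DOM ID can be bound so every instance renders with a predictable, unique HTML id. The binding expression, URL, and IDs below are illustrative assumptions.

```python
from selenium import webdriver
from selenium.webdriver.common.by import By

# Assuming the File Upload's meta.domId carries an expression binding along the lines of
#     "upload-" + {view.params.index}
# each Flex Repeater instance emits its own id: upload-0, upload-1, upload-2, ...
driver = webdriver.Chrome()
driver.get("http://localhost:8088/data/perspective/client/MyProject/uploads")

third_upload = driver.find_element(By.ID, "upload-2")  # zero-based repeater index
assert third_upload.is_displayed(), "Expected the third repeated file upload to be rendered."
driver.quit()
```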

28:15
Cody Mallonee: Now you should use your project as your end user is gonna use it. A lot of people are immediately going to be using the driver.get() URL functionality of frameworks like Selenium or whatever other tool you use. Because of the unique nature of Perspective, this is going to cause you all kinds of headaches. The primary reason is that Perspective is a single-page application. There's only one page in use, and it's modified via data coming in through the web socket. After your original landing on the page, there are no more HTTP requests being made. So don't use Selenium to do HTTP requests. This results in a new page being made, which results in a new page ID, which results in all of your on-page startup scripts firing again every time you navigate to a page. So what does this mean for how you should actually navigate? The easy answer is, instead of thinking to yourself, "I need to go to page X," you need to design your pages such that the page itself knows how to get to page X with its own mechanisms that the user would use. Whether that's a link on the page or using the navigation menu, the horizontal menu, your page needs to make that decision for you. Some more tips... If you're just testing the presence of resources, that they're actually in place and haven't been lost or deleted, you can use API testing for that.
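
Coming back to the navigation point for a moment: a sketch of landing once with the URL and then navigating through the page's own menu rather than calling driver.get() again. The menu locator and URL are illustrative assumptions.

```python
from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Chrome()

# One initial HTTP request to land in the session...
driver.get("http://localhost:8088/data/perspective/client/MyProject")

# ...then navigate the way a user would, over the existing session,
# instead of driver.get()-ing a new URL (which creates a new page, a new page id,
# and re-fires every on-page startup script).
driver.find_element(By.CSS_SELECTOR, "#nav-menu-reports").click()
driver.quit()
```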

29:51
Cody Mallonee: You don't need to use an actual browser. API testing is gonna be much faster. It can be a little bit more difficult to actually automate, depending on your mindset, but it's much faster. Use the browser-based testing for actual behavioral checks to see that a user interaction gets some expected result. If you happen to be using Selenium, avoid chains of find_element(). This is a tip primarily for the larger components. For the simpler components, buttons, checkboxes, toggle switches, it's fine. But trying to chain find_element() inside of a table or a chart is a headache I would not wish on any of you. The alarm tables specifically, because of their constantly polling nature, have caused so many issues with stale element reference exceptions that I have lost count. At one point, we actually had to rewrite our framework and how we interacted with components of the alarm tables. Here's why. Every time you try to query for an element, Selenium uses a sort of hashed ID to locate each element on the page. Now, assuming that your table is actually on the page, that first query we're gonna consider point zero: you've found your component. But now if you try to find a row inside of that element, you need to make another query using a reference to that original ID. If you're trying to find a cell within that row, you have to make another query to that original reference.

31:35
Cody Mallonee: Now if you're trying to click that cell, you still have to make yet another query behind the scenes using that original reference. If your table is updated in any way since that original reference was made, it's gone. It's no longer a valid reference. It's stale. Your function is gonna fail. You can either have special handling to try again or you can go about it a different way. What has worked wonders for us is, instead of these chained calls to find elements, make one. Just make a really long, ugly locator. It's gonna look gross, but it's going to work so well, I promise. After we switched to this sort of approach, instead of having three calls to find the row, and then the column, and then the cell of that column and clicking on it, we find our table, and we make one additional reference to actually click it. I'm gonna throw out some numbers, and they're entirely made up. But let's assume that every time you try to find an element after you have originally found it, there's a 90% success rate.
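
To make the "one long, ugly locator" idea concrete before the made-up numbers: a Selenium sketch contrasting the chained lookups with a single query. The data attributes here are illustrative, not Perspective's actual DOM structure.

```python
from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Chrome()
driver.get("http://localhost:8088/data/perspective/client/MyProject/alarms")

# Chained lookups: each step reuses the previous element reference, and any table
# refresh in between leaves that reference stale.
#   table = driver.find_element(By.ID, "alarm-status-table")
#   row = table.find_element(By.CSS_SELECTOR, "[data-row-index='2']")
#   cell = row.find_element(By.CSS_SELECTOR, "[data-column='priority']")

# One long, ugly locator resolved in a single query instead:
cell = driver.find_element(
    By.CSS_SELECTOR,
    "#alarm-status-table [data-row-index='2'] [data-column='priority']",
)
cell.click()
driver.quit()
```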

32:47
Cody Mallonee: You have a 90% chance of success, then 81%, then 72%. That's not good. You do not want a 72% success rate in your automated testing. Let's say this is only 90%. That's still so much better. You don't even know; over a thousand runs, that's going to save you so many headaches. Now that 90% success rate is entirely made up. It's actually closer to, like, 99.9%. But still, the point was to clarify that you do not want to do those chained calls. So building the components is where you're gonna spend a lot of your time. And initial implementations, I promise, are probably going to be something like this, where you have a table file that has all sorts of functions that define how to interact with the table and how to find a row or click on a cell. You're gonna be inclined to provide that function everything that it needs. So that includes a reference to the driver that's in use, how to find the table itself, which row you wanna find, and which cell you wanna click. This will work, but it's a lot of repetition, very repetitive. All of your functions inside this table file are going to be expecting a driver and a table ID to interact with your table.

34:12
Cody Mallonee: There's an easier way to do it. Pass all that information to an instantiated object of a table so that it has everything that it needs when it needs to call its own functions. Here in this page object, I've created a table and given it everything it needs to find itself at runtime. Now if I need to actually do anything with that table, like finding a row or clicking a cell, all I have to do is supply the row index and the column name that I want to interact with. The table has the driver that it needs, and it has an idea of how to locate itself and therefore all of its children. So how long is this actually going to take you? Components are gonna take a long time. These are numbers for the alarm tables combined, so the alarm status table and the alarm journal table. Between just those tables, the code that is unique to them is 3,746 lines of code in our library. They share some code with the table component, some 338 lines. They have some shared pieces that both of them are using in conjunction.
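
A sketch of that instantiated-object approach: the table is handed the driver and its own locator once, so callers only supply the row and column they care about. Selector details are illustrative, not Perspective's actual markup.

```python
from selenium.webdriver.common.by import By

class Table:
    """Component wrapper that knows how to locate itself and its children."""

    def __init__(self, driver, dom_id):
        self.driver = driver    # the table keeps its own driver reference
        self.dom_id = dom_id    # and knows how to find itself at runtime

    def _cell_selector(self, row_index, column_name):
        return f"#{self.dom_id} [data-row-index='{row_index}'] [data-column='{column_name}']"

    def get_cell_text(self, row_index, column_name):
        return self.driver.find_element(
            By.CSS_SELECTOR, self._cell_selector(row_index, column_name)
        ).text

    def click_cell(self, row_index, column_name):
        self.driver.find_element(
            By.CSS_SELECTOR, self._cell_selector(row_index, column_name)
        ).click()

# Inside a hypothetical page object:
#   self.downtime_table = Table(driver, dom_id="downtime-table")
#   self.downtime_table.click_cell(row_index=0, column_name="notes")
```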

35:26
Cody Mallonee: So that's the body, the rows of the table, the header and footer, the pager, or the filter. But all these add to the line count. That's a lot of lines of code. Knowing everything that I know today, how everything works, what is shared, where everything should go, what everything needs to return, and how it should return it, I estimate it would take me at the very least two weeks of uninterrupted heads-down time to do just the alarm tables. And that's with all my extensive knowledge of how this all works. For anyone just starting, my original effort at doing the alarm status table and the alarm journal table was probably upwards of three weeks each before I actually had everything condensed and merged together.

36:17
Cody Mallonee: So plan on these taking a very long time to automate, or not. I am very excited to announce that we have made the decision to share large parts of our automation libraries with all of you for free. They are up today.

36:37
Cody Mallonee: Thank you. They are available on a public Git repository. If you have already been to github.com/inductiveautomation, you can find them there. It includes all of our component libraries, quite a few of our helpers, and some essential page objects that you need to actually interact with Perspective. We anticipate that we will be updating this, if not at release, then shortly after each release of Perspective itself. And we will be providing these version branches so that every time you update to a new version of Perspective, you can find the associated automation code for that version and update your own testing internally to use what is working at that time. Thank you, everyone. I hope you enjoyed your time today. I hope you enjoy the rest of ICC. I know some of you have some sessions to go to. Thank you.

Posted on November 7, 2023