Have you wondered how the teams working on Telerik products test software? We continue with the next chapter in our detailed guide, giving you deeper insight into our very own processes. You can find Chapter One here, and the first part of Chapter Two here.
You’re reading the third post in a series intended to give you a behind-the-scenes look at how the Telerik teams, invested in Agile, DevOps and TDD practices, go about ensuring product quality.
We’ll share insights into the practices and tools our product development and web presence teams employ to ensure we ship high-quality software. We’ll go into detail so that you can learn not only what we do, but why we do it. This will give you the tools to become a testing expert.
Important note: After its completion, this series will be gathered up, updated/polished and published as an eBook. Interested? Register to receive the eBook by sending us an email at email@example.com.
Great UI, service or performance tests mean nothing if the tests are not scheduled to run on a regular basis. We employ Microsoft test agents and test controllers, Jenkins and custom implementations to ensure this occurs. Jenkins is probably the best solution for scheduling tasks, because it’s easy to install, configure and maintain. It’s also extensible; there is a plugin for everything.
Some tests must be executed nightly, and the more functionality you develop, the more tests you’ll need. One day you will end up with thousands of tests orchestrated by hundreds of Jenkins jobs. Be ready for that moment! Keeping it in mind when you first set up your Jenkins instance will make your life much easier down the road.
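Scheduling a suite to run nightly boils down to a cron trigger on the job. Here is a minimal sketch using the Job DSL plugin discussed later in this post; the job name, schedule and script path are illustrative assumptions, not our actual configuration:

```groovy
// Hypothetical nightly job; name, schedule and build script are placeholders.
job('nightly-regression-suite') {
    triggers {
        // Standard cron fields (minute hour day month weekday).
        // 'H' hashes the start minute, so hundreds of nightly jobs
        // don't all fire at the same instant.
        cron('H 2 * * *')
    }
    steps {
        shell('./run-regression-suite.sh')
    }
}
```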
On our team, we have lots of machines, each with a small number of executors: between one and five, depending on the type of tests the machine runs.
Let’s say we have Sikuli UI tests. In this case, the machines executing UI tests will have a single executor. This way, we can guarantee that test runs on one machine never overlap, as there is no way to run two concurrent Sikuli test suites on the same machine.

If you stick with the default Jenkins configuration, your Jenkins will become extremely slow at some point. Make sure you set the -Xmx1024m flag in your jenkins.xml file (the Windows service wrapper configuration). 1024m is simply how we set up our Jenkins; adjust it to suit your hardware. This option sets the maximum Java heap size, which has a dramatic effect on performance, especially when you have lots of jobs.
We do not run any tests on the Jenkins master; the slaves do all the work. We have tons of machines executing our service tests: Linux machines provisioned with all the service test runners we use and all their dependencies. We also have a lot of machines configured to run UI tests. The machines configured to run service tests can't execute UI tests, and vice versa, so we segment our Jenkins slaves into two groups: service test runners and UI test runners.
Jenkins lets you attach a label to each slave, so you can configure jobs to run on a particular label instead of a particular machine. Once the master locates a slave carrying the correct label, your tests are executed almost immediately. If jobs running your tests stay in the Jenkins queue too long, you simply add a new slave with that label. This approach makes your test infrastructure scalable.
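Expressed with the Job DSL plugin described in the next section, pinning jobs to labels rather than machines might look like this. The label names mirror the two slave groups above; the job names and build steps are placeholders for illustration:

```groovy
// Sketch: target a label, not a concrete machine.
job('platform-service-tests') {
    label('service-tests')     // any slave tagged 'service-tests' can pick this up
    steps {
        shell('./run-service-tests.sh')
    }
}

job('platform-ui-tests') {
    label('ui-tests')          // runs only on the single-executor UI machines
    steps {
        shell('./run-ui-tests.sh')
    }
}
```

Adding capacity is then just a matter of bringing up another slave with the right label; no job needs to be reconfigured.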
Out of the box, Jenkins jobs are configured manually through the UI. That’s easy, but there’s no versioning available. Be sure to install the Job DSL plugin, which enables you to define your Jenkins jobs in a Groovy script. We built an abstraction over this plugin, but the core idea is to define your Jenkins jobs as code. Having them as code enables version control. Additionally, Groovy scripts support loops, meaning you can create thousands of jobs from a single template by looping through different configurations.
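The looping idea can be sketched as a single seed script that stamps out one job per configuration. The product names, label and script below are assumptions for illustration, not our real setup:

```groovy
// One template, many jobs: each list entry becomes its own Jenkins job.
def products = ['analytics', 'backend-services', 'appbuilder']  // placeholders

products.each { product ->
    job("service-tests-${product}") {
        label('service-tests')
        triggers {
            cron('H 2 * * *')   // nightly; start minute hashed per job
        }
        steps {
            shell("./run-service-tests.sh --product ${product}")
        }
    }
}
```

Because the seed script lives in version control, a change to the template updates every generated job in one commit.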
With cloud services, there’s another aspect to the overall quality process: health monitoring. In addition to testing our software in various staging environments and finding and fixing bugs, we must constantly monitor performance to ensure the product is operational. The Telerik Platform team takes systems health monitoring very seriously. Our Service Level Agreement (SLA) promises reliability and stability, so we need to be aware of the state of each standalone component every single minute. If something goes wrong, we prefer to know before our clients do. Although you can always add more later, start by identifying the primary monitoring points for each component.
The next step is to choose a health monitoring tool or decide to develop one in-house. ROI should be a primary consideration.
Finally, decide which problems will be low, high or critical priority, and what process to follow based on the severity of the problem. Your monitoring tool should send out notifications based on the priority of the issue; notifying the responsible person immediately is essential for fast and successful problem resolution.
Reporting includes a summary dashboard for observing overall Telerik Platform quality. Having these simple dashboards dramatically changed the way we work, because they combine the results of automated tests, newly deployed functionality and the overall health of the system.
Did you find these tips helpful? Stay tuned for the next chapter and let us know your feedback. And don't forget to register to receive the eBook that will be put together when this series is complete.
If you are interested in trying the latest Test Studio, feel free to go and grab your free trial.
Angel Tsvetkov is an experienced, goal-oriented Quality Assurance Architect for the Telerik Platform with proven ability in test automation. He has an exceptional ability to enter new environments and produce immediate results through flexible test techniques, backed by excellent communication skills.