Have you wondered how the teams working on Telerik products test software? We continue with the next chapter in our detailed guide, giving you deeper insight into our very own processes. You can find Chapter One here.

An Introduction

You’re reading the second post in a series intended to give you a behind-the-scenes look at how the Telerik teams, invested in Agile, DevOps and TDD development practices, go about ensuring product quality.

We’ll share insights into the practices and tools our product development and web presence teams employ to ensure we ship high-quality software. We’ll go into detail so that you can learn not only what we do technically, but why we do it. This will give you the tools to become testing experts.

Important note: After its completion, this series will be gathered up, updated/polished and published as an eBook. Interested? Register to receive the eBook by sending us an email at telerik.testing@telerik.com.

Chapter Two: Telerik Platform—Creating Synergy through Testing

Telerik Platform brings together various products and services to facilitate the development of mobile apps and provide developers with everything they need, which requires close coordination among all our testing teams. These products include:

  • AppBuilder: A web-based and standalone IDE for developing hybrid mobile applications with numerous additional features to facilitate this process
  • Backend Services: For mobile or desktop applications with SDKs for lots of programming languages, and support for features such as push notifications
  • Analytics: Analysis and reporting tools that provide insight into your customer base

By combining these elements, Telerik Platform provides a way to visually define a mobile app's user interface, develop the functionality behind that UI and handle the app's server-side requirements. Leveraging Analytics, developers can get closer to their end users, collect feedback from them and move on to testing with Telerik Test Studio.

Assuring the best possible quality for such a diverse platform is quite challenging. Not only must every feature of each component work according to its specification, but the platform must also provide seamless integration between all components.

Testing Approach

In developing Telerik Platform, we introduced thousands of integration tests to verify the integration points between services, along with user experience and performance tests. There is no centralized QA team for the entire Telerik Platform; instead, thirty QA engineers work collaboratively in several teams to keep up its quality. This distributed structure adds complexity to executing common testing tasks efficiently.

When we started working together, each team had its own, disparate tests. We began with service testing, which covered the integration points between all services; each team handled its own portion.

Next, we established a schedule for the different teams to execute common tests. All common service tests were scheduled to be executed against all test environments orchestrated by Jenkins, which provided a common place where each team member could observe the status of a particular functional area.

Still, adding tests and executing them across different environments increased the number of Jenkins jobs dramatically, necessitating a way to gain better test visibility. To that end, we built Dashing dashboards, which could be observed on TVs in each team room. At this point, we had established processes for common service tests, UI tests and performance tests, all of which were split across different teams and maintained independently.

We also invested in resilience testing as the teams found it should be an integral part of their testing process.

To facilitate the testing process, we needed a common communication channel. We set up mail groups and IM channels to enable communication between QAs across different teams. Next, we implemented the Telerik TeamPulse feedback portal as part of our QA process. Each QA could submit an idea in the feedback portal; then we voted for the ideas we thought were the most important.

Areas of Testing

Functional Testing

The correct functioning of a single service or component in Telerik Platform is as important as the correct functioning of the entire platform. Functional testing therefore verifies the quality and functionality of all service layers. We use both manual and automated tests for UI, REST and Integration testing. The test types are executed continuously.

  • UI testing: Performed frequently and automated using an image recognition tool, UI testing compares the current system state with images set as expected behavior to determine whether the desired behavior is achieved.
  • REST testing: A massive set of automated service tests that assess the stability of all Platform services. REST tests are fast and cover a large percentage of the functional scenarios a user may experience. Speed is a benefit because, in a continuous delivery environment, the time for distributing a feature or fix to clients is limited.
  • Integration tests: Used to validate Platform service integration. The variety of services, developed with different technologies, must work together seamlessly.
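To make the REST layer concrete, here is a minimal sketch of the kind of automated service check described above. The response payload shape and field names are hypothetical illustrations, not the actual Platform API:

```python
import json

def check_service_health(raw_body: str, status_code: int) -> bool:
    """Return True when a service answered 200 with a well-formed payload."""
    if status_code != 200:
        return False
    try:
        body = json.loads(raw_body)
    except ValueError:
        # A non-JSON body means the service is misbehaving.
        return False
    # Assumed healthy-response shape: the service reports its name and state.
    return body.get("state") == "ok" and "service" in body

# Simulated responses, standing in for real HTTP calls:
assert check_service_health('{"service": "appbuilder", "state": "ok"}', 200)
assert not check_service_health("not json", 200)
assert not check_service_health('{"service": "appbuilder", "state": "ok"}', 503)
```

In a CI run, hundreds of checks like this execute against every test environment, which is what keeps them fast enough for continuous delivery.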

All of the test types are executed through a CI tool, which enables us to keep a history of the results, coordinate the tests and integrate the reports with other systems for monitoring and quality estimation.

Resilience Testing

Cloud services live in an environment where one thing is 100% certain: at some point, something will fail.

Consider a cloud solution that includes a web server, database server and cache server. A simple UI deployed on and served by the web server collects data from the database server and displays it in the end-user's browser. For better performance, the web server leverages the cache server to decrease the number of requests to the database server. Resilience tests are used to determine the end-user experience should the cache server fail.

A desired outcome is that the end user can still use the cloud solution, regardless of a failed cache server. Performance may be impaired, but the solution should still work.
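A minimal sketch of the read path such a resilience test would exercise, with illustrative names rather than the Platform's real code:

```python
class CacheDown(Exception):
    """Raised when the cache server cannot be reached."""

def fetch_record(key, cache_get, db_get):
    # Resilient read path: try the cache first and fall back to the
    # database when the cache server is unavailable. Performance is
    # degraded, but the end user still gets a result.
    try:
        return cache_get(key)
    except CacheDown:
        return db_get(key)

# Simulate a failed cache server, as a resilience test would:
def broken_cache(key):
    raise CacheDown()

assert fetch_record("user:42", broken_cache, lambda k: {"id": 42}) == {"id": 42}
```

The resilience test asserts exactly this: with the cache stopped, the request still succeeds via the database.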

The Telerik teams test how our services behave when functional dependencies, as well as infrastructure dependencies, are missing. We strive to answer questions such as:

  • Can you create an AppBuilder project in Telerik Platform if the AppBuilder service is down?
  • What is the user experience in such a case?
  • What will performance be when the cache server is not operational, or when one of the nodes in the load-balanced environment is down?
  • Does the affected functionality start working again when the missing dependency comes back?

Automating start/stop procedures of particular infrastructure elements simplifies and accelerates the execution of well-documented resilience scenarios. We manage the start/stop procedure from a web interface to provide the QAs better control over particular infrastructure components, and to visualize all infrastructure elements.
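The start/stop automation behind that web interface could be sketched roughly as follows; the component names and commands are hypothetical placeholders, not our actual infrastructure:

```python
# Registry mapping infrastructure elements to their control commands.
# Names and commands are illustrative stand-ins.
COMPONENTS = {
    "cache-server": {"start": "systemctl start redis", "stop": "systemctl stop redis"},
    "web-node-1":   {"start": "systemctl start nginx", "stop": "systemctl stop nginx"},
}

def toggle(component, action, run=print):
    """Look up and run the start/stop command for a component.

    The `run` callable is injectable so tests can capture the command
    instead of actually executing it on a host.
    """
    command = COMPONENTS[component][action]
    run(command)
    return command

# A resilience scenario begins by taking the cache server down:
assert toggle("cache-server", "stop", run=lambda c: None) == "systemctl stop redis"
```

Driving such a registry from a web UI is what gives QAs one-click control over, and visibility into, each infrastructure element.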

Considerations for preparing resilience tests include the execution environment and defining the testing scenarios. Today, scenarios are defined, tested and documented in Excel spreadsheets. Found bugs are logged into TeamPulse for planning future iterations.

Security Testing

Security is one of the most important non-functional characteristics of enterprise software quality. Unfortunately, security testing is often neglected, as many project stakeholders underestimate the negative impact of a lack of security on the project or company.

We take security testing seriously. Our enterprise customers require established security policies, process control and compliance with certain security standards. To that end, we have the following goals around security testing:

  • Test our software for vulnerabilities
  • Establish security testing practices on a regular basis
  • Verify we meet common industry standards for application security, including PCI DSS, OWASP Top 10, CWE/SANS Top 25 and CERT Secure Coding Standards
  • Produce testing reports to show to customers, upon request
  • Pass all PCI-compliance tests for application security

We perform security testing in two categories: black box and white box:

  • Black box tests: Identify and resolve potential security vulnerabilities before deployment, and periodically on already-deployed systems. For black box testing, we scan applications (often web applications) from outside our environment, attempting a variety of security attacks such as input checking and validation, SQL injection, session management attacks, buffer overflow exploitation, XSS and so on. This is known as dynamic security testing, which simulates how a malicious user may attempt to "break" the application.
  • White box tests: During white box testing, we analyze the flow of data, control and information, as well as coding practices and exception and error handling within the system, to test intended and unintended software behavior. White box testing can determine whether the code implementation follows the intended design, validate implemented security functionality and uncover exploitable vulnerabilities. Known as static security testing, this type of testing requires gathering all source code and applying a scan with a security analysis tool. Once the scan is performed and the problems are triaged and fixed in development, a new static scan is applied on a regular basis to verify no new vulnerabilities are introduced with the release of new functionality.
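As a toy illustration of the dynamic (black box) side, a test can probe an input validator with common attack payloads; the validator and payload list here are illustrative, not the Platform's real sanitization:

```python
import re

# A few classic black box probes. Real scanners use far larger payload sets.
ATTACK_PAYLOADS = [
    "' OR '1'='1",                 # classic SQL injection probe
    "<script>alert(1)</script>",   # reflected/stored XSS attempt
    "A" * 10_000,                  # oversized input / buffer abuse
]

def is_safe_username(value: str) -> bool:
    """Illustrative validator: accept only short alphanumeric usernames."""
    return bool(re.fullmatch(r"[A-Za-z0-9_]{1,32}", value))

# A passing black box check asserts every hostile payload is rejected:
assert all(not is_safe_username(p) for p in ATTACK_PAYLOADS)
assert is_safe_username("normal_user_01")
```

The same idea scales up in dynamic scanners, which fire thousands of such payloads at every exposed input of a deployed application.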

Stress Testing

For a quality product, you must achieve good results in terms of response time and server resource utilization under the expected load. However, it’s also important to test our systems under stress—when more users access them than expected.

To do so, we increase the number of virtual users beyond the expected level using the “controller-agent” approach. In this approach, each controller manages numerous agents. Controllers and agents are hosted by a cloud-based hosting provider so that load can be generated in a geographically dispersed manner. (Generating load from a single point may, in some cases, produce incorrect results for the overall performance of your system.)
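The controller-agent idea can be sketched in miniature. In this toy version the agents are thread pools inside one process and the HTTP call is stubbed out, whereas real agents run on geographically dispersed cloud hosts:

```python
import concurrent.futures

def run_virtual_user(user_id, hit=lambda uid: 200):
    """One virtual user issuing a request; `hit` stands in for an HTTP call."""
    return hit(user_id)

def agent(user_ids):
    """An agent drives its assigned virtual users concurrently."""
    with concurrent.futures.ThreadPoolExecutor(max_workers=8) as pool:
        return list(pool.map(run_virtual_user, user_ids))

def controller(total_users, agents=4):
    """Split the target load across agents and collect all results.

    For simplicity this loops over agents sequentially; a real controller
    dispatches to all agents in parallel over the network.
    """
    results = []
    for a in range(agents):
        results.extend(agent(range(a, total_users, agents)))
    return results

statuses = controller(100)
assert len(statuses) == 100 and all(s == 200 for s in statuses)
```

Raising `total_users` past the expected load, and pointing `hit` at the real system, is what turns this skeleton into a stress run.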

Did you find these tips helpful? Stay tuned for the next chapter which will continue the story of the Telerik Platform team and how they go about test execution and reporting.

If you are interested in trying out the latest Test Studio, feel free to go and grab your free trial.


About the Author

Angel Tsvetkov

Angel Tsvetkov is an experienced, goal-oriented Quality Assurance Architect for the Telerik Platform with a proven ability in test automation. He excels at entering new environments and producing immediate results through flexible test techniques and strong communication skills.
