This post discusses how to best leverage test automation, where manual verification still matters, and the business value of cross-browser testing for the customer experience.
Cross-browser testing involves repeatedly testing against various browser platforms. Software applications typically support the top three to five browsers, with support varying for less popular browser types. For example, most mobile and web applications run on Chrome, Safari, Microsoft Edge and Firefox. Beyond those, there are custom browsers as well as browsers such as Opera, Brave and Vivaldi.
For testing purposes, a QA tester may need to plan on testing at least two browsers to ensure the application functions correctly. Some organizations pre-determine which browsers the application supports, and only those are tested. Executing the same automated or repetitive manual tests on each browser is mind-numbingly dull. Are there better options?
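One way to keep the repetition manageable is to generate the browser/test combinations up front so no pairing is silently skipped. The browser list and smoke-test names below are illustrative assumptions, not part of the original post; swap in whatever the organization has committed to supporting.

```python
from itertools import product

# Hypothetical supported-browser list and smoke tests for illustration.
BROWSERS = ["chrome", "firefox", "safari", "edge"]
SMOKE_TESTS = ["login", "search", "checkout"]

def build_test_matrix(browsers, tests):
    """Pair every smoke test with every supported browser so that
    no browser/test combination is accidentally left untested."""
    return [(browser, test) for browser, test in product(browsers, tests)]

matrix = build_test_matrix(BROWSERS, SMOKE_TESTS)
# 4 browsers x 3 tests = 12 combinations to schedule or parametrize
```

A matrix like this feeds naturally into a test runner's parametrization feature, so each combination shows up as its own pass/fail result.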
Automated regression or smoke test suites are great options for executing cross-browser testing, but do they cover the typical browser defect types? Does the automated test suite focus specifically on defects commonly found between browsers? If not, consider adding manual testing to cover user workflows that automation doesn't or can't fully cover. How can a tester design tests, or mix up their test execution, to find where the best cross-browser defects hide? How does one identify the gaps where cross-browser defects live?
This guide provides 10 functional testing techniques to locate defects between browsers, and descriptions of the most common defects found between browsers.
Software applications are typically developed for a single browser. However, browsers vary significantly in how they perform and in how webpages are formatted and displayed when loaded. Variances in security handling also impact how a webpage displays to the end user. The value of cross-browser testing lies in ensuring the application runs on more than a single browser, because end users use a wide variety of available browsers.
Cross-browser testing's value lies in protecting customers who use a browser different from the one the site was developed for, and in ensuring the application performs as expected. It also protects the application and the business bottom line. How? When customers can't use their favorite or preferred browser, they'll likely walk away and find another provider whose applications function on their browser. For example, many modern applications are developed using Google tools for the Chrome browser. However, due to Chrome's automatic tracking and looser security protocols, many users refuse to use it. Instead, they'll choose Firefox or Safari for their enhanced security and anti-tracking functionality.
Most software development organizations determine which browsers the application supports. Most support the top browsers, including Chrome, Edge, Firefox and Safari. Keep in mind, however, that many businesses still actively use Internet Explorer, so if the application is intended for business clients, Internet Explorer may need to be included in cross-browser test execution plans.
Cross-browser testing discovers defects in every regression cycle in which new code or features are deployed. The simple reason is that developers and testers often forget to test new features on anything but their favorite browser.
If no unit, automated or manual tests execute on more than one browser, then defects are likely present in each release. Customers who find significant issues in a release because they prefer Safari over Chrome, for example, are customers the business risks losing. Customer experience is king in today's market, so finding new customers and retaining existing ones is critical to business success.
Cross-browser defects typically involve display or UI issues. Differences in type and control sizing often cause buttons or links to become unreadable. Sizing issues also extend to form text and font display. Additionally, many pagination functions tend to fail, either by not loading the correct page or through non-functioning links or buttons for moving between pages.
Defects between browsers are also found in date-related fields and date displays, authentication and login, and even security errors when an application saves to a database. Graphics and images are frequently resized or, when serving as links to other pages, fail to connect. When applications include frames or designated display sections, those sections are often merged or overwritten onto each other.
For successful cross-browser testing, look for display defects. Responsive web design and mobile pages are often ignored, so verify that a user can adjust the page size and still read the text and images.
Occasionally, application functionality that executes properly in one browser crashes the application in another. Manually testing user workflows from end to end often yields a plethora of defects: function failures and display problems like fonts too small to read, text that doesn't zoom in or out, or button and link labels that run over each other.
Most cross-browser defects are found by testing workflows in the user interface across all application functions. However, there are often additional connection-type failures when data is saved via an API, written directly to a database, or cached in temporary storage. Testers find these defects simply by saving data within the application or using a function that causes the application to save, send or transfer data.
Leverage test automation where it exists, and include tests that validate page display, including buttons, links and framed groups on a page.
Use manual testing to perform end-to-end or system testing based on user workflow scenarios for the application.
Log in and test authentication security. Confirm end users cannot bypass security within a new browser or hack the URL to gain access without proper authentication. Verify no access errors appear when the user switches between browsers.
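A URL-hacking check like the one above can be scripted: request each protected path from a fresh, unauthenticated session and flag anything that doesn't deny access or redirect to login. The paths, status codes and stand-in `fetch` below are hypothetical examples, not the application's real endpoints.

```python
# Hypothetical protected paths for illustration only.
PROTECTED_PATHS = ["/account", "/admin", "/orders"]

def check_no_bypass(fetch, paths):
    """Confirm every protected path denies an unauthenticated request.
    `fetch` stands in for an HTTP GET issued from a fresh browser session
    with no cookies; any status other than a redirect (301/302) or a
    denial (401/403) is a potential authentication bypass."""
    return [p for p in paths if fetch(p) not in (301, 302, 401, 403)]

# Stand-in server responses: /orders incorrectly serves content (200).
statuses = {"/account": 302, "/admin": 401, "/orders": 200}
leaks = check_no_bypass(statuses.get, PROTECTED_PATHS)
# leaks now lists the paths reachable without logging in
```

Run the same check once per supported browser profile, since session and cookie handling can differ between them.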
Execute responsive display testing by resizing the browser window and zooming each page in and out. Verify the user can read all text and select all functional buttons, links and images.
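Resizing is easier to repeat consistently from a fixed list of viewport sizes. The sizes below are common illustrative breakpoints, not an official list; in a real run each tuple would feed the driver's window-resize call (for example, Selenium's `set_window_size`) before re-verifying text and controls.

```python
# Assumed, commonly used viewport sizes for responsive checks.
VIEWPORTS = [
    ("mobile-portrait", 375, 667),
    ("mobile-landscape", 667, 375),
    ("tablet", 768, 1024),
    ("laptop", 1366, 768),
    ("desktop", 1920, 1080),
]

def responsive_checks(viewports):
    """Yield one resize instruction per viewport; in a browser session,
    apply each size, then verify text remains readable and all buttons,
    links and images stay selectable."""
    for name, width, height in viewports:
        yield {"name": name, "width": width, "height": height}

plan = list(responsive_checks(VIEWPORTS))
```

Keeping the list in one place means every browser gets checked at the same sizes, which makes display differences easier to compare.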
View each page in the user interface and check for display issues: control sizing, text running over or into other text, and tiny fonts on the page or within forms.
Exercise all data entry forms: save the original data, then edit and save again. Verify forms function as designed, especially where they perform calculations or move automatically through fields based on the user's input. Confirm the data saves as expected and that saving does not hang the application or generate a connection or other error.
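The save/edit/save pattern above can be expressed as a reusable round-trip check. The in-memory store here is a stand-in for whatever persistence path the application actually uses (API call, direct database write, etc.); the function names are illustrative, not from any specific framework.

```python
def verify_save_roundtrip(save, load, record_id, original, edited):
    """Generic save -> reload -> edit -> reload check: confirms both the
    original and the edited data persist exactly as entered."""
    save(record_id, original)
    assert load(record_id) == original, "original data did not persist"
    save(record_id, edited)
    assert load(record_id) == edited, "edited data did not persist"
    return True

# In-memory stand-in store for illustration only.
store = {}
ok = verify_save_roundtrip(
    lambda k, v: store.__setitem__(k, v),
    lambda k: store.get(k),
    "form-1",
    {"name": "Ann", "qty": 2},
    {"name": "Ann", "qty": 3},
)
```

Running the same round-trip from each browser surfaces the connection and save errors described earlier that only appear on some engines.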
Test all paging or pagination options, including moving back and forth between pages. Confirm each page displays and pages switch context as expected.
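A pagination walk, forward then backward, can be sketched as below. `get_page` is a stand-in for the UI action that switches pages; the fake page store exists only so the sketch runs on its own.

```python
def walk_pages(get_page, total_pages):
    """Visit every page forward then backward, confirming each request
    returns non-empty content (catching blank or wrong-page loads)."""
    visits = []
    order = list(range(1, total_pages + 1)) + list(range(total_pages, 0, -1))
    for n in order:
        content = get_page(n)
        assert content, f"page {n} came back empty"
        visits.append((n, content))
    return visits

# Fake three-page result set standing in for the real application.
pages = {n: f"results page {n}" for n in range(1, 4)}
visits = walk_pages(pages.get, 3)
```

The backward pass matters: several of the pagination defects described earlier only appear when moving to a previous page, not when advancing.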
Most applications use date fields or calendar pickers to let users add or select dates. Confirm date fields display in the proper format and save as expected. Negative testing and boundary value testing are handy options for flushing out issues with date fields.
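Boundary and negative cases for date fields can be generated rather than picked by hand. The sketch below builds the standard boundaries (year ends, February's end, leap day) plus a few invalid strings for negative tests; the exact case list is an assumption to adapt per field.

```python
from datetime import date

def date_boundary_cases(year):
    """Boundary values worth entering in any date field: year edges,
    end of February, and the leap day when the year has one."""
    cases = [
        date(year, 1, 1),    # first day of year
        date(year, 12, 31),  # last day of year
        date(year, 2, 28),   # end of February in a common year
    ]
    if year % 4 == 0 and (year % 100 != 0 or year % 400 == 0):
        cases.append(date(year, 2, 29))  # leap day exists this year
    return cases

# Negative-test inputs the field should reject cleanly.
INVALID_INPUTS = ["2024-02-30", "31/31/2024", "", "not-a-date"]

leap_cases = date_boundary_cases(2024)
```

Entering the same case list in each browser quickly exposes the format and display differences in date handling noted earlier.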
Performance testing, whether automated or executed manually by timing output, also finds defects. Browsers use different underlying engines to perform functions, and if an application isn't designed for a variety of browsers, performance degradation is noticeable.
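Manual timing can be formalized with a small harness that times an action and flags it against a budget. The two-second threshold and the stand-in action below are assumed examples; set the budget per page or workflow.

```python
import time

def time_action(action, threshold_seconds):
    """Time a single user action (page load, form submit) and report
    whether it stayed within the assumed performance budget."""
    start = time.perf_counter()
    action()
    elapsed = time.perf_counter() - start
    return elapsed, elapsed <= threshold_seconds

# Stand-in for a real page-load or form-submit action.
elapsed, within_budget = time_action(lambda: sum(range(100_000)), 2.0)
```

Recording the elapsed times per browser makes engine-to-engine degradation visible as numbers rather than a vague impression of slowness.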
Execute functional testing on frames or pages with grouped items that display in specific sections. Often, a frame no longer allows users to enter data or edit options from dropdown lists or other selector controls. Verify each field in the frame or group operates as expected.
When cross-browser testing, always have the browser's dev tools window open. Most browsers, including Safari, Chrome, Firefox and Edge, include a dev tools option. The dev tools window provides insight into application failures that are not visible in the user interface.
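Console output from dev tools can also be harvested in automation and filtered for the severe failures that never surface in the UI. The entry shape below follows the dict format Selenium's `driver.get_log("browser")` returns; the sample entries themselves are invented for illustration.

```python
def severe_console_errors(log_entries):
    """Filter browser console entries down to severe failures.
    Entries are dicts with 'level' and 'message' keys, matching the
    shape of Selenium's driver.get_log("browser") output."""
    return [e for e in log_entries if e.get("level") == "SEVERE"]

# Example entries as they might appear after loading a page.
sample_log = [
    {"level": "INFO", "message": "analytics loaded"},
    {"level": "SEVERE", "message": "Uncaught TypeError: x is undefined"},
    {"level": "WARNING", "message": "deprecated API call"},
]
errors = severe_console_errors(sample_log)
```

Failing a test run whenever this list is non-empty turns invisible console errors into reportable cross-browser defects.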
Cross-browser testing ensures a positive customer experience by supporting a variety of browser types. Customers can use the application on their preferred browser instead of being forced to choose between switching browsers or switching applications. Don't let customers leave because of browser issues. Attract new customers and keep existing ones by creating and testing application functionality on a variety of browsers.
Are application testers already maxed out with test execution? Consider tools that make managing cross-browser testing both efficient and effective. Cross-browser testing tools like Test Studio leverage the latest browser testing technology for creating, managing and executing cross-browser tests. Test Studio gives test teams access to multiple browser types through a virtualized interface, so there's no need to install various browsers on the local machine. Add business value to test execution by ensuring the application runs on the variety of browsers customers use or prefer.
A QA test professional with 23+ years of QA testing experience within a variety of software development teams, Amy Reichert has extensive experience in QA process development & planning, team leadership/management, and QA project management. She has worked on multiple types of software development methodologies including waterfall, agile, scrum, kanban and customized combinations. Amy enjoys continuing to improve her software testing craft by researching and writing on a variety of related topics. In her spare time, she enjoys gardening, cat management and the outdoors.