
Test Lists are unstable while Quick Executions are stable

9 Answers 248 Views
General Discussions
This is a migrated thread and some comments may be shown as answers.
eugene
Top achievements
Rank 1
eugene asked on 28 Aug 2012, 01:34 PM
Hi,

I've been encountering an issue for a few weeks now that I haven't been able to fully solve. Originally, I created a test list containing 22 tests for a smoke test, totaling a few hundred steps. Tests range anywhere from 12 steps to 75 at the most. The reason some tests contain upwards of 75 steps is mainly the following issue.

100% of the time, one or more tests fails within the test list; however, 99.9% of the time, these tests pass when I run a Quick Execution. Many of the failed steps within the tests of the list are Navigate steps, and sometimes a click event that is unable to find its element. To try to eliminate these issues I added a Wait (for element to exist) step and have played around with the wait times. While this helps, it consistently falls short during a Test List run, and it was unnecessary for a Quick Execution.
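For reference, the Wait step I'm adding is essentially the standard explicit-wait synchronization pattern. A rough Selenium/Python equivalent is sketched below, just to illustrate the idea; the URL and element id are made up, and the actual Test Studio step is configured in the UI rather than coded:

    from selenium import webdriver
    from selenium.webdriver.common.by import By
    from selenium.webdriver.support.ui import WebDriverWait
    from selenium.webdriver.support import expected_conditions as EC

    driver = webdriver.Chrome()
    driver.get("https://example.com/smoke")  # hypothetical URL

    # Wait up to 15 seconds for the element to exist before clicking,
    # rather than clicking immediately and failing while the page is slow.
    wait = WebDriverWait(driver, 15)
    button = wait.until(
        EC.presence_of_element_located((By.ID, "submitButton"))  # hypothetical id
    )
    button.click()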

So after running a Test List of 22 tests, I spread the tests out across 4-5 Dynamic Test Lists. While this definitely helped, I can no longer run a scheduled test without having to account for the execution wait time of 1-5 seconds (depending on my setting), as well as extra padding so each Test List properly finishes before the next one executes. So while my smoke test takes 15 minutes to execute as a single Test List of 22 tests, I now have to span these smaller lists throughout an hour to be 100% sure all tests will run. I have voted on the ticket to allow multiple Test Lists to be scheduled at one time.

So now the problem I am running into is that even when one of these Dynamic Test List tests fails, I will Quick Execute the test and it will pass with flying colors. I have messed with the mouse clicks, wait times, and even IF/ELSE statements for steps such as Navigate, as mentioned earlier. However (and this may be for another ticket), when I Quick Execute a test containing an IF/ELSE, it runs, but when I run the same test inside a test list, it fails on the IF/ELSE 100% of the time.

What, if anything, can I do so that I can run my scheduled smoke test every day in a Test List that will not fail on a step that passes during a Quick Execution? I have also run these tests at different hours of the day to try to account for server performance.


Thanks

9 Answers, 1 is accepted

Cody
Telerik team
answered on 30 Aug 2012, 09:20 PM
Hello Eugene,

That's actually a pretty tall order. There are no cut-and-dried solutions to this problem. The one key difference between executing tests in a test list and executing them via Quick Execute is that the test list is much faster, because Quick Execute is slowed down by the presence and updating of our Visual Debugger. You can tell Test Studio to Run Without Debugging, as shown in the attached screenshot, to more closely match the speed of the test running in a test list.

Generally this problem is caused by the test script outrunning the web application. If your website uses Ajax, then setting the Ajax Timeout on critical steps may help. By "critical" I'm referring to any test steps that initiate an Ajax event, such as clicking a button that takes you from page 1 to page 2, where you then see a Loading/Busy icon as data is fetched from the web server. Test Studio uses this timeout as the maximum amount of time to wait for any Ajax event to finish before proceeding to the next test step.
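To illustrate the underlying idea (this is not Test Studio's internal implementation), "wait for the Ajax event to finish" amounts to something like the following generic Selenium/Python sketch; the jQuery.active check assumes the page uses jQuery:

    from selenium.webdriver.support.ui import WebDriverWait

    def wait_for_ajax(driver, timeout=30):
        """Block until all pending jQuery Ajax requests have completed.

        Assumes the page loads jQuery; a page using another library
        would need a different readiness check.
        """
        WebDriverWait(driver, timeout).until(
            lambda d: d.execute_script(
                "return window.jQuery ? jQuery.active === 0 : true"
            )
        )

    # Usage after a "critical" step that kicks off an Ajax fetch:
    # link.click()            # navigates from page 1 to page 2
    # wait_for_ajax(driver)   # don't touch the UI until loading finishes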

Other than that, we need to look at specific, repeatable failures so that we can isolate the root cause and come up with an appropriate solution. There are too many possible problems and solutions to try to list them all here.

Greetings,
Cody
the Telerik team
eugene
Top achievements
Rank 1
answered on 04 Sep 2012, 04:49 PM
Hi Cody,

Thanks for getting back to me. I've added Ajax Timeouts to 'Click' steps that did not previously have them. While this did seem to help when a page took longer than usual to load, it still did not help when running a large test list.

If I run these tests, with the newly added timeouts, in a small dynamic test list, they still pass, but if I run a single large test list, the tests still fail. Since Telerik cannot schedule more than one list at a time or set up simultaneous schedules, this is a pretty large problem for me. There is just too much time spent setting up multiple schedules, and it takes valuable time out of my day.

I guess I'm a little bit confused as to why a test list runs faster than a quick execution. As previously mentioned, I voted for this ability, but there are not many votes. What is the status of this feature being added?



Thank you.
Cody
Telerik team
answered on 07 Sep 2012, 04:14 AM
Hello,

I guess I'm a little bit confused as to why a test list runs faster than a quick execution.

As explained in my last response, it is because Quick Execute is slowed down by the presence and updating of our Visual Debugger.

but if I run a large single test list, the tests still fail.

I need to see a specific example of this including the error message and the test log before we can effectively diagnose what's causing it. Please send me an export from the Test Step Failure Details.

What is the status of this feature being added?

Which feature are you referring to, exactly? This one? I checked, and it has not yet been scheduled for implementation.

Greetings,
Cody
the Telerik team
Dean
Top achievements
Rank 1
answered on 21 Nov 2012, 06:12 AM

Hey guys, this thread covers exactly the problems I am facing at the moment.

My question is: why is there no option to have the visual debugger function while running a test list?

This seems like a no-brainer to me, as the very essence of testing is consistency, and your product is not consistent.

Tests are created in a mode that has the visual debugger (which appears to be a lot more stable than not having it), but when run, they do not have this same mode available (a List being the only viable mode for running more than one test).

This information should be front and centre of any information that customers are given, so that they do not waste large amounts of time trying to figure out a possible problem when the problem is a facet of the test software's design.

Personally, I could have turned off the visual debugger and created my tests in that mode, so that my lists would run in the same mode as my tests, and thus saved myself a lot of frustration trying to figure out why tests are fine individually but fail when grouped.
Cody
Telerik team
answered on 21 Nov 2012, 08:53 PM
Hello Dean,

My question is why has there not been built an option to have the visual debugger function while running a test list?

The answer to this question comes down to the intended purpose of the Visual Debugger and our Test Lists. We intended our Visual Debugger to be used only during test development. When you're done developing your tests, you add them to your test list(s) and execute the test list on a scheduled basis.

Test Lists are intended to run on a scheduled basis, totally unattended. Because they are meant to run unattended, there is no point in showing the Visual Debugger, as no one is there to take advantage of it.

Tests are created in a mode that has visual debugger (which appears to be alot more stable than not having it)...

Tests that do not run successfully both with and without the Visual Debugger imply two things (see the sketch after this list):
a) You're dealing with an application that has a lot of dynamic post backs, i.e. the UI changes due to JavaScript calls (jQuery or AJAX, for example) instead of standard, simple web page loads. These types of applications require special care in proper test design.
b) The test isn't doing sufficient synchronization with the web application, i.e. it's trying to interact with the UI before the UI is ready. This is a common problem when you're not accustomed to writing tests for dynamic websites that rely heavily on post backs.
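As a generic illustration of point (b), synchronization means waiting until the UI is actually ready before interacting with it. In Selenium/Python terms the difference looks roughly like this (Test Studio expresses the same idea through wait steps and timeouts; the element id here is made up):

    from selenium.webdriver.common.by import By
    from selenium.webdriver.support.ui import WebDriverWait
    from selenium.webdriver.support import expected_conditions as EC

    # Unsynchronized: interacting the instant the postback starts can hit
    # an element that is not yet visible or enabled, so the test fails
    # intermittently depending on timing.
    # driver.find_element(By.ID, "results-grid").click()

    # Synchronized: wait until the element is both visible and enabled,
    # then interact.
    grid = WebDriverWait(driver, 20).until(
        EC.element_to_be_clickable((By.ID, "results-grid"))
    )
    grid.click()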

All the best,
Cody
the Telerik team
Dean
Top achievements
Rank 1
answered on 22 Nov 2012, 12:51 AM

Why not reply to my whole post? Why just those two select parts?

Right now, because your company does not have a simple explanation (unless I've missed something) of the two testing modes and the problems that can arise from not turning off the Visual Debugger, you have customers struggling with testing because of a mode "YOUR" product runs in, and you chose two specific parts of my post so that you could give a snarky response (*below).

(This is a common problem when you're not accustomed to writing tests for dynamic websites that rely heavily on post backs.)

* You have no idea of my experience.

Yes, you explained well the reason for not having the visual debugger in list testing, and how post backs could be causing a problem.

Post backs aren't the problem, though; your software operating in two different modes that aren't disclosed to customers is.

My suggestion is to include information in your basic testing documentation about the effects the visual debugger has when developing tests, and how tests should be developed with the visual debugger's delay in mind.
Cody
Telerik team
answered on 26 Nov 2012, 11:08 PM
Hi Dean,

My apologies – we work with a wide set of testers, with experience and skillsets ranging from beginner to expert, and I did make an assumption in my previous response. Please accept my apologies if I offended you in any way; it was not my intent.

You do have a very valid point. There is a difference in how a test runs with our visual debugger versus without it, and we could do a better job of documenting this. I will discuss how we can improve our documentation to cover these differences.

Regards,
Cody
the Telerik team
Jorrit
Top achievements
Rank 1
answered on 23 Sep 2015, 08:05 AM

Dear Cody,

To start off, thanks for the valuable information you've given in your previous posts. It helped clarify some issues I was running into. However, if I might be so bold, I'd like to ask you for some support, and I think this ticket is the right place to ask my question (please correct me if I'm wrong).

Scenario

I have a test list which contains about 40 web tests of different sizes, most of them about 10-15 steps and never exceeding 50 steps. The tests were set up this way to ensure we could easily add and discard the testing of individual parts of the website. The tests add data to a database via a series of forms, after which the data is deleted using a SQL package (this last part is out of scope for my question and is purely to sketch the situation).

Issue

When executing the test list, I found that one of the tests fails the first time and passes the next; the regularity with which it fails and then passes is quite remarkable. Please note it's not just a manner of speaking when I say it fails the first time and passes the next.

The point where the test falters is right after closing a pop-up window and returning to the parent window where I started.

Question

Is there any way I can ensure my test's consistency is maintained so that it will pass the steps of reconnecting to the parent window?
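For reference, the step I'm trying to stabilize is essentially the switch-back-to-parent-window pattern. In generic Selenium/Python terms it would look roughly like the sketch below; this is purely illustrative, as my actual test uses Test Studio's built-in window steps:

    from selenium.webdriver.support.ui import WebDriverWait

    # Remember the parent window before the pop-up opens.
    parent = driver.current_window_handle

    # ... steps that open the pop-up ...

    # Switch into the pop-up (the one handle that isn't the parent).
    popup = next(h for h in driver.window_handles if h != parent)
    driver.switch_to.window(popup)

    # ... steps performed inside the pop-up ...

    # Close the pop-up, wait until its handle is gone, then explicitly
    # reconnect to the parent window before the next step runs.
    driver.close()
    WebDriverWait(driver, 10).until(lambda d: len(d.window_handles) == 1)
    driver.switch_to.window(parent)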

 

Thank you in advance for your help; I hope to hear from you soon.

With kind regards,

Jorrit

Cody
Telerik team
answered on 24 Sep 2015, 06:13 PM
Hi Jorrit,

It's very difficult to explain why the test fails the first time and passes the next based on that description alone; I'm going to need a little more information. First, please let the test fail, then go to Step Failure Details, find and click Export, and attach the generated .zip file. It contains valuable information we need to diagnose this type of failure. I would also like a copy of the failing test so that I can study its current structure. Once I have both of those items, I should be able to come up with a recommendation or two for how to make it stable.

Regards,
Cody
Telerik
 