I have a set of tests that open my application, browse through it, and run a specific database query. Then, using some custom-code files with a self-made object, I compare the query output against another file; the comparison returns true or false depending on whether the two files hold identical data.
In my tests, I use an assert that calls the comparison function and fails the coded step when the values differ.
Now my real problem: I have a "master test" that uses "Test as Step" to run all of my tests one after another. Instead of having to run each test manually, I can walk away and not worry about them getting done.
When I run my tests individually, my coded step passes (the assert is not thrown).
When I run my master test, my coded steps (and therefore those tests) pass, as long as none of the previous "Test as Step" runs has failed on its coded step. If a single "Test as Step" fails, every subsequent "Test as Step" fails on its own coded step.
Each test instantiates its own objects with the new keyword before comparing them, so I can't see how stale data from a previous test could still be in the objects of a new test. I'm not sure what else the problem could be, though. Any push in the right direction would be greatly appreciated.
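For reference, the comparison step described above amounts to something like the following sketch. Everything here is illustrative: QueryResultFile, HasSameData, and the path variables are hypothetical names standing in for the custom-code files, not the actual code.

```csharp
// Sketch only: hypothetical names illustrating the described coded step.
[CodedStep("Compare query output against the baseline file")]
public void CompareQueryOutput()
{
    // Fresh instances per test, created with `new` as described above;
    // nothing here is static or shared between tests.
    var expected = new QueryResultFile(expectedFilePath);
    var actual = new QueryResultFile(actualFilePath);

    Assert.IsTrue(expected.HasSameData(actual),
        "Query output does not match the baseline data.");
}
```

One thing worth ruling out under this setup is any static field, cached file handle, or shared temp path inside the custom-code files, since every "Test as Step" in the master test runs in the same process.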
Hi team,
We are evaluating Telerik Test Studio as a test automation tool for one of our customers' products. The product is a Windows application built in C# and WPF, and it uses DevExpress (third-party) UI controls.
Currently we are carrying out a proof of concept (POC) to automate a few business scenarios in the product. We have been able to automate some of the scenarios, and they run successfully in quick execution without any errors.
But during test list scheduling, some test cases fail with a "Null reference exception".
I have a list with 40 test cases, but it isn't stable: sometimes it runs correctly, and sometimes tests fail (approx. 38/40). I'm not sure why it is inconsistent. If one test fails today, a different test may fail on the next run.
So I'm not sure why the tests fail in the list while individually each one passes consistently.
Could anyone please help me with this?
Hi All,
I'm trying to find a way to exit a loop dynamically, before it completes its number of iterations, if a certain event occurs. So logically it needs to look something like this:
Loop 20 times
    Refresh page
    Wait 1 sec
    If (Text = bob)
        Exit loop
    End if
End loop
The reason I need to do this is that we have a cloud service that can "acceptably" take up to 10 minutes to process new information. If it takes longer than 10 minutes, I want the test to fail, but I don't want the test to wait the full 10 minutes if the data comes back sooner. The data is not pushed to the page; it has to be pulled by a page refresh. From the research we've done here, we can't find a way to force-exit the loop. Has anyone done this before?
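If a coded step is acceptable, the loop above can be expressed as a polling loop with an overall deadline. This is a sketch under assumptions: the ActiveBrowser calls are from the WebAii framework, and "bob" stands in for whatever marker text signals that processing has finished.

```csharp
// Sketch of a coded step: poll by refreshing, bail out early on success,
// fail the test if the 10-minute deadline passes.
[CodedStep("Wait up to 10 minutes for the cloud service to process the data")]
public void WaitForProcessedData()
{
    DateTime deadline = DateTime.UtcNow.AddMinutes(10);
    while (DateTime.UtcNow < deadline)
    {
        ActiveBrowser.Refresh();                 // pull the latest page state
        System.Threading.Thread.Sleep(1000);     // give the page a second to settle
        if (ActiveBrowser.ContainsText("bob"))   // replace with the real marker text
            return;                              // data arrived early: exit the loop
    }
    Assert.IsTrue(false, "Data was not processed within the acceptable 10-minute window.");
}
```

Using a deadline rather than a fixed iteration count keeps the timeout tied to wall-clock time, so slow refreshes don't silently stretch the 10-minute budget.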
Thanks,
Dan
Hello
My problem is a weird one.
I cannot record any test steps with Chrome or Firefox on my website. Telerik Test Studio was perfectly capable of recording tests of any kind back in December 2015. Now it acts in this way:
I don't know what the cause of this is, as it was working before and is still working with IE.
Hello,
We are trying to open a detail screen that can be opened by double-clicking a row in the grid (RadGridView). I have put in a wait step to overcome problems with the grid building up, and changed the click step into a double-click step. Still I am getting this error:
Unable to locate element. Details: Attempting to find [Wpf] element using
Find logic
(Wpf): [AutomationId 'Exact' CellElement_0_2] AND [XamlTag 'Exact' textblock]
Unable to locate element. Search failed!
Up till now I have also been unable to resolve the issue using the Resolve Failure wizard.
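For what it's worth, the failing lookup in the error above corresponds roughly to the coded-step sketch below, with an explicit wait added before the double-click. The API names (ActiveApplication, XamlFindExpression, the Wait and User calls) are assumptions from the WebAii framework; adjust to your project.

```csharp
// Sketch: find the cell from the error's find expression, wait for it
// to exist, then double-click it via a user event.
[CodedStep("Double-click grid cell CellElement_0_2")]
public void DoubleClickGridCell()
{
    var cell = ActiveApplication.MainWindow.Find.ByExpression(
        new XamlFindExpression("AutomationId=CellElement_0_2", "XamlTag=textblock"));
    cell.Wait.ForExists(30000);                       // up to 30 s for the grid to build
    cell.User.Click(MouseClickType.LeftDoubleClick);  // simulate the double-click
}
```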
Any help will be appreciated.
Richard
I have two steps that I want to run at the beginning of each test I've created across multiple projects. Is there a way I can do that? I wanted to create the steps in one test and save them somewhere, so that when I'm building the next test I can just pull the steps in. The only thing that will be different in each test is the URL that I'm navigating to.
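One possible shape for the varying part, sketched under the assumption that the shared steps live in a coded step and the URL comes from a data-bound column. The column name "StartUrl" is hypothetical; ActiveBrowser and the Data indexer are from the WebAii coded-step API.

```csharp
// Hypothetical shared coded step: navigate to a per-test URL read from
// a data-bound column named "StartUrl".
[CodedStep("Navigate to this test's start URL")]
public void NavigateToStartUrl()
{
    string url = Data["StartUrl"].ToString();  // per-test value from data binding
    ActiveBrowser.NavigateTo(url);
}
```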
Thanks,
Misty