I'm running a data-driven test with 500 rows and 220 columns that has roughly 600 steps. It fails in the middle of the 49th iteration. I'm seeking tips to reduce the number of batches I need to run the test.
Unfortunately, the log didn't pick anything up other than "Out of Memory"
1 Master test with Data Binding
1 Subtest to log in that runs each iteration - opted for this so that the browser could be terminated between iterations
1 Subtest to click "Save" that runs a single step - the save button is the same across all pages
I'm running a data-driven test with 500 rows and 220 columns that has roughly 600 steps.
I have to admit this makes me cringe just thinking about it for multiple reasons:
- The size of the test is much too large.
- The size of the data is much too large.
- Test Studio will struggle to handle the size of the results that will be created by such a large test. Keep in mind Test Studio wants to load/maintain the entire result set into memory all at once. When the result is too large it cannot be loaded into memory.
If a test script exceeds about 150 steps total (including sub-tests), there's a pretty good chance you're trying to do too much in a single test. A common pitfall I see beginners (not saying you're a beginner, I have no idea) fall into is putting multiple test cases into a single test. The problem you run into if/when you do this is that when the test fails, it's not immediately clear which feature is broken. The tester must spend time interpreting the results to try to determine whether feature A or B or C is broken. Tell me, do you have a lot of IF/ELSE blocks in your test? That's another sign of trying to test too many features in a single test.
We advocate a separate distinct test per test case. Each test case should be about one specific function or feature. If the tester takes this approach then when a test fails you immediately know which feature is broken simply by the name of the test.
You can run into the same problem with data driven tests. A classic example is trying to test both good and bad logins by putting both into your data source. These are two separate test cases that deserve completely separate tests (and data sources).
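To make the good/bad login example concrete, here's a rough Python sketch (not Test Studio code; the column names and data are made up) of splitting one mixed data source into two single-purpose ones:

```python
import csv
import io

# Hypothetical combined data source mixing two test cases:
# valid logins (expected "success") and invalid logins (expected "failure").
combined = """username,password,expected
alice,correct-pw,success
bob,wrong-pw,failure
carol,correct-pw,success
"""

rows = list(csv.DictReader(io.StringIO(combined)))

# One data source per test case, so each test stays single-purpose.
valid_logins = [r for r in rows if r["expected"] == "success"]
invalid_logins = [r for r in rows if r["expected"] == "failure"]

# Each list would back its own test: "Login_Succeeds" bound to
# valid_logins, and "Login_RejectsBadPassword" bound to invalid_logins.
```

With the data separated this way, a failure in either test immediately tells you which behavior broke, without interpreting a shared result set.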
Here's an excellent blog post about Using Data Driving Wisely. This may give you some ideas how to reduce the complexity of your tests.
Unfortunately, the log didn't pick anything up other than "Out of Memory"
This doesn't surprise me. The size of the results being generated by such a large test and that many iterations is probably more than Test Studio can handle. I believe it needs to be broken up into more granular pieces. It would be far better to have a lot of tests combined into a test list than a single overly complex test that's trying to do everything. Test Studio doesn't have any problem with lots of tests in a test list since each test is run separately and generates separate test results. Test Studio won't try to load multiple results into memory at the same time, thus avoiding Out Of Memory problems.
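The memory difference is just about where each result lives. Here's a minimal Python sketch (all names hypothetical, not Test Studio internals) of the test-list model, where each test's result is written to its own file and released before the next test runs:

```python
import json
import os
import tempfile

def run_test(name, data_row):
    """Stand-in for executing one small test; returns its result."""
    return {"test": name, "row": data_row, "passed": True}

def run_test_list(tests, out_dir):
    """Run each test independently and persist its result to its own
    file, so no single in-memory result set grows with the list size."""
    paths = []
    for i, (name, row) in enumerate(tests):
        result = run_test(name, row)
        path = os.path.join(out_dir, f"result_{i}.json")
        with open(path, "w") as f:
            json.dump(result, f)  # the result leaves memory after this
        paths.append(path)
    return paths

out_dir = tempfile.mkdtemp()
paths = run_test_list(
    [("CreateItemA", {"id": 1}), ("CreateItemB", {"id": 2})], out_dir
)
```

One giant test is the opposite: one result object that must stay resident for all 500 iterations, which is where the Out of Memory comes from.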
This test only does one thing - creates an item in a store. It has no verification in the store for the items - it just makes sure they are made.
Items have a number of variables that need to be gone through. Here's the workflow.
1 - Create an Item
2 - Set X values
3 - Set Y values
n - Optionally, set Z values
My regression test goes 1-n, 1-(n-1), 1-(n-1), 1-(n-1) to cover all the different variables. I can't stop creating an item in the middle of the test. In this regard, I've followed best practices. It's the shortest end-to-end test that can run. Should a specific combination of factors cause issues, I can add that specific combination to my spreadsheet to test each week.
So in essence, I'm having no issues with running regression testing using Test Studio.
My issue is that I'm using Test Studio as a data loader. Instead of spending a great deal of developer time building an item data loader, we want to use the test script that creates items to load them into the system. I completely understand if Test Studio doesn't offer support for this in general. I'm just looking for advice on how to minimize the number of batches I need to run.
Here are my follow up questions:
1) Is that 150 a specific number at which Test Studio starts to fail?
2) With the above in mind, is it poor practice to have test lists within test lists? Here's my setup on one of my tests.
"CHECK FOO WITH TWO, TF Flags"
1 - SETUP
2 - SET TT
3 - TEST TT
4 - SET TF
5 - TEST TF
Because of Test Studio's limit of 17 tests, it's hard to quickly review what went wrong when they're split up and there are more than two TF flags (or worse, when the flag can be set to 1, 2, 3 or higher). But doing this causes some tests to exceed that 150-step soft cap. In addition, if I split it up, I'd either need to run the SETUP phase 4 times instead of just once, or, when testing an individual piece, run the setup on its own to debug each phase after a test list had run.
3) Would it help performance if test studio logged each iteration in the result file instead of all at once at the end of the primary test's iteration (but not a data driven sub test)? It seems strange that if I had 20 spreadsheets and 20 tests with 1 row of data that Test Studio could better handle it than 1 Test, 1 Spreadsheet and 20 rows of data. Isn't the end result the same? I think this is a feature request.
4) As per my original question, what are some tips for increasing the number of iterations I can do in one go? Remove comments? Turn off logging? Move the test into its own project file so it copies fewer files over?
And FYI, today I ran 12 batches of 40 on 2 machines while working on adding new regression tests on another machine. Our sales people are very happy.
Does having comment steps affect performance due to logging?
Not significantly enough to worry about. Yes, it takes time to process each comment in the test, but that's only 10-50 milliseconds (depending on the speed of the computer). I'd be more concerned about test result size, which equates to memory use, than about performance.
...with a number of conditional statements that are aimed at deciding which box to click or move from the spreadsheet.
This is precisely the type of test structure we recommend avoiding. We believe it would be better to create separate tests for the different types of boxes you are creating, e.g. a test for item type A, a test for item type B, a test for item type C, and so on. This approach dramatically simplifies the test structure (because it virtually eliminates all conditional statements) and the complexity of the data required to feed it.
Now maybe I am incorrectly guessing what your application does, but I hope you get my point just the same... do everything you can to avoid conditional statements. We run into a lot of cases where each conditional path is actually a different test case that should be tested by a separate test, not in one (overly) complex test.
It also significantly reduces the size of your result sets. By simplifying what each test does, you only get the results that apply, e.g. item type by item type, instead of one large result set for all possible item types.
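To make the refactoring concrete, here's a hypothetical Python sketch (Test Studio tests aren't written this way; the step names are invented purely to model the structure) contrasting a conditional-heavy test with straight-line per-type tests:

```python
# Conditional-heavy style: one test deciding at runtime which path to take.
def create_item_conditional(row):
    if row["type"] == "A":
        steps = ["open_form", "set_a_fields", "save"]
    elif row["type"] == "B":
        steps = ["open_form", "set_b_fields", "save"]
    else:
        steps = ["open_form", "set_default_fields", "save"]
    return steps

# Recommended style: one straight-line test per item type, with no
# branching. A failure in either immediately names the broken feature.
def test_create_item_a():
    return ["open_form", "set_a_fields", "save"]

def test_create_item_b():
    return ["open_form", "set_b_fields", "save"]
```

Each straight-line test also needs a smaller data source, since rows no longer carry columns used only by the other branches.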
Now let me answer your more direct questions:
1) No, 150 is approximate and based on experience. There are many variables (too many to give an exact number) that go into what the actual breaking point is... the structure of the test, the number of comments, the resources of the machine, the version of Windows being used, etc.
2) I'm not totally sure what you mean by "test lists within test lists" because Test Studio does not have direct support for this. Yes we support our version of test lists... and we also support using Test-as-step. Is this what you mean i.e. using these two combined?
"Because of Test Studio's limit of 17 tests" Um, let me ask: where do you get this limit from? It's nowhere in our documentation. It's very common to have 200-300 separate tests in a Test Studio test list. Many of our customers have large test lists like this. Granted, that's a lot of tests to walk through to see which passed and which failed... but we do give you overviews of how many passed and failed.
if I split it up, I'd either need to have the SETUP phase run 4 times instead of just 1 time or if testing an individual piece, would need to run the setup on its own to debug each phase after a test list had run.
Yes, that is true. It is something we testers have to get used to. If you have to run a SETUP phase for each test to succeed, it's something we have to live with and plan for. Oftentimes this means we need more hardware so we can run our test lists in parallel in order to run all of our automated tests in a reasonable amount of time.
3) We already have a feature close to this, though it's only available when running from the command line. It's the command line option "persistOnEachStep".
No, turning on this feature actually hurts performance slightly because it has to write to disk after every step in addition to keeping the results in memory. We still have to keep all the results in memory because after the test is completed we create a .aiiresult file, an XML-formatted file generated from the in-memory Results object.
4) Minimize the number of steps and the conditional logic in each test as much as possible, including removing comments. 35 steps x n iterations is better than 45 steps x n iterations. That's the only way to maximize the number of iterations. It all comes down to making your test results as small as possible. The smaller you can make each iteration's result, the more iterations you'll be able to execute without running into problems.
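Since the per-iteration result size caps how many rows fit in one run, the practical workaround is the batching the poster is already doing. A minimal Python sketch (the 500 rows and 40-row batch size are taken from this thread; everything else is hypothetical):

```python
def batch_rows(rows, batch_size):
    """Split a large data source into fixed-size batches so each run's
    result set stays small enough to hold in memory."""
    return [rows[i:i + batch_size] for i in range(0, len(rows), batch_size)]

rows = list(range(500))          # stand-in for 500 data rows
batches = batch_rows(rows, 40)   # the 40-row batches mentioned below
```

500 rows at 40 per batch gives 13 runs (the last with 20 rows); trimming steps per iteration lets the batch size, and so the row count per run, grow.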
I'm very low on the totem pole here, but I've managed to get access to 3 machines on which to run tests, including my current one. Prior to this question, I had been reorganizing my test lists so that they can run alongside each other without stepping on each other's toes. I'm beginning work on another application for our company, so the running side-by-side bit should work well.
There are tens of thousands of item combinations possible, yet there is only a need to test 4 permutations because that covers one of each field option (no field has more than 4 different options... for now). I have the large, conditional piece set up to create any of the thousands of combinations. I can then use the items created in smaller tests. Without the conditionals, I couldn't create any of the permutations of an item, save one, without rebuilding a new test from a series of subtests.
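The "4 permutations cover one of each field option" idea is what test-design literature calls each-choice coverage. A short Python sketch of it (the fields and options here are made up; only the "at most 4 options per field" constraint comes from this thread):

```python
# Hypothetical item fields; no field has more than 4 options.
fields = {
    "color":   ["red", "blue", "green", "black"],
    "size":    ["S", "M", "L"],
    "taxable": ["yes", "no"],
}

def each_choice_rows(fields):
    """Build the minimal row set in which every option of every field
    appears at least once ("each-choice" coverage). The number of rows
    equals the size of the largest option list, here 4."""
    depth = max(len(opts) for opts in fields.values())
    return [
        {name: opts[i % len(opts)] for name, opts in fields.items()}
        for i in range(depth)
    ]

rows = each_choice_rows(fields)
```

Four rows instead of 4 x 3 x 2 = 24 full combinations; specific troublesome combinations can still be appended to the spreadsheet individually, as described above.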
Unfortunately, the Telerik UI for test steps is significantly slower than changing a spreadsheet.
My initial iteration of this test scenario was the way you described - I had many subtests and put them together on an item-by-item basis based on bug reports / common problems. But the more subtests I added, the longer it took to piece them together, so I made a branching tree of sub-sub-sub tests. Yet the more test-running tests I added, the more the UI slowed down when adding a test as a step. (Having just started a new project, it's refreshing to click "Test as Step" and have it load immediately, as opposed to taking 5-10 seconds.)
I also didn't have the option to build more than one item at a time of different types - a severely limiting feature of a "conditionless" test.
1) Appreciate the feedback
2) That's what I mean. Basically, a test that runs a number of subtests as opposed to running them all separately. The "macro" setup is my concern; the "micro"-level setup is run in each test. Macro setup affects all users of the site, micro only the small piece of the test.
3) Not sure that will help me. I'd rather have a feature that doesn't report back at all, or that writes / appends to a file and then dumps memory so that it can go numerous iterations. Oversimplified, and probably something only a small percentage of users want / need.
4) I decided to test performance using my spare machine. I have 4 tests. Control test, no comments, no optional step, no comments or optional step. It may just come down to picking the right tool for the size of the jobs depending on how different each one is performance wise.
Attached is an image of what I mean by my 17-results limit etc. It's not a huge deal. In fact, I think I read that this was getting updated in the new version of Test Studio.
Thanks again for your help!
...the Telerik UI for test steps is significantly slower...
The slowness of our UI is a problem we're aware of and are actively working on. We've done a major redesign for our upcoming 2013 R1 release. We think you'll like the improvements (once you get your hands on it).
RE: Persisting the results to disk - I understand you'd prefer to have an output that simply continually appends to a log. That would require us to change what we generate for output. We'd have to create a plain-text file as a running log instead of an XML-formatted file that has structure to it. Because we want to generate a .aiiresult file with a clearly defined structure, we have to hold everything in memory throughout the life of the test so that we can properly create the structure at the end. I went ahead and filed a feature request about handling long-running tests such as yours.
RE: The 17 test limit - did you notice the Paging control at the bottom? This "17 limit" is based on the current size of the window. The number of tests listed is automatically adjusted based on the size of the results window. Plus with the paging control you can get to the next set of test results by going to page 2, then page 3, etc.
As for the 17 limit, I posted two screenshots. When I resize the window, the number of tests doesn't update for me. When using maximize, restore, minimize and resizing, the results get wildly inconsistent.
"17 limit.png" which is different than before - Test Studio added a scroll bar in addition to the page controls.
"3 limit.png" - Lots of white space, only 3 results displayed.
Yep, we're aware of this UI glitch. Fortunately the paging still works, and the scrolling works such that you can get at all the results, no matter how many tests are in the results. We believe we have fixed this UI glitch in our next release.

Regards,