Does having comment steps affect performance due to logging?
Not significantly enough to worry about. Yes, it takes time to process each comment in the test, but that's only 10-50 milliseconds (depending on the speed of the computer). I'd be more concerned about test result size, which equates to memory use, than about performance.
...with a number of conditional statements that are aimed at deciding which box to click or move from the spreadsheet.
This is precisely the type of test structure we recommend avoiding. We believe it would be better to create separate tests for the different types of boxes you are creating, e.g. a test for item type A, a test for item type B, a test for item type C, and so on. This approach dramatically simplifies the test structure (because it virtually eliminates all conditional statements) and the complexity of the data required to feed it.
Now maybe I'm incorrectly guessing what your application does, but I hope you get my point just the same... do everything you can to avoid conditional statements. We run into a lot of cases where each conditional path is actually a different test case that should be tested by a separate test, not in one (overly) complex test.
It also significantly reduces the size of your result sets. By simplifying what each test does, you only get the results that apply, e.g. item type by item type, instead of one large result set for all possible item types.
Now let me answer your more direct questions:
1) No, 150 is approximate and based on experience. There are many variables (too many to give an exact number) that go into the actual breaking point: the structure of the test, the number of comments, the resources of the machine, the version of Windows being used, etc.
2) I'm not totally sure what you mean by "test lists within test lists" because Test Studio does not have direct support for this. Yes, we support our version of test lists... and we also support using Test-as-step. Is this what you mean, i.e. using these two combined?
"Because of test studio's limit of 17 tests" Um, let me ask: where did you get this limit from? It's nowhere in our documentation. It's very common to have 200-300 separate tests in a Test Studio test list, and many of our customers have test lists that large. Granted, that's a lot of tests to walk through to see which passed and which failed... but we do give you overviews of how many passed and failed.
if I split it up, I'd either need to have the SETUP phase run 4 times instead of just 1 time or if testing an individual piece, would need to run the setup on its own to debug each phase after a test list had run.
Yes, that is true. It is something we testers have to get used to. If you have to run a SETUP phase for each test to succeed, it's something we have to live with and plan for. Oftentimes this means we need more hardware so we can run our test lists in parallel and get through all of our automated tests in a reasonable amount of time.
3) We already have a feature close to this, though it's only available when running from the command line. It's the command line option "persistOnEachStep".
No. Turning on this feature actually hurts performance slightly, because it has to write to disk after every step in addition to keeping the results in memory. We still have to keep all the results in memory because, after the test completes, we create a .aiiresult file, which is an XML-formatted file generated from the in-memory Results object.
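A toy model of that trade-off, sketched in Python under stated assumptions: this is not Test Studio's internals, just an illustration of why per-step persistence adds I/O time without saving memory, since the in-memory result list must survive until the end either way.

```python
# Hypothetical sketch, not Test Studio internals: results accumulate in
# memory regardless; persisting each step only adds a disk write per step.
import json
import os
import tempfile

def run_test(steps, persist_each_step=False):
    results = []  # kept in memory no matter what, for the final result file
    path = os.path.join(tempfile.mkdtemp(), "partial.log")
    for i, step in enumerate(steps):
        results.append({"step": i, "name": step, "status": "pass"})
        if persist_each_step:
            # Extra I/O on every single step -- the performance cost.
            with open(path, "a") as f:
                f.write(json.dumps(results[-1]) + "\n")
    return results, path
```

With `persist_each_step=True` you pay one file append per step, but `results` still grows to the full test size in memory, mirroring the behavior described above.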
4) Minimize the number of steps and the conditional logic in each test as much as possible, for example by removing comments. 35 steps x n iterations is better than 45 steps x n iterations. That's the only way to maximize the number of iterations. It all comes down to making your test results as small as possible: the smaller you can make each iteration's result, the more iterations you'll be able to execute without running into problems.
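As a rough back-of-envelope, the relationship is simple arithmetic. The bytes-per-step figure below is an assumed placeholder (Test Studio does not publish one); the point is only that fewer steps per iteration buys proportionally more iterations within the same result-memory budget.

```python
# Back-of-envelope estimate; BYTES_PER_STEP_RESULT is a made-up average,
# not a documented Test Studio number.
BYTES_PER_STEP_RESULT = 2048

def max_iterations(steps_per_iteration, memory_budget_bytes):
    """How many data-driven iterations fit in a given result-memory budget."""
    per_iteration = steps_per_iteration * BYTES_PER_STEP_RESULT
    return memory_budget_bytes // per_iteration

budget = 256 * 1024 * 1024  # e.g. 256 MB set aside for results

print(max_iterations(45, budget))  # 45-step test
print(max_iterations(35, budget))  # same test after trimming 10 steps
```

Cutting a 45-step test to 35 steps raises the iteration ceiling by the same 45/35 ratio, which is the "smaller iteration, more iterations" point above.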