
Stored procedure usage questions

Vitaliy asked on 06 Feb 2009, 05:17 PM

I should say that we are still working on the Vanatec 4.4.0.48. I know you promised to renew SQL 2000 support in April; I hope we will upgrade our SQL servers before then. Anyhow, I'm interested in the issues described below.

 

Do these issues still exist in the latest version? And if they do, maybe you will be kind enough to amend the ORM behavior in the next release - that would push us towards the SQL upgrade very strongly. :-)

 

Okay, some words as a preface… Our team is working on a property management system. Every property could be a home, an estate, or any other building. The system is in its maintenance phase, and I'm keen to refactor it to TOA completely.

Probably about 7% of the system's business transactions look like batch processing. The other 93% are not, and for those we're glad to use TOA. :-)

 

Well, the Automatic Demands Generator module is an example of batch processing.

We need to generate a lot of periodic demands to the tenants of our properties. The process needs a lot of data: the budget, previously generated demands, etc. During the refactoring away from stored procedures I tried to use OQL. You know, it is not a very complex query language, and it forced me to load half of the DB into my scope. :-) Lazy loading works extremely slowly.

So, I found that I should use a stored procedure.

The idea was that a new stored procedure would prepare demands for the most data-heavy processing, because I wasn't going to move the whole Demand class logic into the stored procedure. So the procedure returned a list of partially filled Demand objects with fake IDs, and I intended to generate real demands from those templates, as in the Prototype pattern. Unfortunately, Vanatec 4.4.0.48 didn't work with it very well. From time to time it tried to re-fetch a Demand without any visible reason. One time the Generate button worked just fine; the next time we got something like this:

#1 25/12/2005..24/03/2006 SC [Amount/Paid: £0.01/£0.00] OUTSTANDING

#2 01/02/2002..28/02/2002 AST Monthly Demand [Amount/Paid: £575.00/£575.00] PAID

#3 01/02/2002..28/02/2002 AST Monthly Demand [Amount/Paid: £575.00/£575.00] PAID

#4 02/02/2002..01/03/2002 AST Monthly Demand [Amount/Paid: £475.00/£475.00] PAID

#5 26/01/2002..25/02/2002 AST Monthly Demand [Amount/Paid: £495.00/£495.00] PAID

#6 28/01/2002..27/02/2002 AST Monthly Demand [Amount/Paid: £680.00/£680.00] PAID

#7 02/02/2002..01/03/2002 AST Monthly Demand [Amount/Paid: £730.00/£730.00] PAID

#8 01/02/2002..28/02/2002 AST Monthly Demand [Amount/Paid: £460.00/£460.00] PAID

#9 04/02/2002..03/03/2002 AST Monthly Demand [Amount/Paid: £680.00/£680.00] PAID

#10 02/02/2002..01/03/2002 AST Monthly Demand [Amount/Paid: £680.00/£680.00] PAID

#11 01/02/2002..28/02/2002 AST Monthly Demand [Amount/Paid: £500.00/£500.00] PAID

OpenAccess.Exceptions.NoSuchObjectException: No row for DataDomain.Accounting.PMSDemand ('RECEIVABLES') GenericOID@2e008e7 PMSDemand DEMANDID=12
   at OpenAccess.SPI.Backends.ThrowException(Exception e)
   at OpenAccess.RT.ExceptionWrapper.Throw()
   at com.versant.core.jdo.VersantPersistenceManagerImp.handleException(Throwable x)
   at com.versant.core.jdo.VersantPersistenceManagerImp.getObjectById(Object oid, Boolean validate)
   at com.versant.core.jdo.UnsynchronizedPMProxy.getObjectById(Object o, Boolean b)
   at com.versant.core.jdo.QueryResultBase.resolveRow(Object row, VersantPMProxy pm)
   at com.versant.core.jdo.ForwardQueryResult.getNextData(Int32 windowIndex)
   at com.versant.core.jdo.ForwardQueryResult.internalGet(Int32 requestedIndex, Int32 maxAvailableIndex)
   at com.versant.core.jdo.ForwardQueryResult.get(Int32 index)
   at OpenAccess.RT.ListEnumerator.setCurrent(Int32 _pos)
   at OpenAccess.RT.ListEnumerator.Move(Int32 relative)
   at OpenAccess.RT.ListEnumerator.MoveNext()
   at OpenAccess.RT.TypedEnumerator`1.MoveNext()
   at DataDomain.DataAccessorEngine.Demands_GetAutomaticDemandTemplatesNEW(IList`1 leaseTypes, Boolean scGenerateDemands, DateTime date) in DataAccessor.cs: line 556
   at DataDomain.DataAccessor.Demands.GetAutomaticDemandTemplatesNEW(IList`1 leaseTypes, Boolean scGenerateDemands, DateTime date) in DataAccessor.cs: line 238


The next time it was demand #34, and so on. That is because there really are no demands 12, 34, or 1..11 in the DB. I know you fully reprogrammed the engine; however, could you please confirm that this approach will work in the current version?
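For clarity, the intended Prototype usage was roughly this sketch (CopyFrom is a hypothetical helper; templatesFromProcedure is the list the SP returned; scope.Add() assumed as the usual way to persist a new instance):

// Turn the fake-ID templates into real persistent demands.
foreach (Demand template in templatesFromProcedure)
{
    Demand demand = new Demand();   // a new, real persistent instance
    demand.CopyFrom(template);      // copy the fields the SP prepared
    scope.Add(demand);              // persist it; the fake-ID template is discarded
}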

 

Another idea is to use a transfer object.

Look, the Demand objects that the stored procedure returned were only partially filled; the data domain logic is responsible for finishing them. So, fortunately, I had some spare fields to pass messages back from the stored procedure, such as an exception message. However, it looks like a poor idea. (And it generally didn't work anyway, as I explained above.)

So I used the second approach: I got scalars and then built a transient object to use in the code. But I thought… could TOA help me with it?
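For illustration, the scalar approach looked roughly like this in plain ADO.NET (the DemandTemplate class and the column names here are made up, and the procedure's other parameters are omitted):

using System.Collections.Generic;
using System.Data;
using System.Data.SqlClient;

// A transient transfer object - deliberately not a persistent class.
class DemandTemplate
{
    public int TenantKey;
    public decimal Amount;
}

static List<DemandTemplate> LoadTemplates(SqlConnection connection)
{
    var templates = new List<DemandTemplate>();
    using (var command = new SqlCommand("ADG_PrepareDueDemands", connection))
    {
        command.CommandType = CommandType.StoredProcedure;
        command.Parameters.AddWithValue("@ResultType", 0); // 0 = demands
        using (SqlDataReader reader = command.ExecuteReader())
        {
            while (reader.Read())
            {
                // Build the transient object from plain scalars.
                templates.Add(new DemandTemplate
                {
                    TenantKey = (int)reader["TKEY"],
                    Amount = (decimal)reader["AMOUNT"]
                });
            }
        }
    }
    return templates;
}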

Could you please explain: is it possible to define a persistent class with no corresponding table, so that a stored procedure could return instances of it?

Moreover, we could continue with the transfer objects…

What if we had an interface to send a transfer object *to* a stored procedure? The engine could match the stored procedure parameters to the object fields: @CustomerID <=> int customerID, or even @CustomerID <=> OurCustomerClass customer. It would be just fine if the engine did not require every 'persistent' class field to be a procedure parameter; the stored procedure might need just a single ID field. Actually, it could look like what the built-in MS DataAdapter does with a DataTable.

This way we could get a transient object directly from an SP and put it into another SP… Well, I'm not sure that is much required, but hey, we would be happy with it. :-)
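Something like this sketch, in plain ADO.NET terms (purely illustrative - no such TOA interface exists; SqlCommandBuilder.DeriveParameters is the existing ADO.NET way to ask the server what the procedure declares):

using System;
using System.Data;
using System.Data.SqlClient;
using System.Reflection;

// Fill only the parameters the procedure actually declares, matching them
// to object properties by name: @CustomerID <=> CustomerID.
// The command must be a CommandType.StoredProcedure with an open connection.
static void BindParameters(SqlCommand command, object transferObject)
{
    SqlCommandBuilder.DeriveParameters(command);
    foreach (SqlParameter parameter in command.Parameters)
    {
        string name = parameter.ParameterName.TrimStart('@');
        PropertyInfo property = transferObject.GetType().GetProperty(name);
        if (property != null) // unmatched parameters (e.g. RETURN_VALUE) are skipped
            parameter.Value = property.GetValue(transferObject, null) ?? DBNull.Value;
    }
}

This way the engine would never demand that every 'persistent' field be a parameter - only the names that match are bound.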

 

Okay…

Two more questions.

I used a stored procedure that returned a lot of scalars, i.e. plain records, to build domain objects from. During the building, TOA lazily loaded the referenced objects and, as I wrote above, it took an extremely long time. I wasn't able to use any fetch groups, of course. So I used another approach:

Create procedure ADG_PrepareDueDemands
    …
    @ResultType int
as
    --<SOME CODE TO BUILD THE DEMANDS - #Demands>

    declare @RTDemands int, @RTTenants int, @RTUnits int, @RTProperties int, @RTPreviousDemands int, @RTBankAccounts int
    select  @RTDemands = 0, @RTTenants = 1, @RTUnits = 2, @RTProperties = 3, @RTPreviousDemands = 4, @RTBankAccounts = 5

    if @ResultType = @RTDemands
    begin
        select *
        from #Demands

        return
    end

    if @ResultType = @RTTenants
    begin
        select *
        from TENANT
        where TKEY in (select distinct TKEY from #Demands)

        return
    end
    …

    raiserror('Unknown result type', 16, 1)
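On the client, each pass is then just one more call with a different @ResultType; in plain ADO.NET terms it is roughly this (the procedure's other parameters are elided, as above):

for (int resultType = 0; resultType <= 5; resultType++)
{
    using (var command = new SqlCommand("ADG_PrepareDueDemands", connection))
    {
        command.CommandType = CommandType.StoredProcedure;
        command.Parameters.AddWithValue("@ResultType", resultType);
        using (SqlDataReader reader = command.ExecuteReader())
        {
            while (reader.Read())
            {
                // materialize one object of the current result type
            }
        }
    }
}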
 

And so the code executes the stored procedure six times to load all of these diverse objects into the scope. That works rather quickly. But it could work much quicker if you could resolve two problems:

1) The T-SQL <SOME CODE TO BUILD THE DEMANDS - #Demands> runs six times, and it is rather heavy code. It would be good to run it once and keep the result in a temporary table accessible to the other five passes. That would be easy to do in plain T-SQL, but Vanatec executes my SQL inside a BEGIN TRANSACTION … ROLLBACK bracket. Another idea could be to get a list of object lists from the stored procedure at once, e.g. ArrayList{IList<Demands>, IList<Tenants>, …} - see the sketch after this list.

2) Unfortunately, my Vanatec version wasn't able to return any object with a multi-field identity (composite primary key). It looks like a bug, and I'm not sure whether you have fixed it or not. Could you please check this out?
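And the 'list of object lists' idea from (1), sketched in plain ADO.NET: a single procedure that SELECTs all six sets in sequence could be read in one round trip, so the heavy #Demands computation would run only once (the _All procedure name is made up):

using (var command = new SqlCommand("ADG_PrepareDueDemands_All", connection))
{
    command.CommandType = CommandType.StoredProcedure;
    using (SqlDataReader reader = command.ExecuteReader())
    {
        int resultSet = 0; // 0 = demands, 1 = tenants, 2 = units, ...
        do
        {
            while (reader.Read())
            {
                // fill the list that corresponds to resultSet
            }
            resultSet++;
        } while (reader.NextResult()); // advance to the next SELECT's rows
    }
}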

 

I hope I will be able to check the latest version soon.

Many thanks.

Thomas (Telerik team) answered on 09 Feb 2009, 03:10 PM
Hello Vitaliy,

that is a long question!
First of all: using a fetch plan, it should be possible to optimize the load time of an object graph from the database. Using direct SQL can be faster when the data needs to be processed in the database server directly. It always depends on what you need to do with the data. Sometimes it is required in the client, sometimes not. Sometimes it is mapped to a field of a persistent class, sometimes it is an aggregate without a name.

I assume now that you have a table named Demand, and that there exists at least one SP that is mapped to return Demand instances. However, if you use fake IDs, then accessing the unloaded fields of a Demand attempts to fetch the missing pieces from the database. And that fails, right?
The changes of the 'rewrite' will not affect this behavior.

It is currently not possible to have 'transient' persistent types that correspond to no table. And if we had them, how would the columns from the result set of an SP be mapped to the fields? What is possible is to have a class that is associated with a table, uses no key generator, and does not store instances. You could return a result set that includes the key columns (and all others as well) and let OpenAccess do the transfer of the data into memory. Make sure that there is no lazy loading, as otherwise the associated table will be accessed and that will then fail.

Using such an object directly to transfer data between several invocations of SPs is currently not possible, as all parameters have to be declared in advance. But in computer science, everything is solvable with one indirection: you could give your objects a method that describes the parameters, and use that method to generate the necessary code for the SQL SP declaration and invocation.
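For illustration, such an indirection could look like this (the names are only an example; this is not an OpenAccess API):

using System.Data;
using System.Data.SqlClient;

// Each transfer object describes its own stored procedure parameters.
interface ISpParameterSource
{
    SqlParameter[] DescribeParameters();
}

class CustomerTransfer : ISpParameterSource
{
    public int CustomerID;

    public SqlParameter[] DescribeParameters()
    {
        return new[] { new SqlParameter("@CustomerID", CustomerID) };
    }
}

// A generic invocation helper built on that description.
static void Invoke(SqlConnection connection, string procedureName, ISpParameterSource source)
{
    using (var command = new SqlCommand(procedureName, connection))
    {
        command.CommandType = CommandType.StoredProcedure;
        command.Parameters.AddRange(source.DescribeParameters());
        command.ExecuteNonQuery();
    }
}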

As for returning polymorphic result sets in order to avoid the sixfold heavy computation: that is certainly doable if you include a discriminator column, so that you are able to interpret the returned result in the client.

I will check if using multi-field identity is causing problems and will post my results here.
Regards,
Thomas
the Telerik team

Vitaliy answered on 16 Feb 2009, 09:51 PM

Hi Thomas


Where is my answer? :-)

First of all: using a fetch plan, it should be possible to optimize the load time of an object graph from the database. Using direct SQL can be faster when the data needs to be processed in the database server directly. It always depends on what you need to do with the data. Sometimes it is required in the client, sometimes not. Sometimes it is mapped to a field of a persistent class, sometimes it is an aggregate without a name.
Sure, but the problem was how to specify that object graph! Unfortunately, the fetch plan could only do it for all the referenced objects: 'this field references that object of that class, and therefore it should be loaded together with this instance'. The graph was too huge and contained a lot of useless data that no code used. I needed some kind of a 'conditional fetch plan' that could load referenced instances by a condition.

Okay, maybe even without that fetch plan I could collect an array of the needed referenced objects and pass them in an OQL query: SELECT * FROM SomeReferencedClassExtent WHERE id = 12 OR id = 25 OR … Then do something like:

using (IMovableEnumerator enumerator = MyOQLQuery.GetEnumerator())
{
    // touch every result so each instance is loaded into the scope
    while (enumerator.MoveNext()) { }
}

(BTW: will this method guarantee that I will have all the objects in the scope for ages? Is there any simpler method?)
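Building that OR chain from the collected IDs would be something like this (SomeReferencedClassExtent as in the query above):

using System.Collections.Generic;
using System.Text;

// Build "SELECT * FROM SomeReferencedClassExtent WHERE id = 12 OR id = 25 OR ..."
static string BuildPreloadOql(IEnumerable<int> ids)
{
    var oql = new StringBuilder("SELECT * FROM SomeReferencedClassExtent WHERE ");
    bool first = true;
    foreach (int id in ids)
    {
        if (!first)
            oql.Append(" OR ");
        oql.Append("id = ").Append(id);
        first = false;
    }
    return oql.ToString();
}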

 

I could get all the referenced objects into the scope. Well, maybe a thousand ORs would work just fine there.
But then I found that I was unable to specify even the start objects of the graph narrowly enough. I.e. OQL wasn't able to return data selectively enough, and I would need to load a huge table and start to play with it, as I described above.


Well, sometimes we need to run an SP.

Fortunately not too often - in about 7% of cases.

And, fortunately, OpenAccess has a method to run it. :-)

I assume now that you have a table named Demand, and that there exists at least one SP that is mapped to return Demand instances. However, if you use fake IDs, then accessing the unloaded fields of a Demand attempts to fetch the missing pieces from the database. And that fails, right?
Not quite. As far as I understand, it didn't ask for 'missing pieces', i.e. there was no lazy loading. It simply tried to reload the object for no clear reason. One time on row N, the next time on row M, depending on the module loading order - but always on the same row in an NUnit test run. Probably it depends on the internal OA cache free space or something similar.

Could you please check how this works?

(Though, db-key-generator ~ AUTOINC, and there were a lot of real instances in the table. )

The changes of the 'rewrite' will not affect this behavior.
Pardon? I don’t understand. A life cycle event?

It is currently not possible to have 'transient' persistent types that correspond to no table. And if we had them, how would the columns from the result set of an SP be mapped to the fields?

It could work the way ADO.NET does it for a SELECT command used as an SP: result set field name -> object field name.


 

Using such an object directly to transfer data between several invocations of SPs is currently not possible, as all parameters have to be declared in advance. But in computer science, everything is solvable with one indirection: you could give your objects a method that describes the parameters, and use that method to generate the necessary code for the SQL SP declaration and invocation.

Sure, I could do it manually and I know assembler as well. :-)

Seriously, the feature might not be much required. At least I don't need it right now.

I just remembered the ADO.NET SP parameter <-> table field mapping and thought it could be a good idea here too.


public SqlParameter(
    string parameterName,
    SqlDbType dbType,
    int size,
    string sourceColumn
)

 

If you call DataAdapter.Update(myDataTable), it will take the row's sourceColumn field value and pass it to the SP.

Well, it was rather comfortable to use.
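E.g. roughly like this (the procedure and column names are invented for the example; myDataTable as above):

using System.Data;
using System.Data.SqlClient;

var adapter = new SqlDataAdapter();
adapter.UpdateCommand = new SqlCommand("UpdateCustomer", connection);
adapter.UpdateCommand.CommandType = CommandType.StoredProcedure;
// sourceColumn ("CustomerID") names the DataTable column whose value
// is taken from each changed row and passed to @CustomerID.
adapter.UpdateCommand.Parameters.Add(
    new SqlParameter("@CustomerID", SqlDbType.Int, 4, "CustomerID"));
adapter.UpdateCommand.Parameters.Add(
    new SqlParameter("@Name", SqlDbType.NVarChar, 50, "Name"));

adapter.Update(myDataTable); // ADO.NET does the row -> parameter mapping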


But I definitely need three things:

1. Handling of several tables received from an MS SQL Server stored procedure at once, returned in a structured container - with no 'manual' digging through a returned array, extracting persistent objects piece by piece.

2. Getting a fake persistent class object without problems.

3. Transient 'persistent' objects, described with OA, that I could get from an SP.

Passing objects back to an SP just looks logical, but maybe nobody needs it.

 

Point (1) will work with MS SQL only; as far as I remember, other servers cannot return several tables from one SP. But there is a 1:1 scope <-> SQL server relation, so I think you could implement the interfaces to different SQL servers differently.

I would be glad to type something like:

ACollectionDescribedForOpenAccess returnedContainer =
    myScope.ExecMSSQLStoredProcedure("MyLovellyStoredProcedure", param1, "param2", GetSomeDate())
           .Get<ACollectionDescribedWithTelerikOpenAccess>();
foreach (MyPersistentObject foo in (IList<MyPersistentObject>)returnedContainer[3])
    Console.WriteLine("That is {0}! That works! That really works! I love TOA!", foo);

 

Or, well, Oracle allows implementing Java value objects. Sure, we cannot use those.

But why can't we operate with MS SQL 2005 C# value objects? I guess most TOA users use SQL Server.

And regarding usability... I've asked for an interface to an SP. There is specific T-SQL code behind that interface; it cannot be universal. So why can't I use an interface adapted to my SQL server?

 

As for returning polymorphic result sets in order to avoid the sixfold heavy computation: that is certainly doable if you include a discriminator column, so that you are able to interpret the returned result in the client.

 

Could you please explain the trick in detail?

Do you mean the flat class-hierarchy storage? Oh, but I mean aggregation links to very different objects; I cannot move them into one hierarchy.

Vitaliy answered on 19 Feb 2009, 07:42 AM
Hi,
I believe you're still working on this.

One more problem.
I've loaded a big list of persistent PMSTenant objects with a stored procedure.
I found that OpenAccess reloaded about 100 of the 1000 pre-loaded tenants.
I tracked it down with IInstanceCallbacks.PostLoad():

DataDomain.dll!DataDomain.Property.PMSTenant.PostLoad() Line 143
[External Code]
DataDomain.dll!DataDomain.Property.PMSTenant.Current.get() Line 42 + 0x9 bytes
DataDomain.dll!DataDomain.Accounting.PMSDemand.CheckDemandValues(DataDomain.Accounting.DemandChargeType chargeType = {Reserve Fund}, DataDomain.Property.PMSTenant tenant = {Magdalen Mews, Office 2 Car Park & Stor, Eatbigfish Limited}, System.DateTime dueDate = {01/10/2008 00:00:00}, System.DateTime fromDate = {01/10/2008 00:00:00}, System.DateTime toDate = {31/12/2008 00:00:00}, decimal amount = 600, string description = null, DataDomain.Property.PMSProperty property = {Magdalen Mews}) Line 104 + 0xa bytes
DataDomain.dll!DataDomain.Accounting.PMSDemand.GetGenerationErrorMessage(DataDomain.Accounting.DemandTemplate tmpl = {DataDomain.Accounting.ADGDemandTemplate}) Line 120 + 0xc0 bytes
DBDesktop.exe!DBDesktop.Accounting.Demands.PreparedDemandWrapper.GetGenerationProblemMessage(DataDomain.Accounting.DemandTemplate pTemplate = {DataDomain.Accounting.ADGDemandTemplate}) Line 19 + 0x8 bytes


Where Current is:

[FieldAlias("current")]
public bool Current
{
    get { return current; }
    set { current = value; }
}

A boolean property over a boolean persistent field, which of course should already be in memory!
I cannot find any real reason for this behaviour in my code.
And one more thing: if I preload, say, 50 tenants, then OpenAccess will not reload my tenants.
I.e. I got this behaviour on a production database, and my users will need to wait a minute longer. :-(

I suspect a fixed-size buffer or a boundary condition in the black box named Telerik OpenAccess.

Guys, I thought maybe it is not comfortable to deal with such a long post, and the forum engine doesn't look too usable.
Maybe I'd better start a number of separate threads? Or tickets?
It could take several hours, so please confirm.

Or maybe some kind of special support is required?
Gold, Platinum, whatever we could get?
Or should we order an online discussion?
:-)

Thanks
Thomas (Telerik team) answered on 19 Feb 2009, 09:10 AM
Hello Vitaliy,

this looks like the garbage collector is playing a role in this case. What OpenAccess does is reference all loaded persistent instances weakly, so that the GC can remove them once they are no longer used. This does not apply to modified/deleted/inserted instances, only to data that was read. I guess you are iterating over the results of your query, and during the iteration the GC runs and throws away things which have not been used by the application yet... and then reloading can happen.

Currently you can avoid that only by using strong references instead of weak references; the drawback is that the scope will never lose its content and will grow in size, so you will need to dispose the scope and create a new one from time to time. You can enable that behavior with the backend configuration setting

<pmCacheRefType>STRONG</pmCacheRefType>
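The underlying effect is the same as with a plain WeakReference; a standalone illustration:

using System;

class WeakCacheDemo
{
    static void Main()
    {
        // Read-only data is held like this: only weakly referenced.
        WeakReference cached = new WeakReference(new object());

        GC.Collect(); // a collection in the middle of your iteration...

        Console.WriteLine(cached.IsAlive
            ? "instance survived - no reload necessary"
            : "instance collected - it must be reloaded from the database");
    }
}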

As for the previous questions:

As you correctly stated, the FetchPlan is there to optimize the loading of object graphs that belong together. If you need to express dynamic fetching of data, you can either plug new fetch group names into the FetchPlan dynamically, based on the expected shape of the returned object graph, or - more likely in your case - specify the dynamic filter conditions in the where clause of your query. In the end, that's what queries are for!

I've got your wish for a more structured approach to data returned by a stored procedure. We will need to think about that.

The discriminator 'trick' is that every row returned carries an indicator value that allows the client to interpret the data in the correct way. This value can be appended inside a stored procedure, and OpenAccess uses this technique when data is fetched from the database for a class hierarchy.

Sincerely yours,
Thomas
the Telerik team

Vitaliy answered on 19 Feb 2009, 12:38 PM
Hi Thomas

Yes, I thought about weak references.
Actually, I put every tenant into a collection immediately after getting it from the query execution list.
<pmCacheRefType>STRONG</pmCacheRefType> didn't help me. :-(
Well, we are still on a different OA version.
But maybe you could make a test project to reproduce the issue?
And to reproduce the multi-field PK issue?


FetchPlan.
Well, it is a good thing. A perfect one. No, really, I like it.
But you can probably imagine a situation where it doesn't work well.
Look, I use 4 heavy temporary tables in the stored procedure. It is OK to have a 50MB temporary table in a stored procedure body.
But your idea is to get them onto the client.
We already use a Remote Desktop Connection to work with the system from the farthest office.
A web service + the disconnected API - no, thanks.
Duplicated logic in the domain object and in the stored procedure - thanks, no.
I found that the best idea would be to put only the selection logic into the SP: let the SP just prepare the graph of objects that I really need on the client, and let the business logic deal with them.
I find that a good architectural compromise.
But unfortunately it doesn't work well, as I described.

Can I have an example of the discriminator column, please?
I still don't understand how to deal with it.

Many thanks.
Thomas (Telerik team) answered on 20 Feb 2009, 04:56 PM
Hello Vitaliy,

I've checked, and at least the newest version of OpenAccess does not have a problem mapping multi-field identities returned from a stored procedure. The only rule that needs to be obeyed is that the column names must remain the original names, as this is the only way for us to know which columns hold the primary key.

As for the STRONG references, please check that your backend configuration looks similar to

      <backendconfiguration id="mssqlConfiguration" backend="mssql">
        <mappingname>mssqlMapping</mappingname>
        <pmCacheRefType>STRONG</pmCacheRefType>
      </backendconfiguration>

I've simply made a loop over the query result and called System.GC.Collect() on each round. With the STRONG references, the application behaved much better, as fewer queries were generated.
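In outline (the names are just an example; tenants stands for any query result):

foreach (PMSTenant tenant in tenants)
{
    System.GC.Collect(); // force a collection on every round
    // with weak references this provokes reloads;
    // with STRONG references the already-loaded instances survive
}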

When your stored procedure's results return an additional column where, for example, a value of 1 means class X and a value of 2 means class Y, that is called a discriminator column, as it allows the client to interpret the fields and to discriminate between class X and class Y data. In the case of OpenAccess there is a column voa_class (present by default when required) that holds such values. Of course, the values must be known in the client beforehand to build up the mapping 1->X, 2->Y; the necessary information is held in the app.config file.
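On the client, interpreting such a result set then boils down to a switch on the discriminator; a rough sketch (the 1/2 values and the classes are just examples):

using (SqlDataReader reader = command.ExecuteReader())
{
    int discriminator = reader.GetOrdinal("voa_class");
    while (reader.Read())
    {
        switch (reader.GetInt32(discriminator))
        {
            case 1: /* interpret the row as class X data */ break;
            case 2: /* interpret the row as class Y data */ break;
        }
    }
}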

Have a nice weekend!

Sincerely yours,
Thomas
the Telerik team
