I want to do the following:
Due to some restrictions I have to use Telerik Classic.
Objects should be copied between 2 object scopes.
My first try was
which caused: "Object references between two different object scopes are not allowed. "
So I thought, maybe the ObjectContainer can be used.
which also caused: "Object references between two different object scopes are not allowed. "
Is there a feasible way to move objects between ObjectContainers, or some sort of generic copy constructor for Telerik persistent objects? (What I use now is a stream copy, but is there any built-in way?)
By the way: would the newer Telerik connection make it easier?
19 Answers, 1 is accepted
The ObjectContainer approach should work. Can you send me the entire copy code that you have?
Alternatively you can use the new 'Attach' functionality that is added on the OpenAccessContext. Although it is added on the context, you can use it with the scope as well. Some sample code:
All methods that are available on the context are available via the IExtendedObjectScope interface.
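The 'Attach' workflow mentioned above might look like the following sketch. This is an assumption-laden illustration, not the official sample: `MyOpenAccessContext` and the connection IDs are hypothetical names, and it uses the CreateDetachedCopy/AttachCopy pair that is described later in this thread.

```csharp
// Sketch only: MyOpenAccessContext, Order, and the connection IDs are
// hypothetical; verify method signatures against your OpenAccess version.
using System.Linq;
using Telerik.OpenAccess;

using (var source = new MyOpenAccessContext("SourceConnection"))
using (var target = new MyOpenAccessContext("TargetConnection"))
{
    var order = source.GetAll<Order>().First();

    // Detach from the source context first...
    Order detached = source.CreateDetachedCopy(order);

    // ...then attach the detached copy to the target context.
    Order attached = target.AttachCopy(detached);
    target.SaveChanges();
}
```

Since all context methods are also reachable through IExtendedObjectScope, the same pattern should be usable with scopes.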
Do get back in case you need further assistance.
All the best,
the Telerik team
I have a persistent object aObject in mySourceObjectScope.
Calling AttachCopy on it brings the error message: "AttachCopy cannot attach yet instances that are not clean or modified or new."
aState = OpenAccessContext.PersistenceState.GetState(aObject);
which returns: Telerik.OpenAccess.ObjectState.MaskLoaded | Telerik.OpenAccess.ObjectState.MaskManaged | Telerik.OpenAccess.ObjectState.MaskNoMask
At the moment I'm thinking that Telerik makes the things I want to do more complex than just using SQL.
What worked just a bit:
But the drawback is that the Add in the target scope only works with new, unlinked objects. I would also like to update linked objects in the new object scope.
I also tried the code below, but with this I had no new objects in the second database.
So I am still searching for a way to transfer an object as a copy to another context...
If I understand you right, you want to copy over an entire object graph (with possibly modified but not committed objects) from one scope to the other? I will prepare an example for you and get back to you on this.
the Telerik team
Copy an object from one scope to another.
If the object has dependent objects, they are also needed (structural integrity).
The object or its dependent objects can already exist in the second scope.
My thought was:
- if the root object has subobjects, parse each subobject:
  - if a subobject has no subobjects:
    - if it is in the second scope -> update it in scope 2
    - if it is not in the second scope -> add it to scope 2
- if the root object is in the second scope:
  - if subobjects exist -> attach the added subobjects to the new root object -> update the root object in scope 2
- if the root object is not in the second scope:
  - if subobjects exist -> attach the added subobjects to the new root object
  - add the root object to scope 2
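The outline above boils down to a depth-first recursion: copy dependents first, then update-or-add the object itself. A rough sketch, where every type and helper name (IPersistentEntity, GetSubObjects, ExistsIn, UpdateIn, CloneFor) is hypothetical and would depend on your key design:

```csharp
// Hypothetical sketch of the recursive copy outline above; all helper
// names are invented, only IObjectScope.Add is a real OpenAccess call.
void CopyGraph(IPersistentEntity obj, Telerik.OpenAccess.IObjectScope target)
{
    // Depth-first: make sure all dependents exist in the target first,
    // so references can be resolved when the parent is written.
    foreach (var sub in GetSubObjects(obj))
        CopyGraph(sub, target);

    if (ExistsIn(target, obj))
        UpdateIn(target, obj);               // refresh the existing copy
    else
        target.Add(CloneFor(target, obj));   // add a fresh, unlinked copy
}
```

Cycles in the object graph would need a visited-set to avoid infinite recursion; the sketch omits that for brevity.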
As I started to implement the example, a basic question came to my mind. What exactly do you want to achieve by creating a copy of an object graph in scope2? Is it just to avoid reading the objects again from the database? In that case you can use the Level 2 Cache.
Maybe there is already a solution to achieve your goal.
All the best,
the Telerik team
Are you talking about 2 independent processes that use OpenAccess? In that case the scopes are completely independent. It is not clear to me how these 2 applications interact with each other.
You can detach an object from 1 scope and attach it to the other, but you need to consider using fetch plans to define what part of the object graph to detach and attach.
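A fetch plan can limit which part of the graph is detached. The sketch below uses the FetchStrategy/LoadWith API from the newer Telerik Data Access releases; the exact names and the CreateDetachedCopy overload are assumptions to verify against your version, and `MyOpenAccessContext`/`Order` are hypothetical.

```csharp
// Sketch: restricting what gets detached via a fetch strategy.
// Verify FetchStrategy, LoadWith, and the CreateDetachedCopy overload
// against the documentation of your OpenAccess/Data Access version.
using System.Linq;
using Telerik.OpenAccess.FetchOptimization;

var fetch = new FetchStrategy();
fetch.LoadWith<Order>(o => o.OrderDetails);  // include this part of the graph

using (var context = new MyOpenAccessContext())
{
    context.FetchStrategy = fetch;
    var order = context.GetAll<Order>().First();

    // Detach the order together with its OrderDetails, nothing more.
    var detached = context.CreateDetachedCopy(order, fetch);
}
```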
the Telerik team
As all PCs that need to be connected are in the same network, two distinct connections are opened.
Think of this:
We have one application running on PC1.
On PC2, data shall be pushed for production reasons (e.g. it's a steering PC for industrial usage).
Both have a distinct database.
Sometimes production should be shifted from one machine (and therefore one pc) to another.
So the data needs to be transferred in one way or the other.
In an older solution DTOs were used, but the human overhead was too high (sometimes developers didn't change the DTOs when a database change was made, so production data was sometimes missing).
So the idea behind it is: use only one data model and transfer the persistent data directly, with very small overhead for ~180 persistent classes.
At the moment some of this works; a recursive parse is possible, but it does not look nice. The scopes should be independent.
(Manually get an item, a bit of reflection magic to get the persistence information, a bit more to get linked persistent classes and "1:n" relations, and some recursion with checks whether the data is already available in the second scope.)
As I have written before, Attach and CopyFrom did not work (see my older posts).
You can serialize and deserialize the contents of the container. Once you fill the container with the necessary objects you can obtain a change set using the 'container.GetContent()' method. The resulting 'ChangeSet' instance is serializable. You can later deserialize this change set on the target PC and use the 'container.Apply(changeSet)' method to copy over the contents into the new container.
Regarding the 'Attach' approach: I forgot to mention that you first need to detach the instances with a call to 'context.CreateDetachedCopy' and then use this detached copy in a call to 'context.Attach'.
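The container round-trip described above might be sketched as follows. Only GetContent() and Apply() come from this thread; the serialization mechanism (BinaryFormatter here), the ObjectContainer constructor, and how the containers are filled are assumptions.

```csharp
// Sketch of the ChangeSet round-trip between two PCs; serialization
// details are an assumption, only GetContent/Apply are from the thread.
using System.IO;
using System.Runtime.Serialization.Formatters.Binary;
using Telerik.OpenAccess;

// On the source PC: fill a container with the objects to transfer,
// then extract its change set (the ChangeSet instance is serializable).
var container = new ObjectContainer();
// ... add the necessary objects to 'container' ...
var changeSet = container.GetContent();

var formatter = new BinaryFormatter();
byte[] payload;
using (var ms = new MemoryStream())
{
    formatter.Serialize(ms, changeSet);
    payload = ms.ToArray();
}

// On the target PC: deserialize the change set and apply it,
// which copies the contents over into the new container.
using (var ms = new MemoryStream(payload))
{
    var received = formatter.Deserialize(ms);
    var targetContainer = new ObjectContainer();
    targetContainer.Apply(received);
}
```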
Hope this helps. Do get back in case you need further assistance.
the Telerik team
In the meantime we create the objects in the new container anew from the data in the other container.
Every change is now transferred without using OpenAccess's methods; only the objects in the different scopes are altered (e.g. a new object is created, or an object is searched for and altered if a change must be made, etc.).
Thank you for the information. Do get back in case you need further assistance on this issue. Regards,
The Model is loaded and saved within the same HttpRequest, but under two different contexts because I am using a MVVM approach. My main entity is "Application" and the child entity is "Department". Here is the code:
I tried to work around this issue by checking for the state of the Model, but that didn't work either. See http://www.telerik.com/community/forums/orm/general-discussions/exception-attachcopy-cannot-attach-yet-instances-that-are-not-clean-or-modified.aspx#2898239
I think this is due to the fact that the object graph upon detach does include the referenced objects, but only as proxy instances. When you later try to change the relationships between the instances in the graph and attach the graph, some information is missing.
That means you need to detach the whole graph that could be modified, and then the code should work.
However, the results of this code are even less predictable than the results of my previous code without the fetch plan. In some use cases, it would work and in others, it would throw various error messages.
I ended up abandoning short-lived contexts altogether in favor of the approach described in your documentation titled "Handling OpenAccessContext in Web Projects", Option #2 "Using the HttpContext". Now I just use a context that lives throughout the entire request, and everything works great.
I will say that short-lived contexts are Option #3 in your documentation, but they obviously don't work properly in the scenario of modifying only the graph and not actual properties on the object. You may want to get this fixed, simply because other ORMs out there allow you to use short-lived contexts, and from my experience that is a popular way of handling ORM contexts, especially with MVVM and domain-driven design.
Also, I'm still slightly worried about memory leaks or resource locks, and your documentation doesn't say anything about where to call the static Dispose method on the ContextFactory class demonstrated in your documentation. After researching forums of people using this approach with other ORM contexts, I found out that it should be called in the Application_EndRequest method. You may want to include that in your documentation. I have never used this approach in a production environment, so I will just have to wait and see if I encounter any memory leak or resource locking issues. I do like how clean the code is with this approach, though; I never really cared for needing a using statement and a context variable everywhere I needed to access the db.
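For reference, the Application_EndRequest placement could look like this Global.asax.cs sketch. 'ContextFactory' here stands for the helper class from the "Handling OpenAccessContext in Web Projects" article; its exact shape is an assumption.

```csharp
// Global.asax.cs sketch: dispose the per-request context when the
// request ends. ContextFactory is the helper class from the Telerik
// web-projects article; its API here is assumed, not verified.
using System;

public class Global : System.Web.HttpApplication
{
    protected void Application_EndRequest(object sender, EventArgs e)
    {
        // Releases the OpenAccessContext stored for this request,
        // freeing its tracked objects and pooled connection.
        ContextFactory.Dispose();
    }
}
```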
I do have another question, though. Using the HttpContext-scoped OpenAccessContext approach, if I have a web request that is a long-running process, will having the OpenAccessContext live for the lifetime of the HttpRequest cause database locking issues in a web environment? What about if the cache is turned on?
I'm not happy to hear that short-lived contexts failed for you, but your description of the observed errors is a bit too vague to identify the real cause. We would like to know them so that these kinds of issues can be handled better in the future. Maybe the GC ran in the middle and some objects were freed, causing trouble later? That could be fixed by specifying that hard (not weak) references should be used in the context (see http://documentation.telerik.com/openaccess-orm/documentation/developers-guide/openaccess-orm-domain-model/advanced-domain-model-tasks/openaccess-tasks-garbage-collection ).
Now to your question about the cache: a long-lived context is just an in-memory workspace that can hold the state of objects. An object's state in the database can be changed by another client, and the likelihood of such a concurrent change increases the longer the context is held; this is why we usually prefer short-lived contexts. If you push data from the long-lived context to the database server (by either using FlushChanges or a flushing query), then a server-side transaction will be active for as long as there is no final SaveChanges/ClearChanges call. This could cause an issue for other clients, because the server will hold locks on rows/pages/indexes. If you don't flush content to the database server, using a long-lived context will not affect other clients.
I haven't tried your garbage collection configuration, but that may have been the cause of some of my other errors. The other error I saw quite a bit when attempting to use short-lived contexts was: "Object references between two different object scopes are not allowed. The object 'Wilsonart.AppRegistry.Model.ApplicationType' is already managed by 'ObjectScopeImpl 0x8' and was tried to be managed again by 'ObjectScopeImpl 0x7'."
I understand that you have to detach and attach, but even with doing so, there were issues I wasn't able to work around.
In any case, I really like the way the code works with the request-length context. The code is much cleaner and more expressive, and I am able to navigate the model easily and keep my reference IDs private to the model assembly.
I appreciate your answer to my concerns about database locking. With a long-running context, will the context also keep the connection to the database open, or does it only keep database connections open long enough to push changes to the db? Does it use a different connection for each thread? I ask because of issues I have seen in the past with thread pools keeping too many connections open, which caused problems with the SQL database server.
Connections are only held for the time they are used. They are used when a query is issued and the result needs to be fetched, that is when:
(1) you navigate from an object that is already loaded to one that is lazily fetched (think OrderDetails->Product navigational access or Order->OrderDetails lazy collection resolution)
(2) you issue a LINQ/OQL/SQL query
(3) when changes are pushed to the database server (during SaveChanges or FlushChanges).
Again, the connection is held by the context (or object scope) only for the duration of the pending activity; once you finish reading a query result, for example, the connection is given back to the connection pool.
That means it is a good idea to read quickly through the results so that the connection can be freed quickly too. Don't intermingle long-running business activities with the enumeration of query results.
The bigger concern however is the use of FlushChanges, which will need to hold ('pin') the connection it uses until the final SaveChanges/ClearChanges happens. If insert/update/delete statements were issued via FlushChanges, the connection must be held open (for the visible-but-not-yet-committed state), and this can potentially be for a long time. Keep an eye on that, and try to minimize the usages of FlushChanges.
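Both points can be illustrated with a small sketch: materialize results promptly so the connection returns to the pool, and avoid FlushChanges so no connection stays pinned. `MyOpenAccessContext`, `Order`, and `DoSlowBusinessWork` are hypothetical names.

```csharp
// Sketch: free the connection quickly and avoid pinning it.
// MyOpenAccessContext, Order, and DoSlowBusinessWork are hypothetical.
using System.Linq;

using (var context = new MyOpenAccessContext())
{
    // Materialize the result set immediately; the connection goes back
    // to the pool as soon as enumeration finishes.
    var orders = context.GetAll<Order>()
                        .Where(o => o.Total > 100)
                        .ToList();

    // Long-running work happens while no connection is held.
    foreach (var order in orders)
        DoSlowBusinessWork(order);

    // A single SaveChanges at the end, instead of intermediate
    // FlushChanges calls that would pin the connection until
    // SaveChanges/ClearChanges.
    context.SaveChanges();
}
```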
Thank you very much for your reply. It works as I would hope it would so I will keep an eye out for the items you mentioned, but I feel more confident in using the ORM in a production environment now.