Hi,
Following on from the thread concerning using one ObjectContainer on a disconnected client, I am now getting an error when using the same container to add a new object and then update another. The second update ChangeSet causes a DuplicateKeyException when committing a scope transaction, because it tries to insert a duplicate of the new object that was added in the first ChangeSet from the container. Here is the sequence of events (the client communicates via WCF):
1. Get a container from server to client
2. Get reference data from server as a ChangeSet and apply to client container
3. Create a new instance of a persisted object within client container transaction, attach reference objects from container
4. Send the ChangeSet to the server, apply it, copy to a scope and commit; receive a ChangeSet via GetContent() from the server container and apply it to the client container.
5. Retrieve another pre-existing object from the server via a GetContent() ChangeSet, produced by a fresh container on the server and applied to the client container. Evict the original new object.
6. Edit the pre-existing object within a client container transaction.
7. Send changes to the server via GetChanges(); the subsequent scope transaction Commit operation (after a CopyTo) fails with the DuplicateKeyException, with the data indicating it is trying to insert the object from the first phase.
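For clarity, here is roughly what my client-side code does across those steps. This is only a sketch: the service operations (GetContainer, GetReferenceData, SaveChanges, GetCustomer), the persisted types (Order, Customer), and the exact Add/Evict calls are illustrative stand-ins, not the verified API.

```csharp
// Sketch of the client flow described in steps 1-7.
// Service operations and entity types here are hypothetical placeholders.

// Steps 1-2: one long-lived container for the client's lifetime, seeded with reference data.
ObjectContainer container = service.GetContainer();
container.ApplyChanges(service.GetReferenceData());

// Steps 3-4: create a new object inside a container transaction, push it to the server,
// and apply the server's echoed GetContent() ChangeSet back to the same container.
container.Transaction.Begin();
Order order = new Order();                      // hypothetical persisted type
container.Add(order);                           // assumed attach call
container.Transaction.Commit();
container.ApplyChanges(service.SaveChanges(container.GetChanges()));

// Steps 5-7: fetch a pre-existing object from a fresh server container,
// evict the earlier new object, edit, and send the changes up again.
container.ApplyChanges(service.GetCustomer(customerId));   // GetContent() on the server
container.Evict(order);                         // assumed eviction call
Customer customer = /* look the customer up in the container */;
container.Transaction.Begin();
customer.Name = "edited";
container.Transaction.Commit();
service.SaveChanges(container.GetChanges());    // <-- server throws DuplicateKeyException
```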
When I use a fresh container for the update phase on the client this error does not occur, BUT I have then lost all my cached reference data and would have to refetch it from the server :( . The aim is to use one container throughout the lifetime of the client application. Am I missing a step here? What ObjectContainer.Verify modes should I be using in these scenarios?
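For completeness, the server-side save used in steps 4 and 7 is essentially the following. Again a sketch: the scope acquisition, the ChangeSet type name, and the omitted Verify argument (which is exactly my question) are assumptions.

```csharp
// Hypothetical server-side save: apply the client's ChangeSet to a container,
// copy it into a scope transaction, commit, and echo the committed state back.
public ObjectContainer.ChangeSet SaveChanges(ObjectContainer.ChangeSet clientChanges)
{
    ObjectContainer serverContainer = new ObjectContainer();
    serverContainer.ApplyChanges(clientChanges);

    IObjectScope scope = GetScope();     // assumed scope factory
    scope.Transaction.Begin();
    serverContainer.CopyTo(scope);       // which Verify mode belongs here?
    scope.Transaction.Commit();          // DuplicateKeyException thrown here in step 7

    return serverContainer.GetContent(); // applied back to the client container
}
```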