This is a migrated thread and some comments may be shown as answers.

FetchPlans & Partial object data loading

19 Answers 298 Views
General Discussions
This question is locked. New answers and comments are not allowed.
Heiko asked on 24 Sep 2009, 07:17 AM
Hello Telerik Guys,

Please correct me if I am wrong, but I thought that with fetch plans I can define which fields
of a persistent class the ORM should load
(and not only optimize the loading of navigational properties into one query).

I tried to load an object using a fetch plan in which only the ID and NAME properties of that
class should be loaded. But the ORM always loads all properties, not only ID and NAME.

Example: I really only want to load ID and NAME, and not other properties like firstname,
lists, and so on.

========================================================================
// My SIMPLE class example

[Persistent]
public class User
{
    [FetchField("short")]
    private Guid id;

    [FetchField("short")]
    private string username;

    private string firstname;
}

========================================================================

// My SIMPLE code example

IObjectScope objectScope = ObjectScopeProvider1.GetNewObjectScope();

objectScope.FetchPlan.Clear();
objectScope.FetchPlan.Add("short");

var users = from u in objectScope.Extent<User>() select u;

foreach (User u in users.ToList())
{
    string username = u.Username;
    string firstname = u.Firstname;
}

// firstname is also filled, but it is not part of the fetch plan!

========================================================================

Is this a misunderstanding of fetch plans on my part?

How can I achieve this result?

The main background for this is using the ORM via WCF services with many-to-many relationships,
where I get an exception when there are CYCLES, e.g. object A is also defined and loaded in
an assignment table class (A_mn_B) and references itself.
Maybe you already have a solution for avoiding this cycle problem when using WCF services.

I really hope you can help here.

I am really not 100% sure how to use the ORM via WCF in Silverlight for many-to-many
relationships where you have a third assignment table.

Thanks, and please give feedback.

 

 

 

19 Answers, 1 is accepted

PetarP (Telerik team) answered on 24 Sep 2009, 11:15 AM
Hi Heiko,

Basically you have understood the main purpose of fetch plans. What you have missed, however, is that when a field that is not part of a given fetch plan (in your case, firstname is not part of the short fetch plan) is accessed, the default fetch plan is loaded. This behavior is by design. You can use the SQL profiler (or some other monitoring tool) to see that when your query is executed, only the ID and Name are retrieved. Only when you try to access the firstname field does OpenAccess see that this property is not loaded and is not part of the short fetch plan, and thus it automatically loads the default one.
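To illustrate: here is a sketch based on the User class from the question, using only the fetch-plan calls already shown in this thread. The comments describe the behavior explained above.

```csharp
// "short" loads only the fields marked [FetchField("short")]: id and username.
IObjectScope scope = ObjectScopeProvider1.GetNewObjectScope();
scope.FetchPlan.Clear();
scope.FetchPlan.Add("short");

var users = from u in scope.Extent<User>() select u;
foreach (User u in users.ToList())
{
    string name = u.Username;   // already in memory -> no extra SQL
    string first = u.Firstname; // not in "short" -> triggers a second SELECT
                                // that loads the default fetch plan
}
```

Watching the profiler while stepping over the last assignment is the easiest way to see the second query fire.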

Regards,
Petar
the Telerik team

Instantly find answers to your questions on the new Telerik Support Portal.
Watch a video on how to optimize your support resource searches and check out more tips on the blogs.
Heiko answered on 25 Sep 2009, 07:03 AM
Hi Petar,

this is exactly the behavior I noticed.

Is there a way to leave a field empty and not load it automatically?

Did you get my background for this issue regarding the use of WCF services and m:n (many-to-many) relations? What's the preferred solution to avoid such cycle issues?

Thanks a lot Petar!!!
IT-Als answered on 29 Sep 2009, 09:00 AM
Hello Heiko,

You just hit the exact same problem we had in our system (also based on WCF) about two years ago.
We also use OpenAccess as ORM.

The problem is the lazy loading of OA combined with the default DataContractSerializer used on the DataContract classes that form the contract for your WCF service. I am guessing you have decorated your OA persistent model classes with the DataContract attribute as well, right? So you end up serializing persistent instances over the wire when your WCF service is called... right?

What happens is that the DataContractSerializer (the "thing" that actually transforms your class to XML to be sent over the wire, and vice versa) performs a read (property get) on all properties of your DataContract class... thus invoking lazy loading for properties that have not yet been loaded (it totally ignores the fetch plan, since it has no idea of it; only OA knows about it).

Another thing: the default behaviour of the DataContractSerializer is to embed every object, even if it is the same instance (compared by Id). In other words, it has no idea of "object identity".

Anyway... there are two solutions to your problem: a quick one and a more time-consuming one.

The quick one (only works if using .NET 3.5 SP1)
Pass "IsReference=true" when decorating your DataContract class with the DataContract attribute.

This will bring "object identity" to the on-the-wire XML format used by the DataContractSerializer. The serializer will then use references to the same instance instead of embedding it again and again. Thus, no more cycles in m:n relations.

But remember: everything that can be reached (has a public property) and is decorated with the DataMember attribute is serialized by the DataContractSerializer... everything! I mention this because typically you would design your persistent classes to be highly navigational in nature, and high navigability equals a lot that can be reached by the serializer.

See this example for a very good demonstration of how to do it.
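A minimal sketch of the quick fix, assuming .NET 3.5 SP1; the classes A and B are hypothetical stand-ins for the cyclic m:n pair from the original question:

```csharp
using System;
using System.Collections.Generic;
using System.Runtime.Serialization;

// IsReference=true makes the DataContractSerializer emit each instance once
// and serialize further occurrences as references, so the A <-> B cycle
// below no longer blows up during serialization.
[DataContract(IsReference = true)]
public class A
{
    [DataMember] public Guid Id { get; set; }
    [DataMember] public IList<B> Assignments { get; set; }
}

[DataContract(IsReference = true)]
public class B
{
    [DataMember] public A Owner { get; set; } // back-reference, now safe
}
```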

The more time-consuming one
Add an additional mapping layer that sits between your persistent model classes and your DataContract classes. For example, you have one persistent class called Customer in the Model assembly and one DataContract class called Customer in the DataContracts assembly. Upon a WCF service request (going in), a "mapping layer" maps from the DataContract classes to the persistent model classes (if something needs to be persisted) and vice versa (when sending the response to the client).

By doing so you are separating your persistent model from the one you expose to "the outer world" in your WCF services.

I don't know which method fits your scenario; as always, it depends :-) We did "the more time-consuming one" in our system, mostly because a) we have multiple client types (consumer types) accessing the same services (with different security rights) and b) we did not want to expose our "full" persistent model to the world, as this tends to happen (when doing the quick one) over time as the system grows.
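A sketch of what such a mapping layer can look like; Model.Customer, DataContracts.Customer and the property names are all hypothetical:

```csharp
using System.Runtime.Serialization;

namespace DataContracts
{
    // The wire class is plain data: nothing here is persistence-aware,
    // so no lazy loading can fire during serialization.
    [DataContract]
    public class Customer
    {
        [DataMember] public long Id { get; set; }
        [DataMember] public string Name { get; set; }
    }
}

public static class CustomerMapper
{
    // Copy only what the service contract exposes; the serializer never
    // touches the persistent instance, so the fetch plan stays in control.
    public static DataContracts.Customer Convert(Model.Customer pc)
    {
        return new DataContracts.Customer { Id = pc.Id, Name = pc.Name };
    }
}
```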

Hope this post sheds some light on your issues... and hope it helps.

Regards

Henrik



PetarP (Telerik team) answered on 29 Sep 2009, 02:53 PM
Hi Henrik,

Great post! You have managed to clarify the problem that most people using DataContract attributes are facing. Indeed, those attributes would cause your object tree to be completely serialized (regardless of the fetch plan). We usually recommend that users take the second approach, which is to create wrapper classes for the ones that are being serialized. Although at first this might seem like far too much overhead, it will prove to be the better choice in the end. You can find an implementation of this approach in our Northwind WCF N-Tier demo application.

Once again I want to take the opportunity to congratulate you on the great post. It would be great if you could implement the approaches you described in a sample application (something agile and lite) that we could put in our code library so that our customers can look at it and see how it is done. We would greatly appreciate this effort on your side if you are willing to make it.

Greetings,
Petar
the Telerik team

Stefan answered on 02 Oct 2009, 02:18 PM
Hello Telerik,

I am currently checking out OpenAccess for a new project and came to face the described problem.
We are still in a pre-project state, so this is my first post here :-)

Please have a look at the following code-block:
[Telerik.OpenAccess.Persistent(IdentityField="id")]
public partial class Order
{
    private long id; // pk

    private string headertext;

    private DateTime orderdate;

    private Customer customer;

    private IList<OrderPosition> orderPosition = new List<OrderPosition>(); // inverse OrderPosition.order
}

What we are currently thinking of to solve (or at least weaken) the "lazy load via WCF" problem is the following:
  1. Create attributes for all members, all of them exposed as "DataMember" in the "DataContract"
  2. Switch off inverse list loading for "orderPosition", which means that the WCF client will always receive an empty list
  3. Implement lazy loading on the client side by calling a WCF service "getOrderPosition(long orderid)" as soon as the client getter of the orderPosition attribute is called

With these tasks we have a DataContract exposing all possible members to the client, without initial deep loading of all 1:n lists.
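Step 3 of the list above might be sketched like this; OrderServiceClient and GetOrderPositions are hypothetical names for the generated WCF proxy and service method:

```csharp
using System.Collections.Generic;

public partial class Order
{
    private IList<OrderPosition> orderPosition; // stays null until first access

    public IList<OrderPosition> OrderPositions
    {
        get
        {
            if (orderPosition == null) // client-side lazy load via the service
            {
                using (var client = new OrderServiceClient())
                    orderPosition = client.GetOrderPositions(this.id);
            }
            return orderPosition;
        }
    }
}
```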

What's still missing is a way to control the load behaviour of referenced DataContracts (see the "Customer" in our example). We thought of the following:
  1. Create a database view on the table "Order", consisting of only the fields id, headertext and orderdate
  2. Write a constructor on the client side that takes this "OrderView" and constructs a "real" Order object by loading it from the server (including the Customer) if detailed information is needed.

We hope that doing so gives us an advantage over creating a complete mapping layer, with the need to map every single attribute in every single data object class.

What do you think of this solution? Did we miss something essential?

Best regards and keep up the good work!
Stefan



IT-Als answered on 02 Oct 2009, 02:26 PM
Hi Petar,

Thanks for the nice words. I will see if I can flick together a small sample in the near future.

/HG
IT-Als answered on 02 Oct 2009, 03:06 PM
Hello Stefan,

Your idea seems interesting, and it could work depending on the characteristics of your client(s); maybe you have multiple client types. By the way, what is the client type (web, smart client, Silverlight...)? Maybe the services must be consumable from a platform other than .NET?

However, I see some issues:

Implementing the "client-side" lazy load
This will give you a "chatty" interface for your services, and thus network round-trips to the server whenever you need to fetch the "positions", as in your example.

When I do services the first rule of thumb for me is to keep the services (aka service methods) as coarse-grained as possible.
Think:
How do I design my service method (+ data contract) in such a way that it actually solves a business requirement.

Try not to think:
How do I design my service method in such a way that it gives me the data that I want?
(In other words: try not to let your decision to use an ORM be reflected in your service interface.)

Generally speaking I try to

1) Call a service to get the information I need to solve my business requirement
2) Manipulate the data on client
3) Call a service to push back the information needed to end up with success in solving my business requirement.

Otherwise I think you will end up with what I would, roughly said, call a "remoting (via WCF) object database".

You could also ask yourself the question: Why did I choose WCF services in the first place? What would my benefits be? Is it because other client types (maybe on other platforms) need to consume my services?

There may be reasons for doing the chatty interface - I don't know your application scenarios and environments from what you described.

Doing "a complete mapping layer"
It takes some time upfront, but you will end up reusing a lot of your code if you construct it correctly. So even though the task seems tedious and time-consuming upfront, it will pay off later.

To sum up:

I would not go for the chatty interface, unless there's a very good reason to do so. But your approach (and that's what you asked about) is definitely possible.

Just my few cents

/HG


Stefan answered on 05 Oct 2009, 07:17 AM
Hi Henrik,

thanks for your reply. Generally speaking, I agree to all points you made.
Nevertheless, our project has some special requirements (I guess every project has ;-))

  1. Our client developer team has to focus on objects, not on network traffic optimization. Therefore I think our approach is a good one: the client developer can call services returning a lot of (flat) objects. When an attribute is accessed, the missing data is loaded. This is transparent to the client developer, and we still optimize network traffic.
  2. We have a lot of master-detail views. If you think of a scenario where you want to show all orders of a customer, it makes no sense to fetch all order positions in the same step. Probably the user only wants to display one particular order in detail.

Perhaps these thoughts may be useful for other developers/architects setting up their initial software landscape. If we go any further on this, I will post our experiences here.

Thanks again for your reply, I appreciate it a lot!

Stefan
IT-Als answered on 05 Oct 2009, 08:07 AM
Hi Stefan,

Thanks for outlining your scenario further. As I said, there might be application scenarios where the chatty interface is a perfect match.

One way to implement your solution is to extend the (partial) proxies generated by svcutil in another source-code file (so they are not overwritten by svcutil). In those extended proxies you can implement the logic needed to make a round-trip to the server to get the data needed (when it has not already been fetched).

I understand your master-detail scenario. We faced the same problem and solved it by introducing two client "views" (that is, data contracts) of the same persistent class, say Order: a "list item" one, say OrderListItem, and an Order one that includes the OrderPosition list, too.

Still, the result is the same; it's just a matter of where you implement the logic: server side or client side.

I would greatly appreciate to hear about your results.

/HG
Roger answered on 23 Oct 2009, 06:57 PM
First, this is one of the best threads I've read in a while, well-written and informative.  I was struggling with similar issues, and it's great to have this knowledge.

I believe that I'm following a similar methodology to what Henrik laid out.  My mistake has been in overlooking the methods that Telerik has made available for detaching and attaching persistent objects (i.e. I was trying to write all that code myself).  I didn't see any mention in this thread of attach/detach of persistent objects.

Can I assume, since you are writing the wrapper business objects, that you are using the ObjectNetworkAttacher helper class and not the ObjectContainer (used in the n-Tier sample)? Do any of you have experiences that would push me toward one or the other? At this point I'm leaning toward ObjectNetworkAttacher because a) it gives me the flexibility of writing the business layer the way I want; b) the comparisons are done on the back end; and c) the changes are made at the last possible moment. Does that seem right?
IT-Als answered on 25 Oct 2009, 10:54 PM
Hi Roger,

I am glad you liked the post. Anyway, on to your comments:

We have not used the network attacher/detacher, but are writing our own "wrappers" (since we have a WCF service layer on top, we call those classes our data contracts).
At the time we started to use OA (4+ years ago) there simply was no helper class to do this. I haven't studied the attach/detach mechanism, so I really can't elaborate on it.

However our architecture on the server side is divided in the following layers:

- WCF services
- Business layer (mapping from/to model/datacontracts goes on here)
- Persistence layer.

The model (the persistence-capable classes) is available to all layers except the WCF services layer.

We found this to be useful, since:

1) We have a multi-tenant system (different customers are using the same system..some of them with a little "twist")
2) Each of those customers can have multiple consumer types/devices (that is, web apps, another service, smart clients, silverlight, etc)

Taking the two points into account:

What "defines" the interface to the world is actually the WCF services layer... Why share your persistence model with the world? And why share parts of it with consumer types that do not need the information... or, even better, are not allowed to even see the information?
So different consumer types can have different views of the services provided by the system AND of the information those services provide or apply.
But that's only the interface to the world... what goes on beneath, in the business and persistence layers, is exactly the same.
Actually, we have an Apply method for each and every data contract included in a service method that manipulates the persistent model. And also a Convert method that does the opposite: it maps model information to data contract classes that hold some of the information, or even (and often) combined information based on the model.

So during Convert:
You really make a "view" of your persistent classes that matches the "contract" of the calling service.

And during Apply (this is a little harder):
You implement reusable methods for applying, for example, an Address or a Customer to the persistence model. It is also here that we implemented the validation code. We split those Convert/Apply methods into a separate assembly so that they can be shared by the methods in the business layer; say both a SaveCustomer and a SaveEmployee method need to apply the Address class.

My point is:
Design so it fits your needs.... we had a rather complex system setup.. where security issues, multiple tenants and multiple consumer types were some of the key requirements that led to this implementation... and again and again we went back to the main point:

We want to control what information the consumer (whether it is a human or a machine) of our services have access to.. and it is certainly not our full persistence model.

Hope it gave you some clues

Roger answered on 27 Oct 2009, 03:10 PM
Thanks for the response.  It would appear that I have a system in place very similar to yours, although maybe more tightly coupled.  In fact, I have similar layers and transports.  My equivalent to your data contracts is business objects, but I think we're talking about the same thing: they're disconnected from the persistent objects and contain methods both for manipulating the business object data and for transferring to and from the persistent layer.  I don't have the complexity of multi-tenancy, but I will have multiple client types.

I also have equivalent methods to your Convert and Apply methods.  I haven't had any issues with the Convert method but I have had some difficulties with Apply, specifically when trying to add new instances within an object hierarchy.  My problems may have to do with my understanding of how the persistent objects are constructed.

A concrete example would be a Customer/Address/Contact scenario where a Customer could have multiple addresses and contacts.  I will do a Convert on a particular Customer and get its data and its associated Address and Contact objects.  Each Address will have the Customer object as a reference (OA calls it an "inverse").  Each Contact will be associated with an Address (Address:1-m:Contact).

The main problem is in trying to add new items through Apply.  If I were to add a new Address to a Customer then I have to Convert the Customer based on its primary key before I can Apply the new Address.  If I were to add a new Contact to an existing Address then I have to Convert the Address before I can Apply the new Contact.  And if there are several Addresses, each with several Contacts, for the same Customer, then a lot of objects will be created simply to insert a new item into the lower part of the hierarchy.  This seems painful, and it feels like it artificially creates extra work to build up parent objects.  What I would have preferred is to have a way to use the parent reference in some cases (e.g. when displaying the items) but use the primary key only when applying the data to the persistent layer.  How do you deal with this?
Roger answered on 28 Oct 2009, 07:03 AM
fwiw, I was over-thinking this particular problem.  I was trying to rebuild the parent object as well as the new child object, but it was simply a matter of fetching the parent object and attaching it to the new child object.

And, if I understand IQueryable and the delayed fetches correctly, this would not necessarily even require a round-trip to the data server.  The fetch of the parent object will just give me a persistent object that hasn't actually fetched the data itself, and the data will only be needed when the transaction that surrounds all this activity is committed.  Does that sound right?
IT-Als answered on 28 Oct 2009, 07:33 AM
Hi Roger,

Sorry for the late answer.

As long as you have the Id of the persistent object in your data contract / business object (the one that represents the persistent object), you can go and fetch the persistent object using scope.GetObjectById(<id goes here>). The process of converting from the persistent class to the data contract class is what I call a Convert method.

For example: you have a PC.Customer which is mapped (by a Convert method) to a DC.Customer when it is sent over the wire by WCF. In this process you also include the Id of the PC.Customer in the DC.Customer class.
When the (possibly) modified customer comes in for saving (for example in a SaveCustomer service method), you use the Id in DC.Customer to fetch the PC.Customer and, in the simple case, just update the properties of the PC.Customer with the values of their DC.Customer counterparts.
In my case, ApplyCustomer(DC.Customer) is called by the SaveCustomer service method. ApplyCustomer takes care of all the details of actually fetching the persistent object and updating the values of its properties.

The same applies if you have nested objects, for example several addresses on one customer:

You just have a nested call in ApplyCustomer to ApplyAddress(DC.Address). As with ApplyCustomer, the responsibility of ApplyAddress is to apply the address and any nested objects from the DC.Address class.

This might seem a bit overwhelming, but trust me, you end up with a set of highly reusable Apply and Convert methods. We even chose to put validation code within the Apply methods... so, as in the ApplyAddress example above, regardless of how you apply an address, it always undergoes the same validation check before it is persisted/applied.
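The Apply pattern above could be sketched like this; scope.GetObjectById is the OpenAccess call mentioned in this post, while the class and method names are hypothetical:

```csharp
public static class CustomerApplier
{
    // Called by the SaveCustomer service method with the incoming contract.
    public static void ApplyCustomer(IObjectScope scope, DC.Customer dc)
    {
        // Fetch the persistent instance by the Id carried in the contract.
        PC.Customer pc = (PC.Customer)scope.GetObjectById(dc.Id);
        pc.Name = dc.Name; // simple properties: copy straight across

        // Nested apply: the same ApplyAddress is reused wherever an address
        // arrives, so validation happens in exactly one place.
        foreach (DC.Address address in dc.Addresses)
            ApplyAddress(scope, pc, address); // defined analogously
    }
}
```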

Does this make sense at all? :-)

PS. Regarding your question on fetching objects: it depends on the fetch plan defined. There is a default fetch plan assigned, and it fetches all simple properties (int, string, DateTime, etc.) of an object, for example when you do a GetObjectById in your code. Nested objects and collections of objects are lazily loaded (when using the default fetch plan) when needed.

Regards

Henrik

Marco Tambalo answered on 29 Jan 2010, 08:04 AM
Hi guys, I have a similar situation.

The default FetchGroup specifies that only primitive types and single-reference types will be retrieved, right?

If yes, then let's say I have two persistent objects, Project and Manager, each having a reference to the other, in a one-to-one relationship.

If I retrieve an instance of a Project, its Project.Manager property will be retrieved, right? Because it's a single reference?
If yes, then
how about the Manager.Project property? Will it be retrieved? Will it point to the same instance of the Project I just retrieved?
Jan Blessenohl (Telerik team) answered on 29 Jan 2010, 02:23 PM
Hi Marco Tambalo,
We are not loading the referenced object, meaning the Manager is not loaded. By default we load the complete content of the underlying table, which means the id of the manager is loaded. If you then access the Manager property, we take this id and retrieve the manager.

Best wishes,
Jan Blessenohl
the Telerik team

Marco Tambalo answered on 29 Jan 2010, 03:14 PM
Hi Jan,

In that case, if the Manager is loaded by accessing it from the Project, the Manager's own reference to the Project (let's call this refBack) will not be loaded, right?
But if refBack is loaded, will it point to the original Project?



Jan Blessenohl (Telerik team) answered on 29 Jan 2010, 03:26 PM
Hello Marco Tambalo,
Yes, we first look in the scope to see if the object is already loaded; only if we do not find it there do we ask the database. This is also true of your first Manager property access: if the manager is already loaded, we just take it.

Regards,
Jan Blessenohl
the Telerik team

Marco Tambalo answered on 29 Jan 2010, 04:55 PM
That's what I want to hear. Thanks.