Help with structuring a domain model

28 Answers 315 Views
General Discussions
This is a migrated thread and some comments may be shown as answers.
This question is locked. New answers and comments are not allowed.
Bryan asked on 02 Mar 2011, 06:12 PM

My company is about to start a new project using Telerik's OpenAccess ORM. This is a new product to us, and it is the first time we'll be using an ORM for a project instead of a Dataset-based approach. We are currently having some disagreement regarding the best way to structure our data layer. Specifically, should we have a single .rlinq file and domain model for the project, or should we have per-screen/module .rlinq files that contain only the tables, and the columns from those tables, required for that particular screen/module? To illustrate the latter:

Say we have a Person table, with fields for first name, last name, ssn, birthdate, gender and marital status. In the personal information screen, we need all of these fields, so we include the whole table in the domain model in that .rlinq file. On another screen (with a separate .rlinq file), we only need the person's last name and ssn, so the Person object in that .rlinq file contains only last name and ssn.

The argument for this method has been primarily that we should only select the data that we need for a particular screen, and no more. In our current Dataset-based applications, this makes sense. There has also been concern that having unnecessary tables and relationships will cause unneeded data to be loaded even when it is not asked for, adding network load. The argument against this has been that we're fragmenting the domain model and introducing unnecessary complexity, and that part of the ORM's job is to manage data fetching with caching and lazy loading. We can't come to an agreement on this, and can't find any conclusive information one way or another, so we're turning to you for help!

If it matters, we're building a Windows Forms-based intranet app, the data layer will sit behind WCF services, and the database will have around 100 tables.

Thank you in advance for your help!

28 Answers, 1 is accepted

IT-Als answered on 03 Mar 2011, 11:41 AM
Hi Bryan,

Your thoughts/disagreements/discussions (pick the one that matches best) are very valid - and they look like the same thoughts we had within our team when we started out with OA some 5 years ago.

We also have a Windows Forms-based client that "talks" to WCF services.

I can't stress enough the importance of separating the WCF DataContract model (the model the "outside world" uses to engage/communicate with your WCF services) and the persistent model (the model of your OA entities).
We have done the above and it gives us a lot of flexibility, and it only comes at one cost...  you have to map from the DataContract model to the Persistent Model and vice versa. It's not really a huge workload to do so, once the key concepts/software architecture are in place.

Behind the WCF services we have further layered the "server side" into a business layer and repository layer.

So the uppermost service layer calls the business layer, which might call the repository layer multiple times.

An example:

1. OrderService (WCF Service) method: void CreateOrder(DC.Order)
2. BusinessLayer (class library) method: void CreateOrder(DC.Order)
3. RepositoryLayer(class library) methods: PC.Order CreateOrder(DC.Order), IList<PC.Invoice> CreateInvoicesFromOrder(DC.Order), etc.

Basically the business layer orchestrates the "business transaction" (one single action that has business value for the user) among the lower layers (here the RepositoryLayer). That way, methods within the RepositoryLayer can be reused from multiple BusinessLayer methods.

As you can see in point 3 above... The CreateOrder method takes a DataContract (the DC) Order class instance and returns a persistent (the PC) Order class instance to the business layer. So in general: the Repository returns (outbound) persistent instances to the upper layers and converts (inbound) DataContract instances to something meaningful in the persistent model.
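
A rough sketch of what that call chain could look like in C# (DC.Order / PC.Order are the placeholder names from the example above, EntitiesModel stands in for the generated OpenAccess context, and CustomerNumber is an invented property - this is illustrative, not generated code):

```
using System.Collections.Generic;
using System.ServiceModel;

[ServiceContract]
public interface IOrderService
{
    [OperationContract]
    void CreateOrder(DC.Order order);                          // 1. WCF service layer
}

public class OrderBusiness                                     // 2. business layer
{
    private readonly OrderRepository repository = new OrderRepository();

    public void CreateOrder(DC.Order order)
    {
        // Validate the inbound DataContract here, before it reaches the persistent model.
        PC.Order persistentOrder = repository.CreateOrder(order);               // DC -> PC (inbound)
        IList<PC.Invoice> invoices = repository.CreateInvoicesFromOrder(order);
        repository.SaveChanges();                              // commit the whole business transaction
    }
}

public class OrderRepository                                   // 3. repository layer
{
    private readonly EntitiesModel context = new EntitiesModel();   // OpenAccess context

    public PC.Order CreateOrder(DC.Order dto)
    {
        var entity = new PC.Order { CustomerNumber = dto.CustomerNumber };
        context.Add(entity);                                   // map DataContract -> persistent instance
        return entity;                                         // outbound: a persistent instance goes back up
    }

    public IList<PC.Invoice> CreateInvoicesFromOrder(DC.Order dto)
    {
        return new List<PC.Invoice>();                         // invoice creation elided
    }

    public void SaveChanges()
    {
        context.SaveChanges();
    }
}
```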

So, this actually solves your issues I guess... because you have two models: the DC model and the PC model.

When retrieving information (for example calling a GetOrder(int orderId) method in the service layer) you will call a business layer method, which again will query the PC model in the Repository layer and return the PC.Order instance in question. The business layer will take the appropriate steps to convert "what is needed" from the persistent instance to the DC.Order instance.

There are several ways to performance tune querying against the persistent model, just to mention a few: fetch groups, the L2 cache, relying on lazy loading... etc..
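
For example, a fetch strategy sketch (this uses the FetchStrategy/LoadWith API from later OpenAccess versions - earlier releases used named fetch groups - and EntitiesModel, Customer and the Orders association are assumed from a generated model):

```
using System.Linq;
using Telerik.OpenAccess;
using Telerik.OpenAccess.FetchOptimization;

using (var context = new EntitiesModel())
{
    var strategy = new FetchStrategy();
    strategy.LoadWith<Customer>(c => c.Orders);    // bring the orders along with each customer
    context.FetchStrategy = strategy;              // applies to queries run through this context

    var customers = context.Customers.Where(c => c.City == "Aalborg").ToList();
    // customers[n].Orders is already populated - no extra lazy-load round trip per customer
}
```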

Doing the above will also impose a "natural point" for validating what goes in from the client (WCF service consumer)... you can validate the DC.Order instance for correctness (in the business layer) before it is used/passed on further..

Phew...  Quite some words...  Hope it sheds some light on your thoughts...

Regards

Henrik



Bryan answered on 03 Mar 2011, 04:57 PM
Henrik,

Thank you for your quick and thoughtful reply.

I completely failed to mention in my initial inquiry that we are using Telerik's T4 tool to generate DTOs to send to our front end.  The tool also builds "repositories" that handle the translation between our persistent objects and our serialized DTOs, as well as basic select, insert, and update operations - we're really quite pleased with it.

So let me take another swing at illustrating our problem using your example.  Suppose we have a simple application for customers to place orders with 2 screens, one for managing the customer's information, and the other for entering orders.  One side of our argument is simple: we have one .rlinq file, one domain model, one set of DTOs that serve both screens.  So even if we don't need to know the customer's middle name to place an order, that data point is there in the CustomerDto that's feeding the order entry screen.  And even though we don't need any order information on the customer info screen, the link to the orders is there in the background.

The other side of our argument is that we should have separate .rlinq files, domain models, and DTOs for the customer info screen and the order entry screen.  After all, we don't need the link to a customer's orders on the customer info screen, and we don't need the customer's middle name to place an order.  For this we'd have an .rlinq file in the CustomerInfo namespace that contains only the Customer table/persistent object, and the T4 file that generates the CustomerDto.  The CustomerInfoService in our service layer would use the model in this namespace to feed the customer info screen on the front end.  Then we'd have another OrderInfo namespace, with an .rlinq file that contains both the Customer and Order tables/persistent objects and DTOs, which the OrderService would use to feed the orders screen.  In each of these, we would eliminate any fields that weren't needed for the screen from the persistent objects and, by extension, the DTOs.  For instance, the OrderInfo.Customer persistent object and the OrderInfo.CustomerDto would not have a MiddleName property, because we don't need that field to place an order.

The perceived advantage of this second scenario is that we aren't sending any extra fields to the front end, and that it will be more efficient.  The argument against has been that it introduces unnecessary complexity and fragments our domain model, as well as adds to development time.  Basically our question is: is there anything to be gained by splitting our domain model as described above?  How was OA designed to be used?
Accepted
IT-Als answered on 03 Mar 2011, 05:37 PM
Bryan,

As I can "see" what your are trying to achieve here it imposes a problem...

As per my understanding you'll end up having two entities (in the persistent model, although in different namespaces, but nevertheless)..namely the Customer entity... one that resides in the CustomerInfo namespace and one in the OrderInfo namespace...  So you'll end up having two entities representing the same "real world" thing - the customer - and that's "no good"..

Ideally persistent entities should be representing "real world" stuff - and there should only be one representation of it - the persistent Customer class for example...
The fact that you need different information in different screens of your client should not affect how your persistence model is built or how entities are associated with each other.
Instead (if you really need this - and I guess you do, when you are building a real-world application for the enterprise level), you should either:
1)
Use the full blown DataContract (the DTO) built from the Customer entity using T4 templates... that is... including all associations that might be built up over time....    Not really an option in my opinion...  but if you don't care about the XML payload produced by the DataContractSerializer in WCF... well, still valid then.

2)
Use specialized DataContracts.... for the service method that needs to be performed... Focus on... what task does the service need to perform... and what information does it need to do so. Think of DataContract classes / DTOs as views of the persistent model... A view that is exposed to the "outside world"....   Keep your persistent model on the server and do not expose it in full through the DataContract.... this is essentially what option 1 leads to if you think about it.

This leads to my original post again...  there's an impedance mismatch between our highly navigable (bidirectional in some cases) persistent model and the simpler (forward-only navigable) model that we wish to expose as DataContract classes in our services.

I would go for option 2 - after all - that's what we did...  We couldn't come up with a general (DTO) generator mechanism that would fit all needs...  Instead we thought of what the WCF service needed to perform its task.

Largely generalized - I know -  but you will often end up with at least two types of data contracts...  A "cooked down" data contract and a more complete data contract representing the same persistent object...
An example (the customer entity):

The cooked down data contract will include the id, the full name and maybe the address of the customer... Enough information to identify the customer from the system point of view (the id) and the end user point of view (name, etc..)

The more complete data contract will include associations also, for example a list of orders the customer has placed and so on... but only in a forward-navigable manner... meaning the Customer data contract will have a list of Order data contract instances, but the Order data contract will not have a reference back to the Customer data contract instance.

So, put into the context of your application... the "Administer Customers screen" would work on the more complete Customer data contract... but the "Create Order screen" would use the cooked down data contract, for example to identify a customer from a search result or a drop-down box or the like.
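
To make the two shapes concrete, they might look roughly like this (class and field names are only examples):

```
using System.Collections.Generic;
using System.Runtime.Serialization;

[DataContract]
public class CustomerSummaryDto          // the "cooked down" contract
{
    [DataMember] public int Id { get; set; }
    [DataMember] public string FullName { get; set; }
    [DataMember] public string Address { get; set; }
}

[DataContract]
public class CustomerDetailsDto          // the more complete contract
{
    [DataMember] public int Id { get; set; }
    [DataMember] public string FullName { get; set; }
    [DataMember] public string Address { get; set; }
    [DataMember] public List<OrderDto> Orders { get; set; }   // forward navigation only
}

[DataContract]
public class OrderDto
{
    [DataMember] public int Id { get; set; }
    [DataMember] public decimal Total { get; set; }
    // no Customer property here, so there is no back-reference / cycle to serialize
}
```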

So basically... what I am saying is:

Go for the specific DataContracts option... If the system you're building is used for something serious it will turn out to be a good investment; otherwise you would bang your head against limitations in how DTOs are generated by the T4 templates - and you will find yourself fiddling with T4 templates where you could actually code something that has business value.

And here's something for Telerik if they are "listening"..

Maybe it could be possible to generate DTOs by using attributes on the persistent classes...  so that the T4 template would read the attribute and say... I need to generate a FullCustomerDTO and a SimpleCustomerDTO from this (the Customer entity) class... and I must include these fields in the first and these fields in the second...

Phew... again... long post...

Regards

Henrik





Bryan answered on 03 Mar 2011, 06:20 PM
Henrik,

I can't thank you enough for this incredibly detailed and enormously helpful response.  You've answered a lot of our questions and given us a great deal to think about as we move forward.  Thank you for your time and patience.

Bryan
IT-Als answered on 04 Mar 2011, 09:03 AM
Thanks Bryan,

Please, do get back if you have more questions... after all I have been working with this product for 5 years now... While it certainly has evolved over that time frame some stuff is still the same.

Regards

Henrik
Kevin answered on 11 Dec 2011, 07:52 PM

Henrik,

Your posts have been very enlightening with several OA issues. I know this is an older post but a similar situation came up for me. The key in this post was this statement:
As per my understanding you'll end up having two entities (in the persistent model, although in different namespaces, but nevertheless)..namely the Customer entity... one that resides in the CustomerInfo namespace and one in the OrderInfo namespace... So you'll end up having two entities representing the same "real world" thing - the customer - and that's "no good"..

As Bryan was trying to do, I was attempting to "isolate" my domain models by using multiple RLINQ files. This made sense until I had to deal with a "shared" table (a Note table that was being used by many objects) and realized that they were resolving to different namespaces.

I tried some of the newer approaches (as of 2011 Q3), while still having multiple RLINQ files, such as the "Generate in Nested Namespaces" option for Output: the problem is that OA re-generates each nested namespace each time the metadata is built, so if you don't have all the same files in both RLINQ files then one metadata generation will wipe out the next.

I understand that Telerik expects that multiple RLINQ files be used only to separate actual database domains but I feel that a large database is difficult to maintain in one RLINQ file. But if that is the way it should be then I see two possibilities for making the resulting data entities more manageable (both require generating nested namespaces):

  1. manually set the namespace on each entity, so the metadata generator separates into a folder per namespace; or
  2. use database schemas, so the metadata generator separates into a folder per schema.

 

Is there a better way? Are these two options equally viable? And how did you manage a large database in one RLINQ file before these generation options were available?

Thanks,
=Kevin=

P.S. Henrik, you might have seen me post as "Roger" before, but now I have my own account!  :o)

IT-Als answered on 12 Dec 2011, 01:05 PM
Kevin,

Thank you very much for the nice words.

I used a third option and it was....  not to split the model into different namespaces and/or schemas. That is.. all our persistent classes reside in the Company.Product.Model namespace.

What is your motivation for splitting into different namespaces...?

/ Henrik
Kevin answered on 12 Dec 2011, 07:00 PM
My main motivation was ease of maintenance, since I know that the model will grow quite large. But, judging from your previous posts, you must also have a large number of entities in your model...is that correct? I was worried on two levels: 1) it could be difficult to navigate around the designer when there are that many entities; and 2) performance (of either the application or of Visual Studio) could be affected. I have no proof of #2 but, having done what you suggested (creating a single model), #1 still seems like it could be an issue.

What is your experience with these issues?

Thanks for your help and discussion,
=Kevin=
IT-Als answered on 14 Dec 2011, 08:17 AM
Hi Kevin,

Yes, we have a quite large model with many entities.

Still, it can be quite cumbersome to navigate the model in the visual designer to find the right class. That's why we thought of changing to the Fluent Mapping instead. When you use the Fluent Mapping you can have your persistent classes residing in different assemblies and then merge the metadata sources into one giant model. However, if you go this way, you lack the features of the visual designer, because all mapping is done fluently - in code.
To my knowledge it is not possible to merge multiple .rlinq models into one by using the above approach. Can anyone from Telerik elaborate on this?

I would still go for the one model (.rlinq) option - alternatively go Fluent.

Ok, the design surface in Visual Studio will get quite messy, but you have to make up your mind (I know it is a bit over the top, but still spot on I think): what do you use the visual designer for... to have a set of "neat"-looking classes with interconnections, or to do the mapping?
Once your model grows large, forget everything about "neat" classes. You will find yourself only doing mapping in the visual designer.
My opinion is: once you get the Fluent Mapping under your skin, you will find it very easy to maintain. But if you prefer to do things visually - this is not an option.
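
For reference, a fluent mapping source looks roughly like the following (class, property and table names are examples, and the exact API surface may differ between OpenAccess versions):

```
using System.Collections.Generic;
using Telerik.OpenAccess.Metadata;
using Telerik.OpenAccess.Metadata.Fluent;

public class ModelMetadataSource : FluentMetadataSource
{
    protected override IList<MappingConfiguration> PrepareMapping()
    {
        var configurations = new List<MappingConfiguration>();

        var customer = new MappingConfiguration<Customer>();
        customer.MapType(c => new { c.Id, c.FullName }).ToTable("Customer");
        customer.HasProperty(c => c.Id).IsIdentity(KeyGenerator.Autoinc);
        configurations.Add(customer);

        // Further MappingConfiguration<T> instances can come from other assemblies
        // and be combined here - the "merge the metadata sources into one model" idea above.
        return configurations;
    }
}
```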

Regarding performance: at runtime, of course, the number of classes and their associations affects performance, but still... there are several options for tuning it... the Level 2 cache for example, which we use heavily.
On design-time performance... the number of classes will affect the performance of the visual designer, but then again... your model would have to be quite large for it to matter...

/Henrik



Alexander (Telerik team) answered on 14 Dec 2011, 10:11 AM
Hello Henrik and Kevin,

 Merging multiple rlinq models is possible, however it requires manual merging of the metadata before the first context instance is created. I would recommend this approach only in cases when the parts of the model that need to be merged are dynamically determined at runtime. If your application is always using the whole database, create just one rlinq file and include all necessary tables. If the model gets very big, just use the Locate in Diagram option from the OpenAccess Model Explorer to easily navigate to a class.   For easier recognition you could also use different colors for the classes.

The runtime performance is the same for all mapping types, as we are instantiating a MetadataContainer object with the same relational/conceptual artifacts, just the way to store/read the mapping information is different. 
The design-time performance depends on the size of the database, your PC configuration and other factors, so you will have to test it yourself. Normally you should not observe a considerable decrease in the performance for rlinq models with up to 500 classes.

All the best,
Alexander
the Telerik team


IT-Als answered on 14 Dec 2011, 10:14 AM
Hi Alexander,

Thanks for elaborating on the issue regarding merging of .rlinq models.

So to sum up:
Just keep the visual designer and map the classes from there. Right?

Regards

Henrik
Alexander (Telerik team) answered on 14 Dec 2011, 10:35 AM
Hi Henrik,

Yes, for most cases the visual designer and a single rlinq file is the easiest and most user-friendly way to set up a domain model.
For advanced scenarios like a model split in several parts, where associations should be defined between classes from different assemblies, the fluent mapping is the way to go. There are in fact a few other things that are not currently supported in the designer - like Dictionary associations, which are available only in the fluent mapping, but I guess this could be considered an advanced scenario as well.

All the best,
Alexander
the Telerik team


IT-Als answered on 14 Dec 2011, 11:03 AM
Thanks Alexander,

Kevin:
Did you figure out which path to choose then?

/Henrik
Kevin answered on 14 Dec 2011, 06:04 PM
It seems that the best path is almost always the easiest. At this time, I prefer the visual nature of the designer so I will go with a single RLINQ file for my data classes, but I will consider the Fluent approach. Alexander, your tips for searching and colourizing entities will definitely help. Henrik, of course I would like to have the mapping functionality and "neat"-looking classes.  ;o)

One of the main reasons why keeping a single RLINQ file appears to be the best approach for me is that I have several classes that need to be shared (i.e. FK relationships) among a great number of other classes. That makes it more difficult to break up. On the other hand, I expect to have around 300-400 classes by the time I'm done, so I was concerned about the maintenance (both of you have alleviated my concerns in this area). Whether I use Fluent or stick with a single RLINQ, I'm still using shared objects in my business layer and separating functionality through the use of different WCF services, so I'm thinking that my SoC is still good. As to "neatness", because of the shared entities, I already have a lot of crossing connection lines in the designer, so I'm not too concerned about that...anymore.

Henrik and Alexander, thanks for all your help. I'm the sole developer on my project, so it's nice to be able to bounce ideas off this "team".

=Kevin=
IT-Als answered on 15 Dec 2011, 07:14 AM
Hi Kevin,

With 300-400 classes you should do just fine. We have quite a few more in our model.

As to "neatness":
I already suspected that you did not care any longer :-)

As to "bounce ideas":
You know where to find me (us)

Regards
Henrik
topry answered on 02 Jan 2012, 05:09 PM

Excellent thread – my thanks to Henrik for sharing your experience/knowledge

Henrik – I would appreciate your opinion on something if you have time:
I am evaluating the practicality of moving our data layer into a WCF service using OpenAccess ORM. We would use netTCP binding and it would be self-hosted as a Windows service on a dedicated virtual machine. Most of our current applications are VS2010 Winforms (moving slowly to WPF) with a traditional client data layer that uses ADO.Net abstracted into a class wrapper, and all data methods use stored procedures. It works well and with less than 50 concurrent users, we have no performance issues. The majority of the applications are run under Windows 2008R2 terminal services.


As to the normal considerations for moving to a WCF hosted data layer:

  1. Performance/scaling:  There isn’t a scaling issue – performance is good now and we do not anticipate growth/utilization in the next 5 years that will change this.
  2. Outside access: Currently – none. All sharing of data with partners is done via a secure website. The needs are limited and we do not anticipate needing a web-service, mostly because our partners do not have the resources/expertise to consume/use the data. What they use now, we create/provide for them.
  3. Reusability – this is really the only driving factor. Since the current data class goes back over the past 10 years, and the schema has changed demonstrably since then, it is in need of some major refactoring. This is what has initiated the testing/discussions for utilizing ORM and possibly WCF for our data access needs going forward.

I have done some limited testing with OA and EF 4.2 in both the client layer as well as using a self-hosted WCF service. The majority of our queries are very straight forward- return a client record and ‘1 to n’ records in ‘x’ linked tables. The primary application is basically a CRM at its core, with a client record at the heart, which in turn is linked to over 200 other tables  2n to 3n (~400 total tables in the db). Current methodology executes a single stored procedure, which in one round-trip will return only those tables/records required for that function within a single dataset.

When using ORM within the client layer, I can see the benefits / productivity gains going forward. However, with the additional effort involved with serializing the data to/from WCF (even with the help of the mapping wizard in OA), as well as the inherent overhead of WCF, I'm wondering if the pains are worth the gains. We are and will likely always be the only consumer. All of our access will be through .Net framework tools.

One of my biggest concerns deals with handling updates to the WCF service, as the service must be restarted after any changes are made, which means it can only happen during off hours. Using the client layer, we can slipstream changes at any time as we use shadow-copy, with each client copying/working with their own private set of the dlls.

So, my question for you is this - considering our limited focus, would you utilize a WCF-hosted data layer in this situation, or keep the data objects with the client?


IT-Als answered on 02 Jan 2012, 06:19 PM
Hi Topry,

I would be thinking in terms of service consumer types (instead of the consumer) to answer the question whether or not to utilize WCF on top of an OpenAccess-powered persistence model.

In other words:
You can have a more "natural" way of accessing data by using OpenAccess, because you will work with real world entities instead of stored procedures and tables.
So based on your existing schema (tables and SPs) you can build an OA model of persistent classes.

On top of the persistence model you would write repositories logically divided into application areas to manipulate the persistence model, like CustomerManagement...with methods like CreateCustomer.. UpdateCustomer... FindCustomersByXXXX
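
Such a repository could look roughly like this (the interface, the Customer properties and the EntitiesModel context are illustrative names only):

```
using System.Collections.Generic;
using System.Linq;

public interface ICustomerManagement
{
    Customer CreateCustomer(string fullName, string address);
    void UpdateCustomer(Customer customer);
    IList<Customer> FindCustomersByName(string nameFragment);
}

public class CustomerManagement : ICustomerManagement
{
    private readonly EntitiesModel context;            // OpenAccess context mapped to the existing schema

    public CustomerManagement(EntitiesModel context)
    {
        this.context = context;
    }

    public Customer CreateCustomer(string fullName, string address)
    {
        var customer = new Customer { FullName = fullName, Address = address };
        context.Add(customer);
        context.SaveChanges();
        return customer;
    }

    public void UpdateCustomer(Customer customer)
    {
        // The customer is already attached and change-tracked; just commit the business transaction.
        context.SaveChanges();
    }

    public IList<Customer> FindCustomersByName(string nameFragment)
    {
        return context.Customers.Where(c => c.FullName.Contains(nameFragment)).ToList();
    }
}
```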

That is the first step.

The second step is where the consumer types comes in.
Now that you have repositories with all the "business transactions" needed you can have your Winforms application use these repositories to perform business transactions.
Moreover, if you wish to have a WPF-powered client you can still build a WCF layer on top of the repositories...  The WPF client could then access the WCF services to perform actions.

Back to your question:
Based on what you have described here, I think it would be overkill to apply WCF on top, because:
1. As yet, no one from the "outside" needs an API / services.
2. As yet, no one from the "inside" (your applications) needs WCF services...  you can still go directly at the repositories (typically these will be implemented as one or more class libraries)
3. Using the repositories directly will have better performance since you don't have the serialization / de-serialization workload.

That said... if you construct the repositories and the model described above you will have a more maintainable system in my opinion... still being extensible to new client access technologies (like RIA, WCF DataServices), if you choose to build WCF on top of the repositories.

Just my few cents..

Regards

Henrik


Kevin answered on 03 Jan 2012, 02:05 AM
Hi Topry,

Just to add to what Henrik has already said, I think that implementing WCF in your situation will still give you a more re-usable environment. Where you might take advantage of WCF is in re-using some business logic across multiple service methods. I am guessing that you probably do something similar with stored procedures now, but if you're planning to refactor the data layer and add an ORM then you might find that atomizing calls at the data layer then aggregating them in the business layer is more appropriate.

Regarding your concern about serialization/de-serialization being non-performant, from my experience using WCF over TCP is very fast... unless what you meant was the time it takes to build up the service layer to manage translation between data and business objects. If the latter then I can see your concern, although it is possible--and Henrik has mentioned it before--to build a framework that simplifies this process.

I was also curious about your comment: "One of my biggest concerns deals with handling updates to the WCF as the service must be restarted after any changes are made, which means it can only happen during off hours." While you may have a case for applying changes during off hours, I have only had to do so when I've had to make database changes as well. fwiw, I haven't had to start and stop the service, either. My service is hosted over WAS (easily the best choice IMO when running on 2008R2 because you can set up built-in logging and metrics, and it will manage service start-up) and I use Publishing to push my builds. Since I'm using WCF, every service call requires the proxy to be opened and closed, so any changes that are pushed to the server will be picked up by each user client on the next call. And my services run in Per-Session mode (I have a small-ish number of users) so the pipeline does not get destroyed with each call, thereby reducing time spent on the wire.
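
For illustration, the per-session part of that setup boils down to something like this (contract and DTO names are made up, and the DTO shape was sketched earlier in the thread; the netTcpBinding endpoint itself lives in configuration under WAS):

```
using System.ServiceModel;

[ServiceContract(SessionMode = SessionMode.Required)]
public interface ICustomerService
{
    [OperationContract]
    CustomerSummaryDto GetCustomer(int id);
}

[ServiceBehavior(InstanceContextMode = InstanceContextMode.PerSession)]
public class CustomerService : ICustomerService
{
    // One instance per client session: the service object survives between calls,
    // so per-call setup cost is not paid again on every request.
    public CustomerSummaryDto GetCustomer(int id)
    {
        return null; // would delegate to the DAL / repository layer
    }
}
```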

I hope any of this makes sense to your situation. I trust that Henrik will correct me if needed. ;o)

=Kevin=
topry answered on 03 Jan 2012, 01:33 PM
Henrik/Kevin-
Thank you for your time and the information provided. I will definitely make use of the repository model - and based upon Kevin's feedback, will give WAS a look. Thank you for mentioning this - as I was not aware of this hosting method. I was under the misimpression that IIS could only host SOAP-based services. The ability to slipstream updates would make WCF more practical, and the limited reading I've done on WAS would seem to make it a good option for our situation.

Regards,
-Tim
IT-Als answered on 03 Jan 2012, 03:06 PM
Hi Tim,

Super that you could use the knowledge and apply it to your own situation. Only glad to help.

Maybe you can mark the question as answered, so other users of the forums are able to find their answers quicker.
Thanks.

Best regards

Henrik
Kevin answered on 03 Jan 2012, 03:41 PM
Glad to help, Tim. Good luck!
topry answered on 08 Mar 2012, 05:42 PM
Kevin (or anyone else)-

I created a WCF Service Library project, configured for netTCPBinding only.
I then created the DAL project, added an .rlinq using the 'Add Domain Model' wizard.

The Data Services Wizard only works with HTTP-based WCF (WCF Services Application project) - confirmed via a support query to Telerik that they only support IIS-hosted WCF via the wizard. So, I'm looking for a way to add all of the DTOs and supporting code like the Data Services wizard creates.

I tried the Domain Service wizard, but its functionality and output are demonstrably different - it requires the manual selection of each individual object (table), and it apparently is not updateable - it has to be totally recreated each time. Selecting a few hundred individual tables each time to add a new one is a bit painful - there is no 'select all' function.

So, am I missing something (hopefully)?  How did you construct your DAL for a netTCP bound WCF service?
Kevin answered on 09 Mar 2012, 06:14 AM
Well, when I started this project, not all the wizards were available, so I can't really speak to them as well as I would like. What I have used, though, I found was not really what I was looking for. If you wanted something quick to start out with, the wizards do the trick. But the way the code is generated and the style of code don't suit me (e.g. it appears that multiple Get() methods are put together instead of grouping with other methods for that type) so I would have ended up manually re-arranging it anyway. I may be wrong, but I'm not sure that Telerik really intended the wizards to be enterprise-level, which would also match your observations about the Domain Service wizard.

Frankly, I found that the best (not the easiest!) approach was to build up my own DTOs, DAL and service methods. Actually, Henrik pushed me in this direction after having tried a couple of other approaches. My DTOs contain, for the most part, simple auto-properties and they are in their own libraries (i.e. grouped with similar DTOs, not one per library). Each DTO library is then shared with both the service project(s) and the UI or Business Layer project that uses it (I'm using the MVVM pattern, so my View-Models use the DTO libraries). My service methods are also very simple interfaces to the DAL, which does the bulk of the work. Each DAL class serves two purposes: translating the DTO types to the ORM persistent types so the ORM can perform CRUD operations; and translating the results of ORM fetch operations into DTO types to be sent back to the application.
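
Condensed into a sketch, that shape is roughly the following (PersonDto, Person, People and EntitiesModel are placeholder names, not the actual project code):

```
using System.Linq;

// A plain DTO living in its own shared library...
public class PersonDto
{
    public int Id { get; set; }
    public string LastName { get; set; }
    public string Ssn { get; set; }
}

// ...and a DAL class that translates between the DTO and the OpenAccess persistent type.
public class PersonDal
{
    public PersonDto GetPerson(int id)
    {
        using (var context = new EntitiesModel())
        {
            var entity = context.People.FirstOrDefault(p => p.Id == id);
            if (entity == null)
                return null;

            // persistent type -> DTO: copy only the fields the client actually needs
            return new PersonDto { Id = entity.Id, LastName = entity.LastName, Ssn = entity.Ssn };
        }
    }

    public void UpdatePerson(PersonDto dto)
    {
        using (var context = new EntitiesModel())
        {
            var entity = context.People.First(p => p.Id == dto.Id);

            // DTO -> persistent type: copy the changed fields and let OpenAccess persist them
            entity.LastName = dto.LastName;
            entity.Ssn = dto.Ssn;
            context.SaveChanges();
        }
    }
}
```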

I would like to point out that if you are using IIS to host WCF, you can choose one or more transports. I say that because I believe that the comment "they only support IIS hosted WCF via the wizard" may not be entirely correct. That is, I'm sure that the default configuration is set up that way, but there should be nothing to prevent you using the same code over TCP. In the end, WCF exists to create services independent of transport, with the exception that native types can more easily be transferred via TCP. fwiw, I have my services set up in IIS to use both HTTP and TCP, which allows me to do a quick call from my browser just to be sure that a service is working.

What I would suggest is that you work on an end-to-end solution using only one entity. That will help establish the layers you need before tackling the rest of your entities.

I hope this helps you, although you may not find it was what you were hoping for. ;o)

=Kevin=
topry answered on 09 Mar 2012, 07:15 PM
Kevin,

Thanks again for the information. While I have created WCF services in the past using both HTTP/s and netTCP bindings (always using ADO.Net classes), I have yet to get my arms around getting this (ORM) to work with a WCF service library (vs an ASP.Net application-hosted one). I do agree with your point on some of the constructs used by the wizard-generated code, but right now I would settle for just getting something to work!

I sent a query to Telerik's consulting arm today - hopefully, they can help kick start my brain and get me pointed in the right direction.

-Tim
Serge (Telerik team) answered on 13 Mar 2012, 03:09 PM
Hello guys,

While I already answered Tim's support ticket I would like to share the answer here, so that it is available for everybody. Do let me know if you have any comments or find something not entirely to your liking. As always we are open to suggestions and would like to provide a better services story. Here goes:

Unfortunately we haven't designed the wizard to be capable of generating into a WCF Service Library project. Of course, if that is the case, we shouldn't have allowed you to use the wizard in the first place. We will consider adding support for that (or hiding the project type entirely).

As to fetch plans, they are not automatically applied to services. Let me shed some light on the matter: a WCF service will work (and load) the same whether it is exposed through web or netTcpBinding. The question now becomes how to transfer object graphs through a service and whether fetch plans apply.

First of all I will suggest using the new "Generate OpenAccess Domain Service..." wizard. It is built so that it can be easily extended and should be pretty straightforward and easy to understand. The DataManager on the other hand handles most of the stuff automatically, but it can be a bother to tweak.

So when using the new wizard a DTO layer will be generated for you. Each service method will call a domain service that will retrieve the OpenAccess objects from the database and construct DTO objects from them, which will in turn be returned through the service. Now is a good time to point out that fetch strategies apply only to loading OpenAccess objects. You need to keep in mind that if a property that is not loaded is accessed, the integrated lazy loading will kick in and load the data in a separate query (and this cannot be turned off).

You will see that the Assembler classes that are used to translate a DTO object to an OA object, and the reverse, will not populate the collection and reference properties (i.e. related entities). This is done on purpose, because usually loading object graphs and sending them through a service is not a good idea. The message size quickly becomes too much. This is why the SDK example loads data in different calls to the service, exactly because loading everything in a single call might be too big. Loading only needed data is what we tried to achieve.

You can easily extend the generated code so that related entities are loaded and sent, however you should be careful in doing so. 
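
Purely as a hypothetical illustration (not an official sample, and the generated assembler code is not shown here), such an extension might look like this, reusing the DTO shapes sketched earlier in the thread:

```
using System.Linq;

public static class CustomerAssembler
{
    public static CustomerDetailsDto ToDto(Customer entity, bool includeOrders)
    {
        var dto = new CustomerDetailsDto { Id = entity.Id, FullName = entity.FullName };

        if (includeOrders)
        {
            // Touching entity.Orders triggers lazy loading unless a fetch strategy already
            // brought the orders along; keep the graph shallow so the WCF message stays small.
            dto.Orders = entity.Orders
                .Select(o => new OrderDto { Id = o.Id, Total = o.Total })
                .ToList();
        }

        return dto;
    }
}
```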

Unfortunately we do not have such a sample right now. I hope this is helpful, please let us know if you have further questions or face more trouble.
 
Regards,
Serge
the Telerik team
topry answered on 08 May 2012, 08:15 PM
Henrik/Kevin etal-

Can you advise how many methods you are creating for any one WCF contract?
When I have used the 'Generate Open Access Domain Model service' context menu wizard to generate a 'plain' service, the code is generated correctly and works - but when I have over ~100 tables in my entity model, and several hundred methods in the contract, it becomes a bit cumbersome and eventually clients will time out when attempting to download the service reference - and no modifications on the client or service that I have tried will overcome this timeout.

I have read that while there may be no technical limit to the method count per contract, some recommend a practical limit at 'a dozen or so', which does not seem possible using the Telerik wizards.

Since we have over 300 tables in this one database, the number of generated methods is greater than 1000. Even if I could get a client to download the service reference, it does not seem to be a practical implementation. As such, I've been considering a service bus type of methodology and using CodeSmith to generate my templates.

Since you have been using OpenAccess with large object sets, can you advise what structure/methodology you are using for your WCF contract(s)?

Regards,

Tim
IT-Als answered on 11 May 2012, 08:11 AM
Hi Tim,

Before I give you my opinion I would like to point out that at the time we wrote our WCF services on top of OpenAccess (5+ years ago) there was no such thing as the Generate Open Access Domain Model service wizard. Therefore, each and every WCF service is written (almost) by hand..

Anyway:

We split the WCF service methods among different WCF services... one for each logical business domain of the application. For example: FinancialService, AccountingService, SalesService, PurchaseService, etc.
Thus you have one code file for each Service

However, the DataContracts used by the service methods (either as input parameters or return values) are placed in one single file/assembly.
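
In outline, that split looks something like this (service and DTO names are examples; the real contracts of course have many more methods):

```
using System.Collections.Generic;
using System.ServiceModel;

// One contract per logical business domain, while all DataContract types
// live in a single shared assembly.
[ServiceContract]
public interface ISalesService
{
    [OperationContract]
    OrderDto GetOrder(int orderId);

    [OperationContract]
    void CreateOrder(OrderDto order);
}

[ServiceContract]
public interface IAccountingService
{
    [OperationContract]
    InvoiceDto GetInvoice(int invoiceId);

    [OperationContract]
    IList<InvoiceDto> GetInvoicesForCustomer(int customerId);
}

// Each client adds a reference per service, so no single contract accumulates hundreds
// of methods and the generated service reference stays small and quick to download.
```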

This works fine for us and we had no problems with timeouts.

How do you download the service description to your client (by code, command line tool or...) ?
topry answered on 12 May 2012, 04:50 PM
To download the service description, I have tried the Add/Update reference from within Visual Studio 2010 and wcftestclient.exe.
The more methods I add, the longer it takes to update until it exceeds 5 minutes (I have yet to find a config setting for that). While the included wizard is helpful in creating small services and unit tests, I'm not sure the structure is one I want to use for an entity of this size and all of the individual methods. I still have a lot to learn utilizing ORM with WCF, so for now, I imagine the main issue is my lack of knowledge/expertise in this area.

Thanks for the reply.
-Tim
