The Telerik Data Access Second Level Cache does some extra work for you, such as invalidating cached objects when insert, update, or delete operations might influence the result of a select statement. The cache also maintains the relation between a query, its parameters, and its result, so the next time you execute the same query you receive the cached objects without hitting the database. Of course, these operations cost some extra time, but in your scenario the cache seems to perform slower than expected.
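Conceptually, this kind of query-result cache can be pictured as a map from (query text, parameters) to results, where any write touching a table drops the cached entries that read from that table. A minimal sketch in Python of that idea (the class and method names are illustrative only, not the Telerik Data Access API):

```python
class QueryResultCache:
    """Toy second-level cache: keyed by (query, params), invalidated per table."""

    def __init__(self):
        self._entries = {}   # (query, params) -> cached result
        self._by_table = {}  # table name -> set of cache keys that read it

    def get(self, query, params):
        """Return the cached result for this query + parameters, or None."""
        return self._entries.get((query, tuple(params)))

    def put(self, query, params, tables, result):
        """Cache a result and remember which tables it depends on."""
        key = (query, tuple(params))
        self._entries[key] = result
        for table in tables:
            self._by_table.setdefault(table, set()).add(key)

    def invalidate(self, table):
        """Called after an insert/update/delete that touches `table`."""
        for key in self._by_table.pop(table, set()):
            self._entries.pop(key, None)
```

Repeating the same query with the same parameters is then served from memory, while a write to any table the query reads evicts those entries, which is the bookkeeping that costs the extra time mentioned above.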
One of the following reasons could be affecting the cache performance:
- If you have projections, the results are not cached. The reason is that the result is an anonymous type, and we cannot compare instances of it coming from different parts of your application. Also, executing a projection over a cached object will be slower than using the cached object directly.
- The queries you execute could fill up the cache too often, causing purge operations to happen too frequently. You could validate this by profiling the queries executed against the database and experimenting with the cache limit settings.
- The result of a query that returns too many objects, which we consider likely to fill up the cache, is not cached. If you specify Take(), the result is cached even when we expect it to fill the cache quickly. This is why you see the difference in behavior with and without Take() in your LINQ queries.
It is perfectly fine to use your own cache implementation if you do not expect too many changes in the cached objects. It will most probably be faster because it will be designed specifically for your scenario.
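For mostly-read data, such a hand-rolled cache can skip the cross-query invalidation bookkeeping entirely and just expire entries by age. A rough Python sketch of that trade-off (the loader function and parameter names are hypothetical, not part of any Telerik API):

```python
import time

class SimpleReadCache:
    """Hand-rolled cache for rarely-changing data: load once, expire by age."""

    def __init__(self, loader, max_age_seconds=300.0):
        self._loader = loader      # hypothetical function: key -> value (e.g. a DB query)
        self._max_age = max_age_seconds
        self._store = {}           # key -> (value, timestamp)

    def get(self, key):
        entry = self._store.get(key)
        if entry is not None and time.monotonic() - entry[1] < self._max_age:
            return entry[0]        # fresh cached value, no database round trip
        value = self._loader(key)  # reload from the database
        self._store[key] = (value, time.monotonic())
        return value
```

Because it never invalidates on writes, a design like this only fits data that seldom changes, which is exactly the condition described above.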
Also, you could profile the queries you execute to see which one takes the most time. If you can list those queries, we will be happy to help you improve them if possible.
I hope this helps.
OpenAccess ORM is now Telerik Data Access. For more information on the new names, please check out the Telerik Product Map.