
Telerik OpenAccess ORM has two levels of caching. The upper level is a local cache specific to each IObjectScope instance, and it is the focus of this post. The lower level, the second-level (L2) cache, is shared by all IObjectScope objects and is used to hold unchanged database content in memory. The values of the various fields of the user objects (like a Person instance) are duplicated in the L2 cache. It is populated during read access and provides fast retrieval of commonly used objects. Additionally, the cache can contain complete query results. Note that the L2 cache is not enabled by default.

Now let’s get back to the Level One cache. As we already mentioned, each IObjectScope instance has its own L1 cache. The main role of this cache is to make sure that the objects in the scope are unique, i.e. an object cannot be loaded twice in the same scope. To achieve this, the scope maintains a dictionary of ObjectId keys and ‘weak’ references to all objects ‘living’ in the scope. A weak reference allows an object to be referenced without preventing it from being collected by the Garbage Collector. With that in mind, when a reference to an object is followed, the object scope first checks whether the dictionary contains such an ObjectId. If it finds one and the weak reference is still alive, the existing instance is returned and nothing is fetched from the database. However, if the weak reference’s target has already been collected, or the ObjectId has not been added to the dictionary of this scope yet, the object is read from the database and the dictionary entry is updated with the new object.
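
To make the mechanism a bit more tangible, here is a small conceptual sketch of such a weak-reference identity map. It illustrates the idea described above and is not the actual OpenAccess internals; the ScopeIdentityMap class and the loadFromDatabase delegate are invented for the example.

using System;
using System.Collections.Generic;

// Conceptual illustration only - not the actual OpenAccess implementation.
// The scope keeps a dictionary of ObjectId keys -> weak references
// to every object "living" in the scope.
class ScopeIdentityMap
{
    private readonly Dictionary<string, WeakReference> entries =
        new Dictionary<string, WeakReference>();

    public object Resolve(string objectId, Func<string, object> loadFromDatabase)
    {
        WeakReference entry;
        if (entries.TryGetValue(objectId, out entry))
        {
            object cached = entry.Target;
            if (cached != null)
                return cached;          // already in the scope - nothing is fetched
        }

        // Unknown id, or the weak reference's target was already collected:
        // read the object from the database and update the dictionary entry.
        object loaded = loadFromDatabase(objectId);
        entries[objectId] = new WeakReference(loaded);
        return loaded;
    }
}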

This mechanism works not only for single objects but also for large result sets. A query returns ObjectId instances, which are checked for existence in the same way as described above. For example, if we have already loaded some “Order” objects but want to retrieve the rest of them as well, by using “from o in scope.Extent&lt;Order&gt;() select o”, the scope will execute the query against the database but hand back the already cached instances for every object that is present in the L1 cache.
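
As a quick illustration, the following sketch runs the same query twice against one scope and compares the results. It assumes the usual classic setup where the scope comes from a generated ObjectScopeProvider1 class, an Order persistent class exists in the model, and the standard System.Linq usings are in place; the exact names will differ in your project.

// Assumed classic setup: ObjectScopeProvider1 and Order are placeholders
// for your own generated provider and persistent class.
IObjectScope scope = ObjectScopeProvider1.GetNewObjectScope();

var orders1 = (from o in scope.Extent<Order>() select o).ToList();
var orders2 = (from o in scope.Extent<Order>() select o).ToList();

// Both queries resolve to the same ObjectId values, so the L1 cache hands back
// the very same instances instead of materializing second copies.
Console.WriteLine(Object.ReferenceEquals(orders1[0], orders2[0]));   // True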

If a particular object is referenced only by weak references, it can be collected by the Garbage Collector whenever additional free memory is needed. That may happen even while a single query is executing: if the specified fetch plan is deep and the retrieved objects require a lot of memory, some of the newly loaded objects might be discarded by the Garbage Collector while the read is still in progress. Obviously, this causes additional fetches when the freed objects are accessed again. If you need to avoid this behavior, the usage of weak references can be forbidden by adding a node to the backend configuration section:

<backendconfigurations>
  <backendconfiguration id="mssqlConfiguration" backend="mssql">
    <mappingname>mssqlMapping</mappingname>
    <pmCacheRefType>STRONG</pmCacheRefType>
  </backendconfiguration>
</backendconfigurations>

The pmCacheRefType setting declares the reference type to be used; specifying the value STRONG forces the use of strong references instead of weak ones. Note that strong references keep all loaded objects alive and may lead to high memory consumption, so be careful with this setting.

The L1 cache takes care of all new, deleted, or changed objects, even if you do not keep a reference to those instances until the transaction is committed. The cache also ensures that the pending changes are not overwritten by loading the object again via a query.
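
For example, the following sketch (assuming the classic scope.Transaction.Begin()/Commit() API and an Order class with OrderID and Status members, which are illustrative names only) changes an object, drops the application’s reference to it, and then re-queries it within the same scope; the pending change is still there when the object is reached again.

scope.Transaction.Begin();

Order order = (from o in scope.Extent<Order>()
               where o.OrderID == 42
               select o).ToList()[0];
order.Status = "Shipped";
order = null;   // the application keeps no strong reference to the dirty object

// Re-querying the same row inside the scope still yields the dirty instance;
// the pending change is not overwritten by the freshly executed query.
Order again = (from o in scope.Extent<Order>()
               where o.OrderID == 42
               select o).ToList()[0];
Console.WriteLine(again.Status);   // "Shipped"

scope.Transaction.Commit();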

I hope this helps you understand how the Telerik OpenAccess ORM caching mechanism works, so that you can optimize the data access of your applications. Any feedback is always appreciated.

