As we already mentioned last week, Telerik OpenAccess ORM manages two cache levels – one specific to each ObjectScope instance and another shared by all scopes. The shared cache is generally called the "2nd level cache" or the "L2 cache". Its main job is to hold copies of the database content in memory. The L2 cache is populated during read access and enables fast retrieval of commonly used objects. This is extremely helpful in multithreaded applications, where every thread has its own ObjectScope: to avoid too many calls from each scope to the relational server, the L2 cache is shared by all of them within the application process. Database access is necessary only when the requested data is not currently available in the cache.

The cache content is indexed by ObjectIds. Additionally, the cache can contain complete query results, which are cached by storing the ObjectIds of all the instances returned by the query. If a query is executed again with the same parameters and options, the cache is consulted and a database round trip is avoided. Cached query results are automatically evicted when any instance of any class involved in the query is modified. All of this caching is transparent to the application using Telerik OpenAccess ORM.
It is important to know that the 2nd level cache is used only for non-transactional reads and for reads in optimistic transactions. It is bypassed in pessimistic transactions to ensure that database locks are obtained. In other words, only data read in an optimistic transaction or outside of a transaction is cached.
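As a quick illustration, here is a sketch based on the classic IObjectScope API (ObjectScopeProvider1 stands in for the designer-generated scope provider, and Product is a hypothetical persistent class – adapt both to your model):

```csharp
IObjectScope scope = ObjectScopeProvider1.GetNewObjectScope();

// Reads outside a transaction, or inside an optimistic transaction,
// populate the L2 cache and can be answered from it.
scope.Transaction.Begin();                // optimistic concurrency (the default)
var products = scope.Extent<Product>();   // may be served from the L2 cache
// ... work with the products ...
scope.Transaction.Commit();

// Had the transaction been pessimistic, the same read would have
// bypassed the L2 cache and gone to the database, so that the
// required row locks are actually obtained.
```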
Note that the L2 cache is not created by default; it needs to be explicitly enabled in the <backendconfigurations> section of the configuration file:
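A minimal example of enabling it might look like this (the exact element and attribute names are an assumption based on the classic configuration schema – verify them against the documentation for your version):

```xml
<openaccess>
  <backendconfigurations>
    <backendconfiguration id="mssqlConfiguration" backendType="mssql">
      <!-- Turn on the shared L2 cache for this backend
           (attribute names are illustrative). -->
      <secondLevelCache enabled="true" />
    </backendconfiguration>
  </backendconfigurations>
</openaccess>
```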
The maximum number of objects and queries held in the cache can be configured as well. The defaults are 10000 persistent objects and 1000 queries.
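These limits could be set next to the enable flag, roughly like this (again, the attribute names are assumptions based on the classic configuration schema; check your version's configuration reference):

```xml
<secondLevelCache enabled="true"
                  numberOfObjects="10000"
                  numberOfQueries="1000" />
```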
To control what is cached, a caching strategy can be defined for each class:
There are three valid strategies:
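As a sketch, a per-class strategy could be declared via a metadata extension, in the style of the JDO mapping that OpenAccess classic builds on (the cache-strategy key, the Customer class, and the no/yes/all values are assumptions here – consult your version's mapping reference for the exact names):

```xml
<class name="Customer">
  <!-- Assumed values: "no" never caches instances of this class,
       "yes" caches instances as they are read,
       "all" caches the whole extent of the class. -->
  <extension key="cache-strategy" value="no" />
</class>
```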
Telerik OpenAccess ORM automatically evicts modified instances and query results since, once modified, they no longer correspond to the actual data in the database. If other applications are modifying the database, you can either disable caching for the classes mapped to the tables being modified, or manually evict instances when you know the data has changed by using the IObjectScope.Evict(object) method. When the cache is full (i.e. it has reached the maximum configured number of instances), the least recently used instances are evicted first.
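A manual eviction after an external change could look like this (IObjectScope.Evict is the method mentioned above; the Customer class, customerId, and the retrieval call are illustrative):

```csharp
IObjectScope scope = ObjectScopeProvider1.GetNewObjectScope();

// A persistent instance whose underlying table was just changed
// by another application.
Customer customer = (Customer)scope.GetObjectById(customerId);

// Drop the now-stale cached copy so that the next read of this
// object goes back to the database.
scope.Evict(customer);
```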
The second level cache can also be used in a distributed scenario, which helps keep all the caches in an application or web server farm in sync. Synchronizing caches, however, will be discussed soon in a separate post, so stay tuned.
In a nutshell, the second level cache helps you decrease the round trips to the database and keep the most frequently used content in memory. So, if your multithreaded application reads often from the database, enable the L2 cache and you will get a nice performance boost.