An object-relational mapper is about developer productivity: the ability to quickly and easily change your model, integrated change tracking, caching, and moving to other databases with little effort. It is also about speed.
In this blog post I will try to answer the most common questions about OpenAccess performance and show how you can improve it.
When using OpenAccess you always start with a type that derives from OpenAccessContext. Creating an instance of that type is a lightweight operation, typically on the order of a dictionary lookup.
Then you execute your first LINQ query, and it looks really slow. Retrieving 500 records in 500ms? Not good enough. Execute the same query again and it takes only 20ms. Now that is something we can work with.
What happened? The first time you execute a query, OpenAccess creates an internal Database object that caches a lot of information. This includes creating the connection pool, calculating your model information down to the last bit, preparing some internal caches, initializing some infrastructure objects, and so on. So the second time you create a context and execute a query, you get all of this for free: the prepared Database instance is stored in a dictionary keyed by the connection string, and the second context simply finds the already opened Database instance there.
Another thing that happened behind the scenes in the previous example is that we cached the result of the LINQ query translation. As you might imagine, our LINQ engine translated the expression tree to SQL; we then cached that result in the Database object, so every subsequent execution of the same LINQ query skips the translation to SQL altogether.
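To make the warm-up effect concrete, here is a minimal sketch (the NorthwindContext and Products names are illustrative, not part of OpenAccess itself) that times the same LINQ query executed from two different context instances:

    using System;
    using System.Diagnostics;
    using System.Linq;

    var watch = Stopwatch.StartNew();
    using (var context = new NorthwindContext())   // first context: the Database object is created and cached
    {
        var products = context.Products.Where(p => p.UnitPrice > 10).ToList();
    }
    Console.WriteLine("Cold run: {0} ms", watch.ElapsedMilliseconds);

    watch.Restart();
    using (var context = new NorthwindContext())   // second context: reuses the cached Database and the cached SQL translation
    {
        var products = context.Products.Where(p => p.UnitPrice > 10).ToList();
    }
    Console.WriteLine("Warm run: {0} ms", watch.ElapsedMilliseconds);

The exact numbers will vary, but the second run should be dramatically faster because only the actual database round trip remains.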
There are some things that you might want to watch out for. It is very common to have data that is not needed all the time. Take the Northwind Category table: most of the time you do not need the byte[] image, yet it is retrieved anyway. Wouldn't it be nice to load it only when you need it? As a matter of fact you can: just find the property in the diagram window and (through the Properties pane) change its loading behavior to lazy.
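Once the property is marked as lazy, the column is simply left out of the initial SELECT. A sketch of what the access pattern then looks like (assuming a Category class with a lazily loaded Picture property; adjust the names to your own model):

    using System.Linq;

    using (var context = new NorthwindContext())
    {
        // The initial query does not fetch the lazily configured byte[] column.
        var category = context.Categories.First(c => c.CategoryName == "Beverages");

        // Only when the property is actually accessed does OpenAccess issue
        // a separate query to retrieve the image data.
        byte[] image = category.Picture;
    }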
Another thing you can improve is the loading of related data. Say you have retrieved a single Product instance from the database and you want to browse through all of its related orders. As soon as you access the property representing that list, the integrated lazy loading kicks in and executes additional queries to retrieve the data. You could have loaded all of the data in a single query using a fetch plan.
Note: You can actually define fetch strategies that affect your whole application or fine-tune individual queries. (Have a look at our help articles.)
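Here is a rough sketch of a fetch plan, assuming the FetchStrategy/LoadWith API from the fetch optimization namespace and a Product class with an Orders collection (adjust the names to your own model):

    using System;
    using System.Linq;
    using Telerik.OpenAccess.FetchOptimization;

    using (var context = new NorthwindContext())
    {
        // Tell the context to bring the related orders along with each product.
        var fetchStrategy = new FetchStrategy();
        fetchStrategy.LoadWith<Product>(p => p.Orders);
        context.FetchStrategy = fetchStrategy;

        var product = context.Products.First(p => p.ProductID == 1);

        // No additional queries here: the orders were fetched together with the product.
        foreach (var order in product.Orders)
        {
            Console.WriteLine(order.OrderDate);
        }
    }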
It is quite common for your application to already be up and running when you need to figure out how to optimize it, but where do you start? Our Profiler can help. You can use it to quickly locate N+1 problems, long running queries, huge result sets and other possible performance hindrances. It can even tell you which LINQ query produced a given SQL statement.
Note: In order to start profiling you have to turn on logging. Keep in mind, though, that logging decreases performance, so remember to turn it off when you don't need it.
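As a sketch of what turning logging on can look like, assuming your generated context exposes the CustomizeBackendConfiguration partial method and that the Logging members below exist under these names in your version (check the help articles for the authoritative settings):

    using Telerik.OpenAccess;

    public partial class NorthwindContext
    {
        static partial void CustomizeBackendConfiguration(ref BackendConfiguration config)
        {
            // Record executed statements and basic metrics so the Profiler has data to show.
            config.Logging.LogEvents = LoggingLevel.Normal;
            config.Logging.StackTrace = true;
            config.Logging.EventStoreCapacity = 10000;
            config.Logging.MetricStoreCapacity = 3600;
        }
    }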
Simply by using the OpenAccessContext you automatically get a per-context cache that stores all retrieved objects and queries the database only if an object has not been materialized yet. (Note that LINQ queries are always executed and their results returned; however, if an object is already materialized and cached, a new one will not be created.) This context cache supports heavily connected object graphs by eliminating redundant copies.
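For example, here is a small sketch of that identity guarantee within one context (again with illustrative NorthwindContext/Product names):

    using System;
    using System.Linq;

    using (var context = new NorthwindContext())
    {
        // Both statements send a query to the database...
        var first = context.Products.First(p => p.ProductID == 1);
        var second = context.Products.First(p => p.ProductID == 1);

        // ...but the second result is resolved against the context cache,
        // so the very same instance is returned instead of a fresh copy.
        Console.WriteLine(object.ReferenceEquals(first, second)); // True
    }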
And with just a few settings you can enable the Level Two Cache: one cache per database, shared by all contexts in the current application. You can even synchronize Level Two Caches between different applications using MSMQ. The Level Two Cache eliminates the need to contact the relational database server for recurring queries with the same parameters. You can read more on this in our help articles.
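Enabling it looks roughly like this, assuming the SecondLevelCache settings are exposed under these names on the BackendConfiguration in your version:

    // Inside the same CustomizeBackendConfiguration method sketched above:
    config.SecondLevelCache.Enabled = true;          // turn on the shared, per-database cache
    config.SecondLevelCache.NumberOfObjects = 10000; // size it for your workload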
A feature we introduced a while back is our low-level API, also known as the ADO API: a stack that completely bypasses the change tracking and caching mechanisms of OpenAccess. You can use it in applications that strive for raw performance and only need to retrieve data. More on that in this article.
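As a sketch of reading data through that stack, assuming the OAConnection type returned by the context's Connection property (see the linked article for the authoritative usage):

    using System;
    using Telerik.OpenAccess.Data.Common;

    using (var context = new NorthwindContext())
    using (OAConnection connection = context.Connection)
    using (var command = connection.CreateCommand())
    {
        command.CommandText = "SELECT ProductID, ProductName FROM Products";
        using (var reader = command.ExecuteReader())
        {
            while (reader.Read())
            {
                // Plain data access: no change tracking, no caching, just rows.
                Console.WriteLine("{0}: {1}", reader.GetInt32(0), reader.GetString(1));
            }
        }
    }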
Keep in mind that this is just the tip of the iceberg; there is still a lot more that you can tweak and customize to get the most out of Telerik OpenAccess ORM.
We are going to continually refine our documentation and provide a whole new Optimization section that will cover most of the topics discussed in this post in greater detail.
As always, we are looking forward to hearing from you.