Hardware and sizing recommendation

Thread is closed for posting
4 posts, 0 answers
  1. Patrice Boissonneault
    35 posts
    Member since: Nov 2009

    Posted 09 Feb 2010

    Hello,

    We are currently evaluating the possibility of using OpenAccess as the ORM for one of our projects, and we are planning to use the L2 cache.

    Do you recommend any particular hardware specifications (e.g. amount of memory, OS, etc.) so that we get the most performance out of the ORM layer (particularly the L2 cache)?

    Another related question is the size of the L2 cache.  We are assuming that the "raw" data is stored in the L2 cache and that "joins" are done by the ORM layer rather than the database server. Is that correct?  If so, is it safe to assume that the L2 cache will never grow larger than the actual database?

    Thank you for your help, your products look great.

    P
  2. Jan Blessenohl
    Admin
    707 posts

    Posted 10 Feb 2010

    Hello Patrice Boissonneault,
    You are right, we store raw data in the cache, but the data is not packed as tightly as it might be in some backends. You should restrict the level two cache to a certain number of objects; the default is 10,000. If there is more data to be cached, we throw out the oldest data. More memory helps, but how much really depends on your kind of application. The idea of the L2C is to take some load off the database server, because it is normally the bottleneck.
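
    As a minimal configuration sketch, enabling the cache and setting the object limit could look like the snippet below. The SecondLevelCache property names follow later Telerik documentation and are an assumption here; check the documentation for your OpenAccess version.

        using Telerik.OpenAccess;

        // Sketch only: enable the second level cache and set the object limit.
        // The property names follow later Telerik documentation and may differ
        // in your OpenAccess version; treat them as assumptions.
        BackendConfiguration backend = new BackendConfiguration();
        backend.SecondLevelCache.Enabled = true;
        backend.SecondLevelCache.NumberOfObjects = 10000;  // the default mentioned above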

    Greetings,
    Jan Blessenohl
    the Telerik team

  3. Patrice Boissonneault
    35 posts
    Member since: Nov 2009

    Posted 10 Feb 2010

    Hello Jan,

    Thank you very much for your answer.  Just to be a bit more precise, let's imagine the extreme scenario where we would like the L2 cache to hold the entire database and act somewhat like an in-memory database.  Can we just raise the default object limit to, say, 10,000,000?  What is the limit of the L2 cache in terms of performance?  And in terms of design, can we fill it up to the physical memory limit?

    Do you have any performance metrics about the L2 cache size?

    Another question: is an object equal to a database row?  For example, we have reports which need 50,000 customer rows.  Is that 50,000 objects for the L2C?  If so, what would be the behaviour if we keep the default of 10,000: will the L2C start evicting data before the whole dataset is even fully loaded?  And finally, if an object is a row, is the 10,000 limit for all tables combined or per table?

    Finally, sorry if these questions have been asked before; I did some quick searches without finding answers.  BTW, is there a manual available somewhere that we can take offline (like a PDF, or even a printed book)?

    Thanks again for your help and have a good one.

    P
  4. Jan Blessenohl
    Admin
    707 posts

    Posted 11 Feb 2010

    Hi Patrice Boissonneault,
    The second level cache uses an LRU list combined with a dictionary to manage the content of the L2 cache. At that size the list will be huge, but I do not see a big overhead. Filling the available memory with the L2C is possible.
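
    To illustrate the structure (a conceptual sketch, not the actual OpenAccess internals): a dictionary pointing into a doubly linked LRU list gives O(1) lookups and O(1) eviction of the oldest entry.

        using System.Collections.Generic;

        // Conceptual sketch of a dictionary + LRU-list cache, as described
        // above; not the actual OpenAccess implementation.
        class LruCache<TKey, TValue>
        {
            private readonly int capacity;  // e.g. the 10,000-object default
            private readonly Dictionary<TKey, LinkedListNode<KeyValuePair<TKey, TValue>>> map;
            private readonly LinkedList<KeyValuePair<TKey, TValue>> lru;  // most recently used first

            public LruCache(int capacity)
            {
                this.capacity = capacity;
                map = new Dictionary<TKey, LinkedListNode<KeyValuePair<TKey, TValue>>>(capacity);
                lru = new LinkedList<KeyValuePair<TKey, TValue>>();
            }

            public bool TryGet(TKey key, out TValue value)
            {
                if (map.TryGetValue(key, out var node))
                {
                    lru.Remove(node);   // O(1): detach the hit...
                    lru.AddFirst(node); // ...and move it to the front
                    value = node.Value.Value;
                    return true;
                }
                value = default;
                return false;
            }

            public void Put(TKey key, TValue value)
            {
                if (map.TryGetValue(key, out var existing))
                {
                    lru.Remove(existing);
                    map.Remove(key);
                }
                else if (map.Count >= capacity)
                {
                    // "throw out the oldest data": evict the least recently used entry
                    map.Remove(lru.Last.Value.Key);
                    lru.RemoveLast();
                }
                map[key] = lru.AddFirst(new KeyValuePair<TKey, TValue>(key, value));
            }
        }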

    An object in the L2C is basically a row; if you use inheritance where the data is collected from different tables, it might be more than one row, but never less. The limit applies to all tables combined, but you can also mark tables as non-cacheable.
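
    For a rough sanity check of the in-memory-database scenario, you can estimate the cache footprint from your average row width. The row width and per-entry overhead below are assumed values, not OpenAccess measurements.

        // Back-of-envelope sizing sketch; the row width and per-entry
        // overhead are assumptions, not OpenAccess measurements.
        long objects = 10_000_000;   // proposed object limit
        long avgRowBytes = 200;      // assumed average row width
        long overheadBytes = 100;    // assumed bookkeeping per entry (key, list node, ...)
        long totalBytes = objects * (avgRowBytes + overheadBytes);
        // 3,000,000,000 bytes, i.e. roughly 2.8 GB
        System.Console.WriteLine(totalBytes / (1024.0 * 1024 * 1024) + " GB");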

    Our offline documentation is in CHM format. You can download it from our web page if you are logged in.

    Best wishes,
    Jan Blessenohl
    the Telerik team
