This is a migrated thread and some comments may be shown as answers.

Master/Slave support in OpenAccess ORM?

General Discussions
This question is locked. New answers and comments are not allowed.
Kendall Bennett asked on 21 Apr 2011, 07:15 PM
Hi Guys,

I am looking at using OpenAccess ORM and am trying to work out the best way to support a master/slave type environment for MySQL. We have developed all our direct ADO.NET code so that we can create connections either to a read-only slave database or a read/write master database, allowing our code to handle MySQL scaling in a master/slave environment. We use a sentinel object to wrap any read/modify/write scenarios that span database calls, so that any code that would normally read from the slave database instead reads from the master while the sentinel is held.
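A simplified C# sketch of the sentinel idea (the type and connection string names here are illustrative placeholders, not our real code):

```csharp
using System;
using System.Threading;

// Simplified sketch of the sentinel: while one is held, code that would
// normally read from the slave is redirected to the master connection.
public sealed class MasterSentinel : IDisposable
{
    // ThreadStatic keeps the flag per worker thread; a depth counter
    // lets sentinels nest safely.
    [ThreadStatic] private static int depth;

    public static bool IsHeld { get { return depth > 0; } }

    public MasterSentinel() { depth++; }
    public void Dispose() { depth--; }
}

public static class Db
{
    // "Master" and "Slave" are placeholder connection string names.
    public static string ReadConnectionString
    {
        get { return MasterSentinel.IsHeld ? "Master" : "Slave"; }
    }

    public static string WriteConnectionString
    {
        get { return "Master"; }
    }
}
```

Any read/modify/write sequence is then wrapped in `using (new MasterSentinel()) { ... }` so that its reads resolve to the master for the duration of the scope.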

So my question is: how would something like that fit with an ORM model like OpenAccess? It would seem you could change the way the context is created and use two connection strings, one for the master and one for the slave, and then choose which context to use much as we choose a connection today. The same sentinel system could be used to make sure that when code requests a context, it gets the correct master or slave context as necessary.
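As a hedged sketch of that context-per-role idea (assuming, as OpenAccess generated contexts typically allow, a context constructor that accepts a connection string id; `MyModelContext` and the string ids are placeholders, not actual API):

```csharp
// Sketch only: MyModelContext stands in for a generated OpenAccess context,
// and the "Master"/"Slave" connection string ids are assumptions.
public static class ContextFactory
{
    // Set while the sentinel described above is held.
    [ThreadStatic] private static bool forceMaster;

    public static void EnterMasterScope() { forceMaster = true; }
    public static void ExitMasterScope()  { forceMaster = false; }

    // Writes always go to the master.
    public static MyModelContext CreateWriteContext()
    {
        return new MyModelContext("Master");
    }

    // Reads go to the slave unless a master scope is active.
    public static MyModelContext CreateReadContext()
    {
        return new MyModelContext(forceMaster ? "Master" : "Slave");
    }
}
```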

However I am wondering how this would interact with the 2nd Level Cache support that OpenAccess has? Would the second level cache work correctly such that code writing to the database through the 'master' context would end up correctly invalidating information in the cache for the 'slave' context? I would imagine that somehow this would work correctly, given that every thread in the server will end up with separate contexts, so somehow the 2nd level cache must maintain cache coherency across multiple contexts in the same application.

BUT, does this work if the connection strings for the contexts are different? In our development environment they both connect to the same database, but the slave goes through a read-only user that only has SELECT permissions, and in a live environment the slave would actually be a physically separate database server.

If this does not work, what is recommended to scale database support across multiple machines when using OpenAccess ORM? Perhaps using a master/master environment? But then again, how does Open Access maintain cache coherency in those kinds of scale out environments?

4 Answers, 1 is accepted

Zoran
Telerik team
answered on 22 Apr 2011, 08:13 AM
Hi Kendall Bennett,

The 2nd level cache will always be properly invalidated, as it is designed for scenarios exactly like yours or similar ones (a web farm, for example) where multiple machines work with the same objects. The only information the 2nd level cache knows about an object is its ID and its type - it does not know anything about the connection string. So whenever an object is updated through a given context, that context will trigger invalidation of the 2nd level cache, no matter what connection string it uses to persist the changes.

All the best,
Zoran
the Telerik team
Do you want to have your say when we set our development plans? Do you want to know when a feature you care about is added or when a bug fixed? Explore the Telerik Public Issue Tracking system and vote to affect the priority of the items
Kendall Bennett
answered on 22 Apr 2011, 06:01 PM
Ok thanks. So for the second level cache to work, all access to the database must occur through OpenAccess, and all machines involved must communicate with each other so the cache can be kept coherent?

What happens if some other process, perhaps using native SQL, updates the database? How does the 2nd level cache get invalidated in that case?
Zoran
Telerik team
answered on 26 Apr 2011, 05:05 PM
Hi Kendall Bennett,

You assumed correctly: all data access should go through OpenAccess so you can rely 100% on the level 2 cache. If someone manually alters a record in the database, there is a possibility of reading outdated data. And yes, if you have a web farm or some other multiple-server solution, the machines should be connected so that a request served on one machine can invalidate the cache on another (via MSMQ).

Greetings,
Zoran
the Telerik team
Kendall Bennett
answered on 26 Apr 2011, 06:58 PM
Ok thanks. For the moment this means the 2nd level cache won't be usable at all for us, as we have a combination of legacy PHP code and new C# code that accesses the database. We are in the process of porting all of the PHP code to C#, and eventually everything will be in C#, but that is going to be some time in the future :(

So until then I suppose we would just have to avoid the 2nd level cache.