Fluent Mapping Performance

5 posts, 0 answers
  1. Richard
    2 posts
    Member since:
    Feb 2015

    Posted 25 Apr Link to this post

    We are using Data Access with Web API to supply data to our website, and we have been using JMeter to load-test a simple web method that returns data. Even though our LINQ query, and the SQL it generates, does not change, throughput drops the more columns we map. With 5 columns mapped we achieve 200 requests per second, but with 100+ columns mapped we are suddenly down to 10 requests per second. Again, the SQL statements do not change in this case; only the number of mapped columns does, and those extra columns are never even used. A sample of our mapping code is below. We have also tested adding columns progressively, and throughput degrades with each column added. Can someone help explain this behavior? Thank you.

     

        var configuration = new MappingConfiguration<Sku>();
        configuration.MapType().ToTable("SKU");

        configuration.HasProperty(x => x.ImageRecnbr)
            .IsIdentity()
            .HasFieldName("_imagerecnbr")
            .WithDataAccessKind(DataAccessKind.ReadWrite)
            .ToColumn("IMAGE_RECNBR")
            .IsNotNullable()
            .HasColumnType("decimal")
            .HasPrecision(10)
            .HasScale(0);

        configuration.HasProperty(x => x.SkuInternal)
            .HasFieldName("_skuinternal")
            .WithDataAccessKind(DataAccessKind.ReadWrite)
            .ToColumn("SKU_INTERNAL")
            .IsNotNullable()
            .HasColumnType("decimal")
            .HasPrecision(9)
            .HasScale(0);
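
    For context, fluent configurations like this are returned from a FluentMetadataSource's PrepareMapping override. A minimal sketch of that wiring, assuming a hypothetical SkuMetadataSource class name (only the first property mapping is repeated here):

        using System.Collections.Generic;
        using Telerik.OpenAccess.Metadata;
        using Telerik.OpenAccess.Metadata.Fluent;

        // Hypothetical metadata source; the fluent configurations above are
        // handed to Data Access by returning them from PrepareMapping.
        public class SkuMetadataSource : FluentMetadataSource
        {
            protected override IList<MappingConfiguration> PrepareMapping()
            {
                var configuration = new MappingConfiguration<Sku>();
                configuration.MapType().ToTable("SKU");

                configuration.HasProperty(x => x.ImageRecnbr)
                    .IsIdentity()
                    .ToColumn("IMAGE_RECNBR");
                // ... remaining property mappings as shown above ...

                return new List<MappingConfiguration> { configuration };
            }
        }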

  2. Richard
    2 posts
    Member since:
    Feb 2015

    Posted 26 Apr in reply to Richard Link to this post

     I have been able to verify that the older .rlinq equivalent does not exhibit this behavior, so it seems to be a newly introduced issue. Since fluent mapping is the only option moving forward, this is a serious concern for us.
  4. RIT
    21 posts
    Member since:
    Apr 2015

    Posted 27 Apr in reply to Richard Link to this post

    Hi Richard,

    How have you measured your throughput? We saw a similar issue, which can be solved by instantiating the MetadataContainer only once:

    before:

        public partial class Db : OpenAccessContext, IDbUnitOfWork
        {
            private static string connectionStringName = @"myconnectionstringname";
                 
            private static readonly BackendConfiguration Backend = GetBackendConfiguration();
     
            private static readonly MetadataSource MetadataSource = new DbMetadataSource();
    ...

    after:

        public partial class Db : OpenAccessContext, IDbUnitOfWork
        {
            private static string connectionStringName = @"myconnectionstringname";
                 
            private static readonly BackendConfiguration Backend = GetBackendConfiguration();
     
            private static readonly MetadataContainer metadataContainer = new DbMetadataSource().GetModel();
    ...

    That way the context does not rebuild the metadata model every time you instantiate a new one. This has a big impact if your MetadataSource is big...
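
    For completeness, here is roughly how the cached container would be handed to the context. This is a sketch built on the class and field names from the snippets above, assuming an OpenAccessContext constructor overload that accepts a MetadataContainer:

        public partial class Db : OpenAccessContext, IDbUnitOfWork
        {
            private static string connectionStringName = @"myconnectionstringname";

            private static readonly BackendConfiguration Backend = GetBackendConfiguration();

            // Build the metadata model exactly once for the whole application,
            // instead of once per context instance.
            private static readonly MetadataContainer metadataContainer =
                new DbMetadataSource().GetModel();

            public Db()
                : base(connectionStringName, Backend, metadataContainer)
            {
            }
        }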

    Greetings,

    Florian

  5. Richard
    2 posts
    Member since:
    Feb 2015

    Posted 27 Apr in reply to RIT Link to this post

    We have used JMeter for all of our performance testing. We did create a static instance of the MetadataContainer, and performance was much worse before we did. We create and populate the container on application start.

     

  6. Richard
    2 posts
    Member since:
    Feb 2015

    Posted 27 Apr in reply to Richard Link to this post

    I spoke too soon! I thought we had done that, but in fact we had only created a static instance of the MetadataSource, as in your first example. Thank you for the help!