SQLite Write Performance For Large-ish Dataset

This is a migrated thread and some comments may be shown as answers.
This question is locked. New answers and comments are not allowed.
Mike asked on 10 Feb 2015, 03:36 PM
I've just converted my app from writing directly to binary files over to using SQLite and DataAccess. 

After replicating about 90% of the functionality I'm hitting a few performance issues, and I'm not sure whether they stem from the way I've constructed the schema or from the way I'm configuring the database.

It's a telemetry app, very much like what an F1 team would use to store data from their car instruments.

The structure is like so (there's a rough code sketch after the list):
We can have multiple sessions in the Session Table, but only one session will be active before the data is written.
We can have multiple laps per session in the Lap Table. 
These laps will consist of between 3000 and 5000 telemetry data points. The telemetry table has about 50 fields that are stored.
We also have a vector table, which has about 12 fields and we will get 3 of these per telemetry data point.
We also have a corner table, which has about 15 fields and we will get 4 of these per telemetry data point (one for each corner of the vehicle).
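
Roughly, in code, the model looks like this (the class and property names here are illustrative, not my actual code):

    using System.Collections.Generic;

    // Illustrative sketch of the schema described above.
    public class VectorEntry { /* ~12 fields */ }
    public class CornerEntry { /* ~15 fields */ }

    public class TelemetryPoint
    {
        // ~50 telemetry fields omitted
        public IList<VectorEntry> Vectors { get; set; } // 3 per point
        public IList<CornerEntry> Corners { get; set; } // 4 per point, one per vehicle corner
    }

    public class Lap
    {
        public IList<TelemetryPoint> Points { get; set; } // 3,000-5,000 per lap
    }

    public class Session
    {
        public IList<Lap> Laps { get; set; } // multiple sessions exist, one active at a time
    }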

I build up the data as it's being read into memory, then add it to the context and do a SaveChanges on it. 

On a sample bit of data which is 1 session, 4 laps, 19,000 telemetry points, 58,000 corner entries and 44,000 vector entries (roughly), this can take over 16 seconds to save to disk when the SaveChanges is called.
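
In code terms the write path is essentially this (simplified; TelemetryContext and BuildSessionInMemory are placeholders for the real code):

    using (var context = new TelemetryContext())
    {
        Session session = BuildSessionInMemory(); // laps, points, vectors, corners
        context.Add(session);                     // attach the whole object graph
        context.SaveChanges();                    // this one call is where the ~16 seconds goes
    }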

I thought initially I could wrap the save into a "Task task = Task.Run(() => context.SaveChanges());", but this triggers an instance not managed error. For info, the entire process that's getting the data, processing it, sending it over IP and saving to the local SQLite DB is all running in a DispatcherTimer thread, so I'm not blocking the UI anyway. But the read process can kick off fairly quickly, and sitting for 16 seconds while it writes isn't great. I shudder to think what it would be like if there were 20+ laps involved.
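
For reference, here's what I tried, plus a workaround I'm considering on the assumption that the context and its entities can't be shared across threads (the context name and mapping step are placeholders):

    // What I tried -- this throws the "instance not managed" error,
    // presumably because the context/entities are touched from another thread:
    Task task = Task.Run(() => context.SaveChanges());

    // Workaround sketch, assuming the context is not thread-safe:
    // buffer plain data on the reader thread, then create the context,
    // attach and save entirely inside the background task.
    Task save = Task.Run(() =>
    {
        using (var ctx = new TelemetryContext())     // placeholder name
        {
            Session session = MapToEntities(buffer); // hypothetical mapping step
            ctx.Add(session);
            ctx.SaveChanges();
        }
    });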

If you compare, roughly, with the performance figures mentioned here: https://www.sqlite.org/speed.html, then I should be in the 8-second bracket. I never actually benchmarked the binary data version, but with a similar dataset I never noticed the lag, so it'll have been sub-1-second I would have thought. For the dataset mentioned above that's about 18 MB of data.

Is there something I should be doing differently, or is this roughly on par with what to expect when writing this amount of data?

2 Answers, 1 is accepted

Boris Georgiev
Telerik team
answered on 13 Feb 2015, 01:15 PM
Hi Mike,

The statistics on that web page compare different SQL engines and can be used as a basis for choosing a database, but they do not define the time limit for your application and your usage of the database. For example, the benchmarks are run against an indexed table with 3 columns, so if you have 4 columns the execution time on your side will differ from the times shown there.

You should also bear in mind that with an indexed table the time will not grow in proportion to the number of entities inserted: if inserting X entities takes Y time, inserting 2X entities could (and most probably will) take more than 2Y time, because of re-indexing and index fragmentation. I also assume you have constraints, which will slow the inserts down further.
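
As a general technique (not specific to your project, and assuming your model allows committing in parts), saving in batches instead of in one very large unit of work often reduces the cost per SaveChanges call. A rough sketch, with an illustrative batch size and entity names:

    // Sketch only: BatchSize and the names are illustrative.
    const int BatchSize = 1000;
    int pending = 0;
    foreach (TelemetryPoint point in telemetryPoints)
    {
        context.Add(point);
        if (++pending % BatchSize == 0)
        {
            context.SaveChanges(); // commit this batch before continuing
        }
    }
    context.SaveChanges();         // flush any remainder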

It is also not a fair comparison between writing to a SQL database and writing to a binary file, because writing straight to the file system will always be faster than writing to a structured store.

I suspect your application could be optimized somewhere, but I am not able to tell you how and where without looking at the code. If you have the possibility to send us a sample project, I will be happy to look into it and, if possible, suggest optimizations.

Regards,
Boris Georgiev
Telerik
 
Mike
answered on 13 Feb 2015, 04:20 PM
Hi, thanks for the reply. 

I probably should have left off the comparison. I was just trying to get a gauge for what to expect.

The project is wrapped up in a whole lot of other stuff, so unfortunately it wouldn't be that simple to send over. I think at this stage I'm really after a bit of guidance on the performance. If what I'm experiencing is to be expected given the amount of data I'm writing and the various tables, indexes etc. involved, then I need another approach. If you would expect it to be faster, and a lot faster, then I'll take the investigation further and see if I can put together a demo app.

I did a little more googling after posting this, and I take your point about comparing apples to apples, but it would appear that performance-wise I'm probably in the ballpark of "to be expected". I would imagine that any real improvement would come from looking at a full SQL engine like SQL Server, but for my little app that's not the way I really want to go.

Thanks, Mike...