I've just converted my app from writing directly to binary files to using SQLite and DataAccess.
After replicating about 90% of the functionality, I'm hitting a few performance issues, and I'm not sure whether they stem from the way I've constructed the schema or from the way I'm configuring the database.
It's a telemetry app, very much like what an F1 team would use to store data from their car's instruments.
The structure is like so:
We can have multiple sessions in the Session table, but only one session will be active while data is being written.
We can have multiple laps per session in the Lap table.
Each lap consists of between 3,000 and 5,000 telemetry data points. The telemetry table stores about 50 fields per point.
We also have a vector table with about 12 fields; we get 3 of these per telemetry data point.
We also have a corner table with about 15 fields; we get 4 of these per telemetry data point (one for each corner of the vehicle).
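To make the relationships concrete, here is a simplified sketch of the entities (class and property names are illustrative stand-ins for my actual code; the real tables carry the field counts above):

    using System.Collections.Generic;

    // Simplified sketch; the real entities carry ~50/12/15 fields each.
    public class Session
    {
        public int Id { get; set; }
        public List<Lap> Laps { get; set; } = new List<Lap>();
    }

    public class Lap
    {
        public int Id { get; set; }
        public List<TelemetryPoint> Points { get; set; } = new List<TelemetryPoint>();
    }

    public class TelemetryPoint
    {
        public int Id { get; set; }
        // ~50 telemetry fields...
        public List<Vector> Vectors { get; set; } = new List<Vector>();  // 3 per point
        public List<Corner> Corners { get; set; } = new List<Corner>();  // 4 per point
    }

    public class Vector { public int Id { get; set; } /* ~12 fields */ }
    public class Corner { public int Id { get; set; } /* ~15 fields */ }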
I build up the data in memory as it's being read, then add it to the context and call SaveChanges.
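In outline, the save path looks like this (a sketch; context, Add, and ReadTelemetry are stand-ins for my actual code):

    // Everything is accumulated in memory first, then persisted in one call.
    var session = new Session();
    foreach (var lap in ReadTelemetry())   // builds laps with their points,
        session.Laps.Add(lap);             // vectors and corners attached

    context.Add(session);                  // the context tracks the whole graph
    context.SaveChanges();                 // this is the ~16 second call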
On a sample dataset of 1 session, 4 laps, and roughly 19,000 telemetry points, 58,000 corner entries and 44,000 vector entries, the SaveChanges call can take over 16 seconds to write to disk.
I thought initially I could wrap the save in "Task task = Task.Run(() => context.SaveChanges());", but this triggers an "instance not managed" error. For info, the entire process that gets the data, processes it, sends it over IP and saves it to the local SQLite DB runs in a DispatcherTimer thread, so I'm not blocking the UI anyway. But the read process can kick off again fairly quickly, and sitting for 16 seconds while it writes isn't great. I shudder to think what it would be like with 20+ laps involved.
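For reference, the failing attempt was essentially this (my guess is that the context can't be used from a thread other than the one that created and populated it, hence the error):

    // Attempt to push the save onto a worker thread:
    Task task = Task.Run(() => context.SaveChanges());
    // -> throws an "instance not managed" error, presumably because the
    //    context was created and populated on the DispatcherTimer thread.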
If you compare, roughly, with the performance figures mentioned here: https://www.sqlite.org/speed.html, then I should be in the 8-second bracket. I never actually benchmarked the binary-file version, but with a similar dataset I never noticed any lag, so I'd have thought it was sub-second. The dataset mentioned above comes to about 18 MB of data.
Is there something I should be doing differently, or is this roughly on par with what to expect when writing this amount of data?