Hi, I have a complicated question regarding the dynamic sizing of JSON data blocks that are loaded into dataSources and inserted into a spreadsheet.
I was receiving large dataSources (blocks of data in JSON format) in our API responses, so I bumped up the global row and column sizes for the spreadsheet component:

$("#spreadsheet").kendoSpreadsheet({
    rows: 20000,
    columns: 300
});

Problem solved! However, I found that this change significantly degrades the performance of imports into the spreadsheet control.
A little research confirmed this problem with importing an Excel spreadsheet into the Kendo Spreadsheet component:
Memory Usage:
Initializing with a large dataset consumes a lot of memory. When you import an Excel file, the spreadsheet control needs additional memory to process and render the imported data. This can lead to memory exhaustion and slow performance.
Rendering Overhead:
The spreadsheet control has to render all the rows and columns, even if they are empty. This increases the rendering overhead, making the control less responsive when importing data.
Data Processing:
Importing an Excel file involves:
(1) parsing the file,
(2) converting it to the spreadsheet's internal format, and
(3) rendering it.
With a large number of pre-initialized rows and columns, this process becomes more complex and time-consuming.
Garbage Collection:
The large initial dataset can lead to frequent garbage collection cycles, which further slow down the import process. Garbage collection is triggered more often to manage the high memory usage, impacting overall performance. (This is where I noticed frequent garbage collection calls in the browser's DevTools Performance panel.) There is probably other stuff hampering the import as well, as mentioned above.
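For anyone who wants to reproduce the measurement, the import can be timed roughly like so. This is a minimal sketch, not my production code: fromFile returns a promise, and the #excelFile file input is a hypothetical element standing in for however you pick the file.

// Time the full import pipeline: parse, convert to internal format, render.
$("#excelFile").on("change", function (e) {
    var spreadsheet = $("#spreadsheet").data("kendoSpreadsheet");
    console.time("excel-import");
    spreadsheet.fromFile(e.target.files[0]).then(function () {
        console.timeEnd("excel-import"); // total parse + convert + render time
    });
});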
$("#spreadsheet").kendoSpreadsheet({
//No explicit settings for rows or columns. Default at 200 rows, and 50 columns.
});
Another workaround, suggested by Copilot, was to set the size at declaration and initialization of the spreadsheet component:
// Initialize with a smaller dataset
$("#spreadsheet").kendoSpreadsheet({
sheets: [{
name: "Sheet1",
rows: 100,
columns: 50
}]
});
But here is what happens when I do that: the grid does not render. The rows and columns never appear, and even with an additional resize the user is stuck with that little blue line. So no to that approach.

I have better results with this: SIZE INIT. KENDO SPREADSHEET | Kendo UI Dojo. But there is space below the rows for some reason. That's OK, but I know somebody won't like it. :)
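(For reference, the "additional resize" above was just a manual layout recalculation, sketched below; kendo.resize re-measures a widget after its container changes. It did not help here.)

// Ask the spreadsheet to re-measure itself when the window changes.
$(window).on("resize", function () {
    kendo.resize($("#spreadsheet"));
});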
My workaround was to use the default size of 200 rows and 50 columns, which significantly improves the loading time of the Excel spreadsheet. This also allows the Excel importer to preserve the size of the imported spreadsheet even if it exceeds the global rows and columns configuration options.
However, when I receive an API response, put it into a Kendo dataSource, and then insert that dataSource into a sheet via the sheet's configuration options:

spreadsheet.fromJSON({
    // Initialize the sheets property with the first report sheet.
    sheets: [{
        name: data.reportName,
        rows: 19000,
        columns: 65,
        dataSource: {
            data: data.MyJsonData,
            schema: { model: reportModel }
        }
    }]
});
The global rows and columns sizes override the dataSource size and cut the data off at the maximum of 200 rows and 50 columns. Hmmm.
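The only workaround I can think of is to destroy and re-create the widget with its global size computed from the incoming JSON. A sketch of what I mean, using data and reportModel from above (the Math.max floors and the +1 header row are my own guesses, not a documented recipe):

// Hypothetical: size the widget from the incoming JSON before loading it.
function loadReport(data) {
    var element = $("#spreadsheet");

    // Destroy any existing instance so rows/columns can be re-configured.
    var existing = element.data("kendoSpreadsheet");
    if (existing) {
        existing.destroy();
        element.empty();
    }

    element.kendoSpreadsheet({
        // Derive the grid size from the data, never below the defaults.
        rows: Math.max(data.MyJsonData.length + 1, 200),
        columns: Math.max(Object.keys(data.MyJsonData[0] || {}).length, 50),
        sheets: [{
            name: data.reportName,
            dataSource: {
                data: data.MyJsonData,
                schema: { model: reportModel }
            }
        }]
    });
}

But destroying and rebuilding the widget on every response feels heavy-handed, so: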
(Q) The question is: how do I handle the dynamic sizing of incoming JSON blocks correctly, so that:
(1) the initial empty grid renders properly (not a problem if you don't initialize an initial sheet with rows and columns), and
(2) the default global values for rows and columns are overridden when you insert the sheet coming from the API?
QUESTION: What is the recommended way of handling this? I have no control over whether the user's data exceeds the default rows and columns configuration options.