Hi,
I am executing SQL queries that return over 100,000 rows. The grid handles these large results easily; from what I can gather, the rows visible on screen are in memory and the rest are held in some kind of cache. I am saving the results that I assign to ItemsSource for later review by clients who do not have access to the database, so I can provide a result viewer where the client can drill down into very large data sets using the grid's excellent ad hoc filtering capabilities.

The major problem I am having is the amount of time it takes to serialize and deserialize the results, and I sometimes run out of memory. Since the grid's UI virtualization must have dealt with these issues in some form, I was hoping you could give me some ideas on how to handle the serialization problem.
The following shows the code that gets the data; clientData.Result is what needs to be serialized and deserialized. As you can see, it is generic code that handles any result set.
clientData.Result = new ObservableCollection<dynamic>();

while (dataReader.Read())
{
    // Build one dictionary per row, keyed by the unique column name.
    Dictionary<string, object> clientRow = new Dictionary<string, object>();
    for (int i = 0; i < dataReader.FieldCount; i++)
    {
        string myKey = clientData.ColumnInfo[i].UniqueColumnName;
        object myValue = null;

        // DBNull and SqlHierarchyId values are left as null.
        if (DBNull.Value != dataReader[i] &&
            clientData.ColumnInfo[i].ProviderSpecificDataType != "Microsoft.SqlServer.Types.SqlHierarchyId")
        {
            myValue = dataReader[i];
        }

        clientRow.Add(myKey, myValue);
    }
    clientData.Result.Add(new CloudburstUtilities.ClientRow(ToDictionary(clientRow)));
}
// Defensive copy of the row into a plain dictionary.
public static IDictionary<string, object> ToDictionary(IDictionary<string, object> row)
{
    var dict = new Dictionary<string, object>();
    foreach (KeyValuePair<string, object> column in row)
    {
        dict.Add(column.Key, column.Value);
    }
    return dict;
}
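For reference, here is a minimal sketch of the kind of approach I have been considering, assuming the rows can be streamed to disk as they come off the reader instead of serializing the whole ObservableCollection at once: the column names are written a single time, each value gets a one-byte type tag, and anything unrecognized falls back to its string form (which would also sidestep types like SqlHierarchyId). ResultStreamer, SaveResult and LoadResult are names I made up for illustration.

// A minimal sketch: stream rows straight from the reader to disk so that
// no more than one row is ever held for serialization. ResultStreamer,
// SaveResult and LoadResult are hypothetical names.
using System;
using System.Collections.Generic;
using System.Data;
using System.IO;

public static class ResultStreamer
{
    // Writes the column names once, then each row as tagged values.
    public static void SaveResult(string path, IList<string> columns, IDataReader reader)
    {
        using (var w = new BinaryWriter(File.Create(path)))
        {
            w.Write(columns.Count);
            foreach (string name in columns)
                w.Write(name);

            object[] values = new object[columns.Count];
            while (reader.Read())
            {
                w.Write(true); // marker: another row follows
                reader.GetValues(values);
                for (int i = 0; i < values.Length; i++)
                    WriteValue(w, values[i]);
            }
            w.Write(false); // marker: end of rows
        }
    }

    // Reads rows back lazily; each row is a dictionary like ClientRow expects.
    public static IEnumerable<IDictionary<string, object>> LoadResult(string path)
    {
        using (var r = new BinaryReader(File.OpenRead(path)))
        {
            int count = r.ReadInt32();
            var columns = new string[count];
            for (int i = 0; i < count; i++)
                columns[i] = r.ReadString();

            while (r.ReadBoolean())
            {
                var row = new Dictionary<string, object>(count);
                for (int i = 0; i < count; i++)
                    row[columns[i]] = ReadValue(r);
                yield return row;
            }
        }
    }

    // One-byte type tag per value; unrecognized types fall back to a string.
    static void WriteValue(BinaryWriter w, object value)
    {
        if (value == null || value is DBNull) { w.Write((byte)0); }
        else if (value is int)      { w.Write((byte)1); w.Write((int)value); }
        else if (value is long)     { w.Write((byte)2); w.Write((long)value); }
        else if (value is double)   { w.Write((byte)3); w.Write((double)value); }
        else if (value is decimal)  { w.Write((byte)4); w.Write((decimal)value); }
        else if (value is DateTime) { w.Write((byte)5); w.Write(((DateTime)value).ToBinary()); }
        else if (value is bool)     { w.Write((byte)6); w.Write((bool)value); }
        else                        { w.Write((byte)7); w.Write(Convert.ToString(value)); }
    }

    static object ReadValue(BinaryReader r)
    {
        switch (r.ReadByte())
        {
            case 0: return null;
            case 1: return r.ReadInt32();
            case 2: return r.ReadInt64();
            case 3: return r.ReadDouble();
            case 4: return r.ReadDecimal();
            case 5: return DateTime.FromBinary(r.ReadInt64());
            case 6: return r.ReadBoolean();
            default: return r.ReadString();
        }
    }
}

Since LoadResult is an iterator, the viewer could read rows back lazily rather than materializing all 100,000+ of them before the grid's virtualization takes over. Does something along these lines make sense, or is there a better way?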
Thanks
Rich