AI Service Setup
To enable AI-powered interaction in the Smart Grid AI Assistant tools, you need a backend service that processes natural language prompts and returns executable Grid commands. The Smart Extensions library for .NET simplifies this by automatically handling the request/response format and command generation.
This article shows you how to build your own .NET backend service. You will learn how to:
- Install the `Telerik.AI.SmartComponents.Extensions` package
- Configure your AI provider (Azure OpenAI, OpenAI, or local models)
- Create an API endpoint that uses the library
- Understand the commands the library generates
How It Works
The AI Assistant tools send user prompts to your backend service, which must return the response in a specific format that the Grid understands. The Telerik.AI.SmartComponents.Extensions package for .NET simplifies this process by handling the request/response formatting automatically.
The Smart Extensions library acts as a bridge between your AI model and the Grid. You provide the AI configuration (Azure OpenAI, OpenAI API, or local LLM credentials) and create an API endpoint, while the library handles all request/response formatting and command generation.
How the library processes requests:
- Receives structured requests from the Grid containing the user's prompt and Grid column information.
- Configures your AI model with Grid-specific function definitions using tool calling. These function definitions enable the AI to understand available Grid capabilities and generate appropriate command responses.
- Processes the AI response and extracts structured commands.
- Returns formatted commands that the Grid applies automatically.
For example, when a user types "Show products with price over 100", the library processes this prompt and returns a structured filter command with the appropriate field, operator, and value that the Grid can apply.
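For instance, the response for that prompt could contain a filter command similar to the following (the exact structure is documented in the Command Types section later in this article):
{
  "Commands": [
    {
      "Type": "GridFilterCommand",
      "Filter": {
        "Field": "Price",
        "Operator": "greaterthan",
        "Value": 100
      }
    }
  ]
}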
Prerequisites
Before you start, ensure you have:
- .NET 8.0 or later
- `Microsoft.Extensions.AI` package
- Azure OpenAI or OpenAI API access, or a local LLM
- ASP.NET Core (for web API scenarios)
Setup Steps
Follow these steps to set up the Smart Extensions library in your .NET application.
Install Required Packages
Install the Smart Extensions library and the Microsoft AI abstractions:
dotnet add package Telerik.AI.SmartComponents.Extensions
dotnet add package Microsoft.Extensions.AI
Install your AI provider package. For Azure OpenAI:
dotnet add package Azure.AI.OpenAI
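If you register the chat client through the AsIChatClient() extension (as in the example below), you may also need the Microsoft.Extensions.AI.OpenAI package, which adapts the OpenAI and Azure OpenAI chat clients to the IChatClient abstraction:
dotnet add package Microsoft.Extensions.AI.OpenAI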
Configure the AI Client
Add your AI provider credentials and configuration in the appsettings.json file. This example shows Azure OpenAI configuration:
{
"AI": {
"AzureOpenAI": {
"Endpoint": "https://your-openai-resource.openai.azure.com/",
"Key": "your-api-key-here",
"Chat": {
"ModelId": "gpt-4"
}
}
}
}
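Storing the key in appsettings.json is convenient for local experiments, but avoid committing real keys to source control. During development, you can keep the key in .NET user secrets instead, which the configuration system picks up automatically in the Development environment:
dotnet user-secrets init
dotnet user-secrets set "AI:AzureOpenAI:Key" "your-api-key-here"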
Register the AI chat client in your application by adding the following code to Program.cs:
using Azure;
using Azure.AI.OpenAI;
using Microsoft.Extensions.AI;

var builder = WebApplication.CreateBuilder(args);

// Read the Azure OpenAI settings from appsettings.json.
var aiConfig = builder.Configuration.GetSection("AI:AzureOpenAI");

// Register the Azure OpenAI client.
builder.Services.AddSingleton(new AzureOpenAIClient(
    new Uri(aiConfig["Endpoint"]!),
    new AzureKeyCredential(aiConfig["Key"]!)
));

// Register the chat client with the model deployment from configuration.
builder.Services.AddChatClient(services =>
    services.GetRequiredService<AzureOpenAIClient>()
        .GetChatClient(aiConfig["Chat:ModelId"]!).AsIChatClient()
);

builder.Services.AddControllers();

var app = builder.Build();

app.MapControllers();
app.Run();
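If you use the OpenAI API directly instead of Azure OpenAI, the registration is similar. The following is a minimal sketch that assumes the official OpenAI .NET SDK (the OpenAI package) together with Microsoft.Extensions.AI.OpenAI:
using Microsoft.Extensions.AI;
using OpenAI;

// Register a chat client backed by the OpenAI API instead of Azure OpenAI.
builder.Services.AddChatClient(services =>
    new OpenAIClient("YOUR_OPENAI_API_KEY")
        .GetChatClient("gpt-4o-mini")
        .AsIChatClient()
);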
Process Grid AI Requests
Create a controller that handles Grid AI requests. The Smart Extensions library provides two key methods:
- `AddGridChatTools()`: Configures the AI model with Grid-specific capabilities.
- `ExtractGridResponse()`: Extracts structured commands and messages from the AI response that the Grid can understand.
using Microsoft.AspNetCore.Mvc;
using Microsoft.Extensions.AI;
using Telerik.AI.SmartComponents.Extensions;
[ApiController]
[Route("[controller]/[action]")]
public class GridController : Controller
{
private readonly IChatClient _chatClient;
    public GridController(IChatClient chatClient)
{
_chatClient = chatClient;
}
[HttpPost]
[Route("/grid/smart-state")]
public async Task<IActionResult> SmartState([FromBody] GridAIRequest request)
{
// Create chat options
var options = new ChatOptions();
// Add Grid-specific chat tools for AI processing
options.AddGridChatTools(request.Columns);
// Convert request contents to chat messages
var conversationMessages = request.Contents
.Select(m => new ChatMessage(ChatRole.User, m.Text))
.ToList();
// Process the request and obtain the AI response
ChatResponse completion = await _chatClient.GetResponseAsync(conversationMessages, options);
// Extract structured response from the AI response
GridAIResponse response = completion.ExtractGridResponse();
return Json(response);
}
}
With this setup, the library automatically handles the following tasks:
- Interpreting the user's natural language prompts
- Generating appropriate Grid commands (filtering, sorting, etc.)
- Formatting the response correctly for the Grid
Configure the Frontend
Now that your backend is ready, configure your KendoReact Grid to use this API endpoint. See Smart Grid AI Assistant Tools Setup for frontend setup options.
Request and Response Format
The Smart Extensions library uses specific request and response structures for the communication between the Grid and your backend service. This section documents both formats so you can understand the communication flow and work with them directly when needed.
Request Structure
The Grid sends a GridAIRequest object to your endpoint when using automatic or controlled integration. For manual integration, you build and send requests programmatically using the same format.
public class GridAIRequest
{
public string Role { get; set; } // Message sender (typically "user")
public List<GridAIRequestContent> Contents { get; set; } // User's natural language prompt
public List<GridAIColumn> Columns { get; set; } // Grid column definitions
}
public class GridAIRequestContent
{
public string Type { get; set; } // Content type (typically "text")
public string Text { get; set; } // The natural language prompt
}
public class GridAIColumn
{
public string Id { get; set; } // Unique column identifier
public string Field { get; set; } // Field name from your data model
public string[]? Values { get; set; } // Optional predefined values for enum-like fields
}

Include the `Values` property for columns with predefined values (status, category, etc.) to help the AI generate more accurate filters. For more details, see Provide Column Values in the Best Practices section.
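For reference, a request for the prompt "Show products with price over 100" might look similar to the following (property casing can vary with your JSON serialization settings):
{
  "Role": "user",
  "Contents": [
    {
      "Type": "text",
      "Text": "Show products with price over 100"
    }
  ],
  "Columns": [
    {
      "Id": "price",
      "Field": "Price"
    },
    {
      "Id": "status",
      "Field": "Status",
      "Values": ["Active", "Pending", "Completed"]
    }
  ]
}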
Response Structure
When you process the AI response with the ExtractGridResponse() method, the library returns a GridAIResponse object with the following structure:
public class GridAIResponse
{
public List<ICommand> Commands { get; set; } // Grid operation commands
public string? Message { get; set; } // Optional status message
}

The `Commands` array contains command objects that tell the Grid which operations to apply. For a complete list of available command types, see the Command Types section below.
Command Types
The library generates specific command types based on the user's prompt. The following tables list all available commands grouped by operation category:
Data Operations

| Command Type | Parameters |
|---|---|
GridFilterCommand | Filter with field, operator, and value |
GridSortCommand | Sort with field and direction |
GridGroupCommand | Group with field and direction |
GridPageCommand | Page number |
GridPageSizeCommand | PageSize value |
Example:
{
"Commands": [
{
"Type": "GridFilterCommand",
"Filter": {
"Field": "Price",
"Operator": "greaterthan",
"Value": 100
}
}
]
}
Column Operations

| Command Type | Parameters |
|---|---|
GridColumnShowCommand | Column Id |
GridColumnHideCommand | Column Id |
GridColumnLockCommand | Column Id |
GridColumnUnlockCommand | Column Id |
GridColumnResizeCommand | Column Id and Width |
GridColumnReorderCommand | Column Id and Position |
Example:
{
"Commands": [
{
"Type": "GridColumnHideCommand",
"Id": "1"
}
]
}
Highlighting and Selection

| Command Type | Parameters |
|---|---|
GridHighlightCommand | Highlight with filters and cells |
GridSelectCommand | Select with filters and cells |
Example:
{
"Commands": [
{
"Type": "GridHighlightCommand",
"Highlight": {
"Filters": [
{
"Field": "Status",
"Operator": "equalto",
"Value": "Active"
}
]
}
}
]
}
Export

| Command Type | Parameters |
|---|---|
GridExportExcelCommand | FileName |
GridExportPDFCommand | FileName |
GridExportCSVCommand | FileName |
Example:
{
"Commands": [
{
"Type": "GridExportExcelCommand",
"FileName": "products.xlsx"
}
]
}
Clear Operations

| Command Type | Parameters |
|---|---|
GridClearFilterCommand | None |
GridClearSortCommand | None |
GridClearGroupCommand | None |
GridClearHighlightCommand | None |
GridClearSelectCommand | None |
Example:
{
"Commands": [
{
"Type": "GridClearFilterCommand"
}
]
}
Sample Prompts
The Smart Extensions library interprets natural language prompts and converts them into Grid operations. The following examples demonstrate the types of prompts you can use:
Data Operations
"Show products with price over 100"
"Sort by amount descending"
"Group by account type"
"Go to page 20"
"Clear filtering"
Column Operations
"Hide the Age column"
"Lock the Name column"
"Resize the Name column to 200px"
"Move the Department column to position 1"
Highlighting and Selection
"Highlight rows where status is Active"
"Select age cells where age is greater than 30"
"Clear selection"
Export
"Export to Excel with file name 'employee_data'"
"Export to PDF"
Best Practices
Follow these recommendations to optimize your Smart Extensions implementation and ensure reliable AI-powered Grid operations.
Provide Column Values
When Grid columns have predefined values (such as status or category fields), ensure the frontend includes them in the Values property of the column definitions before sending the request. This helps the AI generate more accurate filters by understanding the available options for each field.
Create a helper function that enriches column definitions with the available values for each field:
const addColumnsValues = (columns) => {
return columns.map((col) => {
if (col.field === 'Status') {
return { ...col, values: ['Active', 'Pending', 'Completed'] };
}
if (col.field === 'Region') {
return { ...col, values: ['North America', 'Europe', 'Asia Pacific'] };
}
return col;
});
};
Use the onPromptRequest event to enrich columns before the request is sent:
<GridToolbarAIAssistant
requestUrl="https://your-ai-service.com/api/grid"
onPromptRequest={(event) => {
event.columns = addColumnsValues(event.columns);
}}
/>
With this approach, the request sent to your AI service includes enriched column values:
{
"Columns": [
{
"Id": "status",
"Field": "Status",
"Values": ["Active", "Pending", "Completed"]
},
{
"Id": "region",
"Field": "Region",
"Values": ["North America", "Europe", "Asia Pacific"]
}
]
}
For more details on integrating with your AI service, see the AI Assistant Tools Setup article.
Error Handling
Implement proper error handling to manage AI service failures and provide meaningful feedback to users. Wrap your AI processing logic in try...catch blocks to handle exceptions gracefully:
try
{
var completion = await _chatClient.GetResponseAsync(conversationMessages, options);
var response = completion.ExtractGridResponse();
return Json(response);
}
catch (Exception ex)
{
return StatusCode(500, $"AI processing failed: {ex.Message}");
}
Input Validation
Validate incoming requests before processing them to ensure all required data is present. This prevents unnecessary AI service calls and provides clear error messages to the client:
if (request?.Columns == null || !request.Columns.Any())
{
return BadRequest("Columns are required");
}
if (request.Contents == null || !request.Contents.Any())
{
return BadRequest("Content is required");
}
Testing
The library includes comprehensive test coverage. You can run tests to verify functionality:
cd tests
dotnet test
For integration testing with your specific data model, create test cases that verify that the AI responses match the expected Grid operations.
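The following is a minimal sketch of such a test using xUnit and HttpClient. It assumes the backend service from this article is running locally at a hypothetical address and that the configured model is reachable, and it asserts loosely on the raw JSON to avoid depending on serialization details:
using System;
using System.Net.Http;
using System.Net.Http.Json;
using System.Threading.Tasks;
using Xunit;

public class GridAIEndpointTests
{
    // Hypothetical base address; point this at your running backend service.
    private static readonly HttpClient Client = new() { BaseAddress = new Uri("https://localhost:5001") };

    [Fact]
    public async Task FilterPrompt_ReturnsFilterCommand()
    {
        // Build a request in the GridAIRequest shape documented above.
        var request = new
        {
            Role = "user",
            Contents = new[] { new { Type = "text", Text = "Show products with price over 100" } },
            Columns = new[] { new { Id = "price", Field = "Price" } }
        };

        // Call the endpoint created earlier in this article.
        HttpResponseMessage response = await Client.PostAsJsonAsync("/grid/smart-state", request);
        response.EnsureSuccessStatusCode();

        // The response should contain a filter command for the prompt above.
        string json = await response.Content.ReadAsStringAsync();
        Assert.True(json.Contains("GridFilterCommand", StringComparison.OrdinalIgnoreCase));
    }
}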
Troubleshooting
Connection errors
- Verify your AI service endpoint URL is correct.
- Check that your API key is valid and not expired.
- Ensure your application can reach the AI service.
Model not found
- Confirm the model ID is deployed in your Azure OpenAI resource.
- Check that the model name matches exactly.
Token limit exceeded
- Reduce the number of columns sent in requests.
- Limit the size of the `Values` arrays for columns.
- Consider using a model with higher token limits.