
Streaming AI Responses with InlineAIPrompt

The Kendo UI for Angular InlineAIPrompt component supports streaming responses, allowing users to see AI-generated content as it is being produced. This feature enhances the user experience by providing immediate feedback and a more interactive interface.

Streaming is particularly useful when:

  • Working with long-form AI responses that take several seconds to generate.
  • Creating inline editing interfaces where users expect real-time feedback.
  • Integrating with AI services that support chunked responses.
  • Enhancing user engagement in contextual AI assistance scenarios.

Configuration

To enable streaming in the InlineAIPrompt component, follow these steps:

  1. Use the streaming property of the InlineAIPrompt and bind it to a boolean variable. This property controls whether the component displays the Stop Generation button and indicates that a response is being streamed.

    HTML
    <kendo-inlineaiprompt [streaming]="isStreaming"></kendo-inlineaiprompt>
  2. Bind the promptOutput property to an InlineAIPromptOutput object. The promptOutput property contains the AI model's generated response, which the output card displays.

    HTML
    <kendo-inlineaiprompt [promptOutput]="promptOutput"></kendo-inlineaiprompt>
  3. Handle the promptRequest event to start streaming. When the user sends a prompt to the service, the promptRequest event is fired. Start streaming and update the output as new data arrives.

    HTML
    <kendo-inlineaiprompt (promptRequest)="onPromptRequest($event)"></kendo-inlineaiprompt>
  4. Handle the promptRequestCancel event to stop streaming. This event fires when the user clicks the Stop Generation button; use it to interrupt the AI model request.

    HTML
    <kendo-inlineaiprompt (promptRequestCancel)="onPromptRequestCancel()"></kendo-inlineaiprompt>
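The four bindings above can be wired together in a single component. The following is a minimal sketch: the `InlineAIPromptOutput` shape, the event payload type, and the `appendChunk` helper are illustrative assumptions (check the package typings for the actual interfaces), and the `@Component` decorator and template are omitted for brevity.

```typescript
// Shape assumed for illustration; the actual InlineAIPromptOutput
// interface ships with the Kendo UI for Angular package.
interface InlineAIPromptOutput {
  id: string;
  prompt: string;
  output: string;
}

// Component class only; the @Component decorator and the template
// bindings shown in the snippets above are omitted for brevity.
class InlineAIPromptStreamingComponent {
  isStreaming = false;
  promptOutput: InlineAIPromptOutput = { id: '1', prompt: '', output: '' };

  // promptRequest handler: reset the output and start streaming.
  onPromptRequest(ev: { prompt: string }): void {
    this.isStreaming = true;
    this.promptOutput = { id: '1', prompt: ev.prompt, output: '' };
  }

  // Hypothetical helper, called for each chunk received from the AI service.
  appendChunk(chunk: string): void {
    if (!this.isStreaming) {
      return;
    }
    // Reassign the object so the component picks up the change.
    this.promptOutput = {
      ...this.promptOutput,
      output: this.promptOutput.output + chunk
    };
  }

  // promptRequestCancel handler: stop accepting further chunks.
  onPromptRequestCancel(): void {
    this.isStreaming = false;
  }
}
```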

When implementing actual AI model streaming logic:

  • Replace the simulated timeout in the promptRequest event handler with your actual AI model streaming logic.
  • As new data arrives from the AI model, update the promptOutput object to update the response in real time.
  • If the user clicks the Stop Generation button, use the promptRequestCancel event to interrupt the AI model request.
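As one way to replace the simulated timeout, the following sketch reads a chunked HTTP response body and forwards each decoded chunk to a callback that updates the output. The `/api/generate` endpoint and the `appendOutput` callback name are assumptions, not part of the InlineAIPrompt API.

```typescript
// Minimal sketch of consuming a streamed AI response: decode each
// chunk as it arrives and hand it to a callback, which would update
// the promptOutput object in the component.
async function consumeStream(
  body: ReadableStream<Uint8Array>,
  onChunk: (text: string) => void
): Promise<void> {
  const reader = body.getReader();
  const decoder = new TextDecoder();
  for (;;) {
    const { done, value } = await reader.read();
    if (done || !value) {
      break;
    }
    onChunk(decoder.decode(value, { stream: true }));
  }
}

// Usage with fetch and an AbortController; '/api/generate' is a
// hypothetical endpoint standing in for your AI service:
//
//   const controller = new AbortController();
//   const res = await fetch('/api/generate', {
//     method: 'POST',
//     body: JSON.stringify({ prompt }),
//     signal: controller.signal
//   });
//   await consumeStream(res.body!, chunk => this.appendOutput(chunk));
//
// In the promptRequestCancel handler, call controller.abort() to
// interrupt the request mid-stream.
```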