
Streaming AI Responses with AIPrompt

The Kendo UI for Angular AIPrompt component supports streaming responses, allowing users to see AI-generated content as it is being produced. This feature enhances the user experience by providing immediate feedback and a more interactive interface.

Streaming is particularly useful when:

  • Working with long-form AI responses that take several seconds to generate.
  • Creating chat-like interfaces where users expect real-time feedback.
  • Integrating with AI services that support chunked responses.
  • Enhancing user engagement in conversational applications.

Configuration

To enable streaming in the AIPrompt component, follow these steps:

  1. Use the streaming property of the AIPrompt and bind it to a boolean variable. This property controls whether the Output View displays the Stop Generation button and indicates that a response is being streamed.

    HTML
    <kendo-aiprompt [streaming]="isStreaming">
    </kendo-aiprompt>
  2. Bind the promptOutputs property to an array. The promptOutputs array holds the responses generated by the AI model, which are displayed in the Output View.

    HTML
    <kendo-aiprompt [promptOutputs]="promptOutputs">
    </kendo-aiprompt>
  3. Handle the promptRequest event to start streaming. When the user clicks the Generate button, the promptRequest event is fired. Start streaming and update the output as new data arrives.

    HTML
    <kendo-aiprompt (promptRequest)="onPromptRequest($event)">
    </kendo-aiprompt>
  4. Handle the promptRequestCancel event to stop streaming. This event is fired when the user clicks the Stop Generation button, allowing you to interrupt the AI model request if the underlying service supports cancellation.

    HTML
    <kendo-aiprompt (promptRequestCancel)="onPromptRequestCancel()">
    </kendo-aiprompt>
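The bindings from the steps above can be sketched together as a component class. The following is a minimal sketch, not the Kendo UI source: the PromptOutput shape is simplified for illustration, and a hard-coded chunk array with timers stands in for a real AI stream, mirroring the simulated-timeout approach used in the demo.

```typescript
// Simplified output shape for illustration; the actual AIPrompt output
// model has additional fields (title, subtitle, and so on).
interface PromptOutput {
  id: string;
  prompt: string;
  output: string;
}

class AIPromptStreamingComponent {
  isStreaming = false;
  promptOutputs: PromptOutput[] = [];
  private cancelled = false;

  // Fired when the user clicks the Generate button.
  async onPromptRequest(prompt: string): Promise<void> {
    this.isStreaming = true;
    this.cancelled = false;
    const current: PromptOutput = { id: String(Date.now()), prompt, output: '' };
    this.promptOutputs = [current, ...this.promptOutputs];

    // Simulated chunked response; replace with real AI streaming logic.
    const chunks = ['Streaming ', 'response ', 'text.'];
    for (const chunk of chunks) {
      if (this.cancelled) {
        break;
      }
      await new Promise((resolve) => setTimeout(resolve, 10));
      current.output += chunk;
      // Reassign the array so Angular change detection picks up the update.
      this.promptOutputs = [...this.promptOutputs];
    }
    this.isStreaming = false;
  }

  // Fired when the user clicks the Stop Generation button.
  onPromptRequestCancel(): void {
    this.cancelled = true;
    this.isStreaming = false;
  }
}
```

Reassigning promptOutputs on each chunk, rather than mutating it in place, ensures the Output View re-renders as the text grows.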

When implementing real AI model streaming logic:

  • Replace the simulated timeout in the promptRequest event handler with your actual AI model streaming logic.
  • As new data arrives from the AI model, push it to the promptOutputs array to update the response in the Output View in real time.
  • If the user clicks the Stop Generation button, use the promptRequestCancel event to interrupt the AI model request.
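When wiring in a real service, a common approach is to read the HTTP response body as a stream and append each decoded chunk to the current output. The following is a hedged sketch of the reading loop only; the plain-text chunk encoding is an assumption, and your service may instead emit server-sent events or JSON lines that need extra parsing.

```typescript
// Reads a streaming response body chunk by chunk, invoking onChunk for
// each decoded piece of text, and returns the accumulated full text.
async function consumeStream(
  body: ReadableStream<Uint8Array>,
  onChunk: (text: string) => void
): Promise<string> {
  const reader = body.getReader();
  const decoder = new TextDecoder();
  let full = '';
  for (;;) {
    const { done, value } = await reader.read();
    if (done) {
      break;
    }
    // stream: true keeps multi-byte characters split across chunks intact.
    const text = decoder.decode(value, { stream: true });
    full += text;
    onChunk(text);
  }
  return full;
}
```

In the promptRequest handler you would pass `response.body` from a fetch call into this loop, appending each chunk to the current promptOutputs entry; creating the fetch with an AbortController and calling `controller.abort()` from the promptRequestCancel handler cancels the request mid-stream.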