# Streaming AI Responses with AIPrompt
The Kendo UI for Angular AIPrompt component supports streaming responses, allowing users to see AI-generated content as it is being produced. This feature enhances the user experience by providing immediate feedback and a more interactive interface.
Streaming is particularly useful when:
- Working with long-form AI responses that take several seconds to generate.
- Creating chat-like interfaces where users expect real-time feedback.
- Integrating with AI services that support chunked responses.
- Enhancing user engagement in conversational applications.
## Configuration
To enable streaming in the AIPrompt component, follow these steps:
1. Use the `streaming` property of the AIPrompt and bind it to a boolean variable. This property controls whether the Output View displays the **Stop Generation** button and indicates that a response is being streamed.

   ```html
   <kendo-aiprompt [streaming]="isStreaming"></kendo-aiprompt>
   ```

2. Bind the `promptOutputs` property to an array. The `promptOutputs` array holds the responses generated by the AI model, which are displayed in the Output View.

   ```html
   <kendo-aiprompt [promptOutputs]="promptOutputs"></kendo-aiprompt>
   ```

3. Handle the `promptRequest` event to start streaming. When the user clicks the **Generate** button, the `promptRequest` event fires. Start streaming and update the output as new data arrives.

   ```html
   <kendo-aiprompt (promptRequest)="onPromptRequest($event)"></kendo-aiprompt>
   ```

4. Handle the `promptRequestCancel` event to stop streaming. This event fires when the user clicks the **Stop Generation** button. Handle it to interrupt the AI model request, if the service supports cancellation.

   ```html
   <kendo-aiprompt (promptRequestCancel)="onPromptRequestCancel()"></kendo-aiprompt>
   ```
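The steps above can be wired together in a single class. The following is a minimal sketch of the state such a component manages, using a `setInterval`-driven simulated stream; the `@Component` decorator, template, and Kendo imports are omitted so the streaming logic stands on its own, and the `PromptOutput` shape and names such as `fakeChunks` are illustrative assumptions rather than the library's exact API.

```typescript
// Sketch of the state behind a template like:
// <kendo-aiprompt [streaming]="isStreaming"
//                 [promptOutputs]="promptOutputs"
//                 (promptRequest)="onPromptRequest($event)"
//                 (promptRequestCancel)="onPromptRequestCancel()">
// </kendo-aiprompt>

interface PromptOutput {
  id: number;
  prompt: string;
  output: string;
}

class AIPromptStreamingDemo {
  isStreaming = false;
  promptOutputs: PromptOutput[] = [];

  private timer: ReturnType<typeof setInterval> | undefined;
  private nextId = 1;

  onPromptRequest(ev: { prompt: string }): void {
    // Create the output entry up front; chunks are appended to it as they arrive.
    const entry: PromptOutput = { id: this.nextId++, prompt: ev.prompt, output: '' };
    this.promptOutputs = [entry, ...this.promptOutputs];
    this.isStreaming = true;

    // Simulated stream: replace this timer with your real AI service call.
    const fakeChunks = ['Streaming ', 'lets users ', 'read partial ', 'responses.'];
    let i = 0;
    this.timer = setInterval(() => {
      if (i >= fakeChunks.length) {
        this.onPromptRequestCancel(); // stream finished
        return;
      }
      this.appendChunk(fakeChunks[i++]);
    }, 100);
  }

  // Merge a newly received chunk into the latest output, producing a new
  // array reference so the Output View re-renders with the partial response.
  appendChunk(chunk: string): void {
    const [latest, ...rest] = this.promptOutputs;
    if (!latest) {
      return;
    }
    this.promptOutputs = [{ ...latest, output: latest.output + chunk }, ...rest];
  }

  onPromptRequestCancel(): void {
    // Fired by the Stop Generation button; also reused here to end the
    // simulated stream when it completes.
    clearInterval(this.timer);
    this.isStreaming = false;
  }
}
```

In a real application this class would carry the `@Component` decorator and the template bindings shown in the steps above.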
When implementing real AI model streaming logic:

- Replace the simulated timeout in the `promptRequest` event handler with your actual AI model streaming logic.
- As new data arrives from the AI model, push it to the `promptOutputs` array to update the response in the Output View in real time.
- If the user clicks the **Stop Generation** button, use the `promptRequestCancel` event to interrupt the AI model request.
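One way to realize these points is with the Fetch API's streaming response body and an `AbortController` for cancellation, as sketched below. The endpoint `/api/ai/stream` and the plain-text chunk format are assumptions; adapt them to whatever your AI service exposes.

```typescript
// Stream a completion from a hypothetical chunked HTTP endpoint,
// invoking `onChunk` for each piece of text as it arrives.
async function streamCompletion(
  prompt: string,
  onChunk: (text: string) => void,
  signal: AbortSignal
): Promise<void> {
  const res = await fetch('/api/ai/stream', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ prompt }),
    signal, // aborting this signal cancels the request mid-stream
  });
  if (!res.ok || !res.body) {
    throw new Error(`Request failed: ${res.status}`);
  }

  const reader = res.body.getReader();
  const decoder = new TextDecoder();
  for (;;) {
    const { done, value } = await reader.read();
    if (done) {
      break;
    }
    // Each decoded chunk can be pushed into `promptOutputs` as it arrives.
    onChunk(decoder.decode(value, { stream: true }));
  }
}
```

In the component, `onPromptRequest` would create an `AbortController`, set `isStreaming` to `true`, and call `streamCompletion(ev.prompt, chunk => { /* append chunk to promptOutputs */ }, controller.signal)`; `onPromptRequestCancel` would call `controller.abort()` and reset the flag.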