Streaming AI Responses with AIPrompt
The Kendo UI for Angular AIPrompt component supports streaming responses, allowing users to see AI-generated content as it is being produced. This feature enhances the user experience by providing immediate feedback and a more interactive interface.
Streaming is particularly useful when:
- Working with long-form AI responses that take several seconds to generate.
- Creating chat-like interfaces where users expect real-time feedback.
- Integrating with AI services that support chunked responses.
- Enhancing user engagement in conversational applications.
Configuration
To enable streaming in the AIPrompt component, follow these steps:
1. Use the `streaming` property of the AIPrompt and bind it to a boolean variable. This property controls whether the Output View displays the Stop Generation button and indicates that a response is being streamed.

   ```html
   <kendo-aiprompt [streaming]="isStreaming"></kendo-aiprompt>
   ```
2. Bind the `promptOutputs` property to an array. The `promptOutputs` array holds the responses generated by the AI model, which are displayed in the Output View.

   ```html
   <kendo-aiprompt [promptOutputs]="promptOutputs"></kendo-aiprompt>
   ```
3. Handle the `promptRequest` event to start streaming. When the user clicks the Generate button, the `promptRequest` event is fired. Start streaming and update the output as new data arrives.

   ```html
   <kendo-aiprompt (promptRequest)="onPromptRequest($event)"></kendo-aiprompt>
   ```
4. Handle the `promptRequestCancel` event to stop streaming. This event is fired when the user clicks the Stop Generation button; use it to interrupt the AI model request if the service supports cancellation.

   ```html
   <kendo-aiprompt (promptRequestCancel)="onPromptRequestCancel()"></kendo-aiprompt>
   ```
When implementing real AI model streaming logic:
- Replace the simulated timeout in the `promptRequest` event handler with your actual AI model streaming logic.
- As new data arrives from the AI model, push it to the `promptOutputs` array to update the response in the Output View in real time.
- If the user clicks the Stop Generation button, use the `promptRequestCancel` event to interrupt the AI model request.
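One way to wire these notes up is a small client around the Fetch API's streaming response body, with an `AbortController` driving cancellation. This is a hedged sketch: the `/api/generate` endpoint and its plain-text chunk format are hypothetical, and your AI service's protocol (for example, server-sent events) may require different parsing.

```typescript
// Hypothetical streaming client. stream() would be called from the
// promptRequest handler, with onChunk appending text to the current
// PromptOutput entry; cancel() would be called from the
// promptRequestCancel handler.
class StreamingClient {
  private controller?: AbortController;

  async stream(prompt: string, onChunk: (text: string) => void): Promise<void> {
    this.controller = new AbortController();
    // The endpoint URL and request body shape are assumptions.
    const response = await fetch('/api/generate', {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify({ prompt }),
      signal: this.controller.signal,
    });
    const reader = response.body!.getReader();
    const decoder = new TextDecoder();
    for (;;) {
      const { done, value } = await reader.read();
      if (done) break;
      // Decode each binary chunk and hand it to the caller as text.
      onChunk(decoder.decode(value, { stream: true }));
    }
  }

  // Aborts the in-flight request, interrupting the stream.
  cancel(): void {
    this.controller?.abort();
  }
}
```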