OpenAI Integration

The KendoReact AIPrompt can easily be integrated with a chat API service such as OpenAI's ChatGPT. This is possible thanks to the onPromptRequest event, which receives the input text (prompt) and lets you set the resulting output.
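
Throughout this article, the generate method that calls the chat service is wired to the component through the onPromptRequest prop. The following minimal sketch shows the assumed wiring; the component and prop names (AIPromptView, AIPromptOutputView, activeView, onActiveViewChange) follow the AIPrompt API of the @progress/kendo-react-conversational-ui package, so verify them against the API reference for your version.

import * as React from "react";
import {
  AIPrompt,
  AIPromptView,
  AIPromptOutputView,
} from "@progress/kendo-react-conversational-ui";

const App = () => {
  // Outputs shown in the output view and the currently active view.
  const [outputs, setOutputs] = React.useState([]);
  const [activeView, setActiveView] = React.useState("prompt"); // initial view name, adjust to your setup

  // Implemented in the Perform the call section below.
  const handleOnRequest = async (prompt, output) => {};

  return (
    <AIPrompt
      activeView={activeView}
      onActiveViewChange={(e) => setActiveView(e.view)} // event field name assumed
      onPromptRequest={handleOnRequest}
    >
      <AIPromptView />
      <AIPromptOutputView outputs={outputs} />
    </AIPrompt>
  );
};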

Using with an OpenAI key

If you have an OpenAI subscription, you can directly use it with the AIPrompt component.

  1. Install the openai package from npm.
npm install openai
  2. Import the OpenAI class and create a client instance. When the client runs in the browser, you must set the dangerouslyAllowBrowser option to true; otherwise the SDK throws an error designed to prevent you from accidentally exposing your API key in client-side code. If you prefer reading the key from an environment variable, see the sketch after these steps.
import OpenAI from "openai";

// Replace with your own key. Avoid committing real keys to source control.
const API_KEY = "my-key";

const openai = new OpenAI({
  apiKey: API_KEY,
  dangerouslyAllowBrowser: true,
});
  3. Add a method that performs the API call. Make sure to request a model that your API key gives you access to.
async function generate(prompt) {
  const completion = await openai.chat.completions.create({
    messages: [{ role: "user", content: prompt }],
    model: "gpt-3.5-turbo",
  });

  // The Chat Completions API returns the generated text in choices[n].message.content.
  return completion.choices[0]?.message?.content;
}
  4. Call this method inside the onPromptRequest event handler. Refer to the Perform the call section below, which demonstrates how to use the generate method with the onPromptRequest event.
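
If you prefer not to hard-code the key (step 2), you can read it from an environment variable that your bundler injects at build time. The sketch below assumes a Vite project and a hypothetical VITE_OPENAI_API_KEY variable; Create React App projects use the process.env.REACT_APP_* convention instead. Note that a key bundled this way is still shipped to the browser, so dangerouslyAllowBrowser remains required and a backend proxy is the safer choice for production.

// .env (keep out of source control)
// VITE_OPENAI_API_KEY=my-key

import OpenAI from "openai";

const openai = new OpenAI({
  // Vite exposes variables prefixed with VITE_ through import.meta.env.
  apiKey: import.meta.env.VITE_OPENAI_API_KEY,
  dangerouslyAllowBrowser: true,
});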

Using with Azure OpenAI

Another common option is accessing OpenAI through an Azure deployment. Follow the steps below to set up the component with Azure OpenAI.

  1. Install the required dependencies. The @azure/identity package is needed only if you authenticate with Microsoft Entra ID instead of an API key (see the sketch after these steps).
npm install openai @azure/identity
  2. Import the AzureOpenAI class and set the required configuration values. Make sure to use the correct API version and deployment (model) name, or the calls will fail.
import { AzureOpenAI } from "openai";

const endpoint = "your-endpoint"; // deployment URL
const apiKey = "your-key";
const apiVersion = "2024-04-01-preview";
const deployment = "open-ai-gpt-35-turbo"; // model name
  3. Add a method that performs the API call. Make sure to use the deployment name that your Azure OpenAI resource exposes. As with the plain OpenAI setup, dangerouslyAllowBrowser must be set to true when the client runs in the browser.
async function generate(prompt) {
  const client = new AzureOpenAI({
    endpoint,
    apiKey,
    apiVersion,
    deployment,
    dangerouslyAllowBrowser: true,
  });

  const result = await client.completions.create({
    prompt,
    model: deployment,
    max_tokens: 128,
  });

  // The Completions API returns the generated text in choices[n].text.
  return result.choices[0]?.text;
}
  4. Call this method in the onPromptRequest event handler. Refer to the following section for an example.
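
The @azure/identity package installed in step 1 is needed only when you authenticate with Microsoft Entra ID instead of an API key. Below is a hedged sketch of that keyless setup, assuming a recent @azure/identity version that exports getBearerTokenProvider and an identity that has access to the Azure OpenAI resource; DefaultAzureCredential is intended for server-side code rather than the browser.

import { AzureOpenAI } from "openai";
import { DefaultAzureCredential, getBearerTokenProvider } from "@azure/identity";

// Exchange the signed-in Azure identity for tokens scoped to Azure OpenAI.
const credential = new DefaultAzureCredential();
const scope = "https://cognitiveservices.azure.com/.default";
const azureADTokenProvider = getBearerTokenProvider(credential, scope);

const client = new AzureOpenAI({
  endpoint: "your-endpoint",
  apiVersion: "2024-04-01-preview",
  deployment: "open-ai-gpt-35-turbo",
  azureADTokenProvider, // replaces the apiKey option
});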

Perform the call

In both cases, call the generate method inside the onPromptRequest event handler. The handler must be declared as an async function because the API request is asynchronous. After the request completes, the result is added to the outputs and displayed in the output view.

const handleOnRequest = async (prompt, output) => {
  // Ignore empty prompts before making an API call.
  if (!prompt) {
    return;
  }
  const gptResult = await generate(prompt);
  // In this example, retries, ratings, and new prompts are all handled the
  // same way: the generated response is prepended to the outputs list.
  if (output?.isRetry) {
    setOutputs([
      {
        id: outputs.length + 1,
        title: prompt,
        responseContent: gptResult || unknownRequest,
        prompt,
      },
      ...outputs,
    ]);
  } else if (output?.ratingType) {
    setOutputs([
      {
        id: outputs.length + 1,
        title: prompt,
        responseContent: gptResult || unknownRequest,
        prompt,
      },
      ...outputs,
    ]);
  } else {
    setOutputs([
      {
        id: outputs.length + 1,
        title: prompt,
        responseContent: gptResult || unknownRequest,
        prompt,
      },
      ...outputs,
    ]);
  }
  // Switch to the output view so the new response is visible.
  setActiveView(outputViewDefaults.name);
};
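
If the API call fails (network error, invalid key, exhausted quota), the promise returned by generate rejects and the handler stops before updating the outputs. One hedged refinement is to wrap the call in a small helper (safeGenerate is a hypothetical name) and fall back to the same unknownRequest message used above:

const safeGenerate = async (prompt) => {
  try {
    return await generate(prompt);
  } catch (error) {
    // Log the failure; the handler then falls back to unknownRequest.
    console.error("AI request failed:", error);
    return undefined;
  }
};

Inside handleOnRequest, call const gptResult = await safeGenerate(prompt); instead of calling generate directly.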