Telerik blogs

Learn how to integrate the OpenAI GPT model into an Angular application and return a streaming response from the OpenAI GPT-3.5-Turbo model for a given prompt.

In this article, we’ll walk through the step-by-step process of integrating the OpenAI GPT model into an Angular application. To get started, create a new Angular project and follow each step as you go along. Please note that you’ll need a valid OpenAI API key to proceed.

Adding Environment Files

Newer Angular projects do not include environment files by default. So, let’s create them by running the following CLI command:

ng generate environments

We are creating environment files to store the OpenAI API key and base URL. In the generated environment file, add the properties below.

export const environment = {
  production: false,
  openaiApiKey: 'YOUR_DEV_API_KEY',
  openaiApiUrl: 'https://api.openai.com/v1',
};
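For completeness, here is a minimal sketch of the production counterpart (environment.ts). The values are placeholders; in a real deployment, the key should be injected at build or deploy time rather than committed to source control.

```typescript
// Assumed production counterpart (environment.ts); placeholder values only.
// Never commit a real API key -- inject it at build/deploy time instead.
export const environment = {
  production: true,
  openaiApiKey: 'YOUR_PROD_API_KEY',
  openaiApiUrl: 'https://api.openai.com/v1',
};
```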

You can find the OpenAI API Key here: https://platform.openai.com/settings/organization/api-keys

Adding Service

We will connect to the OpenAI model from an Angular service. So, let’s create one by running the following CLI command:

ng g s open-ai

In the service, we begin by injecting the HttpClient to handle API requests, and we retrieve the OpenAI API URL and key from the environment configuration file.

private http = inject(HttpClient);
private apiKey = environment.openaiApiKey;
private apiUrl = environment.openaiApiUrl;

Make sure that the app.config.ts file includes the provideHttpClient() function within the providers array.

export const appConfig: ApplicationConfig = {
  providers: [
    provideHttpClient(),
    provideBrowserGlobalErrorListeners(),
    provideZonelessChangeDetection(),
    provideRouter(routes)
  ]
};

Next, we’ll define a signal to store the prompt text and create a function to set its value.

private promptSignal = signal<string>('');

setPrompt(prompt: string) {
    this.promptSignal.set(prompt);
  }

Next, let’s use the httpResource API, a new feature in Angular 20, to make the call to the OpenAI API endpoint. In the Authorization header, we pass the API key, and we choose gpt-3.5-turbo as the model.

  responseResource = httpResource<any>(() => ({
    url: this.apiUrl + '/chat/completions',
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
      'Authorization': `Bearer ${this.apiKey}`
    },
    body: {
      model: 'gpt-3.5-turbo',
      messages: [{ role: 'user', content: this.promptSignal() }]
    }
  }));
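If you prefer stronger typing than httpResource&lt;any&gt;, the request and response can be described with minimal interfaces. These types are an illustrative sketch covering only the fields this article uses, not the full OpenAI schema:

```typescript
// Minimal, illustrative types for the chat completions call.
// Only the fields used in this article are modeled; the real API has more.
interface ChatMessage {
  role: 'system' | 'user' | 'assistant';
  content: string;
}

interface ChatCompletionRequest {
  model: string;
  messages: ChatMessage[];
  stream?: boolean; // used later for streaming responses
}

interface ChatCompletionResponse {
  choices: { message: ChatMessage }[];
}

// Example request body matching the httpResource call above.
const body: ChatCompletionRequest = {
  model: 'gpt-3.5-turbo',
  messages: [{ role: 'user', content: 'What is Angular?' }],
};
```

With these in place, the resource could be declared as httpResource&lt;ChatCompletionResponse&gt;(...), making the template access to choices[0].message.content type-checked.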

Learn more about the httpResource API.

Adding Component

We will use the service in the component and create the UI. Run the following CLI command:

ng g c openaichat

In the component, we start by injecting the service and defining a signal to capture user input.

  prompt = signal("What is Angular?");
  openaiservice = inject(OpenAi);

Next, define a function that passes the prompt to the service. Because responseResource reads the prompt signal, updating it automatically triggers a new request.

getResponse() {
    this.openaiservice.setPrompt(this.prompt());
  }

Add an input field in the component template to receive user input.

<label for="prompt">Enter your question:</label>
    <input 
      id="prompt" 
      type="text" 
      [value]="prompt()" 
      (input)="prompt.set($any($event.target).value)"
      placeholder="Ask me anything..."
    />

In the above code:

  • The (input) event binding is used to listen for real-time changes to the input element’s value.
  • When the event fires, the set() method is called to update the prompt signal.
  • $any($event.target) casts the event target to any, bypassing TypeScript’s strict type checking.
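If you’d rather avoid the $any() escape hatch, a small typed helper achieves the same thing. The sketch below models only the event shape we need so it compiles without the DOM typings; in a real Angular component you would take an Event parameter and cast its target to HTMLInputElement. The helper name is illustrative, not part of the article’s component:

```typescript
// A typed alternative to $any(): model just the part of the event we read.
// In the template this would be used as (input)="prompt.set(readInputValue($event))".
type InputEventLike = { target: { value: string } };

function readInputValue(event: InputEventLike): string {
  // No cast needed: the parameter type already narrows the target shape.
  return event.target.value;
}
```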

Next, add a button that triggers the getResponse() function to fetch the response for the prompt from OpenAI. This function was implemented in the previous section.

    <button (click)="getResponse()">
      Get Regular Response
    </button>

Next, display the response inside a <p> element as shown below.

@if (openaiservice.responseResource.value()?.choices?.[0]?.message?.content) {
        <p>{{ openaiservice.responseResource.value().choices[0].message.content }}</p>
      } @else {
        <p class="placeholder">No regular response yet...</p>
      }

So far, we have completed all the steps. When you run the application, you should receive a response from the OpenAI GPT-3.5-Turbo model for the submitted prompt.

Working with Streaming Response

The OpenAI models give two types of responses:

  1. Regular response
  2. Streaming response

In the above implementation, we handled a regular response, where the user waits for the entire response to be returned at once. This approach can feel unresponsive or less engaging for some users. An alternative is a streaming response, where OpenAI streams data as the model generates each token.
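With stream: true, the API returns the response as Server-Sent Events: each chunk contains one or more data: lines carrying a small JSON delta, and the stream ends with a [DONE] sentinel. The payloads below are abbreviated for illustration:

```
data: {"choices":[{"delta":{"content":"Ang"}}]}

data: {"choices":[{"delta":{"content":"ular is"}}]}

data: [DONE]
```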

In this section, we will explore how to work with streaming responses. To achieve this, create a new service (we will call it StreamingChatService) and read the API key and URL from the environment file.

  private apiKey = environment.openaiApiKey;
  private apiUrl = environment.openaiApiUrl;

Next, define a signal to hold the streaming response and create a corresponding getter function to expose it as read-only. This getter will be used within the component template to display the response.

  private streamingResponseSignal = signal<string>('');

  get streamingResponse() {
    return this.streamingResponseSignal.asReadonly();
  }

Next, create a function to send a request to OpenAI.

async streamChatCompletion(prompt: string): Promise<void> { } 

This function will perform two main tasks:

  1. Send a request to OpenAI to receive a streaming response.
  2. Parse the incoming stream and update the streamingResponseSignal with the content.

To perform Part 1 and to receive a streaming response from OpenAI, we use the fetch API and set the stream property to true, as shown below.

const response = await fetch(this.apiUrl + '/chat/completions', {
        method: 'POST',
        headers: {
          'Content-Type': 'application/json',
          'Authorization': `Bearer ${this.apiKey}`
        },
        body: JSON.stringify({
          model: 'gpt-3.5-turbo',
          messages: [{ role: 'user', content: prompt }],
          stream: true // Enable streaming
        })
      });

As Part 2, we will perform the following tasks:

  1. Read the stream using the getReader.
  2. Decode it using the TextDecoder.
  3. Add the decoded line to the response signal.

      const reader = response.body?.getReader();
      const decoder = new TextDecoder();

      if (!reader) {
        throw new Error('Failed to get response reader');
      }

      let accumulatedResponse = '';

      while (true) {
        const { done, value } = await reader.read();
        
        if (done) break;

        const chunk = decoder.decode(value);
        const lines = chunk.split('\n');

        for (const line of lines) {
          if (line.startsWith('data: ')) {
            const data = line.slice(6);
      
            if (data === '[DONE]') {
              return;
            }

            try {
              const parsed = JSON.parse(data);
              const content = parsed.choices?.[0]?.delta?.content;
              
              if (content) {
                accumulatedResponse += content;
                this.streamingResponseSignal.set(accumulatedResponse);
              }
            } catch (e) {
              continue;
            }
          }
        }
      }

This code handles the streaming response, which OpenAI returns as a chunked HTTP response.

  1. It reads the response body using getReader().
  2. It converts each binary chunk into a string using TextDecoder.
  3. It splits each chunk into lines and keeps the lines that start with "data: ".
  4. Finally, it parses each data payload with JSON.parse and appends the delta content to the response signal.
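The per-line parsing can be isolated into a small pure function, which also makes it easy to unit test. This is a sketch mirroring the loop above, not a different algorithm:

```typescript
// Extracts the delta content from one SSE line, or returns null for
// non-data lines, the [DONE] sentinel, and unparseable payloads.
function extractDeltaContent(line: string): string | null {
  if (!line.startsWith('data: ')) return null;
  const data = line.slice(6);
  if (data === '[DONE]') return null;
  try {
    const parsed = JSON.parse(data);
    return parsed.choices?.[0]?.delta?.content ?? null;
  } catch {
    return null; // incomplete or malformed JSON chunk -- skip it
  }
}
```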

Putting everything together, the service to get a streaming response from OpenAI should look like this:

import { Injectable, signal } from '@angular/core';
import { environment } from '../environments/environment';

@Injectable({
  providedIn: 'root'
})
export class StreamingChatService {
  private apiKey = environment.openaiApiKey;
  private apiUrl = environment.openaiApiUrl;

  private streamingResponseSignal = signal<string>('');

  get streamingResponse() {
    return this.streamingResponseSignal.asReadonly();
  }

  async streamChatCompletion(prompt: string): Promise<void> {
    this.streamingResponseSignal.set('');
    
    try {
      const response = await fetch(this.apiUrl + '/chat/completions', {
        method: 'POST',
        headers: {
          'Content-Type': 'application/json',
          'Authorization': `Bearer ${this.apiKey}`
        },
        body: JSON.stringify({
          model: 'gpt-3.5-turbo',
          messages: [{ role: 'user', content: prompt }],
          stream: true // Enable streaming
        })
      });

      if (!response.ok) {
        throw new Error(`HTTP error! status: ${response.status}`);
      }

      const reader = response.body?.getReader();
      const decoder = new TextDecoder();

      if (!reader) {
        throw new Error('Failed to get response reader');
      }

      let accumulatedResponse = '';

      while (true) {
        const { done, value } = await reader.read();
        
        if (done) break;

        const chunk = decoder.decode(value);
        const lines = chunk.split('\n');

        for (const line of lines) {
          if (line.startsWith('data: ')) {
            const data = line.slice(6);
      
            if (data === '[DONE]') {
              return;
            }

            try {
              const parsed = JSON.parse(data);
              const content = parsed.choices?.[0]?.delta?.content;
              
              if (content) {
                accumulatedResponse += content;
                this.streamingResponseSignal.set(accumulatedResponse);
              }
            } catch (e) {
              continue;
            }
          }
        }
      }
    } catch (error) {
      console.error('Streaming error:', error);
      this.streamingResponseSignal.set('Error occurred while streaming response');
    }
  }
} 

In the component, inject the streaming service, define an isStreaming signal to track progress, and add a function to fetch the streaming response.

  streamingService = inject(StreamingChatService);
  isStreaming = signal(false);

  async getStreamingResponse() {
    this.isStreaming.set(true);
    await this.streamingService.streamChatCompletion(this.prompt());
    this.isStreaming.set(false);
  }

In the template, add a new button to get the streaming response.

<button (click)="getStreamingResponse()" [disabled]="isStreaming()">
      {{ isStreaming() ? 'Streaming...' : 'Get Streaming Response' }}
</button>

Next, display the response inside a <p> element as shown below.

@if (streamingService.streamingResponse()) {
        <p>{{ streamingService.streamingResponse() }}</p>
      } @else {
        <p class="placeholder">No streaming response yet...</p>
      }

We have now completed all the steps. When you run the application, you should receive a streaming response from the OpenAI GPT-3.5-Turbo model for the given prompt.

I hope you find it easy to incorporate the OpenAI model into your Angular app, and that it opens up many new possibilities for your projects.


About the Author

Dhananjay Kumar

Dhananjay Kumar is a well-known trainer and developer evangelist. He is the founder of NomadCoder, a company that focuses on creating job-ready developers through training in technologies such as Angular, Node.js, Python, .NET, Azure, GenAI and more. He is also the founder of ng-India, one of the world’s largest Angular communities. He lives in Gurgaon, India, and is currently writing his second book on Angular. You can reach out to him for training, evangelism and consulting opportunities.
