Get a chatbot up and running on your Angular app with Gemini and Kendo UI for Angular.
Today, chatbots are more than simple conversational tools. They can analyze data, automate processes and solve real-world problems. As developers, we can harness AI to streamline workflows and tackle complex scenarios. By combining the power of Gemini, Angular and Kendo UI, we can build advanced chatbots that not only chat but also analyze images to provide smart recommendations.
Imagine a scenario where you upload a screenshot of a webpage, and the chatbot identifies which Progress Kendo UI components can help you recreate it, complete with explanations and links to resources. With Kendo UI for Angular’s rich set of components and Gemini’s AI capabilities, building sophisticated chatbots has never been easier.
We want to build a bot where we can upload a page screenshot or photo and it detects which Kendo UI for Angular components can help us build it, with a clear recommendation explaining what each one does and why to use it, plus links to resources.
Sound challenging? Not to worry, Progress Kendo UI provides a set of components to help: Kendo UI for Angular Conversational UI, Kendo Upload and the power of gemini-1.5-flash. Let’s get to coding!
First, create a new Angular application using the Angular CLI with the command ng new chatlens. This will generate a new Angular project named “chatlens.”
npx -p @angular/cli ng new chatlens
Next, navigate to your project folder using cd chatlens. In the terminal, install the Kendo UI for Angular Conversational UI and Kendo UI for Angular Uploads libraries using the schematics:
ng add @progress/kendo-angular-conversational-ui
ng add @progress/kendo-angular-upload
After that, install the marked library to convert the Markdown responses into HTML.
npm i marked
Finally, install the Google Generative AI library with:
npm i @google/generative-ai
To get an API key, visit https://aistudio.google.com/apikey, sign in with your Google account and create an API key.
Copy the key and save it in your project by creating an environment.ts file in the src/environments directory. Export a variable named GEMINI_KEY and assign your API key to it.
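For reference, the file only needs to export the key; the value below is a placeholder for your own:

```typescript
// src/environments/environment.ts
// Replace the placeholder with the API key you created in AI Studio.
export const GEMINI_KEY = 'YOUR_GEMINI_API_KEY';
```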
Now, with Kendo UI components and Gemini set up, you’re ready to create the service that talks with Gemini.
To integrate Gemini into our chatbot, we’ll create a service named GeminiService using the Angular CLI. It will handle communication with the AI model and process user inputs and image uploads.
First, in the terminal, run ng g s services/gemini. It generates an empty service.
d.paredes@danys-mbp chatlens % ng g s services/gemini
CREATE src/app/services/gemini.service.spec.ts (362 bytes)
CREATE src/app/services/gemini.service.ts (136 bytes)
Next, define two properties, KENDO_RECOMMEND_PROMPT and generativeModel.

KENDO_RECOMMEND_PROMPT is the prompt that will guide the AI model when analyzing user inputs and images.

generativeModel is an instance of the Gemini AI model. We’ll use the GEMINI_KEY to authenticate and initialize the model, specifying the model version as “gemini-1.5-flash”.

The code looks like:
import { Injectable } from '@angular/core';
import { GoogleGenerativeAI } from '@google/generative-ai';
import { GEMINI_KEY } from '../../environments/environment';

@Injectable({ providedIn: 'root' })
export class GeminiService {
  KENDO_RECOMMEND_PROMPT = `
  Given a screenshot of a browser, tell me which Kendo UI components help me to build that UI and why.
  Return the response as a string, with the section detected, the Kendo UI components and a URL.
  Format the output as Markdown. Include headings, bold text, bullet points, and hyperlinks as appropriate.
  `;

  private generativeModel = new GoogleGenerativeAI(
    GEMINI_KEY,
  ).getGenerativeModel({
    model: 'gemini-1.5-flash',
  });
It’s time to interact with Gemini! The getResponse method (still inside our Gemini service) takes the uploaded image as a base64 string and interacts with the AI model. It calls the generateContent method of the AI model, passing an array containing the KENDO_RECOMMEND_PROMPT and the uploaded image. The response from the AI will be processed to extract the text and transform it into HTML using the marked library.
async getResponse(image: string): Promise<string> {
  const { response } = await this.generativeModel.generateContent([
    this.KENDO_RECOMMEND_PROMPT,
    image,
  ]);
  const llmResponse = response.text();
  return marked(llmResponse);
}
Once the AI generates a response, we’ll use the marked
library to convert the markdown text to HTML. This HTML-formatted response will then be returned so it can be displayed in the chat interface.
The final code looks like:
import { Injectable } from '@angular/core';
import { GoogleGenerativeAI } from '@google/generative-ai';
import { marked } from 'marked';
import { GEMINI_KEY } from '../../environments/environment';

@Injectable({ providedIn: 'root' })
export class GeminiService {
  KENDO_RECOMMEND_PROMPT = `
  Given a screenshot of a browser, tell me which Kendo UI components help me to build that UI and why.
  Return the response as a string, with the section detected, the Kendo UI components and a URL.
  Format the output as Markdown. Include headings, bold text, bullet points, and hyperlinks as appropriate.
  `;

  private generativeModel = new GoogleGenerativeAI(
    GEMINI_KEY,
  ).getGenerativeModel({
    model: 'gemini-1.5-flash',
  });

  async getResponse(image: string): Promise<string> {
    const { response } = await this.generativeModel.generateContent([
      this.KENDO_RECOMMEND_PROMPT,
      image,
    ]);
    const llmResponse = response.text();
    return marked(llmResponse);
  }
}
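One caveat: depending on the SDK version, a plain string passed in the generateContent array may be treated as text rather than as an image. If the data-URL-as-string approach does not give good results in your setup, the image can be sent as an inlineData part instead. Here is a hedged sketch of the conversion; dataUrlToPart is a hypothetical helper, not part of the article’s code:

```typescript
// Hypothetical helper: converts a data URL (as produced by FileReader's
// readAsDataURL) into the inlineData part shape the Gemini SDK expects
// for images. Note that the SDK wants the raw base64 payload without the
// "data:image/png;base64," prefix.
function dataUrlToPart(dataUrl: string) {
  const [header, base64Data] = dataUrl.split(',');
  const mimeType = header.replace('data:', '').replace(';base64', '');
  return { inlineData: { data: base64Data, mimeType } };
}

// Example: dataUrlToPart('data:image/png;base64,AAAA')
// → { inlineData: { data: 'AAAA', mimeType: 'image/png' } }
```

With this helper, the call would become generateContent([this.KENDO_RECOMMEND_PROMPT, dataUrlToPart(image)]).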
Next, it’s time to wire up our components kendo-upload and kendo-chat along with their interactions, but before using the Kendo Upload we must register provideHttpClient and provideAnimations in app.config.ts.
import { ApplicationConfig, provideZoneChangeDetection } from '@angular/core';
import { provideRouter } from '@angular/router';
import { routes } from './app.routes';
import { provideHttpClient } from '@angular/common/http';
import { provideAnimations } from '@angular/platform-browser/animations';

export const appConfig: ApplicationConfig = {
  providers: [
    provideZoneChangeDetection({ eventCoalescing: true }),
    provideRouter(routes),
    provideHttpClient(),
    provideAnimations(),
  ],
};
Open app.component.ts and import KENDO_CHAT and KENDO_FILESELECT to get access to the kendo-chat component and file select component in the template.
import { KENDO_CHAT } from '@progress/kendo-angular-conversational-ui';
import { KENDO_FILESELECT } from '@progress/kendo-angular-upload';

@Component({
  selector: 'app-root',
  imports: [KENDO_CHAT, KENDO_FILESELECT],
  templateUrl: './app.component.html',
  styleUrl: './app.component.scss',
})
Before we begin, let’s have a small overview of kendo-chat. It comes with a customizable chat interface out of the box. Here is a quick rundown of the key properties:

[messages]: Binds the messages array from the current conversation to the chat component.

[user]: Defines the current user interacting with the chat. This is important for differentiating between messages sent by the user and others.

(sendMessage): Event emitted when the user hits send.

kendoChatMessageTemplate and kendoChatMessageBoxTemplate: Make the templates for messages and the message box flexible and customizable.

And don’t forget to give Kendo UI for Angular a try (for free) if you haven’t already! (No credit card required!)
Next, define the bot and user details properties to personalize the chatbot’s conversation:

GEMINI_USER represents the chatbot.

JOHN_DOE_USER represents the user interacting with the chatbot.

INITIAL_MESSAGE is the first message the chatbot sends.

GEMINI_USER: User = { id: 1, name: 'Gemini' };
JOHN_DOE_USER: User = { id: 2, name: 'John Doe' };
INITIAL_MESSAGE: Message = {
  text: 'Hello! How can I help you with your website?',
  author: this.GEMINI_USER,
};
To manage messages, use an Angular signal to declare an array of messages with the initial message, so we can bind the [messages] property in kendo-chat later.
messages = signal<Message[]>([this.INITIAL_MESSAGE]);
Next, create a method addMessageToMessages to add new messages to the conversation:
private addMessageToMessages(message: Message) {
this.messages.update((m) => [...m, message]);
}
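The spread here matters: the updater must return a new array reference so the signal registers a change. A minimal standalone sketch of that pattern (the Message shape and appendMessage helper below are illustrative, not part of the app):

```typescript
// Illustrative sketch of the immutable update pattern used above:
// the updater returns a NEW array instead of mutating the old one.
type Message = { text?: string; author: { id: number; name: string } };

function appendMessage(current: Message[], next: Message): Message[] {
  return [...current, next]; // new reference; the old array is untouched
}

const gemini = { id: 1, name: 'Gemini' };
const before: Message[] = [{ text: 'Hello!', author: gemini }];
const after = appendMessage(before, { text: 'Hi Gemini!', author: gemini });
// before keeps 1 message; after holds 2
```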
We already covered the messages, but what about the upload? Remember, in the UI we’re going to allow the user to upload an image, which we’ll send to the LLM. That part will be handled by kendo-fileselect.
Let’s do it!
First, because we are going to send the image with the LLM request, inject the GeminiService:
private geminiServiceInstance = inject(GeminiService);
Next, we create two new private methods, fileToBase64 and handleFileUpload. fileToBase64 transforms the File into a base64 string.
private fileToBase64(file: File): Promise<string> {
  return new Promise((resolve, reject) => {
    const reader = new FileReader();
    reader.readAsDataURL(file);
    reader.onload = () => resolve(reader.result as string);
    reader.onerror = (error) => reject(error);
  });
}
Read more: https://developer.mozilla.org/en-US/docs/Web/API/FileReader/readAsDataURL.
Let’s work on the handleFileUpload method, which handles the file and FileSelectComponent interaction with Gemini. First, we get the image as a base64 string. Next, we wait for the GeminiService to return the response. After that, we create a new message with the response and add it to the messages. And finally, we clear the list of files in the fileSelectComponent.

The final handleFileUpload code looks like:
private async handleFileUpload(
  file: File,
  fileSelectComponent: FileSelectComponent,
) {
  try {
    const imageBase64 = await this.fileToBase64(file);
    const response =
      await this.geminiServiceInstance.getResponse(imageBase64);
    const responseMessage: Message = {
      author: this.GEMINI_USER,
      text: response,
    };
    this.addMessageToMessages(responseMessage);
  } finally {
    fileSelectComponent.fileList.clear();
  }
}
Before calling handleFileUpload, we’re going to fake the LLM “thinking” with a typing message! 😈
private createTypingMessage(): Message {
  return { typing: true, author: this.GEMINI_USER };
}
The public uploadFile method will call addMessageToMessages with the fake typing message and take the first file from the list of files to upload. If it finds a file, it calls handleFileUpload with the file and the fileSelectComponent.

The code looks like:
async uploadFile(fileSelectComponent: FileSelectComponent) {
  this.addMessageToMessages(this.createTypingMessage());
  const file = fileSelectComponent.fileList.firstFileToUpload[0]?.rawFile;
  if (file) {
    await this.handleFileUpload(file, fileSelectComponent);
  }
}
The final code in the app.component.ts looks like:
import { Component, inject, signal } from '@angular/core';
import {
  KENDO_CHAT,
  Message,
  User,
} from '@progress/kendo-angular-conversational-ui';
import { GeminiService } from './services/gemini.service';
import {
  FileSelectComponent,
  KENDO_FILESELECT,
} from '@progress/kendo-angular-upload';

@Component({
  selector: 'app-root',
  imports: [KENDO_CHAT, KENDO_FILESELECT],
  templateUrl: './app.component.html',
  styleUrl: './app.component.scss',
})
export class AppComponent {
  GEMINI_USER: User = { id: 1, name: 'Gemini' };
  JOHN_DOE_USER: User = { id: 2, name: 'John Doe' };
  INITIAL_MESSAGE: Message = {
    text: 'Hello! How can I help you with your website?',
    author: this.GEMINI_USER,
  };
  messages = signal<Message[]>([this.INITIAL_MESSAGE]);

  private geminiServiceInstance = inject(GeminiService);

  private addMessageToMessages(message: Message) {
    this.messages.update((m) => [...m, message]);
  }

  async uploadFile(fileSelectComponent: FileSelectComponent) {
    this.addMessageToMessages(this.createTypingMessage());
    const file = fileSelectComponent.fileList.firstFileToUpload[0]?.rawFile;
    if (file) {
      await this.handleFileUpload(file, fileSelectComponent);
    }
  }

  private createTypingMessage(): Message {
    return { typing: true, author: this.GEMINI_USER };
  }

  private async handleFileUpload(
    file: File,
    fileSelectComponent: FileSelectComponent,
  ) {
    try {
      const imageBase64 = await this.fileToBase64(file);
      const response =
        await this.geminiServiceInstance.getResponse(imageBase64);
      const responseMessage: Message = {
        author: this.GEMINI_USER,
        text: response,
      };
      this.addMessageToMessages(responseMessage);
    } finally {
      fileSelectComponent.fileList.clear();
    }
  }

  private fileToBase64(file: File): Promise<string> {
    return new Promise((resolve, reject) => {
      const reader = new FileReader();
      reader.readAsDataURL(file);
      reader.onload = () => resolve(reader.result as string);
      reader.onerror = (error) => reject(error);
    });
  }
}
We have all our methods ready! It’s time to bind our methods and properties and use them with the kendo-chat and kendo-fileselect components in the app.component.html.
Now we will add kendo-chat and kendo-fileselect to our template. The combination handles both text messages and image uploads.
First, we connect the kendo-chat to the current user with the [user] property bound to JOHN_DOE_USER, which is the user talking to the bot, and the [messages] property bound to messages(), which holds all the messages in the chat.

Note that we are working with Angular Signals.
To customize how messages are shown, we use kendoChatMessageTemplate. This lets us control what each message looks like. For the message, we use [innerHTML] to show the response as HTML, thanks to the marked library converting the Markdown text into HTML.
Instead of a normal text input for the chatbot, we use a customized Kendo FileSelect in the input area, via kendoChatMessageBoxTemplate, to upload images that the bot can analyze.
The Kendo FileSelect component lets users pick a file and tells us when a file is selected using the valueChange event. We use a template reference variable, #images, to access the component in our code and have control over the file list.
The final code looks like:
<kendo-chat
  [user]="JOHN_DOE_USER"
  [messages]="messages()">
  <ng-template kendoChatMessageTemplate let-message>
    <div [innerHTML]="message.text"></div>
  </ng-template>
  <ng-template kendoChatMessageBoxTemplate>
    <div class="k-text-center k-flex">
      <kendo-fileselect #images (valueChange)="uploadFile(images)"/>
    </div>
  </ng-template>
</kendo-chat>
It is ready! Save your changes, run ng serve and watch your chatbot in action.
We built a chatbot with a big three combo: Kendo UI for Angular Conversational UI for building the conversational interface, Kendo FileSelect to handle file uploads and Gemini AI for analyzing the image and suggesting Kendo UI components.
In our chat, the users can upload a screenshot of a webpage, and the chatbot analyzes the image and recommends which Kendo UI components to use with helpful explanations and links to resources.
But this is just a single chatbot, and just the beginning! Here are some ideas to take it further: add more features, like allowing multiple file uploads or customizing the AI prompt, and try other Kendo UI components to expand the chatbot’s capabilities.
It’s your time to build!😊
Source Code: https://github.com/danywalls/kendo-chat-gemini-flash/.
Remember: Kendo UI for Angular comes with a free 30-day trial, so give it a spin!
Dany Paredes is a Google Developer Expert on Angular and Progress Champion. He loves sharing content and writing articles about Angular, TypeScript and testing on his blog and on Twitter (@danywalls).