See how to integrate Azure’s AI Language Services with Telerik UI for Blazor to determine a document’s sentiment, extract key sentences and phrases, and more.
In a previous post, I showed how you could integrate Azure AI’s Language Services into your applications to analyze a PDF document, using your existing toolset. However, in that post, I only used one platform (ASP.NET Core), loaded just a single PDF (and did that at startup), and used only one of the Language Service’s features (abstractive summary, which generates a roughly 100-word paragraph describing the document’s contents).
In this post, I’m going to go beyond that earlier example to demonstrate integrating with Language Services on a different platform (Blazor), supporting the user dynamically loading documents at run time and exploiting multiple features of Azure’s AI Language Service. Specifically, I’m going to look at the Language Service’s features for determining the document’s sentiment, extracting key sentences, extracting key phrases and extracting/classifying key entities referenced in the document.
If you’re interested in how well those features work (and don’t particularly care how I got there), you can skip to the relevant section further down in this post.
If you want to go to the other extreme and implement this yourself, you’re going to need to set up an Azure Language Service. I’ve covered how to do that in my previous post and will skip repeating it here. Once your service is created, you’ll need to copy the service’s URL and one of its two “secret keys,” which you can find on the service’s “Keys and endpoint” page (I used the Key 1 value on that page as my secret key).
I created my Blazor app for this case study using Progress Telerik’s standard guidance (I created a server-side app but I could have also created a client-side app).
Once I created my app, I opened its default Index.razor page and added a Blazor PDF Viewer to it, using this markup:
<TelerikPdfViewer OnOpen="GetFileName"
                  Height="800px">
</TelerikPdfViewer>
That markup not only creates a window for displaying a PDF file, it includes a toolbar that allows the user to load new documents at run time. In that markup, I’ve tied the viewer’s OnOpen event to a method in my component so that I can capture the name of the file as the user uploads it. That method is pretty straightforward:
string fileName;

private async Task GetFileName(PdfViewerOpenEventArgs e)
{
    fileName = e.Files[0].Name;
}
Azure’s Language Services only accept text, so I need to convert my PDF document to text. I could use Telerik Document Processing Library (DPL) objects, as I did in my previous post. However, since I’m running in Blazor on the client, I felt I should take advantage of the platform and used some Progress Telerik-supplied JavaScript code to handle the conversion. Here’s a client-side JavaScript method that finds the PDF document in my PDF Viewer component on my page and hands back the document’s content as text:
<script suppress-error="BL9992">
    window.getLoadedDocumentText = async () => {
        let pdfInstances = Object.fromEntries(Object.entries(TelerikBlazor._instances)
            .filter(([key, value]) => value.element &&
                value.element.classList.contains("k-pdf-viewer")));
        let pdfInstance = pdfInstances[Object.keys(pdfInstances)[0]];
        let document = pdfInstance.widget.state.pdfDocument;
        let allData = '';
        for (let i = 1; i <= document.numPages; i++) {
            let pageContent = (await (await document.getPage(i)).getTextContent())
                .items.map(token => token.str).join('');
            allData += " " + pageContent;
        }
        return allData;
    }
</script>
To invoke this script from my component’s C# code, I need to grab Blazor’s JavaScript runtime from my application’s Services collection and shove it into a variable. This Razor markup, added at the top of my component’s .razor file, gets Blazor’s JavaScript runtime and puts it into a variable called jsr for me to use later in my application:
@inject IJSRuntime jsr
With that in place, this is all the code I need to convert a PDF to text after my user has loaded a document into the viewer and store it in a field:
string txtDoc = string.Empty;
txtDoc = await jsr.InvokeAsync<string>("getLoadedDocumentText");
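A natural place to make that call is the GetFileName handler from earlier, so the text is refreshed every time the user loads a new document. Here’s a sketch of that handler, extended (the short delay is my own assumption, added to give the viewer time to finish loading the document before the JavaScript runs; your timing needs may differ):

private async Task GetFileName(PdfViewerOpenEventArgs e)
{
    fileName = e.Files[0].Name;

    // Assumption: wait briefly so the viewer can finish loading the new
    // document before asking the JavaScript code for its text
    await Task.Delay(500);
    txtDoc = await jsr.InvokeAsync<string>("getLoadedDocumentText");
}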
Now that I’ve got the text for whatever PDF document the user just uploaded, I can start writing some AI integration code.
To use the features of your Language Service, you just need to create a TextAnalyticsClient, passing it the URL and secret key for your service (I’ve omitted those two values in this sample code; you’ll need to swap in the URL and key from your service’s “Keys and endpoint” page).
I created my client in my Blazor app’s OnInitializedAsync method and stored it in a field, like this:
TextAnalyticsClient tac;

protected override Task OnInitializedAsync()
{
    tac = new TextAnalyticsClient(
        new Uri("<URL from your Language Service’s ‘Keys and endpoint’ page>"),
        new AzureKeyCredential(
            "<Key 1 from your Language Service’s ‘Keys and endpoint’ page>"));
    return base.OnInitializedAsync();
}
With the client created, I can now start trying out some of Language Service’s features.
In the ASP.NET Core app that I created in my previous post, I used the TextAnalyticsClient’s AbstractiveSummarize method to generate a paragraph that (ideally) covers the key points from the document the user has loaded into the viewer. For this example, I’m using a slightly different feature: the ExtractiveSummarize method, which summarizes the document by pulling out three key sentences.
The process for using the ExtractiveSummarize method is almost the same as using the AbstractiveSummarize method. You need to do all of these seven steps (if it helps, all the other methods in this article are much simpler):

1. Create a List of TextDocumentInput objects.
2. Create a TextDocumentInput object, passing a key and the text from your document (I used the file name of the document being displayed in the viewer as my key).
3. Add the TextDocumentInput object to the list you created earlier.
4. Pass the list and the WaitUntil.Completed enum to the ExtractiveSummarizeAsync method.
5. Catch the ExtractiveSummarizeOperation object that the method returns.
6. Call the operation’s GetValues method and retrieve the first item in the collection of results returned (and that item is yet another collection).
7. From that collection, retrieve the result with the Id matching the key of your TextDocumentInput object (in my case, I used the document’s file name as my key).

And that’s what this code does, eventually returning an ExtractiveSummarizeResult object with the results for the currently loaded document:
ExtractiveSummarizeResult docResult;
List<TextDocumentInput> docs;

public async Task GetExtractiveSummary()
{
    docs = new List<TextDocumentInput>();
    TextDocumentInput doc = new(fileName, txtDoc);
    docs.Add(doc);
    ExtractiveSummarizeOperation eso = await
        tac.ExtractiveSummarizeAsync(WaitUntil.Completed, docs);
    ExtractiveSummarizeResultCollection results = eso.GetValues().First();
    docResult = results.FirstOrDefault(r => r.Id == fileName);
}
That returned object has a Sentences collection with the three sentences pulled from the PDF document. I displayed those sentences in my component’s UI as a bulleted list, using this Razor markup:
<ul>
    @{
        if (docResult != null)
        {
            foreach (ExtractiveSummarySentence sentence in docResult.Sentences)
            {
                <li>@sentence.Text</li>
            }
        }
    }
</ul>
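Nothing calls GetExtractiveSummary on its own, of course. You’ll need something in your UI to trigger it. As a minimal sketch (this button is my addition, not part of the original markup), a Telerik button wired to the method does the job:

<TelerikButton OnClick="GetExtractiveSummary">Summarize</TelerikButton>

A plain HTML button with an @onclick attribute would work equally well.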
How good a job did the method do? These are the sentences that the service pulled from a post I wrote on enterprise reporting best practices and standards:
I should mention that the analyzer doesn’t seem to understand ellipses (…), at least when the PDF file is created by Word’s Print to PDF function. I had to remove some ellipses from my document to get the response you see here.
So, how well did the tool do in choosing the sentences? With any tool, the question is always “fitness for purpose”—in what scenario would this tool be useful?
If the scenario you’re using this functionality in is to give a potential reader an idea of what the document is about, then I think this isn’t a bad choice of sentences. If, on the other hand, the scenario is to boil the post down to the key takeaways, then I think there’s a lot that’s missing.
Of course, I also wrote the article and, on the “no one has ugly children” principle, my judgment may not be trustworthy. You can take a look at the original document and make up your own mind.
But this may be an unfair test—that particular post was almost 2,000 words long, verging on tl;dr territory. It might be impossible to do justice to a post that long in just three sentences.
For comparison purposes, here are the three sentences extracted from a blog post that I did on the importance of isolating unit tests, a post that was just 1,000 words long:
I was unhappy with the first sentence since the reader would have no idea what “that” is, let alone why “re-initializing” would matter. I have similar concerns about the last sentence: The “And” at the start suggests that the sentence is a continuation of some previous thought, and, without that previous thought, I question the value of including the sentence.
But, again, take a look at the article and make up your own mind of what value this summary provides.
The other three features I decided to test (key phrases, sentiment and categorized entities) won’t accept a document with more than 5,000 characters (including spaces). Since a) I wanted to restrict my tests to posts I’d written to avoid copyright issues, and b) I talk too much … well, that limited the posts that I could test with. In the end, I used that unit testing article from my last test and, even then, I had to cut 250 words to get it under 5,000 characters (if you care, I cut the post off at the “The Benefits of Isolation” heading).
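By the way, if you’d rather not count characters by hand, a simple guard before calling these methods will handle the limit. A minimal sketch (truncating at exactly 5,000 characters is my own crude choice; trimming at a sentence boundary would be kinder to the analysis):

// The 5,000-character limit applies to key phrases, sentiment and entities
const int MaxChars = 5000;
string textForAnalysis = txtDoc.Length <= MaxChars ?
    txtDoc : txtDoc.Substring(0, MaxChars);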
The code for retrieving key phrases from the document is more straightforward than the summary method: Just pass the text version of your document to the client’s ExtractKeyPhrases method and catch the result in a KeyPhraseCollection object. I used the async version of the method in this sample code:
KeyPhraseCollection kpc;

public async Task GetKeyPhrases()
{
    kpc = await tac.ExtractKeyPhrasesAsync(txtDoc);
}
The UI for this function was simple: I just looped through the phrases in the KeyPhraseCollection, displaying each one in a bulleted list:
<ul>
    @{
        if (kpc != null)
        {
            foreach (string phrase in kpc)
            {
                <li>@phrase</li>
            }
        }
    }
</ul>
The text from the shortened version of the unit testing post generated 71 phrases, ranging from key phrases like “local, isolated resources” that I could see being useful to less interesting (and common) phrases like “something,” “example,” “time” and “fact.” If the scenario here is to generate search terms for indexing, this tool could be useful, provided you filtered out the more common words from the generated list.
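That filtering could be as simple as comparing each phrase to a list of words you want to ignore. A minimal sketch (the stop list is my own invention; build yours from the noise you actually see in your results):

// Hypothetical stop list: extend it to suit your documents
string[] stopPhrases = { "something", "example", "time", "fact" };

IEnumerable<string> usefulPhrases = kpc.Where(
    p => !stopPhrases.Contains(p, StringComparer.OrdinalIgnoreCase));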
Next, I tried the sentiment function. Again, the code is pretty simple: Feed your text document to the client’s AnalyzeSentiment method and catch the result in a DocumentSentiment object. Again, I used the async version of the method:
DocumentSentiment sent;

public async Task GetSentiment()
{
    sent = await tac.AnalyzeSentimentAsync(txtDoc);
}
At the document level, the DocumentSentiment object has two properties for reporting sentiment:

- A Sentiment property with the overall assessment of the document (positive, negative, neutral or mixed)
- A ConfidenceScores property that reports how confident the service is in each of those assessments

The DocumentSentiment object also has a Sentences collection that lets you access the sentiment score for each sentence in the document.
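I didn’t build a UI around those per-sentence scores, but reading them is straightforward. A sketch of walking the Sentences collection (the console output is just for illustration):

foreach (SentenceSentiment s in sent.Sentences)
{
    // Each sentence carries its own sentiment plus confidence scores
    Console.WriteLine($"{s.Sentiment}: {s.Text}");
}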
I put together a UI that reported just at the document level, displaying the document’s assigned sentiment and the confidence level for the sentiment assigned:
@{
    if (sent != null)
    {
        @sent.Sentiment;
        <text>(</text>
        switch (sent.Sentiment.ToString())
        {
            case "Negative":
                @sent.ConfidenceScores.Negative;
                break;
            case "Positive":
                @sent.ConfidenceScores.Positive;
                break;
            case "Mixed":
                @sent.ConfidenceScores.Neutral;
                break;
        }
        <text>)</text>
    }
}
Plainly, the goal of the method is to determine how the text’s author feels about the document’s topic … and it turns out that I’m pretty wishy-washy: Virtually all of my documents that I fed to this method returned a sentiment of “mixed.”
However, when I deleted the last paragraph from the shortened version of my unit testing post (the paragraph beginning “The goal of isolating your tests…”), text analytics decided I was negative about the isolating unit tests. It even felt I was pretty sure about that, too, with a confidence of 0.96.
Given that my sample texts really aren’t the appropriate input for this scenario, I’m not going to draw any conclusions about how this method would work in the scenarios it’s intended to be used in.
Finally, I tried out text analytics’ RecognizeEntities method, which looks for terms in the document and assigns them to various categories. The method hands back a collection of CategorizedEntity objects. Unlike my previous examples, I used the synchronous version of the method here, so this code is marginally simpler:
CategorizedEntityCollection cec;

public void GetEntities()
{
    cec = tac.RecognizeEntities(txtDoc);
}
The CategorizedEntity object has two useful properties:

- A Text property with some term from the document
- A Category property with the name of the category the term was assigned to

The UI I created displays the category and text for each CategorizedEntity object:
<ul>
    @{
        if (cec != null)
        {
            foreach (CategorizedEntity entity in cec)
            {
                <li>@entity.Category: @entity.Text</li>
            }
        }
    }
</ul>
As with the key phrases, I got back a lot: well over 80 entries organized into seven categories. Here are those seven categories with some examples of text from each category (except for the three categories that had only one or two terms):
The DateTime and Location categories were just wrong (i.e., neither a date nor a location). The two entries in the Quantity category weren’t wrong … but were less than useful. The terms in the other four categories could be more useful.
On the other hand, many category/term combinations appeared multiple times. “Skill: code” cropped up 12 times, for example, and some variation on “Skill: test” appeared eight times. A scenario driven by targeted analysis, looking for specific category and term combinations, might find this analysis useful.
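For that kind of targeted analysis, a little LINQ over the returned collection will get you the counts. A minimal sketch (the grouping and console output are my choices, not code from the original application):

// Count how often each category/term combination appears
var entityCounts = cec
    .GroupBy(e => (Category: e.Category.ToString(), e.Text))
    .Select(g => new { g.Key.Category, g.Key.Text, Count = g.Count() })
    .OrderByDescending(x => x.Count);

foreach (var ec in entityCounts)
{
    Console.WriteLine($"{ec.Category}: {ec.Text} ({ec.Count})");
}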
If you’re interested in using Azure’s Language Services, these two articles should demonstrate that you can start using them from any platform, with your existing tools: this is all stuff you could be putting in your applications right now. While I’ve used web-based platforms in these two case studies, this code (especially the DPL code from the ASP.NET Core post) would work just as well in any .NET project. The major differences would be UI-oriented: the markup for whatever version of the PDF viewer you’re using and how you display your results.
And, after all, if AI is our future, you should probably start cozying up to Skynet now.
Try this out for yourself with a free trial of Telerik UI for Blazor—or, better yet, get the Telerik DevCraft bundle, also with a free 30-day trial, so you can try out the AI Prompt component across libraries.
Peter Vogel is a system architect and principal in PH&V Information Services. PH&V provides full-stack consulting from UX design through object modeling to database design. Peter also writes courses and teaches for Learning Tree International.