Some techniques can make a big difference in the efficient use of resources, improving performance and scalability. Check out these nine tips on how to optimize APIs in ASP.NET Core.
In scenarios that involve transferring or managing large amounts of data, the performance of APIs is crucial for fast, scalable and efficient applications.
Good API performance can reduce latency, improve user experience and optimize the use of server resources.
In this post, we will consider some best practices that can be adopted for good performance in an ASP.NET Core web API. We will cover topics such as caching, code optimization, pagination and others.
Web APIs are very common nowadays, especially in large-scale applications that need to communicate with external modules, sending and receiving data.
Creating efficient applications should be the objective of backend developers because of the various advantages that resource optimization brings. In addition, several problems can be avoided when creating optimized web applications.
Some advantages of optimized APIs include faster responses, improved user experience and reduced consumption of server resources like memory and CPU. Optimized resources allow APIs to scale better, handling more requests without significantly degrading performance.
To achieve these benefits, it is essential to know techniques that help identify possible bottlenecks in the application. Furthermore, there are also excellent tools that can be used to make an API efficient.
Below we will check out some of the main ASP.NET Core tools and resources to improve the performance of a web API.
To save time, this post will not demonstrate details of the implementation of the base application, only the code snippets relevant to the topics covered.
You can check out the details by accessing the complete application in this GitHub repository: Service Event Handler source code.
Asynchronous communication allows thousands of requests to be processed simultaneously. This brings performance gain, as it is not necessary to wait for one request to finish before another one starts. ASP.NET Core allows a small pool of threads to handle thousands of simultaneous requests.
To see the use of asynchronous communication in practice, let’s analyze the code below. Note that the return is being handled synchronously:
[HttpGet("error-logs")]
public ActionResult<List<Log>> GetErrorLogs()
{
    var logErrors = _context.Logs
        .Where(l => l.LogLevel == InternalLogLevel.Error)
        .ToList();

    return Ok(logErrors);
}
Each HTTP request to a synchronous endpoint blocks a thread from the server’s thread pool. If many synchronous requests occur simultaneously, this can exhaust the thread pool, causing slowdowns or service failures. Note that nothing is asynchronous here, which means this endpoint can become a problem. To fix this we can rewrite it as follows:
[HttpGet("error-logs")]
public async Task<ActionResult<List<Log>>> GetErrorLogs()
{
    var logErrors = await _context.Logs
        .Where(l => l.LogLevel == InternalLogLevel.Error)
        .ToListAsync();

    return Ok(logErrors);
}
Note that we have now added the async and await keywords to the request. In addition, we are using the ToListAsync() method, which is also asynchronous. This means that a call to this endpoint no longer blocks a thread while the query runs, since it can be processed asynchronously, and the freed thread can serve other requests in the meantime, which results in better performance for the application.
Simultaneously transporting large amounts of data can cause serious performance problems, excessive memory consumption and slowdowns. To mitigate these possible bottlenecks, we can use pagination.
Pagination is a technique used to partition large amounts of data, where you allow the requester to choose a certain range of values. For example, you can send a first value (skip = 10) and a second value (take = 50); if the search is over an ordered list, only the records that fall within that range (11 to 60) will be returned. In this case, only the requested quantity is transferred, not the total quantity of items.
Below is one of the ways to implement pagination in ASP.NET Core:
[HttpGet("error-logs/{skip}/{take}")]
public async Task<ActionResult<List<Log>>> GetErrorLogsPaginated([FromRoute] int skip = 0, [FromRoute] int take = 10)
{
    var logErrors = await _context.Logs
        .Where(l => l.LogLevel == InternalLogLevel.Error)
        .OrderBy(c => c.Id)
        .Skip(skip)
        .Take(take)
        .ToListAsync();

    return Ok(logErrors);
}
Now the endpoint allows API clients to obtain error log records in paged form, using the skip and take parameters to control pagination. These parameters are passed to the Skip() and Take() LINQ extension methods, which limit the query to only the records that were requested. We also use the OrderBy() method so that the list is ordered before the page is taken.
Using paging allows you to optimize performance and scalability when dealing with large volumes of data, as it allows for a more balanced distribution of workload and improves user experience by reducing response time and system resource consumption.
AsNoTracking() is an extension method in Entity Framework Core, and it works as follows:

By default, EF Core tracks the entities loaded through the DbContext class. This means it maintains a reference to each entity to detect changes and synchronize them with the database when you call the SaveChanges method. Entity tracking consumes memory and processing time.

When you use the AsNoTracking() method, EF Core disables this tracking behavior, which can result in faster queries and lower memory usage, because the DbContext does not need to maintain references to the returned entities.

Therefore, in read-only scenarios it is advisable to use AsNoTracking() to tell EF Core that there is no need to track the returned entities, which reduces the load on EF Core and results in faster queries.
The code below shows the previous endpoint with the AsNoTracking method.
[HttpGet("error-logs/{skip}/{take}")]
public async Task<ActionResult<List<Log>>> GetErrorLogsPaginated([FromRoute] int skip = 0, [FromRoute] int take = 10)
{
    var logErrors = await _context.Logs
        .AsNoTracking()
        .Where(l => l.LogLevel == InternalLogLevel.Error)
        .OrderBy(c => c.Id)
        .Skip(skip)
        .Take(take)
        .ToListAsync();

    return Ok(logErrors);
}
Although AsNoTracking() is useful when retrieving data, it should be avoided in scenarios where the data is modified, as the DbContext will not be able to track and apply the changes.
Avoiding network round trips means that you should, whenever possible, retrieve the necessary data in a single call, rather than making multiple calls and then putting them together.
Note the code below:
[HttpGet("services-ids")]
public async Task<ActionResult<List<int>>> GetServicesIds()
{
    var servicesIds = await _context.Services.AsNoTracking().Select(x => x.Id).ToListAsync();
    return Ok(servicesIds);
}

[HttpGet("logs-with-service")]
public async Task<ActionResult<List<Log>>> GetLogsByServiceIds([FromQuery] List<int> serviceIds)
{
    var logs = await _context.Logs
        .AsNoTracking()
        .Where(l => serviceIds.Contains(l.ServiceId))
        .Select(l => new Log
        {
            Id = l.Id,
            Service = l.Service,
        })
        .ToListAsync();

    return Ok(logs);
}
Here we have two endpoints—the first to return all service IDs, and the second to return logs that have records with these IDs. In this case, we are making two calls, which is unnecessary as we could do it as follows:
[HttpGet("logs-with-service")]
public async Task<ActionResult<List<Log>>> GetAllServicesWithLogs()
{
    var services = await _context.Services
        .AsNoTracking()
        .ToListAsync();

    var serviceIds = services.Select(s => s.Id).ToList();

    var logs = await _context.Logs
        .AsNoTracking()
        .Where(l => serviceIds.Contains(l.ServiceId))
        .ToListAsync();

    return Ok(logs);
}
Now there is only one endpoint, which combines the retrieval of all services and their related logs in a single call, without the need to provide service IDs. This simplifies the client’s logic and reduces the number of requests. Although simple, things like this often go unnoticed and can cause a significant increase in server resource consumption. So, whenever you need to consolidate data across multiple requests, consider whether there is a way to reduce the number of calls required.
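Depending on the data model, we can go one step further and also collapse the two database queries into a single round trip by letting EF Core compose them. A minimal sketch, assuming the same DbContext, Log and Service entities used throughout this post:

```csharp
[HttpGet("logs-with-service")]
public async Task<ActionResult<List<Log>>> GetAllServicesWithLogs()
{
    // EF Core translates the Contains over the Services query into a single
    // SQL statement (a subquery or join), so only one database round trip
    // is made instead of fetching the service IDs separately.
    var logs = await _context.Logs
        .AsNoTracking()
        .Where(l => _context.Services.Select(s => s.Id).Contains(l.ServiceId))
        .ToListAsync();

    return Ok(logs);
}
```

The same idea applies to any query that first materializes a list of keys and then uses it to filter a second query: keeping both sides as IQueryable lets the provider generate one SQL statement.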
Using async Task on endpoints allows the caller to await the task's completion. This means that the request pipeline can wait until the asynchronous method completes before continuing.

ASP.NET Core expects action methods to return a Task so it can integrate them correctly into the asynchronous request pipeline. When using async void, this expectation is broken, which can result in the response being sent before the asynchronous operation completes; in addition, an exception thrown inside an async void method cannot be caught by the pipeline and can crash the process.

Instead of using async void:
[HttpDelete("/{id}")]
public async void DeleteService([FromRoute] int id)
{
    var service = await _context.Services.FirstOrDefaultAsync(s => s.Id == id);
    if (service is not null)
    {
        _context.Services.Remove(service);
        await _context.SaveChangesAsync();
    }
    await Response.WriteAsync("Successfully deleted record");
}
Use async Task:

[HttpDelete("/{id}")]
public async Task DeleteService([FromRoute] int id)
{
    var service = await _context.Services.FirstOrDefaultAsync(s => s.Id == id);
    if (service is not null)
    {
        _context.Services.Remove(service);
        await _context.SaveChangesAsync();
    }
    await Response.WriteAsync("Successfully deleted record");
}
Using LINQ (Language-Integrated Query) to filter and aggregate data makes the code more efficient, as well as cleaner and more concise. When using operators such as .Where(), .Select(), .CountAsync() or .Sum(), the filtering is performed as part of the database query. This way, only the necessary data is returned, without the need to create extra variables or scan large lists in memory for specific values.
Instead of searching and filtering data manually:
[HttpGet("error-logs/count")]
public async Task<ActionResult<int>> GetErrorLogsCount()
{
    var errorLogs = await _context.Logs.ToListAsync();

    int errorLogCount = 0;
    foreach (var errorLog in errorLogs)
    {
        if (errorLog.LogLevel == InternalLogLevel.Error)
        {
            errorLogCount++;
        }
    }

    return Ok(errorLogCount);
}
Use LINQ features to create efficient queries:
[HttpGet("error-logs/count")]
public async Task<ActionResult<int>> GetErrorLogsCount()
{
    int errorLogsCount = await _context.Logs
        .Where(l => l.LogLevel == InternalLogLevel.Error)
        .CountAsync();

    return Ok(errorLogsCount);
}
Caching is a technique that can significantly improve the performance of an application by reducing the consumption of database resources.
On the first request, the data is retrieved from the source and inserted into the cache storage. On subsequent requests, if the data exists in the cache, it is returned to the requester immediately, without the need to query the source.
Caching is best suited for scenarios where data changes infrequently and needs to be available quickly when requested.
ASP.NET Core supports two main types of caching: in-memory caching and distributed caching.
In-memory caching is the simplest and is provided through the IMemoryCache interface. IMemoryCache represents a cache stored in the web server’s memory.
Below is an in-memory cache example:
[HttpGet("log-errors-cache-in-memory")]
public async Task<ActionResult<List<Log>>> GetErrorLogsWithCacheInMemory()
{
    const string CacheKey = "logs_with_error";

    if (!_memoryCache.TryGetValue(CacheKey, out List<Log>? logErrors))
    {
        logErrors = await _context.Logs
            .Where(l => l.LogLevel == InternalLogLevel.Error)
            .ToListAsync();

        var cacheEntryOptions = new MemoryCacheEntryOptions()
            .SetAbsoluteExpiration(TimeSpan.FromMinutes(30));

        _memoryCache.Set(CacheKey, logErrors, cacheEntryOptions);
    }

    return Ok(logErrors);
}
Note that this endpoint now checks whether the data exists in the in-memory cache. If it does not, it is queried and added to the cache, so on the next request the data is returned directly from memory, without the need for a new database query.
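For the IMemoryCache dependency used above to be resolvable, the in-memory cache service must be registered in the dependency injection container at startup. A minimal sketch of the registration in Program.cs:

```csharp
var builder = WebApplication.CreateBuilder(args);

// Registers the in-memory cache service (IMemoryCache) in the DI container,
// so it can be injected into controllers through their constructor.
builder.Services.AddMemoryCache();

builder.Services.AddControllers();

var app = builder.Build();
app.MapControllers();
app.Run();
```

The controller then receives the cache via constructor injection, typically storing it in a private `_memoryCache` field as in the endpoint above.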
The other way to implement caching is using distributed caching, which can be shared across multiple servers and is typically maintained as an external service.
ASP.NET Core supports several distributed cache providers. One of the best known for Memcached is EnyimMemcachedCore, an open-source client library for ASP.NET Core that offers good support for working with a distributed cache.
To use Memcached, you need to have a server running an instance of Memcached or you can use it via Docker.
The code below shows an endpoint using Memcached:
[HttpGet("log-errors-cache-distributed")]
public async Task<ActionResult<List<Log>>> GetErrorLogsWithCacheDistributed()
{
    const string CacheKey = "logs_with_error";

    List<Log>? logErrors = await _memcachedClient.GetValueAsync<List<Log>>(CacheKey);

    if (logErrors == null)
    {
        logErrors = await _context.Logs
            .Where(l => l.LogLevel == InternalLogLevel.Error)
            .ToListAsync();

        await _memcachedClient.SetAsync(CacheKey, logErrors, TimeSpan.FromMinutes(30));
    }

    return Ok(logErrors);
}
Note that the implementation looks very similar to the previous approach that uses in-memory caching. The biggest difference is that the distributed cache is stored on an external server or service.
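Wiring up EnyimMemcachedCore involves registering the client at startup and pointing it at a running Memcached instance. The sketch below assumes a local server on Memcached's default port 11211; the exact registration API may differ between library versions, so treat the details as an assumption to verify against the EnyimMemcachedCore documentation:

```csharp
// One way to start a local Memcached instance for development (assumption):
//   docker run -d -p 11211:11211 memcached

var builder = WebApplication.CreateBuilder(args);

// Registers the Memcached client (IMemcachedClient) in the DI container.
// The server address and port below are assumptions for a local setup.
builder.Services.AddEnyimMemcached(options =>
{
    options.AddServer("localhost", 11211);
});

builder.Services.AddControllers();

var app = builder.Build();
app.MapControllers();
app.Run();
```

As with IMemoryCache, the controller receives the client through constructor injection, here stored in the `_memcachedClient` field used by the endpoint above.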
Using JSON in relational databases can be an efficient way to optimize the performance of an API, especially in scenarios where data manipulation does not require complex business rules.
Although relational databases specialize in creating relationships between tables, it is possible to use common features in non-relational databases such as JSON data. To do this, simply insert the JSON data into a text column. Some databases have specific features for this type of work, such as PostgreSQL, which has a special type (JSONB) to handle data in JSON format.
Manipulating JSON data directly can reduce mapping work, eliminating the need to create C# entities for each JSON structure. It can also simplify data writing, especially if the entire JSON document is modified frequently, since it avoids multiple table operations (you simply delete the old data and insert the new). When handling large amounts of data, JSON manipulation can effectively improve an API's performance.
The code below shows an endpoint manipulating JSON and inserting it directly into the database in a text column:
// JSON log service class
public class ServiceJsonLog
{
    public int Id { get; set; }
    public string JsonLogData { get; set; } = string.Empty;
}

// Endpoint that receives JSON data
[HttpPost("create-json-log")]
public async Task<ActionResult> PostServiceJsonLog([FromBody] ServiceJsonLog serviceJsonLog)
{
    await _context.ServiceJsonLogs.AddAsync(serviceJsonLog);
    await _context.SaveChangesAsync();

    return NoContent();
}
JSON sent in the request:
{
  "jsonLogData": "{\"Id\": 1, \"ServiceId\": 101, \"Service\": {\"Id\": 101, \"Name\": \"Service 01922\", \"Description\": \"SVC News web service\"}, \"Message\": \"Error to request\", \"LogLevel\": \"Error\", \"Timestamp\": \"2024-07-26T15:30:00\"}"
}
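If the database is PostgreSQL, the same column can be mapped to the JSONB type, which lets the database index and query inside the document. A minimal sketch of the EF Core mapping, assuming the Npgsql provider (the context and property names follow the example above; verify the details against the Npgsql documentation):

```csharp
public class LogDbContext : DbContext
{
    public LogDbContext(DbContextOptions<LogDbContext> options) : base(options) { }

    public DbSet<ServiceJsonLog> ServiceJsonLogs { get; set; }

    protected override void OnModelCreating(ModelBuilder modelBuilder)
    {
        // Stores the raw JSON string in a PostgreSQL jsonb column instead of
        // plain text, enabling indexing and server-side JSON operators.
        modelBuilder.Entity<ServiceJsonLog>()
            .Property(l => l.JsonLogData)
            .HasColumnType("jsonb");
    }
}
```

With a plain text column nothing changes in the endpoint itself; the jsonb mapping only changes how PostgreSQL stores and can query the data.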
Using optimized code is important to maintain good performance. There are features in C# that, if used incorrectly, can have side effects that contribute to excessive resource usage. Take a look at the example below:
[HttpGet]
public async Task<ActionResult<IEnumerable<Log>>> GetLogs()
{
    IEnumerable<Log> logs = await _context.Logs.ToListAsync();

    var result = logs.Where(log => log.LogLevel == InternalLogLevel.Error).ToList();

    return Ok(result);
}
Note that here we are unnecessarily loading the entire log list and only then filtering the error records in memory. In addition, we are using the IEnumerable<> interface and then converting it to List<>.
To improve this, we can do the following:
[HttpGet]
public async Task<ActionResult<List<Log>>> GetLogs()
{
    List<Log> logs = await _context.Logs
        .Where(log => log.LogLevel == InternalLogLevel.Error)
        .ToListAsync();

    return Ok(logs);
}
In the second example, we apply the filtering directly in the database query, without creating a new variable to store the log data, which is more efficient. In addition, we eliminate unnecessary conversions by using only the List<> type.
Although it may seem simple, if you look closely at the code, you can find several opportunities for improvement, which together can result in a large performance gain.
Knowing and using optimization techniques is essential for creating efficient web APIs, especially in medium and large systems, where data traffic tends to increase exponentially.
In addition, implementing effective optimization strategies can make a difference when dealing with large volumes of simultaneous requests without compromising response time.
In this post, we covered nine tips for optimizing resources and improving performance in an ASP.NET Core API. We covered everything from simple concepts such as code optimization to more complex implementations such as the use of distributed caching.
So, when working with web APIs, consider adopting optimization practices for an efficient and scalable solution.