It’s amazing to see the .NET community with so much energy. Can you believe there’s a C# Advent Calendar? That means there are 25 (including this one) all-new articles for the month of December!
As the .NET ecosystem moves at record speed these days, it's easy to overlook some of the amazing things happening in this space. New tools and features are constantly being released that can boost productivity. Let's take a look at a few features Microsoft recently shipped that help with testing, front-end development, and cross-platform migrations.
Entity Framework (EF) Core now targets .NET Standard 2.0, which means it can be used on any platform that implements .NET Standard 2.0. While EF Core boasts many new features, the InMemory database provider is one that shouldn't go unnoticed. This database provider allows Entity Framework Core to be used with an in-memory database and was designed for testing purposes.
The InMemory provider is useful when testing a service that performs operations using a DbContext to connect with a SQL Server database. Since the DbContext can be swapped with one using the InMemory database provider, we can avoid elaborate test setups, code modification or test doubles. Even though it is explicitly designed for testing, the InMemory provider also makes short work of demos, prototypes, and other non-production sample projects.
The InMemory provider can be added to a project using NuGet by installing the Microsoft.EntityFrameworkCore.InMemory package.
Package manager console:
PM> Install-Package Microsoft.EntityFrameworkCore.InMemory -Version 2.0.1
.NET Core CLI:
$ dotnet add package Microsoft.EntityFrameworkCore.InMemory
Visual Studio NuGet Package Manager:
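Whichever route you choose, the result is a PackageReference entry in the project file (the version shown matches the Package Manager example above):

```xml
<ItemGroup>
  <PackageReference Include="Microsoft.EntityFrameworkCore.InMemory" Version="2.0.1" />
</ItemGroup>
```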
Once the InMemory provider is set up, we'll need to add a constructor to our DbContext that accepts a DbContextOptions<TContext> object. The options object allows us to specify the InMemory provider via configuration, making it possible to swap providers as needed.
using Microsoft.EntityFrameworkCore;

public class BloggingContext : DbContext
{
    public BloggingContext()
    { }

    public BloggingContext(DbContextOptions<BloggingContext> options)
        : base(options)
    { }

    // Entity set queried by the tests below
    public DbSet<Blog> Blogs { get; set; }
}
With our DbContext implementation in place, we can then create a new instance of the context. Calling the UseInMemoryDatabase extension method configures the context to connect to an in-memory database. Once the database is configured, we can seed it with data and run our tests.

[TestMethod]
public void Find_searches_url()
{
    var options = new DbContextOptionsBuilder<BloggingContext>()
        .UseInMemoryDatabase(databaseName: "Find_searches_url")
        .Options;

    // Insert seed data into the database using one instance of the context
    using (var context = new BloggingContext(options))
    {
        context.Blogs.Add(new Blog { Url = "http://sample.com/cats" });
        context.Blogs.Add(new Blog { Url = "http://sample.com/catfish" });
        context.Blogs.Add(new Blog { Url = "http://sample.com/dogs" });
        context.SaveChanges();
    }

    // Use a clean instance of the context to run the test
    using (var context = new BloggingContext(options))
    {
        var service = new BlogService(context);
        var result = service.Find("cat");

        Assert.AreEqual(2, result.Count());
    }
}
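The test above exercises a BlogService that isn't shown in the article. As a rough sketch of what such a service might look like (the Blog entity and the Find semantics here are assumptions; in the article the service wraps a BloggingContext, while this sketch abstracts the query source to IQueryable<Blog> so the search logic stands on its own):

```csharp
using System.Collections.Generic;
using System.Linq;

public class Blog
{
    public int BlogId { get; set; }
    public string Url { get; set; }
}

// Hypothetical service under test. In the article's version the
// constructor takes a BloggingContext and queries context.Blogs.
public class BlogService
{
    private readonly IQueryable<Blog> _blogs;

    public BlogService(IQueryable<Blog> blogs) => _blogs = blogs;

    // Returns all blogs whose Url contains the search term.
    public List<Blog> Find(string term) =>
        _blogs.Where(b => b.Url.Contains(term)).ToList();
}
```

Because the service only depends on a queryable source, swapping the SQL Server provider for InMemory (or even a plain collection) changes nothing about the logic under test.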
Speaking of tests, the .NET Core CLI can scaffold an xUnit test project with a single command:

$ dotnet new xunit
From Visual Studio: File > New (or Add) > Project
The new project is initialized with all the necessary dependencies. In addition, it works seamlessly with the Visual Studio 2017 Test Explorer and supports Live Unit Testing in Visual Studio 2017 Enterprise.
ASP.NET Core also ships single-page application templates for the most popular front-end frameworks. Create a project from one of them with:

$ dotnet new angular    # or: react, reactredux
From Visual Studio 2017
What we’re getting from the template is a highly opinionated setup where many of the nuances required to initialize a project are handled for us. Webpack is configured to deal with vendor dependencies and server-side rendering, and the setup includes processes to kick off the actual tooling, such that Webpack is initialized from within the ASP.NET application at startup. In addition, Hot Module Replacement (HMR) is triggered from the application's middleware, further automating the development process.
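As a sketch of that wiring (assuming the Microsoft.AspNetCore.SpaServices package these templates referenced at the time), the Webpack dev middleware with HMR is registered during development in Startup.Configure:

```csharp
using Microsoft.AspNetCore.Builder;
using Microsoft.AspNetCore.Hosting;
using Microsoft.AspNetCore.SpaServices.Webpack;

public class Startup
{
    public void Configure(IApplicationBuilder app, IHostingEnvironment env)
    {
        if (env.IsDevelopment())
        {
            // Builds bundles on the fly and pushes updated modules
            // to the browser without a full page reload.
            app.UseWebpackDevMiddleware(new WebpackDevMiddlewareOptions
            {
                HotModuleReplacement = true
            });
        }

        app.UseStaticFiles();
        // ... rest of the pipeline (MVC routes, etc.)
    }
}
```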
It's a must-have for any developer working with npm from Visual Studio, so grab it from the Visual Studio Marketplace.
.NET Core 2.0 is a milestone release for backwards compatibility. With .NET Core and .NET Standard 2.0, there's little need to worry about legacy code. If there's one thing Microsoft understands well, it's that existing applications need a low-friction upgrade path. Several attributes of .NET Core make it well suited for transitioning from legacy code. Because .NET Standard 2.0 has near API parity with .NET Framework 4.6.1, applications can make use of code they already know. In addition, .NET Core includes a compatibility shim that allows projects to reference existing .NET Framework NuGet packages and projects. This means most packages on NuGet today are already compatible with .NET Core 2.0 without needing to be recompiled.
Because .NET Core is cross-platform, it omits APIs that exist only on Windows. For example, Linux has no Windows Registry, so Microsoft.Win32.Registry isn't available in .NET Core. This can pose a problem when migrating applications from .NET Framework to .NET Core. To address it, Microsoft has released the Windows Compatibility Pack for .NET Core (currently in preview).
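One pattern worth knowing here: Windows-only calls brought back by the Compatibility Pack should be guarded with a runtime platform check so the same binary still runs on Linux or macOS. A minimal sketch (the PlatformGuard class and its return strings are illustrative, not part of any API):

```csharp
using System.Runtime.InteropServices;

// Sketch: guard a Windows-only API with a runtime platform check.
// With the Windows Compatibility Pack installed, registry calls such as
// Registry.LocalMachine.OpenSubKey(...) compile on every platform but
// fail at runtime off Windows, so we check the OS first.
public static class PlatformGuard
{
    public static string RegistrySupport()
    {
        if (RuntimeInformation.IsOSPlatform(OSPlatform.Windows))
        {
            // Safe to use Microsoft.Win32.Registry here.
            return "registry available";
        }

        return "registry unavailable";
    }
}
```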
The Windows Compatibility Pack provides access to an additional 20,000 APIs missing from .NET Core, bridging the gap by allowing those APIs to run when the application is on Windows. Having the ability to migrate existing applications to the next generation gives .NET developers the freedom to write future-proof code and move the business forward with minimal risk. Microsoft also documents a set of migration steps for smoothly transitioning to other platforms.
Ed