

Removing Disqus and adding GitHub Issue Comments

Monday, November 19, 2018 12:00 AM

I recently realized that I have been running this new blog for almost exactly three years now and have written almost 100 posts so far. Running this blog is completely different compared to the previous one, which was based on Community Server on ASP.NET Zone. I now write Markdown files which I commit and push to GitHub. I also switched the language: from January 2007 to November 2015 I wrote in German, and since I started this GitHub-based blog I have switched completely to English, which is a great experience and improves my English writing and speaking skills a lot.

This blog is based on Pretzel, a .NET-based Jekyll clone that creates a static website. Pretzel, like Jekyll, is optimized for blogs or similarly structured websites. Both systems take Markdown files and turn them into static HTML pages using the Liquid template engine. This works pretty well, and I really like pushing Markdown files to the GitHub repo and getting an updated blog a few seconds later on Azure. This is continuous delivery using GitHub and Azure for blog posts. It is amazing, and I really love blogging this way.

Actually, the blog is successful from my perspective. Around 6k visits per week is a good number, I guess.

Because the blog is static HTML, I need to extend it with software-as-a-service solutions to create dynamic content or to track the blog's success.

So I added Disqus to enable comments on this blog. Disqus was quite popular for this kind of blog at that time, and I also got some traffic from Disqus. But now this service has started to show advertisements on my page, advertisements that are not really related to the contents of my page.

I also added a small Google AdSense banner to the blog, but it is placed at the end of the page and doesn't really annoy you as a reader, I hope. I put some text above this banner to ask you as a reader to support my blog if you like it. A click on that banner doesn't really cost you any time or money.

I don't get anything out of the annoying off-topic ads that Disqus shows here, except a free tool to collect blog post comments and store them somewhere out in the cloud. I don't really "own" the comments, which is the other downside.

Sure, Disqus is a free service and someone needs to pay for it, but the ownership of the contents is a problem, as is the fact that I cannot influence the contents of the ads displayed on my blog:

Owning the comments

The comments are important content that you provide to me, to the other readers and to the entire developer community. But they are completely separated from the blog posts they relate to. They are stored in a different cloud, and actually I have no idea where Disqus stores the comments.

How do I own the comments?

My idea was to use GitHub issues of the blog repository to collect the comments. The first comment on a blog post should create a GitHub issue, and every following comment becomes a comment on this issue. With this solution the actual posts and the comments are in the same repository, they can be linked together, and I own these comments a little more than before.

I already asked on Twitter about this and got some positive feedback.

Evaluating a solution

There are already some JavaScript widgets available that can be used to add GitHub issues as comments. The GitHub API is well documented, so it should be easy to do this.

I already evaluated a solution and decided to go with Utterances:

"A lightweight comments widget built on GitHub issues"

Utterances was built by Jeremy Danyow. I stumbled upon it in Jeremy's blog post about Using GitHub Issues for Blog Comments. Jeremy works as a Senior Software Engineer at Microsoft, is a member of the Aurelia core team, and also created gist.run.

As far as I understood, Utterances is a lightweight version of Microsoft's comment system used with the new docs on https://docs.microsoft.com. Microsoft also stores the comments as issues on GitHub, which is nice because they can create real issues out of them in case there are real problems with the docs, etc.

More links about it: https://utteranc.es/ and https://github.com/utterance.

At the end I just need to add a small HTML snippet to my blog.


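Following the Utterances documentation, that snippet looks roughly like this; the repo value below is just a placeholder for the blog's GitHub repository:

<script src="https://utteranc.es/client.js"
        repo="github-user/blog-repository"
        issue-term="title"
        theme="github-light"
        crossorigin="anonymous"
        async>
</script>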
This script will search for issues with the same title as the current page. If there's no such issue, it will create a new one. If there is one, it will create a comment on that issue. The script also supports Markdown.

Open questions so far

Some important open questions came up while evaluating the solution:

  1. Is it possible to import all the Disqus comments to GitHub issues?
    • This is what I need to figure out now.
    • It would be bad not to have the existing comments available in the new system.
  2. What if Jeremy's services are not available anymore?

The second question is easy to solve: as I wrote, I will just host the stuff on my own in case Jeremy shuts down his services. The first question is much more essential. It would be cool to get the comments in some readable format. I would then write a small script or a small console app to import the comments as GitHub issues.

Exporting the Disqus comments to GitHub Issues

Fortunately there is an export feature on Disqus, in the administration settings of the site:

After clicking "Export Comment" the export gets scheduled, and you'll get an email with the download link to the export.

The exported file is a GZ-compressed XML file including all threads and posts. A thread in this case is an entry per blog post where the comment form was visible; a thread doesn't necessarily contain comments. Posts are comments related to a thread. A post contains the actual comment as a message, author information, and relations to the thread and to the parent post if it is a reply to a comment.
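Stripped down to the interesting elements, the export looks roughly like the following sketch. The values are made up, but the namespaces are the real Disqus ones:

<disqus xmlns="http://disqus.com"
        xmlns:dsq="http://disqus.com/disqus-internals">
    <thread dsq:id="1234567890">
        <link>https://asp.net-hacker.rocks/2018/01/01/a-blog-post.html</link>
        <title>A blog post title</title>
        <createdAt>2018-01-01T10:00:00Z</createdAt>
        <isClosed>false</isClosed>
        <isDeleted>false</isDeleted>
    </thread>
    <post dsq:id="9876543210">
        <message><![CDATA[<p>The actual comment text</p>]]></message>
        <createdAt>2018-01-02T11:00:00Z</createdAt>
        <isDeleted>false</isDeleted>
        <isSpam>false</isSpam>
        <author>
            <name>Jane Reader</name>
            <username>jane_reader</username>
        </author>
        <thread dsq:id="1234567890" />
    </post>
</disqus>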

This is pretty clean XML, and it should be easy to import it automatically into GitHub issues. Now I needed to figure out how the GitHub API works and to write a small C# script to import all the comments.

This XML also includes the authors' names and usernames. This is cool to know, but it doesn't have any value for me anymore, because Disqus users are not GitHub users and I can't create the comments on behalf of real GitHub users. So every migrated comment will be posted by myself, and I need to mark the comment as originally written by another reader.

So it will be something like this:

var message = $@"Comment written by **{post.Author}** on **{post.CreatedAt}**

{post.Message}
";

Importing the comments

I decided to write a small console app and to do some initial tests on a test repo. I extracted the exported data, moved it into the .NET Core console app folder, and tried to play around with it.

First I read all threads out of the file, and the posts afterwards. I only selected the threads that are not marked as closed and not marked as deleted. I also checked the blog post URL of the thread, because sometimes a thread was created by a local test run, sometimes I changed the publication date of a post afterwards (which also changed the URL), and sometimes a thread was created by a post that was displayed via a proxying page. I tried to filter all that stuff out. The URL needs to start with http://asp.net-hacker.rocks or https://asp.net-hacker.rocks to be valid. Also, the posts shouldn't be marked as deleted or as spam.

Then I assigned the posts to the specific threads using the provided thread id and ordered the posts by date. This breaks the dialogue hierarchy of the Disqus threads, but should be OK for a first step.

Then I created the actual issue and posted the assigned comments to the new issue.

That's it.

Reading the XML file is easy using the XmlDocument class, which is also available in .NET Core:

var doc = new XmlDocument();
doc.Load(path);
var nsmgr = new XmlNamespaceManager(doc.NameTable);
nsmgr.AddNamespace(String.Empty, "http://disqus.com");
nsmgr.AddNamespace("def", "http://disqus.com");
nsmgr.AddNamespace("dsq", "http://disqus.com/disqus-internals");

IEnumerable<Thread> threads = await FindThreads(doc, nsmgr);
IEnumerable<Post> posts = FindPosts(doc, nsmgr);

Console.WriteLine($"{threads.Count()} valid threads found");
Console.WriteLine($"{posts.Count()} valid posts found");

I need to use the XmlNamespaceManager here to query tags and attributes using the Disqus namespaces. The XmlDocument as well as the XmlNamespaceManager need to be passed into the read methods. The two find methods then read the threads and posts out of the XmlDocument.

In the next snippet I show the code to read the threads:

private static async Task<IEnumerable<Thread>> FindThreads(XmlDocument doc, XmlNamespaceManager nsmgr)
{
    var xthreads = doc.DocumentElement.SelectNodes("def:thread", nsmgr);

    var threads = new List<Thread>();
    var i = 0;
    foreach (XmlNode xthread in xthreads)
    {
        i++;

        long threadId = xthread.AttributeValue<long>(0);
        var isDeleted = xthread["isDeleted"].NodeValue<bool>();
        var isClosed = xthread["isClosed"].NodeValue<bool>();
        var url = xthread["link"].NodeValue();
        var isValid = await CheckThreadUrl(url);

        Console.WriteLine($"{i:###} Found thread ({threadId}) '{xthread["title"].NodeValue()}'");

        if (isDeleted)
        {
            Console.WriteLine($"{i:###} Thread ({threadId}) was deleted.");
            continue;
        }
        if (isClosed)
        {
            Console.WriteLine($"{i:###} Thread ({threadId}) was closed.");
            continue;
        }
        if (!isValid)
        {
            Console.WriteLine($"{i:###} the url Thread ({threadId}) is not valid: {url}");
            continue;
        }

        Console.WriteLine($"{i:###} Thread ({threadId}) is valid");
        threads.Add(new Thread(threadId)
        {
            Title = xthread["title"].NodeValue(),
            Url = url,
            CreatedAt = xthread["createdAt"].NodeValue()

        });
    }

    return threads;
}

I think there's nothing magical in it. Even assigning the posts to the threads is just some LINQ code.
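That LINQ code could look roughly like the following sketch. Thread and Post are my own model classes, so the Posts, ThreadId, Id and CreatedAt property names here are assumptions for illustration:

foreach (var thread in threads)
{
    // collect the posts that belong to this thread, oldest first
    thread.Posts = posts
        .Where(x => x.ThreadId == thread.Id)
        .OrderBy(x => x.CreatedAt)
        .ToList();
}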

To create the actual issues and comments, I use the Octokit.NET library which is available on NuGet and GitHub.

dotnet add package Octokit

This library is quite simple to use and well documented. You have the choice between basic authentication and token authentication to connect to GitHub. I chose token authentication, which is the proposed way to connect. To get the token you need to go to the settings of your GitHub account, choose a personal access token, and specify the rights for the token. The basic rights to contribute to the specific repository are enough in this case:

private static async Task PostIssuesToGitHub(IEnumerable<Thread> threads)
{
    var client = new GitHubClient(new ProductHeaderValue("DisqusToGithubIssues"));
    var tokenAuth = new Credentials("secret personal token from github");
    client.Credentials = tokenAuth;

    var issues = await client.Issue.GetAllForRepository(repoOwner, repoName);
    foreach (var thread in threads)
    {
        if (issues.Any(x => !x.ClosedAt.HasValue && x.Title.Equals(thread.Title)))
        {
            continue;
        }

        var newIssue = new NewIssue(thread.Title);
        newIssue.Body = $@"Written on {thread.CreatedAt} 

URL: {thread.Url}
";

        var issue = await client.Issue.Create(repoOwner, repoName, newIssue);
        Console.WriteLine($"New issue (#{issue.Number}) created: {issue.Url}");
        await Task.Delay(1000 * 5);

        foreach (var post in thread.Posts)
        {
            var message = $@"Comment written by **{post.Author}** on **{post.CreatedAt}**

{post.Message}
";

            var comment = await client.Issue.Comment.Create(repoOwner, repoName, issue.Number, message);
            Console.WriteLine($"New comment by {post.Author} at {post.CreatedAt}");
            await Task.Delay(1000 * 5);
        }
    }
}

This method gets the list of Disqus threads, creates the GitHub client, and inserts one thread after another. I also read the existing issues from GitHub, in case I need to run the migration twice because of an error. After the issue is created, I only need to create the comments per issue.

After I started the code, the console app began adding issues and comments to GitHub:

The comments are set as expected:

Unfortunately the import breaks after a while with a weird exception.

Octokit.AbuseException

Unfortunately that run didn't finish. After the first few issues were created I got an exception like this:

Octokit.AbuseException: 'You have triggered an abuse detection mechanism and have been temporarily blocked from content creation. Please retry your request again later.'

This exception happens because I reached the creation rate limit (user.creation_rate_limit_exceeded). This limit is set by GitHub on the public API; it is not allowed to do more than 5000 requests per hour: https://developer.github.com/v3/#rate-limiting

You can see such security-related events in the security tab of your GitHub account settings.

There is no real solution to this problem, except adding more checks and fallbacks to the migration code. I check which issues already exist and migrate only the issues that don't. I also added a five-second delay between each request to GitHub. This increases the migration time, but in the end I only had to start the migration twice. Without the delay I got the exception more often during the tests.
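Another option I didn't implement would be to catch the exception and wait before retrying. As far as I know, Octokit's AbuseException exposes a RetryAfterSeconds property; a rough sketch, reusing the client, repoOwner, repoName and newIssue from the snippet above:

Issue issue;
try
{
    issue = await client.Issue.Create(repoOwner, repoName, newIssue);
}
catch (AbuseException ex)
{
    // wait as long as GitHub asks us to, or one minute if not specified
    await Task.Delay(TimeSpan.FromSeconds(ex.RetryAfterSeconds ?? 60));
    issue = await client.Issue.Create(repoOwner, repoName, newIssue);
}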

Using Utterances

Once the issues are migrated to GitHub, I need to add Utterances to the blog. At first you need to install the Utterances app on your repository. The repository needs to be public, and the issues obviously need to be enabled.

On https://utteranc.es/ there is a kind of configuration wizard that creates the HTML snippet for you, which you need to add to your blog. In my case it is the small snippet I already showed previously.


This loads the Utterances client script, configures my blog repository, and specifies the way the issues will be found in my repository. You have different options for the issue-term. Since I set the blog post title as the GitHub issue title, I need to tell Utterances to look at the title. The theme I want to use here is the GitHub light theme, because the dark theme doesn't fit the blog style. I was also able to adjust the styling by overriding the following two CSS classes:

.utterances {}
.utterances-frame {}

The result

In the end it works pretty well. After the migration, and after I changed the relevant blog template, I tried it locally using the pretzel taste command.

If you want to add a comment as a reader, you need to log on with your GitHub account and grant the Utterances app permission to post to my repo with your name.

Now every new comment will be stored in the repository of my blog. All the contents are in the same repository: there is an issue per post, so they are almost directly linked.

What do you think? Do you like it? Tell me your opinion :-)

BTW: You will find the migration tool on GitHub.

Disabling comments on this blog until they are moved to GitHub

Friday, November 16, 2018 12:00 AM

I'm going to remove the Disqus comments from this blog and move to GitHub issue based comments. The reason is that I don't want advertisements that are not related to the contents of this page. Another reason is that I want full control over the comments. The third reason is related to the GDPR: I have no idea yet what Disqus is doing to protect the users' privacy and how the users are able to control their personal data. With the advertisements they are displaying, it gets even less transparent, because I don't know what the original source of the ads is and who is responsible for the users' personal data.

I removed Disqus from my blog

I'm currently migrating all the Disqus comments to GitHub issues. There will be a GitHub issue per blog post, and the issue comments will then be the blog post comments. I will lose the dialogue hierarchy of the comments, but this isn't really needed. Another downside for you readers is that you will need a GitHub account to create comments. On the other hand, most of you already have one, and you no longer need a Disqus account to drop a comment.

To do the migration I removed Disqus first and exported all the comments. After a few days of migrating and testing I'll enable the GitHub issue comments on my blog. There will be a comment form on each blog post as usual, and you won't need to go to GitHub to drop a comment.

I will write a detailed blog post about the new comment system and how I migrated it, once it's done.

The new GitHub issue based comments should be available after the weekend

Customizing ASP.​NET Core Part 10: TagHelpers

Tuesday, November 13, 2018 12:00 AM

This was initially planned as the last topic of this series, because it was also the last part of the talk about customizing ASP.NET Core I did in the past. See the initial post about this series. Meanwhile I have three additional customizing topics to talk about. If you'd like to propose another topic, feel free to drop a comment in the initial post.

In this tenth part of this series I'm going to write about TagHelpers. The built-in TagHelpers are pretty useful and make the Razor code prettier and more readable. Creating custom TagHelpers will make your life much easier.

This series topics

About TagHelpers

With TagHelpers you are able to extend existing HTML tags or to create new tags that get rendered on the server side. The extensions and the new tags are not visible in the browser. TagHelpers are only a kind of shortcut that lets you write easier and less HTML or Razor code on the server side. TagHelpers will be interpreted on the server and will produce "real" HTML code for the browsers.

TagHelpers are not a new thing in ASP.NET Core; they have been there since the first version. Most of the built-in TagHelpers are a replacement for the old-fashioned HtmlHelpers, which still exist and work in ASP.NET Core to keep the Razor views compatible.

A very basic example of extending HTML tags is the built-in AnchorTagHelper. Compare the classic HtmlHelper with the TagHelper version:

@Html.Link("Home", "Index", "Home")

<a asp-controller="Home" asp-action="Index">Home</a>

The HtmlHelpers feel kind of strange between the HTML tags, at least for HTML developers: they are hard to read, and they disturb and interrupt the reading flow of the code. Maybe not for ASP.NET Core developers who are used to reading that kind of code, but compared to the TagHelper they are really ugly. The TagHelpers feel more natural and more like HTML, even if they are not and even if they are rendered on the server.

    Many of the HtmlHelpers can be replaced with TagHelpers.

    There are also some new tags built with TagHelpers: tags that don't exist in HTML, but look like HTML. One example is the EnvironmentTagHelper, shown here with a typical pair of style and script references (the exact paths are just an example):

    <environment include="Development">
        <link rel="stylesheet" href="~/css/site.css" />
        <script src="~/js/site.js"></script>
    </environment>
    <environment exclude="Development">
        <link rel="stylesheet" href="~/css/site.min.css" />
        <script src="~/js/site.min.js"></script>
    </environment>

    This TagHelper renders or doesn't render its contents depending on the current runtime environment. In this case the target environment is the development mode. The first environment tag renders its contents if the current runtime environment is set to Development, and the second one renders its contents if it is not set to Development. This makes it a useful helper to render debuggable scripts or styles in Development mode and minified, optimized code in any other runtime environment.

    Creating custom TagHelpers

    Just as a quick example, let's assume we need to have any tag configurable as bold and colored in a specific color:

    <p strong color="red">Use this area to provide additional information.</p>

    This looks like pretty old-fashioned HTML out of the nineties, but it's just meant to demonstrate a simple TagHelper. This can be done with a TagHelper that extends any tag that has an attribute called strong:

    [HtmlTargetElement(Attributes = "strong")]
    public class StrongTagHelper : TagHelper
    {
        public string Color { get; set; }
    
        public override void Process(TagHelperContext context, TagHelperOutput output)
        {
            output.Attributes.RemoveAll("strong");
    
            output.Attributes.Add("style", "font-weight:bold;");
            if (!String.IsNullOrWhiteSpace(Color))
            {
                output.Attributes.RemoveAll("style");
                output.Attributes.Add("style", $"font-weight:bold;color:{Color};");
            }
        }
    }
    

    The first line tells the TagHelper to work on tags with a target attribute called strong. This TagHelper doesn't define its own tag, but it does provide an additional attribute to specify the color. Finally, the Process method defines how to render the HTML to the output stream. In this case it adds some inline CSS styles to the current tag and removes the target attribute from the current tag. The color attribute won't show up either.

    The result will look like this:

    <p style="font-weight:bold;color:red;">Use this area to provide additional information.</p>

    The next sample shows how to define a custom tag using a TagHelper:

    public class GreeterTagHelper : TagHelper
    {
        [HtmlAttributeName("name")]
        public string Name { get; set; }
    
        public override void Process(TagHelperContext context, TagHelperOutput output)
        {
            output.TagName = "p";
            output.Content.SetContent($"Hello {Name}");
        }
    }
    

    This TagHelper handles a greeter tag that has a property name. In the Process method the current tag will be changed to a p tag and the new content is set to the current output.

    <greeter name="Readers"></greeter>

    The result is like this:

    Hello Readers

    A more complex scenario

    The TagHelpers in the last section were pretty basic, just to show how TagHelpers work. The next sample is a little more complex and shows an almost real scenario. This TagHelper renders a table with a list of items. It is a generic TagHelper and shows a real reason to create your own custom TagHelpers. With this you are able to reuse an isolated piece of view code. You can, for example, wrap Bootstrap components to make them much easier to use, e.g. with just one tag instead of nesting five levels of div tags. Or you can just simplify your Razor views:

    public class DataGridTagHelper : TagHelper
    {
        [HtmlAttributeName("Items")]
        public IEnumerable<object> Items { get; set; }

        public override void Process(TagHelperContext context, TagHelperOutput output)
        {
            output.TagName = "table";
            output.Attributes.Add("class", "table");
            var props = GetItemProperties();

            TableHeader(output, props);
            TableBody(output, props);
        }

        private void TableHeader(TagHelperOutput output, PropertyInfo[] props)
        {
            output.Content.AppendHtml("<thead>");
            output.Content.AppendHtml("<tr>");
            foreach (var prop in props)
            {
                var name = GetPropertyName(prop);
                output.Content.AppendHtml($"<th>{name}</th>");
            }
            output.Content.AppendHtml("</tr>");
            output.Content.AppendHtml("</thead>");
        }

        private void TableBody(TagHelperOutput output, PropertyInfo[] props)
        {
            output.Content.AppendHtml("<tbody>");
            foreach (var item in Items)
            {
                output.Content.AppendHtml("<tr>");
                foreach (var prop in props)
                {
                    var value = GetPropertyValue(prop, item);
                    output.Content.AppendHtml($"<td>{value}</td>");
                }
                output.Content.AppendHtml("</tr>");
            }
            output.Content.AppendHtml("</tbody>");
        }

        private PropertyInfo[] GetItemProperties()
        {
            var listType = Items.GetType();
            Type itemType;
            if (listType.IsGenericType)
            {
                itemType = listType.GetGenericArguments().First();
                return itemType.GetProperties(BindingFlags.Public | BindingFlags.Instance);
            }
            return new PropertyInfo[] { };
        }

        private string GetPropertyName(PropertyInfo property)
        {
            var attribute = property.GetCustomAttribute<DisplayNameAttribute>();
            if (attribute != null)
            {
                return attribute.DisplayName;
            }
            return property.Name;
        }

        private object GetPropertyValue(PropertyInfo property, object instance)
        {
            return property.GetValue(instance);
        }
    }
    
    

    To use this TagHelper you just need to assign a list of items to this tag:

    <data-grid items="Model.Persons"></data-grid>

    In this case it is a list of persons that we get from the Persons property of our current model. The Person class I use here looks like this:

    public class Person
    {
        [DisplayName("First name")]
        public string FirstName { get; set; }
        
        [DisplayName("Last name")]
        public string LastName { get; set; }
        
        public int Age { get; set; }
        
        [DisplayName("Email address")]
        public string EmailAddress { get; set; }
    }
    

    Not all of the properties have a DisplayNameAttribute, so the fallback in the GetPropertyName method is needed to get the actual property name instead of the DisplayName value.

    To use it in production, this TagHelper needs some more checks and validations, but it works:

    Now you are able to extend this TagHelper with a lot more features, like sorting, filtering, paging and so on. Feel free.

    Conclusion

    TagHelpers are pretty useful for reusing parts of the view and for simplifying and cleaning up your views. You can also provide a library with useful view elements. Here are some examples of existing TagHelper libraries and samples:

    • https://github.com/DamianEdwards/TagHelperPack
    • https://github.com/dpaquette/TagHelperSamples
    • https://www.red-gate.com/simple-talk/dotnet/asp-net/asp-net-core-tag-helpers-bootstrap/
    • https://www.jqwidgets.com/asp.net-core-mvc-tag-helpers/

    This part was initially planned as the last part of this series, but I found some more interesting topics. If you also have some nice ideas to write about, feel free to drop a comment in the introduction post of this series.

    In the next post, I'm going to write about how to customize the hosting of ASP.NET Core web applications: Customizing ASP.NET Core Part 11: Hosting (not yet done)

    Customizing ASP.​NET Core Part 09: ActionFilter

    Monday, October 29, 2018 12:00 AM

    This post is a little late this time. My initial plan was to publish two posts of this series per week, but this doesn't always work out, since there are sometimes more family and work tasks to do than expected.

    Anyway, we keep on customizing at the controller level in this ninth post of this blog series. I'll have a look into ActionFilters and how to create your own ActionFilter to keep your Actions small and readable.

    The series topics

    About ActionFilters

    ActionFilters are a little bit like middlewares, but they are executed on a specific action, on all actions of a specific controller, or, if you apply an ActionFilter globally, on all actions in your application. ActionFilters are made to execute code right before an action is executed or right after it. They were introduced to implement aspects that are not part of the actual action logic. Authorization is such an aspect: I'm sure you already know the AuthorizeAttribute that allows users or groups to access specific Actions or Controllers. The AuthorizeAttribute actually is an ActionFilter. It checks whether the logged-on user is authorized or not, and if not, it redirects to the log-on page.

    The next sample shows the skeletons of a normal ActionFilter and an async ActionFilter:

    public class SampleActionFilter : IActionFilter
    {
        public void OnActionExecuting(ActionExecutingContext context)
        {
            // do something before the action executes
        }
    
        public void OnActionExecuted(ActionExecutedContext context)
        {
            // do something after the action executes
        }
    }
    
    public class SampleAsyncActionFilter : IAsyncActionFilter
    {
        public async Task OnActionExecutionAsync(
            ActionExecutingContext context,
            ActionExecutionDelegate next)
        {
            // do something before the action executes
            var resultContext = await next();
            // do something after the action executes; resultContext.Result will be set
        }
    }
    

    As you can see, there are always two sections to place code to be executed before and after the action is executed. These ActionFilters cannot be used as attributes. If you want to use ActionFilters as attributes on your Controllers, you need to derive from Attribute or from ActionFilterAttribute:

    public class ValidateModelAttribute : ActionFilterAttribute
    {
        public override void OnActionExecuting(ActionExecutingContext context)
        {
            if (!context.ModelState.IsValid)
            {
                context.Result = new BadRequestObjectResult(context.ModelState);
            }
        }
    }
    

    This code shows a simple ActionFilter which always returns a BadRequestObjectResult if the ModelState is not valid. This may be useful in a Web API as a default check on POST, PUT and PATCH requests. It could be extended with a lot more validation logic. We'll see how to use it later on.

    Another possible use case for an ActionFilter is logging. You don't need to log in the Controllers and Actions directly; you can do this in an ActionFilter to avoid messing up the actions with code that isn't relevant to them:

    public class LoggingActionFilter : IActionFilter
    {
        ILogger _logger;
        public LoggingActionFilter(ILoggerFactory loggerFactory)
        {
            _logger = loggerFactory.CreateLogger<LoggingActionFilter>();
        }
    
        public void OnActionExecuting(ActionExecutingContext context)
        {
            // do something before the action executes
            _logger.LogInformation($"Action '{context.ActionDescriptor.DisplayName}' executing");
        }
    
        public void OnActionExecuted(ActionExecutedContext context)
        {
            // do something after the action executes
            _logger.LogInformation($"Action '{context.ActionDescriptor.DisplayName}' executed");
        }
    }
    

    This logs an information message to the console. You are able to get more information about the current Action out of the ActionExecutingContext or the ActionExecutedContext, e.g. the arguments, the argument values and so on. This makes ActionFilters pretty useful.

    Using the ActionFilters

    ActionFilters that actually are Attributes can be registered as an attribute of an Action or a Controller:

    [HttpPost]
    [ValidateModel] // ActionFilter as attribute
    public ActionResult<Person> Post([FromBody] Person model)
    {
        // save the person

        return model; // just to test the action
    }
    

    Here we use the ValidateModelAttribute that checks the ModelState and returns a BadRequestObjectResult in case the ModelState is invalid, so I don't need to check the ModelState in the actual Action.

    To register ActionFilters globally you need to extend the MVC registration in the ConfigureServices method of the Startup.cs:

    services.AddMvc()
        .AddMvcOptions(options =>
        {
            options.Filters.Add(new SampleActionFilter());
            options.Filters.Add(new SampleAsyncActionFilter());
        });
    

    ActionFilters registered like this get executed on every action. This way you are also able to use ActionFilters that don't derive from Attribute.

    The LoggingActionFilter we created previously is a little more special. It depends on an instance of an ILoggerFactory, which needs to be passed into the constructor. This won't work well as an attribute, because attributes don't support constructor injection via dependency injection. The ILoggerFactory is registered in the ASP.NET Core dependency injection container and needs to be injected into the LoggingActionFilter.

    Because of this there are some more ways to register ActionFilters. Globally we are able to register it as a type, that gets instantiated by the dependency injection container and the dependencies can be solved by the container.

    services.AddMvc()
        .AddMvcOptions(options =>
        {
            options.Filters.Add<LoggingActionFilter>();
        });
    

    This works well. We now have the ILoggerFactory available in the filter.

    To support automatic resolution in Attributes, you need to use the ServiceFilterAttribute on the Controller or Action level:

    [ServiceFilter(typeof(LoggingActionFilter))]
    public class HomeController : Controller
    {
    

    In addition to the global filter registration, the ActionFilter needs to be registered in the ServiceCollection before we can use it with the ServiceFilterAttribute:

    services.AddSingleton<LoggingActionFilter>();
    

    To be complete, there is another way to use ActionFilters that need arguments passed into the constructor: you can use the TypeFilterAttribute to automatically instantiate the filter. Using this attribute, the filter isn't instantiated by the dependency injection container, and the arguments need to be specified as arguments of the TypeFilterAttribute. See the next snippet from the docs:

    [TypeFilter(typeof(AddHeaderAttribute),
        Arguments = new object[] { "Author", "Juergen Gutsch (@sharpcms)" })]
    public IActionResult Hi(string name)
    {
        return Content($"Hi {name}");
    }
    

    The type of the filter and the arguments are specified with the TypeFilterAttribute.

    Conclusion

    Personally, I like the way ActionFilters keep the actions clean. If I find repeating tasks inside my Actions that are not really relevant to the actual responsibility of the Action, I try to move them out to an ActionFilter, or maybe a ModelBinder or a middleware, depending on how globally they should work. The more relevant it is to an Action, the more likely I use an ActionFilter.

    There are some more kinds of filters, which all work similarly. To learn more about the different kinds of filters, you definitely need to read the docs.

    In the tenth part of the series we move to the actual view logic and extend the Razor Views with custom TagHelpers: Customizing ASP.NET Core Part 10: TagHelpers

    Customizing ASP.​NET Core Part 08: ModelBinders

    Wednesday, October 17, 2018 12:00 AM

    In the last post about OutputFormatters I wrote about sending data out to the clients in different formats. In this post we are going to do it the other way around. This post is about the data you get into your Web API from outside. What if you get data in a special format, or what if you get data you need to validate in a special way? ModelBinders will help you handle this.

    The series topics

    About ModelBinders

    ModelBinders are responsible for binding the incoming data to specific action method parameters. They bind the data sent with the request to the parameters. The default binders are able to bind data that is sent via the query string or within the request body. Within the body, the data can be sent in URL-encoded format or as JSON.

    The model binding tries to find the values in the request by the parameter names. The form values, the route data and the query string values are stored as a key-value pair collection, and the binding tries to find the parameter name in the keys of the collection.
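    As a small sketch of that name-based matching, the default binders would bind a request like GET /api/persons/greet?name=Juergen&age=40 to the parameters of a hypothetical action like this one, just by matching the keys name and age:

    [HttpGet("greet")]
    public ActionResult<string> Greet(string name, int age)
    {
        // "name" and "age" are found in the query string by their names
        return $"{name} is {age} years old";
    }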

    Preparation of the test project

    In this post I'd like to send CSV data to a Web API method. I will reuse the CSV data we created in the last post:

    Id,FirstName,LastName,Age,EmailAddress,Address,City,Phone
    48,Samantha,White,18,Angel.Morgan@shaw.ca,"8202 77th Street ",Mascouche,(682) 381-4092
    1,Eric,Wright,2,Briana.Ross@gmx.com,"8104 Scott Avenue ",Canutillo,(253) 366-5637
    55,Amber,Watson,46,Sarah.Foster@gmx.com,"9206 Lewis Avenue ",Coleman,(632) 375-4415
    99,Alexander,King,59,Ross.Timms@live.com,"3089 Paerdegat 7th Street ",Monte Alto,(366) 319-4154
    69,Autumn,Hayes,25,Mark.Diaz@shaw.ca,"3263 Avenue O  ",Montreal West (Montréal-Ouest),(283) 438-7801
    94,Destiny,James,47,Kylie.Walker@telus.net,"1057 14th Street ",Montreal,(570) 574-3208
    59,Christina,Bennett,87,Madeline.Adams@att.com,"5672 19th Lane ",Corrigan,(467) 304-0309
    71,Isaac,Hayes,33,Trevor.Robinson@hotmail.com,"9707 Langham Street ",Huntington,(635) 317-0231
    23,Jason,Morgan,77,Jennifer.Powell@rogers.ca,"4413 Debevoise Avenue ",Pinole,(265) 467-1984
    43,Jenna,Brandzin,92,Natalie.Reed@gmail.com,"4691 Sea Breeze Avenue ",Cushing-Douglass,(502) 427-9135
    79,Madison,Verstraete,69,Abigail.Wright@hotmail.com,"2066 104th Street ",Moose Lake,(448) 423-7550
    80,Lorrie,Long,89,Melissa.Bennett@microsoft.com,"3048 Allen Avenue ",Munday,(576) 707-6183
    79,Alejandro,Daeninck,51,Matthew.Phillips@att.com,"9997 41st Street ",North Bay,(455) 297-2648
    14,Makayla,Clark,44,Joshua.Jackson@rogers.ca,"4518 Folsom Place ",Cortland,(772) 692-0732
    12,Isaac,Sanchez,37,Paige.MacKenzie@live.com,"2094 Mc Kenny Street ",Brockville,(563) 735-0233
    68,Jesus,Brandzin,34,Molly.Clark@telus.net,"3532 Durland Place ",Comfort,(627) 319-9704
    59,Logan,Howard,59,Jorge.Brandzin@rogers.ca,"3458 Wythe Avenue ",Enderby,(226) 520-9653
    48,Nathaniel,Richardson,58,Amanda.Pitt@gmail.com,"6926 Sunnyside Court ",Los Altos Hills,(513) 338-4602
    34,Tiffany,Miller,18,Claire.Alexander@att.com,"1985 Devon Avenue ",Sansom Park,(357) 274-3606
    

    So let's start by creating a new project using the .NET CLI:

    dotnet new webapi -n ModelBinderSample -o ModelBinderSample
    

    This creates a new Web API project.

    In this new project I created a new controller with a small action inside:

    namespace ModelBinderSample.Controllers
    {
        [Route("api/[controller]")]
        [ApiController]
        public class PersonsController : ControllerBase
        {
        public ActionResult<object> Post(IEnumerable<Person> persons)
            {
                return new
                {
                    ItemsRead = persons.Count(),
                    Persons = persons
                };
            }
        }
    }
    
    

    This looks basically like any other action. It accepts a list of persons and returns an anonymous object that contains the number of persons as well as the list of persons. This action is pretty useless, but helps us debug the ModelBinder using Postman.

    We also need the Person class:

    public class Person
    {
        public int Id { get; set; }
        public string FirstName { get; set; }
        public string LastName { get; set; }
        public int Age { get; set; }
        public string EmailAddress { get; set; }
        public string Address { get; set; }
        public string City { get; set; }
        public string Phone { get; set; }
    }
    

    This actually would work fine if we sent JSON-based data to that action.

    As a last preparation step, we need to add the CsvHelper NuGet package to parse the CSV data more easily. I also love to use the .NET CLI here:

    dotnet add package CsvHelper
    

    Creating a CsvModelBinder

    To create the ModelBinder, add a new class called CsvModelBinder which implements the IModelBinder interface. The next snippet shows a generic binder that should work with any list of models:

    public class CsvModelBinder : IModelBinder
    {
        public Task BindModelAsync(ModelBindingContext bindingContext)
        {
            if (bindingContext == null)
            {
                throw new ArgumentNullException(nameof(bindingContext));
            }
    
            // Specify a default argument name if none is set by ModelBinderAttribute
            var modelName = bindingContext.ModelName;
            if (String.IsNullOrEmpty(modelName))
            {
                modelName = "model";
            }
    
            // Try to fetch the value of the argument by name
            var valueProviderResult = bindingContext.ValueProvider.GetValue(modelName);
            if (valueProviderResult == ValueProviderResult.None)
            {
                return Task.CompletedTask;
            }
    
            bindingContext.ModelState.SetModelValue(modelName, valueProviderResult);
    
            var value = valueProviderResult.FirstValue;
            // Check if the argument value is null or empty
            if (String.IsNullOrEmpty(value))
            {
                return Task.CompletedTask;
            }
    
            var stringReader = new StringReader(value);
            var reader = new CsvReader(stringReader);
    
            var modelElementType = bindingContext.ModelMetadata.ElementType;
            var model = reader.GetRecords(modelElementType).ToList();
    
            bindingContext.Result = ModelBindingResult.Success(model);
    
            return Task.CompletedTask;
        }
    }
    

    In the method BindModelAsync we get the ModelBindingContext with all the information we need to fetch the data and to de-serialize it.

    First the context gets checked against null values. After that, we set a default argument name of model, if none is specified. Once this is done, we are able to fetch the value by the name we previously set.

    If there's no value, we shouldn't throw an exception in this case. The reason is that maybe the next configured ModelBinder is responsible. If we throw an exception, the execution of the current request is broken, and the next configured ModelBinder doesn't get the chance to be executed.

    With a StringReader we read the value into the CsvReader and de-serialize it into the list of models. We get the type for the de-serialization out of the ModelMetadata property, which contains all the relevant information about the current model.

    Using the ModelBinder

    The binder isn't used automatically, because it isn't registered in the dependency injection container and isn't configured for use within the MVC framework.

    The easiest way to use this model binder is to use the ModelBinderAttribute on the argument of the action where the model should be bound:

    [HttpPost]
    public ActionResult<object> Post(
        [ModelBinder(binderType: typeof(CsvModelBinder))] 
        IEnumerable<Person> persons)
    {
        return new
        {
            ItemsRead = persons.Count(),
            Persons = persons
        };
    }
    
    

    Here the type of our CsvModelBinder is set as binderType to that attribute.

    Steve Gordon wrote about a second option in his blog post: Custom ModelBinding in ASP.NET MVC Core. He uses a ModelBinderProvider to add the ModelBinder to the list of existing ones.

    I personally prefer the explicit declaration, because most custom ModelBinders will be pretty specific to an action or to a specific type, and there's no hidden magic in the background.

    Testing the ModelBinder

    To test it, we need to create a new request in Postman. I set the request type to POST and put the URL https://localhost:5001/api/persons in the address bar. Now I need to add the CSV data to the body of the request. Because it is a URL-encoded form body, I needed to put the data into the body as a persons variable:

    persons=Id,FirstName,LastName,Age,EmailAddress,Address,City,Phone
    48,Samantha,White,18,Angel.Morgan@shaw.ca,"8202 77th Street ",Mascouche,(682) 381-4092
    1,Eric,Wright,2,Briana.Ross@gmx.com,"8104 Scott Avenue ",Canutillo,(253) 366-5637
    55,Amber,Watson,46,Sarah.Foster@gmx.com,"9206 Lewis Avenue ",Coleman,(632) 375-4415
    
    

    After pressing send, I got the result as shown below:

    Now the clients are able to send CSV-based data to the server.
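    If you want to test the same thing from the command line, a rough curl equivalent of that request would look like this, assuming the CSV lines above are saved in a hypothetical file called persons.csv (--data-urlencode encodes the file contents for us, and -k skips the validation of the local development certificate):

    curl -k -X POST "https://localhost:5001/api/persons" \
         -H "Content-Type: application/x-www-form-urlencoded" \
         --data-urlencode "persons@persons.csv"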

    Conclusion

    This is a good way to transform the input into the shape the action really needs. You could also use ModelBinders to do some custom validation against the database, or whatever you need to do before the model gets passed to the action.

    To learn more about ModelBinders, you need to have a look at the pretty detailed documentation.

    While playing around with the ModelBinderProvider Steve describes in his blog, I stumbled upon InputFormatters. Would this actually be the right way to transform CSV input into objects? I definitely need to learn some more details about InputFormatters and will use this as the 12th topic of this series.

    Please follow the introduction post of this series to find additional customizing topics I will write about.

    In the next part I will show you what you can do with ActionFilters: Customizing ASP.NET Core Part 09: ActionFilter

    Customizing ASP.​NET Core Part 07: OutputFormatter

    Thursday, October 11, 2018 12:00 AM

    In this seventh post I want to write about how to send your data to the client in different formats and types. By default ASP.NET Core Web API sends the data as JSON, but there are some more ways to send it.

    The series topics

    About OutputFormatters

    OutputFormatters are classes that turn your data into a different format to send it through HTTP to the clients. Web API uses a default OutputFormatter to turn objects into JSON, which is the default format for sending data in a structured way. Other built-in formatters are an XML formatter and a plain text formatter.

    With so-called content negotiation the client is able to decide which format it wants to retrieve. The client needs to specify the content type of the format in the Accept header. The content negotiation is implemented in the ObjectResult.

    By default the Web API always returns JSON, even if you accept text/xml in the header. This is why the built-in XML formatter is not registered by default. There are two ways to add an XmlSerializerOutputFormatter to ASP.NET Core:

    services.AddMvc()
        .AddXmlSerializerFormatters();
    

    or

    services.AddMvc(options =>
    {
        options.OutputFormatters.Add(new XmlSerializerOutputFormatter());
    });
    

    There is also an XmlDataContractSerializerOutputFormatter available.

    Also, any Accept header gets translated to application/json by default. If you want to honor the Accept header set by the client, you need to switch that translation off:

    services.AddMvc(options =>
    {
        options.RespectBrowserAcceptHeader = true; // false by default
    });
    

    To try the formatters let's setup a small test project.

    Prepare a test project

    Using the console we will create a small ASP.NET Core Web API project. Execute the following commands line by line:

    dotnet new webapi -n WebApiTest -o WebApiTest
    cd WebApiTest
    dotnet add package GenFu
    dotnet add package CsvHelper
    

    This creates a new Web API project and adds two NuGet packages to it. GenFu is an awesome library to easily create test data. The second one helps us to easily write CSV data.

    Now open the project in Visual Studio or in Visual Studio Code, open the ValuesController.cs, and change the Get() method like this:

    [HttpGet]
    public ActionResult<IEnumerable<Person>> Get()
    {
        var persons = A.ListOf<Person>(25);
        return persons;
    }
    

    This creates a list of 25 Persons using GenFu. The properties get automatically filled with almost realistic data. You'll see the magic of GenFu and the results later on.

    In the Models folder create a new file Person.cs with the Person class inside:

    public class Person
    {
        public int Id { get; set; }
        public string FirstName { get; set; }
        public string LastName { get; set; }
        public int Age { get; set; }
        public string EmailAddress { get; set; }
        public string Address { get; set; }
        public string City { get; set; }
        public string Phone { get; set; }
    }
    

    Open the Startup.cs as well, add the XML formatter, and allow other accept headers as described earlier:

    services.AddMvc(options =>
    {
        options.RespectBrowserAcceptHeader = true; // false by default
        options.OutputFormatters.Add(new XmlSerializerOutputFormatter());
    });
    

    That's it for now. Now you are able to retrieve the data from the Web API. Start the project using the dotnet run command.

    The best tools to test a Web API are Fiddler or Postman. I prefer Postman because it is easy to use, but in the end it doesn't matter which tool you use. In these demos I'm going to use Postman.

    Inside Postman I create a new request. I write the API URL into the address field, which is https://localhost:5001/api/values, and I add a header with the key Accept and the value application/json.

    After I press send I will see the JSON result in the response body below:

    Here you can see the auto-generated values. GenFu puts the data in based on the property type and the property name, so it puts real first names and last names as well as real cities and phone numbers into the Person's properties.
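    By the way, if you prefer the command line over a GUI tool, a rough curl equivalent of this request looks like the following sketch (-k skips the validation of the local development certificate):

    curl -k -H "Accept: application/json" https://localhost:5001/api/values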

    Now let's test the XML output formatter.

    In Postman change the Accept header from application/json to text/xml and press send:

    We now have an XML formatted output.

    Now let's go a step further and create some custom OutputFormatters.

    Custom OutputFormatters

    The plan is to create a VCard output to be able to import the person contacts directly into Outlook or any other contact database that supports VCards. Later in this section we also want to create a CSV output formatter.

    Both are text based output formatters and will derive from TextOutputFormatter. Create a new class in a new file called VcardOutputFormatter.cs:

    public class VcardOutputFormatter : TextOutputFormatter
    {
        public string ContentType { get; }

        public VcardOutputFormatter()
        {
            SupportedMediaTypes.Add(MediaTypeHeaderValue.Parse("text/vcard"));

            SupportedEncodings.Add(Encoding.UTF8);
            SupportedEncodings.Add(Encoding.Unicode);
        }

        // optional, but makes sense to restrict to a specific condition
        protected override bool CanWriteType(Type type)
        {
            if (typeof(Person).IsAssignableFrom(type) 
                || typeof(IEnumerable<Person>).IsAssignableFrom(type))
            {
                return base.CanWriteType(type);
            }
            return false;
        }

        // this needs to be overwritten
        public override Task WriteResponseBodyAsync(OutputFormatterWriteContext context, Encoding selectedEncoding)
        {
            var serviceProvider = context.HttpContext.RequestServices;
            var logger = serviceProvider.GetService(typeof(ILogger<VcardOutputFormatter>)) as ILogger;

            var response = context.HttpContext.Response;

            var buffer = new StringBuilder();
            if (context.Object is IEnumerable<Person>)
            {
                foreach (var person in context.Object as IEnumerable<Person>)
                {
                    FormatVcard(buffer, person, logger);
                }
            }
            else
            {
                var person = context.Object as Person;
                FormatVcard(buffer, person, logger);
            }
            return response.WriteAsync(buffer.ToString());
        }

        private static void FormatVcard(StringBuilder buffer, Person person, ILogger logger)
        {
            buffer.AppendLine("BEGIN:VCARD");
            buffer.AppendLine("VERSION:2.1");
            buffer.AppendLine($"FN:{person.FirstName} {person.LastName}");
            buffer.AppendLine($"N:{person.LastName};{person.FirstName}");
            buffer.AppendLine($"EMAIL:{person.EmailAddress}");
            buffer.AppendLine($"TEL;TYPE=VOICE,HOME:{person.Phone}");
            buffer.AppendLine($"ADR;TYPE=home:;;{person.Address};{person.City}");
            buffer.AppendLine($"UID:{person.Id}");
            buffer.AppendLine("END:VCARD");
            logger.LogInformation($"Writing {person.FirstName} {person.LastName}");
        }
    }
    

    In the constructor we need to specify the supported media types and encodings. In the method CanWriteType() we check whether the current type is supported by this output formatter; here we only want to format a single Person or lists of Persons.

    The method WriteResponseBodyAsync() then actually writes the list of persons out to the response stream via a StringBuilder.

    Finally, we need to register the new VcardOutputFormatter in the Startup.cs:

    services.AddMvc(options =>
    {
        options.RespectBrowserAcceptHeader = true; // false by default
        options.OutputFormatters.Add(new XmlSerializerOutputFormatter());
        
        // register the VcardOutputFormatter
        options.OutputFormatters.Add(new VcardOutputFormatter()); 
    });
    

    Start the app again using dotnet run. Now change the Accept header to text/vcard and let's see what happens:

    We should now see our data in the VCard format.

    Let's do the same for a CSV output. We already added the CsvHelper library to the project, so you can just copy the next snippet into your project:

    public class CsvOutputFormatter : TextOutputFormatter
    {
        public string ContentType { get; }

        public CsvOutputFormatter()
        {
            SupportedMediaTypes.Add(MediaTypeHeaderValue.Parse("text/csv"));

            SupportedEncodings.Add(Encoding.UTF8);
            SupportedEncodings.Add(Encoding.Unicode);
        }

        // optional, but makes sense to restrict to a specific condition
        protected override bool CanWriteType(Type type)
        {
            if (typeof(Person).IsAssignableFrom(type)
                || typeof(IEnumerable<Person>).IsAssignableFrom(type))
            {
                return base.CanWriteType(type);
            }
            return false;
        }

        // this needs to be overwritten
        public override Task WriteResponseBodyAsync(OutputFormatterWriteContext context, Encoding selectedEncoding)
        {
            var serviceProvider = context.HttpContext.RequestServices;
            var logger = serviceProvider.GetService(typeof(ILogger<CsvOutputFormatter>)) as ILogger;

            var response = context.HttpContext.Response;

            var csv = new CsvWriter(new StreamWriter(response.Body));

            if (context.Object is IEnumerable<Person>)
            {
                var persons = context.Object as IEnumerable<Person>;
                csv.WriteRecords(persons);
            }
            else
            {
                var person = context.Object as Person;
                csv.WriteRecord(person);
            }

            return Task.CompletedTask;
        }
    }
    

    This works almost the same way. We can pass the response stream via a StreamWriter directly into the CsvWriter. After that, we are able to feed the writer with the person or the list of persons. That's it.

    We also need to register the CsvOutputFormatter before we can test it.

    services.AddMvc(options =>
    {
        options.RespectBrowserAcceptHeader = true; // false by default
        options.OutputFormatters.Add(new XmlSerializerOutputFormatter());
        
        // register the VcardOutputFormatter
        options.OutputFormatters.Add(new VcardOutputFormatter()); 
        // register the CsvOutputFormatter
        options.OutputFormatters.Add(new CsvOutputFormatter()); 
    });
    

    In Postman change the Accept header to text/csv and press send again:

    Conclusion

    Isn't that cool? I really like the way the format changes based on the Accept header. This way you are able to create a Web API for many different clients that accepts many different formats. There are still a lot of potential clients out there that don't use JSON and prefer XML or CSV.

    The other way around would be an option to consume CSV or any other format in the Web API. Let's assume your client sends you a list of persons in CSV format. How would you solve this? Parsing the string manually in the action method would work, but it's not a nice option. This is what ModelBinders can do for us. Let's see how this works in the next chapter about Customizing ASP.NET Core Part 08: ModelBinders.

    Customizing ASP.​NET Core Part 06: Middlewares

    Monday, October 8, 2018 12:00 AM

    Wow, it is already the sixth part of this series. In this post I'm going to write about middlewares and how you can use them to customize your app a little more. I'll quickly go through the basics of middlewares and then I'll write about some more special things you can do with them.

    The series topics

    About middlewares

    Most of you already know what middlewares are, but some of you maybe don't. Even if you have used ASP.NET Core for a while, you don't really need to know the details about middlewares, because they are mostly hidden behind nicely named extension methods like UseMvc(), UseAuthentication(), UseDeveloperExceptionPage() and so on. Every time you call a Use-method in the Configure method of the Startup.cs, you'll implicitly use at least one or maybe more middlewares.

    A middleware is a piece of code that handles the request pipeline. Imagine the request pipeline as a huge tube where you call something in and an echo comes back. The middlewares are responsible for creating this echo: they manipulate the sound, enrich the information, or handle the source sound or the echo.

    Middlewares are executed in the order they are configured. The first configured middleware is the first that gets executed.

    In an ASP.NET Core web, if the client requests an image or any other static file, the StaticFileMiddleware searches for that resource and returns it if it finds one. If not, this middleware does nothing except call the next one. If there is no last middleware that handles the request, the request returns nothing. The MvcMiddleware also checks the requested resource, tries to map it to a configured route, executes the controller, creates a view and returns an HTML or Web API result. If the MvcMiddleware doesn't find a matching controller, it will still return a result, in this case a 404 status result. So it returns an echo in any case. This is why the MvcMiddleware is the last configured middleware.

    (Image source: https://docs.microsoft.com/en-us/aspnet/core/fundamentals/middleware/?view=aspnetcore-2.1)
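
    To make that ordering visible in code, a typical Configure method could look like the following simplified sketch:

    public void Configure(IApplicationBuilder app)
    {
        // first in, last out: sees the exceptions of everything that runs after it
        app.UseDeveloperExceptionPage();

        // returns static files and short-circuits the pipeline if it finds one
        app.UseStaticFiles();

        // last: MVC returns a response in any case, even if it is just a 404
        app.UseMvc();
    }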

    An exception handling middleware is usually one of the first configured middlewares, not because it gets executed first, but last. The first configured middleware is also the last one when the echo comes back through the tube. An exception handling middleware validates the result and displays a possible exception in a browser- and client-friendly way. This is where a runtime error becomes a 500 status.

    You are able to see how the pipeline is executed if you create an empty ASP.NET Core application. I usually use the console and the .NET CLI tools:

    dotnet new web -n MiddleWaresSample -o MiddleWaresSample
    cd MiddleWaresSample
    

    Open the Startup.cs with your favorite editor. It should be pretty empty compared to a regular ASP.NET Core application:

    public class Startup
    {
        // This method gets called by the runtime. Use this method to add services to the container.
        // For more information on how to configure your application, visit https://go.microsoft.com/fwlink/?LinkID=398940
        public void ConfigureServices(IServiceCollection services)
        {
        }
    
        // This method gets called by the runtime. Use this method to configure the HTTP request pipeline.
        public void Configure(IApplicationBuilder app, IHostingEnvironment env)
        {
            if (env.IsDevelopment())
            {
                app.UseDeveloperExceptionPage();
            }
            
            app.Run(async (context) =>
            {
                await context.Response.WriteAsync("Hello World!");
            });
        }
    }
    

    Here the DeveloperExceptionPageMiddleware is used, together with a special lambda middleware that only writes "Hello World!" to the response stream. The response stream is the echo I wrote about previously. This special middleware stops the pipeline and returns something as an echo, so it is the last one.

    Leave this middleware and add the following lines right before the app.Run():

    app.Use(async (context, next) =>
    {
        await context.Response.WriteAsync("===");
        await next();
        await context.Response.WriteAsync("===");
    });
    app.Use(async (context, next) =>
    {
        await context.Response.WriteAsync(">>>>>> ");
        await next();
        await context.Response.WriteAsync(" <<<<<<");
    });
    

    These two calls of app.Use() also create two lambda middlewares, but this time the middlewares call the next one. Each middleware knows the next one and calls it. Both middlewares write to the response stream before and after the next middleware is called. This should demonstrate how the pipeline works: before the next middleware is called, the actual request is handled; after the next middleware is called, the response (echo) is handled.

    If you now run the application (using dotnet run) and open the displayed URL in the browser, you should see a plain text result like this:

    ===>>>>>> Hello World! <<<<<<===
    

    Does this make sense to you? If yes, let's see how to use this concept to add some additional functionality to the request pipeline.

    Writing a custom middleware

    ASP.NET Core is based on middlewares. All the logic that gets executed during a request is somehow based on a middleware. So we are able to use this to add custom functionality to the web. We want to know the execution time of every request that goes through the request pipeline. I do this by creating and starting a Stopwatch before the next middleware is called and by stop measuring the execution time after the next middleware is called:

    app.Use(async (context, next) =>
    {
        var s = new Stopwatch();
        s.Start();
        
        // execute the rest of the pipeline
        await next();
        
        s.Stop(); //stop measuring
        var result = s.ElapsedMilliseconds;
        
        // write out the milliseconds needed
        await context.Response.WriteAsync($"Time needed: {result }");
    });
    

    After that I write out the elapsed milliseconds to the response stream.

    If you write some more middlewares, the Configure method in the Startup.cs gets pretty messy. This is why most middlewares are written as separate classes. That could look like this:

    public class StopwatchMiddleWare
    {
        private readonly RequestDelegate _next;
    
        public StopwatchMiddleWare(RequestDelegate next)
        {
            _next = next;
        }
    
        public async Task Invoke(HttpContext context)
        {
            var s = new Stopwatch();
            s.Start();

            // execute the rest of the pipeline
            await _next(context);

            s.Stop(); // stop measuring
            var result = s.ElapsedMilliseconds;

            // write out the milliseconds needed
            await context.Response.WriteAsync($"Time needed: {result}");
        }
    }
    

    This way we get the next middleware via the constructor and the current context in the Invoke() method.

    Note: The middleware is initialized at the start of the application and exists only once during the application lifetime. The constructor gets called once. The Invoke() method on the other hand is called once per request.
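
    One consequence of this: don't inject scoped services into the constructor, because the middleware would hold on to them for the whole application lifetime. Additional dependencies can instead be declared as parameters of the Invoke() method, where they get resolved once per request. A small sketch; ISomeScopedService is a made-up example service:

    public async Task Invoke(HttpContext context, ISomeScopedService service)
    {
        // Invoke() parameters are resolved from the request's DI scope,
        // so scoped services are safe to use here
        service.DoSomething();

        await _next(context);
    }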

    To use this middleware, there is a generic UseMiddleware() method available that you can use in the Configure method:

    app.UseMiddleware<StopwatchMiddleWare>();
    

    The more elegant way is to create an extensions method that encapsulates this call:

    public static class StopwatchMiddlewareExtension
    {
        public static IApplicationBuilder UseStopwatch(this IApplicationBuilder app)
        {
            app.UseMiddleware<StopwatchMiddleWare>();
            return app;
        }
    }
    

    Now you can simply call it like this:

    app.UseStopwatch();
    

    This is the way you can provide additional functionality to an ASP.NET Core application through the request pipeline. You are able to manipulate the request or even the response using middlewares.

    The AuthenticationMiddleware, for example, tries to read user information from the request. If it doesn't find any, it asks the client for it by sending a specific response back to the client. If it finds some, it adds the information to the request context and this way makes it available to the entire application.

    What else can we do using middlewares?

    Did you know that you can divert the request pipeline into two or more branches?

    The next snippet shows how to create branches based on specific paths:

    app.Map("/map1", app1 =>
    {
        // some more middlewares
        
        app1.Run(async context =>
        {
            await context.Response.WriteAsync("Map Test 1");
        });
    });
    
    app.Map("/map2", app2 =>
    {
        // some more middlewares
        
        app2.Run(async context =>
        {
            await context.Response.WriteAsync("Map Test 2");
        });
    });
    
    // some more middlewares
    
    app.Run(async (context) =>
    {
        await context.Response.WriteAsync("Hello World!");
    });
    

    The path "/map1" is a specific branch that continues the request pipeline inside. The same with "/map2". Both maps have their own middleware configurations inside. All other not specified paths will follow the main branch.

    There's also a MapWhen() method to branch the pipeline based on a condition instead of a path:

    public void Configure(IApplicationBuilder app)
    {
        app.MapWhen(context => context.Request.Query.ContainsKey("branch"),
                    app1 =>
        {
            // some more middlewares
        
            app1.Run(async context =>
            {
                await context.Response.WriteAsync("MapBranch Test");
            });
        });
    
        // some more middlewares
        
        app.Run(async context =>
        {
            await context.Response.WriteAsync("Hello from non-Map delegate.");
        });
    }

    You can create conditions based on configuration values or, as shown here, based on properties of the request context. In this case a query string property is used, but you can use HTTP headers, form properties or any other property of the request context.
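
    For example, a branch based on an HTTP header could look like this; the header name "X-Canary" is made up for this sample:

    app.MapWhen(
        context => context.Request.Headers.ContainsKey("X-Canary"),
        app1 =>
    {
        app1.Run(async context =>
        {
            await context.Response.WriteAsync("Hello from the canary branch");
        });
    });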

    You are also able to nest the maps to create child and grandchild branches if needed.

    Map() or MapWhen() is used to provide a special API or resource based on a specific path or a specific condition. The ASP.NET Core HealthCheck API is done like this: it first uses MapWhen() to check the configured port and then Map() to set the path for the HealthCheck API, or it uses Map() only, if no port is specified. At the end the HealthCheckMiddleware is used:

    private static void UseHealthChecksCore(IApplicationBuilder app, PathString path, int? port, object[] args)
    {
        if (port == null)
        {
            app.Map(path, b => b.UseMiddleware<HealthCheckMiddleware>(args));
        }
        else
        {
            app.MapWhen(
                c => c.Connection.LocalPort == port,
                b0 => b0.Map(path, b1 => b1.UseMiddleware<HealthCheckMiddleware>(args)));
        }
    }
    

    (See here on GitHub)

    UPDATE 10/10/2018

    After I published this post Hisham asked me a question on Twitter:

    Another question that's middlewares related, I'm not sure why I never seen anyone using IMiddleware instead of writing InvokeAsync manually?!!

    IMiddleware is new in ASP.NET Core 2.0 and actually I never knew that it existed before he tweeted about it. I'll definitely have a deeper look into IMiddleware and will write about it. Until then you should read Hisham's really good post about it: Why you aren't using IMiddleware?

    Conclusion

    Most of the ASP.NET Core features are based on middlewares and we are able to extend ASP.NET Core by creating our own middlewares.

    In the next two chapters I will have a look into different data types and how to handle them. I will create API outputs in any format and data type I want and accept data of any type and format. Read the next part about Customizing ASP.NET Core Part 07: OutputFormatter

    Customizing ASP.​NET Core Part 05: HostedServices

    Thursday, October 4, 2018 12:00 AM

    This fifth part of this series doesn't really show a customization. This part is more about a feature you can use to create background services to run tasks asynchronously inside your application. Actually I use this feature to regularly fetch data from a remote service in a small ASP.NET Core application.

    The series topics

    About HostedServices

    HostedServices are a new thing in ASP.NET Core 2.0 and can be used to run tasks asynchronously in the background of your application. This can be used to fetch data periodically, to do some calculations in the background or to do some cleanups. It can also be used to send preconfigured emails or whatever you need to do in the background.

    HostedServices are basically simple classes which implement the IHostedService interface.

    public class SampleHostedService : IHostedService
    {
    	public Task StartAsync(CancellationToken cancellationToken)
    	{
    		// start the work here
    		return Task.CompletedTask;
    	}
    	
    	public Task StopAsync(CancellationToken cancellationToken)
    	{
    		// clean up on shutdown here
    		return Task.CompletedTask;
    	}
    }
    

    A HostedService needs to implement a StartAsync() and a StopAsync() method. The StartAsync() is the place where you implement the logic to execute. This method gets executed once, immediately after the application starts. The method StopAsync() on the other hand gets executed just before the application stops. This also means that to get a kind of scheduled service, you need to implement it on your own: you will need to implement a loop that executes the code regularly.

    To get a HostedService executed you need to register it in the ASP.NET Core dependency injection container as a singleton instance:

    services.AddSingleton<IHostedService, SampleHostedService>();
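
Since ASP.NET Core 2.1 there is also a dedicated extension method for registering hosted services:

    services.AddHostedService<SampleHostedService>();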
    

    To see how a hosted service works, I created the next snippet. It writes a log message on start, on stop and every two seconds to the console:

    public class SampleHostedService : IHostedService
    {
    	private readonly ILogger<SampleHostedService> logger;
    	
    	// inject a logger
    	public SampleHostedService(ILogger<SampleHostedService> logger)
    	{
    		this.logger = logger;
    	}
    
    	public Task StartAsync(CancellationToken cancellationToken)
    	{
    		logger.LogInformation("Hosted service starting");
    
    		return Task.Factory.StartNew(async () =>
    		{
    			// loop until a cancellation is requested
    			while (!cancellationToken.IsCancellationRequested)
    			{
    				logger.LogInformation("Hosted service executing - {0}", DateTime.Now);
    				try
    				{
    					// wait for 2 seconds
    					await Task.Delay(TimeSpan.FromSeconds(2), cancellationToken);
    				}
    				catch (OperationCanceledException) { }
    			}
    		}, cancellationToken);
    	}
    
    	public Task StopAsync(CancellationToken cancellationToken)
    	{
    		logger.LogInformation("Hosted service stopping");
    		return Task.CompletedTask;
    	}
    }
    

    To test this, I simply created a new ASP.NET Core application, placed this snippet inside, registered the HostedService and started the application by calling the next command in the console:

    dotnet run
    

    This results in the following console output:

    As you can see the log output is written to the console every two seconds.

    Conclusion

    You can now start to do some more complex things with the HostedServices. But be careful with hosted services: they all run in the same application, so don't use too much CPU or memory, as this could slow down your application.

    For bigger applications I would suggest moving such tasks into a separate application that is specialized in executing background tasks: a separate Docker container, a BackgroundWorker on Azure, Azure Functions or something like this. However, it should be separated from the main application in that case.

    In the next part I'm going to write about middlewares and how you can use them to implement special logic in the request pipeline, or how you are able to serve specific logic on different paths. Customizing ASP.NET Core Part 06: Middlewares

    Customizing ASP.​NET Core Part 04: HTTPS

    Monday, October 1, 2018 12:00 AM

    HTTPS is on by default now and is a first class feature. On Windows the certificate which is needed to enable HTTPS is loaded from the Windows certificate store. If you create a project on Linux or Mac, the certificate is loaded from a certificate file.

    Even if you want to create a project to run it behind an IIS or an NGinX webserver, HTTPS is enabled. Usually you would manage the certificate on the IIS or NGinX webserver in that case. But this shouldn't be a problem, and you shouldn't disable HTTPS in the ASP.NET Core settings.

    Managing the certificate within the ASP.NET Core application directly makes sense if you run services behind the firewall which are not accessible from the internet, e.g. background services for a microservice based application, or services in a self hosted ASP.NET Core application.

    There are also some scenarios where it makes sense to load the certificate from a file on Windows, for example in an application that you will run on Docker for Windows as well as on Docker for Linux.

    Personally I like the flexible way to load the certificate from a file.

    The series topics

    Setup Kestrel

    As in the first two parts of this blog series, we need to override the default WebHostBuilder a little bit. With ASP.NET Core it is possible to replace the default Kestrel based hosting with a hosting based on an HttpListener. This means the Kestrel webserver is somehow configured in the host builder. You are able to add and configure Kestrel manually by calling the UseKestrel() method on the IWebHostBuilder:

    public class Program
    {
    	public static void Main(string[] args)
    	{
    		CreateWebHostBuilder(args).Build().Run();
    	}
    
    	public static IWebHostBuilder CreateWebHostBuilder(string[] args) =>
    		WebHost.CreateDefaultBuilder(args)
    			.UseKestrel(options => 
    			{	
    			})
    			.UseStartup<Startup>();
    }
    

    This method accepts an action to configure the Kestrel webserver. What we actually need to do is to configure the addresses and ports the webserver is listening on. For the HTTPS port we also need to configure how the certificate should be loaded.

    .UseKestrel(options => 
    {
    	options.Listen(IPAddress.Loopback, 5000);
    	options.Listen(IPAddress.Loopback, 5001, listenOptions =>
    	{
    		listenOptions.UseHttps("certificate.pfx", "topsecret");
    	});
    })
    

    In this snippet we add two addresses and ports to listen on. The second one is defined as a secure endpoint configured to use HTTPS. The method UseHttps() is overloaded multiple times, to load certificates from the Windows certificate store as well as from files. In this case we use a file called certificate.pfx located in the project folder.
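
    One of those overloads accepts an X509Certificate2 instance, so you are free to load the certificate from wherever you want. A small sketch, which assumes a using for System.Security.Cryptography.X509Certificates:

    options.Listen(IPAddress.Loopback, 5001, listenOptions =>
    {
        // load the certificate yourself and pass the instance in
        var certificate = new X509Certificate2("certificate.pfx", "topsecret");
        listenOptions.UseHttps(certificate);
    });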

    Reminder to myself: Replacing the host actually would be an idea for an eleventh part of this series.

    To create such a certificate file just to play around with this configuration, open the certificate store and export the development certificate created by Visual Studio.

    For your safety

    Use the following line ONLY to play around with this configuration:

    listenOptions.UseHttps("certificate.pfx", "topsecret");
    

    The problem is the hard coded password. Never ever store a password in a code file that gets pushed to any source code repository. Instead, ensure you load the password from the configuration API of ASP.NET Core: use the user secrets on your local development machine and environment variables on a server. On Azure, use the Application Settings to store the passwords; they will be hidden on the Azure Portal UI if they are marked as passwords.
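
    A hedged sketch of how that could look; the configuration key "Certificates:Password" is just an example name:

    .UseKestrel((host, options) =>
    {
        // the value comes from user secrets, environment variables or
        // the Azure Application Settings, never from the code file itself
        var password = host.Configuration["Certificates:Password"];

        options.Listen(IPAddress.Loopback, 5001, listenOptions =>
        {
            listenOptions.UseHttps("certificate.pfx", password);
        });
    })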

    Conclusion

    This is just a small customization, but it helps if you want to share the code between different platforms, or if you want to run your application on Docker and don't want to care about certificate stores, etc.

    Usually, if you run your application behind a web server like IIS or NGinX, you don't need to care about certificates in your ASP.NET Core application. But you do if you host your application inside another application, on Docker, or without IIS or NGinX.

    ASP.NET Core has a new feature to run tasks in the background inside the application. To learn more about that, read the next post about Customizing ASP.NET Core Part 05: HostedServices.

    Customizing ASP.​NET Core Part 03: Dependency Injection

    Thursday, September 27, 2018 12:00 AM

    In the third part we'll take a look into the ASP.NET Core dependency injection and how to customize it to use a different dependency injection container if needed.

    The series topics

    Why use a different dependency injection container?

    In most projects you don't really need to use a different dependency injection container. The DI implementation in ASP.NET Core supports the basic features and works well and pretty fast. Anyway, some other DI containers support interesting features you may want to use in your application.

    • Maybe you'd like to create an application that supports modules as lightweight dependencies.
      • E.g. modules you want to put into a specific directory so that they get automatically registered in your application.
      • This could be done with Ninject.
    • Maybe you want to configure the services in a configuration file outside the application, in an XML or JSON file instead of in C# only.
      • This is a common feature in various DI containers, but not yet supported in ASP.NET Core.
    • Maybe you don't want an immutable DI container, because you want to add services at runtime.
      • This is also a common feature in some DI containers.

    A look at the ConfigureServices Method

    Create a new ASP.NET Core project and open the Startup.cs. You will find the method to configure the services, which looks like this:

    // This method gets called by the runtime. Use this method to add services to the container.
    public void ConfigureServices(IServiceCollection services)
    {
    	services.Configure<CookiePolicyOptions>(options =>
    	{
    		// This lambda determines whether user consent for non-essential cookies is needed for a given request.
    		options.CheckConsentNeeded = context => true;
    		options.MinimumSameSitePolicy = SameSiteMode.None;
    	});
        
        services.AddTransient<IService, MyService>();
    
    	services.AddMvc().SetCompatibilityVersion(CompatibilityVersion.Version_2_1);
    }
    

    This method gets the IServiceCollection, which is already filled with a bunch of services needed by ASP.NET Core. These services were added by the hosting services and the parts of ASP.NET Core that were executed before the method ConfigureServices is called.

    Inside the method some more services get added. First a configuration class that contains cookie policy options is added to the ServiceCollection. In this sample I also add a custom service called MyService that implements the IService interface. After that, the method AddMvc() adds another bunch of services needed by the MVC framework. Up to this point we have around 140 services registered to the IServiceCollection. But the service collection isn't the actual dependency injection container.

    The actual DI container is wrapped in the so-called service provider, which will be created out of the service collection. The IServiceCollection has an extension method registered to create an IServiceProvider out of the service collection:

    IServiceProvider provider = services.BuildServiceProvider();
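
Just to prove the point, you could resolve a service from that provider directly; IService is the custom service interface used in this post, and the generic GetService<T>() extension method lives in Microsoft.Extensions.DependencyInjection:

    IService myService = provider.GetService<IService>();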
    

    The ServiceProvider then contains the immutable container that cannot be changed at runtime. With the default method ConfigureServices, the IServiceProvider gets created in the background after this method was called, but it is possible to change the method a little bit:

    public IServiceProvider ConfigureServices(IServiceCollection services)
    {
        services.Configure<CookiePolicyOptions>(options =>
        {
            // This lambda determines whether user consent for non-essential cookies is needed for a given request.
            options.CheckConsentNeeded = context => true;
            options.MinimumSameSitePolicy = SameSiteMode.None;
        });
        
        services.AddTransient<IService, MyService>(); // custom service
        
        services.AddMvc().SetCompatibilityVersion(CompatibilityVersion.Version_2_1);
        
        return services.BuildServiceProvider();
    }
    

    I changed the return type to IServiceProvider and returned the ServiceProvider created with the method BuildServiceProvider(). This change will still work in ASP.NET Core.

    Use a different ServiceProvider

    To change to a different or custom DI container you need to replace the default implementation of the IServiceProvider with a different one. Additionally you need to find a way to move the already registered services to the new container.

    The next code sample uses Autofac as a third party container. I use Autofac in this snippet because you are easily able to see what is happening here:

    public IServiceProvider ConfigureServices(IServiceCollection services)
    {
        services.Configure<CookiePolicyOptions>(options =>
        {
            // This lambda determines whether user consent for non-essential cookies is needed for a given request.
            options.CheckConsentNeeded = context => true;
            options.MinimumSameSitePolicy = SameSiteMode.None;
        });
    
        //services.AddTransient<IService, MyService>();
    
        services.AddMvc().SetCompatibilityVersion(CompatibilityVersion.Version_2_1);
    
        // create a Autofac container builder
        var builder = new ContainerBuilder();
    
        // read service collection to Autofac
        builder.Populate(services);
    
        // use and configure Autofac
        builder.RegisterType<MyService>().As<IService>();
    
        // build the Autofac container
        ApplicationContainer = builder.Build();
    
        // creating the IServiceProvider out of the Autofac container
        return new AutofacServiceProvider(ApplicationContainer);
    }
    
    // IContainer instance in the Startup class 
    public IContainer ApplicationContainer { get; private set; }
    

    Autofac also works with a kind of service collection inside the ContainerBuilder, and it creates the actual container out of the ContainerBuilder. To get the registered services out of the IServiceCollection into the ContainerBuilder, Autofac uses the Populate() method. This copies all the existing services to the Autofac container.

    Our custom service MyService now gets registered using the Autofac way.

    After that, the container gets built and stored in a property of type IContainer. In the last line of the method ConfigureServices, we create an AutofacServiceProvider and pass in the IContainer. This is the IServiceProvider we need to return to use Autofac within our application.

    UPDATE: Introducing Scrutor

    You don't always need to replace the existing .NET Core DI container to get and use nice features. In the beginning I mentioned the auto registration of services. This can also be done with a nice NuGet package called Scrutor by Kristian Hellang (https://kristian.hellang.com/). Scrutor extends the IServiceCollection to automatically register services to the .NET Core DI container.

    "Assembly scanning and decoration extensions for Microsoft.Extensions.DependencyInjection" https://github.com/khellang/Scrutor

    Andrew Lock published a pretty detailed blog post about Scrutor. It doesn't make sense to repeat that. Read that awesome post and learn more about it: Using Scrutor to automatically register your services with the ASP.NET Core DI container
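
    Just to give a rough idea of what that looks like, the following sketch scans the current assembly and registers every class that implements IService; the type names are just the examples used in this post:

    services.Scan(scan => scan
        .FromAssemblyOf<Startup>()
        .AddClasses(classes => classes.AssignableTo<IService>())
        .AsImplementedInterfaces()
        .WithTransientLifetime());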

    Conclusion

    Using this approach you are able to use any .NET Standard compatible DI container to replace the existing one. If the container of your choice doesn't provide a ServiceProvider, create your own that implements IServiceProvider and uses the DI container inside. If the container of your choice doesn't provide a method to populate the registered services into the container, create your own method: loop over the registered services and add them to the other container.

    Actually the last step sounds easy, but it can be a hard task, because you need to translate all the possible IServiceCollection registrations into registrations of the different container. The complexity of that task depends on the implementation details of the other container.
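
    The core of such a method is a loop over the ServiceDescriptor items in the IServiceCollection. A very rough sketch; MyContainer and its Register methods are hypothetical stand-ins for the API of your container of choice:

    public static void Populate(MyContainer container, IServiceCollection services)
    {
        foreach (var descriptor in services)
        {
            if (descriptor.ImplementationType != null)
            {
                // registered by type, e.g. AddTransient<IService, MyService>()
                container.Register(descriptor.ServiceType, descriptor.ImplementationType, descriptor.Lifetime);
            }
            else if (descriptor.ImplementationFactory != null)
            {
                // registered by a factory delegate
                container.Register(descriptor.ServiceType, descriptor.ImplementationFactory, descriptor.Lifetime);
            }
            else
            {
                // registered as a concrete instance, always a singleton
                container.RegisterInstance(descriptor.ServiceType, descriptor.ImplementationInstance);
            }
        }
    }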

    Anyway, you have the choice to use any DI container which is compatible to the .NET Standard. You have the choice to change a lot of the default implementations in ASP.NET Core.

    The same goes for the default HTTPS behavior on Windows. To learn more about that, please read the next post about Customizing ASP.NET Core Part 04: HTTPS.