
ASP.NET und mehr ...

Articles are published more or less regularly on my blog on ASP.NET Zone: ASP.NET und mehr...

Trying BitBucket Pipelines with ASP.NET Core

Friday, December 8, 2017 12:00 AM

BitBucket provides a continuous integration tool called Pipelines, which is based on Docker containers running on a Linux-based Docker machine. In this post I want to try BitBucket Pipelines with an ASP.NET Core application.

In the past I preferred BitBucket over GitHub, because I used Mercurial more than Git. But that changed five years ago. Since then I have used GitHub for almost every new personal project that doesn't need to be private. At YooApps, however, we use the entire Atlassian ALM stack, including Jira, Confluence and BitBucket. (We don't use Bamboo yet, because we also use Azure a lot and we didn't get Bamboo running on Azure.) BitBucket is a good choice if you use the other Atlassian tools anyway, because the integration with Jira and Confluence is awesome.

For a while now, Atlassian has provided Pipelines as a simple continuous integration tool directly on BitBucket. You don't need to set up Bamboo just to build and test a simple application. At YooApps we actually use Pipelines in various projects that are not using .NET. For .NET projects we currently use CAKE or FAKE on Jenkins, hosted on an Azure VM.

Pipelines can also be used to build and test branches and pull requests, which is awesome. So why shouldn't we use Pipelines for .NET Core based projects? BitBucket actually provides an already prepared Pipelines configuration for .NET Core related projects, using the microsoft/dotnet Docker image. So let's try Pipelines.

The project to build

As usual, I just set up a simple ASP.NET Core project and added an xUnit test project to it. In this case I use the same project as shown in the Unit testing ASP.NET Core post. I imported that project from GitHub to BitBucket. If you also want to try Pipelines, feel free to do the same, or just download my solution and commit it into your repository on BitBucket. Once the sources are in the repository, you can start to set up Pipelines.

Setup Pipelines

Setting up Pipelines is actually pretty easy. In your repository on BitBucket.com there is a menu item called Pipelines. After pressing it you'll see the setup page, where you are able to select a technology-specific configuration. .NET Core is not the first choice for BitBucket; the .NET Core configuration is placed under "More". It is available anyway, which is really nice. After selecting the configuration type, you'll see the configuration in an editor inside the browser. It is a YAML configuration, called bitbucket-pipelines.yml, which is pretty easy to read. This configuration is prepared to use the microsoft/dotnet:onbuild Docker image and it already has the most common .NET CLI commands prepared that will be used with ASP.NET Core projects. You just need to configure the project names for the build and test commands.

The completed configuration for my current project looks like this:

# This is a sample build configuration for .NET Core.
# Check our guides at https://confluence.atlassian.com/x/5Q4SMw for more examples.
# Only use spaces to indent your .yml configuration.
# -----
# You can specify a custom docker image from Docker Hub as your build environment.
image: microsoft/dotnet:onbuild

pipelines:
  default:
    - step:
        caches:
          - dotnetcore
        script: # Modify the commands below to build your repository.
          - export PROJECT_NAME=WebApiDemo
          - export TEST_NAME=WebApiDemo.Tests
          - dotnet restore
          - dotnet build $PROJECT_NAME
          - dotnet test $TEST_NAME

If you don't have tests yet, comment out the last line by adding a #-sign in front of it.

After pressing "Commit file", this configuration file gets stored in the root of your repository, which makes it available for all the developers of that project.

Let's try it

After that config was saved, the build started immediately... and failed!

Why? Because that Docker image was pretty outdated. It contained an older version with an SDK that still used the project.json for .NET Core projects.

Changing the name of the Docker image from microsoft/dotnet:onbuild to microsoft/dotnet:sdk helps. You can change the bitbucket-pipelines.yml in your local Git workspace or use the editor on BitBucket directly. After committing the changes, the build again starts immediately and is now green.
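The fix is a one-line change at the top of the bitbucket-pipelines.yml:

```yaml
# use the current .NET Core SDK image instead of the outdated onbuild variant
image: microsoft/dotnet:sdk
```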

Even the tests passed. As expected, I got a pretty detailed output about every step configured in the "script" node of the bitbucket-pipelines.yml.

You don't need to know how to configure Docker using the pipelines. This is awesome.

Let's try the PR build

To create a PR, I need to create a feature branch first. I created it locally using the name "feature/build-test" and pushed that branch to the origin. You can now see that this branch got built by Pipelines:

Now let's create the PR using the BitBucket web UI. It automatically assigns my latest feature branch and the main branch, which is develop in my case:

Here we see that both branches were successfully built and tested previously. After pressing save, we see the build state in the PR's overview:

This is actually not a specific build for that PR, but the build of the feature branch. So in this case, it doesn't really build the PR. (Maybe it does if the PR comes from a fork and the branch wasn't tested previously. I didn't test that yet.)

After merging that PR back to develop (in this case), we see that the merge commit was successfully built too:

We have four builds here: the failing one, the one from 11 hours ago, and two builds from 52 minutes ago in two different branches.

The Continuous Deployment pipeline

With this, I would feel safe to trigger a direct deployment on every successful build of the main branches. As you may know, it is super simple to deploy a web application to an Azure Web App by connecting it directly to any Git repository. Usually this is pretty dangerous if you don't have any builds and tests before you deploy the code. But in this case, we are sure the PRs and the branches build and test successfully.

We just need to ensure that the deployment is only triggered if the build completed successfully. Does this work with Pipelines? I'm pretty curious. Let's try it.

To do that, I created a new Web App on Azure and connected it to the Git repository on BitBucket. I'll now add a failing test and commit it to the Git repository. What should happen now is that the build starts before the code gets pushed to Azure, and the failing build prevents the push to Azure.

I'm skeptical whether this is working or not. We will see.

The Azure Web App is created and running on http://build-with-bitbucket-pipelines.azurewebsites.net/. The deployment is configured to listen on the develop branch. That means, every time we push changes to that branch, the deployment to Azure will start.

I'll now create a new feature branch called "feature/failing-test" and push it to BitBucket. To keep the test simple, I don't follow the same steps as described in the previous section about the PRs. I merge the feature branch directly, without a PR, into develop and push all the changes to BitBucket. Yes, I'm a rebel... ;-)

The build starts immediately and fails as expected:

But what about the deployment? Let's have a look at the deployments on Azure. We should only see the initial successful deployment. Unfortunately there is another successful deployment with the same commit message as the failing build on BitBucket:

This is bad. We now have an unstable application running on Azure. Unfortunately there is no option on BitBucket to trigger the webhook only on a successful build. We are able to trigger the hook on a build state change, but it is not possible to define on which state we want to trigger it.

Too bad; this doesn't seem to be the right way to configure a continuous deployment pipeline as easily as the continuous integration process. Sure, there are many other, but more complex, ways to do that.

Update 12/8/2017

There is, however, a simple option to set up a deployment after a successful build: triggering the Azure webhook inside the Pipelines. A sample bash script to do that can be found here: https://bitbucket.org/mojall/bitbucket-pipelines-deploy-to-azure/ Without the comments it looks like this:

curl -X POST "https://\$$SITE_NAME:$FTP_PASSWORD@$SITE_NAME.scm.azurewebsites.net/deploy" \
  --header "Content-Type: application/json" \
  --header "Accept: application/json" \
  --header "X-SITE-DEPLOYMENT-ID: $SITE_NAME" \
  --header "Transfer-encoding: chunked" \
  --data "{\"format\":\"basic\", \"url\":\"https://$BITBUCKET_USERNAME:$BITBUCKET_PASSWORD@bitbucket.org/$BITBUCKET_USERNAME/$REPOSITORY_NAME.git\"}"

echo Finished uploading files to site $SITE_NAME.

I now need to set the environment variables in the Pipelines configuration:

Be sure to check the "Secured" checkbox for every password variable, to hide the password in this UI and in the log output of Pipelines.

And we need to add two script commands to the bitbucket-pipelines.yml:

- chmod +x ./deploy-to-azure.bash
- ./deploy-to-azure.bash
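Putting it all together, the script section of the bitbucket-pipelines.yml would look roughly like this (a sketch based on the configuration and the deploy script shown above):

```yaml
image: microsoft/dotnet:sdk

pipelines:
  default:
    - step:
        caches:
          - dotnetcore
        script:
          - export PROJECT_NAME=WebApiDemo
          - export TEST_NAME=WebApiDemo.Tests
          - dotnet restore
          - dotnet build $PROJECT_NAME
          - dotnet test $TEST_NAME
          # the deployment only runs if all previous commands succeeded
          - chmod +x ./deploy-to-azure.bash
          - ./deploy-to-azure.bash
```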

The last step is to remove the Azure webhook from the webhook configuration in BitBucket and to remove the failing test. After pushing the changes to BitBucket, the build and the first successful deployment start immediately.

I now added the failing test again to test the failing deployment, and it worked as expected: the test fails and the subsequent commands don't get executed. The webhook will never be triggered and the unstable app will not be deployed.

Now there is a failing build on Pipelines:

(See the commit messages)

And that failing commit is not deployed to Azure:

The continuous deployment is successfully set up.

Conclusion

Isn't it super easy to set up continuous integration? ~~Unfortunately we are not able to complete the deployment using this.~~ But anyway, we now have a build on any branch and on any pull request. That helps a lot.

Pros:

Cons:

I would like to have something like this on GitHub too. The usage is similar to AppVeyor, but much simpler to configure, less complex, and it just works. The reason is Docker, I think. For sure, AppVeyor can do a lot more stuff and can't really be compared to Pipelines. Anyway, I will compare it to AppVeyor and do the same with it in one of the next posts.

Currently there is one big downside with BitBucket Pipelines: it only works with Docker images that run on Linux. It is not yet possible to use it for full .NET Framework projects. This is the reason why we never used it at YooApps for .NET projects. I'm sure we need to think about doing more projects using .NET Core ;-)

NuGet, Cache and some more problems

Monday, November 27, 2017 12:00 AM

Recently I had some problems using NuGet. Two of them were huge and took me a while to solve. But all of them are easy to fix if you know how.

NuGet Cache

The first and most critical problem was related to the NuGet cache in .NET Core projects. It seems the underlying problem was a broken package in the cache; I didn't find out the real reason. Anyway, every time I tried to restore or add packages, I got an error message about an error at the first character of the project.assets.json. Yes, there is still a kind of project.json even in .NET Core 2.0 projects. This file is in the "obj" folder of a .NET Core project and stores all information about the NuGet packages.

This looked like a typical encoding error, which often happens if you try to read an ANSI-encoded file as UTF-8, or vice versa. But the project.assets.json was absolutely fine. It seemed to be a problem with one of the packages. It worked with the predefined .NET Core or ASP.NET Core packages, but not with any others. I was no longer able to work on projects targeting .NET Core, but projects targeting the full .NET Framework still worked.

I couldn't solve the real problem and I didn't really want to go through all of the packages to find the broken one. Luckily, the .NET CLI provides a nice tool to manage the NuGet cache: a more detailed CLI for NuGet.

dotnet nuget --help

This shows you three different commands to work with NuGet. delete and push work against the remote server, to delete a package from a server or to push a new package to the server using the NuGet API. The third one is a command to work with local resources:

dotnet nuget locals --help

This command shows you the help about the locals command. Try the next one to get a list of local NuGet resources:

dotnet nuget locals all --list

You can now use the clear option to clear all caches:

dotnet nuget locals all --clear

Or a specific one by naming it:

dotnet nuget locals http-cache --clear

This is much easier than searching for all the different cache locations and deleting them manually.

This solved my problem. The broken package was gone from all the caches and I was able to load the new, clean and healthy ones from NuGet.

Version numbers in packages folders

The second huge problem is not related to .NET Core, but to classic .NET Framework projects using NuGet. If you also use Git-Flow to manage your source code, you'll have at least two different main branches: Master and Develop. Both branches contain different versions: Master contains the current version of the code and Develop contains the next version. It is also possible that both versions use different versions of dependent NuGet packages. And here is the problem:

Master used e. g. AwesomePackage 1.2.0 and Develop uses AwesomePackage 1.3.0-beta-build54321

Both versions of the code reference AwesomeLib.dll, but in different locations:
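With the classic NuGet packages folder, the hint paths in the .csproj files would look something like this (illustrative paths, continuing the made-up AwesomePackage example):

```xml
<!-- on Master, referencing AwesomePackage 1.2.0 -->
<Reference Include="AwesomeLib">
  <HintPath>..\packages\AwesomePackage.1.2.0\lib\net45\AwesomeLib.dll</HintPath>
</Reference>

<!-- on Develop, referencing the pre-release build -->
<Reference Include="AwesomeLib">
  <HintPath>..\packages\AwesomePackage.1.3.0-beta-build54321\lib\net45\AwesomeLib.dll</HintPath>
</Reference>
```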

If you now release Develop to Master, you'll definitely forget to go through all the projects to change the reference paths, won't you? The build of Master will fail, because that specific beta folder won't exist on the server. Or even worse: the build will not fail, because the folder of the old package still exists on the build server, because you didn't clear the build workspace. This will result in runtime errors. This problem is even more likely to happen if you provide your own packages using your own NuGet server.

I solved this by using a different NuGet client: Paket. It doesn't store the binaries in version-specific folders, and the reference path stays the same as long as the package name doesn't change. Using Paket, I don't need to take care of reference paths, and every branch loads the dependencies from the same location.
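A minimal Paket setup for this scenario could look like this (a sketch; AwesomePackage is the made-up package from the example above):

```
# paket.dependencies in the repository root
source https://api.nuget.org/v3/index.json

nuget AwesomePackage
```

Paket restores the package into a version-independent folder, so the reference path, e.g. packages\AwesomePackage\lib\net45\AwesomeLib.dll, stays stable across branches.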

Paket officially supports the NuGet APIs and is mentioned on NuGet.org, in the package details.

To learn more about Paket visit the official documentation: https://fsprojects.github.io/Paket/

Conclusion

Being an agile developer doesn't only mean following an iterative process. It also means using the best tools you can buy. But you don't always need to buy the best tools; many of them are open source and free to use. Just help the maintainers by donating some bucks, spreading the word, filing some issues or contributing in a way that improves the tool. Paket is one such tool: lightweight, fast, easy to use, and it solves many problems. It is also well supported in CAKE, which is the build DSL I use to build, test and deploy applications.

GraphiQL for ASP.NET Core

Thursday, October 26, 2017 12:00 AM

One nice thing about blogging is the feedback from the readers. I got some nice kudos, but also great new ideas. One idea was born out of a question about a "graphi" UI for the GraphQL Middleware I wrote some months ago. I had never heard about "graphi", which actually is "GraphiQL", a generic HTML UI over a GraphQL endpoint. It seemed to be something like a Swagger UI, but just for GraphQL. That sounded nice, so I did some research.

What is GraphiQL?

Actually it is not the same as Swagger and not as detailed as Swagger, but it provides a simple and easy-to-use UI to play around with your GraphQL endpoint. So you cannot really compare the two.

GraphiQL is a React component provided by the GraphQL creators that can be used in your project. It basically provides an input area to write GraphQL queries and a button to send that query to your GraphQL endpoint. You'll then see the result or the error on the right side of the UI.

Additionally it provides some more nice features:

Implementing GraphiQL

The first idea was to write something like this on my own. But it should be the same as the existing GraphiQL UI, so why not use the existing implementation? Thanks to Steve Sanderson, we have the NodeServices for ASP.NET Core. Why not run the existing GraphiQL implementation in a Middleware using the NodeServices?

I tried it with the "apollo-server-module-graphiql" package. I wrote this small JavaScript module to render the GraphiQL UI and return it back to C# via the NodeServices:

var graphiql = require('apollo-server-module-graphiql');

module.exports = function (callback, options) {
    var data = {
        endpointURL: options.graphQlEndpoint
    };

    var result = graphiql.renderGraphiQL(data);
    callback(null, result);
};

The usage of that script inside the Middleware looks like this:

var file = _env.WebRootFileProvider.GetFileInfo("graphiql.js");
var result = await _nodeServices.InvokeAsync<string>(file.PhysicalPath, _options);
await httpContext.Response.WriteAsync(result); // write the rendered UI to the response

That works great, but has one problem: it wraps the GraphQL query in a JSON object that gets posted to the GraphQL endpoint. I would need to change the GraphQlMiddleware implementation because of that. The current implementation expects the plain GraphQL query in the POST body.

What is the most useful approach? Wrapping the GraphQL query in a JSON object or sending the plain query? Any Ideas? What do you use? Please tell me by dropping a comment.
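For comparison, the two POST body formats look like this (the books query is just an illustrative example against the BooksQuery type used later in this post):

```
// wrapped in a JSON object, as GraphiQL usually sends it:
{ "query": "{ books { title } }" }

// the plain query, as my current GraphQlMiddleware expects it:
{ books { title } }
```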

With this approach I'm pretty dependent on the Apollo developers and would need to change my implementation whenever they change theirs.

This is why I decided to use the same concept of generating the UI as the "apollo-server-module-graphiql" package, but implemented in C#. Fortunately, this doesn't need the NodeServices anymore.

I use exactly the same generated code as this Node module, but changed the way the query is sent to the server. Now the plain query is sent to the server.

I started playing around with this and added it to the existing project, mentioned here: GraphQL end-point Middleware for ASP.NET Core.

Using the GraphiqlMiddleware

The result is as easy to use as the GraphQlMiddleware. Let's see how it looks to add the Middlewares:

if (env.IsDevelopment())
{
    app.UseDeveloperExceptionPage();
	// adding the GraphiQL UI
    app.UseGraphiql(options =>
    {
        options.GraphiqlPath = "/graphiql"; // default
        options.GraphQlEndpoint = "/graph"; // default
    });
}
// adding the GraphQL end point
app.UseGraphQl(options =>
{
    options.GraphApiUrl = "/graph"; // default
    options.RootGraphType = new BooksQuery(bookRepository);
    options.FormatOutput = true; // default: false
});

As you can see, the second Middleware is bound to the first one by using the same path "/graph". I didn't create any hidden dependency between the two Middlewares, to make it easy to use them in various combinations. Maybe you want to use the GraphiQL UI only in the Development or Staging environment, as shown in this example.

Now start the web application using Visual Studio (press [F5]). The web starts with the default view or API. Add "graphiql" to the URL in the browser's address bar and see what happens. You should see a generated UI for your GraphQL endpoint, where you can now start playing around with your API, testing and debugging it with your current data. (See the screenshots on top.)

I'll create a separate NuGet package for the GraphiqlMiddleware. It will not have the GraphQlMiddleware as a dependency and can be used completely separately.

Conclusion

This was a lot easier to implement than expected. Currently there is still some refactoring needed:

Want to try it? Download, clone or fork the sources on GitHub.

What do you think about that? Could this be useful to you? Please leave a comment and tell me about your opinion.

Update [10/26/2017 21:03]

GraphiQL is much more powerful than expected. I was wondering how GraphiQL creates the IntelliSense support in the editor and how it creates the documentation. I had a deeper look into the traffic and found two more cool things about it:

First: GraphiQL sends a special query to the GraphQL endpoint to request the GraphQL-specific documentation. In this case it looks like this:

  query IntrospectionQuery {
    __schema {
      queryType { name }
      mutationType { name }
      subscriptionType { name }
      types {
        ...FullType
      }
      directives {
        name
        description
        locations
        args {
          ...InputValue
        }
      }
    }
  }

  fragment FullType on __Type {
    kind
    name
    description
    fields(includeDeprecated: true) {
      name
      description
      args {
        ...InputValue
      }
      type {
        ...TypeRef
      }
      isDeprecated
      deprecationReason
    }
    inputFields {
      ...InputValue
    }
    interfaces {
      ...TypeRef
    }
    enumValues(includeDeprecated: true) {
      name
      description
      isDeprecated
      deprecationReason
    }
    possibleTypes {
      ...TypeRef
    }
  }

  fragment InputValue on __InputValue {
    name
    description
    type { ...TypeRef }
    defaultValue
  }

  fragment TypeRef on __Type {
    kind
    name
    ofType {
      kind
      name
      ofType {
        kind
        name
        ofType {
          kind
          name
          ofType {
            kind
            name
            ofType {
              kind
              name
              ofType {
                kind
                name
                ofType {
                  kind
                  name
                }
              }
            }
          }
        }
      }
    }
  }

Try this query and send it to your GraphQL API using Postman or a similar tool and see what happens :)

Second: GraphQL for .NET knows how to answer that query and sends the full documentation about my data structure to the client, like this:

{
    "data": {
        "__schema": {
            "queryType": {
                "name": "BooksQuery"
            },
            "mutationType": null,
            "subscriptionType": null,
            "types": [
                {
                    "kind": "SCALAR",
                    "name": "String",
                    "description": null,
                    "fields": null,
                    "inputFields": null,
                    "interfaces": null,
                    "enumValues": null,
                    "possibleTypes": null
                },
                {
                    "kind": "SCALAR",
                    "name": "Boolean",
                    "description": null,
                    "fields": null,
                    "inputFields": null,
                    "interfaces": null,
                    "enumValues": null,
                    "possibleTypes": null
                },
                {
                    "kind": "SCALAR",
                    "name": "Float",
                    "description": null,
                    "fields": null,
                    "inputFields": null,
                    "interfaces": null,
                    "enumValues": null,
                    "possibleTypes": null
                },
                {
                    "kind": "SCALAR",
                    "name": "Int",
                    "description": null,
                    "fields": null,
                    "inputFields": null,
                    "interfaces": null,
                    "enumValues": null,
                    "possibleTypes": null
                },
                {
                    "kind": "SCALAR",
                    "name": "ID",
                    "description": null,
                    "fields": null,
                    "inputFields": null,
                    "interfaces": null,
                    "enumValues": null,
                    "possibleTypes": null
                },
                {
                    "kind": "SCALAR",
                    "name": "Date",
                    "description": "The `Date` scalar type represents a timestamp provided in UTC. `Date` expects timestamps to be formatted in accordance with the [ISO-8601](https://en.wikipedia.org/wiki/ISO_8601) standard.",
                    "fields": null,
                    "inputFields": null,
                    "interfaces": null,
                    "enumValues": null,
                    "possibleTypes": null
                },
                {
                    "kind": "SCALAR",
                    "name": "Decimal",
                    "description": null,
                    "fields": null,
                    "inputFields": null,
                    "interfaces": null,
                    "enumValues": null,
                    "possibleTypes": null
                },
              	[ . . . ]
                // many more documentation from the server
        }
    }
}

This is really awesome. With GraphiQL I got a lot more stuff than expected. And it didn't take more than 5 hours to implement this middleware.

.NET Core 2.0 and ASP.NET 2.0 Core are here and ready to use

Tuesday, October 24, 2017 12:00 AM

Recently I did an overview talk about .NET Core, .NET Standard and ASP.NET Core at the Azure Meetup Freiburg. I told them about .NET Core 2.0, showed the dotnet CLI and the integration in Visual Studio, and explained the purpose of .NET Standard and why developers should care about it. I also showed them ASP.NET Core, how it works, how to host it, and explained the main differences to the ASP.NET 4.x versions.

BTW: This Meetup was really great. Well organized at a pretty nice and modern location. It was really fun to talk there. Thanks to Christian, Patrick and Nadine for organizing this event :-)

After that talk they asked me some pretty interesting and important questions:

Question 1: "Should we start using ASP.NET Core and .NET Core?"

My answer is a pretty clear YES.

As a library developer, there is almost no reason not to use the .NET Standard. Since .NET Standard 2.0 the full API of the .NET Framework is available and can be used to write libraries for .NET Core, Xamarin, UWP and the full .NET Framework. It also supports referencing full .NET Framework assemblies.

The .NET Standard is an API specification that needs to be implemented by the platform-specific frameworks. The .NET Framework 4.6.2, .NET Core 2.0 and Xamarin implement the .NET Standard 2.0, which means they all use the same API (namespace names, class names, method names). Libraries written against the .NET Standard 2.0 API will run on the .NET Framework 4.6.2, on .NET Core 2.0, as well as on Xamarin and on every other platform-specific framework that supports that API.
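For example, a class library targeting the .NET Standard 2.0 API just needs the corresponding TargetFramework in its project file (a minimal sketch):

```xml
<Project Sdk="Microsoft.NET.Sdk">
  <PropertyGroup>
    <!-- one library, consumable from the full .NET Framework, .NET Core 2.0 and Xamarin -->
    <TargetFramework>netstandard2.0</TargetFramework>
  </PropertyGroup>
</Project>
```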

Question 2: Do we need to migrate our existing web applications to ASP.NET Core?

My answer is: NO. You don't need to, and I would propose not to do it if there's no good reason.

There are a lot of blog posts out there about migrating web applications to ASP.NET Core, but you don't need to if you don't face any problems with your existing one. There are just a few reasons to migrate:

Depending on the level of customizing you did in your existing application, the migration could be a lot of effort. Someone needs to pay for that effort, which is why I would propose not to migrate to ASP.NET Core if you don't have any problems or a real need to do it.

Conclusion

I would use ASP.NET Core for every new web project and .NET Standard for every library I need to write, because both are almost mature and really usable since version 2.0. You can do almost all the stuff you can do with the full .NET Framework.

BTW: Rick Strahl also just wrote an article about that. Please read it. It's great, as almost all of his posts: https://weblog.west-wind.com/posts/2017/Oct/22/NET-Core-20-and-ASPNET-20-Core-are-finally-here

BTW: The slides of that talk are on SlideShare. If you want me to do that talk at your meetup or in your user group, just ping me on Twitter or drop me an email.

Unit Testing an ASP.NET Core Application

Wednesday, September 27, 2017 12:00 AM

ASP.NET Core 2.0 is out and it is great. Testing worked well in the previous versions, but in 2.0 it is much easier.

xUnit, Moq and FluentAssertions work great with the new version of .NET Core and ASP.NET Core. Using these tools, unit testing is really fun. ASP.NET Core itself makes testing even more fun: testing controllers was never as easy as in this version.

If you remember the old Web API and MVC versions, based on System.Web, you'll probably also remember how to write unit tests for the controllers.

In this post I'm going to show you how to unit test your controllers and how to write integration tests for your controllers.

Preparing the project to test:

To show you how this works, I created a new "ASP.NET Core Web Application":

Now I needed to select the Web API project. Be sure to select ".NET Core" and "ASP.NET Core 2.0":

To keep this post simple, I didn't select an authentication type.

In this project there is nothing special, except the new PersonsController, which is using a PersonService:

[Route("api/[controller]")]
public class PersonsController : Controller
{
    private IPersonService _personService;

    public PersonsController(IPersonService personService)
    {
        _personService = personService;
    }
    // GET api/values
    [HttpGet]
    public async Task<IActionResult> Get()
    {
        var models = _personService.GetAll();

        return Ok(models);
    }

    // GET api/values/5
    [HttpGet("{id}")]
    public async Task<IActionResult> Get(int id)
    {
        var model = _personService.Get(id);

        return Ok(model);
    }

    // POST api/values
    [HttpPost]
    public async Task<IActionResult> Post([FromBody]Person model)
    {
        if (!ModelState.IsValid)
        {
            return BadRequest(ModelState);
        }

        var person = _personService.Add(model);

        return CreatedAtAction("Get", new { id = person.Id }, person);
    }

    // PUT api/values/5
    [HttpPut("{id}")]
    public async Task<IActionResult> Put(int id, [FromBody]Person model)
    {
        if (!ModelState.IsValid)
        {
            return BadRequest(ModelState);
        }

        _personService.Update(id, model);

        return NoContent();
    }

    // DELETE api/values/5
    [HttpDelete("{id}")]
    public async Task<IActionResult> Delete(int id)
    {
        _personService.Delete(id);
        return NoContent();
    }
}

The Person class is created in a new folder "Models" and is a simple POCO:

public class Person
{
  public int Id { get; set; }
  [Required]
  public string FirstName { get; set; }
  [Required]
  public string LastName { get; set; }
  public string Title { get; set; }
  public int Age { get; set; }
  public string Address { get; set; }
  public string City { get; set; }
  [Required]
  [Phone]
  public string Phone { get; set; }
  [Required]
  [EmailAddress]
  public string Email { get; set; }
}

The PersonService uses GenFu to auto generate a list of Persons:

public class PersonService : IPersonService
{
    private List<Person> Persons { get; set; }

    public PersonService()
    {
        var i = 0;
        Persons = A.ListOf<Person>(50);
        Persons.ForEach(person =>
        {
            i++;
            person.Id = i;
        });
    }

    public IEnumerable<Person> GetAll()
    {
        return Persons;
    }

    public Person Get(int id)
    {
        return Persons.First(_ => _.Id == id);
    }

    public Person Add(Person person)
    {
        var newid = Persons.OrderBy(_ => _.Id).Last().Id + 1;
        person.Id = newid;

        Persons.Add(person);

        return person;
    }

    public void Update(int id, Person person)
    {
        var existing = Persons.First(_ => _.Id == id);
        existing.FirstName = person.FirstName;
        existing.LastName = person.LastName;
        existing.Address = person.Address;
        existing.Age = person.Age;
        existing.City = person.City;
        existing.Email = person.Email;
        existing.Phone = person.Phone;
        existing.Title = person.Title;
    }

    public void Delete(int id)
    {
        var existing = Persons.First(_ => _.Id == id);
        Persons.Remove(existing);
    }
}

public interface IPersonService
{
  IEnumerable<Person> GetAll();
  Person Get(int id);
  Person Add(Person person);
  void Update(int id, Person person);
  void Delete(int id);
}

This Service needs to be registered in the Startup.cs:

services.AddScoped<IPersonService, PersonService>();

If this is done, we can create the test project.

The unit test project

I always choose xUnit (or NUnit) over MSTest, but feel free to use MSTest. The testing framework used doesn't really matter.

Inside that project, I created two test classes: PersonsControllerIntegrationTests and PersonsControllerUnitTests

We need to add some NuGet packages. Right-click the project in VS2017 to edit the project file and to add the references manually, or use the NuGet Package Manager to add these packages:

- Microsoft.AspNetCore.All
- Microsoft.AspNetCore.TestHost
- FluentAssertions
- Moq

The first package contains the dependencies to ASP.NET Core. I use the same package as in the project to test. The second package is used for the integration tests, to build a test host for the project to test. FluentAssertions provides a more elegant way to do assertions. And Moq is used to create fake objects.

Let's start with the unit tests:

Unit Testing the Controller

We start with a simple example, testing the GET methods only:

public class PersonsControllerUnitTests
{
  [Fact]
  public async Task Values_Get_All()
  {
    // Arrange
    var controller = new PersonsController(new PersonService());

    // Act
    var result = await controller.Get();

    // Assert
    var okResult = result.Should().BeOfType<OkObjectResult>().Subject;
    var persons = okResult.Value.Should().BeAssignableTo<IEnumerable<Person>>().Subject;

    persons.Count().Should().Be(50);
  }

  [Fact]
  public async Task Values_Get_Specific()
  {
    // Arrange
    var controller = new PersonsController(new PersonService());

    // Act
    var result = await controller.Get(16);

    // Assert
    var okResult = result.Should().BeOfType<OkObjectResult>().Subject;
    var person = okResult.Value.Should().BeAssignableTo<Person>().Subject;
    person.Id.Should().Be(16);
  }
  
  [Fact]
  public async Task Persons_Add()
  {
    // Arrange
    var controller = new PersonsController(new PersonService());
    var newPerson = new Person
    {
      FirstName = "John",
      LastName = "Doe",
      Age = 50,
      Title = "FooBar",
      Email = "john.doe@foo.bar"
    };

    // Act
    var result = await controller.Post(newPerson);

    // Assert
    var okResult = result.Should().BeOfType<CreatedAtActionResult>().Subject;
    var person = okResult.Value.Should().BeAssignableTo<Person>().Subject;
    person.Id.Should().Be(51);
  }

  [Fact]
  public async Task Persons_Change()
  {
    // Arrange
    var service = new PersonService();
    var controller = new PersonsController(service);
    var newPerson = new Person
    {
      FirstName = "John",
      LastName = "Doe",
      Age = 50,
      Title = "FooBar",
      Email = "john.doe@foo.bar"
    };

    // Act
    var result = await controller.Put(20, newPerson);

    // Assert
    var okResult = result.Should().BeOfType<NoContentResult>().Subject;

    var person = service.Get(20);
    person.Id.Should().Be(20);
    person.FirstName.Should().Be("John");
    person.LastName.Should().Be("Doe");
    person.Age.Should().Be(50);
    person.Title.Should().Be("FooBar");
    person.Email.Should().Be("john.doe@foo.bar");
  }

  [Fact]
  public async Task Persons_Delete()
  {
    // Arrange
    var service = new PersonService();
    var controller = new PersonsController(service);

    // Act
    var result = await controller.Delete(20);

    // Assert
    var okResult = result.Should().BeOfType<NoContentResult>().Subject;

    // should throw an exception,
    // because the person with id==20 doesn't exist anymore
    AssertionExtensions.ShouldThrow<InvalidOperationException>(
      () => service.Get(20));
  }
}

These snippets also show the benefits of FluentAssertions. I really like the readability of this fluent API.

BTW: Enabling live unit testing in Visual Studio 2017 is impressive.

With this unit test approach the invalid ModelState isn't tested. To get this tested, we need to test in a more integrated way. The ModelBinder should also be executed, to validate the input and to set the ModelState.
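To at least cover the BadRequest branch in a plain unit test, the ModelState can be populated by hand, simulating what the ModelBinder would do for a missing required field. This is a sketch using the controller from above; the test name is mine:

```csharp
[Fact]
public async Task Persons_Add_Invalid_ModelState()
{
  // Arrange
  var controller = new PersonsController(new PersonService());
  // simulate what the ModelBinder would do for a missing required field
  controller.ModelState.AddModelError("LastName", "The LastName field is required");

  // Act
  var result = await controller.Post(new Person { FirstName = "John" });

  // Assert
  // Post() returns BadRequest(ModelState), which is a BadRequestObjectResult
  result.Should().BeOfType<BadRequestObjectResult>();
}
```

This still doesn't execute the validation attributes themselves; for that, the integration tests below are the right tool.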

One last thing about unit tests: I mentioned Moq before, because I propose to isolate the code under test. This means I wouldn't use a real service when I test a controller. I also wouldn't use a real repository when I test a service. And so on... That's why you should work with fake services instead of real ones. Moq is a tool to create such fake objects and to set them up:

[Fact]
public async Task Persons_Get_From_Moq()
{
  // Arrange
  var serviceMock = new Mock<IPersonService>();
  serviceMock.Setup(x => x.GetAll()).Returns(() => new List<Person>
  {
    new Person{Id=1, FirstName="Foo", LastName="Bar"},
    new Person{Id=2, FirstName="John", LastName="Doe"},
    new Person{Id=3, FirstName="Juergen", LastName="Gutsch"},
  });
  var controller = new PersonsController(serviceMock.Object);

  // Act
  var result = await controller.Get();

  // Assert
  var okResult = result.Should().BeOfType<OkObjectResult>().Subject;
  var persons = okResult.Value.Should().BeAssignableTo<IEnumerable<Person>>().Subject;

  persons.Count().Should().Be(3);
}

To learn more about Moq, have a look into the repository: https://github.com/moq/moq4

Integration testing the Controller

But let's start with the simple cases, by testing the GET methods first. To test the integration of a web API or any other web based API, you need to have a web server running. A web server is needed in this case too, but fortunately there is a test server which can be used. With this host there is no need to set up a separate web server on the test machine:

public class PersonsControllerIntegrationTests
{
  private readonly TestServer _server;
  private readonly HttpClient _client;

  public PersonsControllerIntegrationTests()
  {
    // Arrange
    _server = new TestServer(new WebHostBuilder()
                             .UseStartup<Startup>());
    _client = _server.CreateClient();
  }
  // ... 
}

At first, the TestServer will be set up. This guy gets a WebHostBuilder - known from every ASP.NET Core 2.0 application - and uses the Startup of the project to test. Second, we are able to create an HttpClient out of that server. We'll use this HttpClient in the tests to send requests to the server and to receive the responses.

[Fact]
public async Task Persons_Get_All()
{
  // Act
  var response = await _client.GetAsync("/api/Persons");
  response.EnsureSuccessStatusCode();
  var responseString = await response.Content.ReadAsStringAsync();

  // Assert
  var persons = JsonConvert.DeserializeObject<IEnumerable<Person>>(responseString);
  persons.Count().Should().Be(50);
}

[Fact]
public async Task Persons_Get_Specific()
{
  // Act
  var response = await _client.GetAsync("/api/Persons/16");
  response.EnsureSuccessStatusCode();
  var responseString = await response.Content.ReadAsStringAsync();

  // Assert
  var person = JsonConvert.DeserializeObject<Person>(responseString);
  person.Id.Should().Be(16);
}

Now let's test the POST request:

[Fact]
public async Task Persons_Post_Specific()
{
  // Arrange
  var personToAdd = new Person
  {
    FirstName = "John",
    LastName = "Doe",
    Age = 50,
    Title = "FooBar",
    Phone = "001 123 1234567",
    Email = "john.doe@foo.bar"
  };
  var content = JsonConvert.SerializeObject(personToAdd);
  var stringContent = new StringContent(content, Encoding.UTF8, "application/json");

  // Act
  var response = await _client.PostAsync("/api/Persons", stringContent);

  // Assert
  response.EnsureSuccessStatusCode();
  var responseString = await response.Content.ReadAsStringAsync();
  var person = JsonConvert.DeserializeObject<Person>(responseString);
  person.Id.Should().Be(51);
}

We need to prepare a little bit more here. We create a StringContent object, which derives from HttpContent. It will contain the Person we want to add as a JSON string. This gets sent via POST to the TestServer.

To test the invalid ModelState, just remove a required field or pass a wrongly formatted email address or telephone number and test against it. In this sample, I test against three missing required fields:

[Fact]
public async Task Persons_Post_Specific_Invalid()
{
  // Arrange
  var personToAdd = new Person { FirstName = "John" };
  var content = JsonConvert.SerializeObject(personToAdd);
  var stringContent = new StringContent(content, Encoding.UTF8, "application/json");

  // Act
  var response = await _client.PostAsync("/api/Persons", stringContent);

  // Assert
  response.StatusCode.Should().Be(System.Net.HttpStatusCode.BadRequest);
  var responseString = await response.Content.ReadAsStringAsync();
  responseString.Should().Contain("The Email field is required")
    .And.Contain("The LastName field is required")
    .And.Contain("The Phone field is required");
}

This is almost the same pattern for the PUT and the DELETE requests:

[Fact]
public async Task Persons_Put_Specific()
{
  // Arrange
  var personToChange = new Person
  {
    Id = 16,
    FirstName = "John",
    LastName = "Doe",
    Age = 50,
    Title = "FooBar",
    Phone = "001 123 1234567",
    Email = "john.doe@foo.bar"
  };
  var content = JsonConvert.SerializeObject(personToChange);
  var stringContent = new StringContent(content, Encoding.UTF8, "application/json");

  // Act
  var response = await _client.PutAsync("/api/Persons/16", stringContent);

  // Assert
  response.EnsureSuccessStatusCode();
  var responseString = await response.Content.ReadAsStringAsync();
  responseString.Should().Be(String.Empty);
}

[Fact]
public async Task Persons_Put_Specific_Invalid()
{
  // Arrange
  var personToChange = new Person { FirstName = "John" };
  var content = JsonConvert.SerializeObject(personToChange);
  var stringContent = new StringContent(content, Encoding.UTF8, "application/json");

  // Act
  var response = await _client.PutAsync("/api/Persons/16", stringContent);

  // Assert
  response.StatusCode.Should().Be(System.Net.HttpStatusCode.BadRequest);
  var responseString = await response.Content.ReadAsStringAsync();
  responseString.Should().Contain("The Email field is required")
    .And.Contain("The LastName field is required")
    .And.Contain("The Phone field is required");
}

[Fact]
public async Task Persons_Delete_Specific()
{
  // Arrange

  // Act
  var response = await _client.DeleteAsync("/api/Persons/16");

  // Assert
  response.EnsureSuccessStatusCode();
  var responseString = await response.Content.ReadAsStringAsync();
  responseString.Should().Be(String.Empty);
}

Conclusion

That's it.

Thanks to the TestServer! With this it is really easy to write integration tests for the controllers. Sure, it is still a little more effort than the much simpler unit tests, but you don't have external dependencies anymore. No external web server to manage. No external web server, which is out of control.

Try it out and tell me about your opinion :)

Current Activities

Monday, September 25, 2017 12:00 AM

Wow! Some weeks without a new blog post. My plan was to write at least one post per week. But this doesn't always work. Usually I write on the train, on my way to work in Basel (CH). Sometimes I write two or three posts in advance, to publish them later on. But I had some vacation, worked two weeks completely at home, and I had some other things to prepare and to manage. This is a short overview of the current activities:

INETA Germany

The last year was kinda busy, even from the developer community perspective. The leadership of the INETA Germany was one, but a pretty small part. We are currently planning some huge changes with the INETA Germany. One of the changes is to open the INETA Germany for Austrian and Swiss user groups and speakers. We already have some Austrian and Swiss speakers registered in the Speakers Bureau. Contact us via hallo@ineta-deutschland.de, if you are an Austrian or Swiss user group and want to become an INETA member.

Authoring articles for a magazine

The last twelve months were also busy, because almost every month I wrote an article for one of the most popular German-speaking developer magazines. All these articles were about ASP.NET Core and .NET Core. Here is a list of them:

And there will be some more in the next months.

Technical Review of a book

For the first time I'm doing a technical review of a book about "ASP.NET Core 2.0 and Angular 4". I'm a little bit proud of being asked to do it. The author is one of the famous European authors and this book is awesome and great for Angular and ASP.NET Core beginners. Unfortunately I cannot mention the author's name or link to the book title yet, but I definitely will once it is finished.

Talks

What else? Yes I was talking and I will talk about various topics.

In August I did a talk at the Zurich Azure meetup about how we (yooapps.com) created the digital presentation (web site, mobile apps) for one of the most successful soccer clubs in Switzerland on Microsoft Azure. It is also about how we maintain it and how we deploy it to Azure. I'll do the talk at the Basel .NET User Group today and in October at the Azure meetup Karlsruhe. I'd like to do this talk at a lot more user groups, because we are kinda proud of that project. Contact me, if you'd like to have this talk in your user group. (BTW: I'll be in the Seattle (WA) area at the beginning of March 2018. So I could do this talk in Seattle or Portland too.)

Title: Soccer in the cloud – FC Basel on Azure

Abstract: Three years ago we released a completely new version of the digital presentation (website, mobile apps, live ticker, image database, videos, shop and extranet) of one of the most successful soccer clubs in Switzerland. All of the services of that digital presentation are designed to run on Microsoft Azure. Since then we have been continuously working on it, adding new features, integrating new services and improving performance, security and so on. This talk is about how the digital presentation was built, how it works and how we continuously deploy new features and improvements to Azure.

At the ADC 2017 in Cologne I talked about customizing ASP.NET Core (sources on GitHub) and I did a one day workshop about developing a web application using ASP.NET Core (sources on GitHub)

Also in October, I'll do an introduction talk about .NET Core, .NET Standard and ASP.NET Core at the "Azure & .NET meetup Freiburg" (Germany).

Open Source

One project that should have been released a while ago is LightCore. Unfortunately I didn't find the time in the last weeks to work on it. Peter Bucher (who initially created the project and is currently working on it) was busy too. We currently have a critical bug, which needs to be solved before we are able to release. The ASP.NET Core integration also needs to be finished first.

Doing a lot of sports

For almost 18 months, I have been doing a lot of running and biking. This also takes some time, but is a lot of addictive fun. I cannot live two days without being outside running or biking. I wouldn't have believed that, if you had told me about it two years ago. It changed my life a lot. I attend official runs and bike races and, last but not least, I lost around 20 kilos in the last one and a half years.

Querying AD in SQL Server via LDAP provider

Monday, August 7, 2017 12:00 AM

This is kinda off-topic, because it's not about ASP.NET Core, but I really like to share it. I recently needed to import some additional user data via a nightly run into a SQL Server database. The base user data came from a SAP database via a CSV bulk import. But not all of the data: e.g. the telephone numbers are mostly maintained by the users themselves in the AD. After the SAP import, we need to update the telephone numbers with the data from the AD.

The bulk import was done with a stored procedure and executed nightly with a SQL Server job. So it made sense to do the AD import with a stored procedure too. I wasn't really sure whether this would work via the SQL Server.

My favorite programming languages are C# and JavaScript, and I'm not really a friend of T-SQL, but I tried it. I googled around a little bit and found a quick solution in T-SQL.

The trick is to map the AD via an LDAP provider as a linked server to the SQL Server. This can even be done via a dialog, but I never got it running like this, so I chose to use T-SQL instead:

USE [master]
GO 
EXEC master.dbo.sp_addlinkedserver @server = N'ADSI', @srvproduct=N'Active Directory Service Interfaces', @provider=N'ADSDSOObject', @datasrc=N'adsdatasource'
EXEC master.dbo.sp_addlinkedsrvlogin @rmtsrvname=N'ADSI',@useself=N'False',@locallogin=NULL,@rmtuser=N'\',@rmtpassword='*******'
GO 
EXEC master.dbo.sp_serveroption @server=N'ADSI', @optname=N'collation compatible',  @optvalue=N'false'
GO 
EXEC master.dbo.sp_serveroption @server=N'ADSI', @optname=N'data access', @optvalue=N'true'
GO 
EXEC master.dbo.sp_serveroption @server=N'ADSI', @optname=N'dist', @optvalue=N'false'
GO 
EXEC master.dbo.sp_serveroption @server=N'ADSI', @optname=N'pub', @optvalue=N'false'
GO 
EXEC master.dbo.sp_serveroption @server=N'ADSI', @optname=N'rpc', @optvalue=N'false'
GO 
EXEC master.dbo.sp_serveroption @server=N'ADSI', @optname=N'rpc out', @optvalue=N'false'
GO 
EXEC master.dbo.sp_serveroption @server=N'ADSI', @optname=N'sub', @optvalue=N'false'
GO 
EXEC master.dbo.sp_serveroption @server=N'ADSI', @optname=N'connect timeout', @optvalue=N'0'
GO 
EXEC master.dbo.sp_serveroption @server=N'ADSI', @optname=N'collation name', @optvalue=null
GO 
EXEC master.dbo.sp_serveroption @server=N'ADSI', @optname=N'lazy schema validation',  @optvalue=N'false'
GO 
EXEC master.dbo.sp_serveroption @server=N'ADSI', @optname=N'query timeout', @optvalue=N'0'
GO 
EXEC master.dbo.sp_serveroption @server=N'ADSI', @optname=N'use remote collation',  @optvalue=N'true'
GO 
EXEC master.dbo.sp_serveroption @server=N'ADSI', @optname=N'remote proc transaction promotion', @optvalue=N'true'
GO

You can use this script to set up a new linked server to the AD. Just set the right user and password in the second T-SQL statement. This user should have read access to the AD. A specific service account would make sense here. Don't save the script with the user credentials in it. Once the linked server is set up, you don't need this script anymore.

This setup was easy. The most painful part was to get a working query.

SELECT * FROM OpenQuery ( 
  ADSI,  
  'SELECT cn, samaccountname, mail, mobile, telephonenumber, sn, givenname, co, company
  FROM ''LDAP://DC=company,DC=domain,DC=controller'' 
  WHERE objectClass = ''User'' and co = ''Switzerland''
  ') AS tblADSI
  WHERE mail IS NOT NULL AND (telephonenumber IS NOT NULL OR mobile IS NOT NULL)
ORDER BY cn

Any error in the query to execute resulted in a generic error message, which only told me that there was a problem building the query. Not really helpful.

It took me two hours to find the right LDAP connection string and some more hours to find the right properties for the query.

The other painful thing is the conditions, because the WHERE clause outside the OpenQuery couldn't be run inside the OpenQuery. Don't ask me why. My idea was to limit the result set completely with the query inside the OpenQuery, but I was only able to limit it to the objectClass "User" and to the country. Also, the AD needs to be maintained in a proper way: e.g. the field company didn't return the company (which should be the same in the entire company) but the company units.

BTW: the column order in the result set is the complete reverse of the order defined in the query.

Later, I could limit the result set to existing emails (to find out whether this is a real user) and existing telephone numbers.

The rest is easy: wrap that query in a stored procedure, iterate through all of the users, find the related ones in the database (previously imported from SAP) and update the telephone numbers.

Creating an email form with ASP.NET Core Razor Pages

Wednesday, July 26, 2017 12:00 AM

In the comments of my last post, I got asked to write about how to create an email form using ASP.NET Core Razor Pages. The reader also asked for a tutorial about authentication and authorization. I'll write about this in one of the next posts. This post is just about creating a form and sending an email with the form values.

Creating a new project

To try this out, you need to have the latest preview of Visual Studio 2017 installed (I use 15.3.0 preview 3) and the .NET Core 2.0 preview installed (2.0.0-preview2-006497 in my case).

In Visual Studio 2017, use "File... New Project" to create a new project. Navigate to ".NET Core", choose the "ASP.NET Core Web Application (.NET Core)" project and choose a name and a location for that new project.

In the next dialogue, you probably need to switch to ASP.NET Core 2.0 to see all the new available project types. (I will write about the other ones in the next posts.) Select the "Web Application (Razor Pages)" template and press "OK".

That's it. The new ASP.NET Core Razor Pages project is created.

Creating the form

It makes sense to use the contact.cshtml page to add the new contact form. The contact.cshtml.cs is the PageModel to work with. Inside this file, I added a small class called ContactFormModel. This class will contain the form values after the POST request was sent.

public class ContactFormModel
{
  [Required]
  public string Name { get; set; }
  [Required]
  public string LastName { get; set; }
  [Required]
  public string Email { get; set; }
  [Required]
  public string Message { get; set; }
}

To use this class, we need to add a property of this type to the ContactModel:

[BindProperty]
public ContactFormModel Contact { get; set; }

This attribute does some magic. It automatically binds the ContactFormModel to the view and contains the data after the post was sent back to the server. It is actually the MVC model binding, but provided in a different way. If we have the regular model binding, we should also have a ModelState. And we actually do:

public async Task<IActionResult> OnPostAsync()
{
  if (!ModelState.IsValid)
  {
    return Page();
  }

  // create and send the mail here

  return RedirectToPage("Index");
}

This is an async OnPost method, which looks pretty much the same as a controller action. This returns a Task of IActionResult, checks the ModelState and so on.

Let's create the HTML form for this code in the contact.cshtml. I use bootstrap (just because it's available) to format the form, so the HTML code contains some overhead:

<h3>Contact us</h3>
<form method="post">
  <div asp-validation-summary="All"></div>
  <div class="form-group">
    <label asp-for="Contact.Name"></label>
    <input asp-for="Contact.Name" class="form-control" />
  </div>
  <div class="form-group">
    <label asp-for="Contact.LastName"></label>
    <input asp-for="Contact.LastName" class="form-control" />
  </div>
  <div class="form-group">
    <label asp-for="Contact.Email"></label>
    <input asp-for="Contact.Email" class="form-control" />
  </div>
  <div class="form-group">
    <label asp-for="Contact.Message"></label>
    <textarea asp-for="Contact.Message" class="form-control"></textarea>
  </div>
  <button type="submit" class="btn btn-default">Send</button>
</form>

This also looks pretty much the same as in common ASP.NET Core MVC views. There's no difference.

BTW: I'm still impressed by the tag helpers. These guys even make writing and formatting code a lot easier.

Accessing the form data

As I wrote some lines above, the model binding works for you. It fills the property Contact with data and makes it available in the OnPostAsync() method, if the BindProperty attribute is set.

[BindProperty]
public ContactFormModel Contact { get; set; }

Actually, I expected to get the model passed as an argument to OnPost, as I saw it the first time. But you are able to use the property directly, without anything else to do:

var mailbody = $@"Hello website owner,

This is a new contact request from your website:

Name: {Contact.Name}
LastName: {Contact.LastName}
Email: {Contact.Email}
Message: ""{Contact.Message}""


Cheers,
The website's contact form";

SendMail(mailbody);

That's nice, isn't it?

Sending the emails

Thanks to the pretty awesome .NET Standard 2.0 and the new APIs available for .NET Core 2.0, it gets even nicer:

// irony on

Finally in .NET Core 2.0, it is now possible to send emails directly to an SMTP server using the famous and pretty well known System.Net.Mail.SmtpClient():

private void SendMail(string mailbody)
{
  using (var message = new MailMessage(Contact.Email, "me@mydomain.com"))
  {
    message.To.Add(new MailAddress("me@mydomain.com"));
    message.From = new MailAddress(Contact.Email);
    message.Subject = "New E-Mail from my website";
    message.Body = mailbody;

    using (var smtpClient = new SmtpClient("mail.mydomain.com"))
    {
      smtpClient.Send(message);
    }
  }
}

Isn't that cool?

// irony off

It definitely works and this is actually a good thing.

In previous .NET Core versions it was recommended to use an external mail delivery service like SendGrid. These kinds of services usually provide a REST based API, which can be used to communicate with that specific service. Some of them also provide various client libraries for the different platforms and languages to wrap those APIs and make them easier to use.

I'm anyway a huge fan of such services, because they are easier to use and I don't need to handle message details like encoding. I don't need to care about SMTP hosts and ports, because it is all HTTPS. I don't really need to care as much about spam handling, because this is done by the service. Using such services, I just need to configure the sender mail address and maybe a domain, but the DNS settings are done by them.

SendGrid can be bought via the Azure Marketplace and contains a huge number of free emails to send. I would propose to use such services whenever it's possible. The SmtpClient is good in enterprise environments where you don't need to go through the internet to send mails. But maybe the Exchange API is another or better option in enterprise environments.
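To sketch the service based approach: with the SendGrid C# client (the SendGrid NuGet package with its v9 API; the API key and addresses here are placeholders) sending the same mail is a single HTTPS call instead of an SMTP conversation:

```csharp
// requires the SendGrid NuGet package:
// using SendGrid;
// using SendGrid.Helpers.Mail;
private async Task SendMailViaSendGrid(string mailbody)
{
  // the API key would come from the configuration, not from code
  var client = new SendGridClient("YOUR_SENDGRID_API_KEY");
  var message = MailHelper.CreateSingleEmail(
    from: new EmailAddress(Contact.Email),
    to: new EmailAddress("me@mydomain.com"),
    subject: "New E-Mail from my website",
    plainTextContent: mailbody,
    htmlContent: null);

  // one HTTPS request to the SendGrid REST API
  await client.SendEmailAsync(message);
}
```

This is just a sketch of the idea; the concrete key handling and error handling depend on your setup.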

Conclusion

The email form is working and it is actually not much code written by myself. That's awesome. For such scenarios the Razor Pages are pretty cool and easy to use. There's no controller to set up, the views and the PageModels are pretty close together and the code to generate one page is not distributed over three different folders as in MVC. To create bigger applications, MVC is for sure the best choice, but I really like the possibility to keep small apps as simple as possible.

New Visual Studio Web Application: The ASP.NET Core Razor Pages

Monday, July 24, 2017 12:00 AM

I think, everyone who followed the last couple of ASP.NET Community Standup session heard about Razor Pages. Did you try the Razor Pages? I didn't. I focused completely on ASP.NET Core MVC and Web API. With this post I'm going to have a first look into it. I'm going to try it out.

I was also a little bit skeptical about it and compared it to the ASP.NET Web Site project. That was definitely wrong.

You need to have the latest preview of Visual Studio 2017 installed on your machine, because the Razor Pages came with the ASP.NET Core 2.0 preview. They are based on ASP.NET Core and part of the MVC framework.

Creating a Razor Pages project

Using Visual Studio 2017, I used "File... New Project" to create a new project. I navigated to ".NET Core", chose the "ASP.NET Core Web Application (.NET Core)" project and chose a name and a location for that project.

In the next dialogue, I needed to switch to ASP.NET Core 2.0 to see all the new available project types. (I will write about the other ones in the next posts.) I selected the "Web Application (Razor Pages)" template and pressed "OK".

Program.cs and Startup.cs

If you are already familiar with ASP.NET Core projects, you'll find nothing new in the Program.cs and in the Startup.cs. Both files look pretty much the same.

public class Program
{
  public static void Main(string[] args)
  {
    BuildWebHost(args).Run();
  }

  public static IWebHost BuildWebHost(string[] args) =>
    WebHost.CreateDefaultBuilder(args)
    .UseStartup<Startup>()
    .Build();
}

The Startup.cs has a services.AddMvc() and an app.UseMvc() with a configured route:

public void Configure(IApplicationBuilder app, IHostingEnvironment env)
{
  if (env.IsDevelopment())
  {
    app.UseDeveloperExceptionPage();
    app.UseBrowserLink();
  }
  else
  {
    app.UseExceptionHandler("/Error");
  }

  app.UseStaticFiles();

  app.UseMvc(routes =>
  {
    routes.MapRoute(
      name: "default",
      template: "{controller=Home}/{action=Index}/{id?}");
  });
}

That means the Razor Pages are actually part of the MVC framework, as Damian Edwards always said in the Community Standups.

The solution

But the solution looks a little different. Instead of the Views and Controllers folders, there is only a Pages folder with the razor files in it. There are even well-known files in it: the _layout.cshtml, _ViewImports.cshtml and _ViewStart.cshtml.

Within the _ViewImports.cshtml we also have the import of the default TagHelpers

@namespace RazorPages.Pages
@addTagHelper *, Microsoft.AspNetCore.Mvc.TagHelpers

This makes sense, since the Razor Pages are part of the MVC Framework.

We also have the standard pages of every new ASP.NET project: Home, Contact and About. (I'm going to have a look at these files later on.)

As with every new web project in Visual Studio, this project type is also ready to run. Pressing F5 starts the web application and opens the URL in the browser.

Frontend

For the frontend dependencies, "bower" is used. It will put all the stuff into wwwroot/lib. So even this works the same way as in MVC. Custom CSS and custom JavaScript are in the css and js folders under wwwroot. This should all be familiar to ASP.NET Core developers.

Also the way the resources are used in the _Layout.cshtml is the same.

Welcome back "Code Behind"

This was my first thought for just a second, when I saw that there are nested files under the Index, About, Contact and Error pages. At first glimpse these files look almost like the code-behind files of Web Forms based ASP.NET pages, but they are completely different:

public class ContactModel : PageModel
{
    public string Message { get; set; }

    public void OnGet()
    {
        Message = "Your contact page.";
    }
}

They are not called Page, but Model, and they have something like a handler in them, to do something on a specific action. Actually it is not a handler, it is a method which gets automatically invoked, if it exists. This is a lot better than the pretty old Web Forms concept. The base class PageModel just provides access to some properties like the Context, Request, Response, User, RouteData, ModelState, ViewData and so on. It also provides methods to redirect to other pages, to respond with specific HTTP status codes, to sign in and sign out. This is pretty much it.

The method OnGet allows us to access the page via a GET request. OnPost does the same for POST. Guess what OnPut does ;)

Do you remember Web Forms? There's no need to ask if the current request is a GET or POST request. There's a single decoupled method per HTTP method. This is really nice.
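To make the convention concrete, here is a small sketch (the FeedbackModel name and its content are made up): one handler per HTTP method, where an Async suffix and an IActionResult return type are supported as well:

```csharp
public class FeedbackModel : PageModel
{
    [BindProperty]
    public string Message { get; set; }

    // invoked for GET requests
    public void OnGet()
    {
    }

    // invoked for POST requests; the Async suffix is recognized by convention
    public async Task<IActionResult> OnPostAsync()
    {
        if (!ModelState.IsValid)
        {
            return Page();
        }

        await SaveMessageAsync(Message); // hypothetical helper to store the message
        return RedirectToPage("Index");
    }
}
```

There is no routing table entry to write for this; the page's location in the Pages folder defines its URL.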

On our Contact page, inside the method OnGet the message "Your contact page." will be set. This message gets displayed on the specific Contact page:

@page
@model ContactModel
@{
    ViewData["Title"] = "Contact";
}

@ViewData["Title"].

@Model.Message

As you can see in the razor page, the PageModel is actually the model of that view, which gets passed to the view the same way as in ASP.NET MVC. The only difference is that there's no action to write code for; something else invokes the OnGet method in the PageModel.

Conclusion

This is just a pretty fast first look, but Razor Pages seem to be pretty cool for small and low budget projects, e. g. for promotional micro sites with little dynamic content. There's less code to write and less to set up, no controllers and actions to think about. This makes it pretty easy to quickly start a project or to prototype some features.

But there's no limitation to small projects. It is a real ASP.NET Core application, which gets compiled and can easily use additional libraries. Even dependency injection is available in ASP.NET Core Razor Pages. That means it is possible to let the application grow.
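As a quick sketch of that: a service registered in Startup can be constructor-injected into a PageModel. The IGreetingService type is made up for this example:

```csharp
using Microsoft.AspNetCore.Mvc.RazorPages;

// Hypothetical service, registered in Startup.ConfigureServices, e.g.:
// services.AddTransient<IGreetingService, GreetingService>();
public interface IGreetingService
{
    string Greet(string name);
}

public class IndexModel : PageModel
{
    private readonly IGreetingService _greetingService;

    // The service gets injected via the constructor,
    // exactly as in an MVC controller
    public IndexModel(IGreetingService greetingService)
    {
        _greetingService = greetingService;
    }

    public string Message { get; set; }

    public void OnGet()
    {
        Message = _greetingService.Greet("Razor Pages");
    }
}
```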

It is even possible to use the MVC concept in parallel, by adding the Controllers and the Views folders to that application. You can also share the _Layout.cshtml between the Razor Pages and the MVC views, just by telling the _ViewStart.cshtml where the _Layout.cshtml is.
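A minimal _ViewStart.cshtml for that could look like this; the path is just an assumption for a layout living in the MVC Views folder:

```csharp
@{
    Layout = "/Views/Shared/_Layout.cshtml";
}
```

With a _ViewStart.cshtml like this in both the Pages and the Views folder, the Razor Pages and the MVC views render inside the same layout.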

Don't believe me? Try this out: https://github.com/juergengutsch/razor-pages-demo/

I'm pretty sure I'll use it in some of my real projects in the future.

LightCore is back - LightCore 2.0

Monday, July 17, 2017 12:00 AM

Until now it has been pretty hard to move an existing, more complex .NET Framework library to .NET Core or .NET Standard 1.x. More complex in my case just means that this specific library uses reflection a little more than some others maybe do. I'm talking about the LightCore DI container.

I started to move it to .NET Core in November 2015, but gave up half a year later, because porting it to .NET Core needed much more time and because it makes much more sense to port it to .NET Standard than to .NET Core. The announcement of .NET Standard 2.0 made me much more optimistic about getting it done with a lot less effort. So I stopped moving it to .NET Core and .NET Standard 1.x and waited for .NET Standard 2.0.

LightCore is back

Some weeks ago the preview version of .NET Standard 2.0 was announced and I tried it again. It works as expected. The API of .NET Standard 2.0 is big enough to get the old sources of LightCore running. Rick Strahl also wrote a pretty cool and detailed post about it:

The current status

Roadmap

I don't really want to release the new version until .NET Standard 2.0 is released, because I don't want to have the preview versions of the .NET Standard libraries referenced in a final version of LightCore. That means there is still some time to get the open issues done.

Open issues

The progress is good, but there is still something to do, before the release of the first preview:

Contributions

I'd like to call for contributions: try LightCore, raise issues, create PRs, and do whatever is needed to bring LightCore back to life and to make it a lightweight, fast and easy to use DI container.
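If you want to give it a try, basic usage could look roughly like this. The ICounter/Counter types are made up for the sketch, and the ContainerBuilder-based API shown here is my reading of the classic LightCore 1.x surface, which I assume stays the same in 2.0:

```csharp
using System;
using LightCore;

public interface ICounter
{
    int Next();
}

public class Counter : ICounter
{
    private int _current;
    public int Next() => ++_current;
}

public class Program
{
    public static void Main()
    {
        // Register the contract and its implementation
        var builder = new ContainerBuilder();
        builder.Register<ICounter, Counter>();

        // Build the container and resolve an instance
        var container = builder.Build();
        var counter = container.Resolve<ICounter>();

        Console.WriteLine(counter.Next());
    }
}
```

Every issue raised against small samples like this helps to verify that the ported code behaves the same as the old .NET Framework version.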