ASP.NET und mehr ...

Articles are published more or less regularly on my blog at ASP.NET Zone: ASP.NET und mehr...

Unit Testing an ASP.NET Core Application

Wednesday, September 27, 2017 12:00 AM

ASP.NET Core 2.0 is out and it is great. Testing worked well in the previous versions, but in 2.0 it is much easier.

Xunit, Moq and FluentAssertions work great with the new version of .NET Core and ASP.NET Core. Using these tools, unit testing is really fun, and ASP.NET Core makes testing even more enjoyable. Testing controllers has never been this easy.

If you remember the old Web API and MVC versions, based on System.Web, you'll probably also remember how you had to write unit tests for the controllers.

In this post I'm going to show you how to unit test your controllers and how to write integration tests for your controllers.

Preparing the project to test

To show you how this works, I created a new "ASP.NET Core Web Application" :

Now I needed to select the Web API project. Be sure to select ".NET Core" and "ASP.NET Core 2.0":

To keep this post simple, I didn't select an authentication type.

There is nothing special in this project, except the new PersonsController, which uses a PersonService:

[Route("api/[controller]")]
public class PersonsController : Controller
{
    private IPersonService _personService;

    public PersonsController(IPersonService personService)
    {
        _personService = personService;
    }
    // GET api/values
    [HttpGet]
    public async Task<IActionResult> Get()
    {
        var models = _personService.GetAll();

        return Ok(models);
    }

    // GET api/values/5
    [HttpGet("{id}")]
    public async Task<IActionResult> Get(int id)
    {
        var model = _personService.Get(id);

        return Ok(model);
    }

    // POST api/values
    [HttpPost]
    public async Task<IActionResult> Post([FromBody]Person model)
    {
        if (!ModelState.IsValid)
        {
            return BadRequest(ModelState);
        }

        var person = _personService.Add(model);

        return CreatedAtAction("Get", new { id = person.Id }, person);
    }

    // PUT api/values/5
    [HttpPut("{id}")]
    public async Task<IActionResult> Put(int id, [FromBody]Person model)
    {
        if (!ModelState.IsValid)
        {
            return BadRequest(ModelState);
        }

        _personService.Update(id, model);

        return NoContent();
    }

    // DELETE api/values/5
    [HttpDelete("{id}")]
    public async Task<IActionResult> Delete(int id)
    {
        _personService.Delete(id);
        return NoContent();
    }
}

The Person class is created in a new folder "Models" and is a simple POCO:

public class Person
{
  public int Id { get; set; }
  [Required]
  public string FirstName { get; set; }
  [Required]
  public string LastName { get; set; }
  public string Title { get; set; }
  public int Age { get; set; }
  public string Address { get; set; }
  public string City { get; set; }
  [Required]
  [Phone]
  public string Phone { get; set; }
  [Required]
  [EmailAddress]
  public string Email { get; set; }
}

The PersonService uses GenFu to auto generate a list of Persons:

public class PersonService : IPersonService
{
    private List<Person> Persons { get; set; }

    public PersonService()
    {
        var i = 0;
        Persons = A.ListOf<Person>(50);
        Persons.ForEach(person =>
        {
            i++;
            person.Id = i;
        });
    }

    public IEnumerable<Person> GetAll()
    {
        return Persons;
    }

    public Person Get(int id)
    {
        return Persons.First(_ => _.Id == id);
    }

    public Person Add(Person person)
    {
        var newid = Persons.OrderBy(_ => _.Id).Last().Id + 1;
        person.Id = newid;

        Persons.Add(person);

        return person;
    }

    public void Update(int id, Person person)
    {
        var existing = Persons.First(_ => _.Id == id);
        existing.FirstName = person.FirstName;
        existing.LastName = person.LastName;
        existing.Address = person.Address;
        existing.Age = person.Age;
        existing.City = person.City;
        existing.Email = person.Email;
        existing.Phone = person.Phone;
        existing.Title = person.Title;
    }

    public void Delete(int id)
    {
        var existing = Persons.First(_ => _.Id == id);
        Persons.Remove(existing);
    }
}

public interface IPersonService
{
  IEnumerable<Person> GetAll();
  Person Get(int id);
  Person Add(Person person);
  void Update(int id, Person person);
  void Delete(int id);
}

This Service needs to be registered in the Startup.cs:

services.AddScoped<IPersonService, PersonService>();

If this is done, we can create the test project.

The unit test project

I always choose xUnit (or NUnit) over MSTest, but feel free to use MSTest. The testing framework used doesn't really matter.

Inside that project, I created two test classes: PersonsControllerIntegrationTests and PersonsControllerUnitTests

We need to add some NuGet packages. Right-click the project in VS2017 to edit the project file and add the references manually, or use the NuGet Package Manager to add these packages:

- Microsoft.AspNetCore.All
- Microsoft.AspNetCore.TestHost
- FluentAssertions
- Moq

The first package contains the dependencies to ASP.NET Core. I use the same package as in the project to test. The second package is used for the integration tests, to build a test host for the project to test. FluentAssertions provides a more elegant way to do assertions. And Moq is used to create fake objects.
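If you edit the project file manually, the references could look roughly like this (the version numbers from the 2.0 era are assumptions):

```xml
<ItemGroup>
  <PackageReference Include="Microsoft.AspNetCore.All" Version="2.0.0" />
  <PackageReference Include="Microsoft.AspNetCore.TestHost" Version="2.0.0" />
  <PackageReference Include="FluentAssertions" Version="4.19.4" />
  <PackageReference Include="Moq" Version="4.7.99" />
</ItemGroup>
```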

Let's start with the unit tests:

Unit Testing the Controller

We start with a simple example, by testing the GET methods only:

public class PersonsControllerUnitTests
{
  [Fact]
  public async Task Values_Get_All()
  {
    // Arrange
    var controller = new PersonsController(new PersonService());

    // Act
    var result = await controller.Get();

    // Assert
    var okResult = result.Should().BeOfType<OkObjectResult>().Subject;
    var persons = okResult.Value.Should().BeAssignableTo<IEnumerable<Person>>().Subject;

    persons.Count().Should().Be(50);
  }

  [Fact]
  public async Task Values_Get_Specific()
  {
    // Arrange
    var controller = new PersonsController(new PersonService());

    // Act
    var result = await controller.Get(16);

    // Assert
    var okResult = result.Should().BeOfType<OkObjectResult>().Subject;
    var person = okResult.Value.Should().BeAssignableTo<Person>().Subject;
    person.Id.Should().Be(16);
  }
  
  [Fact]
  public async Task Persons_Add()
  {
    // Arrange
    var controller = new PersonsController(new PersonService());
    var newPerson = new Person
    {
      FirstName = "John",
      LastName = "Doe",
      Age = 50,
      Title = "FooBar",
      Email = "john.doe@foo.bar"
    };

    // Act
    var result = await controller.Post(newPerson);

    // Assert
    var okResult = result.Should().BeOfType<CreatedAtActionResult>().Subject;
    var person = okResult.Value.Should().BeAssignableTo<Person>().Subject;
    person.Id.Should().Be(51);
  }

  [Fact]
  public async Task Persons_Change()
  {
    // Arrange
    var service = new PersonService();
    var controller = new PersonsController(service);
    var newPerson = new Person
    {
      FirstName = "John",
      LastName = "Doe",
      Age = 50,
      Title = "FooBar",
      Email = "john.doe@foo.bar"
    };

    // Act
    var result = await controller.Put(20, newPerson);

    // Assert
    var okResult = result.Should().BeOfType<NoContentResult>().Subject;

    var person = service.Get(20);
    person.Id.Should().Be(20);
    person.FirstName.Should().Be("John");
    person.LastName.Should().Be("Doe");
    person.Age.Should().Be(50);
    person.Title.Should().Be("FooBar");
    person.Email.Should().Be("john.doe@foo.bar");
  }

  [Fact]
  public async Task Persons_Delete()
  {
    // Arrange
    var service = new PersonService();
    var controller = new PersonsController(service);

    // Act
    var result = await controller.Delete(20);

    // Assert
    var okResult = result.Should().BeOfType<NoContentResult>().Subject;

    // should throw an exception,
    // because the person with id == 20 doesn't exist anymore
    AssertionExtensions.ShouldThrow<InvalidOperationException>(
      () => service.Get(20));
  }
}

These snippets also show the benefits of FluentAssertions. I really like the readability of this fluent API.

BTW: Enabling live unit testing in Visual Studio 2017 is impressive:

With this unit test approach, the invalid ModelState isn't tested. To get this tested, we need to test in a more integrated way. The model binder should also be executed, to validate the input and to set the ModelState.
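That said, if you want to cover the BadRequest branch in a plain unit test, one common workaround (not from the original post) is to add a model error to the controller's ModelState by hand before calling the action:

```csharp
[Fact]
public async Task Persons_Add_Invalid_ModelState()
{
  // Arrange
  var controller = new PersonsController(new PersonService());
  // simulate what the model binder would do for a missing required field
  controller.ModelState.AddModelError("LastName", "The LastName field is required");

  // Act
  var result = await controller.Post(new Person { FirstName = "John" });

  // Assert
  result.Should().BeOfType<BadRequestObjectResult>();
}
```

This only checks the controller's reaction to an invalid ModelState; the actual validation logic is still only exercised by the integration tests below.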

One last thing about unit tests: I mentioned Moq before, because I propose to isolate the code under test. This means I wouldn't use a real service when I test a controller. I also wouldn't use a real repository when I test a service. And so on... That's why you should work with fake services instead of real ones. Moq is a tool to create such fake objects and to set them up:

[Fact]
public async Task Persons_Get_From_Moq()
{
  // Arrange
  var serviceMock = new Mock<IPersonService>();
  serviceMock.Setup(x => x.GetAll()).Returns(() => new List<Person>
  {
    new Person{Id=1, FirstName="Foo", LastName="Bar"},
    new Person{Id=2, FirstName="John", LastName="Doe"},
    new Person{Id=3, FirstName="Juergen", LastName="Gutsch"},
  });
  var controller = new PersonsController(serviceMock.Object);

  // Act
  var result = await controller.Get();

  // Assert
  var okResult = result.Should().BeOfType<OkObjectResult>().Subject;
  var persons = okResult.Value.Should().BeAssignableTo<IEnumerable<Person>>().Subject;

  persons.Count().Should().Be(3);
}

To learn more about Moq, have a look into the repository: https://github.com/moq/moq4
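Moq can also verify that the controller actually called the service. A small sketch, assuming the same IPersonService as above:

```csharp
[Fact]
public async Task Persons_Get_Specific_From_Moq()
{
  // Arrange
  var serviceMock = new Mock<IPersonService>();
  serviceMock.Setup(x => x.Get(16))
    .Returns(new Person { Id = 16, FirstName = "Foo", LastName = "Bar" });
  var controller = new PersonsController(serviceMock.Object);

  // Act
  var result = await controller.Get(16);

  // Assert
  // the controller must have asked the service exactly once for that id
  serviceMock.Verify(x => x.Get(16), Times.Once);
}
```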

Integration testing the Controller

But let's start with the simple cases, by testing the GET methods first. To test the integration of a web API or any other web based API, you need to have a web server running. In this case, a web server is needed too, but fortunately there is a test server which can be used. With this host, there is no need to set up a separate web server on the test machine:

public class PersonsControllerIntegrationTests
{
  private readonly TestServer _server;
  private readonly HttpClient _client;

  public PersonsControllerIntegrationTests()
  {
    // Arrange
    _server = new TestServer(new WebHostBuilder()
                             .UseStartup<Startup>());
    _client = _server.CreateClient();
  }
  // ... 
}

First, the TestServer gets set up. This guy receives a WebHostBuilder - known from every ASP.NET Core 2.0 application - which uses the Startup of our project to test. Second, we create an HttpClient out of that server. We'll use this HttpClient in the tests to send requests to the server and to receive the responses.

[Fact]
public async Task Persons_Get_All()
{
  // Act
  var response = await _client.GetAsync("/api/Persons");
  response.EnsureSuccessStatusCode();
  var responseString = await response.Content.ReadAsStringAsync();

  // Assert
  var persons = JsonConvert.DeserializeObject<IEnumerable<Person>>(responseString);
  persons.Count().Should().Be(50);
}

[Fact]
public async Task Persons_Get_Specific()
{
  // Act
  var response = await _client.GetAsync("/api/Persons/16");
  response.EnsureSuccessStatusCode();
  var responseString = await response.Content.ReadAsStringAsync();

  // Assert
  var person = JsonConvert.DeserializeObject<Person>(responseString);
  person.Id.Should().Be(16);
}

Now let's test the POST request:

[Fact]
public async Task Persons_Post_Specific()
{
  // Arrange
  var personToAdd = new Person
  {
    FirstName = "John",
    LastName = "Doe",
    Age = 50,
    Title = "FooBar",
    Phone = "001 123 1234567",
    Email = "john.doe@foo.bar"
  };
  var content = JsonConvert.SerializeObject(personToAdd);
  var stringContent = new StringContent(content, Encoding.UTF8, "application/json");

  // Act
  var response = await _client.PostAsync("/api/Persons", stringContent);

  // Assert
  response.EnsureSuccessStatusCode();
  var responseString = await response.Content.ReadAsStringAsync();
  var person = JsonConvert.DeserializeObject<Person>(responseString);
  person.Id.Should().Be(51);
}

We need to prepare a little bit more here. We create a StringContent object, which derives from HttpContent. It contains the Person we want to add as a JSON string. This gets sent via POST to the TestServer.

To test the invalid ModelState, just remove a required field, or pass a wrongly formatted email address or telephone number, and test against that. In this sample, I test against three missing required fields:

[Fact]
public async Task Persons_Post_Specific_Invalid()
{
  // Arrange
  var personToAdd = new Person { FirstName = "John" };
  var content = JsonConvert.SerializeObject(personToAdd);
  var stringContent = new StringContent(content, Encoding.UTF8, "application/json");

  // Act
  var response = await _client.PostAsync("/api/Persons", stringContent);

  // Assert
  response.StatusCode.Should().Be(System.Net.HttpStatusCode.BadRequest);
  var responseString = await response.Content.ReadAsStringAsync();
  responseString.Should().Contain("The Email field is required")
    .And.Contain("The LastName field is required")
    .And.Contain("The Phone field is required");
}

This is almost the same pattern for the PUT and the DELETE requests:

[Fact]
public async Task Persons_Put_Specific()
{
  // Arrange
  var personToChange = new Person
  {
    Id = 16,
    FirstName = "John",
    LastName = "Doe",
    Age = 50,
    Title = "FooBar",
    Phone = "001 123 1234567",
    Email = "john.doe@foo.bar"
  };
  var content = JsonConvert.SerializeObject(personToChange);
  var stringContent = new StringContent(content, Encoding.UTF8, "application/json");

  // Act
  var response = await _client.PutAsync("/api/Persons/16", stringContent);

  // Assert
  response.EnsureSuccessStatusCode();
  var responseString = await response.Content.ReadAsStringAsync();
  responseString.Should().Be(String.Empty);
}

[Fact]
public async Task Persons_Put_Specific_Invalid()
{
  // Arrange
  var personToChange = new Person { FirstName = "John" };
  var content = JsonConvert.SerializeObject(personToChange);
  var stringContent = new StringContent(content, Encoding.UTF8, "application/json");

  // Act
  var response = await _client.PutAsync("/api/Persons/16", stringContent);

  // Assert
  response.StatusCode.Should().Be(System.Net.HttpStatusCode.BadRequest);
  var responseString = await response.Content.ReadAsStringAsync();
  responseString.Should().Contain("The Email field is required")
    .And.Contain("The LastName field is required")
    .And.Contain("The Phone field is required");
}

[Fact]
public async Task Persons_Delete_Specific()
{
  // Arrange

  // Act
  var response = await _client.DeleteAsync("/api/Persons/16");

  // Assert
  response.EnsureSuccessStatusCode();
  var responseString = await response.Content.ReadAsStringAsync();
  responseString.Should().Be(String.Empty);
}

Conclusion

That's it.

Thanks to the TestServer, it is really easy to write integration tests for the controllers. Sure, it is still a little more effort than the much simpler unit tests, but you don't have external dependencies anymore. No external web server to manage. No external web server that is out of your control.

Try it out and tell me about your opinion :)

Current Activities

Monday, September 25, 2017 12:00 AM

Wow! Some weeks without a new blog post. My plan was to write at least one post per week. But this doesn't always work out. Usually I write on the train, on my way to work in Basel (CH). Sometimes I write two or three posts in advance, to publish them later on. But I had some vacation, worked two weeks completely from home, and I had some other things to prepare and to manage. This is a short overview of the current activities:

INETA Germany

The last year was kinda busy, even from the developer community perspective. The leadership of INETA Germany was one part, but a pretty small one. We are currently planning some huge changes to INETA Germany. One of the changes is to open INETA Germany to Austrian and Swiss user groups and speakers. We already have some Austrian and Swiss speakers registered in the Speakers Bureau. Contact us via hallo@ineta-deutschland.de if you are an Austrian or Swiss user group and want to become an INETA member.

Authoring articles for a magazine

Also, the last twelve months were busy, because almost every month I wrote an article for one of the most popular German-speaking developer magazines. All these articles were about ASP.NET Core and .NET Core. Here is a list of them:

And there will be some more in the next months.

Technical Review of a book

For the first time, I'm doing a technical review of a book about "ASP.NET Core 2.0 and Angular 4". I'm a little bit proud of being asked to do it. The author is one of the famous European authors, and this book is awesome and great for Angular and ASP.NET Core beginners. Unfortunately, I cannot yet mention the author's name or link to the book title, but I definitely will once it is finished.

Talks

What else? Yes, I was talking, and I will talk, about various topics.

In August I did a talk at the Zurich Azure meetup about how we (yooapps.com) created the digital presentation (web site, mobile apps) for one of the most successful soccer clubs in Switzerland on Microsoft Azure. It is also about how we maintain it and how we deploy it to Azure. I'll do the talk at the Basel .NET User Group today and in October at the Azure meetup Karlsruhe. I'd like to do this talk at a lot more user groups, because we are kinda proud of that project. Contact me if you'd like to have this talk in your user group. (BTW: I'll be in the Seattle (WA) area at the beginning of March 2018, so I could do this talk in Seattle or Portland too.)

Title: Soccer in the cloud – FC Basel on Azure

Abstract: Three years ago, we released a completely new version of the digital presentation (website, mobile apps, live ticker, image database, videos, shop and extranet) of one of the most successful soccer clubs in Switzerland. All of the services of that digital presentation are designed to run on Microsoft Azure. Since then, we have been continuously working on it, adding new features, integrating new services and improving performance, security and so on. This talk is about how the digital presentation was built, how it works and how we continuously deploy new features and improvements to Azure.

At the ADC 2017 in Cologne, I talked about customizing ASP.NET Core (sources on GitHub) and did a one-day workshop about developing a web application using ASP.NET Core (sources on GitHub).

Also, in October I'll do an introduction talk about .NET Core, .NET Standard and ASP.NET Core at the "Azure & .NET meetup Freiburg" (Germany).

Open Source

One project that should have been released a while ago is LightCore. Unfortunately, I didn't find time in the last weeks to work on it. Peter Bucher (who initially created that project and is currently working on it) was busy too. We currently have a critical bug, which needs to be solved before we are able to release. The ASP.NET Core integration also needs to be finished first.

Doing a lot of sports

For almost 18 months, I have been doing a lot of running and biking. This also takes some time, but it is a lot of addictive fun. I cannot go two days without being outside running or biking. I wouldn't have believed that if you had told me about it two years ago. It changed my life a lot. I attend official runs and bike races, and last but not least: I lost around 20 kilos in the last one and a half years.

Querying AD in SQL Server via LDAP provider

Monday, August 7, 2017 12:00 AM

This is kinda off-topic, because it's not about ASP.NET Core, but I really like to share it. I recently needed to import some additional user data via a nightly run into a SQL Server database. The base user data came from a SAP database via a CSV bulk import. But not all of the data: e.g. the telephone numbers are mostly maintained by the users themselves in the AD. After the SAP import, we need to update the telephone numbers with the data from the AD.

The bulk import was done with a stored procedure and executed nightly with a SQL Server job. So it made sense to do the AD import with a stored procedure too. I wasn't really sure whether this would work via the SQL Server.

My favorite programming languages are C# and JavaScript, and I'm not really a friend of T-SQL, but I tried it anyway. I googled around a little bit and found a quick solution in T-SQL.

The trick is to map the AD via an LDAP provider as a linked server to the SQL Server. This can even be done via a dialogue, but I never got it running that way, so I chose to use T-SQL instead:

USE [master]
GO 
EXEC master.dbo.sp_addlinkedserver @server = N'ADSI', @srvproduct=N'Active Directory Service Interfaces', @provider=N'ADSDSOObject', @datasrc=N'adsdatasource'
EXEC master.dbo.sp_addlinkedsrvlogin @rmtsrvname=N'ADSI',@useself=N'False',@locallogin=NULL,@rmtuser=N'\',@rmtpassword='*******'
GO 
EXEC master.dbo.sp_serveroption @server=N'ADSI', @optname=N'collation compatible',  @optvalue=N'false'
GO 
EXEC master.dbo.sp_serveroption @server=N'ADSI', @optname=N'data access', @optvalue=N'true'
GO 
EXEC master.dbo.sp_serveroption @server=N'ADSI', @optname=N'dist', @optvalue=N'false'
GO 
EXEC master.dbo.sp_serveroption @server=N'ADSI', @optname=N'pub', @optvalue=N'false'
GO 
EXEC master.dbo.sp_serveroption @server=N'ADSI', @optname=N'rpc', @optvalue=N'false'
GO 
EXEC master.dbo.sp_serveroption @server=N'ADSI', @optname=N'rpc out', @optvalue=N'false'
GO 
EXEC master.dbo.sp_serveroption @server=N'ADSI', @optname=N'sub', @optvalue=N'false'
GO 
EXEC master.dbo.sp_serveroption @server=N'ADSI', @optname=N'connect timeout', @optvalue=N'0'
GO 
EXEC master.dbo.sp_serveroption @server=N'ADSI', @optname=N'collation name', @optvalue=null
GO 
EXEC master.dbo.sp_serveroption @server=N'ADSI', @optname=N'lazy schema validation',  @optvalue=N'false'
GO 
EXEC master.dbo.sp_serveroption @server=N'ADSI', @optname=N'query timeout', @optvalue=N'0'
GO 
EXEC master.dbo.sp_serveroption @server=N'ADSI', @optname=N'use remote collation',  @optvalue=N'true'
GO 
EXEC master.dbo.sp_serveroption @server=N'ADSI', @optname=N'remote proc transaction promotion', @optvalue=N'true'
GO

You can use this script to set up a new linked server to the AD. Just set the right user and password in the second T-SQL statement. This user should have read access to the AD; a specific service account would make sense here. Don't save the script with the user credentials in it. Once the linked server is set up, you don't need this script anymore.

This setup was easy. The most painful part was to set up a working query.

SELECT * FROM OpenQuery ( 
  ADSI,  
  'SELECT cn, samaccountname, mail, mobile, telephonenumber, sn, givenname, co, company
  FROM ''LDAP://DC=company,DC=domain,DC=controller'' 
  WHERE objectClass = ''User'' and co = ''Switzerland''
  ') AS tblADSI
  WHERE mail IS NOT NULL AND (telephonenumber IS NOT NULL OR mobile IS NOT NULL)
ORDER BY cn

Any error in the query resulted in a generic error message, which only told me that there was a problem building this query. Not really helpful.

It took me two hours to find the right LDAP connection string and some more hours to find the right properties for the query.

The other painful thing is the conditions, because the WHERE clause outside the OpenQuery can't simply be run inside the OpenQuery. Don't ask me why. My idea was to limit the result set completely with the query inside the OpenQuery, but I was only able to limit it to the objectClass "User" and to the country. Also, the AD needs to be maintained in a proper way: e.g. the field company didn't return the company (which should be the same across the entire company) but the company units.

BTW: the column order in the result set is completely reversed compared to the order defined in the query.

Later, I could limit the result set to existing emails (to find out whether this is a real user) and existing telephone numbers.

The rest is easy: wrap that query in a stored procedure, iterate through all of the users, find the related ones in the database (previously imported from SAP) and update the telephone numbers.
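Such a stored procedure could be sketched like this (the table and column names are assumptions; the OpenQuery is the one from above):

```sql
CREATE PROCEDURE dbo.UpdatePhoneNumbersFromAd
AS
BEGIN
  -- join the AD result set to the previously imported users by email
  -- and take over the phone numbers maintained in the AD
  UPDATE u
  SET u.Phone = ad.telephonenumber,
      u.Mobile = ad.mobile
  FROM dbo.Users u
  INNER JOIN OpenQuery(ADSI,
    'SELECT mail, mobile, telephonenumber
     FROM ''LDAP://DC=company,DC=domain,DC=controller''
     WHERE objectClass = ''User'' and co = ''Switzerland''') AS ad
    ON u.Email = ad.mail
  WHERE ad.mail IS NOT NULL
    AND (ad.telephonenumber IS NOT NULL OR ad.mobile IS NOT NULL);
END
```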

Creating an email form with ASP.NET Core Razor Pages

Wednesday, July 26, 2017 12:00 AM

In the comments of my last post, I was asked to write about how to create an email form using ASP.NET Core Razor Pages. The reader also asked for a tutorial about authentication and authorization. I'll write about that in one of the next posts. This post is just about creating a form and sending an email with the form values.

Creating a new project

To try this out, you need to have the latest preview of Visual Studio 2017 installed (I use 15.3.0 preview 3), and you need the .NET Core 2.0 preview installed (2.0.0-preview2-006497 in my case).

In Visual Studio 2017, use "File... New Project" to create a new project. Navigate to ".NET Core", choose the "ASP.NET Core Web Application (.NET Core)" project and choose a name and a location for the new project.

In the next dialogue, you probably need to switch to ASP.NET Core 2.0 to see all the new available project types. (I will write about the other ones in the next posts.) Select the "Web Application (Razor Pages)" project and press "OK".

That's it. The new ASP.NET Core Razor Pages project is created.

Creating the form

It makes sense to use the contact.cshtml page to add the new contact form. The contact.cshtml.cs is the PageModel to work with. Inside this file, I added a small class called ContactFormModel. This class will contain the form values after the post request was sent.

public class ContactFormModel
{
  [Required]
  public string Name { get; set; }
  [Required]
  public string LastName { get; set; }
  [Required]
  public string Email { get; set; }
  [Required]
  public string Message { get; set; }
}

To use this class, we need to add a property of this type to the ContactModel:

[BindProperty]
public ContactFormModel Contact { get; set; }

This attribute does some magic. It automatically binds the ContactFormModel to the view and contains the data after the post was sent back to the server. It is actually the MVC model binding, but provided in a different way. If we have the regular model binding, we should also have a ModelState. And we actually do:

public async Task<IActionResult> OnPostAsync()
{
  if (!ModelState.IsValid)
  {
    return Page();
  }

  // create and send the mail here

  return RedirectToPage("Index");
}

This is an async OnPost method, which looks pretty much the same as a controller action. This returns a Task of IActionResult, checks the ModelState and so on.

Let's create the HTML form for this code in the contact.cshtml. I use bootstrap (just because it's available) to format the form, so the HTML code contains some overhead:

Contact us
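The form markup didn't survive the archiving of this post. A minimal sketch of what such a form could look like, using tag helpers and bootstrap classes with the bound Contact property from above (the exact markup here is my assumption, not the original):

```html
<form method="post" class="form-horizontal">
  <div class="form-group">
    <label asp-for="Contact.Name" class="col-md-2 control-label"></label>
    <div class="col-md-10">
      <input asp-for="Contact.Name" class="form-control" />
      <span asp-validation-for="Contact.Name" class="text-danger"></span>
    </div>
  </div>
  <!-- repeat the same pattern for Contact.LastName, Contact.Email
       and Contact.Message (a textarea) -->
  <div class="form-group">
    <div class="col-md-offset-2 col-md-10">
      <button type="submit" class="btn btn-default">Send</button>
    </div>
  </div>
</form>
```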

This also looks pretty much the same as in common ASP.NET Core MVC views. There's no difference.

BTW: I'm still impressed by the tag helpers. These guys even make writing and formatting the markup a lot easier.

Accessing the form data

As I wrote some lines above, the model binding works for you. It fills the property Contact with data and makes it available in the OnPostAsync() method, if the BindProperty attribute is set.

[BindProperty]
public ContactFormModel Contact { get; set; }

Actually, I expected to have a model passed as an argument to the OnPost method, as I had seen it the first time. But you are able to use the property directly, with nothing else to do:

var mailbody = $@"Hello website owner,

This is a new contact request from your website:

Name: {Contact.Name}
LastName: {Contact.LastName}
Email: {Contact.Email}
Message: ""{Contact.Message}""


Cheers,
The website's contact form";

SendMail(mailbody);

That's nice, isn't it?

Sending the emails

Thanks to the pretty awesome .NET Standard 2.0 and the new APIs available in .NET Core 2.0, it gets even nicer:

// irony on

Finally in .NET Core 2.0, it is now possible to send emails directly to an SMTP server using the famous and pretty well known System.Net.Mail.SmtpClient():

private void SendMail(string mailbody)
{
  using (var message = new MailMessage(Contact.Email, "me@mydomain.com"))
  {
    message.To.Add(new MailAddress("me@mydomain.com"));
    message.From = new MailAddress(Contact.Email);
    message.Subject = "New E-Mail from my website";
    message.Body = mailbody;

    using (var smtpClient = new SmtpClient("mail.mydomain.com"))
    {
      smtpClient.Send(message);
    }
  }
}

Isn't that cool?

// irony off

It definitely works and this is actually a good thing.
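In a real environment, the SMTP server will usually require authentication and TLS. A hedged sketch of the additional setup (host, port and credentials are placeholders):

```csharp
using (var smtpClient = new SmtpClient("mail.mydomain.com"))
{
  // port 587 is the common submission port; adjust to your server
  smtpClient.Port = 587;
  smtpClient.EnableSsl = true;
  smtpClient.Credentials = new System.Net.NetworkCredential("user", "password");
  smtpClient.Send(message);
}
```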

In previous .NET Core versions, it was recommended to use an external mail delivery service like SendGrid. These kinds of services usually provide a REST based API which can be used to communicate with that specific service. Some of them also provide various client libraries for the different platforms and languages that wrap those APIs and make them easier to use.

Anyway, I'm a huge fan of such services, because they are easier to use and I don't need to handle message details like encoding. I don't need to care about SMTP hosts and ports, because it is all HTTPS. I don't really need to care as much about spam handling, because this is done by the service. Using such services, I just need to configure the sender mail address and maybe a domain, but the DNS settings are done by them.

SendGrid can be bought via the Azure Marketplace and includes a huge number of free emails to send. I would propose to use such services whenever possible. The SmtpClient is good in enterprise environments where you don't need to go through the internet to send mails. But maybe the Exchange API is another, or better, option in enterprise environments.
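As an illustration only (the API key, the addresses and the exact client calls are assumptions based on SendGrid's C# client, not part of the original post), sending the same mail via SendGrid could look roughly like this:

```csharp
// requires the SendGrid NuGet package
var client = new SendGridClient("YOUR_API_KEY");
var msg = MailHelper.CreateSingleEmail(
  from: new EmailAddress("noreply@mydomain.com"),
  to: new EmailAddress("me@mydomain.com"),
  subject: "New E-Mail from my website",
  plainTextContent: mailbody,
  htmlContent: null);
// the service talks HTTPS, so there is no SMTP host or port to configure
var response = await client.SendEmailAsync(msg);
```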

Conclusion

The email form is working, and it is actually not much code written by myself. That's awesome. For such scenarios, the Razor Pages are pretty cool and easy to use. There's no Controller to set up, the views and the PageModels are close together, and the code to generate one page is not distributed over three different folders as in MVC. To create bigger applications, MVC is for sure the best choice, but I really like the possibility to keep small apps as simple as possible.

New Visual Studio Web Application: The ASP.NET Core Razor Pages

Monday, July 24, 2017 12:00 AM

I think, everyone who followed the last couple of ASP.NET Community Standup session heard about Razor Pages. Did you try the Razor Pages? I didn't. I focused completely on ASP.NET Core MVC and Web API. With this post I'm going to have a first look into it. I'm going to try it out.

I was also a little bit skeptical about it and compared it to the ASP.NET Web Site project. That was definitely wrong.

You need to have the latest preview of Visual Studio 2017 installed on your machine, because the Razor Pages came with the ASP.NET Core 2.0 preview. They are based on ASP.NET Core and part of the MVC framework.

Creating a Razor Pages project

Using Visual Studio 2017, I used "File... New Project" to create a new project. I navigated to ".NET Core", chose the "ASP.NET Core Web Application (.NET Core)" project and chose a name and a location for that project.

In the next dialogue, I needed to switch to ASP.NET Core 2.0 to see all the new available project types. (I will write about the other ones in the next posts.) I selected the "Web Application (Razor Pages)" project and pressed "OK".

Program.cs and Startup.cs

If you are already familiar with ASP.NET Core projects, you'll find nothing new in the Program.cs and the Startup.cs. Both files look pretty much the same.

public class Program
{
  public static void Main(string[] args)
  {
    BuildWebHost(args).Run();
  }

  public static IWebHost BuildWebHost(string[] args) =>
    WebHost.CreateDefaultBuilder(args)
    .UseStartup<Startup>()
    .Build();
}

The Startup.cs has a services.AddMvc() and an app.UseMvc() with a configured route:

public void Configure(IApplicationBuilder app, IHostingEnvironment env)
{
  if (env.IsDevelopment())
  {
    app.UseDeveloperExceptionPage();
    app.UseBrowserLink();
  }
  else
  {
    app.UseExceptionHandler("/Error");
  }

  app.UseStaticFiles();

  app.UseMvc(routes =>
  {
    routes.MapRoute(
      name: "default",
      template: "{controller=Home}/{action=Index}/{id?}");
  });
}

That means the Razor Pages are actually part of the MVC framework, as Damien Edwards always said in the Community Standups.

The solution

But the solution looks a little different. Instead of a Views and a Controllers folder, there is only a Pages folder with the razor files in it. There are even familiar files: the _Layout.cshtml, _ViewImports.cshtml and _ViewStart.cshtml.

Within the _ViewImports.cshtml we also have the import of the default TagHelpers:

@namespace RazorPages.Pages
@addTagHelper *, Microsoft.AspNetCore.Mvc.TagHelpers

This makes sense, since the Razor Pages are part of the MVC Framework.

We also have the standard pages of every new ASP.NET project: Home, Contact and About. (I'm going to have a look at these files later on.)

As every new web project in Visual Studio, this project type is also ready to run. Pressing F5 starts the web application and opens the URL in the browser:

Frontend

For the frontend dependencies, Bower is used. It will put all the stuff into the wwwroot folder. So even this works the same way as in MVC. Custom CSS and custom JavaScript are in the css and the js folders under wwwroot. This should all be familiar to ASP.NET Core developers.

Also the way the resources are referenced in the _Layout.cshtml is the same.

Welcome back "Code Behind"

This was my first thought for just a second, when I saw that there are nested files under the Index, About, Contact and Error pages. At first glimpse these files look almost like the code-behind files of Web Forms based ASP.NET pages, but they are completely different:

public class ContactModel : PageModel
{
    public string Message { get; set; }

    public void OnGet()
    {
        Message = "Your contact page.";
    }
}

They are not called Page, but Model, and they have something like a handler in them, to do something on a specific action. Actually it is not a handler, it is a method which gets automatically invoked if it exists. This is a lot better than the pretty old Web Forms concept. The base class PageModel just provides access to some properties like the HttpContext, Request, Response, User, RouteData, ModelState, ViewData and so on. It also provides methods to redirect to other pages, to respond with specific HTTP status codes and to sign in and sign out. This is pretty much it.

The method OnGet allows us to access the page via a GET request. OnPost does the same for POST. Guess what OnPut does ;)

Do you remember Web Forms? There's no need anymore to ask whether the current request is a GET or a POST request. There's a single decoupled method per HTTP method. This is really nice.
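To illustrate that convention, here is a tiny reflection-based sketch. This is explicitly NOT the real framework code, and the HandlerDispatcher name is made up; it only shows the idea that a method named On{HttpMethod} gets invoked if it exists:

```csharp
using System;

// NOTE: a conceptual sketch, not the actual Razor Pages implementation.
// For an incoming HTTP method, look for a matching "On{Method}" handler
// on the page model and invoke it if it exists.
public class ContactModel
{
    public string Message { get; set; }

    public void OnGet() => Message = "Your contact page.";
    public void OnPost() => Message = "Form posted.";
}

public static class HandlerDispatcher
{
    public static bool TryInvoke(object pageModel, string httpMethod)
    {
        // "GET" -> "OnGet", "POST" -> "OnPost", "PUT" -> "OnPut", ...
        var name = "On" + httpMethod[0] + httpMethod.Substring(1).ToLowerInvariant();
        var handler = pageModel.GetType().GetMethod(name);
        if (handler == null)
        {
            return false; // no handler for this HTTP method
        }
        handler.Invoke(pageModel, null);
        return true;
    }
}
```

The real framework does a lot more (model binding, named handlers, async variants), but the convention itself is this simple.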

On our Contact page, inside the method OnGet the message "Your contact page." will be set. This message gets displayed on the specific Contact page:

@page
@model ContactModel
@{
    ViewData["Title"] = "Contact";
}

@ViewData["Title"].

@Model.Message

As you can see in the razor page, the PageModel is actually the model of that view, which gets passed to the view the same way as in ASP.NET MVC. The only difference is that there's no action to write code for; something else invokes the OnGet method in the PageModel.

Conclusion

This is just a pretty fast first look, but the Razor Pages seem to be pretty cool for small and low-budget projects, e.g. for promotional micro sites with less dynamic content. There's less code to write and fewer things to ramp up, no controllers and actions to think about. This makes it pretty easy to quickly start a project or to prototype some features.

But there's no limitation to just small projects. It is a real ASP.NET Core application, which gets compiled and can easily use additional libraries. Even dependency injection is available in ASP.NET Core Razor Pages. That means it is possible to let the application grow.
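Constructor injection into a PageModel looks the same as in an MVC controller. Here is a minimal sketch: the IMailService interface and its in-memory implementation are made up for this example, and PageModel is stubbed so the snippet compiles standalone (in a real project it comes from Microsoft.AspNetCore.Mvc.RazorPages):

```csharp
using System;
using System.Collections.Generic;

// Stub, only so this sketch is self-contained; the real base class
// lives in Microsoft.AspNetCore.Mvc.RazorPages.
public abstract class PageModel { }

// A made-up service interface for this example.
public interface IMailService
{
    void Send(string to, string body);
}

// Trivial implementation that just records the sent mails.
public class InMemoryMailService : IMailService
{
    public List<string> Sent { get; } = new List<string>();
    public void Send(string to, string body) => Sent.Add(to + ": " + body);
}

public class ContactFormModel : PageModel
{
    private readonly IMailService _mailService;

    // Resolved from the DI container, exactly like in an MVC controller.
    public ContactFormModel(IMailService mailService)
    {
        _mailService = mailService;
    }

    public void OnPost()
    {
        _mailService.Send("someone@example.com", "Hello from a Razor Page");
    }
}
```

The service itself would be registered in the ConfigureServices method of the Startup.cs as usual, e.g. with services.AddTransient<IMailService, InMemoryMailService>().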

It is even possible to use the MVC concept in parallel by adding the Controllers and the Views folders to that application. You can also share the _Layout.cshtml between both the Razor Pages and the MVC views, just by telling the _ViewStart.cshtml where the _Layout.cshtml is.

Don't believe me? Try this out: https://github.com/juergengutsch/razor-pages-demo/

I'm pretty sure I'll use it in some of my real projects in the future.

LightCore is back - LightCore 2.0

Monday, July 17, 2017 12:00 AM

Until now it was pretty hard to move an existing, more complex .NET Framework library to .NET Core or .NET Standard 1.x. More complex in my case just means, e.g., that this specific library uses reflection a little more than maybe some others. I'm talking about the LightCore DI container.

I started to move it to .NET Core in November 2015, but gave up half a year later, because it would have needed much more time to port it to .NET Core and because it makes much more sense to port it to the .NET Standard than to .NET Core. The announcement of .NET Standard 2.0 made me much more optimistic about getting it done with pretty little effort. So I stopped moving it to .NET Core and .NET Standard 1.x and waited for .NET Standard 2.0.

LightCore is back

Some weeks ago the preview version of .NET Standard 2.0 was announced and I tried it again. It works as expected. The API of .NET Standard 2.0 is big enough to get the old source of LightCore running. Rick Strahl also wrote a pretty cool and detailed post about it:

The current status

Roadmap

I don't really want to release the new version until .NET Standard 2.0 is released, because I don't want to have the preview versions of the .NET Standard libraries referenced in a final version of LightCore. That means there is still some time to get the open issues done.

Open issues

The progress is good, but there is still something to do, before the release of the first preview:

Contributions

I'd like to call for contributions. Try LightCore, raise issues, create PRs and do whatever is needed to get LightCore back to life and to make it a lightweight, fast and easy to use DI container.

Three times in a row

Monday, July 10, 2017 12:00 AM

On July 1st, I got the email from the Global MVP Administrator. I got the MVP award the third time in a row :)

I'm pretty proud about that and I'm happy to be part of the great MVP community one year more. I'm also looking forward to the Global MVP Summit in March to meet all the other MVPs from around the world.

Not really a fan-boy...?

I'm also proud of that because I wouldn't really call myself a Microsoft fan-boy. Sometimes I even criticize some tools and platforms built by Microsoft (and feel like a bad boy). But I like most of the development tools built by Microsoft, I like to use those tools and frameworks, and I really like the new and open Microsoft: the way Microsoft now supports more than its own technologies and platforms. I like using VSCode, TypeScript and Webpack to create NodeJS applications. I like VSCode and .NET Core on Linux to build applications on a platform other than Windows. I also like to play around with UWP apps on Windows 10 IoT on a Raspberry Pi.

There are many more possibilities, many more platforms and many more customers to reach using the current Microsoft development stack. And it is really fun to play with it, to use it in real projects, to write about it in .NET magazines and in this blog, and to talk about it in user groups and at conferences.

Thanks

But I wouldn't have been honored again without such a great development community, and I wouldn't continue to contribute to the community without that positive feedback and those great people. This is why the biggest "Thank You" goes to the development community :)

Sure, I also need to say "Thank You" to my great family (my lovely wife and my three kids), who support me in spending so much time contributing to the community. I also need to say thanks to my company and my boss for supporting me and allowing me to use part of my working time to contribute to the community.

GraphQL end-point Middleware for ASP.NET Core

Thursday, June 22, 2017 12:00 AM

The feedback about my last blog post about the GraphQL end-point in ASP.NET Core was amazing. That post was mentioned on Reddit, shared many times on Twitter, linked on http://asp.net and - I'm pretty glad about that - it was mentioned in the ASP.NET Community Standup.

Because of that, and because GraphQL is really awesome, I decided to make the GraphQL middleware available as a NuGet package. I did some small improvements to make this middleware more configurable and easier to use in the Startup.cs.

NuGet

Currently the package is a prerelease version. That means you need to enable loading preview versions of NuGet packages:

Install via Package Manager Console:

PM> Install-Package GraphQl.AspNetCore -Pre

Install via dotnet CLI:

dotnet add package GraphQl.AspNetCore --version 1.0.0-preview1

Using the library

You still need to configure your GraphQL schema using the graphql-dotnet library, as described in my last post. If this is done open your Startup.cs and add an using to the GraphQl.AspNetCore library:

using GraphQl.AspNetCore;

You can use two different ways to register the GraphQl Middleware:

app.UseGraphQl(new GraphQlMiddlewareOptions
{
  GraphApiUrl = "/graph", // default
  RootGraphType = new BooksQuery(bookRepository),
  FormatOutput = true // default: false
});
app.UseGraphQl(options =>
{
  options.GraphApiUrl = "/graph-api";
  options.RootGraphType = new BooksQuery(bookRepository);
  options.FormatOutput = false; // default
});

Personally I prefer the second way, which is more readable in my opinion.

The root graph type needs to be passed to the GraphQlMiddlewareOptions object. Depending on the implementation of your root graph type, you may need to inject the data repository, an Entity Framework DbContext, or whatever you want to use to access your data. In this case I reuse the IBookRepository of the last post and pass it to the BooksQuery, which is my root graph type.

I registered the repository like this:

services.AddSingleton<IBookRepository, BookRepository>();

and needed to inject it into the Configure method:

public void Configure(
  IApplicationBuilder app,
  IHostingEnvironment env,
  ILoggerFactory loggerFactory,
  IBookRepository bookRepository)
{
  // ...
}

Another valid option is to also add the BooksQuery to the dependency injection container and inject it to the Configure method.

Options

The GraphQlMiddlewareOptions are pretty simple. Currently there are only three properties to configure: GraphApiUrl, RootGraphType and FormatOutput.

This should be enough for the first time. If needed, it is possible to expose the Newtonsoft.Json settings, which are used in the GraphQL library, later on.

One more thing

I would be happy if you tried this library and gave me some feedback about it. A demo application to quickly start playing around with it is available on GitHub. Feel free to raise issues and to create PRs on GitHub to improve this middleware.

A first glimpse into .NET Core 2.0 Preview 1 and ASP.​NET Core 2.0.0 Preview 1

Tuesday, May 30, 2017 12:00 AM

At the Build 2017 conference Microsoft announced the preview 1 versions of .NET Core 2.0, .NET Standard 2.0 and ASP.NET Core 2.0. I recently had a quick look into them and want to show you a little bit about them in this post.

.NET Core 2.0 Preview 1

Rich Lander (Program Manager at Microsoft) wrote about the release of preview 1, .NET Standard 2.0 and tools support in this post: Announcing .NET Core 2.0 Preview 1. It is important to read the first part about the requirements carefully, especially the requirement of the Visual Studio 2017 15.3 Preview. At first I was wondering about the requirement to install a preview version of Visual Studio 2017, because I had already installed the final version a few months ago. But the detail is in the numbers: the final version of Visual Studio 2017 is 15.2. The new tooling for the .NET Core 2.0 preview is in 15.3, which is currently in preview.

So if you want to use .NET Core 2.0 Preview 1 with Visual Studio 2017, you need to install the 15.3 preview.

The good thing is, the preview can be installed side by side with the current final version of Visual Studio 2017. It doesn't double the usage of disk space, because both versions are able to share some SDKs, e.g. the Windows SDK. But you need to install the add-ins you want to use for this version separately.

After Visual Studio, you need to install the new .NET Core SDK, which also installs .NET Core 2.0 Preview 1 and the .NET CLI.

The .NET CLI

After the new version of .NET Core is installed, type dotnet --version in a command prompt. It will show you the version of the currently used .NET SDK:

Wait. I installed a preview 1 version and this is now the default on the entire machine? Yes.

The CLI uses the latest installed SDK on the machine by default. But you are able to run different .NET Core SDKs side by side anyway. To see which versions are installed on your machine, type dotnet --info in a command prompt, copy the first part of the base path and paste it into a new Explorer window:

You are able to use all of them if you want to.

This is possible by adding a "global.json" to your solution folder. This is a pretty small file which defines the SDK version you want to use:

{
  "projects": [ "src", "test" ],
  "sdk": {
    "version": "1.0.4"
  }
}

Inside the folder "C:\git\dotnetcore", I added two different folders: "v104" should use the current final version 1.0.4 and "v200" should use the preview 1 of 2.0.0. To get it working I just need to put the "global.json" into the "v104" folder:

The SDK

Now I want to have a look into the new SDK. The first thing I do after installing a new version is to type dotnet --help in a command prompt. The first-level help doesn't contain any surprises, just the version number differs. The most interesting difference is visible by typing dotnet new --help: we get a new template to add an ASP.NET Core web app based on Razor Pages. We also get the possibility to just add single files, like a razor page, a "NuGet.config" or a "Web.config". This is pretty nice.

I also played around with the SDK by creating a new console app. I typed dotnet new console -n consoleapp:

As you can see in the screenshot, dotnet new will directly download the NuGet packages from the package source. It runs dotnet restore for you. It is not a super cool feature, but it's good to know if you get some NuGet restore errors while creating a new app.

When I opened the "consoleapp.csproj", I saw the expected TargetFramework "netcoreapp2.0":



  
<Project Sdk="Microsoft.NET.Sdk">

  <PropertyGroup>
    <OutputType>Exe</OutputType>
    <TargetFramework>netcoreapp2.0</TargetFramework>
  </PropertyGroup>

</Project>
  


This is the only difference between the 2.0.0 preview 1 and the 1.0.4.

A lot more changes were done in ASP.NET Core. Let's have a quick look at those too:

ASP.NET Core 2.0 Preview 1

Also for ASP.NET Core 2.0 Preview 1, Jeffrey T. Fritz (Program Manager for ASP.NET) wrote a pretty detailed announcement post on the webdev blog: Announcing ASP.NET Core 2.0.0-Preview1 and Updates for .NET Web Developers.

To create a new ASP.NET Core web app, I need to type dotnet new mvc -n webapp in a command prompt window. This command immediately creates the web app and starts downloading the needed packages:

Let's see what changed, starting with the "Program.cs":

public class Program
{
  public static void Main(string[] args)
  {
    BuildWebHost(args).Run();
  }

  public static IWebHost BuildWebHost(string[] args) =>
    WebHost.CreateDefaultBuilder(args)
      .UseStartup<Startup>()
      .Build();
}

The first thing I noticed is the encapsulation of the code that creates and configures the WebHostBuilder. In the previous versions it was all in the static void Main. Now there's no instantiation of the WebHostBuilder anymore; it is hidden in the .CreateDefaultBuilder() method. This looks a little cleaner, but it also hides the configuration from the developer. It is still possible to use the old way to configure the WebHostBuilder, but this wrapper does a little more than the old configuration: this method also wraps the configuration of the ConfigurationBuilder and the LoggerFactory. The default configurations were moved from the "Startup.cs" to the .CreateDefaultBuilder(). Let's have a look into the "Startup.cs":

public class Startup
{
  public Startup(IConfiguration configuration)
  {
    Configuration = configuration;
  }

  public IConfiguration Configuration { get; }

  // This method gets called by the runtime. Use this method to add services to the container.
  public void ConfigureServices(IServiceCollection services)
  {
    services.AddMvc();
  }

  // This method gets called by the runtime. Use this method to configure the HTTP request pipeline.
  public void Configure(IApplicationBuilder app, IHostingEnvironment env)
  {
    if (env.IsDevelopment())
    {
      app.UseDeveloperExceptionPage();
    }
    else
    {
      app.UseExceptionHandler("/Home/Error");
    }

    app.UseStaticFiles();

    app.UseMvc(routes =>
               {
                 routes.MapRoute(
                   name: "default",
                   template: "{controller=Home}/{action=Index}/{id?}");
               });
  }
}

Even this file is much cleaner now.

But if you now want to customize the configuration, the logging and the other stuff, you need to replace the .CreateDefaultBuilder() with the previous style of bootstrapping the application, or you need to extend the WebHostBuilder returned by this method. You could have a look into the sources of the WebHost class in the ASP.NET repository on GitHub (around line 150) to see how this is done inside .CreateDefaultBuilder(). The code of that method looks pretty familiar to anyone who has used the previous version.

BTW: BrowserLink was removed from the templates of this preview version, which is good from my perspective, because it caused an error while starting up the applications.

Result

This was just a first short glimpse into .NET Core 2.0 Preview 1. I need some more time to play around with it and to learn a little more about the upcoming changes. For sure I need to rewrite my post about custom logging a little bit :)

BTW: Last week, I created a 45-minute video about it in German. It is not a video with good quality; it is quite bad. I just wanted to test a new microphone and Camtasia Studio, and I chose ".NET Core 2.0 Preview 1" as the topic to present. Even if it has awful quality, maybe it is useful to some of my German-speaking readers anyway. :)

I'll come up with some more .NET Core 2.0 topics within the next months.

Exploring GraphQL and creating a GraphQL endpoint in ASP.NET Core

Monday, May 29, 2017 12:00 AM

A few weeks ago, I found some time to have a look at GraphQL and even at the .NET implementation of GraphQL. It is pretty amazing to see it in action, and it is easier than expected to create a GraphQL endpoint in ASP.NET Core. In this post I'm going to show you how it works.

The Graph Query Language

GraphQL was invented by Facebook in 2012 and released to the public in 2015. It is a query language that tells the API exactly what data you want to have. This is the difference to REST, where you need to query different resources/URIs to get different data. In GraphQL there is one single point of access for the data you want to retrieve.

That also makes the planning of the API a little more complex. You need to think about what data you want to provide and about how you want to provide that data.

While playing around with it, I created a small book database. The idea is to provide data about books and authors.

Let's have a look into a few examples. The query to get the ISBN and the name of a specific book looks like this:

{
  book(isbn: "822-5-315140-65-3"){
    isbn,
    name
  }
}

This looks similar to JSON, but it isn't. The property names are not set in quotes, which means it is not really JavaScript Object Notation. This query needs to be sent inside the body of a POST request to the server.
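From a C# client's point of view, such a query is just a plain string that goes straight into the POST body. Here is a hypothetical helper (the class and method names are made up for this example, not part of any official client library) that builds the query shown above:

```csharp
using System;

// A made-up helper, only to show that a GraphQL query is plain text.
public static class GraphQlQueries
{
    public static string BookByIsbn(string isbn)
    {
        if (string.IsNullOrWhiteSpace(isbn))
        {
            throw new ArgumentException("An ISBN is required.", nameof(isbn));
        }

        // Same shape as the query above - not JSON, just the raw query text.
        return "{ book(isbn: \"" + isbn + "\") { isbn, name } }";
    }
}
```

Sending it could then be a single HttpClient call, e.g. httpClient.PostAsync("/graph", new StringContent(GraphQlQueries.BookByIsbn("822-5-315140-65-3"))), assuming the "/graph" path used later in this post.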

The query gets parsed and executed against a data source on the server, and the server sends the result back to the client:

{
  "data": {
    "book": {
      "isbn": "822-5-315140-65-3",
      "name": "ultrices enim mauris parturient a"
    }
  }
}

If we want to know something about the author, we need to ask about it:

{
  book(isbn: "822-5-315140-65-3"){
    isbn,
    name,
    author{
      id,
      name,
      birthdate
    }
  }
}

This is the possible result:

{
  "data": {
    "book": {
      "isbn": "822-5-315140-65-3",
      "name": "ultrices enim mauris parturient a",
      "author": {
        "id": 71,
        "name": "Henderson",
        "birthdate": "1937-03-20T06:58:44Z"
      }
    }
  }
}

You need a list of books, including the authors? Just ask for it:

{
  books{
    isbn,
    name,
    author{
      id,
      name,
      birthdate
    }
  }
}

The list is too large? Just limit the result to get only 20 items:

{
  books(limit: 20) {
    isbn,
    name,
    author{
      id,
      name,
      birthdate
    }
  }
}

Isn't that nice?

To learn more about GraphQL and the specifications, visit http://graphql.org/

The Book Database

The book database is just fake data. I love to use GenFu to generate dummy data, so I did the same for the books and the authors and created a BookRepository:

public class BookRepository : IBookRepository
{
  private IEnumerable<Book> _books = new List<Book>();
  private IEnumerable<Author> _authors = new List<Author>();

  public BookRepository()
  {
    GenFu.GenFu.Configure<Author>()
      .Fill(_ => _.Name).AsLastName()
      .Fill(_=>_.Birthdate).AsPastDate();
    _authors = A.ListOf<Author>(40);

    GenFu.GenFu.Configure<Book>()
      .Fill(p => p.Isbn).AsISBN()
      .Fill(p => p.Name).AsLoremIpsumWords(5)
      .Fill(p => p.Author).WithRandom(_authors);
    _books = A.ListOf<Book>(100);
  }

  public IEnumerable<Author> AllAuthors()
  {
    return _authors;
  }

  public IEnumerable<Book> AllBooks()
  {
    return _books;
  }

  public Author AuthorById(int id)
  {
    return _authors.First(_ => _.Id == id);
  }

  public Book BookByIsbn(string isbn)
  {
    return _books.First(_ => _.Isbn == isbn);
  }
}

public static class StringFillerExtensions
{
  public static GenFuConfigurator<T> AsISBN<T>(
    this GenFuStringConfigurator<T> configurator) where T : new()
  {
    var filler = new CustomFiller<string>(
      configurator.PropertyInfo.Name, 
      typeof(T), 
      () =>
      {
        return MakeIsbn();
      });
    configurator.Maggie.RegisterFiller(filler);
    return configurator;
  }
  
  public static string MakeIsbn()
  {
    // 978-1-933988-27-6
    var a = A.Random.Next(100, 999);
    var b = A.Random.Next(1, 9);
    var c = A.Random.Next(100000, 999999);
    var d = A.Random.Next(10, 99);
    var e = A.Random.Next(1, 9);
    return $"{a}-{b}-{c}-{d}-{e}";
  }
}

GenFu provides a useful set of so-called fillers to generate data randomly. There are fillers to generate URLs, emails, names, last names, states of the US and Canada and so on. I also needed an ISBN generator, so I created one by extending the generic GenFuStringConfigurator.

The BookRepository is registered as a singleton in the Dependency Injection container, to work with the same set of data while the application is running. You are able to add some more information to that repository, like publishers and so on.

GraphQL in ASP.NET Core

Fortunately there is a .NET Standard compatible implementation of GraphQL on GitHub, so there's no need to parse the queries by yourself. This library is also available as a NuGet package:


The examples provided on GitHub are pretty easy. They directly write the result to the output, which means the entire ASP.NET application is a GraphQL server. But I want to add GraphQL as an ASP.NET Core middleware, to make the GraphQL implementation a separate part of the application. Like this you are able to use REST based POST and PUT requests to add or update the data, and to use GraphQL to query the data.

I also want the middleware to listen on the sub-path "/graph":

public class GraphQlMiddleware
{
  private readonly RequestDelegate _next;
  private readonly IBookRepository _bookRepository;

  public GraphQlMiddleware(RequestDelegate next, IBookRepository bookRepository)
  {
    _next = next;
    _bookRepository = bookRepository;
  }

  public async Task Invoke(HttpContext httpContext)
  {
    var sent = false;
    if (httpContext.Request.Path.StartsWithSegments("/graph"))
    {
      using (var sr = new StreamReader(httpContext.Request.Body))
      {
        var query = await sr.ReadToEndAsync();
        if (!String.IsNullOrWhiteSpace(query))
        {
          var schema = new Schema { Query = new BooksQuery(_bookRepository) };
          var result = await new DocumentExecuter()
            .ExecuteAsync(options =>
                          {
                            options.Schema = schema;
                            options.Query = query;
                          }).ConfigureAwait(false);
          CheckForErrors(result);
          await WriteResult(httpContext, result);
          sent = true;
        }
      }
    }
    if (!sent)
    {
      await _next(httpContext);
    }
  }

  private async Task WriteResult(HttpContext httpContext, ExecutionResult result)
  {
    var json = new DocumentWriter(indent: true).Write(result);
    httpContext.Response.StatusCode = 200;
    httpContext.Response.ContentType = "application/json";
    await httpContext.Response.WriteAsync(json);
  }

  private void CheckForErrors(ExecutionResult result)
  {
    if (result.Errors?.Count > 0)
    {
      var errors = new List<Exception>();
      foreach (var error in result.Errors)
      {
        var ex = new Exception(error.Message);
        if (error.InnerException != null)
        {
          ex = new Exception(error.Message, error.InnerException);
        }
        errors.Add(ex);
      }
      throw new AggregateException(errors);
    }
  }
}

public static class GraphQlMiddlewareExtensions
{
  public static IApplicationBuilder UseGraphQL(this IApplicationBuilder builder)
  {
    return builder.UseMiddleware<GraphQlMiddleware>();
  }
}

With this kind of middleware, I can extend my application's Startup.cs with GraphQL:

app.UseGraphQL();

As you can see, the BookRepository gets passed into this middleware via constructor injection. The most important part is this line:

var schema = new Schema { Query = new BooksQuery(_bookRepository) };

This is where we create the schema, which is used by the GraphQL engine to provide the data. The schema defines the structure of the data you want to provide. This is all done in a root type called BooksQuery. This type gets the BookRepository passed in.

This query is a GraphType, provided by the GraphQL library. You need to derive from an ObjectGraphType and configure the schema in the constructor:

public class BooksQuery : ObjectGraphType
{
  public BooksQuery(IBookRepository bookRepository)
  {
    Field<BookType>("book",
      arguments: new QueryArguments(
        new QueryArgument<StringGraphType>() { Name = "isbn" }),
      resolve: context =>
      {
        var id = context.GetArgument<string>("isbn");
        return bookRepository.BookByIsbn(id);
      });

    Field<ListGraphType<BookType>>("books",
      resolve: context =>
      {
        return bookRepository.AllBooks();
      });
  }
}

Using the GraphQL library, all types used in the query to define the schema are some kind of GraphType, even the BookType:

public class BookType : ObjectGraphType<Book>
{
  public BookType()
  {
    Field(x => x.Isbn).Description("The isbn of the book.");
    Field(x => x.Name).Description("The name of the book.");
    Field<AuthorType>("author");
  }
}

The difference is just the generic ObjectGraphType<Book>, which is also used for the AuthorType. The properties of the Book which are simple types, like the name or the ISBN, are mapped directly with a lambda. The complex typed properties, like the Author, are mapped via another generic ObjectGraphType, which is the AuthorType (an ObjectGraphType<Author>) in this case.
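The AuthorType itself is not shown in this post, but it follows the exact same pattern as the BookType. The sketch below assumes its fields from the queries at the beginning of this post (id, name, birthdate); the ObjectGraphType base class is stubbed minimally here so the sketch compiles standalone - in the real project it comes from the GraphQL library:

```csharp
using System;
using System.Collections.Generic;
using System.Linq.Expressions;

// Minimal stand-ins, only so this sketch is self-contained.
public class Author
{
    public int Id { get; set; }
    public string Name { get; set; }
    public DateTime Birthdate { get; set; }
}

// Stub of the library's base class: it just records which properties
// were registered as queryable fields.
public abstract class ObjectGraphType<T>
{
    public List<string> Fields { get; } = new List<string>();

    protected FieldBuilder Field<TProperty>(Expression<Func<T, TProperty>> expression)
    {
        // Register the property name as a queryable field.
        var name = ((MemberExpression)expression.Body).Member.Name;
        Fields.Add(name);
        return new FieldBuilder();
    }

    protected class FieldBuilder
    {
        public void Description(string text) { /* metadata only in this stub */ }
    }
}

// Assumed implementation of the AuthorType, mirroring the BookType above.
public class AuthorType : ObjectGraphType<Author>
{
    public AuthorType()
    {
        Field(x => x.Id).Description("The id of the author.");
        Field(x => x.Name).Description("The name of the author.");
        Field(x => x.Birthdate).Description("The birthdate of the author.");
    }
}
```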

Like this you need to create your schema, which can then be used to query the data.

Conclusion

If you want to play around with this demo, I pushed it to a repository on GitHub.

These are my first steps using GraphQL and I really like it. I think this is pretty useful and will reduce the effort on both the client side and the server side a lot. Even if the effort to create the schema is a lot more than creating just one Web API controller, you usually need to create a lot more than just one single Web API controller anyway.

This also reduces the amount of data transferred between the client and the server, because the client can load just the needed data and doesn't need to GET or POST all the unneeded stuff.

I think, I'll use it a lot more in the future projects.

What do you think?