Fast-growing architectures with the serverless framework and .NET Core

Serverless technologies provide a fast and independent way for developers to get implementations into production. The technique is becoming more popular in the enterprise stack every day, and it has been featured as a "trial" technique in the ThoughtWorks technology radar since 2017. The first part of this article covers some general notions about serverless computing. The second part shows how to build a .NET Core Lambda on AWS using the serverless framework.

Benefits of serverless computing

Serverless technology is part of the FaaS (function-as-a-service) family. These kinds of techniques have become popular with the adoption of cloud systems, and nowadays serverless implementations are often the preferred technology for solutions that involve cloud providers, both private and public.

By contrast, typical software services and systems perform operations by keeping a massive amount of data in memory and by writing batches of data to complex data sources.

Serverless, and FaaS technologies in general, are designed to keep our systems quick and reactive by serving many small requests and events as fast as possible. Serverless components are usually strongly coupled with the events provided by the cloud provider where they run: a notification, an event dispatched by a queue or an incoming request from an API gateway acts as a trigger for a small unit of computation contained in a serverless component. That is also why cloud providers’ pricing systems are typically based on the number of requests and the execution duration, rather than on reserved capacity.

Moreover, serverless components usually have some limitations on execution time. Like every technology, serverless is not suitable for every solution and system. Where it fits, though, it simplifies the life of software engineers: the Lambda deployment cycle is usually fast and, as developers, we can quickly get new features into production with a small amount of work. Furthermore, building components using serverless techniques means that the developer doesn’t need to care about scaling problems or failures, since the cloud provider takes care of those.

Finally, we should also consider that serverless functions are stateless. Consequently, each system built on top of this technique is more modular and loosely coupled.

Serverless pain points

None of this power and agility comes for free. First of all, serverless functions are executed in the cloud, and they are usually triggered by events that are strongly coupled with your cloud provider; as a consequence, debugging them is not easy. That’s a valid reason to keep their scope as small as possible, and to always separate the core logic of your function from the external components and events. Moreover, it is very important to cover serverless code with unit tests and integration tests.

Secondly, just like microservices architectures, which have a lot of services with a small focus, serverless components are hard to monitor, and specific problems are quite challenging to detect. Finally, it is hard to have a comprehensive view of the architecture and the dependencies between the different serverless components. For that reason, both cloud providers and third-party companies are investing a lot in all-in-one tools which provide both monitoring and system-analysis features.

Experiment using serverless computing

Nowadays more than ever, it is essential to quickly evolve services and architectures based on incoming business requests. Data-driven experiments are part of this process: before releasing a new feature, we should implement an MVP and test it on a restricted group of the customer base. If the results of the experiment are positive, it is worth investing in the MVP to transform it into a feature of our product.

Serverless computing provides a way to quickly evolve our architecture without caring about the infrastructure. Its light overhead delivers a way to implement disposable MVPs for experimenting with new features and functionality; furthermore, they can be plugged in and unplugged easily.

Implementing AWS Lambda using .NET Core

The following section covers a simple implementation of some AWS Lambda functions using .NET Core.

The example involves three key technologies:

  • AWS, the cloud provider which hosts our serverless feature;
  • The serverless framework, a very useful tool for getting our Lambdas into AWS. As a general-purpose framework, it is compatible with all the main cloud providers;
  • .NET Core, the open-source, cross-platform framework powered by Microsoft.

The example we are going to discuss is also present in the serverless GitHub repository at the following path: serverless/examples/aws-dotnet-rest-api-with-dynamodb. It is part of the template projects provided by the serverless framework.
The AWS Lambda project follows this feature schema:
[Feature schema: an HTTP request flows through the API Gateway to one of the three Lambda functions, which read from and write to a DynamoDB table.]

In summary, the feature implements some read/write operations on data. An HTTP request comes from the client through the API Gateway; the Lambda project defines three functions, GetItem, InsertItem and UpdateItem, and each of them performs an operation on a DynamoDB table.

Project structure

The solution we are going to implement has the following project structure:

  • the src/DotNetServerless.Application project contains the core logic executed by the serverless functions;
  • the src/DotNetServerless.Lambda project contains the entry points of the serverless functions and all the components tightly coupled with AWS;
  • the tests/DotNetServerless.Tests project contains the unit tests and integration tests of the serverless feature.

Domain project

Let’s start by analyzing the application layer. The core entity of the project is the Item class, which represents the entity stored in the DynamoDB table:

using Amazon.DynamoDBv2.DataModel;
namespace DotNetServerless.Application.Entity
{
  public class Item
  {
    [DynamoDBHashKey]
    public string Id { get; set; }
    [DynamoDBRangeKey]
    public string Code { get; set; }
    [DynamoDBProperty]
    public string Description { get; set; }
    [DynamoDBProperty]
    public bool IsChecked { get; set; }
  }
}

The entity fields are decorated with some attributes in order to map them to the DynamoDB store model. The Item entity is referenced by the IItemRepository interface, which defines the operations for storing data:

using System.Collections.Generic;
using System.Threading;
using System.Threading.Tasks;
using Amazon.DynamoDBv2;
using Amazon.DynamoDBv2.DataModel;
using Amazon.DynamoDBv2.DocumentModel;
using DotNetServerless.Application.Entities;
using DotNetServerless.Application.Infrastructure.Configs;
namespace DotNetServerless.Application.Infrastructure.Repositories
{
  public interface IItemRepository
  {
    Task<IEnumerable<T>> GetById<T>(string id, CancellationToken cancellationToken);
    Task Save(Item item, CancellationToken cancellationToken);
  }
  public class ItemDynamoRepository : IItemRepository
  {
    private readonly AmazonDynamoDBClient _client;
    private readonly DynamoDBOperationConfig _configuration;
    public ItemDynamoRepository(DynamoDbConfiguration configuration,
      IAwsClientFactory<AmazonDynamoDBClient> clientFactory)
    {
      _client = clientFactory.GetAwsClient();
      _configuration = new DynamoDBOperationConfig
      {
        OverrideTableName = configuration.TableName,
        SkipVersionCheck = true
      };
    }
    public async Task Save(Item item, CancellationToken cancellationToken)
    {
      using (var context = new DynamoDBContext(_client))
      {
        await context.SaveAsync(item, _configuration, cancellationToken);
      }
    }
    public async Task<IEnumerable<T>> GetById<T>(string id, CancellationToken cancellationToken)
    {
      var resultList = new List<T>();
      using (var context = new DynamoDBContext(_client))
      {
        var scanCondition = new ScanCondition(nameof(Item.Id), ScanOperator.Equal, id);
        var search = context.ScanAsync<T>(new[] {scanCondition}, _configuration);
        while (!search.IsDone)
        {
          var entities = await search.GetNextSetAsync(cancellationToken);
          resultList.AddRange(entities);
        }
      }
      return resultList;
    }
  }
}

The implementation of the IItemRepository defines two essential operations:

  • Save, which allows the consumer to insert and update the entity on DynamoDB;
  • GetById, which returns an object using the id of the DynamoDB record.

Finally, the front layer of the DotNetServerless.Application project is the handler part. The whole application project is based on the mediator pattern to guarantee loose coupling between the AWS functions and the core logic. Let’s take the definition of the CreateItemHandler as an example:

using System;
using System.Threading;
using System.Threading.Tasks;
using DotNetServerless.Application.Entities;
using DotNetServerless.Application.Infrastructure.Repositories;
using DotNetServerless.Application.Requests;
using MediatR;
namespace DotNetServerless.Application.Handlers
{
  public class CreateItemHandler : IRequestHandler<CreateItemRequest, Item>
  {
    private readonly IItemRepository _itemRepository;
    public CreateItemHandler(IItemRepository itemRepository)
    {
      _itemRepository = itemRepository;
    }
    public async Task<Item> Handle(CreateItemRequest request, CancellationToken cancellationToken)
    {
      var item = request.Map();
      item.Id = Guid.NewGuid().ToString();
      await _itemRepository.Save(item, cancellationToken);
      return item;
    }
  }
}

As you can see, the code is straightforward. The CreateItemHandler implements IRequestHandler and uses the built-in dependency injection to resolve the IItemRepository interface. The Handle method simply maps the incoming request to the Item entity and calls the Save method provided by the IItemRepository interface.
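The Map method used by the handler is not shown in the snippets above. A minimal sketch of what CreateItemRequest could look like follows; the property names are assumptions based on the Item entity, and the DynamoDB attributes are omitted to keep the sketch self-contained:

```csharp
// Trimmed copy of the Item entity shown earlier
// (DynamoDB attributes omitted to keep this sketch self-contained).
public class Item
{
    public string Id { get; set; }
    public string Code { get; set; }
    public string Description { get; set; }
    public bool IsChecked { get; set; }
}

// Hypothetical shape of CreateItemRequest. In the real project this class
// also implements MediatR's IRequest<Item>, so that IMediator.Send can
// route it to the CreateItemHandler.
public class CreateItemRequest
{
    public string Code { get; set; }
    public string Description { get; set; }
    public bool IsChecked { get; set; }

    // Map copies the request fields onto a new Item entity; the handler
    // then assigns the Id and persists the entity through IItemRepository.
    public Item Map() => new Item
    {
        Code = Code,
        Description = Description,
        IsChecked = IsChecked
    };
}
```

Keeping the mapping on the request object, rather than in the handler, keeps the handler focused on orchestration only.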

Functions project

The functions project contains the entry points of the Lambda feature. It defines three function classes, which represent the AWS Lambdas: CreateItemFunction, GetItemFunction and UpdateItemFunction. As we will see later, each function will be mapped to a specific route of the API Gateway. Let’s dig a little bit into the function definitions by taking the CreateItemFunction as an example:

using System;
using System.Threading.Tasks;
using Amazon.Lambda.APIGatewayEvents;
using Amazon.Lambda.Core;
using DotNetServerless.Application.Requests;
using MediatR;
using Microsoft.Extensions.DependencyInjection;
using Newtonsoft.Json;
namespace DotNetServerless.Lambda.Functions
{
  public class CreateItemFunction
  {
    private readonly IServiceProvider _serviceProvider;
    public CreateItemFunction() : this(Startup
      .BuildContainer()
      .BuildServiceProvider())
    {
    }
    public CreateItemFunction(IServiceProvider serviceProvider)
    {
      _serviceProvider = serviceProvider;
    }
    [LambdaSerializer(typeof(Amazon.Lambda.Serialization.Json.JsonSerializer))]
    public async Task<APIGatewayProxyResponse> Run(APIGatewayProxyRequest request)
    {
      var requestModel = JsonConvert.DeserializeObject<CreateItemRequest>(request.Body);
      var mediator = _serviceProvider.GetService<IMediator>();
      var result = await mediator.Send(requestModel);
      return new APIGatewayProxyResponse { StatusCode =  201,  Body = JsonConvert.SerializeObject(result)};
    }
  }
}

 
The code mentioned above defines the entry point of the function. First of all, it declares a parameterless constructor which uses the BuildContainer and BuildServiceProvider methods exposed by the Startup class; as we will see later, these methods are provided in order to initialize the dependency-injection container. The Run method of the CreateItemFunction is decorated with the LambdaSerializer attribute, which specifies the serializer used to convert the function’s JSON input and output; the method itself is wired up as the Lambda entry point in the serverless.yml handler definition. Furthermore, the Run method uses APIGatewayProxyRequest and APIGatewayProxyResponse as the input and output of the Lambda’s computation.

Dependency injection

The project uses the built-in dependency injection provided out of the box by .NET Core. The Startup class defines the static BuildContainer method, which returns a new ServiceCollection containing the dependency mappings:

using System.IO;
using DotNetServerless.Application.Infrastructure;
using DotNetServerless.Application.Infrastructure.Configs;
using DotNetServerless.Application.Infrastructure.Repositories;
using DotNetServerless.Lambda.Extensions;
using MediatR;
using Microsoft.Extensions.Configuration;
using Microsoft.Extensions.DependencyInjection;
namespace DotNetServerless.Lambda
{
  public class Startup
  {
    public static IServiceCollection BuildContainer()
    {
      var configuration = new ConfigurationBuilder()
        .SetBasePath(Directory.GetCurrentDirectory())
        .AddEnvironmentVariables()
        .Build();
      return ConfigureServices(configuration);
    }
    private static IServiceCollection ConfigureServices(IConfigurationRoot configurationRoot)
    {
      var services = new ServiceCollection();
      services
        .AddMediatR()
        .AddTransient(typeof(IAwsClientFactory<>), typeof(AwsClientFactory<>))
        .AddTransient<IItemRepository, ItemDynamoRepository>()
        .BindAndConfigure(configurationRoot.GetSection("DynamoDbConfiguration"), new DynamoDbConfiguration())
        .BindAndConfigure(configurationRoot.GetSection("AwsBasicConfiguration"), new AwsBasicConfiguration());
      return services;
    }
  }
}

The Startup class uses ConfigureServices to initialize a new ServiceCollection and register the dependencies with it. It also uses the BindAndConfigure method to create some configuration objects. The BuildContainer method will be called by the functions to resolve their dependencies.
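BindAndConfigure is a small extension method defined in the Lambda project but not shown above. A minimal sketch of how it could be implemented follows; the exact signature in the repository may differ, and DynamoDbConfiguration here is a minimal stand-in for the project’s configuration class:

```csharp
using Microsoft.Extensions.Configuration;
using Microsoft.Extensions.DependencyInjection;

// Minimal stand-in for the project's configuration class.
public class DynamoDbConfiguration
{
    public string TableName { get; set; }
}

public static class ServiceCollectionExtensions
{
    // Hypothetical sketch of BindAndConfigure: it binds a configuration
    // section onto the given instance and registers that instance as a
    // singleton in the container, so consumers can inject it directly.
    public static IServiceCollection BindAndConfigure<TConfig>(
        this IServiceCollection services, IConfiguration section, TConfig config)
        where TConfig : class
    {
        section.Bind(config);
        services.AddSingleton(config);
        return services;
    }
}
```

Registering the bound instance as a singleton is what allows ItemDynamoRepository to receive a DynamoDbConfiguration through its constructor.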

Testing functions

As said before, testing our code, especially in a Lambda project, is essential for continuous integration and delivery. In this case, the tests cover the integration between the IMediator interface and the handlers, and they also cover the dependency-injection part. Let’s see the CreateItemFunctionTests implementation:

using System.Threading;
using System.Threading.Tasks;
using Amazon.Lambda.APIGatewayEvents;
using DotNetServerless.Application.Entities;
using DotNetServerless.Application.Infrastructure.Repositories;
using DotNetServerless.Application.Requests;
using DotNetServerless.Lambda;
using DotNetServerless.Lambda.Functions;
using Microsoft.Extensions.DependencyInjection;
using Microsoft.Extensions.DependencyInjection.Extensions;
using Moq;
using Newtonsoft.Json;
using Xunit;
namespace DotNetServerless.Tests.Functions
{
  public class CreateItemFunctionTests
  {
    public CreateItemFunctionTests()
    {
      _mockRepository = new Mock<IItemRepository>();
      _mockRepository.Setup(_ => _.Save(It.IsAny<Item>(), It.IsAny<CancellationToken>())).Returns(Task.CompletedTask);
      var serviceCollection = Startup.BuildContainer();
      serviceCollection.Replace(new ServiceDescriptor(typeof(IItemRepository), _ => _mockRepository.Object,
        ServiceLifetime.Transient));
      _sut = new CreateItemFunction(serviceCollection.BuildServiceProvider());
    }
    private readonly CreateItemFunction _sut;
    private readonly Mock<IItemRepository> _mockRepository;
    [Fact]
    public async Task run_should_trigger_mediator_handler_and_repository()
    {
      await _sut.Run(new APIGatewayProxyRequest {Body = JsonConvert.SerializeObject(new CreateItemRequest())});
      _mockRepository.Verify(_ => _.Save(It.IsAny<Item>(), It.IsAny<CancellationToken>()), Times.Once);
    }
    [Theory]
    [InlineData(201)]
    public async Task run_should_return_201_created(int statusCode)
    {
      var result = await _sut.Run(new APIGatewayProxyRequest {Body = JsonConvert.SerializeObject(new CreateItemRequest())});
      Assert.Equal(result.StatusCode, statusCode);
    }
  }
}

As you can see, the code mentioned above executes our function against the resolved dependencies and verifies that the Save method exposed by the IItemRepository is called. For obvious reasons, the test class doesn’t cover the DynamoDB part itself; when we combine complex entities and operations, it is possible to use a Docker container to cover the database part with some integration tests. Speaking of what’s next for .NET Core and AWS, the .NET AWS team has released a nice tool to improve Lambda testing: the LambdaTestTool.

Deploy the project

Let’s discover how to get the project into AWS. For this purpose, we are going to use the serverless framework, which is defined as:

The Serverless framework is a CLI tool that allows users to build & deploy auto-scaling, pay-per-execution, event-driven functions.

In order to get serverless into our project, we should execute the following command inside our main project folder:
npm install serverless --save-dev

Define the infrastructure

By default, the definition of the infrastructure is placed in the serverless.yml file. In this case, the file looks like the following:

service: ${file(env.configs.yml):feature}
frameworkVersion: ">=1.6.0 <2.1.0"
provider:
  name: aws
  stackName: ${file(env.configs.yml):feature}-${file(env.configs.yml):environment}
  runtime: dotnetcore2.1
  region: ${file(env.configs.yml):region}
  accountId: ${file(env.configs.yml):accountId}
  environment:
    DynamoDbConfiguration__TableName: ${file(env.configs.yml):dynamoTable}
  iamRoleStatements:
    - Effect: Allow
      Action:
        - dynamodb:*
      Resource: "arn:aws:dynamodb:${self:provider.region}:*:table/${self:provider.environment.DynamoDbConfiguration__TableName}"
package:
  artifact: bin/release/netcoreapp2.1/deploy-package.zip
functions:
  create:
    handler: DotNetServerless.Lambda::DotNetServerless.Lambda.Functions.CreateItemFunction::Run
    events:
      - http:
          path: items
          method: post
          cors: true
  get:
    handler: DotNetServerless.Lambda::DotNetServerless.Lambda.Functions.GetItemFunction::Run
    events:
      - http:
          path: items/{id}
          method: get
          cors: true
  update:
    handler: DotNetServerless.Lambda::DotNetServerless.Lambda.Functions.UpdateItemFunction::Run
    events:
      - http:
          path: items
          method: put
          cors: true
resources:
  Resources:
    ItemsDynamoDbTable:
      Type: 'AWS::DynamoDB::Table'
      DeletionPolicy: Retain
      Properties:
        AttributeDefinitions:
          - AttributeName: Id
            AttributeType: S
          - AttributeName: Code
            AttributeType: S
        KeySchema:
          - AttributeName: Id
            KeyType: HASH
          - AttributeName: Code
            KeyType: RANGE
        ProvisionedThroughput:
          ReadCapacityUnits: 1
          WriteCapacityUnits: 1
        TableName: ${self:provider.environment.DynamoDbConfiguration__TableName}

The file above describes the infrastructure using CloudFormation. The provider node defines some information about our Lambda, such as the stack name, the runtime and some information about the AWS account. Furthermore, it also describes the roles and permissions of the Lambda, e.g., the Lambda should be allowed to perform operations on the DynamoDB table. The functions node defines the different Lambda functions and maps each of them to a specific HTTP path. Finally, the resources node is used to set up the DynamoDB table schema.
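Once the stack is deployed, the routes defined under the functions node can be exercised with plain HTTP calls. The endpoint URL below is a placeholder for the one printed by the serverless framework at deploy time, and the request body fields are assumptions based on the Item entity:

```shell
# Create a new item (CreateItemFunction, POST /items)
curl -X POST "https://<api-id>.execute-api.<region>.amazonaws.com/dev/items" \
  -H "Content-Type: application/json" \
  -d '{"Code": "A1", "Description": "first item", "IsChecked": false}'

# Read it back by id (GetItemFunction, GET /items/{id}),
# using the Id returned by the previous call
curl "https://<api-id>.execute-api.<region>.amazonaws.com/dev/items/<id>"
```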

Configuration file

The serverless.yml definition is usually combined with another YAML file which defines the configuration related to the environment. For example, this is the case for the DynamoDbConfiguration__TableName node, which uses the following syntax to get its value from another YAML file: ${file(env.configs.yml):dynamoTable}.
The following snippet shows an example of the env.configs.yml file:

feature: <feature_name>
version: 1.0.0.0
region: <aws_region>
environment: <environment>
accountId: <aws_account_id>
dynamoTable: <dynamo_table_name>
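With serverless.yml and env.configs.yml in place, the remaining step is to package the .NET project and deploy the stack. A typical cycle might look like the following; this is a sketch that assumes the Amazon.Lambda.Tools CLI is available and that the output path matches the package node of serverless.yml:

```shell
# Package the compiled project into the zip artifact referenced by serverless.yml
dotnet lambda package --configuration Release --framework netcoreapp2.1 \
  --output-package bin/release/netcoreapp2.1/deploy-package.zip

# Deploy (or update) the whole stack through the serverless framework
npx serverless deploy --verbose

# Tear everything down once the experiment is over
npx serverless remove
```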

Final thoughts

This post covers some theory about serverless computing as well as a practical example of a .NET Core Lambda implementation. It is important to underline how serverless computing can be used to push our architecture forward fast. Moreover, experimenting is a key aspect of an evolving product, and it is important to adapt quickly to the incoming changes of the business.
In conclusion, you can find the complete Lambda example built with the serverless framework in the following repository: serverless/examples/aws-dotnet-rest-api-with-dynamodb.
@samueleresca
https://samueleresca.net