Building Serverless APIs with TypeScript and Azure Function Proxies

TL;DR: In this post, we build a microservice that uses Azure Functions and other awesome Serverless technologies provided by Azure. We will cover the following features:

  • Azure Functions currently has preview support for TypeScript, and we will use the features available today to develop a read/write REST API.
  • We leverage the Azure Function Bindings to define Input and Output for our functions.
  • We will look at Azure Function Proxies that provide a way to define consistent routing behavior for our function and API calls.

If you want to jump right in, the source is available on GitHub here (https://github.com/niksacdev/sample-api-typescript).

TypeScript support for Azure Functions is currently in preview; please use caution when relying on it in production scenarios.

Problem Context

We will be building a Vehicle microservice that provides CRUD operations for sending vehicle data to a CosmosDB document store.

The architecture is fairly straightforward and looks like this:

Let’s get started …

Setting up TypeScript support for Azure Functions

VSCode has amazingly seamless support for Azure Functions and TypeScript, including development, linting, debugging, and deployment support via extensions, so it was a no-brainer to use it for our development. I use the following extensions:

Additionally, you will need the following to kick-start your environment:

  • azure-functions-core-tools: You will need these for setting up the Functions runtime in your local development environment. There are two packages here; if you are using a Mac environment like me, you will need the 2.0 preview version.
     npm install -g azure-functions-core-tools@core
  • Node.js (duh!): Note that the preview features currently work with 8.x.x. I have tried it on 8.9.4, which is the latest LTS (Carbon), so you may have to downgrade using nvm if you are on a 9.x.x version.

Interestingly, the Node version supported by Functions deployed in Azure is v6.5.0, so while you can play with higher versions locally, you will have to downgrade to 6.5.0 when deploying to Azure as of today!
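If you use nvm, switching is a one-liner each way; for example:

# develop locally against Node 8.x
nvm install 8.9.4 && nvm use 8.9.4
# match the Azure-hosted runtime before verifying a deployment
nvm install 6.5.0 && nvm use 6.5.0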

You can now use the Function Runtime commands or the Extension UI to create your project and Functions. We will use the Extension UI for our development:

Assuming you have installed the extension and connected to your Azure environment, the first thing we do is create a Function project.

Click on Create New Project and then select the folder that will contain our Function App.

The extension creates a bunch of files required for the FunctionApp to work. One of the key files here is host.json, which lets you specify configuration for the Function App. If you are creating HTTPTriggers, here are some settings I recommend tuning to control your throttling and performance parameters:

{
    "functionTimeout": "00:10:00",
    "http": {
        "routePrefix": "api/vehicle",
        "maxOutstandingRequests": 20,
        "maxConcurrentRequests": 10,
        "dynamicThrottlesEnabled": false
    },
    "logger": {
        "categoryFilter": {
            "defaultLevel": "Information",
            "categoryLevels": {
                "Host": "Error",
                "Function": "Error",
                "Host.Aggregator": "Information"
            }
        }
    }
}

The maxOutstandingRequests setting can be used to control latency for the function by setting a threshold on the maximum number of requests in the waiting and execution queues. The maxConcurrentRequests setting allows control over concurrent HTTP function requests to optimize resource consumption. The functionTimeout is useful if you want to override the timeout setting for the App Service or Consumption Plan, which defaults to 5 minutes. Note that configuration in host.json is applied to all functions.

Also note that I have a custom value for the route prefix (by default this is api/{functionname}). By adding the prefix, I am specifying that all HTTP functions in this FunctionApp will use the api/vehicle route. This is a good way to set the bounded context for your Microservice, since the route will now be applied to all functions in this FunctionApp. You can also use this to define versioning schemes when doing canary testing. This setting can be used in conjunction with the route attribute in a function's function.json; the Function Runtime appends your function route to this default host route.
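For example, with the prefix above and a hypothetical per-function route of "{id}" (not part of the sample), the pieces combine like this:

// host.json (applies to the whole FunctionApp)
"routePrefix": "api/vehicle"

// function.json for one function (hypothetical)
"route": "{id}"

// resulting endpoint
https://<your-app>.azurewebsites.net/api/vehicle/{id}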

Note that this behavior can be simplified using Azure Function Proxies; we will modify these routes and explore this further in the Azure Function Proxies section later.

To learn more about the options available in host.json, refer here

Our project is now created; next, we create our Function.

Click Create Function and follow the onscreen instructions to create the function in the same folder as the Function App.

  • Since TypeScript is in preview, you will notice a (Preview) tag when selecting the language. This feature was added in a recent build of the extension; if you don't see TypeScript as a language option, you can enable support for preview languages from the VSCode settings page by specifying the following in your user settings:
"azureFunctions.projectLanguage": "TypeScript"

The above will use TypeScript as the default language and will skip the language selection dialog when creating a Function.

  • Select the HTTP Trigger for our API and then provide a Function Name.
  • Select Authorization as Anonymous.

Never use Anonymous when deploying to Azure

You should now have a function created with some boilerplate TypeScript code:

  • The function.json defines the configuration for your function, including the Triggers and Bindings; the index.ts is our TypeScript Function Handler. Since TypeScript is transpiled to JavaScript, your Function needs the output .js files for deployment and not the .ts files. A common practice is to move these output files into a different directory so you don't accidentally check them in. However, if you move them to a different folder and run the function locally, you may get the following error:
vehicle-api: Unable to determine the primary function script. Try renaming your entry point script to 'run' (or 'index' in the case of Node), or alternatively you can specify the name of the entry point script explicitly by adding a 'scriptFile' property to your function metadata.

To allow using a different folder, add a scriptFile attribute to your function.json and provide a relative path to the output folder.

Make sure to add the destination folder to .gitignore to ensure the output .js and .js.map files are not checked in.

"scriptFile": "../vehicle-api-output-debug/index.js"
  • Two things that do not get added by default are a tsconfig.json and a tslint.json. While the function will execute without these, I always feel that having them as part of the base setup encourages better coding practices (a sample sketch follows below). Also, since we are going to use Node packages, we will add a package.json and install the TypeScript definitions for Node:
npm install @types/node --save-dev
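Here is a minimal tsconfig.json sketch for this setup; the compiler options are my assumptions rather than anything generated by the extension, and outDir mirrors the separate output folder used in the scriptFile example above:

{
    "compilerOptions": {
        "target": "es2015",
        "module": "commonjs",
        "outDir": "../vehicle-api-output-debug",
        "sourceMap": true,
        "strict": true
    }
}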
  • We now have our Function and FunctionApp created, but there is one last step required before proceeding: setting up the debug environment. At this time, VSCode does not provide out-of-the-box support for debugging Azure Functions written in TypeScript. However, you can enable it fairly easily; I came across this blog from Tsuyoshi Ushio that describes exactly how to do it (a rough sketch follows).
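The gist of the approach is compiling with source maps and attaching the Node debugger to the Functions host. A rough launch.json sketch; the configuration name, port, and outFiles path are assumptions that depend on how you start the host:

{
    "version": "0.2.0",
    "configurations": [
        {
            "name": "Attach to Functions host",
            "type": "node",
            "request": "attach",
            "port": 5858,
            "sourceMaps": true,
            "outFiles": [
                "${workspaceRoot}/vehicle-api-output-debug/**/*.js"
            ]
        }
    ]
}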

Now that we have all things running, let’s focus on what our functions are going to do.

Building our Vehicle API

Developing the API is no different from your usual TypeScript development. From a Function perspective, we will split each operation into its own Function. There is a huge debate about whether you should have a monolithic function API or one function per operation (GET, POST, PUT, DELETE). Both approaches work, but I feel that within a FunctionApp you should segregate the service as much as possible, to align with the Single Responsibility Principle. Also, in some cases, you may achieve better scale by implementing a pattern like CQRS, where your read and write operations go to separate functions. On the flip side, too many small Functions can become a management overhead, so you need to find the right balance. Azure Function Proxies provide a way to surface multiple endpoints behind consistent routing behavior; we will leverage this for our API in the discussion below.

In a nutshell, a FunctionApp is the Bounded Context for the Microservice, and each Function is an operation exposed by that Microservice.

For our Vehicle API we will create two functions:

  • vehicle-api-get
  • vehicle-api-post

You can also create Put and Delete functions similarly.

So, how do we make sure that each API is called only for its designated REST operation? You can define this in the function.json using the methods array.

For example, vehicle-api-get is an HTTP GET operation and is configured as below:

{
      "authLevel": "anonymous", --DONT DO THIS
      "type": "httpTrigger",
      "direction": "in",
      "name": "req",
      "route":"",
      "methods": [
        "get"
      ]
},

Adding CosmosDB support to our Vehicle API

The following TypeScript code allows us to access a CosmosDB store and retrieve data based on a Vehicle Id. This represents the HTTP GET operation for our Vehicle API.

import { Collection } from "documentdb-typescript";

export async function run(context: any, req: any) {
    context.log("Entering GET operation for the Vehicle API.");
    // get the vehicle id from the url (route parameters arrive as strings)
    const id: number = Number(req.params.id);

    // get cosmos db details and open the collection
    const url = process.env.COSMOS_DB_HOSTURL;
    const key = process.env.COSMOS_DB_KEY;
    const coll = await new Collection(process.env.COSMOS_DB_COLLECTION_NAME, process.env.COSMOS_DB_NAME, url, key).openOrCreateDatabaseAsync();

    if (id !== 0) {
        // query cosmos for documents matching the id
        const allDocs = await coll.queryDocuments(
            {
                query: "select * from vehicle v where v.id = @id",
                parameters: [{ name: "@id", value: id }]
            },
            { enableCrossPartitionQuery: true, maxItemCount: 10 }).toArray();

        // build the response
        context.res = {
            body: allDocs
        };
    } else {
        context.res = {
            status: 400,
            body: `No records found for the id: ${id}`
        };
    }
    // no explicit context.done() needed: the runtime completes when this async function resolves
}

Using Bindings with CosmosDB

While the previous section used code to perform the GET operation, we can also use the CosmosDB Bindings, which allow us to perform operations on our CosmosDB Collection whenever the HTTP Trigger fires. Below is how the HTTP POST is configured to leverage the CosmosDB output binding:

{
  "disabled": false,
  "scriptFile": "../vehicle-api-output-debug/vehicle-api-post/index.js",
  "bindings": [
    {
      "authLevel": "anonymous", --DONT DO THIS
      "type": "httpTrigger",
      "direction": "in",
      "name": "req",
      "route": "data",
      "methods": [
        "post"
      ]
    },
    {
      "type": "documentDB",
      "name": "$return",
      "databaseName": "vehiclelog",
      "collectionName": "vehicle",
      "createIfNotExists": false,
      "connection": "COSMOS_DB_CONNECTIONSTRING",
      "direction": "out"
    }
  ]
}

Then in your code, you can simply return the incoming JSON request, and Azure Functions takes care of pushing the values into CosmosDB.

export function run(context: any, req: any): void {
    context.log("HTTP trigger for POST operation.");
    let err;
    let json;
    if (req.body !== undefined) {
        json = JSON.stringify(req.body);
    } else {
        err = {
            status: 400,
            body: "Please pass the Vehicle data in the request body"
        };
    }
    context.done(err, json);
} 

One-click deployment to Azure using VSCode Extensions

Deployment to Azure from the VSCode extension is straightforward. The interface allows you to create a FunctionApp in Azure and then provides a step-by-step workflow to deploy your functions into the FunctionApp.

If all goes well, you should see output such as below.

Using Subscription "".
Using resource group "".
Using storage account "".
Creating new Function App "sample-vehicle-api-azfunc"...
>>>>>> Created new Function App "sample-vehicle-api-azfunc": https://<your-url>.azurewebsites.net <<<<<<

00:27:52 sample-vehicle-api-azfunc: Creating zip package...
00:27:59 sample-vehicle-api-azfunc: Starting deployment...
00:28:06 sample-vehicle-api-azfunc: Fetching changes.
00:28:14 sample-vehicle-api-azfunc: Running deployment command...
00:28:20 sample-vehicle-api-azfunc: Running deployment command...
00:28:26 sample-vehicle-api-azfunc: Running deployment command...
00:28:31 sample-vehicle-api-azfunc: Running deployment command...
00:28:37 sample-vehicle-api-azfunc: Running deployment command...
00:28:43 sample-vehicle-api-azfunc: Running deployment command...
00:28:49 sample-vehicle-api-azfunc: Running deployment command...
00:28:55 sample-vehicle-api-azfunc: Running deployment command...
00:29:00 sample-vehicle-api-azfunc: Running deployment command...
00:29:06 sample-vehicle-api-azfunc: Running deployment command...
00:29:12 sample-vehicle-api-azfunc: Running deployment command...
00:29:17 sample-vehicle-api-azfunc: Running deployment command...
00:29:24 sample-vehicle-api-azfunc: Syncing 1 function triggers with payload size 144 bytes successful.
>>>>>> Deployment to "sample-vehicle-api-azfunc" completed. <<<<<<

HTTP Trigger Urls:
  vehicle-api-get: https://sample-vehicle-api-azfunc.azurewebsites.net/api/vehicle-api-get

Some observations:

  • The extension bundles everything in the App folder, including files like local.settings.json and the output .js directories; I could not find a way to filter these using the extension.
  • Another problem I have faced is that currently neither the extension nor the CLI provides a way to upload Application Settings as Environment Variables so they can be accessed by code once deployed to Azure; these have to be added manually. For this sample, you will need to add the following key-value pairs under FunctionApp -> Application Settings in the Azure Portal so they are available as Environment Variables (a scripted alternative follows this list)!
"COSMOS_DB_HOSTURL": "https://your cosmos-url:443/",
"COSMOS_DB_KEY": "your-key",
"COSMOS_DB_NAME":"your-db-name",
"COSMOS_DB_COLLECTION_NAME":"your-collection-name"
"COSMOS_DB_CONNECTIONSTRING":"your-connection-string"
  • If you are only running locally, you can use local.settings.json. There is also a way through the CLI to publish the local settings values into Azure using the --publish-local-settings flag, but hey, there is a reason these are local values!
  • The Node version supported by Azure Functions is v6.5.0, so while you can play with higher versions locally, you will have to downgrade to 6.5.0 as of today.
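If you would rather script those settings than click through the Portal, the general-purpose Azure CLI (a different tool from the Functions CLI discussed above) can set them; a sketch, with the resource group name being an assumption:

az functionapp config appsettings set \
    --name sample-vehicle-api-azfunc \
    --resource-group your-resource-group \
    --settings COSMOS_DB_HOSTURL="https://your-cosmos-url:443/" COSMOS_DB_KEY="your-key"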

In case you guys have a better way to deploy to Azure, do let me know :).

Configuring Azure Function Proxies for our API

At this point, we have a working API available in Azure. We have (loosely) leveraged the CQRS approach to have a separate Read API and a separate Write API. For the client, however, working against multiple endpoints can quickly become cumbersome. We need a way to package our API into a facade that is consistent and manageable; this is where Azure Function Proxies come in.

Azure Function Proxies is a toolkit available as part of the Azure Functions stack and provides the following features:

  • Consistent routing behavior for the underlying functions in the FunctionApp, which can even include external endpoints.
  • A mechanism to aggregate underlying APIs into a single API facade; in a way, it is a lightweight Gateway service in front of your underlying Functions.
  • A mock proxy to test your endpoint without having the integration points in place. This is useful when testing the request routing with dummy data.
  • Support for OpenAPI, one of the key aspects added to Proxies, which enables more out-of-the-box connectors to other services.
  • Out-of-the-box AppInsights support, where a proxy can publish events to AppInsights to generate endpoint metrics not just for functions but also for legacy APIs.

If you are familiar with the Application Request Routing (ARR) stack in IIS, this is somewhat similar. In fact, if you look at the Headers and Cookies for the request processed by the Proxy, you should see some familiar attributes 😉

......
Server →Microsoft-IIS/10.0
X-Powered-By →ASP.NET
......
Cookies: ARRAffinity

Let’s use Function Proxies for our API.

In the previous sections, I showed how we could use the routePrefix in host.json in conjunction with route in function.json. While that approach works, we have to add configuration for each function, which can become a maintenance overhead. Additionally, if I want an external API to have the same route path, that is not possible using the earlier approach. Proxies help overcome this barrier.

Using proxies, we can expose logical endpoints while keeping the configuration centralized. We will use Azure Function Proxies to surface our two functions as one consistent API endpoint, so essentially, to the client, it will look like a single API interface.

Before we continue, we will remove the route attributes we added to our functions, keep only the variable references, and change the routePrefix to an empty string (""). Our published Function endpoints now look something like this:

Http Functions:
        vehicle-api-get: https://sample-vehicle-api-azfunc.azurewebsites.net/{id}
        vehicle-api-post: https://sample-vehicle-api-azfunc.azurewebsites.net/vehicle-api-post/

This is obviously not intuitive; with multiple Functions, it can become a nightmare for the client to consume our Service. We create two Proxies that define the route path and match criteria for our Functions. You can easily create proxies from the Azure Portal UI, but you can also author your own proxies.json. The below shows how to define proxies and associate them with our Functions.

{
    "$schema": "http://json.schemastore.org/proxies",
    "proxies": {
        "VehicleAPI-Get": {
            "matchCondition": {
                "route": "api/vehicle/{id}",
                "methods": [
                    "GET"
                ]
            },
            "backendUri": "https://sample-vehicle-api-azfunc.azurewebsites.net/{id}"
        },
        "VehicleAPI-POST": {
            "matchCondition": {
                "route": "/api/vehicle",
                "methods": [
                    "POST"
                ]
            },
            "backendUri": "https://sample-vehicle-api-azfunc.azurewebsites.net/vehicle-api-post"
        }
    }
}

As of today, there is no way to upload a proxies.json in Azure, but you can easily copy-paste it into the Portal's Advanced Editor.

We have two proxies defined here: the first is for our GET operation and the other for POST. In both cases, we have defined a consistent routing mechanism for the selected REST verbs. The key attributes here are route and backendUri, which allow us to map a public route to an underlying endpoint. Note that the backendUri can be anything that needs to be called under the same API facade, so we can group multiple services behind a common gateway route using this approach.
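For example, a hypothetical third entry in the proxies object could surface an external legacy endpoint under the same facade (the backend URL here is illustrative, not part of the sample):

"VehicleAPI-Recalls": {
    "matchCondition": {
        "route": "api/vehicle/{id}/recalls",
        "methods": [
            "GET"
        ]
    },
    "backendUri": "https://legacy-recalls-service.example.com/recalls/{id}"
}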

Can you do this with other services? I would have to say yes. You can implement similar routing functionality with Application Gateway, NGINX, and Azure API Management. You can also use an MVC framework like Express and write a single function that does all this routing. So, evaluate the options and choose what works best for your scenario.

Testing our Vehicle API

We now have our Vehicle API endpoints exposed through Azure Function Proxies. We can test it using any HTTP Client. I use Postman for the requests, but you can use any of your favorite clients.

GET Operation

The exposed endpoint from the Proxy is:

https://sample-vehicle-api-azfunc.azurewebsites.net/api/vehicle/{id}

Our GET request fetches the correct results from CosmosDB.
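If you prefer scripting the call over a GUI client, a minimal TypeScript sketch using node-fetch (an assumed dev dependency, not part of the sample) would look like this:

import fetch from "node-fetch";

async function getVehicle(id: number) {
    // call the public route surfaced by the VehicleAPI-Get proxy
    const res = await fetch(`https://sample-vehicle-api-azfunc.azurewebsites.net/api/vehicle/${id}`);
    console.log(res.status, await res.json());
}

getVehicle(1234);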

POST Operation

The exposed endpoint from the Proxy is:

https://sample-vehicle-api-azfunc.azurewebsites.net/api/vehicle/

Our POST request pushes a new record into CosmosDB:
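The request body is simply the vehicle document to store. A hypothetical payload; only the id field is implied by the sample (the GET query filters on v.id), and the remaining fields are illustrative:

{
    "id": "1234",
    "make": "Contoso",
    "model": "Roadster",
    "year": 2018
}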

There we have it. Our Vehicle API that leverages Azure Function Proxies and TypeScript is now up and running!

Do have a look at the source code here (https://github.com/niksacdev/sample-api-typescript) and please provide your feedback.

Happy Coding :).


Cross Platform IoT: Developing a .NET Core based simulator for Azure IoT Hub using VS for Mac!

This post is part of a Cross-Platform IoT series; to see other posts in the series, refer here.

The source for the solution is available on GitHub here.

March 7th was a significant milestone for the Microsoft Visual Studio team. With Visual Studio completing 20 years and the launch of Visual Studio 2017, the team demonstrated that Visual Studio continues to lead the path for .NET development on Windows. While Visual Studio 2017 is not cross-platform and does not work on other operating systems like macOS (yet ;)), Microsoft has for some time also had a preview version of another cousin of Visual Studio: the Visual Studio for Mac!!

VS for Mac at first impression seems like a cosmetic redo of Xamarin Studio (following Microsoft's acquisition of Xamarin), but as the updates keep coming, it seems to be bringing all the goodies of Visual Studio to the native macOS platform. In this post, I will show how to use Visual Studio for Mac to build a .NET Core solution that will act as a simple device simulator and will send messages to, and receive commands from, Azure IoT Hub.

Wait, don't we already have VS Code? Yeah, that's correct; VS Code is there and provides some awesome cross-platform development support. I am a big fan of its simplicity, speed, and extension ecosystem. #LoveVSCode. However, VS for Mac includes some additional features such as project templates, build-and-run-from-IDE support, a native Xamarin app development experience, and now Visual Studio Test Framework support, which makes it a step closer to the Visual Studio IDE available on Windows. Which is better? Well, time will tell; for the purpose of this post, however, we will use VS for Mac running on macOS Sierra (10.12.3).

Getting VS for Mac

VS for Mac is in preview right now, and you should use it for development scenarios only, with caution. At the time of writing, the VS team had released the Preview 5 builds, and that is what we use for this post.

You can download the .dmg from the Visual Studio website here to install the application or use homebrew for the installation:

brew cask install visual-studio

The setup process is fairly straightforward. Once installed, launch the Visual Studio app, and you should be presented with a welcome screen similar to below:

VS for Mac (current build) does not install the .NET Core SDK on your machine by default, and you will need to install it manually. To install the .NET Core SDK on macOS, we will again use homebrew. The VS team provides step-by-step instructions on how to do this here.

Note that the installation asks you to install openssl, which is a dependency for .NET Core (it mainly uses the libcrypto and libssl libraries). In some cases, you may see a warning like this: Warning: openssl is keg-only and another version is linked to opt. To continue the installation, use the following command:

brew install --force openssl

Also, ignore the warning Warning: openssl-1.0.2k already installed, it's just not linked., or use brew unlink openssl before executing the above command.

Writing our simulator app for IoT Hub

The source for this project is available at the GitHub repo here.

Now that we have VS for Mac and .NET Core SDK installed and setup, we are going to build a .NET Core Simulator app which will be performing the following operations:

  1. Send a message (ingress) from a .NET Core simulator to Azure IoT Hub using AMQP as the protocol.
  2. Receive messages from Azure IoT Hub (egress) using a .NET Core consumer.
  3. Send a command to a device and receive an acknowledgment from the device (coming soon).

The process to create the project is very similar to the experience of Visual Studio on Windows.

  1. Click New Project from the welcome screen to open the New Project dialog. We will select a .NET Core App project of type Console Application. At the time of writing, C# and F# are the supported languages for .NET Core projects in VS for Mac.
  2. In the next screen, we provide a project and solution name and configure our project for git.
  3. VS for Mac will now set up the project and solution. The first thing it does is restore any required packages. Since VS for Mac has support for NuGet, it uses NuGet package restore internally to get all the required dependencies installed. Once all dependencies are restored, you should see a screen similar to below. We have our .NET Core project created in VS for Mac!
  4. Before we start working on the simulator code, let's ensure we have Source Control enabled for our project. In VS for Mac, the Version Control menu allows you to configure your SVN or Git repo. Note that this functionality is derived from Xamarin Studio; you can use the guidance here to set up version control for your project. We will be using a git repo and GitHub for our project.

Now that we have our project and version control sorted out, let’s start working on the code for our IoT Simulator. As noted earlier, the simulator will perform the following actions:

  • Send a message to IoT Hub using Device credentials
  • Act as a Consumer of the message from IoT Hub
  • Receive Commands and send response acknowledgment

The solution has multiple projects and is described in the GitHub repo here. The repo also discusses how to use the sample. Instead of talking through all the projects, I will walk through the logic to send a message as a device using .NET Core assemblies; this should give an idea of how to use VS for Mac and .NET Core with IoT Hub.

Sending messages over AMQP

The easiest option for working with IoT Hub is the Device and Service SDKs. Unfortunately, at the time of writing this post, we do not have .NET Core versions of these SDKs. If you try to add the NuGet packages, you will get incompatibility errors like the below:

Package Microsoft.Azure.Devices.Client 1.2.5 is not compatible with netcoreapp1.1 (.NETCoreApp,Version=v1.1). Package Microsoft.Azure.Devices.Client 1.2.5 supports:
net45 (.NETFramework,Version=v4.5)
portable-monoandroid10+monotouch10+net45+uap10+win8+wp8+wpa81+xamarinios10 (.NETPortable,Version=v0.0,Profile=net45+wp8+wpa81+win8+MonoAndroid10+MonoTouch10+Xamarin.iOS10+UAP10)
uap10.0 (UAP,Version=v10.0)

So what are our options here:

  1. If you want to use the HTTPS protocol, you can build an HTTP client and go through the REST API model.
  2. If you want to use AMQP as the protocol, you can use the AMQP Lite library available on NuGet. AMQP Lite has support for .NET Core today and provides the underlying functionality to send and receive packets to IoT Hub using AMQP. We will be using this for our sample.
  3. If you want to use MQTT, there are a few NuGet packages like M2Mqtt that you can try. I have not tried them yet.

When the IoT Hub team releases .NET Core versions of the SDKs, those should become the recommended way to process messages with IoT Hub.

Adding the Nuget package

Adding NuGet packages in VS for Mac is straightforward: simply right-click on your project -> Add -> Add NuGet package. This opens the NuGet dialog box that allows searching all available packages. In our case, we search for the AMQP Lite package and add it to our project.

Note that currently all packages are shown, including full .NET Framework packages. VS for Mac, however, validates whether the package can be added to a .NET Core project; in case the package has binaries or dependencies that rely on the full .NET Framework, you will see an error in the package console:

Checking compatibility for Microsoft.Azure.Devices.Client 1.2.5 with .NETCoreApp,Version=v1.1.
Package Microsoft.Azure.Devices.Client 1.2.5 is not compatible with netcoreapp1.1 (.NETCoreApp,Version=v1.1). Package Microsoft.Azure.Devices.Client 1.2.5 supports:
net45 (.NETFramework,Version=v4.5)
portable-monoandroid10+monotouch10+net45+uap10+win8+wp8+wpa81+xamarinios10 (.NETPortable,Version=v0.0,Profile=net45+wp8+wpa81+win8+MonoAndroid10+MonoTouch10+Xamarin.iOS10+UAP10)
uap10.0 (UAP,Version=v10.0)

Setting up the environment variables

Once we have the AMQP Lite assemblies, we need our IoT Hub details. You can use the Azure CLI tools for IoT mentioned in my previous post to fetch this information. In the sample, I leverage a JSON configuration handler to dump the configuration into a JSON file and read from it at runtime.

{
   "settings":{
      "connectionStrings":[
         {
            "name":"youriothubname",
            "connectionString":"youriothub.azure-devices.net",
            "sasKey":"yourdevicekey",
            "sasKeyName":"device"
         }
      ],
      "deviceId":"D1234"
   }
}

Sending the message to IoT Hub

The final part of the puzzle is to write code that will open a connection and send a message to IoT Hub. We leverage the AMQP Lite library to perform these actions. In the sample, I follow a Strategy pattern to execute methods on the underlying library; it gives us the flexibility to swap the underlying implementation for a different library (for example, when the IoT Hub SDKs become .NET Core compliant) without making a lot of code changes.

// Create a connection using the device context
Connection connection = await Connection.Factory.CreateAsync(new Address(iothubHostName, deviceContext.Port));

string audience = Fx.Format("{0}/devices/{1}", iothubHostName, deviceId);
string resourceUri = Fx.Format("{0}/devices/{1}", iothubHostName, deviceId);

// Generate the SAS token and register it with the CBS node
string sasToken = TokenGenerator.GetSharedAccessSignature(null, deviceContext.DeviceKey, resourceUri, new TimeSpan(1, 0, 0));
bool cbs = TokenGenerator.PutCbsToken(connection, iothubHostName, sasToken, audience);
if (cbs)
{
    // create a session and send a telemetry message
    Session session = new Session(connection);
    byte[] messageAsBytes = default(byte[]);
    if (typeof(T) == typeof(byte[]))
    {
        messageAsBytes = message as byte[];
    }
    else
    {
        // convert object to byte[]
    }

    // send the payload and tear down the AMQP objects
    await SendEventAsync(deviceId, messageAsBytes, session);
    await session.CloseAsync();
    await connection.CloseAsync();
}

The above code first opens a connection using the AMQP Lite Connection type. It then generates a SAS token based on the Device credentials, registers it via a CBS token, establishes an AMQP session using the Session type, and finally sends the payload to IoT Hub using SendEventAsync. The sample also uses a protobuf serializer to encode the payload as a byte[].

If you run the samples.iot.simulator.sender .NET Core console app in the sample, it calls ExecuteOperationAsync to pass the payload and the required environment variables to the AMQP Lite library. The results of a successful send are displayed in a console window.

Consuming messages from IoT Hub

The sample also demonstrates the other scenarios, such as consuming messages; however, you can also use an out-of-the-box Azure service like Azure Stream Analytics to process these messages. Here is an example of using Stream Analytics for event processing.

Phew! This was a long post; it demonstrates some of the capabilities, as well as the challenges, of developing .NET Core solutions with VS for Mac. I think VS for Mac is a great addition to the IDE toolkit, especially for developers who are used to the Visual Studio Windows environment. There are a few rough edges here and there that need polishing, but remember, this is just a preview! … Happy Coding 🙂

The source for the solution is available on GitHub here.

This post is part of a Cross-Platform IoT series; to see other posts in the series, refer here.

the “internet of things” is the next big bang!

What's new about this, you ask?

Internet of Things, aka IoT, is already making a big impact in our day-to-day lives; the fact that the IoT industry became a $1.24 trillion business in 2013 (Source: Markets and Markets) proved that IoT is big and here to stay. However, the reason I say this is not because of how many under-25 billionaires it will create but because of how it is going to change our lives forever.

This realization came to me just recently …

the other day my wife was making tea and she realized we were out of sugar (yeah, I forgot it during the last visit to the grocer's; how that ended for me is another story) and said, "I wish someone could fill this up magically every time!" … Now, if this were some years back, I would be thinking of Aladdin and his Genie, but with the advent of IoT I started thinking this might just be possible …

Consider a "Smart Jar" that has sensors to track its quantity. The consumer can configure alerts to fire whenever the quantity drops below a minimum limit, and the device can then send notifications to the user or add the item to their favorite grocery-list app. Taking this a step further, the "Smart Jar" can connect to an external provider like Amazon or Target that the user has a subscription with (in the USA) and automatically schedule the item for the next delivery. Also, for people like me who constantly forget the location of items in the kitchen, a mobile app lets you find the appropriate jar based on its coordinates. From a producer perspective, the jar may send telemetry on usage patterns for families, which can show demographics on how products are used.

While the above example may seem a little overreached (btw, some startup might just be working on a solution for this right now!), the point I am trying to make is that many such tasks that touch our daily lives will be simplified and automated with the use of connected devices. This, by itself, will be a revolution not just in our homes but for corporations as well. It will change how we eat, drink, shop … live, and that is why I say it is the next big bang!

Now the next question that comes to mind is … are we ready for it?

Many governments and companies are investing generously in research and development for IoT solutions (Industry 4.0). This is great for the IoT industry; however, any industry needs consistent sales to sustain and prosper. We have seen successes in certain domains such as connected homes and thermostats (Nest), automobiles (BMW), wearables, etc., but there is still so much untapped potential that this seems like just the tip of the iceberg.

One of the big challenges for any industry is adoption; since the changes we are talking about here are life-changing, there are certain principles that should be followed when building devices targeted for IoT to enable mass adoption:

Intuitive: When the iPad was launched several years back, it lacked a lot of features, but one thing prominent in the device was that it looked "sexy" and simplified a lot of tasks that were cumbersome on a laptop or a desktop (aka the dinosaurs that existed some years back), and that was one of the main reasons for its success. History repeated itself when Nest launched its smart thermostat, which, though it may not be completely accurate from a temperature or humidity perspective, provides great intuitive features that have compelled major manufacturers in this field to release similar variants. So any device launched under the IoT umbrella needs to be intuitive and simplified rather than sophisticated. Some key features that any such device should exhibit:

    • Does the job it is meant to do ALL the time and without errors: if my wife had to call support for a simple device like the "Smart Jar", she would not use it next time.
    • Multiple sensors to predict user actions: ease the user's life by determining what they want to do.
    • Easy installation and upgrades: if not my grandma, then at least my wife should be able to configure it :).
    • No or minimal maintenance: no frequent battery changes, wired connections, or connectivity failures.

Privacy and Security: Security is an obvious concern and has been highlighted in many articles. With multiple devices running all the time and sending telemetry data back to manufacturers, they can literally predict what you are doing right now in your house. This is an invasion of privacy that both consumers and corporations will oppose. Simply put, convenience at the stake of privacy will not sell!! We are still at an immature stage in this space, but work is being done to define policies around data privacy and end-user security. This is an area I would be watching before placing my bets on IoT.


Cost: A device by itself does not achieve an IoT scenario; it needs to be backed by the power of the cloud and data analytics. So when evaluating the cost of a device, multiple auxiliary items need to be accounted for, such as:

    • Hardware (sensors, MCU, RAM, etc.)
    • Communication and network interface
    • Messaging channel transactions
    • Cloud compute for analytics and business systems
    • App development
    • Data storage

These are just high-level items; there are multiple hidden costs beyond these that need to be factored into the selling price of the device. Now, all these details make a compelling case for the costs to be higher than a normal device. However, the consumer does not care what goes into the device or that a sophisticated cloud platform is backing the solution (at least not from a cost perspective); a light bulb is a light bulb, and if a consumer gets it for 10x the average price, only a small community would be interested in it, most likely for experimental purposes. It is thus essential to keep costs to a minimum.

An effective way can be to keep the device price low and provide a subscription-based solution for enabling more features on the device. Also, as hardware manufacturing costs keep coming down, we will see many of these devices become reachable for the masses.

Interoperability: This is always a hot and controversial topic for me. Hot because just thinking of interoperability as part of IoT standards opens up a plethora of opportunities and can introduce huge gains for consumers. Controversial because a lot of companies today bet their business on their platform, for example Windows for Microsoft and iOS for Apple; from a business perspective, if you have a closed platform, people get tied to it and will most likely stick to it for years to come. Besides, once you get people on your platform, it is easy to sell them supplementary services that are optimized for that platform; for example, the YouTube experience on an Android device is far better compared to YouTube on a Windows phone.

Now, platform dominance was OK for the PC, tablet, and phone markets, primarily because of their controlled production and legacy proficiency, but IoT introduces a whole new generation of devices, and it seems almost impossible for companies to become successful without providing an open and interoperable interface.

Think of an example: your microwave needs input from your refrigerator so it knows at what temperature to warm the food and for how long (example shamelessly stolen from the ebook published by the CoAP sharp team). Now, the microwave is from Samsung but the refrigerator is from LG; the only way they can talk to each other is through a common platform. Apply this to the hundreds of devices in your house, automobile, and office in the coming years, developed in different countries and by different manufacturers: unless they all share some common form of interface to communicate, the IoT vision and goals will not be achieved. Agreed, there will be companies that create middleware such as the Belkin Wemo Smart Switch, but I would consider these alternatives for legacy devices; anything new should be interoperable by design. (Period.)

I will talk more about work being done in this space in a future post.

Infrastructure: Cloud computing has been a game changer in how businesses work; it gives organizations the impression of unlimited resources for any of their computing needs, and that too at a cheaper cost. Well, unlimited is a stretch right now, since most cloud providers define some restrictions and bars on your usage, but these limits are still higher than what most businesses need. Moreover, if required, organizations can put in a bag of money to have their own silo data centers in order to achieve their scalability targets; this is still cheaper than hosting and maintaining everything in their in-house data centers.

This all works very well for almost 99% of the Web and Mobile scenarios today; however, the IoT space is different from how Web and Mobile applications operate:

Most Web and Mobile solutions require human intervention, so spikes or bursts in server load are intermittent or follow standard schedules such as peak hours. For devices, however, the load is constant: if a thermostat is configured to send telemetry data back to the server every 30 seconds, that is a constant activity the device will continue to perform unless it breaks due to some failure; it does not stop for lunch or take bio breaks, it just keeps transmitting data every 30 seconds. Now consider 10 million of these thermostats deployed across continents, with each thermostat transmitting around 10 KB of data (including headers and payload). We are talking about 100,000,000 KB (95.4 GB) of data being sent for processing and storage every 30 seconds. This is huge, and while the cloud might still be able to accommodate such loads through Big Data and auto-scaling, small and medium businesses would have challenges managing the costs of running such solutions. Note that I have described a simple ingestion scenario above; add the data coming from mobile application commands, device inquiries, etc., and this number just keeps growing.
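To spell out the arithmetic behind that figure:

10,000,000 devices × 10 KB = 100,000,000 KB ≈ 95.4 GB per 30-second interval
95.4 GB × 2,880 intervals per day ≈ 268 TB per day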

There is no silver bullet for this explosion of data and how back ends will manage it while still keeping costs low. As we mature in the cloud space, we should see price drops in cloud computing and storage solutions, which should bring overall costs down; also, optimization of the device payload and messaging can ensure minimal data is sent over the wire (protocols like MQTT and CoAP are being designed for exactly these types of solutions).

So where do we go from here?

IoT is one of the best things that has happened to our space; it has blurred the lines between the hardware and software industries and enabled scenarios that we could previously only see in movies or dream about. However, there is a need for standardization of how companies design and implement solutions: instead of working in silos, we need to work towards a consistent and effective reference solution that applies to most, if not all, scenarios. Of course, there will be tweaks and turns for each sub-domain, but if the basic principles are not violated, we are looking at a new world that will change how we work today … this will truly be a big bang!