Building Serverless APIs with TypeScript and Azure Function Proxies

TL;DR: In this post, we build a microservice that uses Azure Functions and other awesome Serverless technologies provided by Azure. We will cover the following features:

  • Azure Functions currently supports TypeScript in preview, and we will use the available features to develop a read/write REST API.
  • We leverage the Azure Function Bindings to define Input and Output for our functions.
  • We will look at Azure Function Proxies that provide a way to define consistent routing behavior for our function and API calls.

If you want to jump in, the source is available on GitHub here (https://github.com/niksacdev/sample-api-typescript).

TypeScript support for Azure Functions is in preview as of this writing; please use caution when using it in your production scenarios.

Problem Context

We will be building a Vehicle microservice that provides CRUD operations over vehicle data stored in a CosmosDB document store.

The architecture is fairly straightforward and looks like this:

Let’s get started …

Setting up TypeScript support for Azure Functions

VSCode has amazingly seamless support for Azure Functions and TypeScript, including development, linting, debugging, and deployment extensions, so it was a no-brainer to use it for our development. I use the following extensions:

Additionally, you will need the following to kick-start your environment:

  • azure-functions-core-tools: You need these to set up the Functions runtime for local development. There are two packages here; if you are using a Mac environment like me, you will need the 2.0 preview version.
     npm install -g azure-functions-core-tools@core
    		
  • Node.js (duh!): Note that the preview features currently work with Node 8.x. I have tried 8.9.4, which is the latest LTS (Carbon), so you may have to downgrade using nvm if you are on a 9.x version.

Interestingly, the Node version supported by Functions deployed in Azure is v6.5.0, so while you can play with higher versions locally, you will have to downgrade to 6.5.0 when deploying to Azure, as of today!
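If you use nvm to manage Node versions, flipping between the local development version and the Azure-supported runtime is quick; a small sketch using the versions discussed above:

# use Node 8.9.4 for local Functions development
nvm install 8.9.4 && nvm use 8.9.4

# switch to 6.5.0 to match the Azure-hosted runtime before deploying
nvm install 6.5.0 && nvm use 6.5.0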

You can now use the Function Runtime commands or the Extension UI to create your project and Functions. We will use the Extension UI for our development:

Assuming you have installed the extension and connected to your Azure environment, the first thing we do is create a Function project.

Click on Create New Project and then select the folder that will contain our Function App.

The extension creates a bunch of files required for the FunctionApp to work. One of the key files here is host.json, which lets you specify configuration for the Function App. If you are creating HTTP Triggers, here are some settings I recommend tuning to control throttling and performance:

{
    "functionTimeout": "00:10:00",
    "http": {
        "routePrefix": "api/vehicle",
        "maxOutstandingRequests": 20,
        "maxConcurrentRequests": 10,
        "dynamicThrottlesEnabled": false
    },
    "logger": {
        "categoryFilter": {
            "defaultLevel": "Information",
            "categoryLevels": {
                "Host": "Error",
                "Function": "Error",
                "Host.Aggregator": "Information"
            }
        }
    }
}

The maxOutstandingRequests setting can be used to control latency for the function by setting a threshold on the maximum requests in the waiting and execution queues. maxConcurrentRequests allows control over concurrent HTTP function requests to optimize resource consumption. functionTimeout is useful if you want to override the timeout settings for the App Service or Consumption plan, which defaults to 5 minutes. Note that configuration in host.json applies to all functions in the Function App.

Also note that I have a custom value for the route prefix (by default this is api/{functionName}). By adding the prefix, I am specifying that all HTTP functions in this FunctionApp will use the api/vehicle route. This is a good way to set the bounded context for your microservice, since the route now applies to all functions in the FunctionApp. You can also use it to define versioning schemes when doing canary testing. This setting works in conjunction with the route attribute in a function's function.json: the Functions runtime appends your function route to this default host route, as sketched below.
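To make the combination concrete, here is a sketch of how the two settings compose (the {id} route anticipates the GET function we create later; the hostname is a placeholder):

// host.json (applies to all functions in the FunctionApp)
"routePrefix": "api/vehicle"

// function.json for vehicle-api-get
"route": "{id}"

// resulting endpoint
https://your-app.azurewebsites.net/api/vehicle/{id}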

Note that this behavior can be simplified using Azure Function Proxies; we will modify these routes and explore them further in the Azure Function Proxies section.

For more on the options available in host.json, refer here.

Our project is now created; next, we create our function.

Click Create Function and follow the onscreen instructions to create the function in the same folder as the Function App.

  • Since TypeScript is in preview, you will notice a (Preview) tag when selecting the language. This feature was added in a recent build of the extension; if you don't see TypeScript as a language option, you can enable support for preview languages from the VSCode settings page by specifying the following in your user settings:
"azureFunctions.projectLanguage": "TypeScript"

The above will use TypeScript as the default language and will skip the language selection dialog when creating a Function.

  • Select the HTTP Trigger for our API and then provide a Function Name.
  • Select Authorization as Anonymous.

Never use Anonymous when deploying to Azure

You should now have a function created with some boilerplate TypeScript code:

  • The function.json defines the configuration for your function, including the Triggers and Bindings; index.ts is our TypeScript function handler. Since TypeScript is transpiled to JavaScript, your function needs the output .js files for deployment, not the .ts files. A common practice is to move these output files into a different directory so you don't accidentally check them in. However, if you move them to a different folder and run the function locally, you may get the following error:
vehicle-api: Unable to determine the primary function script. Try renaming your entry point script to 'run' (or 'index' in the case of Node), or alternatively you can specify the name of the entry point script explicitly by adding a 'scriptFile' property to your function metadata.

To allow using a different folder, add a scriptFile attribute to your function.json and provide a relative path to the output folder.

Make sure to add the destination folder to .gitignore to ensure the output .js and .js.map files are not checked in.

"scriptFile": "../vehicle-api-output-debug/index.js"
  • One thing that does not get added by default is a tsconfig.json and tslint.json. While the function will execute without these, I always feel that having them as part of the base setup encourages better coding practices; a minimal sketch follows below. Also, since we are going to use Node packages, we will add a package.json and install the TypeScript definitions for Node:
npm install @types/node --save-dev
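The extension does not generate these for you, so below is a minimal tsconfig.json sketch that has worked for me; the compiler options are my own suggestions, and outDir should point at the same output folder your scriptFile references:

{
    "compilerOptions": {
        "module": "commonjs",
        "target": "es6",
        "outDir": "vehicle-api-output-debug",
        "sourceMap": true
    }
}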
  • We now have our function and FunctionApp created, but there is one last step before proceeding: setting up the debug environment. At this time, VSCode does not provide out-of-the-box support for debugging Azure Functions written in TypeScript, but you can enable it fairly easily. I came across this blog from Tsuyoshi Ushio that describes exactly how to do it.
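The essence of that setup is attaching VSCode's Node debugger to the local Functions host and mapping the emitted .js files back to their TypeScript sources via source maps. A sketch of the launch.json entry from my environment (the port and outFiles path are assumptions specific to my setup; the linked blog is the authoritative walkthrough):

{
    "type": "node",
    "request": "attach",
    "name": "Attach to Functions host",
    "port": 5858,
    "sourceMaps": true,
    "outFiles": ["${workspaceFolder}/vehicle-api-output-debug/**/*.js"]
}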

Now that we have all things running, let’s focus on what our functions are going to do.

Building our Vehicle API

Developing the API is no different from your usual TypeScript development. From a Functions perspective, we will split each operation into its own function. There is a huge debate about whether you should have a monolithic function API or one function per operation (GET, POST, PUT, DELETE). Both approaches work, but I feel that within a FunctionApp you should segregate the service as much as possible, in line with the Single Responsibility Principle. Also, in some cases you may achieve better scale by implementing a pattern like CQRS, where your read and write operations go to separate functions. On the flip side, too many small functions can become a management overhead, so you need to find the right balance. Azure Function Proxies provide a way to surface multiple endpoints with consistent routing behavior; we will leverage this for our API in the discussion below.

In a nutshell, a FunctionApp is a Bounded Context for the Microservice, each Function is an operation exposed by that Microservice.

For our Vehicle API we will create two functions:

  • vehicle-api-get
  • vehicle-api-post

You can create Put and Delete operations similarly.

So, how do we make sure that each API is called only for the designated REST operation? You can define this in the function.json using the methods array.

For example, vehicle-api-get is an HTTP GET operation and is configured as below (note the anonymous auth level, which, again, you should not use in production):

{
      "authLevel": "anonymous", --DONT DO THIS
      "type": "httpTrigger",
      "direction": "in",
      "name": "req",
      "route":"",
      "methods": [
        "get"
      ]
},

Adding CosmosDB support to our Vehicle API

The following TypeScript code allows us to access a CosmosDB store and retrieve data based on a Vehicle Id. This represents the HTTP GET operation for our Vehicle API.

import { Collection } from "documentdb-typescript";

export async function run(context: any, req: any) {
    context.log("Entering GET operation for the Vehicle API.");
    // get the vehicle id from the url (route parameters arrive as strings)
    const id: number = Number(req.params.id);

    // get cosmos db details and open the database and collection
    const url = process.env.COSMOS_DB_HOSTURL;
    const key = process.env.COSMOS_DB_KEY;
    const coll = await new Collection(process.env.COSMOS_DB_COLLECTION_NAME, process.env.COSMOS_DB_NAME, url, key).openOrCreateDatabaseAsync();

    if (id !== 0) {
        // query cosmos for documents matching the id
        const allDocs = await coll.queryDocuments(
            {
                query: "select * from vehicle v where v.id = @id",
                parameters: [{ name: "@id", value: id }]
            },
            { enableCrossPartitionQuery: true, maxItemCount: 10 }).toArray();

        // build the response
        context.res = {
            body: allDocs
        };
    } else {
        context.res = {
            status: 400,
            body: `No records found for the id: ${id}`
        };
    }

    // no context.done() needed: async functions complete when the promise resolves
}

Using Bindings with CosmosDB

While the previous section used code to perform the GET operation, we can also use the CosmosDB binding, which performs operations on our CosmosDB collection whenever the HTTP Trigger fires. Below is how the HTTP POST is configured to leverage the CosmosDB binding (again, the anonymous auth level is for the demo only):

{
  "disabled": false,
  "scriptFile": "../vehicle-api-output-debug/vehicle-api-post/index.js",
  "bindings": [
    {
      "authLevel": "anonymous", --DONT DO THIS
      "type": "httpTrigger",
      "direction": "in",
      "name": "req",
      "route": "data",
      "methods": [
        "post"
      ]
    },
    {
      "type": "documentDB",
      "name": "$return",
      "databaseName": "vehiclelog",
      "collectionName": "vehicle",
      "createIfNotExists": false,
      "connection": "COSMOS_DB_CONNECTIONSTRING",
      "direction": "out"
    }
  ]
}

Then in your code, you can simply return the incoming JSON request, and Azure Functions takes care of pushing the values into CosmosDB.

export function run(context: any, req: any): void {
    context.log("HTTP trigger for POST operation.");
    let err;
    let json;
    if (req.body !== undefined) {
        // the returned value is written to CosmosDB through the $return output binding
        json = JSON.stringify(req.body);
    } else {
        // passing an error to context.done marks the invocation as failed
        err = {
            status: 400,
            body: "Please pass the Vehicle data in the request body"
        };
    }
    context.done(err, json);
}

One-click deployment to Azure using VSCode Extensions

Deployment to Azure from the VSCode extension is straightforward. The interface allows you to create a FunctionApp in Azure and then provides a step by step workflow to deploy your functions into the FunctionApp.

If all goes well, you should see output such as below.

Using Subscription "".
Using resource group "".
Using storage account "".
Creating new Function App "sample-vehicle-api-azfunc"...
>>>>>> Created new Function App "sample-vehicle-api-azfunc": https://<your-url>.azurewebsites.net <<<<<<

00:27:52 sample-vehicle-api-azfunc: Creating zip package...
00:27:59 sample-vehicle-api-azfunc: Starting deployment...
00:28:06 sample-vehicle-api-azfunc: Fetching changes.
00:28:14 sample-vehicle-api-azfunc: Running deployment command...
00:28:20 sample-vehicle-api-azfunc: Running deployment command...
00:28:26 sample-vehicle-api-azfunc: Running deployment command...
00:28:31 sample-vehicle-api-azfunc: Running deployment command...
00:28:37 sample-vehicle-api-azfunc: Running deployment command...
00:28:43 sample-vehicle-api-azfunc: Running deployment command...
00:28:49 sample-vehicle-api-azfunc: Running deployment command...
00:28:55 sample-vehicle-api-azfunc: Running deployment command...
00:29:00 sample-vehicle-api-azfunc: Running deployment command...
00:29:06 sample-vehicle-api-azfunc: Running deployment command...
00:29:12 sample-vehicle-api-azfunc: Running deployment command...
00:29:17 sample-vehicle-api-azfunc: Running deployment command...
00:29:24 sample-vehicle-api-azfunc: Syncing 1 function triggers with payload size 144 bytes successful.
>>>>>> Deployment to "sample-vehicle-api-azfunc" completed. <<<<<<

HTTP Trigger Urls:
  vehicle-api-get: https://sample-vehicle-api-azfunc.azurewebsites.net/api/vehicle-api-get

Some observations:

  • The extension bundles everything in the app folder, including files like local.settings.json and the output .js directories; I could not find a way to filter these using the extension.
  • Another problem I have faced is that currently neither the extension nor the CLI provides a way to upload Application Settings as environment variables that code can access once deployed to Azure, so these have to be added manually. For this sample, you will need to add the following key-value pairs under FunctionApp -> Application Settings in the Azure Portal so they become available as environment variables (a possible workaround via the generic Azure CLI is sketched after this list):
"COSMOS_DB_HOSTURL": "https://your cosmos-url:443/",
"COSMOS_DB_KEY": "your-key",
"COSMOS_DB_NAME":"your-db-name",
"COSMOS_DB_COLLECTION_NAME":"your-collection-name"
"COSMOS_DB_CONNECTIONSTRING":"your-connection-string"
  • If you are only running locally, you can use local.settings.json. There is also a way through the CLI to publish the local settings values into Azure using the --publish-local-settings flag, but hey, there is a reason these are local values!
  • The Node version supported by Azure Functions is v6.5.0, so while you can play with higher versions locally, you will have to downgrade to 6.5.0 as of today.
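If you would rather script the settings than click through the portal, the generic App Service settings command in the Azure CLI can push the same key-value pairs; a sketch with placeholder names (verify the command shape against your CLI version):

az functionapp config appsettings set \
    --name sample-vehicle-api-azfunc \
    --resource-group your-resource-group \
    --settings COSMOS_DB_HOSTURL="https://your-cosmos-url:443/" COSMOS_DB_KEY="your-key" \
        COSMOS_DB_NAME="your-db-name" COSMOS_DB_COLLECTION_NAME="your-collection-name" \
        COSMOS_DB_CONNECTIONSTRING="your-connection-string"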

If you have a better way to deploy to Azure, do let me know :).

Configuring Azure Function Proxies for our API

At this point, we have a working API available in Azure. We have (loosely) leveraged the CQRS approach to separate the read API from the write API; for the client, however, working against multiple endpoints can quickly become cumbersome. We need a way to package our API behind a facade that is consistent and manageable, and this is where Azure Function Proxies come in.

Azure Function Proxies is a toolkit available as part of the Azure Functions stack that provides the following features:

  • Builds consistent routing behavior for the underlying functions in the FunctionApp, and can even include external endpoints.
  • Provides a mechanism to aggregate underlying APIs into a single API facade; in a way, it is a lightweight gateway service in front of your functions.
  • Provides a mock-up proxy to test your endpoint without having the integration points in place. This is useful when testing request routing with dummy data.
  • One of the key aspects added to Proxies is support for OpenAPI, which enables more out-of-the-box connectors to other services.
  • Out-of-the-box AppInsights support, where a proxy can publish events to AppInsights to generate endpoint metrics not just for functions but also for legacy APIs.

If you are familiar with the Application Request Routing (ARR) stack in IIS, this is somewhat similar. In fact, if you look at the Headers and Cookies for the request processed by the Proxy, you should see some familiar attributes 😉

......
Server →Microsoft-IIS/10.0
X-Powered-By →ASP.NET
......
Cookies: ARRAffinity

Let’s use Function Proxies for our API.

In the previous sections, I showed how we could use the routePrefix in host.json in conjunction with route in function.json. While that approach works, we have to add configuration for each function, which can become a maintenance overhead. Additionally, giving an external API the same route path is not possible with the earlier approach. Proxies help overcome this barrier.

Using proxies, we can develop logical endpoints while keeping the configuration centralized. We will use Azure Function Proxies to surface our two functions as a consistent API Endpoint, so essentially to the client, it will look like a single API interface.

Before we continue, we will remove the route attributes we added to our functions, keep only the variable references, and change the routePrefix in host.json to an empty string.
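In host.json, that is simply:

"http": {
    "routePrefix": ""
}

With that change, our published function endpoints look something like this: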

Http Functions:
        vehicle-api-get: https://sample-vehicle-api-azfunc.azurewebsites.net/{id}
        vehicle-api-post: https://sample-vehicle-api-azfunc.azurewebsites.net/vehicle-api-post/

This is obviously not intuitive; with multiple functions, it can become a nightmare for the client to consume our service. We create two proxies that define the route path and match criteria for our functions. You can easily create proxies from the Azure Portal, but you can also author your proxies.json directly. The below shows how to define proxies and associate them with our functions.

  {
    "$schema": "http://json.schemastore.org/proxies",
    "proxies": {
        "VehicleAPI-Get": {
            "matchCondition": {
                "route": "api/vehicle/{id}",
                "methods": [
                    "GET"
                ]
            },
            "backendUri": "https://sample-vehicle-api-azfunc.azurewebsites.net/{id}"
        },
        "VehicleAPI-POST": {
            "matchCondition": {
                "route": "/api/vehicle",
                "methods": [
                    "POST"
                ]
            },
            "backendUri": "https://sample-vehicle-api-azfunc.azurewebsites.net/vehicle-api-post"
        }
    }
}

As of today, there is no way to upload a proxies.json in Azure, but you can easily copy-paste it into the Portal's Advanced Editor.

We have two proxies defined here, the first for our GET operation and the other for POST. In both cases, we have been able to define a consistent routing mechanism for selected REST verbs. The key attributes here are route and backendUri, which let us map a public route to an underlying endpoint. Note that the backendUri can be anything that needs to be called under the same API facade, so we can combine multiple services behind a common gateway route using this approach.

Can you do this with other services? I would have to say yes. You can implement similar routing functionality with Application Gateway, NGINX, and Azure API Management. You can also use an MVC framework like Express and write a single function that does all this routing. So, evaluate the options and choose what works best for your scenario.

Testing our Vehicle API

We now have our Vehicle API endpoints exposed through Azure Function Proxies. We can test it using any HTTP Client. I use Postman for the requests, but you can use any of your favorite clients.

GET Operation

The exposed endpoint from the Proxy is:

https://sample-vehicle-api-azfunc.azurewebsites.net/api/vehicle/{id}

Our GET request fetches the correct results from CosmosDB.
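For example, with curl (the id below is a hypothetical document id from your sample data):

curl https://sample-vehicle-api-azfunc.azurewebsites.net/api/vehicle/1234

The response is the matching vehicle document(s) as a JSON array.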

POST Operation

The exposed endpoint from the Proxy is:

https://sample-vehicle-api-azfunc.azurewebsites.net/api/vehicle/

Our POST request pushes a new record into CosmosDB:
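Again with curl, posting a hypothetical vehicle document (the payload shape is illustrative; the GET query only assumes an id field):

curl -X POST https://sample-vehicle-api-azfunc.azurewebsites.net/api/vehicle/ \
    -H "Content-Type: application/json" \
    -d '{"id": "1234", "make": "Contoso", "model": "Roadster"}'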

There we have it. Our Vehicle API that leverages Azure Function Proxies and TypeScript is now up and running!

Do have a look at the source code here (https://github.com/niksacdev/sample-api-typescript) and please provide your feedback.

Happy Coding :).


Architecture Blueprint Canvas for IoT

Architecting IoT solutions can become complex very quickly, considering the various components involved and the multiple interactions between devices, services, and external sources. In this post, I would like to share a set of Visio templates whose goal is to assist collaborative brainstorming during IoT design workshops and architectural discussions with customers and teams.

The templates are based on the building blocks of the Azure IoT reference architecture but extend them with other attributes such as integration and extensibility, and they can be used for any IoT platform.

The templates are available here: https://github.com/niksacdev/abcforiot

I would love to hear your feedback on their relevance and any improvements we can make to these. ☺

The template provides the following features:

  • A systematic approach for eliciting business and technical requirements, identifying priorities and recording decisions and next steps for the following components in an IoT Solution: Devices, Messages, Field Gateway, Protocol Gateway, Device Registry, Device Management, Hot Path Analytics, Warm Path Analytics, Cold Path Analytics, Reporting, Client Applications, Integration and Extensibility.
  • It allows evaluating nonfunctional requirements including Security, Scalability, Supportability, and Governance.
  • IoT project teams can expedite their planning and requirement elicitation phase, identify dependencies and design constraints, and solidify design and architecture elements of their IoT Solution.

There are two flavors of the templates available today:

  • Basic: This is the bare-bones template for architecting a solution without documenting any decision points or brainstorming discussion. It is targeted at an individual architect or a team that has already formulated an architecture and wants to document its design across the different IoT components.
  • Collaborative: This is an advanced version of the basic template. It enables team collaboration and brainstorming by providing sections for recording decisions and defining action items. A team can leverage the template to brainstorm ideas for each component and then document them as design decisions in the template itself. For more information on how to use the template, refer here.

 

A future goal is to develop some pre-configured scenarios (e.g. Connected Car, Manufacturing etc.) using these templates that can then be used to jump-start IoT Architecture discussions.

Cross Platform IoT: Developing a .NET Core based simulator for Azure IoT Hub using VS for Mac!


This post is part of a Cross-Platform IoT series, to see other posts in the series refer here.

The source for the solution is available on GitHub here.

March 7th was a significant milestone for the Microsoft Visual Studio team. With Visual Studio completing 20 years and the launch of Visual Studio 2017, the team demonstrated that Visual Studio continues to lead the path for .NET development on Windows. While Visual Studio 2017 is not cross-platform and does not work on other OSes like macOS (yet ;)), Microsoft has for some time also had a preview version of another cousin of Visual Studio: Visual Studio for Mac!

At first impression, VS for Mac seems like a cosmetic redo of Xamarin Studio (post-acquisition of Xamarin by Microsoft), but as new updates keep coming, it is bringing all the goodies of Visual Studio to the native macOS platform. In this post, I will show how to use Visual Studio for Mac to build a .NET Core solution that acts as a simple device emulator, sending messages to and receiving commands from Azure IoT Hub.

Wait, don't we already have VS Code? Yeah, that's correct: VS Code is there and provides some awesome cross-platform development support. I am a big fan of its simplicity, speed, and extension ecosystem. #LoveVSCode. However, VS for Mac includes some additional features, such as project templates, build-and-run support from the IDE, a native Xamarin app development experience, and now Visual Studio Test Framework support, which make it a step closer to the Visual Studio IDE available on Windows. Which is better? Well, time will tell; for the purpose of this post, however, we will use VS for Mac running on macOS Sierra (10.12.3).

Getting VS for Mac

VS for Mac is in preview right now, so you should use it for development scenarios and with caution. At the time of writing, the VS team had released the Preview 5 build, and that is what we used for this post.

You can download the .dmg from the Visual Studio website here to install the application or use homebrew for the installation:

brew cask install visual-studio

The setup process is fairly straightforward. Once installed, launch the Visual Studio app, and you should be presented with a welcome screen similar to below:

VS for Mac (in its current build) does not install the .NET Core SDK on your machine by default, so you will need to install it manually. To install the .NET Core SDK on macOS we will again use Homebrew. The VS team provides step-by-step instructions on how to do this here.

Note that the installation asks for openssl, a dependency for .NET Core; it mainly uses the libcrypto and libssl libraries. In some cases, you may see a warning like this: Warning: openssl is keg-only and another version is linked to opt. To continue the installation, use the following command:

brew install --force openssl

Also, ignore the warning Warning: openssl-1.0.2k already installed, it's just not linked., or use brew unlink openssl before executing the above command.

Writing our simulator app for IoT Hub

The source for this project is available at the GitHub repo here.

Now that we have VS for Mac and .NET Core SDK installed and setup, we are going to build a .NET Core Simulator app which will be performing the following operations:

  1. Send a message (ingress) using a .NET Core simulator to Azure IoT Hub using AMQP as the protocol.
  2. Receive messages from Azure IoT Hub (egress) using a .NET Core consumer.
  3. Send a command to a device and receive an acknowledgment from the device (coming soon).

The process to create the project is very similar to the experience of Visual Studio on Windows.

  1. Click New Project from the welcome screen to open the New Project dialog. We will select a .NET Core App project of type Console Application. At the time of writing, C# and F# are the supported languages for .NET Core projects in VS for Mac.
  2. In the next screen, we provide a project and solution name and configure our project for git.
  3. VS for Mac will now set up the project and solution. The first thing it does is restore any required packages. Since VS for Mac has support for NuGet, it uses NuGet package restore internally to get all the required dependencies installed. Once all dependencies are restored, you should see a screen similar to below. We have our .NET Core project created in VS for Mac!
  4. Before we start working on the simulator code, let's ensure we have source control enabled for our project. In VS for Mac, the Version Control menu allows you to configure your SVN or Git repo. Note that this functionality is derived from Xamarin Studio; you can use the guidance here to set up version control for your project. We will be using a git repo and GitHub for our project.

Now that we have our project and version control sorted out, let’s start working on the code for our IoT Simulator. As noted earlier, the simulator will perform the following actions:

  • Send a message to IoT Hub using device credentials
  • Act as a consumer of the message from IoT Hub
  • Receive commands and send response acknowledgments

The solution has multiple projects and is described in the GitHub repo here. The repo also discusses how to use the sample. Instead of talking through all the projects, I will walk through the logic for sending a device message using .NET Core assemblies; this should give you an idea of how to use VS for Mac and .NET Core with IoT Hub.

Sending messages over AMQP

The easiest option for working with IoT Hub is the Device and Service SDKs. Unfortunately, at the time of writing this post, we do not have .NET Core versions of these SDKs. If you try to add the NuGet packages, you will get incompatibility errors like below:

Package Microsoft.Azure.Devices.Client 1.2.5 is not compatible with netcoreapp1.1 (.NETCoreApp,Version=v1.1). Package Microsoft.Azure.Devices.Client 1.2.5 supports:
net45 (.NETFramework,Version=v4.5)
portable-monoandroid10+monotouch10+net45+uap10+win8+wp8+wpa81+xamarinios10 (.NETPortable,Version=v0.0,Profile=net45+wp8+wpa81+win8+MonoAndroid10+MonoTouch10+Xamarin.iOS10+UAP10)
  uap10.0 (UAP,Version=v10.0)

So what are our options here:

  1. If you want to use the HTTPS protocol, you can build an HTTP Client and run it through the REST API model.
  2. If you want to use AMQP as the protocol, you can use the AMQP Lite library available on NuGet. AMQP Lite has support for .NET Core today and provides the underlying functionality to send and receive packets to IoT Hub using AMQP as the protocol. We will be using this for our sample.
  3. If you want to use MQTT, there are a few NuGet packages like M2Mqtt that you can try. I have not tried them yet.

When the IoT Hub team releases the .NET Core versions of these SDKs, they should become the recommended way to process messages with IoT Hub.

Adding the Nuget package

Adding NuGet packages in VS for Mac is straightforward. Simply right click on your project -> Add -> Add Nuget package. It opens the NuGet dialog box that allows searching for all available packages. In our case, we search for the AMQP Lite package and add it to our project.

Note that currently all packages are shown, including full .NET Framework packages. VS for Mac, however, validates whether the package can be added to a .NET Core project; if the package has binaries or dependencies that rely on the full .NET Framework, you will see an error in the package console:

Checking compatibility for Microsoft.Azure.Devices.Client 1.2.5 with .NETCoreApp,Version=v1.1.
Package Microsoft.Azure.Devices.Client 1.2.5 is not compatible with netcoreapp1.1 (.NETCoreApp,Version=v1.1). Package Microsoft.Azure.Devices.Client 1.2.5 supports:
net45 (.NETFramework,Version=v4.5)
portable-monoandroid10+monotouch10+net45+uap10+win8+wp8+wpa81+xamarinios10 (.NETPortable,Version=v0.0,Profile=net45+wp8+wpa81+win8+MonoAndroid10+MonoTouch10+Xamarin.iOS10+UAP10)
uap10.0 (UAP,Version=v10.0)

Setting up the environment variables

Once we have the AMQP Lite assemblies, we need our IoT Hub details. You can use the Azure CLI tools for IoT mentioned in my previous post to fetch this information. In the sample, I leverage a JSON configuration handler to dump the configuration into a JSON file and read from it at runtime.

{
   "settings":{
      "connectionStrings":[
         {
            "name":"youriothubname",
            "connectionString":"youriothub.azure-devices.net",
            "sasKey":"yourdevicekey",
            "sasKeyName":"device"
         }
      ],
      "deviceId":"D1234"
   }
}

Sending the message to IoT Hub

The final piece of the puzzle is the code that opens a connection and sends a message to IoT Hub. We leverage the AMQP Lite library to perform these actions. In the sample, I follow a Strategy pattern for executing methods on the underlying library; it gives us the flexibility to swap in a different implementation (for example, when the IoT Hub SDKs become .NET Core compliant) without a lot of code changes.

// Create a connection using device context
Connection connection = await Connection.Factory.CreateAsync(new Address(iothubHostName, deviceContext.Port));
Session session = new Session(connection);

string audience = Fx.Format("{0}/devices/{1}", iothubHostName, deviceId);
string resourceUri = Fx.Format("{0}/devices/{1}", iothubHostName, deviceId);

// Generate the SAS token and put it on the connection (CBS)
string sasToken = TokenGenerator.GetSharedAccessSignature(null, deviceContext.DeviceKey, resourceUri, new TimeSpan(1, 0, 0));
bool cbs = TokenGenerator.PutCbsToken(connection, iothubHostName, sasToken, audience);
if (cbs)
{
    // create a session and send a telemetry message
    session = new Session(connection);
    byte[] messageAsBytes = default(byte[]);
    if (typeof(T) == typeof(byte[]))
    {
        messageAsBytes = message as byte[];
    }
    else
    {
        // convert object to byte[]
    }

    // send the payload, then close the session and connection
    await SendEventAsync(deviceId, messageAsBytes, session);
    await session.CloseAsync();
    await connection.CloseAsync();
}

The above code first opens a connection using the AMQP Lite Connection type. It then establishes an AMQP session using the Session type and generates a SAS token based on the device credentials. The sample also uses a protobuf serializer to encode the payload as a byte[] and finally sends it to IoT Hub using SendEventAsync.

If you run the samples.iot.simulator.sender .NET Core console app in the sample, it calls ExecuteOperationAsync to pass the payload and the required environment variables to the AMQP Lite library. The result of a successful message send is displayed in a console window.

Consuming messages from IoT Hub

The sample also demonstrates other scenarios, such as consuming messages; however, you can also use an out-of-the-box Azure service like Azure Stream Analytics to process these messages. Here is an example of using Stream Analytics for event processing.

Phew! This was a long post; it demonstrates some of the capabilities, as well as the challenges, of developing .NET Core solutions with VS for Mac. I think VS for Mac is a great addition to the IDE toolkit, especially for developers who are used to the Visual Studio Windows environment. There are a few rough edges here and there, but remember, this is just a preview! … Happy Coding 🙂

The source for the solution is available on GitHub here.

This post is part of a Cross-Platform IoT series, to see other posts in the series refer here.

Cross Platform IoT: Devices operations using IoT Hub Explorer


This post is part of a Cross-Platform IoT series, to see other posts in the series refer here.

IoT Hub Explorer is a Node.js-based, cross-platform tool that allows management of devices for an existing IoT Hub.

This is one of my favorite tools for Azure IoT Hub development and troubleshooting. It is a command line tool that runs on Windows, Mac, or Linux and provides an easy way to execute operations on Azure IoT Hub, such as creating a device or sending a message to an existing device. It comes in handy when you are doing dev testing or demonstrating the capabilities of the Azure IoT service. You can theoretically use it for troubleshooting a production environment as well, but I would recommend having appropriate telemetry and an operational management pipeline for those scenarios. One of the features I like most is the ability to monitor messages on devices in the IoT Hub, basically getting event data and operational statistics about devices in the hub. We will use this tool to create a device and then watch IoT Hub for messages. IoT Hub Explorer is available on GitHub here.

Note that the Azure IoT CLI (covered in the last post) also has support for managing devices and may soon overlap with the functionality of IoT Hub Explorer. When that happens, the Azure CLI should become the preferred tool for all IoT Hub operations.

Let’s use IoT Hub Explorer to create and monitor a device. Before we do that, we need to install it. Being a node package, you can install it through npm.

npm install -g iothub-explorer

Since IoT Hub Explorer is a separate utility, we will need to first login using our IoT Hub connection string. Open a bash terminal and enter the following:

iothub-explorer login "HostName=yourhub.azure-devices.net;SharedAccessKeyName=iothubowner;SharedAccessKey=yourkey"

If you do not have the connection string handy, you can use the az iot hub show-connection-string -g yourresourcegroup command described in the previous post to fetch your IoT Hub connection string. The login command opens a temporary session with IoT Hub using the provided token policy; the default TTL for this session is 1 hour.

Session started, expires on Wed Mar 15 2017 19:59:05 GMT-0500 (CDT)
Session file: /Users/niksac/Library/Application Support/iothub-explorer/config

Note that the above command uses the connection string for iothubowner policy which gives complete control over your IoT Hub.

Creating a new device

To create a new device using IoT Hub explorer, type the following command:

iothub-explorer create -a

The -a flag auto-generates a Device Id and credentials when creating the device. You can also specify a custom Device Id or a device JSON file to customize the device creation, as shown below. There are other options for specifying credentials, such as symmetric keys and X.509 certificates; we will cover those in a separate IoT Hub security post. For now, we use the default credentials generated by IoT Hub.
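For example, to register a device with an id of your choosing instead of an auto-generated one (the id below is hypothetical):

iothub-explorer create D1234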

If all goes well, you should see an output similar to below:

deviceId:                   yourdeviceId
generationId:               63624558311459675
connectionState:            Disconnected
status:                     enabled
statusReason:               null
connectionStateUpdatedTime: 0001-01-01T00:00:00
statusUpdatedTime:          0001-01-01T00:00:00
lastActivityTime:           0001-01-01T00:00:00
cloudToDeviceMessageCount:  0
authentication: 
  symmetricKey: 
    primaryKey:   symmetrickey1=
    secondaryKey: symmetrickey2=
  x509Thumprint: 
    primaryThumbprint:   null
    secondaryThumbprint: null
connectionString:           HostName=youriothub.azure-devices.net;DeviceId=yourdeviceId;SharedAccessKey=symmetrickey=

A few important things here. The obvious one is the connectionString, which provides a unique device connection string for talking to the device. The privileges for the device connection string are based on the device policy defined in IoT Hub, which grants DeviceConnect permissions only. This policy-based control keeps our endpoints secure and limited to the specific device. You can learn more about IoT Hub device security here. Also, notice that the device status is enabled but the connection state is Disconnected. This means the device has been successfully registered in IoT Hub, but there are no active connections to it.

Sending and Receiving messages

Let's initiate a connection by sending a device ingestion request. There are multiple ways in IoT Hub Explorer to send and receive messages. One useful option is the simulate-device command, which lets the tool behave like a device ingestion and command simulator. It can be used to send custom telemetry messages and commands on behalf of the device. This functionality comes in handy when you want to perform some dev integration tests on your device and don't want to write too much code. You can create messages and view the send and receive flow simultaneously. The command also exposes properties such as send-interval, send-count, and receive-count, which allow for configuring the simulation. Note that this is not a penetration or load testing tool; it simply enables you to perform some initial tests before writing more involved test cases. Let's send a series of messages on our created device (from Part 1) and then receive a command message.

Send a message

The following command sends 5 messages every 2 seconds to a specified Device Id.

niksac$ iothub-explorer simulate-device --send "Hello from IoT Hub Explorer" --device-connection-string "HostName=youriothubname.azure-devices.net;DeviceId=D1234;SharedAccessKey==" --send-count 5 --send-interval 2000  

The output for the messages will look like this:

Message #0 sent successfully
Message #1 sent successfully
Message #2 sent successfully
Message #3 sent successfully
Message #4 sent successfully
Device simulation finished.

Monitoring Messages

Another useful feature of IoT Hub Explorer is the ability to monitor the events of a single device or of the IoT Hub as a whole. This comes in very handy when you want to troubleshoot your IoT Hub instance, for example to check whether messages are flowing correctly. You can use the monitor-events command to log all events for a device or for the hub to the terminal; you can also use the monitor-ops command to monitor the operations endpoint for IoT Hub.

To monitor events, enter the following:

iothub-explorer monitor-events --login "HostName=youriothub.azure-devices.net;SharedAccessKeyName=iothubowner;SharedAccessKey=="

The above creates a listener that watches for any activity happening on the entire IoT Hub. As mentioned earlier, you can specify a device connection string to monitor a specific device instead.

Now, when you send a message or command to any device in your IoT Hub, you should start seeing the output in the terminal. For example, if you open a monitor-events listener in one terminal window and then execute the simulate-device --send command again, you should see the following output:

Monitoring events from all devices...
==== From: D1234 ====
Hello from IoT Hub Explorer
====================
==== From: D1234 ====
Hello from IoT Hub Explorer
====================
==== From: D1234 ====
Hello from IoT Hub Explorer
====================
==== From: D1234 ====
Hello from IoT Hub Explorer
====================
==== From: D1234 ====
Hello from IoT Hub Explorer
====================

There are multiple other useful commands in IoT Hub Explorer, such as importing and exporting devices, regenerating SAS tokens, and device management commands. You should play around with the IoT Hub Explorer commands and options; they can save you from writing code for common operations.

In the next post, we talk about another useful utility for troubleshooting and diagnostics.

This post is part of a Cross-Platform IoT series, to see other posts in the series refer here.

Cross Platform IoT: Using Azure CLI with Azure IoT Hub


This post is part of a Cross-Platform IoT series, to see other posts in the series refer here.

A brilliant addition to the cross-platform toolkit from Microsoft is the Azure CLI. The Azure CLI (Command Line Interface) provides an intuitive and rich command line experience for performing various operations on Azure resources. The support for this tool is great, and in some cases I feel it is better than the PowerShell version out there :). The Azure CLI has two versions today:

  • The Node.js version of the Azure CLI: it has been out for a while and supports all resources in Azure, including resource groups, VMs, storage accounts, etc.
  • A new version based on Python, released some time back primarily for better support on Unix/Linux platforms: Azure CLI 2.0. We will be using Azure CLI 2.0 for our samples.

So what's a CLI got to do with IoT Hub? Well, the team also introduced an IoT module that allows management of IoT Hub operations from the command line. We will show how to enable the IoT module for the Azure CLI and then use the commands to create and manage our IoT Hub service.

There seems to be feature parity between the two versions, so you can use either of them for running the IoT modules. Here is an introduction to the goals behind Azure CLI 2.0.

Let's start by installing the IoT component for the Azure CLI; we will then use the CLI to create and perform operations on an IoT Hub.

If you have not already installed Azure CLI 2.0, you can install it from here. The link provides instructions for installing the CLI on any OS; we will be using it on macOS. The Python scripts ensure dependencies are installed and also set the PATH variable so the az command works from a terminal. The script attempts to add the configuration to ~/.bashrc, which I have observed does not work well with the macOS Terminal. To ensure that you can use the az alias for all Azure CLI commands, add the following environment variable to your ~/.bash_profile configuration instead. From any bash terminal, enter nano ~/.bash_profile to edit the configuration, then type the following and save your settings:

export PATH=~/bin:$PATH

I don’t see a .bash_profile?
In case you don’t have a .bash_profile (this is common with macOS), you can simply create one using the following:

cd ~/
touch .bash_profile

To make sure Azure CLI is working, open a terminal and type

az --version

This should show a list of all components installed as part of the CLI:

acs (2.0.0)
appservice (0.1.1b5)
batch (0.1.1b4)
cloud (2.0.0)
component (2.0.0)
configure (2.0.0)
container (0.1.1b4)
core (2.0.0)
documentdb (0.1.1b2)
feedback (2.0.0)
keyvault (0.1.1b5)
network (2.0.0)
nspkg (2.0.0)
profile (2.0.0)
redis (0.1.1b3)
resource (2.0.0)
role (2.0.0)
sql (0.1.1b5)
storage (2.0.1)
vm (2.0.0)
Python (Darwin) 2.7.10 (default, Jul 30 2016, 19:40:32) 
[GCC 4.2.1 Compatible Apple LLVM 8.0.0 (clang-800.0.34)]

Note that the IoT component is not installed by default; to install it, run the following commands:

az login
az account set --subscription  
az component update --add iot
az provider register --namespace Microsoft.Devices

The above commands log you in and set your account to the desired Azure subscription. They then install the IoT component and register the IoT provider. If you now run az --version, you should see the IoT component among the list.

acs (2.0.0)
appservice (0.1.1b5)
......
iot (0.1.1b3)
......

The IoT component is further divided into two groups device and hub:

  • The hub group is for commands specific to the IoT Hub itself. These include create, update, delete, list, show-connection-string, show-quota-metrics, etc.
  • The device group allows for commands that are relevant for device operations. These include create, delete, list, update, show, show-connection-string, import, and export.

You can use az iot device -h or az iot hub -h to see all commands and subgroups.

Now, let's create our IoT Hub. Use the following commands to create a new resource group in the selected subscription and then create a new IoT Hub in the newly created resource group:

az group create --name yourresourcegroupname --location westus
az iot hub create --name youriothubname --location yourlocation --sku S1 --unit 1 --resource-group yourresourcegroupname 

Most of the values are self-explanatory. I specify the SKU as S1, but if you want to use the free tier you can specify F1 (the allowed values are {F1,S1,S2,S3}). Also, unit here refers to the number of IoT Hub units you want to create with the IoT Hub; 1 unit is more than enough for development purposes. The optional location attribute is the region where your IoT Hub will be created; you can omit it to use the default location of the resource group. The list of available regions (at the time of writing) is: {westus,northeurope,eastasia,eastus,westeurope,southeastasia,japaneast,japanwest,australiaeast,australiasoutheast,westus2,westcentralus}.

If everything goes OK, you should see the response similar to below:

{
  "etag": "AAAAAAC2NAc=",
  "id": "/subscriptions/yournamespace/providers/Microsoft.Devices/IotHubs/youriothubname",
  "location": "westus",
  "name": "youriothubname",
  "properties": {
    ....
  },
  ...
  "type": "Microsoft.Devices/IotHubs"
}

Voila! We just created an IoT Hub from the command line on a Mac! We will use this IoT Hub to add devices and perform other operations in our next post.

Some other useful commands in Azure IoT CLI:

To see the configuration again, type az iot hub list -g yourresourcegroupname and you should see a list of all IoT Hubs created in your resource group. If you omit the resource group, you see all IoT Hubs in the subscription. To display the details of a specific IoT Hub, use az iot hub show -g yourresourcegroupname --name yourhubname.

IoT Hub by default creates a set of policies that can be used to access IoT Hub. To view the list of policies, enter az iot hub policy list --hub-name youriothubname. Policies are very important when dealing with IoT Hub operations; you should always follow the principle of least privilege and choose your policy based on the operation being conducted. For example, the iothubowner policy grants admin rights on the IoT Hub and should not be used, say, just to connect a device.
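For example, here is a sketch of creating a policy that can only connect as a device (check az iot hub policy -h on your build for the exact syntax):

az iot hub policy create --hub-name youriothubname --name device-connect --permissions DeviceConnect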

To get just the connection strings for the created IoT Hubs enter az iot hub show-connection-string -g yourresourcegroupname.

Note that by default the above command returns the connection string for the iothubowner policy, which gives complete control over your IoT Hub. It is recommended that you define granular policies in IoT Hub to ensure least-privilege access. To do this, use the --policy-name option and provide a policy name.

You can also use the az iot device commands to create a device; we will use IoT Hub Explorer to demonstrate those features instead. IoT Hub Explorer provides some additional commands, including device management and command and control. Personally, I expect all of the IoT Hub Explorer functionality to come into future versions of the IoT component for the Azure CLI.

Looking under the hood of Azure CLI

One of the beautiful things about the Azure CLI is that it uses the REST API under the hood to create resources and entities in Azure. If you are curious how the underlying operations are performed, or simply want to troubleshoot an error, you can use the --debug option with any command. The debug output provides a trace of the calls being made and any underlying exceptions that occurred during processing. For example, for a command like:

az iot hub show-connection-string -g yourresourcegroupname --debug

You should see something like below:

Command arguments ['iot', 'hub', 'show-connection-string', '-g', 'yourrgname']
Current active cloud 'AzureCloud'
{'active_directory': 'https://login.microsoftonline.com',
 'active_directory_graph_resource_id': 'https://graph.windows.net/',
 'active_directory_resource_id': 'https://management.core.windows.net/',
 'management': 'https://management.core.windows.net/',
 .......
'storage_endpoint': 'core.windows.net'}
Registered application event handler 'CommandTableParams.Loaded' at 
Registered application event handler 'CommandTable.Loaded' at 
Successfully loaded command table from module 'iot'.
..........
g': 'gzip, deflate'
msrest.http_logger :     'Accept': 'application/json'
msrest.http_logger :     'User-Agent': 'python/2.7.10 (Darwin-16.4.0-x86_64-i386-64bit) requests/2.13.0 msrest/0.4.6 msrest_azure/0.4.7 iothubclient/0.2.1 Azure-SDK-For-Python AZURECLI/2.0.0'
msrest.http_logger :     'Authorization': 'Bearer eyJ0eXAiOiJKV1QiLCJhbGciOiJSUzI1NiIsIng1dCI6ImEzUU4wQlpTN3M0bk4tQmRyamJGMFlfTGRNTSIsImtpZCI6ImEzUU4wQlpTN3M0bk4tQmRyamJGMFlfTGRNTSJ9.eyJhdWQiOiJodHRwczovL21hbmFnZW1lbnQuY29yZS53aW5kb3dzLm5ldC8iLCJpc3MiOiJodHRwcz0cy53aW5kb3dzLm5ldC83MmY5ODhiZi04NmYxLTQxYWYtOTFhYi0yZDdjZDAxMWRiNDcvIiwiaWF0IjoxNDg5NjEzODc5LCJuYmYiOjE0ODk2MTM4NzksImV4cCI6MTQ4OTYxNzc3OSwiX2NsYWltX25hbWVzIjp7Imdyb3VwcyI6InNyYzEifSwiX2NsYWltX3NvdXJjZXMiOnsic3JjMSI6eyJlbmRwb2ludCI6Imh0dHBzOi8vZ3JhcGgud2luZG93cy5uZXQvNzJmOTg4YmYtODZmMS00MWFmLTkxYWItMmQ3Y2QwMTFkYjQ3L3VzZXJzL2UyZTNkNDdlLTRiYzAtNDg1Yy04OTE1LWYyMDVkYzRlODY5YS9nZXRNZW1iZXJPYmplY3RzIn19LCJhY3IiOiIxIiwiYWlvIjoiQVFBQkFBRUFBQURSTllSUTNkaFJTcm0tNEstYWRwQ0pwWnVGcnREQ05QTTBrNjB1NVl0RElLTjZ5cklUVHJna0Fod3JuQXJBd2NMQXFqczlNQ0RhazRtM0E2cjN3T09YZ1FxaWZlUVFIRC1TQ0JXOVVJdjNabzFnMERXMElvYWRLRWgtQ0R3X01XY2dBQSIsImFtciI6WyJwd2QiLCJtZmEiXSwiYXBwaWQiOiIwNGIwNzc5NS04ZGRiLTQ2MWEtYmJlZS0wMmY5ZTFiZjdiNDYiLCJhcHBpZGFjciI6IjAiLCJlX2V4cCI6MTA4MDAsImZhbWlseV9uYW1lIjoiU2FjaGRldmEiLCJnaXZlbl9uYW1lIjoiTmlrIiwiaW5fY29ycCI6InRydWUiLCJpcGFkZHIiOiI2NS4zNi44OC4xNjYiLCJuYW1lIjoiTmlrIFNhY2hkZXZhIiwib2lkIjoiZTJlM2Q0N2UtNGJjMC00ODVjLTg5MTUtZjIwNWRjNGU4NjlhIiwib25wcmVtX3NpZCI6IlMtMS01LTIxLTEyNDUyNTA5NS03MDgyNTk2MzctMTU0MzExOTAyMS0xMzUwMDI4IiwicGxhdGYiOiI1IiwicHVpZCI6IjEwMDM3RkZFODAxQjZCQTIiLCJzY3AiOiJ1c2VyX2ltcGVyc29uYXRpb24iLCJzdWIiOiI2Z3d4WUFKem4zM3h6WVhfVWM4cHpNcGc4dzk1NVlHYTJ2VnlpazMtVDZ3IiwidGlkIjoiNzJmOTg4YmYtODZmMS00MWFmLTkxYWItMmQ3Y2QwMTFkYjQ3IiwidW5pcXVlX25hbWUiOiJuaWtzYWNAbWljcm9zb2Z0LmNvbSIsInVwbiI6Im5pa3NhY0BtaWNyb3NvZnQuY29tIiwidmVyIjoiMS4wIn0.jEAMzSd4bV0x_hu8cNnQ7fActL6uIm97U7pkwCz79eaSfEnBdqF8hXEJlwQh9GYL0A3r8olNdjr1ugiVH6Y0RFutn7iD8E-etkVI9btE1aseZ3vvZqYeKPhA1VytBsTpb4XO2ZI094VynTeALoxe7bbuEzl5YaeqtbC5EM0PMhPB04o7K1ZF49nGKvA385MHJU3G-_JT3zV-wdQWDj5QKfkEJ0a9AsQ9fM7bxyTdm_m5tQ4a-kp61r92e1SzcpXgCD7TXRHxrZ2wa65rtD8tRmHt6LOi7a4Yx2wPFUpFoeQAcN7p7RKW6t_Cn8eyyvWrrUXximBcTB4rtQTgXCfVUw'

As you can see, the Azure CLI makes the REST calls on our behalf using the current tokens and performs the required operations. Once the responses are received, it formats them in the desired format and displays them on the terminal.

Here is the output without the --debug option:

[
  {
    "connectionString": "HostName=youriothub.azure-devices.net;SharedAccessKeyName=iothubowner;SharedAccessKey=yourkey=,
    "name": "youriothub"
  }
]

Another useful option provided by the Azure CLI is the output option, which allows formatting the output with different parsers. For example, using the same command as above, we can change the output from JSON (the default) to a table.

az iot hub show-connection-string -g yourresourcegroupname -o table

The output should now be presented in a tabular format:

ConnectionString                                                                                Name
----------------------------------------------------------------------------------------------  ----------
HostName=youriothub.azure-devices.net;SharedAccessKeyName=iothubowner;SharedAccessKey=yourkey=  youriothub

Finally, you can create powerful scripts with the Azure CLI using features like --query, and combine them with conventional Unix pipe commands to send the output from one command to another.

For example, the command below uses a query to get all IoT Hubs created in the westus region under my subscription and then formats the output as TSV:

az iot hub list --query "[?contains(location,'westus')].{Name:name}" -o tsv

The query language (JMESPath) is very powerful for filtering results; you can find more details about the query language here.
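You can also pipe the TSV output into a shell loop; for instance, the sketch below prints the location of every hub in the subscription:

az iot hub list --query "[].name" -o tsv | while read hub; do
    az iot hub show --name "$hub" --query location -o tsv
done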

That’s all for now. In the next post, we will use Azure IoT Hub Explorer to create a device and then perform send and receive operations upon it.

This post is part of a Cross-Platform IoT series, to see other posts in the series refer here.

Cross Platform IoT: Developing solutions using Azure IoT on a Mac!


Not so long ago, a search on the words "Mac" and "Microsoft" would reveal results like rivalry, locked-in platforms, and competitors. The story has changed dramatically in the last few years, especially from a Microsoft perspective. Microsoft is embracing open source and cross-platform development, opening the gates for developers and end-users to benefit from the Microsoft platform not just on Windows but across any OS or platform.

In this blog series, I will cover some of the Microsoft Azure IoT cross-platform tools available for developers to build IoT solutions on several operating systems; most of them are open-source licensed! My focus will specifically be on development and troubleshooting tools for Azure IoT Hub on a non-Windows platform. We will be covering the following tools:

  1. Azure CLI for IoT
  2. IoT Hub Explorer
  3. IoT Hub Diagnostic Utility
  4. Developing .NET Core apps on Visual Studio for MacOS (Preview)

When should you use these tools?

Since most of these tools are cross-platform, they can and should become part of your development and troubleshooting toolkit when developing Azure IoT applications. A few use cases for these utilities include:

  • Import/export devices as part of a workflow or script.
  • These may be included as part of your CI/CD builds to verify that the environment is set up and working correctly.
  • Troubleshooting Dev/Test environments.
  • Monitoring devices for events during solution development. This is pretty handy to test if a message event is coming through or a command is being passed without writing any code!
  • And of course, building and deploying cross-platform applications on any OS platform.

Let’s get started

There are multiple pieces to the puzzle; let’s list them:

  • In Part 1, we will install the IoT Module for Azure CLI 2.0 and use the CLI to create a new Resource group and then create an IoT Hub instance in our Azure Subscription.
  • In Part 2, we will use IoT Hub Explorer to perform some common device operations on our created IoT Hub instance and create a device identity that we can play with.
  • In Part 3, we discuss the IoT Diagnostic tool for troubleshooting IoT Hub.
  • Finally, in Part 4 we will develop a .NET Core application and connect to our IoT Hub to publish messages and send commands. All of that using Visual Studio for Mac.

Note that in this series we will not connect to an actual device but only show an emulation. The purpose is to demonstrate how we can do rapid development using the provided cross-platform tools. If you are interested in connecting your device to Azure IoT, you can always look at the great samples available here.

Using Win10 Core and Azure IoT Services with Raspberry Pi2

I recently hosted an IoT booth at an internal Microsoft conference and showcased a demo using the Win 10 Core RTM build on a Raspberry Pi2 (also referred to as Pi from here on) and connecting it to Microsoft Azure IoT Services.


I wanted to show a very simple scenario to highlight how easy it is to develop and deploy Windows 10 Universal apps for IoT devices and connect them to Azure IoT Services. The scenario in my case was generating metrics on visitors coming to the booth. The application collects details about the organization each visitor belongs to and uses Azure IoT Services to process the data and generate reports showing metrics and the distribution of visitors. The snapshot below shows how the report looks:


There were many queries around the code sample and how to set up the device for connecting to Azure IoT Services. In this blog series, I will talk about setting up your device with Windows 10 Core (RTM build) and then communicating with Azure IoT Services to process the data.

This will be a multi-part blog series and will include the following parts:

Part 1: Setting up things

Part 2: Developing a Windows Universal app

Part 3: Processing event streams using Azure Stream Analytics

Part 4: Creating a Dashboard using Microsoft Power BI

Part 5: Using SSL with Raspberry Pi

The Windows 10 Universal app sample code is available here (for reference purposes only):
https://github.com/niksacmsft/IoT.Samples.Universal.EventIngest


Using Win10 Core and Azure IoT Services with Raspberry Pi2 – Part 1

This is Part 1 of the multi-part blog series Using Win10 Core and Azure IoT Services with Raspberry Pi2. For an introduction to the scenario, please refer here.

Setting up Things

In this blog post, we will set up our device to work with Win 10 RTM build.

The first thing is to make sure you have all the modules and devices required for the sample. The following are the prerequisites for the sample to work:

Prerequisites

  1. Hardware:
    1. Raspberry Pi 2 Model B
    2. A Monitor with HDMI support: I used this for the demo.

      Note that not all screens are compatible with Win10 Core. For the hardware specifications for Win 10 Core, please refer here.

    3. A USB connected Mouse
    4. Original Raspberry Pi WiFi adapter: (Only when using Wi-Fi)

      NOTE: No other WiFi dongle will work as of today; I describe a workaround below in case you don’t have the dongle.

    5. A Power Supply: I used the Gomadic AA battery-operated power stick and it worked well with the RPi2 and Win10 Core. You don’t need this if you have a power outlet available or if you just want to power the Pi from your laptop’s USB ports. The Pi typically requires around 1.2 A to run most applications.
    6. A Bluetooth Adapter (Optional): Only if you want to connect a Bluetooth device such as a mouse (NOTE: as of today only this adapter works with Win10 Core).
    7. A USB keyboard: In case you want to enter the network credentials when connecting to Wi-Fi.
  2. Software
    1. Windows 10 RTM image (10240). Check out the release notes for known issues here.
    2. A Windows 10 PC with Visual Studio 2015 RTM: For development purposes only. The code samples for this blog are in C#, but you can also use other languages.

Alright! Now that we have all the devices, let’s get started with our Win 10 setup. In this section, we will install the Win10 Core image onto our Pi, connect it to a network, and ensure that we are able to remotely deploy and run a Universal app on the Pi:

Installing Windows 10 Core on RPi2

The Windows team has a great page here on getting started with the installation of Windows 10 Core on a Pi. Use this to get your device and PC ready for development.

Boot up your Pi

Hook the RPi2 to the power supply. If all works OK, you should see a screen similar to the one below. Note that if you do not have the Pi connected to a network, the Network and IP Address slots will be empty. We discuss connectivity in the next section.


Connecting Pi to a Network

To enable connectivity with Azure IoT Services, the Pi must be connected to the internet. There are multiple ways you can achieve this:

  1. Option 1: Using the Ethernet port: The simplest way to connect your Pi to a network is to hook it to your router directly using an Ethernet cable. Note that if your Pi was already running, you may have to restart it to get an IP address for the device.
  2. Option 2: Original Raspberry Pi WiFi dongle: The current Win 10 Core image has been tested on the Original Raspberry Pi dongle and ONLY works with this Wi-Fi adapter.

    If you have any other adapter that came with a development kit like the Canakit, it will not work as of today.

    Connecting to the dongle is fairly straightforward:

  • Click the settings icon on the top right hand of the home screen.
  • On the Device Settings screen, select Network & Wi-Fi and choose the network you want to connect to. Enter your credentials and you are done!


  3. Option 3: Using a Wi-Fi to Ethernet adapter: I hope the team will provide support for other WiFi dongles soon, since the Pi dongle is only available from a couple of locations within the US. However, if you still want to use WiFi as an option for your device, you can use a WiFi to Ethernet adapter such as the NetGear Universal N300. I used this for my demo and it worked very well, with very few disconnects or reconnections.

NOTE: Make sure both your Win 10 development PC and the Pi are connected to the same network or have network sharing enabled. We will do a remote deployment from Visual Studio, so both devices should be able to communicate with each other.

Testing remote connectivity

You can test the connectivity of the device using PowerShell or using the Windows IoT Core Watcher desktop app that is installed along with the Win 10 IoT Core setup.

However, I prefer using the Win 10 Core device web page, which provides a clean web interface and allows you to remotely view and update the device configuration. The device web page is courtesy of a hosted web server that comes along with the Win10 Core installation.

The URL for the web page is http://<YourDeviceIPAddress>:8080/default.htm

Navigating to this page should take you to a screen such as the one below. As you can see, here you can select apps to run, manage the device, view performance metrics, set up network connectivity, etc.


We are now ready to develop our Windows Universal App and deploy it to the Pi. In Part 2 of this series, we will develop a Windows Universal App that connects to Event Hub over an AMQP connection for sending user selections.

Using Win10 Core and Azure IoT Services with Raspberry Pi2 – Part 2

This is Part 2 of the multi-part blog series Using Win10 Core and Azure IoT Services with Raspberry Pi2. For an introduction to the scenario, please refer here.

Developing an Azure IoT Service Connected Windows Universal app

In Part 1 of this blog series, we covered setting up the Pi and related modules for building and deploying our Windows Universal app. In this blog post, we will develop our app and then remotely deploy it to the Pi device we configured.

If you do not have Visual Studio configured for Windows 10 IoT Core, please refer here.

There are two modes in which you can develop a Windows 10 IoT app: Headed and Headless.

The difference is that the former supports a UI while the latter is mostly used for background services. For this scenario, we will create a headed app.

Note that since Windows supports the concept of Universal apps, you do not need specialized templates for creating Headed or Headless apps for IoT devices. The beauty of Universal apps is that the same code can run on any supported Windows device. The only specialized template available for Windows IoT Core in Visual Studio is for Background apps, as sketched below.
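
For contrast, here is a minimal sketch of what the Background app template generates for a headless app; the class and namespace names are placeholders:

<code>
using Windows.ApplicationModel.Background;

namespace IoT.Samples.Background // placeholder namespace
{
    public sealed class StartupTask : IBackgroundTask
    {
        private BackgroundTaskDeferral _deferral;

        public void Run(IBackgroundTaskInstance taskInstance)
        {
            // Take a deferral so the task keeps running after Run() returns.
            _deferral = taskInstance.GetDeferral();

            // Headless work goes here, e.g., reading sensors and sending telemetry.
        }
    }
}
</code>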

Instead of giving a step-by-step walkthrough of how the Universal App is created, I provide the key steps that you can use to configure your solution. The entire code for the sample is available on my GitHub repo here: https://github.com/niksacmsft/IoT.Samples.Universal.EventIngest

  • Create a new Project in Visual Studio using Windows -> Universal -> Blank App (Universal) template.
    • Add the IoT Extension to the project using Reference -> Extensions


  • Add another project to the solution using the Windows -> Universal -> Class Library (Universal Windows) template. We will use this project to add helper and common classes.
    • Add the IoT Extension to this project using Reference -> Extensions -> Windows IoT Extensions for the UWP.
    • Add the following Nuget packages to the project
      • AMQPNetLite: We use this for connecting to EventHub over AMQP
      • Newtonsoft.Json: We use this for serialization and de-serialization of our JSON documents.
  • Build your solution to make sure everything is working OK. When building the package, NuGet will attempt to restore the UWP packages and any other configured packages.
  • Create an Event Hub: For this blog post, I will assume that you know how to create an Event Hub. In case you are new to Event Hubs, I have a previous blog entry that talks about them; you can refer to it here.
  • Connecting to Event Hub programmatically

    For my sample, I wanted to create a simple library that allows me to connect to Event Hubs over either the HTTPS or the AMQP protocol. I extended the bits from the http://connectthedots.io sample and created a re-usable asynchronous wrapper that I call ConnectionManager, which allows connection to Event Hubs over either of the protocols. I plan to add more protocols to this wrapper in the future.

    • For HTTPS, it uses a REST call to the Event Hub API, and
    • For the AMQP connection, it uses the AMQPNetLite client library.

    Below is an excerpt of the ConnectionManager.cs class:

<code>
public async Task<bool> SendEvent(Event eventStream)
{
    // Note: capital MM/HH; lowercase "mm" would format minutes, not months.
    eventStream.Timecreated = DateTime.UtcNow.ToString("MM/dd/yyyy HH:mm:ss");
    return await SendMessage(eventStream.ToJson());
}

private async Task<bool> SendMessage(string message)
{
    var protocol = Protocol;
    switch (protocol)
    {
        case Protocol.Amqp:
            return await SendMessageAmqp(message);
        case Protocol.Https:
            return await SendMessageHttps(message);
        default:
            return false;
    }
}

private async Task<bool> SendMessageHttps(string message)
{
    if (!this._eventHubConnectionInitialized) return false;
    try
    {
        var content = new HttpStringContent(message, Windows.Storage.Streams.UnicodeEncoding.Utf8, "application/json");
        var postResult = await _httpClient.PostAsync(_uri, content);

        if (postResult.IsSuccessStatusCode)
        {
            Debug.WriteLine("Message Sent: {0}", content);
        }
        else
        {
            Debug.WriteLine("Failed sending message: {0}", postResult.ReasonPhrase);
        }

        // Report the actual outcome rather than always returning true.
        return postResult.IsSuccessStatusCode;
    }
    catch (Exception e)
    {
        Debug.WriteLine("Exception when sending message: " + e.Message);
        return false;
    }
}

private async Task<bool> SendMessageAmqp(string message)
{
    // TODO: figure out if AMQP.Net Lite supports async method calls
    // Construct the message body from the serialized JSON payload.
    var messageValue = Encoding.UTF8.GetBytes(message);

    // AMQP supports three body types; we use Data here.
    var formattedMessage = new Message { BodySection = new Data { Binary = messageValue } };
    _sender.Send(formattedMessage, null, null); // send the message
    // _connection.Close(); // close connection
    return true;
}
</code>
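
    The excerpt above references an Event type (with a ToJson helper) and a _sender field that are initialized elsewhere in the repo. For orientation, here is a minimal sketch of what they could look like using AMQPNetLite and Newtonsoft.Json; the address format follows the connectthedots.io pattern, and all names here are illustrative rather than the repo's actual implementation:

<code>
using Amqp;
using Amqp.Framing;
using Newtonsoft.Json;

// Illustrative payload type; the repo defines its own Event class.
public class Event
{
    public string Id { get; set; }
    public string Timecreated { get; set; }
    public string Value { get; set; }

    public string ToJson()
    {
        return JsonConvert.SerializeObject(this);
    }
}

// Illustrative ConnectionManager fields and AMQP setup (TLS on port 5671).
private Connection _connection;
private Session _session;
private SenderLink _sender;

private void InitializeAmqpSender(string serviceBusNamespace, string eventHubName, string keyName, string keyValue)
{
    // Event Hubs accepts AMQPS connections authenticated with a SAS key.
    var address = new Address(serviceBusNamespace + ".servicebus.windows.net", 5671, keyName, keyValue);
    _connection = new Connection(address);
    _session = new Session(_connection);

    // The link target node is the Event Hub name.
    _sender = new SenderLink(_session, "send-link", eventHubName);
}
</code>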
  • Configuring Event Hub connection strings

    Instead of hard coding my event hub and related connection strings, I use a JSON file for persisting this information. The JSON is packaged as part of the IoT.Samples.Universal.EventIngest project and deployed to the device when the application is installed.

    • Create a new folder under Assets folder called Settings
    • Create a new settings.json file under this folder.

    The JSON file itself is fairly simple and looks like this:

<code>
{
    "settings": {
        "servicebusnamespace": "your service bus namespace",
        "eventhubname": "your event hub name",
        "keyname": "the SAS key for sending messages to event hub",
        "keyvalue": "the SAS key value for sending messages to event hub"
    }
}
</code>
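
    At runtime, the file can then be read from the installed package. Here is a minimal sketch using the UWP storage APIs and Newtonsoft.Json (assuming the file's Build Action is set to Content; the helper name is illustrative):

<code>
using System;
using System.Threading.Tasks;
using Windows.Storage;
using Newtonsoft.Json.Linq;

// Illustrative loader; the repo may structure this differently.
private async Task<JObject> LoadSettingsAsync()
{
    // ms-appx points at files packaged with the app.
    var uri = new Uri("ms-appx:///Assets/Settings/settings.json");
    StorageFile file = await StorageFile.GetFileFromApplicationUriAsync(uri);
    string json = await FileIO.ReadTextAsync(file);
    return JObject.Parse(json);
}

// Usage (inside an async method):
// var settings = await LoadSettingsAsync();
// var serviceBusNamespace = (string)settings["settings"]["servicebusnamespace"];
</code>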
  • Configuring the UI for the Universal App

    The UI for my Universal App is very basic; the idea was to demonstrate how to connect a device to an Azure backend and generate reports from it. I did not put a lot of effort into the cosmetic and UI aspects.

    I used a FlipView control to show the different departments and leveraged its Tapped event to invoke my ConnectionManager, sending the selected option to Event Hub.

    Additionally, a TextBlock control is used to display success messages and exceptions.

<code>
private async void flipView_Tapped(object sender, TappedRoutedEventArgs e)
{
    try
    {
        // Try to cast the event source as a ContentPresenter.
        var content = e.OriginalSource as ContentPresenter;
        if (content == null) return;

        // Send data to Event Hub. Note: capital MM/HH; lowercase "mm" is minutes.
        var eventData = new Event
        {
            Id = "iotboothdevice",
            Timecreated = DateTime.UtcNow.ToString("MM/dd/yyyy HH:mm"),
            Value = content.Content.ToString()
        };
        var result = await _connectionManager.SendEvent(eventData); // send message over event hub
        if (!result) return;

        var message = string.Format("Last successful message sent at: {0}", DateTime.UtcNow);
        textBlock.Text = message;
        InitializeFlipView();
    }
    catch (Exception ex)
    {
        textBlock.Text = ex.Message;
    }
}
</code>

Deploy and Test the app on the Pi

Visual Studio 2015 provides seamless integration for deploying, testing, and debugging Universal apps on supported devices. To deploy the app on your Pi device, there are a few changes we need to make to our project configuration:

  • Right click and select Properties for the IoT.Samples.Universal.EventIngest project.
  • In the Debug tab, select Remote Machine as the Target device and enter the IP address of your Pi device; do not check the Authentication check box.


  • Select ARM as the architecture (this is required for the Pi since it is ARM based); Remote Machine will now be the only option available for debugging.
  • Now, if you run the solution, Visual Studio will attempt to connect to your Pi device and initiate the deployment of the solution. During the deployment, it will package the necessary files and dependencies and install any required .NET framework version. Finally, it will set the Universal App as the running app on the device. You should see a screen similar to the one below:


  • Our app is now running. If you tap any of the FlipView control items, a call will be sent to Event Hub to register the selection, and the UI will be updated with the last successful message time.


In Part 3 of this blog series, we will leverage Stream Analytics to make sense of the data coming in from the device.