Creating a GitHub badge with Azure Functions

Recently, I spent some time with the guys from the Stryker Mutator team: first in a hackathon over a weekend back in December last year, then finalizing our work in February and launching the Mutation Score Badge. Even though I had to overcome my fear of JavaScript, I managed to find some good parts in NodeJS and combine them into an Azure Function that provides the actual mutation badge. Get the why, how and what in this post.

Example badges

Why Azure Functions?

Well, simply put: because it’s cheap, easy and it supports multiple languages. Since Stryker is written in NodeJS, I decided to challenge myself and write the function in NodeJS as well. Our setup is quite simple:

  • We use an Azure Storage Table to store all mutation scores posted from the Dashboard.
  • When someone requests a badge, the function performs a lookup in this table and serves the badge (a sketch of this lookup follows below the list).
  • We have a Function Proxy to be able to use our own domain and a friendly URL.
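
To give you an idea, here’s roughly what that lookup looks like in NodeJS. Mind you: this is a minimal sketch and not the actual Dashboard code. The table name, the owner/repo key layout and the shields.io redirect are assumptions for illustration, and it uses the azure-storage package.

// Minimal sketch of the badge lookup - hypothetical names, not the actual Dashboard code.
var azure = require('azure-storage');
var tableService = azure.createTableService(process.env['AzureWebJobsStorage']);

module.exports = function (context, req) {
    // Assumes a route like badge/{owner}/{repo} and a 'MutationScores' table keyed on owner/repo.
    tableService.retrieveEntity('MutationScores', req.params.owner, req.params.repo, function (error, entity) {
        if (error) {
            context.res = { status: 404, body: 'unknown project' };
        } else {
            // Let shields.io render the actual SVG based on the stored score.
            var score = entity.Score._;
            context.res = {
                status: 302,
                headers: { Location: 'https://img.shields.io/badge/mutation%20score-' + score + '-brightgreen.svg' }
            };
        }
        context.done();
    });
};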

How we developed the function

All code was written in TypeScript and, using this excellent post by Tsuyoshi Ushio, I was able to develop and debug it on my Mac quite easily (well, after a crash course in TypeScript from Nico Jansen).

Is it really all that awesome?

No, it isn’t. We hit quite a few snags while developing, but especially when deploying. As you read before, the functions are dirt-cheap on a consumption plan, but this also means that they’re not ‘Always-On’. While this doesn’t really seem to be an issue for regular C# functions, the NodeJS functions were extremely slow for some reason, and on top of that, I had to use Kudu to do an npm install.

Azure and NodeJS

We soon discovered that uploading a lot of small files to Azure would take a while, so we decided we’d just upload the package.json and run npm install on Azure through Kudu. Notice that on this page it also says that the Node version is locked at 6.5.0. Even adjusting the WEBSITE_NODE_DEFAULT_VERSION environment variable didn’t work.

Adjusting the environment variable

This limited us in our ability to use certain Node functionality that required version 8+ (util.promisify in particular), so we went looking for another solution. This presented itself in the portal. If you look carefully at the screenshot above, you can see a variable called FUNCTIONS_EXTENSION_VERSION. This is set to ~1 by default, but you can simply change that to run on the ‘beta’ version. Mind you: you can only safely change this if you don’t currently have any functions deployed.

Unfortunately, it turned out that the beta runtime doesn’t support proxies yet, so we reverted and included our own promisify.
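
For reference, a stand-in for util.promisify on Node 6 can be as small as this (a sketch; the helper we actually shipped may differ slightly):

// Wraps a callback-style function (err, result) into one that returns a Promise.
function promisify(fn) {
    return function () {
        var args = Array.prototype.slice.call(arguments);
        return new Promise(function (resolve, reject) {
            fn.apply(null, args.concat(function (err, result) {
                return err ? reject(err) : resolve(result);
            }));
        });
    };
}

// Example: wrap a callback-style azure-storage call.
// var retrieveEntity = promisify(tableService.retrieveEntity.bind(tableService));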

Cold Boot

As mentioned before, we initially planned on deploying through Kudu and simply running npm install there, and we did. The thing is, the functions were really slow. I mean… REALLY slow. It took over 20 seconds to start and, as it turns out, we weren’t the only ones. Our solution was to apply FuncPack; by simply running this before our publish:

npm install -g azure-functions-pack
funcpack pack ./

we were able to pack it all into one file. What it does is apply WebPack magic to your function (also rewriting your function.json to reference index.js as the entry point). Running this brought our cold boot down to acceptable levels.

What now?

Well, we’re live :) There’s still some work to do by the Functions team, but with the newly announced Run-From-Zip functionality, I’m positive that it’ll run even smoother than it does now. On top of that, we now also know what it cost us over the month of February: a whopping $0.33 :-) So I guess this still applies:

Microsoft loves Open Source

Or at least they make it pretty easy for Open Source projects to use their services without incurring too much of a cost penalty. I’ll follow up on this post to describe how we wrapped this all up in a neat VSTS pipeline to deploy continuously.


Running Docker 17.10 on Windows Server 1709 without nested virtualization

Although some people have overheard me saying that Containers are Dead, there is actually some use for them when dealing with legacy software and/or on-premise/cloud hybrid applications.
Recently, during a hackathon, I tried to use Docker Swarm’s awesome Routing Mesh but couldn’t get it to work on Windows Server 2016. It turns out that it will only work on Windows Server 1709.

Installing Docker EE Preview

Since it was a hackathon anyway, I figured I might as well try to roll Windows Server 1709 (this is the new semi-annual release channel, by the way) on a VM, and I followed the instructions to install Docker on it. But… it still didn’t work. Turns out I needed different instructions and the preview release. But for some reason it wouldn’t install! It told me I needed to install the Hyper-V feature:

Missing Hyper-V feature

Forcing it to run anyway

Using the following piece of PowerShell, it’s quite easy to get it to work anyway. This does assume you followed the ‘normal’ installation instructions first, though.

# Stop Docker
Stop-Service Docker
# Get Docker
Save-Package -ProviderName DockerProvider -Name Docker -RequiredVersion Preview -Path $Env:TEMP
$dockerZipPath = (Resolve-Path $Env:TEMP\Docker*.zip)
Expand-Archive $dockerZipPath $Env:TEMP\Docker
# Move to correct location
Move-Item $Env:TEMP\Docker\docker\dockerd.exe "$Env:ProgramFiles\Docker" -Force
Move-Item $Env:TEMP\Docker\docker\docker.exe "$Env:ProgramFiles\Docker" -Force
# Disable Linux containers
[Environment]::SetEnvironmentVariable("LCOW_SUPPORTED", $null, "Machine")
Start-Service Docker
# Cleanup
Remove-Item $dockerZipPath -Force
Remove-Item $Env:TEMP\Docker -Recurse -Force

The LCOW_SUPPORTED environment variable makes sure you won’t accidentally try to run a Linux container anyway :-) Oh, did I forget to mention that? Docker 17.10 adds support for Linux containers on Windows Server through LinuxKit.


Continuous Delivery of Azure Functions with TFS

In my previous post (Going Serverless - Azure Functions put to use), I showed you how to create a simple serverless app that did some basic alerting based on events on an Azure Service Bus. Now although this example did show you how to create the function and successfully run it, it didn’t show you how to do it properly: by rubbing some DevOps on it.

The code I used before was simple enough to maintain, but I can imagine you would want to use Visual Studio to develop your functions. Luckily there’s an extension for that. After you’ve installed the extension (make sure to get the updated one and heed the prerequisites), you will be able to create a new Function App quite easily. Although it’s not as complete as the Docker integration (yet), you can use it to deploy your functions using web deploy rather than the source control integration from the portal.

Creating the App

In Visual Studio create a new solution using the wizard by selecting the (C# -> ) ‘Cloud’ -> ‘Azure Functions’ project type. You will see a project structure very similar to what you’re used to from other project types. It will feature a few files:

  • host.json - contains global config for all the functions within your project.
  • appsettings.json - this is pretty self-explanatory, right?
  • ProjectReadme.html - you can safely remove this.

Now as you may have noticed, there’s no actual function yet. You still have to add it by right-clicking the project-node and selecting the ‘Add’ -> ‘New Azure Function’ option.

Add new Azure Function

Pick the ‘ServiceBusTopicTrigger - C#’ type and enter the parameters like before.

Parameters Added

You will notice that after creating the function, you’ll end up with what we had before, including the project.json we had to create manually in the portal. That also means we can just reuse the code from before :-) Take a look at your function.json file and notice that it has a green squiggly underneath the Manage access rights (which we have to use, remember?). I didn’t actually test it with the capital ‘M’ there, but I changed it to ‘manage’ before publishing. Let me know if you do try and succeed!
Unfortunately, Visual Studio doesn’t understand this project type completely just yet, so adding NuGet packages is a manual process. You’ll also notice that IntelliSense is limited: it works just fine with the assemblies you get out of the box, but I have found it to be lacking for external references.

Why use Visual Studio at all?

By now you might be wondering what the advantage of using Visual Studio is over just creating a function in the portal. Well, there are several reasons:

  • You might want to store your sources in source control and you’re using TFS, which the portal’s source control integration does not support.
  • You might want to create more complex solutions, where you share code across functions for instance. You can do this by adding an empty function and loading it in another by using the #load "..\shared\shared.csx" directive at the top of your file (below the #r directives).
  • You can debug your functions. The first time you’ll try this, you will be prompted to download the Azure Functions CLI.

So read on if you want to see how to deploy this from source control.

TFS

I want my release to inject some variables using Guillaume’s Replace Tokens build task, then package and publish the function. Seeing as a function isn’t really something that you build, it’s rather strange that you need a build to feed your release definition, so you might consider a build definition which deploys your function directly to an Azure Web Application. That won’t allow you to use environments, though, and because functions don’t support application slots yet, I like using a staging environment before going to production. Whichever way you go, know that a web deploy is currently the only possible way to deliver your function to the cloud.
I will assume that you have created a web application and/or build definition before, so I won’t go into that and will assume it’s all in place.

My build simply copies all files to a file container on the server, nothing special there. My release definition contains 4 steps per environment:

  • Replace Tokens: replaces all tokens with the correct servicebus topics, the email address, etc.
  • Archive Files: zip the $(System.DefaultWorkingDirectory)/AwesomeNotifier/AwesomeNotifier folder and create a zip file named $(System.DefaultWorkingDirectory)/package.zip.
  • Deploy Azure App Service: select your subscription, the app name and tell it which package to use ($(System.DefaultWorkingDirectory)/package.zip in our case).
  • Azure App Service Manage: select your subscription, select the start method, and select the application.

Finished Release Definition

Now if you set the trigger of your build to Continuous Integration and automatically create a release and deploy your (test) environment after a successful build, you’ll have created a working continuous delivery pipeline to update your Azure Function using Visual Studio and TFS. Good luck!


Going Serverless - Azure Functions put to use

We run an application which is event-driven and utilizes microservices across several trust boundaries. The application originated from our ‘automate everything you do more than twice’ mantra and is now continuously evolving, making our lives as a small DevOps team easier.

The underlying messaging mechanism of our app is an Azure Service Bus (or actually, multiple buses), with several topics and subscriptions on those topics. As all of our events flow through Azure already, it’s easy to store them in blob storage and use them for auditing/analysis/what-have-you at a later point in time. Now that usage is increasing, we felt that it was time to add some alerting, so we made plans for a new service that would react to our ‘ActivityFailed’ event and send an email as soon as one of those events occurred (luckily they don’t occur that often). Sounds easy enough, right?

Dockerize or … ?

As you may know, Docker is a great tool to package your application into a well-known and well-described format so that it runs anywhere the same as it would on your machine. We would develop the service in .NET Core, so it would be easy enough to Dockerize it and host it somewhere just like some of the other services. But last night I thought to myself ‘Wait, we run in Azure, use the Azure Service Bus and only need to react to messages on the bus..’ and I decided I would try to create an Azure Function to react to the event and send me the mail. It literally took me about 15 minutes to develop. I’ll describe the process below.

Going serverless

Azure Functions are a way to process events easily without having to worry about where you run them. It’s basically ‘just code’ and Azure does the rest for you. I had played with Azure Functions before, but didn’t really find a use case for them. I do however feel that they are the next step after containerization. It may not fit all problems, but there are certainly use cases out there which would benefit from a completely serverless architecture.

Step one is going to the Azure Portal and creating a new ‘Function App’. Tip: use a consumption plan if you only want to be billed for your actual usage.

Creating the Function App

Once your Function App is created, navigate to it. The first time you navigate to your Function App, you won’t have any functions yet, so you will be presented with the Quickstart Wizard. We will not use it, so scroll down and click ‘Create your own custom function’.

Create your own custom function

Now from the template gallery, select C# as language and ‘Data Processing’ as scenario. Click the ‘ServiceBusTopicTrigger-CSharp’ template and enter the following values in the corresponding fields:

  • Name: a meaningful name for your function, pick something like ‘EmailNotifier’
  • Topic name: this is the name of the topic on your service bus which you’ll listen to
  • Subscription name: The subscription name on top of the topic specified above
  • Access Rights: select ‘Manage’, and make this match the SAS Token. As of writing this post, there’s a bug preventing you from using the expected ‘Listen’ permissions. That is - you can use it, but your function will cease to trigger after a few hours.
  • Service Bus connection: Service Bus connection strings are saved as an Application Setting for your entire Function App and can be shared over multiple functions. Just click ‘new’ the first time and enter the connection string without the EntityPath in it.

You will now have a basic function. Congratulations!

Making it do something useful

In order to do something meaningful with our app, we’ll need to go through a few steps. First, let’s discover what has been created for us. Click the ‘Files’ button on the top right of the editor:

Exploring your first function

You will see that you have two files:

  • function.json - which describes your in- and outputs
  • run.csx - which is the code for your function

Take some time to familiarize yourself with both files and notice that run.csx isn’t much different from a regular C# program.

It actually has using statements and a function called ‘Run’ that resembles a public static void Main(). Azure Functions provides you with framework libraries such as System and System.Linq, and you can include some additional assemblies using the #r directive. A full list of all available assemblies can be found here. As you can see, using the types and methods within the Microsoft.ServiceBus namespace will be easy. I can just add the following lines of code to the beginning of run.csx:

#r "Microsoft.ServiceBus"
using Microsoft.ServiceBus;

I will also be using Newtonsoft.Json to deserialize my messages and SendGrid to send my emails, so I will need some way to restore the NuGet packages. This turns out to be quite easy: I just have to add a new file and tell my function what my dependencies are. Add a file called project.json to your function like so:

Adding a file

Now add the following code to it:

{
  "frameworks": {
    "net46": {
      "dependencies": {
        "Sendgrid": "8.0.5",
        "Newtonsoft.Json": "9.0.1"
      }
    }
  }
}

This will trigger a NuGet restore before the function executes for the first time. Don’t forget to add the using statements to your code.

We’re almost ready to get the code done, but first we’ll need to add an output to our function. Head to the ‘Integrate’ section of your function and take note of the ‘Message parameter name’; we will use this later on. Now click ‘New Output’ and select ‘SendGrid’ (currently in preview).

Integrate

The easiest way to utilize this output is to enter the from, to, subject and API key here. Mind you: the API key field takes the name of an Application Setting which contains the actual key!

Configure SendGrid

Save the changes and then add the Application Setting corresponding to the API key name (SendGridApiKey in this example) by clicking ‘Function App Settings’ and then ‘Configure app settings’.
Once you’ve added the output, take a look at your function.json and see how it reflects the changes.

Finally, adjust the code in run.csx to reflect your application logic. Notice how I named the ‘Message parameter name’ incomingMessage and added an out Mail message to the method signature:

#r "SendGrid"
#r "Newtonsoft.Json"
#r "Microsoft.ServiceBus"

using SendGrid.Helpers.Mail;
using Newtonsoft.Json;
using System;
using System.Threading.Tasks;
using Microsoft.ServiceBus.Messaging;

public static void Run(BrokeredMessage incomingMessage, TraceWriter log, out Mail message)
{
    message = null; // set output to null, it must be set as it is a mandatory out parameter
    var msgBody = incomingMessage.GetBody<string>();
    var msg = JsonConvert.DeserializeObject<dynamic>(msgBody);

    log.Info($"Event type: {msg.messageType}");

    if (msg.messageType == "activityFailed")
    {
        log.Info($"Found a failed activity: {msg.processId}");

        message = new Mail();
        var messageContent = new Content("text/html", $"Activity Failed: {msg.processId}");
        message.AddContent(messageContent);
    }
}

That’s it. Click Run and your message will be parsed, checked and you will be alerted in case something goes wrong :-)

The result

I’ve already received my first alert - even though I triggered it intentionally, it’s still awesome to see that I now have a low-cost, easy-to-use solution which only runs when it should. Of course there are optimizations to be made, but for now it does the trick. And in the meantime I’ve learned some more about building serverless applications using Azure Functions.


Git in VS2017 with self-signed SSL

When I’m out of the office, I connect to my team’s TFS server through the firewall and get served up with a properly signed (by a widely trusted CA) SSL certificate.
This means that my browser and Git have no issues connecting and cloning. When I’m in the office and connected to our corporate WiFi network, I get a self-signed SSL certificate.

It’s always been a hassle to add these certificates to Git’s local certificate store, but luckily Visual Studio didn’t require you to do the same, seeing as it used LibGit2. With VS2017, Microsoft switched to git.exe (which is good), but they aren’t using the one already on your path but rather a bundled installation which resides in the VS2017 extensions directory. This means that you have to add SSL certificates to yet another Git trusted store.

Let’s fix

Microsoft has done a write-up (https://blogs.msdn.microsoft.com/phkelley/2014/01/20/adding-a-corporate-or-self-signed-certificate-authority-to-git-exes-store/) of how certificates should be added to your git.exe client, and now this must be applied to Visual Studio as well to prevent this from happening:

Error cloning with untrusted certificate

The Git client resides in your VS2017 installation dir, which by default is C:\Program Files (x86)\Microsoft Visual Studio\2017\. If you browse to your edition (e.g. ‘Enterprise’), you will see the familiar Common7\IDE directory; continue to the CommonExtensions\Microsoft\TeamFoundation\Team Explorer\Git\mingw32\ssl\certs folder and you will find the ca-bundle.crt that Visual Studio uses. So the full path (for a default installation of VS2017 Enterprise) would be:

C:\Program Files (x86)\Microsoft Visual Studio\2017\Enterprise\Common7\IDE\CommonExtensions\Microsoft\TeamFoundation\Team Explorer\Git\mingw32\ssl\certs

Add your Base64-encoded certificate to that bundle and the next time you attempt to clone a repo within VS2017, you should be presented with the trusted VS logo ASCII art from TFS:

Visual Studio ASCII art logo Git

Hope this saves you a bit of trouble ;-)


Coretainers

Most people, if not everyone, have seen the .NET Core demos in a Docker container on Linux by now. Some may even have experimented with Windows containers and the full-fledged .NET Framework, as I showed at the SDN Event in September.
The thing is, if you haven’t looked at containers by now, you’re in for a treat. Where it used to be quite hard to figure everything out for yourself, Microsoft announced a new way of integrating today and is taking it to the next level in Visual Studio 2017, especially when you combine the power of containers with the flexibility of .NET Core.

Docker made easy

The combination of .NET Core and containers is very powerful. It gives you a small image, which runs anywhere. You can literally ship your ‘machine’, and today it became even easier.
Starting with Visual Studio 2017, when you create a web application, you can enable Docker support out of the box:

Built-in Docker support

If you have Docker for Windows installed, you can get going. If not, install it first.
This will automatically generate several files for you:

  • Dockerfile (where it all starts)
  • docker-compose.yml (compose your containers, more on this in a future post)
  • docker-compose.ci.build.yml (instructions for a CI build)

This will be all you need to get going. Really, that’s it. Just press ‘F5’ (or click the debug button, which now conveniently says ‘Docker’).
Visual Studio will now start building your application and put it into a container. The best part here is that it will link your source files on disk into the container by using volumes. If you inspect the docker-compose.vs.debug.yml file, you can clearly see the line that says:

- .:/app

What this line does is link the current directory to the /app directory within the container. This means you can edit your code (and views) live, refresh your browser and it’ll update the app that you’re running within the container. The best thing, though, is that you can set breakpoints and they work just as though the application were running on your local dev machine.

Mind you: if your debug experience doesn’t go quite as planned and you run into an error, you might just see something like this in the output window:

ERROR: for awesomewebapp Cannot create container for service awesomewebapp: D: drive is not shared. Please share it in Docker for Windows Settings

The error message is quite verbose nowadays: right-click the Docker icon in your taskbar and go to Settings. On the ‘Shared Drives’ tab, you can share the drive where your application resides.

Publish to Azure

Now where it gets really awesome is that, starting today, you can publish your container to Azure with a few simple clicks. If you right-click your project, you can press ‘Publish’. We all know this action from years of publishing web applications through WebDeploy - and we all know what joy that brought ;-)
We then got the ability to quickly select ‘host in Azure’ when we created the project and now we have this:

Publish Container to Azure

The settings are simple:

  • Provide a unique name for your app
  • Select an Azure Subscription
  • Select a resource group, or create one
  • Select or create an App Service Plan
  • Select or create a Docker registry

I’m assuming you’re familiar with Azure terms such as the resource group and service plan, but the last one deserves a bit of explanation. A Docker registry is like a repository where your container images are stored. You can have both private and public registries - Docker Hub being the most famous one. By default this will create a private registry where you can store the different versions of your container.

Press the ‘create’ button. Visual Studio and Azure will do the rest for you, it’s that simple.

Mind you: make sure that both your app service plan and registry are in the same Azure region. As of writing this post, only West US is supported. You can select the region from the ‘Services’ tab by pressing the gears next to the app service or registry you’re creating.

Result

After pushing the ‘create’ button, my container got published to Azure and I’m able to access it from my browser. And although this is of course an awesome way to publish your application, this is probably not what you want from a DevOps perspective. You want to be able to make a change to the app, commit and push your changes to the repo and have an automated build/release pipeline to put your changes in production… and you can!
That’s what another new option in VS2017 does for you:

Continuous Delivery from VS2017

More on this feature in a later post though. For now, experiment with the containers and new features you have and I’ll show you how to automatically create a CI/CD pipeline from right within Visual Studio in a future post.


New Blog

So as you may have noticed, I have started a new blog. It’s been a long time coming, but I finally found some time this weekend. My colleague Edwin van Wijk tipped me off on using hexo quite a while ago and I seem to have gotten the hang of it. This blog itself is still a work in progress and I’ll be migrating old posts over soon, but in the meantime I figured I’d share some tips.

Free Blog

As you might know, GitHub offers you a free website through GitHub Pages. This means that you can host your static website right from GitHub. Combine this with Hexo magic and you can start your own blog quite easily. What you might not know is that you can also add a custom domain to your GitHub page:

Add a custom domain to GitHub pages

Now although this by itself is pretty cool, it gets better. Although it’s possible to use SSL on GitHub pages, this isn’t currently possible when using a custom domain, or is it?

CloudFlare to the rescue

CloudFlare offers a free tier that not only makes your website faster by using a smart caching mechanism (which you might want to turn off seeing as hexo generates static content), it also offers free SSL for all sites. Simply register for a free account on their site, go to the ‘DNS’ tab and add a CNAME for your domain, like so:

For the DNS-savvy: yes, I used a CNAME at my domain’s root; please refer to this page for details as to why this is still RFC-compliant.

Then navigate to the ‘Crypto’ tab in the menu and set it to the following:

Set the Encryption level to Full

Now for the final step, which ensures all your users are automatically redirected to your SSL page, navigate to the ‘Page Rules’ tab and add the following rules (where you replace the domain with your own domain). If you use a sub-domain such as ‘blog.domain.com’, make sure to use two asterisks (*) in the first rule and replace $1 in the rule with $2 so that it will correctly rewrite:

Add Rewrite Rules

In case you do want to disable caching to prevent issues with your static site, enable a third rule where you match https://yourdomain.ext/* and set the action to ‘Cache Level = ByPass’:

Disable Caching

Sit back and relax

That’s it. You’re done. You have just set up your new secure site using hexo, GitHub Pages and CloudFlare. Of course you can also use this with the Basic tier in Azure, which allows you to use your own custom SSL for just 8-odd euros a month ;-)


Bash for Windows

So last week at //Build/, Microsoft announced native Bash integration on the Windows 10 platform, and today they delivered the first preview. Being a Windows Insider since nearly day 1 – including installing those buggy mobile builds on my daily driver – I still have my machine set to the fast ring and I received build 14316 today. After about 30 minutes of installation (ymmv), I eagerly logged in and typed ‘bash’. Unfortunately, nothing happened.

Then I realized I had to switch some options on. First, you need to enable ‘developer mode’. You can do this by opening the Settings app and selecting the correct option:

Enable Developer Mode

Next, you can enable the optional Windows feature ‘Windows Subsystem for Linux (Beta)’:

Enable Windows Feature

After a reboot, you can press the Windows key and enter ‘bash’. A new prompt will open asking whether you want to install Ubuntu – say what:

Installing Bash... on Windows

And that’s it, you’re root:

Root on Windows!

A few tips:

  • Right-click the title bar, go to ‘Properties’ and enable ‘Quick Edit’ there; this allows you to copy/paste into the window.
  • If you’re like me and you try to install Docker even though you kind of knew it wouldn’t work: it doesn’t work. Luckily there’s an easy integration running a Docker host in Hyper-V just around the corner (and I’m running the beta already), so no sweat there; I just had to try 🙂

Dev Intersection 2015 - Day 6

Even though I’ve been back from Las Vegas for a while now and have gotten over the jet lag, I still didn’t want to withhold my last day at Dev Intersection from you. After all, this was the day I had been looking forward to the most.

Despite my earlier experiments with IoT (dotnetFlix episode 3), in which I made my gas and electricity meter ‘talk’ to the internet, and some other simple projects, I still didn’t really feel like I was doing IoT: I had built a connection to Azure, but I hadn’t used the IoT Hub, applied Stream Analytics or used Power BI. That was exactly what my last workshop was about: IoT, Azure, Power BI, all driven with Node.js!

Particle Photon

Throughout the day, Doug Seven and his team took us along, starting off with a Particle Photon. This $19 mini module (see the photo, where it sits on top of a breadboard) can communicate over WiFi out of the box and has a number of digital and analog ports on board that you can talk to. Plug it into your PC (or another USB power source) and you’re good to go as soon as you have ‘claimed’ your Particle through their cloud service.

During the workshop, it was explained that you can work with your devices in different ways: you can communicate directly with the internet, or you can work through a gateway device. That’s how we did it on this day: through our PC. Armed with a text editor (I chose Visual Studio Code), the Johnny Five node module and the Particle-cli module, I could get going with Node.js. Since there was no ‘hello world’ to output on the module, as it has no display, a blinking LED had to do (let’s ignore the fact that my LED still signaled ‘hello world’ in Morse code ;-)). By the way, do try the particle-cli command ‘nyan’, and here’s a tip in advance: you can also do ‘particle-cli nyan off’ without having to reboot.
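
For the curious, the blinking LED boils down to something like this (a sketch; it assumes the johnny-five and particle-io modules and your Particle access token and device id in environment variables):

var five = require('johnny-five');
var Particle = require('particle-io');

// Connect Johnny Five to the Photon through the Particle cloud.
var board = new five.Board({
    io: new Particle({
        token: process.env.PARTICLE_TOKEN,
        deviceId: process.env.PARTICLE_DEVICE_ID
    })
});

board.on('ready', function () {
    var led = new five.Led('D7'); // the Photon's on-board LED
    led.blink(500);               // toggle every 500 ms
});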

Over the course of the day we got further and further with our Particles and connected them to a SparkFun weather shield, with which we built a simple weather station. By then pushing these metrics to Azure using the Azure IoT node module and pouring them into a Power BI dataset with a Stream Analytics job, you can put a nice dashboard on top of it all in Power BI. Note that in order to select Power BI as the output for your Stream Analytics job, you have to use the old Azure portal!

See my result below, with time on the horizontal axis and temperature on the vertical axis :-)

Temperature

All in all, this was an educational workshop in which you take in a lot of information in a short time and get the chance to work with the guys behind it and ask them questions. So if you get the chance to attend a workshop by Doug and the guys: grab it! Check their GitHub for the code, guides and workshop schedule.


Dev Intersection 2015 - Day 3

Together with Mark Rexwinkel and Edwin van Wijk, I’m attending the Dev Intersection 2015 conference in Las Vegas this week. Through a daily blog, we try to keep you up to date on what we see and hear here. After Edwin and Mark, today it’s my turn.

The third day of the conference was the first day with ‘regular’ sessions. After a good breakfast, the day started with a keynote by Scott Guthrie. He mainly talked about the ‘Journey to the Cloud’ and shared Microsoft’s vision on DevOps with Visual Studio Online, some impressive numbers about Azure (how about 777 trillion storage queries per day?!) and the way Microsoft’s cloud services such as Office 365 fit into the bigger picture of the modern IT industry.

DevOps with VSO

After the keynote, we each went our own way. I attended a session by Julie Lerman (Domain Driven Design for the Database Driven Mind), which is summarized very well in a series of three blog posts, and a session by Steven Murawski (Survive and Thrive in a DevOps World), which boiled down to the point that adopting DevOps is mainly a culture shift in which you have to let go of the ‘fear culture’ and the ‘blame game’. He has quite a few tips on his blog for getting started with DevOps.

In the afternoon, I continued with a session by Troy Hunt (Securing ASP.NET in an Azure Environment). Having attended his workshop on Monday, I was very curious what he had to say about this topic, and I was not disappointed. Although it started out with mostly no-brainers such as the least-privileged-account principle, he eventually got to tips on dynamic data masking in Azure SQL databases, briefly touched on the importance of application settings and connection strings in the Azure portal, and pointed out that you should really always enable two-step verification if you manage an Azure subscription with your account. You can set up the latter through your account management.

All in all, this was another successful day and I’m already looking forward to tomorrow!
