Continuous Delivery of Azure Functions with TFS

In my previous post (Going Serverless - Azure Functions put to use), I showed you how to create a simple serverless app that did some basic alerting based on events on an Azure Service Bus. Now although this example did show you how to create the function and successfully run it, it didn’t show you how to do it properly: by rubbing some DevOps on it.

The code I used before was simple enough to maintain, but I can imagine you would want to use Visual Studio to develop your functions. Luckily there’s an extension for that. After you’ve installed the extension (make sure to get the updated one and heed the prerequisites), you will be able to create a new Function App quite easily. Although the tooling isn’t as complete as the Docker integration (yet), you can use it to deploy your functions using web deploy rather than the source control integration from the portal.

Creating the App

In Visual Studio create a new solution using the wizard by selecting the (C# -> ) ‘Cloud’ -> ‘Azure Functions’ project type. You will see a project structure very similar to what you’re used to from other project types. It will feature a few files:

  • host.json - contains global config for all the functions within your project.
  • appsettings.json - this is pretty self-explanatory, right?
  • ProjectReadme.html - you can safely remove this.

Now as you may have noticed, there’s no actual function yet. You still have to add it by right-clicking the project node and selecting the ‘Add’ -> ‘New Azure Function’ option.

Add new Azure Function

Pick the ‘ServiceBusTopicTrigger - C#’ type and enter the parameters like before.

Parameters Added

You will notice that after creating the function, you end up with what we had before, including the project.json we had to create manually in the portal. That also means we can simply reuse the code from before :-) Take a look at your function.json file and notice the green squiggly under the ‘Manage’ access rights value (which we have to use, remember?). I didn’t actually test it with the capital ‘M’ there; I changed it to ‘manage’ before publishing. Let me know if you do try it and succeed!
Unfortunately, Visual Studio doesn’t understand this project type completely just yet, so adding NuGet packages is a manual process. You’ll also notice that IntelliSense is limited: it works just fine for the assemblies you get out of the box, but for external references I have found it to be lacking.

Why use Visual Studio at all?

By now you might be wondering what the advantage of using Visual Studio is over just creating a function in the portal. Well, there are several reasons:

  • You might want to store your sources in source control and you’re using TFS - which is not supported in the portal.
  • You might want to create more complex solutions, for instance where you share code across functions. You can do this by adding an empty function and loading its code in another using the #load "..\shared\shared.csx" directive at the top of your file (below the #r directives); see the sketch below.
  • You can debug your functions. The first time you try this, you will be prompted to download the Azure Functions CLI.
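
To give a rough idea of that second scenario (the folder and file names below are just an example, not taken from an actual project), sharing code between functions could look something like this:

// shared\shared.csx - a file holding code used by multiple functions
public static string FormatAlert(string processId) => $"Activity Failed: {processId}";

// run.csx of a function that uses the shared file
#r "Microsoft.ServiceBus"
#load "..\shared\shared.csx"

using Microsoft.ServiceBus.Messaging;

public static void Run(BrokeredMessage incomingMessage, TraceWriter log)
{
    // FormatAlert comes from shared.csx, pulled in by the #load directive above
    log.Info(FormatAlert("demo-process-id"));
}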

So read on if you want to see how to deploy this from source control.

TFS

I want my release to inject some variables using Guillaume’s Replace Tokens build task, then package and publish the function. Seeing as a function isn’t really something you build, it feels a bit odd that you need a build to feed your release definition, so you might consider a build definition which deploys your function directly to an Azure Web App. That won’t allow you to use environments though, and because functions don’t support application slots yet, I like using a staging environment before going to production. Whichever way you go, know that for now a web deploy is the only possible way to deliver your function to the cloud.
I will assume that you have created a web application and/or build definition before, so I won’t go into that and will assume it’s all in place.

My build simply copies all files to a file container on the server, nothing special there. My release definition contains 4 steps per environment:

  • Replace Tokens: replaces all tokens with the correct Service Bus topics, the email address, etc. (see the example below).
  • Archive Files: zips the $(System.DefaultWorkingDirectory)/AwesomeNotifier/AwesomeNotifier folder into a zip file named $(System.DefaultWorkingDirectory)/package.zip.
  • Deploy Azure App Service: select your subscription and the app name, and tell it which package to use ($(System.DefaultWorkingDirectory)/package.zip in our case).
  • Azure App Service Manage: select your subscription, the start method, and the application.
Finished Release Definition
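
To illustrate the Replace Tokens step: a tokenized settings file could look like the snippet below. The file layout, keys and values here are purely an example; by default the task looks for tokens in the #{VariableName}# format, though the prefix and suffix are configurable.

{
  "Values": {
    "ServiceBusConnection": "#{ServiceBusConnection}#",
    "AlertEmailAddress": "#{AlertEmailAddress}#"
  }
}

During the release, each token is replaced with the value of the release variable of the same name for the environment you’re deploying to.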

Now if you set the trigger of your build to Continuous Integration and automatically create a release and deploy your (test) environment after a successful build, you’ll have created a working continuous delivery pipeline to update your Azure Function using Visual Studio and TFS. Good luck!


Going Serverless - Azure Functions put to use

We run an application which is event-driven and utilizes microservices across several trust boundaries. The application originated from our ‘automate everything you do more than twice’ mantra and is now continuously evolving and making our lives as a small DevOps team easier.

The underlying messaging mechanism of our app is an Azure Service Bus (or actually, multiple buses), with several topics and subscriptions on those topics. As all of our events flow through Azure already, it’s easy to store them in blob storage and use them for auditing/analysis/what-have-you at a later point in time. Now that usage is increasing, we felt it was time to add some alerting, so we made plans for a new service that would react to our ‘ActivityFailed’ event and send an email as soon as one of those events occurred (luckily they don’t occur that often). Sounds easy enough, right?

Dockerize or … ?

As you may know, Docker is a great tool to envelop your application in a well-known and well-described format so that it can run anywhere the same as it would on your machine. We would develop the service in .NET Core, so it would be easy enough to Dockerize it and host it somewhere just like some of the other services. But last night I thought to myself ‘Wait, we run in Azure, use the Azure Service Bus and only need to react to messages on the bus..’ and I decided I would try to create an Azure Function to react to the event and send me the mail. It literally took me about 15 minutes to develop. I’ll describe the process below.

Going serverless

Azure Functions are a way to process events in an easy way without having to worry about where you run it. It’s basically ‘just code’ and Azure does the rest for you. I had played with Azure Functions before, but didn’t really find a use-case for it. I do however feel that they are the next step after containerization. It may not fit all problems, but there are certainly use-cases out there which would benefit from a completely serverless architecture.

Step one is going to the Azure Portal and creating a new ‘Function App’. Tip: use a consumption plan if you only want to be billed for your actual usage.

Creating the Function App

Once your Function App is created, navigate to it. The first time you navigate to your Function App, you won’t have any functions yet, so you will be presented with the Quickstart Wizard. We will not use it, so scroll down and click ‘Create your own custom function’.

Create your own custom function

Now from the template gallery, select C# as language and ‘Data Processing’ as scenario. Click the ‘ServiceBusTopicTrigger-CSharp’ template and enter the following values in the corresponding fields:

  • Name: a meaningful name for your function, pick something like ‘EmailNotifier’
  • Topic name: this is the name of the topic on your service bus which you’ll listen to
  • Subscription name: The subscription name on top of the topic specified above
  • Access Rights: select ‘Manage’ and make sure this matches the permissions of the SAS policy you use. As of writing this post, there’s a bug preventing you from using the expected ‘Listen’ permissions. That is - you can use it, but your function will cease to trigger after a few hours.
  • Service Bus connection: Service Bus connection strings are saved as Application Settings for your entire Function App and can be shared across multiple functions. Just click ‘new’ the first time and enter the connection string without the EntityPath in it.
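
For reference, a Service Bus connection string without the EntityPath looks like this (the namespace and policy name are placeholders):

Endpoint=sb://yournamespace.servicebus.windows.net/;SharedAccessKeyName=YourPolicy;SharedAccessKey=<your-key>

If you copied the connection string from a policy on the topic itself, it will end in ;EntityPath=yourtopic - remove that part before saving the setting.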

You will now have a basic function. Congratulations!

Making it do something useful

In order to do something meaningful with our app, we’ll need to go through a few steps. First let’s discover what is created for us. Click the ‘Files’ button on the top right of the editor:

Exploring your first function

You will see that you have two files:

  • function.json - which describes your in- and outputs
  • run.csx - which is the code for your function

Take some time to familiarize yourself with both files and notice that run.csx isn’t much different from a regular C# program.

It actually has using statements and a public static void Main()-like function called ‘Run’. Azure Functions provides you with framework libraries such as System and System.Linq, and you can include additional assemblies using the #r directive. A full list of all available assemblies can be found here. As you can see, using the types and methods within the Microsoft.ServiceBus namespace will be easy. I can just add the following lines of code to the beginning of run.csx:

#r "Microsoft.ServiceBus"
using Microsoft.ServiceBus;

I also will be using Newtonsoft.Json to deserialize my messages and SendGrid to send my emails, so I will need some way to restore the NuGet packages. This turns out to be quite easy. I just have to add a new file and tell my function what my dependencies are. Add a file called project.json to your function like so:

Adding a file

Now add the following code to it:

{
  "frameworks": {
    "net46": {
      "dependencies": {
        "Sendgrid": "8.0.5",
        "Newtonsoft.Json": "9.0.1"
      }
    }
  }
}

This will trigger a NuGet restore before the function is executed for the first time. Don’t forget to add the corresponding using statements to your code.

We’re almost ready to get the code done but first we’ll need to add an output to our function. Head to the ‘Integrate’ section of your function and take note of the ‘Message parameter name’, we will use this later on. Now click ‘New Output’ and select ‘SendGrid’ (currently in preview).

Integrate

The easiest way to utilize this output is to enter the from, to, subject and API key here. Mind you, the API key field holds the name of an Application Setting which contains the actual key!

Configure SendGrid

Save the changes and then add the Application Setting corresponding to the API key name (SendGridApiKey in this example) by clicking ‘Function App Settings’ and then ‘Configure app settings’.
Once you’ve added the output, take a look at your function.json and see how it reflects the changes.
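
For reference, with the Service Bus trigger and the SendGrid output in place, the bindings in function.json should look roughly like this (property names are to the best of my knowledge for the current runtime; the topic, subscription and addresses below are placeholders taken from this post’s example):

{
  "bindings": [
    {
      "type": "serviceBusTrigger",
      "direction": "in",
      "name": "incomingMessage",
      "topicName": "yourtopic",
      "subscriptionName": "yoursubscription",
      "connection": "ServiceBusConnection",
      "accessRights": "manage"
    },
    {
      "type": "sendGrid",
      "direction": "out",
      "name": "message",
      "apiKey": "SendGridApiKey",
      "from": "alerts@example.com",
      "to": "you@example.com",
      "subject": "Activity Failed"
    }
  ],
  "disabled": false
}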

Finally adjust the code for run.csx to reflect your application logic. Notice how I named the ‘Message parameter name’ incomingMessage and added an out Mail message to the method signature:

#r "SendGrid"
#r "Newtonsoft.Json"
#r "Microsoft.ServiceBus"

using SendGrid.Helpers.Mail;
using Newtonsoft.Json;
using System;
using System.Threading.Tasks;
using Microsoft.ServiceBus.Messaging;

public static void Run(BrokeredMessage incomingMessage, TraceWriter log, out Mail message)
{
    message = null; // set output to null, it must be set as it is a mandatory out parameter

    var msgBody = incomingMessage.GetBody<string>();
    var msg = JsonConvert.DeserializeObject<dynamic>(msgBody);
    log.Info($"Event type: {msg.messageType}");

    if (msg.messageType == "activityFailed")
    {
        log.Info($"Found a failed activity: {msg.processId}");
        message = new Mail();
        var messageContent = new Content("text/html", $"Activity Failed: {msg.processId}");
        message.AddContent(messageContent);
    }
}

That’s it. Click Run and your message will be parsed, checked and you will be alerted in case something goes wrong :-)

The result

I’ve already received my first alert - even though I triggered it intentionally, it’s still awesome to see that I now have a low-cost, easy-to-use solution which only runs when it should. Of course there are optimizations to be made, but for now it does the trick. And in the meantime I’ve learned some more about building serverless applications using Azure Functions.


Git in VS2017 with self-signed SSL

When I’m out of the office, I connect to my team’s TFS server through the firewall and get served up with a properly signed (by a widely trusted CA) SSL certificate.
This means that my browser and Git have no issues connecting and cloning. When I’m in the office and connected to our corporate WiFi network, I get a self-signed SSL certificate.

It’s always been a hassle to add these certificates to Git’s local certificate store, but luckily Visual Studio didn’t require you to do the same, seeing as it used libgit2. With VS2017, Microsoft switched to git.exe (which is good), but they aren’t using the one already on your PATH; instead, a bundled installation resides in the VS2017 extensions directory. This means that you have to add SSL certificates to yet another Git trusted store.

Let’s fix

Microsoft has documented how certificates should be added to your git.exe client (https://blogs.msdn.microsoft.com/phkelley/2014/01/20/adding-a-corporate-or-self-signed-certificate-authority-to-git-exes-store/), and now the same must be applied to Visual Studio as well to prevent this from happening:

Error cloning with untrusted certificate

The Git client resides in your VS2017 installation dir, which by default is C:\Program Files (x86)\Microsoft Visual Studio\2017\. If you browse to your edition (e.g. ‘Enterprise’), then into the familiar Common7\IDE directory and on to the CommonExtensions\Microsoft\TeamFoundation\Team Explorer\Git\mingw32\ssl\certs folder, you will find the ca-bundle.crt that Visual Studio uses. So the full path (for a default installation of VS2017 Enterprise) would be:

C:\Program Files (x86)\Microsoft Visual Studio\2017\Enterprise\Common7\IDE\CommonExtensions\Microsoft\TeamFoundation\Team Explorer\Git\mingw32\ssl\certs
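
One way to get your certificate into that bundle (assuming you’ve exported your self-signed CA certificate to a Base64-encoded .crt file; the file name below is just an example) is to append it from an elevated PowerShell prompt:

Get-Content .\my-corporate-ca.crt | Add-Content 'C:\Program Files (x86)\Microsoft Visual Studio\2017\Enterprise\Common7\IDE\CommonExtensions\Microsoft\TeamFoundation\Team Explorer\Git\mingw32\ssl\certs\ca-bundle.crt'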

Once you’ve added your Base64-encoded certificate to that bundle, the next time you attempt to clone a repo within VS2017 you should be presented with the trusted VS logo ASCII art from TFS:

Visual Studio ASCII art logo Git

Hope this saves you a bit of trouble ;-)


Coretainers

Most people, if not everyone, have seen the .NET Core demos in a Docker container on Linux by now. Some may even have experimented with Windows containers and the full-fledged .NET Framework, as I showed at the SDN Event in September.
The thing is, if you haven’t looked at containers by now, you’re in for a treat. Where it used to be quite hard to figure everything out for yourself, Microsoft announced a new way of integrating today and is taking it to the next level in Visual Studio 2017. Especially when you combine the power of containers with the flexibility of .NET Core.

Docker made easy

The combination of .NET Core and containers is very powerful. It gives you a small image which runs anywhere. You can literally ship your ‘machine’, and today it became even easier.
Starting with Visual Studio 2017, when you create a web application, you can enable Docker support out of the box:

Built-in Docker support

If you have Docker for Windows installed, you can get going. If not, install it first.
This will automatically generate several files for you:

  • Dockerfile (where it all starts)
  • docker-compose.yml (compose your containers, more on this in a future post)
  • docker-compose.ci.build.yml (instructions for a CI build)

This will be all you need to get going. Really, that’s it. Just press ‘F5’ (or click the debug button, which now conveniently says ‘Docker’).
Visual Studio will now start building your application and put it into a container. The best part here is that it will link your source files on disk into the container by using volumes. If you inspect the docker-compose.vs.debug.yml file, you can clearly see the line that says:

- .:/app

What this line does is link the current directory to the /app directory within the container. This means you can edit your code (and views) live, refresh your browser and it’ll update the app running within the container. Best of all, you can set breakpoints and they work just as if the application were running on your local dev machine.
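
For context, the relevant fragment of docker-compose.vs.debug.yml looks something like this (trimmed down; the service and image names depend on your project, so treat this purely as an illustration):

services:
  awesomewebapp:
    image: awesomewebapp:dev
    volumes:
      - .:/app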

Mind you: if your debug experience didn’t go quite as planned and you ran into an error, you might see something like this in the output window:

ERROR: for awesomewebapp Cannot create container for service awesomewebapp: D: drive is not shared. Please share it in Docker for Windows Settings

The error message is quite descriptive nowadays: right-click the Docker icon in your taskbar and go to Settings. On the ‘Shared Drives’ tab, you can then share the drive where your application resides.

Publish to Azure

Now where it gets really awesome is that, starting today, you can publish your container to Azure with a few simple clicks. If you right-click your project, you can press ‘Publish’. We all know this action from years of publishing web applications through WebDeploy - and we all know what joy that brought ;-)
We then got the ability to quickly select ‘Host in Azure’ when creating the project, and now we have this:

Publish Container to Azure

The settings are simple:

  • Provide a unique name for your app
  • Select an Azure Subscription
  • Select a resource group, or create one
  • Select or create an App Service Plan
  • Select or create a Docker registry

I’m assuming you’re familiar with Azure terms such as the resource group and service plan, but the last one deserves a bit of explanation. A Docker registry is like a repository where your container images are stored. You can have both private and public registries - Docker Hub being the most famous one. By default this will create a private registry where you can store the different versions of your container image.

Press the ‘create’ button. Visual Studio and Azure will do the rest for you, it’s that simple.

Mind you: make sure that both your app service plan and registry are in the same Azure region. As of writing this post, only West US is supported. You can select the region from the ‘Services’ tab by pressing the gears next to the app service or registry you’re creating.

Result

After pushing the ‘create’ button, my container got published to Azure and I’m able to access it from my browser. And although this is of course an awesome way to publish your application, this is probably not what you want from a DevOps perspective. You want to be able to make a change to the app, commit and push your changes to the repo and have an automated build/release pipeline to put your changes in production… and you can!
That’s what another new option in VS2017 does for you:

Continuous Delivery from VS2017

More on this feature in a later post though. For now, experiment with the containers and new features you have and I’ll show you how to automatically create a CI/CD pipeline from right within Visual Studio in a future post.


New Blog

So as you may have noticed, I have started a new blog. It’s been a long time coming, but I finally found some time this weekend. My colleague Edwin van Wijk tipped me off on using hexo quite a while ago and I seem to have gotten the hang of it. This blog itself is still a work in progress and I’ll be migrating old posts over soon, but in the meantime I figured I’d share some tips.

Free Blog

As you might know, GitHub offers you a free website through GitHub Pages. This means that you can host your static website right from GitHub. Combine this with Hexo magic and you can start your own blog quite easily. What you might not know is that you can also add a custom domain to your GitHub page:

Add a custom domain to GitHub pages

Now this by itself is pretty cool, but it gets better. Although it’s possible to use SSL on GitHub Pages, that currently isn’t possible when using a custom domain... or is it?

CloudFlare to the rescue

CloudFlare offers a free tier that not only makes your website faster by using a smart caching mechanism (which you might want to turn off seeing as hexo generates static content), it also offers free SSL for all sites. Simply register for a free account on their site, go to the ‘DNS’ tab and add a CNAME for your domain, like so:

For the DNS-savvy: yes, I used a CNAME at my domain’s root; please refer to this page for details as to why this is still RFC compliant.

Then navigate to the ‘Crypto’ tab in the menu and set it to the following:

Set the Encryption level to Full

Now for the final step, which ensures all your users are automatically redirected to your SSL page, navigate to the ‘Page Rules’ tab and add the following rules (where you replace the domain with your own domain). If you use a sub-domain such as ‘blog.domain.com’, make sure to use two asterisks (*) in the first rule and replace $1 in the rule with $2 so that it will correctly rewrite:

Add Rewrite Rules

In case you do want to disable caching to prevent issues with your static site, enable a third rule where you match https://yourdomain.ext/* and set the action to ‘Cache Level = ByPass’:

Disable Caching

Sit back and relax

That’s it. You’re done. You have just set up your new secure site using hexo, GitHub Pages and CloudFlare. Of course you can also use this with the Basic tier in Azure, which allows you to use your own custom SSL certificate for just eight-odd euros a month ;-)


Bash for Windows

So last week at //Build/ Microsoft announced native Bash-integration on the Windows 10 platform and today they delivered the first preview. Being a Windows Insider since nearly day 1 – including installing those buggy mobile builds on my daily driver – I still have my daily driver set to the fast ring and I received build 14316 today. After about 30 mins of installation (ymmv), I eagerly logged in and typed ‘bash’. Unfortunately, nothing happened.

Then I realized I had to switch some options on. First you need to enable the ‘developer mode’. You can do this by opening the settings app and selecting the correct option:

Enable Developer Mode

Next you can enable the optional windows feature ‘Windows Subsystem for Linux (Beta)’:

Enable Windows Feature
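
If you prefer the command line over the ‘Turn Windows features on or off’ dialog, enabling the feature from an elevated PowerShell prompt should also work (the feature name below is the one used for the beta):

Enable-WindowsOptionalFeature -Online -FeatureName Microsoft-Windows-Subsystem-Linux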

After a reboot, you can press the Windows key and enter ‘bash’. A new prompt will open asking whether you want to install Ubuntu – say what:

Installing Bash... on Windows

And that’s it, you’re root:

Root on Windows!

A few tips:

  • right-click the title bar, go to ‘Properties’ and enable ‘Quick Edit’ there; this allows you to copy/paste into the window.
  • if you’re like me and you tried to install Docker even though you kind of knew it wouldn’t work: it doesn’t work. Luckily an easy integration that runs a Docker host in Hyper-V is just around the corner (I’m running the beta already), so no sweat there; I just had to try 🙂

Dev Intersection 2015 - day 6

Even though I’ve been back from Las Vegas for a while now and have gotten over the jet lag, I didn’t want to withhold my last day at Dev Intersection from you. It was, after all, the day I had been looking forward to the most.

Despite my earlier experiments with IoT (dotnetFlix episode 3), in which I made my gas and electricity meter ‘talk’ to the internet, and some other simple projects, I still didn’t really feel like I was doing IoT: I had connected to Azure, but I hadn’t used the IoT Hub, applied Stream Analytics or touched Power BI. That was exactly what my last workshop was about: IoT, Azure, Power BI, all driven by Node.js!

Particle Photon

Throughout the day, Doug Seven and his team walked us through the material, starting with the Particle Photon. This $19 mini module (see the photo, where it sits on top of a breadboard) can communicate over Wi-Fi out of the box and has a number of digital and analog ports to talk to. Plug it into your PC (or another USB power source) and you’re ready to go as soon as you’ve ‘claimed’ your Particle through their cloud service.

The workshop explains that there are several ways to work with your devices: you can communicate with the internet directly, or you can work through a gateway device. That’s how we did it this day: through our PCs. Armed with a text editor (I chose Visual Studio Code), the Johnny-Five node module and the particle-cli module, I could get going with Node.js. Since the module has no display, there was no way to output ‘hello world’, so a blinking LED had to do (let’s not dwell on the fact that my LED still blinked ‘hello world’ in Morse code ;-)). Also, be sure to try the particle-cli command ‘nyan’, and as a tip: you can also run ‘particle-cli nyan off’ instead of rebooting.

As the day progressed, we got further and further with our Photons and hooked them up to a SparkFun weather shield to build a simple weather station. By then pushing those metrics to Azure using the Azure IoT node module and pouring them into a Power BI dataset with a Stream Analytics job, you can put a nice dashboard on top of them in Power BI. Note that in order to select Power BI as the output for your Stream Analytics job, you have to use the old (classic) Azure portal!

Below you can see my result, with time on the horizontal axis and temperature on the vertical axis :-)

Temperature

All in all, this was an educational workshop in which you take in a lot of information in a short time and get the chance to work with the people behind it all and ask them questions. So if you get the chance to attend a workshop by Doug and his team: grab it! Check out their GitHub for the code, guides and workshop schedule.


Dev Intersection 2015 - day 3

Together with Mark Rexwinkel and Edwin van Wijk, I’m attending the Dev Intersection 2015 conference in Las Vegas this week. Through a daily blog, we’re trying to keep you up to date on what we see and hear here. After Edwin and Mark, today it’s my turn.

The third day of the conference was the first day with ‘regular’ sessions. After a good breakfast, the day started with a keynote by Scott Guthrie. He mainly talked about the ‘Journey to the Cloud’ and shared Microsoft’s vision on DevOps with Visual Studio Online, some impressive numbers about Azure (how about 777 trillion storage queries per day?!) and the way Microsoft’s cloud services such as Office 365 fit into the bigger picture of the modern IT industry.

DevOps with VSO

After the keynote, we each went our own way. I attended a session by Julie Lerman (Domain Driven Design for the Database Driven Mind), which is summarized very well in a series of three blog posts, and a session by Steven Murawski (Survive and Thrive in a DevOps World), which boiled down to the point that adopting DevOps is mainly a culture shift in which you have to let go of the ‘fear culture’ and the ‘blame game’. He has quite a few tips on his blog for getting started with DevOps.

In the afternoon I continued with a session by Troy Hunt (Securing ASP.NET in an Azure Environment). Having attended his workshop on Monday, I was very curious what he had to say on this topic and I was not disappointed. Although it started out with mostly no-brainers such as the least-privileged-account principle, he eventually got to tips about dynamic data masking in Azure SQL databases, touched on the importance of application settings and connection strings in the Azure portal, and pointed out that you should really always enable two-step verification when you manage an Azure subscription with your account. You can set this up through account management.

All in all, this was another successful day, and I’m already looking forward to tomorrow!


Custom build tasks in TFS 2015

Since I upgraded my team’s private TFS instance to TFS 2015 RC1, followed by RC2, the whole team has been working with TFS 2015 quite a lot. Of course one of the major features is the new build engine and we’ve given that quite a ride. From cross-platform builds on Mac and Linux to custom build tasks, we’ve accomplished quite a lot. Seeing as Brian Harry stated during yesterday’s Visual Studio 2015 launch that it was ‘quite easy’ to build your own tasks, I figured I’d give a short write-up of our experiences with custom tasks.

Preface

From the moment I upgraded our R&D server to RC1, we’ve been working with the new build system. Up until RC2 it was only possible to add custom build tasks, but we weren’t able to remove them. On top of that, the whole process isn’t documented quite yet. Seeing as we quite often add NuGet packages to a feed and didn’t want to add a not-very-descriptive PowerShell task to all of our build definitions, we decided to use this as the example for a custom task and see how it would fare.

Prerequisite one: What is a task?

To make a custom build task, we first need to know what it looks like. Luckily Microsoft has open-sourced most of the current build tasks in https://github.com/Microsoft/vso-agent-tasks which gave us a fair idea of what a build task is:

  1. a JSON file describing the plugin
  2. a PowerShell or Node.JS file containing the functionality (this post will focus on PowerShell)
  3. an (optional) icon file
  4. optional resources translating the options to another language

Now the only thing we needed to find out was: how to upload these tasks and in what format?

Good to know:

  1. To make sure your icon displays correctly, it must be 32×32 pixels
  2. The task ID is a GUID which you need to create yourself
  3. The task category should be an existing category
  4. Visibility tells you what kind of task it is, possible values are: Build, Release and Preview. Currently only Build-type tasks are shown

Prerequisite two: How to upload a task?

We quickly figured out that the tasks were simply .zip files containing the aforementioned items, so creating a zip was easy, but then we needed to get it there. By going through the GitHub repositories, we figured out there was a REST API which controls all the tasks, and that by doing a PUT call to said endpoint we could create a new task, but also overwrite existing ones.

The following PowerShell script enables you to upload tasks:

param(
    [Parameter(Mandatory=$true)][string]$TaskPath,
    [Parameter(Mandatory=$true)][string]$TfsUrl,
    [PSCredential]$Credential = (Get-Credential),
    [switch]$Overwrite = $false
)

# Load task definition from the JSON file
$taskDefinition = (Get-Content $taskPath\task.json) -join "`n" | ConvertFrom-Json
$taskFolder = Get-Item $TaskPath

# Zip the task content
Write-Output "Zipping task content"
$taskZip = ("{0}\..\{1}.zip" -f $taskFolder, $taskDefinition.id)
if (Test-Path $taskZip) { Remove-Item $taskZip }
Add-Type -AssemblyName "System.IO.Compression.FileSystem"
[IO.Compression.ZipFile]::CreateFromDirectory($taskFolder, $taskZip)

# Prepare to upload the task
Write-Output "Uploading task content"
$headers = @{ "Accept" = "application/json; api-version=2.0-preview"; "X-TFS-FedAuthRedirect" = "Suppress" }
$taskZipItem = Get-Item $taskZip
$headers.Add("Content-Range", "bytes 0-$($taskZipItem.Length - 1)/$($taskZipItem.Length)")
$url = ("{0}/_apis/distributedtask/tasks/{1}" -f $TfsUrl, $taskDefinition.id)
if ($Overwrite) {
    $url += "?overwrite=true"
}

# Actually upload it
Invoke-RestMethod -Uri $url -Credential $Credential -Headers $headers -ContentType application/octet-stream -Method Put -InFile $taskZipItem
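
Assuming you saved the script above as Upload-Task.ps1 and your task files live in a folder called NuGetPackAndPush (both names are just examples), uploading - or overwriting - the task would look like this; you’ll be prompted for credentials unless you pass -Credential yourself:

.\Upload-Task.ps1 -TaskPath .\NuGetPackAndPush -TfsUrl http://your-tfs-server:8080/tfs -Overwrite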

Good to know:

  1. Currently only ‘Agent Pool Administrators’ are able to add/update or remove tasks.
  2. Tasks are server-wide, this means that you will upload to the server, not to a specific collection or project.

Creating the actual task

Like I said, we’ll be creating a new task that publishes our NuGet packages to a feed. First we need to decide what information we need to push our packages:

  1. The target we want to pack (.csproj or .nuspec file relative to the source-directory)
  2. The package source we want to push to

For this example I’m assuming you’re only building for a single build configuration and single target platform, which we’ll use in the PowerShell-script.

First we’ll make the task definition. As I said, this is simply a JSON file describing the task and its inputs.

{
  "id": "61ed0e1d-efb7-406e-a42b-80f5d22e6d54",
  "name": "NuGetPackAndPush",
  "friendlyName": "Nuget Pack and Push",
  "description": "Packs your output as NuGet package and pushes it to the specified source.",
  "category": "Package",
  "author": "Info Support",
  "version": {
    "Major": 0,
    "Minor": 1,
    "Patch": 0
  },
  "minimumAgentVersion": "1.83.0",
  "inputs": [
    {
      "name": "packtarget",
      "type": "string",
      "label": "Pack target",
      "defaultValue": "",
      "required": true,
      "helpMarkDown": "Relative path to .csproj or .nuspec file to pack."
    },
    {
      "name": "packagesource",
      "type": "string",
      "label": "Package Source",
      "defaultValue": "",
      "required": true,
      "helpMarkDown": "The source we want to push the package to"
    }
  ],
  "instanceNameFormat": "Nuget Pack and Push $(packtarget)",
  "execution": {
    "PowerShell": {
      "target": "$(currentDirectory)\\PackAndPush.ps1",
      "argumentFormat": "",
      "workingDirectory": "$(currentDirectory)"
    }
  }
}

This version of the task will be a very rudimentary one which doesn’t do much (any) validation, so you might want to add that yourself. Below is the PowerShell script (PackAndPush.ps1) that does the actual packing and pushing:

[cmdletbinding()]
param
(
    [Parameter(Mandatory=$true)][string] $packtarget,
    [Parameter(Mandatory=$false)][string] $packagesource
)

####################################################################################################
# 1 Auto Configuration
####################################################################################################
# Stop the script on error
$ErrorActionPreference = "Stop"

# Relative location of nuget.exe to build agent home directory
$nugetExecutableRelativePath = "Agent\Worker\Tools\nuget.exe"

# These variables are provided by TFS
$buildAgentHomeDirectory = $env:AGENT_HOMEDIRECTORY
$buildSourcesDirectory = $Env:BUILD_SOURCESDIRECTORY
$buildStagingDirectory = $Env:BUILD_STAGINGDIRECTORY
$buildPlatform = $Env:BUILDPLATFORM
$buildConfiguration = $Env:BUILDCONFIGURATION
$packagesOutputDirectory = $buildStagingDirectory

# Determine full path of pack target file
$packTargetFullPath = Join-Path -Path $buildSourcesDirectory -ChildPath $packTarget

# Determine full path to nuget.exe
$nugetExecutableFullPath = Join-Path -Path $buildAgentHomeDirectory -ChildPath $nugetExecutableRelativePath

####################################################################################################
# 2 Create package
####################################################################################################
Write-Host "2. Creating NuGet package"
$packCommand = ("pack `"{0}`" -OutputDirectory `"{1}`" -NonInteractive -Symbols" -f $packTargetFullPath, $packagesOutputDirectory)
if ($packTargetFullPath.ToLower().EndsWith(".csproj"))
{
    $packCommand += " -IncludeReferencedProjects"
    # Remove spaces from build platform, so 'Any CPU' becomes 'AnyCPU'
    $packCommand += (" -Properties `"Configuration={0};Platform={1}`"" -f $buildConfiguration, ($buildPlatform -replace '\s',''))
}
Write-Host ("`tPack command: {0}" -f $packCommand)
Write-Host ("`tCreating package...")
$packOutput = Invoke-Expression "&'$nugetExecutableFullPath' $packCommand" | Out-String
Write-Host ("`tPackage successfully created:")
$generatedPackageFullPath = [regex]::match($packOutput,"Successfully created package '(.+(?<!\.symbols)\.nupkg)'").Groups[1].Value
Write-Host `t`t$generatedPackageFullPath
Write-Host ("`tNote: The created package will be available in the drop location.")
Write-Host "`tOutput from NuGet.exe:"
Write-Host ("`t`t$packOutput" -Replace "`r`n", "`r`n`t`t")

####################################################################################################
# 3 Publish package
####################################################################################################
Write-Host "3. Publish package"
$pushCommand = "push `"{0}`" -Source `"{1}`" -NonInteractive"
Write-Host ("`tPush package '{0}' to '{1}'." -f (Split-Path $generatedPackageFullPath -Leaf), $packagesource)
$regularPackagePushCommand = ($pushCommand -f $generatedPackageFullPath, $packagesource)
Write-Host ("`tPush command: {0}" -f $regularPackagePushCommand)
Write-Host "`tPushing..."
$pushOutput = Invoke-Expression "&'$nugetExecutableFullPath' $regularPackagePushCommand" | Out-String
Write-Host "`tSuccess. Package pushed to source."
Write-Host "`tOutput from NuGet.exe:"
Write-Host ("`t`t$pushOutput" -Replace "`r`n", "`r`n`t`t")

To finish up, don’t forget to add a .png logo to your task ;-)
You should now be able to add a custom task to your build pipeline from the “Package” category:

Custom Task in Package category

Words of warning

Tasks can be versioned; use this to your advantage. All build definitions use the latest available version of a specific task and you can’t change this behavior from the web interface, so always assume the latest version is being used.

If you don’t change the version number of your task when updating it, the build agents that have previously used your task will not download the newer version because the version number is still the same. This means that if you change the behavior of your task, you should always update the version number!

When you delete a task, it is not automatically removed from existing build definitions. On top of that, you won’t get a notification when editing such a build definition, but you will get an exception when executing a build based on it.

Tasks are always available for the entire TFS instance, which means that you shouldn’t include credentials or anything else you don’t want others to see. Use ‘secret variables’ for this purpose:

Secret Variables

Further Reading

If you’ve followed this post so far, I recommend you also check out my team member Jonathan’s posts/videos (in Dutch):

Blog Post about Invoke SQLCmd in build vNext
Video on build vNext (in Dutch)


Load testing from the Azure portal

Before you launch a new web application, you make sure you have thoroughly tested it: you have performed unit, integration, usability and load tests. But for some reason, when the application goes into production it comes to a grinding halt and you’re left puzzled as to why this happened.

Back in 2013 Microsoft released a solution for this issue: Azure-based load testing, which is able to simulate real-world load on your application from Azure with unlimited resources (well, the only real limiting factor is your wallet). The only strange thing was that in order to use this Azure-based load testing, I had to go to my VSO account to start a test instead of just starting a load test in the Azure portal where I published my web application.

This has changed now.

Introducing Azure load testing from the portal

Yesterday I stumbled onto this post (which contains way more pictures than this post will) by Charles Sterling, where he revealed that, as a ‘nascent feature PM’, he more or less accidentally released a new feature into the wild. As of now it’s possible to start a load test from the Azure portal, right from where you control your web application. It’s as easy as adding a tile to your web app and starting the test. Or even better: by enabling a feature flag and simply adding a new load test.

To get started, load up your Azure Portal (the new one!) and navigate to one of your web apps and then follow these steps:

  1. Right-click the space in between any of the tiles already displayed and click ‘Add Tiles’
  2. Now choose the ‘Operations’ category and select ‘Cloud Load Test’
  3. You will now get a new tile in your web app panel
  4. Click ‘Done’ on the top left
  5. Click the tile and add a new Load Test, enter the VSO account you want to use, the URL and a name for the test. Mind you, the test name can’t contain any spaces or non-alphanumeric characters.
Load Test

In case you don’t want to add a new tile, you can also include the following feature flag in the portal URL: ?websitesextension_cloudloadtest=true, turning the URL into something like: https://portal.azure.com/?websitesextension_cloudloadtest=true
After doing so, you will be able to access load testing from your web app’s settings option.

Summarizing

You now have a new way to perform load testing in the Azure portal, snugly in your Web App blade. It is currently lacking some of the features that VSO does offer, such as browser distribution and think time, but who knows, they might just add them before the final version:

VSO has slightly more options

All in all it’s a nice time-saver and the tests are now in a place where I’d actually expect them to be.
