Going paperless: step 1

In an effort to leave a (little) smaller footprint on this world, I’ve been minimizing the amount of paper I receive through the mail every day by requesting digital versions of everything from invoices to tax statements to payment slips. Right now I receive most of these through email as PDF files, I put them in a folder on my NAS and then never touch them again until I need them. That’s when I spend hours and hours trying to locate what I need. I mean.. search only gets you so far when it concerns images or passworded PDFs.

This post describes how to at least get rid of those pesky passworded PDFs, and in the follow-ups to this post I’ll also dive deeper into my efforts to go truly paperless.

Pesky Passworded PDFs and how to get rid of them

Some companies insist on supplying you with a passworded PDF. The problem with these is that you do need the password before they’re searchable and what’s worse.. the password may change over time (or you forget it), leaving you with useless files. Of course I could just open the PDF, enter the password, print the file to another PDF and then save it to my NAS. Or I could print it on paper and put it in a physical folder for safekeeping. But seeing as I want to reduce paper and I hate doing manual labor, I just couldn’t resist automating this. So this is how it went:

  • Every 24 hours, an Azure Logic App checks a subfolder in my Exchange Online mailbox for new mails with PDF attachments
  • If it finds any emails matching those criteria, it pipes the attachments into an Azure Function
  • The Function opens each PDF with the password stored in Azure Key Vault and removes the protection
  • It then returns a file stream to the Logic App, which stores the result as an unprotected PDF in OneDrive

The OneDrive folder is then synced with my Synology NAS overnight and there we go.. password removed.

If only it were that easy

The idea was simple enough. After finding a library that didn’t need a commercial license and was .NET Core compatible, iText 7 (mind you: in commercial software you’ll probably want to buy a license due to the AGPL attached to this library), coding my Function was easy enough. Deploying it, however, came with a few caveats. The first was that for some reason my Function wasn’t receiving any attachments; the second was getting my password accessible from within the Function.
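
To give you an idea of what the Function does with the PDF, here’s a minimal sketch of the iText 7 part (class and method names are made up for this sketch; the real code is in the repo linked at the end of this post):

// Minimal sketch of stripping the password with iText 7 - illustrative names, not the exact repo code
using System.IO;
using System.Text;
using iText.Kernel.Pdf;

public static class PdfDecryptor
{
    public static byte[] RemovePassword(Stream protectedPdf, string password)
    {
        var readerProperties = new ReaderProperties()
            .SetPassword(Encoding.UTF8.GetBytes(password));

        var output = new MemoryStream();

        var reader = new PdfReader(protectedPdf, readerProperties);
        reader.SetUnethicalReading(true); // needed when the file only carries a user password

        // Copying the document through a writer with default WriterProperties
        // produces an output without any encryption; Close() also closes the reader and writer.
        var pdf = new PdfDocument(reader, new PdfWriter(output));
        pdf.Close();

        return output.ToArray(); // ToArray still works after the stream has been closed
    }
}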

Attachments and Logic Apps’ Exchange Connector

Azure Logic Apps is an awesome way to quickly create an application with standard functionality, while still allowing you to call custom code through a Function connector. In this application I use a few standard connectors and add a custom Function. My trigger is a standard Office 365 connector that periodically checks a subfolder in my mailbox. Some conditional logic then checks specific conditions on the email that triggered the Logic App and, if the condition is met, a foreach loop runs over all attachments on the email and pushes each of them through my custom Function, which removes the password. Finally, a standard OneDrive connector stores the resulting file in my OneDrive account.

Creating the Logic App, adding the logic and wiring it all up took a whole 2 minutes. Unfortunately it didn’t work: I wasn’t getting any attachments into my Function and I couldn’t for the life of me figure out why. But I’ll help you out here and say what I’m almost ashamed to admit: the ‘Include Attachments’ dropdown actually means ‘do you want to do something with these attachments in the following step?’, and you will probably want to answer that with a firm ‘yes’ :-)

After this, it was on to the next step.. securing my secrets.

Azure Functions, MSI, Key Vault and VNet integration

For a while now it has been possible to use Managed Identities in Azure. This is a great solution when you don’t want to specify a username/password (a secret) to access your secrets, because that kind of defeats the point, or at the very least complicates things. So there really was no choice other than to store my secrets in a Key Vault and then use an MSI (Managed Service Identity) to access the proper secrets. The best thing is that you can even reference these secrets from within a Function by merely using a specially crafted Environment Variable; this feature is called ‘Key Vault references’. You simply create an environment variable called something like ‘MyPDFPassword’ and then as a value you use @Microsoft.KeyVault({referenceString}), where {referenceString} is the location of the secret, something like this:
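
The value of the setting ends up looking something like this (vault name made up; the trailing GUID is the version of the secret):

@Microsoft.KeyVault(SecretUri=https://my-keyvault.vault.azure.net/secrets/MyPDFPassword/ec96f02080254f109c51a1f14cdb1931)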

Now all you need to do is grant the MSI your Function uses permission to access the secret in your vault, and don’t firewall your Key Vault. Wait.. what did you say? Isn’t it a good thing to make sure only Azure services can access my resources? Well yes, dear reader, generally it is, but not when your service isn’t supported yet! Even though Azure App Service is mentioned there, and even though Azure Functions may run in ‘sort of an App Service’, do not make the same mistake I did and tick the ‘Selected networks’ box, or you’ll spend quite some time figuring out why your Function gets the name of the Environment Variable you’re using as a reference rather than the value of the secret it is supposed to reference..

As long as you do NOT check that box, you can happily access the passwords for your PDFs, and you can even version the secret, which might come in handy in case you ever do need that old password.

One step closer

This solution has been spinning for a few months now and has been removing those pesky passwords from my PDFs, and it’s even quite cheap.. In fact, it costs me a whopping €0.01 per month :-) I’m not sure if I’d make this investment as a company, but at least I’ve learned some more and got to play around with some of the more recent concepts in Azure. The code for the Function can be found on GitHub. Over the course of the next few months I’ll continue writing about my efforts to go paperless, which include my first steps into the world of AI (more specifically Machine Learning) to classify documents. And by the end of the year, I hope to be completely paperless.


It's been a while

It’s been a while since I’ve posted on this blog. There are several reasons for this. The first being that I’ve always gotten more energy from (public) speaking than from writing. The second being that I really needed to conserve my energy for about a year or so, and actually still do.

It all started with this:

One of the cars involved in this accident was mine. While I was driving to a friend, an accident happened just in front of me. All three lanes came to an immediate halt. From 130 km/h to 0. I always steer towards the side of the road in these situations and that allowed the van driving behind me to avoid a collision. He came to a halt with the nose of his vehicle close to my passenger door. Unfortunately, the driver of the van behind him wasn’t paying attention to the road and hit me from behind while doing about 120-130 km/h in his fully loaded Mercedes Sprinter.

At first I was fine

Immediately after the accident, I seemed to be just fine. My leg hurt a little from hitting the steering wheel, but other than that there didn’t seem to be any damage. Although I knew I’d have sore muscles the day after, I counted my blessings and went home. That night, I celebrated the fact that I’d walked out of a severe accident relatively unscathed. The day after, my muscles were indeed severely painful, but if that was the worst of it, I still wasn’t too worried. But then…

I remember waking up on Sunday the 1st of April 2018 with really bad neck pain, and I was dizzy. I figured I had slept in a weird position and that it would pass. Unfortunately it didn’t. Not the next day, not during the week after, nor during the 9 months that followed. On Tuesday I went into the office and everything was still spinning. My colleagues sent me home.

After a visit to a doctor and physical therapist, I was diagnosed with a concussion and a whiplash.

… and then I wasn’t

This all meant that I had to take a lot of rest. For those who know me a little, it shouldn’t come as a surprise that this wasn’t easy. I am not one to sit still, but now I was forced to. I couldn’t work (nor was I allowed to by my employer), couldn’t read, watch TV or look at a screen, and low- or high-pitched sounds made my head spin like nothing else. I ended up sleeping with ear plugs, and even then it was hardly doable, which in turn worsened my condition. Even though I knew that eventually all would be well, it took way too long for my liking. So I developed a new hobby: I planted chili seeds and watched them grow.

And seeing as doing things half-baked isn’t really my thing (also, did I mention that I had a lot of time?), I ended up harvesting about 5 kg of chilies. It got so bad that the whole living room was filled with chili plants :-)

Meanwhile I was slowly reintegrating at work. I started out with 4x 30 minutes spread out over the day and slowly worked my way up to 4 hours a day. As soon as I could work one hour in a row (this took me several weeks), I decided it was also time to go into the office, and from there I slowly worked my way back to working full-time (40 hours/week).

So basically you were on holiday for about a year?

Not really. It hasn’t always been easy. Just like that van went from 130 km/h to 0, so did I. There have been times where I was really down and didn’t think it would ever get better. But besides my chilies, my dog and my girlfriend (in no particular order), there was another thing that kept me going and enabled me to slowly stretch out my days: community. Right after my accident I couldn’t do much, but I had already made a commitment to mentor some students, I had already planned some workshops (DevOps principles combined with Chaos Engineering, with students), and I had some speaking engagements, events, etc. Even though I didn’t think these things would go well, I noticed I got an enormous amount of energy from them. Mentoring, teaching, speaking, engaging in conversations: that is what makes me get out of bed, both in good times and in the less-than-good times.

Are you okay now?

Yes I am. I’m doing great. Unfortunately this doesn’t mean that I’m the same as before. I still struggle with long days - I still get headaches when I go on for too long. I get dizzy when I sleep too little and I have a lot less energy than I used to. But I like to think in possibilities, and when I look back at the first weeks and compare that to now, I’ve come a long way. If anyone had told me beforehand (and people did), I wouldn’t have believed that a ‘simple’ accident with so little visible damage could have such an impact on someone’s life. So I guess I’ve also learned from this experience: patience, relaxation, but especially to appreciate the little things in life and not complain so much ;-)

I’m looking forward to the next year where things will be stabilizing and I can pick up where I left off. I also hope to see you out there :)


Creating a GitHub badge with Azure Functions

Recently, I spent some time with the guys from the Stryker Mutator team. First in a hackathon over a weekend back in December last year, then finalizing our work in February and launching the Mutation Score Badge. Even though I had to overcome my fear of JavaScript, I managed to find some good parts in NodeJS and combine them into an Azure Function that provides the actual mutation badge. Get the why, how and what in this post.

Why Azure Functions?

Well, simply put: because it’s cheap, easy and it supports multiple languages. Since Stryker is written in NodeJS, I decided to challenge myself and write the function in NodeJS as well. Our setup is quite simple:

  • We use an Azure Storage Table to store all mutation scores posted from the Dashboard.
  • When someone requests a badge, the function performs a lookup in this table and serves the badge.
  • We have a Function Proxy to be able to use our own domain and a friendly URL (sketched right after this list).
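
For those wondering what that proxy looks like: it lives in a proxies.json file in the root of the Function App. Below is an illustrative sketch (the route and backend URL are made up, not our exact configuration):

{
  "$schema": "http://json.schemastore.org/proxies",
  "proxies": {
    "MutationScoreBadge": {
      "matchCondition": {
        "methods": [ "GET" ],
        "route": "/api/{provider}/{owner}/{repo}"
      },
      "backendUri": "https://example-functions-app.azurewebsites.net/api/badge/{provider}/{owner}/{repo}"
    }
  }
}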

How we developed the function

All code was written in TypeScript and, using this excellent post by Tsuyoshi Ushio, I was able to develop and debug it on my Mac quite easily (well, after a crash course in TypeScript from Nico Jansen).

Is it really all that awesome?

No, it isn’t. We hit quite a few snags while developing, but especially when deploying. As you read before, the functions are dirt-cheap on a consumption plan, but this also means that they’re not ‘Always On’. While this doesn’t really seem to be an issue for regular C# functions, for some reason the NodeJS functions were extremely slow and, on top of that, I had to use Kudu to do an npm install.

Azure and NodeJS

We soon discovered that uploading a lot of small files to Azure would take a while, so we decided we’d just upload the package.json and run npm install on Azure through Kudu. Notice that on this page it also says that the Node version is locked at 6.5.0. Even adjusting the WEBSITE_NODE_DEFAULT_VERSION environment variable didn’t work.

This limited us in our ability to use certain Node functionality that requires version 8+ (util.promisify in particular), so we went looking for another solution. This presented itself in the portal. If you look carefully at the screenshot above, you can see a variable called FUNCTIONS_EXTENSION_VERSION. This is set to ~1 by default, but you can simply change it to run on the ‘beta’ version. Mind you: you can only safely change this if you don’t currently have any functions deployed.

Unfortunately, it turned out that the beta runtime doesn’t support proxies yet, so we reverted and included our own promisify.

Cold Boot

As mentioned before, we initially planned on deploying through Kudu and simply running npm install there, and that’s what we did. Thing is, the functions were really slow. I mean… REALLY slow. It took over 20 seconds to start and, as it turns out, we weren’t the only ones. Our solution was to apply FuncPack, by simply running this before our publish:

npm install -g azure-functions-pack
funcpack pack ./

we were able to pack it all into one file. What it does is apply WebPack magic to your function (also rewriting your function.json to reference index.js as the entry point). Running this brought our cold boot times down to acceptable levels.

What now?

Well, we’re live :) There’s still some work to do by the functions team, but with the newly announced Run-From-Zip functionality, I’m positive that it’ll run even smoother than now. On top of that, we now also know what it has cost us over the month of February: a whopping $0.33 :-) So I guess this still applies:

Or at least they make it pretty easy for Open Source projects to use their services without incurring too much of a cost penalty. I’ll follow up on this post to describe how we wrapped this all up in a neat VSTS pipeline to deploy continuously.


Running Docker 17.10 on Windows Server 1709 without nested virtualization

Although some people have overheard me saying that Containers are Dead, there is actually some use for them when dealing with legacy software and/or on-premises/cloud hybrid applications.
Recently, during a hackathon, I tried to use Docker Swarm’s awesome Routing Mesh but couldn’t get it to work on Windows Server 2016. It turns out that it will only work on Windows Server 1709.

Installing Docker EE Preview

Since it was a hackathon anyway, I figured I might as well try to roll Windows Server 1709 (this is the new semi-annual release channel by the way) on a VM and I followed the instructions to install Docker on it. But… it still didn’t work. Turns out, I needed different instructions and the preview. But for some reason it wouldn’t install! It told me I needed to install the Hyper-V feature:

Forcing it to run anyway

Using the following piece of PowerShell, it’s quite easy to get it to work anyway. This does assume you followed the ‘normal’ installation instructions first, though.

# Stop Docker
Stop-Service Docker

# Get Docker
Save-package -providername DockerProvider -Name Docker -RequiredVersion Preview -Path $Env:TEMP

$dockerZipPath = (Resolve-Path $Env:TEMP\Docker*.zip)
Expand-archive $dockerZipPath $Env:TEMP\Docker

# Move to correct location
Move-Item $Env:TEMP\Docker\docker\dockerd.exe "$Env:ProgramFiles\Docker" -force
Move-Item $Env:TEMP\Docker\docker\docker.exe "$Env:ProgramFiles\Docker" -force

# Disable Linux containers
[Environment]::SetEnvironmentVariable("LCOW_SUPPORTED", $null, "Machine")
Start-Service Docker

# Cleanup
Remove-Item $dockerZipPath -Force
Remove-Item $Env:TEMP\Docker -Recurse -Force

The LCOW_SUPPORTED environment variable makes sure you won’t accidentally try to run a Linux container anyway :-) Oh, did I forget to mention that? Docker 17.10 adds support for Linux containers on Windows Server through LinuxKit.


Continuous Delivery of Azure Functions with TFS

In my previous post (Going Serverless - Azure Functions put to use), I showed you how to create a simple serverless app that did some basic alerting based on events on an Azure Service Bus. Now although this example did show you how to create the function and successfully run it, it didn’t show you how to do it properly: by rubbing some DevOps on it.

The code I used before was simple enough to maintain but I can imagine you would want to use Visual Studio to develop your functions. Luckily there’s an extension for that. After you’ve installed the extension (make sure to get the updated one and heed the prerequisites), you will be able to create a new Function App quite easily and although it’s not as complete as the docker integration (yet), you can use it to deploy your functions using web deploy rather than the source control integration from the portal.

Creating the App

In Visual Studio create a new solution using the wizard by selecting the (C# -> ) ‘Cloud’ -> ‘Azure Functions’ project type. You will see a project structure very similar to what you’re used to from other project types. It will feature a few files:

  • host.json - contains global config for all the functions within your project.
  • appsettings.json - this is pretty self-explanatory, right?
  • ProjectReadme.html - you can safely remove this.

Now as you may have noticed, there’s no actual function yet. You still have to add it by right-clicking the project-node and selecting the ‘Add’ -> ‘New Azure Function’ option.

Pick the ‘ServiceBusTopicTrigger - C#’ type and enter the parameters like before.

You will notice that after creating the function, you’ll end up with what we had before, including the project.json we had to create manually in the portal. That also means we can just reuse the code from before :-) Take a look at your function.json file and notice that it has a green squiggly underneath the Manage access right (which we have to use, remember?). I didn’t actually test it with the capital ‘M’ there, but I changed it to ‘manage’ before publishing. Let me know if you do try and succeed!
Unfortunately, Visual Studio doesn’t understand this project type completely just yet, so adding NuGet packages is a manual process. You’ll also notice that IntelliSense is limited: it works just fine if you’re using the assemblies you get out of the box, but I have found it lacking for external references.

Why use Visual Studio at all?

By now you might be wondering what the advantage of using Visual Studio is over just creating a function in the portal. Well, there are several reasons:

  • You might want to store your sources in source control and you’re using TFS - which is not supported in the portal.
  • You might want to create more complex solutions, where you share code between functions for instance. You can do this by adding an empty function and loading it in another by using the #load "..\shared\shared.csx" directive at the top of your file (below the #r directives); see the sketch after this list.
  • You can debug your functions. The first time you’ll try this, you will be prompted to download the Azure Functions CLI.
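
As a quick illustration of that #load approach (file and function names are made up for this sketch):

// shared/shared.csx - an otherwise empty 'function' that only holds shared code (hypothetical example)
public static string FormatAlert(string processId) => $"Activity Failed: {processId}";

// run.csx of the consuming function - #load sits below the #r directives
#r "Microsoft.ServiceBus"
#load "..\shared\shared.csx"

using Microsoft.ServiceBus.Messaging;

public static void Run(BrokeredMessage incomingMessage, TraceWriter log)
{
    log.Info(FormatAlert(incomingMessage.MessageId));
}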

So read on if you want to see how to deploy this from source control.

TFS

I want my release to inject some variables using Guillaume’s Replace Tokens build task, then package and publish the function. Seeing as a Function isn’t really something you build, it feels rather strange that you need a build to feed your release definition, so you might consider a build definition that deploys your Function directly to an Azure Web Application. That won’t allow you to use environments though, and because Functions don’t support deployment slots yet, I like using a staging environment before going to production. Whichever way you go, know that a web deploy is currently the only way to deliver your Function to the cloud.
I will assume that you have created a web application and/or build definition before, so I won’t go into that and will take it as given that it’s all in place.

My build simply copies all files to a file container on the server, nothing special there. My release definition contains 4 steps per environment:

  • Replace Tokens: replaces all tokens with the correct servicebus topics, the email address, etc.
  • Archive Files: zip the $(System.DefaultWorkingDirectory)/AwesomeNotifier/AwesomeNotifier folder and create a zip-file with $(System.DefaultWorkingDirectory)/package.zip as name.
  • Deploy Azure App Service: select your subscription, the app name and tell it which package to use ($(System.DefaultWorkingDirectory)/package.zip in our case).
  • Azure App Service Manage: select your subscription, select the start method, and select the application.

Now if you set the trigger of your build to Continuous Integration and automatically create a release and deploy your (test) environment after a successful build, you’ll have created a working continuous delivery pipeline to update your Azure Function using Visual Studio and TFS. Good luck!


Going Serverless - Azure Functions put to use

We run an application which is event-driven and utilizes microservices across several trust boundaries. The application originated from our ‘automate everything you do more than twice’ mantra and is now continuously evolving, making our lives as a small DevOps team easier.

The underlying messaging mechanism of our app is an Azure Service Bus (or actually, multiple buses), with several topics and subscriptions on those topics. As all of our events flow through Azure already, it’s easy to store them in Blob storage and use them for auditing/analysis/what-have-you at a later point in time. Now that usage is increasing, we felt that it was time to add some alerting, so we made plans for a new service that would react to our ‘ActivityFailed’ event and send an email as soon as one of those events occurred (luckily they don’t occur that often). Sounds easy enough, right?

Dockerize or … ?

As you may know, Docker is a great tool to wrap your application in a well-known and well-described format so that it runs anywhere the same as it would on your machine. We would develop the service in .NET Core, so it would be easy enough to Dockerize it and host it somewhere just like some of the other services. But last night I thought to myself ‘Wait, we run in Azure, use the Azure Service Bus and only need to react to messages on the bus..’ and I decided I would try to create an Azure Function to react to the event and send me the mail. It literally took me about 15 minutes to develop. I’ll describe the process below.

Going serverless

Azure Functions are a way to process events in an easy way without having to worry about where you run it. It’s basically ‘just code’ and Azure does the rest for you. I had played with Azure Functions before, but didn’t really find a use-case for it. I do however feel that they are the next step after containerization. It may not fit all problems, but there are certainly use-cases out there which would benefit from a completely serverless architecture.

Step one is going to the Azure Portal and creating a new ‘Function App’. Tip: use a consumption plan if you only want to be billed for your actual usage.

Once your Function App is created, navigate to it. The first time you navigate to your Function App, you won’t have any functions yet, so you will be presented with the Quickstart Wizard. We will not use it, so scroll down and click ‘Create your own custom function’.

Now from the template gallery, select C# as language and ‘Data Processing’ as scenario. Click the ‘ServiceBusTopicTrigger-CSharp’ template and enter the following values in the corresponding fields:

  • Name: a meaningful name for your function, pick something like ‘EmailNotifier’
  • Topic name: this is the name of the topic on your service bus which you’ll listen to
  • Subscription name: The subscription name on top of the topic specified above
  • Access Rights: select ‘Manage’, and make this match the SAS Token. As of writing this post, there’s a bug preventing you from using the expected ‘Listen’ permissions. That is - you can use it, but your function will cease to trigger after a few hours.
  • Service Bus connection: Service Bus connection strings are saved as Application Setting for your entire Function App and can be shared over multiple functions. Just click ‘new’ the first time and enter the connection string without the EntityPath in it

You will now have a basic function. Congratulations!

Making it do something useful

In order to do something meaningful with our app, we’ll need to go through a few steps. First let’s discover what is created for us. Click the ‘Files’ button on the top right of the editor:

You will see that you have two files:

  • function.json - which describes your in- and outputs
  • run.csx - which is the code for your function

Take some time to familiarize yourself with both files and notice that run.csx isn’t much different from a regular C# program.

It actually has using statements and a function called ‘Run’ that is akin to public static void Main(). Azure Functions provides you with framework libraries such as System and System.Linq, and you can include some additional assemblies using the #r directive. A full list of all available assemblies can be found here. As you can see, using all types/methods within the Microsoft.ServiceBus namespace will be easy. I can just add the following lines of code to the beginning of run.csx:

#r "Microsoft.ServiceBus"

using Microsoft.ServiceBus;

I also will be using Newtonsoft.Json to deserialize my messages and SendGrid to send my emails, so I will need some way to restore the NuGet packages. This turns out to be quite easy. I just have to add a new file and tell my function what my dependencies are. Add a file called project.json to your function like so:

Now add the following code to it:

{
  "frameworks": {
    "net46": {
      "dependencies": {
        "Sendgrid": "8.0.5",
        "Newtonsoft.Json": "9.0.1"
      }
    }
  }
}

This will trigger a NuGet restore before the function executes for the first time. Don’t forget to add the corresponding using statements to your code.

We’re almost ready to get the code done, but first we’ll need to add an output to our function. Head to the ‘Integrate’ section of your function and take note of the ‘Message parameter name’; we will use this later on. Now click ‘New Output’ and select ‘SendGrid’ (currently in preview).

The easiest way to utilize this output is to enter the from, to, subject and API key here. Mind you, the API key is the name of an Application Setting which contains the actual key!

Save the changes and then add the Application Setting corresponding to the API key name (SendGridApiKey in this example) by clicking ‘Function App Settings’ and then ‘Configure app settings’.
Once you’ve added the output, take a look at your function.json and see how it reflects the changes.
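
To give you an idea of the end result, the function.json then looks roughly like this (topic, subscription, connection and addresses are placeholders; from, to and subject carry whatever you entered in the portal):

{
  "bindings": [
    {
      "name": "incomingMessage",
      "type": "serviceBusTrigger",
      "direction": "in",
      "topicName": "mytopic",
      "subscriptionName": "mysubscription",
      "connection": "MyServiceBusConnection",
      "accessRights": "manage"
    },
    {
      "name": "message",
      "type": "sendGrid",
      "direction": "out",
      "apiKey": "SendGridApiKey",
      "from": "alerts@example.com",
      "to": "me@example.com",
      "subject": "Activity failed"
    }
  ],
  "disabled": false
}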

Finally adjust the code for run.csx to reflect your application logic. Notice how I named the ‘Message parameter name’ incomingMessage and added an out Mail message to the method signature:

#r "SendGrid"
#r "Newtonsoft.Json"
#r "Microsoft.ServiceBus"

using SendGrid.Helpers.Mail;
using Newtonsoft.Json;
using System;
using System.Threading.Tasks;
using Microsoft.ServiceBus.Messaging;

public static void Run(BrokeredMessage incomingMessage, TraceWriter log, out Mail message)
{
message = null; // set output to null, it must be set as it is a mandatory out parameter

var msgBody = incomingMessage.GetBody<string>();
var msg = JsonConvert.DeserializeObject<dynamic>(msgBody);

log.Info($"Event type: {msg.messageType}");

if(msg.messageType == "activityFailed") {
log.Info($"Found a failed activity: {msg.processId}");

message = new Mail();

var messageContent = new Content("text/html", $"Activity Failed: {msg.processId}");
message.AddContent(messageContent);
}
}

That’s it. Click Run and your message will be parsed, checked and you will be alerted in case something goes wrong :-)

The result

I’ve already received my first alert - even though I triggered it intentionally, it’s still awesome to see that I now have a low-cost, easy-to-use solution which only runs when it should. Of course there are optimizations to be made, but for now it does the trick. And in the meantime I’ve learned some more about building serverless applications using Azure Functions.


Git in VS2017 with self-signed SSL

When I’m out of the office, I connect to my team’s TFS server through the firewall and get served a properly signed (by a widely trusted CA) SSL certificate.
This means that my browser and git have no issues connecting and cloning. When I’m in the office and connected to our corporate WiFi network, I get a self-signed SSL certificate.

It’s always been a hassle to add these certificates to Git’s local certificate store, but luckily Visual Studio didn’t require you to do the same, seeing as it used libgit2. With VS2017, Microsoft switched to git.exe (which is good), but they aren’t using the one already on your PATH but rather a bundled installation which resides in the VS2017 extensions directory. This means that you have to add SSL certificates to yet another Git trusted store.

Let’s fix

Microsoft has done a nice write-up (https://blogs.msdn.microsoft.com/phkelley/2014/01/20/adding-a-corporate-or-self-signed-certificate-authority-to-git-exes-store/) of how certificates should be added to your git.exe client, and now this must be applied to Visual Studio as well to prevent this from happening:

The Git client resides in your VS2017 installation dir, which by default is C:\Program Files (x86)\Microsoft Visual Studio\2017\. If you browse to your edition (e.g. ‘Enterprise’), then into the familiar Common7\IDE directory and on to the CommonExtensions\Microsoft\TeamFoundation\Team Explorer\Git\mingw32\ssl\certs folder, you will find the ca-bundle.crt that Visual Studio uses. So the full path (for a default installation of VS2017 Enterprise) would be:

C:\Program Files (x86)\Microsoft Visual Studio\2017\Enterprise\Common7\IDE\CommonExtensions\Microsoft\TeamFoundation\Team Explorer\Git\mingw32\ssl\certs

Add your Base64-encoded certificate to that file and the next time you attempt to clone a repo within VS2017, you should be presented with the trusted VS logo ASCII art from TFS:

Hope this saves you a bit of trouble ;-)


Coretainers

Most people, if not everyone, have seen the .NET Core demos in a Docker container on Linux by now. Some may even have experimented with Windows containers and the full-fledged .NET Framework, as I showed at the SDN Event in September.
The thing is, if you haven’t looked at containers by now, you’re in for a treat. Where it used to be quite hard to figure everything out for yourself, Microsoft today announced a new way of integrating and is taking it to the next level in Visual Studio 2017. Especially when you combine the power of containers with the flexibility of .NET Core.

Docker made easy

The combination of .NET Core and containers is very powerful. It gives you a small image, which runs anywhere. You can literally ship your ‘machine’, and today it became even easier.
Starting with Visual Studio 2017, when you create a web application, you can enable Docker support out of the box:

If you have Docker for Windows installed, you can get going. If not, install it first.
This will automatically generate several files for you (the Dockerfile is sketched below the list):

  • Dockerfile (where it all starts)
  • docker-compose.yml (compose your containers, more on this in a future post)
  • docker-compose.ci.build.yml (instructions for a CI build)
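
To give you an idea of what you get, the generated Dockerfile looks roughly like this for a .NET Core 1.1 web app (the base image tag and the AwesomeWebApp assembly name are illustrative and will differ for your project):

FROM microsoft/aspnetcore:1.1
ARG source
WORKDIR /app
EXPOSE 80
COPY ${source:-obj/Docker/publish} .
ENTRYPOINT ["dotnet", "AwesomeWebApp.dll"]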

This will be all you need to get going. Really, that’s it. Just press ‘F5’ (or click the debug button, which now conveniently says ‘Docker’).
Visual Studio will now start building your application and put it into a container. The best part here is that it will link your source files on disk into the container by using volumes. If you inspect the docker-compose.vs.debug.yml file, you can clearly see the line that says:

- .:/app

What this line does is link the current directory to the /app directory within the container. This means you can edit your code (and views) live, refresh your browser and it’ll update the app that you’re running within the container. The best thing though: you can set breakpoints and they work just as if the application were running on your local dev machine.

Mind you: if your debug experience doesn’t go quite as planned and you run into an error, you might just see something like this in the output window:

ERROR: for awesomewebapp Cannot create container for service awesomewebapp: D: drive is not shared. Please share it in Docker for Windows Settings

The error message is quite verbose nowadays and tells you what to do: right-click the Docker icon in your taskbar and go to Settings. On the ‘Shared Drives’ tab, you can then share the drive where your application resides.

Publish to Azure

Now where it gets really awesome is that, starting today, you can publish your container to Azure with a few simple clicks. If you right-click your project, you can press ‘Publish’. We all know this action from years of publishing web applications through WebDeploy - and we all know what joy that brought ;-)
We then got the ability to quickly select ‘host in Azure’ when we created the project, and now we have this:

The settings are simple:

  • Provide a unique name for your app
  • Select an Azure Subscription
  • Select a resource group, or create one
  • Select or create an App Service Plan
  • Select or create a Docker registry

I’m assuming you’re familiar with Azure terms such as the resource group and service plan, but the last one deserves a bit of explanation. A Docker registry is like a repository where your container images are stored. You can have both private and public registries, Docker Hub being the most famous one. By default this will create a private registry where you can store the different versions of your container.

Press the ‘create’ button. Visual Studio and Azure will do the rest for you, it’s that simple.

Mind you: make sure that both your app service plan and registry are in the same Azure region. As of writing this post, only West US is supported. You can select the region from the ‘Services’ tab by pressing the gears next to the app service or registry you’re creating.

Result

After pushing the ‘create’ button, my container got published to Azure and I’m able to access it from my browser. And although this is of course an awesome way to publish your application, this is probably not what you want from a DevOps perspective. You want to be able to make a change to the app, commit and push your changes to the repo and have an automated build/release pipeline to put your changes in production… and you can!
That’s what another new option in VS2017 does for you:

More on this feature in a later post though. For now, experiment with the containers and new features you have and I’ll show you how to automatically create a CI/CD pipeline from right within Visual Studio in a future post.


New Blog

So as you may have noticed, I have started a new blog. It’s been a long time coming but I finally found some time this weekend. My colleague Edwin van Wijk tipped me off on using hexo quite a while ago and I seem to have gotten the hang of it. This blog itself is still a work in progress and I’ll be migrating old posts over soon, but in the meanwhile I figured I’d share some tips.

Free Blog

As you might know, GitHub offers you a free website through GitHub Pages. This means that you can host your static website right from GitHub. Combine this with Hexo magic and you can start your own blog quite easily. What you might not know is that you can also add a custom domain to your GitHub page:

Now although this by itself is pretty cool, it gets better. Although it’s possible to use SSL on GitHub pages, this isn’t currently possible when using a custom domain, or is it?

CloudFlare to the rescue

CloudFlare offers a free tier that not only makes your website faster by using a smart caching mechanism (which you might want to turn off, seeing as hexo generates static content), but also offers free SSL for all sites. Simply register for a free account on their site, go to the ‘DNS’ tab and add a CNAME for your domain, like so:

For the DNS-savvy, yes, I used a CNAME as my domain’s root, please refer to this page on details as to why this is still RFC compliant.

Then navigate to the ‘Crypto’ tab in the menu and set it to the following:

Now for the final step, which ensures all your users are automatically redirected to your SSL page, navigate to the ‘Page Rules’ tab and add the following rules (where you replace the domain with your own domain). If you use a sub-domain such as ‘blog.domain.com’, make sure to use two asterisks (*) in the first rule and replace $1 in the rule with $2 so that it will correctly rewrite:
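
Roughly, the first (redirect) rule for an apex domain looks something like this (using a placeholder domain; the screenshots show the exact settings):

If URL matches:  http://yourdomain.ext/*
Then:            Forwarding URL (301 - Permanent Redirect) to https://yourdomain.ext/$1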

In case you do want to disable caching to prevent issues with your static site, enable a third rule where you match https://yourdomain.ext/* and set the action to ‘Cache Level = ByPass’:

Sit back and relax

That’s it. You’re done. You have just set up your new secure site using hexo, GitHub Pages and CloudFlare. Of course you can also use this with the Basic tier in Azure, which allows you to use your own custom SSL certificate for just 8-odd euros a month ;-)


Bash for Windows

So last week at //Build/ Microsoft announced native Bash integration on the Windows 10 platform, and today they delivered the first preview. Being a Windows Insider since nearly day 1 – including installing those buggy mobile builds on my daily driver – I still have my machine set to the fast ring and I received build 14316 today. After about 30 mins of installation (ymmv), I eagerly logged in and typed ‘bash’. Unfortunately, nothing happened.

Then I realized I had to switch some options on. First you need to enable the ‘developer mode’. You can do this by opening the settings app and selecting the correct option:

Next you can enable the optional Windows feature ‘Windows Subsystem for Linux (Beta)’:

After a reboot, you can press the Windows key and enter ‘bash’. A new prompt will open asking whether you want to install Ubuntu – say what:

And that’s it, you’re root:

A few tips:

  • right-click the title bar, go to ‘Properties’ and enable ‘Quick Edit’; this allows you to copy/paste into the window.
  • if you’re like me and you try to install Docker even though you kind of knew it wouldn’t work: it doesn’t work. Luckily an easy integration that runs a Docker host in Hyper-V is just around the corner (and I already run the beta), so no sweat there, I just had to try 🙂