Setting up my development machine

Here's the configuration of my developer machine. I'm a .NET and web developer, so only the software relevant to these technologies is on the list. I've also added a few tools I use every day that aren't directly related to development. I'm sure I forgot some of the tools I use regularly. If you think I should add something (software/extension/configuration/service), feel free to comment.

Web browsers



Windows Configuration

Online services

How to publish a dotnet global tool with .NET Core 2.1

People doing web development are used to installing tools using npm. For instance, you can install TypeScript using npm install -g typescript. Then you can use TypeScript directly from the command line. This is very convenient. .NET Core 2.1 allows you to create tools that can be installed the same way as with npm. This feature provides a simple way to create and share cross-platform console tools.

.NET tools are packaged as NuGet packages, so you rely on processes that are well established.

Create the project

First, you need to download the .NET Core 2.1 SDK. Then you need to create a console application that targets .NET Core 2.1. You can use the command line or Visual Studio. Let's use the command line (it's very easy):

dotnet new console

This creates a basic console project that prints "Hello world" to the console.

Now, let's modify the project file to add the tool configuration. You need to add <PackAsTool> and <ToolCommandName>. The first indicates that the NuGet package to create is not a library but a tool. The second indicates the name of the command used to run your tool from the command line.

<Project Sdk="Microsoft.NET.Sdk">

  <PropertyGroup>
    <OutputType>Exe</OutputType>
    <TargetFramework>netcoreapp2.1</TargetFramework>
    <PackAsTool>true</PackAsTool>
    <ToolCommandName>MyDotNetCoreTool</ToolCommandName>
  </PropertyGroup>

</Project>
That's all you need to create a .NET Core global tool. Now we can publish it!

Publish the tool on NuGet

A tool is just a NuGet package, so to publish the tool and make it available to everyone, you publish it like any other NuGet package.

First, you need to get an API key on nuget.org:

Create NuGet API Key

Copy NuGet API Key

Then, you can create the package and publish it to NuGet:

dotnet pack --configuration Release
dotnet nuget push .\bin\release\MyDotNetCoreTool.1.0.0.nupkg --source https://api.nuget.org/v3/index.json --api-key <Your NuGet API key>

It will take a few minutes for the package to be indexed and become available.

Package on nuget.org

Install the tool

The command line to install the tool is shown on nuget.org, so the following command is not a surprise 😉

dotnet tool install --global MyDotNetCoreTool

Install Tool

Then, you can run the tool using the name set in <ToolCommandName>:


Tool demo

List the .NET Core global tools installed on the machine

If you don't remember the tools you installed, you can run the following command to get the full list:

dotnet tool list -g

Update the tool

If you want to update the tool, you must publish a newer version of the NuGet package. You can set the version of the generated NuGet package in the csproj file using the <Version> element:

<Project Sdk="Microsoft.NET.Sdk">

  <PropertyGroup>
    <OutputType>Exe</OutputType>
    <TargetFramework>netcoreapp2.1</TargetFramework>
    <PackAsTool>true</PackAsTool>
    <ToolCommandName>MyDotNetCoreTool</ToolCommandName>
    <Version>2.0.0</Version>
  </PropertyGroup>

</Project>
Then, you can use the same commands as for publishing the first NuGet package:

dotnet pack --configuration Release
dotnet nuget push .\bin\release\MyDotNetCoreTool.2.0.0.nupkg --source https://api.nuget.org/v3/index.json --api-key <Your NuGet API key>

There is no auto-update functionality, so the users of your tool must update it manually when needed:

dotnet tool update --global MyDotNetCoreTool

Uninstall the tool

You can uninstall a tool using the following command:

dotnet tool uninstall --global MyDotNetCoreTool

The tool won't be available from the command line anymore. You can check that the tool is indeed uninstalled by running dotnet tool list -g.

Test a tool without publishing it on NuGet

When you create a tool, you may want to test it on your machine without publishing it to nuget.org or any NuGet server. The command line allows you to specify the feed source, and this source can be a local directory. So, you can use the following commands:

dotnet pack --output ./
dotnet tool install -g MyDotNetCoreTool --add-source ./

If you want to try a new version, you first need to uninstall the tool, or update it:

dotnet tool uninstall -g MyDotNetCoreTool
dotnet tool install -g MyDotNetCoreTool --add-source ./


dotnet tool update -g MyDotNetCoreTool --add-source ./


.NET Core global tools provide a simple way to create and share cross-platform console tools, and they are very easy to create. I think many tools will be created in the next months.

Tip: Automatically create a crash dump file on error

Crash dumps are very useful for debugging an application. Recently, I worked on a Visual Studio extension we use in my company. It's very easy to develop this kind of application. However, there are lots of reasons for your extension to crash. Of course, you can add lots of try/catch blocks, but you'll surely miss the important one, and VS will crash. When that happens, you'd like to be able to attach a debugger and see the exception and the stack trace. Instead of attaching a debugger, you can automatically generate a crash dump that you can use to debug the application later.

In Windows, you can configure Windows Error Reporting (WER) to generate a dump when an application crashes.

  1. Open regedit.exe
  2. Open the key HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\Windows Error Reporting\LocalDumps
  3. Set the value DumpFolder (REG_EXPAND_SZ) to the directory where you want the dumps to be created
  4. Optionally, you can prevent WER from keeping lots of crash dumps by setting DumpCount (DWORD) to a low number

Maybe you prefer to set the configuration using PowerShell:

New-Item -Path "HKLM:\SOFTWARE\Microsoft\Windows\Windows Error Reporting" -Name "LocalDumps"
New-ItemProperty -Path "HKLM:\SOFTWARE\Microsoft\Windows\Windows Error Reporting\LocalDumps" -Name "DumpFolder" -Value "%LOCALAPPDATA%\CrashDumps" -PropertyType "ExpandString"
New-ItemProperty -Path "HKLM:\SOFTWARE\Microsoft\Windows\Windows Error Reporting\LocalDumps" -Name "DumpCount" -Value 10 -PropertyType DWord

You can also configure WER per application. So, if you want to generate a full dump for only one application, you can create a key for that application under LocalDumps with the configuration you want. For instance, if your application is devenv.exe, the key name is devenv.exe. It's that simple!

WER configuration

When your application crashes, you can go to %LOCALAPPDATA%\CrashDumps:


Find the latest dump and open it in Visual Studio to start debugging the application. By default, it shows system information, the exception that crashed the application, and the list of modules. You can use the action section on the right to start the debugger and find more information about the exception.

Debug crash dump in Visual Studio


Write your own DOM element factory for TypeScript

The DOM API allows you to manipulate the HTML document in a browser. It's simple to use, but the code is not very readable. Here's the code needed to create two elements and set an attribute:

function foo() {
    let div = document.createElement("div");
    div.className = className;
    let anchor = document.createElement("a");
    anchor.href = "";
    anchor.textContent = "meziantou";
    div.appendChild(anchor);
    return div;
}

The JSX syntax introduced by React allows you to mix JavaScript and HTML-like markup. The goal is to create code that is easy to write and to read. JSX is compiled by a preprocessor such as Babel (or TypeScript, as we'll see later) to valid JavaScript code. Using JSX, the previous function can be written as:

function foo() {
    return (
        <div className={className}>
            <a href="">meziantou</a>
        </div>);
}

If you use React, the code will be converted to:

function foo() {
    return React.createElement("div", { className: className },
        React.createElement("a", { href: "" }, "meziantou"));
}

React.createElement creates a virtual DOM that is translated to the real DOM at the end. The idea of the virtual DOM is to reduce the number of DOM operations, but that's not very important for this post… For simplicity, just imagine React.createElement is similar to document.createElement.
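To make the compiled form concrete, here's a minimal, DOM-free sketch of what a JSX factory receives. The VNode type and the names below are mine for illustration, not part of React:

```typescript
// Hypothetical, DOM-free sketch of what a JSX factory receives: the compiler
// turns nested tags into nested factory calls, building a tree bottom-up.
type VNode = { tag: string; attrs: Record<string, unknown> | null; children: unknown[] };

function createElement(tag: string, attrs: Record<string, unknown> | null, ...children: unknown[]): VNode {
    return { tag, attrs, children };
}

// Equivalent of: <div className="x"><a href="">meziantou</a></div>
const tree = createElement("div", { className: "x" },
    createElement("a", { href: "" }, "meziantou"));
```

Text children arrive as plain strings, and nested elements arrive as already-built nodes, which is why a factory only has to handle one level at a time.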

TypeScript supports the JSX syntax, so you can take advantage of the type checker, IDE autocompletion, and refactoring. TypeScript can preserve the JSX syntax (in case you want to use another precompiler), replace it with calls to React.createElement, or use a custom factory. The last option allows you to use JSX without React, as long as you provide your own factory.

A factory is just a function with the following declaration:

interface AttributeCollection {
    [name: string]: string | boolean;
}

var Fragment;

function createElement(tagName: string, attributes: AttributeCollection | null, ...children: any[]): any;

So, it's not very complicated to implement this function. You just need to use document.createElement, set the attributes, and add the children. However, there are a few points of attention:

  • JSX does not allow the attribute class. Instead, you have to use className. So, the factory has to handle this case.
  • Using JSX, you can register an event handler using <a onclick={...}></a>. However, the setAttribute function only accepts a string value. So, you have to handle this case by using addEventListener.
  • Fragments (<>...</>) are replaced by factory.createElement(factory.Fragment, null, ...). So, we can use a special name to create a DocumentFragment in the createElement function.
namespace MyFactory {
    const Fragment = "<></>";

    export function createElement(tagName: string, attributes: JSX.AttributeCollection | null, ...children: any[]): Element | DocumentFragment {
        if (tagName === Fragment) {
            return document.createDocumentFragment();
        }

        const element = document.createElement(tagName);
        if (attributes) {
            for (const key of Object.keys(attributes)) {
                const attributeValue = attributes[key];

                if (key === "className") { // JSX does not allow class as a valid name
                    element.setAttribute("class", attributeValue);
                } else if (key.startsWith("on") && typeof attributes[key] === "function") {
                    element.addEventListener(key.substring(2), attributeValue);
                } else {
                    // <input disabled />     { disabled: true }
                    // <input type="text" />  { type: "text" }
                    if (typeof attributeValue === "boolean" && attributeValue) {
                        element.setAttribute(key, "");
                    } else {
                        element.setAttribute(key, attributeValue);
                    }
                }
            }
        }

        for (const child of children) {
            appendChild(element, child);
        }

        return element;
    }

    function appendChild(parent: Node, child: any) {
        if (typeof child === "undefined" || child === null) {
            return;
        }

        if (Array.isArray(child)) {
            for (const value of child) {
                appendChild(parent, value);
            }
        } else if (typeof child === "string") {
            parent.appendChild(document.createTextNode(child));
        } else if (child instanceof Node) {
            parent.appendChild(child);
        } else if (typeof child === "boolean") {
            // <>{condition && <a>Display when condition is true</a>}</>
            // if condition is false, the child is a boolean, but we don't want to display anything
        } else {
            parent.appendChild(document.createTextNode(String(child)));
        }
    }
}
Finally, you need to change the tsconfig.json file to tell the TypeScript compiler how to convert JSX:

{
    "compilerOptions": {
        "jsx": "react", // use the React mode, i.e. call the factory function
        "jsxFactory": "MyFactory.createElement" // the name of the factory function
    }
}

You can now create a file with the extension .tsx and use the JSX syntax. Hope this helps you write DOM code!

If you want to see a real usage of a custom factory, you can check my Password Manager project on GitHub.

Library Manager, a client-side library manager in Visual Studio 2017

Microsoft released Library Manager a few weeks ago. Library Manager is a Visual Studio's new client-side static content management system. Designed as a replacement for Bower and npm, Library Manager helps users find and fetch library files from an external source (like CDNJS) or from any file system library catalog.

Library Manager is open source. You can find the source of the project on GitHub:

How does it work?

In Visual Studio, you have a new contextual menu item "Manage Client-Side Libraries…":

It creates a new file named libman.json. This file contains the list of libraries to download. Each library has a name, a version, a list of files to download, and the location where the files will be copied. Of course, there is autocompletion for the name, the version, and the files!

{
  "version": "1.0",
  "defaultProvider": "cdnjs",
  "libraries": [
    {
      "library": "systemjs@0.21.2",
      "destination": "wwwroot/lib/systemjs",
      "files": [ … ]
    },
    {
      "library": "font-awesome@4.7.0",
      "destination": "wwwroot/lib/font-awesome",
      "files": [ … ]
    }
  ]
}

The file is easy to read and write (even more so with the autocompletion). library is composed of the name and the version of the library. destination is the path where the files will be downloaded. files is the list of files of the library to download.
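To make the shape of an entry explicit, here's a hypothetical TypeScript model of the format described above (the type and function names are mine, not part of LibMan):

```typescript
// Hypothetical model of one libman.json library entry.
interface LibraryEntry {
    library: string;      // "<name>@<version>"
    destination: string;  // where the files are copied
    files?: string[];     // subset of the library's files to download
}

// Split a "<name>@<version>" identifier into its two parts.
// lastIndexOf is used so a leading "@scope/" in the name would not break it.
function parseLibraryId(library: string): { name: string; version: string } {
    const at = library.lastIndexOf("@");
    return { name: library.substring(0, at), version: library.substring(at + 1) };
}

const entry: LibraryEntry = {
    library: "systemjs@0.21.2",
    destination: "wwwroot/lib/systemjs",
};
```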

Every time the file is saved, Visual Studio will install/restore the packages. You can also restore them manually using the context menu:

If you want to restore the packages at build time, you can use an MSBuild task. This may be useful when building on a build server (CI), or when working outside of Visual Studio. You can add the MSBuild task automatically by clicking on "Enable Restore on Build":

This will add the package Microsoft.Web.LibraryManager.Build to your project. Then, when you build the project, the files will be downloaded and copied to the specified destination:

Tip: You can quickly update or uninstall a library using the light bulb. This will help you keep your libraries up to date easily:

Why do we need this tool?

A few years ago, you would add a library such as Bootstrap using a NuGet package. While NuGet is very good for managing DLL dependencies, it doesn't fit well with client-side dependencies. Indeed, you cannot choose where the files are copied, nor which files you want; those decisions are made by the owner of the package, so you can end up with multiple locations and hierarchies. This wasn't a good idea, so people moved to Bower. Bower was great, but Bower's own website now recommends migrating to yarn and webpack.

If you are doing a Single Page Application or a complex front-end application using tools like npm/yarn and webpack, you may already have everything you need to manage your dependencies. So, you don't need a new tool like LibMan.

If you are building a basic website and you want to add libraries such as Bootstrap or Font Awesome, you may not want to bother with Node.js and npm. Indeed, npm has some drawbacks:

  • npm downloads everything into node_modules, so you need to copy the files you want to wwwroot, which requires an MSBuild task or another toolchain, maybe based on Node.js.
  • npm downloads the whole package even if you need only one file, so the first install can take a lot of time.
  • npm requires Node.js. While npm is installed with Visual Studio, that may not be the case on a build server.

Library Manager tries to address these issues:

  • LibMan is well integrated into the .NET ecosystem (NuGet package, Visual Studio extension). You don't need to run npm install before building the .NET project; instead, building the project restores the NuGet packages and then restores the files.
  • LibMan is faster because it only downloads the necessary files.
  • LibMan can download the files directly into wwwroot, or wherever you want, so you don't need a post-restore step to copy them.

To conclude, if you are building a basic website and you need to add a few libraries, Library Manager is a good option. For something more complex, such as an SPA, you may want to go with npm and webpack.