Library Manager, a client-side library manager in Visual Studio 2017

Microsoft released Library Manager a few weeks ago. Library Manager is Visual Studio's new client-side static content management system. Designed as a replacement for Bower and npm, Library Manager helps users find and fetch library files from an external source (like CDNJS) or from any file system library catalog.

Library Manager is open source. You can find the source of the project on GitHub: https://github.com/aspnet/LibraryManager.

How does it work?

In Visual Studio, you have a new context menu item "Manage Client-Side Libraries":

It creates a new file named libman.json. This file contains the list of libraries to download. Each library has a name, a version, a list of files to download, and the location where the files will be copied. Of course, there is auto-completion for the name, the version, and the files!

{
  "version": "1.0",
  "defaultProvider": "cdnjs",
  "libraries": [
    {
      "library": "systemjs@0.21.2",
      "destination": "wwwroot/lib/systemjs",
      "files": [
        "system.js",
        "system.js.map"
      ]
    },
    {
      "library": "font-awesome@4.7.0",
      "destination": "wwwroot/lib/font-awesome",
      "files": [
        "css/font-awesome.min.css",
        "fonts/fontawesome-webfont.eot",
        "fonts/fontawesome-webfont.svg",
        "fonts/fontawesome-webfont.ttf",
        "fonts/fontawesome-webfont.woff",
        "fonts/fontawesome-webfont.woff2",
        "fonts/FontAwesome.otf"
      ]
    }
  ]
}

The file is easy to read and write (even more so with auto-completion). library is the name and version of the library. destination is the path where the files will be copied. files is the list of the library's files to download.

Every time the file is saved, Visual Studio will install/restore the packages. You can also restore them manually using the context menu:

If you want to restore the packages at build time, you can use an MSBuild task. This may be useful when building on a build server (CI), or when working outside of Visual Studio. You can add the MSBuild task automatically by clicking on "Enable Restore on Build":

This will add the package Microsoft.Web.LibraryManager.Build to your project. Then, when you build the project, the files will be downloaded and copied to the specified destination:
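Concretely, this adds a PackageReference to the .csproj file, roughly like the sketch below (the version number is only illustrative; Visual Studio adds the current one):

<ItemGroup>
  <!-- Restores the libraries listed in libman.json during the build (version shown is illustrative) -->
  <PackageReference Include="Microsoft.Web.LibraryManager.Build" Version="1.0.163" />
</ItemGroup>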

Tip: You can quickly update or uninstall a library using the light bulb. This helps you keep your libraries up to date easily:

Why do we need this tool?

A few years ago, you would add a library such as Bootstrap using a NuGet package. While NuGet is very good at managing DLL dependencies, it doesn't fit well with client-side dependencies. Indeed, you cannot choose where the files are copied, nor which files you want. These decisions are made by the owner of the package, so you could end up with multiple locations and hierarchies. This wasn't a good idea, so people moved to Bower. Bower was great, but the Bower website now recommends migrating to Yarn and webpack.

If you are doing a Single Page Application or a complex front-end application using tools like npm/yarn and webpack, you may already have everything you need to manage your dependencies. So, you don't need a new tool like LibMan.

If you are doing a basic website, and you want to add libraries such as Bootstrap or FontAwesome, you may not want to bother with Node.js and npm. Indeed, npm has some drawbacks:

  • npm downloads everything in node_modules, so you need to copy the files you want to wwwroot. You may need an MSBuild task or another toolchain, maybe based on Node.js, to do so.
  • npm downloads the whole repository even if you need only one file, so the first download can take a lot of time
  • npm requires Node.js. While npm is installed with Visual Studio, that may not be the case on a build server

Library Manager tries to address these issues:

  • LibMan is well integrated into the .NET ecosystem (NuGet package, Visual Studio extension). You don't need to run npm install before building the .NET project. Instead, building the project will restore the NuGet packages and then restore the files.
  • LibMan is faster because it only downloads the necessary files
  • LibMan can download the files directly into wwwroot or wherever you want, so you don't need a post-restore step to copy them.

To conclude, if you are building a basic website and you need to add a few libraries, Library Manager is a good option. For something more complex such as a SPA, you may go with npm and webpack.

Which version of ECMAScript should I use in the TypeScript configuration?

TypeScript allows you to convert most ES Next features to ES3, ES5, ES6, ES2016, or ES2017. Of course, you can also target ES Next. But which version should you target?

Why should you use the highest possible version?

Using the highest version allows you to write shorter code and use more readable features, such as async/await, for..of, and spread. It's not only shorter, but also easier to debug. For instance, async/await is rewritten by TypeScript into a state machine, so the call stack is harder to understand, and stepping into the next statement isn't easy because you have to step through the state machine functions. Source maps can sometimes help, but they don't solve every issue. You can also blackbox some scripts; for instance, you can blackbox tslib if you import the TypeScript helpers.
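For instance, here's a minimal async function (a made-up example). When targeting ES5, TypeScript rewrites it using the __awaiter and __generator helpers, which form the state machine mentioned above; when targeting ES2017 or higher, the async/await keywords are emitted as-is:

async function getUserName(id: number): Promise<string> {
    // The URL is only an example
    const response = await fetch(`/api/users/${id}`);
    const user = await response.json();
    return user.name;
}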

So, if you can target ES Next, do it! But unfortunately this is not always possible. Let's see how to choose the right version!

Select your target JS runtime

First, you must know which runtime you want to support. Do you need to run your application in a web browser (and which ones), in Node.js, or in Electron? Depending on this choice, you know which JS flavor you can use. For instance, if you choose Electron, you know it uses Chromium version XXX, so you know which functionalities are available. If you use Node.js, it also uses V8, the JS engine of Chromium, so it's easy to know which features are supported. For web browsers, it's a little more complicated. You may want to support multiple browsers, and multiple versions of each browser.

You can check which features are supported by web browsers and JS runtimes here: https://kangax.github.io/compat-table/es6. Tip: you can change the ES version at the top.

For instance, if you want to target IE11, you'll have to target ES5. If you want to support Node.js, Edge, or Chromium, ES6 is OK.

Once you know which version you want to use, update the tsconfig.json file to reflect your decision:

{
    "compilerOptions": {
        "target": "ES2016" // "ES3" (default), "ES5", "ES6"/"ES2015", "ES2016", "ES2017" or "ESNext".
    }
}

Which libraries to target?

Changing the target version also changes the available libraries. For instance, if you target ES5, you cannot use Promise. But Promise is not a feature that must be implemented by the engine: you can use another library, such as bluebird, to provide it. This means you can target ES5 and still use Promise, as long as you add it using an external library. It's the same for Array.prototype.includes and lots of other functions.

TypeScript allows you to specify which libraries are available. You can select them in the configuration file tsconfig.json:

{
    "compilerOptions": {
        "lib": [
            "ES5",
            "ES2015.Promise",
            "ES2016.Array.Include"
        ]
    }
}
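For instance, with the configuration above, the following code compiles even though the target is ES5 (assuming a runtime polyfill, such as core-js or bluebird, is loaded for browsers that lack these features):

// Allowed by "ES2015.Promise" in the lib section
Promise.resolve(42).then(value => console.log(value));

// Allowed by "ES2016.Array.Include" in the lib section
console.log([1, 2, 3].includes(2));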

You can find the list of available libraries in the TypeScript documentation.

BTW, you can read my previous post about dynamically importing polyfills.

Development vs Release configurations?

As I said in the introduction, using a higher version of ECMAScript may help when debugging your application. So, it may not be a bad idea to have two configurations: one for development and another for release. For instance, the first one can target ES Next because you are debugging on a recent browser, while the second one can target ES5 because your customers may use an older browser.

TypeScript supports configuration inheritance. So, you can create a common tsconfig.json that contains all the settings, and a tsconfig.dev.json that inherits from tsconfig.json. You can build using tsc -p tsconfig.dev.json. You can read the documentation about configuration inheritance for more information.
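For instance, the two files could look like this minimal sketch (the values are only an example):

// tsconfig.json - shared settings, targets ES5 for the release build
{
    "compilerOptions": {
        "target": "ES5",
        "strict": true
    }
}

// tsconfig.dev.json - inherits everything and only overrides the target
{
    "extends": "./tsconfig.json",
    "compilerOptions": {
        "target": "ESNext"
    }
}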

Babel

If you are using webpack, gulp, or any build tool, you may consider the Babel option. The idea is to configure TypeScript to target ES Next, and transpile to another version using Babel. Based on the compatibility table linked above, Babel can transpile more features to a lower ES version than TypeScript. Using webpack, you can also automatically include polyfills with a plugin such as webpack-polyfill-injector.
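Here's a minimal sketch of what this could look like with webpack, assuming ts-loader and babel-loader are installed (webpack runs the loaders bottom-up, so TypeScript compiles to ES Next first and Babel then transpiles the result down):

// webpack.config.js
module.exports = {
    entry: "./src/index.ts",
    resolve: { extensions: [".ts", ".js"] },
    module: {
        rules: [
            {
                test: /\.ts$/,
                exclude: /node_modules/,
                use: [
                    // Runs second: transpiles the ES Next output down to your supported browsers
                    { loader: "babel-loader", options: { presets: ["@babel/preset-env"] } },
                    // Runs first: compiles TypeScript with "target": "ESNext"
                    "ts-loader"
                ]
            }
        ]
    }
};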

Conclusion

Choose the configuration that makes you the most productive and that runs on your target browsers/runtimes.

Get the name of a TypeScript class at runtime

In .NET, it's easy to get the class name of an object using obj.GetType().Name. In JavaScript, this doesn't work: typeof obj returns "object" or something else, but never the name of the class. However, this doesn't mean you cannot get the name of a class in JS.

In ES6, you can use Function.name to get the name of a function (documentation).

function test() { }
console.log(test.name); // print "test"

Well, in JavaScript, a class is a function! So, you can get its name using the name property:

class Sample { }
console.log(Sample.name); // print "Sample"

For an instance of a class, you can use the constructor property to get the constructor function: obj.constructor. This way you can get the name of the class by getting the name of the constructor function:

const obj = new Sample();
console.log(obj.constructor.name); // print "Sample"

Note 1: If you minify your scripts, some functions/classes may be renamed. So, the name of the class won't be the original name (I mean the name from your original source file), but the name after minification. UglifyJS has an option to not mangle some names: uglifyjs ... -m reserved=['$','Sample']

Note 2: This doesn't work if the class contains a static member named name. In that case, the static member takes precedence over the automatically created name property, so you won't be able to get the name of the class this way. You can read the specification about this behavior.

Note 3: TypeScript doesn't show constructor in auto-completion. However, it is fully supported and typed.
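For instance, here's a small TypeScript helper (just a sketch) that relies on this typing:

class Sample { }

// constructor is typed as Function, and Function.name is a string (with the ES2015 lib)
function getClassName(obj: object): string {
    return obj.constructor.name;
}

console.log(getClassName(new Sample())); // prints "Sample"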

JWT authentication with ASP.NET Core

In a previous post, I've written about using cookie authentication for an ASP.NET Core web site. Authenticating users with a cookie is common for a web site. However, for an API, it's more common to use a token for authentication. JSON Web Token (JWT) is a way to create and validate a token. In this post, we'll see how to use JWT with ASP.NET Core to authenticate users. While the client can be any kind of application, I'll use a front-end application with JavaScript/TypeScript.

What's JSON Web Token (JWT)?

JSON Web Token (JWT) is an open standard (RFC 7519) that defines a compact and self-contained way for securely transmitting information between parties as a JSON object. This information can be verified and trusted because it is digitally signed. JWTs can be signed using a secret (with the HMAC algorithm) or a public/private key pair using RSA. https://jwt.io/introduction

JWTs consist of 3 parts:

  • The header contains the type of the token (JWT) and the algorithm used to sign it
  • The payload contains the list of claims. Claims can be of 3 types: predefined claims (issuer, subject, expiration date, etc.), public claims (defined in the IANA JWT registry), and private claims (custom names)
  • The signature is used to verify the message wasn't changed along the way

To create the JWT, the three parts are base64url-encoded and separated by a dot. Here's the header and the payload of a JWT:

{
  "alg": "HS256",
  "typ": "JWT"
}

{
  "sub": "meziantou",
  "iss": "meziantou.net"
}

This token is encoded in this form:

eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJzdWIiOiJtZXppYW50b3UiLCJpc3MiOiJtZXppYW50b3UubmV0In0.LR3RSg_y4xtsvc7vnKh3JXySCQtEsSNo4xkDBW9J2r4
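The header and the payload are just base64url-encoded JSON, so you can decode them yourself. Here's a quick sketch (not a robust decoder: it ignores base64url's "-" and "_" characters and missing padding):

const token = "eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJzdWIiOiJtZXppYW50b3UiLCJpc3MiOiJtZXppYW50b3UubmV0In0.LR3RSg_y4xtsvc7vnKh3JXySCQtEsSNo4xkDBW9J2r4";
const [header, payload] = token.split(".");
console.log(JSON.parse(atob(header)));  // { alg: "HS256", typ: "JWT" }
console.log(JSON.parse(atob(payload))); // { sub: "meziantou", iss: "meziantou.net" }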

Then, you can use it to authenticate by using the Authorization header:

Authorization: bearer eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJzdWIiOiJtZXppYW50b3UiLCJpc3MiOiJtZXppYW50b3UubmV0In0.LR3RSg_y4xtsvc7vnKh3JXySCQtEsSNo4xkDBW9J2r4

Note that the authority that delivers the token and the one that validates it may be different. The only requirement is being able to validate the signature, so you are sure the token was generated by the trusted authority.

Now that you have a better understanding of what a JSON Web Token is, let's create an ASP.NET Core application that uses JWT to authenticate users!

Prerequisites - Generate a secret key

To create and validate a token, you must use a secret key. According to the following Information Security Stack Exchange post, the key should be 256 bits long for the HmacSha256 algorithm (read the thread carefully, because the required length depends on the algorithm). Using .NET, it's very easy to generate a random key. Create a console application and copy the following code:

public static void Main(string[] args)
{
    using (var rng = System.Security.Cryptography.RandomNumberGenerator.Create())
    {
        var bytes = new byte[256 / 8];
        rng.GetBytes(bytes);
        Console.WriteLine(Convert.ToBase64String(bytes));
    }
}

Then, you can store the generated key in the configuration file of your ASP.NET Core web site. Open the appsettings.json file and add the following section:

{
  "JwtAuthentication": {
    "SecurityKey": "ouNtF8Xds1jE55/d+iVZ99u0f2U6lQ+AHdiPFwjVW3o=",
    "ValidAudience": "https://localhost:44318/",
    "ValidIssuer": "https://localhost:44318/"
  }
}

Finally, you can register the configuration in the service collection to retrieve these settings easily:

using Microsoft.IdentityModel.Tokens;

public class JwtAuthentication
{
    public string SecurityKey { get; set; }
    public string ValidIssuer { get; set; }
    public string ValidAudience { get; set; }

    public SymmetricSecurityKey SymmetricSecurityKey => new SymmetricSecurityKey(Convert.FromBase64String(SecurityKey));
    public SigningCredentials SigningCredentials => new SigningCredentials(SymmetricSecurityKey, SecurityAlgorithms.HmacSha256);
}

public class Startup
{
    public Startup(IConfiguration configuration)
    {
        Configuration = configuration;
    }

    public IConfiguration Configuration { get; }

    public void ConfigureServices(IServiceCollection services)
    {
        services.AddMvc();
        services.Configure<JwtAuthentication>(Configuration.GetSection("JwtAuthentication"));
    }
}

By the way, it can be a good idea to look at Azure Key Vault, AWS KMS or their competitors to store this secret key.

Generate a token for a user

The first part is to generate a token for a client. Before issuing a token, you must check that the user's credentials are valid. In this sample, we'll use a dummy validation. In your application, you should consider using ASP.NET Core Identity or a similar solution to handle users and validate passwords.

using System;
using System.ComponentModel.DataAnnotations;
using System.IdentityModel.Tokens.Jwt;
using System.Security.Claims;
using Microsoft.AspNetCore.Authorization;
using Microsoft.AspNetCore.Mvc;
using Microsoft.Extensions.Options;

public class UserController : Controller
{
    private readonly IOptions<JwtAuthentication> _jwtAuthentication;

    public UserController(IOptions<JwtAuthentication> jwtAuthentication)
    {
        _jwtAuthentication = jwtAuthentication ?? throw new ArgumentNullException(nameof(jwtAuthentication));
    }

    [HttpPost]
    [AllowAnonymous]
    public IActionResult GenerateToken([FromBody]GenerateTokenModel model)
    {
        // TODO use your actual logic to validate a user
        if (model.Password != "654321")
            return BadRequest("Username or password is invalid");

        var jwtAuthentication = _jwtAuthentication.Value;
        var token = new JwtSecurityToken(
            issuer: jwtAuthentication.ValidIssuer,
            audience: jwtAuthentication.ValidAudience,
            claims: new[]
            {
                // You can add more claims if you want
                new Claim(JwtRegisteredClaimNames.Sub, model.Username),
                new Claim(JwtRegisteredClaimNames.Jti, Guid.NewGuid().ToString()),
            },
            expires: DateTime.UtcNow.AddDays(30),
            notBefore: DateTime.UtcNow,
            signingCredentials: jwtAuthentication.SigningCredentials);

        return Ok(new
        {
            token = new JwtSecurityTokenHandler().WriteToken(token)
        });
    }

    public class GenerateTokenModel
    {
        [Required]
        public string Username { get; set; }
        [Required]
        public string Password { get; set; }
    }
}

Authenticate the user on the server

The next part is to authenticate the user using the token. ASP.NET Core already contains everything needed for that. It reads the value of the Authorization header, parses it, and checks that the token is valid.

First, you need to protect your action from anonymous users. You can use the Authorize attribute with the Bearer scheme:

public class SampleController : Controller
{
    [Authorize(AuthenticationSchemes = JwtBearerDefaults.AuthenticationScheme)]
    public IActionResult Get()
    {
        return Ok();
    }
}

In the Startup.cs file, add the following code to register the JWT authentication handler:

using System.Security.Claims;
using Microsoft.AspNetCore.Authentication.JwtBearer;
using Microsoft.Extensions.DependencyInjection;
using Microsoft.Extensions.Options;
using Microsoft.IdentityModel.Tokens;

public class Startup
{
    public void ConfigureServices(IServiceCollection services)
    {
        // [...]

        services.Configure<JwtAuthentication>(Configuration.GetSection("JwtAuthentication"));

        // I use PostConfigureOptions to be able to use dependency injection for the configuration
        // For simple needs, you can set the configuration directly in AddJwtBearer()
        services.AddSingleton<IPostConfigureOptions<JwtBearerOptions>, ConfigureJwtBearerOptions>();
        services.AddAuthentication(JwtBearerDefaults.AuthenticationScheme)
            .AddJwtBearer();
    }

    private class ConfigureJwtBearerOptions : IPostConfigureOptions<JwtBearerOptions>
    {
        private readonly IOptions<JwtAuthentication> _jwtAuthentication;

        public ConfigureJwtBearerOptions(IOptions<JwtAuthentication> jwtAuthentication)
        {
            _jwtAuthentication = jwtAuthentication ?? throw new System.ArgumentNullException(nameof(jwtAuthentication));
        }

        public void PostConfigure(string name, JwtBearerOptions options)
        {
            var jwtAuthentication = _jwtAuthentication.Value;

            options.ClaimsIssuer = jwtAuthentication.ValidIssuer;
            options.IncludeErrorDetails = true;
            options.RequireHttpsMetadata = true;
            options.TokenValidationParameters = new TokenValidationParameters
            {
                ValidateActor = true,
                ValidateIssuer = true,
                ValidateAudience = true,
                ValidateLifetime = true,
                ValidateIssuerSigningKey = true,
                ValidIssuer = jwtAuthentication.ValidIssuer,
                ValidAudience = jwtAuthentication.ValidAudience,
                IssuerSigningKey = jwtAuthentication.SymmetricSecurityKey,
                NameClaimType = ClaimTypes.NameIdentifier
            };
        }
    }
}

The website is now ready. You can generate a token and use this token to authenticate the users. It's time to create a client. Let's create a JavaScript client.

JavaScript Client

First, you need to generate a token for the current user. The request is a POST that contains your username and password.

Note: the URL may be different in your context.

 const response = await fetch("/user/generatetoken", {
        method: "POST",
        body: JSON.stringify({
            username: "foo@bar",
            password: "654321"
        }),
        headers: {
            "Content-Type": "application/json",
            "Accept": "application/json"
        }
    });
const json = await response.json();
const token = json.token;
console.log(token);

Then, you can use this token. In each request, you must add the Authorization header with the token. Here's the code:

const response = await fetch("/sample", {
        method: "GET",
        headers: {
            "Authorization": "Bearer " + token, // Add the authentication header
            "Accept": "application/json"
        },
        credentials: "include"
    });
console.log(response.ok);

It's so easy using the fetch API 😃

Protecting the whole website / API

This part is optional.

If you want users to be authenticated to access the API, you can decorate all the controllers with the [Authorize] attribute. But this is not very convenient. Instead, you can apply the attribute globally using an authorization policy. Open the Startup.cs file and change the ConfigureServices method to add the policy.

public void ConfigureServices(IServiceCollection services)
{
    services.AddMvc(config =>
    {
        var policy = new AuthorizationPolicyBuilder()
            .AddAuthenticationSchemes(JwtBearerDefaults.AuthenticationScheme)
            .RequireAuthenticatedUser()
            .Build();
        config.Filters.Add(new AuthorizeFilter(policy));
    });
}

You can read more about authorization policies in the documentation.

Security considerations

It's strongly recommended to use HTTPS for your web API. The JWT token is as sensitive as the user's username/password, so you must prevent man-in-the-middle attacks. It's very easy to get a free certificate with Let's Encrypt.

RFC 7518, section 8, contains many security considerations that depend on the algorithms used. You should read them before implementing JWT to make sure you follow best practices.

A JWT is signed and encoded only, not encrypted. This means you should not store sensitive information in it, because anyone with the token can read the data. You can check the content of a token using https://jwt.io.

Conclusion

Using JWT authentication with an ASP.NET Core application is pretty easy. In a few lines of code, you can add it to your web API. You can see JWT authentication in action in my PasswordManager application.

Hide your email address on GitHub

When you create a commit with git, your username and email address are associated with it. So when someone clones the repository, they can see the list of commits and the associated users; thus, they can collect a list of email addresses. People often use their private email address, so you can get spammed. For instance, here's the output of git log on the corefx repo; you can see some email addresses:
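You can reproduce this kind of listing on any repository you have cloned with a command like this one:

# List every distinct author of the repository with their email address
git log --format="%an <%ae>" | sort -u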

Configure git

GitHub provides a noreply address for every user. For instance, my email is meziantou@users.noreply.github.com. You can find it on the settings page: https://github.com/settings/emails.

Then, you can change your git configuration:

git config --global user.email meziantou@users.noreply.github.com
git config --global user.name meziantou

If you want to configure only a single repository:

cd "path to the git repository"
git config user.email meziantou@users.noreply.github.com
git config user.name meziantou

Rewrite history

Then, you want to replace your email address in the previous commits. Git provides functionality to rewrite the history of the repository. If you are on Windows, I advise you to use Bash, as the escape characters are not the same on both systems. If you have not yet configured Bash, read this documentation.

git filter-branch --commit-filter '
    if [ "$GIT_COMMITTER_EMAIL" = "<Your old email address>" ];
    then
        GIT_COMMITTER_NAME="<Your name>";
        GIT_AUTHOR_NAME="<Your name>";
        GIT_COMMITTER_EMAIL="<Your noreply email address>";
        GIT_AUTHOR_EMAIL="<Your noreply email address>";
        git commit-tree "$@";
    else
         git commit-tree "$@";
    fi' HEAD

Then, you have to push your changes to the remote repository. You must use --force to overwrite it.

git push --force

Change GitHub settings to block commits containing your email address

Finally, you can configure GitHub to block commits that contain your actual email address. Go into the settings / email section and check the box "Block command line pushes that expose my email".