Usability - Productivity - Business - The web - Singapore & Twins

Fun with Azure Active Directory & JWT

Active Directory has been the dominant standard for IT directories, even if it isn't the prettiest tree in the forest. Its younger sibling ~~Azure Active Directory~~ Entra ID is a big player among cloud-based Identity Providers (IdP). Unsurprisingly it behaves differently than the gold standard Keycloak.

JWT expectations

A JSON Web Token (JWT) payload is a very loosely defined JSON object with various claims. There is only a minimal consensus on properties:

  "iss": "https://where-it-came-from",
  "audience": "https://where-it-should-be-valid",
  "iat": "DATE/TIME -> issued at",
  "exp": "DATE/TIME -> expiry",
  "scope": "space separated list of scopes",
  "email": "user's email"

The whole thing is (un)defined in RFC7519, sufficiently loose, so anyone can claim to be standard compliant while nothing is interoperable (just like iCal). There is a list of known claims, but RFC7519 states: "None of the claims defined below are intended to be mandatory to use or implement in all cases, but rather they provide a starting point for a set of useful, interoperable claims."

To ease validation of signatures, one can use a URL .../.well-known/openid-configuration, which provides a number of needed properties:

  • various endpoint URLs for authentication and token exchange
  • issuer: The value corresponding to the iss property in a JWT
  • jwks_uri: URL to read the public key to validate signatures
  • scopes_supported: what scopes does the API support
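As a sketch of putting that endpoint to use (the function and result property names are my own, not from any spec), fetching the discovery document could look like this in a browser or Node 18+:

```javascript
// Sketch: fetch the OIDC discovery document and extract the
// properties listed above. extractConfig is a hypothetical helper.
const extractConfig = (doc) => ({
  issuer: doc.issuer,
  jwksUri: doc.jwks_uri,
  tokenEndpoint: doc.token_endpoint,
  scopes: doc.scopes_supported || []
});

// works wherever fetch is built in (browsers, Node 18+)
const discover = async (issuerBase) =>
  fetch(`${issuerBase}/.well-known/openid-configuration`)
    .then((resp) => {
      if (!resp.ok) throw new Error(`Discovery failed: ${resp.status}`);
      return resp.json();
    })
    .then(extractConfig);
```

The issuer and jwksUri values are exactly what you need later to validate the iss claim and the token signature.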

Azure - same but different

When you set up Domino for JWT, a series of specific conditions must be met. The interesting parts from the documentation:

  • One of the JWT's "aud" (audience) claims must match the Domino Internet Site's host name
  • JWTs must contain an "iss" (issuer) claim matching the "issuer" returned from the OIDC provider's .well-known/openid-configuration endpoint
  • JWTs must contain a "scope" claim that includes "Domino.user.all"

When you follow KEEP's guide on how to configure Azure AD, you will find a set of pain points, in no specific order:

  • You can't remove claims you don't need
  • Azure AD will not issue a scope claim, but an scp claim
  • The aud claim is fixed to the "Application ID URI"
  • The iss claim in a token does not match the issuer property from well-known/openid-configuration
  • The jwks_uri URL does not return an alg property for the algorithm (nor did I find any way to request an Elliptic-curve signer)
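For the scp vs. scope mismatch, one workaround (my own sketch, not an official recipe; normalizeAzureClaims and hasScope are made-up names) is to normalize the already-decoded payload before your scope checks run. This does not replace signature validation:

```javascript
// Sketch: map Azure AD's "scp" claim to the "scope" claim other
// code expects. Operates on the decoded payload only - signature
// validation must happen separately.
const normalizeAzureClaims = (payload) => {
  const result = { ...payload };
  if (!result.scope && typeof result.scp === 'string') {
    result.scope = result.scp; // both are space separated lists
  }
  return result;
};

// check for a required scope in either claim style
const hasScope = (payload, wanted) =>
  (normalizeAzureClaims(payload).scope || '').split(' ').includes(wanted);
```

With that in place, a check like hasScope(decodedPayload, 'Domino.user.all') works regardless of which claim the IdP issued.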

So there's tons of fun to be had with Azure ~~Active Directory~~ Entra ID.

Posted by on 29 August 2023 | Comments (0) | categories: JWT WebDevelopment

Primary Posture Applications

We use a multitude of applications per day. Each of them captures some level of attention and interaction. Alan Cooper coined the term application posture, with the most heavily used application taking the sovereign posture. I personally like the term primary posture application better and will use it in this post.

Being primary

Since users spend most of their time in it, there's a willingness to become "senior intermediate experts". Shortcuts are learned, workflows get shared and a deeper understanding is desired. Depending on the nature of your work, very different applications are your primary:

  • for a graphic designer it might be GIMP or Inkscape
  • a vlogger spends a lot of time in OBS
  • controllers spend their days in spreadsheets
  • the sales manager in CRM
  • operations is fond of ERP
  • eMail and chat are strong contenders too
  • the Scrum master lives in Jira, while developers live on the command line and in the IDE

Primary posture by association

To cover anything else, aggregators were used. The trailblazer here was the Lotus Notes client: one did everything in Notes; the main job and all the auxiliary and transient tasks would be there. Portals and intranets attempted to recreate this consistency (for inspiration on what intranets can achieve, head over to The Nielsen Norman Group).

Auxiliary applications

You need to complete a task fast and want effortless results. An auxiliary posture helps with that. Adding an appointment in a calendar, booking a ride share, filing tax returns.

Auxiliary applications with a primary posture

One's primary application is another's auxiliary. This is a huge problem, especially for bespoke applications. Typically they are commissioned by departments who will use them in "primary posture" (e.g. the leave management system gets commissioned by HR). The leave administrator will happily learn all the bells and whistles, while mortal users are irritated by the complexity. I recall working on a leave management system where the initial application form had over 30 fields to cover all eventualities. We were able to convince the application owner to take a two-form approach: the initial form had coming, going, type of leave and an optional "on behalf". Two buttons were offered: "more" and "submit". "More" would lead to the 30+ field form. We monitored usage for six months. Not once was the larger form submitted.

Multiple front-ends

To avoid the primary/auxiliary trap, a clear API that separates UI from business logic helps. It allows building smaller front-ends that are auxiliary in nature but don't compromise integrity. OpenAPI is your friend.

Posted by on 21 August 2023 | Comments (0) | categories: Software

Passphrase Generator

Passphrases are considered easier to remember for humans and harder to crack for machines, famously explained in this comic:

Password strength

The challenge then is to have a good word list to pick from. There are various estimates of how many words one person actively uses, which could be as low as a thousand. Note there is a huge difference between recognizing and using a word.

Passphrases and dice

In a recent Toot exchange ospalh pointed me to Diceware, a method that uses dice rolls and a word list to determine a passphrase. Usually one rolls five regular six-sided dice, which lets you pick from a 7776-member word list. The EFF published a version using the 20-sided dice from Dungeons & Dragons as well as various word lists.


An attacker who doesn't know that they are dealing with a passphrase, using conventional cracking methods, stands little chance to decipher the phrase. However, as the defender you must assume they know your word list, so it is imperative to keep it long while maintaining the odds of remembering the phrase (in any case you can use some extra brain). Some of the word lists you can find online:

Math.random() to replace dice

Let's roll (pun intended) our own passphrase generator. To make it a little more fun, these are our constraints:

  • passphrase has 5 elements: 4 words and one 6 digit number
  • the number appears at a random position
  • elements are separated by a - (for readability, in active use you might just filter them out)
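The constraints above can be sketched like this (the tiny word list and function names are mine; a real Diceware list has 7776 entries):

```javascript
// Placeholder word list - swap in a full Diceware list for real use
const words = ['correct', 'horse', 'battery', 'staple', 'orbit', 'velvet', 'maple', 'quartz'];

// Math.random() keeps the sketch short; for production passphrases
// prefer crypto.getRandomValues() for unpredictable picks
const randomInt = (max) => Math.floor(Math.random() * max);

const generatePassphrase = () => {
  // 4 words ...
  const parts = Array.from({ length: 4 }, () => words[randomInt(words.length)]);
  // ... plus one 6 digit number spliced into a random position
  const number = String(randomInt(900000) + 100000);
  parts.splice(randomInt(5), 0, number);
  // separated by "-" for readability
  return parts.join('-');
};

console.log(generatePassphrase());
```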

Read more

Posted by on 24 July 2023 | Comments (0) | categories: Java WebDevelopment

Keep your GitHub container registry tidy

So you drank the Kool-Aid, like me, and use GitHub Actions to build your projects and GitHub Packages for your private containers, Maven-produced JARs and npm modules. Soon the honeymoon is over and you hit the storage limit of your account.

You need to clean up

Looking at the packages you will notice that they are all there, all the versions, in the case of containers even the untagged ones. The root of the problem is equally the solution: a GitHub Action to delete package versions. The package is very flexible and well documented, outlining several scenarios for putting it to use.

Things to watch out for

You have to decide when you want to put it to use:

  • on schedule, like every Friday
  • manual, pressing a button
  • on each build, when you add a new package

I also experienced that ${{ secrets.GITHUB_TOKEN }} wouldn't work when the package you target is private, even when it is in the same repository. Once you know, it's not a big deal: just create a PAT and add it to the repository's secrets. You might want to add workflow_dispatch to all triggers, so you can test run them anytime.
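A hedged sketch of such a cleanup workflow, combining the schedule and manual triggers above. The package name, retention count and secret name are assumptions; check the action's documentation for the current input names:

```yaml
# Sketch: weekly cleanup keeping the five newest versions.
# Package name, retention count and secret name are placeholders.
name: cleanup-packages
on:
  schedule:
    - cron: '0 3 * * 5'   # every Friday
  workflow_dispatch:      # manual button, handy for test runs
jobs:
  cleanup:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/delete-package-versions@v4
        with:
          package-name: 'my-container'
          package-type: 'container'
          min-versions-to-keep: 5
          delete-only-untagged-versions: 'true'
          token: ${{ secrets.PACKAGE_CLEANUP_PAT }}
```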

Read more

Posted by on 18 July 2023 | Comments (0) | categories: Container Docker GitHub Java JavaScript NodeJS

Deploy private npm packages into private containers using GitHub Actions

GitHub Actions are rapidly becoming my favorite CI environment. Their marketplace has an action for everything. Sometimes it takes a little trial and error before things work smoothly. This is one of those stories.

Authentication is everything

Imagine the following scenario: you have developed a set of private TypeScript (or JavaScript) packages and have successfully deployed them to the private GitHub npm registry under the name @myfamousorg/coolpackage - where myfamousorg must match the repository owner (org or individual).

Now you want to use them in your application. That application shall be packed in a Container and made available in GitHub's private registry. All that automated using GitHub Actions.

You will need a PAT (or two)

In GitHub, head to the Personal access tokens / Tokens (classic) section of your profile's developer settings. You need to create tokens that allow you to handle packages.

GitHub Tokens

There are two places where you want to enter that token:

  • In https://github.com/[your-org]/[your-repo]/settings/secrets/actions create a key GIT_NPM_PACKAGES and copy your PAT there. You can pick any name; you will need it in the GitHub Action later
  • In ~/.npmrc, your global settings for npm in your home directory. Don't put the info in the .npmrc in your git project.
prefix=/home/[your username]/.npm-packages
//npm.pkg.github.com/:_authToken=[here goes the token]

The prefix property allows you to run `npm install -g [package]` without admin access.
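In the GitHub Action itself, the install step might be wired up roughly like this (a sketch; action versions are assumptions, and setup-node takes care of writing the registry entry into a temporary .npmrc):

```yaml
steps:
  - uses: actions/checkout@v3
  - uses: actions/setup-node@v3
    with:
      node-version: 18
      registry-url: 'https://npm.pkg.github.com'
  - run: npm ci
    env:
      # the secret created above lets npm read the private packages
      NODE_AUTH_TOKEN: ${{ secrets.GIT_NPM_PACKAGES }}
```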

Read more

Posted by on 16 July 2023 | Comments (0) | categories: GitHub JavaScript WebDevelopment

Handle HTTP chunked responses

Objects, I need a lot of objects. When dealing with APIs there is one fundamental question to answer: how much data do you want to retrieve?

The old school answer: let's page results, 25 at a time. Then infinite scrolling came along and changed expectations.

I got some chunk for you

One way to operate is for the server to send all data, but using Transfer-Encoding: chunked (RFC 9112) in the header and deliver data in several packages, aptly named chunks. A client can process each chunk on arrival to allow interactivity before data transmission concludes.

However, this requires adjustments on both sides: the server needs to send data with a clear delimiter, e.g. \n (newline), and the client needs to process the data as a stream.

The usual way won't work

We typically find code like this:

  fetch(url)
    .then((resp) => resp.json())
    .then((json) => {
      for (let row in json) {
        addRow(json[row], parentElement);
      }
    });

fetch hides a lot of complexity, which we need to handle ourselves when we process a chunked result as it arrives.
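A sketch of what that stream handling could look like, assuming a \n-delimited JSON payload as described above (splitLines, url and the onRow callback are my own names):

```javascript
// splitLines separates complete lines from a trailing partial line,
// so a chunk boundary never cuts a JSON row in half
const splitLines = (buffer) => {
  const lines = buffer.split('\n');
  const rest = lines.pop(); // possibly partial, keep for the next chunk
  return { lines: lines.filter((line) => line.trim() !== ''), rest };
};

// read the response body chunk by chunk and hand each row to onRow
const processChunked = async (url, onRow) => {
  const resp = await fetch(url);
  const reader = resp.body.getReader();
  const decoder = new TextDecoder();
  let buffer = '';
  for (;;) {
    const { done, value } = await reader.read();
    if (done) break;
    const { lines, rest } = splitLines(buffer + decoder.decode(value, { stream: true }));
    buffer = rest;
    lines.forEach((line) => onRow(JSON.parse(line)));
  }
  if (buffer.trim() !== '') onRow(JSON.parse(buffer)); // last row without trailing \n
};
```

Each row becomes visible to the user as soon as its chunk arrives, instead of after the full transmission.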

Read more

Posted by on 04 July 2023 | Comments (0) | categories: JavaScript WebDevelopment

Docker, nginx, SPA and brotli compression

Contemporary web development separates front-end and back-end, resulting in the front-end being a few static files. Besides setting long cache headers, pre-compression is one way to speed up delivery.

Setting the stage

  • we have a NodeJS project that outputs our SPA into the /usr/dist directory. Highly recommended here: Vite. Works for multi-page applications too
  • we target only modern browsers that understand brotli (sorry, not IE). Legacy browsers will have to deal with uncompressed files
  • we want to go light on CPU, so we compress at build time, not runtime

Things to know

  • When nginx is configured for brotli and the file index.html gets requested, the file index.html.br gets served if present and the browser indicated (which it does by default) that it can accept br
  • There is a ton of information about the need to compile nginx due to the lack of brotli support out of the box. That's not necessary (see below)
  • brotli is both open source and an open standard, RFC 7932
  • brotli currently lacks gzip's -r flag, so some bash magic is needed

Moving parts

  • Dockerfile
  • nginx configuration

The Dockerfile will handle the brotli generation

Read more

Posted by on 24 June 2023 | Comments (0) | categories: Docker nginx WebDevelopment

Deploy a TypeScript app using Docker

An application developed in TypeScript actually runs as a JavaScript application. When deploying into a Docker image, we want to keep it small; here's how.

Docker with a side of Docker

Deployment has a few steps:

  • Compile to JavaScript
  • Successfully run all tests
  • Run code quality checks (e.g. Sonar)
  • Finally, package it all up into the smallest of containers

Using last week's example, these are the moving parts.

Read more

Posted by on 04 June 2023 | Comments (0) | categories: Docker JavaScript TypeScript

TipToe in TypeScript

TypeScript is all the rage in JavaScript land and I'm enjoying the ride so far. I shall refrain from debating TypeScript vs. JavaScript or getting-started activities. This article's focus is getting a TypeScript (server side) project going in VSCode. It reflects what worked for me with my limited knowledge.

Who's at the party?

At first look it seems one just needs tsc and all is good. However, there are more moving parts involved; let's have a look:

TypeScript project

VSCode plugins, style and build automation shall be subject to a future post; let's focus on the TypeScript parts here. Let's get started with a sample ExpressJS project. My test framework of choice shall be MochaJS with the Chai assertion library.

# Setting up an Express TypeScript project
mkdir ts-demo
cd ts-demo
curl https://raw.githubusercontent.com/github/gitignore/main/Node.gitignore -o .gitignore
git init -q
npm init -y
npm install --save express
npm install --save-dev @types/express @types/node
npm install --save-dev chai chai-as-promised mocha ts-node ts-node-dev typescript
npm install --save-dev @types/chai @types/chai-as-promised @types/mocha
mkdir src
mkdir test

We see that development has more dependencies than runtime. Note that all the @types packages are only needed in development, so they are added to the devDependencies only.

Read more

Posted by on 31 May 2023 | Comments (0) | categories: JavaScript TypeScript

Develop your SPA with vite

You drank the SPA Kool-Aid to develop with KEEP. While you can use the usual suspects, in most cases Vanilla JS will do fine: one each of index.html, index.css and index.js.

The preview problem

Since the files are static, throw them on the server and you are good - of course your regular operation gets disrupted. Throw them on a preview server and your calls to /api/... will fail. You could hack around this by providing full URLs, but then you enter CORS hell.

viteJS to the rescue

viteJS brands itself as "Next Generation Frontend Tooling" with the catchy tagline "Get ready for a development environment that can finally catch up with you". Let's give it a spin:

npm create vite@latest

The result is simple

Vite start

The package.json lists no runtime dependencies and you can run npm run dev to preview the sample page.

Adding the proxy

When starting vite, it looks for vite.config.js for settings. There you can specify all needed proxy settings.

import { defineConfig } from 'vite';

// https://vitejs.dev/config/
export default defineConfig({
  server: {
    proxy: {
      '/api': 'http://localhost:8880',
      '/.well-known': 'http://localhost:8880'
    }
  }
});

The vite.config.js allows for sophisticated configuration like conditional settings (think testing against dev, staging, production), which is up to you to evaluate.
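One way to sketch such conditional settings (the URLs and the helper name are placeholders of mine): export a function instead of a plain object and pick the proxy target by mode:

```javascript
// Placeholder targets for the dev and staging back-ends
const targetFor = (mode) =>
  mode === 'staging' ? 'https://staging.example.com' : 'http://localhost:8880';

// vite.config.js can export a function that receives the mode:
// export default defineConfig(({ mode }) => ({
//   server: { proxy: { '/api': targetFor(mode) } }
// }));
```

Running vite --mode staging would then route /api calls to the staging back-end.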

Using npm run build, Vite works its magic to build a combined distributable app, SPA or otherwise.

As usual, YMMV.

Posted by on 25 April 2023 | Comments (0) | categories: Software WebDevelopment