Usability - Productivity - Business - The web - Singapore & Twins

Unit Tests and Singletons

Good developers test their code. There are plenty of frameworks and tools around to make testing less painful. Not all code is testable, so you write tests first to avoid getting stuck with untestable code. However, there are situations where unit testing and a design pattern get in each other's way.

The singleton and test isolation

I'm a big fan of design patterns; after all, they are well-proven solutions for specific, known problems. E.g. the observer pattern is the foundation of reactive programming.

A common approach to implementing a cache is the singleton pattern, which ensures all your code talks to the same cache instance, independent of what cache you actually use: Aerospike, Redis, Guava, JCS or others.

Your singleton would look like this:

public enum TicketCache {
  INSTANCE;

  public Set<String> getTickets(final String systemId) {
    Set<String> result = new HashSet<>();
    // Your cache related code goes here
    return result;
  }

  public TicketCache addTicket(final String systemId, final String ticketId) {
    // Your cache/persistence code goes here
    return this;
  }
}

and a method in a class returning tickets (e.g. in a user object) for a user could look like this:

  public Set<String> getUserTickets() {
    Set<String> result = new HashSet<>();
    Set<String> systemsResponsibleFor = this.getSystems();
    // Aggregate the tickets from the cache for each system
    systemsResponsibleFor.forEach(systemId ->
        result.addAll(TicketCache.INSTANCE.getTickets(systemId)));
    return result;
  }

Now when you want to test this method, you have a dependency on TicketCache and can't test getUserTickets() in isolation. You are at the mercy of your cache implementation. But there is a better way.
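To make the dependency explicit, here is a sketch of one common escape hatch (the class and lambda names are hypothetical, and this is not necessarily the approach the article continues with): inject the ticket lookup instead of calling the singleton directly, so a test can substitute a fake.

```java
import java.util.HashSet;
import java.util.Set;
import java.util.function.Function;

// Hypothetical illustration: the ticket lookup is injected,
// so a test can pass a fake instead of the real TicketCache singleton
public class UserTickets {
    private final Function<String, Set<String>> ticketLookup;

    public UserTickets(final Function<String, Set<String>> ticketLookup) {
        this.ticketLookup = ticketLookup;
    }

    public Set<String> getUserTickets(final Set<String> systems) {
        final Set<String> result = new HashSet<>();
        systems.forEach(systemId -> result.addAll(this.ticketLookup.apply(systemId)));
        return result;
    }

    public static void main(String[] args) {
        // Test in isolation: a fake lookup instead of the singleton
        final UserTickets user = new UserTickets(systemId -> Set.of(systemId + "-T1"));
        System.out.println(user.getUserTickets(Set.of("crm", "billing")).size()); // 2
    }
}
```

In production code the constructor argument would simply be `TicketCache.INSTANCE::getTickets`.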

Read more

Posted by on 10 January 2020 | Comments (0) | categories: Java Salesforce

http(s) debugging cheat sheet

Martin Luther is credited with the utterance "You have to watch how people talk". What worked for the famous bible translation applies to APIs as well. Despite open standards and a standards body, APIs do their own thing, and aren't necessarily documented the way they actually behave.

While it is reasonably easy to watch them inside a browser using Developer Tools (also here), it gets tricky when you want to watch an application like macOS Calendar, Thunderbird, Slack or Bodo (for Jira).

This is your cheat-sheet.

Setup your HTTP(s) Forensics

  • Install the application you want to investigate (yeah, a no-brainer)
  • Install an HTTP debugger, pick one of them (I use Charles Proxy)
  • Configure your HTTP debugger to be able to analyse HTTPS traffic to your chosen calendar server
  • Install Postman and curl
  • Have a place where you save your investigation results in Markdown; /docs in a GitHub repo is a good place
  • Configure your HTTP debugger to intercept the HTTP traffic on your target domain. This works differently for each OS and debugger, so read the manual!
  • Fetch a cup of your favorite beverage, we are good to go


  • Switch on your HTTP debugger (this line is sponsored by Captain Obvious)
  • Execute one command in your custom app, typically something like "login", "add account" or "File Open" (that's my favorite in Office apps; it fires 4 or more HTTP requests when done against an HTTP endpoint that understands WebDAV)
  • Look at the raw results. Each debugger has a nice GUI that separates headers, cookies and body payload, but you want to look at the raw data:
    • Your request starts with METHOD /route HTTP_Version, e.g. POST /login HTTP/1.1. Everything until the first empty line is header, optionally followed by a body. Knowing the methods helps to set expectations. See also RFC 7231, RFC 5789, RFC 2518, RFC 4918, RFC 3744 and RFC 4791
    • Your response starts with a status line HTTP/1.1 StatusCode StatusMessage, e.g. HTTP/1.1 200 OK. Again: everything until the first empty line is header, followed by the optional response body
  • It can get trickier when your app is already using HTTP/2 or later, since it allows streaming or non-text payloads like gRPC
  • Document that in Markdown; bracket the HTTP with three backticks so it renders as source
  • Repeat for other commands in your app
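The raw format described above, a status line, headers until the first empty line, then the body, is simple enough to split mechanically. A toy illustration (not a full HTTP parser, names are made up):

```java
public class RawHttpSplitter {
    // Splits a raw HTTP message into its status/request line and body:
    // everything up to the first empty line is the head, the rest is the body
    public static String[] split(final String raw) {
        final String[] parts = raw.split("\r?\n\r?\n", 2);
        final String[] headLines = parts[0].split("\r?\n");
        final String statusLine = headLines[0];
        final String body = parts.length > 1 ? parts[1] : "";
        return new String[] { statusLine, body };
    }

    public static void main(String[] args) {
        final String raw = "HTTP/1.1 200 OK\r\nContent-Type: text/plain\r\n\r\nHello";
        final String[] result = split(raw);
        System.out.println(result[0]); // HTTP/1.1 200 OK
        System.out.println(result[1]); // Hello
    }
}
```

The same split works for requests: the first line then reads e.g. `POST /login HTTP/1.1`.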


What fun is detective work without verifying the results? This is where you turn to Postman or, if you have it, curl.

You want to use parameters for your hostname and the user-specific parts (username, passwords), and you need a clear idea which return values are variable. Good candidates to watch out for are cookies or header values. You need to extract these values for chaining to the next request. With a little practice you should be able to make Postman behave like the original app.

Parting words

  • Check the terms of service of the application you are investigating. While public endpoints are, well, public, you might have agreed in your T&C not to touch them
  • This isn't an instruction for hacking; you still need to be you - with your credentials. Nevertheless you might stumble over "security by obscurity" or other annoyances
  • Any app that uses http instead of https needs to die a horrible death
  • Reading the API spec is potentially faster

As usual YMMV

Posted by on 30 December 2019 | Comments (0) | categories: WebDevelopment

A Streaming Pattern for the vert.x EventBus (Part 1)

When dealing with large amounts of data, using streams allows processing to happen the moment data arrives, not just when the data is complete. Streaming is core to reactive programming. This blog entry describes an approach where the vert.x EventBus sits between requester and resource.

The scenario

Classic Eventbus Request Response

Image created using WebSequenceDiagrams

title Vert.x EventBus

participant Requester
participant EventBus
participant DataSource

Requester->EventBus: request DataSet
EventBus->DataSource: forward request
DataSource->EventBus: reply with data
EventBus->Requester: forward reply

A requester (e.g. the handler of an HTTP listener) sends a request via the EventBus using a request-response pattern with EventBus.request(...). Simple and easy. The problem with this: one request has one response. That doesn't work for streaming data.

Taking a page from the military

The standard pattern for military commands is:

  1. Utter the command
  2. Acknowledge the command
  3. Execute the command (Conquer 14 countries, might take time. For Germans: Liberate 14 countries)
  4. Report completion of command

Applying to the EventBus

Following the pattern above, the first request/response will only establish the intent (btw. Intent Based Leadership is a smoking hot topic). Items 3 and 4 will be handled by a publish and subscribe pattern.

So our scenario now looks like this:

Eventbus Streaming Request Response

Image created using WebSequenceDiagrams

title Vert.x EventBus Streaming

participant Requester
participant EventBus
participant DataSource

Requester->EventBus: start listening\non temp address
note over Requester, DataSource: Start request/response
Requester->EventBus: request Stream\notify on temp address
EventBus->DataSource: forward request
DataSource->EventBus: reply with acknowledgement
EventBus->Requester: forward response
note over Requester, DataSource: End of request/response
note over Requester, DataSource: Start publish/subscribe
DataSource->EventBus: publish first data
EventBus->Requester: forward response
DataSource->EventBus: publish more data
EventBus->Requester: forward response
DataSource->EventBus: publish last data
EventBus->Requester: forward response
Requester->EventBus: end listening\non temp address
note over Requester, DataSource: End of publish/subscribe
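Stripped of the vert.x specifics, the choreography in the diagram can be simulated with a toy in-memory bus. This is a hypothetical sketch to make the flow tangible, not vert.x API code:

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.UUID;
import java.util.function.Consumer;

// A toy stand-in for the EventBus: addresses map to message handlers
public class ToyStreamingBus {
    private final Map<String, Consumer<String>> handlers = new HashMap<>();

    public void consumer(final String address, final Consumer<String> handler) {
        this.handlers.put(address, handler);
    }

    public void publish(final String address, final String message) {
        final Consumer<String> handler = this.handlers.get(address);
        if (handler != null) {
            handler.accept(message);
        }
    }

    public static void main(String[] args) {
        final ToyStreamingBus bus = new ToyStreamingBus();
        final List<String> received = new ArrayList<>();

        // Step 1: the requester listens on a temporary address
        final String tempAddress = "tempListenerAddresses." + UUID.randomUUID();
        bus.consumer(tempAddress, received::add);

        // Step 2: the data source, once asked, publishes its chunks
        // to whatever temp address the request carried
        bus.consumer("dataSource", replyTo -> {
            bus.publish(replyTo, "first chunk");
            bus.publish(replyTo, "last chunk");
        });

        // Step 3: the requester sends the intent, carrying the temp address
        bus.publish("dataSource", tempAddress);

        System.out.println(received); // [first chunk, last chunk]
    }
}
```

The real implementation below does the same thing, with the temp address travelling in a message header instead of the message body.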

To implement this, I'm taking advantage of EventBus' DeliveryOptions that allow me to set header values. I define a header StreamListenerAddress that my data source will use for publishing data:

// Error handling omitted
public void initiateStreamResponse(final String dataAddress, final JsonObject requestMessage, final Promise<Void> didItWork) {
	final String streamListenerAddress = "tempListenerAddresses." + UUID.randomUUID().toString();
	final EventBus eventBus = this.getVertx().eventBus();
	final MessageConsumer<JsonObject> dataReceiver = eventBus.consumer(streamListenerAddress);
	dataReceiver.handler(handler -> {
		final MultiMap headers = handler.headers();
		final boolean isFirst = Boolean.parseBoolean(headers.get("first"));
		final boolean isComplete = Boolean.parseBoolean(headers.get("complete"));
		/* Here goes the code feeding into the requester's logic, e.g. a chunked HTTP response
		   or a websocket publish or a gRPC push. isFirst and isComplete can be true at the
		   same time when there is only a single response */
		if (isComplete) {
			dataReceiver.unregister();
			didItWork.complete();
		}
	});
	final DeliveryOptions deliveryOptions = new DeliveryOptions();
	deliveryOptions.addHeader("StreamListenerAddress", streamListenerAddress);
	eventBus.request(dataAddress, requestMessage, deliveryOptions, ar -> {
		if (ar.succeeded()) {
			final Message<Object> resultMessage = ar.result();
			final boolean success = Boolean.parseBoolean(resultMessage.headers().get(Constants.HEADER_SUCCESS));
			if (!success) {
				didItWork.fail(new Error("Request for Data unsuccessful"));
			}
		} else {
			didItWork.fail(ar.cause());
		}
	});
}
What next?

  • In Part 2 I will describe the data source part of this approach
  • In Part 3 I will wrap that into an observable and observer

I'm using this pattern in the Keep API, YMMV

Posted by on 04 December 2019 | Comments (0) | categories: Java Reactive vert.x

Deep Human Super Skills for a VUCA world

We live in a world dominated by volatility, uncertainty, complexity and ambiguity (VUCA). Our traditional approach to a working career, learning a specific skill and sticking to it, doesn't fit anymore. What is needed instead is the subject of the book Deep Human, written by Crystal Lim-Lange and Dr. Gregor Lim-Lange.

5 skills

5 human super skills

The 5 skills build on each other, each forming the foundation and prerequisite for the next level. Here is my take, paraphrasing what I learned and how they fit together. Full details, including experiences on how to get there, are in the book.

  1. Mindfulness Being rooted in reality, seeing what is, without judgment and deep filters, is the foundation of any progress. The practice of precisely observing your surroundings allows you to gather evidence for any assessment and action. The mindful person is master of their thoughts and doesn't easily fall prey to illusions. Of course it takes lifelong practice. The mind is like a muscle: once you start or stop training, it changes

  2. Self-Awareness Once the mind has been sharpened and silenced, you can turn attention to the self. What are the sensations, emotions, thoughts, fears, hopes and beliefs that drive you? Armed with focus and mindfulness you can wrestle the driver's seat back from the monkey mind. Clarity about yourself leads to the freedom to decide who you want to be, instead of running on auto-pilot

  3. Empathy Having honed the skill of self-awareness, you can apply it to other sentient beings. Without clarity about yourself this would fail, so self-awareness is the foundation of empathy, as mindfulness is the foundation of self-awareness. Learning to walk in someone else's shoes deepens your understanding of a complex world. Empathy isn't a woolsy-shoolsy be-nice-to-everybody feeling, but the application of reality from a different viewpoint. As the late Lama Marut would say: "Be nice and don't take shit"

  4. Complex communication You are able to see things as they are; you recognise strengths and weaknesses in yourself and others. You value reality over opinions and solutions over debate. Skilled like this, explaining the not-so-simple, cutting to the chase and getting your point across become your next level. You won't get there without the foundation of Empathy, Self-Awareness and Mindfulness

  5. Resilience and Adaptability Life changes, subtle or sudden, minimal or radical. You have practised communicating clearly, you see reality as it is from different perspectives, and you know yourself. These skills and the resulting confidence enable you to face whatever comes your way. Not clinging to illusions makes you flexible like bamboo in the wind. You will clearly see what is needed and where you can find purpose. You adapt.

The whole book is an insightful and interesting read, so go and get your copy.

Posted by on 27 October 2019 | Comments (0) | categories: After Hours Singapore

A certificate wants a SAN

Following my recent blog about creating your own CA, you will find out, like I did, that the certs are quite wanting.

The Subject Alternative Name (SAN)

Even after importing the ca-chain.cert.pem into your keyring / keystore Chrome will barf at the certificate, complaining about a missing SAN.

The idea of a SAN is to allow additional name variations to be recognised for one given certificate, reducing the effort for multi-purpose servers. E.g.: myawesomesite.com, www.myawesomesite.com, myawesomesite.io, www.myawesomesite.io, crazydata.com

I tried really hard, but at the time of writing it seems the only way to add a SAN to your certs is to provide a configuration file. I didn't find a command line option (short of various attempts at redirection and piping).

The hack I came up with:

Edit the intermediate\openssl.cnf and add to the [ server_cert ] section one line: subjectAltName = @alt_names. The @ sign tells OpenSSL to look for a section with that name and expand its content as the parameter.

Using the following shell script generates a certificate that works for:

  • www.domain (e.g. www.awesome.io)
  • domain (e.g. awesome.io)
  • domain.local (e.g. awesome.io.local)

The last one is helpful when you want to try SSL on localhost and amend your hosts file to contain awesome.io.local
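For a domain like awesome.io, the section the script below appends to the per-domain config would read like this (the bare domain itself is covered by the Common Name you enter in the certificate request):

```
[alt_names]
DNS.1 = www.awesome.io
DNS.2 = awesome.io.local
```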

# Create new server certificates with the KEEP intermediate CA
if [ -z "$1" ]
then
    echo "Usage: ./makecert.sh domain_name (without www) e.g. ./makecert.sh funsite.com"
    exit 1
fi
export SSL_DOMAIN_NAME=$1
export CONFNAME=intermediate/$1.cnf
cat intermediate/openssl.cnf > $CONFNAME
echo [alt_names] >> $CONFNAME
echo DNS.1 = www.$SSL_DOMAIN_NAME >> $CONFNAME
echo DNS.2 = $SSL_DOMAIN_NAME.local >> $CONFNAME
openssl ecparam -genkey -name prime256v1 -outform PEM -out intermediate/private/$SSL_DOMAIN_NAME.key.pem
chmod 400 intermediate/private/$SSL_DOMAIN_NAME.key.pem
openssl req -config $CONFNAME -key intermediate/private/$SSL_DOMAIN_NAME.key.pem -new -sha256 -out intermediate/csr/$SSL_DOMAIN_NAME.csr.pem
openssl ca -config $CONFNAME -extensions server_cert -days 375 -notext -md sha256 -in intermediate/csr/$SSL_DOMAIN_NAME.csr.pem -out intermediate/certs/$SSL_DOMAIN_NAME.cert.pem
chmod 444 intermediate/certs/$SSL_DOMAIN_NAME.cert.pem
openssl pkcs12 -export -in intermediate/certs/$SSL_DOMAIN_NAME.cert.pem -inkey intermediate/private/$SSL_DOMAIN_NAME.key.pem -out intermediate/private/$SSL_DOMAIN_NAME.pfx -certfile intermediate/certs/ca-chain.cert.pem

This settles the Subject Alternative Name challenge. There are more challenges to be had: depending on what application you use, you need to import your intermediate keychain ca-chain.cert.pem in multiple places in different formats (remember, I urged you not to do that in production!).

On Mac and Linux you have a keychain, but NodeJS and Java don't recognize it. Edge (and its older sibling) has its own key store, as has Firefox. Python, depending on version and library, has its own ideas about keys too. So manual management is a PITA.

As usual YMMV

Posted by on 26 October 2019 | Comments (0) | categories: OpenSource WebDevelopment

Create your own Certificate Authority (CA)

Warning Do NOT, never, ever do that to a production system!

Promised? OK! Here's the use case: you want to test systems that have made-up addresses like awesomeserver.local and don't want to deal with certificate warnings or the fancy errors that arise when you just use a self-signed cert. This post is a self-reference for my convenience. There are ample other instructions out there.

Disclaimer: I mostly followed these instructions, short of updating some of the commands to use elliptic-curve ciphers.

Useful with a side of work

The process requires a series of steps:

  • Create the private key and root certificate
  • Create an intermediate key and certificate
  • Create certs for your servers
  • Convert them if necessary (e.g. for import into Java keystores, JKS)
  • Make the public key of the root and intermediate certs available
  • Import these certs in all browsers and runtimes that you will use for testing

Normal mortal users without these imports will get scary error messages. While this doesn't deter the determined, it's good for a laugh.
We don't want old-school certs, so we aim for a modern elliptic-curve cert (details here). Here we go:

Setting up the directory structure

mkdir -pv -m 600 /root/ca/intermediate
cd /root/ca
curl https://jamielinux.com/docs/openssl-certificate-authority/_downloads/root-config.txt -o openssl.cnf
curl https://jamielinux.com/docs/openssl-certificate-authority/_downloads/intermediate-config.txt -o intermediate/openssl.cnf
mkdir certs crl newcerts private
chmod 700 private
touch index.txt
echo 1000 > serial
cd intermediate
mkdir certs crl csr newcerts private
chmod 700 private
touch index.txt
echo 1000 > serial
echo 1000 > crlnumber
cd ..

You want to check the downloaded files and, if necessary, change the paths in case you have chosen to use a different directory.

The Root CA

export OPENSSL_CONF=./openssl.cnf
openssl ecparam -genkey -name prime256v1 -outform PEM | openssl ec -aes256 -out private/ca.key.pem
chmod 400 private/ca.key.pem
openssl req -config openssl.cnf -key private/ca.key.pem -new -x509 -days 7300 -sha384 -extensions v3_ca -out certs/ca.cert.pem

Keep the keys safe - remember: "it's only on my hard drive" isn't safe!
You want to check the file using openssl x509 -noout -text -in certs/ca.cert.pem or, on macOS, just hit the space key in Finder.

Read more

Posted by on 16 October 2019 | Comments (0) | categories: Domino WebDevelopment

What's on your gRPC wire, Protocol Buffers or JSON?

The hot kid on the block for microservice APIs is gRPC, a Google-developed, open-source binary wire protocol.

Its native serialization format is Protocol Buffers, advertised as "a language-neutral, platform-neutral, extensible mechanism for serializing structured data". How does that fit into the Domino picture?

Same same, but different

When old bags, like me, hear the word RPC, a flood of memories and technologies comes to mind:

  • DCOM: Microsoft's take on it. Like Java, but Windows only, superseded by WCF for dotNet
  • CORBA: a standard defined by a committee, mainly Java (and YES, Domino still ships with a CORBA server)
  • SOAP: with our beloved (or was the word: cursed?) WSDL

There are a few more modern contenders like Apache Thrift, Apache Avro or the KF - TEE. Good to have so many open standards.

the good

Especially with SOAP, the common reaction to the rise of REST was: good riddance, RPC. I'm using the term REST fast and loose here, since a lot of the APIs are more like "HTTP endpoints accepting JSON payloads" rather than REST in the formal sense of the definition.

So what's different about gRPC that it got adopted by the Cloud Native Computing Foundation? IMHO there are several reasons:

  • It is designed by really smart engineers to run up to Google scale
  • It is optimized from the ground up, not bothering with legacy, but betting on HTTP/2 and its wire efficiencies
  • It is a compact binary protocol, making it efficient in low-bandwidth and/or high-volume use cases (Google scale, anyone?)
  • It transmits data only, no repeated metadata as in JSON- or XML-based approaches (at least when you use Protocol Buffers)
  • It is focused on code generation, functioning more like an SDK than an API
  • It has versioning support built in
  • It uses rich structured data types (15 at last count) including enumerations. Notably absent: date/time and currency
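The "data only" point is easy to see on the wire: a Protocol Buffers field travels as a one-byte tag (field number plus wire type) followed by the value, e.g. a varint. No field name is transmitted. A back-of-the-envelope sketch of that encoding (illustration only, real code would use the generated protobuf classes):

```java
public class ProtoWireSketch {
    // Encodes a protobuf varint field: tag byte = (fieldNumber << 3) | wireType,
    // where wire type 0 means varint, followed by the value in 7-bit groups
    public static byte[] encodeVarintField(final int fieldNumber, long value) {
        final byte[] buffer = new byte[11]; // 1 tag byte + max 10 varint bytes
        int pos = 0;
        buffer[pos++] = (byte) (fieldNumber << 3); // wire type 0 = varint
        while ((value & ~0x7FL) != 0) {
            buffer[pos++] = (byte) ((value & 0x7F) | 0x80); // more bytes follow
            value >>>= 7;
        }
        buffer[pos++] = (byte) value;
        final byte[] result = new byte[pos];
        System.arraycopy(buffer, 0, result, 0, pos);
        return result;
    }

    public static void main(String[] args) {
        // Field 1 = 150 takes 3 bytes on the wire; {"id":150} takes 10 bytes as JSON
        System.out.println(encodeVarintField(1, 150).length); // 3
    }
}
```

Three bytes versus ten, per field, per message: that is where the low-bandwidth and high-volume efficiency comes from.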

And of course: it's the current fashion. RedHat provides a comprehensive comparison to OpenAPI, as do others. Poking around YouTube, I gained the impression that most comparisons are made to REST and its limitations, almost similar to sessions about GraphQL. Mr. Sandoval tries to describe differentiators and use cases; go read it, it is quite good.

Read more

Posted by on 15 October 2019 | Comments (0) | categories: Domino gRPC WebDevelopment

A calDAV reference server

After having a look at the many standards involved, it is time to check out a standard or reference implementation. Cutting a long story short: it looks to me like the OpenSource Apple Calendar and Contacts Server (ccs) is my best bet. While the documentation is rather light, it has been battle-tested with my range of targeted clients.

To Docker or to VM?

Trying to avoid the "works on my machine" certification, a native install was out of the question. So Docker or VM? A search yielded one hit (with explanation) for a Docker image and none for a ready-baked VM. On closer inspection, the Docker image, being 2 years old, didn't use the current version, so I had to re-create the image. While at it, I decided to give a VM a shot:

Apple calendar server on Ubuntu 18.04

To keep things light, I started with the current LTS version 18.04 desktop and a minimal install with 4 GB RAM. First order of business after the install is to get updates and install the modules needed for the VirtualBox extensions:

sudo apt update
sudo apt install gcc make perl
sudo apt dist-upgrade

Read more

Posted by on 11 October 2019 | Comments (0) | categories: calDAV Domino

The calDAV Standard - navigating the RFC jungle

Application interoperability is key to widespread adoption. Luckily there are so many open standards that one can claim to be open without being interoperable. On the protocol level, HTTP and SMTP were huge successes, as were HTML and MIME for message content. Beyond that it gets murky. None of the big vendors (outside the OpenSource space) has adopted an open protocol for chat and presence.

For other standards, most notably calendaring, support is murky. One key contributor might be the RFC process, which produces documents that are hard to follow and lack sample implementations; they are the work outcomes of a committee, after all. In this series of blog entries I will (try to) highlight the moving parts of a calendar server implementation. The non-moving parts here are the calendar clients to target: Apple Calendar on iOS and macOS, Thunderbird and a few others.

Involved standards

There is a series of RFCs that cover calendar operation, with various degrees of relevance:

  • RFC 4918: webDAV. Defines additional HTTP verbs and XML formats
  • RFC 4791: calDAV. Defines again additional HTTP verbs
  • RFC 5545: iCalendar. Calendar data as plain text (the XML and JSON renditions are specified separately in RFC 6321 and RFC 7265)
  • RFC 7953: vAvailability. Free/Busy lookup specification
  • RFC 7986: Extended properties for iCalendar
  • RFC 6638: Scheduling extensions
  • RFC 8607: Managed attachments in calendar entries
  • RFC 8144: Use of Prefer Header field in webDAV
  • RFC 5785: Definitions for the /.well-known/ URL

Read more

Posted by on 09 October 2019 | Comments (0) | categories: calDAV Domino

Vert.x and OpenAPI

In the shiny new world of the API economy, your API definition and its enforcement is everything. The current standard for REST-based APIs is OpenAPI. What it gives you is a JSON or YAML file that describes what your API looks like. There is a whole zoo of tools around that allow you to visualize, edit, run mock servers or generate client and server code.

My favorite editor for OpenAPI specs is Apicurio, a project driven by RedHat. It strikes a nice balance between being UI driven and leaving you access to the full source code of your specification.

What to do with it

Your API specification defines:

  • the endpoints (a.k.a the URLS that you can use)
  • the MIME types that can be sent or will be received
  • the parameters in the path (the URL)
  • the parameters in the query (the part that looks like ?color=red&shape=circle)
  • the body to send and receive
  • the authentication / authorization requirements
  • the potential status codes (we love 2xx)

To handle all this, it smells like boilerplate or, if you are lucky, a ready-made library. vert.x has the latter. It provides the API Contract module, which is designed to handle all this for you. You simply add the module to your pom.xml and load your JSON or YAML OpenAPI specification file:
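For reference, the Maven coordinates look like this (the version shown is from the vert.x 3.x line that was current at the time of writing; check Maven Central for the latest):

```xml
<dependency>
  <groupId>io.vertx</groupId>
  <artifactId>vertx-web-api-contract</artifactId>
  <version>3.8.1</version>
</dependency>
```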


The documentation shows the code to turn the OpenAPI specification into a router factory:

  OpenAPI3RouterFactory.create(vertx, "myapi.yaml" /* path or URL to your spec */,
    ar -> {
      if (ar.succeeded()) {
        // Spec loaded with success
        OpenAPI3RouterFactory routerFactory = ar.result();
      } else {
        // Something went wrong during router factory initialization
        Throwable exception = ar.cause();
      }
    });
As you can see, you can load the spec from a URL (there's an auth option too). So while your API is evolving in Apicurio, you can live-load the latest and greatest from the live preview (should make for some interesting breakages ;-) ).

You then add your routes using routerFactory.addHandlerByOperationId("awesomeOperation", this::awesomeOperationHandler). Vert.x doesn't use the path to match the handler, but the operationId. This allows you to update path information without breaking your code. There is a detailed how-to document describing the steps.

Generate a skeleton for vert.x

As long as you haven't specified a handler for an operation, vert.x will automatically reply with 501 Not Implemented and not throw any error. To give you a head start, you can generate the base code. The first option is to head to start.vertx.io to generate a standard project skeleton, saving you the manual work of creating all the dependencies in your pom.xml file. Using "Show dependency panel" provides a convenient way to pick the modules you need.

But there are better ways. You can use an OpenAPI Generator or the advanced Vert.x Starter, courtesy of Paulo Lopes. In his tool you specify what it shall generate in a dropdown that defaults to "Empty Project". Once you change that to "OpenAPI Server", the form will allow you to upload your OpenAPI specification and you get a complete project rendered with all handler stubs, including the security handler. There's also a JavaScript version available.

Read more

Posted by on 06 September 2019 | Comments (0) | categories: Java vert.x WebDevelopment