wissel.net

Usability - Productivity - Business - The web - Singapore & Twins

Domino Administration Back to Basics (Part 2) - Networking


In Part 1 we learned about the marvelous world of Notes Names, X.400 and the perils of messing with certificates. One big difference to X.509 is the (almost complete) absence of certificate command-line tools that can be so much fun.

Domino Networking - protocols as you like it

Domino predates the rise of TCP/IP and the internet, so it comes as no surprise that it has its own ideas about networking. Starting with protocol support:

  • NetBIOS using NDIS (doesn't route)
  • IPX/SPX - a protocol from days long past, when red boxes weren't RedHat but Novell
  • X.PC DialUp - yes, a modem or something that takes modem commands and establishes a serial connection; no longer ships with Notes
  • A few more obscure protocols: Vines, SPXII
  • and last but not least: TCP/IP

Having this zoo of protocols, Notes needs its own version of name resolution. That version is called the Notes Named Network.

One step back: What makes a Notes Domain?

A Notes Domain consists of one or more servers that use a Domino Directory (a.k.a. Public Name & Address Book, a.k.a. names.nsf) with the same replica ID (a story for another time) as the other member servers and have the same domain name in their server document (that's where most of the server's settings are stored).

A popular point of confusion: Notes Names (from Part 1) vs. Notes Domains. It is quite common to name your domain after your OrgID, but it isn't mandatory. So you could have HeavyRock/Acme@Acme, Sandstone/Acme@ToonsInc or Machine/Blowup@Acme. The first and the last would be in the same domain, while the first and second share the Org certifier. Anything goes, but to keep it simple, keep OrgID and domain the same - unless you have 5 good reasons not to.

Another one: NEVER name your Notes Domain so it could be mistaken for an internet domain. So no . in the name. Spaces, interestingly, are OK!


Read more

Posted by on 05 February 2020 | Comments (1) | categories: Domino Networking

Domino Administration - Back to Basics (Part 1) - Certificates


Domino is different: a lot of its concepts predate the internet and quite often inspired the standards. This is not a step-by-step instruction, but an introduction to the concepts. The "step by step" approach is another story for another time.

In the beginning was the Certificate

Notes and Domino run using ID files. These are not merely files that can be arbitrarily reconstructed, but cryptographically created public/private key pairs. To avoid naming collisions the names are hierarchical (since R2), so anyone can call their server Server1 without confusion (sort of). This hierarchy is achieved using X.400 naming conventions, an early competitor to DNS naming. An X.400 name can consist of multiple parts; these are the ones Domino uses:

[Image: X.500 name components - country (C), organization (O), organizational units (OU), common name (CN)]

So the minimum is a common name and an Org. The starting point of each Domino journey is the creation of the OrgID; all other parts depend on it. Note: there isn't a separate country ID, even if the country is mentioned after the Org. When you create your OrgID (while setting up the very first server), you can specify the country and it becomes part of the OrgID.

In practice, however, I haven't seen many OrgIDs that carry a country code, so you can skip that part.

Signing IDs

Using the OrgID as the signing certificate, one can go and create server and user certificates. E.g. using the OrgID /O=Acme one can create /CN=Coyotee/O=Acme. For convenience the qualifiers are usually omitted; their meaning results from their position (and the fact that a country code has 2 letters, while Orgs have 3 or more). So instead of /CN=Coyotee/O=Acme one can write Coyotee/Acme.

In practice, however, more enlightened organisations use their Org certifier as cautiously as root certificates in the X.509 world and only register/sign Organizational Unit (/OU=) IDs, which are then used to sign server and user certificates.

Trust between certificates is hierarchical, similar to internet certificates, so IDs having the same root (/O) certifier recognise each other. The hierarchy can be used in Access Control (another story for another time) to grant access to all IDs at a given level, e.g. /OU=Management/O=Acme.


Read more

Posted by on 04 February 2020 | Comments (3) | categories: Domino

Generating JWT tokens for tests


There are many options for authentication and authorisation. I'm fond of JSON Web Tokens (JWT), implementing RFC 7519 - mostly because they are like LTPA for grownups, but standards compliant.

Quis custodiet ipsos custodes?

A JWT contains information that is digitally signed (and optionally encrypted), so a receiving end can verify that the information is correct. The key elements here are:

  • the JWT contains a claim, at least the subject, which is tamper-resistant by being digitally signed
  • JWT issuer and JWT consumer trust each other, by either having a shared secret (bad idea) or using a public/private key pair. The issuer signs the information with a private key. The consumer (your application server) verifies the signature using the public key

So besides protecting the private key of the issuer, you also want to be clear whose keys you trust. The one who holds your identity can impersonate you at any time (so you might rethink whether "Login with Facebook" is such a brilliant idea after all).

However, when testing systems you develop locally, you want to have any user at your disposal, so you can generate the claim that is the Open Sesame to your test regime.

Building a claim

We start by building the raw claim in a template.json file:

{
  "iss": "Alibaba Caves",
  "aud": "40Thiefs",
  "expSeconds": 300
}

It contains an issuer, an audience and the duration in seconds. The latter is for my convenience. The receiving system might or might not check the issuer (iss) and/or audience (aud).

The next step is to create a public/private key pair.
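
One way to do that, assuming an RS256 (RSA) signature - the receiving system dictates the actual algorithm, and the file names are my own:

# Generate an RSA key pair for signing test JWTs (RS256)
openssl genpkey -algorithm RSA -pkeyopt rsa_keygen_bits:2048 -out private.pem
# Extract the public key the consumer will use for verification
openssl rsa -in private.pem -pubout -out public.pem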


Read more

Posted by on 01 February 2020 | Comments (0) | categories: JavaScript WebDevelopment

Running a CalDAV server on Ubuntu (2020 edition)


When playing with calDAV it makes sense to have a reference implementation to refer to. The GOLD standard is Apple's CalendarServer with its lovely open source repository.

Getting it to work on Linux (Ubuntu in my case) isn't for the faint of heart. Here is what you need to do.

File system preparation

CalendarServer needs extended attributes. The default Ext4 file system supports them out of the box, but you want to check first (you might have a different file system after all):

Check if file system supports extended attributes

Check using this command:

touch test.txt
setfattr -n user.test -v "hello" test.txt
getfattr test.txt

Expected output: user.test

If the command setfattr is not available, install it with sudo apt install attr

Enable extended attributes if required

Important: if the previous command worked, skip this step. Don't mess with fstab! Make a backup copy if required and have your emergency boot stick ready. A messed-up fstab will prevent your machine from booting!

Edit /etc/fstab. In the 4th column (the mount options) of your file system's line, add user_xattr. There might already be values like noatime or defaults; add user_xattr separated by a comma. Reboot.
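
A sketch of what the amended line could look like - the UUID and mount point are illustrative, keep your own values:

# <device>  <mount point>  <type>  <options>  <dump>  <pass>
UUID=0a1b2c3d-e4f5-6789-abcd-ef0123456789  /  ext4  defaults,user_xattr  0  1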

Install calendar server

Execute sudo apt install calendarserver postgresql.

The server wants a PostgreSQL database, so you need to be sure to have that installed too. Apple loves Python, so there will be quite a few Python packages installed, as well as a new user caldavd. Since this is a system user, its home directory is /var/spool/caldav.


Read more

Posted by on 01 February 2020 | Comments (0) | categories: calDAV WebDevelopment

Unit Tests and Singletons


Good developers test their code. There are plenty of frameworks and tools around to make that less painful. Not all code is testable, so you write tests first to avoid getting stuck with untestable code. However, there are situations where unit testing and patterns get in each other's way.

The singleton and test isolation

I'm a big fan of design patterns; after all, they are well-proven solutions for specific known problems. E.g. the observer pattern is the foundation of reactive programming.

A common approach to implementing a cache is the singleton pattern, which ensures all your code talks to the same cache instance, independent of what cache you actually use: Aerospike, Redis, Guava, JCS or others.

Your singleton would look like this:

public enum TicketCache {
  INSTANCE;
  
  public Set<String> getTickets(final String systemId) {
    Set<String> result = new HashSet<>();
    // Your cache related code goes here
    return result;
  }
  
  public TicketCache addTicket(final String systemId, final String ticketId) {
    // Your cache/persistence code goes here
    return this;
  }
}

and a method returning tickets for a user (e.g. in a user object) could look like this:

  public Set<String> getUserTickets() {
    Set<String> result = new HashSet<>();
    Set<String> systemsResponsibleFor = this.getSystems();
    systemsResponsibleFor.forEach(systemId -> 
      result.addAll(TicketCache.INSTANCE.getTickets(systemId)));
    return result;
  }

Now when you want to test this method, you have a dependency on TicketCache and can't test the getUserTickets() method in isolation. You are at the mercy of your cache implementation. But there is a better way.
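
As a teaser, here is a minimal sketch of one possible direction - hiding the cache behind an interface so a test can inject a double. The names are my illustration; the actual approach follows after the break:

// An interface decouples callers from the singleton
public interface TicketSource {
  Set<String> getTickets(String systemId);
}

// The method under test asks for a TicketSource instead of TicketCache.INSTANCE
public Set<String> getUserTickets(final TicketSource source) {
  final Set<String> result = new HashSet<>();
  this.getSystems().forEach(systemId -> result.addAll(source.getTickets(systemId)));
  return result;
}

// In a unit test, a lambda serves as the test double:
// Set<String> tickets = user.getUserTickets(systemId -> Set.of("T-1", "T-2"));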


Read more

Posted by on 10 January 2020 | Comments (0) | categories: Java UnitTesting

http(s) debugging cheat sheet


Martin Luther is credited with the utterance "You have to watch how people talk". What worked for the famous bible translation applies to APIs as well. Despite open standards and a standards body, APIs do their own thing, not necessarily as documented.

While it is reasonably easy to watch them inside a browser using the Developer Tools (also here), it gets tricky when you want to watch an application like macOS Calendar, Thunderbird, Slack or Bodo (for Jira).

This is your cheat-sheet.

Setup your HTTP(s) Forensics

  • Install the application you want to investigate (yeah, a no-brainer)
  • Install an HTTP debugger - pick one (I use Charles Proxy)
  • Configure your HTTP debugger to be able to analyse https traffic to your chosen target server
  • Install Postman and curl
  • Have a place where you save your investigation results in Markdown; /docs in a GitHub repo is a good place
  • Configure your HTTP debugger to intercept the http traffic on your target domain. This works differently for each OS and debugger, so read the manual!
  • Fetch a cup of your favorite beverage, we are good to go

Investigate

  • Switch on your HTTP debugger (this line is sponsored by Captain Obvious)
  • Execute one command in your custom app. Typically something like "login" or "add account" or "File Open" (that's my favorite in Office apps, it fires 4 or more HTTP requests when pointed at an http endpoint that understands WebDAV)
  • Look at the raw results. Each debugger has a nice GUI that separates headers, cookies and body payload, but you want to look at the raw data (see the example after this list):
    • Your request starts with METHOD /route HTTP_Version, e.g. POST /login HTTP/1.1. Everything until the first empty line is headers, eventually followed by a body. Knowing the methods helps to set expectations. See also RFC 7231, RFC 5789, RFC 2518, RFC 4918, RFC 3744 and RFC 4791
    • Your response starts with a status line HTTP/1.1 StatusCode StatusMessage, e.g. HTTP/1.1 200 OK. Again: everything until the first empty line is headers, followed by the optional response body
  • It can get trickier when your app is already using HTTP/2 or later, since that allows streaming and non-text payloads like gRPC
  • Document that in Markdown; bracket the http with three backticks, so it renders as source
  • Repeat for other commands in your app
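
A minimal example of such a raw exchange, request first, then the response - host, route and payload are invented for illustration:

POST /login HTTP/1.1
Host: api.example.com
Content-Type: application/json
Content-Length: 45

{"username": "alibaba", "password": "sesame"}

HTTP/1.1 200 OK
Content-Type: application/json
Set-Cookie: session=abc123; HttpOnly

{"status": "ok"}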

Re-enactment

What fun is detective work without verifying the results? This is where you turn to Postman or - if you have it - curl.

You want to use parameters for your hostname and the user-specific parts (username, passwords), and you need to have a clear idea which return values are variable. Good candidates to watch out for are cookies or header values. You need to extract these values for chaining to the next request. With a little practice you should be able to make Postman behave like the original app.
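
In curl such a chained re-enactment could look like this - URL, route and field names are invented:

# Log in and capture any cookies the server sets
curl -s -c cookies.txt -X POST https://api.example.com/login \
  -H "Content-Type: application/json" \
  -d '{"username": "alibaba", "password": "sesame"}'

# Replay the follow-up call with the captured session cookie
curl -s -b cookies.txt https://api.example.com/profile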

Parting words

  • Check the terms of service of the application you are investigating. While public end-points are, well, public, you might have agreed in your T&C not to touch them
  • This isn't an instruction for hacking; you still need to be you - with your credentials. Nevertheless you might stumble over "security by obscurity" or other annoyances
  • Any app that uses http instead of https needs to die a horrible death
  • Reading the API spec is potentially faster

As usual YMMV


Posted by on 30 December 2019 | Comments (0) | categories: HTTP(S) Networking WebDevelopment

A Streaming Pattern for the vert.x EventBus (Part 1)


When dealing with large amounts of data, using streams allows processing to happen the moment data arrives, not just when the data is complete. Streaming is core to reactive programming. This blog entry describes an approach where the vert.x EventBus sits between requester and resource.

The scenario

Classic Eventbus Request Response

Image created using WebSequenceDiagrams

title Vert.x EventBus

participant Requester
participant EventBus
participant DataSource

Requester->EventBus: request DataSet
EventBus->DataSource: forward request
DataSource->EventBus: reply with data
EventBus->Requester: forward reply

A requester (e.g. the handler of an HTTP listener) sends a request via the EventBus using a request-response pattern with EventBus.request(...). Simple and easy. The problem with this: one request has one response. That doesn't work for streaming data.
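
For reference, the classic one-shot interaction looks like this - assuming we're inside a verticle where vertx is available; address and payload are illustrative:

// Classic request/response: exactly one reply per request
JsonObject query = new JsonObject().put("customer", "Acme");
vertx.eventBus().request("data.source", query, ar -> {
  if (ar.succeeded()) {
    System.out.println("Got reply: " + ar.result().body());
  } else {
    System.err.println("Request failed: " + ar.cause());
  }
});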

Taking a page from the military

The standard pattern for military commands is:

  1. Utter the command
  2. Acknowledge the command
  3. Execute the command (Conquer 14 countries, might take time. For Germans: Liberate 14 countries)
  4. Report completion of command

Applying to the EventBus

Following the pattern above, the initial request/response will only establish the intent (btw. Intent-Based Leadership is a smoking hot topic). Execution and completion (items 3 and 4) will be handled by a publish and subscribe pattern.

So our scenario now looks like this:

Eventbus Streaming Request Response

Image created using WebSequenceDiagrams

title Vert.x EventBus Streaming

participant Requester
participant EventBus
participant DataSource

Requester->EventBus: start listening\non temp address
note over Requester, DataSource: Start request/response
Requester->EventBus: request Stream\notify on temp address
EventBus->DataSource: forward request
DataSource->EventBus: reply with\nacknowledgement
EventBus->Requester: forward response
note over Requester, DataSource: End of request/response
note over Requester, DataSource: Start publish/subscribe
DataSource->EventBus: publish first data
EventBus->Requester: forward response
DataSource->EventBus: publish more data
EventBus->Requester: forward response
DataSource->EventBus: publish last data
EventBus->Requester: forward response
Requester->EventBus: end listening\non temp address
note over Requester, DataSource: End of publish/subscribe

To implement this, I'm taking advantage of EventBus' DeliveryOptions that allow me to set header values. I define a header StreamListenerAddress that my data source will use for publishing data:

// Error handling omitted
public void initiateStreamResponse(final String dataAddress, final JsonObject requestMessage, final Promise<Void> didItWork) {
  final String streamListenerAddress = "tempListenerAddresses." + UUID.randomUUID().toString();
  final EventBus eventBus = this.getVertx().eventBus();
  final MessageConsumer<JsonObject> dataReceiver = eventBus.consumer(streamListenerAddress);
  dataReceiver.handler(message -> {
    final MultiMap headers = message.headers();
    final boolean isFirst = Boolean.parseBoolean(headers.get("first"));
    final boolean isComplete = Boolean.parseBoolean(headers.get("complete"));
    /*
      Here goes the code feeding into the requester's logic, e.g. a chunked HTTP response,
      a websocket publish or a gRPC push. isFirst and isComplete can be true at the
      same time when there is only a single response
    */
    if (isComplete) {
      dataReceiver.unregister();
      didItWork.complete();
    }
  });
  final DeliveryOptions deliveryOptions = new DeliveryOptions();
  deliveryOptions.addHeader("StreamListenerAddress", streamListenerAddress);
  eventBus.request(dataAddress, requestMessage, deliveryOptions, ar -> {
    if (ar.succeeded()) {
      final Message<Object> resultMessage = ar.result();
      final boolean success = Boolean.parseBoolean(resultMessage.headers().get(Constants.HEADER_SUCCESS));
      if (!success) {
        dataReceiver.unregister();
        didItWork.fail(new Error("Request for data unsuccessful"));
      }
    } else {
      dataReceiver.unregister();
      didItWork.fail(ar.cause());
    }
  });
}

What next?

  • In Part 2 I will describe the data source part of this approach
  • In Part 2 I will wrap that in observable and observer

I'm using this pattern in the Keep API, YMMV


Posted by on 04 December 2019 | Comments (0) | categories: Java Reactive vert.x

Deep Human Super Skills for a VUCA world


We live in a world dominated by volatility, uncertainty, complexity and ambiguity (VUCA). Our traditional approach to a working career - learning a specific skill and sticking to it - doesn't fit anymore. What is needed instead is the subject of the book Deep Human, written by Crystal Lim-Lange and Dr. Gregor Lim-Lange.

5 skills

[Image: the 5 human super skills]

The 5 skills build on each other, each forming the foundation and prerequisite for the next level. Here is my take, paraphrasing what I learned and how they fit together. Full details, including experiences on how to get there, are in the book.

  1. Mindfulness Being rooted in reality, seeing what is, without judgment and deep filters, is the foundation of any progress. The practice of precisely observing your surroundings allows you to gather evidence for any assessment and action. The mindful person is master of their thoughts and doesn't fall prey easily to illusions. Of course it takes lifelong practice. The mind is like a muscle: once you start or stop training, it changes

  2. Self-Awareness Once the mind has been sharpened and silenced, you can turn attention to the self. What are the sensations, emotions, thoughts, fears, hopes and beliefs that drive you? Armed with focus and mindfulness you can wrestle the driver's seat back from the monkey mind. Clarity about yourself leads to the freedom to decide who you want to be instead of running on auto-pilot

  3. Empathy Having honed the skill of self-awareness, you can apply it to other sentients. Without clarity about yourself this would fail, so self-awareness is the foundation of empathy, as mindfulness is the foundation of self-awareness. Learning to walk in someone else's shoes deepens your understanding of a complex world. Empathy isn't a woolsy-shoolsy be-nice-to-everybody feeling, but the application of reality from a different viewpoint. As the late Lama Marut would say: "Be nice and don't take shit"

  4. Complex communication You are able to see things as they are; you recognise strengths and weaknesses in yourself and others. You value reality over opinions and solutions over debate. Skilled like this, explaining the not-so-simple, cutting to the chase and getting your point across become your next level. You won't get there without the foundation of empathy, self-awareness and mindfulness

  5. Resilience and Adaptability Life changes - subtle or sudden, minimal or radical. You have practised communicating clearly, seeing reality from different perspectives as it is, and knowing yourself. These skills and the resulting confidence enable you to face whatever comes your way. Not clinging to illusions makes you flexible like bamboo in the wind. You will clearly see what is needed and where you can find purpose. You adapt.

The whole book is an insightful and interesting read, so go and get your copy.


Posted by on 27 October 2019 | Comments (0) | categories: After Hours Singapore

A certificate wants a SAN


Following my recent blog about creating your own CA, you will find out, like I did, that the certs are quite wanting.

The Subject Alternative Name (SAN)

Even after importing the ca-chain.cert.pem into your keyring / keystore, Chrome will barf at the certificate, complaining about a missing SAN.

The idea of a SAN is to allow additional name variations to be recognised for one given certificate, reducing the effort for multi-purpose servers. E.g.: myawesomesite.com, www.myawesomesite.com, myawesomesite.io, www.myawesomesite.io, crazydata.com

I tried really hard, but at the time of writing it seems the only way to create a SAN for your certs is to provide a configuration file. I didn't find a command line option (short of various attempts at redirection and piping).

The hack I came up with:

Edit intermediate/openssl.cnf and add one line to the [ server_cert ] section: subjectAltName = @alt_names. The @ sign tells OpenSSL to look for a section with that name and expand its content as the parameter.

The following shell script generates a certificate that works for:

  • www.domain (e.g. www.awesome.io)
  • domain (e.g. awesome.io)
  • domain.local (e.g. awesome.io.local)

The last one is helpful when you want to try SSL on localhost and amend your hosts file to contain awesome.io.local

#!/bin/bash
# Create new server certificates with the KEEP intermediate CA
if [ -z "$1" ]
  then
    echo "Usage: ./makecert.sh domain_name (without www) e.g. ./makecert.sh funsite.com"
    exit 1
fi
export SSL_DOMAIN_NAME=$1
export CONFNAME=intermediate/$1.cnf
cat intermediate/openssl.cnf > $CONFNAME
echo [alt_names] >> $CONFNAME
echo DNS.0 = $SSL_DOMAIN_NAME >> $CONFNAME
echo DNS.1 = www.$SSL_DOMAIN_NAME  >> $CONFNAME
echo DNS.2 = $SSL_DOMAIN_NAME.local  >> $CONFNAME
openssl ecparam -genkey -name prime256v1 -outform PEM -out intermediate/private/$SSL_DOMAIN_NAME.key.pem
chmod 400 intermediate/private/$SSL_DOMAIN_NAME.key.pem
openssl req  -config $CONFNAME  -key intermediate/private/$SSL_DOMAIN_NAME.key.pem -new -sha256 -out intermediate/csr/$SSL_DOMAIN_NAME.csr.pem
openssl ca -config $CONFNAME -extensions server_cert -days 375 -notext -md sha256 -in intermediate/csr/$SSL_DOMAIN_NAME.csr.pem -out intermediate/certs/$SSL_DOMAIN_NAME.cert.pem
chmod 444 intermediate/certs/$SSL_DOMAIN_NAME.cert.pem
openssl pkcs12 -export -in intermediate/certs/$SSL_DOMAIN_NAME.cert.pem -inkey intermediate/private/$SSL_DOMAIN_NAME.key.pem -out intermediate/private/$SSL_DOMAIN_NAME.pfx -certfile intermediate/certs/ca-chain.cert.pem
rm $CONFNAME

This will settle the Subject Alternative Name challenge. There are more challenges to be had. Depending on what application you use, you need to import your intermediate keychain ca-chain.cert.pem in multiple places in different formats (remember, I urged you not to do that in production!).

On Mac and Linux you have a keychain, but NodeJS and Java don't recognize it. Edge (and its older sibling) has its own key store, as has Firefox. Python, depending on version and library, has its own ideas about keys too. So manual management is a PITA.
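
Two examples of what that means in practice - the alias and file names are illustrative:

# Java: import the CA chain into a JKS trust store
keytool -importcert -trustcacerts -noprompt -alias my-test-ca \
  -file intermediate/certs/ca-chain.cert.pem \
  -keystore truststore.jks -storepass changeit

# NodeJS: point the runtime at the extra CA chain instead
export NODE_EXTRA_CA_CERTS=intermediate/certs/ca-chain.cert.pem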

As usual YMMV


Posted by on 26 October 2019 | Comments (0) | categories: HTTP(S) Networking OpenSource WebDevelopment

Create your own Certificate Authority (CA)


Warning: Do NOT, never, ever, do that to a production system!

Promised? OK! Here's the use case: you want to test systems that have made-up addresses like awesomeserver.local and don't want to deal with certificate warnings or the fancy errors that arise when you just use a self-signed cert. This post is a self-reference for my convenience. There are ample other instructions out there.

Disclaimer: I mostly followed these instructions, short of updating some of the commands to use elliptic-curve ciphers.

Useful with a side of work

The process requires a series of steps:

  • Create the private key and root certificate
  • Create an intermediate key and certificate
  • Create certs for your servers
  • Convert them if necessary (e.g. for import into a Java keystore (JKS))
  • Make the public key of the root and intermediate certs available
  • Import these certs in all browsers and runtimes that you will use for testing

Normal mortal users, without these imports, will get scary error messages. While this doesn't deter the determined, it's good for a laugh.
We don't want old school certs, so we aim at a modern elliptic-curve cert (details here). Here we go:

Setting up the directory structure

mkdir -pv -m 600 /root/ca/intermediate
cd /root/ca
curl https://jamielinux.com/docs/openssl-certificate-authority/_downloads/root-config.txt -o openssl.cnf
curl https://jamielinux.com/docs/openssl-certificate-authority/_downloads/intermediate-config.txt -o intermediate/openssl.cnf
mkdir certs crl newcerts private
chmod 700 private
touch index.txt
echo 1000 > serial
cd intermediate
mkdir certs crl csr newcerts private
chmod 700 private
touch index.txt
echo 1000 > serial
echo 1000 > crlnumber
cd ..

You want to check the downloaded files and possibly change the paths in case you have chosen to use different ones.

The Root CA

export OPENSSL_CONF=./openssl.cnf
openssl ecparam -genkey -name prime256v1 -outform PEM | openssl ec -aes256 -out private/ca.key.pem
chmod 400 private/ca.key.pem
openssl req -config openssl.cnf -key private/ca.key.pem -new -x509 -days 7300 -SHA384 -extensions v3_ca -out certs/ca.cert.pem

Keep them safe - remember: "it's on my hard drive only" isn't safe!!!
You want to check the file using openssl x509 -noout -text -in certs/ca.cert.pem, or on macOS just hit the space key in Finder.


Read more

Posted by on 16 October 2019 | Comments (0) | categories: HTTP(S) Networking WebDevelopment