wissel.net

Usability - Productivity - Business - The web - Singapore & Twins

A Streaming Pattern for the vert.x EventBus (Part 1)


When dealing with large amounts of data, using streams allows processing to start the moment data arrives, not only once the data is complete. Streaming is core to reactive programming. This blog entry describes an approach where the vert.x EventBus sits between requester and resource.

The scenario

Classic EventBus Request Response

Image created using WebSequenceDiagrams

title Vert.x EventBus

participant Requester
participant EventBus
participant DataSource

Requester->EventBus: request DataSet
EventBus->DataSource: forward request
DataSource->Requester: reply with data

A requester (e.g. the handler of an HTTP listener) sends a request via the EventBus using a request-response pattern with EventBus.request(...). Simple and easy. The problem with this: one request gets exactly one response. That doesn't work for streaming data.
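
In code, the classic one-shot exchange is just a few lines (a minimal sketch; address and payload are made up):

final EventBus eventBus = vertx.eventBus();
final JsonObject query = new JsonObject().put("customer", "4711");
eventBus.request("dataservice.customers", query, ar -> {
	if (ar.succeeded()) {
		// Exactly one reply, carrying the complete result
		final JsonObject result = (JsonObject) ar.result().body();
	} else {
		// Request failed or timed out
	}
});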

Taking a page from the military

The standard pattern for military commands is:

  1. Utter the command
  2. Acknowledge the command
  3. Execute the command (Conquer 14 countries, might take time. For Germans: Liberate 14 countries)
  4. Report completion of command

Applying to the EventBus

Following the pattern above, the first request/response only establishes the intent (btw. Intent Based Leadership is a smoking hot topic). Items 3 and 4 will be handled by a publish and subscribe pattern.

So our scenario now looks like this:

EventBus Streaming Request Response

Image created using WebSequenceDiagrams

title Vert.x EventBus Streaming

participant Requester
participant EventBus
participant DataSource

Requester->Requester: start listening\non temp address
note over Requester, DataSource: Start request/response
Requester->EventBus: request Stream\notify on temp address
EventBus->DataSource: forward request
DataSource->Requester: reply with acknowledgement
note over Requester, DataSource: End of request/response
note over Requester, DataSource: Start publish/subscribe
DataSource->Requester: publish first data
DataSource->Requester: publish more data
DataSource->Requester: publish last data
Requester->Requester: end listening\non temp address
note over Requester, DataSource: End of publish/subscribe

To implement this, I'm taking advantage of EventBus' DeliveryOptions that allow me to set header values. I define a header StreamListenerAddress that my data source will use for publishing data:

// Error handling omitted
public void initiateStreamResponse(final String dataAddress, final JsonObject requestMessage, final Promise<Void> didItWork) {
	final String streamListenerAddress = "tempListenerAddresses." + UUID.randomUUID().toString();
	final EventBus eventBus = this.getVertx().eventBus();
	final MessageConsumer<JsonObject> dataReceiver = eventBus.consumer(streamListenerAddress);
	dataReceiver.handler(message -> {
		final MultiMap headers = message.headers();
		final boolean isFirst = Boolean.parseBoolean(headers.get("first"));
		final boolean isComplete = Boolean.parseBoolean(headers.get("complete"));
		/*
		  Here goes the code feeding into the requester's logic e.g. a chunked HTTP response
		  or a websocket publish or a gRPC push. isFirst and isComplete can be true at the
		  same time when there is only a single response
		*/
		if (isComplete) {
			// Last message of the stream: stop listening and report success
			dataReceiver.unregister();
			didItWork.complete();
		}
	});
	final DeliveryOptions deliveryOptions = new DeliveryOptions();
	deliveryOptions.addHeader("StreamListenerAddress", streamListenerAddress);
	eventBus.request(dataAddress, requestMessage, deliveryOptions, ar -> {
		if (ar.succeeded()) {
			final Message<Object> resultMessage = ar.result();
			final boolean success = Boolean.parseBoolean(resultMessage.headers().get(Constants.HEADER_SUCCESS));
			if (!success) {
				dataReceiver.unregister();
				didItWork.fail(new Error("Request for Data unsuccessful"));
			}
		} else {
			dataReceiver.unregister();
			didItWork.fail(ar.cause());
		}
	});
}
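
For context, the data source's side of this contract could look roughly like this - a sketch with made-up payload variables (firstChunk, lastChunk); Part 2 will cover the real implementation:

eventBus.consumer(dataAddress, message -> {
	final String listenerAddress = message.headers().get("StreamListenerAddress");
	// 2. Acknowledge the command (the request/response part)
	final DeliveryOptions ack = new DeliveryOptions().addHeader(Constants.HEADER_SUCCESS, "true");
	message.reply(new JsonObject(), ack);
	// 3. + 4. Execute and report completion (the publish/subscribe part)
	final DeliveryOptions first = new DeliveryOptions().addHeader("first", "true").addHeader("complete", "false");
	eventBus.publish(listenerAddress, firstChunk, first); // firstChunk: made-up JsonObject
	final DeliveryOptions last = new DeliveryOptions().addHeader("first", "false").addHeader("complete", "true");
	eventBus.publish(listenerAddress, lastChunk, last); // lastChunk: made-up JsonObject
});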

What next?

  • In Part 2 I will describe the data source part of this approach
  • In Part 3 I will wrap that in an observable and observer

I'm using this pattern in the Keep API, YMMV


Posted by on 04 December 2019 | Comments (0) | categories: Java Reactive vert.x

Deep Human Super Skills for a VUCA world


We live in a world dominated by volatility, uncertainty, complexity and ambiguity (VUCA). Our traditional approach to a working career, learning a specific skill and sticking to it, doesn't fit anymore. What is needed instead is the subject of the book Deep Human, written by Crystal Lim-Lange and Dr. Gregor Lim-Lange.

5 skills

5 human super skills

The 5 skills build on each other, each forming the foundation and prerequisite for the next level. Here is my take, paraphrasing what I learned, on how they fit together. Full details, including experiences on how to get there, are in the book.

  1. Mindfulness Being rooted in reality, seeing what is, without judgment and deep filters, is the foundation of any progress. The practice of precisely observing your surroundings allows you to gather evidence for any assessment and action. The mindful person is master of their thoughts and doesn't easily fall prey to illusions. Of course it takes lifelong practice. The mind is like a muscle: once you start or stop training, it changes

  2. Self-Awareness Once the mind has been sharpened and silenced, you can turn attention to the self. What are the sensations, emotions, thoughts, fears, hopes and beliefs that drive you? Armed with focus and mindfulness you can wrestle the driver's seat back from the monkey mind. Clarity about yourself leads to the freedom to decide who you want to be instead of running on auto-pilot

  3. Empathy Having honed the skill of self-awareness, you can apply it to other sentients. Without clarity about yourself this would fail, so self-awareness is the foundation of empathy, as mindfulness is the foundation of self-awareness. Learning to walk in someone else's shoes deepens your understanding of a complex world. Empathy isn't a wishy-washy be-nice-to-everybody feeling, but the application of reality from a different viewpoint. As the late Lama Marut would say: "Be nice and don't take shit"

  4. Complex communication You are able to see things as they are; you recognise strengths and weaknesses in yourself and others. You value reality over opinions and solutions over debate. Skilled like this, explaining the not-so-simple, cutting to the chase and getting your point across becomes your next level. You won't get there without the foundation of empathy, self-awareness and mindfulness

  5. Resilience and Adaptability Life changes: subtle or sudden, minimal or radical. You have practised communicating clearly, seeing reality as it is from different perspectives, and knowing yourself. These skills and the resulting confidence enable you to face whatever comes your way. Not clinging to illusions makes you flexible like bamboo in the wind. You will clearly see what is needed and where you can find purpose. You adapt.

The whole book is an insightful and interesting read, so go and get your copy.


Posted by on 27 October 2019 | Comments (0) | categories: After Hours Singapore

A certificate wants a SAN


Following my recent blog about creating your own CA, you will find out, like I did, that the certs are quite wanting.

The Subject Alternative Name (SAN)

Even after importing the ca-chain.cert.pem into your keyring / keystore, Chrome will barf at the certificate, complaining about a missing SAN.

The idea of a SAN is to allow additional name variations to be recognised for one given certificate, reducing the effort for multi-purpose servers. E.g.: myawesomesite.com, www.myawesomesite.com, myawesomesite.io, www.myawesomesite.io, crazydata.com

I tried really hard, but at the time of writing it seems the only way to add a SAN to your certs is to provide a configuration file. I didn't find a command line option (short of various attempts at redirection and piping).

The hack I came up with:

Edit intermediate/openssl.cnf and add one line to the [ server_cert ] section: subjectAltName = @alt_names. The @ sign tells OpenSSL to look for a section with that name and expand its content as the parameter.
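
With that in place, the relevant pieces of the config look roughly like this (the [alt_names] section itself gets appended per domain by the script below):

[ server_cert ]
# ... existing server_cert settings stay as they are ...
subjectAltName = @alt_names

# appended once per domain by the script below:
[alt_names]
DNS.0 = awesome.io
DNS.1 = www.awesome.io
DNS.2 = awesome.io.local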

Using the following shell script generates a certificate that works for:

  • www.domain (e.g. www.awesome.io)
  • domain (e.g. awesome.io)
  • domain.local (e.g. awesome.io.local)

The last one is helpful when you want to try SSL on localhost and amend your hosts file to contain awesome.io.local

#!/bin/bash
# Create new server certificates with the KEEP intermediate CA
if [ -z "$1" ]
  then
    echo "Usage: ./makecert.sh domain_name (without www) e.g. ./makecert.sh funsite.com"
    exit 1
fi
export SSL_DOMAIN_NAME=$1
export CONFNAME=intermediate/$1.cnf
cat intermediate/openssl.cnf > $CONFNAME
echo [alt_names] >> $CONFNAME
echo DNS.0 = $SSL_DOMAIN_NAME >> $CONFNAME
echo DNS.1 = www.$SSL_DOMAIN_NAME  >> $CONFNAME
echo DNS.2 = $SSL_DOMAIN_NAME.local  >> $CONFNAME
openssl ecparam -genkey -name prime256v1 -outform PEM -out intermediate/private/$SSL_DOMAIN_NAME.key.pem
chmod 400 intermediate/private/$SSL_DOMAIN_NAME.key.pem
openssl req  -config $CONFNAME  -key intermediate/private/$SSL_DOMAIN_NAME.key.pem -new -sha256 -out intermediate/csr/$SSL_DOMAIN_NAME.csr.pem
openssl ca -config $CONFNAME -extensions server_cert -days 375 -notext -md sha256 -in intermediate/csr/$SSL_DOMAIN_NAME.csr.pem -out intermediate/certs/$SSL_DOMAIN_NAME.cert.pem
chmod 444 intermediate/certs/$SSL_DOMAIN_NAME.cert.pem
openssl pkcs12 -export -in intermediate/certs/$SSL_DOMAIN_NAME.cert.pem -inkey intermediate/private/$SSL_DOMAIN_NAME.key.pem -out intermediate/private/$SSL_DOMAIN_NAME.pfx -certfile intermediate/certs/ca-chain.cert.pem
rm $CONFNAME

This will settle the Subject Alternative Name challenge. There are more challenges to be had. Depending on what application you use, you need to import your intermediate keychain ca-chain.cert.pem in multiple places in different formats (remember, I urged you not to do that in production!).

On macOS and Linux you have a keychain, but NodeJS and Java don't recognize it. Edge (and its older sibling) has its own key store, as does Firefox. Python, depending on version and library, has its own ideas about keys too. So manual management is a PITA.
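
For the Java side, importing the chain into the JVM's trust store might look like this (a sketch: paths and the alias are placeholders, changeit is the stock password):

keytool -importcert -trustcacerts -alias my-test-ca \
  -file intermediate/certs/ca-chain.cert.pem \
  -keystore $JAVA_HOME/lib/security/cacerts \
  -storepass changeit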

As usual YMMV


Posted by on 26 October 2019 | Comments (0) | categories: OpenSource WebDevelopment

Create your own Certificate Authority (CA)


Warning Do NOT, never, ever do that to a production system!

Promised? OK! Here's the use case: you want to test your systems that use made-up addresses like awesomeserver.local and don't want to deal with certificate warnings or the fancy errors that arise when you just use a self-signed cert. This post is a self-reference for my convenience. There are ample other instructions out there.

Disclaimer: I mostly followed these instructions, short of updating some of the commands to use elliptic-curve ciphers.

Useful with a side of work

The process requires a series of steps:

  • Create the private key and root certificate
  • Create an intermediate key and certificate
  • Create certs for your servers
  • Convert them if necessary (e.g. for import into a Java keystore, JKS)
  • Make the public key of the root and intermediate certs available
  • Import these certs in all browsers and runtimes that you will use for testing

Normal mortal users without these imports will get scary error messages. While this doesn't deter the determined, it's good for a laugh.
We don't want old-school certs, so we aim at a modern elliptic-curve cert (Details here). Here we go:

Setting up the directory structure

mkdir -pv -m 600 /root/ca/intermediate
cd /root/ca
curl https://jamielinux.com/docs/openssl-certificate-authority/_downloads/root-config.txt -o openssl.cnf
curl https://jamielinux.com/docs/openssl-certificate-authority/_downloads/intermediate-config.txt -o intermediate/openssl.cnf
mkdir certs crl newcerts private
chmod 700 private
touch index.txt
echo 1000 > serial
cd intermediate
mkdir certs crl csr newcerts private
chmod 700 private
touch index.txt
echo 1000 > serial
echo 1000 > crlnumber
cd ..

You want to check the downloaded files and possibly change the path in case you have chosen to use a different one.

The Root CA

export OPENSSL_CONF=./openssl.cnf
openssl ecparam -genkey -name prime256v1 -outform PEM | openssl ec -aes256 -out private/ca.key.pem
chmod 400 private/ca.key.pem
openssl req -config openssl.cnf -key private/ca.key.pem -new -x509 -days 7300 -SHA384 -extensions v3_ca -out certs/ca.cert.pem

Keep them safe - remember: "it's only on my hard drive" isn't safe!
You want to check the file using openssl x509 -noout -text -in certs/ca.cert.pem or, on macOS, just hit the space key in Finder.


Read more

Posted by on 16 October 2019 | Comments (0) | categories: Domino WebDevelopment

What's on your gRPC wire, Protocol Buffers or JSON?


The hot kid on the block for microservice APIs is gRPC, a Google-developed, OpenSource binary wire protocol.

Its native serialization format is Protocol Buffers, advertised as "a language-neutral, platform-neutral extensible mechanism for serializing structured data". How does that fit into the Domino picture?

Same same, but different

When old bags, like me, hear the word RPC a flood of memories and technologies come to mind:

  • DCOM Microsoft's take on it: like Java, but Windows only, superseded by WCF for .NET
  • CORBA a standard defined by a committee, mainly Java (and YES, Domino still ships with a CORBA server)
  • SOAP with our beloved (or was the word: cursed?) WSDL

There are a few more modern contenders like Apache Thrift, Apache Avro or the KF - TEE. Good to have so many open standards.

the good

Especially with SOAP, the common reaction to the rise of REST was: good riddance, RPC. I'm using the term REST fast and loose here, since a lot of the APIs are more like "HTTP endpoints accepting JSON payloads" rather than REST in the formal sense of the definition.

So what's different about gRPC that it got adopted by the Cloud Native Computing Foundation? IMHO there are several reasons:

  • It is designed by really smart engineers to run at up to Google scale
  • It is ground-up optimized, not bothering with legacy, but betting on HTTP/2 and its wire efficiencies
  • It is a compact binary protocol, making it efficient in low-bandwidth and/or high-volume use cases (Google scale, anyone?)
  • It transmits data only, no repeated meta data as in JSON or XML based approaches (at least when you use Protocol Buffers)
  • It is focused on code generation, functioning more like an SDK than an API
  • It has versioning support built in
  • It uses rich structured data types (15 at last count) including enumerations (see the sketch below). Notably absent: date/time and currency
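
A minimal, made-up Protocol Buffers definition illustrates the last two points: the numeric field tags are what travels on the wire (and what makes versioning work), and enumerations are first-class citizens:

syntax = "proto3";

message Order {
  // Field numbers, not names, go on the wire - keep them stable across versions
  string id = 1;
  Status status = 2;
  int64 amount_cents = 3; // no currency type - you roll your own

  enum Status {
    UNKNOWN = 0;
    OPEN = 1;
    SHIPPED = 2;
  }
}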

And of course: it's the current fashion. RedHat provides a comprehensive comparison to OpenAPI, as do others. Poking around YouTube, I gained the impression that most comparisons are made to REST and its limitations, almost similar to sessions about GraphQL. Mr. Sandoval tries to describe differentiators and use cases; go read it, it is quite good.


Read more

Posted by on 15 October 2019 | Comments (0) | categories: Domino gRPC WebDevelopment

A calDAV reference server


After having a look at the many standards involved, it is time to check out a standard or reference implementation. Cutting a long story short: it looks to me like the OpenSource Apple Calendar and Contacts Server (ccs) is my best bet. While the documentation is rather light, it has been battle-tested with my range of targeted clients.

To Docker or to VM?

Trying to avoid the Works on my machine certification, a native install was out of the question. So Docker or VM? A search yielded one hit for Docker (with explanation) and none for a ready-baked VM. On closer inspection the Docker image, being 2 years old, didn't use the current version, so we had to re-create the image. While at it, I decided to give a VM a shot:

Apple calendar server on Ubuntu 18.04

To keep things light, I started with the current LTS version 18.04 desktop and a minimal install with 4G RAM. First order of business after the install is to get updates and install the modules needed for the VirtualBox Guest Additions:

sudo apt update
sudo apt install gcc make perl
sudo apt dist-upgrade

Read more

Posted by on 11 October 2019 | Comments (0) | categories: calDAV Domino

The calDAV Standard - navigating the RFC jungle


Application interoperability is key to widespread adoption. Luckily there are so many open standards that one can claim to be open without being interoperable. On the protocol level HTTP and SMTP were huge successes, as was HTML/MIME for message content. Beyond that it gets murky. None of the big vendors (outside the OpenSource space) has adopted an open protocol for chat and presence.

For other standards, most notably calendaring, support is murky. One key contributor might be the RFC process, which produces documents that are hard to follow and lack sample implementations. They are the work outcomes of a committee, after all. In this series of blog entries I will (try to) highlight the moving parts of a calendar server implementation. The non-moving parts here are the calendar clients to target: Apple Calendar on iOS and macOS, Thunderbird and a few others.

Involved standards

There is a series of RFCs that cover calendar operations, with various degrees of relevance:

  • RFC 4918: webDAV. Defines additional HTTP verbs and XML formats
  • RFC 4791: calDAV. Defines again additional HTTP verbs (see the example after this list)
  • RFC 5545: iCalendar. Calendar data as plain text, or XML or JSON
  • RFC 7953: vAvailability. Free/Busy lookup specification
  • RFC 7986: Extended properties for iCalendar
  • RFC 6638: Scheduling extensions
  • RFC 8607: Managed attachments in calendar entries
  • RFC 8144: Use of Prefer Header field in webDAV
  • RFC 5785: Definitions for the /.well-known/ URL
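
"Additional HTTP verbs" sounds abstract, so here is what a minimal calendar-query REPORT (RFC 4791) looks like on the wire - host and path are made up:

REPORT /calendars/users/alice/calendar/ HTTP/1.1
Host: cal.example.com
Depth: 1
Content-Type: application/xml; charset=utf-8

<?xml version="1.0" encoding="utf-8" ?>
<c:calendar-query xmlns:d="DAV:" xmlns:c="urn:ietf:params:xml:ns:caldav">
  <d:prop>
    <d:getetag/>
    <c:calendar-data/>
  </d:prop>
  <c:filter>
    <c:comp-filter name="VCALENDAR"/>
  </c:filter>
</c:calendar-query>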

Read more

Posted by on 09 October 2019 | Comments (0) | categories: calDAV Domino

Vert.x and OpenAPI


In the shiny new world of the API Economy, your API definition and its enforcement is everything. The current standard for REST-based APIs is OpenAPI. What it gives you is a JSON or YAML file that describes what your API looks like. There is a whole zoo of tools around that let you visualize, edit, run mock servers or generate client and server code.
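
For a feel of the format, a made-up minimal spec in YAML might look like this:

openapi: 3.0.0
info:
  title: Awesome API
  version: 1.0.0
paths:
  /awesome:
    get:
      operationId: awesomeOperation
      responses:
        '200':
          description: All good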

My favorite editor for OpenAPI specs is Apicurio, a project driven by RedHat. It strikes a nice balance between being UI driven and leaving you access to the full source code of your specification.

What to do with it

Your API specification defines:

  • the endpoints (a.k.a the URLS that you can use)
  • the mime types that can be sent or will be received
  • the parameters in the path (the URL)
  • the parameters in the query (the part that looks like ?color=red&shape=circle)
  • the body to send and receive
  • the authentication / authorization requirements
  • the potential status codes (we love 2xx)

Handling all this smells like boilerplate or, if you are lucky, like a ready-made library. Vert.x has the latter. It provides the API Contract module that is designed to handle all of this for you. You simply add the module to your pom.xml and load your JSON or YAML OpenAPI specification file:

<dependency>
 <groupId>io.vertx</groupId>
 <artifactId>vertx-web-api-contract</artifactId>
 <version>3.8.1</version>
</dependency>

The documentation shows the code to turn the OpenAPI specification into a router factory:

OpenAPI3RouterFactory.create(
  vertx,
  "https://raw.githubusercontent.com/OAI/OpenAPI-Specification/master/examples/v3.0/petstore.yaml",
  ar -> {
    if (ar.succeeded()) {
      // Spec loaded with success
      OpenAPI3RouterFactory routerFactory = ar.result();
    } else {
      // Something went wrong during router factory initialization
      Throwable exception = ar.cause();
    }
  });

As you can see, you can load the spec from a URL (there's an auth option too). So while your API is evolving in Apicurio, you can live-load the latest and greatest from the live preview (should make for some interesting breakages ;-) ).

You then add your routes using routerFactory.addHandlerByOperationId("awesomeOperation", this::awesomeOperationHandler). Vert.x doesn't use the path to match the handler, but the operationId. This allows you to update path information without breaking your code. There is a detailed how-to document describing the steps.
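
A handler wired up that way might look like this (a sketch; the operation name and payload are made up):

routerFactory.addHandlerByOperationId("awesomeOperation", routingContext -> {
	// Parameters have already been validated against the spec at this point
	RequestParameters params = routingContext.get("parsedParameters");
	routingContext.response()
		.putHeader("Content-Type", "application/json")
		.end(new JsonObject().put("status", "ok").encode());
});
// Once all operations have handlers, materialize the router
Router router = routerFactory.getRouter();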

Generate a skeleton for vert.x

As long as you haven't specified a handler for an operation, Vert.x will automatically reply with 501 Not Implemented and not throw any error. To give you a head start, you can generate the base code. The first option is to head to start.vertx.io to generate a standard project skeleton, saving you the manual work of creating all the dependencies in your pom.xml file. Using "Show dependency panel" provides a convenient way to pick the modules you need.

But there are better ways. You can use an OpenAPI Generator or the advanced Vert.x Starter, courtesy of Paulo Lopes. In his tool you specify what it shall generate in a dropdown that defaults to "Empty Project". Once you change that to "OpenAPI Server", the form will allow you to upload your OpenAPI specification and you get a complete project rendered with all handler stubs, including the security handler. There's also a JavaScript version available.


Read more

Posted by on 06 September 2019 | Comments (0) | categories: Java vert.x WebDevelopment

Adding a proxy to your Salesforce Communities


Running a community site might come with a number of interesting requirements:

  • Scan uploaded files for malware or copyright violations
  • Filter language for profanities
  • Comply with local data retention rules (e.g. local before cloud)

For most of these tasks, AppExchange will be the go-to place to find a solution. However, sometimes you want to process data before it hits the platform. This is the moment where you need a proxy.

Clicks not Code

To be ready to proxy, there are a few steps involved. I went through a few loops to arrive at this working sequence:

  1. Register a domain. You will use it to run your community. Using a custom domain is essential to avoid HTTPS headaches later on
  2. Obtain an SSL certificate for the custom domain. The easiest path, if you have access to a public host, is to use LetsEncrypt to obtain the cert and then transform it to JKS. The certs are only valid for 90 days, but we only need them for a short while in JKS. On e.g. Nginx one can auto-renew the certs
  3. Upload the cert into Salesforce in Security - Certificate and Key Management - Import from Keystore
  4. Follow the Steps 1 and 4 (you did 3 already). You need access to your DNS for that. The domain needs to be fully qualified; you can't use your root (a DNS limitation). Let's say your base is acme.com, you want your partner community to be reachable at partners.acme.com, and your Salesforce Org ID is 1234567890abcdefgh: then you need a CNAME entry that says partners -> partners.acme.com.1234567890abcdefgh.live.siteforce.com. Important: the entry needs to end with a DOT (.), otherwise DNS treats it as relative to your domain
  5. Test the whole setup. Make sure you can use all community functions using the URL https://partners.acme.com/
  6. Now back to the DNS. Point the CNAME entry to your host (e.g. Heroku), or delete it and create an A record pointing to e.g. DigitalOcean
  7. Make sure the proxy sends the Host header with the value of your custom domain, not the force.com one (see the sketch below). Your proxy serves as your own CDN
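
For illustration, an Nginx take on steps 6 and 7 could look roughly like this (all names are the placeholders from above, certificate paths assume LetsEncrypt defaults):

server {
    listen 443 ssl;
    server_name partners.acme.com;
    ssl_certificate     /etc/letsencrypt/live/partners.acme.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/partners.acme.com/privkey.pem;

    location / {
        # Forward to the Salesforce community endpoint
        proxy_pass https://partners.acme.com.1234567890abcdefgh.live.siteforce.com;
        # Step 7: the Host header must carry the custom domain
        proxy_set_header Host partners.acme.com;
    }
}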

Little bummer: you can't do this in a sandbox or a developer org; it needs to be production or a trial.

Next stop: discuss what proxy to use and options to consider. As usual YMMV.


Posted by on 30 June 2019 | Comments (0) | categories: Salesforce Singapore

Turning a blog into a video with invideo.io


My last entry on LWC was a fairly technical piece. To my surprise, Nirav from InVideo approached me and suggested turning it into a video.

Watching instead of reading

The team at InVideo did a nice job on the first draft. Quite a few of the visualizations make the approach and content very accessible. You spend less than 2 minutes to learn whether the details solve an issue you are looking for.

See for yourself!

Let us know what you think in the comments! Disclaimer: InVideo did not compensate me (in kind or financially) for working with them; I wouldn't do that. They approached me and it looked like an interesting idea.


Posted by on 15 June 2019 | Comments (0) | categories: Salesforce Singapore