Usability - Productivity - Business - The web - Singapore & Twins

Create your own Certificate Authority (CA)

Warning: Never, ever do that to a production system!

Promised? OK! Here's the use case: you want to test systems that have made-up addresses like awesomeserver.local and don't want to deal with the certificate warnings or fancy errors that arise when you just use a self-signed cert. This post is a self-reference for my convenience. There are ample other instructions out there.

Disclaimer: I mostly followed these instructions, short of updating some of the commands to use elliptic-curve ciphers.

Useful with a side of work

The process requires a series of steps:

  • Create the private key and root certificate
  • Create an intermediate key and certificate
  • Create certs for your servers
  • Convert them if necessary (e.g. for import into a Java keystore, JKS)
  • Make the public key of the root and intermediate certs available
  • Import these certs in all browsers and runtimes that you will use for testing

Normal mortal users without these imports will get scary error messages. While this doesn't deter the determined, it's good for a laugh.
We don't want old-school certs, so we aim at a modern elliptic-curve cert (details here). Here we go:

Setting up the directory structure

mkdir -pv -m 600 /root/ca/intermediate
cd /root/ca
curl https://jamielinux.com/docs/openssl-certificate-authority/_downloads/root-config.txt -o openssl.cnf
curl https://jamielinux.com/docs/openssl-certificate-authority/_downloads/intermediate-config.txt -o intermediate/openssl.cnf
mkdir certs crl newcerts private
chmod 700 private
touch index.txt
echo 1000 > serial
cd intermediate
mkdir certs crl csr newcerts private
chmod 700 private
touch index.txt
echo 1000 > serial
echo 1000 > crlnumber
cd ..

You want to check the downloaded files and possibly change the path in case you have chosen to use a different one.

The Root CA

export OPENSSL_CONF=./openssl.cnf
openssl ecparam -genkey -name prime256v1 -outform PEM | openssl ec -aes256 -out private/ca.key.pem
chmod 400 private/ca.key.pem
openssl req -config openssl.cnf -key private/ca.key.pem -new -x509 -days 7300 -SHA384 -extensions v3_ca -out certs/ca.cert.pem

Keep them safe - remember: "it's on my hard drive only" isn't safe!
You want to check the file using openssl x509 -noout -text -in certs/ca.cert.pem or, on macOS, just hit the space key in Finder.
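The same pattern continues for the intermediate and server certificates. Below is a sketch of the server-cert and JKS-conversion steps; the file names, the -subj value and the changeit password are placeholders, not from the walkthrough I followed, and the signing and keytool commands are shown commented because they depend on the CA files created above and on a Java runtime:

```shell
# Sketch: key and CSR for a test server (placeholder file names)
openssl ecparam -genkey -name prime256v1 -out server.key.pem
openssl req -new -sha384 -key server.key.pem \
  -subj "/CN=awesomeserver.local" -out server.csr.pem

# Signing happens against the intermediate CA built from the configs above:
# openssl ca -config intermediate/openssl.cnf -extensions server_cert \
#   -days 375 -notext -md sha384 \
#   -in server.csr.pem -out server.cert.pem

# For a Java keystore (JKS): bundle into PKCS#12, then import with keytool:
# openssl pkcs12 -export -inkey server.key.pem -in server.cert.pem \
#   -passout pass:changeit -out server.p12
# keytool -importkeystore -srckeystore server.p12 -srcstoretype PKCS12 \
#   -srcstorepass changeit -destkeystore server.jks -deststorepass changeit
```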

Read more

Posted by on 16 October 2019 | Comments (0) | categories: Domino WebDevelopment

What's on your gRPC wire, Protocol Buffers or JSON?

The hot kid on the block for microservice APIs is gRPC, a Google-developed, open-source binary wire protocol.

Its native serialization format is Protocol Buffers, advertised as "a language-neutral, platform-neutral extensible mechanism for serializing structured data". How does that fit into the Domino picture?

Same same, but different

When old bags, like me, hear the word RPC a flood of memories and technologies come to mind:

  • DCOM - Microsoft's take on it: like Java, but Windows only, superseded by WCF for dotNet
  • Corba - a standard defined by a committee, mainly Java (and YES, Domino still ships with a Corba server)
  • SOAP with our beloved (or was the word: cursed?) WSDL

There are a few more modern contenders like Apache Thrift, Apache Avro or the KF - TEE. Good to have so many open standards.

the good

Especially with SOAP, the common reaction to the rise of REST was: Good riddance, RPC. I'm using the term REST fast and loose here, since a lot of the APIs are more like "HTTP endpoints accepting JSON payloads" rather than REST in the formal sense of the definition.

So what's different with gRPC, so it got adopted by the Cloud Native Computing Foundation? IMHO there are several reasons:

  • It is designed by really smart engineers to run up to Google scale
  • It is ground up optimized, not bothering with legacy, but betting on HTTP/2 and its wire efficiencies
  • It is a compact binary protocol, making it efficient in low bandwidth and/or high volume use cases (Google scale anyone)
  • It transmits data only, no repeated meta data as in JSON or XML based approaches (at least when you use Protocol Buffers)
  • It focused on code generation, functioning more like an SDK than an API
  • It has versioning support built in
  • It uses rich structured data types (15 on last count) including enumerations. Notably absent: date/time and currency
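To make the serialization point concrete, here is a minimal sketch of what a Protocol Buffers definition looks like (the message, enum and service names are invented for illustration) — on the wire, only the field numbers and values travel, not the field names:

```proto
syntax = "proto3";

// Hypothetical definition, for illustration only
enum Status {
  STATUS_UNSPECIFIED = 0;
  ACTIVE = 1;
  RETIRED = 2;
}

message Server {
  string name = 1;   // only the tag number (1) appears on the wire
  Status status = 2;
  // note: no built-in date/time or currency type, as mentioned above
}

message ServerRequest {
  string name = 1;
}

service Inventory {
  rpc GetServer (ServerRequest) returns (Server);
}
```

A code generator (protoc with a gRPC plugin) turns this into client and server stubs, which is what makes gRPC feel more like an SDK than an API.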

And of course: it's the current fashion. RedHat provides a comprehensive comparison to OpenAPI, as do others. Poking around YouTube, I gained the impression that most comparisons are made to REST and its limitations, almost similar to sessions about GraphQL. Mr. Sandoval tries to describe differentiators and use cases; go read it, it is quite good.

Read more

Posted by on 15 October 2019 | Comments (0) | categories: Domino gRPC WebDevelopment

A calDAV reference server

After having a look at the many standards involved, it is time to check out a standard or reference implementation. Cutting a long story short: it looks to me like the OpenSource Apple Calendar and Contacts Server (ccs) is my best bet. While the documentation is rather light, it has been battle-tested with my range of targeted clients.

To Docker or to VM?

Trying to avoid the "works on my machine" certification, a native install was out of the question. So Docker or VM? A search yielded one hit (with explanation) and none for a ready-baked VM. On closer inspection, the Docker image, being 2 years old, didn't use the current version, so we had to re-create the image. While at it, I decided to give a VM a shot:

Apple calendar server on Ubuntu 18.04

To keep things light, I started with the current LTS version 18.04 desktop and a minimal install with 4G RAM. The first order of business after the install is to get updates and install the modules needed for the VirtualBox extensions:

sudo apt update
sudo apt install gcc make perl
sudo apt dist-upgrade

Read more

Posted by on 11 October 2019 | Comments (0) | categories: calDAV Domino

The calDAV Standard - navigating the RFC jungle

Application interoperability is key to wide spread adoption. Luckily there are so many open standards that one can claim to be open without being interoperable. On a protocol level HTTP and SMTP were huge successes, as well as HTML/MIME for message content. Beyond that it gets murky. None of the big vendors (outside the OpenSource space) has adopted an open protocol for chat and presence.

For other standards, most notably calendaring, support is murky. One key contributor might be the RFC process, which produces documents that are hard to follow and lack sample implementations. They are work outcomes of a committee, after all. In this series of blog entries I will (try to) highlight the moving parts of a calendar server implementation. The non-moving parts here are the calendar clients to target: Apple Calendar on iOS and macOS, Thunderbird and a few others.

Involved standards

There is a series of RFCs that cover calendar operation, with various degrees of relevance:

  • RFC 4918: webDAV. Defines additional HTTP verbs and XML formats
  • RFC 4791: calDAV. Defines again additional HTTP verbs
  • RFC 5545: iCalendar. Calendar data as plain text, or XML or JSON
  • RFC 7953: vAvailability. Free/Busy lookup specification
  • RFC 7986: Extended properties for iCalendar
  • RFC 6638: Scheduling extensions
  • RFC 8607: Managed attachments in calendar entries
  • RFC 8144: Use of Prefer Header field in webDAV
  • RFC 5785: Definitions for the /.well-known/ URL
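As a taste of how these pieces interlock, this is roughly the first request a client sends to find the server: a PROPFIND (RFC 4918) against the well-known URL (RFC 5785) asking for the current user's principal. This is a sketch; caldav.example.com is a placeholder host, so the request itself is expected to fail here:

```shell
# PROPFIND body asking for the current user's principal (webDAV/calDAV)
cat > propfind.xml <<'EOF'
<?xml version="1.0" encoding="utf-8"?>
<d:propfind xmlns:d="DAV:">
  <d:prop>
    <d:current-user-principal/>
  </d:prop>
</d:propfind>
EOF
# Discovery starts at the /.well-known/caldav URL (placeholder host)
curl -s -X PROPFIND -H "Depth: 0" -H "Content-Type: application/xml" \
  --data @propfind.xml https://caldav.example.com/.well-known/caldav || true
```

A conforming server answers with a redirect or a multi-status XML body pointing at the principal URL, from which the client then discovers the calendar home.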

Read more

Posted by on 09 October 2019 | Comments (0) | categories: calDAV Domino

Vert.x and OpenAPI

In the shiny new world of the API Economy, your API definition and its enforcement is everything. The current standard for REST-based APIs is OpenAPI. What it gives you is a JSON or YAML file that describes what your API looks like. There is a whole zoo of tools around that allow you to visualize, edit, run mock servers or generate client and server code.

My favorite editor for OpenAPI specs is Apicurio, a project driven by RedHat. It strikes a nice balance between being UI driven and leaving you access to the full source code of your specification.

What to do with it

Your API specification defines:

  • the endpoints (a.k.a. the URLs that you can use)
  • the mime types that can be sent or will be received
  • the parameters in the path (the URL)
  • the parameters in the query (the part that looks like ?color=red&shape=circle)
  • the body to send and receive
  • the authentication / authorization requirements
  • the potential status codes (we love 2xx)

To handle all this, it smells like boilerplate or, if you are lucky, a ready-made library. vert.x has the latter. It provides the API Contract module that is designed to handle all this for you. You simply add the module to your pom.xml and load your JSON or YAML OpenAPI specification file.

The documentation shows the code to turn the OpenAPI specification into a router factory:

  OpenAPI3RouterFactory.create(vertx, "src/main/resources/petstore.yaml", ar -> {
    if (ar.succeeded()) {
      // Spec loaded with success
      OpenAPI3RouterFactory routerFactory = ar.result();
    } else {
      // Something went wrong during router factory initialization
      Throwable exception = ar.cause();
    }
  });

As you can see, you can load the spec from a URL (there's an auth option too). So while your API is evolving in Apicurio, you can live-load the latest and greatest from the live preview (should make for some interesting breakages ;-) ).

You then add your routes using routerFactory.addHandlerByOperationId("awesomeOperation",this::awesomeOperationHandler). Vert.x doesn't use the path to match the handler, but the operationId. This allows you to update path information without breaking your code. There is a detailed how-to document describing the steps.

Generate a skeleton for vert.x

As long as you haven't specified a handler for an operation, Vert.x will automatically reply with 501 Not Implemented and not throw any error. To give you a head start, you can generate the base code. The first option is to head to start.vertx.io to generate a standard project skeleton, saving you the manual work of creating all the dependencies in your pom.xml file. Using "Show dependency panel" provides a convenient way to pick the modules you need.

But there are better ways. You can use an OpenAPI Generator or the advanced Vert.x Starter, courtesy of Paulo Lopes. In his tool you specify what it shall generate in a dropdown that defaults to "Empty Project". Once you change that to "OpenAPI Server", the form will allow you to upload your OpenAPI specification and you get a complete project rendered with all handler stubs, including the security handler. There's also a JavaScript version available.

Read more

Posted by on 06 September 2019 | Comments (0) | categories: Java vert.x WebDevelopment

Adding a proxy to your Salesforce Communities

Running a community site might come with a number of interesting requirements:

  • Scan uploaded files for malware or copyright violations
  • Filter language for profanities
  • Comply with local data retention rules (e.g. local before cloud)

For most of these tasks, AppExchange will be the go-to place to find solutions. However, sometimes you want to process data before it hits the platform. This is the moment where you need a proxy.

Clicks not Code

To be ready to proxy, there are a few steps involved. I went through a few loops to arrive at this working sequence:

  1. Register a domain. You will use it to run your community. Using a custom domain is essential to avoid https headaches later on
  2. Obtain an SSL certificate for the custom domain. The easiest path, if you have access to a public host, is to use LetsEncrypt to obtain the cert and then transform it to JKS. The certs are only valid for 90 days, but we only need the JKS for a short while. On e.g. Nginx one can auto-renew the certs
  3. Upload the cert into Salesforce in Security - Certificate and Key Management - Import from Keystore
  4. Follow the Steps 1 and 4 (you did 3 already). You need access to your DNS for that. The domain needs to be fully qualified; you can't use your root (a DNS limitation). Let's say your base is acme.com, you want your partner community to be reachable at partners.acme.com, and your Salesforce Org ID is 1234567890abcdefgh; then you need a CNAME entry that says partners -> partners.acme.com.1234567890abcdefgh.live.siteforce.com. Important: the entry needs to end with a dot (.), otherwise DNS treats the target as relative to your own domain
  5. Test the whole setup. Make sure you can use all community functions using the URL https://partners.acme.com/
  6. Now back to the DNS. Point the CNAME entry to your host (e.g. Heroku), or delete it and create an A record pointing to e.g. DigitalOcean
  7. Make sure the proxy sends the Host header with the value of your custom domain, not the force.com one. Your proxy serves as your own CDN
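Step 7 in Nginx terms could look like this minimal sketch. It is purely illustrative: the host names are the example values from the steps above, the certificate paths assume a standard LetsEncrypt layout, and a real setup will need more tuning:

```nginx
server {
    listen 443 ssl;
    server_name partners.acme.com;
    ssl_certificate     /etc/letsencrypt/live/partners.acme.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/partners.acme.com/privkey.pem;

    location / {
        # forward to the Salesforce community endpoint ...
        proxy_pass https://partners.acme.com.1234567890abcdefgh.live.siteforce.com;
        # ... but keep the custom domain in the Host header (step 7)
        proxy_set_header Host partners.acme.com;
        proxy_ssl_server_name on;
    }
}
```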

Little bummer: you can't do this in a sandbox or a developer org; it needs to be production or a trial.

Next stop: discuss what proxy to use and options to consider. As usual YMMV.

Posted by on 30 June 2019 | Comments (0) | categories: Salesforce Singapore

Turning a blog into a video with invideo.io

My last entry on LWC was a fairly technical piece. To my surprise, Nirav from InVideo approached me and suggested turning it into a video.

Watching instead of reading

The team at InVideo did a nice job on the first draft. Quite a few of the visualizations make the approach and content very approachable. You spend less than 2 minutes to learn whether the details solve an issue you are looking at.

See for yourself!

Let us know what you think in the comments! Disclaimer: InVideo did not compensate me (in kind or financially) for working with them; I wouldn't do that. They approached me and it looked like an interesting idea.

Posted by on 15 June 2019 | Comments (0) | categories: Salesforce Singapore

LWC components with self contained images

When you create your LWC components, it is easy to include Salesforce's predefined icons using lightning-icon. However, once you need a custom icon, you point to an external URL, breaking the self-containment. Unless you use url(data:). Here is what I did.

A scalable check mark

Dynamic Lookup

For a list selection component I wanted a green check mark, like the picture above, to indicate a selected record (more on the component in a later post). LWC doesn't allow (yet?) storing image assets inside a bundle, and I wanted the component to be self-contained.

The solution is to use data:image/svg+xml for a background image. The details are nicely outlined on css-tricks. I tried to use the SVG source code directly, but failed to get it to work. So I resorted to base64. It is an additional step, using an online Base64 encoder.
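If you prefer the command line over an online encoder, the encoding step can be scripted. A sketch, with a made-up stand-in SVG and file names; -w0 is the GNU coreutils flag to suppress line wrapping (macOS base64 behaves slightly differently):

```shell
# A tiny stand-in SVG; the real check mark comes out of your graphic editor
printf '%s' '<svg xmlns="http://www.w3.org/2000/svg" viewBox="0 0 16 16"><polygon fill="#4bca81" points="6,12 2,8 3,7 6,10 13,3 14,4"/></svg>' > checkmark.svg
# Encode and emit the CSS value to paste into your component's stylesheet
printf 'background-image: url(data:image/svg+xml;base64,%s);\n' "$(base64 -w0 checkmark.svg)"
```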

Making images

SVG is just an XML-based text format, so you could create your image in Notepad (take that, jpg!). However, you probably want to use a graphic editor. My choice here is Sketch (which gets me funny looks from designers: why does a developer use one of their tools?). Some steps are worth mentioning:

  • When using text (like the check mark), convert that to a svg path. Right click on text and select "Convert to outlines". This allows the text to scale with the rest of the image
  • Use the Edit-Copy-Copy SVG Code rather than use the export functionality
  • The resulting SVG is "talkative", you can edit and remove quite some content:
    • remove the width and height attributes from the <svg> element, but keep the viewBox. Also remove the xlink namespace
    • the <g> element doesn't need any attribute
    • <polygon> only needs fill and points attribute
    • All numeric values have many digits. You can round them up

Read more

Posted by on 18 April 2019 | Comments (0) | categories: Lightning Salesforce

Lightning Layouts, Input Fields and Field Level Security

The more control you want (or need) to exercise over the page layouts presented to your users, the more details you need to take care of. While the default record details and the lightning-record-form take care of hiding, without a trace, fields the current user doesn't have access to, you need to handle that yourself in a custom layout. Here is how.

Show me yours

A typical custom form layout might look like this:

        <lightning-layout multiple-rows="true">
            <lightning-layout-item size="12">
                <lightning-messages> </lightning-messages>
            </lightning-layout-item>
            <lightning-layout-item padding="around-small" size="6">
                <lightning-input-field field-name="Name"></lightning-input-field>
            </lightning-layout-item>
            <lightning-layout-item padding="around-small" size="6">
                <lightning-input-field field-name="Department"></lightning-input-field>
            </lightning-layout-item>
            <lightning-layout-item padding="around-small" size="6">
                <lightning-input-field field-name="HomePhone"></lightning-input-field>
            </lightning-layout-item>
            <lightning-layout-item padding="around-small" size="6">
                <lightning-input-field field-name="CleanStatus"></lightning-input-field>
            </lightning-layout-item>
        </lightning-layout>

Now, when a user doesn't have field-level access to, let's say, HomePhone (GDPR anyone?), the form would render with an empty space in the second row. To prevent this, two steps are necessary:

  • Add a render condition to the lightning-layout-item
  • Compute the value for it in the onload event of the lightning-record-edit-form

A lightning-layout-item would look like this:

<lightning-layout-item padding="around-small" size="6" if:true={canSee.HomePhone}>
    <lightning-input-field field-name="HomePhone"></lightning-input-field>
</lightning-layout-item>

The only difference is the if:true={canSee.HomePhone} condition.

In your JavaScript file you add @track canSee = {} to initialize your visibility tracker. Finally you amend the formLoadHandler to populate the canSee variable:

formLoadHandler(event) {
    let fields = event.detail.record.fields;

    for (let f in fields) {
        if (fields.hasOwnProperty(f)) {
            this.canSee[f] = true;
        }
    }
}

As usual YMMV.

Posted by on 11 April 2019 | Comments (0) | categories: Lightning Salesforce

Mixing lightning-input-fields with custom data aware fields

Salesforce Lightning offers a developer various ways to design custom forms when page layouts are not enough. The record-edit-form strikes a nice balance: it uses Lightning Data Service and allows you to design your own layout and field selection.

Beyond lightning-input-fields

Most of the time, lightning-input-field is all you need for these forms. They auto-magically talk to the UI API and display the right input type.

However, there are cases where that's not what your users want. A recent example from a project: phone numbers are stored as a text field in Salesforce, but the users wanted a guided input: a country picker, then an area code picker (if the country has those), a length check for the main number (which varies greatly by country) and an optional extension field (popular in the US, but not elsewhere).

So I started digging. Shouldn't it be possible to have something like <c-phone-helper field-name="Phone" /> and the same data magic as for lightning-input-field would happen? Turns out: not so fast. With events and a little code it would be possible, but that glue code needed to be applied to any custom field.

This got me thinking. The solution, it turns out, was to "extend" the record-edit-form to handle "rough" input components. You can give the result a try in experiment 8.

Design goals

  • The component should be a drop-in "replacement" for record-edit-form
  • The structure of a page should be similar to the way one builds record-edit-form based forms
  • All lightning-input-fields should work out of the box
  • No additional glue code should be required in the component hosting the new form
  • Custom input field types should be easy to build. Once I figure out extensions, based on a base component
  • Opinionated: form layout is using a lightning-layout


The replacement for lightning-record-form is c-extended-form (from experiment 8).
"Replacement" is a mouthful, since the component just wraps around a lightning-record-form. A few components are ready to be used with it:

  • specialInput - a little test component. It just returns the input in upper case. Not very useful, other than for studying the boilerplate
  • uxDebouncedInput - returns changed values after a debounce period. The default is 300ms; the attribute delay allows you to specify the duration. The component shows different behavior depending on the attribute field-name being present with a value. The original purpose of the field is to be used in uxQuickLookup, but now you can use it standalone
  • uxQuickLookup - allows you to look up an object. It works in Lightning apps, mobile and communities and can serve as a stop-gap for the missing lookup on mobile. I recently updated it to show additional fields besides the object name

How it works

Read more

Posted by on 06 April 2019 | Comments (1) | categories: Lightning Salesforce WebComponents