Usability - Productivity - Business - The web - Singapore & Twins

Custom REST service in XPages using a service bean

Talking to your backend using JSON and REST is all the rage for contemporary development. Domino has supported at least read access for quite a while using ?ReadViewEntries[&OutputFormat=JSON]. Using Domino Access Services (DAS), this has been extended to read/write support for documents as well.
However, as a result, your front-end application now needs to deal with the Domino way to present data, especially the odd use of @ in JSON keys (which e.g. jQuery isn't fond of). Contemporary approaches mandate that you minimize the data you send over the wire and send data in your business structure, not in your database format. Furthermore, when data comes back, you want to validate it and act on it.
In the Extension Library there is the REST control, which you can use instead of the DAS service. It allows you to define what you want to expose as XML or JSON. There are a number of predefined services, but my favorite is the customRestService. When you use the custom service, you can write JavaScript for all events: doGet, doPost, doPut and doDelete, but you can also use a service bean. A service bean is not a managed bean, so you don't need to specify anything in your faces-config.xml. However, it is a little special. A sample XPage could look like this:
<?xml version="1.0" encoding="UTF-8"?>
<xp:view xmlns:xp="http://www.ibm.com/xsp/core"
	xmlns:xe="http://www.ibm.com/xsp/coreex">
	<h1>This is the landing page of the orgSearch Service</h1>
	<p>Please use "search.xsp/json" for the actual query</p>

	<xe:restService id="JSONSearch" pathInfo="json" state="false">
		<xe:this.service>
			<!-- the serviceBean class name below is illustrative -->
			<xe:customRestService contentType="application/json"
				serviceBean="com.notessensei.demo.OrgSearchBean" />
		</xe:this.service>
	</xe:restService>
</xp:view>
If your page name is demo.xsp, then the access to the service, based on the pathInfo property, is demo.xsp/json.

Read more

Posted by on 2014-10-22 11:09 | Comments (2) | categories: XPages


"Follow your dreams" sounds good at first sight. Usually it gets grounded by "I need to make a living". Now Timothy Hughes nicely summed it up: it isn't about passion or dreams, but purpose. Purpose is the intersection of four entities: Passion, Mission, Profession and Vocation.
Click on the image for a bigger version

Posted by on 2014-10-21 11:44 | Comments (0) | categories: After hours

Put an angular face on your inbox

In the last instalment I got vert.x to read a Notes view from your local mail file and emit it as a JSON stream. While that might be perfectly fine for the inner geek, normal mortals want to look at (and interact with) something prettier.
The cool kids on the block for web interfaces and applications are Twitter Bootstrap and AngularJS, so these will be the tools in this instalment.
Not familiar with them? Go and watch some videos. Back? Let's get going.
Since I'm not much of a designer, I chose the ready-made AdminLTE template, based on Bootstrap. The main reason for choosing this one was a timeline layout that I wanted for the inbox.
My inbox is already sorted by date, so it should fit nicely (so I thought). However, the view is optically categorized by date, while under the hood it is just a flat list of <li> elements. A little tweaking was necessary. The result looks quite OK for an after-hours exercise:
Alternate layout for the Inbox
The steps to get there:
  1. Tweak the vert.x service to render a by-date categorized version of the inbox. You can do that without touching the underlying view. I'll provide details on that module in a later article (and some sample data below). The main difference for now: the inbox data will be available at the URL /notes/categorized/($Inbox)
  2. Create templates and directives for Angular
  3. Create the Angular.js app and its Controllers
Of course, no application is complete without a nice set of challenges. The biggest here was the flat list for the timeline. I tried to adjust the CSS to accommodate a hierarchical list, where the outer elements are the dates containing all messages that arrived on that day, but there was too much CSS to fight. So I decided to tweak the output a little.
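The categorization in step 1 boils down to grouping the date-sorted flat list of entries by day. Here is a minimal sketch in plain Java of that grouping; the MailEntry class and its field names are made up for illustration, the real module works on Notes view entries:

```java
import java.time.LocalDate;
import java.time.LocalDateTime;
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

public class InboxCategorizer {

    // A stand-in for one inbox entry; the real module reads these from the Notes view
    public static class MailEntry {
        public final LocalDateTime received;
        public final String subject;

        public MailEntry(LocalDateTime received, String subject) {
            this.received = received;
            this.subject = subject;
        }
    }

    // Group a date-sorted flat list into date "categories", preserving arrival order
    public static Map<LocalDate, List<MailEntry>> categorize(List<MailEntry> inbox) {
        Map<LocalDate, List<MailEntry>> result = new LinkedHashMap<>();
        for (MailEntry entry : inbox) {
            result.computeIfAbsent(entry.received.toLocalDate(), d -> new ArrayList<>()).add(entry);
        }
        return result;
    }
}
```

Since the inbox is already sorted by date, a LinkedHashMap keeps the day buckets in the order they first appear, which is exactly what the timeline layout needs.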

Read more

Posted by on 2014-09-30 09:26 | Comments (0) | categories: IBM Notes angular.js

Rendering a Notes view as JSON REST service - on your client

My next goal after getting the basic connection to Notes working is to be able to serve a potential API. Still making friends with the non-blocking approach of vert.x, I'm taking baby steps forward. In this round I want to be able to deliver a view or folder as JSON string. On a Domino server that is easy. You can use ?ReadViewEntries&OutputFormat=JSON. On a Notes client you have to do it yourself.
In round one I will ignore categorized views (that's for next time), but I will already massage the JSON to be leaner. After all, why send over the wire what you don't need? So I have a little AppConfig.INSTANCE singleton that delivers a viewConfig object. This object has the list of columns and the intended labels that I want to be returned.
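The ViewConfig class itself is not listed in this article; a minimal sketch of what it could look like, matching the calls made in renderView below (everything beyond the method names used there is an assumption):

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Maps Notes view column (programmatic) names to the labels the JSON should carry.
// A column without a mapping simply is not sent over the wire.
public class ViewConfig {
    private final String viewName;
    private final Map<String, String> columns = new LinkedHashMap<>();

    public ViewConfig(String viewName) {
        this.viewName = viewName;
    }

    public String getViewName() {
        return this.viewName;
    }

    public ViewConfig addColumn(String columnName, String label) {
        this.columns.put(columnName, label);
        return this;
    }

    // When no columns are configured, all columns pass through unchanged
    public boolean isEmpty() {
        return this.columns.isEmpty();
    }

    // Returns the label for a column, or null when the column is not wanted
    public String getColumnName(String columnName) {
        return this.columns.get(columnName);
    }
}
```

Returning null for unmapped columns is what lets the renderer silently drop anything the configuration did not ask for.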
Since last time some of the libraries have been updated, and I'm now running vert.x 3.0.0.Preview1 and the OpenNTF Domino API RC2. I unpacked the OpenNTF release, removed the Jar files and replaced them with Maven dependencies. This step isn't necessary, but I'm expanding my Maven knowledge, so it was good practice. The starter application looks quite simple:
package com.notessensei.vertx.notes;

import io.vertx.core.Handler;
import io.vertx.core.Vertx;
import io.vertx.core.http.HttpServer;
import io.vertx.core.http.HttpServerOptions;
import io.vertx.core.http.HttpServerRequest;
import java.io.IOException;
import org.openntf.domino.thread.DominoExecutor;

public class NotesClient {

    private static final int     listenport        = 8110;
    private static final int     dominothreadcount = 10;

    private final Vertx          vertx;
    private final HttpServer     hs;
    private final DominoExecutor de;

    /**
     * @param args
     * @throws IOException
     */
    public static void main(String[] args) throws IOException {
        final NotesClient nc = new NotesClient();
        nc.startListening();
        nc.runUntilKeyPresses("\n");
        nc.stopListening();
    }

    public NotesClient() throws IOException {
        this.vertx = Vertx.factory.vertx();
        final HttpServerOptions options = HttpServerOptions.options();
        this.hs = this.vertx.createHttpServer(options);
        this.de = new DominoExecutor(NotesClient.dominothreadcount);
    }

    public void runUntilKeyPresses(String keystring) throws IOException {
        int quit = 0;
        final int quitKey = keystring.charAt(0);

        while (quit != quitKey) { // Wait for a keypress
            System.out.print("Notes Client Verticle started, version ");
            System.out.print("Started to listen on port ");
            System.out.print("Press ");
            System.out.println("<Enter> to stop the Notes Client Verticle");
            quit = System.in.read();
        }

        System.out.println("\n\nNotes Client Verticle terminated!");
    }

    private void startListening() {
        final Handler<HttpServerRequest> h = new NotesRequestHandler(this.de);
        this.hs.requestHandler(h).listen(NotesClient.listenport);
    }

    private void stopListening() {
        this.hs.close();
        this.de.shutdown();
    }
}
The Notes request handler checks what is requested and renders the view into JSON using a "homegrown" JsonBuilder, which I designed similar to a SAX writer.
package com.notessensei.vertx.notes;

import java.util.Map;

import io.vertx.core.Handler;
import io.vertx.core.http.HttpServerRequest;
import io.vertx.core.http.HttpServerResponse;

import org.openntf.domino.Database;
import org.openntf.domino.Session;
import org.openntf.domino.View;
import org.openntf.domino.ViewEntry;
import org.openntf.domino.ViewNavigator;
import org.openntf.domino.thread.AbstractDominoRunnable;
import org.openntf.domino.thread.DominoExecutor;
import org.openntf.domino.thread.DominoSessionType;

public class NotesRequestHandler extends AbstractDominoRunnable implements Handler<HttpServerRequest> {

    private static final long           serialVersionUID = 1L;
    private transient HttpServerRequest req;
    private ViewConfig                  viewConfig       = null;
    private final DominoExecutor        de;

    public NotesRequestHandler(DominoExecutor de) {
        this.de = de;
        this.setSessionType(DominoSessionType.NATIVE);
    }

    public void run() {
        Session s = this.getSession();
        HttpServerResponse resp = this.req.response();
        this.renderInbox(s, resp);
    }

    public void handle(HttpServerRequest req) {
        HttpServerResponse resp = req.response();

        String path = req.path();

        String[] pathparts = path.split("/");
        // The request must have notes in the URL
        if (pathparts.length < 3 || !pathparts[1].equals("notes")) {
            this.sendEcho(req, resp);
        } else {
            this.req = req;
            // Parameter 3 is either view or inbox
            // if it is inbox, we pull in the inbox
            if (pathparts[2].equals("inbox")) {
                this.viewConfig = AppConfig.INSTANCE.getViewConfig("($Inbox)");
                this.de.execute(this);
                // if it is view we pull the respective view
            } else if (pathparts.length > 3 && pathparts[2].equals("view")) {
                this.viewConfig = AppConfig.INSTANCE.getViewConfig(pathparts[3]);
                this.de.execute(this);
            } /* more here */ else {
                // Nothing valid, so we send an echo only
                this.sendEcho(req, resp);
            }
        }
    }

    private void renderInbox(Session s, HttpServerResponse resp) {
        resp.headers().set("Content-Type", "application/json; charset=UTF-8");
        Database mail = s.getMailDatabase();
        resp.end(this.renderView(mail, this.viewConfig));
    }

    private void sendEcho(HttpServerRequest req, HttpServerResponse resp) {
        StringBuilder txt = new StringBuilder();
        resp.headers().set("Content-Type", "text/html; charset=UTF-8");
        txt.append("<html><body><h1>Notes request handler</h1>");
        txt.append("<p>").append(req.uri()).append("</p></body></html>");
        System.out.println("Got request: " + req.uri());
        resp.end(txt.toString());
    }

    public boolean shouldStop() {
        // TODO Auto-generated method stub
        return false;
    }

    private String renderView(Database db, ViewConfig vc) {
        JsonBuilder b = new JsonBuilder();
        View view = db.getView(vc.getViewName());
        ViewNavigator vn = view.createViewNav();

        // start/end calls follow the SAX-writer style of the JsonBuilder
        b.startObject();
        b.addValue("count", vn.getCount());
        b.addValue("name", vc.getViewName());
        b.startArray("entries");

        for (ViewEntry ve : vn) {
            b.startObject();
            b.addValue("position", ve.getPosition());
            b.addValue("isRead", ve.getRead());
            Map<String, Object> entries = ve.getColumnValuesMap();
            for (Map.Entry<String, Object> entry : entries.entrySet()) {
                String key = vc.isEmpty() ? entry.getKey() : vc.getColumnName(entry.getKey());
                if (key != null) {
                    b.addValue(key, entry.getValue());
                }
            }
            b.endObject();
        }
        b.endArray();
        b.endObject();
        return b.toString();
    }
}
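The "homegrown" JsonBuilder itself is not listed in this article. A minimal sketch of what such a SAX-writer style builder could look like follows; the method names and behavior are assumptions, and a production version would also need to escape quotes and control characters in string values:

```java
// A SAX-writer style JSON builder: you push events (start/end/value)
// and it appends the matching JSON syntax to an internal buffer
public class JsonBuilder {
    private final StringBuilder buffer = new StringBuilder();
    private boolean needsComma = false;

    // Insert a comma before the next element when one is required
    private void delimit() {
        if (this.needsComma) {
            this.buffer.append(",");
        }
    }

    public JsonBuilder startObject() {
        this.delimit();
        this.buffer.append("{");
        this.needsComma = false;
        return this;
    }

    public JsonBuilder endObject() {
        this.buffer.append("}");
        this.needsComma = true;
        return this;
    }

    public JsonBuilder startArray(String name) {
        this.delimit();
        this.buffer.append("\"").append(name).append("\":[");
        this.needsComma = false;
        return this;
    }

    public JsonBuilder endArray() {
        this.buffer.append("]");
        this.needsComma = true;
        return this;
    }

    public JsonBuilder addValue(String name, Object value) {
        this.delimit();
        this.buffer.append("\"").append(name).append("\":");
        if (value instanceof Number || value instanceof Boolean) {
            this.buffer.append(value); // numbers and booleans are unquoted
        } else {
            this.buffer.append("\"").append(value).append("\"");
        }
        this.needsComma = true;
        return this;
    }

    @Override
    public String toString() {
        return this.buffer.toString();
    }
}
```

The appeal of this style: the renderer never holds the whole document tree in memory, it just streams events into the buffer in the order the view navigator delivers entries.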

Read more

Posted by on 2014-09-25 11:17 | Comments (0) | categories: Lotus Notes vert.x

Keeping up with all the GIT

Unless you are stuck in the last century, you might have noticed that the gold standard for version control is Git. Atlassian likes it, IBM DevOps supports it and of course the Linux kernel is built with it.
The prime destination for open source projects is GitHub, with BitBucket coming in strong too. Getting the code of a project you work with (and I bet you do - jQuery anyone?) is just a git clone away. Of course that opens the challenge of keeping up with all the changes and updates. For the projects you actively work on, branch, pull and push are daily work, using the command line or a nice UI. For those "keep an eye on" projects this gets tedious quite fast.
I'm using a little script (you could even run it from cron or attach it to a successful network connection) to keep all my "read-only" repositories up to date. You need to change the basedir= lines to match your paths. Enjoy.
#!/bin/bash
# Helper script to keep all the things I pulled from GITHUB updated
# most of them are in ~/github, but some are somewhere else

# Pulls a repository from GIT origin or Mercurial
syncrep() {
	echo "Processing $1 ..."
	cd "$1"
	isHG=`find -maxdepth 1 -type d -name ".hg"`
	if [ -n "$isHG" ]; then
		echo "$1 is a Mercurial directory"
		hg pull &
	else
		git pull origin master &
	fi
}

# Part 1: all in ~/github
basedir=~/github
notify-send -t 20000 -u low -i gtk-dialog-info "Starting GIT threaded update"
for f in $basedir/*; do
	syncrep "$f"
done

# Part 2: all in ~/company
basedir=~/company
notify-send -t 20000 -u low -i gtk-dialog-info "Starting COMPANY threaded update"
for f in $basedir/*; do
	syncrep "$f"
done

cd ~
notify-send -t 20000 -u low -i gtk-dialog-info "All GIT pull requests are on their way!"

# Wait for the result: pgrep returns 0 while git processes are still running
stillrunning=0
while [ "$stillrunning" -eq "0" ]; do
	sleep 60
	pgrep git- > /dev/null
	stillrunning=$?
done
notify-send -t 20000 -u low -i gtk-dialog-info "GIT pull updates completed"

A little caveat: when you actively work on a repository, you might not want the combination of origin and master, so be aware: as usual YMMV.

Posted by on 2014-09-25 05:17 | Comments (0) | categories: Software

Collaboration in context

Harry, a storm is coming, at least if you follow Cary Youman. Nothing less than the way we collaborate will, again, be a focus for IBM. The need has not yet found a definitive solution. The attempt to reinvent eMail is starving in the incubator. Great minds try to reinvent the conversation (and it looks suspiciously like Wave). So what is so tricky about collaboration?
In short, it is context, the famous five Ws. In our hyperconnected world, context can get big rather fast:
Collaboration In Context
An eMail system usually provides limited context: From, When, Subject. Using tools and advanced analytics, modern systems try to spice up that context. Others shoot the messenger without addressing the next level of problems: Flood vs. Scatter.

Read more

Posted by on 2014-09-24 10:14 | Comments (0) | categories: Software

Creating nginx configurations for Domino SSL

Websites need to be secure, and the SHA-1 cipher suites are coming to an end. Despite best efforts, Domino is stuck with these outdated ciphers. While you can, on Windows, hide Domino behind IHS, I find nginx easier to tame.
Jesse explains how to configure nginx as the Domino proxy. So all is good, especially since he also covered high availability.
But when you have a lot of sites, that's a lot of typing (and copy & paste from the Internet site documents). Mustache to the rescue! I've written about Mustache before and it suits the task quite nicely:
  1. Create one well working sample configuration
  2. Replace the site specific values with {{mustache}} variables
  3. Run it against all Internet site documents
The code I used (see below) generates just 4 variables:
  1. {{name}} The name of the site according to the configuration document. I use it here to generate the file name
  2. {{siteName}} The first web name; it will become the listen parameter
  3. {{allNames}} All web names; they will be listed as server_name
  4. {{settings}} All field values of the Internet site document as concatenated strings. Using dot notation they can be used directly, e.g. {{settings.SSLKeyFile}}. Using this approach you can do whatever is needed to generate your desired output
This is the initial template, based on Jesse's article:
server {
        listen {{siteName}}:443;
        server_name {{#allNames}} {{.}}{{/allNames}};
        client_max_body_size 100m;
        ssl on;
        # Original keyfile: {{settings.SSLKeyFile}}
        ssl_certificate      /etc/nginx/ssl/{{name}}.pem;
        ssl_certificate_key /etc/nginx/ssl/{{name}}.key;
        location / {
                proxy_read_timeout 240;
                proxy_pass http://localhost:8088;
                proxy_redirect off;
                proxy_buffering off;
                proxy_set_header        Host               $host;
                proxy_set_header        X-Forwarded-For    $proxy_add_x_forwarded_for;
                proxy_set_header        $WSRA              $remote_addr;
                proxy_set_header        $WSRH              $remote_addr;
                proxy_set_header        $WSSN              $host;
                proxy_set_header        $WSIS              True;
        }
}

The Java code takes the file name of that template as a parameter, so if you would rather use Apache or need a different output (e.g. a report), you are free to supply a different file here.
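For illustration only, here is the core idea of such a template expansion reduced to plain Java. This is not the actual code (which uses a real Mustache library and reads the Internet site documents); the hypothetical SimpleTemplate class below handles only plain {{variable}} substitution, not sections like {{#allNames}}:

```java
import java.util.Map;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class SimpleTemplate {
    // Matches {{someName}} and captures the name between the braces
    private static final Pattern VARIABLE = Pattern.compile("\\{\\{([^}]+)\\}\\}");

    // Replace every {{name}} in the template with its value; unknown names become empty
    public static String render(String template, Map<String, String> values) {
        Matcher m = VARIABLE.matcher(template);
        StringBuffer result = new StringBuffer();
        while (m.find()) {
            String replacement = values.getOrDefault(m.group(1), "");
            m.appendReplacement(result, Matcher.quoteReplacement(replacement));
        }
        m.appendTail(result);
        return result.toString();
    }
}
```

A real Mustache implementation adds sections, inverted sections and dotted lookups like {{settings.SSLKeyFile}}, which is exactly why the actual code delegates to a library instead of regex.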

Read more

Posted by on 2014-09-21 01:26 | Comments (1) | categories: IBM Notes nginx

Tracking down slow internet on SingTel Fibre to the home

SingTel makes big claims about the beauty of their fibre offering. I do not experience the claimed benefits, so I'm starting to track down what is happening. Interestingly, when you visit SpeedTest, it shows fantastic results. I smell a rat.
So I ran a test with Pocketinet in Walla Walla, WA. SpeedTest claims a 5ms ping response, but when I, immediately before or after such a test, issue a ping -c5 www.pocketinet.com, I get results rather in the range of 200-230ms.
Shame on him who thinks evil of it! (as the German saying goes)
While this evidence isn't strong enough to accuse someone of tampering, it points to the need to investigate why the results are so different (IDA, are you listening?). So I started looking a little deeper. Using traceroute with the -I parameter (which uses the same packets as ping) I checked a bunch of websites. Here are the results (I stripped out the boring parts):
traceroute -I www.pocketinet.com
  9  3.913ms  3.919ms  4.033ms 
 10  204.256ms  170.493ms  171.314ms

traceroute -I www.economist.com
  9  4.316ms  4.882ms  4.680ms 
 10  193.164ms  188.148ms  196.526ms

traceroute -I www.cnn.com
  9  4.772ms  4.679ms  5.160ms 
 10  171.006ms  187.336ms  171.447ms 

traceroute -I www.ibm.com
 9  4.385ms  5.857ms  3.853ms 
 10  178.135ms  183.842ms  181.097ms

Something is rotten in the state of international connectivity (sorry Wil, it isn't Denmark this time)!
Only Google bucks that pattern. But that's due to the fact that their DNS sends me to a server in Singapore. So the huge jump in latency happens in the 203.208.182.* and 203.208.151.* subnets.
whois tells me:
% [whois.apnic.net]
% Whois data copyright terms    http://www.apnic.net/db/dbcopyright.html

% Information related to ' -'

inetnum: -
netname:        SINGTEL-IX-AP
descr:          Singapore Telecommunications Pte Ltd

So, the servers might be in a SingTel overseas location? InfoSniper sees them in Singapore (you can try others with the same result). Now I wonder: is the equipment undersized, wrongly configured, or is something else happening that takes time on that machine?
Looking for an explanation.

Update 19 Sep 2014: Today the speedtest.net site shows ping results similar to traceroute/ping from the command line. Unfortunately, that is not because the command line results got better, but because the speedtest.net results got worse, now ranging around 200ms. I wonder what happened. Checking the effective up/down speed will be a little more tricky. Stay tuned.

Posted by on 2014-09-17 08:29 | Comments (1) | categories: Buying Broadband

Foundation of Software Development

When you learn cooking, there are a few basic skills that need to be in place before you can get started: cutting, measuring, stirring and an understanding of temperature's impact on food items. These skills are independent of what you want to cook: western, Chinese, Indian, Korean or space food.
The same applies to software development. Interestingly, we try to delegate these skills to UI designers, architects, project managers, analysts or infrastructure owners. To be a good developer, you don't need to excel in all of those skills, but you should at least develop a sound understanding of the problem domain. There are a few resources that I consider absolutely essential. All these resources are pretty independent of what language, mechanism or platform you actually use, so they provide value to anyone in the field of software development.
As usual YMMV

Posted by on 2014-09-12 01:44 | Comments (1) | categories: Software

Flow, Rules, Complexity and Simplicity in Workflow

When I make the claim "Most workflows are simple", in return I'm hit with bewildered looks and the assertion: "No, ours are quite complex". My little provocation is quite deliberate, since it serves as an opening gambit to discuss the relation between flow, rules and lookups. All workflows begin rather simply. I'll take a travel approval workflow as a sample (any resemblance to workflows of existing companies would be pure coincidence).
The workflow as described by the business owner
The explanation is simple: "You request approval from management, finance and compliance. When all say OK, you are good to go". Translating that into a flow diagram is quite easy:
With a simple straight line, one might wonder what the fuss about workflow systems and all the big diagrams is about. When looking closer, each box "hides" quite some complexity. Depending on different factors, some of the approvals are automatic. "When we travel to a customer to sign a sizable deal and we are inside our budget allocation, we don't need to ask finance". So the workflow actually looks a little more complex:
Different levels need different approvals
The above diagram shows the flow in greater detail, with a series of decision points, both inside a functional unit (management, finance and compliance) as well as between them. The questions still stay quite high level: "required Y/N", "1st line manager approves". Since the drawing tools make it easy, and it looks impressive, the temptation to keep drawing lures strongly. You could easily model the quest for the right manager inside a workflow designer. The result might look a little like this:
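As an aside, the rule quoted earlier, finance approval being waived for an in-budget customer deal, shows how complexity can live in a rule while the flow stays a straight line. A sketch in plain Java (all class and field names here are made up for illustration):

```java
public class TravelApproval {

    // The request data a rule can look at
    public static class TravelRequest {
        public final boolean customerDeal;
        public final boolean insideBudget;

        public TravelRequest(boolean customerDeal, boolean insideBudget) {
            this.customerDeal = customerDeal;
            this.insideBudget = insideBudget;
        }
    }

    // The flow stays a straight line: management -> finance -> compliance.
    // The complexity lives in rules like this one, not in extra boxes and arrows.
    public static boolean financeApprovalRequired(TravelRequest request) {
        // "When we travel to a customer to sign a sizable deal and we are
        // inside our budget allocation, we don't need to ask finance"
        return !(request.customerDeal && request.insideBudget);
    }
}
```

Keeping such decisions in rules rather than in the diagram is what keeps the flow readable for the business owner.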

Read more

Posted by on 2014-09-03 05:55 | Comments (1) | categories: Workflow