Usability - Productivity - Business - The web - Singapore & Twins

Structuring a Proof of Concept

A common practice in IT, as a run-up to a sale or a project, is to prove that the intention of the undertaking can be fulfilled.

The challenge

A PoC needs to strike a balance between effort and coverage. The final proof of a project is its completion, so the temptation lures to try to prove everything. On the flip side: if the core functionalities aren't covered, the proof has little value.

The second challenge is to define concise success criteria. Quite often, especially for standard product PoCs, it is left at 'how users like it' - which isn't a really quantifiable result.

Use cases

A workable approach is to define use cases that cover a typical scenario, like 'Sale of an ice cream'. This scenario needs to be broken down into business steps until a step can be looked at as: 'did work / did not work'.
The breakdown needs to be on business level, in business language. So 'Can click on customer info' should rather read 'Customer info is retrievable'.

Use cases and steps are hierarchical; typically 2-3 levels are sufficient for most PoCs. Deeper levels are a smell that you are looking at a pilot or a full fledged project, not a PoC.

So, in a nutshell: a PoC line item needs to have a binary answer. If a binary answer isn't possible, break the line item into smaller units. Stick to the domain specific language (usually: the business steps).


When a use case line item has a binary outcome (works / doesn't work), the simplest measure is to check whether everything worked to declare the PoC a success. That usually doesn't help.

The next level is to define a pass percentage, like: 70% of 200 line items must pass. Again a simple solution. The challenge there: nice-to-have and essential features have equal weight. You could end up with an outcome that has all nice-to-have features, but misses essentials.

So the next level is to define weights for each item, including a showstopper flag for must-have features. Weighting discussions are popular battle grounds for feuding factions, since the weights determine outcomes, especially for concurrent PoC execution.

Another weakness of this approach: works/doesn't work as a binary value doesn't cover 'Does it work well?'. Like 'Is a pair of sneakers suitable to get from Boston to New York?' The binary answer: yes, you can walk. The real answer: use a car, train, bus or plane.

Balanced Scorecard to the rescue

Looking at the definition of Usability, one can find 3 criteria:

  • Does it work?
  • Is it efficient?
  • Is the user pleased?

I would treat the first criterion as a binary value and the latter two as scales from 1-5. This allows you to generate a balanced scorecard that reflects the important aspects of a proof. Depending on the nature of the system, you could add additional columns like 'failure resistance, error recovery, risk'.
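Putting the pieces together - weights, a showstopper flag, the binary gate and the two 1-5 scales - a scoring routine could be sketched like this (all field names are illustrative, not a prescribed format):

```javascript
// Score PoC line items: a binary "works" gate with a showstopper flag,
// plus 1-5 scales for efficiency and user satisfaction.
function scorePoC(lineItems) {
  let total = 0;
  let max = 0;
  for (const item of lineItems) {
    if (!item.works && item.showstopper) {
      return { passed: false, reason: 'Showstopper failed: ' + item.name };
    }
    // "works" counts as 5 points, the two scales contribute their 1-5 value
    const score = (item.works ? 5 : 0) + item.efficiency + item.pleased;
    total += item.weight * score;
    max += item.weight * 15;
  }
  return { passed: true, percent: Math.round((total / max) * 100) };
}
```

A failed showstopper short-circuits the whole proof; everything else rolls up into a weighted percentage you can compare across concurrent PoC runs.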

While it doesn't relieve you from the weight bickering, it provides a clearer picture of actual outcomes.

As usual: YMMV

Posted by on 04 October 2018 | Comments (0) | categories: Salesforce Software

Adding Labels to Lightning Datatable

In Part 1 I described a way to make any SOQL result fit for use in a Datatable. The next step is to provide column labels. While it would be easy to just hardcode them, Abhishek suggested to use the original field names. The beauty of that approach: The admin can adjust and/or translate field labels without touching the code.

Going Meta

Apex has a rich API that allows querying an object's properties in the Schema namespace. Getting the information for a collection of objects can be done using Schema.describeSObjects(sObjectTypes) or Schema.getGlobalDescribe() (see details here).

The interesting challenge is to find the object names of the relationships; the default field list will only tell you the name you gave it, but not the object you relate to. So some more code is required (see below).

Relationship fields can be identified by the part before the "." either ending in __r or being an Id field. So we break a query apart and extract a List&lt;String&gt; for the field names and a String for the start object's name. Our scenario doesn't cater to subqueries. These two parameters get fed into a utility function.
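For illustration, breaking a query apart could look like this sketch (JavaScript for brevity; a hypothetical helper, not the post's actual code, and it only handles the plain "SELECT ... FROM Object" shape):

```javascript
// Pull the field list and the starting object out of a simple SOQL query.
// No subqueries, no WHERE-clause awareness - just the two parameters
// the utility function needs.
function splitQuery(soql) {
  const match = soql.match(/select\s+(.+?)\s+from\s+(\w+)/i);
  const fieldList = match[1].split(',').map((f) => f.trim());
  return { fieldList, objName: match[2] };
}
```

So `splitQuery('SELECT Id, Stuff__r.Name FROM Address__c')` yields the field names `['Id', 'Stuff__r.Name']` and the object name `Address__c`.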

public without sharing class AuraLabelHelper {
    private static Map&lt;String, Schema.SObjectType&gt; globalDescribe = Schema.getGlobalDescribe();

    public static Map&lt;String, String&gt; retrieveFieldLablesFromFieldList(List&lt;String&gt; fieldList, String objName) {
        Map&lt;String, String&gt; result = new Map&lt;String, String&gt;();
        Map&lt;String, Schema.SObjectField&gt; fieldDefinitions = internalFieldLablesFromFieldList(fieldList, objName, '');
        for (String key : fieldDefinitions.keySet()) {
            DescribeFieldResult dfr = fieldDefinitions.get(key).getDescribe();
            result.put(key, dfr.getLabel());
        }
        return result;
    }

    private static Map&lt;String, Schema.SObjectField&gt; internalFieldLablesFromFieldList(List&lt;String&gt; fieldList, String objName, String prefix) {
        Map&lt;String, Schema.SObjectField&gt; result = new Map&lt;String, Schema.SObjectField&gt;();
        Schema.SObjectType objectType = AuraLabelHelper.globalDescribe.get(objName);
        Schema.DescribeSObjectResult describeResult = objectType.getDescribe();
        // Labels for the top level object - keys need to be lowercased
        Map&lt;String, Schema.SObjectField&gt; fieldMap = describeResult.fields.getMap();
        Map&lt;String, Schema.SObjectField&gt; fieldMapLower = new Map&lt;String, Schema.SObjectField&gt;();
        for (String key : fieldMap.keySet()) {
            fieldMapLower.put(key.toLowerCase(), fieldMap.get(key));
        }
        for (String fieldName : fieldList) {
            String fieldNameLower = fieldName.toLowerCase();
            if (fieldMapLower.containsKey(fieldNameLower)) {
                result.put(prefix + fieldNameLower, fieldMapLower.get(fieldNameLower));
            } else if (fieldNameLower.contains('__r.')) {
                // We have a potential relationship field at hand
                String relationFieldName = fieldNameLower.left(fieldNameLower.indexOf('__r.')) + '__c';
                Schema.DescribeFieldResult relationDescribe = fieldMapLower.get(relationFieldName).getDescribe();
                Schema.SObjectType reference = relationDescribe.getReferenceTo().get(0);
                String objApiName = reference.getDescribe().getName();
                List&lt;String&gt; subFieldList = new List&lt;String&gt;();
                String newPrefix = prefix + fieldNameLower.left(fieldNameLower.indexOf('.')) + '_';
                subFieldList.add(fieldNameLower.substring(fieldNameLower.indexOf('__r.') + 4));
                result.putAll(internalFieldLablesFromFieldList(subFieldList, objApiName, newPrefix));
            }
        }
        return result;
    }
}

Good boys and girls create a test class:

@isTest
public class AuraLabelHelperTest {

    @isTest
    public static void simpleAccountTest() {
        List&lt;String&gt; fieldNames = new List&lt;String&gt;();
        fieldNames.add('Name'); // Name exists on (almost) every object

        Map&lt;String, String&gt; result = AuraLabelHelper.retrieveFieldLablesFromFieldList(fieldNames, 'Address__c');

        System.assertEquals(1, result.size());
        System.assert(result.containsKey('name'));
    }
}


Next stop is putting it all together. As usual YMMV.

Posted by on 31 August 2018 | Comments (0) | categories: JavaScript Lightning Salesforce

Lightning Datatable and Relationship Queries

The Lightning Datatable is a flexible component to show data in a sortable, actionable table. Formatting is automatically provided by the Lightning Design System. Data gets provided as a JSON array.

The challenge

A prime use case for a data table is to show results returned via @AuraEnabled from a SOQL query. Ideally relationship identity fields should turn into links and data from relationship queries (something like MyCustomObj__r.Color) should be usable in the table as well.

The tricky part: relationship fields are returned (IMHO properly) as JSON objects. Datatable can't deal with object values in its columns. A returned value might look like this (deliberately using a generic example):

{
  Id: 'payloadid',
  Color__c: 'Blue',
  Stuff__r: {
    Id: 'ToyId',
    Name: 'Teddy',
    Price__c: 34.5,
    Shape__r: {
      Size__c: 'XL',
      Geometry__c: 'round'
    }
  },
  Dance__c: 'Tango'
}
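The flattening idea from Part 1 can be sketched like this (a hypothetical helper, not the actual Part 1 code): nested relationship objects get folded into underscore-joined keys the Datatable columns can bind to.

```javascript
// Flatten nested relationship objects into underscore-joined keys,
// e.g. { Stuff__r: { Name: 'Teddy' } } becomes { Stuff__r_Name: 'Teddy' }
function flattenRecord(record, prefix = '') {
  const flat = {};
  Object.keys(record).forEach((key) => {
    const value = record[key];
    if (value !== null && typeof value === 'object') {
      // Recurse into the relationship object, extending the prefix
      Object.assign(flat, flattenRecord(value, prefix + key + '_'));
    } else {
      flat[prefix + key] = value;
    }
  });
  return flat;
}
```

Run against the example above, the record gains keys like `Stuff__r_Name` and `Stuff__r_Shape__r_Size__c`, which a Datatable column definition can reference directly.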

Read more

Posted by on 29 August 2018 | Comments (0) | categories: JavaScript Lightning Salesforce

Designing Lightning Components for Reuse

This is a living document about a common sense approach to developing reusable Lightning components. It might change over time.

Salesforce documentation

Refer to the official Salesforce documentation as well as the instance specific component library.


  • Components shall serve a single purpose, designed for reusability
  • Components shall use the most feasible, least-code approach
  • Components shall not contain country specific logic in the front-end
  • Components shall be documented and tested
  • Components shall use composition over inheritance. Inheritance is NOT forbidden, use it wisely
  • Components shall observe case sensitivity even for non-case-sensitive items (e.g. field names)
  • Components shall prefer component markup over html markup (e.g. lightning:card over div class="slds-...")
  • Components shall use component navigation (if navigation is needed)


Related files and components need to be named so they appear close to each other. E.g. a component "VehicleInfoList" that depends on inner components: those would also start with "VehicleInfo", e.g. "VehicleInfoCard", "VehicleInfoLineItem", "VehicleInfoInterested" etc.
Files should be named like this:

  • SalesProcess.cmp
  • SalesProcessController.js
  • SalesProcessHelper.js
  • SalesProcess[WhatEvent].evt
  • SalesProcess.SVG


  • A component shall only implement the interfaces that it actually uses. Avoid interfaces the component "might use in the future".
  • A component that relies on a current record shall not use "availableForAllPageTypes" and must implement "force:hasRecordId" and the attribute "recordId".
  • Components that are not used on a page layout, but rather inside other components, shall not implement interfaces ("availableFor...") that make them appear in the page editor.

Data access

Components shall use the "least code" principle for data access. To be checked in this sequence:

  1. Does the component need data access or can the attributes of it provide all the input it requires?
  2. Can lightning:recordForm be used?
  3. Can lightning:recordEditForm and lightning:recordViewForm be used?
  4. Can force:recordData be used?
  5. Is a custom @AuraEnabled method in the controller needed for data provision?

This doesn't preclude fetching metadata or configuration. Ensure to use storable actions where feasible. More principles:

  • Use data change handlers where appropriate
  • Use component events

Code principles

This section probably will expand over time

  • Code needs to be readable
  • The controllers (both client and server side) shall be light modules that delegate the actual work to helper classes and helper functions
  • In Apex, helper classes shall be instantiated using factory classes - this allows introducing country specific behavior
  • All Apex helper classes shall be based on interfaces
  • Methods and functions shall be single purpose and not exceed a page in size. Break them down (makes them more testable anyway) if too big
  • Don't copy/paste
  • Run PMD (free download) on all Apex (eventually on JavaScript too)
  • Operations that can fail need to be handled with try/catch or its equivalent
  • Use @ApexDoc and @JSDoc style comments


  • All components need test code: both for Apex (natural) and the client side component.
  • A component is incomplete without a "Lightning Testing Service" test.
  • Use assertions generously!


  • Lightning components have a description
  • Each lightning component comes with a documentation section - don't waste time documenting them outside Salesforce.
  • Use the documentation to briefly explain what the component does (no Pulitzer prize for this writing!).
  • Include at least one example in the documentation


  • Components that can be dragged onto a page can benefit from having parameters the page maintainer can use to configure the component, thus increasing reusability and limiting the number of components that need to show up in the palette.
  • Parameter documentation - Check the documentation for details.
  • If a component is usable only for a specific object page, add that to the Design Resource.

As usual YMMV

Posted by on 26 July 2018 | Comments (1) | categories: Lightning Salesforce

Postman and the Salesforce REST API

The Salesforce API is a great way to access Salesforce data and can be used with tools like SoqlXplore or the Salesforce Workbench. The API uses OAuth and Bearer authentication, so some steps are required to make that work in Postman.

Prepare Salesforce

You will need a Connected App. I usually create one that is pre-approved for my user profile(s), so I don't need to bother with the approval steps in Postman. However you could opt for self-approval and access the app once to approve its use before you continue with the command line. Note down the ClientId and ClientSecret values.

Prepare Postman

Postman has great built-in support for all sorts of interactive authorization. However my goal here is to fully automate it, so you can run a test suite without manual intervention. First stop is the creation of an environment. You can have multiple environments to cater to different Salesforce instances.

Important: Never ever ever store the environment in version control. It would contain credentials - a bad, bad idea!

My environment variables look like this:

{
    "CLIENT_ID": "the ClientId from Salesforce",
    "CLIENT_SECRET": "The ClientSecret from Salesforce",
    "USER_ID": "some@email.com",
    "PASSWORD": "DontTell",
    "LOGIN_URL": "https://login.salesforce.com/"
}

Providing the login URL allows you to reuse Postman collections between sandboxes, developer orgs or production orgs without the need to actually edit the Postman entries. Next on the menu: getting a token.
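For illustration, the token request Postman will assemble from these environment variables can be sketched in Node (username-password flow; the endpoint path is Salesforce's standard token endpoint, the env object stands in for the Postman environment):

```javascript
// Build the token request for the username-password OAuth flow.
// Field names follow Salesforce's /services/oauth2/token endpoint.
function buildTokenRequest(env) {
  const body = new URLSearchParams({
    grant_type: 'password',
    client_id: env.CLIENT_ID,
    client_secret: env.CLIENT_SECRET,
    username: env.USER_ID,
    password: env.PASSWORD
  });
  return {
    url: env.LOGIN_URL + 'services/oauth2/token',
    method: 'POST',
    headers: { 'Content-Type': 'application/x-www-form-urlencoded' },
    body: body.toString()
  };
}
```

In Postman the same assembly lives in a pre-request script; the JSON answer contains the access_token you then feed into the Bearer header of subsequent calls.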

Read more

Posted by on 06 July 2018 | Comments (2) | categories: Salesforce Software WebDevelopment

Mime is where Legacy Systems go to die

Your new system went live. Migration of current, active data went well. A decision was made not to move historic data and keep the old system around in "read-only" mode, just in case some information needs to be looked up. Over time your zoo of legacy systems grows. I'll outline a way to put them to rest.

The challenges

In all recent systems (that is, anything younger than 30 years) data is stored more or less normalized. A business document, like a contract, is split over multiple tables like customer, address, header, line items, item details, product etc.

Dumping this data as is (csv rules supreme here) only creates a data graveyard instead of the much coveted data lake or data warehouse.

The issue gets aggravated by the prevalence of magic numbers and abbreviations that are only resolved inside the legacy system. So looking at one piece of data tells you squat. Only an old hand would be able to make sense of Status 82 or Flags x7D3z.

Access to meaningful information is confined to the user interface of the legacy application. It provides search and assembly of business-relevant context.

The solution approach

Solving this puzzle requires a three step approach:

  • denormalize
  • transform
  • make accessible
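The denormalize step can be sketched like this (all table layouts, field names and the Status 82 decoding below are made up for illustration): rows from the normalized tables get folded into one self-describing document, resolving the magic numbers on the way.

```javascript
// Made-up decode table: the kind of tribal knowledge that otherwise
// only lives inside the legacy application
const statusCodes = { 82: 'Cancelled by customer' };

// Fold a contract split across header, customer and line-item tables
// into one denormalized document
function denormalize(header, customer, lineItems) {
  return {
    contractNo: header.contractNo,
    status: statusCodes[header.status] || 'Unknown (' + header.status + ')',
    customer: { name: customer.name, city: customer.city },
    items: lineItems.map((li) => ({ product: li.product, qty: li.qty }))
  };
}
```

The resulting documents are what the transform and make-accessible steps work on - no legacy UI needed to understand them.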

Read more

Posted by on 22 June 2018 | Comments (1) | categories: Software Technology

Adventures in TDD

There are two challenges getting into TDD:

  • Why should I test upfront when I know it fails (there's this massive aversion to failure in my part of the world)?
  • Setting up the whole thing.

I made peace with the first requirement using a very large monitor and a split screen, writing code and test in parallel, deviating from the 'pure teachings' for the comfort of my workflow.

The second part is trickier. There are so many moving parts. This post documents some of the insights.

Testing in the IDE

TDD has the idea that you create your test first and only write code until your test passes. Then you write another failing test and start over writing code.

As a consequence you need to test in your IDE. For JavaScript or Java that's easy (the languages I use most):

  • In JavaScript you define a test script in your package.json you can run any time. For connoisseurs there are tools like WallabyJS or VSCode Mocha Sidebar that run your tests as you type and/or save. The tricky part is: what testing libraries (more on that below) to use?
  • In Java, Maven has a default goal validate and JUnit is the gold standard for tests. For automated continuous IDE testing there is Infinitest
  • For Salesforce you have a combination of JavaScript and Apex (and clicks-not-code), so testing is a little trickier. The commercial IDEs TheWelkinSuite and Illuminated Cloud make that a lot easier. How easy is in the eye of the beholder. (Honorable mention: JetForcer - I simply haven't tested that one yet)

Testing in your Continuous Integration

Automated testing after a commit to GitHub, GitLab or BitBucket happens once you configure a pipeline as a hook into the repository and have tests specified that the pipeline can pick up. Luckily your Maven and npm scripts will most likely work as a starting point.

The bigger challenge is the orchestration of various services like static testing, dependency management and reporting (and good luck if your infra guys claim they can set up and run everything in-house).

Some of the selections available:

Read more

Posted by on 10 June 2018 | Comments (4) | categories: JavaScript Salesforce TDD

What really happens in OAuth

OAuth in its various versions is the gold standard for authorization (and, using OpenID Connect, for authentication as well). There are plenty of introductions around explaining OAuth. My favorite HTTP tool Postman makes it really simple to obtain access via OAuth.

Nevertheless all those explanations are quite high level, so I wondered what happens on the wire for the getToken part, and I started digging. This is what I found. Nota bene: there is no inherent security in OAuth if you don't use https.

The components

  • Authorization server: server to interact with to get an authorization
  • Client identifier (ClientID): the "userid" of the application
  • Client Secret: the "password" of the application
  • A user

I'm not looking at the Resource Server here - it only comes into play before or after the actual token process.

The Form-Post Flow

There are several flows available to pick from. I'm looking at the Form-Post flow where user credentials are passed to the authentication server to obtain access and refresh tokens.

For this flow we need to post an HTTP form to the authorization server. The post has 2 parts: header and body. A request looks like this:

POST /yourOAuthEndPoint HTTP/1.1
Host: authserver.acme.com
Accept-Encoding: gzip, deflate
Accept: *.*
Authorization: Basic Y2xpZW50aWQ6Y2xpZW50c2VjcmV0
Content-Type: application/x-www-form-urlencoded
Cache-Control: no-cache

grant_type=password&client_id=clientid&username=john%40doe.com
&password=secret&scope=api+openid

Some remarks:
- The Authorization header is just a Base64 version of clientid:clientsecret - you have to replace it with your actual info
- Content-Type must be application/x-www-form-urlencoded
- The body is just one line with no spaces, I split it only for readability
- scope is an encoded list; the + signs are actually spaces. Keeping that in mind, you want to keep the server side scope names simple
- You need to repeat the clientid as a body value
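To see that the Authorization header really is just the encoded credential pair, a quick Node one-liner using the placeholder values from the example above:

```javascript
// The Basic Authorization header is clientid:clientsecret, Base64 encoded.
// Replace both values with your actual credentials.
const clientId = 'clientid';
const clientSecret = 'clientsecret';
const basic = Buffer.from(clientId + ':' + clientSecret).toString('base64');
console.log('Authorization: Basic ' + basic);
// -> Authorization: Basic Y2xpZW50aWQ6Y2xpZW50c2VjcmV0
```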

As a result you get back a JSON structure with authorization information. It can look like this:

{
    "access_token": "wildStringForAccess",
    "refresh_token": "wildStringForRefreshingAccess",
    "token_type": "Bearer",
    "expires_in": 300
}

The result is easy to understand:
- expires_in: Duration for the access token in seconds
- token_type: Bearer denotes that you call your resource server with a header value of Authorization: Bearer wildStringForAccess

As usual YMMV

Posted by on 04 June 2018 | Comments (0) | categories: Software WebDevelopment

Reuse a 3rd Party Json Web Token (JWT) for Salesforce authentication

The scenario

You run an app, could be a mobile native, a SPA, a PWA or just an application with JavaScript logic, in your domain that needs to incorporate data from your Salesforce instance or one of your Salesforce communities.

Users have authenticated with your website and the app is using a JWT Bearer Token to establish identity. You don't want to bother users with an additional authentication.

What you need

Salesforce has very specific requirements for how a JWT must be formed to qualify for authentication. For example, the token can be valid for only 5 minutes. It is very unlikely that your token matches these requirements.

Therefore you will need to extract the user identity from the existing token, while checking that it isn't spoofed, and create a new token that you present to Salesforce to obtain the session token. So you need:

  1. The key that can be used to verify the existing token. This could be a simple String, used for symmetrical signature or an X509 Public Key
  2. A private key for Salesforce to sign a new JWT (See below)
  3. A configured Connected App in Salesforce where you upload the full certificate and obtain the Consumer Key
  4. Some place to run the code, like Heroku

Authentication Flow for 3rd party JWT

Read more

Posted by on 03 May 2018 | Comments (0) | categories: Heroku Salesforce

Function length and double byte languages

Complexity is a prime enemy of maintainability. So the conventional wisdom suggests methods should be around 20 lines, with some evidence suggesting up to 100+ lines.

When I review code written by non-native English speakers, especially when their primary language is double byte based, I find methods in the 500-1000 lines range, with some special champions up to 5000 lines. So I wondered what might contribute to these function/method worms.

Read more

Posted by on 09 April 2018 | Comments (1) | categories: Java JavaScript NodeJS Software