
From Excel to package.xml


Cleaning up an org that has gone through several generations of ownership and objectives is fun. Some tooling helps.

Data frugality

A computing principle, very much anathema to Google and Facebook, is Data Frugality: storing only what you actually need. It is the data equivalent of coders' YAGNI principle. Since GDPR, at the latest, it has taken center stage.

Your cleanup plan

So your cleanup exercise has a few steps:

  • Find fields that don't have any data. You can use tools like Field Trip to achieve that
  • Verify that these fields are not "about to be used", but "really obsolete"
  • Add all fields that still hold leftover data but are no longer in use
  • Add fields that contain data legal told you to get rid of

The absolute standard approach of every consultant I have encountered is to fire up an Excel sheet and track all fields in a list: capture insights in a remarks column and add another column for a "can be deleted" status. Something like Yes, No, Investigating or "Call Paul to clarify". I would be surprised if there's a different approach in the wild (in theory there could be).

Excel as source?

In a current project the consultant neatly created one sheet (that's the page, not the file) per object, labeled with the object name and containing rows for all custom fields. Then the team went off to investigate. As a result they identified more than one thousand fields to be deleted.

Now to actually get rid of the fields, you could outsource some manual labor to either click through your org or use copy-paste to create a destructiveChanges.xml package file for use with the Salesforce Ant Migration Tool.
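Such a destructiveChanges.xml is simply a package manifest listing what to delete. A minimal sketch - the field names here are made up:

<?xml version="1.0" encoding="UTF-8"?>
<Package xmlns="http://soap.sforce.com/2006/04/metadata">
    <types>
        <members>Account.Legacy_Score__c</members>
        <members>Contact.Obsolete_Flag__c</members>
        <name>CustomField</name>
    </types>
</Package>

The Ant Migration Tool deploys it alongside a (possibly empty) package.xml.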

In any case, the probability of errors creeping in during the transfer is approximately 100%. The business owner will point out: "I signed off that spreadsheet, not that XML file!" Finger pointing commences.

There must be a better way!


Read more

Posted by on 23 February 2019 | Comments (0) | categories: Salesforce XML

Draining the happy soup - Part 3


In Part 2 we had a look at the plan. Now it is time to put it into motion. Let's set up our project structure.

Put some order in your files

Our goal is to distribute the happy soup artifacts into packages. In this installment we set up the directory structure for that. Sticking to a clear structure makes it easier to move toward package Nirvana one step at a time.

Proposed directory structure

Let me run through some of the considerations:

  • I'll keep all packages inside a single directory structure; name the root after your org (see the directory sketch after this list). What might pose a challenge is naming it sfdx - that's too close to the hidden .sfdx directory that exists in your home directory and might exist in the project directories
  • You could keep the whole tree in a single repository or give each package directory its own repository. I'd prefer the latter, since it allows a developer to pull only the relevant directories from source control (that's Option B)
  • The base directory, containing the artifacts that won't be packaged, shall be named HappySoup. While it is a rather colloquial term, it is well established
  • I'm a little old-fashioned when it comes to directory names: no spaces, no double-byte characters and no special characters
  • You need to pay attention to sfdx-project.json and .sfdx as well as .gitignore. More on that below
  • When you have a mixed-OS developer community using Windows, macOS or Linux, directory delimiters could become a headache. My tongue-in-cheek recommendation for Windows would be to use WSL
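Here is how such a tree could look; OrgName and ObjectBase are placeholder names matching the sfdx-project.json shown below:

OrgName/
  HappySoup/             <- the unpackaged base
    config/
    force-app/
    sfdx-project.json
  ObjectBase/            <- a future base package
    config/
    force-app/
    sfdx-project.json
  ...                    <- one directory per future package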

Key files and directories

Initially you want to divide, but not yet package. So your projects need to know about each other: higher-level packages that will later depend on base packages need to know about them, and each package needs to know about the HappySoup. To get there I adjust my sfdx-project.json:

{
  "packageDirectories": [
    { "path": "force-app", "default": true },
    { "path": "../ObjectBase/force-app" },
    { "path": "../HappySoup/force-app" }
  ],
  "namespace": "",
  "sfdcLoginUrl": "https://login.salesforce.com",
  "sourceApiVersion": "45.0"
}

The key here is the relative path entries like ../HappySoup/force-app. When you use sfdx force:source:push, the content of all listed directories gets pushed to your scratch org, so it is complete. When you use sfdx force:source:pull, changes you made are copied down to the default path only, so the adjacent projects remain as they are.

When using pull and push from VS Code, it will use the default username configured for SFDX. To ensure that you don't push to or pull from the wrong place, you need to create one scratch org per project using sfdx force:org:create -f config/project-scratch-def.json -a [ScratchOrgAlias] and then execute sfdx force:config:set defaultusername=[ScratchOrgAlias].

These commands will create a .sfdx directory with config files inside your project. Unless all developers checking out that repository use the same aliases (unlikely), you want to add .sfdx to your .gitignore file.
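Per project, the setup thus boils down to three steps (the alias ObjectBaseScratch is an arbitrary example):

# create a scratch org for this package project
sfdx force:org:create -f config/project-scratch-def.json -a ObjectBaseScratch
# make it the default target for push and pull from this directory
sfdx force:config:set defaultusername=ObjectBaseScratch
# keep the local SFDX config out of source control
echo ".sfdx/" >> .gitignore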

Now you are all set to move files from the happy soup to future package directories. With the relative paths in your sfdx-project.json, no packaging is required yet and you still get a fully functioning scratch org.

One pro tip: instead of relying on individual scratch definition files, you might opt to use the one in the happy soup, so all your scratch orgs have the same shape.

Next stop: building the solution before you package. As usual YMMV.


Posted by on 22 February 2019 | Comments (0) | categories: Salesforce SFDX

Draining the happy soup - Part 2


We stormed ahead in Part 1 and downloaded all the metadata in SFDX format. Now it's time to stop for a moment and ask: what's the plan?

You need a plan

When embarking on the SFDX package journey, the start is Phase 0. You have an org that contains all your metadata and zero or more (managed) packages from AppExchange. That's the swamp you want to drain.

[Figure: Phase 0 - happy soup]

Before you move to phase 1, you need to be clear about how you want to structure your packages. At a high level it could look like this:

[Figure: Structure - happy soup]

  1. You have an unpackaged base that will shrink over time. The interesting challenge is dealing with the dependencies there
  2. Some components will be used across all systems - most likely extensions to standard objects, triggers and utility classes. Core LWC components are good candidates for base packages too. There can be more than one base package
  3. Your business components. Slice them by business function, country specifics or business unit. Most likely this will resemble some of your organization's structure
  4. A package from AppExchange or a legacy package will not depend on anything. In my current project we moved all the Visualforce stuff (pages and controllers) into such a package, since it won't be needed once the Lightning migration is concluded and can then be uninstalled easily.

Read more

Posted by on 18 February 2019 | Comments (0) | categories: Salesforce SFDX

The Efficiency Paradox


A common setup in many organizations is to outsource development and/or operations to a system integrator. For agile organizations that can pose a challenge. A key factor is skillfulness: how fast and how well can things be implemented?

Does your System Integrator invest in efficiency?

Competition is supposed to keep cost at bay; however, customer relationships and familiarity with the environment (in Dreamland everything is documented) pose a substantial barrier to entry. A barrier to entry enables an incumbent vendor to charge more.

So an engagement manager might find him- or herself confronted with an interesting dynamic.

[Figure: Feedback loop for efficiency]

A slow and a fast loop run concurrently. Depending on the planning horizon, the engagement manager might not see the outer loop, to the detriment of all participants. Let me walk you through:

  1. Investment in better tools or skills leads to improved efficiency. Work is delivered faster, closer to actual requirements and with fewer defects
  2. In the short run this leads to a reduction in hours sold (bad for time-and-material contracts)
  3. A reduction in hours sold leads to reduced profitability, since more resources are sitting on the bench

    In conclusion: as long as the barrier to entry protects you, investing in efficiency is bad for the bottom line. So investment in efficiency should only be made to keep the barrier to entry high enough (add your own sarcasm tag here). However, there's a longer-running loop in motion:

  4. Improved efficiency leads to better quality and shorter delivery times. Work is done fast and well (which might justify higher charges per hour)
  5. Getting good quality soon leads to an increase in customer satisfaction. Who doesn't like swift and sure delivery?
  6. Happy customers, especially when delivery times are short, will find an endless stream (throttled only by budget) of additional requirements to implement
  7. A steady inflow of new requirements keeps people off the bench and utilization high. High utilization is the base of service profitability
  8. Investment in efficiency is justified

This is a nice example of a Systems Thinking feedback loop. Conclusions vary with the observed time frame.


Posted by on 18 February 2019 | Comments (0) | categories: Salesforce Singapore

Draining the happy soup - Part 1


Unleashing unlocked packages promises to reduce risk, improve agility and drive home the full benefits of SFDX.

Some planning required

I'm following the approach "throw it at the wall and see what sticks". The rough idea: retrieve all metadata, convert it into SFDX format, distribute it over a number of packages and put it back together.

To make it more fun I picked a heavily abused, customized and well-used org with more than 20,000 metadata artifacts (and a few surprises). Follow along.

Learning

Trailhead has a module on unlocked packages on its trail Get Started with Salesforce DX.

While you are there, check out the (at the time of writing, 15) modules on Application Lifecycle Management.

Downloading

The limits for retrieving packages (10,000 elements, 39 MB zipped or about 400 MB raw) posed an issue for my XL org. So I used PackageBuilder (I'm growing fond of it) to download all sources. It automatically creates multiple package.xml files when you exceed the limits.
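Converting the downloaded Metadata API sources into SFDX format is then a one-liner; the directory names here are assumptions:

# convert the Metadata API download into SFDX source format
sfdx force:mdapi:convert -r ./mdapi-source -d ./force-app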


Read more

Posted by on 14 February 2019 | Comments (0) | categories: Salesforce SFDX

Reporting your validation formulas


Validation formulas are a convenient way to ensure your data integrity. With great power comes the risk of alienating users by preventing them from entering data.

Why look at them?

You can easily look at every formula in the Object Manager, but inspecting them one by one is tedious. You might ask yourself:

  • Do all my formulas exclude the integration profile?
  • Are context-specific formulas (e.g. per country) set correctly?
  • Do validation rules follow the naming conventions?
  • Are messages helpful or intimidating?

Extract and report

You already use PackageBuilder to extract objects (and other stuff) as XML, so it is just a small step: slap all *.object files into one big file and run an XSLT report over it.

Not so fast! If you concatenate XML files using OS copy you end up with three problems:

  • You don't have an XML root element. Like the Highlander - there can be only one. You could sandwich the files between opening and closing tags, but then you hit the next problem
  • XML files start with <?xml version="1.0" encoding="UTF-8"?>, and copying files will sprinkle that declaration multiple times into your result. The XSLT processor will barf
  • The result will get very big, and any report will take a long time or even run out of memory

A bit of tooling

I solved it, for my needs, using a small Java class and one XSLT stylesheet. Java because I'm familiar with it and NodeJS still sucks at XML. XSLT because I'm familiar with it (heard that before?) and the styling of the output stays independent from the processing step. I presume you know how to initiate an XSLT 2.0 transformation.
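The class itself isn't shown in this post, but a minimal sketch of the concatenation step could look like this (Java 11+; the directory layout and root element name are my assumptions):

import java.io.BufferedWriter;
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.util.List;
import java.util.stream.Collectors;
import java.util.stream.Stream;

public class ObjectConcatenator {
    public static void main(String[] args) throws IOException {
        Path sourceDir = Paths.get(args[0]); // directory holding the *.object files
        Path target = Paths.get(args[1]);    // combined file the XSLT report runs on

        List<Path> objectFiles;
        try (Stream<Path> files = Files.list(sourceDir)) {
            objectFiles = files.filter(f -> f.toString().endsWith(".object"))
                               .sorted()
                               .collect(Collectors.toList());
        }

        try (BufferedWriter out = Files.newBufferedWriter(target, StandardCharsets.UTF_8)) {
            out.write("<?xml version=\"1.0\" encoding=\"UTF-8\"?>\n");
            out.write("<objects>\n"); // the single root element - there can be only one
            for (Path p : objectFiles) {
                // drop each file's XML declaration so the result stays well-formed
                String content = Files.readString(p, StandardCharsets.UTF_8)
                                      .replaceFirst("<\\?xml[^>]*\\?>", "");
                out.write(content.trim());
                out.write("\n");
            }
            out.write("</objects>\n");
        }
    }
}

Stripping the declarations and wrapping everything in one synthetic root addresses the first two problems from the list above; for the size problem you may still need a streaming approach.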


Read more

Posted by on 07 February 2019 | Comments (0) | categories: Salesforce

Avoid the "Clean Code Shock" with PMD


Your new year resolution includes "Write clean Apex code". So you run PMD with a full ruleset and get shocked by the number of violations. You drop the resolution in a blink.

Don't boil the Ocean

Even a journey of a thousand miles starts with a single step, so let's break the task into manageable chunks to divide and rule.
There are two dimensions you can use: type of code and priority level. Using them you can turn your clean code journey into manageable stages.

Code Types

  • Legacy code: all code that doesn't fall into either of the other two categories
  • Changed code: code that needs changes due to business requirements
  • New code: code written for new or changed functionality (applies to copy & paste too)

Priority Levels

  • 1 = security and performance, will fail the build
  • 2 = bad code, will fail the build
  • 3 & 4 = hard-to-maintain code, will generate a warning
  • 5 = ugly code, will generate a hint

PMD rules for the different code types should carry different priorities, so a different number of rules will fail the build:

  • 11 for legacy code (all around performance and security)
  • 33 for changed code
  • 44 for new code

This will require running PMD with different rulesets on different subsets of your code base.
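A per-code-type ruleset could look like the sketch below. The two referenced rules exist in PMD 6's Apex categories, but the selection and file layout are my assumptions:

<?xml version="1.0" encoding="UTF-8"?>
<ruleset name="LegacyCode"
         xmlns="http://pmd.sourceforge.net/ruleset/2.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://pmd.sourceforge.net/ruleset/2.0.0 https://pmd.sourceforge.io/ruleset_2_0_0.xsd">
  <description>Security and performance rules only - the set that fails a legacy code build</description>
  <rule ref="category/apex/security.xml/ApexSOQLInjection">
    <priority>1</priority>
  </rule>
  <rule ref="category/apex/performance.xml/AvoidSoqlInLoops">
    <priority>1</priority>
  </rule>
</ruleset>

Running PMD once per code type, each run pointed at its subset of sources and its own ruleset file, yields the staged behavior described above.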


Read more

Posted by on 02 January 2019 | Comments (0) | categories: Apex PMD Salesforce

Pattern in your Apex Controller


A (software) design pattern is a general, reusable solution to a commonly occurring problem within a given context. Christopher Alexander inspired the Gang of Four to apply patterns to software and enumerate 23 classic software patterns.

This article discusses how to use some of them in the context of Apex controllers.

The context: Same same, but different

You are creating an application on force.com to support construction projects, serving multiple countries. Part of the requirements is to compute a risk score for any given project. While ISO standards form the foundation of the assessment, each jurisdiction has specialties that eventually alter the logic.

This is just one of the requirements; you have many more that follow the pattern Same Same - But Different. A sketch of the classic remedy follows below.
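A classic fit for this kind of variability is the Strategy pattern, paired with a small factory. Here is a minimal sketch in Java (Apex reads almost identically); every name in it is invented for illustration:

// Strategy: one scoring rule set per jurisdiction behind a common interface
interface RiskScorer {
    double score(double baseRisk);
}

// the ISO baseline assessment
class IsoDefaultScorer implements RiskScorer {
    public double score(double baseRisk) {
        return baseRisk;
    }
}

// a jurisdiction with its own twist on the logic
class SingaporeScorer implements RiskScorer {
    public double score(double baseRisk) {
        return baseRisk * 1.1;
    }
}

class RiskScorerFactory {
    // the factory picks the strategy, so callers never branch on country codes
    static RiskScorer forCountry(String isoCode) {
        switch (isoCode) {
            case "SG":
                return new SingaporeScorer();
            default:
                return new IsoDefaultScorer();
        }
    }
}

public class RiskDemo {
    public static void main(String[] args) {
        RiskScorer scorer = RiskScorerFactory.forCountry("SG");
        System.out.println(scorer.score(42.0)); // base risk adjusted by the local factor
    }
}

The controller only ever talks to the RiskScorer interface; supporting a new jurisdiction means adding a class, not another if/else branch.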

Patterns used

Besides those, you want to know Apex Enterprise Patterns. Go Trailhead and learn.


Read more

Posted by on 29 December 2018 | Comments (0) | categories: Apex Salesforce Software

Lightning Web Components (LWC) quick overview


On December 13 Salesforce announced Lightning Web Components (LWC), a new way to build components on the Salesforce platform. Here is my take.

Expanding Lightning Family

"Lightning" serves now as a family name for modern Salesforce development. LWC are the latest family members. We now have:

The linked blog entries explain the rationale, so check them out.

Same but different

The look and feel doesn't change, but the way you code them does. For now, SFDX, Visual Studio Code and the Salesforce Extension Pack are the go-to tools for the Spring '19 release.

The new file structure: 4 instead of 8 files

Instead of up to 8 files you only need 4. For one, all JavaScript (previously 3 files) now lives in a single ES6 module, and there is no auradoc or svg file for now.
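A component bundle then looks roughly like this (the component name is hypothetical):

myComponent/
  myComponent.html          <- the template
  myComponent.js            <- one ES6 module: controller, helper and renderer logic combined
  myComponent.js-meta.xml   <- targets and visibility configuration
  myComponent.css           <- optional styling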

Co-existence

Existing Aura-based components will continue to work and can even contain LWC components.

LWC in Aura, but not Aura in LWC

What you can't do is put Aura components inside LWC. So your transition to LWC is bottom-up, not top-down.


Read more

Posted by on 14 December 2018 | Comments (0) | categories: Lightning Salesforce WebDevelopment

Salesforce login statistics aggregation


A recent requirement from a customer was "I'd like to analyze logins by user in Excel", even though a dashboard approach would have been sufficient. With a few million records, aggregating in Excel wasn't particularly appealing.

Download the log

Salesforce setup allows you to download the login history as a csv or csv.gz file. In any case you should use the latter. I learned the hard way: the chunked transfer encoding might leave you with less data to process than you expect.

The Scanner simply stopped after a few thousand entries, while the csv parser barfed with an error.

Processing

After downloading and extracting the csv, I used a small Java routine (yep, I'm that old) to aggregate logins per user, capturing the count, the first/last login dates, the country of login (with the usual accuracy caveats) and the community, if any.

Usually I would use a robust library for reliably reading csv in Java; in this case, however, having no dependencies and using Scanner did just nicely.
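The routine isn't reproduced here, but a minimal sketch of the per-user count could look like this (the column layout is an assumption; the real log has more fields):

import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.HashMap;
import java.util.Map;
import java.util.Scanner;

public class LoginAggregator {
    public static void main(String[] args) throws IOException {
        Map<String, Integer> loginsPerUser = new HashMap<>();
        try (Scanner scanner = new Scanner(Files.newBufferedReader(Paths.get(args[0])))) {
            if (scanner.hasNextLine()) {
                scanner.nextLine(); // skip the header row
            }
            while (scanner.hasNextLine()) {
                // naive split - good enough as long as no field contains a quoted comma
                String[] fields = scanner.nextLine().split(",");
                String user = fields[0]; // assumption: first column holds the username
                loginsPerUser.merge(user, 1, Integer::sum);
            }
        }
        // semicolon-separated output imports cleanly into Excel
        loginsPerUser.forEach((user, count) -> System.out.println(user + ";" + count));
    }
}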


Read more

Posted by on 05 December 2018 | Comments (0) | categories: Java Salesforce