wissel.net

Usability - Productivity - Business - The web - Singapore & Twins

By Date: February 2019

From Excel to package.xml


Cleaning up an org that has gone through several generations of ownership and objectives is fun. Some tooling helps.

Data frugality

A computing principle, anathema to Google and Facebook, is Data Frugality: storing only what you actually need. It is the data equivalent of coders' YAGNI principle. Since GDPR at the latest, it has moved to center stage.

Your cleanup plan

So your cleanup exercise has a few steps:

  • Find fields that don't contain any data. You can use tools like Field Trip to achieve that
  • Verify that these fields are "really obsolete" and not just "about to be used"
  • Add fields that still hold leftover data but are no longer in use
  • Add fields that contain data legal told you to get rid of

The standard approach of every consultant I have encountered is to fire up an Excel sheet and track all fields in a list: capture insights in a remarks column and add another column for the "can be deleted" status - something like Yes, No, Investigating or "Call Paul to clarify". I would be surprised if a different approach exists in the wild (in theory there are others).

Excel as source?

In a current project the consultant neatly created one sheet (that's the page, not the file) per object, labeled with the object name and containing rows for all custom fields. Then the team went off to investigate. As a result they identified more than one thousand fields to be deleted.

Now, to actually get rid of the fields, you could outsource some manual labor to either go into your org or use copy-paste to create a destructiveChanges.xml package file for use with the Salesforce Ant migration tool.

In any case: the probability that errors creep in during the transfer is approximately 100%. The business owner will point out: "I signed off that spreadsheet and not that XML file!" Finger pointing commences.
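
For context, this is the shape such a hand-built file takes - a minimal sketch with made-up field API names, deployed alongside a stub package.xml:

<?xml version="1.0" encoding="UTF-8"?>
<Package xmlns="http://soap.sforce.com/2006/04/metadata">
    <types>
        <members>Account.Legacy_Rating__c</members>
        <members>Contact.Fax_Preference__c</members>
        <name>CustomField</name>
    </types>
    <version>45.0</version>
</Package>

A single mistyped API name among hundreds of <members> entries is enough to make the deployment fail.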

There must be a better way


Read more

Posted on 23 February 2019 | Comments (0) | categories: Salesforce XML

Draining the happy soup - Part 3


In Part 2 we had a look at the plan. Now it is time to put it into motion. Let's set up our project structure.

Put some order in your files

Our goal is to distribute happy soup artifacts into packages. In this installment we set up the directory structure for that. Sticking to a clear structure makes it easier to move toward package Nirvana one step at a time.

Proposed directory structure
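
A sketch of the proposed layout, with MyOrg standing in for your org's name and ObjectBase/BusinessSales as placeholder package names:

MyOrg/
├── HappySoup/
│   ├── config/
│   ├── force-app/
│   └── sfdx-project.json
├── ObjectBase/
│   ├── config/
│   ├── force-app/
│   └── sfdx-project.json
└── BusinessSales/
    ├── config/
    ├── force-app/
    └── sfdx-project.json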

Let me run through some of the considerations:

  • I'll keep all packages inside a single directory structure. Name the root after your org. Naming it sfdx would pose a challenge: it is too close to the hidden .sfdx directory that exists in your home directory and might exist in the project directories
  • You could keep the whole tree in a single repository or subject each package directory to its own repository. I'd prefer the latter, since it allows a developer to pull only the relevant directories from source control (that's Option B)
  • The base directory, containing the artifacts that won't be packaged, shall be named HappySoup. While it is a rather colloquial term, it is well established
  • I'm a little old fashioned when it comes to directory names: no spaces, double-byte characters (that includes 💩) or special characters
  • You need to pay attention to sfdx-project.json and .sfdx as well as .gitignore. More on that below
  • When you have mixed-OS developer communities using Windows, macOS or Linux, directory delimiters can become a headache. My tongue-in-cheek recommendation for Windows would be to use WSL

Key files and directories

Initially you want to divide, but not yet package. So your projects need to know about each other: higher-level packages that will in future depend on base packages need to know about them, and each package needs to know about the HappySoup. To get there, I adjust my sfdx-project.json:

{
  "packageDirectories": [
    { "path": "force-app", "default": true },
    { "path": "../ObjectBase/force-app" },
    { "path": "../HappySoup/force-app" }
  ],
  "namespace": "",
  "sfdcLoginUrl": "https://login.salesforce.com",
  "sourceApiVersion": "45.0"
}

The key here is the relative path entries like ../HappySoup/force-app. When you use sfdx force:source:push, the content of all listed directories gets pushed to your scratch org, so it is complete. When you use sfdx force:source:pull, changes you made are copied down only to the default path, so the adjacent projects remain as they are.

When you pull and push from VSCode, it uses the default username configured for SFDX. To ensure that you don't push to or pull from the wrong place, you need to create one scratch org per project using sfdx force:org:create -f config/project-scratch-def.json -a [ScratchOrgAlias] and then execute sfdx force:config:set defaultusername=[ScratchOrgAlias].
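
To double-check which scratch org a project directory currently points to, run the following from the project root; it lists the configured values, including defaultusername:

sfdx force:config:list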

The command creates a .sfdx directory with config files inside your project. Unless all developers checking out that repository use the same aliases (unlikely), you want to add .sfdx to your .gitignore file.
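
The entry itself is a one-liner, assuming the .gitignore sits at the repository root:

# per-developer SFDX config (scratch org aliases) - do not commit
.sfdx/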

Now you are all set to move files from the happy soup to future package directories. With the relative paths in your sfdx-project.json, no packaging is required yet and you can still get a fully functioning scratch org.

One pro tip: instead of relying on individual scratch definition files, you might opt to use the one in the happy soup, so all your scratch orgs have the same shape.

Next stop: building the solution before you package. As usual YMMV.


Posted on 22 February 2019 | Comments (0) | categories: Salesforce SFDX

Draining the happy soup - Part 2


We stormed ahead in Part 1 and downloaded all the metadata in SFDX format. Now it's time to stop for a moment and ask: what's the plan?

You need a plan

When embarking on the SFDX package journey, the start is Phase 0: you have an org that contains all your metadata plus zero or more (managed) packages from AppExchange. That's the swamp you want to drain.

Phase 0 - happy soup

Before you move to Phase 1, you need to be clear about how you want to structure your packages. At a high level it could look like this:

Structure - happy soup

  1. You have an unpackaged base that will shrink over time. The interesting challenge is to deal with the dependencies there
  2. Some components will be used across all systems - most likely extensions to standard objects, triggers and utility classes. Core LWC components are good candidates for base packages too. There can be more than one base package (see the sketch after this list for how a dependency on one gets declared)
  3. Your business components. Slice them by business function, country specifics or business unit. Most likely this will resemble some of your organizational structure
  4. A package from AppExchange or a legacy package will not depend on anything. In my current project we moved all Visualforce stuff (pages and controllers) there, since these won't be needed after the Lightning migration is concluded and can then be uninstalled easily
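
Looking ahead: once these slices become real packages, a business package declares its dependency on a base package in its sfdx-project.json. A minimal sketch, assuming made-up package names BusinessSales and ObjectBase:

{
  "packageDirectories": [
    {
      "path": "force-app",
      "default": true,
      "package": "BusinessSales",
      "versionNumber": "0.1.0.NEXT",
      "dependencies": [
        { "package": "ObjectBase", "versionNumber": "0.1.0.LATEST" }
      ]
    }
  ],
  "namespace": "",
  "sourceApiVersion": "45.0"
}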

Read more

Posted on 18 February 2019 | Comments (0) | categories: Salesforce SFDX

The Efficiency Paradox


A common setup in many organizations is to outsource development and/or operations to a system integrator. For agile organizations that can pose a challenge. A key question is skillfulness: how fast and how well can things be implemented?

Does your System Integrator invest in efficiency?

Competition is supposed to keep costs at bay; however, customer relationships and familiarity with the environment (in Dreamland everything is documented) pose a substantial barrier to entry. A barrier to entry enables an incumbent vendor to charge more.

So an engagement manager might find themselves confronted with an interesting dynamic.

Feedback loop for efficiency

A slow and a fast loop run concurrently. Depending on the planning horizon, the engagement manager might not see the outer loop, to the detriment of all participants. Let me walk you through:

  1. Investment in better tools or skills leads to improved efficiency. Work is delivered faster, closer to actual requirements and with fewer defects
  2. In the short run this leads to a reduction in hours sold (bad for time-and-materials contracts)
  3. A reduction in hours sold leads to reduced profitability, since more resources sit on the bench

    In conclusion: as long as the barrier to entry protects you, investing in efficiency is bad for the bottom line. So investment in efficiency should only be made to keep the barrier to entry high enough (add your own sarcasm tag here). However, there's a longer-running loop in motion:

  4. Improved efficiency leads to better quality and shorter delivery times. Work is done fast and well (which might justify higher charges per hour)
  5. Getting good quality soon leads to an increase in customer satisfaction. Who doesn't like swift and sure delivery?
  6. Happy customers, especially when delivery times are short, will find an endless stream (throttled only by budget) of additional requirements to implement
  7. A steady stream of new requirements keeps people off the bench and keeps utilization high. High utilization is the basis of service profitability
  8. Investment in efficiency is justified

This is a nice example of a Systems Thinking feedback loop. Conclusions vary with the observed time frame.


Posted on 18 February 2019 | Comments (0) | categories: Salesforce Singapore

Draining the happy soup - Part 1


Unleashing unlocked packages promises to reduce risk, improve agility and drive home the full benefits of SFDX

Some planning required

I'm following the approach of "throw it at the wall and see what sticks". The rough idea: retrieve all metadata, convert it into SFDX format, distribute it over a number of packages and put it back together.

To make it more fun I picked a heavily ~~abused~~ customized and used org with more than 20,000 metadata artifacts (and a few surprises). Follow along.

Learning

Trailhead has a module on unlocked packages on its trail Get Started with Salesforce DX.

While you are there, check out the (at the time of writing 15) modules on Application Lifecycle Management.

Downloading

The limits for retrieving packages (10,000 elements, 39 MB zipped or about 400 MB raw) posed an issue for my XL org. So I used PackageBuilder, which I'm growing fond of, to download all sources. It automatically creates multiple package.xml files when you exceed the limits.
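
For reference, the generated files follow the standard Metadata API manifest shape. A minimal sketch with made-up member names (the real, auto-generated ones run to thousands of entries, hence the split):

<?xml version="1.0" encoding="UTF-8"?>
<Package xmlns="http://soap.sforce.com/2006/04/metadata">
    <types>
        <members>Account</members>
        <members>Invoice__c</members>
        <name>CustomObject</name>
    </types>
    <types>
        <members>InvoiceController</members>
        <name>ApexClass</name>
    </types>
    <version>45.0</version>
</Package>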


Read more

Posted on 14 February 2019 | Comments (0) | categories: Salesforce SFDX

Reporting your validation formulas


Validation formulas are a convenient way to ensure your data integrity. With great power... comes the risk of alienating users by preventing them from entering data.

Why look at them?

You can easily look at any formula in the Object Manager, but it is tedious to review every formula one by one. You might ask yourself:

  • Do all my formulas exclude the integration profile?
  • Are context-specific formulas (e.g. per country) set correctly?
  • Do validation rules follow the naming conventions?
  • Are messages helpful or intimidating?

Extract and report

You already use PackageBuilder to extract objects (and other stuff) as XML, so it is just a small step: slap all *.object files into one big file and run an XSLT report over it.

Not so fast! If you concatenate XML files using OS copy you end up with three problems:

  • You don't have an XML root element. Like the Highlander - there can be only one. You could sandwich the files in opening and closing tags, but then you have the next problem
  • XML files start with <?xml version="1.0" encoding="UTF-8"?>, and naive copying sprinkles that declaration multiple times into your result. The XSLT processor will barf
  • The result will get very big and any report will take a long time or even run out of memory

A bit of tooling

I solved it, for my needs, with a small Java class and one XSLT stylesheet. Java because I'm familiar with it and NodeJS still sucks at XML. XSLT because I'm familiar with it (heard that before?) and the styling of the output stays independent from the processing step. I presume you know how to initiate an XSLT 2.0 transformation.
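
The actual class sits behind the "Read more" link; a minimal sketch of how the merging step could look - wrap all *.object files in a single root element and drop the per-file XML declarations (class, root element and paths are placeholders):

import java.io.IOException;
import java.io.PrintWriter;
import java.io.UncheckedIOException;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.util.stream.Stream;

// Sketch: merge all *.object files into one well-formed XML document
public class ObjectMerger {
    public static void main(String[] args) throws IOException {
        Path sourceDir = Paths.get(args[0]); // e.g. src/objects
        Path target = Paths.get(args[1]);    // e.g. all-objects.xml
        try (PrintWriter out = new PrintWriter(Files.newBufferedWriter(target));
             Stream<Path> files = Files.list(sourceDir)) {
            out.println("<?xml version=\"1.0\" encoding=\"UTF-8\"?>");
            out.println("<objects>"); // the one and only root element
            files.filter(p -> p.toString().endsWith(".object")).sorted().forEach(p -> {
                try {
                    String xml = new String(Files.readAllBytes(p), StandardCharsets.UTF_8);
                    // drop the <?xml ...?> declaration each file starts with
                    out.println(xml.replaceFirst("^<\\?xml[^>]*\\?>\\s*", ""));
                } catch (IOException e) {
                    throw new UncheckedIOException(e);
                }
            });
            out.println("</objects>");
        }
    }
}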


Read more

Posted on 07 February 2019 | Comments (0) | categories: Salesforce