
Random insights in Bluemix development (a.k.a Die Leiden des Jungen W)


Each platform comes with its own little challenges, things that work differently than you expect. Those little things can easily steal a few hours. This post collects some of my random insights:
  • Development cycle

    I'm a big fan of offline development. My preferred way is to use a local git repository and push my code to the Bluemix DevOps service to handle compilation and deployment. It comes with a few caveats:
    • When you do anything beyond basic Java, you want to use Apache Maven. The dependency management is worth the learning curve. If you started with the Java boilerplate, you end up with an ANT project. Take some time to not only mavenize it, but also adjust the directories to follow the Maven standards. This involves shuffling a few files around (/src vs. /src/main/java and /bin vs. /target/classes for starters) and editing the pom.xml to remove the custom paths
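      For orientation, this is roughly the standard layout Maven expects (a sketch, annotations added):
      src/main/java        - Java sources (was /src)
      src/main/resources   - classpath resources
      src/main/webapp      - web content
      src/test/java        - unit tests
      target/              - build output (was /bin), e.g. target/classes
      pom.xml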
    • Make sure you clear out the path in the build job on DevOps; Maven already deploys to target. If you have specified target in DevOps, you end up with the code in target/target and the deploy task won't find anything
    • Learn about the Liberty profile and its available features, so you can properly specify <scope>provided</scope> in the pom.xml
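      A typical provided dependency could look like this (the Servlet API is just an illustrative example of an API the Liberty runtime already ships):
      <dependency>
        <groupId>javax.servlet</groupId>
        <artifactId>javax.servlet-api</artifactId>
        <version>3.1.0</version>
        <scope>provided</scope>
      </dependency>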
    • In node.js, when you manually install a module in node_modules that isn't pulled from a repository through an entry in package.json, that module will not be visible to the standard build and deploy, since (surprise, surprise) node_modules is excluded from version control and build checkout.
      Now there are a bunch of workarounds described, but I'll sum it up: don't bother. Either you move your module into a repository DevOps can reach or you build the application locally and use cf push.
    • manifest.yml is your friend. Learn about it, especially the path attribute (see the sketch below). When deploying a Maven build, your path will be target/[name-of-app]-[maven-version].war
    • You can specify a buildpack and environment parameters in a manifest. Works like a charm. However, removing them from the manifest has no effect: you have to manually unset the values using the cf tool. The buildpack, too, needs to be reset manually, so be careful there!
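      A minimal manifest.yml for such a Maven-built war could look like this (name, memory, buildpack and the env entry are illustrative assumptions):
      applications:
      - name: myapp
        memory: 512M
        path: target/myapp-1.0.0-SNAPSHOT.war
        buildpack: liberty-for-java
        env:
          MY_SETTING: some-value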
  • Services

    The automagical configuration of services is one of the things to love about Bluemix. This especially holds true for Java.
    • The samples suggest using the VCAP_SERVICES environment variable to get credentials and URLs for your services. In short: don't. The Java Liberty buildpack does a nice job making the values available through JNDI or Spring, so simply use those. To make sure that java:comp/env can see them properly, don't forget to reference them in the web.xml
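      Such a reference could look like this sketch for a SQL-style service (the name and type are illustrative):
      <resource-ref>
        <res-ref-name>jdbc/mydb</res-ref-name>
        <res-type>javax.sql.DataSource</res-type>
        <res-auth>Container</res-auth>
      </resource-ref>
      The entry then becomes visible as java:comp/env/jdbc/mydb.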
    • As a diversion from this: I found the MQ Light Java classes less stressful than configuring JMS via JNDI. The developers did a good job making that library, too, work automagically on Bluemix.
    • For some services (e.g. the JAX-RS 2.0 client or Bluemix SSO) you do have to touch the server.xml.
      The two methods are a packaged server or a server directory. The former requires a local Liberty profile installed, so I prefer the latter. It is actually easier than it sounds. In your (Maven) project, you create the new directories defaultServer and defaultServer/apps (case sensitive!). You create/edit the server.xml in the defaultServer directory. Then check for your Maven plugin in the pom.xml and change the output directory:
      <plugin>
        <groupId>org.apache.maven.plugins</groupId>
        <artifactId>maven-war-plugin</artifactId>
        <version>2.3</version>
        <configuration>
          <failOnMissingWebXml>false</failOnMissingWebXml>
          <warName>${artifactId}</warName>
          <outputDirectory>${basedir}/defaultServer/apps</outputDirectory>
        </configuration>
      </plugin>
      

      Then you can deploy your application using mvn install and cf push [appname] -p defaultServer. These two commands work in DevOps too!
    • The SSO service is "Single Sign-On"; there is no real "Single Sign-Out". That's not an issue specific to Bluemix, but something all SSO solutions struggle with - just to be clear what to expect. The login dialog is ugly, but fully customizable. The nature of SSO (corporate and/or a public provider) makes it a minimal provider: identity only, no roles, attributes or groups. In the spirit of microservices: build a REST-based service for that
  • Node-RED

    While it is advertised as an IoT tool, there is much more to this little gem:
    • Node-RED runs on Bluemix, your local PC or even a Raspberry Pi. For the latter, head over to The Thingbox to get a ready-made OS image
    • Node-RED can be easily expanded; there are tons of ready-made modules at Node-RED flows. Not all are suitable for Bluemix (e.g. the ones talking Bluetooth), but a local Node-RED can easily talk to a Bluemix Node-RED, making it easy for applications to run distributed
    • My little favourite: connect an HTTP post input directly to a Cloudant output. Node-RED converts the encoded form into a JSON object you can drop into the database as is. You might want to add a small filter (a compute node) to avoid data contamination
As usual YMMV

Posted on 2015-06-29 03:31 | Comments (0) | categories: Bluemix

Investigating JNDI


When developing Java, locally or for Bluemix, a best practice is to use JNDI to access the resources and services you use. In Cloud Foundry all services are listed in the VCAP_SERVICES environment variable and could be parsed out of that JSON string. However, this would make the application platform-dependent, which is something you want to avoid.
Typically a JNDI service requires editing the server.xml to point to the right service. However, editing the server.xml in Bluemix is something you want to avoid as much as possible. Luckily the WebSphere Java Liberty buildpack, which is the one Bluemix uses for Java by default, handles that for you automagically, and all Bluemix services turn into discoverable JNDI objects. So much for the theory. I found myself in the tricky situation of needing to check what services are actually there. So I wrote some code that turns the available JNDI objects into a JSON string.
    // Imports needed for this fragment
    import javax.naming.InitialContext;
    import javax.naming.NameClassPair;
    import javax.naming.NamingEnumeration;
    import javax.ws.rs.GET;
    import javax.ws.rs.Path;
    import javax.ws.rs.Produces;
    import javax.ws.rs.core.MediaType;
    import javax.ws.rs.core.Response;
    import javax.ws.rs.core.Response.Status;

    @GET
    @Path("/jndi")
    @Produces(MediaType.APPLICATION_JSON)
    public Response getJndi() {
        StringBuilder b = new StringBuilder();
        b.append("{ \"java:comp\" : [");
        this.renderJndi("java:comp", b);
        b.append("]}");

        return Response.status(Status.OK).entity(b.toString()).build();
    }

    // Walks the JNDI tree recursively, rendering each entry as a JSON object
    private void renderJndi(String prefix, StringBuilder b) {
        boolean isFirst = true;

        try {
            InitialContext ic = new InitialContext();
            NamingEnumeration<NameClassPair> list = ic.list(prefix);
            while (list.hasMore()) {
                if (!isFirst) {
                    b.append(", \n");
                }

                NameClassPair ncp = list.next();
                String theName = ncp.getName();
                String className = ncp.getClassName();

                b.append("{\"name\" : \"");
                b.append(theName);
                b.append("\",");
                b.append("\"javaClass\" : \"");

                b.append(className);
                b.append("\"");
                if ("javax.naming.Context".equals(className)) {
                    b.append(", \"children\" : [");
                    this.renderJndi(prefix + (prefix.endsWith(":") ? "" : "/") + theName, b);
                    b.append("]");
                }
                b.append("}");
                isFirst = false;
            }
        } catch (Exception e) {
            e.printStackTrace();
            b.append("\"");
            b.append(e.getMessage());
            b.append("\"");
        }

    }

Enjoy - As usual YMMV

Posted on 2015-06-18 04:18 | Comments (0) | categories: Bluemix Java

Adventures with Node-RED


Node-RED is a project that successfully escaped "ET" - not the alien, but IBM's Emerging Technology group. Built on top of node.js, Node-RED runs in many places, including the Raspberry Pi and IBM Bluemix.
In Node-RED the flow between nodes is graphically represented by lines you drag between them, requiring just a little scripting to get them going.
The interesting part is the nodes that are available (unless you fancy writing your own): a large array of ready-made flows with nodes and sample applications makes Node-RED extremely flexible (I wonder if it would make sense to build a workflow engine with it). In case you don't find a node you fancy, you can build your own. Not all nodes are created equal, so you need to check what works. When you run Node-RED on Bluemix, you won't get access to hardware like serial ports or Bluetooth, but you gain a DNS-addressable IP endpoint (and you are not limited to http(s)). Furthermore, IBM provides direct access to the IBM IoT cloud, which takes the headache out of device configuration by providing an extensive set of device libraries.
So how do you get additional nodes, your own or others', onto Bluemix? Here are the steps:
  1. create a new application with the IoT Boilerplate
  2. link that application to version control on hub.jazz.net
  3. clone the repository locally: git clone ...
  4. edit the package.json and add the module you would like to add (see the sketch below)
  5. commit and push the changes back to JazzHub and let "build and deploy" sort it out
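A minimal sketch of how that package.json edit could look (node-red-contrib-somenode is a hypothetical placeholder for the module you want; names and versions are illustrative):

{
  "name": "node-red-bluemix",
  "version": "0.1.0",
  "dependencies": {
    "node-red": "0.x",
    "node-red-contrib-somenode": "0.1.x"
  }
}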

Read more

Posted on 2015-06-02 06:42 | Comments (0) | categories: Bluemix

Your API needs a plan (a.k.a. API Management)


You drank the API Economy Kool-Aid and created some neat HTTPS-addressable calls using Restify or JAX-RS. Digging deeper into the concept of microservices, you realize an HTTPS-callable endpoint doesn't make an API. There are a few more steps involved.
O'Reilly provides a nice summary in the book Building Microservices, so you might want to add that to your reading list. In a nutshell:
  • You need to document your APIs. The most popular tools here seem to be Swagger and WSDL 2.0 (I also like Apiary)
  • You need to manage who is calling your API. The established mechanism is to use API keys. Those need to be issued, managed and monitored
  • You need to manage when your API is called. Depending on the ability of your infrastructure (or your ability to pay for scale-out), you need to limit the rate your API is called per second, hour or billing period
  • You need to manage how your API is called: in which sequence, is the call clean, where does it come from
  • You need to manage versions of your API, so innovations and improvements don't break existing code
  • You need to manage the grouping of your endpoints into "packages" like: free API, freemium API, partner API, pro API etc. Since the calls will overlap, building code for the bundles would lead to duplicates
And of course, all of this needs statistics and monitoring. Adding that to your code would create quite some overhead, so I would suggest: use a service for that.
In IBM Bluemix there is the API Management service. This service isn't a new invention, but the existing IBM Cloud API Management made available in a consumption-based pricing model.
Your first 5000 calls are free, as is your first developer account. After that it is less than 6 USD (pricing as of May 2015) for 100,000 calls. This provides a low-investment way to evaluate the power of IBM API Management.
API Management IBM Style
The diagram shows the general structure. Your APIs only need to talk to the IBM cloud, removing the headache of security, packet monitoring etc.
Once you have built your API, you then expose it back to Bluemix as a custom service. It will appear like any other service in your catalogue. The purpose of this is to make it simple to use those APIs from Bluemix - you just read your VCAP_SERVICES.
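Reading those credentials takes only a few lines of Java. A minimal sketch using Gson (the service label "MyAPIService" and the class name are illustrative assumptions):

import com.google.gson.JsonArray;
import com.google.gson.JsonObject;
import com.google.gson.JsonParser;

public class VcapServices {

    // Returns the credentials of the first instance of the given service,
    // or null when not running on Cloud Foundry or the service is missing
    public static JsonObject getCredentials(String serviceLabel) {
        String raw = System.getenv("VCAP_SERVICES");
        if (raw == null) {
            return null;
        }
        JsonObject services = new JsonParser().parse(raw).getAsJsonObject();
        JsonArray instances = services.getAsJsonArray(serviceLabel);
        if (instances == null || instances.size() == 0) {
            return null;
        }
        return instances.get(0).getAsJsonObject().getAsJsonObject("credentials");
    }
}

Usage: JsonObject cred = VcapServices.getCredentials("MyAPIService");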
But you are not limited to using these APIs from Bluemix. You can call IBM API Management directly (your API partners/customers will like that) from whatever has access to the Intertubes.
There are excellent resources published to get you started. Now that you know why, check out the how. If you're not sure about that whole microservices thing, check out Chris' example code.
As usual YMMV

Posted on 2015-05-20 08:12 | Comments (0) | categories: Bluemix

The Rise of JavaScript and Docker


I loosely used JavaScript in this headline to refer to a set of technologies: node.js, Meteor, Angular.js (or React.js). They share a commonality with Docker that explains their (pun intended) meteoric rise.
Let's take a step back:
JavaScript on the server isn't exactly new. The first server-side JavaScript was implemented in 1998, and the union mount that made Docker possible is from 1990. Client-side JavaScript frameworks are plenty too. So what made the mentioned ones so successful?
I make the claim that it is machine readable community. This is where these tools differ. node.js is inseparable from its package manager npm. Docker is unimaginable without its registry, and Angular/React (as well as jQuery) live on cushions of myriads of plug-ins and extensions. While the registries/repositories are native to Docker and node.js, the front-ends take advantage of tools like Bower and Yeoman that make all the packages feel native.
These registries aren't read-only, which is a huge point. By providing the means of direct contribution and/or branching on GitHub, the process of contribution and consumption became two-way. The mere possibility to "give back" created a stronger sense of belonging (even if that sense might not be fully conscious).
Machine readable community is a natural evolution born out of the open source spirit. For decades developers have collaborated using chat (IRC anyone?), discussion boards, Q&A sites and code-sharing places. With the emergence of Git and GitHub as the de facto standard for code sharing, the community was ready.
The direct access from scripts and configurations to the source repository replaced the flow of "human vetting, human download, human unpack and copy to the right location" with "specify what you need and the machine will know where to get it". Even this idea wasn't new. In the Java world, the Maven plug-in has provided that functionality since 2002.
The big difference now: Maven wasn't native to Java; it required a change of habit. Things are done differently with it than without. npm on the other hand is "how you do things in node.js". Configuring a Docker container is done using the registry (and you have to put in extra effort if you want to avoid that).
So all the new tooling uses repositories as "this is how it works" and complements human readable community with machine readable community. Of course there is technical merit too - but that has been discussed elsewhere at great length.

Posted on 2015-05-09 01:25 | Comments (0) | categories: Software

Cloud with a chance of TAR balls (or: what is your exit strategy)


Cloud computing is here to stay, since it does have many benefits. However, even unions made "until death do us part" come with prenups these days. So it is prudent for your cloud strategy to contemplate an exit strategy.
Such a strategy depends on the flavour of cloud you have chosen (IaaS, PaaS, SaaS, BaaS) and might require adjusting the way you on-board in the first place. Let me shed some light on the options:

IaaS

When renting virtual machines from a book seller, a complete box from a classic hosting provider or a mix of bare metal and virtual boxes from IBM, the machine part is easy: can you copy the VM image over the network (SSH, HTTPS, SFTP) to a new location? When you have a bare metal box, that won't work (there isn't a VM after all), so you need a classic "move everything inside" strategy.
If you drank the Docker Kool-Aid, the task might just be broken down into manageable chunks, thanks to the containers. Be aware: Docker welds you to a choice of host operating systems (and Windows isn't currently on the host list).
There are secondary considerations too: how easy is it to switch the value-added services like DNS, CDN, management console etc. on/off or to another vendor?

PaaS

Here you need to look separately at the runtime and the services you use. Runtimes like Java, JavaScript, Python or PHP tend to be offered by almost all vendors; .NET and C# not so much. When your cloud platform vendor has embraced an open standard, it is most likely that you can deploy your application code elsewhere too, including back into your own data center or onto a bunch of rented IaaS devices.
It gets a little more complicated when you look at the services.
First look at persistence: is your data stored in a vendor-proprietary database? If yes, you probably can export it, but you will need to switch to a different database when switching cloud vendors. This means you need to alter your code and retest (but you do that with CI anyway, right?). So before you jump onto DocumentDB or DynamoDB (which run in a single vendor's PaaS only), you might want to check out MongoDB, CouchDB (and its commercial siblings Cloudant and Couchbase), Redis or OrientDB, which run in multiple vendor environments.
The same applies to SQL databases and blob stores. This is not a recommendation for a specific technology (SQL vs. NoSQL or Vendor A vs. Vendor B), but an aspect you must consider in your cloud strategy.
The next check point are the services you use. Here you have to distinguish between common services, that are offered by multiple cloud vendors: DNS, auto scaling, messaging (MQ and eMail) etc. and services specific to one vendor (like IBM's Watson).
Taking the stand "If a service isn't offered by multiple vendors, we won't use it" can help you avoid lock-in, but it will ensure that you stifle your innovation too. After all, you use a service not for the sake of the service, but to solve a business problem and to innovate.
The more sensible approach would be to check if you can limit your exposure to a vendor to that special services only, should you decide to move on. This gives you the breathing space to then look for alternatives. Adding a market watch to see how alternatives might evolve improves your hedging.
Services are the "Damned if you do, damned if you don't" area of PaaS. All vendors scramble to provide top performance and availability for the common platform, and distinction in the services on top of that.
After all, one big plus of the PaaS environment are the services that enable "composable businesses" - and save you the headache of coding them yourself. IMHO the best risk mitigation, and incidentally state of the art, is sound API management, a.k.a. microservices.
Once you are there, you will learn that a classic monolithic architecture isn't cloud native (those architectures survive inside of virtual machines) - but that's a story for another time.

SaaS

Here you deal with applications like IBM Connections Cloud S1, Google Apps for Work, Microsoft Office 365, Salesforce, SAP SaaS but also Slack, Basecamp, Github and gazillions more.
Some of them (e.g. eMail or documents) have open-standard or industry-dominating formats. Here you need to make sure you get the data out in that format. I like the way Google is approaching this task: they offer Google Takeout, which tries to stick to standard formats and offers all data, any time, for export.
Others have at least machine-readable formats like CSV, JSON or XML. The nice challenge: getting data out is only half the task. Is your new destination capable of taking it back in?

BaaS

In business process as a service (BaaS) the same considerations as in the SaaS environment come into play: can I export data in a machine-readable, preferably industry-standard format? E.g. you used a payroll service and want to bring it back in-house or move to a different service provider. You need to make sure your master data can be exported and that you have the reports for historical records. When covered in reports, you might get away without transactional data. Typical formats are: CSV, JSON, XML.

As you can see, not rocket science, but a lot to consider. For all options the same questions apply: do you have what it takes to move? Is there enough bandwidth (physical and mental) to pull it off? So don't get carried away with the wedding preparations and check your prenuptials.

Posted on 2015-04-12 04:15 | Comments (0) | categories: Cloud Computing

email Dashboard for the rest of us - Part 2


In Part 1 I introduced a potential set of Java interfaces for the dashboard. In this installment I'll have a look at how to extract this data from a mail database. There are several considerations to be taken into account:
  • The source needs to supply data only from a defined range of dates - I will use 14 days as an example
  • The type of entries needed are:
    • eMails
    • replies
    • Calendar entries
    • Followups I'm waiting for
    • Followups I need to action
  • Data needs to be available in detail and in summary (counts)
  • The people involved come as Notes addresses, groups and internet addresses; they all need to be dealt with
Since I have more than a hammer, I can split the data retrieval across different tooling. Dealing with names vs. groups is something best done with LDAP code or lookups into an address book, so I leave that to Java later on. Also, running a counter when reading individual entries works quite well in Java (see the sketch below).
Everything else, short of the icons for the people, can be supplied by a classic Notes view (your knowledge of formula language finally pays off).
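Such a counter is only a few lines of Java. A minimal sketch, assuming a (hypothetical) view whose first column carries the entry type:

import java.util.HashMap;
import java.util.Map;
import lotus.domino.Database;
import lotus.domino.NotesException;
import lotus.domino.View;
import lotus.domino.ViewEntry;
import lotus.domino.ViewNavigator;

public class DashboardCounter {

    // Tallies how many entries of each type (eMail, reply, meeting...) the view contains
    public static Map<String, Integer> countByType(Database mailDb) throws NotesException {
        Map<String, Integer> result = new HashMap<String, Integer>();
        View view = mailDb.getView("(DashboardSource)"); // hypothetical view name
        ViewNavigator nav = view.createViewNav();
        ViewEntry entry = nav.getFirst();
        while (entry != null) {
            String type = String.valueOf(entry.getColumnValues().get(0));
            Integer soFar = result.get(type);
            result.put(type, (soFar == null) ? 1 : soFar + 1);
            ViewEntry next = nav.getNext(entry);
            entry.recycle(); // Domino objects want explicit recycling
            entry = next;
        }
        return result;
    }
}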

Read more

Posted on 2015-04-12 10:43 | Comments (1) | categories: IBM Notes XPages

email Dashboard for the rest of us - Part 1


One of the cool new features of IBM Verse is the Collaboration Dashboard. Unfortunately not all of us can switch to Verse overnight, so I asked myself: can I have a dashboard in the trusted old Notes 9.0 client?
Building a dashboard requires 3 areas to be covered:
  1. What data to show
  2. Where does the data come from
  3. How should the data be visualised, including actionable options (that's the place where preferences between users will differ strongly)
For a collaboration dashboard I see 3 types of data: collaborators (who), summary data (e.g. number of unread eMails) and detail data (e.g. the next meeting). Eventually there could be a 4th type: collections of summary data (e.g. number of eMails by category). In a first iteration I would like to see:
  • Number of unread eMails
  • Number of meetings left today
  • Number of waiting for actions
  • Number of action items
  • List of top collaborators
  • List of today's upcoming meetings
  • List of top waiting for actions
  • List of top action items
I'm sure there will be more numbers and lists coming up when thinking about it, but that's a story for another time.
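To give the three data types some shape, a first cut of the interfaces might look like this (a sketch only - all names are illustrative, the actual interfaces are behind the "Read more" link):

// One file per interface in a real project - shown together for brevity
interface Collaborator {
    String getName();
    String getEmailAddress();
    String getPhotoUrl();
}

interface SummaryData {
    String getLabel(); // e.g. "Unread eMails"
    int getCount();
}

interface DetailData {
    String getTitle(); // e.g. the subject of the next meeting
    java.util.Date getWhen();
    String getNotesUrl(); // link to open the entry in the client
}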

Read more

Posted on 2015-04-11 07:15 | Comments (0) | categories: IBM Notes XPages

XPages XML Document DataSource - Take 2


For a recent project I revisited the idea of storing XML documents as MIME entries in Notes - while preserving some of the fields for use in views and the Notes client. Jesse suggested I should have a look at annotations. Turns out, it is easier than it sounds. To create an annotation that works at runtime, I need a one-liner only:
@Retention(RetentionPolicy.RUNTIME) public @interface ItemPathMappings { String[] value(); }
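Reading the annotation back at runtime is equally short. A minimal sketch (ApplicationXmlDocument is the implementation shown further down; the field|type|xpath format matches its mappings):

String[] mappings = ApplicationXmlDocument.class.getAnnotation(ItemPathMappings.class).value();
for (String mapping : mappings) {
    String[] parts = mapping.split("\\|"); // Notes item name | type | XPath expression
    System.out.println(parts[0] + " <- " + parts[2]);
}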
To further improve usefulness, I created a "BaseConfiguration" my classes will inherit from, which contains the common properties I want all my classes (and documents) to have. You might want to adjust it to your needs:
package com.notessensei.domino;
import java.io.Serializable;
import javax.xml.bind.annotation.XmlAccessType;
import javax.xml.bind.annotation.XmlAccessorType;
import javax.xml.bind.annotation.XmlAttribute;
import javax.xml.bind.annotation.XmlElement;
import javax.xml.bind.annotation.XmlRootElement;
/**
 * Common methods implemented by all classes to be Dominoserialized
 */
@XmlRootElement(name = "BaseConfiguration")
@XmlAccessorType(XmlAccessType.NONE)
public abstract class BaseConfiguration implements Serializable, Comparable<BaseConfiguration> {
    private static final long serialVersionUID = 1L;
    @XmlAttribute(name = "name")
    protected String          name;

    public int compareTo(BaseConfiguration bc) {
        return this.toString().compareTo(bc.toString());
    }
    public String getName() {
        return this.name;
    }
    public BaseConfiguration setName(String name) {
        this.name = name;
        return this;
    }
    @Override
    public String toString() {
        return Serializer.toJSONString(this);
    }
    public String toXml() {
        return Serializer.toXMLString(this);
    }
}

The next building block is my Serializer support with a couple of static methods that make dealing with XML and JSON easier.
package com.notessensei.domino;
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.OutputStream;
import java.io.PrintWriter;
import javax.xml.bind.JAXBContext;
import javax.xml.bind.JAXBException;
import javax.xml.bind.Marshaller;
import javax.xml.bind.Unmarshaller;
import javax.xml.parsers.DocumentBuilder;
import javax.xml.parsers.DocumentBuilderFactory;
import javax.xml.parsers.ParserConfigurationException;
import com.google.gson.Gson;
import com.google.gson.GsonBuilder;

/**
 * Helper class to serialize / deserialize from/to JSON and XML
 */
public class Serializer {

    public static String toJSONString(Object o) {
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        try {
            Serializer.saveJSON(o, out);
        } catch (IOException e) {
            return e.getMessage();
        }
        return out.toString();
    }

    public static String toXMLString(Object o) {
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        try {
            Serializer.saveXML(o, out);
        } catch (Exception e) {
            return e.getMessage();
        }
        return out.toString();
    }

    public static void saveJSON(Object o, OutputStream out) throws IOException {
        GsonBuilder gb = new GsonBuilder();
        gb.setPrettyPrinting();
        gb.disableHtmlEscaping();
        Gson gson = gb.create();
        PrintWriter writer = new PrintWriter(out);
        gson.toJson(o, writer);
        writer.flush();
        writer.close();
    }

    public static void saveXML(Object o, OutputStream out) throws Exception {
        JAXBContext context = JAXBContext.newInstance(o.getClass());
        Marshaller m = context.createMarshaller();
        m.setProperty(Marshaller.JAXB_FORMATTED_OUTPUT, Boolean.TRUE);
        m.marshal(o, out);
    }

    public static org.w3c.dom.Document getDocument(Object source) throws ParserConfigurationException, JAXBException {
        DocumentBuilderFactory dbf = DocumentBuilderFactory.newInstance();
        DocumentBuilder db = dbf.newDocumentBuilder();
        org.w3c.dom.Document doc = db.newDocument();
        JAXBContext context = JAXBContext.newInstance(source.getClass());
        Marshaller m = context.createMarshaller();
        m.setProperty(Marshaller.JAXB_FORMATTED_OUTPUT, Boolean.TRUE);
        m.marshal(source, doc);
        return doc;
    }

    @SuppressWarnings("rawtypes")
    public static Object fromByte(byte[] source, Class targetClass) throws JAXBException {
        ByteArrayInputStream in = new ByteArrayInputStream(source);
        JAXBContext context = JAXBContext.newInstance(targetClass);
        Unmarshaller um = context.createUnmarshaller();
        return targetClass.cast(um.unmarshal(in));
    }
}

The key piece for the XML serialization/deserialization to work is the abstract class AbstractXmlDocument. That class contains the load and save methods that interact with Domino's MIME capabilities, as well as executing the XPath expressions to populate the Notes fields. The implementations of this abstract class carry annotations that combine the Notes field name, the type and the XPath expression. An implementation would look like this:
package com.notessensei.domino.xmldocument;
import javax.xml.bind.JAXBException;
import lotus.domino.Database;
import lotus.domino.Document;
import lotus.domino.NotesException;
import lotus.domino.Session;
import com.notessensei.domino.ApplicationConfiguration;
import com.notessensei.domino.Serializer;
import com.notessensei.domino.xmldocument.AbstractXmlDocument.ItemPathMappings;

// The ItemPathMappings are application specific!
@ItemPathMappings({ "Subject|Text|/Application/@name",
					"Description|Text|/Application/description",
					"Unid|Text|/Application/@unid",
					"Audience|Text|/Application/Audiences/Audience",
					"NumberOfViews|Number|count(/Application/Views/View)",
					"NumberOfForms|Number|count(/Application/Forms/Form)",
					"NumberOfColumns|Number|count(/Application/Views/View/columns/column)",
					"NumberOfFields|Number|count(/Application/Forms/Form/fields/field)",
					"NumberOfActions|Number|count(//action)" })
public class ApplicationXmlDocument extends AbstractXmlDocument {

    public ApplicationXmlDocument(String formName) {
        super(formName);
    }

    @SuppressWarnings("unchecked")
    @Override
    public ApplicationConfiguration load(Session session, Document d) {

        ApplicationConfiguration result = null;
        try {
            result = (ApplicationConfiguration) Serializer.fromByte(this.loadFromMime(session, d), ApplicationConfiguration.class);
        } catch (JAXBException e) {
            e.printStackTrace();
            return null; // nothing sensible to return when deserialization failed
        }
        try {
            result.setUnid(d.getUniversalID());
        } catch (NotesException e) {
            // No Action Taken
        }
        return result;
    }

    @SuppressWarnings("unchecked")
    @Override
    public ApplicationConfiguration load(Session session, Database db, String unid) {
        Document doc;
        try {
            doc = db.getDocumentByUNID(unid);
            if (doc != null) {
                ApplicationConfiguration result = this.load(session, doc);
                doc.recycle();
                return result;
            }

        } catch (NotesException e) {
            e.printStackTrace();
        }

        return null;
    }
}


Read more

Posted on 2015-03-05 04:53 | Comments (0) | categories: XPages

Develop local, deploy (cloud) global - Java and CouchDB


Leaving the cosy world of Domino Designer behind, venturing into IBM Bluemix, Java and Cloudant, I'm challenged with a new set of tasks to master. Spoiled by Notes, where Ctrl+O gives you instant access to any application, regardless of it being stored locally or on a server, I struggled a little with my usual practice of

develop local, deploy (Bluemix) global

The task at hand is to develop a Java Liberty based application that uses CouchDB/Cloudant as its NoSQL data store. I want to be able to develop/test the application while being completely offline and deploy it to Bluemix. I don't want any code to contain offline/online conditions, but rather use the configuration of the runtimes for it.
Luckily I have access to really smart developers (thx Sai), so I succeeded.
This is what I found out I needed to do. The list serves as a reference for myself and others living in a latency/bandwidth-challenged environment.
  1. Read: There are a number of articles around that contain bits and pieces of the information required. In no specific order:
  2. Install: This is a big jump forward. No more looking for older versions, but rather bleeding edge. Tools of the trade:
    • Git. When you are on Windows or Mac, try the nice GUI of SourceTree, and don't forget to learn git-flow (best explained here)
    • A current version of the Eclipse IDE (Luna at the time of writing, the Java edition suffices)
    • The Liberty profile beta. The beta is necessary, since it contains some of the features, notably couchdb, which are available in Bluemix by default. Use the option to drag the link onto your running Eclipse client
    • Maven - the Java way to resolve dependencies (guess where bower and npm got their ideas from)
    • CURL (that's my little command line ninja stuff, you can get away without it)
    • Apache CouchDB
  3. Configure: Java loves indirection, so there are a few moving parts as well (details below)
    • The Cloudant service in Bluemix
    • The JNDI name in the web.xml. Bluemix will discover the Cloudant service and create the matching entries in the server.xml automagically
    • A local profile for a server running the Liberty 9.0 profile
    • The configuration for the local CouchDB in the local server.xml (see the sketch below)
    • Replication between your local CouchDB instance and the Cloudant server database (if you want to keep the data in sync)
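A local server.xml entry could look roughly like this (a sketch based on the Liberty beta of the time - the JNDI name, library location and credentials are illustrative assumptions):

<featureManager>
    <feature>couchdb-1.0</feature>
</featureManager>

<library id="couchdb-lib">
    <!-- the Ektorp client jars and their dependencies -->
    <fileset dir="${server.config.dir}/lib" includes="*.jar"/>
</library>

<couchdb id="couchdb" jndiName="couchdb/mydb" libraryRef="couchdb-lib"
         url="http://localhost:5984" username="admin" password="secret"/>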
The flow of the data access looks like this
Develop local, deploy global

Read more

Posted by on 2015-03-03 12:14 | Comments (2) | categories: CouchDB Java Bluemix