
Building a high performance Domino Server


Domino can handle huge user populations. To do this successfully, all elements of a Domino server have to be considered carefully. Following the old insight "It is always the cable", you need to pay attention to the hardware layout. While you can perfectly well install a Domino server on a low-end laptop or in a VM image, that won't give you the peak performance you are looking for. You rather want something that looks like this:
[Figure: Server layout for a high performance Domino server]
Let us look at the details:
  • Disk layout
    • Operating system and applications: This is your first RAID 1 array. Since this data hardly changes and isn't that large, a small but fast-spinning drive will do. RAID 1 protects you against the failure of one drive and speeds up read operations. Some suggest having separate drives for applications and OS, but that might be overkill. You could consider separate partitions instead (easy on Linux/Unix).
    • View rebuild directory: There is a handy notes.ini variable, View_Rebuild_Dir. You can point it to a separate drive to store the temporary files created during index updates; the default is the system temp directory. This directory is a good candidate for a RAM disk or a solid state disk when your system is updating a lot of views all the time (see the notes.ini sketch after this list).
    • Domino data: Typically you have RAID 5/RAID 10 storage here to accommodate the large amount of data (users demand Google-sized mailboxes, and your applications don't shrink magically). More and more we see SAN systems used for Domino storage, which is OK. Just keep in mind: don't store the databases of different members of the same Domino cluster in the same SAN, since that defeats the idea of a share-nothing cluster. While we support the use of NAS, network latency and bandwidth are limiting factors. Archival servers run fine with NAS, but not your high performance primary production server.
      Update: Fixed the graphic to show RAID 10, since it shows much better performance than RAID 5.
    • Transaction logging: You have tried it: switched it on, expected great things, and it didn't perform. The flaw: for good transaction logging performance you need your own disk. Not just another partition, but your very own spindle (RAID 1), ideally with its own controller (see the notes.ini sketch after this list). It would be interesting to see how solid state disks work here.
    • Full text index: Since Domino 8.5.3 you can move the full text index to a different drive. This improves data throughput and reduces fragmentation on your data drive. Add FTBasePath=d:\full_text to the notes.ini and run updall -f. Your 100-user server won't notice; large environments will benefit.
  • Network layout
    • Cluster replication (only if you cluster your servers): You want your cluster traffic on its own network segment. If you have two boxes next to each other, a cross-over cable would do (AFAIK 1 Gbit Ethernet requires a hub). If you go three-way (highly recommended), then a hub and an IP address segment that doesn't get routed will do (see the port sketch after this list).
    • Server network: All servers should be connected to the server backbone. Put them into their own subnet that clients can't see, so replication never gets disrupted by clients jamming the network ports. The server network also handles mail routing.
    • Client access: If you have huge numbers of clients, you might reach the physical capacity of your network card or the TCP/IP stack. Use more than one card and/or more than one IP address to have sufficient ports available for clients to connect.
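To make the disk separation concrete, here is a minimal notes.ini sketch; the drive letters are assumptions for illustration. Transaction logging is normally switched on in the server document, which then writes the TRANSLOG_* entries shown here:

    View_Rebuild_Dir=V:\view_rebuild
    FTBasePath=F:\full_text
    TRANSLOG_Status=1
    TRANSLOG_Path=T:\translog
    TRANSLOG_Style=0

TRANSLOG_Style=0 selects circular logging. After adding FTBasePath, run updall -f on the server console so the full text indexes get rebuilt in the new location. On Linux, the view rebuild directory is a natural candidate for a tmpfs RAM disk, e.g. mount -t tmpfs -o size=2g tmpfs /notesdata/view_rebuild (size and path are assumptions).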
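For the dedicated cluster segment, this is roughly how the notes.ini port setup could look, assuming a second network card bound to a non-routed address (port names and addresses are made up; the server document's Ports tab needs matching entries):

    PORTS=TCPIP,CLUSTER
    TCPIP=TCP,0,15,0
    CLUSTER=TCP,0,15,0
    TCPIP_TcpIpAddress=0,192.168.1.10:1352
    CLUSTER_TcpIpAddress=0,10.0.0.10:1352
    Server_Cluster_Default_Port=CLUSTER

Server_Cluster_Default_Port forces cluster replication traffic onto the CLUSTER port and keeps it off the client-facing network.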
Of course none of this is new (except the shiny picture); you can read many more details on IBM's Domino Performance Best Practices pages. This is just about the hardware layout. You need to consider the operating system too, but that's a story for another time. As usual, YMMV.
Update: There is now additional material available on how to tune an IBM System x server to peak performance. Update 2: Samir points to a nice comparison between RAID 5 and RAID 10. It's not Domino related, but insightful. One key point there: watch your controller.
Update 3: Added the separate drive for the full-text index.

Posted by Stephan H. Wissel on 21 April 2009 | Comments (10) | categories: IBM Notes Lotus Notes Show-N-Tell Thursday

Comments

  1. posted by Darren Duke on Tuesday 21 April 2009 AD:
    Shame most standalone servers only allow a max of 8 drives ;)

    I guess you really do need a SAN for a high performance setup.
  2. posted by Selcuk A on Tuesday 21 April 2009 AD:
    FYI: It is possible to build a cross over cable for gigabit ports: { Link }
  3. posted by Jim Casale on Tuesday 21 April 2009 AD:
    @Darren High performance SAN is only possible if the SAN people give you what you ask for. This is the second position where I have found that they give you what they want to give you, not what you ask for.
    In the end it is just under-performing storage.
  4. posted by Stephan H. Wissel on Tuesday 21 April 2009 AD:
    @Selcuk: Thx for the link, learned something

    @Darren: put the RAID5/10 in an external casing just below your Domino.

    @Jim: Jep. Known problem.
  5. posted by Ulrich Krause on Tuesday 21 April 2009 AD:
    Don't forget your DAOS repository!!
  6. posted by Fred Janssen on Wednesday 22 April 2009 AD:
    This is exactly what I recommend to my customers. Since 8.5 I also add a separate DAOS RAID 5/SAN.

    It's just that some customers do not want to invest in these large servers anymore, but use a SAN for all storage. This is where I warn them to get the right SAN configuration, or else...
  7. posted by Richard Schwartz on Wednesday 22 April 2009 AD:
    My recommendations have been the same for years now. Direct-attached external cabinet rather than SAN, and the fastest spin rate available for the drives. Multiple controllers if possible, too. And these days I'm recommending either RAID-10 for the data array, or RAID-6 -- with RAID-10 for the best performance or RAID-6 for the best combination of performance and reliability.

    I'm also interested in the idea of solid state drives for Domino servers, and given their performance advantages I'm wondering if it would be OK to combine the transaction log and the view rebuild directory on one large solid state device. There's no head movement to worry about, so I suspect that mixing the sequential I/O of the translog and the random I/O of the view rebuilding would be fine.
  8. posted by Paul Mooney on Wednesday 22 April 2009 AD:
    I find myself very cynical about SANs these days. They are hard to justify, especially for Domino, considering the active-active clustering model.

    Aside from that, all is great there sir
  9. posted by Ray on Wednesday 22 April 2009 AD:
    Hi Stephan, good post. I would add two things: multiple Domino partitions, especially now with 64-bit Windows support, and DIR links to move some lesser-used databases to cheaper and slower storage like a single big SATA drive. Did you know you can use DIR links for the mail boxes as well? All you do is create a text file called mail1/2/3/4/etc.box and inside the file point to the actual location.
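    A minimal sketch of the link file Ray describes (the path is an assumption): a plain text file named mail1.box in the Domino data directory, whose single line points to the actual file on the slower storage:

        E:\slow\mail\mail1.box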
  10. posted by Edwin Kanis on Thursday 23 April 2009 AD:
    Does anyone have performance experience with Solid State Drives already?
  11. posted by David Brown on Tuesday 11 May 2010 AD:
    Stephan, thanks for taking the time to create this example. I am intrigued by your suggestion to use a RAM disk for view rebuilds. I found some (relatively old) but still exciting info here about the potential benefits of doing so: { Link }

    However, I'm reminded of the difficulty in estimating just how large the drive should be; for instance, the method described here seems to indicate it is a partially proprietary calculation:
    { Link }

    How would one go about ensuring there is enough RAM available to make something like this feasible (I'm particularly thinking about Domino on Windows and Linux)?


  12. posted by Andrew Luder on Saturday 15 October 2011 AD:
    Hi Stephan,

    have you got an IBM URL reference to this new 8.5.3 feature? A customer I'm working for would like to see that it's officially supported before implementing.

    Assistance appreciated

    Andrew

  13. posted by Lars Berntrop-Bos on Wednesday 19 October 2011 AD:
    System i people keep saying this is not needed for system i. I would like to hear the opinion of the esteemed experts gathered here!
  14. posted by Stephan H. Wissel on Tuesday 01 November 2011 AD:
    @Andrew: It is in the official announcement: { Link }

    @Lars: The System i storage model is different from the rest, but you can still distribute load over different physical storage paths. Of course the 128-bit addresses allow you to push a lot of data at a time. I would still bet that a performance-troubled System i can be tuned by carefully configuring storage groups.
    stw
  15. posted by Stephan H. Wissel on Monday 09 February 2015 AD:
    Al: Separating out the view indexes is only available in the upcoming R9.0.2 release. For now you can separate the FTIndex and the transaction log.
    Make sure you have enough space on the disk; you can see your existing databases, so you can estimate. In the Admin client you can see the view sizes per app/nsf. And the same holds true for SSDs: separation of I/O channels is key.
  16. posted by Al on Monday 09 February 2015 AD:
    Hi Stephan,

    Great post. I am having some performance issues and am looking at using an SSD. From your article, I am leaning towards just putting the view indexes on an SSD. How do I know how large a drive to use?

    Also, if I am looking for the best 'bang for buck', would you recommend putting just the view indexes, or the indexes, transaction logs and databases on the same SSD? I typically have the transaction logs on another drive, but will probably not get approval for a separate SSD for each.

    Appreciate your thoughts.

    Al