<html>
  <head>
    <meta http-equiv="Content-Type" content="text/html; charset=utf-8">
  </head>
  <body text="#000000" bgcolor="#FFFFFF">
    Bron,<br>
    Thanks for the response.  Your solution has pointed us towards the
    proper approach.  Since we are using ZFS, we would use ZFS
    send/receive replication rather than "mv" to move the filesystems.
    Our typical approach for this sort of thing, then, would be:<br>
    <ul>
      <li>Create a filesystem snapshot:
        <ul>
          <li>zfs snapshot $ropt ${SOURCE_FS}@$newsnap</li>
        </ul>
      </li>
      <li>Perform a ZFS send/receive, something like this:
        <ul>
          <li>zfs send -p $SENDVERBOSE $cloneorigin@$sourcestartsnap | zfs
            recv -u $RECVVERBOSE -F "$destfs"</li>
        </ul>
      </li>
      <li>Then, once the initial replication has completed, quiesce the
        filesystem and repeat the send/receive incrementally to catch up
        to the current state</li>
      <li>Finally, replace the existing filesystem with the new replica
        and discard the original</li>
    </ul>
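In script form, the steps above might look something like this (dataset and snapshot names are placeholders, not our actual layout; the run() helper just collects the commands so the sketch can be dry-run):

```shell
#!/bin/sh
# Sketch of the two-pass replication outlined above.  Dataset names
# are placeholders; run() collects the commands instead of executing
# them -- change its body to `sh -c "$*"` to run for real.
PLAN=""
run() { PLAN="$PLAN$*
"; }

SOURCE_FS="tank/spool"      # placeholder source dataset
DEST_FS="fast/spool"        # placeholder destination dataset
base="$SOURCE_FS@migrate-base"
final="$SOURCE_FS@migrate-final"

# Pass 1: full send while the filesystem is still live
run "zfs snapshot $base"
run "zfs send -p $base | zfs recv -u -F $DEST_FS"

# ... quiesce applications here ...

# Pass 2: incremental send to catch up with changes since pass 1
run "zfs snapshot $final"
run "zfs send -p -i $base $final | zfs recv -u -F $DEST_FS"

printf '%s' "$PLAN"
```
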
    To quiesce the filesystem, we would normally like to tell whatever
    applications are using it to temporarily freeze operations, so the
    underlying filesystem is in a consistent state.  When I was in
    Melbourne back in April, we (you, Ellie &amp; I) discussed what
    would be needed to introduce such a feature to Cyrus.  Has there
    been any further discussion or work on this since?  Should I
    open a feature request?<br>
    <br>
    It would be nice to be able to take consistent snapshots for
    filesystem operations like replication or backup; this is a common
    feature of large applications such as mail and database servers.<br>
    <br>
    Thanks again for shining a welcome light on how to achieve the goal
    of adding archive to an existing system.<br>
    <br>
    Cheers,<br>
        -nic<br>
    <br>
    <div class="moz-cite-prefix">On 11/04/2017 10:59 PM, Bron Gondwana
      wrote:<br>
    </div>
    <blockquote type="cite"
cite="mid:1509832770.219005.1161749752.2A8D72E1@webmail.messagingengine.com">
      <div style="font-family:Arial;">Hi Nic,<br>
      </div>
      <div style="font-family:Arial;"><br>
      </div>
      <div style="font-family:Arial;">Sorry I didn't get back to
        answering you on this the other day!<br>
      </div>
      <div style="font-family:Arial;"><br>
      </div>
      <div style="font-family:Arial;">So... this one is kinda tricky,
        because everything is going to be on "spool", but here's how I
        would do it.<br>
      </div>
      <div style="font-family:Arial;"><br>
      </div>
      <div style="font-family:Arial;">Before:<br>
      </div>
      <div style="font-family:Arial;"><br>
      </div>
      <div style="font-family:Arial;">/mnt/smalldisk/conf -> meta
        files only<br>
      </div>
      <div style="font-family:Arial;">/mnt/bigdisk/spool -> all email
        right now<br>
      </div>
      <div style="font-family:Arial;"><br>
      </div>
      <div style="font-family:Arial;">Stage 1: splitting:<br>
      </div>
      <div style="font-family:Arial;"><br>
      </div>
      <div style="font-family:Arial;">/mnt/smalldisk/conf<br>
      </div>
      <div style="font-family:Arial;">/mnt/bigdisk/spool<br>
      </div>
      <div style="font-family:Arial;">/mnt/bigdisk/spool-archive ->
        empty<br>
      </div>
      <div style="font-family:Arial;"><br>
      </div>
      <div style="font-family:Arial;">And set archivepartition-default
        to /mnt/bigdisk/spool-archive<br>
      </div>
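In imapd.conf terms, stage 1 would look roughly like this (a sketch assuming Cyrus 3.0's archive options; the 7-day cutoff is just an example value, not a recommendation from this thread):

```conf
# Stage 1: spool stays on the big disk, archive goes beside it
partition-default: /mnt/bigdisk/spool
archivepartition-default: /mnt/bigdisk/spool-archive

# let cyr_expire move messages older than archive_days to the archive
archive_enabled: 1
archive_days: 7
```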
      <div style="font-family:Arial;"><br>
      </div>
      <div style="font-family:Arial;">Now you need to run an initial
        cyr_expire.  This will take a long time, but it should be able
        to use hardlinks to move the files - it's using cyrus_copyfile.<br>
      </div>
      <div style="font-family:Arial;"><br>
      </div>
      <div style="font-family:Arial;">Once cyr_expire has finished and
        most of your email is moved into spool-archive, shut down cyrus.<br>
      </div>
      <div style="font-family:Arial;"><br>
      </div>
      <div style="font-family:Arial;">mv /mnt/bigdisk/spool
        /mnt/smalldisk/spool<br>
      </div>
      <div style="font-family:Arial;"><br>
        And set partition-default to /mnt/smalldisk/spool</div>
      <div style="font-family:Arial;"><br>
      </div>
      <div style="font-family:Arial;">That way your downtime is only
        while the small remaining spool gets moved to the other disk.<br>
      </div>
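Put together, the sequence might be sketched like this (the cyr_expire -A duration flag and the service name are assumptions to check against your man pages and init setup; run() just collects the commands):

```shell
#!/bin/sh
# Sketch of the migration sequence above.  run() collects commands
# instead of executing them; the cyr_expire -A flag and the
# cyrus-imapd service name are assumptions, not verified values.
PLAN=""
run() { PLAN="$PLAN$*
"; }

# Initial archive pass (long-running, cyrus still up; run as the
# cyrus user so file ownership stays correct)
run "cyr_expire -A 7d"

# Downtime window: only the small remaining spool has to move
run "systemctl stop cyrus-imapd"
run "mv /mnt/bigdisk/spool /mnt/smalldisk/spool"
# ... now point partition-default at /mnt/smalldisk/spool in imapd.conf ...
run "systemctl start cyrus-imapd"

printf '%s' "$PLAN"
```
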
      <div style="font-family:Arial;"><br>
      </div>
      <div style="font-family:Arial;">Bron.<br>
      </div>
      <div><br>
      </div>
      <div><br>
      </div>
      <div>On Sun, 5 Nov 2017, at 02:58, Nic Bernstein wrote:<br>
      </div>
      <blockquote type="cite">
        <div>Thanks much to you  both for your comments and
          suggestions.  We had<br>
        </div>
        <div>already considered creating a temporary "staging" partition
          and<br>
        </div>
        <div>shuffling mailboxes around, as Michael discussed, but have
          the same<br>
        </div>
        <div>reservations about it.  Since we're dealing with nearly 6TB
          of data,<br>
        </div>
        <div>most of it old, this scheme would introduce considerable
          disruption to a<br>
        </div>
        <div>very active mail system.  We have a hard time getting a two
          hour<br>
        </div>
        <div>maintenance window, and this would take days!<br>
        </div>
        <div><br>
        </div>
        <div>Bron, other Fastmailers, any thoughts??<br>
        </div>
        <div>    -nic<br>
        </div>
        <div><br>
        </div>
        <div>On 11/03/2017 11:20 AM, Michael Menge wrote:<br>
        </div>
        <blockquote>
          <div>Hi,<br>
          </div>
          <div><br>
          </div>
          <div>Quoting Reinaldo Gil Lima de Carvalho &lt;<a
              href="mailto:reinaldoc@gmail.com" moz-do-not-send="true">reinaldoc@gmail.com</a>&gt;:<br>
          </div>
          <div><br>
          </div>
          <blockquote>
            <div>I think that singleinstancestore (message hard links)
              will not<br>
            </div>
            <div>survive when<br>
            </div>
            <div>moving from one partition to the other and storage
              total size will<br>
            </div>
            <div>increase<br>
            </div>
            <div>significantly.<br>
            </div>
            <div><br>
            </div>
          </blockquote>
          <div><br>
          </div>
          <div>Thanks for the hint.  This was not a problem during the
            migration to the<br>
          </div>
          <div>meta-data partition,<br>
          </div>
          <div>as the mails stayed on the same partition (as in
            file-system, not<br>
          </div>
          <div>cyrus-partition)<br>
          </div>
          <div>and only hardlinks were changed.<br>
          </div>
          <div><br>
          </div>
          <div>So, one more reason for another migration path.<br>
          </div>
          <div><br>
          </div>
          <div><br>
          </div>
          <blockquote>
            <div><br>
            </div>
            <div>2017-11-03 12:22 GMT-03:00 Michael Menge<br>
            </div>
            <div>&lt;<a href="mailto:michael.menge@zdv.uni-tuebingen.de"
                moz-do-not-send="true">michael.menge@zdv.uni-tuebingen.de</a>&gt;:<br>
            </div>
            <div><br>
            </div>
            <blockquote>
              <div>Hi Nic,<br>
              </div>
              <div><br>
              </div>
              <div>Quoting Nic Bernstein &lt;<a
                  href="mailto:nic@onlight.com" moz-do-not-send="true">nic@onlight.com</a>&gt;:<br>
              </div>
              <div><br>
              </div>
              <div>Friends,<br>
              </div>
              <blockquote>
                <div>I have a client with Cyrus 2.5.10 installed.  Last
                  year we migrated<br>
                </div>
                <div>their<br>
                </div>
                <div>old 2.3.18 system to 2.5.10, with an eye towards an
                  eventual move to<br>
                </div>
                <div>3.0.x.  Based on Bron's most excellent email of
                  last year,<br>
                </div>
                <div>([Subject: Cyrus<br>
                </div>
                <div>database and file usage data] from Cyrus Devel of 8
                  January 2016)<br>
                </div>
                <div>we used a<br>
                </div>
                <div>tiered layout for the storage:<br>
                </div>
                <div><br>
                </div>
                <div>The main categories are:<br>
                </div>
                <div><br>
                </div>
                <div> * Config directory (ssd) [/var/lib/imap]<br>
                </div>
                <div>     o sieve<br>
                </div>
                <div>     o seen<br>
                </div>
                <div>     o sub<br>
                </div>
                <div>     o quota<br>
                </div>
                <div>     o mailboxes.db<br>
                </div>
                <div>     o annotations.db<br>
                </div>
                <div> * Ephemeral [/var/run/cyrus -- in tmpfs]<br>
                </div>
                <div>     o tls_sessions.db<br>
                </div>
                <div>     o deliver.db<br>
                </div>
                <div>     o statuscache.db<br>
                </div>
                <div>     o proc (directory)<br>
                </div>
                <div>     o lock (directory)<br>
                </div>
                <div> * Mailbox data [typical 2.5.X usage]<br>
                </div>
                <div>     o Meta-data (ssd)<br>
                </div>
                <div>         + header<br>
                </div>
                <div>         + index<br>
                </div>
                <div>         + cache<br>
                </div>
                <div>         + expunge<br>
                </div>
                <div>         + squat (search index)<br>
                </div>
                <div>         + annotations<br>
                </div>
                <div>     o Spool data (disk: raidX)<br>
                </div>
                <div>         + messages (rfc822 blobs)<br>
                </div>
                <div><br>
                </div>
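The layout above corresponds roughly to an imapd.conf like this (a sketch only; the paths and the metapartition_files list are illustrative, not the actual configuration of this system):

```conf
# Meta-data on the ssd pool
metapartition-default: /mnt/ssd/meta
metapartition_files: header index cache expunge squat annotation

# Message blobs on the spinning raid pool
partition-default: /mnt/raid/spool
```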
                <div>We sized the Fast SSD pool (this is three-drive
                  mirrors on ZFS) to be<br>
                </div>
                <div>extra large, so it could eventually handle "Hot"
                  data, and left<br>
                </div>
                <div>about 300GB<br>
                </div>
                <div>free there.  Data, on spinning media, is currently
                  5.74TB with<br>
                </div>
                <div>4.8TB free<br>
                </div>
                <div>(RAID10).  Metadata is 35GB and /var/lib/imap is
                  8GB, all of which<br>
                </div>
                <div>is in<br>
                </div>
                <div>the Fast pool.<br>
                </div>
                <div><br>
                </div>
                <div>Now the client is ready to take the dive into v3.0,
                  and I'm trying to<br>
                </div>
                <div>figure out how to put "archive" operation in
                  effect.<br>
                </div>
                <div><br>
                </div>
                <div>I have read the documentation (hell, I wrote most
                  of it) and<br>
                </div>
                <div>understand<br>
                </div>
                <div>the settings, but what I cannot quite wrap my brain
                  around is this:<br>
                </div>
                <div>There<br>
                </div>
                <div>is already all of this data sitting in all of these
                  data partitions<br>
                </div>
                <div>(we use<br>
                </div>
                <div>a total of 34 separate partitions each for data
                  & metadata) so how<br>
                </div>
                <div>do I<br>
                </div>
                <div>make the transition to separate archive partitions,
                  since all that<br>
                </div>
                <div>data is<br>
                </div>
                <div>on the "slow" drives? Can I just reassign all of
                  the current data<br>
                </div>
                <div>partitions to archivedata partitions, define the
                  new set of "Hot" data<br>
                </div>
                <div>partitions on the Fast pool, and let 'er rip, or
                  what?<br>
                </div>
                <div><br>
                </div>
                <div>I promise, if you tell me, I'll write it up as real
                  documentation. :-)<br>
                </div>
                <div><br>
                </div>
                <div><br>
                </div>
              </blockquote>
              <div>We are interested in such a migration too.  Our
                fallback plan, if we<br>
              </div>
              <div>don't find a<br>
              </div>
              <div>better way to do it, is to use the same method as
                when we introduced the ssd<br>
              </div>
              <div>meta-data partition.<br>
              </div>
              <div><br>
              </div>
              <div><br>
              </div>
              <div>1. We created a new partition in our cyrus
                configuration,<br>
              </div>
              <div>2. we moved the accounts from one partition to the
                other, one by one,<br>
              </div>
              <div>3. (this will be new for the archive partition) run
                cyr_expire to move<br>
              </div>
              <div>the old mails back to the slow disks.<br>
              </div>
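Step 2 could be scripted along these lines (a hypothetical sketch: the user list, partition name, and mailbox naming are placeholders, and it assumes cyradm's rename command with a partition argument moves a mailbox without changing its name; run() only collects the commands):

```shell
#!/bin/sh
# Sketch of moving accounts one by one via cyradm rename.  User list
# and partition name are placeholders; run() collects the commands
# instead of executing them.
PLAN=""
run() { PLAN="$PLAN$*
"; }

NEWPART="fast"                # hypothetical target partition name
for user in alice bob; do     # placeholder user list
    # Renaming a mailbox to the same name with a partition argument
    # moves it onto that partition (repeat for sub-mailboxes as needed)
    run "echo 'rename user.$user user.$user $NEWPART' | cyradm -u admin localhost"
done

printf '%s' "$PLAN"
```
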
              <div><br>
              </div>
              <div>This method has two downsides:<br>
              </div>
              <div>1. we have to copy all mails to the fast storage and
                move the old mails<br>
              </div>
              <div>back to the slow storage, so we have to move most of
                the mails twice;<br>
              </div>
              <div>2. the paths of the old mails will change, so they
                will be stored again in<br>
              </div>
              <div>our file-based backup.<br>
              </div>
              <div><br>
              </div>
              <div>So a method without these downsides would be
                appreciated.<br>
              </div>
              <div><br>
              </div>
              <div>Regards<br>
              </div>
              <div><br>
              </div>
              <div>   Michael<br>
              </div>
              <div><br>
              </div>
              <div>--------------------------------------------------------------------------------<br>
              </div>
              <div>M.Menge                                Tel.: (49)
                7071/29-70316<br>
              </div>
              <div>Universität Tübingen                   Fax.: (49)
                7071/29-5912<br>
              </div>
              <div>Zentrum für Datenverarbeitung          mail:<br>
              </div>
              <div><a href="mailto:michael.menge@zdv.uni-tuebingen.de"
                  moz-do-not-send="true">michael.menge@zdv.uni-tuebingen.de</a><br>
              </div>
              <div>Wächterstraße 76<br>
              </div>
              <div>72074 Tübingen<br>
              </div>
              <div><br>
              </div>
              <div>----<br>
              </div>
              <div>Cyrus Home Page: <a href="http://www.cyrusimap.org/"
                  moz-do-not-send="true">http://www.cyrusimap.org/</a><br>
              </div>
              <div>List Archives/Info: <a
                  href="http://lists.andrew.cmu.edu/pipermail/info-cyrus/"
                  moz-do-not-send="true">http://lists.andrew.cmu.edu/pipermail/info-cyrus/</a><br>
              </div>
              <div>To Unsubscribe:<br>
              </div>
              <div><a
                  href="https://lists.andrew.cmu.edu/mailman/listinfo/info-cyrus"
                  moz-do-not-send="true">https://lists.andrew.cmu.edu/mailman/listinfo/info-cyrus</a><br>
              </div>
            </blockquote>
          </blockquote>
          <div><br>
          </div>
          <div><br>
          </div>
          <div><br>
          </div>
          <div><br>
          </div>
        </blockquote>
        <div><br>
        </div>
        <div>--<br>
        </div>
        <div>Nic Bernstein                             <a
            href="mailto:nic@onlight.com" moz-do-not-send="true">nic@onlight.com</a><br>
        </div>
        <div>Onlight Inc.                              <a class="moz-txt-link-abbreviated" href="http://www.onlight.com">www.onlight.com</a><br>
        </div>
        <div>6525 W Bluemound Rd., Ste 24              v. 414.272.4477<br>
        </div>
        <div>Milwaukee, Wisconsin  53213-4073          f. 414.290.0335<br>
        </div>
        <div><br>
        </div>
      </blockquote>
      <div style="font-family:Arial;"><br>
      </div>
      <div id="sig56629417">
        <div class="signature">--<br>
        </div>
        <div class="signature">  Bron Gondwana, CEO, FastMail Pty Ltd<br>
        </div>
        <div class="signature">  <a class="moz-txt-link-abbreviated" href="mailto:brong@fastmailteam.com">brong@fastmailteam.com</a><br>
        </div>
        <div class="signature"><br>
        </div>
      </div>
      <div style="font-family:Arial;"><br>
      </div>
    </blockquote>
    <br>
  </body>
</html>