OpenIO integration in Cyrus3

Bron Gondwana brong at
Thu Jun 4 08:53:28 EDT 2015

On Thu, Jun 4, 2015, at 08:22 PM, Jean-Francois SMIGIELSKI wrote:
> Hi!
> Yesterday I worked on integrating OpenIO as a Blob store for Cyrus. I
> temporarily pushed my code in
> (the repository is a fork for your reference, on github for simplicity
> purposes). Because I sometimes fix and upgrade OpenIO at the same time,
> that code currently requires my own fork of OpenIO at [1]

I've added Raymond on CC so he can take a look at this as well.  He'll
be working on Caringo and/or Ceph support, and his work will want to be
compatible with what you do.

Raymond, you missed out on a huge set of whiteboard work we did in Lille
a few weeks ago!  There are tons of different IO things happening in
Cyrus.  I'll see if I can find the photo :)

> This is just a first iteration, managing the download from the blob
> store (in "mailbox_map_record") and the upload (in "mailbox_archive").
> Sorry, this is really a work in progress, not really clean (hardcoded
> configuration, etc).

Great!  It's good to have progress :)  That's definitely the right
way to start.

> At this point, I raised a few questions. I haven't investigated them
> yet, but I will in further iterations. Anyway, any useful information
> will be appreciated :)
> * What is the preferred way to manage the configuration for such a
>   blob store module? I typically need to provide a "namespace name",
>   and maybe timeouts, etc. At present, this is hardcoded according to
>   my test environment.

Mostly this is done with prefixed config options, for example sasl_*
options get passed to the sasl engine.  You would create openio_timeout,
openio_namespace, etc in lib/imapoptions, and then in your code:

    struct openio_context *context = ...;
    const char *ns = config_getstring(IMAPOPT_OPENIO_NAMESPACE);
    openio_set_namespace(context, ns);

or something like this (I haven't read your API at all, so I'm
making stuff up).
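For the record, a lib/imapoptions entry looks roughly like this -- the
openio_* names and the default values are made up here, following the
suggestion above:

```
{ "openio_namespace", NULL, STRING }
/* The OpenIO namespace the blob store connects to. */

{ "openio_timeout", 30, INT }
/* Timeout, in seconds, for OpenIO blob operations. */
```

The build generates IMAPOPT_OPENIO_NAMESPACE and IMAPOPT_OPENIO_TIMEOUT
from that file, which you read with config_getstring() and
config_getint() from lib/libconfig.h.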

> * What is the preferred way to keep a structure in cache? For each
>   operation, I need a structure representing the OpenIO client. This
>   structure has an internal cache (that takes time to load but greatly
>   helps later). At present, I create a new client for each operation.

Does that need to hold a socket open?  One option would be to run a
cyr_openiod or something: clients would talk to it via a socket, and
the daemon would stay open handling requests to upload and download
messages.

Otherwise you can just create a global variable and store it there.  See
things like open_davdbs in the DAV code.  We use refcounted global
database lists quite a lot.

> * I currently do poor error management, and later I will maybe need
>   some tips about this (e.g. what is the best behavior when the file
>   is missing from the blob store as well?).

Return IMAP_IOERROR I guess :)
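Something like this, as a self-contained sketch -- the real
IMAP_IOERROR comes from the generated imap/imap_err.h (the value here
is a stand-in), and the openio_* names are made up for illustration:

```c
#include <stdio.h>
#include <string.h>

/* Stand-in for the generated error code in imap/imap_err.h */
#define IMAP_IOERROR (-1)

/* Hypothetical blob lookup: pretend exactly one GUID exists */
static int openio_blob_fetch(const char *guid, char *buf, size_t len)
{
    if (strcmp(guid, "deadbeef")) return -1;
    snprintf(buf, len, "message body");
    return 0;
}

/* The mailbox_map_record side would log the failure and hand the
 * generic IO error back up to the caller. */
int openio_map_record(const char *guid, char *buf, size_t len)
{
    if (openio_blob_fetch(guid, buf, len)) {
        fprintf(stderr, "IOERROR: blob %s missing from OpenIO store\n",
                guid);
        return IMAP_IOERROR;
    }
    return 0;
}
```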

> Last but not least, I currently have a problem, and I cannot run a
> single test successfully. When used in cyrus, the client library
> (from OpenIO) does not behave the same as when it is used in another
> standalone application.
> E.g. When the OIO client receives and parses a reply for an internal
>      RPC, the reply contains unexpected fields. This is a clue for bad
>      memory management, and my best lead for the moment. I also
>      experience trouble when trying to debug this. Is there some
>      function overloading by cyrus? (e.g. syslog, fprintf, etc.)

There really shouldn't be.

I'll see if I can get it running on my machine.  Probably not tonight
(nearly 11pm for me now), but soon.


Bron Gondwana
brong at



More information about the Cyrus-devel mailing list