performance tradeoffs
j.hudecek at nki.nl
Thu Dec 8 08:12:56 EST 2016
Hi,
I'm struggling with the performance of OpenSlide when dynamically generating tiles for OpenSeadragon from images stored on a network drive.
Some tiles load in 50 ms (usually the bottom level), but some take 2 s or more. Some images have only two levels, so OpenSlide needs to read huge pieces of the image (I've seen up to 16 times the size of the resulting tile) to satisfy my Deep Zoom request. When OpenSeadragon fires 20 requests for tiles on one level, each taking 2 s, and there are only ~6 connections to the server, clients have to wait quite a while to get a sharp image.
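To illustrate with numbers, here's roughly how I estimate that amplification (a sketch using openslide-python; the path is a placeholder):

    import openslide
    from openslide.deepzoom import DeepZoomGenerator

    slide = openslide.OpenSlide("slide.svs")   # placeholder path
    dz = DeepZoomGenerator(slide, tile_size=254, overlap=1)

    def read_amplification(dz_level):
        # Deep Zoom level N is downsampled 2^(top - N) from full resolution
        downsample = 2 ** (dz.level_count - 1 - dz_level)
        best = slide.get_best_level_for_downsample(downsample)
        # Extra scaling OpenSlide does after decoding the stored level;
        # squared because the data read grows with area, not edge length
        extra = downsample / slide.level_downsamples[best]
        return extra ** 2

    for level in range(dz.level_count):
        print(level, read_amplification(level))

With only two stored levels, the Deep Zoom levels that fall between them are the ones that hit the 16x figure above.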
Are there any compilation flags or other options I could use that would improve performance in this scenario?
I attached Procmon while reading the files, and it seems that OpenSlide makes a lot of overlapping 4 KB reads that aren't aligned to 4 KB boundaries. Is it possible to force it to use a larger buffer, for example? I would prefer fewer, larger freads when the file sits on a network drive. Windows caching doesn't seem to handle this well: the same read_region takes <20 ms on a local SSD and 500 ms on a network drive (attached over 1 Gbps).
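The workaround I'm considering is staging the file onto local disk first, since the same reads are fast there (a sketch; the cache directory and eviction policy are placeholders, and it assumes single-file formats rather than e.g. MRXS):

    import os
    import shutil

    import openslide

    CACHE_DIR = r"C:\slidecache"   # placeholder

    def open_cached(network_path):
        # Copy once with a single large sequential read, then serve
        # all read_region calls from the local copy
        local_path = os.path.join(CACHE_DIR, os.path.basename(network_path))
        if not os.path.exists(local_path):
            os.makedirs(CACHE_DIR, exist_ok=True)
            shutil.copy(network_path, local_path)
        return openslide.OpenSlide(local_path)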
It also seems that the tile size (for the Deep Zoom) makes a big difference. Would it make sense to tweak it based on the underlying image and its tile size (if it's a pyramidal image)?
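For instance, something like this (a sketch; I'm not sure every format exposes the openslide.level[0].tile-width property, so it falls back to the default):

    import openslide
    from openslide.deepzoom import DeepZoomGenerator

    slide = openslide.OpenSlide("slide.svs")   # placeholder path

    # If the native tile size is known, size the Deep Zoom tiles so
    # that tile_size + 2*overlap matches one stored tile
    native = slide.properties.get("openslide.level[0].tile-width")
    tile_size = int(native) - 2 if native else 254
    dz = DeepZoomGenerator(slide, tile_size=tile_size, overlap=1)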
Or should I just precompute some levels of the pyramid so that I'm never too "far" in level from the actual stored bits? Or disallow the poorly performing levels on the client side?
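Pre-rendering just the slow levels would look something like this (a sketch; the output path and the choice of levels are placeholders):

    import os

    import openslide
    from openslide.deepzoom import DeepZoomGenerator

    def prerender_level(slide_path, dz_level, out_dir):
        # Generate every tile of one expensive Deep Zoom level offline,
        # so the live server only has to hand out the cached JPEGs
        dz = DeepZoomGenerator(openslide.OpenSlide(slide_path))
        cols, rows = dz.level_tiles[dz_level]
        os.makedirs(out_dir, exist_ok=True)
        for col in range(cols):
            for row in range(rows):
                tile = dz.get_tile(dz_level, (col, row))
                name = "{}_{}.jpeg".format(col, row)
                tile.save(os.path.join(out_dir, name), quality=80)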
What optimizations do people use?
Regards,
Jan Hudecek