The Cranach Digital Archive



Guest Post by Gunnar Heydenreich & Jörg Stahlmann, Cranach Digital Archive


The Cranach Digital Archive (cda) is an interdisciplinary collaborative research resource, providing access to art historical, technical and conservation information on paintings by Lucas Cranach (c.1472-1553), his sons and his workshop. The repository has been available online since January 2012 and currently provides information on more than 700 paintings including around 8000 images and documents from 92 contributing institutions, including major museums such as the Metropolitan Museum in New York, the National Gallery in London, the Getty Museum in Los Angeles, the Alte Pinakothek in Munich and many more. The archive contains over 400 infrared-reflectograms, 200 X-radiographs, numerous technical reports and a literature database with more than 2100 entries.

The project started in 2009 and is currently in its second phase (2012-2014). During this period, the Cranach Digital Archive aims to expand the existing network, to develop a shared infrastructure and to increase its content in order to build the foundations for an innovative, comprehensive and collaboratively produced repository of knowledge about Lucas Cranach and his workshop that will be significantly different from the traditional model of the single-author catalogue raisonné.

The Cranach Digital Archive is a joint initiative of the Stiftung Museum Kunstpalast in Düsseldorf and the Cologne Institute of Conservation Sciences / Cologne University of Applied Sciences in collaboration with nine founding partner institutions, 18 associate partners and many project contributors. The project is funded by the Andrew W. Mellon Foundation.


Detail of infrared-reflectogram showing fluid underdrawing on ‘Lamentation of Christ’ (1503), Alte Pinakothek, Munich.

For the Cranach Digital Archive we needed to create a publicly accessible web presence that would bring together all the collected information on each object and be capable of delivering high-resolution zoomable images. In collaboration with the Düsseldorf Art Archive (d:kult), we modified the existing structure in the collection management system TMS to store information from Word documents as well as thumbnail link-references to PDFs and images. The task was then to unite the various existing structures (TMS, a large amount of additional information in the form of static Word and PDF documents, and a large number of images in various formats and sizes) into a user-friendly web database.

The collection management software acts as a version control system, allowing us to work with and test the data provided by our project partners, or entered and generated by our team, before publishing it. For each update we export all data from TMS to XML, generating four documents (overall, conservation, literature and thesaurus data), each of which can be up to 50MB in size and contain thousands of entities and attributes. An application written in Java parses the documents and inserts or updates the data in our database.

For our infrastructure we decided to use a locally hosted server, because our data and images would generate an immense amount of traffic if uploading were carried out via the internet. Our server system is based on Ubuntu (long term support version) and the Apache web server, though we are considering switching to Zend Server for performance reasons. We also decided to work with a standard open source database and, of course, with IIPImage.

The IIPImage server system allows us to view, navigate and zoom our high resolution images in real-time. With more than 8000 images currently available, the IIPImage server combined with tiled multi-resolution TIFF images is the perfect tool for browsing the high resolution images in the digital archive comfortably and efficiently over the network. The images were batch converted with a shell script using the free image processing system VIPS. Compared to most image processing libraries, VIPS requires little memory and runs quickly and efficiently, especially on parallel multi-processor machines.
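
A minimal sketch of such a batch script, using the vips command line with source TIFFs in the current directory (file names and the JPEG quality setting are illustrative):

#!/bin/sh
# Turn every source TIFF into a tiled, multi-resolution (pyramidal)
# TIFF that iipsrv can serve directly.
for f in *.tif; do
    vips tiffsave "$f" "pyramid_$f" \
        --tile --pyramid \
        --tile-width 256 --tile-height 256 \
        --compression jpeg --Q 90
done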

For security reasons we are glad that IIPImage lets us place the source TIFF images outside of the web server's document root. The images are streamed and can only be viewed via the IIPImage viewer. We are looking at ways to optimize this, to enrich the resource and to implement new tools.
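
In iipsrv this is done via the FILESYSTEM_PREFIX environment variable; a minimal sketch with an illustrative path:

# Requests such as FIF=/painting.tif are resolved against
# /data/cda/images/painting.tif, a directory that the web server
# itself does not expose.
export FILESYSTEM_PREFIX="/data/cda/images/"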

by Gunnar Heydenreich & Jörg Stahlmann, Cranach Digital Archive

Open Access with IIPImage at Ghent University Library




The Book Tower, Ghent University Library

Guest Post by Nicolas Franck, Ghent University Library

Ghent University Library (Belgium) – better known as the “Book Tower” – started digitizing its collections back in 2004. It has a wide variety of material, ranging from books, manuscripts, papyri, posters and coins to drawings and old maps. From the beginning, the main purpose was to provide its audience – mainly researchers and students – with open access to its digital content online.

A website was created with a search interface and a commercial viewer (EREZ) to view the digital images online at http://adore.ugent.be.

But in 2010 we decided to create a new website with more possibilities, and to use open source software to view our images online. We needed software that was fast, reliable, open source and able to use the new possibilities of HTML5.

That is where IIPImage came in. The IIPMooviewer client was slightly adapted to our needs with, for example, the addition of a zoom level indicator.

Some of the resources consist of more than one image, such as the scans belonging to a book, so we created a carousel that wraps IIPMooviewer within an iframe. See this example showing the Liber Floridus, a rare 12th-century encyclopaedia from Flanders: http://adore.ugent.be/OpenURL/app?id=archive.ugent.be:018970A2-B1E8-11DF-A2E0-A70579F64438&type=carousel&scrollto=60:

Liber Floridus – book page carousel wrapping IIPMooviewer

The raw images are first converted to 8-bit sRGB using PerlMagick before conversion to JPEG2000 format with Kakadu, using the following parameters:

kdu_compress -i input.tif -o output.jp2 -quiet -rate 24 Clayers=8 Clevels=8 \
    Cprecincts="{256,256},{256,256},{128,128}" Corder=RPCL ORGgen_plt=yes \
    ORGtparts=R Cblk="{64,64}" Cuse_sop=yes -no_palette -num_threads 4
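
The preceding 8-bit sRGB conversion is scripted through PerlMagick; sketched with the plain ImageMagick command line instead, that step is roughly:

# Reduce to 8 bits per channel and convert to the sRGB colorspace
# (illustrative file names).
convert input.tif -depth 8 -colorspace sRGB input_srgb.tif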

Trajan’s Column

This results in a maximum of 8 resolution levels and 8 quality layers, with precincts of 256×256 pixels (128×128 at the lower resolution levels). The images are stored with lossy compression to reduce the file size, but with a high encoding rate of 24 bits per pixel.

The largest image is this 1774 etching of Trajan’s Column by the Italian artist Giovanni Battista Piranesi (1720-1778). The scan is 832 megapixels in size (15261×54519 pixels). The compressed JPEG2000 file itself is about 400MB in size, while the uncompressed image was 2GB in size.

Interactive viewer: http://adore.ugent.be/OpenURL/resolve?rft_id=archive.ugent.be:EBFE8CFC-AC6F-11E1-81A7-AD7FAAF23FF7:1&svc_id=zoomer&url_ver=Z39.88-2004

JPEG2000 file download: http://adore.ugent.be/OpenURL/resolve?rft_id=archive.ugent.be:EBFE8CFC-AC6F-11E1-81A7-AD7FAAF23FF7:1&svc_id=original&url_ver=Z39.88-2004

We use Red Hat Enterprise Linux Server release 6.2 with Nginx as the web server. The iipsrv process is started via an init-script using spawn-fcgi:

# iipsrv settings, passed as environment variables:
export VERBOSITY="1"              # logging level
export MAX_IMAGE_CACHE_SIZE="100" # in-memory image cache size, in MB
export FILENAME_PATTERN="_pyr_"   # pattern used for image sequences
export JPEG_QUALITY="100"         # quality for the JPEG tiles served
export MAX_CVT="300"              # maximum size in pixels for CVT (export) requests
spawn-fcgi -f /path/to/iipsrv.fcgi -U iipsrv -u iipsrv \
    -s /var/run/iipsrv.socket -n

As you can see, a Unix socket is created, and Nginx uses its fastcgi module to proxy all requests to /iip to this socket. Although the IIPImage server has its own methods of caching its results, we decided to let Nginx cache the results as well, to lower the number of requests to the backend server. The IIPImage server can be reached through this proxy at http://adore.ugent.be/iip.
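
The relevant part of the Nginx configuration looks roughly like this (the cache zone name, paths and cache lifetime are illustrative):

# In the http block: a cache zone for IIPImage responses.
fastcgi_cache_path /var/cache/nginx/iip levels=1:2 keys_zone=iip:10m;

server {
    location /iip {
        # Hand every /iip request to the iipsrv FastCGI socket.
        fastcgi_pass  unix:/var/run/iipsrv.socket;
        include       fastcgi_params;

        # Cache tile responses in Nginx to spare the backend.
        fastcgi_cache        iip;
        fastcgi_cache_key    $request_uri;
        fastcgi_cache_valid  200 24h;
    }
}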

by Nicolas Franck, Ghent University Library

Virtual Nanoscopy: Ultra-high resolution electron microscopy with IIPImage

Guest Post by Frank Faas & Raimond Ravelli, Department of Molecular Cell Biology, Leiden University Medical Center

The Leiden University Medical Center in The Netherlands conducts a wide range of scientific research, from fundamental medical research to applied clinical research. Our group in the Department of Molecular Cell Biology recently published a paper entitled Virtual nanoscopy: Generation of ultra-large high resolution electron microscopy maps in the Journal of Cell Biology, describing a pipeline to acquire, combine and visualize electron microscopy images as panoramas of virtually unlimited size. The technique, named Virtual Nanoscopy, can reveal structures ranging from large macromolecules, organelles, cells and tissue up to entire animal cross-sections.

Virtual slide of a sagittal section of a zebrafish, imaged at 1.6 nm/pixel resolution over an area of 1.5 × 0.6 mm²

As a proof of principle we have put online a 281 gigapixel ultrastructural map of a five-day-old zebrafish embryo, a commonly used vertebrate model organism. The virtual slide was recorded at 120 kV with a magnification at the detector plane of 9460×. A total of 26,434 unbinned 4k × 4k images were collected with an FEI Eagle CCD camera (>8 s full-frame readout time) over 4.5 days. The sample was maintained at −1 µm defocus throughout the whole data collection. The resulting slide, 1,461 × 604 µm² in size, consists of 921,600 × 380,928 pixels, each 1.6 nm square.

The image shows the cartilage in slate blue, the eye in sienna, the brain in forest green, the muscles in salmon pink, the liver in Indian red, the intestine in dark khaki, the pancreas in plum, the pronephric duct in yellow, the olfactory pit in lime green and the yolk in turquoise. Other examples can be found in the data section of our website.

Virtual nanoscopy has changed the way electron microscopy is carried out in our laboratory. Instead of collecting just a snapshot of part of a cell, we now routinely collect entire cross-sections, which provides much better context for the observed phenomena. Afterwards the data can be browsed as if sitting behind a microscope at nanometer scale, hence the name virtual nanoscopy.

To visualise these images on the web we use the IIPImage server in combination with IIPMooviewer, both of which we adapted slightly to our needs. The images are stored as TIFF tiled image pyramids using the BigTIFF extensions available in libtiff 4.0 with a tile size of 256×256 pixels.
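
As a sketch, such a BigTIFF pyramid can be produced with, for example, the vips command line (our actual pipeline is described in the paper; file names and compression settings here are illustrative):

vips tiffsave slide.tif slide_pyr.tif --tile --pyramid --bigtiff \
    --tile-width 256 --tile-height 256 --compression jpeg --Q 90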

We can only recommend IIPImage and greatly appreciate the quick feedback from the developers. Many thanks to the developers of IIPImage for sharing their creation with the world!

by Frank Faas & Raimond Ravelli, Department of Molecular Cell Biology, Leiden University Medical Center

One Billion Stars and IIPImage

Guest Post by Mike Read, Wide-field Astronomy Unit, Institute for Astronomy, University of Edinburgh

The Wide-Field Astronomy Unit (WFAU), Institute for Astronomy, University of Edinburgh specializes in building and maintaining science archives from astronomical surveys. The bulk of the data we currently process comes from two infrared telescopes: VISTA in the southern hemisphere and UKIRT in the north.

The surveys carried out by these telescopes have produced around 100-200 TB of data. WFAU holds the pixel data (images) in flat files whilst the object catalogues, derived from the pixels, are held in MS SQL databases.

Recently, the number of stars in the catalogues from two adjoining surveys of our own Milky Way, the VISTA Variables in the Via Lactea (VVV) and the Galactic Plane Survey (GPS), exceeded one billion. We decided to mark this milestone by trying to produce a zoomable/pannable image that displays these one billion stars.

Image Processing

To produce the final image, several thousand individual images from each of three passbands had to be stitched together into a mosaic. This was performed using SWarp, astronomical resampling software that can handle arbitrarily large images. Unfortunately, as we store the original images in a compressed format that SWarp is currently unable to read, the individual images and their confidence images first had to be uncompressed. Astromatic’s STIFF was then used to combine the three separate passbands into an RGB colour pyramidal TIFF image.
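
Schematically, the two Astromatic steps look like this (file names, configuration files and passbands are illustrative):

# Resample and co-add the frames of one passband into a mosaic
# (run once per passband):
swarp frames_J/*.fits -c swarp.conf -IMAGEOUT_NAME mosaic_J.fits

# Combine the three passband mosaics into an RGB pyramidal TIFF:
stiff mosaic_K.fits mosaic_H.fits mosaic_J.fits -c stiff.conf \
    -OUTFILE_NAME stars.tif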

OK so it wasn’t that simple! In the end we had to split up the mosaic into five chunks to get it through the stitching. The resulting five TIFFs were bolted together using the VIPS image processing software.
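
With the current vips command line, that final side-by-side join could be sketched as (file names are illustrative):

vips arrayjoin "chunk1.tif chunk2.tif chunk3.tif chunk4.tif chunk5.tif" \
    mosaic.tif --across 5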

We now had a 1,267,500 × 120,000 pixel TIFF image weighing in at around 100GB, so over to IIPImage. We had opted for IIPImage over other technologies because it offered a lightweight, fast image server able to work from a single image (and other formats if we later go that route) and a JavaScript viewer that looked like it would be customizable for our needs, i.e. coordinate display and a goTo function.

Early testing with IIPImage and excellent support from Ruven had already shown that it would likely cope with such an image but how would it fare in practice?

Putting 150 Gigapixels Online

At the end of March we put out a press release at the National Astronomy Meeting. This was picked up by various news sites and blogs. For a short while it was the most read story on the BBC website. Over 300,000 unique visitors later, the image is still there for all to see.

The image is hosted on a 64-bit Linux box with an 8-core 2.6 GHz AMD Opteron processor and 32 GB RAM. Around 3TB of disk space was needed whilst the image was being created, though the final image is a mere 100GB (LZW compressed). Initial tests showed that a tile size of 128×128 resulted in very slow performance. In the end we went with 768×768, reasoning that this would reduce the number of requests and, given a typical browser window size, would let the user pan a bit without initiating new requests.
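
For reference, a pyramidal TIFF with those 768×768 tiles and LZW compression could be written with the vips command line like so (a sketch with illustrative file names, not our exact production command):

vips tiffsave mosaic.tif stars_pyr.tif --tile --pyramid --bigtiff \
    --tile-width 768 --tile-height 768 --compression lzw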

We used the latest IIPMooViewer version 2.0 code, to which we added the coordinate display, and we have also altered the viewer to adjust the contrast at different zoom levels.

At peak usage we did encounter a few slow-downs (which Memcached might help with) but generally it works very well and the response from fellow astronomers and the public has been fantastic.

Many thanks to Ruven and the IIPImage forum members for all their help!

by Mike Read, Institute for Astronomy, University of Edinburgh

From scan to delivery: Special Collections and IIPImage at Utrecht University Library


Guest Post by Edu Hackenitz, University Library Utrecht

The Utrecht University Library Special Collections contain many extensive collections of manuscripts, pre-1901 printed works, more recent rare and valuable printed works, maps and nautical charts. The library takes care of the acquisition, conservation, scanning, cataloging and availability of this material.

