Google PhotoScan for digitizing your old photos

With the Google PhotoScan app, photos from the past meet the scanner of the future.

In the past, digitizing old family photos required a desktop scanner and photo-editing software to crop the results and, where necessary, correct defects. If the photos were glued into an album, using a flatbed scanner was expensive or even impossible. The remaining option was to reproduce the old photos with a camera or a mobile phone. In those cases you also needed editing software to remove any reflections and to correct the perspective caused by the unavoidable tilt of the camera or phone. In all cases the procedure was cumbersome and time-consuming.

In late 2016, Google introduced its PhotoScan application for smartphones and tablets, making it possible to digitize all your old photos in record time. This smart app requires very little work: Google's algorithms automatically remove the defects of film photos and restore vivid colors, producing a clean digital version of each print without any glare.

The current version of PhotoScan is 1.5, released on December 14, 2017. The app is available in the Play Store for Android and in the App Store for iOS.

If the photos to be reproduced show no unwanted glare, the procedure is quick and simple: start the PhotoScan app on your phone (or tablet), disable the glare-removal feature and press the round button.

PhotoScan

Digitizing an old photo with PhotoScan is as easy as pie

The digitized photos are saved on your device during scanning. Cropping is automatic, with edge detection: the resulting photo is straight and rectangular, with perspective correction. In addition, a built-in tool with a magnifier lets you drag the edges and corners to adjust the automatic crop if needed. Smart rotation is also provided, so photos stay right side up regardless of their orientation during scanning.

The longest side of photos digitized with PhotoScan on an iPhone X is 4,096 pixels. Each photo is saved twice: the photo as seen on screen at the start of the scan, and the reconstructed photo. The file name of a digitized photo is generated at random as a sequence of four uppercase letters and four digits, for example RZMM2336.JPG or MQLB9372.JPG. A letter E is inserted in the middle of the file name for the reconstructed photos (RZMME2336.JPG and MQLBE9372.JPG).
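
The naming scheme above can be checked mechanically. As a small illustration (the classify helper is mine, not part of the app), a Python sketch that tells original scans from reconstructed ones by their file names:

```python
import re

# Patterns derived from the examples above: four uppercase letters, then
# four digits; reconstructed photos carry an extra "E" after the letters
# (e.g. RZMME2336.JPG).
ORIGINAL = re.compile(r"^[A-Z]{4}\d{4}\.JPG$")
RECONSTRUCTED = re.compile(r"^[A-Z]{4}E\d{4}\.JPG$")

def classify(name):
    """Classify a PhotoScan file name as original, reconstructed or unknown."""
    if RECONSTRUCTED.match(name):
        return "reconstructed"
    if ORIGINAL.match(name):
        return "original"
    return "unknown"
```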

The following images show the quality of old photos digitized with PhotoScan.

GoogleScan

Left: the photo in the album as displayed on the phone screen. Right: the result of the smart digitization, in high resolution, with automatic cropping, rotation and correction.

GoogleScan

Top: the photo in the album as displayed on the phone screen. Bottom: the result of the smart digitization, in high resolution, with automatic cropping and correction.

In the following images, the quality of photos digitized with PhotoScan can be compared with photos digitized on a standard desktop scanner.

scan comparison

The photos at the top were digitized with a desktop scanner. The photos at the bottom were digitized with PhotoScan on an iPhone X.

scan comparison

The photo on the left was digitized with a desktop scanner. The photo on the right was digitized with PhotoScan on an iPhone X.

If the photo to be reproduced shows glare from direct or indirect light, the procedure takes slightly longer but remains very simple, and the app tells you what to do. The steps are basic: place the photo in the frame, take the picture, then move the smartphone (without tilting it) to aim at the four dots located at the corners of the image. Thanks to the four additional shots taken, PhotoScan removes the glare typical of photos printed on glossy paper and produces a digitized photo of better quality.

PhotoScan wedding

To remove the glare from the old photo, move the device (without tilting it, if possible) so that the open circle in the middle passes over the four white circles in the corners of the photo. The four additional shots are taken automatically as the circles align.

The PhotoScan app makes it easy to back up the digitized photos to the Google Photos platform, where they can be stored safely, searched and organized. On this platform you can animate the photos, apply filters, make advanced edits and send links to friends and family members to share your favorite photos.

Digital Imaging

Last update: February 12, 2017

Digital imaging is the creation of digital images, for example of a physical scene, and includes the processing, compression, storage, printing, and display of such images. The information is converted by image sensors into digital signals that are processed by a computer. If the medium that conveys the information constituting the image is visible light, we speak of digital photography.

Photosensor Array

A digital camera or a scanner uses an array of photosensors (photosites) to record and store photons. Once the exposure finishes, the relative quantity of photons in each photosite is sorted into discrete intensity levels, whose precision is determined by the bit depth (0 – 255 for an 8-bit image).
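
As a toy illustration of that sorting step (the function name and the linear mapping are my own simplification, not a real sensor pipeline), here is how a photosite's relative fill level could be quantized into discrete intensity levels:

```python
def quantize(fill, bits=8):
    """Sort a photosite's relative fill level (0.0 = empty, 1.0 = saturated)
    into one of 2**bits discrete intensity levels."""
    levels = 2 ** bits
    return min(int(fill * levels), levels - 1)  # clamp saturation to the top level
```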

One photosensor per image pixel would only create grayscale pictures. To capture color images, distinct photosensors are necessary for each of the three primary colors (RGB). To separate the colors, a filter array is placed over each photosensor. The most common type of color filter array is called a Bayer Array, as shown in the figure below.

credit

image credit : www.cambridgeincolour.com

A Bayer Array consists of alternating rows of red-green and green-blue filters. Because the human eye is more sensitive to green light than both red and blue light, the Bayer Array contains twice as many green as red or blue sensors to approximate the human vision.

Dynamic Range

When speaking about dynamic range, we need to distinguish between recordable and displayable dynamic range. Let's start with the first one.

The dynamic range in digital imaging describes the ratio between the maximum (white) and minimum (black) measurable light intensities. The black level (few photons) is limited by noise; the white level (large number of photons) is limited by overflow (saturation). If an ideal photosensor registers one photon for black and holds a maximum of 1,000 photons for white, the dynamic range is 1,000:1. The most commonly used unit for measuring dynamic range in digital photography is the f-number (f-stop), which describes the total light range in powers of 2. A dynamic range of 1,000:1 is therefore equivalent to about 10 f-stops (2^10 = 1,024). In scanners the dynamic range is described in terms of density (D), measured in powers of 10: a dynamic range of 1,000:1 corresponds to a density of 3 (10^3 = 1,000). Because a scanner has full control over its light source, such a device can ensure that minimal photosensor overflow occurs.
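
The f-stop and density figures follow directly from base-2 and base-10 logarithms of the contrast ratio. A minimal sketch:

```python
import math

def f_stops(contrast_ratio):
    """Dynamic range in f-stops: the number of powers of 2 in the ratio."""
    return math.log2(contrast_ratio)

def density(contrast_ratio):
    """Scanner dynamic range as density D: the number of powers of 10."""
    return math.log10(contrast_ratio)
```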

The approximate dynamic range in f-stops for several devices (recordable for capture devices, displayable for output devices) is indicated below :

  • human eye : 24
  • scanner : 8 – 12
  • digital camera : 5 – 9
  • monitor display : 6 – 11
  • printed media : 2 – 7

ISO sensitivity

How much light is needed to saturate a medium is determined by its sensitivity. That was as true for glass plates as it was for film, and it is now true for digital photosensors. Sensitivity (film speed) is expressed in ISO. The normal ISO range is about 200 to 1,600, but it can sometimes go as low as 50 or as high as 204,800.

Image Bit Depth

Bit depth quantifies how many values are available to specify each image pixel. Even if a digital imaging system can capture a vast dynamic range, the light measurements must be translated into discrete numerical values by an analog-to-digital (A/D) converter. With 8 bits per color channel, the dynamic range cannot exceed 8 f-stops (density 2.4) if the numerical values are linearly spaced. With 16 bits per color channel, the theoretical dynamic range of an ideal linear system would be 16 f-stops (density 4.8). In practice the dynamic range of a linear system is much lower, even with 16 bits (typically about 12 f-stops). If, however, a nonlinear system is used to space and save the discrete numerical values, one could even conceive of recording an infinite (posterized) dynamic range with an image depth of only a few bits.

RAW image

At the end of the A/D conversion we have a raw digital image of W x H pixels, specified by consecutive discrete numerical values, each coded with N bits per color channel. Each camera manufacturer and each scanner software developer uses a proprietary format for raw digital images. A common format called Digital Negative (DNG) was defined by Adobe in 2004.

Image Histograms

Image histograms are great tools to evaluate the correct exposure of a captured digital image.

Each pixel in the raw image is specified by the primary colors red, green and blue (RGB). Each of these colors can have a brightness value ranging from 0 to X (X = 2^N − 1). A histogram results when the computer scans through the brightness values and counts how many pixels sit at each level from 0 through X. Low brightness values are called shadows, high values are highlights, and in-between values are midtones.
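
Counting brightness values is all a histogram does. A minimal sketch for a single 8-bit channel (the function name is mine, for illustration):

```python
def histogram(channel_values, bits=8):
    """Count how many pixels sit at each brightness level 0 .. 2**bits - 1."""
    counts = [0] * (2 ** bits)
    for value in channel_values:
        counts[value] += 1
    return counts

# A tiny example "image" for one 8-bit channel:
counts = histogram([0, 0, 128, 255, 255, 255])
```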

histogram

Histogram

A well-balanced histogram, where no shadows or highlights are clipped, is shown at left. The region where most of the brightness values lie is called the tonal range. When highlights are heaped at the right edge of the histogram, they are clipped (blown): some regions of the image have been overexposed, and the corresponding details can never be recovered.

When shadows are heaped at the left edge of the histogram, some regions of the image have been underexposed and the relevant dark details are also lost. The histogram of a raw digital image should therefore not show high values at the far left (shadows) or far right (highlights) of the chart. If clipping occurs, you see a tall vertical line at the far left or right side of the histogram.

Usually an image is underexposed if no channel of the histogram goes all the way to the right. In this case images that are too dark are easy to correct later: just drag the right slider in Photoshop's Levels command to the left, to meet the edge of the histogram.

The distribution of peaks in a histogram depends on the tonal range of the subject. Images where most of the tones occur in the shadows are called low key, whereas in high key images most of the tones are in the highlights. The histogram also describes the contrast, which is a measure of the difference in brightness between dark and light areas of an image. Broad histograms reflect significant contrast; narrow histograms reflect low contrast, resulting in flat (dull) images.

All histograms are normalized: they are intentionally scaled so that the top of the tallest peak always reaches full height. The scale is relative and shown percentage-wise.

There exist three types of image histograms :

  • Color Histograms
  • RGB Histograms
  • Luminosity (Luminance) Histograms

Each histogram type has its own use and its own shortcomings; all three should be used as a collective tool. The following figures show the different histograms for a scanned Kodachrome slide of a landscape.

Landscape

Scanned Kodachrome slide of a landscape

Photoshop histogram windows

Landscape photo histogram windows RGB, R, G, B, Luminosity and Colors in Photoshop

Color Histograms

A color histogram describes the brightness distribution for any of the three primary color channels R, G, B. This is helpful to assess whether or not individual colors have been clipped.

Sometimes color histograms are presented as color overlays (colors histogram).

RGB Histograms

An RGB histogram produces three independent histograms, one per color channel, and then adds them together, irrespective of whether the values came from the same pixel. RGB histograms thus discard the location of each pixel.

Luminosity (Luminance) Histograms

The terms luminosity and luminance are often used interchangeably, even though each describes a different aspect of light intensity. Technically the term luminosity is correct and I will use it in the following, even if luminance is more common. The luminosity histogram takes into account that our eyes are most sensitive to green: we see green as brighter than blue or red. Luminosity weighs this effect to indicate the actual perceived brightness of the image pixels, based on the NTSC television formula:

Luminosity = 0.30 × Red + 0.59 × Green + 0.11 × Blue
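
In code, the NTSC weighting is a one-liner (the tiny tolerance in the checks comes only from floating-point arithmetic):

```python
def luminosity(red, green, blue):
    """Perceived brightness of a pixel, per the NTSC weighting above."""
    return red * 0.3 + green * 0.59 + blue * 0.11
```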

Color Models

Until now we have used the terms primary colors (RGB), color channels, color and colors histograms, luminosity, luminance, light intensity and brightness, but we have never really dealt with colors.

The search for a comprehension of exactly what color is and how it functions has been going on for hundreds of years. Artists and philosophers have theorized that color is three-dimensional. Contemporary neuroscientists have confirmed this theory, finding that our sensation of color comes from nerve cells that send messages to the brain about:

  • The brightness of color
  • Greenness versus redness
  • Blueness versus yellowness

Numerous models and systems have been developed :

There are several ways to associate the converted discrete numerical values of the primary color channels R, G, B with colors. We can rely on the physics of light waves (visible spectrum), on the characteristics of inks, dyes, paints or pigments, or on the human eye and visual perception. In all cases we need a color model as a reference to process (adjust) the discrete numerical values.

Wikipedia defines colors and color models as follows :
“Color (American English) or colour (Commonwealth English) is the visual perceptual property corresponding in humans to the categories called red, blue, yellow, etc.”
“A color model is an abstract mathematical model describing the way colors can be represented as tuples of numbers, typically as three or four values or color components.”

The ICC defines color as :
“Color is the sensation produced in response to selective absorption of wavelengths from visible light. It possesses the attributes of Brightness, Colorfulness and Hue. White, grey and black are achromatic colors.”

My personal definition of color in digital imaging is the following :
“Color is the tone displayed when the numerical values of the three color channels are not all the same. This implies that black, white and all grey tones are not colors.”

This personal definition is consistent with everything said so far in this post. With an image bit depth of 8 bits, 256 x 256 x 256 = 16,777,216 different colors and grey tones can be specified (in theory).

The color models used today are the following :

  • RGB (ca 1860) : Additive Color Model ( Red + Green + Blue = white)
  • CMYK (ca 1906) : Subtractive Color Model (Cyan + Magenta + Yellow = brown; + K = Black)
  • LAB (1948) : developed by Richard S. Hunter
  • NCS (1964) : Natural Color System
  • HSV (1978) : Hue, Saturation and Value (Alvy Ray Smith)
  • HSL (1978) : Hue, Saturation, and Lightness (Alvy Ray Smith)
  • HWB (1996) : Hue, Whiteness, Blackness (Alvy Ray Smith)

Main color models : RGB, CMYK, Lab

Main color models : RGB, CMYK, Lab

The most common color model is RGB. The following figure shows the RGB cube with the 3D representation of all possible (in theory) colors and grey-tones, including black (R = G = B = 0) in the back lower corner and white (R = G = B = max) in the front upper corner.

RGB cubes

RGB cube

RGBA is the RGB color model with an additional alpha (opacity) channel. There is an open-ended set of RGB spaces; anyone can invent one by picking new primaries and a gamma value. Some color spaces are commercial and copyrighted, some are defined for special purposes, and some are obsolete.

Typically used in color printing, CMYK assumes that the background is white, and thus subtracts the assumed brightness of the white background using four colors: cyan, magenta, yellow, and black. Black is used because the combination of the three primary colors (CMY) does not produce a fully saturated black. Be aware, however, that some desktop printers have only an RGB interface. Some printers use special premixed inks called spot colors.
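
A naive sketch of this subtraction, including the usual black generation (pulling the common part of C, M and Y into K), assuming 8-bit RGB input. Real printer conversions go through ICC profiles and are far more involved:

```python
def rgb_to_cmyk(r, g, b):
    """Naive subtractive conversion: subtract each channel from white,
    then move the common grey component into the black (K) channel."""
    c, m, y = 1 - r / 255, 1 - g / 255, 1 - b / 255
    k = min(c, m, y)
    if k == 1:                       # pure black
        return 0.0, 0.0, 0.0, 1.0
    return ((c - k) / (1 - k), (m - k) / (1 - k), (y - k) / (1 - k), k)
```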

The Natural Color System (NCS) is a proprietary perceptual color model based on the color opponency hypothesis of color vision, first proposed by Ewald Hering. The current version of the NCS was developed by the Swedish Colour Centre Foundation.

HSV and HSL were developed in 1978 by Alvy Ray Smith, a pioneer in computer graphics and cofounder of the animation studio Pixar. They are used today in color pickers. The two representations rearrange the geometry of RGB in an attempt to be more intuitive and perceptually relevant than the Cartesian (cube) representation; the colors are represented in a cylindrical coordinate system. Twenty years later Alvy Ray Smith created HWB to address some of the issues with HSV and HSL. HWB came to prominence in 2014 following its use in the CSS Level 4 Color Module.

In contrast to color models, which define a coordinate space to describe colors, a Color Appearance Model (CAM) is a mathematical model that seeks to describe the perceptual aspects of human color vision.

Color Spaces

A color space is a specific implementation of a color model. Not all physical colors represented in the coordinate space (cube, cylinder) of a color model are visible to humans. For this reason the International Commission on Illumination (CIE) defined, in 1931, quantitative links between physical pure colors (wavelengths) in the visible electromagnetic spectrum and physiologically perceived colors in human color vision. These links are represented as 3D regions (solids) containing all producible colors, called the CIE 1931 color space. The CIE 1931 standard defines both the CIE 1931 RGB space, an RGB color space with monochromatic primaries, and the CIE 1931 XYZ color space, which works like an RGB color space except that its primaries are non-physical and cannot be said to be red, green, and blue. The CIE standards are based on a function called the standard (colorimetric) observer, representing an average human's chromatic response.

3D Color Space

Color Space : different views of the 3D solid representing visible colors

Visualizing color spaces in 3D is neither easy nor intuitive. For this reason color spaces are usually represented using 2D slices of their full 3D shape. Unless specified otherwise, a 2D chromaticity diagram shows the cross-section containing all colors at 50% luminosity (luminance). The next figure shows the CIE 1931 XYZ color space in two dimensions.

CIE 1931 XYZ color space

CIE 1931 XYZ color space at 50% luminosity (mid-tones)
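
The 2D chromaticity coordinates used in such diagrams are obtained by normalizing the tristimulus values, so that only the proportions of X, Y and Z remain. A minimal sketch, checked against a D50-like white point:

```python
def xy_chromaticity(X, Y, Z):
    """Project tristimulus values onto the 2D chromaticity diagram by
    normalizing, so only the proportions of X, Y and Z remain."""
    total = X + Y + Z            # assumes a non-black stimulus (total > 0)
    return X / total, Y / total
```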

The CIE defined additional color space standards for special purposes like TV, video and computer graphics. A list is shown below :

CIE color spaces : XYZ, Lab, Luv

CIE color spaces : CIEXYZ, CIELAB, CIELUV

Gamuts

It’s good to know that the CIE XYZ color space encompasses all color sensations that an average person can experience, but it’s more important to know the subsets of colors that a given digital device can handle and reproduce. Such a portion of the CIE XYZ color space is called a device color space or gamut. The term gamut was adopted from the field of music, where it means the set of pitches of which musical melodies are composed. The following figure shows typical gamuts for some digital devices.

Typical gamuts of digital devices

Typical gamuts of digital devices with 50% luminosity

Keep in mind that this representation only shows mid-tones with 50% luminosity. When colors are dark or light, we perceive less variation in their vividness. We see the maximum range of color saturation for middle-toned colors. This is why the 2D slices of color models are usually represented with 50% luminosity. If we are interested in the color gamuts for the shadows or highlights, we could look instead at a 2D slice of the color space at about 25% and 75% luminosity.

The following figure shows the gamuts of real devices, the iPad 2 and iPad 3.

Gamuts of iPad 2 and iPad 3

Gamuts of iPad 2 and iPad 3

Color Transformation

Color transformation (color space conversion) is the translation of the representation of a color from a source color space to a target (destination) color space.

Out of gamut

Out of gamut

A typical use case is printing a photo captured by a camera in the RGB color space on an inkjet printer in the CMYK space. The printer gamut differs from the camera gamut, so certain camera colors cannot be reproduced on the printer. Those colors are said to be out of gamut.

During the color transformation process, the RGB colors that are out of gamut must be converted to values within the CMYK gamut. This conversion is called gamut mapping. There are several reasonable strategies for performing gamut mapping; these are called rendering intents. Four particular strategies were defined by the International Color Consortium (ICC), with the following names:

  • Absolute Colorimetric
  • Relative Colorimetric
  • Perceptual
  • Saturation

If a complete gamut mapping is not possible, a gamut mismatch results and the best approximation is aimed for. An interactive Flash demo explaining color gamut mapping is available on the website of Stanford University.

In digital image editing programs (for example Adobe Photoshop), device-independent color spaces, called working spaces, are used as a reference for the device-dependent gamuts. Working color spaces are color spaces that are well suited to image editing tasks such as tonal or color adjustments. One of the most important working spaces, sRGB, is described below :

sRGB is a sort of common denominator and is used as the default on unmanaged computers. This color space is appropriate for uploading images to the web and for sending them to minilabs for printing when no custom space is specified. It has been endorsed by the W3C and by many industry leaders. sRGB is, however, not well suited as a working space because it has a narrow gamut.

Usually the input and output color spaces are smaller than the working color space.

Color temperatures

Color temperature is another characteristic of visible light that is important in digital imaging and photography. Color temperature is conventionally stated in the unit of absolute temperature, the kelvin, with unit symbol K. Color temperatures over 5,000 K are called cool colors, while lower color temperatures (2,700 – 3,000 K) are called warm colors.

The color temperature of sunlight above the atmosphere is about 5,900 K. Tungsten incandescent lamps formerly used in photography had a color temperature of 3,200 K. In 1931 the CIE introduced the concept of the standard illuminant, a theoretical source of visible light. Standard illuminants provide a basis for comparing images or colors recorded under different lighting. Each of them is defined by a letter or by a letter-number combination.

Fluorescent lighting adds a bluish cast to photos, whereas tungsten lights add a yellowish tinge. Humans generally don't notice these differences in temperature because our eyes adjust automatically. The process in digital systems that compensates for these color casts is called white balance. The goal is to correct the lighting so that white objects appear white in images. White balance can be done automatically or manually. Two standard white points are used in white balance : D50 and D65.

In digital imaging, it is important to know a monitor’s color temperature. Common monitor color temperatures, along with matching standard illuminants, are as follows:

  • 5,000 K (D50)
  • 5,500 K (D55)
  • 6,500 K (D65)
  • 7,500 K (D75)

The spectrum of a standard illuminant, like any other profile of light, can be converted into tristimulus values. The set of three tristimulus coordinates of an illuminant is called a white point and can equivalently be expressed as a pair of chromaticity coordinates.

Color Profiles

Information about device gamuts and illuminants is registered in ICC profiles. The ICC (International Color Consortium) was formed in 1993 by eight vendors in order to create an open, vendor-neutral color management system that would function transparently across all operating systems and software packages. Every device that captures or displays color can be profiled. A profile can be considered a description of a specific color space.

Profiles describe the color attributes of a particular device or viewing requirement by defining a mapping between the device source or target color space and a profile connection space (PCS, either CIEXYZ or CIELAB) serving as reference. There are two types of profiles :

  • matrix-based : mathematical formulas
  • table-based : large tables of sample points (LUT = look up table) to define the 3D color space

Mappings may be specified this way using tables, to which interpolation is applied, or through a series of parameters for transformations.
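
For the matrix-based case, the mapping is essentially a 3×3 linear transform of the (linearized) channel values into the profile connection space. A minimal sketch (the function is mine, for illustration; real profiles also apply the per-channel tone curves first):

```python
def matrix_to_pcs(rgb, matrix):
    """Apply a matrix-based profile: a 3x3 linear map from linearized
    device RGB to profile connection space coordinates (e.g. XYZ)."""
    return tuple(sum(coef * channel for coef, channel in zip(row, rgb))
                 for row in matrix)
```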

ICC profiles help you to get the correct color reproduction when you input images from a camera or scanner and display them on a monitor or print them.

Color

Color conversion with ICC profiles

An ICC profile must conform to the ICC specification. The latest profile version is 4.3.0.0, the corresponding specification ICC.1:2010-12 is technically identical to the ISO standard 15076-1:2010.

There are different device classes of profiles : input, output, display, link, abstract, colorspace, … ICC profiles may have the suffix .icc or .icm. Display profiles are commonly of the matrix/TRC type, with a 3×3 matrix of the colorant primaries' tristimulus values and a one-dimensional tone curve for each colorant. They can also be of the multi-dimensional look-up table (LUT) type, with a three-dimensional look-up table and a second one-dimensional tone curve. Some device-independent profiles are purely theoretical and describe a way to turn color into numbers; others are device-dependent and describe the color signature of a particular device.

A profile does not correct anything in the image. An original with a color cast (Farbstich) keeps the cast during color conversion. Image correction and cast removal are separate processes which need specific software.

ICC profiles can be embedded in digital images, for example in JPEG files. If the profile uses a standard color space like sRGB, a simple EXIF tag is sufficient to indicate it. If a custom (non-standard) color space is used, the complete profile data segment can be embedded. Photoshop's save and create dialog boxes feature check-boxes to embed ICC profiles.

A free Windows program to view the content of ICC profiles has been developed by Huan (Huanzhao) Zeng. The software is called ICC Profile Inspector; the current version, 2.4.0, was updated on February 22, 2009. The following figure shows a screenshot of the program displaying the header and tag table of the Blurb ICC profile for Blurb books.

ICC Inspector

ICC Profile Inspector showing the Blurb book ICC profile

The device class of the Blurb ICC profile is output, the color space is CMYK, the profile connection space is Lab, the rendering intent is relative colorimetric, and the illuminant has the values X = 0.9642, Y = 1.0, Z = 0.82491. AToBn (A2Bx) and BToAn (B2Ax) are gamut mapping tables used in printer profiles; A refers to the device, B to the profile connection space. A2B tags are used for proofing, B2A tags for printing.

Clicking a tag in the ICC Profile Inspector displays the corresponding content.

The next figure shows a screenshot of the program displaying the header and tag table of the display profile of my Sony Vaio laptop :

sony_srgb

ICC Profile Inspector showing the ICC profile of a laptop display

The device class of the Sony ICC profile is display, the color space is RGB, the profile connection space is XYZ, the rendering intent is perceptual, and the illuminant has the values X = 0.96419, Y = 1.0, Z = 0.82489. The tags rXYZ, gXYZ and bXYZ present the gamut for the three channels, the tag wtpt shows the white point, and the tags rTRC, gTRC and bTRC indicate the tone response curves for the three channels in 16-bit mode (see gamma encoding later in this post).

ICC

ICC color profile for Sony Vaio laptop display : gamut, white point, gamma

The Windows Color Management panel allows you to change settings for ICC profiles. Mac OS X has a built-in ICC profile inspector inside the umbrella application ColorSync Utility.

The OpenICC project was launched in 2004; its files are available at Sourceforge.

RAW to JPEG / TIFF conversion

To view our image, we must display it on a monitor or print it on paper. In both cases we need to process (edit) the image to cope with the limitations of the output medium and with another particularity of human vision: compared to a photosensor, our eyes are much more sensitive to changes in dark tones than to similar changes in bright tones.

A standard computer monitor can only display 8 bits per color channel. The common image file formats in this environment are compressed JPEG and uncompressed TIFF files. To convert our raw image into one of these standards, we need to apply several image adjustments, some of which are irreversible. Often these adjustments are done automatically inside the digital imaging system (camera, scanner), but it's also possible to do them manually with image editing software like Photoshop.

The steps to adjust the digital image are the following :

  • Demosaicing
  • Gamma encoding
  • White Balance
  • Tonal compensation
  • Color Compensation
  • Sharpening
  • Compression

Demosaicing

Let's come back to our photosensors, leaving aside their dynamic range, sensitivity and bit depth. To create a color image from the photons captured in the photosensors, a first process is Bayer demosaicing, which provides full color information at each image pixel. Different demosaicing algorithms are applied to improve image resolution or to reduce image noise. Small-scale details near the resolution limit of the digital sensor can produce visual artifacts; the most common artifact is moiré.
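
A toy demosaicing for an RGGB mosaic could look as follows (a deliberately simple sketch, not any manufacturer's actual algorithm): each missing color at a pixel is filled with the average of the nearest photosites that recorded it.

```python
def bayer_channel(row, col):
    """Which color an RGGB photosite records at (row, col)."""
    if row % 2 == 0:
        return "R" if col % 2 == 0 else "G"
    return "G" if col % 2 == 0 else "B"

def demosaic(mosaic):
    """Fill in the two missing colors at each pixel by averaging the
    photosites of that color in the surrounding 3x3 neighborhood."""
    h, w = len(mosaic), len(mosaic[0])

    def average(r, c, color):
        vals = [mosaic[r + dr][c + dc]
                for dr in (-1, 0, 1) for dc in (-1, 0, 1)
                if 0 <= r + dr < h and 0 <= c + dc < w
                and bayer_channel(r + dr, c + dc) == color]
        return sum(vals) / len(vals)

    return [[tuple(average(r, c, ch) for ch in "RGB")
             for c in range(w)] for r in range(h)]
```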

Gamma encoding

To translate between our eye's light sensitivity and that of a digital imaging system, a function called gamma is used. In the simplest case the nonlinear gamma function is defined by the following power-law expression:

Vout = A × Vin^gamma

Vout and Vin are the output and input luminosity values, A is a constant (usually A = 1) and gamma is the exponent. A gamma value lower than 1 is called an encoding gamma, a value greater than 1 a decoding gamma. In the first case the compressive power-law nonlinearity is called gamma compression; conversely, the application of the expansive power-law nonlinearity is called gamma expansion. The term gamma correction is sometimes used for both processes.
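
A minimal sketch of the two directions, for values normalized to [0, 1] (with A = 1):

```python
def gamma_encode(v, gamma=1 / 2.2):
    """Compress a linear luminosity v in [0, 1] (encoding gamma < 1)."""
    return v ** gamma

def gamma_decode(v, gamma=2.2):
    """Expand an encoded value back to linear (decoding gamma > 1)."""
    return v ** gamma
```

Encoding lifts the mid-tones (0.5 becomes about 0.73), spending more of the available code values on the dark tones our eyes distinguish best; decoding undoes the compression.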

We distinguish three types of gamma :

  • Image Gamma
  • Display Gamma
  • System Gamma

The image gamma is applied to the raw image data before converting them to a standard JPEG or TIFF file and saving it to the memory card. Gamma encoding redistributes the native tonal levels into ones that are perceptually more uniform, making more efficient use of a given bit depth. The encoding gamma is usually about 1 / 2.2 = 0.455.

The display gamma refers to the video card and monitor and compensates for the image gamma, to prevent the image from being displayed too bright on the screen. The display gamma is usually equal to 2.2. On old Mac computers the value was 1.8.

The system gamma (viewing gamma) is the net effect of all gammas applied to the image. The system gamma should ideally be close to 1, resulting in a straight line in the gamma chart.

The following figure shows the three gamma plots :

Gamma charts

Gamma charts image, display and system

The precise gamma is usually specified by the ICC color profile that is embedded within the image file. If no color profile is indicated, then a standard gamma of 1/2.2 is assumed.

Tone and color adjustment

We should now have a good understanding of colors, but we haven’t yet explained what tones are. Are tones synonymous with brightness ? Some specialists use musical allusions to define tones. Others say that tones include colors. Years ago in a photo forum it was stated that only two terms are needed to specify tones and colors : hue and luminosity. Ken Bhasin concluded in this forum : “Tone is the degree of greyness. If the subject has color, imagine taking away its color – what remains is its tone. Absence of any tone makes a subject bright (light grey/white). Presence of a tone makes a subject dark (dark grey/black).” I endorse this definition.

There are several tools to adjust or correct tones and colors. Most are interrelated and influence both tones and colors. The most common tools are presented hereafter with reference to the Photoshop software.

Levels is a tool which can move and stretch the levels of an image histogram. It adjusts brightness, contrast and tonal range by specifying the location of complete black, complete white and midtones in a histogram.

The following example shows two Kodachrome slides scanned with a cheap Maginon FS6600 film scanner.

Scanned Kodachrome portrait with histograms

Scanned Kodachrome portrait with histograms

The histograms of the three color channels indicate an underexposure.

Color adjustment with Levels Tool in Photoshop

Color adjustment with Levels Tool in Photoshop

By moving the white point to the left in the R, G and B histograms in Photoshop, the levels are adjusted. Holding down the ALT key while dragging the black or white slider is a trick to visualize shadow or highlight clipping and avoid it.

Adjusted portrait

Adjusted portrait

Because the levels have been modified differently in the three color channels, the adjustment also influenced the hue of the image.
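
The effect of a levels adjustment on a single 8-bit channel can be sketched as follows. This is a simplified model with hypothetical names, not Photoshop’s actual implementation: the chosen black and white points are remapped to 0 and 255, with an optional midtone (gamma) correction in between.

```javascript
function applyLevels(values, black, white, midGamma = 1.0) {
  return values.map(v => {
    // stretch [black, white] to [0, 1], clipping values outside the range
    let t = (v - black) / (white - black);
    t = Math.min(Math.max(t, 0), 1);
    t = Math.pow(t, 1 / midGamma); // the midtone slider
    return Math.round(t * 255);    // back to 8-bit levels
  });
}
```

Moving the white point of an underexposed channel down (e.g. from 255 to 200) stretches the existing levels over the full range and brightens the channel, which is exactly what was done in the Kodachrome example.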

Photoshop curves tool

Photoshop curves tool (non-sense adjustment)

A second tool is Photoshop curves. It’s a very powerful and flexible image transformation utility. Similar to Photoshop levels, the curves tool can take input levels and selectively stretch or compress them. A curve is controlled using up to 16 anchor points. The left figure shows an example of an (artistic nonsense) curve applied to the preceding portrait. The curves tool only redistributes contrast and allows us to better utilize a limited dynamic range. You can never add contrast in one tonal region without also decreasing it in another region. You must decide how to spend your contrast budget. Curves also preserve the tonal hierarchy, unless there are negative slopes in the curve. The following figure shows the resulting modified image.

Portrait

Portrait modified in Photoshop with the Curves Tool based on nonsense points
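
A simplified model of the curves tool builds a 256-entry lookup table from the anchor points. Photoshop actually fits smooth splines through the anchors; the sketch below uses piecewise-linear interpolation, and all names are illustrative.

```javascript
// `anchors` is an array of [input, output] pairs in the range 0..255.
function curveLUT(anchors) {
  const pts = [...anchors].sort((a, b) => a[0] - b[0]);
  const lut = new Array(256);
  for (let v = 0; v < 256; v++) {
    if (v <= pts[0][0]) { lut[v] = pts[0][1]; continue; }
    if (v >= pts[pts.length - 1][0]) { lut[v] = pts[pts.length - 1][1]; continue; }
    // find the segment containing v and interpolate linearly
    let i = 1;
    while (pts[i][0] < v) i++;
    const [x0, y0] = pts[i - 1], [x1, y1] = pts[i];
    lut[v] = Math.round(y0 + (y1 - y0) * (v - x0) / (x1 - x0));
  }
  return lut;
}

// An S-shaped curve: it steepens the midtones (more contrast there) at the
// cost of flattening shadows and highlights -- the "contrast budget" idea.
const sCurve = curveLUT([[0, 0], [64, 48], [192, 208], [255, 255]]);
```

Applying the curve to an image is then just `pixel = sCurve[pixel]` for every channel value, which is why a curve with monotonically increasing segments preserves the tonal hierarchy.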

 

Curves can also be used on individual color channels to correct color casts (Farbstiche) in specific tonal areas. A typical example of digital images with color casts are scanned film negatives, which have a strong orange mask. The orange mask was added to color negative films because of imperfections in the CMY dyes.

Scanned film negative with orange color cast and inverted into a positive

Scanned film negative

The orange color cast becomes purple when the image is inverted to a positive. All film scanning software comes with color negative options. Typically a variety of color negative film types, such as Kodak Gold 100, Agfa, Fuji etc., are listed in the scanner software. A good scan should avoid clipping in all the color channels, which can easily be checked in the histograms.

If the scanned image is not converted to a positive in the scanner, it can be done in Photoshop. The third Photoshop adjustment tool, called eyedropper (pipette), is well suited for this purpose. The eyedropper appears in the levels and curves panels (see figures above). The far left dropper tool is used to set the black point by clicking on a location within the image that should be black. The far right dropper tool does the same for the white point. The middle dropper tool sets the grey point, which is an area in the image that should be colorless.

In a negative, white and black are inverted. The lightest part of the negative (the darkest part of the scene) can be no lighter than the color of the orange base cast. If the orange cast is converted to pure white in the negative (black in the positive), then the remaining colors will be converted as expected. The next figure shows the areas where the eyedropper tool has been applied and the resulting inverted positive.
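
The eyedropper-based conversion can be sketched as follows. This is a simplification of what Photoshop really does, with illustrative names: each channel is inverted, then stretched between a point sampled on the orange film base (the lightest negative area) and a point sampled on the densest area.

```javascript
// `pixels` is an array of [r, g, b] triples (0..255).
// `basePoint` is a pixel sampled on the orange base; `densePoint` is
// sampled on the densest (darkest) area of the negative.
function invertNegative(pixels, basePoint, densePoint) {
  return pixels.map(p => p.map((v, c) => {
    const inverted = 255 - v;
    // after inversion the base becomes the black point per channel,
    // the dense area becomes the white point
    const black = 255 - basePoint[c];
    const white = 255 - densePoint[c];
    let t = (inverted - black) / (white - black);
    t = Math.min(Math.max(t, 0), 1);
    return Math.round(t * 255);
  }));
}
```

Because the stretch is done per channel, the orange cast (and its purple counterpart in the inverted image) is neutralized in the same step.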

positive

Adjusted negative with the dropper (3 areas black, white, grey) and resulting positive

The global adjustment of the colors is called color balance. If the goal is to render specific neutral colors correctly, the method is called grey balance, neutral balance, or white balance respectively. General color balance changes the overall mixture of colors in an image and is used to get colors other than neutrals to appear correct or pleasing.

Photoshop offers various methods to automate tone and color adjustments :

  • Auto Levels
  • Auto Contrast
  • Auto Color
  • Photo Filter
  • Special filters like ColorPerfect

Photoshop also provides various sliders to manually adjust parameters such as color balance, brightness, contrast, hue, saturation, exposure, shadows and highlights. A great help is the Photoshop Variations tool, showing incremental changes of different parameters in images, with an indication of possible clipping. The next figure shows variations of the portrait image for the shadows, midtones, highlights and saturation.

Photoshop Variations

Photoshop Variations

Another method to automate color balance, used by several image editors, are selectors for presets, for example :

  • landscape
  • portraits, skin tones
  • night
  • beach
  • jewelry

Sharpening

The next step in the image processing workflow is sharpening. Resolution adds the detail that lets us recognize features. Sharpness makes edges clear and distinct. The standard tool of choice for sharpening is the Unsharp Mask filter (USM).

All color filter Bayer array algorithms, by definition, blur the image more than could theoretically have been captured by a perfect camera or scanner. That’s why sharpening is often integrated in the demosaicing process. If not, it can be done separately in an image editor like Photoshop.
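
The unsharp-mask principle can be shown with a minimal one-dimensional sketch (real USM filters work in two dimensions with a Gaussian blur and radius/threshold parameters; names here are illustrative): blur the signal, subtract the blur to isolate the high-frequency detail, and add a scaled amount of that detail back.

```javascript
function unsharpMask(values, amount = 1.0) {
  // simple 3-tap box blur with clamped borders
  const blur = values.map((v, i) => {
    const prev = values[Math.max(i - 1, 0)];
    const next = values[Math.min(i + 1, values.length - 1)];
    return (prev + v + next) / 3;
  });
  return values.map((v, i) => {
    const detail = v - blur[i];            // the "mask": high-frequency part
    const sharpened = v + amount * detail; // add detail back, scaled
    return Math.min(Math.max(Math.round(sharpened), 0), 255);
  });
}
```

On an edge such as 100, 100, 100, 200, 200, 200 the filter produces an undershoot (67) and an overshoot (233) on either side of the transition, which is precisely the halo effect that makes edges look crisper.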

Compression

A last step in the image processing workflow is compression, which reduces irrelevance and redundancy in the image data so that it can be stored or transmitted efficiently. Image compression may be lossy or lossless.

The common image formats used in digital imaging today are JPEG and TIFF.
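
As a toy illustration of lossless compression, here is a run-length coder: it exploits redundancy (runs of identical values) and decodes back to the exact original. JPEG and TIFF use far more elaborate schemes; this only makes the lossy/lossless distinction concrete.

```javascript
// Encode a flat array of values into [count, value] runs.
function rleEncode(values) {
  const runs = [];
  for (const v of values) {
    const last = runs[runs.length - 1];
    if (last && last[1] === v && last[0] < 255) last[0]++;
    else runs.push([1, v]);
  }
  return runs;
}

// Decoding expands every run back; no information is lost.
function rleDecode(runs) {
  return runs.flatMap(([count, v]) => new Array(count).fill(v));
}
```

A lossy scheme, by contrast, would also discard irrelevance (detail the eye cannot see) and could not guarantee this exact round trip.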

Color Management

Color management is the cross-platform view of all the features presented in the present post, based on ICC standards. Wikipedia defines color management in digital imaging systems as “the controlled conversion between the color representations of various devices. A color management system transforms data encoded for one device (camera, scanner) into that for another device (monitor, printer) in such a way that it reproduces the original colors. Where exact color matching is not possible, the result should be a pleasing approximation.”

Parts of the color management technology are implemented in the operating system (OS), for example ColorSync in Mac OS X and the Windows Color System (WCS, formerly ICM) in Windows. Other parts are included in applications (for example Photoshop) and in devices (for example cameras). An open source color management system called Little CMS (LCMS) was initiated by Marti Maria in 1998. LCMS is released under the terms of the MIT License as a software library for use in other programs which need ICC profile support. The current version is 2.7, updated on March 17, 2015, and available on GitHub.

One of the main components of a color management system is the Color Matching Module (CMM), the software engine in charge of the color transformations that take place inside the system. A Color Transformation Language (CTL) was created by the Academy of Motion Picture Arts and Sciences (A.M.P.A.S.) in 2007.

Besides the color profiles for devices and color spaces, the ICC has standardized a CMM. CMMs are built into ColorSync and WCS; Photoshop is also a good example of a CMM.

Proper color management requires that all images have an embedded profile. Recent web browsers like Internet Explorer 9, Safari 6 or Firefox support color management.

Calibration

Profiling a device is called characterization. Instruments used for measuring device colors include colorimeters and spectrophotometers. Calibration is like characterization, except that it can include the adjustment of the device, as opposed to just its measurement. When all devices are calibrated to a common standard color space such as sRGB, no color translations are needed to get all devices to handle colors consistently. Monitors, scanners and printers are the common devices that can be calibrated.

Windows Display Calibration Tool

Windows Display Calibration Tool

Display Calibration Tool Adobe_gamma

Display Calibration Tool Adobe_gamma

Modern monitors include a factory-created profile that is loaded into the monitor firmware and is communicated to the computer. Some people prefer to replace these profiles with custom ones. Most operating systems include tools to calibrate the monitor. Adobe Gamma is a monitor calibration tool included in Photoshop.

Color charts such as IT8 are used to calibrate scanners. Printers should be calibrated for every type of paper and ink you use. One solution is to print a test chart and to scan it with an IT8-calibrated scanner. Scanner software like SilverFast then calculates an ICC profile for the printer and the paper and ink combination.

IT8 color chart

IT8 color chart

Photo Restoration

Digital photo restoration uses specific image editing techniques to remove visible damage, fading, color casts and other aging effects from digital copies of physical photographs. The most common tools are :

  • levels, curves, contrast and black level tools to remove fading
  • white balance, color balance and other color tools to remove color casts
  • clone stamp, healing brush and other selective editing tools to remove localized damage

Conclusions and recommendations

The human eyes and brain work together to create what we call vision. The eyes collect input and send it to the brain for processing. It’s the brain that decides what we see (or think we see). The brain makes its decisions largely on the basis of perceived color and contrast data sent to it by the eye’s sensory elements, the cones and rods. Sometimes these decisions don’t match reality, which can give rise to what we know as optical illusions. Human vision still performs better than the most complex digital imaging system.

Here are some rules based on the explanations given in the present post to guide you through the digital imaging process :

  • use Adobe RGB as a working space for 8-bit images and ProPhoto RGB for 16-bit images
  • assign the sRGB profile as default for unprofiled images
  • use a generic CMYK profile for printing if the printer does not supply a custom profile and if it’s not restricted to an RGB interface
  • use perceptual as the default rendering intent; it’s the best choice for general and batch use and for images with intense colors
  • use the relative colorimetric rendering intent for images with subtle tones (portraits); they benefit from the increased accuracy
  • apply the Photoshop Curves Tool only to 16-bit images

Links

A list with links to websites providing additional information about digital imaging is shown hereafter :

DICOM image viewers

Last update : May 30, 2016

Referring to my recent post about the DICOM standard, this contribution presents an overview of an important entity in the medical imaging workflow : DICOM image viewers. The list is not exhaustive; I did the following segmentation to present my personal selection of current DICOM image viewers :

  1. Reference viewer
  2. Reference toolkits
  3. Open source viewers
  4. Free proprietary viewers
  5. Licensed commercial viewers
  6. Mobile viewer apps
  7. Other viewers

1. Reference DICOM Viewer

Today one project is generally considered as a reference for DICOM applications : OsiriX.

OsiriX

The OsiriX project started in November 2003. The first version was developed by Antoine Rosset, a radiologist from Geneva, Switzerland, now working at the La Tour Hospital in Geneva. He received a grant from the Swiss National Fund to spend one year at UCLA, Los Angeles, with Prof. Osman Ratib, to explore and learn about medical digital imaging. In October 2004, Antoine Rosset went back to the Geneva University Hospital in Switzerland to continue his career as a radiologist; he had published an OsiriX reference article in June 2004 in the Journal of Digital Imaging. Joris Heuberger, a mathematician from Geneva, joined the project in March 2005 on a voluntary fellowship of 6 months at UCLA, Los Angeles. In June 2005, OsiriX received two prestigious Apple Design Awards : Best Use of Open Source and Best Mac OS X Scientific Computing Solution. Osman Ratib, Professor of Radiology at UCLA, returned to Geneva at the end of 2005 as chairman of the Nuclear Medicine service.

In March 2009, Antoine Rosset, Joris Heuberger and Osman Ratib created the OsiriX Foundation to promote open-source in medicine. In February 2010, Antoine Rosset and Joris Heuberger created the company Pixmeo to promote and distribute the OsiriX MD version, certified for medical imaging. This version complies with the European Directive 93/42/EEC concerning medical devices. The price for a single licence is 678 EUR. The free lite version can be downloaded from the OsiriX website, the source code is available at Github.

OsiriX runs on Mac OSX and is released under version 3 of the GNU Lesser General Public License. The current version is 7.0, released on December 7, 2015. OsiriX can also be configured as a PACS server. The power of OsiriX can be extended with plugins.

OsiriX Lite

OsiriX Lite

An Osirix HD version for the iPad is available at the AppStore for 49,99 EUR.

2. Reference DICOM toolkits

DICOM toolkits are more than simple viewers; they are complete sets of tools, code samples, examples, documentation, tutorials, etc., to develop great healthcare applications.

DCMTK

DCMTK is a collection of libraries and applications implementing large parts of the DICOM standard. It includes software for examining, constructing and converting DICOM image files, handling offline media, sending and receiving images over a network connection, as well as demonstrative image storage and worklist servers. DCMTK is written in a mixture of ANSI C and C++. It comes in complete source code and is made available as open source software.

DCMTK is an ancestor of many DICOM applications. In 1993, before the official release of the standard, a DICOM prototype implementation was created by OFFIS, the University of Oldenburg and the CERIUM (Centre Européen d’Imagerie à Usage Médical) research centre in Rennes (France), on behalf of the European Committee for Standardization (CEN/TC251/WG4).

The current version of DCMTK is 3.6.1, released in June 2015. The related snapshot is available at the dicom.offis.de website. DICOMscope is the related free DICOM viewer, which can display uncompressed, monochrome DICOM images from all modalities and supports monitor calibration according to DICOM part 14 as well as presentation states. DICOMscope 3.6.0 for Windows, implemented in a mixture of Java and C++, was released in 2003. DICOMscope can’t be installed on newer Windows systems (Vista, Windows 7, Windows 8.1); an error 105 (setup.lid missing) is issued.

DICOMscope

DICOMscope version 3.5.1 (archive image)

Some DCMTK modules, especially those that are not part of the free toolkit, are covered by a separate license which can be found in the COPYRIGHT file in the corresponding module directory. These tools can be evaluated during a period of four months; any further use of the software requires a full licence agreement, free of charge.

The following sub-packages are part of DCMTK :

  • config: configuration utilities for dcmtk
  • dcmdata: a data encoding/decoding library and utility apps
  • dcmimage: adds support for color images to dcmimgle
  • dcmimgle: an image processing library and utility apps
  • dcmjpeg: a compression/decompression library and utility apps
  • dcmjpls: a compression/decompression library and utility apps
  • dcmnet: a networking library and utility apps
  • dcmpstat: a presentation state library and utility apps
  • dcmrt: a radiation therapy library and utility apps
  • dcmsign: a digital signature library and utility apps
  • dcmsr: a structured report library and utility apps
  • dcmtls: security extensions for the network library
  • dcmwlm: a modality worklist database server
  • dcmqrdb: an image database server
  • oflog: a logging library based on log4cplus
  • ofstd: a library of general purpose classes

Each sub-package (module) contains a collection of sub-modules (functions). For example, the networking library dcmnet contains the following command line tools :

  • dcmrecv: Simple DICOM storage SCP (receiver)
  • dcmsend: Simple DICOM storage SCU (sender)
  • echoscu: DICOM verification (C-ECHO) SCU
  • findscu: DICOM query (C-FIND) SCU
  • getscu: DICOM retrieve (C-GET) SCU
  • movescu: DICOM retrieve (C-MOVE) SCU
  • storescp: DICOM storage (C-STORE) SCP
  • storescu: DICOM storage (C-STORE) SCU
  • termscu: DICOM termination SCU

dcm4che

dcm4che2 is a collection of open source applications and utilities for the healthcare enterprise developed in the Java programming language. dcm4chee2 is a DICOM Clinical Data Manager system.

dcm4che2 contains a number of useful sample applications that may be used in conjunction with dcm4chee, with another archive application, or to operate on DICOM objects in a standalone fashion. A list of the dcm4che2 utilities is shown hereafter :

  • dcm2txt – Convert a DICOM object to text
  • dcm2xml – Convert a DICOM object to XML
  • dcmdir – Manipulate a DICOMDIR
  • dcmecho – Initiate a C-ECHO command as an SCU
  • dcmgpwl – Query a General Purpose Worklist SCP
  • dcmmwl – Query a Modality Worklist SCP
  • dcmof – Simulate an Order Filler application
  • dcmqr – Perform C-FIND, C-GET and C-MOVE operations as an SCU
  • dcmrcv – DICOM receiver (C-STORE SCP)
  • dcmsnd – Perform C-STORE operations as an SCU
  • dcmups – Unified Worklist and Procedure Step SCU
  • dcmwado – Initiate DICOM WADO requests
  • jpg2dcm – Convert a JPEG image to DICOM
  • logger – Log files to a Syslog destination
  • mkelmdic – Create the serialized dcm4che2 DICOM dictionary
  • mkuiddic – Create the dcm4che2 UID dictionary
  • mkvrmap – Create the dcm4che2 VR mappings
  • pdf2dcm – Convert a PDF document to DICOM
  • rgb2ybr – Convert pixel data from RGB to YBR format
  • txt2dcmsr – Convert text to a DICOM Structured Report
  • xml2dcm – Convert XML to DICOM

The dcm4che history states that, back around the year 2000, Gunter Zeilinger wrote the popular JDicom utility suite using the commercial Java DICOM Toolkit (JDT). After this experience, he decided to develop his own toolkit and to name it after Che Guevara.

dcm4che and dcm4chee are licensed under an MPL/GPL/LGPL triple license, similar to Mozilla.

The dcm4che DICOM viewer is called Weasis. The current version is 2.0.4, released on June 23, 2015.

WEASIS version 2.0.4

WEASIS version 2.0.4

Grassroots DICOM

Grassroots DiCoM is a C++ library for DICOM medical files. It is accessible from Python, C#, Java and PHP. It supports RAW, JPEG, JPEG 2000, JPEG-LS, RLE and deflated transfer syntax. It comes with a super fast scanner implementation to quickly scan hundreds of DICOM files. It supports SCU network operations (C-ECHO, C-FIND, C-STORE, C-MOVE).

The current version is gdcm-2.6.3, released on January 27, 2016. The GDCM source code is available at Github. A Wiki is available at Sourceforge, a reference to GDCM is available at Wikipedia. The project is developed by Mathieu Malaterre (malat) from Lyon, France.

3. Open Source DICOM Viewers

Most open source DICOM viewer projects are web viewers based on HTML5, CSS3 and Javascript. The big advantage of these viewers is their cross-platform compatibility; they can be used with any modern browser.

DICOM web viewers are presented in a separate contribution. Among them are the following open source projects :

  • Cornerstone
  • DWV
  • Papaya
  • jsDICOM
  • webDICOM
  • dcmjs

There are also some non web open source DICOM viewers :

  • 3DSlicer
  • 3DimViewer

3DSlicer

3D Slicer is a free and open source software package for image analysis and scientific visualization. It’s more than a simple DICOM viewer. This outstanding project started in 1998 as a master’s thesis project between the Surgical Planning Laboratory at the Brigham and Women’s Hospital and the MIT Artificial Intelligence Laboratory.

3D Slicer is written in C++, Python, Java and Qt and can be compiled for use on multiple computing platforms, including Windows, Linux and Mac OS X. 3D Slicer needs a powerful computer to run. The current version is 4.5.0-1, released on November 11, 2015. It’s distributed under a BSD-style free open source license. More than 50 plug-ins and packages of plug-ins are available.

3D Slicer

3D Slicer

The main developers are now Steve Pieper, Slicer’s principal architect, and Ron Kikinis, principal investigator for many Slicer-related projects. The names of all contributors are available at the slicer.org website.

3DimViewer

3DimViewer is a lightweight 3D viewer of medical DICOM datasets distributed as open source software. The viewer is multiplatform software written in C++ that runs on Windows, Linux and Mac OSX systems.

3DimViewer is developed by 3Dim Laboratory s.r.o., a company specializing in applications of modern computer graphics in medicine and developing innovative solutions. Founded in 2008, the company focuses on medical image processing, 3D graphics, geometry processing and volumetric data visualization. The company office is located in Brno, Czech Republic, next to many high-tech companies sharing the spirit of the South Moravian Innovation Centre.

3DimViewer

3DimViewer version 2.2

The current version of 3DimViewer is 2.2, released on February 6, 2015. Several plugins are available to extend the functions. Binaries are available for download on the 3DimLab website, source code is available at BitBucket.

GDCMviewer

GDCMviewer is a simple tool that shows how to use vtkGDCMImageReader. It is basically just a wrapper around GDCM. The tool is meant for testing the integration of GDCM in VTK.

4. Free Proprietary DICOM Viewers

Most free proprietary DICOM viewers are copyrighted by their owner and are available for use, as is, free of charge, for educational, scientific and non-commercial purposes. Some of them are included on the DICOM CDs provided by hospitals to patients.

Mango

Mango (short for Multi-image Analysis GUI) is a viewer for medical research images, developed by Jack L. Lancaster, Ph.D. and Michael J. Martinez at the University of Texas.

There are several versions of Mango available :

  • Mango Desktop, a Java application running on Windows, Mac OSX and Linux
  • iMango, running on iPads and available at the AppStore
  • webMango, running as a Java applet
  • Papaya, running as an HTML5 application in all browsers

The software and data derived from Mango software may be used only for research and may not be used for clinical purposes. If Mango software or data derived from Mango software is used in scientific publications, the Research Imaging Institute UTHSCSA must be cited as a reference.

Mango DICOM viewer

Mango DICOM viewer

Orpalis

The Orpalis DICOM Viewer is a free tool for medical staff to view DICOM files. The current version 1.0.1, released on June 20, 2014, should run on any 32- or 64-bit Windows system, but I experienced serious problems on my Windows 8.1 system (thumbnails are not displayed, frequent viewer crashes, …). The ORPALIS DICOM Viewer is based on the GdPicture.NET SDK.

Orpalis DICOM viewer

Orpalis DICOM viewer

MicroDICOM

MicroDicom is an application for the primary processing and preservation of medical images in DICOM format, with an intuitive user interface; it is free for use and accessible to everyone. MicroDicom runs on Windows; the current version is 0.9.1, released on June 2, 2015.

MicroDicom viewer

MicroDicom viewer

EMV Medical Viewer

The EMV viewer is developed by Escape, which was founded in 1991 and is based in downtown Thessaloniki, Greece. EMV 4 for Windows was released on October 10, 2014, EMV 4.4.1 for Mac OSX was released on July 21, 2015.

You can download and evaluate the software for free, but you need a license to use it in a commercial environment. The price for one license, valid for use on up to three computers, is 245 EUR.

5. Licensed commercial DICOM viewers

Photoshop

Since version 10 (CS3), launched in April 2007, Photoshop provides comprehensive image measurement and analysis tools with DICOM file support.

Photoshop CS3 with DICOM support

Photoshop CS3 with DICOM support

DICOMIZER

Dicomizer is a Point-Of-Care imaging and reporting tool provided by H.R.Z Software Services LTD in Tel-Aviv, Israel. The company specializes in developing medical device, healthcare IT, DICOM and HL7 solutions and provides medical imaging consultation, development services and professional courses. The company was founded in 2002 (formerly RZ Software Services) by Roni Zaharia, a medical imaging and connectivity expert, who acts as its CEO. Roni Zaharia is the author of the blog DICOM is easy, providing useful news about medical imaging and an outstanding DICOM tutorial. Dicomizer works on Windows; the current version is 5.0. The price of a licence is $470 USD; a free evaluation version is available. The annual update costs are $120 USD. Dicomizer can also be used as a DICOM image generator.

DICOMIZER

DICOMIZER version 4.1

H.R.Z Software Services LTD also provides the following medical toolkits :

  • RZDCX : Fast Strike DICOM Toolkit
  • DSRSVC : extensible DICOM Server (PACS) for OEM
  • HL7Kit Pro : WYSIWYG HL7 Integration Engine for MS SQL Server

MedImaView – PowerDicom

MedImaView is a multi-modality DICOM viewer with an intuitive Windows graphical user interface. It’s part of PowerDicom Technologies, an all-in-one application for handling DICOM files developed by DICOM Solutions, an MHGS company. Licenses for PowerDicom (version 4.8.6, released on May 4, 2015) are available in a price range from 39 EUR to 310 EUR. PowerDicom also allows the generation of DICOM images. A free trial version can be downloaded from the DICOM Solutions website. MedImaView (version 1.8) is free for personal use and students.

MedimaView DICOM viewer

MedimaView DICOM viewer

DICOM PowerTools

DICOM PowerTools are developed by Laurel Bridge, which provides imaging workflow solutions and DICOM software products to the medical imaging industry. PowerTools is a suite designed for testing, troubleshooting and debugging applications that use DICOM communications. PowerTools also provides for the viewing, repair and creation of DICOM data sets and their contents.

The current version is 1.0.34, released on November 24, 2015.

PowerTools File Editor

PowerTools File Editor

RadiAnt

RadiAnt is a DICOM viewer for medical images designed with an intuitive interface and unrivaled performance. It runs on Windows; the latest version is 2.2.8.10726, released on December 11, 2015. The prices for a license range from 72 EUR to 400 EUR. A free evaluation version is available. RadiAnt is not certified as a medical product and is not intended for diagnostic purposes. RadiAnt is developed by Medixant, a small, privately funded company formed by Maciej Frankiewicz in 2011 in Poznan, Poland.

RadiAnt DICOM viewer

RadiAnt DICOM viewer

CODONICS Clarity Viewer

Headquartered in Cleveland, Ohio, Codonics develops, designs, sells and supports leading-edge medical imaging and information management devices used in diagnostic imaging.

The Codonics Clarity Viewer features simple image navigation and selection, an intuitive user interface, quick viewer launch and rapid image loading. The Codonics Clarity 3D/Fusion Viewer is extremely useful for viewing diagnostic imaging results. It is a comprehensive PET/CT viewer that is simple to use for single or comparison study review. All basic features of the Codonics Clarity Viewer are also included.

Codonics

Codonics Clarity 3D/Fusion viewer

MatLab Dicom Toolbox

The Image Processing Toolbox of MatLab includes import, export and conversion functions for scientific file formats, among them DICOM files. The available functions are dicomanon, dicomdict, dicomdisp, dicominfo, dicomlookup, dicomread, dicomuid and dicomwrite. A tutorial shows how to write data to a DICOM file.

6. Mobile DICOM viewer apps

The mobile DICOM viewers are presented in a separate contribution.

7. Other DICOM viewers

The following list provides links to additional DICOM viewers developed by the industry’s leading medical imaging equipment suppliers and by independent developers :

Links

The following list shows links to websites with additional information about DICOM viewers :

Image Manipulations with Javascript

Introduction

Today most computers, graphic cards and monitors can display 16-bit, 24-bit, 32-bit or even 48-bit color depth. The color quality can be selected in the control center of the graphic (video) card.

ATI Radeon Control Center Window

Example : ATI Radeon Control Center Window

8-bit-color

In 8-bit color graphics each pixel is represented by one byte; the maximum number of colors that can be displayed at any one time is 256. There are two forms of 8-bit color graphics. The most common uses a separate palette of 256 colors, where each of the 256 entries in the palette map is given red, green and blue values. The other form uses the 8 bits to directly describe red, green and blue values, typically with 3 bits for red, 3 bits for green and 2 bits for blue.

16-bit color

With 16-bit color, also called High color, one of the 16 bits is set aside for an alpha channel and the remaining 15 bits are split between the red, green and blue components, allowing 32,768 possible colors for each pixel. When all 16 bits are used, one of the components (usually green) gets an extra bit, allowing 64 levels of intensity for that component and a total of 65,536 available colors.
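
The full 16-bit layout (5 bits red, 6 bits green, 5 bits blue, commonly known as RGB565) can be sketched as simple bit packing; the function names are illustrative:

```javascript
// r and b range over 0..31 (5 bits), g over 0..63 (6 bits)
function pack565(r, g, b) {
  return (r << 11) | (g << 5) | b;
}

// Recover the three components by shifting and masking.
function unpack565(word) {
  return [(word >> 11) & 31, (word >> 5) & 63, word & 31];
}
```

Packing the maximum components (31, 63, 31) yields 65535, the highest 16-bit value, confirming the 65,536-color count.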

24-bit color

Using 24-bit color, also called True color, computers and monitors can display as many as 16,777,216 different color combinations.

32-bit color

Like 24-bit color, 32-bit color supports 16,777,216 colors, with an additional alpha channel to create more convincing gradients, shadows and transparencies. With the alpha channel, 32-bit color supports 4,294,967,296 combinations.

48-bit color

Systems displaying a billion or more colors are called Deep Color. In digital images, 48 bits per pixel, or 16 bits per each color channel (red, green and blue), is used for accurate processing. For the human eye, it is almost impossible to see any difference between such an image and a 24-bit image.

CLUT

A color look-up table (CLUT) is a mechanism used to transform a range of input colors into another range of colors. It can be a hardware device built into an imaging system or a software function built into an image processing application.
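A CLUT can be sketched in javascript as a simple array of RGB triples used to resolve indexed pixels (a toy palette for illustration, not a real hardware table) :

```javascript
// A minimal color look-up table : indexed pixels are mapped
// to RGB triples through the palette.
var palette = [
  [0, 0, 0],       // index 0 : black
  [255, 255, 255], // index 1 : white
  [255, 0, 0]      // index 2 : red
];

function applyCLUT(indexedPixels, clut) {
  return indexedPixels.map(function (i) { return clut[i]; });
}

console.log(applyCLUT([1, 0, 2], palette));
// [ [255, 255, 255], [0, 0, 0], [255, 0, 0] ]
```

This is the same principle as the 8-bit palette form described above : the image stores small indexes and the table supplies the actual colors.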

HDR

High-dynamic-range imaging (HDR) is a set of techniques used in imaging and photography to reproduce a greater dynamic range of luminosity than is possible with standard digital imaging or photographic techniques. The aim is to present the human eye with a similar range of luminance as that which, through the visual system, is familiar in everyday life.

Pixel Image


PixelImage 8×8

To dive into image manipulations with javascript, we will use the pixel image shown on the left, which has 8 x 8 pixels and a color depth of 1 bit. The bit value 0 is associated with the color white, 1 means black. We will see later that in real systems the colors are inverted (1 = white, 0 = black). In the next steps we will look at how to display this image in a browser with HTML5 and javascript.

The following table shows the pixel data for the image. I used the MathIsFun website to do the binary to hexadecimal and decimal conversion.

row  bits      hexadecimal  decimal
1    01111110  7E           126
2    10000001  81           129
3    10100101  A5           165
4    10000001  81           129
5    10011001  99           153
6    10000001  81           129
7    11100111  E7           231
8    00111100  3C           60
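The conversions in this table can also be computed directly in javascript with parseInt() and Number.toString(), as the following sketch shows :

```javascript
// The 8 monochrome rows of the pixel image as binary strings.
var rows = ["01111110", "10000001", "10100101", "10000001",
            "10011001", "10000001", "11100111", "00111100"];

rows.forEach(function (bits, i) {
  var decimal = parseInt(bits, 2);              // binary > decimal
  var hex = decimal.toString(16).toUpperCase(); // decimal > hexadecimal
  console.log((i + 1) + " " + bits + " " + hex + " " + decimal);
});
// the first printed line is : 1 01111110 7E 126
```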

We can now use the following code to draw the pixels on a canvas :

<body>
<canvas id="pixelboard" width="512" height="512"></canvas>
<script>
var myCanvas = document.getElementById("pixelboard");
var myContext = myCanvas.getContext("2d");
myContext.fillStyle = "silver";
myContext.fillRect(0, 0, myCanvas.width, myCanvas.height);
myContext.fillStyle = "black";
// here are the pixel data for the 8 rows
var pixelData = [126, 129, 165, 129, 153, 129, 231, 60];
for (var i = 0; i < pixelData.length; i++) {
  var base2 = pixelData[i].toString(2);
  var p = 7;
  // set pixels in canvas from right to left
  for (var j = base2.length - 1; j >= 0; j--) {
    if (base2[j] == "1") {
      myContext.fillRect(p * 64, i * 64, 64, 64);
    } // end if
    p--;
  } // end loop j
} // end loop i
</script>
</body>

Click this PixelData link to see it working. The image is stored in 8 bytes.

PNG Image

To draw the picture in the original size of 8×8 pixels, we change the canvas size

<canvas id="pixelboard" width="8" height="8"></canvas> 

and the code line in the inner loop as follows

myContext.fillRect(p, i, 1, 1);

Click this PixelData link to see it working.

We can save the small pixel image in the browser with a right mouse click as a canvas.png file. The size of this PNG image file is 108 bytes, 100 bytes more than the size of the image stored in our javascript. That’s a lot of overhead. Sort of design overkill !

Let’s have a look inside this file with a hex editor (HxD from Maël Hörz).


Anatomy of a small PNG Image File

We can identify the words PNG, IHDR, IDAT and IEND. The PNG format is specified by the W3C. A short description is available at the FileFormat.Info website. PNG (pronounced “ping”) is a bitmap file format used to transmit and store bitmapped images. PNG supports the capability of storing up to 16 bits (gray-scale) or 48 bits (truecolor) per pixel, and up to 16 bits of alpha data. It handles the progressive display of image data and the storage of gamma, transparency and textual information, and it uses an efficient and lossless form of data compression.

A PNG format file consists of an 8-byte identification signature followed by chunks of data :

  • Header chunk (IHDR) : the header chunk (13 bytes) contains basic information about the image data and must appear as the first chunk, and there must only be one header chunk in a PNG file.
  • Palette chunk (PLTE) : the palette chunk stores the colormap data associated with the image data. This chunk is present only if the image data uses a color palette and must appear before the image data chunk.
  • Image data chunk (IDAT) : the image data chunk stores the actual image data, and multiple image data chunks may occur in a data stream and must be stored in contiguous order.
  • Image trailer chunk (IEND) : the image trailer chunk must be the final chunk and marks the end of the PNG file or data stream.
  • Optional chunks are called ancillary chunks (examples : background, gamma, histogram, transparency, …) and can be inserted before or after the image data chunks. Ten ancillary chunks have been defined in the first PNG version.

Each chunk has the following structure and carries an overhead of 12 bytes :

  • DataLength (4 bytes)
  • ChunkType (4 bytes)
  • Data (number of bytes specified in DataLength)
  • CRC-32 (4 bytes)
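Based on this chunk layout, a simplified javascript chunk walker can be sketched as follows (an illustration : it only collects chunk types and lengths and does not verify the CRC values) :

```javascript
// Read a big-endian 32-bit integer at offset o.
function readUint32BE(bytes, o) {
  return (bytes[o] << 24 | bytes[o + 1] << 16 | bytes[o + 2] << 8 | bytes[o + 3]) >>> 0;
}

// List the chunks of a PNG given as a byte array,
// skipping the 8-byte identification signature.
function listChunks(png) {
  var chunks = [], o = 8;
  while (o < png.length) {
    var len = readUint32BE(png, o);
    var type = String.fromCharCode(png[o + 4], png[o + 5], png[o + 6], png[o + 7]);
    chunks.push({ type: type, length: len });
    o += 12 + len; // 4 length bytes + 4 type bytes + data + 4 CRC bytes
  }
  return chunks;
}
```

Applied to our canvas.png file, such a walker reports the IHDR, IDAT and IEND chunks with their data lengths.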

The IHDR chunk specifies the following parameters in the 13 data bytes :

  • ImageWidth in pixels (4 bytes)
  • ImageHeight in pixels (4 bytes)
  • BitDepth (1 byte)
  • ColorType (1 byte)
  • Compression (1 byte)
  • Filter (1 byte)
  • Interlace (1 byte)

An analysis of our PixelData PNG image provides the following results :

  • ImageWidth in pixels :  00 00 00 08 (big-endian) > 8 pixels
  • ImageHeight in pixels : 00 00 00 08 (big-endian) > 8 pixels
  • BitDepth : 08 > 8 bit
  • ColorType : 06 > Truecolour with alpha (RGBA)
  • Compression : 00 > default = deflate
  • Filter : 00 > default = adaptive filtering
  • Interlace : 00 > no
  • ImageDataLength : 00 00 00 31 (big-endian) > 49 bytes

In the HexEditor we see that the 49 bytes of deflated image data are :

18 95 63 38 70 E0 C0 7F 06 06 06 AC 18 2A 07 61 
60 C3 50 85 70 95 28 12 18 0A 08 9A 80 EC 16 9C 
0A 70 9A 80 43 27 04 63 15 44 52 0C 00 67 20 8C 41

The image data is zlib-compressed using the deflate algorithm. zlib is specified in RFC 1950, deflate is specified in RFC 1951. The process is sufficiently complex that we will not do it manually. We can use the javascript pako.js library to decompress the data block. This library was designed by Vitaly Puzrin and Andrey Tupitsin.

Here comes the code :

<!DOCTYPE HTML>
<html>
<head>
 <meta charset="utf-8">
 <title>Inflate byte block of PNG image pixel data with pako.js</title>
 <script type="text/javascript" src="js/pako.js"></script>
</head>
<body>
<h1>Inflate byte block of PNG image pixel data with pako.js</h1>
<div id="main"></div>
 <script type="text/javascript" >
// enter datastream as array
var hexData = [0x18, 0x95, 0x63, 0x38, 0x70, 0xE0, 0xC0, 0x7F, 0x06, 0x06, 
0x06, 0xAC, 0x18, 0x2A, 0x07, 0x61, 0x60, 0xC3, 0x50, 0x85, 0x70, 0x95, 0x28, 
0x12, 0x18, 0x0A, 0x08, 0x9A, 0x80, 0xEC, 0x16, 0x9C, 0x0A, 0x70, 0x9A, 0x80, 
0x43, 0x27, 0x04, 0x63, 0x15, 0x44, 0x52, 0x0C, 0x00, 0x67, 0x20, 0x8C, 0x41];
 // Pako inflate
 var inflateData = pako.inflate(hexData);
// output inflated data
var output = "<p>The length of the inflated data sequence is : "
+ inflateData.length + " bytes.<br/>";
 for (var i = 0; i < 8; i++) {
  for (var j = 0; j < 33; j++) {
   console.log((i * 33) + j);
   output += decimalToHexString(inflateData[(i * 33) + j]) + " ";
  } // end for loop j
  output += "<br/>";
 } // end for loop i
 output+= "</p>";
 element = document.getElementById("main");
 element.innerHTML = output;
 function decimalToHexString(number)
{ if (number < 0)
 { number = 0xFFFFFFFF + number + 1; }
 return number.toString(16).toUpperCase();
}
</script>
</body>
</html>

Byte sequence in PNG image rows

The byte sequence of pixel data stored in PNG images is shown in the left figure.

In our case we have 8 rows with 8 * 4 bytes (RGBA) plus one null byte, giving a total of 8 * 33 = 264 bytes.
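Under these assumptions (one filter byte per row, followed by 8 RGBA quadruples), the byte offset of a pixel (x, y) in the inflated data can be computed with a small helper (the function name is ours) :

```javascript
// Offset of the red byte of pixel (x, y) in the inflated data :
// each row is 1 filter byte followed by width * 4 RGBA data bytes.
function pixelOffset(x, y, width) {
  return y * (1 + width * 4) + 1 + x * 4;
}

console.log(pixelOffset(0, 0, 8)); // 1 : first byte after the filter byte of row 0
console.log(pixelOffset(7, 7, 8)); // 260 : last pixel occupies bytes 260..263 of 264
```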

Click the inflate link to see the result of the inflate process. The sequence length is indeed 264 bytes and the structure of the PNG format is visible in the output.


inflating PNG image data

The RGB hexadecimal values C0 generate the grey (silver) pixels, the values 00 generate the black pixels. The alpha channel is always fully opaque (hex FF).

Synthesize a PNG image

To synthesize a minimal PNG image with monochrome PixelData, we modify the original canvas.png data as follows :

1. The signature does not change, the bytes in hexadecimal format are :

89 50 4E 47 0D 0A 1A 0A

2. In the header we set the bit depth to 1 (monochrome) and the color type to 0 (gray-scale). We get the following byte sequence in hexadecimal format :

00 00 00 0D 49 48 44 52 00 00 00 08 00 00 00 08 01 00 00 00 00

We have several possibilities to calculate the new CRC32 checksum over the header name and the new data :


CRC32 calculation with desktop and online tool

Here comes the code for the javascript CRC32 calculation :

<!DOCTYPE HTML>
<html>
<head>
 <meta charset="utf-8">
 <title>Calculate checksum crc32 with SheetJS/js-crc32 
of canvas.png chunks</title>
 <script type="text/javascript" src="js/SheetJS_crc32.js"></script>
</head>
<body>
<h1>Calculate checksum crc32 with SheetJS/js-crc32 of canvas.png chunks</h1>
<div id="main"></div>
 <script type="text/javascript" >
 // calculate crc32 over chunk name and data
// enter datastream as hexadecimal numbers
var charData = [0x49, 0x48, 0x44, 0x52, 0x00, 0x00, 0x00, 0x08, 0x00, 0x00, 
0x00, 0x08, 0x01, 0x00, 0x00, 0x00, 0x00];
var myCRC32 = CRC32.buf(charData);
var crc = decimalToHexString(myCRC32);
var output = "<p>Here is the signed 32 bit number of the CRC32 : " 
+ myCRC32 + "<br/>Here is the hexadecimal value of the CRC32 : " 
+ crc + "</p>";
 element = document.getElementById("main");
 element.innerHTML = output;
 function decimalToHexString(number)
{  if (number < 0)
   { number = 0xFFFFFFFF + number + 1 }  
   return number.toString(16).toUpperCase();
} // end function
 </script>
</body>
</html>

Click this CRC32 link to see it working. The checksum to add to the IHDR chunk is EC 74 83 26.

Now we tackle the IDAT chunk. We have 8 rows of PixelData, each starting with a NullByte (filter), followed by 1 byte per row for the monochrome pixels. That makes a total of 16 bytes. The data length in hexadecimal format is 10. We use 1 for black and 0 for white, giving us the following byte sequence :

00 7E 00 81 00 A5 00 81 00 99 00 81 00 E7 00 3C

This byte sequence is deflated with the Pako.js library with the following script :

<!DOCTYPE HTML>
<html>
<head>
 <meta charset="utf-8">
 <title>Deflate byte block of PNG image pixel data with pako.js</title>
 <script type="text/javascript" src="js/pako.js"></script>
</head>
<body>
<h1>Deflate byte block of PNG image pixel data with pako.js</h1>
<div id="main"></div>
 <script type="text/javascript" >
 // enter datastream as numbers
var charData = [0x00, 0x7E, 0x00, 0x81, 0x00, 0xA5, 0x00, 0x81, 0x00, 0x99, 
0x00, 0x81, 0x00, 0xE7, 0x00, 0x3C];
 // Pako deflate
 var deflateData = pako.deflate(charData);
 var output = "<p>The length of the deflated data sequence is : " 
+ deflateData.length + " bytes.<br/>";
 for (i = 0; i < deflateData.length; i++) {
 output+= decimalToHexString(deflateData[i]) + " ";
 } // end for loop i
 output+= "</p>";
 element = document.getElementById("main");
 element.innerHTML = output;
 function decimalToHexString(number)
{ if (number < 0)
 { number = 0xFFFFFFFF + number + 1; }
 return number.toString(16).toUpperCase();
}
</script>
</body>
</html>

Click this deflate link to see the result. The deflated sequence has a length of 21 bytes (hex : 15) and is longer than the original sequence. That happens with very short image sequences.


deflating PNG image data

There are possibilities to minify the deflated sequence length, but this is not our goal. There are several blogs and posts dealing with the smallest possible PNG images.

The last step is the calculation of the CRC32 checksum, same procedure as above. The following crc32 link shows the 4 byte hexadecimal number : EC 01 89 73.

The final byte sequence for the IDAT chunk is displayed hereafter :

00 00 00 15 49 44 41 54 78 9C 63 A8 63 68 64 58 0A C4 33 81 F8 39 83 0D 00 23 
44 04 63 EC 01 89 73 

3. The IEND chunk remains unchanged and has no associated data :

00 00 00 00 49 45 4E 44 AE 42 60 82

To create and display this synthetic PNG image, we copy all the hexadecimal data in our HexEditor and save it as a mysynth.png file. To check that the format is right, we can use the pngcheck tool or load the image in Photoshop. It works.


Analyse file mysynth.png with pngcheck.exe


Open file mysynth.png in Photoshop

Display the PNG image in the Browser

The typical HTML code to display an image in a web browser is

<img src="url" alt="abcde" width="xxx" height="yyy" />

The src attribute specifies the URI (uniform resource identifier) of the image. The most common form of a URI is a URL (uniform resource locator), frequently referred to as a web address. URIs identify and URLs locate. Every URL is also a URI, but there are URIs which are not URLs.

The URI syntax consists of a URI scheme name (such as “http”, “ftp”, “mailto” or “file”) followed by a colon character, and then by a scheme-specific part. An example of a URI which is not a URL is a data URI, for example

data:,Hello%20World

The data URI scheme is a URI scheme that provides a way to include data in-line in web pages as if they were external resources. This technique allows normally separate elements such as images and style sheets to be fetched in a single HTTP request rather than multiple HTTP requests, which can be more efficient.

We will use the data URI to display our synthesized PNG image in a web browser without saving it to an external source. The data URI scheme is defined in RFC 2397 of the IETF. URIs are character strings, therefore we must convert (encode) the image data to ASCII text. The most common conversion is base64, another method is percent encoding.

There are several possibilities to encode our image data in base64 :

Here comes the code for the javascript btoa() conversion :

<!DOCTYPE HTML>
<html>
<head>
 <meta charset="utf-8">
 <title>Display mysynth.png with dataURI</title>
 </head>
<body>
<h1>Display mysynth.png with dataURI</h1>
<div id="main"></div>
 <script type="text/javascript" >
var signature = [0x89, 0x50, 0x4E, 0x47, 0x0D, 0x0A, 0x1A, 0x0A]; 
var ihdr = [0x00, 0x00, 0x00, 0x0D, 0x49, 0x48, 0x44, 0x52, 0x00, 0x00, 
0x00, 0x08, 0x00, 0x00, 0x00, 0x08, 0x01, 0x00, 0x00, 0x00, 0x00, 0xEC, 
0x74, 0x83, 0x26];
var idat = [0x00, 0x00, 0x00, 0x15, 0x49, 0x44, 0x41, 0x54, 0x78, 0x9C, 
0x63, 0xA8, 0x63, 0x68, 0x64, 0x58, 0x0A, 0xC4, 0x33, 0x81, 0xF8, 0x39, 
0x83, 0x0D, 0x00, 
0x23, 0x44, 0x04, 0x63, 0xEC, 0x01, 0x89, 0x73];
var iend = [0x00, 0x00, 0x00, 0x00, 0x49, 0x45, 0x4E, 0x44, 0xAE, 0x42, 
0x60, 0x82];
var mysynthPNG = signature.concat(ihdr).concat(idat).concat(iend);
var imageStringBase64 = btoa(String.fromCharCode.apply(null, mysynthPNG));
var mysynthImg=document.createElement("img");
mysynthImg.setAttribute('src', 'data:image/png;base64,' + imageStringBase64);
mysynthImg.setAttribute('alt', 'mysynthPNG');
mysynthImg.setAttribute('height', '8px');
mysynthImg.setAttribute('width', '8px');
document.body.appendChild(mysynthImg);
</script>
</body>
</html>

Click the following base64 link to see the result. The pixel colors are inverted, 1 is white and 0 is black.

Links

The following list provides links to websites with additional information about image pixel manipulations :

Photoshop plugins : JPEG-LS, JPEG-XR, JPEG-2000

Last update: July 19, 2015

JPEG

JPEG (file extension .jpg) is a commonly used method of lossy compression for digital images. The degree of compression can be adjusted, allowing a selectable tradeoff between storage size and image quality. JPEG typically achieves 10:1 compression with little perceptible loss in image quality.

The term JPEG is an acronym for the Joint Photographic Experts Group, which was organized in 1986 and created the first standard in 1992. JPEG uses a lossy form of compression based on the discrete cosine transform (DCT). Lossy means that some original image information is lost and cannot be restored, possibly affecting image quality.

Progressive JPEG

The interlaced Progressive JPEG format compresses data in multiple passes of progressively higher detail. This is ideal for large images that will be displayed while downloading over a slow connection, allowing a reasonable preview after receiving only a portion of the data.

Standard

Standard Photoshop CS2 JPEG Save as options

JPEG-HDR

JPEG-HDR is an extension to the standard JPEG image file format allowing it to store high dynamic range images. It was created by Greg Ward and Maryann Simmons as a way to store high dynamic range images inside a standard JPEG file.

JPEG-XR

JPEG XR is a still-image compression standard and file format for continuous tone photographic images, based on technology originally developed and patented by Microsoft under the name HD Photo. It supports both lossy and lossless compression and is supported in Internet Explorer 9 and later versions. Today there are not yet any cameras that shoot photos in the JPEG XR (.JXR) format.

Lossless JPEG

An optional lossless mode was defined in the JPEG standard in 1993, using a completely different technique than the lossy JPEG standard. The lossless coding process employs a simple predictive coding model called differential pulse code modulation (DPCM). Lossless JPEG has some popularity in medical imaging (DICOM), and is used in DNG and some digital cameras to compress raw images, but otherwise was never widely adopted.

Today the term lossless JPEG is usually used as an umbrella term to refer to all lossless image compression schemes, including JPEG-LS and JPEG 2000.

JPEG-LS

JPEG-LS is a lossless/near-lossless compression standard for continuous-tone images with the official designation ISO-14495-1/ITU-T.87. Part 1 of this standard was finalized in 1999. Besides lossless compression, JPEG-LS also provides a lossy mode (“near-lossless”) where the maximum absolute error can be controlled by the encoder. The filename extension is .jls. Compression for JPEG-LS is generally much faster than JPEG 2000 and much better than the original lossless JPEG standard. The JPEG-LS standard is based on the LOCO-I algorithm (LOw COmplexity LOssless COmpression for Images) developed at Hewlett-Packard Laboratories.

JPEG 2000

JPEG 2000 is an image compression standard and coding system defined in 2000 with the intention of superseding the original discrete cosine transform-based JPEG standard with a newly designed, wavelet-based method. The standardized filename extension is .jp2 or .jpx for the extended part-2 specification. JPEG 2000 is not widely supported in web browsers and therefore not generally used on the Internet. JP2 includes mandatory metadata such as information about an image’s color space. It handles alpha transparency and 16-bit color mode.



Photoshop CS2 JPEG-LS plugin by HP

HP offers a copyrighted Photoshop plugin for JPEG-LS. Additional information about the JPEG-LS source code is available at the Wayback Archive webpage of Aleks Jakulin.

A free JPEG 2000 (j2k) plugin for Photoshop is provided by fnord software, a small software boutique in San Francisco, creating graphics software primarily for Macintosh computers. fnord software also provides a SuperPNG plugin for Adobe Photoshop and a WebM plugin for Adobe Premiere.


JPEG 2000 plugin from fnord software in Photoshop CS2

The advanced features are not working as expected in my Photoshop CS2 version.

Microsoft provides a Photoshop plugin for JPEG-XR.


JPEG-XR plugin from Microsoft in Photoshop CS2


DICOM standard

Last update : May 31, 2016

DICOM (Digital Imaging and Communications in Medicine) is a software integration standard that is used in medical imaging. Its success relies on the ability of modern imaging equipment, manufactured by many different vendors, to seamlessly collaborate and integrate. Medical imaging devices are called Imaging Modalities and include X-Rays, Ultrasound (US), Computed Radiography (CR), Computed Tomography (CT), Magnetic Resonance Imaging (MRI), Endoscopy (ES) and others.

DICOM standard

The Digital Imaging and Communications in Medicine (DICOM) standard (now ISO 12052) was first conceived by the American College of Radiology (ACR) and the National Electrical Manufacturers Association (NEMA) in 1983. DICOM changed the face of clinical medicine. Release 3 of the standard was published in 1993 and since then an updated version has been edited every year. The current version is PS3-2016b. The DICOM standard is often considered to be very complicated or tricky; the publication is now more than 3,000 pages long. A list of the 18 parts (parts 9 and 13 were retired) of the DICOM standard is shown below :

  • Part 3.1: Introduction and Overview
  • Part 3.2: Conformance
  • Part 3.3: Information Object Definitions
  • Part 3.4: Service Class Specifications
  • Part 3.5: Data Structures and Encoding
  • Part 3.6: Data Dictionary
  • Part 3.7: Message Exchange
  • Part 3.8: Network Communication Support for Message Exchange
  • Part 3.10: Media Storage and File Format for Media Interchange
  • Part 3.11: Media Storage Application Profiles
  • Part 3.12: Media Formats and Physical Media for Media Interchange
  • Part 3.14: Grayscale Standard Display Function
  • Part 3.15: Security and System Management Profiles
  • Part 3.16: Content Mapping Resource
  • Part 3.17: Explanatory Information
  • Part 3.18: Web Access to DICOM Persistent Objects (WADO)
  • Part 3.19: Application Hosting
  • Part 3.20: Transformation of DICOM to and from HL7 Standards

The DICOM standard is promoted by IHE (Integrating the Healthcare Enterprise), an initiative by healthcare professionals and industry to improve the way computer systems in healthcare share information.

DICOM core

The DICOM core is a file format and a networking protocol.

All medical images are saved in DICOM format, but DICOM files contain more than just images. Every DICOM file holds information about the patient, the context of the study, the type of equipment used, etc.

All medical imaging devices that are connected to a health network (hospital, …) use the DICOM network protocol to exchange information, mainly images and binary data. This protocol makes it possible to search for medical images in an archive and to download them to a doctor’s workstation in order to display them.

DICOM Objects

Developers who know object-oriented programming should be familiar with DICOM objects. Patients, medical devices, clinical studies etc. are entities in the real world. DICOM has abstract definitions for these entities, called DICOM Information Entities (IEs). DICOM uses a data model with SOP (Service Object Pair) classes and Information Object Definitions (IODs) to handle these entities in the DICOM world. For example, a patient IOD has attributes for the patient’s name, ID, date of birth, weight, sex and all other clinically relevant patient-related information.

The DICOM data model defines four object levels :

  1. Patient
  2. Study
  3. Series & Equipment
  4. Instance (image)

The name instance, instead of image, was introduced some years ago because there are now DICOM objects at the fourth level which are not images. These objects are sometimes called DICOM P10 instances, with reference to part PS3.10 of the DICOM standard.

Each of the levels can contain several sub-levels. The following figure shows the DICOM information hierarchy :


DICOM Information Hierarchy

One patient may have multiple studies at different times, each study may include several series with one or more instances.
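This hierarchy can be sketched as nested javascript objects; all names and values below are hypothetical placeholders, not real DICOM identifiers :

```javascript
// Hypothetical example of the DICOM information hierarchy :
// one patient > studies > series > instances.
var patient = {
  name: "DOE^JOHN",          // placeholder patient name
  id: "PAT-0001",            // placeholder patient ID
  studies: [{
    studyInstanceUID: "placeholder-study-uid",
    series: [{
      seriesInstanceUID: "placeholder-series-uid",
      instances: ["placeholder-sop-instance-uid"]
    }]
  }]
};

console.log(patient.studies[0].series[0].instances.length); // 1
```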

DICOM differentiates between normalized and composite IODs. A normalized IOD represents a single real-world entity with inherent attributes. Composite IODs are mixtures of several real-world entities.

The DICOM Information Modules are used to group attributes into logical and structured units. Modules can be mandatory, conditional, optional or user-defined. DICOM specifies for each IE the modules it should include. For example, the patient IE should include the patient module, the specimen identification module and the clinical trial subject module. All DICOM objects must include the SOP common module and the four modules of the data model. All DICOM instances that are images must include the Image module.

The attributes of an IOD are encoded as Data Elements.

Let’s summarize the chapter about DICOM objects in a few other words :

  • The DICOM data model is made of Information Entities (IEs)
  • The classes of the DICOM data model are called SOP classes
  • SOP is a pair of a DICOM object and a DICOM service
  • SOP classes are defined by IODs
  • IODs are collections of modules
  • Modules are collections of data elements
  • Data elements collections refer to an Information Entity (IE)

DICOM Data Elements

DICOM objects are encoded in binary form as sequential lists of attributes, called Data Elements.  Every data element has :

  • a tag to uniquely define the element and its properties. A tag is comprised of a 16-bit Group number and a 16-bit Element number. Data elements that are related to one another have the same Group number.
  • an optional Value Representation (VR), represented as a two-character code, to define the data type (examples : UI = unique identifier, CS = coded string, US = unsigned short, …). The VR is called explicit when present. As the data type is also specified by the tag, the VR is omitted (implicit VR) when this is determined by the negotiated Transfer Syntax.
  • a length to specify the size of the value field. DICOM value lengths are always even. Values of odd length are padded with a space in the case of strings and with a null byte (0x0) in the case of binary types. DICOM lengths are specified with 16 bits if the VR is explicit and with 32 bits if it is implicit. The value field length can be undefined (FFFFFFFFH).
  • a value field with the corresponding information. If the length of the value field is undefined, a sequence delimitation item marks its end.
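As a sketch (assuming explicit VR little-endian encoding and ignoring the VRs that use 32-bit length fields such as OB, OW or SQ), the header of a data element can be decoded in javascript like this :

```javascript
// Decode one explicit-VR little-endian DICOM data element header
// at offset o (simplified : short 16-bit length form only).
function readElementHeader(bytes, o) {
  var group = bytes[o] | (bytes[o + 1] << 8);
  var element = bytes[o + 2] | (bytes[o + 3] << 8);
  var vr = String.fromCharCode(bytes[o + 4], bytes[o + 5]);
  var length = bytes[o + 6] | (bytes[o + 7] << 8);
  return { tag: [group, element], vr: vr, length: length };
}

// Hypothetical bytes for tag (0010,0010) Patient's Name, VR "PN", length 6.
var el = readElementHeader([0x10, 0x00, 0x10, 0x00, 0x50, 0x4E, 0x06, 0x00], 0);
console.log(el.vr, el.length); // PN 6
```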

DICOM File displayed in Hexeditor

There are thousands of different DICOM data elements with a well-specified meaning. The names of the data elements are referenced in a DICOM dictionary. Here are some examples :

Group Number  Element Number  Attribute Name
0010          0010            Patient’s Name
0010          0030            Patient’s Birthday
0020          0011            Series Number
0028          0010            Image Rows
0028          0011            Image Columns

The data elements are sorted sequentially by ascending tag number. It’s possible to define additional elements, called private elements, to add specific informations, but it is recommended to use existing elements whenever possible. Standard DICOM data elements have an even group number and private data elements have an odd group number.

Secondary Capture Image Object

The simplest DICOM object is the Secondary Capture (SC) Image. It has a minimal set of data elements that a DICOM application needs in order to display and archive a medical image. It’s not related to a particular medical image device. The following table shows the mandatory modules for the Secondary Capture Image Object :

Information Entity (IE) Module Reference
Patient Patient C.7.1.1
Study General Study C.7.2.1
Series General Series C.7.3.1
Equipment SC Equipment C.8.6.1
Instance (Image) General Image C.7.6.1
Image Image Pixel C.7.6.3
Image SC Image C.8.6.2
Image SOP Common C.12.1

There are only two modules that are specific to SC (Secondary Capture), the other modules are common and shared by many IODs. Every module is rather large and includes lots of data elements, but luckily most of these data elements are optional. In the DICOM standard the modules are marked with a type column :

  • value 1 : data element is mandatory and must be set
  • value 2 : data element is mandatory, but can be null
  • value 3 : data element is optional

Values 1 and 2 can also be marked with a C suffix for conditional.

The following table shows the mandatory data elements for the Secondary Capture Image Object :

Attribute Name Tag Type
Patient’s Name (0010, 0010) 2
Patient ID (0010, 0020) 2
Patient’s Birth Date (0010, 0030) 2
Patient’s Sex (0010, 0040) 2
Study Instance UID (0020, 000D) 1
Study Date (0008, 0020) 2
Study Time (0008, 0030) 2
Referring Physician’s Name (0008, 0090) 2
Study ID (0020, 0010) 2
Accession Number (0008, 0050) 2
Modality (0008, 0060) 1
Series Instance UID (0020, 000E) 1
Series Number (0020, 0011) 2
Laterality (0020, 0060) 2C
Conversion Type (0008, 0064) 1
Instance Number (0020, 0013) 2
Patient Orientation (0020, 0020) 2C
Samples per Pixel (0028, 0002) 1
Photometric Interpretation (0028, 0004) 1
Rows (0028, 0010) 1
Columns (0028, 0011) 1
Bits Allocated (0028, 0100) 1
Bits Stored (0028, 0101) 1
High Bit (0028, 0102) 1
Pixel Representation (0028, 0103) 1
Pixel Data (7FE0, 0010) 1C
Planar Configuration (0028, 0006) 1C
SOP Class UID (0008, 0016) 1
SOP Instance UID (0008, 0018) 1
Specific Character Set (0008, 0005) 1C

Unique Identifiers

DICOM makes extensive use of Unique Identifiers (UIDs). Almost every entity has a UID, with the exception of the patient (patients are identified with their name and their ID). DICOM defines a mechanism in order to make sure UIDs are globally unique. Every DICOM application should acquire a root UID that is used as a prefix for the UIDs it creates.

Usually the Study Instance UID is provided through a DICOM service called Modality Worklist. The Series UID and the SOP Instance UID are always generated by the application.
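A minimal sketch of such a UID generator, assuming a made-up root UID (a real application must use a registered root) :

```javascript
// Hypothetical root UID ; real applications must acquire a registered root.
var ROOT_UID = "1.2.999.1234";
var counter = 0;

// Derive a new UID from the root, a timestamp and a counter,
// so that every generated UID is unique within this application.
function newUID() {
  counter += 1;
  return ROOT_UID + "." + Date.now() + "." + counter;
}

console.log(newUID()); // e.g. 1.2.999.1234.<timestamp>.1
```

Note that DICOM UIDs are limited to 64 characters, so the lengths of the root and of the generated suffix must be budgeted accordingly.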

DICOM Networking Protocol

The following figure shows the typical medical imaging workflow.

Workflow

Medical Imaging Workflow

The workflow begins when the patient gets registered in the Hospital Information System (HIS). An Electronic Medical Record (EMR) is generated for the new patient. A medical procedure, for example an X-Ray of the thorax, is ordered and placed into the Radiology Information System (RIS). The RIS generates a study scheduled and sends an HL7-event to a HL7-DICOM gateway. The gateway sends the study scheduled event to the Picture Archiving and Communication System (PACS) to prefetch previous exams of the patient. When the patient arrives at the modality (X-Ray in this example), the technologist requests the Modality Worklist (MWL : a sort of task manager) and updates the RIS. The acquired images are forwarded to the PACS where they are checked, stored and forwarded to the review workstation of the radiologist. The radiologist views the study images and reports possible pathologies.

The Modality Worklist is combined with a Modality Performed Procedure Step (MPPS) that allows the Modality to report the task status and to take ownership over the task. Associated to MPPS are the Requested Procedure (RP), the Requested Procedure Description, the Requested Procedure Code Sequence and the Scheduled Procedure Step (SPS). MPPS is the checkmark for MWL.

The main DICOM nodes in this workflow are the Modality, the PACS and the Workstation. DICOM’s network communication is based on the ISO/OSI model. The entities (nodes) in the network are specified with the following parameters :

  • Application Entity (AE) Title : max 16 characters, case sensitive;
    it’s a sort of alias for the combination of IP address and port number
  • IP address
  • Port number
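Such a node description can be sketched as a small configuration object; the AE title, IP address and port shown are hypothetical examples :

```javascript
// Hypothetical application entity configuration for a workstation node.
var localAE = { title: "WORKSTATION1", ip: "192.168.1.10", port: 11112 };

// AE titles are limited to 16 characters and are case sensitive;
// leading or trailing spaces are not meaningful.
function isValidAETitle(title) {
  return title.length > 0 && title.length <= 16 && title === title.trim();
}

console.log(isValidAETitle(localAE.title)); // true
```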

With the parameters of the source and the target AE, we can start a peer-to-peer session called an Association. This is the first step of a DICOM network communication. The second step is exchanging DICOM commands. Most problems in DICOM communications are related to failing association negotiations. Errors are reported in the DICOM log.

The calling AE sends an Association Request to the target (called) AE. The request describes the application, its capabilities and its intentions in this session (the presentation context). The called AE checks the request and sends back an Association Response, confirming what can and cannot be done. If the response does not match its expectations, the calling AE can reject the association. An important parameter is the Max PDU Size, which tells the peer how large the application-level packets (Protocol Data Units) it can buffer are. Some calling application entities use buffer sizes that are too large for the called application entity; the result can be a crash due to a buffer overflow. If the called AE does not respond at all, the request times out.
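As an illustration, the fixed part of an A-ASSOCIATE-RQ PDU can be built by hand in a few lines of Python. This is only a sketch of the PDU layout defined in part PS3.8 of the standard, not a working DICOM stack: the AE titles are hypothetical, and the variable items (application context, presentation contexts, user information including the Max PDU Size) are left out for brevity.

```python
import struct

def associate_rq_header(called_aet: str, calling_aet: str, var_items: bytes) -> bytes:
    """Build an A-ASSOCIATE-RQ PDU: 6-byte PDU header + 68-byte fixed part + items."""
    fixed = (
        struct.pack(">HH", 1, 0)                  # protocol version 1 + 2 reserved bytes
        + called_aet.ljust(16).encode("ascii")    # called AE title, space-padded to 16
        + calling_aet.ljust(16).encode("ascii")   # calling AE title, space-padded to 16
        + bytes(32)                               # 32 reserved bytes
    )
    body = fixed + var_items
    # PDU header: type 0x01 (A-ASSOCIATE-RQ), 1 reserved byte, 4-byte big-endian length
    return struct.pack(">BBI", 0x01, 0, len(body)) + body

pdu = associate_rq_header("ARCHIVE", "WORKSTATION1", b"")
```

Note that the 4-byte PDU length field counts only the bytes that follow it, which is why the 6-byte PDU header itself is excluded.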

A DICOM service related to the exchange step is Storage Commitment (SCM), which lets you verify whether files sent to a PACS were indeed stored correctly. The result is sent using the N-EVENT-REPORT command.

DICOM services are used for communication within an entity and between entities. A service is built on top of a set of DICOM Message Service Elements (DIMSEs) adapted to normalized and composite objects. DIMSEs are paired: when a device issues a command, the receiver responds accordingly.

DICOM services are referred to as Service Classes. A Service Class Provider (SCP) provides a service, a Service Class User (SCU) uses a service. A device can be an SCP, an SCU or both. The following table shows the main DIMSEs: commands with a C- prefix are composite and those with an N- prefix are normalized.

C-ECHO          application-level ping to verify a connection; this service is mandatory for all AEs
C-FIND          inquiries about information object instances
C-STORE         allows one AE to send a DICOM object to another AE
C-GET           transmission of an information object instance
C-MOVE          similar to C-GET, but the receiver is usually not the command initiator
C-CANCEL        interruption of a command
N-CREATE        creation of an information object
N-GET           retrieval of an information object attribute value
N-SET           modification of information object attribute values
N-DELETE        deletion of an information object
N-EVENT-REPORT  notification, for example the result of a Storage Commitment

A fundamental service implemented by every workstation is the Query / Retrieve (Q / R) task. The query is done with C-FIND; the search keys are Patient Name, Patient ID, Accession Number, Study Description, Study Date, Modality, … The retrieve is done with the C-MOVE and C-STORE commands.

DICOM Serialization = Transfer Syntax

To transmit DICOM objects through a network or to save them into a file, they must be serialized. The DICOM Transfer Syntax defines how this is to be done. There are 3 basic transfer syntaxes, presented in the next table:

UID Transfer Syntax Notes
1.2.840.10008.1.2.1 Little Endian Explicit (LEE) stores data little-end first, explicit VR
1.2.840.10008.1.2.2 Big Endian Explicit (BEE) stores data big-end first, explicit VR
1.2.840.10008.1.2 Little Endian Implicit (LEI) stores data little-end first, implicit VR

A complete list is shown in my post DICOM TransferSyntaxUID.

The transfer syntax specifies 3 points :

  • whether VRs are explicit
  • whether the low byte of multi-byte data is serialized first or last
  • whether pixel data is compressed and, if so, which compression algorithm is used
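The first two points can be made concrete with a sketch of the two uncompressed encodings. The helpers below cover only the short-form data elements (VRs with a 16-bit length field); long-form VRs such as OB and OW use a different explicit layout, and value padding to even length is assumed to have been done by the caller.

```python
import struct

def encode_explicit_le(group: int, element: int, vr: str, value: bytes) -> bytes:
    # Explicit VR, short form: tag (4 bytes LE), 2-char VR, 16-bit length, value
    return (struct.pack("<HH", group, element)
            + vr.encode("ascii")
            + struct.pack("<H", len(value))
            + value)

def encode_implicit_le(group: int, element: int, value: bytes) -> bytes:
    # Implicit VR: tag (4 bytes LE), 32-bit length, no VR field, value
    return struct.pack("<HHI", group, element, len(value)) + value

# PatientName (0010, 0010), VR = PN, even-length value
name = b"DOE^JOHN"
explicit = encode_explicit_le(0x0010, 0x0010, "PN", name)
implicit = encode_implicit_le(0x0010, 0x0010, name)
```

For short VRs both headers happen to be 8 bytes long: the explicit form spends 2 bytes on the VR and 2 on the length, while the implicit form spends 4 bytes on the length alone.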

LEI is the default Transfer Syntax and shall be supported by every conformant DICOM implementation. Compressed pixel data transfer syntaxes are always Explicit VR Little Endian. The following compression methods can be used:

  • JPEG
  • JPEG Lossless
  • JPEG 2000
  • RLE
  • JPIP
  • MPEG2
  • MPEG4

If DICOM objects are serialized into files, there is additional information to provide:

  • a preamble of 132 bytes, where the first 128 bytes are usually null and the last 4 bytes are the DICOM magic number DICM
  • a file meta header consisting of data elements of group 0002, written in Little Endian Explicit
  • an element (0002, 0010) with the Transfer Syntax UID used for all data elements other than group 0002

Group 0002 data elements are strictly used for files and must be removed before sending DICOM objects over the network.
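A minimal sketch of writing and checking the file preamble, assuming only the Python standard library:

```python
import tempfile

def write_preamble(f) -> None:
    """Write the 132-byte DICOM file preamble: 128 null bytes + the DICM magic."""
    f.write(bytes(128))
    f.write(b"DICM")

def is_dicom_file(path: str) -> bool:
    """True if the file starts with a valid preamble (magic at offset 128)."""
    with open(path, "rb") as f:
        head = f.read(132)
    return len(head) == 132 and head[128:132] == b"DICM"

# usage: create a throwaway file carrying only the preamble
with tempfile.NamedTemporaryFile(delete=False, suffix=".dcm") as tmp:
    write_preamble(tmp)
path = tmp.name
is_dicom_file(path)
```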

If DICOM files are saved on a standard CD / DVD, the media should have a file named DICOMDIR in its root directory. DICOMDIR is a DICOM object listing the paths to the files stored on the media. The names of these files should consist of capital alphanumeric characters, at most 8 characters, with no suffix. An example of a naming convention is shown hereafter:

  • directory PAxxx for every patient (xxx = 001, 002, …)
  • inside PAxxx a directory STyyy for every study (yyy = 001, 002, …)
  • inside STyyy a directory SEzzz for every series (zzz = 001, 002, …)
  • inside SEzzz a DICOM file MODwwwww for every instance
    (MOD = CT, CR, …; wwwww = 00001, 00002, …)

DICOM files stored in the cloud or on non-DICOM media have the suffix .dcm.
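The naming convention above can be sketched as a small helper; the function name and the use of forward slashes are illustrative choices, not part of the standard:

```python
def media_path(patient_no: int, study_no: int, series_no: int,
               modality: str, instance_no: int) -> str:
    """Build a media file path following the PA/ST/SE naming convention."""
    # File name: modality prefix + 5-digit counter, capital alphanumeric,
    # at most 8 characters, no suffix
    name = f"{modality.upper()}{instance_no:05d}"
    assert len(name) <= 8 and name.isalnum()
    return "/".join((f"PA{patient_no:03d}", f"ST{study_no:03d}",
                     f"SE{series_no:03d}", name))

media_path(1, 2, 3, "CT", 7)   # 'PA001/ST002/SE003/CT00007'
```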

Pixel Data

Because imaging is at the heart of DICOM, we deal with the bits of images in a specific chapter called Pixel Data. The data elements of the pixel module have group number 0028 and describe how to read the image pixels. The next table shows the different data elements:

Tag VR Description
(0028, 0002) US SamplesPerPixel : number of color channels; grey = 1
(0028, 0004) CS PhotometricInterpretation : MONOCHROME1 -> 0 = white; MONOCHROME2 -> 0 = black; YBR_FULL -> YCbCr space
(0028, 0006) US PlanarConfiguration : 0 = interleaved color pixels; 1 = separated planes
(0028, 0008) IS NumberOfFrames : number of frames in the image; multi-frame > 1
(0028, 0010) US Rows : height of the frame in pixels
(0028, 0011) US Columns : width of the frame in pixels
(0028, 0100) US BitsAllocated : space allocated per sample in the buffer (usually 16)
(0028, 0101) US BitsStored : how much of the allocated space is used (in CT usually 12)
(0028, 0102) US HighBit : the bit number of the last bit used (in CT usually 11)
(0028, 0103) US PixelRepresentation : unsigned = 0 (default); signed = 1
(0028, 1050) DS WindowCenter : center of the displayed grey-value window
(0028, 1051) DS WindowWidth : width of the displayed grey-value window
(0028, 1052) DS RescaleIntercept : intercept b in the mapping output = m * SV + b
(0028, 1053) DS RescaleSlope : slope m in the mapping output = m * SV + b
(7FE0, 0010) OB PixelData : the pixel values themselves (for CT -> VR = OW : other word)

The group number of the pixel data element is 7FE0, to make sure that this element is the last one in the DICOM object. It is usually a very long data element and can easily be skipped if we only want to read the headers.
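The rescale and window elements from the table combine into a simple grey-value mapping. The sketch below uses a simplified linear window (the exact VOI LUT formula in the standard handles the window edges slightly differently); the CT values in the example are typical, not taken from a real data set.

```python
def to_display(stored_value: int, slope: float, intercept: float,
               center: float, width: float) -> int:
    """Map a stored pixel value to an 8-bit display grey value."""
    # Modality rescale: output = RescaleSlope * SV + RescaleIntercept
    value = stored_value * slope + intercept
    low, high = center - width / 2, center + width / 2
    if value <= low:
        return 0
    if value >= high:
        return 255
    # Linear mapping of the window interval onto the 8-bit display range
    return round((value - low) / width * 255)

# CT example: slope 1, intercept -1024, soft-tissue window C = 40 / W = 400
to_display(1064, 1, -1024, 40, 400)   # stored 1064 -> 40 HU -> mid grey
```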

The Image Plane Module defines the direction of the image with reference to the patient body. It also gives the dimensions of the pixels in mm. The corresponding data element has tag 0020, 0037. DICOM defines a Reference Coordinate System (RCS) of the patient body.

The X-direction goes from Right to Left in the axial (transversal) cut. The Y-direction goes from Front (Anterior) to Back (Posterior) in the sagittal cut. The Z-direction goes from feet to head in the coronal cut. The following letters are assigned to the ends of each direction:

  • [R] – Right – X decreases
  • [L] – Left – X increases
  • [A] – Anterior – Y decreases
  • [P] – Posterior – Y increases
  • [F] – Feet – Z decreases
  • [H] – Head – Z increases

These letters are usually displayed on the sides of a DICOM viewer. If the patient is in an oblique position, there can be letter combinations like [PR] for Posterior Right or [ALH] for Anterior Left Head.
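The letter assignment can be sketched as a small function that labels a direction cosine vector in the RCS, listing the letters in order of decreasing component magnitude; the threshold value is an illustrative choice for suppressing near-zero components.

```python
def orientation_label(direction, threshold=0.0001) -> str:
    """Patient-orientation letters for a direction (x, y, z) in the RCS."""
    # DICOM RCS: +x -> patient Left, +y -> Posterior, +z -> Head
    x, y, z = direction
    axes = [(abs(x), "L" if x > 0 else "R"),
            (abs(y), "P" if y > 0 else "A"),
            (abs(z), "H" if z > 0 else "F")]
    axes.sort(key=lambda t: -t[0])            # strongest component first
    return "".join(ch for mag, ch in axes if mag > threshold)

orientation_label((1, 0, 0))       # 'L'  (row direction of an axial image)
orientation_label((0, 0.7, 0.7))   # 'PH' (oblique position)
```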

DICOM Conformance Statement

DICOM requires that a Conformance Statement be written for each device or program claiming to be DICOM conformant. The format and content of a Conformance Statement are defined in the standard itself.

Links

Additional information about the DICOM standard is available at the following websites:

HDR : high-dynamic-range imaging

HDR (high-dynamic-range imaging) is a set of methods used in imaging and photography to capture a greater dynamic range between the lightest and darkest areas of an image than standard digital or photographic methods allow. The two main sources of high-dynamic-range images are computer renderings and the merging of multiple standard-dynamic-range (SDR) photographs created with exposure bracketing.

As the popularity of HDR imaging has increased in recent years, several camera manufacturers now offer built-in high-dynamic-range features. HDR is also integrated into newer smartphones. Since iOS 4.1, Apple iPhones have a built-in HDR functionality; Android added an HDR mode to the camera app in version 4.2 (Jelly Bean); Blackberry introduced HDR in the Z10 with OS update 10.1.


HDR Photo by Jon Rutlen on Flickr

More information about high-dynamic-range imaging is available at the following links:

JPEG Chroma Subsampling

The JPEG compressed file format can produce significant reductions in file size through lossy compression. The compression techniques take advantage of the limitations of the human eye by discarding image details that are less noticeable to the human observer.

Humans are much more sensitive to changes in luminance (brightness) than to differences in chrominance (color). JPEG can therefore discard much more color information than luminance information in the compression process. Chroma subsampling is the process whereby the color information in the image is sampled at a lower resolution than the original. JPEG translates 8-bit RGB data (Red, Green, Blue) into 8-bit YCbCr data (Luminance, Chroma Blue, Chroma Red).
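The RGB to YCbCr translation uses the standard JFIF full-range formulas, which can be sketched as follows:

```python
def rgb_to_ycbcr(r: int, g: int, b: int):
    """JFIF full-range RGB -> YCbCr conversion used by baseline JPEG."""
    y  =  0.299    * r + 0.587    * g + 0.114    * b
    cb = -0.168736 * r - 0.331264 * g + 0.5      * b + 128
    cr =  0.5      * r - 0.418688 * g - 0.081312 * b + 128
    return round(y), round(cb), round(cr)

rgb_to_ycbcr(255, 255, 255)   # (255, 128, 128): white carries no chroma
rgb_to_ycbcr(0, 0, 0)         # (0, 128, 128)
```

Note that neutral greys map to Cb = Cr = 128, so the chroma channels only encode deviations from grey, which is what makes them good candidates for subsampling.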

The different levels of YCbCr subsampling are :

  • 4:4:4 – The resolution of chrominance information is preserved at the same rate as the luminance information. (1×1, subsampling disabled)
  • 4:2:2 – Half of the horizontal resolution in the chrominance is dropped, while the full resolution is retained in the vertical direction, with respect to the luminance. (2×1 chroma subsampling)
  • 4:1:1 – Only a quarter of the chrominance information is preserved in the horizontal direction with respect to the luminance information. (4×1 chroma subsampling)
  • 4:2:0 – With respect to the information in the luminance channel, the chrominance resolution in both the horizontal and vertical directions is cut in half (2×2 chroma subsampling)

JPEG chroma subsampling is not a particularly good mechanism for compressing images used in the medical field where the chrominance may be equally as important as the luminance.

Photoshop uses different chroma subsampling levels depending on the Quality settings:

  • 2×2 Chroma Subsampling – Save Quality 0-6 or Save For Web Quality 0-50
  • No Chroma Subsampling – Save As Quality 7-12 or Save For Web Quality 51-100

Additional information about JPEG chroma subsampling is available at the following links:

Metadata handled by Synology Photostation

Last update : September 17, 2013

The following metadata for images are handled by the Photostation application of the Synology Diskstation :

---- EXIF ----

  • Make (IFD0)
  • Camera Model Name (IFD0)
  • Exposure Time (ExifIFD)
  • F Number (ExifIFD)
  • ISO (ExifIFD)
  • Exif Version (ExifIFD)
  • Date/Time Original (ExifIFD) : yyyy:mm:dd hh:mm:ss
  • GPS Version ID (GPS)
  • GPS Latitude Ref (GPS)
  • GPS Latitude (GPS)
  • GPS Longitude Ref (GPS)
  • GPS Longitude (GPS)

---- XMP ----

  • XMP Toolkit (XMP-x)
  • Region Person Display Name (XMP-MP)
  • Region Rectangle (XMP-MP) : x1, y1, x2, y2
  • Description (XMP-dc)
  • Subject (XMP-dc) : keywords

XMP-exif metadata tags are not recognised.

Optimize favicons

Last update : March 21, 2016

Favicons plugin by Telegraphics

Favicons (.ico files) are the little pictures shown at the left of the URL in the browser's address bar. Favicons are an excellent free branding tool for webmasters and blog owners; they help you create brand awareness. Favicons are downloaded by each new visitor to a website and are requested automatically by the browser. By optimizing favicons, you reduce bandwidth costs and server load.

With the code

<link rel="shortcut icon" href="chateau.ico" />

you can reference any location for the favicon. If it is absent, the browser tries to fetch it from the domain's root instead, and each time the browser requests this file, the cookies for the server's root are sent.

An outstanding tutorial on how to include favicons on websites, and especially in WordPress blogs, is available at MaxBlogPress. There are various free tools available to create favicons. A plugin for Photoshop to create and optimize favicons is available from Telegraphics, free software by Toby Thain.

There are also numerous online webtools available to create and optimize favicons, for example:

Some useful links to websites with more information about optimizing favicons are listed below :