DICOM TransferSyntaxUID

Following up on my recent post about the DICOM standard, the list of all valid transfer syntaxes is shown below. A DICOM transfer syntax defines how DICOM objects are serialized to transmit them through a network or to save them into a file.

The DICOM transfer syntax is specified by the TransferSyntaxUID located in element number (0002, 0010). There exist 35 different DICOM transfer syntaxes, but 14 have been retired from earlier standard versions and will not be supported in future DICOM releases.

The 21 valid DICOM transfer syntaxes are listed in the following table :

TransferSyntaxUID Transfer Syntax Name Comments
1.2.840.10008.1.2 Implicit VR Little Endian Default
1.2.840.10008.1.2.1 Explicit VR Little Endian
1.2.840.10008.1.2.1.99 Deflated Explicit VR Little Endian
1.2.840.10008.1.2.2 Explicit VR Big Endian
1.2.840.10008.1.2.4.50 JPEG Baseline (Process 1) Default Lossy JPEG 8-bit
1.2.840.10008.1.2.4.51 JPEG Baseline (Processes 2 & 4) Default Lossy JPEG 12-bit
1.2.840.10008.1.2.4.57 JPEG Lossless, Nonhierarchical (Process 14)
1.2.840.10008.1.2.4.70 JPEG Lossless, Nonhierarchical, First-Order Prediction
1.2.840.10008.1.2.4.80 JPEG-LS Lossless
1.2.840.10008.1.2.4.81 JPEG-LS Near-Lossless
1.2.840.10008.1.2.4.90 JPEG 2000 Lossless Only
1.2.840.10008.1.2.4.91 JPEG 2000
1.2.840.10008.1.2.4.92 JPEG 2000 Multicomponent Lossless Only
1.2.840.10008.1.2.4.93 JPEG 2000 Multicomponent
1.2.840.10008.1.2.4.94 JPIP Referenced
1.2.840.10008.1.2.4.95 JPIP Referenced Deflate
1.2.840.10008.1.2.5 RLE Lossless
1.2.840.10008.1.2.6.1 RFC 2557 MIME Encapsulation
1.2.840.10008.1.2.4.100 MPEG2 Main Profile Main Level
1.2.840.10008.1.2.4.102 MPEG-4 AVC/H.264 High Profile Level 4.1
1.2.840.10008.1.2.4.103 MPEG-4 AVC/H.264 BD-compatible High Profile Level 4.1

Image Manipulations with Javascript

Introduction

Today most computers, graphics cards and monitors can display 16-bit, 24-bit, 32-bit or even 48-bit color depth. The color quality can be selected in the control center of the graphics (video) card.

Example : ATI Radeon Control Center Window

8-bit color

In 8-bit color graphics each pixel is represented by one byte, so the maximum number of colors that can be displayed at any one time is 256. There are two forms of 8-bit color graphics. The most common uses a separate palette of 256 colors, where each of the 256 entries in the palette map is given red, green, and blue values. In the other form the 8 bits directly describe red, green, and blue values, typically with 3 bits for red, 3 bits for green and 2 bits for blue.

16-bit color

With 16-bit color, also called High color, one of the bits of the two bytes is set aside for an alpha channel and the remaining 15 bits are split between the red, green, and blue components, allowing 32,768 possible colors for each pixel. When all 16 bits are used, one of the components (usually green) gets an extra bit, allowing 64 levels of intensity for that component, and a total of 65,536 available colors.
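
For the direct-color forms, the bit packing can be illustrated with a few lines of Javascript. This is a sketch with hypothetical helper functions, not code from any particular library :

// pack 8-bit R, G, B channels into one byte (3-3-2, as in 8-bit direct color)
function packRGB332(r, g, b) {
  return (r & 0xE0) | ((g & 0xE0) >> 3) | (b >> 6);
}
// pack 8-bit R, G, B channels into two bytes (5-6-5, as in 16-bit color)
function packRGB565(r, g, b) {
  return ((r & 0xF8) << 8) | ((g & 0xFC) << 3) | (b >> 3);
}
// example : pure green (0, 255, 0)
console.log(packRGB332(0, 255, 0)); // 28 (binary 00011100)
console.log(packRGB565(0, 255, 0).toString(16)); // "7e0"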

24-bit color

Using 24-bit color, also called True color, computers and monitors can display as many as 16,777,216 different color combinations.

32-bit color

Like 24-bit color, 32-bit color supports 16,777,216 colors, with an additional alpha channel to create more convincing gradients, shadows, and transparencies. With the alpha channel, 32-bit color supports 4,294,967,296 possible combinations.

48-bit color

Systems displaying a billion or more colors are called Deep Color. In digital images, 48 bits per pixel, or 16 bits per color channel (red, green and blue), is used for accurate processing. For the human eye, it is almost impossible to see any difference between such an image and a 24-bit image.

CLUT

A color look-up table (CLUT) is a mechanism used to transform a range of input colors into another range of colors. It can be a hardware device built into an imaging system or a software function built into an image processing application.
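
In software, a CLUT is just an indexed lookup. Here is a minimal Javascript sketch with made-up palette data :

// a made-up 256-entry palette : index -> [red, green, blue]
var palette = [];
for (var n = 0; n < 256; n++) {
  palette[n] = [n, 255 - n, 128]; // arbitrary demo colors
}
// an indexed image : one palette index per pixel
var indexedPixels = [0, 64, 128, 255];
// the lookup transforms each index into its RGB triple
var rgbPixels = indexedPixels.map(function (idx) { return palette[idx]; });
console.log(rgbPixels); // [[0,255,128], [64,191,128], [128,127,128], [255,0,128]]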

HDR

High-dynamic-range imaging (HDR) is a set of techniques used in imaging and photography to reproduce a greater dynamic range of luminosity than is possible with standard digital imaging or photographic techniques. The aim is to present the human eye with a similar range of luminance as that which, through the visual system, is familiar in everyday life.

Pixel Image

PixelImage 8×8

To dive into image manipulations with Javascript, we will use the Pixel Image shown above, which has 8 x 8 pixels and a color depth of 1 bit. The bit value 0 is associated with the color white, 1 means black. We will see later that in real systems the colors are inverted (1 = white, 0 = black). In the next steps we will look at how to display this image in a browser with HTML5 and javascript.

The following table shows the pixel data for the image. I used the MathIsFun website to do the binary to hexadecimal and decimal conversion.

row column bits hexadecimal decimal
1 01111110 7E 126
2 10000001 81 129
3 10100101 A5 165
4 10000001 81 129
5 10011001 99 153
6 10000001 81 129
7 11100111 E7 231
8 00111100 3C 60

We can now use the following code to draw the pixels on a canvas :

<body>
<canvas id="pixelboard" width="512" height="512"></canvas>
<script>
var myCanvas = document.getElementById("pixelboard");
var myContext = myCanvas.getContext("2d");
myContext.fillStyle = "silver";
myContext.fillRect(0, 0, myCanvas.width, myCanvas.height);
myContext.fillStyle = "black";
// here are the pixel data for the 8 rows
var pixelData = [126, 129, 165, 129, 153, 129, 231, 60];
for (var i = 0; i < pixelData.length; i++) {
  // convert the row value to a binary string, e.g. 126 -> "1111110"
  var base2 = pixelData[i].toString(2);
  var p = 7;
  // set pixels in canvas from right to left; leading zeros are dropped
  // by toString(2), but those pixels stay white anyway
  for (var j = base2.length - 1; j >= 0; j--) {
    if (base2[j] === "1") {
      myContext.fillRect(p * 64, i * 64, 64, 64);
    } // end if
    p--;
  } // end base2
} // end pixelData
</script>
</body>

Click this PixelData link to see it working. The image is stored in 8 bytes.

PNG Image

To draw the picture in the original size of 8×8 pixels, we change the canvas size

<canvas id="pixelboard" width="8" height="8"></canvas> 

and the code line in the inner loop as follows

myContext.fillRect(p, i, 1, 1);

Click this PixelData link to see it working.

We can save the small Pixel image in the browser with a right mouse click as canvas.png file. The size of this PNG image file is 108 bytes, 100 bytes more than the size of the image stored in our javascript. That's a lot of overhead, a sort of design overkill!

Let’s have a look inside this file with a hex editor (HxD from Maël Hörz).

Anatomy of a small PNG Image File

We can identify the words PNG, IHDR, IDAT and IEND. The PNG format is specified by the W3C. A short description is available at the FileFormat.Info website. PNG (pronounced “ping”) is a bitmap file format used to transmit and store bitmapped images. PNG supports the capability of storing up to 16 bits (gray-scale) or 48 bits (truecolor) per pixel, and up to 16 bits of alpha data. It handles the progressive display of image data and the storage of gamma, transparency and textual information, and it uses an efficient and lossless form of data compression.

A PNG format file consists of an 8-byte identification signature followed by chunks of data :

  • Header chunk (IHDR) : the header chunk (13 bytes) contains basic information about the image data and must appear as the first chunk, and there must only be one header chunk in a PNG file.
  • Palette chunk (PLTE) : the palette chunk stores the colormap data associated with the image data. This chunk is present only if the image data uses a color palette and must appear before the image data chunk.
  • Image data chunk (IDAT) : the image data chunk stores the actual image data, and multiple image data chunks may occur in a data stream and must be stored in contiguous order.
  • Image trailer chunk (IEND) : the image trailer chunk must be the final chunk and marks the end of the PNG file or data stream.
  • Optional chunks are called ancillary chunks (examples : background, gamma, histogram, transparency, …) and can be inserted before or after the image data chunks. Ten ancillary chunks have been defined in the first PNG version.

Each chunk has the following structure and carries an overhead of 12 bytes :

  • DataLength (4 bytes)
  • ChunkType (4 bytes)
  • Data (number of bytes specified in DataLength)
  • CRC-32 (4 bytes)
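
Thanks to this fixed layout, walking the chunks of a PNG file is straightforward. The following Javascript sketch (assuming the file bytes are available in an ArrayBuffer, for example from a FileReader) lists the chunk types and data lengths :

// list all chunks of a PNG file held in an ArrayBuffer;
// all multi-byte integers in PNG are big-endian (DataView's default)
function listChunks(buffer) {
  var view = new DataView(buffer);
  var chunks = [];
  var offset = 8; // skip the 8-byte PNG signature
  while (offset < view.byteLength) {
    var dataLength = view.getUint32(offset); // DataLength (4 bytes)
    var type = String.fromCharCode(          // ChunkType (4 bytes)
      view.getUint8(offset + 4), view.getUint8(offset + 5),
      view.getUint8(offset + 6), view.getUint8(offset + 7));
    chunks.push({ type: type, dataLength: dataLength });
    offset += 12 + dataLength; // 12 bytes of overhead + data
  }
  return chunks;
}
// for our canvas.png this yields IHDR (13), IDAT (49) and IEND (0)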

The IHDR chunk specifies the following parameters in the 13 data bytes :

  • ImageWidth in pixels (4 bytes)
  • ImageHeight in pixels (4 bytes)
  • BitDepth (1 byte)
  • ColorType (1 byte)
  • Compression (1 byte)
  • Filter (1 byte)
  • Interlace (1 byte)

An analysis of our PixelData PNG image provides the following results :

  • ImageWidth in pixels :  00 00 00 08 (big-endian) > 8 pixels
  • ImageHeight in pixels : 00 00 00 08 (big-endian) > 8 pixels
  • BitDepth : 08 > 8 bit
  • ColorType : 06 > Truecolour with alpha (RGBA)
  • Compression : 00 > default = deflate
  • Filter : 00 > default = adaptive filtering
  • Interlace : 00 > no
  • ImageDataLength : 00 00 00 31 (big-endian) > 49 bytes

In the hex editor we see that the 49 bytes of deflated image data are :

18 95 63 38 70 E0 C0 7F 06 06 06 AC 18 2A 07 61 
60 C3 50 85 70 95 28 12 18 0A 08 9A 80 EC 16 9C 
0A 70 9A 80 43 27 04 63 15 44 52 0C 00 67 20 8C 41

The image data is zlib-compressed using the deflate algorithm. zlib is specified in RFC 1950, deflate is specified in RFC 1951. The process is sufficiently complex that we will not do it manually. We can use the javascript pako.js library, designed by Vitaly Puzrin and Andrey Tupitsin, to decompress the data block.

Here comes the code :

<!DOCTYPE HTML>
<html>
<head>
 <meta charset="utf-8">
 <title>Inflate byte block of PNG image pixel data with pako.js</title>
 <script type="text/javascript" src="js/pako.js"></script>
</head>
<body>
<h1>Inflate byte block of PNG image pixel data with pako.js</h1>
<div id="main"></div>
 <script type="text/javascript" >
// enter datastream as array
var hexData = [0x18, 0x95, 0x63, 0x38, 0x70, 0xE0, 0xC0, 0x7F, 0x06, 0x06, 
0x06, 0xAC, 0x18, 0x2A, 0x07, 0x61, 0x60, 0xC3, 0x50, 0x85, 0x70, 0x95, 0x28, 
0x12, 0x18, 0x0A, 0x08, 0x9A, 0x80, 0xEC, 0x16, 0x9C, 0x0A, 0x70, 0x9A, 0x80, 
0x43, 0x27, 0x04, 0x63, 0x15, 0x44, 0x52, 0x0C, 0x00, 0x67, 0x20, 0x8C, 0x41];
// Pako inflate
var inflateData = pako.inflate(hexData);
// output inflated data : 8 rows of 33 bytes each
var output = "<p>The length of the inflated data sequence is : "
  + inflateData.length + " bytes.<br/>";
for (var i = 0; i < 8; i++) {
  for (var j = 0; j < 33; j++) {
    output += decimalToHexString(inflateData[(i * 33) + j]) + " ";
  } // end for loop j
  output += "<br/>";
} // end for loop i
output += "</p>";
var element = document.getElementById("main");
element.innerHTML = output;
function decimalToHexString(number) {
  if (number < 0) {
    number = 0xFFFFFFFF + number + 1;
  }
  return number.toString(16).toUpperCase();
}
</script>
</body>
</html>
Byte sequence in PNG image rows

The byte sequence of pixel data stored in PNG images is shown in the figure above.

In our case we have 8 rows, each with 8 * 4 bytes (RGBA) plus one leading null byte (the filter byte), giving a total of 8 * 33 = 264 bytes.
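
A small sketch can split the inflated Uint8Array back into these rows. It assumes filter type 0 (None) in every row, which holds for our image; other filter types would first have to be undone :

// split the inflated data into scanlines : 1 filter byte + width * 4 RGBA bytes
function splitScanlines(inflated, width, height) {
  var bytesPerRow = 1 + width * 4;
  var rows = [];
  for (var r = 0; r < height; r++) {
    var row = inflated.slice(r * bytesPerRow, (r + 1) * bytesPerRow);
    rows.push({ filter: row[0], pixels: row.slice(1) });
  }
  return rows;
}
// usage : splitScanlines(inflateData, 8, 8) returns 8 rows of 32 pixel bytes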

Click the inflate link to see the result of the inflate process. The sequence length is really 264 bytes and the structure of the PNG format is visible in the output.

inflating PNG image data

The RGB hexadecimal values C0 generate the silver (light grey) pixels, the values 00 generate black pixels. The alpha channel is always fully opaque (hex FF).

Synthesize a PNG image

To synthesize a minimal PNG image with monochrome PixelData, we modify the original canvas.png data as follows :

1. The signature does not change, the bytes in hexadecimal format are :

89 50 4E 47 0D 0A 1A 0A

2. In the header we set the bit depth to 1 (monochrome) and the color type to 0 (gray-scale). We get the following byte sequence in hexadecimal format :

00 00 00 0D 49 48 44 52 00 00 00 08 00 00 00 08 01 00 00 00 00

We have several possibilities to calculate the new CRC32 checksum over the header name and the new data :

CRC32 calculation with desktop and online tool

Here comes the code for the javascript CRC32 calculation :

<!DOCTYPE HTML>
<html>
<head>
 <meta charset="utf-8">
 <title>Calculate checksum crc32 with SheetJS/js-crc32 
of canvas.png chunks</title>
 <script type="text/javascript" src="js/SheetJS_crc32.js"></script>
</head>
<body>
<h1>Calculate checksum crc32 with SheetJS/js-crc32 of canvas.png chunks</h1>
<div id="main"></div>
<script type="text/javascript">
// calculate crc32 over chunk name and data
// enter datastream as hexadecimal numbers
var charData = [0x49, 0x48, 0x44, 0x52, 0x00, 0x00, 0x00, 0x08, 0x00, 0x00,
  0x00, 0x08, 0x01, 0x00, 0x00, 0x00, 0x00];
var myCRC32 = CRC32.buf(charData);
var crc = decimalToHexString(myCRC32);
var output = "<p>Here is the signed 32 bit number of the CRC32 : "
  + myCRC32 + "<br/>Here is the hexadecimal value of the CRC32 : "
  + crc + "</p>";
var element = document.getElementById("main");
element.innerHTML = output;
function decimalToHexString(number) {
  if (number < 0) {
    number = 0xFFFFFFFF + number + 1;
  }
  return number.toString(16).toUpperCase();
} // end function
 </script>
</body>
</html>

Click this CRC32 link to see it working. The checksum to add to the IHDR chunk is EC 74 83 26.

Now we tackle the IDAT chunk. We have 8 rows of PixelData, each starting with a null byte (the filter byte), followed by 1 byte for the 8 monochrome pixels of the row. That makes a total of 16 bytes. The data length in hexadecimal format is 10. We reuse the bit values of our Pixel Image (1 for the drawn pixels, 0 for the background), giving us the following byte sequence :

00 7E 00 81 00 A5 00 81 00 99 00 81 00 E7 00 3C
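
As a cross-check, this sequence can be assembled from the decimal pixel values used earlier with a few lines of Javascript :

var pixelData = [126, 129, 165, 129, 153, 129, 231, 60];
var raw = [];
for (var i = 0; i < pixelData.length; i++) {
  raw.push(0x00);         // filter byte : None
  raw.push(pixelData[i]); // the 8 monochrome pixels of the row, in one byte
}
console.log(raw.map(function (b) {
  return ("0" + b.toString(16).toUpperCase()).slice(-2);
}).join(" ")); // 00 7E 00 81 00 A5 00 81 00 99 00 81 00 E7 00 3C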

This 16-byte sequence is deflated with the pako.js library using the following script :

<!DOCTYPE HTML>
<html>
<head>
 <meta charset="utf-8">
 <title>Deflate byte block of PNG image pixel data with pako.js</title>
 <script type="text/javascript" src="js/pako.js"></script>
</head>
<body>
<h1>Deflate byte block of PNG image pixel data with pako.js</h1>
<div id="main"></div>
<script type="text/javascript">
// enter datastream as numbers
var charData = [0x00, 0x7E, 0x00, 0x81, 0x00, 0xA5, 0x00, 0x81, 0x00, 0x99,
  0x00, 0x81, 0x00, 0xE7, 0x00, 0x3C];
// Pako deflate
var deflateData = pako.deflate(charData);
var output = "<p>The length of the deflated data sequence is : "
  + deflateData.length + " bytes.<br/>";
for (var i = 0; i < deflateData.length; i++) {
  output += decimalToHexString(deflateData[i]) + " ";
} // end for loop i
output += "</p>";
var element = document.getElementById("main");
element.innerHTML = output;
function decimalToHexString(number) {
  if (number < 0) {
    number = 0xFFFFFFFF + number + 1;
  }
  return number.toString(16).toUpperCase();
}
</script>
</body>
</html>

Click this deflate link to see the result. The deflated sequence has 21 bytes (hex : 15) and is longer than the original sequence. That happens with very short image sequences.

deflating PNG image data

There are possibilities to minify the deflated sequence length, but this is not our goal. There are several blogs and posts dealing with the smallest possible PNG images.

The last step is the calculation of the CRC32 checksum, same procedure as above. The following crc32 link shows the 4 byte hexadecimal number : EC 01 89 73.

The final byte sequence for the IDAT chunk is displayed hereafter :

00 00 00 15 49 44 41 54 78 9C 63 A8 63 68 64 58 0A C4 33 81 F8 39 83 0D 00 23 
44 04 63 EC 01 89 73 

3. The IEND chunk remains unchanged and has no associated data :

00 00 00 00 49 45 4E 44 AE 42 60 82

To create and display this synthetic PNG image, we copy all the hexadecimal data in our hex editor and save it as mysynth.png file. To check that the format is right, we can use the pngcheck tool or load the image in Photoshop. It works.

Analyse file mysynth.png with pngcheck.exe

Open file mysynth.png in Photoshop

Display the PNG image in the Browser

The typical HTML code to display an image in a web browser is

<img src="url" alt="abcde" width="xxx" height="yyy" />

The src attribute specifies the URI (uniform resource identifier) of the image. The most common form of a URI is a URL (uniform resource locator), frequently referred to as a web address. URIs identify and URLs locate. Every URL is also a URI, but there are URIs which are not URLs.

The URI syntax consists of a URI scheme name (such as “http”, “ftp”, “mailto” or “file”) followed by a colon character, and then by a scheme-specific part. An example of a URI which is not a URL is a data URI, for example

data:,Hello%20World

The data URI scheme is a URI scheme that provides a way to include data in-line in web pages as if they were external resources. This technique allows normally separate elements such as images and style sheets to be fetched in a single HTTP request rather than multiple HTTP requests, which can be more efficient.

We will use the data URI to display our synthesized PNG image in a web browser without saving it to an external source. The data URI scheme is defined in RFC 2397 of the IETF. URIs are character strings, therefore we must convert (encode) the image data to ASCII text. The most common conversion is base64, another method is percent encoding.

There are several possibilities to encode our image data in base64. Here comes the code for the javascript btoa() conversion :

<!DOCTYPE HTML>
<html>
<head>
 <meta charset="utf-8">
 <title>Display mysynth.png with dataURI</title>
 </head>
<body>
<h1>Display mysynth.png with dataURI</h1>
<div id="main"></div>
 <script type="text/javascript" >
var signature = [0x89, 0x50, 0x4E, 0x47, 0x0D, 0x0A, 0x1A, 0x0A]; 
var ihdr = [0x00, 0x00, 0x00, 0x0D, 0x49, 0x48, 0x44, 0x52, 0x00, 0x00, 
0x00, 0x08, 0x00, 0x00, 0x00, 0x08, 0x01, 0x00, 0x00, 0x00, 0x00, 0xEC, 
0x74, 0x83, 0x26];
var idat = [0x00, 0x00, 0x00, 0x15, 0x49, 0x44, 0x41, 0x54, 0x78, 0x9C, 
0x63, 0xA8, 0x63, 0x68, 0x64, 0x58, 0x0A, 0xC4, 0x33, 0x81, 0xF8, 0x39, 
0x83, 0x0D, 0x00, 
0x23, 0x44, 0x04, 0x63, 0xEC, 0x01, 0x89, 0x73];
var iend = [0x00, 0x00, 0x00, 0x00, 0x49, 0x45, 0x4E, 0x44, 0xAE, 0x42, 
0x60, 0x82];
var mysynthPNG = signature.concat(ihdr).concat(idat).concat(iend);
var imageStringBase64 = btoa(String.fromCharCode.apply(null, mysynthPNG));
var mysynthImg = document.createElement("img");
mysynthImg.setAttribute('src', 'data:image/png;base64,' + imageStringBase64);
mysynthImg.setAttribute('alt', 'mysynthPNG');
// the width and height attributes take plain numbers, not CSS lengths
mysynthImg.setAttribute('height', '8');
mysynthImg.setAttribute('width', '8');
document.body.appendChild(mysynthImg);
</script>
</body>
</html>

Click the following base64 link to see the result. The pixel colors are inverted, 1 is white and 0 is black.

Links

The following list provides links to websites with additional information about image pixel manipulations :

Photoshop plugins : JPEG-LS, JPEG-XR, JPEG-2000

Last update: July 19, 2015

JPEG

JPEG (file extension .jpg) is a commonly used method of lossy compression for digital images. The degree of compression can be adjusted, allowing a selectable tradeoff between storage size and image quality. JPEG typically achieves 10:1 compression with little perceptible loss in image quality.

The term JPEG is an acronym for the Joint Photographic Experts Group, which was organized in 1986 and created the first standard in 1992. JPEG uses a lossy form of compression based on the discrete cosine transform (DCT). Lossy means that some original image information is lost and cannot be restored, possibly affecting image quality.

Progressive JPEG

The interlaced Progressive JPEG format compresses data in multiple passes of progressively higher detail. This is ideal for large images that will be displayed while downloading over a slow connection, allowing a reasonable preview after receiving only a portion of the data.

Standard Photoshop CS2 JPEG Save as options

JPEG-HDR

JPEG-HDR is an extension to the standard JPEG image file format allowing it to store high dynamic range images. It was created by Greg Ward and Maryann Simmons as a way to store high dynamic range images inside a standard JPEG file.

JPEG-XR

JPEG XR is a still-image compression standard and file format for continuous tone photographic images, based on technology originally developed and patented by Microsoft under the name HD Photo. It supports both lossy and lossless compression and is supported in Internet Explorer 9 and later versions. Today there are not yet any cameras that shoot photos in the JPEG XR (.JXR) format.

Lossless JPEG

An optional lossless mode was defined in the JPEG standard in 1993, using a completely different technique from the lossy JPEG standard. The lossless coding process employs a simple predictive coding model called differential pulse code modulation (DPCM). Lossless JPEG has some popularity in medical imaging (DICOM), and is used in DNG and some digital cameras to compress raw images, but otherwise was never widely adopted.

Today the term lossless JPEG is usually used as an umbrella term to refer to all lossless image compression schemes, including JPEG-LS and JPEG 2000.

JPEG-LS

JPEG-LS is a lossless/near-lossless compression standard for continuous-tone images with the official designation ISO-14495-1/ITU-T.87. Part 1 of this standard was finalized in 1999. Besides lossless compression, JPEG-LS also provides a lossy mode (“near-lossless”) where the maximum absolute error can be controlled by the encoder. The filename extension is .jls. Compression for JPEG-LS is generally much faster than JPEG 2000 and much better than the original lossless JPEG standard. The JPEG-LS standard is based on the LOCO-I algorithm (LOw COmplexity LOssless COmpression for Images) developed at Hewlett-Packard Laboratories.

JPEG 2000

JPEG 2000 is an image compression standard and coding system defined in 2000 with the intention of superseding the original discrete cosine transform-based JPEG standard with a newly designed, wavelet-based method. The standardized filename extension is .jp2 or .jpx for the extended part-2 specification. JPEG 2000 is not widely supported in web browsers and therefore not generally used on the Internet. JP2 includes mandatory metadata such as information about an image’s color space. It handles alpha transparency and 16-bit color mode.

Photoshop plugins : JPEG-LS, JPEG-XR, JPEG-2000

Photoshop CS2 JPEG-LS plugin by HP

HP offers a copyrighted Photoshop plugin for JPEG-LS. Additional information about JPEG-LS source code is available at the Wayback Archive webpage of Aleks Jakulin.

A free JPEG 2000 (j2k) plugin for Photoshop is provided by fnord software, a small software boutique in San Francisco creating graphics software primarily for Macintosh computers. fnord software also provides a SuperPNG plugin for Adobe Photoshop and a WebM plugin for Adobe Premiere.

JPEG 2000 plugin from fnord software in Photoshop CS2

The advanced features are not working as expected in my Photoshop CS2 version.

Microsoft provides a Photoshop plugin for JPEG-XR.

JPEG-XR plugin from Microsoft in Photoshop CS2


DICOM standard

Last update : May 31, 2016

DICOM (Digital Imaging and Communications in Medicine) is a software integration standard that is used in medical imaging. Its success relies on the ability of modern imaging equipment, manufactured by many different vendors, to collaborate and integrate seamlessly. Medical imaging devices are called Imaging Modalities and include X-Rays, Ultrasound (US), Computed Radiography (CR), Computed Tomography (CT), Magnetic Resonance Imaging (MRI), Endoscopy (ES), and others.

DICOM standard

The Digital Imaging and Communications in Medicine (DICOM) standard (now ISO 12052) was first conceived by the American College of Radiology (ACR) and the National Electrical Manufacturers Association (NEMA) in 1983. DICOM changed the face of clinical medicine. Release 3 of the standard was published in 1993 and since then an updated version is edited every year. The current version is PS3-2016b. The DICOM standard is often considered to be very complicated or tricky; the publication is now more than 3,000 pages long. A list of the 18 parts (parts 9 and 13 were retired) of the DICOM standard is shown below :

  • Part 3.1: Introduction and Overview
  • Part 3.2: Conformance
  • Part 3.3: Information Object Definitions
  • Part 3.4: Service Class Specifications
  • Part 3.5: Data Structures and Encoding
  • Part 3.6: Data Dictionary
  • Part 3.7: Message Exchange
  • Part 3.8: Network Communication Support for Message Exchange
  • Part 3.10: Media Storage and File Format for Media Interchange
  • Part 3.11: Media Storage Application Profiles
  • Part 3.12: Media Formats and Physical Media for Media Interchange
  • Part 3.14: Grayscale Standard Display Function
  • Part 3.15: Security and System Management Profiles
  • Part 3.16: Content Mapping Resource
  • Part 3.17: Explanatory Information
  • Part 3.18: Web Access to DICOM Persistent Objects (WADO)
  • Part 3.19: Application Hosting
  • Part 3.20: Transformation of DICOM to and from HL7 Standards

The DICOM standard is promoted by IHE (Integrating the Healthcare Enterprise), an initiative by healthcare professionals and industry to improve the way computer systems in healthcare share information.

DICOM core

The DICOM core is a file format and a networking protocol.

All medical images are saved in DICOM format, but DICOM files contain more than just images. Every DICOM file holds information about the patient, the context of the study, the type of equipment used, etc.

All medical imaging equipment that is connected to a health network (hospital, …) uses the DICOM network protocol to exchange information, mainly images and binary data. This protocol allows searching for medical images in an archive and downloading them to a doctor’s workstation in order to display them.

DICOM Objects

Developers familiar with object-oriented programming should be at ease with DICOM objects. Patients, medical devices, clinical studies, etc. are entities in the real world. DICOM has abstract definitions for these entities, called DICOM Information Entities (IEs). DICOM uses a data model with SOP (Service Object Pair) classes and Information Object Definitions (IODs) to handle these entities in the DICOM world. For example, a patient IOD has attributes for the patient’s name, ID, date of birth, weight, sex and all other clinically relevant patient-related information.

The DICOM data model defines four object levels :

  1. Patient
  2. Study
  3. Series & Equipment
  4. Instance (image)

The name instance, instead of image, was introduced some years ago, because there are now DICOM objects at the fourth level which are not images. These objects are sometimes called DICOM P10 instances with reference to the PS3.10 DICOM standard.

Each of the levels can contain several sub-levels. The following figure shows the DICOM information hierarchy :

DICOM Information Hierarchy

One patient may have multiple studies at different times, each study may include several series with one or more instances.

DICOM differentiates between normalized and composite IODs. A normalized IOD represents a single real-world entity with inherent attributes. Composite IODs are mixtures of several real-world entities.

The DICOM Information Modules are used to group attributes into logical and structured units. Modules can be mandatory, conditional, optional or user-defined. DICOM specifies for each IE the modules it should include. For example, the patient IE should include the patient module, the specimen identification module and the clinical trial subject module. All DICOM objects must include the SOP common module and the four modules of the data model. All DICOM instances that are images must include the Image module.

The attributes of an IOD are encoded as Data Elements.

Let’s summarize the chapter about DICOM objects in a few other words :

  • The DICOM data model is made of Information Entities (IEs)
  • The classes of the DICOM data model are called SOP classes
  • SOP is a pair of a DICOM object and a DICOM service
  • SOP classes are defined by IODs
  • IODs are collections of modules
  • Modules are collections of data elements
  • Data element collections refer to an Information Entity (IE)

DICOM Data Elements

DICOM objects are encoded in binary form as sequential lists of attributes, called Data Elements. Every data element has :

  • a tag to uniquely define the element and its properties. A tag is comprised of a 16-bit Group number and a 16-bit Element number. Data elements that are related to one another have the same Group number.
  • an optional Value Representation (VR), represented as a two-character code, to define the data type (examples : UI = unique identifier, CS = coded string, US = unsigned short, …). The VR is called explicit when present. The VR is omitted (implicit VR) when the data type is determined from the tag, as specified by the negotiated Transfer Syntax.
  • a length to specify the size of the value field. DICOM value lengths are always even : odd-length values are padded with a space in the case of strings and with a null byte (0x0) in the case of binary types. Lengths are specified with 16 bits if the VR is explicit and with 32 bits if the VR is implicit. The value field length can also be undefined (FFFFFFFFH).
  • a value field with the corresponding informations. If the length of the value field is undefined, a sequence delimitation item marks its end.

DICOM file displayed in a hex editor

There are thousands of different DICOM data elements, each with a well-specified meaning. The names of the data elements are referenced in a DICOM dictionary. Here are some examples :

Tag Group Number Tag Element Number Element or Attribute Name
0010 0010 Patient’s Name
0010 0030 Patient’s Birth Date
0020 0011 Series Number
0028 0010 Image Rows
0028 0011 Image Columns

The data elements are sorted sequentially by ascending tag number. It’s possible to define additional elements, called private elements, to add specific information, but it is recommended to use existing elements whenever possible. Standard DICOM data elements have an even group number and private data elements have an odd group number.
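
As an illustration, here is a simplified Javascript sketch reading one data element in Explicit VR Little Endian from a DataView. It only covers the short form where the length fits in 16 bits; VRs such as OB, OW or SQ use 2 reserved bytes plus a 4-byte length and are not handled here :

function readElement(view, offset) {
  var group = view.getUint16(offset, true);       // group number, little-endian
  var element = view.getUint16(offset + 2, true); // element number
  var vr = String.fromCharCode(view.getUint8(offset + 4),
                               view.getUint8(offset + 5));
  var length = view.getUint16(offset + 6, true);  // 16-bit length (short form)
  function hex4(n) { return ("000" + n.toString(16).toUpperCase()).slice(-4); }
  return {
    tag: "(" + hex4(group) + ", " + hex4(element) + ")",
    vr: vr,
    valueLength: length,
    valueOffset: offset + 8
  };
}
// e.g. on a Patient's Name element this yields { tag: "(0010, 0010)", vr: "PN", ... }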

Secondary Capture Image Object

The simplest DICOM object is the Secondary Capture (SC) Image. It has a minimal set of data elements that a DICOM application needs in order to display and archive a medical image. It’s not related to a particular medical image device. The following table shows the mandatory modules for the Secondary Capture Image Object :

Information Entity (IE) Module Reference
Patient Patient C.7.1.1
Study General Study C.7.2.1
Series General Series C.7.3.1
Equipment SC Equipment C.8.6.1
Instance (Image) General Image C.7.6.1
Image Image Pixel C.7.6.3
Image SC Image C.8.6.2
Image SOP Common C.12.1

There are only two modules that are specific to SC (Secondary Capture); the other modules are common and shared by many IODs. Every module is rather large and includes lots of data elements, but luckily most of these data elements are optional. In the DICOM standard the modules are marked with a type column :

  • value 1 : data element is mandatory and must be set
  • value 2 : data element is mandatory, but can be null
  • value 3 : data element is optional

Values 1 and 2 can also be marked with a C for conditional.

The following table shows the mandatory data elements for the Secondary Capture Image Object :

Attribute Name Tag Type
Patient’s Name (0010, 0010) 2
Patient ID (0010, 0020) 2
Patient’s Birth Date (0010, 0030) 2
Patient’s Sex (0010, 0040) 2
Study Instance UID (0020, 000D) 1
Study Date (0008, 0020) 2
Study Time (0008, 0030) 2
Referring Physician’s Name (0008, 0090) 2
Study ID (0020, 0010) 2
Accession Number (0008, 0050) 2
Modality (0008, 0060) 1
Series Instance UID (0020, 000E) 1
Series Number (0020, 0011) 2
Laterality (0020, 0060) 2C
Conversion Type (0008, 0064) 1
Instance Number (0020, 0013) 2
Patient Orientation (0020, 0020) 2C
Samples per Pixel (0028, 0002) 1
Photometric Interpretation (0028, 0004) 1
Rows (0028, 0010) 1
Columns (0028, 0011) 1
Bits Allocated (0028, 0100) 1
Bits Stored (0028, 0101) 1
High Bit (0028, 0102) 1
Pixel Representation (0028, 0103) 1
Pixel Data (7FE0, 0010) 1C
Planar Configuration (0028, 0006) 1C
SOP Class UID (0008, 0016) 1
SOP Instance UID (0008, 0018) 1
Specific Character Set (0008, 0005) 1C

Unique Identifiers

DICOM makes extensive use of Unique Identifiers (UIDs). Almost every entity has a UID, with the exception of the patient (patients are identified with their name and their ID). DICOM defines a mechanism in order to make sure UIDs are globally unique. Every DICOM application should acquire a root UID that is used as a prefix for the UIDs it creates.

Usually the Study Instance UID is provided through a DICOM service called Modality Worklist. The Series UID and the SOP Instance UID are always generated by the application.
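
A UID generator can be as simple as concatenating the root with application-chosen components. A hypothetical sketch (the root below is made up; a real application must obtain its own) :

var rootUID = "1.2.840.99999.1"; // made-up root, for illustration only
var counter = 0;
function newUID() {
  counter++;
  // root + timestamp + counter; the total length must not exceed 64 characters
  return rootUID + "." + Date.now() + "." + counter;
}
console.log(newUID()); // e.g. "1.2.840.99999.1.1464684000000.1"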

DICOM Networking Protocol

The following figure shows the typical medical imaging workflow.

Medical Imaging Workflow

The workflow begins when the patient gets registered in the Hospital Information System (HIS). An Electronic Medical Record (EMR) is generated for the new patient. A medical procedure, for example an X-Ray of the thorax, is ordered and placed into the Radiology Information System (RIS). The RIS generates a study scheduled event and sends it as an HL7 event to an HL7-DICOM gateway. The gateway forwards the study scheduled event to the Picture Archiving and Communication System (PACS) to prefetch previous exams of the patient. When the patient arrives at the modality (X-Ray in this example), the technologist requests the Modality Worklist (MWL : a sort of task manager) and updates the RIS. The acquired images are forwarded to the PACS where they are checked, stored and forwarded to the review workstation of the radiologist. The radiologist views the study images and reports possible pathologies.

The Modality Worklist is combined with a Modality Performed Procedure Step (MPPS) that allows the Modality to report the task status and to take ownership over the task. Associated to MPPS are the Requested Procedure (RP), the Requested Procedure Description, the Requested Procedure Code Sequence and the Scheduled Procedure Step (SPS). MPPS is the checkmark for MWL.

The main DICOM nodes in this workflow are the Modality, the PACS and the Workstation. DICOM’s network communication is based on the ISO/OSI model. The entities (nodes) in the network are specified with the following parameters :

  • Application Entity (AE) Title : max 16 characters, case sensitive;
    it’s a sort of alias for the combination of IP address and port number
  • IP address
  • Port number

With the parameters of the source and the target AE, we can start a peer-to-peer session called an Association. This is the first step of a DICOM network communication. The second step is exchanging DICOM commands. Most problems in DICOM communications are related to failing association negotiations. Errors are reported in the DICOM log.

The calling AE sends an Association Request to the target (called) AE. It’s a description of the application, its capabilities and its intentions in this session (presentation context). The called AE checks the request and sends back an Association Response, confirming what can and can’t be done. If this doesn’t match its needs, the calling AE can reject the association. An important parameter is the Max PDU Size, an application-level setting that says how big the buffer consumed for the request is. Some calling application entities use buffer sizes too large for the called application entity; the result can be a crash due to a buffer overflow. In case of no response from the called AE, the request will time out.

A DICOM service related to the exchange step is Storage Commitment (SCM), which lets you verify whether files sent to a PACS were indeed stored correctly. The result is sent using the N-EVENT-REPORT command.

DICOM services are used for communication within an entity and between entities. A service is built on top of a set of DICOM Message Service Elements (DIMSEs) adapted to normalized and composite objects. DIMSEs are paired : when a device issues a command, the receiver responds accordingly.

DICOM services are referred to as Service Classes. A Service Class Provider (SCP) provides a service, a Service Class User (SCU) uses a service. A device can be an SCP, an SCU or both. The following table shows the main DIMSEs : commands with a C-prefix are composite and those with an N-prefix are normalized.

C-ECHO sort of application level ping to verify a connection;
this service is mandatory for all AEs
C-FIND inquiries about information object instances
C-STORE allows one AE to send a DICOM object to another AE
C-GET transmission of an information object instance
C-MOVE similar to C-GET, but receiver is usually not the command initiator
C-CANCEL interrupt a command
N-CREATE creation of an information object
N-GET retrieval of information object attribute value
N-SET specification of an information object
N-DELETE Deletion of an information object
N-EVENT-REPORT result from SCM

A fundamental service implemented by every workstation is the Query / Retrieve (Q / R) task. The query is done with C-FIND. The keys to search are Patient Name, Patient ID, Accession Number, Study Description, Study Date, Modality, … The Retrieve is done with C-MOVE and C-STORE commands.

DICOM Serialization = Transfer Syntax

To transmit DICOM objects through a network or to save them into a file, they must be serialized. The DICOM Transfer Syntax defines how this is done. There are 3 basic transfer syntaxes, presented in the next table :

UID Transfer Syntax Notes
1.2.840.10008.1.2.1 Little Endian Explicit (LEE) stores data little-end first, explicit VR
1.2.840.10008.1.2.2 Big Endian Explicit (BEE) stores data big-end first, explicit VR
1.2.840.10008.1.2 Little Endian Implicit (LEI) stores data little-end first, implicit VR

A complete list is shown in my post DICOM TransferSyntaxUID.

The transfer syntax specifies 3 points :

  • if VRs are explicit
  • if the low byte of multi-byte data is serialized first or last
  • if pixel data is compressed and, if so, which compression algorithm is used
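
These three properties can be derived from the TransferSyntaxUID alone, for example with a small Javascript lookup table (only a few UIDs shown; see the table in my TransferSyntaxUID post for the full list) :

var transferSyntaxes = {
  "1.2.840.10008.1.2":      { explicitVR: false, bigEndian: false, compression: "none" }, // LEI
  "1.2.840.10008.1.2.1":    { explicitVR: true,  bigEndian: false, compression: "none" }, // LEE
  "1.2.840.10008.1.2.2":    { explicitVR: true,  bigEndian: true,  compression: "none" }, // BEE
  "1.2.840.10008.1.2.4.50": { explicitVR: true,  bigEndian: false, compression: "JPEG baseline" },
  "1.2.840.10008.1.2.5":    { explicitVR: true,  bigEndian: false, compression: "RLE" }
};
function describeTransferSyntax(uid) {
  return transferSyntaxes[uid] || null; // null = unknown transfer syntax
}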

LEI is the default Transfer Syntax and shall be supported by every conformant DICOM implementation. Compressed pixel data transfer syntaxes are always Explicit VR Little Endian. The following compression methods can be used :

  • JPEG
  • JPEG Lossless
  • JPEG 2000
  • RLE
  • JPIP
  • MPEG2
  • MPEG4

If DICOM objects are serialized into files, there is additional information to provide :

  • a preamble of 132 bytes, where the first 128 bytes are null and the last 4 bytes are the DICOM magic number DICM.
  • a file meta header consisting of data elements of group 0002, written in Little Endian Explicit
  • an element (0002, 0010) with the Transfer Syntax UID that is used for all the data elements other than group 0002.

Group 0002 data elements are strictly used for files and must be removed before sending DICOM objects over the network.
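
For illustration, building and checking the 132-byte preamble mentioned above is straightforward in Javascript (a sketch, assuming the file bytes are in a Uint8Array) :

// build the 132-byte preamble : 128 null bytes + the magic number "DICM"
function buildPreamble() {
  var preamble = new Uint8Array(132); // initialized to zero
  preamble[128] = 0x44; // 'D'
  preamble[129] = 0x49; // 'I'
  preamble[130] = 0x43; // 'C'
  preamble[131] = 0x4D; // 'M'
  return preamble;
}
// check whether a file buffer starts with a DICOM preamble
function isDicomFile(bytes) {
  return bytes.length > 131 &&
    String.fromCharCode(bytes[128], bytes[129], bytes[130], bytes[131]) === "DICM";
}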

If DICOM files are saved on a standard CD / DVD, they should be accompanied by a file named DICOMDIR in the root directory of the media. DICOMDIR is a DICOM object listing the paths to the files stored on the media. The file names of these files should be capital alphanumeric, up to 8 characters, with no suffix. An example of naming conventions is shown hereafter :

  • directory PAxxx for every patient  (xxx = 001, 002, …)
  • inside PAxxx a directory STyyy for every study (yyy = 001, 002, …)
  • inside STyyy a directory SEzzz for every series (zzz = 001, 002, …)
  • inside SEzzz a DICOM file MODwwwww for every instance
    (MOD = CT, CR, …; wwwww = 00001, 00002, …)

DICOM files stored in the cloud or on a non-DICOM media have the suffix .dcm.

Pixel Data

Because imaging is the heart of DICOM, we will deal with the bits of images in a specific chapter called pixel data. The data elements of the pixel module have group number 0028 and are responsible for describing how to read the image pixels. The next table shows the different data elements :

Tag VR Description
(0028, 0002) US Samples PerPixel : number of color channels; grey = 1
(0028, 0004) CS PhotometricInterpretation : MONOCHROME1 -> 0 = white ;
MONOCHROME2 -> 0 = black ; YBR_FULL -> YCbCr space
(0028, 0006) US PlanarConfiguration : 0 = interlaced color pixels ; 1 = separated pixels
(0028, 0008) IS NumberOfFrames : number of frames in the image; multi-frame >1
(0028, 0010) US Rows : height of the frame
(0028, 0011) US Columns : width of the frame
(0028, 0100) US BitsAllocated : space allocated in the buffer (usually 16)
(0028, 0101) US BitsStored : how much of the allocated space is used (in CT usually 12)
(0028, 0102) US HighBit : the bit number of the last bit used (in CT usually 11)
(0028, 0103) US PixelRepresentation : unsigned = 0 (default) ; signed = 1
(0028, 1050) DS WindowCenter : center of the grey-value window used for display
(0028, 1051) DS WindowWidth : width of the grey-value window used for display
(0028, 1052) DS RescaleIntercept : intercept b of the linear output mapping output = m * SV + b
(0028, 1053) DS RescaleSlope : slope m of the linear output mapping output = m * SV + b
(7FE0, 0010) OB Image Pixel Data : (for CT -> VR = OW : other word)

The group number of the image data element is 7FE0 to be sure that this element is the last in the DICOM object. It is usually a very long data element and can be easily skipped if we want to read only the headers.
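
The rescale and windowing attributes in the table above are applied when mapping stored pixel values to display values. Here is a sketch of the usual linear mapping (a simplification of the exact function defined in the standard) :

// map a stored pixel value to an 8-bit display value
function applyWindow(storedValue, slope, intercept, center, width) {
  var value = slope * storedValue + intercept; // e.g. Hounsfield units for CT
  var lower = center - width / 2;
  var upper = center + width / 2;
  if (value <= lower) return 0;   // below the window : black
  if (value >= upper) return 255; // above the window : white
  return Math.round(((value - lower) / width) * 255);
}
// usage : a CT soft-tissue window (center 40, width 400),
// with RescaleSlope 1 and RescaleIntercept -1024
console.log(applyWindow(1064, 1, -1024, 40, 400)); // 40 HU -> 128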

The Image Plane Module defines the direction of the image with reference to the patient body. It also gives the dimensions of the pixels in mm. The corresponding data element has tag (0020, 0037). DICOM defines a Reference Coordinate System (RCS) of the patient body.

The X-direction runs from Right to Left in the axial (transversal) cut. The Y-direction runs from Front (Anterior) to Back (Posterior) in the sagittal cut. The Z-direction runs from Feet to Head in the coronal cut. The following letters are assigned to the ends of each direction :

  • [R] – Right – X decreases
  • [L] – Left – X increases
  • [A] – Anterior – Y decreases
  • [P] – Posterior – Y increases
  • [F] – Feet – Z decreases
  • [H] – Head – Z increases

These letters are usually displayed on the sides of a DICOM viewer. If the patient is in an oblique position, there can be letter combinations like [PR] for Posterior Right or [ALH] for Anterior Left Head.
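
A viewer can derive these letters from a direction cosine vector. The following Javascript sketch is a simplified version of what viewers typically do (the 0.25 threshold for oblique components is an arbitrary choice here) :

// return the letter combination for one direction vector (x, y, z)
function directionLetters(x, y, z) {
  var parts = [
    { value: x, positive: "L", negative: "R" },
    { value: y, positive: "P", negative: "A" },
    { value: z, positive: "H", negative: "F" }
  ].sort(function (a, b) { return Math.abs(b.value) - Math.abs(a.value); });
  var letters = "";
  for (var i = 0; i < parts.length; i++) {
    if (Math.abs(parts[i].value) > 0.25) { // skip negligible components
      letters += parts[i].value > 0 ? parts[i].positive : parts[i].negative;
    }
  }
  return letters;
}
console.log(directionLetters(0, 0.97, 0.26)); // "PH" : mostly posterior, slightly head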

DICOM Conformance Statement

DICOM requires that a Conformance Statement be written for each device or program claiming to be DICOM conformant. The format and content of a Conformance Statement are defined in the standard itself.

Links

Additional information about the DICOM standard is available at the following websites :

Hybrid iOS apps

Native iOS apps are built with the platform SDK provided by Apple (Xcode / Objective-C / Swift). The main advantage of native applications is their performance, because they are compiled into machine code.

Mobile web iOS apps are built with server-side technologies (PHP, Node.js, ASP.NET) that are mobile-optimized to render HTML well on iPhones and iPads. Mobile web apps load within the mobile browser Safari like every other website. They generally require an Internet connection to work, but it’s also possible to use cache technologies to use them offline, and you can install them with an icon on the iOS home screen. Mobile web apps for iOS are faster and easier to develop and to maintain than native iOS apps.

Hybrid iOS apps

Hybrid iOS apps are somewhere between native and web apps. The bulk of a hybrid iOS app is written with cross-compatible web technologies (HTML5, CSS and JavaScript), the same languages used to write web apps. Hybrid apps run inside a native container on the device, using the UIWebView or WKWebView classes. However, some native code is used to allow the app to access the wider functionality of the device and to produce a more refined user experience. The advantage of this approach is obvious : only a portion of native code has to be re-written to make the app work on another type of device, for instance on Android, Blackberry or Windows Mobile.

Some developers consider hybrid iOS apps to be the best of both worlds. Great examples of hybrid iOS apps are Facebook and LinkedIn.

Hybrid iOS app tools

There are numerous tools available to create hybrid apps, not only for iOS, but for all mobile platforms. The most important tools to embed HTML5, CSS and Javascript in mobile apps are :

  • PhoneGap : a mobile development framework created by Nitobi, which was purchased by Adobe in 2011. The latest version is 4.0.0.
  • Apache Cordova : a free and open-source platform for building native mobile applications using HTML, CSS and JavaScript, underlying PhoneGap. Apache Cordova graduated in October 2012 as a top level project within the Apache Software Foundation (ASF). The latest version is 5.0.0.
  • BridgeIt : an open source mobile technology sponsored by ICEsoft Technologies Inc.
  • MoSync : free, easy-to-use, open-source tools for building cross-platform mobile apps (MoSync Reload 1.1 and MoSync SDK 3.3.1).
  • Ionic : the final release of Ionic 1.0.0 (uranium-unicorn) was released on May 12, 2015.
  • CocoonJS : a platform to test, accelerate, deploy and monetize your HTML5 apps and games on all mobile devices with many interesting features to help you deliver great web products faster.
  • ChocolateChip-UI : the tool uses standard Web technologies and is built on top of the popular & familiar jQuery library.
  • Sencha-Touch : the leading (very expensive) cross-platform mobile web application framework based on HTML5 and JavaScript for creating universal mobile apps.
  • Kendo UI : everything for building web and mobile apps with HTML5 and JavaScript.
  • AppGyver (Steroids, Supersonic, Composer) : a combination of great CSS defaults (from the Ionic Framework) with the best native APIs (from PhoneGap), adding a boatload of extra features on top.

There are also some tools and platforms available which use other technologies than HTML5 + CSS + Javascript :

  • Titanium : write in JavaScript, run native everywhere at the speed of mobile, with no hybrid compromises.
  • Xamarin : share the same C# code on iOS, Android, Windows, Mac and more.
  • RhoMobile Suite : gain the ease of consumer smartphone platforms, without sacrificing the critical data functionality of enterprise solutions.
  • Corona : designed to enable super-fast development. You develop in Lua, a fast and easy-to-learn language with elegant APIs and powerful workflows.
  • Meteor : discover the easiest way to build amazing web and mobile apps in JavaScript.

WebViews

The tools to create hybrid apps give you access to the underlying operating system calls or to underlying hardware functionality. However, you do not really need these tools if you only want to deploy a web app as a native app. All you need is the bare minimal code to launch a web view and load your web app as local resources.

UIWebView

To embed web content in an iOS application you can use the UIWebView class. To do so, you simply create a UIWebView object, attach it to a window, and send it a request to load web content. You can also use this class to move back and forward in the history of webpages, and you can even set some web content properties programmatically.

The steps to create a minimal demo to load this web content from an external webserver are the following :

  1. Create an Xcode Swift project with a single view application.
  2. Open the Main.storyboard file and drag the WebView UI element from the object library to the Interface Builder view.
  3. Open the assistant editor to display the ViewController.swift file, ctrl-click-drag the WebView behind the class ViewController text line to create an IBOutlet named mywebView :
     @IBOutlet weak var mywebView: UIWebView!
  4. Define the url of the remote website :
    let url = "http://www.canchy.eu"
  5. Add the following code to the viewDidLoad function :
    let requestURL = NSURL(string:url)
    let request = NSURLRequest(URL: requestURL!)
    mywebView.loadRequest(request)
  6. Run the project :
Remote website displayed in iOS Simulator

The website loads, but it looks awful. The reason is that the UIWebView doesn’t handle things that Safari does by default. Telling the program to scale pages is easy. We need to add the following line of code :

mywebView.scalesPageToFit = true

The whole script of the ViewController.swift file now looks as follows :

ViewController.swift script for RemoteWeb

The layout is still not perfect, but you can now pinch the view to zoom in and out. Holding down the alt key and click-dragging the view allows you to perform this action in the simulator. We will enhance the scaling in a later chapter.

Note that for a very simple app like the present one it’s not necessary to import the WebKit framework. UIWebView has about a dozen methods you can use; to get about 160 methods you need the WebKit classes.

Project window with local web content

To include the web content into the iOS app, we add the folder canchy with html files, css files, javascript files, images and photos into the project window.

The first three steps to create a minimal demo to load local web content are the same as those for the remote content.

The next steps are the following :

4. Define the path for the local content with the NSBundle.mainBundle() method :

  • pathForResource
  • ofType
  • inDirectory

5. Create a URL object with this path with the NSURL.fileURLWithPath() method.

6. Create a Request object with this URL with the NSURLRequest() method.

7. Scale the page with the scalesPageToFit method.

8. Load the Request object with the loadRequest() method.

9. Run the project.

The result is the same as with the remote content. The layout is not adapted to the screen. We will fix that in the last chapter.

The complete script of the ViewController.swift file is shown below :

ViewController.swift script for local web content

WKWebView

With iOS 8 and Mac OS X Yosemite, Apple released the new WKWebView class, which is faster and more performant than the UIWebView class. WKWebView uses the same Nitro Javascript engine as the Safari browser.

To create a hybrid iOS app with the WKWebView class, the procedure to access content on an external web server is the same as for the UIWebView class. Only the code in the ViewController.swift file is slightly different.

ViewController.swift script for remote web content with WKWebView

We need to import the WebKit framework, because the WKWebView class is part of WebKit itself.

Remote website displayed in iOS Simulator

By default the website display in WKWebView is better adapted to the screen size, compared to the UIWebView.

The biggest issue of the WKWebView class is that it doesn’t allow loading local files from the file:// protocol like the UIWebView class does. There are some workarounds available; most of them use an embedded local HTTP server to serve the local files. The most renowned solutions are the following :

XWebView uses a very tiny embedded http server, whereas the Cordova WKWebView Engine uses the more versatile GCDWebServer.

GCDWebServer

GCDWebServer is a modern and lightweight GCD (Grand Central Dispatch) based HTTP 1.1 server designed to be embedded in OS X & iOS apps. It was written from scratch by Pierre-Olivier Latour (swisspol) with the following features :

  • four core classes: server, connection, request and response
  • no dependencies on third-party source code
  • full support for both IPv4 and IPv6
  • HTTP compression with gzip
  • chunked transfer encoding
  • JSON parsing and serialization
  • fully asynchronous handling of incoming HTTP requests

WKWebView versus UIWebView

Both WKWebView and UIWebView in iOS 8 support WebGL, which allows rendering 3D and 2D graphics in a web view. The performance gains are significant (greater than 2x) in WebGL applications when using WKWebView, compared to UIWebView.

You can use the WebView Rendering App available in the Apple AppStore to test any website performance between UIWebView and WKWebView. An in-app debug console for your UIWebView & WKWebView classes is available at Github.

CORS

If JavaScript code on a webpage displayed in UIWebView or WKWebView runs from domain abc.com and would like to request a resource via AJAX (XmlHttpRequest or XDomainRequest) from domain efg.com, this is a cross-origin request. Historically, for security reasons, these types of requests have been prohibited by browsers.

The Cross Origin Resource Sharing (CORS) specification, now a W3C recommendation, provides a browser-supported mechanism to make web requests to another domain in a secure manner. To allow AJAX requests across domain boundaries, you need to add the following headers to your server :

Access-Control-Allow-Origin : *
Access-Control-Allow-Headers : Accept, Origin, Content-Type
Access-Control-Allow-Methods : GET, POST, PUT, DELETE, OPTIONS
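
As an illustration, a minimal Node.js server (a hypothetical sketch, not tied to any of the tools above) that adds these headers to every response could look like this :

var http = require("http");
http.createServer(function (request, response) {
  response.setHeader("Access-Control-Allow-Origin", "*");
  response.setHeader("Access-Control-Allow-Headers", "Accept, Origin, Content-Type");
  response.setHeader("Access-Control-Allow-Methods", "GET, POST, PUT, DELETE, OPTIONS");
  if (request.method === "OPTIONS") { // answer CORS preflight requests
    response.writeHead(204);
    response.end();
    return;
  }
  response.writeHead(200, { "Content-Type": "text/plain" });
  response.end("Hello from efg.com");
}).listen(8080);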

Size the web views

Xcode provides an auto layout function. By clicking the second icon (pin) from the left in the storyboard, a popup window opens to set the spacing constraints to the nearest neighbours. By clicking the dotted red lines leading from the small square at the top to the four textboxes, you can define these constraints. I set the top border to 20 points and the right and left borders to 16 points.

Adding constraints to execute the auto layout function in the storyboard

By clicking the button “Add 3 Constraints”, the constraints are entered and displayed with the selected values in the View Controller Scene.

Viewing the layout constraints in the View Controller Scene

The result is a correct display of the webpage in the iOS Simulator with a free toolbar of 20 points at the top of the web view.

Webpage view with auto layout

Links