Beethoven’s Google Doodle

To celebrate Ludwig van Beethoven’s 245th year, Google created an interactive doodle that lets users help Beethoven arrange his masterpieces during his unfortunate journey to the symphony hall. The doodle was produced by Gregory Capuano, designed by Leon Hong and programmed by the Google engineers Jordan Thompson, Jonathan Shneier, Kris Hom and Charlie Gordon. The piano recordings were made by Tim Shneier; Nate Swinehart was responsible for animatics and additional art.

The following figures show some key scenes from the interactive animation.


Wearable Technology

The term wearable technology refers to clothing and accessories that incorporate computers and advanced electronics. The designs often incorporate practical functions and features, but may also have a purely critical or aesthetic agenda.

Other terms used are wearable devices, wearable computers or fashion electronics. A healthy debate is emerging over whether wearables are best worn on the wrist, on the face or in some other form.

Smart watches

A smart watch is a computerized wristwatch whose functionality extends beyond timekeeping. The first digital watch was launched as early as 1972, but the production of real smart watches started only recently. The most notable smart watches which are currently available or announced are listed below:


Android Wear

On March 18, 2014, Google officially announced Android’s entrance into wearables with the project Android Wear. Watches powered by Android Wear bring you:

  • Useful information when you need it most
  • Straight answers to spoken questions
  • The ability to better monitor your health and fitness
  • Your key to a multiscreen world

An Android Wear Developer Preview is already available. It lets you create wearable experiences for your existing Android apps and see how they will appear on square and round Android wearables. In late 2014, the Android Wear SDK will be launched, enabling even more customized experiences.

Google Glass


Google Glass is a wearable computer with an optical head-mounted display (OHMD). Wearers communicate with the Internet via natural-language voice commands. In the summer of 2011, Google engineered a prototype of its glass. Google Glass became officially available to the general public on May 15, 2014, for a price of $1,500 (an open beta reserved to US residents). Google also provides four prescription frames for about $225. Apps for Google Glass are called Glassware.

Tools, patterns and documentation to develop Glassware are available at Google’s Glass developer website. An Augmented Reality SDK for Google Glass is available from Wikitude.

Smart Shirts

Smart shirts, also known as electronic textiles (E-textiles), are garments made from smart fabric and used to allow remote physiological monitoring of various vital signs of the wearer, such as heart rate and temperature. E-textiles are distinct from wearable computing because the emphasis is placed on the seamless integration of textiles with electronic elements like microcontrollers, sensors and actuators. Furthermore, E-textiles need not be wearable: they are also found in interior design, in eHealth and in baby breathing monitors.

At Recode’s 2014 Code Conference, Intel announced its own smart shirt, which uses embedded smart fibers that can report your heart rate and other health data.


Culturomics

Cover of the Science Magazine January 14, 2011

Culturomics is a form of computational lexicology that studies human behavior and cultural trends through the quantitative analysis of digitized texts. The term was coined in December 2010 in a Science article called Quantitative Analysis of Culture Using Millions of Digitized Books. The paper was published by a team spanning the Cultural Observatory at Harvard, Encyclopaedia Britannica, the American Heritage Dictionary and Google. At the same time, the world’s first real-time culturomics browser was launched on Google Labs.

The Cultural Observatory at Harvard is working to enable the quantitative study of human culture across societies and across centuries. This is done in three ways:

  • Creation of massive datasets relevant to human culture
  • Use of these datasets to power new types of analysis
  • Development of tools that enable researchers and the general public to query the data

The Cultural Observatory is directed by Erez Lieberman Aiden and Jean-Baptiste Michel who helped create the Google Labs project Google N-gram Viewer. The Observatory is hosted at Harvard’s Laboratory-at-Large.

Logo of the Science Hall of Fame

Links to additional information about Culturomics and related topics are provided in the following list:

N-gram databases & N-gram viewers

Last update: May 13, 2013

An N-gram is a contiguous sequence of n items collected from a text or speech corpus. Depending on the application, the items can be letters, phonemes, syllables, words or base pairs.

An N-gram of size 1 is referred to as a unigram, size 2 is a bigram and size 3 is a trigram. Larger sizes are referred to by the value of n (four-gram, five-gram, …). N-gram models are widely used in statistical natural language processing. In speech recognition, phonemes and sequences of phonemes are modeled using an N-gram distribution.
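
The definition above can be sketched in a few lines of Python. This is an illustrative snippet, not code from any of the projects mentioned; the function name is my own:

```python
def ngrams(tokens, n):
    """Return all contiguous n-item sequences from a list of tokens."""
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

words = "to be or not to be".split()
print(ngrams(words, 1))  # unigrams, e.g. ('to',), ('be',), ...
print(ngrams(words, 2))  # bigrams, e.g. ('to', 'be'), ('be', 'or'), ...
print(ngrams(words, 3))  # trigrams
```

The same function works on any sequence of items, so character-level N-grams are obtained by passing a string instead of a word list.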

“All Our N-gram are Belong to You” was the title of a post published in August 2006 by Alex Franz and Thorsten Brants on the Google Research Blog. Google believed that the entire research community should benefit from access to the massive amounts of data it had collected by scanning books and by analysing the web. The data was distributed by the Linguistics Data Consortium (LDC) of the University of Pennsylvania. Four years later (December 2010), Google unveiled an online tool for analyzing the history of the data digitized as part of the Google Books project (N-gram Viewer). The appeal of the N-gram Viewer was obvious not only to scholars (professional linguists, historians and bibliophiles) in the digital humanities, linguistics and lexicography; casual users also got pleasure out of generating graphs showing how key words and phrases have changed over the past few centuries.

Google Books N-gram Viewer, an addictive tool

Version 2 of the N-gram Viewer was presented in October 2012 by engineering manager Jon Orwant. A detailed description of how to use the N-gram Viewer is available at the Google Books website. The maximum string that can be analyzed is five words long (a five-gram). Mathematical operators allow you to add, subtract, multiply and divide the counts of N-grams. Part-of-speech tags are available for advanced use, for example to distinguish between the verb and noun senses of the same word. To make trends more apparent, data can be viewed as a moving average (0 = raw data without smoothing, 3 = default, 50 = maximum). The results are normalized by the number of books published in each year. The data can also be downloaded for further exploration.
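
The normalization and smoothing described above can be sketched as follows. This is a minimal illustration of the two transformations, assuming (as the Viewer’s documentation describes) that a smoothing of k averages each year’s value over a window of ±k years; it is not Google’s actual code:

```python
def relative_frequency(counts, totals):
    """Normalize raw yearly N-gram counts by the yearly totals
    (e.g. the number of words in books published that year)."""
    return [c / t for c, t in zip(counts, totals)]

def smooth(series, k):
    """Moving average over a window of +/- k years.
    k = 0 returns the raw data unchanged (up to float conversion)."""
    out = []
    for i in range(len(series)):
        window = series[max(0, i - k):i + k + 1]
        out.append(sum(window) / len(window))
    return out
```

At the edges of the time range the window is simply truncated, which is why smoothed curves can behave oddly near the first and last years plotted.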

N-gram data is also provided by other institutions. Some sources are indicated hereafter:

Links to further information about N-grams are provided in the following list:

Alan Turing and Robert Moog Google Doodles

To celebrate Robert Moog’s 78th Birthday, Google published on May 23, 2012 an interactive doodle of the electronic analog Moog Synthesizer.

Google Doodle: Moog Synthesizer

The doodle was synthesized from a number of smaller components to form a unique instrument. In browsers supporting the Web Audio API, the sound is generated natively; other browsers fall back to the Flash plugin. The doodle takes advantage of JavaScript, Closure libraries, CSS3 and tools like Google Web Fonts, the Google+ API, the Google URL Shortener and App Engine.

The Moog doodle was created by Google engineers Reinaldo Aguiar and Rui Lopes and the doodle team lead Ryan Germick.

For Alan Turing’s centennial, Google published one month later (June 23, 2012) an interactive doodle showing a Turing machine. The doodle was designed by software engineers Jered Wierzbicki and Corrie Scalisi and by doodler Sophia Foster-Dimino. The code for this doodle was open-sourced and is available at Google Code.

Google Doodle: Turing Machine

A video about the art and technology behind Google Doodles is available on YouTube.

Google text to speech (TTS) with Processing

Referring to the post about Google STT, this post deals with Google speech synthesis in Processing. In November 2011, Amnon Owed presented Processing code snippets that make use of Google’s text-to-speech webservice. The idea was born in the Processing forum.

The sketch makes use of the Minim library that comes with Processing. Minim is an audio library that uses the JavaSound API, a bit of Tritonus and Javazoom’s MP3SPI to provide an easy-to-use audio library for people developing in the Processing environment. The author of Minim is Damien Di Fede (ddf), a creative coder and composer interested in interactive audio art and music games. In November 2009, Damien was joined by Anderson Mills, who proposed and co-developed the UGen Framework for the library.

I use the Minim 2.1.0 beta version with the new UGen Framework. I installed the Minim library in the libraries folder of my sketchbook and deleted the integrated 2.0.2 version in the Processing (2.0b8) folder modes/java/libraries.

Today I ran successful trials with the English, French and German Google TTS engines. I am impressed by the results.
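
The essence of those snippets is a plain HTTP request that returns an MP3 for a given text and language. The sketch below builds such a request URL in Python; the endpoint and the tl/q parameters are assumptions based on how the unofficial webservice was described at the time, not a supported or documented Google API:

```python
from urllib.parse import urlencode

# Unofficial endpoint used by the 2011-era TTS snippets (an assumption,
# not a supported API; the service may reject or block such requests today).
TTS_ENDPOINT = "https://translate.google.com/translate_tts"

def tts_url(text, lang="en"):
    """Build the URL that was reported to return an MP3 of the given text,
    with lang being a two-letter code such as 'en', 'fr' or 'de'."""
    return TTS_ENDPOINT + "?" + urlencode({"tl": lang, "q": text})

print(tts_url("Hello world", "en"))
```

In the Processing sketch the resulting MP3 is then loaded and played back through Minim.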

Google speech to text (STT) with Processing

Processing is an open source programming language and environment for people who want to create images, animations, and interactions.

A year ago, Florian Schulz, an interaction design student at FH Potsdam, presented in the Processing forum a speech-to-text (STT) library based on the Google API. The source code is available at GitHub; a project page provides additional information. The library is based on an article by Mike Pultz, Accessing Google Speech API / Chrome 11, published in March 2011.

I installed the library in my Processing environment (version 2.0b8) and ran the test examples with success. I did some trials with the French and German Google speech recognition engines. I am impressed by the results.

Additional information about this topic is provided in the following link list:


Google Art Project

The Google Art Project is a unique online art experience that combines various advanced Google technologies with expert information provided by 151 acclaimed art partners (museums, galleries, …) from across 40 countries.

Google Art Project

Users can

  • explore a wide range of artworks at brushstroke-level detail
  • take a virtual tour of a museum or gallery (with Street View images and navigation)
  • build their own collections to share (user gallery)
  • enjoy over 30,000 artworks from sculpture to architecture
  • explore over 150 collections
  • edit, reorder, upload YouTube videos and more in the “My Galleries” section
  • use a dedicated Education section providing simple tools to learn about the artworks featured on the Google Art Project

The Google Art Project was launched on 1 February 2011. Seventeen galleries and museums were included in the launch of the project.

In France, the Centre de recherche et de restauration des musées de France (C2RMF) launched the 3D-COFORM project in 2009 to advance the state of the art in 3D digitisation and make 3D documentation an everyday practical choice for digital documentation campaigns in the cultural heritage sector.

Google Chrome Frame

Google Chrome Frame is an open source plug-in that seamlessly brings Google Chrome’s open web technologies (for instance the canvas tag) and speedy JavaScript engine to Internet Explorer (IE 6, 7, 8, or 9).

Enabling Google Chrome Frame is simple. For most web pages, all you have to do is add a single tag to your pages and detect whether your users have installed Google Chrome Frame.

  • If Google Chrome Frame is not installed, you can direct your users to an installation page.
  • If Google Chrome Frame is installed, it detects the tag you added and works automatically.
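
The single tag in question is Chrome Frame’s documented X-UA-Compatible opt-in. A minimal page using it looks like this (the page content itself is just an illustration):

```html
<!DOCTYPE html>
<html>
  <head>
    <!-- Opt this page in to rendering with Google Chrome Frame
         whenever the plug-in is installed in the user's IE -->
    <meta http-equiv="X-UA-Compatible" content="chrome=1">
    <title>Chrome Frame demo</title>
  </head>
  <body>
    <!-- The canvas tag works in IE 6-8 only through Chrome Frame -->
    <canvas id="demo" width="300" height="150"></canvas>
  </body>
</html>
```

For the detection step, Google provided the CFInstall.js helper script, whose CFInstall.check() call prompts users without the plug-in to install it.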