Volunteer Computing

Volunteer computing is an arrangement in which people (volunteers) provide computing resources to projects, which use those resources for distributed computing and/or storage. Distributed computing is a field of computer science that studies distributed systems. A distributed system is a software system in which components, located on networked computers, communicate and coordinate their actions by passing messages.

Neural networks are very good candidates for simulation on distributed computing systems because of their inherent parallelism and because their simulation is very time-consuming, owing to the complex iterative process involved.
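As a toy illustration of that inherent parallelism (a hypothetical sketch, not code from any particular project), each neuron in a layer can be evaluated independently, so the work splits naturally across workers or machines:

```python
import math
from concurrent.futures import ThreadPoolExecutor

def activate(weights, inputs):
    """One neuron's output: sigmoid of the weighted input sum."""
    s = sum(w * x for w, x in zip(weights, inputs))
    return 1.0 / (1.0 + math.exp(-s))

def layer_forward(weight_rows, inputs, workers=4):
    """Each neuron's activation depends only on the shared inputs,
    so a whole layer can be evaluated in parallel -- the property
    that makes neural-network simulation a natural fit for
    distributed computing."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(lambda w: activate(w, inputs), weight_rows))

weights = [[0.5, -0.2], [0.1, 0.4], [-0.3, 0.8]]  # 3 neurons, 2 inputs
print(layer_forward(weights, [1.0, 2.0]))
```

The same decomposition scales up: in a volunteer computing setting, each chunk of neurons (or each simulation run) becomes an independent work unit.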

The first volunteer computing project was the Great Internet Mersenne Prime Search, which was started in January 1996. The term volunteer computing was coined by Luis F. G. Sarmenta, the developer of Bayanihan.

The Berkeley Open Infrastructure for Network Computing (BOINC) is the most widely used middleware system for volunteer computing. It offers client software for Windows, Mac OS X, Linux, and other Unix variants. The project was founded at the University of California, Berkeley Space Sciences Laboratory and is funded by the National Science Foundation. Other systems include XtremWeb, Xgrid and Grid MP.

Volunteer computing systems must deal with the following correctness-related problems:

  • Volunteers are unaccountable and essentially anonymous
  • Some volunteer computers occasionally malfunction and return incorrect results
  • Some volunteers intentionally return incorrect results or claim excessive credit for results
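A common defence against faulty or malicious hosts is replication: BOINC, for example, supports redundant computing, in which each work unit is sent to several volunteers and a result is accepted only when a quorum of them agree. A minimal sketch of that idea (hypothetical names, not the actual BOINC validator):

```python
from collections import Counter

def validate_results(results, quorum=2):
    """Replication-based validation: accept the result reported by at
    least `quorum` independent volunteers; otherwise return None so the
    work unit can be dispatched to more hosts. Incorrect results from
    malfunctioning or cheating volunteers are simply outvoted."""
    value, count = Counter(results).most_common(1)[0]
    return value if count >= quorum else None

# Three volunteers ran the same work unit; one returned a bad value.
print(validate_results(["42", "42", "17"]))  # the majority "42" meets the quorum
```

Raising the quorum trades extra redundant computation for stronger protection against colluding volunteers.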

A list of distributed computing projects is available on Wikipedia. Links to a few selected BOINC volunteer computing projects are listed below:

Neuromorphic computing

Illustration: Spike Gerrell for The Economist

Neuromorphic computing is a concept developed by Carver Mead, describing the use of very-large-scale integration (VLSI) systems containing electronic analog circuits to mimic neuro-biological architectures present in the nervous system. Carver Mead is a key pioneer of modern microelectronics.

Today the term neuromorphic is used to describe analog, digital, and mixed-mode analog/digital VLSI and software systems that implement models of neural systems. Neuromorphic computing is a new interdisciplinary field that takes inspiration from biology, physics, mathematics, computer science and engineering to design artificial neural systems and autonomous robots whose physical architecture and design principles are based on those of biological nervous systems.

The goal is to make computers more like brains, designing machines with features that brains have but computers, so far, do not:

  • low power consumption (human brains use about 20 watts)
  • fault tolerance (brains lose neurons all the time without noticeable impact)
  • lack of need to be programmed (brains learn and change)

An important property of a real brain is that each neuron has tens of thousands of synaptic connections with other neurons, which form a sort of small-world network. Many neuromorphic chips use what is called a cross-bar architecture, a dense grid of wires, each of which is connected to a neuron at the periphery of the grid, to create this small-world network. Other chips employ what is called synaptic time multiplexing.
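The appeal of the cross-bar is that it computes in place: if each junction between a row wire and a column wire holds a synaptic conductance, then driving the rows with input voltages and summing the currents on each column performs a matrix-vector product in a single analog step. A simple numerical sketch of that behaviour (illustrative only, with made-up values):

```python
def crossbar_read(conductances, row_voltages):
    """Model a cross-bar array: conductances[i][j] is the synaptic
    weight at the junction of row wire i and column wire j. By Ohm's
    law each junction contributes current G * V, and by Kirchhoff's
    current law each column wire sums those contributions -- so one
    read of the array is a matrix-vector product."""
    n_cols = len(conductances[0])
    return [sum(conductances[i][j] * row_voltages[i]
                for i in range(len(row_voltages)))
            for j in range(n_cols)]

g = [[0.1, 0.2],
     [0.3, 0.4]]                      # 2x2 junction conductances
print(crossbar_read(g, [1.0, 0.5]))  # per-column output currents
```

This is why a dense grid of wires can emulate the massive fan-out of biological neurons: every row (presynaptic neuron) reaches every column (postsynaptic neuron) through one programmable junction.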

The Economist published a few days ago a great article “Neuromorphic computing – The machine of a new soul” with illustrations from the London-based illustrator Spike Gerrell.

Some neuromorphic computing related projects are listed below:

Neuromorphic computing is dominated by European researchers rather than American ones. The following links provide additional information about neuromorphic computing related institutions and topics: