Intelligence

Last update : August 4, 2013

Intelligence Test

Intelligence has been defined in many different ways, including, but not limited to, abstract thought, understanding, self-awareness, communication, reasoning, learning, emotional knowledge, retaining information, planning, and problem solving. Intelligence is attributed to humans, animals, plants and machines (artificial intelligence).

A comprehensive definition of intelligence remains controversial, because what is considered intelligent varies with culture.

Psychometric tests are often used to measure intelligence. An intelligence quotient (IQ) is a score derived from one of several standardized tests designed to assess intelligence. The abbreviation IQ comes from the German term Intelligenz-Quotient, originally coined by the psychologist William Stern.

When a new IQ test is normed, the raw scores are scaled so that they conform to a normal distribution with a mean of 100 and a standard deviation (SD) of 15. The intention is that approximately 95% of the population scores an IQ between 70 and 130 (within two SDs of the mean).
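
These figures can be checked directly against the normal distribution. The following minimal Python sketch, using only the standard library and the mean of 100 and SD of 15 quoted above, confirms that roughly 95% of scores fall between 70 and 130 and that only about 2% exceed 130:

  import math

  MEAN, SD = 100.0, 15.0

  def normal_cdf(x, mean=MEAN, sd=SD):
      # Cumulative distribution function of a normal distribution,
      # written with the error function from the standard library.
      return 0.5 * (1.0 + math.erf((x - mean) / (sd * math.sqrt(2.0))))

  # Share of the population scoring between 70 and 130 (within two SDs):
  within_two_sd = normal_cdf(130) - normal_cdf(70)

  # Share scoring above 130 (the usual threshold of high IQ societies):
  above_130 = 1.0 - normal_cdf(130)

  print(f"IQ between 70 and 130: {within_two_sd:.1%}")  # ~95.4%
  print(f"IQ above 130: {above_130:.1%}")               # ~2.3%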

Average IQ scores in many populations have been rising at a rate of about three points per decade since 1930, a phenomenon called the Flynn effect. It is disputed whether these changes in scores reflect real changes in intellectual abilities. Attempted explanations of the rise have included improved nutrition, a trend toward smaller families, better education, greater environmental complexity, and heterosis.

People with an IQ higher than 130 are considered very intelligent. Organizations supporting people who score within a certain high percentile on IQ tests are called high IQ societies: the oldest, largest and best-known such society is Mensa International (website: www.mensa.org).

The Theory of Multiple Intelligences, a model that differentiates intelligence into various specific modalities rather than seeing it as dominated by a single general ability, was proposed by Howard Gardner in his 1983 book Frames of Mind.

Artificial Intelligence

Last update : May 13, 2024

Artificial intelligence (AI) is the intelligence of machines and the branch of computer science that aims to create it. The term was coined by John McCarthy in 1955. The field of AI research was founded at a conference on the campus of Dartmouth College in the summer of 1956. The attendees, including John McCarthy, Marvin Minsky, Allen Newell and Herbert A. Simon, became the leaders of artificial intelligence research for many decades.

Good Old-Fashioned Artificial Intelligence (GOFAI)

AI research began in the mid-1950s after the Dartmouth conference. The field of AI was founded on the claim that a central property of humans, intelligence, can be so precisely described that it can be simulated by a machine. The first generation of AI researchers were convinced that this sort of AI was possible and that it would exist in just a few decades.

In the early 1970s, it became obvious that researchers had grossly underestimated the difficulty of the project. By the 1990s, AI researchers had gained a reputation for making promises they could not keep. AI research suffered from longstanding differences of opinion about how it should be done and from the application of widely differing tools.

The field of AI regressed into a multitude of relatively well-insulated domains like logic, neural learning, expert systems, chatbots, robotics, the semantic web, case-based reasoning etc., each with their own goals and methodologies. These subfields, which often failed to communicate with each other, are often referred to as applied AI, narrow AI or weak AI.

This old, original approach to achieving artificial intelligence is called GOFAI. The term was coined by John Haugeland in his 1985 book Artificial Intelligence: The Very Idea.

Weak Artificial Intelligence

After the AI winter, the mainstream of AI research turned successfully toward domain-dependent and problem-specific solutions. These subfields of weak AI have grown up around particular institutions and individual researchers.

Peter Norvig, Google’s head of research, and Eric Horvitz, a distinguished scientist at Microsoft Research, are optimistic about the future of machine intelligence. They spoke recently to an audience at the Computer History Museum in Palo Alto, California, about the promise of AI. Afterward, they talked with Technology Review‘s IT editor, Tom Simonite.

A few AI researchers continue to believe that artificial intelligence could match or exceed human intelligence. The term strong AI, now in wide use, was introduced for this category of AI by the philosopher John Searle of the University of California, Berkeley. Among his notable concepts is the Chinese Room, a thought experiment that serves as an argument against strong AI.

Strong Artificial Intelligence

Strong AI is the intelligence of a machine that could successfully perform any intellectual task that a human being can. Strong AI is associated with traits such as consciousness, sentience, sapience (wisdom) and self-awareness observed in living beings.

There is wide agreement among AI researchers that a strong artificial intelligence would be required to do the following:

  • reason, use strategy, solve puzzles and make judgements under uncertainty
  • represent knowledge, including commonsense knowledge
  • plan
  • learn
  • communicate in natural language
  • integrate all these skills towards common goals

Other important capabilities include the ability to sense (see, …) and the ability to act (move and manipulate objects, …) in the observed world.

Some AI researchers have adopted the term Artificial General Intelligence (AGI) to refer to the advanced interdisciplinary research field of strong AI. Other AI researchers prefer the term Synthetic Intelligence to make a clear distinction from GOFAI.

“Artificial intelligence is no match for human stupidity!”

Evo Devo Universe

Evo Devo Universe (EDU) is a global community of theoretical and applied physicists, chemists, biologists, cognitive and social scientists, computer scientists, technologists, philosophers, information theorists, complexity scholars and systems theorists who are interested in better characterizing the relationship and difference between evolutionary and developmental processes in the universe and its subsystems. The project was initiated by John Smart and Clément Vidal in January 2008.

The first international EDU conference was held in Paris in October 2008. The second international EDU conference is planned for 2013 on the East Coast of the USA.

EDU is looking for researchers to collaborate on investigating free energy rate density (FERD) and its larger human implications, as described in a brief research project overview created by Clément Vidal.

Cosmic Evolution

Last update : July 17, 2013
Eric Chaisson defined the grand scenario of cosmic evolution as follows :

cosmic evolution = physical evolution + biological evolution + cultural evolution

Eric Chaisson segmented physical evolution into five epochs:

  • Particulate evolution
  • Galactic evolution
  • Stellar evolution
  • Planetary evolution
  • Chemical evolution

From Big Bang to Humankind: Time Arrow of Cosmic Evolution by Eric Chaisson

Eric Chaisson uses a time arrow to highlight salient features of cosmic history, from the Big Bang to the present, encompassing some 14 billion years (14 Ga). He defined the concept of free energy rate density as the amount of energy that flows through a certain amount of mass during a certain period of time. The concept of power density was not new, but Eric Chaisson was the first to make a systematic comparison of these values across all of nature.
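
Expressed as a formula (a restatement of the verbal definition above, written here as Φ_m), the free energy rate density is the energy E flowing through a system, divided by the system's mass m and the time interval Δt considered, usually quoted in erg per second per gram:

  \Phi_m = \frac{E}{m \, \Delta t} \qquad [\mathrm{erg\ s^{-1}\ g^{-1}}]

For the Sun, for example, a luminosity of roughly 4 × 10^33 erg/s divided by a mass of roughly 2 × 10^33 g gives Φ_m ≈ 2 erg s⁻¹ g⁻¹.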

Eric Chaisson’s understanding of cosmic evolution is related to Big History.

The four laws of thermodynamics and entropy

The four laws of thermodynamics define fundamental physical quantities (temperature, energy, and entropy) that characterize thermodynamic systems. The laws describe how these quantities behave under various circumstances, and forbid certain phenomena (such as the perpetuum mobile).

The four laws were developed during the 19th and early 20th century. Many researchers consider that the zeroth and third laws follow directly from the first and second laws, so that there are really only two fundamental laws of thermodynamics.
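
For reference, one common textbook form of the four laws is given below, with U the internal energy, Q the heat added to the system, W the work done by the system, S the entropy and T the absolute temperature (sign conventions for W vary between textbooks):

  \text{Zeroth law:}\quad A \sim C \ \text{and} \ B \sim C \ \Rightarrow \ A \sim B \qquad (\sim \ \text{meaning "in thermal equilibrium with"})
  \text{First law:}\quad \mathrm{d}U = \delta Q - \delta W
  \text{Second law:}\quad \mathrm{d}S \ge \frac{\delta Q}{T}\,, \qquad \Delta S \ge 0 \ \text{for an isolated system}
  \text{Third law:}\quad S \to S_0 \ \text{(a constant)} \ \text{as} \ T \to 0

The first law forbids a perpetuum mobile of the first kind (a machine creating energy from nothing), and the second law forbids one of the second kind (a machine converting heat entirely into work).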

Thermodynamic entropy is a measure of how evenly energy is distributed in a system. The term was coined in 1865 by the German physicist Rudolf Clausius.

In a physical system, entropy provides a measure of the amount of energy that cannot be used to do work.
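
Two classical formulations make this measure concrete: Clausius defined the entropy change of a system through the heat δQ_rev it exchanges reversibly at temperature T, while Boltzmann later expressed entropy through the number W of microscopic configurations (microstates) compatible with the macroscopic state, k_B being the Boltzmann constant. Roughly speaking, the more evenly energy is spread over the available degrees of freedom, the larger W and hence S:

  \mathrm{d}S = \frac{\delta Q_{\mathrm{rev}}}{T} \qquad\qquad S = k_B \ln W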

Free Energy Rate Density and STEM Compression

A metric to characterize the complexity of physical, biological and cultural systems in the universe has been proposed by Eric Chaisson. It is called Free Energy Rate Density (FERD). The Evo Devo Universe Community is looking for researchers to collaborate on investigating FERD and its larger human implications.
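
As a minimal sketch of how such a comparison works, the following Python lines apply the definition from the Cosmic Evolution section above to two rough reference values (the Sun's luminosity and mass, and the metabolic power and mass of a human body); the figures are order-of-magnitude estimates for illustration, not Chaisson's exact published numbers:

  # Free energy rate density: energy flow per unit mass per unit time, in erg/s/g.
  def ferd(power_erg_per_s, mass_g):
      return power_erg_per_s / mass_g

  # Rough order-of-magnitude inputs (illustrative, not Chaisson's exact figures).
  sun_phi   = ferd(3.8e33, 2.0e33)  # solar luminosity ~3.8e33 erg/s, solar mass ~2e33 g
  human_phi = ferd(1.0e9, 7.0e4)    # ~100 W metabolic rate = 1e9 erg/s, ~70 kg body

  print(f"Sun:   ~{sun_phi:.0f} erg/s/g")    # ~2
  print(f"Human: ~{human_phi:.0f} erg/s/g")  # ~14000

The counterintuitive result, and Chaisson's central point, is that a living body processes far more energy per gram than a star, which is why FERD is proposed as a complexity metric rather than total energy or total power.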

Birds have higher free energy rate densities than humans, probably because they operate in three dimensions.

John Smart has advocated the idea that the most complex of the universe’s extant systems at any time use progressively less space, time, energy and matter (STEM) to create the next level of complexity in their evolutionary development. The concept is called STEM compression (formerly MEST compression).

Big History and ChronoZoom

Last update : July 17, 2013
Big History is a field of historical study that examines history on large scales across long time frames through a multidisciplinary approach, to understand the integrated history of the cosmos, earth, life, and humanity, using the best available empirical evidence and scholarly methods. Big History evolved from interdisciplinary studies in the mid-20th century; some of the first efforts were Cosmic Evolution at the Center for Astrophysics, Harvard University, and Universal History in the Soviet Union.

An International Big History Association (IBHA) was founded in 2010. The same year, Walter Alvarez and Roland Saekow from the Department of Earth and Planetary Science at the University of California, Berkeley, developed ChronoZoom, an online program that visualizes time on the broadest possible scale from the Big Bang to the present day. A beta version of ChronoZoom 2 in HTML5 was released in March 2012 by the Outercurve Foundation, a non-profit organization that supports open-source software.

In 2011, Bill Gates and David Christian started The Big History Project to enable the global teaching of big history. Seven schools have been selected for the initial classroom pilot phase of the project. IBHA is one of the partners of the project. Educators can register to participate in the beta program of the Big History Project. At the TED talks in March 2011, David Christian narrated a complete history of the universe, from the Big Bang to the Internet, in a riveting 18 minutes.

Macquarie University has launched a Big History Institute as part of the Big History Project. Big History has been taught since 1994 at the University of Amsterdam by Fred Spier.

Cybernetics

Cybernetics is a transdisciplinary approach for exploring regulatory systems, their structures, constraints, and possibilities. Cybernetics is applicable when a system being analyzed is involved in a closed signaling loop and it is relevant to the study of mechanical, physical, biological, cognitive, and social systems. These concepts are studied by other fields such as engineering and biology, but in cybernetics these are abstracted from the context of the individual organism or device.
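
To make the notion of a closed signaling loop concrete, here is a minimal, illustrative Python sketch of the classic cybernetic example, a thermostat (all numbers are invented for the example): the controller senses the state of the system, compares it with a goal, and feeds a corrective action back into the system, which in turn changes what is sensed next.

  # A minimal closed signaling loop: a proportional thermostat (illustrative sketch).
  GOAL_TEMP = 20.0  # desired room temperature in degrees Celsius
  GAIN = 0.5        # how strongly the controller reacts to the error
  OUTSIDE = 5.0     # constant outside temperature pulling the room down
  LEAK = 0.1        # fraction of the indoor/outdoor difference lost per step

  temperature = 12.0
  for step in range(20):
      # Sense and compare: the error is the signal fed back to the controller.
      error = GOAL_TEMP - temperature
      heating = max(0.0, GAIN * error)               # act: heat in proportion to the error
      temperature += heating                         # effect of the action on the system
      temperature -= LEAK * (temperature - OUTSIDE)  # disturbance from the environment
      print(f"step {step:2d}: temperature = {temperature:5.2f} °C")

The loop settles slightly below the goal, the well-known steady-state error of a purely proportional controller; adding an integral term would remove it. The point of the cybernetic view is that the same sense-compare-act loop can be read into mechanical, biological, cognitive and social systems alike.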

Cybernetics was defined in the mid-20th century by Norbert Wiener as the scientific study of control and communication in the animal and the machine. It grew out of Claude Shannon’s information theory, which was designed to optimize the transfer of information through communication channels.

Cybernetics is related to System Dynamics, an approach to understanding the behaviour of complex systems over time, and to Teleology.

Cybernetics is sometimes used as a generic term, which serves as an umbrella for many systems-related scientific fields.

Metasystem Transition

A metasystem transition is the emergence, through evolution, of a higher level of organization or control. The concept of metasystem transition was introduced by the cybernetician Valentin Turchin in his 1977 book The Phenomenon of Science, and developed among others by Francis Heylighen in the Principia Cybernetica Project.

The classical sequence of metasystem transitions in the history of animal evolution, from the origin of animate life to sapient culture, has been defined by Valentin Turchin:

  1. Control of Position = Motion
  2. Control of Motion = Irritability
  3. Control of Irritability = Reflex
  4. Control of Reflex = Association
  5. Control of Association = Thought
  6. Control of Thought = Culture

Principia Mathematica and Principia Cybernetica

Principia commonly refers to Philosophiæ Naturalis Principia Mathematica, a work in three books by Sir Isaac Newton, first published on 5 July 1687. The Principia is considered one of the most important works in the history of science.

The Principia Mathematica (PM) is a three-volume work on the foundations of mathematics, written by Alfred North Whitehead and Bertrand Russell and published in 1910, 1912, and 1913. PM is an attempt to derive all mathematical truths from a well-defined set of axioms and inference rules in symbolic logic.

The Principia Cybernetica Project is an attempt by a group of researchers to build a complete and consistent system of philosophy. Principia Cybernetica tries to tackle age-old philosophical questions with the help of the most recent cybernetic theories and technologies. Principia Cybernetica Web is one of the oldest, best organized and largest fully connected hypertexts on the Net. It contains over 2000 web pages (nodes), numerous papers, and even complete books.

The Principia Cybernetica Project was conceived by Valentin Turchin. With the help of Cliff Joslyn and Francis Heylighen, the first public activities started in 1989. An FTP server went online in March 1993 at the Free University of Brussels, followed a few months later by a hypertext server, which turned out to be the first one in Belgium.

The specific goals for the Principia Cybernetica Project are :

  • Collaboration
  • Constructivity
  • Active
  • Semantic Representations and Analysis
  • Consensus
  • Multiple Representational Forms
  • Flexibility
  • Publication
  • Multi-Dimensionality