From flute players to the Dartmouth Conference

The technical beginnings of AI

The start of artificial intelligence is often put at 1956, but its technical origins go back much further. People have dreamt of creating a machine with human characteristics for centuries. In the 18th century, clockmakers developed automata, mechanical replicas of humans. The Frenchman Jacques de Vaucanson (1709–1782) built a flute player with humanoid features in 1738. In the 1760s Wolfgang von Kempelen presented his “Turk,” a chess machine which he claimed played independently and which also played against well-known personalities of his day. However, it later came to light that the machine concealed a human chess player, so it was really a kind of fake bot. In the 1770s, the father-and-son pair Jaquet-Droz went on to develop automata, including a writer that could write any text of up to 40 characters using a quill and ink. The automaton has a programmable memory, can be regarded as an early form of the computer, and still works today.

It comes as no surprise, then, that the first machines regarded as super-human back in the 1700s were chess automata, and that the very first modern artificial intelligences were chess computers: the game sets the standard among Western mental exercises. In the early phase of AI, the symbolic period, the aim was to simulate human thought. One of the greatest successes came when world chess champion Garry Kasparov lost to IBM’s Deep Blue computer in 1997. Other milestones of artificial intelligence likewise mimicked human capabilities such as language: Joseph Weizenbaum’s conversation program ELIZA from 1966, for example, or the expert system Mycin, developed from 1972 on to answer medical questions.

Before things came to fruition, the following happened: in 1943, the logician Walter Pitts and the neurophysiologist Warren McCulloch showed that nerve cells, when connected into networks, could perform logical operations such as “or,” “not,” and “and,” as well as combinations of them. Each cell knew only the states “on” and “off,” and it switched on only if the input from other nerve cells exceeded a certain threshold. All artificial neural networks to date are based on this logic.
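
This threshold idea can be captured in a few lines. The following is a minimal sketch in Python, with illustrative weights and thresholds rather than the original 1943 formalism:

```python
# A minimal sketch of a McCulloch-Pitts cell: inputs are 0 or 1, and the
# cell fires only if the summed, weighted input reaches its threshold.
# Weights and thresholds below are illustrative choices.

def mcculloch_pitts(inputs, weights, threshold):
    """Fire (1) if the weighted sum of binary inputs reaches the threshold."""
    return 1 if sum(w * x for w, x in zip(weights, inputs)) >= threshold else 0

# "and": both inputs must be active, so the threshold is 2.
AND = lambda a, b: mcculloch_pitts([a, b], [1, 1], threshold=2)
# "or": a single active input suffices, so the threshold is 1.
OR  = lambda a, b: mcculloch_pitts([a, b], [1, 1], threshold=1)
# "not": an inhibitory (negative) weight flips the signal.
NOT = lambda a:    mcculloch_pitts([a],    [-1],   threshold=0)

assert AND(1, 1) == 1 and AND(1, 0) == 0
assert OR(0, 1)  == 1 and OR(0, 0)  == 0
assert NOT(0)    == 1 and NOT(1)    == 0
```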

Through to 1955, scientists developed the first computers and prototypes of neural networks, influenced by the up-and-coming field of cybernetics and the teachings of Alan Turing. The valve computer Manchester Mark I was created in 1948/49 at the University of Manchester. Marvin Minsky constructed the first neural network in 1951: SNARC (Stochastic Neural Analog Reinforcement Computer), a system built from valves which could find the fastest way out of a labyrinth. Bolstered by these developments, the scientists applied for grants to explore “how to get machines to talk, to form abstract concepts, to solve all kinds of problems that only humans can deal with today, and to improve as they do so.” The application led to the conference at Dartmouth in 1956, which is commonly regarded as the birth of AI.

Two schools of AI developed from the 1950s to the end of the 1960s: symbolic AI and neural AI, with symbolic AI initially being the stronger, recording the successes mentioned above.

Joseph Weizenbaum developed his ELIZA program in 1966. It is a question-and-answer system that draws on a stored vocabulary to react to statements with questions, producing natural-language communication between man and machine. The program responded to keywords with specific phrases, which test candidates in some cases clearly took to be human; sentences it could not process it deflected with evasive phrases. Weizenbaum himself was devastated when he learned that his program created the impression that psychoanalysis could be automated.
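
An ELIZA-style exchange can be sketched in a few lines. The keyword rules and phrases below are invented for illustration and are not Weizenbaum’s original script:

```python
import re

# A heavily simplified ELIZA-style responder: scan the input for keywords,
# answer with a canned question, and fall back to an evasive phrase when
# nothing matches. Rules here are illustrative inventions.
RULES = [
    (r"\b(?:mother|father|family)\b", "Tell me more about your family."),
    (r"\bI am (.+)", "How long have you been {0}?"),
    (r"\bI feel (.+)", "Why do you feel {0}?"),
]
FALLBACK = "Please go on."  # the evasive phrase for unprocessable input

def respond(utterance):
    for pattern, template in RULES:
        match = re.search(pattern, utterance, re.IGNORECASE)
        if match:
            return template.format(*match.groups())
    return FALLBACK

print(respond("I am unhappy"))        # -> How long have you been unhappy?
print(respond("The weather is bad"))  # -> Please go on.
```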

SHRDLU, a program developed in 1972 by Terry Winograd, also reacted to natural language, in this case typed instructions. It could shift virtual blocks according to commands, build up knowledge about its virtual world, and draw on that knowledge when solving problems. Even though it worked with a very limited vocabulary in a very limited world, SHRDLU was regarded as a very successful example of artificial intelligence.
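
To give a flavour of such a blocks world, here is a toy sketch; the command format, block names, and scene are invented assumptions and do not reproduce Winograd’s actual system:

```python
# A toy blocks world in the spirit of SHRDLU: a tiny command vocabulary,
# a stored model of the scene, and answers drawn from that model.
world = {"the red block": "the table", "the green block": "the red block"}

def execute(command):
    command = command.lower().rstrip("?.")
    if command.startswith("put ") and " onto " in command:
        block, target = command[4:].split(" onto ", 1)
        world[block] = target                      # update the stored model
        return f"OK, {block} is now on {target}."
    if command.startswith("where is "):
        block = command[len("where is "):]
        return f"{block.capitalize()} is on {world.get(block, 'the table')}."
    return "I do not understand."

print(execute("Put the blue block onto the green block"))
# -> OK, the blue block is now on the green block.
print(execute("Where is the blue block?"))
# -> The blue block is on the green block.
```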

The early 1970s are also regarded as the cradle of so-called expert systems. These aim to encode expert knowledge in such a way that the system responds with expert behaviour when asked to solve a problem. Mycin, regarded as the first prototype for medical diagnosis, is an example.

Mycin was developed at Stanford University starting in 1972 to assist in the diagnosis of infectious diseases and their therapy with antibiotics. As antibiotic use came under increasingly critical scrutiny, methods were being sought to optimise their use for the respective illness, and it was the complexity of this problem that drove Mycin’s development. Even though the program ultimately reached a hit rate of 60%, it was never really used in practice.
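
The basic mechanism of such a system can be sketched as if-then rules paired with confidence values. The findings, hypotheses, and numbers below are invented for illustration and do not reproduce Mycin’s medical knowledge base:

```python
# A minimal rule-based sketch in the expert-system mould: each rule maps a
# set of observed findings to a hypothesis with a confidence value.
# All rules and figures here are illustrative inventions.
RULES = [
    ({"gram_negative", "rod_shaped", "anaerobic"}, ("bacteroides", 0.6)),
    ({"gram_positive", "chain_growth"}, ("streptococcus", 0.7)),
]

def diagnose(findings):
    """Return every hypothesis whose conditions all hold in the findings."""
    return [conclusion for conditions, conclusion in RULES
            if conditions <= findings]

print(diagnose({"gram_negative", "rod_shaped", "anaerobic"}))
# -> [('bacteroides', 0.6)]
```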

While symbolic AI thus received broad support and recorded successes, neural AI lagged a little way behind. Yet Frank Rosenblatt, then with the Cornell Aeronautical Laboratory in New York, had already developed a learning network modelled on the human nervous system. It was founded on one component, the perceptron. In a simple perceptron network, a few input neurones activate the corresponding output neurones. Rosenblatt built on the research results of Walter Pitts and Warren McCulloch and had his artificial neurones perform the “or,” “not,” and “and” operations. In polemical debates with representatives of symbolic AI, the limits of single-layer perceptrons were laid bare; these limits can, however, be overcome with multi-layer perceptrons. In this so-called feedforward architecture, information only ever flows in one direction, from input to output. It underlies practically all neural networks in use today, and it is trained using so-called backpropagation.
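
How such a perceptron learns can be shown with the classic perceptron learning rule; the learning rate and epoch count in this sketch are arbitrary choices:

```python
# A single-layer perceptron in the spirit of Rosenblatt, trained with the
# perceptron learning rule on the "or" function.
def train_perceptron(samples, epochs=10, lr=0.1):
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for (x1, x2), target in samples:
            output = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            error = target - output        # zero when the prediction is right
            w[0] += lr * error * x1        # nudge weights toward the target
            w[1] += lr * error * x2
            b += lr * error
    return w, b

OR_SAMPLES = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]
w, b = train_perceptron(OR_SAMPLES)
print([1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
       for (x1, x2), _ in OR_SAMPLES])    # -> [0, 1, 1, 1]
```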

Backpropagation builds on fundamentals dating back to the 1960s. In 1974, Paul Werbos showed how it could be applied to artificial neural networks in theory, and he put this into practice in 1982; the method still applies today. This form of learning optimisation requires an external teacher who monitors the process: the network’s output is compared with the desired result, the error is fed back through the layers, and the connection weights are adjusted accordingly, thus resulting in learning.
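
A compact sketch of this supervised scheme: a small feedforward network learns the XOR function, with the output error propagated backwards to adjust the weights. The layer size, learning rate, and iteration count are arbitrary assumptions for this illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)    # the "teacher" signal

W1, b1 = rng.normal(size=(2, 4)), np.zeros(4)      # hidden layer, 4 neurones
W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)      # output layer
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
lr = 0.5                                           # learning rate (arbitrary)

for _ in range(20000):
    hidden = sigmoid(X @ W1 + b1)                  # forward pass: information
    output = sigmoid(hidden @ W2 + b2)             # flows in one direction
    # Backward pass: feed the output error back through the layers.
    d_out = (output - y) * output * (1 - output)
    d_hid = (d_out @ W2.T) * hidden * (1 - hidden)
    W2 -= lr * hidden.T @ d_out                    # adjust the weights to
    b2 -= lr * d_out.sum(axis=0)                   # reduce the error
    W1 -= lr * X.T @ d_hid
    b1 -= lr * d_hid.sum(axis=0)

print(output.round(2).ravel())                     # approaches [0, 1, 1, 0]
```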

This means that we have come a long way: from simulating human activities with mechanical androids, through simulating the human exchange of information, to recreating the human nervous system, often linked to the dream of creating a machine that can surpass humans.
