The Philosophic Origins of Artificial Intelligence
“I think, therefore I am” — from Descartes to Searle
If we talk about philosophical principles for artificial intelligence, we often hear of Descartes’ mechanistic view of thought. In the 17th century, this French philosopher argued that the difference between humans and animals lies only in the soul; otherwise, all organisms function according to mechanical principles. The British thinker Thomas Hobbes extended this concept with his mechanistic view of the state. The “machine paradigm” that grew out of these approaches continued to evolve until it was displaced by relativity, quantum mechanics, and chaos theory in the 20th century. One could say that the deterministic view of the world was replaced by ideas of networked causalities.
However, philosophy and the natural sciences primarily dealt with the essence of nature. With the emergence of cybernetics in the 20th century, attention shifted from natural phenomena to systems. In West Germany, in 1970, Karl Steinbuch set out the philosophical framework of this new discipline:
“[…] two questions [appear to me] to be of central importance:
1. Can machines possibly develop something which we can call intelligence in future?
2. Is there a real possibility that we can explain the intelligence of living beings, in particular of humans, by means of their physical structure?”
Cybernetics, however, did not survive as a discipline in its own right; it found its niche in information technology and artificial intelligence (AI). In the development of AI, the philosophical debate followed a similar track. After AI established itself as a field of scientific research, symbolic AI initially dominated its theory. Its representatives reduced human intelligence to the processing of abstract symbols and developed their ideas on this basis:
“Within the next 20 years, machines will be able to do everything that humans can.” (Herbert Simon, 1965)
The ideas of neural AI, developed in parallel, initially failed to carry the day. In symbolic AI, the computer is supplied with explicit rules covering every situation the desired behaviour must handle. Through the 1980s, the success of chess machines and other AI models seemed to vindicate this approach. Neural AI is based on the perceptron, invented by Frank Rosenblatt in 1958, which replicates human nerve cells. In a perceptron network, the input neurons activate the corresponding output neurons. Information flows through this so-called feedforward architecture in only one direction, yet the network can accumulate and generate new information far faster. Networking across various levels has replaced a hierarchical order as a philosophical foundation, but it comes with constraints.
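The feedforward principle described above can be made concrete with a minimal sketch of Rosenblatt’s perceptron: inputs flow in one direction through weighted connections to an output neuron, and a simple error-driven rule adjusts the weights. The function names and the choice of logical OR as the training task are illustrative, not taken from the original sources.

```python
def train_perceptron(samples, epochs=20, lr=0.1):
    """Train a single perceptron on two binary inputs.

    samples: list of ((x1, x2), target) pairs with targets 0 or 1.
    Returns the learned weights and bias.
    """
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for (x1, x2), t in samples:
            # Feedforward step: weighted sum, then a hard threshold.
            y = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            # Rosenblatt's learning rule: nudge weights toward the target.
            err = t - y
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b

def predict(w, b, x1, x2):
    # Information flows in one direction only: input -> weighted sum -> output.
    return 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0

# Learn logical OR, a linearly separable function a single perceptron can solve.
data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]
w, b = train_perceptron(data)
```

The single-direction data flow is also the model’s constraint: one such unit can only separate inputs linearly, which is why networks with multiple layers later became necessary.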
Philosophy, together with the natural sciences, was able to break free from the mechanistic view of the world during the 20th century, but the theorems of materialism, behaviourism, and functionalism continue to apply in computer science. The first theorem states that there is nothing other than matter. It implies that everything that makes humans human can, in principle, be analysed and constructed using scientific methods. Behaviourism, which emerged at the start of the 20th century, stipulates that only verifiable questions and problems can matter for scientific observation. Consciousness, belief, ideas, and knowledge are merely labels for behavioural patterns that can only be observed indirectly. In functionalism, these mental states are merely internal components of a complex system. The system is described solely by functions that map the same input to the same output and transfer it into functionally equivalent states. These three modes of thinking have one thing in common: they disregard states of the soul, which, as with the mechanistic view, is a consequence of Descartes’ cogito ergo sum: “I think, therefore I am.” Only the mind is relevant, not the body or emotions.
And despite this, the success of AI is measured by its ability to simulate human senses. AI-specific philosophies distinguish between the weak and the strong hypothesis. The first states that machines can simulate humans and their thinking. The second goes a step further and declares that artificially generated thinking is real thinking. The mind is not necessarily human, and what is human about humans has nothing to do with their mind.
The Turing test can support the weak hypothesis: if the questioner cannot tell whether the answers come from a human or a machine, the simulation has succeeded. The test can prove the strong hypothesis only to a limited extent, however, as it cannot show that an AI thinks for itself. The American philosopher of language John Searle flatly rejects the notion of understanding computers. A technical system, he argues, cannot understand the content it processes. His Chinese room thought experiment illustrates that someone can function within a language system without understanding its content.
The philosophical conflict thus has its origins in an understanding of mind and thought that, on the one hand, highlights purely mechanical function. Under mechanism, humans and machines alike can be understood as “information processing systems.” Humans pride themselves on being rational beings, and here rationality appears as a moral standard which, ironically, machines satisfy better, because humans are considered highly irrational.
On the other hand, another element of Descartes’ thinking runs counter to strong AI theories. He separates mind and matter, with the mind being of human provenance. That distinction explains why a machine has no soul.
For machines, though, the question of having a mind or a soul is irrelevant. Learning methods in AI are initially modelled on human methods; over time, they are supposed to become independent. Representatives of strong AI believe that the resulting independence of machines is equivalent to human independence. According to behaviourism, only the result of independent action is relevant. Humanism cannot take this mindset as its basis.
It seems impossible to reconcile both philosophical views.