A brief introduction to artificial intelligence (AI)

What actually makes us intelligent?

This is one of the greatest questions facing humankind, yet intelligence is difficult to define. Many aspects of human or animal behaviour are considered intelligent: language, creativity, logic, and learning are all essential ingredients, and each one has given rise to an entire field of research. Richard Feynman once said,

“What I cannot create, I do not understand.”

The motivation for researchers in artificial intelligence (AI) is to emulate these multi-faceted aspects of intelligence – and perhaps, at some point, to develop a system that can match or even exceed human intelligence.

However, only a small part of the community is seriously dedicated to developing a “General AI,” an artificial intelligence capable of human-level performance across the entire spectrum of intelligent tasks. This notion is best understood as a guiding principle, a vision of the future that drives researchers. Most working groups focus on a specific sub-discipline and drive the development of AI within their own field. The AI research landscape is thus divided into disciplines covering practically every aspect of intelligence.

Any intelligent system must first sense its environment and represent it in a suitable manner. Often-enormous volumes of high-dimensional data must be processed before the system can recognise the relevant structures in the world around it and map them internally. Imagine somebody registering the appearance, smell, and shape of an object: for example, a laptop computer. Their brain merges this multi-sensory information and represents it using a series of neurons. Were we then to present only a fraction of the associated stimuli (say, just the sound of keys being struck), that person could still make the necessary association.

With the help of (learned) internal representations, intelligent systems can draw logical conclusions and work out goal-oriented strategies for recognised or predefined problems. A human recognising the sound of typing on a keyboard would not only conclude that somebody was typing something on a computer, but would also apply his or her contextual knowledge to determine who it could be, their reason for writing, and how long it might take.

Taking a future-oriented approach to planning one's own actions, and having an expectation of how the world might react to them, are also among the central cognitive feats of intelligent agents. This does not only apply to humans; other animals also demonstrate behaviour that suggests an ability to plan. Crafty ravens, for example, will only appear to hide their food when other birds are present: they anticipate the threat of theft and take precautions.

Because the world is constantly changing, an intelligent agent must be capable of learning.

Learning and memory formation are central to all of the abilities described above. Last but not least, motor skills, and with them the ability to manipulate one's surroundings, are also important. Only through a closed cycle of action and perception can an intelligent system recognise itself and its influence on the world.

The field of artificial intelligence is divided into two primary methodological disciplines: symbolic AI and statistical AI. Since the emergence of AI in the 1950s, symbolic or rule-based AI has attempted to map central cognitive functions such as logic, deduction, and planning onto computers. In this context, symbolic means using concrete, unambiguous representations of facts, events, or actions. These representations can then be combined using precise logical operations of the form “if X, then Y; otherwise Z.”
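
To make this concrete, here is a minimal sketch of such rule-based reasoning in Python. It is not taken from any real AI system; the facts and rules are invented purely for illustration, and new conclusions are derived by repeatedly applying if-then rules to a set of known facts (so-called forward chaining):

# A minimal sketch of rule-based (symbolic) reasoning.
# The facts and rules below are invented purely for illustration.

facts = {"keyboard_sound_heard", "office_is_occupied"}

# Each rule: if all premises hold, the conclusion is added to the facts.
rules = [
    ({"keyboard_sound_heard"}, "someone_is_typing"),
    ({"someone_is_typing", "office_is_occupied"}, "colleague_is_working"),
]

# Forward chaining: repeatedly apply the rules until no new facts appear.
changed = True
while changed:
    changed = False
    for premises, conclusion in rules:
        if premises <= facts and conclusion not in facts:
            facts.add(conclusion)
            changed = True

print(facts)  # now also contains 'someone_is_typing' and 'colleague_is_working'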

Symbolic AI is capable of modelling abstract processes, and its representations can be easily read by humans. Nevertheless, an autonomous robot must also cope with the same day-to-day uncertainty that animals experience as a result of incorrect, distorted, or incomplete information. What happens when a particular event isn’t covered by the program’s logic?

Statistical AI tries to approach the problem using data. A model of the relevant process – such as which action is optimal for a robot, or how sensor data should be classified – is “learned” on the basis of data or experience.

This discipline – which is also known as “machine learning” – unites mathematical theory with optimisation, statistics, and data mining.

The underlying principle is that all problem-relevant patterns and rules can be found within the data. The objective, then, is to record as much data as possible, evaluate it statistically, and reveal hidden patterns. Once a model of the relevant process has been learned, it enables the intelligent system to make robust predictions – even if the input is unfamiliar. For example, facial recognition systems can be trained on thousands of sample images and still recognise faces that were not part of the training data. To deduce which image structures make up a face and how these relate geometrically to one another, it is not necessary to have seen every possible face; a representative cross-section is sufficient.
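
As a small illustration of this data-driven approach, the following Python sketch uses the scikit-learn library and its bundled handwritten-digit images as stand-in data (the choice of model and parameters here is arbitrary). A simple classifier is trained on labelled examples and then evaluated on images it has never seen:

# A minimal sketch of the statistical approach: learn a model from labelled
# examples and check that it generalises to data it has never seen.
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

digits = load_digits()                      # 8x8 grey-scale digit images
X_train, X_test, y_train, y_test = train_test_split(
    digits.data, digits.target, test_size=0.25, random_state=0)

model = LogisticRegression(max_iter=2000)   # a simple linear classifier
model.fit(X_train, y_train)                 # "learn" patterns from the data

# Accuracy on images the model was never trained on:
print(f"held-out accuracy: {model.score(X_test, y_test):.2f}")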

AI research has seen several crazes over the past few decades, which were also promoted by some of the researchers involved to improve their chances of receiving funding. Often, these areas of research failed to live up to the fantastic promises that the grant-awarding agencies, the media, and even the researchers themselves wanted to believe.

Although we have come much closer to our objective since the 1950s, even today there is still no robot capable of independently performing all the housework, and no AI can answer every question regardless of the subject matter. Consequently, AI research has had to endure several so-called AI winters: periods during which funding and public opinion of the field dropped to a miserable low. Looking back, even the use of the term “artificial intelligence” was weighed very carefully 15 years ago and was often substituted with less hackneyed alternatives like “intelligent systems.”

However, developments over the last decade have been remarkable. Very high-performance artificial intelligences have been created over the course of just a few years. Computer systems have beaten human rivals in specialist domains, such as the question-and-answer game “Jeopardy” or the strategic board game “Go”. Autonomous cars have driven many thousands of accident-free miles, and chatbots are influencing democratic elections. These quantum leaps over the last few years, in many disciplines, are the result of so-called “deep neural networks”. The technology has been around for several decades; however, it has only recently become possible to apply it to challenging problems.

Ultimately, it is advances in theory, together with the constant development of computer hardware, that have enabled this quantum leap. Memory is becoming ever more affordable and computing power ever greater, making it possible to record and process sufficiently large quantities of data. The more complex the problem, the more data is generally required to represent it in a model.

It has been mathematically proven that neural networks are able to approximate practically any mapping, for example translating an input x into a desired output f(x): the input x might be high-dimensional, such as an image, whilst f(x) could be, for example, how that image is classified – “face,” “dog,” or “cat.” Deep neural networks are nothing more than high-performance tools for finding relevant patterns and applying them to solving problems. The computing power at our disposal in the current decade has made it possible to process sufficiently large quantities of data to tackle interesting problems. The field known as “Deep Learning” has brought about a revolution in recent years, allowing state-of-the-art AI systems to take a major step forwards.
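
The following toy Python example illustrates the idea of learning a mapping from x to f(x): a network with a single hidden layer is trained by gradient descent to approximate the function sin(x). The network size, learning rate, and number of training steps are arbitrary choices made purely for illustration:

# A toy illustration of the approximation idea: a small neural network with
# one hidden layer is trained to approximate the mapping f(x) = sin(x).
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(-np.pi, np.pi, 200).reshape(-1, 1)
y = np.sin(x)

# One hidden layer with tanh units, one linear output unit.
W1 = rng.normal(size=(1, 32)); b1 = np.zeros((1, 32))
W2 = rng.normal(size=(32, 1)); b2 = np.zeros((1, 1))

lr = 0.01
for step in range(5000):
    h = np.tanh(x @ W1 + b1)          # hidden activations
    y_hat = h @ W2 + b2               # network output, an approximation of f(x)
    err = y_hat - y                   # prediction error

    # Backpropagation: gradients of the mean-squared error.
    dW2 = h.T @ err / len(x); db2 = err.mean(axis=0, keepdims=True)
    dh = err @ W2.T * (1 - h ** 2)
    dW1 = x.T @ dh / len(x); db1 = dh.mean(axis=0, keepdims=True)

    # Gradient-descent updates.
    W2 -= lr * dW2; b2 -= lr * db2
    W1 -= lr * dW1; b1 -= lr * db1

print("mean squared error:", float(((y_hat - y) ** 2).mean()))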

Thanks to Deep Learning, object and image recognition now matches, and even exceeds, the human benchmark.

Deep Learning is also being combined with existing methods, such as reinforcement learning. Robots such as drones and self-driving vehicles use deep reinforcement learning to generate optimal motion plans. Systems for speech synthesis and natural-language understanding are now astonishingly effective as a result of deep learning methods.
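
To give a flavour of the reinforcement-learning side, here is a minimal tabular Q-learning sketch in Python; deep reinforcement learning replaces the value table below with a deep neural network. The one-dimensional corridor environment and all parameter values are invented for illustration:

# Minimal tabular Q-learning: an agent learns, by trial and error, which
# action is best in each state of a tiny corridor world.
import random

N_STATES, GOAL = 6, 5          # states 0..5, reward only at the goal
ACTIONS = [-1, +1]             # move left or right
q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
alpha, gamma, epsilon = 0.5, 0.9, 0.1

for episode in range(500):
    s = 0
    while s != GOAL:
        # Epsilon-greedy action selection: mostly exploit, sometimes explore.
        if random.random() < epsilon:
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda act: q[(s, act)])
        s_next = min(max(s + a, 0), N_STATES - 1)
        r = 1.0 if s_next == GOAL else 0.0
        # Q-learning update: move Q(s,a) towards r + gamma * max_a' Q(s',a').
        best_next = max(q[(s_next, act)] for act in ACTIONS)
        q[(s, a)] += alpha * (r + gamma * best_next - q[(s, a)])
        s = s_next

# Greedy policy after learning: should step right (+1) towards the goal.
print([max(ACTIONS, key=lambda act: q[(s, act)]) for s in range(N_STATES - 1)])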

However, we are still a long way from matching a human being’s cognitive abilities, and it is unclear whether simply adding memory and processing power will be enough to change this. Moreover, the development of conventional computer architectures is limited by the laws of physics. Leakage currents and heat build-up within ever-smaller CPU structures are driving chip manufacturers to parallelise more and more, that is, to use ever more cores instead of ever-faster ones. Indeed, we cannot expect chip structures to become substantially smaller, and in turn the available space in the computer case also limits the strategy of parallelisation.

One possible solution might be found in so-called neuromorphic chips. These comprise analogue and hybrid circuits that replicate the excitation and impulses of nerve cells. Such chips do not allow us to execute standard program code; however, robots as well as stationary intelligent systems could delegate specialised tasks, such as processing sensor data, to this highly specialised hardware. And just maybe, this will also help us to better understand the complex interactions within our own nerve cells. It would be fantastic if, by emulating our own brains, we could not only better understand ourselves but also create robots that we could truly call intelligent.
