The History of Artificial Intelligence

by Valentin Calomme, Data Scientist for Mediaan

Artificial Intelligence (AI) seems to be the next big thing. Nowadays, it is impossible to avoid the subject, whether we are talking with clients, watching the news, or simply looking at the supercomputers lodged in our pockets. AI is quickly moving into our everyday life and becoming mainstream. But just how ‘new’ is this latest technology? It is much older than you might imagine. Long before your connected home could order groceries by drone delivery, humanity was already fantasizing and telling stories about mechanical and artificial beings, as shown in ancient Greek and Egyptian myths.

Mediaan takes you on a journey to discover the evolution and history of artificial intelligence, starting with the first computational machines, calculators!


17th century – 1950
First forms of computational machines, calculators!

In the 17th century, scientists and philosophers like Hobbes, Leibniz, and Descartes proposed that all human reasoning could be reduced to computation, and could therefore be carried out by a machine. This hypothesis has been driving AI research to this day. It led to the first forms of computational machines: calculators!

However, it still took more than 200 years for the first calculators to be sold to the general public, further democratizing the automation of reasoning. Around the same time, Charles Babbage and Ada Lovelace were theorizing the first computers as we know them today.


Artificial Intelligence as a threat to our culture


After the industrial revolution finished transforming the world as it was once known, artificially intelligent beings made a comeback in culture, this time portrayed as a menace to humans.

Think of Frankenstein’s monster turning on his creator, Rossum’s Universal Robots rebelling and putting an end to the human race (1920), or robots being used as weapons of war in Master of the World (1934). Isaac Asimov then came to the rescue by proposing his “Three Laws of Robotics“, a nod to Newton’s Laws of Motion, as a safeguard against a robot takeover.


A wave of optimism


After Alan Turing and his colleagues broke German codes during World War II with the help of some of the first large-scale computing machines, a wave of optimism swept across the planet. Machines and computers could not only be useful, they could save lives. In 1950, the same Alan Turing proposed his now-famous namesake test, seeking to provide a formal benchmark for artificial intelligence.
A couple of years later, Arthur Samuel of IBM, a forefather of machine learning, created a checkers program capable of learning and improving by itself.

1950 – early 90’s

The official birth of Artificial Intelligence as a field

In 1956, the Dartmouth Conference marked what is commonly known as the official birth of AI as an academic field. As the first high-level computer languages like FORTRAN, LISP, or COBOL were invented, enthusiasm and hope were at an all-time high.

Frank Rosenblatt’s Perceptron and Hubel and Wiesel’s experiments on the visual cortex of cats promised that machines could mimic the human brain. Joseph Weizenbaum’s chatterbot ELIZA (1966) became one of the first programs able to fool people into believing they were talking to a human, an early run at the Turing Test. Around the same time, in Japan, WABOT-1, the world’s first full-scale intelligent humanoid robot, was born.
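For the curious, here is a minimal Python sketch of the kind of learning rule Rosenblatt’s Perceptron introduced: a weighted sum, a threshold, and a small correction whenever the prediction is wrong. The tiny AND-gate dataset, the learning rate, and the number of epochs are made-up illustration values, not anything from the historical program.

# Purely illustrative sketch of Rosenblatt-style perceptron learning.
def train_perceptron(samples, epochs=20, lr=0.1):
    """Learn weights and a bias for a small, linearly separable binary dataset."""
    n_features = len(samples[0][0])
    weights = [0.0] * n_features
    bias = 0.0
    for _ in range(epochs):
        for inputs, target in samples:
            # Predict 1 if the weighted sum crosses the threshold, else 0.
            activation = sum(w * x for w, x in zip(weights, inputs)) + bias
            prediction = 1 if activation > 0 else 0
            # Nudge the weights toward the correct answer when we are wrong.
            error = target - prediction
            weights = [w + lr * error * x for w, x in zip(weights, inputs)]
            bias += lr * error
    return weights, bias

# Example: learning the logical AND function.
data = [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]
w, b = train_perceptron(data)
print(w, b)  # e.g. small positive weights and a negative bias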


Hitting the glass ceiling


Though the overall state of the economy in the 1970s did not help, many of the expectations from the previous years hit the glass ceiling, and hit it hard. Unable to deliver on their promises of superhuman intelligence, AI researchers saw their funding all but disappear, in what would be remembered as the first of two AI “winters”.

In the 1970s, though AI lost popularity and much of its financial support, it gained something arguably far more valuable. More attention went to programming languages, as the earlier ones had become too rigid for their own good. This is when languages like C, Prolog, SQL, and Pascal were born. The fact that these languages are not only still alive today but are cornerstones of modern-day programming speaks for itself. Winter had come, but spring was going to last for a long time.


Artificial Intelligence as a cost-saving technology


In the 1980s, researchers revised their ambitions and figured that instead of an all-knowing AI, they could build very efficient expert systems: systems that performed incredibly well in specific fields like scheduling, stock management, or order processing. This provided enormous savings for the world’s largest corporations. AI was no longer a theoretical utopia pursued by a lone group of researchers; it was a cost-saving technology prized by the business world. And the money followed.
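To give a flavour of what such an expert system looks like under the hood, here is a toy forward-chaining sketch in Python: if-then rules are applied to known facts until nothing new can be derived. The rule and fact names (order_received, schedule_shipment, and so on) are invented for illustration and do not come from any real system.

# Toy rule-based "expert system" sketch (forward chaining), for illustration only.
def infer(facts, rules):
    """Apply if-then rules until no new facts can be derived."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in rules:
            if conclusion not in derived and all(c in derived for c in conditions):
                derived.add(conclusion)
                changed = True
    return derived

# Hypothetical order-processing rules: each rule is (conditions, conclusion).
rules = [
    ({"order_received", "stock_available"}, "schedule_shipment"),
    ({"order_received"}, "send_confirmation"),
    ({"schedule_shipment", "carrier_booked"}, "notify_customer"),
]
facts = {"order_received", "stock_available", "carrier_booked"}
print(sorted(infer(facts, rules)))  # includes schedule_shipment, send_confirmation, notify_customer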

Companies started to fund and build the infrastructure needed to maintain these systems. Better machines were built, languages like C++, Perl, and even MATLAB, which allowed for large-scale implementation and maintainability, made their debuts, and new techniques capable of using data and logic to automate processes were conceived.


The second Artificial Intelligence winter

The rise in enthusiasm did not last long. Corporations were spending large sums of money on AI research and on machines built specifically for it, and the returns no longer justified the expense. The programs proved too narrow and too difficult to improve, and the dedicated machines were outdone in speed and power by Apple’s and IBM’s newest desktop computers.

Once again, goals that were set at the beginning of the decade were not met, and a sudden disbelief in the field as a whole took over. It was time for the second AI winter.

1990 – today

Cross-field collaboration

Thankfully, the loss of public faith in the field proved to be incredibly beneficial. Free of unachievable goals and public scrutiny, researchers could work in peace and came up with findings that are still highly relevant today.

Another surprising effect of the lack of funding was that AI researchers began to create more specific subfields and to work alongside experts from other disciplines in order to finance their research. This cross-field collaboration made AI more rigorous and more scientific, and pushed it to incorporate concepts from probability theory and classical optimization. A huge victory for the field.


Computer beats best chess player in the world


Though AI was negatively portrayed in pop culture, in movies like The Matrix, Terminator, or 2001: A Space Odyssey, it made giant leaps. The combination of better hardware, new theorems and techniques, and new “internet-age” programming languages such as Python, Java, JavaScript, R, or PHP resulted in new milestones.

In 1997, Deep Blue beat chess world champion Garry Kasparov in what will forever remain a huge victory for AI. For the first time, in a highly publicized fashion, AI had shown that it could be better than humans at a task widely considered the epitome of human intelligence. In the meantime, without necessarily receiving the credit it deserved, AI continued to solve very difficult problems in data mining, medical diagnosis, banking software, and speech recognition. It was also at that time that the world was introduced to Google and its now-famous search engine.


The social media era

After nearly two decades of being shunned by the general public and the business world, AI made a huge comeback in public opinion. As Millennials and the Internet generation became a larger part of the population, enthusiasm about AI and technology soared again. In 2006, Facebook opened up to the general public, Twitter was founded, YouTube was bought by Google, and a new economic landscape was born.

The social media era turned the business world upside down. Heaps of data about people’s likes and dislikes were now available, and everything had to be faster, better, and more personalized. The world became too fast and too complex for humans to handle alone. They needed help and turned back to AI as their savior, as terms like “Deep Learning”, “Data Science”, and “Big Data” became mainstream.


Artificial Intelligence embedded in our everyday life


In the past few years, AI has become more and more embedded in our everyday life. Siri can plan meetings for you. Netflix knows what movies you will enjoy. Supermarkets offer you personalized discounts based on your shopping habits. Facebook can tell you who is in your pictures and who you might know. And Google knows what you want before you even finish typing.

Even in pop culture, nerds have made a comeback. Entrepreneurs like Elon Musk, Bill Gates, or Steve Jobs are held up as modern-day heroes. And, great news for my colleagues and me, data scientist has even been dubbed the sexiest job of the 21st century.


Artificial Intelligence for Business


AI is not only present in our everyday life; it also offers big opportunities for business. It can, for instance, be used in law or healthcare. Companies in the logistics industry use it to improve their resource planning, which also allows them to be greener. Digital agents are used to improve customer care or to help employees do their jobs better. These are just a few of the many examples of AI in business today.

I believe we can honestly conclude that the future of AI looks very promising.
