A short History of Artificial Intelligence - from Antiquity to ChatGPT


The history of artificial intelligence (AI) is a fascinating journey through technological advances, setbacks and breakthrough innovations. From early concepts to today's state-of-the-art systems, AI has evolved by leaps and bounds over the past few decades.

Antiquity

At first glance, artificial intelligence seems like a modern invention. In fact, the idea of artificially creating human intelligence and human-like bodies is thousands of years old.

Already in the oldest work of European literature, Homer's Iliad (late 8th century BCE), we read of self-propelled tripods (book 18.373) and maidens made of gold (book 18.417) who support their inventor, the god Hephaestus, in his work. The Argonautika by Apollonius of Rhodes (3rd century BCE) features the talking ship Argo and the bronze man Talos.

These mythological tales certainly inspired ancient Greek inventors such as Philo of Byzantium and Heron of Alexandria to create hydraulically, pneumatically and mechanically powered, partially programmable automatons of various kinds. Heron's self-driving, three-wheeled theater cart clearly resembles Hephaestus' autonomous tripods.

Middle Ages and Renaissance

In the Middle Ages, too, there were legends about and attempts to create artificial people or human-like creatures. An example from Jewish mysticism is the golem: instructions for its creation first appear in Eleazar ben Judah's 12th-century commentary on the kabbalistic work Sefer Yetzira. In the 16th century, the physician and alchemist Paracelsus likewise described the creation of an artificial homunculus (i.e., a little human being).

Enlightenment and Modernity

Since antiquity, philosophers and mathematicians have been concerned with formal reasoning, which is the basis for programming of any kind. Since René Descartes, the idea has slowly taken hold that humans (at least physically) resemble machines. Thomas Hobbes, for example, writes in the introduction to Leviathan:

Nature (the Art whereby God hath made and governes the World) is by the Art of man, as in many other things, so in this also imitated, that it can make an Artificial Animal. [...] why may we not say, that all Automata [...] have an artificial life? For what is the Heart, but a Spring; and the Nerves, but so many Strings; and the Joynts, but so many Wheeles, giving motion to the whole Body, such as was intended by the Artificer?

  • Hobbes, T. (1996). Leviathan: Revised Student Edition (Cambridge Texts in the History of Political Thought). Cambridge: Cambridge University Press, p. 9.

Gottfried Wilhelm Leibniz went one step further and also saw rational thinking as a kind of mathematical operation which, expressed with suitable symbols, can be calculated formally.

All of these and many more precursors took firmer shape in the work of Alan Turing, the father of theoretical computer science and artificial intelligence. As early as 1950, he designed the Turing Test, which checks whether a computer can express itself so well that a human cannot distinguish it from another human.

Rise of modern AI

One of the most significant developments for modern AI was McCulloch and Pitts' artificial neuron, inspired by discoveries in neuroscience. As early as 1943, McCulloch and Pitts showed that, because of the all-or-nothing activation of biological neurons (a neuron either fires or it does not), mathematical logic can be used to describe the behavior of neural networks.
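
To make the all-or-nothing idea concrete, here is a minimal sketch of a McCulloch-Pitts-style threshold unit in Python; the weights and thresholds are illustrative choices, not values from the 1943 paper:

```python
# Sketch of a McCulloch-Pitts-style neuron: inputs and output are binary (0/1),
# and the unit "fires" only if the weighted sum of its inputs reaches a threshold.
# Weights and thresholds below are illustrative, not taken from the original paper.

def mp_neuron(inputs, weights, threshold):
    """Return 1 if the weighted input sum reaches the threshold, else 0."""
    total = sum(x * w for x, w in zip(inputs, weights))
    return 1 if total >= threshold else 0

# With suitable weights and thresholds, such a unit computes logical functions:
AND = lambda a, b: mp_neuron([a, b], [1, 1], threshold=2)
OR  = lambda a, b: mp_neuron([a, b], [1, 1], threshold=1)
NOT = lambda a:    mp_neuron([a],    [-1],   threshold=0)

print(AND(1, 1), OR(0, 1), NOT(1))  # -> 1 1 0
```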

The term "artificial intelligence" was established for this field of research in 1956, through the Dartmouth Workshop organized by John McCarthy and other scientists. The proposal for the workshop states:

The study is to proceed on the basis of the conjecture that every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it. An attempt will be made to find how to make machines use language, form abstractions and concepts, solve kinds of problems now reserved for humans, and improve themselves.

Thus, the general goal of this research area was set. There was great enthusiasm among researchers and government organizations such as the US agency DARPA, and many conceptual and practical advances were made.
ELIZA, a chat program by Joseph Weizenbaum, was able to converse in English and sometimes even convinced users that they were talking to a human. In reality, however, ELIZA did not understand the content at all. The program generated its answers by skillfully applying grammatical rules to rephrase the input, asking questions, and inserting pre-made text fragments.
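
To illustrate the principle (this is not Weizenbaum's original script, just a toy sketch with invented rules), ELIZA-style pattern matching in Python might look like this:

```python
import re

# Toy illustration of ELIZA-style rephrasing: each rule turns a matched phrase
# into a question, without any understanding of its meaning. The rules are
# invented examples, not part of the historical ELIZA script.
RULES = [
    (re.compile(r"\bI am (.*)", re.IGNORECASE), "Why do you say you are {0}?"),
    (re.compile(r"\bI feel (.*)", re.IGNORECASE), "How long have you felt {0}?"),
    (re.compile(r"\bmy (.*)", re.IGNORECASE), "Tell me more about your {0}."),
]
FALLBACK = "Please go on."  # canned response when no rule matches

def respond(text: str) -> str:
    for pattern, template in RULES:
        match = pattern.search(text)
        if match:
            return template.format(match.group(1).rstrip(".!?"))
    return FALLBACK

print(respond("I am worried about my job."))
# -> Why do you say you are worried about my job?
```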

AI researchers made very optimistic predictions about the future of the field. In 1958, for example, Simon and Newell predicted that within ten years a computer would be world chess champion, provided it was allowed to participate in the world chess championship.

But it would be decades before artificial intelligence would bear fruit outside of universities and research groups.

AI Winters

The AI researchers' optimism led to major disappointments when their predictions fell short. In the 1970s, this meant that less and less research funding was invested in AI. As a result, development progressed slowly.

It was not until the 1980s that a renewed upswing came, when the first expert systems achieved commercial success. The system XCON, for example, saved an estimated $15 million. These systems encoded the knowledge of human experts and used logic rules to solve technical problems and answer questions. As a result, knowledge processing increasingly became the focus of AI research.
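
As a rough sketch of how such rule-based inference works (with invented rules and facts, not XCON's actual knowledge base), a forward-chaining loop in Python might look like this:

```python
# Minimal sketch of forward-chaining rule inference, the core mechanism behind
# expert systems. Each rule says: if all conditions hold, conclude something new.
# The rules and facts here are invented examples for illustration only.
RULES = [
    ({"has_fever", "has_cough"}, "possible_flu"),
    ({"possible_flu", "short_of_breath"}, "see_doctor"),
]

def infer(facts):
    """Repeatedly apply rules whose conditions are satisfied until nothing new follows."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in RULES:
            if conditions <= derived and conclusion not in derived:
                derived.add(conclusion)
                changed = True
    return derived

print(infer({"has_fever", "has_cough", "short_of_breath"}))
# -> includes 'possible_flu' and 'see_doctor'
```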

But this wave of enthusiasm did not last either. By the late 1980s, interest in expert systems was fading because they were very expensive to build, limited in scope, and difficult to keep up to date. They were also too brittle to deal with unexpected requests. This led to a second "AI winter" in the 1990s, i.e. a major decrease in investment and research funding.

Nevertheless, work on AI applications continued. IBM developed the chess computer Deep Blue, which beat the reigning world chess champion Garry Kasparov in May 1997. New concepts and algorithms were developed, often through interdisciplinary exchange with fields such as statistics, economics, and engineering. Many commercial products in speech recognition, finance, and robotics incorporated methods from AI research, even if this was not often emphasized, since AI was perceived as an idealistic dream at the time.

After every Winter, Spring follows

Due to the rapidly increasing storage and computing capacities of computers in the 90s and 2000s, more and more AI methods became practically useful. Algorithms, some of which were decades old, could now be used to their full potential and extended. At the same time, the increasing digitalization and easier collection of large amounts of data via the internet meant that AI methods (such as neural networks) could be trained on enough data to actually come close to the original goals of AI research.

Today's achievements in image and language processing, which made self-driving cars and ChatGPT possible, are built on so-called deep learning: artificial neurons are arranged in many successive layers and trained with an arsenal of mathematical tools. Through the clever combination of different methods and the use of high-quality data sets, artificial intelligence can now write text or program code, generate images from descriptions, or edit videos.
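
As a rough illustration of the "many layers" idea (the layer sizes are arbitrary and the weights are random and untrained here; real networks learn their weights from data), a forward pass through a small network might look like this in Python:

```python
import numpy as np

# Sketch of a layered ("deep") network: each layer applies a linear map
# followed by a nonlinearity. Sizes and weights below are purely illustrative.
rng = np.random.default_rng(0)
layer_sizes = [4, 8, 8, 2]  # input -> two hidden layers -> output
weights = [rng.normal(size=(m, n)) for m, n in zip(layer_sizes[:-1], layer_sizes[1:])]
biases = [np.zeros(n) for n in layer_sizes[1:]]

def forward(x):
    """Pass an input vector through all layers (ReLU on hidden layers)."""
    for W, b in zip(weights[:-1], biases[:-1]):
        x = np.maximum(0.0, x @ W + b)   # ReLU activation
    return x @ weights[-1] + biases[-1]  # linear output layer

print(forward(np.array([1.0, 0.5, -0.3, 2.0])))  # two raw output scores
```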

It is not easy to say what the future holds. But to discover fascinating opportunities, we only have to look to the present:
Today's AI applications are versatile, flexible and increasingly affordable, making it possible for companies of all sizes to reap the benefits of this emerging technology. From automating processes to personalizing customer experiences, AI has the potential to revolutionize the way businesses work.
