A Universal Language for Reasoning
Leibniz's dream and the search for a universal language of thoughts
Probably the greatest idea in all of Computer Science is the very definition of “computation”. This is the foundational step that enabled everything else in our field to fall into place. And it started with a dream of one of the greatest mathematicians, philosophers, and scientists of all time and very likely the first computer scientist, Gottfried Leibniz.
This is the first issue of a series exploring the greatest ideas in Computer Science. Today, we will talk about the long story of humanity’s search for the ultimate machine, a machine to solve all problems; the kind of machine we today call a “computer”.
Humanity’s ingenuity extends far back into the foggy ages of the agricultural revolution, and probably even farther. Since the dawn of Homo sapiens, our species has been interested in predicting the future. Tracking the seasons, planning crops, estimating an enemy’s forces, and building cities: all of these tasks require fairly complex computations that need to be reasonably accurate to be useful.
For this reason, ancient civilizations came up with all sorts of computational devices to keep track of complex phenomena such as planetary motion. Probably the most famous case is the Antikythera mechanism, a roughly two-thousand-year-old device used to predict eclipses that is sometimes regarded as the first known analogue computer. The zairja, from around 1000 CE, is another interesting example: a device used to make astrological predictions, much like a simplified language model (only half-jokingly here).

These are but two examples of a trend we can see developing well into the Middle Ages and the Renaissance: the construction of machines that perform some sort of calculation automatically. This trend finds its ultimate realization (before the arrival of electrical computers) in Charles Babbage’s Difference and Analytical Engines. But this is a story for another day.
In this post, I want to begin exploring the ideas that led to the notion of an “algorithm”, or computational process, regardless of its physical realization.
The modern history of Computer Science can be said to begin with Gottfried Leibniz, one of the most famous mathematicians of all time. Leibniz worked on so many fronts that it is hard to overstate his impact. He made major contributions to math, philosophy, law, history, ethics, and politics. He is probably best known for co-inventing (or co-discovering) calculus along with Isaac Newton. Most of our modern calculus notation, including the integral and differential symbols, is heavily inspired by Leibniz’s original notation. But in this post, we care about him primarily for his larger dream of what mathematics could be.

Leibniz was amazed by how much notation could simplify a complex mathematical problem. Take a look at the following example:
“Bob and Alice are siblings. Bob is 5 years older than Alice. Two years ago, he was twice her age. How old are they?”
Back before Algebra was invented, the only way to work through this problem was to think hard, or maybe try a few lucky guesses. But with algebraic notation, you just write a couple of equations like the following and then apply some straightforward rules to obtain an answer.
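For instance, writing $b$ for Bob’s age and $a$ for Alice’s (the exact symbols may differ from the original figure, but the idea is the same), the problem becomes:

$$
\begin{aligned}
b &= a + 5 \\
b - 2 &= 2\,(a - 2)
\end{aligned}
$$

Substituting the first equation into the second gives $a + 3 = 2a - 4$, so $a = 7$ and $b = 12$.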
Any high-school math student can solve this kind of problem today, and most of them don’t really have the slightest idea of what the heck they’re doing. They just apply the rules and voila, the answer comes out. The point is not about this problem in particular, but about the fact that using the right notation (algebraic notation, in this case), and applying a set of prescribed rules (an algorithm), you can just plug in some data, and the math seems to work by itself, pretty much guaranteeing a correct answer.
In cases like this, you are the computer: you follow a set of instructions devised by some smart “programmers”, instructions that require no creativity or intelligence from you, only that you painstakingly apply every step correctly.
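And precisely because the rules are so mechanical, a machine can apply them just as well. Here is a minimal sketch in Python using sympy’s equation solver (the variable names and the specific problem are mine, purely for illustration):

```python
# A minimal sketch: the "prescribed rules" of algebra are mechanical enough
# that a program can apply them for us. Variable names are just illustrative.
from sympy import Eq, solve, symbols

a, b = symbols("a b")  # Alice's and Bob's ages

equations = [
    Eq(b, a + 5),            # Bob is 5 years older than Alice
    Eq(b - 2, 2 * (a - 2)),  # two years ago, Bob was twice Alice's age
]

print(solve(equations, [a, b]))  # -> {a: 7, b: 12}
```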
Leibniz saw this and thought something like: “What if we could devise a language, like algebra or calculus, that manipulates not known and unknown magnitudes, but known and unknown truth statements in general?” In his dream, you would write equations relating known truths to statements whose truth you don’t yet know, and by pure syntactic manipulation of symbols, you could arrive at the truth of those statements.
Leibniz called it the Characteristica Universalis. He imagined it as a universal language for expressing human knowledge, independent of any particular human language and applicable to all areas of human thought. If this language existed, it would just be a matter of building a large physical device (he might have imagined something like a giant windmill), cramming all of the current human knowledge into it, and letting it run; it would output new theorems about math, philosophy, law, ethics, and so on.
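We have nothing remotely close to that universal language, but as a toy illustration of what “deriving new truths by mechanically applying rules” looks like, here is a small forward-chaining sketch (the facts, rules, and names are entirely made up for this example):

```python
# A toy illustration of mechanical inference: forward chaining over if-then
# rules. Facts and rules here are made up; real knowledge bases are far richer.
facts = {"it_rains"}
rules = [
    ({"it_rains"}, "ground_is_wet"),                  # if it rains, the ground is wet
    ({"ground_is_wet", "it_is_cold"}, "ground_freezes"),
]

changed = True
while changed:                        # keep applying rules until nothing new follows
    changed = False
    for premises, conclusion in rules:
        if premises <= facts and conclusion not in facts:
            facts.add(conclusion)     # a purely syntactic step: match premises, add conclusion
            changed = True

print(facts)  # -> {'it_rains', 'ground_is_wet'} (the second rule never fires)
```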
In short, Leibniz was asking for an algebra of thoughts, what we today call “logic”. He wasn’t the first to consider the possibility of automating reasoning, though. The idea of having a language that can produce true statements reliably is a major trend in western philosophy, starting with Aristotle's syllogisms and continuing through the works of Descartes, Newton, Kant, and basically every single rationalist that has ever lived. Around four centuries earlier, philosopher Ramon Llull had a similar but narrower idea which he dubbed the Ars Magna, a logical system devised to prove statements about, among other things, God and the Creation.
However, Leibniz was the first to go as far as to consider that all human thought could be systematized in a mathematical language and, even further, to dare to dream about building a machine that could apply a set of rules and derive new knowledge automatically. For this reason, he is widely regarded as the first computer scientist, in an age when Computer Science wasn’t even an idea.
Leibniz's dream echoes some of the most fundamental ideas in modern Computer Science. He imagined, for example, that prime numbers could play a major part in formalizing thought, an idea that eerily anticipates Gödel numbering. But what I find most resonant with modern Computer Science is the equivalence between a formal language and a computational device, which ultimately becomes the central notion of computability theory, a topic we will explore in future issues.
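To make the prime-number connection concrete, here is a tiny sketch of the core trick behind Gödel numbering (the helper names and example codes are mine; Gödel’s actual construction is more elaborate): a sequence of symbols, each given an integer code, is packed into a single number by raising the n-th prime to the n-th code.

```python
# A toy version of Gödel numbering: encode a sequence of positive integer codes
# as one integer, using one prime per position. Helper names are illustrative.
from sympy import factorint, prime

def godel_number(codes):
    """Encode a sequence of positive integer codes as a single integer."""
    n = 1
    for i, code in enumerate(codes, start=1):
        n *= prime(i) ** code          # i-th prime raised to the i-th code
    return n

def godel_decode(n):
    """Recover the sequence of codes (inverse of godel_number)."""
    factors = factorint(n)             # {prime: exponent}
    return [factors[prime(i)] for i in range(1, len(factors) + 1)]

codes = [3, 1, 4, 1, 5]                # say, the codes of the symbols in a formula
n = godel_number(codes)                # 2**3 * 3**1 * 5**4 * 7**1 * 11**5
print(n, godel_decode(n) == codes)     # -> 16910355000 True
```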
Unfortunately, most of Leibniz's work in this regard remained unpublished until the beginning of the 20th century, when most of the developments in logic needed to fulfil his dream were well on their way, pioneered by logicians like Gottlob Frege, Georg Cantor, and Bertrand Russell. But that’s a story for another day :)
And that’s it for today. If you enjoyed this post, feel free to share it with anyone you think might find it interesting. Also, I’d love to read any comments about this topic or ideas about other topics you’d like me to write about.
Special thanks to the following people who collaborated on important suggestions and corrections: Luis Ángel Méndez Gort and Warren Sirota.