
Where do our numbers come from?

By Keith Devlin (@KeithDevlin@fediscience.org; @profkeithdevlin.bsky.social)

The numbers we use every day are so familiar, and so much a part of life, that it’s easy to forget they are a recent invention, having been introduced in the late nineteenth and early twentieth centuries. Sure, humanity has had some form of numbers for at least ten thousand years, and counting systems for far longer. But the numbers familiar to almost everyone alive today, pure abstractions that can be represented as points on a line stretching to infinity in both directions (or as points on an infinite plane if you include complex numbers), are recent.

Their definition came after several centuries of attempts to create a number system that would support the demands on numbers made by, in particular, two mathematical techniques that came into being in the seventeenth century: modern algebra and calculus. The pioneers of those two new techniques knew that the numbers with which they were working did not support the uses they were making of them. But they pressed ahead anyway.

One problem was the absence of negative numbers. Seventeenth century mathematicians would happily—or perhaps furtively—put minus signs in their arithmetic expressions, but they did so knowing that “negative number” was an oxymoron. And there were other problems. I wrote about this in the August 2024 Angle, titled “Algebra. It’s powerful. But it’s not what it was.”

Newton was one of many mathematicians who attempted to define numbers in a way that would support the demands being put on them. His definition took “numbers” beyond the pre-modern numbers I described in that August 2024 essay, which were multitude-species pairs. But he was not successful and he did not publish that work. (Being defined as geometric ratios, Newton’s numbers were not negative.)

The numbers we use today

The way out of the dilemma, when it finally came, was to re-conceptualize mathematics itself, as a game played with formally defined abstract entities, according to carefully formulated rules. Much as in chess, it didn’t matter what the pieces were made of or looked like; the game was defined by the rules that specified how the pieces could be moved. Relative to that framework, integers were simply game-pieces that obeyed the rules that governed “integers”. Likewise for real numbers. Or complex numbers. And so on.

That step—establishing mathematics as a game—was taken in the latter part of the nineteenth century. Then, in the early twentieth century, formal definitions were given for the various kinds of numbers, within the new mathematical subject (i.e., another game) of Axiomatic Set Theory. Within Set Theory, mathematicians were in fact able to answer questions like “What is an integer?” or “What is a real number?” The answer in each case was a certain kind of set.
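To give a flavor of what “a certain kind of set” means, here is one standard construction, the von Neumann definition of the natural numbers, in which each number is defined to be the set of all smaller numbers. (The essay does not say which construction is intended; this is offered only as a minimal, standard illustration.)

```latex
% One standard set-theoretic definition of the natural numbers (von Neumann):
% each natural number is the set of all smaller natural numbers,
% and the successor of n is the union of n with the one-element set {n}.
\[
  0 = \varnothing, \qquad
  1 = \{0\} = \{\varnothing\}, \qquad
  2 = \{0, 1\} = \{\varnothing, \{\varnothing\}\}, \qquad
  n + 1 = n \cup \{n\}.
\]
```

Integers, rationals, and reals can then be built on top of these in turn, for instance as equivalence classes of pairs and as Dedekind cuts, so that each kind of number is, formally, a set.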

But specifying what numbers are was just philosophical icing on the cake; the real work as far as mathematics was concerned was done by the axioms (i.e., rules) of the Mathematics Game itself.

What made this work—i.e., what made this approach yield mathematics that could be applied effectively and reliably in the world—was that the axioms were formulated after carefully analyzing mathematical practice over several centuries. For instance, the definitions and axioms for the real numbers came from a new subject called Real Analysis, and today’s complex number system resulted from Complex Analysis.

Modern numbers: from adjectives to nouns

Today’s numbers are (abstract) objects; the words we use to denote them are nouns. But the pre-modern numbers that preceded them were multitude-species pairs, such as “seven camels” or “three dollars”. The multitude part (which is what we would think of as the number) was an adjective. So, the number revolution at the end of the nineteenth century turned number words from adjectives into nouns.

But where did the pre-modern numbers come from? The answer pre-dates all of the written texts on which our current knowledge of the history of mathematics rests; those texts all used some form of pre-modern numbers. And pre-modern numbers have the form of reporting the result of a count. So to answer the question, we need to look back to the beginning of counting. That’s a fascinating issue in its own right. By no means the least fascinating aspect is how we set about finding an answer, given that there are no ancient texts we can consult.

There are, of course, various counting artifacts that have been discovered, such as notched bones and inscribed tablets, presumed to be records of various counts (say, the lunar cycle) relevant to everyday life (such as the planting and harvesting of crops). The oldest such artifact known is the Lebombo Bone, a baboon fibula bearing 29 notches, found in South Africa and dated to between roughly 44,200 and 43,000 years ago.

But how was the counting done? In particular, did it involve numbers, i.e. cognitive entities used to quantify collections?

At this point, we enter the world of informed, reflective speculation. And as you would expect, there are a variety of different (informed) opinions.

There are, however, some scientifically established facts about an innate sense of number that humans—and other species—are born with. Researchers who study that issue refer to that capacity as number sense, which is an unfortunate choice of terminology since the same term is used in the mathematics education world to mean something different (though not entirely separate).

The biological meaning of “number sense”, which is the one of relevance here, is the innate ability to recognize and reason about numerical quantities without recourse to counting. (So it does not require numbers.) I wrote a general-market book about that biologically inherited number capacity some years ago, called The Math Instinct.

(Biological) number sense

Biological number sense is built on two innate systems possessed by humans and other animal species. The first, the object tracking system (OTS), can accurately determine numerical quantity for small collections of objects, with four being the maximum for adult humans and three the maximum for human infants and for other species. This has been verified by tests that require subjects to distinguish between two such collections without counting, say 2 dots and 3 dots displayed on a computer screen. (This is not to be confused with claims about instant recognition of the size of collections of up to seven objects; those involve instantly discerning a grouping pattern, say 4 and 3.)

The second system is called the approximate number system (ANS). It provides approximations of the size of larger collections without recourse to counting. Again, this can be tested by asking subjects (human or animal) to distinguish between two collections. Success depends on the size of the difference between the two collections: the closer in size they are, the less accurate is recognition of the distinction. Six-month-old infants need a 2:1 ratio, say collections of 3 and 6 objects, or 8 and 16. By age nine months, a 3:2 ratio is enough, say collections of 8 and 12 objects. Children improve until they reach the adult level of around a 15% difference, instantly identifying the larger of two collections of, say, 100 and 115 objects.
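To make the ratio dependence concrete, here is a minimal sketch in Python (purely my illustration; the function name and the framing as a single threshold test are not taken from any study) that treats each of the thresholds quoted above as a simple ratio comparison between the two collection sizes.

```python
# Hypothetical illustration of the ratio tests described above; not a model
# from the research literature. Two collections count as distinguishable when
# the larger-to-smaller ratio meets the stated threshold.

def distinguishable(n1: int, n2: int, threshold_ratio: float) -> bool:
    """True if the larger:smaller ratio is at least the discrimination threshold."""
    small, large = sorted((n1, n2))
    return large / small >= threshold_ratio

SIX_MONTHS = 2.0    # 2:1 ratio, e.g. 3 vs 6, or 8 vs 16
NINE_MONTHS = 1.5   # 3:2 ratio, e.g. 8 vs 12
ADULT = 1.15        # roughly a 15% difference, e.g. 100 vs 115

print(distinguishable(3, 6, SIX_MONTHS))     # True
print(distinguishable(8, 12, SIX_MONTHS))    # False: 3:2 is too close at six months
print(distinguishable(8, 12, NINE_MONTHS))   # True
print(distinguishable(100, 115, ADULT))      # True
```

The point of the sketch is just that discriminability tracks the ratio of the two sizes, not their absolute difference: 8 versus 16 is as easy as 3 versus 6.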

Running those studies with pre-linguistic infants is interesting. It relies on the observed fact that infants indicate surprise, by staring for longer, when presented with an image that differs in some way from a previous (but similar) one in a sequence. That surprise response can be elicited by a sudden change in the number of objects in a sequence of displays.

As far as is known, the OTS and the ANS are the only numeracy capacities that are innate in humans or other species. Going beyond that requires counting, which (therefore) has to be a culturally acquired capability, involving “numbers” (of some form).

Almost all human societies have numbers. The only extant exception is the Pirahã tribe in Amazonia; the closest they have to number words are three terms depicting size, hói, hoí, and baagiso, which translate roughly as, respectively, “small” or “few”; “a bit larger”; and “large”. Tellingly, the Pirahã seem unable to differentiate quantities greater than three.

It is, then, to the study of language that we must turn to find evidence of numbers and counting in the centuries (or millennia) prior to the era when mathematical texts were written. Specifically, we have to find linguistic evidence of precise quantities and counting. (That cuts out a lot of the analyses linguists have made, which cover at most imprecise notions of quantity and size.)

What can we learn from linguistic studies?

When we reach back to the origins of language, we become embroiled to an even greater extent in a domain of informed speculation that offers little likelihood of anything resembling hard evidence. I trod some of that ground some years ago with my book The Math Gene, which presented a rational reconstruction of the evolution, by natural selection, of the human capacity to do mathematics.

I first identified eight human cognitive capacities for which (1) a good natural selection argument could be made for each finding its way into the human gene pool, and (2) an argument could be made that, taken together, they provided all the ingredients for mathematical thought. With that collection to hand, I went on to argue that the key to them being brought together to yield mathematical thinking was the emergence of human language (with its grammar). (I drew in particular on the work of the linguist Derek Bickerton for that step.) I did not go any further and look at any of the fine details. But others were (independently) doing just that, and continue to do so.

Some studies factor in what we know about how children today acquire a concept of number. Of the many approaches, the one that rings “truest” to my ears is that this step comes about by virtue of finger counting. And, the argument goes, it has always been so.

Certainly, many of us acquire our knowledge of numbers by:

  1. Being taught, by rote, the number words one, two, three, etc.;
  2. Chanting those words while pointing to, or touching, the objects in a collection, one by one;
  3. Raising or lowering our fingers one by one as we speak or hear the words.

Or something along those lines. This combines two key ingredients: matching two collections in a one-to-one fashion, much like the ancients who made notches in bones, and mastering an ordered list of linguistic tokens (number words).
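Those two ingredients can be made explicit in a few lines of code. The sketch below (in Python, purely my illustration, not anything from the essay) pairs an ordered list of number words, one by one, against the objects of a collection; the answer to “How many?” is simply the last word used, which is exactly the connection the child eventually grasps.

```python
# Minimal, hypothetical illustration: counting as one-to-one matching of an
# ordered list of number words against the objects in a collection.

NUMBER_WORDS = ["one", "two", "three", "four", "five",
                "six", "seven", "eight", "nine", "ten"]

def count_collection(objects):
    """Pair each object with the next number word; return the last word used."""
    last_word = None
    for obj, word in zip(objects, NUMBER_WORDS):
        print(f"{word} -> {obj}")   # the chant, paired with pointing or touching
        last_word = word
    return last_word                # the answer to "How many?": the final word spoken

camels = ["camel_a", "camel_b", "camel_c"]
print("How many camels?", count_collection(camels))   # three
```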

Parents who have done this (are there any who have not?) observe that, at first, the verbal process of speaking the words in order bears little relationship to the answer the child gives to the question “How many … are there in the collection?” But at some point, the child realizes the connection. They grasp the idea of linear counting.

Arguably, at that point they have acquired a concept of (counting) numbers. It’s not today’s concept. Heavens, even Newton didn’t have that. But it seems awfully like pre-modern numbers. It’s a concept of number that is tied to a collection of objects as the final stage of a counting process.

Parents (or teachers) probably go one step further, and show their children how to use their fingers to perform simple arithmetical calculations; some people continue to do so into adulthood. (Not so for people with Gerstmann syndrome, a neurological disorder consisting of dyscalculia, an inability to handle numbers, together with finger agnosia, an inability to discriminate between one’s fingers. Now there’s a telling connection.)

Personally, I always thought that this, or something analogous, is the way we all acquire our number concept. Be that as it may, however, there is linguistic evidence to suggest that it is the finger connection that gave rise to our number-words. (Note that word “suggest”. That’s all I am suggesting; that makes suggest-squared.)

Our use of 10 as the base of our number system suggests that we adopted that system because of finger reckoning. Our use (in English) of the word “digit” for the individual number symbols suggests the same. Our word “five” comes from the term for a fist (i.e., a closed hand), which is what you get if you use your fingers to count down from an open hand. And more.

By more, I mean “a lot more.” You could start by reading a fairly recent account of work along these lines by the linguist Roslyn Frank, in her article “Exploring the evolutionary pathways from number sense to numeracy”, in The Oxford Handbook of Human Symbolic Evolution, edited by Nathalie Gontier, Andy Lock, and Chris Sinha (Oxford: Oxford University Press, 2024). The book is pricey, but there is an Open Access PDF of Frank’s chapter at this URL (University of Iowa).

If you buy the argument I outlined above (you don’t have to buy that book, but it sure looks interesting and there’s that open-access version), then it’s pretty clear that:

  • our ancestors’ (and our children’s) concept of (counting-) number likely began as a reification of the final step in finger-counting processes on sufficiently small collections,
  • and what historians of mathematics call pre-modern numbers was a system built on that concept, adding a numerical lexicon, together with its notational instantiation.