There are some pretty nebulous concepts in mathematics that can be hard to wrap your head around, but the meaning of 'equals' was one we thought we had covered.
It turns out that mathematicians actually can't agree on the definition of what makes two things equal, and that could cause some headaches for computer programs that are increasingly being used to check mathematical proofs.
This academic squabble has been bubbling along for decades, but it has finally come to a head because the computer programs used for 'formalizing' or checking proofs need clear, specific instructions, not ambiguous definitions of mathematical concepts that are open to interpretation or that rely on context computers don't have.
British mathematician Kevin Buzzard of Imperial College London ran into this problem when collaborating with computer programmers, and it prompted him to revisit the definitions of 'this is equal to that', to "challenge various reasonable-sounding slogans about equality."
"Six years ago," Buzzard writes in his preprint posted to the arXiv server, "I thought I understood mathematical equality. I thought that it was one well-defined term… Then I started to try and do masters level mathematics in a computer theorem prover, and I discovered that equality was a rather thornier concept than I had appreciated."
The equals sign (=), with its two parallel lines elegantly representing parity between the objects placed on either side, was invented by the Welsh mathematician Robert Recorde in 1557.
It didn't catch on at first, but in time Recorde's brilliantly intuitive symbol replaced the Latin word 'aequalis' and later laid the groundwork for computer science. Exactly 400 years after its invention, in 1957, the equals sign was first used in a computer programming language, FORTRAN I.
The concept of equality has a much longer history though, dating back to ancient Greece at least. And modern mathematicians, in practice, use the term "rather loosely," Buzzard writes.
In familiar usage, the equals sign sets up equations in which two different expressions describe the same mathematical object or value, something that can be proven with a few switcharoos and logical transformations from side to side. For example, the integer 2 describes a pair of objects, and so does 1 + 1.
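To get a feel for how literal a proof-checking program is about this, here is a minimal sketch in Lean 4, one widely used proof assistant (the example is illustrative and not drawn from Buzzard's paper). The claim that 1 + 1 equals 2 is accepted because both sides compute to the same number:

```lean
-- The prover certifies 1 + 1 = 2 because both sides reduce to the same
-- natural number, so `rfl` ("reflexivity": a thing equals itself) closes the proof.
example : 1 + 1 = 2 := rfl

-- The same fact stated as a named theorem.
theorem one_plus_one_eq_two : 1 + 1 = 2 := rfl
```

This is the everyday, uncontroversial sense of equality: two expressions that boil down to the very same object.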
But a second definition of equality has been used amongst mathematicians since the late 19th century, when set theory emerged.
Set theory has evolved, and with it mathematicians' definition of equality has expanded too. A set like {1, 2, 3} can be considered 'equal' to a set like {a, b, c} because of an implicit understanding called canonical isomorphism, a natural, one-to-one correspondence that matches up the structure of the two collections.
"These sets match up with each other in a completely natural way and mathematicians realised it would be really convenient if we just call those equal as well," Buzzard told New Scientist's Alex Wilkins.
However, taking canonical isomorphism to mean equality is now causing "some real trouble", Buzzard writes, for mathematicians trying to formalize proofs – including decades-old foundational concepts – using computers.
"None of the [computer] systems that exist so far capture the way that mathematicians such as Grothendieck use the equal symbol," Buzzard told Wilkins, referring to Alexander Grothendieck, a leading mathematician of the 20th century who relied on set theory to describe equality.
Some mathematicians think they should just redefine mathematical concepts to formally equate canonical isomorphism with equality.
Buzzard disagrees. He thinks the incongruence between mathematicians and machines should prompt math minds to rethink what exactly they mean by mathematical concepts as foundational as equality so computers can understand them.
"When one is forced to write down what one actually means and cannot hide behind such ill-defined words," Buzzard writes. "One sometimes finds that one has to do extra work, or even rethink how certain ideas should be presented."
The research has been posted on arXiv.