There are some pretty nebulous concepts in mathematics that can be hard to wrap your head around, but the meaning of 'equals' was one we thought we had covered.

It turns out that mathematicians can't actually agree on what makes two things equal, and that could cause headaches for the computer programs increasingly being used to check mathematical proofs.

This academic squabble has been bubbling along for decades, but it has finally come to a head because the computer programs used for 'formalizing', or checking, proofs need clear, specific instructions, not ambiguous definitions of mathematical concepts that are open to interpretation or rely on context computers don't have.

British mathematician Kevin Buzzard of Imperial College London ran into this problem while collaborating with computer programmers, and it prompted him to revisit the definition of 'this is equal to that' and to "challenge various reasonable-sounding slogans about equality."

"Six years ago," Buzzard writes in his preprint posted to the arXiv server, "I thought I understood mathematical equality. I thought that it was one well-defined term… Then I started to try and do masters level mathematics in a computer theorem prover, and I discovered that equality was a rather thornier concept than I had appreciated."

The equals sign (=), with its two parallel lines elegantly representing parity between the objects placed on either side, was invented by the Welsh mathematician Robert Recorde in 1557.

It didn't catch on at first, but in time Recorde's brilliantly intuitive symbol replaced the Latin word 'aequalis' and later laid the groundwork for computer science. Exactly 400 years after its invention, the equals sign was first used in a computer programming language, FORTRAN I, in 1957.

The concept of equality has a much longer history though, dating back to ancient Greece at least. And modern mathematicians, in practice, use the term "rather loosely," Buzzard writes.

In familiar usage, the equals sign sets up equations that describe different mathematical objects representing the same value or meaning, something that can be proven with a few switcharoos and logical transformations from one side to the other. For example, the integer 2 can describe a pair of objects, as can 1 + 1.
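
In a proof assistant, this everyday kind of equality is the unproblematic one. As a minimal sketch in Lean 4 (one of the theorem provers used to formalize mathematics; the snippet is an illustration, not code from Buzzard's paper), the claim that 1 + 1 equals 2 can be checked by pure computation:

    -- Both sides compute to the same numeral, so 'rfl' (reflexivity,
    -- the principle that a thing equals itself) closes the proof.
    example : 1 + 1 = 2 := rfl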

But a second definition of equality has been used amongst mathematicians since the late 19th century, when set theory emerged.

Set theory has evolved, and with it mathematicians' definition of equality has expanded too, to encompass the notion of isomorphism, where two distinct sets can be considered 'equal' in the sense that their elements correspond exactly to one another.

"These sets match up with each other in a completely natural way and mathematicians realised it would be really convenient if we just call those equal as well," Buzzard told New Scientist's Alex Wilkins.

However, confusion can arise in mathematics when equality and isomorphism are treated as meaning the same thing, which they do not. The tension is especially apparent in the world of computers, which only recognize the traditional, stricter notion of equality.
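
To make the gap concrete, here is a short Lean 4 sketch (an illustration with made-up names, not code from Buzzard's work). The two types below each have exactly two elements, so a working mathematician would happily call them 'the same set', but the prover only relates them through an explicit element-by-element correspondence; the literal statement that the two types are equal is a different, and much more awkward, claim.

    -- A two-element type, distinct from the built-in Bool.
    inductive Coin where
      | heads
      | tails

    -- What a prover accepts as "these sets match up": an explicit
    -- correspondence together with proofs that it round-trips.
    structure Bijection (a b : Type) where
      toFun     : a → b
      invFun    : b → a
      left_inv  : ∀ x, invFun (toFun x) = x
      right_inv : ∀ y, toFun (invFun y) = y

    -- Coin and Bool are isomorphic in this sense...
    def coinBool : Bijection Coin Bool where
      toFun := fun
        | .heads => true
        | .tails => false
      invFun := fun
        | true  => .heads
        | false => .tails
      left_inv  := fun x => by cases x <;> rfl
      right_inv := fun y => by cases y <;> rfl
      -- ...but nothing here lets you write Coin = Bool and silently
      -- substitute one type for the other in a later theorem.

That silent substitution, swapping one set for an isomorphic one mid-argument, is roughly the move working mathematicians make all the time and that the provers refuse to make for them.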

"None of the [computer] systems that exist so far capture the way that mathematicians such as Grothendieck use the equal symbol," Buzzard told Wilkins, referring to Alexander Grothendieck, a leading mathematician of the 20th century who relied on set theory to describe equality.

Buzzard thinks this mismatch between mathematicians and machines should prompt math minds to rethink exactly what they mean by mathematical concepts as foundational as equality, so that computers can understand them.

"When one is forced to write down what one actually means and cannot hide behind such ill-defined words," Buzzard writes, "one sometimes finds that one has to do extra work, or even rethink how certain ideas should be presented."

The research has been posted on arXiv.