The thing is, there is no problem. We understand what division by zero means. You can’t do it. There is no number that meaningfully expresses the concept of what it means to divide by zero.
You can assign a name to the concept of “not a number”, which is what this bozo has done; but that’s not new. The standard floating point representation used by computer hardware manufacturers (IEEE 754) specifically defines a set of values for operations that can’t return a meaningful number: NaN (Not a Number). NaN works the way you should expect a non-number to work: it compares unequal to everything, including itself, and any arithmetic operation involving it just produces NaN again – because comparisons and arithmetic are only meaningful for numbers.
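You can see that behavior from any IEEE 754 environment; here’s a quick Python sketch:

```python
import math

nan = float("nan")  # the IEEE 754 "Not a Number" value

# NaN compares unequal to everything -- including itself.
print(nan == nan)            # False
print(nan < 1.0, nan > 1.0)  # False False

# Arithmetic involving NaN just produces NaN again.
print(math.isnan(nan + 1.0))  # True
print(math.isnan(nan * 0.0))  # True
```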
What he’s done is to take projective geometry – which (as I mentioned in the Steiner post a few weeks back) gives you some useful ways of using infinity – add a NaN value called nullity, and redefine the multiplication and division operators so that they can produce nullity.
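Concretely, his redefined division amounts to something like the following sketch (Python, with an ordinary IEEE NaN standing in for nullity; the function name and the NaN stand-in are mine):

```python
import math

NULLITY = float("nan")  # stand-in for his "nullity"; behaves like IEEE NaN

def transreal_div(a, b):
    """Division made total: defined for every pair of inputs."""
    # Ordinary division wherever it was already defined...
    if b != 0:
        return a / b
    # ...plus assigned values for the cases that used to be errors.
    if a > 0:
        return math.inf   # positive/0 = infinity (projective-style)
    if a < 0:
        return -math.inf  # negative/0 = negative infinity
    return NULLITY        # 0/0 = nullity
```

Notice that this is essentially what IEEE 754 division already does with +∞, −∞, and NaN; the only novelty here is the name.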
What good is it? Well, the crank behind it claims two things:
- That currently, dividing by zero on a computer causes problems because division by zero is undefined. But if computers adopted nullity, then division by zero errors could be gotten rid of, because they’ll produce nullity. Except of course that modern floating point hardware already does have a NaN value, and it doesn’t help with the kind of problem he’s talking about! Because the result is not a number; whether you call it undefined, or you define it as a value that’s outside the set of real numbers, it doesn’t help – because the calculation you’re performing can’t produce a meaningful result. He says if your pacemaker’s software divides by zero, you’ll die, because it will stop working; how will returning nullity instead of signalling a divide by zero error make it work?
- That it provides a well-defined value for 0^0, which he claims is a 1200 year old problem. Except that again, it’s not a problem. It’s a meaningless expression. If you’re raising 0 to the 0th power in a calculation, then something’s wrong with that calculation. Modifying basic math so that the answer is defined as NaN doesn’t help that.
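The pacemaker point is easy to demonstrate: once a NaN (or a nullity) enters a computation, every later step is poisoned too, so the software still has no usable answer. A minimal Python sketch (the specific arithmetic is invented for illustration):

```python
import math

# Pretend some rate calculation hit a 0/0 and, instead of signalling an
# error, quietly came back as NaN/nullity.
rate = math.inf - math.inf   # an operation with no meaningful result: NaN

# Every subsequent step just propagates the non-answer.
adjusted = (rate * 2.0 + 100.0) / 3.0
print(math.isnan(adjusted))  # True: still not a number

# And you can't even branch on it with ordinary comparisons:
print(adjusted > 0, adjusted < 0, adjusted == 0)  # False False False
```

The device doesn’t crash with a divide-by-zero error, but it still has no number to act on – which is exactly the point.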
Basically, he’s defined a non-solution to a non-problem. And by teaching it to his students, he’s doing them a great disservice. They’re going to leave his class believing that he’s a great genius who’s solved a supposed fundamental problem of math, and believing in this silly nullity thing as a valid mathematical concept.
It’s not like there isn’t already enough stuff in basic math for kids to learn; there’s no excuse for taking advantage of a passive audience to shove this nonsense down their throats as an exercise in self-aggrandizement.
To make matters worse, this idiot is a computer science professor! No one who’s studied CS should be able to get away with believing that re-inventing the concept of NaN is something noteworthy or profound; and no one who’s studied CS should think that defining meaningless values can somehow magically make invalid computations produce meaningful results. I’m ashamed for my field.
Further, the reporters who are breathlessly repeating his nonsense claims need to be sent back to school. If you can’t understand the subject that you’re reporting on well enough to recognize this as nonsense, then you shouldn’t be reporting on it.