I do remember that definition. I remember you ignoring the fact that it is useless for almost every problem.
Alright, let’s prove that this doesn’t work.
Suppose you have some process that, when prompted, yields a random number between zero and one. A random number generator, essentially. I’ll be generous and assume that every outcome is equally likely. That’s almost never true in practice, which is yet another reason counting measures are almost never useful, but I’ll grant it, just for the sake of simplicity and undeserved generosity.
Now, let’s say we want to determine, a priori, the probability of drawing a number smaller than one half from that generator.
So, how many numbers are there between zero and a half? Well, if our generator returns any real number, then it’s uncountably infinitely many. If it only returns rational numbers, it’s countably infinitely many. Either way, that’s our numerator: \infty_{\text{num}}.
How many numbers are there in total between zero and one? Same story: uncountably infinitely many if our generator returns any real number, countably infinitely many if it only returns rationals. So that’s our denominator: \infty_{\text{den}}.
So the probability that a number from our random number generator is smaller than one half is \frac{\infty_{\text{num}}}{\infty_{\text{den}}}. Which is ill-defined at the best of times. Now, one could try and blurt out the mantra that “some infinities are greater than others”. Never mind that this is not what that means. But even if it did, we can prove that the infinities in our “calculation” are not, in fact, different. There are exactly as many numbers between zero and one half as there are between zero and one, and we can show this fairly easily:
Say x is between zero and one half. Then there exists a number y=2x that is between zero and one. And if x_1\neq x_2, then y_1=2x_1\neq2x_2=y_2. So the set of numbers between zero and one definitely does not contain fewer elements than the set of numbers between zero and a half. But the reverse also works: for every number y between zero and one there exists a number x=\frac y2 that is guaranteed to lie between zero and one half, and again the mapping is one-to-one. So the set of numbers between zero and one does not contain more elements than the set of numbers between zero and a half. And if one quantity is neither greater nor smaller than the other, what else can it be but equal?
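If you want it in symbols, the whole argument is just one map read in both directions:
f\colon\left(0,\tfrac12\right)\to(0,1),\qquad f(x)=2x.
It is injective (x_1\neq x_2 implies 2x_1\neq2x_2) and surjective (every y\in(0,1) is hit by x=\frac y2\in\left(0,\tfrac12\right)), hence a bijection, hence \left|\left(0,\tfrac12\right)\right|=\left|(0,1)\right|.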
So the \infty_{\text{num}} in our numerator, if we are going by a counting measure, is exactly the same as the \infty_{\text{den}} in our denominator. We can just call them both \infty. So by your “definition” of probability, the probability that a random number between zero and one should be smaller than one half is p=\frac\infty\infty=100\%.
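And if you don’t trust the algebra, you can watch the absurdity empirically. Here’s a minimal sketch in Python — with the obvious caveat that a pseudo-random float generator only approximates the idealized generator, but it’s more than good enough to expose the problem:

    import random

    # Stand-in for the idealized generator: uniform floats in [0, 1).
    # Floats are not real numbers, but the approximation is harmless here.
    N = 1_000_000
    below_half = sum(1 for _ in range(N) if random.random() < 0.5)

    # The counting "measure" predicts 1.0; reality disagrees.
    print(below_half / N)  # prints something close to 0.5

Half the draws land below one half. Shocking, I know.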
Not only that, but this would have worked with any restriction of the initial interval. The probability that a random number between zero and one should be smaller than \frac1{1000000} is also 100%, because it’s still \frac\infty\infty if we go with this counting-measures-only definition of probability, and the mapping y=1000000x allows us to conclude that, once again, the numerator and denominator of that fraction are the same.
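The general recipe is the same every time: for any cutoff a with 0<a<1, the map
f_a\colon(0,a)\to(0,1),\qquad f_a(x)=\frac xa
is a bijection, so every initial piece of the interval has exactly as many elements as the whole interval, and the counting recipe spits out \frac\infty\infty=100\% no matter how small you make a.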
It works for offset intervals as well. The probability that the generator yields something within the middle thousandth of the zero-to-one interval is also 100%. The mapping that exhibits the equal cardinalities is a bit more sophisticated, but nothing a sixth-grader couldn’t find: y=1000x-\frac{999}2.
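Check the endpoints if you don’t believe me: the middle thousandth is \left[\frac12-\frac1{2000},\,\frac12+\frac1{2000}\right]=\left[\frac{999}{2000},\,\frac{1001}{2000}\right], and indeed
y\!\left(\tfrac{999}{2000}\right)=\tfrac{999}{2}-\tfrac{999}{2}=0,\qquad y\!\left(\tfrac{1001}{2000}\right)=\tfrac{1001}{2}-\tfrac{999}{2}=1.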
Notably, since you love Kolmogorov’s axioms so much, it may interest you that this definition of probability contradicts them. The set of numbers smaller than one half shares no elements with the set of numbers greater than one half. So by the third axiom, the probability that our generator generates a number lying in either set should be the sum of the probabilities of the two sets. But since both halves have a probability of 1 under your definition, the probability of the union is 1+1=2. That clashes with the second axiom, which fixes the probability of the total sample space at 1, and with the first axiom’s non-negativity, because together the axioms force every probability to be at most 1, yet 2>1.
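Spelled out, with A=\{x<\frac12\} and B=\{x>\frac12\}: A\cap B=\emptyset, so axiom 3 gives
P(A\cup B)=P(A)+P(B)=1+1=2,
while axiom 2 says P([0,1])=1, and axioms 1 through 3 together give P(E)\le1 for every event E, because P(E)+P(E^c)=P([0,1])=1 and P(E^c)\ge0. There is no room for a 2 anywhere.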
So yeah. That definition is garbage. It is completely useless for almost every statistical phenomenon anybody would ever wish to model, and it is trivial to construct examples where it is irreconcilable with your beloved axioms of probability. That you do not understand this is one more demonstration that you lied when you said
and while being a liar doesn’t make you wrong, the fact that lying alone doesn’t prove you wrong doesn’t make you right, either.