Does this mean this formula can calculate prime numbers without finding all smaller primes (as the Sieve of Eratosthenes does)? Or is that somehow just implicitly done with all the factorials and sums and whatnot?
It’s implicit in the factorial (Wilson’s theorem) which, for the purpose it’s being used for here, is effectively the Sieve of Eratosthenes turned inside out.
That is, where you might expect a large number of trial divisions, they become the multiplications inside the factorial.
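To make the “turned inside out” idea concrete, here’s a minimal Python sketch of a Wilson’s-theorem primality test (the function name is mine, not anything from the formula): n > 1 is prime exactly when (n−1)! ≡ −1 (mod n), so one factorial and one modulo stand in for the whole batch of trial divisions.

```python
from math import factorial

def is_prime_wilson(n: int) -> bool:
    """Wilson's theorem: n > 1 is prime iff (n - 1)! ≡ -1 (mod n)."""
    if n < 2:
        return False
    # All the "trial divisions" are hidden in the multiplications of (n-1)!
    return factorial(n - 1) % n == n - 1

print([p for p in range(2, 30) if is_prime_wilson(p)])
# [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]
```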
Of course, the efficiency saving of doing it this way (multiplication is generally easier than division) immediately goes out of the window because 1) there’s an n-th root in there, 2) there are 2^n of those and 3) n-th roots are usually calculated by … division. (And this doesn’t even get into the cosine.)
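For reference, assuming the formula under discussion is Willans’ 1964 formula (it has exactly these ingredients: a factorial, a cosine, an n-th root, and a sum of 2^n terms), here’s a sketch that computes the n-th prime from it. To keep it runnable I’ve replaced the cos² term and the n-th root with their exact integer equivalents (floating-point cosines of huge factorials won’t survive), which also makes it plain why the formula buys no efficiency:

```python
from math import factorial

def willans_prime(n: int) -> int:
    """n-th prime via Willans' formula, with the cosine and n-th root
    swapped for their exact integer equivalents."""
    total = 0
    for i in range(1, 2 ** n + 1):
        # Inner sum: floor(cos^2(pi * ((j-1)! + 1) / j)) is 1 iff j == 1 or
        # j is prime (Wilson's theorem), so this is 1 + (number of primes <= i).
        count = sum(1 for j in range(1, i + 1) if (factorial(j - 1) + 1) % j == 0)
        # floor((n / count) ** (1 / n)) is 1 while count <= n and 0 afterwards,
        # so the outer sum just counts how many i lie below the n-th prime.
        total += 1 if count <= n else 0
    return 1 + total

print([willans_prime(n) for n in range(1, 6)])  # [2, 3, 5, 7, 11]
```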
Interesting, thanks. And yeah, it would have surprised me if it were possible to do so without finding the smaller primes.
My intuition tells me that one could probably prove that the computational complexity can’t be lowered.
I’m sure there’s already a proof that there are infinitely many prime numbers, because, well, by multiplying whole numbers you can’t fill out all the gaps in the number line.
And if you look at it like Eratosthenes does, iterating from 2 towards ∞, then any time you successfully find a prime, your prime detection function f(x) needs to be extended to also check for that new prime as a potential divisor.
This feels like a self-extending rule system, where the rules depend on the (partial) solution. So, presumably by definition, you have to compute the partial solution for all numbers that could be relevant to the rules for a given number, which are the usual bounds of the Sieve of Eratosthenes.
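That “self-extending rule system” is easy to see in code. Here’s a small Python sketch of the incremental trial-division view described above (not the classic cross-off-multiples sieve, but the same idea): every prime found gets appended to the rule set, and for any x only the primes up to √x are ever relevant.

```python
def primes_up_to(limit: int) -> list[int]:
    primes: list[int] = []          # the rule set, grown as we go
    for x in range(2, limit + 1):
        # Only rules (primes) up to sqrt(x) can possibly apply to x.
        if all(x % p != 0 for p in primes if p * p <= x):
            primes.append(x)        # extend the rules with the new prime
    return primes

print(primes_up_to(50))
# [2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37, 41, 43, 47]
```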