The short answer is: this isn't actually a big deal. It's a bit like finding another digit of $\pi$: science journalists love writing articles about it, but it doesn't have much mathematical significance.

That's not to say it's easy to find a new largest prime. But the difficulties are all in the computer science, not the mathematics. The algorithm that falls out of Euclid's proof works like this: make a list $p_1, \dots, p_n$ of all the primes you can, form $k = p_1 \cdots p_n + 1$, and compute the prime factorization of $k$. Since $k$ leaves remainder $1$ when divided by each $p_i$, none of them divides $k$, so the factorization is guaranteed to contain a prime that wasn't on your original list.
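Euclid's argument can be turned into a few lines of code. This is a minimal sketch using trial division to factor $k$; the function names are mine, and trial division is exactly the step that becomes hopeless for numbers of record size:

```python
def smallest_prime_factor(m):
    """Find the smallest prime factor of m by trial division.

    Fine for small m; utterly intractable for record-sized numbers.
    """
    d = 2
    while d * d <= m:
        if m % d == 0:
            return d
        d += 1
    return m  # no divisor up to sqrt(m), so m itself is prime


def euclid_new_prime(primes):
    """Given a list of primes, return a prime not on the list."""
    k = 1
    for p in primes:
        k *= p
    k += 1
    # k leaves remainder 1 mod every prime on the list, so any prime
    # factor of k is necessarily a new prime.
    return smallest_prime_factor(k)


print(euclid_new_prime([2, 3, 5]))              # 2*3*5 + 1 = 31, itself prime
print(euclid_new_prime([2, 3, 5, 7, 11, 13]))   # 30031 = 59 * 509, so 59
```

Note that the new prime need not be larger than the ones you started with: $2 \cdot 3 \cdot 5 \cdot 7 \cdot 11 \cdot 13 + 1 = 30031 = 59 \cdot 509$, and $59$ is far smaller than $30031$.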

However, this algorithm quickly becomes computationally intractable: multiplying the known primes together, let alone factoring their product plus $1$, exceeds the capabilities of modern computers long before you find primes that compete with the latest records.

A better idea is to look for really big numbers which you have reason to hope are prime, and then check that they really are. A good collection of numbers to look at are the Mersenne numbers, i.e. numbers of the form $2^n - 1$. It's much easier to write down a Mersenne number than it is to, say, multiply the first ten thousand primes together, so the work is concentrated in testing whether the Mersenne number is actually prime. This is still really hard (the best primality tests are only so fast), but as computers get faster they can handle bigger numbers. Indeed, the reason the latest prime is called M74207281 is that the number is:

$2^{74207281} - 1$
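Mersenne numbers are especially attractive targets because they admit a fast specialized primality test, the Lucas–Lehmer test (the test the GIMPS project has traditionally used for its record primes). Here's a minimal sketch; the function name is mine:

```python
def is_mersenne_prime(p):
    """Lucas-Lehmer test for M_p = 2**p - 1, where p is an odd prime.

    M_p is prime iff s_{p-2} == 0 (mod M_p), where s_0 = 4 and
    s_{i+1} = s_i**2 - 2.
    """
    m = (1 << p) - 1  # the Mersenne number 2**p - 1
    s = 4
    for _ in range(p - 2):
        s = (s * s - 2) % m
    return s == 0


# Exponents of the first few Mersenne primes beyond 2: note 11 fails,
# since 2**11 - 1 = 2047 = 23 * 89 is composite.
print([p for p in [3, 5, 7, 11, 13, 17, 19] if is_mersenne_prime(p)])
# [3, 5, 7, 13, 17, 19]
```

The test runs in roughly $p$ multiplications of $p$-bit numbers, which is why testing a single candidate like $2^{74207281} - 1$ is feasible at all, where general-purpose factoring would not be.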

So there were no mathematical breakthroughs here, just more mileage out of a familiar algorithm. It's not clear to me that there were even any computer science breakthroughs; as far as I know, they ran the same old primality test on faster hardware. But there was something shiny and new for the journalists.
