Gerhard Niklasch on Wed, 24 Jul 2002 17:19:42 +0200 (MET DST)



Re: Strange loop in factorint/ECM ?


In response to:
> Message-id: <Pine.GSO.3.96.1020724141043.13823V-100000@math39>
> Date: Wed, 24 Jul 2002 16:27:59 +0200 (MET DST)
> From: Gottfried Barthel <barthel@fmi.uni-konstanz.de>
> 
> I came across a strange problem and don't know what to do. I am trying to
> factor the following composite integer with 98 digits using factorint:
> 
> 6204279888025911871492956512013727029580743831970370591158040237796021\
> 9673213931222368764555941101
> 
> Rho/SQUFOF did not find any "cheap" factor. Then factorint entered the ECM
> phase with the message
> 
> IFAC: trying Lenstra-Montgomery ECM
> ECM: working on 64 curves at a time; initializing for up to 50 rounds...
> ECM: time =      0 ms
> ECM: dsn = 12,  B1 = 1800,      B2 = 198000,    gss =  128*420
> ECM: time =  68920 ms, B1 phase done, p = 1801, setting up for B2
> ECM: time =   1210 ms, entering B2 phase, p = 2017
> ECM: time =  42990 ms
> 
> and continued for quite a while, but now it seems sort of stuck in a loop
> since it has consecutively issued the same values 
> 
>    dsn = 72,  B1 = 1073500,   B2 = 118085000, gss = 1024*420 :

No, it is not *stuck* in a loop; it is still making progress through
the loop.  Different elliptic curves are used each time.

Brief sketch of what is going on here:  At this point, we are trying
our darndest to snare any reasonably small factors before going to an
all-out effort, where "reasonably small" means something like 18-25
digits or thereabouts.  The next thing to be tried would be MPQS, and
MPQS, unlike Pollard Rho and ECM, does not benefit at all from the
presence of a small factor - its running time essentially depends only
on the size of the input number.  Once we embark on it, we can only
plod along until we have enough relations to attempt the Gaussian
elimination - or give up  (usually, in my experience, because a power
outage hits first ;^{).  MPQS is going to take weeks if not months
for an input of this size on a single-threaded implementation like ours.
So before even starting such an attempt, we should really spend several
*days* trying to catch any smallish factors, if they exist, using ECM,
which *will* terminate within seconds of having snared a factor.  Of
course, it might miss a factor, and there might not even be a factor
small enough for it to find, but when you average over all composite
98-digit numbers without factors <10^18, you still gain by doing this
since enough of them do have factors <10^25-or-so.

Now, for large input and for technical reasons, ECM's tabulated bounds
increase somewhat faster than the theoretical optimum, which means once
we get to dsn = 72, we really haven't yet examined a sufficient number
of curves!  So we spend a while looping on the same bounds examining
more and more curves.  This is the reason for the apparent "stuckness".

(The "technical reason" here is essentially that one would need vastly
more tabulated bounds to get a better behaviour, viz. a growth rate
of the bounds which depends both on the input size and on the history
of what has already been done on the input number.  Or an algorithm
which computes suitable bounds on the fly.  There is certainly room
for improvement here.  The present code uses the input size to figure
out how many times we'll go through the loop, and to guesstimate a
nice set of bounds to be used for the first iteration, and then just
uses one and the same fixed "ladder" of bounds to climb up.)

> Any hint what to do? 

(1) Nothing, just wait and watch it.

(2) If you can apply a second CPU to the task, run factorint() again
on that one, but using the flag argument to suppress the stages which
the first one has already been doing, and go straight to the final ECM
phase.  (Different curves will be used, so this won't duplicate any
work done by first-stage ECM on the first CPU.)
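
For concreteness, with n the 98-digit number quoted above  (flag bits
as documented in the PARI manual -- do check  ??factorint  in your own
version, as these may have changed):

  ? factorint(n, 1 + 2 + 4)   \\ 1: avoid MPQS,  2: skip first-stage ECM,
                              \\ 4: skip Rho/SQUFOF -- straight to final ECM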

> By the way, could someone explain to me the meaning of the output? I did
> not find any documentation in the User's guide. I understand that B1 and
> B2 should be the bounds in the present ECM round, but what is the meaning
> of dsn, gss, and p ?

dsn is lost in the depths of history.  Something like "digit size of n"
may have been the original meaning - I'll be happy to be corrected. :)

gss means "giant step size", and gives a hint of how the B2 phase
works internally.  p is the current small prime.  Ultrabrief sketch
of ECM, for context and for the benefit of the gallery:

We know that Z/NZ is a ring with zero divisors, and we are trying to
find one by doing many computations mod N in the hope of finding
two ring elements whose difference has a nontrivial common divisor
with N.
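
A toy example of the kind of event we are hoping for, with 221 = 13*17
standing in for N:

  ? gcd(30 - 4, 221)   \\ 30 and 4 differ by 26 = 2*13
  %1 = 13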

Our computations are arranged around an object which would be an
elliptic curve if Z/NZ were a field, but which projects to an honest-
to-god elliptic curve over the field Z/PZ for any prime divisor P
of N  (as yet unknown).  We try very hard to hit the point at
infinity of such a projected curve, because once we get there, we
have that P.  (It will show up during an attempted division a/b
mod N, in the form of a denominator b which isn't invertible mod N.)

Now any elliptic curve over Z/PZ has a number of points which is
not too far from P+1  (Hasse's theorem),  and moreover, any elliptic
curve over Z/PZ is a lovely abelian group.  The idea is to work on
many such curves, start from an essentially random point, and take
multiples of that point under the group law in the hope of reaching
the neutral element  (= point at infinity).  We're gambling on, by
chance and luck, guessing a curve over Z/PZ whose order, as a group,
is a product of  (moderate powers of)  many rather small primes p,
with _no_ huge factor.  So we pick some point on a curve, compute
its [2^reasonable]-fold multiple under the group law, then the
[3^something] multiple of the result, then the [5^blah] multiple
of that, hoping to descend into smaller and smaller subgroups on
the curve, and finally to hit the one-element subgroup.  This is
the first or B1 phase, and B1 essentially prescribes how large the
powers p^blah are allowed to be.
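
To make the mechanics concrete, here is a minimal GP sketch of such a
B1 phase  (my own illustration, not the library code:  the function
names are made up, the arithmetic is naive affine short-Weierstrass,
and the real implementation uses Montgomery-form curves and batches
many curves together).  The key point is that every division mod N
carries a gcd check, and a failed inversion is precisely the jackpot:

  \\ Points: [x, y] mod N, or [0] for the point at infinity.

  \\ Invert d mod N, or report g = gcd(d, N) > 1.
  \\ Returns [1, d^(-1) mod N], or [0, g]  (g is the prize, unless g = N).
  inv_or_factor(d, N) =
  {
    my(g = gcd(d, N));
    if (g > 1, return([0, g]));
    [1, lift(Mod(d, N)^(-1))];
  }

  \\ Add P and Q on y^2 = x^3 + a*x + b "mod N"  (b is never needed).
  \\ Returns [P+Q, 0], or [[0], g] when a denominator exposes a divisor g.
  ecadd(P, Q, a, N) =
  {
    my(t, la, x3);
    if (#P == 1, return([Q, 0]));
    if (#Q == 1, return([P, 0]));
    if ((P[1] - Q[1]) % N == 0,
      \\ same x: either P = -Q  (sum is infinity)  or P = Q  (tangent)
      if ((P[2] + Q[2]) % N == 0, return([[0], 0]));
      t = inv_or_factor(2*P[2], N);
      if (!t[1], return([[0], t[2]]));
      la = (3*P[1]^2 + a) * t[2] % N
    ,
      t = inv_or_factor(P[1] - Q[1], N);
      if (!t[1], return([[0], t[2]]));
      la = (P[2] - Q[2]) * t[2] % N
    );
    x3 = (la^2 - P[1] - Q[1]) % N;
    [[x3, (la*(P[1] - x3) - P[2]) % N], 0];
  }

  \\ [k]P by double-and-add, propagating any divisor found on the way.
  ecmul(k, P, a, N) =
  {
    my(R = [0], t);
    while (k > 0,
      if (k % 2,
        t = ecadd(R, P, a, N); if (t[2], return(t)); R = t[1]);
      k \= 2;
      if (k,
        t = ecadd(P, P, a, N); if (t[2], return(t)); P = t[1])
    );
    [R, 0];
  }

  \\ B1 phase on one curve:  [p^e]-multiples for all prime powers p^e <= B1.
  \\ Returns a nontrivial divisor of N, or 0  (try B2 or a fresh curve).
  ecm_stage1(N, a, P, B1) =
  {
    my(t, q);
    forprime (p = 2, B1,
      q = p; while (q * p <= B1, q *= p);   \\ largest power of p below B1
      t = ecmul(q, P, a, N);
      if (t[2], return(if (t[2] < N, t[2], 0)));   \\ g = N is a dud
      P = t[1];
      if (#P == 1, return(0))
    );
    0;
  }

  \\ Typical use, on a random curve through a random point:
  \\   ? a = random(N); P = [random(N), random(N)];
  \\   ? ecm_stage1(N, a, P, 10^4)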

The B2 phase gambles on just *one* prime p between B1 and B2 dividing
the order of the group to the first power.  Instead of successively
taking p-multiples of the current point, we look at a great many
p-multiples of one and the same point  (but with different p's)  at
once, because that saves a great many arithmetical steps -- essentially,
we stop computing numerators and taking quotients; we just compute
denominators, and do so by stepping along differences of  (successive)
primes.  And we multiply up lots of denominators, too, since all we
want to know in the end is whether their product mod N has a
nontrivial factor in common with N.
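
In GP terms, that device amounts to something like this  (a sketch; the
function name is made up):

  \\ One gcd for a whole batch of candidate denominators:  form their
  \\ product mod N first, then take a single gcd with N.
  batch_gcd(ds, N) =
  {
    my(acc = 1);
    for (i = 1, #ds, acc = acc * ds[i] % N);
    gcd(acc, N);
  }

A nontrivial answer tells us that at least one of the ds shares a
factor with N, and we can then backtrack to isolate it if necessary.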

This phase is organized around some kind of baby-step-giant-step
scheme, and gss is the size of the giant steps -- very roughly, over
what intervals we lob all p's together before taking a gcd with N.

(The 420 in there refers to using the prime residue classes mod 420
for locating primes.  The implementation keeps track of where the
current p is mod 420, which helps in locating the next p.  Think of
climbing up a huge helical staircase with 420 steps per revolution,
in jumps so you hit only the prime residue classes, and taking a
little rest to catch your breath after each set of gss = 1024*420
steps has been passed.  For smaller input, gss is chosen smaller. --
Also, as the diagnostics state explicitly, we work on up to 64
curves in parallel for each set of bounds, which helps since whenever
we need to divide something by something for each curve, we can
invert the 64 denominators at once by doing a single extended
gcd and a few extra multiplications - thanks to a nice trick of
Peter Montgomery's.)
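
Montgomery's trick, again as a GP sketch with a made-up name:

  \\ Simultaneous inversion:  invert v[1], ..., v[n] mod N using a single
  \\ modular inversion plus about 3*(n-1) multiplications.
  batch_inverse(v, N) =
  {
    my(n = #v, pre = vector(n), acc = 1, inv, res = vector(n));
    for (i = 1, n, pre[i] = acc; acc = acc * v[i] % N);
    inv = lift(Mod(acc, N)^(-1));   \\ in ECM, failure here is good news:
                                    \\ gcd(acc, N) is then a divisor of N
    forstep (i = n, 1, -1,
      res[i] = inv * pre[i] % N;    \\ inv = (v[1]*...*v[i])^(-1) at this point
      inv = inv * v[i] % N);
    res;
  }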

Hope this helps some!

Cheers, Gerhard