Peter Bruin on Thu, 13 Feb 2014 23:14:00 +0100



Re: precision of result when adding t_REAL with lots of cancellation


Hello Karim and Denis,

> * Peter Bruin [2014-02-12 13:12]:
>> Hello Denis and pari-dev,
>> 
>> The following issue came up when I tried to apply Denis Simon's
>> 2-descent program for elliptic curves over number fields (ell.gp) to
>> a certain elliptic curve over the quadratic field with defining
>> polynomial y^2 - 229.  The linear polynomial b in ZZ[y] below takes
>> positive values at both roots of y^2 - 229, but one of these values
>> is very small.  Denis's program takes this into account and increases
>> the precision if b(y) is exactly 0 or has precision less than 10
>> digits.  In the example below, there is a huge cancellation between
>> two real numbers that only differ by one bit, but PARI rounds up the
>> precision to 19 digits.
>> 
>> A quick fix for Denis's program is to increase the bound for the
>> precision test in ell.gp:nfrealsign() from 10 to 38 digits.
>
> A better fix would be for us to export nfsign(). Then
>
>   install(nfsign,GG);
>   K = bnfinit(y^2 - 229);
>   b = -1554544300737274875964190134520312870631312460283689944298138572669148295776039072867720281361776956435252620954745928376624817557704277432961924925312*y + 23524523971732905757341977352314040726186200302188191824300117738073539522011689544444863977622786771332621915440577829842674416407299864303146477224320;
>
> (13:26) gp > nfsign(K, b)
>   ***   at top-level: nfsign(K,b)
>   ***                 ^-----------
>   *** nfsign: precision too low in nfsign_arch.
>
> (13:26) gp > \p115
>    realprecision = 115 significant digits
> (13:26) gp > K = nfnewprec(K);
> (13:27) gp > nfsign(K, b)
> %7 = Vecsmall([0, 0])
>
> Something like
>
>   while(1,
>     iferr(s = nfsign(K, b); break, e,
>       if (errname(e) != "e_PREC", error(e));
>       default(realprecision, 2*default(realprecision));
>       K = nfnewprec(K)));
>
> would work more reliably.  
>
> That one must currently change the global variable 'realprecision' for
> this is an unfortunate, and serious, known bug. Having a way to
> locally change 'defaults' (or at least 'realprecision', e.g. myprec()
> or mybitprec()) has been on the TODO list for some time.

OK, thanks for this idea.  To me this certainly looks like a better fix
than just enlarging the bound from 10 to 38 digits.

Denis: what is your opinion about this solution?
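
To address the 'realprecision' point right away: one could also package
the retry in a small helper that restores the old precision afterwards.
A sketch of my own (mynfsign is a made-up name; it still changes the
global default temporarily, since, as you say, there is currently no
way around that):

  mynfsign(K, b) =
  { my(oldp = default(realprecision), s);
    while (1,
      iferr(s = nfsign(K, b); break, e,
        if (errname(e) != "e_PREC", error(e));
        default(realprecision, 2*default(realprecision));
        K = nfnewprec(K)));
    default(realprecision, oldp);  \\ restore the caller's precision
    s;
  }

Note that K is only refined inside the helper (GP passes arguments by
value), so the caller's K is untouched.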

Here is an example that causes the bug, by the way:

\r ell.gp
K = bnfinit(y^2 - 229);
c4 = 2173 - 235*(1 - y)/2;
c6 = -124369 + 15988*(1 - y)/2;
E = ellinit([0, 0, 0, -c4/48, -c6/864]);
bnfellrank(K, E)
  ***   at top-level: bnfellrank(K,E)
  ***                 ^---------------
  ***   in function bnfellrank: ...eqtheta,rnfeq,bbnf];rang=
  ***   bnfell2descent_gen(b
  ***   ^--------------------
  ***   in function bnfell2descent_gen: ...und,r=nfsqrt(nf,norm(zc))
  ***   [1];if(DEBUGLEVEL_el
  ***   ^--------------------
  ***   array index (1) out of allowed range [none].
  ***   Break loop: type 'break' to go back to GP
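
Incidentally, here is a minimal illustration of the rounding-up
behaviour I described (the constants are mine; word sizes assume a
64-bit machine, where \p 38 gives a 128-bit mantissa):

  \p 38
  a = 1.0;
  b = a + (a >> 127);  \\ b and a differ only in the last of 128 bits
  precision(b - a)     \\ 19 digits (one word) despite the cancellation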

>> However, I am wondering if in general the current PARI behaviour of
>> rounding up the precision of the result of t_REAL + t_REAL is really
>> desirable in cases where the actual precision is extremely small (say
>> at most four bits).  Would it be reasonable to round the precision
>> _down_ in such cases?  This would mean that in some computations
>> slightly more precision will be lost than necessary, and maybe this
>> case distinction has a negative effect on speed, but erring on the
>> side of caution may be worthwhile here.
>
> Relying on the PARI "floating point philosophy" for "real world"
> programs is a bug in itself. It just cannot work reliably.
>
> One must proceed as usual in numerical analysis: avoid catastrophic
> cancellation a priori, bound the sizes and errors in terms of known
> quantities (here the sup norm of b and K.roots), then use
> multiprecision at the appropriate 'realprecision' to guarantee
> results. Using interval arithmetic as a black box would not magically
> solve such instances: one would obtain [-infinity, +infinity] for
> all practical purposes (e.g. in this case, it would not be possible to
> determine the sign of the real embedding).

On further reflection, I think you are right and the idea of fixing the
PARI floating point philosophy in this respect is not the way to go.
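
To make that concrete for the example above: something along the
following lines (a rough sketch of my own; the 20-digit margin is an
arbitrary cushion, not a proven bound) picks a working precision from
the coefficient sizes, after which the signs can be read off directly:

  \\ digits lost to cancellation are bounded by the coefficient
  \\ sizes of b (the roots of K.pol are small here)
  d = ceil(log(vecmax(abs(Vec(b)))) / log(10)) + 20;
  default(realprecision, d);
  K = nfnewprec(K);
  vector(#K.roots, i, sign(subst(b, y, K.roots[i])))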

> Independently, in this particular case, the appearance of such a large
> 'b' might be a design problem. Libpari programs almost never use huge
> algebraic numbers in this (expanded) form, but restrict to (possibly
> large) factorizations in terms of (small) S-units.  This is
> unfortunately not easy to do in GP. But most functions, e.g.  nfsign
> here (or ideallog mentioned yesterday), accept such factorizations as
> inputs.

I am not familiar enough with descent on elliptic curves to say if there
is any way to avoid such large number field elements (compared to the
apparently modest input size), or if one can somehow keep track of these
elements in a more compact factored form.  Maybe Denis can enlighten us?
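
For what it is worth, my understanding of that factored form is the
two-column 'famat' format, with rows [generator, exponent] standing for
prod_i g_i^e_i; nfsign can then work on the (small) generators without
ever expanding the product. A toy example, assuming nfsign has been
install()ed as above (g1 and g2 are placeholders of my own):

  g1 = y - 15; g2 = y + 15;  \\ small elements of K, for illustration
  fa = [g1, 100; g2, 100];   \\ stands for (y-15)^100 * (y+15)^100
  nfsign(K, fa)              \\ signs at the two real places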

Best wishes,

Peter