Bill Allombert on Wed, 28 Sep 2005 10:33:10 +0200

Re: setting output precision
On Tue, Sep 27, 2005 at 07:59:39PM +0200, Karim Belabas wrote:
> * Bill Allombert [2005-09-27 19:02]:
> > The only problem I see is that you need to set it _after_ realprecision,
> > which seems strange.
>
> This is the historical behaviour, unfortunately not really documented:
> * changing 'realprecision' automatically updates the (maximal) number of
>   significant digits printed to current 'realprecision'.
>
> * changing 'format' doesn't affect 'realprecision'.
>
> The apparent idea is that one should always print as many digits as are
> available. Trivial to change, but...

If you fear breaking the old behaviour, we could have 'format' default to
an 'automatic' mode, where it depends on realprecision. When the user sets
it to something explicitly, it goes into 'manual' mode and is no longer
affected by realprecision.

One way to denote 'automatic' mode could be to use a format string of "g"
or "f" instead of "g0.28" or "f0.28", etc. (I hope I make sense). Both the
current and the proposed behaviour are sketched below.

By the way, ??format with -detex has a display glitch: it prints
"and64-bit", which is not in the TeX source.

Cheers,
Bill.
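
For concreteness, the historical coupling looks like this in a gp session
(illustrative transcript; the exact echo lines and digit counts depend on
the gp version):

    ? \p 28
       realprecision = 28 significant digits
    ? default(format, "g0.10")   \\ print at most 10 significant digits
    ? Pi
    %1 = 3.141592654
    ? \p 38                      \\ changing realprecision...
       realprecision = 38 significant digits
    ? Pi                         \\ ...silently resets the printed digit count
    %2 = 3.1415926535897932384626433832795028842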
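
Under the proposed scheme it could instead behave like this (an entirely
hypothetical session: the 'automatic'/'manual' modes and the bare "g"
format string are the suggestion above, not current gp behaviour):

    ? default(format, "g")       \\ hypothetical 'automatic' mode: tracks realprecision
    ? \p 38
       realprecision = 38 significant digits
    ? Pi
    %1 = 3.1415926535897932384626433832795028842
    ? default(format, "g0.10")   \\ explicit digit count: switch to 'manual' mode
    ? \p 57
       realprecision = 57 significant digits
    ? Pi
    %2 = 3.141592654             \\ no longer reset by realprecision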