SIGFPE signal handling

nigelbrown
Posts: 429
Joined: Tue Nov 11, 2003 2:11 am
Location: Brisbane, Australia

SIGFPE signal handling

Post by nigelbrown »

Hi,
I've started a new thread, as signal handling is the issue with (add 1e308 1e308) terminating newLISP.
Borland's default action on a SIGFPE (floating point exception signal) is program termination (that is, the library's behaviour is a "feature", not a bug?).
From the Borland help docs I pasted in some code that catches the SIGFPE, and now get:

newLISP v7.3.3 Copyright (c) 2003 Lutz Mueller. All rights reserved.

> (add 1e308 1e308)
Caught it!
1e+308
>
The diff-format comparison below shows the inserted signal handling code. I don't know whether you want to include signal handling for the Borland compile or write it off as a Borland 'feature'.
Regards
Nigel
ExamDiff diff output: nl-math.c v7303 vs v7301
56c56,76
< int _matherr(struct _exception *e) {return 0;};
---
> int _matherr(struct _exception *e) {return 0;};
> /* try to catch the floating point exception signal */
> #include <signal.h>
>
> #ifdef __cplusplus
> typedef void (*fptr)(int);
> #else
> typedef void (*fptr)();
>
> #endif
>
>
>
> void Catcher()
> {
>     signal(SIGFPE, (fptr)Catcher);  /* reinstall the signal handler */
>
>     printf("Caught it!\n");
> }
>
> /* signal(SIGFPE, (fptr)Catcher); install catcher later */
192a213
> #ifdef __BORLANDC__
193a215,216
> signal(SIGFPE, (fptr)Catcher);  /* install catcher */
> #endif
204a228
> /* case OP_ADD: result += number; break; */

nigelbrown
Posts: 429
Joined: Tue Nov 11, 2003 2:11 am
Location: Brisbane, Australia

Post by nigelbrown »

In part answer to my previous post: the function

#include <float.h>
unsigned int _control87(unsigned int newcw, unsigned int mask);

("This function sets and retrieves the FPU's control word.")

described at:
http://www.ludd.luth.se/~ams/djgpp/cvs/ ... ntrl87.txh

can be used to mask FPU exceptions and enable internal FPU processing of exceptions, as described below:
A masked exception will be handled internally by the coprocessor. In general, that means that it will generate special results, such as NaN (Not-a-Number, e.g. when you attempt to compute the square root of a negative number), a denormalized result (in the case of underflow), or infinity (e.g. in the case of division by zero, or when the result overflows).

It would appear that by default GCC masks exceptions while Borland does not (and the pros and cons of each default could be argued).
It looks like it is just a matter of enabling internal FPU exception handling at the start of the Borland code to bring GCC and BCC into line.
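For what it's worth, a minimal sketch of what that startup call might look like under Borland C++ (MCW_EM is the "mask all exceptions" constant from float.h; the stand-alone main() is just for illustration, not the newLISP source):

#include <float.h>
#include <stdio.h>

int main(void)
{
    double a = 1e308, b = 1e308;

    /* Mask all FPU exceptions so overflow, divide-by-zero, etc. produce
       INF/NaN internally instead of raising SIGFPE. */
    _control87(MCW_EM, MCW_EM);

    printf("%g\n", a + b);   /* prints an infinity rather than terminating */
    return 0;
}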
Regards
Nigel

Lutz
Posts: 5289
Joined: Thu Sep 26, 2002 4:45 pm
Location: Pasadena, California

Post by Lutz »

Thank you, Nigel, for researching this.

I just tried it out and it works as expected. Now we have common floating point exception behaviour across all platforms, great!

There will be a new development version 7.2.5 out Wednesday or Thursday.

Lutz

nigelbrown
Posts: 429
Joined: Tue Nov 11, 2003 2:11 am
Location: Brisbane, Australia

Post by nigelbrown »

Thank you Lutz for your speedy response.
I was thinking that perhaps the behaviour regarding floating point error signals should be a compile-time option (I don't know that runtime setting of this behaviour is justified - what do you think?).
Code for making GCC adopt the current Borland behaviour of NOT masking exceptions is given here:
http://www.fortran-2000.com/ArnaudRecip ... l#Glibc_FP
(more or less; it is actually a discussion of using GCC as the back end for Fortran g77.)
Viz:
#define _GNU_SOURCE   /* feenableexcept() is a GNU extension in glibc */
#include <fenv.h>

static void __attribute__ ((constructor)) trapfpe(void)
{
    /* Enable some exceptions. At startup all exceptions are masked. */
    feenableexcept(FE_INVALID | FE_DIVBYZERO | FE_OVERFLOW);
}

Thinking about it, I decided I might like to be alerted to overflow rather than having +INF slipped in - perhaps by catching SIGFPE and then terminating gracefully with an error message.
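A minimal sketch of that idea (the message text and the stand-alone main() are illustrative only; strictly speaking only async-signal-safe calls belong in a handler, so treat this as a sketch rather than production code):

#include <signal.h>
#include <stdio.h>
#include <stdlib.h>

/* Report the floating point exception and exit cleanly instead of
   letting the runtime abort the process with no explanation. */
static void fpe_handler(int sig)
{
    fprintf(stderr, "floating point exception (signal %d) - exiting\n", sig);
    exit(EXIT_FAILURE);
}

int main(void)
{
    signal(SIGFPE, fpe_handler);
    /* ... evaluation would go here ... */
    return 0;
}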
Thoughts?

Nigel

Lutz
Posts: 5289
Joined: Thu Sep 26, 2002 4:45 pm
Location: Pasadena, California

Post by Lutz »

I was playing around with exceptions in newLISP for a variety of things some time ago. I never could get consistent behaviour across platforms. When dealing with exceptions it's not only GCC/Unix versus Borland/Win32; there are inconsistencies between Linux and FreeBSD too, and I haven't checked OS X and Solaris yet.

I promise I'll look into it again when I can allocate a longer time segment to it.

There are also other math things I am concerned about, and I would like to hear your (Nigel's) and others' opinions (Eddie, aren't you a math person too?).

(1) I have been asked for 64-bit integer support (currently 32 bit). The LISP cell in newLISP has space for it (as it already does for floats), but do all compilers on 32-bit platforms support it? (A sketch of the usual compiler-specific type names follows below, after (3).)

(2) decimal based arithmetic: for people doing financial stuff, dealing with binary based arithmetic is always a hassle because of rounding errors; not having it excludes newLISP from a big segment of software.

(3) infinite precision integer arithmetic: I don't know much about it, but I see that some other languages are supporting it.
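Regarding (1): a minimal sketch of how a 64-bit integer type is usually spelled on 32-bit compilers of this era (the exact names depend on the compiler and version, so treat these as assumptions):

/* Sketch: pick a 64-bit integer type per compiler. */
#ifdef __BORLANDC__
typedef __int64   INT64;     /* Borland (and Microsoft) spelling */
#else
typedef long long INT64;     /* GCC and C99 compilers */
#endif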

What do you think?

Lutz

nigelbrown
Posts: 429
Joined: Tue Nov 11, 2003 2:11 am
Location: Brisbane, Australia

Post by nigelbrown »

Hi Lutz,
I can imagine that trying to have consistency across platforms is tricky - probably for now just letting the FPU handle things internally is a reasonable position (although I don't know what the platform-consistency implications of the different FPUs in, say, Solaris systems might be).
Re your questions - note that I come from a float (largely workday stats/graphics) interest background.

> (1) I have been asked for 64-bit integer support (currently 32 bit). The LISP cell in newLISP has space for it (as it already does for floats), but do all compilers on 32-bit platforms support it?

If it's too hard to be consistent across platforms, I'd stick to 32 for now.

> (2) decimal based arithmetic: for people doing financial stuff, dealing with binary based arithmetic is always a hassle because of rounding errors; not having it excludes newLISP from a big segment of software.

Apart from COBOL and some specialised areas of BASIC, I don't know that it's really properly supported much elsewhere - look at discussions of the use of this type in Ada and you see that most implementations there are really just floats underneath.

> (3) infinite precision integer arithmetic: I don't know much about it, but I see that some other languages are supporting it.

It looks cute to be able to calculate some big factorial or prime, but outside number theory I don't know that there is much need for it - some of the complications in the maths of Common Lisps seem to come from the associated automatic type conversions between the various big-number representations.

I'd be more interested in a complex-number type package being available before the integer side was upgraded - but that's just a personal interest and not likely to be commercially relevant.

As long as newLISP is competent and consistent with double-size floats (as it is), I would see the rest as having the potential of bloating newLISP with little payback.

Regards
Nigel

eddier
Posts: 289
Joined: Mon Oct 07, 2002 2:48 pm
Location: Blue Mountain College, MS US

Post by eddier »

The Intel (except for the flawed early Pentiums), AMD, newer Alpha, SPARC, G4, and G5 processors all follow the IEEE 754 floating point standard. However, some compilers don't convert decimal format to binary according to the IEEE 754 specification. There used to be some tests available to check whether a compiler conformed to the standard, but I've forgotten where I saw them.

> (1) I have been asked for 64-bit integer support (currently 32 bit). The LISP cell in newLISP has space for it (as it already does for floats), but do all compilers on 32-bit platforms support it?

I don't think it will be long before the compilers do. I would wait until then.

> (2) decimal based arithmetic: for people doing financial stuff, dealing with binary based arithmetic is always a hassle because of rounding errors,

Decimal based arithmetic is supported by the Intel x86 architecture, but I don't know whether compilers actually take advantage of it. Since rounding errors compound over a series of computations, I would use doubles for everything until something needs to be printed.
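A small illustration of both points in plain C (nothing newLISP-specific): accumulating 0.1 in binary doubles drifts away from the exact decimal value, so any rounding is best deferred to the formatting step.

#include <stdio.h>

int main(void)
{
    double sum = 0.0;
    int i;

    /* 0.10 has no exact binary representation, so ten additions
       do not give exactly 1.0. */
    for (i = 0; i < 10; i++)
        sum += 0.10;

    printf("%.17g\n", sum);   /* 0.99999999999999989, not 1 */
    printf("%.2f\n", sum);    /* rounds to 1.00 only when printed */
    return 0;
}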

> (3) infinite precision integer arithmetic: I don't know much about it, but I see that some other languages are supporting it.

I know PLT Scheme, Python, and Ruby support it, but it seems to me to be just slow arithmetic. If it were supported, I would like the option to switch the infinite precision off.

Eddie

Lutz
Posts: 5289
Joined: Thu Sep 26, 2002 4:45 pm
Location: Pasadena, California

Post by Lutz »

Thanks to both of you for your input. It looks like at least the three of us are more or less on the same page with all of this.

(1) 64-bit integers: yes, I will wait until 64-bit CPUs move more into the mainstream. Once 64 bit is out, I would have to revise the newLISP LISP cell, which at the moment has space for 64-bit integers (it uses only 32) and floats, but only allocates 32 bits for string and linked-lisp-cell memory pointers. I will always want real integers, to be able to access binary structures and do address arithmetic and bit acrobatics. Some scripting languages rely solely on floats; if anybody prefers that, one can always do (constant '+ add) ... etc., and do everything in double floats using +, -, *, /.

The way newLISP is architected, it relies heavily on integers and memory pointers having the same width. So if I go to a 64-bit newLISP, the 64-bit integers fall out naturally.

(2, 3) I have an algorithm from an earlier language (written in ASM68) which could do decimal arithmetic, and you could set the precision deliberately high, but it relies on the x86 instruction set, which has packed (4 bits per decimal digit) decimal support. As both of you seem to think, applications for very high precision arithmetic are exotic anyway and have no room in a LISP which tries to be mainstream and lightweight.

But I will revisit exception handling at some point, both for floating point math and to have the possibility of breaking out of infinite or long-running evaluations. This happens to me all the time when developing stuff interactively: I cannot reset the system but have to kill the whole thing and start over again.
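For the "break out of a long evaluation" part, a common pattern (sketched here with hypothetical names, not the actual newLISP internals, and using the POSIX sigsetjmp/siglongjmp flavour that Linux/FreeBSD/OS X provide) is to catch SIGINT and jump back to the top-level prompt:

#include <setjmp.h>
#include <signal.h>
#include <stdio.h>

static sigjmp_buf toplevel;              /* hypothetical: saved top-level context */

static void interrupt_handler(int sig)
{
    (void)sig;
    signal(SIGINT, interrupt_handler);   /* re-arm the handler */
    siglongjmp(toplevel, 1);             /* abandon the running evaluation */
}

int main(void)
{
    signal(SIGINT, interrupt_handler);
    if (sigsetjmp(toplevel, 1))
        printf("\ninterrupted - back at the prompt\n");
    /* ... read-eval-print loop would go here ... */
    return 0;
}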

Lutz
