In the original Unix systems, when a handler that was established using signal() was invoked by the delivery of a
signal, the disposition of the signal would be reset to SIG_DFL, and the system did not block delivery of further
instances of the signal. System V also provides these semantics for signal(). These semantics were problematic: the signal
might be delivered again before the handler had a chance to reestablish itself. Furthermore, rapid deliveries of
the same signal could result in recursive invocations of the handler.
BSD improved on this situation by changing the semantics of signal handling (but, unfortunately, silently changed
the semantics when establishing a handler with signal()). On BSD, when a signal handler is invoked, the signal
disposition is not reset, and further instances of the signal are blocked from being delivered while the handler is
executing.
The situation on Linux is as follows:
* The kernel's signal() system call provides System V semantics.
* By default, in glibc 2 and later, the signal() wrapper function does not invoke the kernel system call. Instead,
it calls sigaction(2) using flags that supply BSD semantics. This default behavior is provided as long as the
_BSD_SOURCE feature test macro is defined. By default, _BSD_SOURCE is defined; it is also implicitly defined if
one defines _GNU_SOURCE, and can of course be explicitly defined.
On glibc 2 and later, if the _BSD_SOURCE feature test macro is not defined, then signal() provides System V
semantics. (The default implicit definition of _BSD_SOURCE is not provided if one invokes gcc(1) in one of its
standard modes (-std=xxx or -ansi) or defines various other feature test macros such as _POSIX_SOURCE,
_XOPEN_SOURCE, or _SVID_SOURCE; see feature_test_macros(7).)
* The signal() function in Linux libc4 and libc5 provides System V semantics. If, on a libc5 system, one includes
<bsd/signal.h> instead of <signal.h>, then signal() provides BSD semantics.