On Mac OS X (and probably the other BSDs too), the ioctl() syscall
takes an 'unsigned long' integer as the request parameter. On 64-bit
systems this is a 64-bit type, while on 32-bit systems it is a 32-bit
type. Some of the request constants are 32-bit numbers with the sign
bit set, i.e. negative when held in a signed 32-bit integer. Casting
such a value to a 64-bit type performs a sign extension to preserve the
negative value. Because the result is a different request code when
interpreted as an unsigned integer, the ioctl() call fails with ENOTTY.
For example, TIOCMBIS is defined as 0x8004746c and becomes
0xffffffff8004746c after the sign extension.
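
For illustration, a minimal sketch (assuming a two's-complement 64-bit
platform) of how the request code gets corrupted when it passes through
a signed 32-bit type:

    #include <stdio.h>

    int main(void) {
        int request = (int)0x8004746c;  /* TIOCMBIS on Mac OS X; the sign
                                           bit is set, so the value is
                                           negative as an int */
        long widened = request;         /* sign extension happens here */
        printf("%#lx\n", (unsigned long)widened);  /* 0xffffffff8004746c */
        return 0;
    }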

64-bit Linux is unaffected by this problem: none of the request
constants has the sign bit set, so the sign extension has no effect.
For example, TIOCMBIS is defined as 0x5416 there.

By using an unsigned integer type, the sign extension can be avoided.
We use the 'unsigned long' type in case one of the request constants
happens to be defined as a 64-bit number.
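
A sketch of that fix: keep the request code in an unsigned type all the
way down to ioctl(), so no sign extension can occur. The wrapper name
is illustrative, not part of any real API:

    #include <sys/ioctl.h>

    /* 'unsigned long' matches the ioctl() prototype on BSD systems
       (and glibc), so the request code is passed through unmodified. */
    static int do_ioctl(int fd, unsigned long request, void *arg)
    {
        return ioctl(fd, request, arg);
    }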

When using half-duplex communication (e.g. only a single wire shared by
Tx and Rx), a data packet needs to be transmitted entirely before
attempting to switch into receive mode.

For legacy serial hardware, tcdrain() probably works as advertised and
waits until the data has been transmitted. However, for common
USB-serial converters the hardware doesn't provide any feedback to the
driver, so tcdrain() can only wait until the data has been handed off
to the USB-serial chip. There is no guarantee that the data has
actually been transmitted by the USB-serial chip.

As a workaround, we wait at least the minimum amount of time required
to transmit the data packet over the serial line, taking the current
configuration (baud rate, character size, parity and stop bits) into
account.
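
A sketch of such a wait (hypothetical helper; 'bits_per_char' must
count the whole frame on the wire, i.e. start bit + data bits +
optional parity bit + stop bits, so 8N1 gives 10):

    #include <stddef.h>
    #include <time.h>

    /* Sleep at least as long as 'count' bytes need on the wire at the
       given baud rate, rounding up to whole microseconds. */
    static void wait_transmitted(unsigned baud, unsigned bits_per_char,
                                 size_t count)
    {
        unsigned long long usec =
            ((unsigned long long)count * bits_per_char * 1000000ULL
             + baud - 1) / baud;
        struct timespec ts = {
            .tv_sec  = (time_t)(usec / 1000000),
            .tv_nsec = (long)(usec % 1000000) * 1000,
        };
        nanosleep(&ts, NULL);
    }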

On Mac OS X the definition of the timeval macros is disabled, whereas
on Linux they are defined by default. Removing them therefore makes
everything work on both platforms.
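
For reference, the macros in question are presumably the BSD timeval
helpers from <sys/time.h> (timeradd(), timersub(), timercmp(), ...),
used e.g. to compute an elapsed interval:

    #include <stdio.h>
    #include <sys/time.h>

    int main(void) {
        struct timeval start, now, elapsed;
        gettimeofday(&start, NULL);
        /* ... do some work ... */
        gettimeofday(&now, NULL);
        timersub(&now, &start, &elapsed);  /* elapsed = now - start */
        printf("%ld.%06ld s\n", (long)elapsed.tv_sec,
               (long)elapsed.tv_usec);
        return 0;
    }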