With the introduction of the device descriptors, the new dc_device_open()
convenience function can take care of the mapping from a particular model to
the corresponding backend internally, without needing any device-specific
knowledge in the application. An application can simply query the list of
supported devices, and the library will automatically do the right thing.
Applications can now enumerate all the supported devices at runtime,
and don't have to maintain their own list anymore. The internal list
includes only those devices that have been confirmed to work at
least once without any major problems.
As the name already indicates, a device descriptor is a lightweight
object which describes a single device. Currently, the API supports
getting the device name (vendor and product) and the model number. But
this can be extended with other features when necessary.
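As an illustration, here is a minimal sketch that enumerates the
supported devices and prints the vendor, product and model number for
each descriptor. It uses dc_descriptor_iterator, dc_iterator_next and
the dc_descriptor_get_* accessors; the exact signatures may differ
slightly between releases, and error handling is kept to a minimum. A
matching descriptor would then be passed on to dc_device_open().

    #include <stdio.h>

    #include <libdivecomputer/descriptor.h>
    #include <libdivecomputer/iterator.h>

    int
    main (void)
    {
        dc_iterator_t *iterator = NULL;
        dc_descriptor_t *descriptor = NULL;

        /* Walk the internal list of supported devices. */
        if (dc_descriptor_iterator (&iterator) != DC_STATUS_SUCCESS)
            return 1;

        while (dc_iterator_next (iterator, &descriptor) == DC_STATUS_SUCCESS) {
            printf ("%s %s (model %u)\n",
                dc_descriptor_get_vendor (descriptor),
                dc_descriptor_get_product (descriptor),
                dc_descriptor_get_model (descriptor));
            dc_descriptor_free (descriptor);
        }

        dc_iterator_free (iterator);

        return 0;
    }
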
Adding the "dc_" namespace prefix (which is of course an abbreviation
for libdivecomputer) should avoid conflicts with other libraries. For
the time being, only the high-level device and parser layers are
changed.
The public header files are moved to a new subdirectory, to separate
the definition of the public interface from the actual implementation.
Using the same directory layout as the final installation has the
advantage that the example code can be built outside the project tree
without any modifications to the #include statements.
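For example, a program built either inside the source tree or against
the installed headers can use exactly the same include statements:

    #include <libdivecomputer/device.h>
    #include <libdivecomputer/parser.h>
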
It looks like the Icon HD erases old dives partially with 0xFF bytes
before overwriting them with new dives. If the head of the oldest dive
has been erased, the length field stored in the first 4 bytes is erased
as well, and we can use that to detect the last dive.
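A sketch of such a check, assuming a 4 byte length field at the start
of the header and using a hypothetical helper name:

    #include <string.h>

    /* Hypothetical helper: a header whose 4 byte length field has been
       erased to all 0xFF bytes marks the end of the valid dive data. */
    static int
    iconhd_is_erased (const unsigned char header[4])
    {
        const unsigned char erased[4] = {0xFF, 0xFF, 0xFF, 0xFF};

        return memcmp (header, erased, sizeof (erased)) == 0;
    }
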
For development snapshots, a 'devel' suffix is added to distinguish from
the final release. If necessary, the suffix can also be used for 'alpha'
and 'beta' releases.
The Veo 1.0 has very limited memory and doesn't have a logbook or
profile ringbuffer. Hence downloading dives isn't really supported, but
even this limited amount of data might be useful for someone.
Sometimes there are a few garbage bytes received before the preamble
bytes. This typically happens when trying to download again after a
failed attempt. However, trying to flush them immediately after opening
the serial port doesn't work.
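A sketch of dealing with them on the receiving side instead, by
scanning the received data for the preamble and discarding everything
in front of it (hypothetical helper, with the preamble passed in by
the caller):

    #include <stddef.h>
    #include <string.h>

    /* Hypothetical helper: return the offset of the preamble in the
       received data, so the caller can skip the garbage bytes in front
       of it. Returns the buffer size if no preamble is found. */
    static size_t
    skip_garbage (const unsigned char data[], size_t size,
        const unsigned char preamble[], size_t psize)
    {
        for (size_t i = 0; i + psize <= size; ++i) {
            if (memcmp (data + i, preamble, psize) == 0)
                return i;
        }

        return size;
    }
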
When the OSTC receives the download dives command, it responds
immediately with the preamble bytes. But then it does a linear search
through its internal memory to locate the end-of-profile marker. As a
result the response time increases when the marker is located near the
end of the memory area. In the worst-case scenario, the response time
can exceed the 3 second read timeout by a few milliseconds.
Since the required timeout depends on the total amount of profile
memory, this problem was indirectly introduced with firmware v1.91,
which doubled the amount of profile memory from 32K to 64K.
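One possible way to deal with this is to scale the read timeout with
the amount of profile memory instead of hard coding it. The sketch
below assumes the worst-case search time grows roughly linearly with
the memory size, and takes the 3 second value that was adequate for
32K as the baseline; both assumptions are mine, not taken from the
firmware documentation.

    /* Hypothetical timeout calculation: scale the old 3000 ms timeout,
       which was adequate for 32K of profile memory, linearly with the
       actual memory size. */
    static unsigned int
    ostc_timeout (unsigned int memsize)
    {
        const unsigned int base_timeout = 3000;      /* milliseconds */
        const unsigned int base_memsize = 32 * 1024; /* bytes */

        return (unsigned int) (((unsigned long long) base_timeout * memsize) / base_memsize);
    }
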
When using half-duplex communication (e.g. only a single wire for both
Tx and Rx) a data packet needs to be transmitted entirely before
attempting to switch into receiving mode.
For legacy serial hardware, the tcdrain() probably works as advertised,
and waits until the data has been transmitted. However, for common
usb-serial converters, the hardware doesn't provide any feedback to the
driver, and the tcdrain() function can only wait until the data has been
transmitted to the usb-serial chip. There is no guarantee that the data
has actually been transmitted by the usb-serial chip.
As a workaround, we wait at least the minimum amount of time required to
transmit the data packet over a serial line, taking into account the
current configuration.
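A sketch of that calculation: one start bit plus the configured data,
parity and stop bits per byte, divided by the baudrate (parameter names
are hypothetical; the real code would read these values from the
current port settings):

    #include <stddef.h>
    #include <time.h>

    /* Sleep at least as long as it takes to transmit 'nbytes' over the
       serial line with the given settings. paritybits is 0 or 1. */
    static void
    serial_sleep_transmitted (size_t nbytes, unsigned int baudrate,
        unsigned int databits, unsigned int paritybits, unsigned int stopbits)
    {
        /* One start bit, plus the configured data, parity and stop bits. */
        unsigned int bits_per_byte = 1 + databits + paritybits + stopbits;

        /* Round up, so we never sleep too short. */
        unsigned long long usec =
            (1000000ULL * nbytes * bits_per_byte + baudrate - 1) / baudrate;

        struct timespec ts;
        ts.tv_sec = usec / 1000000;
        ts.tv_nsec = (usec % 1000000) * 1000;
        nanosleep (&ts, NULL);
    }
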
The OSTC doesn't store the start of the dive, but the exit time. Hence
the dive time needs to be subtracted.
For dives with format version 0x21, we prefer the total dive time in
seconds stored in the extended header. This time value also includes the
shallow parts of the dive, and therefore yields the most accurate start
time. The dive time is rounded down to the nearest minute, to match
the value displayed by the OSTC. For dives with the older format version
0x20, this value isn't available and we default to the normal dive time.
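A sketch of the resulting start time calculation, with the exit time as
a Unix timestamp and the dive time in seconds (hypothetical names):

    #include <time.h>

    /* The OSTC stores the exit time, so the start of the dive is the
       exit time minus the dive time. For the 0x21 format the dive time
       in seconds is rounded down to the nearest minute, to match the
       value displayed by the OSTC. */
    static time_t
    ostc_start_time (time_t exittime, unsigned int divetime_seconds)
    {
        unsigned int divetime = (divetime_seconds / 60) * 60;

        return exittime - divetime;
    }
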
When the dive mode setting is set to air, the oxygen percentage stored
in the header is different from the expected 21%. It might be the last
used nitrox percentage.
The assumption that two consecutive dive profiles are stored without
any gaps in between them appears to be incorrect in some cases. Instead
of failing with an error, we just skip those gaps now.
We received a report of a Darwin Air device which has a very high error
rate. The majority of the echo packets are incorrect, but since this
doesn't seem to have any effect on the actual data packet, we can just
ignore this error. If there happens to be a more serious error, it will
be detected in the data packet.
Sometimes a few garbage bytes were also received at startup.
Adding a small delay seems to fix this.
The Linux USB CDC-ACM driver, which is used by the Mares Icon HD
interface, doesn't support the ioctls to configure a custom baudrate.
But since the actual baudrate doesn't seem to matter at all, we fall
back to the nearest standard baudrate.
Because custom baudrates are confirmed to be supported on Windows and
Mac OS X, those platforms can keep using the non-standard baudrate.
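A sketch of the resulting platform split; the exact baudrate values
here are only illustrative, the point is merely that Linux picks the
nearest standard rate while the other platforms keep the custom one:

    /* Illustrative values only: the Linux CDC-ACM driver can't set a
       custom baudrate, so fall back to the nearest standard rate there;
       Windows and Mac OS X keep the custom rate. */
    #ifdef __linux__
    #define ICONHD_BAUDRATE 230400
    #else
    #define ICONHD_BAUDRATE 256000
    #endif
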
When trying to send the commands as fast as possible, without any delay,
the failure rate is very high. Almost every single packet fails with a
timeout at first. Retrying the packet works, but all those timeouts
make the download extremely slow. Adding a small delay avoids the much
more expensive timeout and speeds up the transfer significantly.
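A sketch of such a delay in front of each command; the 100 ms value is
an example, not taken from the source:

    #include <time.h>

    /* Give the device a short pause before the next command, which is
       much cheaper than running into a receive timeout and retrying. */
    static void
    pre_command_delay (void)
    {
        struct timespec ts = { 0, 100 * 1000000L };  /* 100 ms, example value */
        nanosleep (&ts, NULL);
    }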