The current algorithm always downloads a full memory dump and extracts
the dives afterwards. For the typical scenario where only a few dives
are being downloaded, this is inefficient because most of the data isn't
needed. This can easily be avoided by extracting the dives on the fly
during the download, so the transfer can stop as soon as the requested
dives have been retrieved.
Reading a ringbuffer backwards in order to process the most recent data
first is a very common operation. Nearly every dive computer backend
has its own implementation. Thus, with a common implementation, the
amount of code duplication and complexity in the dive computer backends
can be greatly reduced.
The common algorithm is implemented as a simple ringbuffer stream, which
takes care of all the technical details: handling the ringbuffer
boundaries, aligning to the page size, using the optimal packet size and
caching the remaining data.
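A minimal sketch of such a stream in C (the names and the packet size
are illustrative, the alignment handling is omitted, and the ringbuffer
size is assumed to be a multiple of the packet size; this is not the
actual libdivecomputer interface):

    #include <string.h>

    typedef struct ringstream_t {
        unsigned int begin, end;  /* Ringbuffer boundaries. */
        unsigned int address;     /* Current read position (starts at the end of the data). */
        unsigned char cache[32];  /* One packet worth of data. */
        unsigned int available;   /* Cached bytes not yet consumed. */
    } ringstream_t;

    /* Stub for the actual device transfer. */
    static int
    device_read (unsigned int address, unsigned char data[], unsigned int size)
    {
        (void) address;
        memset (data, 0, size);
        return 0;
    }

    /* Read size bytes, moving backwards through the ringbuffer, while
     * keeping the bytes within the block in forward order. */
    static int
    ringstream_read_backward (ringstream_t *stream, unsigned char data[], unsigned int size)
    {
        while (size) {
            if (stream->available == 0) {
                /* Wrap around at the ringbuffer boundary. */
                if (stream->address == stream->begin)
                    stream->address = stream->end;
                stream->address -= sizeof (stream->cache);
                /* Download one packet and cache it. */
                if (device_read (stream->address, stream->cache, sizeof (stream->cache)) != 0)
                    return -1;
                stream->available = sizeof (stream->cache);
            }
            /* Consume the cached bytes from back to front. */
            unsigned int n = size < stream->available ? size : stream->available;
            stream->available -= n;
            size -= n;
            memcpy (data + size, stream->cache + stream->available, n);
        }
        return 0;
    }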
The select system call modifies the file descriptor set, and depending
on the underlying implementation also the timeout. Therefore these
parameters should be re-initialized before every call.
The existing code also didn't handle EINTR and EAGAIN correctly.
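A sketch of the corrected pattern (for brevity, the full timeout is
simply restarted after an interrupt):

    #include <errno.h>
    #include <sys/select.h>

    static int
    wait_readable (int fd, long timeout_ms)
    {
        for (;;) {
            /* Re-initialize on every iteration: select() modifies the
             * fd_set, and possibly also the timeout. */
            fd_set fds;
            FD_ZERO (&fds);
            FD_SET (fd, &fds);

            struct timeval tv;
            tv.tv_sec  = timeout_ms / 1000;
            tv.tv_usec = (timeout_ms % 1000) * 1000;

            int rc = select (fd + 1, &fds, NULL, NULL, &tv);
            if (rc < 0) {
                if (errno == EINTR || errno == EAGAIN)
                    continue; /* Interrupted; retry with fresh arguments. */
                return -1;    /* Genuine error. */
            }
            return rc; /* 0 on timeout, 1 when readable. */
        }
    }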
With a time based sample interval, the number of samples for a single
timestamp should be constant. However in practice some devices
occasionally store fewer samples. Since our sample time is based purely
on the sample interval, it goes completely out of sync with the sample
timestamp. To avoid this problem, the sample timestamp is used as the
base value.
For the Oceanic Pro Plus 2, this problem is very noticeable. After about
115 minutes into a dive, the sample interval appears to increase to 60
seconds. Thus, without this fix, the resulting dive time for long dives
is suddenly much shorter than it should be.
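A simplified illustration of the fix, with hypothetical names and a
hypothetical timestamp field:

    #include <stdio.h>

    typedef struct {
        unsigned int timestamp; /* Seconds since the start of the dive. */
    } sample_t;

    static void
    parse_samples (const sample_t samples[], unsigned int nsamples, unsigned int interval)
    {
        for (unsigned int i = 0; i < nsamples; ++i) {
            /* Fragile: accumulating the nominal interval, as in
             *     time += interval;
             * drifts out of sync when the device stores fewer samples
             * than expected. */
            (void) interval;
            /* Robust: use the sample timestamp as the base value. */
            unsigned int time = samples[i].timestamp;
            printf ("sample at %u s\n", time);
        }
    }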
The Aqualung i450T appears to ignore the fixed sample rate and instead
store a timestamp in each sample.
The presence of the surface samples in combination with this timestamp
based format is odd. Even the official Diverlog software is confused:
the Windows version seems to ignore them, but the Mac version takes
them into account.
After the previous commit, the raw data is now reported as one large
vendor sample. Because that makes the data more difficult to interpret
(for example during debugging), a small helper function is added to
split the data back into multiple vendor samples.
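A sketch of such a helper (illustrative names; the chunk size is
assumed to equal the device's raw sample size):

    /* Hypothetical callback invoked once per vendor sample. */
    typedef void (*vendor_cb_t) (const unsigned char *data, unsigned int size);

    /* Split one large blob of raw data back into fixed-size vendor
     * samples, invoking the callback once per chunk. */
    static void
    split_vendor_samples (const unsigned char data[], unsigned int size,
                          unsigned int samplesize, vendor_cb_t callback)
    {
        for (unsigned int offset = 0; offset + samplesize <= size; offset += samplesize)
            callback (data + offset, samplesize);
    }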
Originally, the time and vendor sample values were emitted immediately
after the previous sample was complete. This is now postponed until all
raw samples are available.
This will be required for the Aqualung i450T. That model appears to
ignore the fixed sample rate and instead store a timestamp in each
sample. That means the timestamp is only available once the last raw
sample data has been reached.
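Schematically, with hypothetical names and an illustrative timestamp
location, the emission order becomes:

    #include <stdio.h>

    static void emit_time (unsigned int seconds) { printf ("time %u\n", seconds); }
    static void emit_vendor (const unsigned char *d, unsigned int n) { (void) d; printf ("vendor %u bytes\n", n); }

    /* Collect the complete raw sample first, then emit the time and
     * vendor values. A timestamp stored at the (illustrative) end of
     * the sample data is thus available before anything is emitted. */
    static void
    emit_sample (const unsigned char raw[], unsigned int size)
    {
        unsigned int timestamp = raw[size - 2] | (raw[size - 1] << 8);
        emit_time (timestamp);
        emit_vendor (raw, size);
    }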
Skipping the extra samples by increasing the length is not always
reliable. If there are empty samples present, they will get skipped
instead of the real samples. And if the number of samples isn't an exact
multiple of the sample rate, we're accessing data beyond the end of the
dive profile.
The Cressi Drake is mainly a freedive computer. The data format is
almost identical to the Leonardo. The main difference is that a single
dive now contains an entire freedive session. Each freedive in the
session is delimited with a 4 byte header containing the surface
interval and a special marker.
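A rough sketch of walking such a session; the marker value, the header
layout and the sample size below are purely illustrative, only the
4 byte header with a surface interval and marker comes from the format
description:

    #include <stdio.h>

    #define MARKER     0xFF /* Hypothetical marker value. */
    #define SAMPLESIZE 2    /* Hypothetical sample size. */

    static void
    parse_session (const unsigned char data[], unsigned int size)
    {
        unsigned int offset = 0;
        while (offset + 4 <= size) {
            if (data[offset] != MARKER)
                break; /* Unexpected data; a real parser would report an error. */
            /* 4 byte header: marker plus surface interval. */
            unsigned int surface = data[offset + 1] |
                (data[offset + 2] << 8) | (data[offset + 3] << 16);
            printf ("freedive, surface interval %u s\n", surface);
            offset += 4;
            /* Profile samples until the next delimiter. */
            while (offset + SAMPLESIZE <= size && data[offset] != MARKER)
                offset += SAMPLESIZE;
        }
    }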
The logbook entries are stored separately from the profile data. If the
profile ringbuffer is filled faster than the logbook ringbuffer, then
the oldest logbook entries can still point to profile data that has
already been overwritten with newer data.
To detect such overwritten profile data, we keep track of the remaining
space in the profile ringbuffer.
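In outline (illustrative names), walking the logbook from newest to
oldest:

    typedef struct {
        unsigned int profile_length; /* Size of the profile data in bytes. */
    } logbook_t;

    /* Subtract each profile length from the remaining ringbuffer space.
     * Once the space is exhausted, the older entries point to profile
     * data that has already been overwritten. */
    static void
    check_profiles (const logbook_t entries[], unsigned int nentries,
                    unsigned int capacity)
    {
        unsigned int remaining = capacity;
        for (unsigned int i = 0; i < nentries; ++i) { /* Newest first. */
            if (entries[i].profile_length > remaining) {
                remaining = 0; /* Profile overwritten; keep only the logbook entry. */
            } else {
                remaining -= entries[i].profile_length;
            }
        }
    }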
The sample interval is stored in the settings, and thus there is no need
to use a hardcoded value. In practice, all dives appear to use the
default value (5 seconds), so this is more about being future-proof.
On Linux, several users are reporting download problems, while on
Windows everything works fine. Simply toggling the DTR line appears to
fix the problem.
A possible explanation is that on Windows, the SetCommState() function
not only configures the serial protocol parameters, but also initializes
the DTR and RTS lines. In the libdivecomputer implementation the default
state is enabled (DTR_CONTROL_ENABLE and RTS_CONTROL_ENABLE). The result
is that the DTR line gets automatically initialized to enabled, and then
manually disabled again.
On Linux, the DTR and RTS lines are not automatically initialized during
configuration, and need to be controlled explicitly. The result is that
the DTR line ends up disabled without ever being toggled.
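On a POSIX serial port, the line can be toggled explicitly with the
standard modem control ioctls, for example:

    #include <sys/ioctl.h>

    /* Raise or drop the DTR line explicitly. */
    static int
    set_dtr (int fd, int enable)
    {
        int line = TIOCM_DTR;
        return ioctl (fd, enable ? TIOCMBIS : TIOCMBIC, &line);
    }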
The read command appears to be limited to the range 0x1000-0x1100. That
range seems to correspond with the first 256 bytes of the full memory
dump. The packet size of 32 bytes is an arbitrary choice.
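A sketch of downloading that range, where read_packet() stands in for
the actual read command:

    #define RANGE_BEGIN 0x1000
    #define RANGE_END   0x1100
    #define PACKETSIZE  32 /* Arbitrary; any divisor of the 256 byte range works. */

    /* Stub for the actual read command. */
    static int
    read_packet (unsigned int address, unsigned char data[], unsigned int size)
    {
        (void) address; (void) data; (void) size;
        return 0;
    }

    /* Download the accessible 256 byte range in fixed-size packets. */
    static int
    download_range (unsigned char data[/* 256 */])
    {
        for (unsigned int address = RANGE_BEGIN; address < RANGE_END; address += PACKETSIZE) {
            if (read_packet (address, data + (address - RANGE_BEGIN), PACKETSIZE) != 0)
                return -1;
        }
        return 0;
    }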
When building the Windows version resource, the -DHAVE_CONFIG_H option
isn't passed to the resource compiler automatically. The result is that
development builds don't have their git revision embedded in the DLL.
The dive mode is stored in each sample, and can change during the dive.
In order to report a single value for the entire dive, we assume the
value of the first sample is representative of the entire dive. For
example, a dive that started as a CC dive but bailed out to OC during
the dive is still considered to be a CC dive.
A warning is generated if the dive mode changes.
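Schematically, with illustrative types:

    #include <stdio.h>

    typedef enum { MODE_OC, MODE_CC, MODE_GAUGE } divemode_t;

    /* Take the dive mode of the first sample as the mode of the entire
     * dive, and warn when it changes mid-dive. */
    static divemode_t
    get_divemode (const divemode_t modes[], unsigned int nsamples)
    {
        divemode_t divemode = modes[0];
        for (unsigned int i = 1; i < nsamples; ++i) {
            if (modes[i] != divemode) {
                fprintf (stderr, "warning: dive mode changed during the dive\n");
                break;
            }
        }
        return divemode;
    }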
For dives with multiple gas mixes, an application doesn't have enough
info to figure out which one is the initial gas mix. Usually it's the
first gas mix, but that's not guaranteed. Reporting the initial gas mix
on the first sample avoids this problem.
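A minimal sketch of the parser side, with a hypothetical emit function:

    #include <stdio.h>

    static void emit_gasmix (unsigned int mix) { printf ("gasmix %u\n", mix); }

    /* On the very first sample, report the active gas mix explicitly,
     * instead of leaving the application to guess that it is mix 0. */
    static void
    emit_sample_values (unsigned int index, unsigned int active_mix)
    {
        if (index == 0)
            emit_gasmix (active_mix);
        /* ... time, depth, pressure, ... */
    }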
In the public header files, all symbols are marked extern "C". When
using a C compiler, there is usually no problem if the header isn't
included in the C file. But the msvc build system uses the C++ compiler
(due to the use of some C99 features not supported by the msvc C
compiler). In that case, a source file that doesn't include its own
header gets C++ mangled symbols, which no longer match the C linkage
declared in the header.
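The guard in question is the standard one (the declaration below is
illustrative):

    #ifdef __cplusplus
    extern "C" {
    #endif

    int dc_example_function (void); /* Illustrative declaration. */

    #ifdef __cplusplus
    }
    #endif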