Currently the dive computer backends are responsible for opening (and
closing) the underlying I/O stream internally. The consequence is that
each backend is hardwired to a specific transport type (e.g. serial,
IrDA or USB HID). In order to remove this dependency and support more
than one transport type in the same backend, the opening (and closing)
of the I/O stream is moved to the application.
The dc_device_open() function is modified to accept a pointer to the I/O
stream, instead of a string with the device node (which only makes sense
for serial communication). The dive computer backends then depend only
on the common I/O interface.
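A sketch of what the revised entry point might look like, assuming the
common I/O interface is exposed as a dc_iostream_t object (the names
are illustrative, not authoritative):

    // The application opens the I/O stream itself and passes it in,
    // instead of a device node string.
    dc_status_t
    dc_device_open (dc_device_t **device, dc_context_t *context,
                    dc_descriptor_t *descriptor, dc_iostream_t *iostream);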
Being able to synchronize the dive computer clock with the host system
is a very useful feature. Add the infrastructure to support this feature
through the public api.
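A hedged sketch of how such an entry point could be used, assuming a
dc_device_timesync() function and the dc_datetime_t helpers:

    // Fill a dc_datetime_t with the current host time and push it to
    // the dive computer.
    dc_datetime_t datetime = {0};
    dc_datetime_localtime (&datetime, dc_datetime_now ());

    dc_status_t rc = dc_device_timesync (device, &datetime);
    if (rc != DC_STATUS_SUCCESS) {
        // Not all backends support clock synchronization.
    }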
The vendor_product_parser_create() and vendor_product_device_open()
functions should be called indirectly, through the generic
dc_device_open() and dc_parser_new() functions. And the
vendor_product_extract_dives() functions are internal helpers that
should never have been part of the public api in the first place.
The low level serial and IrDA functions are modified to:
- Use the libdivecomputer namespace prefix.
- Return a more detailed status code, instead of zero on success and a
  negative value on error. This makes it possible to return more
  fine-grained error codes.
- Add an output parameter to the read and write functions to return the
  actual number of bytes transferred. Since these functions are not
  atomic, some data might still be transferred successfully when an
  error occurs. (A sketch of the resulting signatures follows below.)
The dive computer backends are updated to use the new api.
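For illustration, the revised read and write signatures might look like
this (the exact names are assumptions):

    // A dc_status_t return value replaces the old zero/negative
    // convention, and 'actual' reports the number of bytes that were
    // transferred, even when an error occurs.
    dc_status_t
    dc_serial_read (dc_serial_t *device, void *data, size_t size, size_t *actual);

    dc_status_t
    dc_serial_write (dc_serial_t *device, const void *data, size_t size, size_t *actual);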
Both the allocation and the initialization of the object data structure
are now handled by a single function. The corresponding deallocation
function
is intended to free objects that have been allocated, but are not fully
initialized yet. The public cleanup function shouldn't be used in such
case, because it may try to release resources that haven't been
initialized yet.
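A hypothetical sketch of the resulting constructor pattern (the
dc_device_allocate()/dc_device_deallocate() names and the example
backend are assumptions):

    // Allocate and initialize the base object in one step.
    example_device_t *device = (example_device_t *)
        dc_device_allocate (context, &example_device_vtable);
    if (device == NULL)
        return DC_STATUS_NOMEMORY;

    // If a later initialization step fails, the object is not fully
    // initialized yet, so use the deallocation function rather than
    // the public cleanup function.
    dc_status_t rc = example_setup (device);
    if (rc != DC_STATUS_SUCCESS) {
        dc_device_deallocate ((dc_device_t *) device);
        return rc;
    }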
Instead of freeing the object data structure in the backend specific
cleanup function, the memory is now freed automatically in the base
class function. This reduces the amount of boilerplate code in the
backends. Backends that don't allocate any additional resources no
longer require a cleanup function at all.
When the close function returns, all resources should be freed,
regardless of whether an error has occurred or not. The error code is
purely informative.
However, in order to return the first error code, which is usually the
most interesting one, the current implementation is unnecessarily
complicated. If an error occurs, there is no need to exit immediately.
Simply store the error code unless there is already a previous one, and
then continue.
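A hypothetical sketch of the simplified pattern (all names are
illustrative):

    static dc_status_t
    example_device_close (dc_device_t *abstract)
    {
        example_device_t *device = (example_device_t *) abstract;
        dc_status_t status = DC_STATUS_SUCCESS;

        // Keep the first error, but continue releasing resources.
        dc_status_t rc = example_send_quit (device);
        if (rc != DC_STATUS_SUCCESS && status == DC_STATUS_SUCCESS)
            status = rc;

        rc = dc_serial_close (device->port);
        if (rc != DC_STATUS_SUCCESS && status == DC_STATUS_SUCCESS)
            status = rc;

        // Everything is freed regardless; the code is informative only.
        return status;
    }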
The Tusa IQ-700 is very similar to the other Seiko based models. The
most important change is that, due to the smaller amount of memory (8K
vs 32K), the logbook entries are only one byte instead of two.
Currently the profile ringbuffer starts at the base address 0x4000, but
I believe the real start is one 0x20 byte page earlier, at 0x3FE0. I
have two reasons for this:
1. To locate the start of a dive, we always have to subtract one page
from the pointers in the logbook ringbuffer. With the new base address,
they would point directly to the start of the dive, which makes a lot
more sense.
2. The dive time stored in the header always matches the one obtained
by counting the number of samples, except for dives that span the
ringbuffer wrap point. If those extra 0x20 bytes are included, the
counts match again.
Unfortunately, this change breaks the assumption that the ringbuffer is
aligned to packet boundaries. As a workaround, we define a virtual
ringbuffer that is slightly larger than the actual ringbuffer, but
properly aligned. Data outside the real ringbuffer is downloaded and
then immediately dropped.
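A sketch of the resulting constants, using assumed names (only the
addresses and sizes come from the analysis above):

    #define PAGESIZE   0x20
    #define PACKETSIZE 0x80

    // Real start of the profile ringbuffer (not packet aligned).
    #define RB_PROFILE_BEGIN 0x3FE0

    // Virtual, packet aligned begin address: the 0x60 bytes between
    // 0x3F80 and 0x3FE0 are downloaded and immediately dropped.
    #define RB_PROFILE_BEGIN_VIRTUAL \
        (RB_PROFILE_BEGIN / PACKETSIZE * PACKETSIZE) /* 0x3F80 */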
Packets have a fixed size of 0x80 bytes, while a single page is only
0x20 bytes long. Thus each read operation always returns 4 pages at
once. Now, if the end-of-profile pointer is not nicely aligned on a
packet boundary, then the download algorithm won't arrive exactly at the
start address of the ringbuffer, because the ringbuffer is properly
aligned. The consequence is that we won't even notice we reached the
ringbuffer boundary and happily continue reading outside the ringbuffer.
Oops!
This is fixed by aligning the end-of-profile pointer, which guarantees
that all read operations are now nicely aligned to packet boundaries.
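For illustration, the alignment could be done like this (rounding up is
an assumption; the fix only requires that the pointer ends up on a
packet boundary):

    // Round the end-of-profile pointer up to the next packet boundary,
    // so every 0x80 byte read stays packet aligned and the download
    // terminates exactly at the (virtual) ringbuffer begin.
    unsigned int aligned = ((end + PACKETSIZE - 1) / PACKETSIZE) * PACKETSIZE;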
When trying to send a command, the first attempt always fails. We
receive the echo, but no data packet. A second attempt usually works,
but we always get back the same data packet. That's clearly wrong.
Now, when comparing the data packets with those of the Tusa application,
I noticed something very interesting. When we request the first packet
(page 0x0000), we get:
W: 520000
R: 520000
R: 00880124056202000250002890470824...19202720002000200020002000204145
The Tusa application also requests this page, but the response is
completely different:
W: 520000
R: 520000
R: 22182224222322092203220522112210...0000000000f021fc0000000000000045
The response we get is identical to the response that the Tusa
application gets for page 0x0052:
W: 520052
R: 520052
R: 00880124056202000250002890470824...19202720002000200020002000204145
The only difference here is the echo of the command. But the echo should
be ignored, because it's generated by the PC interface, and not sent by
the dive computer. This is easily verified by the fact that we always
receive an echo, even without a dive computer connected (e.g. only the
PC interface).
Notice how the command type (first byte) and page number (last byte) are
identical (0x52) for this request! I suspect that somehow the command
type ends up being interpreted as the page number. That would explain
why we're always getting the same response: as far as the device is
concerned we're always requesting page 0x52. This is probably also
related to the fact that the device doesn't respond after the first
request. It's conceivable that when the first command isn't received
correctly and we resend it, the device receives something that contains
parts of both attempts.
By sending the command and reading the echo byte by byte instead of all
at once, the above problem disappears.
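A hypothetical sketch of the fix, reusing the serial API described
earlier (the variable and function names are assumptions):

    // Write the command one byte at a time and read each echoed byte
    // back immediately, instead of one bulk write followed by a bulk
    // read of the echo.
    for (unsigned int i = 0; i < csize; ++i) {
        size_t actual = 0;

        dc_status_t rc = dc_serial_write (device->port, command + i, 1, &actual);
        if (rc != DC_STATUS_SUCCESS)
            return rc;

        unsigned char echo = 0;
        rc = dc_serial_read (device->port, &echo, 1, &actual);
        if (rc != DC_STATUS_SUCCESS)
            return rc;

        if (echo != command[i])
            return DC_STATUS_PROTOCOL;
    }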
Without the delay, the communication immediately fails. We receive the
command echo, but not the actual data packet. I suspect the device is
still busy with the initialization and needs a bit more time before
it's ready to accept a request.
The fingerprint is used unconditionally, regardless of whether it's
explicitly set by the application or not. Therefore it needs to be
initialized properly.
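For example, the constructor could simply clear the fingerprint buffer
(a sketch, assuming the fingerprint is stored in the device structure):

    // An all-zero fingerprint acts as the "not set" value.
    memset (device->fingerprint, 0, sizeof (device->fingerprint));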
Currently, each backend has its own function to verify whether the
object vtable pointer is the expected one. All these functions can be
removed in favor of a single isinstance function in the base class,
which takes the expected vtable pointer as a parameter.
Functions that are called through the vtable don't need to verify the
vtable pointer, so those checks are removed.
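A hypothetical sketch of the shared check, assuming the vtable pointer
is stored in the base object:

    int
    dc_device_isinstance (dc_device_t *device, const dc_device_vtable_t *vtable)
    {
        if (device == NULL)
            return 0;

        return device->vtable == vtable;
    }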
The term "backend" can be confusing because it can refer to both the
virtual function table and the device/parser backends. The use of the
term "vtable" avoids this.
The version function requires device specific knowledge to use (at
least the required buffer size), is already called internally when
necessary, and is supported by only a few backends. Thus there is no
good reason to keep it in the high-level public api.
These macros are used internally and don't need to be exposed. In some
cases, the actual values are not even constant, but dependent on the
model and/or the firmware version.
I forgot to update the device and parser initialization functions to
store the context pointer into the objects. As a result, the internal
context pointers were always NULL.
The public api is changed to require a context object for all
operations. Because other library objects store the context pointer
internally, only the constructor functions need an explicit context
object as a parameter.
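A sketch of the resulting call pattern, reusing the dc_device_open()
sketch from above and assuming dc_context_new() and dc_context_free()
as the context management functions:

    dc_context_t *context = NULL;
    dc_status_t rc = dc_context_new (&context);
    if (rc != DC_STATUS_SUCCESS)
        return rc;

    // Only the constructor takes the context explicitly; the device
    // stores the pointer internally for all later operations.
    dc_device_t *device = NULL;
    rc = dc_device_open (&device, context, descriptor, iostream);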
Adding the "dc_" namespace prefix (which is of course an abbreviation
for libdivecomputer) should avoid conflicts with other libraries. For
the time being, only the high-level device and parser layers are
changed.
The public header files are moved to a new subdirectory, to separate
the definition of the public interface from the actual implementation.
Using the same directory layout as the final installation has the
advantage that the example code can be built outside the project tree
without any modifications to the #include statements.
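For example, code inside and outside the tree can then use the
installed-style path:

    #include <libdivecomputer/device.h>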
The internal memory appears to contain two separate areas: one for the
normal dives and one for the freedives. Currently, only the freedive
section is processed.