This converts most of the cached data to the field cache, leaving some
Garmin-specific fields alone (but removing them from the "cache"
structure in the process).
This means that all of the users of the string fields have been
converted, and we don't have the duplicate string interfaces any more.
Some of the other "dc_field_cache_t" fields could easily be used by
other backends (including some of the partial conversions like the
Shearwater one, but also backends that don't do any string fields at
all), but this conversion was a fairly minimal "set up the
infrastructure, and convert the easy parts".
Considering that the string field stuff still isn't upstream, I'm not
going to push any other backends to do more conversions.
On the whole, the string code de-duplication was a fairly nice cleanup:
8 files changed, 340 insertions(+), 484 deletions(-)
and perhaps more importantly will make it easier to do new backends in
the future with smaller diffs against upstream.
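To make the idea concrete, here is a minimal sketch of what such a shared
field cache could look like; the struct and helper names are made up for
illustration and don't claim to match the actual dc_field_cache_t layout:

    #include <string.h>

    #define MAXSTRINGS 32

    /* Hypothetical shape of a shared field cache: already-parsed numeric
     * fields plus the "description: value" strings, kept in one place so
     * every backend can use the same GET/ADD helpers. */
    typedef struct my_field_string_t {
        const char *desc;
        char value[64];
    } my_field_string_t;

    typedef struct my_field_cache_t {
        unsigned int initialized;        /* bitmap of valid numeric fields */
        unsigned int divetime;
        double maxdepth;
        my_field_string_t strings[MAXSTRINGS];
    } my_field_cache_t;

    /* Add one string field to the cache; returns 0 on success. */
    static int
    cache_add_string (my_field_cache_t *cache, const char *desc, const char *value)
    {
        for (unsigned int i = 0; i < MAXSTRINGS; i++) {
            if (!cache->strings[i].desc) {
                cache->strings[i].desc = desc;
                strncpy (cache->strings[i].value, value,
                         sizeof (cache->strings[i].value) - 1);
                cache->strings[i].value[sizeof (cache->strings[i].value) - 1] = 0;
                return 0;
            }
        }
        return -1;   /* no free slot */
    }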
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
We used to require that we have one of the documented dive types in the
'sub_sport' type field. But apparently Garmin added a new type number
for CCR diving, so CCR dives weren't recognized at all.
Add the new CCR case, but also say that if we have seen a DIVE_SUMMARY
record with average depth information, we'll just assume it's a dive
even for unrecognized sub_sport numbers.
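A rough sketch of the resulting recognition logic (the CCR sub_sport
number itself isn't spelled out here, and the helper names are
illustrative):

    #include <stdbool.h>

    struct recognizer_state {
        unsigned int sub_sport;          /* from the SPORT message */
        bool summary_has_avg_depth;      /* DIVE_SUMMARY with average depth seen */
    };

    static bool
    looks_like_a_dive (const struct recognizer_state *state)
    {
        /* The documented dive types: sub_sport 53-57. */
        if (state->sub_sport >= 53 && state->sub_sport <= 57)
            return true;

        /* Unrecognized sub_sport (e.g. the new CCR number): still treat
         * it as a dive if we have seen a DIVE_SUMMARY with average depth. */
        return state->summary_has_avg_depth;
    }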
Reported-by: Thomas Jacob <opiffe@gmail.com>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
.. the parsing was actually already there, but we never generated the
event to report it. I simply hadn't had any files with HR data to test
with.
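Reporting it is just one more sample callback; a sketch along these
lines, assuming the sample callback and DC_SAMPLE_HEARTBEAT type of the
libdivecomputer version these patches are against:

    #include <libdivecomputer/parser.h>

    /* Report one heart-rate value as a sample; skipped entirely when the
     * field carries the FIT "invalid" pattern for a uint8 (0xFF). */
    static void
    report_heartrate (unsigned int hr, dc_sample_callback_t callback, void *userdata)
    {
        dc_sample_value_t sample = {0};

        if (hr == 0xFF || !callback)
            return;

        sample.heartbeat = hr;
        callback (DC_SAMPLE_HEARTBEAT, sample, userdata);
    }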
Reported-by: Primoz P <primozicp@gmail.com>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Based on information from Ryan January, who got it from his Garmin
contacts and got the go-ahead to share the data.
This is mainly the water density and deco model. It has a few other
fields too, but nothing necessarily worth reporting.
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Based on information from Ryan January, who got it from his Garmin
contacts and got the go-ahead to share the data.
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
While it seems like a safe assumption, the cost of being careful,
assembling the full record, and taking the one for device_index 0 seems
worth it to me.
Signed-off-by: Dirk Hohndel <dirk@hohndel.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
A typical FIT file contains several DEVICE_INFO messages. We need to
identify the one(s) for the creator (i.e. the actual device, not a
sub-component).
Note: Garmin identifies the Descent Mk1 as product 2859. I think we
should use this as the model number (instead of the 0 we currently use).
Also, the vendor event is not meant to send the vendor name of the
device, but to send vendor-specific events :-)
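Sketched out, the selection logic looks roughly like this (struct and
field names are illustrative, not the parser's actual ones):

    /* One decoded DEVICE_INFO message. device_index 0 is the creator,
     * i.e. the watch itself rather than a sub-component like a sensor. */
    struct device_info {
        unsigned int device_index;
        unsigned int product;          /* 2859 == Descent Mk1 */
        unsigned int serial;
        unsigned int firmware;
    };

    struct devdata {
        unsigned int model;
        unsigned int serial;
        unsigned int firmware;
    };

    static void
    record_creator (struct devdata *dev, const struct device_info *info)
    {
        if (info->device_index != 0)
            return;                    /* a sub-component, not the creator */

        dev->model    = info->product; /* use the product number as model */
        dev->serial   = info->serial;
        dev->firmware = info->firmware;
    }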
Signed-off-by: Dirk Hohndel <dirk@hohndel.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Dives are identified by a sub_sport range of 53-57 in the SPORT message.
This means that we need to parse the files before we actually offer them
to the application, so we end up parsing them three times in total, but I
don't see a way around that. Thankfully, parsing a memory buffer is
reasonably fast.
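The extra pass boils down to something like this hedged sketch, where
fit_buffer_sub_sport() stands in for a stripped-down run of the parser
over the in-memory buffer:

    #include <stdbool.h>
    #include <stddef.h>

    /* Hypothetical helper: decode just enough of the FIT buffer to find
     * the sub_sport field of the SPORT message. */
    extern bool fit_buffer_sub_sport (const unsigned char *data, size_t size,
                                      unsigned int *sub_sport);

    /* Decide whether a downloaded file should be offered to the
     * application at all: only sub_sport 53-57 are dives. */
    static bool
    fit_buffer_is_dive (const unsigned char *data, size_t size)
    {
        unsigned int sub_sport;

        if (!fit_buffer_sub_sport (data, size, &sub_sport))
            return false;

        return sub_sport >= 53 && sub_sport <= 57;
    }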
Signed-off-by: Dirk Hohndel <dirk@hohndel.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Yes, the hexdump was simple, but it was really hard to read, and we can
do so much better, including taking empty data into account, and
formatting it by the actual size of the individual fields.
So this improves the debug log: when we decide to try to parse new field
information, the data will be in a more legible format that makes more
sense. And getting rid of the empty fields makes it much clearer which
data might be even remotely interesting.
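Something along these lines, as a minimal sketch of the kind of
size-aware dump this is aiming for (the helper is illustrative, not the
actual debug code):

    #include <stdio.h>

    static void
    debug_field (const char *name, const unsigned char *data, unsigned int size)
    {
        unsigned int i, empty = 1;

        /* All-0xff is the usual "no data here" pattern - skip it entirely. */
        for (i = 0; i < size; i++)
            if (data[i] != 0xff)
                empty = 0;
        if (empty)
            return;

        /* Format by the actual field size instead of a raw hexdump. */
        switch (size) {
        case 1:
            printf ("  %s = %u\n", name, (unsigned int) data[0]);
            break;
        case 2:
            printf ("  %s = %u\n", name,
                    (unsigned int) (data[0] | (data[1] << 8)));
            break;
        case 4:
            printf ("  %s = %u\n", name,
                    data[0] | (data[1] << 8) | (data[2] << 16) |
                    ((unsigned int) data[3] << 24));
            break;
        default:
            printf ("  %s =", name);
            for (i = 0; i < size; i++)
                printf (" %02x", data[i]);
            printf ("\n");
            break;
        }
    }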
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
The magic numbers in the decoding are all based on Wojciech Więckowski's
fit2subs python script.
The event decoding is incomplete, but it should be easy enough to add
new events as people figure them out or as the FIT SDK documentation
improves, whichever comes first.
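The decoding is essentially table-driven, so extending it means adding
one line per newly understood event. A sketch of the shape (the entries
are illustrative; only event 0 = timer comes from the public FIT
profile):

    struct fit_event_desc {
        unsigned int event;
        const char *name;
    };

    static const struct fit_event_desc fit_events[] = {
        { 0, "Timer" },          /* documented in the FIT profile */
        /* add more entries here as the numbers get figured out */
    };

    static const char *
    fit_event_name (unsigned int event)
    {
        for (unsigned int i = 0; i < sizeof (fit_events) / sizeof (fit_events[0]); i++)
            if (fit_events[i].event == event)
                return fit_events[i].name;
        return "Unknown event";
    }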
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
The logic to suppress multiple redundant time samples in the Garmin
parser also always suppressed the time sample at 0:00, which was not
intentional.
Fix it by simply making the "suppress before" logic be "suppress until"
instead.
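A sketch of the difference, with illustrative names; the point is that
only times we have already emitted get suppressed, so the 0:00 sample
still goes through:

    #include <stdbool.h>

    struct sample_state {
        bool have_time;             /* has any time sample been emitted yet? */
        unsigned int last_time;     /* last emitted time, valid if have_time */
    };

    /* "Suppress until": drop a sample only if it does not advance past
     * the last time we already reported, instead of dropping everything
     * below an initial threshold ("suppress before"), which ate the
     * 0:00 sample. */
    static bool
    want_time_sample (struct sample_state *state, unsigned int time)
    {
        if (state->have_time && time <= state->last_time)
            return false;

        state->have_time = true;
        state->last_time = time;
        return true;
    }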
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Several dive computers support this, so why not have the proper
interface for it in libdivecomputer?
Triggered by the fact that the Python scripts to generate XML files from
the Garmin FIT files can import this information, but the native
libdivecomputer model couldn't.
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
This actually takes the gas status information into account, and doesn't
show gas mixes that are disabled.
All the Garmin Descent data now looks reasonable, but we're not
generating any events (so no warnings, but also no gas change events
etc).
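The filtering itself is trivial; a hedged sketch, with the status
convention assumed rather than quoted from the FIT SDK:

    /* One gas mix as reported by the device. Status 0 is assumed to mean
     * "disabled"; anything else (enabled, backup) is kept. */
    struct gasmix_info {
        unsigned int status;
        unsigned int oxygen;        /* permille */
        unsigned int helium;        /* permille */
    };

    static unsigned int
    count_enabled_gases (const struct gasmix_info *gas, unsigned int ngases)
    {
        unsigned int i, count = 0;

        for (i = 0; i < ngases; i++)
            if (gas[i].status != 0)     /* skip disabled mixes entirely */
                count++;
        return count;
    }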
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
This clarifies and generalizes the "pending sample data" a bit to also
work for gas mixes, since it's one of those things where you get
multiple fields in random order, and it needs to be batched up into one
"this gas for this cylinder" thing.
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
This adds all the GPS information I found, although for dives the
primary ones do seem to be the "session" entry and exit ones.
But I'm exporting all of them as strings, so that we can try to figure
out what they mean.
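Formatting them is straightforward, since FIT stores positions as 32-bit
"semicircles", where 2^31 semicircles equal 180 degrees; the output
format below is just an example:

    #include <stdio.h>

    /* Convert a FIT latitude/longitude pair (in semicircles) to a
     * human-readable decimal-degrees string. */
    static void
    format_position (char *buf, unsigned int size, int lat_semi, int lon_semi)
    {
        const double scale = 180.0 / 2147483648.0;   /* degrees per semicircle */

        snprintf (buf, size, "%.6f, %.6f", lat_semi * scale, lon_semi * scale);
    }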
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
This gets me real profiles, with depth and temperature information.
Sadly, the temperature data seems to be in whole degrees C, which is not
good for diving, but certainly not unheard of.
Also, while this does actually transfer a lot of other information too,
there are certainly things missing. No gas information is gathered
(although we do parse it, we just don't save it), and none of the events
are parsed at all.
And the GPS information that we have isn't passed on yet, because there
are no libdivecomputer interfaces to do that. I'll have to come up with
something.
But it's actually almost useful. All the basics seem to be there. How
*buggy* it is, I do not know, but the profiles don't look obviously
broken.
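For the record, the unit handling is simple; a sketch with the scaling
assumed (depth as a millimetre-scaled integer, temperature as a signed
byte in whole degrees C):

    /* Assumed scaling of the raw record fields; only the whole-degree
     * temperature resolution is certain from the data seen so far. */
    static double
    fit_depth_to_meters (unsigned int raw_depth)
    {
        return raw_depth / 1000.0;      /* assumed: raw value is millimetres */
    }

    static double
    fit_temperature_to_celsius (signed char raw_temp)
    {
        return (double) raw_temp;       /* whole degrees C, as lamented above */
    }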
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
There aren't that many relevant ones left, and I have reached the point
where I think the remaining missing fields just aren't that important
any more. You can always get them by saving the libdivecomputer
log-file and see the debug messages that way.
Now I'll need to turn the parsing skeleton into something that actually
generates the libdivecomputer data.
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
This actually seems to cover almost all of the relevant dive fields.
There are a lot of different GPS coordinates; I have no idea what they
all are, but they are at least marked in the definitions.
NOTE! None of this actually fills in any information yet. It's all
about just parsing things and getting the types etc right.
On that note, this also adds a bit of dynamic type checking, which
caught a mistake or two.
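The check itself amounts to comparing the declared size against what
actually arrives, roughly like this (names are illustrative):

    #include <stdio.h>

    struct field_desc {
        const char *name;
        unsigned int expected_size;   /* 0 = variable size, don't check */
    };

    /* Dynamic type check: complain when the on-the-wire size of a field
     * doesn't match what the definition table says it should be. */
    static int
    check_field_size (const struct field_desc *desc, unsigned int actual_size)
    {
        if (desc->expected_size && desc->expected_size != actual_size) {
            fprintf (stderr, "field '%s': expected %u bytes, got %u\n",
                     desc->name, desc->expected_size, actual_size);
            return 0;
        }
        return 1;
    }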
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Make it easier to see which of the unknown fields don't contain anything
interesting.
Soon this will be at the stage where the parser skeleton itself doesn't
need much work, and I should look at the actual data and turn it into
samples instead.
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Oops. I used array_uint16_le() to get the data size. Too much
copy-and-paste from the profile version (which is indeed 16 bits).
The data size is a 32-bit entity, and this would truncate the data we
read.
Also, verify that there is space for the final CRC in the file, even if
we don't actually check it.
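In sketch form, the corrected header check (the field offsets are the
standard FIT header layout; the helper name is illustrative):

    /* The FIT file header: byte 0 is the header size, bytes 2-3 the
     * 16-bit profile version, bytes 4-7 the 32-bit LE data size. */
    static int
    check_fit_header (const unsigned char *data, unsigned int size)
    {
        if (size < 12)
            return 0;

        unsigned int hdrsize = data[0];
        unsigned int datasize = data[4] | (data[5] << 8) |
                                (data[6] << 16) | ((unsigned int) data[7] << 24);

        if (hdrsize < 12 || hdrsize + 2 > size)
            return 0;

        /* header + data + 2 trailing CRC bytes must fit in the buffer,
         * even though the CRC itself isn't verified. */
        if (datasize > size - hdrsize - 2)
            return 0;

        return 1;
    }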
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
The invalid values skip the parser callback function entirely. Of
course, since it's not really doing anything right now, that's mostly
cosmetic.
Extend the FIT type declarations to also have the invalid values.
Also, add a few timestamp entries, and print them out to show the
timestamps in a human-legible format.
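The invalid values follow the FIT base types: each type reserves one bit
pattern to mean "no data", e.g. 0xFF for uint8, 0x7F for sint8, 0xFFFF
for uint16, and so on. A sketch of the check:

    #include <stdbool.h>

    /* Return true if 'value' (the raw bits of a field) is the reserved
     * "invalid" pattern for its base type size/signedness. */
    static bool
    fit_value_is_invalid (unsigned int value, unsigned int size, bool is_signed)
    {
        switch (size) {
        case 1: return value == (is_signed ? 0x7Fu : 0xFFu);
        case 2: return value == (is_signed ? 0x7FFFu : 0xFFFFu);
        case 4: return value == (is_signed ? 0x7FFFFFFFu : 0xFFFFFFFFu);
        default: return false;
        }
    }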
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
It turns out that the timestamp field can exist across all message
types, as can a few other special fields.
Split those out as "ANY" type fields, so that we get the field
descriptor without having to fill in every message descriptor.
This also makes the message descriptors smaller, since we no longer need
to worry about the high-numbered (253) timestamp field in the arrays.
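The lookup then becomes two-level: try the per-message field table first,
and fall back to the shared "ANY" table for fields like the 253
timestamp. A sketch with illustrative names:

    #include <stddef.h>

    struct field_desc {
        unsigned int field_nr;
        const char *name;
    };

    /* Fields that are valid in every message type. */
    static const struct field_desc any_fields[] = {
        { 253, "timestamp" },
    };

    static const struct field_desc *
    lookup_field (const struct field_desc *msg_fields, unsigned int nr_fields,
                  unsigned int field_nr)
    {
        unsigned int i;

        for (i = 0; i < nr_fields; i++)
            if (msg_fields[i].field_nr == field_nr)
                return &msg_fields[i];

        for (i = 0; i < sizeof (any_fields) / sizeof (any_fields[0]); i++)
            if (any_fields[i].field_nr == field_nr)
                return &any_fields[i];

        return NULL;
    }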
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
This is _very_ incomplete. The FIT file is really fairly generic, but
this has the basics for parsing, with tables to look up the low-level
parsers by the FIT "message ID" and "field nr".
It doesn't actually parse anything yet, so consider this a FIT decoder
skeleton.
Right now it basically prints out the different record values, and names
them for the (few) cases where I've found or guessed the numbers.
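The table shapes look roughly like this; the message numbers shown
(0 = FILE_ID, 20 = RECORD) and the FILE_ID fields are from the public FIT
profile, the rest is illustrative:

    #include <stddef.h>

    struct field_parser {
        unsigned int field_nr;
        const char *name;
        void (*parse) (const unsigned char *data, unsigned int size);
    };

    struct msg_desc {
        unsigned int mesg_num;      /* FIT global message number */
        const char *name;
        const struct field_parser *fields;
        unsigned int nfields;
    };

    static const struct field_parser file_id_fields[] = {
        { 1, "manufacturer", NULL },
        { 2, "product",      NULL },
    };

    /* Top-level table: look up the message by its global number, then
     * the field by its field number within that message. */
    static const struct msg_desc messages[] = {
        { 0,  "FILE_ID", file_id_fields, 2 },
        { 20, "RECORD",  NULL, 0 },
    };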
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
This does absolutely nothing, but it adds the basic skeleton for a new
dive computer support.
Not only do I not have any real code for any of this yet, but I actually
think it might be useful to have a "this is how to add a new dive
computer" example commit.
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>