Compare commits

...

88 Commits

Author SHA1 Message Date
Michael Keller
b126fccb1b Desktop: Fix Inconsistencies in Handling of Salinity.
- add correct setting of the water type drop down for the dive shown
  initially after program start;
- change salinity to have 3 decimals in planner, to make it consistent
  with the log.

Fixes #4240.

Reported-by: @ccsieh
Signed-off-by: Michael Keller <github@ike.ch>
2024-06-10 15:54:22 +12:00
Michael Keller
10fc3bfd47 Bugfix: Fix Incorrect Volumes Displayed for Tank Types.
Fix an issue introduced in #4148.
Essentially the refactoring missed the fact that in the imperial system
tank size is tracked as the free gas volume, but in the metric system
(which is the one used in most of Subsurface's calculations) tank size
is tracked as water capacity.
So when updating a tank template tracking imperial measurements, the
given (metric) volume in l has to be multiplied by the working pressure,
and vice versa.
This also combines all the logic dealing with `tank_info` data in one
place, hopefully making it less likely that this will be broken by
inconsistencies in the future.
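
As a rough sketch of the conversion described above (illustrative names and constants, not the actual `tank_info` code):

        // metric water capacity (litres) -> imperial free gas volume (cuft):
        // multiply by the working pressure (converted to atm), then convert litres to cuft
        double cuft_from_water_capacity(double litres, double working_pressure_bar)
        {
                double working_pressure_atm = working_pressure_bar / 1.01325;
                return litres * working_pressure_atm / 28.317; // 1 cuft is about 28.317 l
        }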

Fixes #4239.

Signed-off-by: Michael Keller <github@ike.ch>
2024-06-09 11:15:59 +02:00
Berthold Stoeger
a8c9781205 cleanup: remove unused function create_and_hookup_trip_from_dive()
It seems that the last user was removed 5 years ago: ff9506b21?

Signed-off-by: Berthold Stoeger <bstoeger@mail.tuwien.ac.at>
2024-06-08 15:59:53 +02:00
Michael Keller
291ed9d7e3 Documentation: Update Information on Available Versions in README.md.
Update the information on the available versions of Subsurface in
README.

Also update the documentation to reflect the renaming of `INSTALL` to
`INSTALL.md`.

Signed-off-by: Michael Keller <github@ike.ch>
2024-06-06 16:17:44 +12:00
Michael Keller
a39f0e2891 Mobile: Fix QML Warnings.
Fix some runtime warnings when running the mobile build caused by
binding loops and deprecated handler syntax.

Signed-off-by: Michael Keller <mikeller@042.ch>
2024-06-06 16:17:32 +12:00
Dirk Hohndel
d9f50bb8e0 add Ubuntu 24.04 / Noble Numbat to our builds
Signed-off-by: Dirk Hohndel <dirk@hohndel.org>
2024-06-05 13:23:52 -07:00
Michael Keller
d1db85005b CICD: Remove Workaround for Broken ubuntu 16.04 Repository.
ATTENTION: Only merge this when CICD starts working (will need a rebase
to trigger a build).

Signed-off-by: Michael Keller <github@ike.ch>
2024-06-05 10:52:34 +12:00
Berthold Stoeger
e2d3a12555 cleanup: remove unused roles in DiveTripModelBase
The roles DIVE_IDX and SELECTED_ROLE were used for the old selection
system and removed in b8e7a600d2d2a30f7e0646fc164ab6e57fd4782f.

Signed-off-by: Berthold Stoeger <bstoeger@mail.tuwien.ac.at>
2024-06-05 09:16:32 +12:00
Michał Sawicz
568aeb7bce snap: drop candidate channel
Building directly into `stable` from the `current` branch.

Signed-off-by: Michał Sawicz <michal@sawicz.net>
2024-06-03 07:59:22 -07:00
Berthold Stoeger
ca5f28206b tests: make profile test work with non-C locales
For reasons unknown to me, the profile test is executed with a
weird locale, resulting in wrong formatting.

By setting the locale manually to "C", the tests start to work.
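
A minimal sketch of pinning the locale in a test, assuming the usual C library and Qt calls (the actual test code may differ):

        setlocale(LC_NUMERIC, "C");          // C library number formatting
        QLocale::setDefault(QLocale::c());   // Qt number formatting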

Signed-off-by: Berthold Stoeger <bstoeger@mail.tuwien.ac.at>
2024-06-03 07:41:47 +02:00
Michael Keller
adaa52bf9b Desktop: Fix Undo for Gaschanges on Manually Added Dives.
Fix the undo functionality for gaschanges edited on manually added
dives.

Pointed-out-by: @bstoeger
Signed-off-by: Michael Keller <github@ike.ch>
2024-06-02 11:38:21 +02:00
Michael Keller
692ec9ee5c Update libdivecomputer to latest on 'Subsurface-DS9'.
Signed-off-by: Michael Keller <github@ike.ch>
2024-06-02 16:33:19 +12:00
Michael Keller
c2c5faeaad Add the change for MacOS builds with Qt6 as well.
Signed-off-by: Michael Keller <github@ike.ch>
2024-06-02 09:42:14 +12:00
jme
88acef7f0f release build google maps
After the Mac Qt upgrade to 5.15.13, Google Maps stopped working because a debug plugin was built and not deployed. This change forces a release build. It may or may not be the best alternative, but if nothing else it's a starting point for discussion with people who know more about qmake than I do.

Signed-off-by: jme <32236882+notrege@users.noreply.github.com>
2024-06-02 09:42:14 +12:00
Michael Keller
32cd52869b CICD: Fix the AppImage Workflow.
Work around an upstream version inconsistency by pinning the versions in
our build.

Signed-off-by: Michael Keller <github@ike.ch>
2024-06-02 01:35:29 +12:00
Berthold Stoeger
3d96642b8d smartrak: remove copy_string() that makes little sense
Signed-off-by: Berthold Stoeger <bstoeger@mail.tuwien.ac.at>
2024-06-01 14:43:33 +02:00
Michael Keller
c5546fb52f Cleanup: Optimise String Handling.
Cleanup of the sub-optimal string handling in #4222.

Signed-off-by: Michael Keller <github@ike.ch>
2024-06-01 14:43:03 +02:00
Michael Keller
f65afaf5d2 Desktop: Fix Gas Editing for Manually Added Dives.
- show the correct gasmix in the profile;
- make gases available for gas switches in the profile after they have
  been added;
- persist gas changes;
- add air as a default gas when adding a dive.

This still has problems when undoing a gas switch - instead of
completely removing the gas switch it is just moved to the next point in the
profile.

Signed-off-by: Michael Keller <github@ike.ch>
2024-06-01 23:22:40 +12:00
Berthold Stoeger
9243921cbb test: fix subtle bug in testplan.cpp
testplan.cpp had a subtle bug since converting from a fixed-size
cylinder table to a dynamic cylinder table.

As noted in equipment.h, pointers to cylinders are *not* stable
when the cylinder table grows. Therefore, a construct such as
        cylinder_t *cyl0 = get_or_create_cylinder(&dive, 0);
        cylinder_t *cyl1 = get_or_create_cylinder(&dive, 1);
        cylinder_t *cyl2 = get_or_create_cylinder(&dive, 2);
can give dangling cyl0 and cyl1 pointers. This was not an issue
with the old table code, since it had a rather liberal allocation
pattern. However, when switching to std::vector<>, the problem
becomes active.

To "fix" this, simply access the highest index first. Of course,
this should never be done in real code! Therefore, add a
comment at each instance.

Quickly checked all other get_or_create_cylinder() calls and
they seemed to be safe.
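
Following the description above, the reordered calls look like this (same snippet as above, highest index first):

        // access the highest index first so the table has already grown;
        // the pointers fetched afterwards are not invalidated - do not copy
        // this pattern into real code, use indices instead
        cylinder_t *cyl2 = get_or_create_cylinder(&dive, 2);
        cylinder_t *cyl1 = get_or_create_cylinder(&dive, 1);
        cylinder_t *cyl0 = get_or_create_cylinder(&dive, 0);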

Signed-off-by: Berthold Stoeger <bstoeger@mail.tuwien.ac.at>
2024-05-31 22:50:49 +02:00
Michael Keller
d27451979d Profile: Add Gas Description to Disambiguate.
Add the gas description to the label on pressure graphs to disambiguate
if multiple identical gasmixes are shown.

Also move the label to the right, where the end pressures will typically
be more spread out than the starting pressures.

Signed-off-by: Michael Keller <mikeller@042.ch>
2024-05-31 22:50:09 +02:00
Berthold Stoeger
e7d486982f core: remove put_format_loc()
This was replaced by C++ functions in ae299d5e663c.

Signed-off-by: Berthold Stoeger <bstoeger@mail.tuwien.ac.at>
2024-05-31 18:55:47 +02:00
Michael Keller
5b941ea34e Mobile: Fix Build Warnings.
Fix build warnings from building functions not used in the mobile
version.

Signed-off-by: Michael Keller <mikeller@042.ch>
2024-05-30 11:54:51 +02:00
Michael Keller
56f1e7027f Documentation: Update INSTALL and Convert it to Markdown.
Update the instructions for the Windows build and convert the file to
markdown.

Signed-off-by: Michael Keller <github@ike.ch>
2024-05-30 14:22:16 +12:00
Berthold Stoeger
64d4de4a1b fix memory leak
logfile_name was converted to std::string. Assigning a strdup()ed
string to it will leak memory.
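
A minimal sketch of the leak pattern (illustrative, not the exact call site):

        std::string logfile_name;
        logfile_name = strdup(filename); // std::string copies the bytes; the strdup()ed buffer is never freed
        logfile_name = filename;         // correct: assign directly, nothing to free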

Signed-off-by: Berthold Stoeger <bstoeger@mail.tuwien.ac.at>
2024-05-29 10:03:41 +12:00
Berthold Stoeger
e39b42df53 cleanup: remove defunct add_cloned_weightsystem_at()
Clearly, a development artifact.

Signed-off-by: Berthold Stoeger <bstoeger@mail.tuwien.ac.at>
2024-05-29 09:33:23 +12:00
Berthold Stoeger
398cc2b639 cleanup: remove localized snprintf() functions
The last use of these functions was removed in ae299d5e663c.

And that's a good thing, because snprintf-style interfaces
make zero sense in times of variable-length character
encodings.
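
A small example of why byte-counted truncation and multi-byte encodings do not mix (made-up buffer and string):

        char buf[5];
        // snprintf() truncates at a byte count, not a character boundary:
        // "Süßwasser" is cut in the middle of the two-byte "ß", leaving invalid UTF-8 in buf
        snprintf(buf, sizeof(buf), "%s", "Süßwasser");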

Signed-off-by: Berthold Stoeger <bstoeger@mail.tuwien.ac.at>
2024-05-29 09:33:23 +12:00
Berthold Stoeger
2776a2fe48 import: fix memory leak when importing dives
A long standing issue: the dives_to_add, etc. tables need to be
manually freed. This kind of problem wouldn't arise with proper
C++ data structures.

Signed-off-by: Berthold Stoeger <bstoeger@mail.tuwien.ac.at>
2024-05-27 20:11:37 +12:00
Michael Keller
1aa5438b2d Cleanup: Improve the Use of 'Planned dive' and 'Manually added dive'.
- standardise the naming;
- use it consistently;
- apply the 'samples < 50' only when putting manually added dives into
  edit mode - everywhere else manually added dives should be treated as
  such;
- do not show a warning before editing a manually added dive in planner.

Signed-off-by: Michael Keller <github@ike.ch>
2024-05-25 20:13:45 +02:00
Michael Keller
ecc6f64d10 Cleanup: Improve Connection Handling in Profile.
- improve naming;
- remove unneeded disconnects.

Signed-off-by: Michael Keller <github@ike.ch>
2024-05-25 16:31:04 +02:00
Michael Keller
8c14fb971c Update libdivecomputer to latest on 'Subsurface-DS9'.
Signed-off-by: Michael Keller <github@ike.ch>
2024-05-25 08:54:53 +12:00
Dirk Hohndel
6bdfee080d Merge remote-tracking branch 'origin/translations_translations-subsurface-source-ts--master_de_DE' 2024-05-21 08:04:37 -07:00
Dirk Hohndel
21269183bf Merge remote-tracking branch 'origin/translations_translations-subsurface-source-ts--master_pt_PT' 2024-05-21 08:04:17 -07:00
Michael Keller
245f8002a8 CICD: Remove Workflow to Build ubuntu 14.04 Docker Image.
Remove the workflow for building an ubuntu 14.04 Docker image. This is
no longer needed since the AppImage is now built on 16.04.

Signed-off-by: Michael Keller <github@ike.ch>
2024-05-20 09:56:01 +12:00
Michael Keller
c3d807802d Desktop: Fix Finding Reported by Coverity.
Signed-off-by: Michael Keller <github@ike.ch>
2024-05-18 14:07:15 +02:00
Michael Keller
a66bdb1bf5 Planner: Improve Exit Warning.
Improve the warning shown to the user when closing the application while
in the planner. We now allow the user to directly discard the planned
dive, save it into the dive log, or cancel the operation altogether.
If they save into the dive log, or if they modified the dive log before
starting the planner, a second warning about the unsaved dive log
changes will be shown.

Signed-off-by: Michael Keller <mikeller@042.ch>
2024-05-17 16:44:04 +12:00
Michael Keller
b579342639 Cleanup: Remove 'context' Reference from Logging Defines.
Remove the reference to `context` from the defines used for logging, as
this is not used.

Signed-off-by: Michael Keller <mikeller@042.ch>
2024-05-16 16:39:48 +02:00
Michael Keller
888704e816 CICD: Have the Artifact Comment Workflow Suppress 'No Artifacts' Errors.
Suppress errors in the 'Add Artifact Comment' workflow if there are no
artifacts produced by the pull request workflow - this gets rid of
follow-on error messages when a pull request workflow encounters a build
error.

Signed-off-by: Michael Keller <mikeller@042.ch>
2024-05-15 13:29:25 +12:00
Berthold Stoeger
6880937838 core: fix INFO() and ERROR() macros in core/serial_ftdi.cpp
Error introduced in da7ea17b66: the INFO() and ERROR() macros
pass stdout instead of the format string as first parameter
to report_error(). Ooooops. How did this ever pass the
compile tests!?
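
A hedged sketch of that kind of mix-up (the real macros and the report_error() signature may differ):

        // broken: the stream ends up where the format string belongs
        //   #define INFO(fmt, ...) report_error(stdout, fmt, ##__VA_ARGS__)
        // intended: pass the format string itself as the first argument
        #define INFO(fmt, ...) report_error("INFO: " fmt, ##__VA_ARGS__)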

Signed-off-by: Berthold Stoeger <bstoeger@mail.tuwien.ac.at>
2024-05-15 09:19:05 +12:00
Michael Keller
d018b72dab CICD: Fix Signing of Android CICD Built Packages.
Fix the signing of Android .apk packages when they are built in CICD.

Signed-off-by: Michael Keller <mikeller@042.ch>
2024-05-14 15:21:05 +12:00
Michael Keller
b3d6920de4 CICD: Remove Environment Dumping in Artifact Comment Workflow.
Signed-off-by: Michael Keller <mikeller@042.ch>
2024-05-14 13:36:26 +12:00
Michael Keller
912badadd4 CICD: Restrict Artifact Comment Workflow to only Run on Pull Requests.
Signed-off-by: Michael Keller <mikeller@042.ch>
2024-05-14 13:35:12 +12:00
Michael Keller
1c0fe2fa1f Fix GitHub Workflow definition.
Signed-off-by: Michael Keller <mikeller@042.ch>
2024-05-14 12:53:10 +12:00
Michael Keller
48ef4b3a01 CICD: Debug GitHub Workflow Webhook
Signed-off-by: Michael Keller <mikeller@042.ch>
2024-05-14 12:06:51 +12:00
Michael Keller
22082bd60a CICD: Fix Coverity Scan Workflow.
Change the ordering of steps so that git is installed before the
checkout is performed.

Signed-off-by: Michael Keller <mikeller@042.ch>
2024-05-14 10:57:04 +12:00
Michael Keller
be1b80ea8a CICD: Fix the AppImage Workflow.
Fix the workflow by removing the dependency on node 20, which is not
supported in Ubuntu 16.04.

Signed-off-by: Michael Keller <mikeller@042.ch>
2024-05-13 14:54:49 +12:00
Michael Keller
e81b42d533 Add environment variable required to be able to use the GitHub CLI.
Signed-off-by: Michael Keller <mikeller@042.ch>
2024-05-13 12:55:09 +12:00
Michael Keller
dd50ab0106 Fix incorrect script references.
Signed-off-by: Michael Keller <mikeller@042.ch>
2024-05-13 12:39:09 +12:00
Michael Keller
0d6b572a9f Fix script permissions.
Signed-off-by: Michael Keller <mikeller@042.ch>
2024-05-13 12:32:07 +12:00
Michael Keller
21f64134b7 Fix custom action YML.
Signed-off-by: Michael Keller <mikeller@042.ch>
2024-05-13 12:17:13 +12:00
Michael Keller
7bf40d659c Fix custom action.
Signed-off-by: Michael Keller <mikeller@042.ch>
2024-05-13 12:13:41 +12:00
Michael Keller
6ae2844f24 CICD: Fixup Merge Build Workflows on master.
Signed-off-by: Michael Keller <mikeller@042.ch>
2024-05-13 11:07:05 +12:00
Michael Keller
447f9709f7 CICD: Fixup Merge Build Workflows on master.
Signed-off-by: Michael Keller <mikeller@042.ch>
2024-05-13 10:46:33 +12:00
Michael Keller
4ae6c0bbc4 CICD: Add Workflow to Pin the Generated Artifacts.
Signed-off-by: Michael Keller <github@ike.ch>
2024-05-13 10:19:59 +12:00
Michael Keller
6fc8310705 CICD: Improve Workflows.
Make multiple improvements to the existing workflows:
- create a shared custom action to deal with version number tracking
  and generation;
- use this action to add the branch name to the version for pull
  request builds;
- create a shared workflow for all debian-ish builds to avoid duplication
  by copy / paste;
- remove potential security risks by eliminating the use of
  pre-evaluated expressions (`${{ ... }}`) inside scripts;
- update outdated GitHub action versions;
- improve consistency by renaming scripts to have a `.sh`
  extension;
- improve naming of generated artefacts for pull requests to include
  the correct version.

@dirkh: Unfortunately this is potentially going to break builds when it is
merged, as there is no good way to 'test' a merge build short of
merging.
We'll just have to deal with the fallout of it in a follow-up pull
request.

Signed-off-by: Michael Keller <github@ike.ch>
2024-05-13 10:19:59 +12:00
Berthold Stoeger
e20ec9248c core: fix leak when clearing dive log
46cf2fc0867 fixed a bug where clearing of a divelog, such as the one
used for import, would erase dives in the global(!) divelog.

However, the new code used the function clear_dive_table(), which
only cleared the table without unregistering the dives. In particular,
the dives were not removed from the trips, which means that the trips
were not free()d.

This reinstates the old code, but now passes a divelog parameter
to delete_single_dive() instead of accessing the global divelog.
Moreover, delete dives from the back to avoid unnecessary
copying.

An alternative and definitely simpler solution might be to just
add a "clear_trip_table()" after "clear_dive_table()".

Signed-off-by: Berthold Stoeger <bstoeger@mail.tuwien.ac.at>
2024-05-13 08:58:03 +12:00
Berthold Stoeger
8769b1232e planner: initialize currCombo.ignoreSelection
I am not sure what this does, but it should be initialized before
it is tested.

Signed-off-by: Berthold Stoeger <bstoeger@mail.tuwien.ac.at>
2024-05-12 13:23:21 +02:00
Berthold Stoeger
d061a54e3d planner: fix gas selection
The lambda that created the list of gases took a copy not a
reference of the planned dive. Of course, that never had its
gases updated. Ultimately this would crash, because this sent
an index of "-1" on change.

Fix by
1) Using a reference to the dive, not the copy
2) Catch an invalid "-1" index (by Michael Keller <github@ike.ch>)
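
A small illustration of the capture problem (generic sketch, not the planner code itself):

        // broken: the lambda captures a *copy* of the planned dive, so later gas edits are invisible to it
        auto broken = [dive]() { return gas_list_of(dive); };
        // fixed: capture by reference so the current gases of the planned dive are used
        auto fixed = [&dive]() { return gas_list_of(dive); };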

Fixes #4188

Signed-off-by: Berthold Stoeger <bstoeger@mail.tuwien.ac.at>
2024-05-12 13:23:21 +02:00
Michael Keller
306dad575c CICD: Update the AppImage Build to Use ubuntu 16.04.
Update the linux AppImage build to use ubuntu 16.04, and simplify it to
a single workflow running on a vanilla docker image.

This still uses the upload-artifact@v3 Action that will be EOL in
November 2024, because v4 relies on node 20 which has an unmet glibc
dependency in ubuntu 16.04. But this workflow can be updated to run on
ubuntu 18.04 by a simple search / replace and disabling some 16.04
specific PPAs.

@dirkh, @probonopd: I have moved this here from #4183 to be able to
review and discuss it without the noise of the workflow cleanup.

The workflow now also publishes the AppImage as an artifact on pull
request builds, available under Checks / Details / Summary.

Signed-off-by: Michael Keller <github@ike.ch>
2024-05-12 13:52:12 +12:00
Michael Keller
331d6712c6 CICD: Move MacOS / iOS Build Qt Resources into GitHub.
Move the Qt resources required for the build for MacOS and iOS into
GitHub, into their own repositories. This removes the need to publish
them on an external file server and download them from there for every
build.
It will also make it easier for contributors to update these resources
if needed.

Signed-off-by: Michael Keller <github@ike.ch>
2024-05-12 10:19:24 +12:00
Michael Keller
f4e61aa5dc Import: Make Directory Selectable when Importing .fit Files.
In the 'Download from dive computer' dialogue, make it possible to
select the source directory for the import.

Signed-off-by: Michael Keller <github@ike.ch>
2024-05-11 12:52:05 +12:00
Michael Keller
528532572f Planner: Fix Editing of Plans in Multi-Divecomputer Dives.
Currently editing of planned dives that have been merged with actual
(logged) dives only works if the 'Planned dive' divecomputer is the
first divecomputer, and this divecomputer is selected when clicking
'Edit planned dive'. In other cases the profile of the first
divecomputer is overlaid with the profile of the planned dive, and the
first divecomputer's profile is overwritten when saving the dive plan.
Fix this problem.

Triggered by @SeppoTakalo's comment (https://github.com/subsurface/subsurface/issues/1913#issuecomment-2075562119): Users don't like that planned dives show up as their own entries in the dive list, so being able to merge them with the actual dive after it has been executed is a good feature - but this wasn't working well until now.

Signed-off-by: Michael Keller <github@ike.ch>
2024-05-11 12:51:45 +12:00
Michael Keller
a83349015a CICD: Improve the GitHub Actions for Linux.
Do a few things:
- add a build for Debian trixie (as discussed in #4182);
- add a build for Ubuntu 24.04;
- rename the build definitions to match the build names;
- update the artifact uploads to use a non-deprecated version of the
  action, and name the artifact appropriately;
- remove a stale workflow file.

Signed-off-by: Michael Keller <github@ike.ch>
2024-05-11 12:51:33 +12:00
Michael Keller
8627f6fc4a Desktop: Add Auto-sizing to the Extra Info Table.
Add auto-sizing to the extra info table - resize the columns so that all
rows are shown in full whenever the data is updated.
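
A minimal sketch, assuming the table is a QTableView and the resize is triggered from the data-update path (names are illustrative):

        void ExtraInfoTable::dataUpdated()
        {
                // make every column wide enough to show its contents in full
                resizeColumnsToContents();
        }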

Signed-off-by: Michael Keller <github@ike.ch>
2024-05-08 08:26:56 -07:00
Michael Keller
5bad522390 Update libdivecomputer to latest on 'Subsurface-DS9'.
Signed-off-by: Michael Keller <mikeller@042.ch>
2024-05-08 12:34:31 +12:00
Richard Fuchs
edb771e8e6 uemis-downloader: convert strings to std::string
Convert some C-style strings in uemis-downloader.cpp to std::string.
This has the side effect of fixing builds on Debian Trixie, which
currently fail with the (rather silly) error:

/build/subsurface-beta-202405060411/core/uemis-downloader.cpp: In function 'char* build_ans_path(const char*, int)':
/build/subsurface-beta-202405060411/core/uemis-downloader.cpp:290:32: error: '%s' directive output between 0 and 12 bytes may cause result to exceed 'INT_MAX' [-Werror=format-truncation=]
  290 |         snprintf(buf, len, "%s/%s", path, name);
      |                                ^~
......
  529 |         ans_path = build_filename(intermediate, fl);
      |                                                 ~~
cc1plus: some warnings being treated as errors
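
A minimal sketch of the kind of conversion involved (illustrative, not the exact uemis code):

        // before: snprintf() of "%s/%s" into a fixed-size buffer, which the compiler tries to bound
        // after:  std::string concatenation - no truncation warning and no manual free()
        std::string build_ans_path_sketch(const std::string &path, const std::string &name)
        {
                return path + "/" + name;
        }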

Signed-off-by: Richard Fuchs <dfx@dfx.at>
2024-05-07 22:34:00 +12:00
Michael Keller
17d83acdab Documentation: Add Instructions for Using Qt 5.15.13 on MacOS.
Add instructions for using Qt 5.15.13 on MacOS, which seems to have
better support for Apple silicon.

Provided-by: jme <32236882+notrege@users.noreply.github.com>
Signed-off-by: Michael Keller <github@ike.ch>
2024-05-07 18:43:26 +12:00
Michael Keller
133354d51d CICD: Update Qt Version Used in the MacOS Build to 5.15.13.
Update the version of Qt that is used in the CICD build for MacOS to
5.15.13. This version is showing promise for building binaries that work
on Apple silicon.

Signed-off-by: Michael Keller <github@ike.ch>
2024-05-07 18:43:26 +12:00
Michael Keller
46cf2fc086 Import: Fix Application Hang when Cancelling the Download Dialogue.
Fix a bug causing the 'Download from dive computer' dialogue to hang
when the user attempts to cancel the dialogue after successfully
downloading one or more dives.

Fixes #4176.

Signed-off-by: Michael Keller <github@ike.ch>
2024-05-05 19:15:26 +02:00
Michael Keller
5ac1922d84 Cleanup: Improve (Android) Build Scripts.
Add a script for building the Android APK in the docker container.
Also make some improvements to the Windows build scripts, and update the
documentation for both builds.

Signed-off-by: Michael Keller <mikeller@042.ch>
2024-05-06 00:45:51 +12:00
Michael Keller
3153a139b3 Update libdivecomputer to latest on 'Subsurface-DS9'.
Signed-off-by: Michael Keller <github@ike.ch>
2024-05-04 01:41:51 +12:00
Michael Keller
e65c7cedc8 Refactoring: Improve Naming of FRACTION and SIGNED_FRAC defines.
Make it more obvious that the FRACTION and SIGNED_FRAC defines return a
tuple / triplet of values.
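
For context, these defines expand to a comma-separated pair (or triple) of values that feed several printf-style conversions at once; a hedged sketch of the idea (the exact definitions may differ):

        // expands to two arguments: the whole part and the remainder, e.g. for mm:ss output
        #define FRACTION_TUPLE(n, x) ((unsigned)(n) / (x)), ((unsigned)(n) % (x))
        printf("%u:%02u", FRACTION_TUPLE(seconds, 60));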

Fixes https://github.com/subsurface/subsurface/pull/4171#discussion_r1585941133

Complained-about-by: @bstoeger
Signed-off-by: Michael Keller <github@ike.ch>
2024-05-02 20:36:26 +02:00
Berthold Stoeger
32a08735c3 profile: fix string formatting in profile.cpp
ae299d5e663cd672d1114c3fe90cf026b9ab463e introduced a format-
string bug by splitting a format-string in two and splitting
the arguments at the wrong place.

The compiler doesn't warn in this case, because the format-
string is passed through translate(...).

This should have crashed, but for some reason didn't, at least
on Linux.

Fix the arguments.
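
An illustration of why the compiler stays silent (made-up format string, not the actual profile.cpp code):

        // printf-style argument checking only works on literal format strings;
        // once the string goes through translate(), a mismatch is not diagnosed:
        snprintf(buf, sizeof(buf), translate("gettextFromC", "%.1f%% of %s"), percentage);
        // one argument short at runtime - undefined behaviour, but no compile-time warning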

Signed-off-by: Berthold Stoeger <bstoeger@mail.tuwien.ac.at>
2024-05-01 20:39:59 +12:00
Michael Keller
af6caa6fa2 Import: Improve Error Logging.
Add logging of the libdivecomputer return code for errors. Also, switch
logging of errors in the background thread to callback based logging to
make it visible.

Signed-off-by: Michael Keller <github@ike.ch>
2024-04-30 12:26:18 +12:00
Michael Keller
f3c7dcf9c9 Desktop: Fix 'planned' and 'logged' Filters.
Fix the filters for planned (i.e. has at least one dive plan attached)
and logged (i.e. has at least one dive computer log attached) dives.
Also refactor the respective functions for improved readability.

Signed-off-by: Michael Keller <github@ike.ch>
2024-04-30 12:25:31 +12:00
Michael Keller
bb00a9728f Cleanup: More Fixes for Problems Reported by Coverity.
Fix the problem another way as Coverity was still not happy with it.

Signed-off-by: Michael Keller <mikeller@042.ch>
2024-04-28 23:44:16 +12:00
Michael Keller
a2cd621819 Update libdivecomputer to latest on 'Subsurface-DS9'.
Signed-off-by: Michael Keller <github@ike.ch>
2024-04-28 18:52:19 +12:00
Michael Keller
d92777a3ff Packaging: Cleanup Windows Build Scripts.
Do some housekeeping and cleanup on the build scripts for Windows:
- remove Windows 32bit builds as support for this has been removed from
  the mxe container;
- fix some warnings in the smtk2ssrf installer configuration;
- sanitise the output colour of the smtk2ssrf build script;
- add a docker based build script for the Windows installers;
- remove outdated and deprecated documentation and scripts.

Signed-off-by: Michael Keller <mikeller@042.ch>
2024-04-26 07:42:59 -07:00
jme
e09a134a3f Delete desktop-widgets/preferences/preferences_dc.ui
Remove preferences "Dive Download" window.    Delete all dive computers no longer needed now that they can be deleted on the import window.

Signed-off-by: jme <32236882+notrege@users.noreply.github.com>
2024-04-26 08:14:16 +12:00
jme
aecb4f5005 Delete desktop-widgets/preferences/preferences_dc.h
Remove preferences "Dive Download" window.    Delete all dive computers no longer needed now that they can be deleted on the import window.

Signed-off-by: jme <32236882+notrege@users.noreply.github.com>
2024-04-26 08:14:16 +12:00
jme
358b9186bf Delete desktop-widgets/preferences/preferences_dc.cpp
Remove preferences "Dive Download" window.    Delete all dive computers no longer needed now that they can be deleted on the import window.

Signed-off-by: jme <32236882+notrege@users.noreply.github.com>
2024-04-26 08:14:16 +12:00
jme
12ae3d4e96 Update preferencesdialog.cpp
Remove preferences "Dive Download" window.    Delete all dive computers no longer needed now that they can be dleted on the import window.

Signed-off-by: jme <32236882+notrege@users.noreply.github.com>
2024-04-26 08:14:16 +12:00
jme
34926f1325 Update CMakeLists.txt
Remove preferences "Dive Download" window.    Delete all dive computers no longer needed now that they can be dleted on the import window.

Signed-off-by: jme <32236882+notrege@users.noreply.github.com>
2024-04-26 08:14:16 +12:00
Michael Keller
da8509d29b Cleanup: Update README.md
Update the status badges in README.md, and remove the outdated
reference to the current release.

Signed-off-by: Michael Keller <mikeller@042.ch>
2024-04-26 01:14:23 +12:00
transifex-integration[bot]
176ee1ac9c
Translate subsurface_source.ts in pt_PT
100% translated source file: 'subsurface_source.ts'
on 'pt_PT'.
2024-03-30 16:39:17 +00:00
transifex-integration[bot]
0ea287cc2c
Translate subsurface_source.ts in de_DE
100% translated source file: 'subsurface_source.ts'
on 'de_DE'.
2024-03-09 16:14:26 +00:00
transifex-integration[bot]
8c861f749f
Translate subsurface_source.ts in de_DE
100% translated source file: 'subsurface_source.ts'
on 'de_DE'.
2024-03-09 16:07:59 +00:00
transifex-integration[bot]
f1bd5dc051
Translate subsurface_source.ts in de_DE
100% translated source file: 'subsurface_source.ts'
on 'de_DE'.
2024-03-09 16:07:04 +00:00
transifex-integration[bot]
d64986415c
Translate subsurface_source.ts in de_DE
100% translated source file: 'subsurface_source.ts'
on 'de_DE'.
2024-03-09 16:06:38 +00:00
162 changed files with 3578 additions and 3607 deletions

View File

@ -0,0 +1,56 @@
name: Manage the Subsurface CICD versioning
inputs:
no-increment:
description: 'Only get the current version, do not increment it even for push events (Caution: not actually a boolean)'
default: false
nightly-builds-secret:
description: The secret to access the nightly builds repository
default: ''
outputs:
version:
description: The long form version number
value: ${{ steps.version_number.outputs.version }}
buildnr:
description: The build number
value: ${{ steps.version_number.outputs.buildnr }}
runs:
using: composite
steps:
- name: atomically create or retrieve the build number and assemble release notes for a push (i.e. merging of a pull request)
if: github.event_name == 'push' && inputs.no-increment == 'false'
env:
NIGHTLY_BUILDS_SECRET: ${{ inputs.nightly-builds-secret }}
shell: bash
run: |
if [ -z "$NIGHTLY_BUILDS_SECRET" ]; then
echo "Need to supply the secret for the nightly-builds repository to increment the version number, aborting."
exit 1
fi
scripts/get-atomic-buildnr.sh $GITHUB_SHA $NIGHTLY_BUILDS_SECRET "CICD-release"
- name: retrieve the current version number in all other cases
if: github.event_name != 'push' || inputs.no-increment != 'false'
env:
PULL_REQUEST_BRANCH: ${{ github.event.pull_request.head.ref }}
shell: bash
run: |
echo "pull-request-$PULL_REQUEST_BRANCH" > latest-subsurface-buildnumber-extension
- name: store version number for the build
id: version_number
env:
PULL_REQUEST_HEAD_SHA: ${{ github.event.pull_request.head.sha }}
shell: bash
run: |
git config --global --add safe.directory $GITHUB_WORKSPACE
# For a pull request we need the information from the pull request branch
# and not from the merge branch on the pull request
git checkout $PULL_REQUEST_HEAD_SHA
version=$(scripts/get-version.sh)
echo "version=$version" >> $GITHUB_OUTPUT
buildnr=$(scripts/get-version.sh 1)
echo "buildnr=$buildnr" >> $GITHUB_OUTPUT
git checkout $GITHUB_SHA

View File

@@ -15,17 +15,17 @@ jobs:
      VERSION: ${{ '5.15.2' }} # the version numbers here is based on the Qt version, the third digit is the rev of the docker image
    steps:
-    - uses: actions/checkout@v1
+    - uses: actions/checkout@v4
    - name: Build the name for the docker image
      id: build_name
      run: |
-        v=${{ env.VERSION }}
-        b=${{ github.ref }} # -BRANCH suffix, unless the branch is master
+        v=$VERSION
+        b=$GITHUB_REF # -BRANCH suffix, unless the branch is master
        b=${b/refs\/heads\//}
        b=${b,,} # the name needs to be all lower case
        if [ $b = "master" ] ; then b="" ; else b="-$b" ; fi
-        echo "NAME=subsurface/android-build${b}:${v}" >> $GITHUB_OUTPUT
+        echo "NAME=$GITHUB_REPOSITORY_OWNER/android-build${b}:${v}" >> $GITHUB_OUTPUT
    - name: Build and Publish Linux Docker image to Dockerhub
      uses: elgohr/Publish-Docker-Github-Action@v5

View File

@ -1,4 +1,5 @@
name: Android name: Android
on: on:
push: push:
paths-ignore: paths-ignore:
@ -11,12 +12,10 @@ on:
branches: branches:
- master - master
env:
BUILD_ROOT: ${{ github.workspace }}/..
KEYSTORE_FILE: ${{ github.workspace }}/../subsurface.keystore
jobs: jobs:
buildAndroid: build:
env:
KEYSTORE_FILE: ${{ github.workspace }}/../subsurface.keystore
runs-on: ubuntu-latest runs-on: ubuntu-latest
container: container:
image: docker://subsurface/android-build:5.15.2 image: docker://subsurface/android-build:5.15.2
@ -24,32 +23,33 @@ jobs:
steps: steps:
- name: checkout sources - name: checkout sources
uses: actions/checkout@v4 uses: actions/checkout@v4
with:
fetch-depth: 0
submodules: recursive
- name: atomically create or retrieve the build number and assemble release notes - name: set the version information
id: version_number id: version_number
if: github.event_name == 'push' uses: ./.github/actions/manage-version
run: | with:
bash scripts/get-atomic-buildnr.sh ${{ github.sha }} ${{ secrets.NIGHTLY_BUILDS }} "CICD-release" nightly-builds-secret: ${{ secrets.NIGHTLY_BUILDS }}
version=$(cat release-version)
echo "version=$version" >> $GITHUB_OUTPUT
- name: store dummy version and build number for non-push build runs
if: github.event_name != 'push'
run: |
echo "100" > latest-subsurface-buildnumber
echo "CICD-pull-request" > latest-subsurface-buildnumber-extension
- name: set up the keystore - name: set up the keystore
if: github.event_name == 'push' if: github.event_name == 'push'
env:
ANDROID_KEYSTORE_BASE64: ${{ secrets.ANDROID_KEYSTORE_BASE64 }}
run: | run: |
echo "${{ secrets.ANDROID_KEYSTORE_BASE64 }}" | base64 -d > $KEYSTORE_FILE echo "$ANDROID_KEYSTORE_BASE64" | base64 -d > $KEYSTORE_FILE
- name: run build - name: run build
id: build id: build
env:
KEYSTORE_PASSWORD: pass:${{ secrets.ANDROID_KEYSTORE_PASSWORD }}
KEYSTORE_ALIAS: ${{ secrets.ANDROID_KEYSTORE_ALIAS }}
BUILDNR: ${{ steps.version_number.outputs.buildnr }}
run: | run: |
# this is rather awkward, but it allows us to use the preinstalled # this is rather awkward, but it allows us to use the preinstalled
# Android and Qt versions with relative paths # Android and Qt versions with relative paths
cd $BUILD_ROOT cd ..
ln -s /android/5.15.* . ln -s /android/5.15.* .
ln -s /android/build-tools . ln -s /android/build-tools .
ln -s /android/cmdline-tools . ln -s /android/cmdline-tools .
@ -62,17 +62,25 @@ jobs:
git config --global --add safe.directory $GITHUB_WORKSPACE git config --global --add safe.directory $GITHUB_WORKSPACE
git config --global --add safe.directory $GITHUB_WORKSPACE/libdivecomputer git config --global --add safe.directory $GITHUB_WORKSPACE/libdivecomputer
# get the build number via curl so this works both for a pull request as well as a push # get the build number via curl so this works both for a pull request as well as a push
BUILDNR=$(curl -q https://raw.githubusercontent.com/subsurface/nightly-builds/main/latest-subsurface-buildnumber)
export OUTPUT_DIR="$GITHUB_WORKSPACE" export OUTPUT_DIR="$GITHUB_WORKSPACE"
export KEYSTORE_FILE="$KEYSTORE_FILE" bash -x ./subsurface/packaging/android/qmake-build.sh -buildnr $BUILDNR
export KEYSTORE_PASSWORD="pass:${{ secrets.ANDROID_KEYSTORE_PASSWORD }}"
export KEYSTORE_ALIAS="${{ secrets.ANDROID_KEYSTORE_ALIAS }}" - name: delete the keystore
bash -x ./subsurface/packaging/android/qmake-build.sh -buildnr ${BUILDNR} if: github.event_name == 'push'
run: |
rm $KEYSTORE_FILE
- name: publish pull request artifacts
if: github.event_name == 'pull_request'
uses: actions/upload-artifact@v4
with:
name: Subsurface-Android-${{ steps.version_number.outputs.version }}
path: Subsurface-mobile-*.apk
# only publish a 'release' on push events (those include merging a PR) # only publish a 'release' on push events (those include merging a PR)
- name: upload binaries - name: upload binaries
if: github.event_name == 'push' if: github.event_name == 'push'
uses: softprops/action-gh-release@v1 uses: softprops/action-gh-release@v2
with: with:
tag_name: v${{ steps.version_number.outputs.version }} tag_name: v${{ steps.version_number.outputs.version }}
repository: ${{ github.repository_owner }}/nightly-builds repository: ${{ github.repository_owner }}/nightly-builds
@ -81,8 +89,3 @@ jobs:
fail_on_unmatched_files: true fail_on_unmatched_files: true
files: | files: |
Subsurface-mobile-${{ steps.version_number.outputs.version }}.apk Subsurface-mobile-${{ steps.version_number.outputs.version }}.apk
- name: delete the keystore
if: github.event_name == 'push'
run: |
rm $KEYSTORE_FILE

.github/workflows/artifact-links.yml (new file, 24 lines)
View File

@ -0,0 +1,24 @@
name: Add artifact links to pull request
on:
workflow_run:
workflows: ["Ubuntu 16.04 / Qt 5.15-- for AppImage", "Mac", "Windows", "Android", "iOS"]
types: [completed]
jobs:
artifacts-url-comments:
name: Add artifact links to PR and issues
runs-on: ubuntu-22.04
steps:
- name: Add artifact links to PR and issues
if: github.event.workflow_run.event == 'pull_request'
uses: tonyhallett/artifacts-url-comments@v1.1.0
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
with:
prefix: "**Artifacts:**"
suffix: "_**WARNING:** Use at your own risk._"
format: name
addTo: pull
errorNoArtifacts: false

View File

@@ -25,20 +25,19 @@ jobs:
      matrix:
        # Override automatic language detection by changing the below list
        # Supported options are ['csharp', 'cpp', 'go', 'java', 'javascript', 'python']
-        language: ['cpp', 'javascript']
+        language: ['c-cpp', 'javascript-typescript']
    steps:
    - name: Checkout repository
      uses: actions/checkout@v4
      with:
-        # We must fetch at least the immediate parents so that if this is
-        # a pull request then we can checkout the head.
-        fetch-depth: 2
+        fetch-depth: 0
+        submodules: recursive
    - name: get container ready for build
      run: |
        sudo apt-get update
-        sudo apt-get install -y -q --force-yes \
+        sudo apt-get install -y -q \
          autoconf automake cmake g++ git libcrypto++-dev libcurl4-gnutls-dev \
          libgit2-dev libqt5qml5 libqt5quick5 libqt5svg5-dev \
          libqt5webkit5-dev libsqlite3-dev libssh2-1-dev libssl-dev libssl-dev \
@@ -51,7 +50,7 @@ jobs:
    # Initializes the CodeQL tools for scanning.
    - name: Initialize CodeQL
-      uses: github/codeql-action/init@v2
+      uses: github/codeql-action/init@v3
      with:
        languages: ${{ matrix.language }}
        # If you wish to specify custom queries, you can do so here or in a config file.
@@ -60,13 +59,11 @@ jobs:
        # queries: ./path/to/local/query, your-org/your-repo/queries@main
    - name: Build
-      env:
-        SUBSURFACE_REPO_PATH: ${{ github.workspace }}
      run: |
        cd ..
-        git config --global --add safe.directory ${SUBSURFACE_REPO_PATH}
-        git config --global --add safe.directory ${SUBSURFACE_REPO_PATH}/libdivecomputer
+        git config --global --add safe.directory $GITHUB_WORKSPACE
+        git config --global --add safe.directory $GITHUB_WORKSPACE/libdivecomputer
        bash -e -x subsurface/scripts/build.sh -desktop -build-with-webkit
    - name: Perform CodeQL Analysis
-      uses: github/codeql-action/analyze@v2
+      uses: github/codeql-action/analyze@v3

View File

@ -1,4 +1,5 @@
name: Coverity Scan Linux Qt 5.9 name: Coverity Scan Linux Qt 5.9
on: on:
schedule: schedule:
- cron: '0 18 * * *' # Daily at 18:00 UTC - cron: '0 18 * * *' # Daily at 18:00 UTC
@ -10,14 +11,11 @@ jobs:
image: ubuntu:22.04 image: ubuntu:22.04
steps: steps:
- name: checkout sources
uses: actions/checkout@v1
- name: add build dependencies - name: add build dependencies
run: | run: |
apt-get update apt-get update
apt-get upgrade -y apt-get dist-upgrade -y
DEBIAN_FRONTEND=noninteractive apt-get install -y -q --force-yes \ DEBIAN_FRONTEND=noninteractive apt-get install -y -q \
wget curl \ wget curl \
autoconf automake cmake g++ git libcrypto++-dev libcurl4-gnutls-dev \ autoconf automake cmake g++ git libcrypto++-dev libcurl4-gnutls-dev \
libgit2-dev libqt5qml5 libqt5quick5 libqt5svg5-dev \ libgit2-dev libqt5qml5 libqt5quick5 libqt5svg5-dev \
@ -29,12 +27,22 @@ jobs:
qtpositioning5-dev qtscript5-dev qttools5-dev qttools5-dev-tools \ qtpositioning5-dev qtscript5-dev qttools5-dev qttools5-dev-tools \
qtquickcontrols2-5-dev libbluetooth-dev libmtp-dev qtquickcontrols2-5-dev libbluetooth-dev libmtp-dev
- name: checkout sources
uses: actions/checkout@v4
with:
fetch-depth: 0
submodules: recursive
- name: configure environment - name: configure environment
env:
SUBSURFACE_REPO_PATH: ${{ github.workspace }}
run: | run: |
git config --global --add safe.directory ${SUBSURFACE_REPO_PATH} git config --global --add safe.directory $GITHUB_WORKSPACE
git config --global --add safe.directory ${SUBSURFACE_REPO_PATH}/libdivecomputer git config --global --add safe.directory $GITHUB_WORKSPACE/libdivecomputer
- name: get the version information
id: version_number
uses: ./.github/actions/manage-version
with:
no-increment: true
- name: run coverity scan - name: run coverity scan
uses: vapier/coverity-scan-action@v1 uses: vapier/coverity-scan-action@v1
@ -44,5 +52,5 @@ jobs:
email: glance@acc.umu.se email: glance@acc.umu.se
command: subsurface/scripts/build.sh -desktop -build-with-webkit command: subsurface/scripts/build.sh -desktop -build-with-webkit
working-directory: ${{ github.workspace }}/.. working-directory: ${{ github.workspace }}/..
version: $(/scripts/get-version) version: ${{ steps.version_number.outputs.version }}
description: Automatic scan on github actions description: Automatic scan on github actions

View File

@@ -26,6 +26,9 @@ jobs:
    - name: Checkout Sources
      uses: actions/checkout@v4
+      with:
+        fetch-depth: 0
+        submodules: recursive
    - name: Process the Documentation
      id: process_documentation

View File

@ -11,30 +11,32 @@ jobs:
setup-build: setup-build:
name: Submit build to Fedora COPR name: Submit build to Fedora COPR
# this seems backwards, but we want to run under Fedora, but Github doesn' support that # this seems backwards, but we want to run under Fedora, but Github doesn' support that
container: fedora:latest
runs-on: ubuntu-latest runs-on: ubuntu-latest
container:
image: fedora:latest
steps: steps:
- name: Check out sources
uses: actions/checkout@v1
- name: Setup build dependencies in the Fedora container - name: Setup build dependencies in the Fedora container
run: | run: |
dnf -y install @development-tools @rpm-development-tools dnf -y install @development-tools @rpm-development-tools
dnf -y install copr-cli make dnf -y install copr-cli make
- name: Check out sources
uses: actions/checkout@v4
with:
fetch-depth: 0
submodules: recursive
- name: setup git - name: setup git
run: | run: |
git config --global --add safe.directory /__w/subsurface/subsurface git config --global --add safe.directory /__w/subsurface/subsurface
git config --global --add safe.directory /__w/subsurface/subsurface/libdivecomputer git config --global --add safe.directory /__w/subsurface/subsurface/libdivecomputer
- name: atomically create or retrieve the build number - name: set the version information
id: version_number id: version_number
if: github.event_name == 'push' uses: ./.github/actions/manage-version
run: | with:
bash scripts/get-atomic-buildnr.sh ${{ github.sha }} ${{ secrets.NIGHTLY_BUILDS }} "CICD-release" nightly-builds-secret: ${{ secrets.NIGHTLY_BUILDS }}
version=$(cat release-version)
echo "version=$version" >> $GITHUB_OUTPUT
- name: Setup API token for copr-cli - name: Setup API token for copr-cli
env: env:
@ -53,5 +55,5 @@ jobs:
- name: run the copr build script - name: run the copr build script
run: | run: |
cd .. cd ..
bash -x subsurface/packaging/copr/make-package.sh ${{ github.ref_name }} bash -x subsurface/packaging/copr/make-package.sh $GITHUB_REF_NAME

View File

@ -1,4 +1,5 @@
name: iOS name: iOS
on: on:
push: push:
paths-ignore: paths-ignore:
@ -12,37 +13,49 @@ on:
- master - master
jobs: jobs:
iOSBuild: build:
runs-on: macOS-11 runs-on: macOS-11
steps: steps:
- name: switch to Xcode 11 - name: switch to Xcode 11
run: sudo xcode-select -s "/Applications/Xcode_11.7.app" run: sudo xcode-select -s "/Applications/Xcode_11.7.app"
- name: checkout sources - name: checkout sources
uses: actions/checkout@v1 uses: actions/checkout@v4
with:
fetch-depth: 0
submodules: recursive
- name: setup Homebrew - name: setup Homebrew
run: brew install autoconf automake libtool pkg-config run: brew install autoconf automake libtool pkg-config
- name: set our Qt build - name: checkout Qt resources
run: | uses: actions/checkout@v4
env with:
curl -L --output Qt-5.14.1-ios.tar.xz https://f002.backblazeb2.com/file/Subsurface-Travis/Qt-5.14.1-ios.tar.xz repository: subsurface/qt-ios
mkdir -p $HOME/Qt ref: main
xzcat Qt-5.14.1-ios.tar.xz | tar -x -C $HOME/Qt -f - path: qt-ios
- name: store dummy version and build number for test build - name: set the version information
run: | id: version_number
echo "100" > latest-subsurface-buildnumber uses: ./.github/actions/manage-version
echo "CICD-test-build" > latest-subsurface-buildnumber-extension with:
nightly-builds-secret: ${{ secrets.NIGHTLY_BUILDS }}
- name: build Subsurface-mobile for iOS - name: build Subsurface-mobile for iOS
env: env:
SUBSURFACE_REPO_PATH: ${{ github.workspace }} VERSION: ${{ steps.version_number.outputs.version }}
run: | run: |
cd ${SUBSURFACE_REPO_PATH}/.. cd ..
git config --global --add safe.directory ${SUBSURFACE_REPO_PATH} git config --global --add safe.directory $GITHUB_WORKSPACE
git config --global --add safe.directory ${SUBSURFACE_REPO_PATH}/libdivecomputer git config --global --add safe.directory $GITHUB_WORKSPACE/libdivecomputer
ln -s $HOME/Qt Qt export IOS_QT=$GITHUB_WORKSPACE/qt-ios
echo "build for simulator" echo "build for simulator"
bash -x $GITHUB_WORKSPACE/packaging/ios/build.sh -simulator bash -x $GITHUB_WORKSPACE/packaging/ios/build.sh -simulator
# We need this in order to be able to access the file and publish it
mv build-Subsurface-mobile-Qt_5_14_1_for_iOS-Release/Release-iphonesimulator/Subsurface-mobile.app $GITHUB_WORKSPACE/Subsurface-mobile-$VERSION.app
- name: publish artifacts
uses: actions/upload-artifact@v4
with:
name: Subsurface-iOS-${{ steps.version_number.outputs.version }}
path: Subsurface-mobile-*.app

View File

@ -1,55 +0,0 @@
name: Ubuntu 18.04 / Qt 5.9--
on:
push:
branches:
- master
pull_request:
branches:
- master
jobs:
buildOnBionic:
runs-on: ubuntu-18.04
container:
image: ubuntu:18.04 # yes, this looks redundant, but something is messed up with their Ubuntu image that causes our builds to fail
steps:
- name: checkout sources
uses: actions/checkout@v1
- name: add build dependencies
run: |
apt update
apt install -y \
autoconf automake cmake g++ git libcrypto++-dev libcurl4-gnutls-dev \
libgit2-dev libqt5qml5 libqt5quick5 libqt5svg5-dev \
libqt5webkit5-dev libsqlite3-dev libssh2-1-dev libssl-dev libssl-dev \
libtool libusb-1.0-0-dev libxml2-dev libxslt1-dev libzip-dev make \
pkg-config qml-module-qtlocation qml-module-qtpositioning \
qml-module-qtquick2 qt5-default qt5-qmake qtchooser qtconnectivity5-dev \
qtdeclarative5-dev qtdeclarative5-private-dev qtlocation5-dev \
qtpositioning5-dev qtscript5-dev qttools5-dev qttools5-dev-tools \
qtquickcontrols2-5-dev xvfb libbluetooth-dev libmtp-dev
- name: store dummy version and build number for pull request
if: github.event_name == 'pull_request'
run: |
echo "6.0.100" > latest-subsurface-buildnumber
- name: build Subsurface
env:
SUBSURFACE_REPO_PATH: ${{ github.workspace }}
run: |
cd ..
git config --global --add safe.directory ${SUBSURFACE_REPO_PATH}
git config --global --add safe.directory ${SUBSURFACE_REPO_PATH}/libdivecomputer
bash -x subsurface/scripts/build.sh -desktop -build-with-webkit
- name: test desktop build
run: |
# and now run the tests - with Qt 5.9 we can only run the desktop flavor
echo "------------------------------------"
echo "run tests"
cd build/tests
# xvfb-run --auto-servernum ./TestGitStorage -v2
xvfb-run --auto-servernum make check

View File

@ -1,36 +1,27 @@
name: Ubuntu 22.04 / Qt 5.15-- name: Generic workflow for Debian and derivatives
on: on:
push: workflow_call:
paths-ignore: inputs:
- scripts/docker/** container-image:
branches: required: true
- master type: string
pull_request:
paths-ignore:
- scripts/docker/**
branches:
- master
jobs: jobs:
buildUbuntuJammy: build:
runs-on: ubuntu-latest runs-on: ubuntu-latest
container: container:
image: ubuntu:22.04 image: ${{ inputs.container-image }}
steps: steps:
- name: checkout sources
uses: actions/checkout@v1
- name: get container ready for build - name: get container ready for build
env:
SUBSURFACE_REPO_PATH: ${{ github.workspace }}
run: | run: |
echo "--------------------------------------------------------------" echo "--------------------------------------------------------------"
echo "update distro and install dependencies" echo "update distro and install dependencies"
apt-get update apt-get update
apt-get upgrade -y apt-get dist-upgrade -y
DEBIAN_FRONTEND=noninteractive apt-get install -y -q --force-yes \ DEBIAN_FRONTEND=noninteractive apt-get install -y -q \
autoconf automake cmake g++ git libcrypto++-dev libcurl4-gnutls-dev \ autoconf automake cmake g++ git libcrypto++-dev libcurl4-gnutls-dev \
libgit2-dev libqt5qml5 libqt5quick5 libqt5svg5-dev \ libgit2-dev libqt5qml5 libqt5quick5 libqt5svg5-dev \
libqt5webkit5-dev libsqlite3-dev libssh2-1-dev libssl-dev libssl-dev \ libqt5webkit5-dev libsqlite3-dev libssh2-1-dev libssl-dev libssl-dev \
@ -44,13 +35,20 @@ jobs:
git config --global user.email "ci@subsurface-divelog.org" git config --global user.email "ci@subsurface-divelog.org"
git config --global user.name "Subsurface CI" git config --global user.name "Subsurface CI"
git config --global --add safe.directory ${SUBSURFACE_REPO_PATH} git config --global --add safe.directory $GITHUB_WORKSPACE
git config --global --add safe.directory ${SUBSURFACE_REPO_PATH}/libdivecomputer git config --global --add safe.directory $GITHUB_WORKSPACE/libdivecomputer
# needs git from the previous step
- name: checkout sources
uses: actions/checkout@v4
with:
fetch-depth: 0
submodules: recursive
- name: store dummy version and build number for test build - name: set the version information
run: | id: version_number
echo "100" > latest-subsurface-buildnumber uses: ./.github/actions/manage-version
echo "CICD-test-build" > latest-subsurface-buildnumber-extension with:
no-increment: true
- name: build subsurface-mobile - name: build subsurface-mobile
run: | run: |

View File

@ -0,0 +1,19 @@
name: Debian trixie / Qt 5.15--
on:
push:
paths-ignore:
- scripts/docker/**
branches:
- master
pull_request:
paths-ignore:
- scripts/docker/**
branches:
- master
jobs:
do-build-test:
uses: ./.github/workflows/linux-debian-generic.yml
with:
container-image: debian:trixie

View File

@ -1,39 +0,0 @@
name: Linux Qt 5.12 Docker Image CI
#on:
# push:
# paths:
# - scripts/docker/trusty-qt512/Dockerfile
# - .github/workflows/linux-docker*
jobs:
trusty-qt512:
runs-on: ubuntu-latest
env:
VERSION: ${{ '1.0' }} # 'official' images should have a dot-zero version
steps:
- uses: actions/checkout@v1
- name: Get our pre-reqs
run: |
cd scripts/docker/trusty-qt512
bash getpackages.sh
- name: set env
run: |
v=${{ env.VERSION }}
b=${{ github.ref }} # -BRANCH suffix, unless the branch is master
b=${b/refs\/heads\//}
b=${b,,} # the name needs to be all lower case
if [ $b = "master" ] ; then b="" ; else b="-$b" ; fi
echo "::set-env name=NAME::subsurface/trusty-qt512${b}:${v}"
- name: Build and Publish Linux Docker image to Dockerhub
uses: elgohr/Publish-Docker-Github-Action@master
with:
name: ${{ env.NAME }}
username: ${{ secrets.DOCKER_USERNAME }}
password: ${{ secrets.DOCKER_PASSWORD }}
dockerfile: 'Dockerfile'
workdir: './scripts/docker/trusty-qt512/'

View File

@ -1,4 +1,5 @@
name: Fedora 35 / Qt 6-- name: Fedora 35 / Qt 6--
on: on:
push: push:
paths-ignore: paths-ignore:
@ -12,15 +13,12 @@ on:
- master - master
jobs: jobs:
buildFedoraQt6: build:
runs-on: ubuntu-latest runs-on: ubuntu-latest
container: container:
image: fedora:35 image: fedora:35
steps: steps:
- name: checkout sources
uses: actions/checkout@v1
- name: get container ready for build - name: get container ready for build
run: | run: |
echo "--------------------------------------------------------------" echo "--------------------------------------------------------------"
@ -37,22 +35,27 @@ jobs:
bluez-libs-devel libgit2-devel libzip-devel libmtp-devel \ bluez-libs-devel libgit2-devel libzip-devel libmtp-devel \
xorg-x11-server-Xvfb xorg-x11-server-Xvfb
- name: store dummy version and build number for test build - name: checkout sources
run: | uses: actions/checkout@v4
echo "100" > latest-subsurface-buildnumber with:
echo "CICD-test-build" > latest-subsurface-buildnumber-extension fetch-depth: 0
submodules: recursive
- name: set the version information
id: version_number
uses: ./.github/actions/manage-version
with:
no-increment: true
- name: build Subsurface - name: build Subsurface
env:
SUBSURFACE_REPO_PATH: ${{ github.workspace }}
run: | run: |
echo "--------------------------------------------------------------" echo "--------------------------------------------------------------"
echo "building desktop" echo "building desktop"
# now build for the desktop version (without WebKit) # now build for the desktop version (without WebKit)
cd .. cd ..
git config --global --add safe.directory ${SUBSURFACE_REPO_PATH} git config --global --add safe.directory $GITHUB_WORKSPACE
git config --global --add safe.directory ${SUBSURFACE_REPO_PATH}/libdivecomputer git config --global --add safe.directory $GITHUB_WORKSPACE/libdivecomputer
git config --global --get-all safe.directory git config --global --get-all safe.directory
bash -e -x subsurface/scripts/build.sh -desktop -build-with-qt6 bash -e -x subsurface/scripts/build.sh -desktop -build-with-qt6

View File

@ -1,85 +0,0 @@
name: Ubuntu 20.04 / Qt 5.12--
on:
push:
paths-ignore:
- scripts/docker/**
branches:
- master
pull_request:
paths-ignore:
- scripts/docker/**
branches:
- master
jobs:
buildUbuntuFocal:
runs-on: ubuntu-latest
container:
image: ubuntu:20.04
steps:
- name: checkout sources
uses: actions/checkout@v1
- name: get container ready for build
run: |
echo "--------------------------------------------------------------"
echo "update distro and install dependencies"
apt-get update
apt-get upgrade -y
DEBIAN_FRONTEND=noninteractive apt-get install -y -q --force-yes \
autoconf automake cmake g++ git libcrypto++-dev libcurl4-gnutls-dev \
libgit2-dev libqt5qml5 libqt5quick5 libqt5svg5-dev \
libqt5webkit5-dev libsqlite3-dev libssh2-1-dev libssl-dev libssl-dev \
libtool libusb-1.0-0-dev libxml2-dev libxslt1-dev libzip-dev make \
pkg-config qml-module-qtlocation qml-module-qtpositioning \
qml-module-qtquick2 qt5-qmake qtchooser qtconnectivity5-dev \
qtdeclarative5-dev qtdeclarative5-private-dev qtlocation5-dev \
qtpositioning5-dev qtscript5-dev qttools5-dev qttools5-dev-tools \
qtquickcontrols2-5-dev xvfb libbluetooth-dev libmtp-dev
- name: store dummy version and build number for test build
run: |
echo "100" > latest-subsurface-buildnumber
echo "CICD-test-build" > latest-subsurface-buildnumber-extension
- name: build Subsurface-mobile
env:
SUBSURFACE_REPO_PATH: ${{ github.workspace }}
run: |
echo "--------------------------------------------------------------"
echo "building mobile"
git config --global user.email "ci@subsurface-divelog.org"
git config --global user.name "Subsurface CI"
git config --global --add safe.directory ${SUBSURFACE_REPO_PATH}
git config --global --add safe.directory ${SUBSURFACE_REPO_PATH}/libdivecomputer
cd ..
bash -e -x subsurface/scripts/build.sh -mobile
- name: test mobile build
run: |
echo "--------------------------------------------------------------"
echo "running tests for mobile"
cd build-mobile/tests
# xvfb-run --auto-servernum ./TestGitStorage -v2
xvfb-run --auto-servernum make check
- name: build Subsurface
run: |
echo "--------------------------------------------------------------"
echo "building desktop"
# now build for the desktop version (including WebKit)
cd ..
bash -e -x subsurface/scripts/build.sh -desktop -build-with-webkit
- name: test desktop build
run: |
echo "--------------------------------------------------------------"
echo "running tests for desktop"
cd build/tests
# xvfb-run --auto-servernum ./TestGitStorage -v2
xvfb-run --auto-servernum make check

View File

@@ -19,16 +19,16 @@ jobs:
    timeout-minutes: 60
    steps:
    - name: Check out code
-      uses: actions/checkout@v3
+      uses: actions/checkout@v4
      with:
-        # Needed for version determination to work
        fetch-depth: 0
+        submodules: recursive
-    - name: atomically create or retrieve the build number
+    - name: set the version information
      id: version_number
-      if: github.event_name == 'push'
-      run: |
-        bash scripts/get-atomic-buildnr.sh ${{ github.sha }} ${{ secrets.NIGHTLY_BUILDS }} "CICD-release"
+      uses: ./.github/actions/manage-version
+      with:
+        nightly-builds-secret: ${{ secrets.NIGHTLY_BUILDS }}
    - name: store dummy version and build number for pull request
      if: github.event_name == 'pull_request'
@@ -48,11 +48,11 @@ jobs:
        /snap/bin/lxc profile device add default ccache disk source=${HOME}/.ccache/ path=/root/.ccache
        # Patch snapcraft.yaml to enable ccache
-        patch -p1 < .github/workflows/linux-snap.patch
+        patch -p1 < .github/workflows/scripts/linux-snap.patch
        # Find common base between master and HEAD to use as cache key.
        git -c protocol.version=2 fetch --no-tags --prune --progress --no-recurse-submodules origin master
-        echo "key=$( git merge-base origin/master ${{ github.sha }} )" >> $GITHUB_OUTPUT
+        echo "key=$( git merge-base origin/master $GITHUB_SHA )" >> $GITHUB_OUTPUT
    - name: CCache
      uses: actions/cache@v3
@@ -73,7 +73,7 @@ jobs:
    - name: Upload the snap
      if: github.event_name == 'push'
-      uses: actions/upload-artifact@v2
+      uses: actions/upload-artifact@v4
      with:
        name: ${{ steps.build-snap.outputs.snap-name }}
        path: ${{ steps.build-snap.outputs.snap-path }}


@ -1,77 +0,0 @@
name: Ubuntu 14.04 / Qt 5.12 for AppImage--
on:
push:
paths-ignore:
- scripts/docker/**
branches:
- master
pull_request:
paths-ignore:
- scripts/docker/**
branches:
- master
jobs:
buildAppImage:
runs-on: ubuntu-latest
container:
image: docker://subsurface/trusty-qt512:1.1
steps:
- name: checkout sources
uses: actions/checkout@v1
- name: atomically create or retrieve the build number and assemble release notes
id: version_number
if: github.event_name == 'push'
run: |
bash ./scripts/get-atomic-buildnr.sh ${{ github.sha }} ${{ secrets.NIGHTLY_BUILDS }} "CICD-release"
version=$(cat release-version)
echo "version=$version" >> $GITHUB_OUTPUT
- name: store dummy version and build number for pull request
if: github.event_name == 'pull_request'
run: |
echo "100" > latest-subsurface-buildnumber
echo "CICD-pull-request" > latest-subsurface-buildnumber-extension
- name: run build
env:
SUBSURFACE_REPO_PATH: ${{ github.workspace }}
run: |
cd ..
git config --global --add safe.directory ${SUBSURFACE_REPO_PATH}
git config --global --add safe.directory ${SUBSURFACE_REPO_PATH}/libdivecomputer
rm -rf /install-root/include/libdivecomputer
bash -x subsurface/.github/workflows/scripts/linux-in-container-build.sh
- name: prepare PR artifacts
if: github.event_name == 'pull_request'
run: |
mkdir -p Linux-artifacts
mv Subsurface.AppImage Linux-artifacts
- name: PR artifacts
if: github.event_name == 'pull_request'
uses: actions/upload-artifact@v3
with:
name: Linux-artifacts
path: Linux-artifacts
- name: prepare release artifacts
if: github.event_name == 'push'
run: |
mv Subsurface.AppImage Subsurface-v${{ steps.version_number.outputs.version }}.AppImage
# only publish a 'release' on push events (those include merging a PR)
- name: upload binaries
if: github.event_name == 'push'
uses: softprops/action-gh-release@v1
with:
tag_name: v${{ steps.version_number.outputs.version }}
repository: ${{ github.repository_owner }}/nightly-builds
token: ${{ secrets.NIGHTLY_BUILDS }}
prerelease: false
fail_on_unmatched_files: true
files: |
./Subsurface*.AppImage


@ -0,0 +1,149 @@
name: Ubuntu 16.04 / Qt 5.15-- for AppImage
on:
push:
paths-ignore:
- scripts/docker/**
branches:
- master
pull_request:
paths-ignore:
- scripts/docker/**
branches:
- master
jobs:
build:
runs-on: ubuntu-latest
container:
image: ubuntu:16.04
steps:
- name: get container ready for build
run: |
echo "--------------------------------------------------------------"
echo "update distro and install dependencies"
apt-get update
DEBIAN_FRONTEND=noninteractive apt-get install -y -q \
software-properties-common
add-apt-repository -y ppa:savoury1/qt-5-15
add-apt-repository -y ppa:savoury1/kde-5-80
add-apt-repository -y ppa:savoury1/gpg
add-apt-repository -y ppa:savoury1/ffmpeg4
add-apt-repository -y ppa:savoury1/vlc3
add-apt-repository -y ppa:savoury1/gcc-9
add-apt-repository -y ppa:savoury1/display
add-apt-repository -y ppa:savoury1/apt-xenial
add-apt-repository -y ppa:savoury1/gtk-xenial
add-apt-repository -y ppa:savoury1/qt-xenial
add-apt-repository -y ppa:savoury1/kde-xenial
add-apt-repository -y ppa:savoury1/backports
add-apt-repository -y ppa:savoury1/build-tools
apt-get update
apt-get dist-upgrade -y
DEBIAN_FRONTEND=noninteractive apt-get install -y -q \
autoconf automake cmake g++ g++-9 git libcrypto++-dev libcurl4-gnutls-dev \
libgit2-dev libqt5qml5 libqt5quick5 libqt5svg5-dev \
libqt5webkit5-dev libsqlite3-dev libssh2-1-dev libssl-dev libssl-dev \
libtool libusb-1.0-0-dev libxml2-dev libxslt1-dev libzip-dev make \
pkg-config qml-module-qtlocation qml-module-qtpositioning \
qml-module-qtquick2 qt5-qmake qtchooser qtconnectivity5-dev \
qtdeclarative5-dev qtdeclarative5-private-dev qtlocation5-dev \
qtpositioning5-dev qtscript5-dev qttools5-dev qttools5-dev-tools \
qtquickcontrols2-5-dev xvfb libbluetooth-dev libmtp-dev liblzma-dev \
curl
update-alternatives --install /usr/bin/gcc gcc /usr/bin/gcc-9 60 \
--slave /usr/bin/g++ g++ /usr/bin/g++-9
- name: checkout sources
# We cannot update this as glibc on 16.04 is too old for node 20.
uses: actions/checkout@v3
with:
fetch-depth: 0
submodules: recursive
- name: set the version information
id: version_number
uses: ./.github/actions/manage-version
with:
nightly-builds-secret: ${{ secrets.NIGHTLY_BUILDS }}
- name: build Subsurface
run: |
echo "--------------------------------------------------------------"
echo "building desktop"
# now build the appimage
cd ..
bash -e -x subsurface/scripts/build.sh -desktop -create-appdir -build-with-webkit
- name: test desktop build
run: |
echo "--------------------------------------------------------------"
echo "running tests for desktop"
cd build/tests
# xvfb-run --auto-servernum ./TestGitStorage -v2
xvfb-run --auto-servernum make check
- name: build appimage
env:
VERSION: ${{ steps.version_number.outputs.version }}
run: |
echo "--------------------------------------------------------------"
echo "assembling AppImage"
export QT_PLUGIN_PATH=$QT_ROOT/plugins
export QT_QPA_PLATFORM_PLUGIN_PATH=$QT_ROOT/plugins
export QT_DEBUG_PLUGINS=1
cd ..
# set up the appdir
mkdir -p appdir/usr/plugins/
# mv googlemaps plugins into place
mv appdir/usr/usr/lib/x86_64-linux-gnu/qt5/plugins/* appdir/usr/plugins # the usr/usr is not a typo, that's where it ends up
rm -rf appdir/usr/home/ appdir/usr/include/ appdir/usr/share/man/ # No need to ship developer and man files as part of the AppImage
rm -rf appdir/usr/usr appdir/usr/lib/x86_64-linux-gnu/cmake appdir/usr/lib/pkgconfig
cp /usr/lib/x86_64-linux-gnu/libssl.so.1.1 appdir/usr/lib/
cp /usr/lib/x86_64-linux-gnu/libcrypto.so.1.1 appdir/usr/lib/
# get the linuxdeployqt tool and run it to collect the libraries
curl -L -O "https://github.com/probonopd/linuxdeployqt/releases/download/7/linuxdeployqt-7-x86_64.AppImage"
chmod a+x linuxdeployqt*.AppImage
unset QTDIR
unset QT_PLUGIN_PATH
unset LD_LIBRARY_PATH
./linuxdeployqt*.AppImage --appimage-extract-and-run ./appdir/usr/share/applications/*.desktop -exclude-libs=libdbus-1.so.3 -bundle-non-qt-libs -qmldir=./subsurface/stats -qmldir=./subsurface/map-widget/ -verbose=2
# create the AppImage
./linuxdeployqt*.AppImage --appimage-extract-and-run ./appdir/usr/share/applications/*.desktop -exclude-libs=libdbus-1.so.3 -appimage -qmldir=./subsurface/stats -qmldir=./subsurface/map-widget/ -verbose=2
# copy AppImage to the calling VM
# with GitHub Actions the $GITHUB_WORKSPACE directory is the current working directory at the start of a step
cp Subsurface*.AppImage* $GITHUB_WORKSPACE/Subsurface-$VERSION.AppImage
- name: PR artifacts
if: github.event_name == 'pull_request'
# We cannot update this as glibc on 16.04 is too old for node 20.
uses: actions/upload-artifact@v3
with:
name: Subsurface-Linux-AppImage-${{ steps.version_number.outputs.version }}
path: Subsurface-*.AppImage
compression-level: 0
# only publish a 'release' on push events (those include merging a PR)
- name: upload binaries
if: github.event_name == 'push'
uses: softprops/action-gh-release@v1
with:
tag_name: v${{ steps.version_number.outputs.version }}
repository: ${{ github.repository_owner }}/nightly-builds
token: ${{ secrets.NIGHTLY_BUILDS }}
prerelease: false
fail_on_unmatched_files: true
files: |
./Subsurface-*.AppImage


@ -0,0 +1,19 @@
name: Ubuntu 20.04 / Qt 5.12--
on:
push:
paths-ignore:
- scripts/docker/**
branches:
- master
pull_request:
paths-ignore:
- scripts/docker/**
branches:
- master
jobs:
do-build-test:
uses: ./.github/workflows/linux-debian-generic.yml
with:
container-image: ubuntu:20.04


@ -0,0 +1,19 @@
name: Ubuntu 22.04 / Qt 5.15--
on:
push:
paths-ignore:
- scripts/docker/**
branches:
- master
pull_request:
paths-ignore:
- scripts/docker/**
branches:
- master
jobs:
do-build-test:
uses: ./.github/workflows/linux-debian-generic.yml
with:
container-image: ubuntu:22.04


@ -0,0 +1,19 @@
name: Ubuntu 24.04 / Qt 5.15--
on:
push:
paths-ignore:
- scripts/docker/**
branches:
- master
pull_request:
paths-ignore:
- scripts/docker/**
branches:
- master
jobs:
do-build-test:
uses: ./.github/workflows/linux-debian-generic.yml
with:
container-image: ubuntu:24.04


@ -1,4 +1,5 @@
name: Mac name: Mac
on: on:
push: push:
paths-ignore: paths-ignore:
@ -11,38 +12,38 @@ on:
branches: branches:
- master - master
jobs: jobs:
buildMac: build:
runs-on: macOS-11 runs-on: macOS-11
steps: steps:
- name: checkout sources - name: checkout sources
uses: actions/checkout@v1 uses: actions/checkout@v4
with:
- name: atomically create or retrieve the build number and assemble release notes fetch-depth: 0
id: version_number submodules: recursive
if: github.event_name == 'push'
run: |
bash scripts/get-atomic-buildnr.sh ${{ github.sha }} ${{ secrets.NIGHTLY_BUILDS }} "CICD-release"
version=$(cat release-version)
echo "version=$version" >> $GITHUB_OUTPUT
- name: store dummy version and build number for pull request
if: github.event_name == 'pull_request'
run: |
echo "100" > latest-subsurface-buildnumber
echo "CICD-pull-request" > latest-subsurface-buildnumber-extension
- name: setup Homebrew - name: setup Homebrew
run: brew install hidapi libxslt libjpg libmtp create-dmg confuse run: brew install hidapi libxslt libjpg libmtp create-dmg confuse
- name: set our Qt build
run: | - name: checkout Qt resources
curl --output ssrf-Qt-5.15.2-mac.tar.xz https://f002.backblazeb2.com/file/Subsurface-Travis/ssrf-Qt5.15.2.tar.xz uses: actions/checkout@v4
tar -xJf ssrf-Qt-5.15.2-mac.tar.xz with:
repository: subsurface/qt-mac
ref: main
path: qt-mac
- name: set the version information
id: version_number
uses: ./.github/actions/manage-version
with:
nightly-builds-secret: ${{ secrets.NIGHTLY_BUILDS }}
- name: build Subsurface - name: build Subsurface
id: build id: build
run: | run: |
cd ${GITHUB_WORKSPACE}/.. cd ${GITHUB_WORKSPACE}/..
export QT_ROOT=${GITHUB_WORKSPACE}/Qt5.15.2/5.15.2/clang_64 export QT_ROOT=${GITHUB_WORKSPACE}/qt-mac/Qt5.15.13
export QT_QPA_PLATFORM_PLUGIN_PATH=$QT_ROOT/plugins export QT_QPA_PLATFORM_PLUGIN_PATH=$QT_ROOT/plugins
export PATH=$QT_ROOT/bin:$PATH export PATH=$QT_ROOT/bin:$PATH
export CMAKE_PREFIX_PATH=$QT_ROOT/lib/cmake export CMAKE_PREFIX_PATH=$QT_ROOT/lib/cmake
@ -58,10 +59,18 @@ jobs:
echo "Created $IMG" echo "Created $IMG"
echo "dmg=$IMG" >> $GITHUB_OUTPUT echo "dmg=$IMG" >> $GITHUB_OUTPUT
- name: publish pull request artifacts
if: github.event_name == 'pull_request'
uses: actions/upload-artifact@v4
with:
name: Subsurface-MacOS-${{ steps.version_number.outputs.version }}
path: ${{ steps.build.outputs.dmg }}
compression-level: 0
# only publish a 'release' on push events (those include merging a PR) # only publish a 'release' on push events (those include merging a PR)
- name: upload binaries - name: upload binaries
if: github.event_name == 'push' if: github.event_name == 'push'
uses: softprops/action-gh-release@v1 uses: softprops/action-gh-release@v2
with: with:
tag_name: v${{ steps.version_number.outputs.version }} tag_name: v${{ steps.version_number.outputs.version }}
repository: ${{ github.repository_owner }}/nightly-builds repository: ${{ github.repository_owner }}/nightly-builds


@ -1,4 +1,5 @@
name: Post Release name: Post Release Notes
on: on:
push: push:
paths-ignore: paths-ignore:
@ -6,29 +7,35 @@ on:
branches: branches:
- master - master
env:
GH_TOKEN: ${{ secrets.GITHUB_TOKEN }}
jobs: jobs:
postRelease: postRelease:
runs-on: ubuntu-latest runs-on: ubuntu-latest
steps: steps:
- name: checkout sources - name: checkout sources
uses: actions/checkout@v4 uses: actions/checkout@v4
with:
fetch-depth: 0
submodules: recursive
- name: set the version information
id: version_number
uses: ./.github/actions/manage-version
with:
nightly-builds-secret: ${{ secrets.NIGHTLY_BUILDS }}
# since we are running this step on a pull request, we will skip build numbers in releases # since we are running this step on a pull request, we will skip build numbers in releases
- name: atomically create or retrieve the build number and assemble release notes - name: assemble release notes
id: version_number env:
EVENT_HEAD_COMMIT_ID: ${{ github.event.head_commit.id }}
# Required because we are using the GitHub CLI in 'create-releasenotes.sh'
GH_TOKEN: ${{ github.token }}
run: | run: |
bash -x ./scripts/get-atomic-buildnr.sh ${{ github.sha }} ${{ secrets.NIGHTLY_BUILDS }} "CICD-release" scripts/create-releasenotes.sh $EVENT_HEAD_COMMIT_ID
bash scripts/create-releasenotes.sh ${{ github.event.head_commit.id }}
version=$(cat release-version)
echo "version=$version" >> $GITHUB_OUTPUT
# add a file containing the release title so it can be picked up and listed on the release page on our web server # add a file containing the release title so it can be picked up and listed on the release page on our web server
- name: publish release - name: publish release
if: github.event_name == 'push' if: github.event_name == 'push'
uses: softprops/action-gh-release@v1 uses: softprops/action-gh-release@v2
with: with:
tag_name: v${{ steps.version_number.outputs.version }} tag_name: v${{ steps.version_number.outputs.version }}
repository: ${{ github.repository_owner }}/nightly-builds repository: ${{ github.repository_owner }}/nightly-builds


@@ -23,13 +23,11 @@ logger.setLevel(logging.INFO)
APPLICATION = "subsurface-ci"
LAUNCHPAD = "production"
-RELEASE = "bionic"
TEAM = "subsurface"
SOURCE_NAME = "subsurface"
SNAPS = {
    "subsurface": {
        "stable": {"recipe": "subsurface-stable"},
-        "candidate": {"recipe": "subsurface-candidate"},
    },
}


@ -1,58 +0,0 @@
#!/bin/bash
set -x
set -e
# this gets executed by the GitHub Action when building an AppImage for Linux
# inside of the trusty-qt512 container
export PATH=$QT_ROOT/bin:$PATH # Make sure correct qmake is found on the $PATH for linuxdeployqt
export CMAKE_PREFIX_PATH=$QT_ROOT/lib/cmake
# echo "--------------------------------------------------------------"
# echo "install missing packages"
# apt install -y libbluetooth-dev libmtp-dev
# the container currently has things under / that need to be under /__w/subsurface/subsurface instead
cp -a /appdir /__w/subsurface/
cp -a /install-root /__w/subsurface/
echo "--------------------------------------------------------------"
echo "building desktop"
# now build our AppImage
bash -e -x subsurface/scripts/build.sh -desktop -create-appdir -build-with-webkit -quick
echo "--------------------------------------------------------------"
echo "assembling AppImage"
export QT_PLUGIN_PATH=$QT_ROOT/plugins
export QT_QPA_PLATFORM_PLUGIN_PATH=$QT_ROOT/plugins
export QT_DEBUG_PLUGINS=1
# set up the appdir
mkdir -p appdir/usr/plugins/
# mv googlemaps plugins into place
mv appdir/usr/usr/local/Qt/5.*/gcc_64/plugins/* appdir/usr/plugins # the usr/usr is not a typo, that's where it ends up
rm -rf appdir/usr/home/ appdir/usr/include/ appdir/usr/share/man/ # No need to ship developer and man files as part of the AppImage
rm -rf appdir/usr/usr appdir/usr/lib/cmake appdir/usr/lib/pkgconfig
cp /ssllibs/libssl.so appdir/usr/lib/libssl.so.1.1
cp /ssllibs/libcrypto.so appdir/usr/lib/libcrypto.so.1.1
# get the linuxdeployqt tool and run it to collect the libraries
curl -L -O "https://github.com/probonopd/linuxdeployqt/releases/download/7/linuxdeployqt-7-x86_64.AppImage"
chmod a+x linuxdeployqt*.AppImage
unset QTDIR
unset QT_PLUGIN_PATH
unset LD_LIBRARY_PATH
./linuxdeployqt*.AppImage --appimage-extract-and-run ./appdir/usr/share/applications/*.desktop -exclude-libs=libdbus-1.so.3 -bundle-non-qt-libs -qmldir=./subsurface/stats -qmldir=./subsurface/map-widget/ -verbose=2
# create the AppImage
export VERSION=$(cd subsurface/scripts ; ./get-version) # linuxdeployqt uses this for naming the file
./linuxdeployqt*.AppImage --appimage-extract-and-run ./appdir/usr/share/applications/*.desktop -exclude-libs=libdbus-1.so.3 -appimage -qmldir=./subsurface/stats -qmldir=./subsurface/map-widget/ -verbose=2
# copy AppImage to the calling VM
# with GitHub Actions the /${GITHUB_WORKSPACE} directory is the current working directory at the start of a step
cp Subsurface*.AppImage* /${GITHUB_WORKSPACE}/Subsurface.AppImage
ls -l /${GITHUB_WORKSPACE}/Subsurface.AppImage


@@ -15,13 +15,16 @@ jobs:
steps:
- name: Check out sources
-uses: actions/checkout@v1
+uses: actions/checkout@v4
+with:
+fetch-depth: 0
+submodules: recursive
-- name: atomically create or retrieve the build number
+- name: set the version information
id: version_number
-if: github.event_name == 'push'
-run: |
-bash scripts/get-atomic-buildnr.sh ${{ github.sha }} ${{ secrets.NIGHTLY_BUILDS }} "CICD-release"
+uses: ./.github/actions/manage-version
+with:
+nightly-builds-secret: ${{ secrets.NIGHTLY_BUILDS }}
- name: Setup build dependencies
run: |
@@ -48,5 +51,5 @@ jobs:
- name: run the launchpad make-package script
run: |
cd ..
-bash -x subsurface/packaging/ubuntu/make-package.sh ${{ github.ref_name }}
+bash -x subsurface/packaging/ubuntu/make-package.sh $GITHUB_REF_NAME


@@ -16,17 +16,17 @@ jobs:
mxe_sha: 'c0bfefc57a00fdf6cb5278263e21a478e47b0bf5'
steps:
-- uses: actions/checkout@v1
+- uses: actions/checkout@v4
- name: Build the name for the docker image
id: build_name
run: |
-v=${{ env.VERSION }}
+v=$VERSION
-b=${{ github.ref }} # -BRANCH suffix, unless the branch is master
+b=$GITHUB_REF # -BRANCH suffix, unless the branch is master
b=${b/refs\/heads\//}
b=${b,,} # the name needs to be all lower case
if [ $b = "master" ] ; then b="" ; else b="-$b" ; fi
-echo "NAME=${{ github.repository_owner }}/mxe-build${b}:${v}" >> $GITHUB_OUTPUT
+echo "NAME=$GITHUB_REPOSITORY_OWNER/mxe-build${b}:${v}" >> $GITHUB_OUTPUT
- name: Build and Publish Linux Docker image to Dockerhub
uses: elgohr/Publish-Docker-Github-Action@v5


@ -1,4 +1,5 @@
name: Windows name: Windows
on: on:
push: push:
paths-ignore: paths-ignore:
@ -12,28 +13,23 @@ on:
- master - master
jobs: jobs:
buildWindows: build:
runs-on: ubuntu-latest runs-on: ubuntu-latest
container: container:
image: docker://subsurface/mxe-build:3.1.0 image: docker://subsurface/mxe-build:3.1.0
steps: steps:
- name: checkout sources - name: checkout sources
uses: actions/checkout@v1 uses: actions/checkout@v4
with:
fetch-depth: 0
submodules: recursive
- name: atomically create or retrieve the build number and assemble release notes - name: set the version information
id: version_number id: version_number
if: github.event_name == 'push' uses: ./.github/actions/manage-version
run: | with:
bash scripts/get-atomic-buildnr.sh ${{ github.sha }} ${{ secrets.NIGHTLY_BUILDS }} "CICD-release" nightly-builds-secret: ${{ secrets.NIGHTLY_BUILDS }}
version=$(cat release-version)
echo "version=$version" >> $GITHUB_OUTPUT
- name: store dummy version and build number for pull request
if: github.event_name == 'pull_request'
run: |
echo "100" > latest-subsurface-buildnumber
echo "CICD-pull-request" > latest-subsurface-buildnumber-extension
- name: get other dependencies - name: get other dependencies
env: env:
@ -44,18 +40,28 @@ jobs:
git config --global --add safe.directory ${SUBSURFACE_REPO_PATH}/libdivecomputer git config --global --add safe.directory ${SUBSURFACE_REPO_PATH}/libdivecomputer
cd /win cd /win
ln -s /__w/subsurface/subsurface . ln -s /__w/subsurface/subsurface .
bash -x subsurface/.github/workflows/scripts/windows-container-prep.sh 2>&1 | tee pre-build.log bash -x subsurface/packaging/windows/container-prep.sh 2>&1 | tee pre-build.log
- name: run build - name: run build
run: | run: |
export OUTPUT_DIR="$GITHUB_WORKSPACE"
cd /win cd /win
bash -x subsurface/.github/workflows/scripts/windows-in-container-build.sh 2>&1 | tee build.log bash -x subsurface/packaging/windows/in-container-build.sh 2>&1 | tee build.log
grep "Built target installer" build.log grep "Built target installer" build.log
- name: publish pull request artifacts
if: github.event_name == 'pull_request'
uses: actions/upload-artifact@v4
with:
name: Subsurface-Windows-${{ steps.version_number.outputs.version }}
path: |
subsurface*.exe*
smtk2ssrf*.exe
# only publish a 'release' on push events (those include merging a PR) # only publish a 'release' on push events (those include merging a PR)
- name: upload binaries - name: upload binaries
if: github.event_name == 'push' if: github.event_name == 'push'
uses: softprops/action-gh-release@v1 uses: softprops/action-gh-release@v2
with: with:
tag_name: v${{ steps.version_number.outputs.version }} tag_name: v${{ steps.version_number.outputs.version }}
repository: ${{ github.repository_owner }}/nightly-builds repository: ${{ github.repository_owner }}/nightly-builds

.gitignore (1 change)

@@ -49,3 +49,4 @@ appdata/subsurface.appdata.xml
android-mobile/Roboto-Regular.ttf
gh_release_notes.md
release_content_title.txt
+/output/


@@ -320,7 +320,7 @@ elseif(CMAKE_SYSTEM_NAME STREQUAL "Linux")
endif()
elseif(CMAKE_SYSTEM_NAME STREQUAL "Darwin")
execute_process(
-COMMAND bash scripts/get-version
+COMMAND bash scripts/get-version.sh
WORKING_DIRECTORY ${CMAKE_SOURCE_DIR}
OUTPUT_VARIABLE SSRF_VERSION_STRING
OUTPUT_STRIP_TRAILING_WHITESPACE


@ -135,7 +135,7 @@ msgid ""
"mailto:subsurface@subsurface-divelog.org[our mailing list] and report bugs " "mailto:subsurface@subsurface-divelog.org[our mailing list] and report bugs "
"at https://github.com/Subsurface/subsurface/issues[our bugtracker]. " "at https://github.com/Subsurface/subsurface/issues[our bugtracker]. "
"For instructions on how to build the software and (if needed) its " "For instructions on how to build the software and (if needed) its "
"dependencies please consult the INSTALL file included with the source code." "dependencies please consult the INSTALL.md file included with the source code."
msgstr "" msgstr ""
#. type: Plain text #. type: Plain text


@ -175,7 +175,7 @@ msgid ""
"an email to mailto:subsurface@subsurface-divelog.org[our mailing list] and " "an email to mailto:subsurface@subsurface-divelog.org[our mailing list] and "
"report bugs at https://github.com/Subsurface-divelog/subsurface/issues[our " "report bugs at https://github.com/Subsurface-divelog/subsurface/issues[our "
"bugtracker]. For instructions on how to build the software and (if needed) " "bugtracker]. For instructions on how to build the software and (if needed) "
"its dependencies please consult the INSTALL file included with the source " "its dependencies please consult the INSTALL.md file included with the source "
"code." "code."
msgstr "" msgstr ""
"Ce manuel explique comment utiliser le programme _Subsurface_. Pour " "Ce manuel explique comment utiliser le programme _Subsurface_. Pour "
@ -184,7 +184,7 @@ msgstr ""
"pouvez envoyer un e-mail sur mailto:subsurface@subsurface-divelog.org[notre " "pouvez envoyer un e-mail sur mailto:subsurface@subsurface-divelog.org[notre "
"liste de diffusion] et rapportez les bogues sur http://trac.hohndel." "liste de diffusion] et rapportez les bogues sur http://trac.hohndel."
"org[notre bugtracker]. Pour des instructions de compilation du logiciel et " "org[notre bugtracker]. Pour des instructions de compilation du logiciel et "
"(si besoin) de ses dépendances, merci de consulter le fichier INSTALL inclus " "(si besoin) de ses dépendances, merci de consulter le fichier INSTALL.md inclus "
"dans les sources logicielles." "dans les sources logicielles."
#. type: Plain text #. type: Plain text


@ -460,7 +460,7 @@ the software, consult the <em>Downloads</em> page on the
Please discuss issues with this program by sending an email to Please discuss issues with this program by sending an email to
<a href="mailto:subsurface@subsurface-divelog.org">our mailing list</a> and report bugs at <a href="mailto:subsurface@subsurface-divelog.org">our mailing list</a> and report bugs at
<a href="https://github.com/Subsurface/subsurface/issues">our bugtracker</a>. For instructions on how to build the <a href="https://github.com/Subsurface/subsurface/issues">our bugtracker</a>. For instructions on how to build the
software and (if needed) its dependencies please consult the INSTALL file software and (if needed) its dependencies please consult the INSTALL.md file
included with the source code.</p></div> included with the source code.</p></div>
<div class="paragraph"><p><strong>Audience</strong>: Recreational Scuba Divers, Free Divers, Tec Divers, Professional <div class="paragraph"><p><strong>Audience</strong>: Recreational Scuba Divers, Free Divers, Tec Divers, Professional
Divers</p></div> Divers</p></div>


@ -34,7 +34,7 @@ https://subsurface-divelog.org/[_Subsurface_ web site].
Please discuss issues with this program by sending an email to Please discuss issues with this program by sending an email to
mailto:subsurface@subsurface-divelog.org[our mailing list] and report bugs at mailto:subsurface@subsurface-divelog.org[our mailing list] and report bugs at
https://github.com/Subsurface/subsurface/issues[our bugtracker]. For instructions on how to build the https://github.com/Subsurface/subsurface/issues[our bugtracker]. For instructions on how to build the
software and (if needed) its dependencies please consult the INSTALL file software and (if needed) its dependencies please consult the INSTALL.md file
included with the source code. included with the source code.
*Audience*: Recreational Scuba Divers, Free Divers, Tec Divers, Professional *Audience*: Recreational Scuba Divers, Free Divers, Tec Divers, Professional


@ -517,7 +517,7 @@ web</a>. Por favor, comenta los problemas que tengas con este programa enviando
mail a <a href="mailto:subsurface@subsurface-divelog.org">nuestra lista de correo</a> e informa de mail a <a href="mailto:subsurface@subsurface-divelog.org">nuestra lista de correo</a> e informa de
fallos en <a href="https://github.com/Subsurface/subsurface/issues">nuestro bugtracker</a>. fallos en <a href="https://github.com/Subsurface/subsurface/issues">nuestro bugtracker</a>.
Para instrucciones acerca de como compilar el software y (en caso necesario) Para instrucciones acerca de como compilar el software y (en caso necesario)
sus dependencias, por favor, consulta el archivo INSTALL incluido con el código sus dependencias, por favor, consulta el archivo INSTALL.md incluido con el código
fuente.</p></div> fuente.</p></div>
<div class="paragraph"><p><strong>Audiencia</strong>: Buceadores recreativos, Buceadores en apnea, Buceadores técnicos, <div class="paragraph"><p><strong>Audiencia</strong>: Buceadores recreativos, Buceadores en apnea, Buceadores técnicos,
Buceadores profesionales.</p></div> Buceadores profesionales.</p></div>


@ -61,7 +61,7 @@ web]. Por favor, comenta los problemas que tengas con este programa enviando un
mail a mailto:subsurface@subsurface-divelog.org[nuestra lista de correo] e informa de mail a mailto:subsurface@subsurface-divelog.org[nuestra lista de correo] e informa de
fallos en https://github.com/Subsurface/subsurface/issues[nuestro bugtracker]. fallos en https://github.com/Subsurface/subsurface/issues[nuestro bugtracker].
Para instrucciones acerca de como compilar el software y (en caso necesario) Para instrucciones acerca de como compilar el software y (en caso necesario)
sus dependencias, por favor, consulta el archivo INSTALL incluido con el código sus dependencias, por favor, consulta el archivo INSTALL.md incluido con el código
fuente. fuente.
*Audiencia*: Buceadores recreativos, Buceadores en apnea, Buceadores técnicos, *Audiencia*: Buceadores recreativos, Buceadores en apnea, Buceadores técnicos,


@ -526,7 +526,7 @@ problème, vous pouvez envoyer un e-mail sur
<a href="mailto:subsurface@subsurface-divelog.org">notre liste de diffusion</a> et <a href="mailto:subsurface@subsurface-divelog.org">notre liste de diffusion</a> et
rapportez les bogues sur <a href="http://trac.hohndel.org">notre bugtracker</a>. Pour rapportez les bogues sur <a href="http://trac.hohndel.org">notre bugtracker</a>. Pour
des instructions de compilation du logiciel et (si besoin) de ses des instructions de compilation du logiciel et (si besoin) de ses
dépendances, merci de consulter le fichier INSTALL inclus dans les sources dépendances, merci de consulter le fichier INSTALL.md inclus dans les sources
logicielles.</p></div> logicielles.</p></div>
<div class="paragraph"><p><strong>Public</strong> : Plongeurs loisirs, apnéistes, plongeurs Tek et plongeurs <div class="paragraph"><p><strong>Public</strong> : Plongeurs loisirs, apnéistes, plongeurs Tek et plongeurs
professionnels</p></div> professionnels</p></div>


@ -61,7 +61,7 @@ problème, vous pouvez envoyer un e-mail sur
mailto:subsurface@subsurface-divelog.org[notre liste de diffusion] et mailto:subsurface@subsurface-divelog.org[notre liste de diffusion] et
rapportez les bogues sur http://trac.hohndel.org[notre bugtracker]. Pour rapportez les bogues sur http://trac.hohndel.org[notre bugtracker]. Pour
des instructions de compilation du logiciel et (si besoin) de ses des instructions de compilation du logiciel et (si besoin) de ses
dépendances, merci de consulter le fichier INSTALL inclus dans les sources dépendances, merci de consulter le fichier INSTALL.md inclus dans les sources
logicielles. logicielles.
*Public* : Plongeurs loisirs, apnéistes, plongeurs Tek et plongeurs *Public* : Plongeurs loisirs, apnéistes, plongeurs Tek et plongeurs


@ -516,7 +516,7 @@ het programma kunnen bij de ontwikkelaars gemeld worden via email op
<a href="mailto:subsurface@subsurface-divelog.org">onze mailinglijst</a>. Fouten kunnen <a href="mailto:subsurface@subsurface-divelog.org">onze mailinglijst</a>. Fouten kunnen
ook gemeld worden op <a href="https://github.com/Subsurface/subsurface/issues">onze bugtracker</a>. ook gemeld worden op <a href="https://github.com/Subsurface/subsurface/issues">onze bugtracker</a>.
Instructies hoe <em>Subsurface</em> zelf te compileren vanuit de broncode staan ook op Instructies hoe <em>Subsurface</em> zelf te compileren vanuit de broncode staan ook op
onze website en in het INSTALL bestand in de broncode.</p></div> onze website en in het INSTALL.md bestand in de broncode.</p></div>
<div class="paragraph"><p><strong>Doelgroep</strong>: Recreatieve duikers, Tec duikers, Apneu duikers, <div class="paragraph"><p><strong>Doelgroep</strong>: Recreatieve duikers, Tec duikers, Apneu duikers,
Professionele duikers.</p></div> Professionele duikers.</p></div>
<div id="toc"> <div id="toc">


@ -59,7 +59,7 @@ het programma kunnen bij de ontwikkelaars gemeld worden via email op
mailto:subsurface@subsurface-divelog.org[onze mailinglijst]. Fouten kunnen mailto:subsurface@subsurface-divelog.org[onze mailinglijst]. Fouten kunnen
ook gemeld worden op https://github.com/Subsurface/subsurface/issues[onze bugtracker]. ook gemeld worden op https://github.com/Subsurface/subsurface/issues[onze bugtracker].
Instructies hoe _Subsurface_ zelf te compileren vanuit de broncode staan ook op Instructies hoe _Subsurface_ zelf te compileren vanuit de broncode staan ook op
onze website en in het INSTALL bestand in de broncode. onze website en in het INSTALL.md bestand in de broncode.
*Doelgroep*: Recreatieve duikers, Tec duikers, Apneu duikers, *Doelgroep*: Recreatieve duikers, Tec duikers, Apneu duikers,
Professionele duikers. Professionele duikers.


@ -1,5 +1,4 @@
Building Subsurface from Source # Building Subsurface from Source
===============================
Subsurface uses quite a few open source libraries and frameworks to do its Subsurface uses quite a few open source libraries and frameworks to do its
job. The most important ones include libdivecomputer, Qt, libxml2, libxslt, job. The most important ones include libdivecomputer, Qt, libxml2, libxslt,
@ -13,23 +12,27 @@ Below are instructions for building Subsurface
- iOS (cross-building) - iOS (cross-building)
Getting Subsurface source ## Getting Subsurface source
-------------------------
You can get the sources to the latest development version from our git You can get the sources to the latest development version from our git
repository: repository:
git clone http://github.com/Subsurface/subsurface.git
cd subsurface ```
git submodule init # this will give you our flavor of libdivecomputer git clone http://github.com/Subsurface/subsurface.git
cd subsurface
git submodule init # this will give you our flavor of libdivecomputer
```
You keep it updated by doing: You keep it updated by doing:
git checkout master
git pull -r ```
git submodule update git checkout master
git pull -r
git submodule update
```
Our flavor of libdivecomputer ### Our flavor of libdivecomputer
-----------------------------
Subsurface requires its own flavor of libdivecomputer which is inclduded Subsurface requires its own flavor of libdivecomputer which is inclduded
above as git submodule above as git submodule
@ -37,7 +40,7 @@ above as git submodule
The branches won't have a pretty history and will include ugly merges, The branches won't have a pretty history and will include ugly merges,
but they should always allow a fast forward pull that tracks what we but they should always allow a fast forward pull that tracks what we
believe developers should build against. All our patches are contained believe developers should build against. All our patches are contained
in the "Subsurface-DS9" branch. in the `Subsurface-DS9` branch.
This should allow distros to see which patches we have applied on top of This should allow distros to see which patches we have applied on top of
upstream. They will receive force pushes as we rebase to newer versions of upstream. They will receive force pushes as we rebase to newer versions of
@ -53,8 +56,7 @@ Subsurface or trying to understand what we have done relative to their
respective upstreams. respective upstreams.
Getting Qt5 ### Getting Qt5
-----------
We use Qt5 in order to only maintain one UI across platforms. We use Qt5 in order to only maintain one UI across platforms.
@ -74,36 +76,41 @@ significantly reduced flexibility.
As of this writing, there is thankfully a thirdparty offline installer still As of this writing, there is thankfully a thirdparty offline installer still
available: available:
pip3 install aqtinstall ```
aqt install -O <Qt Location> 5.15.2 mac desktop pip3 install aqtinstall
aqt install -O <Qt Location> 5.15.2 mac desktop
```
(or whatever version / OS you need). This installer is surprisingly fast (or whatever version / OS you need). This installer is surprisingly fast
and seems well maintained - note that we don't use this for Windows as and seems well maintained - note that we don't use this for Windows as
that is completely built from source using MXE. that is completely built from source using MXE.
In order to use this Qt installation, simply add it to your PATH: In order to use this Qt installation, simply add it to your PATH:
```
PATH=<Qt Location>/<version>/<type>/bin:$PATH PATH=<Qt Location>/<version>/<type>/bin:$PATH
```
QtWebKit is needed, if you want to print, but no longer part of Qt5, QtWebKit is needed, if you want to print, but no longer part of Qt5,
so you need to download it and compile. In case you just want to test so you need to download it and compile. In case you just want to test
without print possibility omit this step. without print possibility omit this step.
git clone -b 5.212 https://github.com/qt/qtwebkit ```
mkdir -p qtwebkit/WebKitBuild/Release git clone -b 5.212 https://github.com/qt/qtwebkit
cd qtwebkit/WebKitBuild/Release mkdir -p qtwebkit/WebKitBuild/Release
cmake -DPORT=Qt -DCMAKE_BUILD_TYPE=Release -DQt5_DIR=/<Qt Location>/<version>/<type>/lib/cmake/Qt5 ../.. cd qtwebkit/WebKitBuild/Release
make install cmake -DPORT=Qt -DCMAKE_BUILD_TYPE=Release -DQt5_DIR=/<Qt Location>/<version>/<type>/lib/cmake/Qt5 ../..
make install
```
-Other third party library dependencies
---------------------------------------
+### Other third party library dependencies
In order for our cloud storage to be fully functional you need
libgit2 0.26 or newer.
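One quick way to check which libgit2 your distribution provides (assuming pkg-config and the libgit2 development package are installed) is:

```
pkg-config --modversion libgit2
```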
cmake build system ### cmake build system
------------------
Our main build system is based on cmake. But qmake is needed Our main build system is based on cmake. But qmake is needed
for the googlemaps plugin and the iOS build. for the googlemaps plugin and the iOS build.
@ -114,109 +121,127 @@ distribution (see build instructions).
-Build options for Subsurface
-----------------------------
+## Build options for Subsurface
The following options are recognised when passed to cmake:
--DCMAKE_BUILD_TYPE=Release create a release build
--DCMAKE_BUILD_TYPE=Debug create a debug build
+`-DCMAKE_BUILD_TYPE=Release` create a release build
+`-DCMAKE_BUILD_TYPE=Debug` create a debug build
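As an illustration only (the supplied scripts/build.sh normally drives cmake for you, and this assumes all dependencies are already installed system-wide), such an option would be passed to a manual out-of-tree configure roughly like this:

```
mkdir -p build && cd build
cmake -DCMAKE_BUILD_TYPE=Debug ..
make
```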
The Makefile that was created using cmake can be forced into a much more The Makefile that was created using cmake can be forced into a much more
verbose mode by calling verbose mode by calling
make VERBOSE=1 ```
make VERBOSE=1
```
Many more variables are supported, the easiest way to interact with them is Many more variables are supported, the easiest way to interact with them is
to call to call
ccmake . ```
ccmake .
```
in your build directory. in your build directory.
Building the development version of Subsurface under Linux ### Building the development version of Subsurface under Linux
----------------------------------------------------------
On Fedora you need On Fedora you need
```
sudo dnf install autoconf automake bluez-libs-devel cmake gcc-c++ git \ sudo dnf install autoconf automake bluez-libs-devel cmake gcc-c++ git \
libcurl-devel libsqlite3x-devel libssh2-devel libtool libudev-devel \ libcurl-devel libsqlite3x-devel libssh2-devel libtool libudev-devel \
libusbx-devel libxml2-devel libxslt-devel make \ libusbx-devel libxml2-devel libxslt-devel make \
qt5-qtbase-devel qt5-qtconnectivity-devel qt5-qtdeclarative-devel \ qt5-qtbase-devel qt5-qtconnectivity-devel qt5-qtdeclarative-devel \
qt5-qtlocation-devel qt5-qtscript-devel qt5-qtsvg-devel \ qt5-qtlocation-devel qt5-qtscript-devel qt5-qtsvg-devel \
qt5-qttools-devel qt5-qtwebkit-devel redhat-rpm-config \ qt5-qttools-devel qt5-qtwebkit-devel redhat-rpm-config \
bluez-libs-devel libgit2-devel libzip-devel libmtp-devel bluez-libs-devel libgit2-devel libzip-devel libmtp-devel
```
Package names are sadly different on OpenSUSE Package names are sadly different on OpenSUSE
```
sudo zypper install git gcc-c++ make autoconf automake libtool cmake libzip-devel \ sudo zypper install git gcc-c++ make autoconf automake libtool cmake libzip-devel \
libxml2-devel libxslt-devel sqlite3-devel libusb-1_0-devel \ libxml2-devel libxslt-devel sqlite3-devel libusb-1_0-devel \
libqt5-linguist-devel libqt5-qttools-devel libQt5WebKitWidgets-devel \ libqt5-linguist-devel libqt5-qttools-devel libQt5WebKitWidgets-devel \
libqt5-qtbase-devel libQt5WebKit5-devel libqt5-qtsvg-devel \ libqt5-qtbase-devel libQt5WebKit5-devel libqt5-qtsvg-devel \
libqt5-qtscript-devel libqt5-qtdeclarative-devel \ libqt5-qtscript-devel libqt5-qtdeclarative-devel \
libqt5-qtconnectivity-devel libqt5-qtlocation-devel libcurl-devel \ libqt5-qtconnectivity-devel libqt5-qtlocation-devel libcurl-devel \
bluez-devel libgit2-devel libmtp-devel bluez-devel libgit2-devel libmtp-devel
```
On Debian Bookworm this seems to work On Debian Bookworm this seems to work
```
sudo apt install \ sudo apt install \
autoconf automake cmake g++ git libbluetooth-dev libcrypto++-dev \ autoconf automake cmake g++ git libbluetooth-dev libcrypto++-dev \
libcurl4-openssl-dev libgit2-dev libqt5qml5 libqt5quick5 libqt5svg5-dev \ libcurl4-openssl-dev libgit2-dev libqt5qml5 libqt5quick5 libqt5svg5-dev \
libqt5webkit5-dev libsqlite3-dev libssh2-1-dev libssl-dev libtool \ libqt5webkit5-dev libsqlite3-dev libssh2-1-dev libssl-dev libtool \
libusb-1.0-0-dev libxml2-dev libxslt1-dev libzip-dev make pkg-config \ libusb-1.0-0-dev libxml2-dev libxslt1-dev libzip-dev make pkg-config \
qml-module-qtlocation qml-module-qtpositioning qml-module-qtquick2 \ qml-module-qtlocation qml-module-qtpositioning qml-module-qtquick2 \
qt5-qmake qtchooser qtconnectivity5-dev qtdeclarative5-dev \ qt5-qmake qtchooser qtconnectivity5-dev qtdeclarative5-dev \
qtdeclarative5-private-dev qtlocation5-dev qtpositioning5-dev \ qtdeclarative5-private-dev qtlocation5-dev qtpositioning5-dev \
qtscript5-dev qttools5-dev qttools5-dev-tools libmtp-dev qtscript5-dev qttools5-dev qttools5-dev-tools libmtp-dev
```
In order to build and run mobile-on-desktop, you also need In order to build and run mobile-on-desktop, you also need
```
sudo apt install \ sudo apt install \
qtquickcontrols2-5-dev qml-module-qtquick-window2 qml-module-qtquick-dialogs \ qtquickcontrols2-5-dev qml-module-qtquick-window2 qml-module-qtquick-dialogs \
qml-module-qtquick-layouts qml-module-qtquick-controls2 qml-module-qtquick-templates2 \ qml-module-qtquick-layouts qml-module-qtquick-controls2 qml-module-qtquick-templates2 \
qml-module-qtgraphicaleffects qml-module-qtqml-models2 qml-module-qtquick-controls qml-module-qtgraphicaleffects qml-module-qtqml-models2 qml-module-qtquick-controls
```
Package names for Ubuntu 21.04 Package names for Ubuntu 21.04
```
sudo apt install \ sudo apt install \
autoconf automake cmake g++ git libbluetooth-dev libcrypto++-dev \ autoconf automake cmake g++ git libbluetooth-dev libcrypto++-dev \
libcurl4-gnutls-dev libgit2-dev libqt5qml5 libqt5quick5 libqt5svg5-dev \ libcurl4-gnutls-dev libgit2-dev libqt5qml5 libqt5quick5 libqt5svg5-dev \
libqt5webkit5-dev libsqlite3-dev libssh2-1-dev libssl-dev libtool \ libqt5webkit5-dev libsqlite3-dev libssh2-1-dev libssl-dev libtool \
libusb-1.0-0-dev libxml2-dev libxslt1-dev libzip-dev make pkg-config \ libusb-1.0-0-dev libxml2-dev libxslt1-dev libzip-dev make pkg-config \
qml-module-qtlocation qml-module-qtpositioning qml-module-qtquick2 \ qml-module-qtlocation qml-module-qtpositioning qml-module-qtquick2 \
qt5-qmake qtchooser qtconnectivity5-dev qtdeclarative5-dev \ qt5-qmake qtchooser qtconnectivity5-dev qtdeclarative5-dev \
qtdeclarative5-private-dev qtlocation5-dev qtpositioning5-dev \ qtdeclarative5-private-dev qtlocation5-dev qtpositioning5-dev \
qtscript5-dev qttools5-dev qttools5-dev-tools libmtp-dev qtscript5-dev qttools5-dev qttools5-dev-tools libmtp-dev
```
In order to build and run mobile-on-desktop, you also need In order to build and run mobile-on-desktop, you also need
```
sudo apt install \ sudo apt install \
qtquickcontrols2-5-dev qml-module-qtquick-window2 qml-module-qtquick-dialogs \ qtquickcontrols2-5-dev qml-module-qtquick-window2 qml-module-qtquick-dialogs \
qml-module-qtquick-layouts qml-module-qtquick-controls2 qml-module-qtquick-templates2 \ qml-module-qtquick-layouts qml-module-qtquick-controls2 qml-module-qtquick-templates2 \
qml-module-qtgraphicaleffects qml-module-qtqml-models2 qml-module-qtquick-controls qml-module-qtgraphicaleffects qml-module-qtqml-models2 qml-module-qtquick-controls
```
On Raspberry Pi (Raspian Buster and Ubuntu Mate 20.04.1) this seems to work On Raspberry Pi (Raspian Buster and Ubuntu Mate 20.04.1) this seems to work
```
sudo apt install \ sudo apt install \
autoconf automake cmake g++ git libbluetooth-dev libcrypto++-dev \ autoconf automake cmake g++ git libbluetooth-dev libcrypto++-dev \
libcurl4-gnutls-dev libgit2-dev libqt5qml5 libqt5quick5 libqt5svg5-dev \ libcurl4-gnutls-dev libgit2-dev libqt5qml5 libqt5quick5 libqt5svg5-dev \
libqt5webkit5-dev libsqlite3-dev libssh2-1-dev libssl-dev libtool \ libqt5webkit5-dev libsqlite3-dev libssh2-1-dev libssl-dev libtool \
libusb-1.0-0-dev libxml2-dev libxslt1-dev libzip-dev make pkg-config \ libusb-1.0-0-dev libxml2-dev libxslt1-dev libzip-dev make pkg-config \
qml-module-qtlocation qml-module-qtpositioning qml-module-qtquick2 \ qml-module-qtlocation qml-module-qtpositioning qml-module-qtquick2 \
qt5-qmake qtchooser qtconnectivity5-dev qtdeclarative5-dev \ qt5-qmake qtchooser qtconnectivity5-dev qtdeclarative5-dev \
qtdeclarative5-private-dev qtlocation5-dev qtpositioning5-dev \ qtdeclarative5-private-dev qtlocation5-dev qtpositioning5-dev \
qtscript5-dev qttools5-dev qttools5-dev-tools libmtp-dev qtscript5-dev qttools5-dev qttools5-dev-tools libmtp-dev
```
In order to build and run mobile-on-desktop, you also need In order to build and run mobile-on-desktop, you also need
```
sudo apt install \ sudo apt install \
qtquickcontrols2-5-dev qml-module-qtquick-window2 qml-module-qtquick-dialogs \ qtquickcontrols2-5-dev qml-module-qtquick-window2 qml-module-qtquick-dialogs \
qml-module-qtquick-layouts qml-module-qtquick-controls2 qml-module-qtquick-templates2 \ qml-module-qtquick-layouts qml-module-qtquick-controls2 qml-module-qtquick-templates2 \
qml-module-qtgraphicaleffects qml-module-qtqml-models2 qml-module-qtquick-controls qml-module-qtgraphicaleffects qml-module-qtqml-models2 qml-module-qtquick-controls
```
Note that on Ubuntu Mate on the Raspberry Pi, you may need to configure Note that on Ubuntu Mate on the Raspberry Pi, you may need to configure
@ -226,42 +251,46 @@ swap space configured by default. See the dphys-swapfile package.
On Raspberry Pi OS with Desktop (64-bit) Released April 4th, 2022, this seems On Raspberry Pi OS with Desktop (64-bit) Released April 4th, 2022, this seems
to work to work
```
sudo apt install \ sudo apt install \
autoconf automake cmake g++ git libbluetooth-dev libcrypto++-dev \ autoconf automake cmake g++ git libbluetooth-dev libcrypto++-dev \
libcurl4-gnutls-dev libgit2-dev libqt5qml5 libqt5quick5 libqt5svg5-dev \ libcurl4-gnutls-dev libgit2-dev libqt5qml5 libqt5quick5 libqt5svg5-dev \
libqt5webkit5-dev libsqlite3-dev libssh2-1-dev libssl-dev libtool \ libqt5webkit5-dev libsqlite3-dev libssh2-1-dev libssl-dev libtool \
libusb-1.0-0-dev libxml2-dev libxslt1-dev libzip-dev make pkg-config \ libusb-1.0-0-dev libxml2-dev libxslt1-dev libzip-dev make pkg-config \
qml-module-qtlocation qml-module-qtpositioning qml-module-qtquick2 \ qml-module-qtlocation qml-module-qtpositioning qml-module-qtquick2 \
qt5-qmake qtchooser qtconnectivity5-dev qtdeclarative5-dev \ qt5-qmake qtchooser qtconnectivity5-dev qtdeclarative5-dev \
qtdeclarative5-private-dev qtlocation5-dev qtpositioning5-dev \ qtdeclarative5-private-dev qtlocation5-dev qtpositioning5-dev \
qtscript5-dev qttools5-dev qttools5-dev-tools libmtp-dev qtscript5-dev qttools5-dev qttools5-dev-tools libmtp-dev
```
Note that you'll need to increase the swap space as the default of 100MB
doesn't seem to be enough. 1024MB worked on a 3B+.
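A sketch of one common way to do that on Raspberry Pi OS, assuming the dphys-swapfile package mentioned above manages swap on your image (1024 follows the 3B+ note above):

```
sudo sed -i 's/^CONF_SWAPSIZE=.*/CONF_SWAPSIZE=1024/' /etc/dphys-swapfile
sudo systemctl restart dphys-swapfile
free -m   # verify the new swap size
```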
If maps aren't working, copy the googlemaps plugin
-from <build_dir>/subsurface/googlemaps/build/libqtgeoservices_googlemaps.so
-to /usr/lib/aarch64-linux-gnu/qt5/plugins/geoservices.
+from `<build_dir>/subsurface/googlemaps/build/libqtgeoservices_googlemaps.so`
+to `/usr/lib/aarch64-linux-gnu/qt5/plugins/geoservices/`.
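For example (this is just the copy described above; adjust `<build_dir>` and the architecture triplet to your system):

```
sudo cp <build_dir>/subsurface/googlemaps/build/libqtgeoservices_googlemaps.so \
    /usr/lib/aarch64-linux-gnu/qt5/plugins/geoservices/
```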
-If Subsurface can't seem to see your dive computer on /dev/ttyUSB0, even after
+If Subsurface can't seem to see your dive computer on `/dev/ttyUSB0`, even after
adjusting your account's group settings (see note below about usermod), it
might be that the FTDI driver doesn't recognize the VendorID/ProductID of your
computer. Follow the instructions here:
https://www.ftdichip.com/Support/Documents/TechnicalNotes/TN_101_Customising_FTDI_VID_PID_In_Linux(FT_000081).pdf
If you're unsure of the VID/PID of your device, plug your dive computer in to
your host and run `dmesg`. That should show the codes that are needed to
follow TN_101.
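A rough sketch of that procedure; the `0403 6015` pair below is only a placeholder, use the idVendor/idProduct values that `dmesg` reports for your own device:

```
dmesg | grep -i 'idVendor\|idProduct'
sudo modprobe ftdi_sio
# teach the ftdi_sio driver about the device's VID/PID
echo 0403 6015 | sudo tee /sys/bus/usb-serial/drivers/ftdi_sio/new_id
```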
On PCLinuxOS you appear to need the following packages On PCLinuxOS you appear to need the following packages
su -c "apt-get install -y autoconf automake cmake gcc-c++ git libtool \ ```
lib64bluez-devel lib64qt5bluetooth-devel lib64qt5concurrent-devel \ su -c "apt-get install -y autoconf automake cmake gcc-c++ git libtool \
lib64qt5help-devel lib64qt5location-devel lib64qt5quicktest-devel \ lib64bluez-devel lib64qt5bluetooth-devel lib64qt5concurrent-devel \
lib64qt5quickwidgets-devel lib64qt5script-devel lib64qt5svg-devel \ lib64qt5help-devel lib64qt5location-devel lib64qt5quicktest-devel \
lib64qt5test-devel lib64qt5webkitwidgets-devel lib64qt5xml-devel \ lib64qt5quickwidgets-devel lib64qt5script-devel lib64qt5svg-devel \
lib64ssh2-devel lib64usb1.0-devel lib64zip-devel qttools5 qttranslations5" lib64qt5test-devel lib64qt5webkitwidgets-devel lib64qt5xml-devel \
lib64ssh2-devel lib64usb1.0-devel lib64zip-devel qttools5 qttranslations5"
```
In order to build Subsurface, use the supplied build script. This should In order to build Subsurface, use the supplied build script. This should
work on most systems that have all the prerequisite packages installed. work on most systems that have all the prerequisite packages installed.
@ -269,109 +298,121 @@ work on most systems that have all the prerequisite packages installed.
You should have Subsurface sources checked out in a sane place, something You should have Subsurface sources checked out in a sane place, something
like this: like this:
```
mkdir -p ~/src mkdir -p ~/src
cd ~/src cd ~/src
git clone https://github.com/Subsurface/subsurface.git git clone https://github.com/Subsurface/subsurface.git
./subsurface/scripts/build.sh # <- this step will take quite a while as it ./subsurface/scripts/build.sh # <- this step will take quite a while as it
# compiles a handful of libraries before # compiles a handful of libraries before
# building Subsurface # building Subsurface
```
Now you can run Subsurface like this: Now you can run Subsurface like this:
```
cd ~/src/subsurface/build cd ~/src/subsurface/build
./subsurface ./subsurface
```
Note: on many Linux versions (for example on Kubuntu 15.04) the user must
-belong to the dialout group.
+belong to the `dialout` group.
You may need to run something like
-sudo usermod -a -G dialout username
+```
+sudo usermod -a -G dialout $USER
+```
with your correct username and log out and log in again for that to take
effect.
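To double-check that the group membership is active after logging back in:

```
id -nG | grep -w dialout
```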
If you get errors like: If you get errors like:
```
./subsurface: error while loading shared libraries: libGrantlee_Templates.so.5: cannot open shared object file: No such file or directory ./subsurface: error while loading shared libraries: libGrantlee_Templates.so.5: cannot open shared object file: No such file or directory
```
You can run the following command: You can run the following command:
```
sudo ldconfig ~/src/install-root/lib sudo ldconfig ~/src/install-root/lib
```
Building Subsurface under MacOSX ### Building Subsurface under MacOSX
--------------------------------
While it is possible to build all required components completely from source, While it is possible to build all required components completely from source,
at this point the preferred way to build Subsurface is to set up the build at this point the preferred way to build Subsurface is to set up the build
infrastructure via Homebrew and then build the dependencies from source. infrastructure via Homebrew and then build the dependencies from source.
0) You need to have XCode installed. The first time (and possibly after updating OSX) 0. You need to have XCode installed. The first time (and possibly after updating OSX)
```
xcode-select --install xcode-select --install
```
1) install Homebrew (see https://brew.sh) and then the required build infrastructure: 1. install Homebrew (see https://brew.sh) and then the required build infrastructure:
```
brew install autoconf automake libtool pkg-config gettext brew install autoconf automake libtool pkg-config gettext
```
2) install Qt 2. install Qt
download the macOS installer from https://download.qt.io/official_releases/online_installers download the macOS installer from https://download.qt.io/official_releases/online_installers
and use it to install the desired Qt version. At this point the latest Qt5 version is still and use it to install the desired Qt version. At this point the latest Qt5 version is still
preferred over Qt6. preferred over Qt6.
3) now build Subsurface If you plan to deploy your build to an Apple Silicon Mac, you may have better results with
Bluetooth connections if you install Qt5.15.13. If Qt5.15.13 is not available via the
installer, you can download from https://download.qt.io/official_releases/qt/5.15/5.15.13
and build using the usual configure, make, and make install.
3. now build Subsurface
```
cd ~/src; bash subsurface/scripts/build.sh -build-deps cd ~/src; bash subsurface/scripts/build.sh -build-deps
```
if you are building against Qt6 (still experimental) you can create a universal binary with if you are building against Qt6 (still experimental) you can create a universal binary with
```
cd ~/src; bash subsurface/scripts/build.sh -build-with-qt6 -build-deps -fat-build cd ~/src; bash subsurface/scripts/build.sh -build-with-qt6 -build-deps -fat-build
```
After the above is done, Subsurface.app will be available in the After the above is done, Subsurface.app will be available in the
subsurface/build directory. You can run Subsurface with the command subsurface/build directory. You can run Subsurface with the command
A) open subsurface/build/Subsurface.app A. `open subsurface/build/Subsurface.app`
this will however not show diagnostic output this will however not show diagnostic output
B) subsurface/build/Subsurface.app/Contents/MacOS/Subsurface B. `subsurface/build/Subsurface.app/Contents/MacOS/Subsurface`
the TAB key is your friend :-) the [Tab] key is your friend :-)
Debugging can be done with either Xcode or QtCreator. Debugging can be done with either Xcode or QtCreator.
To install the app for all users, move subsurface/build/Subsurface.app to /Applications. To install the app for all users, move subsurface/build/Subsurface.app to /Applications.
### Cross-building Subsurface on MacOSX for iOS
0. build Subsurface under MacOSX and iOS
1. `cd <repo>/..; bash <repo>/scripts/build.sh -build-deps -both`
note: this is mainly done to ensure all external dependencies are downloaded and set
to the correct versions
2. follow [these instructions](packaging/ios/README.md)
### Cross-building Subsurface on Linux for Windows
Subsurface for Windows builds on Linux by using the [MXE (M cross environment)](https://github.com/mxe/mxe). The easiest way to do this is to use a Docker container with a pre-built MXE for Subsurface by following [these instructions](packaging/windows/README.md).
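Very roughly, the container-based route looks like the sketch below; the image name is a placeholder and the script invocation is an assumption, so treat packaging/windows/README.md as the authoritative reference:
```
# placeholder image name -- see packaging/windows/README.md for the image actually used
cd ~/src
docker run -it --rm -v "$PWD":/src <subsurface-mxe-image> /bin/bash
# inside the container, run the Windows build script shipped with the sources
# (the 'installer' argument is an assumption; check the script and the README)
bash /src/subsurface/packaging/windows/mxe-based-build.sh installer
```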
### Building Subsurface on Windows
This is NOT RECOMMENDED. To the best of our knowledge there is one single
person who regularly does this. The Subsurface team does not provide support
@@ -381,8 +422,9 @@ The lack of a working package management system for Windows makes it
really painful to build Subsurface natively under Windows,
so we don't support that at all.
But if you want to build Subsurface on a Windows system, the Docker-based [cross-build for Windows](packaging/windows/README.md) works just fine in WSL2 on Windows.
### Cross-building Subsurface on Linux for Android
Follow [these instructions](packaging/android/README.md).


@@ -1,20 +1,17 @@
# Subsurface
[![Windows](https://github.com/subsurface/subsurface/actions/workflows/windows.yml/badge.svg)](https://github.com/subsurface/subsurface/actions/workflows/windows.yml)
[![Mac](https://github.com/subsurface/subsurface/actions/workflows/mac.yml/badge.svg)](https://github.com/subsurface/subsurface/actions/workflows/mac.yml)
[![iOS](https://github.com/subsurface/subsurface/actions/workflows/ios.yml/badge.svg)](https://github.com/subsurface/subsurface/actions/workflows/ios.yml)
[![Android](https://github.com/subsurface/subsurface/actions/workflows/android.yml/badge.svg)](https://github.com/subsurface/subsurface/actions/workflows/android.yml)
[![Snap](https://github.com/subsurface/subsurface/actions/workflows/linux-snap.yml/badge.svg)](https://github.com/subsurface/subsurface/actions/workflows/linux-snap.yml)
[![Ubuntu 16.04 / Qt 5.15-- for AppImage](https://github.com/subsurface/subsurface/actions/workflows/linux-ubuntu-16.04-5.12-appimage.yml/badge.svg)](https://github.com/subsurface/subsurface/actions/workflows/linux-ubuntu-16.04-5.12-appimage.yml)
[![Ubuntu 24.04 / Qt 5.15--](https://github.com/subsurface/subsurface/actions/workflows/linux-ubuntu-24.04-5.15.yml/badge.svg)](https://github.com/subsurface/subsurface/actions/workflows/linux-ubuntu-24.04-5.15.yml)
[![Fedora 35 / Qt 6--](https://github.com/subsurface/subsurface/actions/workflows/linux-fedora-35-qt6.yml/badge.svg)](https://github.com/subsurface/subsurface/actions/workflows/linux-fedora-35-qt6.yml)
[![Debian trixie / Qt 5.15--](https://github.com/subsurface/subsurface/actions/workflows/linux-debian-trixie-5.15.yml/badge.svg)](https://github.com/subsurface/subsurface/actions/workflows/linux-debian-trixie-5.15.yml)
[![Coverity Scan Results](https://scan.coverity.com/projects/14405/badge.svg)](https://scan.coverity.com/projects/subsurface-divelog-subsurface)
Subsurface can be found at http://subsurface-divelog.org
@@ -24,16 +21,9 @@ Report bugs and issues at https://github.com/Subsurface/subsurface/issues
License: GPLv2
We are releasing 'nightly' builds of Subsurface that are built from the latest version of the code. Versions of this build for Windows, macOS, Android (requiring sideloading), and a Linux AppImage can be downloaded from the [Latest Dev Release](https://www.subsurface-divelog.org/latest-release/) page on [our website](https://www.subsurface-divelog.org/). Alternatively, they can be downloaded [directly from GitHub](https://github.com/subsurface/nightly-builds/releases). Additionally, those same versions are
posted to the Subsurface-daily repos on Ubuntu Launchpad, Fedora COPR, and
OpenSUSE OBS, and released to [Snapcraft](https://snapcraft.io/subsurface) into the 'edge' channel of subsurface.
You can get the sources to the latest development version from the git
repository:
@@ -45,17 +35,11 @@ git clone https://github.com/Subsurface/subsurface.git
You can also fork the repository and browse the sources at the same site,
simply using https://github.com/Subsurface/subsurface
Additionally, artifacts for Windows, macOS, Android, Linux AppImage, and iOS (simulator build) are generated for all open pull requests and linked in pull request comments. Use these if you want to test the changes in a specific pull request and provide feedback before it has been merged.
If you want a more stable version that is a little bit more tested you can get this from the [Current Release](https://www.subsurface-divelog.org/current-release/) page on [our website](https://www.subsurface-divelog.org/).
Detailed build instructions can be found in the [INSTALL.md](/INSTALL.md) file.
## System Requirements


@@ -181,7 +181,7 @@ void export_TeX(const char *filename, bool selected_only, bool plain, ExportCall
site ? put_format(&buf, "\\def\\%sgpslon{%f}\n", ssrf, site->location.lon.udeg / 1000000.0) : put_format(&buf, "\\def\\gpslon{}\n");
put_format(&buf, "\\def\\%scomputer{%s}\n", ssrf, dive->dc.model);
put_format(&buf, "\\def\\%scountry{%s}\n", ssrf, country ?: "");
put_format(&buf, "\\def\\%stime{%u:%02u}\n", ssrf, FRACTION_TUPLE(dive->duration.seconds, 60));
put_format(&buf, "\n%% Dive Profile Details:\n");
dive->maxtemp.mkelvin ? put_format(&buf, "\\def\\%smaxtemp{%.1f\\%stemperatureunit}\n", ssrf, get_temp_units(dive->maxtemp.mkelvin, &unit), ssrf) : put_format(&buf, "\\def\\%smaxtemp{}\n", ssrf);


@@ -1,20 +1,20 @@
execute_process(
COMMAND bash ${CMAKE_TOP_SRC_DIR}/scripts/get-version.sh 4
WORKING_DIRECTORY ${CMAKE_TOP_SRC_DIR}
OUTPUT_VARIABLE CANONICAL_VERSION_STRING_4
OUTPUT_STRIP_TRAILING_WHITESPACE
)
execute_process(
COMMAND bash ${CMAKE_TOP_SRC_DIR}/scripts/get-version.sh 3
WORKING_DIRECTORY ${CMAKE_TOP_SRC_DIR}
OUTPUT_VARIABLE CANONICAL_VERSION_STRING_3
OUTPUT_STRIP_TRAILING_WHITESPACE
)
execute_process(
COMMAND bash ${CMAKE_TOP_SRC_DIR}/scripts/get-version.sh
WORKING_DIRECTORY ${CMAKE_TOP_SRC_DIR}
OUTPUT_VARIABLE CANONICAL_VERSION_STRING
OUTPUT_STRIP_TRAILING_WHITESPACE


@@ -100,6 +100,7 @@ enum class EditProfileType {
ADD,
REMOVE,
MOVE,
EDIT,
};
void replanDive(dive *d); // dive computer(s) and cylinder(s) of first argument will be consumed!
void editProfile(const dive *d, int dcNr, EditProfileType type, int count);


@@ -521,6 +521,11 @@ ImportDives::ImportDives(struct divelog *log, int flags, const QString &source)
continue;
filterPresetsToAdd.emplace_back(preset.name, preset.data);
}
free(dives_to_add.dives);
free(dives_to_remove.dives);
free(trips_to_add.trips);
free(sites_to_add.dive_sites);
}
bool ImportDives::workToBeDone()


@@ -879,6 +879,7 @@ QString editProfileTypeToString(EditProfileType type, int count)
case EditProfileType::ADD: return Command::Base::tr("Add stop");
case EditProfileType::REMOVE: return Command::Base::tr("Remove %n stop(s)", "", count);
case EditProfileType::MOVE: return Command::Base::tr("Move %n stop(s)", "", count);
case EditProfileType::EDIT: return Command::Base::tr("Edit stop");
}
}
@@ -904,7 +905,7 @@ EditProfile::EditProfile(const dive *source, int dcNr, EditProfileType type, int
copy_samples(sdc, &dc);
copy_events(sdc, &dc);
setText(editProfileTypeToString(type, count) + " " + diveNumberOrDate(d));
}
EditProfile::~EditProfile()
@@ -925,6 +926,7 @@ void EditProfile::undo()
std::swap(sdc->samples, dc.samples);
std::swap(sdc->alloc_samples, dc.alloc_samples);
std::swap(sdc->sample, dc.sample);
std::swap(sdc->events, dc.events);
std::swap(sdc->maxdepth, dc.maxdepth);
std::swap(d->maxdepth, maxdepth);
std::swap(d->meandepth, meandepth);
@@ -1125,7 +1127,7 @@ AddCylinder::AddCylinder(bool currentDiveOnly) :
setText(Command::Base::tr("Add cylinder"));
else
setText(Command::Base::tr("Add cylinder (%n dive(s))", "", dives.size()));
cyl = create_new_manual_cylinder(dives[0]);
indexes.reserve(dives.size());
}
@@ -1317,8 +1319,7 @@ EditCylinder::EditCylinder(int index, cylinder_t cylIn, EditCylinderType typeIn,
void EditCylinder::redo()
{
for (size_t i = 0; i < dives.size(); ++i) {
set_tank_info_data(&tank_info_table, cyl[i].type.description, cyl[i].type.size, cyl[i].type.workingpressure);
std::swap(*get_cylinder(dives[i], indexes[i]), cyl[i]);
update_cylinder_related_info(dives[i]);
emit diveListNotifier.cylinderEdited(dives[i], indexes[i]);


@@ -339,28 +339,27 @@ extern "C" void selective_copy_dive(const struct dive *s, struct dive *d, struct
}
#undef CONDITIONAL_COPY_STRING
/* copies all events from the given dive computer before a given time
this is used when editing a dive in the planner to preserve the events
of the old dive */
extern "C" void copy_events_until(const struct dive *sd, struct dive *dd, int dcNr, int time)
{
if (!sd || !dd)
return;
const struct divecomputer *s = &sd->dc;
struct divecomputer *d = get_dive_dc(dd, dcNr);
if (!s || !d)
return;
const struct event *ev;
ev = s->events;
while (ev != NULL) {
// Don't add events the planner knows about
if (ev->time.seconds < time && !event_is_gaschange(ev) && !event_is_divemodechange(ev))
add_event(d, ev->time.seconds, ev->type, ev->flags, ev->value, ev->name);
ev = ev->next;
}
}
@@ -969,7 +968,7 @@ static void fixup_dc_depths(struct dive *dive, struct divecomputer *dc)
}
update_depth(&dc->maxdepth, maxdepth);
if (!is_logged(dive) || !is_dc_planner(dc))
if (maxdepth > dive->maxdepth.mm)
dive->maxdepth.mm = maxdepth;
}
@@ -2310,8 +2309,8 @@ static int likely_same_dive(const struct dive *a, const struct dive *b)
int match, fuzz = 20 * 60;
/* don't merge manually added dives with anything */
if (is_dc_manually_added_dive(&a->dc) ||
is_dc_manually_added_dive(&b->dc))
return 0;
/*
@@ -2550,19 +2549,29 @@ static void join_dive_computers(struct dive *d, struct divecomputer *res,
remove_redundant_dc(res, prefer_downloaded);
}
static bool has_dc_type(const struct dive *dive, bool dc_is_planner)
{
const struct divecomputer *dc = &dive->dc;
while (dc) {
if (is_dc_planner(dc) == dc_is_planner)
return true;
dc = dc->next;
}
return false;
}
// Does this dive have a dive computer for which is_dc_planner has value planned
extern "C" bool is_planned(const struct dive *dive)
{
return has_dc_type(dive, true);
}
extern "C" bool is_logged(const struct dive *dive)
{
return has_dc_type(dive, false);
}
/*
* Merging two dives can be subtle, because there's two different ways
* of merging:
@@ -3235,11 +3244,11 @@ extern "C" int depth_to_mbar(int depth, const struct dive *dive)
extern "C" double depth_to_mbarf(int depth, const struct dive *dive)
{
// For downloaded and planned dives, use DC's values
int salinity = dive->dc.salinity;
pressure_t surface_pressure = dive->dc.surface_pressure;
if (is_dc_manually_added_dive(&dive->dc)) { // For manual dives, salinity and pressure in another place...
surface_pressure = dive->surface_pressure;
salinity = dive->user_salinity;
}
@@ -3262,8 +3271,8 @@ extern "C" double depth_to_atm(int depth, const struct dive *dive)
* take care of this, but the Uemis we support natively */
extern "C" int rel_mbar_to_depth(int mbar, const struct dive *dive)
{
// For downloaded and planned dives, use DC's salinity. Manual dives, use user's salinity
int salinity = is_dc_manually_added_dive(&dive->dc) ? dive->user_salinity : dive->dc.salinity;
if (!salinity)
salinity = SEAWATER_SALINITY;
@@ -3274,8 +3283,8 @@ extern "C" int rel_mbar_to_depth(int mbar, const struct dive *dive)
extern "C" int mbar_to_depth(int mbar, const struct dive *dive)
{
// For downloaded and planned dives, use DC's pressure. Manual dives, use user's pressure
pressure_t surface_pressure = is_dc_manually_added_dive(&dive->dc)
? dive->surface_pressure
: dive->dc.surface_pressure;


@@ -141,8 +141,7 @@ void split_divecomputer(const struct dive *src, int num, struct dive **out1, str
for (_dc = &_dive->dc; _dc; _dc = _dc->next)
#define for_each_relevant_dc(_dive, _dc) \
for (_dc = &_dive->dc; _dc; _dc = _dc->next) if (!is_logged(_dive) || !is_dc_planner(_dc))
extern struct dive *get_dive_by_uniq_id(int id);
extern int get_idx_by_uniq_id(int id);
@@ -187,7 +186,7 @@ extern int split_dive(const struct dive *dive, struct dive **new1, struct dive *
extern int split_dive_at_time(const struct dive *dive, duration_t time, struct dive **new1, struct dive **new2);
extern struct dive *merge_dives(const struct dive *a, const struct dive *b, int offset, bool prefer_downloaded, struct dive_trip **trip, struct dive_site **site);
extern struct dive *try_to_merge(struct dive *a, struct dive *b, bool prefer_downloaded);
extern void copy_events_until(const struct dive *sd, struct dive *dd, int dcNr, int time);
extern void copy_used_cylinders(const struct dive *s, struct dive *d, bool used_only);
extern bool is_cylinder_used(const struct dive *dive, int idx);
extern bool is_cylinder_prot(const struct dive *dive, int idx);
@@ -207,7 +206,8 @@ extern void invalidate_dive_cache(struct dive *dc);
extern int total_weight(const struct dive *);
extern bool is_planned(const struct dive *dive);
extern bool is_logged(const struct dive *dive);
/* Get gasmixes at increasing timestamps.
* In "evp", pass a pointer to a "struct event *" which is NULL-initialized on first invocation.


@@ -492,11 +492,6 @@ void add_extra_data(struct divecomputer *dc, const char *key, const char *value)
}
}
bool is_dc_planner(const struct divecomputer *dc)
{
return same_string(dc->model, "planned dive");
}
/*
* Match two dive computer entries against each other, and
* tell if it's the same dive. Return 0 if "don't know",
@@ -548,14 +543,27 @@ void free_dc(struct divecomputer *dc)
free(dc);
}
static const char *planner_dc_name = "planned dive";
bool is_dc_planner(const struct divecomputer *dc)
{
return dc && same_string(dc->model, planner_dc_name);
}
void make_planner_dc(struct divecomputer *dc)
{
free((void *)dc->model);
dc->model = strdup(planner_dc_name);
}
const char *manual_dc_name = "manually added dive";
bool is_dc_manually_added_dive(const struct divecomputer *dc)
{
return dc && same_string(dc->model, manual_dc_name);
}
void make_manually_added_dive_dc(struct divecomputer *dc)
{
free((void *)dc->model);
dc->model = strdup(manual_dc_name);


@@ -67,10 +67,12 @@ extern void add_event_to_dc(struct divecomputer *dc, struct event *ev);
extern struct event *add_event(struct divecomputer *dc, unsigned int time, int type, int flags, int value, const char *name);
extern void remove_event_from_dc(struct divecomputer *dc, struct event *event);
extern void add_extra_data(struct divecomputer *dc, const char *key, const char *value);
extern uint32_t calculate_string_hash(const char *str);
extern bool is_dc_planner(const struct divecomputer *dc);
extern void make_planner_dc(struct divecomputer *dc);
extern const char *manual_dc_name;
extern bool is_dc_manually_added_dive(const struct divecomputer *dc);
extern void make_manually_added_dive_dc(struct divecomputer *dc);
/* Check if two dive computer entries are the exact same dive (-1=no/0=maybe/1=yes) */
extern int match_one_dc(const struct divecomputer *a, const struct divecomputer *b);


@@ -561,7 +561,7 @@ int init_decompression(struct deco_state *ds, const struct dive *dive, bool in_p
}
add_segment(ds, surface_pressure, air, surface_time, 0, OC, prefs.decosac, in_planner);
#if DECO_CALC_DEBUG & 2
printf("Tissues after surface intervall of %d:%02u:\n", FRACTION_TUPLE(surface_time, 60));
dump_tissues(ds);
#endif
}
@@ -598,7 +598,7 @@ int init_decompression(struct deco_state *ds, const struct dive *dive, bool in_p
}
add_segment(ds, surface_pressure, air, surface_time, 0, OC, prefs.decosac, in_planner);
#if DECO_CALC_DEBUG & 2
printf("Tissues after surface intervall of %d:%02u:\n", FRACTION_TUPLE(surface_time, 60));
dump_tissues(ds);
#endif
}
@@ -767,18 +767,6 @@ struct dive *unregister_dive(int idx)
return dive;
}
/* this implements the mechanics of removing the dive from the global
* dive table and the trip, but doesn't deal with updating dive trips, etc */
void delete_single_dive(int idx)
{
struct dive *dive = get_dive(idx);
if (!dive)
return; /* this should never happen */
remove_dive_from_trip(dive, divelog.trips);
unregister_dive_from_dive_site(dive);
delete_dive_from_table(divelog.dives, idx);
}
void process_loaded_dives()
{
sort_dive_table(divelog.dives);
@@ -989,7 +977,7 @@ void add_imported_dives(struct divelog *import_log, int flags)
/* Remove old dives */
for (i = 0; i < dives_to_remove.nr; i++) {
idx = get_divenr(dives_to_remove.dives[i]);
delete_single_dive(&divelog, idx);
}
dives_to_remove.nr = 0;
@@ -1019,6 +1007,10 @@ void add_imported_dives(struct divelog *import_log, int flags)
current_dive = divelog.dives->nr > 0 ? divelog.dives->dives[divelog.dives->nr - 1] : NULL;
free_device_table(devices_to_add);
free(dives_to_add.dives);
free(dives_to_remove.dives);
free(trips_to_add.trips);
free(dive_sites_to_add.dive_sites);
/* Inform frontend of reset data. This should reset all the models. */
emit_reset_signal();


@@ -62,7 +62,6 @@ void clear_dive_file_data();
void clear_dive_table(struct dive_table *table);
void move_dive_table(struct dive_table *src, struct dive_table *dst);
struct dive *unregister_dive(int idx);
extern void delete_single_dive(int idx);
extern bool has_dive(unsigned int deviceid, unsigned int diveid);
#ifdef __cplusplus


@@ -64,10 +64,24 @@ struct divelog &divelog::operator=(divelog &&log)
return *this;
}
/* this implements the mechanics of removing the dive from the
* dive log and the trip, but doesn't deal with updating dive trips, etc */
void delete_single_dive(struct divelog *log, int idx)
{
if (idx < 0 || idx > log->dives->nr) {
report_info("Warning: deleting unexisting dive with index %d", idx);
return;
}
struct dive *dive = log->dives->dives[idx];
remove_dive_from_trip(dive, log->trips);
unregister_dive_from_dive_site(dive);
delete_dive_from_table(log->dives, idx);
}
void divelog::clear()
{
while (dives->nr > 0)
delete_single_dive(this, dives->nr - 1);
while (sites->nr)
delete_dive_site(get_dive_site(0, sites), sites);
if (trips->nr != 0) {


@@ -34,6 +34,7 @@ extern "C" {
#endif
void clear_divelog(struct divelog *);
extern void delete_single_dive(struct divelog *, int idx);
#ifdef __cplusplus
}


@@ -109,7 +109,7 @@ void add_tank_info_imperial(struct tank_info_table *table, const char *name, int
add_to_tank_info_table(table, table->nr, info);
}
static struct tank_info *get_tank_info(struct tank_info_table *table, const char *name)
{
for (int i = 0; i < table->nr; ++i) {
if (same_string(table->infos[i].name, name))
@@ -118,34 +118,41 @@ extern struct tank_info *get_tank_info(struct tank_info_table *table, const char
return NULL;
}
extern void set_tank_info_data(struct tank_info_table *table, const char *name, volume_t size, pressure_t working_pressure)
{
struct tank_info *info = get_tank_info(table, name);
if (info) {
if (info->ml != 0 || info->bar != 0) {
info->bar = working_pressure.mbar / 1000;
info->ml = size.mliter;
} else {
info->psi = lrint(to_PSI(working_pressure));
info->cuft = lrint(ml_to_cuft(size.mliter) * mbar_to_atm(working_pressure.mbar));
}
} else {
// Metric is a better choice as the volume is independent of the working pressure
add_tank_info_metric(table, name, size.mliter, working_pressure.mbar / 1000);
}
}
extern void extract_tank_info(const struct tank_info *info, volume_t *size, pressure_t *working_pressure)
{
working_pressure->mbar = info->bar != 0 ? info->bar * 1000 : psi_to_mbar(info->psi);
if (info->ml != 0)
size->mliter = info->ml;
else if (working_pressure->mbar != 0)
size->mliter = lrint(cuft_to_l(info->cuft) * 1000 / mbar_to_atm(working_pressure->mbar));
}
extern bool get_tank_info_data(struct tank_info_table *table, const char *name, volume_t *size, pressure_t *working_pressure)
{
struct tank_info *info = get_tank_info(table, name);
if (info) {
extract_tank_info(info, size, working_pressure);
return true;
}
return false;
}
/* placeholders for a few functions that we need to redesign for the Qt UI */
@@ -207,13 +214,6 @@ void add_cloned_weightsystem(struct weightsystem_table *t, weightsystem_t ws)
add_to_weightsystem_table(t, t->nr, clone_weightsystem(ws));
}
/* Add a clone of a weightsystem to the end of a weightsystem table.
* Cloned means that the description-string is copied. */
void add_cloned_weightsystem_at(struct weightsystem_table *t, weightsystem_t ws)
{
add_to_weightsystem_table(t, t->nr, clone_weightsystem(ws));
}
cylinder_t clone_cylinder(cylinder_t cyl)
{
cylinder_t res = cyl;
@@ -510,12 +510,38 @@ cylinder_t create_new_cylinder(const struct dive *d)
cylinder_t cyl = empty_cylinder;
fill_default_cylinder(d, &cyl);
cyl.start = cyl.type.workingpressure;
cyl.cylinder_use = OC_GAS;
return cyl;
}
cylinder_t create_new_manual_cylinder(const struct dive *d)
{
cylinder_t cyl = create_new_cylinder(d);
cyl.manually_added = true;
return cyl;
}
void add_default_cylinder(struct dive *d)
{
// Only add if there are no cylinders yet
if (d->cylinders.nr > 0)
return;
cylinder_t cyl;
if (!empty_string(prefs.default_cylinder)) {
cyl = create_new_cylinder(d);
} else {
cyl = empty_cylinder;
// roughly an AL80
cyl.type.description = strdup(translate("gettextFromC", "unknown"));
cyl.type.size.mliter = 11100;
cyl.type.workingpressure.mbar = 207000;
}
add_cylinder(&d->cylinders, 0, cyl);
reset_cylinders(d, false);
}
static bool show_cylinder(const struct dive *d, int i)
{
if (is_cylinder_used(d, i))
return true;


@@ -93,7 +93,8 @@ extern void reset_cylinders(struct dive *dive, bool track_gas);
extern int gas_volume(const cylinder_t *cyl, pressure_t p); /* Volume in mliter of a cylinder at pressure 'p' */
extern int find_best_gasmix_match(struct gasmix mix, const struct cylinder_table *cylinders);
extern void fill_default_cylinder(const struct dive *dive, cylinder_t *cyl); /* dive is needed to fill out MOD, which depends on salinity. */
extern cylinder_t create_new_manual_cylinder(const struct dive *dive); /* dive is needed to fill out MOD, which depends on salinity. */
extern void add_default_cylinder(struct dive *dive);
extern int first_hidden_cylinder(const struct dive *d);
#ifdef DEBUG_CYL
extern void dump_cylinders(struct dive *dive, bool verbose);
@@ -125,9 +126,9 @@ extern void reset_tank_info_table(struct tank_info_table *table);
extern void clear_tank_info_table(struct tank_info_table *table);
extern void add_tank_info_metric(struct tank_info_table *table, const char *name, int ml, int bar);
extern void add_tank_info_imperial(struct tank_info_table *table, const char *name, int cuft, int psi);
extern void extract_tank_info(const struct tank_info *info, volume_t *size, pressure_t *working_pressure);
extern bool get_tank_info_data(struct tank_info_table *table, const char *name, volume_t *size, pressure_t *pressure);
extern void set_tank_info_data(struct tank_info_table *table, const char *name, volume_t size, pressure_t working_pressure);
struct ws_info_t {
const char *name;


@@ -1074,9 +1074,9 @@ bool filter_constraint_match_dive(const filter_constraint &c, const struct dive
case FILTER_CONSTRAINT_SAC:
return check_numerical_range_non_zero(c, d->sac);
case FILTER_CONSTRAINT_LOGGED:
return is_logged(d) != c.negate;
case FILTER_CONSTRAINT_PLANNED:
return is_planned(d) != c.negate;
case FILTER_CONSTRAINT_DIVE_MODE:
return check_multiple_choice(c, (int)d->dc.divemode); // should we be smarter and check all DCs?
case FILTER_CONSTRAINT_TAGS:


@@ -343,71 +343,6 @@ QString vqasprintf_loc(const char *fmt, va_list ap_in)
return ret;
}
// Put a formated string respecting the default locale into a C-style array in UTF-8 encoding.
// The only complication arises from the fact that we don't want to cut through multi-byte UTF-8 code points.
extern "C" int snprintf_loc(char *dst, size_t size, const char *cformat, ...)
{
va_list ap;
va_start(ap, cformat);
int res = vsnprintf_loc(dst, size, cformat, ap);
va_end(ap);
return res;
}
extern "C" int vsnprintf_loc(char *dst, size_t size, const char *cformat, va_list ap)
{
QByteArray utf8 = vqasprintf_loc(cformat, ap).toUtf8();
const char *data = utf8.constData();
size_t utf8_size = utf8.size();
if (size == 0)
return utf8_size;
if (size < utf8_size + 1) {
memcpy(dst, data, size - 1);
if ((data[size - 1] & 0xC0) == 0x80) {
// We truncated a multi-byte UTF-8 encoding.
--size;
// Jump to last copied byte.
if (size > 0)
--size;
while(size > 0 && (dst[size] & 0xC0) == 0x80)
--size;
dst[size] = 0;
} else {
dst[size - 1] = 0;
}
} else {
memcpy(dst, data, utf8_size + 1); // QByteArray guarantees a trailing 0
}
return utf8_size;
}
int asprintf_loc(char **dst, const char *cformat, ...)
{
va_list ap;
va_start(ap, cformat);
int res = vasprintf_loc(dst, cformat, ap);
va_end(ap);
return res;
}
int vasprintf_loc(char **dst, const char *cformat, va_list ap)
{
QByteArray utf8 = vqasprintf_loc(cformat, ap).toUtf8();
*dst = strdup(utf8.constData());
return utf8.size();
}
extern "C" void put_vformat_loc(struct membuffer *b, const char *fmt, va_list args)
{
QByteArray utf8 = vqasprintf_loc(fmt, args).toUtf8();
const char *data = utf8.constData();
size_t utf8_size = utf8.size();
make_room(b, utf8_size);
memcpy(b->buffer + b->len, data, utf8_size);
b->len += utf8_size;
}
// TODO: Avoid back-and-forth conversion between UTF16 and UTF8.
std::string casprintf_loc(const char *cformat, ...)
{


@@ -12,22 +12,7 @@
__printf(1, 2) QString qasprintf_loc(const char *cformat, ...);
__printf(1, 0) QString vqasprintf_loc(const char *cformat, va_list ap);
__printf(1, 2) std::string casprintf_loc(const char *cformat, ...);
#endif
#ifdef __cplusplus
extern "C" {
#endif
__printf(3, 4) int snprintf_loc(char *dst, size_t size, const char *cformat, ...);
__printf(3, 0) int vsnprintf_loc(char *dst, size_t size, const char *cformat, va_list ap);
__printf(2, 3) int asprintf_loc(char **dst, const char *cformat, ...);
__printf(2, 0) int vasprintf_loc(char **dst, const char *cformat, va_list ap);
#ifdef __cplusplus
}
__printf(1, 2) std::string format_string_std(const char *fmt, ...);
#endif
#endif


@@ -102,8 +102,8 @@ static void dump_pr_track(int cyl, pr_track_t *track_pr)
printf(" start %f end %f t_start %d:%02d t_end %d:%02d pt %d\n",
mbar_to_PSI(list->start),
mbar_to_PSI(list->end),
FRACTION_TUPLE(list->t_start, 60),
FRACTION_TUPLE(list->t_end, 60),
list->pressure_time);
list = list->next;
}


@@ -51,8 +51,8 @@ static int stoptime, stopdepth, ndl, po2, cns, heartbeat, bearing;
static bool in_deco, first_temp_is_air;
static int current_gas_index;
#define INFO(fmt, ...) report_info("INFO: " fmt, ##__VA_ARGS__)
#define ERROR(fmt, ...) report_info("ERROR: " fmt, ##__VA_ARGS__)
/*
* Directly taken from libdivecomputer's examples/common.c to improve
@@ -136,7 +136,7 @@ static dc_status_t parse_gasmixes(device_data_t *devdata, struct dive *dive, dc_
{
static bool shown_warning = false;
unsigned int i;
dc_status_t rc;
unsigned int ntanks = 0;
rc = dc_parser_get_field(parser, DC_FIELD_TANK_COUNT, 0, &ntanks);
@@ -441,7 +441,7 @@ sample_cb(dc_sample_type_t type, const dc_sample_value_t *pvalue, void *userdata
break;
#ifdef DEBUG_DC_VENDOR
case DC_SAMPLE_VENDOR:
printf(" <vendor time='%u:%02u' type=\"%u\" size=\"%u\">", FRACTION_TUPLE(sample->time.seconds, 60),
value.vendor.type, value.vendor.size);
for (int i = 0; i < value.vendor.size; ++i)
printf("%02X", ((unsigned char *)value.vendor.data)[i]);
@@ -497,7 +497,7 @@ static void dev_info(device_data_t *, const char *fmt, ...)
va_end(ap);
progress_bar_text = buffer;
if (verbose)
INFO("dev_info: %s", buffer);
if (progress_callback)
(*progress_callback)(buffer);
@@ -516,7 +516,7 @@ static void download_error(const char *fmt, ...)
report_error("Dive %d: %s", import_dive_number, buffer);
}
static dc_status_t parse_samples(device_data_t *, struct divecomputer *dc, dc_parser_t *parser)
{
// Parse the sample data.
return dc_parser_samples_foreach(parser, sample_cb, dc);
@@ -815,7 +815,7 @@ static int dive_cb(const unsigned char *data, unsigned int size,
const unsigned char *fingerprint, unsigned int fsize,
void *userdata)
{
dc_status_t rc;
dc_parser_t *parser = NULL;
device_data_t *devdata = (device_data_t *)userdata;
struct dive *dive = NULL;
@@ -830,7 +830,7 @@ static int dive_cb(const unsigned char *data, unsigned int size,
rc = dc_parser_new(&parser, devdata->device, data, size);
if (rc != DC_STATUS_SUCCESS) {
download_error(translate("gettextFromC", "Unable to create parser for %s %s: %d"), devdata->vendor, devdata->product, errmsg(rc));
return true;
}
@@ -843,14 +843,14 @@ static int dive_cb(const unsigned char *data, unsigned int size,
// Parse the dive's header data
rc = libdc_header_parser (parser, devdata, dive);
if (rc != DC_STATUS_SUCCESS) {
download_error(translate("getextFromC", "Error parsing the header: %s"), errmsg(rc));
goto error_exit;
}
// Initialize the sample data.
rc = parse_samples(devdata, &dive->dc, parser);
if (rc != DC_STATUS_SUCCESS) {
download_error(translate("gettextFromC", "Error parsing the samples: %s"), errmsg(rc));
goto error_exit;
}
@@ -1154,13 +1154,19 @@ static const char *do_device_import(device_data_t *data)
// Register the event handler.
int events = DC_EVENT_WAITING | DC_EVENT_PROGRESS | DC_EVENT_DEVINFO | DC_EVENT_CLOCK | DC_EVENT_VENDOR;
rc = dc_device_set_events(device, events, event_cb, data);
if (rc != DC_STATUS_SUCCESS) {
dev_info(data, "Import error: %s", errmsg(rc));
return translate("gettextFromC", "Error registering the event handler.");
}
// Register the cancellation handler.
rc = dc_device_set_cancel(device, cancel_cb, data);
if (rc != DC_STATUS_SUCCESS) {
dev_info(data, "Import error: %s", errmsg(rc));
return translate("gettextFromC", "Error registering the cancellation handler.");
}
if (data->libdc_dump) {
dc_buffer_t *buffer = dc_buffer_new(0);
@@ -1182,6 +1188,8 @@ static const char *do_device_import(device_data_t *data)
if (rc == DC_STATUS_UNSUPPORTED)
return translate("gettextFromC", "Dumping not supported on this device");
dev_info(data, "Import error: %s", errmsg(rc));
return translate("gettextFromC", "Dive data dumping error");
}
} else {
@@ -1190,6 +1198,8 @@ static const char *do_device_import(device_data_t *data)
if (rc != DC_STATUS_SUCCESS) {
progress_bar_fraction = 0.0;
dev_info(data, "Import error: %s", errmsg(rc));
return translate("gettextFromC", "Dive data import error");
}
}
@@ -1281,7 +1291,7 @@ static dc_status_t usbhid_device_open(dc_iostream_t **iostream, dc_context_t *co
dc_iterator_free (iterator);
if (!device) {
ERROR("didn't find HID device");
return DC_STATUS_NODEVICE;
}
dev_info(data, "Opening USB HID device for %04x:%04x",
@@ -1356,7 +1366,7 @@ static dc_status_t bluetooth_device_open(dc_context_t *context, device_data_t *d
dc_iterator_free (iterator);
if (!address) {
dev_info(data, "No rfcomm device found");
return DC_STATUS_NODEVICE;
}
@@ -1376,7 +1386,7 @@ dc_status_t divecomputer_device_open(device_data_t *data)
transports &= supported;
if (!transports) {
dev_info(data, "Dive computer transport not supported");
return DC_STATUS_UNSUPPORTED;
}
@@ -1493,12 +1503,12 @@ const char *do_libdivecomputer_import(device_data_t *data)
rc = divecomputer_device_open(data);
if (rc != DC_STATUS_SUCCESS) {
dev_info(data, "Import error: %s", errmsg(rc));
} else {
dev_info(data, "Connecting ...");
rc = dc_device_open(&data->device, data->context, data->descriptor, data->iostream);
if (rc != DC_STATUS_SUCCESS) {
INFO("dc_device_open error value of %d", rc);
if (subsurface_access(data->devname, R_OK | W_OK) != 0)
#if defined(SUBSURFACE_MOBILE)
err = translate("gettextFromC", "Error opening the device %s %s (%s).\nIn most cases, in order to debug this issue, it is useful to send the developers the log files. You can copy them to the clipboard in the About dialog.");
@@ -1606,12 +1616,12 @@ dc_status_t libdc_buffer_parser(struct dive *dive, device_data_t *data, unsigned
if (dc_descriptor_get_type(data->descriptor) != DC_FAMILY_UWATEC_ALADIN && dc_descriptor_get_type(data->descriptor) != DC_FAMILY_UWATEC_MEMOMOUSE) {
rc = libdc_header_parser (parser, data, dive);
if (rc != DC_STATUS_SUCCESS) {
report_error("Error parsing the dive header data. Dive # %d: %s", dive->number, errmsg(rc));
}
}
rc = dc_parser_samples_foreach (parser, sample_cb, &dive->dc);
if (rc != DC_STATUS_SUCCESS) {
report_error("Error parsing the sample data. Dive # %d: %s", dive->number, errmsg(rc));
dc_parser_destroy (parser);
return rc;
}
@@ -1632,7 +1642,7 @@ dc_descriptor_t *get_descriptor(dc_family_t type, unsigned int model)
rc = dc_descriptor_iterator(&iterator);
if (rc != DC_STATUS_SUCCESS) {
report_info("Error creating the device descriptor iterator: %s", errmsg(rc));
return NULL;
}
while ((dc_iterator_next(iterator, &descriptor)) == DC_STATUS_SUCCESS) {


@@ -169,15 +169,6 @@ void put_format(struct membuffer *b, const char *fmt, ...)
va_end(args);
}
void put_format_loc(struct membuffer *b, const char *fmt, ...)
{
va_list args;
va_start(args, fmt);
put_vformat_loc(b, fmt, args);
va_end(args);
}
void put_milli(struct membuffer *b, const char *pre, int value, const char *post)
{
int i;
@@ -219,7 +210,7 @@ void put_depth(struct membuffer *b, depth_t depth, const char *pre, const char *
void put_duration(struct membuffer *b, duration_t duration, const char *pre, const char *post)
{
if (duration.seconds)
put_format(b, "%s%u:%02u%s", pre, FRACTION_TUPLE(duration.seconds, 60), post);
}
void put_pressure(struct membuffer *b, pressure_t pressure, const char *pre, const char *post)
@@ -243,7 +234,7 @@ void put_degrees(struct membuffer *b, degrees_t value, const char *pre, const ch
udeg = -udeg;
sign = "-";
}
put_format(b, "%s%s%u.%06u%s", pre, sign, FRACTION_TUPLE(udeg, 1000000), post);
}
void put_location(struct membuffer *b, const location_t *loc, const char *pre, const char *post)


@@ -75,9 +75,7 @@ extern void strip_mb(struct membuffer *);
/* The pointer obtained by mb_cstring is invalidated by any modifictation to the membuffer! */
extern const char *mb_cstring(struct membuffer *);
extern __printf(2, 0) void put_vformat(struct membuffer *, const char *, va_list);
extern __printf(2, 0) void put_vformat_loc(struct membuffer *, const char *, va_list);
extern __printf(2, 3) void put_format(struct membuffer *, const char *fmt, ...);
extern __printf(2, 3) void put_format_loc(struct membuffer *, const char *fmt, ...);
extern __printf(2, 0) char *add_to_string_va(char *old, const char *fmt, va_list args);
extern __printf(2, 3) char *add_to_string(char *old, const char *fmt, ...);


@@ -71,6 +71,7 @@ int get_picture_idx(const struct picture_table *t, const char *filename)
return -1;
}
#if !defined(SUBSURFACE_MOBILE)
/* Return distance of timestamp to time of dive. Result is always positive, 0 means during dive. */
static timestamp_t time_from_dive(const struct dive *d, timestamp_t timestamp)
{
@@ -118,7 +119,6 @@ static bool dive_check_picture_time(const struct dive *d, timestamp_t timestamp)
return time_from_dive(d, timestamp) < D30MIN;
}
#if !defined(SUBSURFACE_MOBILE)
/* Creates a picture and indicates the dive to which this picture should be added.
* The caller is responsible for actually adding the picture to the dive.
* If no appropriate dive was found, no picture is created and NULL is returned.


@@ -59,7 +59,7 @@ extern "C" void dump_plan(struct diveplan *diveplan)
diveplan->surface_pressure);
dp = diveplan->dp;
while (dp) {
- printf("\t%3u:%02u: %6dmm cylid: %2d setpoint: %d\n", FRACTION(dp->time, 60), dp->depth, dp->cylinderid, dp->setpoint);
+ printf("\t%3u:%02u: %6dmm cylid: %2d setpoint: %d\n", FRACTION_TUPLE(dp->time, 60), dp->depth, dp->cylinderid, dp->setpoint);
dp = dp->next;
}
}
@@ -111,9 +111,8 @@ static void interpolate_transition(struct deco_state *ds, struct dive *dive, dur
}
/* returns the tissue tolerance at the end of this (partial) dive */
- static int tissue_at_end(struct deco_state *ds, struct dive *dive, deco_state_cache &cache)
+ static int tissue_at_end(struct deco_state *ds, struct dive *dive, const struct divecomputer *dc, deco_state_cache &cache)
{
- struct divecomputer *dc;
struct sample *sample, *psample;
int i;
depth_t lastdepth = {};
@@ -129,7 +128,6 @@ static int tissue_at_end(struct deco_state *ds, struct dive *dive, deco_state_ca
surface_interval = init_decompression(ds, dive, true);
cache.cache(ds);
}
- dc = &dive->dc;
if (!dc->samples)
return 0;
psample = sample = dc->sample;
@@ -208,10 +206,9 @@ static void update_cylinder_pressure(struct dive *d, int old_depth, int new_dept
/* overwrite the data in dive
* return false if something goes wrong */
- static void create_dive_from_plan(struct diveplan *diveplan, struct dive *dive, bool track_gas)
+ static void create_dive_from_plan(struct diveplan *diveplan, struct dive *dive, struct divecomputer *dc, bool track_gas)
{
struct divedatapoint *dp;
- struct divecomputer *dc;
struct sample *sample;
struct event *ev;
cylinder_t *cyl;
@@ -219,7 +216,7 @@ static void create_dive_from_plan(struct diveplan *diveplan, struct dive *dive,
int lasttime = 0, last_manual_point = 0;
depth_t lastdepth = {.mm = 0};
int lastcylid;
- enum divemode_t type = dive->dc.divemode;
+ enum divemode_t type = dc->divemode;
if (!diveplan || !diveplan->dp)
return;
@@ -231,7 +228,6 @@ static void create_dive_from_plan(struct diveplan *diveplan, struct dive *dive,
// reset the cylinders and clear out the samples and events of the
// dive-to-be-planned so we can restart
reset_cylinders(dive, track_gas);
- dc = &dive->dc;
dc->when = dive->when = diveplan->when;
dc->surface_pressure.mbar = diveplan->surface_pressure;
dc->salinity = diveplan->salinity;
@@ -319,7 +315,7 @@ static void create_dive_from_plan(struct diveplan *diveplan, struct dive *dive,
finish_sample(dc);
dp = dp->next;
}
- dive->dc.last_manual_time.seconds = last_manual_point;
+ dc->last_manual_time.seconds = last_manual_point;
#if DEBUG_PLAN & 32
save_dive(stdout, dive);
@@ -655,7 +651,7 @@ static void average_max_depth(struct diveplan *dive, int *avg_depth, int *max_de
*avg_depth = *max_depth = 0;
}
- bool plan(struct deco_state *ds, struct diveplan *diveplan, struct dive *dive, int timestep, struct decostop *decostoptable, deco_state_cache &cache, bool is_planner, bool show_disclaimer)
+ bool plan(struct deco_state *ds, struct diveplan *diveplan, struct dive *dive, int dcNr, int timestep, struct decostop *decostoptable, deco_state_cache &cache, bool is_planner, bool show_disclaimer)
{
int bottom_depth;
@@ -690,15 +686,16 @@ bool plan(struct deco_state *ds, struct diveplan *diveplan, struct dive *dive, i
int laststoptime = timestep;
bool o2breaking = false;
int decostopcounter = 0;
- enum divemode_t divemode = dive->dc.divemode;
+ struct divecomputer *dc = get_dive_dc(dive, dcNr);
+ enum divemode_t divemode = dc->divemode;
set_gf(diveplan->gflow, diveplan->gfhigh);
set_vpmb_conservatism(diveplan->vpmb_conservatism);
if (!diveplan->surface_pressure) {
// Lets use dive's surface pressure in planner, if have one...
- if (dive->dc.surface_pressure.mbar) { // First from DC...
- diveplan->surface_pressure = dive->dc.surface_pressure.mbar;
+ if (dc->surface_pressure.mbar) { // First from DC...
+ diveplan->surface_pressure = dc->surface_pressure.mbar;
}
else if (dive->surface_pressure.mbar) { // After from user...
diveplan->surface_pressure = dive->surface_pressure.mbar;
@@ -707,10 +704,10 @@ bool plan(struct deco_state *ds, struct diveplan *diveplan, struct dive *dive, i
diveplan->surface_pressure = SURFACE_PRESSURE;
}
}
clear_deco(ds, dive->surface_pressure.mbar / 1000.0, true);
ds->max_bottom_ceiling_pressure.mbar = ds->first_ceiling_pressure.mbar = 0;
- create_dive_from_plan(diveplan, dive, is_planner);
+ create_dive_from_plan(diveplan, dive, dc, is_planner);
// Do we want deco stop array in metres or feet?
if (prefs.units.length == units::METERS ) {
@@ -731,20 +728,20 @@ bool plan(struct deco_state *ds, struct diveplan *diveplan, struct dive *dive, i
*(decostoplevels + 1) = M_OR_FT(3,10);
/* Let's start at the last 'sample', i.e. the last manually entered waypoint. */
- sample = &dive->dc.sample[dive->dc.samples - 1];
+ sample = &dc->sample[dc->samples - 1];
/* Keep time during the ascend */
- bottom_time = clock = previous_point_time = dive->dc.sample[dive->dc.samples - 1].time.seconds;
- current_cylinder = get_cylinderid_at_time(dive, &dive->dc, sample->time);
+ bottom_time = clock = previous_point_time = dc->sample[dc->samples - 1].time.seconds;
+ current_cylinder = get_cylinderid_at_time(dive, dc, sample->time);
// Find the divemode at the end of the dive
const struct event *ev = NULL;
divemode = UNDEF_COMP_TYPE;
- divemode = get_current_divemode(&dive->dc, bottom_time, &ev, &divemode);
+ divemode = get_current_divemode(dc, bottom_time, &ev, &divemode);
gas = get_cylinder(dive, current_cylinder)->gasmix;
po2 = sample->setpoint.mbar;
- depth = dive->dc.sample[dive->dc.samples - 1].depth.mm;
+ depth = dc->sample[dc->samples - 1].depth.mm;
average_max_depth(diveplan, &avg_depth, &max_depth);
last_ascend_rate = ascent_velocity(depth, avg_depth, bottom_time);
@@ -755,7 +752,7 @@ bool plan(struct deco_state *ds, struct diveplan *diveplan, struct dive *dive, i
*/
transitiontime = lrint(depth / (double)prefs.ascratelast6m);
plan_add_segment(diveplan, transitiontime, 0, current_cylinder, po2, false, divemode);
- create_dive_from_plan(diveplan, dive, is_planner);
+ create_dive_from_plan(diveplan, dive, dc, is_planner);
return false;
}
@@ -782,7 +779,7 @@ bool plan(struct deco_state *ds, struct diveplan *diveplan, struct dive *dive, i
gi = static_cast<int>(gaschanges.size()) - 1;
/* Set tissue tolerance and initial vpmb gradient at start of ascent phase */
- diveplan->surface_interval = tissue_at_end(ds, dive, cache);
+ diveplan->surface_interval = tissue_at_end(ds, dive, dc, cache);
nuclear_regeneration(ds, clock);
vpmb_start_gradient(ds);
if (decoMode(true) == RECREATIONAL) {
@@ -830,9 +827,9 @@ bool plan(struct deco_state *ds, struct diveplan *diveplan, struct dive *dive, i
}
} while (depth > 0);
plan_add_segment(diveplan, clock - previous_point_time, 0, current_cylinder, po2, false, divemode);
- create_dive_from_plan(diveplan, dive, is_planner);
+ create_dive_from_plan(diveplan, dive, dc, is_planner);
add_plan_to_notes(diveplan, dive, show_disclaimer, error);
- fixup_dc_duration(&dive->dc);
+ fixup_dc_duration(dc);
return false;
}
@@ -848,7 +845,7 @@ bool plan(struct deco_state *ds, struct diveplan *diveplan, struct dive *dive, i
}
// VPM-B or Buehlmann Deco
- tissue_at_end(ds, dive, cache);
+ tissue_at_end(ds, dive, dc, cache);
if ((divemode == CCR || divemode == PSCR) && prefs.dobailout) {
divemode = OC;
po2 = 0;
@@ -1112,9 +1109,9 @@ bool plan(struct deco_state *ds, struct diveplan *diveplan, struct dive *dive, i
current_cylinder = dive->cylinders.nr;
plan_add_segment(diveplan, prefs.surface_segment, 0, current_cylinder, 0, false, OC);
}
- create_dive_from_plan(diveplan, dive, is_planner);
+ create_dive_from_plan(diveplan, dive, dc, is_planner);
add_plan_to_notes(diveplan, dive, show_disclaimer, error);
- fixup_dc_duration(&dive->dc);
+ fixup_dc_duration(dc);
return decodive;
}
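
The recurring change in this file replaces hard-coded uses of `dive->dc` with a `struct divecomputer *dc` selected via the new `dcNr` parameter and `get_dive_dc()`, so the planner can operate on whichever dive computer is currently chosen. The following is a self-contained schematic of that pattern using simplified stand-in types; it is not Subsurface's real structs or its actual `get_dive_dc()` implementation.

```cpp
#include <cstdio>
#include <vector>

// Simplified stand-ins for illustration; the real structs are far richer.
struct divecomputer { int divemode; };
struct dive { std::vector<divecomputer> dcs; };

// Assumed behaviour of get_dive_dc(): pick a dive computer by index,
// falling back to the first one for out-of-range indices.
static divecomputer *get_dive_dc(dive *d, int dcNr)
{
	if (dcNr < 0 || dcNr >= (int)d->dcs.size())
		dcNr = 0;
	return &d->dcs[dcNr];
}

// Helpers now receive the chosen dc instead of implicitly using the first one.
static void plan_on_dc(dive *d, int dcNr)
{
	divecomputer *dc = get_dive_dc(d, dcNr);
	std::printf("planning against dive computer #%d (divemode %d)\n", dcNr, dc->divemode);
}

int main()
{
	dive d;
	d.dcs = { {0}, {1} };
	plan_on_dc(&d, 1); // plan against the second dive computer
	return 0;
}
```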


@@ -60,6 +60,6 @@ struct decostop {
#include <string>
extern std::string get_planner_disclaimer_formatted();
- extern bool plan(struct deco_state *ds, struct diveplan *diveplan, struct dive *dive, int timestep, struct decostop *decostoptable, deco_state_cache &cache, bool is_planner, bool show_disclaimer);
+ extern bool plan(struct deco_state *ds, struct diveplan *diveplan, struct dive *dive, int dcNr, int timestep, struct decostop *decostoptable, deco_state_cache &cache, bool is_planner, bool show_disclaimer);
#endif
#endif // PLANNER_H


@@ -156,7 +156,7 @@ extern "C" void add_plan_to_notes(struct diveplan *diveplan, struct dive *dive,
translate("gettextFromC", "Subsurface"),
subsurface_canonical_version(),
translate("gettextFromC", "dive plan</b> (surface interval "),
- FRACTION(diveplan->surface_interval / 60, 60),
+ FRACTION_TUPLE(diveplan->surface_interval / 60, 60),
translate("gettextFromC", "created on"),
get_current_date().c_str());
}
@@ -234,16 +234,16 @@ extern "C" void add_plan_to_notes(struct diveplan *diveplan, struct dive *dive,
buf += casprintf_loc(translate("gettextFromC", "%s to %.*f %s in %d:%02d min - runtime %d:%02u on %s (SP = %.1fbar)"),
dp->depth.mm < lastprintdepth ? translate("gettextFromC", "Ascend") : translate("gettextFromC", "Descend"),
decimals, depthvalue, depth_unit,
- FRACTION(dp->time - lasttime, 60),
- FRACTION(dp->time, 60),
+ FRACTION_TUPLE(dp->time - lasttime, 60),
+ FRACTION_TUPLE(dp->time, 60),
gasname(gasmix),
(double) dp->setpoint / 1000.0);
} else {
buf += casprintf_loc(translate("gettextFromC", "%s to %.*f %s in %d:%02d min - runtime %d:%02u on %s"),
dp->depth.mm < lastprintdepth ? translate("gettextFromC", "Ascend") : translate("gettextFromC", "Descend"),
decimals, depthvalue, depth_unit,
- FRACTION(dp->time - lasttime, 60),
- FRACTION(dp->time, 60),
+ FRACTION_TUPLE(dp->time - lasttime, 60),
+ FRACTION_TUPLE(dp->time, 60),
gasname(gasmix));
}
@@ -256,15 +256,15 @@ extern "C" void add_plan_to_notes(struct diveplan *diveplan, struct dive *dive,
if (dp->setpoint) {
buf += casprintf_loc(translate("gettextFromC", "Stay at %.*f %s for %d:%02d min - runtime %d:%02u on %s (SP = %.1fbar CCR)"),
decimals, depthvalue, depth_unit,
- FRACTION(dp->time - lasttime, 60),
- FRACTION(dp->time, 60),
+ FRACTION_TUPLE(dp->time - lasttime, 60),
+ FRACTION_TUPLE(dp->time, 60),
gasname(gasmix),
(double) dp->setpoint / 1000.0);
} else {
buf += casprintf_loc(translate("gettextFromC", "Stay at %.*f %s for %d:%02d min - runtime %d:%02u on %s %s"),
decimals, depthvalue, depth_unit,
- FRACTION(dp->time - lasttime, 60),
- FRACTION(dp->time, 60),
+ FRACTION_TUPLE(dp->time - lasttime, 60),
+ FRACTION_TUPLE(dp->time, 60),
gasname(gasmix),
translate("gettextFromC", divemode_text_ui[dp->divemode]));
}
@@ -594,7 +594,7 @@ extern "C" void add_plan_to_notes(struct diveplan *diveplan, struct dive *dive,
buf += "<div>\n";
o2warning_exist = true;
temp = casprintf_loc(translate("gettextFromC", "high pO₂ value %.2f at %d:%02u with gas %s at depth %.*f %s"),
- pressures.o2, FRACTION(dp->time, 60), gasname(gasmix), decimals, depth_value, depth_unit);
+ pressures.o2, FRACTION_TUPLE(dp->time, 60), gasname(gasmix), decimals, depth_value, depth_unit);
buf += format_string_std("<span style='color: red;'>%s </span> %s<br/>\n", translate("gettextFromC", "Warning:"), temp.c_str());
} else if (pressures.o2 < 0.16) {
const char *depth_unit;
@@ -604,7 +604,7 @@ extern "C" void add_plan_to_notes(struct diveplan *diveplan, struct dive *dive,
buf += "<div>";
o2warning_exist = true;
temp = casprintf_loc(translate("gettextFromC", "low pO₂ value %.2f at %d:%02u with gas %s at depth %.*f %s"),
- pressures.o2, FRACTION(dp->time, 60), gasname(gasmix), decimals, depth_value, depth_unit);
+ pressures.o2, FRACTION_TUPLE(dp->time, 60), gasname(gasmix), decimals, depth_value, depth_unit);
buf += format_string_std("<span style='color: red;'>%s </span> %s<br/>\n", translate("gettextFromC", "Warning:"), temp.c_str());
}
}


@@ -762,7 +762,8 @@ static void setup_gas_sensor_pressure(const struct dive *dive, const struct dive
std::vector<int> last(num_cyl, INT_MAX);
const struct divecomputer *secondary;
- unsigned prev = (unsigned)explicit_first_cylinder(dive, dc);
+ int prev = explicit_first_cylinder(dive, dc);
+ prev = prev >= 0 ? prev : 0;
seen[prev] = 1;
for (ev = get_next_event(dc->events, "gaschange"); ev != NULL; ev = get_next_event(ev->next, "gaschange")) {
@@ -1367,8 +1368,8 @@ static std::vector<std::string> plot_string(const struct dive *d, const struct p
std::vector<std::string> res;
depthvalue = get_depth_units(entry->depth, NULL, &depth_unit);
- res.push_back(casprintf_loc(translate("gettextFromC", "@: %d:%02d"), FRACTION(entry->sec, 60), depthvalue));
- res.push_back(casprintf_loc(translate("gettextFromC", "D: %.1f%s"), depth_unit));
+ res.push_back(casprintf_loc(translate("gettextFromC", "@: %d:%02d"), FRACTION_TUPLE(entry->sec, 60)));
+ res.push_back(casprintf_loc(translate("gettextFromC", "D: %.1f%s"), depthvalue, depth_unit));
for (cyl = 0; cyl < pi->nr_cylinders; cyl++) {
int mbar = get_plot_pressure(pi, idx, cyl);
if (!mbar)
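
The first hunk above turns `prev` into a signed `int` and clamps a negative result (no explicit first cylinder) to 0 before it is used as an index; the second fixes the arguments passed to the two `casprintf_loc()` calls. A self-contained sketch of the clamping pattern, with hypothetical names:

```cpp
#include <vector>

// Hypothetical stand-in: returns -1 when no explicit first cylinder exists.
static int explicit_first_cylinder_or_negative(bool has_explicit)
{
	return has_explicit ? 2 : -1;
}

int main()
{
	std::vector<int> seen(3, 0);
	int prev = explicit_first_cylinder_or_negative(false);
	// Clamp before indexing: in the old code the negative value was cast to
	// unsigned, producing a huge, out-of-range index.
	prev = prev >= 0 ? prev : 0;
	seen[prev] = 1;
	return 0;
}
```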


@@ -135,9 +135,9 @@ static void put_gasmix(struct membuffer *b, struct gasmix mix)
int he = mix.he.permille;
if (o2) {
- put_format(b, " o2=%u.%u%%", FRACTION(o2, 10));
+ put_format(b, " o2=%u.%u%%", FRACTION_TUPLE(o2, 10));
if (he)
- put_format(b, " he=%u.%u%%", FRACTION(he, 10));
+ put_format(b, " he=%u.%u%%", FRACTION_TUPLE(he, 10));
}
}
@@ -251,7 +251,7 @@ static void save_sample(struct membuffer *b, struct sample *sample, struct sampl
{
int idx;
- put_format(b, "%3u:%02u", FRACTION(sample->time.seconds, 60));
+ put_format(b, "%3u:%02u", FRACTION_TUPLE(sample->time.seconds, 60));
put_milli(b, " ", sample->depth.mm, "m");
put_temperature(b, sample->temperature, " ", "°C");
@@ -293,11 +293,11 @@ static void save_sample(struct membuffer *b, struct sample *sample, struct sampl
/* the deco/ndl values are stored whenever they change */
if (sample->ndl.seconds != old->ndl.seconds) {
- put_format(b, " ndl=%u:%02u", FRACTION(sample->ndl.seconds, 60));
+ put_format(b, " ndl=%u:%02u", FRACTION_TUPLE(sample->ndl.seconds, 60));
old->ndl = sample->ndl;
}
if (sample->tts.seconds != old->tts.seconds) {
- put_format(b, " tts=%u:%02u", FRACTION(sample->tts.seconds, 60));
+ put_format(b, " tts=%u:%02u", FRACTION_TUPLE(sample->tts.seconds, 60));
old->tts = sample->tts;
}
if (sample->in_deco != old->in_deco) {
@@ -305,7 +305,7 @@ static void save_sample(struct membuffer *b, struct sample *sample, struct sampl
old->in_deco = sample->in_deco;
}
if (sample->stoptime.seconds != old->stoptime.seconds) {
- put_format(b, " stoptime=%u:%02u", FRACTION(sample->stoptime.seconds, 60));
+ put_format(b, " stoptime=%u:%02u", FRACTION_TUPLE(sample->stoptime.seconds, 60));
old->stoptime = sample->stoptime;
}
@@ -320,7 +320,7 @@ static void save_sample(struct membuffer *b, struct sample *sample, struct sampl
}
if (sample->rbt.seconds != old->rbt.seconds) {
- put_format(b, " rbt=%u:%02u", FRACTION(sample->rbt.seconds, 60));
+ put_format(b, " rbt=%u:%02u", FRACTION_TUPLE(sample->rbt.seconds, 60));
old->rbt.seconds = sample->rbt.seconds;
}
@@ -393,7 +393,7 @@ static void save_samples(struct membuffer *b, struct dive *dive, struct divecomp
static void save_one_event(struct membuffer *b, struct dive *dive, struct event *ev)
{
- put_format(b, "event %d:%02d", FRACTION(ev->time.seconds, 60));
+ put_format(b, "event %d:%02d", FRACTION_TUPLE(ev->time.seconds, 60));
show_index(b, ev->type, "type=", "");
show_index(b, ev->flags, "flags=", "");
@@ -456,7 +456,7 @@ static void create_dive_buffer(struct dive *dive, struct membuffer *b)
{
pressure_t surface_pressure = un_fixup_surface_pressure(dive);
if (dive->dc.duration.seconds > 0)
- put_format(b, "duration %u:%02u min\n", FRACTION(dive->dc.duration.seconds, 60));
+ put_format(b, "duration %u:%02u min\n", FRACTION_TUPLE(dive->dc.duration.seconds, 60));
SAVE("rating", rating);
SAVE("visibility", visibility);
SAVE("wavesize", wavesize);
@@ -653,7 +653,7 @@ static int save_one_picture(git_repository *repo, struct dir *dir, struct pictur
h = offset / 3600;
offset -= h *3600;
return blob_insert(repo, dir, &buf, "%c%02u=%02u=%02u",
- sign, h, FRACTION(offset, 60));
+ sign, h, FRACTION_TUPLE(offset, 60));
}
static int save_pictures(git_repository *repo, struct dir *dir, struct dive *dive)


@@ -163,8 +163,8 @@ static void put_cylinder_HTML(struct membuffer *b, struct dive *dive)
}
if (cylinder->gasmix.o2.permille) {
- put_format(b, "\"O2\":\"%u.%u%%\",", FRACTION(cylinder->gasmix.o2.permille, 10));
- put_format(b, "\"He\":\"%u.%u%%\"", FRACTION(cylinder->gasmix.he.permille, 10));
+ put_format(b, "\"O2\":\"%u.%u%%\",", FRACTION_TUPLE(cylinder->gasmix.o2.permille, 10));
+ put_format(b, "\"He\":\"%u.%u%%\"", FRACTION_TUPLE(cylinder->gasmix.he.permille, 10));
} else {
write_attribute(b, "O2", "Air", "");
}
@@ -364,7 +364,7 @@ static void write_one_dive(struct membuffer *b, struct dive *dive, const char *p
put_format(b, "\"surge\":%d,", dive->surge);
put_format(b, "\"chill\":%d,", dive->chill);
put_format(b, "\"dive_duration\":\"%u:%02u min\",",
- FRACTION(dive->duration.seconds, 60));
+ FRACTION_TUPLE(dive->duration.seconds, 60));
put_string(b, "\"temperature\":{");
put_HTML_airtemp(b, dive, "\"air\":\"", "\",");
put_HTML_watertemp(b, dive, "\"water\":\"", "\"");


@@ -179,7 +179,7 @@ static void put_st_event(struct membuffer *b, struct plot_data *entry, int offse
put_video_time(b, entry->sec - offset);
put_video_time(b, (entry+1)->sec - offset < length ? (entry+1)->sec - offset : length);
put_format(b, "Default,,0,0,0,,");
- put_format(b, "%d:%02d ", FRACTION(entry->sec, 60));
+ put_format(b, "%d:%02d ", FRACTION_TUPLE(entry->sec, 60));
value = get_depth_units(entry->depth, &decimals, &unit);
put_format(b, "D=%02.2f %s ", value, unit);
if (entry->temperature) {
@@ -189,10 +189,10 @@ static void put_st_event(struct membuffer *b, struct plot_data *entry, int offse
// Only show NDL if it is not essentially infinite, show TTS for mandatory stops.
if (entry->ndl_calc < 3600) {
if (entry->ndl_calc > 0)
- put_format(b, "NDL=%d:%02d ", FRACTION(entry->ndl_calc, 60));
+ put_format(b, "NDL=%d:%02d ", FRACTION_TUPLE(entry->ndl_calc, 60));
else
if (entry->tts_calc > 0)
- put_format(b, "TTS=%d:%02d ", FRACTION(entry->tts_calc, 60));
+ put_format(b, "TTS=%d:%02d ", FRACTION_TUPLE(entry->tts_calc, 60));
}
if (entry->surface_gf > 0.0) {
put_format(b, "sGF=%.1f%% ", entry->surface_gf);


@@ -169,9 +169,9 @@ static void put_gasmix(struct membuffer *b, struct gasmix mix)
int he = mix.he.permille;
if (o2) {
- put_format(b, " o2='%u.%u%%'", FRACTION(o2, 10));
+ put_format(b, " o2='%u.%u%%'", FRACTION_TUPLE(o2, 10));
if (he)
- put_format(b, " he='%u.%u%%'", FRACTION(he, 10));
+ put_format(b, " he='%u.%u%%'", FRACTION_TUPLE(he, 10));
}
}
@@ -236,7 +236,7 @@ static void save_sample(struct membuffer *b, struct sample *sample, struct sampl
{
int idx;
- put_format(b, " <sample time='%u:%02u min'", FRACTION(sample->time.seconds, 60));
+ put_format(b, " <sample time='%u:%02u min'", FRACTION_TUPLE(sample->time.seconds, 60));
put_milli(b, " depth='", sample->depth.mm, " m'");
if (sample->temperature.mkelvin && sample->temperature.mkelvin != old->temperature.mkelvin) {
put_temperature(b, sample->temperature, " temp='", " C'");
@@ -278,15 +278,15 @@ static void save_sample(struct membuffer *b, struct sample *sample, struct sampl
/* the deco/ndl values are stored whenever they change */
if (sample->ndl.seconds != old->ndl.seconds) {
- put_format(b, " ndl='%u:%02u min'", FRACTION(sample->ndl.seconds, 60));
+ put_format(b, " ndl='%u:%02u min'", FRACTION_TUPLE(sample->ndl.seconds, 60));
old->ndl = sample->ndl;
}
if (sample->tts.seconds != old->tts.seconds) {
- put_format(b, " tts='%u:%02u min'", FRACTION(sample->tts.seconds, 60));
+ put_format(b, " tts='%u:%02u min'", FRACTION_TUPLE(sample->tts.seconds, 60));
old->tts = sample->tts;
}
if (sample->rbt.seconds != old->rbt.seconds) {
- put_format(b, " rbt='%u:%02u min'", FRACTION(sample->rbt.seconds, 60));
+ put_format(b, " rbt='%u:%02u min'", FRACTION_TUPLE(sample->rbt.seconds, 60));
old->rbt = sample->rbt;
}
if (sample->in_deco != old->in_deco) {
@@ -294,7 +294,7 @@ static void save_sample(struct membuffer *b, struct sample *sample, struct sampl
old->in_deco = sample->in_deco;
}
if (sample->stoptime.seconds != old->stoptime.seconds) {
- put_format(b, " stoptime='%u:%02u min'", FRACTION(sample->stoptime.seconds, 60));
+ put_format(b, " stoptime='%u:%02u min'", FRACTION_TUPLE(sample->stoptime.seconds, 60));
old->stoptime = sample->stoptime;
}
@@ -355,7 +355,7 @@ static void save_sample(struct membuffer *b, struct sample *sample, struct sampl
static void save_one_event(struct membuffer *b, struct dive *dive, struct event *ev)
{
- put_format(b, " <event time='%d:%02d min'", FRACTION(ev->time.seconds, 60));
+ put_format(b, " <event time='%d:%02d min'", FRACTION_TUPLE(ev->time.seconds, 60));
show_index(b, ev->type, "type='", "'");
show_index(b, ev->flags, "flags='", "'");
if (!strcmp(ev->name,"modechange"))
@@ -490,7 +490,7 @@ static void save_picture(struct membuffer *b, struct picture *pic)
sign = '-';
offset = -offset;
}
- put_format(b, " offset='%c%u:%02u min'", sign, FRACTION(offset, 60));
+ put_format(b, " offset='%c%u:%02u min'", sign, FRACTION_TUPLE(offset, 60));
}
put_location(b, &pic->location, " gps='","'");
@@ -525,7 +525,7 @@ extern "C" void save_one_dive_to_mb(struct membuffer *b, struct dive *dive, bool
// These three are calculated, and not read when loading.
// But saving them into the XML is useful for data export.
if (dive->sac > 100)
- put_format(b, " sac='%d.%03d l/min'", FRACTION(dive->sac, 1000));
+ put_format(b, " sac='%d.%03d l/min'", FRACTION_TUPLE(dive->sac, 1000));
if (dive->otu)
put_format(b, " otu='%d'", dive->otu);
if (dive->maxcns)
@@ -541,7 +541,7 @@ extern "C" void save_one_dive_to_mb(struct membuffer *b, struct dive *dive, bool
put_pressure(b, surface_pressure, " airpressure='", " bar'");
if (dive->dc.duration.seconds > 0)
put_format(b, " duration='%u:%02u min'>\n",
- FRACTION(dive->dc.duration.seconds, 60));
+ FRACTION_TUPLE(dive->dc.duration.seconds, 60));
else
put_format(b, ">\n");
save_overview(b, dive, anonymize);


@@ -38,10 +38,10 @@
#endif
#include "errorhelper.h"
- #define INFO(context, fmt, ...) report_info(stderr, "INFO: " fmt, ##__VA_ARGS__)
- #define ERROR(context, fmt, ...) report_info(stderr, "ERROR: " fmt, ##__VA_ARGS__)
+ #define INFO(fmt, ...) report_info("INFO: " fmt, ##__VA_ARGS__)
+ #define ERROR(fmt, ...) report_info("ERROR: " fmt, ##__VA_ARGS__)
//#define SYSERROR(context, errcode) ERROR(__FILE__ ":" __LINE__ ": %s", strerror(errcode))
- #define SYSERROR(context, errcode) ;
+ #define SYSERROR(errcode) (void)errcode
#include "libdivecomputer.h"
#include <libdivecomputer/context.h>
@@ -119,7 +119,7 @@ static dc_status_t serial_ftdi_sleep (void *io, unsigned int timeout)
if (device == NULL)
return DC_STATUS_INVALIDARGS;
- INFO (device->context, "Sleep: value=%u", timeout);
+ INFO ("Sleep: value=%u", timeout);
#ifdef _WIN32
Sleep((DWORD)timeout);
@@ -130,7 +130,7 @@ static dc_status_t serial_ftdi_sleep (void *io, unsigned int timeout)
while (nanosleep (&ts, &ts) != 0) {
if (errno != EINTR ) {
- SYSERROR (device->context, errno);
+ SYSERROR (errno);
return DC_STATUS_IO;
}
}
@@ -143,7 +143,7 @@ static dc_status_t serial_ftdi_sleep (void *io, unsigned int timeout)
// Used internally for opening ftdi devices
static int serial_ftdi_open_device (struct ftdi_context *ftdi_ctx)
{
- INFO(0, "serial_ftdi_open_device called");
+ INFO("serial_ftdi_open_device called");
int accepted_pids[] = {
0x6001, 0x6010, 0x6011, // Suunto (Smart Interface), Heinrichs Weikamp
0x6015, // possibly Aqualung
@@ -156,7 +156,7 @@ static int serial_ftdi_open_device (struct ftdi_context *ftdi_ctx)
for (i = 0; i < num_accepted_pids; i++) {
pid = accepted_pids[i];
ret = ftdi_usb_open (ftdi_ctx, VID, pid);
- INFO(0, "FTDI tried VID %04x pid %04x ret %d", VID, pid, ret);
+ INFO("FTDI tried VID %04x pid %04x ret %d", VID, pid, ret);
if (ret == -3) // Device not found
continue;
else
@@ -171,20 +171,20 @@ static int serial_ftdi_open_device (struct ftdi_context *ftdi_ctx)
// Initialise ftdi_context and use it to open the device
static dc_status_t serial_ftdi_open (void **io, dc_context_t *context)
{
- INFO(0, "serial_ftdi_open called");
+ INFO("serial_ftdi_open called");
// Allocate memory.
ftdi_serial_t *device = (ftdi_serial_t *) malloc (sizeof (ftdi_serial_t));
if (device == NULL) {
- INFO(0, "couldn't allocate memory");
- SYSERROR (context, errno);
+ INFO("couldn't allocate memory");
+ SYSERROR (errno);
return DC_STATUS_NOMEMORY;
}
- INFO(0, "setting up ftdi_ctx");
+ INFO("setting up ftdi_ctx");
struct ftdi_context *ftdi_ctx = ftdi_new();
if (ftdi_ctx == NULL) {
- INFO(0, "failed ftdi_new()");
+ INFO("failed ftdi_new()");
free(device);
- SYSERROR (context, errno);
+ SYSERROR (errno);
return DC_STATUS_NOMEMORY;
}
@@ -202,31 +202,31 @@ static dc_status_t serial_ftdi_open (void **io, dc_context_t *context)
device->parity = 0;
// Initialize device ftdi context
- INFO(0, "initialize ftdi_ctx");
+ INFO("initialize ftdi_ctx");
ftdi_init(ftdi_ctx);
if (ftdi_set_interface(ftdi_ctx,INTERFACE_ANY)) {
free(device);
- ERROR (context, "%s", ftdi_get_error_string(ftdi_ctx));
+ ERROR ("%s", ftdi_get_error_string(ftdi_ctx));
return DC_STATUS_IO;
}
- INFO(0, "call serial_ftdi_open_device");
+ INFO("call serial_ftdi_open_device");
if (serial_ftdi_open_device(ftdi_ctx) < 0) {
free(device);
- ERROR (context, "%s", ftdi_get_error_string(ftdi_ctx));
+ ERROR ("%s", ftdi_get_error_string(ftdi_ctx));
return DC_STATUS_IO;
}
if (ftdi_usb_reset(ftdi_ctx)) {
free(device);
- ERROR (context, "%s", ftdi_get_error_string(ftdi_ctx));
+ ERROR ("%s", ftdi_get_error_string(ftdi_ctx));
return DC_STATUS_IO;
}
if (ftdi_usb_purge_buffers(ftdi_ctx)) {
free(device);
- ERROR (context, "%s", ftdi_get_error_string(ftdi_ctx));
+ ERROR ("%s", ftdi_get_error_string(ftdi_ctx));
return DC_STATUS_IO;
}
@@ -252,7 +252,7 @@ static dc_status_t serial_ftdi_close (void *io)
int ret = ftdi_usb_close(device->ftdi_ctx);
if (ret < 0) {
- ERROR (device->context, "Unable to close the ftdi device : %d (%s)",
+ ERROR ("Unable to close the ftdi device : %d (%s)",
ret, ftdi_get_error_string(device->ftdi_ctx));
return ret;
}
@@ -275,7 +275,7 @@ static dc_status_t serial_ftdi_configure (void *io, unsigned int baudrate, unsig
if (device == NULL)
return DC_STATUS_INVALIDARGS;
- INFO (device->context, "Configure: baudrate=%i, databits=%i, parity=%i, stopbits=%i, flowcontrol=%i",
+ INFO ("Configure: baudrate=%i, databits=%i, parity=%i, stopbits=%i, flowcontrol=%i",
baudrate, databits, parity, stopbits, flowcontrol);
enum ftdi_bits_type ft_bits;
@@ -283,7 +283,7 @@ static dc_status_t serial_ftdi_configure (void *io, unsigned int baudrate, unsig
enum ftdi_parity_type ft_parity;
if (ftdi_set_baudrate(device->ftdi_ctx, baudrate) < 0) {
- ERROR (device->context, "%s", ftdi_get_error_string(device->ftdi_ctx));
+ ERROR ("%s", ftdi_get_error_string(device->ftdi_ctx));
return DC_STATUS_IO;
}
@@ -331,7 +331,7 @@ static dc_status_t serial_ftdi_configure (void *io, unsigned int baudrate, unsig
// Set the attributes
if (ftdi_set_line_property(device->ftdi_ctx, ft_bits, ft_stopbits, ft_parity)) {
- ERROR (device->context, "%s", ftdi_get_error_string(device->ftdi_ctx));
+ ERROR ("%s", ftdi_get_error_string(device->ftdi_ctx));
return DC_STATUS_IO;
}
@@ -339,19 +339,19 @@ static dc_status_t serial_ftdi_configure (void *io, unsigned int baudrate, unsig
switch (flowcontrol) {
case DC_FLOWCONTROL_NONE: /**< No flow control */
if (ftdi_setflowctrl(device->ftdi_ctx, SIO_DISABLE_FLOW_CTRL) < 0) {
- ERROR (device->context, "%s", ftdi_get_error_string(device->ftdi_ctx));
+ ERROR ("%s", ftdi_get_error_string(device->ftdi_ctx));
return DC_STATUS_IO;
}
break;
case DC_FLOWCONTROL_HARDWARE: /**< Hardware (RTS/CTS) flow control */
if (ftdi_setflowctrl(device->ftdi_ctx, SIO_RTS_CTS_HS) < 0) {
- ERROR (device->context, "%s", ftdi_get_error_string(device->ftdi_ctx));
+ ERROR ("%s", ftdi_get_error_string(device->ftdi_ctx));
return DC_STATUS_IO;
}
break;
case DC_FLOWCONTROL_SOFTWARE: /**< Software (XON/XOFF) flow control */
if (ftdi_setflowctrl(device->ftdi_ctx, SIO_XON_XOFF_HS) < 0) {
- ERROR (device->context, "%s", ftdi_get_error_string(device->ftdi_ctx));
+ ERROR ("%s", ftdi_get_error_string(device->ftdi_ctx));
return DC_STATUS_IO;
}
break;
@@ -378,7 +378,7 @@ static dc_status_t serial_ftdi_set_timeout (void *io, int timeout)
if (device == NULL)
return DC_STATUS_INVALIDARGS;
- INFO (device->context, "Timeout: value=%i", timeout);
+ INFO ("Timeout: value=%i", timeout);
device->timeout = timeout;
@@ -406,11 +406,11 @@ static dc_status_t serial_ftdi_read (void *io, void *data, size_t size, size_t *
if (n < 0) {
if (n == LIBUSB_ERROR_INTERRUPTED)
continue; //Retry.
- ERROR (device->context, "%s", ftdi_get_error_string(device->ftdi_ctx));
+ ERROR ("%s", ftdi_get_error_string(device->ftdi_ctx));
return DC_STATUS_IO; //Error during read call.
} else if (n == 0) {
if (serial_ftdi_get_msec() - start_time > timeout) {
- ERROR(device->context, "%s", "FTDI read timed out.");
+ ERROR("FTDI read timed out.");
return DC_STATUS_TIMEOUT;
}
serial_ftdi_sleep (device, 1);
@@ -419,7 +419,7 @@ static dc_status_t serial_ftdi_read (void *io, void *data, size_t size, size_t *
nbytes += n;
}
- INFO (device->context, "Read %d bytes", nbytes);
+ INFO ("Read %d bytes", nbytes);
if (actual)
*actual = nbytes;
@@ -441,7 +441,7 @@ static dc_status_t serial_ftdi_write (void *io, const void *data, size_t size, s
if (n < 0) {
if (n == LIBUSB_ERROR_INTERRUPTED)
continue; // Retry.
- ERROR (device->context, "%s", ftdi_get_error_string(device->ftdi_ctx));
+ ERROR ("%s", ftdi_get_error_string(device->ftdi_ctx));
return DC_STATUS_IO; // Error during write call.
} else if (n == 0) {
break; // EOF.
@@ -450,7 +450,7 @@ static dc_status_t serial_ftdi_write (void *io, const void *data, size_t size, s
nbytes += n;
}
- INFO (device->context, "Wrote %d bytes", nbytes);
+ INFO ("Wrote %d bytes", nbytes);
if (actual)
*actual = nbytes;
@@ -467,26 +467,26 @@ static dc_status_t serial_ftdi_purge (void *io, dc_direction_t queue)
size_t input;
serial_ftdi_get_available (io, &input);
- INFO (device->context, "Flush: queue=%u, input=%lu, output=%i", queue, input,
+ INFO ("Flush: queue=%u, input=%lu, output=%i", queue, (unsigned long)input,
serial_ftdi_get_transmitted (device));
switch (queue) {
case DC_DIRECTION_INPUT: /**< Input direction */
if (ftdi_usb_purge_tx_buffer(device->ftdi_ctx)) {
- ERROR (device->context, "%s", ftdi_get_error_string(device->ftdi_ctx));
+ ERROR ("%s", ftdi_get_error_string(device->ftdi_ctx));
return DC_STATUS_IO;
}
break;
case DC_DIRECTION_OUTPUT: /**< Output direction */
if (ftdi_usb_purge_rx_buffer(device->ftdi_ctx)) {
- ERROR (device->context, "%s", ftdi_get_error_string(device->ftdi_ctx));
+ ERROR ("%s", ftdi_get_error_string(device->ftdi_ctx));
return DC_STATUS_IO;
}
break;
case DC_DIRECTION_ALL: /**< All directions */
default:
if (ftdi_usb_reset(device->ftdi_ctx)) {
- ERROR (device->context, "%s", ftdi_get_error_string(device->ftdi_ctx));
+ ERROR ("%s", ftdi_get_error_string(device->ftdi_ctx));
return DC_STATUS_IO;
}
break;
@@ -502,10 +502,10 @@ static dc_status_t serial_ftdi_set_break (void *io, unsigned int level)
if (device == NULL)
return DC_STATUS_INVALIDARGS;
- INFO (device->context, "Break: value=%i", level);
+ INFO ("Break: value=%i", level);
if (ftdi_set_line_property2(device->ftdi_ctx, device->databits, device->stopbits, device->parity, level)) {
- ERROR (device->context, "%s", ftdi_get_error_string(device->ftdi_ctx));
+ ERROR ("%s", ftdi_get_error_string(device->ftdi_ctx));
return DC_STATUS_IO;
}
@@ -519,10 +519,10 @@ static dc_status_t serial_ftdi_set_dtr (void *io, unsigned int value)
if (device == NULL)
return DC_STATUS_INVALIDARGS;
- INFO (device->context, "DTR: value=%u", value);
+ INFO ("DTR: value=%u", value);
if (ftdi_setdtr(device->ftdi_ctx, value)) {
- ERROR (device->context, "%s", ftdi_get_error_string(device->ftdi_ctx));
+ ERROR ("%s", ftdi_get_error_string(device->ftdi_ctx));
return DC_STATUS_IO;
}
@@ -536,10 +536,10 @@ static dc_status_t serial_ftdi_set_rts (void *io, unsigned int level)
if (device == NULL)
return DC_STATUS_INVALIDARGS;
- INFO (device->context, "RTS: value=%u", level);
+ INFO ("RTS: value=%u", level);
if (ftdi_setrts(device->ftdi_ctx, level)) {
- ERROR (device->context, "%s", ftdi_get_error_string(device->ftdi_ctx));
+ ERROR ("%s", ftdi_get_error_string(device->ftdi_ctx));
return DC_STATUS_IO;
}
@@ -565,12 +565,12 @@ dc_status_t ftdi_open(dc_iostream_t **iostream, dc_context_t *context)
.close = serial_ftdi_close,
};
- INFO(device->contxt, "%s", "in ftdi_open");
+ INFO("in ftdi_open");
rc = serial_ftdi_open(&io, context);
if (rc != DC_STATUS_SUCCESS) {
- INFO(device->contxt, "%s", "serial_ftdi_open() failed");
+ INFO("serial_ftdi_open() failed");
return rc;
}
- INFO(device->contxt, "%s", "calling dc_custom_open())");
+ INFO("calling dc_custom_open())");
return dc_custom_open(iostream, context, DC_TRANSPORT_SERIAL, &callbacks, io);
}
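
These hunks drop the unused `dc_context_t` argument from the file-local `INFO`/`ERROR`/`SYSERROR` logging macros. Below is a self-contained sketch of the slimmed-down shape; `std::fprintf` stands in for Subsurface's `report_info()`, which is an assumption made only so the example compiles on its own.

```cpp
#include <cstdio>

// Sketch of the new single-argument-list logging macros.
#define INFO(fmt, ...) std::fprintf(stderr, "INFO: " fmt "\n", ##__VA_ARGS__)
#define ERROR(fmt, ...) std::fprintf(stderr, "ERROR: " fmt "\n", ##__VA_ARGS__)
// Mirrors the new no-op definition above: the error code is simply swallowed.
#define SYSERROR(errcode) (void)(errcode)

int main()
{
	INFO("Sleep: value=%u", 100u);
	ERROR("%s", "FTDI read timed out.");
	SYSERROR(42);
	return 0;
}
```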


@@ -15,13 +15,12 @@
#include "serial_usb_android.h"
- #define INFO(context, fmt, ...) __android_log_print(ANDROID_LOG_DEBUG, __FILE__, "INFO: " fmt "\n", ##__VA_ARGS__)
- #define ERROR(context, fmt, ...) __android_log_print(ANDROID_LOG_DEBUG, __FILE__, "ERROR: " fmt "\n", ##__VA_ARGS__)
+ #define INFO(fmt, ...) __android_log_print(ANDROID_LOG_DEBUG, __FILE__, "INFO: " fmt "\n", ##__VA_ARGS__)
#define TRACE INFO
static dc_status_t serial_usb_android_sleep(void *io, unsigned int timeout)
{
- TRACE (device->context, "%s: %i", __FUNCTION__, timeout);
+ TRACE ("%s: %i", __FUNCTION__, timeout);
QAndroidJniObject *device = static_cast<QAndroidJniObject *>(io);
if (device == nullptr)
@@ -33,7 +32,7 @@ static dc_status_t serial_usb_android_sleep(void *io, unsigned int timeout)
static dc_status_t serial_usb_android_set_timeout(void *io, int timeout)
{
- TRACE (device->context, "%s: %i", __FUNCTION__, timeout);
+ TRACE ("%s: %i", __FUNCTION__, timeout);
QAndroidJniObject *device = static_cast<QAndroidJniObject *>(io);
if (device == nullptr)
@@ -44,7 +43,7 @@ static dc_status_t serial_usb_android_set_timeout(void *io, int timeout)
static dc_status_t serial_usb_android_set_dtr(void *io, unsigned int value)
{
- TRACE (device->context, "%s: %i", __FUNCTION__, value);
+ TRACE ("%s: %i", __FUNCTION__, value);
QAndroidJniObject *device = static_cast<QAndroidJniObject *>(io);
if (device == nullptr)
@@ -55,7 +54,7 @@ static dc_status_t serial_usb_android_set_dtr(void *io, unsigned int value)
static dc_status_t serial_usb_android_set_rts(void *io, unsigned int value)
{
- TRACE (device->context, "%s: %i", __FUNCTION__, value);
+ TRACE ("%s: %i", __FUNCTION__, value);
QAndroidJniObject *device = static_cast<QAndroidJniObject *>(io);
if (device == nullptr)
@@ -66,7 +65,7 @@ static dc_status_t serial_usb_android_set_rts(void *io, unsigned int value)
static dc_status_t serial_usb_android_close(void *io)
{
- TRACE (device->context, "%s", __FUNCTION__);
+ TRACE ("%s", __FUNCTION__);
QAndroidJniObject *device = static_cast<QAndroidJniObject *>(io);
if (device == nullptr)
@@ -79,7 +78,7 @@ static dc_status_t serial_usb_android_close(void *io)
static dc_status_t serial_usb_android_purge(void *io, dc_direction_t direction)
{
- TRACE (device->context, "%s: %i", __FUNCTION__, direction);
+ TRACE ("%s: %i", __FUNCTION__, direction);
QAndroidJniObject *device = static_cast<QAndroidJniObject *>(io);
if (device == nullptr)
@@ -91,7 +90,7 @@ static dc_status_t serial_usb_android_purge(void *io, dc_direction_t direction)
static dc_status_t serial_usb_android_configure(void *io, unsigned int baudrate, unsigned int databits, dc_parity_t parity,
dc_stopbits_t stopbits, dc_flowcontrol_t flowcontrol)
{
- TRACE (device->context, "%s: baudrate=%i, databits=%i, parity=%i, stopbits=%i, flowcontrol=%i", __FUNCTION__,
+ TRACE ("%s: baudrate=%i, databits=%i, parity=%i, stopbits=%i, flowcontrol=%i", __FUNCTION__,
baudrate, databits, parity, stopbits, flowcontrol);
QAndroidJniObject *device = static_cast<QAndroidJniObject *>(io);
@@ -101,30 +100,9 @@ static dc_status_t serial_usb_android_configure(void *io, unsigned int baudrate,
return static_cast<dc_status_t>(device->callMethod<jint>("configure", "(IIII)I", baudrate, databits, parity, stopbits));
}
- /*
- static dc_status_t serial_usb_android_get_available (void *io, size_t *value)
- {
- INFO (device->context, "%s", __FUNCTION__);
- QAndroidJniObject *device = static_cast<QAndroidJniObject*>(io);
- if (device == NULL)
- return DC_STATUS_INVALIDARGS;
- auto retval = device->callMethod<jint>("get_available", "()I");
- if(retval < 0){
- INFO (device->context, "Error in %s, retval %i", __FUNCTION__, retval);
- return static_cast<dc_status_t>(retval);
- }
- *value = retval;
- return DC_STATUS_SUCCESS;
- }
- */
static dc_status_t serial_usb_android_read(void *io, void *data, size_t size, size_t *actual)
{
- TRACE (device->context, "%s: size: %zu", __FUNCTION__, size);
+ TRACE ("%s: size: %zu", __FUNCTION__, size);
QAndroidJniObject *device = static_cast<QAndroidJniObject *>(io);
if (device == NULL)
@@ -137,13 +115,13 @@ static dc_status_t serial_usb_android_read(void *io, void *data, size_t size, si
auto retval = device->callMethod<jint>("read", "([B)I", array);
if (retval < 0) {
env->DeleteLocalRef(array);
- INFO (device->context, "Error in %s, retval %i", __FUNCTION__, retval);
+ INFO ("Error in %s, retval %i", __FUNCTION__, retval);
return static_cast<dc_status_t>(retval);
}
*actual = retval;
env->GetByteArrayRegion(array, 0, retval, (jbyte *) data);
env->DeleteLocalRef(array);
- TRACE (device->context, "%s: actual read size: %i", __FUNCTION__, retval);
+ TRACE ("%s: actual read size: %i", __FUNCTION__, retval);
if (retval < size)
return DC_STATUS_TIMEOUT;
@@ -153,7 +131,7 @@ static dc_status_t serial_usb_android_read(void *io, void *data, size_t size, si
static dc_status_t serial_usb_android_write(void *io, const void *data, size_t size, size_t *actual)
{
- TRACE (device->context, "%s: size: %zu", __FUNCTION__, size);
+ TRACE ("%s: size: %zu", __FUNCTION__, size);
QAndroidJniObject *device = static_cast<QAndroidJniObject *>(io);
if (device == NULL)
@@ -166,17 +144,17 @@ static dc_status_t serial_usb_android_write(void *io, const void *data, size_t s
auto retval = device->callMethod<jint>("write", "([B)I", array);
env->DeleteLocalRef(array);
if (retval < 0) {
- INFO (device->context, "Error in %s, retval %i", __FUNCTION__, retval);
+ INFO ("Error in %s, retval %i", __FUNCTION__, retval);
return static_cast<dc_status_t>(retval);
}
*actual = retval;
- TRACE (device->context, "%s: actual write size: %i", __FUNCTION__, retval);
+ TRACE ("%s: actual write size: %i", __FUNCTION__, retval);
return DC_STATUS_SUCCESS;
}
dc_status_t serial_usb_android_open(dc_iostream_t **iostream, dc_context_t *context, QAndroidJniObject usbDevice, std::string driverClassName)
{
- TRACE(device->contxt, "%s", __FUNCTION__);
+ TRACE("%s", __FUNCTION__);
static const dc_custom_cbs_t callbacks = {
.set_timeout = serial_usb_android_set_timeout, /* set_timeout */
@@ -200,7 +178,7 @@ dc_status_t serial_usb_android_open(dc_iostream_t **iostream, dc_context_t *cont
return DC_STATUS_IO;
QAndroidJniObject *device = new QAndroidJniObject(localdevice);
- TRACE(device->contxt, "%s", "calling dc_custom_open())");
+ TRACE("%s", "calling dc_custom_open())");
return dc_custom_open(iostream, context, DC_TRANSPORT_SERIAL, &callbacks, device);
}


@@ -302,7 +302,7 @@ QString formatDayOfWeek(int day)
QString formatMinutes(int seconds)
{
- return QString::asprintf("%d:%.2d", FRACTION(seconds, 60));
+ return QString::asprintf("%d:%.2d", FRACTION_TUPLE(seconds, 60));
}
QString formatTripTitle(const dive_trip *trip)


@@ -153,17 +153,6 @@ dive_trip_t *create_trip_from_dive(struct dive *dive)
return trip;
}
- dive_trip_t *create_and_hookup_trip_from_dive(struct dive *dive, struct trip_table *trip_table_arg)
- {
- dive_trip_t *dive_trip;
- dive_trip = create_trip_from_dive(dive);
- add_dive_to_trip(dive, dive_trip);
- insert_trip(dive_trip, trip_table_arg);
- return dive_trip;
- }
/* random threshold: three days without diving -> new trip
* this works very well for people who usually dive as part of a trip and don't
* regularly dive at a local facility; this is why trips are an optional feature */


@@ -43,7 +43,6 @@ extern void sort_trip_table(struct trip_table *table);
extern dive_trip_t *alloc_trip(void);
extern dive_trip_t *create_trip_from_dive(struct dive *dive);
- extern dive_trip_t *create_and_hookup_trip_from_dive(struct dive *dive, struct trip_table *trip_table_arg);
extern dive_trip_t *get_dives_to_autogroup(struct dive_table *table, int start, int *from, int *to, bool *allocated);
extern dive_trip_t *get_trip_for_new_dive(struct dive *new_dive, bool *allocated);
extern dive_trip_t *get_trip_by_uniq_id(int tripId);


@@ -21,6 +21,7 @@
 #include <string.h>
 #include <errno.h>
 #include <stdlib.h>
+#include <string>
 #include "gettext.h"
 #include "libdivecomputer.h"
@@ -74,7 +75,7 @@ static int debug_round = 0;
 #endif
 static const char *param_buff[NUM_PARAM_BUFS];
-static char *reqtxt_path;
+static std::string reqtxt_path;
 static int reqtxt_file;
 static int filenr;
 static int number_of_files;
@@ -248,15 +249,15 @@ static long bytes_available(int file)
 return result;
 }
-static int number_of_file(char *path)
+static int number_of_file(const std::string& path)
 {
 int count = 0;
 #ifdef WIN32
 struct _wdirent *entry;
-_WDIR *dirp = (_WDIR *)subsurface_opendir(path);
+_WDIR *dirp = (_WDIR *)subsurface_opendir(path.c_str());
 #else
 struct dirent *entry;
-DIR *dirp = (DIR *)subsurface_opendir(path);
+DIR *dirp = (DIR *)subsurface_opendir(path.c_str());
 #endif
 while (dirp) {
@@ -280,16 +281,23 @@ static int number_of_file(char *path)
 return count;
 }
-static char *build_filename(const char *path, const char *name)
+static std::string build_filename(const std::string& path, const std::string& name)
 {
-int len = strlen(path) + strlen(name) + 2;
-char *buf = (char *)malloc(len);
+std::string str;
 #if WIN32
-snprintf(buf, len, "%s\\%s", path, name);
+str = path + "\\" + name;
 #else
-snprintf(buf, len, "%s/%s", path, name);
+str = path + "/" + name;
 #endif
-return buf;
+return str;
+}
+static std::string build_filename(const std::string& path, const char* name)
+{
+return build_filename(path, std::string(name));
+}
+static std::string build_filename(const char* path, const char* name)
+{
+return build_filename(std::string(path), std::string(name));
 }
 /* Check if there's a req.txt file and get the starting filenr from it.
@@ -298,14 +306,14 @@ static char *build_filename(const char *path, const char *name)
 * code is easy enough */
 static bool uemis_init(const char *path)
 {
-char *ans_path;
+std::string ans_path;
 int i;
 erase_divespot_mapping();
 if (!path)
 return false;
 /* let's check if this is indeed a Uemis DC */
 reqtxt_path = build_filename(path, "req.txt");
-reqtxt_file = subsurface_open(reqtxt_path, O_RDONLY | O_CREAT, 0666);
+reqtxt_file = subsurface_open(reqtxt_path.c_str(), O_RDONLY | O_CREAT, 0666);
 if (reqtxt_file < 0) {
 #if UEMIS_DEBUG & 1
 fprintf(debugfile, ":EE req.txt can't be opened\n");
@@ -334,7 +342,6 @@ static bool uemis_init(const char *path)
 * ANS files. But with a FAT filesystem that isn't possible */
 ans_path = build_filename(path, "ANS");
 number_of_files = number_of_file(ans_path);
-free(ans_path);
 /* initialize the array in which we collect the answers */
 for (i = 0; i < NUM_PARAM_BUFS; i++)
 param_buff[i] = "";
@@ -515,19 +522,13 @@ static void uemis_increased_timeout(int *timeout)
 usleep(*timeout);
 }
-static char *build_ans_path(const char *path, int filenumber)
+static std::string build_ans_path(const std::string& path, int filenumber)
 {
-char *intermediate, *ans_path, fl[13];
-/* Clamp filenumber into the 0..9999 range. This is never necessary,
-* as filenumber can never go above UEMIS_MAX_FILES, but gcc doesn't
-* recognize that and produces very noisy warnings. */
-filenumber = filenumber < 0 ? 0 : filenumber % 10000;
-snprintf(fl, 13, "ANS%d.TXT", filenumber);
+std::string intermediate, ans_path;
+std::string fl = std::string("ANS") + std::to_string(filenumber) + ".TXT";
 intermediate = build_filename(path, "ANS");
 ans_path = build_filename(intermediate, fl);
-free(intermediate);
 return ans_path;
 }
@@ -546,11 +547,11 @@ static bool uemis_get_answer(const char *path, const char *request, int n_param_
 bool found_answer = false;
 bool more_files = true;
 bool answer_in_mbuf = false;
-char *ans_path;
+std::string ans_path;
 int ans_file;
 int timeout = UEMIS_LONG_TIMEOUT;
-reqtxt_file = subsurface_open(reqtxt_path, O_RDWR | O_CREAT, 0666);
+reqtxt_file = subsurface_open(reqtxt_path.c_str(), O_RDWR | O_CREAT, 0666);
 if (reqtxt_file < 0) {
 *error_text = "can't open req.txt";
 #ifdef UEMIS_DEBUG
@@ -599,17 +600,15 @@ static bool uemis_get_answer(const char *path, const char *request, int n_param_
 return false;
 progress_bar_fraction = filenr / (double)UEMIS_MAX_FILES;
 ans_path = build_ans_path(path, filenr - 1);
-ans_file = subsurface_open(ans_path, O_RDONLY, 0666);
+ans_file = subsurface_open(ans_path.c_str(), O_RDONLY, 0666);
 if (ans_file < 0) {
 *error_text = "can't open Uemis response file";
 #ifdef UEMIS_DEBUG
 fprintf(debugfile, "open %s failed with errno %d\n", ans_path, errno);
 #endif
-free(ans_path);
 return false;
 }
 if (read(ans_file, tmp, 100) < 3) {
-free(ans_path);
 close(ans_file);
 return false;
 }
@@ -625,7 +624,6 @@ static bool uemis_get_answer(const char *path, const char *request, int n_param_
 pbuf[3] = 0;
 fprintf(debugfile, "::t %s \"%s...\"\n", ans_path, pbuf);
 #endif
-free(ans_path);
 if (tmp[0] == '1') {
 searching = false;
 if (tmp[1] == 'm') {
@@ -640,10 +638,10 @@ static bool uemis_get_answer(const char *path, const char *request, int n_param_
 more_files = false;
 assembling_mbuf = false;
 }
-reqtxt_file = subsurface_open(reqtxt_path, O_RDWR | O_CREAT, 0666);
+reqtxt_file = subsurface_open(reqtxt_path.c_str(), O_RDWR | O_CREAT, 0666);
 if (reqtxt_file < 0) {
 *error_text = "can't open req.txt";
-report_info("open %s failed with errno %d", reqtxt_path, errno);
+report_info("open %s failed with errno %d", reqtxt_path.c_str(), errno);
 return false;
 }
 trigger_response(reqtxt_file, "n", filenr, file_length);
@@ -655,10 +653,10 @@ static bool uemis_get_answer(const char *path, const char *request, int n_param_
 assembling_mbuf = false;
 searching = false;
 }
-reqtxt_file = subsurface_open(reqtxt_path, O_RDWR | O_CREAT, 0666);
+reqtxt_file = subsurface_open(reqtxt_path.c_str(), O_RDWR | O_CREAT, 0666);
 if (reqtxt_file < 0) {
 *error_text = "can't open req.txt";
-report_info("open %s failed with errno %d", reqtxt_path, errno);
+report_info("open %s failed with errno %d", reqtxt_path.c_str(), errno);
 return false;
 }
 trigger_response(reqtxt_file, "r", filenr, file_length);
@@ -667,16 +665,14 @@ static bool uemis_get_answer(const char *path, const char *request, int n_param_
 if (ismulti && more_files && tmp[0] == '1') {
 int size;
 ans_path = build_ans_path(path, assembling_mbuf ? filenr - 2 : filenr - 1);
-ans_file = subsurface_open(ans_path, O_RDONLY, 0666);
+ans_file = subsurface_open(ans_path.c_str(), O_RDONLY, 0666);
 if (ans_file < 0) {
 *error_text = "can't open Uemis response file";
 #ifdef UEMIS_DEBUG
 fprintf(debugfile, "open %s failed with errno %d\n", ans_path, errno);
 #endif
-free(ans_path);
 return false;
 }
-free(ans_path);
 size = bytes_available(ans_file);
 if (size > 3) {
 char *buf;
@@ -705,13 +701,12 @@ static bool uemis_get_answer(const char *path, const char *request, int n_param_
 if (!ismulti) {
 ans_path = build_ans_path(path, filenr - 1);
-ans_file = subsurface_open(ans_path, O_RDONLY, 0666);
+ans_file = subsurface_open(ans_path.c_str(), O_RDONLY, 0666);
 if (ans_file < 0) {
 *error_text = "can't open Uemis response file";
 #ifdef UEMIS_DEBUG
 fprintf(debugfile, "open %s failed with errno %d\n", ans_path, errno);
 #endif
-free(ans_path);
 return false;
 }
@@ -733,7 +728,6 @@ static bool uemis_get_answer(const char *path, const char *request, int n_param_
 #endif
 }
 size -= 3;
-free(ans_path);
 close(ans_file);
 } else {
 ismulti = false;
@@ -1343,7 +1337,6 @@ const char *do_uemis_import(device_data_t *data)
 #endif
 uemis_info(translate("gettextFromC", "Initialise communication"));
 if (!uemis_init(mountpath)) {
-free(reqtxt_path);
 return translate("gettextFromC", "Uemis init failed");
 }
@@ -1532,7 +1525,6 @@ bail:
 result = param_buff[2];
 }
 free(deviceid);
-free(reqtxt_path);
 if (!data->log->dives->nr)
 result = translate("gettextFromC", ERR_NO_FILES);
 return result;


@@ -380,11 +380,11 @@ void uemis_parse_divelog_binary(char *base64, void *datap)
 add_extra_data(dc, "Serial", buffer);
 snprintf(buffer, sizeof(buffer), "%d", *(uint16_t *)(data + i + 35));
 add_extra_data(dc, "main battery after dive", buffer);
-snprintf(buffer, sizeof(buffer), "%0u:%02u", FRACTION(*(uint16_t *)(data + i + 24), 60));
+snprintf(buffer, sizeof(buffer), "%0u:%02u", FRACTION_TUPLE(*(uint16_t *)(data + i + 24), 60));
 add_extra_data(dc, "no fly time", buffer);
-snprintf(buffer, sizeof(buffer), "%0u:%02u", FRACTION(*(uint16_t *)(data + i + 26), 60));
+snprintf(buffer, sizeof(buffer), "%0u:%02u", FRACTION_TUPLE(*(uint16_t *)(data + i + 26), 60));
 add_extra_data(dc, "no dive time", buffer);
-snprintf(buffer, sizeof(buffer), "%0u:%02u", FRACTION(*(uint16_t *)(data + i + 28), 60));
+snprintf(buffer, sizeof(buffer), "%0u:%02u", FRACTION_TUPLE(*(uint16_t *)(data + i + 28), 60));
 add_extra_data(dc, "desat time", buffer);
 snprintf(buffer, sizeof(buffer), "%u", *(uint16_t *)(data + i + 30));
 add_extra_data(dc, "allowed altitude", buffer);


@@ -13,8 +13,8 @@ extern "C" {
 #include <stdbool.h>
 #endif
-#define FRACTION(n, x) ((unsigned)(n) / (x)), ((unsigned)(n) % (x))
-#define SIGNED_FRAC(n, x) ((n) >= 0 ? '+': '-'), ((n) >= 0 ? (unsigned)(n) / (x) : (-(n) / (x))), ((unsigned)((n) >= 0 ? (n) : -(n)) % (x))
+#define FRACTION_TUPLE(n, x) ((unsigned)(n) / (x)), ((unsigned)(n) % (x))
+#define SIGNED_FRAC_TRIPLET(n, x) ((n) >= 0 ? '+': '-'), ((n) >= 0 ? (unsigned)(n) / (x) : (-(n) / (x))), ((unsigned)((n) >= 0 ? (n) : -(n)) % (x))
 #define O2_IN_AIR 209 // permille
 #define N2_IN_AIR 781
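For reference, the renamed FRACTION_TUPLE macro above still expands to two comma-separated printf-style arguments (quotient and remainder), so call sites such as formatMinutes() keep pairing it with two format specifiers. A minimal standalone sketch of that usage, assuming a plain C translation unit (not code from this diff):

#include <stdio.h>

/* copy of the macro renamed in the hunk above */
#define FRACTION_TUPLE(n, x) ((unsigned)(n) / (x)), ((unsigned)(n) % (x))

int main(void)
{
	int seconds = 3725;
	/* expands to two arguments, 62 and 5, so this prints "62:05" */
	printf("%u:%02u\n", FRACTION_TUPLE(seconds, 60));
	return 0;
}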


@@ -23,7 +23,7 @@
 #include <QBuffer>
 #endif
-DivePlannerWidget::DivePlannerWidget(dive &planned_dive, PlannerWidgets *parent)
+DivePlannerWidget::DivePlannerWidget(dive &planned_dive, int dcNr, PlannerWidgets *parent)
 {
 DivePlannerPointsModel *plannerModel = DivePlannerPointsModel::instance();
 CylindersModel *cylinders = DivePlannerPointsModel::instance()->cylindersModel();
@@ -52,7 +52,7 @@ DivePlannerWidget::DivePlannerWidget(dive &planned_dive, PlannerWidgets *parent)
 view->setColumnHidden(CylindersModel::SENSORS, true);
 view->setItemDelegateForColumn(CylindersModel::TYPE, new TankInfoDelegate(this));
 auto tankUseDelegate = new TankUseDelegate(this);
-tankUseDelegate->setCurrentDC(get_dive_dc(&planned_dive, 0));
+tankUseDelegate->setCurrentDC(get_dive_dc(&planned_dive, dcNr));
 view->setItemDelegateForColumn(CylindersModel::USE, tankUseDelegate);
 connect(ui.cylinderTableWidget, &TableView::addButtonClicked, plannerModel, &DivePlannerPointsModel::addCylinder_clicked);
 connect(ui.tableWidget, &TableView::addButtonClicked, plannerModel, &DivePlannerPointsModel::addDefaultStop);
@@ -185,13 +185,12 @@ void DivePlannerWidget::heightChanged(const int height)
 void DivePlannerWidget::waterTypeUpdateTexts()
 {
-double density;
 /* Do not set text in last/custom element */
 for (int i = 0; i < ui.waterType->count()-1; i++) {
 if (ui.waterType->itemData(i) != QVariant::Invalid) {
 QString densityText = ui.waterType->itemText(i).split("(")[0].trimmed();
-density = ui.waterType->itemData(i).toInt() / 10000.0;
-densityText.append(QString(" (%L1%2)").arg(density, 0, 'f', 2).arg(tr("kg/")));
+double density = ui.waterType->itemData(i).toInt() / 10000.0;
+densityText.append(QStringLiteral(" (%L1%2)").arg(density, 0, 'f', 3).arg(tr("kg/")));
 ui.waterType->setItemText(i, densityText);
 }
 }
@@ -539,7 +538,8 @@ void PlannerDetails::setPlanNotes(QString plan)
 PlannerWidgets::PlannerWidgets() :
 planned_dive(alloc_dive()),
-plannerWidget(*planned_dive, this),
+dcNr(0),
+plannerWidget(*planned_dive, dcNr, this),
 plannerSettingsWidget(this)
 {
 connect(plannerDetails.printPlan(), &QPushButton::pressed, this, &PlannerWidgets::printDecoPlan);
@@ -556,21 +556,27 @@ struct dive *PlannerWidgets::getDive() const
 return planned_dive.get();
 }
-divemode_t PlannerWidgets::getRebreatherMode() const
+int PlannerWidgets::getDcNr()
 {
-return planned_dive->dc.divemode;
+return dcNr;
 }
-void PlannerWidgets::preparePlanDive(const dive *currentDive)
+divemode_t PlannerWidgets::getRebreatherMode() const
+{
+return get_dive_dc_const(planned_dive.get(), dcNr)->divemode;
+}
+void PlannerWidgets::preparePlanDive(const dive *currentDive, int currentDcNr)
 {
 DivePlannerPointsModel::instance()->setPlanMode(DivePlannerPointsModel::PLAN);
 // create a simple starting dive, using the first gas from the just copied cylinders
 DivePlannerPointsModel::instance()->createSimpleDive(planned_dive.get());
+dcNr = 0;
 // plan the dive in the same mode as the currently selected one
 if (currentDive) {
-plannerSettingsWidget.setDiveMode(currentDive->dc.divemode);
-plannerSettingsWidget.setBailoutVisibility(currentDive->dc.divemode);
+plannerSettingsWidget.setDiveMode(get_dive_dc_const(currentDive, currentDcNr)->divemode);
+plannerSettingsWidget.setBailoutVisibility(get_dive_dc_const(currentDive, currentDcNr)->divemode);
 if (currentDive->salinity)
 plannerWidget.setSalinity(currentDive->salinity);
 else // No salinity means salt water
@@ -586,15 +592,16 @@ void PlannerWidgets::planDive()
 plannerWidget.setupStartTime(timestampToDateTime(planned_dive->when)); // This will reload the profile!
 }
-void PlannerWidgets::prepareReplanDive(const dive *d)
+void PlannerWidgets::prepareReplanDive(const dive *currentDive, int currentDcNr)
 {
-copy_dive(d, planned_dive.get());
+copy_dive(currentDive, planned_dive.get());
+dcNr = currentDcNr;
 }
-void PlannerWidgets::replanDive(int currentDC)
+void PlannerWidgets::replanDive()
 {
 DivePlannerPointsModel::instance()->setPlanMode(DivePlannerPointsModel::PLAN);
-DivePlannerPointsModel::instance()->loadFromDive(planned_dive.get(), currentDC);
+DivePlannerPointsModel::instance()->loadFromDive(planned_dive.get(), dcNr);
 plannerWidget.setReplanButton(true);
 plannerWidget.setupStartTime(timestampToDateTime(planned_dive->when));
@@ -603,7 +610,7 @@ void PlannerWidgets::replanDive(int currentDC)
 if (planned_dive->salinity)
 plannerWidget.setSalinity(planned_dive->salinity);
 reset_cylinders(planned_dive.get(), true);
-DivePlannerPointsModel::instance()->cylindersModel()->updateDive(planned_dive.get(), currentDC);
+DivePlannerPointsModel::instance()->cylindersModel()->updateDive(planned_dive.get(), dcNr);
 }
 void PlannerWidgets::printDecoPlan()


@@ -18,7 +18,7 @@ struct dive;
 class DivePlannerWidget : public QWidget {
 Q_OBJECT
 public:
-explicit DivePlannerWidget(dive &planned_dive, PlannerWidgets *parent);
+explicit DivePlannerWidget(dive &planned_dive, int dcNr, PlannerWidgets *parent);
 ~DivePlannerWidget();
 void setReplanButton(bool replan);
 public
@@ -80,17 +80,20 @@ class PlannerWidgets : public QObject {
 public:
 PlannerWidgets();
 ~PlannerWidgets();
-void preparePlanDive(const dive *currentDive); // Create a new planned dive
+void preparePlanDive(const dive *currentDive, int currentDc); // Create a new planned dive
 void planDive();
-void prepareReplanDive(const dive *d); // Make a copy of the dive to be replanned
-void replanDive(int currentDC);
+void prepareReplanDive(const dive *currentDive, int currentDc); // Make a copy of the dive to be replanned
+void replanDive();
 struct dive *getDive() const;
+int getDcNr();
 divemode_t getRebreatherMode() const;
 public
 slots:
 void printDecoPlan();
-public:
+private:
 OwningDivePtr planned_dive;
+int dcNr;
+public:
 DivePlannerWidget plannerWidget;
 PlannerSettingsWidget plannerSettingsWidget;
 PlannerDetails plannerDetails;


@@ -210,7 +210,7 @@
 </property>
 <property name="maximumSize">
 <size>
-<width>90</width>
+<width>100</width>
 <height>16777215</height>
 </size>
 </property>
@@ -232,6 +232,9 @@
 <property name="value">
 <double>1.000000000000000</double>
 </property>
+<property name="decimals">
+<double>3</double>
+</property>
 </widget>
 </item>
 <item row="4" column="0" colspan="4">


@@ -24,7 +24,7 @@
 static bool is_vendor_searchable(QString vendor)
 {
-return vendor == "Uemis" || vendor == "Garmin";
+return vendor == "Uemis" || vendor == "Garmin" || vendor == "FIT";
 }
 DownloadFromDCWidget::DownloadFromDCWidget(const QString &filename, QWidget *parent) : QDialog(parent, QFlag(0)),
@@ -380,8 +380,13 @@ void DownloadFromDCWidget::on_device_currentTextChanged(const QString &device)
 void DownloadFromDCWidget::on_search_clicked()
 {
 if (is_vendor_searchable(ui.vendor->currentText())) {
-QString dialogTitle = ui.vendor->currentText() == "Uemis" ?
-tr("Find Uemis dive computer") : tr("Find Garmin dive computer");
+QString dialogTitle;
+if (ui.vendor->currentText() == "Uemis")
+dialogTitle = tr("Find Uemis dive computer");
+else if (ui.vendor->currentText() == "Garmin")
+dialogTitle = tr("Find Garmin dive computer");
+else if (ui.vendor->currentText() == "FIT")
+dialogTitle = tr("Select diretory to import .fit files from");
 QString dirName = QFileDialog::getExistingDirectory(this,
 dialogTitle,
 QDir::homePath(),


@@ -609,7 +609,7 @@ void MainWindow::on_actionPreferences_triggered()
 void MainWindow::on_actionQuit_triggered()
 {
-if (!okToClose(tr("Please save or cancel the current dive edit before quiting the application.")))
+if (!okToClose(tr("Please save or cancel the current dive edit before quitting the application.")))
 return;
 writeSettings();
@@ -665,8 +665,10 @@ void MainWindow::on_actionReplanDive_triggered()
 {
 if (!plannerStateClean() || !current_dive || !userMayChangeAppState())
 return;
-else if (!is_dc_planner(&current_dive->dc)) {
-if (QMessageBox::warning(this, tr("Warning"), tr("Trying to replan a dive that's not a planned dive."),
+const struct divecomputer *dc = get_dive_dc(current_dive, profile->dc);
+if (!(is_dc_planner(dc) || is_dc_manually_added_dive(dc))) {
+if (QMessageBox::warning(this, tr("Warning"), tr("Trying to replan a dive profile that has not been manually added."),
 QMessageBox::Ok | QMessageBox::Cancel) == QMessageBox::Cancel)
 return;
 }
@@ -675,9 +677,9 @@ void MainWindow::on_actionReplanDive_triggered()
 setApplicationState(ApplicationState::PlanDive);
 disableShortcuts(true);
-plannerWidgets->prepareReplanDive(current_dive);
-profile->setPlanState(plannerWidgets->getDive(), profile->dc);
-plannerWidgets->replanDive(profile->dc);
+plannerWidgets->prepareReplanDive(current_dive, profile->dc);
+profile->setPlanState(plannerWidgets->getDive(), plannerWidgets->getDcNr());
+plannerWidgets->replanDive();
 }
 void MainWindow::on_actionDivePlanner_triggered()
@@ -689,8 +691,8 @@ void MainWindow::on_actionDivePlanner_triggered()
 setApplicationState(ApplicationState::PlanDive);
 disableShortcuts(true);
-plannerWidgets->preparePlanDive(current_dive);
-profile->setPlanState(plannerWidgets->getDive(), 0);
+plannerWidgets->preparePlanDive(current_dive, profile->dc);
+profile->setPlanState(plannerWidgets->getDive(), plannerWidgets->getDcNr());
 plannerWidgets->planDive();
 }
@@ -707,8 +709,9 @@ void MainWindow::on_actionAddDive_triggered()
 d.dc.duration.seconds = 40 * 60;
 d.dc.maxdepth.mm = M_OR_FT(15, 45);
 d.dc.meandepth.mm = M_OR_FT(13, 39); // this creates a resonable looking safety stop
-make_manually_added_dc(&d.dc);
+make_manually_added_dive_dc(&d.dc);
 fake_dc(&d.dc);
+add_default_cylinder(&d);
 fixup_dive(&d);
 Command::addDive(&d, divelog.autogroup, true);
@@ -986,14 +989,10 @@ QString MainWindow::filter_import_dive_sites()
 return f;
 }
-bool MainWindow::askSaveChanges()
+int MainWindow::saveChangesConfirmationBox(QString message)
 {
 QMessageBox response(this);
-QString message = !existing_filename.empty() ?
-tr("Do you want to save the changes that you made in the file %1?").arg(displayedFilename(existing_filename)) :
-tr("Do you want to save the changes that you made in the data file?");
 response.setStandardButtons(QMessageBox::Save | QMessageBox::Discard | QMessageBox::Cancel);
 response.setDefaultButton(QMessageBox::Save);
 response.setText(message);
@@ -1001,8 +1000,17 @@ bool MainWindow::askSaveChanges()
 response.setInformativeText(tr("Changes will be lost if you don't save them."));
 response.setIcon(QMessageBox::Warning);
 response.setWindowModality(Qt::WindowModal);
-int ret = response.exec();
+return response.exec();
+}
+bool MainWindow::askSaveChanges()
+{
+QString message = !existing_filename.empty() ?
+tr("Do you want to save the changes that you made in the file %1?").arg(displayedFilename(existing_filename)) :
+tr("Do you want to save the changes that you made in the data file?");
+int ret = saveChangesConfirmationBox(std::move(message));
 switch (ret) {
 case QMessageBox::Save:
 file_save();
@@ -1057,9 +1065,21 @@ void MainWindow::writeSettings()
 void MainWindow::closeEvent(QCloseEvent *event)
 {
 if (inPlanner()) {
-on_actionQuit_triggered();
-event->ignore();
-return;
+int ret = saveChangesConfirmationBox("Do you want to save the changes that you made in the planner into your dive log?");
+switch (ret) {
+case QMessageBox::Save:
+DivePlannerPointsModel::instance()->savePlan();
+break;
+case QMessageBox::Cancel:
+event->ignore();
+return;
+case QMessageBox::Discard:
+DivePlannerPointsModel::instance()->cancelPlan();
+break;
+}
 }
 if (!Command::isClean() && (askSaveChanges() == false)) {


@@ -178,6 +178,7 @@ private:
 QString filter_import_dive_sites();
 static MainWindow *m_Instance;
 QString displayedFilename(const std::string &fullFilename);
+int saveChangesConfirmationBox(QString message);
 bool askSaveChanges();
 bool okToClose(QString message);
 void closeCurrentFile();


@@ -130,6 +130,7 @@ QWidget *ComboBoxDelegate::createEditor(QWidget *parent, const QStyleOptionViewI
 currCombo.currRow = index.row();
 currCombo.model = const_cast<QAbstractItemModel *>(index.model());
 currCombo.activeText = currCombo.model->data(index).toString();
+currCombo.ignoreSelection = false;
 return comboDelegate;
 }
@@ -217,18 +218,13 @@ void TankInfoDelegate::setModelData(QWidget *, QAbstractItemModel *model, const
 mymodel->setData(IDX(CylindersModel::TYPE), cylinderName, CylindersModel::TEMP_ROLE);
 return;
 }
-int tankSize = 0;
-int tankPressure = 0;
-tank_info *info = get_tank_info(&tank_info_table, qPrintable(cylinderName));
-if (info) {
-// OMG, the units here are a mess.
-tankSize = info->ml != 0 ? info->ml : lrint(cuft_to_l(info->cuft) * 1000.0);
-tankPressure = info->bar != 0 ? info->bar * 1000 : psi_to_mbar(info->psi);
-}
+volume_t tankSize = {0};
+pressure_t tankPressure = {0};
+get_tank_info_data(&tank_info_table, qPrintable(cylinderName), &tankSize, &tankPressure);
 mymodel->setData(IDX(CylindersModel::TYPE), cylinderName, CylindersModel::TEMP_ROLE);
-mymodel->setData(IDX(CylindersModel::WORKINGPRESS), tankPressure, CylindersModel::TEMP_ROLE);
-mymodel->setData(IDX(CylindersModel::SIZE), tankSize, CylindersModel::TEMP_ROLE);
+mymodel->setData(IDX(CylindersModel::WORKINGPRESS), tankPressure.mbar, CylindersModel::TEMP_ROLE);
+mymodel->setData(IDX(CylindersModel::SIZE), tankSize.mliter, CylindersModel::TEMP_ROLE);
 }
 static QAbstractItemModel *createTankInfoModel(QWidget *parent)
@@ -371,8 +367,8 @@ void AirTypesDelegate::setModelData(QWidget *editor, QAbstractItemModel *model,
 }
 AirTypesDelegate::AirTypesDelegate(const dive &d, QObject *parent) :
-ComboBoxDelegate([d] (QWidget *parent) { return new GasSelectionModel(d, parent); },
+ComboBoxDelegate([&d] (QWidget *parent) { return new GasSelectionModel(d, parent); },
 parent, false)
 {
 }


@@ -6,7 +6,6 @@ include_directories(.
 set(SUBSURFACE_PREFERENCES_UI
 preferences_cloud.ui
-preferences_dc.ui
 preferences_defaults.ui
 preferences_equipment.ui
 preferences_georeference.ui
@@ -32,8 +31,6 @@ set(SUBSURFACE_PREFERENCES_LIB_SRCS
 abstractpreferenceswidget.h
 preferences_cloud.cpp
 preferences_cloud.h
-preferences_dc.cpp
-preferences_dc.h
 preferences_defaults.cpp
 preferences_defaults.h
 preferences_equipment.cpp


@@ -1,39 +0,0 @@
// SPDX-License-Identifier: GPL-2.0
#include "preferences_dc.h"
#include "ui_preferences_dc.h"
#include "core/dive.h"
#include "core/settings/qPrefDisplay.h"
#include "core/settings/qPrefCloudStorage.h"
#include "core/settings/qPrefDiveComputer.h"
#include <QFileDialog>
#include <QProcess>
#include <QMessageBox>
PreferencesDc::PreferencesDc(): AbstractPreferencesWidget(tr("Dive download"), QIcon(":preferences-dc-icon"), 3 ), ui(new Ui::PreferencesDc())
{
ui->setupUi(this);
const QSize BUTTON_SIZE = QSize(200, 22);
ui->resetRememberedDCs->resize(BUTTON_SIZE);
}
PreferencesDc::~PreferencesDc()
{
delete ui;
}
void PreferencesDc::on_resetRememberedDCs_clicked()
{
qPrefDiveComputer::set_vendor1(QString());
qPrefDiveComputer::set_vendor2(QString());
qPrefDiveComputer::set_vendor3(QString());
qPrefDiveComputer::set_vendor4(QString());
}
void PreferencesDc::refreshSettings()
{
}
void PreferencesDc::syncSettings()
{
}


@@ -1,27 +0,0 @@
// SPDX-License-Identifier: GPL-2.0
#ifndef PREFERENCES_DC_H
#define PREFERENCES_DC_H
#include "abstractpreferenceswidget.h"
#include "core/pref.h"
namespace Ui {
class PreferencesDc;
}
class PreferencesDc : public AbstractPreferencesWidget {
Q_OBJECT
public:
PreferencesDc();
~PreferencesDc();
void refreshSettings() override;
void syncSettings() override;
public slots:
void on_resetRememberedDCs_clicked();
private:
Ui::PreferencesDc *ui;
};
#endif


@@ -1,97 +0,0 @@
<?xml version="1.0" encoding="UTF-8"?>
<ui version="4.0">
<class>PreferencesDc</class>
<widget class="QWidget" name="PreferencesDc">
<property name="geometry">
<rect>
<x>0</x>
<y>0</y>
<width>561</width>
<height>558</height>
</rect>
</property>
<property name="windowTitle">
<string>Form</string>
</property>
<layout class="QVBoxLayout" name="verticalLayout">
<item>
<widget class="QLabel" name="label_dc_help">
<property name="toolTip">
<string extracomment="Help info 1"/>
</property>
<property name="text">
<string>DIVE COMPUTER</string>
</property>
</widget>
</item>
<item>
<widget class="QGroupBox" name="groupBox_10">
<layout class="QGridLayout" name="gridlayout">
<item row="0" column="0">
<widget class="QLabel" name="label_dc_help1">
<property name="toolTip">
<string extracomment="Help info 1"/>
</property>
<property name="text">
<string>Delete connections</string>
</property>
</widget>
</item>
<item row="1" column="0" colspan="2">
<widget class="QLabel" name="label_dc_help2">
<property name="toolTip">
<string extracomment="Help info 1"/>
</property>
<property name="wordWrap">
<bool>true</bool>
</property>
<property name="text">
<string>When importing dives from a dive computer (DC), Subsurface remembers the connection(s), showing them as selectable buttons in the Download panel. This is useful for DCs using Bluetooth for communication. In order to clear all this information, click on the button below. After clearing the information the buttons on the Download panel disappear and it is necessary to establish new connection(s) with dive computer(s) before importing dives again.</string>
</property>
</widget>
</item>
<item row="2" column="0">
<widget class="QPushButton" name="resetRememberedDCs">
<property name="text">
<string>Delete all dive computer connections</string>
</property>
</widget>
</item>
<item row="2" column="1">
<widget class="QLabel" name="label_dc_help3">
<property name="text">
<string> </string>
</property>
</widget>
</item>
</layout>
</widget>
</item>
<item>
<spacer name="verticalSpacer">
<property name="orientation">
<enum>Qt::Vertical</enum>
</property>
<property name="sizeHint" stdset="0">
<size>
<width>20</width>
<height>40</height>
</size>
</property>
</spacer>
</item>
</layout>
</widget>
<resources/>
<connections>
</connections>
</ui>


@@ -11,7 +11,6 @@
 #include "preferences_cloud.h"
 #include "preferences_equipment.h"
 #include "preferences_media.h"
-#include "preferences_dc.h"
 #include "preferences_log.h"
 #include "preferences_reset.h"
@@ -74,7 +73,6 @@ PreferencesDialog::PreferencesDialog()
 pages.push_back(new PreferencesCloud);
 pages.push_back(new PreferencesEquipment);
 pages.push_back(new PreferencesMedia);
-pages.push_back(new PreferencesDc);
 pages.push_back(new PreferencesLog);
 pages.push_back(new PreferencesReset);
 std::sort(pages.begin(), pages.end(), abstractpreferenceswidget_lessthan);


@@ -52,7 +52,7 @@ void EmptyView::resizeEvent(QResizeEvent *)
 update();
 }
-ProfileWidget::ProfileWidget() : d(nullptr), dc(0), originalDive(nullptr), placingCommand(false)
+ProfileWidget::ProfileWidget() : d(nullptr), dc(0), placingCommand(false)
 {
 ui.setupUi(this);
@@ -122,9 +122,13 @@ ProfileWidget::ProfileWidget() : d(nullptr), dc(0), originalDive(nullptr), placi
 connect(&diveListNotifier, &DiveListNotifier::divesChanged, this, &ProfileWidget::divesChanged);
 connect(&diveListNotifier, &DiveListNotifier::settingsChanged, view.get(), &ProfileWidget2::settingsChanged);
+connect(&diveListNotifier, &DiveListNotifier::cylinderAdded, this, &ProfileWidget::cylindersChanged);
+connect(&diveListNotifier, &DiveListNotifier::cylinderRemoved, this, &ProfileWidget::cylindersChanged);
+connect(&diveListNotifier, &DiveListNotifier::cylinderEdited, this, &ProfileWidget::cylindersChanged);
 connect(view.get(), &ProfileWidget2::stopAdded, this, &ProfileWidget::stopAdded);
 connect(view.get(), &ProfileWidget2::stopRemoved, this, &ProfileWidget::stopRemoved);
 connect(view.get(), &ProfileWidget2::stopMoved, this, &ProfileWidget::stopMoved);
+connect(view.get(), &ProfileWidget2::stopEdited, this, &ProfileWidget::stopEdited);
 ui.profCalcAllTissues->setChecked(qPrefTechnicalDetails::calcalltissues());
 ui.profCalcCeiling->setChecked(qPrefTechnicalDetails::calcceiling());
@@ -157,11 +161,11 @@ void ProfileWidget::setEnabledToolbar(bool enabled)
 b->setEnabled(enabled);
 }
-void ProfileWidget::setDive(const struct dive *d)
+void ProfileWidget::setDive(const struct dive *d, int dcNr)
 {
 stack->setCurrentIndex(1); // show profile
-bool freeDiveMode = d->dc.divemode == FREEDIVE;
+bool freeDiveMode = get_dive_dc_const(d, dcNr)->divemode == FREEDIVE;
 ui.profCalcCeiling->setDisabled(freeDiveMode);
 ui.profCalcCeiling->setDisabled(freeDiveMode);
 ui.profCalcAllTissues ->setDisabled(freeDiveMode);
@@ -192,6 +196,10 @@ void ProfileWidget::plotCurrentDive()
 void ProfileWidget::plotDive(dive *dIn, int dcIn)
 {
+bool endEditMode = false;
+if (editedDive && (dIn != d || dcIn != dc))
+endEditMode = true;
 d = dIn;
 if (dcIn >= 0)
@@ -202,7 +210,7 @@ void ProfileWidget::plotDive(dive *dIn, int dcIn)
 dc = std::min(dc, (int)number_of_computers(current_dive) - 1);
 // Exit edit mode if the dive changed
-if (editedDive && (originalDive != d || editedDc != dc))
+if (endEditMode)
 exitEditMode();
 // If this is a manually added dive and we are not in the planner
@@ -210,19 +218,19 @@ void ProfileWidget::plotDive(dive *dIn, int dcIn)
 if (d && !editedDive &&
 DivePlannerPointsModel::instance()->currentMode() == DivePlannerPointsModel::NOTHING) {
 struct divecomputer *comp = get_dive_dc(d, dc);
-if (comp && is_manually_added_dc(comp) && comp->samples)
+if (comp && is_dc_manually_added_dive(comp) && comp->samples && comp->samples <= 50)
 editDive();
 }
 setEnabledToolbar(d != nullptr);
 if (editedDive) {
-view->plotDive(editedDive.get(), editedDc);
-setDive(editedDive.get());
+view->plotDive(editedDive.get(), dc);
+setDive(editedDive.get(), dc);
 } else if (d) {
 view->setProfileState(d, dc);
 view->resetZoom(); // when switching dive, reset the zoomLevel
 view->plotDive(d, dc);
-setDive(d);
+setDive(d, dc);
 } else {
 view->clear();
 stack->setCurrentIndex(0);
@@ -256,29 +264,50 @@ void ProfileWidget::rotateDC(int dir)
 void ProfileWidget::divesChanged(const QVector<dive *> &dives, DiveField field)
 {
 // If the current dive is not in list of changed dives, do nothing.
-// Only if duration or depth changed, the profile needs to be replotted.
 // Also, if we are currently placing a command, don't do anything.
 // Note that we cannot use Command::placingCommand(), because placing
 // a depth or time change on the maintab requires an update.
 if (!d || !dives.contains(d) || !(field.duration || field.depth) || placingCommand)
 return;
-// If were editing the current dive and not currently
+// If we're editing the current dive and not currently
 // placing command, we have to update the edited dive.
 if (editedDive) {
 copy_dive(d, editedDive.get());
 // TODO: Holy moly that function sends too many signals. Fix it!
-DivePlannerPointsModel::instance()->loadFromDive(editedDive.get(), editedDc);
+DivePlannerPointsModel::instance()->loadFromDive(editedDive.get(), dc);
 }
-plotCurrentDive();
+// Only if duration or depth changed, the profile needs to be replotted.
+if (field.duration || field.depth)
+plotCurrentDive();
 }
-void ProfileWidget::setPlanState(const struct dive *d, int dc)
+void ProfileWidget::cylindersChanged(struct dive *changed, int pos)
+{
+// If the current dive is not in list of changed dives, do nothing.
+// Only if duration or depth changed, the profile needs to be replotted.
+// Also, if we are currently placing a command, don't do anything.
+// Note that we cannot use Command::placingCommand(), because placing
+// a depth or time change on the maintab requires an update.
+if (!d || changed != d || !editedDive)
+return;
+// If we're editing the current dive we have to update the
+// cylinders of the edited dive.
+if (editedDive) {
+copy_cylinders(&d->cylinders, &editedDive.get()->cylinders);
+// TODO: Holy moly that function sends too many signals. Fix it!
+DivePlannerPointsModel::instance()->loadFromDive(editedDive.get(), dc);
+}
+}
+void ProfileWidget::setPlanState(const struct dive *d, int dcNr)
 {
 exitEditMode();
-view->setPlanState(d, dc);
-setDive(d);
+dc = dcNr;
+view->setPlanState(d, dcNr);
+setDive(d, dcNr);
 }
 void ProfileWidget::unsetProfHR()
@@ -296,22 +325,20 @@ void ProfileWidget::unsetProfTissues()
 void ProfileWidget::editDive()
 {
 editedDive.reset(alloc_dive());
-editedDc = dc;
 copy_dive(d, editedDive.get()); // Work on a copy of the dive
-originalDive = d;
-DivePlannerPointsModel::instance()->setPlanMode(DivePlannerPointsModel::ADD);
-DivePlannerPointsModel::instance()->loadFromDive(editedDive.get(), editedDc);
-view->setEditState(editedDive.get(), editedDc);
+DivePlannerPointsModel::instance()->setPlanMode(DivePlannerPointsModel::EDIT);
+DivePlannerPointsModel::instance()->loadFromDive(editedDive.get(), dc);
+view->setEditState(editedDive.get(), dc);
 }
 void ProfileWidget::exitEditMode()
 {
 if (!editedDive)
 return;
 DivePlannerPointsModel::instance()->setPlanMode(DivePlannerPointsModel::NOTHING);
 view->setProfileState(d, dc); // switch back to original dive before erasing the copy.
 editedDive.reset();
-originalDive = nullptr;
 }
 // Update depths of edited dive
@@ -337,25 +364,34 @@ void ProfileWidget::stopAdded()
 {
 if (!editedDive)
 return;
-calcDepth(*editedDive, editedDc);
+calcDepth(*editedDive, dc);
 Setter s(placingCommand, true);
-Command::editProfile(editedDive.get(), editedDc, Command::EditProfileType::ADD, 0);
+Command::editProfile(editedDive.get(), dc, Command::EditProfileType::ADD, 0);
 }
 void ProfileWidget::stopRemoved(int count)
 {
 if (!editedDive)
 return;
-calcDepth(*editedDive, editedDc);
+calcDepth(*editedDive, dc);
 Setter s(placingCommand, true);
-Command::editProfile(editedDive.get(), editedDc, Command::EditProfileType::REMOVE, count);
+Command::editProfile(editedDive.get(), dc, Command::EditProfileType::REMOVE, count);
 }
 void ProfileWidget::stopMoved(int count)
 {
 if (!editedDive)
 return;
-calcDepth(*editedDive, editedDc);
+calcDepth(*editedDive, dc);
 Setter s(placingCommand, true);
-Command::editProfile(editedDive.get(), editedDc, Command::EditProfileType::MOVE, count);
+Command::editProfile(editedDive.get(), dc, Command::EditProfileType::MOVE, count);
+}
+void ProfileWidget::stopEdited()
+{
+if (!editedDive)
+return;
+Setter s(placingCommand, true);
+Command::editProfile(editedDive.get(), dc, Command::EditProfileType::EDIT, 0);
 }


@@ -34,23 +34,23 @@ public:
 private
 slots:
 void divesChanged(const QVector<dive *> &dives, DiveField field);
+void cylindersChanged(struct dive *changed, int pos);
 void unsetProfHR();
 void unsetProfTissues();
 void stopAdded();
 void stopRemoved(int count);
 void stopMoved(int count);
+void stopEdited();
 private:
 std::unique_ptr<EmptyView> emptyView;
 std::vector<QAction *> toolbarActions;
 Ui::ProfileWidget ui;
 QStackedWidget *stack;
-void setDive(const struct dive *d);
+void setDive(const struct dive *d, int dcNr);
 void editDive();
 void exitEditMode();
 void rotateDC(int dir);
 OwningDivePtr editedDive;
-int editedDc;
-dive *originalDive;
 bool placingCommand;
 };


@@ -24,6 +24,10 @@ void TabDiveExtraInfo::updateData(const std::vector<dive *> &, dive *currentDive
 const struct divecomputer *currentdc = get_dive_dc(currentDive, currentDC);
 if (currentdc)
 extraDataModel->updateDiveComputer(currentdc);
+ui->extraData->setVisible(false); // This will cause the resize to include rows outside the current viewport
+ui->extraData->resizeColumnsToContents();
+ui->extraData->setVisible(true);
 }
 void TabDiveExtraInfo::clear()


@@ -224,11 +224,8 @@ void TabDiveInformation::updateData(const std::vector<dive *> &, dive *currentDi
 setIndexNoSignal(ui->atmPressType, 0); // Set the atmospheric pressure combo box to mbar
 salinity_value = get_dive_salinity(currentDive);
 if (salinity_value) { // Set water type indicator (EN13319 = 1.020 g/l)
-if (ui->waterTypeCombo->isVisible()) { // If water salinity is editable then set correct water type in combobox:
-setIndexNoSignal(ui->waterTypeCombo, updateSalinityComboIndex(salinity_value));
-} else { // If water salinity is not editable: show water type as a text label
-ui->waterTypeText->setText(get_water_type_string(salinity_value));
-}
+setIndexNoSignal(ui->waterTypeCombo, updateSalinityComboIndex(salinity_value));
+ui->waterTypeText->setText(get_water_type_string(salinity_value));
 ui->salinityText->setText(get_salinity_string(salinity_value));
 } else {
 setIndexNoSignal(ui->waterTypeCombo, -1);
@@ -349,6 +346,7 @@ void TabDiveInformation::divesChanged(const QVector<dive *> &dives, DiveField fi
 else
 salinity_value = currentDive->salinity;
 setIndexNoSignal(ui->waterTypeCombo, updateSalinityComboIndex(salinity_value));
+ui->waterTypeText->setText(get_water_type_string(salinity_value));
 ui->salinityText->setText(QString("%L1g/").arg(salinity_value / 10.0));
 }


@@ -254,7 +254,7 @@ void TabDiveNotes::updateData(const std::vector<dive *> &, dive *currentDive, in
 ui.LocationLabel->setText(tr("Location"));
 ui.NotesLabel->setText(tr("Notes"));
 ui.tagWidget->setText(QString::fromStdString(taglist_get_tagstring(currentDive->tag_list)));
-bool isManual = is_manually_added_dc(&currentDive->dc);
+bool isManual = is_dc_manually_added_dive(&currentDive->dc);
 ui.depth->setVisible(isManual);
 ui.depthLabel->setVisible(isManual);
 ui.duration->setVisible(isManual);

@@ -1 +1 @@
-Subproject commit 62a29eea15137ba7b0f2e10fae095517cc9d8341
+Subproject commit 9641883f2fc63928c6513895959fe72ed990e117

Some files were not shown because too many files have changed in this diff.