Compare commits

...

79 Commits

Author SHA1 Message Date
Jakob Rath defe5d6549
Merge 114724b0a4 into a7e6c2db68 2024-06-09 14:15:51 +02:00
Jakob Rath 114724b0a4 Replicate all dependencies of a dataset first
Assuming we want to replicate the following pool:

```
NAME            USED  AVAIL  REFER  MOUNTPOINT              ORIGIN
testpool1      1.10M  38.2M   288K  /Volumes/testpool1      -
testpool1/A     326K  38.2M   293K  /Volumes/testpool1/A    testpool1/B@b
testpool1/A/D   303K  38.2M   288K  /Volumes/testpool1/A/D  -
testpool1/B    35.5K  38.2M   292K  /Volumes/testpool1/B    testpool1/C@a
testpool1/C     306K  38.2M   290K  /Volumes/testpool1/C    -
```

Note the clone dependencies: `A -> B -> C`.

Currently, syncoid notices that `A` and `B` are clones and defers syncing them.
There are two problems:

1. Syncing `A/D` fails because we have deferred `A`.

2. The clone relation `A -> B` will not be recreated since the list of deferred datasets does not take into account clone relations between them.

This PR solves both of these problems by collecting all dependencies of a dataset and syncing them before the dataset itself.
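
The dependency-first ordering can be modeled as follows (a hypothetical Python sketch, not syncoid's actual Perl; the `origins` map mirrors the clone relationships in the example pool above):

```python
# Map each dataset to the dataset its ZFS "origin" snapshot lives on,
# i.e. its clone parent. Mirrors the example pool: A -> B -> C.
origins = {
    "testpool1/A": "testpool1/B",   # A is a clone of B@b
    "testpool1/B": "testpool1/C",   # B is a clone of C@a
}

def sync_order(dataset, origins, seen=None):
    """Return datasets in the order they must be synced so that every
    clone origin is replicated before the clone itself."""
    if seen is None:
        seen = set()
    if dataset in seen:
        return []  # already scheduled, or a cycle (e.g. E -> E/D -> E)
    seen.add(dataset)
    order = []
    if dataset in origins:
        order += sync_order(origins[dataset], origins, seen)
    order.append(dataset)
    return order

print(sync_order("testpool1/A", origins))
# -> ['testpool1/C', 'testpool1/B', 'testpool1/A']
```

The `seen` set also keeps the recursion from looping forever in the cyclic case described below, though, as the commit notes, the clone relation itself is not recreated on the first run.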

---

One problematic case remains: if a dataset depends (transitively) on one of its own children, e.g.:

```
NAME            USED  AVAIL  REFER  MOUNTPOINT              ORIGIN
testpool1/E    58.5K  38.7M   298K  /Volumes/testpool1/E    testpool1/E/D@e
testpool1/E/D  37.5K  38.7M   296K  /Volumes/testpool1/E/D  testpool1/A@d
```

Here, the first run of syncoid will fail to sync `E/D`.
I've chosen to ignore this case for now because
1) it seems quite artificial and not like something that would occur in practice very often, and
2) a second run of syncoid will successfully sync `E/D` too (although the clone relation `E -> E/D` is lost).
2024-06-09 14:13:15 +02:00
Jim Salter a7e6c2db68
Merge pull request #920 from phreaker0/dataset-cache
[sanoid] implemented dataset cache and fix race conditions
2024-04-26 16:28:48 -04:00
Christoph Klaffl 9c0468ee45
write cache files in an atomic way to prevent race conditions 2024-04-24 00:09:40 +02:00
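
The atomic pattern introduced here (write a sibling temp file, then rename it over the real one) can be sketched in Python; the cache filename is illustrative:

```python
import os
import tempfile

def write_atomically(path, data):
    """Write data so readers never observe a partially written file:
    write a sibling temp file first, then rename it into place.
    A rename within one filesystem is atomic on POSIX."""
    tmp = path + ".tmp"
    with open(tmp, "w") as fh:
        fh.write(data)
    os.replace(tmp, path)  # atomically replaces any existing cache file

cache = os.path.join(tempfile.mkdtemp(), "snapshots.txt")
write_atomically(cache, "testpool1/A@autosnap_1\n")
```

A concurrent reader of `cache` sees either the old contents or the new ones, never a truncated file, which is exactly the race this commit closes.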
Christoph Klaffl 6f74c7c4b3
* improve performance (especially for monitor commands) by caching the dataset list
* list snapshots only when needed
2024-04-23 23:38:47 +02:00
Jim Salter b31ed6e325
Merge pull request #916 from 0xFelix/zstdmt
syncoid: Add zstdmt compress options
2024-04-22 12:14:40 -04:00
Jim Salter fa2c16d65a
Merge pull request #905 from phreaker0/findoid-relative-path
[findoid] support relative paths
2024-04-22 12:13:43 -04:00
Jim Salter 1207ea0062
Merge pull request #904 from phreaker0/tests-restructure
test adaptions
2024-04-22 12:13:28 -04:00
Jim Salter d800e5e17d
Merge pull request #903 from spicyFajitas/regather_snapshots--delete-target-snaps_task
fix(syncoid): regather $snaps on --delete-target-snapshots flag
2024-04-22 12:12:58 -04:00
Jim Salter 1ee6815e5e
Merge pull request #910 from phreaker0/improve-output
added missing status information about what is done and provide more details
2024-04-22 12:12:28 -04:00
0xFelix 8b7d29d5a0 syncoid: Add zstdmt compress options
Add the zstdmt-fast and zstdmt-slow compress options to allow use of
multithreading when using zstd compression.

Signed-off-by: 0xFelix <felix@matouschek.org>
2024-04-20 18:41:43 +02:00
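
For context, syncoid maps each `--compress` choice to the raw compress/decompress commands it splices into the send pipeline. The sketch below is a hypothetical Python rendering; the exact flags (`-T0` for multithreading, the level numbers) are assumptions, not the PR's literal values:

```python
# Hypothetical table of compressor names to pipeline commands; the
# zstdmt entries pass zstd's -T0 flag to use all available cores.
compressors = {
    "zstd-fast":   {"compress": "zstd -3",      "decompress": "zstd -dc"},
    "zstd-slow":   {"compress": "zstd -19",     "decompress": "zstd -dc"},
    "zstdmt-fast": {"compress": "zstd -T0 -3",  "decompress": "zstd -T0 -dc"},
    "zstdmt-slow": {"compress": "zstd -T0 -19", "decompress": "zstd -T0 -dc"},
}

def pipeline(dataset, snap, method):
    """Assemble an illustrative send pipeline string."""
    c = compressors[method]
    return f"zfs send {dataset}@{snap} | {c['compress']} | ssh target ..."

print(pipeline("pool/data", "daily", "zstdmt-fast"))
```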
Christoph Klaffl b4c8e4b499
Merge branch 'master' into improve-output 2024-04-18 14:30:04 +02:00
Jim Salter 45b1ce9e5d
Merge pull request #911 from phreaker0/fix-error-handling
handle error output for filtered replications
2024-04-18 08:25:30 -04:00
Christoph Klaffl 6c1e31e551
handle error output for filtered replications 2024-04-18 08:22:37 +02:00
Christoph Klaffl eb4fe8a01c
added missing status information about what is done and provide more details 2024-04-18 07:42:47 +02:00
Jim Salter fdbbe28ac7
Merge pull request #909 from phreaker0/socket-rename
rename ssh control socket to avoid problem with length limits and con…
2024-04-17 09:11:04 -04:00
Christoph Klaffl a059054ffb
rename ssh control socket to avoid problem with length limits and conflicts 2024-04-17 08:14:04 +02:00
Christoph Klaffl d7ed4bdf54
support relative paths 2024-04-05 15:24:42 +02:00
Christoph Klaffl 4e86733c1a
missed debug statement 2024-04-05 15:22:13 +02:00
Christoph Klaffl 7c8a34eceb
* proper order of tests
* timing fixes for fast NVME pools
* skip invasive tests by default
2024-04-05 15:20:28 +02:00
Adam Fulton d08b2882b7 finish rebase to master 2024-04-01 13:16:16 -05:00
Adam Fulton f89372967f fix(syncoid): regather $snaps on --delete-target-snapshots flag 2024-04-01 13:12:59 -05:00
Jim Salter 19fc237476
Update INSTALL.md 2024-02-01 15:05:08 -05:00
Jim Salter d5ce1889d6
Create SECURITY.md 2024-02-01 14:59:43 -05:00
Jim Salter 4e101bbc16
Create CONTRIBUTING.md 2024-02-01 14:52:25 -05:00
Jim Salter b420048d95
Create CODE_OF_CONDUCT.md 2024-02-01 14:45:33 -05:00
Jim Salter 5de562eb7f
Update README.md 2024-02-01 14:38:45 -05:00
Jim Salter 7940f65941
Update README.md 2024-02-01 13:58:33 -05:00
Jim Salter 6919bc3324
Update README.md 2024-02-01 13:57:02 -05:00
Jim Salter 7c225a1d7b
Merge pull request #818 from Deltik/fix/815
syncoid: Sort snapshots by `createtxg` if possible (fallback to `creation`)
2024-02-01 13:13:35 -05:00
Jim Salter acdc0938c9
Merge pull request #884 from dlangille/master
sanoid.conf: document two options for recursive
2024-01-26 14:16:44 -05:00
Jim Salter e0bd202c41
Merge pull request #856 from Pajkastare/master
Fixes jimsalterjrs/sanoid#851
2024-01-26 14:16:14 -05:00
Christoph Klaffl 6667f02d35
Update sanoid.conf 2024-01-25 21:13:00 +01:00
Christoph Klaffl 7dae0e5a9b
Merge branch 'master' into master 2024-01-25 21:12:11 +01:00
pajkastare 01053e6cce Removed unnecessary comment, no code change 2024-01-24 13:51:24 +01:00
pajkastare a8c15c977a Fixes jimsalterjrs/sanoid#851, updated based on review in discussion thread 2024-01-24 13:32:22 +01:00
Dan Langille 9ed32d177d sanoid.conf: document two options for recursive
zfs and yes are the options, one uses zfs, the other sanoid code
2024-01-15 09:56:47 -05:00
Jim Salter a5fa5e7bad
Merge pull request #843 from mjeanson/master
Fix typos in syncoid documentation
2024-01-13 21:32:08 -05:00
Jim Salter d60ee1ffc7
Merge pull request #855 from EchterAgo/debian_depends_openzfs_native_deb
debian: add openzfs-zfsutils as an alternative to zfsutils-linux
2024-01-13 21:31:50 -05:00
Jim Salter c30d485383
Merge pull request #872 from Rantherhin/zfs-get
fix zfs-get for "--preserve-properties" and tests
2024-01-13 21:30:56 -05:00
Jim Salter c02defd80b
Merge pull request #841 from thecatontheflat/patch-1
Update INSTALL.md
2024-01-13 21:30:38 -05:00
Christoph Klaffl 790ea544ff
Merge branch 'master' into zfs-get 2024-01-13 23:27:38 +01:00
Jim Salter b100ba43ac
Merge pull request #859 from jan-krieg/master
fix "creation"/"guid" regex detection
2024-01-13 16:33:01 -05:00
Christoph Klaffl 0361faac76
Merge branch 'master' into master 2024-01-13 21:56:31 +01:00
Jim Salter d01eef7555
Merge pull request #846 from jiawen/master-1
Fix tiny typo in README.md
2024-01-13 15:47:19 -05:00
Jim Salter 54c2dacd20
Merge pull request #881 from phreaker0/force-delete-skip-root
prevent destroying of root dataset which leads to infinite loop
2024-01-13 15:44:18 -05:00
Jim Salter 4e8b881da7
Merge pull request #882 from phreaker0/preserve-properties-handle-special-symbols
escape property key and value pair in case of property preservation
2024-01-13 15:41:57 -05:00
Jim Salter af732daccf
Merge pull request #883 from phreaker0/update-send-recv-options
update possible zfs send options
2024-01-13 15:41:41 -05:00
Christoph Klaffl becddb854f
Merge branch 'master' into preserve-properties-handle-special-symbols 2024-01-13 21:34:45 +01:00
Christoph Klaffl 85e7fca30e
Merge branch 'master' into force-delete-skip-root 2024-01-13 21:29:40 +01:00
Christoph Klaffl ca6e60b920
Merge branch 'master' into update-send-recv-options 2024-01-13 21:22:51 +01:00
Jim Salter 680bf23412
Merge pull request #699 from mr-vinn/filter-snaps
Add --include-snaps and --exclude-snaps options to syncoid
2024-01-13 14:45:03 -05:00
Christoph Klaffl 8ce1ea4dc8
fixed refactoring regression 2024-01-13 19:49:20 +01:00
Christoph Klaffl e9eb05e840
Merge branch 'master' into filter-snaps 2024-01-13 19:40:28 +01:00
Christoph Klaffl 6761004939
update possible zfs send options 2024-01-11 21:02:04 +01:00
Christoph Klaffl 4369576ac4
escape property key and value pair in case of property preservation 2024-01-09 20:40:33 +01:00
Christoph Klaffl 48d89c785e
prevent destroying of root dataset which leads to infinite loop because it can't be destroyed 2024-01-09 19:53:03 +01:00
Justin Wolf dbbaac8ac3 modify zfs-get argument order for portability 2023-12-10 21:16:42 -06:00
Jan Krieg 605b7bac1c
fix "creation"/"guid" regex detection 2023-10-29 17:46:28 +01:00
pajkastare a5a6fc0f58 Fixes jimsalterjrs/sanoid#851 2023-10-23 21:43:46 +02:00
Axel Gembe 07b6d6344c
debian: add openzfs-zfsutils as an alternative to zfsutils-linux
The package produced by ZFS 2.2.0 `make native-deb-utils` is called
`openzfs-zfsutils`.
2023-10-15 14:07:09 +07:00
Jiawen (Kevin) Chen 18ccb7df35
Fix tiny typo in README.md 2023-08-14 22:52:16 -07:00
Michael Jeanson 6b874a7e3c Fix typos in syncoid documentation
Signed-off-by: Michael Jeanson <mjeanson@efficios.com>
2023-08-03 16:17:51 -04:00
Vitalii Zurian a881d22c85
Update INSTALL.md 2023-08-01 10:05:46 +02:00
Nick Liu a904ba02f3
enh(run-tests.sh): Sort tests with "general numeric sort"
The sort before tended to be alphabetical, which put test
`8_force_delete_snapshot` after `815_sync_out-of-order_snapshots`, but
`8` should come before `815`.
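
The two orderings are easy to reproduce (a Python model of lexicographic vs. general-numeric comparison; `run-tests.sh` itself relies on GNU `sort`):

```python
import re

tests = ["815_sync_out-of-order_snapshots", "8_force_delete_snapshot",
         "7_preserve_recordsize"]

# Plain lexicographic sort: "815..." sorts before "8_..." because the
# character '1' (0x31) compares lower than '_' (0x5F).
print(sorted(tests))
# -> ['7_preserve_recordsize', '815_sync_out-of-order_snapshots',
#     '8_force_delete_snapshot']

# General numeric sort: compare the leading number, as `sort -g` does.
def numeric_prefix(name):
    return int(re.match(r"\d+", name).group())

print(sorted(tests, key=numeric_prefix))
# -> ['7_preserve_recordsize', '8_force_delete_snapshot',
#     '815_sync_out-of-order_snapshots']
```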

Before:

```
root@demo:~/sanoid/tests/syncoid# ./run-tests.sh
Running test 1_bookmark_replication_intermediate ... [PASS]
Running test 2_bookmark_replication_no_intermediate ... [PASS]
Running test 3_force_delete ... [PASS]
Running test 4_bookmark_replication_edge_case ... [PASS]
Running test 5_reset_resume_state ... mbuffer: error: outputThread: error writing to <stdout> at offset 0x90000: Broken pipe
mbuffer: warning: error during output to <stdout>: Broken pipe
[PASS]
Running test 6_reset_resume_state2 ... [PASS]
Running test 7_preserve_recordsize ... [PASS]
Running test 815_sync_out-of-order_snapshots ... [PASS]
Running test 8_force_delete_snapshot ... [PASS]
```

After:

```
root@demo:~/sanoid/tests/syncoid# ./run-tests.sh
Running test 1_bookmark_replication_intermediate ... [PASS]
Running test 2_bookmark_replication_no_intermediate ... [PASS]
Running test 3_force_delete ... [PASS]
Running test 4_bookmark_replication_edge_case ... [PASS]
Running test 5_reset_resume_state ... mbuffer: error: outputThread: error writing to <stdout> at offset 0xf0000: Broken pipe
mbuffer: warning: error during output to <stdout>: Broken pipe
[PASS]
Running test 6_reset_resume_state2 ... [PASS]
Running test 7_preserve_recordsize ... [PASS]
Running test 8_force_delete_snapshot ... [PASS]
Running test 815_sync_out-of-order_snapshots ... [PASS]
```
2023-04-28 01:00:03 -05:00
Nick Liu b37092f376
test(syncoid): Add test to verify out-of-order snapshot sync
See https://github.com/jimsalterjrs/sanoid/issues/815 for the original
test.
2023-04-25 17:35:45 -05:00
Nick Liu ab361017e7
feat(syncoid): Match snapshots to bookmarks by `createtxg` if possible
This is a continuation of a previous commit to sort snapshots by
`createtxg` if possible.  Now, we have to match the behavior when
selecting an appropriate snapshot based on the transaction group of the
relevant bookmark in `syncdataset()`.

Supersedes: https://github.com/jimsalterjrs/sanoid/pull/667
2023-04-25 17:07:32 -05:00
Nick Liu 8907e0cb2f
feat(syncoid): Sort snapshots by `createtxg` if possible
It is possible for `creation` of a subsequent snapshot to be in the past
compared to the current snapshot due to system clock discrepancies,
which leads to earlier snapshots not being replicated in the initial
syncoid sync.

Also, `syncoid --no-sync-snap` might not pick up the most recently taken
snapshot if the clock moved backwards before taking that snapshot.

Sorting snapshots by the `createtxg` value is reliable and documented
in `man 8 zfsprops` as the proper way to order snapshots, but it was not
available in ZFS versions before 0.7.  To maintain backwards
compatibility, the sorting falls back to sorting by the `creation`
property, which was the old behavior.

Fixes: https://github.com/jimsalterjrs/sanoid/issues/815
2023-04-28 00:43:47 -05:00
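
The sort-with-fallback described above can be modeled like this (a Python sketch; syncoid itself is Perl, and the property names come from `zfsprops`):

```python
def snapshot_sort_key(props):
    """Prefer the monotonically increasing createtxg; fall back to the
    wall-clock creation time when createtxg is unavailable (ZFS < 0.7)."""
    txg = props.get("createtxg")
    if txg not in (None, "-"):
        return int(txg)
    return int(props["creation"])

snaps = {
    "a": {"createtxg": "100", "creation": "1700000500"},
    # Clock skew: newer wall-clock time, but an older transaction group.
    "b": {"createtxg": "90",  "creation": "1700000900"},
}
ordered = sorted(snaps, key=lambda name: snapshot_sort_key(snaps[name]))
print(ordered)  # -> ['b', 'a']: txg order wins over the skewed clock
```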
Nick Liu 8fabaae5b8
feat(syncoid): Add "createtxg" property to `getsnaps`
The `getsnaps` subroutine now retrieves the "createtxg" property of the
snapshot.

This is necessary to support the fix for
https://github.com/jimsalterjrs/sanoid/issues/815 (Syncoid: Data loss
because getoldestsnapshot() might not choose the first snapshot).
2023-04-25 14:01:54 -05:00
Nick Liu e301b5b153
refactor(syncoid): Simplify getsnaps to parse a hash rather than lines
* The part that was "a little obnoxious" has been rewritten to extract
  the desired properties in a single loop after importing each line into
  a hash rather than processing line by line with a state tracking flag.
* The `getsnapsfallback` subroutine had duplicated logic that has been
  absorbed into `getsnaps` with a recursion argument to enable the
  fallback mode.
2023-04-25 13:58:40 -05:00
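
The "import each line into a hash" approach can be sketched as follows (illustrative Python; syncoid parses tab-separated `zfs get -Hp` output in Perl):

```python
def parse_zfs_get(lines):
    """Fold `zfs get -Hp` style lines (name <TAB> property <TAB> value
    <TAB> source) into a nested dict in a single pass, with no
    line-by-line state tracking flags."""
    snaps = {}
    for line in lines:
        name, prop, value = line.rstrip("\n").split("\t")[:3]
        snaps.setdefault(name, {})[prop] = value
    return snaps

raw = [
    "pool/ds@snap1\tguid\t111\t-",
    "pool/ds@snap1\tcreatetxg\t42\t-",
    "pool/ds@snap2\tguid\t222\t-",
]
print(parse_zfs_get(raw))
```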
Christoph Klaffl 0b27059133
Merge remote-tracking branch 'upstream/master' into filter-snaps 2023-03-21 16:35:20 +01:00
Vinnie Okada 0c577fc735 Deprecate the --exclude option
Add a new option, --exclude-datasets, to replace --exclude. This makes
the naming more consistent now that there are options to filter both
snapshots and datasets.

Also add more information to the README about the distinction between
--exclude-datasets and --(in|ex)clude-snaps.
2022-12-24 12:59:59 -07:00
Vinnie Okada 14ed85163a Filter snapshots in getsnapsfallback() 2022-12-24 12:55:17 -07:00
Vinnie Okada 8e867c6f14 Add new syncoid tests
Test the new --include-snaps and --exclude-snaps options for syncoid.
2022-12-24 12:55:15 -07:00
Vinnie Okada 3a1b1b006f Add new syncoid options to the README
Update the README with the new --include-snaps and --exclude-snaps
 syncoid options.
2022-12-24 12:55:13 -07:00
Vinnie Okada 9a067729a9 Implement include-snaps and exclude-snaps
Add --include-snaps and --exclude-snaps options to filter the snapshots
that syncoid uses.
2022-12-24 12:55:08 -07:00
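
The filtering rule these options implement (per the README text in this diff: a snapshot matching both patterns is excluded) can be sketched as (hypothetical function name, Python rather than syncoid's Perl):

```python
import re

def snapshot_wanted(snapname, include_patterns, exclude_patterns):
    """Apply --include-snaps / --exclude-snaps semantics: a snapshot
    matching any exclude pattern is dropped even if it also matches an
    include pattern; when include patterns are set, it must match one."""
    if any(re.search(p, snapname) for p in exclude_patterns):
        return False
    if include_patterns:
        return any(re.search(p, snapname) for p in include_patterns)
    return True

# Matches both include and exclude: exclude wins.
print(snapshot_wanted("daily-2024-01-01", ["daily"], ["2024-01"]))  # -> False
```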
Vinnie Okada 603c286b50 Don't iterate over snaps twice
Process snapshots in one pass rather than looping separately for both
guid and create time.
2022-12-24 12:52:06 -07:00
Vinnie Okada 09b42d6ade Refactor system calls
Build the zfs send and receive commands in a new subroutine, and
implement other subroutines that can be called instead of building a zfs
command and running it with system();
2022-12-24 12:51:58 -07:00
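
Centralizing command construction instead of ad-hoc string concatenation can be sketched like this (hypothetical helper name; the actual Perl subroutines and the full flag set differ):

```python
def build_send_command(source, snapshot, resume_token=None, raw=False):
    """Assemble a `zfs send` argument list rather than a shell string,
    so callers can exec it directly or hand it to a pipeline builder."""
    cmd = ["zfs", "send"]
    if raw:
        cmd.append("-w")           # raw send for encrypted datasets
    if resume_token:
        cmd += ["-t", resume_token]  # resume takes the token alone
    else:
        cmd.append(f"{source}@{snapshot}")
    return cmd

print(build_send_command("pool/data", "syncoid_2024"))
# -> ['zfs', 'send', 'pool/data@syncoid_2024']
```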
Vinnie Okada c4e7028022 Refactor terminal output
Replace `print` and `warn` statements with a logging function.
2022-12-24 12:48:54 -07:00
23 changed files with 1195 additions and 546 deletions

128
CODE_OF_CONDUCT.md Normal file

@@ -0,0 +1,128 @@
# Contributor Covenant Code of Conduct
## Our Pledge
We as members, contributors, and leaders pledge to make participation in our
community a harassment-free experience for everyone, regardless of age, body
size, visible or invisible disability, ethnicity, sex characteristics, gender
identity and expression, level of experience, education, socio-economic status,
nationality, personal appearance, race, religion, or sexual identity
and orientation.
We pledge to act and interact in ways that contribute to an open, welcoming,
diverse, inclusive, and healthy community.
## Our Standards
Examples of behavior that contributes to a positive environment for our
community include:
* Demonstrating empathy and kindness toward other people
* Being respectful of differing opinions, viewpoints, and experiences
* Giving and gracefully accepting constructive feedback
* Accepting responsibility and apologizing to those affected by our mistakes,
and learning from the experience
* Focusing on what is best not just for us as individuals, but for the
overall community
Examples of unacceptable behavior include:
* The use of sexualized language or imagery, and sexual attention or
advances of any kind
* Trolling, insulting or derogatory comments, and personal or political attacks
* Public or private harassment
* Publishing others' private information, such as a physical or email
address, without their explicit permission
* Other conduct which could reasonably be considered inappropriate in a
professional setting
## Enforcement Responsibilities
Community leaders are responsible for clarifying and enforcing our standards of
acceptable behavior and will take appropriate and fair corrective action in
response to any behavior that they deem inappropriate, threatening, offensive,
or harmful.
Community leaders have the right and responsibility to remove, edit, or reject
comments, commits, code, wiki edits, issues, and other contributions that are
not aligned to this Code of Conduct, and will communicate reasons for moderation
decisions when appropriate.
## Scope
This Code of Conduct applies within all community spaces, and also applies when
an individual is officially representing the community in public spaces.
Examples of representing our community include using an official e-mail address,
posting via an official social media account, or acting as an appointed
representative at an online or offline event.
## Enforcement
Instances of abusive, harassing, or otherwise unacceptable behavior may be
reported to the community leaders responsible for enforcement at
.
All complaints will be reviewed and investigated promptly and fairly.
All community leaders are obligated to respect the privacy and security of the
reporter of any incident.
## Enforcement Guidelines
Community leaders will follow these Community Impact Guidelines in determining
the consequences for any action they deem in violation of this Code of Conduct:
### 1. Correction
**Community Impact**: Use of inappropriate language or other behavior deemed
unprofessional or unwelcome in the community.
**Consequence**: A private, written warning from community leaders, providing
clarity around the nature of the violation and an explanation of why the
behavior was inappropriate. A public apology may be requested.
### 2. Warning
**Community Impact**: A violation through a single incident or series
of actions.
**Consequence**: A warning with consequences for continued behavior. No
interaction with the people involved, including unsolicited interaction with
those enforcing the Code of Conduct, for a specified period of time. This
includes avoiding interactions in community spaces as well as external channels
like social media. Violating these terms may lead to a temporary or
permanent ban.
### 3. Temporary Ban
**Community Impact**: A serious violation of community standards, including
sustained inappropriate behavior.
**Consequence**: A temporary ban from any sort of interaction or public
communication with the community for a specified period of time. No public or
private interaction with the people involved, including unsolicited interaction
with those enforcing the Code of Conduct, is allowed during this period.
Violating these terms may lead to a permanent ban.
### 4. Permanent Ban
**Community Impact**: Demonstrating a pattern of violation of community
standards, including sustained inappropriate behavior, harassment of an
individual, or aggression toward or disparagement of classes of individuals.
**Consequence**: A permanent ban from any sort of public interaction within
the community.
## Attribution
This Code of Conduct is adapted from the [Contributor Covenant][homepage],
version 2.0, available at
https://www.contributor-covenant.org/version/2/0/code_of_conduct.html.
Community Impact Guidelines were inspired by [Mozilla's code of conduct
enforcement ladder](https://github.com/mozilla/diversity).
[homepage]: https://www.contributor-covenant.org
For answers to common questions about this code of conduct, see the FAQ at
https://www.contributor-covenant.org/faq. Translations are available at
https://www.contributor-covenant.org/translations.

1
CONTRIBUTING.md Normal file

@@ -0,0 +1 @@
Any and all contributions made to this project must be compatible with the project's own GPLv3 license.


@@ -160,7 +160,7 @@ Now, proceed to configure [**Sanoid**](#configuration)
Install prerequisite software:
```bash
pkg install p5-Config-Inifiles p5-Capture-Tiny pv mbuffer lzop
pkg install p5-Config-Inifiles p5-Capture-Tiny pv mbuffer lzop sanoid
```
**Additional notes:**
@@ -169,7 +169,7 @@ pkg install p5-Config-Inifiles p5-Capture-Tiny pv mbuffer lzop
* Simplest path workaround is symlinks, eg `ln -s /usr/local/bin/lzop /usr/bin/lzop` or similar, as appropriate to create links in **/usr/bin** to wherever the utilities actually are on your system.
* See note about mbuffer and other things in FREEBSD.readme
* See note about tcsh unpleasantness and other things in FREEBSD.readme
## Alpine Linux / busybox based distributions


@@ -1,6 +1,17 @@
<p align="center"><img src="http://www.openoid.net/wp-content/themes/openoid/images/sanoid_logo.png" alt="sanoid logo" title="sanoid logo"></p>
<table align="center">
<tr>
<td border="1" width="750">
<p align="center">
<img src="http://www.openoid.net/wp-content/themes/openoid/images/sanoid_logo.png" alt="sanoid logo" title="sanoid logo">
</p>
<img src="https://openoid.net/gplv3-127x51.png" width=127 height=51 align="right">
<p align="left">Sanoid is provided to you completely free and libre, now and in perpetuity, via the GPL v3.0 license. If you find the project useful, please consider either a recurring or one-time donation at <a href="https://www.patreon.com/PracticalZFS" target="_blank">Patreon</a> or <a href="https://www.paypal.com/donate/?hosted_button_id=5BLPNV86D4S9N" target="_blank">PayPal</a>—your contributions will support both this project and the Practical ZFS <a href="https://discourse.practicalzfs.com/" target="_blank">forum</a>.
</p>
</td>
</tr>
</table>
<img src="http://openoid.net/gplv3-127x51.png" width=127 height=51 align="right">Sanoid is a policy-driven snapshot management tool for ZFS filesystems. When combined with the Linux KVM hypervisor, you can use it to make your systems <a href="http://openoid.net/transcend" target="_blank">functionally immortal</a>.
Sanoid is a policy-driven snapshot management tool for ZFS filesystems. When combined with the Linux KVM hypervisor, you can use it to make your systems <a href="https://openoid.net/transcend" target="_blank">functionally immortal</a> via automated snapshot management and over-the-air replication.
<p align="center"><a href="https://youtu.be/ZgowLNBsu00" target="_blank"><img src="http://www.openoid.net/sanoid_video_launcher.png" alt="sanoid rollback demo" title="sanoid rollback demo"></a><br clear="all"><sup>(Real time demo: rolling back a full-scale cryptomalware infection in seconds!)</sup></p>
@@ -317,7 +328,7 @@ As of 1.4.18, syncoid also automatically supports and enables resume of interrup
This argument tells syncoid to create a zfs bookmark for the newest snapshot after it got replicated successfully. The bookmark name will be equal to the snapshot name. Only works in combination with the --no-sync-snap option. This can be very useful for irregular replication where the last matching snapshot on the source was already deleted but the bookmark remains so a replication is still possible.
+ --use-hold
This argument tells syncoid to add a hold to the newest snapshot on the source and target after replication succeeds and to remove the hold after the next succesful replication. Setting a hold prevents the snapshots from being destroyed. The hold name incldues the identifier if set. This allows for separate holds in case of replication to multiple targets.
This argument tells syncoid to add a hold to the newest snapshot on the source and target after replication succeeds and to remove the hold after the next successful replication. Setting a hold prevents the snapshots from being destroyed. The hold name includes the identifier if set. This allows for separate holds in case of replication to multiple targets.
+ --preserve-recordsize
@@ -342,7 +353,21 @@ As of 1.4.18, syncoid also automatically supports and enables resume of interrup
+ --exclude=REGEX
The given regular expression will be matched against all datasets which would be synced by this run and excludes them. This argument can be specified multiple times.
__DEPRECATION NOTICE:__ `--exclude` has been deprecated and will be removed in a future release. Please use `--exclude-datasets` instead.
The given regular expression will be matched against all datasets which would be synced by this run and excludes them. This argument can be specified multiple times. The provided regex pattern is matched against the dataset name only; this option does not affect which snapshots are synchronized. If both `--exclude` and `--exclude-datasets` are provided, then `--exclude` is ignored.
+ --exclude-datasets=REGEX
The given regular expression will be matched against all datasets which would be synced by this run and excludes them. This argument can be specified multiple times. The provided regex pattern is matched against the dataset name only; this option does not affect which snapshots are synchronized.
+ --exclude-snaps=REGEX
Exclude specific snapshots that match the given regular expression. The provided regex pattern is matched against the snapshot name only. Can be specified multiple times. If a snapshot matches both the exclude-snaps and include-snaps patterns, then it will be excluded.
+ --include-snaps=REGEX
Only include snapshots that match the given regular expression. The provided regex pattern is matched against the snapshot name only. Can be specified multiple times. If a snapshot matches both the exclude-snaps and include-snaps patterns, then it will be excluded.
+ --no-resume
@@ -391,7 +416,7 @@ As of 1.4.18, syncoid also automatically supports and enables resume of interrup
+ --debug
This prints out quite a lot of additional information during a sanoid run, and is normally not needed.
This prints out quite a lot of additional information during a syncoid run, and is normally not needed.
+ --help

13
SECURITY.md Normal file

@@ -0,0 +1,13 @@
# Security Policy
## Supported Versions
The Sanoid project directly supports both the code in the main branch, and the last two releases found here on GitHub.
Community support is available for all versions, with the understanding that in some cases "upgrade to a newer version" may be the support offered.
If you've installed Sanoid from your distribution's repositories, we're happy to offer community support with the same caveat!
## Reporting a Vulnerability
If you believe you've found a serious security vulnerability in Sanoid, please create an Issue here on GitHub. If you prefer a private contact channel to disclose
particularly sensitive or private details, you may request one in the GitHub Issue you create.


@@ -25,6 +25,9 @@ if ($args{'path'} eq '') {
}
}
# resolve given path to a canonical one
$args{'path'} = Cwd::realpath($args{'path'});
my $dataset = getdataset($args{'path'});
my %versions = getversions($args{'path'}, $dataset);


@@ -12,7 +12,7 @@ Package: sanoid
Architecture: all
Depends: libcapture-tiny-perl,
libconfig-inifiles-perl,
zfsutils-linux | zfs,
zfsutils-linux | zfs | openzfs-zfsutils,
${misc:Depends},
${perl:Depends}
Recommends: gzip,

142
sanoid

@@ -46,26 +46,70 @@ my $zpool = 'zpool';
my $conf_file = "$args{'configdir'}/sanoid.conf";
my $default_conf_file = "$args{'configdir'}/sanoid.defaults.conf";
# parse config file
my %config = init($conf_file,$default_conf_file);
my $cache_dir = $args{'cache-dir'};
my $run_dir = $args{'run-dir'};
make_path($cache_dir);
make_path($run_dir);
# if we call getsnaps(%config,1) it will forcibly update the cache, TTL or no TTL
my $forcecacheupdate = 0;
my $cacheTTL = 1200; # 20 minutes
# Allow a much older snapshot cache file than default if _only_ "--monitor-*" action commands are given
# (ignore "--verbose", "--configdir" etc)
if (
(
$args{'monitor-snapshots'}
|| $args{'monitor-health'}
|| $args{'monitor-capacity'}
) && ! (
$args{'cron'}
|| $args{'force-update'}
|| $args{'take-snapshots'}
|| $args{'prune-snapshots'}
|| $args{'force-prune'}
)
) {
# The command combination above must not assert true for any command that takes or prunes snapshots
$cacheTTL = 18000; # 5 hours
if ($args{'debug'}) { print "DEBUG: command combo means that the cache file (provided it exists) will be allowed to be older than default.\n"; }
}
# snapshot cache
my $cache = "$cache_dir/snapshots.txt";
my $cacheTTL = 900; # 15 minutes
my %snaps = getsnaps( \%config, $cacheTTL, $forcecacheupdate );
# configured dataset cache
my $cachedatasetspath = "$cache_dir/datasets.txt";
my @cachedatasets;
# parse config file
my %config = init($conf_file,$default_conf_file);
my %pruned;
my %capacitycache;
my %snapsbytype = getsnapsbytype( \%config, \%snaps );
my %snaps;
my %snapsbytype;
my %snapsbypath;
my %snapsbypath = getsnapsbypath( \%config, \%snaps );
# get snapshot list only if needed
if ($args{'monitor-snapshots'}
|| $args{'monitor-health'}
|| $args{'cron'}
|| $args{'take-snapshots'}
|| $args{'prune-snapshots'}
|| $args{'force-update'}
|| $args{'debug'}
) {
my $forcecacheupdate = 0;
if ($args{'force-update'}) {
$forcecacheupdate = 1;
}
%snaps = getsnaps( \%config, $cacheTTL, $forcecacheupdate);
%snapsbytype = getsnapsbytype( \%config, \%snaps );
%snapsbypath = getsnapsbypath( \%config, \%snaps );
}
# let's make it a little easier to be consistent passing these hashes in the same order to each sub
my @params = ( \%config, \%snaps, \%snapsbytype, \%snapsbypath );
@@ -74,7 +118,6 @@ if ($args{'debug'}) { $args{'verbose'}=1; blabber (@params); }
if ($args{'monitor-snapshots'}) { monitor_snapshots(@params); }
if ($args{'monitor-health'}) { monitor_health(@params); }
if ($args{'monitor-capacity'}) { monitor_capacity(@params); }
if ($args{'force-update'}) { my $snaps = getsnaps( \%config, $cacheTTL, 1 ); }
if ($args{'cron'}) {
if ($args{'quiet'}) { $args{'verbose'} = 0; }
@@ -265,7 +308,6 @@ sub prune_snapshots {
my ($config, $snaps, $snapsbytype, $snapsbypath) = @_;
my %datestamp = get_date();
my $forcecacheupdate = 0;
foreach my $section (keys %config) {
if ($section =~ /^template/) { next; }
@@ -816,7 +858,7 @@ sub getsnaps {
if (checklock('sanoid_cacheupdate')) {
writelock('sanoid_cacheupdate');
if ($args{'verbose'}) {
if ($args{'force-update'}) {
if ($forcecacheupdate) {
print "INFO: cache forcibly expired - updating from zfs list.\n";
} else {
print "INFO: cache expired - updating from zfs list.\n";
@@ -826,9 +868,10 @@ sub getsnaps {
@rawsnaps = <FH>;
close FH;
open FH, "> $cache" or die 'Could not write to $cache!\n';
open FH, "> $cache.tmp" or die 'Could not write to $cache.tmp!\n';
print FH @rawsnaps;
close FH;
rename("$cache.tmp", "$cache") or die 'Could not rename to $cache!\n';
removelock('sanoid_cacheupdate');
} else {
if ($args{'verbose'}) { print "INFO: deferring cache update - valid cache update lock held by another sanoid process.\n"; }
@@ -891,6 +934,20 @@ sub init {
die "FATAL: you're using sanoid.defaults.conf v$defaults_version, this version of sanoid requires a minimum sanoid.defaults.conf v$MINIMUM_DEFAULTS_VERSION";
}
my @updatedatasets;
# load dataset cache if valid
if (!$args{'force-update'} && -f $cachedatasetspath) {
my ($dev, $ino, $mode, $nlink, $uid, $gid, $rdev, $size, $atime, $mtime, $ctime, $blksize, $blocks) = stat($cachedatasetspath);
if ((time() - $mtime) <= $cacheTTL) {
if ($args{'debug'}) { print "DEBUG: dataset cache not expired (" . (time() - $mtime) . " seconds old with TTL of $cacheTTL): pulling dataset list from cache.\n"; }
open FH, "< $cachedatasetspath";
@cachedatasets = <FH>;
close FH;
}
}
foreach my $section (keys %ini) {
# first up - die with honor if unknown parameters are set in any modules or templates by the user.
@@ -980,6 +1037,10 @@ sub init {
$config{$section}{'path'} = $section;
}
if (! @cachedatasets) {
push (@updatedatasets, "$config{$section}{'path'}\n");
}
# how 'bout some recursion? =)
if ($config{$section}{'zfs_recursion'} && $config{$section}{'zfs_recursion'} == 1 && $config{$section}{'autosnap'} == 1) {
warn "ignored autosnap configuration for '$section' because it's part of a zfs recursion.\n";
@@ -997,6 +1058,10 @@ sub init {
@datasets = getchilddatasets($config{$section}{'path'});
DATASETS: foreach my $dataset(@datasets) {
if (! @cachedatasets) {
push (@updatedatasets, $dataset);
}
chomp $dataset;
if ($zfsRecursive) {
@@ -1028,9 +1093,27 @@ sub init {
$config{$dataset}{'initialized'} = 1;
}
}
}
# update dataset cache if it was unused
if (! @cachedatasets) {
if (checklock('sanoid_cachedatasetupdate')) {
writelock('sanoid_cachedatasetupdate');
if ($args{'verbose'}) {
if ($args{'force-update'}) {
print "INFO: dataset cache forcibly expired - updating from zfs list.\n";
} else {
print "INFO: dataset cache expired - updating from zfs list.\n";
}
}
open FH, "> $cachedatasetspath.tmp" or die 'Could not write to $cachedatasetspath.tmp!\n';
print FH @updatedatasets;
close FH;
rename("$cachedatasetspath.tmp", "$cachedatasetspath") or die 'Could not rename to $cachedatasetspath!\n';
removelock('sanoid_cachedatasetupdate');
} else {
if ($args{'verbose'}) { print "INFO: deferring dataset cache update - valid cache update lock held by another sanoid process.\n"; }
}
}
return %config;
@@ -1580,6 +1663,30 @@ sub getchilddatasets {
my $fs = shift;
my $mysudocmd = '';
# use dataset cache if available
if (@cachedatasets) {
my $foundparent = 0;
my @cachechildren = ();
foreach my $dataset (@cachedatasets) {
chomp $dataset;
my $ret = rindex $dataset, "${fs}/", 0;
if ($ret == 0) {
push (@cachechildren, $dataset);
} else {
if ($dataset eq $fs) {
$foundparent = 1;
}
}
}
# sanity check
if ($foundparent) {
return @cachechildren;
}
# fallback if cache misses items for whatever reason
}
my $getchildrencmd = "$mysudocmd $zfs list -o name -t filesystem,volume -Hr $fs |";
if ($args{'debug'}) { print "DEBUG: getting list of child datasets on $fs using $getchildrencmd...\n"; }
open FH, $getchildrencmd;
@@ -1626,16 +1733,17 @@ sub removecachedsnapshots {
my @rawsnaps = <FH>;
close FH;
open FH, "> $cache" or die 'Could not write to $cache!\n';
open FH, "> $cache.tmp" or die 'Could not write to $cache.tmp!\n';
foreach my $snapline ( @rawsnaps ) {
my @columns = split("\t", $snapline);
my $snap = $columns[0];
print FH $snapline unless ( exists($pruned{$snap}) );
}
close FH;
rename("$cache.tmp", "$cache") or die 'Could not rename to $cache!\n';
removelock('sanoid_cacheupdate');
%snaps = getsnaps(\%config,$cacheTTL,$forcecacheupdate);
%snaps = getsnaps(\%config,$cacheTTL,0);
# clear hash
undef %pruned;
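The cache hunks above replace in-place writes to `$cache` with a write-to-temp-then-rename update, which is what closes the race window between concurrent sanoid readers and writers. A minimal shell sketch of the pattern (the path and contents are illustrative, not sanoid's real cache file):

```shell
#!/bin/sh
# Illustrative only: demo.cache stands in for sanoid's snapshot cache.
cache="/tmp/demo.cache"

# Writers build the new contents in a temp file, so a concurrent reader
# never observes a truncated or half-written cache...
printf 'tank@autosnap_1\ntank@autosnap_2\n' > "${cache}.tmp"

# ...and publish it via rename(2), which atomically replaces the old file
# when source and target are on the same filesystem.
mv "${cache}.tmp" "${cache}"

cat "${cache}"
```

Readers that open `$cache` see either the complete old contents or the complete new contents, never a mix.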


@@ -31,6 +31,10 @@
# you can also handle datasets recursively in an atomic way without the possibility to override settings for child datasets.
[zpoolname/parent2]
use_template = production
# there are two options for recursive: zfs or yes
# * zfs - takes a zfs snapshot with the '-r' flag; zfs recursively snapshots the whole
# dataset tree in one operation, which is consistent.
# * yes - the snapshots are taken one at a time by the sanoid code; not necessarily consistent.
recursive = zfs
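To make the contrast concrete, a hypothetical sanoid.conf fragment using each mode (dataset names are placeholders):

```
[zpoolname/atomic-tree]
use_template = production
# one 'zfs snapshot -r' call captures the whole tree at a single point in time
recursive = zfs

[zpoolname/loose-tree]
use_template = production
# sanoid snapshots each child dataset separately; per-child overrides still apply,
# but the tree is not captured at one instant
recursive = yes
```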

syncoid

File diff suppressed because it is too large


@@ -2,7 +2,7 @@
# runs all the available tests
for test in */; do
for test in $(find . -mindepth 1 -maxdepth 1 -type d -printf "%P\n" | sort -g); do
if [ ! -x "${test}/run.sh" ]; then
continue
fi
@@ -17,8 +17,11 @@ for test in */; do
cd "${test}"
echo -n y | bash run.sh > "${LOGFILE}" 2>&1
if [ $? -eq 0 ]; then
ret=$?
if [ $ret -eq 0 ]; then
echo "[PASS]"
elif [ $ret -eq 130 ]; then
echo "[SKIPPED]"
else
echo "[FAILED] (see ${LOGFILE})"
fi


@@ -28,6 +28,8 @@ zfs create -o mountpoint="${MOUNT_TARGET}" "${POOL_NAME}"/src
dd if=/dev/urandom of="${MOUNT_TARGET}"/big_file bs=1M count=200
sleep 1
../../../syncoid --debug --compress=none --source-bwlimit=2m "${POOL_NAME}"/src "${POOL_NAME}"/dst &
syncoid_pid=$!
sleep 5


@@ -28,6 +28,8 @@ zfs create -o mountpoint="${MOUNT_TARGET}" "${POOL_NAME}"/src
dd if=/dev/urandom of="${MOUNT_TARGET}"/big_file bs=1M count=200
sleep 1
zfs snapshot "${POOL_NAME}"/src@big
../../../syncoid --debug --no-sync-snap --compress=none --source-bwlimit=2m "${POOL_NAME}"/src "${POOL_NAME}"/dst &
syncoid_pid=$!


@@ -32,17 +32,17 @@ zfs create -o recordsize=32k "${POOL_NAME}"/src/32
zfs create -o recordsize=128k "${POOL_NAME}"/src/128
../../../syncoid --preserve-recordsize --recursive --debug --compress=none "${POOL_NAME}"/src "${POOL_NAME}"/dst
zfs get recordsize -t filesystem -r "${POOL_NAME}"/dst
zfs get volblocksize -t volume -r "${POOL_NAME}"/dst
zfs get -t filesystem -r recordsize "${POOL_NAME}"/dst
zfs get -t volume -r volblocksize "${POOL_NAME}"/dst
if [ "$(zfs get recordsize -H -o value -t filesystem "${POOL_NAME}"/dst/16)" != "16K" ]; then
if [ "$(zfs get -H -o value -t filesystem recordsize "${POOL_NAME}"/dst/16)" != "16K" ]; then
exit 1
fi
if [ "$(zfs get recordsize -H -o value -t filesystem "${POOL_NAME}"/dst/32)" != "32K" ]; then
if [ "$(zfs get -H -o value -t filesystem recordsize "${POOL_NAME}"/dst/32)" != "32K" ]; then
exit 1
fi
if [ "$(zfs get recordsize -H -o value -t filesystem "${POOL_NAME}"/dst/128)" != "128K" ]; then
if [ "$(zfs get -H -o value -t filesystem recordsize "${POOL_NAME}"/dst/128)" != "128K" ]; then
exit 1
fi


@@ -29,38 +29,43 @@ zfs create -V 100M -o volblocksize=16k -o primarycache=all "${POOL_NAME}"/src/zv
zfs create -V 100M -o volblocksize=64k "${POOL_NAME}"/src/zvol64
zfs create -o recordsize=16k -o primarycache=none "${POOL_NAME}"/src/16
zfs create -o recordsize=32k -o acltype=posixacl "${POOL_NAME}"/src/32
zfs set 'net.openoid:var-name'='with whitespace and !"§$%&/()= symbols' "${POOL_NAME}"/src/32
../../../syncoid --preserve-properties --recursive --debug --compress=none "${POOL_NAME}"/src "${POOL_NAME}"/dst
if [ "$(zfs get recordsize -H -o value -t filesystem "${POOL_NAME}"/dst)" != "16K" ]; then
if [ "$(zfs get -H -o value -t filesystem recordsize "${POOL_NAME}"/dst)" != "16K" ]; then
exit 1
fi
if [ "$(zfs get mountpoint -H -o value -t filesystem "${POOL_NAME}"/dst)" != "none" ]; then
if [ "$(zfs get -H -o value -t filesystem mountpoint "${POOL_NAME}"/dst)" != "none" ]; then
exit 1
fi
if [ "$(zfs get xattr -H -o value -t filesystem "${POOL_NAME}"/dst)" != "on" ]; then
if [ "$(zfs get -H -o value -t filesystem xattr "${POOL_NAME}"/dst)" != "on" ]; then
exit 1
fi
if [ "$(zfs get primarycache -H -o value -t filesystem "${POOL_NAME}"/dst)" != "none" ]; then
if [ "$(zfs get -H -o value -t filesystem primarycache "${POOL_NAME}"/dst)" != "none" ]; then
exit 1
fi
if [ "$(zfs get recordsize -H -o value -t filesystem "${POOL_NAME}"/dst/16)" != "16K" ]; then
if [ "$(zfs get -H -o value -t filesystem recordsize "${POOL_NAME}"/dst/16)" != "16K" ]; then
exit 1
fi
if [ "$(zfs get primarycache -H -o value -t filesystem "${POOL_NAME}"/dst/16)" != "none" ]; then
if [ "$(zfs get -H -o value -t filesystem primarycache "${POOL_NAME}"/dst/16)" != "none" ]; then
exit 1
fi
if [ "$(zfs get recordsize -H -o value -t filesystem "${POOL_NAME}"/dst/32)" != "32K" ]; then
if [ "$(zfs get -H -o value -t filesystem recordsize "${POOL_NAME}"/dst/32)" != "32K" ]; then
exit 1
fi
if [ "$(zfs get acltype -H -o value -t filesystem "${POOL_NAME}"/dst/32)" != "posix" ]; then
if [ "$(zfs get -H -o value -t filesystem acltype "${POOL_NAME}"/dst/32)" != "posix" ]; then
exit 1
fi
if [ "$(zfs get -H -o value -t filesystem 'net.openoid:var-name' "${POOL_NAME}"/dst/32)" != "with whitespace and !\"§$%&/()= symbols" ]; then
exit 1
fi


@@ -0,0 +1,142 @@
#!/bin/bash
# test filtering snapshot names using --include-snaps and --exclude-snaps
set -x
set -e
. ../../common/lib.sh
POOL_IMAGE="/tmp/syncoid-test-10.zpool"
MOUNT_TARGET="/tmp/syncoid-test-10.mount"
POOL_SIZE="100M"
POOL_NAME="syncoid-test-10"
truncate -s "${POOL_SIZE}" "${POOL_IMAGE}"
zpool create -m none -f "${POOL_NAME}" "${POOL_IMAGE}"
#####
# Create source snapshots and destroy the destination snaps and dataset.
#####
function setup_snaps {
# create intermediate snapshots
# sleep is needed so creation time can be used for proper sorting
sleep 1
zfs snapshot "${POOL_NAME}"/src@monthly1
sleep 1
zfs snapshot "${POOL_NAME}"/src@daily1
sleep 1
zfs snapshot "${POOL_NAME}"/src@daily2
sleep 1
zfs snapshot "${POOL_NAME}"/src@hourly1
sleep 1
zfs snapshot "${POOL_NAME}"/src@hourly2
sleep 1
zfs snapshot "${POOL_NAME}"/src@daily3
sleep 1
zfs snapshot "${POOL_NAME}"/src@hourly3
sleep 1
zfs snapshot "${POOL_NAME}"/src@hourly4
}
#####
# Remove the destination snapshots and dataset so that each test starts with a
# blank slate.
#####
function clean_snaps {
zfs destroy "${POOL_NAME}"/dst@%
zfs destroy "${POOL_NAME}"/dst
}
#####
# Verify that the correct set of snapshots is present on the destination.
#####
function verify_checksum {
zfs list -r -t snap "${POOL_NAME}"
checksum=$(zfs list -t snap -r -H -o name "${POOL_NAME}" | sed 's/@syncoid_.*/@syncoid_/' | shasum -a 256)
echo "Expected checksum: $1"
echo "Actual checksum: $checksum"
return $( [[ "$checksum" == "$1" ]] )
}
function cleanUp {
zpool export "${POOL_NAME}"
}
# export pool in any case
trap cleanUp EXIT
zfs create "${POOL_NAME}"/src
setup_snaps
#####
# TEST 1
#
# --exclude-snaps is provided and --no-stream is omitted. Hourly snaps should
# be missing from the destination, and all other intermediate snaps should be
# present.
#####
../../../syncoid --debug --compress=none --no-sync-snap --exclude-snaps='hourly' "${POOL_NAME}"/src "${POOL_NAME}"/dst
verify_checksum '494b6860415607f1d670e4106a10e1316924ba6cd31b4ddacffe0ad6d30a6339 -'
clean_snaps
#####
# TEST 2
#
# --exclude-snaps and --no-stream are provided. Only the daily3 snap should be
# present on the destination.
#####
../../../syncoid --debug --compress=none --no-sync-snap --exclude-snaps='hourly' --no-stream "${POOL_NAME}"/src "${POOL_NAME}"/dst
verify_checksum '0a5072f42180d231cfdd678682972fbbb689140b7f3e996b3c348b7e78d67ea2 -'
clean_snaps
#####
# TEST 3
#
# --include-snaps is provided and --no-stream is omitted. Hourly snaps should
# be present on the destination, and all other snaps should be missing
#####
../../../syncoid --debug --compress=none --no-sync-snap --include-snaps='hourly' "${POOL_NAME}"/src "${POOL_NAME}"/dst
verify_checksum 'd32862be4c71c6cde846322a7d006fd5e8edbd3520d3c7b73953492946debb7f -'
clean_snaps
#####
# TEST 4
#
# --include-snaps and --no-stream are provided. Only the hourly4 snap should
# be present on the destination.
#####
../../../syncoid --debug --compress=none --no-sync-snap --include-snaps='hourly' --no-stream "${POOL_NAME}"/src "${POOL_NAME}"/dst
verify_checksum '81ef1a8298006a7ed856430bb7e05e8b85bbff530ca9dd7831f1da782f8aa4c7 -'
clean_snaps
#####
# TEST 5
#
# --include-snaps='hourly' and --exclude-snaps='3' are both provided. The
# hourly snaps should be present on the destination except for hourly3; daily
# and monthly snaps should be missing.
#####
../../../syncoid --debug --compress=none --no-sync-snap --include-snaps='hourly' --exclude-snaps='3' "${POOL_NAME}"/src "${POOL_NAME}"/dst
verify_checksum '5a9dd92b7d4b8760a1fcad03be843da4f43b915c64caffc1700c0d59a1581239 -'
clean_snaps
#####
# TEST 6
#
# --exclude-snaps='syncoid' and --no-stream are provided, and --no-sync-snap is
# omitted. The sync snap should be created on the source but not sent to the
# destination; only hourly4 should be sent.
#####
../../../syncoid --debug --compress=none --no-stream --exclude-snaps='syncoid' "${POOL_NAME}"/src "${POOL_NAME}"/dst
verify_checksum '9394fdac44ec72764a4673202552599684c83530a2a724dae5b411aaea082b02 -'
clean_snaps
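The `verify_checksum` helper above strips the variable part of syncoid's autogenerated sync-snap names before hashing, so the expected checksums stay stable across runs. A standalone sketch of that normalization (the snapshot names below are made up):

```shell
#!/bin/sh
# Made-up snapshot list; the host/date suffix of a sync snap differs on every run.
list='pool/dst@hourly4
pool/dst@syncoid_host_2024-06-09:14:13:15'

# Collapse everything after "@syncoid_" so the resulting hash is run-independent.
printf '%s\n' "$list" | sed 's/@syncoid_.*/@syncoid_/'
# prints:
#   pool/dst@hourly4
#   pool/dst@syncoid_
```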


@@ -0,0 +1,50 @@
#!/bin/bash
# test verifying snapshots with out-of-order snapshot creation datetimes
set -x
set -e
. ../../common/lib.sh
if [ -z "$ALLOW_INVASIVE_TESTS" ]; then
exit 130
fi
POOL_IMAGE="/tmp/syncoid-test-11.zpool"
POOL_SIZE="64M"
POOL_NAME="syncoid-test-11"
truncate -s "${POOL_SIZE}" "${POOL_IMAGE}"
zpool create -m none -f "${POOL_NAME}" "${POOL_IMAGE}"
function cleanUp {
zpool export "${POOL_NAME}"
rm -f "${POOL_IMAGE}"
}
# export pool and remove the image in any case
trap cleanUp EXIT
zfs create "${POOL_NAME}"/before
zfs snapshot "${POOL_NAME}"/before@this-snapshot-should-make-it-into-the-after-dataset
disableTimeSync
setdate 1155533696
zfs snapshot "${POOL_NAME}"/before@oldest-snapshot
zfs snapshot "${POOL_NAME}"/before@another-snapshot-does-not-matter
../../../syncoid --sendoptions="Lec" "${POOL_NAME}"/before "${POOL_NAME}"/after
# verify
saveSnapshotList "${POOL_NAME}" "snapshot-list.txt"
grep "${POOL_NAME}/before@this-snapshot-should-make-it-into-the-after-dataset" "snapshot-list.txt" || exit $?
grep "${POOL_NAME}/after@this-snapshot-should-make-it-into-the-after-dataset" "snapshot-list.txt" || exit $?
grep "${POOL_NAME}/before@oldest-snapshot" "snapshot-list.txt" || exit $?
grep "${POOL_NAME}/after@oldest-snapshot" "snapshot-list.txt" || exit $?
grep "${POOL_NAME}/before@another-snapshot-does-not-matter" "snapshot-list.txt" || exit $?
grep "${POOL_NAME}/after@another-snapshot-does-not-matter" "snapshot-list.txt" || exit $?
exit 0


@@ -2,7 +2,7 @@
# runs all the available tests
for test in */; do
for test in $(find . -mindepth 1 -maxdepth 1 -type d -printf "%P\n" | sort -g); do
if [ ! -x "${test}/run.sh" ]; then
continue
fi
@@ -17,8 +17,11 @@ for test in */; do
cd "${test}"
echo | bash run.sh > "${LOGFILE}" 2>&1
if [ $? -eq 0 ]; then
ret=$?
if [ $ret -eq 0 ]; then
echo "[PASS]"
elif [ $ret -eq 130 ]; then
echo "[SKIPPED]"
else
echo "[FAILED] (see ${LOGFILE})"
fi