Merge branch 'master' into patch-1

Jim Salter 2025-08-24 11:28:40 -04:00 committed by GitHub
commit 0f3a9c94d9
34 changed files with 1805 additions and 478 deletions


@@ -1,8 +1,42 @@
2.3.0 [overall] documentation updates, small fixes (@thecatontheflat, @mjeanson, @jiawen, @EchterAgo, @jan-krieg, @dlangille, @rightaditya, @MynaITLabs, @ossimoi, @alexgarel, @TopherIsSwell, @jimsalterjrs, @phreaker0)
[sanoid] implemented adding of taken snapshots to the cache file and a new parameter for setting a custom cache expire time (@phreaker0)
[sanoid] ignore duplicate template keys (@phreaker0)
[packaging] fix debian packaging with debian 12 and ubuntu 24.04 (@phreaker0)
[syncoid] fix typo preventing resumed transfer with --sendoptions (@Deltik)
[sanoid] remove iszfsbusy check to boost performance (@sdettmer)
[sanoid] write cache files in an atomic way to prevent race conditions (@phreaker0)
[sanoid] improve performance (especially for monitor commands) by caching the dataset list (@phreaker0)
[syncoid] add zstdmt compress options (@0xFelix)
[syncoid] added missing status information about what is done and provided more details (@phreaker0)
[syncoid] rename ssh control socket to avoid problem with length limits and conflicts (@phreaker0)
[syncoid] support relative paths (@phreaker0)
[syncoid] regather snapshots on --delete-target-snapshots flag (@Adam Fulton)
[sanoid] allow monitor commands to be run without root by using only the cache file (@Pajkastare)
[syncoid] add --include-snaps and --exclude-snaps options (@mr-vinn, @phreaker0)
[syncoid] escape property key and value pair in case of property preservation (@phreaker0)
[syncoid] prevent destroying of root dataset which leads to infinite loop because it can't be destroyed (@phreaker0)
[syncoid] modify zfs-get argument order for portability (@Rantherhin)
[sanoid] trim config values (@phreaker0)
2.2.0 [overall] documentation updates, small fixes (@azmodude, @deviantintegral, @jimsalterjrs, @alexhaydock, @cbreak-black, @kd8bny, @JavaScriptDude, @veeableful, @rsheasby, @Topslakr, @mavhc, @adam-stamand, @joelishness, @jsoref, @dodexahedron, @phreaker0)
[syncoid] implemented flag for preserving properties without the zfs -p flag (@phreaker0)
[syncoid] implemented target snapshot deletion (@mat813)
[syncoid] support bookmarks which are taken in the same second (@delxg, @phreaker0)
[syncoid] exit with an error if the specified src dataset doesn't exist (@phreaker0)
[syncoid] rollback is now done implicitly instead of explicit (@jimsalterjrs, @phreaker0)
[syncoid] append a rand int to the socket name to prevent collisions with parallel invocations (@Gryd3)
[syncoid] implemented support for ssh_config(5) files (@endreszabo)
[syncoid] snapshot hold/unhold support (@rbike)
[sanoid] handle duplicate key definitions gracefully (@phreaker0)
[syncoid] implemented removal of conflicting snapshots with force-delete option (@phreaker0)
[sanoid] implemented pre pruning script hook (@phreaker0)
[syncoid] implemented direct connection support (bypass ssh) for the actual data transfer (@phreaker0)
2.1.0 [overall] documentation updates, small fixes (@HavardLine, @croadfeldt, @jimsalterjrs, @jim-perkins, @kr4z33, @phreaker0)
[syncoid] do not require user to be specified for syncoid (@aerusso)
[syncoid] implemented option for keeping sync snaps (@phreaker0)
[syncoid] use sudo if neccessary for checking pool capabilities regarding resumeable send (@phreaker0)
[syncoid] catch another case were the resume state isn't availabe anymore (@phreaker0)
[syncoid] use sudo if necessary for checking pool capabilities regarding resumable send (@phreaker0)
[syncoid] catch another case where the resume state isn't available anymore (@phreaker0)
[syncoid] check for an invalid argument combination (@phreaker0)
[syncoid] fix iszfsbusy check for similar dataset names (@phreaker0)
[syncoid] append timezone offset to the syncoid snapshot name to fix DST collisions (@phreaker0)
@@ -29,7 +63,7 @@
2.0.2 [overall] documentation updates, new dependencies, small fixes, more warnings (@benyanke, @matveevandrey, @RulerOf, @klemens-u, @johnramsden, @danielewood, @g-a-c, @hartzell, @fryfrog, @phreaker0)
[sanoid] changed and simplified DST handling (@shodanshok)
[syncoid] reset partially resume state automatically (@phreaker0)
[syncoid] handle some zfs erros automatically by parsing the stderr outputs (@phreaker0)
[syncoid] handle some zfs errors automatically by parsing the stderr outputs (@phreaker0)
[syncoid] fixed ordering of snapshots with the same creation timestamp (@phreaker0)
[syncoid] don't use hardcoded paths (@phreaker0)
[syncoid] fix for special setup with listsnapshots=on (@phreaker0)
@@ -84,7 +118,7 @@
[sanoid] implemented monitor-capacity flag for checking zpool capacity limits (@phreaker0)
[syncoid] Added support for ZStandard compression. (@danielewood)
[syncoid] implemented support for excluding datasets from replication with regular expressions (@phreaker0)
[syncoid] correctly parse zfs column output, fixes resumeable send with datasets containing spaces (@phreaker0)
[syncoid] correctly parse zfs column output, fixes resumable send with datasets containing spaces (@phreaker0)
[syncoid] added option for using extra identification in the snapshot name for replication to multiple targets (@phreaker0)
[syncoid] added option for skipping the parent dataset in recursive replication (@phreaker0)
[syncoid] typos (@UnlawfulMonad, @jsavikko, @phreaker0)
@@ -118,12 +152,12 @@
replicating to target/parent/child2. This could still use some cleanup TBH; syncoid SHOULD exit 3
if any of these errors happen (to assist detection of errors in scripting) but now would exit 0.
1.4.12 Sanoid now strips trailing whitespace in template definitions in sanoid.conf, per Github #61
1.4.12 Sanoid now strips trailing whitespace in template definitions in sanoid.conf, per GitHub #61
1.4.11 enhanced Syncoid to use zfs `guid` property rather than `creation` property to ensure snapshots on source
and target actually match. This immediately prevents conflicts due to timezone differences on source and target,
and also paves the way in the future for Syncoid to find matching snapshots even after `zfs rename` on source
or target. Thank you Github user @mailinglists35 for the idea!
or target. Thank you GitHub user @mailinglists35 for the idea!
1.4.10 added --compress=pigz-fast and --compress=pigz-slow. On a Xeon E3-1231v3, pigz-fast is equivalent compression
to --compress=gzip but with compressed throughput of 75.2 MiB/s instead of 18.1 MiB/s. pigz-slow is around 5%
@@ -241,4 +275,4 @@
1.0.1 ported slightly modified iszfsbusy sub from syncoid to sanoid (to keep from thinning snapshots during replications)
1.0.0 initial commit to Github
1.0.0 initial commit to GitHub

CODE_OF_CONDUCT.md (new file, 128 lines)

@@ -0,0 +1,128 @@
# Contributor Covenant Code of Conduct
## Our Pledge
We as members, contributors, and leaders pledge to make participation in our
community a harassment-free experience for everyone, regardless of age, body
size, visible or invisible disability, ethnicity, sex characteristics, gender
identity and expression, level of experience, education, socio-economic status,
nationality, personal appearance, race, religion, or sexual identity
and orientation.
We pledge to act and interact in ways that contribute to an open, welcoming,
diverse, inclusive, and healthy community.
## Our Standards
Examples of behavior that contributes to a positive environment for our
community include:
* Demonstrating empathy and kindness toward other people
* Being respectful of differing opinions, viewpoints, and experiences
* Giving and gracefully accepting constructive feedback
* Accepting responsibility and apologizing to those affected by our mistakes,
and learning from the experience
* Focusing on what is best not just for us as individuals, but for the
overall community
Examples of unacceptable behavior include:
* The use of sexualized language or imagery, and sexual attention or
advances of any kind
* Trolling, insulting or derogatory comments, and personal or political attacks
* Public or private harassment
* Publishing others' private information, such as a physical or email
address, without their explicit permission
* Other conduct which could reasonably be considered inappropriate in a
professional setting
## Enforcement Responsibilities
Community leaders are responsible for clarifying and enforcing our standards of
acceptable behavior and will take appropriate and fair corrective action in
response to any behavior that they deem inappropriate, threatening, offensive,
or harmful.
Community leaders have the right and responsibility to remove, edit, or reject
comments, commits, code, wiki edits, issues, and other contributions that are
not aligned to this Code of Conduct, and will communicate reasons for moderation
decisions when appropriate.
## Scope
This Code of Conduct applies within all community spaces, and also applies when
an individual is officially representing the community in public spaces.
Examples of representing our community include using an official e-mail address,
posting via an official social media account, or acting as an appointed
representative at an online or offline event.
## Enforcement
Instances of abusive, harassing, or otherwise unacceptable behavior may be
reported to the community leaders responsible for enforcement at
.
All complaints will be reviewed and investigated promptly and fairly.
All community leaders are obligated to respect the privacy and security of the
reporter of any incident.
## Enforcement Guidelines
Community leaders will follow these Community Impact Guidelines in determining
the consequences for any action they deem in violation of this Code of Conduct:
### 1. Correction
**Community Impact**: Use of inappropriate language or other behavior deemed
unprofessional or unwelcome in the community.
**Consequence**: A private, written warning from community leaders, providing
clarity around the nature of the violation and an explanation of why the
behavior was inappropriate. A public apology may be requested.
### 2. Warning
**Community Impact**: A violation through a single incident or series
of actions.
**Consequence**: A warning with consequences for continued behavior. No
interaction with the people involved, including unsolicited interaction with
those enforcing the Code of Conduct, for a specified period of time. This
includes avoiding interactions in community spaces as well as external channels
like social media. Violating these terms may lead to a temporary or
permanent ban.
### 3. Temporary Ban
**Community Impact**: A serious violation of community standards, including
sustained inappropriate behavior.
**Consequence**: A temporary ban from any sort of interaction or public
communication with the community for a specified period of time. No public or
private interaction with the people involved, including unsolicited interaction
with those enforcing the Code of Conduct, is allowed during this period.
Violating these terms may lead to a permanent ban.
### 4. Permanent Ban
**Community Impact**: Demonstrating a pattern of violation of community
standards, including sustained inappropriate behavior, harassment of an
individual, or aggression toward or disparagement of classes of individuals.
**Consequence**: A permanent ban from any sort of public interaction within
the community.
## Attribution
This Code of Conduct is adapted from the [Contributor Covenant][homepage],
version 2.0, available at
https://www.contributor-covenant.org/version/2/0/code_of_conduct.html.
Community Impact Guidelines were inspired by [Mozilla's code of conduct
enforcement ladder](https://github.com/mozilla/diversity).
[homepage]: https://www.contributor-covenant.org
For answers to common questions about this code of conduct, see the FAQ at
https://www.contributor-covenant.org/faq. Translations are available at
https://www.contributor-covenant.org/translations.

CONTRIBUTING.md (new file, 1 line)

@@ -0,0 +1 @@
Any and all contributions made to this project must be compatible with the project's own GPLv3 license.


@@ -6,7 +6,7 @@
- [Installation](#installation)
- [Debian/Ubuntu](#debianubuntu)
- [CentOS](#centos)
- [RHEL/CentOS/AlmaLinux](#rhelcentosalmalinux)
- [FreeBSD](#freebsd)
- [Alpine Linux / busybox](#alpine-linux-or-busybox-based-distributions)
- [OmniOS](#omnios)
@@ -23,49 +23,61 @@ Install prerequisite software:
```bash
apt install debhelper libcapture-tiny-perl libconfig-inifiles-perl pv lzop mbuffer build-essential
apt install debhelper libcapture-tiny-perl libconfig-inifiles-perl pv lzop mbuffer build-essential git
```
Clone this repo, build the debian package and install it (alternatively you can skip the package and do it manually like described below for CentOS):
Clone this repo under /tmp (to make sure the apt user has access to the unpacked clone), build the debian package and install it (alternatively you can skip the package and do it manually as described below for CentOS):
```bash
# Download the repo as root to avoid changing permissions later
sudo git clone https://github.com/jimsalterjrs/sanoid.git
cd /tmp
git clone https://github.com/jimsalterjrs/sanoid.git
cd sanoid
# checkout latest stable release or stay on master for bleeding edge stuff (but expect bugs!)
git checkout $(git tag | grep "^v" | tail -n 1)
ln -s packages/debian .
dpkg-buildpackage -uc -us
apt install ../sanoid_*_all.deb
sudo apt install ../sanoid_*_all.deb
```
Enable sanoid timer:
```bash
# enable and start the sanoid timer
sudo systemctl enable sanoid.timer
sudo systemctl start sanoid.timer
sudo systemctl enable --now sanoid.timer
```
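If you want to confirm the timer actually took effect, systemd can report its schedule directly; this is a quick sanity check rather than part of the official packaging steps:

```bash
# Verify the timer is active and see when it will next fire
systemctl status sanoid.timer
systemctl list-timers sanoid.timer
```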
## CentOS
## RHEL/CentOS/AlmaLinux
Install prerequisite software:
```bash
# Install and enable epel if we don't already have it, and git too
# Install and enable EPEL if we don't already have it, and git too:
# (Note that on RHEL we cannot enable EPEL with the epel-release
# package, so you should follow the instructions on the main EPEL site.)
sudo yum install -y epel-release git
# On CentOS, we also need to enable the PowerTools repo:
sudo yum config-manager --set-enabled powertools
# For CentOS 8 you need to enable the PowerTools repo to make all the needed Perl modules available (recommended):
sudo dnf config-manager --set-enabled powertools
# On RHEL, instead of PowerTools, we need to enable the CodeReady Builder repo:
sudo subscription-manager repos --enable=codeready-builder-for-rhel-8-x86_64-rpms
# For Rocky Linux 9 or AlmaLinux 9 you need the CodeReady Builder repo, and it is labelled `crb`
sudo dnf config-manager --set-enabled crb
# Install the packages that Sanoid depends on:
sudo yum install -y perl-Config-IniFiles perl-Data-Dumper perl-Capture-Tiny lzop mbuffer mhash pv
# if the perl dependencies can't be found in the configured repositories you can install them from CPAN manually:
sudo yum install -y perl-Config-IniFiles perl-Data-Dumper perl-Capture-Tiny perl-Getopt-Long lzop mbuffer mhash pv
# The repositories above should contain all the relevant Perl modules, but if you
# still cannot find them then you can install them from CPAN manually:
sudo dnf install perl-CPAN
cpan # answer the questions and past the following lines
cpan # answer the questions and paste the following lines:
# install Capture::Tiny
# install Config::IniFiles
# install Getopt::Long
```
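Whichever route you take, a quick one-liner can confirm that Perl is able to load the required modules before you continue (an informal sanity check, not an upstream step):

```bash
# Prints a message and exits 0 only if all three modules load cleanly
perl -MConfig::IniFiles -MCapture::Tiny -MGetopt::Long -e 'print "modules OK\n"'
```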
Clone this repo, then put the executables and config files into the appropriate directories:
```bash
cd /tmp
# Download the repo as root to avoid changing permissions later
sudo git clone https://github.com/jimsalterjrs/sanoid.git
cd sanoid
@@ -143,8 +155,7 @@ sudo systemctl daemon-reload
# Enable sanoid-prune.service to allow it to be triggered by sanoid.service
sudo systemctl enable sanoid-prune.service
# Enable and start the Sanoid timer
sudo systemctl enable sanoid.timer
sudo systemctl start sanoid.timer
sudo systemctl enable --now sanoid.timer
```
Now, proceed to configure [**Sanoid**](#configuration)
@@ -154,7 +165,7 @@ Now, proceed to configure [**Sanoid**](#configuration)
Install prerequisite software:
```bash
pkg install p5-Config-Inifiles p5-Capture-Tiny pv mbuffer lzop
pkg install p5-Config-Inifiles p5-Capture-Tiny pv mbuffer lzop sanoid
```
**Additional notes:**
@@ -163,7 +174,7 @@ pkg install p5-Config-Inifiles p5-Capture-Tiny pv mbuffer lzop
* Simplest path workaround is symlinks, eg `ln -s /usr/local/bin/lzop /usr/bin/lzop` or similar, as appropriate to create links in **/usr/bin** to wherever the utilities actually are on your system.
* See note about mbuffer and other things in FREEBSD.readme
* See note about tcsh unpleasantness and other things in FREEBSD.readme
## Alpine Linux or busybox based distributions
@@ -253,11 +264,57 @@ Further steps (not OmniOS specific):
- set up SSH connections between two remote hosts
- create a cron job that runs sanoid --cron --quiet periodically
## MacOS
Install prerequisite software:
```
perl -MCPAN -e 'install Config::IniFiles'
```
The crontab can be used as on a normal unix. To use launchd instead, this example config file can be used. Modify it for your needs; in particular, adjust the sanoid path.
It will start sanoid once per hour, at minute 51. Missed invocations due to standby will be merged into a single invocation at the next wakeup.
```bash
cat << "EOF" | sudo tee /Library/LaunchDaemons/net.openoid.Sanoid.plist
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
<key>Label</key>
<string>net.openoid.Sanoid</string>
<key>ProgramArguments</key>
<array>
<string>/usr/local/sanoid/sanoid</string>
<string>--cron</string>
</array>
<key>EnvironmentVariables</key>
<dict>
<key>TZ</key>
<string>UTC</string>
<key>PATH</key>
<string>/usr/local/zfs/bin:$PATH:/usr/local/bin</string>
</dict>
<key>StartCalendarInterval</key>
<array>
<dict>
<key>Minute</key>
<integer>51</integer>
</dict>
</array>
</dict>
</plist>
EOF
sudo launchctl load /Library/LaunchDaemons/net.openoid.Sanoid.plist
```
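To check that launchd accepted the job, you can query it by the label defined in the plist above (net.openoid.Sanoid):

```bash
# Shows a status line for the job if launchd loaded it successfully
sudo launchctl list | grep net.openoid.Sanoid
```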
## Other OSes
**Sanoid** depends on the Perl module Config::IniFiles and will not operate without it. Config::IniFiles may be installed from CPAN, though the project strongly recommends using your distribution's repositories instead.
**Sanoid** depends on the Perl modules Config::IniFiles and Capture::Tiny and will not operate without them. These modules may be installed from CPAN, though the project strongly recommends using your distribution's repositories instead.
**Syncoid** depends on ssh, pv, gzip, lzop, and mbuffer. It can run with reduced functionality in the absence of any or all of the above. SSH is only required for remote synchronization. On newer FreeBSD and Ubuntu Xenial chacha20-poly1305@openssh.com, on other distributions arcfour crypto is the default for SSH transport since v1.4.6. Syncoid runs will fail if one of them is not available on either end of the transport.
**Syncoid** depends on ssh, pv, gzip, lzop, and mbuffer, and shares sanoid's dependency on Capture::Tiny. Capture::Tiny is mandatory, but syncoid can run with reduced functionality without any or all of the command-line dependencies. SSH is only required for remote synchronization. Since v1.4.6 the default SSH transport cipher is chacha20-poly1305@openssh.com on newer FreeBSD and Ubuntu Xenial, and arcfour on other distributions; syncoid runs will fail if the default cipher is not available on either end of the transport.
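As a quick, purely illustrative way to see which of those optional helpers a given host provides:

```bash
# Probe for each optional syncoid helper on this host
for bin in ssh pv gzip lzop mbuffer; do
  command -v "$bin" >/dev/null && echo "$bin: found" || echo "$bin: missing"
done
```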
### General outline for installation
@@ -288,3 +345,12 @@ Adapt the timer interval to the lowest configured snapshot interval.
Take a look at the files `sanoid.defaults.conf` and `sanoid.conf` for all possible configuration options.
Also have a look at the README.md for a simpler suggestion for `sanoid.conf`.
## Syncoid
If you are pushing or pulling from a remote host, create a user with privileges to `ssh` as well as `sudo`. To ensure that `zfs send/receive` can execute, adjust the privileges of the user to execute `sudo` **without** a password for only the `zfs` binary (run `which zfs` to find the path of the `zfs` binary). Modify `/etc/sudoers` by running `# visudo`. Add the following line for your user.
```
...
<user> ALL=NOPASSWD: <path of zfs binary>
...
```
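It's then worth verifying that passwordless sudo really works for zfs, since syncoid will otherwise fail partway through a run. A hypothetical check (substitute your own user and host):

```bash
# sudo -n fails instead of prompting, so a misconfigured sudoers shows up immediately
ssh <user>@<remotehost> sudo -n zfs list
```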


@@ -1,15 +1,26 @@
<p align="center"><img src="http://www.openoid.net/wp-content/themes/openoid/images/sanoid_logo.png" alt="sanoid logo" title="sanoid logo"></p>
<table align="center">
<tr>
<td border="1" width="750">
<p align="center">
<img src="http://www.openoid.net/wp-content/themes/openoid/images/sanoid_logo.png" alt="sanoid logo" title="sanoid logo">
</p>
<img src="https://openoid.net/gplv3-127x51.png" width=127 height=51 align="right">
<p align="left">Sanoid is provided to you completely free and libre, now and in perpetuity, via the GPL v3.0 license. If you find the project useful, please consider either a recurring or one-time donation at <a href="https://www.patreon.com/PracticalZFS" target="_blank">Patreon</a> or <a href="https://www.paypal.com/donate/?hosted_button_id=5BLPNV86D4S9N" target="_blank">PayPal</a>—your contributions will support both this project and the Practical ZFS <a href="https://discourse.practicalzfs.com/" target="_blank">forum</a>.
</p>
</td>
</tr>
</table>
<img src="http://openoid.net/gplv3-127x51.png" width=127 height=51 align="right">Sanoid is a policy-driven snapshot management tool for ZFS filesystems. When combined with the Linux KVM hypervisor, you can use it to make your systems <a href="http://openoid.net/transcend" target="_blank">functionally immortal</a>.
Sanoid is a policy-driven snapshot management tool for ZFS filesystems. When combined with the Linux KVM hypervisor, you can use it to make your systems <a href="https://openoid.net/transcend" target="_blank">functionally immortal</a> via automated snapshot management and over-the-air replication.
<p align="center"><a href="https://youtu.be/ZgowLNBsu00" target="_blank"><img src="http://www.openoid.net/sanoid_video_launcher.png" alt="sanoid rollback demo" title="sanoid rollback demo"></a><br clear="all"><sup>(Real time demo: rolling back a full-scale cryptomalware infection in seconds!)</sup></p>
More prosaically, you can use Sanoid to create, automatically thin, and monitor snapshots and pool health from a single eminently human-readable TOML config file at /etc/sanoid/sanoid.conf. (Sanoid also requires a "defaults" file located at /etc/sanoid/sanoid.defaults.conf, which is not user-editable.) A typical Sanoid system would have a single cron job but see INSTALL.md fore more details:
More prosaically, you can use Sanoid to create, automatically thin, and monitor snapshots and pool health from a single eminently human-readable TOML config file at /etc/sanoid/sanoid.conf. (Sanoid also requires a "defaults" file located at /etc/sanoid/sanoid.defaults.conf, which is not user-editable.) A typical Sanoid system would have a single cron job but see INSTALL.md for more details:
```
* * * * * TZ=UTC /usr/local/bin/sanoid --cron
```
`Note`: Using UTC as timezone is recommend to prevent problems with daylight saving times
`Note`: Using UTC as the timezone is recommended to prevent problems with daylight saving time
And its /etc/sanoid/sanoid.conf might look something like this:
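A minimal sketch of such a policy file, using a hypothetical dataset name (see the sanoid.conf shipped in this repo for the full example):

```
[data/home]
	use_template = production

[template_production]
	frequently = 0
	hourly = 36
	daily = 30
	monthly = 3
	yearly = 0
	autosnap = yes
	autoprune = yes
```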
@@ -69,10 +80,6 @@ For more full details on sanoid.conf settings see [Wiki page](https://github.com
This will process your sanoid.conf file, it will NOT create snapshots, but it will purge expired ones.
+ --force-prune
Purges expired snapshots even if a send/recv is in progress
+ --monitor-snapshots
This option is designed to be run by a Nagios monitoring system. It reports on the health of your snapshots.
@@ -89,13 +96,17 @@ For more full details on sanoid.conf settings see [Wiki page](https://github.com
This clears out sanoid's zfs snapshot listing cache. This is normally not needed.
+ --cache-ttl=SECONDS
Set custom cache expire time in seconds (default: 20 minutes).
+ --version
This prints the version number, and exits.
+ --quiet
Supress non-error output.
Suppress non-error output.
+ --verbose
@@ -103,7 +114,7 @@ For more full details on sanoid.conf settings see [Wiki page](https://github.com
+ --debug
This prints out quite alot of additional information during a sanoid run, and is normally not needed.
This prints out quite a lot of additional information during a sanoid run, and is normally not needed.
+ --readonly
@@ -115,7 +126,9 @@ For more full details on sanoid.conf settings see [Wiki page](https://github.com
### Sanoid script hooks
There are three script types which can optionally be executed at various stages in the lifecycle of a snapshot:
There are three script types which can optionally be executed at various stages in the lifecycle of a snapshot.
**Note** that snapshot-related scripts are triggered only if you have `autosnap = yes`, and pruning scripts are triggered only if you have `autoprune = yes`.
#### `pre_snapshot_script`
@@ -125,7 +138,7 @@ Will be executed before the snapshot(s) of a single dataset are taken. The follo
| ----------------- | ----------- |
| `SANOID_SCRIPT` | The type of script being executed, one of `pre`, `post`, or `prune`. Allows for one script to be used for multiple tasks |
| `SANOID_TARGET` | **DEPRECATED** The dataset about to be snapshot (only the first dataset will be provided) |
| `SANOID_TARGETS` | Comma separated list of all datasets to be snapshoted (currently only a single dataset, multiple datasets will be possible later with atomic groups) |
| `SANOID_TARGETS` | Comma separated list of all datasets to be snapshotted (currently only a single dataset, multiple datasets will be possible later with atomic groups) |
| `SANOID_SNAPNAME` | **DEPRECATED** The name of the snapshot that will be taken (only the first name will be provided, does not include the dataset name) |
| `SANOID_SNAPNAMES` | Comma separated list of all snapshot names that will be taken (does not include the dataset name) |
| `SANOID_TYPES` | Comma separated list of all snapshot types to be taken (yearly, monthly, weekly, daily, hourly, frequently) |
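For illustration, a pre_snapshot_script could be a small shell script that consumes these variables; the sketch below is hypothetical (not shipped with sanoid) and simply logs what is about to be snapshotted:

```bash
#!/bin/sh
# Hypothetical pre_snapshot_script: record which datasets and snapshot
# names sanoid is about to process. Depending on configuration, a
# non-zero exit code may cause sanoid to skip taking the snapshot.
echo "$(date -u) script=$SANOID_SCRIPT targets=$SANOID_TARGETS snaps=$SANOID_SNAPNAMES types=$SANOID_TYPES" >> /var/log/sanoid-hooks.log
exit 0
```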
@@ -232,7 +245,7 @@ syncoid root@remotehost:data/images/vm backup/images/vm
Which would pull-replicate the filesystem from the remote host to the local system over an SSH tunnel.
Syncoid supports recursive replication (replication of a dataset and all its child datasets) and uses mbuffer buffering, lzop compression, and pv progress bars if the utilities are available on the systems used.
If ZFS supports resumeable send/receive streams on both the source and target those will be enabled as default.
If ZFS supports resumable send/receive streams on both the source and target those will be enabled as default.
As of 1.4.18, syncoid also automatically supports and enables resume of interrupted replication when both source and target support this feature.
@@ -274,7 +287,7 @@ As of 1.4.18, syncoid also automatically supports and enables resume of interrup
+ --identifier=
Adds the given identifier to the snapshot name after "syncoid_" prefix and before the hostname. This enables the use case of reliable replication to multiple targets from the same host. The following chars are allowed: a-z, A-Z, 0-9, _, -, : and . .
Adds the given identifier to the snapshot and hold name after "syncoid_" prefix and before the hostname. This enables the use case of reliable replication to multiple targets from the same host. The following chars are allowed: a-z, A-Z, 0-9, _, -, : and . .
+ -r --recursive
@@ -286,7 +299,7 @@ As of 1.4.18, syncoid also automatically supports and enables resume of interrup
+ --compress <compression type>
Currently accepted options: gzip, pigz-fast, pigz-slow, zstd-fast, zstd-slow, lz4, xz, lzo (default) & none. If the selected compression method is unavailable on the source and destination, no compression will be used.
Compression method to use for network transfer. Currently accepted options: gzip, pigz-fast, pigz-slow, zstd-fast, zstd-slow, lz4, xz, lzo (default) & none. If the selected compression method is unavailable on the source and destination, no compression will be used.
+ --source-bwlimit <limit t|g|m|k>
@@ -294,7 +307,7 @@ As of 1.4.18, syncoid also automatically supports and enables resume of interrup
+ --target-bwlimit <limit t|g|m|k>
This is the bandwidth limit in bytes (kbytes, mbytesm etc) per second imposed upon the target. This is mainly used if the source does not have mbuffer installed, but bandwidth limits are desired.
This is the bandwidth limit in bytes (kbytes, mbytes, etc) per second imposed upon the target. This is mainly used if the source does not have mbuffer installed, but bandwidth limits are desired.
+ --no-command-checks
@@ -316,10 +329,23 @@ As of 1.4.18, syncoid also automatically supports and enables resume of interrup
This argument tells syncoid to create a zfs bookmark for the newest snapshot after it got replicated successfully. The bookmark name will be equal to the snapshot name. Only works in combination with the --no-sync-snap option. This can be very useful for irregular replication where the last matching snapshot on the source was already deleted but the bookmark remains so a replication is still possible.
+ --use-hold
This argument tells syncoid to add a hold to the newest snapshot on the source and target after replication succeeds and to remove the hold after the next successful replication. Setting a hold prevents the snapshots from being destroyed. The hold name includes the identifier if set. This allows for separate holds in case of replication to multiple targets.
+ --preserve-recordsize
This argument tells syncoid to set the recordsize on the target, before writing any data to it, to match the one set on the replication source. This only applies to initial sends.
+ --preserve-properties
This argument tells syncoid to get all locally set dataset properties from the source and apply all supported ones on the target before writing any data. It's similar to the '-p' flag for zfs send but also works for encrypted datasets in non-raw sends. This only applies to initial sends.
+ --delete-target-snapshots
With this argument snapshots which are missing on the source will be destroyed on the target. Use this if you only want to handle snapshots on the source.
Note that snapshot deletion is only done after a successful synchronization. If no new snapshots are found, no synchronization is done and no deletion either.
+ --no-clone-rollback
Do not rollback clones on target
@@ -330,19 +356,33 @@ As of 1.4.18, syncoid also automatically supports and enables resume of interrup
+ --exclude=REGEX
The given regular expression will be matched against all datasets which would be synced by this run and excludes them. This argument can be specified multiple times.
__DEPRECATION NOTICE:__ `--exclude` has been deprecated and will be removed in a future release. Please use `--exclude-datasets` instead.
The given regular expression will be matched against all datasets which would be synced by this run and excludes them. This argument can be specified multiple times. The provided regex pattern is matched against the dataset name only; this option does not affect which snapshots are synchronized. If both `--exclude` and `--exclude-datasets` are provided, then `--exclude` is ignored.
+ --exclude-datasets=REGEX
The given regular expression will be matched against all datasets which would be synced by this run and excludes them. This argument can be specified multiple times. The provided regex pattern is matched against the dataset name only; this option does not affect which snapshots are synchronized.
+ --exclude-snaps=REGEX
Exclude specific snapshots that match the given regular expression. The provided regex pattern is matched against the snapshot name only. Can be specified multiple times. If a snapshot matches both the exclude-snaps and include-snaps patterns, then it will be excluded.
+ --include-snaps=REGEX
Only include snapshots that match the given regular expression. The provided regex pattern is matched against the snapshot name only. Can be specified multiple times. If a snapshot matches both the exclude-snaps and include-snaps patterns, then it will be excluded.
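For example, the two snapshot filters can be combined; this sketch uses hypothetical dataset names and a regex matching sanoid's default snapshot naming to replicate only daily snapshots:

```bash
# Sync only sanoid's daily autosnaps; hourlies and frequentlies stay local
syncoid --no-sync-snap \
        --include-snaps='autosnap_.*_daily' \
        pool/data root@backuphost:backup/data
```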
+ --no-resume
This argument tells syncoid to not use resumeable zfs send/receive streams.
This argument tells syncoid to not use resumable zfs send/receive streams.
+ --force-delete
Remove target datasets recursively (WARNING: this will also affect child datasets with matching snapshots/bookmarks), if there are no matching snapshots/bookmarks.
Remove target datasets recursively (WARNING: this will also affect child datasets with matching snapshots/bookmarks), if there are no matching snapshots/bookmarks. Also removes conflicting snapshots if the replication would fail because of a snapshot which has the same name between source and target but different contents.
+ --no-clone-handling
This argument tells syncoid to not recreate clones on the targe on initial sync and doing a normal replication instead.
This argument tells syncoid to not recreate clones on the target on initial sync, and do a normal replication instead.
+ --dumpsnaps
@@ -368,13 +408,18 @@ As of 1.4.18, syncoid also automatically supports and enables resume of interrup
Use specified identity file as per ssh -i.
+ --insecure-direct-connection=IP:PORT[,IP:PORT,[TIMEOUT,[mbuffer]]]
WARNING: This is an insecure option as the data is not encrypted while being sent over the network. Only use if you trust the complete network path.
Use a direct tcp connection (with socat and busybox nc/mbuffer) for the actual zfs send/recv stream. All control commands are still executed via the ssh connection. The first address pair is used for connecting to the target host from the source host and the second pair is for listening on the target host. If the latter isn't provided, the former is used for both. This can be used for saturating high-throughput connections (>= 10GbE networks), which isn't easy with the overhead of ssh. It can also be useful for encrypted datasets to lower the cpu usage needed for replication, but be aware that metadata is NOT ENCRYPTED in this case. The default timeout is 60 seconds and can be overridden by providing it as the third argument. By default busybox nc is used for the listening tcp socket; if mbuffer is preferred, specify its name as the fourth argument, but be aware that mbuffer listens on all interfaces and uses an optionally provided ip address only for access restriction. (This option can't be used for relaying between two remote hosts.)
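A hypothetical invocation on a trusted LAN might look like this, with 10.0.0.2 being the target's fast interface and 1234 an arbitrary free port:

```bash
# Control commands still travel over ssh; the zfs stream itself flows
# unencrypted over 10.0.0.2:1234
syncoid --insecure-direct-connection=10.0.0.2:1234 \
        pool/vms root@10.0.0.2:backup/vms
```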
+ --quiet
Supress non-error output.
Suppress non-error output.
+ --debug
This prints out quite alot of additional information during a sanoid run, and is normally not needed.
This prints out quite a lot of additional information during a syncoid run, and is normally not needed.
+ --help

SECURITY.md (new file, 13 lines)

@@ -0,0 +1,13 @@
# Security Policy
## Supported Versions
The Sanoid project directly supports both the code in the main branch, and the last two releases found here on GitHub.
Community support is available for all versions, with the understanding that in some cases "upgrade to a newer version" may be the support offered.
If you've installed Sanoid from your distribution's repositories, we're happy to offer community support with the same caveat!
## Reporting a Vulnerability
If you believe you've found a serious security vulnerability in Sanoid, please create an Issue here on GitHub. If you prefer a private contact channel to disclose
particularly sensitive or private details, you may request one in the GitHub Issue you create.


@@ -1 +1 @@
2.1.0
2.3.0


@@ -4,7 +4,7 @@
# from http://www.gnu.org/licenses/gpl-3.0.html on 2014-11-17. A copy should also be available in this
# project's Git repository at https://github.com/jimsalterjrs/sanoid/blob/master/LICENSE.
$::VERSION = '2.1.0';
$::VERSION = '2.3.0';
use strict;
use warnings;
@@ -25,6 +25,9 @@ if ($args{'path'} eq '') {
}
}
# resolve given path to a canonical one
$args{'path'} = Cwd::realpath($args{'path'});
my $dataset = getdataset($args{'path'});
my %versions = getversions($args{'path'}, $dataset);


@@ -1,10 +1,52 @@
sanoid (2.3.0) unstable; urgency=medium
[overall] documentation updates, small fixes (@thecatontheflat, @mjeanson, @jiawen, @EchterAgo, @jan-krieg, @dlangille, @rightaditya, @MynaITLabs, @ossimoi, @alexgarel, @TopherIsSwell, @jimsalterjrs, @phreaker0)
[sanoid] implemented adding of taken snapshots to the cache file and a new parameter for setting a custom cache expire time (@phreaker0)
[sanoid] ignore duplicate template keys (@phreaker0)
[packaging] fix debian packaging with debian 12 and ubuntu 24.04 (@phreaker0)
[syncoid] fix typo preventing resumed transfer with --sendoptions (@Deltik)
[sanoid] remove iszfsbusy check to boost performance (@sdettmer)
[sanoid] write cache files in an atomic way to prevent race conditions (@phreaker0)
[sanoid] improve performance (especially for monitor commands) by caching the dataset list (@phreaker0)
[syncoid] add zstdmt compress options (@0xFelix)
[syncoid] added missing status information about what is done and provided more details (@phreaker0)
[syncoid] rename ssh control socket to avoid problem with length limits and conflicts (@phreaker0)
[syncoid] support relative paths (@phreaker0)
[syncoid] regather snapshots on --delete-target-snapshots flag (@Adam Fulton)
[sanoid] allow monitor commands to be run without root by using only the cache file (@Pajkastare)
[syncoid] add --include-snaps and --exclude-snaps options (@mr-vinn, @phreaker0)
[syncoid] escape property key and value pair in case of property preservation (@phreaker0)
[syncoid] prevent destroying of root dataset which leads to infinite loop because it can't be destroyed (@phreaker0)
[syncoid] modify zfs-get argument order for portability (@Rantherhin)
[sanoid] trim config values (@phreaker0)
-- Jim Salter <github@jrs-s.net> Thu, 05 Jun 2025 22:47:00 +0200
sanoid (2.2.0) unstable; urgency=medium
[overall] documentation updates, small fixes (@azmodude, @deviantintegral, @jimsalterjrs, @alexhaydock, @cbreak-black, @kd8bny, @JavaScriptDude, @veeableful, @rsheasby, @Topslakr, @mavhc, @adam-stamand, @joelishness, @jsoref, @dodexahedron, @phreaker0)
[syncoid] implemented flag for preserving properties without the zfs -p flag (@phreaker0)
[syncoid] implemented target snapshot deletion (@mat813)
[syncoid] support bookmarks which are taken in the same second (@delxg, @phreaker0)
[syncoid] exit with an error if the specified src dataset doesn't exist (@phreaker0)
[syncoid] rollback is now done implicitly instead of explicit (@jimsalterjrs, @phreaker0)
[syncoid] append a rand int to the socket name to prevent collisions with parallel invocations (@Gryd3)
[syncoid] implemented support for ssh_config(5) files (@endreszabo)
[syncoid] snapshot hold/unhold support (@rbike)
[sanoid] handle duplicate key definitions gracefully (@phreaker0)
[syncoid] implemented removal of conflicting snapshots with force-delete option (@phreaker0)
[sanoid] implemented pre pruning script hook (@phreaker0)
[syncoid] implemented direct connection support (bypass ssh) for the actual data transfer (@phreaker0)
-- Jim Salter <github@jrs-s.net> Tue, 18 Jul 2023 10:04:00 +0200
sanoid (2.1.0) unstable; urgency=medium
[overall] documentation updates, small fixes (@HavardLine, @croadfeldt, @jimsalterjrs, @jim-perkins, @kr4z33, @phreaker0)
[syncoid] do not require user to be specified for syncoid (@aerusso)
[syncoid] implemented option for keeping sync snaps (@phreaker0)
[syncoid] use sudo if neccessary for checking pool capabilities regarding resumeable send (@phreaker0)
[syncoid] catch another case were the resume state isn't availabe anymore (@phreaker0)
[syncoid] use sudo if necessary for checking pool capabilities regarding resumable send (@phreaker0)
[syncoid] catch another case where the resume state isn't available anymore (@phreaker0)
[syncoid] check for an invalid argument combination (@phreaker0)
[syncoid] fix iszfsbusy check for similar dataset names (@phreaker0)
[syncoid] append timezone offset to the syncoid snapshot name to fix DST collisions (@phreaker0)
@@ -39,7 +81,7 @@ sanoid (2.0.2) unstable; urgency=medium
[overall] documentation updates, new dependencies, small fixes, more warnings (@benyanke, @matveevandrey, @RulerOf, @klemens-u, @johnramsden, @danielewood, @g-a-c, @hartzell, @fryfrog, @phreaker0)
[syncoid] changed and simplified DST handling (@shodanshok)
[syncoid] reset partially resume state automatically (@phreaker0)
[syncoid] handle some zfs erros automatically by parsing the stderr outputs (@phreaker0)
[syncoid] handle some zfs errors automatically by parsing the stderr outputs (@phreaker0)
[syncoid] fixed ordering of snapshots with the same creation timestamp (@phreaker0)
[syncoid] don't use hardcoded paths (@phreaker0)
[syncoid] fix for special setup with listsnapshots=on (@phreaker0)
@@ -102,7 +144,7 @@ sanoid (2.0.0) unstable; urgency=medium
[sanoid] implemented monitor-capacity flag for checking zpool capacity limits (@phreaker0)
[syncoid] Added support for ZStandard compression. (@danielewood)
[syncoid] implemented support for excluding datasets from replication with regular expressions (@phreaker0)
[syncoid] correctly parse zfs column output, fixes resumeable send with datasets containing spaces (@phreaker0)
[syncoid] correctly parse zfs column output, fixes resumable send with datasets containing spaces (@phreaker0)
[syncoid] added option for using extra identification in the snapshot name for replication to multiple targets (@phreaker0)
[syncoid] added option for skipping the parent dataset in recursive replication (@phreaker0)
[syncoid] typos (@UnlawfulMonad, @jsavikko, @phreaker0)


@@ -12,7 +12,7 @@ Package: sanoid
Architecture: all
Depends: libcapture-tiny-perl,
libconfig-inifiles-perl,
zfsutils-linux | zfs,
zfsutils-linux | zfs | openzfs-zfsutils,
${misc:Depends},
${perl:Depends}
Recommends: gzip,


@@ -2,3 +2,5 @@
# remove old cache file
[ -f /var/cache/sanoidsnapshots.txt ] && rm /var/cache/sanoidsnapshots.txt || true
[ -f /var/cache/sanoid/snapshots.txt ] && rm /var/cache/sanoid/snapshots.txt || true
[ -f /var/cache/sanoid/datasets.txt ] && rm /var/cache/sanoid/datasets.txt || true


@@ -12,10 +12,6 @@ override_dh_auto_install:
install -d $(DESTDIR)/etc/sanoid
install -m 664 sanoid.defaults.conf $(DESTDIR)/etc/sanoid
install -d $(DESTDIR)/lib/systemd/system
install -m 664 debian/sanoid-prune.service debian/sanoid.timer \
$(DESTDIR)/lib/systemd/system
install -d $(DESTDIR)/usr/sbin
install -m 775 \
findoid sanoid sleepymutex syncoid \
@@ -25,6 +21,8 @@ override_dh_auto_install:
install -m 664 sanoid.conf \
$(DESTDIR)/usr/share/doc/sanoid/sanoid.conf.example
dh_installsystemd --name sanoid-prune
override_dh_installinit:
dh_installinit --noscripts


@@ -1,4 +1,4 @@
%global version 2.1.0
%global version 2.3.0
%global git_tag v%{version}
# Enable with systemctl "enable sanoid.timer"
@@ -111,13 +111,17 @@ echo "* * * * * root %{_sbindir}/sanoid --cron" > %{buildroot}%{_docdir}/%{name}
%endif
%changelog
* Wed Nov 24 2020 Christoph Klaffl <christoph@phreaker.eu> - 2.1.0
* Thu Jun 05 2025 Christoph Klaffl <christoph@phreaker.eu> - 2.3.0
- Bump to 2.3.0
* Tue Jul 18 2023 Christoph Klaffl <christoph@phreaker.eu> - 2.2.0
- Bump to 2.2.0
* Tue Nov 24 2020 Christoph Klaffl <christoph@phreaker.eu> - 2.1.0
- Bump to 2.1.0
* Wed Oct 02 2019 Christoph Klaffl <christoph@phreaker.eu> - 2.0.3
- Bump to 2.0.3
* Wed Sep 25 2019 Christoph Klaffl <christoph@phreaker.eu> - 2.0.2
- Bump to 2.0.2
* Wed Dec 04 2018 Christoph Klaffl <christoph@phreaker.eu> - 2.0.0
* Tue Dec 04 2018 Christoph Klaffl <christoph@phreaker.eu> - 2.0.0
- Bump to 2.0.0
* Sat Apr 28 2018 Dominic Robinson <github@dcrdev.com> - 1.4.18-1
- Bump to 1.4.18

sanoid (348 changed lines)

@@ -4,7 +4,7 @@
# from http://www.gnu.org/licenses/gpl-3.0.html on 2014-11-17. A copy should also be available in this
# project's Git repository at https://github.com/jimsalterjrs/sanoid/blob/master/LICENSE.
$::VERSION = '2.1.0';
$::VERSION = '2.3.0';
my $MINIMUM_DEFAULTS_VERSION = 2;
use strict;
@@ -12,6 +12,7 @@ use warnings;
use Config::IniFiles; # read samba-style conf file
use Data::Dumper; # debugging - print contents of hash
use File::Path 'make_path';
use File::Copy;
use Getopt::Long qw(:config auto_version auto_help);
use Pod::Usage; # pod2usage
use Time::Local; # to parse dates in reverse
@@ -26,11 +27,11 @@ GetOptions(\%args, "verbose", "debug", "cron", "readonly", "quiet",
"configdir=s", "cache-dir=s", "run-dir=s",
"monitor-health", "force-update",
"monitor-snapshots", "take-snapshots", "prune-snapshots", "force-prune",
"monitor-capacity"
"monitor-capacity", "cache-ttl=i"
) or pod2usage(2);
# If only config directory (or nothing) has been specified, default to --cron --verbose
if (keys %args < 2) {
if (keys %args < 4) {
$args{'cron'} = 1;
$args{'verbose'} = 1;
}
@@ -46,26 +47,82 @@ my $zpool = 'zpool';
my $conf_file = "$args{'configdir'}/sanoid.conf";
my $default_conf_file = "$args{'configdir'}/sanoid.defaults.conf";
# parse config file
my %config = init($conf_file,$default_conf_file);
my $cache_dir = $args{'cache-dir'};
my $run_dir = $args{'run-dir'};
make_path($cache_dir);
make_path($run_dir);
# if we call getsnaps(%config,1) it will forcibly update the cache, TTL or no TTL
my $forcecacheupdate = 0;
my $cacheTTL = 1200; # 20 minutes
if ($args{'force-prune'}) {
warn "WARN: --force-prune argument is deprecated and its behavior is now standard";
}
if ($args{'cache-ttl'}) {
if ($args{'cache-ttl'} < 0) {
die "ERROR: cache-ttl needs to be positive!\n";
}
$cacheTTL = $args{'cache-ttl'};
}
# Allow a much older snapshot cache file than default if _only_ "--monitor-*" action commands are given
# (ignore "--verbose", "--configdir" etc)
if (
(
$args{'monitor-snapshots'}
|| $args{'monitor-health'}
|| $args{'monitor-capacity'}
) && ! (
$args{'cron'}
|| $args{'force-update'}
|| $args{'take-snapshots'}
|| $args{'prune-snapshots'}
|| $args{'cache-ttl'}
)
) {
# The command combination above must not assert true for any command that takes or prunes snapshots
$cacheTTL = 18000; # 5 hours
if ($args{'debug'}) { print "DEBUG: command combo means that the cache file (provided it exists) will be allowed to be older than default.\n"; }
}
# snapshot cache
my $cache = "$cache_dir/snapshots.txt";
my $cacheTTL = 900; # 15 minutes
my %snaps = getsnaps( \%config, $cacheTTL, $forcecacheupdate );
# configured dataset cache
my $cachedatasetspath = "$cache_dir/datasets.txt";
my @cachedatasets;
# parse config file
my %config = init($conf_file,$default_conf_file);
my %pruned;
my %capacitycache;
my %taken;
my %snapsbytype = getsnapsbytype( \%config, \%snaps );
my %snaps;
my %snapsbytype;
my %snapsbypath;
my %snapsbypath = getsnapsbypath( \%config, \%snaps );
# get snapshot list only if needed
if ($args{'monitor-snapshots'}
|| $args{'monitor-health'}
|| $args{'cron'}
|| $args{'take-snapshots'}
|| $args{'prune-snapshots'}
|| $args{'force-update'}
|| $args{'debug'}
) {
my $forcecacheupdate = 0;
if ($args{'force-update'}) {
$forcecacheupdate = 1;
}
%snaps = getsnaps( \%config, $cacheTTL, $forcecacheupdate);
%snapsbytype = getsnapsbytype( \%config, \%snaps );
%snapsbypath = getsnapsbypath( \%config, \%snaps );
}
# let's make it a little easier to be consistent passing these hashes in the same order to each sub
my @params = ( \%config, \%snaps, \%snapsbytype, \%snapsbypath );
@@ -74,7 +131,6 @@ if ($args{'debug'}) { $args{'verbose'}=1; blabber (@params); }
if ($args{'monitor-snapshots'}) { monitor_snapshots(@params); }
if ($args{'monitor-health'}) { monitor_health(@params); }
if ($args{'monitor-capacity'}) { monitor_capacity(@params); }
if ($args{'force-update'}) { my $snaps = getsnaps( \%config, $cacheTTL, 1 ); }
if ($args{'cron'}) {
if ($args{'quiet'}) { $args{'verbose'} = 0; }
@@ -130,7 +186,7 @@ sub monitor_snapshots {
my ($config, $snaps, $snapsbytype, $snapsbypath) = @_;
my %datestamp = get_date();
my $errorlevel = 0;
my $errlevel = 0;
my $msg;
my @msgs;
my @paths;
@@ -169,7 +225,7 @@ sub monitor_snapshots {
my $dispcrit = displaytime($crit);
if ( $elapsed > $crit || $elapsed == -1) {
if ($crit > 0) {
if (! $config{$section}{'monitor_dont_crit'}) { $errorlevel = 2; }
if (! $config{$section}{'monitor_dont_crit'}) { $errlevel = 2; }
if ($elapsed == -1) {
push @msgs, "CRIT: $path has no $type snapshots at all!";
} else {
@@ -178,7 +234,7 @@ }
}
} elsif ($elapsed > $warn) {
if ($warn > 0) {
if (! $config{$section}{'monitor_dont_warn'} && ($errorlevel < 2) ) { $errorlevel = 1; }
if (! $config{$section}{'monitor_dont_warn'} && ($errlevel < 2) ) { $errlevel = 1; }
push @msgs, "WARN: $path newest $type snapshot is $dispelapsed old (should be < $dispwarn)";
}
} else {
@@ -196,7 +252,7 @@
if ($msg eq '') { $msg = "OK: all monitored datasets \($paths\) have fresh snapshots"; }
print "$msg\n";
exit $errorlevel;
exit $errlevel;
}
@@ -265,7 +321,6 @@ sub prune_snapshots {
my ($config, $snaps, $snapsbytype, $snapsbypath) = @_;
my %datestamp = get_date();
my $forcecacheupdate = 0;
foreach my $section (keys %config) {
if ($section =~ /^template/) { next; }
@@ -319,29 +374,43 @@
if (checklock('sanoid_pruning')) {
writelock('sanoid_pruning');
foreach my $snap( @prunesnaps ){
if ($args{'verbose'}) { print "INFO: pruning $snap ... \n"; }
if (!$args{'force-prune'} && iszfsbusy($path)) {
if ($args{'verbose'}) { print "INFO: deferring pruning of $snap - $path is currently in zfs send or receive.\n"; }
} else {
if (! $args{'readonly'}) {
if (system($zfs, "destroy", $snap) == 0) {
$pruned{$snap} = 1;
my $dataset = (split '@', $snap)[0];
my $snapname = (split '@', $snap)[1];
if ($config{$dataset}{'pruning_script'}) {
$ENV{'SANOID_TARGET'} = $dataset;
$ENV{'SANOID_SNAPNAME'} = $snapname;
$ENV{'SANOID_SCRIPT'} = 'prune';
if ($args{'verbose'}) { print "executing pruning_script '".$config{$dataset}{'pruning_script'}."' on dataset '$dataset'\n"; }
my $ret = runscript('pruning_script',$dataset);
my $dataset = (split '@', $snap)[0];
my $snapname = (split '@', $snap)[1];
delete $ENV{'SANOID_TARGET'};
delete $ENV{'SANOID_SNAPNAME'};
delete $ENV{'SANOID_SCRIPT'};
}
} else {
warn "could not remove $snap : $?";
if (! $args{'readonly'} && $config{$dataset}{'pre_pruning_script'}) {
$ENV{'SANOID_TARGET'} = $dataset;
$ENV{'SANOID_SNAPNAME'} = $snapname;
if ($args{'verbose'}) { print "executing pre_pruning_script '".$config{$dataset}{'pre_pruning_script'}."' on dataset '$dataset'\n"; }
my $ret = runscript('pre_pruning_script', $dataset);
delete $ENV{'SANOID_TARGET'};
delete $ENV{'SANOID_SNAPNAME'};
if ($ret != 0) {
# warning was already thrown by runscript function
# skip pruning if pre snapshot script returns non zero exit code
next;
}
}
if ($args{'verbose'}) { print "INFO: pruning $snap ... \n"; }
if (! $args{'readonly'}) {
if (system($zfs, "destroy", $snap) == 0) {
$pruned{$snap} = 1;
if ($config{$dataset}{'pruning_script'}) {
$ENV{'SANOID_TARGET'} = $dataset;
$ENV{'SANOID_SNAPNAME'} = $snapname;
$ENV{'SANOID_SCRIPT'} = 'prune';
if ($args{'verbose'}) { print "executing pruning_script '".$config{$dataset}{'pruning_script'}."' on dataset '$dataset'\n"; }
my $ret = runscript('pruning_script',$dataset);
delete $ENV{'SANOID_TARGET'};
delete $ENV{'SANOID_SNAPNAME'};
delete $ENV{'SANOID_SCRIPT'};
}
} else {
warn "could not remove $snap : $?";
}
}
}
@@ -533,6 +602,7 @@ sub take_snapshots {
}
if (%newsnapsgroup) {
$forcecacheupdate = 0;
while ((my $path, my $snapData) = each(%newsnapsgroup)) {
my $recursiveFlag = $snapData->{recursive};
my $dstHandling = $snapData->{handleDst};
@@ -603,9 +673,17 @@ }
}
};
if ($exit == 0) {
$taken{$snap} = {
'time' => time(),
'recursive' => $recursiveFlag
};
}
$exit == 0 or do {
if ($dstHandling) {
if ($stderr =~ /already exists/) {
$forcecacheupdate = 1;
$exit = 0;
$snap =~ s/_([a-z]+)$/dst_$1/g;
if ($args{'verbose'}) { print "taking dst snapshot $snap$extraMessage\n"; }
@@ -655,8 +733,8 @@ }
}
}
}
$forcecacheupdate = 1;
%snaps = getsnaps(%config,$cacheTTL,$forcecacheupdate);
addcachedsnapshots();
%snaps = getsnaps(\%config,$cacheTTL,$forcecacheupdate);
}
}
@@ -799,7 +877,7 @@ sub getsnaps {
if (checklock('sanoid_cacheupdate')) {
writelock('sanoid_cacheupdate');
if ($args{'verbose'}) {
if ($args{'force-update'}) {
if ($forcecacheupdate) {
print "INFO: cache forcibly expired - updating from zfs list.\n";
} else {
print "INFO: cache expired - updating from zfs list.\n";
@@ -809,9 +887,10 @@ sub getsnaps {
@rawsnaps = <FH>;
close FH;
open FH, "> $cache" or die 'Could not write to $cache!\n';
open FH, "> $cache.tmp" or die "Could not write to $cache.tmp!\n";
print FH @rawsnaps;
close FH;
rename("$cache.tmp", "$cache") or die "Could not rename to $cache!\n";
removelock('sanoid_cacheupdate');
} else {
if ($args{'verbose'}) { print "INFO: deferring cache update - valid cache update lock held by another sanoid process.\n"; }
@@ -874,6 +953,20 @@ sub init {
die "FATAL: you're using sanoid.defaults.conf v$defaults_version, this version of sanoid requires a minimum sanoid.defaults.conf v$MINIMUM_DEFAULTS_VERSION";
}
my @updatedatasets;
# load dataset cache if valid
if (!$args{'force-update'} && -f $cachedatasetspath) {
my ($dev, $ino, $mode, $nlink, $uid, $gid, $rdev, $size, $atime, $mtime, $ctime, $blksize, $blocks) = stat($cachedatasetspath);
if ((time() - $mtime) <= $cacheTTL) {
if ($args{'debug'}) { print "DEBUG: dataset cache not expired (" . (time() - $mtime) . " seconds old with TTL of $cacheTTL): pulling dataset list from cache.\n"; }
open FH, "< $cachedatasetspath";
@cachedatasets = <FH>;
close FH;
}
}
foreach my $section (keys %ini) {
# first up - die with honor if unknown parameters are set in any modules or templates by the user.
@@ -881,6 +974,15 @@
if (! defined ($defaults{'template_default'}{$key})) {
die "FATAL ERROR: I don't understand the setting $key you've set in \[$section\] in $conf_file.\n";
}
# in case of duplicate lines we will end up with an array of all values
my $value = $ini{$section}{$key};
if (ref($value) eq 'ARRAY') {
warn "duplicate key '$key' in section '$section', using the value from the first occurence and ignoring the others.\n";
$ini{$section}{$key} = $value->[0];
}
# trim
$ini{$section}{$key} =~ s/^\s+|\s+$//g;
}
if ($section =~ /^template_/) { next; } # don't process templates directly
@@ -889,7 +991,7 @@
# for sections directly when they've already been defined recursively, without starting them over from scratch.
if (! defined ($config{$section}{'initialized'})) {
if ($args{'debug'}) { print "DEBUG: initializing \$config\{$section\} with default values from $default_conf_file.\n"; }
# set default values from %defaults, which can then be overriden by template
# set default values from %defaults, which can then be overridden by template
# and/or local settings within the module.
foreach my $key (keys %{$defaults{'template_default'}}) {
if (! ($key =~ /template|recursive|children_only/)) {
@@ -925,6 +1027,12 @@
}
if ($args{'debug'}) { print "DEBUG: overriding $key on $section with value from user-defined template $template.\n"; }
$config{$section}{$key} = $ini{$template}{$key};
my $value = $config{$section}{$key};
if (ref($value) eq 'ARRAY') {
# handle duplicates silently (warning was already printed above)
$config{$section}{$key} = $value->[0];
}
}
}
}
@@ -954,6 +1062,10 @@
$config{$section}{'path'} = $section;
}
if (! @cachedatasets) {
push (@updatedatasets, "$config{$section}{'path'}\n");
}
# how 'bout some recursion? =)
if ($config{$section}{'zfs_recursion'} && $config{$section}{'zfs_recursion'} == 1 && $config{$section}{'autosnap'} == 1) {
warn "ignored autosnap configuration for '$section' because it's part of a zfs recursion.\n";
@@ -971,7 +1083,9 @@
@datasets = getchilddatasets($config{$section}{'path'});
DATASETS: foreach my $dataset(@datasets) {
chomp $dataset;
if (! @cachedatasets) {
push (@updatedatasets, "$dataset\n");
}
if ($zfsRecursive) {
# don't try to take the snapshot ourself, recursive zfs snapshot will take care of that
@@ -1002,9 +1116,27 @@
$config{$dataset}{'initialized'} = 1;
}
}
}
# update dataset cache if it was unused
if (! @cachedatasets) {
if (checklock('sanoid_cachedatasetupdate')) {
writelock('sanoid_cachedatasetupdate');
if ($args{'verbose'}) {
if ($args{'force-update'}) {
print "INFO: dataset cache forcibly expired - updating from zfs list.\n";
} else {
print "INFO: dataset cache expired - updating from zfs list.\n";
}
}
open FH, "> $cachedatasetspath.tmp" or die "Could not write to $cachedatasetspath.tmp!\n";
print FH @updatedatasets;
close FH;
rename("$cachedatasetspath.tmp", "$cachedatasetspath") or die "Could not rename to $cachedatasetspath!\n";
removelock('sanoid_cachedatasetupdate');
} else {
if ($args{'verbose'}) { print "INFO: deferring dataset cache update - valid cache update lock held by another sanoid process.\n"; }
}
}
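# Note on the pattern above: writing to "$cachedatasetspath.tmp" and then
# rename()ing over the real path makes the update atomic on POSIX filesystems,
# so a concurrent sanoid run reads either the old dataset list or the complete
# new one, never a torn file; the lock only serializes the writers themselves.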
return %config;
@@ -1137,7 +1269,7 @@ sub check_zpool() {
}
}
# Tony: Debuging
# Tony: Debugging
# print "Size: $size \t Used: $used \t Avai: $avail \t Cap: $cap \t Health: $health\n";
close(STAT);
@@ -1239,7 +1371,7 @@ sub check_zpool() {
## no display for verbose level 1
next if ($verbose==1);
## don't display working devices for verbose level 2
if ($verbose==2 && ($state eq "OK" || $sta eq "ONLINE" || $sta eq "AVAIL" || $sta eq "INUSE")) {
if ($verbose==2 && ($state eq "OK" || $sta eq "ONLINE" || $sta eq "AVAIL")) {
# check for io/checksum errors
my @vdeverr = ();
@@ -1521,30 +1653,6 @@ sub writelock {
close FH;
}
sub iszfsbusy {
# check to see if ZFS filesystem passed in as argument currently has a zfs send or zfs receive process referencing it.
# return true if busy (currently being sent or received), return false if not.
my $fs = shift;
# if (args{'debug'}) { print "DEBUG: checking to see if $fs on is already in zfs receive using $pscmd -Ao args= ...\n"; }
open PL, "$pscmd -Ao args= |";
my @processes = <PL>;
close PL;
foreach my $process (@processes) {
# if ($args{'debug'}) { print "DEBUG: checking process $process...\n"; }
if ($process =~ /zfs *(send|receive|recv).*$fs/) {
# there's already a zfs send/receive process for our target filesystem - return true
# if ($args{'debug'}) { print "DEBUG: process $process matches target $fs!\n"; }
return 1;
}
}
# no zfs receive processes for our target filesystem found - return false
return 0;
}
#######################################################################################################################3
#######################################################################################################################3
#######################################################################################################################3
@@ -1554,10 +1662,34 @@ sub getchilddatasets {
my $fs = shift;
my $mysudocmd = '';
# use dataset cache if available
if (@cachedatasets) {
my $foundparent = 0;
my @cachechildren = ();
foreach my $dataset (@cachedatasets) {
chomp $dataset;
my $ret = rindex $dataset, "${fs}/", 0;
if ($ret == 0) {
push (@cachechildren, $dataset);
} else {
if ($dataset eq $fs) {
$foundparent = 1;
}
}
}
# sanity check
if ($foundparent) {
return @cachechildren;
}
# fallback if cache misses items for whatever reason
}
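# Example: with $fs = 'tank/data', the "${fs}/" prefix test above accepts
# 'tank/data/www' but rejects 'tank/database' (the trailing slash is part of
# the match); rindex($dataset, "${fs}/", 0) == 0 is a cheap starts-with check.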
my $getchildrencmd = "$mysudocmd $zfs list -o name -t filesystem,volume -Hr $fs |";
if ($args{'debug'}) { print "DEBUG: getting list of child datasets on $fs using $getchildrencmd...\n"; }
open FH, $getchildrencmd;
my @children = <FH>;
chomp( my @children = <FH> );
close FH;
# parent dataset is the first element
@@ -1600,7 +1732,7 @@ sub removecachedsnapshots {
my @rawsnaps = <FH>;
close FH;
open FH, "> $cache" or die 'Could not write to $cache!\n';
open FH, "> $cache.tmp" or die "Could not write to $cache.tmp!\n";
foreach my $snapline ( @rawsnaps ) {
my @columns = split("\t", $snapline);
my $snap = $columns[0];
@@ -1608,8 +1740,14 @@ sub removecachedsnapshots {
}
close FH;
# preserve mtime of cache for expire check
my ($dev, $ino, $mode, $nlink, $uid, $gid, $rdev, $size, $atime, $mtime, $ctime, $blksize, $blocks) = stat($cache);
utime($atime, $mtime, "$cache.tmp");
rename("$cache.tmp", "$cache") or die "Could not rename to $cache!\n";
removelock('sanoid_cacheupdate');
%snaps = getsnaps(\%config,$cacheTTL,$forcecacheupdate);
%snaps = getsnaps(\%config,$cacheTTL,0);
# clear hash
undef %pruned;
@@ -1619,6 +1757,62 @@ sub removecachedsnapshots {
#######################################################################################################################3
#######################################################################################################################3
sub addcachedsnapshots {
if (not %taken) {
return;
}
my $unlocked = checklock('sanoid_cacheupdate');
# wait until we can get a lock to do our cache changes
while (not $unlocked) {
if ($args{'verbose'}) { print "INFO: waiting for cache update lock held by another sanoid process.\n"; }
sleep(10);
$unlocked = checklock('sanoid_cacheupdate');
}
writelock('sanoid_cacheupdate');
if ($args{'verbose'}) {
print "INFO: adding taken snapshots to cache.\n";
}
copy($cache, "$cache.tmp") or die "Could not copy to $cache.tmp!\n";
open my $fh, ">> $cache.tmp" or die "Could not write to $cache.tmp!\n";
while((my $snap, my $details) = each(%taken)) {
my @parts = split("@", $snap, 2);
my $suffix = $parts[1] . "\tcreation\t" . $details->{time} . "\t-";
my $dataset = $parts[0];
print $fh "${dataset}\@${suffix}\n";
if ($details->{recursive}) {
my @datasets = getchilddatasets($dataset);
foreach my $dataset(@datasets) {
print "${dataset}\@${suffix}\n";
print $fh "${dataset}\@${suffix}\n";
}
}
}
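# Each appended line mimics the tab-separated `zfs get -Hp creation` output
# (name@snap, 'creation', epoch, '-') that the snapshot cache is built from,
# so cached and freshly taken snapshots parse identically later on.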
close $fh;
# preserve mtime of cache for expire check
my ($dev, $ino, $mode, $nlink, $uid, $gid, $rdev, $size, $atime, $mtime, $ctime, $blksize, $blocks) = stat($cache);
utime($atime, $mtime, "$cache.tmp");
rename("$cache.tmp", "$cache") or die "Could not rename to $cache!\n";
removelock('sanoid_cacheupdate');
}
#######################################################################################################################3
#######################################################################################################################3
#######################################################################################################################3
sub runscript {
my $key=shift;
my $dataset=shift;
@@ -1716,7 +1910,7 @@ Options:
--monitor-snapshots Reports on snapshot "health", in a Nagios compatible format
--take-snapshots Creates snapshots as specified in sanoid.conf
--prune-snapshots Purges expired snapshots as specified in sanoid.conf
--force-prune Purges expired snapshots even if a send/recv is in progress
--cache-ttl=SECONDS Set custom cache expire time in seconds (default: 20 minutes)
--help Prints this helptext
--version Prints the version number
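A minimal sketch of the new flag in use (assuming sanoid is on PATH; the TTL value is arbitrary):

# trust the caches for an hour instead of the default 20 minutes
sanoid --cron --verbose --cache-ttl=3600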


@@ -31,6 +31,13 @@
# you can also handle datasets recursively in an atomic way without the possibility to override settings for child datasets.
[zpoolname/parent2]
use_template = production
# there are two options for recursive: zfs or yes
# * zfs - takes a zfs snapshot with the '-r' flag; zfs recursively snapshots the whole
# dataset tree in one consistent operation. Newly-added child datasets will not
# immediately get snapshots, and must instead slowly catch up to policy over time.
# Slightly lower storage load.
#
# * yes - the snapshots are taken one at a time by the sanoid code, so they are not
# necessarily consistent with each other. Newly-added child datasets are immediately
# brought into policy. Slightly higher storage load.
recursive = zfs
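# illustrative only (not part of the shipped example): the same tree handled
# per-dataset instead, so newly added children are picked up immediately
# [zpoolname/parent3]
# use_template = production
# recursive = yes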
@@ -102,6 +109,8 @@
pre_snapshot_script = /path/to/script.sh
### run script after snapshot
post_snapshot_script = /path/to/script.sh
### run script before pruning snapshot
pre_pruning_script = /path/to/script.sh
### run script after pruning snapshot
pruning_script = /path/to/script.sh
### don't take an inconsistent snapshot (skip if pre script fails)
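A pre-pruning hook can be as small as the following sketch (the script path and
logger tag are assumptions, not part of the shipped config):

#!/bin/sh
# hypothetical /path/to/script.sh wired into pre_pruning_script above
logger -t sanoid "snapshot pruning is about to start"
exit 0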


@@ -22,6 +22,7 @@ skip_children =
# See "Sanoid script hooks" in README.md for information about scripts.
pre_snapshot_script =
post_snapshot_script =
pre_pruning_script =
pruning_script =
script_timeout = 5
no_inconsistent_snapshot =

syncoid (1103 changed lines)

File diff suppressed because it is too large


@@ -39,7 +39,7 @@ function cleanUp {
trap cleanUp EXIT
while [ $timestamp -le $END ]; do
setdate $timestamp; date; "${SANOID}" --cron --verbose
setdate $timestamp; date; "${SANOID}" --cron --verbose --cache-ttl=2592000
timestamp=$((timestamp+3600))
done


@@ -42,7 +42,7 @@ function cleanUp {
trap cleanUp EXIT
while [ $timestamp -le $END ]; do
setdate $timestamp; date; "${SANOID}" --cron --verbose
setdate $timestamp; date; "${SANOID}" --cron --verbose --cache-ttl=2592000
timestamp=$((timestamp+900))
done


@@ -10,7 +10,10 @@ function setup {
export SANOID="../../sanoid"
# make sure that there is no cache file
rm -f /var/cache/sanoidsnapshots.txt
rm -f /var/cache/sanoid/snapshots.txt
rm -f /var/cache/sanoid/datasets.txt
mkdir -p /etc/sanoid
# install needed sanoid configuration files
[ -f sanoid.conf ] && cp sanoid.conf /etc/sanoid/sanoid.conf
@@ -34,7 +37,7 @@ function checkEnvironment {
echo "!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!"
echo "you should be running this test in a"
echo "dedicated vm, as it will mess with your system!"
echo "Are you sure you wan't to continue? (y)"
echo "Are you sure you want to continue? (y)"
echo "!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!"
set -x
@@ -51,6 +54,11 @@ function disableTimeSync {
if [ $? -eq 0 ]; then
timedatectl set-ntp 0
fi
which systemctl > /dev/null
if [ $? -eq 0 ]; then
systemctl is-active virtualbox-guest-utils.service && systemctl stop virtualbox-guest-utils.service
fi
}
function saveSnapshotList {


@@ -17,8 +17,11 @@ for test in */; do
cd "${test}"
echo -n y | bash run.sh > "${LOGFILE}" 2>&1
if [ $? -eq 0 ]; then
ret=$?
if [ $ret -eq 0 ]; then
echo "[PASS]"
elif [ $ret -eq 130 ]; then
echo "[SKIPPED]"
else
echo "[FAILED] (see ${LOGFILE})"
fi
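Given the exit-code convention above, an individual run.sh can mark itself
skipped instead of failed; a sketch (the prerequisite check is illustrative):

# inside a test's run.sh: report [SKIPPED] when the test cannot run here
command -v zfs >/dev/null 2>&1 || exit 130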


@@ -28,6 +28,8 @@ zfs create -o mountpoint="${MOUNT_TARGET}" "${POOL_NAME}"/src
dd if=/dev/urandom of="${MOUNT_TARGET}"/big_file bs=1M count=200
sleep 1
../../../syncoid --debug --compress=none --source-bwlimit=2m "${POOL_NAME}"/src "${POOL_NAME}"/dst &
syncoid_pid=$!
sleep 5
@@ -45,6 +47,9 @@ wait
sleep 1
../../../syncoid --debug --compress=none --no-resume "${POOL_NAME}"/src "${POOL_NAME}"/dst | grep "reset partial receive state of syncoid"
sleep 1
../../../syncoid --debug --compress=none "${POOL_NAME}"/src "${POOL_NAME}"/dst
exit $?


@@ -28,6 +28,8 @@ zfs create -o mountpoint="${MOUNT_TARGET}" "${POOL_NAME}"/src
dd if=/dev/urandom of="${MOUNT_TARGET}"/big_file bs=1M count=200
sleep 1
zfs snapshot "${POOL_NAME}"/src@big
../../../syncoid --debug --no-sync-snap --compress=none --source-bwlimit=2m "${POOL_NAME}"/src "${POOL_NAME}"/dst &
syncoid_pid=$!
@@ -47,6 +49,9 @@ sleep 1
zfs destroy "${POOL_NAME}"/src@big
../../../syncoid --debug --compress=none "${POOL_NAME}"/src "${POOL_NAME}"/dst # | grep "reset partial receive state of syncoid"
sleep 1
../../../syncoid --debug --compress=none "${POOL_NAME}"/src "${POOL_NAME}"/dst
exit $?


@@ -32,17 +32,17 @@ zfs create -o recordsize=32k "${POOL_NAME}"/src/32
zfs create -o recordsize=128k "${POOL_NAME}"/src/128
../../../syncoid --preserve-recordsize --recursive --debug --compress=none "${POOL_NAME}"/src "${POOL_NAME}"/dst
zfs get recordsize -t filesystem -r "${POOL_NAME}"/dst
zfs get volblocksize -t volume -r "${POOL_NAME}"/dst
zfs get -t filesystem -r recordsize "${POOL_NAME}"/dst
zfs get -t volume -r volblocksize "${POOL_NAME}"/dst
if [ "$(zfs get recordsize -H -o value -t filesystem "${POOL_NAME}"/dst/16)" != "16K" ]; then
if [ "$(zfs get -H -o value -t filesystem recordsize "${POOL_NAME}"/dst/16)" != "16K" ]; then
exit 1
fi
if [ "$(zfs get recordsize -H -o value -t filesystem "${POOL_NAME}"/dst/32)" != "32K" ]; then
if [ "$(zfs get -H -o value -t filesystem recordsize "${POOL_NAME}"/dst/32)" != "32K" ]; then
exit 1
fi
if [ "$(zfs get recordsize -H -o value -t filesystem "${POOL_NAME}"/dst/128)" != "128K" ]; then
if [ "$(zfs get -H -o value -t filesystem recordsize "${POOL_NAME}"/dst/128)" != "128K" ]; then
exit 1
fi
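The reordered calls above follow the options-before-operands convention; a
sketch of the portable form (dataset name assumed), which presumably matters
because some zfs implementations stop option parsing at the first operand:

zfs get -H -o value -t filesystem recordsize tank/dst/16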


@@ -0,0 +1,48 @@
#!/bin/bash
# test replication with deletion of conflicting snapshot on target
set -x
set -e
. ../../common/lib.sh
POOL_IMAGE="/tmp/syncoid-test-8.zpool"
POOL_SIZE="200M"
POOL_NAME="syncoid-test-8"
TARGET_CHECKSUM="ee439200c9fa54fc33ce301ef64d4240a6c5587766bfeb651c5cf358e11ec89d -"
truncate -s "${POOL_SIZE}" "${POOL_IMAGE}"
zpool create -m none -f "${POOL_NAME}" "${POOL_IMAGE}"
function cleanUp {
zpool export "${POOL_NAME}"
}
# export pool in any case
trap cleanUp EXIT
zfs create "${POOL_NAME}"/src
zfs snapshot "${POOL_NAME}"/src@duplicate
# initial replication
../../../syncoid -r --debug --compress=none "${POOL_NAME}"/src "${POOL_NAME}"/dst
# recreate snapshot with the same name on src
zfs destroy "${POOL_NAME}"/src@duplicate
zfs snapshot "${POOL_NAME}"/src@duplicate
sleep 1
../../../syncoid -r --force-delete --debug --compress=none "${POOL_NAME}"/src "${POOL_NAME}"/dst || exit 1
# verify
output1=$(zfs list -t snapshot -r -H -o guid,name "${POOL_NAME}"/src | sed 's/@syncoid_.*$/@syncoid_/')
checksum1=$(echo "${output1}" | shasum -a 256)
output2=$(zfs list -t snapshot -r -H -o guid,name "${POOL_NAME}"/dst | sed 's/@syncoid_.*$/@syncoid_/' | sed 's/dst/src/')
checksum2=$(echo "${output2}" | shasum -a 256)
if [ "${checksum1}" != "${checksum2}" ]; then
exit 1
fi
exit 0


@@ -0,0 +1,71 @@
#!/bin/bash
# test preserving locally set properties from the src dataset to the target one
set -x
set -e
. ../../common/lib.sh
POOL_IMAGE="/tmp/syncoid-test-9.zpool"
MOUNT_TARGET="/tmp/syncoid-test-9.mount"
POOL_SIZE="1000M"
POOL_NAME="syncoid-test-9"
truncate -s "${POOL_SIZE}" "${POOL_IMAGE}"
zpool create -m none -f "${POOL_NAME}" "${POOL_IMAGE}"
function cleanUp {
zpool export "${POOL_NAME}"
}
# export pool in any case
trap cleanUp EXIT
zfs create -o recordsize=16k -o xattr=on -o mountpoint=none -o primarycache=none "${POOL_NAME}"/src
zfs create -V 100M -o volblocksize=8k "${POOL_NAME}"/src/zvol8
zfs create -V 100M -o volblocksize=16k -o primarycache=all "${POOL_NAME}"/src/zvol16
zfs create -V 100M -o volblocksize=64k "${POOL_NAME}"/src/zvol64
zfs create -o recordsize=16k -o primarycache=none "${POOL_NAME}"/src/16
zfs create -o recordsize=32k -o acltype=posixacl "${POOL_NAME}"/src/32
zfs set 'net.openoid:var-name'='with whitespace and !"§$%&/()= symbols' "${POOL_NAME}"/src/32
../../../syncoid --preserve-properties --recursive --debug --compress=none "${POOL_NAME}"/src "${POOL_NAME}"/dst
if [ "$(zfs get -H -o value -t filesystem recordsize "${POOL_NAME}"/dst)" != "16K" ]; then
exit 1
fi
if [ "$(zfs get -H -o value -t filesystem mountpoint "${POOL_NAME}"/dst)" != "none" ]; then
exit 1
fi
if [ "$(zfs get -H -o value -t filesystem xattr "${POOL_NAME}"/dst)" != "on" ]; then
exit 1
fi
if [ "$(zfs get -H -o value -t filesystem primarycache "${POOL_NAME}"/dst)" != "none" ]; then
exit 1
fi
if [ "$(zfs get -H -o value -t filesystem recordsize "${POOL_NAME}"/dst/16)" != "16K" ]; then
exit 1
fi
if [ "$(zfs get -H -o value -t filesystem primarycache "${POOL_NAME}"/dst/16)" != "none" ]; then
exit 1
fi
if [ "$(zfs get -H -o value -t filesystem recordsize "${POOL_NAME}"/dst/32)" != "32K" ]; then
exit 1
fi
if [ "$(zfs get -H -o value -t filesystem acltype "${POOL_NAME}"/dst/32)" != "posix" ]; then
exit 1
fi
if [ "$(zfs get -H -o value -t filesystem 'net.openoid:var-name' "${POOL_NAME}"/dst/32)" != "with whitespace and !\"§$%&/()= symbols" ]; then
exit 1
fi


@@ -0,0 +1,142 @@
#!/bin/bash
# test filtering snapshot names using --include-snaps and --exclude-snaps
set -x
set -e
. ../../common/lib.sh
POOL_IMAGE="/tmp/syncoid-test-10.zpool"
MOUNT_TARGET="/tmp/syncoid-test-10.mount"
POOL_SIZE="100M"
POOL_NAME="syncoid-test-10"
truncate -s "${POOL_SIZE}" "${POOL_IMAGE}"
zpool create -m none -f "${POOL_NAME}" "${POOL_IMAGE}"
#####
# Create source snapshots and destroy the destination snaps and dataset.
#####
function setup_snaps {
# create intermediate snapshots
# sleep is needed so creation time can be used for proper sorting
sleep 1
zfs snapshot "${POOL_NAME}"/src@monthly1
sleep 1
zfs snapshot "${POOL_NAME}"/src@daily1
sleep 1
zfs snapshot "${POOL_NAME}"/src@daily2
sleep 1
zfs snapshot "${POOL_NAME}"/src@hourly1
sleep 1
zfs snapshot "${POOL_NAME}"/src@hourly2
sleep 1
zfs snapshot "${POOL_NAME}"/src@daily3
sleep 1
zfs snapshot "${POOL_NAME}"/src@hourly3
sleep 1
zfs snapshot "${POOL_NAME}"/src@hourly4
}
#####
# Remove the destination snapshots and dataset so that each test starts with a
# blank slate.
#####
function clean_snaps {
zfs destroy "${POOL_NAME}"/dst@%
zfs destroy "${POOL_NAME}"/dst
}
#####
# Verify that the correct set of snapshots is present on the destination.
#####
function verify_checksum {
zfs list -r -t snap "${POOL_NAME}"
checksum=$(zfs list -t snap -r -H -o name "${POOL_NAME}" | sed 's/@syncoid_.*/@syncoid_/' | shasum -a 256)
echo "Expected checksum: $1"
echo "Actual checksum: $checksum"
[[ "$checksum" == "$1" ]]
}
function cleanUp {
zpool export "${POOL_NAME}"
}
# export pool in any case
trap cleanUp EXIT
zfs create "${POOL_NAME}"/src
setup_snaps
#####
# TEST 1
#
# --exclude-snaps is provided and --no-stream is omitted. Hourly snaps should
# be missing from the destination, and all other intermediate snaps should be
# present.
#####
../../../syncoid --debug --compress=none --no-sync-snap --exclude-snaps='hourly' "${POOL_NAME}"/src "${POOL_NAME}"/dst
verify_checksum '494b6860415607f1d670e4106a10e1316924ba6cd31b4ddacffe0ad6d30a6339 -'
clean_snaps
#####
# TEST 2
#
# --exclude-snaps and --no-stream are provided. Only the daily3 snap should be
# present on the destination.
#####
../../../syncoid --debug --compress=none --no-sync-snap --exclude-snaps='hourly' --no-stream "${POOL_NAME}"/src "${POOL_NAME}"/dst
verify_checksum '0a5072f42180d231cfdd678682972fbbb689140b7f3e996b3c348b7e78d67ea2 -'
clean_snaps
#####
# TEST 3
#
# --include-snaps is provided and --no-stream is omitted. Hourly snaps should
# be present on the destination, and all other snaps should be missing
#####
../../../syncoid --debug --compress=none --no-sync-snap --include-snaps='hourly' "${POOL_NAME}"/src "${POOL_NAME}"/dst
verify_checksum 'd32862be4c71c6cde846322a7d006fd5e8edbd3520d3c7b73953492946debb7f -'
clean_snaps
#####
# TEST 4
#
# --include-snaps and --no-stream are provided. Only the hourly4 snap should
# be present on the destination.
#####
../../../syncoid --debug --compress=none --no-sync-snap --include-snaps='hourly' --no-stream "${POOL_NAME}"/src "${POOL_NAME}"/dst
verify_checksum '81ef1a8298006a7ed856430bb7e05e8b85bbff530ca9dd7831f1da782f8aa4c7 -'
clean_snaps
#####
# TEST 5
#
# --include-snaps='hourly' and --exclude-snaps='3' are both provided. The
# hourly snaps should be present on the destination except for hourly3; daily
# and monthly snaps should be missing.
#####
../../../syncoid --debug --compress=none --no-sync-snap --include-snaps='hourly' --exclude-snaps='3' "${POOL_NAME}"/src "${POOL_NAME}"/dst
verify_checksum '5a9dd92b7d4b8760a1fcad03be843da4f43b915c64caffc1700c0d59a1581239 -'
clean_snaps
#####
# TEST 6
#
# --exclude-snaps='syncoid' and --no-stream are provided, and --no-sync-snap is
# omitted. The sync snap should be created on the source but not sent to the
# destination; only hourly4 should be sent.
#####
../../../syncoid --debug --compress=none --no-stream --exclude-snaps='syncoid' "${POOL_NAME}"/src "${POOL_NAME}"/dst
verify_checksum '9394fdac44ec72764a4673202552599684c83530a2a724dae5b411aaea082b02 -'
clean_snaps
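Outside the test rig, the filters combine the same way TEST 5 exercises them;
a hypothetical invocation (dataset names assumed):

# replicate only hourly snapshots, except any whose name matches '3'
syncoid --no-sync-snap --include-snaps='hourly' --exclude-snaps='3' tank/src backup/src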


@@ -0,0 +1,55 @@
#!/bin/bash
# test verifying syncoid behavior with partial transfers
set -x
. ../../common/lib.sh
POOL_IMAGE="/tmp/syncoid-test-012.zpool"
POOL_SIZE="128M"
POOL_NAME="syncoid-test-012"
MOUNT_TARGET="/tmp/syncoid-test-012.mount"
truncate -s "${POOL_SIZE}" "${POOL_IMAGE}"
zpool create -O mountpoint="${MOUNT_TARGET}" -f "${POOL_NAME}" "${POOL_IMAGE}"
function cleanUp {
zpool destroy "${POOL_NAME}"
rm -f "${POOL_IMAGE}"
}
# Clean up the pool and image file on exit
trap cleanUp EXIT
zfs create "${POOL_NAME}/source"
zfs snap "${POOL_NAME}/source@empty"
dd if=/dev/urandom of="${MOUNT_TARGET}/source/garbage.bin" bs=1M count=16
zfs snap "${POOL_NAME}/source@something"
# Simulate interrupted transfer
zfs send -pwR "${POOL_NAME}/source@something" | head --bytes=8M | zfs recv -s "${POOL_NAME}/destination"
# Using syncoid to continue interrupted transfer
../../../syncoid --sendoptions="pw" "${POOL_NAME}/source" "${POOL_NAME}/destination"
# Check if syncoid succeeded in handling the interrupted transfer
if [ $? -eq 0 ]; then
echo "Syncoid resumed transfer successfully."
# Verify data integrity with sha256sum comparison
original_sum=$(sha256sum "${MOUNT_TARGET}/source/garbage.bin" | cut -d ' ' -f 1)
received_sum=$(sha256sum "${MOUNT_TARGET}/destination/garbage.bin" | cut -d ' ' -f 1)
if [ "${original_sum}" == "${received_sum}" ]; then
echo "Data integrity verified."
exit 0
else
echo "Data integrity check failed."
exit 1
fi
else
echo "Regression detected: syncoid did not handle the resuming correctly."
exit 1
fi
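For reference, the partial state this test creates is visible with stock ZFS
tooling as well (pool and dataset names taken from the test above):

# any value other than '-' means an interrupted receive is waiting to resume
zfs get -H -o value receive_resume_token syncoid-test-012/destination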


@@ -17,8 +17,11 @@ for test in */; do
cd "${test}"
echo | bash run.sh > "${LOGFILE}" 2>&1
if [ $? -eq 0 ]; then
ret=$?
if [ $ret -eq 0 ]; then
echo "[PASS]"
elif [ $ret -eq 130 ]; then
echo "[SKIPPED]"
else
echo "[FAILED] (see ${LOGFILE})"
fi