diff --git a/README.md b/README.md
index 81d70d5..ebbb818 100644
--- a/README.md
+++ b/README.md
@@ -1,15 +1,15 @@

-
Sanoid is a policy-driven snapshot management tool for ZFS filesystems. When combined with the Linux KVM hypervisor, you can use it to make your systems functionally immortal.
+
Sanoid is a policy-driven snapshot management tool for ZFS filesystems. When combined with the Linux KVM hypervisor, you can use it to make your systems functionally immortal.

(Real time demo: rolling back a full-scale cryptomalware infection in seconds!)
-More prosaically, you can use Sanoid to create, automatically thin, and monitor snapshots and pool health from a single eminently human-readable TOML config file at /etc/sanoid/sanoid.conf. (Sanoid also requires a "defaults" file located at /etc/sanoid/sanoid.defaults.conf, which is not user-editable.) A typical Sanoid system would have a single cron job but see INSTALL.md fore more details:
+More prosaically, you can use Sanoid to create, automatically thin, and monitor snapshots and pool health from a single eminently human-readable TOML config file at /etc/sanoid/sanoid.conf. (Sanoid also requires a "defaults" file located at /etc/sanoid/sanoid.defaults.conf, which is not user-editable.) A typical Sanoid system would have a single cron job, but see INSTALL.md for more details:
```
* * * * * TZ=UTC /usr/local/bin/sanoid --cron
```
-`Note`: Using UTC as timezone is recommend to prevent problems with daylight saving times
+`Note`: Using UTC as the timezone is recommended to prevent problems with daylight saving time
And its /etc/sanoid/sanoid.conf might look something like this:
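(The README's full example config sits outside this hunk. As a rough, purely illustrative sketch of the shape such a file takes, with placeholder dataset and template names, it could be bootstrapped like so:)

```bash
# Illustrative only: adjust dataset names and retention counts to your own pools.
cat > /etc/sanoid/sanoid.conf <<'EOF'
[data/images]
	use_template = production
	recursive = yes

[template_production]
	frequently = 0
	hourly = 36
	daily = 30
	monthly = 3
	yearly = 0
	autosnap = yes
	autoprune = yes
EOF
```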
@@ -95,7 +95,7 @@ For more full details on sanoid.conf settings see [Wiki page](https://github.com
+ --quiet
- Supress non-error output.
+ Suppress non-error output.
+ --verbose
@@ -103,7 +103,7 @@ For more full details on sanoid.conf settings see [Wiki page](https://github.com
+ --debug
- This prints out quite alot of additional information during a sanoid run, and is normally not needed.
+ This prints out quite a lot of additional information during a sanoid run, and is normally not needed.
+ --readonly
@@ -125,7 +125,7 @@ Will be executed before the snapshot(s) of a single dataset are taken. The follo
| ----------------- | ----------- |
| `SANOID_SCRIPT` | The type of script being executed, one of `pre`, `post`, or `prune`. Allows for one script to be used for multiple tasks |
| `SANOID_TARGET` | **DEPRECATED** The dataset about to be snapshot (only the first dataset will be provided) |
-| `SANOID_TARGETS` | Comma separated list of all datasets to be snapshoted (currently only a single dataset, multiple datasets will be possible later with atomic groups) |
+| `SANOID_TARGETS` | Comma separated list of all datasets to be snapshotted (currently only a single dataset, multiple datasets will be possible later with atomic groups) |
| `SANOID_SNAPNAME` | **DEPRECATED** The name of the snapshot that will be taken (only the first name will be provided, does not include the dataset name) |
| `SANOID_SNAPNAMES` | Comma separated list of all snapshot names that will be taken (does not include the dataset name) |
| `SANOID_TYPES` | Comma separated list of all snapshot types to be taken (yearly, monthly, weekly, daily, hourly, frequently) |
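As a rough sketch of how these variables might be consumed (the script path and logger usage are illustrative, not part of this change), a single hook wired up via pre_snapshot_script / post_snapshot_script / pruning_script can branch on SANOID_SCRIPT:

```bash
#!/bin/bash
# Hypothetical /usr/local/bin/sanoid-hook.sh, referenced from sanoid.conf.
case "$SANOID_SCRIPT" in
    pre)
        # e.g. quiesce a database before the snapshots are taken
        logger "sanoid pre: snapshotting $SANOID_TARGETS as $SANOID_SNAPNAMES"
        ;;
    post)
        logger "sanoid post: finished $SANOID_SNAPNAMES on $SANOID_TARGETS"
        ;;
    prune)
        logger "sanoid prune: pruning $SANOID_SNAPNAMES from $SANOID_TARGETS"
        ;;
esac
```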
@@ -232,7 +232,7 @@ syncoid root@remotehost:data/images/vm backup/images/vm
Which would pull-replicate the filesystem from the remote host to the local system over an SSH tunnel.
Syncoid supports recursive replication (replication of a dataset and all its child datasets) and uses mbuffer buffering, lzop compression, and pv progress bars if the utilities are available on the systems used.
-If ZFS supports resumeable send/receive streams on both the source and target those will be enabled as default.
+If ZFS supports resumable send/receive streams on both the source and target, they will be enabled by default.
As of 1.4.18, syncoid also automatically supports and enables resume of interrupted replication when both source and target support this feature.
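For context, invocations of the two replication directions look roughly like this (hostnames and dataset names are placeholders; the pull form mirrors the README example above):

```bash
# push a local dataset and its children to a remote backup host over SSH
syncoid -r data/images/vm root@backuphost:backup/images/vm

# pull the dataset from a remote host down to a local backup pool
syncoid root@remotehost:data/images/vm backup/images/vm
```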
@@ -286,7 +286,7 @@ As of 1.4.18, syncoid also automatically supports and enables resume of interrup
+ --compress
- Currently accepted options: gzip, pigz-fast, pigz-slow, zstd-fast, zstd-slow, lz4, xz, lzo (default) & none. If the selected compression method is unavailable on the source and destination, no compression will be used.
+ Compression method to use for network transfer. Currently accepted options: gzip, pigz-fast, pigz-slow, zstd-fast, zstd-slow, lz4, xz, lzo (default) & none. If the selected compression method is unavailable on the source and destination, no compression will be used.
+ --source-bwlimit
@@ -294,7 +294,7 @@ As of 1.4.18, syncoid also automatically supports and enables resume of interrup
+ --target-bwlimit
- This is the bandwidth limit in bytes (kbytes, mbytesm etc) per second imposed upon the target. This is mainly used if the source does not have mbuffer installed, but bandwidth limits are desired.
+ This is the bandwidth limit in bytes (kbytes, mbytes, etc) per second imposed upon the target. This is mainly used if the source does not have mbuffer installed, but bandwidth limits are desired.
+ --no-command-checks
@@ -348,15 +348,15 @@ As of 1.4.18, syncoid also automatically supports and enables resume of interrup
+ --no-resume
- This argument tells syncoid to not use resumeable zfs send/receive streams.
+ This argument tells syncoid to not use resumable zfs send/receive streams.
+ --force-delete
- Remove target datasets recursively (WARNING: this will also affect child datasets with matching snapshots/bookmarks), if there are no matching snapshots/bookmarks.
+ Remove target datasets recursively (WARNING: this will also affect child datasets with matching snapshots/bookmarks), if there are no matching snapshots/bookmarks. Also removes conflicting snapshots if the replication would fail because of a snapshot that has the same name on both source and target but different contents.
+ --no-clone-handling
- This argument tells syncoid to not recreate clones on the targe on initial sync and doing a normal replication instead.
+ This argument tells syncoid to not recreate clones on the target on initial sync, and do a normal replication instead.
+ --dumpsnaps
@@ -384,11 +384,11 @@ As of 1.4.18, syncoid also automatically supports and enables resume of interrup
+ --quiet
- Supress non-error output.
+ Suppress non-error output.
+ --debug
- This prints out quite alot of additional information during a sanoid run, and is normally not needed.
+ This prints out quite a lot of additional information during a syncoid run, and is normally not needed.
+ --help
diff --git a/packages/debian/changelog b/packages/debian/changelog
index b394acb..4cab69b 100644
--- a/packages/debian/changelog
+++ b/packages/debian/changelog
@@ -3,8 +3,8 @@ sanoid (2.1.0) unstable; urgency=medium
[overall] documentation updates, small fixes (@HavardLine, @croadfeldt, @jimsalterjrs, @jim-perkins, @kr4z33, @phreaker0)
[syncoid] do not require user to be specified for syncoid (@aerusso)
[syncoid] implemented option for keeping sync snaps (@phreaker0)
- [syncoid] use sudo if neccessary for checking pool capabilities regarding resumeable send (@phreaker0)
- [syncoid] catch another case were the resume state isn't availabe anymore (@phreaker0)
+ [syncoid] use sudo if necessary for checking pool capabilities regarding resumable send (@phreaker0)
+ [syncoid] catch another case where the resume state isn't available anymore (@phreaker0)
[syncoid] check for an invalid argument combination (@phreaker0)
[syncoid] fix iszfsbusy check for similar dataset names (@phreaker0)
[syncoid] append timezone offset to the syncoid snapshot name to fix DST collisions (@phreaker0)
@@ -39,7 +39,7 @@ sanoid (2.0.2) unstable; urgency=medium
[overall] documentation updates, new dependencies, small fixes, more warnings (@benyanke, @matveevandrey, @RulerOf, @klemens-u, @johnramsden, @danielewood, @g-a-c, @hartzell, @fryfrog, @phreaker0)
[syncoid] changed and simplified DST handling (@shodanshok)
[syncoid] reset partially resume state automatically (@phreaker0)
- [syncoid] handle some zfs erros automatically by parsing the stderr outputs (@phreaker0)
+ [syncoid] handle some zfs errors automatically by parsing the stderr outputs (@phreaker0)
[syncoid] fixed ordering of snapshots with the same creation timestamp (@phreaker0)
[syncoid] don't use hardcoded paths (@phreaker0)
[syncoid] fix for special setup with listsnapshots=on (@phreaker0)
@@ -102,7 +102,7 @@ sanoid (2.0.0) unstable; urgency=medium
[sanoid] implemented monitor-capacity flag for checking zpool capacity limits (@phreaker0)
[syncoid] Added support for ZStandard compression.(@danielewood)
[syncoid] implemented support for excluding datasets from replication with regular expressions (@phreaker0)
- [syncoid] correctly parse zfs column output, fixes resumeable send with datasets containing spaces (@phreaker0)
+ [syncoid] correctly parse zfs column output, fixes resumable send with datasets containing spaces (@phreaker0)
[syncoid] added option for using extra identification in the snapshot name for replication to multiple targets (@phreaker0)
[syncoid] added option for skipping the parent dataset in recursive replication (@phreaker0)
[syncoid] typos (@UnlawfulMonad, @jsavikko, @phreaker0)
diff --git a/packages/rhel/sanoid.spec b/packages/rhel/sanoid.spec
index b4452e8..376f58a 100644
--- a/packages/rhel/sanoid.spec
+++ b/packages/rhel/sanoid.spec
@@ -111,13 +111,13 @@ echo "* * * * * root %{_sbindir}/sanoid --cron" > %{buildroot}%{_docdir}/%{name}
%endif
%changelog
-* Wed Nov 24 2020 Christoph Klaffl - 2.1.0
+* Tue Nov 24 2020 Christoph Klaffl - 2.1.0
- Bump to 2.1.0
* Wed Oct 02 2019 Christoph Klaffl - 2.0.3
- Bump to 2.0.3
* Wed Sep 25 2019 Christoph Klaffl - 2.0.2
- Bump to 2.0.2
-* Wed Dec 04 2018 Christoph Klaffl - 2.0.0
+* Tue Dec 04 2018 Christoph Klaffl - 2.0.0
- Bump to 2.0.0
* Sat Apr 28 2018 Dominic Robinson - 1.4.18-1
- Bump to 1.4.18
diff --git a/sanoid b/sanoid
index 13ea085..6de6c30 100755
--- a/sanoid
+++ b/sanoid
@@ -130,7 +130,7 @@ sub monitor_snapshots {
my ($config, $snaps, $snapsbytype, $snapsbypath) = @_;
my %datestamp = get_date();
- my $errorlevel = 0;
+ my $errlevel = 0;
my $msg;
my @msgs;
my @paths;
@@ -169,7 +169,7 @@ sub monitor_snapshots {
my $dispcrit = displaytime($crit);
if ( $elapsed > $crit || $elapsed == -1) {
if ($crit > 0) {
- if (! $config{$section}{'monitor_dont_crit'}) { $errorlevel = 2; }
+ if (! $config{$section}{'monitor_dont_crit'}) { $errlevel = 2; }
if ($elapsed == -1) {
push @msgs, "CRIT: $path has no $type snapshots at all!";
} else {
@@ -178,7 +178,7 @@ sub monitor_snapshots {
}
} elsif ($elapsed > $warn) {
if ($warn > 0) {
- if (! $config{$section}{'monitor_dont_warn'} && ($errorlevel < 2) ) { $errorlevel = 1; }
+ if (! $config{$section}{'monitor_dont_warn'} && ($errlevel < 2) ) { $errlevel = 1; }
push @msgs, "WARN: $path newest $type snapshot is $dispelapsed old (should be < $dispwarn)";
}
} else {
@@ -196,7 +196,7 @@ sub monitor_snapshots {
if ($msg eq '') { $msg = "OK: all monitored datasets \($paths\) have fresh snapshots"; }
print "$msg\n";
- exit $errorlevel;
+ exit $errlevel;
}
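The exit code set here follows the usual Nagios-style convention (0 OK, 1 WARN, 2 CRIT), so a monitoring system can simply propagate it; a minimal wrapper sketch (the wrapper itself is hypothetical, the flag and exit levels come from sanoid):

```bash
#!/bin/bash
# Hypothetical Nagios/Icinga check command; the sanoid path matches the cron example in the README.
# sanoid prints its own OK/WARN/CRIT message and exits 0, 1, or 2 accordingly.
exec /usr/local/bin/sanoid --monitor-snapshots
```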
@@ -319,6 +319,25 @@ sub prune_snapshots {
if (checklock('sanoid_pruning')) {
writelock('sanoid_pruning');
foreach my $snap( @prunesnaps ){
+ my $dataset = (split '@', $snap)[0];
+ my $snapname = (split '@', $snap)[1];
+
+ if (! $args{'readonly'} && $config{$dataset}{'pre_pruning_script'}) {
+ $ENV{'SANOID_TARGET'} = $dataset;
+ $ENV{'SANOID_SNAPNAME'} = $snapname;
+ if ($args{'verbose'}) { print "executing pre_pruning_script '".$config{$dataset}{'pre_pruning_script'}."' on dataset '$dataset'\n"; }
+ my $ret = runscript('pre_pruning_script', $dataset);
+
+ delete $ENV{'SANOID_TARGET'};
+ delete $ENV{'SANOID_SNAPNAME'};
+
+ if ($ret != 0) {
+ # warning was already thrown by runscript function
+ # skip pruning if the pre-pruning script returns a non-zero exit code
+ next;
+ }
+ }
+
if ($args{'verbose'}) { print "INFO: pruning $snap ... \n"; }
if (!$args{'force-prune'} && iszfsbusy($path)) {
if ($args{'verbose'}) { print "INFO: deferring pruning of $snap - $path is currently in zfs send or receive.\n"; }
@@ -326,8 +345,6 @@ sub prune_snapshots {
if (! $args{'readonly'}) {
if (system($zfs, "destroy", $snap) == 0) {
$pruned{$snap} = 1;
- my $dataset = (split '@', $snap)[0];
- my $snapname = (split '@', $snap)[1];
if ($config{$dataset}{'pruning_script'}) {
$ENV{'SANOID_TARGET'} = $dataset;
$ENV{'SANOID_SNAPNAME'} = $snapname;
@@ -881,6 +898,13 @@ sub init {
if (! defined ($defaults{'template_default'}{$key})) {
die "FATAL ERROR: I don't understand the setting $key you've set in \[$section\] in $conf_file.\n";
}
+
+ # in case of duplicate lines we will end up with an array of all values
+ my $value = $ini{$section}{$key};
+ if (ref($value) eq 'ARRAY') {
+ warn "duplicate key '$key' in section '$section', using the value from the first occurence and ignoring the others.\n";
+ $ini{$section}{$key} = $value->[0];
+ }
}
if ($section =~ /^template_/) { next; } # don't process templates directly
@@ -889,7 +913,7 @@ sub init {
# for sections directly when they've already been defined recursively, without starting them over from scratch.
if (! defined ($config{$section}{'initialized'})) {
if ($args{'debug'}) { print "DEBUG: initializing \$config\{$section\} with default values from $default_conf_file.\n"; }
- # set default values from %defaults, which can then be overriden by template
+ # set default values from %defaults, which can then be overridden by template
# and/or local settings within the module.
foreach my $key (keys %{$defaults{'template_default'}}) {
if (! ($key =~ /template|recursive|children_only/)) {
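The case this new check guards against is a config stanza that repeats a key, which the INI parser hands back as an array of values; a contrived illustration (path and numbers are made up):

```bash
# Illustrative only: with this change sanoid warns about the duplicate "hourly"
# and keeps the first value (36) instead of silently carrying an array of values.
cat > /tmp/duplicate-key-example.conf <<'EOF'
[data/images]
	use_template = production
	hourly = 36
	hourly = 48
EOF
```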
@@ -1137,7 +1161,7 @@ sub check_zpool() {
}
}
- # Tony: Debuging
+ # Tony: Debugging
# print "Size: $size \t Used: $used \t Avai: $avail \t Cap: $cap \t Health: $health\n";
close(STAT);
@@ -1239,7 +1263,7 @@ sub check_zpool() {
## no display for verbose level 1
next if ($verbose==1);
## don't display working devices for verbose level 2
- if ($verbose==2 && ($state eq "OK" || $sta eq "ONLINE" || $sta eq "AVAIL" || $sta eq "INUSE")) {
+ if ($verbose==2 && ($state eq "OK" || $sta eq "ONLINE" || $sta eq "AVAIL")) {
# check for io/checksum errors
my @vdeverr = ();
diff --git a/sanoid.conf b/sanoid.conf
index 8504b93..c082bac 100644
--- a/sanoid.conf
+++ b/sanoid.conf
@@ -102,6 +102,8 @@
pre_snapshot_script = /path/to/script.sh
### run script after snapshot
post_snapshot_script = /path/to/script.sh
+ ### run script before pruning snapshot
+ pre_pruning_script = /path/to/script.sh
### run script after pruning snapshot
pruning_script = /path/to/script.sh
### don't take an inconsistent snapshot (skip if pre script fails)
diff --git a/sanoid.defaults.conf b/sanoid.defaults.conf
index 2eb6c55..0e46699 100644
--- a/sanoid.defaults.conf
+++ b/sanoid.defaults.conf
@@ -22,6 +22,7 @@ skip_children =
# See "Sanoid script hooks" in README.md for information about scripts.
pre_snapshot_script =
post_snapshot_script =
+pre_pruning_script =
pruning_script =
script_timeout = 5
no_inconsistent_snapshot =
diff --git a/syncoid b/syncoid
index 6e37af2..a4201d8 100755
--- a/syncoid
+++ b/syncoid
@@ -20,11 +20,11 @@ my $pvoptions = "-p -t -e -r -b";
# Blank defaults to use ssh client's default
# TODO: Merge into a single "sshflags" option?
-my %args = ('sshkey' => '', 'sshport' => '', 'sshcipher' => '', 'sshoption' => [], 'target-bwlimit' => '', 'source-bwlimit' => '');
+my %args = ('sshconfig' => '', 'sshkey' => '', 'sshport' => '', 'sshcipher' => '', 'sshoption' => [], 'target-bwlimit' => '', 'source-bwlimit' => '');
GetOptions(\%args, "no-command-checks", "monitor-version", "compress=s", "dumpsnaps", "recursive|r", "sendoptions=s", "recvoptions=s",
- "source-bwlimit=s", "target-bwlimit=s", "sshkey=s", "sshport=i", "sshcipher|c=s", "sshoption|o=s@",
+ "source-bwlimit=s", "target-bwlimit=s", "sshconfig=s", "sshkey=s", "sshport=i", "sshcipher|c=s", "sshoption|o=s@",
"debug", "quiet", "no-stream", "no-sync-snap", "no-resume", "exclude=s@", "skip-parent", "identifier=s",
- "no-clone-handling", "no-privilege-elevation", "force-delete", "create-bookmark",
+ "no-clone-handling", "no-privilege-elevation", "force-delete", "no-rollback", "create-bookmark",
"pv-options=s" => \$pvoptions, "keep-sync-snap", "preserve-recordsize", "mbuffer-size=s" => \$mbuffer_size,
"include-snaps=s@", "exclude-snaps=s@", "exclude-datasets=s@") or pod2usage(2);
@@ -118,6 +118,9 @@ if (length $args{'sshcipher'}) {
if (length $args{'sshport'}) {
$args{'sshport'} = "-p $args{'sshport'}";
}
+if (length $args{'sshconfig'}) {
+ $args{'sshconfig'} = "-F $args{'sshconfig'}";
+}
if (length $args{'sshkey'}) {
$args{'sshkey'} = "-i $args{'sshkey'}";
}
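A usage sketch for the new flag (the config file path is a placeholder); the value is passed straight to ssh as -F, so host aliases, keys, and ports can be kept in that file:

```bash
# Hypothetical ssh_config dedicated to the backup job
syncoid --sshconfig=/etc/syncoid/backup_ssh_config \
        root@backuphost:data/images/vm backup/images/vm
```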
@@ -135,7 +138,7 @@ if (length $args{'identifier'}) {
}
# figure out if source and/or target are remote.
-$sshcmd = "$sshcmd $args{'sshcipher'} $sshoptions $args{'sshport'} $args{'sshkey'}";
+$sshcmd = "$sshcmd $args{'sshconfig'} $args{'sshcipher'} $sshoptions $args{'sshport'} $args{'sshkey'}";
writelog('DEBUG', "SSHCMD: $sshcmd");
my ($sourcehost,$sourcefs,$sourceisroot) = getssh($rawsourcefs);
my ($targethost,$targetfs,$targetisroot) = getssh($rawtargetfs);
@@ -151,6 +154,8 @@ my %avail = checkcommands();
my %snaps;
my $exitcode = 0;
+my $replicationCount = 0;
+
## break here to call replication individually so that we ##
## can loop across children separately, for recursive ##
## replication ##
@@ -294,13 +299,22 @@ sub syncdataset {
my $stdout;
my $exit;
+ my $sourcefsescaped = escapeshellparam($sourcefs);
+ my $targetfsescaped = escapeshellparam($targetfs);
+
+ # if no rollbacks are allowed, disable forced receive
+ my $forcedrecv = "-F";
+ if (defined $args{'no-rollback'}) {
+ $forcedrecv = "";
+ }
+
writelog('DEBUG', "syncing source $sourcefs to target $targetfs.");
my ($sync, $error) = getzfsvalue($sourcehost,$sourcefs,$sourceisroot,'syncoid:sync');
if (!defined $sync) {
# zfs already printed the corresponding error
- if ($error =~ /\bdataset does not exist\b/) {
+ if ($error =~ /\bdataset does not exist\b/ && $replicationCount > 0) {
writelog('WARN', "Skipping dataset (dataset no longer exists): $sourcefs...");
return 0;
}
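In effect --no-rollback just drops the -F from zfs receive, so the target is never rolled back to the last common snapshot; a usage sketch (dataset and host names are placeholders):

```bash
# Keep the target untouched between runs; sensible when the target dataset is readonly,
# since any divergence on the target would otherwise require a rollback (-F) to receive.
syncoid --no-rollback data/images/vm root@backuphost:backup/images/vm
```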
@@ -694,6 +708,35 @@ sub syncdataset {
if ($exitcode < 2) { $exitcode = 2; }
return 0;
}
+ } elsif ($args{'force-delete'} && $stdout =~ /\Qdestination already exists\E/) {
+ (my $existing) = $stdout =~ m/^cannot restore to ([^:]*): destination already exists$/g;
+ if ($existing eq "") {
+ if ($exitcode < 2) { $exitcode = 2; }
+ return 0;
+ }
+
+ if (!$quiet) { print "WARN: removing existing destination: $existing\n"; }
+ my $rcommand = '';
+ my $mysudocmd = '';
+ my $existingescaped = escapeshellparam($existing);
+
+ if ($targethost ne '') { $rcommand = "$sshcmd $targethost"; }
+ if (!$targetisroot) { $mysudocmd = $sudocmd; }
+
+ my $prunecmd = "$mysudocmd $zfscmd destroy $existingescaped; ";
+ if ($targethost ne '') {
+ $prunecmd = escapeshellparam($prunecmd);
+ }
+
+ my $ret = system("$rcommand $prunecmd");
+ if ($ret != 0) {
+ warn "CRITICAL ERROR: $rcommand $prunecmd failed: $?";
+ if ($exitcode < 2) { $exitcode = 2; }
+ return 0;
+ } else {
+ # redo sync and skip snapshot creation (already taken)
+ return syncdataset($sourcehost, $sourcefs, $targethost, $targetfs, undef, 1);
+ }
} else {
if ($exitcode < 2) { $exitcode = 2; }
return 0;
@@ -708,11 +751,13 @@ sub syncdataset {
}
}
+ $replicationCount++;
+
if (defined $args{'no-sync-snap'}) {
if (defined $args{'create-bookmark'}) {
my $ret = createbookmark($sourcehost, $sourcefs, $newsyncsnap, $newsyncsnap);
$ret == 0 or do {
- # fallback: assume nameing conflict and try again with guid based suffix
+ # fallback: assume naming conflict and try again with guid based suffix
my $guid = $snaps{'source'}{$newsyncsnap}{'guid'};
$guid = substr($guid, 0, 6);
@@ -1179,7 +1224,7 @@ sub iszfsbusy {
close PL;
foreach my $process (@processes) {
- if ($process =~ /zfs *(receive|recv).*\Q$fs\E\Z/) {
+ if ($process =~ /zfs *(receive|recv)[^\/]*\Q$fs\E\Z/) {
# there's already a zfs receive process for our target filesystem - return true
writelog('DEBUG', "process $process matches target $fs!");
return 1;
@@ -1568,6 +1613,8 @@ sub getsnaps() {
my $fsescaped = escapeshellparam($fs);
if ($isroot) { $mysudocmd = ''; } else { $mysudocmd = $sudocmd; }
+ my $rhostOriginal = $rhost;
+
if ($rhost ne '') {
$rhost = "$sshcmd $rhost";
# double escaping needed
@@ -1585,7 +1632,7 @@ sub getsnaps() {
my @rawsnaps = <FH>;
close FH or do {
# fallback (solaris for example doesn't support the -t option)
- return getsnapsfallback($type,$rhost,$fs,$isroot,%snaps);
+ return getsnapsfallback($type,$rhostOriginal,$fs,$isroot,%snaps);
};
# this is a little obnoxious. get guid,creation returns guid,creation on two separate lines
@@ -1772,7 +1819,7 @@ sub getbookmarks() {
$creation =~ s/^.*\tcreation\t*(\d*).*/$1/;
my $bookmark = $line;
$bookmark =~ s/^.*\#(.*)\tcreation.*$/$1/;
- $bookmarks{$lastguid}{'creation'}=$creation;
+ $bookmarks{$lastguid}{'creation'}=$creation . "000";
}
}
@@ -2075,12 +2122,14 @@ Options:
--keep-sync-snap Don't destroy created sync snapshots
--create-bookmark Creates a zfs bookmark for the newest snapshot on the source after replication succeeds (only works with --no-sync-snap)
--preserve-recordsize Preserves the recordsize on initial sends to the target
+ --no-rollback Does not roll back snapshots on the target (it probably requires a readonly target)
--exclude=REGEX DEPRECATED. Equivalent to --exclude-datasets, but will be removed in a future release. Ignored if --exclude-datasets is also provided.
--exclude-datasets=REGEX Exclude specific datasets which match the given regular expression. Can be specified multiple times
--exclude-snaps=REGEX Exclude specific snapshots that match the given regular expression. Can be specified multiple times. If a snapshot matches both the exclude-snaps and include-snaps patterns, then it will be excluded.
--include-snaps=REGEX Only include snapshots that match the given regular expression. Can be specified multiple times. If a snapshot matches both the exclude-snaps and include-snaps patterns, then it will be excluded.
--sendoptions=OPTIONS Use advanced options for zfs send (the arguments are filtered as needed), e.g. syncoid --sendoptions="Lc e" sets zfs send -L -c -e ...
--recvoptions=OPTIONS Use advanced options for zfs receive (the arguments are filtered as needed), e.g. syncoid --recvoptions="ux recordsize o compression=lz4" sets zfs receive -u -x recordsize -o compression=lz4 ...
+ --sshconfig=FILE Specifies an ssh_config(5) file to be used
--sshkey=FILE Specifies a ssh key to use to connect
--sshport=PORT Connects to remote on a particular port
--sshcipher|c=CIPHER Passes CIPHER to ssh to use a particular cipher set
@@ -2097,4 +2146,4 @@ Options:
--no-clone-handling Don't try to recreate clones on target
--no-privilege-elevation Bypass the root check, for use with ZFS permission delegation
- --force-delete Remove target datasets recursively, if there are no matching snapshots/bookmarks
+ --force-delete Remove target datasets recursively, if there are no matching snapshots/bookmarks (also overwrites conflicting named snapshots)
diff --git a/tests/common/lib.sh b/tests/common/lib.sh
index 904c98f..9c88eff 100644
--- a/tests/common/lib.sh
+++ b/tests/common/lib.sh
@@ -34,7 +34,7 @@ function checkEnvironment {
echo "!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!"
echo "you should be running this test in a"
echo "dedicated vm, as it will mess with your system!"
- echo "Are you sure you wan't to continue? (y)"
+ echo "Are you sure you want to continue? (y)"
echo "!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!"
set -x
diff --git a/tests/syncoid/8_force_delete_snapshot/run.sh b/tests/syncoid/8_force_delete_snapshot/run.sh
new file mode 100755
index 0000000..899092a
--- /dev/null
+++ b/tests/syncoid/8_force_delete_snapshot/run.sh
@@ -0,0 +1,48 @@
+#!/bin/bash
+
+# test replication with deletion of conflicting snapshot on target
+
+set -x
+set -e
+
+. ../../common/lib.sh
+
+POOL_IMAGE="/tmp/syncoid-test-8.zpool"
+POOL_SIZE="200M"
+POOL_NAME="syncoid-test-8"
+TARGET_CHECKSUM="ee439200c9fa54fc33ce301ef64d4240a6c5587766bfeb651c5cf358e11ec89d -"
+
+truncate -s "${POOL_SIZE}" "${POOL_IMAGE}"
+
+zpool create -m none -f "${POOL_NAME}" "${POOL_IMAGE}"
+
+function cleanUp {
+ zpool export "${POOL_NAME}"
+}
+
+# export pool in any case
+trap cleanUp EXIT
+
+zfs create "${POOL_NAME}"/src
+zfs snapshot "${POOL_NAME}"/src@duplicate
+
+# initial replication
+../../../syncoid -r --debug --compress=none "${POOL_NAME}"/src "${POOL_NAME}"/dst
+# recreate snapshot with the same name on src
+zfs destroy "${POOL_NAME}"/src@duplicate
+zfs snapshot "${POOL_NAME}"/src@duplicate
+sleep 1
+../../../syncoid -r --force-delete --debug --compress=none "${POOL_NAME}"/src "${POOL_NAME}"/dst || exit 1
+
+# verify
+output1=$(zfs list -t snapshot -r -H -o guid,name "${POOL_NAME}"/src | sed 's/@syncoid_.*$'/@syncoid_/)
+checksum1=$(echo "${output1}" | shasum -a 256)
+
+output2=$(zfs list -t snapshot -r -H -o guid,name "${POOL_NAME}"/dst | sed 's/@syncoid_.*$'/@syncoid_/ | sed 's/dst/src/')
+checksum2=$(echo "${output2}" | shasum -a 256)
+
+if [ "${checksum1}" != "${checksum2}" ]; then
+ exit 1
+fi
+
+exit 0