Compare commits


27 Commits

Author SHA1 Message Date
Kenny Phelps-McKeown 509490ec52
Merge b50e55c17a into 8d4abf14b2 2025-06-12 00:18:54 +02:00
Jim Salter 8d4abf14b2
Merge pull request #1006 from phreaker0/prepare-2.3.0
prepare 2.3.0
2025-06-11 08:10:52 -04:00
Christoph Klaffl becec66320
prepare v2.3.0 2025-06-09 23:21:44 +02:00
Adam Fulton aa2c693e62
fix(syncoid): regather $snaps on --delete-target-snapshots flag 2025-06-05 23:31:47 +02:00
Christoph Klaffl b794da6f14
Revert "Merge pull request #818 from Deltik/fix/815"
This reverts commit 7c225a1d7b, reversing
changes made to acdc0938c9.
2025-06-05 23:23:30 +02:00
Christoph Klaffl b9bcb6a9d3
Merge branch 'cache-add' into prepare-2.3.0 2025-06-05 22:44:59 +02:00
Christoph Klaffl f0a2b120d9
Merge branch 'blacklist-encryption' into prepare-2.3.0 2025-06-05 22:42:49 +02:00
Christoph Klaffl a546b7d162
Merge branch 'fix-warning' into prepare-2.3.0 2025-06-05 22:42:22 +02:00
Christoph Klaffl b1f191ff8f
Merge branch 'pr-964' into prepare-2.3.0 2025-06-05 22:42:10 +02:00
Christoph Klaffl 003dd4635a
Merge branch 'patch-1' into prepare-2.3.0 2025-06-05 22:40:47 +02:00
Christoph Klaffl 1915ea29a2
Merge branch 'ignore-duplicate-template-keys' into prepare-2.3.0 2025-06-05 22:40:11 +02:00
Christoph Klaffl 6bda64508b
Merge branch 'ossimoi/master' into prepare-2.3.0 2025-06-05 22:39:38 +02:00
Christoph Klaffl 5109a51b68
Merge branch 'fix-debian-package' into prepare-2.3.0 2025-06-05 22:37:29 +02:00
Christoph Klaffl 67b9dec294
Merge branch 'fix/918' into prepare-2.3.0 2025-06-05 22:37:14 +02:00
Christoph Klaffl 44a9b71d5f
Merge branch 'dev/sde/remove-force-prune' into prepare-2.3.0 2025-06-05 22:36:12 +02:00
Christoph Klaffl 27fc179490
implemented adding of taken snapshots to the cache file and a new parameter for setting a custom cache expire time 2025-06-05 21:59:30 +02:00
Christoph Klaffl 7062b7347e
blacklist encryption property from preserving 2025-01-24 14:01:59 +01:00
Christoph Klaffl 4a9db9541d
fix warning in edge cases ("Use of uninitialized value in numeric comparison") 2024-12-03 19:32:47 +01:00
Christopher Morrow f4e425d682
Add Install instructions for EL9 systems
Added to INSTALL.md the command to add the `crb` repo for Rocky Linux 9 and AlmaLinux 9.
Necessary for perl-Capture-Tiny package.
2024-11-27 18:24:08 -08:00
Alex Garel 19f8877dcb
docs: clarify that scripts are run only if autosnap or autoprune are set 2024-11-05 11:20:31 +01:00
Christoph Klaffl 3942254e30
ignore duplicate template keys 2024-09-20 07:38:12 +02:00
Ossi A b27b120c19 syncoid: add -X send option in special options 2024-06-25 11:11:29 +03:00
Christoph Klaffl cf0ecb30ae
added deprecation warning for removed force-prune 2024-06-04 08:40:41 +02:00
Christoph Klaffl 7ba73acea9
Merge branch 'master' into dev/sde/remove-force-prune 2024-06-04 08:27:45 +02:00
Christoph Klaffl 4d39e39217
fix debian packaging with debian 12 and ubuntu 24.04 2024-06-03 23:46:54 +02:00
Nick Liu fab4b4076c
fix(syncoid): `zfs send` arg allowlist when sendsource is receivetoken
The `runsynccmd` subroutine was not matching the `$sendsource` when a
receive resume token is passed in. All usages that pass in the receive
resume token do not begin with a space; instead, they start with `-t `.

Fixes: https://github.com/jimsalterjrs/sanoid/issues/918
2024-04-26 02:18:19 -05:00
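The allowlist fix in fab4b4076c comes down to anchoring a regex: a receive resume token send source starts with `-t ` and has no leading space, so the old pattern requiring a surrounding space never matched it. An illustrative Python sketch (not the actual Perl from syncoid; the variable contents are hypothetical):

```python
import re

# A resume-token send source begins with "-t " directly, with no leading
# space, so a pattern that requires a surrounding space never matches it.
sendsource = "-t 1-abcdef-deadbeef"  # hypothetical resume token

old_pattern = re.compile(r" -t ")   # before the fix: needs a leading space
new_pattern = re.compile(r"^-t ")   # after the fix: anchored at the start

print(bool(old_pattern.search(sendsource)))  # False: token send not detected
print(bool(new_pattern.search(sendsource)))  # True: correct allowlist chosen
```

With the anchored pattern, `runsynccmd` selects the send-option allowlist intended for resume-token sends instead of falling through to the wrong branch.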
Steffen Dettmer 03c3db3d9a sanoid #912: sanoid --prune-snapshots performance boost by removing unneeded iszfsbusy() 2024-04-20 13:08:21 +02:00
19 changed files with 430 additions and 226 deletions


@ -1,3 +1,23 @@
2.3.0 [overall] documentation updates, small fixes (@thecatontheflat, @mjeanson, @jiawen, @EchterAgo, @jan-krieg, @dlangille, @rightaditya, @MynaITLabs, @ossimoi, @alexgarel, @TopherIsSwell, @jimsalterjrs, @phreaker0)
[sanoid] implemented adding of taken snapshots to the cache file and a new parameter for setting a custom cache expire time (@phreaker0)
[sanoid] ignore duplicate template keys (@phreaker0)
[packaging] fix debian packaging with debian 12 and ubuntu 24.04 (@phreaker0)
[syncoid] fix typo preventing resumed transfer with --sendoptions (@Deltik)
[sanoid] remove iszfsbusy check to boost performance (@sdettmer)
[sanoid] write cache files in an atomic way to prevent race conditions (@phreaker0)
[sanoid] improve performance (especially for monitor commands) by caching the dataset list (@phreaker0)
[syncoid] add zstdmt compress options (@0xFelix)
[syncoid] added missing status information about what is done and provide more details (@phreaker0)
[syncoid] rename ssh control socket to avoid problem with length limits and conflicts (@phreaker0)
[syncoid] support relative paths (@phreaker0)
[syncoid] regather snapshots on --delete-target-snapshots flag (@Adam Fulton)
[sanoid] allow monitor commands to be run without root by using only the cache file (@Pajkastare)
[syncoid] add --include-snaps and --exclude-snaps options (@mr-vinn, @phreaker0)
[syncoid] escape property key and value pair in case of property preservation (@phreaker0)
[syncoid] prevent destroying of root dataset which leads to infinite loop because it can't be destroyed (@phreaker0)
[syncoid] modify zfs-get argument order for portability (@Rantherhin)
[sanoid] trim config values (@phreaker0)
2.2.0 [overall] documentation updates, small fixes (@azmodude, @deviantintegral, @jimsalterjrs, @alexhaydock, @cbreak-black, @kd8bny, @JavaScriptDude, @veeableful, @rsheasby, @Topslakr, @mavhc, @adam-stamand, @joelishness, @jsoref, @dodexahedron, @phreaker0)
[syncoid] implemented flag for preserving properties without the zfs -p flag (@phreaker0)
[syncoid] implemented target snapshot deletion (@mat813)


@ -60,6 +60,8 @@ sudo yum config-manager --set-enabled powertools
sudo dnf config-manager --set-enabled powertools
# On RHEL, instead of PowerTools, we need to enable the CodeReady Builder repo:
sudo subscription-manager repos --enable=codeready-builder-for-rhel-8-x86_64-rpms
# For Rocky Linux 9 or AlmaLinux 9 you need the CodeReady Builder repo, and it is labelled `crb`
sudo dnf config-manager --set-enabled crb
# Install the packages that Sanoid depends on:
sudo yum install -y perl-Config-IniFiles perl-Data-Dumper perl-Capture-Tiny perl-Getopt-Long lzop mbuffer mhash pv
# The repositories above should contain all the relevant Perl modules, but if you


@ -80,10 +80,6 @@ For more full details on sanoid.conf settings see [Wiki page](https://github.com
This will process your sanoid.conf file, it will NOT create snapshots, but it will purge expired ones.
+ --force-prune
Purges expired snapshots even if a send/recv is in progress
+ --monitor-snapshots
This option is designed to be run by a Nagios monitoring system. It reports on the health of your snapshots.
@ -100,6 +96,10 @@ For more full details on sanoid.conf settings see [Wiki page](https://github.com
This clears out sanoid's zfs snapshot listing cache. This is normally not needed.
+ --cache-ttl=SECONDS
Set a custom cache expiry time in seconds (default: 20 minutes).
+ --version
This prints the version number, and exits.
@ -126,7 +126,9 @@ For more full details on sanoid.conf settings see [Wiki page](https://github.com
### Sanoid script hooks
There are three script types which can optionally be executed at various stages in the lifecycle of a snapshot:
There are three script types which can optionally be executed at various stages in the lifecycle of a snapshot.
**Note** that snapshot-related scripts are triggered only if you have `autosnap = yes`, and pruning scripts are triggered only if you have `autoprune = yes`.
#### `pre_snapshot_script`


@ -1 +1 @@
2.2.0
2.3.0


@ -4,7 +4,7 @@
# from http://www.gnu.org/licenses/gpl-3.0.html on 2014-11-17. A copy should also be available in this
# project's Git repository at https://github.com/jimsalterjrs/sanoid/blob/master/LICENSE.
$::VERSION = '2.2.0';
$::VERSION = '2.3.0';
use strict;
use warnings;


@ -1,3 +1,27 @@
sanoid (2.3.0) unstable; urgency=medium
[overall] documentation updates, small fixes (@thecatontheflat, @mjeanson, @jiawen, @EchterAgo, @jan-krieg, @dlangille, @rightaditya, @MynaITLabs, @ossimoi, @alexgarel, @TopherIsSwell, @jimsalterjrs, @phreaker0)
[sanoid] implemented adding of taken snapshots to the cache file and a new parameter for setting a custom cache expire time (@phreaker0)
[sanoid] ignore duplicate template keys (@phreaker0)
[packaging] fix debian packaging with debian 12 and ubuntu 24.04 (@phreaker0)
[syncoid] fix typo preventing resumed transfer with --sendoptions (@Deltik)
[sanoid] remove iszfsbusy check to boost performance (@sdettmer)
[sanoid] write cache files in an atomic way to prevent race conditions (@phreaker0)
[sanoid] improve performance (especially for monitor commands) by caching the dataset list (@phreaker0)
[syncoid] add zstdmt compress options (@0xFelix)
[syncoid] added missing status information about what is done and provide more details (@phreaker0)
[syncoid] rename ssh control socket to avoid problem with length limits and conflicts (@phreaker0)
[syncoid] support relative paths (@phreaker0)
[syncoid] regather snapshots on --delete-target-snapshots flag (@Adam Fulton)
[sanoid] allow monitor commands to be run without root by using only the cache file (@Pajkastare)
[syncoid] add --include-snaps and --exclude-snaps options (@mr-vinn, @phreaker0)
[syncoid] escape property key and value pair in case of property preservation (@phreaker0)
[syncoid] prevent destroying of root dataset which leads to infinite loop because it can't be destroyed (@phreaker0)
[syncoid] modify zfs-get argument order for portability (@Rantherhin)
[sanoid] trim config values (@phreaker0)
-- Jim Salter <github@jrs-s.net> Tue, 05 Jun 2025 22:47:00 +0200
sanoid (2.2.0) unstable; urgency=medium
[overall] documentation updates, small fixes (@azmodude, @deviantintegral, @jimsalterjrs, @alexhaydock, @cbreak-black, @kd8bny, @JavaScriptDude, @veeableful, @rsheasby, @Topslakr, @mavhc, @adam-stamand, @joelishness, @jsoref, @dodexahedron, @phreaker0)


@ -2,3 +2,5 @@
# remove old cache file
[ -f /var/cache/sanoidsnapshots.txt ] && rm /var/cache/sanoidsnapshots.txt || true
[ -f /var/cache/sanoid/snapshots.txt ] && rm /var/cache/sanoid/snapshots.txt || true
[ -f /var/cache/sanoid/datasets.txt ] && rm /var/cache/sanoid/datasets.txt || true


@ -12,10 +12,6 @@ override_dh_auto_install:
install -d $(DESTDIR)/etc/sanoid
install -m 664 sanoid.defaults.conf $(DESTDIR)/etc/sanoid
install -d $(DESTDIR)/lib/systemd/system
install -m 664 debian/sanoid-prune.service debian/sanoid.timer \
$(DESTDIR)/lib/systemd/system
install -d $(DESTDIR)/usr/sbin
install -m 775 \
findoid sanoid sleepymutex syncoid \
@ -25,6 +21,8 @@ override_dh_auto_install:
install -m 664 sanoid.conf \
$(DESTDIR)/usr/share/doc/sanoid/sanoid.conf.example
dh_installsystemd --name sanoid-prune
override_dh_installinit:
dh_installinit --noscripts


@ -1,4 +1,4 @@
%global version 2.2.0
%global version 2.3.0
%global git_tag v%{version}
# Enable with systemctl "enable sanoid.timer"
@ -111,6 +111,8 @@ echo "* * * * * root %{_sbindir}/sanoid --cron" > %{buildroot}%{_docdir}/%{name}
%endif
%changelog
* Tue Jun 05 2025 Christoph Klaffl <christoph@phreaker.eu> - 2.3.0
- Bump to 2.3.0
* Tue Jul 18 2023 Christoph Klaffl <christoph@phreaker.eu> - 2.2.0
- Bump to 2.2.0
* Tue Nov 24 2020 Christoph Klaffl <christoph@phreaker.eu> - 2.1.0

sanoid

@ -4,7 +4,7 @@
# from http://www.gnu.org/licenses/gpl-3.0.html on 2014-11-17. A copy should also be available in this
# project's Git repository at https://github.com/jimsalterjrs/sanoid/blob/master/LICENSE.
$::VERSION = '2.2.0';
$::VERSION = '2.3.0';
my $MINIMUM_DEFAULTS_VERSION = 2;
use strict;
@ -12,6 +12,7 @@ use warnings;
use Config::IniFiles; # read samba-style conf file
use Data::Dumper; # debugging - print contents of hash
use File::Path 'make_path';
use File::Copy;
use Getopt::Long qw(:config auto_version auto_help);
use Pod::Usage; # pod2usage
use Time::Local; # to parse dates in reverse
@ -27,7 +28,7 @@ GetOptions(\%args, "verbose", "debug", "cron", "readonly", "quiet",
"configdir=s", "cache-dir=s", "run-dir=s",
"monitor-health", "force-update",
"monitor-snapshots", "take-snapshots", "prune-snapshots", "force-prune",
"monitor-capacity"
"monitor-capacity", "cache-ttl=i"
) or pod2usage(2);
# If only config directory (or nothing) has been specified, default to --cron --verbose
@ -55,6 +56,17 @@ make_path($run_dir);
my $cacheTTL = 1200; # 20 minutes
if ($args{'force-prune'}) {
warn "WARN: --force-prune argument is deprecated and its behavior is now standard";
}
if ($args{'cache-ttl'}) {
if ($args{'cache-ttl'} < 0) {
die "ERROR: cache-ttl needs to be positive!\n";
}
$cacheTTL = $args{'cache-ttl'};
}
# Allow a much older snapshot cache file than default if _only_ "--monitor-*" action commands are given
# (ignore "--verbose", "--configdir" etc)
if (
@ -67,7 +79,7 @@ if (
|| $args{'force-update'}
|| $args{'take-snapshots'}
|| $args{'prune-snapshots'}
|| $args{'force-prune'}
|| $args{'cache-ttl'}
)
) {
# The command combination above must not assert true for any command that takes or prunes snapshots
@ -87,6 +99,7 @@ my %config = init($conf_file,$default_conf_file);
my %pruned;
my %capacitycache;
my %taken;
my %snaps;
my %snapsbytype;
@ -382,26 +395,23 @@ sub prune_snapshots {
}
if ($args{'verbose'}) { print "INFO: pruning $snap ... \n"; }
if (!$args{'force-prune'} && iszfsbusy($path)) {
if ($args{'verbose'}) { print "INFO: deferring pruning of $snap - $path is currently in zfs send or receive.\n"; }
} else {
if (! $args{'readonly'}) {
if (system($zfs, "destroy", $snap) == 0) {
$pruned{$snap} = 1;
if ($config{$dataset}{'pruning_script'}) {
$ENV{'SANOID_TARGET'} = $dataset;
$ENV{'SANOID_SNAPNAME'} = $snapname;
$ENV{'SANOID_SCRIPT'} = 'prune';
if ($args{'verbose'}) { print "executing pruning_script '".$config{$dataset}{'pruning_script'}."' on dataset '$dataset'\n"; }
my $ret = runscript('pruning_script',$dataset);
delete $ENV{'SANOID_TARGET'};
delete $ENV{'SANOID_SNAPNAME'};
delete $ENV{'SANOID_SCRIPT'};
}
} else {
warn "could not remove $snap : $?";
if (! $args{'readonly'}) {
if (system($zfs, "destroy", $snap) == 0) {
$pruned{$snap} = 1;
if ($config{$dataset}{'pruning_script'}) {
$ENV{'SANOID_TARGET'} = $dataset;
$ENV{'SANOID_SNAPNAME'} = $snapname;
$ENV{'SANOID_SCRIPT'} = 'prune';
if ($args{'verbose'}) { print "executing pruning_script '".$config{$dataset}{'pruning_script'}."' on dataset '$dataset'\n"; }
my $ret = runscript('pruning_script',$dataset);
delete $ENV{'SANOID_TARGET'};
delete $ENV{'SANOID_SNAPNAME'};
delete $ENV{'SANOID_SCRIPT'};
}
} else {
warn "could not remove $snap : $?";
}
}
}
@ -593,6 +603,7 @@ sub take_snapshots {
}
if (%newsnapsgroup) {
$forcecacheupdate = 0;
while ((my $path, my $snapData) = each(%newsnapsgroup)) {
my $recursiveFlag = $snapData->{recursive};
my $dstHandling = $snapData->{handleDst};
@ -667,9 +678,17 @@ sub take_snapshots {
}
};
if ($exit == 0) {
$taken{$snap} = {
'time' => time(),
'recursive' => $recursiveFlag
};
}
$exit == 0 or do {
if ($dstHandling) {
if ($stderr =~ /already exists/) {
$forcecacheupdate = 1;
$exit = 0;
$snap =~ s/_([a-z]+)$/dst_$1/g;
if ($args{'verbose'}) { print "taking dst snapshot $snap$extraMessage\n"; }
@ -719,8 +738,8 @@ sub take_snapshots {
}
}
}
$forcecacheupdate = 1;
%snaps = getsnaps(%config,$cacheTTL,$forcecacheupdate);
addcachedsnapshots();
%snaps = getsnaps(\%config,$cacheTTL,$forcecacheupdate);
}
}
@ -1014,6 +1033,12 @@ sub init {
}
if ($args{'debug'}) { print "DEBUG: overriding $key on $section with value from user-defined template $template.\n"; }
$config{$section}{$key} = $ini{$template}{$key};
my $value = $config{$section}{$key};
if (ref($value) eq 'ARRAY') {
# handle duplicates silently (warning was already printed above)
$config{$section}{$key} = $value->[0];
}
}
}
}
@ -1635,30 +1660,6 @@ sub writelock {
close FH;
}
sub iszfsbusy {
# check to see if ZFS filesystem passed in as argument currently has a zfs send or zfs receive process referencing it.
# return true if busy (currently being sent or received), return false if not.
my $fs = shift;
# if (args{'debug'}) { print "DEBUG: checking to see if $fs on is already in zfs receive using $pscmd -Ao args= ...\n"; }
open PL, "$pscmd -Ao args= |";
my @processes = <PL>;
close PL;
foreach my $process (@processes) {
# if ($args{'debug'}) { print "DEBUG: checking process $process...\n"; }
if ($process =~ /zfs *(send|receive|recv).*$fs/) {
# there's already a zfs send/receive process for our target filesystem - return true
# if ($args{'debug'}) { print "DEBUG: process $process matches target $fs!\n"; }
return 1;
}
}
# no zfs receive processes for our target filesystem found - return false
return 0;
}
#######################################################################################################################3
#######################################################################################################################3
#######################################################################################################################3
@ -1745,6 +1746,11 @@ sub removecachedsnapshots {
print FH $snapline unless ( exists($pruned{$snap}) );
}
close FH;
# preserve mtime of cache for expire check
my ($dev, $ino, $mode, $nlink, $uid, $gid, $rdev, $size, $atime, $mtime, $ctime, $blksize, $blocks) = stat($cache);
utime($atime, $mtime, "$cache.tmp");
rename("$cache.tmp", "$cache") or die "Could not rename to $cache!\n";
removelock('sanoid_cacheupdate');
@ -1758,6 +1764,61 @@ sub removecachedsnapshots {
#######################################################################################################################3
#######################################################################################################################3
sub addcachedsnapshots {
if (not %taken) {
return;
}
my $unlocked = checklock('sanoid_cacheupdate');
# wait until we can get a lock to do our cache changes
while (not $unlocked) {
if ($args{'verbose'}) { print "INFO: waiting for cache update lock held by another sanoid process.\n"; }
sleep(10);
$unlocked = checklock('sanoid_cacheupdate');
}
writelock('sanoid_cacheupdate');
if ($args{'verbose'}) {
print "INFO: adding taken snapshots to cache.\n";
}
copy($cache, "$cache.tmp") or die "Could not copy to $cache.tmp!\n";
open FH, ">> $cache.tmp" or die "Could not write to $cache.tmp!\n";
while((my $snap, my $details) = each(%taken)) {
my @parts = split("@", $snap, 2);
my $suffix = $parts[1] . "\tcreation\t" . $details->{time} . "\t-";
my $dataset = $parts[0];
print FH "${dataset}\@${suffix}\n";
if ($details->{recursive}) {
my @datasets = getchilddatasets($dataset);
foreach my $dataset(@datasets) {
print FH "${dataset}\@${suffix}\n";
}
}
}
close FH;
# preserve mtime of cache for expire check
my ($dev, $ino, $mode, $nlink, $uid, $gid, $rdev, $size, $atime, $mtime, $ctime, $blksize, $blocks) = stat($cache);
utime($atime, $mtime, "$cache.tmp");
rename("$cache.tmp", "$cache") or die "Could not rename to $cache!\n";
removelock('sanoid_cacheupdate');
}
#######################################################################################################################3
#######################################################################################################################3
#######################################################################################################################3
sub runscript {
my $key=shift;
my $dataset=shift;
@ -1855,7 +1916,7 @@ Options:
--monitor-snapshots Reports on snapshot "health", in a Nagios compatible format
--take-snapshots Creates snapshots as specified in sanoid.conf
--prune-snapshots Purges expired snapshots as specified in sanoid.conf
--force-prune Purges expired snapshots even if a send/recv is in progress
--cache-ttl=SECONDS Set a custom cache expiry time in seconds (default: 20 minutes)
--help Prints this helptext
--version Prints the version number

syncoid

@ -4,7 +4,7 @@
# from http://www.gnu.org/licenses/gpl-3.0.html on 2014-11-17. A copy should also be available in this
# project's Git repository at https://github.com/jimsalterjrs/sanoid/blob/master/LICENSE.
$::VERSION = '2.2.0';
$::VERSION = '2.3.0';
use strict;
use warnings;
@ -415,10 +415,10 @@ sub syncdataset {
if (!defined($receivetoken)) {
# build hashes of the snaps on the source and target filesystems.
%snaps = getsnaps('source',$sourcehost,$sourcefs,$sourceisroot,0);
%snaps = getsnaps('source',$sourcehost,$sourcefs,$sourceisroot);
if ($targetexists) {
my %targetsnaps = getsnaps('target',$targethost,$targetfs,$targetisroot,0);
my %targetsnaps = getsnaps('target',$targethost,$targetfs,$targetisroot);
my %sourcesnaps = %snaps;
%snaps = (%sourcesnaps, %targetsnaps);
}
@ -438,7 +438,7 @@ sub syncdataset {
# Don't send the sync snap if it's filtered out by --exclude-snaps or
# --include-snaps
if (!snapisincluded($newsyncsnap)) {
$newsyncsnap = getnewestsnapshot(\%snaps);
$newsyncsnap = getnewestsnapshot($sourcehost,$sourcefs,$sourceisroot);
if ($newsyncsnap eq 0) {
writelog('WARN', "CRITICAL: no snapshots exist on source $sourcefs, and you asked for --no-sync-snap.");
if ($exitcode < 1) { $exitcode = 1; }
@ -447,7 +447,7 @@ sub syncdataset {
}
} else {
# we don't want sync snapshots created, so use the newest snapshot we can find.
$newsyncsnap = getnewestsnapshot(\%snaps);
$newsyncsnap = getnewestsnapshot($sourcehost,$sourcefs,$sourceisroot);
if ($newsyncsnap eq 0) {
writelog('WARN', "CRITICAL: no snapshots exist on source $sourcefs, and you asked for --no-sync-snap.");
if ($exitcode < 1) { $exitcode = 1; }
@ -575,7 +575,8 @@ sub syncdataset {
my $targetsize = getzfsvalue($targethost,$targetfs,$targetisroot,'-p used');
my %bookmark = ();
my $bookmark = 0;
my $bookmarkcreation = 0;
$matchingsnap = getmatchingsnapshot($sourcefs, $targetfs, \%snaps);
if (! $matchingsnap) {
@ -583,18 +584,19 @@ sub syncdataset {
my %bookmarks = getbookmarks($sourcehost,$sourcefs,$sourceisroot);
# check for matching guid of source bookmark and target snapshot (oldest first)
foreach my $snap ( sort { sortsnapshots(\%snaps, $b, $a) } keys %{ $snaps{'target'} }) {
foreach my $snap ( sort { $snaps{'target'}{$b}{'creation'}<=>$snaps{'target'}{$a}{'creation'} } keys %{ $snaps{'target'} }) {
my $guid = $snaps{'target'}{$snap}{'guid'};
if (defined $bookmarks{$guid}) {
# found a match
%bookmark = %{ $bookmarks{$guid} };
$bookmark = $bookmarks{$guid}{'name'};
$bookmarkcreation = $bookmarks{$guid}{'creation'};
$matchingsnap = $snap;
last;
}
}
if (! %bookmark) {
if (! $bookmark) {
# force delete is not possible for the root dataset
if ($args{'force-delete'} && index($targetfs, '/') != -1) {
writelog('INFO', "Removing $targetfs because no matching snapshots were found");
@ -667,18 +669,15 @@ sub syncdataset {
my $nextsnapshot = 0;
if (%bookmark) {
if ($bookmark) {
my $bookmarkescaped = escapeshellparam($bookmark);
if (!defined $args{'no-stream'}) {
# if intermediate snapshots are needed we need to find the next oldest snapshot,
# do an replication to it and replicate as always from oldest to newest
# because bookmark sends doesn't support intermediates directly
foreach my $snap ( sort { sortsnapshots(\%snaps, $a, $b) } keys %{ $snaps{'source'} }) {
my $comparisonkey = 'creation';
if (defined $snaps{'source'}{$snap}{'createtxg'} && defined $bookmark{'createtxg'}) {
$comparisonkey = 'createtxg';
}
if ($snaps{'source'}{$snap}{$comparisonkey} >= $bookmark{$comparisonkey}) {
foreach my $snap ( sort { $snaps{'source'}{$a}{'creation'}<=>$snaps{'source'}{$b}{'creation'} } keys %{ $snaps{'source'} }) {
if ($snaps{'source'}{$snap}{'creation'} >= $bookmarkcreation) {
$nextsnapshot = $snap;
last;
}
@ -686,13 +685,13 @@ sub syncdataset {
}
if ($nextsnapshot) {
($exit, $stdout) = syncbookmark($sourcehost, $sourcefs, $targethost, $targetfs, $bookmark{'name'}, $nextsnapshot);
($exit, $stdout) = syncbookmark($sourcehost, $sourcefs, $targethost, $targetfs, $bookmark, $nextsnapshot);
$exit == 0 or do {
if (!$resume && $stdout =~ /\Qcontains partially-complete state\E/) {
writelog('WARN', "resetting partially receive state");
resetreceivestate($targethost,$targetfs,$targetisroot);
(my $ret) = syncbookmark($sourcehost, $sourcefs, $targethost, $targetfs, $bookmark{'name'}, $nextsnapshot);
(my $ret) = syncbookmark($sourcehost, $sourcefs, $targethost, $targetfs, $bookmark, $nextsnapshot);
$ret == 0 or do {
if ($exitcode < 2) { $exitcode = 2; }
return 0;
@ -706,13 +705,13 @@ sub syncdataset {
$matchingsnap = $nextsnapshot;
$matchingsnapescaped = escapeshellparam($matchingsnap);
} else {
($exit, $stdout) = syncbookmark($sourcehost, $sourcefs, $targethost, $targetfs, $bookmark{'name'}, $newsyncsnap);
($exit, $stdout) = syncbookmark($sourcehost, $sourcefs, $targethost, $targetfs, $bookmark, $newsyncsnap);
$exit == 0 or do {
if (!$resume && $stdout =~ /\Qcontains partially-complete state\E/) {
writelog('WARN', "resetting partially receive state");
resetreceivestate($targethost,$targetfs,$targetisroot);
(my $ret) = syncbookmark($sourcehost, $sourcefs, $targethost, $targetfs, $bookmark{'name'}, $newsyncsnap);
(my $ret) = syncbookmark($sourcehost, $sourcefs, $targethost, $targetfs, $bookmark, $newsyncsnap);
$ret == 0 or do {
if ($exitcode < 2) { $exitcode = 2; }
return 0;
@ -727,7 +726,7 @@ sub syncdataset {
# do a normal replication if bookmarks aren't used or if previous
# bookmark replication was only done to the next oldest snapshot
if (!%bookmark || $nextsnapshot) {
if (!$bookmark || $nextsnapshot) {
if ($matchingsnap eq $newsyncsnap) {
# edge case: bookmark replication used the latest snapshot
return 0;
@ -858,15 +857,15 @@ sub syncdataset {
# snapshots first.
# regather snapshots on source and target
%snaps = getsnaps('source',$sourcehost,$sourcefs,$sourceisroot,0);
%snaps = getsnaps('source',$sourcehost,$sourcefs,$sourceisroot);
if ($targetexists) {
my %targetsnaps = getsnaps('target',$targethost,$targetfs,$targetisroot,0);
my %sourcesnaps = %snaps;
%snaps = (%sourcesnaps, %targetsnaps);
my %targetsnaps = getsnaps('target',$targethost,$targetfs,$targetisroot);
my %sourcesnaps = %snaps;
%snaps = (%sourcesnaps, %targetsnaps);
}
my @to_delete = sort { sortsnapshots(\%snaps, $a, $b) } grep {!exists $snaps{'source'}{$_}} keys %{ $snaps{'target'} };
my @to_delete = sort { $snaps{'target'}{$a}{'creation'}<=>$snaps{'target'}{$b}{'creation'} } grep {!exists $snaps{'source'}{$_}} keys %{ $snaps{'target'} };
while (@to_delete) {
# Create batch of snapshots to remove
my $snaps = join ',', splice(@to_delete, 0, 50);
@ -898,7 +897,7 @@ sub runsynccmd {
my $disp_pvsize = $pvsize == 0 ? 'UNKNOWN' : readablebytes($pvsize);
my $sendoptions;
if ($sendsource =~ / -t /) {
if ($sendsource =~ /^-t /) {
$sendoptions = getoptionsline(\@sendoptions, ('P','V','e','v'));
} elsif ($sendsource =~ /#/) {
$sendoptions = getoptionsline(\@sendoptions, ('L','V','c','e','w'));
@ -1501,7 +1500,8 @@ sub getlocalzfsvalues {
"receive_resume_token", "redact_snaps", "referenced", "refcompressratio", "snapshot_count",
"type", "used", "usedbychildren", "usedbydataset", "usedbyrefreservation",
"usedbysnapshots", "userrefs", "snapshots_changed", "volblocksize", "written",
"version", "volsize", "casesensitivity", "normalization", "utf8only"
"version", "volsize", "casesensitivity", "normalization", "utf8only",
"encryption"
);
my %blacklisthash = map {$_ => 1} @blacklist;
@ -1530,17 +1530,9 @@ sub readablebytes {
return $disp;
}
sub sortsnapshots {
my ($snaps, $left, $right) = @_;
if (defined $snaps->{'source'}{$left}{'createtxg'} && defined $snaps->{'source'}{$right}{'createtxg'}) {
return $snaps->{'source'}{$left}{'createtxg'} <=> $snaps->{'source'}{$right}{'createtxg'};
}
return $snaps->{'source'}{$left}{'creation'} <=> $snaps->{'source'}{$right}{'creation'};
}
sub getoldestsnapshot {
my $snaps = shift;
foreach my $snap (sort { sortsnapshots($snaps, $a, $b) } keys %{ $snaps{'source'} }) {
foreach my $snap ( sort { $snaps{'source'}{$a}{'creation'}<=>$snaps{'source'}{$b}{'creation'} } keys %{ $snaps{'source'} }) {
# return on first snap found - it's the oldest
return $snap;
}
@ -1554,7 +1546,7 @@ sub getoldestsnapshot {
sub getnewestsnapshot {
my $snaps = shift;
foreach my $snap (sort { sortsnapshots($snaps, $b, $a) } keys %{ $snaps{'source'} }) {
foreach my $snap ( sort { $snaps{'source'}{$b}{'creation'}<=>$snaps{'source'}{$a}{'creation'} } keys %{ $snaps{'source'} }) {
# return on first snap found - it's the newest
writelog('DEBUG', "NEWEST SNAPSHOT: $snap");
return $snap;
@ -1733,7 +1725,7 @@ sub pruneoldsyncsnaps {
sub getmatchingsnapshot {
my ($sourcefs, $targetfs, $snaps) = @_;
foreach my $snap ( sort { sortsnapshots($snaps, $b, $a) } keys %{ $snaps{'source'} }) {
foreach my $snap ( sort { $snaps{'source'}{$b}{'creation'}<=>$snaps{'source'}{$a}{'creation'} } keys %{ $snaps{'source'} }) {
if (defined $snaps{'target'}{$snap}) {
if ($snaps{'source'}{$snap}{'guid'} == $snaps{'target'}{$snap}{'guid'}) {
return $snap;
@ -1868,8 +1860,88 @@ sub dumphash() {
writelog('INFO', Dumper($hash));
}
sub getsnaps {
my ($type,$rhost,$fs,$isroot,$use_fallback,%snaps) = @_;
sub getsnaps() {
my ($type,$rhost,$fs,$isroot,%snaps) = @_;
my $mysudocmd;
my $fsescaped = escapeshellparam($fs);
if ($isroot) { $mysudocmd = ''; } else { $mysudocmd = $sudocmd; }
my $rhostOriginal = $rhost;
if ($rhost ne '') {
$rhost = "$sshcmd $rhost";
# double escaping needed
$fsescaped = escapeshellparam($fsescaped);
}
my $getsnapcmd = "$rhost $mysudocmd $zfscmd get -Hpd 1 -t snapshot guid,creation $fsescaped";
if ($debug) {
$getsnapcmd = "$getsnapcmd |";
writelog('DEBUG', "getting list of snapshots on $fs using $getsnapcmd...");
} else {
$getsnapcmd = "$getsnapcmd 2>/dev/null |";
}
open FH, $getsnapcmd;
my @rawsnaps = <FH>;
close FH or do {
# fallback (solaris for example doesn't support the -t option)
return getsnapsfallback($type,$rhostOriginal,$fs,$isroot,%snaps);
};
# this is a little obnoxious. get guid,creation returns guid,creation on two separate lines
# as though each were an entirely separate get command.
my %creationtimes=();
foreach my $line (@rawsnaps) {
$line =~ /\Q$fs\E\@(\S*)/;
my $snapname = $1;
if (!snapisincluded($snapname)) { next; }
# only import snap guids from the specified filesystem
if ($line =~ /\Q$fs\E\@.*\tguid/) {
chomp $line;
my $guid = $line;
$guid =~ s/^.*\tguid\t*(\d*).*/$1/;
my $snap = $line;
$snap =~ s/^.*\@(.*)\tguid.*$/$1/;
$snaps{$type}{$snap}{'guid'}=$guid;
}
# only import snap creations from the specified filesystem
elsif ($line =~ /\Q$fs\E\@.*\tcreation/) {
chomp $line;
my $creation = $line;
$creation =~ s/^.*\tcreation\t*(\d*).*/$1/;
my $snap = $line;
$snap =~ s/^.*\@(.*)\tcreation.*$/$1/;
# the accuracy of the creation timestamp is only for a second, but
# snapshots in the same second are highly likely. The list command
# has an ordered output so we append another three digit running number
# to the creation timestamp and make sure those are ordered correctly
# for snapshot with the same creation timestamp
my $counter = 0;
my $creationsuffix;
while ($counter < 999) {
$creationsuffix = sprintf("%s%03d", $creation, $counter);
if (!defined $creationtimes{$creationsuffix}) {
$creationtimes{$creationsuffix} = 1;
last;
}
$counter += 1;
}
$snaps{$type}{$snap}{'creation'}=$creationsuffix;
}
}
return %snaps;
}
sub getsnapsfallback() {
# fallback (solaris for example doesn't support the -t option)
my ($type,$rhost,$fs,$isroot,%snaps) = @_;
my $mysudocmd;
my $fsescaped = escapeshellparam($fs);
if ($isroot) { $mysudocmd = ''; } else { $mysudocmd = $sudocmd; }
@ -1880,67 +1952,73 @@ sub getsnaps {
 	$fsescaped = escapeshellparam($fsescaped);
 	}
-	my $getsnapcmd = "$rhost $mysudocmd $zfscmd get -Hpd 1 type,guid,creation $fsescaped |";
-	writelog('WARN', "snapshot listing failed, trying fallback command");
-	writelog('DEBUG', "FALLBACK, getting list of snapshots on $fs using $getsnapcmd...");
+	my $getsnapcmd = $use_fallback
+		? "$rhost $mysudocmd $zfscmd get -Hpd 1 all $fsescaped"
+		: "$rhost $mysudocmd $zfscmd get -Hpd 1 -t snapshot all $fsescaped";
+	if ($debug) {
+		$getsnapcmd = "$getsnapcmd |";
+		writelog('DEBUG', "getting list of snapshots on $fs using $getsnapcmd...");
+	} else {
+		$getsnapcmd = "$getsnapcmd 2>/dev/null |";
+	}
 	open FH, $getsnapcmd;
 	my @rawsnaps = <FH>;
-	close FH or die "CRITICAL ERROR: snapshots couldn't be listed for $fs (exit code $?)";
-	my %creationtimes=();
-	my $state = 0;
-	foreach my $line (@rawsnaps) {
-		if ($state < 0) {
-			$state++;
-			next;
-		}
-		if ($state eq 0) {
-			if ($line !~ /\Q$fs\E\@.*\ttype\s*snapshot/) {
-				# skip non snapshot type object
-				$state = -2;
-				next;
-			}
-		} elsif ($state eq 1) {
-			if ($line !~ /\Q$fs\E\@.*\tguid/) {
-				die "CRITICAL ERROR: snapshots couldn't be listed for $fs (guid parser error)";
-			}
-			chomp $line;
-			my $guid = $line;
-			$guid =~ s/^.*\tguid\t*(\d*).*/$1/;
-			my $snap = $line;
-			$snap =~ s/^.*\@(.*)\tguid.*$/$1/;
-			if (!snapisincluded($snap)) { next; }
-			$snaps{$type}{$snap}{'guid'}=$guid;
-		} elsif ($state eq 2) {
-			if ($line !~ /\Q$fs\E\@.*\tcreation/) {
-				die "CRITICAL ERROR: snapshots couldn't be listed for $fs (creation parser error)";
-			}
-			chomp $line;
-			my $creation = $line;
-			$creation =~ s/^.*\tcreation\t*(\d*).*/$1/;
-			my $snap = $line;
-			$snap =~ s/^.*\@(.*)\tcreation.*$/$1/;
-			if (!snapisincluded($snap)) { next; }
-			# the accuracy of the creation timestamp is only for a second, but
-			# snapshots in the same second are highly likely. The list command
-			# has an ordered output so we append another three digit running number
-			# to the creation timestamp and make sure those are ordered correctly
-			# for snapshot with the same creation timestamp
+	close FH or do {
+		if (!$use_fallback) {
+			writelog('WARN', "snapshot listing failed, trying fallback command");
+			return getsnaps($type, $rhost, $fs, $isroot, 1, %snaps);
+		}
+		die "CRITICAL ERROR: snapshots couldn't be listed for $fs (exit code $?)";
+	};
+	my %snap_data;
+	my %creationtimes;
+	for my $line (@rawsnaps) {
+		chomp $line;
+		my ($dataset, $property, $value) = split /\t/, $line;
+		die "CRITICAL ERROR: Unexpected line format in $line" unless defined $value;
+		my (undef, $snap) = split /@/, $dataset;
+		die "CRITICAL ERROR: Unexpected dataset format in $line" unless $snap;
+		if (!snapisincluded($snap)) { next; }
+		$snap_data{$snap}{$property} = $value;
+		if ($property eq 'creation') {
+			# the accuracy of the creation timestamp is only for a second, but
+			# snapshots in the same second are highly likely. The list command
+			# has an ordered output so we append another three digit running number
+			# to the creation timestamp and make sure those are ordered correctly
+			# for snapshot with the same creation timestamp
 			my $counter = 0;
 			my $creationsuffix;
 			while ($counter < 999) {
-				$creationsuffix = sprintf("%s%03d", $creation, $counter);
+				$creationsuffix = sprintf("%s%03d", $value, $counter);
 				if (!defined $creationtimes{$creationsuffix}) {
 					$creationtimes{$creationsuffix} = 1;
 					last;
 				}
 				$counter += 1;
 			}
-			$snaps{$type}{$snap}{'creation'}=$creationsuffix;
-			$state = -1;
-		}
-		$state++;
-	}
+			$snap_data{$snap}{'creation'} = $creationsuffix;
+		}
+	}
+	for my $snap (keys %snap_data) {
+		if (!$use_fallback || $snap_data{$snap}{'type'} eq 'snapshot') {
+			$snaps{$type}{$snap}{'guid'} = $snap_data{$snap}{'guid'};
+			$snaps{$type}{$snap}{'createtxg'} = $snap_data{$snap}{'createtxg'};
+			$snaps{$type}{$snap}{'creation'} = $snap_data{$snap}{'creation'};
+		}
+	}
 	return %snaps;
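The rewritten parser above splits each output line on tabs into name/property/value, groups the properties per snapshot, and only then filters on `type eq 'snapshot'` in the fallback path. A rough Python equivalent, assuming the tab-separated `name<TAB>property<TAB>value<TAB>source` format of `zfs get -Hp` (the function and sample data are illustrative, not sanoid code):

```python
# Illustrative sketch: group `zfs get -Hpd 1` property lines per
# snapshot, then keep only objects whose type is really "snapshot",
# mirroring the %snap_data handling in the Perl above.
def parse_zfs_get(lines):
    snap_data = {}
    for line in lines:
        fields = line.rstrip("\n").split("\t")
        if len(fields) < 3:
            raise ValueError(f"Unexpected line format in {line!r}")
        dataset, prop, value = fields[0], fields[1], fields[2]
        if "@" not in dataset:
            continue  # not a snapshot (e.g. the filesystem itself)
        snap = dataset.split("@", 1)[1]
        snap_data.setdefault(snap, {})[prop] = value
    # keep only real snapshots, like the fallback-path type filter
    return {s: p for s, p in snap_data.items() if p.get("type") == "snapshot"}

lines = [
    "tank/data\ttype\tfilesystem\t-",
    "tank/data@daily\ttype\tsnapshot\t-",
    "tank/data@daily\tguid\t123456789\t-",
    "tank/data@daily\tcreation\t1748000000\t-",
]
print(parse_zfs_get(lines))
```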
@@ -1959,7 +2037,7 @@ sub getbookmarks() {
 	}
 	my $error = 0;
-	my $getbookmarkcmd = "$rhost $mysudocmd $zfscmd get -Hpd 1 -t bookmark all $fsescaped 2>&1 |";
+	my $getbookmarkcmd = "$rhost $mysudocmd $zfscmd get -Hpd 1 -t bookmark guid,creation $fsescaped 2>&1 |";
 	writelog('DEBUG', "getting list of bookmarks on $fs using $getbookmarkcmd...");
 	open FH, $getbookmarkcmd;
 	my @rawbookmarks = <FH>;
@@ -1974,44 +2052,46 @@ sub getbookmarks() {
 		die "CRITICAL ERROR: bookmarks couldn't be listed for $fs (exit code $?)";
 	}
-	my $lastguid;
-	my %creationtimes=();
-	foreach my $line (@rawbookmarks) {
-		# only import bookmark guids, creation from the specified filesystem
-		if ($line =~ /\Q$fs\E\#.*\tguid/) {
-			chomp $line;
-			$lastguid = $line;
-			$lastguid =~ s/^.*\tguid\t*(\d*).*/$1/;
-			my $bookmark = $line;
-			$bookmark =~ s/^.*\#(.*)\tguid.*$/$1/;
-			$bookmarks{$lastguid}{'name'}=$bookmark;
-		} elsif ($line =~ /\Q$fs\E\#.*\tcreation/) {
-			chomp $line;
-			my $creation = $line;
-			$creation =~ s/^.*\tcreation\t*(\d*).*/$1/;
-			my $bookmark = $line;
-			$bookmark =~ s/^.*\#(.*)\tcreation.*$/$1/;
-			# the accuracy of the creation timestamp is only for a second, but
-			# bookmarks in the same second are possible. The list command
-			# has an ordered output so we append another three digit running number
-			# to the creation timestamp and make sure those are ordered correctly
-			# for bookmarks with the same creation timestamp
+	my %bookmark_data;
+	my %creationtimes;
+	# this is a little obnoxious. get guid,creation returns guid,creation on two separate lines
+	# as though each were an entirely separate get command.
+	for my $line (@rawbookmarks) {
+		chomp $line;
+		my ($dataset, $property, $value) = split /\t/, $line;
+		die "CRITICAL ERROR: Unexpected line format in $line" unless defined $value;
+		my (undef, $bookmark) = split /#/, $dataset;
+		die "CRITICAL ERROR: Unexpected dataset format in $line" unless $bookmark;
+		$bookmark_data{$bookmark}{$property} = $value;
+		if ($property eq 'creation') {
+			# the accuracy of the creation timestamp is only for a second, but
+			# bookmarks in the same second are possible. The list command
+			# has an ordered output so we append another three digit running number
+			# to the creation timestamp and make sure those are ordered correctly
+			# for bookmarks with the same creation timestamp
 			my $counter = 0;
 			my $creationsuffix;
 			while ($counter < 999) {
-				$creationsuffix = sprintf("%s%03d", $creation, $counter);
+				$creationsuffix = sprintf("%s%03d", $value, $counter);
 				if (!defined $creationtimes{$creationsuffix}) {
 					$creationtimes{$creationsuffix} = 1;
 					last;
 				}
 				$counter += 1;
 			}
-			$bookmarks{$lastguid}{'creation'}=$creationsuffix;
-		}
+			$bookmark_data{$bookmark}{'creation'} = $creationsuffix;
+		}
+	}
+	for my $bookmark (keys %bookmark_data) {
+		my $guid = $bookmark_data{$bookmark}{'guid'};
+		$bookmarks{$guid}{'name'} = $bookmark;
+		$bookmarks{$guid}{'creation'} = $bookmark_data{$bookmark}{'creation'};
+		$bookmarks{$guid}{'createtxg'} = $bookmark_data{$bookmark}{'createtxg'};
 	}
 	return %bookmarks;
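Because `zfs get -t bookmark guid,creation` emits guid and creation as two separate lines per bookmark, the loop above first collects properties per bookmark name and only then re-keys the result by guid. A hedged Python sketch of that two-pass shape (function name and sample data are made up, not sanoid code):

```python
# Illustrative sketch: pair up the two per-bookmark property lines,
# then re-key the records by guid, as %bookmark_data -> %bookmarks does.
def parse_bookmarks(lines):
    bookmark_data = {}
    for line in lines:
        dataset, prop, value = line.rstrip("\n").split("\t")[:3]
        if "#" not in dataset:
            continue  # not a bookmark line
        name = dataset.split("#", 1)[1]
        bookmark_data.setdefault(name, {})[prop] = value
    # re-key by guid once both properties have been gathered
    return {
        props["guid"]: {"name": name, "creation": props["creation"]}
        for name, props in bookmark_data.items()
    }

lines = [
    "tank/data#b1\tguid\t111\t-",
    "tank/data#b1\tcreation\t1748000000\t-",
]
print(parse_bookmarks(lines))
```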
@@ -2175,7 +2255,7 @@ sub parsespecialoptions {
 		return undef;
 	}
-	if ($char eq 'o' || $char eq 'x') {
+	if ($char eq 'o' || $char eq 'x' || $char eq 'X') {
 		$lastOption = $char;
 		$optionValue = 1;
 	} else {


@@ -39,7 +39,7 @@ function cleanUp {
 trap cleanUp EXIT
 while [ $timestamp -le $END ]; do
-	setdate $timestamp; date; "${SANOID}" --cron --verbose
+	setdate $timestamp; date; "${SANOID}" --cron --verbose --cache-ttl=2592000
 	timestamp=$((timestamp+3600))
 done


@@ -42,7 +42,7 @@ function cleanUp {
 trap cleanUp EXIT
 while [ $timestamp -le $END ]; do
-	setdate $timestamp; date; "${SANOID}" --cron --verbose
+	setdate $timestamp; date; "${SANOID}" --cron --verbose --cache-ttl=2592000
 	timestamp=$((timestamp+900))
 done


@@ -10,7 +10,10 @@ function setup {
 export SANOID="../../sanoid"
 # make sure that there is no cache file
-rm -f /var/cache/sanoidsnapshots.txt
+rm -f /var/cache/sanoid/snapshots.txt
+rm -f /var/cache/sanoid/datasets.txt
 mkdir -p /etc/sanoid
 # install needed sanoid configuration files
 [ -f sanoid.conf ] && cp sanoid.conf /etc/sanoid/sanoid.conf
@@ -51,6 +54,11 @@ function disableTimeSync {
 	if [ $? -eq 0 ]; then
 		timedatectl set-ntp 0
 	fi
+	which systemctl > /dev/null
+	if [ $? -eq 0 ]; then
+		systemctl is-active virtualbox-guest-utils.service && systemctl stop virtualbox-guest-utils.service
+	fi
 }
function saveSnapshotList {


@@ -2,7 +2,7 @@
 # runs all the available tests
-for test in $(find . -mindepth 1 -maxdepth 1 -type d -printf "%P\n" | sort -g); do
+for test in */; do
 	if [ ! -x "${test}/run.sh" ]; then
 		continue
 	fi


@@ -1,50 +0,0 @@
-#!/bin/bash
-# test verifying snapshots with out-of-order snapshot creation datetimes
-set -x
-set -e
-. ../../common/lib.sh
-if [ -z "$ALLOW_INVASIVE_TESTS" ]; then
-	exit 130
-fi
-POOL_IMAGE="/tmp/syncoid-test-11.zpool"
-POOL_SIZE="64M"
-POOL_NAME="syncoid-test-11"
-truncate -s "${POOL_SIZE}" "${POOL_IMAGE}"
-zpool create -m none -f "${POOL_NAME}" "${POOL_IMAGE}"
-function cleanUp {
-	zpool export "${POOL_NAME}"
-	rm -f "${POOL_IMAGE}"
-}
-# export pool and remove the image in any case
-trap cleanUp EXIT
-zfs create "${POOL_NAME}"/before
-zfs snapshot "${POOL_NAME}"/before@this-snapshot-should-make-it-into-the-after-dataset
-disableTimeSync
-setdate 1155533696
-zfs snapshot "${POOL_NAME}"/before@oldest-snapshot
-zfs snapshot "${POOL_NAME}"/before@another-snapshot-does-not-matter
-../../../syncoid --sendoptions="Lec" "${POOL_NAME}"/before "${POOL_NAME}"/after
-# verify
-saveSnapshotList "${POOL_NAME}" "snapshot-list.txt"
-grep "${POOL_NAME}/before@this-snapshot-should-make-it-into-the-after-dataset" "snapshot-list.txt" || exit $?
-grep "${POOL_NAME}/after@this-snapshot-should-make-it-into-the-after-dataset" "snapshot-list.txt" || exit $?
-grep "${POOL_NAME}/before@oldest-snapshot" "snapshot-list.txt" || exit $?
-grep "${POOL_NAME}/after@oldest-snapshot" "snapshot-list.txt" || exit $?
-grep "${POOL_NAME}/before@another-snapshot-does-not-matter" "snapshot-list.txt" || exit $?
-grep "${POOL_NAME}/after@another-snapshot-does-not-matter" "snapshot-list.txt" || exit $?
-exit 0


@@ -0,0 +1,55 @@
+#!/bin/bash
+# test verifying syncoid behavior with partial transfers
+set -x
+. ../../common/lib.sh
+POOL_IMAGE="/tmp/syncoid-test-012.zpool"
+POOL_SIZE="128M"
+POOL_NAME="syncoid-test-012"
+MOUNT_TARGET="/tmp/syncoid-test-012.mount"
+truncate -s "${POOL_SIZE}" "${POOL_IMAGE}"
+zpool create -O mountpoint="${MOUNT_TARGET}" -f "${POOL_NAME}" "${POOL_IMAGE}"
+function cleanUp {
+	zpool destroy "${POOL_NAME}"
+	rm -f "${POOL_IMAGE}"
+}
+# Clean up the pool and image file on exit
+trap cleanUp EXIT
+zfs create "${POOL_NAME}/source"
+zfs snap "${POOL_NAME}/source@empty"
+dd if=/dev/urandom of="${MOUNT_TARGET}/source/garbage.bin" bs=1M count=16
+zfs snap "${POOL_NAME}/source@something"
+# Simulate interrupted transfer
+zfs send -pwR "${POOL_NAME}/source@something" | head --bytes=8M | zfs recv -s "${POOL_NAME}/destination"
+# Using syncoid to continue interrupted transfer
+../../../syncoid --sendoptions="pw" "${POOL_NAME}/source" "${POOL_NAME}/destination"
+# Check if syncoid succeeded in handling the interrupted transfer
+if [ $? -eq 0 ]; then
+	echo "Syncoid resumed transfer successfully."
+	# Verify data integrity with sha256sum comparison
+	original_sum=$(sha256sum "${MOUNT_TARGET}/source/garbage.bin" | cut -d ' ' -f 1)
+	received_sum=$(sha256sum "${MOUNT_TARGET}/destination/garbage.bin" | cut -d ' ' -f 1)
+	if [ "${original_sum}" == "${received_sum}" ]; then
+		echo "Data integrity verified."
+		exit 0
+	else
+		echo "Data integrity check failed."
+		exit 1
+	fi
+else
+	echo "Regression detected: syncoid did not handle the resuming correctly."
+	exit 1
+fi


@@ -2,7 +2,7 @@
 # runs all the available tests
-for test in $(find . -mindepth 1 -maxdepth 1 -type d -printf "%P\n" | sort -g); do
+for test in */; do
 	if [ ! -x "${test}/run.sh" ]; then
 		continue
 	fi