Merge pull request #1006 from phreaker0/prepare-2.3.0

prepare 2.3.0
This commit is contained in:
Jim Salter 2025-06-11 08:10:52 -04:00 committed by GitHub
commit 8d4abf14b2
No known key found for this signature in database
GPG Key ID: B5690EEEBB952194
19 changed files with 430 additions and 226 deletions

View File

@ -1,3 +1,23 @@
2.3.0 [overall] documentation updates, small fixes (@thecatontheflat, @mjeanson, @jiawen, @EchterAgo, @jan-krieg, @dlangille, @rightaditya, @MynaITLabs, @ossimoi, @alexgarel, @TopherIsSwell, @jimsalterjrs, @phreaker0)
[sanoid] implemented adding of taken snapshots to the cache file and a new parameter for setting a custom cache expire time (@phreaker0)
[sanoid] ignore duplicate template keys (@phreaker0)
[packaging] fix debian packaging with debian 12 and ubuntu 24.04 (@phreaker0)
[syncoid] fix typo preventing resumed transfer with --sendoptions (@Deltik)
[sanoid] remove iszfsbusy check to boost performance (@sdettmer)
[sanoid] write cache files in an atomic way to prevent race conditions (@phreaker0)
[sanoid] improve performance (especially for monitor commands) by caching the dataset list (@phreaker0)
[syncoid] add zstdmt compress options (@0xFelix)
[syncoid] added missing status information about what is done and provide more details (@phreaker0)
[syncoid] rename ssh control socket to avoid problem with length limits and conflicts (@phreaker0)
[syncoid] support relative paths (@phreaker0)
[syncoid] regather snapshots on --delete-target-snapshots flag (@Adam Fulton)
[sanoid] allow monitor commands to be run without root by using only the cache file (@Pajkastare)
[syncoid] add --include-snaps and --exclude-snaps options (@mr-vinn, @phreaker0)
[syncoid] escape property key and value pair in case of property preservation (@phreaker0)
[syncoid] prevent destroying of root dataset which leads to infinite loop because it can't be destroyed (@phreaker0)
[syncoid] modify zfs-get argument order for portability (@Rantherhin)
[sanoid] trim config values (@phreaker0)
2.2.0 [overall] documentation updates, small fixes (@azmodude, @deviantintegral, @jimsalterjrs, @alexhaydock, @cbreak-black, @kd8bny, @JavaScriptDude, @veeableful, @rsheasby, @Topslakr, @mavhc, @adam-stamand, @joelishness, @jsoref, @dodexahedron, @phreaker0)
[syncoid] implemented flag for preserving properties without the zfs -p flag (@phreaker0)
[syncoid] implemented target snapshot deletion (@mat813)

View File

@ -60,6 +60,8 @@ sudo yum config-manager --set-enabled powertools
sudo dnf config-manager --set-enabled powertools
# On RHEL, instead of PowerTools, we need to enable the CodeReady Builder repo:
sudo subscription-manager repos --enable=codeready-builder-for-rhel-8-x86_64-rpms
# For Rocky Linux 9 or AlmaLinux 9 you need the CodeReady Builder repo, and it is labelled `crb`
sudo dnf config-manager --set-enabled crb
# Install the packages that Sanoid depends on:
sudo yum install -y perl-Config-IniFiles perl-Data-Dumper perl-Capture-Tiny perl-Getopt-Long lzop mbuffer mhash pv
# The repositories above should contain all the relevant Perl modules, but if you

View File

@ -80,10 +80,6 @@ For more full details on sanoid.conf settings see [Wiki page](https://github.com
This will process your sanoid.conf file, it will NOT create snapshots, but it will purge expired ones.
+ --force-prune
Purges expired snapshots even if a send/recv is in progress
+ --monitor-snapshots
This option is designed to be run by a Nagios monitoring system. It reports on the health of your snapshots.
@ -100,6 +96,10 @@ For more full details on sanoid.conf settings see [Wiki page](https://github.com
This clears out sanoid's zfs snapshot listing cache. This is normally not needed.
+ --cache-ttl=SECONDS
Set custom cache expire time in seconds (default: 20 minutes).
+ --version
This prints the version number, and exits.
@ -126,7 +126,9 @@ For more full details on sanoid.conf settings see [Wiki page](https://github.com
### Sanoid script hooks
-There are three script types which can optionally be executed at various stages in the lifecycle of a snapshot:
+There are three script types which can optionally be executed at various stages in the lifecycle of a snapshot.
**Note** that snapshot-related scripts are triggered only if you have `autosnap = yes`, and pruning scripts are triggered only if you have `autoprune = yes`.
#### `pre_snapshot_script`
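As a rough sketch of such a hook (the script path, the log location, and the exact environment variables are assumptions to verify against the Wiki), a minimal pre_snapshot_script could simply record what sanoid hands it:

#!/bin/sh
# minimal example hook: dump the environment sanoid passes in, so the exported
# SANOID_* variables can be inspected before adding real quiescing logic
env > /tmp/sanoid-pre-snapshot-env.txt

It would then be referenced from the relevant dataset or template section of sanoid.conf, e.g. pre_snapshot_script = /usr/local/bin/pre-snap.sh.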

View File

@ -1 +1 @@
-2.2.0
+2.3.0

View File

@ -4,7 +4,7 @@
# from http://www.gnu.org/licenses/gpl-3.0.html on 2014-11-17. A copy should also be available in this
# project's Git repository at https://github.com/jimsalterjrs/sanoid/blob/master/LICENSE.
-$::VERSION = '2.2.0';
+$::VERSION = '2.3.0';
use strict;
use warnings;

View File

@ -1,3 +1,27 @@
sanoid (2.3.0) unstable; urgency=medium
[overall] documentation updates, small fixes (@thecatontheflat, @mjeanson, @jiawen, @EchterAgo, @jan-krieg, @dlangille, @rightaditya, @MynaITLabs, @ossimoi, @alexgarel, @TopherIsSwell, @jimsalterjrs, @phreaker0)
[sanoid] implemented adding of taken snapshots to the cache file and a new parameter for setting a custom cache expire time (@phreaker0)
[sanoid] ignore duplicate template keys (@phreaker0)
[packaging] fix debian packaging with debian 12 and ubuntu 24.04 (@phreaker0)
[syncoid] fix typo preventing resumed transfer with --sendoptions (@Deltik)
[sanoid] remove iszfsbusy check to boost performance (@sdettmer)
[sanoid] write cache files in an atomic way to prevent race conditions (@phreaker0)
[sanoid] improve performance (especially for monitor commands) by caching the dataset list (@phreaker0)
[syncoid] add zstdmt compress options (@0xFelix)
[syncoid] added missing status information about what is done and provide more details (@phreaker0)
[syncoid] rename ssh control socket to avoid problem with length limits and conflicts (@phreaker0)
[syncoid] support relative paths (@phreaker0)
[syncoid] regather snapshots on --delete-target-snapshots flag (@Adam Fulton)
[sanoid] allow monitor commands to be run without root by using only the cache file (@Pajkastare)
[syncoid] add --include-snaps and --exclude-snaps options (@mr-vinn, @phreaker0)
[syncoid] escape property key and value pair in case of property preservation (@phreaker0)
[syncoid] prevent destroying of root dataset which leads to infinite loop because it can't be destroyed (@phreaker0)
[syncoid] modify zfs-get argument order for portability (@Rantherhin)
[sanoid] trim config values (@phreaker0)
-- Jim Salter <github@jrs-s.net> Tue, 05 Jun 2025 22:47:00 +0200
sanoid (2.2.0) unstable; urgency=medium
[overall] documentation updates, small fixes (@azmodude, @deviantintegral, @jimsalterjrs, @alexhaydock, @cbreak-black, @kd8bny, @JavaScriptDude, @veeableful, @rsheasby, @Topslakr, @mavhc, @adam-stamand, @joelishness, @jsoref, @dodexahedron, @phreaker0)

View File

@ -2,3 +2,5 @@
# remove old cache file
[ -f /var/cache/sanoidsnapshots.txt ] && rm /var/cache/sanoidsnapshots.txt || true
[ -f /var/cache/sanoid/snapshots.txt ] && rm /var/cache/sanoid/snapshots.txt || true
[ -f /var/cache/sanoid/datasets.txt ] && rm /var/cache/sanoid/datasets.txt || true

View File

@ -12,10 +12,6 @@ override_dh_auto_install:
install -d $(DESTDIR)/etc/sanoid
install -m 664 sanoid.defaults.conf $(DESTDIR)/etc/sanoid
install -d $(DESTDIR)/lib/systemd/system
install -m 664 debian/sanoid-prune.service debian/sanoid.timer \
$(DESTDIR)/lib/systemd/system
install -d $(DESTDIR)/usr/sbin
install -m 775 \
findoid sanoid sleepymutex syncoid \
@ -25,6 +21,8 @@ override_dh_auto_install:
install -m 664 sanoid.conf \
$(DESTDIR)/usr/share/doc/sanoid/sanoid.conf.example
dh_installsystemd --name sanoid-prune
override_dh_installinit:
dh_installinit --noscripts

View File

@ -1,4 +1,4 @@
-%global version 2.2.0
+%global version 2.3.0
%global git_tag v%{version}
# Enable with systemctl "enable sanoid.timer"
@ -111,6 +111,8 @@ echo "* * * * * root %{_sbindir}/sanoid --cron" > %{buildroot}%{_docdir}/%{name}
%endif
%changelog
* Tue Jun 05 2025 Christoph Klaffl <christoph@phreaker.eu> - 2.3.0
- Bump to 2.3.0
* Tue Jul 18 2023 Christoph Klaffl <christoph@phreaker.eu> - 2.2.0
- Bump to 2.2.0
* Tue Nov 24 2020 Christoph Klaffl <christoph@phreaker.eu> - 2.1.0

sanoid
View File

@ -4,7 +4,7 @@
# from http://www.gnu.org/licenses/gpl-3.0.html on 2014-11-17. A copy should also be available in this
# project's Git repository at https://github.com/jimsalterjrs/sanoid/blob/master/LICENSE.
-$::VERSION = '2.2.0';
+$::VERSION = '2.3.0';
my $MINIMUM_DEFAULTS_VERSION = 2;
use strict;
@ -12,6 +12,7 @@ use warnings;
use Config::IniFiles; # read samba-style conf file
use Data::Dumper; # debugging - print contents of hash
use File::Path 'make_path';
use File::Copy;
use Getopt::Long qw(:config auto_version auto_help);
use Pod::Usage; # pod2usage
use Time::Local; # to parse dates in reverse
@ -26,7 +27,7 @@ GetOptions(\%args, "verbose", "debug", "cron", "readonly", "quiet",
"configdir=s", "cache-dir=s", "run-dir=s",
"monitor-health", "force-update",
"monitor-snapshots", "take-snapshots", "prune-snapshots", "force-prune",
-"monitor-capacity"
+"monitor-capacity", "cache-ttl=i"
) or pod2usage(2);
# If only config directory (or nothing) has been specified, default to --cron --verbose
@ -54,6 +55,17 @@ make_path($run_dir);
my $cacheTTL = 1200; # 20 minutes
if ($args{'force-prune'}) {
warn "WARN: --force-prune argument is deprecated and its behavior is now standard";
}
if ($args{'cache-ttl'}) {
if ($args{'cache-ttl'} < 0) {
die "ERROR: cache-ttl needs to be positive!\n";
}
$cacheTTL = $args{'cache-ttl'};
}
# Allow a much older snapshot cache file than default if _only_ "--monitor-*" action commands are given
# (ignore "--verbose", "--configdir" etc)
if (
@ -66,7 +78,7 @@ if (
|| $args{'force-update'}
|| $args{'take-snapshots'}
|| $args{'prune-snapshots'}
-|| $args{'force-prune'}
+|| $args{'cache-ttl'}
)
) {
# The command combination above must not assert true for any command that takes or prunes snapshots
@ -86,6 +98,7 @@ my %config = init($conf_file,$default_conf_file);
my %pruned;
my %capacitycache;
my %taken;
my %snaps;
my %snapsbytype;
@ -381,9 +394,7 @@ sub prune_snapshots {
}
if ($args{'verbose'}) { print "INFO: pruning $snap ... \n"; }
if (!$args{'force-prune'} && iszfsbusy($path)) {
if ($args{'verbose'}) { print "INFO: deferring pruning of $snap - $path is currently in zfs send or receive.\n"; }
} else {
if (! $args{'readonly'}) {
if (system($zfs, "destroy", $snap) == 0) {
$pruned{$snap} = 1;
@ -403,7 +414,6 @@ sub prune_snapshots {
}
}
}
}
removelock('sanoid_pruning');
removecachedsnapshots(0);
} else {
@ -592,6 +602,7 @@ sub take_snapshots {
}
if (%newsnapsgroup) {
$forcecacheupdate = 0;
while ((my $path, my $snapData) = each(%newsnapsgroup)) {
my $recursiveFlag = $snapData->{recursive};
my $dstHandling = $snapData->{handleDst};
@ -662,9 +673,17 @@ sub take_snapshots {
}
};
if ($exit == 0) {
$taken{$snap} = {
'time' => time(),
'recursive' => $recursiveFlag
};
}
$exit == 0 or do {
if ($dstHandling) {
if ($stderr =~ /already exists/) {
$forcecacheupdate = 1;
$exit = 0;
$snap =~ s/_([a-z]+)$/dst_$1/g;
if ($args{'verbose'}) { print "taking dst snapshot $snap$extraMessage\n"; }
@ -714,8 +733,8 @@ sub take_snapshots {
}
}
}
-$forcecacheupdate = 1;
-%snaps = getsnaps(%config,$cacheTTL,$forcecacheupdate);
+addcachedsnapshots();
+%snaps = getsnaps(\%config,$cacheTTL,$forcecacheupdate);
}
}
@ -1008,6 +1027,12 @@ sub init {
}
if ($args{'debug'}) { print "DEBUG: overriding $key on $section with value from user-defined template $template.\n"; }
$config{$section}{$key} = $ini{$template}{$key};
my $value = $config{$section}{$key};
if (ref($value) eq 'ARRAY') {
# handle duplicates silently (warning was already printed above)
$config{$section}{$key} = $value->[0];
}
}
}
}
@ -1630,30 +1655,6 @@ sub writelock {
close FH;
}
sub iszfsbusy {
# check to see if ZFS filesystem passed in as argument currently has a zfs send or zfs receive process referencing it.
# return true if busy (currently being sent or received), return false if not.
my $fs = shift;
# if (args{'debug'}) { print "DEBUG: checking to see if $fs on is already in zfs receive using $pscmd -Ao args= ...\n"; }
open PL, "$pscmd -Ao args= |";
my @processes = <PL>;
close PL;
foreach my $process (@processes) {
# if ($args{'debug'}) { print "DEBUG: checking process $process...\n"; }
if ($process =~ /zfs *(send|receive|recv).*$fs/) {
# there's already a zfs send/receive process for our target filesystem - return true
# if ($args{'debug'}) { print "DEBUG: process $process matches target $fs!\n"; }
return 1;
}
}
# no zfs receive processes for our target filesystem found - return false
return 0;
}
#######################################################################################################################3 #######################################################################################################################3
#######################################################################################################################3 #######################################################################################################################3
#######################################################################################################################3 #######################################################################################################################3
@ -1740,6 +1741,11 @@ sub removecachedsnapshots {
print FH $snapline unless ( exists($pruned{$snap}) );
}
close FH;
# preserve mtime of cache for expire check
my ($dev, $ino, $mode, $nlink, $uid, $gid, $rdev, $size, $atime, $mtime, $ctime, $blksize, $blocks) = stat($cache);
utime($atime, $mtime, "$cache.tmp");
rename("$cache.tmp", "$cache") or die "Could not rename to $cache!\n"; rename("$cache.tmp", "$cache") or die "Could not rename to $cache!\n";
removelock('sanoid_cacheupdate'); removelock('sanoid_cacheupdate');
@ -1753,6 +1759,61 @@ sub removecachedsnapshots {
#######################################################################################################################3 #######################################################################################################################3
#######################################################################################################################3 #######################################################################################################################3
sub addcachedsnapshots {
if (not %taken) {
return;
}
my $unlocked = checklock('sanoid_cacheupdate');
# wait until we can get a lock to do our cache changes
while (not $unlocked) {
if ($args{'verbose'}) { print "INFO: waiting for cache update lock held by another sanoid process.\n"; }
sleep(10);
$unlocked = checklock('sanoid_cacheupdate');
}
writelock('sanoid_cacheupdate');
if ($args{'verbose'}) {
print "INFO: adding taken snapshots to cache.\n";
}
copy($cache, "$cache.tmp") or die "Could not copy to $cache.tmp!\n";
open FH, ">> $cache.tmp" or die "Could not write to $cache.tmp!\n";
while((my $snap, my $details) = each(%taken)) {
my @parts = split("@", $snap, 2);
my $suffix = $parts[1] . "\tcreation\t" . $details->{time} . "\t-";
my $dataset = $parts[0];
print FH "${dataset}\@${suffix}\n";
if ($details->{recursive}) {
my @datasets = getchilddatasets($dataset);
foreach my $dataset(@datasets) {
print FH "${dataset}\@${suffix}\n";
}
}
}
close FH;
# preserve mtime of cache for expire check
my ($dev, $ino, $mode, $nlink, $uid, $gid, $rdev, $size, $atime, $mtime, $ctime, $blksize, $blocks) = stat($cache);
utime($atime, $mtime, "$cache.tmp");
rename("$cache.tmp", "$cache") or die "Could not rename to $cache!\n";
removelock('sanoid_cacheupdate');
}
#######################################################################################################################3
#######################################################################################################################3
#######################################################################################################################3
sub runscript {
my $key=shift;
my $dataset=shift;
@ -1850,7 +1911,7 @@ Options:
--monitor-snapshots Reports on snapshot "health", in a Nagios compatible format
--take-snapshots Creates snapshots as specified in sanoid.conf
--prune-snapshots Purges expired snapshots as specified in sanoid.conf
---force-prune Purges expired snapshots even if a send/recv is in progress
+--cache-ttl=SECONDS Set custom cache expire time in seconds (default: 20 minutes)
--help Prints this helptext
--version Prints the version number

syncoid
View File

@ -4,7 +4,7 @@
# from http://www.gnu.org/licenses/gpl-3.0.html on 2014-11-17. A copy should also be available in this
# project's Git repository at https://github.com/jimsalterjrs/sanoid/blob/master/LICENSE.
-$::VERSION = '2.2.0';
+$::VERSION = '2.3.0';
use strict;
use warnings;
@ -415,10 +415,10 @@ sub syncdataset {
if (!defined($receivetoken)) {
# build hashes of the snaps on the source and target filesystems.
-%snaps = getsnaps('source',$sourcehost,$sourcefs,$sourceisroot,0);
+%snaps = getsnaps('source',$sourcehost,$sourcefs,$sourceisroot);
if ($targetexists) {
-my %targetsnaps = getsnaps('target',$targethost,$targetfs,$targetisroot,0);
+my %targetsnaps = getsnaps('target',$targethost,$targetfs,$targetisroot);
my %sourcesnaps = %snaps;
%snaps = (%sourcesnaps, %targetsnaps);
}
@ -438,7 +438,7 @@ sub syncdataset {
# Don't send the sync snap if it's filtered out by --exclude-snaps or
# --include-snaps
if (!snapisincluded($newsyncsnap)) {
-$newsyncsnap = getnewestsnapshot(\%snaps);
+$newsyncsnap = getnewestsnapshot($sourcehost,$sourcefs,$sourceisroot);
if ($newsyncsnap eq 0) {
writelog('WARN', "CRITICAL: no snapshots exist on source $sourcefs, and you asked for --no-sync-snap.");
if ($exitcode < 1) { $exitcode = 1; }
@ -447,7 +447,7 @@ sub syncdataset {
}
} else {
# we don't want sync snapshots created, so use the newest snapshot we can find.
-$newsyncsnap = getnewestsnapshot(\%snaps);
+$newsyncsnap = getnewestsnapshot($sourcehost,$sourcefs,$sourceisroot);
if ($newsyncsnap eq 0) {
writelog('WARN', "CRITICAL: no snapshots exist on source $sourcefs, and you asked for --no-sync-snap.");
if ($exitcode < 1) { $exitcode = 1; }
@ -575,7 +575,8 @@ sub syncdataset {
my $targetsize = getzfsvalue($targethost,$targetfs,$targetisroot,'-p used');
-my %bookmark = ();
+my $bookmark = 0;
my $bookmarkcreation = 0;
$matchingsnap = getmatchingsnapshot($sourcefs, $targetfs, \%snaps);
if (! $matchingsnap) {
@ -583,18 +584,19 @@ sub syncdataset {
my %bookmarks = getbookmarks($sourcehost,$sourcefs,$sourceisroot);
# check for matching guid of source bookmark and target snapshot (oldest first)
-foreach my $snap ( sort { sortsnapshots(\%snaps, $b, $a) } keys %{ $snaps{'target'} }) {
+foreach my $snap ( sort { $snaps{'target'}{$b}{'creation'}<=>$snaps{'target'}{$a}{'creation'} } keys %{ $snaps{'target'} }) {
my $guid = $snaps{'target'}{$snap}{'guid'};
if (defined $bookmarks{$guid}) {
# found a match
-%bookmark = %{ $bookmarks{$guid} };
+$bookmark = $bookmarks{$guid}{'name'};
$bookmarkcreation = $bookmarks{$guid}{'creation'};
$matchingsnap = $snap;
last;
}
}
-if (! %bookmark) {
+if (! $bookmark) {
# force delete is not possible for the root dataset
if ($args{'force-delete'} && index($targetfs, '/') != -1) {
writelog('INFO', "Removing $targetfs because no matching snapshots were found");
@ -667,18 +669,15 @@ sub syncdataset {
my $nextsnapshot = 0;
-if (%bookmark) {
+if ($bookmark) {
my $bookmarkescaped = escapeshellparam($bookmark);
if (!defined $args{'no-stream'}) {
# if intermediate snapshots are needed we need to find the next oldest snapshot,
# do an replication to it and replicate as always from oldest to newest
# because bookmark sends doesn't support intermediates directly
-foreach my $snap ( sort { sortsnapshots(\%snaps, $a, $b) } keys %{ $snaps{'source'} }) {
-my $comparisonkey = 'creation';
-if (defined $snaps{'source'}{$snap}{'createtxg'} && defined $bookmark{'createtxg'}) {
-$comparisonkey = 'createtxg';
-}
-if ($snaps{'source'}{$snap}{$comparisonkey} >= $bookmark{$comparisonkey}) {
+foreach my $snap ( sort { $snaps{'source'}{$a}{'creation'}<=>$snaps{'source'}{$b}{'creation'} } keys %{ $snaps{'source'} }) {
+if ($snaps{'source'}{$snap}{'creation'} >= $bookmarkcreation) {
$nextsnapshot = $snap;
last;
}
@ -686,13 +685,13 @@ sub syncdataset {
}
if ($nextsnapshot) {
-($exit, $stdout) = syncbookmark($sourcehost, $sourcefs, $targethost, $targetfs, $bookmark{'name'}, $nextsnapshot);
+($exit, $stdout) = syncbookmark($sourcehost, $sourcefs, $targethost, $targetfs, $bookmark, $nextsnapshot);
$exit == 0 or do {
if (!$resume && $stdout =~ /\Qcontains partially-complete state\E/) {
writelog('WARN', "resetting partially receive state");
resetreceivestate($targethost,$targetfs,$targetisroot);
-(my $ret) = syncbookmark($sourcehost, $sourcefs, $targethost, $targetfs, $bookmark{'name'}, $nextsnapshot);
+(my $ret) = syncbookmark($sourcehost, $sourcefs, $targethost, $targetfs, $bookmark, $nextsnapshot);
$ret == 0 or do {
if ($exitcode < 2) { $exitcode = 2; }
return 0;
@ -706,13 +705,13 @@ sub syncdataset {
$matchingsnap = $nextsnapshot;
$matchingsnapescaped = escapeshellparam($matchingsnap);
} else {
-($exit, $stdout) = syncbookmark($sourcehost, $sourcefs, $targethost, $targetfs, $bookmark{'name'}, $newsyncsnap);
+($exit, $stdout) = syncbookmark($sourcehost, $sourcefs, $targethost, $targetfs, $bookmark, $newsyncsnap);
$exit == 0 or do {
if (!$resume && $stdout =~ /\Qcontains partially-complete state\E/) {
writelog('WARN', "resetting partially receive state");
resetreceivestate($targethost,$targetfs,$targetisroot);
-(my $ret) = syncbookmark($sourcehost, $sourcefs, $targethost, $targetfs, $bookmark{'name'}, $newsyncsnap);
+(my $ret) = syncbookmark($sourcehost, $sourcefs, $targethost, $targetfs, $bookmark, $newsyncsnap);
$ret == 0 or do {
if ($exitcode < 2) { $exitcode = 2; }
return 0;
@ -727,7 +726,7 @@ sub syncdataset {
# do a normal replication if bookmarks aren't used or if previous
# bookmark replication was only done to the next oldest snapshot
-if (!%bookmark || $nextsnapshot) {
+if (!$bookmark || $nextsnapshot) {
if ($matchingsnap eq $newsyncsnap) {
# edge case: bookmark replication used the latest snapshot
return 0;
@ -858,15 +857,15 @@ sub syncdataset {
# snapshots first.
# regather snapshots on source and target
-%snaps = getsnaps('source',$sourcehost,$sourcefs,$sourceisroot,0);
+%snaps = getsnaps('source',$sourcehost,$sourcefs,$sourceisroot);
if ($targetexists) {
-my %targetsnaps = getsnaps('target',$targethost,$targetfs,$targetisroot,0);
+my %targetsnaps = getsnaps('target',$targethost,$targetfs,$targetisroot);
my %sourcesnaps = %snaps;
%snaps = (%sourcesnaps, %targetsnaps);
}
-my @to_delete = sort { sortsnapshots(\%snaps, $a, $b) } grep {!exists $snaps{'source'}{$_}} keys %{ $snaps{'target'} };
+my @to_delete = sort { $snaps{'target'}{$a}{'creation'}<=>$snaps{'target'}{$b}{'creation'} } grep {!exists $snaps{'source'}{$_}} keys %{ $snaps{'target'} };
while (@to_delete) {
# Create batch of snapshots to remove
my $snaps = join ',', splice(@to_delete, 0, 50);
@ -898,7 +897,7 @@ sub runsynccmd {
my $disp_pvsize = $pvsize == 0 ? 'UNKNOWN' : readablebytes($pvsize);
my $sendoptions;
-if ($sendsource =~ / -t /) {
+if ($sendsource =~ /^-t /) {
$sendoptions = getoptionsline(\@sendoptions, ('P','V','e','v'));
} elsif ($sendsource =~ /#/) {
$sendoptions = getoptionsline(\@sendoptions, ('L','V','c','e','w'));
@ -1501,7 +1500,8 @@ sub getlocalzfsvalues {
"receive_resume_token", "redact_snaps", "referenced", "refcompressratio", "snapshot_count", "receive_resume_token", "redact_snaps", "referenced", "refcompressratio", "snapshot_count",
"type", "used", "usedbychildren", "usedbydataset", "usedbyrefreservation", "type", "used", "usedbychildren", "usedbydataset", "usedbyrefreservation",
"usedbysnapshots", "userrefs", "snapshots_changed", "volblocksize", "written", "usedbysnapshots", "userrefs", "snapshots_changed", "volblocksize", "written",
"version", "volsize", "casesensitivity", "normalization", "utf8only" "version", "volsize", "casesensitivity", "normalization", "utf8only",
"encryption"
); );
my %blacklisthash = map {$_ => 1} @blacklist; my %blacklisthash = map {$_ => 1} @blacklist;
@ -1530,17 +1530,9 @@ sub readablebytes {
return $disp;
}
sub sortsnapshots {
my ($snaps, $left, $right) = @_;
if (defined $snaps->{'source'}{$left}{'createtxg'} && defined $snaps->{'source'}{$right}{'createtxg'}) {
return $snaps->{'source'}{$left}{'createtxg'} <=> $snaps->{'source'}{$right}{'createtxg'};
}
return $snaps->{'source'}{$left}{'creation'} <=> $snaps->{'source'}{$right}{'creation'};
}
sub getoldestsnapshot {
my $snaps = shift;
-foreach my $snap (sort { sortsnapshots($snaps, $a, $b) } keys %{ $snaps{'source'} }) {
+foreach my $snap ( sort { $snaps{'source'}{$a}{'creation'}<=>$snaps{'source'}{$b}{'creation'} } keys %{ $snaps{'source'} }) {
# return on first snap found - it's the oldest
return $snap;
}
@ -1554,7 +1546,7 @@ sub getoldestsnapshot {
sub getnewestsnapshot {
my $snaps = shift;
-foreach my $snap (sort { sortsnapshots($snaps, $b, $a) } keys %{ $snaps{'source'} }) {
+foreach my $snap ( sort { $snaps{'source'}{$b}{'creation'}<=>$snaps{'source'}{$a}{'creation'} } keys %{ $snaps{'source'} }) {
# return on first snap found - it's the newest
writelog('DEBUG', "NEWEST SNAPSHOT: $snap");
return $snap;
@ -1733,7 +1725,7 @@ sub pruneoldsyncsnaps {
sub getmatchingsnapshot {
my ($sourcefs, $targetfs, $snaps) = @_;
-foreach my $snap ( sort { sortsnapshots($snaps, $b, $a) } keys %{ $snaps{'source'} }) {
+foreach my $snap ( sort { $snaps{'source'}{$b}{'creation'}<=>$snaps{'source'}{$a}{'creation'} } keys %{ $snaps{'source'} }) {
if (defined $snaps{'target'}{$snap}) {
if ($snaps{'source'}{$snap}{'guid'} == $snaps{'target'}{$snap}{'guid'}) {
return $snap;
@ -1868,8 +1860,88 @@ sub dumphash() {
writelog('INFO', Dumper($hash));
}
-sub getsnaps {
-my ($type,$rhost,$fs,$isroot,$use_fallback,%snaps) = @_;
+sub getsnaps() {
+my ($type,$rhost,$fs,$isroot,%snaps) = @_;
my $mysudocmd;
my $fsescaped = escapeshellparam($fs);
if ($isroot) { $mysudocmd = ''; } else { $mysudocmd = $sudocmd; }
my $rhostOriginal = $rhost;
if ($rhost ne '') {
$rhost = "$sshcmd $rhost";
# double escaping needed
$fsescaped = escapeshellparam($fsescaped);
}
my $getsnapcmd = "$rhost $mysudocmd $zfscmd get -Hpd 1 -t snapshot guid,creation $fsescaped";
if ($debug) {
$getsnapcmd = "$getsnapcmd |";
writelog('DEBUG', "getting list of snapshots on $fs using $getsnapcmd...");
} else {
$getsnapcmd = "$getsnapcmd 2>/dev/null |";
}
open FH, $getsnapcmd;
my @rawsnaps = <FH>;
close FH or do {
# fallback (solaris for example doesn't support the -t option)
return getsnapsfallback($type,$rhostOriginal,$fs,$isroot,%snaps);
};
# this is a little obnoxious. get guid,creation returns guid,creation on two separate lines
# as though each were an entirely separate get command.
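# e.g. with purely illustrative values, each property arrives as its own tab-separated
# "name  property  value  source" row:
#   pool/data@autosnap_2025-06-11_00:00:01_hourly   guid       1234567890123456789   -
#   pool/data@autosnap_2025-06-11_00:00:01_hourly   creation   1749600001            -
# which is why the guid and creation branches below each re-derive the snapshot name.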
my %creationtimes=();
foreach my $line (@rawsnaps) {
$line =~ /\Q$fs\E\@(\S*)/;
my $snapname = $1;
if (!snapisincluded($snapname)) { next; }
# only import snap guids from the specified filesystem
if ($line =~ /\Q$fs\E\@.*\tguid/) {
chomp $line;
my $guid = $line;
$guid =~ s/^.*\tguid\t*(\d*).*/$1/;
my $snap = $line;
$snap =~ s/^.*\@(.*)\tguid.*$/$1/;
$snaps{$type}{$snap}{'guid'}=$guid;
}
# only import snap creations from the specified filesystem
elsif ($line =~ /\Q$fs\E\@.*\tcreation/) {
chomp $line;
my $creation = $line;
$creation =~ s/^.*\tcreation\t*(\d*).*/$1/;
my $snap = $line;
$snap =~ s/^.*\@(.*)\tcreation.*$/$1/;
# the accuracy of the creation timestamp is only for a second, but
# snapshots in the same second are highly likely. The list command
# has an ordered output so we append another three digit running number
# to the creation timestamp and make sure those are ordered correctly
# for snapshot with the same creation timestamp
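# e.g. (illustrative): three snapshots sharing creation time 1749600001 become
# 1749600001000, 1749600001001 and 1749600001002, preserving the listing order.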
my $counter = 0;
my $creationsuffix;
while ($counter < 999) {
$creationsuffix = sprintf("%s%03d", $creation, $counter);
if (!defined $creationtimes{$creationsuffix}) {
$creationtimes{$creationsuffix} = 1;
last;
}
$counter += 1;
}
$snaps{$type}{$snap}{'creation'}=$creationsuffix;
}
}
return %snaps;
}
sub getsnapsfallback() {
# fallback (solaris for example doesn't support the -t option)
my ($type,$rhost,$fs,$isroot,%snaps) = @_;
my $mysudocmd;
my $fsescaped = escapeshellparam($fs);
if ($isroot) { $mysudocmd = ''; } else { $mysudocmd = $sudocmd; }
@ -1880,67 +1952,73 @@ sub getsnaps {
$fsescaped = escapeshellparam($fsescaped);
}
-my $getsnapcmd = $use_fallback
-? "$rhost $mysudocmd $zfscmd get -Hpd 1 all $fsescaped"
-: "$rhost $mysudocmd $zfscmd get -Hpd 1 -t snapshot all $fsescaped";
+my $getsnapcmd = "$rhost $mysudocmd $zfscmd get -Hpd 1 type,guid,creation $fsescaped |";
+writelog('WARN', "snapshot listing failed, trying fallback command");
+writelog('DEBUG', "FALLBACK, getting list of snapshots on $fs using $getsnapcmd...");
if ($debug) {
$getsnapcmd = "$getsnapcmd |";
writelog('DEBUG', "getting list of snapshots on $fs using $getsnapcmd...");
} else {
$getsnapcmd = "$getsnapcmd 2>/dev/null |";
}
open FH, $getsnapcmd;
my @rawsnaps = <FH>;
-close FH or do {
+close FH or die "CRITICAL ERROR: snapshots couldn't be listed for $fs (exit code $?)";
if (!$use_fallback) {
-writelog('WARN', "snapshot listing failed, trying fallback command");
+my %creationtimes=();
return getsnaps($type, $rhost, $fs, $isroot, 1, %snaps);
my $state = 0;
foreach my $line (@rawsnaps) {
if ($state < 0) {
$state++;
next;
} }
die "CRITICAL ERROR: snapshots couldn't be listed for $fs (exit code $?)";
};
-my %snap_data;
-my %creationtimes;
+if ($state eq 0) {
+if ($line !~ /\Q$fs\E\@.*\ttype\s*snapshot/) {
# skip non snapshot type object
$state = -2;
next;
}
} elsif ($state eq 1) {
if ($line !~ /\Q$fs\E\@.*\tguid/) {
die "CRITICAL ERROR: snapshots couldn't be listed for $fs (guid parser error)";
}
for my $line (@rawsnaps) {
chomp $line;
-my ($dataset, $property, $value) = split /\t/, $line;
-die "CRITICAL ERROR: Unexpected line format in $line" unless defined $value;
+my $guid = $line;
+$guid =~ s/^.*\tguid\t*(\d*).*/$1/;
my $snap = $line;
-my (undef, $snap) = split /@/, $dataset;
+$snap =~ s/^.*\@(.*)\tguid.*$/$1/;
die "CRITICAL ERROR: Unexpected dataset format in $line" unless $snap;
if (!snapisincluded($snap)) { next; }
$snaps{$type}{$snap}{'guid'}=$guid;
} elsif ($state eq 2) {
if ($line !~ /\Q$fs\E\@.*\tcreation/) {
die "CRITICAL ERROR: snapshots couldn't be listed for $fs (creation parser error)";
}
-$snap_data{$snap}{$property} = $value;
+chomp $line;
my $creation = $line;
$creation =~ s/^.*\tcreation\t*(\d*).*/$1/;
my $snap = $line;
$snap =~ s/^.*\@(.*)\tcreation.*$/$1/;
if (!snapisincluded($snap)) { next; }
# the accuracy of the creation timestamp is only for a second, but
# snapshots in the same second are highly likely. The list command
# has an ordered output so we append another three digit running number
# to the creation timestamp and make sure those are ordered correctly
# for snapshot with the same creation timestamp
if ($property eq 'creation') {
my $counter = 0;
my $creationsuffix;
while ($counter < 999) {
-$creationsuffix = sprintf("%s%03d", $value, $counter);
+$creationsuffix = sprintf("%s%03d", $creation, $counter);
if (!defined $creationtimes{$creationsuffix}) {
$creationtimes{$creationsuffix} = 1;
last;
}
$counter += 1;
}
$snap_data{$snap}{'creation'} = $creationsuffix;
-}
+$snaps{$type}{$snap}{'creation'}=$creationsuffix;
$state = -1;
}
-for my $snap (keys %snap_data) {
+$state++;
if (!$use_fallback || $snap_data{$snap}{'type'} eq 'snapshot') {
$snaps{$type}{$snap}{'guid'} = $snap_data{$snap}{'guid'};
$snaps{$type}{$snap}{'createtxg'} = $snap_data{$snap}{'createtxg'};
$snaps{$type}{$snap}{'creation'} = $snap_data{$snap}{'creation'};
}
}
return %snaps;
@ -1959,7 +2037,7 @@ sub getbookmarks() {
}
my $error = 0;
-my $getbookmarkcmd = "$rhost $mysudocmd $zfscmd get -Hpd 1 -t bookmark all $fsescaped 2>&1 |";
+my $getbookmarkcmd = "$rhost $mysudocmd $zfscmd get -Hpd 1 -t bookmark guid,creation $fsescaped 2>&1 |";
writelog('DEBUG', "getting list of bookmarks on $fs using $getbookmarkcmd...");
open FH, $getbookmarkcmd;
my @rawbookmarks = <FH>;
@ -1974,44 +2052,46 @@ sub getbookmarks() {
die "CRITICAL ERROR: bookmarks couldn't be listed for $fs (exit code $?)"; die "CRITICAL ERROR: bookmarks couldn't be listed for $fs (exit code $?)";
} }
my %bookmark_data; # this is a little obnoxious. get guid,creation returns guid,creation on two separate lines
my %creationtimes; # as though each were an entirely separate get command.
for my $line (@rawbookmarks) { my $lastguid;
my %creationtimes=();
foreach my $line (@rawbookmarks) {
# only import bookmark guids, creation from the specified filesystem
if ($line =~ /\Q$fs\E\#.*\tguid/) {
chomp $line;
-my ($dataset, $property, $value) = split /\t/, $line;
-die "CRITICAL ERROR: Unexpected line format in $line" unless defined $value;
+$lastguid = $line;
+$lastguid =~ s/^.*\tguid\t*(\d*).*/$1/;
my $bookmark = $line;
-my (undef, $bookmark) = split /#/, $dataset;
-die "CRITICAL ERROR: Unexpected dataset format in $line" unless $bookmark;
+$bookmark =~ s/^.*\#(.*)\tguid.*$/$1/;
+$bookmarks{$lastguid}{'name'}=$bookmark;
} elsif ($line =~ /\Q$fs\E\#.*\tcreation/) {
-$bookmark_data{$bookmark}{$property} = $value;
+chomp $line;
my $creation = $line;
$creation =~ s/^.*\tcreation\t*(\d*).*/$1/;
my $bookmark = $line;
$bookmark =~ s/^.*\#(.*)\tcreation.*$/$1/;
# the accuracy of the creation timestamp is only for a second, but
# bookmarks in the same second are possible. The list command
# has an ordered output so we append another three digit running number
# to the creation timestamp and make sure those are ordered correctly
# for bookmarks with the same creation timestamp
if ($property eq 'creation') {
my $counter = 0;
my $creationsuffix;
while ($counter < 999) {
-$creationsuffix = sprintf("%s%03d", $value, $counter);
+$creationsuffix = sprintf("%s%03d", $creation, $counter);
if (!defined $creationtimes{$creationsuffix}) {
$creationtimes{$creationsuffix} = 1;
last;
}
$counter += 1;
}
$bookmark_data{$bookmark}{'creation'} = $creationsuffix;
}
}
-for my $bookmark (keys %bookmark_data) {
-my $guid = $bookmark_data{$bookmark}{'guid'};
+$bookmarks{$lastguid}{'creation'}=$creationsuffix;
+}
$bookmarks{$guid}{'name'} = $bookmark;
$bookmarks{$guid}{'creation'} = $bookmark_data{$bookmark}{'creation'};
$bookmarks{$guid}{'createtxg'} = $bookmark_data{$bookmark}{'createtxg'};
}
return %bookmarks;
@ -2175,7 +2255,7 @@ sub parsespecialoptions {
return undef;
}
-if ($char eq 'o' || $char eq 'x') {
+if ($char eq 'o' || $char eq 'x' || $char eq 'X') {
$lastOption = $char;
$optionValue = 1;
} else {

View File

@ -39,7 +39,7 @@ function cleanUp {
trap cleanUp EXIT
while [ $timestamp -le $END ]; do
-setdate $timestamp; date; "${SANOID}" --cron --verbose
+setdate $timestamp; date; "${SANOID}" --cron --verbose --cache-ttl=2592000
timestamp=$((timestamp+3600))
done

View File

@ -42,7 +42,7 @@ function cleanUp {
trap cleanUp EXIT
while [ $timestamp -le $END ]; do
-setdate $timestamp; date; "${SANOID}" --cron --verbose
+setdate $timestamp; date; "${SANOID}" --cron --verbose --cache-ttl=2592000
timestamp=$((timestamp+900))
done

View File

@ -10,7 +10,10 @@ function setup {
export SANOID="../../sanoid"
# make sure that there is no cache file
-rm -f /var/cache/sanoidsnapshots.txt
+rm -f /var/cache/sanoid/snapshots.txt
rm -f /var/cache/sanoid/datasets.txt
mkdir -p /etc/sanoid
# install needed sanoid configuration files
[ -f sanoid.conf ] && cp sanoid.conf /etc/sanoid/sanoid.conf
@ -51,6 +54,11 @@ function disableTimeSync {
if [ $? -eq 0 ]; then
timedatectl set-ntp 0
fi
which systemctl > /dev/null
if [ $? -eq 0 ]; then
systemctl is-active virtualbox-guest-utils.service && systemctl stop virtualbox-guest-utils.service
fi
}
function saveSnapshotList {

View File

@ -2,7 +2,7 @@
# run's all the available tests
-for test in $(find . -mindepth 1 -maxdepth 1 -type d -printf "%P\n" | sort -g); do
+for test in */; do
if [ ! -x "${test}/run.sh" ]; then
continue
fi

View File

@ -1,50 +0,0 @@
#!/bin/bash
# test verifying snapshots with out-of-order snapshot creation datetimes
set -x
set -e
. ../../common/lib.sh
if [ -z "$ALLOW_INVASIVE_TESTS" ]; then
exit 130
fi
POOL_IMAGE="/tmp/syncoid-test-11.zpool"
POOL_SIZE="64M"
POOL_NAME="syncoid-test-11"
truncate -s "${POOL_SIZE}" "${POOL_IMAGE}"
zpool create -m none -f "${POOL_NAME}" "${POOL_IMAGE}"
function cleanUp {
zpool export "${POOL_NAME}"
rm -f "${POOL_IMAGE}"
}
# export pool and remove the image in any case
trap cleanUp EXIT
zfs create "${POOL_NAME}"/before
zfs snapshot "${POOL_NAME}"/before@this-snapshot-should-make-it-into-the-after-dataset
disableTimeSync
setdate 1155533696
zfs snapshot "${POOL_NAME}"/before@oldest-snapshot
zfs snapshot "${POOL_NAME}"/before@another-snapshot-does-not-matter
../../../syncoid --sendoptions="Lec" "${POOL_NAME}"/before "${POOL_NAME}"/after
# verify
saveSnapshotList "${POOL_NAME}" "snapshot-list.txt"
grep "${POOL_NAME}/before@this-snapshot-should-make-it-into-the-after-dataset" "snapshot-list.txt" || exit $?
grep "${POOL_NAME}/after@this-snapshot-should-make-it-into-the-after-dataset" "snapshot-list.txt" || exit $?
grep "${POOL_NAME}/before@oldest-snapshot" "snapshot-list.txt" || exit $?
grep "${POOL_NAME}/after@oldest-snapshot" "snapshot-list.txt" || exit $?
grep "${POOL_NAME}/before@another-snapshot-does-not-matter" "snapshot-list.txt" || exit $?
grep "${POOL_NAME}/after@another-snapshot-does-not-matter" "snapshot-list.txt" || exit $?
exit 0

View File

@ -0,0 +1,55 @@
#!/bin/bash
# test verifying syncoid behavior with partial transfers
set -x
. ../../common/lib.sh
POOL_IMAGE="/tmp/syncoid-test-012.zpool"
POOL_SIZE="128M"
POOL_NAME="syncoid-test-012"
MOUNT_TARGET="/tmp/syncoid-test-012.mount"
truncate -s "${POOL_SIZE}" "${POOL_IMAGE}"
zpool create -O mountpoint="${MOUNT_TARGET}" -f "${POOL_NAME}" "${POOL_IMAGE}"
function cleanUp {
zpool destroy "${POOL_NAME}"
rm -f "${POOL_IMAGE}"
}
# Clean up the pool and image file on exit
trap cleanUp EXIT
zfs create "${POOL_NAME}/source"
zfs snap "${POOL_NAME}/source@empty"
dd if=/dev/urandom of="${MOUNT_TARGET}/source/garbage.bin" bs=1M count=16
zfs snap "${POOL_NAME}/source@something"
# Simulate interrupted transfer
zfs send -pwR "${POOL_NAME}/source@something" | head --bytes=8M | zfs recv -s "${POOL_NAME}/destination"
# Using syncoid to continue interrupted transfer
../../../syncoid --sendoptions="pw" "${POOL_NAME}/source" "${POOL_NAME}/destination"
# Check if syncoid succeeded in handling the interrupted transfer
if [ $? -eq 0 ]; then
echo "Syncoid resumed transfer successfully."
# Verify data integrity with sha256sum comparison
original_sum=$(sha256sum "${MOUNT_TARGET}/source/garbage.bin" | cut -d ' ' -f 1)
received_sum=$(sha256sum "${MOUNT_TARGET}/destination/garbage.bin" | cut -d ' ' -f 1)
if [ "${original_sum}" == "${received_sum}" ]; then
echo "Data integrity verified."
exit 0
else
echo "Data integrity check failed."
exit 1
fi
else
echo "Regression detected: syncoid did not handle the resuming correctly."
exit 1
fi

View File

@ -2,7 +2,7 @@
# run's all the available tests
-for test in $(find . -mindepth 1 -maxdepth 1 -type d -printf "%P\n" | sort -g); do
+for test in */; do
if [ ! -x "${test}/run.sh" ]; then
continue
fi