Compare commits

...

35 Commits

Author SHA1 Message Date
Kenny Phelps-McKeown 5f6c67c033
Merge 19b8ef2e9c into 8e1d11e0b2 2026-03-01 22:05:49 -05:00
kennypm 19b8ef2e9c full flexibility for naming template 2026-03-01 19:43:56 -05:00
kennypm 4387419c35 custom name prefix without breaking getsnaps()
custom datestamp format
no name reordering yet as getsnaps() expects leading prefix and trailing snap type
2026-03-01 19:43:56 -05:00
Jim Salter 8e1d11e0b2
Merge pull request #572 from JakobR/dataset-dependencies
Replicate all dependencies of a dataset first
2026-02-18 17:58:51 -05:00
Jim Salter fc010c9118
Merge pull request #1077 from phreaker0/fix-zero-named-bookmarks
fix replication with '0' named bookmarks
2026-02-18 17:13:29 -05:00
Christoph Klaffl dcb3978fed
Merge branch 'master' into fix-zero-named-bookmarks 2026-02-18 23:12:03 +01:00
Jim Salter ba495e58af
Merge pull request #626 from 0xFelix/ignore-failed-create-bookmark
syncoid: Compare existing bookmarks
2026-02-18 17:08:18 -05:00
Jim Salter 04ca8f4e58
Merge pull request #1071 from r-ricci/print
don't print cache updates on stdout
2026-02-18 17:07:53 -05:00
Jim Salter 31104a4488
Merge pull request #1009 from bjoern-r/snaplist-from-configured-datasets
get snapshots from configured list of datasets only
2026-02-18 17:07:02 -05:00
Jim Salter 20abb530c8
Merge pull request #1078 from phreaker0/fix-009-test
fix test with latest zfs versions
2026-02-18 17:06:25 -05:00
Jim Salter 9f76aab5d6
Merge pull request #1051 from ifazk/busyfix
Fix iszfsbusy to not match pool prefix
2026-02-18 17:04:44 -05:00
Jim Salter d763e45dd4
Merge pull request #1007 from Deltik/fix/815
syncoid: Sort snapshots by `createtxg` if possible (fallback to `creation`) (redux of #818)
2026-02-18 17:02:00 -05:00
Jim Salter b4a5394d1f
Merge pull request #1079 from phreaker0/doc-weeklies
add weeklies documentation
2026-02-18 17:00:11 -05:00
Christoph Klaffl 3b606768fa
Merge branch 'master' into fix/815 2026-02-18 22:59:26 +01:00
Jim Salter 3d866c9473
Merge pull request #1054 from ifazk/unused-forcerecv
Remove unused forcerecv variable
2026-02-18 16:59:00 -05:00
Jim Salter 07e32cad71
Merge pull request #1044 from numerfolt/master
Change syncoid snapshots date and time divider from ":" to "_"
2026-02-18 16:49:55 -05:00
Christoph Klaffl 4415b36ba8
fix error handling for fallback (old behaviour) and codestyle 2026-02-18 20:36:42 +01:00
Christoph Klaffl 730bce6d38
add weeklies documentation 2026-02-18 15:20:23 +01:00
Christoph Klaffl 2875e10adb
fix test with latest zfs versions 2026-02-18 15:07:10 +01:00
Christoph Klaffl 3a1a19b39b
fix replication with '0' named bookmarks 2026-02-18 14:57:38 +01:00
Roberto Ricci 2343089a08 don't print cache updates on stdout
Fixes 393a4672e5
2026-01-31 19:12:56 +01:00
Felix Matouschek dcae5ce4b5 syncoid: Compare existing bookmarks
When creating bookmarks compare the GUID of existing bookmarks before
failing the creation of a duplicate bookmark.

Signed-off-by: Felix Matouschek <felix@matouschek.org>
2026-01-10 11:36:25 +00:00
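The idea in this commit can be sketched as follows (a hedged illustration in Python, not the actual Perl from syncoid; the GUID value and bookmark names are made up):

```python
# Before treating a failed bookmark creation as an error, check whether a
# bookmark with the same GUID (i.e. pointing at the same snapshot) already
# exists -- if so, the "duplicate" is harmless and creation can be skipped.
existing_bookmarks = {
    "1111": {"name": "autosnap_2026-01-10"},  # keyed by GUID, like getbookmarks()
}

def ensure_bookmark(guid, name):
    bm = existing_bookmarks.get(guid)
    if bm is not None and bm["name"] == name:
        return "already exists, skipping creation"
    return "create bookmark"

print(ensure_bookmark("1111", "autosnap_2026-01-10"))
print(ensure_bookmark("2222", "autosnap_2026-01-11"))
```

The GUID comparison is what makes the skip safe: two bookmarks only share a GUID if they reference the same snapshot data.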
Ifaz Kabir 225c9f99d1 Remove unused forcerecv variable 2025-12-10 21:10:32 -05:00
Ifaz Kabir 4543705ffc Fix iszfsbusy to not match pool prefix 2025-12-03 23:49:22 -05:00
Bjoern 29f05ff5c4 fix dataset filter if autoprune is set 2025-11-18 12:14:06 +01:00
Bjoern Riemer 1c6d7d6459 only query not-ignored datasets for snapshots 2025-11-09 22:44:20 +01:00
numerfolt 4c9fba2277
Change date and time divider from : to _ 2025-09-29 22:04:29 +02:00
Björn 24b0293b0f
fallback to old zfs list snapshot method in case of error 2025-08-25 17:52:48 +02:00
Nick Liu 1952e96846
fix(tests/common/lib.sh): Support set -e in test scripts
The `systemctl is-active virtualbox-guest-utils.service` command returns
a non-zero exit status when the service is inactive, causing early exit
in tests that use `set -e`.

This change suppresses that error with `|| true` to prevent test failure
when the service check returns non-zero.

Fixes this test:

```
root@demo:~/sanoid/tests/syncoid# cat /tmp/syncoid_test_run_011_sync_out-of-order_snapshots.log
+ set -e
+ . ../../common/lib.sh
+++ uname
++ unamestr=Linux
+ POOL_IMAGE=/tmp/jimsalterjrs_sanoid_815.img
+ POOL_SIZE=64M
+ POOL_NAME=jimsalterjrs_sanoid_815
+ truncate -s 64M /tmp/jimsalterjrs_sanoid_815.img
+ zpool create -m none -f jimsalterjrs_sanoid_815 /tmp/jimsalterjrs_sanoid_815.img
+ trap cleanUp EXIT
+ zfs create jimsalterjrs_sanoid_815/before
+ zfs snapshot jimsalterjrs_sanoid_815/before@this-snapshot-should-make-it-into-the-after-dataset
+ disableTimeSync
+ which timedatectl
+ '[' 0 -eq 0 ']'
+ timedatectl set-ntp 0
+ which systemctl
+ '[' 0 -eq 0 ']'
+ systemctl is-active virtualbox-guest-utils.service
inactive
+ cleanUp
+ zpool export jimsalterjrs_sanoid_815
+ rm -f /tmp/jimsalterjrs_sanoid_815.img
```
2025-06-18 00:15:44 +02:00
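The effect of appending `|| true`, as this commit does in `lib.sh`, can be demonstrated in isolation (a minimal sketch using `false` as a stand-in for the failing `systemctl is-active` call):

```python
import subprocess

# Under `set -e` a shell aborts on any non-zero exit status.
# Appending `|| true` forces the compound command's status to 0,
# so the script keeps running past a merely-informational failure.
r1 = subprocess.run("false", shell=True)          # exit status 1
r2 = subprocess.run("false || true", shell=True)  # exit status 0
print(r1.returncode, r2.returncode)
```

With the guard in place, an inactive service no longer kills the whole test run.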
Björn 510becee2f
get snapshots from list of datasets
query the existing snapshots only from the configured datasets to avoid spinning up disks that are not in the config
2025-06-17 13:32:59 +02:00
Nick Liu f2e1e4f8a0
test(syncoid): Add test to verify out-of-order snapshot sync
This commit adds a regression test for the out-of-order snapshot
replication issue.

The new test case manipulates the system clock with `setdate` to create
snapshots with non-monotonic `creation` timestamps but a correct,
sequential `createtxg`. It then runs `syncoid` and verifies that all
snapshots were replicated, which is only possible if they are ordered
correctly by `createtxg`.

See #815 for the original test.
2025-06-11 13:01:00 -05:00
Nick Liu 258a664dc0
fix(syncoid): Harden bookmark replication against timestamp non-monotony
* Replaced direct sorting on the `creation` property with calls to the
  `sortsnapshots()` helper subroutine. As with other usages, this
  ensures that when `syncoid` searches for the next snapshot to
  replicate from a bookmark, it preferentially uses the monotonic
  `createtxg` for sorting.
* Refactored the variables holding bookmark details from separate
  scalars (`$bookmark`, `$bookmarkcreation`) into a single hash
  (`%bookmark`). This allows for cleaner handling of all
  relevant bookmark properties (`name`, `creation`, `createtxg`).
* Fixed a code comment that incorrectly described the snapshot search
  order. The search for a matching target snapshot now correctly states
  it proceeds from newest-to-oldest to find the most recent common
  ancestor.
2025-06-11 12:40:57 -05:00
Nick Liu be52f8ab1e
feat(syncoid): Sort snapshots reliably using the `createtxg` property
System clock adjustments from manual changes or NTP synchronization can
cause ZFS snapshot creation timestamps to be non-monotonic. This can
cause `syncoid` to select the wrong "oldest" snapshot for initial
replication or the wrong "newest" snapshot with `--no-sync-snap`,
potentially losing data from the source to the target.

This change adds the `sortsnapshots()` helper that prefers to compare
the `createtxg` (creation transaction group) property over the
`creation` property when available. The `createtxg` property is
guaranteed to be strictly sequential within a ZFS pool, unlike the
`creation` property, which depends on the system clock's accuracy.

Unlike the first iteration of `sortsnapshots()` in #818, the subroutine
takes a specific snapshot sub-hash ('source' or 'target') as input
rather than both, ensuring createtxg comparisons occur within the same
zpool context. The first iteration that took the entire snapshots hash
had no mechanism to sort target snapshots, which could have caused
issues in usages that expected target snapshots to be sorted.

Most snapshot sorting call sites now use the new `sortsnapshots()`
subroutine. Two more usages involving bookmarks are updated in a
different commit for independent testing of a bookmarks-related
refactoring.

Fixes: #815
2025-06-11 12:25:14 -05:00
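The ordering rule described above can be sketched like this (an illustrative Python translation of the comparator, with made-up snapshot data; the real subroutine is Perl):

```python
import functools

# Prefer the monotonic createtxg property; fall back to the
# clock-dependent creation timestamp when createtxg is unavailable.
snaps = {
    "snap_a": {"creation": 300, "createtxg": 10},
    "snap_b": {"creation": 100, "createtxg": 20},  # clock moved backwards
}

def sortsnapshots(snapdata, left, right):
    if "createtxg" in snapdata[left] and "createtxg" in snapdata[right]:
        return snapdata[left]["createtxg"] - snapdata[right]["createtxg"]
    return snapdata[left]["creation"] - snapdata[right]["creation"]

ordered = sorted(snaps, key=functools.cmp_to_key(
    lambda a, b: sortsnapshots(snaps, a, b)))
print(ordered)  # createtxg wins over the misleading creation times
```

Sorting by `creation` alone would put `snap_b` first even though it was taken later; `createtxg` recovers the true order.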
Nick Liu a6d417113b
fix(syncoid): Rework snap/bookmark fetching with feature detection
The original snapshot fetching relied on a complex state-dependent
`getsnaps()` subroutine with a separate `getsnapsfallback()` for older
ZFS versions. The first refactor attempt in #818 simplified this but
introduced performance regressions by using `zfs get all`, which was
inefficient for large datasets.

This commit avoids that overhead by integrating proactive `zfs get`
feature detection through a new `check_zfs_get_features()` subroutine
that determines the command's capabilities by testing for `-t` (type
filter) support and the availability of the `createtxg` property.
Results are cached per host to avoid redundant checks.
`check_zfs_get_features()` came from #931, which this change
supersedes.

The `getsnaps()` and `getbookmarks()` subroutines now use this
information to build optimized `zfs get` commands that query only
necessary properties. As before in #818, the parsing logic is refactored
to populate property hashes for each item, eliminating the old
multi-loop state-dependent approach and the need for mostly duplicated
fallback logic.

This resolves both the original complexity and the performance issues
from the first attempted fix. Now there is a foundation for fixing the
snapshot ordering bug reported in #815.
2025-06-11 12:10:59 -05:00
Jakob Rath 114724b0a4 Replicate all dependencies of a dataset first
Assuming we want to replicate the following pool:

```
NAME            USED  AVAIL  REFER  MOUNTPOINT              ORIGIN
testpool1      1.10M  38.2M   288K  /Volumes/testpool1      -
testpool1/A     326K  38.2M   293K  /Volumes/testpool1/A    testpool1/B@b
testpool1/A/D   303K  38.2M   288K  /Volumes/testpool1/A/D  -
testpool1/B    35.5K  38.2M   292K  /Volumes/testpool1/B    testpool1/C@a
testpool1/C     306K  38.2M   290K  /Volumes/testpool1/C    -
```

Note the clone dependencies: `A -> B -> C`.

Currently, syncoid notices that `A` and `B` are clones and defers syncing them.
There are two problems:

1. Syncing `A/D` fails because we have deferred `A`.

2. The clone relation `A -> B` will not be recreated since the list of deferred datasets does not take into account clone relations between them.

This PR solves both of these problems by collecting all dependencies of a dataset and syncing them before the dataset itself.

---

One problematic case remains: if a dataset depends (transitively) on one of its own children, e.g.:

```
NAME            USED  AVAIL  REFER  MOUNTPOINT              ORIGIN
testpool1/E    58.5K  38.7M   298K  /Volumes/testpool1/E    testpool1/E/D@e
testpool1/E/D  37.5K  38.7M   296K  /Volumes/testpool1/E/D  testpool1/A@d
```

Here, the first run of syncoid will fail to sync `E/D`.
I've chosen to ignore this case for now because
1) it seems quite artificial and not like something that would occur in practice very often, and
2) a second run of syncoid will successfully sync `E/D` too (although the clone relation `E -> E/D` is lost).
2024-06-09 14:13:15 +02:00
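The dependency collection the PR describes can be sketched as follows (an illustrative Python reduction of the Perl loop; dataset names are made up and origins are shown without the `@snapshot` part for brevity):

```python
# A dataset has up to two dependencies: its parent dataset and, if it is
# a clone, its origin. Walk them breadth-first, building a sync order in
# which every dependency precedes its dependent.
datasets = {
    "p/A":   {"origin": "p/B"},
    "p/A/D": {"origin": None},
    "p/B":   {"origin": "p/C"},
    "p/C":   {"origin": None},
}

def sync_order(name, synced):
    todo, tosync, seen = [name], [], set()
    while todo:
        ds = todo.pop(0)
        if ds in synced or ds in seen:  # seen-check also breaks cycles
            continue
        tosync.insert(0, ds)            # dependencies end up in front
        seen.add(ds)
        parent = ds.rsplit("/", 1)[0]
        if parent in datasets:          # parent is replicated too: sync first
            todo.append(parent)
        origin = datasets[ds]["origin"]
        if origin in datasets:          # clone source replicated too
            todo.append(origin)
    return tosync

print(sync_order("p/A/D", set()))  # C before B before A before A/D
```

This is why both problems disappear: `A/D` is never attempted before `A`, and clone chains like `A -> B -> C` are replicated source-first.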
11 changed files with 502 additions and 241 deletions

View File

@ -42,13 +42,14 @@ And its /etc/sanoid/sanoid.conf might look something like this:
frequently = 0
hourly = 36
daily = 30
weekly = 4
monthly = 3
yearly = 0
autosnap = yes
autoprune = yes
```
Which would be enough to tell sanoid to take and keep 36 hourly snapshots, 30 dailies, 3 monthlies, and no yearlies for all datasets under data/images (but not data/images itself, since process_children_only is set). Except in the case of data/images/win7, which follows the same template (since it's a child of data/images) but only keeps 4 hourlies for whatever reason.
Which would be enough to tell sanoid to take and keep 36 hourly snapshots, 30 dailies, 4 weeklies, 3 monthlies, and no yearlies for all datasets under data/images (but not data/images itself, since process_children_only is set). Except in the case of data/images/win7, which follows the same template (since it's a child of data/images) but only keeps 4 hourlies for whatever reason.
For more full details on sanoid.conf settings see [Wiki page](https://github.com/jimsalterjrs/sanoid/wiki/Sanoid#options).

View File

@ -1,3 +1,9 @@
sanoid (2.3.1-SNAPSHOT) unstable; urgency=medium
SNAPSHOT
-- Jim Salter <github@jrs-s.net> Tue, 12 Aug 2025 14:43:00 +0200
sanoid (2.3.0) unstable; urgency=medium
[overall] documentation updates, small fixes (@thecatontheflat, @mjeanson, @jiawen, @EchterAgo, @jan-krieg, @dlangille, @rightaditya, @MynaITLabs, @ossimoi, @alexgarel, @TopherIsSwell, @jimsalterjrs, @phreaker0)

60
sanoid
View File

@ -17,6 +17,7 @@ use Getopt::Long qw(:config auto_version auto_help);
use Pod::Usage; # pod2usage
use Time::Local; # to parse dates in reverse
use Capture::Tiny ':all';
use POSIX 'strftime';
my %args = (
"configdir" => "/etc/sanoid",
@ -144,6 +145,28 @@ if ($args{'cron'}) {
exit 0;
####################################################################################
####################################################################################
####################################################################################
sub get_active_datasets {
my ($config, $snaps, $snapsbytype, $snapsbypath) = @_;
my @paths;
foreach my $section (keys %config) {
if ($section =~ /^template/) { next; }
if ((! $config{$section}{'autoprune'}) and (! $config{$section}{'autosnap'})) { next; }
if ($config{$section}{'process_children_only'}) { next; }
my $path = $config{$section}{'path'};
push @paths, $path;
}
my @sorted_paths = sort { lc($a) cmp lc($b) } @paths;
my $paths = join (" ", @sorted_paths);
return $paths
}
####################################################################################
####################################################################################
####################################################################################
@ -616,7 +639,11 @@ sub take_snapshots {
my @snapshots;
foreach my $type (@types) {
my $snapname = "autosnap_$datestamp{'sortable'}_$type";
my $sortable = strftime($config{$dataset}{'datestamp_format'}, localtime($datestamp{'unix_time'}));
my $snapname = $config{$dataset}{'snapname_format'};
$snapname =~ s/IDENTIFIER/$config{$dataset}{'identifier'}/g;
$snapname =~ s/DATE/$sortable/g;
$snapname =~ s/TYPE/$type/g;
push(@snapshots, $snapname);
}
@ -870,6 +897,7 @@ sub getsnaps {
my ($config, $cacheTTL, $forcecacheupdate) = @_;
my @rawsnaps;
my $exitcode;
my ($dev, $ino, $mode, $nlink, $uid, $gid, $rdev, $size, $atime, $mtime, $ctime, $blksize, $blocks) = stat($cache);
@ -882,11 +910,24 @@ sub getsnaps {
} else {
print "INFO: cache expired - updating from zfs list.\n";
}
if ($args{'debug'}) {
print "INFO: running: $zfs get -Hrpt snapshot creation " . get_active_datasets(@params) . "\n";
}
}
open FH, "$zfs get -Hrpt snapshot creation |";
# just get snapshots from configured datasets
open FH, "$zfs get -Hrpt snapshot creation " . get_active_datasets(@params) . " |";
@rawsnaps = <FH>;
close FH;
my $exitcode = $? >> 8;
if ($exitcode != 0) {
print "INFO: zfs list snapshots with dataset names does not work, retrying without dataset names\n";
open FH, "$zfs get -Hrpt snapshot creation |";
@rawsnaps = <FH>;
close FH;
}
open FH, "> $cache.tmp" or die "Could not write to $cache.tmp!\n";
print FH @rawsnaps;
close FH;
@ -906,12 +947,13 @@ sub getsnaps {
}
foreach my $snap (@rawsnaps) {
my ($fs,$snapname,$snapdate) = ($snap =~ m/(.*)\@(.*ly)\t*creation\t*(\d*)/);
my ($fs,$snapname,$snapdate) = ($snap =~ m/(.*)\@(.*?)\t*creation\t*(\d*)/);
# avoid pissing off use warnings
if (defined $snapname) {
my ($snaptype) = ($snapname =~ m/.*_(\w*ly)/);
if ($snapname =~ /^autosnap/) {
if ($snapname =~ /$config{$fs}{'identifier'}/) {
my @types = qw(yearly monthly weekly daily hourly frequently);
my ($snaptype) = grep { $snapname =~ /$_/ } @types;
$snaps{$fs}{$snapname}{'ctime'}=$snapdate;
$snaps{$fs}{$snapname}{'type'}=$snaptype;
}
@ -1148,16 +1190,15 @@ sub init {
sub get_date {
my %datestamp;
($datestamp{'sec'},$datestamp{'min'},$datestamp{'hour'},$datestamp{'mday'},$datestamp{'mon'},$datestamp{'year'},$datestamp{'wday'},$datestamp{'yday'},$datestamp{'isdst'}) = localtime(time);
$datestamp{'unix_time'} = time();
($datestamp{'sec'},$datestamp{'min'},$datestamp{'hour'},$datestamp{'mday'},$datestamp{'mon'},$datestamp{'year'},$datestamp{'wday'},$datestamp{'yday'},$datestamp{'isdst'}) = localtime($datestamp{'unix_time'});
$datestamp{'year'} += 1900;
$datestamp{'unix_time'} = (((((((($datestamp{'year'} - 1971) * 365) + $datestamp{'yday'}) * 24) + $datestamp{'hour'}) * 60) + $datestamp{'min'}) * 60) + $datestamp{'sec'};
$datestamp{'sec'} = sprintf ("%02u", $datestamp{'sec'});
$datestamp{'min'} = sprintf ("%02u", $datestamp{'min'});
$datestamp{'hour'} = sprintf ("%02u", $datestamp{'hour'});
$datestamp{'mday'} = sprintf ("%02u", $datestamp{'mday'});
$datestamp{'mon'} = sprintf ("%02u", ($datestamp{'mon'} + 1));
$datestamp{'noseconds'} = "$datestamp{'year'}-$datestamp{'mon'}-$datestamp{'mday'}_$datestamp{'hour'}:$datestamp{'min'}";
$datestamp{'sortable'} = "$datestamp{'noseconds'}:$datestamp{'sec'}";
return %datestamp;
}
@ -1792,7 +1833,6 @@ sub addcachedsnapshots {
my @datasets = getchilddatasets($dataset);
foreach my $dataset(@datasets) {
print "${dataset}\@${suffix}\n";
print $fh "${dataset}\@${suffix}\n";
}
}

View File

@ -55,6 +55,7 @@
frequently = 0
hourly = 36
daily = 30
weekly = 4
monthly = 3
yearly = 0
autosnap = yes
@ -65,6 +66,7 @@
frequently = 0
hourly = 30
daily = 90
weekly = 4
monthly = 12
yearly = 0
@ -86,6 +88,7 @@
frequently = 0
hourly = 30
daily = 90
weekly = 4
monthly = 3
yearly = 0

View File

@ -113,3 +113,9 @@ yearly_crit = 0
# for overriding these values one needs to specify them in a root pool section! ([tank]\n ...)
capacity_warn = 80
capacity_crit = 95
# snapshot name formats can be overridden
identifier = autosnap
# strftime-style format string
datestamp_format = %Y-%m-%d_%H:%M:%S
snapname_format = IDENTIFIER_DATE_TYPE
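The templating these new options enable can be sketched like this (an illustrative Python version of the substitution done in `take_snapshots`; the helper name is made up):

```python
from datetime import datetime

# IDENTIFIER, DATE and TYPE placeholders are replaced in turn,
# with DATE rendered through an strftime-style format string.
def build_snapname(fmt, identifier, datestamp_format, snaptype, when):
    date = when.strftime(datestamp_format)
    return (fmt.replace("IDENTIFIER", identifier)
               .replace("DATE", date)
               .replace("TYPE", snaptype))

name = build_snapname("IDENTIFIER_DATE_TYPE", "autosnap",
                      "%Y-%m-%d_%H:%M:%S", "daily",
                      datetime(2026, 3, 1, 19, 43, 56))
print(name)  # autosnap_2026-03-01_19:43:56_daily
```

With the defaults shown in the config above, the result matches the classic `autosnap_<sortable date>_<type>` naming, so existing `getsnaps()` matching keeps working.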

467
syncoid
View File

@ -194,6 +194,9 @@ if (length $args{'insecure-direct-connection'}) {
# warn user of anything missing, then continue with sync.
my %avail = checkcommands();
# host => { supports_type_filter => 1/0, supported_properties => ['guid', 'creation', ...] }
my %host_zfs_get_features;
my %snaps;
my $exitcode = 0;
@ -215,53 +218,86 @@ if (!defined $args{'recursive'}) {
$exitcode = 2;
}
my @deferred;
foreach my $datasetProperties(@datasets) {
my %datasetsByName;
foreach my $datasetProperties (@datasets) {
my $dataset = $datasetProperties->{'name'};
my $origin = $datasetProperties->{'origin'};
if ($origin eq "-" || defined $args{'no-clone-handling'}) {
$origin = undef;
} else {
# check if clone source is replicated too
my @values = split(/@/, $origin, 2);
my $srcdataset = $values[0];
$datasetsByName{$dataset} = $datasetProperties;
my $found = 0;
foreach my $datasetProperties(@datasets) {
if ($datasetProperties->{'name'} eq $srcdataset) {
$found = 1;
last;
# Clean the 'origin' property
# (we set 'origin' to undef whenever we don't want to handle it during sync)
if ($origin eq "-" || defined $args{'no-clone-handling'}) {
$datasetProperties->{'origin'} = undef;
}
}
my %synced;
foreach my $dataset1Properties (@datasets) {
my $dataset1 = $dataset1Properties->{'name'};
# Collect all transitive dependencies of this dataset.
# A dataset can have two dependencies:
# - the parent dataset
# - the origin (if it is a clone)
my @todo = ($dataset1); # the datasets whose dependencies we still have to collect
my @tosync; # the datasets we have to sync (in the correct order)
my %tosyncSet; # set of synced datasets to check for dependency cycles
while (@todo) {
my $dataset = shift(@todo);
if (exists $synced{$dataset}) {
# We already synced this dataset, thus also all its dependencies => skip
next;
}
if (exists $tosyncSet{$dataset}) {
# We already processed this dataset once during this loop,
# so we do not need to do it again.
# This check is also necessary to break dependency cycles.
#
# NOTE:
# If there is a cycle, multiple syncoid runs might be necessary to replicate all datasets,
# and not all clone relationships will be preserved
# (it seems like huge effort to handle this case properly, and it should be quite rare in practice)
next;
}
unshift @tosync, $dataset;
$tosyncSet{$dataset} = 1;
my ($parent) = $dataset =~ /(.*)\/[^\/]+/;
if (defined $parent) {
# If parent is replicated too, sync it first
if (exists $datasetsByName{$parent}) {
push @todo, $parent;
}
}
if ($found == 0) {
# clone source is not replicated, do a full replication
$origin = undef;
} else {
# clone source is replicated, defer until all non clones are replicated
push @deferred, $datasetProperties;
next;
my $origin = $datasetsByName{$dataset}->{'origin'};
if (defined $origin) {
# If clone source is replicated too, sync it first
my @values = split(/@/, $origin, 2);
my $srcdataset = $values[0];
if (exists $datasetsByName{$srcdataset}) {
push @todo, $srcdataset;
} else {
$datasetsByName{$dataset}->{'origin'} = undef;
}
}
}
$dataset =~ s/\Q$sourcefs\E//;
chomp $dataset;
my $childsourcefs = $sourcefs . $dataset;
my $childtargetfs = $targetfs . $dataset;
syncdataset($sourcehost, $childsourcefs, $targethost, $childtargetfs, $origin);
}
# replicate cloned datasets and if this is the initial run, recreate them on the target
foreach my $datasetProperties(@deferred) {
my $dataset = $datasetProperties->{'name'};
my $origin = $datasetProperties->{'origin'};
$dataset =~ s/\Q$sourcefs\E//;
chomp $dataset;
my $childsourcefs = $sourcefs . $dataset;
my $childtargetfs = $targetfs . $dataset;
syncdataset($sourcehost, $childsourcefs, $targethost, $childtargetfs, $origin);
foreach my $dataset (@tosync) {
my $origin = $datasetsByName{$dataset}->{'origin'};
my $datasetPath = $dataset;
$datasetPath =~ s/\Q$sourcefs\E//;
chomp $datasetPath;
my $childsourcefs = $sourcefs . $datasetPath;
my $childtargetfs = $targetfs . $datasetPath;
syncdataset($sourcehost, $childsourcefs, $targethost, $childtargetfs, $origin);
$synced{$dataset} = 1;
}
}
}
@ -345,12 +381,6 @@ sub syncdataset {
my $sourcefsescaped = escapeshellparam($sourcefs);
my $targetfsescaped = escapeshellparam($targetfs);
# if no rollbacks are allowed, disable forced receive
my $forcedrecv = "-F";
if (defined $args{'no-rollback'}) {
$forcedrecv = "";
}
writelog('DEBUG', "syncing source $sourcefs to target $targetfs.");
my ($sync, $error) = getzfsvalue($sourcehost,$sourcefs,$sourceisroot,'syncoid:sync');
@ -438,7 +468,7 @@ sub syncdataset {
# Don't send the sync snap if it's filtered out by --exclude-snaps or
# --include-snaps
if (!snapisincluded($newsyncsnap)) {
$newsyncsnap = getnewestsnapshot($sourcehost,$sourcefs,$sourceisroot);
$newsyncsnap = getnewestsnapshot(\%snaps);
if ($newsyncsnap eq '') {
writelog('WARN', "CRITICAL: no snapshots exist on source $sourcefs, and you asked for --no-sync-snap.");
if ($exitcode < 1) { $exitcode = 1; }
@ -447,7 +477,7 @@ sub syncdataset {
}
} else {
# we don't want sync snapshots created, so use the newest snapshot we can find.
$newsyncsnap = getnewestsnapshot($sourcehost,$sourcefs,$sourceisroot);
$newsyncsnap = getnewestsnapshot(\%snaps);
if ($newsyncsnap eq '') {
writelog('WARN', "CRITICAL: no snapshots exist on source $sourcefs, and you asked for --no-sync-snap.");
if ($exitcode < 1) { $exitcode = 1; }
@ -575,28 +605,26 @@ sub syncdataset {
my $targetsize = getzfsvalue($targethost,$targetfs,$targetisroot,'-p used');
my $bookmark = 0;
my $bookmarkcreation = 0;
my %bookmark = ();
$matchingsnap = getmatchingsnapshot($sourcefs, $targetfs, \%snaps);
if (! $matchingsnap) {
# no matching snapshots, check for bookmarks as fallback
my %bookmarks = getbookmarks($sourcehost,$sourcefs,$sourceisroot);
# check for matching guid of source bookmark and target snapshot (oldest first)
foreach my $snap ( sort { $snaps{'target'}{$b}{'creation'}<=>$snaps{'target'}{$a}{'creation'} } keys %{ $snaps{'target'} }) {
# check for matching guid of source bookmark and target snapshot (newest first)
foreach my $snap ( sort { sortsnapshots($snaps{'target'}, $b, $a) } keys %{ $snaps{'target'} }) {
my $guid = $snaps{'target'}{$snap}{'guid'};
if (defined $bookmarks{$guid}) {
# found a match
$bookmark = $bookmarks{$guid}{'name'};
$bookmarkcreation = $bookmarks{$guid}{'creation'};
%bookmark = %{ $bookmarks{$guid} };
$matchingsnap = $snap;
last;
}
}
if (! $bookmark) {
if (! %bookmark) {
# force delete is not possible for the root dataset
if ($args{'force-delete'} && index($targetfs, '/') != -1) {
writelog('INFO', "Removing $targetfs because no matching snapshots were found");
@ -669,15 +697,18 @@ sub syncdataset {
my $nextsnapshot = 0;
if ($bookmark) {
my $bookmarkescaped = escapeshellparam($bookmark);
if (%bookmark) {
if (!defined $args{'no-stream'}) {
# if intermediate snapshots are needed we need to find the next oldest snapshot,
# do a replication to it and then replicate as always from oldest to newest,
# because bookmark sends don't support intermediates directly
foreach my $snap ( sort { $snaps{'source'}{$a}{'creation'}<=>$snaps{'source'}{$b}{'creation'} } keys %{ $snaps{'source'} }) {
if ($snaps{'source'}{$snap}{'creation'} >= $bookmarkcreation) {
foreach my $snap ( sort { sortsnapshots($snaps{'source'}, $a, $b) } keys %{ $snaps{'source'} }) {
my $comparisonkey = 'creation';
if (defined $snaps{'source'}{$snap}{'createtxg'} && defined $bookmark{'createtxg'}) {
$comparisonkey = 'createtxg';
}
if ($snaps{'source'}{$snap}{$comparisonkey} >= $bookmark{$comparisonkey}) {
$nextsnapshot = $snap;
last;
}
@ -685,13 +716,13 @@ sub syncdataset {
}
if ($nextsnapshot) {
($exit, $stdout) = syncbookmark($sourcehost, $sourcefs, $targethost, $targetfs, $bookmark, $nextsnapshot);
($exit, $stdout) = syncbookmark($sourcehost, $sourcefs, $targethost, $targetfs, $bookmark{'name'}, $nextsnapshot);
$exit == 0 or do {
if (!$resume && $stdout =~ /\Qcontains partially-complete state\E/) {
writelog('WARN', "resetting partially receive state");
resetreceivestate($targethost,$targetfs,$targetisroot);
(my $ret) = syncbookmark($sourcehost, $sourcefs, $targethost, $targetfs, $bookmark, $nextsnapshot);
(my $ret) = syncbookmark($sourcehost, $sourcefs, $targethost, $targetfs, $bookmark{'name'}, $nextsnapshot);
$ret == 0 or do {
if ($exitcode < 2) { $exitcode = 2; }
return 0;
@ -705,13 +736,13 @@ sub syncdataset {
$matchingsnap = $nextsnapshot;
$matchingsnapescaped = escapeshellparam($matchingsnap);
} else {
($exit, $stdout) = syncbookmark($sourcehost, $sourcefs, $targethost, $targetfs, $bookmark, $newsyncsnap);
($exit, $stdout) = syncbookmark($sourcehost, $sourcefs, $targethost, $targetfs, $bookmark{'name'}, $newsyncsnap);
$exit == 0 or do {
if (!$resume && $stdout =~ /\Qcontains partially-complete state\E/) {
writelog('WARN', "resetting partially receive state");
resetreceivestate($targethost,$targetfs,$targetisroot);
(my $ret) = syncbookmark($sourcehost, $sourcefs, $targethost, $targetfs, $bookmark, $newsyncsnap);
(my $ret) = syncbookmark($sourcehost, $sourcefs, $targethost, $targetfs, $bookmark{'name'}, $newsyncsnap);
$ret == 0 or do {
if ($exitcode < 2) { $exitcode = 2; }
return 0;
@ -726,8 +757,8 @@ sub syncdataset {
# do a normal replication if bookmarks aren't used or if previous
# bookmark replication was only done to the next oldest snapshot
# edge case: skip repilcation if bookmark replication used the latest snapshot
if ((!$bookmark || $nextsnapshot) && !($matchingsnap eq $newsyncsnap)) {
# edge case: skip replication if bookmark replication used the latest snapshot
if ((!%bookmark || $nextsnapshot) && !($matchingsnap eq $newsyncsnap)) {
($exit, $stdout) = syncincremental($sourcehost, $sourcefs, $targethost, $targetfs, $matchingsnap, $newsyncsnap, defined($args{'no-stream'}));
@ -826,16 +857,25 @@ sub syncdataset {
if (defined $args{'create-bookmark'}) {
my $ret = createbookmark($sourcehost, $sourcefs, $newsyncsnap, $newsyncsnap);
$ret == 0 or do {
# fallback: assume naming conflict and try again with guid based suffix
my %existingbookmarks = getbookmarks($sourcehost,$sourcefs,$sourceisroot);
my $guid = $snaps{'source'}{$newsyncsnap}{'guid'};
$guid = substr($guid, 0, 6);
writelog('INFO', "bookmark creation failed, retrying with guid based suffix ($guid)...");
if (defined $existingbookmarks{$guid} && $existingbookmarks{$guid}{'name'} eq $newsyncsnap) {
writelog('INFO', "bookmark already exists, skipping creation");
} else {
# fallback: assume naming conflict and try again with guid based suffix
my $suffix = substr($guid, 0, 6);
my $ret = createbookmark($sourcehost, $sourcefs, $newsyncsnap, "$newsyncsnap$guid");
$ret == 0 or do {
if ($exitcode < 2) { $exitcode = 2; }
return 0;
writelog('INFO', "bookmark creation failed, retrying with guid based suffix ($suffix)...");
my $newsyncsnapsuffix = "$newsyncsnap$suffix";
my $ret = createbookmark($sourcehost, $sourcefs, $newsyncsnap, $newsyncsnapsuffix);
$ret == 0 or do {
if (! defined $existingbookmarks{$guid} || $existingbookmarks{$guid}{'name'} ne $newsyncsnapsuffix) {
if ($exitcode < 2) { $exitcode = 2; }
return 0;
}
}
}
};
}
@ -862,7 +902,7 @@ sub syncdataset {
%snaps = (%sourcesnaps, %targetsnaps);
}
my @to_delete = sort { $snaps{'target'}{$a}{'creation'}<=>$snaps{'target'}{$b}{'creation'} } grep {!exists $snaps{'source'}{$_}} keys %{ $snaps{'target'} };
my @to_delete = sort { sortsnapshots($snaps{'target'}, $a, $b) } grep {!exists $snaps{'source'}{$_}} keys %{ $snaps{'target'} };
while (@to_delete) {
# Create batch of snapshots to remove
my $snaps = join ',', splice(@to_delete, 0, 50);
@ -1389,6 +1429,47 @@ sub checkcommands {
return %avail;
}
sub check_zfs_get_features {
my ($rhost, $mysudocmd, $zfscmd) = @_;
my $host = $rhost ? (split(/\s+/, $rhost))[-1] : "localhost";
return $host_zfs_get_features{$host} if exists $host_zfs_get_features{$host};
writelog('DEBUG', "Checking `zfs get` features on host \"$host\"...");
$host_zfs_get_features{$host} = {
supports_type_filter => 0,
supported_properties => ['guid', 'creation']
};
my $check_t_option_cmd = "$rhost $mysudocmd $zfscmd get -H -t snapshot '' ''";
open my $fh_t, "$check_t_option_cmd 2>&1 |";
my $output_t = <$fh_t>;
close $fh_t;
if ($output_t !~ /^\Qinvalid option\E/) {
$host_zfs_get_features{$host}->{supports_type_filter} = 1;
}
writelog('DEBUG', "Host \"$host\" has `zfs get -t`?: $host_zfs_get_features{$host}->{supports_type_filter}");
my @properties_to_check = ('createtxg');
foreach my $prop (@properties_to_check) {
my $check_prop_cmd = "$rhost $mysudocmd $zfscmd get -H $prop ''";
open my $fh_p, "$check_prop_cmd 2>&1 |";
my $output_p = <$fh_p>;
close $fh_p;
if ($output_p !~ /^\Qbad property list: invalid property\E/) {
push @{$host_zfs_get_features{$host}->{supported_properties}}, $prop;
}
}
writelog('DEBUG', "Host \"$host\" ZFS properties: @{$host_zfs_get_features{$host}->{supported_properties}}");
return $host_zfs_get_features{$host};
}
sub iszfsbusy {
my ($rhost,$fs,$isroot) = @_;
if ($rhost ne '') { $rhost = "$sshcmd $rhost"; }
@ -1399,7 +1480,7 @@ sub iszfsbusy {
close PL;
foreach my $process (@processes) {
if ($process =~ /zfs *(receive|recv)[^\/]*\Q$fs\E\Z/) {
if ($process =~ /zfs *(receive|recv)[^\/]*\s\Q$fs\E\Z/) {
# there's already a zfs receive process for our target filesystem - return true
writelog('DEBUG', "process $process matches target $fs!");
return 1;
@ -1527,9 +1608,22 @@ sub readablebytes {
return $disp;
}
sub sortsnapshots {
my ($snapdata, $left, $right) = @_;
if (defined $snapdata->{$left}{'createtxg'} && defined $snapdata->{$right}{'createtxg'}) {
return $snapdata->{$left}{'createtxg'} <=> $snapdata->{$right}{'createtxg'};
}
if (defined $snapdata->{$left}{'creation'} && defined $snapdata->{$right}{'creation'}) {
return $snapdata->{$left}{'creation'} <=> $snapdata->{$right}{'creation'};
}
return 0;
}
sub getoldestsnapshot {
my $snaps = shift;
foreach my $snap ( sort { $snaps{'source'}{$a}{'creation'}<=>$snaps{'source'}{$b}{'creation'} } keys %{ $snaps{'source'} }) {
foreach my $snap (sort { sortsnapshots($snaps{'source'}, $a, $b) } keys %{ $snaps{'source'} }) {
# return on first snap found - it's the oldest
return $snap;
}
@ -1543,7 +1637,7 @@ sub getoldestsnapshot {
sub getnewestsnapshot {
my $snaps = shift;
foreach my $snap ( sort { $snaps{'source'}{$b}{'creation'}<=>$snaps{'source'}{$a}{'creation'} } keys %{ $snaps{'source'} }) {
foreach my $snap (sort { sortsnapshots($snaps{'source'}, $b, $a) } keys %{ $snaps{'source'} }) {
# return on first snap found - it's the newest
writelog('DEBUG', "NEWEST SNAPSHOT: $snap");
return $snap;
@@ -1722,7 +1816,7 @@ sub pruneoldsyncsnaps {
sub getmatchingsnapshot {
my ($sourcefs, $targetfs, $snaps) = @_;
foreach my $snap ( sort { $snaps{'source'}{$b}{'creation'}<=>$snaps{'source'}{$a}{'creation'} } keys %{ $snaps{'source'} }) {
foreach my $snap ( sort { sortsnapshots($snaps{'source'}, $b, $a) } keys %{ $snaps{'source'} }) {
if (defined $snaps{'target'}{$snap}) {
if ($snaps{'source'}{$snap}{'guid'} == $snaps{'target'}{$snap}{'guid'}) {
return $snap;
@@ -1857,21 +1951,30 @@ sub dumphash() {
writelog('INFO', Dumper($hash));
}
sub getsnaps() {
sub getsnaps {
my ($type,$rhost,$fs,$isroot,%snaps) = @_;
my $mysudocmd;
my $fsescaped = escapeshellparam($fs);
if ($isroot) { $mysudocmd = ''; } else { $mysudocmd = $sudocmd; }
my $rhostOriginal = $rhost;
if ($rhost ne '') {
$rhost = "$sshcmd $rhost";
# double escaping needed
$fsescaped = escapeshellparam($fsescaped);
}
my $getsnapcmd = "$rhost $mysudocmd $zfscmd get -Hpd 1 -t snapshot guid,creation $fsescaped";
my $host_features = check_zfs_get_features($rhost, $mysudocmd, $zfscmd);
my @properties = @{$host_features->{supported_properties}};
my $type_filter = "";
if ($host_features->{supports_type_filter}) {
$type_filter = "-t snapshot";
} else {
push @properties, 'type';
}
my $properties_string = join(',', @properties);
my $getsnapcmd = "$rhost $mysudocmd $zfscmd get -Hpd 1 $type_filter $properties_string $fsescaped";
if ($debug) {
$getsnapcmd = "$getsnapcmd |";
writelog('DEBUG', "getting list of snapshots on $fs using $getsnapcmd...");
@@ -1880,142 +1983,50 @@ sub getsnaps() {
}
open FH, $getsnapcmd;
my @rawsnaps = <FH>;
close FH or do {
# fallback (solaris for example doesn't support the -t option)
return getsnapsfallback($type,$rhostOriginal,$fs,$isroot,%snaps);
};
# this is a little obnoxious. get guid,creation returns guid,creation on two separate lines
# as though each were an entirely separate get command.
my %creationtimes=();
foreach my $line (@rawsnaps) {
$line =~ /\Q$fs\E\@(\S*)/;
my $snapname = $1;
if (!snapisincluded($snapname)) { next; }
# only import snap guids from the specified filesystem
if ($line =~ /\Q$fs\E\@.*\tguid/) {
chomp $line;
my $guid = $line;
$guid =~ s/^.*\tguid\t*(\d*).*/$1/;
my $snap = $line;
$snap =~ s/^.*\@(.*)\tguid.*$/$1/;
$snaps{$type}{$snap}{'guid'}=$guid;
}
# only import snap creations from the specified filesystem
elsif ($line =~ /\Q$fs\E\@.*\tcreation/) {
chomp $line;
my $creation = $line;
$creation =~ s/^.*\tcreation\t*(\d*).*/$1/;
my $snap = $line;
$snap =~ s/^.*\@(.*)\tcreation.*$/$1/;
# the creation timestamp is only accurate to the second, but
# snapshots within the same second are quite likely. The list command
# has ordered output, so we append a three-digit running number
# to the creation timestamp to keep snapshots with the same
# creation timestamp ordered correctly
my $counter = 0;
my $creationsuffix;
while ($counter < 999) {
$creationsuffix = sprintf("%s%03d", $creation, $counter);
if (!defined $creationtimes{$creationsuffix}) {
$creationtimes{$creationsuffix} = 1;
last;
}
$counter += 1;
}
$snaps{$type}{$snap}{'creation'}=$creationsuffix;
}
}
return %snaps;
}
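Both snapshot-listing paths disambiguate same-second creation timestamps the same way: since `zfs get` emits snapshots in order, each timestamp gets a three-digit running number appended, so same-second snapshots sort correctly. A Python sketch of that suffixing (function name is illustrative):

```python
def disambiguate_creation(creation_times):
    """Append a three-digit running number to second-resolution timestamps.

    `zfs get` reports creation with one-second granularity but lists
    snapshots in order, so suffixing keeps same-second snapshots in
    their original order: "100" -> "100000", "100" -> "100001", ...
    """
    seen = set()
    out = []
    for creation in creation_times:
        for counter in range(1000):
            suffixed = "%s%03d" % (creation, counter)
            if suffixed not in seen:
                seen.add(suffixed)
                out.append(suffixed)
                break
    return out
```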
sub getsnapsfallback() {
# fallback (solaris for example doesn't support the -t option)
my ($type,$rhost,$fs,$isroot,%snaps) = @_;
my $mysudocmd;
my $fsescaped = escapeshellparam($fs);
if ($isroot) { $mysudocmd = ''; } else { $mysudocmd = $sudocmd; }
if ($rhost ne '') {
$rhost = "$sshcmd $rhost";
# double escaping needed
$fsescaped = escapeshellparam($fsescaped);
}
my $getsnapcmd = "$rhost $mysudocmd $zfscmd get -Hpd 1 type,guid,creation $fsescaped |";
writelog('WARN', "snapshot listing failed, trying fallback command");
writelog('DEBUG', "FALLBACK, getting list of snapshots on $fs using $getsnapcmd...");
open FH, $getsnapcmd;
my @rawsnaps = <FH>;
close FH or die "CRITICAL ERROR: snapshots couldn't be listed for $fs (exit code $?)";
my %creationtimes=();
my %snap_data;
my %creationtimes;
my $state = 0;
foreach my $line (@rawsnaps) {
if ($state < 0) {
$state++;
next;
}
for my $line (@rawsnaps) {
chomp $line;
my ($dataset, $property, $value) = split /\t/, $line;
next unless defined $value;
if ($state eq 0) {
if ($line !~ /\Q$fs\E\@.*\ttype\s*snapshot/) {
# skip non-snapshot objects
$state = -2;
next;
}
} elsif ($state eq 1) {
if ($line !~ /\Q$fs\E\@.*\tguid/) {
die "CRITICAL ERROR: snapshots couldn't be listed for $fs (guid parser error)";
}
my (undef, $snap) = split /@/, $dataset;
next unless length $snap;
chomp $line;
my $guid = $line;
$guid =~ s/^.*\tguid\t*(\d*).*/$1/;
my $snap = $line;
$snap =~ s/^.*\@(.*)\tguid.*$/$1/;
if (!snapisincluded($snap)) { next; }
$snaps{$type}{$snap}{'guid'}=$guid;
} elsif ($state eq 2) {
if ($line !~ /\Q$fs\E\@.*\tcreation/) {
die "CRITICAL ERROR: snapshots couldn't be listed for $fs (creation parser error)";
}
if (!snapisincluded($snap)) { next; }
$snap_data{$snap}{$property} = $value;
chomp $line;
my $creation = $line;
$creation =~ s/^.*\tcreation\t*(\d*).*/$1/;
my $snap = $line;
$snap =~ s/^.*\@(.*)\tcreation.*$/$1/;
if (!snapisincluded($snap)) { next; }
# the creation timestamp is only accurate to the second, but
# snapshots within the same second are quite likely. The list command
# has ordered output, so we append a three-digit running number
# to the creation timestamp to keep snapshots with the same
# creation timestamp ordered correctly
if ($property eq 'creation') {
my $counter = 0;
my $creationsuffix;
while ($counter < 999) {
$creationsuffix = sprintf("%s%03d", $creation, $counter);
$creationsuffix = sprintf("%s%03d", $value, $counter);
if (!defined $creationtimes{$creationsuffix}) {
$creationtimes{$creationsuffix} = 1;
last;
}
$counter += 1;
}
$snaps{$type}{$snap}{'creation'}=$creationsuffix;
$state = -1;
$snap_data{$snap}{'creation'} = $creationsuffix;
}
}
$state++;
for my $snap (keys %snap_data) {
if (length $type_filter || $snap_data{$snap}{'type'} eq 'snapshot') {
foreach my $prop (@{$host_features->{supported_properties}}) {
if (exists $snap_data{$snap}{$prop}) {
$snaps{$type}{$snap}{$prop} = $snap_data{$snap}{$prop};
}
}
}
}
return %snaps;
@@ -2033,8 +2044,12 @@ sub getbookmarks() {
$fsescaped = escapeshellparam($fsescaped);
}
my $host_features = check_zfs_get_features($rhost, $mysudocmd, $zfscmd);
my @properties = @{$host_features->{supported_properties}};
my $properties_string = join(',', @properties);
my $error = 0;
my $getbookmarkcmd = "$rhost $mysudocmd $zfscmd get -Hpd 1 -t bookmark guid,creation $fsescaped 2>&1 |";
my $getbookmarkcmd = "$rhost $mysudocmd $zfscmd get -Hpd 1 -t bookmark $properties_string $fsescaped 2>&1 |";
writelog('DEBUG', "getting list of bookmarks on $fs using $getbookmarkcmd...");
open FH, $getbookmarkcmd;
my @rawbookmarks = <FH>;
@@ -2049,48 +2064,46 @@ sub getbookmarks() {
die "CRITICAL ERROR: bookmarks couldn't be listed for $fs (exit code $?)";
}
# this is a little obnoxious. get guid,creation returns guid,creation on two separate lines
# as though each were an entirely separate get command.
my %bookmark_data;
my %creationtimes;
my $lastguid;
my %creationtimes=();
for my $line (@rawbookmarks) {
chomp $line;
my ($dataset, $property, $value) = split /\t/, $line;
next unless defined $value;
foreach my $line (@rawbookmarks) {
# only import bookmark guids, creation from the specified filesystem
if ($line =~ /\Q$fs\E\#.*\tguid/) {
chomp $line;
$lastguid = $line;
$lastguid =~ s/^.*\tguid\t*(\d*).*/$1/;
my $bookmark = $line;
$bookmark =~ s/^.*\#(.*)\tguid.*$/$1/;
$bookmarks{$lastguid}{'name'}=$bookmark;
} elsif ($line =~ /\Q$fs\E\#.*\tcreation/) {
chomp $line;
my $creation = $line;
$creation =~ s/^.*\tcreation\t*(\d*).*/$1/;
my $bookmark = $line;
$bookmark =~ s/^.*\#(.*)\tcreation.*$/$1/;
my (undef, $bookmark) = split /#/, $dataset;
next unless length $bookmark;
$bookmark_data{$bookmark}{$property} = $value;
# the creation timestamp is only accurate to the second, but
# bookmarks in the same second are possible. The list command
# has ordered output, so we append a three-digit running number
# to the creation timestamp to keep bookmarks with the same
# creation timestamp ordered correctly
if ($property eq 'creation') {
my $counter = 0;
my $creationsuffix;
while ($counter < 999) {
$creationsuffix = sprintf("%s%03d", $creation, $counter);
$creationsuffix = sprintf("%s%03d", $value, $counter);
if (!defined $creationtimes{$creationsuffix}) {
$creationtimes{$creationsuffix} = 1;
last;
}
$counter += 1;
}
$bookmarks{$lastguid}{'creation'}=$creationsuffix;
$bookmark_data{$bookmark}{'creation'} = $creationsuffix;
}
}
for my $bookmark (keys %bookmark_data) {
my $guid = $bookmark_data{$bookmark}{'guid'};
$bookmarks{$guid}{'name'} = $bookmark;
$bookmarks{$guid}{'creation'} = $bookmark_data{$bookmark}{'creation'};
$bookmarks{$guid}{'createtxg'} = $bookmark_data{$bookmark}{'createtxg'};
}
return %bookmarks;
}
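The rewritten parsers read the tab-separated `zfs get -Hp` output into a per-object property map instead of regex-matching each line, which is also what makes a bookmark literally named `0` survive (a length check, not a truthiness check, decides whether a name was found). A Python sketch of that grouping step (hypothetical function name):

```python
def parse_zfs_get(raw_lines, sep="#"):
    """Group `zfs get -Hp` output lines by the object name after `sep`.

    Each line is "<dataset><sep><name>\t<property>\t<value>\t<source>";
    lines without enough fields or without the separator are skipped,
    mirroring the tolerant parsing above. A name of "0" is kept.
    """
    data = {}
    for line in raw_lines:
        parts = line.rstrip("\n").split("\t")
        if len(parts) < 3:
            continue
        dataset, prop, value = parts[0], parts[1], parts[2]
        _, _, name = dataset.partition(sep)
        if not name:  # separator absent: not a bookmark/snapshot line
            continue
        data.setdefault(name, {})[prop] = value
    return data
```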
@@ -2194,7 +2207,7 @@ sub getdate {
$date{'mday'} = sprintf ("%02u", $mday);
$date{'mon'} = sprintf ("%02u", ($mon + 1));
$date{'tzoffset'} = sprintf ("GMT%s%02d:%02u", $sign, $hours, $minutes);
$date{'stamp'} = "$date{'year'}-$date{'mon'}-$date{'mday'}:$date{'hour'}:$date{'min'}:$date{'sec'}-$date{'tzoffset'}";
$date{'stamp'} = "$date{'year'}-$date{'mon'}-$date{'mday'}_$date{'hour'}:$date{'min'}:$date{'sec'}-$date{'tzoffset'}";
return %date;
}
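The stamp change above swaps the separator between the date and time parts from `:` to `_`. A Python sketch producing the new format (assuming, as the surrounding code suggests, a `GMT±HH:MM` offset suffix; the sign convention for positive offsets is an assumption here):

```python
from datetime import datetime, timezone, timedelta

def syncoid_stamp(dt: datetime) -> str:
    """Build the datestamp in the new format:
    YYYY-MM-DD_HH:MM:SS-GMT(+/-)HH:MM -- underscore between date and
    time, where the old format used a colon."""
    offset = dt.utcoffset() or timedelta(0)
    total = int(offset.total_seconds())
    sign = "-" if total < 0 else "+"  # assumed sign convention
    hours, rem = divmod(abs(total), 3600)
    tz = "GMT%s%02d:%02d" % (sign, hours, rem // 60)
    return dt.strftime("%Y-%m-%d_%H:%M:%S") + "-" + tz
```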


@@ -23,7 +23,7 @@ function cleanUp {
# export pool in any case
trap cleanUp EXIT
zfs create -o recordsize=16k -o xattr=on -o mountpoint=none -o primarycache=none "${POOL_NAME}"/src
zfs create -o recordsize=16k -o xattr=sa -o mountpoint=none -o primarycache=none "${POOL_NAME}"/src
zfs create -V 100M -o volblocksize=8k "${POOL_NAME}"/src/zvol8
zfs create -V 100M -o volblocksize=16k -o primarycache=all "${POOL_NAME}"/src/zvol16
zfs create -V 100M -o volblocksize=64k "${POOL_NAME}"/src/zvol64
@@ -33,7 +33,6 @@ zfs set 'net.openoid:var-name'='with whitespace and !"§$%&/()= symbols' "${POOL
../../../syncoid --preserve-properties --recursive --debug --compress=none "${POOL_NAME}"/src "${POOL_NAME}"/dst
if [ "$(zfs get -H -o value -t filesystem recordsize "${POOL_NAME}"/dst)" != "16K" ]; then
exit 1
fi
@@ -42,7 +41,7 @@ if [ "$(zfs get -H -o value -t filesystem mountpoint "${POOL_NAME}"/dst)" != "no
exit 1
fi
if [ "$(zfs get -H -o value -t filesystem xattr "${POOL_NAME}"/dst)" != "on" ]; then
if [ "$(zfs get -H -o value -t filesystem xattr "${POOL_NAME}"/dst)" != "sa" ]; then
exit 1
fi


@@ -0,0 +1,50 @@
#!/bin/bash
# test verifying snapshots with out-of-order snapshot creation datetimes
set -x
set -e
. ../../common/lib.sh
if [ "$INVASIVE_TESTS" != "1" ]; then
exit 130
fi
POOL_IMAGE="/tmp/syncoid-test-11.zpool"
POOL_SIZE="64M"
POOL_NAME="syncoid-test-11"
truncate -s "${POOL_SIZE}" "${POOL_IMAGE}"
zpool create -m none -f "${POOL_NAME}" "${POOL_IMAGE}"
function cleanUp {
zpool export "${POOL_NAME}"
rm -f "${POOL_IMAGE}"
}
# export pool and remove the image in any case
trap cleanUp EXIT
zfs create "${POOL_NAME}"/before
zfs snapshot "${POOL_NAME}"/before@this-snapshot-should-make-it-into-the-after-dataset
disableTimeSync
setdate 1155533696
zfs snapshot "${POOL_NAME}"/before@oldest-snapshot
zfs snapshot "${POOL_NAME}"/before@another-snapshot-does-not-matter
../../../syncoid --sendoptions="Lec" "${POOL_NAME}"/before "${POOL_NAME}"/after
# verify
saveSnapshotList "${POOL_NAME}" "snapshot-list.txt"
grep "${POOL_NAME}/before@this-snapshot-should-make-it-into-the-after-dataset" "snapshot-list.txt" || exit $?
grep "${POOL_NAME}/after@this-snapshot-should-make-it-into-the-after-dataset" "snapshot-list.txt" || exit $?
grep "${POOL_NAME}/before@oldest-snapshot" "snapshot-list.txt" || exit $?
grep "${POOL_NAME}/after@oldest-snapshot" "snapshot-list.txt" || exit $?
grep "${POOL_NAME}/before@another-snapshot-does-not-matter" "snapshot-list.txt" || exit $?
grep "${POOL_NAME}/after@another-snapshot-does-not-matter" "snapshot-list.txt" || exit $?
exit 0


@@ -0,0 +1,48 @@
#!/bin/bash
# test replication with fallback to bookmarks and special named snapshot/bookmark '0'
set -x
set -e
. ../../common/lib.sh
POOL_IMAGE="/tmp/syncoid-test-013.zpool"
POOL_SIZE="200M"
POOL_NAME="syncoid-test-013"
TARGET_CHECKSUM="b927125d2113c8da1a7f0181516e8f57fee5d268bdd5386d6ff7ddf31d6d6a35 -"
truncate -s "${POOL_SIZE}" "${POOL_IMAGE}"
zpool create -m none -f "${POOL_NAME}" "${POOL_IMAGE}"
function cleanUp {
zpool export "${POOL_NAME}"
}
# export pool in any case
trap cleanUp EXIT
zfs create "${POOL_NAME}"/src
zfs snapshot "${POOL_NAME}"/src@0
# initial replication
../../../syncoid --no-sync-snap --create-bookmark --debug --compress=none "${POOL_NAME}"/src "${POOL_NAME}"/dst
# destroy last common snapshot on source
zfs destroy "${POOL_NAME}"/src@0
zfs snapshot "${POOL_NAME}"/src@1
# replicate which should fallback to bookmarks
../../../syncoid --no-sync-snap --create-bookmark --debug --compress=none "${POOL_NAME}"/src "${POOL_NAME}"/dst
# verify
output=$(zfs list -t snapshot -r -H -o name "${POOL_NAME}"; zfs list -t bookmark -r -H -o name "${POOL_NAME}")
checksum=$(echo "${output}" | grep -v syncoid_ | shasum -a 256)
if [ "${checksum}" != "${TARGET_CHECKSUM}" ]; then
exit 1
fi
exit 0


@@ -0,0 +1,93 @@
#!/bin/bash
# test if guid of existing bookmark matches new guid
set -x
set -e
. ../../common/lib.sh
POOL_IMAGE="/tmp/syncoid-test-014.zpool"
MOUNT_TARGET="/tmp/syncoid-test-014.mount"
POOL_SIZE="1000M"
POOL_NAME="syncoid-test-014"
truncate -s "${POOL_SIZE}" "${POOL_IMAGE}"
zpool create -m "${MOUNT_TARGET}" -f "${POOL_NAME}" "${POOL_IMAGE}"
function cleanUp {
zpool export "${POOL_NAME}"
}
function getGuid {
zfs get -H guid "$1" | awk '{print $3}'
}
# export pool in any case
trap cleanUp EXIT
zfs create "${POOL_NAME}/a"
zfs snapshot "${POOL_NAME}/a@s0"
# This fully replicates a to b
../../../syncoid --debug --no-sync-snap --no-rollback --create-bookmark "${POOL_NAME}"/a "${POOL_NAME}"/b
# This fully replicates a to c
../../../syncoid --debug --no-sync-snap --no-rollback --create-bookmark "${POOL_NAME}"/a "${POOL_NAME}"/c
bookmark_guid=$(getGuid "${POOL_NAME}/a#s0")
snap_a_guid=$(getGuid "${POOL_NAME}/a@s0")
snap_b_guid=$(getGuid "${POOL_NAME}/b@s0")
snap_c_guid=$(getGuid "${POOL_NAME}/c@s0")
# Bookmark guid should equal guid of all snapshots
if [ "${bookmark_guid}" != "${snap_a_guid}" ] || \
[ "${bookmark_guid}" != "${snap_b_guid}" ] || \
[ "${bookmark_guid}" != "${snap_c_guid}" ]; then
exit 1
fi
bookmark_suffix="${bookmark_guid:0:6}"
fallback_bookmark="${POOL_NAME}/a#s0${bookmark_suffix}"
# Fallback bookmark should not exist
if zfs get guid "${fallback_bookmark}"; then
exit 1
fi
zfs snapshot "${POOL_NAME}/a@s1"
# Create bookmark so syncoid is forced to create fallback bookmark
zfs bookmark "${POOL_NAME}/a@s0" "${POOL_NAME}/a#s1"
# This incrementally replicates from a@s0 to a@s1 and should create a
# bookmark with fallback suffix
../../../syncoid --debug --no-sync-snap --no-rollback --create-bookmark "${POOL_NAME}"/a "${POOL_NAME}"/b
snap_guid=$(getGuid "${POOL_NAME}/a@s1")
bookmark_suffix="${snap_guid:0:6}"
fallback_bookmark="${POOL_NAME}/a#s1${bookmark_suffix}"
# Fallback bookmark guid should equal guid of snapshot
if [ "$(getGuid "${fallback_bookmark}")" != "${snap_guid}" ]; then
exit 1
fi
zfs snapshot "${POOL_NAME}/a@s2"
snap_guid=$(getGuid "${POOL_NAME}/a@s2")
bookmark_suffix="${snap_guid:0:6}"
fallback_bookmark="${POOL_NAME}/a#s2${bookmark_suffix}"
# Create bookmark and fallback bookmark so syncoid should fail
zfs bookmark "${POOL_NAME}/a@s0" "${POOL_NAME}/a#s2"
zfs bookmark "${POOL_NAME}/a@s0" "${fallback_bookmark}"
# This incrementally replicates from a@s1 to a@s2 and should fail to create a
# bookmark with fallback suffix
if ../../../syncoid --debug --no-sync-snap --no-rollback --create-bookmark "${POOL_NAME}"/a "${POOL_NAME}"/b; then
exit 1
fi
exit 0
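As the test above shows, when the plain bookmark name is already taken, syncoid falls back to the snapshot name suffixed with the first six characters of the snapshot's guid. A Python sketch of that naming rule (function name is illustrative):

```python
def fallback_bookmark_name(dataset: str, snap: str, guid: str) -> str:
    """Fallback bookmark name: snapshot name plus the first six
    characters of its guid, as exercised by the test above."""
    return "%s#%s%s" % (dataset, snap, guid[:6])
```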


@@ -2,6 +2,8 @@
# runs all the available tests
# set INVASIVE_TESTS=1 to also run invasive tests which manipulate the system time
for test in */; do
if [ ! -x "${test}/run.sh" ]; then
continue