diff --git a/CHANGELIST b/CHANGELIST
index ad3378c..4e80d00 100644
--- a/CHANGELIST
+++ b/CHANGELIST
@@ -1,16 +1,49 @@
+1.4.7 reverted Perl shebangs to #!/usr/bin/perl - sorry FreeBSD folks, but with the shebangs set to /usr/bin/env perl,
+      bare calls to syncoid or sanoid (without explicitly invoking Perl) don't work on EITHER of our systems. I'm not
+      OK with that; this is an OS localization issue that can either be addressed with BSD-specific packaging, or that
+      you can address individually by editing the shebangs on your own systems OR by a one-time
+      ln -s /usr/local/bin/perl /usr/bin/perl, which fixes the issue for this particular script AND all other
+      Perl scripts developed on non-BSD systems.
+
+ also temporarily dyked out the set readonly functionality in syncoid - it was causing more problems than it prevented, and
+ using the -F argument with receive prevents incautious writes (including just cd'ing into mounted datasets, if atimes are on)
+ from interrupting syncoid runs anyway.
+
+1.4.6c merged @gusson's pull request to add -sshport argument
+
+1.4.6b updated default cipherlist for syncoid to
+ chacha20-poly1305@openssh.com,arcfour - arcfour isn't supported on
+ newer SSH (in Ubuntu Xenial and FreeBSD), chacha20 isn't supported on
+ some older SSH versions (Ubuntu Precise, I think?)
+
+1.4.6a due to a bug in ZFS on Linux which frequently causes `zfs set readonly` to return errors,
+ changed ==0 or die in setzfsvalue() to ==0 or [complain] - it's not worth causing replication
+ to fail while this ZFS on Linux bug exists.
+
+1.4.6 added a mollyguard to syncoid to help newbies who try to zfs create a new target dataset
+ before doing an initial replication, instead of letting the replication itself create
+ the target.
+
+ added "==0 or die" to all system() calls in sanoid and syncoid that didn't already have them.
+
+1.4.5 altered shebang to '#!/usr/bin/env perl' for enhanced FreeBSD compatibility
+
+1.4.4 merged pull requests from jjlawren for OmniOS compatibility; added --configdir=/path/to/configs CLI option to sanoid at jjlawren's request, presumably for the same
+
1.4.3 added SSH persistence to syncoid - using socket speeds up SSH overhead 300%! =)
one extra commit to get rid of the "Exit request sent." SSH noise at the end.
-1.4.2 removed -r flag for zfs destroy of pruned snapshots in sanoid, which unintentionally caused same-name child snapshots to be deleted - thank you Lenz Weber!
+1.4.2 removed -r flag for zfs destroy of pruned snapshots in sanoid, which unintentionally caused same-name
+ child snapshots to be deleted - thank you Lenz Weber!
1.4.1 updated check_zpool() in sanoid to parse zpool list properly both pre- and post- ZoL v0.6.4
-1.4.0 added findoid tool - find and list all versions of a given file in all available ZFS snapshots. use: findoid /path/to/file
+1.4.0 added findoid tool - find and list all versions of a given file in all available ZFS snapshots.
+ use: findoid /path/to/file
1.3.1 whoops - prevent process_children_only from getting set from blank value in defaults
-1.3.0 changed monitor_children_only to process_children_only. which keeps sanoid from messing around with empty parent datasets at all.
- also more thoroughly documented features in default config files.
+1.3.0 changed monitor_children_only to process_children_only, which keeps sanoid from messing around with
+      empty parent datasets at all. also more thoroughly documented features in default config files.
1.2.0 added monitor_children_only parameter to sanoid.conf for use with recursive definitions - in cases where container dataset is kept empty
diff --git a/FREEBSD.readme b/FREEBSD.readme
new file mode 100644
index 0000000..d2d7889
--- /dev/null
+++ b/FREEBSD.readme
@@ -0,0 +1,13 @@
+FreeBSD users will need to change the Perl shebangs at the top of the executables from #!/usr/bin/perl
+to #!/usr/local/bin/perl in most cases.
+
+Sorry folks, but if I set this with #!/usr/bin/env perl as suggested, then nothing works properly
+from a typical cron environment on EITHER operating system, Linux or BSD. I'm mostly using Linux
+systems, so I get to set the shebang for my use and give you folks a FREEBSD readme rather than
+the other way around. =)
+
+If you don't want to have to change the shebangs, your other option is to drop a symlink on your system:
+
+root@bsd:~# ln -s /usr/local/bin/perl /usr/bin/perl
+
+After putting this symlink in place, ANY perl script shebanged for Linux will work on your system too.
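
The one-time fix above can be rehearsed safely before touching the real /usr/bin. A sketch assuming nothing about your system: it builds a scratch prefix with a stub interpreter, then creates the same symlink shape as the root command shown above.

```shell
# Rehearse the symlink fix in a scratch prefix (the "perl" here is a stub,
# not a real interpreter, so this is safe to run as any user).
prefix=$(mktemp -d)
mkdir -p "$prefix/usr/local/bin" "$prefix/usr/bin"
printf '#!/bin/sh\necho perl-stub\n' > "$prefix/usr/local/bin/perl"
chmod +x "$prefix/usr/local/bin/perl"
# same shape as the real fix: ln -s /usr/local/bin/perl /usr/bin/perl
ln -s "$prefix/usr/local/bin/perl" "$prefix/usr/bin/perl"
"$prefix/usr/bin/perl"    # resolves through the link - prints "perl-stub"
```

Once satisfied, run the real `ln -s /usr/local/bin/perl /usr/bin/perl` as root.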
diff --git a/INSTALL b/INSTALL
new file mode 100644
index 0000000..149edc3
--- /dev/null
+++ b/INSTALL
@@ -0,0 +1,30 @@
+SYNCOID
+-------
+Syncoid depends on ssh, pv, gzip, lzop, and mbuffer. It can run with reduced
+functionality in the absence of any or all of the above. SSH is only required
+for remote synchronization. As of v1.4.6b the default SSH cipher list is
+chacha20-poly1305@openssh.com,arcfour; syncoid runs will fail if neither
+cipher is available on both ends of the transport.
+
+On Ubuntu: apt install pv lzop mbuffer
+On FreeBSD: pkg install pv lzop
+
+FreeBSD notes: mbuffer is not currently recommended due to oddities in
+ FreeBSD's local implementation. Internal network buffering
+ capability is on the roadmap soon to remove mbuffer dependency
+ anyway. FreeBSD places pv and lzop in /usr/local/bin instead
+ of /usr/bin ; syncoid currently does not check path.
+
+ Simplest path workaround is symlinks, eg:
+ root@bsd:~# ln -s /usr/local/bin/lzop /usr/bin/lzop
+
+
+SANOID
+------
+Sanoid depends on the Perl module Config::IniFiles and will not operate
+without it. Config::IniFiles may be installed from CPAN, though the project
+strongly recommends using your distribution's repositories instead.
+
+On Ubuntu: apt install libconfig-inifiles-perl
+On FreeBSD: pkg install p5-Config-IniFiles
+
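
Before a first run it can help to confirm which of the helpers above are present; this sketch only uses the command names listed in this INSTALL file.

```shell
# Report each optional syncoid helper; a missing one only reduces
# functionality (ssh is required solely for remote synchronization).
for cmd in ssh pv gzip lzop mbuffer; do
  if command -v "$cmd" >/dev/null 2>&1; then
    echo "$cmd: ok"
  else
    echo "$cmd: missing (reduced functionality)"
  fi
done
```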
diff --git a/README.md b/README.md
index 218f121..a91146c 100644
--- a/README.md
+++ b/README.md
@@ -1,7 +1,9 @@

======
-Sanoid is a policy-driven snapshot management tool for ZFS filesystems. When combined with the Linux KVM hypervisor, you can use it to make your systems functionally immortal.
+
Sanoid is a policy-driven snapshot management tool for ZFS filesystems. When combined with the Linux KVM hypervisor, you can use it to make your systems functionally immortal.
+
+
(Real time demo: rolling back a full-scale cryptomalware infection in seconds!)
More prosaically, you can use Sanoid to create, automatically thin, and monitor snapshots and pool health from a single eminently human-readable TOML config file at /etc/sanoid/sanoid.conf. (Sanoid also requires a "defaults" file located at /etc/sanoid/sanoid.defaults.conf, which is not user-editable.) A typical Sanoid system would have a single cron job:
diff --git a/VERSION b/VERSION
index 428b770..be05bba 100644
--- a/VERSION
+++ b/VERSION
@@ -1 +1 @@
-1.4.3
+1.4.7
diff --git a/findoid b/findoid
index 98d4fb7..d98268d 100755
--- a/findoid
+++ b/findoid
@@ -11,7 +11,7 @@ use warnings;
my $zfs = '/sbin/zfs';
my %args = getargs(@ARGV);
-my $progversion = '1.4.3';
+my $progversion = '1.4.7';
if ($args{'version'}) { print "$progversion\n"; exit 0; }
diff --git a/sanoid b/sanoid
index 1c43e47..12806e0 100755
--- a/sanoid
+++ b/sanoid
@@ -4,7 +4,7 @@
# from http://www.gnu.org/licenses/gpl-3.0.html on 2014-11-17. A copy should also be available in this
# project's Git repository at https://github.com/jimsalterjrs/sanoid/blob/master/LICENSE.
-my $version = '1.4.3';
+my $version = '1.4.7';
use strict;
use Config::IniFiles; # read samba-style conf file
@@ -12,15 +12,16 @@ use File::Path; # for rmtree command in use_prune
use Data::Dumper; # debugging - print contents of hash
use Time::Local; # to parse dates in reverse
+# parse CLI arguments
+my %args = getargs(@ARGV);
+
my $pscmd = '/bin/ps';
my $zfs = '/sbin/zfs';
-my $conf_file = '/etc/sanoid/sanoid.conf';
-my $default_conf_file = '/etc/sanoid/sanoid.defaults.conf';
-
-# parse CLI arguments
-my %args = getargs(@ARGV);
+if ($args{'configdir'} eq '') { $args{'configdir'} = '/etc/sanoid'; }
+my $conf_file = "$args{'configdir'}/sanoid.conf";
+my $default_conf_file = "$args{'configdir'}/sanoid.defaults.conf";
# parse config file
my %config = init($conf_file,$default_conf_file);
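
The --configdir handling added above is a plain empty-value fallback; in shell terms it behaves like this sketch:

```shell
# Equivalent of the Perl above: an unset/empty --configdir means /etc/sanoid.
configdir=""                                   # as if --configdir was omitted
[ -n "$configdir" ] || configdir=/etc/sanoid
conf_file="$configdir/sanoid.conf"
default_conf_file="$configdir/sanoid.defaults.conf"
echo "$conf_file"                              # prints /etc/sanoid/sanoid.conf
```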
@@ -331,7 +332,8 @@ sub take_snapshots {
foreach my $snap ( @newsnaps ) {
if ($args{'verbose'}) { print "taking snapshot $snap\n"; }
if (!$args{'readonly'}) {
- system($zfs, "snapshot", "$snap");
+ system($zfs, "snapshot", "$snap") == 0
+ or die "CRITICAL ERROR: $zfs snapshot $snap failed, $?";
# make sure we don't end up with multiple snapshots with the same ctime
sleep 1;
}
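
The guard added in this hunk relies on Perl's convention that system() returns the child's wait status, so `== 0` means success and $? carries the raw status (the exit code shifted into the high byte). A shell analog of the same fail-fast pattern, with a stub standing in for `zfs snapshot`:

```shell
# Fail fast when the snapshot command reports a nonzero status,
# instead of continuing as if the snapshot existed.
take_snapshot() { false; }   # stub for "zfs snapshot" that always fails
if take_snapshot; then
  echo "snapshot ok"
else
  echo "CRITICAL ERROR: snapshot failed, status $?"
fi
```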
@@ -928,7 +930,7 @@ sub checklock {
return 2;
}
- open PL, "$pscmd p $lockpid -o args= |";
+ open PL, "$pscmd -p $lockpid -o args= |";
    my @processlist = <PL>;
close PL;
@@ -980,7 +982,7 @@ sub writelock {
my $pid = $$;
- open PL, "$pscmd p $$ -o args= |";
+ open PL, "$pscmd -p $$ -o args= |";
    my @processlist = <PL>;
close PL;
@@ -998,15 +1000,15 @@ sub iszfsbusy {
# return true if busy (currently being sent or received), return false if not.
my $fs = shift;
- # if (args{'debug'}) { print "DEBUG: checking to see if $fs on is already in zfs receive using $pscmd axo args= ...\n"; }
+ # if (args{'debug'}) { print "DEBUG: checking to see if $fs on is already in zfs receive using $pscmd -Ao args= ...\n"; }
- open PL, "$pscmd axo args= |";
+ open PL, "$pscmd -Ao args= |";
    my @processes = <PL>;
close PL;
foreach my $process (@processes) {
# if ($args{'debug'}) { print "DEBUG: checking process $process...\n"; }
- if ($process =~ /zfs *(send|receive).*$fs/) {
+ if ($process =~ /zfs *(send|receive|recv).*$fs/) {
# there's already a zfs send/receive process for our target filesystem - return true
# if ($args{'debug'}) { print "DEBUG: process $process matches target $fs!\n"; }
return 1;
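
The ps changes in these hunks replace BSD-style option syntax (`p PID`, `axo`) with the POSIX forms, which both Linux procps and FreeBSD's ps accept:

```shell
# POSIX-portable ps invocations matching the hunks above:
ps -p $$ -o args=        # args of a single PID (was: ps p PID -o args=)
ps -Ao args= | head -3   # args of every process   (was: ps axo args=)
```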
@@ -1030,7 +1032,7 @@ sub getargs {
my %validargs;
my %novalueargs;
- push my @validargs, 'verbose','debug','version','monitor-health','monitor-snapshots','force-update','cron','take-snapshots','prune-snapshots','readonly';
+ push my @validargs, 'verbose','debug','version','monitor-health','monitor-snapshots','force-update','cron','take-snapshots','prune-snapshots','readonly','configdir';
push my @novalueargs, 'verbose','debug','version','monitor-health','monitor-snapshots','force-update','cron','take-snapshots','prune-snapshots','readonly';
foreach my $item (@validargs) { $validargs{$item}=1; }
foreach my $item (@novalueargs) { $novalueargs{$item}=1; }
diff --git a/sanoid.spec b/sanoid.spec
new file mode 100644
index 0000000..7f7900c
--- /dev/null
+++ b/sanoid.spec
@@ -0,0 +1,48 @@
+Name: sanoid
+Version: 1.4.4
+Release: 1%{?dist}
+BuildArch: noarch
+Summary: A policy-driven snapshot management tool for ZFS filesystems
+
+Group: Applications/System
+License: GPLv3
+URL: https://github.com/jimsalterjrs/sanoid
+Source0: https://github.com/jimsalterjrs/sanoid/archive/sanoid-master.zip
+Patch0: sanoid-syncoid-sshkey.patch
+#BuildRequires:
+Requires: perl
+
+%description
+Sanoid is a policy-driven snapshot management
+tool for ZFS filesystems. You can use Sanoid
+to create, automatically thin, and monitor snapshots
+and pool health from a single eminently
+human-readable TOML config file.
+
+%prep
+%setup -q -n sanoid-master
+%patch0 -p1
+
+%build
+
+%install
+%{__install} -D -m 0644 sanoid.defaults.conf %{buildroot}/etc/sanoid/sanoid.defaults.conf
+%{__install} -d %{buildroot}%{_sbindir}
+%{__install} -m 0755 sanoid syncoid findoid sleepymutex %{buildroot}%{_sbindir}
+%{__install} -D -m 0644 sanoid.conf %{buildroot}%{_docdir}/%{name}-%{version}/examples/sanoid.conf
+echo "* * * * * root %{_sbindir}/sanoid --cron" > %{buildroot}%{_docdir}/%{name}-%{version}/examples/sanoid.cron
+
+%files
+%doc CHANGELIST LICENSE VERSION README.md
+%{_sbindir}/sanoid
+%{_sbindir}/syncoid
+%{_sbindir}/findoid
+%{_sbindir}/sleepymutex
+%dir %{_sysconfdir}/%{name}
+%config %{_sysconfdir}/%{name}/sanoid.defaults.conf
+
+
+
+%changelog
+* Sat Feb 13 2016 Thomas M. Lapp - 1.4.4-1
+- Initial RPM Package
diff --git a/sleepymutex b/sleepymutex
index 1eb26c5..1361d8e 100755
--- a/sleepymutex
+++ b/sleepymutex
@@ -2,7 +2,8 @@
# this is just a cheap way to trigger mutex-based checks for process activity.
#
-# ie ./sleepymutex zfs receive data/lolz if you want a mutex hanging around as long as necessary that will show up
-# to any routine that actively does something like "ps axo | grep 'zfs receive'" or whatever.
+# ie ./sleepymutex zfs receive data/lolz if you want a mutex hanging around
+# as long as necessary that will show up to any routine that actively does
+# something like "ps axo | grep 'zfs receive'" or whatever.
sleep 99999
diff --git a/syncoid b/syncoid
index a0a3d20..31c9955 100755
--- a/syncoid
+++ b/syncoid
@@ -4,9 +4,10 @@
# from http://www.gnu.org/licenses/gpl-3.0.html on 2014-11-17. A copy should also be available in this
# project's Git repository at https://github.com/jimsalterjrs/sanoid/blob/master/LICENSE.
-my $version = '1.4.3';
+my $version = '1.4.7';
use strict;
+use warnings;
use Data::Dumper;
use Time::Local;
use Sys::Hostname;
@@ -21,11 +22,13 @@ if ($args{'version'}) {
my $rawsourcefs = $args{'source'};
my $rawtargetfs = $args{'target'};
my $debug = $args{'debug'};
+my $quiet = $args{'quiet'};
my $zfscmd = '/sbin/zfs';
my $sshcmd = '/usr/bin/ssh';
my $pscmd = '/bin/ps';
-my $sshcipher = '-c arcfour';
+my $sshcipher = '-c chacha20-poly1305@openssh.com,arcfour';
+my $sshport = '-p 22';
my $pvcmd = '/usr/bin/pv';
my $mbuffercmd = '/usr/bin/mbuffer';
my $sudocmd = '/usr/bin/sudo';
@@ -34,8 +37,16 @@ my $mbufferoptions = '-q -s 128k -m 16M 2>/dev/null';
# being present on remote machines.
my $lscmd = '/bin/ls';
+if ( $args{'sshport'} ) {
+ $sshport = "-p $args{'sshport'}";
+}
# figure out if source and/or target are remote.
-$sshcmd = "$sshcmd $sshcipher";
+if ( $args{'sshkey'} ) {
+ $sshcmd = "$sshcmd $sshcipher $sshport -i $args{'sshkey'}";
+}
+else {
+ $sshcmd = "$sshcmd $sshcipher $sshport";
+}
my ($sourcehost,$sourcefs,$sourceisroot) = getssh($rawsourcefs);
my ($targethost,$targetfs,$targetisroot) = getssh($rawtargetfs);
@@ -51,7 +62,9 @@ my %avail = checkcommands();
my %snaps;
-## break here to call replication individually so that we can loop across children separately, for recursive replication ##
+## break here to call replication individually so that we ##
+## can loop across children separately, for recursive ##
+## replication ##
if (! $args{'recursive'}) {
syncdataset($sourcehost, $sourcefs, $targethost, $targetfs);
@@ -135,7 +148,9 @@ sub syncdataset {
# been turned on... even when it's off... unless and
# until the filesystem is zfs umounted and zfs remounted.
# we're going to do the right thing anyway.
- my $originaltargetreadonly;
+ # dyking this functionality out for the time being due to buggy mount/unmount behavior
+ # with ZFS on Linux (possibly OpenZFS in general) when setting/unsetting readonly.
+ #my $originaltargetreadonly;
# sync 'em up.
if (! $targetexists) {
@@ -155,22 +170,25 @@ sub syncdataset {
my $disp_pvsize = readablebytes($pvsize);
if ($pvsize == 0) { $disp_pvsize = 'UNKNOWN'; }
my $synccmd = buildsynccmd($sendcmd,$recvcmd,$pvsize,$sourceisroot,$targetisroot);
- print "INFO: Sending oldest full snapshot $sourcefs\@$oldestsnap (~ $disp_pvsize) to new target filesystem:\n";
+ if (!$quiet) { print "INFO: Sending oldest full snapshot $sourcefs\@$oldestsnap (~ $disp_pvsize) to new target filesystem:\n"; }
if ($debug) { print "DEBUG: $synccmd\n"; }
# make sure target is (still) not currently in receive.
if (iszfsbusy($targethost,$targetfs,$targetisroot)) {
die "Cannot sync now: $targetfs is already target of a zfs receive process.\n";
}
- system($synccmd);
+ system($synccmd) == 0
+ or die "CRITICAL ERROR: $synccmd failed: $?";
# now do an -I to the new sync snapshot, assuming there were any snapshots
# other than the new sync snapshot to begin with, of course
if ($oldestsnap ne $newsyncsnap) {
# get current readonly status of target, then set it to on during sync
- $originaltargetreadonly = getzfsvalue($targethost,$targetfs,$targetisroot,'readonly');
- setzfsvalue($targethost,$targetfs,$targetisroot,'readonly','on');
+ # dyking this functionality out for the time being due to buggy mount/unmount behavior
+ # with ZFS on Linux (possibly OpenZFS in general) when setting/unsetting readonly.
+ # $originaltargetreadonly = getzfsvalue($targethost,$targetfs,$targetisroot,'readonly');
+ # setzfsvalue($targethost,$targetfs,$targetisroot,'readonly','on');
$sendcmd = "$sourcesudocmd $zfscmd send -I $sourcefs\@$oldestsnap $sourcefs\@$newsyncsnap";
$pvsize = getsendsize($sourcehost,"$sourcefs\@$oldestsnap","$sourcefs\@$newsyncsnap",$sourceisroot);
@@ -183,22 +201,29 @@ sub syncdataset {
die "Cannot sync now: $targetfs is already target of a zfs receive process.\n";
}
- print "INFO: Updating new target filesystem with incremental $sourcefs\@$oldestsnap ... $newsyncsnap (~ $disp_pvsize):\n";
+ if (!$quiet) { print "INFO: Updating new target filesystem with incremental $sourcefs\@$oldestsnap ... $newsyncsnap (~ $disp_pvsize):\n"; }
if ($debug) { print "DEBUG: $synccmd\n"; }
- system($synccmd);
+ system($synccmd) == 0
+ or die "CRITICAL ERROR: $synccmd failed: $?";
# restore original readonly value to target after sync complete
- setzfsvalue($targethost,$targetfs,$targetisroot,'readonly',$originaltargetreadonly);
+ # dyking this functionality out for the time being due to buggy mount/unmount behavior
+ # with ZFS on Linux (possibly OpenZFS in general) when setting/unsetting readonly.
+ # setzfsvalue($targethost,$targetfs,$targetisroot,'readonly',$originaltargetreadonly);
}
} else {
# find most recent matching snapshot and do an -I
# to the new snapshot
# get current readonly status of target, then set it to on during sync
- $originaltargetreadonly = getzfsvalue($targethost,$targetfs,$targetisroot,'readonly');
- setzfsvalue($targethost,$targetfs,$targetisroot,'readonly','on');
+ # dyking this functionality out for the time being due to buggy mount/unmount behavior
+ # with ZFS on Linux (possibly OpenZFS in general) when setting/unsetting readonly.
+ # $originaltargetreadonly = getzfsvalue($targethost,$targetfs,$targetisroot,'readonly');
+ # setzfsvalue($targethost,$targetfs,$targetisroot,'readonly','on');
+
+ my $targetsize = getzfsvalue($targethost,$targetfs,$targetisroot,'-p used');
- my $matchingsnap = getmatchingsnapshot(\%snaps);
+ my $matchingsnap = getmatchingsnapshot($targetsize, \%snaps);
# make sure target is (still) not currently in receive.
if (iszfsbusy($targethost,$targetfs,$targetisroot)) {
@@ -216,18 +241,21 @@ sub syncdataset {
}
my $sendcmd = "$sourcesudocmd $zfscmd send -I $sourcefs\@$matchingsnap $sourcefs\@$newsyncsnap";
- my $recvcmd = "$targetsudocmd $zfscmd receive $targetfs";
+ my $recvcmd = "$targetsudocmd $zfscmd receive -F $targetfs";
my $pvsize = getsendsize($sourcehost,"$sourcefs\@$matchingsnap","$sourcefs\@$newsyncsnap",$sourceisroot);
my $disp_pvsize = readablebytes($pvsize);
if ($pvsize == 0) { $disp_pvsize = "UNKNOWN"; }
my $synccmd = buildsynccmd($sendcmd,$recvcmd,$pvsize,$sourceisroot,$targetisroot);
- print "Sending incremental $sourcefs\@$matchingsnap ... $newsyncsnap (~ $disp_pvsize):\n";
+ if (!$quiet) { print "Sending incremental $sourcefs\@$matchingsnap ... $newsyncsnap (~ $disp_pvsize):\n"; }
if ($debug) { print "DEBUG: $synccmd\n"; }
- system("$synccmd");
+ system("$synccmd") == 0
+ or die "CRITICAL ERROR: $synccmd failed: $?";
# restore original readonly value to target after sync complete
- setzfsvalue($targethost,$targetfs,$targetisroot,'readonly',$originaltargetreadonly);
+ # dyking this functionality out for the time being due to buggy mount/unmount behavior
+ # with ZFS on Linux (possibly OpenZFS in general) when setting/unsetting readonly.
+ #setzfsvalue($targethost,$targetfs,$targetisroot,'readonly',$originaltargetreadonly);
}
# prune obsolete sync snaps on source and target.
@@ -243,14 +271,14 @@ sub getargs {
my %novaluearg;
my %validarg;
- push my @validargs, ('debug','nocommandchecks','version','monitor-version','compress','source-bwlimit','target-bwlimit','dumpsnaps','recursive','r');
+ push my @validargs, ('debug','nocommandchecks','version','monitor-version','compress','source-bwlimit','target-bwlimit','dumpsnaps','recursive','r','sshkey','sshport','quiet');
foreach my $item (@validargs) { $validarg{$item} = 1; }
- push my @novalueargs, ('debug','nocommandchecks','version','monitor-version','dumpsnaps','recursive','r');
+ push my @novalueargs, ('debug','nocommandchecks','version','monitor-version','dumpsnaps','recursive','r','quiet');
foreach my $item (@novalueargs) { $novaluearg{$item} = 1; }
while (my $rawarg = shift(@args)) {
my $arg = $rawarg;
- my $argvalue;
+ my $argvalue = '';
if ($rawarg =~ /=/) {
# user specified the value for a CLI argument with =
# instead of with blank space. separate appropriately.
@@ -302,17 +330,19 @@ sub getargs {
}
}
- if (defined $args{'source-bwlimit'}) { $args{'source-bwlimit'} = "-R $args{'source-bwlimit'}"; }
- if (defined $args{'target-bwlimit'}) { $args{'target-bwlimit'} = "-r $args{'target-bwlimit'}"; }
+ if (defined $args{'source-bwlimit'}) { $args{'source-bwlimit'} = "-R $args{'source-bwlimit'}"; } else { $args{'source-bwlimit'} = ''; }
+ if (defined $args{'target-bwlimit'}) { $args{'target-bwlimit'} = "-r $args{'target-bwlimit'}"; } else { $args{'target-bwlimit'} = ''; }
if ($args{'r'}) { $args{'recursive'} = $args{'r'}; }
+ if (!defined $args{'compress'}) { $args{'compress'} = 'default'; }
+
if ($args{'compress'} eq 'gzip') {
$args{'rawcompresscmd'} = '/bin/gzip';
$args{'compressargs'} = '-3';
$args{'rawdecompresscmd'} = '/bin/zcat';
$args{'decompressargs'} = '';
- } elsif ( ($args{'compress'} eq 'lzo') || ! (defined $args{'compress'}) ) {
+ } elsif ( ($args{'compress'} eq 'lzo') || ($args{'compress'} eq 'default') ) {
$args{'rawcompresscmd'} = '/usr/bin/lzop';
$args{'compressargs'} = '';
$args{'rawdecompresscmd'} = '/usr/bin/lzop';
@@ -348,8 +378,11 @@ sub checkcommands {
return %avail;
}
- if ($sourcehost ne '') { $sourcessh = "$sshcmd $sourcehost"; }
- if ($targethost ne '') { $targetssh = "$sshcmd $targethost"; }
+ if (!defined $sourcehost) { $sourcehost = ''; }
+ if (!defined $targethost) { $targethost = ''; }
+
+ if ($sourcehost ne '') { $sourcessh = "$sshcmd $sourcehost"; } else { $sourcessh = ''; }
+ if ($targethost ne '') { $targetssh = "$sshcmd $targethost"; } else { $targetssh = ''; }
# if raw compress command is null, we must have specified no compression. otherwise,
# make sure that compression is available everywhere we need it
@@ -389,6 +422,12 @@ sub checkcommands {
$t = "ssh:$t";
}
+ if (!defined $avail{'sourcecompress'}) { $avail{'sourcecompress'} = ''; }
+ if (!defined $avail{'targetcompress'}) { $avail{'targetcompress'} = ''; }
+ if (!defined $avail{'sourcembuffer'}) { $avail{'sourcembuffer'} = ''; }
+ if (!defined $avail{'targetmbuffer'}) { $avail{'targetmbuffer'} = ''; }
+
+
if ($avail{'sourcecompress'} eq '') {
if ($args{'rawcompresscmd'} ne '') {
print "WARN: $args{'compresscmd'} not available on source $s- sync will continue without compression.\n";
@@ -458,15 +497,15 @@ sub checkcommands {
sub iszfsbusy {
my ($rhost,$fs,$isroot) = @_;
if ($rhost ne '') { $rhost = "$sshcmd $rhost"; }
- if ($debug) { print "DEBUG: checking to see if $fs on $rhost is already in zfs receive using $rhost $pscmd axo args= ...\n"; }
+ if ($debug) { print "DEBUG: checking to see if $fs on $rhost is already in zfs receive using $rhost $pscmd -Ao args= ...\n"; }
- open PL, "$rhost $pscmd axo args= |";
+ open PL, "$rhost $pscmd -Ao args= |";
    my @processes = <PL>;
close PL;
foreach my $process (@processes) {
# if ($debug) { print "DEBUG: checking process $process...\n"; }
- if ($process =~ /zfs receive.*$fs/) {
+ if ($process =~ /zfs *(receive|recv).*$fs/) {
# there's already a zfs receive process for our target filesystem - return true
if ($debug) { print "DEBUG: process $process matches target $fs!\n"; }
return 1;
@@ -484,7 +523,8 @@ sub setzfsvalue {
my $mysudocmd;
if ($isroot) { $mysudocmd = ''; } else { $mysudocmd = $sudocmd; }
if ($debug) { print "$rhost $mysudocmd $zfscmd set $property=$value $fs\n"; }
- system("$rhost $mysudocmd $zfscmd set $property=$value $fs");
+ system("$rhost $mysudocmd $zfscmd set $property=$value $fs") == 0
+ or print "WARNING: $rhost $mysudocmd $zfscmd set $property=$value $fs died: $?, proceeding anyway.\n";
return;
}
@@ -539,20 +579,21 @@ sub buildsynccmd {
# $synccmd = "$sendcmd | $mbuffercmd | $pvcmd | $recvcmd";
$synccmd = "$sendcmd |";
# avoid confusion - accept either source-bwlimit or target-bwlimit as the bandwidth limiting option here
- my $bwlimit;
- if ($args{'source-bwlimit'} eq '') {
+    my $bwlimit = '';
+    if ($args{'source-bwlimit'} ne '') {
+        $bwlimit = $args{'source-bwlimit'};
+    } elsif ($args{'target-bwlimit'} ne '') {
+        $bwlimit = $args{'target-bwlimit'};
- } else {
- $bwlimit = $args{'source-bwlimit'};
}
+
if ($avail{'sourcembuffer'}) { $synccmd .= " $mbuffercmd $bwlimit $mbufferoptions |"; }
- if ($avail{'localpv'}) { $synccmd .= " $pvcmd -s $pvsize |"; }
+ if ($avail{'localpv'} && !$quiet) { $synccmd .= " $pvcmd -s $pvsize |"; }
$synccmd .= " $recvcmd";
} elsif ($sourcehost eq '') {
# local source, remote target.
#$synccmd = "$sendcmd | $pvcmd | $args{'compresscmd'} | $mbuffercmd | $sshcmd $targethost '$args{'decompresscmd'} | $mbuffercmd | $recvcmd'";
$synccmd = "$sendcmd |";
- if ($avail{'localpv'}) { $synccmd .= " $pvcmd -s $pvsize |"; }
+ if ($avail{'localpv'} && !$quiet) { $synccmd .= " $pvcmd -s $pvsize |"; }
if ($avail{'compress'}) { $synccmd .= " $args{'compresscmd'} |"; }
if ($avail{'sourcembuffer'}) { $synccmd .= " $mbuffercmd $args{'source-bwlimit'} $mbufferoptions |"; }
$synccmd .= " $sshcmd $targethost '";
@@ -568,7 +609,7 @@ sub buildsynccmd {
$synccmd .= "' | ";
if ($avail{'targetmbuffer'}) { $synccmd .= "$mbuffercmd $args{'target-bwlimit'} $mbufferoptions | "; }
if ($avail{'compress'}) { $synccmd .= "$args{'decompresscmd'} | "; }
- if ($avail{'localpv'}) { $synccmd .= "$pvcmd -s $pvsize | "; }
+ if ($avail{'localpv'} && !$quiet) { $synccmd .= "$pvcmd -s $pvsize | "; }
$synccmd .= "$recvcmd";
} else {
#remote source, remote target... weird, but whatever, I'm not here to judge you.
@@ -578,7 +619,7 @@ sub buildsynccmd {
if ($avail{'sourcembuffer'}) { $synccmd .= " | $mbuffercmd $args{'source-bwlimit'} $mbufferoptions"; }
$synccmd .= "' | ";
if ($avail{'compress'}) { $synccmd .= "$args{'decompresscmd'} | "; }
- if ($avail{'localpv'}) { $synccmd .= "$pvcmd -s $pvsize | "; }
+ if ($avail{'localpv'} && !$quiet) { $synccmd .= "$pvcmd -s $pvsize | "; }
if ($avail{'compress'}) { $synccmd .= "$args{'compresscmd'} | "; }
if ($avail{'localmbuffer'}) { $synccmd .= "$mbuffercmd $mbufferoptions | "; }
$synccmd .= "$sshcmd $targethost '";
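
Each branch of buildsynccmd grows one command string, appending a pipeline stage only when checkcommands() found the corresponding helper. A minimal sketch of that pattern, with printf/wc standing in for zfs send/receive and gzip/zcat for the compressor:

```shell
# Append pipeline stages conditionally, then run the finished string.
synccmd="printf hello |"
if command -v gzip >/dev/null 2>&1 && command -v zcat >/dev/null 2>&1; then
  synccmd="$synccmd gzip -3 | zcat |"   # compress in, decompress out
fi
synccmd="$synccmd wc -c"
sh -c "$synccmd"    # 5 bytes in, 5 bytes out whether or not gzip was spliced in
```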
@@ -623,7 +664,8 @@ sub pruneoldsyncsnaps {
if ($rhost ne '') { $prunecmd = '"' . $prunecmd . '"'; }
if ($debug) { print "DEBUG: pruning up to $maxsnapspercmd obsolete sync snapshots...\n"; }
if ($debug) { print "DEBUG: $rhost $prunecmd\n"; }
- system("$rhost $prunecmd");
+ system("$rhost $prunecmd") == 0
+ or die "CRITICAL ERROR: $rhost $prunecmd failed: $?";
$prunecmd = '';
$counter = 0;
}
@@ -635,19 +677,38 @@ sub pruneoldsyncsnaps {
if ($rhost ne '') { $prunecmd = '"' . $prunecmd . '"'; }
if ($debug) { print "DEBUG: pruning up to $maxsnapspercmd obsolete sync snapshots...\n"; }
if ($debug) { print "DEBUG: $rhost $prunecmd\n"; }
- system("$rhost $prunecmd");
+ system("$rhost $prunecmd") == 0
+ or warn "WARNING: $rhost $prunecmd failed: $?";
}
return;
}
sub getmatchingsnapshot {
- my $snaps = shift;
+    my ($targetsize, $snaps) = @_;
foreach my $snap ( sort { $snaps{'source'}{$b}{'ctime'}<=>$snaps{'source'}{$a}{'ctime'} } keys %{ $snaps{'source'} }) {
- if ($snaps{'source'}{$snap}{'ctime'} == $snaps{'target'}{$snap}{'ctime'}) {
- return $snap;
+ if (defined $snaps{'target'}{$snap}{'ctime'}) {
+ if ($snaps{'source'}{$snap}{'ctime'} == $snaps{'target'}{$snap}{'ctime'}) {
+ return $snap;
+ }
}
}
- print "UNEXPECTED ERROR: target exists but has no matching snapshots!\n";
+
+ # if we got this far, we failed to find a matching snapshot.
+
+ print "\n";
+ print "CRITICAL ERROR: Target exists but has no matching snapshots!\n";
+ print " Replication to target would require destroying existing\n";
+ print " target. Cowardly refusing to destroy your existing target.\n\n";
+
+ # experience tells me we need a mollyguard for people who try to
+ # zfs create targetpool/targetsnap ; syncoid sourcepool/sourcesnap targetpool/targetsnap ...
+
+ if ( $targetsize < (64*1024*1024) ) {
+ print " NOTE: Target dataset is < 64MB used - did you mistakenly run\n";
+ print " \`zfs create $args{'target'}\` on the target? ZFS initial\n";
+ print " replication must be to a NON EXISTENT DATASET, which will\n";
+ print " then be CREATED BY the initial replication process.\n\n";
+ }
-    exit 256;
+    # NB: exit statuses are truncated to 8 bits, so exit 256 would be reported as 0 (success)
+    exit 255;
}
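
A portability note on error exits like the one above: the kernel truncates exit statuses to 8 bits, so a process calling exit(256) is reported to its parent as status 0, i.e. success; 255 is the largest status a caller can actually observe.

```shell
# Exit statuses wrap at 256, so 256 looks like success to the parent shell.
sh -c 'exit 256' && echo "256 -> reported as success"
sh -c 'exit 255' || echo "255 -> reported as failure, status $?"
```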
@@ -660,7 +721,8 @@ sub newsyncsnap {
my %date = getdate();
my $snapname = "syncoid\_$hostid\_$date{'stamp'}";
my $snapcmd = "$rhost $mysudocmd $zfscmd snapshot $fs\@$snapname\n";
- system($snapcmd);
+ system($snapcmd) == 0
+ or die "CRITICAL ERROR: $snapcmd failed: $?";
return $snapname;
}
@@ -685,6 +747,7 @@ sub getssh {
my $rhost;
my $isroot;
my $socket;
+
# if we got passed something with an @ in it, we assume it's an ssh connection, eg root@myotherbox
if ($fs =~ /\@/) {
$rhost = $fs;
@@ -695,7 +758,7 @@ sub getssh {
if ($remoteuser eq 'root') { $isroot = 1; } else { $isroot = 0; }
# now we need to establish a persistent master SSH connection
$socket = "/tmp/syncoid-$remoteuser-$rhost-" . time();
- open FH, "$sshcmd -M -S $socket -o ControlPersist=yes $rhost exit |";
+ open FH, "$sshcmd -M -S $socket -o ControlPersist=yes $sshport $rhost exit |";
close FH;
$rhost = "-S $socket $rhost";
} else {
@@ -759,9 +822,7 @@ sub getsendsize {
}
my $sourcessh;
- if ($sourcehost ne '') {
- $sourcessh = "$sshcmd $sourcehost";
- }
+ if ($sourcehost ne '') { $sourcessh = "$sshcmd $sourcehost"; } else { $sourcessh = ''; }
my $getsendsizecmd = "$sourcessh $mysudocmd $zfscmd send -nP $snaps";
if ($debug) { print "DEBUG: getting estimated transfer size from source $sourcehost using \"$getsendsizecmd 2>&1 |\"...\n"; }