Mirror of https://github.com/jimsalterjrs/sanoid

Merge branch 'RulerOf-master'

This commit is contained in: commit 1a374a3bc6

42  CHANGELIST
@@ -1,3 +1,45 @@
2.0.0	[overall] documentation updates, small fixes, more warnings (@sparky3387, @ljwobker, @phreaker0)
	[syncoid] added force delete flag (@phreaker0)
	[sanoid] removed sleeping between snapshot taking (@phreaker0)
	[syncoid] added '--no-privilege-elevation' option to bypass root check (@lopsided98)
	[sanoid] implemented weekly period (@phreaker0)
	[syncoid] implemented support for zfs bookmarks as fallback (@phreaker0)
	[sanoid] support for pre, post and prune snapshot scripts (@jouir, @darkbasic, @phreaker0)
	[sanoid] ignore snapshot types that are set to 0 (@muff1nman)
	[packaging] split snapshot taking/pruning into separate systemd units for debian package (@phreaker0)
	[syncoid] replicate clones (@phreaker0)
	[syncoid] added compression algorithms: lz4, xz (@spheenik, @phreaker0)
	[sanoid] added option to defer pruning based on the available pool capacity (@phreaker0)
	[sanoid] implemented frequent snapshots with configurable period (@phreaker0)
	[syncoid] prevent a perl warning on systems which don't output estimated send size information (@phreaker0)
	[packaging] dependency fixes (@rodgerd, mabushey)
	[syncoid] implemented support for excluding children of a specific dataset (@phreaker0)
	[sanoid] monitor-health command additionally checks vdev members for io and checksum errors (@phreaker0)
	[syncoid] added ability to skip datasets by a custom dataset property 'syncoid:no-sync' (@attie)
	[syncoid] don't die on some critical replication errors, but continue with the remaining datasets (@phreaker0)
	[syncoid] return a non-zero exit code if there was a problem replicating datasets (@phreaker0)
	[syncoid] make local source bwlimit work (@phreaker0)
	[syncoid] fix 'resume support' detection on FreeBSD (@pit3k)
	[sanoid] updated INSTALL with missing dependency
	[sanoid] fixed monitor-health command for pools containing cache and log devices (@phreaker0)
	[sanoid] quiet flag suppresses all info output (@martinvw)
	[sanoid] check for empty lockfile which led to sanoid failing on start (@jasonblewis)
	[sanoid] added dst handling to prevent multiple invalid snapshots on time shift (@phreaker0)
	[sanoid] cache improvements, makes sanoid much faster with a huge amount of datasets/snapshots (@phreaker0)
	[sanoid] implemented monitor-capacity flag for checking zpool capacity limits (@phreaker0)
	[syncoid] added support for ZStandard compression (@danielewood)
	[syncoid] implemented support for excluding datasets from replication with regular expressions (@phreaker0)
	[syncoid] correctly parse zfs column output, fixes resumable send with datasets containing spaces (@phreaker0)
	[syncoid] added option for using extra identification in the snapshot name for replication to multiple targets (@phreaker0)
	[syncoid] added option for skipping the parent dataset in recursive replication (@phreaker0)
	[syncoid] typos (@UnlawfulMonad, @jsavikko, @phreaker0)
	[sanoid] use UTC by default in unit template and documentation (@phreaker0)
	[syncoid] don't prune snapshots if instructed to not create them either (@phreaker0)
	[syncoid] documented compatibility issues with (t)csh shells (@ecoutu)

1.4.18	implemented special character handling and support of ZFS resume/receive tokens by default in syncoid,
	thank you @phreaker0!

1.4.17	changed die to warn when unexpectedly unable to remove a snapshot - this
	allows sanoid to continue taking/removing other snapshots not affected by
	whatever lock prevented the first from being taken or removed
@@ -11,3 +11,14 @@ If you don't want to have to change the shebangs, your other option is to drop a
root@bsd:~# ln -s /usr/local/bin/perl /usr/bin/perl

After putting this symlink in place, ANY perl script shebanged for Linux will work on your system too.

Syncoid assumes a Bourne-style shell on remote hosts. Using (t)csh (the default for root under FreeBSD)
has some known issues:

* If mbuffer is present, syncoid will fail with an "Ambiguous output redirect." error. So if you:
  root@bsd:~# ln -s /usr/local/bin/mbuffer /usr/bin/mbuffer
  make sure the remote user is using an sh-compatible shell.

To change to a compatible shell, use the chsh command:

root@bsd:~# chsh -s /bin/sh
55  INSTALL.md
@@ -5,7 +5,7 @@
<!-- TOC depthFrom:1 depthTo:6 withLinks:1 updateOnSave:0 orderedList:0 -->

- [Installation](#installation)
-  - [Ubuntu](#ubuntu)
+  - [Debian/Ubuntu](#debianubuntu)
  - [CentOS](#centos)
  - [FreeBSD](#freebsd)
  - [Other OSes](#other-oses)

@@ -15,7 +15,7 @@
<!-- /TOC -->

-## Ubuntu
+## Debian/Ubuntu

Install prerequisite software:
@@ -23,6 +23,24 @@ Install prerequisite software:
apt install libconfig-inifiles-perl pv lzop mbuffer
```

Clone this repo, build the debian package and install it (alternatively you can skip the package and do it manually as described below for CentOS):

```bash
# Download the repo as root to avoid changing permissions later
sudo git clone https://github.com/jimsalterjrs/sanoid.git
cd sanoid
ln -s packages/debian .
dpkg-buildpackage -uc -us
apt install ../sanoid_*_all.deb
```

Enable the sanoid timer:

```bash
# enable and start the sanoid timer
sudo systemctl enable sanoid.timer
sudo systemctl start sanoid.timer
```

## CentOS

Install prerequisite software:
@@ -60,23 +78,42 @@ cat << "EOF" | sudo tee /etc/systemd/system/sanoid.service
Description=Snapshot ZFS Pool
Requires=zfs.target
After=zfs.target
ConditionFileNotEmpty=/etc/sanoid/sanoid.conf

[Service]
Environment=TZ=UTC
Type=oneshot
-ExecStart=/usr/sbin/sanoid --cron
+ExecStart=/usr/sbin/sanoid --take-snapshots
EOF

cat << "EOF" | sudo tee /etc/systemd/system/sanoid-prune.service
[Unit]
Description=Cleanup ZFS Pool
Requires=zfs.target
After=zfs.target sanoid.service
ConditionFileNotEmpty=/etc/sanoid/sanoid.conf

[Service]
Environment=TZ=UTC
Type=oneshot
ExecStart=/usr/sbin/sanoid --prune-snapshots

[Install]
WantedBy=sanoid.service
EOF
```

-And a systemd timer that will execute **Sanoid** once per minute:
+And a systemd timer that will execute **Sanoid** once per quarter hour
+(decrease the interval as suitable for your configuration):

```bash
cat << "EOF" | sudo tee /etc/systemd/system/sanoid.timer
[Unit]
-Description=Run Sanoid Every Minute
+Description=Run Sanoid Every 15 Minutes
Requires=sanoid.service

[Timer]
-OnCalendar=*:0/1
+OnCalendar=*:0/15
Persistent=true

[Install]
@@ -100,7 +137,7 @@ Now, proceed to configure [**Sanoid**](#configuration)
Install prerequisite software:

```bash
-pkg install p5-Config-Inifiles pv lzop
+pkg install p5-Config-Inifiles pv mbuffer lzop
```

**Additional notes:**

@@ -109,6 +146,8 @@ pkg install p5-Config-Inifiles pv lzop

* Simplest path workaround is symlinks, e.g. `ln -s /usr/local/bin/lzop /usr/bin/lzop` or similar, as appropriate, to create links in **/usr/bin** to wherever the utilities actually are on your system.

* See the note about mbuffer and other things in FREEBSD.readme

## Other OSes

**Sanoid** depends on the Perl module Config::IniFiles and will not operate without it. Config::IniFiles may be installed from CPAN, though the project strongly recommends using your distribution's repositories instead.
@@ -130,4 +169,4 @@ pkg install p5-Config-Inifiles pv lzop

## Sanoid

-Instructions on how to set up `sanoid.conf`. Maybe just copy/paste the example `sanoid.conf` file in here but clean it up a little bit.
+Take a look at the files `sanoid.defaults.conf` and `sanoid.conf.example` for all possible configuration options. Also have a look at the README.md.
103  README.md
@@ -6,9 +6,11 @@

More prosaically, you can use Sanoid to create, automatically thin, and monitor snapshots and pool health from a single eminently human-readable TOML config file at /etc/sanoid/sanoid.conf. (Sanoid also requires a "defaults" file located at /etc/sanoid/sanoid.defaults.conf, which is not user-editable.) A typical Sanoid system would have a single cron job:
```
-* * * * * /usr/local/bin/sanoid --cron
+* * * * * TZ=UTC /usr/local/bin/sanoid --cron
```

`Note`: Using UTC as the timezone is recommended to prevent problems with daylight saving time.

And its /etc/sanoid/sanoid.conf might look something like this:

```
@@ -26,6 +28,7 @@ And its /etc/sanoid/sanoid.conf might look something like this:
#############################

[template_production]
+frequently = 0
hourly = 36
daily = 30
monthly = 3
@@ -34,7 +37,7 @@ And its /etc/sanoid/sanoid.conf might look something like this:
autoprune = yes
```

-Which would be enough to tell sanoid to take and keep 36 hourly snapshots, 30 dailies, 3 monthlies, and no yearlies for all datasets under data/images (but not data/images itself, since process_children_only is set). Except in the case of data/images/win7-spice, which follows the same template (since it's a child of data/images) but only keeps 4 hourlies for whatever reason.
+Which would be enough to tell sanoid to take and keep 36 hourly snapshots, 30 dailies, 3 monthlies, and no yearlies for all datasets under data/images (but not data/images itself, since process_children_only is set). Except in the case of data/images/win7, which follows the same template (since it's a child of data/images) but only keeps 4 hourlies for whatever reason.
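Pulling the fragments above together, a complete /etc/sanoid/sanoid.conf matching that description might look like this (a sketch: the dataset sections and the win7 override follow the narrative above; keys such as `use_template`, `recursive` and `autosnap` are standard sanoid.conf options, but verify the full set against sanoid.defaults.conf):

```
[data/images]
	use_template = production
	recursive = yes
	process_children_only = yes

[data/images/win7]
	hourly = 4

[template_production]
	frequently = 0
	hourly = 36
	daily = 30
	monthly = 3
	yearly = 0
	autosnap = yes
	autoprune = yes
```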

##### Sanoid Command Line Options

@@ -54,6 +57,10 @@ Which would be enough to tell sanoid to take and keep 36 hourly snapshots, 30 da

This will process your sanoid.conf file; it will NOT create snapshots, but it will purge expired ones.

+ --force-prune

Purges expired snapshots even if a send/recv is in progress

+ --monitor-snapshots

This option is designed to be run by a Nagios monitoring system. It reports on the health of your snapshots.
@@ -62,6 +69,10 @@ Which would be enough to tell sanoid to take and keep 36 hourly snapshots, 30 da

This option is designed to be run by a Nagios monitoring system. It reports on the health of the zpool your filesystems are on. It only monitors filesystems that are configured in the sanoid.conf file.

+ --monitor-capacity

This option is designed to be run by a Nagios monitoring system. It reports on the capacity of the zpool your filesystems are on. It only monitors pools that are configured in the sanoid.conf file.

+ --force-update

This clears out sanoid's zfs snapshot listing cache. This is normally not needed.

@@ -82,6 +93,13 @@ Which would be enough to tell sanoid to take and keep 36 hourly snapshots, 30 da

This prints out quite a lot of additional information during a sanoid run, and is normally not needed.

+ --readonly

Skip creation/deletion of snapshots (simulate).

+ --help

Show help message.

----------

@@ -108,6 +126,35 @@ syncoid root@remotehost:data/images/vm backup/images/vm
Which would pull-replicate the filesystem from the remote host to the local system over an SSH tunnel.

Syncoid supports recursive replication (replication of a dataset and all its child datasets) and uses mbuffer buffering, lzop compression, and pv progress bars if the utilities are available on the systems used.
If ZFS supports resumable send/receive streams on both the source and target, they will be enabled by default.

As of 1.4.18, syncoid also automatically supports and enables resume of interrupted replication when both source and target support this feature.

##### Syncoid Dataset Properties

+ syncoid:sync

  Available values:

  + `true` (default if unset)

    This dataset will be synchronised to all hosts.

  + `false`

    This dataset will not be synchronised to any hosts - it will be skipped. This can be useful for preventing certain datasets from being transferred when recursively handling a tree.

  + `host1,host2,...`

    A comma-separated list of hosts. This dataset will only be synchronised by hosts listed in the property.

    _Note_: this check is performed by the host running `syncoid`, so the local hostname must be present for inclusion during a push operation, and the remote hostname must be present for a pull.

    _Note_: this will also prevent syncoid from handling the dataset if given explicitly on the command line.

    _Note_: syncing a child of a no-sync dataset will currently result in a critical error.

    _Note_: empty properties will be handled as if they were unset.
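The host-list matching described above can be sketched in plain shell (a hypothetical re-implementation for illustration only; syncoid performs this check internally in Perl, and the property value and hostname below are made up):

```shell
#!/bin/sh
# Sketch of the decision implied by the syncoid:sync property.
prop="backupserver,nas01"   # e.g. from: zfs get -H -o value syncoid:sync pool/data
host="nas01"                # hostname performing the sync

case ",$prop," in
  ",true,"|",-,"|",,") decision="sync" ;;  # unset/true/empty: sync everywhere
  ",false,") decision="skip" ;;            # false: always skip
  *",$host,"*) decision="sync" ;;          # this host is in the list
  *) decision="skip" ;;
esac
echo "$decision"   # prints: sync
```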

##### Syncoid Command Line Options

@@ -119,13 +166,21 @@ Syncoid supports recursive replication (replication of a dataset and all its chi

This is the destination dataset. It can be either local or remote.

+ --identifier=

Adds the given identifier to the snapshot name after the "syncoid_" prefix and before the hostname. This enables reliable replication to multiple targets from the same host. The following characters are allowed: a-z, A-Z, 0-9, _, -, : and .

+ -r --recursive

This will also transfer child datasets.

+ --skip-parent

This will skip the syncing of the parent dataset. Does nothing without the '--recursive' option.

+ --compress <compression type>

-Currently accepted options: gzip, pigz-fast, pigz-slow, lzo (default) & none. If the selected compression method is unavailable on the source and destination, no compression will be used.
+Currently accepted options: gzip, pigz-fast, pigz-slow, zstd-fast, zstd-slow, lz4, xz, lzo (default) & none. If the selected compression method is unavailable on the source and destination, no compression will be used.

+ --source-bwlimit <limit t|g|m|k>

@@ -137,7 +192,7 @@ Syncoid supports recursive replication (replication of a dataset and all its chi

+ --no-command-checks

-Do not check the existance of commands before attempting the transfer. It assumes all programs are available. This should never be used.
+Does not check the existence of commands before attempting the transfer, providing administrators a way to run the tool with minimal overhead and maximum speed, at the risk of failed replication or other edge cases. It assumes all programs are available, and should not be used in most situations. This is not an officially supported run mode.

+ --no-stream

@@ -147,14 +202,50 @@ Syncoid supports recursive replication (replication of a dataset and all its chi

This argument tells syncoid to restrict itself to existing snapshots, instead of creating a semi-ephemeral syncoid snapshot at execution time. Especially useful in multi-target (A->B, A->C) replication schemes, where you might otherwise accumulate a large number of foreign syncoid snapshots.

+ --no-clone-rollback

Do not rollback clones on the target

+ --no-rollback

Do not rollback anything (clones or snapshots) on the target host

+ --exclude=REGEX

The given regular expression is matched against all datasets that would be synced by this run; matching datasets are excluded. This argument can be specified multiple times.

+ --no-resume

This argument tells syncoid to not use resumable zfs send/receive streams.

+ --force-delete

Remove target datasets recursively (WARNING: this will also affect child datasets with matching snapshots/bookmarks) if there are no matching snapshots/bookmarks.

+ --no-clone-handling

This argument tells syncoid to not recreate clones on the target on the initial sync, doing a normal replication instead.

+ --dumpsnaps

This prints a list of snapshots during the run.

+ --no-privilege-elevation

Bypass the root check and assume syncoid has the necessary permissions (for use with ZFS permission delegation).

+ --sshport

Allow sync to/from boxes running SSH on non-standard ports.

+ --sshcipher

Instruct ssh to use a particular cipher set.

+ --sshoption

Passes option to ssh. This argument can be specified multiple times.

+ --sshkey

Use specified identity file as per ssh -i.
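The effect of `--exclude` can be sketched in shell (hypothetical dataset names; syncoid applies each regex to the candidate dataset list internally, this just illustrates the filtering):

```shell
#!/bin/sh
# Datasets a recursive run would consider (made-up names).
datasets="data/images
data/images/win7
data/images/scratch
data/backups"

# Exclude anything matching the regex 'scratch', as --exclude=scratch would.
kept=$(printf '%s\n' "$datasets" | grep -Ev 'scratch')
printf '%s\n' "$kept"
```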

@@ -167,6 +258,10 @@ Syncoid supports recursive replication (replication of a dataset and all its chi

This prints out quite a lot of additional information during a syncoid run, and is normally not needed.

+ --help

Show help message.

+ --version

Print the version and exit.
@@ -1,9 +0,0 @@
sanoid (1.4.16) unstable; urgency=medium

  * merged @hrast01's extended fix to support -o option1=val,option2=val passthrough to SSH. merged @JakobR's
  * off-by-one fix to stop unnecessary extra snapshots being taken under certain conditions. merged @stardude900's
  * update to INSTALL for FreeBSD users re:symlinks. Implemented @LordAro's update to change DIE to WARN when
  * encountering a dataset with no snapshots and --no-sync-snap set during recursive replication. Implemented
  * @LordAro's update to sanoid.conf to add an ignore template which does not snap, prune, or monitor.

 -- Jim Salter <github@jrs-s.net>  Wed, 9 Aug 2017 12:28:49 -0400
@@ -0,0 +1,67 @@
sanoid (2.0.0) unstable; urgency=medium

  [overall] documentation updates, small fixes, more warnings (@sparky3387, @ljwobker, @phreaker0)
  [syncoid] added force delete flag (@phreaker0)
  [sanoid] removed sleeping between snapshot taking (@phreaker0)
  [syncoid] added '--no-privilege-elevation' option to bypass root check (@lopsided98)
  [sanoid] implemented weekly period (@phreaker0)
  [syncoid] implemented support for zfs bookmarks as fallback (@phreaker0)
  [sanoid] support for pre, post and prune snapshot scripts (@jouir, @darkbasic, @phreaker0)
  [sanoid] ignore snapshot types that are set to 0 (@muff1nman)
  [packaging] split snapshot taking/pruning into separate systemd units for debian package (@phreaker0)
  [syncoid] replicate clones (@phreaker0)
  [syncoid] added compression algorithms: lz4, xz (@spheenik, @phreaker0)
  [sanoid] added option to defer pruning based on the available pool capacity (@phreaker0)
  [sanoid] implemented frequent snapshots with configurable period (@phreaker0)
  [syncoid] prevent a perl warning on systems which don't output estimated send size information (@phreaker0)
  [packaging] dependency fixes (@rodgerd, mabushey)
  [syncoid] implemented support for excluding children of a specific dataset (@phreaker0)
  [sanoid] monitor-health command additionally checks vdev members for io and checksum errors (@phreaker0)
  [syncoid] added ability to skip datasets by a custom dataset property 'syncoid:no-sync' (@attie)
  [syncoid] don't die on some critical replication errors, but continue with the remaining datasets (@phreaker0)
  [syncoid] return a non-zero exit code if there was a problem replicating datasets (@phreaker0)
  [syncoid] make local source bwlimit work (@phreaker0)
  [syncoid] fix 'resume support' detection on FreeBSD (@pit3k)
  [sanoid] updated INSTALL with missing dependency
  [sanoid] fixed monitor-health command for pools containing cache and log devices (@phreaker0)
  [sanoid] quiet flag suppresses all info output (@martinvw)
  [sanoid] check for empty lockfile which led to sanoid failing on start (@jasonblewis)
  [sanoid] added dst handling to prevent multiple invalid snapshots on time shift (@phreaker0)
  [sanoid] cache improvements, makes sanoid much faster with a huge amount of datasets/snapshots (@phreaker0)
  [sanoid] implemented monitor-capacity flag for checking zpool capacity limits (@phreaker0)
  [syncoid] added support for ZStandard compression (@danielewood)
  [syncoid] implemented support for excluding datasets from replication with regular expressions (@phreaker0)
  [syncoid] correctly parse zfs column output, fixes resumable send with datasets containing spaces (@phreaker0)
  [syncoid] added option for using extra identification in the snapshot name for replication to multiple targets (@phreaker0)
  [syncoid] added option for skipping the parent dataset in recursive replication (@phreaker0)
  [syncoid] typos (@UnlawfulMonad, @jsavikko, @phreaker0)
  [sanoid] use UTC by default in unit template and documentation (@phreaker0)
  [syncoid] don't prune snapshots if instructed to not create them either (@phreaker0)
  [syncoid] documented compatibility issues with (t)csh shells (@ecoutu)

 -- Jim Salter <github@jrs-s.net>  Wed, 04 Dec 2018 18:10:00 -0400

sanoid (1.4.18) unstable; urgency=medium

  implemented special character handling and support of ZFS resume/receive tokens by default in syncoid,
  thank you @phreaker0!

 -- Jim Salter <github@jrs-s.net>  Wed, 25 Apr 2018 16:24:00 -0400

sanoid (1.4.17) unstable; urgency=medium

  changed die to warn when unexpectedly unable to remove a snapshot - this
  allows sanoid to continue taking/removing other snapshots not affected by
  whatever lock prevented the first from being taken or removed

 -- Jim Salter <github@jrs-s.net>  Wed, 8 Nov 2017 15:25:00 -0400

sanoid (1.4.16) unstable; urgency=medium

  * merged @hrast01's extended fix to support -o option1=val,option2=val passthrough to SSH. merged @JakobR's
  * off-by-one fix to stop unnecessary extra snapshots being taken under certain conditions. merged @stardude900's
  * update to INSTALL for FreeBSD users re:symlinks. Implemented @LordAro's update to change DIE to WARN when
  * encountering a dataset with no snapshots and --no-sync-snap set during recursive replication. Implemented
  * @LordAro's update to sanoid.conf to add an ignore template which does not snap, prune, or monitor.

 -- Jim Salter <github@jrs-s.net>  Wed, 9 Aug 2017 12:28:49 -0400
@@ -16,4 +16,14 @@ override_dh_auto_install:
	@mkdir -p $(DESTDIR)/usr/share/doc/sanoid; \
	cp sanoid.conf $(DESTDIR)/usr/share/doc/sanoid/sanoid.conf.example;
	@mkdir -p $(DESTDIR)/lib/systemd/system; \
	cp debian/sanoid.timer $(DESTDIR)/lib/systemd/system;
	cp debian/sanoid-prune.service $(DESTDIR)/lib/systemd/system;

override_dh_installinit:
	dh_installinit --noscripts

override_dh_systemd_enable:
	dh_systemd_enable sanoid.timer
	dh_systemd_enable sanoid-prune.service

override_dh_systemd_start:
	dh_systemd_start sanoid.timer
@@ -0,0 +1,13 @@
[Unit]
Description=Cleanup ZFS Pool
Requires=zfs.target
After=zfs.target sanoid.service
ConditionFileNotEmpty=/etc/sanoid/sanoid.conf

[Service]
Environment=TZ=UTC
Type=oneshot
ExecStart=/usr/sbin/sanoid --prune-snapshots

[Install]
WantedBy=sanoid.service
@@ -5,5 +5,6 @@ After=zfs.target
ConditionFileNotEmpty=/etc/sanoid/sanoid.conf

[Service]
+Environment=TZ=UTC
Type=oneshot
-ExecStart=/usr/sbin/sanoid --cron
+ExecStart=/usr/sbin/sanoid --take-snapshots
@@ -0,0 +1,4 @@
AUX sanoid.cron 45 BLAKE2B 3f6294bbbf485dc21a565cd2c8da05a42fb21cdaabdf872a21500f1a7338786c60d4a1fd188bbf81ce85f06a376db16998740996f47c049707a5109bdf02c052 SHA512 7676b32f21e517e8c84a097c7934b54097cf2122852098ea756093ece242125da3f6ca756a6fbb82fc348f84b94bfd61639e86e0bfa4bbe7abf94a8a4c551419
DIST sanoid-2.0.1.tar.gz 106981 BLAKE2B 824b7271266ac9f9bf1fef5374a442215c20a4f139081f77d5d8db2ec7db9b8b349d9d0394c76f9d421a957853af64ff069097243f69e7e4b83a804f5ba992a6 SHA512 9d999b0f071bc3c3ca956df11e1501fd72a842f7d3315ede3ab3b5e0a36351100b6edbab8448bba65a2e187e4e8f77ff24671ed33b28f2fca9bb6ad0801aba9d
EBUILD sanoid-2.0.1.ebuild 772 BLAKE2B befbc479b5c79faa88ae21649ed31d1af70dbecb60416e8c879fffd9a3cdf9f3f508e12d8edc9f4e0afbf0e6ab0491a36fdae2af995a1984072dc5bffd63fe1d SHA512 d90a8b8ae40634e2f2e1fa11ba787cfcb461b75fa65b19c0d9a34eb458f07f510bbb1992f4a0e7a0e4aa5f55a5acdc064779c9a4f993b30eb5cbf39037f97858
EBUILD sanoid-9999.ebuild 752 BLAKE2B 073533436c6f5c47b9e8410c898bf86b605d61c9b16a08b57253f5a87ad583e00d935ae9ea90f98b42c20dc1fbda0b9f1a8a7bf5be1cf3daf20afc640f1428ca SHA512 40ad34230fdb538bbdcda2d8149f37eac2a0e2accce5f79f7ba77d8e62e3fd78e997d8143baa0e050f548f90ce1cb6827e50b536b5e3acc444c6032f170251be
@@ -0,0 +1 @@
* * * * * root TZ=UTC /usr/bin/sanoid --cron
@@ -0,0 +1,36 @@
# Copyright 2019 Gentoo Authors
# Distributed under the terms of the GNU General Public License v2

EAPI=7

DESCRIPTION="Policy-driven snapshot management and replication tools for ZFS"
HOMEPAGE="https://github.com/jimsalterjrs/sanoid"
SRC_URI="https://github.com/jimsalterjrs/${PN}/archive/v${PV}.tar.gz -> ${P}.tar.gz"

LICENSE="GPL-3.0"
SLOT="0"
KEYWORDS="~x86 ~amd64"
IUSE=""

DEPEND="app-arch/lzop
	dev-perl/Config-IniFiles
	sys-apps/pv
	sys-block/mbuffer
	virtual/perl-Data-Dumper"
RDEPEND="${DEPEND}"
BDEPEND=""

DOCS=( README.md )

src_install() {
	dobin findoid
	dobin sanoid
	dobin sleepymutex
	dobin syncoid
	keepdir /etc/${PN}
	insinto /etc/${PN}
	doins sanoid.conf
	doins sanoid.defaults.conf
	insinto /etc/cron.d
	newins "${FILESDIR}/${PN}.cron" ${PN}
}
@@ -0,0 +1,38 @@
# Copyright 2019 Gentoo Authors
# Distributed under the terms of the GNU General Public License v2

EAPI=7

EGIT_REPO_URI="https://github.com/jimsalterjrs/${PN}.git"
inherit git-r3

DESCRIPTION="Policy-driven snapshot management and replication tools for ZFS"
HOMEPAGE="https://github.com/jimsalterjrs/sanoid"

LICENSE="GPL-3.0"
SLOT="0"
KEYWORDS="**"
IUSE=""

DEPEND="app-arch/lzop
	dev-perl/Config-IniFiles
	sys-apps/pv
	sys-block/mbuffer
	virtual/perl-Data-Dumper"
RDEPEND="${DEPEND}"
BDEPEND=""

DOCS=( README.md )

src_install() {
	dobin findoid
	dobin sanoid
	dobin sleepymutex
	dobin syncoid
	keepdir /etc/${PN}
	insinto /etc/${PN}
	doins sanoid.conf
	doins sanoid.defaults.conf
	insinto /etc/cron.d
	newins "${FILESDIR}/${PN}.cron" ${PN}
}

Binary file not shown.
@@ -1,4 +1,4 @@
-%global version 1.4.14
+%global version 2.0.0
%global git_tag v%{version}

# Enable with systemctl "enable sanoid.timer"
@@ -6,15 +6,15 @@

Name: sanoid
Version: %{version}
-Release: 2%{?dist}
+Release: 1%{?dist}
BuildArch: noarch
Summary: A policy-driven snapshot management tool for ZFS file systems
Group: Applications/System
License: GPLv3
URL: https://github.com/jimsalterjrs/sanoid
Source0: https://github.com/jimsalterjrs/%{name}/archive/%{git_tag}/%{name}-%{version}.tar.gz

-Requires: perl, mbuffer, lzop, pv
+Requires: perl, mbuffer, lzop, pv, perl-Config-IniFiles
%if 0%{?_with_systemd}
Requires: systemd >= 212
@@ -58,6 +58,7 @@ Requires=zfs.target
After=zfs.target

[Service]
+Environment=TZ=UTC
Type=oneshot
ExecStart=%{_sbindir}/sanoid --cron
EOF
@@ -110,6 +111,10 @@ echo "* * * * * root %{_sbindir}/sanoid --cron" > %{buildroot}%{_docdir}/%{name}
%endif

%changelog
* Wed Dec 04 2018 Christoph Klaffl <christoph@phreaker.eu> - 2.0.0
- Bump to 2.0.0
* Sat Apr 28 2018 Dominic Robinson <github@dcrdev.com> - 1.4.18-1
- Bump to 1.4.18
* Thu Aug 31 2017 Dominic Robinson <github@dcrdev.com> - 1.4.14-2
- Add systemd timers
* Wed Aug 30 2017 Dominic Robinson <github@dcrdev.com> - 1.4.14-1
@@ -121,6 +126,5 @@ echo "* * * * * root %{_sbindir}/sanoid --cron" > %{buildroot}%{_docdir}/%{name}
- Version bump
- Clean up variables and macros
- Compatible with both Fedora and Red Hat

* Sat Feb 13 2016 Thomas M. Lapp <tmlapp@gmail.com> - 1.4.4-1
- Initial RPM Package
@@ -0,0 +1 @@
cf0ec23c310d2f9416ebabe48f5edb73 sanoid-1.4.18.tar.gz
607  sanoid
@@ -4,7 +4,8 @@
# from http://www.gnu.org/licenses/gpl-3.0.html on 2014-11-17. A copy should also be available in this
# project's Git repository at https://github.com/jimsalterjrs/sanoid/blob/master/LICENSE.

-$::VERSION = '1.4.17';
+$::VERSION = '2.0.0';
+my $MINIMUM_DEFAULTS_VERSION = 2;

use strict;
use warnings;
@@ -18,7 +19,8 @@ use Time::Local; # to parse dates in reverse

my %args = ("configdir" => "/etc/sanoid");
GetOptions(\%args, "verbose", "debug", "cron", "readonly", "quiet",
	"monitor-health", "force-update", "configdir=s",
-	"monitor-snapshots", "take-snapshots", "prune-snapshots"
+	"monitor-snapshots", "take-snapshots", "prune-snapshots", "force-prune",
+	"monitor-capacity"
	) or pod2usage(2);

# If only config directory (or nothing) has been specified, default to --cron --verbose
@@ -30,6 +32,7 @@ if (keys %args < 2) {
my $pscmd = '/bin/ps';

my $zfs = '/sbin/zfs';
+my $zpool = '/sbin/zpool';

my $conf_file = "$args{'configdir'}/sanoid.conf";
my $default_conf_file = "$args{'configdir'}/sanoid.defaults.conf";
@@ -39,8 +42,11 @@ my %config = init($conf_file,$default_conf_file);

# if we call getsnaps(%config,1) it will forcibly update the cache, TTL or no TTL
my $forcecacheupdate = 0;
my $cache = '/var/cache/sanoidsnapshots.txt';
my $cacheTTL = 900; # 15 minutes
my %snaps = getsnaps( \%config, $cacheTTL, $forcecacheupdate );
+my %pruned;
+my %capacitycache;

my %snapsbytype = getsnapsbytype( \%config, \%snaps );
@@ -52,6 +58,7 @@ my @params = ( \%config, \%snaps, \%snapsbytype, \%snapsbypath );
if ($args{'debug'}) { $args{'verbose'}=1; blabber (@params); }
if ($args{'monitor-snapshots'}) { monitor_snapshots(@params); }
if ($args{'monitor-health'}) { monitor_health(@params); }
if ($args{'monitor-capacity'}) { monitor_capacity(@params); }
if ($args{'force-update'}) { my $snaps = getsnaps( \%config, $cacheTTL, 1 ); }

if ($args{'cron'}) {
@@ -121,20 +128,23 @@ sub monitor_snapshots {
my $path = $config{$section}{'path'};
push @paths, $path;

my @types = ('yearly','monthly','daily','hourly');
my @types = ('yearly','monthly','weekly','daily','hourly','frequently');
foreach my $type (@types) {
if ($config{$section}{$type} == 0) { next; }

my $smallerperiod = 0;
# we need to set the period length in seconds first
if ($type eq 'hourly') { $smallerperiod = 60; }
if ($type eq 'frequently') { $smallerperiod = 1; }
elsif ($type eq 'hourly') { $smallerperiod = 60; }
elsif ($type eq 'daily') { $smallerperiod = 60*60; }
elsif ($type eq 'monthly') { $smallerperiod = 60*60*24; }
elsif ($type eq 'yearly') { $smallerperiod = 60*60*24; }
elsif ($type eq 'weekly') { $smallerperiod = 60*60*24; }
elsif ($type eq 'monthly') { $smallerperiod = 60*60*24*7; }
elsif ($type eq 'yearly') { $smallerperiod = 60*60*24*31; }

my $typewarn = $type . '_warn';
my $typecrit = $type . '_crit';
my $warn = $config{$section}{$typewarn} * $smallerperiod;
my $crit = $config{$section}{$typecrit} * $smallerperiod;
my $warn = convertTimePeriod($config{$section}{$typewarn}, $smallerperiod);
my $crit = convertTimePeriod($config{$section}{$typecrit}, $smallerperiod);
my $elapsed = -1;
if (defined $snapsbytype{$path}{$type}{'newest'}) {
$elapsed = $snapsbytype{$path}{$type}{'newest'};
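The hunk above replaces raw multiplication with a `convertTimePeriod` call, so `*_warn`/`*_crit` settings can carry a unit suffix while plain numbers still fall back to the per-type period. A minimal Python sketch of the per-type fallback table (function and constant names are mine, not from the source):

```python
# Fallback period, in seconds, for each snapshot type -- mirrors the
# elsif chain in monitor_snapshots after this commit.
SMALLER_PERIOD = {
    'frequently': 1,
    'hourly': 60,
    'daily': 60 * 60,
    'weekly': 60 * 60 * 24,
    'monthly': 60 * 60 * 24 * 7,
    'yearly': 60 * 60 * 24 * 31,
}

def threshold_seconds(configured: int, snap_type: str) -> int:
    """Unitless warn/crit values are multiplied by the fallback period."""
    return configured * SMALLER_PERIOD[snap_type]
```

For example, an `hourly_warn` of 90 becomes 90 minutes worth of seconds.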
@@ -143,7 +153,7 @@ sub monitor_snapshots {
my $dispwarn = displaytime($warn);
my $dispcrit = displaytime($crit);
if ( $elapsed > $crit || $elapsed == -1) {
if ($config{$section}{$typecrit} > 0) {
if ($crit > 0) {
if (! $config{$section}{'monitor_dont_crit'}) { $errorlevel = 2; }
if ($elapsed == -1) {
push @msgs, "CRIT: $path has no $type snapshots at all!";
@@ -152,7 +162,7 @@ sub monitor_snapshots {
}
}
} elsif ($elapsed > $warn) {
if ($config{$section}{$typewarn} > 0) {
if ($warn > 0) {
if (! $config{$section}{'monitor_dont_warn'} && ($errorlevel < 2) ) { $errorlevel = 1; }
push @msgs, "WARN: $path\'s newest $type snapshot is $dispelapsed old (should be < $dispwarn)";
}
@@ -174,6 +184,61 @@ sub monitor_snapshots {
exit $errorlevel;
}


####################################################################################
####################################################################################
####################################################################################

sub monitor_capacity {
my ($config, $snaps, $snapsbytype, $snapsbypath) = @_;
my %pools;
my @messages;
my $errlevel=0;

# build pool list with corresponding capacity limits
foreach my $section (keys %config) {
my @pool = split ('/',$section);

if (scalar @pool == 1 || !defined($pools{$pool[0]}) ) {
my %capacitylimits;

if (!check_capacity_limit($config{$section}{'capacity_warn'})) {
die "ERROR: invalid zpool capacity warning limit!\n";
}

if ($config{$section}{'capacity_warn'} != 0) {
$capacitylimits{'warn'} = $config{$section}{'capacity_warn'};
}

if (!check_capacity_limit($config{$section}{'capacity_crit'})) {
die "ERROR: invalid zpool capacity critical limit!\n";
}

if ($config{$section}{'capacity_crit'} != 0) {
$capacitylimits{'crit'} = $config{$section}{'capacity_crit'};
}

if (%capacitylimits) {
$pools{$pool[0]} = \%capacitylimits;
}
}
}

foreach my $pool (keys %pools) {
my $capacitylimitsref = $pools{$pool};

my ($exitcode, $msg) = check_zpool_capacity($pool,\%$capacitylimitsref);
if ($exitcode > $errlevel) { $errlevel = $exitcode; }
chomp $msg;
push (@messages, $msg);
}

my @warninglevels = ('','*** WARNING *** ','*** CRITICAL *** ');
my $message = $warninglevels[$errlevel] . join (', ',@messages);
print "$message\n";
exit $errlevel;
}

####################################################################################
####################################################################################
####################################################################################
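The new `monitor_capacity` sub above collects per-pool warn/crit limits, checks each pool once, and exits with the worst Nagios-style level, prefixing the joined messages accordingly. A rough Python sketch of that aggregation (helper names are mine, assumed for illustration):

```python
def capacity_status(cap_percent, limits):
    """Nagios-style level for one pool: 0 OK, 1 WARNING, 2 CRITICAL.
    Limits may define 'warn' and/or 'crit' percentages."""
    code = 0
    if 'warn' in limits and cap_percent >= limits['warn']:
        code = 1
    if 'crit' in limits and cap_percent >= limits['crit']:
        code = 2
    return code

def summarize(results):
    """Overall level is the worst per-pool level, and the combined
    message gets the matching prefix, as in monitor_capacity."""
    prefixes = ['', '*** WARNING *** ', '*** CRITICAL *** ']
    level = max((code for code, _ in results), default=0)
    return level, prefixes[level] + ', '.join(msg for _, msg in results)
```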
@@ -195,13 +260,19 @@ sub prune_snapshots {
my $path = $config{$section}{'path'};

my $period = 0;
if (check_prune_defer($config, $section)) {
if ($args{'verbose'}) { print "INFO: deferring snapshot pruning ($section)...\n"; }
next;
}

foreach my $type (keys %{ $config{$section} }){
unless ($type =~ /ly$/) { next; }

# we need to set the period length in seconds first
if ($type eq 'hourly') { $period = 60*60; }
if ($type eq 'frequently') { $period = 60 * $config{$section}{'frequent_period'}; }
elsif ($type eq 'hourly') { $period = 60*60; }
elsif ($type eq 'daily') { $period = 60*60*24; }
elsif ($type eq 'weekly') { $period = 60*60*24*7; }
elsif ($type eq 'monthly') { $period = 60*60*24*31; }
elsif ($type eq 'yearly') { $period = 60*60*24*365.25; }
@@ -234,24 +305,42 @@ sub prune_snapshots {
writelock('sanoid_pruning');
foreach my $snap( @prunesnaps ){
if ($args{'verbose'}) { print "INFO: pruning $snap ... \n"; }
if (iszfsbusy($path)) {
print "INFO: deferring pruning of $snap - $path is currently in zfs send or receive.\n";
if (!$args{'force-prune'} && iszfsbusy($path)) {
if ($args{'verbose'}) { print "INFO: deferring pruning of $snap - $path is currently in zfs send or receive.\n"; }
} else {
if (! $args{'readonly'}) { system($zfs, "destroy",$snap) == 0 or warn "could not remove $snap : $?"; }
if (! $args{'readonly'}) {
if (system($zfs, "destroy", $snap) == 0) {
$pruned{$snap} = 1;
my $dataset = (split '@', $snap)[0];
my $snapname = (split '@', $snap)[1];
if ($config{$dataset}{'pruning_script'}) {
$ENV{'SANOID_TARGET'} = $dataset;
$ENV{'SANOID_SNAPNAME'} = $snapname;
if ($args{'verbose'}) { print "executing pruning_script '".$config{$dataset}{'pruning_script'}."' on dataset '$dataset'\n"; }
my $ret = runscript('pruning_script',$dataset);

delete $ENV{'SANOID_TARGET'};
delete $ENV{'SANOID_SNAPNAME'};
}
} else {
warn "could not remove $snap : $?";
}
}
}
}
removelock('sanoid_pruning');
$forcecacheupdate = 1;
%snaps = getsnaps(%config,$cacheTTL,$forcecacheupdate);
removecachedsnapshots(0);
} else {
print "INFO: deferring snapshot pruning - valid pruning lock held by other sanoid process.\n";
if ($args{'verbose'}) { print "INFO: deferring snapshot pruning - valid pruning lock held by other sanoid process.\n"; }
}
}
}
}
}


# if there were any deferred cache updates,
# do them now and wait if necessary
removecachedsnapshots(1);
} # end prune_snapshots
@@ -268,6 +357,19 @@ sub take_snapshots {

my @newsnaps;

# get utc timestamp of the current day for DST check
my $daystartUtc = timelocal(0, 0, 0, $datestamp{'mday'}, ($datestamp{'mon'}-1), $datestamp{'year'});
my ($isdst) = (localtime($daystartUtc))[8];
my $dstOffset = 0;

if ($isdst ne $datestamp{'isdst'}) {
# current dst is different then at the beginning og the day
if ($isdst) {
# DST ended in the current day
$dstOffset = 60*60;
}
}

if ($args{'verbose'}) { print "INFO: taking snapshots...\n"; }
foreach my $section (keys %config) {
if ($section =~ /^template/) { next; }
@@ -275,9 +377,9 @@ sub take_snapshots {
if ($config{$section}{'process_children_only'}) { next; }

my $path = $config{$section}{'path'};
my @types = ('yearly','monthly','weekly','daily','hourly','frequently');

foreach my $type (keys %{ $config{$section} }){
unless ($type =~ /ly$/) { next; }
foreach my $type (@types) {
if ($config{$section}{$type} > 0) {

my $newestage; # in seconds
@@ -291,7 +393,21 @@ sub take_snapshots {
my @preferredtime;
my $lastpreferred;

if ($type eq 'hourly') {
# to avoid duplicates with DST
my $dateSuffix = "";

if ($type eq 'frequently') {
my $frequentslice = int($datestamp{'min'} / $config{$section}{'frequent_period'});

push @preferredtime,0; # try to hit 0 seconds
push @preferredtime,$frequentslice * $config{$section}{'frequent_period'};
push @preferredtime,$datestamp{'hour'};
push @preferredtime,$datestamp{'mday'};
push @preferredtime,($datestamp{'mon'}-1); # january is month 0
push @preferredtime,$datestamp{'year'};
$lastpreferred = timelocal(@preferredtime);
if ($lastpreferred > time()) { $lastpreferred -= 60 * $config{$section}{'frequent_period'}; } # preferred time is later this frequent period - so look at last frequent period
} elsif ($type eq 'hourly') {
push @preferredtime,0; # try to hit 0 seconds
push @preferredtime,$config{$section}{'hourly_min'};
push @preferredtime,$datestamp{'hour'};
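For the new `frequently` type, the preferred minute is the start of the current slice of the configured `frequent_period`: `int($datestamp{'min'} / $config{$section}{'frequent_period'})` scaled back up by the period. A small Python sketch of that slice arithmetic (function name is mine):

```python
def preferred_minute(current_minute: int, frequent_period: int) -> int:
    """Start of the frequent-period slice containing current_minute,
    mirroring the $frequentslice computation in take_snapshots."""
    return (current_minute // frequent_period) * frequent_period
```

With a 15-minute period, minute 37 snaps back to the :30 slice boundary; if that boundary still lies in the future, the Perl code steps back one full period.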
@@ -299,6 +415,13 @@ sub take_snapshots {
push @preferredtime,($datestamp{'mon'}-1); # january is month 0
push @preferredtime,$datestamp{'year'};
$lastpreferred = timelocal(@preferredtime);

if ($dstOffset ne 0) {
# timelocal doesn't take DST into account
$lastpreferred += $dstOffset;
# DST ended, avoid duplicates
$dateSuffix = "_y";
}
if ($lastpreferred > time()) { $lastpreferred -= 60*60; } # preferred time is later this hour - so look at last hour's
} elsif ($type eq 'daily') {
push @preferredtime,0; # try to hit 0 seconds
@@ -308,7 +431,47 @@ sub take_snapshots {
push @preferredtime,($datestamp{'mon'}-1); # january is month 0
push @preferredtime,$datestamp{'year'};
$lastpreferred = timelocal(@preferredtime);
if ($lastpreferred > time()) { $lastpreferred -= 60*60*24; } # preferred time is later today - so look at yesterday's

# timelocal doesn't take DST into account
$lastpreferred += $dstOffset;

# check if the planned time has different DST flag than the current
my ($isdst) = (localtime($lastpreferred))[8];
if ($isdst ne $datestamp{'isdst'}) {
if (!$isdst) {
# correct DST difference
$lastpreferred -= 60*60;
}
}

if ($lastpreferred > time()) {
$lastpreferred -= 60*60*24;

if ($dstOffset ne 0) {
# because we are going back one day
# the DST difference has to be accounted
# for in reverse now
$lastpreferred -= 2*$dstOffset;
}
} # preferred time is later today - so look at yesterday's
} elsif ($type eq 'weekly') {
# calculate offset in seconds for the desired weekday
my $offset = 0;
if ($config{$section}{'weekly_wday'} < $datestamp{'wday'}) {
$offset += 7;
}
$offset += $config{$section}{'weekly_wday'} - $datestamp{'wday'};
$offset *= 60*60*24; # full day

push @preferredtime,0; # try to hit 0 seconds
push @preferredtime,$config{$section}{'weekly_min'};
push @preferredtime,$config{$section}{'weekly_hour'};
push @preferredtime,$datestamp{'mday'};
push @preferredtime,($datestamp{'mon'}-1); # january is month 0
push @preferredtime,$datestamp{'year'};
$lastpreferred = timelocal(@preferredtime);
$lastpreferred += $offset;
if ($lastpreferred > time()) { $lastpreferred -= 60*60*24*7; } # preferred time is later this week - so look at last week's
} elsif ($type eq 'monthly') {
push @preferredtime,0; # try to hit 0 seconds
push @preferredtime,$config{$section}{'monthly_min'};
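The `weekly` branch above first computes, in whole days, how far forward from today the configured `weekly_wday` lies, wrapping a full week ahead when that weekday already passed; the result then becomes a seconds offset on top of today's preferred time. A compact Python sketch of just the weekday arithmetic (function name is mine):

```python
def weekday_offset_days(target_wday: int, current_wday: int) -> int:
    """Days to add to today's date to reach target_wday, as in the
    weekly branch: wrap forward a week when the target is earlier."""
    offset = 0
    if target_wday < current_wday:
        offset += 7
    return offset + target_wday - current_wday
```

E.g. on Wednesday (wday 3) a configured Monday (wday 1) resolves to 5 days ahead; the later `if ($lastpreferred > time())` check then steps back one week.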
@@ -327,6 +490,9 @@ sub take_snapshots {
push @preferredtime,$datestamp{'year'};
$lastpreferred = timelocal(@preferredtime);
if ($lastpreferred > time()) { $lastpreferred -= 60*60*24*31*365.25; } # preferred time is later this year - so look at last year
} else {
warn "WARN: unknown interval type $type in config!";
next;
}

# reconstruct our human-formatted most recent preferred snapshot time into an epoch time, to compare with the epoch of our most recent snapshot
@@ -336,7 +502,7 @@ sub take_snapshots {
# update to most current possible datestamp
%datestamp = get_date();
# print "we should have had a $type snapshot of $path $maxage seconds ago; most recent is $newestage seconds old.\n";
push(@newsnaps, "$path\@autosnap_$datestamp{'sortable'}_$type");
push(@newsnaps, "$path\@autosnap_$datestamp{'sortable'}${dateSuffix}_$type");
}
}
}
@@ -344,12 +510,46 @@ sub take_snapshots {

if ( (scalar(@newsnaps)) > 0) {
foreach my $snap ( @newsnaps ) {
my $dataset = (split '@', $snap)[0];
my $snapname = (split '@', $snap)[1];
my $presnapshotfailure = 0;
my $ret = 0;
if ($config{$dataset}{'pre_snapshot_script'}) {
$ENV{'SANOID_TARGET'} = $dataset;
$ENV{'SANOID_SNAPNAME'} = $snapname;
if ($args{'verbose'}) { print "executing pre_snapshot_script '".$config{$dataset}{'pre_snapshot_script'}."' on dataset '$dataset'\n"; }

if (!$args{'readonly'}) {
$ret = runscript('pre_snapshot_script',$dataset);
}

delete $ENV{'SANOID_TARGET'};
delete $ENV{'SANOID_SNAPNAME'};

if ($ret != 0) {
# warning was already thrown by runscript function
$config{$dataset}{'no_inconsistent_snapshot'} and next;
$presnapshotfailure = 1;
}
}
if ($args{'verbose'}) { print "taking snapshot $snap\n"; }
if (!$args{'readonly'}) {
system($zfs, "snapshot", "$snap") == 0
or warn "CRITICAL ERROR: $zfs snapshot $snap failed, $?";
# make sure we don't end up with multiple snapshots with the same ctime
sleep 1;
}
if ($config{$dataset}{'post_snapshot_script'}) {
if (!$presnapshotfailure or $config{$dataset}{'force_post_snapshot_script'}) {
$ENV{'SANOID_TARGET'} = $dataset;
$ENV{'SANOID_SNAPNAME'} = $snapname;
if ($args{'verbose'}) { print "executing post_snapshot_script '".$config{$dataset}{'post_snapshot_script'}."' on dataset '$dataset'\n"; }

if (!$args{'readonly'}) {
runscript('post_snapshot_script',$dataset);
}

delete $ENV{'SANOID_TARGET'};
delete $ENV{'SANOID_SNAPNAME'};
}
}
}
$forcecacheupdate = 1;
@@ -381,16 +581,20 @@ sub blabber {
my $path = $config{$section}{'path'};
print "Filesystem $path has:\n";
print " $snapsbypath{$path}{'numsnaps'} total snapshots ";
print "(newest: ";
my $newest = sprintf("%.1f",$snapsbypath{$path}{'newest'} / 60 / 60);
print "$newest hours old)\n";
if ($snapsbypath{$path}{'numsnaps'} == 0) {
print "(no current snapshots)"
} else {
print "(newest: ";
my $newest = sprintf("%.1f",$snapsbypath{$path}{'newest'} / 60 / 60);
print "$newest hours old)\n";

foreach my $type (keys %{ $snapsbytype{$path} }){
print " $snapsbytype{$path}{$type}{'numsnaps'} $type\n";
print " desired: $config{$section}{$type}\n";
print " newest: ";
my $newest = sprintf("%.1f",($snapsbytype{$path}{$type}{'newest'} / 60 / 60));
print "$newest hours old, named $snapsbytype{$path}{$type}{'newestname'}\n";
foreach my $type (keys %{ $snapsbytype{$path} }){
print " $snapsbytype{$path}{$type}{'numsnaps'} $type\n";
print " desired: $config{$section}{$type}\n";
print " newest: ";
my $newest = sprintf("%.1f",($snapsbytype{$path}{$type}{'newest'} / 60 / 60));
print "$newest hours old, named $snapsbytype{$path}{$type}{'newestname'}\n";
}
}
print "\n\n";
}
@@ -484,7 +688,6 @@ sub getsnaps {

my ($config, $cacheTTL, $forcecacheupdate) = @_;

my $cache = '/var/cache/sanoidsnapshots.txt';
my @rawsnaps;

my ($dev, $ino, $mode, $nlink, $uid, $gid, $rdev, $size, $atime, $mtime, $ctime, $blksize, $blocks) = stat($cache);
@@ -521,7 +724,7 @@ sub getsnaps {
}

foreach my $snap (@rawsnaps) {
my ($fs,$snapname,$snapdate) = ($snap =~ m/(.*)\@(.*ly)\s*creation\s*(\d*)/);
my ($fs,$snapname,$snapdate) = ($snap =~ m/(.*)\@(.*ly)\t*creation\t*(\d*)/);

# avoid pissing off use warnings
if (defined $snapname) {
@@ -551,10 +754,21 @@ sub init {
tie my %ini, 'Config::IniFiles', ( -file => $conf_file ) or die "FATAL: cannot load $conf_file - please create a valid local config file before running sanoid!";

# we'll use these later to normalize potentially true and false values on any toggle keys
my @toggles = ('autosnap','autoprune','monitor_dont_warn','monitor_dont_crit','monitor','recursive','process_children_only');
my @toggles = ('autosnap','autoprune','monitor_dont_warn','monitor_dont_crit','monitor','recursive','process_children_only','skip_children','no_inconsistent_snapshot','force_post_snapshot_script');
my @istrue=(1,"true","True","TRUE","yes","Yes","YES","on","On","ON");
my @isfalse=(0,"false","False","FALSE","no","No","NO","off","Off","OFF");

# check if default configuration file is up to date
my $defaults_version = 1;
if (defined $defaults{'version'}{'version'}) {
$defaults_version = $defaults{'version'}{'version'};
delete $defaults{'version'};
}

if ($defaults_version < $MINIMUM_DEFAULTS_VERSION) {
die "FATAL: you're using sanoid.defaults.conf v$defaults_version, this version of sanoid requires a minimum sanoid.defaults.conf v$MINIMUM_DEFAULTS_VERSION";
}

foreach my $section (keys %ini) {

# first up - die with honor if unknown parameters are set in any modules or templates by the user.
@@ -581,10 +795,12 @@ sub init {
# override with values from user-defined default template, if any

foreach my $key (keys %{$ini{'template_default'}}) {
if (! ($key =~ /template|recursive/)) {
if ($args{'debug'}) { print "DEBUG: overriding $key on $section with value from user-defined default template.\n"; }
$config{$section}{$key} = $ini{'template_default'}{$key};
if ($key =~ /template|recursive/) {
warn "ignored key '$key' from user-defined default template.\n";
next;
}
if ($args{'debug'}) { print "DEBUG: overriding $key on $section with value from user-defined default template.\n"; }
$config{$section}{$key} = $ini{'template_default'}{$key};
}
}
@@ -598,17 +814,19 @@ sub init {

my $template = 'template_'.$rawtemplate;
foreach my $key (keys %{$ini{$template}}) {
if (! ($key =~ /template|recursive/)) {
if ($args{'debug'}) { print "DEBUG: overriding $key on $section with value from user-defined template $template.\n"; }
$config{$section}{$key} = $ini{$template}{$key};
if ($key =~ /template|recursive/) {
warn "ignored key '$key' from '$rawtemplate' template.\n";
next;
}
if ($args{'debug'}) { print "DEBUG: overriding $key on $section with value from user-defined template $template.\n"; }
$config{$section}{$key} = $ini{$template}{$key};
}
}
}

# override with any locally set values in the module itself
foreach my $key (keys %{$ini{$section}} ) {
if (! ($key =~ /template|recursive/)) {
if (! ($key =~ /template|recursive|skip_children/)) {
if ($args{'debug'}) { print "DEBUG: overriding $key on $section with value directly set in module.\n"; }
$config{$section}{$key} = $ini{$section}{$key};
}
@@ -632,11 +850,20 @@ sub init {
}

# how 'bout some recursion? =)
my $recursive = $ini{$section}{'recursive'} && grep( /^$ini{$section}{'recursive'}$/, @istrue );
my $skipChildren = $ini{$section}{'skip_children'} && grep( /^$ini{$section}{'skip_children'}$/, @istrue );
my @datasets;
if ($ini{$section}{'recursive'}) {
if ($recursive || $skipChildren) {
@datasets = getchilddatasets($config{$section}{'path'});
foreach my $dataset(@datasets) {
DATASETS: foreach my $dataset(@datasets) {
chomp $dataset;

if ($skipChildren) {
if ($args{'debug'}) { print "DEBUG: ignoring $dataset.\n"; }
delete $config{$dataset};
next DATASETS;
}

foreach my $key (keys %{$config{$section}} ) {
if (! ($key =~ /template|recursive|children_only/)) {
if ($args{'debug'}) { print "DEBUG: recursively setting $key from $section to $dataset.\n"; }
@@ -762,7 +989,7 @@ sub check_zpool() {
exit $ERRORS{$state};
}

my $statcommand="/sbin/zpool list -o name,size,cap,health,free $pool";
my $statcommand="$zpool list -o name,size,cap,health,free $pool";

if (! open STAT, "$statcommand|") {
print ("$state '$statcommand' command returns no result! NOTE: This plugin needs OS support for ZFS, and execution with root privileges.\n");
@@ -810,7 +1037,7 @@ sub check_zpool() {
## flag to detect section of zpool status involving our zpool
my $poolfind=0;

$statcommand="/sbin/zpool status $pool";
$statcommand="$zpool status $pool";
if (! open STAT, "$statcommand|") {
$state = 'CRITICAL';
print ("$state '$statcommand' command returns no result! NOTE: This plugin needs OS support for ZFS, and execution with root privileges.\n");
@@ -864,7 +1091,12 @@ sub check_zpool() {
}

## other cases
my ($dev, $sta) = /^\s+(\S+)\s+(\S+)/;
my ($dev, $sta, $read, $write, $cksum) = /^\s+(\S+)\s+(\S+)\s+(\S+)\s+(\S+)\s+(\S+)/;

if (!defined($sta)) {
# cache and logs are special and don't have a status
next;
}

## pool online, not degraded thanks to dead/corrupted disk
if ($state eq "OK" && $sta eq "UNAVAIL") {
@@ -879,8 +1111,21 @@ sub check_zpool() {
## no display for verbose level 1
next if ($verbose==1);
## don't display working devices for verbose level 2
next if ($verbose==2 && $state eq "OK");
next if ($verbose==2 && ($sta eq "ONLINE" || $sta eq "AVAIL" || $sta eq "INUSE"));
if ($verbose==2 && ($state eq "OK" || $sta eq "ONLINE" || $sta eq "AVAIL" || $sta eq "INUSE")) {
# check for io/checksum errors

my @vdeverr = ();
if ($read != 0) { push @vdeverr, "read" };
if ($write != 0) { push @vdeverr, "write" };
if ($cksum != 0) { push @vdeverr, "cksum" };

if (scalar @vdeverr) {
$dmge=$dmge . "(" . $dev . ":" . join(", ", @vdeverr) . " errors) ";
if ($state eq "OK") { $state = "WARNING" };
}

next;
}

## show everything else
if (/^\s{3}(\S+)/) {
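The widened regex above now also captures the read/write/cksum error columns of each `zpool status` device line, so otherwise-healthy vdevs with nonzero counters can degrade the state to WARNING. A Python sketch of that per-line parse (the line format is assumed from the regex in the diff; function name is mine):

```python
import re

def vdev_errors(status_line: str):
    """Parse one `zpool status` device line (name, state, read, write,
    cksum) and report which error counters are nonzero."""
    m = re.match(r'^\s+(\S+)\s+(\S+)\s+(\S+)\s+(\S+)\s+(\S+)', status_line)
    if not m:
        # cache/log headers and other special lines don't match
        return None
    dev, sta, read, write, cksum = m.groups()
    errs = [name for name, count in
            (('read', read), ('write', write), ('cksum', cksum))
            if count != '0']
    return dev, sta, errs
```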
@@ -900,6 +1145,128 @@ sub check_zpool() {
return ($ERRORS{$state},$msg);
} # end check_zpool()

sub check_capacity_limit {
my $value = shift;

if (!defined($value) || $value !~ /^\d+\z/) {
return undef;
}

if ($value < 0 || $value > 100) {
return undef;
}

return 1
}

sub check_zpool_capacity() {
my %ERRORS=('DEPENDENT'=>4,'UNKNOWN'=>3,'OK'=>0,'WARNING'=>1,'CRITICAL'=>2);
my $state="UNKNOWN";
my $msg="FAILURE";

my $pool=shift;
my $capacitylimitsref=shift;
my %capacitylimits=%$capacitylimitsref;

my $statcommand="$zpool list -H -o cap $pool";

if (! open STAT, "$statcommand|") {
print ("$state '$statcommand' command returns no result!\n");
exit $ERRORS{$state};
}

my $line = <STAT>;
close(STAT);

chomp $line;
my @row = split(/ +/, $line);
my $cap=$row[0];

## check for valid capacity value
if ($cap !~ m/^[0-9]{1,3}%$/ ) {
$state = "CRITICAL";
$msg = sprintf "ZPOOL {%s} does not exist and/or is not responding!\n", $pool;
print $state, " ", $msg;
exit ($ERRORS{$state});
}

$state="OK";

# check capacity
my $capn = $cap;
$capn =~ s/\D//g;

if (defined($capacitylimits{"warn"})) {
if ($capn >= $capacitylimits{"warn"}) {
$state = "WARNING";
}
}

if (defined($capacitylimits{"crit"})) {
if ($capn >= $capacitylimits{"crit"}) {
$state = "CRITICAL";
}
}

$msg = sprintf "ZPOOL %s : %s\n", $pool, $cap;
$msg = "$state $msg";
return ($ERRORS{$state},$msg);
} # end check_zpool_capacity()

sub check_prune_defer {
my ($config, $section) = @_;

my $limit = $config{$section}{"prune_defer"};

if (!check_capacity_limit($limit)) {
die "ERROR: invalid prune_defer limit!\n";
}

if ($limit eq 0) {
return 0;
}

my @parts = split /\//, $section, 2;
my $pool = $parts[0];

if (exists $capacitycache{$pool}) {
} else {
$capacitycache{$pool} = get_zpool_capacity($pool);
}

if ($limit < $capacitycache{$pool}) {
return 0;
}

return 1;
}

sub get_zpool_capacity {
my $pool = shift;

my $statcommand="$zpool list -H -o cap $pool";

if (! open STAT, "$statcommand|") {
die "ERROR: '$statcommand' command returns no result!\n";
}

my $line = <STAT>;
close(STAT);

chomp $line;
my @row = split(/ +/, $line);
my $cap=$row[0];

## check for valid capacity value
if ($cap !~ m/^[0-9]{1,3}%$/ ) {
die "ERROR: '$statcommand' command returned invalid capacity value ($cap)!\n";
}

$cap =~ s/\D//g;

return $cap;
}

######################################################################################################
######################################################################################################
######################################################################################################
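The new capacity subs above all share the same building blocks: limits must be whole percentages between 0 and 100, `zpool list -H -o cap` output like `42%` is stripped to an integer, and `check_prune_defer` defers pruning while pool usage has not yet exceeded the configured `prune_defer` limit (a limit of 0 disables deferral). A Python sketch of those three pieces (function names are mine):

```python
import re

def check_capacity_limit(value) -> bool:
    """Valid limits are whole numbers from 0 to 100, as in the Perl sub."""
    if value is None or not re.fullmatch(r'\d+', str(value)):
        return False
    return 0 <= int(value) <= 100

def parse_capacity(cap: str) -> int:
    """Turn zpool's '42%' into 42; reject anything else."""
    if not re.fullmatch(r'[0-9]{1,3}%', cap):
        raise ValueError(f"invalid capacity value ({cap})")
    return int(cap.rstrip('%'))

def should_defer_prune(limit: int, pool_capacity: int) -> bool:
    """check_prune_defer semantics: 0 disables deferral; otherwise
    pruning is deferred until usage exceeds the limit."""
    if limit == 0:
        return False
    return pool_capacity <= limit
```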
@@ -930,13 +1297,22 @@ sub checklock {
# no lockfile
return 1;
}
# make sure lockfile contains something
if ( -z $lockfile) {
# zero size lockfile, something is wrong
die "ERROR: something is wrong! $lockfile is empty\n";
}

# lockfile exists. read pid and mutex from it. see if it's our pid. if not, see if
# there's still a process running with that pid and with the same mutex.

open FH, "< $lockfile";
open FH, "< $lockfile" or die "ERROR: unable to open $lockfile";
my @lock = <FH>;
close FH;
# if we didn't get exactly 2 items from the lock file there is a problem
if (scalar(@lock) != 2) {
die "ERROR: $lockfile is invalid.\n"
}

my $lockmutex = pop(@lock);
my $lockpid = pop(@lock);
@@ -948,7 +1324,6 @@ sub checklock {
# we own the lockfile. no need to check any further.
return 2;
}

open PL, "$pscmd -p $lockpid -o args= |";
my @processlist = <PL>;
close PL;
@ -1053,9 +1428,133 @@ sub getchilddatasets {
|
|||
my @children = <FH>;
|
||||
close FH;
|
||||
|
||||
# parent dataset is the first element
|
||||
shift @children;
|
||||
|
||||
return @children;
|
||||
}
|
||||
|
||||
#######################################################################################################################3
|
||||
#######################################################################################################################
#######################################################################################################################

sub removecachedsnapshots {
	my $wait = shift;

	if (not %pruned) {
		return;
	}

	my $unlocked = checklock('sanoid_cacheupdate');

	if ($wait != 1 && not $unlocked) {
		if ($args{'verbose'}) { print "INFO: deferring cache update (snapshot removal) - valid cache update lock held by another sanoid process.\n"; }
		return;
	}

	# wait until we can get a lock to do our cache changes
	while (not $unlocked) {
		if ($args{'verbose'}) { print "INFO: waiting for cache update lock held by another sanoid process.\n"; }
		sleep(10);
		$unlocked = checklock('sanoid_cacheupdate');
	}

	writelock('sanoid_cacheupdate');

	if ($args{'verbose'}) {
		print "INFO: removing destroyed snapshots from cache.\n";
	}
	open FH, "< $cache";
	my @rawsnaps = <FH>;
	close FH;

	open FH, "> $cache" or die "Could not write to $cache!\n";
	foreach my $snapline ( @rawsnaps ) {
		my @columns = split("\t", $snapline);
		my $snap = $columns[0];
		print FH $snapline unless ( exists($pruned{$snap}) );
	}
	close FH;

	removelock('sanoid_cacheupdate');
	%snaps = getsnaps(\%config,$cacheTTL,$forcecacheupdate);

	# clear hash
	undef %pruned;
}

#######################################################################################################################
#######################################################################################################################
#######################################################################################################################

sub runscript {
	my $key=shift;
	my $dataset=shift;

	my $timeout=$config{$dataset}{'script_timeout'};

	my $ret;
	eval {
		if ($timeout > 0) {
			local $SIG{ALRM} = sub { die "alarm\n" };
			alarm $timeout;
		}
		$ret = system($config{$dataset}{$key});
		alarm 0;
	};
	if ($@) {
		if ($@ eq "alarm\n") {
			warn "WARN: $key didn't finish in the allowed time!";
		} else {
			warn "CRITICAL ERROR: $@";
		}
		return -1;
	} else {
		if ($ret != 0) {
			warn "WARN: $key failed, $?";
		}
	}

	return $ret;
}
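The eval/alarm block above is how runscript enforces `script_timeout` in Perl; for readers more at home in shell, a rough analogue (an illustrative sketch, not sanoid code; `run_with_timeout` is a made-up helper name) uses coreutils `timeout`:

```shell
#!/bin/sh
# Sketch: bound a hook script's runtime the way runscript does with alarm().
# run_with_timeout is a hypothetical helper, not part of sanoid.
run_with_timeout() {
    limit="$1"; shift
    if [ "$limit" -gt 0 ]; then
        # coreutils timeout kills the command and exits 124 on expiry
        timeout "$limit" "$@"
    else
        # <= 0 means no time limit, matching the script_timeout docs
        "$@"
    fi
}

run_with_timeout 1 sleep 3
echo "exit status: $?"
```

Running it prints `exit status: 124` after about one second, mirroring the WARN path above.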

#######################################################################################################################
#######################################################################################################################
#######################################################################################################################

sub convertTimePeriod {
	my $value=shift;
	my $period=shift;

	if ($value =~ /^\d+[yY]$/) {
		$period = 60*60*24*365;
		chop $value;
	} elsif ($value =~ /^\d+[wW]$/) {
		$period = 60*60*24*7;
		chop $value;
	} elsif ($value =~ /^\d+[dD]$/) {
		$period = 60*60*24;
		chop $value;
	} elsif ($value =~ /^\d+[hH]$/) {
		$period = 60*60;
		chop $value;
	} elsif ($value =~ /^\d+[mM]$/) {
		$period = 60;
		chop $value;
	} elsif ($value =~ /^\d+[sS]$/) {
		$period = 1;
		chop $value;
	} elsif ($value =~ /^\d+$/) {
		# no unit given, so the provided fallback period is used
	} else {
		# invalid value; return the smallest valid value as a fallback
		# (this will reliably trigger a monitoring warning)
		return 1;
	}

	return $value * $period;
}
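convertTimePeriod turns a value with an optional case-insensitive unit suffix into seconds; the same mapping can be sketched in shell (a hypothetical `to_seconds` helper, for illustration only; the invalid-value fallback is omitted):

```shell
#!/bin/sh
# Convert a value with an optional unit suffix (y/w/d/h/m/s) to seconds,
# mirroring convertTimePeriod; $2 is the fallback period for bare numbers.
to_seconds() {
    value="$1"; fallback="$2"
    num="${value%[a-zA-Z]}"          # strip one trailing unit letter, if any
    case "$value" in
        *[yY]) period=$((60*60*24*365)) ;;
        *[wW]) period=$((60*60*24*7)) ;;
        *[dD]) period=$((60*60*24)) ;;
        *[hH]) period=$((60*60)) ;;
        *[mM]) period=60 ;;
        *[sS]) period=1 ;;
        *)     num="$value"; period="$fallback" ;;   # no suffix: use fallback
    esac
    echo $((num * period))
}

to_seconds 90m 60    # 90 minutes  -> 5400
to_seconds 2d 60     # 2 days      -> 172800
to_seconds 90 60     # bare number -> fallback period (minutes here) -> 5400
```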

__END__

=head1 NAME

@@ -1079,9 +1578,11 @@ Options:
   --force-update       Clears out sanoid's zfs snapshot cache

   --monitor-health     Reports on zpool "health", in a Nagios compatible format
   --monitor-capacity   Reports on zpool capacity, in a Nagios compatible format
   --monitor-snapshots  Reports on snapshot "health", in a Nagios compatible format
   --take-snapshots     Creates snapshots as specified in sanoid.conf
   --prune-snapshots    Purges expired snapshots as specified in sanoid.conf
   --force-prune        Purges expired snapshots even if a send/recv is in progress

   --help               Prints this helptext
   --version            Prints the version number

sanoid.conf

@@ -40,6 +40,7 @@
daily = 60

[template_production]
frequently = 0
hourly = 36
daily = 30
monthly = 3

@@ -49,6 +50,7 @@

[template_backup]
autoprune = yes
frequently = 0
hourly = 30
daily = 90
monthly = 12

@@ -67,6 +69,42 @@
daily_warn = 48
daily_crit = 60

[template_hotspare]
autoprune = yes
frequently = 0
hourly = 30
daily = 90
monthly = 3
yearly = 0

### don't take new snapshots - snapshots on backup
### datasets are replicated in from the source, not
### generated locally
autosnap = no

### monitor hourlies and dailies, but don't warn or
### crit until they're over 4h old, since replication
### is typically hourly only
hourly_warn = 4h
hourly_crit = 6h
daily_warn = 2d
daily_crit = 4d

[template_scripts]
### dataset and snapshot name will be supplied as environment variables
### for all pre/post/prune scripts ($SANOID_TARGET, $SANOID_SNAPNAME)
### run script before snapshot
pre_snapshot_script = /path/to/script.sh
### run script after snapshot
post_snapshot_script = /path/to/script.sh
### run script after pruning a snapshot
pruning_script = /path/to/script.sh
### don't take an inconsistent snapshot (skip it if the pre script fails)
#no_inconsistent_snapshot = yes
### run post_snapshot_script even when pre_snapshot_script fails
#force_post_snapshot_script = yes
### limit allowed execution time of scripts before continuing (<= 0: no limit)
script_timeout = 5
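Per the comment in `[template_scripts]`, hook scripts receive the dataset and snapshot name through environment variables. A minimal pre-snapshot hook might look like this (a sketch only; the values are simulated here, since sanoid itself exports them at run time):

```shell
#!/bin/sh
# Minimal pre-snapshot hook sketch. When called by sanoid, SANOID_TARGET and
# SANOID_SNAPNAME are already exported; we fake them here to stay runnable.
SANOID_TARGET="tank/data"
SANOID_SNAPNAME="autosnap_2018-01-01_00:00:01_daily"
export SANOID_TARGET SANOID_SNAPNAME

echo "about to snapshot ${SANOID_TARGET}@${SANOID_SNAPNAME}"
# a real hook would quiesce a database or flush application state here;
# a non-zero exit counts as failure (see no_inconsistent_snapshot above)
```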

[template_ignore]
autoprune = no

@@ -5,6 +5,8 @@
#                                                                                 #
#                              you have been warned.                              #
###################################################################################
[version]
version = 2

[template_default]

@@ -15,6 +17,26 @@ path =
recursive =
use_template =
process_children_only =
skip_children =

pre_snapshot_script =
post_snapshot_script =
pruning_script =
script_timeout = 5
no_inconsistent_snapshot =
force_post_snapshot_script =

# for snapshot periods shorter than one hour, the period duration must be given
# in minutes. Because snapshots are taken within each full hour, the chosen
# value should divide 60 minutes without remainder, so that snapshots are
# spaced at equal intervals. Values larger than 59 aren't practical, as only
# one snapshot would then be taken on each full hour.
# examples:
# frequent_period = 15 -> four snapshots each hour, 15 minutes apart
# frequent_period = 5  -> twelve snapshots each hour, 5 minutes apart
# frequent_period = 45 -> two snapshots each hour with unequal time gaps
#                         between them: 45 minutes and 15 minutes in this case
frequent_period = 15
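The divisibility rule in the comment above can be checked directly; this small sketch (a hypothetical helper, not sanoid code) lists the minute marks a given `frequent_period` produces within one hour:

```shell
#!/bin/sh
# List the minute marks within one hour for a given frequent_period value.
frequent_minutes() {
    period="$1"
    minute=0
    marks=""
    while [ "$minute" -lt 60 ]; do
        marks="${marks}${marks:+ }${minute}"   # append, space-separated
        minute=$((minute + period))
    done
    echo "$marks"
}

frequent_minutes 15   # 0 15 30 45 (equal 15-minute gaps)
frequent_minutes 45   # 0 45 (gaps of 45 minutes, then 15 to the next hour)
```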

# If any snapshot type is set to 0, we will not take snapshots for it - and will immediately
# prune any snapshots of that type already present.

@@ -22,11 +44,15 @@ process_children_only =
# Otherwise, if autoprune is set, we will prune any snapshots of that type which are older
# than (setting * periodicity) - so if daily = 90, we'll prune any dailies older than 90 days.
autoprune = yes
frequently = 0
hourly = 48
daily = 90
weekly = 0
monthly = 6
yearly = 0
min_percent_free = 10
# pruning can be skipped based on the used capacity of the pool
# (0: always prune, 1-100: only prune if used capacity is greater than this value)
prune_defer = 0

# We will automatically take snapshots if autosnap is on, at the desired times configured
# below (or immediately, if we don't have one since the last preferred time for that type).

@@ -40,6 +66,10 @@ hourly_min = 0
# daily - at 23:59 (most people expect a daily to contain everything done DURING that day)
daily_hour = 23
daily_min = 59
# weekly - at 23:30 each Monday
weekly_wday = 1
weekly_hour = 23
weekly_min = 30
# monthly - immediately at the beginning of the month (ie 00:00 of day 1)
monthly_mday = 1
monthly_hour = 0

@@ -53,7 +83,9 @@ yearly_min = 0
# monitoring plugin - define warn / crit levels for each snapshot type by age, in units of one period down
# example: hourly_warn = 90 means issue WARNING if the most recent hourly snapshot is not less than 90 minutes old,
# daily_crit = 36 means issue CRITICAL if the most recent daily snapshot is not less than 36 hours old,
# monthly_warn = 36 means issue WARNING if the most recent monthly snapshot is not less than 36 days old... etc.
# monthly_warn = 5 means issue WARNING if the most recent monthly snapshot is not less than 5 weeks old... etc.
# the following case insensitive time suffixes can also be used:
# y = years, w = weeks, d = days, h = hours, m = minutes, s = seconds
#
# monitor_dont_warn = yes will cause the monitoring service to report warnings as text, but with status OK.
# monitor_dont_crit = yes will cause the monitoring service to report criticals as text, but with status OK.

@@ -62,11 +94,19 @@ yearly_min = 0
monitor = yes
monitor_dont_warn = no
monitor_dont_crit = no
hourly_warn = 90
hourly_crit = 360
daily_warn = 28
daily_crit = 32
monthly_warn = 32
monthly_crit = 35
frequently_warn = 0
frequently_crit = 0
hourly_warn = 90m
hourly_crit = 360m
daily_warn = 28h
daily_crit = 32h
weekly_warn = 0
weekly_crit = 0
monthly_warn = 32d
monthly_crit = 40d
yearly_warn = 0
yearly_crit = 0

# default limits for capacity checks (if set to 0, limit will not be checked)
capacity_warn = 80
capacity_crit = 95

@@ -0,0 +1,49 @@
#!/bin/bash
set -x

# this test will take hourly, daily and monthly snapshots
# for the whole year of 2017 in the timezone Europe/Vienna;
# sanoid is run hourly and no snapshots are pruned

. ../common/lib.sh

POOL_NAME="sanoid-test-1"
POOL_TARGET="" # root
RESULT="/tmp/sanoid_test_result"
RESULT_CHECKSUM="68c67161a59d0e248094a66061972f53613067c9db52ad981030f36bc081fed7"

# UTC timestamps of start and end
START="1483225200"
END="1514761199"

# prepare
setup
checkEnvironment
disableTimeSync

# set timezone
ln -sf /usr/share/zoneinfo/Europe/Vienna /etc/localtime

timestamp=$START

mkdir -p "${POOL_TARGET}"
truncate -s 5120M "${POOL_TARGET}"/zpool.img

zpool create -f "${POOL_NAME}" "${POOL_TARGET}"/zpool.img

function cleanUp {
	zpool export "${POOL_NAME}"
}

# export the pool in any case
trap cleanUp EXIT

while [ $timestamp -le $END ]; do
	setdate $timestamp; date; "${SANOID}" --cron --verbose
	timestamp=$((timestamp+3600))
done

saveSnapshotList "${POOL_NAME}" "${RESULT}"

#                              hourly daily monthly
verifySnapshotList "${RESULT}" 8760 365 12 "${RESULT_CHECKSUM}"

@@ -0,0 +1,10 @@
[sanoid-test-1]
use_template = production

[template_production]
hourly = 36
daily = 30
monthly = 3
yearly = 0
autosnap = yes
autoprune = no

@@ -0,0 +1,54 @@
#!/bin/bash
set -x

# this test checks the behaviour around a date where DST ends,
# with hourly, daily and monthly snapshots checked at a 15 minute interval

# Daylight saving time 2017 in Europe/Vienna began at 02:00 on Sunday, 26 March
# and ended at 03:00 on Sunday, 29 October. All times are in
# Central European Time.

. ../common/lib.sh

POOL_NAME="sanoid-test-2"
POOL_TARGET="" # root
RESULT="/tmp/sanoid_test_result"
RESULT_CHECKSUM="a916d9cd46f4b80f285d069f3497d02671bbb1bfd12b43ef93531cbdaf89d55c"

# UTC timestamps of start and end
START="1509141600"
END="1509400800"

# prepare
setup
checkEnvironment
disableTimeSync

# set timezone
ln -sf /usr/share/zoneinfo/Europe/Vienna /etc/localtime

timestamp=$START

mkdir -p "${POOL_TARGET}"
truncate -s 512M "${POOL_TARGET}"/zpool2.img

zpool create -f "${POOL_NAME}" "${POOL_TARGET}"/zpool2.img

function cleanUp {
	zpool export "${POOL_NAME}"
}

# export the pool in any case
trap cleanUp EXIT

while [ $timestamp -le $END ]; do
	setdate $timestamp; date; "${SANOID}" --cron --verbose
	timestamp=$((timestamp+900))
done

saveSnapshotList "${POOL_NAME}" "${RESULT}"

#                              hourly daily monthly
verifySnapshotList "${RESULT}" 73 3 1 "${RESULT_CHECKSUM}"

# one extra hourly snapshot is expected because of the DST changeover

@@ -0,0 +1,10 @@
[sanoid-test-2]
use_template = production

[template_production]
hourly = 36
daily = 30
monthly = 3
yearly = 0
autosnap = yes
autoprune = no

@@ -0,0 +1,123 @@
#!/bin/bash

unamestr="$(uname)"

function setup {
	export LANG=C
	export LANGUAGE=C
	export LC_ALL=C

	export SANOID="../../sanoid"

	# make sure that there is no stale cache file
	rm -f /var/cache/sanoidsnapshots.txt

	# install the needed sanoid configuration files
	[ -f sanoid.conf ] && cp sanoid.conf /etc/sanoid/sanoid.conf
	cp ../../sanoid.defaults.conf /etc/sanoid/sanoid.defaults.conf
}

function checkEnvironment {
	ASK=1

	which systemd-detect-virt > /dev/null
	if [ $? -eq 0 ]; then
		systemd-detect-virt --vm > /dev/null
		if [ $? -eq 0 ]; then
			# we are in a vm
			ASK=0
		fi
	fi

	if [ $ASK -eq 1 ]; then
		set +x
		echo "!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!"
		echo "you should be running this test in a"
		echo "dedicated vm, as it will mess with your system!"
		echo "Are you sure you want to continue? (y)"
		echo "!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!"
		set -x

		read -n 1 c
		if [ "$c" != "y" ]; then
			exit 1
		fi
	fi
}

function disableTimeSync {
	# disable ntp sync
	which timedatectl > /dev/null
	if [ $? -eq 0 ]; then
		timedatectl set-ntp 0
	fi
}

function saveSnapshotList {
	POOL_NAME="$1"
	RESULT="$2"

	zfs list -t snapshot -o name -Hr "${POOL_NAME}" | sort > "${RESULT}"

	# clear the seconds field so lists can be compared
	if [ "$unamestr" == 'FreeBSD' ]; then
		sed -i '' 's/\(autosnap_[0-9][0-9][0-9][0-9]-[0-9][0-9]-[0-9][0-9]_[0-9][0-9]:[0-9][0-9]:\)[0-9][0-9]_/\100_/g' "${RESULT}"
	else
		sed -i 's/\(autosnap_[0-9][0-9][0-9][0-9]-[0-9][0-9]-[0-9][0-9]_[0-9][0-9]:[0-9][0-9]:\)[0-9][0-9]_/\100_/g' "${RESULT}"
	fi
}
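The sed expressions above zero out the seconds field of each autosnap name, so snapshot lists taken moments apart still compare equal. On a single name the substitution works like this (stream form shown, since in-place `-i` syntax differs between GNU and BSD sed):

```shell
#!/bin/sh
# Zero out the seconds in an autosnap snapshot name, as saveSnapshotList does.
echo 'pool/data@autosnap_2017-10-29_02:15:42_hourly' \
  | sed 's/\(autosnap_[0-9][0-9][0-9][0-9]-[0-9][0-9]-[0-9][0-9]_[0-9][0-9]:[0-9][0-9]:\)[0-9][0-9]_/\100_/g'
# -> pool/data@autosnap_2017-10-29_02:15:00_hourly
```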

function verifySnapshotList {
	RESULT="$1"
	HOURLY_COUNT=$2
	DAILY_COUNT=$3
	MONTHLY_COUNT=$4
	CHECKSUM="$5"

	failed=0
	message=""

	hourly_count=$(grep -c "autosnap_.*_hourly" < "${RESULT}")
	daily_count=$(grep -c "autosnap_.*_daily" < "${RESULT}")
	monthly_count=$(grep -c "autosnap_.*_monthly" < "${RESULT}")

	if [ "${hourly_count}" -ne "${HOURLY_COUNT}" ]; then
		failed=1
		message="${message}hourly snapshot count is wrong: ${hourly_count}\n"
	fi

	if [ "${daily_count}" -ne "${DAILY_COUNT}" ]; then
		failed=1
		message="${message}daily snapshot count is wrong: ${daily_count}\n"
	fi

	if [ "${monthly_count}" -ne "${MONTHLY_COUNT}" ]; then
		failed=1
		message="${message}monthly snapshot count is wrong: ${monthly_count}\n"
	fi

	checksum=$(shasum -a 256 "${RESULT}" | cut -d' ' -f1)
	if [ "${checksum}" != "${CHECKSUM}" ]; then
		failed=1
		message="${message}result checksum mismatch\n"
	fi

	if [ "${failed}" -eq 0 ]; then
		exit 0
	fi

	echo "TEST FAILED:" >&2
	echo -n -e "${message}" >&2

	exit 1
}

function setdate {
	TIMESTAMP="$1"

	if [ "$unamestr" == 'FreeBSD' ]; then
		date -u -f '%s' "${TIMESTAMP}"
	else
		date --utc --set "@${TIMESTAMP}"
	fi
}

@@ -0,0 +1,27 @@
#!/bin/bash

# runs all the available tests

for test in */; do
	if [ ! -x "${test}/run.sh" ]; then
		continue
	fi

	testName="${test%/}"

	LOGFILE=/tmp/sanoid_test_run_"${testName}".log

	pushd . > /dev/null

	echo -n "Running test ${testName} ... "
	cd "${test}"
	echo -n y | bash run.sh > "${LOGFILE}" 2>&1

	if [ $? -eq 0 ]; then
		echo "[PASS]"
	else
		echo "[FAILED] (see ${LOGFILE})"
	fi

	popd > /dev/null
done

@@ -0,0 +1,56 @@
#!/bin/bash

# test replication with fallback to bookmarks and all intermediate snapshots

set -x
set -e

. ../../common/lib.sh

POOL_IMAGE="/tmp/syncoid-test-1.zpool"
POOL_SIZE="200M"
POOL_NAME="syncoid-test-1"
TARGET_CHECKSUM="a23564d5bb8a2babc3ac8936fd82825ad9fff9c82d4924f5924398106bbda9f0 -"

truncate -s "${POOL_SIZE}" "${POOL_IMAGE}"

zpool create -m none -f "${POOL_NAME}" "${POOL_IMAGE}"

function cleanUp {
	zpool export "${POOL_NAME}"
}

# export the pool in any case
trap cleanUp EXIT

zfs create "${POOL_NAME}"/src
zfs snapshot "${POOL_NAME}"/src@snap1
zfs bookmark "${POOL_NAME}"/src@snap1 "${POOL_NAME}"/src#snap1
# initial replication
../../../syncoid --no-sync-snap --debug --compress=none "${POOL_NAME}"/src "${POOL_NAME}"/dst
# destroy the last common snapshot on the source
zfs destroy "${POOL_NAME}"/src@snap1

# create intermediate snapshots
# a sleep is needed so the creation time can be used for proper sorting
sleep 1
zfs snapshot "${POOL_NAME}"/src@snap2
sleep 1
zfs snapshot "${POOL_NAME}"/src@snap3
sleep 1
zfs snapshot "${POOL_NAME}"/src@snap4
sleep 1
zfs snapshot "${POOL_NAME}"/src@snap5

# replication, which should fall back to the bookmark
../../../syncoid --debug --compress=none "${POOL_NAME}"/src "${POOL_NAME}"/dst || exit 1

# verify
output=$(zfs list -t snapshot -r -H -o name "${POOL_NAME}")
checksum=$(echo "${output}" | grep -v syncoid_ | shasum -a 256)

if [ "${checksum}" != "${TARGET_CHECKSUM}" ]; then
	exit 1
fi

exit 0

@@ -0,0 +1,56 @@
#!/bin/bash

# test replication with fallback to bookmarks, without intermediate snapshots (--no-stream)

set -x
set -e

. ../../common/lib.sh

POOL_IMAGE="/tmp/syncoid-test-2.zpool"
POOL_SIZE="200M"
POOL_NAME="syncoid-test-2"
TARGET_CHECKSUM="2460d4d4417793d2c7a5c72cbea4a8a584c0064bf48d8b6daa8ba55076cba66d -"

truncate -s "${POOL_SIZE}" "${POOL_IMAGE}"

zpool create -m none -f "${POOL_NAME}" "${POOL_IMAGE}"

function cleanUp {
	zpool export "${POOL_NAME}"
}

# export the pool in any case
trap cleanUp EXIT

zfs create "${POOL_NAME}"/src
zfs snapshot "${POOL_NAME}"/src@snap1
zfs bookmark "${POOL_NAME}"/src@snap1 "${POOL_NAME}"/src#snap1
# initial replication
../../../syncoid --no-sync-snap --debug --compress=none "${POOL_NAME}"/src "${POOL_NAME}"/dst
# destroy the last common snapshot on the source
zfs destroy "${POOL_NAME}"/src@snap1

# create intermediate snapshots
# a sleep is needed so the creation time can be used for proper sorting
sleep 1
zfs snapshot "${POOL_NAME}"/src@snap2
sleep 1
zfs snapshot "${POOL_NAME}"/src@snap3
sleep 1
zfs snapshot "${POOL_NAME}"/src@snap4
sleep 1
zfs snapshot "${POOL_NAME}"/src@snap5

# replication, which should fall back to the bookmark
../../../syncoid --no-stream --no-sync-snap --debug --compress=none "${POOL_NAME}"/src "${POOL_NAME}"/dst || exit 1

# verify
output=$(zfs list -t snapshot -r -H -o name "${POOL_NAME}")
checksum=$(echo "${output}" | shasum -a 256)

if [ "${checksum}" != "${TARGET_CHECKSUM}" ]; then
	exit 1
fi

exit 0

@@ -0,0 +1,47 @@
#!/bin/bash

# test replication with deletion of the target if no matches are found

set -x
set -e

. ../../common/lib.sh

POOL_IMAGE="/tmp/syncoid-test-3.zpool"
POOL_SIZE="200M"
POOL_NAME="syncoid-test-3"
TARGET_CHECKSUM="0409a2ac216e69971270817189cef7caa91f6306fad9eab1033955b7e7c6bd4c -"

truncate -s "${POOL_SIZE}" "${POOL_IMAGE}"

zpool create -m none -f "${POOL_NAME}" "${POOL_IMAGE}"

function cleanUp {
	zpool export "${POOL_NAME}"
}

# export the pool in any case
trap cleanUp EXIT

zfs create "${POOL_NAME}"/src
zfs create "${POOL_NAME}"/src/1
zfs create "${POOL_NAME}"/src/2
zfs create "${POOL_NAME}"/src/3

# initial replication
../../../syncoid -r --debug --compress=none "${POOL_NAME}"/src "${POOL_NAME}"/dst
# destroy all snapshots of one source child, so no common snapshot remains
zfs destroy "${POOL_NAME}"/src/2@%
zfs snapshot "${POOL_NAME}"/src/2@test
sleep 1
../../../syncoid -r --force-delete --debug --compress=none "${POOL_NAME}"/src "${POOL_NAME}"/dst || exit 1

# verify
output=$(zfs list -t snapshot -r -H -o name "${POOL_NAME}" | sed 's/@syncoid_.*$'/@syncoid_/)
checksum=$(echo "${output}" | shasum -a 256)

if [ "${checksum}" != "${TARGET_CHECKSUM}" ]; then
	exit 1
fi

exit 0

@@ -0,0 +1,27 @@
#!/bin/bash

# runs all the available tests

for test in */; do
	if [ ! -x "${test}/run.sh" ]; then
		continue
	fi

	testName="${test%/}"

	LOGFILE=/tmp/syncoid_test_run_"${testName}".log

	pushd . > /dev/null

	echo -n "Running test ${testName} ... "
	cd "${test}"
	echo | bash run.sh > "${LOGFILE}" 2>&1

	if [ $? -eq 0 ]; then
		echo "[PASS]"
	else
		echo "[FAILED] (see ${LOGFILE})"
	fi

	popd > /dev/null
done